:_mod-docs-content-type: PROCEDURE

[id="migrating-existing-daemons-to-target-nodes_{context}"]
= Migrating the existing daemons to the target nodes

[role="_abstract"]
The following procedure is an example of an environment with 3 {Ceph} nodes or ComputeHCI nodes. This scenario extends the monitoring labels to all the {Ceph} or ComputeHCI nodes that are part of the cluster, which means that you keep 3 placements for the target nodes.

.Procedure

. Add the monitoring label to all the {CephCluster} or ComputeHCI nodes in the cluster:
+
----
for item in $(sudo cephadm shell -- ceph orch host ls --format json | jq -r '.[].hostname'); do
    sudo cephadm shell -- ceph orch host label add $item monitoring;
done
----

. Verify that all the hosts on the target nodes have the monitoring label:
+
----
[tripleo-admin@controller-0 ~]$ sudo cephadm shell -- ceph orch host ls

HOST                        ADDR           LABELS
cephstorage-0.redhat.local  192.168.24.11  osd monitoring
cephstorage-1.redhat.local  192.168.24.12  osd monitoring
cephstorage-2.redhat.local  192.168.24.47  osd monitoring
controller-0.redhat.local   192.168.24.35  _admin mon mgr monitoring
controller-1.redhat.local   192.168.24.53  mon _admin mgr monitoring
controller-2.redhat.local   192.168.24.10  mon _admin mgr monitoring
----

. Remove the labels from the Controller nodes:
+
----
$ for i in 0 1 2; do sudo cephadm shell -- ceph orch host label rm "controller-$i.redhat.local" monitoring; done

Removed label monitoring from host controller-0.redhat.local
Removed label monitoring from host controller-1.redhat.local
Removed label monitoring from host controller-2.redhat.local
----

. Dump the current monitoring stack spec:
+
----
function export_spec {
    local component="$1"
    local target_dir="$2"
    sudo cephadm shell -- ceph orch ls --export "$component" > "$target_dir/$component"
}

SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
mkdir -p ${SPEC_DIR}

for m in grafana prometheus alertmanager; do
    export_spec "$m" "$SPEC_DIR"
done
----

. For each daemon, edit the current spec and replace the `placement.hosts:` section with the `placement.label:` section, for example:
+
[source,yaml]
----
service_type: grafana
service_name: grafana
placement:
  label: monitoring
networks:
- 172.17.3.0/24
spec:
  port: 3100
----
+
This step also applies to the Prometheus and Alertmanager specs.

. Apply the new monitoring spec to relocate the monitoring stack daemons:
+
----
SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}

function migrate_daemon {
    local component="$1"
    local target_dir="$2"
    sudo cephadm shell -m "$target_dir" -- ceph orch apply -i /mnt/ceph_specs/$component
}

for m in grafana prometheus alertmanager; do
    migrate_daemon "$m" "$SPEC_DIR"
done
----

. Verify that the daemons are deployed on the expected nodes:
+
----
[ceph: root@controller-0 /]# ceph orch ps | grep -iE "(prome|alert|grafa)"
alertmanager.cephstorage-2  cephstorage-2.redhat.local  172.17.3.144:9093,9094
grafana.cephstorage-0       cephstorage-0.redhat.local  172.17.3.83:3100
prometheus.cephstorage-1    cephstorage-1.redhat.local  172.17.3.53:9092
----
+
[NOTE]
After you migrate the monitoring stack, you lose high availability: the monitoring stack daemons no longer have a virtual IP address or HAProxy. Node exporters are still running on all the nodes.

. Review the {Ceph} configuration to ensure that it aligns with the configuration on the target nodes. In particular, focus on the following configuration entries:
+
----
[ceph: root@controller-0 /]# ceph config dump | grep -i dashboard
...
mgr  advanced  mgr/dashboard/ALERTMANAGER_API_HOST  http://172.17.3.83:9093
mgr  advanced  mgr/dashboard/GRAFANA_API_URL        https://172.17.3.144:3100
mgr  advanced  mgr/dashboard/PROMETHEUS_API_HOST    http://172.17.3.83:9092
mgr  advanced  mgr/dashboard/controller-0.ycokob/server_addr  172.17.3.33
mgr  advanced  mgr/dashboard/controller-1.lmzpuc/server_addr  172.17.3.147
mgr  advanced  mgr/dashboard/controller-2.xpdgfl/server_addr  172.17.3.138
----
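+
If any of these entries still point to addresses on the old Controller nodes, you can update them with the `ceph dashboard` configuration commands of the `mgr` dashboard module. The following is a minimal sketch; the addresses shown are examples and must be replaced with the storage network addresses of the nodes where the daemons were relocated:
+
----
# Example values; use the addresses reported by 'ceph orch ps' in your environment
[ceph: root@controller-0 /]# ceph dashboard set-alertmanager-api-host http://172.17.3.83:9093
[ceph: root@controller-0 /]# ceph dashboard set-prometheus-api-host http://172.17.3.83:9092
[ceph: root@controller-0 /]# ceph dashboard set-grafana-api-url https://172.17.3.144:3100
----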
. Verify that the `API_HOST/URL` of the `grafana`, `alertmanager`, and `prometheus` services points to the IP addresses on the storage network of the node where each daemon is relocated:
+
----
[ceph: root@controller-0 /]# ceph orch ps | grep -iE "(prome|alert|grafa)"
alertmanager.cephstorage-0  cephstorage-0.redhat.local  172.17.3.83:9093,9094
alertmanager.cephstorage-1  cephstorage-1.redhat.local  172.17.3.53:9093,9094
alertmanager.cephstorage-2  cephstorage-2.redhat.local  172.17.3.144:9093,9094
grafana.cephstorage-0       cephstorage-0.redhat.local  172.17.3.83:3100
grafana.cephstorage-1       cephstorage-1.redhat.local  172.17.3.53:3100
grafana.cephstorage-2       cephstorage-2.redhat.local  172.17.3.144:3100
prometheus.cephstorage-0    cephstorage-0.redhat.local  172.17.3.83:9092
prometheus.cephstorage-1    cephstorage-1.redhat.local  172.17.3.53:9092
prometheus.cephstorage-2    cephstorage-2.redhat.local  172.17.3.144:9092
----
+
----
[ceph: root@controller-0 /]# ceph config dump
...
...
mgr  advanced  mgr/dashboard/ALERTMANAGER_API_HOST  http://172.17.3.83:9093
mgr  advanced  mgr/dashboard/PROMETHEUS_API_HOST    http://172.17.3.83:9092
mgr  advanced  mgr/dashboard/GRAFANA_API_URL        https://172.17.3.144:3100
----
+
[NOTE]
The Ceph Dashboard, as a service provided by the Ceph `mgr`, is not impacted by the relocation. You might experience an impact when the active `mgr` daemon is migrated or is force-failed. However, you can define 3 replicas in the Ceph Manager configuration to redirect requests to a different instance.
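+
For example, one way to keep 3 Ceph Manager instances (one active, the others on standby) is to set the `mgr` service count through the orchestrator. This is a minimal sketch; adjust the count and placement to your environment:
+
----
# Example only; verify the resulting placement with 'ceph orch ls mgr'
[ceph: root@controller-0 /]# ceph orch apply mgr 3
----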