:_mod-docs-content-type: PROCEDURE
[id="redeploying-a-ceph-monitor-on-the-target-node_{context}"]
= Redeploying a Ceph Monitor on the target node

[role="_abstract"]
You use the IP address that you migrated to the target node to redeploy the Ceph Monitor on the target node.

.Procedure

. From the Ceph client node, for example `controller-0`, get the Ceph Monitor spec:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -- ceph orch ls --export mon > ${SPEC_DIR}/mon
----

. Edit the retrieved spec and add the `unmanaged: true` keyword:
+
[source,yaml]
----
service_type: mon
service_id: mon
placement:
  label: mon
unmanaged: true
----

. Save the spec.

. Apply the spec with `cephadm` by using the Ceph Orchestrator:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -m ${SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon
----
+
The Ceph Monitor daemons are marked as `unmanaged`, and you can now redeploy the existing daemon and bind it to the migrated IP address.

. Delete the existing Ceph Monitor on the target node:
+
----
$ sudo cephadm shell -- ceph orch daemon rm mon.<target_node> --force
----
+
* Replace `<target_node>` with the hostname of the target node that is included in the {Ceph} cluster.

. Redeploy the new Ceph Monitor on the target node by using the migrated IP address:
+
----
$ sudo cephadm shell -- ceph orch daemon add mon <target_node>:<ip_address>
----
+
* Replace `<ip_address>` with the IP address that you migrated to the target node.

. Get the Ceph Monitor spec:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -- ceph orch ls --export mon > ${SPEC_DIR}/mon
----

. Edit the retrieved spec and set the `unmanaged` keyword to `false`:
+
[source,yaml]
----
service_type: mon
service_id: mon
placement:
  label: mon
unmanaged: false
----

. Save the spec.

. Apply the spec with `cephadm` by using the Ceph Orchestrator:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -m ${SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon
----
+
The new Ceph Monitor runs on the target node with the original IP address.

. Identify the running `mgr`:
+
----
$ sudo cephadm shell -- ceph mgr stat
----

. Refresh the Ceph Manager information by force-failing it:
+
----
$ sudo cephadm shell -- ceph mgr fail
----

. Refresh the `OSD` information:
+
----
$ sudo cephadm shell -- ceph orch reconfig osd.default_drive_group
----

.Next steps

Repeat this procedure, starting from xref:draining-the-source-node_{context}[Draining the source node], for each node that you want to decommission.

Proceed to xref:verifying-the-cluster-after-ceph-mon-migration_{context}[Verifying the {CephCluster} cluster after Ceph Monitor migration].
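
Before you continue, you can optionally spot-check the result of this procedure. The following commands are a minimal sketch that uses only standard `ceph` status subcommands; `<target_node>` is the same placeholder used earlier in this procedure. They confirm that the redeployed Ceph Monitor is bound to the migrated IP address and that the monitors are in quorum:

----
# Confirm that the monitor map lists the redeployed Ceph Monitor with the migrated IP address
$ sudo cephadm shell -- ceph mon dump

# Confirm that the orchestrator reports the mon daemon on <target_node> as running
$ sudo cephadm shell -- ceph orch ps --daemon_type mon

# Confirm overall cluster health and monitor quorum
$ sudo cephadm shell -- ceph -s
----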