:_mod-docs-content-type: PROCEDURE

[id="migrating-ceph-mgr-daemons-to-ceph-nodes_{context}"]
= Migrating Ceph Manager daemons to {Ceph} nodes

[role="_abstract"]
You must migrate your Ceph Manager daemons from the {rhos_prev_long} ({OpenStackShort}) Controller nodes to a set of target nodes. Target nodes are either existing {Ceph} nodes, or {OpenStackShort} Compute nodes if {Ceph} is deployed by {OpenStackPreviousInstaller} with a Hyperconverged Infrastructure (HCI) topology.

[NOTE]
The following procedure uses `cephadm` and the Ceph Orchestrator to drive the Ceph Manager migration, and the Ceph spec to modify the placement and reschedule the Ceph Manager daemons. Ceph Manager runs in an active/passive state. It also provides many modules, including the Ceph Orchestrator. Every module that `ceph-mgr` provides, such as the Ceph Dashboard, is implicitly migrated with Ceph Manager.

.Procedure

. SSH into the target node and enable the firewall rules that are required to reach a Ceph Manager service:
+
----
dports="6800:7300"
ssh heat-admin@<target_node> sudo iptables -I INPUT \
    -p tcp --match multiport --dports $dports -j ACCEPT;
----
+
* Replace `<target_node>` with the hostname of a host that is listed in the {Ceph} environment. Run `ceph orch host ls` to see the list of the hosts.
+
Repeat this step for each target node.

. Check that the rules are properly applied to the target node and persist them:
+
----
$ sudo iptables-save
$ sudo systemctl restart iptables
----
+
[NOTE]
The default dashboard port for `ceph-mgr` in a greenfield deployment is 8443. With director-deployed {Ceph}, the default port is 8444 because the service ran on the Controller node, and it was necessary to use this port to avoid a conflict. For adoption, update the dashboard port to 8443 in the `ceph-mgr` configuration and firewall rules.

. Log in to `controller-0` and update the dashboard port in the `ceph-mgr` configuration to 8443:
+
----
$ sudo cephadm shell
$ ceph config set mgr mgr/dashboard/server_port 8443
$ ceph config set mgr mgr/dashboard/ssl_server_port 8443
$ ceph mgr module disable dashboard
$ ceph mgr module enable dashboard
----

. If `nftables` is used in the existing deployment, edit `/etc/nftables/tripleo-rules.nft` and add the following content:
+
----
# 113 ceph_mgr {'dport': ['6800-7300', 8443]}
add rule inet filter TRIPLEO_INPUT tcp dport { 6800-7300,8443 } ct state new counter accept comment "113 ceph_mgr"
----

. Save the file.

. Restart the `nftables` service:
+
----
$ sudo systemctl restart nftables
----

. Verify that the rules are applied:
+
----
$ sudo nft list ruleset | grep ceph_mgr
----

. Prepare the target node to host the new Ceph Manager daemon, and add the `mgr` label to the target node:
+
----
$ sudo cephadm shell -- ceph orch host label add <target_node> mgr
----

. Repeat steps 1-7 for each target node that hosts a Ceph Manager daemon.

. Get the Ceph Manager spec:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ mkdir -p ${SPEC_DIR}
$ sudo cephadm shell -- ceph orch ls --export mgr > ${SPEC_DIR}/mgr
----

. Edit the retrieved spec and add the `label: mgr` section to the `placement` section:
+
[source,yaml]
----
service_type: mgr
service_id: mgr
placement:
  label: mgr
----

. Save the spec.

. Apply the spec with `cephadm` by using the Ceph Orchestrator:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -m ${SPEC_DIR}/mgr -- ceph orch apply -i /mnt/mgr
----
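
Optionally, before you move on to verification, you can check that the Ceph Orchestrator accepted the updated placement. The following commands are a minimal sketch and assume that you run them from a node with access to `cephadm`, such as `controller-0`; the hosts that show `mgr` in the `LABELS` column of the host listing are the nodes you labeled earlier:

----
$ sudo cephadm shell -- ceph orch ls mgr
$ sudo cephadm shell -- ceph orch host ls | grep mgr
----

After the edited spec is applied, the `PLACEMENT` column of the first command should report the `mgr` label.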
.Verification

. Verify that the new Ceph Manager daemons are created on the target nodes:
+
----
$ sudo cephadm shell -- ceph orch ps | grep -i mgr
$ sudo cephadm shell -- ceph -s
----
+
The Ceph Manager daemon count should match the number of hosts where the `mgr` label is added.
+
[NOTE]
The migration does not shrink the number of Ceph Manager daemons. The count grows by the number of target nodes, and migrating the Ceph Monitor daemons to {Ceph} nodes decommissions the standby Ceph Manager instances. For more information, see xref:migrating-mon-from-controller-nodes_migrating-ceph-rbd[Migrating Ceph Monitor daemons to {Ceph} nodes].
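
As an additional check that is not part of the procedure above, you can confirm that the Ceph Dashboard answers on the updated port 8443. This is a minimal sketch: `<active_mgr_host>` is a placeholder for the host shown in the dashboard URL that `ceph mgr services` reports, and it assumes that the dashboard module is enabled and reachable from the node where you run `curl`:

----
$ sudo cephadm shell -- ceph mgr services
# <active_mgr_host> is a placeholder for the host in the dashboard URL printed above
$ curl -k https://<active_mgr_host>:8443
----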