:_mod-docs-content-type: PROCEDURE

[id="configuring-target-nodes-for-ceph-monitor-migration_{context}"]
= Configuring target nodes for Ceph Monitor migration

[role="_abstract"]
Prepare the target {Ceph} nodes for the Ceph Monitor migration by performing the following actions:

. Enable firewall rules in a target node and persist them.
. Create a spec that is based on labels and apply it by using `cephadm`.
. Ensure that the Ceph Monitor quorum is maintained during the migration process.

.Procedure

. SSH into the target node and enable the firewall rules that are required to reach a Ceph Monitor service:
+
----
$ for port in 3300 6789; {
    ssh heat-admin@<target_node> sudo iptables -I INPUT \
    -p tcp -m tcp --dport $port -m conntrack --ctstate NEW \
    -j ACCEPT;
}
----
+
* Replace `<target_node>` with the hostname of the node that hosts the new Ceph Monitor.

. Check that the rules are properly applied to the target node and persist them:
+
----
$ sudo iptables-save
$ sudo systemctl restart iptables
----

. If `nftables` is used in the existing deployment, edit `/etc/nftables/tripleo-rules.nft` and add the following content:
+
[source,yaml]
----
# 110 ceph_mon {'dport': ['6789', '3300', '9100']}
add rule inet filter TRIPLEO_INPUT tcp dport { 6789,3300,9100 } ct state new counter accept comment "110 ceph_mon"
----

. Save the file.

. Restart the `nftables` service:
+
----
$ sudo systemctl restart nftables
----

. Verify that the rules are applied:
+
----
$ sudo nft list ruleset | grep ceph_mon
----

. To migrate the existing Ceph Monitors to the target {Ceph} nodes, retrieve the {Ceph} mon spec from the first Ceph Monitor, or the first Controller node:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ mkdir -p ${SPEC_DIR}
$ sudo cephadm shell -- ceph orch ls --export mon > ${SPEC_DIR}/mon
----

. Add the `label: mon` entry to the `placement` section:
+
----
service_type: mon
service_id: mon
placement:
  label: mon
----

. Save the spec.

. Apply the spec with `cephadm` by using the Ceph Orchestrator:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -m ${SPEC_DIR}/mon -- ceph orch apply -i /mnt/mon
----

. Extend the `mon` label to the remaining {Ceph} target nodes to ensure that quorum is maintained during the migration process:
+
----
for item in $(sudo cephadm shell -- ceph orch host ls --format json | jq -r '.[].hostname'); do
    sudo cephadm shell -- ceph orch host label add $item mon;
    sudo cephadm shell -- ceph orch host label add $item _admin;
done
----
+
[NOTE]
Applying the `mon` spec allows the existing strategy to use `labels` instead of `hosts`. As a result, any node with the `mon` label can host a Ceph Monitor daemon. Perform this step only once to avoid multiple iterations when multiple Ceph Monitors are migrated.
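+
Optionally, you can confirm quorum membership directly before you inspect the full cluster status in the next step. The following command is a minimal check that assumes `jq` is available on the host, as in the previous loop; the exact fields in the `quorum_status` output can vary between {Ceph} releases:
+
----
$ sudo cephadm shell -- ceph quorum_status --format json | jq -r '.quorum_names[]'
----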

. Check the status of the {CephCluster} and the Ceph Orchestrator daemons list.
Ensure that Ceph Monitors are in a quorum and listed by the `ceph orch` command:
+
----
$ sudo cephadm shell -- ceph -s

  cluster:
    id:     f6ec3ebe-26f7-56c8-985d-eb974e8e08e3
    health: HEALTH_OK

  services:
    mon: 6 daemons, quorum controller-0,controller-1,controller-2,ceph-0,ceph-1,ceph-2 (age 19m)
    mgr: controller-0.xzgtvo(active, since 32m), standbys: controller-1.mtxohd, controller-2.ahrgsk
    osd: 8 osds: 8 up (since 12m), 8 in (since 18m); 1 remapped pgs

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   43 MiB used, 400 GiB / 400 GiB avail
    pgs:     1 active+clean
----
+
----
$ sudo cephadm shell -- ceph orch host ls

HOST          ADDR           LABELS              STATUS
ceph-0        192.168.24.14  osd mon mgr _admin
ceph-1        192.168.24.7   osd mon mgr _admin
ceph-2        192.168.24.8   osd mon mgr _admin
controller-0  192.168.24.15  _admin mgr mon
controller-1  192.168.24.23  _admin mgr mon
controller-2  192.168.24.13  _admin mgr mon
----

. Set up a Ceph client on the first Controller node to interact with {Ceph} during the rest of the procedure, and set up an additional IP address on the storage network. The additional IP address is used to interact with {Ceph} after the first Controller node is decommissioned:

.. Back up the content of `/etc/ceph` in the `ceph_client_backup` directory:
+
----
$ mkdir -p $HOME/ceph_client_backup
$ sudo cp -R /etc/ceph/* $HOME/ceph_client_backup
----

.. Edit `/etc/os-net-config/config.yaml` and add `- ip_netmask: 172.17.3.200` after the IP address on the VLAN that belongs to the storage network. Replace `172.17.3.200` with any other available IP address on the storage network that can be statically assigned to `controller-0`.

.. Save the file and refresh the `controller-0` network configuration:
+
----
$ sudo os-net-config -c /etc/os-net-config/config.yaml
----

.. Verify that the IP address is present on the Controller node:
+
----
$ ip -o a | grep 172.17.3.200
----

.. Ping the IP address and confirm that it is reachable:
+
----
$ ping -c 3 172.17.3.200
----

.. Verify that you can interact with the {Ceph} cluster:
+
----
$ sudo cephadm shell -c $HOME/ceph_client_backup/ceph.conf -k $HOME/ceph_client_backup/ceph.client.admin.keyring -- ceph -s
----

.Next steps
Proceed to xref:draining-the-source-node_{context}[Draining the source node].