:_mod-docs-content-type: PROCEDURE
[id="migrating-the-rgw-backends_{context}"]
= Migrating the {Ceph} RGW back ends
[role="_abstract"]
You must migrate your Ceph Object Gateway (RGW) back ends from your Controller nodes to your {Ceph} nodes. To ensure that you distribute the correct number of services to your available nodes, you use `cephadm` labels to refer to a group of nodes where a given daemon type is deployed. For more information about daemon cardinality, see xref:ceph-daemon-cardinality_migrating-ceph[{Ceph} daemon cardinality].
The following procedure assumes that you have three target nodes: `cephstorage-0`, `cephstorage-1`, and `cephstorage-2`.
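For example, a label-based placement in a `cephadm` service specification has the following general shape. This is a minimal sketch: the optional `count` field is included only to illustrate how cardinality is expressed and is not taken from your environment.

----
service_type: rgw
service_id: rgw
placement:
  label: rgw
  count: 3
----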
.Procedure
. Add the RGW label to the {Ceph} nodes that you want to migrate your RGW back ends to:
+
----
$ sudo cephadm shell -- ceph orch host label add cephstorage-0 rgw;
$ sudo cephadm shell -- ceph orch host label add cephstorage-1 rgw;
$ sudo cephadm shell -- ceph orch host label add cephstorage-2 rgw;
Added label rgw to host cephstorage-0
Added label rgw to host cephstorage-1
Added label rgw to host cephstorage-2
$ sudo cephadm shell -- ceph orch host ls
HOST           ADDR           LABELS          STATUS
cephstorage-0  192.168.24.54  osd rgw
cephstorage-1  192.168.24.44  osd rgw
cephstorage-2  192.168.24.30  osd rgw
controller-0   192.168.24.45  _admin mon mgr
controller-1   192.168.24.11  _admin mon mgr
controller-2   192.168.24.38  _admin mon mgr
6 hosts in cluster
----
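+
To confirm that the label is applied only to the intended nodes, you can filter the host listing, for example:
+
----
$ sudo cephadm shell -- ceph orch host ls | grep rgw
----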
ifeval::["{build}" != "downstream"]
. During the overcloud deployment, a `cephadm`-compatible spec is generated in
`/home/ceph-admin/specs/rgw`. Locate and patch the RGW spec: specify the correct placement by using labels,
and change the RGW back-end port to `8090` to avoid conflicts with the front-end port of the Ceph ingress daemon.
endif::[]
ifeval::["{build}" != "upstream"]
. Locate the RGW spec and dump it to the spec directory:
endif::[]
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ mkdir -p ${SPEC_DIR}
$ sudo cephadm shell -- ceph orch ls --export rgw > ${SPEC_DIR}/rgw
$ cat ${SPEC_DIR}/rgw
networks:
- 172.17.3.0/24
placement:
  hosts:
  - controller-0
  - controller-1
  - controller-2
service_id: rgw
service_name: rgw.rgw
service_type: rgw
spec:
  rgw_frontend_port: 8080
  rgw_realm: default
  rgw_zone: default
----
+
This example assumes that `172.17.3.0/24` is the `storage` network.
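+
If you are not sure which CIDR is the storage network, you can inspect the network settings of the {CephCluster} cluster. In director-deployed environments, the `public_network` value typically matches the storage network; this mapping is an assumption, so verify it against your own network layout:
+
----
$ sudo cephadm shell -- ceph config dump | grep network
----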
. Edit the spec so that the `placement` section uses the `rgw` label and the `spec` section sets the `rgw_frontend_port` value:
+
----
---
networks:
- 172.17.3.0/24 <1>
placement:
  label: rgw <2>
service_id: rgw
service_name: rgw.rgw
service_type: rgw
spec:
  rgw_frontend_port: 8090 <3>
  rgw_realm: default
  rgw_zone: default
  rgw_frontend_ssl_certificate: ... <4>
  ssl: true
+
<1> Add the storage network where the RGW back ends are deployed.
<2> Replace the list of Controller nodes with the `label: rgw` placement.
<3> Change the `rgw_frontend_port` value to `8090` to avoid conflicts with the Ceph ingress daemon.
<4> Optional: If TLS is enabled, add the concatenated SSL certificate and key as described in link:{configuring-storage}/assembly_configuring-red-hat-ceph-storage-as-the-backend-for-rhosp-storage#proc_ceph-configure-rgw-with-tls_ceph-back-end[Configuring RGW with TLS for an external Red Hat Ceph Storage cluster] in _{configuring-storage-t}_.
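+
If you enable TLS, the `rgw_frontend_ssl_certificate` value contains the certificate and private key concatenated into a single PEM block. A minimal sketch of producing that concatenation, assuming hypothetical paths for the certificate and key files:
+
----
$ cat /etc/pki/tls/certs/rgw.crt /etc/pki/tls/private/rgw.key > /tmp/rgw.pem
----
+
Paste the contents of the resulting file as a multi-line YAML value under `rgw_frontend_ssl_certificate`.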
. Apply the new RGW spec by using the orchestrator CLI:
+
----
$ SPEC_DIR=${SPEC_DIR:-"$PWD/ceph_specs"}
$ sudo cephadm shell -m ${SPEC_DIR}/rgw -- ceph orch apply -i /mnt/rgw
----
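+
Optionally, before you apply the spec, you can preview the resulting placement with the `--dry-run` option, assuming that your {Ceph} release supports it:
+
----
$ sudo cephadm shell -m ${SPEC_DIR}/rgw -- ceph orch apply -i /mnt/rgw --dry-run
----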
+
Applying the spec triggers the redeployment of the RGW daemons. For example:
+
----
...
osd.9                         cephstorage-2
rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090   starting
rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090   starting
rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090   starting
rgw.rgw.controller-1.eyvrzw   controller-1   172.17.3.146:8080  running (5h)
rgw.rgw.controller-2.navbxa   controller-2   172.17.3.66:8080   running (5h)
...
osd.9                         cephstorage-2
rgw.rgw.cephstorage-0.wsjlgx  cephstorage-0  172.17.3.23:8090   running (19s)
rgw.rgw.cephstorage-1.qynkan  cephstorage-1  172.17.3.26:8090   running (16s)
rgw.rgw.cephstorage-2.krycit  cephstorage-2  172.17.3.81:8090   running (13s)
----
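+
You can monitor the rollout until all of the RGW daemons report `running` on the `cephstorage` nodes, for example:
+
----
$ sudo cephadm shell -- ceph orch ps | grep rgw
----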
. Ensure that the new RGW back ends are reachable on the new ports, so that you can enable an ingress daemon on port `8080` later. Log in to each {CephCluster} node that runs RGW and add `iptables` rules to allow connections to ports `8080` and `8090`:
+
----
$ sudo iptables -I INPUT -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -m comment --comment "ceph rgw ingress" -j ACCEPT
$ sudo iptables -I INPUT -p tcp -m tcp --dport 8090 -m conntrack --ctstate NEW -m comment --comment "ceph rgw backends" -j ACCEPT
$ sudo iptables-save
$ sudo systemctl restart iptables
----
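+
You can confirm that the rules are present before you continue, for example:
+
----
$ sudo iptables -S INPUT | grep 'ceph rgw'
----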
. If your existing deployment uses `nftables`, edit `/etc/nftables/tripleo-rules.nft`
and add the following content:
+
----
# 100 ceph_rgw {'dport': ['8080','8090']}
add rule inet filter TRIPLEO_INPUT tcp dport { 8080,8090 } ct state new counter accept comment "100 ceph_rgw"
----
. Save the file.
. Restart the `nftables` service:
+
----
$ sudo systemctl restart nftables
----
. Verify that the rules are applied:
+
----
$ sudo nft list ruleset | grep ceph_rgw
----
. From a Controller node, such as `controller-0`, try to reach the RGW back ends:
+
----
$ curl http://cephstorage-0.storage:8090;
----
+
You should observe the following output:
+
----
anonymous
----
+
Repeat the verification for each node where an RGW daemon is deployed.
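+
To check all of the back ends at once, you can loop over the target nodes, for example:
+
----
$ for node in cephstorage-0 cephstorage-1 cephstorage-2; do curl -s http://${node}.storage:8090; done
----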
. If you migrated the RGW back ends to the {Ceph} nodes, those nodes have no `internalAPI` network, except in the case of HCI nodes. You must reconfigure the RGW keystone endpoint to point to the external network that you propagated:
+
----
[ceph: root@controller-0 /]# ceph config dump | grep keystone
global    basic    rgw_keystone_url    http://172.16.1.111:5000
[ceph: root@controller-0 /]# ceph config set global rgw_keystone_url http://<keystone_endpoint>:5000
----
+
* Replace `<keystone_endpoint>` with the {identity_service_first_ref} internal endpoint of the service that is deployed in the `OpenStackControlPlane` CR when you adopt the {identity_service}. For more information, see xref:adopting-the-identity-service_adopt-control-plane[Adopting the {identity_service}].
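+
After you set the new value, you can confirm it in the same way that you inspected the old one, for example:
+
----
[ceph: root@controller-0 /]# ceph config dump | grep keystone
----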