:_mod-docs-content-type: PROCEDURE
[id="completing-prerequisites-for-migrating-ceph-rgw_{context}"]
= Completing prerequisites for {Ceph} RGW migration

[role="_abstract"]
Complete the following prerequisites before you begin the Ceph Object Gateway (RGW) migration.

.Procedure

. Check the current status of the {Ceph} nodes:
+
[source,bash]
----
(undercloud) [stack@undercloud-0 ~]$ metalsmith list

+------------------------+    +----------------+
| IP Addresses           |    | Hostname       |
+------------------------+    +----------------+
| ctlplane=192.168.24.25 |    | cephstorage-0  |
| ctlplane=192.168.24.10 |    | cephstorage-1  |
| ctlplane=192.168.24.32 |    | cephstorage-2  |
| ctlplane=192.168.24.28 |    | compute-0      |
| ctlplane=192.168.24.26 |    | compute-1      |
| ctlplane=192.168.24.43 |    | controller-0   |
| ctlplane=192.168.24.7  |    | controller-1   |
| ctlplane=192.168.24.41 |    | controller-2   |
+------------------------+    +----------------+
----

. Log in to `controller-0` and check the Pacemaker status, for example by running `sudo pcs status`, to identify the virtual IPs and HAProxy resources that are relevant to the RGW migration:
+
----
Full List of Resources:
  * ip-192.168.24.46    (ocf:heartbeat:IPaddr2):    Started controller-0
  * ip-10.0.0.103       (ocf:heartbeat:IPaddr2):    Started controller-1
  * ip-172.17.1.129     (ocf:heartbeat:IPaddr2):    Started controller-2
  * ip-172.17.3.68      (ocf:heartbeat:IPaddr2):    Started controller-0
  * ip-172.17.4.37      (ocf:heartbeat:IPaddr2):    Started controller-1
  * Container bundle set: haproxy-bundle [undercloud-0.ctlplane.redhat.local:8787/rh-osbs/rhosp17-openstack-haproxy:pcmklatest]:
    * haproxy-bundle-podman-0   (ocf:heartbeat:podman):    Started controller-2
    * haproxy-bundle-podman-1   (ocf:heartbeat:podman):    Started controller-0
    * haproxy-bundle-podman-2   (ocf:heartbeat:podman):    Started controller-1
----

. Identify the ranges of the storage networks. The following is an example and the values might differ in your environment:
+
[source,bash]
----
[heat-admin@controller-0 ~]$ ip -o -4 a

1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: enp1s0    inet 192.168.24.45/24 brd 192.168.24.255 scope global enp1s0\       valid_lft forever preferred_lft forever
2: enp1s0    inet 192.168.24.46/32 brd 192.168.24.255 scope global enp1s0\       valid_lft forever preferred_lft forever
7: br-ex    inet 10.0.0.122/24 brd 10.0.0.255 scope global br-ex\       valid_lft forever preferred_lft forever <1>
8: vlan70    inet 172.17.5.22/24 brd 172.17.5.255 scope global vlan70\       valid_lft forever preferred_lft forever
8: vlan70    inet 172.17.5.94/32 brd 172.17.5.255 scope global vlan70\       valid_lft forever preferred_lft forever
9: vlan50    inet 172.17.2.140/24 brd 172.17.2.255 scope global vlan50\       valid_lft forever preferred_lft forever
10: vlan30    inet 172.17.3.73/24 brd 172.17.3.255 scope global vlan30\       valid_lft forever preferred_lft forever <2>
10: vlan30    inet 172.17.3.68/32 brd 172.17.3.255 scope global vlan30\       valid_lft forever preferred_lft forever
11: vlan20    inet 172.17.1.88/24 brd 172.17.1.255 scope global vlan20\       valid_lft forever preferred_lft forever
12: vlan40    inet 172.17.4.24/24 brd 172.17.4.255 scope global vlan40\       valid_lft forever preferred_lft forever
----
+
<1> `br-ex` represents the External Network, where HAProxy currently has the front-end Virtual IP (VIP) assigned.
<2> `vlan30` represents the Storage Network, where the new RGW instances should be started on the {CephCluster} nodes.
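+
Optionally, narrow the output to a single device to confirm the Storage Network addresses. For example, assuming `vlan30` carries the Storage Network as in the preceding output:
+
[source,bash]
----
# Show only the addresses on the Storage Network VLAN (vlan30 in this example)
[heat-admin@controller-0 ~]$ ip -o -4 a show vlan30
----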
. Identify the network that you previously had in HAProxy and propagate it through {OpenStackPreviousInstaller} to the {CephCluster} nodes. Use this network to reserve a new VIP that is owned by {Ceph} as the entry point for the RGW service.
.. Log in to `controller-0` and find the `ceph_rgw` section in the current HAProxy configuration:
+
----
$ less /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg

...
...

listen ceph_rgw
  bind 10.0.0.103:8080 transparent
  bind 172.17.3.68:8080 transparent
  mode http
  balance leastconn
  http-request set-header X-Forwarded-Proto https if { ssl_fc }
  http-request set-header X-Forwarded-Proto http if !{ ssl_fc }
  http-request set-header X-Forwarded-Port %[dst_port]
  option httpchk GET /swift/healthcheck
  option httplog
  option forwardfor
  server controller-0.storage.redhat.local 172.17.3.73:8080 check fall 5 inter 2000 rise 2
  server controller-1.storage.redhat.local 172.17.3.146:8080 check fall 5 inter 2000 rise 2
  server controller-2.storage.redhat.local 172.17.3.156:8080 check fall 5 inter 2000 rise 2
----
.. Confirm that the network is used as an HAProxy front end. The following example shows that `controller-0` exposes the services by using the external network, which is absent from the {Ceph} nodes. You must propagate the external network through {OpenStackPreviousInstaller}:
+
[source,bash]
----
[controller-0]$ ip -o -4 a

...
7: br-ex    inet 10.0.0.106/24 brd 10.0.0.255 scope global br-ex\       valid_lft forever preferred_lft forever
...
----
+
[NOTE]
If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.

. Propagate the HAProxy front-end network to the {CephCluster} nodes.
.. In the NIC template that you use to define the `ceph-storage` network interfaces, add the new configuration section to the {Ceph} network configuration template file, for example, `/home/stack/composable_roles/network/nic-configs/ceph-storage.j2`:
+
[source,yaml]
----
---
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers: {{ ctlplane_dns_nameservers }}
  addresses:
  - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
  routes: {{ ctlplane_host_routes }}
- type: vlan
  vlan_id: {{ storage_mgmt_vlan_id }}
  device: nic1
  addresses:
  - ip_netmask: {{ storage_mgmt_ip }}/{{ storage_mgmt_cidr }}
  routes: {{ storage_mgmt_host_routes }}
- type: interface
  name: nic2
  use_dhcp: false
  defroute: false
- type: vlan
  vlan_id: {{ storage_vlan_id }}
  device: nic2
  addresses:
  - ip_netmask: {{ storage_ip }}/{{ storage_cidr }}
  routes: {{ storage_host_routes }}
- type: ovs_bridge
  name: {{ neutron_physical_bridge_name }}
  dns_servers: {{ ctlplane_dns_nameservers }}
  domain: {{ dns_search_domains }}
  use_dhcp: false
  addresses:
  - ip_netmask: {{ external_ip }}/{{ external_cidr }}
  routes: {{ external_host_routes }}
  members:
  - type: interface
    name: nic3
    primary: true
----
.. Add the External Network to the bare metal file that is used by `metalsmith`, for example, `/home/stack/composable_roles/network/baremetal_deployment.yaml`:
+
[NOTE]
Ensure that `network_config_update` is enabled so that the network configuration is propagated to the target nodes when `os-net-config` is triggered.
+
[source,yaml]
----
- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
  - hostname: cephstorage-0
    name: ceph-0
  - hostname: cephstorage-1
    name: ceph-1
  - hostname: cephstorage-2
    name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/composable_roles/network/nic-configs/ceph-storage.j2
      network_config_update: true
    networks:
    - network: ctlplane
      vif: true
    - network: storage
    - network: storage_mgmt
    - network: external
----
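+
Optionally, before you provision, verify that the edited bare metal file still parses as valid YAML. The following check assumes that PyYAML is available on the undercloud:
+
[source,bash]
----
# Fail fast on YAML syntax errors before running the provision command
(undercloud) [stack@undercloud-0]$ python3 -c \
  "import yaml; yaml.safe_load(open('/home/stack/composable_roles/network/baremetal_deployment.yaml'))" \
  && echo "baremetal_deployment.yaml parses cleanly"
----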
.. Configure the new network on the bare metal nodes:
+
[source,bash]
----
(undercloud) [stack@undercloud-0]$ openstack overcloud node provision -o overcloud-baremetal-deployed-0.yaml \
  --stack overcloud \
  --network-config -y \
  $PWD/composable_roles/network/baremetal_deployment.yaml
----
.. Verify that the new network is configured on the {CephCluster} nodes:
+
[source,bash]
----
[root@cephstorage-0 ~]# ip -o -4 a

1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: enp1s0    inet 192.168.24.54/24 brd 192.168.24.255 scope global enp1s0\       valid_lft forever preferred_lft forever
11: vlan40    inet 172.17.4.43/24 brd 172.17.4.255 scope global vlan40\       valid_lft forever preferred_lft forever
12: vlan30    inet 172.17.3.23/24 brd 172.17.3.255 scope global vlan30\       valid_lft forever preferred_lft forever
14: br-ex    inet 10.0.0.133/24 brd 10.0.0.255 scope global br-ex\       valid_lft forever preferred_lft forever
----
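+
To confirm that the propagated External Network is reachable end to end, ping the new `br-ex` address of a {Ceph} node from a Controller node. The following example uses the sample address from the preceding output; substitute the value from your environment:
+
[source,bash]
----
# From controller-0, verify that the Ceph node's new external IP responds
[heat-admin@controller-0 ~]$ ping -c 3 10.0.0.133
----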