:_mod-docs-content-type: PROCEDURE

[id="configuring-a-ceph-backend_{context}"]
= Configuring a {Ceph} back end

[role="_abstract"]
If your {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} deployment uses a {Ceph} back end for any service, such as {image_service_first_ref}, {block_storage_first_ref}, {compute_service_first_ref}, or {rhos_component_storage_file_first_ref}, you must configure the custom resources (CRs) to use the same back end in the {rhos_long} {rhos_curr_ver} deployment.

[NOTE]
To run `ceph` commands, you must use SSH to connect to a {Ceph} node and run `sudo cephadm shell`. This generates a {Ceph} orchestrator container that enables you to run administrative commands against the {CephCluster} cluster. If you deployed the {CephCluster} cluster by using {OpenStackPreviousInstaller}, you can launch the `cephadm` shell from an {OpenStackShort} Controller node.

.Prerequisites

* The `OpenStackControlPlane` CR is created.
* If your {OpenStackShort} {rhos_prev_ver} deployment uses the {rhos_component_storage_file}, the `openstack` keyring is updated. Modify the `openstack` user so that you can use it across all {OpenStackShort} services:
+
----
ceph auth caps client.openstack \
  mgr 'allow *' \
  mon 'allow r, profile rbd' \
  osd 'profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data'
----
+
Using the same user across all services makes it simpler to create a common {Ceph} secret that includes the keyring and `ceph.conf` file and propagate the secret to all the services that need it.
* The following shell variables are defined.
Replace the following example values with values that are correct for your environment:
+
[subs=+quotes]
----
ifeval::["{build}" != "downstream"]
CEPH_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100"
endif::[]
ifeval::["{build}" == "downstream"]
CEPH_SSH="ssh -i **<path_to_ssh_key>** root@**<ceph_node_ip>**"
endif::[]
CEPH_KEY=$($CEPH_SSH "cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0")
CEPH_CONF=$($CEPH_SSH "cat /etc/ceph/ceph.conf | base64 -w 0")
----

.Procedure

. Create the `ceph-conf-files` secret that includes the {Ceph} configuration:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-conf-files
  namespace: openstack
type: Opaque
data:
  ceph.client.openstack.keyring: $CEPH_KEY
  ceph.conf: $CEPH_CONF
EOF
----
+
The `data` values are base64-encoded. When decoded, the content of the secret is similar to the following example:
+
----
ceph.client.openstack.keyring: |
  [client.openstack]
      key = <secret key>
      caps mgr = "allow *"
      caps mon = "allow r, profile rbd"
      caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=images, allow rw pool manila_data"
ceph.conf: |
  [global]
  fsid = 7a1719e8-9c59-49e2-ae2b-d7eb08c695d4
  mon_host = 10.1.1.2,10.1.1.3,10.1.1.4
----
+
where:
+
`mon_host`:: Specifies the addresses of the cluster's monitors. If you use IPv6, enclose the addresses in brackets. For example: `mon_host = [v2:[fd00:cccc::100]:3300/0,v1:[fd00:cccc::100]:6789/0]`

+
[NOTE]
====
For Distributed Compute Node (DCN) deployments with multiple {Ceph} clusters, create one secret per site. Each secret contains only the keys that the respective site requires. For more information on the rationale and key distribution pattern, see xref:ceph-migration-dcn_{context}[{Ceph} migration for Distributed Compute Node deployments].

The {Ceph} configuration files for all clusters are available on the {OpenStackShort} Controller node at either `/var/lib/tripleo-config/ceph/` or `/etc/ceph/`.
Copy them locally and create the per-site secrets:

----
$ CEPH_SSH="ssh root@<controller_host>"
$ CEPH_DIR="/var/lib/tripleo-config/ceph"
$ TMPDIR=$(mktemp -d)
$ $CEPH_SSH "cat ${CEPH_DIR}/central.conf" > ${TMPDIR}/central.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/central.client.openstack.keyring" > ${TMPDIR}/central.client.openstack.keyring
$ $CEPH_SSH "cat ${CEPH_DIR}/dcn1.conf" > ${TMPDIR}/dcn1.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/dcn1.client.openstack.keyring" > ${TMPDIR}/dcn1.client.openstack.keyring
$ $CEPH_SSH "cat ${CEPH_DIR}/dcn2.conf" > ${TMPDIR}/dcn2.conf
$ $CEPH_SSH "sudo cat ${CEPH_DIR}/dcn2.client.openstack.keyring" > ${TMPDIR}/dcn2.client.openstack.keyring

# Central site secret: contains all clusters
$ oc create secret generic ceph-conf-central \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn1.conf \
  --from-file=${TMPDIR}/dcn1.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn2.conf \
  --from-file=${TMPDIR}/dcn2.client.openstack.keyring \
  -n openstack

# DCN1 edge site secret: central + local only
$ oc create secret generic ceph-conf-dcn1 \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn1.conf \
  --from-file=${TMPDIR}/dcn1.client.openstack.keyring \
  -n openstack

# DCN2 edge site secret: central + local only
$ oc create secret generic ceph-conf-dcn2 \
  --from-file=${TMPDIR}/central.conf \
  --from-file=${TMPDIR}/central.client.openstack.keyring \
  --from-file=${TMPDIR}/dcn2.conf \
  --from-file=${TMPDIR}/dcn2.client.openstack.keyring \
  -n openstack

$ rm -rf ${TMPDIR}
----

Repeat for each additional edge site. Each edge site secret must include the central cluster files and only the files for that edge site's local cluster.
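The per-site `oc create secret` commands above follow one fixed pattern: every secret receives the central cluster files, and each edge secret adds only its own site's files. The following sketch generates those commands for an arbitrary list of edge sites and prints them instead of running them, so you can review the file lists first. The site names and the temporary directory layout are illustrative assumptions, not part of the product tooling:

```shell
#!/bin/sh
# Sketch: generate the per-site "oc create secret" commands shown above.
# EDGE_SITES and CONF_DIR are assumptions; adjust for your deployment.
CONF_DIR=${CONF_DIR:-/tmp/ceph-conf}
EDGE_SITES="dcn1 dcn2"

# Every secret carries the central cluster's conf and keyring.
central_files="--from-file=${CONF_DIR}/central.conf --from-file=${CONF_DIR}/central.client.openstack.keyring"

# Central secret: central files plus every edge site's files.
cmd="oc create secret generic ceph-conf-central ${central_files}"
for site in ${EDGE_SITES}; do
  cmd="${cmd} --from-file=${CONF_DIR}/${site}.conf --from-file=${CONF_DIR}/${site}.client.openstack.keyring"
done
echo "${cmd} -n openstack"

# Edge secrets: central files plus the local site's files only.
for site in ${EDGE_SITES}; do
  echo "oc create secret generic ceph-conf-${site} ${central_files} --from-file=${CONF_DIR}/${site}.conf --from-file=${CONF_DIR}/${site}.client.openstack.keyring -n openstack"
done
```

Because the script only prints the commands, you can verify that each edge secret lists exactly four `--from-file` arguments (two central, two local) before piping the output to `sh`.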
When you configure `extraMounts` on the `OpenStackControlPlane` CR, use propagation labels that match the service instance names (for example, `central`, `dcn1`, `dcn2`) so that each pod mounts only its site-specific secret.
====

. In your `OpenStackControlPlane` CR, inject the {Ceph} configuration into the {OpenStackShort} service pods by using `extraMounts`. For a single-cluster deployment, propagate one secret to all services:
+
----
$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - propagation:
          - CinderVolume
          - CinderBackup
          - GlanceAPI
          - ManilaShare
          extraVolType: Ceph
          volumes:
          - name: ceph
            projected:
              sources:
              - secret:
                  name: ceph-conf-files
          mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
'
----
+
For a DCN deployment with per-site secrets, use propagation labels that match each service instance name so that each pod receives only the keys for its site:
+
----
$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  extraMounts:
    - name: v1
      region: r1
      extraVol:
        - extraVolType: Ceph
          propagation:
          - central
          - CinderBackup
          - ManilaShare
          volumes:
          - name: ceph-central
            projected:
              sources:
              - secret:
                  name: ceph-conf-central
          mounts:
          - name: ceph-central
            mountPath: "/etc/ceph"
            readOnly: true
        - extraVolType: Ceph
          propagation:
          - dcn1
          volumes:
          - name: ceph-dcn1
            projected:
              sources:
              - secret:
                  name: ceph-conf-dcn1
          mounts:
          - name: ceph-dcn1
            mountPath: "/etc/ceph"
            readOnly: true
        - extraVolType: Ceph
          propagation:
          - dcn2
          volumes:
          - name: ceph-dcn2
            projected:
              sources:
              - secret:
                  name: ceph-conf-dcn2
          mounts:
          - name: ceph-dcn2
            mountPath: "/etc/ceph"
            readOnly: true
'
----
+
The propagation label `central` matches the {image_service} and {block_storage} pod instances named `central`. The `CinderBackup` and `ManilaShare` labels are service-type propagation labels and apply to all {block_storage} backup and {rhos_component_storage_file} pods, which run only at the central site.
Replace `central`, `dcn1`, and `dcn2` with the instance names used in your deployment.
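The DCN patch repeats one `extraVol` stanza per site, varying only the site name in the propagation label, volume name, and secret name. As a sketch only, the stanzas can be generated from a list of site names before being pasted into the patch. The site list is an assumption, and the sketch simplifies the `central` entry, which in the patch above also carries the `CinderBackup` and `ManilaShare` propagation labels:

```shell
#!/bin/sh
# Sketch: emit one extraVol stanza per site for the OpenStackControlPlane
# patch. Each stanza propagates to pods whose service instance name matches
# the site and mounts the ceph-conf-<site> secret at /etc/ceph.
SITES="central dcn1 dcn2"
yaml=""
for site in ${SITES}; do
  stanza=$(printf -- '- extraVolType: Ceph\n  propagation:\n  - %s\n  volumes:\n  - name: ceph-%s\n    projected:\n      sources:\n      - secret:\n          name: ceph-conf-%s\n  mounts:\n  - name: ceph-%s\n    mountPath: "/etc/ceph"\n    readOnly: true' "${site}" "${site}" "${site}" "${site}")
  yaml="${yaml}${stanza}
"
done
printf '%s' "${yaml}"
```

Reviewing the generated stanzas side by side makes it easy to confirm that no edge site accidentally references another site's secret.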