:_mod-docs-content-type: PROCEDURE

[id="adopting-compute-services-with-dcn-backend_{context}"]
= Adopting {compute_service} services with multiple {Ceph} back ends (DCN)

[role="_abstract"]
In a Distributed Compute Node (DCN) deployment where {image_service} and {block_storage} services run on edge compute nodes, each site has its own {Ceph} cluster. The {compute_service_first_ref} nodes at each site must be configured with the {Ceph} connection details and the {image_service_first_ref} endpoint for their local site. Because the {image_service} has a separate API endpoint at each site, each site's `OpenStackDataPlaneNodeSet` must use a different `OpenStackDataPlaneService` that points to the correct {image_service}.

In a DCN deployment, all node sets belong to a single {compute_service} cell. The central site and each edge site are separate `OpenStackDataPlaneNodeSet` resources within that cell. The per-site `OpenStackDataPlaneService` resources deliver different {Ceph} and {image_service} configurations to each node set while sharing the same cell-level {compute_service} configuration.

.Prerequisites

* You have adopted the {image_service} with multiple {Ceph} back ends. For more information, see xref:adopting-image-service-with-dcn-backend_{context}[Adopting the Image service with multiple Ceph back ends].
* You have adopted the {block_storage} with multiple {Ceph} back ends. For more information, see xref:adopting-block-storage-service-with-dcn-backend_{context}[Adopting the Block Storage service with multiple Ceph back ends].
* The `ceph-conf-files` secret contains the configuration files and keyrings for all {Ceph} clusters in your DCN deployment.
* You know the `fsid` for each {Ceph} cluster. You can retrieve the `fsid` values with the following command:
+
----
$ oc get secret ceph-conf-files -o json | jq -r '.data | to_entries[] | select(.key | endswith(".conf")) | "\(.key): \(.value | @base64d)"' | grep fsid
----

.Procedure

. Set the cell name variable.
In a DCN deployment, all node sets belong to a single cell:
+
----
$ DEFAULT_CELL_NAME="cell1"
----

. Retrieve the `fsid` for each {Ceph} cluster and store them in shell variables:
+
[subs="+quotes"]
----
$ CEPH_FSID_CENTRAL=$(oc get secret ceph-conf-files -o json | jq -r '.data."central.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
$ CEPH_FSID_DCN1=$(oc get secret ceph-conf-files -o json | jq -r '.data."dcn1.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
$ CEPH_FSID_DCN2=$(oc get secret ceph-conf-files -o json | jq -r '.data."dcn2.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')
----
+
Replace `central.conf`, `dcn1.conf`, and `dcn2.conf` with the names of your {Ceph} configuration files in the `ceph-conf-files` secret.

. Create a per-site `ConfigMap` for each site. Each `ConfigMap` contains the {Ceph} and {image_service} configuration specific to that site.
+
The following example creates `ConfigMap` resources for a central site and two edge sites. Each `ConfigMap` contains three configuration sections:
+
* `[libvirt]`: Points to the local {Ceph} cluster configuration and uses the local `fsid` as the `rbd_secret_uuid`.
* `[glance]`: Uses `endpoint_override` to direct {image_service} requests to the local {image_service} API endpoint instead of the endpoint registered in the {identity_service} catalog. The examples use `http://` for the {image_service} endpoints. If your {rhos_prev_long} deployment uses TLS for internal endpoints, use `https://` instead, and ensure that you have completed the TLS migration. For more information, see xref:migrating-tls-everywhere_{context}[Migrating TLS-e to the RHOSO deployment].
* `[cinder]`: Sets `cross_az_attach = False` to prevent volumes from being attached to instances in a different availability zone.

.. Create the `ConfigMap` for the central site:
+
----
$ oc apply -f - <
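As an illustration only, a per-site `ConfigMap` that follows the three configuration sections described above might look like the following sketch. The `ConfigMap` name, the configuration file key, the `[libvirt]` options other than `rbd_secret_uuid`, and the {image_service} endpoint placeholder are assumptions for this example, not values taken from this procedure:

----
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova-central   # hypothetical name for the central-site ConfigMap
data:
  03-ceph-nova.conf: |      # hypothetical file key delivered to the node set
    [libvirt]
    images_type = rbd
    images_rbd_ceph_conf = /etc/ceph/central.conf
    rbd_user = openstack
    rbd_secret_uuid = $CEPH_FSID_CENTRAL
    [glance]
    endpoint_override = http://<central-glance-endpoint>:9292
    [cinder]
    cross_az_attach = False
----

The edge-site `ConfigMap` resources follow the same shape, substituting the local {Ceph} configuration file, the local `fsid` variable, and the local {image_service} endpoint.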
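The `grep`/`sed` pipeline used above to populate the `fsid` variables simply strips the `fsid = ` prefix from the matching line of each configuration file. As a minimal local sketch, using a hypothetical inline `ceph.conf` snippet instead of the `ceph-conf-files` secret, you can exercise the same parsing:

```shell
# Hypothetical ceph.conf content for illustration; in the procedure this text
# comes from the ceph-conf-files secret via `oc get secret`.
conf='[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_host = 192.0.2.10'

# Same pipeline as the procedure: keep only the value after "fsid = ".
CEPH_FSID_CENTRAL=$(printf '%s\n' "$conf" | grep fsid | sed -e 's/fsid = //')
echo "$CEPH_FSID_CENTRAL"
```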