:_mod-docs-content-type: PROCEDURE

[id="adopting-compute-services-with-dcn-backend_{context}"]
= Adopting Compute services with multiple {Ceph} back ends (DCN)

[role="_abstract"]
In a Distributed Compute Node (DCN) deployment where {image_service_first_ref} and {block_storage_first_ref} services run on edge Compute nodes, each site has its own {CephCluster} cluster. The {compute_service_first_ref} nodes at each site must be configured with the {Ceph} connection details and {image_service} endpoint for their local site. Because the {image_service} has a separate API endpoint at each site, each site's `OpenStackDataPlaneNodeSet` custom resource (CR) must use a different `OpenStackDataPlaneService` CR that points to the correct {image_service}.

In a DCN deployment, all node sets belong to a single {compute_service} cell. The central site and each edge site are separate `OpenStackDataPlaneNodeSet` resources within that cell. The per-site `OpenStackDataPlaneService` resources deliver different {Ceph} and {image_service} configurations to each node set while sharing the same cell-level {compute_service} configuration.

.Prerequisites

* You have adopted the {image_service} with multiple {Ceph} back ends. For more information, see xref:adopting-image-service-with-dcn-backend_image-service[Adopting the Image service with multiple Ceph back ends].
* You have adopted the {block_storage} with multiple {Ceph} back ends. For more information, see xref:adopting-block-storage-service-with-dcn-backend_hsm-integration[Adopting the Block Storage service with multiple Ceph back ends].
* The per-site {Ceph} secrets (`ceph-conf-central`, `ceph-conf-dcn1`, `ceph-conf-dcn2`) exist. For more information, see xref:configuring-a-ceph-backend_migrating-databases[Configuring a {Ceph} back end].
* You can retrieve the `fsid` for each {Ceph} cluster:
+
----
$ oc get secret ceph-conf-central -o json | jq -r '.data | to_entries[] | select(.key | endswith(".conf")) | "\(.key): \(.value | @base64d)"' | grep fsid
----

.Procedure

. Set the cell name variable. In a DCN deployment, all node sets belong to a single cell:
+
----
$ DEFAULT_CELL_NAME="cell1"
----

. Retrieve the `fsid` for each {Ceph} cluster and store them in shell variables:
+
[subs="+quotes"]
----
$ CEPH_FSID_CENTRAL=$(oc get secret ceph-conf-central -o json | jq -r '.data."<central_ceph_conf_file>"' | base64 -d | awk '/fsid/{print $3}')
$ CEPH_FSID_DCN1=$(oc get secret ceph-conf-dcn1 -o json | jq -r '.data."<dcn1_ceph_conf_file>"' | base64 -d | awk '/fsid/{print $3}')
$ CEPH_FSID_DCN2=$(oc get secret ceph-conf-dcn2 -o json | jq -r '.data."<dcn2_ceph_conf_file>"' | base64 -d | awk '/fsid/{print $3}')
----
+
where:

`<central_ceph_conf_file>`:: Specifies the name of the {Ceph} configuration file for the central site in the `ceph-conf-central` secret.
`<dcn1_ceph_conf_file>`:: Specifies the name of the {Ceph} configuration file for an edge site in the `ceph-conf-dcn1` secret.
`<dcn2_ceph_conf_file>`:: Specifies the name of the {Ceph} configuration file for an additional edge site in the `ceph-conf-dcn2` secret.

. Create a `ConfigMap` for each site. Each `ConfigMap` contains the {Ceph} and {image_service} configuration specific to that site.
+
The following example creates `ConfigMap` resources for a central site and two edge sites.
+
.. Create the `ConfigMap` for the central site:
+
----
$ oc apply -f - <
----
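The `base64`/`awk` pipeline used in the `fsid` retrieval step can be exercised locally against a sample payload before you run it against live secrets. The following is a minimal sketch only: the `fsid` value, the `mon_host` address, and the shell variable names are illustrative and do not come from a real cluster.

```bash
# Sample ceph.conf content; the fsid and mon_host values are illustrative only.
SAMPLE_CONF='[global]
fsid = 11111111-2222-3333-4444-555555555555
mon_host = 192.0.2.10'

# Simulate the base64-encoded value stored under a key in the secret data.
ENCODED=$(printf '%s\n' "$SAMPLE_CONF" | base64)

# Decode and extract the fsid with the same awk expression as the procedure:
# the line "fsid = <uuid>" splits into three fields, and $3 is the uuid.
CEPH_FSID=$(printf '%s' "$ENCODED" | base64 -d | awk '/fsid/{print $3}')
echo "$CEPH_FSID"
```

If the variable is empty, the secret key name you passed to `jq` does not match a key in the secret `data`; list the keys with `oc get secret <name> -o jsonpath='{.data}'` to check.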