:_mod-docs-content-type: PROCEDURE
[id="configuring-dcn-data-plane-nodesets_{context}"]
= Configuring data plane node sets for DCN sites

[role="_abstract"]
If you are adopting a Distributed Compute Node (DCN) deployment, you must create a separate `OpenStackDataPlaneNodeSet` custom resource (CR) for each site. Each node set requires site-specific configuration for network subnets, OVN bridge mappings, and inter-site routes.

.Prerequisites

* You have adopted the {rhos_prev_long} ({OpenStackShort}) control plane to {rhos_long}.
* You have configured control plane networking for your spine-leaf topology, including multi-subnet `NetConfig` and `NetworkAttachmentDefinition` CRs with routes to remote sites. For more information, see xref:configuring-control-plane-networking-for-spine-leaf_adopt-control-plane[Configuring control plane networking for spine-leaf topologies].
* You have the network configuration information for each DCN site:
** IP addresses and hostnames for all Compute nodes
** VLAN IDs for each service network
** Gateway addresses for inter-site routing
* You have identified the OVN bridge mappings (physnets) for each site.

.Procedure

. Define the OVN bridge mappings for each site. Each site requires a unique physnet that maps to the local provider network bridge:
+
.Example OVN bridge mappings
[options="header"]
|===
| Site | OVN bridge mapping

| Central | `leaf0:br-ex`
| DCN1 | `leaf1:br-ex`
| DCN2 | `leaf2:br-ex`
|===

. Configure OVN for DCN sites. The default OVN controller configuration uses the Kubernetes ClusterIP service (`ovsdbserver-sb.openstack.svc`), which is not routable from remote DCN sites. You must create a DCN-specific configuration that uses the direct `internalapi` IP addresses.
.. Get the OVN Southbound database `internalapi` IP addresses:
+
----
$ oc get pod -l service=ovsdbserver-sb -o jsonpath='{range .items[*]}{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}{"\n"}{end}' | jq -r '.[] | select(.name=="openstack/internalapi") | .ips[0]'
----
+
Example output:
+
----
172.17.0.34
172.17.0.35
172.17.0.36
----
.. Create a ConfigMap with the OVN Southbound direct IPs for the DCN sites:
+
----
$ oc apply -f - <
----

. Create an `OpenStackDataPlaneNodeSet` CR for the first DCN site. The following example shows the site-specific values for DCN1 in bold:
+
[source,yaml,subs="+quotes"]
----
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: *openstack-edpm-dcn1*
spec:
  services:
  ...
  - *ovn-dcn*
  ...
  nodeTemplate:
    ansible:
      ansibleVars:
        edpm_ovn_bridge_mappings:
        - *leaf1:br-ex*
        edpm_network_config_template: |
          ...
          routes: {{ ctlplane_host_routes }}
          *- ip_netmask: 192.168.122.0/24*
            *next_hop: 192.168.133.1*
          - ip_netmask: 192.168.144.0/24
            next_hop: 192.168.133.1
          members:
          - type: interface
            name: nic1
            primary: true
          {% for network in nodeset_networks %}
          - type: vlan
            vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
            addresses:
            - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
            routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
            {% if network == 'internalapi' %}
            *- ip_netmask: 172.17.0.0/24*
              *next_hop: 172.17.10.1*
            - ip_netmask: 172.17.20.0/24
              next_hop: 172.17.10.1
            {% endif %}
            {% if network == 'storage' %}
            - ip_netmask: 172.18.0.0/24
              next_hop: 172.18.10.1
            - ip_netmask: 172.18.20.0/24
              next_hop: 172.18.10.1
            {% endif %}
            {% if network == 'tenant' %}
            - ip_netmask: 172.19.0.0/24
              next_hop: 172.19.10.1
            - ip_netmask: 172.19.20.0/24
              next_hop: 172.19.10.1
            {% endif %}
          {% endfor %}
  nodes:
    dcn1-compute-0:
      hostName: dcn1-compute-0.example.com
      ansible:
        ansibleHost: dcn1-compute-0.example.com
      networks:
      - defaultRoute: true
        fixedIP: 192.168.133.100
        name: ctlplane
        *subnetName: ctlplanedcn1*
      - name: internalapi
        *subnetName: internalapidcn1*
      - name: storage
        *subnetName: storagedcn1*
      - name: tenant
        *subnetName: tenantdcn1*
----
+
* Replace `ovn` with `ovn-dcn` under `spec:services`. This ensures that the OVN controller connects to the OVN Southbound database by using the direct `internalapi` IPs instead of the unreachable ClusterIP.
* DCN1 uses the `leaf1` physnet for its OVN bridge mapping under `spec:nodeTemplate:ansible:ansibleVars:edpm_ovn_bridge_mappings`.
* Inter-site routes must be added to the network configuration template. These routes enable DCN1 Compute nodes to reach the central site (192.168.122.0/24) and other DCN sites (192.168.144.0/24 for DCN2). Similar routes are added for each service network (`internalapi`, `storage`, `tenant`).
* DCN1 nodes reference site-specific subnet names, such as `ctlplanedcn1` and `internalapidcn1`. These subnet names must match the subnets defined in the `NetConfig` CR.

. Repeat step 3 for all other DCN sites. Adjust the site-specific parameters:
+
* The node set name, for example `openstack-edpm-dcn2`
* The OVN bridge mapping, for example `leaf2:br-ex`
* The subnet names, for example `ctlplanedcn2` and `internalapidcn2`
* The inter-site routes. The routes from DCN2 must point to the central site subnets and the DCN1 site subnets.
* The Compute node definitions, with site-appropriate IP addresses

. Deploy all node sets by creating an `OpenStackDataPlaneDeployment` CR:
+
[source,yaml]
----
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-edpm-deployment
spec:
  nodeSets:
  - openstack-edpm
  - openstack-edpm-dcn1
  - openstack-edpm-dcn2
----
+
[NOTE]
====
You can deploy all node sets in parallel after the control plane adoption is complete.
====

. Wait for the deployment to complete:
+
----
$ oc wait --for condition=Ready openstackdataplanedeployment/openstack-edpm-deployment --timeout=40m
----

.Verification

. Verify that all node sets reach the `Ready` status:
+
----
$ oc get openstackdataplanenodeset

NAME                  STATUS   MESSAGE
openstack-edpm        True     Ready
openstack-edpm-dcn1   True     Ready
openstack-edpm-dcn2   True     Ready
----

. Verify that Compute services are running across all sites.
Ensure that all `nova-compute` services show `State=up` for nodes in all availability zones:
+
----
$ oc exec openstackclient -- openstack compute service list
----

. Verify inter-site connectivity by checking the routes on a DCN Compute node:
+
----
$ ssh dcn1-compute-0 ip route show | grep 172.17.0

172.17.0.0/24 via 172.17.10.1 dev internalapi
----

. Test that DCN Compute nodes can reach the control plane:
+
----
$ ssh dcn1-compute-0 ping -c 3 172.17.0.30
----
+
Replace `172.17.0.30` with an IP address of a control plane service on the `internalapi` network.
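When a deployment has many DCN Compute nodes, the route check in the verification steps can be scripted. The following sketch parses `ip route show` output and confirms that an expected inter-site route is present. The `has_route` helper name and the sample data are illustrative, not part of the procedure; in a real check, you would fill the `routes` variable over SSH, for example `routes="$(ssh dcn1-compute-0 ip route show)"`:

```shell
#!/bin/bash
# Sketch: confirm that an expected inter-site route appears in the routing
# table text produced by "ip route show" on a DCN Compute node.

has_route() {
  # $1 = subnet, $2 = gateway, $3 = routing table text
  printf '%s\n' "$3" | grep -q "^$1 via $2"
}

# Sample output from the DCN1 verification step in this procedure:
routes='172.17.0.0/24 via 172.17.10.1 dev internalapi
172.18.0.0/24 via 172.18.10.1 dev storage'

if has_route "172.17.0.0/24" "172.17.10.1" "$routes"; then
  echo "internalapi inter-site route present"   # prints for the sample data
else
  echo "internalapi inter-site route MISSING"
fi
```

Looping this check over every expected subnet and gateway pair for a site gives a quick per-node audit of the inter-site routes configured in the `edpm_network_config_template`.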