:_mod-docs-content-type: PROCEDURE

[id="using-new-subnet-ranges_{context}"]
= Configuring new subnet ranges

[NOTE]
If you are using IPv6, you can reuse existing subnet ranges in most cases. For more information about existing subnet ranges, see xref:reusing-existing-subnet-ranges_{context}[Reusing existing subnet ranges].

[role="_abstract"]
You can define new IP ranges for control plane services that belong to a different subnet that is not used in the existing cluster. Then you configure link local IP routing between the existing and new subnets so that the existing and new service deployments can communicate. This involves using the {OpenStackPreviousInstaller} mechanism on a pre-adopted cluster to configure additional link local routes, which enables the data plane deployment to reach the {rhos_prev_long} ({OpenStackShort}) nodes by using the existing subnet addresses.

You can use new subnet ranges with any existing subnet configuration. This approach is also useful when the existing cluster subnet ranges do not have enough free IP addresses for the new control plane services.

You must size the new subnet appropriately to accommodate the new control plane services. There are no specific requirements for the existing deployment allocation pools that are already consumed by the {OpenStackShort} environment.

[IMPORTANT]
Defining a new subnet for Storage and Storage management is not supported, because {compute_service_first_ref} and {Ceph} do not allow you to modify those networks during adoption.

In the following procedure, you configure `NetworkAttachmentDefinition` custom resources (CRs) to use a different subnet from what is configured in the `network_config` section of the `OpenStackDataPlaneNodeSet` CR for the same networks. The new range in the `NetworkAttachmentDefinition` CR is used for control plane services, while the existing range in the `OpenStackDataPlaneNodeSet` CR is used to manage IP Address Management (IPAM) for data plane nodes.

The values that are used in the following procedure are examples. Use values that are specific to your configuration.

ifeval::["{build_variant}" == "ospdo"]
For more information about director Operator network configurations, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[Creating networks with director Operator] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
endif::[]

.Procedure

. Configure link local routes on the existing deployment nodes for the control plane subnets by using the {OpenStackPreviousInstaller} configuration:
+
----
network_config:
- type: ovs_bridge
  name: br-ctlplane
  routes:
  - ip_netmask: 0.0.0.0/0
    next_hop: 192.168.1.1
  - ip_netmask: 172.31.0.0/24 <1>
    next_hop: 192.168.1.100 <2>
----
<1> The new control plane subnet.
<2> The control plane IP address of the existing data plane node.
+
Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment.
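+
For example, a similar route for the Internal API network might look like the following sketch. The subnet, address, and VLAN values (`172.31.2.0/24` as the new Internal API range, `172.17.0.100` as the node's existing Internal API address, VLAN `20`) are illustrative assumptions, not values from your deployment:
+
----
- type: vlan
  vlan_id: 20
  device: br-ctlplane
  routes:
  - ip_netmask: 172.31.2.0/24   # new Internal API subnet (assumed)
    next_hop: 172.17.0.100      # existing Internal API address of the node (assumed)
----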
. Apply the new configuration to every {OpenStackShort} node:
+
----
(undercloud)$ openstack overcloud network provision \
  --output <deployed_network_environment> \
  [--templates <templates_directory>] \
  /home/stack/templates/<network_definition_file>
----
+
----
(undercloud)$ openstack overcloud node provision \
  --stack <stack> \
  --network-config \
  --output <deployment_file> \
  [--templates <templates_directory>] \
  /home/stack/templates/<node_definition_file>
----
+
* Optional: Include the `--templates` option to use your own templates instead of the default templates located in `/usr/share/openstack-tripleo-heat-templates`. Replace `<templates_directory>` with the path to the directory that contains your templates.
* Replace `<stack>` with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is `overcloud`.
* Include the `--network-config` optional argument to provide the network definitions to the `cli-overcloud-node-network-config.yaml` Ansible playbook. The `cli-overcloud-node-network-config.yaml` playbook uses the `os-net-config` tool to apply the network configuration on the deployed nodes. If you do not use `--network-config` to provide the network definitions, then you must configure the `{{role.name}}NetworkConfigTemplate` parameters in your `network-environment.yaml` file, otherwise the default network definitions are used.
* Replace `<deployment_file>` with the name of the heat environment file to generate for inclusion in the deployment command, for example `/home/stack/templates/overcloud-baremetal-deployed.yaml`.
* Replace `<node_definition_file>` with the name of your node definition file, for example, `overcloud-baremetal-deploy.yaml`. Ensure that the `network_config_update` variable is set to `true` in the node definition file.
+
[NOTE]
Network configuration changes are not applied by default, to avoid the risk of network disruption. You must enforce the changes by setting `StandaloneNetworkConfigUpdate: true` in the {OpenStackPreviousInstaller} configuration files.

. Confirm that there are new link local routes to the new subnet on each node. For example:
+
----
# ip route | grep 172
172.31.0.0/24 via 192.168.122.100 dev br-ctlplane
----

. Configure link local routes to the existing deployment subnets on the {rhocp_long} worker nodes by adding `routes` entries to the `NodeNetworkConfigurationPolicy` CRs for each network. For example:
+
----
- destination: 192.168.122.0/24 <1>
  next-hop-interface: ospbr <2>
----
<1> The original subnet of the isolated network on the data plane.
<2> The {rhocp_long} worker network interface that corresponds to the isolated network on the data plane.
+
As a result, the following route is added to your {OpenShiftShort} nodes:
+
----
# ip route | grep 192
192.168.122.0/24 dev ospbr proto static scope link
----

. Later, during the data plane adoption, add the same link local routes for the new control plane subnet ranges to the `network_config` section of the `OpenStackDataPlaneNodeSet` CR. For example:
+
----
nodeTemplate:
  ansible:
    ansibleUser: root
    ansibleVars:
      additional_ctlplane_host_routes:
      - ip_netmask: 172.31.0.0/24
        next_hop: '{{ ctlplane_ip }}'
      edpm_network_config_template: |
        network_config:
        - type: ovs_bridge
          routes: {{ ctlplane_host_routes + additional_ctlplane_host_routes }}
          ...
----
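+
To illustrate the result of this template, assume a node whose `ctlplane_ip` is `192.168.122.100` and whose existing host routes contain only a default gateway at `192.168.122.1`; both values are examples, not defaults. The rendered `network_config` would then contain routes similar to the following sketch:
+
----
network_config:
- type: ovs_bridge
  routes:
  - ip_netmask: 0.0.0.0/0        # existing default route (assumed gateway)
    next_hop: 192.168.122.1
  - ip_netmask: 172.31.0.0/24    # added route to the new control plane subnet
    next_hop: 192.168.122.100    # the node's own ctlplane IP
----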
. List the IP addresses that are used for the data plane nodes in the existing deployment as the `ansibleHost` and `fixedIP` values. For example:
+
----
nodes:
  standalone:
    ansible:
      ansibleHost: 192.168.122.100
      ansibleUser: ""
    hostName: standalone
    networks:
    - defaultRoute: true
      fixedIP: 192.168.122.100
      name: ctlplane
      subnetName: subnet1
----
+
[IMPORTANT]
Do not change the {OpenStackShort} node IP addresses during the adoption process. List the previously used IP addresses in the `fixedIP` fields for each node entry in the `nodes` section of the `OpenStackDataPlaneNodeSet` CR.

. Expand the SSH range for the firewall configuration to include both subnets, to allow SSH access to the data plane nodes from both subnets:
+
----
edpm_sshd_allowed_ranges:
- 192.168.122.0/24
- 172.31.0.0/24
----
+
This provides SSH access to the {OpenStackShort} nodes from the new subnet as well as from the existing {OpenStackShort} subnets.
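+
Optionally, after the data plane deployment completes, you can check on a node that the firewall rules include both ranges. The following is a minimal sketch that assumes the default EDPM nftables firewall backend; the exact rule format can differ in your environment:
+
----
# List the active ruleset and filter for the two SSH-allowed subnets (assumed values)
$ sudo nft list ruleset | grep -E '192.168.122.0/24|172.31.0.0/24'
----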