:_mod-docs-content-type: PROCEDURE

[id="adopting-compute-services-to-the-data-plane_{context}"]
= Adopting Compute services to the {rhos_acro} data plane

[role="_abstract"]
Adopt your Compute (nova) services to the {rhos_long} data plane.

.Prerequisites

* You have stopped the remaining control plane nodes, repositories, and packages on the {compute_service_first_ref} hosts. For more information, see xref:stopping-infrastructure-management-and-compute-services_{context}[Stopping infrastructure management and Compute services].
* You have configured the Ceph back end for the `NovaLibvirt` service. For more information, see xref:configuring-a-ceph-backend_migrating-databases[Configuring a Ceph back end].
* You have configured IP Address Management (IPAM):
+
----
$ oc apply -f - <<EOF
<NetConfig_CR>
EOF
----
+
Replace `<NetConfig_CR>` with the `NetConfig` CR definition that matches the isolated networks of your source cloud.

* You have defined the shell variables that are necessary to run the commands in this procedure. The values are examples, so you must replace them with values that are correct for your environment:
+
[subs=+quotes]
----
$ CEPH_FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //')

$ DEFAULT_CELL_NAME="cell3"
$ RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"

$ declare -A COMPUTES_CELL1
$ export COMPUTES_CELL1=(
>   *["standalone.localdomain"]="192.168.122.100"*
>   # *<node_name>="<ctlplane_ip>"*
> )
$ declare -A COMPUTES_CELL2
$ export COMPUTES_CELL2=(
>   # *<node_name>="<ctlplane_ip>"*
> )
$ declare -A COMPUTES_CELL3
$ *export COMPUTES_CELL3=(*
>   # *<node_name>="<ctlplane_ip>"*
> )
$ declare -A COMPUTES_API_CELL1
$ *export COMPUTES_API_CELL1=(*
>   ["standalone.localdomain"]="172.17.0.100"
>   ["standalone2.localdomain"]="172.17.0.101"
> )
$ NODESETS=""
$ for CELL in $(echo $RENAMED_CELLS); do
>   ref="COMPUTES_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
>   eval names=\${!${ref}[@]}
>   [ -z "$names" ] && continue
>   NODESETS="'openstack-${CELL}', $NODESETS"
> done
$ NODESETS="[${NODESETS%,*}]"
----
+
** `DEFAULT_CELL_NAME="cell3"` defines the name that the source cloud `default` cell acquires on the destination cloud after adoption. In a multi-cell adoption scenario, you can retain the original name, `default`, or create a new default cell name by incrementing the index of the last cell in the source cloud. For example, if the last cell in the source cloud is `cell5`, the new default cell name is `cell6`.
** `export COMPUTES_CELL1=(`: for each cell, update the `["standalone.localdomain"]="192.168.122.100"` entry and the `COMPUTES_CELL<X>` values with the names and IP addresses of the {compute_service} nodes that are connected to the `ctlplane` and `internalapi` networks. Do not specify the real FQDNs that are defined for each network. Always use the same hostname for each connected network of a Compute node. Provide the IP addresses and the names of the hosts on the remaining networks of the source cloud as needed, or manually adjust the files that you generate later in this procedure.
** `COMPUTES_CELL1`, `COMPUTES_CELL2`, and `COMPUTES_CELL3` specify the names of your {compute_service} nodes for each cell. Assign all {compute_service} nodes from the source cloud `cell1` cell into `COMPUTES_CELL1`, and so on.
** `export COMPUTES_<DEFAULT_CELL_NAME>=(` specifies the {compute_service} nodes that you assign from the source cloud `default` cell into `COMPUTES_<DEFAULT_CELL_NAME>` and `COMPUTES_API_<DEFAULT_CELL_NAME>`, where `<DEFAULT_CELL_NAME>` is the value of the `DEFAULT_CELL_NAME` environment variable. In this example, the `DEFAULT_CELL_NAME` environment variable value equals `cell3`.
** `export COMPUTES_API_CELL1=(`: for each cell, update the `["standalone.localdomain"]="172.17.0.100"` entry and the `COMPUTES_API_CELL<X>` values with the names and IP addresses of the {compute_service} nodes that are connected to the `internalapi` network. `["standalone.localdomain"]="172.17.0.100"` defines the custom DNS domain in the FQDN value of the nodes. This value is used in the data plane node set `spec.nodes.<node_name>.hostName` field. Do not specify the real FQDNs that are defined for each network. Use the same hostname for each of its connected networks. Provide the IP addresses and the names of the hosts on the remaining networks of the source cloud as needed, or manually adjust the files that you generate later in this procedure.
** `NODESETS="'openstack-${CELL}', $NODESETS"` specifies the cells that contain Compute nodes. Cells that do not contain Compute nodes are omitted from this template, because no node sets are created for those cells.
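+
With the example values above, only `cell1` has Compute nodes assigned, so the computed list contains a single node set. You can optionally verify the resulting value before you continue; the following output is a sketch based on the example assignments, not a required part of the procedure:
+
----
$ echo "$NODESETS"
['openstack-cell1']
----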
+
[NOTE]
====
If you deployed the source cloud with a `default` cell, and you want to rename it during adoption, define the new name that you want to use, as shown in the following example:

----
$ DEFAULT_CELL_NAME="cell1"
$ RENAMED_CELLS="cell1"
----
====

[NOTE]
====
Do not set a value for the `CEPH_FSID` parameter if the local storage back end is configured by the {compute_service} for libvirt. The storage back end must match the source cloud storage back end. You cannot change the storage back end during adoption.
====

.Procedure

ifeval::["{build}" != "downstream"]
. Create a https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets[ssh authentication secret] for the data plane nodes:
//kgilliga: I need to check if we will document this in Red Hat docs.
endif::[]
ifeval::["{build}" != "upstream"]
. Create an SSH authentication secret for the data plane nodes:
endif::[]
+
[subs=+quotes]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dataplane-adoption-secret
data:
  ssh-privatekey: |
$(cat *<path_to_ssh_key>* | base64 | sed 's/^/        /')
EOF
----
+
ifeval::["{build}" == "downstream"]
* Replace `<path_to_ssh_key>` with the path to your SSH key.
endif::[]

. Generate an SSH key pair and create the `nova-migration-ssh-key` secret:
+
----
$ cd "$(mktemp -d)"
$ ssh-keygen -f ./id -t ecdsa-sha2-nistp521 -N ''
$ oc get secret nova-migration-ssh-key || oc create secret generic nova-migration-ssh-key \
  --from-file=ssh-privatekey=id \
  --from-file=ssh-publickey=id.pub \
  --type kubernetes.io/ssh-auth
$ rm -f id*
$ cd -
----

. If TLS Everywhere is enabled, set `LIBVIRT_PASSWORD` to match the existing {OpenStackShort} deployment password:
+
----
declare -A TRIPLEO_PASSWORDS
TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"
LIBVIRT_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' LibvirtTLSPassword:' | awk -F ': ' '{ print $2; }')
LIBVIRT_PASSWORD_BASE64=$(echo -n "$LIBVIRT_PASSWORD" | base64)
----

.. Create the `libvirt-secret` secret when TLS Everywhere is enabled:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: ${LIBVIRT_PASSWORD_BASE64}
EOF
----

. If you need to carry over cell-specific {compute_service} configuration from the source cloud, create a configuration map that stores the configuration overrides in `<*.conf>` files. There is a requirement to index the `<*.conf>` files from '03' to '99', based on precedence. A `<99-*.conf>` file takes the highest precedence, while indexes below '03' are reserved for internal use.
+
[NOTE]
If you adopt a live cloud, you might need to carry over additional configurations for the default `nova` data plane services, which are stored in the `cell1` default `nova-extra-config` configuration map. Do not delete or overwrite the existing configuration in the `cell1` default `nova-extra-config` configuration map that is assigned to `nova`. Overwriting the configuration can break the data plane services that rely on specific contents of the `nova-extra-config` configuration map.
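+
For example, such an override can be delivered as an indexed `.conf` key in a dedicated configuration map. The following is a minimal sketch: the map name `nova-cell2-extra-config`, the file name `25-nova-cell2.conf`, and the `cpu_allocation_ratio` override are illustrative assumptions, not values taken from your cloud:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-cell2-extra-config
  namespace: openstack
data:
  25-nova-cell2.conf: |
    [DEFAULT]
    # Hypothetical override carried over from the source cloud
    cpu_allocation_ratio = 4.0
EOF
----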
. Configure a {Ceph} back end for libvirt:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
  namespace: openstack
data:
  03-ceph-nova.conf: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    images_rbd_glance_store_name=default_backend
    images_rbd_glance_copy_poll_interval=15
    images_rbd_glance_copy_timeout=600
    rbd_user=openstack
    rbd_secret_uuid=$CEPH_FSID
EOF
----

. Create the data plane services for the {compute_service} cells:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> oc apply -f - <<EOF
> ---
> apiVersion: dataplane.openstack.org/v1beta1
> kind: OpenStackDataPlaneService
> metadata:
>   name: nova-$CELL
> spec:
>   dataSources:
>   - secretRef:
>       name: nova-$CELL-compute-config
>   - secretRef:
>       name: nova-migration-ssh-key
>   - configMapRef:
>       name: nova-cells-global-config
>   playbook: osp.edpm.nova
>   caCerts: combined-ca-bundle
>   edpmServiceType: nova
>   containerImageFields:
>   - NovaComputeImage
>   - EdpmIscsidImage
> EOF
> done
----
+
* `spec.dataSources.secretRef` specifies an additional auto-generated `nova-<cell_name>-metadata-neutron-config` secret to enable a local metadata service for a cell. You should also set `spec.nova.template.cellTemplates.<cell_name>.metadataServiceTemplate.enabled` in the `OpenStackControlPlane/openstack` CR, as described in xref:adopting-the-compute-service_adopt-control-plane[Adopting the Compute service]. You can configure a single top-level metadata service, or define the metadata service per cell.
* `nova-$CELL-compute-config` specifies the secret that is auto-generated for each cell. You must append the `nova-<cell_name>-compute-config` secret to each custom `OpenStackDataPlaneService` CR that is related to the {compute_service}.
* `nova-migration-ssh-key` specifies the secret that you must append to each custom `OpenStackDataPlaneService` CR that is related to the {compute_service}.
+
[NOTE]
====
When you create your data plane services for the {compute_service} cells, review the following considerations:

* In this example, the same `nova-migration-ssh-key` key is shared across cells. However, you should use different keys for different cells.
* For simple configuration overrides, you do not need a custom data plane service. However, to reconfigure the `cell1` cell, the safest option is to create a custom service and a dedicated configuration map for it.
* The `cell1` cell is already managed with the default `OpenStackDataPlaneService` CR called `nova` and its `nova-extra-config` configuration map. Do not change the default `nova` data plane service definition. The changes are lost when the {rhos_acro} operator is updated with OLM.
* When a cell spans multiple node sets, give the custom `OpenStackDataPlaneService` resources names that relate to the node sets, for example, `nova-cell1-nfv` and `nova-cell1-enterprise`. The auto-generated configuration maps are then named `nova-cell1-nfv-extra-config` and `nova-cell1-enterprise-extra-config`.
* Different configurations for nodes in multiple node sets of the same cell are also supported, but are not covered in this guide.
====
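+
To confirm that a `nova-<cell_name>` service now exists for each renamed cell before you continue, you can optionally list the created CRs. This check is a sketch and is not a required part of the procedure:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> oc get openstackdataplaneservice nova-$CELL
> done
----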
. If TLS Everywhere is enabled, append the following content to the `OpenStackDataPlaneService` CR:
+
----
tlsCerts:
  contents:
    - dnsnames
    - ips
  networks:
    - ctlplane
  issuer: osp-rootca-issuer-internal
edpmRoleServiceName: nova
caCerts: combined-ca-bundle
edpmServiceType: nova
----

ifeval::["{build}" == "downstream"]
. Create a secret for the subscription manager:
+
----
$ oc create secret generic subscription-manager \
--from-literal rhc_auth='{"login": {"username": "<subscription_manager_username>", "password": "<subscription_manager_password>"}}'
----
+
* Replace `<subscription_manager_username>` with the applicable username.
* Replace `<subscription_manager_password>` with the applicable password.

. Create a secret for the Red Hat registry:
+
----
$ oc create secret generic redhat-registry \
--from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<registry_username>": "<registry_password>"}}'
----
+
* Replace `<registry_username>` with the applicable username.
* Replace `<registry_password>` with the applicable password.
endif::[]
+
[NOTE]
You do not need to reference the `subscription-manager` secret in the `dataSources` field of the `OpenStackDataPlaneService` CR. The secret is already passed in with a node-specific `OpenStackDataPlaneNodeSet` CR in the `ansibleVarsFrom` property in the `nodeTemplate` field.

. Create the data plane node set definitions for each cell:
+
[subs="+quotes"]
----
$ declare -A names
$ for CELL in $(echo $RENAMED_CELLS); do
  ref="COMPUTES_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
  eval names=\${!${ref}[@]}
  ref_api="COMPUTES_API_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
  [ -z "$names" ] && continue
  ind=0
  rm -f computes-$CELL
  for compute in $names; do
    ip="${ref}['$compute']"
    ip_api="${ref_api}['$compute']"
    cat >> computes-$CELL << EOF
    ${compute}:
      *hostName: $compute*
      ansible:
        ansibleHost: $compute
      *networks:*
      - defaultRoute: true
        fixedIP: ${!ip}
        name: ctlplane
        subnetName: subnet1
      - name: internalapi
        subnetName: subnet1
        fixedIP: ${!ip_api}
      - name: storage
        subnetName: subnet1
      - name: tenant
        subnetName: subnet1
EOF
    ind=$(( ind + 1 ))
  done
  test -f computes-$CELL || continue
  cat > nodeset-${CELL}.yaml << EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-$CELL
spec:
  tlsEnabled: false
  networkAttachments:
  - ctlplane
  preProvisioned: true
  services:
ifeval::["{build}" == "downstream"]
  - redhat
endif::[]
  - bootstrap
  - download-cache
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - reboot-os
  - install-certs
  - ovn
  - neutron-metadata
  - libvirt
  - nova-$CELL
  - telemetry
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-adoption-secret
    ansible:
      ansibleUser: root
ifeval::["{build}" == "downstream"]
      ansibleVarsFrom:
      - prefix: subscription_manager_
        secretRef:
          name: subscription-manager
      - secretRef:
          name: redhat-registry
endif::[]
      ansibleVars:
        service_net_map:
          nova_api_network: internalapi
          nova_libvirt_network: internalapi
        # Adjust the network config template to match the source cloud networks
        edpm_network_config_template: |
          <network_config_template>
        neutron_physical_bridge_name: br-ctlplane
        neutron_public_interface_name: eth0
        *edpm_ovn_bridge_mappings: <bridge_mappings>*
        edpm_ovn_bridge: br-int
        edpm_ovn_encap_type: geneve
        ovn_monitor_all: true
        edpm_ovn_remote_probe_interval: 60000
        edpm_ovn_ofctrl_wait_before_clear: 8000
        timesync_ntp_servers:
ifeval::["{build}" != "downstream"]
        - hostname: pool.ntp.org
endif::[]
ifeval::["{build}" == "downstream"]
        - hostname: clock.redhat.com
        - hostname: clock2.redhat.com
endif::[]
ifeval::["{build}" != "downstream"]
        edpm_bootstrap_command: |
          # This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
          set -euxo pipefail
          curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
          python3 -m venv ./venv
          PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
          # This is required for FIPS enabled until trunk.rdoproject.org
          # is not being served from a centos7 host, tracked by
          # https://issues.redhat.com/browse/RHOSZUUL-1517
          dnf -y install crypto-policies
          update-crypto-policies --set FIPS:NO-ENFORCE-EMS
          # FIXME: perform dnf upgrade for other packages in EDPM ansible
          # here we only ensure that decontainerized libvirt can start
          ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
          dnf -y upgrade openstack-selinux
          rm -f /run/virtlogd.pid
          rm -rf repo-setup-main
endif::[]
ifeval::["{build}" == "downstream"]
        edpm_bootstrap_command: |
          # FIXME: perform dnf upgrade for other packages in EDPM ansible
          # here we only ensure that decontainerized libvirt can start
          dnf -y upgrade openstack-selinux
          rm -f /run/virtlogd.pid
endif::[]
        gather_facts: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges: ['192.168.122.0/24']
        # Do not attempt OVS major upgrades here
        edpm_ovs_packages:
        - openvswitch3.3
        edpm_default_mounts:
        - *path: /dev/hugepages*
          *opts: pagesize=<pagesize>*
          fstype: hugetlbfs
          group: hugetlbfs
  nodes:
EOF
  cat computes-$CELL >> nodeset-${CELL}.yaml
done
----
+
* `${compute}.hostName` specifies the FQDN of the node if your deployment has a custom DNS domain.
* `${compute}.networks` specifies the network composition. The network composition must match the source cloud configuration to avoid data plane connectivity downtime. The `ctlplane` network must come first. The commands only retain IP addresses for the hosts on the `ctlplane` and `internalapi` networks. Repeat this step for other isolated networks, or update the resulting files manually.
* `metadata.name` specifies the node set name for each cell, for example, `openstack-cell1`, `openstack-cell2`. Only create node sets for cells that contain Compute nodes.
* `spec.tlsEnabled` specifies whether TLS Everywhere is enabled. If it is enabled, change `tlsEnabled` to `true`.
* `spec.services` specifies the services to adopt. If you are not adopting telemetry services, omit `telemetry` from the services list.
* `neutron_physical_bridge_name: br-ctlplane` specifies the bridge name. The bridge name and other OVN and {networking_service}-specific values must match the source cloud configuration to avoid data plane connectivity downtime.
* `edpm_ovn_bridge_mappings: <bridge_mappings>` specifies the value of the bridge mappings in your configuration, for example, `"datacentre:br-ctlplane"`.
* `path: /dev/hugepages` and `opts: pagesize=<pagesize>` configure huge pages. Replace `<pagesize>` with the size of the page. To configure multi-sized huge pages, create more items in the list. The mount points must match the source cloud configuration.
+
[NOTE]
====
Ensure that you use the same `ovn-controller` settings in the `OpenStackDataPlaneNodeSet` CR that you used in the {compute_service} nodes before adoption. This configuration is stored in the `external_ids` column in the `Open_vSwitch` table in the Open vSwitch database:

----
$ ovs-vsctl list Open .
...
external_ids : {hostname=standalone.localdomain, ovn-bridge=br-int, ovn-bridge-mappings="datacentre:br-ctlplane", ovn-chassis-mac-mappings="datacentre:1e:0a:bb:e6:7c:ad", ovn-encap-ip="172.19.0.100", ovn-encap-tos="0", ovn-encap-type=geneve, ovn-match-northd-version=False, ovn-monitor-all=True, ovn-ofctrl-wait-before-clear="8000", ovn-openflow-probe-interval="60", ovn-remote="tcp:ovsdbserver-sb.openstack.svc:6642", ovn-remote-probe-interval="60000", rundir="/var/run/openvswitch", system-id="2eec68e6-aa21-4c95-a868-31aeafc11736"}
...
----
====
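+
Before you apply the generated files in the next step, you can optionally run a client-side dry run to catch malformed YAML early. This is a sketch that assumes the `nodeset-<cell_name>.yaml` files that you generated in the previous step:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> test -f nodeset-${CELL}.yaml || continue
> oc apply --dry-run=client -f nodeset-${CELL}.yaml
> done
----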
. Deploy the `OpenStackDataPlaneNodeSet` CRs for each Compute cell:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> test -f nodeset-${CELL}.yaml || continue
> oc apply -f nodeset-${CELL}.yaml
> done
----

. If you use a {Ceph} back end for {block_storage_first_ref}, prepare the adopted data plane workloads:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
  test -f nodeset-${CELL}.yaml || continue
  oc patch osdpns/openstack-$CELL --type=merge --patch "
spec:
  services:
ifeval::["{build}" == "downstream"]
  - redhat
endif::[]
  - bootstrap
  - download-cache
  - configure-network
  - validate-network
  - install-os
  - configure-os
  - ssh-known-hosts
  - run-os
  - reboot-os
  - ceph-client
  - install-certs
  - ovn
  - neutron-metadata
  - libvirt
  - nova-$CELL
  - telemetry
  nodeTemplate:
    extraMounts:
    - extraVolType: Ceph
      volumes:
      - name: ceph
        secret:
          secretName: ceph-conf-files
      mounts:
      - name: ceph
        mountPath: "/etc/ceph"
        readOnly: true
"
done
----
+
[NOTE]
Ensure that you use the same list of services from the original `OpenStackDataPlaneNodeSet` CR, except for the `ceph-client` and `ceph-hci-pre` services.
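+
After the patch is applied, you can optionally confirm that the `ceph-client` service was added to each node set. The following check is a sketch based on the node sets that you created in this procedure:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> test -f nodeset-${CELL}.yaml || continue
> oc get osdpns/openstack-$CELL -o jsonpath='{.spec.services}{"\n"}'
> done
----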
. Optional: Enable `neutron-sriov-nic-agent` in the `OpenStackDataPlaneNodeSet` CR:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
  test -f nodeset-${CELL}.yaml || continue
  oc patch openstackdataplanenodeset openstack-$CELL --type='json' --patch='[
  {
    "op": "add",
    "path": "/spec/services/-",
    "value": "neutron-sriov"
  }, {
    "op": "add",
    "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings",
    "value": "dummy_sriov_net:dummy-dev"
  }, {
    "op": "add",
    "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_bandwidths",
    "value": "dummy-dev:40000000:40000000"
  }, {
    "op": "add",
    "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_hypervisors",
    "value": "dummy-dev:standalone.localdomain"
  }]'
done
----

. Optional: Enable `neutron-dhcp` in the `OpenStackDataPlaneNodeSet` CR:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
  test -f nodeset-${CELL}.yaml || continue
  oc patch openstackdataplanenodeset openstack-$CELL --type='json' --patch='[
  {
    "op": "add",
    "path": "/spec/services/-",
    "value": "neutron-dhcp"
  }]'
done
----
+
[NOTE]
====
To use `neutron-dhcp` with OVN for the {bare_metal_first_ref}, you must set the `disable_ovn_dhcp_for_baremetal_ports` configuration option for the {networking_first_ref} to `true`. You can set this configuration in the `NeutronAPI` spec:

----
...
spec:
  serviceUser: neutron
  ...
  customServiceConfig: |
    [DEFAULT]
    dhcp_agent_notification = True
    [ovn]
    disable_ovn_dhcp_for_baremetal_ports = true
----
====

. Run the pre-adoption validation:

.. Create the validation service:
+
----
$ oc apply -f - <<EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: pre-adoption-validation
spec:
  playbook: osp.edpm.pre_adoption_validation
EOF
----

.. Create an `OpenStackDataPlaneDeployment` CR that runs only the validation:
+
----
$ oc apply -f - <<EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack-pre-adoption
spec:
  nodeSets: $NODESETS
  servicesOverride:
  - pre-adoption-validation
EOF
----

.. Wait for the validation to finish. When the validation is finished, the status of the Ansible EE pods changes to `Completed`:
+
----
$ oc wait --for condition=Ready openstackdataplanedeployment/openstack-pre-adoption --timeout=10m
----

. Deploy the `OpenStackDataPlaneDeployment` CR to adopt the data plane workloads on all cells:
+
----
$ oc apply -f - <<EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneDeployment
metadata:
  name: openstack
spec:
  nodeSets: $NODESETS
EOF
----

. Wait for all the data plane node sets to reach the `Ready` status:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
> oc wait --for condition=Ready osdpns/openstack-$CELL --timeout=30m
> done
----

. Verify that the {networking_first_ref} agents are running:
+
----
$ oc exec openstackclient -- openstack network agent list
+--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+
| ID                                   | Agent Type                   | Host                   | Availability Zone | Alive | State | Binary                     |
+--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+
| 174fc099-5cc9-4348-b8fc-59ed44fcfb0e | DHCP agent                   | standalone.localdomain | nova              | :-)   | UP    | neutron-dhcp-agent         |
| 10482583-2130-5b0d-958f-3430da21b929 | OVN Metadata agent           | standalone.localdomain |                   | :-)   | UP    | neutron-ovn-metadata-agent |
| a4f1b584-16f1-4937-b2b0-28102a3f6eaa | OVN Controller agent         | standalone.localdomain |                   | :-)   | UP    | ovn-controller             |
+--------------------------------------+------------------------------+------------------------+-------------------+-------+-------+----------------------------+
----

[NOTE]
====
After you remove all the services from the {OpenStackPreviousInstaller} cell controllers, you can decommission the cell controllers. To create new cell Compute nodes, re-provision the decommissioned controllers as new data plane hosts and add them to the node sets of the corresponding or new cells.
====

.Next steps

* You must perform a fast-forward upgrade on your Compute services. For more information, see xref:performing-a-fast-forward-upgrade-on-compute-services_{context}[Performing a fast-forward upgrade on Compute services].