:_mod-docs-content-type: PROCEDURE

[id="adopting-compute-services-to-the-data-plane_{context}"]
= Adopting Compute services to the {rhos_acro} data plane

[role="_abstract"]
Adopt your Compute (nova) services to the {rhos_long} data plane.

.Prerequisites

* You have stopped the remaining control plane nodes, repositories, and packages on the {compute_service_first_ref} hosts. For more information, see xref:stopping-infrastructure-management-and-compute-services_{context}[Stopping infrastructure management and Compute services].
* You have configured the Ceph back end for the `NovaLibvirt` service. For more information, see xref:configuring-a-ceph-backend_migrating-databases[Configuring a Ceph back end].
* You have configured IP Address Management (IPAM) for the isolated networks.
* You have defined the shell variables that describe your {compute_service} cells and nodes. Replace the following example values with values that apply to your environment:
+
----
$ DEFAULT_CELL_NAME="cell3" <1>
$ RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"

$ declare -A COMPUTES_CELL1
$ export COMPUTES_CELL1=( <2>
  ["standalone.localdomain"]="192.168.122.100" <3>
  # <4>
  #
  #
)
$ declare -A COMPUTES_CELL2
$ export COMPUTES_CELL2=(
  # ...
)
$ declare -A COMPUTES_CELL3
$ export COMPUTES_CELL3=(
  # ... <5>
)
# ...

$ declare -A COMPUTES_API_CELL1
$ export COMPUTES_API_CELL1=( <6>
  ["standalone.localdomain"]="172.17.0.100"
  # ...
)
# ...

$ NODESETS=""
$ for CELL in $(echo $RENAMED_CELLS); do
    ref="COMPUTES_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
    eval names=\${!${ref}[@]}
    [ -z "$names" ] && continue
    NODESETS="'openstack-${CELL}', $NODESETS" <7>
  done
$ NODESETS="[${NODESETS%,*}]"
----
+
<1> The source cloud `default` cell acquires a new `DEFAULT_CELL_NAME` on the destination cloud after adoption. In a multi-cell adoption scenario, you can retain the original name, `default`, or assign a new default cell name by incrementing the index of the last cell in the source cloud. For example, if the last cell in the source cloud is `cell5`, the new default cell name is `cell6`.
<2> For each cell, update the `["standalone.localdomain"]="192.168.122.100"` entries in the `COMPUTES_CELL` arrays with the host names of your {compute_service} nodes and their IP addresses on the `ctlplane` network. Do not specify per-network FQDNs; always use the same host name for every connected network of a {compute_service} node. Provide the IP addresses and host names for the remaining networks of the source cloud as needed, or manually adjust the files that you generate in step 9 of this procedure.
<3> If your deployment has a custom DNS domain, specify it in the FQDN value of the nodes. This value is used in the `hostName` field of the node entries in the data plane node set `spec.nodes` map.
<4> Assign all {compute_service} nodes from the source cloud `cell1` cell to `COMPUTES_CELL1`, and so on, replacing the commented example entries with the host names and IP addresses of your {compute_service} nodes.
<5> Assign all {compute_service} nodes from the source cloud `default` cell to the `COMPUTES_CELL` and `COMPUTES_API_CELL` arrays that correspond to the `DEFAULT_CELL_NAME` value. In this example, `DEFAULT_CELL_NAME` is `cell3`, so the `default` cell nodes belong in `COMPUTES_CELL3` and `COMPUTES_API_CELL3`.
<6> For each cell, update the `COMPUTES_API_CELL` arrays with the same host names and their IP addresses on the `internalapi` network. Provide the IP addresses and host names for the remaining networks of the source cloud as needed, or manually adjust the files that you generate in step 9 of this procedure.
<7> Cells that do not contain Compute nodes are omitted from this template because no node sets are created for those cells.
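+
To check the computed node set list, you can print the `NODESETS` variable. The list contains one quoted `openstack-` prefixed entry for each cell that has Compute nodes, for example, `['openstack-cell3', 'openstack-cell2', 'openstack-cell1']`:
+
----
$ echo $NODESETS
----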
+
[NOTE]
====
If you deployed the source cloud with a `default` cell and you want to rename it during adoption, define the new name that you want to use, as shown in the following example:

----
$ DEFAULT_CELL_NAME="cell1"
$ RENAMED_CELLS="cell1"
----
====
+
[NOTE]
====
Do not set a value for the `CEPH_FSID` parameter if the {compute_service} configures a local storage back end for libvirt. The storage back end must match the source cloud storage back end. You cannot change the storage back end during adoption.
====

.Procedure

ifeval::["{build}" != "downstream"]
. Create a https://kubernetes.io/docs/concepts/configuration/secret/#ssh-authentication-secrets[ssh authentication secret] for the data plane nodes:
//kgilliga:I need to check if we will document this in Red Hat docs.
endif::[]
ifeval::["{build}" != "upstream"]
. Create an SSH authentication secret for the data plane nodes:
endif::[]
+
[subs=+quotes]
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: dataplane-adoption-secret
  namespace: openstack
data:
  ssh-privatekey: |
$(cat <path_to_ssh_key> | base64 | sed 's/^/    /')
EOF
----
+
ifeval::["{build}" == "downstream"]
* Replace `<path_to_ssh_key>` with the path to your SSH key.
endif::[]

. Generate an ssh key-pair `nova-migration-ssh-key` secret:
+
----
$ cd "$(mktemp -d)"
ssh-keygen -f ./id -t ecdsa-sha2-nistp521 -N ''
oc get secret nova-migration-ssh-key || oc create secret generic nova-migration-ssh-key \
  --from-file=ssh-privatekey=id \
  --from-file=ssh-publickey=id.pub \
  --type kubernetes.io/ssh-auth
rm -f id*
cd -
----

. If TLS Everywhere is enabled, set `LIBVIRT_PASSWORD` to match the existing {OpenStackShort} deployment password:
+
----
declare -A TRIPLEO_PASSWORDS
TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"

LIBVIRT_PASSWORD=$(cat ${TRIPLEO_PASSWORDS[default]} | grep ' LibvirtTLSPassword:' | awk -F ': ' '{ print $2; }')
LIBVIRT_PASSWORD_BASE64=$(echo -n "$LIBVIRT_PASSWORD" | base64)
----

.. Create the `libvirt-secret` secret when TLS-e is enabled:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: libvirt-secret
  namespace: openstack
type: Opaque
data:
  LibvirtPassword: ${LIBVIRT_PASSWORD_BASE64}
EOF
----

. Create a `nova-cells-global-config` configuration map that provides the {compute_service} configuration that is shared by all cells:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: nova-cells-global-config
  namespace: openstack
data: <1>
  99-nova-compute-cells-workarounds.conf: | <2>
    [workarounds]
    disable_compute_service_check_for_ffu=true
EOF
----
+
<1> The `data` resources in the `ConfigMap` provide the configuration files for all the cells.
<2> Index the `*.conf` files from `03` to `99` based on precedence. A `99-*.conf` file takes the highest precedence, and indexes below `03` are reserved for internal use.
+
[NOTE]
If you adopt a live cloud, you might need to carry over additional configuration for the default `nova` data plane services, which is stored in the default `nova-extra-config` configuration map for `cell1`. Do not delete or overwrite the existing configuration in the `cell1` default `nova-extra-config` configuration map that is assigned to `nova`. Overwriting the configuration can break the data plane services that rely on specific contents of the `nova-extra-config` configuration map.
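+
To confirm that the secrets and the global configuration map exist before you continue, you can list them. The object names follow this procedure; adjust them if your names differ:
+
----
$ oc get secret dataplane-adoption-secret nova-migration-ssh-key
$ oc get configmap nova-cells-global-config
----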
. Configure a {Ceph} back end for libvirt:
+
----
$ oc apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: ceph-nova
  namespace: openstack
data:
  03-ceph-nova.conf: |
    [libvirt]
    images_type=rbd
    images_rbd_pool=vms
    images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=openstack
    rbd_secret_uuid=$CEPH_FSID
EOF
----

. Create an `OpenStackDataPlaneService` CR for each {compute_service} cell:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
  oc apply -f - <<EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneService
metadata:
  name: nova-$CELL
  namespace: openstack
spec:
  dataSources: <1>
    - secretRef:
        name: nova-$CELL-compute-config <2>
    - secretRef:
        name: nova-migration-ssh-key <3>
    - configMapRef:
        name: nova-cells-global-config
  playbook: osp.edpm.nova
  caCerts: combined-ca-bundle
  edpmServiceType: nova
  containerImageFields:
    - NovaComputeImage
    - EdpmIscsidImage
EOF
done
----
+
* If TLS Everywhere is enabled, append the following content to the `OpenStackDataPlaneService` CR:
+
----
  tlsCerts:
    contents:
      - dnsnames
      - ips
    networks:
      - ctlplane
    issuer: osp-rootca-issuer-internal
  edpmRoleServiceName: nova
  caCerts: combined-ca-bundle
  edpmServiceType: nova
----
+
<1> To enable a local metadata service for a cell, append a `spec.dataSources.secretRef` that references the additional auto-generated `nova-$CELL-metadata-neutron-config` secret. You should also set the cell's `metadataServiceTemplate.enabled` field under `spec.nova.template.cellTemplates` in the `OpenStackControlPlane/openstack` CR, as described in xref:adopting-the-compute-service_{context}[Adopting the Compute service]. You can configure a single top-level metadata service, or define the metadata service per cell.
<2> The `nova-$CELL-compute-config` secret is auto-generated for each cell.
<3> You must append the `nova-$CELL-compute-config` and `nova-migration-ssh-key` references to each custom `OpenStackDataPlaneService` CR that is related to the {compute_service}.
+
[NOTE]
====
When creating your data plane services for {compute_service} cells, review the following considerations:

* In this example, the same `nova-migration-ssh-key` key is shared across cells. However, you should use a different key for each cell.
* For simple configuration overrides, you do not need a custom data plane service. However, to reconfigure the cell `cell1`, the safest option is to create a custom service and a dedicated configuration map for it.
* The cell `cell1` is already managed with the default `OpenStackDataPlaneService` CR called `nova` and its `nova-extra-config` configuration map. Do not change the default data plane service `nova` definition. The changes are lost when the {rhos_acro} operator is updated with OLM.
* When a cell spans multiple node sets, give the custom `OpenStackDataPlaneService` resources names that relate to the node sets, for example, `nova-cell1-nfv` and `nova-cell1-enterprise`. The auto-generated configuration maps are then named `nova-cell1-nfv-extra-config` and `nova-cell1-enterprise-extra-config`.
* Different configurations for nodes in multiple node sets of the same cell are also supported, but are not covered in this guide.
====

ifeval::["{build}" == "downstream"]
. Create a secret for the subscription manager:
+
----
$ oc create secret generic subscription-manager \
  --from-literal rhc_auth='{"login": {"username": "<username>", "password": "<password>"}}'
----
+
* Replace `<username>` with the applicable username.
* Replace `<password>` with the applicable password.

. Create a secret for the Red Hat registry:
+
----
$ oc create secret generic redhat-registry \
  --from-literal edpm_container_registry_logins='{"registry.redhat.io": {"<username>": "<password>"}}'
----
+
* Replace `<username>` with the applicable username.
* Replace `<password>` with the applicable password.
endif::[]
+
[NOTE]
You do not need to reference the `subscription-manager` secret in the `dataSources` field of the `OpenStackDataPlaneService` CR. The secret is already passed in with a node-specific `OpenStackDataPlaneNodeSet` CR in the `ansibleVarsFrom` property in the `nodeTemplate` field.
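+
To confirm that a data plane service was created for each cell, you can list the `OpenStackDataPlaneService` CRs. The `nova-cell*` names assume the cell names that are used in this procedure:
+
----
$ oc get openstackdataplaneservice | grep nova-
----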
. Create the data plane node set definitions for each cell:
+
----
$ declare -A names
$ for CELL in $(echo $RENAMED_CELLS); do
  ref="COMPUTES_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
  eval names=\${!${ref}[@]}
  ref_api="COMPUTES_API_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')"
  [ -z "$names" ] && continue

  ind=0
  rm -f computes-$CELL
  for compute in $names; do
    ip="${ref}['$compute']"
    ip_api="${ref_api}['$compute']"
    cat >> computes-$CELL << EOF
    ${compute}:
      hostName: $compute <1>
      ansible:
        ansibleHost: $compute
      networks: <2>
        - defaultRoute: true
          fixedIP: ${!ip}
          name: ctlplane
          subnetName: subnet1
        - name: internalapi
          subnetName: subnet1
          fixedIP: ${!ip_api}
        - name: storage
          subnetName: subnet1
        - name: tenant
          subnetName: subnet1
EOF
    ind=$(( ind + 1 ))
  done

  test -f computes-$CELL || continue

  cat > nodeset-${CELL}.yaml << EOF
apiVersion: dataplane.openstack.org/v1beta1
kind: OpenStackDataPlaneNodeSet
metadata:
  name: openstack-$CELL <3>
spec:
  tlsEnabled: false <4>
  networkAttachments:
    - ctlplane
  preProvisioned: true
  services:
ifeval::["{build}" == "downstream"]
    - redhat
endif::[]
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - nova-$CELL
    - telemetry <5>
  env:
    - name: ANSIBLE_CALLBACKS_ENABLED
      value: "profile_tasks"
    - name: ANSIBLE_FORCE_COLOR
      value: "True"
    - name: ANSIBLE_VERBOSITY
      value: '3'
  nodeTemplate:
    ansibleSSHPrivateKeySecret: dataplane-adoption-secret
    ansible:
      ansibleUser: root
ifeval::["{build}" == "downstream"]
      ansibleVarsFrom:
        - secretRef:
            name: subscription-manager
        - secretRef:
            name: redhat-registry
endif::[]
      ansibleVars:
ifeval::["{build}" == "downstream"]
        rhc_release: 9.2
        rhc_repositories:
          - {name: "*", state: disabled}
          - {name: "rhel-9-for-x86_64-baseos-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-appstream-eus-rpms", state: enabled}
          - {name: "rhel-9-for-x86_64-highavailability-eus-rpms", state: enabled}
          - {name: "rhoso-18.0-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "fast-datapath-for-rhel-9-x86_64-rpms", state: enabled}
          - {name: "rhceph-7-tools-for-rhel-9-x86_64-rpms", state: enabled}
endif::[]
        edpm_bootstrap_release_version_package: []
        # edpm_network_config
        # Default nic config template for a EDPM node
        # These vars are edpm_network_config role vars
        edpm_network_config_template: |
          ---
          {% set mtu_list = [ctlplane_mtu] %}
          {% for network in nodeset_networks %}
          {% set _ = mtu_list.append(lookup('vars', networks_lower[network] ~ '_mtu')) %}
          {%- endfor %}
          {% set min_viable_mtu = mtu_list | max %}
          network_config:
            - type: ovs_bridge
              name: {{ neutron_physical_bridge_name }}
              mtu: {{ min_viable_mtu }}
              use_dhcp: false
              dns_servers: {{ ctlplane_dns_nameservers }}
              domain: {{ dns_search_domains }}
              addresses:
                - ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
              routes: {{ ctlplane_host_routes }}
              members:
                - type: interface
                  name: nic1
                  mtu: {{ min_viable_mtu }}
                  # force the MAC address of the bridge to this interface
                  primary: true
          {% for network in nodeset_networks %}
                - type: vlan
                  mtu: {{ lookup('vars', networks_lower[network] ~ '_mtu') }}
                  vlan_id: {{ lookup('vars', networks_lower[network] ~ '_vlan_id') }}
                  addresses:
                    - ip_netmask: {{ lookup('vars', networks_lower[network] ~ '_ip') }}/{{ lookup('vars', networks_lower[network] ~ '_cidr') }}
                  routes: {{ lookup('vars', networks_lower[network] ~ '_host_routes') }}
          {% endfor %}
        edpm_network_config_nmstate: false
        # Control resolv.conf management by NetworkManager
        # false = disable NetworkManager resolv.conf update (default)
        # true = enable NetworkManager resolv.conf update
        edpm_bootstrap_network_resolvconf_update: false
        edpm_network_config_hide_sensitive_logs: false
        #
        # These vars are for the network config templates themselves and are
        # considered EDPM network defaults.
        neutron_physical_bridge_name: br-ctlplane <6>
        neutron_public_interface_name: eth0

        # edpm_nodes_validation
        edpm_nodes_validation_validate_controllers_icmp: false
        edpm_nodes_validation_validate_gateway_icmp: false

        # edpm ovn-controller configuration
        edpm_ovn_bridge_mappings: <bridge_mappings> <7>
        edpm_ovn_bridge: br-int
        edpm_ovn_encap_type: geneve
        ovn_monitor_all: true
        edpm_ovn_remote_probe_interval: 60000
        edpm_ovn_ofctrl_wait_before_clear: 8000

        timesync_ntp_servers:
ifeval::["{build}" != "downstream"]
          - hostname: pool.ntp.org
endif::[]
ifeval::["{build}" == "downstream"]
          - hostname: clock.redhat.com
          - hostname: clock2.redhat.com
endif::[]

ifeval::["{build}" != "downstream"]
        edpm_bootstrap_command: |
          # This is a hack to deploy RDO Delorean repos to RHEL as if it were Centos 9 Stream
          set -euxo pipefail
          curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
          python3 -m venv ./venv
          PBR_VERSION=0.0.0 ./venv/bin/pip install ./repo-setup-main
          # This is required for FIPS enabled until trunk.rdoproject.org
          # is not being served from a centos7 host, tracked by
          # https://issues.redhat.com/browse/RHOSZUUL-1517
          dnf -y install crypto-policies
          update-crypto-policies --set FIPS:NO-ENFORCE-EMS
          # FIXME: perform dnf upgrade for other packages in EDPM ansible
          # here we only ensuring that decontainerized libvirt can start
          ./venv/bin/repo-setup current-podified -b antelope -d centos9 --stream
          dnf -y upgrade openstack-selinux
          rm -f /run/virtlogd.pid
          rm -rf repo-setup-main
endif::[]
ifeval::["{build}" == "downstream"]
        edpm_bootstrap_command: |
          # FIXME: perform dnf upgrade for other packages in EDPM ansible
          # here we only ensuring that decontainerized libvirt can start
          dnf -y upgrade openstack-selinux
          rm -f /run/virtlogd.pid
endif::[]

        gather_facts: false
        # edpm firewall, change the allowed CIDR if needed
        edpm_sshd_configure_firewall: true
        edpm_sshd_allowed_ranges: ['192.168.122.0/24']

        # Do not attempt OVS major upgrades here
        edpm_ovs_packages:
          - openvswitch3.3

        edpm_default_mounts: <8>
          - path: /dev/hugepages
            opts: pagesize=<size>
            fstype: hugetlbfs
            group: hugetlbfs
  nodes:
EOF
  cat computes-$CELL >> nodeset-${CELL}.yaml
done
----
+
<1> If your deployment has a custom DNS domain, specify the FQDN for the node.
<2> The network composition must match the source cloud configuration to avoid data plane connectivity downtime. The `ctlplane` network must come first. The commands only retain IP addresses for the hosts on the `ctlplane` and `internalapi` networks. Repeat this step for other isolated networks, or update the resulting files manually.
<3> Use node set names such as `openstack-cell1` and `openstack-cell2`. Only create node sets for cells that contain Compute nodes.
<4> If TLS Everywhere is enabled, change `tlsEnabled` to `true`.
<5> If you are not adopting telemetry services, omit `telemetry` from the services list.
<6> The bridge name and other OVN and {networking_service}-specific values must match the source cloud configuration to avoid data plane connectivity downtime.
<7> Replace `<bridge_mappings>` with the value of the bridge mappings in your configuration, for example, `"datacentre:br-ctlplane"`.
<8> To configure huge pages, replace `<size>` with the size of the page. To configure multi-sized huge pages, create more items in the list. Note that the mount points must match the source cloud configuration.
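+
Before you apply the generated files, you can review them to confirm that each node set contains the expected cell name and hosts, for example:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
    test -f nodeset-${CELL}.yaml || continue
    grep -E 'name: openstack-|hostName:' nodeset-${CELL}.yaml
  done
----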
+
[NOTE]
====
Ensure that you use the same `ovn-controller` settings in the `OpenStackDataPlaneNodeSet` CR that you used in the {compute_service} nodes before adoption. This configuration is stored in the `external_ids` column in the `Open_vSwitch` table in the Open vSwitch database:

----
$ ovs-vsctl list Open .
...
external_ids        : {hostname=standalone.localdomain, ovn-bridge=br-int, ovn-bridge-mappings=<bridge_mappings>, ovn-chassis-mac-mappings="datacentre:1e:0a:bb:e6:7c:ad", ovn-encap-ip="172.19.0.100", ovn-encap-tos="0", ovn-encap-type=geneve, ovn-match-northd-version=False, ovn-monitor-all=True, ovn-ofctrl-wait-before-clear="8000", ovn-openflow-probe-interval="60", ovn-remote="tcp:ovsdbserver-sb.openstack.svc:6642", ovn-remote-probe-interval="60000", rundir="/var/run/openvswitch", system-id="2eec68e6-aa21-4c95-a868-31aeafc11736"}
...
----
====

. Deploy the `OpenStackDataPlaneNodeSet` CRs for each Compute cell:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
    test -f nodeset-${CELL}.yaml || continue
    oc apply -f nodeset-${CELL}.yaml
  done
----

. If you use a {Ceph} back end for {block_storage_first_ref}, prepare the adopted data plane workloads:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
    test -f nodeset-${CELL}.yaml || continue
    oc patch osdpns/openstack-$CELL --type=merge --patch "
spec:
  services:
ifeval::["{build}" == "downstream"]
    - redhat
endif::[]
    - bootstrap
    - download-cache
    - configure-network
    - validate-network
    - install-os
    - configure-os
    - ssh-known-hosts
    - run-os
    - reboot-os
    - ceph-client
    - install-certs
    - ovn
    - neutron-metadata
    - libvirt
    - nova-$CELL
    - telemetry
  nodeTemplate:
    extraMounts:
      - extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
"
  done
----
+
[NOTE]
Ensure that you use the same list of services from the original `OpenStackDataPlaneNodeSet` CR, except for the added `ceph-client` and `ceph-hci-pre` services.

. Optional: Enable `neutron-sriov-nic-agent` in the `OpenStackDataPlaneNodeSet` CR:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
    test -f nodeset-${CELL}.yaml || continue
    oc patch openstackdataplanenodeset openstack-$CELL --type='json' --patch='[
      {
        "op": "add",
        "path": "/spec/services/-",
        "value": "neutron-sriov"
      }, {
        "op": "add",
        "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_physical_device_mappings",
        "value": "dummy_sriov_net:dummy-dev"
      }, {
        "op": "add",
        "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_bandwidths",
        "value": "dummy-dev:40000000:40000000"
      }, {
        "op": "add",
        "path": "/spec/nodeTemplate/ansible/ansibleVars/edpm_neutron_sriov_agent_SRIOV_NIC_resource_provider_hypervisors",
        "value": "dummy-dev:standalone.localdomain"
      }]'
  done
----

. Optional: Enable `neutron-dhcp` in the `OpenStackDataPlaneNodeSet` CR:
+
----
$ for CELL in $(echo $RENAMED_CELLS); do
    test -f nodeset-${CELL}.yaml || continue
    oc patch openstackdataplanenodeset openstack-$CELL --type='json' --patch='[
      {
        "op": "add",
        "path": "/spec/services/-",
        "value": "neutron-dhcp"
      }]'
  done
----
+
[NOTE]
====
To use `neutron-dhcp` with OVN for the {bare_metal_first_ref}, you must set the `disable_ovn_dhcp_for_baremetal_ports` configuration option for the {networking_first_ref} to `true`. You can set this configuration in the `NeutronAPI` spec:

----
...
spec:
  serviceUser: neutron
  ...
  customServiceConfig: |
    [DEFAULT]
    dhcp_agent_notification = True
    [ovn]
    disable_ovn_dhcp_for_baremetal_ports = true
----
====
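+
After you patch a node set, you can confirm that the optional services were appended to its `services` list, for example:
+
----
$ oc get openstackdataplanenodeset openstack-cell1 -o jsonpath='{.spec.services}'
----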
. Run the pre-adoption validation:
.. Create the validation service:
+
----
$ oc apply -f - <