:_mod-docs-content-type: PROCEDURE
[id="migrating-ovn-data_{context}"]
= Migrating OVN data

[role="_abstract"]
Migrate the data in the OVN databases from the original {rhos_prev_long} deployment to `ovsdb-server` instances that are running in the {rhocp_long} cluster.

.Prerequisites

* The `OpenStackControlPlane` resource is created.
* `NetworkAttachmentDefinition` custom resources (CRs) for the original cluster are defined. Specifically, the `internalapi` network is defined.
* The original {networking_first_ref} and OVN `northd` are not running.
* There is network routability between the control plane services and the adopted cluster.
* The cloud is migrated to the Modular Layer 2 plug-in with Open Virtual Networking (ML2/OVN) mechanism driver.
* Define the following shell variables. Replace the example values with values that are correct for your environment:
+
----
ifeval::["{build}" != "downstream"]
STORAGE_CLASS_NAME=local-storage
OVSDB_IMAGE=quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified
endif::[]
ifeval::["{build}" == "downstream"]
STORAGE_CLASS_NAME=local-storage
OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9:18.0
endif::[]
SOURCE_OVSDB_IP=172.17.0.100 # For IPv4
SOURCE_OVSDB_IP=[fd00:bbbb::100] # For IPv6
----
+
ifeval::["{build_variant}" == "ospdo"]
Update the IP address value for `SOURCE_OVSDB_IP` to the `internalapi` IP address that is associated with the remaining RHOSP 17 Controller virtual machine. To retrieve the IP address, run the following command and select the IP address that does not have a `/32` prefix:
+
----
$ $CONTROLLER_SSH ip a s enp4s0
----
+
[NOTE]
If you use a disconnected environment director Operator deployment, use `OVSDB_IMAGE=registry.redhat.io/rhoso/openstack-ovn-base-rhel9@sha256:967046c6bdb8f55c236085b5c5f9667f0dbb9f3ac52a6560dd36a6bfac051e1f`. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
endif::[]
ifeval::["{build_variant}" != "ospdo"]
To get the value to set `SOURCE_OVSDB_IP`, query the puppet-generated configurations on a Controller node:
+
----
$ grep -rI 'ovn_[ns]b_conn' /var/lib/config-data/puppet-generated/
----
endif::[]
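You can optionally verify that the source OVN databases answer on the expected ports before you continue. This check assumes that you run it from a host that has the `ovsdb-client` tool installed and that has network access to the `SOURCE_OVSDB_IP` address, for example, a Controller node. If you enabled TLS everywhere, add the `--ca-cert`, `--private-key`, and `--certificate` options that are used in the backup step of this procedure, and replace the `tcp:` prefix with `ssl:`:

----
# Assumes ovsdb-client is installed locally; the output lists the database names
$ ovsdb-client list-dbs tcp:$SOURCE_OVSDB_IP:6641  # expect OVN_Northbound
$ ovsdb-client list-dbs tcp:$SOURCE_OVSDB_IP:6642  # expect OVN_Southbound
----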
.Procedure

ifeval::["{build_variant}" == "ospdo"]
. Get the {OpenShiftShort} master node that contains the {OpenStackShort} Controller node:
+
----
$ oc get vmi -n <namespace> -o jsonpath='{.items[0].metadata.labels.kubevirt\.io/nodeName}'
----
+
* Replace `<namespace>` with your OSPdO namespace.
endif::[]
. Prepare a temporary `PersistentVolume` claim and the helper pod for the OVN backup.
Adjust the storage requests for a large database, if needed:
+
[source,yaml]
----
$ oc apply -f - <<EOF
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ovn-data-cert
  namespace: openstack
spec:
  commonName: ovn-data-cert
  secretName: ovn-data-cert
  issuerRef:
    name: rootca-internal
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ovn-data
spec:
  storageClassName: $STORAGE_CLASS_NAME
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: ovn-copy-data
  annotations:
    openshift.io/scc: anyuid
ifeval::["{build_variant}" != "ospdo"]
    k8s.v1.cni.cncf.io/networks: internalapi
endif::[]
ifeval::["{build_variant}" == "ospdo"]
    k8s.v1.cni.cncf.io/networks: '[{"name": "internalapi", "namespace": "<namespace>", "ips": ["<ip_address>"]}]'
endif::[]
  labels:
    app: adoption
ifeval::["{build_variant}" == "ospdo"]
  namespace: $OSPDO_NAMESPACE
endif::[]
spec:
ifeval::["{build_variant}" == "ospdo"]
  nodeName: <nodeName> <1>
endif::[]
  containers:
  - image: $OVSDB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: ovn-data
    - mountPath: /etc/pki/tls/misc
      name: ovn-data-cert
      readOnly: true
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop: ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: ovn-data
    persistentVolumeClaim:
      claimName: ovn-data
  - name: ovn-data-cert
    secret:
      secretName: ovn-data-cert
EOF
----
+
ifeval::["{build_variant}" == "ospdo"]
<1> Replace `<nodeName>` with the name of the {OpenShiftShort} node that contains the {OpenStackShort} Controller node.
endif::[]
. Wait for the pod to be ready:
+
----
ifeval::["{build_variant}" != "ospdo"]
$ oc wait --for=condition=Ready pod/ovn-copy-data --timeout=30s
endif::[]
ifeval::["{build_variant}" == "ospdo"]
$ oc wait --for=condition=Ready -n $OSPDO_NAMESPACE pod/ovn-copy-data --timeout=30s
endif::[]
----
. If the CIDR of the control plane `internalapi` network is different from the CIDR of the source `internalapi` network, add an iptables accept rule on the Controller nodes:
+
----
$ $CONTROLLER1_SSH sudo iptables -I INPUT -s {PODIFIED_INTERNALAPI_NETWORK} -p tcp -m tcp --dport 6641 -m conntrack --ctstate NEW -j ACCEPT
$ $CONTROLLER1_SSH sudo iptables -I INPUT -s {PODIFIED_INTERNALAPI_NETWORK} -p tcp -m tcp --dport 6642 -m conntrack --ctstate NEW -j ACCEPT
----
. Back up your OVN databases:
* If you did not enable TLS everywhere, run the following commands:
+
----
ifeval::["{build_variant}" != "ospdo"]
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
endif::[]
ifeval::["{build_variant}" == "ospdo"]
$ oc exec -n $OSPDO_NAMESPACE ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
$ oc exec -n $OSPDO_NAMESPACE ovn-copy-data -- bash -c "ovsdb-client backup tcp:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
endif::[]
----
* If you enabled TLS everywhere, run the following commands:
+
----
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6641 > /backup/ovs-nb.db"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client backup --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$SOURCE_OVSDB_IP:6642 > /backup/ovs-sb.db"
----
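. Optional: Verify that the backup files are readable, standalone OVN databases. This sanity check assumes that the `ovsdb-tool` utility is available in the helper pod image, as it ships alongside `ovsdb-client`. In a director Operator deployment, add `-n $OSPDO_NAMESPACE` to the `oc exec` commands:
+
----
# Prints the database name and schema version of each backup file
$ oc exec ovn-copy-data -- bash -c "ovsdb-tool db-name /backup/ovs-nb.db && ovsdb-tool db-version /backup/ovs-nb.db"
$ oc exec ovn-copy-data -- bash -c "ovsdb-tool db-name /backup/ovs-sb.db && ovsdb-tool db-version /backup/ovs-sb.db"
----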
. Start the control plane OVN database services prior to import, with `northd` disabled:
+
[source,yaml]
----
$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnDBCluster:
        ovndbcluster-nb:
          replicas: 3
          dbType: NB
          storageRequest: 10G
          networkAttachment: internalapi
        ovndbcluster-sb:
          replicas: 3
          dbType: SB
          storageRequest: 10G
          networkAttachment: internalapi
      ovnNorthd:
        replicas: 0
'
----
. Wait for the OVN database services to reach the `Running` phase:
+
----
$ oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-nb
$ oc wait --for=jsonpath='{.status.phase}'=Running pod --selector=service=ovsdbserver-sb
----
. Fetch the OVN database IP addresses on the `clusterIP` service network:
+
----
PODIFIED_OVSDB_NB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-nb-0" -ojsonpath='{.items[0].spec.clusterIP}')
PODIFIED_OVSDB_SB_IP=$(oc get svc --selector "statefulset.kubernetes.io/pod-name=ovsdbserver-sb-0" -ojsonpath='{.items[0].spec.clusterIP}')
----
. If you use IPv6, adjust the addresses to the format that the `ovsdb-*` tools expect:
+
----
PODIFIED_OVSDB_NB_IP=[$PODIFIED_OVSDB_NB_IP]
PODIFIED_OVSDB_SB_IP=[$PODIFIED_OVSDB_SB_IP]
----
. Upgrade the database schema for the backup files:
.. If you did not enable TLS everywhere, use the following commands:
+
----
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema tcp:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
----
.. If you enabled TLS everywhere, use the following commands:
+
----
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 > /backup/ovs-nb.ovsschema && ovsdb-tool convert /backup/ovs-nb.db /backup/ovs-nb.ovsschema"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client get-schema --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 > /backup/ovs-sb.ovsschema && ovsdb-tool convert /backup/ovs-sb.db /backup/ovs-sb.ovsschema"
----
. Restore the database backup to the new OVN database servers:
.. If you did not enable TLS everywhere, use the following commands:
+
----
ifeval::["{build_variant}" != "ospdo"]
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore tcp:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
endif::[]
----
.. If you enabled TLS everywhere, use the following commands:
+
----
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_NB_IP:6641 < /backup/ovs-nb.db"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client restore --ca-cert=/etc/pki/tls/misc/ca.crt --private-key=/etc/pki/tls/misc/tls.key --certificate=/etc/pki/tls/misc/tls.crt ssl:$PODIFIED_OVSDB_SB_IP:6642 < /backup/ovs-sb.db"
----
. Check that the data was successfully migrated by running the following commands against the new database servers, for example:
+
----
$ oc exec -it ovsdbserver-nb-0 -- ovn-nbctl show
$ oc exec -it ovsdbserver-sb-0 -- ovn-sbctl list Chassis
----
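. Optional: Compare object counts between the source and destination northbound databases. This illustrative check assumes that the `ovn-copy-data` helper pod is still running and that you did not enable TLS everywhere; otherwise, add the `--ca-cert`, `--private-key`, and `--certificate` options and replace the `tcp:` prefix with `ssl:`. The two commands should print the same number:
+
----
# Counts Logical_Switch rows on each side; the header lines are identical,
# so equal output means equal row counts
$ oc exec ovn-copy-data -- bash -c "ovsdb-client dump tcp:$SOURCE_OVSDB_IP:6641 OVN_Northbound Logical_Switch name | wc -l"
$ oc exec ovn-copy-data -- bash -c "ovsdb-client dump tcp:$PODIFIED_OVSDB_NB_IP:6641 OVN_Northbound Logical_Switch name | wc -l"
----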
. Start the control plane `ovn-northd` service to keep both OVN databases in sync:
+
[source,yaml]
----
$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnNorthd:
        replicas: 1
'
----
. If you are running OVN gateway services on {OpenShiftShort} nodes, enable the control plane `ovn-controller` service:
+
[source,yaml]
----
$ oc patch openstackcontrolplane openstack --type=merge --patch '
spec:
  ovn:
    enabled: true
    template:
      ovnController:
        nicMappings:
          physNet: NIC <1>
'
----
+
<1> `physNet` is the name of your physical network. `NIC` is the name of the physical interface that is connected to your physical network.
+
[NOTE]
Running OVN gateways on {OpenShiftShort} nodes might be prone to data plane downtime during Open vSwitch upgrades. Consider running OVN gateways on dedicated `Networker` data plane nodes for production deployments instead.
. Delete the `ovn-copy-data` helper pod and the temporary `ovn-data` `PersistentVolumeClaim` that stores the OVN database backup files:
+
----
$ oc delete --ignore-not-found=true pod ovn-copy-data
$ oc delete --ignore-not-found=true pvc ovn-data
----
+
[NOTE]
Consider taking a volume snapshot of the temporary `ovn-data` `PersistentVolumeClaim` before you delete it. For more information, see link:{defaultOCPURL}/storage/index#lvms-about-volume-snapsot_logical-volume-manager-storage[About volume snapshots] in _OpenShift Container Platform storage overview_.
. Stop the adopted OVN database servers:
+
----
ServicesToStop=("tripleo_ovn_cluster_north_db_server.service"
                "tripleo_ovn_cluster_south_db_server.service")

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
ifeval::["{build_variant}" != "ospdo"]
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
endif::[]
ifeval::["{build_variant}" == "ospdo"]
        SSH_CMD=CONTROLLER_SSH
endif::[]
        if [ ! -z "${!SSH_CMD}" ]; then
            echo "Stopping the $service on controller $i"
            if ${!SSH_CMD} sudo systemctl is-active $service; then
                ${!SSH_CMD} sudo systemctl stop $service
            fi
        fi
ifeval::["{build_variant}" != "ospdo"]
    done
endif::[]
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
ifeval::["{build_variant}" != "ospdo"]
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
endif::[]
ifeval::["{build_variant}" == "ospdo"]
        SSH_CMD=CONTROLLER_SSH
endif::[]
        if [ ! -z "${!SSH_CMD}" ]; then
            if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                echo "ERROR: Service $service is still running on controller $i"
            else
                echo "OK: Service $service is not running on controller $i"
            fi
        fi
ifeval::["{build_variant}" != "ospdo"]
    done
endif::[]
done
----
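. Optional: Confirm that the adopted database servers no longer listen on the OVN database ports. This check assumes that the `CONTROLLER1_SSH` variable from the earlier steps is still defined; in a director Operator deployment, use `CONTROLLER_SSH` instead:
+
----
# Prints the confirmation message only if nothing listens on ports 6641 and 6642
$ $CONTROLLER1_SSH sudo ss -tln | grep -E ':664[12]' || echo "OVN database ports are closed"
----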