:_mod-docs-content-type: PROCEDURE

[id="migrating-databases-to-mariadb-instances_{context}"]
= Migrating databases to MariaDB instances

[role="_abstract"]
Migrate your databases from the original {rhos_prev_long} ({OpenStackShort}) deployment to the MariaDB instances in the {rhocp_long} cluster.

//[NOTE]
//TODO(bogdando): For OSPdO, this example scenario describes a simple single-cell setup.
//TODO(kgilliga): I hid the same note in the Compute adoption procedure. Will likely reinstate this after multi-cell is released.
//kgilliga: The following text was below step 5. I'm preserving it here:
//Note filtering the information and performance schema tables.
//Gnocchi is no longer used as a metric store as well.

.Prerequisites

* Ensure that the control plane MariaDB and RabbitMQ are running, and that no other control plane services are running.
* Retrieve the topology-specific service configuration. For more information, see xref:proc_retrieving-topology-specific-service-configuration_migrating-databases[Retrieving topology-specific service configuration].
* Stop the {OpenStackShort} services. For more information, see xref:stopping-openstack-services_{context}[Stopping {rhos_prev_long} services].
* Ensure that there is network routability between the original MariaDB and the MariaDB for the control plane.
* Define the following shell variables. Replace the following example values with values that are correct for your environment:
+
----
ifeval::["{build}" != "downstream"]
$ STORAGE_CLASS=local-storage
$ MARIADB_IMAGE=quay.io/podified-antelope-centos9/openstack-mariadb:current-podified
endif::[]
ifeval::["{build}" == "downstream"]
$ STORAGE_CLASS=local-storage
$ MARIADB_IMAGE=registry.redhat.io/rhoso/openstack-mariadb-rhel9:18.0
endif::[]
ifeval::["{build_variant}" == "ospdo"]
$ OSPDO_MARIADB_CLIENT_ANNOTATIONS='[{"name": "internalapi-static","ips": ["172.17.0.99/24"]}]'
$ MARIADB_RUN_OVERRIDES="$OSPDO_MARIADB_CLIENT_ANNOTATIONS"
endif::[]
$ CELLS="default cell1 cell2"
$ DEFAULT_CELL_NAME="cell3"
$ RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"

$ CHARACTER_SET=utf8
$ COLLATION=utf8_general_ci

$ declare -A PODIFIED_DB_ROOT_PASSWORD
$ for CELL in $(echo "super $RENAMED_CELLS"); do
>   PODIFIED_DB_ROOT_PASSWORD[$CELL]=$(oc get -o json secret/osp-secret | jq -r .data.DbRootPassword | base64 -d)
> done

$ declare -A PODIFIED_MARIADB_IP
$ for CELL in $(echo "super $RENAMED_CELLS"); do
>   if [ "$CELL" = "super" ]; then
>     PODIFIED_MARIADB_IP[$CELL]=$(oc get svc --selector "mariadb/name=openstack" -ojsonpath='{.items[0].spec.clusterIP}')
>   else
>     PODIFIED_MARIADB_IP[$CELL]=$(oc get svc --selector "mariadb/name=openstack-$CELL" -ojsonpath='{.items[0].spec.clusterIP}')
>   fi
> done

$ declare -A TRIPLEO_PASSWORDS
$ for CELL in $(echo $CELLS); do
>   if [ "$CELL" = "default" ]; then
>     TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"
>   else
>     # in a split-stack source cloud, use a stack-specific passwords file instead
>     TRIPLEO_PASSWORDS[$CELL]="$HOME/overcloud-passwords.yaml"
>   fi
> done

$ declare -A SOURCE_DB_ROOT_PASSWORD
$ for CELL in $(echo $CELLS); do
>   SOURCE_DB_ROOT_PASSWORD[$CELL]=$(cat ${TRIPLEO_PASSWORDS[$CELL]} | grep ' MysqlRootPassword:' | awk -F ': ' '{ print $2; }')
> done

$ declare -A SOURCE_MARIADB_IP
$ SOURCE_MARIADB_IP[default]=**
$ SOURCE_MARIADB_IP[cell1]=**
$ SOURCE_MARIADB_IP[cell2]=**
# ...

$ declare -A SOURCE_GALERA_MEMBERS_DEFAULT
$ SOURCE_GALERA_MEMBERS_DEFAULT=(
>   ["standalone.localdomain"]=172.17.0.100
>   # [...]=...
> )
$ declare -A SOURCE_GALERA_MEMBERS_CELL1
$ SOURCE_GALERA_MEMBERS_CELL1=(
>   # ...
> )
$ declare -A SOURCE_GALERA_MEMBERS_CELL2
$ SOURCE_GALERA_MEMBERS_CELL2=(
>   # ...
> )
----
+
** `CELLS` and `RENAMED_CELLS` represent the changes that are made after you import the databases. The `default` cell takes its new name from `DEFAULT_CELL_NAME`. In a multi-cell adoption scenario, the `default` cell can also retain its original `default` name.
** `CHARACTER_SET` and `COLLATION` must match the source database. If they do not match, foreign key relationships break for any tables that are created later as part of the database sync.
** `SOURCE_MARIADB_IP[X]=...` provides a record for each cell that is defined in `CELLS`. Provide the cell names and the VIP addresses of the MariaDB Galera clusters.
** `SOURCE_MARIADB_IP[cell1]` defines the VIP of your Galera cell1 cluster.
** `SOURCE_MARIADB_IP[cell2]` defines the VIP of your Galera cell2 cluster, and so on.
** `SOURCE_GALERA_MEMBERS_*` defines the names of the MariaDB Galera cluster members and their IP addresses for each cell that is defined in `CELLS`. Replace `["standalone.localdomain"]=172.17.0.100` with the real host data.

[NOTE]
A standalone {OpenStackPreviousInstaller} environment only creates a `default` cell, which should be the only `CELLS` value in this case. The `DEFAULT_CELL_NAME` value should be `cell1`.
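For example, for a standalone single-cell source cloud, the cell-related variables might reduce to the following values. This is an illustrative sketch only; adjust it to your environment:

----
$ CELLS="default"
$ DEFAULT_CELL_NAME="cell1"
$ RENAMED_CELLS="cell1"
----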
[NOTE]
The `super` entry is the top-scope Nova API upcall database instance. A super conductor connects to that database. In the following examples, the upcall and cells databases use the same password that is defined in `osp-secret`. The old passwords are only needed to prepare the data exports.

* To get the values for `SOURCE_MARIADB_IP`, query the puppet-generated configurations on the Controller and CellController nodes:
+
----
$ sudo grep -rI 'listen mysql' -A10 /var/lib/config-data/puppet-generated/ | grep bind
----
* To get the values for `SOURCE_GALERA_MEMBERS_*`, query the puppet-generated configurations on the Controller and CellController nodes:
+
----
$ sudo grep -rI 'listen mysql' -A10 /var/lib/config-data/puppet-generated/ | grep server
----
+
The source cloud always uses the same password for the cells databases. For that reason, the same passwords file is used for all cells stacks. However, a split-stack topology allows a different passwords file for each stack.
* Prepare the MariaDB adoption helper pod:
. Create a temporary volume claim and a pod for the database data copy. Edit the volume claim storage request if necessary, to give it enough space for the overcloud databases:
+
----
$ oc apply -f - <<EOF
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-data
spec:
  storageClassName: $STORAGE_CLASS
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mariadb-copy-data
ifeval::["{build_variant}" == "ospdo"]
  annotations:
    k8s.v1.cni.cncf.io/networks: '$OSPDO_MARIADB_CLIENT_ANNOTATIONS'
endif::[]
spec:
  containers:
  - image: $MARIADB_IMAGE
    command: [ "sh", "-c", "sleep infinity"]
    name: adoption
    volumeMounts:
    - mountPath: /backup
      name: mariadb-data
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
  volumes:
  - name: mariadb-data
    persistentVolumeClaim:
      claimName: mariadb-data
EOF
----
. Wait for the pod to be ready:
+
----
$ oc wait --for condition=Ready pod/mariadb-copy-data --timeout=30s
----
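+
Optionally, confirm that the data volume is mounted in the helper pod before you start copying data. This quick sanity check assumes that the `df` utility is available in the MariaDB image:
+
----
$ oc rsh mariadb-copy-data df -h /backup
----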
.Procedure

. Check that the source Galera database clusters in each cell have all of their members online and synced:
+
----
$ for CELL in $(echo $CELLS); do
>   MEMBERS=SOURCE_GALERA_MEMBERS_$(echo ${CELL}|tr '[:lower:]' '[:upper:]')[@]
>   for i in "${!MEMBERS}"; do
>     echo "Checking for the database node $i WSREP status Synced"
>     oc rsh mariadb-copy-data mysql \
>       -h "$i" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
>       -e "show global status like 'wsrep_local_state_comment'" | \
>       grep -qE "\bSynced\b"
>   done
> done
----
+
[NOTE]
Each additional {compute_service_first_ref} v2 cell runs a dedicated Galera database cluster, so the command checks each cell.

. Get the list of the source databases for each cell:
+
----
$ for CELL in $(echo $CELLS); do
>   oc rsh mariadb-copy-data mysql -h "${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" -e "SHOW databases;"
> done
----

. Check that `mysqlcheck` had no errors:
+
----
$ for CELL in $(echo $CELLS); do
>   set +u
>   . ~/.source_cloud_exported_variables_$CELL
>   set -u
> done
$ test -z "$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" || [ "x$PULL_OPENSTACK_CONFIGURATION_MYSQLCHECK_NOK" = "x " ] && echo "OK" || echo "CHECK FAILED"
----

. Test the connection to the control plane upcall and cells databases:
+
----
$ for CELL in $(echo "super $RENAMED_CELLS"); do
>   oc run mariadb-client --image $MARIADB_IMAGE -i --rm --restart=Never -- \
>     mysql -rsh "${PODIFIED_MARIADB_IP[$CELL]}" -uroot -p"${PODIFIED_DB_ROOT_PASSWORD[$CELL]}" -e 'SHOW databases;'
> done
----
+
[NOTE]
You must transition the Compute services that you import later into a superconductor architecture by deleting the old service records in the cell databases, starting with `cell1`. New records are registered with different hostnames that are provided by the {compute_service} operator. All Compute services, except the Compute agent, have no internal state, and you can safely delete their service records. You also need to rename the former `default` cell to `DEFAULT_CELL_NAME`.

. Create a dump of the original databases:
+
----
$ for CELL in $(echo $CELLS); do
>   oc rsh mariadb-copy-data << EOF
>     mysql -h"${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
>     -N -e "show databases" | grep -E -v "schema|mysql|gnocchi|aodh" | \
>     while read dbname; do
>       echo "Dumping $CELL cell \${dbname}";
>       mysqldump -h"${SOURCE_MARIADB_IP[$CELL]}" -uroot -p"${SOURCE_DB_ROOT_PASSWORD[$CELL]}" \
>       --single-transaction --complete-insert --skip-lock-tables --lock-tables=0 \
>       "\${dbname}" > /backup/"${CELL}.\${dbname}".sql;
>     done
> EOF
> done
----
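+
Optionally, list the `/backup` directory to confirm that a `<cell>.<database>.sql` dump file exists for each database that you expect to migrate:
+
----
$ oc rsh mariadb-copy-data ls -l /backup
----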
. Restore the databases from the `.sql` files into the control plane MariaDB instances:
+
----
$ for CELL in $(echo $CELLS); do
>   RCELL=$CELL
>   [ "$CELL" = "default" ] && RCELL=$DEFAULT_CELL_NAME
>   oc rsh mariadb-copy-data << EOF
>   declare -A db_name_map
>   db_name_map['nova']="nova_$RCELL"
>   db_name_map['ovs_neutron']='neutron'
>   db_name_map['ironic-inspector']='ironic_inspector'
>   declare -A db_cell_map
>   db_cell_map['nova']="nova_$DEFAULT_CELL_NAME"
>   db_cell_map["nova_$RCELL"]="nova_$RCELL"
>   declare -A db_server_map
>   db_server_map['default']=${PODIFIED_MARIADB_IP['super']}
>   db_server_map["nova"]=${PODIFIED_MARIADB_IP[$DEFAULT_CELL_NAME]}
>   db_server_map["nova_$RCELL"]=${PODIFIED_MARIADB_IP[$RCELL]}
>   declare -A db_server_password_map
>   db_server_password_map['default']=${PODIFIED_DB_ROOT_PASSWORD['super']}
>   db_server_password_map["nova"]=${PODIFIED_DB_ROOT_PASSWORD[$DEFAULT_CELL_NAME]}
>   db_server_password_map["nova_$RCELL"]=${PODIFIED_DB_ROOT_PASSWORD[$RCELL]}
>   cd /backup
>   for db_file in \$(ls ${CELL}.*.sql); do
>     db_name=\$(echo \${db_file} | awk -F'.' '{ print \$2; }')
>     [[ "$CELL" != "default" && ! -v "db_cell_map[\${db_name}]" ]] && continue
>     if [[ "$CELL" == "default" && -v "db_cell_map[\${db_name}]" ]] ; then
>       target=$DEFAULT_CELL_NAME
>     elif [[ "$CELL" == "default" && ! -v "db_cell_map[\${db_name}]" ]] ; then
>       target=super
>     else
>       target=$RCELL
>     fi
>     renamed_db_file="\${target}_new.\${db_name}.sql"
>     mv -f \${db_file} \${renamed_db_file}
>     if [[ -v "db_name_map[\${db_name}]" ]]; then
>       echo "renaming $CELL cell \${db_name} to \$target \${db_name_map[\${db_name}]}"
>       db_name=\${db_name_map[\${db_name}]}
>     fi
>     db_server=\${db_server_map["default"]}
>     if [[ -v "db_server_map[\${db_name}]" ]]; then
>       db_server=\${db_server_map[\${db_name}]}
>     fi
>     db_password=\${db_server_password_map['default']}
>     if [[ -v "db_server_password_map[\${db_name}]" ]]; then
>       db_password=\${db_server_password_map[\${db_name}]}
>     fi
>     echo "creating $CELL cell \${db_name} in \$target \${db_server}"
>     mysql -h"\${db_server}" -uroot "-p\${db_password}" -e \
>       "CREATE DATABASE IF NOT EXISTS \${db_name} DEFAULT \
>       CHARACTER SET ${CHARACTER_SET} DEFAULT COLLATE ${COLLATION};"
>     echo "importing $CELL cell \${db_name} into \$target \${db_server} from \${renamed_db_file}"
>     mysql -h "\${db_server}" -uroot "-p\${db_password}" "\${db_name}" < "\${renamed_db_file}"
>   done
>   if [ "$CELL" = "default" ] ; then
>     mysql -h "\${db_server_map['default']}" -uroot -p"\${db_server_password_map['default']}" -e \
>       "update nova_api.cell_mappings set name='$DEFAULT_CELL_NAME' where name='default';"
>   fi
>   mysql -h "\${db_server_map["nova_$RCELL"]}" -uroot -p"\${db_server_password_map["nova_$RCELL"]}" -e \
>     "delete from nova_${RCELL}.services where host not like '%nova_${RCELL}-%' and services.binary != 'nova-compute';"
> EOF
> done
----
+
* `db_name_map` defines which common databases to rename when you import them.
* `db_cell_map` defines which cells databases to import, and how to rename them, if needed.
* `db_cell_map["nova_$RCELL"]="nova_$RCELL"` omits importing the special `cell0` databases of the cells, because their contents cannot be consolidated during adoption.
* `db_server_map` defines which databases to import into which database servers, which are usually dedicated to cells.
* `db_server_password_map` defines the root password map for the database servers. Currently, you can only use the same password for all of them.
* `renamed_db_file="\${target}_new.\${db_name}.sql"` encodes the target cell in the name of each extracted dump file, which determines the host that each database is imported into when the databases extracted from the `default` cell are split up.
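+
As a rough sketch of what this mapping produces with the example values from the prerequisites (`CELLS="default cell1 cell2"`, `DEFAULT_CELL_NAME="cell3"`), the dump files are imported as follows. The actual server addresses come from `PODIFIED_MARIADB_IP`:
+
----
default.keystone.sql     ->  keystone     on the 'super' (openstack) database server
default.ovs_neutron.sql  ->  neutron      on the 'super' (openstack) database server
default.nova.sql         ->  nova_cell3   on the openstack-cell3 database server
cell1.nova.sql           ->  nova_cell1   on the openstack-cell1 database server
cell2.nova.sql           ->  nova_cell2   on the openstack-cell2 database server
----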
.Verification

Compare the following outputs with the topology-specific service configuration. For more information, see xref:proc_retrieving-topology-specific-service-configuration_migrating-databases[Retrieving topology-specific service configuration].

. Check that the databases are imported correctly:
+
----
$ set +u
$ . ~/.source_cloud_exported_variables_default
$ set -u

$ dbs=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" -e 'SHOW databases;')
$ echo $dbs | grep -Eq '\bkeystone\b' && echo "OK" || echo "CHECK FAILED"
$ echo $dbs | grep -Eq '\bneutron\b' && echo "OK" || echo "CHECK FAILED"
$ echo "${PULL_OPENSTACK_CONFIGURATION_DATABASES[@]}" | grep -Eq '\bovs_neutron\b' && echo "OK" || echo "CHECK FAILED"

$ novadb_mapped_cells=$(oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" \
>   nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;')
$ uuidf='\S{8,}-\S{4,}-\S{4,}-\S{4,}-\S{12,}'
$ default=$(printf "%s\n" "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS" | sed -rn "s/^($uuidf)\s+default\b.*$/\1/p")
$ difference=$(diff -ZNua \
>   <(printf "%s\n" "$PULL_OPENSTACK_CONFIGURATION_NOVADB_MAPPED_CELLS") \
>   <(printf "%s\n" "$novadb_mapped_cells")) || true
$ if [ "$DEFAULT_CELL_NAME" != "default" ]; then
>   printf "%s\n" "$difference" | grep -qE "^\-$default\s+default\b" && echo "OK" || echo "CHECK FAILED"
>   printf "%s\n" "$difference" | grep -qE "^\+$default\s+$DEFAULT_CELL_NAME\b" && echo "OK" || echo "CHECK FAILED"
>   [ $(grep -E "^[-\+]$uuidf" <<<"$difference" | wc -l) -eq 2 ] && echo "OK" || echo "CHECK FAILED"
> else
>   [ "x$difference" = "x" ] && echo "OK" || echo "CHECK FAILED"
> fi

$ for CELL in $(echo $RENAMED_CELLS); do
>   RCELL=$CELL
>   [ "$CELL" = "$DEFAULT_CELL_NAME" ] && RCELL=default
>   set +u
>   . ~/.source_cloud_exported_variables_$RCELL
>   set -u
>   c1dbs=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} -e 'SHOW databases;')
>   echo $c1dbs | grep -Eq "\bnova_${CELL}\b" && echo "OK" || echo "CHECK FAILED"
>   novadb_svc_records=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} \
>     nova_$CELL -e "select host from services where services.binary='nova-compute' and deleted=0 order by host asc;")
>   diff -Z <(echo "x$novadb_svc_records") <(echo "x${PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[@]}") && echo "OK" || echo "CHECK FAILED"
> done
----
+
* `echo "${PULL_OPENSTACK_CONFIGURATION_DATABASES[@]}" | grep -Eq '\bovs_neutron\b' && echo "OK" || echo "CHECK FAILED"` ensures that the {networking_first_ref} database is renamed from `ovs_neutron`.
* `nova_api -e 'select uuid,name,transport_url,database_connection,disabled from cell_mappings;'` ensures that the `default` cell is renamed to `$DEFAULT_CELL_NAME`, and that the cell UUIDs are retained.
* `for CELL in $(echo $RENAMED_CELLS); do` ensures that the registered Compute service names have not changed.
* `c1dbs=$(oc exec openstack-$CELL-galera-0 -c galera -- mysql -rs -uroot -p${PODIFIED_DB_ROOT_PASSWORD[$CELL]} -e 'SHOW databases;')` ensures that the {compute_service} cells databases are extracted to separate database servers and renamed from `nova` to `nova_$CELL`.
* `diff -Z <(echo "x$novadb_svc_records") <(echo "x${PULL_OPENSTACK_CONFIGURATION_NOVA_COMPUTE_HOSTNAMES[@]}") && echo "OK" || echo "CHECK FAILED"` ensures that the registered {compute_service} name has not changed.
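+
If any of the cell-mapping checks report `CHECK FAILED`, you can inspect the mapped cell names directly. If you renamed the `default` cell, the output should list `$DEFAULT_CELL_NAME` and the other renamed cells instead of `default`, for example:
+
----
$ oc exec openstack-galera-0 -c galera -- mysql -rs -uroot -p"${PODIFIED_DB_ROOT_PASSWORD['super']}" \
>   nova_api -e 'select uuid,name from cell_mappings;'
----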
. Delete the `mariadb-copy-data` pod and the `mariadb-data` persistent volume claim that contains the database backup:
+
[NOTE]
Consider taking a snapshot of them before deleting.
+
----
$ oc delete pod mariadb-copy-data
$ oc delete pvc mariadb-data
----

[NOTE]
During the pre-checks and post-checks, the `mariadb-client` pod might return a pod security warning related to the `restricted:latest` security context constraint. This warning is due to default security context constraints and does not prevent the admission controller from creating a pod. You see a warning for the short-lived pod, but it does not interfere with functionality. For more information, see link:https://learn.redhat.com/t5/DO280-Red-Hat-OpenShift/About-pod-security-standards-and-warnings/m-p/32502[About pod security standards and warnings].