= Development environment

The Adoption development environment utilizes
https://github.com/openstack-k8s-operators/install_yamls[install_yamls]
for CRC VM creation and for creation of the VM that hosts the source
OSP 17.1 OpenStack in Standalone configuration.

== Preparing the host

Install https://pre-commit.com/[pre-commit hooks] before contributing:

[,bash]
----
pip install pre-commit
pre-commit install
----

Get the dataplane adoption repo:

[,bash]
----
git clone https://github.com/openstack-k8s-operators/data-plane-adoption.git ~/data-plane-adoption
----

Get install_yamls:

[,bash]
----
git clone https://github.com/openstack-k8s-operators/install_yamls.git ~/install_yamls
----

Install tools for operator development:

[,bash]
----
cd ~/install_yamls/devsetup
make download_tools
----

'''

== Deploying CRC

[WARNING]
If you want to deploy using IPv6, our current way of deploying a lightweight
OCP environment is Single Node OpenShift (SNO) instead of CRC. See the section
at the bottom of the page for the IPv6 development environment deployment.

=== CRC environment for virtual workloads

[,bash]
----
cd ~/install_yamls/devsetup
PULL_SECRET=$HOME/pull-secret.txt CPUS=12 MEMORY=40000 DISK=100 make crc

eval $(crc oc-env)
oc login -u kubeadmin -p 12345678 https://api.crc.testing:6443

make crc_attach_default_interface
----

=== CRC environment with OpenStack Ironic

[IMPORTANT]
This section is specific to deploying Nova with the Ironic backend. Skip it
if you want to deploy Nova normally.

Create the BMaaS network (`crc-bmaas`) and virtual baremetal nodes controlled
by a Redfish BMC emulator.

[,bash]
----
cd ~/install_yamls
make nmstate
make namespace
cd devsetup # back to install_yamls/devsetup
make bmaas BMAAS_NODE_COUNT=2
----

A node definition YAML file to use with the `openstack baremetal create`
command can be generated for the virtual baremetal nodes by running the
`bmaas_generate_nodes_yaml` make target. Store it in a temp file for later.

[,bash]
----
make bmaas_generate_nodes_yaml | tail -n +2 | tee /tmp/ironic_nodes.yaml
----

Set variables to deploy the edpm Standalone with an additional network
(`baremetal`) and the `ironic` compute driver.

[,bash]
----
cat << EOF > /tmp/additional_nets.json
[
  {
    "type": "network",
    "name": "crc-bmaas",
    "standalone_config": {
      "type": "ovs_bridge",
      "name": "baremetal",
      "mtu": 1500,
      "vip": true,
      "ip_subnet": "172.20.1.0/24",
      "allocation_pools": [
        {
          "start": "172.20.1.100",
          "end": "172.20.1.150"
        }
      ],
      "host_routes": [
        {
          "destination": "192.168.130.0/24",
          "nexthop": "172.20.1.1"
        }
      ]
    }
  }
]
EOF
export EDPM_COMPUTE_ADDITIONAL_NETWORKS=$(jq -c . /tmp/additional_nets.json)
export STANDALONE_COMPUTE_DRIVER=ironic
export EDPM_COMPUTE_CEPH_ENABLED=false # Optional
export EDPM_COMPUTE_CEPH_NOVA=false # Optional
export EDPM_COMPUTE_SRIOV_ENABLED=false # Without this the standalone deploy fails when the compute driver is ironic.
export NTP_SERVER= # The default pool.ntp.org does not work in the Red Hat network; set this variable as appropriate for your environment.
----

[NOTE]
====
If `EDPM_COMPUTE_CEPH_ENABLED=false` is set, TripleO configures `Glance` with `Swift` as a backend.
If `EDPM_COMPUTE_CEPH_NOVA=false` is set, TripleO configures `Nova/Libvirt` with a local storage backend.
====

'''

== Deploying TripleO Standalone

Use the
https://github.com/openstack-k8s-operators/install_yamls/tree/main/devsetup[install_yamls devsetup]
to create a virtual machine (edpm-compute-0) connected to the isolated networks.
[IMPORTANT]
To use OSP 17.1 content to deploy TripleO Standalone, follow the
https://url.corp.redhat.com/devel-rhoso-adoption[guide for setting up downstream content]
for `make standalone`.

[,bash]
----
cd ~/install_yamls/devsetup
EDPM_CONFIGURE_HUGEPAGES=false make standalone <1>
----
<1> To configure the host for a simplistic hugepages setup, set `EDPM_CONFIGURE_HUGEPAGES=true`.

=== Snapshot/revert

When the deployment of the Standalone OpenStack is finished, it is a good time
to snapshot the machine, so that multiple Adoption attempts can be done without
having to deploy from scratch.

[,bash]
----
cd ~/install_yamls/devsetup
make standalone_snapshot
----

And when you wish to revert the Standalone deployment to the snapshotted state:

[,bash]
----
cd ~/install_yamls/devsetup
make standalone_revert
----

A similar snapshot could be taken of the CRC virtual machine, but resetting the
development environment on the CRC side can be done sufficiently via the
install_yamls `*_cleanup` targets. This is further detailed in the section:
https://openstack-k8s-operators.github.io/data-plane-adoption/dev/#_reset_the_environment_to_pre_adoption_state[Reset the environment to pre-adoption state]
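For example, a CRC-side reset between Adoption attempts could look like the
following sketch; `openstack_cleanup` and `crc_storage_cleanup` are assumed
here to be install_yamls targets, and the exact list of `*_cleanup` targets
depends on what you deployed:

[,bash]
----
cd ~/install_yamls
make openstack_cleanup     # remove the OpenStack operators installed via 'make openstack'
make crc_storage_cleanup   # wipe the local storage PVs used by the control plane
----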
'''

== Deploying TripleO With Multiple Cells

A TripleO Standalone setup creates only a single Nova v2 cell, with combined
controller and compute services on it. In order to deploy multiple compute
cells for adoption testing (without Ceph), create 5 VMs that meet the
following requirements:

* Named `edpm-compute-0` .. `edpm-compute-4`.
* Running RHEL 9.2, with RHOSP 17.1 repositories configured.
* The root user can log in via SSH without a password from the hypervisor host.
* A `zuul` user is created; it can sudo without a password and log in via SSH without a password from the hypervisor host.
* The `zuul` user can log in to the `edpm-compute-1`, `edpm-compute-2`, `edpm-compute-3`, and `edpm-compute-4` nodes via SSH without a password from the `edpm-compute-0` node, by using the generated `/home/zuul/.ssh/id_rsa` private key.
* Red Hat registry credentials are exported on the hypervisor host.

Adjust the following commands for a repositories configuration tool of your choice:

[,bash]
----
export RH_REGISTRY_USER=""
export RH_REGISTRY_PWD=""
DEFAULT_CELL_NAME="cell3" <1>
RENAMED_CELLS="cell1 cell2 $DEFAULT_CELL_NAME"

cd ~/install_yamls/devsetup
cat <<EOF > /tmp/osp17_repos
# Use a tool of your choice:
# 1. Rhos-release example steps are only available from the internal Red Hat network
# ... skipping download and install steps ...
# sudo rhos-release -x
# sudo rhos-release 17.1
# 2. Subscription-manager example steps require an active registration
# subscription-manager release --set=9.2
# subscription-manager repos --disable=*
# sudo subscription-manager repos \
#   --enable=rhel-9-for-x86_64-baseos-eus-rpms \
#   --enable=rhel-9-for-x86_64-appstream-eus-rpms \
#   --enable=rhel-9-for-x86_64-highavailability-eus-rpms \
#   --enable=openstack-17.1-for-rhel-9-x86_64-rpms \
#   --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms \
#   --enable=fast-datapath-for-rhel-9-x86_64-rpms

# firstboot commands
sudo dnf install -y git curl wget podman python3-tripleoclient openvswitch3.1 NetworkManager-initscripts-updown
sudo dnf install -y util-linux cephadm driverctl lvm2 jq nftables iptables-nft openstack-heat-agents \
  os-net-config python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc
sudo dnf install -y puppet-tripleo puppet-headless
sudo dnf install -y openstack-selinux
EOF

export CENTOS_9_STREAM_URL=
export NTP_SERVER=
export MANILA_ENABLED=false
export EDPM_COMPUTE_CEPH_ENABLED=false
export EDPM_COMPUTE_CEPH_NOVA=false
export EDPM_COMPUTE_CELLS=3
export STANDALONE_EXTRA_CMD="bash -c 'echo \"$RH_REGISTRY_PWD\" > ~/authfile; chmod 0600 ~/authfile; sudo dnf install -y podman; sudo /bin/podman login registry.redhat.io -u \"$RH_REGISTRY_USER\" --password-stdin < ~/authfile'"
export EDPM_FIRSTBOOT_EXTRA=/tmp/osp17_repos
export EDPM_TOTAL_NODES=1
export SKIP_TRIPLEO_REPOS=false
export EDPM_COMPUTE_NETWORK_IP=192.168.122.1
export HOST_PRIMARY_RESOLV_CONF_ENTRY=192.168.122.1
export BASE_DISK_FILENAME="rhel-9-base.qcow2"

EDPM_COMPUTE_SUFFIX=0 IP=192.168.122.100 EDPM_COMPUTE_DISK_SIZE=10 EDPM_COMPUTE_RAM=9 EDPM_COMPUTE_VCPUS=2 make edpm_compute
EDPM_COMPUTE_SUFFIX=1 IP=192.168.122.103 EDPM_COMPUTE_DISK_SIZE=17 EDPM_COMPUTE_RAM=12 EDPM_COMPUTE_VCPUS=4 make edpm_compute
EDPM_COMPUTE_SUFFIX=2 IP=192.168.122.106 EDPM_COMPUTE_DISK_SIZE=14 EDPM_COMPUTE_RAM=12 EDPM_COMPUTE_VCPUS=4 make edpm_compute
EDPM_COMPUTE_SUFFIX=3 IP=192.168.122.107 EDPM_COMPUTE_DISK_SIZE=12 EDPM_COMPUTE_RAM=4 EDPM_COMPUTE_VCPUS=2 make edpm_compute
EDPM_COMPUTE_SUFFIX=4 IP=192.168.122.109 EDPM_COMPUTE_DISK_SIZE=16 EDPM_COMPUTE_RAM=12 EDPM_COMPUTE_VCPUS=4 make edpm_compute

for n in 0 3 6 7 9; do
  # Workaround for a bad packages installation, if done by firstboot - it results in rpm -V check failures in tripleo-ansible
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} dnf install -y openstack-selinux ';' \
    dnf reinstall -y openstack-selinux
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} useradd --create-home --shell /bin/bash --groups root zuul ';' \
    mkdir -p /home/zuul/.ssh
  scp -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.10${n}:/home/zuul/.ssh/id_rsa
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} ssh-keygen -yf /home/zuul/.ssh/id_rsa '>' /home/zuul/.ssh/id_rsa.pub
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} cp /root/.ssh/authorized_keys /home/zuul/.ssh/authorized_keys
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} chown zuul: /home/zuul/.ssh/*
  ssh -o StrictHostKeyChecking=false -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa \
    root@192.168.122.10${n} echo "zuul ALL=NOPASSWD:ALL" '>' /etc/sudoers.d/zuul
done

make tripleo_deploy

for n in 0 1 2 3 4; do make standalone_snapshot EDPM_COMPUTE_SUFFIX=$n; done
----
<1> The source cloud 'default' cell takes the new `$DEFAULT_CELL_NAME`. In a multi-cell adoption scenario, it may either retain its original name (`DEFAULT_CELL_NAME=default`) or be renamed to a free-for-use cell name. Never choose the names of other existing cells (except 'default') for `DEFAULT_CELL_NAME`.
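Optionally, verify the cell layout of the source cloud before taking the
snapshots. A minimal sketch, assuming the standard TripleO `nova_api`
container name on the controller node (edpm-compute-0):

[,bash]
----
# List the Nova cells registered on the source cloud; expect cell0 and the
# default cell plus the additional compute cells (cell1, cell2, ...).
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 \
  sudo podman exec nova_api nova-manage cell_v2 list_cells
----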
== Network routing

Route VLAN20 to have access to the MariaDB cluster:

[,bash]
----
EDPM_BRIDGE=$(sudo virsh dumpxml edpm-compute-0 | grep -oP "(?<=bridge=').*(?=')")
sudo ip link add link $EDPM_BRIDGE name vlan20 type vlan id 20
sudo ip addr add dev vlan20 172.17.0.222/24
sudo ip link set up dev vlan20
----

To adopt the Swift service as well, route VLAN23 to have access to the storage
backend services:

[,bash]
----
EDPM_BRIDGE=$(sudo virsh dumpxml edpm-compute-0 | grep -oP "(?<=bridge=').*(?=')")
sudo ip link add link $EDPM_BRIDGE name vlan23 type vlan id 23
sudo ip addr add dev vlan23 172.20.0.222/24
sudo ip link set up dev vlan23
----

'''

== Creating a workload to adopt

To run `openstack` commands from the host without installing the package and
copying the configuration file from the virtual machine, create an alias:

[,bash]
----
OS_CLOUD_NAME=standalone
alias openstack="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 OS_CLOUD=$OS_CLOUD_NAME openstack"
----

For a multi-cell environment, set `OS_CLOUD_NAME` to `overcloud`.

=== Virtual machine steps

Create a test VM instance with a test volume attachment:

[,bash]
----
cd ~/data-plane-adoption
export CINDER_VOLUME_BACKEND_CONFIGURED=true <1>
export CINDER_BACKUP_BACKEND_CONFIGURED=true
export EDPM_CONFIGURE_HUGEPAGES=false <2>
export OPENSTACK_COMMAND="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 OS_CLOUD=standalone openstack"
OS_CLOUD_IP=192.168.122.100 OS_CLOUD_NAME=standalone \
bash tests/roles/development_environment/files/pre_launch.bash
----
<1> Use `CINDER_*_BACKEND_CONFIGURED=false` if the Cinder Volume or Backup services' storage backends have not been configured for the source cloud, or will not be configured for the target cloud. That might be a valid case for some development setups, but not for production scenarios.
<2> To configure the host for a simplistic hugepages setup, set `EDPM_CONFIGURE_HUGEPAGES=true`. To apply the kernel args, you will need to reboot the standalone host after the deployment completes.

This also creates a test Cinder volume, a backup from it, and a snapshot of it.

Create a Barbican secret:

[,bash]
----
openstack secret store --name testSecret --payload 'TestPayload'
----

If using the Ceph backend, confirm the image UUID can be seen in Ceph's images pool:

[,bash]
----
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 sudo cephadm shell -- rbd -p images ls -l
----
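Optionally, before starting the adoption you can list the test workload that
was just created. A quick sketch that relies on the `openstack` alias defined
above (the `secret list` call assumes the Barbican client plugin is available
on the standalone host):

[,bash]
----
openstack server list
openstack volume list
openstack volume backup list
openstack volume snapshot list
openstack secret list
----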
=== Ironic steps

[IMPORTANT]
This section is specific to deploying Nova with the Ironic backend. Skip it
if you deployed Nova normally.

[,bash]
----
# Enroll baremetal nodes
make bmaas_generate_nodes_yaml | tail -n +2 | tee /tmp/ironic_nodes.yaml
scp -i $HOME/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa /tmp/ironic_nodes.yaml root@192.168.122.100:
ssh -i $HOME/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100

export OS_CLOUD=standalone
openstack baremetal create /root/ironic_nodes.yaml

export IRONIC_PYTHON_AGENT_RAMDISK_ID=$(openstack image show deploy-ramdisk -c id -f value)
export IRONIC_PYTHON_AGENT_KERNEL_ID=$(openstack image show deploy-kernel -c id -f value)
for node in $(openstack baremetal node list -c UUID -f value); do
  openstack baremetal node set $node \
    --driver-info deploy_ramdisk=${IRONIC_PYTHON_AGENT_RAMDISK_ID} \
    --driver-info deploy_kernel=${IRONIC_PYTHON_AGENT_KERNEL_ID} \
    --resource-class baremetal \
    --property capabilities='boot_mode:uefi'
done

# Create a baremetal flavor
openstack flavor create baremetal --ram 1024 --vcpus 1 --disk 15 \
  --property resources:VCPU=0 \
  --property resources:MEMORY_MB=0 \
  --property resources:DISK_GB=0 \
  --property resources:CUSTOM_BAREMETAL=1 \
  --property capabilities:boot_mode="uefi"

# Create image
IMG=Fedora-Cloud-Base-38-1.6.x86_64.qcow2
URL=https://download.fedoraproject.org/pub/fedora/linux/releases/38/Cloud/x86_64/images/$IMG
curl -o /tmp/${IMG} -L $URL
DISK_FORMAT=$(qemu-img info /tmp/${IMG} | grep "file format:" | awk '{print $NF}')
openstack image create --container-format bare --disk-format ${DISK_FORMAT} Fedora-Cloud-Base-38 < /tmp/${IMG}

export BAREMETAL_NODES=$(openstack baremetal node list -c UUID -f value)

# Manage nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal node manage $node
done
# Wait for nodes to reach "manageable" state
watch openstack baremetal node list

# Inspect baremetal nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal introspection start $node
done
# Wait for inspection to complete
watch openstack baremetal introspection list

# Provide nodes
for node in $BAREMETAL_NODES; do
  openstack baremetal node provide $node
done
# Wait for nodes to reach "available" state
watch openstack baremetal node list

# Create an instance on baremetal
openstack server show test-baremetal || {
  openstack server create test-baremetal --flavor baremetal --image Fedora-Cloud-Base-38 --nic net-id=provisioning --wait
}

# Check instance status and network connectivity
openstack server show test-baremetal
ping -c 4 $(openstack server show test-baremetal -f json -c addresses | jq -r .addresses.provisioning[0])
----

'''

== Installing the OpenStack operators

[,bash]
----
cd .. # back to install_yamls
make crc_storage
make input
make openstack
make openstack_init
----

'''

== Performing the adoption procedure

To simplify the adoption procedure with additional cells, copy and rename the
deployment passwords file that you use in the
https://openstack-k8s-operators.github.io/data-plane-adoption/user/#deploying-backend-services_migrating-databases[backend services deployment phase of the data plane adoption].
For a single-cell standalone TripleO deployment:

[,bash]
----
scp -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100:/root/tripleo-standalone-passwords.yaml ~/overcloud-passwords.yaml
----

Further on, this passwords file is going to be referenced as
`TRIPLEO_PASSWORDS[default]` for a `default` cell name, in terms of TripleO.
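For example, later steps could keep track of these files in a Bash associative
array keyed by the TripleO cell name. This is only a hypothetical illustration
of the naming convention, not a required command:

[,bash]
----
# Map TripleO cell names to their passwords files; 'default' refers to the
# single-cell standalone (overcloud) stack.
declare -A TRIPLEO_PASSWORDS
TRIPLEO_PASSWORDS[default]="$HOME/overcloud-passwords.yaml"
echo "${TRIPLEO_PASSWORDS[default]}"
----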
For a source cloud deployment with multiple stacks, change the `scp` command
above to these:

[,bash]
----
scp -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa zuul@192.168.122.100:overcloud-deploy/overcloud/overcloud-passwords.yaml ~/
scp -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa zuul@192.168.122.100:overcloud-deploy/cell1/cell1-passwords.yaml ~/
scp -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa zuul@192.168.122.100:overcloud-deploy/cell2/cell2-passwords.yaml ~/
----

Note that all compute cells of the source cloud always share the same database
and messaging passwords. By contrast, a generic split-stack topology allows
using different passwords files for its stacks.

The development environment is now set up. You can go to the
https://openstack-k8s-operators.github.io/data-plane-adoption/[Adoption documentation]
and perform adoption manually, or run the
https://openstack-k8s-operators.github.io/data-plane-adoption/dev/#_test_suite_information[test suite]
against your environment.

'''

== Resetting the environment to pre-adoption state

The development environment must be rolled back in case we want to execute
another Adoption run.

Delete the data plane and control plane resources from the CRC VM:

[,bash]
----
for CELL in $(echo $RENAMED_CELLS); do
  oc delete --ignore-not-found=true --wait=false openstackdataplanedeployment/openstack-$CELL
  oc delete --ignore-not-found=true --wait=false openstackdataplanedeployment/openstack-nova-compute-ffu-$CELL
done
oc delete --ignore-not-found=true --wait=false openstackcontrolplane/openstack
oc patch openstackcontrolplane openstack --type=merge --patch '
metadata:
  finalizers: []
' || true

while oc get pod | grep rabbitmq-server-0; do
  sleep 2
done
while oc get pod | grep openstack-galera-0; do
  sleep 2
done

oc delete --wait=false pod ovn-copy-data || true
oc delete --wait=false pod mariadb-copy-data || true
oc delete secret osp-secret || true
----

Revert the standalone VM(s) to the snapshotted state:

[,bash]
----
cd ~/install_yamls/devsetup
make standalone_revert
----

For a multi-cell deployment, change the above command to these:

[,bash]
----
cd ~/install_yamls/devsetup
for n in 0 1 2 3 4; do make standalone_revert EDPM_COMPUTE_SUFFIX=$n; done
----

Clean up and initialize the storage PVs in the CRC VM:

[,bash]
----
cd ..
for i in {1..3}; do make crc_storage_cleanup crc_storage && break || sleep 5; done
for CELL in $(echo $RENAMED_CELLS); do
  oc delete pvc mysql-db-openstack-$CELL-galera-0 --ignore-not-found=true
  oc delete pvc persistence-rabbitmq-$CELL-server-0 --ignore-not-found=true
done
----

Use indexes like `*-0`, `*-1` based on the replica counts configured in the
`oscp/openstack` CR.
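For example, a hypothetical deployment with 3 Galera and RabbitMQ replicas per
cell would need all three indexes removed:

[,bash]
----
for CELL in $(echo $RENAMED_CELLS); do
  for IDX in 0 1 2; do # one index per replica configured in the oscp/openstack CR
    oc delete pvc mysql-db-openstack-$CELL-galera-$IDX --ignore-not-found=true
    oc delete pvc persistence-rabbitmq-$CELL-server-$IDX --ignore-not-found=true
  done
done
----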
'''

== Experimenting with an additional compute node

The following is not on the critical path of preparing the development
environment for Adoption, but it shows how to make the environment work with
an additional compute node VM.

The remaining steps should be completed on the hypervisor hosting CRC and
edpm-compute-0.

=== Deploy NG Control Plane with Ceph

Export the Ceph configuration from edpm-compute-0 into a secret.

[,bash]
----
SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100"
KEY=$($SSH "cat /etc/ceph/ceph.client.openstack.keyring | base64 -w 0")
CONF=$($SSH "cat /etc/ceph/ceph.conf | base64 -w 0")

cat <<EOF > ceph_secret.yaml
apiVersion: v1
data:
  ceph.client.openstack.keyring: $KEY
  ceph.conf: $CONF
kind: Secret
metadata:
  name: ceph-conf-files
type: Opaque
EOF

oc create -f ceph_secret.yaml
----

Deploy the NG control plane with Ceph as the backend for Glance and Cinder. As
described in
https://github.com/openstack-k8s-operators/install_yamls/tree/main[the install_yamls README],
use the sample config located at
https://github.com/openstack-k8s-operators/openstack-operator/blob/main/config/samples/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
but make sure to replace the `_FSID_` in the sample with the one from the
secret created in the previous step.

[,bash]
----
curl -o /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml https://raw.githubusercontent.com/openstack-k8s-operators/openstack-operator/main/config/samples/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
FSID=$(oc get secret ceph-conf-files -o json | jq -r '.data."ceph.conf"' | base64 -d | grep fsid | sed -e 's/fsid = //') && echo $FSID
sed -i "s/_FSID_/${FSID}/" /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
oc apply -f /tmp/core_v1beta1_openstackcontrolplane_network_isolation_ceph.yaml
----

A NG control plane which uses the same Ceph backend should now be functional.
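Before moving on, you can wait for the control plane to settle. A minimal
sketch, assuming the `OpenStackControlPlane` CR is named `openstack` and
reports a `Ready` condition:

[,bash]
----
oc get openstackcontrolplane openstack
oc wait openstackcontrolplane openstack --for=condition=Ready --timeout=30m
----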
If you create a test image on the NG system to confirm it works from the
configuration above, be sure to read the warning in the next section.

Before beginning adoption testing or development you may wish to deploy an
EDPM node as described in the following section.

=== Warning about two OpenStacks and one Ceph

Though workloads can be created in the NG deployment to test, be careful not
to confuse them with the workloads from the Wallaby cluster that is to be
migrated. The following scenario is now possible.

A Glance image exists on the Wallaby OpenStack to be adopted.

[,bash]
----
[stack@standalone standalone]$ export OS_CLOUD=standalone
[stack@standalone standalone]$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 33a43519-a960-4cd0-a593-eca56ee553aa | cirros | active |
+--------------------------------------+--------+--------+
[stack@standalone standalone]$
----

If you now create an image on the NG cluster, then a Glance image will exist
on the NG OpenStack, which will adopt the workloads of the Wallaby.

[,bash]
----
[fultonj@hamfast ng]$ export OS_CLOUD=default
[fultonj@hamfast ng]$ export OS_PASSWORD=12345678
[fultonj@hamfast ng]$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 4ebccb29-193b-4d52-9ffd-034d440e073c | cirros | active |
+--------------------------------------+--------+--------+
[fultonj@hamfast ng]$
----

Both Glance images are stored in the same Ceph pool.

[,bash]
----
ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100 sudo cephadm shell -- rbd -p images ls -l
Inferring fsid 7133115f-7751-5c2f-88bd-fbff2f140791
Using recent ceph image quay.rdoproject.org/tripleowallabycentos9/daemon@sha256:aa259dd2439dfaa60b27c9ebb4fb310cdf1e8e62aa7467df350baf22c5d992d8
NAME                                       SIZE     PARENT  FMT  PROT  LOCK
33a43519-a960-4cd0-a593-eca56ee553aa       273 B            2
33a43519-a960-4cd0-a593-eca56ee553aa@snap  273 B            2    yes
4ebccb29-193b-4d52-9ffd-034d440e073c       112 MiB          2
4ebccb29-193b-4d52-9ffd-034d440e073c@snap  112 MiB          2    yes
----

However, as far as each Glance service is concerned, each has one image. Thus,
in order to avoid confusion during adoption, the test Glance image on the NG
OpenStack should be deleted.

[,bash]
----
openstack image delete 4ebccb29-193b-4d52-9ffd-034d440e073c
----

Connecting the NG OpenStack to the existing Ceph cluster is part of the
adoption procedure so that the data migration can be minimized, but understand
the implications of the above example.

=== Deploy edpm-compute-1

edpm-compute-0 is not available as a standard EDPM system to be managed by
https://openstack-k8s-operators.github.io/edpm-ansible[edpm-ansible]
or
https://openstack-k8s-operators.github.io/openstack-operator/dataplane[openstack-operator]
because it hosts the Wallaby deployment which will be adopted and, after
adoption, it will only host the Ceph server.

Use the
https://github.com/openstack-k8s-operators/install_yamls/tree/main/devsetup[install_yamls devsetup]
to create additional virtual machines and be sure that the
`EDPM_COMPUTE_SUFFIX` is set to `1` or greater. Do not set
`EDPM_COMPUTE_SUFFIX` to `0`, or you could delete the Wallaby system created
in the previous section.

When deploying EDPM nodes, add an `extraMounts` like the following in the
`OpenStackDataPlaneNodeSet` CR `nodeTemplate` so that they will be configured
to use the same Ceph cluster.

[,yaml]
----
edpm-compute:
  nodeTemplate:
    extraMounts:
      - extraVolType: Ceph
        volumes:
          - name: ceph
            secret:
              secretName: ceph-conf-files
        mounts:
          - name: ceph
            mountPath: "/etc/ceph"
            readOnly: true
----

A NG data plane which uses the same Ceph backend should now be functional. Be
careful not to confuse new workloads created to test the NG OpenStack with the
Wallaby OpenStack workloads, as described in the previous section.

=== Begin Adoption Testing or Development

We should now have:

* An NG glance service based on Antelope running on CRC
* A TripleO-deployed glance service running on edpm-compute-0
* Both services have the same Ceph backend
* Each service has its own independent database

An environment like the above is assumed to be available in the
https://openstack-k8s-operators.github.io/data-plane-adoption/user/#adopting-the-image-service_adopt-control-plane[Glance Adoption documentation].
You may now follow other Data Plane Adoption procedures described in the
https://openstack-k8s-operators.github.io/data-plane-adoption[documentation].
The same pattern can be applied to other services.

== Deploying an IPv6 environment

In order to perform an adoption with IPv6, we will need an OpenShift node (SNO
instead of CRC in this case), an IPv6 control plane OpenStack environment, and
some extra settings that we will go through in this section.

=== IPv6 Lab

As a prerequisite, make sure you have `systemd-resolved` configured for DNS
resolution.
[,bash]
----
dnf install -y systemd-resolved
systemctl enable --now systemd-resolved
ln -sf ../run/systemd/resolve/stub-resolv.conf /etc/resolv.conf
----

We should also have the Virtualization Tools installed (`libvirt` and `qemu`),
and the username you are going to use added to the `libvirt` and `qemu` groups:

[,bash]
----
sudo usermod -a -G libvirt,qemu <username>
----

Furthermore, you should have an RSA key generated to use as identification to
access your SNO, and your pull secret stored in your user folder.

If you did not have libvirt installed, there is a chance that you don't have a
default pool defined in libvirt. If that is the case, you can define it with
the following commands:

[,bash]
----
cat > /tmp/default-pool.xml <<EOF
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
    <permissions>
      <mode>0711</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
EOF
sudo virsh pool-define /tmp/default-pool.xml
sudo virsh pool-start default
----

Once all the prerequisites are present, you can go ahead and use the
`install_yamls` repository to install the IPv6 Lab from the `devsetup` folder.
The steps are taken from the
https://github.com/openstack-k8s-operators/install_yamls/tree/main/devsetup[install_yamls devsetup README]:

[,bash]
----
cd install_yamls/devsetup
export NETWORK_ISOLATION_NET_NAME=net-iso
export NETWORK_ISOLATION_IPV4=false
export NETWORK_ISOLATION_IPV6=true
export NETWORK_ISOLATION_INSTANCE_NAME=sno
export NETWORK_ISOLATION_IP_ADDRESS=fd00:aaaa::10
export NNCP_INTERFACE=enp7s0

make download_tools
make ipv6_lab
# Set up the needed networking setup (NAT64 bridge)
make network_isolation_bridge
# Create the network-isolation network
make attach_default_interface # Attach the network-isolation bridge to SNO
----

To be able to access the SNO lab, you need to source the SNO environment.
After that you will be able to use `oc` commands:

[,bash]
----
source /home/<user>/.ipv6lab/sno_env
oc login -u admin -p 12345678 https://api.sno.lab.example.com:6443
----

You can also SSH into the SNO for debugging purposes:

[,bash]
----
ssh -i ~/.ssh/id_rsa core@fd00:aaaa::10
----

[NOTE]
If you find any problems on the nat64 router, you can connect to it via SSH
with `fedora@fd00:abcd:abcd:fc00::2`, or to the SNO installation via
`core@fd00:abcd:abcd:fc00::11`.

=== Deploying TripleO Standalone with IPv6

[WARNING]
There is still no official setup, but this
https://github.com/karelyatin/install_yamls/commit/8151634183fe1302383a98e0e9f0779b68232ad6[fork of install_yamls]
contains a commit that can be used to deploy it successfully.

The steps to deploy would be (assuming you are using
https://github.com/karelyatin/install_yamls/commit/8151634183fe1302383a98e0e9f0779b68232ad6[this commit]):

[,bash]
----
sudo chmod 777 /var/lib/libvirt/images # This might be needed to download the images
cat > /tmp/additional_nets.json <<EOF
...
EOF
----

On the hypervisor:

[,bash]
----
ip a show net-iso
# Output
...: net-iso: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 52:54:00:f9:af:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.3/24 scope global net-iso
       valid_lft forever preferred_lft forever
    inet6 fd00:aaaa::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fef9:afe4/64 scope link
       valid_lft forever preferred_lft forever

sudo ip route del 192.168.122.0/24 dev virbr0

ip route
# Output
...
192.168.122.0/24 dev net-iso proto kernel scope link src 192.168.122.3
----

On the standalone:

[,bash]
----
ip addr add 192.168.122.4/24 dev net-iso
ip a show br-ctlplane
# Output
5: br-ctlplane: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 52:54:00:46:72:c6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.4/24 scope global br-ctlplane
       valid_lft forever preferred_lft forever
    inet6 fd00:aaaa::99/128 scope global
       valid_lft forever preferred_lft forever
    inet6 fd00:aaaa::100/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe46:72c6/64 scope link
       valid_lft forever preferred_lft forever

ip route
# Output
192.168.122.0/24 dev br-ctlplane proto kernel scope link src 192.168.122.4
----

=== Further steps

From here, the steps should be similar to the IPv4 adoption. Note that every
command that requires access to the standalone VM via SSH (i.e. when creating
a workload) should be done using a different address:

[,bash]
----
OS_CLOUD_IP=fd00:aaaa::100 OS_CLOUD_NAME=standalone \
bash tests/roles/development_environment/files/pre_launch.bash
----

And, when installing the operators, use:

[,bash]
----
scp -6 -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@[fd00:aaaa::100]:/root/tripleo-standalone-passwords.yaml ~/
----
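Likewise, the `openstack` alias from the workload section would point at the
standalone's IPv6 address. A sketch, assuming the same SSH key and the
`standalone` cloud name:

[,bash]
----
alias openstack="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@fd00:aaaa::100 OS_CLOUD=standalone openstack"
----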