:_mod-docs-content-type: PROCEDURE

[id="stopping-openstack-services_{context}"]
= Stopping {rhos_prev_long} services

[role="_abstract"]
Before you start the {rhos_long} adoption, you must stop the {rhos_prev_long} ({OpenStackShort}) services to avoid inconsistencies in the data that you migrate for the data plane adoption. Inconsistencies are caused by resource changes after the database is copied to the new deployment.

You should not stop the infrastructure management services yet, such as:

* Database
* RabbitMQ
* HAProxy Load Balancer
* Ceph-nfs
* Compute service
* Containerized modular libvirt daemons
* {object_storage_first_ref} back-end services

.Prerequisites

* Ensure that there are no long-running tasks that require the services that you plan to stop, such as instance live migrations, volume migrations, volume creation, backup and restore, attaching, detaching, and other similar operations:
+
----
$ openstack server list --all-projects -c ID -c Status | grep -E '\| .+ing \|'
$ openstack volume list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack volume backup list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack share list --all-projects -c ID -c Status | grep -E '\| .+ing \|' | grep -vi error
$ openstack image list -c ID -c Status | grep -E '\| .+ing \|'
----

* Collect the services topology-specific configuration. For more information, see xref:proc_retrieving-topology-specific-service-configuration_migrating-databases[Retrieving topology-specific service configuration].

* Define the following shell variables. The values are examples and refer to a single-node standalone {OpenStackPreviousInstaller} deployment.
Replace these example values with values that are correct for your environment:
+
[subs=+quotes]
----
ifeval::["{build}" != "downstream"]
CONTROLLER1_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.100"
endif::[]
ifeval::["{build}" == "downstream"]
CONTROLLER1_SSH="ssh -i ** root@**"
CONTROLLER2_SSH="ssh -i ** root@**"
CONTROLLER3_SSH="ssh -i ** root@**"
endif::[]
----

* Specify the IP addresses of all Controller nodes, for example:
+
[subs=+quotes]
----
ifeval::["{build}" != "downstream"]
CONTROLLER1_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.103"
CONTROLLER2_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.106"
CONTROLLER3_SSH="ssh -i ~/install_yamls/out/edpm/ansibleee-ssh-key-id_rsa root@192.168.122.109"
endif::[]
ifeval::["{build}" == "downstream"]
CONTROLLER1_SSH="ssh -i ** root@**"
CONTROLLER2_SSH="ssh -i ** root@**"
CONTROLLER3_SSH="ssh -i ** root@**"
# ...
endif::[]
----
ifeval::["{build}" == "downstream"]
+
** `<path to SSH key>` defines the path to your SSH key.
** `<controller IP>` defines the IP addresses of all Controller nodes.
endif::[]

.Procedure

. If your deployment enables CephFS through NFS as a back end for {rhos_component_storage_file_first_ref}, remove the following Pacemaker ordering and co-location constraints that govern the Virtual IP address of the `ceph-nfs` service and the `manila-share` service:
+
[source,bash]
----
# Check the co-location and ordering constraints concerning "manila-share"
sudo pcs constraint list --full

# Remove these constraints
sudo pcs constraint remove colocation-openstack-manila-share-ceph-nfs-INFINITY
sudo pcs constraint remove order-ceph-nfs-openstack-manila-share-Optional
----
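The scripts in the following steps select the per-controller SSH command by building a variable name and expanding it indirectly with `${!SSH_CMD}`. The following is a minimal sketch of that pattern only, not part of the procedure: the `echo` prefixes are hypothetical stand-ins for real `ssh -i <key> root@<IP>` commands, and the empty `CONTROLLER3_SSH` shows how undefined controllers are skipped.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in values; in the procedure these hold real ssh commands.
CONTROLLER1_SSH="echo [controller-1]"
CONTROLLER2_SSH="echo [controller-2]"
CONTROLLER3_SSH=""

for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH        # the *name* of the variable
    if [ ! -z "${!SSH_CMD}" ]; then   # ${!SSH_CMD} expands that variable's value
        ${!SSH_CMD} systemctl is-active tripleo_nova_api.service
    else
        echo "controller $i: no SSH command defined, skipping"
    fi
done
```

Because `${!SSH_CMD}` expands to the full command string before word splitting, the same loop body works unchanged whether the variable holds a real `ssh` invocation or is empty.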
. Disable {OpenStackShort} control plane services:
+
[source,bash]
----
# Update the list of services to stop
ServicesToStop=(
    "tripleo_aodh_api.service"
    "tripleo_aodh_api_cron.service"
    "tripleo_aodh_evaluator.service"
    "tripleo_aodh_listener.service"
    "tripleo_aodh_notifier.service"
    "tripleo_ceilometer_agent_central.service"
    "tripleo_ceilometer_agent_compute.service"
    "tripleo_ceilometer_agent_ipmi.service"
    "tripleo_ceilometer_agent_notification.service"
    "tripleo_designate_api.service"
    "tripleo_designate_backend_bind9.service"
    "tripleo_designate_central.service"
    "tripleo_designate_mdns.service"
    "tripleo_designate_producer.service"
    "tripleo_designate_worker.service"
    "tripleo_octavia_api.service"
    "tripleo_octavia_health_manager.service"
    "tripleo_octavia_rsyslog.service"
    "tripleo_octavia_driver_agent.service"
    "tripleo_octavia_housekeeping.service"
    "tripleo_octavia_worker.service"
    "tripleo_horizon.service"
    "tripleo_keystone.service"
    "tripleo_barbican_api.service"
    "tripleo_barbican_worker.service"
    "tripleo_barbican_keystone_listener.service"
    "tripleo_cinder_api.service"
    "tripleo_cinder_api_cron.service"
    "tripleo_cinder_scheduler.service"
    "tripleo_cinder_volume.service"
    "tripleo_cinder_backup.service"
    "tripleo_collectd.service"
    "tripleo_glance_api.service"
    "tripleo_gnocchi_api.service"
    "tripleo_gnocchi_metricd.service"
    "tripleo_gnocchi_statsd.service"
    "tripleo_manila_api.service"
    "tripleo_manila_api_cron.service"
    "tripleo_manila_scheduler.service"
    "tripleo_neutron_api.service"
    "tripleo_placement_api.service"
    "tripleo_nova_api_cron.service"
    "tripleo_nova_api.service"
    "tripleo_nova_conductor.service"
    "tripleo_nova_metadata.service"
    "tripleo_nova_scheduler.service"
    "tripleo_nova_vnc_proxy.service"
    "tripleo_ovn_cluster_northd.service"
    "tripleo_ironic_neutron_agent.service"
    "tripleo_ironic_api.service"
    "tripleo_ironic_inspector.service"
    "tripleo_ironic_conductor.service"
    "tripleo_ironic_inspector_dnsmasq.service"
    "tripleo_ironic_pxe_http.service"
    "tripleo_ironic_pxe_tftp.service"
    "tripleo_unbound.service"
)

PacemakerResourcesToStop=(
    "openstack-cinder-volume"
    "openstack-cinder-backup"
    "openstack-manila-share"
)

echo "Stopping systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
ifeval::["{build_variant}" != "ospdo"]
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
endif::[]
ifeval::["{build_variant}" == "ospdo"]
        SSH_CMD=CONTROLLER_SSH
endif::[]
        if [ ! -z "${!SSH_CMD}" ]; then
            echo "Stopping the $service in controller $i"
            if ${!SSH_CMD} sudo systemctl is-active $service; then
                ${!SSH_CMD} sudo systemctl stop $service
            fi
        fi
ifeval::["{build_variant}" != "ospdo"]
    done
endif::[]
done

echo "Checking systemd OpenStack services"
for service in ${ServicesToStop[*]}; do
    for i in {1..3}; do
        SSH_CMD=CONTROLLER${i}_SSH
        if [ ! -z "${!SSH_CMD}" ]; then
            if ! ${!SSH_CMD} systemctl show $service | grep ActiveState=inactive >/dev/null; then
                echo "ERROR: Service $service still running on controller $i"
            else
                echo "OK: Service $service is not running on controller $i"
            fi
        fi
    done
done

echo "Stopping pacemaker OpenStack services"
ifeval::["{build_variant}" != "ospdo"]
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
endif::[]
ifeval::["{build_variant}" == "ospdo"]
SSH_CMD=CONTROLLER_SSH
endif::[]
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStop[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                echo "Stopping $resource"
                ${!SSH_CMD} sudo pcs resource disable $resource
            else
                echo "Service $resource not present"
            fi
ifeval::["{build_variant}" != "ospdo"]
        done
endif::[]
        break
    fi
done

echo "Checking pacemaker OpenStack services"
ifeval::["{build_variant}" != "ospdo"]
for i in {1..3}; do
    SSH_CMD=CONTROLLER${i}_SSH
endif::[]
ifeval::["{build_variant}" == "ospdo"]
SSH_CMD=CONTROLLER_SSH
endif::[]
    if [ ! -z "${!SSH_CMD}" ]; then
        echo "Using controller $i to run pacemaker commands"
        for resource in ${PacemakerResourcesToStop[*]}; do
            if ${!SSH_CMD} sudo pcs resource config $resource &>/dev/null; then
                if ! ${!SSH_CMD} sudo pcs resource status $resource | grep Started; then
                    echo "OK: Service $resource is stopped"
                else
                    echo "ERROR: Service $resource is started"
                fi
            fi
        done
        break
    fi
ifeval::["{build_variant}" != "ospdo"]
done
endif::[]
----
+
If the status of each service is `OK`, then the services stopped successfully.

. For Distributed Compute Node (DCN) deployments where {image_service}, {block_storage}, and {Ceph} services run on edge compute nodes, stop the {image_service}, {block_storage}, and etcd services on all edge compute nodes with the `DistributedComputeHCI` role:
+
[NOTE]
====
This step applies only to DCN deployments where {image_service}, {block_storage}, and {Ceph} services run on edge compute nodes. The `DistributedComputeHCI` role runs the `GlanceApiEdge`, `CinderVolumeEdge`, and `Etcd` services. A minimum of three nodes per site use this role. Skip this step if your DCN deployment does not run these services on edge compute nodes.

The examples in this procedure use hyper-converged (HCI) roles.
If your deployment does not use HCI, the same services apply to the `DistributedCompute` role, which runs the same `GlanceApiEdge`, `CinderVolumeEdge`, and `Etcd` services but without Ceph OSD, Mon, or Mgr.
====
+
.. Define shell variables for your `DistributedComputeHCI` edge compute nodes. Replace the example values with the correct values for your environment:
+
[subs=+quotes]
----
# DCN1 edge site DistributedComputeHCI nodes
DCN1_HCI0_SSH="ssh -i ** root@**"
DCN1_HCI1_SSH="ssh -i ** root@**"
DCN1_HCI2_SSH="ssh -i ** root@**"

# DCN2 edge site DistributedComputeHCI nodes
DCN2_HCI0_SSH="ssh -i ** root@**"
DCN2_HCI1_SSH="ssh -i ** root@**"
DCN2_HCI2_SSH="ssh -i ** root@**"
----
+
.. Stop the storage services on all `DistributedComputeHCI` nodes:
+
----
# Services to stop on DistributedComputeHCI edge compute nodes
DCN_HCI_SERVICES=(
    "tripleo_glance_api_internal.service"
    "tripleo_cinder_volume.service"
    "tripleo_etcd.service"
)

# List of all DistributedComputeHCI node SSH commands
DCN_HCI_NODES=(
    "$DCN1_HCI0_SSH" "$DCN1_HCI1_SSH" "$DCN1_HCI2_SSH"
    "$DCN2_HCI0_SSH" "$DCN2_HCI1_SSH" "$DCN2_HCI2_SSH"
)

echo "Stopping storage services on DistributedComputeHCI nodes"
for node_ssh in "${DCN_HCI_NODES[@]}"; do
    [ -z "$node_ssh" ] && continue
    echo "Processing node: $node_ssh"
    for service in "${DCN_HCI_SERVICES[@]}"; do
        if $node_ssh sudo systemctl is-active $service 2>/dev/null; then
            echo "Stopping $service"
            $node_ssh sudo systemctl stop $service
        fi
    done
done

echo "Checking storage services on DistributedComputeHCI nodes"
for node_ssh in "${DCN_HCI_NODES[@]}"; do
    [ -z "$node_ssh" ] && continue
    for service in "${DCN_HCI_SERVICES[@]}"; do
        if ! $node_ssh systemctl show $service 2>/dev/null | grep ActiveState=inactive >/dev/null; then
            echo "ERROR: Service $service still running on $node_ssh"
        else
            echo "OK: Service $service is not running on $node_ssh"
        fi
    done
done
----
+
[NOTE]
====
* On edge sites, the {image_service} runs with the service name `tripleo_glance_api_internal.service`, which is different from the `tripleo_glance_api.service` name on the central controller.
* The {block_storage} volume service (`tripleo_cinder_volume.service`) uses the same service name on both edge sites and the central controller.
* The etcd service (`tripleo_etcd.service`) is used as a distributed lock manager (DLM) for the {block_storage} volume service, which runs in active/active mode on edge sites.
====

. If your DCN deployment includes `DistributedComputeHCIScaleOut` nodes, stop the HAProxy service on those nodes:
+
[NOTE]
====
The `DistributedComputeHCIScaleOut` role is used to scale compute and storage capacity beyond the initial three `DistributedComputeHCI` nodes at each site. These nodes run `HAProxyEdge`, which proxies {image_service} requests to the `GlanceApiEdge` instances on `DistributedComputeHCI` nodes. Skip this step if your DCN deployment does not include `DistributedComputeHCIScaleOut` nodes.

For non-HCI deployments, the equivalent role is `DistributedComputeScaleOut`, which runs the same `HAProxyEdge` service.
====
+
.. Define shell variables for your `DistributedComputeHCIScaleOut` edge compute nodes. Replace the example values with the correct values for your environment:
+
[subs=+quotes]
----
# DCN1 edge site DistributedComputeHCIScaleOut nodes
DCN1_SCALEOUT0_SSH="ssh -i ** root@**"
DCN1_SCALEOUT1_SSH="ssh -i ** root@**"

# DCN2 edge site DistributedComputeHCIScaleOut nodes
DCN2_SCALEOUT0_SSH="ssh -i ** root@**"
DCN2_SCALEOUT1_SSH="ssh -i ** root@**"
----
+
.. Stop the services on all `DistributedComputeHCIScaleOut` nodes:
+
----
# Services to stop on DistributedComputeHCIScaleOut edge compute nodes
DCN_SCALEOUT_SERVICES=("tripleo_haproxy_edge.service")

# List of all DistributedComputeHCIScaleOut node SSH commands
DCN_SCALEOUT_NODES=(
    "$DCN1_SCALEOUT0_SSH" "$DCN1_SCALEOUT1_SSH"
    "$DCN2_SCALEOUT0_SSH" "$DCN2_SCALEOUT1_SSH"
)

echo "Stopping services on DistributedComputeHCIScaleOut nodes"
for node_ssh in "${DCN_SCALEOUT_NODES[@]}"; do
    [ -z "$node_ssh" ] && continue
    echo "Processing node: $node_ssh"
    for service in "${DCN_SCALEOUT_SERVICES[@]}"; do
        if $node_ssh sudo systemctl is-active $service 2>/dev/null; then
            echo "Stopping $service"
            $node_ssh sudo systemctl stop $service
        fi
    done
done

echo "Checking services on DistributedComputeHCIScaleOut nodes"
for node_ssh in "${DCN_SCALEOUT_NODES[@]}"; do
    [ -z "$node_ssh" ] && continue
    for service in "${DCN_SCALEOUT_SERVICES[@]}"; do
        if ! $node_ssh systemctl show $service 2>/dev/null | grep ActiveState=inactive >/dev/null; then
            echo "ERROR: Service $service still running on $node_ssh"
        else
            echo "OK: Service $service is not running on $node_ssh"
        fi
    done
done
----
+
[NOTE]
====
* The HAProxy edge service (`tripleo_haproxy_edge.service`) provides a local {image_service} endpoint on `DistributedComputeHCIScaleOut` nodes, proxying requests to the `GlanceApiEdge` instances on `DistributedComputeHCI` nodes. During adoption, {rhocp_long} Kubernetes service endpoints backed by MetalLB replace HAProxy.
====
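The verification loops in this procedure all repeat the same `systemctl show | grep ActiveState=inactive` check per node and per service. As a sketch only, that check can be factored into one reusable function; the `check_stopped` name is illustrative and is not part of the documented procedure.

```shell
#!/usr/bin/env bash
# Illustrative helper (not part of the procedure): verify that a service
# is inactive on every node. Arguments: a service name, then one SSH
# command string per node. Empty SSH strings are skipped, matching the
# verification loops in this procedure.
check_stopped() {
    local service=$1 rc=0 node_ssh
    shift
    for node_ssh in "$@"; do
        [ -z "$node_ssh" ] && continue
        if $node_ssh systemctl show "$service" 2>/dev/null \
                | grep -q 'ActiveState=inactive'; then
            echo "OK: Service $service is not running ($node_ssh)"
        else
            echo "ERROR: Service $service still running ($node_ssh)"
            rc=1
        fi
    done
    return $rc
}
```

For example, `check_stopped tripleo_etcd.service "$DCN1_HCI0_SSH" "$DCN1_HCI1_SSH" "$DCN1_HCI2_SSH"` would reproduce the per-node OK/ERROR report from the DCN verification loop, and its non-zero exit status on any ERROR makes the check usable in further scripting.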