:_mod-docs-content-type: CONCEPT

[id="about-node-roles_{context}"]
= About node roles

{OpenStackPreviousInstaller} deployments use five standard node roles: `Controller`, `Compute`, `Networker`, `Ceph Storage`, and `Swift Storage`. In the new control plane, the distinction is instead based on where services run: in {OpenShift} ({OpenShiftShort}) or external to it.

When you adopt a {rhos_prev_long} ({OpenStackShort}) deployment that was deployed with {OpenStackPreviousInstaller}, your `Compute` nodes directly become external nodes, so little additional planning is needed for them. Your `Networker` nodes also become external nodes.

In most cases your `Controller` nodes also become external nodes, to avoid a data plane outage during adoption. When a {OpenStackPreviousInstaller} deployment has no dedicated `Networker` nodes, the `Controller` nodes act as OVN gateway chassis: external SNAT traffic, and centralized floating IP traffic if DVR is disabled, flows through these nodes. Removing them during adoption would cause an outage, so plan any such disruption for after the adoption is complete.

In many deployments being adopted, the `Controller` nodes require some thought, because you have many {OpenShiftShort} nodes where the Controller services could run, and you must decide which nodes to use, how to use them, and make sure those nodes are ready to run the services.

In most deployments, running {OpenStackShort} services on `master` nodes can seriously degrade the {OpenShiftShort} cluster, so it is recommended that you place {OpenStackShort} services on non-`master` nodes. By default, the {OpenStackShort} Operators deploy {OpenStackShort} services on any worker node, but that is not necessarily the best choice for every deployment, and some services might not work at all when deployed that way.

When planning a deployment, remember that not all services in an {OpenStackShort} deployment are alike; they have very different requirements. The Block Storage service (cinder) component illustrates this clearly: the cinder-scheduler service is very light, with low memory, disk, network, and CPU usage; the cinder-api service has higher network usage because of resource listing requests; the cinder-volume service has high disk and network usage because many of its operations are in the data path, such as offline volume migration and creating a volume from an image; and the cinder-backup service has high memory, network, and CPU requirements because it compresses data.

The {image_service_first_ref} and {object_storage_first_ref} components are also in the data path, as are the RabbitMQ and Galera services.

Given these requirements, it might be preferable not to let these services run on arbitrary {OpenShiftShort} worker nodes, where they could impact other workloads. Alternatively, you might not mind the light services running anywhere, but want to pin the heavy ones to a set of infrastructure nodes.

There are also hardware restrictions to take into consideration. If you use a Fibre Channel (FC) Block Storage service back end, the cinder-volume and cinder-backup services, and possibly even the {image_service_first_ref} if it uses the Block Storage service as a back end, must run on an {OpenShiftShort} host that has a host bus adapter (HBA), as in the sketch that follows.
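For example, one way to express that hardware requirement is to label the {OpenShiftShort} nodes that have an HBA and reference that label from the affected services. The following is a minimal sketch only: the `fc-hba` label, the `fc-backend` back-end name, and the exact field paths are assumptions that can differ between releases, so verify them against the node selector documentation for your environment.

[source,yaml]
----
# Minimal sketch, not a verified manifest: the label, back-end name, and
# field paths are assumptions. It assumes the HBA nodes were labeled first,
# for example with: oc label nodes worker-3 worker-4 fc-hba=true
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  cinder:
    template:
      cinderVolumes:
        fc-backend:          # hypothetical back-end name
          nodeSelector:
            fc-hba: "true"   # run only on nodes that have an HBA
----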
The {OpenStackShort} Operators provide a great deal of flexibility in where to run the {OpenStackShort} services: you can use node labels to define which {OpenShiftShort} nodes are eligible to run each service. For more information about using labels to define placement of the {OpenStackShort} services, see xref:about-node-selector_{context}[About node selector].
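As a simple illustration of the general mechanism, the following minimal sketch assumes that the chosen worker nodes were labeled beforehand; the `type: openstack` label is only a hypothetical example, and field paths can differ between releases.

[source,yaml]
----
# Minimal sketch, not a verified manifest: the label and field paths are
# assumptions. It assumes the target nodes were labeled first, for example
# with: oc label nodes worker-0 worker-1 worker-2 type=openstack
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  nodeSelector:        # default placement for all control plane services
    type: openstack
----

Individual services can typically override this default with their own `nodeSelector`, which is how you can let the light services run anywhere while pinning the heavier or hardware-bound ones to specific nodes.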