:_mod-docs-content-type: CONCEPT
[id="openshift-preparation-for-block-storage-adoption_{context}"]
= {OpenShiftShort} preparation for {block_storage} adoption

Before you deploy {rhos_prev_long} ({OpenStackShort}) on {rhocp_long} nodes, ensure that the networks are ready, decide which {OpenShiftShort} nodes to restrict the {block_storage} services to, and make any necessary changes to the {OpenShiftShort} nodes.

Node selection::
You might need to restrict the {OpenShiftShort} nodes where the {block_storage} volume and backup services run.
+
For example, you must restrict nodes when you deploy {block_storage} with the LVM driver. In that scenario, the LVM data where the volumes are stored exists only on a specific host, so you must pin the {block_storage} volume service to that specific {OpenShiftShort} node. Running the service on any other {OpenShiftShort} node does not work. You cannot use the {OpenShiftShort} host node name to restrict the LVM back end. You must identify the LVM back end by using a unique label, an existing label, or a new label:
+
----
$ oc label nodes worker0 lvm=cinder-volumes
----
+
[source,yaml]
----
apiVersion: core.openstack.org/v1beta1
kind: OpenStackControlPlane
metadata:
  name: openstack
spec:
  secret: osp-secret
  storageClass: local-storage
  cinder:
    enabled: true
    template:
      cinderVolumes:
        lvm-iscsi:
          nodeSelector:
            lvm: cinder-volumes
< . . . >
----
+
For more information about node selection, see link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors].
+
[NOTE]
====
If your nodes do not have enough local disk space for temporary images, you can use a remote NFS location by setting the extra volumes feature, `extraMounts`.
====

Transport protocols::
Some changes to the storage transport protocols might be required for {OpenShiftShort}:
+
* If you use a `MachineConfig` to make changes to {OpenShiftShort} nodes, the nodes reboot.
* Check the back-end sections that are listed in the `enabled_backends` configuration option in your `cinder.conf` file to determine the enabled storage back ends.
* Depending on the back end, you can find the transport protocol by viewing the `volume_driver` or `target_protocol` configuration options, as shown in the following example.
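+
The following `cinder.conf` excerpt is an illustrative sketch, not output from a real deployment. It shows a hypothetical LVM back end, named `lvm-iscsi`, where the `volume_driver` and `target_protocol` options identify iSCSI as the transport protocol:
+
[source,ini]
----
[DEFAULT]
enabled_backends = lvm-iscsi

[lvm-iscsi]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
target_protocol = iscsi
----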
* The `iscsid` service, the `multipathd` service, and the NVMe-TCP kernel modules start automatically on data plane nodes.

NFS:::
** {OpenShiftShort} connects to NFS back ends without additional changes.

Rados Block Device and {Ceph}:::
** {OpenShiftShort} connects to {Ceph} back ends without additional changes. You must provide credentials and configuration files to the services.

iSCSI:::
** To connect to iSCSI volumes, the iSCSI initiator must run on the {OpenShiftShort} hosts where the volume and backup services run. The Linux Open iSCSI initiator does not support network namespaces, so you must run only one instance of the service, which is shared by normal {OpenShiftShort} usage, the {OpenShiftShort} CSI plugins, and the {OpenStackShort} services.
** If you are not already running `iscsid` on the {OpenShiftShort} nodes, apply a `MachineConfig`. For example:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-enable-iscsid
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      - enabled: true
        name: iscsid.service
----
** If you use labels to restrict the nodes where the {block_storage} services run, you must use a `MachineConfigPool` to limit the effects of the `MachineConfig` to only the nodes where your services might run. For more information, see link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors].
** If you use a single node deployment to test the process, replace `worker` with `master` in the `MachineConfig`.
** For production deployments that use iSCSI volumes, configure multipathing for better I/O.

FC:::
** The {block_storage} volume and backup services must run on an {OpenShiftShort} host that has host bus adapters (HBAs). If some nodes do not have HBAs, use labels to restrict where these services run. For more information, see link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors].
** If you have virtualized {OpenShiftShort} clusters that use FC, you must expose the host HBAs inside the virtual machines.
** For production deployments that use FC volumes, configure multipathing for better I/O.

NVMe-TCP:::
** To connect to NVMe-TCP volumes, load the NVMe-TCP kernel modules on the {OpenShiftShort} hosts.
** If you do not already load the `nvme-fabrics` module on the {OpenShiftShort} nodes where the volume and backup services are going to run, apply a `MachineConfig`. For example:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-load-nvme-fabrics
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/modules-load.d/nvme_fabrics.conf
          overwrite: false
          # Mode must be decimal, this is 0644
          mode: 420
          user:
            name: root
          group:
            name: root
          contents:
            # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
            # This is the rfc2397 text/plain string format
            source: data:,nvme-fabrics
----
** If you use labels to restrict the nodes where the {block_storage} services run, use a `MachineConfigPool` to limit the effects of the `MachineConfig` to the nodes where your services run. For more information, see link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors].
** If you use a single node deployment to test the process, replace `worker` with `master` in the `MachineConfig`.
** Load only the `nvme-fabrics` module, because it loads the transport-specific modules, such as TCP, RDMA, or FC, as needed.
ifeval::["{build}" != "downstream"]
+
For production deployments that use NVMe-TCP volumes, use multipathing. For NVMe-TCP volumes, {OpenShiftShort} uses native multipathing, called https://nvmexpress.org/faq-items/what-is-ana-nvme-multipathing/[ANA].
endif::[]
ifeval::["{build}" != "upstream"]
** For production deployments that use NVMe-TCP volumes, use multipathing for better I/O. For NVMe-TCP volumes, {OpenShiftShort} uses native multipathing, called ANA.
endif::[]
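** To verify that the `nvme-fabrics` module is loaded after the nodes reboot, you can check from a shell on the node, for example through `oc debug node/<node_name>`. This is an optional, generic verification, not a step that this procedure requires:
+
----
$ lsmod | grep nvme_fabrics
----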
** After the {OpenShiftShort} nodes reboot and load the `nvme-fabrics` module, you can confirm that the operating system is configured and that it supports ANA by checking the host. A value of `Y` means that ANA support is enabled:
+
----
$ cat /sys/module/nvme_core/parameters/multipath
----
+
[IMPORTANT]
====
ANA does not use the Linux Multipathing Device Mapper, but {OpenShiftShort} requires `multipathd` to run on Compute nodes so that the {compute_service_first_ref} can use multipathing. Multipathing is automatically configured on data plane nodes when they are provisioned.
====

Multipathing:::
** Use multipathing for iSCSI and FC protocols. To configure multipathing on these protocols, you perform the following tasks:
*** Prepare the {OpenShiftShort} hosts
*** Configure the {block_storage} services
*** Prepare the {compute_service} nodes
*** Configure the {compute_service}
** To prepare the {OpenShiftShort} hosts, ensure that the Linux Multipath Device Mapper is configured and running on the {OpenShiftShort} hosts by using a `MachineConfig`. For example:
+
[source,yaml]
----
# Includes the /etc/multipath.conf contents and the systemd unit changes
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
    service: cinder
  name: 99-master-cinder-enable-multipathd
spec:
  config:
    ignition:
      version: 3.2.0
    storage:
      files:
        - path: /etc/multipath.conf
          overwrite: false
          # Mode must be decimal, this is 0600
          mode: 384
          user:
            name: root
          group:
            name: root
          contents:
            # Source can be a http, https, tftp, s3, gs, or data as defined in rfc2397.
            # This is the rfc2397 text/plain string format
            source: data:,defaults%20%7B%0A%20%20user_friendly_names%20no%0A%20%20recheck_wwid%20yes%0A%20%20skip_kpartx%20yes%0A%20%20find_multipaths%20yes%0A%7D%0A%0Ablacklist%20%7B%0A%7D
    systemd:
      units:
      - enabled: true
        name: multipathd.service
----
** If you use labels to restrict the nodes where the {block_storage} services run, you must use a `MachineConfigPool` to limit the effects of the `MachineConfig` to only the nodes where your services run. For more information, see link:{defaultOCPURL}/nodes/index#nodes-scheduler-node-selectors-about_nodes-scheduler-node-selectors[About node selectors].
** If you use a single node deployment to test the process, replace `worker` with `master` in the `MachineConfig`.
** The {block_storage} volume and backup services are configured by default to use multipathing.
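** For reference, the URL-encoded `source` data in the preceding `MachineConfig` example decodes to the following `/etc/multipath.conf` content:
+
----
defaults {
  user_friendly_names no
  recheck_wwid yes
  skip_kpartx yes
  find_multipaths yes
}

blacklist {
}
----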
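** The following `MachineConfigPool` is a minimal sketch of how you can limit a `MachineConfig` to labeled nodes. It assumes the illustrative `lvm: cinder-volumes` label from the node selection example and a hypothetical custom role named `cinder-storage`; with this pool in place, you label your `MachineConfig` objects with `machineconfiguration.openshift.io/role: cinder-storage` instead of `worker`:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfigPool
metadata:
  name: cinder-storage
spec:
  # Apply MachineConfig objects that target either the worker role or the custom role
  machineConfigSelector:
    matchExpressions:
      - key: machineconfiguration.openshift.io/role
        operator: In
        values:
        - worker
        - cinder-storage
  # Only nodes that carry the illustrative label join this pool
  nodeSelector:
    matchLabels:
      lvm: cinder-volumes
----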