:_mod-docs-content-type: PROCEDURE

[id="creating-a-ceph-nfs-cluster_{context}"]
= Creating an NFS Ganesha cluster

[role="_abstract"]
If you use CephFS through NFS with the {rhos_component_storage_file_first_ref}, you must create a new clustered NFS service on the {Ceph} cluster. This service replaces the standalone, Pacemaker-controlled `ceph-nfs` service that you use in {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver}.

.Procedure

. Identify the {Ceph} nodes where you want to deploy the new clustered NFS service, for example, `cephstorage-0`, `cephstorage-1`, `cephstorage-2`.
+
[NOTE]
You must deploy this service on the `StorageNFS` isolated network so that you can mount your existing shares through the new NFS export locations. You can deploy the new clustered NFS service on your existing CephStorage nodes or HCI nodes, or on new hardware that you enrolled in the {Ceph} cluster.

. If you deployed your {Ceph} nodes with {OpenStackPreviousInstaller}, propagate the `StorageNFS` network to the target nodes where the `ceph-nfs` service is deployed.
+
[NOTE]
If the target nodes are not managed by director, you cannot use this procedure to configure the network. An administrator must manually configure all the required networks.

.. Identify the node definition file, `overcloud-baremetal-deploy.yaml`, that is used in the {OpenStackShort} environment.
ifeval::["{build}" != "upstream"]
For more information about identifying the `overcloud-baremetal-deploy.yaml` file, see link:{customizing-rhoso}/index#assembly_customizing-overcloud-networks[Customizing overcloud networks] in _{customizing-rhoso-t}_.
endif::[]
ifeval::["{build}" != "downstream"]
For background on these tasks, see link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/features/network_isolation.html#deploying-the-overcloud-with-network-isolation[Deploying an Overcloud with Network Isolation with TripleO] and link:https://docs.openstack.org/project-deploy-guide/tripleo-docs/wallaby/post_deployment/updating_network_configuration_post_deployment.html[Applying network configuration changes after deployment].
endif::[]

.. Edit the networks that are associated with the {CephCluster} nodes to include the `StorageNFS` network:
+
[source,yaml]
----
- name: CephStorage
  count: 3
  hostname_format: cephstorage-%index%
  instances:
  - hostname: cephstorage-0
    name: ceph-0
  - hostname: cephstorage-1
    name: ceph-1
  - hostname: cephstorage-2
    name: ceph-2
  defaults:
    profile: ceph-storage
    network_config:
      template: /home/stack/network/nic-configs/ceph-storage.j2
      network_config_update: true
    networks:
    - network: ctlplane
      vif: true
    - network: storage
    - network: storage_mgmt
    - network: storage_nfs
----

.. Edit the network configuration template file, for example, `/home/stack/network/nic-configs/ceph-storage.j2`, for the {CephCluster} nodes to include an interface that connects to the `StorageNFS` network:
+
[source,yaml]
----
- type: vlan
  device: nic2
  vlan_id: {{ storage_nfs_vlan_id }}
  addresses:
  - ip_netmask: {{ storage_nfs_ip }}/{{ storage_nfs_cidr }}
  routes: {{ storage_nfs_host_routes }}
----

.. Update the {CephCluster} nodes:
+
----
$ openstack overcloud node provision \
    --stack overcloud \
    --network-config -y \
    -o overcloud-baremetal-deployed-storage_nfs.yaml \
    --concurrency 2 \
    /home/stack/network/baremetal_deployment.yaml
----
+
When the update is complete, ensure that a new interface is created on the {CephCluster} nodes and that it is tagged with the VLAN that is associated with the `StorageNFS` network.
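+
For example, assuming that you can log in to the {CephCluster} nodes, for example as the `tripleo-admin` user, you can review the interface details and confirm that a VLAN device carries the `StorageNFS` VLAN ID and IP address:
+
----
$ ssh tripleo-admin@cephstorage-0 ip -d addr show
----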
. Identify an IP address from the `StorageNFS` network to use as the virtual IP address (VIP) for the Ceph NFS service:
+
----
$ openstack port list -c "Fixed IP Addresses" --network storage_nfs
----

. In a running `cephadm` shell, identify the hosts for the NFS service:
+
----
$ ceph orch host ls
----

. Label each host that you identified. Repeat this command for each host that you want to label:
+
----
$ ceph orch host label add <hostname> nfs
----
+
* Replace `<hostname>` with the name of the host that you identified.

. Create the NFS cluster:
+
----
$ ceph nfs cluster create cephfs \
    "label:nfs" \
    --ingress \
    --virtual-ip=<VIP> \
    --ingress-mode=haproxy-protocol
----
+
* Replace `<VIP>` with the VIP for the Ceph NFS service.
+
[NOTE]
You must set the `ingress-mode` argument to `haproxy-protocol`. No other ingress mode is supported. This ingress mode allows you to enforce client restrictions through the {rhos_component_storage_file}.
ifeval::["{build}" != "downstream"]
+
For more information about deploying the clustered Ceph NFS service, see the link:https://docs.ceph.com/en/latest/cephadm/services/nfs/[ceph orchestrator documentation].
endif::[]
ifeval::["{build}" != "upstream"]
+
For more information about deploying the clustered Ceph NFS service, see link:{defaultCephURL}/operations_guide/index#management-of-nfs-ganesha-gateway-using-the-ceph-orchestrator[Management of NFS-Ganesha gateway using the Ceph Orchestrator (Limited Availability)] in the _Red Hat Ceph Storage 7 Operations Guide_.
endif::[]

. Check the status of the NFS cluster:
+
----
$ ceph nfs cluster ls
$ ceph nfs cluster info cephfs
----
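+
Optionally, to further confirm that the NFS and ingress daemons that back the new cluster are running, list the orchestrator services and daemons from the `cephadm` shell. The exact service and daemon names depend on your deployment; the `grep` pattern below assumes the `nfs`, `haproxy`, and `keepalived` daemon types that the ingress service deploys:
+
----
$ ceph orch ls
$ ceph orch ps | grep -E 'nfs|haproxy|keepalived'
----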