:_mod-docs-content-type: CONCEPT

[id="changes-to-cephFS-through-NFS_{context}"]
= Changes to CephFS through NFS

[role="_abstract"]
Before you begin the adoption, review the following information to understand the changes to CephFS through NFS between {rhos_prev_long} ({OpenStackShort}) {rhos_prev_ver} and {rhos_long} {rhos_curr_ver}:

* If the {OpenStackShort} {rhos_prev_ver} deployment uses CephFS through NFS as a back end for {rhos_component_storage_file_first_ref}, you cannot directly import the `ceph-nfs` service on the {OpenStackShort} Controller nodes into {rhos_acro} {rhos_curr_ver}. In {rhos_acro} {rhos_curr_ver}, the {rhos_component_storage_file} supports only a clustered NFS service that is managed directly on the {Ceph} cluster. Adoption with the `ceph-nfs` service involves a data path disruption to existing NFS clients.

* On {OpenStackShort} {rhos_prev_ver}, Pacemaker controls the high availability of the `ceph-nfs` service. This service is assigned a Virtual IP (VIP) address that is also managed by Pacemaker. The VIP is typically created on an isolated `StorageNFS` network. The Controller nodes have ordering and colocation constraints established between this VIP, `ceph-nfs`, and the {rhos_component_storage_file} share manager service. Before you adopt the {rhos_component_storage_file}, you must adjust the Pacemaker ordering and colocation constraints to separate the share manager service. This establishes `ceph-nfs` with its VIP as an isolated, standalone NFS service that you can decommission after you complete the {rhos_acro} adoption. A hedged example of adjusting these constraints follows this list.

* In {Ceph} {CephVernum}, you must deploy a native clustered Ceph NFS service on the {Ceph} cluster by using the Ceph Orchestrator before you adopt the {rhos_component_storage_file}. This NFS service eventually replaces the standalone NFS service from {OpenStackShort} {rhos_prev_ver} in your deployment. When the {rhos_component_storage_file} is adopted into the {rhos_acro} {rhos_curr_ver} environment, it establishes all the existing exports and client restrictions on the new clustered Ceph NFS service. Clients can continue to read and write data on existing NFS shares, and are not affected until the old standalone NFS service is decommissioned. After the service is decommissioned, you can re-mount the same share from the new clustered Ceph NFS service during a scheduled downtime. A deployment sketch follows this list.

* To ensure that NFS users are not required to make any networking changes to their existing workloads, assign an IP address from the same isolated `StorageNFS` network to the clustered Ceph NFS service. NFS users only need to discover and re-mount their shares by using new export paths. When the adoption is complete, {rhos_acro} users can query the {rhos_component_storage_file} API to list the export locations on existing shares and identify the preferred paths to mount these shares. These preferred paths correspond to the new clustered Ceph NFS service. Other, non-preferred export paths continue to be displayed until the old isolated, standalone NFS service is decommissioned. An example of listing export locations follows this list.

For more information about setting up a clustered NFS service, see xref:creating-a-ceph-nfs-cluster_ceph-prerequisites[Creating an NFS Ganesha cluster].
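
The following is a minimal sketch of how you might inspect and remove the Pacemaker ordering and colocation constraints that tie the share manager service to `ceph-nfs`, as described in the second list item. The resource names and constraint IDs shown here are placeholder assumptions, not values from your deployment; use the names that `pcs` reports in your environment, and note that the exact `pcs` subcommands can vary between versions.

[source,bash]
----
# Show all constraints with their IDs so that you can identify the ordering
# and colocation constraints between the share manager service, ceph-nfs,
# and the VIP. The resource names in the filter are placeholders.
pcs constraint --full | grep -iE 'ceph-nfs|manila'

# Delete the ordering and colocation constraints that couple the share
# manager service to ceph-nfs, leaving ceph-nfs and its VIP as a standalone
# NFS service. The constraint IDs below are examples only; substitute the
# IDs reported by the previous command.
pcs constraint delete order-ceph-nfs-openstack-manila-share-Optional
pcs constraint delete colocation-openstack-manila-share-ceph-nfs-INFINITY
----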
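
You can create the clustered Ceph NFS service described in the third list item by using the Ceph Orchestrator. The following sketch assumes a cluster name of `cephfs-nfs`, placement by the host label `nfs`, and a virtual IP of `172.17.5.47/24` on the isolated `StorageNFS` network; these values are illustrative only and must be adapted to your environment.

[source,bash]
----
# Create a clustered NFS service by using the Ceph Orchestrator. The
# --ingress option deploys a highly available ingress layer in front of the
# NFS daemons, and --virtual-ip assigns an address from the isolated
# StorageNFS network so that existing clients do not require networking
# changes.
ceph nfs cluster create cephfs-nfs "label:nfs" --ingress --virtual-ip 172.17.5.47/24

# Confirm that the NFS cluster and its ingress service are deployed.
ceph nfs cluster info cephfs-nfs
ceph orch ls | grep nfs
----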
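
After the adoption, users can identify the preferred export paths as described in the last list item. The following sketch assumes a share named `share-01`, the `openstack` client with the Shared File Systems plugin, and an illustrative export path behind the virtual IP used in the previous example; the share name, IP address, and mount paths are assumptions, not values from your deployment.

[source,bash]
----
# List the export locations of an existing share. The location flagged as
# preferred corresponds to the new clustered Ceph NFS service; non-preferred
# locations point at the old standalone ceph-nfs service until it is
# decommissioned.
openstack share export location list share-01

# During the scheduled downtime, unmount the old path and re-mount the share
# from the preferred export location. The paths below are examples only.
umount /mnt/share-01
mount -t nfs 172.17.5.47:/volumes/_nogroup/share-01 /mnt/share-01
----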