:_mod-docs-content-type: PROCEDURE
[id="adopting-image-service-with-nfs-backend_{context}"]
= Adopting the {image_service} that is deployed with an NFS back end

[role="_abstract"]
Adopt the {image_service_first_ref} that you deployed with an NFS back end. To complete the following procedure, ensure that your environment meets the following criteria:

* The Storage network is propagated to the {rhos_prev_long} ({OpenStackShort}) control plane.
* The {image_service} can reach the Storage network and connect to the `nfs-server` through port `2049`. You can optionally verify the reachability as shown at the end of the prerequisites.

.Prerequisites

* You have completed the previous adoption steps.
* In the source cloud, verify the NFS parameters that the overcloud uses to configure the {image_service} back end. Specifically, in your {OpenStackPreviousInstaller} heat templates, find the following variables that override the default content that is provided by the `glance-nfs.yaml` file in the `/usr/share/openstack-tripleo-heat-templates/environments/storage` directory:
+
----
GlanceBackend: file
GlanceNfsEnabled: true
GlanceNfsShare: 192.168.24.1:/var/nfs
----
+
[NOTE]
====
In this example, the `GlanceBackend` variable shows that the {image_service} has no notion of an NFS back end. The variable is using the `File` driver and, in the background, the `filesystem_store_datadir`. The `filesystem_store_datadir` is mapped to the export value provided by the `GlanceNfsShare` variable instead of `/var/lib/glance/images/`.

If you do not export the `GlanceNfsShare` through a network that is propagated to the adopted {rhos_long} control plane, you must stop the `nfs-server` and remap the export to the `storage` network. Before doing so, ensure that the {image_service} is stopped in the source Controller nodes.

ifeval::["{build}" != "downstream"]
In the control plane, as depicted in the link:https://github.com/openstack-k8s-operators/docs/blob/main/images/network_diagram.jpg[network isolation diagram], the {image_service} is attached to the Storage network and propagated through the associated `NetworkAttachmentDefinition` custom resource, and the resulting pods already have the right permissions to handle the {image_service} traffic through this network.
endif::[]
ifeval::["{build}" != "upstream"]
In the control plane, the {image_service} is attached to the Storage network, then propagated through the associated `NetworkAttachmentDefinition` custom resource (CR), and the resulting pods already have the right permissions to handle the {image_service} traffic through this network.
endif::[]

In a deployed {OpenStackShort} control plane, you can verify that the network mapping matches what is deployed in the {OpenStackPreviousInstaller}-based environment by checking both the `NodeNetworkConfigurationPolicy` (`nncp`) and the `NetworkAttachmentDefinition` (`net-attach-def`). The following is an example of the output that you should check in the {rhocp_long} environment to make sure that there are no issues with the propagated networks:

----
$ oc get nncp
NAME                        STATUS      REASON
enp6s0-crc-8cf2w-master-0   Available   SuccessfullyConfigured

$ oc get net-attach-def
NAME
ctlplane
internalapi
storage
tenant

$ oc get ipaddresspool -n metallb-system
NAME          AUTO ASSIGN   AVOID BUGGY IPS   ADDRESSES
ctlplane      true          false             ["192.168.122.80-192.168.122.90"]
internalapi   true          false             ["172.17.0.80-172.17.0.90"]
storage       true          false             ["172.18.0.80-172.18.0.90"]
tenant        true          false             ["172.19.0.80-172.19.0.90"]
----
====
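* Optional: confirm that the `nfs-server` is reachable on port `2049` from a pod that is attached to the Storage network. The following is a minimal sketch, assuming that Bash is available in the `openstackclient` pod and that `172.18.0.1` is the `nfs-server` address; replace both with the values from your environment:
+
----
$ oc rsh openstackclient
sh-5.1$ timeout 5 bash -c 'echo > /dev/tcp/172.18.0.1/2049' && echo "port 2049 reachable"
----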
.Procedure

. Adopt the {image_service} and create a new `default` `GlanceAPI` instance that is connected with the existing NFS share:
+
----
$ cat << EOF > glance_nfs_patch.yaml
spec:
  extraMounts:
    - extraVol:
        - extraVolType: Nfs
          mounts:
            - mountPath: /var/lib/glance/images
              name: nfs
          propagation:
            - Glance
          volumes:
            - name: nfs
              nfs:
                path: <nfs_export_path> <1>
                server: <nfs_server_ip> <2>
      name: r1
      region: r1
  glance:
    enabled: true
    template:
      databaseInstance: openstack
      customServiceConfig: |
        [DEFAULT]
        enabled_backends = default_backend:file
        [glance_store]
        default_backend = default_backend
        [default_backend]
        filesystem_store_datadir = /var/lib/glance/images/
      storage:
        storageRequest: 10G
      keystoneEndpoint: nfs
      glanceAPIs:
        nfs:
          replicas: 3
          type: single
          override:
            service:
              internal:
                metadata:
                  annotations:
                    metallb.universe.tf/address-pool: internalapi
                    metallb.universe.tf/allow-shared-ip: internalapi
                    metallb.universe.tf/loadBalancerIPs: 172.17.0.80 <3>
                spec:
                  type: LoadBalancer
          networkAttachments:
            - storage
EOF
----
+
<1> Replace `<nfs_export_path>` with the exported path in the `nfs-server`.
<2> Replace `<nfs_server_ip>` with the IP address that you use to communicate with the `nfs-server`.
<3> If you use IPv6, change the load balancer IP to the load balancer IP in your environment, for example, `metallb.universe.tf/loadBalancerIPs: fd00:bbbb::80`.

. Patch the `OpenStackControlPlane` CR to deploy the {image_service} with an NFS back end:
+
----
$ oc patch openstackcontrolplane openstack --type=merge --patch-file glance_nfs_patch.yaml
----

. Patch the `OpenStackControlPlane` CR to remove the default {image_service}:
+
----
$ oc patch openstackcontrolplane openstack --type=json -p="[{'op': 'remove', 'path': '/spec/glance/template/glanceAPIs/default'}]"
----

.Verification

* When `GlanceAPI` is active, confirm that you can see a single API instance:
+
----
$ oc get pods -l service=glance
NAME                  READY   STATUS    RESTARTS
glance-nfs-single-0   2/2     Running   0
glance-nfs-single-1   2/2     Running   0
glance-nfs-single-2   2/2     Running   0
----

* Ensure that the description of the pod reports the following output:
+
----
$ oc describe pod glance-nfs-single-0
...
Mounts:
...
  nfs:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    {{ server ip address }}
    Path:      {{ nfs export path }}
    ReadOnly:  false
...
----

* Check that the mountpoint that points to `/var/lib/glance/images` is mapped to the expected `nfs server ip` and `nfs path` that you defined in the new `GlanceAPI` instance:
+
----
$ oc rsh -c glance-api glance-nfs-single-0

sh-5.1# mount
...
{{ ip address }}:/var/nfs on /var/lib/glance/images type nfs4 (rw,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=172.18.0.5,local_lock=none,addr=172.18.0.5)
...
----

* Confirm that the UUID is created in the exported directory on the NFS node. For example:
+
----
$ oc rsh openstackclient
$ openstack image list

sh-5.1$ curl -L -o /tmp/cirros-0.6.3-x86_64-disk.img http://download.cirros-cloud.net/0.6.3/cirros-0.6.3-x86_64-disk.img
...
sh-5.1$ openstack image create --container-format bare --disk-format raw --file /tmp/cirros-0.6.3-x86_64-disk.img cirros
...
sh-5.1$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 634482ca-4002-4a6d-b1d5-64502ad02630 | cirros | active |
+--------------------------------------+--------+--------+
----

* On the `nfs-server` node, the same `uuid` is in the exported `/var/nfs`:
+
----
$ ls /var/nfs/
634482ca-4002-4a6d-b1d5-64502ad02630
----
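* Optional: confirm that the image endpoint that is registered for the new `nfs` `GlanceAPI` instance is present in the service catalog. The following is a minimal sketch, assuming that the `openstackclient` pod is available; the endpoint URLs depend on your environment:
+
----
$ oc rsh openstackclient
sh-5.1$ openstack endpoint list --service image
----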
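* Optional: exercise the read path of the NFS back end by downloading the image that you created in the previous verification step. The following is a minimal sketch, assuming that the `cirros` image exists and that `/tmp/cirros-check.img` is a writable path in the `openstackclient` pod:
+
----
$ oc rsh openstackclient
sh-5.1$ openstack image save --file /tmp/cirros-check.img cirros
sh-5.1$ ls -l /tmp/cirros-check.img
----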