How to use AWS Storage Gateway to expose Amazon S3 locally as an NFS share


What is AWS Storage Gateway?


AWS Storage Gateway is an on-premises software appliance that exposes AWS cloud storage to your local environment. You deploy the appliance, connect it to the AWS Storage Gateway backend service and create a File Share (S3 or Glacier backed) or a Volume (EBS snapshot backed):

In this blog post I will show you how to deploy the AWS Storage Gateway appliance in a VMware vSphere environment and how to expose an NFS share that’s backed by S3.

Creating an S3 bucket

Select your region of choice, and create an S3 bucket. For the purpose of this short demo, I will use all the default settings. Remember, S3 bucket names must be globally unique:
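
If you prefer the command line over the console, the same bucket can be created with the AWS CLI. The bucket name and region below are placeholders for this demo:

```
# Create the demo bucket (placeholder name; S3 bucket names must be globally unique)
aws s3 mb s3://my-storage-gateway-demo-bucket --region eu-west-1
```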

Deploying the AWS Storage Gateway Appliance

Go to the Storage Gateway service and create a new Gateway. We will be deploying a File Gateway:

Download the VMware ESXi OVA image and deploy it in your vSphere environment. For demo purposes, I will use a thin provisioned disk. Before booting the appliance, I added an additional 200GB thin provisioned disk which will be used as a cache disk by the Storage Gateway:
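
For reference, the OVA can also be deployed unattended with VMware's ovftool. This is only a sketch with placeholder names for the vCenter, datastore and OVA file; the console-driven deployment shown above works just as well:

```
# Deploy the Storage Gateway OVA with a thin provisioned disk (placeholder inventory paths)
ovftool --acceptAllEulas \
    --name=aws-storage-gateway \
    --datastore=datastore1 \
    --diskMode=thin \
    ./aws-storage-gateway.ova \
    'vi://administrator%40vsphere.local@vcenter.example.com/Datacenter/host/Cluster'
```

The extra 200GB cache disk still has to be added afterwards (via the vSphere client or PowerCLI) before booting the appliance.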

After initial bootup, open the remote console and configure the Storage Gateway appliance in terms of network configuration and time synchronisation:

Once the Storage Gateway appliance is fully configured, return to the AWS Console to finish configuration of the backend services:

Configuring the AWS Storage Gateway


Activate the gateway and configure the available 200GB disk as a local cache. Uploads to the AWS Storage Gateway backend services are cached here for performance and latency reasons and then uploaded to the AWS cloud.
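
The cache disk can also be registered through the AWS CLI once the gateway is activated. The gateway ARN and disk ID below are placeholders; list-local-disks returns the real disk ID of the 200GB disk:

```
# Placeholder gateway ARN for illustration
GATEWAY_ARN="arn:aws:storagegateway:eu-west-1:123456789012:gateway/sgw-12345678"

# Find the disk ID of the unallocated 200GB disk
aws storagegateway list-local-disks --gateway-arn "$GATEWAY_ARN"

# Register that disk as the gateway's local cache
aws storagegateway add-cache \
    --gateway-arn "$GATEWAY_ARN" \
    --disk-ids "pci-0000:03:00.0-scsi-0:0:1:0"
```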

Create the AWS Storage Gateway file share

The final step in the process is the creation of the Storage Gateway file share. This is where you connect the Storage Gateway to the previously configured S3 bucket:

Pay special attention to the Allowed clients and Squash level security settings:

Allowed Clients is configured by default to 0.0.0.0/0, which means all clients on your network are allowed to connect to the Storage Gateway appliance.

Note: this is the ACL for the local networks connecting to your on-premises Storage Gateway appliance. It has nothing to do with exposing your data to the outside world.

For Squash level, choose the squash level setting you want for your file share, and then choose Save; both settings also appear in the CLI sketch after this list. Possible values are the following:

  • Root squash (default) – Access for the remote superuser (root) is mapped to UID (65534) and GID (65534).
  • No root squash – Remote superuser (root) receives access as root.
  • All squash – All user access is mapped to UID (65534) and GID (65534).
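
For completeness, here is roughly what the same file share creation looks like with the AWS CLI, including the Allowed clients and Squash level settings discussed above. The ARNs, IAM role and client CIDR are placeholders (GATEWAY_ARN is the variable from the cache example), and the role must allow the gateway to access the bucket:

```
aws storagegateway create-nfs-file-share \
    --client-token "$(uuidgen)" \
    --gateway-arn "$GATEWAY_ARN" \
    --location-arn "arn:aws:s3:::my-storage-gateway-demo-bucket" \
    --role "arn:aws:iam::123456789012:role/StorageGatewayS3AccessRole" \
    --client-list "10.0.0.0/24" \
    --squash "RootSquash"
```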

Mounting the NFS share

AWS provides a nice set of mount commands for Linux, Windows and macOS systems:
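
On Linux the mount command follows this pattern, with the gateway IP, bucket name and mount path replaced by your own values:

```
# Mount the file share over NFS (nolock and hard are the recommended options)
sudo mount -t nfs -o nolock,hard 192.168.1.50:/my-storage-gateway-demo-bucket /mnt/storage-gateway
```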

I can now use the NFS share just like any other NFS share, as a file-based storage solution:

In the backend, all files are stored as objects in the specified S3 bucket:

As you might have noticed, I also use the AWS Storage Gateway solution to store my homelab backups in Glacier. You can treat the S3 bucket just like a normal S3 bucket and configure data management policies on it, such as when to transition data from S3 to S3 Infrequent Access and/or Glacier. You can also do cross-region replication, versioning, encryption, and so on. Do keep an eye on your costs!
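
As an example, a lifecycle configuration along these lines transitions objects to S3 Infrequent Access after 30 days and to Glacier after 90 days. The bucket name and rule ID are placeholders; adjust the transition days to your own retention needs:

```
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-homelab-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --bucket my-storage-gateway-demo-bucket \
    --lifecycle-configuration file://lifecycle.json
```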

