After posting my blog post about obtaining my AWS Solutions Architect – Associate (SAA) certification, my blog sponsor asked me if I could write a post covering the AWS capabilities of Nakivo Backup & Replication. I immediately jumped at the opportunity, because this is a nice, concrete use case I can build in my AWS environment.
Setting up my lab environment
Inspired by Ryan Kroonenburg, I decided to set up a topology similar to the one Ryan builds in the ‘The Real World – Creating a fault tolerant Word Press Site’ section of his course. Basically, it is a set of EC2 instances running a vanilla WordPress installation behind an Application Load Balancer, with RDS as the MySQL backend. Very basic, and very simple to set up in just a couple of minutes. I’m not going to reproduce the steps in this blog post because the focus is on backing up EC2 instances, not on deploying WordPress on EC2. If you have any questions, feel free to ask, or have a look at the video course by A Cloud Guru. Anyway, here is a screenshot of my fancy WordPress website running in AWS:
Getting Nakivo Backup & Replication up and running
Deploying the AMI
In order to get Nakivo’s software up and running in EC2, I chose the free community AMI called “NAKIVO_Backup_Replication_v7.2.0_Free_Edition-360aecd0-4610-4ac4-ab76-b77f558323d0-ami-e1858a9a.4”. This AMI still runs v7.2, but it can be updated to v7.3 after deployment if necessary. If this is the first time you are using a Nakivo community AMI, you will be asked to accept the terms and conditions in the AWS Marketplace:
The benefit of using a community AMI is that everything is pre-provisioned. The Security Group with the appropriate inbound and outbound rules is automatically created, and the Nakivo software is obviously already installed. Saved me a bunch of time!
I had the EC2 instance up and running in no time at all. Next up: the configuration of Nakivo Backup & Replication.
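For the curious: deploying from a community AMI boils down to a plain `run_instances` call against the EC2 API. Here is a minimal, hedged sketch of the parameters involved; the AMI ID, key pair name, and instance type below are placeholders of my own, not values from the deployment above.

```python
# Sketch: parameters for launching the Nakivo community AMI via the EC2 API.
# AMI_ID, KEY_NAME, and the instance type are placeholders; substitute your own.
AMI_ID = "ami-xxxxxxxx"   # the NAKIVO Free Edition community AMI in your region
KEY_NAME = "my-keypair"   # hypothetical key pair name

def run_instances_params(ami_id, key_name, instance_type="t2.medium"):
    """Build the parameter dict you would pass to ec2_client.run_instances()."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,  # instance type is an assumption, size to taste
        "KeyName": key_name,
        "MinCount": 1,
        "MaxCount": 1,
    }

params = run_instances_params(AMI_ID, KEY_NAME)
print(params)
# Real call (needs boto3 and AWS credentials):
#   boto3.client("ec2").run_instances(**params)
```

The Marketplace console does the same thing for you behind its launch wizard, which is why the click-through route is so quick.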
Nakivo Inventory configuration
For lab purposes, my web interface is exposed directly to the internet on port 4443. Using the FQDN or public IP address, I accessed it through the URL https://machine_IP_or_DNS:4443. The Nakivo appliance needs programmatic access to the AWS APIs, so an Access Key ID is required. By providing my AWS Access Key ID and Secret Access Key, I was able to connect my AWS account to the Nakivo Inventory. I’m not sure where this information is stored on the Nakivo appliance; I do hope it is stored encrypted instead of in plain text. After the initial sync finished, I could see all my EC2 instances:
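A small sketch of the two pieces involved here: constructing the management URL (port 4443, as above), and sanity-checking the access keys before handing them to the appliance. The STS call in the comment is a standard way to verify credentials; the hostname below is an example of my own.

```python
# Sketch: the Nakivo web interface URL, built from host and port.
def nakivo_url(host, port=4443):
    """Return the HTTPS URL of the Nakivo web interface."""
    return f"https://{host}:{port}"

print(nakivo_url("ec2-203-0-113-10.compute-1.amazonaws.com"))

# To verify an Access Key ID / Secret Access Key pair works before entering it
# in the Inventory, a common trick is an STS identity call (needs boto3):
#   boto3.client("sts", aws_access_key_id=KEY,
#                aws_secret_access_key=SECRET).get_caller_identity()
```

If the STS call succeeds, the keys are valid; whether they carry enough EC2/EBS permissions for backups is a separate IAM question.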
Nakivo Transporter configuration
The appliance comes with an onboard Transporter, which is Nakivo lingo for a backup proxy. Because this is a simple, single-region deployment, there is no need to deploy additional Transporters:
To store backups, Nakivo needs a Backup Repository. You can choose to deploy it on a shared CIFS or NFS share, on a local disk, or on Elastic Block Store (EBS). I chose the latter, in combination with the cold magnetic storage tier called Cold HDD (sc1). The minimum size is 500 GiB, and capacity can automatically grow in 500 GiB increments. Nakivo provisions the EBS volume for you and attaches it to the Nakivo appliance:
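The 500 GiB minimum plus 500 GiB increments make the provisioned capacity easy to reason about. A quick sketch of that sizing rule, as I understand it from the UI:

```python
import math

def repo_size_gib(needed_gib, increment=500, minimum=500):
    """EBS repository capacity as Nakivo appears to provision it:
    a 500 GiB minimum, grown in 500 GiB increments."""
    return max(minimum, math.ceil(needed_gib / increment) * increment)

print(repo_size_gib(120))   # 500  (below the minimum, so you get 500 GiB)
print(repo_size_gib(740))   # 1000 (rounded up to the next 500 GiB step)
```

In other words, you always pay for at least 500 GiB of sc1, so the repository is most cost-effective once your backup data actually approaches each provisioned step.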
For the NFS option, Amazon EFS is a natural fit: the pay-as-you-grow pricing model and the virtually unlimited scalability of EFS are appealing from a business-case perspective, and it’s very easy to create an NFS file system using EFS. Unfortunately, there is no native support for S3 or Glacier as a target for Backup Repositories. I guess I could deploy a Storage Gateway, but that’s beyond the scope of this post. The combination of a Backup Repository on S3 and lifecycle rules that automatically transition backup data to colder storage tiers would make for a very resource- (and cost-)efficient backup solution.
I wonder if S3 and Glacier support is on the Nakivo roadmap. I will update this post if I get an update from Nakivo!
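To make the tiering idea concrete: if an S3-backed repository ever materializes, the transitions could be expressed as an ordinary S3 lifecycle configuration. This is purely hypothetical on my part; the prefix, day thresholds, and retention period below are made up for illustration.

```python
# Hypothetical S3 lifecycle configuration: move backup objects to colder
# storage classes over time, then expire them. All values are illustrative.
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-backups",
            "Status": "Enabled",
            "Filter": {"Prefix": "backups/"},          # made-up key prefix
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},               # made-up retention period
        }
    ]
}
# Applying it would be one boto3 call (needs credentials and a real bucket):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="my-backup-bucket", LifecycleConfiguration=lifecycle_config)
```

That single rule would silently demote month-old backups to Infrequent Access and quarter-old backups to Glacier, which is exactly the cost curve you want for backup data.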
Backing up my EC2 instance
Backing up my EC2 instance is no different from setting up a backup job in an on-premises environment. There are five basic steps:
What Nakivo does for you is automatically create a temporary EBS snapshot of your EC2 instance and then copy it into the Backup Repository. You can find out more about the inner workings at https://www.nakivo.com/aws-ec2-instance-backup/ and in a Nakivo blog post called “Amazon EC2 Backup with NAKIVO Backup & Replication”. The next screenshot shows the creation of the temporary EBS snapshot during the backup job.
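The temporary-snapshot workflow can be sketched in boto3-style calls. To be clear, this is my reading of what Nakivo does under the hood, not its actual code; the stand-in client below just records the calls so the sketch runs without AWS credentials.

```python
class FakeEC2:
    """Stand-in for a boto3 EC2 client; records the calls the workflow makes."""
    def __init__(self):
        self.calls = []
    def create_snapshot(self, VolumeId, Description):
        self.calls.append(("create_snapshot", VolumeId))
        return {"SnapshotId": "snap-temp123"}
    def delete_snapshot(self, SnapshotId):
        self.calls.append(("delete_snapshot", SnapshotId))

def backup_instance_volume(ec2, volume_id, copy_to_repo):
    """Snapshot the volume, copy the snapshot's data into the Backup
    Repository, then always delete the temporary snapshot."""
    snap = ec2.create_snapshot(VolumeId=volume_id,
                               Description="temporary backup snapshot")
    try:
        copy_to_repo(snap["SnapshotId"])   # Nakivo reads the snapshot into its repo
    finally:
        ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])

ec2 = FakeEC2()
backup_instance_volume(ec2, "vol-0abc", copy_to_repo=lambda sid: None)
print(ec2.calls)
# [('create_snapshot', 'vol-0abc'), ('delete_snapshot', 'snap-temp123')]
```

The `try/finally` matters: the temporary snapshot carries EBS storage cost, so it should be cleaned up even if the copy into the repository fails.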
Instance Level and File Level Restores
Besides being able to do a full instance restore, you can also do file-level restores of EC2 instances. This is a fully agentless solution for both Windows and Linux instances:
File Level Restore
Instance Level Restore
Breaking something and restoring
I first tried to break my WordPress web server by deleting the index.php:
My website reverted to the default Amazon Linux AMI web page, so I had officially broken my WordPress site. Unfortunately, there is no support for an in-place restore to the original location; you can only download the file to your local host (or forward it by email):
After downloading the file and SFTP-ing it back to its original location, my shiny website was up and running again. To simulate a more rigorous failure, an accidental termination of the EC2 instance, I tested the instance-level restore option:
After a brief restore operation, my EC2 instance was up and running again. Very nice!
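One closing note on the file-level workflow: since there is no in-place restore, the download-then-upload round trip is worth scripting. A small sketch that builds the `scp` command for the re-upload step; the username, host, key file, and paths are examples of my own, not from the restore above.

```python
import subprocess

def scp_upload_command(local_path, user, host, remote_path, key_file=None):
    """Build the scp command to push a restored file back to an instance."""
    cmd = ["scp"]
    if key_file:
        cmd += ["-i", key_file]                       # SSH private key, if needed
    cmd += [local_path, f"{user}@{host}:{remote_path}"]
    return cmd

cmd = scp_upload_command("index.php", "ec2-user", "203.0.113.10",
                         "/var/www/html/index.php", key_file="my-key.pem")
print(" ".join(cmd))
# Real upload (needs SSH access to the instance):
#   subprocess.run(cmd, check=True)
```

Wrapped in a short loop over restored files, that would turn the manual SFTP step into a one-liner per restore.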