How to Build a VMware Homelab – Step-By-Step Tutorial Series: Deploy and Configure vCenter Server and vSAN

In this blog post we are going to pick up right where we left off last time. Just to recall: we now have a physical ESXi server with 3 nested ESXi servers, with switching and routing in place thanks to our VyOS appliance. We also have a domain controller that provides our core infrastructure services. In this part of the series, we are going to download, deploy and configure vCenter Server.
Download, Deploy and Configure vCenter Server
Deploy vCenter Server
Step 1 is to grab the binaries. Log on to VMware Customer Connect or Partner Connect and download the ISO file for vCenter Server Standard. The vCenter Server installation is done from a Windows, Mac or Linux client, so start the installer from the right folder. I’m using my Windows domain controller as my jumphost, so I use the Windows installer. The deployment is done in 2 stages. In Stage 1, the vCenter Server Appliance is deployed, and in Stage 2, the SSO domain is created and vCenter Server’s configuration is completed. Let’s move forward with Stage 1.
Start the Installer and follow the wizard to start Stage 1 of the deployment process

You need to provide the target ESXi server to which you want to deploy vCenter Server. That is the physical ESXi server with IP address 192.168.2.35. After providing the root username and password, click Next. Provide the VM name and set a root password for the vCenter Server appliance.
For lab purposes, a Tiny deployment is more than enough:

I deploy the appliance onto my fastest VMFS datastore, which is backed by my Samsung SSD 960 Pro NVMe drive.
On the next screen, provide the network configuration:

Make sure you select the correct Network (port group). In my case, it’s the Nested-ESXi-Management Network and I use IP address 10.0.10.100/24. This is also the network to which my nested ESXi servers are connected. Be extra cautious to provide the correct FQDN and make sure DNS forward AND reverse lookup is working properly to resolve the FQDN and IP address. Stage 2 of the process will fail, or get stuck at 0%, if DNS is not set up correctly. So, if you are following along step-by-step, open the DNS management console on the domain controller and add a new host (A) record to the homelab.local lookup zone. I’m using vcenter.homelab.local with IP address 10.0.10.100. Also make sure the PTR record is created.
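If you want to double-check name resolution from your jumphost before kicking off Stage 2, a quick Python sketch like the one below does the trick (standard library only; the FQDN and IP address are the ones used in this lab, so adjust them if yours differ):

```python
import socket

FQDN = "vcenter.homelab.local"
EXPECTED_IP = "10.0.10.100"

# Forward lookup: FQDN -> IP (uses the A record on the domain controller)
ip = socket.gethostbyname(FQDN)
print(f"Forward lookup : {FQDN} -> {ip}")
assert ip == EXPECTED_IP, "A record does not point at the expected IP"

# Reverse lookup: IP -> FQDN (uses the PTR record)
name, _, _ = socket.gethostbyaddr(EXPECTED_IP)
print(f"Reverse lookup : {EXPECTED_IP} -> {name}")
assert name.lower() == FQDN, "PTR record does not resolve back to the FQDN"
```

If either assertion fails, fix the DNS records before continuing; it will save you a stuck Stage 2.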

Click Next and click Finish to kick off Stage 1 of the deployment process:

Once Stage 1 is completed, click Continue to set up vCenter Server
Configure NTP with IP address 192.168.2.36 to sync time between the domain controller and vCenter Server. I also like to enable SSH for troubleshooting purposes.
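Before you point vCenter at the domain controller for time, it doesn’t hurt to verify that the NTP service on 192.168.2.36 actually answers. Here is a minimal SNTP query in Python (standard library only; no third-party NTP module assumed):

```python
import socket
import struct
import time

NTP_SERVER = "192.168.2.36"    # the domain controller in this lab
NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def query_ntp(server, timeout=5):
    # Minimal SNTP client request: LI=0, VN=3, Mode=3 (client)
    packet = b"\x1b" + 47 * b"\0"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
    # Transmit timestamp (bytes 40-47): seconds + fraction since 1900
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

server_time = query_ntp(NTP_SERVER)
print("NTP server time :", time.ctime(server_time))
print("Local time      :", time.ctime())
print(f"Offset (seconds): {server_time - time.time():+.3f}")
```

A small offset is fine; if the query times out, check that the Windows Time service on the domain controller is configured to serve NTP.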

We are creating a new SSO (Single Sign-On) domain for vCenter Server. Use the default name vsphere.local and provide the password for the administrator@vsphere.local account.

Disable CEIP on the next screen, click Next, confirm the configuration and click Finish.

vCenter Server is now being configured. Wait for Stage 2 to finish.
Configure vCenter Server
Open a browser and hop over to https://vcenter.homelab.local:443. Here we are going to configure our vCenter Server, our Cluster, Networking and vSAN. Step 1 is to provide proper licensing. I’m a vExpert, so I will use the NFR licenses provided to me by the awesome vExpert program.
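Everything that follows can be done in the vSphere Client, but if you like to script parts of your lab, pyVmomi works fine against the appliance we just deployed. A minimal connection sketch, assuming the SSO administrator account and the lab FQDN from above (the password is a placeholder):

```python
import ssl
from pyVim.connect import SmartConnect

# The lab appliance uses a self-signed certificate, so skip verification here
ctx = ssl._create_unverified_context()

si = SmartConnect(
    host="vcenter.homelab.local",
    user="administrator@vsphere.local",
    pwd="VMware1!",          # placeholder - use your own SSO password
    sslContext=ctx,
)
content = si.RetrieveContent()
print(content.about.fullName)  # prints the vCenter Server version string
```

The later snippets in this post reuse the `si` and `content` objects from this connection.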
Now create the Datacenter object and a new Cluster. For the Datacenter, I like to use the name Nested-Datacenter and for the cluster Nested-Cluster. Enable DRS but don’t enable HA yet. That will come after we configure vSAN.

For DRS, the default settings are good. Now it’s time to add our nested ESXi hosts to the Cluster. Right-click the new cluster, select Add Hosts, provide the FQDNs and provide the credentials.

I also like to create a second datacenter called Physical-Datacenter, a cluster called Physical-Cluster, and add my physical ESXi host to it. This way, I can manage both my physical and nested ESXi servers from one console. It does become a bit confusing to see the nested ESXi servers as VMs in the Physical-Datacenter and as ESXi hosts in the Nested-Datacenter, but getting access to features like Storage vMotion is definitely worth it.
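For reference, the datacenter and cluster objects can also be created programmatically. Below is a pyVmomi sketch, assuming the `content` object from the connection snippet above and the object names used in this post; the ESXi FQDN and root password are placeholders:

```python
from pyVmomi import vim
from pyVim.task import WaitForTask

# Datacenter and cluster, with DRS enabled and HA left off for now
dc = content.rootFolder.CreateDatacenter(name="Nested-Datacenter")
cluster_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True),
)
cluster = dc.hostFolder.CreateClusterEx(name="Nested-Cluster", spec=cluster_spec)

# Add one nested ESXi host; repeat for the other two.
# Without an sslThumbprint the task fails with an SSLVerifyFault that
# contains the host's thumbprint, which you can then pass in and retry.
connect_spec = vim.host.ConnectSpec(
    hostName="esxi01.homelab.local",   # placeholder FQDN
    userName="root",
    password="VMware1!",               # placeholder
    force=False,
)
WaitForTask(cluster.AddHost_Task(spec=connect_spec, asConnected=True))
```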

Configure the vSphere Distributed Switch
In the current configuration, each ESXi server holds a local Standard vSwitch that we need to manage individually. We want to migrate this to a single Distributed Switch that we can manage from vCenter Server. Go to Networking and create a new Distributed Switch. Name it vds01, for example. Select the latest version and configure it with 2 uplinks and Network I/O Control enabled. Deselect Create a default port group.

Now, migrating VMkernel interfaces from a Standard Switch to a Distributed Switch has always given me problems when I tried to take too many steps at the same time, so I now prefer a phased approach. I first add the hosts to the Distributed Switch and only connect Uplink 2 to vmnic1, which is still unused on the Standard Switch. I then migrate the VMkernel adapters and finally migrate vmnic0 to Uplink 1 of the Distributed Switch. Start the wizard by right-clicking the switch and selecting Add and Manage Hosts.

Next, go to the Advanced tab of the Distributed Switch settings page and configure an MTU size of 9000 bytes. Now, create the Distributed Port Groups for ESXi-Management (VLAN 10), vMotion (VLAN 4) and vSAN (VLAN 8). Configure all the Teaming and failover policies with “Route based on physical NIC load”. To add the VMkernel adapters for vMotion and vSAN, right-click the vMotion port group and click Add VMkernel Adapters…
Select all the attached hosts and click Next. Select the vMotion TCP/IP stack and click Next. Provide static IP addresses 10.0.4.101 – 10.0.4.103 with subnet mask 255.255.255.0, and configure the gateway 10.0.4.254 on the vMotion TCP/IP stack. Repeat the process for the vSAN port group: select the Default TCP/IP stack and only select the vSAN service from the list. I use IP addresses 10.0.8.101/24 – 10.0.8.103/24 with no gateway configured. Finally, go to each ESXi host and delete the now-empty Standard vSwitch.
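After the migration, I like to double-check that every host ended up with the VMkernel adapters and IP addresses I expect. A small pyVmomi sketch that walks the inventory and prints each vmk interface (again assuming the `content` object from the earlier connection snippet):

```python
from pyVmomi import vim

# Walk all ESXi hosts in the inventory and list their VMkernel adapters
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], recursive=True
)
for host in view.view:
    print(host.name)
    for vnic in host.config.network.vnic:
        ip = vnic.spec.ip.ipAddress
        mtu = vnic.spec.mtu
        print(f"  {vnic.device}: {ip} (MTU {mtu})")
```

You should see vmk0 on the management network plus the new vMotion and vSAN adapters with the addresses configured above.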
Configure vSAN and vSphere HA
The next step is to enable and configure vSAN. The nested ESXi servers have a 20GB cache disk and a 200GB capacity disk. Go to Cluster settings, vSAN, Services and configure vSAN. Assign the appropriate disks and follow the wizard.
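If you want to confirm which disks each nested host exposes before claiming them (the 20GB drive should become the cache tier and the 200GB drive the capacity tier), here is a quick pyVmomi sketch that lists the local disks and their sizes (assumes the `content` object from before):

```python
from pyVmomi import vim

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], recursive=True
)
for host in view.view:
    print(host.name)
    for lun in host.config.storageDevice.scsiLun:
        # Only physical disks are interesting here, not CD-ROM devices etc.
        if isinstance(lun, vim.host.ScsiDisk):
            size_gb = lun.capacity.block * lun.capacity.blockSize / 1024**3
            flash = "SSD" if lun.ssd else "HDD"
            print(f"  {lun.canonicalName}: {size_gb:.0f} GB ({flash})")
```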
Finally, enable vSphere HA on the cluster with the following settings:
- Enable Host Monitoring: Yes
- Host Failure Response: Restart VMs
- Datastore with PDL: Disabled
- Datastore with APD: Disabled
- VM Monitoring: VM Monitoring Only
In a production environment you’d want to enable admission control, but because this is a lab environment, I will leave it disabled so all of the cluster’s resources remain available for consumption.
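The same HA settings can also be applied through the API. A sketch with pyVmomi, assuming the `cluster` object from the earlier snippet and the settings listed above:

```python
from pyVmomi import vim
from pyVim.task import WaitForTask

das_config = vim.cluster.DasConfigInfo(
    enabled=True,                     # turn on vSphere HA
    hostMonitoring="enabled",         # Enable Host Monitoring: Yes
    vmMonitoring="vmMonitoringOnly",  # VM Monitoring only, no app monitoring
    admissionControlEnabled=False,    # lab environment: no admission control
)
# PDL/APD responses and the host failure response are left at the values
# chosen in the vSphere Client above; this only covers the basic toggles.
spec = vim.cluster.ConfigSpecEx(dasConfig=das_config)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=spec, modify=True))
```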
And that’s it!
Next steps
In the next part in the series, we will finally get our hands dirty with VMware NSX. See you next time!