How I configured VLANs, VXLANs and OSPF in my nested VMware homelab

Nested homelab introduction

I was chatting with someone the other week about homelab setups, and the old debate of running on physical hosts vs. running nested came up. I run a completely nested homelab with stretched vSAN, NSX-v, et cetera, and as a follow-up to that conversation I decided to do a short write-up on how I configured it.

Basic homelab setup

I have a single SuperMicro SuperServer mini tower with 128GB RAM, an 8-core Xeon D-1541 and some fast SSD storage. I run a 4-node stretched vSAN management cluster for all my SDDC components/workloads and a 2-node workload cluster with nested (FreeNAS) NFS storage. I created two standard virtual switches on the physical ESXi host: one vSwitch holds the actual uplink to my home network with the VMkernel management interface, and the other is an internal vSwitch without any physical uplinks that is used purely for nested virtualization.
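
For reference, creating that second, uplink-less vSwitch only takes one esxcli command on the physical host (a minimal sketch; I'm using the Nested-ESXi vSwitch name from this post, and the same thing can of course be done in the vSphere Client):

    # Create the internal vSwitch for nested traffic; with no vmnic uplink
    # attached, traffic on it can never reach the physical home network
    esxcli network vswitch standard add --vswitch-name=Nested-ESXi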

My home network is nothing fancy, by the way; it's just an ISP-provided internet router with Wi-Fi and a switch on board. From a homelab perspective, I only use the vSwitch connected to my home network for basic administration of the physical ESXi host and for remote access to my nested lab environment (I use a Windows domain controller as a simple RDP jumphost).

The Nested-ESXi internal vSwitch holds all the nested virtualization magic: the security policy exceptions Promiscuous Mode, Forged Transmits and MAC Address Changes are all set to Accept. I also configured this internal vSwitch for jumbo frames (MTU 9000) so I can add VXLAN overlay networks using NSX-v. Because all my nested virtualization traffic stays on the internal vSwitch, I never have to worry about jumbo frames in my physical home network; VXLAN traffic will never hit the actual physical wire. And because everything runs nested, I can even run all-flash vSAN with all data services without having to invest in 10GbE physical network hardware. Thanks to the fast NVMe SSD, performance is more than fine…
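
Here is roughly what those settings look like with esxcli on the physical host (a sketch; everything can equally be configured through the vSphere Client):

    # Allow the nested hypervisors to see and source "foreign" MAC addresses
    esxcli network vswitch standard policy security set --vswitch-name=Nested-ESXi \
        --allow-promiscuous=true --allow-forged-transmits=true --allow-mac-change=true
    # Jumbo frames on the internal vSwitch for the VXLAN overlay
    esxcli network vswitch standard set --vswitch-name=Nested-ESXi --mtu=9000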

Infrastructure services

I’m running all required infrastructure services for my homelab, such as NTP, DNS and AD, on the Windows domain controller / jumphost, which runs as a VM on the physical ESXi host. Everything running directly on the physical ESXi host can be considered a “physical” workload, so in the real-world analogy I would be running a physical domain controller outside of my VMware environment.

How to use VLANs in nested ESXi

All my nested ESXi hosts have two uplinks to simulate a typical enterprise vSphere deployment; the nested ESXi hosts are simply equipped with two virtual NICs.

From the nested view, those same two NICs appear as the nested Distributed Switch’s physical adapters.

Both uplinks are connected to the VLAN 4095 Trunk port group on the internal Nested-ESXi vSwitch. By using VLAN ID 4095, I basically created a trunk port group that passes all VLAN tags through to the guest OS (Virtual Guest Tagging, or VGT).

This allows me to do the VLAN tagging on the nested ESXi hosts’ port groups, with ESXi itself being my guest OS.
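
On the physical host, the trunk is just a standard port group with VLAN ID 4095; a minimal esxcli sketch (the nested hosts’ port groups then get their real VLAN IDs the normal way, on the nested Distributed Switch):

    # Trunk port group on the physical host: VLAN ID 4095 = pass all tags (VGT)
    esxcli network vswitch standard portgroup add --portgroup-name="VLAN 4095 Trunk" --vswitch-name=Nested-ESXi
    esxcli network vswitch standard portgroup set --portgroup-name="VLAN 4095 Trunk" --vlan-id=4095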

Routing between VLANs

I’m simulating a physical network routing layer by running a pfSense appliance directly on the physical ESXi host. Its WAN uplink is connected to the home network port group on the physically connected vSwitch, and its LAN interface is connected to the trunk port group on the internal Nested-ESXi vSwitch. This allows me to create VLAN interfaces on the internal-facing interface in pfSense and route between my physical home network and the VLANs living in my nested virtualization world.

I’m allowing pfSense admin traffic on the WAN interface so I can easily manage it directly from my home network. Among the VLAN interfaces I have created in pfSense you will see references to Site A and Site B: I have logically created a dual-site environment so I can use stretched vSAN.
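
pfSense handles all of this in its web UI (Interfaces > Assignments > VLANs), but since pfSense is FreeBSD underneath, the rough shell equivalent looks like this (a sketch only; the parent NIC name vmx1 and the addressing are made up for illustration):

    # VLAN 10 sub-interface on the trunk-facing LAN NIC (hypothetical names/IPs)
    ifconfig vmx1.10 create
    ifconfig vmx1.10 inet 172.16.10.1/24 up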

Enabling VXLAN by configuring jumbo frames end-to-end

As I mentioned before, I configured the internal Nested-ESXi vSwitch with jumbo frame support. To actually use jumbo frames, you have to configure a consistent MTU size end to end; this matters for VXLAN in particular, because the encapsulation adds roughly 50 bytes of overhead to every frame. In this case the physical ESXi host’s internal vSwitch, the nested Distributed Switch, the VMkernel interfaces and the VLAN interfaces on the pfSense appliance must all be configured for jumbo frames.
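
A sketch of the nested-host side, plus a quick way to validate the whole path (I’m assuming vmk3 is the VXLAN VMkernel interface here; yours may differ):

    # Inside a nested ESXi host: jumbo frames on the VXLAN VMkernel interface
    esxcli network ip interface set --interface-name=vmk3 --mtu=9000
    # Validate end to end from the VXLAN netstack: don't fragment (-d),
    # and 8972-byte payload + 8B ICMP header + 20B IP header = exactly 9000
    vmkping ++netstack=vxlan -d -s 8972 <remote VTEP IP>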

NSX logical routing and switching

I deployed a typical NSX logical routing and switching topology in my homelab. I have two NSX Edges deployed, and they are configured for equal-cost multipath (ECMP) routing to my “physical core network”, the pfSense appliance, using OSPF.

I am running Quagga OSPF on the pfSense appliance, and my NSX Edges are peering nicely with it.
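
On the pfSense side this boils down to a small Quagga OSPF configuration; a minimal sketch (the router ID and transit network below are made-up values for illustration):

    ! ospfd.conf sketch: advertise the Edge-facing transit network into area 0
    router ospf
     ospf router-id 192.168.1.254
     network 172.16.20.0/24 area 0.0.0.0
     redistribute connected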

Downstream, the NSX Edges peer with my NSX Distributed Logical Router (via its Control VM).

The DLR is, of course, the first-hop gateway for the logical networks (VXLANs) attached to it.

VPN Remote Access

I was thinking about configuring VPN remote access to my homelab through NSX, but in the end I decided I would be better off investing a bit of money in a dedicated out-of-band VPN appliance: I bought a Raspberry Pi :-). With PiVPN and some basic port forwarding on my internet router, I was quickly VPN-ing into my homelab from all over the world. Combined with the SuperMicro IPMI interface, I can even turn my homelab on or off remotely.
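
If you want to replicate the Raspberry Pi approach: PiVPN’s documented installer is a one-liner, and after that all the internet router needs is a single port forward (UDP 1194 is OpenVPN’s default; your port may differ):

    # On the Raspberry Pi: PiVPN's official one-line installer
    curl -L https://install.pivpn.io | bash
    # Then forward UDP 1194 on the internet router to the Pi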

Closing thoughts and future homelab plans

For me, nested virtualization is a perfect fit. I’m running a complete SDDC, following the Multi Availability Zone VMware Validated Design reference architecture, in a single small, relatively low-powered box. It is a great environment that will come in really handy when preparing for my VCDX-NV. To create a Dual Region environment with Cross-VC NSX, I am going to add a second region to my homelab using the Ravello cloud service. As a vExpert I have access to plenty of free CPU hours, and with Ravello now supporting bare-metal virtualization, it seems like a great opportunity to expand my lab even further and do some cross-region disaster recovery testing.

If you have any questions about my homelab setup, or if you have useful tips on how to improve my setup, just let me know!


2 thoughts on “How I configured VLANs, VXLANs and OSPF in my nested VMware homelab”

  • Marc on May 23, 2018

    Why not use PiVPN on a small Linux VM on the host?

  • jkusters on May 25, 2018

    I really wanted something out of band. A small Linux VM would rely on my physical ESXi host being up and running. Now I can VPN into my home network and turn on my homelab remotely, or troubleshoot it if need be. The Raspberry Pi is a really cheap solution. And in all honesty, I also just wanted to get my hands on a Raspberry Pi 🙂
