Adding NSX-T to my nested VMware homelab

Nested homelab and NSX-T
I recently published a blogpost about my fully nested VMware homelab. After completing a beta NSX-T Install, Configure and Manage course, I was keen to add NSX-T to the mix. The beauty of NSX-T is that it is not tightly coupled to vCenter Server. By simply adding another (nested) ESXi host to my environment, NSX-T can happily coexist next to my existing NSX-v powered clusters:
I’ve got enough storage space to accommodate this new NSX-T environment, but RAM will be a constraint (what else is new). By simply shutting down my NSX-v compute cluster and some of the more resource-hungry vRealize components (*cough* vRNI) in my management cluster, I can spin up NSX-T in a heartbeat. Cool!
By the way, this blogpost just explains how I installed and configured NSX-T. I will not go into much specific detail on NSX-T itself, though. There are already lots of excellent, detailed blogposts out there explaining NSX-T. Google is your friend!
Basic homelab setup
Just to recap, my physical host is set up as follows: I created two standard virtual switches on the physical ESXi host. One vSwitch holds the actual uplink to my home network and the VMkernel management interface (with an IP address in my home network); the other is an internal vSwitch without any physical uplinks that is used purely for nested virtualization.
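As an aside: if you prefer scripting this host setup over clicking through the UI, here is a rough sketch using pyVmomi (the official vSphere Python SDK). Hostname, credentials and switch names are placeholders for my lab, and the security settings are the ones typically needed for nested ESXi:

```python
# Sketch: create the internal vSwitch and a trunk portgroup with pyVmomi.
# SmartConnectNoSSL is the pyVmomi 6.x/7.x style connection helper.
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

si = SmartConnectNoSSL(host="esxi.lab.local", user="root", pwd="***")
datacenter = si.content.rootFolder.childEntity[0]
host = datacenter.hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# Internal vSwitch without physical uplinks, used only for nested traffic
netsys.AddVirtualSwitch(vswitchName="vSwitch1",
                        spec=vim.host.VirtualSwitch.Specification(numPorts=128))

# Nested labs typically need promiscuous mode and forged transmits enabled
security = vim.host.NetworkPolicy.SecurityPolicy(
    allowPromiscuous=True, forgedTransmits=True, macChanges=True)

# Trunk portgroup for the nested hosts; VLAN 4095 passes all VLAN tags through
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="Nested-ESXi", vlanId=4095, vswitchName="vSwitch1",
    policy=vim.host.NetworkPolicy(security=security)))
```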
Deploying the NSX-T core components
To minimize virtualization overhead, I chose to deploy the NSX-T Manager appliance directly onto my physical ESXi host. This is just a simple OVA deployment. I followed the ICM lab guide and downscaled my appliance to 2 vCPUs, 16GB vRAM and no resource reservations. I also deployed a vanilla nested ESXi host that will become an NSX-T Transport Node later on.
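Once the appliance is up, you can verify it responds over its REST API. A minimal sketch, assuming lab credentials and hostname (the endpoint is from the NSX-T 2.x API reference, so double-check it against your version):

```python
import requests

NSX = "https://nsxmgr.lab.local"          # placeholder manager FQDN
S = requests.Session()
S.auth = ("admin", "VMware1!VMware1!")    # placeholder lab credentials
S.verify = False                          # self-signed cert in the lab

# GET /api/v1/node returns basic node info, including the appliance version
node = S.get(f"{NSX}/api/v1/node").json()
print(node.get("node_version"))
```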
Important note: because I don’t want to migrate my ESXi kernel interfaces to the NSX-T Virtual Distributed Switch (N-VDS), I added two additional NICs to my new host (another big benefit of using nested virtualization!):
After NSX-T host preparation, you can see the additional NICs attached to the N-VDS called VTSU. NSX-T is managed completely outside of vCenter Server, in the NSX-T Manager; in vCenter you will not be able to see or manage this new N-VDS construct. All the physical NICs of my nested ESXi host are attached to the Nested-ESXi portgroup on my physical ESXi host. This portgroup is configured with VLAN 4095, which allows me to do VLAN tagging inside my nested ESXi host:
Another key component of an NSX-T environment is the Controller Cluster. I deployed three controllers using the OVA, directly onto my physical host. I also downscaled these controllers to 2 vCPUs, 16GB vRAM and no reservations. All that’s left is forming the controller cluster through some CLI voodoo (or using the GUI in NSX-T Manager as of NSX-T 2.2.0). Again, this is not a step-by-step install guide, so please refer to VMware Docs or one of the excellent “how to install NSX-T” blogposts out there.
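After the join and initialize steps, I like to confirm cluster health through the API as well. A sketch with the same placeholder credentials as before; /api/v1/cluster/status is documented in the NSX-T 2.x API reference:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

# Both the management and control cluster should report a stable status
status = S.get(f"{NSX}/api/v1/cluster/status").json()
print(status["mgmt_cluster_status"]["status"])     # expect something like "STABLE"
print(status["control_cluster_status"]["status"])  # expect something like "STABLE"
```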
Setting up the NSX-T Environment
NSX-T Fabric Nodes
NSX-T does not depend on vCenter Server at all; you could simply add standalone ESXi hosts to NSX-T Manager. I chose to add my vCenter Server as a Compute Manager for simplicity’s sake:
This pulls in the inventory of vCenter Server (which, together with ESXi, needs to be on Update 2 or higher if you want to deploy NSX-T 2.2.0, by the way!) and allows you to prepare the ESXi hosts for NSX-T. You can configure the cluster to auto-install NSX-T, but I chose to initiate a manual install.
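For the curious: registering a Compute Manager is a single API call. A hedged sketch with placeholder names and thumbprint:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

cm = {
    "display_name": "vcsa",
    "server": "vcsa.lab.local",              # placeholder vCenter FQDN
    "origin_type": "vCenter",
    "credential": {
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "***",
        "thumbprint": "ab:cd:...",           # vCenter's SHA-256 API thumbprint
    },
}
print(S.post(f"{NSX}/api/v1/fabric/compute-managers", json=cm).json()["id"])
```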
As you can see, the new ESXi host is nicely installed with NSX-T 2.2.0. To make the ESXi host join the NSX-T data plane, a TEP (Tunnel Endpoint) needs to be configured. This is done by promoting the host to a Transport Node. Up to this point things are pretty similar to NSX-v (conceptually, that is), but this is where things get a bit different. After creating a Transport Zone for overlay traffic and a Transport Zone for VLAN traffic, I configured my ESXi host as a Transport Node: I created a new N-VDS and attached the ESXi host to it. This is what creates the TEP interface (note: this is not a VTEP, because NSX-T uses the GENEVE overlay protocol rather than the VXLAN protocol used in NSX-v).
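The two Transport Zones can also be created via the API. A sketch along these lines, where the host switch name matches the VTSU N-VDS mentioned earlier and the zone names are placeholders:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

# One zone for GENEVE overlay traffic, one for VLAN-backed traffic
for name, ttype in (("TZ-Overlay", "OVERLAY"), ("TZ-VLAN", "VLAN")):
    tz = {"display_name": name,
          "host_switch_name": "VTSU",   # N-VDS name, shared by both zones
          "transport_type": ttype}
    print(name, S.post(f"{NSX}/api/v1/transport-zones", json=tz).json()["id"])
```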
In the Transport Node configuration, I mapped the two additional physical NICs of my nested ESXi host to uplink-1 and uplink-2. The TEP interface gets an IP address from an IP Pool I created, and that’s it. Easy as pie!
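The TEP IP Pool itself is just another API object; its id is what you reference in the Transport Node configuration. A sketch with an assumed TEP subnet:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

pool = {
    "display_name": "TEP-Pool",
    "subnets": [{
        "cidr": "172.16.10.0/24",       # assumed TEP subnet for my lab
        "gateway_ip": "172.16.10.1",
        "allocation_ranges": [{"start": "172.16.10.11",
                               "end": "172.16.10.50"}],
    }],
}
print(S.post(f"{NSX}/api/v1/pools/ip-pools", json=pool).json()["id"])
```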
NSX-T Edges
The NSX-T Edge is a completely different beast from the NSX-v Edge. I even think VMware should have named the NSX-T Edges differently: a lot of NSX-v folks have a hard time understanding the concept of the NSX-T Edges because they keep confusing them with their NSX-v counterparts. I might do a writeup on the core differences. Anyway, NSX-T Edges are also deployed using an OVA. Conceptually, an NSX-T Edge does pretty much the same thing as an NSX-v Edge: it has upstream connectivity to a VLAN and downstream connectivity to the logical networks in NSX-T. I deployed two Edges, configured them as Transport Nodes (so they also get a TEP interface) and placed them in an Edge Cluster:
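Grouping the two Edge Transport Nodes into an Edge Cluster boils down to one more API call. A sketch with placeholder node ids (check the payload shape against your version's API reference):

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

edge_tn_ids = ["<uuid-of-edge-tn-1>", "<uuid-of-edge-tn-2>"]  # placeholders
ec = {"display_name": "Edge-Cluster-1",
      "members": [{"transport_node_id": tn} for tn in edge_tn_ids]}
print(S.post(f"{NSX}/api/v1/edge-clusters", json=ec).json()["id"])
```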
The NSX-T Edge Transport Nodes are connected to both the Overlay Transport Zone and the VLAN Transport Zone. Just like with the ESXi Transport Nodes, the uplink connectivity is defined according to an Uplink Profile. Here, I kind of messed up my configuration due to inception complexity…
Inception trouble in nested paradise
The Edges are deployed in my vSphere management cluster. This cluster has a normal vSphere Distributed Switch with several distributed portgroups. One of these portgroups is the transit network VLAN that I also use in my NSX-v lab to connect the N/S Edges to my pfSense appliance (which acts as my physical router layer). I am tagging this portgroup with VLAN ID 2711:
You probably guessed it: in my NSX-T Uplink Profile for Edge connectivity, I accidentally also configured the VLAN tag. Yikes!
Later on, I spent about an hour troubleshooting connectivity issues between my T0 Edge router and my pfSense appliance. With all the fancy new NSX-T stuff like T0 and T1 routers, service routers and so on, I was not expecting something as simple as a double VLAN tag to be the issue. As soon as I removed the VLAN tag from the uplink profile, I could ping between my pfSense appliance and the T0 router running on a nested NSX-T Edge VM on my physical ESXi host. But hey, this is exactly why we build labs, right? I learned a ton during that hour of troubleshooting. Live and learn!
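For reference, this is roughly what the corrected Uplink Profile looks like as an API payload; the crucial bit is leaving transport_vlan at 0, because the distributed portgroup already tags VLAN 2711 (field names per the NSX-T 2.x API reference):

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "edge-uplink-profile",
    "mtu": 1600,            # GENEVE needs at least 1600 bytes
    "transport_vlan": 0,    # 0, NOT 2711: the dvPortgroup already tags traffic
    "teaming": {
        "policy": "FAILOVER_ORDER",
        "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}],
    },
}
print(S.post(f"{NSX}/api/v1/host-switch-profiles", json=profile).json()["id"])
```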
NSX-T Switching
With the Edges deployed as Transport Nodes, I moved on to creating the logical switches. I am using a demo three-tier app, so I created three logical switches in my NSX-T homelab:
The Uplink-LS1 logical switch is used later on to hook the Tier 0 router uplink up to my infamous transit VLAN towards my upstream router (a pfSense appliance). The logical switches are exposed in vCenter Server, by the way, and are consumable by VMs: you simply connect a vNIC to the logical network:
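Creating a logical switch only needs the overlay Transport Zone id. A sketch; the three tier names below are made up for illustration:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

overlay_tz = "<uuid-of-overlay-transport-zone>"   # placeholder id
for name in ("Web-LS", "App-LS", "DB-LS"):        # hypothetical tier names
    ls = {"display_name": name,
          "transport_zone_id": overlay_tz,
          "admin_state": "UP",
          "replication_mode": "MTEP"}   # two-tier hierarchical replication
    S.post(f"{NSX}/api/v1/logical-switches", json=ls)
```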
NSX-T Routing
The logical switches connect to the T1 router, the T1 router connects to the T0 router, and the T0 router ultimately connects to the upstream network. In NSX-T, both East/West and North/South routing are distributed, whereas in NSX-v only East/West routing is distributed (in the DLR). Anyway, I created router ports on the T1 router for each of my logical switches, and I created a router port on the T0 router that connects to the Uplink-LS1 switch I mentioned earlier:
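Under the hood, a downlink router port is actually two objects: a logical port on the switch, plus the router port that links to it. A sketch for one of the T1 downlinks, with placeholder ids and an assumed gateway subnet:

```python
import requests

NSX, S = "https://nsxmgr.lab.local", requests.Session()
S.auth, S.verify = ("admin", "VMware1!VMware1!"), False

web_ls_id = "<uuid-of-web-logical-switch>"   # placeholder
t1_id = "<uuid-of-tier1-router>"             # placeholder

# 1) a logical port on the switch for the router to plug into
lp = S.post(f"{NSX}/api/v1/logical-ports",
            json={"logical_switch_id": web_ls_id,
                  "admin_state": "UP"}).json()

# 2) the T1 downlink port itself; its IP becomes the tier's default gateway
S.post(f"{NSX}/api/v1/logical-router-ports", json={
    "resource_type": "LogicalRouterDownLinkPort",
    "logical_router_id": t1_id,
    "linked_logical_switch_port_id": {"target_type": "LogicalPort",
                                      "target_id": lp["id"]},
    "subnets": [{"ip_addresses": ["10.1.1.1"], "prefix_length": 24}],
})
```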
Closing thoughts and next steps
I am pretty pleased with my current setup. I’ve deployed the required NSX-T infrastructure components, the logical switches and the T0 and T1 routers. My 3-tier app has L2 and L3 connectivity. But I’ve only scratched the surface: there’s lots more to learn and build. I have an enablement training for NSX-T and PKS (Pivotal Container Service) coming up soon and I’m really looking forward to it. For now, I want to expand my NSX-T environment with proper dynamic routing, load balancing for my web tier and, of course, the distributed firewall.
NSX-T only supports BGP for dynamic routing. I’ve already installed the OpenBGPD package on my pfSense appliance, so my next challenge is to peer my T0 router with it. That will be covered in a later blogpost for sure. If I get this up and running correctly, I will also draw up some diagrams to better explain how routing is handled in NSX-T, and specifically in my homelab. Make sure to check back!
Finally, if you have any questions about my nested NSX-T setup, or if you have useful tips on how to improve it, just let me know!
Q: Where did you get your license from?
A: I’m a vExpert, so I have access to NFR licenses. There is no separate license for NSX-T, by the way; the license is interchangeable between NSX-v and NSX-T.