vSphere Integrated Containers (VIC) – Part 2: Installing VIC v1.1

Installing vSphere Integrated Containers (VIC)
Part 1 of this blog series explained what VIC is, why and when you should use it, and which components are included. VIC is open sourced by VMware, so you can grab a copy at https://vmware.github.io/vic/. For enterprise support, vSphere Enterprise Plus is required and a commercial copy of VIC needs to be procured. For my lab purposes, the open source release will do just fine ;-).
This blog series covering VIC is divided into three blog posts:
- Overview of VIC v1.1
- Installation procedure (this blogpost)
- VIC v1.2 release and upgrade procedure for VIC v1.1 to VIC v1.2
This blog post covers the installation steps I followed to get VIC v1.1 up and running in my lab. Credits to https://www.thehumblelab.com/deploying-vsphere-integrated-containers/ and especially http://www.virtually-limitless.com/vsphere-integrated-containers/vsphere-integrated-containers-part-5-installing-and-using-vmware-admiral/. These blogs were really helpful and filled the gaps that the official documentation at https://vmware.github.io/vic-product/assets/files/html/1.1/ didn't cover in enough detail for me. That said, the VIC team did an excellent job delivering the official documentation. It is one of the best VMware documentation sets I have seen in ages!
Deploying the VIC appliance
The VIC appliance is provided as an OVA. You simply deploy it just like any other OVA, providing all necessary details. Make sure you choose to install the Admiral management portal and Harbor container registry components. For step-by-step guidance, please have a look at https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/deploy_vic_appliance.html.
Once the deployment is finished, the appliance console screen shows you how to access the Admiral Management Portal for VIC and the Harbor Container Registry:
Downloading the VIC binaries from the appliance
Now point your web browser to the file server of the appliance at https://vic_appliance_address:9443 and download the VIC binaries to a Windows- or Linux-based client:
You are looking for the tarball with the .tar.gz extension. Unpack it using any tool of your choice. Inside you will find the vic-machine utility, which comes in a separate version per operating system. I am running a Windows management client in my lab, so I will be using the vic-machine-windows.exe utility. This utility is used to create a Virtual Container Host (VCH):
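As a quick sketch, unpacking from a command prompt could look like this (the file name is version specific, so treat vic_1.1.1.tar.gz as a placeholder; on Windows you can use a tool such as 7-Zip instead of tar):

tar -zxf vic_1.1.1.tar.gz
cd vic
dir vic-machine-*
rem expect vic-machine-windows.exe, vic-machine-linux and vic-machine-darwin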
Deploying a Virtual Container Host (VCH)
The syntax of the vic-machine utility is fairly complex. Please refer to the VIC documentation for a complete explanation of all command-line options. Before we can deploy our VCH, we first need to ensure some firewall ports are opened up to allow communication between the VCH and ESXi. For more information about exactly which ports are used, please refer to https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/open_ports_on_hosts.html. Thankfully, the vic-machine utility provides an 'update firewall' command that configures the firewall of each ESXi host automatically:
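The exact invocation depends on your environment. As a rough sketch with placeholder values (the credentials, cluster name and thumbprint below are mine to fill in, not taken from the documentation), it looks like this:

vic-machine-windows.exe update firewall ^
--target vcsa-01b.corp.local ^
--user ******* ^
--password ********* ^
--compute-resource <cluster name> ^
--thumbprint <vCenter certificate thumbprint> ^
--allow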
Make sure you meet the other prerequisites listed at https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/deploy_vch_vcenter.html. I especially want to mention the bridge network port group you need to provision before you can deploy a VCH. The bridge network takes care of container-to-container networking. Each VCH requires a unique bridge network, which is a port group on a distributed virtual switch; you should not use the bridge network for any other VM workloads, or as the bridge for more than one VCH. I simply created a new distributed port group called "Bridge".
I used the following syntax to deploy my first VCH:
vic-machine-windows.exe create ^
--target vcsa-01b.corp.local ^
--user ******* ^
--password ********* ^
--image-store NFS-B ^
--bridge-network Bridge ^
--name VCH-01b ^
--public-network Management ^
--public-network-ip 192.168.210.88/24 ^
--public-network-gateway 192.168.210.1 ^
--dns-server 192.168.110.10 ^
--force ^
--timeout 5m0s ^
--no-tlsverify ^
--registry-ca=F:\VIC\ca.crt
Because I used self-signed certificates, I needed to download the root certificate from the Harbor container registry by logging in to the vSphere Integrated Containers Registry interface as the admin user, clicking the admin drop-down menu and clicking Download Root Cert:
I saved it as F:\VIC\ca.crt and referenced it with the --registry-ca option above. See https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/deploy_vch_registry.html for more information. After the deployment of the VCH, it will appear as a vApp inside vCenter. The running VM inside the vApp is the VCH Endpoint VM.
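If you prefer to verify from the command line as well, vic-machine also offers an ls command that lists the deployed VCHs. A minimal sketch, using the same placeholder credentials as before (supply your vCenter certificate thumbprint in place of the placeholder):

vic-machine-windows.exe ls ^
--target vcsa-01b.corp.local ^
--user ******* ^
--password ********* ^
--thumbprint <vCenter certificate thumbprint>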
Install the vSphere Client plug-ins for vSphere Integrated Containers
To manage VIC using the vSphere Web Client, we need to manually install plug-ins. Unfortunately, this is not a very plug-and-play experience yet. We need to SSH into the vCenter Server Appliance, download the UI files and install them directly from the command line. The steps are explained in great detail at https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/plugins_vcsa.html. So, SSH into your VCSA, use curl to download the plugin from the VIC appliance and extract the tarball:
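To give an idea of the flow on the VCSA shell, a rough sketch looks like this (the bundle file name is version specific, so vic_1.1.1.tar.gz is a placeholder; replace vic_appliance_address with the address of your VIC appliance):

curl -kL https://vic_appliance_address:9443/vic_1.1.1.tar.gz -o vic_1.1.1.tar.gz
tar -zxf vic_1.1.1.tar.gz
cd vic/ui/VCSA
# edit the configs file (next step), then run the installer
./install.sh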
Then edit the plugin configs file located in /vic/ui/VCSA/configs by entering the vCenter Server IP address, the URL of the VIC appliance, the VCSA thumbprint and so on. Just follow the step-by-step instructions in the documentation at https://vmware.github.io/vic-product/assets/files/html/1.1/vic_vsphere_admin/plugins_vcsa.html. If you followed the instructions correctly, you will see the VIC plugin in your Web Client. Both a Flash and an HTML5 plugin are available:
Firing up our first Docker container
Using the regular Docker client for Windows, I was now able to connect to my VCH and spin up my first container:
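A minimal example, assuming 192.168.210.88 is the VCH endpoint address from the create command above (because the VCH was deployed with --no-tlsverify, the Docker client connects with --tls on port 2376):

docker -H 192.168.210.88:2376 --tls info
docker -H 192.168.210.88:2376 --tls run busybox /bin/echo "hello from VIC"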
Logging into the vSphere Integrated Containers Management Portal
Point your browser at https://vic_appliance_address:8282 to log on to the Admiral Management Portal. There you need to add your VCH to the management portal by clicking Add Host and providing the address of the VCH Endpoint VM. If you deployed the VCH using the registry's certificate, you do not need to specify a username and password:
You can now manage your VCH and the container images running on it using the Admiral management portal UI:
Logging into the vSphere Integrated Containers Registry
The final steps in this blog post are logging in to the Harbor container registry by pointing your browser at https://vic_appliance_address:443 and logging in using the credentials you specified during the deployment process. Alternatively, you can simply click Registry in the top menu bar of the management portal:
Harbor provides an enterprise-grade container image management system, delivering capabilities such as security, identity and replication that are often required in enterprise environments. Basically, you manage all your enterprise's container images here.
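To give a feel for the workflow, here is a short sketch of pushing an image from a Docker client that trusts the Harbor root certificate downloaded earlier (library is Harbor's default project; the image name and tag are just examples):

docker login vic_appliance_address
docker tag busybox vic_appliance_address/library/busybox:1.0
docker push vic_appliance_address/library/busybox:1.0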
That's it for now. Stay tuned for the next blog post in this series, covering the recent v1.2 announcement of vSphere Integrated Containers and how to upgrade from v1.1!