vSphere Integrated Containers (VIC) – Part 1: Overview

Introduction to vSphere Integrated Containers (VIC)
Containerization is a hot topic in IT nowadays. In most companies, containers are synonymous with Docker, and what we often see is developers simply requesting huge VMs from the IT infrastructure team to run Docker on. The infra team manages a VM, so they are happy; the devs are happy because they can use Docker and spin up new containers without having to wait for the infra team. Everybody is happy… or not quite?
This blog series covering VIC is divided into three blog posts:
- Overview of VIC v1.1 (this blog post)
- Installation procedure
- VIC v1.2 release and upgrade procedure for VIC v1.1 to VIC v1.2
What is VIC?
vSphere Integrated Containers combines the flexibility and agility of containers with the manageability and security of enterprise-grade infrastructure. vSphere becomes the data plane for running containers, as opposed to a single VM running a container engine like Docker. The fundamental idea of VIC is to expose a Docker API and run Docker containers natively on vSphere without having to deal with the underlying VMs. Of course, vSphere is not something you “run” anything on directly: vSphere is the combination of the ESXi hypervisor and the vCenter Server management appliance. So where do the containers actually run? VIC instantiates a lightweight VM running PhotonOS for every container, so every container in VIC runs in its own dedicated lightweight VM. This gives vSphere admins a familiar control point (a VM), while developers can use their normal API calls to create, run, and delete containers. PhotonOS is a cloud-native OS purpose-built to run cloud-native apps, so there is not a lot of overhead. VIC also provides some additional management capabilities, but more on those later on.
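As a quick illustration, once a Virtual Container Host (described below) is up, developers simply point their regular Docker client at it. A minimal sketch, assuming a VCH deployed without TLS and reachable at the placeholder address vch.example.com:

```
# Point the Docker client at the VCH endpoint instead of a local daemon
# (vch.example.com is a placeholder; port 2375 assumes a no-TLS deployment,
# TLS-enabled deployments listen on 2376 instead)
docker -H vch.example.com:2375 info

# Run a container; VIC spins up a dedicated PhotonOS-based container VM for it
docker -H vch.example.com:2375 run -d --name web nginx

# Delete the container; the backing container VM is removed as well
docker -H vch.example.com:2375 rm -f web
```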
Core components of VIC
VIC consists of three major core components:
- VIC Engine (enterprise container runtime for vSphere)
- Harbor (enterprise container registry that stores and distributes container images)
- Admiral (management portal that provides a UI to provision and manage containers)
Architectural concepts of VIC
Now that we understand which components make up VIC, it is time to dive a bit deeper into its architectural concepts.
VIC Appliance
The VIC appliance is the OVA you initially deploy onto your vSphere infrastructure. It provides the necessary binaries for the VIC engine and hosts the Harbor registry and the Admiral management portal.
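Deployment itself is a standard OVA import, either through the vSphere Web Client or on the command line. A hedged sketch with ovftool, where the file name, credentials, and inventory path are all placeholders for your environment:

```
# Deploy the VIC appliance OVA with ovftool (all values are placeholders)
ovftool --acceptAllEulas \
  --name=vic-appliance \
  --datastore=datastore1 \
  --network="VM Network" \
  vic-v1.1.1.ova \
  'vi://administrator@vsphere.local@vcenter.example.com/Datacenter/host/myCluster/'
```

Note that the appliance also takes OVF properties (root password, network settings, and so on) that you would supply with ovftool's --prop: options or fill in via the deployment wizard.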
Virtual Container Host (VCH)
The VCH runs the actual VIC engine and exposes the Docker API to clients. It runs and manages multiple Container VMs.
Container VMs
A container VM is a lightweight VM running the cloud-native PhotonOS operating system; VIC instantiates one for every container.
Virtual Container
The actual container, instantiated from a container image and running inside its dedicated container VM.
VIC-machine utility
To deploy and manage VCHs, VIC provides a command-line utility called vic-machine.
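A minimal sketch of some common vic-machine operations, assuming the Linux binary and placeholder addresses, names, and credentials:

```
# List the VCHs known to this vCenter Server
vic-machine-linux ls \
  --target vcenter.example.com \
  --user administrator@vsphere.local

# Inspect a specific VCH, including its Docker endpoint address
vic-machine-linux inspect \
  --target vcenter.example.com \
  --user administrator@vsphere.local \
  --name vch01

# Remove a VCH together with its container VMs
vic-machine-linux delete \
  --target vcenter.example.com \
  --user administrator@vsphere.local \
  --name vch01
```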
VCH networking
Networking in VIC can be a bit daunting at first glance. There are five types of networks:
- Management Network
- Public Network
- Client Network
- Bridge Network
- Container Network
The following diagram is courtesy of the official VIC documentation on GitHub. I would strongly recommend looking at the documentation set at https://vmware.github.io/vic-product/assets/files/html/1.1 for VIC 1.1 or at https://vmware.github.io/vic-product/assets/files/html/1.2/ for the newly released VIC 1.2.
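To make the list above more concrete, here is a hedged sketch of how the five network types map onto vic-machine create options; the port group names, credentials, and VCH name are placeholders:

```
# Create a VCH with all five network roles wired up explicitly
# (port group names such as mgmt-pg are placeholders)
vic-machine-linux create \
  --target vcenter.example.com \
  --user administrator@vsphere.local \
  --name vch01 \
  --compute-resource myCluster \
  --image-store datastore1 \
  --management-network mgmt-pg \
  --public-network public-pg \
  --client-network client-pg \
  --bridge-network bridge-pg \
  --container-network container-pg:routable \
  --no-tlsverify
```

Only the bridge network is strictly required to be its own port group; the other roles fall back to sensible defaults when omitted.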
VIC Requirements
To run VIC in your vSphere environment, a number of prerequisites have to be met:
- vSphere 6.0 or 6.5 Enterprise Plus edition
- 2 vCPUs, 8 GB RAM, and 80 GB of disk space for the VIC appliance
- Outbound TCP traffic from the ESXi hosts to port 2377 on the VCH endpoint VM
- Inbound HTTPS traffic on port 443 for uploading and downloading from datastores
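The outbound port 2377 requirement in particular tends to trip people up, because it must be opened on every ESXi host. vic-machine includes a helper for this; a hedged sketch with placeholder values:

```
# Open the required outbound port on all ESXi hosts in the target cluster
vic-machine-linux update firewall \
  --target vcenter.example.com \
  --user administrator@vsphere.local \
  --compute-resource myCluster \
  --allow
```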
If you are just starting out with vSphere Integrated Containers, you are likely going straight for VIC v1.2. Make sure to follow my blog and keep an eye out for the VIC v1.2 blog post in this series!