Virtualization in the context of computing comes in many flavors. Typically, the term refers to hypervised environments (more on that in a bit), but it can also mean containerization or other technologies. This post explores several of these flavors and how they work.

If you didn’t catch the intro to this series, you can read a little about the motivation for these posts in the previous post.

Types of Virtualization Technology

When considering the delineations between different types of virtualization technology, it helps to draw the line based on where the Virtual Machine (VM) or container touches the hardware. Let’s look at this aspect of virtualization first.

Containerization

Containerization is the application of virtualization to create lightweight, small footprint “services” that sit on top of a host operating system. In the previous post, I linked to Kubernetes which is an orchestration engine (among other things) for containers. Containers are often used to segregate applications or services into their own user space for a multitude of reasons. One of the primary reasons is scalability.

The containerization stack.

Using an orchestrator like Kubernetes, you can write your applications’ services such that they dynamically scale up or down to meet demand by spinning containers up or down “in the cloud.” In fact, this is the exact model many SaaS (software as a service) providers use to maximize their capabilities on a limited set of hardware resources. Check out Amazon AWS for more details on this model (it’s pretty neat!). This model typically requires the host OS to support container creation, and the host can typically only run containers of the same flavor as itself (there are exceptions, with caveats)[1]. For example, with Docker as your container manager, you can only host Linux containers on a Linux host OS.
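The scale-up/scale-down decision an orchestrator makes can be sketched in a few lines. The function below is an illustrative toy loosely modeled on how a horizontal autoscaler reasons (scale the replica count by the ratio of observed to target CPU utilization); the names and the 10-replica cap are assumptions for this example, not Kubernetes APIs.

```python
import math

def desired_replicas(current_replicas: int, current_cpu: float,
                     target_cpu: float, max_replicas: int = 10) -> int:
    """Toy scaling rule: grow or shrink the replica count by the ratio
    of observed CPU utilization to the target utilization."""
    raw = math.ceil(current_replicas * current_cpu / target_cpu)
    # Clamp to a sane range so we never scale to zero or past the cap.
    return max(1, min(raw, max_replicas))

# Demand doubles: 4 replicas at 100% CPU against a 50% target -> 8 replicas.
print(desired_replicas(4, 1.00, 0.50))  # 8
# Demand drops: 4 replicas at 10% CPU -> scale down to 1.
print(desired_replicas(4, 0.10, 0.50))  # 1
```

The orchestrator simply re-evaluates a rule like this on a loop and starts or stops containers to match the result.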

At first glance, containerization may not seem like a good fit for embedded systems, but when we explore use cases specifically for embedded devices, we’ll find there are some pretty neat applications of the technology in various industries.

Process Virtual Machines

Process Virtual Machines (PVMs) are very similar to containers, but instead of presenting a full user space to your application, you’re only allowed one process. The Java Virtual Machine (JVM) is a classic example. Due to the structural similarities to containers, I’m going to lump these two together.
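Python itself is a handy illustration: CPython compiles your source to bytecode and executes it on a process-level virtual machine, one interpreter per OS process. The standard `dis` module lets you peek at that bytecode:

```python
import dis

def add(a, b):
    return a + b

# The function is not compiled to native machine code up front; it is
# bytecode executed by the CPython process VM.
instructions = [i.opname for i in dis.get_instructions(add)]
print(instructions)
# The arguments are fetched with LOAD_FAST-family opcodes, then added
# and returned -- all interpreted by the VM, not the CPU directly.
```

The exact opcode names vary between CPython versions, but the point stands: your code targets the VM’s instruction set, and the VM sits between you and the hardware.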

Type 2 Hypervisors

This type of virtualization is related to containerization in that a host operating system sits between the virtualized application and the hardware. It is also quite different, as an entire guest operating system also sits between your application and the hardware.

The Type 2 Hypervisor Stack.

This gives you more isolation in some respects, but can also create performance issues. One major benefit of type 2 hypervisors is that you can run a mix of guest OSes independent of your host OS. VirtualBox is a good example of this type of virtualization. Using VirtualBox, I can run a Linux Mint guest and a Windows 7 guest on my Windows 10 host, side by side. Another benefit of type 2 hypervisors is that you can run mixed architectures on the same host platform. For example, using QEMU, I can run a PowerPC-based Linux image on my Intel x86_64 processor. In this mode, however, emulating the foreign instruction set greatly impacts performance (although there have been many improvements), and you will not match the speed of running on the host’s native instruction set.

Type 1 Hypervisors

Type 1 hypervisors are the flavor of virtualization that brings the guest OS closest to the hardware, and they provide the best performance in a hypervised environment. In this type of virtualization, the hypervisor itself is the host OS, providing all critical facilities to the guest OSes with minimal impact on their execution.

The Type 1 Hypervisor Stack.

Another critical difference here is that the guest OSes must share the host’s instruction set architecture (ISA). In other words, if your host platform is an ARMv8 cluster, your guest OSes must all run ARMv8 machine code. However, some architectures can run 32-bit and 64-bit code side by side on the same chip, depending on configuration (we’ll explore this later).


Other Considerations

With all the types of virtualization technology I’ve listed, there are a few other considerations they share that are worth noting.

Processor Oversubscription

Processor oversubscription is the practice of assigning more virtual CPU cores to your guest OSes than you have physical CPU cores on your hardware. Some environments allow this, but it is not a universal feature, nor is it desirable for every virtualization solution. Typically, your physical CPU’s cores are not utilized at 100%, so allowing oversubscription to make better use of the idle cycles makes sense. But what happens if a malicious application starts hogging all of the cycles on one core? Depending on the virtualization technology and its configuration, the other guest OSes may suffer as a result.
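A quick back-of-the-envelope calculation makes the trade-off concrete. The 4:1 policy limit below is an arbitrary number chosen for illustration, not a standard:

```python
def oversubscription_ratio(vcpus_assigned: int, physical_cores: int) -> float:
    """Ratio of virtual CPUs promised to guests vs. real cores underneath."""
    return vcpus_assigned / physical_cores

# A 16-core host running 10 guests with 4 vCPUs each:
ratio = oversubscription_ratio(10 * 4, 16)
print(ratio)  # 2.5 -- each physical core is promised to 2.5 vCPUs

# A simple admission check against a (hypothetical) 4:1 policy limit:
MAX_RATIO = 4.0
print(ratio <= MAX_RATIO)  # True -- this guest mix fits under the policy
```

Real hypervisors apply far more nuance (scheduling weights, CPU pinning, reservations), but they are ultimately managing this same ratio.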

RAM and Storage Oversubscription

This concept is similar to processor oversubscription, but applies to the physical RAM and the non-volatile storage in your host hardware. Again, there is a chance that a guest OS will need RAM that, because it is oversubscribed, is not actually available, leading to processing delays or, at worst, failure. Similarly, perhaps a critical log message needs to be written to disk, but the storage quota is already exhausted due to oversubscription – a failure is likely!

Hardware-assisted Hardware Sharing

Sounds redundant, doesn’t it? It’s not, I promise! This concept is a hardware-based capability and varies by platform. The basic idea is that a device multiple guest OSes may want to use (like an Ethernet controller) has built-in capabilities to help each VM or guest OS use the shared resource while keeping their data separate. An example of this is SR-IOV. Single Root I/O Virtualization is a PCIe technology in which a single physical device exposes multiple virtual functions that can be divvied up among several guest OSes. Check out this Microsoft article for more information on SR-IOV.
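On Linux, you can see whether a NIC supports SR-IOV by probing sysfs. The `sriov_totalvfs` attribute is the standard kernel interface for this; the sketch below simply returns an empty list on machines (or OSes) without SR-IOV-capable hardware:

```python
import glob

def sriov_capable_nics():
    """Return (interface, max_vfs) for NICs whose driver exposes SR-IOV."""
    nics = []
    # The kernel publishes the VF limit per device under sysfs.
    for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
        iface = path.split("/")[4]  # /sys/class/net/<iface>/device/...
        with open(path) as f:
            nics.append((iface, int(f.read().strip())))
    return nics

# On a host without SR-IOV NICs this prints []; on a capable host it shows
# each interface and how many virtual functions the device can carve out.
print(sriov_capable_nics())
```

Each virtual function shows up as its own lightweight PCIe device, which is what lets a hypervisor hand one directly to each guest.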

Wrapping Up

Now that we’ve explored several different flavors of virtualization technology, I’d like to zero in on just the ones that we are likely to use in embedded systems – containers and hypervisors. In the next post, we will walk through some examples of how each of these can be applied to embedded systems, and what benefits they can bring.

References

[1] Docker: Linux Containers on Windows Hosts ( https://www.docker.com/blog/preview-linux-containers-on-windows/ )