In the last post, we looked at several different types of virtualization technologies. We wrapped up by narrowing our focus to two primary categories – hypervisors and containers. In this post I’ll dig into some real-world applications of these technologies, and we’ll look at how they can be used to solve real problems.
One of the low-hanging fruits of container virtualization is the ability to manage your application images in a straightforward, simplified manner. Because a container image carries all the dependencies needed to run its application, we eliminate the need to prepare the host environment before deploying. In other words, we no longer need to worry whether our application is deployed on six system variants with six different hardware configurations running six different host OS revisions. We simply package our app for the container infrastructure and ship it as a container. Simple.
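As a sketch of what this looks like in practice, a minimal Dockerfile for a hypothetical Python web application might read as follows (the base image, file names, and port are illustrative, not taken from any real project):

```dockerfile
# Hypothetical image for a small web application; names are illustrative.
FROM python:3.11-slim

WORKDIR /app

# Dependencies ride along inside the image, so the host needs nothing
# beyond a container runtime.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8080
CMD ["python", "app.py"]
```

Building this once with `docker build` yields a single image that runs the same way on any of those six system variants, provided each can run the container engine.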
Rapid and Widespread Deployments
Utilizing the same image management capabilities, we can easily deploy our applications on a multitude of platforms, since we do not need to worry much about the underlying host OS. For example, say you have written an application that provides critical situational-awareness (SA) functionality to the Soldier. Your application simply takes SA data as an input and serves a web page displaying the resulting SA picture for fast, efficient dissemination to the necessary parties. Packaged as a traditional binary, this application would require configuration management on every deployment platform, and care would need to be taken to manage dependencies each time the application was updated. Deployed as a containerized solution, the container carries its dependencies along with it. We can then deploy in real time to platforms we may not have originally designed the application for, and provide valuable capabilities to the warfighter in a blink.
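Assuming the SA application has already been packaged as an image, deployment to any new platform collapses to roughly one command. The registry, image name, and port mapping below are hypothetical, and the command is echoed rather than executed so the sketch runs even without a container engine installed:

```shell
# Hypothetical deployment of the containerized SA web app; the registry,
# image name, tag, and port mapping are all illustrative.
IMAGE="registry.example.mil/sa-webapp:1.4.2"
RUN_CMD="docker run -d --restart=always -p 8080:8080 $IMAGE"

# Echoed instead of executed so this sketch is safe to run anywhere.
echo "$RUN_CMD"
```

The same one-liner works whether the host is a vehicle-mounted server or a laptop at a command post, because the image, not the host, carries the dependencies.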
Type-2 Hypervised Systems
Scratching the Nostalgia Itch
Who remembers the ’80s? Not me! But I did have a Sega as a kid. Gaming platforms these days have become more PC-like than embedded system, but in the not-so-distant past, gaming systems were 100% embedded systems by design. So what about scratching that itch for Mortal Kombat on the Sega in 2019 without searching eBay endlessly? This is a perfect fit for a type-2 hypervisor (or, strictly speaking, an emulator) to help you out. The RetroPie Project is all about bringing different emulated platforms into the modern age on a Raspberry Pi. Via the emulated environment (and assuming you have legal rights to the console title you’re emulating) you can spin up a virtual Sega and load up a copy of Mortal Kombat to throw down with the boys on a Saturday night. I call that solving problems with technology!
Frequently in the embedded systems world, we find ourselves developing code on one platform to be deployed on another. Type-2 hypervisors make it easy to test our code on something that mimics the real target platform with a high level of fidelity. Using a tool like QEMU, you can quickly run ARMv7 code on top of a dual-core Cortex-A9 VM hosted on your Intel-based Ubuntu Linux machine. In the same vein, cross-platform development can be achieved with a type-2 hypervisor like VirtualBox by running a VM of your deployment platform and developing directly “on-platform.” For a highly flexible and incredibly powerful commercial platform that supports the system simulation paradigm for developing, deploying, and testing embedded systems and beyond, check out Wind River’s Simics.
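As a concrete sketch, a QEMU invocation for a dual-core Cortex-A9 guest might look like the following. The `vexpress-a9` machine model is a real QEMU target (ARM Versatile Express), but the kernel, device tree, and rootfs file names are placeholders, and the command is echoed rather than executed so the sketch runs even where QEMU is not installed:

```shell
# Hypothetical QEMU invocation for an emulated dual-core Cortex-A9 board.
# vexpress-a9 is a real QEMU machine model; the file names are placeholders.
QEMU_CMD="qemu-system-arm \
  -M vexpress-a9 -smp 2 -m 512M \
  -kernel zImage \
  -dtb vexpress-v2p-ca9.dtb \
  -append 'console=ttyAMA0 root=/dev/mmcblk0' \
  -sd rootfs.img \
  -nographic"

# Echo instead of exec so this sketch is safe to run anywhere.
echo "$QEMU_CMD"
```

With a real kernel and rootfs in place, dropping the `echo` boots the virtual board, and the ARMv7 binaries under test run exactly as they would on target silicon.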
Type-1 Hypervised Systems
An aside: Type-1 hypervisors are, in my opinion, one of the most useful virtualization strategies for embedded systems. For starters, embedded devices often have less processing power and RAM than their datacenter counterparts, so the low overhead of type-1 hypervisors makes them ideal for squeezing as much performance as possible out of the hardware platform.
Historically, functionality implemented in software has been included in embedded systems in a federated manner. This has been especially true for safety-critical functions. Federated systems have been the standard model for safety-critical systems since software began to take a major role in system operation.
In federated systems, each function often has its own “box”: for every function or function group, there is an entire dedicated hardware and software stack to support it. This model was reinforced by the limited computing resources of single-core processors – when a single-core chip was maxed out on processing power, the only solution was to add another “box.” Giving each function its own “box” obviously increases the size, weight, and power (SWaP) required by the system.
By consolidating the various applications onto one hardware platform we reduce the SWaP required for the system-of-systems, and to do this we can use a type-1 hypervisor. We can accomplish even more if we pair the type-1 hypervisor with a multicore processor. This application consolidation mechanism is easily seen in the selection of an ARINC 653-capable OS for the Common Core System of the Boeing 787 (known as the 7E7 during development).
Security Risk Mitigation
One of the glaring issues with fielding embedded systems is that often, once fielded, the option to update them does not exist. When embedded systems are expected to work 24/7/365 from the time of production until replacement, this “long tail” needs to be considered. Whenever security holes are found in a fielded embedded system, it can be difficult to design a mitigation that can be applied without rearchitecting the entire system. Take the following scenario: a safety-critical embedded device has been deployed with a custom, home-grown RTOS. It was recently discovered that this home-grown code has a major security flaw in its TCP/IP stack, despite the fact that the application only uses the serial interface. The RTOS is inflexible, and updating the RTOS and application – then obtaining an Authority to Operate (ATO) again – would cause too much rework. Instead of modifying the RTOS and application, you simply run a type-1 hypervisor underneath them and use it to restrict which hardware they can access. By virtualizing the application, we’ve moved it “up the stack” so that we can better manage which resources it has access to, and which resources have access to it – in this way, we can patch a large class of security issues simply by using a VM.
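With a Xen-style type-1 hypervisor, that hardware restriction can be expressed declaratively in the guest configuration. The sketch below is hypothetical – the names, paths, and values are made up – but it shows the shape of the idea: what matters is the line that is absent.

```
# Hypothetical xl.cfg for the legacy RTOS guest; names and paths are made up.
name    = "legacy-rtos-app"
kernel  = "/var/lib/xen/images/rtos-and-app.bin"
memory  = 256
vcpus   = 1

# The serial interface the application actually uses is provided,
# but note what is missing: with no 'vif = [...]' line, the guest is
# given no virtual network interface at all, so the flawed TCP/IP stack
# in the home-grown RTOS has no network hardware to drive.
serial  = "pty"
```

The vulnerable code is still there, but the hypervisor has removed its attack surface without a single change to the RTOS or application.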
Since embedded systems often have long-lived deployments, as identified in the Security Risk Mitigation example, adding new features and migrating legacy code is frequently a difficult task as well. Again, a type-1 hypervisor that moves your application “up the stack” can ease this migration. For example, say you deployed an application on Linux and it has been fielded for seven years. Recently you received new requirements that require some functionality to move to an RTOS to stay responsive and keep up with demand. Rather than making a sweeping change to port all functionality to an RTOS, a type-1 hypervisor can allow you to retain most of the functionality in Linux in one VM and port just the necessary functions over to an RTOS in another VM – all while still supporting the same hardware platform.
Here are a few more examples that don’t necessarily fall under the umbrella of embedded systems, but help round out the picture.
Scalable Services by Design
Previously I noted that Docker is a popular container engine, or daemon, for managing containers on a given host machine. I also mentioned Kubernetes as an orchestration engine for deploying and managing the containers Docker hosts. Pairing these two technologies, we can design services that are scalable by nature right from the start.
But what exactly is meant by scalable services? Imagine the following scenario. You’ve created a web application and are using Docker for the various services that make it up. You start with fewer than 1,000 daily users and spin up additional containers manually to manage the load. Suddenly, your app is featured on Slashdot and you begin experiencing the Slashdot Effect. Your running containers are overloaded and your users are denied access – not good for a fledgling web app! Now, consider if your application’s containers were being managed and monitored by Kubernetes. The same Slashdot Effect begins to occur, but Kubernetes reacts by spinning up additional containers to handle the spike in load. No crash for your app, and tons of happy new users. Thanks to this type of virtualization, you can flexibly support your users’ demand, on demand.
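On the Kubernetes side, that reaction is mostly declarative. A HorizontalPodAutoscaler like the sketch below – where the names, replica counts, and CPU threshold are all illustrative – tells Kubernetes when to add or remove containers for a Deployment:

```yaml
# Hypothetical autoscaling policy for the web app's front-end service.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp-frontend        # the Deployment running our containers
  minReplicas: 2                 # quiet-day baseline
  maxReplicas: 50                # Slashdot-Effect ceiling
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # scale out when average CPU passes 70%
```

When the Slashdot Effect hits, observed CPU climbs past the target and Kubernetes adds replicas toward the ceiling; when the spike passes, it scales back down, no manual intervention required.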
Network Function Virtualization
Network function virtualization (NFV) as a concept has grown in popularity rather rapidly over the past several years. It straddles the line between IT-grade virtualization and embedded systems: it takes what would traditionally have been a dedicated embedded device performing one function or group of functions and virtualizes it, so that it can run as a virtual appliance or virtual machine on enterprise-grade computing platforms. This is typically deployed as a hypervised environment with management and orchestration software running on top to handle the chaining of functions. For more info on NFV, check out the Wikipedia article.
Computing Power On Demand
My website is hosted by DigitalOcean on a Droplet. A Droplet is DigitalOcean’s term for a Virtual Private Server (VPS). They have many behemoth physical servers in datacenters all over the world and, using virtualization, are able to divvy up the resources on those servers into virtual machines called Droplets. I can log into my Droplet as if it were a physical machine and do with it as I please. What’s really neat about DigitalOcean (and many other providers like it) is that they provide a way to spin up these VPS units on demand via a web API. We could actually use this API to spin up a swarm of computing power just long enough to crunch some data for us, then tear it back down, releasing the hardware to other users! In fact, DigitalOcean uses QEMU and libvirt to accomplish its virtualization and provide this on-demand compute to consumers like me. Pretty cool!
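To make that concrete, here is a hedged sketch of creating a Droplet through DigitalOcean’s v2 API. The endpoint and payload fields follow their published Droplet-create API, but the name, region, size, and image slugs are illustrative, the `DO_API_TOKEN` environment variable name is my own choice, and no request is actually sent unless a token is present:

```python
import json
import os
import urllib.request

# Droplet-creation endpoint from DigitalOcean's public v2 API.
API_URL = "https://api.digitalocean.com/v2/droplets"

# Illustrative values; real slugs come from the /v2/sizes and /v2/images endpoints.
payload = {
    "name": "crunch-worker-01",
    "region": "nyc3",
    "size": "s-1vcpu-1gb",
    "image": "ubuntu-22-04-x64",
}

body = json.dumps(payload).encode("utf-8")
print("Would POST to", API_URL)
print(json.dumps(payload, indent=2))

# 'DO_API_TOKEN' is a hypothetical env var name; only fire the request
# when a token is actually configured.
token = os.environ.get("DO_API_TOKEN")
if token:
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print("Droplet created, HTTP status:", resp.status)
```

Run this in a loop with different names and you have the “swarm of computing power” described above; a matching `DELETE` call against the same API releases it again when the crunching is done.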
In this post, we looked at how the various types of virtualization technology can be applied in a variety of ways. In the next couple of posts, I’ll be taking a look at software platforms that enable these virtualization mechanisms, and building up example systems with mostly FOSS pieces to show what things we can accomplish and to demonstrate some of the concepts I noted in this post.