In the previous post, I looked at several real-world use cases for containers and hypervisors. This post will be a deep dive into containers and a how-to on using them.
Note: all the code, Dockerfiles, etc. are archived in a git repo at GitHub.
Container History
Origins
A little history is needed before we jump into building and deploying containers. Docker, which is likely still the largest container technology provider by a long shot, was originally released in 2013. In just six years a huge community using containers sprang up around the concept – check out DockerHub for a sense of the community's scale. Preceding the Docker phenomenon we had LXC (Linux Containers), which had been around since 2008, but it never had the sticking power that Docker has enjoyed. Fast-forward to today, and there are a handful of competing container technologies, all with similar features, though some have better ecosystems or support than others.
Where We Are Today
Recently there has been a push for standardization of the container frameworks that make up these ecosystems. The Open Container Initiative (OCI) is an open governance body working toward standardization of container formats and runtimes. Many container systems have already adopted these standards and become OCI-compliant or OCI-aligned. Docker has also bolstered the community by donating much of its container infrastructure to the open-source world for the OCI to utilize.
Goal of the Open Container Initiative
I encourage you to go to the OCI's website and read their mission documents in full, but I'd like to provide a brief description here as it relates to our use of containers in embedded systems. The OCI intends to create a standard for representing containers and their basic runtime requirements and interfaces, with portability as the goal. From their FAQ:
The mission of the Open Container Initiative (OCI) is to promote a set of common, minimal, open standards and specifications around container technology.
What is the mission of the OCI? FAQ (https://www.opencontainers.org/faq#faq1)
This is important to us in embedded engineering because we need this portability and compatibility to fully realize the use cases outlined in the last post. Since the OCI standards have not yet reached sufficient maturity, I will be sticking with Docker as the container ecosystem in this post.
Docker Containers: A Hands-On Example
Objectives
In the following sections we will:
- Create a basic container image from a Dockerfile
- Create a container to build and test a custom application
- Create a container to host the custom application
- Export the container
- Run the container on a different platform
Creating a Basic Container
I'm starting with the assumption that Docker is installed and that you've been able to run the Docker hello-world image. We will use a Dockerfile to compose our basic image. A Dockerfile is a script of sorts which instructs the Docker engine to construct our image piece by piece. I've put a (very) basic Dockerfile below.
##########################################
# File: Dockerfile
# Author: Jacob Calvert <jcalvert@jacobncalvert.com>
# Date: Nov-08-2019
#
# This is a basic Dockerfile
#
##########################################
# start from the basic busybox image
FROM busybox
# put a file in our container
COPY hello-world.txt .
What I have in my workspace on my host machine is as follows:
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ ls
Dockerfile hello-world.txt
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ cat hello-world.txt
hello, world!
To build the Docker image from the Dockerfile, you run a 'docker build' command. The '-t' flag specifies a tag by which we'll reference this image, and the '.' specifies the build context containing the Dockerfile – in this case, the current directory.
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker build -t basic .
Sending build context to Docker daemon 3.072kB
Step 1/2 : FROM busybox
---> 020584afccce
Step 2/2 : COPY hello-world.txt .
---> 11e3eeda8f8e
Successfully built 11e3eeda8f8e
Successfully tagged basic:latest
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $
Let’s view and run our image now.
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
basic latest 11e3eeda8f8e 42 seconds ago 1.22MB
ubuntu latest 775349758637 8 days ago 64.2MB
busybox latest 020584afccce 9 days ago 1.22MB
hello-world latest f2a91732366c 23 months ago 1.85kB
quantumobject/docker-zoneminder latest 469615ab191d 24 months ago 1.15GB
mysql/mysql-server latest a3ee341faefb 2 years ago 246MB
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker run -it basic
/ # ls
bin dev etc hello-world.txt home proc root sys tmp usr var
/ # cat hello-world.txt
hello, world!
/ # exit
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $
What are we looking at here? First we list our available local images with the docker image ls command. We can see that I have several images on my development machine, including basic, which was created only a few moments ago by our build command. Next I run the image we created with docker run -it <image>. The -it flags are short for --interactive and --tty. These two together essentially present the container's console as a pseudo-TTY device in your terminal, in interactive mode. It is as if you are sitting in front of another machine.
Next, you can see that I have a different prompt. This is the container's prompt. I type ls and we can see our hello-world.txt is there on the container's filesystem, as we intended via the COPY command in the Dockerfile. Lastly, we type exit, which ends the shell (the container's main process) and stops the container.
Now let’s look at the persistent state parts of Docker.
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker system info
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 6
Server Version: 19.03.2
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.4.0-165-generic
Operating System: Linux Mint 18
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.55GiB
Name: jacob-aspire-mint
ID: Z2GE:5Y2A:K4LP:UC4O:QV3A:4JMR:5RLW:2DHQ:WANN:2RA3:VKJ2:UZMI
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Using docker system info we can see that there is 1 container and that it is stopped (not running). How do we see what that container is?
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c76e0de1a530 basic "sh" 4 minutes ago Exited (0) 3 minutes ago silly_kalam
We can use the ps command to see information about our containers. Notice the STATUS column: our container's execution has exited. This raises the question: can we restart a container and pick up where we left off? The answer, of course, is yes!
jacob@jacob-aspire-mint /media/jacob/jacob/Documents/Workspaces/Blog/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker container start -i silly_kalam
/ # ls
bin dev etc hello-world.txt home proc root sys tmp usr var
/ #
Notice the command difference this time around. I am using docker container start -i <name>, where the name is the one assigned to the container at run time. The container can be referenced by its human-friendly name, as I have done here, or by its ID (the long UUID). Also note the -i flag; this opens it as an interactive session again. No -t is needed since the TTY device has already been allocated. So can we work inside this running container like a real machine and see the state persist? Indeed we can! Let's see it in action.
/ # cd home/
/home # mkdir -p user/workspace/
/home # cd user/workspace/
/home/user/workspace # echo "another file!" > another.file
/home/user/workspace # ls
another.file
/home/user/workspace # exit
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/basic $ docker container start -i silly_kalam
/ # ls
bin dev etc hello-world.txt home proc root sys tmp usr var
/ # cd home/user/workspace/
/home/user/workspace # ls
another.file
/home/user/workspace # cat another.file
another file!
/home/user/workspace #
It may be a little hard to follow, but here’s what has happened. From inside our previously started container named silly_kalam, I have created a directory /home/user/workspace. Next, I created a file named another.file and filled it with “another file!”. I then exited the container. Next, I started the container again, changed directory to my created directory, and printed out the contents of another.file.
Using this example we can see that the content inside a container is not purely ephemeral; it persists across many starts and stops, though it is lost if the container is removed. For storage that outlives the container itself, check out Docker's volume subsystem; a minimal example follows.
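As a quick sketch (the volume name here is purely illustrative), a named volume can be created and mounted into a container so that anything written under the mount point survives even if the container is removed:

# create a named volume (name is illustrative)
docker volume create basic-data
# mount it at /data inside a new container based on our basic image
docker run -it -v basic-data:/data basic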
Creating a Development Environment Container
Using the knowledge gained through building a basic container, we will now create a container to use as a development environment for our example application.
The Application Specs
We want to build a simple application which demonstrates the portability of containers and also does something we can test. We also want it to be simple for demo purposes. With this in mind, our application will be a basic data modem: it will take in data on one medium and send it back out on another. Below is a list of basic requirements for this application (a rough shell sketch of the idea follows the list):
- Translate data from a serial device to UDP
- Serial will be at 115200/8N1 configuration
- UDP will listen on a configurable port
- TTY device will be configurable
- Modem will log messages to container console
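Before we build the real thing (modem.c in the repo), here is a rough shell approximation of the idea using socat. This is only a sketch for intuition, not the application we will develop, and the device path is an assumption:

# rough approximation only: bridge a serial device (115200, 8N1, raw) and UDP port 9000
# socat relays bytes in both directions once a UDP peer has sent its first packet
socat /dev/ttyACM0,b115200,cs8,raw,echo=0 UDP-LISTEN:9000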
To satisfy these requirements we need to be able to build application code for the container. Why use a container to build applications? Why not just use your host machine's development environment? Repeatability is the answer. Rather than struggle to maintain a development environment across multiple developers, each with their own flavor of Linux distro and their own preferences, we can distribute a "builder" container which has all the dependencies and tools needed to build the application (albeit a simple one in this case) and is completely independent of the host it runs on.
Building and Using the Container
For our “builder” container we create the following Dockerfile:
##########################################
# File: Dockerfile
# Author: Jacob Calvert <jcalvert@jacobncalvert.com>
# Date: Nov-09-2019
#
# This Dockerfile creates a development environment for
# building a sample application
#
##########################################
# start from the basic ubuntu image
FROM ubuntu
# install the needed tools in our container
RUN apt-get update && apt-get install build-essential git net-tools -y
This Dockerfile starts from an Ubuntu base image and adds three common packages: build-essential, git, and net-tools. I won't show the entire build output, but the command follows the same pattern as before (shown below), and at the end you should be able to run the new container as we did in the basic case.
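For reference, the build step is identical in form to the basic example; running the following from the directory containing this Dockerfile produces the dev-env image used below:

# build the development-environment image and tag it dev-env
docker build -t dev-env .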
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $ docker run -it dev-env
root@32984d05f73f:/# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.2 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:ac:11:00:02 txqueuelen 0 (Ethernet)
RX packets 19 bytes 2853 (2.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1 (Local Loopback)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
root@32984d05f73f:/# gcc
gcc: fatal error: no input files
compilation terminated.
root@32984d05f73f:/#
Now, in another tab, let's copy our source into the running builder container (note: we could have simply git cloned it inside the container as well, but in many areas I work in, network access is a no-go).
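The docker cp command below refers to the builder container by the name Docker auto-generated for it, which happened to be jovial_rhodes on my machine (yours will differ). If you are unsure of the name, list the running containers first:

# the NAMES column shows the auto-generated container name
docker ps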
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $ docker cp modem.c jovial_rhodes:/
And now we can see this file in our running container.
root@32984d05f73f:/# ls
bin boot dev etc home lib lib64 media mnt modem.c opt proc root run sbin srv sys tmp usr var
root@32984d05f73f:/#
So let’s build our application, and grab the resulting binary.
root@32984d05f73f:/# gcc modem.c -o modem -lpthread
root@32984d05f73f:/# ls
bin boot dev etc home lib lib64 media mnt modem modem.c opt proc root run sbin srv sys tmp usr var
root@32984d05f73f:/#
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $ docker cp jovial_rhodes:/modem ./modem
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $ ls
Dockerfile modem modem.c
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $
We have successfully used a container to build an application binary for deployment. The application is a simple one, but it illustrates the point of using a "builder" container for repeatable builds. Now we can move on to deploying our application. Of course there would be rigorous testing in a real-world project, but we can neglect that for the purposes of demonstration.
Creating a Deployment Environment Container
We now want to create a container which will start our application when the container starts and run it until we exit the container. We will start with the basic Ubuntu image and add a few things, as shown below.
##########################################
# File: Dockerfile
# Author: Jacob Calvert <jcalvert@jacobncalvert.com>
# Date: Nov-09-2019
#
# This Dockerfile creates a deployment environment for
# the sample application
#
##########################################
# start from the basic ubuntu image
FROM ubuntu
# create an /app/bin directory in our container
RUN mkdir -p /app/bin
# copy in the app and a startup script
COPY modem /app/bin
COPY start-modem.sh /
# set the entry point
ENTRYPOINT /start-modem.sh
Notice a new directive here? The ENTRYPOINT directive is what will be run on startup of the container. Now let’s take a look at the contents of that script.
#!/bin/sh
/app/bin/modem -d $TTY_DEVICE -p $PORT
Simple, right? All it does is start our application, passing a couple of environment variables as command-line parameters.
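With the Dockerfile, the modem binary, and start-modem.sh sitting in the same directory, the image is built just like the previous ones; I tag it deploy-env to match the commands in the next section:

# build the deployment image (expects modem and start-modem.sh alongside the Dockerfile)
docker build -t deploy-env .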
Testing our Deployment Container
Here’s where the magic of containers will really start to shine.
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/deploy-env $ sudo docker run -it -e TTY_DEVICE=/dev/ttyACM0 -e PORT=9000 --device=/dev/ttyACM0 deploy-env
This time we will run our container image with a few extra parameters. First, the -e flags inject key=value pairs as environment variables; that's how our script knows what those values are at run time. Next we pass the /dev/ttyACM0 device through from the host machine to the Docker container. If we modify that option to --device=/dev/ttyACM0:<custom container path>, we can give the serial device a specific path inside the container; otherwise it just gets identity-mapped. Docker sets up a default NAT (bridge) network on the host on the 172.17.0.0/16 subnet, so we will have access to this container at an IP address in that range once we start it. Also, the device hanging off /dev/ttyACM0 is just printing out a test sequence every second.
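For example, a run that remaps the device to a different path inside the container might look like the sketch below (the container-side path is purely illustrative), and docker inspect is one way to confirm which bridge address the container received:

# give the host serial device a custom path inside the container
sudo docker run -it -e TTY_DEVICE=/dev/ttyUSB0 -e PORT=9000 \
    --device=/dev/ttyACM0:/dev/ttyUSB0 deploy-env

# print a running container's IP address on the default bridge network
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name>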
So what does it look like when it runs? From the container's console you see:
Device selected is '/dev/ttyACM0'
Port base selected is 9000
UDP TX: '3
'
UDP TX: '0
'
UDP TX: '1
'
UDP TX: '2
'
UDP TX: '3
'
UDP TX: '4
'
UDP TX: '5
'
UDP TX: '6
'
UDP RX: 'hello world!
'
UDP TX: '7
'
UDP TX: '8
'
UDP TX: '9
'
UDP TX: '10
'
UDP TX: '11
'
UDP TX: '12
'
UDP TX: '13
'
UDP RX: 'hello virtualization!
'
UDP TX: '14
'
UDP TX: '15
'
UDP TX: '16
'
UDP TX: '17
'
UDP TX: '18
'
UDP TX: '19
'
UDP TX: '20
'
UDP TX: '21
'
UDP TX: '22
'
UDP TX: '23
'
UDP TX: '24
And from the netcat (nc) session on my host:
jacob@jacob-aspire-mint /workspace/virtualization-for-embedded-systems-series/containers-deep-dive/dev-env $ nc -u 172.17.0.2 9000
hello world!
hello virtualization!
14
15
16
17
18
19
20
21
22
23
24
^C
I typed the "hello" statements into the netcat session (which show up as UDP RX in the container's log), and netcat is receiving the incrementing sequence from the serial device via UDP on the remote end. We have successfully created and deployed a containerized application!
Exporting the Containerized Application
Since we now have a complete application, fully self-contained in a container image, we can export it to run on other systems. In Docker, this is as simple as:
docker save -o deployment-environment.tar deploy-env:latest
We now have a TAR archive of all the layers that make up our image. This is a portable artifact you can move to other machines, load into their Docker engines, and run.
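Getting the archive onto the target is an ordinary file copy; for example, something like the following (the user, host, and destination path are placeholders matching my setup below):

# copy the saved image archive to the target machine
scp deployment-environment.tar jacob@dev-ubuntu:~/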
Running the Containerized Application on Another Machine
I used scp to copy my TAR file to another machine and imported it into Docker using:
jacob@dev-ubuntu:~# docker load < deployment-environment.tar
bd59016c97ec: Loading layer [==================================================>] 2.048kB/2.048kB
8c558935f47c: Loading layer [==================================================>] 16.9kB/16.9kB
0cc408f84946: Loading layer [==================================================>] 2.048kB/2.048kB
Loaded image: deploy-env:latest
jacob@dev-ubuntu:~# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
deploy-env latest d427611b8747 About an hour ago 64.2MB
Now we can run the app on this different machine just like on our original host. (I omitted the parameters; I just want to see the modem binary print out its errors.)
jacob@dev-ubuntu:~# docker run -it deploy-env
Device selected is '-p'
Failed to open device '-p'
So what’s the point?
So what's the point? Couldn't we just copy our modem binary to the other machine and run it natively on the host? Sure, for this application, because it has no special dependencies. But imagine if your application depended on QT5, TensorFlow, and a pre-trained data model: making sure all of that gets installed on the target host machine is a nightmare. Having a single TAR file with all dependencies built in, and your application properly parameterized and ready to launch, makes portability and usability a much easier sell, especially for embedded systems.
Said another way: if the only requirement to upgrade your application or add additional applications to a fielded embedded system is that you have the right container infrastructure, it is much easier to rapidly update existing capabilities and to deploy new capabilities to the edge with a high confidence of success.
Wrapping Up
In this post, we focused on a practical example of using containers. It’s easy to see how this technology has the ability to be incredibly impactful for embedded systems. In the next post, we will take a look at type-2 hypervisors and how to use them for embedded systems.
I hope you enjoyed the content of this post! If you did, feel free to comment or shoot me a message over at the contact page!