Docker: Containers for the Masses -- Getting terms straight

24 Jul 2014

I was recently discussing container technologies with my team and was trying to explain the various container-related projects including LXC, libcontainer, Docker, Kubernetes, CoreOS, and how all these fit together.

I agreed that a blog post would be a good way to further clarify some of these terms. I want to continue my more in-depth, example-laden blog posts, particularly about Ansible and Docker and the Ansible dynamic inventory plugin for Docker, but wanted to get these thoughts out before I forget (I have young children)!

This blog post is the latest in that series.

Virtual Machines

The first term is one that anyone reading this article already knows, but I find it useful to revisit when discussing containers. I myself still fall back on virtual machine concepts when discussing containers, which is a normal way to grok a new concept, but containers really are different, and using accurate terminology is ultimately a requirement for understanding them correctly.

A virtual machine, or for the sake of discussion, a VM, is emulation software that provides the ability to run programs as if they were running on real hardware, constrained to that machine and that environment.

This usually means you install an OS on a virtual machine and, on top of that OS, the programs of your choice to make your VM complete. You can start and stop a VM, create an image of the VM at a given state, and launch copies of that image to get more instances of the same machine and environment.
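That lifecycle can be sketched with VirtualBox's command-line tool (a sketch only; the VM names and appliance file are hypothetical, and this assumes VirtualBox is installed):

```shell
# Import an appliance and register it as a VM (file name is hypothetical)
VBoxManage import ubuntu-base.ova --vsys 0 --vmname "web01"

# Start the VM without a GUI
VBoxManage startvm "web01" --type headless

# Capture the VM's current state as a reusable snapshot
VBoxManage snapshot "web01" take "clean-install"

# Stop the VM, then clone it to get another copy of the same environment
VBoxManage controlvm "web01" poweroff
VBoxManage clonevm "web01" --name "web02" --register
```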

There are various types of implementations that provide virtual machines, including KVM, VMware, VirtualBox, Windows Virtual PC, and QEMU.

In terms of containers, VMs can be incredibly useful for setting up something like Docker – giving one the ability to run numerous VMs with containers running on each VM. VMs are also useful for composing blog posts and providing a test environment for Jekyll!


Containers

A container is an operating-system-level virtualization method that provides a self-contained execution environment, one that looks and feels like a virtual machine but runs within the OS and uses the OS itself to provide this functionality. Containers don’t require installing an operating system. When you run a container, you run whatever program you want in the container without the overhead of having to run an entire operating system. The processes run in a container are visible only inside the container, are isolated from the host OS and other containers, and don’t require any emulation to run.
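A quick way to see that isolation (a sketch; assumes Docker is installed and can pull the busybox image):

```shell
# Run a throwaway container and list the processes visible inside it.
# Only the container's own processes appear - not the host's.
docker run --rm busybox ps aux

# Meanwhile the host's process table is untouched; the container's
# processes show up there only as ordinary (namespaced) host processes.
ps aux | grep busybox
```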

Container functionality is made possible by Linux kernel features such as cgroups, namespaces, AppArmor, networking interfaces, and firewall rules - the idea being: use the host OS to create and provide an environment where disk, CPU, memory, and networking work as if on a host of their own, just as you have with a VM.
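You can poke at these kernel features directly with `unshare` from util-linux (a sketch; requires root, and the cgroup name below is hypothetical - the cgroup filesystem layout varies by distro):

```shell
# Start a shell in new PID and mount namespaces; --mount-proc remounts
# /proc so tools like `ps` see only processes in the new namespace.
sudo unshare --pid --fork --mount-proc /bin/sh

# Inside that shell, PID numbering restarts: `ps aux` shows just the
# shell (as PID 1) and ps itself.
ps aux

# cgroups, similarly, cap resources for a group of processes, e.g. a
# 100 MB memory limit via the cgroup v1 memory controller:
sudo mkdir /sys/fs/cgroup/memory/demo
echo 104857600 | sudo tee /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
```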

There are various mechanisms that make containers possible: chroot, Docker, LXC, OpenVZ, Parallels, Solaris Containers, and FreeBSD jails. This blog post focuses on Docker and similar Linux containers.


LXC

Until recently, LXC was the default execution environment for Docker. LXC stands for LinuX Containers. LXC combines cgroups, Linux kernel namespaces, AppArmor profiles, seccomp policies, and chroots to provide containers - or what they refer to as “chroot on steroids”.

LXC, written in C, provides a library (liblxc), language bindings, and a set of tools for running containers.
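Those tools look like this in practice (a sketch; the container name is illustrative, and this assumes the LXC 1.0-era packages are installed):

```shell
# Create a container from the ubuntu template (downloads a root filesystem)
sudo lxc-create -n demo -t ubuntu

# Start it in the background, then get a shell inside it
sudo lxc-start -n demo -d
sudo lxc-attach -n demo

# List containers, then stop and destroy the one we made
sudo lxc-ls
sudo lxc-stop -n demo
sudo lxc-destroy -n demo
```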


libcontainer

libcontainer, written in the Go language, is the current default execution environment for Docker. libcontainer provides essentially what LXC provides, but in Go with no external dependencies, and with the additional goal of being agnostic about the underlying container technology.
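At the time of writing you can still switch the Docker daemon between the two drivers with its exec-driver flag (a sketch; assumes a Docker release from this era, where `docker -d` starts the daemon):

```shell
# Default: the native (libcontainer) execution driver
docker -d --exec-driver=native

# Or fall back to LXC (requires the lxc userland tools to be installed)
docker -d --exec-driver=lxc

# `docker info` reports which execution driver the daemon is using
docker info | grep "Execution Driver"
```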


CoreOS

CoreOS is a stripped-down Linux OS, based off of Chrome OS, that uses Docker containers to run multiple isolated Linux systems for resource partitioning.

CoreOS provides etcd, a key/value store written in Go that supplies both distributed configuration information and service discovery for the cluster. CoreOS also provides Fleet, a cluster management daemon used to control systemd on each node.
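A sketch of both tools in action (the key names and unit file are hypothetical; assumes etcdctl and fleetctl from this era on a running CoreOS cluster):

```shell
# etcd: set and read a shared configuration value from any node
etcdctl set /services/web/port 8080
etcdctl get /services/web/port

# fleet: submit a systemd unit to the cluster, start it on some node,
# and see where units are running
fleetctl submit web.service
fleetctl start web.service
fleetctl list-units
```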

This is certainly a topic this blog will revisit!


Kubernetes

Kubernetes, another topic the author has delved into and will have a separate post on in the future, is a project by Google (the Google Cloud Platform guys). It is an open-source container cluster management project.

Google uses container technology throughout the company, both to scale out and to provide security, for a number of applications such as Search and Gmail; they start up to 2 billion containers a week, roughly 3,300 per second! GCE (Google Compute Engine) runs VMs inside of containers for resource isolation between VM and non-VM workloads.

Google doesn’t currently use Docker internally, yet has written kernel features that make containerization possible, notably cgroups (control groups), which are used to limit, account for, and isolate the resource usage (CPU, memory, disk I/O) of process groups. There is also the work Google did on LMCTFY (Let Me Contain That For You), the open-source version of Google’s container stack, whose functionality is being moved into libcontainer, the current default Go-based container driver for Docker.

Kubernetes is written in Go, and the idea is to build on top of Docker, which Google sees as a technology that will become a standard for containerization (hence the aforementioned work they are doing). It is a scheduler for containers, which it organizes into what it refers to as “pods”, and it also provides communication between containers.

The primary purpose of pods is to support co-located, co-managed helper programs such as content management systems, logging, log management and backup, proxies, bridges, adapters, and so on.
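In today's API, a pod co-locating an application with one such helper looks roughly like this (a sketch using a current-style manifest; the names and images are illustrative, and the Kubernetes API at the time of this post differed):

```yaml
# A pod pairing an app container with a log-shipping "sidecar" helper;
# both containers share a volume and the pod's network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger      # illustrative name
spec:
  volumes:
    - name: logs
      emptyDir: {}           # scratch volume shared by both containers
  containers:
    - name: web
      image: nginx           # the main application container
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-shipper
      image: busybox         # hypothetical helper tailing shared logs
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```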

Also, like CoreOS, Kubernetes makes use of etcd, which holds the persistent state of the master.

There is much more information about Kubernetes, and as already has been mentioned, there will be a future blog post specifically covering Kubernetes.
