Welcome back! This is the second in a series of blog posts on Docker. The previous post introduced the concept of containers and the technology that makes them possible. This post covers installing and configuring Docker, along with issues the author has encountered while using Docker that are worth sharing.
Installing and configuring Docker
In addition to the standard instructions, I needed containers to bind their ports on 0.0.0.0 so that it would be possible to connect to a container running on one cartridge from an external server (external to the Docker host). The following was added to the end of the file /etc/default/docker:
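As a minimal sketch (assuming the daemon's --ip flag, which sets the default IP used when binding container ports):

```sh
# Appended to the end of the Docker defaults file:
# bind published container ports on all interfaces (0.0.0.0)
DOCKER_OPTS="--ip=0.0.0.0"
```

The daemon needs a restart (sudo service docker restart) for the change to take effect.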
With this option, it is possible to run containers and have any of their exposed ports bound to ports on the host. There will be more elaboration on this in a later post in this series.
Running the Docker Daemon: UNIX domain socket or TCP socket
By default, Docker runs bound to a UNIX domain socket rather than a TCP socket on 127.0.0.1. There is a good reason for this: Docker runs as root, and there is a risk of cross-site-scripting attacks when using a TCP socket if you are not on a completely trusted network or VPN (or both).
I recently investigated, and ended up demonstrating, the use of Ansible modules (see presentation) to manage containers across 45 cartridges on a Moonshot server. Since the network was behind a VPN, completely trusted, and locked down (it even used an internal apt repo), the Docker daemon on each cartridge was set to run as a service on port 4243 so that the Ansible modules, run via playbooks on a single host, could connect and communicate with all 45 Docker daemons. Using a TCP socket also makes it possible to connect to Docker locally as a non-privileged user.
The TCP socket option is set in /etc/default/docker:
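A sketch of the setting (the exact line may vary with Docker version), using the daemon's -H flag to listen on both the TCP port and the local UNIX socket:

```sh
# Listen on TCP port 4243 on all interfaces, and keep the
# local UNIX socket available for clients on the host itself
DOCKER_OPTS="-H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock"
```

A client can then reach the daemon with docker -H tcp://dockerhost:4243 info, or by exporting DOCKER_HOST=tcp://dockerhost:4243 (here dockerhost is a placeholder for the cartridge's hostname).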
I will again stress that this was a need specific to my setup: a single host managing an HP Moonshot system with 45 cartridges on an internal network that could only be accessed through a VPN, with no external access. ONLY use a setup like this if your environment is secure. Also make sure that access to the port you use (for instance, 4243 in the example above) is limited to only the host that needs it. In my case, that was the host I was running Ansible playbooks on.
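One way to limit access is a host firewall rule; here is a sketch with iptables, where 10.0.0.5 is a hypothetical address standing in for the single host that should be allowed in:

```sh
# Allow only the Ansible control host (10.0.0.5, hypothetical)
# to reach the Docker daemon's TCP port, and drop everything else
iptables -A INPUT -p tcp --dport 4243 -s 10.0.0.5 -j ACCEPT
iptables -A INPUT -p tcp --dport 4243 -j DROP
```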
Sign up for an account
So that you can share your images as well as follow along with this blog post, it is recommended that you sign up for a Docker account. Follow the 'sign up' link next to 'login' in the upper right-hand corner of the page. The account name is what you will use to tag your images, and it is the string that, when searched on, will display your images to anyone interested.
About using sudo to run the Docker CLI
The documentation on the Docker site runs docker commands via sudo. This is a best practice and the assumed setup for security purposes. As mentioned before, Docker runs as root, so it is important to make sure that only trusted users can run Docker. Most developers running Docker locally, either on a laptop or a VM, have only one user to be concerned with, and can set up their system so that that user does not have to preface every docker command with sudo. The following will set this up:
$ sudo groupadd docker
$ sudo gpasswd -a myusername docker   # myusername == user in question
$ sudo service docker restart
Note: The above steps are for Ubuntu; mileage may vary on other Linux variants. The user will also need to log out and back in for the new group membership to take effect.
This was the second blog post in the series on Docker, and it discussed Docker installation and configuration. It covered specific configuration issues the author has encountered while working with Docker, including how to run the Docker daemon, running Docker bound to a UNIX socket versus a TCP socket, and the use of sudo for running the Docker CLI.
The next post will cover actual usage of Docker.