
How (and Why) to Run Docker Inside Docker

Running Docker in Docker allows you to create images and start containers in an already containerized environment. There are two possible approaches to achieve this depending on whether you want to start child or sibling containers.

Accessing Docker from inside a Docker container is most often desirable in the context of CI and CD systems. It's common to host the agents that run your pipeline in a Docker container. You'll end up needing a Docker-in-Docker strategy if any of your pipeline stages build images or interact with containers.

The Docker-in-Docker image

Docker is provided as a self-contained image via the docker:dind tag on Docker Hub. Starting this image gives you a working installation of the Docker daemon inside your new container. It runs independently of the host daemon that's running the dind container, so docker ps inside the container will give different results to docker ps on your host.

docker run -d --privileged --name docker -e DOCKER_TLS_CERTDIR=/certs -v docker-certs-ca:/certs/ca -v docker-certs-client:/certs/client docker:dind
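For a quick check that the two daemons really are separate (assuming the container is named docker, as in the command above), you can compare their container lists:

docker ps                     # containers managed by the host's daemon
docker exec docker docker ps  # containers managed by the inner daemon (initially empty)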

There is one major downside to using Docker-in-Docker in this way: you must use privileged mode. This constraint applies even if you're using rootless containers. Privileged mode is activated by the --privileged flag in the command above.

Using privileged mode gives the container full access to your host system. This is necessary in a Docker-in-Docker scenario so that your internal Docker can create new containers. However, this can represent an unacceptable security risk in some environments.
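As a minimal illustration of what this grants (not something to run on a production host), a privileged container sees all of the host's devices under /dev, while a regular container only sees a short, namespaced list:

docker run --rm alpine ls /dev               # minimal, namespaced device list
docker run --rm --privileged alpine ls /dev  # the host's full device list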

There are other issues with dind, too. Some systems may experience conflicts with Linux Security Modules (LSMs) such as AppArmor and SELinux. This happens when the inner Docker applies LSM policies that the outer daemon cannot anticipate.

Another challenge involves container filesystems. The outer daemon runs on your host's regular filesystem, such as ext4. All of its containers, however, including the inner Docker daemon, sit on a copy-on-write (CoW) filesystem. This can create incompatibilities if the inner daemon is configured to use a storage driver that cannot run on top of an existing CoW filesystem.
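If you suspect this problem, one way to diagnose it (a sketch, assuming the dind container from above is named docker) is to compare the storage drivers reported by each daemon. Arguments placed after the image name are passed through to the inner dockerd, so you can also force a driver such as vfs, which is slow but works on top of any backing filesystem:

docker info --format '{{.Driver}}'                     # driver used by the host daemon
docker exec docker docker info --format '{{.Driver}}'  # driver used by the inner daemon

docker run -d --privileged --name docker-vfs docker:dind --storage-driver=vfs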

Mount your host's Docker socket instead

The challenges associated with dind are best addressed by avoiding its use altogether. In many scenarios, you can achieve the intended effect by mounting your host's Docker socket into a plain docker container:

docker run -d --name docker -v /var/run/docker.sock:/var/run/docker.sock docker:latest

The Docker CLI inside the docker image interacts with the Docker daemon socket it finds at /var/run/docker.sock. Mounting your host's socket at this path means docker commands run inside the container will execute against your existing Docker daemon.

This means containers created by the nested Docker will reside on your host system, alongside the Docker container itself. All the containers will exist as siblings, even though it feels like the nested Docker is a child of its parent. Running docker ps will produce the same results whether it's run on the host or inside your container.
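To see this in action (assuming the socket-mounted container from the command above, named docker), start a container from inside it and watch it appear on the host:

docker exec docker docker run -d --name sibling alpine sleep 300
docker ps --filter name=sibling  # run on the host: the "sibling" container shows up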

This technique sidesteps the implementation challenges of dind. It also removes the need for privileged mode, although mounting the Docker socket is itself a potential security concern. Anyone with access to the socket can send instructions to the Docker daemon, giving them the ability to start containers on your host, pull images, or delete data.

When to use each approach

Docker-in-Docker via dind has historically been popular in CI environments. It means the "inner" containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host's Docker daemon.

While this often works, it is fraught with side effects and is not the intended use case for dind. It was added to facilitate the development of Docker itself, not to provide end-user support for nested Docker installations.

According to Jérôme Petazzoni, the creator of the dind implementation, the socket-based approach should be your preferred solution. Mounting your host's daemon socket is safer, more flexible, and just as feature-complete as starting a dind container.

If your use case means you absolutely need dind, there is a more secure way to deploy it. The newer Sysbox project is a dedicated container runtime capable of nesting other runtimes without using privileged mode. Sysbox containers act similarly to virtual machines, so they're able to run software that would usually need to run on a physical or virtual machine. This includes Docker and Kubernetes, without any special configuration.
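As a brief sketch (assuming you've installed Sysbox and it has registered itself with Docker as the sysbox-runc runtime), you select it per container with the --runtime flag; note that no --privileged flag is needed:

docker run -d --runtime=sysbox-runc --name sysbox-dind docker:dind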

Conclusion

Running Docker within Docker is a relatively common requirement. You're most likely to see it while configuring CI servers that need to support container image builds from user-created pipelines.

Using docker:dind gives you an independent Docker daemon running within its own container. It effectively creates child containers that aren't directly visible from the host. While it appears to offer strong isolation, dind actually harbors many edge-case issues and security concerns. These stem from the way Docker interacts with your host's operating system.

Mounting your host's Docker socket into a container that includes the docker binary is a simpler and more predictable alternative. It lets the nested Docker process spawn containers that become its own siblings. No privileged mode or other special settings are needed when using the socket-based approach.
