Why would we need to do this? There are several scenarios where running Docker inside a container is useful, especially when you are building a continuous delivery pipeline. For example:
- You want to set up an isolated environment (such as a container) for a software testing process that requires external applications which can themselves run as containers.
- You want to build an application in a container and then deploy it on another host using only the Docker API, without executing shell commands.
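As an illustration of the second point, the Docker Engine exposes an HTTP API over its Unix socket, so a container (or any tool) can manage containers without the docker CLI at all. A minimal sketch, assuming curl with --unix-socket support and access to /var/run/docker.sock (API version v1.41 matches Docker 20.10):

```shell
# List running containers through the Engine API, no docker CLI involved.
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json

# Create a container purely through the API (hypothetical payload).
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "alpine:latest", "Cmd": ["echo", "hello"]}' \
  http://localhost/v1.41/containers/create
```

The same API is what the docker CLI itself talks to behind the scenes.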
There are two common methods to achieve this objective. The first is by binding the Unix socket of the running Docker Engine into the container. The second is by installing a specific Docker Engine inside the container.
For instance, we can run a container based on the Docker 20.10 image with the following command:
docker run -v /var/run/docker.sock:/var/run/docker.sock -it --rm docker:20.10
Now you can run any docker command inside the container you just created. Because the container shares the same socket that controls the host's Docker Engine, each docker command can interfere with other container processes on the host. For example, the command "docker rm -f $(docker ps -a -q)" force-removes all containers on the host, even though it is run from inside a container.
For the second method, we run a dedicated Docker image as a container: the "dind" (Docker-in-Docker) variant of the Docker image.
docker run -d --privileged --name myDind docker:20.10-dind
docker exec -it myDind /bin/sh
This method requires the container to run with the --privileged flag, which means the container gains extended access to host resources. This makes sense once we recall that a container does not bring its own kernel: it shares the Linux kernel provided by the host, and running a nested Docker Engine needs capabilities that an ordinary container is denied.
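Unlike the socket-mounting approach, the dind container runs its own, fully separate Docker Engine, so containers started inside it are invisible to the host's daemon. A short sketch, assuming the myDind container started above is still running:

```shell
# Start a container inside the nested engine. "alpine:latest" is pulled
# and run by myDind's own daemon, not the host's.
docker exec myDind docker run --rm alpine:latest echo "hello from inner docker"

# On the host, the inner alpine container never appears in "docker ps";
# only the myDind container itself is visible to the host's daemon.
docker ps
```

Which method to choose depends on the use case: socket mounting is lighter but exposes the host's daemon, while dind isolates the nested containers at the cost of a privileged container.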