Introduction to Docker


Please note: The following guide cannot be implemented on a shared web hosting product; it requires an unmanaged server with root access.

Overview

Modern web applications often consist of multiple components that must be installed individually, which frequently requires changes to the underlying Linux system and additional dependencies. Installing such an application can take several hours, and once installed, it is often deeply integrated into the Linux system rather than encapsulated. To work around this, a separate virtual server can be configured for each complex web application. However, this approach has the following drawbacks:

  • No deduplication: If multiple virtual machines on a server are based on the same image, for example a Debian Linux image, the image data must be stored multiple times.

  • Compatibility: The image must be compatible with the virtualization solution used. Portability is therefore limited.

  • Full virtualization: Virtual machines virtualize the complete hardware, including device drivers, for example for network cards. This results in additional overhead.

Docker was developed to solve these problems. Important concepts and terms in the Docker ecosystem include:

  • An image is the hard drive of the container. An image can consist of multiple layers, which are created when the image is built and can no longer be modified by a running container. Each layer only stores the changes relative to the previous layer. When a container is started from an image, a writable container layer is created on top, to which that container can write. Images can be built on top of other images; through layers and shared base images, deduplication of stored data is achieved. An image is not used for persistent data storage: when software is updated, for example, the entire image is replaced. It is therefore considered stateless.

  • A container is an instance of an image. It runs in an isolated Linux environment that shares the host's kernel (operating-system-level virtualization rather than a full virtual machine).

  • A volume is used to store persistent data from a container, such as the contents of a database. If an update to the container needs to be performed later, the container image can be exchanged, while the data remains intact.

  • The engine is the server-side daemon process that runs and manages the Docker environment.

  • A Docker registry allows images to be shared within an organization or publicly. In a standard Docker installation, Docker Hub is configured as the default registry.

  • A Docker repository is a set of Docker images stored in a registry. A repository can contain multiple versions of an image, distinguished by tags.

  • The Dockerfile is the recipe for a container. It contains the instructions for building the image of a container. Each instruction creates a new image layer, and thanks to layer caching, only the layers after a changed instruction need to be rebuilt when the Dockerfile changes. In a Debian-based image, for example, it can contain apt commands that install packages.

Use Cases for Docker

  • Docker is suitable as a development environment. Via Docker containers, developers get a consistent environment in which to develop Linux-based applications. Docker containers, which effectively contain a Linux userland, can also be launched identically on other platforms, such as Windows or macOS.

  • The developed software can be directly used in production in the form of containers.

  • Docker facilitates the documentation of the installed software. In the Dockerfile or Docker Compose, the instructions for creating a container are recorded. These recipes can be managed and versioned in a version control system, such as git.

  • Docker is well-suited for starting web applications on individual servers that would otherwise require complex installation. They thus run in an isolated environment and can be easily removed from the respective server if necessary.

  • As part of continuous integration, Docker containers can be integrated into pipelines using tools such as Travis CI, Jenkins, or Wercker, so that applications are compiled and tested automatically before being deployed as Docker images.

  • Docker is also suitable for deploying large scalable applications based on microservices. Thanks to Docker’s great flexibility, containers can be quickly launched consistently across multiple servers.

When Not to Use Docker

  • Docker is not suitable for applications with a desktop graphical interface, as it was primarily designed for web and command-line applications.

  • Using Docker introduces a slight performance overhead. It is therefore less suitable for applications with very high performance requirements.

  • Since containers share the host's kernel, no changes to the Linux kernel can be made from inside a container. Loading kernel modules is not possible in Docker containers.

Docker Compose

Docker Compose was developed as a simple tool to create and manage multiple containers that together form a service. In a docker-compose.yml file, the individual containers and their relationships, for example volumes, networks, and port mappings, are defined. Docker Compose can also build containers from Dockerfiles.

The use of Docker Compose is explained in an example below. At the end of the article, you will find a collection of important Docker Compose commands.

Large Docker Environments

To manage large Docker environments, various additional tools and solutions are available. Here is a brief overview of these solutions. A well-known solution is, for example, Kubernetes. Kubernetes supports the following tasks:

  • Creating application services that span multiple containers.
  • Collecting telemetry data, for example, to monitor load and performance.
  • Managing access permissions for container management.
  • Managing service replication to increase availability.

Rancher is a commonly used solution for managing Kubernetes.

Practical Implementation: Installing Docker on a Debian Server

It is recommended to use the Community Edition of Docker (docker-ce). The installation is shown here for Debian 10 “Buster”; a detailed guide is available on the Docker website. Docker is installed not from the official Debian repositories but directly from the official Docker repository, so it can be kept up to date.

Please connect to your server via SSH. First, all previously installed versions of Docker must be removed:

sudo apt update
sudo apt remove docker docker-engine docker.io containerd runc

Please install the necessary dependencies from Debian:

sudo apt install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common

and add the Docker PGP key:

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Then add the Docker repository:

sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/debian \
   $(lsb_release -cs) \
   stable"

In the next step, Docker can be installed:

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

You can allow other users on the Linux system to manage Docker. For this, the user must be a member of the docker group. If the group does not exist, please create it with:

sudo groupadd docker

and then add the user to the group (the user must log out and log back in for the change to take effect):

sudo usermod -aG docker <USER>

To check if Docker has been installed correctly, a “Hello World” container can be started:

docker run hello-world 

After executing the command, the required image is automatically downloaded and a container is started from it:

max@demoserver ➜  ~ docker run hello-world 
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
1b930d010525: Pull complete 
Digest: sha256:fc6a51919cfeb2e6763f62b6d9e8815acbf7cd2e476ea353743570610737b752
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

If you see this message, then you have successfully installed Docker.

Portainer as an Example of a Web Application

To show how a web application can be started in Docker, we will take Portainer as an example. It is a web-based management tool for Docker.

We create a volume where Portainer’s persistent data will be stored with the following command:

docker volume create portainer_data

Existing volumes can be displayed with the docker volume ls command, which in our case will look like this:

root@demoserver:~# docker volume ls
DRIVER              VOLUME NAME
local               portainer_data
root@demoserver:~# 

The following command starts Portainer:

docker run -d \
  -p 9000:9000 \
  --name=portainer \
  --restart=unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer

The -p option establishes a port redirection from the host to the container; the host port is specified first. The --name option sets a name for the container. The --restart option defines the restart policy: unless-stopped restarts the container automatically unless it has been explicitly stopped. The -v option specifies which volume is mounted where in the container. For Portainer, we also pass the Unix socket /var/run/docker.sock, which allows the container to access the Docker engine running on the host in order to manage it. Finally, the image to use is specified, in our case portainer/portainer.

By executing the command, the required image is automatically downloaded from the registry and the container is started. Running containers can be displayed with the docker ps command:

root@demoserver:~# docker ps
CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                                            NAMES
e12e5ebbd96a        portainer/portainer   "/portainer"        6 minutes ago       Up About a minute   0.0.0.0:8000->8000/tcp, 0.0.0.0:9000->9000/tcp   portainer
root@demoserver:~# 

Here, you can see the container ID, as well as the image on which the container is based.

The Portainer web interface can now be opened in a browser, for example via http://demoserver.mustermann-domain.de:9000/. There, a password for the Portainer admin user must be set. To connect Portainer to Docker, select the “Local” option so that the passed-in socket is used. Portainer then lets you manage the Docker instance graphically.

If you now want to stop the running container, you can do so using its ID with the docker stop <ID> command:

root@demoserver:~# docker stop 36fc433abcef
36fc433abcef
root@demoserver:~# 

Similarly, the container can be restarted with docker start <ID>. With the docker rm <ID> command, the container will be deleted.

A problem in this example is that the web server on port 9000 only offers plain HTTP. For a secure connection to Portainer, a TLS-encrypted HTTPS connection would be necessary. At this point, it is common to set up an HTTP reverse proxy on the host, for example nginx, which terminates the client’s encrypted connection and forwards the traffic to the container.
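
Such a reverse proxy could be sketched as follows. This is a minimal, illustrative nginx server block, not a hardened production configuration; the certificate paths are placeholders you would replace with your own certificates:

```nginx
# Illustrative nginx reverse proxy for the Portainer container.
# Certificate paths below are placeholders for your own setup.
server {
    listen 443 ssl;
    server_name demoserver.mustermann-domain.de;

    ssl_certificate     /etc/ssl/certs/your-cert.pem;
    ssl_certificate_key /etc/ssl/private/your-key.pem;

    location / {
        # Forward decrypted traffic to the container's published port
        proxy_pass http://127.0.0.1:9000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

With such a configuration, clients connect to nginx via HTTPS on port 443, while nginx talks to the container over plain HTTP on the loopback interface.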

Creating Your Own Containers

To better understand Docker, we will show how to create your own image using Docker Compose. The example is a small web server based on the Python framework Flask together with a Redis database server. With Docker Compose, multiple Docker containers can be created and run together.

For the new web application, we create a directory, for example named my-first-container. In this directory, we create the app.py file with the following content:

# my-first-container/app.py

from flask import Flask
from redis import Redis

app = Flask(__name__)
redis = Redis(host='redis', port=6379)

@app.route('/')
def hello():
    redis.incr('hits')
    return 'My example application has been displayed %s times.' % redis.get('hits').decode("utf-8")


if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=True, port=8080)

For Python, we also need a file named requirements.txt in which the Python dependencies are specified:

Flask==1.1.1
redis==3.4.1

The actual “recipe” for the Docker container is stored in the Dockerfile:

FROM ubuntu:latest

LABEL maintainer="Max Mustermann <max@mustermann-domain.fr>"

RUN apt-get update -y && \
    apt-get install -y python3 python3-pip python3-dev

# We first copy just the requirements.txt to take advantage of Docker cache
COPY ./requirements.txt /app/requirements.txt

WORKDIR /app

RUN pip3 install -r requirements.txt

COPY . /app

ENTRYPOINT [ "python3" ]

CMD [ "app.py" ]
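
The docker-compose up command below assumes a docker-compose.yml in the same directory, which the article does not reproduce. A minimal version consistent with the files above might look like this (the service name web matches the later docker-compose exec command, and the service name redis matches the Redis(host='redis') hostname in app.py):

```yaml
# my-first-container/docker-compose.yml (minimal sketch)
version: "3"
services:
  web:
    build: .            # build the image from the Dockerfile above
    ports:
      - "8080:8080"     # app.py listens on port 8080
    depends_on:
      - redis           # start the database before the web app
  redis:
    image: redis:5      # service name "redis" is the hostname used in app.py
```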

The containers can then be created and started with the command

docker-compose up --build

The --build option ensures that the image is rebuilt, running all build steps again. The required base images are automatically downloaded, and the containers are created and started on this basis. With the -d option, for example docker-compose up -d, the containers can be started in the background.

Once the example container is started, it can be accessed in a browser on port 8080, for example via http://demoserver.mustermann-domain.de:8080/. The text of the example program will be displayed, for example: “My example application has been displayed 5 times.”

With the command

docker-compose exec web /bin/bash

you can start a bash shell inside the web container.

With docker-compose down, the containers can be stopped and removed.

What Makes Containers Different from Virtual Machines

Containers and virtual machines solve similar problems, but they work in very different ways. A virtual machine simulates an entire operating system, which makes it heavier and slower to start. A container shares the host system’s kernel and only packages the files and libraries an application needs. Because of this lightweight structure, containers start almost instantly and use fewer resources. They also behave consistently across different servers, making deployments more predictable and easier to maintain.

When Docker Is the Right Choice

Docker is especially helpful when you need an environment that behaves the same in development, testing, and production. It keeps all dependencies bundled so you don’t run into differences between servers. This makes it a strong fit for web applications, microservices, and situations where isolated environments reduce conflicts. On the other hand, Docker isn’t ideal for software that requires deep system access or custom kernel modules. In those cases, a full virtual machine may still be the better option.

Key Practices for Running Docker on a Server

Running Docker in a server environment works best with a bit of routine maintenance.

Here are the most important points to keep in mind:

  • Keep your Docker Engine and images up to date to avoid security issues.
  • Use persistent volumes for data you need to keep, such as databases or file uploads.
  • Limit container privileges to reduce the impact of potential vulnerabilities.
  • Monitor resource usage to prevent containers from overloading your system.
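
As a sketch of the privilege and resource points above, Docker Compose supports keys such as read_only, cap_drop, and mem_limit. The service and image names here are hypothetical, and the exact set of supported keys depends on your Compose version:

```yaml
# Illustrative hardening options for a hypothetical service "app"
services:
  app:
    image: example/app        # hypothetical image name
    read_only: true           # mount the container filesystem read-only
    cap_drop:
      - ALL                   # drop all Linux capabilities not explicitly needed
    mem_limit: 256m           # cap the container's memory usage
    restart: unless-stopped   # restart automatically unless explicitly stopped
```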

What to Watch Out For: Common Docker Pitfalls

Using Docker makes many tasks easier — but there are common pitfalls to be aware of.

  • Misconfigured volume mounts or forgetting to use volumes can lead to data loss if containers are removed or recreated.
  • Using many containers with overlapping dependencies or logics can make debugging harder, because issues may arise from interactions between containers.
  • Ignoring security hardening (e.g. exposing container ports blindly, not updating images, or using insecure default configurations) can expose your server.
  • Overlooking performance trade-offs for disk I/O or network performance when many containers run simultaneously, particularly when containers share host resources heavily.

Understanding these helps you avoid surprises and ensure a smoother container-based deployment.

Collection of Important Commands for Beginners

Containers

  • docker container ls -a - Display existing containers. With the additional -a option, non-running containers are also displayed.
  • docker ps - Shows running containers
  • docker run -d <IMAGE_NAME> - Create and start a container based on an image. The -d option starts the container in the background.
  • docker start <CONTAINER_ID> - Starts a container given by its ID
  • docker stop <CONTAINER_ID> - Stops a running container identified by its ID
  • docker kill <CONTAINER_ID> - Forcibly terminates a running container
  • docker rm <CONTAINER_ID> - Deletes a stopped container
  • docker logs -f <CONTAINER_ID> - Displays the logging output of a container by its ID. The -f option follows the output.
  • docker attach <CONTAINER_ID> - Attaches to the container's main process so you can interact with it via standard input and output. If this main process stops, the container stops as well; this means the keyboard shortcut Ctrl + C or the exit command may stop the container.
  • docker exec -it <CONTAINER_ID> /bin/sh - Starts an interactive shell inside the container

Images

  • docker image ls - Displays existing images
  • docker image prune - Deletes unused (dangling) images
  • docker image pull <NAME> and docker image push <NAME> retrieve or copy images from or to a registry by their name
  • docker image rm <NAME> deletes an unused image

Docker Compose

  • docker-compose up - starts the containers that are defined in the docker-compose.yml file of the current directory
  • docker-compose down - stops a running collection of containers defined in a docker-compose.yml file
  • docker-compose ps - Displays the containers currently managed by Docker Compose
  • docker-compose start <NAME> and docker-compose stop <NAME> start or stop the containers identified by the name recorded in the docker-compose.yml file
