
Top 10 Docker Hardening Best Practices

Devsecops
Jun 23, 2022
8 min
Pranav S

A container security post that shows 10 ways to harden your Docker infrastructure and protect your containers and data from the bad guys.

Introduction

With many companies adopting Docker into their infrastructure, the attack surface available to threat actors has grown as well. This creates a need to secure our Docker infrastructure. In this article, we cover practices you can follow to harden the security of your Docker containers.

To utilize this article to its fullest, you must have the following:

  • Familiarity with the Linux command line
  • A basic idea about containerization and Docker would be helpful
What is Docker?

Docker is an open source containerization platform. It allows developers to package their applications into containers — standardized executable components combining application source code with the operating system (OS) libraries and dependencies required to run that code in any environment.

The Top 10 Best Practices

The Docker documentation outlines four major areas to consider while securing Docker containers:

  • The kernel’s support for namespaces and cgroups
  • The attack surface of the Docker daemon
  • Container misconfigurations
  • Using Linux kernel security modules like AppArmor, SELinux, etc.

We have broken these down into the top 10 practices that you can follow to harden your Docker environment.

1. Update the host and Docker daemon frequently

Containers share the kernel with the host system. Any kernel exploit executed in the context of a container will directly impact the host kernel. For example, the kernel privilege escalation exploit Dirty Cow, when executed within a container, results in root access to the host. Therefore, it is important to keep both the host and the Docker engine up to date.
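For example, on a Debian or Ubuntu host where Docker was installed from Docker’s official apt repository (package names may differ for other installation methods), an update looks like this:

sudo apt-get update
sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io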

2. Do not expose the Docker daemon socket

All communications that take place between the Docker client and Docker daemon happen through the Docker daemon socket, which is a UNIX socket, and generally found at /var/run/docker.sock. This allows access to the Docker API. Traditional UNIX file permissions are used to limit access to this socket. The socket is owned by the root user in a default configuration. If anyone else obtains access to the socket, they will have root access to the host.

  • Set permissions such that only the root user and the docker group can access the Docker daemon socket.
  • Use SSH to protect the Docker daemon socket (see the example at the end of this section).
  • Use TLS (HTTPS) to protect the Docker daemon socket. This allows the Docker API to be reached over the network in a safe manner.
  • Do not make the daemon socket available for remote connections, unless you are using Docker’s encrypted HTTPS socket, which supports authentication.
  • Do not run Docker images with an option like -v /var/run/docker.sock:/var/run/docker.sock, which exposes the socket in the resulting container. Remember that mounting the socket read-only is not a solution; it only makes the socket harder to compromise. An example of this in a Docker Compose file is:
volumes:
  - "/var/run/docker.sock:/var/run/docker.sock"

To check if you already have a container which is running in such a configuration:

docker inspect --format='{{.HostConfig.Binds}}' [container id]
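As an example of the SSH option above, the client can reach a remote daemon without exposing the socket over the network. This is a minimal sketch; user and docker-host are placeholders, and your user must be allowed to run Docker on the remote machine:

export DOCKER_HOST=ssh://user@docker-host
docker info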
3. Run Docker in Rootless Mode

It is possible to run the Docker daemon as a non-root user, which limits the impact of potential vulnerabilities in the daemon and the container runtime. This is called “rootless mode”. Rootless mode does not require root privileges, either while installing Docker or while communicating with the Docker API.

In rootless mode the Docker daemon and containers are run within a user namespace and without root privileges by default.

To run Docker in rootless mode:
  • Install the uidmap package with sudo privileges:
    sudo apt-get install -y uidmap
  • Fetch the installation script from Docker’s website and run it:
    curl -fsSL https://get.docker.com/rootless | sh
  • Copy the last two lines of the output, beginning with export, and paste them at the end of your ~/.bashrc file. This ensures that the PATH and DOCKER_HOST variables are set each time you open a Bash shell.
  • Run source ~/.bashrc to set these variables in your current shell session.
  • Run systemctl --user start docker to start the Docker Engine.
  • Verify that Docker is running by running docker version
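A quick way to confirm that your client is now talking to the rootless daemon (exact output varies by Docker version):

echo $DOCKER_HOST                             # should point to a socket under /run/user/<your uid>
docker info --format '{{.SecurityOptions}}'   # should include name=rootless
docker run --rm hello-world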
4. Container Resource Configuration

Control groups, or cgroups, are a Linux kernel feature that plays a key role in allocating and limiting resources for containers. Their job is not only to make sure that each container gets its fair share of resources like memory and CPU, but also to see to it that a single container cannot bring the system down by exhausting one of these resources.

Limiting resources prevents Denial-of-Service attacks. Following are some CLI flags you can use to limit resources for a container:

  • --memory=<memory size> — maximum amount of memory
  • --restart=on-failure:<number_of_restarts> — number of restarts
  • --memory-swap <value> — total amount of memory plus swap the container may use
  • --cpus=<number> — maximum CPU resources available to a container
  • --ulimit nofile=<number> — maximum number of file descriptors
  • --ulimit nproc=<number> — maximum number of processes

By default, Docker allows a container to use as much RAM and CPU as the host’s kernel allows. Therefore, it is necessary to set resource constraints to prevent security and availability issues in the container and on the host.
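For example, a container can be started with several of these limits at once. The nginx image and the specific values below are only illustrative:

docker run -d --memory=512m --memory-swap=512m --cpus=1 \
  --ulimit nofile=1024 --ulimit nproc=512 \
  --restart=on-failure:5 nginx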

5. Avoid Privileged Containers
Avoid using the `--privileged` flag

Docker has features to allow a container to run with root privileges on the host. This is done with the --privileged flag. A container run in privileged mode gets all root capabilities and access to all devices on the host.

If an attacker were to compromise a privileged container, it would be easy for them to access resources on the host. Tampering with security modules on the system, like SELinux, would also be trivial. For this reason, it is not recommended to run containers in privileged mode at any stage of the development lifecycle.

Privileged containers are a major security risk. The possibilities for abuse are endless. Attackers can identify services running on the host to find and exploit vulnerabilities. They can also exploit container misconfigurations, such as containers with weak credentials or no authentication. Privileged containers give an attacker root access, allowing malicious code to be executed. Avoid using them in any environment.

To check if the container is running in privileged mode, use the following command:

docker inspect --format='{{.HostConfig.Privileged}}' [container_id]
  • true implies the container is privileged
  • false implies the container is not privileged
Use the `no-new-privileges` option

Add the no-new-privileges security option while creating containers to prevent container processes from escalating their privileges using setuid or setgid binaries. This stops processes inside the container from gaining new privileges during execution: if a program has the setuid or setgid bit set, any action that tries to gain privileges through it will be denied.
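For example (alpine is used here only as a small test image):

docker run --rm -it --security-opt=no-new-privileges:true alpine sh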

6. Set Filesystem and Volumes to Read-only

A useful, security-conscious feature in Docker is the ability to run containers with a read-only filesystem. This reduces the attack surface, since the container’s filesystem cannot be tampered with or written to unless explicit write permissions are granted for specific files or directories.

The following command runs a Docker container with a read-only filesystem, so the attempted write fails:

docker run --read-only alpine sh -c 'echo "read only" > /tmp/file'
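If the workload genuinely needs a scratch directory, a common pattern is to keep the root filesystem read-only and attach a tmpfs for the writable path. The paths here are illustrative:

docker run --read-only --tmpfs /tmp alpine sh -c 'echo "writable" > /tmp/file && cat /tmp/file'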
7. Drop capabilities

The Linux kernel is able to break down the privileges of the root user into distinct units referred to as capabilities. Almost all of the special powers associated with the Linux root user are broken down into individual capabilities.


Docker imposes certain limitations that make working with capabilities much simpler. File capabilities are stored within a file’s extended attributes, and extended attributes are stripped out when Docker images are built. This means you will not normally have to concern yourself too much with file capabilities in containers.

As we mentioned before, remember not to run containers with the --privileged flag as this adds all Linux kernel capabilities to the container.

The most secure setup is to drop all capabilities using --cap-drop all and then add only required ones. For example:

docker run --cap-drop all --cap-add CHOWN alpine
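To verify how a running container was configured, you can inspect the added and dropped capabilities using the same inspect pattern as earlier in this article:

docker inspect --format='{{.HostConfig.CapAdd}} {{.HostConfig.CapDrop}}' [container_id]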
8. Use Linux Security Modules (seccomp, AppArmor, or SELinux)

Consider using a security module like seccomp or AppArmor. The following are some well known modules:

  • Seccomp: Used to allow or deny the system calls a container is permitted to make
  • AppArmor: Uses program profiles to restrict the capabilities of individual programs
  • SELinux: Enforces mandatory access control using security policies, sets of rules that define what processes and users can access.

These security modules can be used to provide another level of security checks on the access rights of processes and users, beyond that provided by the standard file-level access control.

Seccomp

By default, a container gets the default seccomp profile. This can be overridden with the following command:

docker run --rm -it --security-opt seccomp=./seccomp/profile.json hello-world

With Seccomp profiles you choose which system calls are allowed and which are denied in a container, because not all are needed in a production environment. You can learn more about writing seccomp profiles from the Docker documentation.
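For comparison, seccomp filtering can also be disabled entirely for a container with the unconfined profile. This is useful only for troubleshooting and should be avoided in production:

docker run --rm -it --security-opt seccomp=unconfined hello-world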

AppArmor

By default, a container uses the docker-default AppArmor template. To use a custom profile, you can override this setting with the --security-opt option.

To do so, you must first load the new profile into AppArmor so it can be used with containers:

apparmor_parser -r -W /path/to/custom_profile

Now run a container with the custom profile:

docker run --rm -it --security-opt apparmor=custom_profile hello-world
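To check which AppArmor profile a running container is confined by (on AppArmor-enabled hosts, docker inspect exposes this in the AppArmorProfile field):

docker inspect --format='{{.AppArmorProfile}}' [container_id]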

Refer to the AppArmor wiki to understand how AppArmor profiles are created.

9. Setting the container’s user

An easy way to prevent privilege escalation attacks is to set the container’s user to an unprivileged user. If and when a container gets compromised, the attacker will not have enough privileges inside the container to launch further attacks. There are multiple ways to set a user for a container:

  • Using the -u flag when running containers:
docker run -u 1001 alpine
  • Enable user namespace support in the Docker daemon (--userns-remap=default)
  • Use the USER Dockerfile directive while building the image:
FROM ubuntu:latest
RUN apt-get -y update
# Create an unprivileged user and group, then switch to it
RUN groupadd -r john && useradd -r -g john john
USER john
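Assuming the image above is built and tagged (the tag myapp is only for illustration), you can confirm which user it will run as:

docker build -t myapp .
docker image inspect --format '{{.Config.User}}' myapp
docker run --rm myapp whoami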

However, it is important to note that this protection does not hold if there is a local privilege escalation vulnerability within the container.

10. Set the logging level to at least INFO

Maintaining logging information is important, as it helps in troubleshooting potential issues at runtime. By default, the Docker daemon is configured with a logging level of info; if this is not the case, it can be set using the --log-level info option. This is the base log level and captures all logs except debug logs. It is not recommended to use the debug log level unless required.
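The log level can also be made persistent in the daemon configuration file. This is a minimal sketch; it overwrites /etc/docker/daemon.json, so merge the setting by hand if the file already exists:

echo '{ "log-level": "info" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker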

To configure the log level in docker-compose:

docker-compose --log-level info up