
Docker

  1. TL;DR
  2. Gotchas
  3. Daemon configuration
    1. Credentials
  4. Images configuration
  5. Building images
    1. Exclude files from the build context
    2. Only include what the final image needs
  6. Containers configuration
  7. Health checks
  8. Advanced build with buildx
    1. Create builders
    2. Build for specific platforms
  9. Compose
  10. Running LLMs locally
  11. Best practices
  12. Troubleshooting
    1. Use environment variables in the ENTRYPOINT
  13. Further readings
    1. Sources

TL;DR

Setup
| OS       | Setup type       | Engine configuration file                                            | Settings                                                        | Data directory        |
| -------- | ---------------- | -------------------------------------------------------------------- | --------------------------------------------------------------- | --------------------- |
| Linux    | Engine, regular  | /etc/docker/daemon.json                                               | -                                                               | /var/lib/docker       |
| Linux    | Engine, rootless | ${XDG_CONFIG_HOME}/docker/daemon.json (~/.config/docker/daemon.json)  | -                                                               | -                     |
| Linux    | Docker Desktop   | ${HOME}/.docker/daemon.json                                           | ${HOME}/.docker/desktop/settings.json                           | -                     |
| Mac OS X | Docker Desktop   | ${HOME}/.docker/daemon.json                                           | ${HOME}/Library/Group Containers/group.com.docker/settings.json | -                     |
| Windows  | Docker Desktop   | C:\ProgramData\docker\config\daemon.json                              | C:\Users\UserName\AppData\Roaming\Docker\settings.json          | C:\ProgramData\docker |
# Install.
brew install --cask 'docker'
sudo zypper install 'docker'

# Configure.
vim '/etc/docker/daemon.json'
# jq cannot edit files in place: write to a temporary file, then move it over.
jq '."log-level"="info"' '/etc/docker/daemon.json' > '/tmp/daemon.json' \
&& sudo mv '/tmp/daemon.json' '/etc/docker/daemon.json'
jq '.dns=["8.8.8.8", "1.1.1.1"]' "${HOME}/.docker/daemon.json" > '/tmp/daemon.json' \
&& mv '/tmp/daemon.json' "${HOME}/.docker/daemon.json"

# Allow containers to use devices on systems with SELinux.
sudo setsebool container_use_devices=1
Usage
# Show locally available images.
docker images -a

# Search for images.
docker search 'boinc'

# Login to registries.
docker login
docker login -u 'username' -p 'password'
aws ecr get-login-password \
| docker login --username 'AWS' --password-stdin '012345678901.dkr.ecr.us-east-2.amazonaws.com'

# Pull images.
docker pull 'alpine:3.14'
docker pull 'boinc/client:latest'
docker pull 'moby/buildkit@sha256:00d2…'
docker pull 'pulumi/pulumi-nodejs:3.112.0@sha256:37a0…'
docker pull 'quay.io/strimzi/kafka:latest-kafka-3.6.1'
docker pull '012345678901.dkr.ecr.eu-west-1.amazonaws.com/example-com/syncthing:1.27.8'

# Remove images.
docker rmi 'node'
docker rmi 'alpine:3.14'
docker rmi 'f91a431c5276'

# Create containers.
docker create -h 'alpine-test-host' --name 'alpine-test-container' 'alpine:3.19'
docker create … 'quay.io/strimzi/kafka:latest-kafka-3.6.1'

# Start containers.
docker start 'alpine-test-container'
docker start 'bdbe3f45'

# Create and start containers.
docker run 'hello-world'
docker run -ti --rm --platform 'linux/amd64' 'alpine:3.19' cat '/etc/apk/repositories'
docker run -d --name 'boinc' --network='host' --pid='host' -v 'boinc:/var/lib/boinc' \
  -e BOINC_GUI_RPC_PASSWORD='123' -e BOINC_CMD_LINE_OPTIONS='--allow_remote_gui_rpc' \
  'boinc/client'

# Gracefully stop containers.
docker stop 'alpine-test'
docker stop -t '0' 'bdbe3f45'

# Kill containers.
docker kill 'alpine-test'

# Restart containers.
docker restart 'alpine-test'
docker restart 'bdbe3f45'

# Show containers' status.
docker ps
docker ps --all

# List containers with specific metadata values.
docker ps -f 'name=pihole' -f 'status=running' -f 'health=healthy' -q

# Execute commands inside *running* containers.
docker exec 'app_web_1' tail 'logs/development.log'
docker exec -ti 'alpine-test' 'sh'

# Show containers' output.
docker logs -f 'alpine-test'
docker logs --since '1m' 'dblab_server' --details
docker logs --since '2024-05-01' -n '100' 'mariadb'
docker logs --since '2024-08-01T23:11:35' --until '2024-08-05T20:43:35' 'gitlab'

# List processes running inside containers.
docker top 'alpine-test'

# Show information on containers.
docker inspect 'alpine-test'
docker inspect --format='{{index .RepoDigests 0}}' 'pulumi/pulumi-nodejs:3.112.0'

# Build a docker image.
docker build -t 'private/alpine:3.14' .

# Tag images.
docker tag 'alpine:3.14' 'private/alpine:3.14'
docker tag 'f91a431c5276' 'pulumi/pulumi-nodejs:3.112.0'

# Push images.
docker push 'private/alpine:3.14'

# Export images to tarballs.
docker save 'alpine:3.14' -o 'alpine.tar'
docker save 'hello-world' > 'hw.tar'

# Load images from tarballs.
docker load -i 'hw.tar'

# Delete containers.
docker rm 'alpine-test'
docker rm -f '87b27'

# Cleanup.
docker logout
docker rmi 'alpine'
docker image prune -a
docker system prune -a
docker builder prune -a
docker buildx prune -a

# List networks.
docker network ls

# Inspect networks.
docker network inspect 'monitoring_default'

# Create volumes.
docker volume create 'volume-name'

# List volumes.
docker volume list

# Inspect volumes.
docker volume inspect 'volume-name'

# Display a summary of the vulnerabilities in images.
# If not given any input, it targets the most recently built image.
docker scout qv
docker scout quickview 'debian:unstable-slim'
docker scout quickview 'archive://hw.tar'

# Display vulnerabilities in images.
docker scout cves
docker scout cves 'alpine'
docker scout cves 'archive://alpine.tar'
docker scout cves --format 'sarif' --output 'alpine.sarif.json' 'oci-dir://alpine'
docker scout cves --format 'only-packages' --only-package-type 'golang' --only-vuln-packages 'fs://.'

# Display base image update recommendations.
docker scout recommendations
docker scout recommendations 'golang:1.19.4' --only-refresh
docker scout recommendations 'golang:1.19.4' --only-update

# List builders.
docker buildx ls

# Create builders.
docker buildx create --name 'builder_name'

# Switch between builders.
docker buildx use 'builder_name'
docker buildx create --name 'builder_name' --use

# Modify builders.
docker buildx create --node 'builder_name'

# Build images.
# '--load' currently only works for builds for a single platform.
docker buildx build -t 'image:tag' --load '.'
docker buildx build … -t 'image:tag' --load --platform 'linux/amd64' '.'
docker buildx build … --push \
  --cache-to 'mode=max,image-manifest=true,oci-mediatypes=true,type=registry,ref=012345678901.dkr.ecr.eu-west-2.amazonaws.com/buildkit-test:cache' \
  --cache-from 'type=registry,ref=012345678901.dkr.ecr.eu-west-2.amazonaws.com/buildkit-test:cache' \
  --platform 'linux/amd64,linux/arm64,linux/arm/v7' '.'

# Clean up the build cache.
docker buildx prune
docker buildx prune -a

# Remove builders.
docker buildx rm 'builder_name'

# Pull images used in compositions.
docker compose pull

# Start compositions.
docker compose up
docker compose up -d

# Execute commands in compositions' containers
docker compose exec 'service-name' 'ls' '-Al'

# Get logs.
docker compose logs
docker compose logs -f --index='3' 'service-name'

# End compositions.
docker compose down
Real world use cases
# Get the SHAsum of images.
docker inspect --format='{{index .RepoDigests 0}}' 'node:18-buster'

# Act upon files in volumes.
sudo ls "$(docker volume inspect --format '{{.Mountpoint}}' 'baikal_config')"
sudo vim "$(docker volume inspect --format '{{.Mountpoint}}' 'gitea_config')/app.ini"

# Send images to other nodes with Docker.
docker save 'local/image:latest' | ssh -C 'user@remote.host' docker load

The Docker engine leverages specific Linux capabilities.

On Windows and Mac OS X the engine runs in Linux VMs.
Docker's host network mode will use the VM's network, not the host's. Using that mode on those OSes results in the containers being silently unable to receive traffic from outside the host.
To solve this, use a different network mode and explicitly publish the ports used.

Gotchas

  • Containers created with no specified name will be assigned one automatically:

    $ docker create 'hello-world'
    8eaaae8c0c720ac220abac763ad4b477d807be4522d58e334337b1b74a14d0bd
    
    $ docker create --name 'alpine' 'alpine'
    63b1a0a3e557094eba7f18424fd50d49b36cacbc21f1df60b918b375b857f809
    
    $ docker ps -a
    CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS    PORTS   NAMES
    63b1a0a3e557   alpine        "/bin/sh"  24 seconds ago   Created           alpine
    8eaaae8c0c72   hello-world   "/hello"   21 seconds ago   Created           sleepy_brown
    
  • When referring to a container or image by its ID, you only need as many leading characters as are required to uniquely identify it:

    $ docker ps -a
    CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS    PORTS   NAMES
    63b1a0a3e557   alpine        "/bin/sh"  34 seconds ago   Created           alpine
    8eaaae8c0c72   hello-world   "/hello"   31 seconds ago   Created           sleepy_brown
    
    $ docker start 8
    8
    
    $ docker ps -a
    CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                      PORTS   NAMES
    63b1a0a3e557   alpine        "/bin/sh"  48 seconds ago   Created                             alpine
    8eaaae8c0c72   hello-world   "/hello"   45 seconds ago   Exited (0) 10 seconds ago           sleepy_brown
    
  • From inside a container, localhost and 127.0.0.1 will always refer to the container itself unless it is configured to use the host networking feature.

  • One cannot reach containers directly via the network on Mac, even when started with the --network=host setting.

    Docker Desktop runs the Engine in a virtual machine, not natively; hence, ports are exposed on the VM and not on the host running Docker Desktop.
    Refer I cannot ping my containers.

    One can work around this limitation by publishing the needed ports and connecting to them on localhost, by executing commands directly in the container with docker exec, or by reaching the container from another container attached to the same Docker network.

Daemon configuration

The docker daemon is configured using the /etc/docker/daemon.json file:

{
    "default-runtime": "runc",
    "dns": ["8.8.8.8", "1.1.1.1"]
}
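
Since a syntax error in this file prevents the daemon from starting, it can pay off to validate it before restarting the daemon; a minimal sketch using jq, pointed at a local copy of the file for illustration:

```shell
# Check the file parses as a JSON object before restarting the daemon.
# 'daemon.json' here is a local copy; the real file is /etc/docker/daemon.json.
conf='daemon.json'
printf '%s\n' '{"default-runtime": "runc", "dns": ["8.8.8.8", "1.1.1.1"]}' > "$conf"
jq -e 'type == "object"' "$conf" > /dev/null && echo "$conf is valid JSON"
```

Apply changes by restarting the daemon afterwards (e.g. sudo systemctl restart docker on systemd-based hosts).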

Credentials

Configured in the ${HOME}/.docker/config.json file of the user executing docker commands:

{
  "credsStore": "ecr-login",
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ101234"
    }
  }
}

The ecr-login credentials store requires the amazon-ecr-credential-helper to be present on the system.

brew install 'docker-credential-helper-ecr'
dnf install 'amazon-ecr-credential-helper'

Images configuration

One should follow the OpenContainers Image Spec.

Building images

Also see Advanced build with buildx.

Exclude files from the build context

Leverage a .dockerignore file.

Refer How to Use a .dockerignore File: A Comprehensive Guide with Examples
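
A minimal example (the patterns are illustrative, for a Node.js-style project); every matching path is excluded from the build context before it is sent to the daemon:

```
.git
node_modules
dist
*.log
.env
```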

Only include what the final image needs

Leverage Multi-stage builds.
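
A sketch of the technique, assuming a Go application (base images and build command are illustrative): the first stage carries the whole toolchain, while the final image only receives the compiled binary.

```dockerfile
# Build stage: includes the full toolchain.
FROM golang:1.22-alpine AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Final stage: starts from a clean base and only copies the artifact over.
FROM alpine:3.19
COPY --from=build /out/app /usr/local/bin/app
ENTRYPOINT [ "app" ]
```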

Containers configuration

Docker mounts specific system files in all containers to forward its settings:

6a95fabde222$ mount
/dev/disk/by-uuid/1bb…eb5 on /etc/resolv.conf type btrfs (rw,…)
/dev/disk/by-uuid/1bb…eb5 on /etc/hostname type btrfs (rw,…)
/dev/disk/by-uuid/1bb…eb5 on /etc/hosts type btrfs (rw,…)

Those files come from the volume the docker container is using for its root, and are modified on the container's startup with the information from the CLI, the daemon itself and, when missing, the host.

Health checks

The following have the same effect:

Command line
docker run … \
  --health-cmd 'curl --fail --insecure --silent --show-error http://localhost/ || exit 1' \
  --health-interval '5m' \
  --health-timeout '3s' \
  --health-retries '4' \
  --health-start-period '10s'
Dockerfile
HEALTHCHECK --interval=5m --timeout=3s --start-period=10s --retries=4 \
  CMD curl --fail --insecure --silent --show-error http://localhost/ || exit 1
Docker-compose file
version: '3.6'
services:
  web-server:
    healthcheck:
      test: curl --fail --insecure --silent --show-error http://localhost/ || exit 1
      interval: 5m
      timeout: 3s
      retries: 4
      start_period: 10s
    

The command's exit status indicates the health status of the container. The possible values are:

  • 0: success - the container is healthy and ready for use
  • 1: unhealthy - the container isn't working correctly
  • 2: reserved - don't use this exit code
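
A health check command honouring this convention can be sketched as the function below; curl --fail exits non-zero on HTTP errors, which is then mapped to the unhealthy exit code (the URL is illustrative):

```shell
# Exit 0 when the endpoint answers with a 2xx status, 1 in every other case.
check_http() {
  curl --fail --silent --show-error --max-time 3 "$1" > /dev/null || return 1
}

# Port 1 is almost certainly closed, so this reports 'unhealthy'.
check_http 'http://localhost:1/' || echo 'unhealthy'
```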

Advanced build with buildx

Create builders

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  BUILDKIT             PLATFORMS
default * docker
  default default         running v0.11.7+d3e6c1360f6e linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386

$ docker buildx create --name 'multiarch' --use
multiarch

$ docker buildx ls
NAME/NODE    DRIVER/ENDPOINT             STATUS   BUILDKIT             PLATFORMS
multiarch *  docker-container
  multiarch0 unix:///var/run/docker.sock inactive
default      docker
  default    default                     running  v0.11.7+d3e6c1360f6e linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/386

Build for specific platforms

The --load option currently only works for builds for a single platform.
See https://github.com/docker/buildx/issues/59.

docker buildx build --platform 'linux/amd64,linux/arm64,linux/arm/v7' -t 'image:tag' '.'
docker load …

Compose

Refer Docker compose.

Setup
Via shell
mkdir -p '/usr/local/lib/docker/cli-plugins' \
&& curl -fsSL "https://github.com/docker/compose/releases/latest/download/docker-compose-linux-$(uname -m)" \
    -o '/usr/local/lib/docker/cli-plugins/docker-compose' \
&& chmod 'ug=rwx,o=rx' '/usr/local/lib/docker/cli-plugins/docker-compose'
Via Ansible
- name: Create Docker's CLI plugins directory
  become: true
  ansible.builtin.file:
    dest: /usr/local/lib/docker/cli-plugins
    state: directory
    owner: root
    group: root
    mode: u=rwx,g=rx,o=rx
- name: Get Docker compose from its official binaries
  become: true
  ansible.builtin.get_url:
    url: https://github.com/docker/compose/releases/latest/download/docker-compose-{{ ansible_system }}-{{ ansible_architecture }}
    dest: /usr/local/lib/docker/cli-plugins/docker-compose
    owner: root
    group: root
    mode: u=rwx,g=rx,o=rx
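
A minimal composition file for reference (service name, image, and ports are illustrative); save it as compose.yaml and bring it up with docker compose up -d:

```yaml
services:
  web-server:
    image: nginx:1.27-alpine
    ports:
      - '8080:80'
    restart: unless-stopped
```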

Running LLMs locally

Refer Run LLMs Locally with Docker: A Quickstart Guide to Model Runner and Docker Model Runner.

Docker introduced Model Runner in Docker Desktop version 4.40.
It makes it easy to pull, run, and experiment with LLMs on local machines.

# Enable in Docker Desktop.
docker desktop enable model-runner
docker desktop enable model-runner --tcp='12434'  # enable TCP interaction from host processes

# Install as plugin.
apt install 'docker-model-plugin'
dnf install 'docker-model-plugin'
pacman -S 'docker-model-plugin'

# Verify the installation.
docker model --help
docker model status

# Stop the current runner.
docker model stop-runner

# Reinstall runners with CUDA GPU support.
docker model reinstall-runner --gpu 'cuda'

# Check the Model Runner container can access the GPU.
docker exec docker-model-runner nvidia-smi

# Disable in Docker Desktop.
docker desktop disable model-runner

Models are available in Docker Hub under the ai/ prefix.
Tags for models distributed by Docker follow the {model}:{parameters}-{quantization} scheme.
Alternatively, they can be downloaded from Hugging Face.
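
The scheme can be seen by splitting a known tag with plain shell parameter expansion:

```shell
# 'ai/smollm2:360M-Q4_K_M' → model 'ai/smollm2', parameters '360M',
# quantization 'Q4_K_M'.
tag='ai/smollm2:360M-Q4_K_M'
model="${tag%%:*}"
variant="${tag##*:}"
echo "model=${model} parameters=${variant%%-*} quantization=${variant#*-}"
```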

# Search for model variants.
docker search ai/llama2

# Pull models.
docker model pull 'ai/qwen2.5'
docker model pull 'ai/qwen3-coder:30B'
docker model pull 'ai/smollm2:360M-Q4_K_M'
docker model pull 'ai/llama2:7b-q4'
docker model pull 'some.registry.com/models/mistral:latest'

# Run models.
docker model run 'ai/smollm2:360M-Q4_K_M' 'Give me a fact about whales'
docker model run -d 'ai/qwen3-coder:30B'
docker model run -e 'MODEL_API_KEY=my-secret-key' --gpus 'all' …
docker model run --gpus '0' --gpu-memory '8g' -e 'MODEL_GPU_LAYERS=40' …
docker model run --gpus '0,1,2' --memory '16g' --memory-swap '16g' …
docker model run --no-gpu --cpus '4' …
docker model run -p '3000:8080' …
docker model run -p '127.0.0.1:8080:8080' …
docker model run -p '8080:8080' -p '9090:9090' …

# Distribute models across GPUs.
docker model run --gpus 'all' --tensor-parallel '2' 'ai/llama2-70b'

# View models' logs.
docker model logs
docker model logs llm | grep -i gpu
docker model logs -f llm
docker model logs --tail 100 -t llm

Model Runner exposes an OpenAI endpoint under http://model-runner.docker.internal/engines/v1 for containers, and (if TCP host access was enabled during initialization on port 12434) under http://localhost:12434/engines/v1 for host processes.
Use this endpoint to hook up OpenAI-compatible clients or frameworks.
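
A hypothetical request from a host process, assuming TCP host access was enabled on port 12434 and ai/smollm2 was pulled beforehand (the trailing guard keeps the command from failing when the runner is down):

```shell
# Standard OpenAI chat-completions payload against the Model Runner endpoint.
payload='{
  "model": "ai/smollm2:360M-Q4_K_M",
  "messages": [{"role": "user", "content": "Give me a fact about whales"}]
}'
curl --silent --show-error --max-time 5 \
  'http://localhost:12434/engines/v1/chat/completions' \
  -H 'Content-Type: application/json' -d "$payload" \
  || echo 'Model Runner is not reachable'
```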

Executing docker model run will not spin up containers.
Instead, it calls an Inference Server API endpoint hosted by Model Runner through Docker Desktop.

The Inference Server runs an inference engine as a native host process, and provides interaction through an OpenAI/Ollama-compatible API.
When requests come in, Model Runner loads the requested model on demand, then performs the inference on the requests.

The active model will stay in memory until another model is requested, or until a pre-defined inactivity timeout (usually 5 minutes) is reached.

Model Runner will transparently load the requested model on-demand, assuming it has been pulled beforehand and is locally available. There is no need to execute docker model run before interacting with a specific model from host processes or from within containers.

Docker Model Runner supports the llama.cpp, vLLM, and Diffusers inference engines.
llama.cpp is the default one.

# List downloaded models.
docker model list
docker model ls --json
docker model ls --openai
docker model ls -q

# List running models.
docker model ps

# Show models' configuration.
docker model inspect 'ai/qwen2.5-coder'

# View models' layers.
docker model history 'ai/llama2'

# Configure models.
docker model configure --context-size '8192' 'ai/qwen2.5-coder'

# Reset model configuration.
docker model configure --context-size '-1' 'ai/qwen2.5-coder'

# Remove models.
docker model rm 'ai/llama2'
docker model rm -f 'ai/llama2'
docker model rm $(docker model ls -q)

# Print system information.
docker model status

# Print disk usage.
docker model df

# Full cleanup (remove all models)
docker model purge

Model Runner collects user data.
Data collection is controlled by the Send usage statistics setting.

Best practices

  • Use multi-stage Dockerfiles when possible to reduce the final image's size.
  • Use a .dockerignore file to exclude from the build context all files that are not needed for it.

Troubleshooting

Use environment variables in the ENTRYPOINT

Refer Exec form ENTRYPOINT example.

Root cause

The ENTRYPOINT's exec form does not invoke a command shell. This means that environment substitution does not happen like it would in shell environments.
For example, ENTRYPOINT [ "echo", "$HOME" ] will not perform variable substitution on $HOME, while ENTRYPOINT echo $HOME will.

Solution

Use the ENTRYPOINT's shell form instead of its exec form:

-ENTRYPOINT [ "echo", "$HOME" ]
+ENTRYPOINT echo $HOME

Alternatively, keep the exec form but force invoking a shell in it:

-ENTRYPOINT [ "echo", "$HOME" ]
+ENTRYPOINT [ "sh", "-c", "echo $HOME" ]

Further readings

Sources