
Docker on VPS — The Definitive 2026 Guide

Docker has redefined how applications are deployed on VPS servers. Instead of fighting dependency conflicts, wrestling with system packages, or praying that a manual deployment does not break production, Docker lets you ship a self-contained image that runs identically from your laptop to a $6/mo VPS in Ashburn, Virginia. This definitive guide covers every layer: from initial engine installation and daemon tuning through production Compose stacks, Traefik reverse proxy with automatic SSL, monitoring with Prometheus and Grafana, backup strategies, security hardening, CI/CD via GitHub Actions, and multi-node Docker Swarm.

Whether you are running a single WordPress site or a dozen microservices, this guide gives you the complete architecture to do it correctly on a VPS. See our best VPS for Docker page for provider recommendations, or jump straight to Hetzner (our top Docker VPS pick) or Vultr for the most US datacenter locations.

Recommended Specs: 2 vCPU / 4GB RAM / NVMe SSD for basic Docker; 4 vCPU / 8GB RAM for production multi-service stacks. KVM virtualization required (OpenVZ 6 does not support Docker). Use our VPS size calculator to estimate your workload.

1. Why Docker on VPS? (vs ECS, GKE, Managed Container Services)

Managed container services like AWS ECS, Google GKE, and Azure AKS abstract away the underlying infrastructure, but that abstraction has a real cost: you pay 5–10x more per unit of compute, and you lose the fine-grained control that makes a VPS powerful. Let's compare:

| Factor                      | Docker on VPS                     | AWS ECS / Fargate | GKE / EKS  |
| Monthly cost (4 vCPU / 8GB) | $8–$25                            | $120–$180         | $150–$250+ |
| Configuration control       | Full (daemon.json, kernel params) | Limited           | Moderate   |
| Complexity                  | Low–Medium                        | Medium            | High       |
| Auto-scaling                | Manual (Swarm can help)           | Native            | Native     |
| Vendor lock-in              | None — portable                   | AWS-specific APIs | Moderate   |

For most VPS workloads — hobby projects, small SaaS products, development environments, staging servers, and production apps serving under ~500k monthly users — Docker on a well-chosen VPS delivers better economics, simpler operations, and zero vendor lock-in compared to managed container platforms.

The key advantages specific to VPS deployments:

  • Isolation without overhead: Each app runs in its own container with its own filesystem and dependencies. No more "works on my machine" failures. The container is the same artifact from development through production.
  • Density: A 4GB VPS can comfortably run 8–12 lightweight containerized services that would require separate VMs if deployed the traditional way.
  • Portability: Your entire stack is defined in docker-compose.yml. Migrating to a new provider takes minutes, not days.
  • Rollback: Tag your images. Rolling back from a bad deployment is docker compose up -d with the previous image tag.
  • Cost: A Hetzner CX32 (4 vCPU, 8GB RAM, NVMe) costs ~$8/mo. The equivalent Fargate task would cost $120+/mo.
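That rollback workflow can be sketched in a few commands, assuming the compose file references the image as myapp:${APP_VERSION} and the tag lives in .env (both names are illustrative, not from a real stack):

```shell
# Demo setup: a .env pinned to the current (broken) release tag
printf 'APP_VERSION=1.2.3\n' > .env

# Roll the tag back to the last known-good release
sed -i 's/^APP_VERSION=.*/APP_VERSION=1.2.2/' .env
cat .env

# Then recreate the service from the previous image:
#   docker compose up -d
```

Because the compose file itself never changes, the same command that deploys also rolls back; only the tag in .env moves.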

Read our in-depth comparison at managed vs unmanaged VPS and cloud VPS vs bare metal for the full trade-off analysis.

2. Choosing a VPS for Docker (Specs, Providers, Storage)

Docker's minimum requirements are modest, but production use demands careful hardware selection. Here are the practical guidelines:

Minimum Specifications

  • Basic Docker (1–3 containers): 2 vCPU, 4GB RAM, 40GB SSD. Suitable for a Docker blog, small API, or dev environment.
  • Production multi-service (5–15 containers): 4 vCPU, 8GB RAM, 80GB NVMe. Sufficient for a full web stack with database, cache, proxy, and monitoring.
  • Heavy workloads (databases, high-traffic APIs): 6–8 vCPU, 16–32GB RAM, 160GB+ NVMe.
  • OS: Ubuntu 22.04 LTS or 24.04 LTS. Docker fully supports both; 24.04 ships with a newer kernel.
  • Virtualization: KVM (required). OpenVZ 6 does not support Docker. Modern OpenVZ 7 does, but KVM is preferred.
  • Storage: NVMe SSD is strongly preferred. Docker's overlay2 storage driver is I/O-intensive during image pulls and builds.

Recommended Providers for Docker

Based on our Hetzner benchmarks and Vultr benchmarks:

  • Hetzner — Best price/performance for Docker. CX32 (4 vCPU, 8GB RAM, NVMe) at ~$8/mo. One US datacenter (Ashburn, VA). Top pick for cost-conscious production. See Hetzner deals.
  • Vultr — 17 US datacenters, hourly billing, NVMe on all plans. Best for multi-region Docker or when specific US city latency matters. See Vultr coupon codes.
  • DigitalOcean — Best documentation in the industry, managed databases available to pair with Docker apps, $200 free credit.
  • Contabo — Cheapest raw specs for Docker at scale. Large RAM plans at very low cost. Good for bulk container workloads where price is the primary concern.

Use our VPS size calculator to estimate the right plan for your specific container count and traffic projections.
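A back-of-envelope way to sanity-check those plan sizes (the per-container average below is an assumed figure, not a benchmark):

```shell
# Rough RAM budget: N containers at an assumed average footprint,
# plus fixed headroom for the host OS and the Docker daemon itself
CONTAINERS=10
AVG_MB=250        # assumed average per lightweight service, not a measurement
OVERHEAD_MB=1024  # host OS + dockerd headroom
TOTAL_MB=$((CONTAINERS * AVG_MB + OVERHEAD_MB))
echo "Plan for at least ${TOTAL_MB} MB of RAM"
```

With these assumptions, ten services land at roughly 3.5GB, which is why 4GB is the practical floor for anything beyond a couple of containers.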

3. Docker Engine Installation & Daemon Optimization

Start from a fresh Ubuntu 22.04 or 24.04 LTS VPS. The official convenience script is the fastest path to a working Docker install:

# One-line install (adds repo, GPG key, installs engine + Compose plugin)
curl -fsSL https://get.docker.com | sh

# Verify the install
docker --version
docker compose version

For production servers where you want version pinning and the full manual repo method:

# Install prerequisites
apt-get update
apt-get install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg \
  | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg

# Add the stable repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" \
  | tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine + Compose plugin
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

Add your deploy user to the docker group so you do not need sudo for every command. Note that membership in the docker group is effectively root-equivalent on the host, so grant it only to trusted users:

# Add current user to docker group
usermod -aG docker $USER

# Apply immediately without logout
newgrp docker

# Verify non-root access
docker run hello-world

Enable Docker to start on boot and configure log rotation and storage driver in /etc/docker/daemon.json:

# Enable Docker on boot
systemctl enable docker
systemctl enable containerd

# Create daemon config with production-optimized settings
cat > /etc/docker/daemon.json << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 65536,
      "Soft": 65536
    }
  },
  "live-restore": true,
  "userland-proxy": false,
  "metrics-addr": "127.0.0.1:9323",
  "experimental": false
}
EOF

# Apply the new daemon config (restart, not reload: several keys,
# including storage-driver, are only read at daemon startup)
systemctl restart docker
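Before applying the new config, confirm that daemon.json actually parses; a single stray comma will stop dockerd from booting. A quick sketch, shown against a sample file rather than the live config:

```shell
# Validate a daemon.json before touching dockerd (sample file used here;
# substitute /etc/docker/daemon.json on a real host)
cat > /tmp/daemon.json << 'EOF'
{ "log-driver": "json-file", "live-restore": true }
EOF

python3 -m json.tool /tmp/daemon.json > /dev/null \
  && echo "daemon.json is valid JSON" \
  || echo "daemon.json is INVALID, fix it before restarting Docker"
```

python3 ships with Ubuntu; jq works equally well if installed.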

Key daemon settings explained:

  • live-restore: true — containers keep running if the Docker daemon restarts (e.g., during upgrades). Critical for production.
  • userland-proxy: false — disables the userland proxy for port forwarding, using direct iptables rules instead. Slightly better performance.
  • log-opts max-size/max-file — prevents container log files from filling your disk. Without this, a chatty container can exhaust a 40GB VPS.
  • metrics-addr — exposes Docker engine metrics for Prometheus scraping (used in the monitoring section).
# Verify daemon settings are active
docker info | grep -E "Storage Driver|Logging Driver|Live Restore"

# Check disk usage by Docker (images, containers, volumes, build cache)
docker system df

# Full cleanup: stopped containers, ALL unused images (not just dangling
# ones), and unused volumes. Destructive; review before running in production.
docker system prune -af --volumes

4. Docker Compose Essentials (v2 Plugin, YAML Patterns)

Docker Compose v2 ships as a CLI plugin — invoked as docker compose (space, not hyphen). It is the standard way to define and run multi-container applications on a single VPS. The core file is docker-compose.yml (or compose.yaml — both work).

# Verify Compose v2 is installed
docker compose version
# Docker Compose version v2.24.5

# If not installed, add the plugin
apt-get install -y docker-compose-plugin

The anatomy of a production-ready docker-compose.yml:

# docker-compose.yml — annotated production pattern
services:

  # Service definition
  app:
    image: myapp:1.2.3          # Always pin image tags in production
    build:                       # OR build from local Dockerfile
      context: .
      dockerfile: Dockerfile
      target: production          # Multi-stage build target
    restart: unless-stopped       # Restart on crash, not on explicit stop
    depends_on:
      db:
        condition: service_healthy # Wait for db healthcheck to pass
    environment:
      NODE_ENV: production
      DB_HOST: db
      DB_PASSWORD: ${DB_PASSWORD}  # Loaded from .env file
    env_file:
      - .env                       # Alternative: load all vars from file
    ports:
      - "127.0.0.1:3000:3000"     # Bind to localhost only (Traefik routes externally)
    volumes:
      - app_uploads:/app/uploads   # Named volume for persistent data
      - ./config:/app/config:ro    # Bind mount, read-only
    networks:
      - frontend
      - backend
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - db_data:/var/lib/postgresql/data
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U myapp"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  db_data:
  app_uploads:

networks:
  frontend:
  backend:

Essential Compose commands for daily operations:

# Start all services in detached mode
docker compose up -d

# Pull latest images and recreate changed services only
docker compose pull && docker compose up -d

# View aggregated logs (follow mode)
docker compose logs -f

# Scale a service to 3 replicas (requires no host port binding)
docker compose up -d --scale app=3

# Execute a command inside a running service container
docker compose exec app sh

# Run a one-off command without starting a persistent container
docker compose run --rm app python manage.py migrate

# Show resource usage for all services
docker compose stats

# Validate compose file syntax
docker compose config

Use a .env file in the same directory as your docker-compose.yml for secrets. Compose automatically loads it:

# .env — NEVER commit to git; chmod 600 this file
DB_PASSWORD=super_secret_password_here
DB_ROOT_PASSWORD=root_password_here
REDIS_PASSWORD=redis_secret_here
APP_SECRET_KEY=long_random_string_here

# Protect the file
chmod 600 .env
echo ".env" >> .gitignore
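Those placeholder values should be long random strings, not hand-typed passwords. One way to generate them (written to a hypothetical .env.example here so nothing real is overwritten):

```shell
# Generate strong random values for the env file
# (uses /dev/urandom; openssl rand -hex 32 works too)
gen_secret() { tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 32; }

echo "DB_PASSWORD=$(gen_secret)"    >  .env.example
echo "APP_SECRET_KEY=$(gen_secret)" >> .env.example
chmod 600 .env.example
cat .env.example
```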

5. Networking Deep Dive (bridge, overlay, macvlan)

Docker networking determines how containers communicate with each other, with the host, and with the outside world. Understanding it is essential for secure, correct production deployments.

Bridge Networks (default)

Every container attaches to a bridge network by default. Containers on the same user-defined bridge network can reach each other by service name (Docker's built-in DNS). The host sees the network as a virtual ethernet bridge (docker0 for the default bridge, custom names for user-defined).

# Create a custom bridge network with a specific subnet
docker network create \
  --driver bridge \
  --subnet 172.20.0.0/16 \
  --gateway 172.20.0.1 \
  --opt com.docker.network.bridge.name=br-myapp \
  myapp-net

# Run containers on the custom network
docker run -d --network myapp-net --name web nginx
docker run -d --network myapp-net --name api myapi:latest

# Containers can reach each other: curl http://web or curl http://api
# Inspect network configuration
docker network inspect myapp-net
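One hardening pattern the bridge model enables: declare a network as internal so backend-only containers (databases, caches) have no route to the internet at all. A hedged compose sketch:

```yaml
# Compose sketch — backend services can talk to each other and to the app,
# but cannot make outbound connections to the internet
networks:
  frontend:
  backend:
    internal: true
```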

Host Networking

With --network host, the container shares the host's network namespace. No port mapping is needed — the container binds directly to VPS interfaces. Best for latency-sensitive workloads or when port mapping overhead matters.

# Host network mode — container uses VPS network directly
docker run -d --network host --name nginx-host nginx
# Nginx now listens on VPS port 80 directly — no -p flag needed

# In Compose:
services:
  app:
    image: myapp:latest
    network_mode: host

Overlay Networks (Docker Swarm)

Overlay networks span multiple Docker hosts, enabling containers on different VPS nodes to communicate as if on the same local network. Used with Docker Swarm.

# Create an overlay network (Swarm mode required)
docker network create \
  --driver overlay \
  --attachable \
  --subnet 10.10.0.0/16 \
  swarm-net

# Services deployed to this network can communicate across nodes
docker service create \
  --name myapp \
  --network swarm-net \
  --replicas 3 \
  myapp:latest

macvlan Networks

macvlan assigns each container its own MAC address, making it appear as a separate physical device on the network. Rarely needed on VPS, but useful for legacy applications that expect direct L2 access.

# macvlan network (requires knowing your VPS network interface name)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  macvlan-net

# List all Docker networks
docker network ls

# Remove unused networks
docker network prune

6. Storage Management (Volumes, Bind Mounts, tmpfs, Backup)

Container filesystems are ephemeral by default — data written inside a container is lost when the container is removed. Persistent storage requires explicit configuration through volumes or bind mounts.

Named Volumes vs Bind Mounts

  • Named volumes — managed by Docker, stored at /var/lib/docker/volumes/NAME/_data/. Best for production databases and app data. Portable, easy to back up.
  • Bind mounts — maps a specific host path into the container. Best for configuration files and development code. Host path must exist.
  • tmpfs mounts — in-memory only, discarded on container stop. Best for temporary secrets or session data that must not touch disk.
# Create and use named volumes
docker volume create postgres_data
docker volume create redis_data

# Inspect volume location on host
docker volume inspect postgres_data
# Output includes "Mountpoint": "/var/lib/docker/volumes/postgres_data/_data"

# Bind mount example
docker run -d \
  -v /home/deploy/myapp/config:/app/config:ro \
  -v /home/deploy/myapp/uploads:/app/uploads \
  myapp:latest

# tmpfs mount (in-memory, for secrets)
docker run -d \
  --tmpfs /app/secrets:rw,noexec,nosuid,size=64m \
  myapp:latest
# Volume backup using Alpine container
docker run --rm \
  -v postgres_data:/source:ro \
  -v /backups:/dest \
  alpine \
  tar czf /dest/postgres_data_$(date +%Y%m%d_%H%M%S).tar.gz -C /source .

# Restore a volume from backup
docker run --rm \
  -v postgres_data:/dest \
  -v /backups:/source:ro \
  alpine \
  sh -c "cd /dest && tar xzf /source/postgres_data_20260315_120000.tar.gz"

# Clean unused volumes (careful: irreversible. Since Docker 23, prune removes
# only anonymous volumes; add --all to include unused named volumes)
docker volume prune -f

For database containers, always dump with the database's own tool rather than raw volume backup. See VPS backup strategies for automated backup scripts and remote storage patterns.
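Whatever backup method you use, verify that the archive is readable before you need it. A sketch using a sample archive created on the spot:

```shell
# Create a sample archive the same way the volume backup above does
mkdir -p /tmp/src && echo "data" > /tmp/src/file
tar czf /tmp/vol_backup.tar.gz -C /tmp/src .

# List the contents without extracting; a corrupt archive fails here
tar tzf /tmp/vol_backup.tar.gz > /dev/null && echo "backup archive OK"
```

Running this check in the backup cron job turns a silent bad backup into a logged failure.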

7. WordPress Stack with Docker Compose (Nginx + PHP-FPM + MariaDB + Redis)

A production WordPress stack on Docker Compose: Nginx handles web requests, PHP-FPM processes PHP, MariaDB stores data, Redis handles object caching. This architecture serves a typical WordPress site on a 2GB VPS with room to spare.

# docker-compose.yml — WordPress production stack
services:

  mariadb:
    image: mariadb:11.2
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wpuser
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
    volumes:
      - mariadb_data:/var/lib/mysql
    networks:
      - backend
    healthcheck:
      test: ["CMD", "healthcheck.sh", "--connect", "--innodb_initialized"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 128mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    networks:
      - backend

  wordpress:
    image: wordpress:6.5-php8.3-fpm-alpine
    restart: unless-stopped
    depends_on:
      mariadb:
        condition: service_healthy
    environment:
      WORDPRESS_DB_HOST: mariadb:3306
      WORDPRESS_DB_NAME: wordpress
      WORDPRESS_DB_USER: wpuser
      WORDPRESS_DB_PASSWORD: ${MYSQL_PASSWORD}
      WORDPRESS_TABLE_PREFIX: wp_
    volumes:
      - wp_data:/var/www/html
      - ./php-overrides.ini:/usr/local/etc/php/conf.d/overrides.ini:ro
    networks:
      - backend

  nginx:
    image: nginx:1.25-alpine
    restart: unless-stopped
    depends_on:
      - wordpress
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - wp_data:/var/www/html:ro
      - ./nginx/wordpress.conf:/etc/nginx/conf.d/default.conf:ro
      - nginx_certs:/etc/nginx/certs
      - nginx_logs:/var/log/nginx
    networks:
      - backend
      - frontend

volumes:
  mariadb_data:
  redis_data:
  wp_data:
  nginx_certs:
  nginx_logs:

networks:
  frontend:
  backend:
# nginx/wordpress.conf
server {
    listen 80;
    server_name example.com www.example.com;
    root /var/www/html;
    index index.php;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_buffers 16 16k;
        fastcgi_buffer_size 32k;
    }

    location ~* \.(css|gif|ico|jpeg|jpg|js|png|svg|webp|woff2)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location = /favicon.ico { log_not_found off; access_log off; }
    location = /robots.txt  { log_not_found off; access_log off; }
    location ~ /\. { deny all; }
}
# php-overrides.ini — tune PHP for WordPress
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 120
memory_limit = 256M
opcache.enable = 1
opcache.memory_consumption = 128
opcache.max_accelerated_files = 10000
opcache.revalidate_freq = 2
# Deploy the WordPress stack
docker compose up -d

# Watch logs until all services are healthy
docker compose logs -f

# Verify all containers are running
docker compose ps
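The Redis container only helps once WordPress is pointed at it. One hedged way to wire that up, assuming the official image's WORDPRESS_CONFIG_EXTRA hook plus a Redis object-cache plugin installed in wp-admin:

```yaml
# Compose sketch — inject Redis connection settings into wp-config.php
services:
  wordpress:
    environment:
      REDIS_PASSWORD: ${REDIS_PASSWORD}
      WORDPRESS_CONFIG_EXTRA: |
        define('WP_REDIS_HOST', 'redis');
        define('WP_REDIS_PASSWORD', getenv('REDIS_PASSWORD'));
```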

See our Docker on VPS blog post for a quicker single-container WordPress setup, and Nginx reverse proxy guide for advanced Nginx configuration patterns.

8. Node.js App Stack (Node + Postgres + Redis + Nginx)

A typical production Node.js stack on Docker Compose: Nginx terminates TLS and handles static files, the Node.js app runs as a container, PostgreSQL stores relational data, Redis handles sessions and caching.

# Dockerfile — multi-stage build for Node.js
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev

FROM node:20-alpine AS production
WORKDIR /app
RUN addgroup -g 1001 -S nodejs && adduser -S nodejs -u 1001 -G nodejs
COPY --from=builder /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs . .
USER nodejs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "server.js"]
# docker-compose.yml — Node.js production stack
services:

  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./db/init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    networks:
      - backend
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --requirepass ${REDIS_PASSWORD} --maxmemory 64mb
    volumes:
      - redis_data:/data
    networks:
      - backend

  app:
    build:
      context: .
      target: production
    image: myapp:${APP_VERSION:-latest}
    restart: unless-stopped
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      NODE_ENV: production
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
      PORT: 3000
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M

  nginx:
    image: nginx:1.25-alpine
    restart: unless-stopped
    depends_on:
      - app
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/app.conf:/etc/nginx/conf.d/default.conf:ro
      - ./dist/static:/usr/share/nginx/html/static:ro
    networks:
      - backend

volumes:
  postgres_data:
  redis_data:

networks:
  backend:
# nginx/app.conf — Node.js upstream
upstream nodeapp {
    server app:3000;
    keepalive 64;
}

server {
    listen 80;
    server_name api.example.com;

    location /static/ {
        root /usr/share/nginx/html;
        expires 1y;
        add_header Cache-Control "public, immutable";
    }

    location / {
        proxy_pass http://nodeapp;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 86400s;
    }
}
# Build and deploy
docker compose build --no-cache
docker compose up -d

# Tail app logs
docker compose logs -f app

# Run database migrations
docker compose exec app node migrate.js
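To make the APP_VERSION tag in the compose file meaningful, a common pattern is tagging each build with the short git SHA so every rollback has a concrete target. A hedged sketch (the docker commands are left commented, since they assume a running daemon):

```shell
# Hypothetical versioned deploy: derive the tag from git,
# falling back to "dev" outside a git checkout
APP_VERSION=$(git rev-parse --short HEAD 2>/dev/null || echo dev)
echo "Deploying myapp:${APP_VERSION}"

# APP_VERSION="$APP_VERSION" docker compose build
# APP_VERSION="$APP_VERSION" docker compose up -d
```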

For VPS recommendations suited to Node.js workloads, see best VPS for Node.js and our developer VPS guide.

9. Traefik Reverse Proxy (Auto SSL, Routing, Middleware)

Traefik is the best reverse proxy for Docker environments. It reads container labels, auto-discovers services, handles Let's Encrypt SSL certificates automatically, and requires zero manual configuration changes when adding new services. See our Nginx reverse proxy guide if you prefer Nginx.

# traefik/docker-compose.yml — Traefik as standalone stack
services:
  traefik:
    image: traefik:v3.0
    container_name: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    command:
      # API and dashboard
      - "--api.dashboard=true"
      - "--api.insecure=false"
      # Docker provider
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--providers.docker.network=traefik-proxy"
      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      # Redirect HTTP to HTTPS globally
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      # Let's Encrypt
      - "--certificatesresolvers.le.acme.httpchallenge=true"
      - "--certificatesresolvers.le.acme.httpchallenge.entrypoint=web"
      - "--certificatesresolvers.le.acme.email=admin@example.com"
      - "--certificatesresolvers.le.acme.storage=/letsencrypt/acme.json"
      # Logging
      - "--log.level=INFO"
      - "--accesslog=true"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_certs:/letsencrypt
    networks:
      - traefik-proxy
    labels:
      - "traefik.enable=true"
      # Dashboard (protect with middleware)
      - "traefik.http.routers.traefik-dashboard.rule=Host(`traefik.example.com`)"
      - "traefik.http.routers.traefik-dashboard.entrypoints=websecure"
      - "traefik.http.routers.traefik-dashboard.tls.certresolver=le"
      - "traefik.http.routers.traefik-dashboard.service=api@internal"
      - "traefik.http.routers.traefik-dashboard.middlewares=dashboard-auth"
      - "traefik.http.middlewares.dashboard-auth.basicauth.users=admin:$$apr1$$xyz$$hashedpassword"

volumes:
  traefik_certs:

networks:
  traefik-proxy:
    name: traefik-proxy
    external: true
# Create the shared Traefik network (run once)
docker network create traefik-proxy

# Start Traefik
docker compose -f traefik/docker-compose.yml up -d

Adding any new service to Traefik routing is just a matter of adding labels:

# Labels to add to any service for automatic routing + SSL
services:
  myapp:
    image: myapp:latest
    networks:
      - traefik-proxy  # Must be on Traefik's network
      - backend        # Internal network for database access
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.myapp.rule=Host(`myapp.example.com`)"
      - "traefik.http.routers.myapp.entrypoints=websecure"
      - "traefik.http.routers.myapp.tls.certresolver=le"
      - "traefik.http.services.myapp.loadbalancer.server.port=3000"
      # Rate limiting middleware
      - "traefik.http.middlewares.myapp-ratelimit.ratelimit.average=100"
      - "traefik.http.middlewares.myapp-ratelimit.ratelimit.burst=50"
      - "traefik.http.routers.myapp.middlewares=myapp-ratelimit"
# Useful Traefik middleware patterns

# Secure headers middleware
- "traefik.http.middlewares.secure-headers.headers.stsSeconds=31536000"
- "traefik.http.middlewares.secure-headers.headers.stsIncludeSubdomains=true"
- "traefik.http.middlewares.secure-headers.headers.contentTypeNosniff=true"
- "traefik.http.middlewares.secure-headers.headers.browserXssFilter=true"

# IP allowlist (restrict dashboard to your IP)
- "traefik.http.middlewares.ip-allowlist.ipAllowList.sourceRange=1.2.3.4/32"

# Compress responses
- "traefik.http.middlewares.compress.compress=true"

10. Let's Encrypt SSL Automation

Traefik handles Let's Encrypt automatically via labels (as shown above). If you prefer certbot standalone or need to issue certificates for non-Dockerized services, here is the standalone certbot approach for your VPS.

# Install certbot on the VPS host (for non-Traefik SSL)
apt-get install -y certbot

# Issue a wildcard certificate via DNS challenge (requires DNS API access)
certbot certonly \
  --manual \
  --preferred-challenges=dns \
  -d "*.example.com" \
  -d "example.com" \
  --email admin@example.com \
  --agree-tos

# Issue a standard certificate via HTTP challenge
certbot certonly \
  --standalone \
  --non-interactive \
  --agree-tos \
  --email admin@example.com \
  -d app.example.com
# Mount host certs into Nginx container
services:
  nginx:
    image: nginx:1.25-alpine
    volumes:
      - /etc/letsencrypt/live/example.com/fullchain.pem:/etc/nginx/ssl/cert.pem:ro
      - /etc/letsencrypt/live/example.com/privkey.pem:/etc/nginx/ssl/key.pem:ro
      - ./nginx/ssl.conf:/etc/nginx/conf.d/default.conf:ro

# Reload Nginx after cert renewal
certbot renew --post-hook "docker compose exec nginx nginx -s reload"

# Set up auto-renewal cron
(crontab -l 2>/dev/null; echo "0 3 * * * certbot renew --quiet --post-hook 'docker compose -f /opt/myapp/docker-compose.yml exec nginx nginx -s reload'") | crontab -
# Test renewal without actually renewing
certbot renew --dry-run

# List all issued certificates and expiry dates
certbot certificates
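To monitor expiry independently of certbot, openssl can report how long a certificate has left. A sketch against a throwaway self-signed cert (point -in at your real fullchain.pem in practice):

```shell
# Generate a disposable 90-day self-signed cert purely for illustration
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=example.test" \
  -keyout /tmp/key.pem -out /tmp/cert.pem 2>/dev/null

# Print the expiry date, then check a 14-day renewal threshold
openssl x509 -in /tmp/cert.pem -noout -enddate
openssl x509 -in /tmp/cert.pem -noout -checkend $((14 * 86400)) \
  && echo "certificate valid for at least 14 more days" \
  || echo "certificate expires within 14 days, renew now"
```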

See our SSL certificates on VPS guide for more patterns including Nginx SSL configuration and HSTS setup.

11. Monitoring: Portainer + cAdvisor + Prometheus/Grafana

Three tiers of Docker monitoring: Portainer for the GUI management layer, cAdvisor for per-container metrics, and Prometheus/Grafana for long-term dashboards and alerting. See also our VPS monitoring setup guide.

Portainer (Web UI)

# Deploy Portainer Community Edition
docker volume create portainer_data

docker run -d \
  --name portainer \
  --restart unless-stopped \
  -p 127.0.0.1:9000:9000 \
  -p 127.0.0.1:9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

# Access via SSH tunnel for security:
# ssh -L 9000:localhost:9000 deploy@YOUR_VPS_IP
# Then open http://localhost:9000 in your browser

cAdvisor (Container Metrics)

# cAdvisor — exposes per-container Prometheus metrics
docker run -d \
  --name cadvisor \
  --restart unless-stopped \
  -p 127.0.0.1:8080:8080 \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --volume=/dev/disk/:/dev/disk:ro \
  --privileged \
  --device=/dev/kmsg \
  gcr.io/cadvisor/cadvisor:v0.49.1

Prometheus + Grafana Stack

# monitoring/docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:v2.49.1
    restart: unless-stopped
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=30d'
      - '--web.enable-lifecycle'
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    extra_hosts:
      - "host.docker.internal:host-gateway"  # required on Linux to reach host targets
    networks:
      - monitoring
    ports:
      - "127.0.0.1:9090:9090"

  grafana:
    image: grafana/grafana:10.3.1
    restart: unless-stopped
    environment:
      GF_SECURITY_ADMIN_PASSWORD: ${GRAFANA_PASSWORD}
      GF_USERS_ALLOW_SIGN_UP: "false"
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/dashboards:/etc/grafana/provisioning/dashboards
    networks:
      - monitoring
    ports:
      - "127.0.0.1:3001:3000"
    depends_on:
      - prometheus

  node-exporter:
    image: prom/node-exporter:v1.7.0
    restart: unless-stopped
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    networks:
      - monitoring

volumes:
  prometheus_data:
  grafana_data:

networks:
  monitoring:
# prometheus.yml — scrape config
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  # Note: host.docker.internal needs an extra_hosts "host-gateway" entry on
  # Linux, and host endpoints bound to 127.0.0.1 (dockerd metrics, cAdvisor
  # above) are not reachable from inside containers; rebind them to the
  # Docker bridge IP (typically 172.17.0.1) if these scrapes fail.
  - job_name: 'docker-engine'
    static_configs:
      - targets: ['host.docker.internal:9323']

  - job_name: 'cadvisor'
    static_configs:
      - targets: ['host.docker.internal:8080']

  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
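Metrics are only half the story; Prometheus can also alert when a scrape target disappears. A minimal hedged rule file (load it via rule_files in prometheus.yml; the five-minute threshold is illustrative):

```yaml
# alert-rules.yml sketch — fire when any monitored target stops responding
groups:
  - name: docker-vps
    rules:
      - alert: TargetDown
        expr: up == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Scrape target {{ $labels.job }} is down"
```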

Import Grafana dashboard ID 893 (Docker and system monitoring) and 1860 (Node Exporter Full) for instant visibility into your Docker VPS. See VPS monitoring setup for alerting configuration.

12. Backup Strategy for Docker

A complete Docker backup strategy covers: named volumes, Compose stacks, custom images, and database dumps. See VPS backup strategies for off-site backup patterns (Backblaze B2, rclone to S3).

#!/bin/bash
# /opt/scripts/docker-backup.sh — full Docker backup script

BACKUP_DIR="/backups/docker"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
mkdir -p "$BACKUP_DIR"

# 1. Stop services for consistent backup (optional — comment out for live backup)
# docker compose -f /opt/myapp/docker-compose.yml stop

# 2. Backup all Docker volumes
for vol in $(docker volume ls -q); do
  docker run --rm \
    -v "$vol":/source:ro \
    -v "$BACKUP_DIR":/dest \
    alpine \
    tar czf "/dest/${vol}_${TIMESTAMP}.tar.gz" -C /source .
  echo "Backed up volume: $vol"
done

# 3. Backup Compose stack files
tar czf "$BACKUP_DIR/compose_stacks_${TIMESTAMP}.tar.gz" /opt/myapp /opt/monitoring

# 4. Backup database dumps separately (MariaDB example)
docker compose -f /opt/myapp/docker-compose.yml exec -T mariadb \
  mysqldump -u root -p"${MYSQL_ROOT_PASSWORD}" --all-databases | \
  gzip > "$BACKUP_DIR/mysql_dump_${TIMESTAMP}.sql.gz"

# 5. Remove backups older than 7 days
find "$BACKUP_DIR" \( -name "*.tar.gz" -o -name "*.sql.gz" \) -mtime +7 -delete

echo "Backup completed: $BACKUP_DIR"
# PostgreSQL dump from running container
docker compose exec -T postgres \
  pg_dumpall -U postgres | \
  gzip > /backups/postgres_$(date +%Y%m%d).sql.gz

# Restore a Postgres dump
gunzip < /backups/postgres_20260315.sql.gz | \
  docker compose exec -T postgres psql -U postgres

# Schedule daily backups at 2am
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/scripts/docker-backup.sh >> /var/log/docker-backup.log 2>&1") | crontab -
# Restore a volume from backup
VOLUME_NAME="postgres_data"
BACKUP_FILE="/backups/docker/postgres_data_20260315_020000.tar.gz"

# Create a fresh volume
docker volume create "$VOLUME_NAME"

# Restore data
docker run --rm \
  -v "$VOLUME_NAME":/dest \
  -v "$(dirname "$BACKUP_FILE")":/source:ro \
  alpine \
  sh -c "cd /dest && tar xzf /source/$(basename "$BACKUP_FILE")"

echo "Volume $VOLUME_NAME restored from $BACKUP_FILE"
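Before restoring, it is worth confirming the archive is actually readable — a corrupt backup discovered mid-restore is the worst case. A small helper sketch (the example path is hypothetical):

```shell
#!/bin/bash
# verify_backup — confirm a gzip'd tar backup is intact before restoring
verify_backup() {
  local file="$1"
  # Check gzip integrity, then that tar can list every entry
  gzip -t "$file" 2>/dev/null || { echo "CORRUPT (gzip): $file"; return 1; }
  tar tzf "$file" > /dev/null 2>&1 || { echo "CORRUPT (tar): $file"; return 1; }
  echo "OK: $file ($(tar tzf "$file" | wc -l) entries)"
}

# Example (hypothetical path):
# verify_backup /backups/docker/postgres_data_20260315_020000.tar.gz
```

Running this against every archive after the nightly backup catches truncated uploads and disk errors early.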

13. Docker Security Hardening

Docker provides process isolation, not security, out of the box. Apply these hardening measures on any internet-facing VPS. Also see our dedicated VPS security hardening guide for host-level hardening.

# Dockerfile security best practices

FROM node:20-alpine

# 1. Create a non-root user
RUN addgroup -g 1001 -S appuser && adduser -u 1001 -S -G appuser appuser

WORKDIR /app

# 2. Copy with ownership set to non-root user
COPY --chown=appuser:appuser package*.json ./
RUN npm ci --omit=dev --ignore-scripts

COPY --chown=appuser:appuser . .

# 3. Switch to non-root user
USER appuser

# 4. Use read-only filesystem where possible (set in Compose)
EXPOSE 3000
CMD ["node", "server.js"]
# Compose security settings
services:
  app:
    image: myapp:latest
    user: "1001:1001"              # Run as non-root
    read_only: true                # Read-only root filesystem
    tmpfs:
      - /tmp:noexec,nosuid,size=64m  # Writable tmpfs for /tmp
    security_opt:
      - no-new-privileges:true     # Prevent privilege escalation
    cap_drop:
      - ALL                        # Drop all Linux capabilities
    cap_add:
      - NET_BIND_SERVICE           # Add back only what's needed
    ulimits:
      nproc: 65535
      nofile:
        soft: 20000
        hard: 40000
# Scan images for vulnerabilities
docker scout cves myapp:latest

# Scan a local image with Trivy (mounts the Docker socket to read local images)
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy image myapp:latest

# Install rootless Docker (runs daemon without root)
# After installing Docker, run as your deploy user:
dockerd-rootless-setuptool.sh install

# Add to .bashrc or .profile:
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
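Assuming the rootless setup above, two extra steps are usually needed so the user-level daemon starts at boot and keeps running after you log out (both come from Docker's rootless-mode docs; run them as the deploy user):

```shell
# Start the rootless Docker daemon now and enable it on every boot
systemctl --user enable --now docker

# Without lingering, systemd stops user services when the session ends
sudo loginctl enable-linger "$(whoami)"
```

Skipping enable-linger is the most common reason a rootless daemon "mysteriously" dies after an SSH session closes.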
# Restrict Docker socket access (do not expose publicly)
# Only Traefik and Portainer should mount /var/run/docker.sock
# Use socket proxy to limit what labels can do:
docker run -d \
  --name dockerproxy \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 127.0.0.1:2375:2375 \
  -e CONTAINERS=1 \
  -e NETWORKS=1 \
  -e SERVICES=1 \
  -e POST=0 \
  tecnativa/docker-socket-proxy

Additional security practices: keep images updated weekly (docker compose pull && docker compose up -d), use private registry for proprietary application images, enable UFW on the host to restrict which ports are publicly reachable, and review our VPS performance tuning guide for kernel parameter optimization.
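The weekly update and UFW advice above can be wired up in a few commands; the paths and cron schedule are examples to adapt:

```shell
# Weekly image refresh for a Compose stack (Sunday 4am; path is an example)
(crontab -l 2>/dev/null; echo "0 4 * * 0 cd /opt/myapp && docker compose pull && docker compose up -d && docker system prune -f") | crontab -

# Minimal UFW baseline: SSH plus HTTP/HTTPS only
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw --force enable
```

One caveat: ports published with -p bypass UFW because Docker writes its own iptables rules. Bind internal services to 127.0.0.1 (as the examples in this guide do) and let the reverse proxy be the only publicly reachable entry point.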

14. CI/CD Integration with GitHub Actions

Automate your Docker deployments with GitHub Actions. A self-hosted runner on your VPS executes workflows directly on the server; SSH-based deploys from GitHub's hosted runners work well too. See best VPS for CI/CD for provider recommendations. Our VPS for developers guide covers more CI/CD patterns.

# Install GitHub Actions self-hosted runner on your VPS
mkdir -p /opt/actions-runner && cd /opt/actions-runner

# Download the latest runner (get URL from GitHub repo Settings > Actions > Runners)
curl -o actions-runner-linux-x64-2.315.0.tar.gz -L \
  https://github.com/actions/runner/releases/download/v2.315.0/actions-runner-linux-x64-2.315.0.tar.gz

tar xzf ./actions-runner-linux-x64-2.315.0.tar.gz

# Configure (get token from GitHub)
./config.sh --url https://github.com/yourorg/yourrepo --token YOUR_TOKEN

# Install as a systemd service
sudo ./svc.sh install
sudo ./svc.sh start
# .github/workflows/deploy.yml — self-hosted runner deployment
name: Deploy to VPS

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: self-hosted
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ghcr.io/${{ github.repository }}:latest
            ghcr.io/${{ github.repository }}:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Deploy with Docker Compose
        run: |
          cd /opt/myapp
          docker compose pull
          docker compose up -d --remove-orphans
          docker compose ps
# Alternative: SSH-based deploy from GitHub hosted runners
# .github/workflows/deploy-ssh.yml
name: Deploy via SSH

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to VPS via SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.VPS_HOST }}
          username: deploy
          key: ${{ secrets.VPS_SSH_KEY }}
          script: |
            cd /opt/myapp
            git pull origin main
            docker compose build --no-cache app
            docker compose up -d --no-deps app
            docker system prune -f
# Zero-downtime deploy script (with health check loop)
#!/bin/bash
set -e

APP_DIR="/opt/myapp"
cd "$APP_DIR"

echo "Pulling latest images..."
docker compose pull

echo "Recreating app service..."
docker compose up -d --no-deps --wait app

# Wait for health check to pass
ATTEMPTS=0
MAX=30
# Match "(healthy)" exactly — a bare "healthy" pattern would also match "unhealthy"
until docker compose ps app | grep -q "(healthy)"; do
  ATTEMPTS=$((ATTEMPTS+1))
  if [ $ATTEMPTS -ge $MAX ]; then
    echo "Health check failed after $MAX attempts!"
    exit 1
  fi
  echo "Waiting for app to be healthy ($ATTEMPTS/$MAX)..."
  sleep 5
done

echo "Deployment successful!"
docker system prune -f

15. Docker Swarm on VPS (Multi-Node Setup)

Docker Swarm provides native clustering and orchestration across multiple VPS nodes. When you outgrow a single server but Kubernetes feels like too much overhead, Swarm is the pragmatic middle ground. See also our best VPS for Kubernetes comparison.

# Initialize Swarm on your primary VPS (manager node)
docker swarm init --advertise-addr YOUR_VPS_IP

# Output includes a join token — save it!
# To get the worker join token later:
docker swarm join-token worker

# On each worker VPS, run the join command:
docker swarm join \
  --token SWMTKN-1-xyz-abc \
  MANAGER_VPS_IP:2377

# List all nodes
docker node ls
# Deploy a stack to Swarm (uses docker-compose.yml with deploy keys)
docker stack deploy -c docker-compose.yml myapp

# List stacks
docker stack ls

# List services in a stack
docker stack services myapp

# View tasks (individual container instances) for a service
docker service ps myapp_app

# Scale a service up/down
docker service scale myapp_app=5

# Update a service image (rolling update)
docker service update \
  --image ghcr.io/myorg/myapp:v1.2.3 \
  --update-parallelism 1 \
  --update-delay 10s \
  myapp_app
# docker-compose.yml with Swarm deploy section
services:
  app:
    image: myapp:latest
    networks:
      - swarm-net
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
        order: start-first        # Start new replica before stopping old
        failure_action: rollback
      rollback_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      placement:
        constraints:
          - "node.role == worker"  # Only deploy on worker nodes

networks:
  swarm-net:
    driver: overlay
    attachable: true
# Swarm Secrets (safer than environment variables)
# Create a secret from a file or stdin
echo "my_db_password" | docker secret create db_password -

# Use the secret in a service
docker service create \
  --name myapp \
  --secret db_password \
  --env DB_PASSWORD_FILE=/run/secrets/db_password \
  myapp:latest

# List secrets (values are never shown)
docker secret ls

# Remove a secret (in-use secrets can't be removed — update or remove referencing services first)
docker secret rm db_password

For multi-node Swarm, spread VPS nodes across availability zones or datacenters for fault tolerance, but keep inter-node latency low — Swarm's Raft consensus and overlay networking degrade over high-latency links. Pair with a load balancer (Vultr LB, Hetzner LB, or HAProxy) at the front. See cloud VPS vs bare metal for hardware considerations at scale.
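The HAProxy option can be sketched as a minimal front end across Swarm nodes. The node IPs and health-check path below are placeholders; thanks to Swarm's routing mesh, any node can accept traffic for any published service:

```
# /etc/haproxy/haproxy.cfg — minimal sketch; node IPs and /health path are placeholders
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind *:80
    default_backend swarm_nodes

backend swarm_nodes
    balance roundrobin
    option httpchk GET /health
    server node1 10.0.0.11:80 check
    server node2 10.0.0.12:80 check
    server node3 10.0.0.13:80 check
```

In production you would terminate TLS here (or at Traefik on the nodes) and add a second HAProxy or a provider load balancer to avoid making the proxy itself a single point of failure.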

16. Frequently Asked Questions

What VPS specs do I need to run Docker in production?

For a basic single-app Docker setup, 2 vCPU and 4GB RAM is the comfortable minimum. For production with multiple services (app + database + Redis + Traefik), target 4 vCPU and 8GB RAM. NVMe SSD storage is strongly preferred — Docker image pulls and overlay2 filesystem operations are I/O-sensitive. Hetzner's CX32 (4 vCPU, 8GB, NVMe, ~$8/mo) is an excellent Docker production server. Use our VPS calculator to size your workload.

Docker Compose vs Docker Swarm — which should I use?

Use Docker Compose for single-server setups — the vast majority of VPS workloads. Use Docker Swarm when you genuinely need multi-node orchestration across 2+ servers and Kubernetes feels like overkill. Compose is simpler, better documented, and sufficient for most production workloads. Migrate to Swarm only when you actually need horizontal scaling. See best VPS for Kubernetes if you need full container orchestration.

Should I use Traefik or Nginx as a reverse proxy for Docker?

Traefik is the better choice for Docker-native environments. It reads Docker labels and auto-configures routing and Let's Encrypt SSL without manual config file edits. Nginx requires manual upstream configuration changes every time you add a service. The exception: if you have extensive Nginx expertise and complex rewrite rules, stick with what you know. See our Nginx reverse proxy guide for an Nginx-based setup.

How do I update Docker containers without downtime?

With Docker Compose: docker compose pull && docker compose up -d --no-deps app recreates only the app service. For zero-downtime with multiple replicas, use Docker Swarm's docker service update --image newimage:tag myservice, which performs a rolling update. For single-container setups on Compose, downtime is typically 2–5 seconds. Scale to 2 replicas behind a load balancer for true zero-downtime.

Can I run Docker on an OpenVZ VPS?

Modern OpenVZ 7 (Virtuozzo 7) supports Docker via kernel namespaces, but older OpenVZ 6 does not. KVM-based VPS always supports Docker fully. Check with your provider before purchasing if Docker support is critical. Contabo, Vultr, Hetzner, and DigitalOcean all use KVM and fully support Docker.

How do I handle environment secrets in Docker?

Never put secrets in your Dockerfile or commit them to git. Use a .env file loaded by Docker Compose (add .env to .gitignore and chmod 600 .env), Docker Secrets for Swarm deployments, or an external secrets manager. For simple VPS deployments, a .env file with restrictive permissions is pragmatic and secure. Review our security hardening guide for additional secrets management patterns.
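The .env pattern above takes only a couple of commands to set up correctly (the /opt/myapp path is the example location used throughout this guide):

```shell
# Lock down the .env file that sits next to docker-compose.yml
chmod 600 /opt/myapp/.env
chown deploy:deploy /opt/myapp/.env

# Make sure it can never be committed (-qxF: exact fixed-string match)
grep -qxF '.env' /opt/myapp/.gitignore || echo '.env' >> /opt/myapp/.gitignore

# Compose reads .env automatically for ${VAR} substitution; values only
# reach a container where you explicitly reference them, e.g.:
#   environment:
#     DB_PASSWORD: ${DB_PASSWORD}
```

Keep a committed .env.example with placeholder values so new deploys know which variables are required.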

What is the best VPS provider for Docker hosting?

Hetzner is the top recommendation: best price-to-performance, NVMe storage, and KVM virtualization. Vultr is best for US-specific datacenters with 17 locations. DigitalOcean has the best documentation. See our full comparison at best VPS for Docker, check Hetzner deals and Vultr coupon codes, and view real performance data in our Hetzner benchmarks.

Alex Chen — Senior Systems Engineer

Alex Chen is a Senior Systems Engineer with 12+ years of experience managing Linux servers, VPS infrastructure, and containerized deployments. He has personally benchmarked 50+ VPS providers and runs production Docker workloads across Hetzner, Vultr, and DigitalOcean. About our methodology →

Last updated: March 15, 2026