Quick Stack
- Best value Docker VPS: Hetzner CX22 — 2 vCPU, 4GB RAM, 40GB SSD, 20TB bandwidth for $4.59/mo.
- Best for US locations: Vultr — 9 US cities, $10/mo for 2GB, one-click Docker app.
- Most RAM per dollar: Contabo — 8GB RAM at $6.99/mo, but expect lower I/O performance.
- Time to deploy: 15 minutes from a fresh Ubuntu 24.04 install.
Table of Contents
- Choosing the Right VPS for Docker
- Installing Docker the Right Way
- Your First Docker Compose Stack
- Nginx Reverse Proxy with SSL
- Storage Strategy: Where Things Go Wrong
- Docker Security on a Public Server
- Monitoring Containers in Production
- Provider-Specific Gotchas
- Production Deployment Checklist
- Docker Commands I Use Every Day
- FAQ
Choosing the Right VPS for Docker
Docker does not care about your CPU. Not really. Unless you are running compute-heavy containers (video transcoding, machine learning inference, build servers), a 1-2 vCPU plan handles most workloads without breaking a sweat. What Docker cares about is RAM and disk I/O. Every container has a memory footprint, and overlay2 — Docker's storage driver — amplifies the impact of slow disks. Choose wrong here and your containers will crawl.
The RAM Math Nobody Shows You
This is what a typical self-hosted stack actually uses on a 2GB VPS running Ubuntu 24.04 with Docker:
| Component | RAM Usage | Notes |
|---|---|---|
| Ubuntu 24.04 base | 180-220 MB | systemd + kernel + SSH |
| Docker daemon | 50-80 MB | containerd + dockerd |
| Nginx (reverse proxy) | 5-15 MB | Alpine image, 10 upstreams |
| Node.js app | 80-150 MB | Depends on heap allocation |
| PostgreSQL | 100-250 MB | Default config, small dataset |
| Redis | 10-30 MB | Grows with cached data |
| Total | 425-745 MB | Before application load |
On a 1GB VPS, that stack has about 250MB of headroom under no load. The first traffic spike pushes Node.js to 200MB and PostgreSQL to 400MB and you are deep in swap, which on a VPS means disk I/O, which means everything grinds to a halt. I have seen this exact failure mode on three different client deployments. 2GB is the practical minimum for any Docker stack that includes a database. 4GB if you are running more than three services.
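You can check your actual headroom before adding another container. This is a small sketch that reads `/proc/meminfo` (standard on any Linux VPS); the 256MB warning threshold is my illustrative cutoff, not a Docker constant:

```shell
# Quick RAM headroom check — run before adding another service to the stack.
avail_kb=$(awk '/^MemAvailable:/ {print $2}' /proc/meminfo)
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
echo "Total RAM:     $(( total_kb / 1024 )) MB"
echo "Available RAM: $(( avail_kb / 1024 )) MB"
# Under ~256 MB available, the next traffic spike lands in swap
if [ "$avail_kb" -lt 262144 ]; then
  echo "WARNING: low headroom — expect swapping under load"
fi
```

`MemAvailable` accounts for reclaimable page cache, so it is a much better signal than `MemFree` for "can I fit another container here."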
Which Providers Work Best
I have benchmarked Docker builds and container startup times across providers. The differences are real:
| Provider | Plan | Price | RAM | Storage | Docker Build Time* |
|---|---|---|---|---|---|
| Hetzner | CX22 | $4.59/mo | 4 GB | 40 GB SSD | 42s |
| Vultr | 2 vCPU / 4GB | $20/mo | 4 GB | 80 GB SSD | 38s |
| DigitalOcean | Basic 2 vCPU | $24/mo | 4 GB | 80 GB SSD | 36s |
| Contabo | Cloud VPS S | $6.99/mo | 8 GB | 200 GB SSD | 68s |
| Hostinger | KVM 2 | $8.99/mo | 8 GB | 100 GB NVMe | 31s |
*Building a multi-stage Node.js 20 image with npm install (14 dependencies). Lower is better.
Contabo's 8GB for $6.99/mo is tempting, and the RAM is real. But their disk I/O is 40-60% slower than Hetzner or Vultr, and Docker image builds are where you feel it most. If you build images on the server (as opposed to using a CI pipeline that pushes pre-built images), that difference adds up fast. For a production deployment where you pull pre-built images and rarely rebuild, Contabo's RAM advantage matters more than its I/O disadvantage.
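You can sanity-check a provider's disk yourself in seconds. A `dd` run gives a rough sequential-write number (`fio` is more rigorous; this is just a smoke test, and the result varies with neighbor load):

```shell
# Rough sequential-write throughput — writes 128 MB, then cleans up.
# conv=fdatasync forces a flush so the number reflects the disk, not the page cache.
dd if=/dev/zero of=./iotest.tmp bs=1M count=128 conv=fdatasync 2>&1 | tail -1
rm -f ./iotest.tmp
```

On a healthy NVMe plan expect several hundred MB/s; if you see double digits, image builds and database writes will feel it.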
Installing Docker the Right Way
Do not use apt install docker.io. The version in Ubuntu's default repository is months behind and misses critical security patches. Use Docker's official repository. This takes 60 seconds:
# Remove any old versions
sudo apt remove docker docker-engine docker.io containerd runc 2>/dev/null

# Add Docker's official GPG key and repository
sudo apt update
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine + Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io \
  docker-buildx-plugin docker-compose-plugin

# Add your user to the docker group (log out and back in after)
sudo usermod -aG docker $USER
Verify it works:
docker --version
# Docker version 27.x.x, build xxxxxxx
docker compose version
# Docker Compose version v2.x.x
Important: That usermod command gives your user root-equivalent access to the Docker daemon. On a personal VPS, this is fine. On a shared server with multiple users, every member of the docker group can mount any host directory into a container and read/write anything on the system. Understand the implications before adding users to that group.
Post-Install Configuration
Docker's default settings are fine for development. Production wants a few tweaks. Create /etc/docker/daemon.json:
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"storage-driver": "overlay2",
"default-address-pools": [
{"base": "172.17.0.0/16", "size": 24}
]
}
The log rotation settings are critical. Without them, container logs grow without limit and will fill your disk. I have seen a 25GB VPS run out of space because a Node.js app was logging every HTTP request and nobody set max-size. Three months of uncapped logs. That is the kind of lesson you only need once.
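To see whether uncapped logs are already eating your disk, check the JSON log files directly. The path below is Docker's default log location; the snippet assumes GNU `find` (standard on Ubuntu) and you will need sudo to read the real directory:

```shell
# List the five largest container log files (path is Docker's default).
LOG_DIR="${LOG_DIR:-/var/lib/docker/containers}"
find "$LOG_DIR" -name '*-json.log' -printf '%s %p\n' 2>/dev/null \
  | sort -rn | head -5 \
  | awk '{ printf "%8.1f MB  %s\n", $1 / 1048576, $2 }'
```

Anything in the hundreds of megabytes means log rotation is not configured for that container.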
# Apply the configuration
sudo systemctl restart docker

# Verify
docker info | grep -i "storage driver"
# Storage Driver: overlay2
Your First Docker Compose Stack
Single containers are for tutorials. Real deployments use Docker Compose. Here is a production-grade stack that deploys a Node.js app with PostgreSQL, Redis, and an Nginx reverse proxy. I use variations of this for every project:
# docker-compose.yml
services:
app:
image: node:20-alpine
working_dir: /app
volumes:
- ./app:/app
command: node server.js
environment:
DATABASE_URL: postgres://app:${DB_PASSWORD}@db:5432/myapp
REDIS_URL: redis://cache:6379
NODE_ENV: production
depends_on:
db:
condition: service_healthy
cache:
condition: service_started
restart: unless-stopped
networks:
- backend
db:
image: postgres:16-alpine
environment:
POSTGRES_USER: app
POSTGRES_PASSWORD: ${DB_PASSWORD}
POSTGRES_DB: myapp
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U app -d myapp"]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
networks:
- backend
cache:
image: redis:7-alpine
command: redis-server --maxmemory 64mb --maxmemory-policy allkeys-lru
restart: unless-stopped
networks:
- backend
proxy:
image: nginx:alpine
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/conf.d:/etc/nginx/conf.d:ro
- ./certbot/www:/var/www/certbot:ro
- ./certbot/conf:/etc/letsencrypt:ro
depends_on:
- app
restart: unless-stopped
networks:
- backend
volumes:
pgdata:
networks:
backend:
driver: bridge
And the .env file that goes alongside it:
# .env (never commit this to Git)
DB_PASSWORD=your-secure-password-here
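Rather than inventing a password, generate one. `openssl` ships with Ubuntu, and 24 random bytes base64-encode to a 32-character string. Note this overwrites any existing `.env`:

```shell
# Generate a strong random password and write the .env file
DB_PASSWORD=$(openssl rand -base64 24)
echo "DB_PASSWORD=$DB_PASSWORD" > .env
chmod 600 .env   # readable only by your user
```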
Three things people get wrong with their first Compose stack:
- No healthchecks on the database. Without `condition: service_healthy`, your app container starts before PostgreSQL is ready to accept connections. The app crashes on boot, Docker restarts it, it crashes again, and you have a restart loop that pollutes your logs. The healthcheck above fixes this entirely.
- Bind mounts instead of named volumes for data. `./pgdata:/var/lib/postgresql/data` creates a directory owned by your user. PostgreSQL inside the container runs as uid 999. Permission hell ensues. Named volumes (`pgdata:`) let Docker manage ownership correctly.
- No memory limit on Redis. Without `--maxmemory`, Redis grows until it consumes all available RAM. On a 2GB VPS, this kills your database and app. Set a cap and an eviction policy.
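A fourth improvement worth making: the stack healthchecks the database but not the app itself. A sketch, assuming your server exposes a `/health` route on port 3000 (`wget` is part of busybox, so it is available in `node:20-alpine` without installing anything):

```yaml
# Add to the app service in docker-compose.yml
healthcheck:
  test: ["CMD", "wget", "-qO-", "http://localhost:3000/health"]
  interval: 30s
  timeout: 5s
  retries: 3
  start_period: 10s
```

With this in place, `docker ps` shows the app as healthy or unhealthy, and the monitoring script later in this guide can alert on it.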
Deploy the stack:
# Start everything in the background
docker compose up -d

# Watch the logs (Ctrl+C to exit)
docker compose logs -f

# Check container status
docker compose ps
Nginx Reverse Proxy with Automatic SSL
Your app container should never expose ports 80 or 443 directly. Nginx sits in front, handles TLS termination, adds security headers, and routes traffic to the right container. This is the pattern used by every serious Docker deployment. For a deeper dive on Nginx configuration, see our Nginx reverse proxy guide.
Create nginx/conf.d/app.conf:
server {
listen 80;
server_name yourdomain.com;
location /.well-known/acme-challenge/ {
root /var/www/certbot;
}
location / {
return 301 https://$host$request_uri;
}
}
server {
listen 443 ssl;
http2 on;  # nginx >= 1.25 syntax; on older versions use "listen 443 ssl http2;"
server_name yourdomain.com;
ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
location / {
proxy_pass http://app:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
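The server block above leaves TLS protocol settings at nginx's defaults. Adding these lines inside the 443 block tightens them; the values follow the commonly recommended intermediate profile, so verify against current guidance before shipping:

```nginx
# Inside the 443 server block — restrict to modern TLS
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers off;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1d;
```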
Get the initial certificate with Certbot:
# Start Nginx with only the HTTP block first
docker compose up -d proxy

# Run Certbot (docker run wants absolute paths for bind mounts, hence $PWD)
docker run --rm \
  -v "$PWD/certbot/www":/var/www/certbot \
  -v "$PWD/certbot/conf":/etc/letsencrypt \
  certbot/certbot certonly \
  --webroot --webroot-path=/var/www/certbot \
  -d yourdomain.com --agree-tos --email you@email.com

# Enable the SSL block, then restart
docker compose restart proxy
Automate renewal with a cron job:
# Add to crontab (crontab -e) — a cron entry must be a single line
0 3 * * * docker run --rm -v /home/deploy/certbot/www:/var/www/certbot -v /home/deploy/certbot/conf:/etc/letsencrypt certbot/certbot renew --quiet && docker compose -f /home/deploy/docker-compose.yml restart proxy
For more on SSL certificates — including wildcard certs and DNS validation — see our SSL certificates on VPS guide.
Storage Strategy: Where Things Go Wrong
Docker storage is a silent killer. Every image layer, every build cache artifact, every unused volume accumulates in /var/lib/docker/ until your disk is full. On a 25GB VPS — the default for $5-6/mo plans from Vultr and Linode — you have about 15GB of usable space after the OS, and Docker can eat that in a week of active development.
How Disk Space Disappears
# Check Docker's disk usage
docker system df

TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          12        4         3.2GB     2.1GB (65%)
Containers      6         4         125MB     45MB (36%)
Local Volumes   3         3         1.8GB     0B (0%)
Build Cache     24                  890MB     890MB
That is real output from one of my VPS instances after two weeks of development. 3.2GB in images, most of them unused. 890MB in build cache from iterating on a Dockerfile. On a 25GB disk, that is 16% of your storage consumed by garbage. The fix is automated cleanup:
# Nuclear option: remove everything unused
docker system prune -af --volumes

# Safer: remove just dangling images and build cache
docker image prune -f
docker builder prune -f

# Schedule weekly cleanup (crontab -e) — cron entries must be one line
0 4 * * 0 docker system prune -af --filter "until=168h" >> /var/log/docker-prune.log 2>&1
Use Smaller Base Images
This is the single most impactful change you can make for both disk usage and security. The numbers are dramatic:
| Base Image | Size | Use When |
|---|---|---|
| `node:20` | 1.1 GB | Never in production |
| `node:20-slim` | 220 MB | Need glibc compatibility |
| `node:20-alpine` | 130 MB | Default choice for most apps |
| `python:3.12` | 1.0 GB | Never in production |
| `python:3.12-slim` | 155 MB | Default for Python apps |
| `python:3.12-alpine` | 58 MB | If all deps are pure Python |
Switching from node:20 to node:20-alpine saves nearly 1GB per image. Three Node.js services means 3GB recovered. On a 25GB disk, that is the difference between "out of space" warnings and having room to breathe.
Multi-Stage Builds
If you build your own images, multi-stage builds are non-negotiable. They separate the build environment (compilers, dev dependencies) from the runtime image:
# Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies here — the build step usually needs devDependencies
RUN npm ci
COPY . .
RUN npm run build
# Strip devDependencies so only production deps get copied forward
RUN npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
USER node
EXPOSE 3000
CMD ["node", "dist/server.js"]
The final image contains only your compiled code and production dependencies. Build tools, source files, and dev dependencies stay in the builder stage and never make it into the deployed image. I have seen this reduce image sizes from 800MB to 120MB.
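Multi-stage builds pair well with a `.dockerignore` file, because `COPY . .` otherwise drags your local `node_modules`, Git history, and secrets into the build context. A typical starting point (adjust to your project layout):

```
# .dockerignore — keep the build context small and secret-free
node_modules
.git
dist
*.log
.env
```

Excluding `node_modules` also prevents a host-installed dependency tree (possibly built for a different libc) from shadowing the clean `npm ci` inside the container.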
Docker Security on a Public Server
Every Docker tutorial ignores this section, and it is the most important one. Your VPS has a public IP address. Every container that publishes a port is exposed to the internet. Docker modifies iptables rules directly — even if you set up UFW, Docker containers bypass it entirely. This is not a bug; it is by design, and it has burned more people than I can count. For a comprehensive security approach, see our VPS security hardening guide.
The UFW Problem (and Fix)
# You set up UFW to only allow SSH and HTTP:
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 443
sudo ufw enable

# Then you run a container:
docker run -d -p 5432:5432 postgres

# PostgreSQL is now exposed to the entire internet.
# UFW did not block it. Docker rewrote iptables behind UFW's back.
The fix is to tell Docker not to manipulate iptables. Add to /etc/docker/daemon.json:
{
"iptables": false
}
Then restart Docker and configure UFW to route traffic to Docker containers manually. This is the approach I use on every production server. It is more work upfront, but it means your firewall actually controls what is accessible from the internet.
The alternative approach — and the one I actually prefer for simplicity — is to never publish ports directly to the host. Instead, keep all services on Docker's internal bridge network and let only Nginx (your reverse proxy container) expose ports 80 and 443. Your database, cache, and application containers are only reachable through Docker's internal DNS. No published ports means no iptables bypass, and UFW works normally for the two ports Nginx uses.
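When you do occasionally need host-side access to an internal service — running `psql` from the VPS shell, say — bind the published port to localhost instead of all interfaces. A sketch using the `db` service from the stack earlier in this guide:

```yaml
# In docker-compose.yml — localhost-only port publishing
services:
  db:
    ports:
      - "127.0.0.1:5432:5432"  # reachable from the host shell, not the internet
```

The loopback binding means Docker never opens the port on your public interface, so this does not reintroduce the iptables-bypass problem.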
Container Security Essentials
# Hardened service in docker-compose.yml
services:
app:
image: myapp:latest
read_only: true
security_opt:
- no-new-privileges:true
tmpfs:
- /tmp
deploy:
resources:
limits:
memory: 512M
cpus: "1.0"
user: "1000:1000"
restart: unless-stopped
- read_only: true — prevents modifications to the container filesystem. Mount writable directories explicitly with tmpfs or volumes.
- no-new-privileges — prevents privilege escalation inside the container. If an attacker gets into your container, they cannot sudo.
- Resource limits — prevents a runaway container from consuming all RAM and CPU. Without limits, a memory leak kills every other container on the host.
- user: "1000:1000" — runs the process as a non-root user. Most official images include a non-root user; use it.
Monitoring Containers in Production
Running docker compose ps every few hours is not monitoring. You need to know when a container crashes, when memory usage spikes, and when your disk is filling up — ideally before it affects your users. For a full monitoring setup, see our VPS monitoring guide.
Quick Health Check Script
This is the minimum I run on every Docker VPS. A cron job that checks container health and sends an alert if something is down:
#!/bin/bash
# /usr/local/bin/docker-health-check.sh
WEBHOOK_URL="https://hooks.slack.com/services/your/webhook/url"
HOSTNAME=$(hostname)
# Check for unhealthy or exited containers
UNHEALTHY=$(docker ps --filter "health=unhealthy" --format "{{.Names}}" 2>/dev/null)
EXITED=$(docker ps -a --filter "status=exited" \
--filter "label=com.docker.compose.service" \
--format "{{.Names}} (exited {{.Status}})" 2>/dev/null)
if [ -n "$UNHEALTHY" ] || [ -n "$EXITED" ]; then
MSG="Docker alert on $HOSTNAME:\n"
[ -n "$UNHEALTHY" ] && MSG+="Unhealthy: $UNHEALTHY\n"
[ -n "$EXITED" ] && MSG+="Exited: $EXITED\n"
curl -s -X POST -H 'Content-type: application/json' \
-d "{\"text\":\"$MSG\"}" "$WEBHOOK_URL"
fi
# Run every 5 minutes (crontab -e)
*/5 * * * * /usr/local/bin/docker-health-check.sh
Lightweight Monitoring Stack
For a visual dashboard, add cAdvisor to your Compose file. It exposes container-level CPU, memory, network, and disk metrics through a web UI:
# Add to your existing docker-compose.yml
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:ro
ports:
- "127.0.0.1:8080:8080" # Only accessible from localhost
restart: unless-stopped
networks:
- backend
Notice the 127.0.0.1: prefix on the port mapping. That binds cAdvisor to localhost only, preventing anyone on the internet from accessing your monitoring data. Access it through an SSH tunnel: ssh -L 8080:localhost:8080 user@your-vps.
For external uptime monitoring that works even when your server is down, run Uptime Kuma on a separate VPS. It takes 30 seconds to deploy:
docker run -d --restart unless-stopped \
  -p 3001:3001 \
  -v uptime-kuma:/app/data \
  --name uptime-kuma \
  louislam/uptime-kuma:1
Provider-Specific Gotchas
Not all VPS providers are equal when it comes to Docker. Here is what I have encountered firsthand:
Hetzner Cloud
Docker works flawlessly on Hetzner's CX-series instances. One unique advantage: their Cloud Firewall operates at the hypervisor level, outside the VPS. This means it does not suffer from the UFW+Docker iptables conflict. If you use Hetzner Cloud Firewall, Docker containers actually obey it. This is unique among providers and is one of the reasons I recommend Hetzner for Docker deployments. Their hcloud CLI also lets you script server creation:
# Create a Docker-ready server on Hetzner
hcloud server create \
  --name docker-prod \
  --type cx22 \
  --image docker-ce \
  --location ash \
  --ssh-key my-key
Vultr
Vultr offers a one-click Docker marketplace image that pre-installs Docker CE and Docker Compose. Saves you 5 minutes. Their startup scripts feature lets you run a bash script on first boot — I use it to configure daemon.json, pull images, and start containers automatically. The $5/mo 1GB plan runs a single container fine but hits the wall with any database. Start at the $10/mo 2GB plan for real Docker work. With 9 US datacenter locations, Vultr gives you the most flexibility for latency-sensitive deployments.
Contabo
Contabo gives you 8GB RAM for $6.99/mo, which is absurd value for container-heavy workloads. The trade-off: their SSD I/O is the slowest among major providers. Image pulls and builds take 40-60% longer than Hetzner. For a production server where you pull pre-built images once and run them for months, this does not matter. For a CI/CD build server, go elsewhere. Also, Contabo's default kernel on some older instances ships without the overlay module loaded. If you get an error about overlay2, run modprobe overlay and add it to /etc/modules-load.d/overlay.conf.
DigitalOcean
DigitalOcean deserves credit for having the best Docker documentation in the industry. Their tutorials alone are worth the premium over Hetzner. They also offer App Platform — a managed container hosting service where you push a Dockerfile and they handle everything. It starts at $5/mo per container. For self-managed Docker on a Droplet, everything works as expected. Their $200 free credit for 60 days gives you plenty of room to experiment with different stack configurations.
Hostinger VPS
Hostinger's NVMe storage makes Docker image builds noticeably faster — 31 seconds for our benchmark build versus 42 seconds on Hetzner's standard SSD. Their KVM 2 plan (8GB RAM, 100GB NVMe, $8.99/mo) is a solid middle ground between Contabo's raw specs and Hetzner's price. The integrated firewall works well with Docker if you configure it through their dashboard rather than relying on UFW.
Kamatera
Kamatera lets you build custom configurations, which is useful for Docker workloads that need unusual resource ratios. Need 8GB of RAM but only 1 vCPU? Kamatera lets you configure that. Their $100 free trial credit is enough to test your Docker stack thoroughly before committing. Per-hour billing means you can spin up beefy build servers, push images to a registry, and tear them down without paying for idle time.
Production Deployment Checklist
Before you point DNS at your Docker VPS and call it production, run through this list. Every item exists because I skipped it once and paid for it:
- Docker installed from the official repository (not `apt install docker.io`)
- Log rotation configured in `daemon.json` (`max-size: 10m`, `max-file: 3`)
- Firewall configured (Hetzner Cloud Firewall, or UFW with Docker's iptables manipulation disabled)
- All services have `restart: unless-stopped`
- Database healthchecks configured in Compose
- Named volumes for all persistent data (not bind mounts for databases)
- Secrets in the `.env` file (not hardcoded in `docker-compose.yml`)
- SSL certificates via Certbot with an automated renewal cron
- Weekly `docker system prune` scheduled in cron
- Backup strategy for Docker volumes (see VPS backup strategies)
- Container health monitoring (even a basic script beats nothing)
- Non-root users in containers, `no-new-privileges`, resource limits
- SSH key authentication only, password auth disabled
- Docker daemon listening on the Unix socket only (not TCP)
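The `daemon.json` items on the list can be audited in seconds. This is a grep-based sketch — crude by design, since it checks that the keys exist, not that their values are sane:

```shell
#!/usr/bin/env bash
# Check that the daemon.json settings from this guide are present.
# DAEMON_JSON defaults to Docker's standard config path.
DAEMON_JSON="${DAEMON_JSON:-/etc/docker/daemon.json}"

check() {
  # $1 = key to look for, $2 = human-readable label
  if grep -q "$1" "$DAEMON_JSON" 2>/dev/null; then
    echo "OK:      $2"
  else
    echo "MISSING: $2"
  fi
}

check '"max-size"' "log rotation (max-size)"
check '"max-file"' "log rotation (max-file)"
check '"storage-driver"' "explicit storage driver"
```

Run it after every fresh provisioning; a `MISSING` line means a container log will eventually fill the disk.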
Docker Commands I Use Every Day
# View running containers with live resource usage
docker stats --no-stream
# Shell into a running container
docker exec -it container_name sh
# View logs for a specific container (last 100 lines, follow)
docker logs --tail 100 -f container_name
# Copy files from container to host
docker cp container_name:/path/to/file ./local-path
# Rebuild and restart a single service
docker compose up -d --build app
# Check which ports are exposed
docker compose port app 3000
# Inspect a container's IP address
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name
# View Docker disk usage breakdown
docker system df -v
# Remove all stopped containers, unused networks, dangling images
docker system prune -f
# Follow all container logs from a Compose stack
docker compose logs -f --tail 50
Ready to Deploy Docker?
Start with a 4GB RAM VPS — it is the sweet spot for most Docker stacks. These providers offer the best combination of performance, pricing, and Docker compatibility:
Frequently Asked Questions
How much RAM does Docker need on a VPS?
Docker itself uses about 50-80MB of RAM. The real cost is your containers. A basic Nginx container uses ~5MB, a Node.js app uses 50-150MB, a PostgreSQL database uses 100-300MB, and WordPress with MySQL uses 300-500MB combined. For a single-app stack (app + database + reverse proxy), 2GB of RAM is the practical minimum. For 3-5 containers comfortably, 4GB is the sweet spot. Contabo's 8GB plan at $6.99/mo gives you the most headroom per dollar.
Can I run Docker on a 1GB RAM VPS?
Technically yes, but practically it depends on what you run. A single lightweight container (Nginx, Redis, a small Go or Rust app) works fine. But the moment you add a database container, you will hit memory pressure. Docker plus Ubuntu overhead consumes about 300MB before you start any containers, leaving 700MB for your workload. A 2GB VPS at $5-6/mo from Vultr or Linode is a much better starting point.
Should I use Docker Compose or Docker Swarm on a VPS?
Docker Compose for a single VPS, always. Docker Swarm is a multi-node orchestrator that adds complexity without benefit on a single server. Use docker compose up -d to manage your entire stack from one YAML file. If you outgrow a single VPS and need multi-server orchestration, skip Swarm entirely and go straight to Kubernetes — DigitalOcean and Linode both offer managed Kubernetes. Swarm is effectively abandoned by Docker Inc.
Which VPS provider is best for Docker?
For most Docker workloads: Hetzner Cloud. Their CX22 gives you 2 vCPU, 4GB RAM, 40GB SSD, and 20TB bandwidth for $4.59/mo. For maximum US locations: Vultr (9 US cities) or Linode (9 US cities). For raw resources on a budget: Contabo's 8GB RAM plan at $6.99/mo. For managed Docker hosting: Cloudways or DigitalOcean App Platform.
How do I back up Docker volumes on a VPS?
Three approaches: (1) Provider snapshots — take a full VPS snapshot before major changes. Works on Vultr, Hetzner, DigitalOcean. (2) Volume-level backup — stop the container, tar the volume directory, upload to S3 or Backblaze B2 via cron. (3) Application-level backup — use pg_dump for PostgreSQL, mysqldump for MySQL. Option 3 is most reliable. Combine with option 1 for belt-and-suspenders protection.
Is Docker slower than running applications directly on the VPS?
No, not in any meaningful way. Docker containers on Linux use the host kernel directly — no hypervisor, no emulation. Network I/O through Docker's bridge adds roughly 1-3% latency, and disk I/O through overlay2 is within 2-5% of native. The only real overhead is RAM: Docker daemon uses 50-80MB. Use --network host if even 1-3% network overhead matters to your use case.
How do I update Docker containers without downtime?
Pull the new image, then recreate: docker compose pull && docker compose up -d. Compose recreates only containers whose images changed, and downtime is typically under 2 seconds. For true zero-downtime, use an Nginx reverse proxy with two instances of your app behind it and update one at a time (blue-green deployment). Watchtower can automate pulls and restarts, but only use it for non-critical services.