VPS Backup Strategies — The Day Your Server Dies Should Not Be Your Worst Day

I lost a client's database once. It was a Saturday morning. The VPS was on a provider that shall remain nameless, the SSD failed, and the provider's response was "we don't have backups of your data." The client had 14 months of customer records, order history, and product data. We recovered about 60% from cached pages and email exports. The other 40% was gone. That afternoon I wrote the backup script I am about to share with you, and it has run without failure every night since. Total cost: $0.47/month on Backblaze B2.

This is not a theoretical exercise in backup taxonomy. It is the practical answer to one question: when your VPS dies — and eventually, one will — how fast can you get back to normal?

The 3-2-1 Rule (Non-Negotiable)

3 copies of your data. 2 different storage types. 1 off-site. In VPS terms: your live server, a provider snapshot, and an off-site backup to Backblaze B2 or AWS S3. This costs under $1/month for most VPS setups and takes about 30 minutes to configure.

Provider Backup Options Compared

Every major VPS provider offers some form of backup. The quality, cost, and reliability vary wildly:

| Provider | Snapshots | Auto Backups | Backup Cost | Backup Limit |
|---|---|---|---|---|
| Vultr | Free (manual) | $1-8/mo | 20% of plan | Latest only |
| Hetzner | Free (manual) | $0.92-6.50/mo | 20% of plan | 7 daily + weekly |
| DigitalOcean | Free (manual) | $1.20-9.60/mo | 20% of plan | Weekly (4 kept) |
| Linode | N/A | $2.50-10/mo | 20-25% of plan | 3 daily + weekly |
| Kamatera | Yes | Yes | Variable | Configurable |
| Contabo | Yes (manual) | No auto | Included | 1 snapshot |
| Hostinger | Yes | Weekly | Included | Recent only |
| RackNerd | No | No | N/A | N/A |
| InterServer | No | No | N/A | N/A |

The uncomfortable truth: provider backups are not enough on their own. They are stored on the same infrastructure as your VPS. A datacenter-level failure, an account compromise, or even a billing dispute can take out your server and your backups simultaneously. Treat provider backups as convenience snapshots for quick rollbacks. Your real backup lives off-site, on infrastructure you control.

If you are on RackNerd or InterServer, you have zero provider backup options. Off-site backup is not optional — it is your only backup.

Database Backup Scripts

Databases are where the irreplaceable data lives. Files can be redeployed from Git. Configuration can be recreated from documentation. Customer records, transaction history, and user accounts exist only in your database. Back up the database first, everything else second.

MySQL / MariaDB

#!/bin/bash
# /usr/local/bin/backup-mysql.sh

BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y-%m-%d_%H%M)
RETENTION_DAYS=14

mkdir -p "$BACKUP_DIR"

# Dump all databases with consistent snapshot
mysqldump --all-databases \
  --single-transaction \
  --routines \
  --triggers \
  --events \
  --quick \
  | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"

# Check whether the dump itself succeeded. After a pipeline, $? is the
# exit status of gzip; PIPESTATUS[0] holds mysqldump's.
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
  echo "$(date): MySQL backup successful - $(du -h $BACKUP_DIR/all-databases-$DATE.sql.gz | cut -f1)"
else
  echo "$(date): MySQL backup FAILED" >&2
  exit 1
fi

# Remove backups older than retention period
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
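The `find -mtime` sweep is worth sanity-checking locally before trusting it with real dumps. A throwaway simulation (temp files only, nothing under /backup is touched):

```shell
#!/usr/bin/env bash
# Backdate one file past the 14-day window, run the same find
# expression, and confirm only the fresh dump survives.
tmp=$(mktemp -d)
touch "$tmp/old.sql.gz" "$tmp/new.sql.gz"
touch -d "20 days ago" "$tmp/old.sql.gz"   # pretend it is 20 days old
find "$tmp" -name "*.sql.gz" -mtime +14 -delete
ls "$tmp"                                  # only new.sql.gz remains
rm -rf "$tmp"
```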

PostgreSQL

#!/bin/bash
# /usr/local/bin/backup-postgres.sh

BACKUP_DIR="/backup/postgres"
DATE=$(date +%Y-%m-%d_%H%M)
RETENTION_DAYS=14

mkdir -p "$BACKUP_DIR"

# Dump all databases as plain SQL (pg_dumpall only emits SQL; use the
# pg_dump line below for the custom format that pg_restore can read)
pg_dumpall -U postgres | gzip > "$BACKUP_DIR/all-databases-$DATE.sql.gz"

# Or individual database in custom format (smaller, faster restore)
pg_dump -U postgres -Fc myapp > "$BACKUP_DIR/myapp-$DATE.dump"

# Remove old backups
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
find "$BACKUP_DIR" -name "*.dump" -mtime +$RETENTION_DAYS -delete

The --single-transaction flag for MySQL is critical. Without it, mysqldump locks tables for the duration of the dump, which blocks writes to your database. On a busy site, that can mean 30-60 seconds of failed requests. With the flag, mysqldump takes a consistent snapshot using InnoDB's MVCC without blocking anything. (The guarantee only covers InnoDB tables; any MyISAM tables are still locked during the dump.)
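One subtlety in the script above: because the dump is piped through gzip, `$?` reports gzip's exit status, not mysqldump's. Bash's PIPESTATUS array exposes the status of each stage. A minimal demonstration, with `false` standing in for a failing mysqldump:

```shell
#!/usr/bin/env bash
# 'false' stands in for a failing mysqldump; gzip on the right still
# exits 0, so $? alone hides the failure.
false | gzip > /dev/null
echo "exit status: $?"                    # 0 -- looks fine, is not

false | gzip > /dev/null
echo "dump status: ${PIPESTATUS[0]}"      # 1 -- the dump actually failed
```

This is why the success check in the script reads `PIPESTATUS[0]` rather than `$?`.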

File-Level Backups with restic

restic is the tool I recommend for file-level VPS backups. It is encrypted by default, deduplicates data (subsequent backups only transfer changed blocks), compresses everything, and supports S3, Backblaze B2, and SFTP as storage backends. It replaced my tangled mess of rsync scripts and GPG encryption five years ago and I have not looked back.

# Install restic
sudo apt install -y restic

# Initialize a Backblaze B2 repository
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
restic -r b2:your-bucket-name:vps-backup init

# First backup
restic -r b2:your-bucket-name:vps-backup backup \
  /etc \
  /home \
  /var/www \
  /backup/mysql \
  /backup/postgres \
  --exclude-caches \
  --exclude="*.log" \
  --exclude="/var/www/*/node_modules"

# Subsequent backups (only changed data is transferred)
restic -r b2:your-bucket-name:vps-backup backup \
  /etc /home /var/www /backup/mysql /backup/postgres

The first backup uploads everything. Subsequent backups use content-defined chunking to identify and upload only the blocks that changed. A 20GB VPS with 200MB of daily changes transfers about 200MB per backup, not 20GB. On providers like Vultr and DigitalOcean where bandwidth is metered, this efficiency matters.
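You can inspect this behavior directly: restic reports what changed between two snapshots and how much deduplicated data the repository actually holds. Bucket name and snapshot IDs below are placeholders:

```shell
# List snapshot IDs, then compare two of them (IDs are illustrative)
restic -r b2:your-bucket-name:vps-backup snapshots
restic -r b2:your-bucket-name:vps-backup diff 1a2b3c4d 5e6f7a8b

# Total deduplicated bytes actually stored in the repository
restic -r b2:your-bucket-name:vps-backup stats --mode raw-data
```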

restic Retention Policy

# Keep: 7 daily, 4 weekly, 6 monthly, 2 yearly
restic -r b2:your-bucket-name:vps-backup forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --keep-yearly 2 \
  --prune
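Before letting `forget` delete anything for real, a dry run shows which snapshots the policy would keep and which it would remove (same placeholder repository as above):

```shell
# Preview the retention policy without deleting anything
restic -r b2:your-bucket-name:vps-backup forget \
  --keep-daily 7 \
  --keep-weekly 4 \
  --keep-monthly 6 \
  --keep-yearly 2 \
  --dry-run
```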

Off-Site Backup to Cloud Storage

Your off-site backup destination should be on different infrastructure from your VPS provider. If your VPS is on Hetzner, do not store backups on Hetzner Storage Boxes. Same company, same failure domain. Pick a different provider entirely:

| Service | Cost per TB/mo | Egress Cost | restic Support | Best For |
|---|---|---|---|---|
| Backblaze B2 | $5 | $0.01/GB | Yes (native) | Best value, small-medium VPS |
| Wasabi | $6.99 | Free | Yes (S3 compat) | Large data, frequent restores |
| AWS S3 Standard | $23 | $0.09/GB | Yes (native) | AWS ecosystem integration |
| AWS S3 Glacier | $4 | $0.09/GB + retrieval | Yes | Archive, rarely restored |
| Hetzner Storage Box | ~$3.50 | Free | Yes (SFTP) | European data residency |

For most VPS users, Backblaze B2 is the answer. A typical VPS backup of 10-50GB costs $0.05-$0.25/month — a rounding error next to the value of the data it protects. Backblaze's free daily egress allowance covers most restore scenarios.

Automating Everything with Cron

#!/bin/bash
# /usr/local/bin/nightly-backup.sh
# Full backup script: database dump + file backup + off-site sync

set -euo pipefail

LOG="/var/log/backup.log"
exec >> "$LOG" 2>&1
echo "=== Backup started: $(date) ==="

# 1. Database dumps (run whichever applies to your stack; a missing
#    script is skipped, but a failing dump aborts the run via set -e)
[ -x /usr/local/bin/backup-mysql.sh ] && /usr/local/bin/backup-mysql.sh
[ -x /usr/local/bin/backup-postgres.sh ] && /usr/local/bin/backup-postgres.sh

# 2. Off-site backup with restic
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
REPO="b2:your-bucket:vps-backup"

restic -r "$REPO" backup \
  /etc /home /var/www /backup \
  --exclude-caches \
  --exclude="*.log" \
  --exclude="node_modules" \
  --exclude=".git" \
  --tag "nightly"

# 3. Apply retention policy (weekly)
if [ "$(date +%u)" -eq 7 ]; then
  restic -r "$REPO" forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
fi

echo "=== Backup completed: $(date) ==="

# Make executable and schedule
sudo chmod +x /usr/local/bin/nightly-backup.sh

# Run at 3am daily (crontab -e)
0 3 * * * /usr/local/bin/nightly-backup.sh

The set -euo pipefail at the top is important. It makes the script exit on any error instead of silently continuing (-e), treat unset variables as errors (-u), and propagate failures through pipelines (pipefail). Without it, a failed database dump does not prevent the file backup from running, and you might not notice the database backup has been failing for weeks.
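The effect is easy to see in a toy example — the same failing step either aborts the run or is silently skipped. `false` stands in for a failed database dump:

```shell
#!/usr/bin/env bash
# The same failing step, with and without errexit.
bash -c 'false; echo "kept going"'          # failure ignored, run continues
bash -c 'set -e; false; echo "kept going"' \
  || echo "stopped at the failure"          # failure aborts the inner script
```

The first line prints "kept going"; the second never reaches its echo and the fallback reports the failure instead.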

Backing Up Docker Volumes

If you run Docker (see our Docker guide), backing up named volumes requires extra steps because the data lives in Docker's internal directory structure:

#!/bin/bash
# /usr/local/bin/backup-docker-volumes.sh

BACKUP_DIR="/backup/docker-volumes"
DATE=$(date +%Y-%m-%d_%H%M)
mkdir -p "$BACKUP_DIR"

# List all named volumes
for vol in $(docker volume ls -q); do
  echo "Backing up volume: $vol"
  docker run --rm \
    -v "$vol":/source:ro \
    -v "$BACKUP_DIR":/backup \
    alpine tar czf "/backup/${vol}-${DATE}.tar.gz" -C /source .
done

# For database volumes, prefer application-level dumps:
# docker exec postgres pg_dumpall -U postgres | gzip > "$BACKUP_DIR/pg-$DATE.sql.gz"
# docker exec mysql mysqldump --all-databases --single-transaction | gzip > "$BACKUP_DIR/mysql-$DATE.sql.gz"

echo "Docker volume backup complete: $(ls -lh $BACKUP_DIR/*$DATE*)"

For database containers, always prefer application-level dumps (pg_dump, mysqldump) over volume tarballs. A tarball of a running PostgreSQL data directory is not guaranteed to be consistent — you might capture it mid-write. A pg_dump gives you a clean, consistent, portable dump every time.
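Whichever method produces the dump, a cheap follow-up check is to verify the compressed file is at least a readable gzip stream before relying on it. A self-contained sketch using a throwaway file (point it at your real dumps in practice):

```shell
#!/usr/bin/env bash
# gunzip -t reads the whole stream and verifies its checksum without
# extracting anything; a truncated or corrupt dump fails the test.
f=$(mktemp --suffix=.gz)
echo "CREATE TABLE t (id INT);" | gzip > "$f"
if gunzip -t "$f" 2>/dev/null; then
  echo "dump readable"
else
  echo "dump CORRUPT"
fi
rm -f "$f"
```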

Testing Your Restores (Most Important Section)

A backup you have never restored is not a backup. It is a hope. I test restores quarterly, and here is exactly how:

# 1. Spin up a test VPS (hourly billing — costs pennies)
# Vultr: $0.007/hour, Hetzner: $0.007/hour

# 2. Install restic on the test server
sudo apt install -y restic

# 3. Restore from off-site backup
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
restic -r b2:your-bucket:vps-backup restore latest --target /

# 4. Restore the database (dump filenames are datestamped; pick the newest)
latest_dump=$(ls -t /backup/mysql/all-databases-*.sql.gz | head -1)
gunzip < "$latest_dump" | mysql -u root

# 5. Verify
# - Can you access the website?
# - Do database queries return expected data?
# - Are file permissions correct?
# - Do SSL certificates exist?

# 6. Destroy the test VPS
# (delete it from the provider dashboard)

The first time I tested a restore, I discovered that my backup script excluded the Nginx configuration directory. Everything was backed up except the thing that made the web server work. That one test saved me from a real disaster three months later when I actually needed to restore from backup. Test your restores.

Retention Policies

Keeping every backup forever costs money and makes it harder to find the one you need. A sensible retention policy for most VPS use cases:

| Backup Type | Frequency | Retention | Storage Cost (10GB VPS) |
|---|---|---|---|
| Database dumps | Daily | 14 days | ~$0.05/mo (B2) |
| File backups (restic) | Daily | 7 daily, 4 weekly, 6 monthly | ~$0.10/mo (B2, deduplicated) |
| Provider snapshots | Before changes | 2-3 snapshots | Free or included |
| Full system image | Monthly | 3 months | ~$0.15/mo (B2) |

Total off-site backup cost for a typical 10GB VPS: about $0.30-$0.50/month. For a 50GB VPS: about $1.50-$2.50/month. This is the cheapest insurance you will ever buy.

Disaster Recovery Plan

When your VPS dies, you should not be figuring out what to do. You should be following a checklist. Here is the one I keep in a separate document (not on the VPS that just died):

  1. Spin up a new VPS on the same or different provider. Same specs, same OS, same region. (5 minutes)
  2. Run your provisioning script — install Nginx, Docker, PostgreSQL, whatever your stack needs. If you do not have a provisioning script, write one now while your server is still alive. (10-20 minutes)
  3. Restore from off-site backup — restic restore or download and decompress your latest backup. (10-30 minutes depending on data size)
  4. Restore the database from the most recent dump. (5 minutes)
  5. Update DNS to point to the new server's IP address. (5 minutes, then wait for propagation)
  6. Re-issue SSL certificates with Certbot. (2 minutes per domain, see our SSL guide)
  7. Verify everything works. Test login, test transactions, test API endpoints. (10 minutes)

Total recovery time: 45-90 minutes. That is the difference between "our site was down for an hour" and "we lost everything." The provisioning script is the thing most people skip. Document and automate your server setup while it is working, not when it is dead.
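Step 2 is the one to prepare in advance. A minimal provisioning skeleton, assuming a typical Nginx + PostgreSQL + Certbot stack — the package list and config paths are illustrative, adapt them to your own:

```shell
#!/bin/bash
# provision.sh -- hypothetical skeleton; keep it in Git, not only on the VPS
set -euo pipefail

apt update && apt upgrade -y
apt install -y nginx postgresql restic fail2ban certbot python3-certbot-nginx

# Restore hardened configs from the repo this script lives in
# (illustrative paths):
# cp -r ./configs/nginx/* /etc/nginx/
# cp ./configs/sshd_config /etc/ssh/sshd_config

systemctl enable --now nginx postgresql
```

Run it once on a fresh test VPS to confirm it produces a working base system, the same way you test restores.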

Complete Backup Architecture: From $5/mo VPS to Production

Everything above is individual components. Here is how they fit together in a real production environment. I run this exact architecture on every client VPS I manage.

Single VPS Backup Architecture

#!/bin/bash
# /usr/local/bin/full-backup.sh
# Complete production backup script
# Covers: databases, files, Docker volumes, config, and off-site sync

set -euo pipefail

# Configuration
BACKUP_ROOT="/backup"
DATE=$(date +%Y-%m-%d_%H%M)
LOG="/var/log/backup.log"
B2_ACCOUNT_ID="your-account-id"
B2_ACCOUNT_KEY="your-account-key"
REPO="b2:your-bucket:$(hostname)"
SLACK_WEBHOOK="https://hooks.slack.com/services/your/webhook"

exec >> "$LOG" 2>&1
echo "=== Full backup started: $(date) ==="

# Function to send alert on failure
alert_failure() {
  curl -s -X POST -H 'Content-type: application/json' \
    -d "{\"text\":\"BACKUP FAILED on $(hostname): $1\"}" \
    "$SLACK_WEBHOOK"
}

trap 'alert_failure "Script exited with error on line $LINENO"' ERR

# ---- Phase 1: Database Dumps ----
echo "Phase 1: Database dumps"
mkdir -p "$BACKUP_ROOT/db"

# PostgreSQL (if running)
if command -v pg_dumpall &>/dev/null && systemctl is-active --quiet postgresql; then
  pg_dumpall -U postgres | gzip > "$BACKUP_ROOT/db/postgres-$DATE.sql.gz"
  echo "PostgreSQL dump: $(du -h $BACKUP_ROOT/db/postgres-$DATE.sql.gz | cut -f1)"
fi

# MySQL/MariaDB (if running)
if command -v mysqldump &>/dev/null && systemctl is-active --quiet mysql; then
  mysqldump --all-databases --single-transaction --routines --triggers \
    | gzip > "$BACKUP_ROOT/db/mysql-$DATE.sql.gz"
  echo "MySQL dump: $(du -h $BACKUP_ROOT/db/mysql-$DATE.sql.gz | cut -f1)"
fi

# Docker database containers
if command -v docker &>/dev/null; then
  for container in $(docker ps --format '{{.Names}}' | grep -E 'postgres|mysql|mariadb|mongo'); do
    echo "Dumping Docker database: $container"
    if echo "$container" | grep -qi postgres; then
      docker exec "$container" pg_dumpall -U postgres 2>/dev/null | \
        gzip > "$BACKUP_ROOT/db/docker-${container}-$DATE.sql.gz"
    elif echo "$container" | grep -qi 'mysql\|mariadb'; then
      docker exec "$container" mysqldump --all-databases --single-transaction 2>/dev/null | \
        gzip > "$BACKUP_ROOT/db/docker-${container}-$DATE.sql.gz"
    elif echo "$container" | grep -qi mongo; then
      docker exec "$container" mongodump --archive 2>/dev/null | \
        gzip > "$BACKUP_ROOT/db/docker-${container}-$DATE.archive.gz"
    fi
  done
fi

# ---- Phase 2: Docker Volume Backup ----
echo "Phase 2: Docker volumes"
mkdir -p "$BACKUP_ROOT/docker-volumes"

if command -v docker &>/dev/null; then
  for vol in $(docker volume ls -q | grep -v '^[a-f0-9]\{64\}$'); do
    docker run --rm -v "$vol":/source:ro -v "$BACKUP_ROOT/docker-volumes":/backup \
      alpine tar czf "/backup/${vol}-${DATE}.tar.gz" -C /source . 2>/dev/null || true
  done
fi

# ---- Phase 3: Off-site Sync with restic ----
echo "Phase 3: Off-site backup via restic"
export B2_ACCOUNT_ID B2_ACCOUNT_KEY

# Check if repo exists, initialize if not
restic -r "$REPO" snapshots &>/dev/null || restic -r "$REPO" init

restic -r "$REPO" backup \
  /etc \
  /home \
  /var/www \
  /root/.ssh \
  "$BACKUP_ROOT/db" \
  "$BACKUP_ROOT/docker-volumes" \
  --exclude-caches \
  --exclude="*.log" \
  --exclude="node_modules" \
  --exclude=".git" \
  --exclude="*.tmp" \
  --tag "nightly" \
  --tag "$(date +%A)"

# ---- Phase 4: Retention (run on Sundays) ----
if [ "$(date +%u)" -eq 7 ]; then
  echo "Phase 4: Applying retention policy"
  restic -r "$REPO" forget \
    --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --keep-yearly 1 --prune
fi

# ---- Phase 5: Local cleanup ----
echo "Phase 5: Local cleanup"
find "$BACKUP_ROOT/db" -name "*.sql.gz" -mtime +7 -delete
find "$BACKUP_ROOT/db" -name "*.archive.gz" -mtime +7 -delete
find "$BACKUP_ROOT/docker-volumes" -name "*.tar.gz" -mtime +3 -delete

# ---- Phase 6: Verification ----
echo "Phase 6: Verification"
LATEST=$(restic -r "$REPO" snapshots --latest 1 --json | python3 -c \
  "import sys,json; s=json.load(sys.stdin); print(s[0]['short_id'] if s else 'NONE')")
echo "Latest snapshot: $LATEST"

BACKUP_SIZE=$(du -sh "$BACKUP_ROOT" | cut -f1)
echo "Local backup size: $BACKUP_SIZE"
echo "=== Full backup completed: $(date) ==="

# Success notification (optional)
curl -s -X POST -H 'Content-type: application/json' \
  -d "{\"text\":\"Backup OK on $(hostname): snapshot $LATEST, local size $BACKUP_SIZE\"}" \
  "$SLACK_WEBHOOK"

# Install and schedule
sudo chmod +x /usr/local/bin/full-backup.sh

# Run at 3am daily, send output to log
# crontab -e
0 3 * * * /usr/local/bin/full-backup.sh 2>&1

Cost Breakdown by Provider

Here is what the complete backup architecture costs on different VPS providers, including the off-site storage:

| VPS Provider | VPS Plan | VPS Cost | Provider Backup | Off-site (B2, 20GB) | Total Monthly |
|---|---|---|---|---|---|
| Hetzner | CX22 (4GB) | $4.59 | $0.92 (20%) | $0.10 | $5.61 |
| Vultr | 1GB plan | $5.00 | $1.00 (20%) | $0.10 | $6.10 |
| Contabo | VPS S (8GB) | $6.99 | $0 (1 snap free) | $0.10 | $7.09 |
| DigitalOcean | Basic 1GB | $6.00 | $1.20 (20%) | $0.10 | $7.30 |
| RackNerd | 2GB plan | $3.49 | N/A | $0.10 | $3.59 |
| InterServer | Standard | $6.00 | N/A | $0.10 | $6.10 |

The off-site backup costs $0.10/month for 20GB on Backblaze B2. That is the cheapest insurance in all of IT. Even on budget providers like RackNerd where there is no provider backup at all, the off-site restic backup gives you full disaster recovery for a dime a month.

Server-to-Server Backup with rsync

If you have two VPS instances (common when you keep a staging server or a DR standby), rsync provides fast incremental file synchronization. This is simpler than restic when your backup destination is another server rather than cloud storage:

# One-time setup: copy SSH key to backup server
ssh-keygen -t ed25519 -f ~/.ssh/backup_key -N ""
ssh-copy-id -i ~/.ssh/backup_key backup-user@backup-server-ip

# Basic rsync backup (no trailing slashes on the sources, so each
# directory arrives as its own subdirectory instead of having its
# contents merged into one)
rsync -avz --delete \
  -e "ssh -i ~/.ssh/backup_key" \
  /var/www \
  /etc/nginx \
  /backup/db \
  backup-user@backup-server-ip:/backup/primary-server/

# rsync with bandwidth limiting (useful on metered providers)
# Limit to 10MB/s to avoid eating bandwidth allocation
rsync -avz --delete --bwlimit=10000 \
  -e "ssh -i ~/.ssh/backup_key" \
  /var/www/ \
  backup-user@backup-server-ip:/backup/primary-server/

The --delete flag keeps the backup mirror in sync by removing files on the destination that were deleted on the source. Without it, your backup grows indefinitely with deleted files. The --bwlimit flag is important on providers like Vultr ($5/mo, 2TB bandwidth) and DigitalOcean ($6/mo, 1TB bandwidth) where bandwidth is metered — you do not want your nightly backup eating a significant chunk of your monthly allowance.
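--delete is worth rehearsing on throwaway directories before pointing it at real data. In this local sketch, a stale file that exists only on the destination disappears after one sync (assumes rsync is installed):

```shell
#!/usr/bin/env bash
# Mirror src into dst: keep.txt is copied over, and stale.txt --
# present only on the destination -- is removed by --delete.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/keep.txt" "$dst/stale.txt"
rsync -a --delete "$src/" "$dst/"
ls "$dst"                       # keep.txt
rm -rf "$src" "$dst"
```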

Backup Server Options

A backup server does not need much compute, but it needs storage. Good options:

  • BuyVM $3.50/mo + $1.25/mo for 256GB block storage — total $4.75/mo for a dedicated backup destination with unmetered bandwidth. Best value for rsync-based backups.
  • Contabo VPS S at $6.99/mo with 200GB SSD — the included 200GB storage is enough for most backup needs, plus you can run other services on it.
  • Hetzner Storage Box from $3.81/mo for 1TB — dedicated backup storage with SFTP, rsync, BorgBackup, and restic support. European-only but excellent for off-continent disaster recovery.

Monitoring Your Backups

A backup system that fails silently is worse than no backup system, because it gives you false confidence. Here is how I monitor backup health:

#!/bin/bash
# /usr/local/bin/check-backup-health.sh
# Run daily after backup window (e.g., 6am if backups run at 3am)

WEBHOOK_URL="https://hooks.slack.com/services/your/webhook"
BACKUP_LOG="/var/log/backup.log"
MAX_AGE_HOURS=28  # Alert if last backup is older than 28 hours

# Check 1: Did the backup script run recently?
if [ -f "$BACKUP_LOG" ]; then
  last_modified=$(stat -c %Y "$BACKUP_LOG")
  now=$(date +%s)
  age_hours=$(( (now - last_modified) / 3600 ))

  if [ "$age_hours" -gt "$MAX_AGE_HOURS" ]; then
    curl -s -X POST -H 'Content-type: application/json' \
      -d "{\"text\":\"BACKUP ALERT: No backup log update in ${age_hours}h on $(hostname)\"}" \
      "$WEBHOOK_URL"
  fi
fi

# Check 2: Did the most recent runs log any errors?
# (Scan only the tail -- the log is append-only, so grepping the whole
#  file would keep alerting on old failures forever.)
if tail -n 200 "$BACKUP_LOG" 2>/dev/null | grep -q "FAILED\|ERROR\|error"; then
  errors=$(tail -n 200 "$BACKUP_LOG" | grep -c "FAILED\|ERROR\|error")
  curl -s -X POST -H 'Content-type: application/json' \
    -d "{\"text\":\"BACKUP WARNING: $errors error(s) in recent backup log on $(hostname)\"}" \
    "$WEBHOOK_URL"
fi

# Check 3: Is local backup directory growing?
BACKUP_SIZE=$(du -sm /backup 2>/dev/null | cut -f1)
if [ "${BACKUP_SIZE:-0}" -lt 10 ]; then
  curl -s -X POST -H 'Content-type: application/json' \
    -d "{\"text\":\"BACKUP ALERT: Backup directory is only ${BACKUP_SIZE}MB on $(hostname) — possible failure\"}" \
    "$WEBHOOK_URL"
fi

# Check 4: Can we still access the off-site repository?
export B2_ACCOUNT_ID="your-account-id"
export B2_ACCOUNT_KEY="your-account-key"
if ! restic -r "b2:your-bucket:$(hostname)" snapshots --latest 1 &>/dev/null; then
  curl -s -X POST -H 'Content-type: application/json' \
    -d "{\"text\":\"BACKUP ALERT: Cannot access off-site repository from $(hostname)!\"}" \
    "$WEBHOOK_URL"
fi

# Schedule monitoring check (crontab -e)
0 6 * * * /usr/local/bin/check-backup-health.sh

If you use Uptime Kuma for server monitoring (and you should), you can also set up a push monitor. Have your backup script ping a URL at the end of a successful backup. If Uptime Kuma does not receive the ping within the expected window, it alerts you. This is the most reliable way to catch silent backup failures because it detects both "backup failed" and "backup did not run at all."
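The push-monitor pattern is a one-line addition at the end of the backup script. The URL and token below are placeholders — copy the real push URL that Uptime Kuma generates for your monitor:

```shell
# Final line of a successful backup run: ping the push monitor.
# (Hypothetical URL; Uptime Kuma generates the real one per monitor.)
curl -fsS --retry 3 \
  "https://status.example.com/api/push/abc123?status=up&msg=backup-ok" \
  > /dev/null
```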

Choose a VPS Provider with Good Backup Support

Some providers make backups trivially easy. Others leave you on your own. Here are our top picks for servers where data matters:

Hetzner (Snapshots + Backups) → Vultr ($100 Credit) → All VPS Reviews

Frequently Asked Questions

How often should I back up my VPS?

Depends on how much data you can afford to lose. Daily database dumps + weekly full-system snapshots work for most websites. E-commerce sites should back up databases every 6-12 hours. Critical systems need real-time replication.

What is the difference between a VPS snapshot and a backup?

A snapshot is a point-in-time disk copy stored on the same infrastructure as your server. Fast to create and restore, but vulnerable to provider-level failures. A backup is stored separately (S3, Backblaze B2). Snapshots are for quick rollbacks; backups are for disaster recovery. You need both.

Which VPS providers include free backups?

Free: Hostinger (weekly), Hostwinds (nightly), InMotion (managed). Paid add-on (20% of plan): Vultr, Hetzner, DigitalOcean, Linode. No backup at all: RackNerd, InterServer. Always supplement with off-site.

How much does off-site VPS backup storage cost?

Backblaze B2: $5/TB/month. Wasabi: $6.99/TB (no egress fees). AWS S3: $23/TB. For a 10-50GB VPS, off-site backup costs $0.05-$0.35/month on B2. The cost is trivial compared to the value of your data.

How do I back up a database on my VPS?

MySQL: mysqldump --all-databases --single-transaction | gzip > backup.sql.gz. PostgreSQL: pg_dumpall | gzip > backup.sql.gz. Always use --single-transaction for MySQL. Compress output (80-90% reduction). Automate with cron.

Should I use rsync or restic for VPS backups?

rsync for simple server-to-server file sync. restic for encrypted, deduplicated backups to cloud storage (S3, B2). restic handles encryption, compression, deduplication, and retention automatically. Use rsync for local replication, restic for off-site backup.

How do I test my VPS backup restore process?

Spin up a temporary VPS (hourly billing costs pennies on Vultr or Hetzner), restore your backup, verify the application works, check database integrity, then destroy the test server. Do this quarterly. A backup you have never tested is not a backup.

Alex Chen — Senior Systems Engineer

I lost client data exactly once in my career. The backup system in this guide is the direct result of that failure. It has protected every VPS I have managed since, across over 40 production servers. Learn more about our testing methodology →