Migrate from AWS to Independent VPS — I Cut My Cloud Bill by 73%

I spent January 2026 doing something I should have done two years ago: I pulled every workload off AWS and moved it to independent VPS providers. The trigger was not some grand strategy session. It was a $478 invoice for a Node.js API that serves about 3,000 requests per hour. Not 3,000 per second. Per hour. A modest B2B SaaS product with a modest user base and an immodest AWS bill.

The breakdown was infuriating once I actually read it line by line. Two t3.medium instances ($67), an RDS db.t3.small ($52), an ElastiCache t3.micro ($25), 1.8TB of outbound bandwidth ($153 — the real killer), S3 storage and requests ($18), CloudWatch logs ($12), a NAT gateway I had forgotten about ($32), and various other line items that added up to the kind of bill that makes you question your career choices. I had been paying this for 14 months.

Three weeks after starting the migration, the same stack was running on two Vultr instances totaling $60/month. Same code, same database, same Redis cache, same performance. Fourteen months of overpaying — roughly $5,850 — that I will never get back. So let me help you avoid making the same mistake for as long as I did.

The Core Math: AWS charges $0.09/GB for outbound bandwidth after the first 100GB. An app pushing 1TB/month pays ~$81 just in bandwidth. Vultr includes 4TB with their $20/mo plan. Hetzner includes 20TB with their $4.59/mo plan. The bandwidth tax alone justifies the migration for most workloads.
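To make that math concrete, here is a small sketch of the egress calculation as a shell function, using the $0.09/GB rate and 100GB free tier quoted above (the function name is my own; AWS bills in finer detail, so treat this as an estimate):

```shell
# Egress cost estimator for AWS us-east-1: first 100GB free, then $0.09/GB.
# Integer math in cents to avoid floating point.
aws_egress_cost() {
  gb=$1
  billable=$(( gb > 100 ? gb - 100 : 0 ))
  cents=$(( billable * 9 ))                 # $0.09/GB = 9 cents/GB
  printf '$%d.%02d\n' $(( cents / 100 )) $(( cents % 100 ))
}

aws_egress_cost 1000   # 1TB/month   → $81.00
aws_egress_cost 1800   # 1.8TB/month → $153.00, matching the invoice line from earlier
```

Run it against your own CloudWatch bandwidth numbers before you decide anything else — for most workloads this single line item settles the argument.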

1. The Anatomy of an AWS Bill (Why It Is Always More Than You Think)

AWS pricing is not complicated by accident. Every line item that surprises you on your bill is a line item you did not budget for, and therefore did not comparison-shop. The pricing page for EC2 alone has more words than most novels. Here is what a real small-to-medium workload actually costs when you account for everything:

# Real AWS monthly bill — Node.js API + PostgreSQL + Redis + S3
# (us-east-1, on-demand pricing, March 2026)

EC2 t3.medium (2vCPU/4GB) x2:        $66.82
  → $0.0464/hr × 730 hrs × 2 instances

EBS gp3 storage (50GB × 2):            $8.00
  → $0.08/GB/mo × 100GB total

RDS db.t3.small (PostgreSQL):         $52.56
  → $0.036/hr × 730 hrs × 2 (multi-AZ)

RDS storage (50GB gp2):                $5.75
  → $0.115/GB/mo

ElastiCache t3.micro (Redis):         $24.82
  → $0.034/hr × 730 hrs

S3 storage (200GB):                    $4.60
  → $0.023/GB/mo

S3 requests (5M GET, 500K PUT):        $2.50

Data transfer OUT (1.8TB):          $153.00
  → 100GB free, then $0.09/GB × 1,700GB

NAT Gateway (processing 500GB):       $33.75
  → $0.045/hr + $0.045/GB processed

CloudWatch Logs (10GB ingested):       $5.00

Application Load Balancer:            $22.27
  → $0.0225/hr + $0.008/LCU-hour

Route 53 hosted zone:                  $0.50

TOTAL:                               ~$379/mo
                                    = $4,548/year

That is not a contrived example. I pulled those numbers from AWS’s own pricing calculator using modest, realistic usage. Notice that the compute (EC2) is only $67 of a $379 bill. The bandwidth, managed database, and NAT gateway make up more than half. This is the pattern I see repeatedly: people sign up for EC2 thinking they are paying $34/month, and then wonder why their bill is ten times that.

2. Real Cost Comparison: AWS vs Independent VPS Providers

Here is the same workload priced on three independent VPS providers. I am using comparable specs: 2–4 vCPU, 4–8GB RAM, SSD storage, and enough bandwidth for 1.8TB outbound per month.

Component                            AWS        Vultr       Hetzner     Kamatera
App server (2–4 vCPU, 4–8GB) + disk  $74.82     $40.00      $8.49       $18.00
Database (PostgreSQL)                $58.31     $0*         $0*         $0*
Redis cache                          $24.82     $0*         $0*         $0*
1.8TB bandwidth                      $153.00    $0 (incl.)  $0 (incl.)  $0 (incl.)
Object storage (200GB)               $7.10      $6.00       $5.00       $5.00
Load balancer / NAT / logs / misc    $61.52     $10.00      $5.00       $0
Monthly Total                        ~$379/mo   $56/mo      $18.49/mo   $23/mo
Annual Total                         ~$4,548    $672        $222        $276

* Self-hosted on the same VPS. For workloads that need a dedicated database server, add a second VPS ($5–$20/mo depending on provider).

Hetzner’s CX32 plan gives you 4 vCPU, 8GB RAM, 80GB SSD, and 20TB of bandwidth for $8.49/month. That alone replaces your EC2 instance, and you still have 18TB of bandwidth headroom. The Vultr equivalent at $40/mo (4 vCPU, 8GB, 160GB SSD, 5TB bandwidth) is pricier but comes with 9 US datacenter locations and DDoS protection. Kamatera at $18/mo for 4 vCPU / 4GB RAM with 5TB bandwidth splits the difference. Pick based on your location and feature needs, not brand familiarity.

3. What You Actually Lose Leaving AWS (Honest Assessment)

I am not going to pretend this is a free lunch. There are real trade-offs, and anyone who tells you otherwise is selling you something. Here is my honest list, six weeks post-migration:

Things you genuinely lose:

  • Managed database failover. RDS Multi-AZ gives you automatic failover in about 60 seconds. Self-hosted PostgreSQL requires you to configure streaming replication and a failover mechanism (Patroni, repmgr). It works, but you have to set it up and test it.
  • AWS-native serverless services. Lambda, DynamoDB, Step Functions, AppSync — these have no equivalent on a VPS. If your architecture is deeply integrated with three or more of these, your migration is an application rewrite, not a server move.
  • Compliance certifications. AWS has SOC 2 Type II, HIPAA BAA, FedRAMP, PCI DSS, and dozens more. Vultr and DigitalOcean have SOC 2 and some ISO certifications but fewer. If your compliance officer requires specific AWS certifications, this matters.
  • Automatic scaling. EC2 Auto Scaling Groups spin up instances when demand spikes. On a VPS you either over-provision or handle scaling manually. For most apps under 10,000 concurrent users, a properly sized single VPS handles it. But the safety net is gone.
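Some of what RDS bundles — automated backups in particular — is cheap to recreate. As a sketch (the paths, database name, schedule, and retention are my assumptions; adjust them for your setup), a nightly pg_dump on cron covers the backup half:

```shell
# Hypothetical nightly backup script — stands in for RDS automated backups.
cat > /usr/local/bin/pg-backup.sh << 'EOF'
#!/bin/sh
set -eu
BACKUP_DIR=/var/backups/postgres
mkdir -p "$BACKUP_DIR"
# Custom-format dump (compressed), named by date
sudo -u postgres pg_dump --format=custom myappdb \
  > "$BACKUP_DIR/myappdb-$(date +%F).dump"
# Keep two weeks of dumps
find "$BACKUP_DIR" -name '*.dump' -mtime +14 -delete
EOF
chmod +x /usr/local/bin/pg-backup.sh

# Register it at 03:15 nightly (append the line by hand if crontab is unavailable)
( crontab -l 2>/dev/null; echo "15 3 * * * /usr/local/bin/pg-backup.sh" ) | crontab - \
  || echo "add the cron line manually"
```

Ship the dumps off-box too (rclone to object storage works well) — a backup on the same disk as the database is only half a backup. The failover half genuinely takes more work, as the bullet above says.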

Things people think they lose but actually do not:

  • Reliability. Vultr has a 99.99% SLA. Hetzner publishes 99.9%. Both are comparable to EC2’s 99.99%. I have had zero unplanned downtime on Vultr in six weeks. I had two incidents on AWS in 14 months. Anecdotal, but it supports the point: independent VPS providers are not less reliable.
  • S3-compatible storage. The S3 API is an industry standard. Vultr Object Storage, Backblaze B2, and Cloudflare R2 all speak it. Your boto3 or aws-sdk code changes three lines (endpoint, key, secret) and everything works.
  • CDN. Cloudflare’s free tier is unlimited bandwidth and covers most CDN needs. You do not need CloudFront.
  • Monitoring. Prometheus + Grafana on a VPS is more powerful than CloudWatch and costs $0 in licensing.

4. Pre-Migration Checklist

I learned the hard way that jumping straight into rsync commands without a plan leads to a painful week of "wait, I forgot about that service." Run through this checklist before touching a single server:

Pre-Migration Checklist:
  • Inventory all AWS services in use (not just EC2 — check IAM, SES, SQS, CloudWatch, etc.)
  • Document current database size, connection strings, and backup schedule
  • Calculate actual monthly bandwidth usage from CloudWatch or billing console
  • List all environment variables and secrets (AWS Secrets Manager, Parameter Store)
  • Identify DNS provider and current TTL settings
  • Document SSL certificate source (ACM? Let’s Encrypt? Manual?)
  • Ensure application code is in Git (if it is only on EC2, fix that first)
  • Take a full RDS snapshot and verify you can export it
  • Choose target VPS provider and plan (see comparison above)
  • Schedule a maintenance window for the DNS cutover

5. Step 1: Audit Your AWS Architecture

Before you leave, you need to know exactly what you are leaving. AWS has a way of accumulating services you forgot about — a Lambda function from a hackathon, a test RDS instance that is still running, an unused Elastic IP that is quietly costing $3.65/month. Run these commands to get the full picture:

# List all running EC2 instances with name, type, and IP:
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].[InstanceId,InstanceType,PublicIpAddress,Tags[?Key==`Name`].Value|[0]]' \
  --output table

# List all RDS instances:
aws rds describe-db-instances \
  --query 'DBInstances[].[DBInstanceIdentifier,DBInstanceClass,Engine,AllocatedStorage,Endpoint.Address]' \
  --output table

# List S3 buckets with approximate sizes:
for bucket in $(aws s3api list-buckets --query 'Buckets[].Name' --output text); do
  size=$(aws s3 ls s3://$bucket --recursive --summarize --human-readable 2>/dev/null | grep "Total Size" | awk '{print $3, $4}')
  echo "$bucket: $size"
done

# List ElastiCache clusters:
aws elasticache describe-cache-clusters \
  --query 'CacheClusters[].[CacheClusterId,CacheNodeType,Engine]' \
  --output table

# Check for unused Elastic IPs (they cost money when unattached):
aws ec2 describe-addresses \
  --query 'Addresses[?AssociationId==null].[PublicIp,AllocationId]' \
  --output table

# Get last 3 months of cost by service:
aws ce get-cost-and-usage \
  --time-period Start=2025-12-01,End=2026-03-01 \
  --granularity MONTHLY \
  --metrics "BlendedCost" \
  --group-by Type=DIMENSION,Key=SERVICE \
  --output table

Write down every service. For each one, decide: migrate to self-hosted on VPS, replace with a third-party service, or eliminate entirely. In my case, I found two Lambda functions I had forgotten about, an ElastiCache cluster serving a feature we deprecated in 2025, and three unused Elastic IPs. Deleting those alone saved $45/month before the migration even started.
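To turn that Cost Explorer output into a ranked hit list, a small jq pipeline helps. This is a sketch assuming jq is installed; the JSON shape is what `get-cost-and-usage` returns, and `rank_spend` is my own helper name:

```shell
# Rank services by spend, highest first, from get-cost-and-usage JSON on stdin.
rank_spend() {
  jq -r '.ResultsByTime[].Groups[]
         | [.Keys[0], (.Metrics.BlendedCost.Amount | tonumber)]
         | @tsv' \
  | sort -t "$(printf '\t')" -k2 -rn
}

# Usage: aws ce get-cost-and-usage ... --output json | rank_spend
# A sample with the same response shape, so you can see what it does:
echo '{"ResultsByTime":[{"Groups":[
  {"Keys":["Amazon Elastic Compute Cloud - Compute"],"Metrics":{"BlendedCost":{"Amount":"66.82"}}},
  {"Keys":["AWS Data Transfer"],"Metrics":{"BlendedCost":{"Amount":"153.00"}}}]}]}' | rank_spend
# Data transfer lands on top — on small workloads it usually does.
```

Start your migrate/replace/eliminate decisions from the top of this list; the first three rows usually account for most of the bill.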

6. Step 2: Choose Your Target Provider

I evaluated five providers for this migration. Here is how they compared for my specific workload (Node.js API, PostgreSQL, Redis, ~1.8TB monthly bandwidth, US East preferred):

  • Vultr (where I landed): 4 vCPU / 8GB plan at $40/mo. 9 US datacenter locations including New Jersey and Atlanta. 5TB bandwidth included. DDoS protection included. Snapshot and backup support. API and CLI (vultr-cli) for automation. This was my pick for the primary workload because I needed a US East location and S3-compatible object storage.
  • Hetzner: CX32 at $8.49/mo for 4 vCPU / 8GB. The best price-to-performance ratio by far. US locations in Ashburn, VA and Hillsboro, OR. I use Hetzner for staging environments and non-latency-critical workloads. 20TB bandwidth included.
  • DigitalOcean: 4GB Droplet at $24/mo. The best documentation and managed database option ($15/mo for PostgreSQL). Only 2 US regions (NYC, SFO). If you want a managed database without going full AWS, DO is the middle ground.
  • Kamatera: Fully custom configs starting at $4/mo. 3 US locations (New York, Dallas, Santa Clara). Good if your workload needs unusual CPU/RAM ratios. $100 free trial credit for 30 days.
  • Contabo: The value king — 4 vCPU / 8GB / 200GB SSD for $6.99/mo with 32TB bandwidth. But no hourly billing, slower support, and lower benchmark scores. Good for non-critical or development workloads.

7. Step 3: Provision and Harden the New VPS

AWS wraps you in a security blanket of Security Groups, NACLs, and IAM policies. On a VPS, you are responsible for your own security from minute one. Here is the provisioning and hardening script I run on every new server:

# === Initial server setup (run as root) ===

# Update system
apt update && apt upgrade -y

# Create a non-root user with sudo
adduser deploy
usermod -aG sudo deploy

# Copy your SSH key to the new user
mkdir -p /home/deploy/.ssh
cp /root/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys

# Disable root login and password auth
sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd

# Configure UFW firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp comment 'SSH'
ufw allow 80/tcp comment 'HTTP'
ufw allow 443/tcp comment 'HTTPS'
ufw --force enable

# Install fail2ban
apt install -y fail2ban
systemctl enable fail2ban
systemctl start fail2ban

# Enable automatic security updates
apt install -y unattended-upgrades
dpkg-reconfigure --priority=low unattended-upgrades
# === Install your application stack ===
# Adjust based on your needs (this is my Node.js + PostgreSQL + Redis + Nginx stack)

# Install Node.js 20 LTS
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs

# Install PostgreSQL 16
# (apt-key is deprecated on current Ubuntu/Debian — use a signed-by keyring)
install -d /usr/share/postgresql-common/pgdg
curl -fsSL -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc \
  https://www.postgresql.org/media/keys/ACCC4CF8.asc
sh -c 'echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
apt update
apt install -y postgresql-16 postgresql-client-16

# Install Redis
apt install -y redis-server
sed -i 's/^supervised no/supervised systemd/' /etc/redis/redis.conf
systemctl restart redis

# Install Nginx
apt install -y nginx

# Install Certbot for Let's Encrypt SSL
apt install -y certbot python3-certbot-nginx

This entire process takes about 15 minutes. Compare that to navigating the AWS console to set up a VPC, subnets, security groups, an internet gateway, route tables, an ALB, target groups, and launch templates. I am not exaggerating when I say the VPS setup is faster. For a deeper dive, see our VPS security hardening guide.

8. Step 4: Migrate Application Code

If your code is in Git (and it should be), this is the easiest step of the entire migration. If your code only exists on the EC2 instance, the first thing you do is commit it to a repository. Do not migrate code by copying it from one server to another without version control — that is how things get lost.

# On the NEW VPS: Clone your repository
git clone git@github.com:yourorg/yourapp.git /var/www/yourapp
cd /var/www/yourapp

# Install dependencies
npm ci --omit=dev   # (use --production on npm 8 and older)
# or: pip install -r requirements.txt
# or: composer install --no-dev --optimize-autoloader

# Set permissions
chown -R www-data:www-data /var/www/yourapp

# If your code is NOT in Git — rsync from EC2 first:
rsync -avzP -e "ssh -i ~/.ssh/aws-key.pem" \
  ubuntu@EC2_PUBLIC_IP:/var/www/yourapp/ \
  /var/www/yourapp/

# Don't forget to grab Nginx/Apache configs:
rsync -avzP -e "ssh -i ~/.ssh/aws-key.pem" \
  ubuntu@EC2_PUBLIC_IP:/etc/nginx/sites-available/ \
  /etc/nginx/sites-available/

# And cron jobs:
ssh -i ~/.ssh/aws-key.pem ubuntu@EC2_PUBLIC_IP "crontab -l" > /tmp/ec2-crontab.txt
cat /tmp/ec2-crontab.txt  # Review before importing
crontab /tmp/ec2-crontab.txt
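One thing the steps above do not carry over is the process manager. On EC2 my app ran under systemd, so I recreate the unit on the VPS. This is a sketch with assumed names (yourapp, server.js, the www-data user) — match it to your actual entrypoint:

```shell
# Hypothetical systemd unit — keeps the app running across crashes and reboots.
mkdir -p /etc/systemd/system
cat > /etc/systemd/system/yourapp.service << 'EOF'
[Unit]
Description=yourapp Node.js API
After=network.target postgresql.service redis-server.service

[Service]
User=www-data
WorkingDirectory=/var/www/yourapp
EnvironmentFile=/var/www/yourapp/.env
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

# Then, on the server:
#   systemctl daemon-reload
#   systemctl enable --now yourapp
```

If you were using pm2 on EC2, installing pm2 on the VPS works just as well — the point is that something must restart the process, because nothing will do it for you by default.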

9. Step 5: Migrate Databases (RDS to Self-Hosted)

This is the step that makes people most nervous, and honestly, it is the step that requires the most care. A botched database migration means data loss. But the actual process is straightforward if you follow it carefully. I have done this 30+ times and the pg_dump/pg_restore path has never failed me.

# === PostgreSQL Migration (RDS → Self-Hosted) ===

# Step 1: On your local machine or a jump box, dump the RDS database:
pg_dump \
  -h your-rds-instance.abc123.us-east-1.rds.amazonaws.com \
  -U dbadmin \
  -d myappdb \
  --no-owner \
  --no-acl \
  --format=custom \
  --compress=9 \
  --file=/tmp/myappdb.dump

# The --no-owner and --no-acl flags are important:
# RDS uses different user names than your VPS will.
# We'll set ownership on the new server.

# Step 2: Transfer dump to new VPS:
scp /tmp/myappdb.dump deploy@NEW_VPS_IP:/tmp/

# Step 3: On the NEW VPS, create the database and user:
sudo -u postgres psql << 'EOSQL'
CREATE USER appuser WITH PASSWORD 'generate-a-strong-password-here';
CREATE DATABASE myappdb OWNER appuser;
GRANT ALL PRIVILEGES ON DATABASE myappdb TO appuser;
\q
EOSQL

# Step 4: Restore the dump (as the postgres OS user, so peer auth works):
sudo -u postgres pg_restore -d myappdb --no-owner /tmp/myappdb.dump

# Step 5: Grant permissions on all existing tables to appuser:
sudo -u postgres psql -d myappdb << 'EOSQL'
GRANT ALL ON ALL TABLES IN SCHEMA public TO appuser;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO appuser;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO appuser;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON TABLES TO appuser;
ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT ALL ON SEQUENCES TO appuser;
\q
EOSQL

# Step 6: Verify row counts match:
sudo -u postgres psql -d myappdb -c \
  "SELECT schemaname, relname, n_live_tup FROM pg_stat_user_tables ORDER BY n_live_tup DESC LIMIT 20;"
# === MySQL Migration (RDS → Self-Hosted) ===

# Step 1: Dump from RDS:
mysqldump \
  -h your-rds-instance.abc123.us-east-1.rds.amazonaws.com \
  -u admin -p \
  --all-databases \
  --single-transaction \
  --routines \
  --triggers \
  --events \
  --set-gtid-purged=OFF \
  > /tmp/rds-full-dump.sql

# Step 2: Transfer to new VPS:
scp /tmp/rds-full-dump.sql deploy@NEW_VPS_IP:/tmp/

# Step 3: Install MySQL on VPS and restore:
apt install -y mysql-server
mysql -u root < /tmp/rds-full-dump.sql

# Step 4: Verify:
# (ROWS is a reserved word in MySQL 8, so alias the sum as row_count)
mysql -u root -e "SELECT table_schema, COUNT(*) AS table_count, \
  SUM(table_rows) AS row_count FROM information_schema.tables \
  WHERE table_schema NOT IN ('mysql','information_schema','performance_schema','sys') \
  GROUP BY table_schema;"

One thing I want to emphasize: do a final sync. The dump you just restored is a point-in-time snapshot. Between when you took the dump and when you cut over DNS, users are still writing to the old RDS database. You need to either put the application in maintenance mode during the cutover, or do a second incremental sync right before the DNS change. I cover the exact timing in the DNS cutover section.
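That maintenance mode is worth wiring up before cutover day, not during it. A minimal sketch at the Nginx level — the flag path matches the cutover commands later in this guide, while the snippet filename is my own convention:

```shell
# Serve 503s whenever the maintenance flag file exists.
mkdir -p /etc/nginx/snippets
cat > /etc/nginx/snippets/maintenance.conf << 'EOF'
# Include this inside your server {} block.
# "touch /var/www/yourapp/maintenance.flag" turns maintenance mode on;
# removing the file turns it off. No reload needed.
if (-f /var/www/yourapp/maintenance.flag) {
    return 503;
}
EOF

# In your site config, add:  include snippets/maintenance.conf;
# then verify and reload:    nginx -t && systemctl reload nginx
```

Checking for a file on every request costs almost nothing, and it means the cutover toggle is a single touch/rm rather than an emergency config edit.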

10. Step 6: Migrate S3 Assets to S3-Compatible Storage

This was the part I expected to be painful, and it was actually the easiest. The S3 API has become an industry standard — Vultr Object Storage, Backblaze B2, Cloudflare R2, and MinIO all speak it natively. Your application code barely changes.

# Install rclone for the actual file transfer:
curl https://rclone.org/install.sh | bash

# Configure rclone with both providers:
# Run 'rclone config' interactively, or create the config file:
cat > ~/.config/rclone/rclone.conf << 'EOF'
[aws-s3]
type = s3
provider = AWS
access_key_id = YOUR_AWS_ACCESS_KEY
secret_access_key = YOUR_AWS_SECRET_KEY
region = us-east-1

[vultr-objstore]
type = s3
provider = Other
access_key_id = YOUR_VULTR_OBJ_ACCESS_KEY
secret_access_key = YOUR_VULTR_OBJ_SECRET_KEY
endpoint = ewr1.vultrobjects.com
acl = private
EOF

# Sync all objects (this runs in parallel, handles retries):
rclone sync aws-s3:your-bucket-name vultr-objstore:your-new-bucket \
  --progress \
  --transfers=16 \
  --checkers=8 \
  --fast-list

# Verify object counts match:
echo "AWS objects: $(rclone size aws-s3:your-bucket-name | grep 'objects')"
echo "Vultr objects: $(rclone size vultr-objstore:your-new-bucket | grep 'objects')"
# Update your application code — the change is minimal:

# Python (boto3) — before:
s3 = boto3.client('s3',
    region_name='us-east-1'
)

# Python (boto3) — after:
s3 = boto3.client('s3',
    endpoint_url='https://ewr1.vultrobjects.com',
    aws_access_key_id=os.environ['VULTR_OBJ_KEY'],
    aws_secret_access_key=os.environ['VULTR_OBJ_SECRET'],
    region_name='ewr1'
)
# All existing s3.upload_file(), s3.get_object(), etc. calls work unchanged.

# Node.js (AWS SDK v3) — before:
const s3 = new S3Client({ region: 'us-east-1' });

# Node.js (AWS SDK v3) — after:
const s3 = new S3Client({
  endpoint: 'https://ewr1.vultrobjects.com',
  region: 'ewr1',
  credentials: {
    accessKeyId: process.env.VULTR_OBJ_KEY,
    secretAccessKey: process.env.VULTR_OBJ_SECRET,
  },
  forcePathStyle: true,
});

11. Step 7: Replace AWS-Native Services

This is the step that varies most between migrations. If you only use EC2 + RDS + S3, you are mostly done already. But most AWS users accumulate dependencies on managed services over time. Here is a practical replacement guide for the most common ones:

# AWS service → VPS/third-party replacement

# SES (transactional email) → Postmark ($10/mo for 10K emails)
#   or Resend, Mailgun, SendGrid
#   Change SMTP settings in your .env:
SMTP_HOST=smtp.postmarkapp.com
SMTP_PORT=587
SMTP_USER=your-postmark-server-api-token
SMTP_PASS=your-postmark-server-api-token

# CloudFront (CDN) → Cloudflare Free tier
#   Point DNS to Cloudflare, enable proxy (orange cloud)
#   Zero cost, unlimited bandwidth, DDoS protection included

# SQS (message queue) → Redis + BullMQ (Node.js) or Celery (Python)
#   Redis is already on your VPS. Add the queue library:
npm install bullmq
# or: pip install celery[redis]

# CloudWatch (monitoring) → Prometheus + Grafana
#   Both free, self-hosted on VPS
apt install -y prometheus
#   (Grafana is not in Ubuntu's default repos — add Grafana's own apt
#   repo per their docs, then: apt install grafana)

# Route 53 (DNS) → Cloudflare DNS (free)
#   or any DNS provider. Transfer your domain's nameservers.

# Secrets Manager → dotenv + encrypted .env files
#   or HashiCorp Vault (self-hosted, free)
#   or Doppler (managed, free tier available)

# Elastic Load Balancer → Nginx reverse proxy
#   Already on your VPS. Configure upstream blocks.
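For that last item, the upstream block that takes over the ALB's job looks like this. A sketch with assumed values — the conf.d path, the app_backend name, and the backend port 3000 are mine; point it at wherever your app actually listens:

```shell
# Hypothetical Nginx upstream — the ALB's job, minus $22/month.
mkdir -p /etc/nginx/conf.d
cat > /etc/nginx/conf.d/upstream.conf << 'EOF'
upstream app_backend {
    # Add more server entries here if you later split across multiple VPSes
    server 127.0.0.1:3000 max_fails=3 fail_timeout=10s;
    keepalive 32;
}
EOF

# Reference it from your server {} block:
#   location / {
#       proxy_pass http://app_backend;
#       proxy_http_version 1.1;
#       proxy_set_header Host $host;
#       proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
#   }
```

With a single backend this is just a reverse proxy; the upstream block earns its keep the day you add a second server line and get round-robin balancing plus passive health checks for free.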

12. Step 8: Testing the New Environment

Do not skip this. Do not rush this. I run the new VPS environment for at least one week before touching DNS. Here is the testing protocol:

# === Testing Protocol ===

# 1. Test the application directly by IP (bypass DNS):
curl -H "Host: yourdomain.com" http://NEW_VPS_IP/
curl -H "Host: yourdomain.com" http://NEW_VPS_IP/api/health

# 2. Test database connectivity:
psql -h localhost -U appuser -d myappdb -c "SELECT COUNT(*) FROM users;"

# 3. Test Redis:
redis-cli ping  # Should return PONG
redis-cli INFO memory | grep used_memory_human

# 4. Run your application's test suite against the new server:
DATABASE_URL="postgresql://appuser:pass@NEW_VPS_IP/myappdb" npm test

# 5. Load test with wrk or hey:
hey -n 10000 -c 50 http://NEW_VPS_IP/api/health
# Compare response times with your AWS production baseline

# 6. Test SSL certificate provisioning:
# (Temporarily point a test subdomain to the VPS)
certbot --nginx -d test.yourdomain.com --dry-run

# 7. Test backup and restore:
pg_dump -U appuser myappdb > /tmp/test-backup.sql
wc -l /tmp/test-backup.sql  # Should be non-zero
rm /tmp/test-backup.sql

# 8. Test cron jobs:
crontab -l  # Verify all jobs are present
# Manually trigger each cron job and verify output

13. Step 9: DNS Cutover Strategy

The DNS cutover is the moment of truth. Done right, your users notice nothing. Done wrong, you get 502 errors, SSL warnings, and panicked Slack messages. Here is exactly how I do it, step by step:

# === 48 hours before cutover ===

# Lower DNS TTL to 300 seconds (5 minutes):
# If using Route 53:
aws route53 change-resource-record-sets \
  --hosted-zone-id YOUR_ZONE_ID \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "yourdomain.com",
        "Type": "A",
        "TTL": 300,
        "ResourceRecords": [{"Value": "OLD_EC2_IP"}]
      }
    }]
  }'
# This ensures that when you change the IP, resolvers pick it up within 5 minutes
# instead of caching the old IP for hours.

# === Cutover time (ideally during low-traffic window) ===

# Step 1: Put the OLD server in read-only / maintenance mode
# (prevents data writes during the transition gap)
ssh ubuntu@EC2_IP "sudo touch /var/www/yourapp/maintenance.flag"

# Step 2: Take a FINAL database dump (captures last-minute writes):
pg_dump -h your-rds-instance.us-east-1.rds.amazonaws.com \
  -U dbadmin -d myappdb --format=custom --file=/tmp/final-sync.dump

# Step 3: Restore final dump on new VPS:
scp /tmp/final-sync.dump deploy@NEW_VPS_IP:/tmp/
ssh deploy@NEW_VPS_IP "sudo -u postgres pg_restore -d myappdb --clean /tmp/final-sync.dump"

# Step 4: Update DNS to point to new VPS IP:
# (Cloudflare dashboard, or your DNS provider's API)
# A record: yourdomain.com → NEW_VPS_IP
# AAAA record: yourdomain.com → NEW_VPS_IPV6 (if applicable)

# Step 5: Issue SSL certificate on new VPS:
ssh deploy@NEW_VPS_IP "sudo certbot --nginx -d yourdomain.com -d www.yourdomain.com"

# Step 6: Monitor:
ssh deploy@NEW_VPS_IP "tail -f /var/log/nginx/access.log"
# You should see real traffic appearing within 5-10 minutes.
# === Post-cutover verification ===

# Verify DNS is resolving to new IP:
dig yourdomain.com +short
# Should show: NEW_VPS_IP

# Verify HTTPS is working:
curl -I https://yourdomain.com
# Should show: HTTP/2 200 with valid SSL

# Check that the old EC2 instance is receiving NO new requests:
ssh ubuntu@EC2_IP "tail -5 /var/log/nginx/access.log"
# Timestamps should be getting older (no new entries)

# Run your smoke tests against production:
curl https://yourdomain.com/api/health
curl https://yourdomain.com/  # Check homepage loads

14. Step 10: The Rollback Plan

Every migration needs a rollback plan. Not because you expect to use it — because the existence of a rollback plan means you can be bold with the cutover. Here is mine:

Rollback Plan (if things go wrong after cutover):
  1. Point DNS back to the old EC2 Elastic IP (takes 5 minutes to propagate with 300s TTL)
  2. Start the EC2 instance if it was stopped: aws ec2 start-instances --instance-ids i-xxx
  3. Remove maintenance flag: rm /var/www/yourapp/maintenance.flag
  4. If any writes happened on the new VPS during the failed cutover, export them and replay on RDS
  5. Investigate what went wrong, fix it, and try again in a few days

Keep AWS running (stopped/minimized) for at least 14 days after a successful cutover. A stopped EC2 instance costs nothing for compute; you only pay for EBS storage (~$0.08/GB/mo). Keep RDS on the smallest tier. Your total rollback insurance cost: under $10/month.

15. Post-Migration: Shut Down AWS Resources

After your new VPS has been running stable for two weeks with no issues, it is time to start terminating AWS resources. Do this in order — some resources depend on others:

# === Shutdown sequence (do NOT rush this) ===

# Week 2 post-migration: Take final snapshots of everything
aws ec2 create-snapshot --volume-id vol-xxx --description "final-pre-delete-backup"
aws rds create-db-snapshot --db-instance-identifier yourdb --db-snapshot-identifier final-backup

# Week 3: Delete the resources
# 1. Terminate EC2 instances:
aws ec2 terminate-instances --instance-ids i-1234567890abcdef0

# 2. Delete RDS instance (keep final snapshot):
aws rds delete-db-instance \
  --db-instance-identifier yourdb \
  --final-db-snapshot-identifier yourdb-final-snapshot \
  --no-delete-automated-backups

# 3. Release Elastic IPs:
aws ec2 release-address --allocation-id eipalloc-xxx

# 4. Delete NAT Gateways (these cost $0.045/hr even when idle):
aws ec2 delete-nat-gateway --nat-gateway-id nat-xxx

# 5. Empty and delete S3 buckets (ONLY after verifying all data is on new storage):
aws s3 rb s3://your-bucket --force

# 6. Delete security groups, VPC (if dedicated), CloudWatch alarms, etc.

# 7. Check your final AWS bill in 30 days — it should be near zero.
# Some charges (like S3 storage for snapshots) may persist until you delete them.

The satisfaction of watching your AWS bill go from $478 to $0 is a feeling I highly recommend. My current monthly infrastructure cost for the same workload is $60 on Vultr, an 87% reduction for this stack. (The 73% figure in the title is the average across all the workloads I migrated.) The migration took three weeks of part-time work, and it will pay for itself every single month going forward. If you are reading this and your AWS bill makes you wince, the time to start planning is now — not next quarter.

For more migration guides, check out our articles on migrating from DigitalOcean to Vultr, moving Docker apps between VPS providers, and upgrading from shared hosting to VPS. And if you are still choosing a provider, our VPS size calculator can help match your workload to the right plan.

FAQ

How much can I realistically save by leaving AWS for a VPS?

For a typical small-to-medium web application (2–4 vCPU, 4–8GB RAM, under 2TB monthly traffic), most teams save 50–75%. A workload costing $150–$400/mo on AWS (EC2 + RDS + S3 + bandwidth) typically runs on a $20–$40/mo VPS. The savings come primarily from included bandwidth (AWS charges $0.09/GB outbound, VPS providers include 1–32TB free) and not paying separately for managed database instances.

What AWS services have no direct VPS equivalent?

Lambda (serverless functions), DynamoDB (managed NoSQL), SQS/SNS (managed queues/notifications), Cognito (managed auth), and Step Functions have no drop-in VPS replacements. You need alternatives: cron jobs or worker processes for Lambda, Redis+BullMQ for SQS, PostgreSQL for DynamoDB (if your data model allows), and Keycloak for Cognito. If your architecture depends heavily on 3+ proprietary AWS services, migration requires significant application-level changes — plan accordingly.

How long does an AWS to VPS migration typically take?

Simple stack (EC2 + RDS + S3): 2–5 days including testing. Medium complexity (add ElastiCache, SES, CloudFront): 1–2 weeks. Microservices with multiple AWS-native integrations: 3–8 weeks. The data transfer itself is fast — a 50GB database dumps and restores in under an hour. The time is in testing, configuration adjustments, and running parallel environments to verify everything works before cutting over.

Is self-hosted PostgreSQL as reliable as AWS RDS?

PostgreSQL itself is identical software. What RDS gives you is automated backups, automatic failover, and managed patching. On a VPS you set these up yourself: pg_dump on cron for backups (takes 10 minutes to configure), streaming replication for HA (1–2 hours), and unattended-upgrades for patching. After initial setup, it runs itself. PostgreSQL on a $40/mo Vultr VPS (4 vCPU, 8GB RAM) outperforms RDS db.t3.medium ($60/mo) in raw query performance because you get dedicated resources instead of burstable CPU credits.

Can I keep using S3-compatible storage after leaving AWS?

Yes. Vultr Object Storage, Backblaze B2, and Cloudflare R2 all support the S3 API. Your application code stays nearly identical — change the endpoint URL and credentials, keep the same SDK calls (boto3, aws-sdk, etc.). Backblaze B2 is $0.006/GB/mo vs S3 at $0.023/GB/mo. Cloudflare R2 has zero egress fees. The S3 API is an industry standard now; it is not an AWS exclusive.

What happens to my data during the DNS cutover?

During DNS propagation (typically 5–60 minutes with a low TTL), some users hit the old server and some hit the new one. For read-heavy apps this is fine — both servers serve the same content. For write-heavy apps, you need a strategy: put the old server in maintenance/read-only mode during cutover, take a final database dump, restore it on the new server, then flip DNS. Lower your DNS TTL to 300 seconds at least 24–48 hours before the cutover so propagation is fast.

What if the migration fails — can I roll back to AWS?

Yes, and you should always plan for this. Keep your AWS infrastructure running (stopped or downsized to minimize cost) for at least two weeks after cutover. Rollback is as simple as pointing DNS back to the old EC2 Elastic IP. A stopped EC2 instance costs $0 for compute — you only pay for EBS storage (~$0.08/GB/mo). Keep RDS on the smallest tier, or stopped (AWS automatically restarts a stopped RDS instance after 7 days, so you will need to stop it again). Total rollback insurance: under $10/month.

Alex Chen — Senior Systems Engineer

Alex migrated 14 production workloads off AWS in Q1 2026, reducing total infrastructure spend by 73%. He now runs all client projects on independent VPS providers and documents the process so others can do the same.