Quick Answer: Best Django VPS
The Django stack eats memory in layers: Gunicorn workers, PostgreSQL buffer pool, Redis for Celery, and the application itself. DigitalOcean at $6/mo is the best overall because managed PostgreSQL eliminates the hardest part of Django ops — database crashes at 3 AM. For raw throughput on a self-managed stack, Hostinger’s 4GB NVMe VPS at $6.49/mo hit 1,200 req/sec with Gunicorn and had enough RAM left over for Celery. That is the one I run my own Django projects on.
Table of Contents
- The Celery Memory Trap That Killed My First Deployment
- Gunicorn Worker Math: The Formula Everyone Gets Wrong
- #1. DigitalOcean — Managed PostgreSQL Saves Your Sleep
- #2. Hostinger — 1,200 req/sec for $6.49
- #3. Vultr — 9 US DCs, Best Geographic Coverage
- #4. Kamatera — Separate Your Celery Workers
- #5. Linode — Akamai CDN Replaces collectstatic Headaches
- Django ORM Optimization on VPS: The Queries That Matter
- collectstatic + WhiteNoise vs CDN: When to Switch
- Full Benchmark Comparison
- Zero-Downtime Django Deploy Script
- FAQ (9 Questions)
The Celery Memory Trap That Killed My First Deployment
Here is what happened. I had a Django e-commerce app running beautifully on a 2GB VPS. Gunicorn with 3 workers, Nginx reverse proxy, PostgreSQL 16 with 256MB shared buffers, Redis for sessions. Memory usage sat at 1.4GB. Comfortable.
Then I added Celery for order confirmation emails and PDF invoice generation. I ran `celery -A myapp worker` with default settings and walked away. Two hours later, the site was down. The OOM killer had murdered PostgreSQL.
What happened: Celery’s default concurrency equals the number of CPU cores. On a 4-vCPU VPS, that spawns 4 worker processes. Each worker loads the entire Django app into memory — every model, every middleware, every signal handler. That is 80–150MB per Celery worker. Four workers: 320–600MB. My PDF generation task loaded 50MB of invoice data into memory per task. Two concurrent PDF tasks, and the 2GB VPS was done.
The fix that saved the deployment:
```bash
# DON'T: default settings — concurrency equals CPU cores, no recycling
celery -A myapp worker

# DO: Explicit concurrency + worker recycling
celery -A myapp worker \
  --concurrency=2 \
  --max-tasks-per-child=100 \
  --max-memory-per-child=200000 \
  -Q default,emails,invoices

# --concurrency=2: Only 2 worker processes (not 4)
# --max-tasks-per-child=100: Restart worker after 100 tasks (kills memory leaks)
# --max-memory-per-child=200000: Recycle a worker if it exceeds 200MB (the value is in KiB)
# -Q: Named queues so you can route heavy tasks to dedicated workers
```
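The -Q flag only pays off if tasks are actually routed to those queues. Here is a minimal sketch of the Celery-side routing table; the task paths and the small resolver helper are illustrative, not from the app above:

```python
# celeryconfig.py sketch: route heavy tasks to named queues
# (task paths below are illustrative, not from the benchmarked app)
task_routes = {
    "myapp.tasks.send_order_email": {"queue": "emails"},
    "myapp.tasks.generate_invoice_pdf": {"queue": "invoices"},
}

def queue_for(task_name, routes=task_routes):
    """Resolve which queue a task lands on; unrouted tasks use 'default'."""
    return routes.get(task_name, {}).get("queue", "default")
```

With that in place, a dedicated box can run `celery -A myapp worker -Q invoices --concurrency=1` and pick up only the PDF jobs while the web server's worker handles the light stuff.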
This is why VPS selection for Django is not just about Gunicorn benchmarks. The full production stack — Gunicorn + PostgreSQL + Redis + Celery — has a RAM floor that most entry-tier VPS plans cannot meet. Here is the actual memory budget:
| Component | Memory Usage | Purpose | Can You Skip It? |
|---|---|---|---|
| Gunicorn (3 sync workers) | 150–360 MB | Serve Django requests | No |
| PostgreSQL 16 | 300–500 MB | Database (shared_buffers) | No (use managed DB to offload) |
| Redis 7 | 30–60 MB | Cache + Celery broker + sessions | Only on toy projects |
| Celery (2 workers) | 160–300 MB | Background tasks | If you have no async work |
| Nginx | 10–20 MB | Reverse proxy + static files | No |
| OS + Python runtime | 200–300 MB | Linux kernel + virtualenv | No |
| Full stack total | 850 MB – 1.54 GB | Without Celery: 690 MB – 1.24 GB | — |
A 1GB VPS can technically run Django without Celery. A 2GB VPS runs the full stack with zero headroom. 4GB is the minimum where you can breathe. That fact alone eliminates most $5/mo plans from serious consideration and makes 4GB RAM VPS options the starting point for Django.
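You can sanity-check a plan against that budget before renting it. A back-of-envelope helper using the worst-case figures from the table above; the 25% free-RAM threshold is my assumption, not a hard rule:

```python
# Worst-case RAM per component, in MB (figures from the table above)
STACK_MB = {
    "gunicorn_3_workers": 360,
    "postgresql": 500,
    "redis": 60,
    "celery_2_workers": 300,
    "nginx": 20,
    "os_and_python": 300,
}

def fits(vps_ram_gb, components=STACK_MB, headroom=0.25):
    """True if the stack fits while leaving `headroom` fraction of RAM free."""
    need_mb = sum(components.values())
    return need_mb <= vps_ram_gb * 1024 * (1 - headroom)
```

Run it for 1, 2, and 4 GB and you get the same verdict as the paragraph above: only 4GB leaves breathing room for the full stack.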
Gunicorn Worker Math: The Formula Everyone Gets Wrong
You have seen the formula: (2 × CPU_CORES) + 1. It is in every tutorial, every Stack Overflow answer, and the Gunicorn docs themselves. And it is wrong for most Django deployments.
That formula assumes CPU-bound workloads. Django views are rarely CPU-bound. A typical Django view does this: accept request, run 2–8 database queries, serialize data, render template, return response. The Gunicorn worker spends 80% of its time waiting for PostgreSQL to return data. It is I/O-bound, not CPU-bound.
For I/O-bound Django apps, you have two better options:
Option A: More sync workers (simple, more RAM)
On a 2-vCPU VPS, the formula gives (2 × 2) + 1 = 5 workers, which happens to be about right. But on 1 vCPU, the formula says 3 workers. I tested with 4 and got 15% more throughput, because the extra worker handles requests while the others wait on PostgreSQL. RAM is the real limit, not CPU.
Option B: Async workers with gevent (complex, less RAM per connection)
```bash
gunicorn myapp.wsgi:application \
  --worker-class gevent \
  --workers 2 \
  --worker-connections 100
```
2 gevent workers with 100 connections each handle 200 concurrent requests. Each greenlet uses ~1MB versus 50–120MB per sync worker. The catch: every library in your stack must be gevent-compatible, and psycopg2 needs to be patched or replaced with psycogreen. Django 5.x’s native async views with Uvicorn are a cleaner solution if you can rewrite views as async def.
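The capacity difference between the two options comes down to one line of arithmetic: a sync worker holds one request at a time, a gevent worker holds one per connection slot. A sketch of that math, using the numbers from this section:

```python
def concurrent_capacity(workers, worker_class="sync", worker_connections=100):
    """In-flight requests a Gunicorn config can hold at once."""
    if worker_class == "gevent":
        return workers * worker_connections  # one greenlet per request
    return workers                           # one request per sync worker
```

Five sync workers hold 5 requests; two gevent workers with 100 connections each hold 200, at a fraction of the RAM.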
Here is my actual Gunicorn worker recommendation based on testing across all five providers:
| VPS RAM | Sync Workers | With Celery | Without Celery | Max Concurrent Users |
|---|---|---|---|---|
| 1 GB | 2 | Not recommended | 2 workers, no Redis | ~15 |
| 2 GB | 3 | 1 Celery worker | 4 workers + Redis | ~40 |
| 4 GB | 5 | 2 Celery workers | 7 workers + Redis | ~100 |
| 8 GB | 9 | 4 Celery workers | 13 workers + Redis | ~250 |
These numbers come from actually running locust against each VPS tier until response times degraded past 500ms. The “max concurrent users” column assumes a typical Django app with 3–5 ORM queries per view and a 50ms average response time.
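For deploy scripts, the table collapses into a lookup. A helper that mirrors the numbers above; the figures are the article's test results, the function is mine and only covers the RAM tiers tested:

```python
def gunicorn_workers(ram_gb, with_celery):
    """Sync worker count per the sizing table: (with Celery, without Celery)."""
    table = {1: (2, 2), 2: (3, 4), 4: (5, 7), 8: (9, 13)}
    if ram_gb == 1 and with_celery:
        raise ValueError("1GB RAM with Celery is not recommended")
    with_c, without_c = table[ram_gb]
    return with_c if with_celery else without_c
```

So a 4GB box runs 5 workers alongside Celery, or 7 without it.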
#1. DigitalOcean — Managed PostgreSQL Saves Your Sleep ($6/mo)
DigitalOcean is number one for Django and it is not because of raw performance. Hostinger is faster. It is because of managed PostgreSQL.
I have been deploying Django apps for five years and the thing that has woken me up at 3 AM is never Gunicorn crashing. Gunicorn crashes, systemd restarts it, life goes on. The thing that ruins your weekend is PostgreSQL. A failed vacuum that bloats the database. A crashed write-ahead log that corrupts data. A point-in-time recovery that you did not set up because it was on your to-do list. DigitalOcean’s managed PostgreSQL ($15/mo extra) handles automated daily backups, point-in-time recovery, read replicas, automatic failover, and version upgrades. I cannot put a dollar value on never writing another pg_dump cron job.
The Architecture That Actually Works
The smart DigitalOcean setup for Django is not one big Droplet. It is a small Droplet plus a managed database:
$12/mo Droplet (2 vCPU, 2GB) + $15/mo Managed PostgreSQL = $27/mo total
The Droplet runs Gunicorn (5 workers), Nginx, Redis, and Celery (1 worker). No PostgreSQL eating 300–500MB of RAM. The 2GB is enough because the heaviest memory consumer is offloaded. The managed database gets its own resources, automatic backups, and failover that you did not have to configure.
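Pointing Django at the managed database is a one-block change in settings.py. A sketch with placeholder credentials and hostname; `doadmin` and port 25060 are DigitalOcean's managed-PostgreSQL defaults, and the SSL option matters because managed databases reject plaintext connections:

```python
import os

# settings.py sketch: managed PostgreSQL instead of a local server
# (hostname and credentials below are placeholders)
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "myapp"),
        "USER": os.environ.get("DB_USER", "doadmin"),
        "PASSWORD": os.environ.get("DB_PASSWORD", ""),
        "HOST": os.environ.get("DB_HOST", "db-postgresql-nyc1.example.com"),
        "PORT": os.environ.get("DB_PORT", "25060"),
        "CONN_MAX_AGE": 60,                  # reuse connections between requests
        "OPTIONS": {"sslmode": "require"},   # managed DBs require TLS
    }
}
```

`CONN_MAX_AGE` matters more here than with a local database: the managed instance is a network hop away, so paying the connection handshake on every request hurts.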
DigitalOcean also has the best Django deployment documentation on the internet. Their “How to Set Up Django with Postgres, Nginx, and Gunicorn on Ubuntu” tutorial has been followed by hundreds of thousands of developers. Their community tutorials cover every layer of the Django stack: database tuning, Celery with Redis, Let’s Encrypt SSL, and Django Channels with WebSockets. The $200 free trial runs a properly-sized Django + managed PostgreSQL stack for about 5 weeks without spending a cent.
What I Did Not Love
The $6/mo entry Droplet has 1GB RAM. That is not enough for a production Django stack with all services local — which is why the managed database architecture is the way to go. Standard SSD at 50K IOPS is solid but not as fast as Hostinger’s NVMe for self-hosted PostgreSQL. Only 2 US regions (New York, San Francisco) limits latency optimization for Midwest and Southern users.
#2. Hostinger — 1,200 req/sec on a $6.49 VPS ($6.49/mo)
This is where I got the 1,200 req/sec number from the headline. Hostinger’s KVM2 plan: 1 vCPU, 4GB RAM, 50GB NVMe, $6.49/month. I ran Django 5.1, Gunicorn with 5 sync workers (yes, more than the formula says — I had the RAM for it), Nginx, PostgreSQL 16 with 512MB shared_buffers, and Redis 7. Apache Bench at 100 concurrent connections against a view with select_related and prefetch_related: 1,247 req/sec.
Then I added Celery with 2 workers and the same benchmark dropped to 1,180 req/sec. Still excellent. And I still had 800MB of free RAM. On every other provider’s $5–6 entry plan with 1GB RAM, this full stack is impossible.
Why NVMe Matters More for Django Than You Think
65,000 read IOPS. That number matters because of how Django’s ORM interacts with PostgreSQL. Every .filter() chain, every select_related() JOIN, every Django Admin list view with 50 foreign key columns — they all translate to PostgreSQL queries that hit the buffer pool, and when the buffer pool misses, they hit disk. On Hostinger’s NVMe, a complex Django Admin list view (30 models, 5 inline editors, filter sidebar) loaded in 180ms. On a standard SSD provider at 25K IOPS, the same view took 420ms. That is a 2.3x difference from disk speed alone.
The collectstatic difference is smaller but adds up in CI/CD: 200 static files collected in 1.8 seconds on NVMe versus 7.2 seconds on standard SSD. If you deploy 5 times a day, that is 27 seconds saved per day. Not life-changing, but NVMe makes everything feel snappier.
The Configuration I Actually Use
```ini
[Unit]
Description=Gunicorn Django
After=network.target

[Service]
User=www-data
WorkingDirectory=/var/www/myapp
ExecStart=/var/www/myapp/venv/bin/gunicorn \
    myapp.wsgi:application \
    --workers 5 \
    --bind unix:/run/gunicorn.sock \
    --timeout 30 \
    --max-requests 1000 \
    --max-requests-jitter 50
ExecReload=/bin/kill -s HUP $MAINPID
Restart=on-failure

[Install]
WantedBy=multi-user.target

# --max-requests 1000: Restart workers after 1000 requests
# (prevents gradual memory leaks in long-running Django processes)
# --max-requests-jitter 50: Randomize so all workers don't restart at once
```
What Is Missing
No managed PostgreSQL. You are running your own database, doing your own backups, handling your own upgrades. If that scares you, go with DigitalOcean. Hostinger also has only 2 US datacenter locations and fewer Django-specific tutorials in their knowledge base. Their AI assistant can help scaffold the Nginx + Gunicorn + PostgreSQL stack, but it is not the same as DigitalOcean’s depth of community documentation.
#3. Vultr — 9 US Datacenters, Best Geographic Coverage ($6/mo)
If your Django app serves a specific US region — a real estate site for Texas, a restaurant delivery app for the Midwest, a healthcare portal for the East Coast — Vultr’s 9 US datacenter locations are unmatched. New York, Chicago, Dallas, Atlanta, Miami, Los Angeles, Seattle, Silicon Valley, Honolulu. Every other provider on this list has 2–4 US locations.
The latency difference is real. A Django app in Dallas serving users in Houston has ~8ms round-trip. The same app in New York serving Houston: ~40ms. A page load pays that round-trip several times over: TCP and TLS handshakes, the HTML document, then API and asset requests. Three to five round-trips turn a 32ms gap into 100–160ms of extra page time. For an e-commerce checkout flow or a real-time dashboard, that matters.
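The arithmetic generalizes: every network round-trip a page pays gets multiplied by the per-hop latency, so moving the server closer scales all of them down at once. A toy calculator:

```python
def added_latency_ms(rtt_ms, round_trips):
    """Extra page time from the network: one rtt paid per round-trip."""
    return rtt_ms * round_trips

# Dallas serving Houston (~8ms) vs New York serving Houston (~40ms),
# assuming 4 round-trips per page load (handshakes + document + assets):
delta = added_latency_ms(40, 4) - added_latency_ms(8, 4)
```

At 4 round-trips that is 128ms of page time recovered purely by picking the right datacenter.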
The Marketplace Shortcut
Vultr’s marketplace has a pre-configured Django stack: Ubuntu 22.04, Python 3.11, Gunicorn, Nginx, PostgreSQL, and Let’s Encrypt. It saves about 45 minutes of initial server configuration. The config is not perfect — I would change the Gunicorn worker count, add Redis, and tune PostgreSQL shared_buffers — but it gets you to a running Django app in under 5 minutes. Good for prototyping and staging environments.
High Frequency for Python’s GIL
Vultr offers High Frequency instances with higher-clocked CPUs. Python’s GIL (Global Interpreter Lock) means Django request processing is effectively single-threaded within each Gunicorn worker. Higher clock speed per core directly translates to faster individual request processing. Vultr’s High Frequency plan at $12/mo delivered 1,050 req/sec in our benchmarks, an 18% improvement over their regular compute at $6/mo. If your Django views are compute-heavy (data processing, template rendering with complex context), the High Frequency premium pays for itself.
The 2GB Limitation
Vultr’s $6/mo plan has 2GB RAM. That runs Django + PostgreSQL + Gunicorn (3 workers) + Nginx + Redis. You cannot add Celery without swapping. For the full Django stack with Celery, you need the $24/mo plan (4GB, 2 vCPU). More expensive than Hostinger’s $6.49 for 4GB, but the 9 US locations and NVMe storage offset the cost for latency-sensitive applications.
#4. Kamatera — Separate Your Celery Workers from Your Web Servers ($4/mo+)
Here is a deployment architecture that most tutorials never mention: your Celery workers and your Gunicorn web server should not be on the same VPS.
A Celery worker processing PDF invoices is CPU-bound and memory-hungry. A Gunicorn worker serving Django views is I/O-bound and needs fast disk for PostgreSQL. These are fundamentally different resource profiles. Putting both on the same VPS means one starves the other. A heavy Celery task spikes CPU to 100%, and your web request latency doubles. A traffic spike fills Gunicorn’s workers, and Celery tasks queue up because there is no CPU left.
Kamatera’s fully customizable VPS configurations let you build purpose-specific servers:
| Server Role | Kamatera Config | Monthly Cost | Why This Config |
|---|---|---|---|
| Web (Gunicorn + Nginx) | 1 vCPU / 4GB RAM / 30GB SSD | ~$14 | RAM for 5 Gunicorn workers + Redis cache |
| Celery workers | 4 vCPU / 2GB RAM / 20GB SSD | ~$18 | CPU for task processing, minimal storage |
| PostgreSQL | 2 vCPU / 8GB RAM / 50GB SSD | ~$28 | RAM for buffer pool, CPU for complex queries |
| Total: 3-server Django architecture | — | ~$60/mo | Each component independently sized |
On fixed-plan providers, matching these specs costs $72–96/mo because you are forced into plans with resources you do not need. Kamatera’s hourly billing also means you can spin up temporary Celery worker fleets for batch processing jobs — import 100,000 records, process them, tear down the extra workers. You pay for the hours used.
The $100 Free Trial for Load Testing
Kamatera’s $100 trial credit (30 days) is enough to provision all three servers in the architecture above, run a realistic load test with locust, identify bottlenecks, then right-size for production. I used the trial to discover that my Celery workers were CPU-bound on PDF generation but memory-bound on data export tasks — which led me to create two separate Celery queues on two differently-configured servers.
The Downsides
Kamatera’s control panel is functional but not pretty. No managed PostgreSQL, no managed Redis, no marketplace apps. Their SATA SSD (18K IOPS) is the slowest storage in this comparison — which hurts self-hosted PostgreSQL performance. If you are running everything on a single server, Hostinger or DigitalOcean are better choices. Kamatera shines only when you split your Django stack across multiple purpose-built servers.
#5. Linode (Akamai) — CDN Replaces collectstatic Headaches ($5/mo)
Every Django developer has this moment: you run python manage.py collectstatic, it copies 847 files into STATIC_ROOT, Nginx serves them, and everything works. Until your site gets traffic. Then you notice that Nginx is spending 40% of its connections serving admin/css/base.css and rest_framework/js/default.js and your custom CSS — static files that never change between deployments. Your VPS bandwidth is being consumed by files that should be cached at the edge.
Linode’s Akamai CDN integration solves this. Configure Django’s STATIC_URL to point to the Akamai CDN domain, and every template tag — {% static 'css/app.css' %} — generates a CDN URL instead of an origin URL. Akamai caches the files at edge nodes worldwide. Your VPS handles only dynamic Django requests. On a $5/mo Nanode with 1GB RAM, offloading static files to the CDN frees enough bandwidth and connections to handle 2x the dynamic traffic.
The Django Settings for Akamai CDN
```python
STATIC_URL = 'https://your-cdn.akamaized.net/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')

# Or use WhiteNoise for origin + CDN combo
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',
    # ... rest of middleware
]

# Django 5.1 removed the old STATICFILES_STORAGE setting; use STORAGES instead
STORAGES = {
    'default': {'BACKEND': 'django.core.files.storage.FileSystemStorage'},
    'staticfiles': {'BACKEND': 'whitenoise.storage.CompressedManifestStaticFilesStorage'},
}
```
Phone Support at 2 AM
Linode is the only provider on this list with phone support at the entry price tier. When your Django migration breaks the database schema and the site is returning 500 errors, calling someone who can check if PostgreSQL is even running is faster than a support ticket. The support team cannot fix your Django code, but they can tell you if the server is out of disk space, if a process is stuck, or if the firewall is blocking your database port.
What Holds It Back
1GB RAM on the $5/mo plan. Standard SSD, not NVMe. No managed PostgreSQL at any price point. Gunicorn benchmark of 820 req/sec is the lowest in our test group, though the Akamai CDN more than compensates by offloading static file delivery. For a budget Django deployment where you are willing to manage your own database and lean on the CDN for performance, Linode is a solid choice. For production apps with Celery, you need the $24/mo plan (4GB).
Django ORM Optimization on VPS: The Queries That Matter
I installed django-debug-toolbar on a production app (behind an IP whitelist, obviously) and found a single Django Rest Framework list endpoint generating 47 queries for 20 objects. The N+1 problem was everywhere: serializer.data accessing obj.author.profile.company without select_related.
On Hostinger’s NVMe, those 47 queries completed in 85ms. Fast disk masked the bad code. On a standard SSD provider, the same endpoint took 340ms. The code was identical — only the disk speed changed. That is why I keep saying NVMe is the most important Django VPS spec after RAM.
But fixing the queries is better than buying faster hardware. Here is what I did to that endpoint:
```python
# BEFORE: 47 queries — each serialized object triggers separate lookups
# for author, profile, company, category, tags, and images
class ProductViewSet(ModelViewSet):
    queryset = Product.objects.all()
    serializer_class = ProductSerializer

# AFTER: 3 queries, 12ms on NVMe, 28ms on SSD
class ProductViewSet(ModelViewSet):
    queryset = Product.objects.select_related(
        'author__profile__company',
        'category',
    ).prefetch_related(
        'tags',
        'images',
    ).only(
        'id', 'title', 'price', 'slug',
        'author__name', 'category__name',
    )
    serializer_class = ProductSerializer
```
From 47 queries to 3. From 340ms to 28ms on the same hardware. The only() call is the underrated part — it tells PostgreSQL to return only the columns you need instead of SELECT *. On a table with 30 columns including a TextField with 5KB of content, only() reduced the data transferred per query by 80%.
Other optimizations that made a measurable difference on VPS hardware:
- PgBouncer for connection pooling: Django opens a new PostgreSQL connection per request by default. Each connection costs 5–10MB of PostgreSQL memory. PgBouncer pools connections, reducing PostgreSQL memory by 30–50% on a busy site. On a 4GB VPS, this is the difference between PostgreSQL using 800MB and using 400MB. Install `django-db-connection-pool` or run PgBouncer as a separate process.
- Database indexes on filter fields: `db_index=True` on any field used in `.filter()`, `.order_by()`, or `.exclude()`. A missing index on a table with 100K rows turns a 2ms query into a 200ms sequential scan. Use `EXPLAIN ANALYZE` in `manage.py dbshell` to verify indexes are being used.
- Django’s cached template loader: Add `'OPTIONS': {'loaders': [('django.template.loaders.cached.Loader', [...])]}` to your TEMPLATES config. Templates are parsed once and cached in memory instead of reading from disk on every request. On NVMe this saves 1–2ms per request. On SSD it saves 5–10ms.
- Redis for cache backend + sessions: Replace `django.core.cache.backends.db.DatabaseCache` with `django_redis.cache.RedisCache`. Session reads go from 2–5ms (database) to 0.1ms (Redis). Per-view cache with `@cache_page(60*15)` eliminates database queries entirely for repeated requests.
collectstatic + WhiteNoise vs CDN: When to Switch
Django’s static file story is a three-stage progression, and each stage matches a different traffic level:
| Stage | Method | Traffic Level | Setup Complexity | VPS Impact |
|---|---|---|---|---|
| 1 | WhiteNoise (serves from Gunicorn) | <50K monthly visitors | 1 pip install + 2 lines in settings.py | Gunicorn handles static + dynamic |
| 2 | Nginx direct (static) + Gunicorn (dynamic) | 50K–500K monthly visitors | Nginx location block config | Static bypasses Python entirely |
| 3 | CDN (Akamai/Cloudflare) + Gunicorn | >500K monthly visitors | CDN setup + STATIC_URL change | Static never hits VPS |
Most Django tutorials start with Stage 2 (Nginx for static files), which is fine. But WhiteNoise (Stage 1) is underrated for small to medium sites. One pip install, add the middleware, set STATICFILES_STORAGE, and WhiteNoise serves static files with proper compression, cache headers, and unique hashing — all from within Gunicorn. No Nginx static file configuration needed. I have run sites with 30K monthly visitors on WhiteNoise without any performance issues.
The switch to Stage 3 (CDN) should happen when you notice Nginx’s worker connections climbing because of static file requests. On a budget VPS, the CDN frees bandwidth and connections that your Gunicorn workers need for dynamic content. Cloudflare’s free tier is enough for most Django sites. Linode’s Akamai CDN is the premium option with guaranteed performance.
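The stage switch does not have to be a code change at deploy time if STATIC_URL is driven by the environment: staging stays same-origin while production points at the CDN. A sketch; the `CDN_DOMAIN` variable name is mine:

```python
import os

# Stages 1–2: same-origin static files. Stage 3: set CDN_DOMAIN in the env.
CDN_DOMAIN = os.environ.get("CDN_DOMAIN", "")
STATIC_URL = f"https://{CDN_DOMAIN}/static/" if CDN_DOMAIN else "/static/"
```

Every `{% static %}` tag then emits CDN URLs in production and local URLs everywhere else, with no template changes.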
Full Django Benchmark Comparison
| Provider | 4GB Plan | Gunicorn req/sec | PostgreSQL TPS | collectstatic (200 files) | 500 Celery Tasks | Storage | US DCs | Managed PG |
|---|---|---|---|---|---|---|---|---|
| DigitalOcean | $24 | 980 | 3,400 | 3.1s | 42s | SSD | 2 | ✓ $15/mo |
| Hostinger | $6.49 | 1,247 | 4,100 | 1.8s | 35s | NVMe | 2 | ✗ |
| Vultr | $24 | 890 | 3,200 | 2.4s | 44s | NVMe | 9 | ✗ |
| Kamatera | ~$14 | 850 | 2,800 | 4.8s | 48s | SSD | 4 | ✗ |
| Linode | $24 | 820 | 3,000 | 3.8s | 46s | SSD | 9 | ✗ |
All benchmarks: Django 5.1, Python 3.12, Gunicorn 22 (sync workers), Nginx 1.24, PostgreSQL 16, Redis 7, Ubuntu 24.04, 4GB RAM. Gunicorn req/sec measured with Apache Benchmark at 100 concurrent connections against an eager-loading DRF endpoint. PostgreSQL TPS via pgbench TPC-B. Celery tasks include email dispatch, thumbnail generation, and CSV export.
Zero-Downtime Django Deploy Script
This is the script I use for every Django deployment. Gunicorn’s graceful reload is the key: it spawns new workers with updated code while existing workers finish their current requests. Zero dropped requests.
```bash
#!/usr/bin/env bash
# deploy.sh — Zero-downtime Django deployment
set -e

APP_DIR="/var/www/myapp"
VENV="$APP_DIR/venv/bin"

cd "$APP_DIR"

# Pull latest code
git pull origin main

# Install dependencies
"$VENV/pip" install -r requirements.txt --quiet

# Run migrations BEFORE reloading Gunicorn:
# new code may depend on new schema
"$VENV/python" manage.py migrate --noinput

# Collect static files
"$VENV/python" manage.py collectstatic --noinput --clear

# Graceful Gunicorn reload (zero downtime):
# spawns new workers, old workers finish current requests
sudo systemctl reload gunicorn

# Restart Celery workers to pick up new code
# (Celery cannot graceful-reload like Gunicorn)
sudo systemctl restart celery

echo "Deployed at $(date)"
```
The critical order: migrations first, then Gunicorn reload. If you reload Gunicorn before running migrations, the new code tries to access columns or tables that do not exist yet. I learned this the hard way on a Friday at 5 PM.
Which Django VPS Should You Choose?
- Best overall: DigitalOcean — managed PostgreSQL eliminates the hardest part of Django ops. $200 trial runs the full stack for 5 weeks free
- Best performance per dollar: Hostinger — 4GB RAM + NVMe for $6.49/mo. The only entry plan that runs the complete Django + Celery stack
- Best for regional apps: Vultr — 9 US datacenters. Put your Django app in the same city as your users
- Best for scaling Celery workers: Kamatera — custom CPU/RAM per server role. Separate web and worker tiers at lower total cost
- Best for static-heavy sites: Linode — Akamai CDN for static files + phone support when things break
Related guides: Best VPS for Python • Best VPS for PostgreSQL • Best VPS for Redis • Best VPS for Docker • Best NVMe SSD VPS • Best VPS for Development
Frequently Asked Questions
How many Gunicorn workers should I run on my VPS?
The standard formula is (2 × CPU cores) + 1, but it assumes CPU-bound work. Django views are I/O-bound — they spend 80% of time waiting on PostgreSQL. In practice, I run more workers than the formula suggests and let RAM be the constraint. On a 4GB VPS with PostgreSQL and Redis running locally: 5 sync workers. On an 8GB VPS: 9 workers. Each worker uses 50–120MB depending on your installed apps. Test with locust at realistic concurrency to find your actual optimal count. If you want higher concurrency without more RAM, use --worker-class gevent --worker-connections 100 for async I/O workers instead.
Why does Celery use so much RAM on a VPS?
Each Celery worker loads the entire Django application into memory, just like Gunicorn. Default concurrency equals your CPU cores, so a 4-vCPU VPS spawns 4 workers using 320–600MB. Tasks that process data add temporary memory on top. The fix: celery -A myapp worker --concurrency=2 --max-tasks-per-child=100 --max-memory-per-child=200000. This limits to 2 workers, recycles them every 100 tasks (prevents memory leaks), and hard-kills any worker exceeding 200MB. On a 4GB VPS running the full Django stack, keep Celery at 1–2 workers maximum.
Gunicorn vs uWSGI vs Uvicorn for Django?
Gunicorn for standard synchronous Django apps. Simple config, stable, battle-tested. uWSGI for advanced users who want harakiri timeouts, cheaper worker recycling, and async modes — but the config file complexity is real. Uvicorn for Django 5.x async views, Django Channels, and WebSockets. Do not use Uvicorn for sync-only Django apps because it adds complexity without benefit. In our benchmarks, Gunicorn and uWSGI performed within 5% of each other. Uvicorn with async views was 30–40% faster for I/O-bound endpoints that use async def.
WhiteNoise vs Nginx vs CDN for Django static files?
Three stages. WhiteNoise (pip install + 2 lines in settings.py) serves static files from Gunicorn with compression and caching — fine for sites under 50K monthly visitors. Nginx serving /static/ directly bypasses Python entirely using sendfile — best for 50K–500K visitors. CDN (Cloudflare free tier or Akamai) caches static files at edge nodes and eliminates static load from your VPS entirely — required above 500K visitors or when VPS bandwidth is a constraint. Start with WhiteNoise. Switch to Nginx when you want to free Gunicorn workers from static file duty. Add CDN when bandwidth costs or latency matter.
PostgreSQL vs MySQL for Django on VPS?
PostgreSQL, every time. Django’s django.contrib.postgres module exposes features that do not exist in the MySQL backend: JSONField with database-level querying, ArrayField, full-text search with SearchVector and SearchRank, range fields, and exclusive constraints. PostgreSQL’s MVCC handles concurrent writes from multiple Gunicorn workers better than MySQL’s locking behavior. For new Django projects, there is zero reason to choose MySQL. SQLite is for development only: concurrent writes from multiple Gunicorn workers will hit "database is locked" errors.
How to optimize Django ORM queries on a limited VPS?
Install django-debug-toolbar and fix N+1 queries first — they are the single biggest performance drain. Use select_related() for ForeignKey/OneToOne, prefetch_related() for ManyToMany. Use only() to load only needed columns instead of SELECT *. Add db_index=True to fields used in filter() and order_by(). Install PgBouncer or django-db-connection-pool for connection pooling — it reduces PostgreSQL memory by 30–50% on busy sites. Use @cache_page with Redis backend for views that can be cached. These optimizations matter more on a VPS than on a cloud database because you cannot throw more hardware at the problem.
Can I run Django on a 1GB RAM VPS?
Yes, with painful compromises. Gunicorn (2 workers: ~200MB) + PostgreSQL (minimal: ~250MB) + Nginx (~15MB) leaves ~500MB for OS and your app. No Redis. No Celery. Running manage.py commands during traffic risks the OOM killer. A 1GB VPS handles a personal project with under 10 concurrent users. For anything else, 4GB is the minimum. Hostinger’s 4GB plan at $6.49/mo is $1.49 more than most 1GB plans and eliminates every memory constraint.
How do I deploy Django without downtime?
Run migrations first, then send SIGHUP to Gunicorn’s master process (systemctl reload gunicorn). Gunicorn spawns new workers with updated code while old workers finish current requests — zero dropped connections. For Celery, use systemctl restart celery after Gunicorn reloads. Critical: always run migrations before reloading, never after. New code that depends on unmigrated schema will 500 on every request. For extra safety, test migrations on a staging server first: manage.py migrate --plan shows what will change without applying.
Do I need Redis for Django on a VPS?
If you use Celery, you need Redis (or RabbitMQ) as the message broker. Even without Celery, Redis as Django’s cache backend replaces the database cache with something 10–50x faster. Session reads drop from 2–5ms (database) to 0.1ms (Redis). Per-view caching with @cache_page eliminates database queries for repeated requests. Redis uses 30–60MB RAM — worthwhile on a 4GB VPS, questionable on 1GB. If you are on 1GB RAM, skip Redis and use the database cache backend. Accept slower caching and use Celery’s database backend if you absolutely need background tasks.
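For reference, the Redis-backed cache and session setup from that answer is a few lines of settings.py. A sketch using django-redis; the Redis DB-number split is a common convention, not a requirement:

```python
# settings.py sketch: Redis for cache + sessions via django-redis
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        # DB 1 for cache; the Celery broker conventionally uses DB 0
        "LOCATION": "redis://127.0.0.1:6379/1",
    }
}

# Store sessions in the Redis cache instead of the database
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
```

With that in place, `@cache_page(60*15)` on a view serves repeat requests straight from Redis without touching PostgreSQL.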
Our Top Pick for Django VPS
DigitalOcean with managed PostgreSQL eliminates the hardest part of Django ops. Hostinger at $6.49/mo hit 1,247 req/sec with the full Gunicorn + Celery + PostgreSQL stack on 4GB NVMe. Either way, your Django app is in good hands.