Quick Answer
Minimum VPS: Any $4-5/mo plan works — Nginx uses under 15MB of RAM. Best providers for multi-app hosting: Hetzner CX22 ($4.59/mo, 4GB RAM) or Vultr 2GB ($10/mo). Time to set up: 10 minutes for basic proxy, 30 minutes with SSL and security hardening.
Table of Contents
- Why You Need a Reverse Proxy
- Installing Nginx on Your VPS
- Basic Reverse Proxy Configuration
- Multiple Domains on One VPS
- SSL Termination with Let's Encrypt
- WebSocket Proxying
- Load Balancing Multiple Backends
- Response Caching
- Security Hardening
- Performance Tuning for VPS
- Troubleshooting Common Issues
- FAQ
Why You Need a Reverse Proxy (Even for One App)
Most tutorials justify reverse proxies with load balancing. That is the least common reason to use one. Here is why I deploy Nginx in front of every application, even when there is only one backend:
- SSL termination in one place. Configure TLS once in Nginx, not in every application. Your Node.js, Python, Go, or PHP app speaks plain HTTP to Nginx on localhost. Nginx handles the encryption on the public-facing side.
- Security layer. Your application never directly faces the internet. Nginx filters malformed requests, enforces rate limits, and adds security headers before traffic ever reaches your code. A vulnerability in your app is one step further from the attacker.
- Graceful deployments. Update your app, restart it on a different port, swap the Nginx upstream, reload with zero downtime. Without a proxy, restarting your app means seconds of downtime on every deployment.
- Static file serving. Nginx serves static files 10-50x faster than most application servers. Let Nginx handle CSS, JS, and images directly, and your app handles only dynamic requests.
- Request buffering. Nginx buffers slow client uploads, protecting your app from clients on 3G connections that trickle data over 30 seconds. Your app sees the request only when it is fully received.
Installing Nginx on Your VPS
Ubuntu 24.04, Debian 12, or any recent Debian-based distribution:
# Install Nginx
sudo apt update
sudo apt install -y nginx

# Start and enable on boot
sudo systemctl start nginx
sudo systemctl enable nginx

# Verify it is running
sudo systemctl status nginx
curl -I http://localhost
# HTTP/1.1 200 OK
You should now see the Nginx welcome page at your VPS's public IP. If you do not, check your firewall. On Hetzner, this means the Cloud Firewall in their dashboard. On Vultr and most other providers, it is UFW on the server:
sudo ufw allow 'Nginx Full'  # Opens ports 80 and 443
sudo ufw enable
Directory Structure You Need to Know
/etc/nginx/
    nginx.conf          # Main config (rarely touch this)
    sites-available/    # All virtual host configs
    sites-enabled/      # Symlinks to active configs
    conf.d/             # Additional config fragments
    snippets/           # Reusable config snippets
/var/log/nginx/
    access.log          # All HTTP requests
    error.log           # Errors and warnings
Basic Reverse Proxy Configuration
Remove the default site and create your first proxy configuration:
# Remove default
sudo rm /etc/nginx/sites-enabled/default

# Create your proxy config
sudo nano /etc/nginx/sites-available/myapp
Here is the configuration. I will explain every directive because copying configs blindly is how people end up with broken setups they cannot debug:
server {
listen 80;
server_name myapp.com www.myapp.com;
# Proxy all requests to the backend app
location / {
proxy_pass http://127.0.0.1:3000;
# Pass the original client info to the backend
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Timeouts (increase for slow APIs)
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# Buffer settings
proxy_buffering on;
proxy_buffer_size 4k;
proxy_buffers 8 4k;
}
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/myapp /etc/nginx/sites-enabled/

# Test configuration for syntax errors
sudo nginx -t

# Reload (not restart — no downtime)
sudo systemctl reload nginx
The proxy_set_header lines are not optional. Without X-Real-IP, your application logs show 127.0.0.1 for every request instead of the real client IP. Without X-Forwarded-Proto, your app cannot tell if the original request was HTTP or HTTPS, which breaks redirect logic and CSRF protection. I have debugged both of these issues in other people's setups more times than I want to admit.
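The trust question also applies one layer up. If Nginx itself sits behind a load balancer or CDN, $remote_addr is the balancer's address, not the client's. The standard realip module can restore the real client IP from the forwarded header, but only for sources you trust. A sketch (the trusted address range here is an assumption; substitute whatever sits in front of your Nginx):

```nginx
# In the server or http block. Only rewrite $remote_addr when the
# connection comes from the trusted upstream proxy range (assumption).
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;   # walk the X-Forwarded-For chain past trusted hops
```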
Multiple Domains on One VPS
This is where Nginx earns its place. Three different applications, three different domains, one VPS, one IP address:
# /etc/nginx/sites-available/api
server {
listen 80;
server_name api.mycompany.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# /etc/nginx/sites-available/dashboard
server {
listen 80;
server_name dashboard.mycompany.com;
location / {
proxy_pass http://127.0.0.1:5000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# /etc/nginx/sites-available/blog
server {
listen 80;
server_name blog.mycompany.com;
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# Enable all three
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/dashboard /etc/nginx/sites-enabled/
sudo ln -s /etc/nginx/sites-available/blog /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx
Nginx inspects the Host header of each incoming request and routes it to the matching server_name. No DNS trickery, no port forwarding, no complicated routing tables. This is the simplest and most reliable way to host multiple applications on a single VPS. I run six domains on a single Hetzner CX22 this way with zero issues.
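One addition worth making to this setup: a request whose Host header matches none of your server_name values falls through to Nginx's default server, which is simply whichever block loads first. A catch-all block makes that behavior explicit and drops the IP-scanning bots that hit your raw address all day (a standard pattern; 444 is an Nginx-specific status that closes the connection without sending a response):

```nginx
# Catch-all for requests that match no configured domain
server {
    listen 80 default_server;
    server_name _;
    return 444;   # close the connection, send nothing
}
```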
SSL Termination with Let's Encrypt
Certbot with the Nginx plugin handles everything — certificate generation, Nginx configuration, and automatic renewal. This is the correct way to do it in 2026. For more SSL options including wildcard certificates, see our SSL certificates on VPS guide.
# Install Certbot
sudo apt install -y certbot python3-certbot-nginx

# Get certificates for all your domains
sudo certbot --nginx -d api.mycompany.com
sudo certbot --nginx -d dashboard.mycompany.com
sudo certbot --nginx -d blog.mycompany.com

# Verify auto-renewal
sudo certbot renew --dry-run
Certbot modifies your Nginx config files automatically, adding SSL directives and a redirect from HTTP to HTTPS. After running Certbot, your config looks like this (Certbot-added lines marked):
server {
server_name api.mycompany.com;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/api.mycompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.mycompany.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
server { # managed by Certbot
if ($host = api.mycompany.com) {
return 301 https://$host$request_uri;
}
listen 80;
server_name api.mycompany.com;
return 404;
}
WebSocket Proxying
If your application uses WebSockets (Socket.io, real-time features, chat apps), you need specific headers that most reverse proxy tutorials omit. Without these, the WebSocket handshake fails silently, and your application falls back to HTTP long-polling without any error message. I wasted an entire afternoon debugging this the first time:
server {
listen 443 ssl;
server_name chat.mycompany.com;
ssl_certificate /etc/letsencrypt/live/chat.mycompany.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/chat.mycompany.com/privkey.pem;
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
# WebSocket support
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Standard proxy headers
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# Keep WebSocket connections alive longer
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
The key lines are proxy_http_version 1.1, Upgrade $http_upgrade, and Connection "upgrade". These tell Nginx to pass the WebSocket upgrade request through to the backend instead of treating it as a normal HTTP request. The extended timeouts (86400s = 24 hours) prevent Nginx from closing idle WebSocket connections — the default 60-second timeout kills WebSocket connections that do not send data constantly.
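A hardcoded Connection "upgrade" header has one side effect: it is sent on every proxied request, not just WebSocket handshakes. The pattern from the Nginx documentation uses a map so that plain HTTP requests get Connection: close instead, a sketch:

```nginx
# In the http block (nginx.conf or a conf.d file)
map $http_upgrade $connection_upgrade {
    default upgrade;   # client asked for a protocol upgrade: pass it through
    ''      close;     # no Upgrade header: ordinary HTTP request
}
```

Then in the location block, use proxy_set_header Connection $connection_upgrade; instead of the literal "upgrade".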
Load Balancing Multiple Backends
When a single instance of your application cannot handle the traffic, run multiple instances and let Nginx distribute requests across them. This works whether your instances are on the same VPS (different ports) or on separate servers:
upstream app_backend {
# Round-robin by default
# Health check: mark a server as down after 3 failures within 30 seconds
server 127.0.0.1:3001 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3002 max_fails=3 fail_timeout=30s;
server 127.0.0.1:3003 max_fails=3 fail_timeout=30s;
# Optional: sticky sessions (for apps that store session state)
# ip_hash;
# Optional: least connections (best for varying response times)
# least_conn;
}
server {
listen 443 ssl;
server_name myapp.com;
ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
location / {
proxy_pass http://app_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
On a VPS, the most practical use case for load balancing is zero-downtime deployments. Run your app on port 3001 and 3002. Deploy the new version to 3002 first, verify it works, then deploy to 3001. At no point is the entire application down. If you are using Docker, docker compose up -d --scale app=3 creates three instances and Nginx distributes traffic across them. See our Docker on VPS guide for the full setup.
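The upstream block supports a few more per-server parameters that fit this deployment pattern. A sketch (the ports and weights are illustrative, not from the setup above):

```nginx
upstream app_backend {
    server 127.0.0.1:3001 weight=3;   # larger instance receives 3x the requests
    server 127.0.0.1:3002;            # weight defaults to 1
    server 127.0.0.1:3003 backup;     # receives traffic only when the others are down
}
```

The backup flag is a cheap way to keep an old version warm as a fallback during a deployment.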
Response Caching
Nginx can cache responses from your backend, dramatically reducing load on your application server. This is especially valuable on VPS hardware where CPU and RAM are limited:
# Add to the http block in /etc/nginx/nginx.conf
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
max_size=1g inactive=60m use_temp_path=off;
# In your server block
server {
listen 443 ssl;
server_name myapp.com;
# Cache API responses
location /api/ {
proxy_pass http://127.0.0.1:3000;
proxy_cache app_cache;
proxy_cache_valid 200 5m; # Cache 200 responses for 5 minutes
proxy_cache_valid 404 1m; # Cache 404s for 1 minute
proxy_cache_use_stale error timeout http_500 http_502 http_503;
add_header X-Cache-Status $upstream_cache_status;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Serve static files directly (bypass proxy)
location /static/ {
alias /var/www/myapp/static/;
expires 30d;
add_header Cache-Control "public, immutable";
}
}
The proxy_cache_use_stale directive is the hidden gem. If your backend crashes or becomes slow, Nginx serves cached responses instead of returning errors. Your users see slightly stale data instead of a 502 error page. On a VPS where your app shares resources with other services, this is cheap insurance against traffic spikes overwhelming your backend.
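One refinement worth sketching, assuming your app sets a session cookie on login (the cookie name "session" is an assumption, adjust to yours): make sure logged-in users never receive cached responses meant for anonymous visitors.

```nginx
# Inside the location /api/ block
proxy_cache_key "$scheme$request_method$host$request_uri";

# Skip the cache entirely when a session cookie is present
proxy_cache_bypass $cookie_session;   # do not serve this request from cache
proxy_no_cache $cookie_session;       # do not store this response in cache
```

Without this, a cached response generated for user A can be served to user B, which is far worse than a slow backend.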
Security Hardening
A properly configured Nginx reverse proxy is a security layer. An improperly configured one is a liability. Here is the hardened configuration I use on every production deployment. For broader server security, see our VPS security hardening guide.
# /etc/nginx/snippets/security-headers.conf
# Include this in every server block

# Prevent clickjacking
add_header X-Frame-Options "SAMEORIGIN" always;

# Prevent MIME type sniffing
add_header X-Content-Type-Options "nosniff" always;

# Enable HSTS (force HTTPS for 1 year)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;

# Basic CSP — customize for your app
add_header Content-Security-Policy "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline';" always;

# Referrer policy
add_header Referrer-Policy "strict-origin-when-cross-origin" always;

# Permissions policy
add_header Permissions-Policy "camera=(), microphone=(), geolocation=()" always;

# Hide Nginx version
server_tokens off;
# Use in your server blocks:
server {
listen 443 ssl;
server_name myapp.com;
include snippets/security-headers.conf;
# ... rest of config
}
Rate Limiting
# In the http block of nginx.conf
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=login:10m rate=1r/s;
# In your server block
location /api/ {
limit_req zone=api burst=20 nodelay;
proxy_pass http://127.0.0.1:3000;
}
location /login {
limit_req zone=login burst=5;
proxy_pass http://127.0.0.1:3000;
}
The login rate limit of 1 request per second with a burst of 5 stops brute-force attacks dead. The API rate limit of 10 requests per second with a burst of 20 handles normal traffic while preventing abuse. Adjust these numbers based on your traffic patterns — check your access logs for a week to understand normal usage before setting limits.
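Two related directives round this out. By default Nginx answers rate-limited requests with 503, which clients cannot distinguish from a real outage; 429 is the accurate status. And limit_conn caps concurrent connections per IP, which limit_req alone does not. A sketch:

```nginx
# In the http block
limit_conn_zone $binary_remote_addr zone=perip:10m;

# In the server block
limit_req_status 429;    # "Too Many Requests" instead of the default 503
limit_conn_status 429;
limit_conn perip 20;     # at most 20 simultaneous connections per client IP
```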
Performance Tuning for VPS
Nginx's defaults are conservative. On a VPS, you can squeeze more performance out of it with a few changes to /etc/nginx/nginx.conf:
# Top-level (main) context
worker_processes auto;  # Match CPU cores

# Inside the events block
worker_connections 1024;  # Per worker (increase to 2048 on 4+ vCPU)

# Inside the http block:

# Gzip compression
gzip on;
gzip_comp_level 5;
gzip_min_length 256;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml image/svg+xml;

# Sendfile for static content
sendfile on;
tcp_nopush on;
tcp_nodelay on;

# Keepalive
keepalive_timeout 65;
keepalive_requests 100;

# Client body size (increase for file uploads)
client_max_body_size 50M;
The gzip configuration alone typically reduces bandwidth usage by 60-80% for text-based responses. On providers like Vultr and DigitalOcean where bandwidth is metered, this directly saves money. On Contabo and BuyVM where bandwidth is effectively unlimited, it still reduces page load times for your users. For more optimization techniques, see our VPS performance tuning guide.
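One more http-block tweak that pays off when Nginx serves static files directly: caching open file descriptors, so frequently requested assets skip the open/stat syscalls on every hit. A sketch with moderate values:

```nginx
# In the http block
open_file_cache max=1000 inactive=20s;  # cache up to 1000 descriptors
open_file_cache_valid 30s;              # revalidate cached entries every 30s
open_file_cache_min_uses 2;             # only cache files requested at least twice
open_file_cache_errors on;              # also cache "file not found" lookups
```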
Troubleshooting Common Issues
502 Bad Gateway
This means Nginx cannot reach your backend application. Nine times out of ten, the app is not running:
# Check if your app is actually listening
ss -tlnp | grep 3000

# Check Nginx error log
sudo tail -20 /var/log/nginx/error.log

# Common causes:
# 1. App crashed — restart it
# 2. App listening on wrong port — check proxy_pass matches
# 3. App listening on 0.0.0.0 vs 127.0.0.1 — must match
# 4. SELinux blocking connections (CentOS/RHEL):
sudo setsebool -P httpd_can_network_connect 1
504 Gateway Timeout
Your backend is running but responding too slowly. Increase timeouts or fix the slow endpoint:
# Increase proxy timeouts
proxy_connect_timeout 300s;
proxy_send_timeout 300s;
proxy_read_timeout 300s;
413 Request Entity Too Large
# Increase max upload size
client_max_body_size 100M;
Configuration Testing
# Always test before reloading
sudo nginx -t

# If test passes, reload (not restart)
sudo systemctl reload nginx

# View current Nginx configuration (including included files)
sudo nginx -T
Real-World Multi-App Architecture on a Single VPS
The examples above show individual configurations. Here is what a real production setup looks like when you are running multiple applications on one VPS — the kind of setup I actually maintain.
The Stack: SaaS Application + Marketing Site + API + Admin Dashboard
This is a common pattern for small SaaS companies. Four applications, one VPS, one IP address. On a Hetzner CX22 ($4.59/mo, 2 vCPU, 4GB RAM), this runs comfortably with room to spare:
# Application inventory:
# app.saas.com   → Next.js on port 3000 (customer-facing app)
# www.saas.com   → Static marketing site on port 8080 (Hugo)
# api.saas.com   → Express.js API on port 4000
# admin.saas.com → Internal dashboard on port 5000 (React + Express)

# Shared snippet for all server blocks
# /etc/nginx/snippets/proxy-common.conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 60s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
# /etc/nginx/sites-available/app.saas.com
server {
listen 443 ssl http2;
server_name app.saas.com;
ssl_certificate /etc/letsencrypt/live/app.saas.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/app.saas.com/privkey.pem;
include snippets/security-headers.conf;
# Next.js with WebSocket support for HMR in development
location / {
proxy_pass http://127.0.0.1:3000;
include snippets/proxy-common.conf;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
# Serve Next.js static assets with aggressive caching
location /_next/static/ {
proxy_pass http://127.0.0.1:3000;
include snippets/proxy-common.conf;
expires 365d;
add_header Cache-Control "public, immutable";
}
}
# /etc/nginx/sites-available/api.saas.com
server {
listen 443 ssl http2;
server_name api.saas.com;
ssl_certificate /etc/letsencrypt/live/api.saas.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/api.saas.com/privkey.pem;
include snippets/security-headers.conf;
# CORS headers for API
add_header Access-Control-Allow-Origin "https://app.saas.com" always;
add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
add_header Access-Control-Allow-Headers "Authorization, Content-Type" always;
location / {
# Rate limit API requests
limit_req zone=api burst=30 nodelay;
proxy_pass http://127.0.0.1:4000;
include snippets/proxy-common.conf;
}
# Webhook endpoint with higher timeout
location /webhooks/ {
proxy_pass http://127.0.0.1:4000;
include snippets/proxy-common.conf;
proxy_read_timeout 300s;
}
}
# /etc/nginx/sites-available/admin.saas.com
server {
listen 443 ssl http2;
server_name admin.saas.com;
ssl_certificate /etc/letsencrypt/live/admin.saas.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/admin.saas.com/privkey.pem;
include snippets/security-headers.conf;
# IP whitelist for admin panel
allow 203.0.113.50; # Office IP
allow 198.51.100.25; # VPN IP
deny all;
location / {
proxy_pass http://127.0.0.1:5000;
include snippets/proxy-common.conf;
}
}
Notice the patterns: shared snippets to avoid repetition, IP whitelisting on the admin panel, CORS headers on the API, aggressive caching on static assets, and rate limiting on API endpoints. These are the details that separate a working config from a production-ready one.
Memory Footprint of This Setup
# Check what is using memory
ps aux --sort=-%mem | head -20

# Typical breakdown on a 4GB Hetzner CX22:
# Nginx:           ~12MB (all 4 server blocks)
# Next.js app:     ~180MB
# Express API:     ~80MB
# Admin dashboard: ~90MB
# Hugo (static):   ~5MB (barely anything)
# PostgreSQL:      ~120MB
# System/OS:       ~300MB
# Total:           ~787MB out of 4GB (plenty of headroom)
Nginx itself is a rounding error. The applications behind it use 50x more memory. This is why even a $4-5/mo VPS with 1GB RAM can run Nginx proxying one or two lightweight applications. The proxy overhead is effectively zero.
Provider Cost Comparison for Multi-App Hosting
For this four-app stack, you need at least 2GB RAM (tight) or 4GB (comfortable). Here is what it costs across providers:
| Provider | Plan | RAM | Price/mo | Notes |
|---|---|---|---|---|
| Hetzner | CX22 | 4GB | $4.59 | Best value, 20TB BW, 2 US DCs |
| Contabo | VPS S | 8GB | $6.99 | Most RAM, 200GB storage, slower IOPS |
| Hostinger | KVM 1 | 4GB | $6.49 | NVMe (65K IOPS), intro price |
| Vultr | 2GB plan | 2GB | $10.00 | Tight for 4 apps, 9 US DCs |
| Vultr | 4GB plan | 4GB | $20.00 | Comfortable, hourly billing |
| DigitalOcean | 4GB Droplet | 4GB | $24.00 | Best docs, managed DB available |
Advanced Nginx Patterns for VPS
Maintenance Mode Without Touching Your App
When you need to take an application down for maintenance, swap the proxy with a static page. No app restart needed:
# /etc/nginx/snippets/maintenance.conf
# Create this file and include it in any server block to enable maintenance mode
# Check for maintenance mode file
if (-f /var/www/maintenance.html) {
return 503;
}
error_page 503 @maintenance;
location @maintenance {
root /var/www;
rewrite ^(.*)$ /maintenance.html break;
}
# Enable maintenance mode:
echo '<h1>Back in 5 minutes</h1>' | sudo tee /var/www/maintenance.html

# Disable maintenance mode:
sudo rm /var/www/maintenance.html

# No Nginx reload needed — it checks the file on every request
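A useful variation, sketched below: let your own address through while everyone else sees the maintenance page. This replaces the single if check in the snippet above (the allowed IP is a placeholder, use your own):

```nginx
set $maint 0;
if (-f /var/www/maintenance.html) {
    set $maint 1;                     # maintenance file exists: default to 503
}
if ($remote_addr = 203.0.113.50) {    # placeholder: your own IP
    set $maint 0;                     # but let yourself through to test
}
if ($maint = 1) {
    return 503;
}
```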
Blue-Green Deployments with Nginx
Zero-downtime deployments without Docker or Kubernetes. Run two versions of your app simultaneously and switch traffic between them:
# /etc/nginx/conf.d/upstream-app.conf
# Blue is the current live version, Green is the new version
upstream app_live {
server 127.0.0.1:3001; # Blue (current)
# server 127.0.0.1:3002; # Green (new — uncomment to switch)
}
# In your server block:
location / {
proxy_pass http://app_live;
include snippets/proxy-common.conf;
}
# Deployment process:
# 1. Deploy new version to port 3002 (Green)
# 2. Test Green directly: curl http://localhost:3002/health
# 3. Edit upstream to switch from 3001 to 3002
# 4. Reload: sudo nginx -t && sudo systemctl reload nginx
# 5. Verify: curl -I https://myapp.com
# 6. If something is wrong, switch back to 3001 and reload
# 7. Once confirmed, stop the old version on 3001
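The edit-the-upstream step can be scripted. A small shell sketch: it comments out the old port and uncomments the new one in the upstream file (the file layout and ports are assumptions taken from this example; always run nginx -t before reloading):

```shell
# switch_upstream CONF OLD_PORT NEW_PORT
# Comments out "server 127.0.0.1:OLD_PORT;" and uncomments the NEW_PORT
# line in the given upstream file. Assumes the two-line Blue/Green layout
# shown above.
switch_upstream() {
    conf=$1; from=$2; to=$3
    sed -i \
        -e "s|^\([[:space:]]*\)server 127\.0\.0\.1:${from};|\1# server 127.0.0.1:${from};|" \
        -e "s|^\([[:space:]]*\)# server 127\.0\.0\.1:${to};|\1server 127.0.0.1:${to};|" \
        "$conf"
}
```

Usage: switch_upstream /etc/nginx/conf.d/upstream-app.conf 3001 3002 && sudo nginx -t && sudo systemctl reload nginx. Run it again with the ports swapped to roll back.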
Geo-Based Routing
If you run application instances in multiple locations (say, a Vultr VPS in New York and another in Los Angeles), Nginx can route users to the nearest one using the GeoIP module:
# Requires nginx-module-geoip2
# apt install libnginx-mod-http-geoip2
geoip2 /usr/share/GeoIP/GeoLite2-Country.mmdb {
$geoip2_data_country_code country iso_code;
}
map $geoip2_data_country_code $backend {
default http://127.0.0.1:3000; # Default backend
# Route based on geographic proximity
}
# This is more commonly handled at the DNS level (Cloudflare load balancing)
# but Nginx geo-routing is useful for path-based decisions within a single server
Request Logging for Debugging
Custom log formats that actually help you debug proxy issues:
# /etc/nginx/conf.d/logging.conf
log_format proxy_log '$remote_addr - [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'upstream: $upstream_addr '
'response_time: ${upstream_response_time}s '
'request_time: ${request_time}s '
'cache: $upstream_cache_status';
# Use in server blocks:
access_log /var/log/nginx/myapp-access.log proxy_log;
The upstream_response_time tells you how long your backend took to respond. If this number is high but request_time is only slightly higher, the bottleneck is your app, not Nginx. If request_time is much higher than upstream_response_time, the bottleneck is the client connection (slow upload, high latency). This distinction is invaluable when diagnosing performance problems.
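That comparison is easy to automate. A hedged sketch in shell: a small awk filter that prints every log line whose "response_time:" value (the upstream timing in the proxy_log format above) exceeds a threshold. The function name and log path are assumptions:

```shell
# slow_upstreams LOGFILE [THRESHOLD]
# Print requests whose upstream response time (the value after the
# "response_time:" label in the proxy_log format above) exceeded the
# threshold in seconds (default 1).
slow_upstreams() {
    awk -v limit="${2:-1}" '{
        for (i = 1; i <= NF; i++)
            if ($i == "response_time:") {
                t = $(i + 1)
                sub(/s$/, "", t)        # strip the trailing "s" from the value
                if (t + 0 > limit) print
            }
    }' "$1"
}
```

Usage: slow_upstreams /var/log/nginx/myapp-access.log 2 lists everything that took the backend more than two seconds.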
Need a VPS for Your Proxy Setup?
Nginx uses almost no resources. Even a $4-5/mo VPS can run Nginx as a reverse proxy for multiple applications. Here are the best options:
Frequently Asked Questions
What is a reverse proxy and why do I need one on my VPS?
A reverse proxy sits between the internet and your application servers. It receives all incoming requests and forwards them to the appropriate backend. You need one for: (1) SSL termination in one place. (2) Security — your app never directly faces the internet. (3) Multiple apps on one IP via domain-based routing. (4) Caching and compression. (5) Load balancing across multiple instances.
Nginx or Apache as a reverse proxy — which is better for VPS?
Nginx, without question. It uses an event-driven architecture that handles thousands of concurrent connections with 5-15MB of RAM. Apache's process-per-connection model uses 50-100MB+ for the same workload. On a VPS where RAM is limited and expensive, Nginx's efficiency matters. Apache is better only if you need .htaccess support or specific Apache modules.
How much RAM does Nginx use as a reverse proxy?
Very little. A typical Nginx reverse proxy uses 5-15MB of RAM with default worker configuration. Even under heavy load (1000+ concurrent connections), memory rarely exceeds 30MB. This makes Nginx ideal for VPS environments where RAM is shared with your applications. On a 1GB VPS from Vultr or Linode, Nginx's overhead is negligible.
Can I use Nginx to proxy multiple domains on one VPS?
Yes, this is one of the primary use cases. Create separate server blocks for each domain with different proxy_pass targets. For example, app1.com proxies to localhost:3000, app2.com to localhost:4000. All share the same IP and ports 80/443. Nginx routes traffic based on the Host header. I run six domains on a single Hetzner CX22 this way.
How do I set up SSL with Nginx reverse proxy?
Use Certbot with the Nginx plugin: sudo certbot --nginx -d yourdomain.com. Certbot modifies your Nginx config automatically, creates certificates, and sets up renewal. For multiple domains, add more -d flags. For wildcard certificates, use DNS validation. See our SSL certificates guide for advanced configurations.
Does Nginx reverse proxy add latency?
Negligible. Nginx adds roughly 0.1-0.5ms per request when proxying to a local backend. Network latency between user and server is 10-200ms, making the proxy overhead unmeasurable in practice. The caching and compression benefits typically result in faster overall response times despite the proxy hop.
How do I proxy WebSocket connections through Nginx?
Add the Upgrade and Connection headers: proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_http_version 1.1;. Without these, WebSocket handshakes fail silently. Also increase proxy_read_timeout to prevent Nginx from closing idle WebSocket connections (default 60 seconds is too short for persistent connections).