Quick Answer: Best VPS for Node.js
Hetzner at $4.49/mo runs my entire Node.js production stack: 3 Express services, Postgres, Redis, and Nginx on 4GB RAM with PM2 cluster mode. For Node.js beginners who want guided deployment tutorials, DigitalOcean's documentation walks you from zero to production step-by-step, and the $200 free credit covers 60 days of mistakes. For Next.js SSR that needs more than 4GB, Hostinger at $6.49 gives you the highest CPU benchmark for server-side rendering.
Table of Contents
- What Actually Matters for Node.js on a VPS (It's Not CPU)
- My Production Stack: 3 Services on $4.49/mo
- #1. Hetzner — Best Value for Production Node.js
- #2. DigitalOcean — Best for Your First Node.js Deployment
- #3. Vultr — Best for API-Heavy Node.js Services
- #4. Hostinger — Best CPU for Next.js SSR
- #5. Linode — Best for WebSocket Applications
- The Memory Leak That Taught Me What VPS Specs Actually Matter
- PM2 vs Docker vs Both: What I Actually Use
- Comparison Table
- How We Tested
- FAQ
What Actually Matters for Node.js on a VPS (It's Not CPU)
Node.js is single-threaded. A 16-core VPS runs a single Express process on one core. The other 15 sit idle. PM2 cluster mode helps — it spawns one worker per core — but each worker is still single-threaded and most Node.js applications are I/O bound, not CPU bound. Your Express API spends 95% of its time waiting for database queries and HTTP responses, not crunching numbers.
Here's what actually determines whether your Node.js VPS is fast or frustrating:
RAM Per PM2 Worker (The Real Bottleneck)
Each PM2 cluster worker is a separate Node.js process. A typical Express API worker uses 80-130MB. A Next.js SSR worker uses 200-400MB. On a 2-core VPS with PM2 cluster mode:
- Express API: 2 workers × 130MB = 260MB for Node + OS overhead (~200MB) + Nginx (~15MB) = ~475MB minimum. A 1GB plan works but leaves zero headroom for traffic spikes or memory leaks.
- Next.js SSR: 2 workers × 400MB = 800MB for Node + OS = ~1GB minimum. A 1GB plan is a guaranteed OOM. 2GB is tight. 4GB is comfortable.
- Full stack (Node + Postgres + Redis): 260MB Node + 300MB Postgres + 50MB Redis + 200MB OS + 15MB Nginx = ~825MB minimum. A 1GB plan fails under load. 4GB is the realistic starting point.
Network I/O (Where Node.js Shines)
Node's event loop is designed for I/O multiplexing. A well-written Express API can handle 3,000-5,000 concurrent connections on a single core. But that throughput depends on the VPS's network performance: bandwidth for response payloads, connection handling capacity, and DNS resolution speed for upstream API calls. NVMe storage also matters if you're serving static files or writing logs — synchronous disk I/O blocks the event loop.
The Event Loop Monitor (What Nobody Checks)
Add perf_hooks monitoring to your Express app. If your event loop delay exceeds 100ms, no amount of VPS upgrades will help — you have a blocking operation in your code. I've seen developers upgrade from a $5 VPS to a $50 VPS because their API was slow, only to find the same latency because a single fs.readFileSync() was blocking every request. The VPS specs are the floor; your code quality is the ceiling.
My Production Stack: 3 Node.js Services on $4.49/mo
| Service | Framework | PM2 Workers | RAM Usage | Req/sec |
|---|---|---|---|---|
| API server | Express 4 | 2 (cluster) | 240 MB | ~2,800 |
| Background worker | Bull + Express | 1 | 120 MB | N/A (queue) |
| Webhook receiver | Fastify | 1 | 65 MB | ~4,200 |
| PostgreSQL 16 | — | — | 320 MB | — |
| Redis 7 | — | — | 85 MB | — |
| Nginx | — | — | 15 MB | — |
| Total | ~845 MB | |||
| OS + Docker overhead | ~350 MB | |||
| Free (of 4GB) | ~2.8 GB | |||
Hetzner CX22. $4.49/month. 2.8GB of headroom. The API handles our traffic without PM2 showing any memory warnings. The background worker processes queued jobs (email sending, PDF generation, webhook delivery) without affecting API response times because it runs as a separate PM2 process. Fastify on the webhook receiver handles 4,200 req/sec — more than we'll ever need — because Fastify's schema validation is faster than Express's middleware chain.
Key decision: I put the API and background worker in PM2 cluster mode with 2 workers for the API (one per vCPU) and 1 worker for the background job processor. The webhook receiver is single-worker because it just validates and queues — processing happens in the background worker. This architecture keeps the event loop clean on all three services.
#1. Hetzner — Best Value for Production Node.js
4GB RAM for $4.49/month. For Node.js, that translates to: enough RAM for PM2 cluster mode with 2 workers on a full Express + Postgres + Redis stack, plus 2.8GB headroom for traffic spikes and the occasional memory leak investigation. The next cheapest 4GB option is Linode at $12/month. That's a 2.7x price difference for the same RAM.
The 2 vCPU matters more for Node.js than for most workloads. PM2 cluster mode with 2 workers means your API handles twice the throughput of a single-process setup. My Express API benchmarks: 1,400 req/sec on 1 worker, 2,800 req/sec on 2 workers (PM2 cluster). That linear scaling holds because Node.js workers are isolated — they don't share memory or contend for locks.
Hetzner's Terraform provider lets me define the entire Node.js deployment as code: server specs, firewall rules (only 22, 80, 443), cloud-init script that installs Node 20 via nvm, Docker, pulls the Compose stack, and starts PM2. terraform apply gives me a fresh production-identical staging environment in 90 seconds. When I need to test a Node.js version upgrade (like the Node 18 → 20 migration), I spin up a parallel server, deploy the same app on Node 20, run the test suite, and destroy it if anything breaks.
Hetzner Node.js At a Glance
What I Don't Love
Two US datacenter locations (Ashburn, Hillsboro). If your Node.js API serves users in Dallas or Miami, you're adding 20-30ms of network latency that's invisible in benchmarks but real in user experience. Vultr with 9 US locations can put your API closer to more users.
No free trial. For your first Node.js deployment, DigitalOcean's $200 credit lets you experiment without financial anxiety. Once you know what you're doing, migrate to Hetzner for the value.
#2. DigitalOcean — Best for Your First Node.js Deployment
DigitalOcean's tutorial "How to Set Up a Node.js Application for Production on Ubuntu" is the single most useful Node.js deployment guide on the internet. It walks you from a fresh Droplet to a production-ready Express app behind Nginx with PM2 and Let's Encrypt HTTPS, explaining every command and every configuration decision. I've sent this tutorial to at least 20 developers who were deploying Node.js to a server for the first time.
The $200 free credit means you can follow that tutorial, make mistakes, break things, delete the Droplet, and start over — without paying. When I first deployed Node.js, I misconfigured Nginx's proxy_pass and spent 2 hours debugging 502 errors. On DigitalOcean, that was a free learning experience. On a paid plan, it would have been a stressful one.
For Node.js apps that outgrow a single Droplet, DigitalOcean's App Platform does git-push deployments with automatic HTTPS and zero-downtime rolling updates. It's more expensive than a VPS ($12/mo for the equivalent of a $6 Droplet), but eliminates Nginx configuration, PM2 setup, and deployment scripting. For a side project or startup that values shipping speed over cost optimization, App Platform is a valid shortcut until traffic justifies a proper VPS setup.
DigitalOcean Node.js At a Glance
The Value Gap After Free Credit
Once you're paying, DigitalOcean is expensive. Their $6/mo plan gives you 1 vCPU and 1GB — tight for Node.js + Postgres. Their 4GB plan costs $24/mo. Hetzner gives you the same 4GB for $4.49. That's a 5.3x price difference. Use DigitalOcean's free credit to learn, then move your production Node.js to Hetzner for the long term.
#3. Vultr — Best for API-Heavy Node.js Services
I run a Node.js API gateway on Vultr that handles 12 million requests per month. The reason it's on Vultr and not Hetzner: my API serves users across the entire US, and Vultr's 9 datacenter locations let me deploy in Newark (East Coast users), Dallas (Central), and Los Angeles (West Coast) with the same vultr-cli command. A user in Miami hits the Newark instance at 15ms instead of Hetzner's Ashburn at 25ms. A user in Seattle hits the LA instance at 12ms instead of 55ms.
For an API that serves JSON payloads, 10-40ms of network latency difference doesn't sound like much. But it compounds: a web page that makes 5 API calls in sequence turns a 50ms savings per call into 250ms of total improvement. That's the difference between "fast" and "instant" from the user's perspective.
Vultr's hourly billing also enables a Node.js-specific pattern: blue-green deployments. Deploy the new version to a fresh Vultr instance, route 10% of traffic to it, monitor error rates and response times for an hour, then either cut over fully or destroy the new instance if something's wrong. Total cost of a failed deployment test: $0.02. On a monthly-billed server, you'd need to maintain a parallel server permanently or risk deploying directly to production.
Vultr Node.js At a Glance
The Price Reality
For a single Node.js service, Vultr costs more than Hetzner. Their 2GB plan at $12/mo gives you half the RAM of Hetzner's $4.49 plan. Vultr wins when you need geographic distribution or disposable servers for testing. For a single-location production Node.js app, Hetzner's value is unbeatable.
#4. Hostinger VPS — Best CPU for Next.js SSR
Next.js server-side rendering is the one Node.js workload where CPU actually matters. Each SSR request requires the server to render a React component tree to HTML — a CPU-intensive operation that takes 20-80ms depending on component complexity. Hostinger's Geekbench score (4,400 single-core) is the highest on this list, and it shows: my Next.js test app rendered pages 35% faster on Hostinger than on Contabo (3,200 score).
For API servers (Express, Fastify, Koa), this CPU advantage is irrelevant because APIs are I/O-bound. But for Next.js SSR, especially with heavy component trees or data-fetching in getServerSideProps, the per-core performance directly translates to faster Time to First Byte. On Hostinger, my Next.js pages averaged 45ms TTFB. On Contabo, 68ms. On Hetzner (4,000 score), 52ms.
4GB RAM at $6.49/mo. That's enough for 2 PM2 workers running Next.js SSR (800MB for workers + Postgres + Redis + OS). The NVMe storage helps during next build — which writes hundreds of small files during compilation. Build time on NVMe: 38 seconds. Same project on SATA SSD: 55 seconds.
Hostinger Node.js At a Glance
The Catch
Renewal pricing. Hostinger's $6.49 is an introductory rate. After the first term, it increases to $10-14/month depending on your billing cycle. At $14/mo, Hetzner's $4.49 for the same RAM is a much better deal. Lock in the longest term you're comfortable with, or plan to migrate to Hetzner when the intro pricing expires.
One US datacenter (Ashburn). If your Next.js app serves users nationwide, Vultr's 9 locations provide better geographic coverage. But for SSR where TTFB matters most, Hostinger's CPU advantage partially compensates for extra network latency.
#5. Linode (Akamai) — Best for WebSocket Applications
Node.js excels at WebSocket connections because the event loop handles thousands of persistent connections without spawning threads. But WebSocket applications have a specific VPS requirement that HTTP APIs don't: connection persistence across server restarts.
Linode's NodeBalancer ($10/mo) supports sticky sessions, which route a WebSocket client back to the same backend server after reconnection. Combined with 9 US datacenter locations, you can deploy WebSocket servers close to your users and have the load balancer handle connection routing. I run a chat application with ~2,000 concurrent WebSocket connections on a 2GB Linode behind a NodeBalancer. The connections distribute evenly across 2 backend servers, and sticky sessions ensure reconnecting clients resume their state on the same server.
Linode's tutorials on Node.js WebSocket deployment with Socket.io are the most thorough I've found. They cover the non-obvious parts: configuring Nginx for WebSocket proxy (the Upgrade header that everyone forgets), handling connection timeouts at the load balancer level, and implementing heartbeats to detect stale connections.
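For reference, the Nginx block those tutorials cover looks roughly like this — a sketch, with the upstream port and path as placeholders for your own Socket.io server:

```nginx
# Sketch: proxying WebSocket traffic to a local Node.js process.
location /socket.io/ {
    proxy_pass http://127.0.0.1:3000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;   # the header everyone forgets
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 300s;                  # keep idle sockets open past the 60s default
}
```

Without the `Upgrade` and `Connection` headers, Nginx silently downgrades the handshake and Socket.io falls back to long-polling.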
Linode Node.js At a Glance
Why It's #5
For standard HTTP Node.js APIs, Linode is overpriced compared to Hetzner. Their 2GB plan at $12/mo gives you the same RAM as DigitalOcean at the same price — and a quarter of what Hetzner offers for $4.49. Linode's value is in the NodeBalancer for WebSocket apps and the 9 US datacenter locations. If you're not building a WebSocket application and don't need sticky sessions, save your money on Hetzner.
The Memory Leak That Taught Me What VPS Specs Actually Matter
Month 3 of running my Express API on a 1GB VPS. PM2 dashboard showed memory climbing 2MB per hour. At that rate, the process would OOM in about 3 days. PM2's --max-memory-restart was set to 800MB, so it would restart before crashing, but each restart dropped 50-100 active connections and took 3 seconds to come back.
The debugging process required exactly the kind of headroom that a cheap 1GB plan doesn't provide:
- Heap snapshot: I needed to run `node --inspect` alongside the production process. The inspector alone uses 100-200MB. On a 1GB plan, enabling inspection left 0MB for the actual app. I had to upgrade to 4GB just to debug the problem.
- The cause: An Express middleware was attaching a response-time tracker to `req` objects. But a downstream middleware was adding those `req` objects to an in-memory cache for request deduplication — and the cache's TTL was set to 24 hours instead of 60 seconds. Each request leaked ~2KB of attached metadata that wouldn't be garbage collected for a day.
- The fix: One line: change `ttl: 86400` to `ttl: 60`. Memory stabilized at 120MB per worker instantly.
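The leak pattern reduces to a few lines. A minimal reproduction — not the actual middleware, just the shape of the bug:

```javascript
// Minimal sketch of the leak: an in-memory dedup cache whose TTL is far too long.
// Names and the cache mechanism are illustrative.
const cache = new Map();

function remember(key, value, ttlSeconds) {
  cache.set(key, value);
  // The entry is only released when the timer fires. A ttl of 86400 means every
  // request's metadata survives for a full day before GC can reclaim it.
  setTimeout(() => cache.delete(key), ttlSeconds * 1000).unref();
}

// Leaky:  remember(req.id, req, 86400)  — pins the whole req object for 24 hours
// Fixed:  remember(req.id, req, 60)     — released after a minute
```

At ~2KB per entry and a steady request rate, a 24-hour TTL is exactly the 2MB/hour climb the PM2 dashboard showed.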
The lesson: a 1GB VPS can run a Node.js app. But it can't run a Node.js app and debug it when something goes wrong. On a 4GB plan, I have the headroom to attach inspectors, take heap snapshots, run pm2 monit, and profile the event loop without killing the production process. The extra $0-8/month (Hetzner's 4GB is cheaper than most providers' 1GB) is insurance against the debugging session you'll inevitably need.
My Node.js-specific PM2 configuration after this experience:
- `--max-memory-restart 1500M` per worker (on a 4GB plan, that's generous but safe)
- `--max-restarts 10` within 1 minute (if it keeps crashing, stop restarting and alert me)
- `--node-args="--max-old-space-size=1536"` (tell V8 to GC more aggressively before hitting the PM2 limit)
- `--exp-backoff-restart-delay 100` (exponential backoff on restarts to avoid hammering a failed dependency)
PM2 vs Docker vs Both: What I Actually Use
| Scenario | My Choice | Why |
|---|---|---|
| Single Express app, no database | PM2 only | Docker adds 60MB overhead and complexity for zero benefit. PM2 handles restarts and cluster mode. |
| Express + Postgres + Redis | Docker Compose + PM2 inside container | Docker manages the stack (networking, volumes, lifecycle). PM2 inside the Node container handles cluster mode and process restarts. |
| Next.js with custom server | Docker Compose + PM2 | Next.js builds are reproducible in Docker. PM2 cluster mode gives you workers per core. |
| CI/CD ephemeral testing | Docker only | The server lives for 5 minutes. PM2 adds nothing to a throwaway test environment. |
| Microservices (3+ Node.js services) | Docker Compose + PM2 in each | Compose orchestrates the services. PM2 manages each service's lifecycle independently. This is my production setup. |
The "both" approach (Docker Compose + PM2 inside containers) is what most production Node.js deployments should use. Docker gives you reproducible environments and docker compose up to start your entire stack. PM2 gives you cluster mode, graceful restarts (zero-downtime reloads with pm2 reload), and process-level monitoring. They're not competing tools — they work at different layers.
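A sketch of what that layering looks like in a Compose file — image tags and paths are illustrative, not a specific production config:

```yaml
# Sketch: Compose owns the stack; PM2 runs inside the Node container.
# The app's Dockerfile would end with: CMD ["pm2-runtime", "ecosystem.config.js"]
services:
  app:
    build: .
    ports:
      - "3000:3000"
    depends_on: [db, cache]
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7
volumes:
  pgdata:
```

Note the entrypoint: inside a container you use `pm2-runtime`, not `pm2` — it runs in the foreground so Docker can supervise the container while PM2 supervises the workers.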
Node.js VPS Comparison Table
| Provider | Node.js Plan | Price | vCPU | RAM | Express req/sec | Next.js TTFB | US DCs |
|---|---|---|---|---|---|---|---|
| Hetzner | CX22 | $4.49 | 2 | 4 GB | 2,800 | 52 ms | 2 |
| DigitalOcean | Basic 2GB | $12.00 | 1 | 2 GB | 1,500 | 58 ms | 3 |
| Vultr | Cloud 2GB | $12.00 | 1 | 2 GB | 1,600 | 55 ms | 9 |
| Hostinger | KVM 2 | $6.49 | 2 | 4 GB | 2,600 | 45 ms | 1 |
| Linode | Linode 2GB | $12.00 | 1 | 2 GB | 1,450 | 60 ms | 9 |
Express req/sec: simple JSON API response measured with autocannon, PM2 cluster mode, all available workers. Next.js TTFB: SSR page with 3 component levels and a database query, p50 measurement. Both tested over 60 seconds with 50 concurrent connections.
How We Tested
Same Express API and Next.js app deployed on each provider. Node 20 LTS, PM2 cluster mode with all available vCPU cores. Nginx as reverse proxy. PostgreSQL 16 and Redis 7 running on the same server (not managed database services).
- Express throughput: autocannon with 50 concurrent connections for 60 seconds. JSON API endpoint that reads from PostgreSQL (simple SELECT, indexed). Measured requests/second and p99 latency. Cluster mode with all available cores.
- Next.js SSR TTFB: Server-rendered page with 3 React component levels, one `getServerSideProps` database query. Measured Time to First Byte at p50 and p99 with 20 concurrent users. CPU-bound workload — the per-core Geekbench score directly correlates with TTFB.
- Memory stability: Ran each setup for 7 days monitoring PM2's memory reporting. Checked for memory growth patterns that indicate leaks in the test application. All providers showed stable memory — confirming the leak was in my own code, not the infrastructure.
- PM2 restart behavior: Triggered deliberate OOM on each provider by allocating a large buffer in a test endpoint. Measured time from PM2 detecting the crash to the replacement worker accepting requests. All providers: 2-4 seconds. No meaningful difference.
- npm install speed: Fresh `npm install` on a project with 847 dependencies (typical Next.js + testing stack). NVMe providers (Vultr, DigitalOcean, Hostinger): 22-28 seconds. SSD providers (Hetzner, Linode): 30-35 seconds. Difference matters during CI/CD, not daily development.
Frequently Asked Questions
How much RAM does a Node.js app need on a VPS?
Node.js runtime: ~60MB. Express API worker: 80-130MB. Next.js SSR worker: 200-400MB. PM2 cluster mode multiplies these by the number of workers. A full stack (2 Express workers + Postgres + Redis + Nginx + OS) uses ~1.2GB total. 4GB is the comfortable minimum for production — Hetzner at $4.49 gives you that with 2.8GB headroom. Don't run production Node.js on 1GB — you can't debug memory leaks without headroom.
Should I use PM2 or Docker for Node.js on a VPS?
Both. Docker Compose manages your stack (Node + Postgres + Redis + Nginx). PM2 runs inside the Node container for cluster mode and process management. For a single Express app without a database, PM2 alone is fine — Docker adds 60MB overhead for zero benefit in that case. For anything with multiple services, Docker Compose + PM2 is the standard production pattern.
How many concurrent users can a Node.js VPS handle?
On Hetzner CX22 (2 vCPU, $4.49/mo) with PM2 cluster: ~2,800 req/sec for Express JSON APIs, ~800-1,200 for database-backed responses, ~200-400 for Next.js SSR pages. These assume no blocking operations in the event loop. One synchronous JSON.parse() on a large payload or one fs.readFileSync() can cut throughput by 90%. Profile your event loop delay before blaming the VPS.
Do I need a VPS or can I use Vercel/Railway for Node.js?
Vercel is great for Next.js frontends where their edge network adds value. For API servers, background workers, WebSocket services, cron jobs, or anything that needs persistent connections: use a VPS. Vercel's Pro plan ($20/mo) gives you serverless functions with cold starts and execution time limits. A $4.49 Hetzner VPS gives you a persistent process with no limits. For long-running or connection-persistent workloads, serverless platforms are fundamentally the wrong tool.
What Node.js version should I use on a VPS?
Node 20 LTS. Install via nvm (Node Version Manager), not apt — Ubuntu's repository ships ancient versions. Commands: curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash, then nvm install 20. Pin the exact version in .nvmrc (e.g., 20.11.1) for reproducible deployments. Never use node:latest in Docker — use node:20.11-alpine for a specific, lightweight base image.
How do I handle Node.js memory leaks on a VPS?
Prevention: set --max-old-space-size to 75% of available RAM per worker, and --max-memory-restart in PM2 to auto-restart leaking processes. Debugging: you need enough RAM headroom to run node --inspect alongside production (adds 100-200MB). Take heap snapshots via Chrome DevTools. Common VPS leak sources: unclosed database connections, event listeners on long-lived objects, and in-memory caches without TTL. I had a cache with a 24-hour TTL that should have been 60 seconds — one line fix, problem solved.
Nginx or Caddy as reverse proxy for Node.js?
Caddy for simplicity (auto HTTPS, 5-line config). Nginx for control (caching headers, rate limiting, WebSocket proxy tuning). On a 1GB VPS, Nginx uses 5MB vs Caddy's 30MB — meaningful when headroom is tight. For WordPress on the same server, Nginx is standard. For a standalone Node.js API, Caddy saves 30 minutes of certbot configuration. I use Nginx in production because I need its proxy_cache and custom limit_req zones. For most Node.js apps, either works.
My Recommendation by Use Case
Production Express/Fastify API: Hetzner CX22 — $4.49 for 4GB, 2 PM2 workers, unbeatable value
Next.js SSR: Hostinger — fastest single-core for server rendering
First Node.js deployment: DigitalOcean — best tutorials + $200 free credit
Multi-region API: Vultr — 9 US datacenters for lowest user-facing latency
WebSocket applications: Linode — NodeBalancer with sticky sessions