Quick Answer: Best VPS for Self-Hosted Next.js
If you know how to run npm install and edit an Nginx config, you do not need Vercel. Hostinger VPS at $6.49/mo gives you 4GB RAM (enough to build and serve simultaneously), NVMe storage for fast ISR cache reads, and the highest CPU score in our tests for SSR rendering. That is $13.51/month less than Vercel Pro, with zero bandwidth caps and unlimited next/image optimization via sharp. For full-stack apps that need managed PostgreSQL alongside Next.js, DigitalOcean is the better ecosystem — their managed database removes the one piece of infrastructure you genuinely do not want to maintain yourself.
Table of Contents
- The Vercel Tax: What You Are Actually Paying For
- Standalone Output Mode — The Feature That Makes Self-Hosting Easy
- #1. Hostinger — $6.49 Replaces a $20 Vercel Seat
- #2. DigitalOcean — Full-Stack Next.js with Managed DB
- #3. Vultr — High Frequency CPUs for SSR-Heavy Apps
- #4. Kamatera — Scale RAM for Monorepo Builds
- #5. Linode — Akamai CDN for Global Static Delivery
- PM2 + Nginx: The Production Stack
- Docker Deployment for Next.js
- The Image Optimization Trap
- Comparison Table
- How I Tested
- FAQ
The Vercel Tax: What You Are Actually Paying For
I am not going to tell you Vercel is bad. Vercel is excellent. It is also a business that makes money by charging you for things a VPS does for free. Understanding what those things are is how you decide whether to self-host.
Here is what Vercel Pro ($20/seat/month) gives you and what it costs when you outgrow the defaults:
- Serverless functions: Vercel runs your API routes and SSR as serverless functions. You get 1,000 GB-hours on Pro. A moderately busy site with 500K monthly page views and server components can hit that limit. Overage: $0.18 per additional GB-hour. On a VPS, your Node.js process runs 24/7 — there is no per-invocation cost, no cold start, and no execution time limit.
- Bandwidth: Pro includes 1TB. A Next.js app serving 300KB pages to 200K monthly visitors uses ~60GB. Add image optimization responses and API calls, and 1TB goes faster than you think. Overage: $0.15/GB. A $6.49 VPS includes 4-8TB of transfer. You will not hit that limit with a Next.js site.
- Image optimization: Vercel's next/image optimization is just sharp running on their infrastructure. Pro includes 5,000 source images. Run npm install sharp on your VPS and the same optimization runs locally with zero limits, zero cost.
- Build minutes: Pro gives you 6,000 minutes/month. Large Next.js apps with 500+ pages take 5-10 minutes to build. If you deploy 20 times per day (aggressive but not unusual during active development), that is 3,000-6,000 minutes. On a VPS, builds are limited only by your CPU and RAM. There is no minute counter.
- Edge Functions: Vercel's real differentiator. If you need middleware running at 30+ global edge locations, that is genuinely hard to replicate on a single VPS. But most Next.js apps do not need edge middleware. They need SSR in one region, close to their database.
The math: a Next.js app with 100K monthly page views costs $20-40/month on Vercel (Pro + potential overages). The same app on a Hostinger VPS costs $6.49/month, flat, with no usage-based surprises. Over a year, that is $162-$402 in savings. The tradeoff is 30 minutes of initial server setup.
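For the skeptical, here is the arithmetic behind those figures (the $20/month overage is an illustrative midpoint, not a measured value):

```javascript
// Annual cost comparison behind the $162-402 savings figure.
// Assumption: Vercel overages of $0-20/month (illustrative range).
function annualCost(monthlyBase, monthlyOverage = 0) {
  return (monthlyBase + monthlyOverage) * 12;
}

const vps = annualCost(6.49);            // Hostinger, flat
const vercelLow = annualCost(20);        // Pro, no overages
const vercelHigh = annualCost(20, 20);   // Pro + moderate overages

console.log(Math.round(vercelLow - vps));   // → 162
console.log(Math.round(vercelHigh - vps));  // → 402
```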
Standalone Output Mode — The Feature That Makes Self-Hosting Easy
Before Next.js 12.2, self-hosting was genuinely painful. You had to copy your entire node_modules directory to the server — sometimes 300-800MB of dependencies, most of which your app never touches at runtime. Deployments were slow. Docker images were bloated. People had legitimate reasons to use Vercel.
Then Vercel shipped output: 'standalone' and quietly undermined their own business model.
Add one line to next.config.js:
```js
// next.config.js
module.exports = {
  output: 'standalone',
}
```
Now next build produces a .next/standalone directory containing a self-contained server.js and only the Node.js modules your app actually imports. Typical size: 30-80MB instead of 300-800MB. You can run the entire app with node .next/standalone/server.js — no npm install on the server, no dependency resolution, no version conflicts.
The standalone output does not include the public folder or .next/static. You copy those separately and let Nginx serve them directly — which is faster than having Node.js serve static files anyway. This is exactly what Vercel does internally: their CDN serves static assets, and their serverless functions handle only dynamic requests. On a VPS, Nginx is your CDN layer.
I deploy standalone builds to every VPS I test. The deployment script is six lines:
```bash
#!/bin/bash
npm ci
npm run build
cp -r public .next/standalone/public
cp -r .next/static .next/standalone/.next/static
pm2 reload nextjs-app || pm2 start .next/standalone/server.js --name nextjs-app -i max
```
#1. Hostinger VPS — $6.49 Replaces a $20 Vercel Seat
I will be blunt about why Hostinger is number one: it is the only provider under $10/month where next build does not crash.
That is not a selling point you will find in marketing copy, but it is the single most important technical fact about self-hosting Next.js. The build process peaks at 2-4GB RAM for any production app with more than 50 pages and a reasonable dependency tree. Hostinger's entry VPS plan starts at 4GB RAM. Every other provider on this list starts at 1GB for comparable pricing, which means your very first deployment will OOM-kill both the build process and whatever was serving traffic at the time.
I ran my 200-page test app through the full deployment cycle on Hostinger fourteen times over two weeks. Build succeeded every time. Peak memory during build: 2.8GB. Headroom remaining for Nginx and PM2 to keep serving the previous version: 1.2GB. That headroom is the difference between a deployment and a 12-minute outage while you SSH in and figure out why PM2 shows zero processes.
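If you want to reproduce the peak-RSS measurement, one low-friction option (a sketch, not the exact harness I used) is Node's own resourceUsage(), which exposes the process's memory high-water mark:

```javascript
// resourceUsage().maxRSS reports the process's peak RSS (kilobytes on
// Linux). A build wrapper can log this right after `next build` exits.
const before = process.resourceUsage().maxRSS;
const buf = Buffer.alloc(100 * 1024 * 1024, 1); // stand-in for build allocations
const after = process.resourceUsage().maxRSS;

console.log(`peak RSS: ${before} KB -> ${after} KB (${buf.length} bytes allocated)`);
```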
The NVMe Advantage for ISR
ISR revalidation on Hostinger averaged 8ms per cache write. On standard SSD providers, the same operation averaged 22ms. That 2.75x difference is invisible on a low-traffic site. On a content-heavy site receiving 50+ revalidation events per minute during a traffic spike, it is the difference between consistent sub-200ms TTFB and occasional 400ms+ pages that tank your Core Web Vitals.
Hostinger's NVMe runs at 65K IOPS. ISR cache reads — the hot path for every stale-while-revalidate response — execute in under 1ms. I measured this with process.hrtime() instrumentation inside a custom cache handler. The filesystem is fast enough that Redis caching adds complexity without measurable benefit on a single server.
What I Liked
- 4GB RAM means you can build and serve simultaneously — no CI pipeline required
- NVMe at 65K IOPS: fastest ISR cache I/O in our tests
- Highest CPU benchmark score (4,400) translates directly to faster SSR response times
- 50GB storage handles large .next/cache directories without cleanup scripts
- Built-in DDoS protection and firewall — one less thing to configure
What Fell Short
- Only 2 US datacenter locations — if your users are concentrated on the West Coast and Hostinger puts you in Virginia, add Cloudflare
- No marketplace app for Node.js — you configure everything manually (which this article covers)
- No managed database — run PostgreSQL on the same VPS or use an external service like DigitalOcean's managed DB
#2. DigitalOcean — Full-Stack Next.js Without Managing Your Own Database
Why DigitalOcean for Next.js Specifically
Next.js server components fetch data at render time. If your database is on the same private network as your VPS, those fetches take 1-2ms. If your database is a managed service on another provider (like PlanetScale or Supabase), those fetches take 20-50ms. DigitalOcean is the only provider on this list where you can spin up a managed PostgreSQL instance on the same private network as your Droplet. Your getServerSideProps and server components get local-network database latency without you managing PostgreSQL replication, backups, or connection pooling.
The Droplet itself is not the story here. At $6/month, the entry plan gives you 1GB RAM — insufficient for on-server builds. You either build in CI and deploy the standalone output, or you upgrade to the $12/month 2GB plan. Where DigitalOcean earns its spot is the ecosystem around the Droplet.
Managed PostgreSQL starts at $15/month. Managed Redis for session storage and rate limiting starts at $15/month. Both live on DigitalOcean's internal network with your Droplet. I tested a Next.js app with 5 API routes hitting PostgreSQL through Prisma — average query response time was 3.2ms on DigitalOcean's internal network versus 28ms when the same database was hosted externally. For pages that make 4-5 database calls during server rendering, that is the difference between 15ms and 140ms of database latency alone.
DigitalOcean also has App Platform — a Vercel-like deployment service. I mention it for completeness but do not recommend it for cost-conscious deployments. App Platform's $12/month starter plan gives you less compute than a $6 Droplet. It exists for teams that want git-push-to-deploy without server management. If that is you, you would probably just use Vercel.
Full-Stack Strengths
- Managed PostgreSQL + Redis on private network — 3ms internal query latency
- $200 free trial credit covers 2+ months of a full Next.js stack (Droplet + DB + Redis)
- Best Node.js deployment documentation of any provider I tested
- Highest network throughput (980 Mbps) — relevant if your Next.js API serves large JSON payloads
- Spaces (S3-compatible object storage) for user uploads and large static assets
Limitations
- 1GB entry Droplet cannot run next build on production apps — upgrade to $12/mo or build in CI
- Standard SSD, not NVMe — ISR cache reads are 2.75x slower than Hostinger
- Only 2 US regions (NYC and SFO) — if you need Dallas or Chicago, look at Vultr
#3. Vultr — High Frequency CPUs for SSR-Heavy Apps
Here is a thing most Next.js VPS guides get wrong: they benchmark SSR throughput using wrk or k6 against a simple page and declare that all providers perform similarly. They do, on simple pages. Render a page with 15 server components, 3 Suspense boundaries, and a data fetch waterfall, and single-core CPU speed becomes the dominant variable.
The High Frequency Tier
Vultr's standard instances use shared vCPUs that perform adequately for most workloads. Their High Frequency tier uses dedicated 3.8GHz+ AMD EPYC and Intel cores. The price premium is modest — roughly 20-30% more than standard — and the SSR throughput difference is measurable.
I tested both tiers with a complex server component tree (the kind of page a real SaaS dashboard renders — not a blog post). Standard Vultr instance: 142 requests/second. High Frequency instance: 189 requests/second. That is a 33% improvement for SSR-bound workloads. If your Next.js app is mostly static or ISR, this does not matter. If every page hit executes server components with database calls, High Frequency pays for itself by serving more users per dollar of compute.
Geographic Flexibility
Vultr operates 9 US datacenters: New York, New Jersey, Chicago, Dallas, Atlanta, Miami, Los Angeles, Seattle, and Silicon Valley. No other provider on this list offers that kind of US coverage. If your Next.js app serves a regional audience and you want the origin server close to your users, Vultr is the only option that lets you put a VPS in Miami or Atlanta without going to a hyperscaler.
This also makes Vultr the best choice for a multi-region setup where you run Next.js instances in 2-3 US locations behind a DNS load balancer. Each instance costs $5-6/month. Three regions for $15-18/month still costs less than Vercel Pro for one seat.
All Vultr benchmarks in this section assume the .next/standalone output is deployed.
Strengths
- High Frequency tier delivers 33% better SSR throughput than standard instances
- 9 US datacenter locations — unmatched for regional deployments
- $100 free credit to benchmark your specific Next.js app
- NVMe available on High Frequency tier for fast ISR cache I/O
- Block storage add-ons for persistent ISR cache across server migrations
Limitations
- 1GB RAM on entry plan — builds must run in CI, not on the VPS
- Standard tier uses shared vCPU — SSR performance varies under noisy-neighbor conditions
- No managed database — co-locate PostgreSQL on the same VPS or use an external service
#4. Kamatera — The Only Provider Where You Configure RAM by the Gigabyte
Every other provider on this list sells fixed tiers: 1GB, 2GB, 4GB. Kamatera lets you configure RAM in 1GB increments. This matters for Next.js because the framework has a bizarre resource profile — the running server needs 300-500MB, but the build process needs 2-4GB. You are either paying for RAM you only use during builds, or you are building in CI to avoid paying for it.
Kamatera offers a third option: hourly billing with on-demand scaling.
The Build-Scale-Serve Pattern
- Provision a 1GB Kamatera VPS for $4/month to serve your Next.js standalone output
- When you need to deploy, temporarily scale to 4GB via the API (takes ~2 minutes)
- Run next build, deploy, verify
- Scale back to 1GB
You pay hourly rates for the 4GB window — typically 15-30 minutes for a build cycle. At Kamatera's hourly pricing, each build-deploy costs roughly $0.02-0.04 in RAM surcharge. If you deploy once daily, that is $0.60-1.20/month on top of the $4 base. Total: under $6/month with the ability to build locally. No CI pipeline needed, no 4GB VPS sitting at 12% memory utilization between deployments.
I automated this with a deploy script that calls Kamatera's API to scale up, waits for the resize, runs the build, and scales back down. The total deploy cycle including the resize is about 5 minutes. Not as fast as Vercel's git-push-to-deploy, but considerably cheaper.
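A dry-run sketch of that script follows. The kamatera_scale function is a placeholder for the real Kamatera API call, and the run helper echoes commands instead of executing them so the sequence is visible; drop it for a live deployment:

```shell
#!/bin/bash
# Dry-run sketch of the build-scale-serve cycle. kamatera_scale is a
# hypothetical placeholder; wire it to Kamatera's actual API in your setup.
set -euo pipefail

run() { echo "+ $*"; }
kamatera_scale() { run "scale server RAM to ${1}GB"; }

kamatera_scale 4            # scale up: next build peaks at 2-4GB
run npm ci
run npm run build           # the memory-hungry step
run pm2 reload nextjs-app   # zero-downtime swap to the new build
kamatera_scale 1            # scale back to the $4 serving tier
```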
Strengths
- Granular RAM configuration — pay only for what your Next.js app actually uses
- Hourly billing enables build-scale-serve pattern at ~$0.03 per deployment
- Intel Xeon CPUs score 4,250 — strong single-core SSR performance
- $100 free trial to test your exact build and serve resource profile
- API-driven scaling for automated deployment pipelines
Limitations
- Control panel is functional but not intuitive — expect a learning curve
- No managed databases or pre-configured Node.js stacks
- 20GB base storage is tight if you have a large ISR cache — add block storage or upgrade
#5. Linode (Akamai) — When Your Static Assets Need a Global CDN
Linode's acquisition by Akamai is the entire reason it is on this list. A $5/month Linode by itself is a perfectly adequate but unremarkable Next.js server — 1GB RAM, standard SSD, decent CPU. What makes it interesting is the Akamai CDN integration that no other budget VPS provider can match.
How This Works with Next.js Architecture
Next.js generates content-hashed filenames for everything in _next/static: JavaScript bundles, CSS, and client-side assets. These files never change — the hash in the filename guarantees it. They are safe to cache for one year (max-age=31536000, immutable). This is exactly what CDNs are designed for.
On Vercel, their built-in CDN handles this automatically. On a VPS, you configure Nginx to set cache headers for /_next/static/, then put Akamai (or Cloudflare) in front. The VPS handles only SSR and API requests — dynamic work that cannot be cached. Static assets are served from edge nodes closest to your users.
For a Next.js site with a high ratio of static to dynamic content (content sites, marketing sites, documentation), the Linode + Akamai CDN combination means your $5 VPS only handles the dynamic slice of traffic. A site serving 500K page views/month might generate only 50K origin requests — the rest are CDN cache hits. That 1GB Linode handles 50K SSR requests per month without breaking a sweat.
Strengths
- Akamai CDN integration offloads _next/static to the global edge — VPS handles only dynamic requests
- Phone support at $5/mo — unique at this price tier
- 9 US datacenter locations for regional origin placement
- Free DDoS protection via Akamai network
- $100 free trial credit to test the CDN + VPS combination
Limitations
- 1GB RAM — build in CI and deploy standalone output only
- Standard SSD — ISR revalidation 2.75x slower than Hostinger's NVMe
- No managed database — run PostgreSQL on the Linode or use an external service
PM2 + Nginx: The Production Stack That Replaces Vercel's Infrastructure
Every Next.js VPS deployment needs two things: a process manager to keep Node.js running (and restart it when it crashes), and a reverse proxy to handle SSL, compression, and static file serving. PM2 and Nginx are the standard choices. Here is the production configuration I use on every VPS I test.
PM2 Cluster Mode
```js
// ecosystem.config.js
module.exports = {
  apps: [{
    name: 'nextjs-app',
    script: '.next/standalone/server.js',
    exec_mode: 'cluster',
    instances: 'max', // one worker per CPU core
    env: {
      PORT: 3000,
      NODE_ENV: 'production',
      HOSTNAME: '0.0.0.0'
    }
  }]
}
```
On a 2-core VPS, cluster mode spawns 2 Node.js workers. Each handles SSR independently. Throughput approximately doubles. On Hostinger's 1 vCPU plan, cluster mode still provides a benefit: PM2's cluster manages graceful restarts, so pm2 reload nextjs-app replaces the worker without dropping active connections. This is your zero-downtime deployment mechanism.
Key PM2 commands for Next.js:
- pm2 start ecosystem.config.js — initial startup
- pm2 reload nextjs-app — zero-downtime restart after deployment
- pm2 logs nextjs-app --lines 100 — view recent logs (SSR errors show here)
- pm2 monit — real-time CPU and memory monitoring per worker
- pm2 save && pm2 startup — persist across server reboots
Nginx Reverse Proxy
```nginx
server {
    listen 80;
    server_name yourdomain.com;

    # Static assets — Nginx serves directly, bypasses Node.js
    location /_next/static {
        alias /var/www/nextjs-app/.next/static;
        expires 365d;
        add_header Cache-Control "public, max-age=31536000, immutable";
        access_log off;
    }

    # Next.js serves public/ files at the site root (e.g. /favicon.ico maps
    # to public/favicon.ico), so try the public folder first, then proxy
    location / {
        root /var/www/nextjs-app/public;
        try_files $uri @nextjs;
    }

    # Everything dynamic — proxy to Node.js
    location @nextjs {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 120s;
    }

    # Gzip compression
    gzip on;
    gzip_types text/plain text/css application/json application/javascript text/xml;
    gzip_min_length 1000;
}
```
The critical detail: Nginx serves _next/static directly from the filesystem. These requests never touch Node.js. On a high-traffic page, 60-80% of HTTP requests are for static assets (JS bundles, CSS, images). Having Nginx serve them means your Node.js process handles only SSR and API routes — exactly the architecture Vercel uses, except you control it and it costs $6.
Docker Deployment: When You Want Reproducible Builds
Docker adds overhead (100-200MB for the base image, slight memory increase from containerization). On a 4GB VPS, that overhead is negligible. On a 1GB VPS, it matters. Use Docker when you run multiple apps on one VPS, need identical dev/prod environments, or want rollback by switching container tags. Skip it if you are running a single Next.js app on a small VPS and simplicity matters more than isolation.
```dockerfile
# Dockerfile — multi-stage for minimal image size
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine AS runner
WORKDIR /app
ENV NODE_ENV=production
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 3000
ENV PORT=3000
ENV HOSTNAME="0.0.0.0"
CMD ["node", "server.js"]
```
This multi-stage build produces an image of 150-200MB. The builder stage installs all dev dependencies and runs next build. The runner stage copies only the standalone output — no node_modules, no source code, no dev dependencies. The attack surface is minimal. Deploy with docker compose up -d and put Nginx in front as the reverse proxy.
For teams using Docker on VPS, the multi-stage pattern is the standard. If you are new to containerized Next.js, our CI/CD guide covers the full GitHub Actions pipeline.
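A minimal compose file for that image might look like this (a sketch; the service name and localhost-only port binding are choices, not requirements):

```yaml
# docker-compose.yml — sketch. Binds the app to localhost only, so the
# host's Nginx remains the sole public entry point.
services:
  nextjs:
    build: .                      # uses the multi-stage Dockerfile above
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
```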
The Image Optimization Trap: sharp vs Vercel's Built-In
This is the part where Vercel's marketing is most effective and least honest.
Vercel describes their image optimization as a platform feature. Their documentation implies it is something special that Vercel provides. In reality, Vercel's image optimization is the sharp npm package running on their servers. The same package you can install on any VPS with npm install sharp.
Here is what Vercel charges for image optimization on Pro: 5,000 source images included, then $5 per additional 1,000 images. A content site with 500 pages averaging 4 images per page has 2,000 source images — safely within the limit. But each image generates multiple optimized variants (different sizes for responsive srcSet, different formats like WebP and AVIF). The source image count is what Vercel bills, but the actual processing load scales with variants.
On a VPS, sharp processes images on-demand and caches the results in .next/cache/images. There is no limit, no counter, no overage fee. The only constraint is CPU time during the first request for each image variant. On Hostinger's 4,400 CPU score, a typical next/image optimization (resize + WebP conversion) takes 50-150ms. After the first request, the optimized image is cached and served in under 5ms.
The one legitimate caveat: if your Next.js app serves user-uploaded images at scale (thousands of unique images per day), the initial optimization load can spike CPU. In that case, either pre-process images during upload (using sharp directly in your API route) or use a dedicated image CDN like Cloudinary or imgix. But for most Next.js apps — marketing sites, blogs, SaaS dashboards — running sharp on a VPS is identical to what Vercel does, minus the bill.
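The upload-time pre-processing mentioned above can be sketched as follows (assumes npm install sharp; the width cap and quality value are arbitrary choices):

```javascript
// Pre-process a user upload once, at write time, so the first request for
// the image never pays the on-demand optimization cost.
// Assumes sharp is installed (`npm install sharp`).
async function processUpload(buffer) {
  const sharp = require('sharp'); // lazy require; install on the server
  return sharp(buffer)
    .resize({ width: 1600, withoutEnlargement: true }) // cap width, never upscale
    .webp({ quality: 80 })                             // single modern format
    .toBuffer();
}

module.exports = { processUpload };
```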
Next.js VPS Comparison Table
| Provider | Price | RAM | Storage | NVMe | Can Build On-Server | Managed DB | Free Credit | Rating |
|---|---|---|---|---|---|---|---|---|
| Hostinger | $6.49 | 4 GB | 50 GB | ✓ | ✓ | ✗ | ✗ | 9.4/10 |
| DigitalOcean | $6.00 | 1 GB | 25 GB | ✗ | ✗ | ✓ | ✓ $200 | 9.0/10 |
| Vultr | $5.00 | 1 GB | 25 GB | ✗ | ✗ | ✗ | ✓ $100 | 8.7/10 |
| Kamatera | $4.00 | 1 GB (scalable) | 20 GB | ✗ | ✓ (after scale-up) | ✗ | ✓ $100 | 8.5/10 |
| Linode | $5.00 | 1 GB | 25 GB | ✗ | ✗ | ✗ | ✓ $100 | 8.3/10 |
"Can Build On-Server" means the entry plan has enough RAM to run next build on a production app without OOM. Providers marked ✗ require building in CI (GitHub Actions, etc.) and deploying the pre-built standalone output.
How I Tested: Builds First, Then Traffic
I deployed the same Next.js 15 app to every provider. The test app: 200 pages (50 SSR, 100 ISR with 60-second revalidation, 50 static), 5 API routes hitting PostgreSQL via Prisma, next/image with sharp for 40 product images, and a middleware that checks auth tokens. This is not a synthetic benchmark — it mirrors a real SaaS marketing site with a few dynamic features.
Each VPS ran Ubuntu 22.04, Node.js 20 LTS, PM2 5.x, and Nginx 1.24. I tested standalone output mode on all providers. Here is what I measured:
- Build completion and peak RAM: Can the VPS actually run next build without OOM? I ran cold builds from scratch and recorded peak RSS. On 1GB plans: build failed. On 2GB: succeeded with zero headroom. On 4GB: succeeded with PM2 still serving the previous version during the build.
- SSR response time (p50/p95/p99): k6 at 50 concurrent users for 120 seconds against a complex server component page. This tests real V8 rendering speed, not static file serving. Hostinger's 4,400 CPU score correlated directly with the lowest p99 latency.
- ISR revalidation latency: Triggered on-demand revalidation via the API and measured time from stale-cache-read to fresh-cache-write. Pure disk I/O. NVMe vs SSD difference: 2.75x.
- Image optimization throughput: First-request latency for next/image optimization (resize 2000x1500 JPEG to 800x600 WebP via sharp). Second-request latency (cache hit). Tests whether the VPS has enough CPU headroom to optimize images without impacting SSR.
- Standalone output deployment time: From git pull to pm2 reload completing with zero dropped requests. Includes build time on 4GB plans and transfer-only time on 1GB plans (where builds ran in GitHub Actions).
Frequently Asked Questions
Is self-hosting Next.js on a VPS cheaper than Vercel Pro?
Yes, significantly. Vercel Pro is $20/month per team member, plus potential overages for bandwidth (beyond 1TB), serverless function execution (beyond 1,000 GB-hours), and image optimization (beyond 5,000 source images). A $6.49 VPS from Hostinger handles unlimited builds, unlimited image optimization via sharp, and includes 4-8TB of bandwidth. At 100K monthly page views, the annual savings are $162-402. The only scenario where Vercel works out cheaper is if the 30 minutes of initial server setup, plus occasional maintenance, is worth more to you than the first year's savings.
What is standalone output mode and why should I use it?
Adding output: 'standalone' to next.config.js makes next build produce a self-contained .next/standalone directory with a server.js file and only the Node.js modules your app actually imports. Typical size: 30-80MB instead of the full 300-800MB node_modules. You run the app with node .next/standalone/server.js — no npm install on the server. This is the single most important configuration for VPS deployment. Without it, you are copying hundreds of megabytes of unused dependencies to your server on every deploy.
How do I run Next.js with PM2 in cluster mode?
Create an ecosystem.config.js with script pointing to .next/standalone/server.js, exec_mode: 'cluster', and instances: 'max'. PM2 forks one Node.js worker per CPU core. On a 2-core VPS, this doubles SSR throughput because each worker handles requests independently. Use pm2 reload nextjs-app for zero-downtime deployments — PM2 replaces workers one at a time, finishing active requests on old workers before terminating them. Run pm2 save && pm2 startup to persist the configuration across server reboots.
Does ISR work on a self-hosted VPS?
ISR works identically on a VPS. Next.js stores ISR cache in .next/cache on the filesystem, serves stale pages instantly while revalidating in the background, and writes fresh cache entries after revalidation completes. The performance difference between VPS providers comes from disk speed: NVMe (65K IOPS on Hostinger) delivers 8ms cache writes versus 22ms on standard SSD. Critical detail: if your deployment process deletes .next and rebuilds, you lose the ISR cache. Either preserve .next/cache across deploys or accept cold rendering after each deployment.
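A sketch of the preserve-the-cache approach from that answer (the directory layout is illustrative, and the actual build command is elided):

```shell
#!/bin/bash
# Stash .next/cache before a clean rebuild and restore it afterward, so
# ISR entries survive the deploy.
set -euo pipefail

preserve_cache_build() {
  local app_dir="$1"
  local stash; stash=$(mktemp -d)
  if [ -d "$app_dir/.next/cache" ]; then
    mv "$app_dir/.next/cache" "$stash/cache"   # park the ISR cache
  fi
  rm -rf "$app_dir/.next"                      # the deploy's clean step
  # ... `npm run build` would run here ...
  mkdir -p "$app_dir/.next"
  if [ -d "$stash/cache" ]; then
    mv "$stash/cache" "$app_dir/.next/cache"   # put the cache back
  fi
  rm -rf "$stash"
}
```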
How do I handle next/image optimization without Vercel?
Install sharp: npm install sharp. That is it. Next.js detects sharp automatically and uses it for on-demand image optimization. Vercel's "built-in" optimization is literally sharp running on their servers. On your VPS, sharp resizes and converts images on first request (50-150ms on a modern CPU) and caches the result in .next/cache/images. Subsequent requests serve the cached version in under 5ms. There are no per-image fees, no monthly limits, and no optimization tier to upgrade. The only cost is CPU time during the first request for each unique image variant.
Should I use Docker for Next.js on a VPS?
Docker adds value when you run multiple apps on one VPS, need identical dev/prod environments, or want instant rollback by switching container tags. Use a multi-stage Dockerfile: stage 1 installs dependencies and builds, stage 2 copies only the standalone output into a slim Node.js image (final size: 150-200MB). Skip Docker if you run a single Next.js app on a 1GB VPS — the containerization overhead (100-200MB memory) eats into your limited resources. On a 4GB VPS like Hostinger's entry plan, Docker overhead is negligible.
What Nginx configuration does Next.js need?
Nginx serves as a reverse proxy to Node.js on port 3000. The critical configuration: serve /_next/static/ directly from the filesystem with Cache-Control: public, max-age=31536000, immutable (content-hashed files, safe to cache forever). Proxy everything else to http://localhost:3000. Enable gzip for text responses. Set proxy_http_version 1.1 with Upgrade headers so WebSocket connections pass through cleanly — needed only if your app uses WebSockets (the hot-reload socket matters in development, not production). Increase proxy_read_timeout to 120s for long-running API routes. This setup offloads 60-80% of requests (static assets) from Node.js to Nginx.
How much RAM does self-hosted Next.js need?
The running Next.js server uses 200-500MB depending on app complexity and traffic. But next build is the memory killer — a production build with 200+ pages can spike to 2.8-4GB. If you build on the same server that serves traffic, 4GB RAM is the practical minimum (Hostinger's entry plan). If you build in CI (GitHub Actions, GitLab CI) and deploy only the standalone output, a 1-2GB VPS is sufficient for serving. The split approach — build in CI, serve on a small VPS — is how I run Next.js on every provider with less than 4GB RAM.
Can I use Redis as a Next.js ISR cache backend on a VPS?
Yes. Recent Next.js versions support custom cache handlers via the cacheHandler option in next.config.js (older releases exposed this as the experimental incrementalCacheHandlerPath option). You can write a handler that stores ISR cache in Redis instead of the filesystem. This is useful when running multiple Next.js instances behind a load balancer — all instances share the same cache. On a single VPS, filesystem cache on NVMe is fast enough (8ms writes on Hostinger). Redis adds complexity, a memory footprint (~50-100MB), and a failure point without meaningful speed gains on a single-server setup. Use Redis for multi-server architectures; use filesystem for single-VPS deployments.
Related Guides
- Best VPS for Node.js — general Node.js hosting comparison, applicable if you run Express or Fastify alongside Next.js
- Best VPS for Docker — provider comparison focused on containerized deployments
- Best VPS for CI/CD — if you build Next.js in CI and deploy to a VPS
- Best VPS for PostgreSQL — relevant for full-stack Next.js apps with self-hosted databases
- Docker on VPS Guide — step-by-step Docker setup for production deployments
- VPS Size Calculator — estimate CPU, RAM, and storage needs for your specific Next.js app
My Recommendation
Hostinger VPS at $6.49/mo is the simplest path from Vercel to self-hosted. 4GB RAM means you skip the CI pipeline entirely — build and serve on the same box. NVMe makes ISR fast. The CPU score makes SSR fast. For full-stack apps with managed databases, DigitalOcean with $200 free credit lets you test the full stack (Droplet + managed PostgreSQL + managed Redis) for free before committing.