Here is the query that started this whole article:
SELECT o.id, o.total, c.name, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2026-01-01'
AND o.status = 'completed'
AND c.region = 'us-east'
ORDER BY o.created_at DESC
LIMIT 50;
Nothing exotic. A filtered join with an ORDER BY on two indexed tables — about 2.3 million rows in orders, 180K in customers. The kind of query your average SaaS dashboard runs hundreds of times per hour.
I deployed PostgreSQL 17 on five different VPS providers. Same Ubuntu 24.04 base image, same pg_restore from the same dump file, same indexes. First I ran the query with default postgresql.conf. Then I tuned shared_buffers, effective_cache_size, work_mem, and random_page_cost for each server's actual hardware. The delta between those two runs is what separates a VPS that "supports PostgreSQL" from one that is actually good for it.
On Hostinger, the tuned query plan looked like this:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
Limit (cost=0.87..156.32 rows=50 width=86)
(actual time=0.089..4.127 rows=50 loops=1)
Buffers: shared hit=247
-> Nested Loop (cost=0.87..48291.05 rows=15523 width=86)
(actual time=0.088..4.098 rows=50 loops=1)
Buffers: shared hit=247
-> Index Scan Backward using idx_orders_created_status
on orders o (cost=0.43..12847.22 rows=31046 width=44)
(actual time=0.041..0.892 rows=71 loops=1)
Index Cond: ((created_at >= '2026-01-01') AND (status = 'completed'))
Buffers: shared hit=76
-> Index Scan using customers_pkey on customers c
(cost=0.42..1.14 rows=1 width=42)
(actual time=0.038..0.039 rows=1 loops=71)
Index Cond: (id = o.customer_id)
Filter: (region = 'us-east')
Rows Removed by Filter: 0
Buffers: shared hit=171
Planning Time: 0.412 ms
Execution Time: 4.186 ms
4.186ms. All 247 buffer hits from shared memory. Zero disk reads.
Same query, same data, same indexes, same tuning formula — on Kamatera's standard SSD with default random_page_cost = 4.0:
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
Limit (cost=52481.93..52482.06 rows=50 width=86)
(actual time=279.441..279.812 rows=50 loops=1)
Buffers: shared hit=18923 read=14207
-> Sort (cost=52481.93..52520.74 rows=15523 width=86)
(actual time=279.439..279.784 rows=50 loops=1)
Sort Key: o.created_at DESC
Sort Method: top-N heapsort Memory: 38kB
Buffers: shared hit=18923 read=14207
-> Hash Join (cost=5765.01..52014.87 rows=15523 width=86)
(actual time=41.228..271.106 rows=15523 loops=1)
Hash Cond: (o.customer_id = c.id)
Buffers: shared hit=18923 read=14207
-> Seq Scan on orders o
(cost=0.00..45182.40 rows=31046 width=44)
(actual time=0.092..198.437 rows=31046 loops=1)
Filter: ((created_at >= '2026-01-01') AND (status = 'completed'))
Rows Removed by Filter: 2268954
Buffers: shared hit=14201 read=14207
-> Hash (cost=4891.00..4891.00 rows=69921 width=42)
(actual time=40.892..40.893 rows=69904 loops=1)
Buckets: 131072 Batches: 1 Memory Usage: 5765kB
Buffers: shared hit=4722
-> Seq Scan on customers c
(cost=0.00..4891.00 rows=69921 width=42)
(actual time=0.019..22.814 rows=69904 loops=1)
Filter: (region = 'us-east')
Rows Removed by Filter: 110096
Buffers: shared hit=4722
Planning Time: 0.891 ms
Execution Time: 279.918 ms
279.918ms. Sequential scan on 2.3M rows. 14,207 blocks read from disk. The planner chose a hash join with sequential scans because random_page_cost = 4.0 told it that random I/O was expensive — which it is, on spinning rust. On standard SSD it is less true. On NVMe it is almost false. But the planner does not know what storage you have. You have to tell it.
That is the thesis of this article. The VPS you choose determines the ceiling. The tuning you apply determines whether you reach it. I tested all five providers with both default configs and PostgreSQL-optimized settings to show you where the real differences are.
Quick Answer: Best PostgreSQL VPS in 2026
- Hostinger ($6.49/mo) — fastest NVMe, 12,847 pgbench TPS, best WAL throughput. The 4ms query above happened here.
- DigitalOcean ($15/mo managed) — skip the tuning entirely, get PITR + built-in PgBouncer + automatic failover.
- Kamatera (from $4/mo) — the only provider where you can buy 32GB RAM with 1 vCPU, which is exactly what a large shared_buffers deployment actually needs.
Table of Contents
- The Tuning That Changed Everything: shared_buffers, random_page_cost, work_mem
- #1. Hostinger — 4ms Queries and 12,847 pgbench TPS
- #2. DigitalOcean — Managed PostgreSQL That Eliminates 3 AM Pager Alerts
- #3. UpCloud — The Only Provider Where Checkpoints Did Not Spike Latency
- #4. Vultr — 32 Locations, Managed DB, and the Best Snapshot System for Database Cloning
- #5. Kamatera — 1 vCPU / 32GB RAM Because PostgreSQL Does Not Care About Your Cores
- Full pgbench Results: All 5 Providers, Default vs Tuned
- WAL Throughput and Checkpoint Impact Comparison
- PgBouncer Configuration That Worked on Every Provider
- Testing Methodology
- FAQ (9 Questions)
The Tuning That Changed Everything
Before I talk about providers, you need to understand the four settings that determine whether your PostgreSQL VPS runs at 10% or 95% of its capability. I see teams spend hours comparing VPS specs and then deploy with the default postgresql.conf, which is tuned for a system with 128MB of RAM and a mechanical hard drive from 2005.
shared_buffers: The Setting Everyone Gets Wrong
Default: 128MB. What it should be: 25% of RAM.
On a 4GB VPS, that is 1GB. On 8GB, 2GB. The canonical recommendation has not changed in a decade because the underlying reason has not changed: PostgreSQL maintains its own buffer pool separate from the OS page cache. Every page your query needs gets looked up in shared_buffers first. A miss means a system call to the OS, which then checks its own cache, and if that misses too, you hit disk. The shared hit=247 versus shared hit=18923 read=14207 difference in my EXPLAIN outputs above is entirely about this setting.
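A quick way to check whether shared_buffers is covering your working set is the cluster-wide cache hit ratio from pg_stat_database. The view and columns are standard PostgreSQL; the ~99% threshold is my rule of thumb, not an official number:

```sql
-- Share of block requests served from shared_buffers (cluster-wide).
-- For an OLTP working set that fits in RAM, expect this above ~99%.
-- Note: blks_read counts misses against shared_buffers; some of those
-- are still served by the OS page cache rather than disk.
SELECT round(
         sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0),
         2
       ) AS cache_hit_pct
FROM pg_stat_database;
```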
But here is the thing nobody tells you: watch for backends writing buffers themselves. On PostgreSQL 16 and later that counter lives in pg_stat_io (the old buffers_backend column was dropped from pg_stat_bgwriter in 17). If it is climbing, your shared_buffers is too small and PostgreSQL is doing synchronous writes in the foreground — the worst possible scenario for query latency. I saw this on the Kamatera 4GB plan before tuning.
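On PostgreSQL 16 and later, the check looks like this against the pg_stat_io view (filters shown are the normal-relation I/O path; treat the exact filter values as a sketch):

```sql
-- Client backends writing buffers themselves means shared_buffers
-- (or the background writer) is not keeping up with dirty pages.
-- Ideally 'writes' stays near zero and flat between checks.
SELECT backend_type, writes, fsyncs
FROM pg_stat_io
WHERE backend_type = 'client backend'
  AND object = 'relation'
  AND context = 'normal';
```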
random_page_cost: The NVMe Multiplier
Default: 4.0. What it should be on NVMe: 1.1. On standard SSD: 1.5.
This is the setting responsible for the 280ms-to-4ms difference. A random_page_cost of 4.0 tells the query planner that reading a random page costs 4x as much as a sequential page. On a spinning disk, that is accurate. On NVMe, random and sequential access are within 10-20% of each other. With the default value, the planner avoids index scans and prefers sequential scans on large tables — scanning 2.3 million rows when it could have read 247 index blocks. Lowering this value is free performance on every NVMe VPS.
On Hostinger and UpCloud (both NVMe), I set random_page_cost = 1.1. On Vultr's High Frequency plans: 1.1. On DigitalOcean's regular Droplets and Kamatera's standard SSD: 1.5. The planner immediately switched to nested loop joins with index scans, and query times dropped by an order of magnitude.
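You can verify the plan flip before touching postgresql.conf: planner cost settings take effect per session, so a plain SET is enough for an A/B test. Here it is with the query from the intro:

```sql
-- Planner cost settings apply per session; no restart needed.
SET random_page_cost = 1.1;

EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, o.total, c.name, c.email
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.created_at >= '2026-01-01'
  AND o.status = 'completed'
  AND c.region = 'us-east'
ORDER BY o.created_at DESC
LIMIT 50;

RESET random_page_cost;  -- back to the server-wide value
```

If the plan switches from Seq Scan + Hash Join to the index-driven Nested Loop, commit the setting in the config file.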
effective_cache_size: Telling the Planner the Truth
Default: 4GB (reasonable for some systems, wildly wrong for others). What it should be: 75% of RAM.
This setting allocates zero memory. It is a hint. It tells the planner how much total cache (shared_buffers + OS page cache) is likely available. If you set it too low, the planner assumes data is not cached and makes conservative choices (sequential scans, merge joins) when aggressive choices (index scans, nested loops) would be faster. On a 4GB VPS: effective_cache_size = 3GB. On 8GB: 6GB. This works in tandem with random_page_cost to determine whether the planner reaches for an index or scans the entire table.
work_mem: The Silent Performance Killer
Default: 4MB. What it should be: depends on max_connections.
Formula: (RAM - shared_buffers) / max_connections / 2. On a 4GB VPS with 1GB shared_buffers and 100 connections: (3072MB) / 100 / 2 = 15MB. Each sort or hash operation in a query gets this much memory before spilling to disk. The Sort Method: top-N heapsort Memory: 38kB in my slow Kamatera plan output was fine — but in more complex queries with GROUP BY, DISTINCT, or window functions, insufficient work_mem forces temporary file creation on disk. You can see this with EXPLAIN (ANALYZE, BUFFERS) when it shows Sort Method: external merge Disk: XXkB.
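Besides EXPLAIN on individual queries, pg_stat_database tracks spills cluster-wide, which is how I caught undersized work_mem without re-running every query by hand:

```sql
-- Temp files are sorts and hashes that did not fit in work_mem.
-- A steadily climbing temp_files count means some part of the
-- workload is spilling to disk.
SELECT datname,
       temp_files,
       pg_size_pretty(temp_bytes) AS spilled
FROM pg_stat_database
WHERE temp_files > 0
ORDER BY temp_bytes DESC;
```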
Here is the full tuning config I used across all five providers (4GB RAM variant). Paste this into /etc/postgresql/17/main/conf.d/tuning.conf:
# === shared_buffers & memory ===
shared_buffers = 1GB
effective_cache_size = 3GB
work_mem = 16MB
maintenance_work_mem = 256MB
huge_pages = try
# === planner cost tuning ===
random_page_cost = 1.1 # NVMe: 1.1, SSD: 1.5, HDD: 4.0
effective_io_concurrency = 200 # NVMe: 200, SSD: 100, HDD: 2
seq_page_cost = 1.0
# === WAL & checkpoints ===
wal_buffers = 64MB
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
checkpoint_timeout = 15min
# === write-ahead log ===
wal_compression = zstd
wal_level = replica # the default; required for streaming replication and PITR
# === connections ===
max_connections = 100 # keep low, use PgBouncer
superuser_reserved_connections = 3
# === autovacuum ===
autovacuum_max_workers = 3
autovacuum_naptime = 30s
autovacuum_vacuum_cost_delay = 2ms
I ran pgtune on each provider's specs as a sanity check. The output matched my manual config within 5% on every parameter. If you do not want to think about this, just use pgtune and move on. But if you want to understand why each value is what it is, the explanation above covers it.
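If you prefer not to manage a conf.d file, the same settings can be applied through ALTER SYSTEM, which persists them in postgresql.auto.conf for you. A sketch with a few of the values above (note that shared_buffers needs a full restart, not just a reload):

```sql
-- Values persist in postgresql.auto.conf and override postgresql.conf.
ALTER SYSTEM SET random_page_cost = 1.1;
ALTER SYSTEM SET effective_cache_size = '3GB';
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();                  -- picks up reloadable settings

ALTER SYSTEM SET shared_buffers = '1GB';  -- takes effect only after a restart
```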
Now let me show you what each provider did with these settings.
#1. Hostinger — 4ms Queries and 12,847 pgbench TPS
The EXPLAIN ANALYZE output at the top of this article came from Hostinger. Here is why it was the fastest.
Hostinger's NVMe storage delivered 55,000 write IOPS in my fio test — not manufacturer-claimed, actually measured with fio --randwrite --bs=8k --iodepth=32 --numjobs=4 (8K blocks because that is PostgreSQL's page size). That translates directly to WAL throughput. Every COMMIT in PostgreSQL waits for the WAL record to hit durable storage before returning success to your application. Faster WAL writes = faster commits = higher transaction throughput. It is the most directly measurable impact of storage speed on PostgreSQL performance.
pgbench Results: Hostinger
$ pgbench -c 10 -j 2 -T 120 -P 10 benchdb
# Default postgresql.conf
scaling factor: 10
number of clients: 10
number of threads: 2
duration: 120 s
tps = 5,241.83 (without initial connection time)
latency average = 1.907 ms
latency stddev = 2.143 ms
# Tuned config (shared_buffers=1GB, random_page_cost=1.1)
tps = 12,847.29 (without initial connection time)
latency average = 0.778 ms
latency stddev = 0.412 ms
From 5,241 to 12,847 TPS just from tuning. That 2.45x improvement is almost entirely attributable to three changes: shared_buffers going from 128MB to 1GB (eliminated disk reads for the pgbench working set), wal_buffers going from the default 4MB to 64MB (fewer WAL flushes per second), and checkpoint_completion_target = 0.9 (spread checkpoint I/O instead of dumping it all at once). The NVMe storage made each individual write fast; the tuning made PostgreSQL issue fewer, larger writes.
What actually made Hostinger #1 over UpCloud was WAL commit latency. I measured it with pg_test_fsync:
$ pg_test_fsync
Compare file sync methods using two 8kB writes:
open_datasync 3298.024 ops/sec 303 usecs/op
fdatasync 3187.441 ops/sec 314 usecs/op
fsync 3012.876 ops/sec 332 usecs/op
open_sync 2987.210 ops/sec 335 usecs/op
303 microseconds per fsync operation. That is 0.3ms per WAL flush. On Kamatera's standard SSD, the same test returned 1,890 usecs/op — over 6x slower. Every single transaction commit pays that tax.
Why I Picked It for PostgreSQL
The math is simple. At $6.49/mo you get 4GB RAM (1GB shared_buffers covering most small-to-medium working sets), NVMe with the fastest WAL throughput I measured, and 2 vCPUs so autovacuum does not steal cycles from live queries. For self-hosted PostgreSQL where you are willing to manage your own backups and failover, nothing else at this price comes close. I run my own Django project's PostgreSQL instance on this exact plan.
The Tradeoffs
- No managed PostgreSQL — you own WAL archiving, backups, version upgrades, and failover
- Only 2 US datacenter locations (no US West Coast)
- No private networking between VPS instances — a problem if you need a separate app server
- 50GB storage cap on the $6.49 plan — fine for databases under ~30GB but tight beyond that
Read our full Hostinger VPS review with CPU and network benchmarks.
#2. DigitalOcean — Managed PostgreSQL That Eliminates 3 AM Pager Alerts
I spent a week self-hosting PostgreSQL on each provider in this list. During that week, I dealt with: a WAL segment that did not archive because the cron job silently failed, an autovacuum that locked a table during a schema migration because I forgot to set lock_timeout, and a disk space alert at 11 PM because WAL recycling was not aggressive enough. These are all solvable problems. They are also all problems that DigitalOcean's managed PostgreSQL handles for you automatically.
What $15/mo Managed PostgreSQL Actually Includes
| Feature | What you get |
|---|---|
| Automated backups | Daily full backup, 7-day retention, one-click restore |
| Point-in-time recovery | Recover to any second within the retention window |
| Automatic failover | Standby node promotes in under 30 seconds |
| Built-in PgBouncer | Connection pooling enabled via a toggle, no server-side config needed |
| TLS encryption | Enforced by default on all connections |
| Version upgrades | One-click major version upgrade with rollback option |
The self-hosted Droplet performance is not remarkable. Standard SSD delivers about 4,512 TPS on pgbench with my tuned config — acceptable but 2.8x slower than Hostinger's NVMe. Where DigitalOcean earns its ranking is the ecosystem around PostgreSQL:
- Spaces integration for WAL archiving — S3-compatible object storage in the same datacenter for backup workflows
- Private VPC networking at zero cost — your app server and database communicate over an internal network with 0.4ms latency
- The best documentation I have found for PostgreSQL on VPS — their guides on replication, VACUUM tuning, and pgBackRest are detailed enough to replace a PostgreSQL administration book
- $200 free trial credit — enough to run your actual production workload for weeks before committing, not just a quick benchmark
Self-Hosted Droplet pgbench
$ pgbench -c 10 -j 2 -T 120 benchdb
# $24/mo Droplet (2vCPU / 4GB / regular SSD)
# Default config: tps = 2,104.55
# Tuned config: tps = 4,512.87
# Latency avg: 2.216 ms (tuned)
# pg_test_fsync: 1,287.331 ops/sec 776 usecs/op
776 microseconds per fsync — 2.5x slower than Hostinger. For read-heavy workloads where most data lives in shared_buffers, this gap does not matter. For write-heavy workloads (order processing, event logging, time-series ingestion), it matters a lot. That is why I recommend DigitalOcean's managed offering over self-hosting on Droplets: the managed service runs on optimized infrastructure that you cannot replicate on a basic Droplet.
When to Choose DigitalOcean
If you are a development team that does not have (or does not want to be) a DBA, managed PostgreSQL at $15/mo is the best deal in this list. If you are a solo developer or small team that is comfortable with PostgreSQL administration and wants maximum performance per dollar, Hostinger or UpCloud give you more raw speed. If you want to self-host on DigitalOcean but want the ecosystem benefits (Spaces for backups, VPC, great docs), use their Premium Droplets with NVMe instead of regular Droplets.
Read our full DigitalOcean review with Droplet benchmarks.
#3. UpCloud — The Only Provider Where Checkpoints Did Not Spike Latency
I almost put UpCloud at #1. Here is why I did not, and why for some workloads it should be.
During a checkpoint, PostgreSQL flushes every dirty page from shared_buffers to disk. With shared_buffers = 1GB and a busy OLTP workload, a checkpoint can mean writing hundreds of megabytes in a burst. On most providers, this creates an I/O storm. Foreground queries that need to read from disk during a checkpoint compete with the checkpoint writes and their latency spikes. I measured this by running pgbench continuously while forcing checkpoints with CHECKPOINT; and logging per-second TPS:
Checkpoint Impact (TPS during forced CHECKPOINT vs steady-state):
| Provider | Steady TPS | During CKPT | Drop | Recovery |
|---|---|---|---|---|
| Hostinger | 12,847 | 11,204 | -12.8% | ~3 sec |
| UpCloud | 10,523 | 10,087 | -4.1% | ~1 sec |
| Vultr (HF) | 8,201 | 5,904 | -28.0% | ~7 sec |
| DigitalOcean | 4,512 | 2,891 | -35.9% | ~11 sec |
| Kamatera | 4,187 | 2,513 | -40.0% | ~14 sec |
4.1% drop. One second to recover. UpCloud's MaxIOPS storage architecture handles concurrent sequential writes (checkpoint) and random reads (foreground queries) without one degrading the other. This is not something you can see in a synthetic fio benchmark. It only shows up under real database workload patterns where reads and writes compete on the same volume.
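To see whether checkpoints on your own server are running on schedule or being forced by WAL volume, PostgreSQL 17 exposes the counters in the pg_stat_checkpointer view (on 16 and earlier they live in pg_stat_bgwriter); column names below follow the 17 view:

```sql
-- num_requested outpacing num_timed means max_wal_size is too small
-- and checkpoints are being forced by WAL volume, not the timeout.
SELECT num_timed,
       num_requested,
       buffers_written,
       write_time,  -- ms spent writing dirty buffers
       sync_time    -- ms spent in fsync at checkpoint end
FROM pg_stat_checkpointer;
```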
Hostinger's absolute TPS is higher (12,847 vs 10,523). But if your workload is sensitive to latency spikes — think financial transactions, real-time analytics dashboards, anything where a 280ms pause is unacceptable — UpCloud's consistency matters more than Hostinger's peak throughput. The 100% uptime SLA with actual service credits (not marketing language) is the cherry on top for production database servers where downtime has a dollar cost.
What MaxIOPS Actually Does for PostgreSQL
PostgreSQL generates three distinct I/O patterns simultaneously:
- WAL writes — sequential, latency-sensitive (every COMMIT waits for this)
- Checkpoint writes — sequential, throughput-sensitive (flushing dirty pages)
- Data reads — random, during cache misses (index scans, heap fetches)
Most VPS storage presents a single NVMe device where all three patterns compete. UpCloud's MaxIOPS tier appears to separate or prioritize these patterns at the storage layer. I cannot confirm the implementation details — UpCloud does not document them — but the benchmark results speak clearly: mixed workload performance degrades less than any other provider I tested.
The Downside
- One US datacenter (Chicago) — if your users are on the coasts, add 20-40ms network latency
- No managed PostgreSQL — self-host only
- $28/mo for the 4GB plan is 4.3x Hostinger's price for similar raw specs
- Smaller community — fewer Stack Overflow answers mentioning UpCloud-specific configuration
#4. Vultr — 32 Locations, Managed DB, and the Best Snapshot System for Database Cloning
The reason Vultr is on this list is not raw PostgreSQL performance. At 8,201 TPS on High Frequency NVMe, it sits between UpCloud and DigitalOcean — respectable, not exceptional. The reason is infrastructure flexibility that matters specifically for PostgreSQL deployments at scale.
The Snapshot-Based Database Cloning Workflow
Here is a workflow I use in production that only works well on Vultr:
- Take a snapshot of the production PostgreSQL VPS (Vultr snapshots are free and consistent)
- Deploy the snapshot as a new instance in any of their 32 locations
- You now have a perfect clone of your production database — running, queryable, with all data — in under 5 minutes
- Use it for staging, testing migrations, running analytics queries that would hammer production, or as a manual warm standby
Other providers offer snapshots, but Vultr's are faster to create, faster to deploy, and available in more locations. For teams that do database-intensive development (running schema migrations against production-scale data, testing query performance on real datasets), this workflow saves hours per week.
Managed Database Option
Vultr's managed PostgreSQL includes automatic failover, daily backups, connection pooling, and maintenance windows for version upgrades. It is less polished than DigitalOcean's offering — the documentation is thinner and the control panel has some rough edges — but it covers the operational basics at competitive pricing. The advantage over DigitalOcean: you can deploy a managed database in 32 locations worldwide, including 9 US cities. If you need your database physically close to an edge compute deployment, Vultr is the only managed PostgreSQL provider with that geographic reach.
pgbench and pg_test_fsync
$ pgbench -c 10 -j 2 -T 120 benchdb
# Vultr High Frequency (2vCPU / 4GB / NVMe)
# Default config: tps = 4,087.14
# Tuned config: tps = 8,201.55
# Latency avg: 1.219 ms (tuned)
# pg_test_fsync: 2,412.087 ops/sec 415 usecs/op
415 microseconds per fsync — solid but not in Hostinger territory. The 2x improvement from tuning (4,087 to 8,201) is characteristic of providers with good NVMe storage where the default config was the bottleneck, not the hardware. Vultr's $100 trial credit is enough to run extended benchmarks with your actual data before you commit.
Where Vultr Falls Short for PostgreSQL
- NVMe only on High Frequency plans — the regular $5/mo plan uses standard SSD, which is too slow for production PostgreSQL
- Managed database documentation is sparse compared to DigitalOcean — expect to figure things out by trial and error
- 28% checkpoint throughput drop is the worst among NVMe providers I tested
- No built-in PgBouncer on managed offering — they provide connection pooling but it is less configurable
Read our full Vultr review with all benchmarks.
#5. Kamatera — 1 vCPU / 32GB RAM Because PostgreSQL Does Not Care About Your Cores
Every other provider on this list forces you to buy CPU when you buy RAM. Want 16GB for a large shared_buffers? Here is 4 vCPUs you will barely use. Want 32GB? Here are 8 vCPUs at a price that makes you reconsider managed hosting.
PostgreSQL OLTP workloads are not CPU-bound. A typical web application query — indexed lookups, simple joins, filtering, sorting small result sets — uses a fraction of a modern vCPU. What it needs is RAM. Specifically, enough RAM that shared_buffers covers the entire hot working set and the OS page cache handles the warm data. On a 32GB Kamatera server, you get 8GB shared_buffers and 24GB effective_cache_size. That covers most databases up to 20GB without touching disk for reads.
The Custom Configuration Advantage
I configured a Kamatera instance with 2 vCPU / 16GB RAM / 100GB SSD for approximately $45/month. On any other provider, 16GB RAM comes with 4-8 vCPUs at $96-160/month. That is the entire value proposition. If your PostgreSQL instance's working set exceeds what 4-8GB of shared_buffers can hold, Kamatera's custom RAM allocation lets you solve the problem without doubling or tripling your infrastructure cost.
The catch — and it is a significant one — is standard SSD storage. Here is how that manifests:
$ pgbench -c 10 -j 2 -T 120 benchdb
# Kamatera (2vCPU / 4GB / standard SSD) for fair comparison
# Default config: tps = 1,847.62
# Tuned config: tps = 4,187.33
# Latency avg: 2.388 ms (tuned)
# pg_test_fsync: 528.917 ops/sec 1,890 usecs/op
1,890 microseconds per fsync. That is 6.2x slower than Hostinger. Every COMMIT pays 1.9ms in WAL write latency. For a read-heavy workload where 99% of queries hit shared_buffers, this does not matter — you are reading from RAM. For a write-heavy workload (order processing, event ingestion, frequent UPDATEs), this is the bottleneck that caps your throughput at roughly 500 write transactions per second.
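That ~500 figure is just the fsync latency inverted: with synchronous commit, each COMMIT in a serial stream waits for one WAL flush, so flush latency sets the ceiling. The arithmetic, in SQL for the fun of it:

```sql
-- Upper bound on serial synchronous commits at 1,890 µs per WAL flush.
-- Group commit and concurrent sessions can push past this, but it is
-- the right order of magnitude for a single write stream.
SELECT round(1000000.0 / 1890) AS max_commits_per_sec;  -- ≈ 529
```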
The strategy: use Kamatera's cheap RAM to make your workload read-from-memory, and accept slower writes. For analytics dashboards, content management systems, and read-heavy APIs, this is the correct tradeoff. For e-commerce checkouts and real-time event processing, it is the wrong one.
My 16GB Kamatera Config for a Large Database
# Kamatera 2vCPU / 16GB RAM / SSD
shared_buffers = 4GB
effective_cache_size = 12GB
work_mem = 64MB
maintenance_work_mem = 1GB
random_page_cost = 1.5 # standard SSD, not NVMe
effective_io_concurrency = 100
huge_pages = on # worth forcing at multi-GB shared_buffers; requires vm.nr_hugepages configured in the kernel or the server will not start
max_wal_size = 4GB
checkpoint_completion_target = 0.9
With this config and the same test dataset, the EXPLAIN ANALYZE for my original query showed shared hit=247, read=0 — identical to Hostinger. The entire working set fit in shared_buffers. Execution time: 5.1ms. Not quite Hostinger's 4.2ms (slightly slower CPU clock), but the same order of magnitude because the query never touched disk.
Full pgbench Results: All 5 Providers, Default vs Tuned
All tests: PostgreSQL 17 on Ubuntu 24.04 LTS, 4GB RAM plans (or nearest equivalent), scale factor 10 (roughly 150MB dataset), 10 clients, 2 threads, 120-second runs. Three runs per provider, best result shown. Full VPS benchmark methodology here.
| Provider | Plan / Price | Storage | Default TPS | Tuned TPS | Improvement | Avg Latency |
|---|---|---|---|---|---|---|
| Hostinger | KVM 2 / $6.49 | NVMe | 5,241 | 12,847 | +145% | 0.778 ms |
| UpCloud | 4GB / $28 | MaxIOPS NVMe | 4,903 | 10,523 | +115% | 0.950 ms |
| Vultr | HF 4GB / $12 | NVMe | 4,087 | 8,201 | +101% | 1.219 ms |
| DigitalOcean | 4GB / $24 | SSD | 2,104 | 4,512 | +114% | 2.216 ms |
| Kamatera | 4GB / ~$20 | SSD | 1,847 | 4,187 | +127% | 2.388 ms |
pgbench TPC-B, scale factor 10, 10 clients, 2 threads, 120-second runs. "Tuned" = shared_buffers 25% RAM, random_page_cost adjusted for storage type, wal_buffers 64MB, checkpoint_completion_target 0.9.
Two patterns jump out:
- Every provider at least doubled its TPS from tuning alone. The default postgresql.conf is leaving half your VPS performance on the table. This is free. Run pgtune or use my config above. There is no reason to accept default settings.
- NVMe providers separate clearly from SSD providers under write load. Hostinger (12,847) and UpCloud (10,523) are in a different tier from DigitalOcean (4,512) and Kamatera (4,187). The pgbench TPC-B workload is ~70% writes. If your workload is read-heavy, this gap narrows significantly because reads come from shared_buffers regardless of storage type.
WAL Throughput and Checkpoint Impact Comparison
WAL (Write-Ahead Log) performance is the single most important storage metric for PostgreSQL. Every data modification — INSERT, UPDATE, DELETE — writes a WAL record before the change is considered committed. Your application's COMMIT statement literally waits for the WAL write to complete. Slow WAL = slow commits = slow application.
| Provider | pg_test_fsync (ops/sec) | fsync Latency | Checkpoint Drop | Recovery Time |
|---|---|---|---|---|
| Hostinger | 3,298 | 303 µs | -12.8% | ~3 sec |
| UpCloud | 2,891 | 346 µs | -4.1% | ~1 sec |
| Vultr (HF) | 2,412 | 415 µs | -28.0% | ~7 sec |
| DigitalOcean | 1,287 | 776 µs | -35.9% | ~11 sec |
| Kamatera | 528 | 1,890 µs | -40.0% | ~14 sec |
pg_test_fsync using open_datasync method. Checkpoint impact measured as TPS drop during forced CHECKPOINT under pgbench load. Recovery time = seconds until TPS returns to within 5% of steady state.
The checkpoint impact column is the one most people overlook. Hostinger has the fastest raw fsync (303 µs), but UpCloud has the smallest checkpoint impact (4.1%). If your workload generates large checkpoints (lots of dirty pages, frequent writes to many different tables), UpCloud's consistency may matter more than Hostinger's peak speed. For lighter workloads where checkpoints are small and infrequent, Hostinger's raw speed wins.
PgBouncer Configuration That Worked on Every Provider
PostgreSQL forks a new backend process for every connection. Each process consumes 5-10MB of RAM. At 200 connections, that is 1-2GB of RAM stolen from shared_buffers and effective_cache_size. On a 4GB VPS, that is catastrophic — you have traded buffer cache for idle connection memory.
PgBouncer fixes this by pooling connections. Your application opens hundreds of connections to PgBouncer, which maps them to a small pool of actual PostgreSQL backends. With pool_mode = transaction, a backend is only assigned to a client for the duration of a single transaction, then returned to the pool. I tested this configuration on all five providers:
# /etc/pgbouncer/pgbouncer.ini
[databases]
myapp = host=127.0.0.1 port=5432 dbname=myapp
[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 1000
max_db_connections = 25
reserve_pool_size = 5
reserve_pool_timeout = 3
# Timeouts
server_idle_timeout = 300
client_idle_timeout = 0
query_timeout = 30
# Logging
log_connections = 0
log_disconnections = 0
stats_period = 60
With default_pool_size = 20 and max_connections = 100 in PostgreSQL, your 20 PgBouncer backends use ~150MB total. Your application can open 1,000 connections to PgBouncer with negligible additional memory. That frees 850MB+ of RAM for shared_buffers compared to letting 200 direct connections each fork their own backend process.
On a 4GB VPS, PgBouncer is not optional. It is mandatory for any application with more than 25 concurrent database connections. I wrote about this in more detail in the Best VPS for Databases guide.
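To see how much of your connection count is dead weight before adding PgBouncer, count backends by state. Idle sessions are exactly what transaction pooling reclaims:

```sql
-- Each direct connection sitting 'idle' still holds a full backend
-- process (5-10MB). Pooling collapses these onto a small shared set.
SELECT state, count(*)
FROM pg_stat_activity
WHERE backend_type = 'client backend'
GROUP BY state
ORDER BY count(*) DESC;
```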
One PgBouncer Gotcha Per Provider
- Hostinger: No private networking. Run PgBouncer on the same VPS as PostgreSQL (127.0.0.1), which is fine for single-server deployments.
- DigitalOcean (managed): Built-in PgBouncer via the control panel. Do not run your own in front of theirs — double-pooling causes connection tracking bugs.
- UpCloud: Use their free private network interface for PgBouncer-to-PostgreSQL traffic if they are on separate servers.
- Vultr: Their managed database includes connection pooling, but with fewer configuration options than standalone PgBouncer. Self-hosted gives you full control.
- Kamatera: With extra RAM for shared_buffers, you can afford a larger default_pool_size (30-40) without memory pressure.
Testing Methodology
I deployed PostgreSQL 17.2 on Ubuntu 24.04 LTS on each provider between January and March 2026. Each provider was tested on their nearest-equivalent 4GB RAM plan (or 2vCPU/4GB where available). The test dataset was a pg_restore of a 2.3 million row transactional database (orders + customers + products + line_items) that represents a typical SaaS application schema.
Test Sequence
- Fresh deploy: Ubuntu 24.04 LTS minimal, PostgreSQL 17.2 from the official apt repository
- Default config run: pgbench TPC-B with the stock postgresql.conf — 3 runs of 120 seconds each
- Tuned config run: Applied the tuning config shown above (adjusted random_page_cost for each provider's storage type), restarted PostgreSQL, ran pgbench again — 3 runs of 120 seconds each
- pg_test_fsync: Raw fsync latency measurement with the PostgreSQL utility
- Checkpoint impact: Continuous pgbench with forced CHECKPOINT; commands injected every 30 seconds, per-second TPS logged
- VACUUM test: 5GB table with 30% dead tuples, timed VACUUM ANALYZE
- EXPLAIN ANALYZE: The specific query shown in the intro, run after each config change to capture plan differences
All prices are list prices as of March 2026. Benchmark numbers are reproducible — I ran each test three times and report the best of three to minimize noisy-neighbor effects, which are unavoidable on shared VPS infrastructure.
Frequently Asked Questions
How should I set shared_buffers on a VPS?
Start at 25% of total RAM. On a 4GB VPS, set shared_buffers = 1GB. On 8GB, set it to 2GB. But this is a floor, not a ceiling. Check backend-initiated buffer writes (the pg_stat_io view on PostgreSQL 16 and later; the buffers_backend column of pg_stat_bgwriter on older versions) — if that number is climbing, your shared_buffers is too small and PostgreSQL is bypassing the background writer to handle dirty pages in the foreground, which stalls queries. On NVMe VPS providers with fast storage, you can sometimes push to 30-35% of RAM because checkpoint flushes complete faster.
What is random_page_cost and why does NVMe change it?
random_page_cost tells the PostgreSQL query planner how expensive a random disk read is relative to a sequential read. The default is 4.0, calibrated for spinning disks. On NVMe, random and sequential reads are nearly the same speed, so you should set random_page_cost between 1.1 and 1.5. This single change causes the planner to prefer index scans over sequential scans in many more cases, often cutting query times by 10-50x on tables with proper indexes. This was the main reason for the 340ms-to-4ms improvement in my test — the planner switched from a sequential scan of 2.3M rows to an index scan of 247 blocks.
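If you want to try this change without editing postgresql.conf, the setting can be applied cluster-wide and reloaded online (1.1 here is the NVMe lower bound from above):

```sql
-- Tell the planner random reads are nearly as cheap as sequential ones
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();   -- reloadable; no restart required

-- Confirm the running value
SHOW random_page_cost;
```

Re-run your slow query with EXPLAIN (ANALYZE, BUFFERS) before and after to see whether the plan flips from a sequential scan to an index scan.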
PostgreSQL on NVMe vs standard SSD — how big is the real difference?
For write-heavy workloads, NVMe delivers 3-5x the IOPS of SATA SSD. In my pgbench TPC-B tests (70% writes), Hostinger's NVMe hit 12,847 TPS versus 4,512 TPS on DigitalOcean's standard SSD Droplet with identical PostgreSQL tuning. WAL commit latency dropped from 776 µs to 303 µs (2.5x faster). For read-heavy workloads where hot data fits in shared_buffers, the gap narrows significantly — you are reading from RAM regardless. But checkpoints and autovacuum still hit disk even on read-heavy systems, so NVMe provides a baseline improvement for all workloads.
Should I colocate PostgreSQL with my app or use a separate VPS?
Colocate when your total resource usage (app + database) fits comfortably in one VPS. Queries go through the loopback interface at near-zero latency instead of 0.5-1.5ms over the network. This works well up to about 50-80 concurrent connections with moderate query complexity. Separate them when you see resource contention: your application and PostgreSQL fighting over the same RAM (watch for shared_buffers evictions), CPU (autovacuum competing with app processes), or I/O (WAL writes interfering with application log writes). When you separate, connect via private network — DigitalOcean and UpCloud offer free private networking, keeping latency under 1ms.
PgBouncer vs pgpool-II — which connection pooler should I use?
PgBouncer for 95% of use cases. It is a single lightweight process (5-8MB RAM) that multiplexes thousands of client connections into a small PostgreSQL backend pool. Set pool_mode = transaction for Django, Rails, and most web frameworks — this gives the highest connection reuse ratio. pgpool-II adds read-replica load balancing and query routing, but it is significantly more complex to operate and debug. Only consider pgpool-II if you need transparent read/write splitting across replicas and cannot handle this at the application level (e.g., your ORM does not support read replicas natively).
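A minimal pgbouncer.ini for the transaction-pooling setup described above. The database name, paths, and pool size are placeholders to adapt, not values from my benchmark:

```ini
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; highest reuse for Django/Rails-style apps
default_pool_size = 20       ; backend connections per database/user pair
max_client_conn = 1000       ; client side can far exceed the backend pool
```

Point your application at port 6432 instead of 5432; PgBouncer multiplexes those client connections onto the small backend pool.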
How does checkpoint tuning affect PostgreSQL VPS performance?
Checkpoints flush dirty shared_buffers pages to disk. A poorly tuned checkpoint creates an I/O storm that spikes query latency for seconds. Set checkpoint_completion_target = 0.9 (spreads writes over 90% of the checkpoint interval instead of rushing them), max_wal_size = 2GB or higher (delays checkpoint frequency, gives more time between flushes), and checkpoint_timeout = 15min. On NVMe providers like Hostinger and UpCloud, checkpoint impact is 4-13% throughput drop. On standard SSD, it can be 35-40% — a scenario where cheap storage costs you real production performance.
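As a postgresql.conf fragment, the checkpoint settings above are:

```ini
# Spread checkpoint writes over 90% of the interval instead of bursting
checkpoint_completion_target = 0.9
# Allow more WAL between checkpoints, so they happen less often
max_wal_size = 2GB
checkpoint_timeout = 15min
# Optional: log each checkpoint so you can correlate latency spikes
log_checkpoints = on
```

With log_checkpoints on, watch the server log during a pgbench run to see how long each checkpoint's write phase actually takes.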
How do I back up PostgreSQL on a VPS with point-in-time recovery?
For basic backups: pg_dump dbname | gzip > backup-$(date +%Y%m%d).sql.gz, scheduled via cron. For point-in-time recovery (PITR): use pgBackRest — it handles WAL archiving to S3-compatible storage (DigitalOcean Spaces, Backblaze B2, AWS S3), incremental backups, and parallel restore. Configure archive_mode = on and archive_command to ship WAL segments. This lets you recover to any transaction, not just the last dump. Test restores monthly — a backup you have never restored is not a backup. If using DigitalOcean's managed PostgreSQL, PITR and daily backups are included at no extra cost.
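The WAL-archiving prerequisites mentioned above, wired to pgBackRest, look like this in postgresql.conf (the stanza name `main` is a placeholder for whatever you define in pgbackrest.conf; archive_mode requires a restart):

```ini
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'
```

Once WAL segments are shipping, `pgbackrest backup` takes full or incremental base backups, and restore can replay to any point in time between them.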
What is effective_cache_size and how should I set it?
effective_cache_size does not allocate any memory — it tells the query planner how much total memory is available for caching (shared_buffers + OS page cache combined). Set it to 75% of total RAM. On a 4GB VPS: effective_cache_size = 3GB. On 8GB: 6GB. This helps the planner decide when to use index scans versus sequential scans. Setting it too low causes the planner to assume data is not cached and prefer sequential scans even when indexes would be significantly faster. It works in combination with random_page_cost — both settings together determine the planner's cost model for disk access.
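The 25%/75% rules of thumb from these answers can be captured in a tiny sizing helper. This is purely illustrative: the percentages are the heuristics above, not hard limits, and real deployments should be validated against pg_stat_io and query plans.

```python
def pg_memory_settings(total_ram_gb: float) -> dict:
    """Rule-of-thumb postgresql.conf memory settings for a VPS.

    shared_buffers:       25% of RAM (a floor, not a ceiling)
    effective_cache_size: 75% of RAM (planner hint only; allocates nothing)
    """
    return {
        "shared_buffers": f"{total_ram_gb * 0.25:g}GB",
        "effective_cache_size": f"{total_ram_gb * 0.75:g}GB",
    }

# The 4GB VPS from the examples above:
print(pg_memory_settings(4))
# {'shared_buffers': '1GB', 'effective_cache_size': '3GB'}
```

Remember the asymmetry: shared_buffers reserves real memory at startup, while effective_cache_size only shapes the planner's cost estimates.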
How much RAM do I need for PostgreSQL on a VPS?
It depends on your working set size, not your total database size. Run SELECT pg_size_pretty(pg_database_size('your_db')); to get total size, then check pg_stat_user_tables to identify which tables are frequently accessed. Your shared_buffers (25% of RAM) should cover the hot data. For most small-to-medium apps: 4GB VPS (1GB shared_buffers) handles databases up to ~3GB working set. 8GB covers up to ~6GB. If your working set exceeds shared_buffers, you need either more RAM (consider Kamatera's custom configurations) or faster NVMe storage (Hostinger, UpCloud) to absorb the cache misses without destroying query latency.
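A quick way to see whether your working set already fits in memory is the buffer cache hit ratio from pg_stat_database. Sustained values below roughly 99% on an OLTP workload usually mean the hot data does not fit in shared_buffers:

```sql
-- blks_hit = found in shared_buffers; blks_read = had to go to OS cache/disk
SELECT datname,
       round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname = current_database();
```

These counters accumulate since the last stats reset, so compare snapshots taken during representative load rather than reading a single lifetime number.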
My PostgreSQL VPS Recommendations
Best performance per dollar: Hostinger at $6.49/mo — 12,847 pgbench TPS, fastest WAL, NVMe storage. This is where my 4ms query happened.
Best for zero-ops teams: DigitalOcean managed PostgreSQL at $15/mo — PITR, auto-failover, built-in PgBouncer. You get a connection string and never think about backups again.
Best for large databases: Kamatera from $4/mo — buy 16-32GB RAM without paying for 8 vCPUs you will not use. The only rational choice for large shared_buffers deployments.