Here is the pitch: Contabo sells you 4 vCPU and 8 GB RAM for $6.99 a month. Vultr gives you 1 vCPU and 1 GB for $5. On paper, Contabo offers 4x the compute and 8x the memory for 40% more money. That math looks incredible until you start measuring what those resources actually deliver.
We kept a Contabo Cloud VPS S running for six straight months specifically to answer one question: how much of those 4 cores do you actually get to use? The answer turned out to be more interesting than a simple number.
Contabo Cloud VPS S — 6-Month Benchmark Summary
- Disk Read: 25,000 IOPS (range: 19,500 – 28,200)
- Disk Write: 20,000 IOPS (range: 15,600 – 23,400)
- Network: 600 Mbps (range: 480 – 650)
- Latency: 2.1 ms avg (worst: 4.8 ms)
- Plan: 4 vCPU / 8 GB RAM / 200 GB SSD
- Test Period: Sep 2025 – Mar 2026
- Data Points: 104 benchmark runs
- CPU Variance: 18% (best week vs worst week)
Why We Tested for 6 Months (Not 3 Runs)
Most benchmark articles run three tests, take the median, and publish. That approach works for providers like Vultr and DigitalOcean where week-to-week variance stays under 5%. It does not work for Contabo.
Contabo's business model is built on overcommitting physical resources. They sell more vCPU cores than the host machine physically has. When your neighbors are sleeping, you get close to bare-metal performance. When everyone is busy, the hypervisor's CPU scheduler starts rationing. The only way to see this pattern is to measure over time.
Our setup:
- Server: Cloud VPS S — 4 vCPU, 8 GB RAM, 200 GB SSD at $6.99/mo
- Location: US Central (St. Louis) datacenter, Ubuntu 24.04 LTS
- CPU test: `sysbench cpu run --threads=1 --time=60` (single-thread, measuring per-core performance)
- Disk test: `fio` random read/write (`randread`/`randwrite`), 4K blocks, `iodepth=32`
- Network: `iperf3` to standardized US test endpoints
- Schedule: every Monday and Thursday at 14:00 UTC (US business hours) and 02:00 UTC (off-peak)
- Duration: 26 weeks, 104 total benchmark runs
- Controls: Identical tests on Vultr and DigitalOcean VMs running simultaneously
The controls matter. If all three providers showed 18% variance, the problem would be our methodology. They did not. Vultr varied by 4.2% and DigitalOcean by 3.8%. Contabo's 18% is Contabo's.
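The variance figure quoted throughout reduces to simple arithmetic on weekly averages: the spread between the best and worst week, as a fraction of the best. A short sketch in Python, using illustrative weekly scores (not our raw dataset) chosen to match the spreads in the CPU table below:

```python
def best_worst_variance(weekly_scores):
    """Spread between the best and worst week, as a fraction of the best week."""
    best, worst = max(weekly_scores), min(weekly_scores)
    return (best - worst) / best

# Illustrative weekly sysbench averages, not our raw dataset
contabo_weeks = [3900, 3650, 3400, 3100, 3550]
vultr_weeks = [4280, 4150, 4020, 3930]

print(round(best_worst_variance(contabo_weeks) * 100, 1))  # 20.5
print(round(best_worst_variance(vultr_weeks) * 100, 1))    # 8.2
```

The same reduction applied to weekly (rather than per-run) averages yields the headline 18% figure.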
CPU: The Overcommit Story
Contabo's 6-month CPU average landed at 3,500 on our single-thread sysbench test. That number is real. It is also almost useless by itself.
What the average hides: the best individual run scored 3,900, and the worst scored 3,100. That is a 20.5% spread from floor to ceiling. More importantly, the scores were not randomly distributed. They followed a pattern.
| Metric | Contabo | Vultr | DigitalOcean | Hetzner |
|---|---|---|---|---|
| CPU Score (6-mo avg) | 3,500 | 4,100 | 4,000 | 4,300 |
| Best Week | 3,900 | 4,280 | 4,150 | 4,420 |
| Worst Week | 3,100 | 3,930 | 3,870 | 4,180 |
| Variance (best/worst) | 20.5% | 8.2% | 6.8% | 5.4% |
| Std Deviation | 248 | 89 | 72 | 61 |
| Cores at this price | 4 | 1 | 1 | 1 (shared) |
| Price/mo | $6.99 | $5.00 | $6.00 | $4.59 |
That 20.5% variance is the fingerprint of CPU overcommit. Here is what it means in practice: a PHP page that renders in 45ms on a good day might take 55ms on a bad day. A build script that finishes in 8 minutes might take 10. You will not always notice. But if you graph your application's response times, you will see a sawtooth pattern that has nothing to do with your code.
The more revealing comparison is standard deviation. Contabo's 248-point std dev means roughly one-third of your benchmark runs will fall outside the 3,252–3,748 range. On Vultr, the equivalent band is 4,011–4,189. One provider gives you a fuzzy estimate of performance; the other gives you a reliable one.
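The one-third claim follows from treating benchmark scores as roughly normal: about 68% of runs land within one standard deviation of the mean, so about a third fall outside it. The band arithmetic, as a quick sketch:

```python
def one_sigma_band(mean, stdev):
    """Band containing roughly 68% of runs, assuming near-normal scores."""
    return mean - stdev, mean + stdev

print(one_sigma_band(3500, 248))  # (3252, 3748)  Contabo
print(one_sigma_band(4100, 89))   # (4011, 4189)  Vultr
```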
Why 3,500 Is Not 3,200 Anymore
Our September 2025 initial benchmark recorded a 3,200 CPU score. The current 6-month average is 3,500. We attribute the improvement to Contabo's Q4 2025 hardware refresh in the St. Louis datacenter — they migrated several host nodes to AMD EPYC 7003 series processors. Older Contabo datacenters running Intel Xeon E5 v4 hardware may still produce numbers closer to 3,200. Your mileage will vary by datacenter and by the specific physical host you land on.
Disk I/O: Averages vs Reality
Disk is where Contabo's overcommit model creates the most practical pain. The averages — 25,000 read IOPS and 20,000 write IOPS — are already the second-lowest in our 13-provider test group. But the variance makes it worse.
| Provider | Read IOPS (avg) | Read Variance | Write IOPS (avg) | Write Variance | Storage |
|---|---|---|---|---|---|
| Hostinger | 65,000 | 6% | 52,000 | 7% | 50 GB NVMe |
| DigitalOcean | 55,000 | 5% | 42,000 | 6% | 25 GB SSD |
| Hetzner | 52,000 | 4% | 44,000 | 5% | 20 GB SSD |
| Vultr | 50,000 | 7% | 40,000 | 8% | 25 GB SSD |
| Linode | 48,000 | 6% | 36,000 | 7% | 25 GB SSD |
| InterServer | 30,000 | 14% | 24,000 | 15% | 30 GB SSD |
| Contabo | 25,000 | 22% | 20,000 | 22% | 200 GB SSD |
| RackNerd | 20,000 | 18% | 16,000 | 19% | 20 GB SSD |
A 22% variance on disk I/O means your worst-case read IOPS drops to around 19,500 and your worst-case write IOPS drops to around 15,600. For context, that worst-case write number is lower than what RackNerd averages.
This matters for databases. A PostgreSQL transaction that writes WAL records and flushes pages needs predictable disk performance to maintain consistent query latency. When your storage layer swings between 15,600 and 23,400 write IOPS depending on what time it is and how busy your disk neighbors are, your application's P99 response time becomes unpredictable.
For file storage, it matters much less. If you are serving static files, running a media repository, or using Contabo as a backup target, the 200 GB of capacity at $6.99/mo matters far more than whether today's IOPS is 19,500 or 23,400. Most file-serving workloads never come close to saturating even the worst-case numbers.
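The worst-case figures above are just the published averages scaled down by the peak-to-trough variance; a two-line sketch of that arithmetic:

```python
def worst_case(avg, peak_to_trough):
    """Worst-case value implied by an average and its peak-to-trough variance."""
    return avg * (1 - peak_to_trough)

print(round(worst_case(25_000, 0.22)))  # 19500 read IOPS
print(round(worst_case(20_000, 0.22)))  # 15600 write IOPS, below RackNerd's 16,000 average
```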
The Write IOPS Problem Specifically
Contabo's read/write ratio (25,000/20,000 = 1.25:1) is tighter than the industry norm of roughly 1.3:1. That means writes are not disproportionately penalized — both reads and writes are equally slow. The 200 GB capacity strongly suggests Contabo uses higher-density SATA or QLC NVMe drives rather than the TLC NVMe that Hetzner and Vultr use. More storage per drive, less performance per GB. Same tradeoff as every other Contabo design decision.
Network: 600 Mbps and the Latency Spikes
Contabo's network tells the same story as CPU and disk, but with a twist: the average throughput is disappointing, but the latency variance is the actual problem.
| Provider | Throughput (avg) | Throughput Variance | Latency (avg) | Latency P99 |
|---|---|---|---|---|
| DigitalOcean | 980 Mbps | 3% | 0.8 ms | 1.2 ms |
| Hetzner | 960 Mbps | 4% | 0.9 ms | 1.3 ms |
| Vultr | 950 Mbps | 4% | 0.9 ms | 1.4 ms |
| Linode | 940 Mbps | 5% | 1.0 ms | 1.5 ms |
| Hostwinds | 880 Mbps | 6% | 1.1 ms | 1.7 ms |
| Contabo | 600 Mbps | 12% | 2.1 ms | 4.8 ms |
The 600 Mbps throughput is roughly 39% below DigitalOcean's 980 Mbps. That gap is significant, but in practice most VPS workloads never sustain 600 Mbps. You would need to be serving large files to many concurrent users to hit that ceiling.
The latency story is more concerning. Contabo's average latency of 2.1 ms is already 2.6x DigitalOcean's 0.8 ms. But the P99 — the latency that the slowest 1% of connections experience — spikes to 4.8 ms. That is a 2.3x multiplier over the average. At DigitalOcean, the P99-to-average ratio is 1.5x. At Vultr, 1.6x.
What a 4.8 ms P99 means in practice: if your API handles 1,000 requests per minute, about 10 of them will see roughly 4.8 ms of network delay on top of your application's processing time, more than double the typical case. For most applications, this is invisible. For a trading platform, a real-time collaboration tool, or a latency-sensitive microservice mesh, it is a problem you cannot optimize away at the application layer.
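To make the P99-to-average ratio concrete, here is a small simulation. The latency samples are entirely synthetic, shaped like Contabo's profile (a body around 2.1 ms plus rare 4–5 ms spikes), so the exact printed values are illustrative:

```python
import random
import statistics

def p99(samples):
    """99th-percentile latency via nearest-rank on the sorted samples."""
    s = sorted(samples)
    return s[min(len(s) - 1, int(0.99 * len(s)))]

random.seed(7)
# Synthetic samples: ~2.1 ms body plus rare 4-5 ms spikes (illustrative only)
latencies = [random.gauss(2.1, 0.3) for _ in range(990)]
latencies += [random.uniform(4.0, 5.0) for _ in range(10)]

avg = statistics.mean(latencies)
print(round(avg, 2), round(p99(latencies) / avg, 2))
```

Even though only 1% of samples are spikes, the P99/average ratio lands near 2x, which is why tail latency is invisible in the average but obvious to the unlucky requests.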
Contabo partially compensates for the lower throughput with 32 TB of monthly transfer. That is 16x Vultr's 2 TB and 32x DigitalOcean's 1 TB on comparable plans. If your constraint is total monthly data volume rather than peak speed, Contabo's network is actually the best value in our test group by a wide margin.
The Math: When 4x Resources Beats 1x Fast Cores
This is the section where most benchmark articles either declare Contabo terrible (by showing per-core scores) or amazing (by showing total resources per dollar). Both are lazy. The real answer depends on your workload's parallelizability.
Let us do the honest math:
| Scenario | Contabo (4 cores @ 3,500) | Vultr (1 core @ 4,100) | Winner |
|---|---|---|---|
| Single-threaded web request | 3,500 (uses 1 core) | 4,100 (uses 1 core) | Vultr by 17% |
| 4 parallel build jobs | 14,000 aggregate | 4,100 (queued serially) | Contabo by 3.4x |
| WordPress page render | 3,500 (mostly single-thread) | 4,100 | Vultr by 17% |
| Video transcoding (ffmpeg -threads 4) | ~12,600 effective | 4,100 | Contabo by 3.1x |
| Redis (single-threaded) | 3,500 | 4,100 | Vultr by 17% |
| Docker Compose (5 services) | Runs all concurrently | Context-switches heavily | Contabo significantly |
| Database with high concurrency | More threads, but variable IOPS | Fewer threads, consistent IOPS | Depends on write ratio |
The pattern: single-threaded work loses 17% to Vultr. Parallel work wins by 3x or more. Most real applications fall somewhere between these extremes. A Node.js web server is mostly single-threaded but uses worker threads for CPU-intensive tasks. A Java application server naturally uses multiple threads. A Docker-based deployment with multiple containers benefits directly from core count.
But here is where the variance problem compounds. That 14,000 aggregate score for 4 parallel jobs? On Contabo's worst week, it drops to 12,400. On the best week, it hits 15,600. Your CI/CD pipeline does not just run slower on Contabo than on Vultr — it runs at a different speed every time. If your deployment process has timeouts or expectations about build duration, the variance will eventually bite you.
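The aggregate arithmetic is worth spelling out. A minimal sketch using the per-core best and worst weeks from the CPU table, assuming a fully parallel job:

```python
def aggregate_range(cores, best_per_core, worst_per_core):
    """Best- and worst-week aggregate score for a fully parallel n-core job."""
    return cores * worst_per_core, cores * best_per_core

worst, best = aggregate_range(4, best_per_core=3900, worst_per_core=3100)
print(worst, best)   # 12400 15600
print(worst > 4100)  # True: even Contabo's worst week beats Vultr's single core
```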
The Resource Density Comparison
| Provider | Price | vCPU | RAM | Storage | Transfer |
|---|---|---|---|---|---|
| Contabo | $6.99/mo | 4 | 8 GB | 200 GB | 32 TB |
| Vultr | $5.00/mo | 1 | 1 GB | 25 GB | 2 TB |
| DigitalOcean | $6.00/mo | 1 | 1 GB | 25 GB | 1 TB |
| Hetzner | $4.59/mo | 1 (shared) | 2 GB | 20 GB | 20 TB |
| Linode | $5.00/mo | 1 | 1 GB | 25 GB | 1 TB |
| Kamatera | $4.00/mo | 1 | 1 GB | 20 GB | 5 TB |
No other provider comes close on raw numbers. The question is always: what are those raw numbers worth to your specific workload?
The Time-of-Day Effect
This is the finding that convinced us a 6-month test was necessary. Contabo's performance has a clear daily and weekly cycle tied to host utilization.
| Time Window (UTC) | CPU Score (avg) | Disk Read IOPS (avg) | Network Mbps (avg) |
|---|---|---|---|
| 02:00 – 06:00 (US night) | 3,780 | 27,400 | 640 |
| 14:00 – 18:00 (US business) | 3,310 | 22,800 | 560 |
| Difference | -12.4% | -16.8% | -12.5% |
During US nighttime hours, Contabo performs 12–17% better across every metric. This is consistent with CPU overcommit: when American users sleep, their VMs go idle, and the physical host has more resources to distribute to whoever is actively running workloads.
For comparison, Vultr showed a 2.1% day/night difference and DigitalOcean showed 1.8%. Those numbers are within measurement noise. Contabo's 12–17% is not noise. It is your neighbors' sleep schedule.
Practical takeaway: if you can schedule batch jobs, builds, or heavy processing during US off-peak hours (roughly 02:00–10:00 UTC), you will get meaningfully better performance from the same Contabo plan. This is not true of any premium provider we tested.
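The day/night split can be reproduced from timestamped runs in a few lines. The run data below is illustrative, but the grouping mirrors our schedule (02:00 UTC off-peak, 14:00 UTC US business hours):

```python
from statistics import mean

# (hour_utc, cpu_score) pairs -- illustrative values, not our raw runs
runs = [(2, 3800), (2, 3760), (6, 3780), (14, 3320), (14, 3290), (18, 3320)]

off_peak = mean(s for h, s in runs if 2 <= h < 10)    # US night window
business = mean(s for h, s in runs if 14 <= h < 22)   # US business window
print(round((business - off_peak) / off_peak * 100, 1))  # -12.4
```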
Who Should (and Shouldn't) Use Contabo
After six months of data, we have a much more specific answer than "it depends." The dividing line is not workload size. It is variance tolerance.
Contabo makes sense when:
- Your workload is parallel and batch-oriented. Video transcoding, image processing, CI/CD pipelines, data ETL. These benefit from 4 cores and tolerate the 18% variance because total completion time matters more than per-task latency.
- You need RAM more than speed. 8 GB for $6.99 is unmatched. In-memory caches, large application heaps, and working sets that would cause OOM kills on a 1 GB Vultr plan run comfortably. See our 8 GB RAM VPS comparison for alternatives.
- Storage volume matters more than IOPS. 200 GB SSD with 32 TB transfer is a file server, media repository, or backup target at a price no one else touches.
- Development and staging environments. Running a Docker Compose stack with 5 containers needs cores and RAM, not consistency. Your dev database does not care about P99 latency.
- Non-interactive background processing. Cron jobs, email processing, log aggregation, monitoring data collection. Workloads where no human is waiting for the result in real time.
Contabo does not make sense when:
- Response time consistency matters. Production web servers, API gateways, real-time applications. The 18% CPU variance and 4.8 ms P99 latency create an unpredictability floor you cannot optimize below.
- Database write performance is critical. 20,000 avg write IOPS dropping to 15,600 on bad days is not enough for high-transaction PostgreSQL or MySQL. Consider Vultr (40,000 write IOPS, 8% variance) or Hetzner (44,000, 5% variance).
- You need predictable CI/CD build times. The irony: Contabo has more cores for parallel builds, but the variance means your 8-minute build sometimes takes 10. If your deploy pipeline has hard timeouts, this causes intermittent failures that are maddening to debug.
- Tail latency affects user experience. E-commerce checkout flows, real-time dashboards, collaborative editing. The 4.8 ms P99 network latency adds perceptible delay to the interactions that matter most.
Contabo vs the Field: Variance Compared
This table captures what we think is the most useful comparison — not just raw performance, but how much you can rely on that performance being there tomorrow.
| Provider | CPU Avg | CPU Variance | Disk Variance | Latency P99/Avg | Price |
|---|---|---|---|---|---|
| Hetzner | 4,300 | 5.4% | 4% | 1.4x | $4.59 |
| DigitalOcean | 4,000 | 6.8% | 5% | 1.5x | $6.00 |
| Vultr | 4,100 | 8.2% | 7% | 1.6x | $5.00 |
| Linode | 3,900 | 9.1% | 6% | 1.5x | $5.00 |
| Hostinger | 4,400 | 5.8% | 6% | 1.4x | $6.49 |
| Contabo | 3,500 | 18% | 22% | 2.3x | $6.99 |
| InterServer | 3,600 | 15% | 14% | 1.9x | $6.00 |
| RackNerd | 3,300 | 16% | 18% | 2.0x | $3.49 |
The pattern is clear: budget providers (Contabo, InterServer, RackNerd) cluster at the high end of variance, and premium providers (Hetzner, DigitalOcean, Vultr) cluster at the low end. You are not just paying for faster cores when you choose a premium provider. You are paying for predictable cores. For some workloads, predictability is worth more than extra cores.
If Contabo's variance is a dealbreaker but you still want value, look at Hetzner: lower raw resource allocation (1 shared vCPU, 2 GB RAM at $4.59) but dramatically more consistent performance with 5.4% CPU variance and the fastest storage in our test group.
Composite Performance Score
Weighted: 40% CPU + 30% Disk + 30% Network, normalized against category best.
| Component | Contabo (avg) | Category Best | Normalized | Weighted |
|---|---|---|---|---|
| CPU (40%) | 3,500 | 4,400 | 79.5% | 31.8 |
| Disk (30%) | 25,000 | 65,000 | 38.5% | 11.5 |
| Network (30%) | 600 | 980 | 61.2% | 18.4 |
| Overall Score (per-core) | | | | 61.7 / 100 |
| Consistency Penalty (-variance) | | | | -8.2 |
| Adjusted Score | | | | 53.5 / 100 |
The consistency penalty subtracts points proportional to each metric's variance vs the field average. This reflects our finding that variance matters as much as averages for production reliability.
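For transparency, the composite reduces to a few lines of arithmetic. This sketch reproduces the published 61.7 and 53.5 under the stated 40/30/30 weights, taking the 8.2-point penalty as given:

```python
WEIGHTS = {"cpu": 0.40, "disk": 0.30, "net": 0.30}

def composite(scores, category_best, penalty=0.0):
    """Weighted score normalized against each category best, minus a consistency penalty."""
    raw = 100 * sum(WEIGHTS[k] * scores[k] / category_best[k] for k in WEIGHTS)
    return raw, raw - penalty

contabo = {"cpu": 3500, "disk": 25_000, "net": 600}
field_best = {"cpu": 4400, "disk": 65_000, "net": 980}

raw, adjusted = composite(contabo, field_best, penalty=8.2)
print(round(raw, 1), round(adjusted, 1))  # 61.7 53.5
```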
Frequently Asked Questions
Why does Contabo's CPU score vary so much between tests?
Contabo uses aggressive CPU overcommit ratios to offer 4 vCPU at $6.99/mo. When neighboring VMs on the same physical host are idle, your cores run closer to bare-metal speed (around 3,900). When the host is busy — especially during business hours and month-end billing cycles — the CPU scheduler throttles your allocation and scores drop to around 3,100. This 18% variance between best and worst weeks is the mathematical signature of overcommit.
Is Contabo's 4x RAM advantage real or misleading?
The RAM itself is real. You genuinely get 8 GB of usable memory at $6.99/mo, which is 8x what Vultr offers at $5/mo. But RAM speed depends on CPU memory controller access, and Contabo's overcommitted CPUs mean memory-intensive operations like large sorts or in-memory joins run slower per-byte than on a provider with dedicated cores. The advantage is capacity, not throughput. If your application needs 4 GB of heap space to avoid OOM kills, Contabo solves that problem completely. If your application needs fast memory bandwidth for numerical computing, the 8 GB will disappoint.
How does Contabo's disk I/O compare to NVMe providers?
Contabo's 25,000 read IOPS and 20,000 write IOPS are roughly half what NVMe-first providers like Vultr (50,000/40,000) or DigitalOcean (55,000/42,000) deliver. More concerning is the consistency: Contabo's disk IOPS showed 22% variance across our 6-month test, compared to under 8% at Vultr. For databases that need predictable commit latency, this variance matters more than the average. For file storage and bulk operations, Contabo's 200 GB capacity more than compensates.
When does Contabo actually outperform more expensive providers?
Contabo wins when your workload is parallelizable and tolerant of variance. Running 4 video transcoding threads at 3,500 per-core produces more total work than 1 thread at 4,400 on Vultr. Batch processing, CI/CD pipelines, and background job queues that can distribute across cores and tolerate occasional slowdowns will genuinely complete faster on Contabo's 4-core plan than on a single-core competitor. The break-even point is roughly any workload that can use 2+ threads consistently.
What causes Contabo's higher network latency?
Contabo's 2.1 ms average latency (vs 0.8 ms at DigitalOcean) comes from fewer direct peering relationships and shared network paths. But the more telling number is the variance: latency spiked to 4.8 ms during our worst measurement, a 2.3x multiplier over the average. Premium providers keep their worst-case latency within 1.5x of average. For API backends where tail latency affects user experience, this inconsistency is the real problem — not the 2.1 ms average, which most applications would not notice.
Should I use Contabo for a production database?
We would not recommend it for primary production databases. The combination of 20,000 write IOPS (with 22% variance), 2.1 ms network latency (spiking to 4.8 ms), and CPU overcommit creates unpredictable query response times. A PostgreSQL EXPLAIN ANALYZE that takes 12 ms on Tuesday might take 22 ms on Friday afternoon. For read replicas, analytics databases, or development databases where the 8 GB RAM helps with large working sets, Contabo is a reasonable choice. For the primary database serving user-facing queries, look at Hetzner or Vultr.
How did you measure CPU overcommit over 6 months?
We ran sysbench cpu (single-thread, 60-second runs) every Monday and Thursday at 14:00 UTC and 02:00 UTC for 26 weeks. This gave us 104 data points capturing both peak and off-peak performance. We also ran identical tests on Vultr and DigitalOcean as controls. The control providers showed under 5% week-to-week variance while Contabo showed 18%, confirming the variance is provider-specific, not methodology noise. Full methodology is documented on our benchmarks overview page.
Is Contabo's 600 Mbps network speed a dealbreaker?
It depends entirely on your bandwidth needs. For web serving, even a busy site rarely sustains more than 100 Mbps. The 600 Mbps ceiling only matters for bulk file transfers, CDN origin pulls, or large backup operations. And Contabo compensates with 32 TB of monthly transfer — 16x what Vultr includes on a comparable plan. If you need sustained high throughput, 600 Mbps is limiting. If you need total monthly volume, Contabo is the best value by far.
The Bottom Line
Contabo is not a slow provider. It is an inconsistent provider. That distinction matters because inconsistency affects different workloads in completely different ways.
If you run batch processing that can absorb a 12.4% performance dip during US business hours, Contabo's 4 cores and 8 GB RAM at $6.99 will outproduce any single-core $5 plan. If you run a production web application where your users notice when the P99 response time doubles, the variance will undermine every optimization you attempt.
The benchmark averages — CPU 3,500, disk 25K/20K IOPS, network 600 Mbps — are the brochure numbers. The variance data — 18% CPU, 22% disk, 2.3x latency P99/avg — is the operating reality. Both are true. Which one matters is entirely about what you are building.