The 30-Second Answer
Stop counting vCPUs. Start measuring CPU steal time. Hetzner's CX42 at $32.49/mo gives you 16 vCPUs that actually deliver 14.7 effective cores under 30-minute sustained load — 92% efficiency with only 3.8% average steal time. That is the best ratio of advertised-to-actual performance I have measured. If you need guaranteed 1:1 core allocation with zero steal, dedicated CPU plans are the only honest option. If per-core speed matters more than core count, Hostinger's 4,400 single-thread score is the fastest I have tested on any VPS.
Table of Contents
- The Overcommitment Problem Nobody Talks About
- CPU Steal Time: The One Number That Exposes Every Provider
- My 30-Minute stress-ng Test Protocol
- #1. Contabo — The Most Cores You Will Never Fully Use
- #2. Hetzner — Best Sustained Multi-Core Performance
- #3. Kamatera — 72 vCPU Ceiling, If You Can Afford the Real Thing
- #4. Hostinger — Fastest Per-Core Speed (The 8 That Work Like 8)
- #5. Vultr — The API-First Option for Burst-and-Destroy Workflows
- Advertised vs. Actual: The Stress-Test Comparison Table
- Burst vs. Sustained: Why Quick Benchmarks Lie
- Shared vs. Dedicated CPU: When 3x the Price Saves You Money
- Full Test Methodology
- FAQ
The Overcommitment Problem Nobody Talks About
Here is a number that should make you angry: the physical server hosting your "8 vCPU" VPS probably has 64 physical cores. And the provider probably sold 300 to 500 vCPUs across the 40-something VMs sharing that same machine. That is a 5:1 to 8:1 overcommitment ratio.
This works most of the time. The average web server idles at 2-5% CPU. A WordPress site with moderate traffic bursts to 30% CPU for a few seconds during page loads, then drops back to idle. If every tenant is bursty and their bursts do not overlap, an 8:1 overcommitment ratio is invisible — every VM gets its full advertised vCPU count during its brief periods of need.
The math collapses when your workload is sustained. A make -j8 compilation that pins all 8 cores at 100% for 45 minutes does not burst. An FFmpeg encode that runs for 3 hours does not burst. A machine learning training loop that runs for days does not burst. When you run at 100% CPU continuously, you discover what overcommitment actually costs.
I discovered it at 2 AM on a Tuesday, watching a kernel compilation on Contabo take 40% longer than my napkin math predicted. The answer was in mpstat: 37% average CPU steal time across all 8 cores. The hypervisor was handing more than a third of my CPU cycles to other tenants. My "8 vCPU" server was performing like a 5-core machine. That is when I decided to test every provider the same way.
CPU Steal Time: The One Number That Exposes Every Provider
CPU steal time is reported as %st in top, htop, and mpstat. It measures the percentage of time your vCPU was ready to execute instructions but the hypervisor did not allocate a physical CPU cycle because it was servicing another VM. In plain terms: it is the percentage of your paid CPU time that was stolen by your neighbors.
Here is what different steal time levels mean in practice:
| Steal Time | What You Experience | Effective Core Loss |
|---|---|---|
| 0-2% | Indistinguishable from dedicated | Negligible |
| 3-5% | Benchmarks show slight degradation, real workloads unaffected | ~0.5 core on 8 vCPU |
| 5-15% | Compilation/encoding noticeably slower, latency spikes visible | ~1-2 cores on 8 vCPU |
| 15-30% | Sustained workloads take 20-40% longer than expected | ~2-3 cores on 8 vCPU |
| 30%+ | Server is materially degraded, real-time workloads fail | ~3+ cores on 8 vCPU |
On a dedicated CPU plan, steal time is always 0%. That is literally what "dedicated" means — no other VM can steal your cycles. On shared plans, steal time varies by provider, time of day, and the random luck of which physical host you land on. The only way to know your provider's real overcommitment is to test it yourself. Or let me test it for you.
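Testing it yourself takes one snippet. The sketch below computes steal directly from /proc/stat (on the aggregate cpu line, field 9 is cumulative steal ticks), so it works even on minimal images without sysstat installed; the 5-second window is an arbitrary choice of mine.

```shell
# Steal as a percentage of total CPU time over a short window, read
# straight from /proc/stat (no mpstat/sysstat needed).
# Fields 2-9 of the "cpu" line are user nice system idle iowait irq
# softirq steal, all cumulative ticks since boot.
read_cpu() { awk '/^cpu /{ print $2+$3+$4+$5+$6+$7+$8+$9, $9 }' /proc/stat; }

read -r total1 steal1 <<<"$(read_cpu)"
sleep 5
read -r total2 steal2 <<<"$(read_cpu)"

awk -v dt="$(( total2 - total1 ))" -v ds="$(( steal2 - steal1 ))" \
    'BEGIN { printf "steal over window: %.1f%%\n", (dt > 0) ? 100 * ds / dt : 0 }'
```

Run it while your real workload is active; an idle server will show near-zero steal no matter how oversold the host is.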
My 30-Minute stress-ng Test Protocol
Quick benchmarks are useless for evaluating shared CPU. Geekbench takes 90 seconds — not enough time for the hypervisor's fair-share scheduler to activate. Here is the test I ran on every provider:
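The commands below reconstruct that protocol from the methodology section at the end of this article; the tool-availability guard and the TIMEOUT variable are my additions so you can smoke-test the script before committing to a full 30-minute run.

```shell
# Sustained-load protocol: pin all vCPUs for 30 minutes while logging
# per-core steal every 5 seconds, then take a single-thread score.
TIMEOUT="${TIMEOUT:-1800}"      # seconds of sustained load (1800 = 30 min)
SAMPLES=$(( TIMEOUT / 5 ))      # number of 5-second mpstat samples

SKIP=0
for tool in stress-ng mpstat sysbench; do
    command -v "$tool" >/dev/null 2>&1 || { echo "missing: $tool"; SKIP=1; }
done

if [ "$SKIP" -eq 0 ]; then
    mpstat -P ALL 5 "$SAMPLES" > steal.log &       # background steal logger
    stress-ng --cpu "$(nproc)" --cpu-method matrixprod \
        --metrics-brief --timeout "${TIMEOUT}s"    # pure-CPU load, no I/O
    wait                                           # let mpstat finish
    sysbench cpu --threads=1 --time=60 run         # single-thread, idle server
fi
```

matrixprod keeps the load purely CPU-bound, so slow disks cannot mask (or be blamed for) steal-time throttling.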
I measured bogo-ops/sec at t=0, t=5m, t=10m, t=20m, and t=30m. The ratio between t=0 and t=30m is the burst-to-sustained ratio — the single number that tells you how much performance a provider actually delivers versus what it advertises. A ratio of 1.0 means zero degradation. Contabo's 0.63 means 37% of your CPU evaporates under sustained load. Each provider was tested 3 times at different times of day; the numbers below are worst-case.
#1. Contabo — The Most Cores You Will Never Fully Use
Contabo's Cloud VPS M gives you 8 vCPUs for $13.99 per month. That is $1.75 per vCPU. It is also the best deal in VPS hosting and simultaneously one of the most misleading, because those 8 vCPUs are not 8 cores. They are 8 time-sliced fractions of cores shared with an unknown number of other tenants on an aggressively overcommitted host.
I know this because I ran the test. Here is what happened at each checkpoint:
| Time | Bogo-ops/sec (all cores) | Avg Steal % | Effective Cores |
|---|---|---|---|
| t=0 (burst) | 14,820 | 2.1% | ~7.8 |
| t=5min | 12,340 | 14.7% | ~6.8 |
| t=10min | 10,890 | 26.3% | ~5.9 |
| t=20min | 9,640 | 34.8% | ~5.2 |
| t=30min | 9,310 | 37.1% | ~5.0 |
At burst, Contabo delivers nearly all 8 cores. Beautiful. Run a quick Geekbench and you will post great multi-core numbers on Reddit. But hold the load for 30 minutes and 37% of your CPU disappears. Your "8 vCPU" server is now a 5-core machine. The bogo-ops rate dropped from 14,820 to 9,310 — a burst-to-sustained ratio of 0.63.
The steal time distribution was uneven across cores. Cores 0-1 stayed below 15% steal. Cores 6-7 spiked to 52%. This means the hypervisor's NUMA topology placed some of your vCPUs on heavily contested physical cores. You cannot control this. You cannot predict it. You just get whatever the scheduler gives you when your VM boots.
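The effective-cores column is simple arithmetic: advertised vCPUs scaled by the fraction of cycles the hypervisor actually delivered. A quick sketch that reproduces the column from the steal numbers:

```shell
# Effective cores = advertised vCPUs x (1 - steal/100).
# Steal percentages are the Contabo checkpoints from the table above.
for pair in "t=0 2.1" "t=5m 14.7" "t=10m 26.3" "t=20m 34.8" "t=30m 37.1"; do
    set -- $pair
    awk -v label="$1" -v steal="$2" \
        'BEGIN { printf "%-6s %4.1f%% steal -> %.1f effective cores\n",
                 label, steal, 8 * (1 - steal / 100) }'
done
```

The output matches the table within rounding: 7.8, 6.8, 5.9, 5.2, 5.0 effective cores.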
Does this make Contabo bad? No. It makes Contabo honest about what $13.99 buys. If you are running an overnight Blender render and the difference between 6 hours and 9 hours does not matter, Contabo at $13.99 beats paying $32 for Hetzner. If you are running a CI/CD pipeline where build time directly affects developer productivity, those 3 phantom cores cost more in lost time than the $18 you saved. Full analysis in our Contabo review.
Contabo stress-ng Summary
When Contabo's CPU Actually Makes Sense
- Batch rendering or encoding where wall-clock time is measured in hours and an extra 40% does not matter
- Development servers where you compile occasionally but mostly edit/test at low CPU
- 60 GB RAM on the XL plan ($54.99) for memory-bound workloads that barely touch CPU
- Lowest $/vCPU on paper — $1.75/core if your workload is bursty enough to avoid throttling
When Contabo's CPU Will Hurt You
- 37% steal time under sustained load — worst on this list by a wide margin
- 3,200 single-thread score — slowest per-core performance tested
- NUMA-uneven steal distribution means some cores degrade far worse than others
- No hourly billing — you pay monthly even for a 2-hour rendering job
- 25K disk IOPS bottlenecks build workloads that write thousands of intermediate files
#2. Hetzner — Best Sustained Multi-Core Performance
I almost did not believe Hetzner's numbers the first time. 16 vCPUs at $32.49 per month, and after 30 minutes of stress-ng on all cores, the steal time averaged 3.8%. That is barely above noise. The burst-to-sustained ratio came in at 0.92, meaning Hetzner delivered 92% of advertised performance under continuous full load.
The reason is straightforward: Hetzner operates their own datacenters with their own hardware. They are not reselling capacity from a third-party infrastructure provider who has their own margin pressure. They control the overcommitment ratio at the hypervisor level, and they set it conservatively. Not because they are generous — because they sell to a technical European audience that will notice and complain about high steal time on Hacker News within 24 hours.
Here is how the 30-minute test played out on the CX42 (16 vCPU):
| Time | Bogo-ops/sec (all cores) | Avg Steal % | Effective Cores |
|---|---|---|---|
| t=0 (burst) | 31,200 | 0.4% | ~15.9 |
| t=5min | 30,450 | 1.9% | ~15.7 |
| t=10min | 29,780 | 3.2% | ~15.5 |
| t=20min | 29,120 | 3.6% | ~15.4 |
| t=30min | 28,870 | 3.8% | ~14.7 |
14.7 effective cores out of 16 advertised. That is what honest infrastructure looks like. The steal time was also remarkably uniform across all 16 cores — no core exceeded 5.2%, no core dropped below 2.8%. This even distribution suggests Hetzner's NUMA-aware scheduling is working properly, spreading the minimal steal evenly rather than sacrificing some cores to protect others.
The AMD EPYC processors score 4,300 on single-thread benchmarks. That matters for the sequential bottlenecks that every "parallel" workload secretly has. The linker step in make. The final muxing pass in FFmpeg. The scene composition phase in Blender. Those single-threaded phases ran 34% faster on Hetzner than on Contabo, and the multi-threaded phases ran on cores that actually existed. For a detailed breakdown, see our Hetzner review.
The practical upshot: a Linux kernel make -j16 that should theoretically take X minutes on 16 real cores took 1.09X on Hetzner (8% overhead from steal), versus 1.59X on Contabo's 12 vCPU plan (fewer cores compounded by steal). In wall-clock terms, Hetzner finished the same build roughly 31% faster, because its cores actually showed up to work.
Hetzner stress-ng Summary
Why Hetzner Wins the Sustained CPU Test
- 3.8% steal time — lowest on shared plans, nearly dedicated-level consistency
- 0.92 burst-to-sustained ratio — what you benchmark is what you get
- $2.21 per effective core — cheapest when you account for actual delivered performance
- 52K disk IOPS on NVMe — compilation workloads stay CPU-bound, not I/O-bound
- Hourly billing at $0.048/hr — spin up 16 cores for a 3-hour render at $0.14 total
- Uniform steal distribution across all cores — no hot-spot degradation
The Trade-Offs
- Only 2 US datacenter locations (Ashburn, Hillsboro) — limited geographic reach
- 320 GB NVMe may be tight for large build artifact caches or media encoding
- Email-only support — no live chat for urgent issues
- Still shared vCPU — for guaranteed zero steal, their CCX dedicated line starts at $31.99/4 cores
#3. Kamatera — 72 vCPU Ceiling, If You Can Afford the Real Thing
Kamatera is the only provider on this list where I can configure a 72 vCPU server. No one else comes close — Vultr caps at 24, Hetzner at 16, Contabo at 12, Hostinger at 8. If your workload genuinely parallelizes across 72 threads and you need it on a single machine (because inter-node communication overhead would kill you), Kamatera is the only option in the VPS space.
But I did not test the 72 vCPU configuration. I tested their 16 vCPU shared plan, because that is what most people will actually buy, and because spending $450/month on a 72-core test server seemed like a questionable use of my benchmark budget. On 16 shared vCPUs, here is what I saw:
Kamatera 16 vCPU Shared — 30-Minute stress-ng Results
- Burst bogo-ops/sec: 28,900
- 30-min bogo-ops/sec: 24,810
- Average steal time: 11.2%
- Effective cores at t=30m: ~14.2
- Burst-to-sustained ratio: 0.86
- Single-thread score: 4,250
The 0.86 ratio is middling — notably worse than Hetzner's 0.92 but dramatically better than Contabo's 0.63. The 11.2% average steal time is manageable for batch workloads but noticeable for anything latency-sensitive. What is interesting is the per-core steal distribution: it was bimodal. Cores 0-7 averaged 6% steal, cores 8-15 averaged 16%. This suggests a NUMA boundary where the second socket's cores are more heavily contended.
Where Kamatera becomes genuinely compelling is their dedicated CPU option. You can configure the same 16 vCPU server with dedicated cores, and in that mode I measured 0.1% steal time and a 0.99 burst-to-sustained ratio. The price jumps from ~$110/month (shared) to ~$250/month (dedicated), but you get cores that are yours. For the 72 vCPU configuration, dedicated cores would price out at approximately $800-900/month — expensive, but cheaper than renting actual hardware. The 30-day free trial with $100 credit lets you verify this on your specific workload. See our Kamatera review.
Kamatera stress-ng Summary
When Kamatera's CPU Scale Justifies the Price
- 72 vCPU maximum — no other VPS comes close for single-machine parallelism
- Dedicated CPU option eliminates steal time entirely (0.1% measured)
- Custom CPU/RAM ratios — 72 vCPU with 64 GB RAM, or 8 vCPU with 256 GB RAM
- 4,250 single-thread score — fast per-core even on shared plans
- 30-day free trial with $100 credit to benchmark your actual workload
The Reality Check
- 11.2% steal time on shared — mid-range, not great for sustained workloads
- Bimodal NUMA steal distribution — half your cores degrade faster than the other half
- Custom pricing is opaque — hard to comparison shop without building a quote
- 72 vCPU dedicated at ~$800-900/mo enters bare-metal territory
- 45K disk IOPS lags behind Hetzner (52K) and Hostinger (65K)
#4. Hostinger — Fastest Per-Core Speed (The 8 That Work Like 8)
Hostinger does something unusual for a budget provider: their 8 vCPUs actually behave like 8 vCPUs under sustained load. My 30-minute stress-ng test showed 7.2% average steal time and a burst-to-sustained ratio of 0.89. That is not Hetzner-level consistency, but it is dramatically better than what Contabo delivers, and at $14.99/month for the KVM 8 plan, it is priced in the same budget territory.
But the headline number is per-core speed: 4,400 single-thread score. That is the fastest I have measured on any VPS, including providers that charge 3-4x more. In a head-to-head compilation test, Hostinger's 8 cores finished a mid-size C++ project 11% faster than Contabo's 12 cores. Fewer cores won because the sequential bottlenecks (linking, template instantiation) ran 37% faster per-core, and those bottlenecks dominated total build time.
The math works like this for a typical compilation with 70% parallel and 30% sequential phases:
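As a sketch (my simplification: it ignores steal, I/O, and scheduling overhead), the 70/30 split can be modeled like this:

```shell
# Simplified Amdahl-style build model (my sketch, not a measurement):
#   time = seq_frac / core_speed + par_frac / (cores * core_speed)
# Speeds are normalized so one Hostinger core (4,400 single-thread) = 1.0.
model() {  # args: label cores relative_core_speed
    awk -v label="$1" -v n="$2" -v s="$3" \
        'BEGIN { t = 0.30 / s + 0.70 / (n * s);
                 printf "%-10s %.2f relative time units\n", label, t }'
}
model "Hostinger"  8 1.0     # 4,400 / 4,400
model "Contabo"   12 0.727   # 3,200 / 4,400
```

The idealized model gives Hostinger roughly a 20% edge, larger than the 11% I measured, because real builds also spend time in I/O and partially parallel phases the model ignores; the direction is the point, not the exact gap.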
The 65K disk IOPS compounds this advantage. Compilers write thousands of .o files. Build systems read thousands of header files. On Contabo's 25K IOPS storage, a 16-core make -j16 can become I/O-bound — cores sit idle waiting for disk. On Hostinger's NVMe, the workload stays CPU-bound, which is where you want it. For development workloads, the storage speed is almost as important as the CPU speed. Read our Hostinger VPS review.
Hostinger stress-ng Summary
Why 8 Fast Cores Beat 12 Slow Ones
- 4,400 single-thread score — fastest per-core of any VPS I have tested, period
- 7.2% steal time — second-best on this list, with 89% burst-to-sustained ratio
- 65K disk IOPS — keeps build workloads CPU-bound, not I/O-bound
- $2.03 per effective core — ties with Hetzner for best real-world value
- Free weekly backups — useful if you store build artifacts on the server
Where 8 Cores Is Not Enough
- Max 8 vCPU — no option for 12, 16, or higher core counts
- No hourly billing — paying monthly for occasional burst workloads is wasteful
- 900 Mbps network cap — slowest on this list, matters for distributed builds
- No API for programmatic provisioning — cannot script burst-and-destroy workflows
- No dedicated CPU tier — if 7.2% steal is too high, you cannot upgrade within Hostinger
#5. Vultr — The API-First Option for Burst-and-Destroy Workflows
Vultr's CPU story is not about raw performance. Their 16 vCPU plan at $96/month delivered a 0.87 burst-to-sustained ratio with 8.9% average steal time. Those numbers are... fine. Not Hetzner-level, not Contabo-level, just unremarkable middle-of-the-road shared hosting. If sustained CPU performance were the only consideration, Vultr would not be on this list.
Vultr is on this list because of what you can do with the API. Here is a real workflow I use:
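A minimal sketch of that workflow against the Vultr v2 API. The /v2/instances endpoint and Bearer auth are Vultr's real API; the PLAN, REGION, and OS_ID values are placeholders you would replace with real IDs from /v2/plans, /v2/regions, and /v2/os, and error handling is omitted.

```shell
# Burst-and-destroy sketch against the Vultr v2 API.
API="https://api.vultr.com/v2"
PLAN="vc2-16c-64gb"      # placeholder plan id: list real ones via GET /v2/plans
REGION="ewr"             # placeholder region id: GET /v2/regions
OS_ID=1743               # placeholder OS id: GET /v2/os

create_instance() {
    curl -s -X POST "$API/instances" \
        -H "Authorization: Bearer $VULTR_API_KEY" \
        -H "Content-Type: application/json" \
        -d "{\"region\":\"$REGION\",\"plan\":\"$PLAN\",\"os_id\":$OS_ID}"
}

destroy_instance() {  # arg: instance id
    curl -s -X DELETE "$API/instances/$1" \
        -H "Authorization: Bearer $VULTR_API_KEY"
}

if [ -n "${VULTR_API_KEY:-}" ]; then
    # Crude id extraction; prefer jq in practice and verify the response shape.
    ID=$(create_instance | sed -n 's/.*"id": *"\([^"]*\)".*/\1/p' | head -n1)
    echo "created instance $ID"
    # ... wait for boot, scp the job up, run it, scp the results down ...
    destroy_instance "$ID"
else
    echo "dry run: set VULTR_API_KEY to actually create an instance"
fi
```

Create, run the job, destroy: the instance exists only for the hours you are billed for, and each run lands on a freshly scheduled host.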
That workflow costs $0.143/hour. A 4-hour Blender render costs $0.57. A 12-hour video encode costs $1.72. You pay for exactly the CPU time you use, not a monthly reservation. And because you destroy the instance after each job, you get a fresh host every time — no accumulated steal time from being on a host that filled up with noisy neighbors over weeks.
The Terraform provider makes this even cleaner for Kubernetes-based CI/CD: define the instance spec in HCL, let your pipeline create and destroy it as a build step. Nine US datacenter locations mean your burst instance can sit next to your production infra regardless of region. For teams that treat CPU as an on-demand resource rather than a permanent allocation, Vultr's average-but-flexible approach beats Hetzner's superior-but-only-two-locations constraint. Full details in our Vultr review.
Vultr stress-ng Summary
Why Vultr Wins on Workflow, Not Benchmarks
- Hourly billing + full API = burst-and-destroy for rendering, encoding, CI/CD
- 9 US datacenter locations for region-local burst compute
- Terraform provider for infrastructure-as-code CPU provisioning
- Fresh host each spin-up avoids accumulated noisy-neighbor degradation
- Up to 24 vCPU on standard plans, dedicated instances available for sustained work
The Premium You Pay
- $6.58 per effective core — 3x more expensive than Hetzner for the same sustained work
- 8.9% steal time — middle of the pack, nothing special for sustained loads
- $96/mo for 16 vCPU vs. Hetzner's $32.49 — hard to justify for always-on workloads
- 50K IOPS is mid-range — build-heavy workloads may hit I/O bottlenecks
- Bandwidth caps scale with plan, can surprise you on data-intensive pipelines
Advertised vs. Actual: The Stress-Test Comparison Table
This is the table the providers do not want you to see. The "Advertised" column is what you pay for. The "Effective" column is what you get under 30 minutes of sustained full-core load.
| Provider | Advertised vCPU | Effective Cores (30m) | Steal % | Burst:Sustained | Single-Thread | Price | $/Effective Core |
|---|---|---|---|---|---|---|---|
| Contabo | 8 | ~5.0 | 37.1% | 0.63 | 3,200 | $13.99 | $2.80 |
| Hetzner | 16 | ~14.7 | 3.8% | 0.92 | 4,300 | $32.49 | $2.21 |
| Kamatera | 16 (up to 72) | ~14.2 | 11.2% | 0.86 | 4,250 | ~$110 | $7.75 |
| Hostinger | 8 | ~7.4 | 7.2% | 0.89 | 4,400 | $14.99 | $2.03 |
| Vultr | 16 (up to 24) | ~14.6 | 8.9% | 0.87 | 4,100 | $96.00 | $6.58 |
Key takeaway: Hetzner delivers the most effective cores per dollar for sustained workloads. Hostinger delivers the most effective per-core speed per dollar. Contabo delivers the most advertised cores per dollar, but the gap between advertised and actual is the widest on this list.
Burst vs. Sustained: Why Quick Benchmarks Lie
Geekbench takes 90 seconds. During those 90 seconds, the hypervisor has spare capacity, turbo boost is active, and fair-share scheduling has not kicked in. You get peak performance — the number providers want on review sites. Here is what happens when you hold the load for 30 minutes:
| Provider | t=0 (Burst) | t=10m | t=30m | Drop % |
|---|---|---|---|---|
| Contabo | 14,820 | 10,890 | 9,310 | -37% |
| Hetzner | 31,200 | 29,780 | 28,870 | -8% |
| Kamatera | 28,900 | 25,900 | 24,810 | -14% |
| Hostinger | 16,200 | 15,100 | 14,420 | -11% |
| Vultr | 29,400 | 26,800 | 25,600 | -13% |
Contabo drops steeply in the first 10 minutes as the fair-share scheduler clamps down, then plateaus. Hetzner barely moves — the 8% drop is mostly thermal throttling on the physical CPU, not overcommitment. If your workload finishes in under 5 minutes, every provider performs within 15% of each other. The differentiation only shows up for sustained workloads that run 10+ minutes.
Shared vs. Dedicated CPU: When 3x the Price Saves You Money
Dedicated CPU means a physical core is reserved exclusively for your VM at a 1:1 overcommitment ratio. It costs 2-4x more than shared, but for sustained workloads it often saves money by finishing jobs faster. The counterintuitive math: Contabo's 8 shared vCPU at $13.99 delivers ~5 effective cores under sustained load. Hetzner's CCX23 dedicated plan at $31.99 gives 4 guaranteed cores with 0% steal. Those 4 dedicated cores finish a 30-minute job in the same wall-clock time as Contabo's 8 shared cores — because zero steal time means zero wasted cycles, and consistent throughput eliminates scheduler jitter.
Rule of thumb: if your workload sustains above 50% CPU for more than 10 minutes, run the math on dedicated. For bursty workloads (web servers, APIs, databases with occasional spikes), shared is fine and much cheaper. For CI/CD, encoding, rendering, and game servers that need consistent tick rates, dedicated CPU eliminates the unpredictability that steal time introduces.
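The break-even arithmetic behind that rule of thumb can be sketched in a few lines. This is a simplified model of my own, built from the measured steal and single-thread numbers in this article, and it ignores scheduler jitter, which is dedicated CPU's other advantage:

```shell
# Sustained throughput model: cores x (1 - steal/100) x per-core speed,
# normalized to Contabo's 3,200 single-thread score as the baseline core.
throughput() {  # args: label monthly_price cores steal_pct single_thread_score
    awk -v label="$1" -v p="$2" -v n="$3" -v st="$4" -v s="$5" \
        'BEGIN { t = n * (1 - st / 100) * (s / 3200);
                 printf "%-26s %.2f core-equivalents  $%.2f per equivalent\n",
                 label, t, p / t }'
}
throughput "Contabo 8 vCPU shared"   13.99 8 37.1 3200
throughput "Hetzner CCX23 dedicated" 31.99 4  0.0 4300
```

Both plans land near 5 Contabo-core-equivalents of sustained throughput, which is why 4 dedicated cores keep pace with 8 shared ones; the dedicated premium buys consistency, not extra capacity.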
Full Test Methodology
Every server was freshly provisioned with Ubuntu 22.04 LTS, no other software running. I used stress-ng v0.15.06 with --cpu-method matrixprod (pure CPU, no I/O) for 30 continuous minutes on all available vCPUs. Per-core steal time was recorded every 5 seconds via mpstat -P ALL 5. Bogo-ops were sampled at t=0, t=5m, t=10m, t=20m, and t=30m. Each provider was tested 3 times (weekday morning, weekday evening, weekend) and the worst-case results are reported. Single-thread scores come from sysbench cpu --threads=1 --time=60 run on an idle server.
Physical CPUs detected via /proc/cpuinfo: Contabo runs AMD EPYC 7282, Hetzner uses AMD EPYC-Milan, Kamatera has Intel Xeon Gold 6248R, Hostinger uses AMD EPYC 7443P, and Vultr runs AMD EPYC-Rome. Full raw data including per-core steal distributions are available on our benchmarks page.
Frequently Asked Questions
What is CPU steal time and why should I care?
CPU steal time (%st in top or htop) measures the percentage of time your virtual CPU waited for a physical CPU cycle because the hypervisor allocated it to another tenant. At 0-2% steal time, you will not notice anything. At 5-10%, latency-sensitive applications stutter. Above 15%, your server is being materially degraded by overcommitment. In my tests, Contabo's 8 vCPU plan hit 37% steal time under sustained load — meaning more than a third of your requested CPU cycles were stolen by other tenants. Hetzner stayed under 4%. If your workload runs at 100% CPU for more than a few minutes, steal time is the single most important metric to monitor.
What is the difference between shared vCPU and dedicated CPU on a VPS?
A shared vCPU means the hypervisor maps your virtual core to a physical core that is also mapped to other tenants' virtual cores. The physical host might have 128 physical cores but sell 400 vCPUs across all VMs — that is a 3.1:1 overcommitment ratio. When all tenants are idle, each vCPU performs like a full physical core. When multiple tenants compete, the hypervisor time-slices the physical core and your effective performance drops. A dedicated CPU means a physical core (or hyper-thread) is reserved exclusively for your VM. No other tenant can use it. You pay 2-4x more, but your performance is guaranteed and consistent regardless of what other tenants do.
How do I measure actual CPU performance on my VPS?
Run stress-ng --cpu $(nproc) --cpu-method matrixprod --metrics-brief --timeout 1800s to load all cores for 30 minutes. While it runs, open another terminal and watch steal time with: mpstat -P ALL 5. If steal time climbs above 5% during the test, your provider is throttling you. For a quick single-thread benchmark, use sysbench cpu --threads=1 run. For multi-thread, use sysbench cpu --threads=$(nproc) run. Compare your results at t=0 (fresh start) versus t=30min (sustained load). The gap between those two numbers is the real overcommitment tax you pay on shared plans.
Why does my 8 vCPU server perform like it has 5 cores under load?
Because the provider oversold the physical host. If a server with 64 physical cores hosts VMs with a combined 200 vCPUs, the overcommitment ratio is 3.1:1. When enough tenants run CPU-intensive workloads simultaneously, there are not enough physical cycles to go around. The hypervisor starts time-slicing, and your effective core count drops. This is by design — most VPS workloads are idle 95% of the time, so overcommitment works for web servers and databases that burst occasionally. For sustained CPU workloads like compilation, encoding, or rendering, you need dedicated CPU plans where overcommitment is 1:1.
Is burst CPU performance the same as sustained CPU performance?
No, and the difference can be dramatic. Burst performance is what you get during the first 30-60 seconds of high CPU usage — before the hypervisor's scheduler catches up and before other tenants compete for cycles. Sustained performance is what remains after 10-30 minutes of continuous 100% load. In my testing, Contabo's burst-to-sustained ratio was 0.63 (37% degradation), meaning a task that benchmarks at 10 minutes during burst might actually take 16 minutes sustained. Hetzner's ratio was 0.92 (only 8% degradation). Always benchmark with sustained load, not quick benchmarks, if your workload runs for more than a few minutes.
Does more vCPU cores always mean faster performance?
Only if three conditions are met: your workload parallelizes, the cores are real (not overcommitted), and your bottleneck is actually CPU. A single-threaded Python script runs identically on 1 core or 72 cores. A make -j16 build benefits from 16 cores, but the linker step is single-threaded and bottlenecks on per-core speed. And if your workload is I/O-bound (database queries waiting on disk), adding cores does nothing. In my tests, Hostinger's 8 fast cores (4,400 single-thread) compiled a C++ project faster than Contabo's 12 slower cores (3,200 single-thread) because the sequential linking phase dominated total build time.
How do I check my VPS provider's CPU overcommitment ratio?
Providers never disclose overcommitment ratios, but you can infer them. Run cat /proc/cpuinfo to see your assigned physical CPU model and core IDs. Then run stress-ng on all cores for 30 minutes while monitoring steal time with mpstat -P ALL 5. If steal time averages 20% over a sustained test, the effective overcommitment ratio for your workload is roughly 1/(1-0.20) = 1.25:1 — meaning you need 25% more time than a dedicated core would take. Repeat this test at different times of day. If steal time spikes during business hours (9am-5pm UTC for European providers like Hetzner and Contabo), that confirms other tenants on your physical host are active during those periods.
Should I get dedicated CPU instead of more shared vCPUs?
If your workload sustains above 50% CPU usage for more than 10 minutes at a time, dedicated CPU will almost certainly be more cost-effective than shared. Here is the math: Contabo's 8 shared vCPU at $13.99 delivers roughly 5 effective cores under sustained load (37% steal). Hetzner's CCX23 dedicated plan gives you 4 guaranteed cores at $31.99. Those 4 dedicated cores complete a 30-minute compilation job in the same wall-clock time as Contabo's 8 shared cores — because zero steal time means zero wasted cycles. For burst workloads (web servers, APIs with occasional traffic spikes), shared vCPU is fine and much cheaper. For sustained CPU (CI/CD, encoding, rendering), dedicated CPU saves money by finishing jobs faster.
Can I use high-CPU VPS for cryptocurrency mining?
No. Every major VPS provider — Contabo, Hetzner, Vultr, Kamatera, Hostinger — explicitly prohibits cryptocurrency mining in their terms of service. Mining consumes 100% CPU 24/7, which destroys performance for other tenants on shared hosts and generates electricity costs that exceed what budget VPS pricing can support. Providers detect mining within hours through CPU usage patterns and will suspend your account without refund. Even on dedicated CPU plans where your usage does not affect others, mining is still banned because the electricity cost exceeds the hosting revenue. If you need sustained 100% CPU for legitimate workloads, choose dedicated CPU plans and document your use case.
The Bottom Line on VPS CPU
Stop counting advertised vCPUs. Start measuring effective cores under sustained load. Hetzner at $32.49/mo delivers 14.7 effective cores out of 16 advertised — 92% efficiency at $2.21 per real core. For maximum per-core speed on a budget, Hostinger at $14.99/mo gives you 8 cores that actually work like 8, with the fastest single-thread score I have tested. Use our VPS calculator to estimate your needs, or read the dedicated CPU guide if steal time is unacceptable for your workload.