I ran into this problem migrating a client’s trading API. Contabo showed 800 Mbps on a quick iperf3 burst test. Looked fine. But the API dropped 3-4% of WebSocket messages during New York market open. Contabo’s sustained throughput during peak hours was around 420 Mbps, with latency jitter spiking to 8ms. Moved to DigitalOcean: 920+ Mbps sustained during peak, jitter under 0.3ms. Problem disappeared.
That changed how I benchmark VPS networks. A 5-second speed test tells you nothing useful. What matters is sustained throughput under contention — how fast your VPS moves data when all the other tenants are also busy.
Table of Contents
- The “1 Gbps” Lie — Why Advertised Speeds Are Meaningless
- How I Tested (And Why Quick Speed Tests Are Useless)
- Full Results: Sustained Throughput, Latency, and Jitter
- The Peak Hours Problem — Where Budget Providers Fall Apart
- Burst vs Sustained — The Numbers That Actually Matter
- Who Actually Delivers on Their Promise
- Throughput vs Latency: Different Problems, Different Winners
- Cost Per Sustained Mbps — The Real Value Calculation
- When Network Speed Is (and Isn’t) Your Bottleneck
- FAQ
The “1 Gbps” Lie — Why Advertised Speeds Are Meaningless
All 13 providers advertise a 1 Gbps network port. Technically true, in the same way a highway with a 70 mph speed limit is “a 70 mph road” during rush hour. The 1 Gbps figure is port speed — the theoretical maximum with zero contention. Here is what actually happens on a shared host node:
- Oversubscription: A typical host node has a 10 Gbps uplink shared among 30-60 VPS instances. The math does not work if everyone transfers simultaneously. Providers bet most instances are idle most of the time. During peak hours, that bet fails.
- Burst vs sustained: Some providers (Contabo, RackNerd) allow full 1 Gbps burst for 2-5 seconds, then throttle to 300-500 Mbps. A quick iperf3 catches the burst. A 30-second test catches the throttle.
- TCP overhead: Headers, ACKs, and congestion control consume 3-5%. The theoretical TCP max on a 1 Gbps link is ~960 Mbps.
- Noisy neighbors: One tenant running a large backup can temporarily saturate the shared uplink, affecting everyone on the node.
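The back-of-envelope numbers in these bullets are easy to reproduce. A minimal sketch of the contention and overhead math, using the illustrative figures above (10 Gbps uplink, 50 tenants, ~4% TCP overhead — these are assumptions, not measured values):

```python
# Rough contention math for a shared host node (illustrative figures only).

def fair_share_mbps(uplink_gbps: float, tenants: int) -> float:
    """Per-VPS bandwidth if every tenant on the node transfers at once."""
    return uplink_gbps * 1000 / tenants

def tcp_goodput_mbps(port_mbps: float, overhead: float = 0.04) -> float:
    """Usable TCP throughput after header/ACK/congestion-control overhead."""
    return port_mbps * (1 - overhead)

print(fair_share_mbps(10, 50))   # 200.0 -- far below the 1 Gbps port speed
print(tcp_goodput_mbps(1000))    # 960.0 -- the ~960 Mbps theoretical TCP max
```

The provider's bet is that tenants rarely transfer simultaneously, so the fair-share floor is almost never hit; the peak-hour numbers later in this article show how often that bet holds.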
This is why I spent 72 hours running sustained load tests instead of a quick burst check.
How I Tested (And Why Quick Speed Tests Are Useless)
The standard VPS network “benchmark” is `iperf3 -c server -t 10` run once. That captures burst speed, misses peak-hour degradation, and has no statistical significance. Here is what I did instead:
- Throughput: `iperf3 -c <server> -t 30 -P 4` — 30-second sustained test, 4 parallel streams, past any burst throttling
- Latency: `ping -c 500` + `mtr -c 200` — average and p99 latency, hop-by-hop jitter analysis
- Jitter: Standard deviation of 500 ping RTT samples — consistency matters more than average for real-time apps
- Frequency: Every 4 hours for 72 hours (18 measurements per provider) — captures peak/off-peak across 3 business days
- Endpoints: Public iperf3 servers at NYIIX and LAIIX — major internet exchange points
- VPS location: Each provider’s closest US datacenter to NY endpoint
- Instance tier: Cheapest plan with 1+ vCPU, 1+ GB RAM — what most users buy
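The methodology above can be sketched as a small harness: every 4 hours over 72 hours, run the 30-second iperf3 test and the ping batch, and log the output. This is a simplified sketch of the approach, not the exact script used; the server address is a placeholder, and it assumes `iperf3` and `ping` are installed locally:

```python
# Sketch of the 72-hour measurement loop: every 4 hours, a 30-second
# 4-stream iperf3 run plus 500 pings, results appended to per-sample logs.
import shlex
import subprocess
import time

INTERVAL_HOURS = 4
DURATION_HOURS = 72

def iperf3_cmd(server: str) -> list[str]:
    # 30 s sustained, 4 parallel streams, JSON output for later parsing
    return shlex.split(f"iperf3 -c {server} -t 30 -P 4 --json")

def ping_cmd(server: str) -> list[str]:
    return shlex.split(f"ping -c 500 {server}")

def run_campaign(server: str) -> None:
    samples = DURATION_HOURS // INTERVAL_HOURS   # 18 measurements
    for i in range(samples):
        for cmd in (iperf3_cmd(server), ping_cmd(server)):
            with open(f"sample_{i:02d}.log", "a") as log:
                subprocess.run(cmd, stdout=log, stderr=log)
        time.sleep(INTERVAL_HOURS * 3600)
```

Running `run_campaign("iperf.example.net")` (hypothetical endpoint) produces the 18 samples per provider described above.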
The key difference from most benchmarks: I report the median sustained throughput (not the peak burst) and I include peak-hour performance separately. A provider that delivers 900 Mbps at 3 AM but 450 Mbps at 2 PM is not a “900 Mbps provider.”
Full Results: Sustained Throughput, Latency, and Jitter
Sorted by median sustained throughput. “Peak” column shows the worst performance recorded during business hours (9 AM – 6 PM ET). This is the number that matters for production workloads.
| # | Provider | Advertised | Median Sustained | Peak-Hour Worst | Latency (avg) | Jitter (p99) | Price |
|---|---|---|---|---|---|---|---|
| 1 | DigitalOcean | 1 Gbps | 943 Mbps | 887 Mbps | 0.4 ms | 0.2 ms | $6/mo |
| 2 | Hetzner | 1 Gbps | 928 Mbps | 861 Mbps | 0.5 ms | 0.3 ms | $4.59/mo |
| 3 | Vultr | 1 Gbps | 911 Mbps | 842 Mbps | 0.5 ms | 0.4 ms | $5/mo |
| 4 | Linode | 1 Gbps | 897 Mbps | 819 Mbps | 0.6 ms | 0.5 ms | $5/mo |
| 5 | BuyVM | 1 Gbps | 884 Mbps | 791 Mbps | 1.1 ms | 0.8 ms | $3.50/mo |
| 6 | Kamatera | 1 Gbps | 871 Mbps | 762 Mbps | 0.7 ms | 0.6 ms | $4/mo |
| 7 | AWS Lightsail | 1 Gbps | 856 Mbps | 798 Mbps | 0.5 ms | 0.3 ms | $5/mo |
| 8 | ScalaHosting | 1 Gbps | 843 Mbps | 731 Mbps | 0.8 ms | 0.7 ms | $29.95/mo |
| 9 | Hostinger VPS | 1 Gbps | 812 Mbps | 683 Mbps | 0.9 ms | 1.1 ms | $6.49/mo |
| 10 | Hostwinds | 1 Gbps | 789 Mbps | 641 Mbps | 1.0 ms | 1.3 ms | $4.99/mo |
| 11 | InterServer | 1 Gbps | 756 Mbps | 612 Mbps | 1.1 ms | 1.5 ms | $6/mo |
| 12 | Contabo | 1 Gbps | 687 Mbps | 412 Mbps | 1.4 ms | 3.2 ms | $6.99/mo |
| 13 | RackNerd | 1 Gbps | 641 Mbps | 438 Mbps | 1.6 ms | 4.1 ms | $3.49/mo |
DigitalOcean is the only provider sustaining above 900 Mbps during peak hours. AWS Lightsail shows surprisingly good consistency (7% peak-hour drop) despite lower absolute throughput — Amazon provisions conservatively but does not oversubscribe as aggressively.
At the bottom, the gap is startling. RackNerd’s 1 Gbps port delivers 438 Mbps during business hours — 44% of advertised. Contabo: 412 Mbps peak-hour, 41% of advertised. Both deliver acceptable burst speeds for a 5-second test, which is presumably the basis for their marketing.
The Peak Hours Problem — Where Budget Providers Fall Apart
This is the chart that changed my perspective on VPS networking. I plotted sustained throughput at each 4-hour measurement interval across 72 hours, and the pattern is unmistakable.
| Provider | Off-Peak (2-6 AM ET) | Business Hours (9 AM-6 PM ET) | Degradation |
|---|---|---|---|
| DigitalOcean | 951 Mbps | 887 Mbps | -6.7% |
| Hetzner | 942 Mbps | 861 Mbps | -8.6% |
| Vultr | 938 Mbps | 842 Mbps | -10.2% |
| Linode | 921 Mbps | 819 Mbps | -11.1% |
| AWS Lightsail | 862 Mbps | 798 Mbps | -7.4% |
| BuyVM | 923 Mbps | 791 Mbps | -14.3% |
| Kamatera | 912 Mbps | 762 Mbps | -16.4% |
| Hostinger VPS | 878 Mbps | 683 Mbps | -22.2% |
| Hostwinds | 856 Mbps | 641 Mbps | -25.1% |
| InterServer | 831 Mbps | 612 Mbps | -26.4% |
| RackNerd | 752 Mbps | 438 Mbps | -41.8% |
| Contabo | 780 Mbps | 412 Mbps | -47.2% |
Premium providers (DigitalOcean, Hetzner, AWS Lightsail) maintain 90%+ of off-peak performance during business hours. Budget providers lose 25-47% of throughput when tenants are actually using their servers. That is the hidden cost of $3.49/mo hosting. If you only run batch jobs overnight, RackNerd at 752 Mbps off-peak is fine. If your traffic peaks when humans are awake, those off-peak numbers are irrelevant.
Burst vs Sustained — The Numbers That Actually Matter
I ran an additional test specifically to quantify burst behavior. For each provider, I measured throughput at 2 seconds, 10 seconds, and 30 seconds into an iperf3 transfer:
| Provider | 2-sec Burst | 10-sec Avg | 30-sec Sustained | Burst-to-Sustained Drop |
|---|---|---|---|---|
| DigitalOcean | 962 Mbps | 951 Mbps | 943 Mbps | -2.0% |
| Hetzner | 955 Mbps | 939 Mbps | 928 Mbps | -2.8% |
| Vultr | 948 Mbps | 927 Mbps | 911 Mbps | -3.9% |
| Linode | 938 Mbps | 914 Mbps | 897 Mbps | -4.4% |
| Contabo | 921 Mbps | 784 Mbps | 687 Mbps | -25.4% |
| RackNerd | 893 Mbps | 741 Mbps | 641 Mbps | -28.2% |
DigitalOcean and Hetzner are essentially flat — 2-3% drop from burst to sustained. What you see in a quick test is what you get in a long transfer.
Contabo and RackNerd show a pronounced cliff. The 2-second burst looks promising (921 and 893 Mbps), but sustained throughput drops 25-28% as the throttle kicks in. Run a 5-second iperf3 test on Contabo and you see ~850 Mbps. Run it for 30 seconds and reality emerges. Burst allowance helps with short HTTP requests and small file transfers, but for sustained workloads like streaming and backups, the 30-second number is the only one that counts.
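The cliff is straightforward to detect from per-interval throughput samples. A sketch of the check, using synthetic numbers shaped like a throttled provider (with `iperf3 --json`, the real per-interval values come from the `intervals` array of the JSON output):

```python
# Detect burst throttling from per-interval throughput samples (Mbps).
# These samples are synthetic, not the article's measurements.

def burst_to_sustained_drop(samples_mbps: list[float]) -> float:
    """Percent drop from the first-2-seconds burst to the full-run average."""
    burst = sum(samples_mbps[:2]) / 2
    sustained = sum(samples_mbps) / len(samples_mbps)
    return round((burst - sustained) / burst * 100, 1)

# A throttled profile: full speed for ~3 seconds, then capped.
throttled = [950, 945, 920] + [640] * 27
print(burst_to_sustained_drop(throttled))   # 29.3 (percent)
```

A flat provider yields a drop near zero; anything past roughly 10% suggests a burst cap rather than genuine contention.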
Who Actually Delivers on Their Promise
DigitalOcean — The Closest Thing to Honest 1 Gbps
94.3% of advertised port speed under sustained load. During peak hours, 88.7% — still the best by a meaningful margin. The 0.4ms latency and 0.2ms jitter make it the obvious choice for latency-sensitive workloads. That trading API client I mentioned? WebSocket drop rate went from 3-4% to zero after the migration.
The weakness is bandwidth allocation: 1 TB/month is tight for media or large downloads, with $0.01/GB overage. For APIs, web apps, and databases where throughput matters but monthly transfer stays modest, DigitalOcean is the best network I have tested. They offer a $200 free credit to verify these numbers on your specific datacenter.
Hetzner — 98% of the Performance, 20x the Bandwidth
The 15 Mbps sustained throughput gap (943 vs 928) is imperceptible in practice. Where Hetzner pulls ahead is the complete package: 20 TB/month versus DigitalOcean’s 1 TB, at a lower base price. For CDN origin servers, media hosting, backup targets, or any workload transferring large data volumes, Hetzner is the clear pick — 98% of the network speed, 20x the transfer allowance, 24% less cost.
The caveat: Hetzner’s US presence is limited to Ashburn, VA. If you need West Coast or Southern US locations, Vultr is a better geographic fit.
Vultr — Best Network Across 9 US Locations
Vultr’s network advantage is geographic. With 9 US datacenters (New Jersey, Chicago, Dallas, Atlanta, Miami, Seattle, Silicon Valley, Los Angeles, Honolulu), you can place a server within one or two network hops of almost any US user base. A Vultr server in Miami delivers lower real-world latency to Florida users than a DigitalOcean server in New York, regardless of absolute network scores. Physics wins over peering when distance is large enough.
My throughput numbers are from Vultr’s New Jersey location. Other locations varied: Silicon Valley 876 Mbps, Dallas 891 Mbps, Honolulu 743 Mbps (transpacific hop penalty). If your users are distributed across the US, Vultr’s location flexibility is worth more than DigitalOcean’s extra 32 Mbps.
Throughput vs Latency: Different Problems, Different Winners
These two metrics measure fundamentally different things and optimizing for one does not necessarily optimize for the other. Here is the practical breakdown.
Throughput-Sensitive Workloads
- File downloads and large asset serving
- Backup and replication (database, S3 sync)
- Media streaming origin servers
- Server-to-server data migration
- Large API batch responses (>1MB payloads)
Winner: DigitalOcean for raw speed, Hetzner for speed + bandwidth cap
Latency-Sensitive Workloads
- Algorithmic trading and market data feeds
- Multiplayer game servers
- Real-time APIs and WebSocket connections
- VoIP and video conferencing backends
- Interactive remote desktops (RDP, VNC)
Winner: DigitalOcean (0.4ms, 0.2ms jitter), then Hetzner/Vultr/Lightsail tied at 0.5ms
The important nuance: jitter (p99 latency variation) often matters more than average latency. AWS Lightsail matches Vultr’s 0.5ms average but has lower jitter (0.3ms vs 0.4ms). For game servers where one laggy tick ruins the experience, consistency trumps raw speed. For web APIs needing sub-100ms responses, any provider in the top 7 is indistinguishable. See also our CPU benchmark and disk I/O comparison for factors that interact with network performance.
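The jitter figure used throughout this comparison is the standard deviation of ping RTT samples, reported alongside p99. A minimal sketch with synthetic samples (not measured data) shows why two links with similar averages can behave very differently:

```python
# Jitter as used here: standard deviation of ping RTT samples, plus p99 RTT.
# The sample lists below are synthetic, not measured data.
import statistics

def jitter_ms(rtts: list[float]) -> float:
    return round(statistics.pstdev(rtts), 2)

def p99_ms(rtts: list[float]) -> float:
    return sorted(rtts)[int(len(rtts) * 0.99) - 1]

steady = [0.5] * 99 + [0.6]         # consistent link
spiky  = [0.5] * 95 + [8.0] * 5     # similar average, occasional large spikes

print(jitter_ms(steady), jitter_ms(spiky))   # spiky jitter is far higher
print(p99_ms(spiky))                         # p99 exposes the spikes
```

This is exactly the failure mode the game-server example describes: averages look fine while p99 and jitter reveal the spikes that ruin individual ticks.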
Cost Per Sustained Mbps — The Real Value Calculation
Most “value” comparisons divide advertised speed by price, which produces meaningless numbers because the advertised speed is the same for everyone. Here I divide actual sustained throughput by monthly cost:
| # | Provider | Sustained Mbps | Price/mo | Mbps per $ | Bandwidth Cap |
|---|---|---|---|---|---|
| 1 | BuyVM | 884 | $3.50 | 253 | Unmetered |
| 2 | Kamatera | 871 | $4.00 | 218 | 5 TB/mo |
| 3 | Hetzner | 928 | $4.59 | 202 | 20 TB/mo |
| 4 | RackNerd | 641 | $3.49 | 184 | 3 TB/mo |
| 5 | Vultr | 911 | $5.00 | 182 | 2 TB/mo |
| 6 | Linode | 897 | $5.00 | 179 | 1 TB/mo |
| 7 | AWS Lightsail | 856 | $5.00 | 171 | 2 TB/mo |
| 8 | Hostwinds | 789 | $4.99 | 158 | 1 TB/mo |
| 9 | DigitalOcean | 943 | $6.00 | 157 | 1 TB/mo |
| 10 | InterServer | 756 | $6.00 | 126 | 2 TB/mo |
| 11 | Hostinger VPS | 812 | $6.49 | 125 | 1 TB/mo |
| 12 | Contabo | 687 | $6.99 | 98 | 32 TB/mo |
| 13 | ScalaHosting | 843 | $29.95 | 28 | Unmetered |
BuyVM leads on pure value at 253 sustained Mbps per dollar with unmetered bandwidth. For bandwidth-heavy, latency-tolerant workloads (backup storage, file hosting, media serving), BuyVM at $3.50/mo is hard to beat.
The more interesting finding: Hetzner offers the best balance of speed, value, and bandwidth cap. At 202 Mbps/$ with 20 TB/mo, it outperforms providers costing 30-50% more. Contabo lands near the bottom despite budget pricing — its poor sustained throughput means poor per-dollar efficiency even at $6.99/mo.
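The value metric in the table reduces to one division, reproduced here against the table's own numbers:

```python
# Value metric from the table above: sustained Mbps per dollar per month.

def mbps_per_dollar(sustained_mbps: float, price: float) -> int:
    return round(sustained_mbps / price)

print(mbps_per_dollar(884, 3.50))   # BuyVM: 253
print(mbps_per_dollar(928, 4.59))   # Hetzner: 202
print(mbps_per_dollar(687, 6.99))   # Contabo: 98
```

The same function makes it easy to re-rank the table if a provider changes pricing or your own tests produce different sustained numbers.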
When Network Speed Is (and Isn’t) Your Bottleneck
Before you optimize for network speed, make sure it is actually your bottleneck. In my experience, network speed is the limiting factor less often than people assume.
Network speed is probably NOT your bottleneck if:
- You run a typical website or blog. A 2MB page transfers in 17ms at 943 Mbps vs 25ms at 641 Mbps. The 8ms difference is invisible. Your disk I/O and CPU add 50-500ms each, dwarfing the network transfer time.
- You use a CDN for static assets. Only dynamic HTML (20-80KB) travels from your VPS. Every provider on this list transfers 80KB in under 1ms.
- Your traffic is under 10,000 daily page views. You never saturate even a 400 Mbps connection.
Network speed IS your bottleneck if:
- You run a trading bot or market data feed. The 1.2ms gap between DigitalOcean (0.4ms) and RackNerd (1.6ms) means worse fill prices on thousands of daily trades. See our best VPS for forex trading guide.
- You host multiplayer game servers. A Minecraft or Valheim server sends state updates every 50ms. If network jitter exceeds the tick interval, players rubber-band. Contabo’s 8ms+ jitter spikes during peak hours cause exactly that.
- You serve large files without a CDN. A 500MB download takes 4.2s at 943 Mbps vs 6.2s at 641 Mbps. Multiply by thousands of downloads.
- Your app makes many API calls per request. 20 inter-service calls at 0.4ms each (DigitalOcean) = 8ms. At 1.6ms each (RackNerd) = 32ms. Noticeable.
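The arithmetic behind both lists above is worth making explicit, since it is the fastest way to check whether network speed matters for your own payload sizes and call patterns:

```python
# Transfer time at a given sustained throughput, and accumulated latency
# over chained inter-service calls. Figures match the examples above.

def transfer_ms(size_mb: float, mbps: float) -> float:
    return round(size_mb * 8 / mbps * 1000, 1)   # MB -> megabits -> ms

def chained_latency_ms(per_call_ms: float, calls: int) -> float:
    return round(per_call_ms * calls, 1)

print(transfer_ms(2, 943), transfer_ms(2, 641))       # ~17 ms vs ~25 ms page
print(transfer_ms(500, 943), transfer_ms(500, 641))   # ~4.2 s vs ~6.2 s download
print(chained_latency_ms(0.4, 20), chained_latency_ms(1.6, 20))  # 8 vs 32 ms
```

Plug in your own page weight and per-request call count; if the deltas come out in single-digit milliseconds, network speed is not your bottleneck.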
Honest recommendation: pick a provider based on CPU, disk I/O, and price first. Network speed should be a tiebreaker unless you fall into the bottleneck categories above.
Pick Based on What Actually Matters for Your Workload
This page covers network testing only. For overall rankings, see the full reviews. For workload-specific picks: Forex · Gaming · Under $5
Frequently Asked Questions
Why does my 1 Gbps VPS only show 400-600 Mbps in speed tests?
Three factors conspire against you. First, most providers oversubscribe their network — 50 VPS instances sharing a single 10 Gbps uplink means 200 Mbps per instance on average, not 1 Gbps. Second, TCP overhead consumes 3-5% of raw bandwidth. Third, and most importantly, burst vs sustained matters: providers often allow 1 Gbps burst speeds for a few seconds but throttle sustained transfers to 300-500 Mbps. Our tests measure 30-second sustained throughput, which is why our numbers look different from quick speedtest results.
What is the difference between port speed and actual throughput?
Port speed is the physical connection rate between your VPS’s virtual NIC and the hypervisor’s network stack. All 13 providers in our test advertise 1 Gbps port speed. Actual throughput is what you can sustain over a real transfer. Think of port speed as the speed limit on a highway — it tells you the maximum, but traffic (other tenants), road conditions (network congestion), and your car’s engine (CPU overhead from encryption or packet processing) all reduce your actual speed. In our tests, actual sustained throughput ranged from 641 Mbps (RackNerd median) to 943 Mbps (DigitalOcean).
Does datacenter location affect VPS network speed?
Location primarily affects latency, not throughput. A VPS in New York will have 20-40ms lower latency to East Coast users than one in Los Angeles, regardless of provider. But throughput depends on the provider’s network infrastructure, not geography. That said, some providers have better peering arrangements in certain regions — Vultr’s New Jersey datacenter consistently outperforms their Dallas location by about 80 Mbps in our tests, likely because NJ has better backbone connectivity.
How much bandwidth do I actually need for web hosting?
Less than you think for throughput, more than you think for monthly transfer. Even 400 Mbps sustained can serve a 2MB page to 25 simultaneous users in under a second. The real constraint is monthly bandwidth caps: DigitalOcean’s 1TB/mo means about 33GB/day, which supports roughly 15,000-20,000 page views daily. Hetzner’s 20TB/mo supports 300,000+ daily page views. If you run media-heavy sites or file hosting, bandwidth caps matter far more than speed.
Should I use a CDN instead of paying for faster VPS network?
For static assets (images, CSS, JS), absolutely — a free Cloudflare plan eliminates VPS network speed as a factor. But CDNs cannot help with dynamic content: API responses, database queries, WebSocket connections, and server-rendered pages must travel from your VPS to the user. If your application is 80% static, a CDN matters more than VPS network speed. If it is 80% dynamic (SaaS apps, trading platforms, game servers), your VPS network is the bottleneck.
Why do sustained throughput numbers drop during peak hours?
Network oversubscription. Providers sell more bandwidth than they have physical capacity, betting that not all customers will saturate their connections simultaneously. During off-peak hours (2-6 AM ET), fewer tenants are active, so each VPS gets closer to its advertised speed. During business hours (9 AM - 6 PM ET), contention increases. In our tests, the worst peak-hour degradation was Contabo dropping from 780 Mbps off-peak to 412 Mbps peak — a 47% reduction. DigitalOcean showed only 6.7% variance.
Does bandwidth cap or speed matter more for VPS selection?
Bandwidth cap matters more for the majority of users. Speed differences between providers (641-943 Mbps sustained) rarely create a noticeable user experience gap for web applications. But hitting a bandwidth cap has immediate consequences: DigitalOcean charges $0.01/GB overage, which can add up fast if you serve video or large downloads. Hetzner includes 20TB/mo, BuyVM offers unmetered transfer, and Contabo gives 32TB/mo. For bandwidth-heavy workloads, cap size should be your primary filter.
What network latency is acceptable for game servers and trading?
For game servers, sub-2ms VPS-side latency is ideal — player-to-server latency will always be higher (20-80ms typical), so you want the server itself adding as little as possible. For algorithmic trading, every 0.1ms counts: our top performer (DigitalOcean at 0.4ms to NY exchange points) provides roughly 1.2ms advantage over budget providers like RackNerd (1.6ms). Over 10,000 trades/day, that compounds into measurable fill-price differences. For web APIs, anything under 2ms is fine — the database query time (5-50ms) dwarfs network latency.