VPS Networking Deep Dive — Why a $5 Server Outperformed a $36 One
In January I ran an experiment that surprised even me. I provisioned two VPS instances to serve the same static site for a week: a $5/month Vultr instance in New Jersey with 1 vCPU and 1GB RAM, and a $36/month Kamatera instance in New York with 8 vCPU and 8GB RAM. Eight times the hardware. Seven times the price. Identical content.
The $5 Vultr server had a median Time To First Byte (TTFB) of 43ms from a Virginia test point. The $36 Kamatera server came in at 67ms. The cheaper server was 36% faster. Not because Kamatera has bad hardware — their CPU benchmarks are competitive. The difference was entirely in the network: Vultr peers directly with more Tier 1 networks from their New Jersey facility, and that 24ms gap compounded across every single request. Over the course of the week, across 50,000 test requests, the $5 server delivered a consistently faster experience.
That result is not an indictment of Kamatera — they excel in customizability, phone support, and flexible configurations. It is an illustration of the fact that network quality is the invisible spec that shapes real-world performance more than CPU cores or RAM, and almost nobody looks at it before buying. I have spent years benchmarking VPS networking across dozens of providers, and the patterns I have found inform everything below.
Quick Answer
VPS networking quality comes down to three things: bandwidth (port speed, typically 1 Gbps), transfer (monthly data cap, 1–32 TB), and peering (how efficiently your provider connects to the rest of the internet). DigitalOcean and Vultr lead our network benchmarks. Private networks eliminate transfer costs and add security for multi-server setups. Enable TCP BBR for immediate throughput improvement. Use mtr to diagnose any network problem.
Table of Contents
- Bandwidth vs Transfer — The Confusion That Costs Money
- Private Networks — Free Performance
- WireGuard Tunnels Between VPS Instances
- Peering and Transit — The Hidden Quality Metric
- BGP — The Protocol Running Underneath Everything
- Network Diagnostics Toolkit
- Provider Network Benchmarks
- TCP BBR and Kernel Tuning
- Optimization Playbook
- Transfer Economics — Who Gives You the Most Data
- FAQ
Bandwidth vs Transfer — The Confusion That Costs Money
I get this question at least once a week, and the confusion has cost people real money. These are two completely different numbers that providers present side-by-side with no explanation:
- Bandwidth is your port speed — how fast data can move at any given moment. Measured in Gbps. Most VPS providers give you a 1 Gbps port, which translates to roughly 125 MB/s maximum throughput. Some budget providers cap you at 200-500 Mbps even though they advertise "1 Gbps" in marketing.
- Transfer is your monthly data allotment — the total volume of data you can send and receive before overage charges or throttling kicks in. Measured in TB. Plans typically include 1-5 TB, with outliers like Contabo at 32 TB and Hetzner at 20 TB.
The analogy: bandwidth is how wide your pipe is. Transfer is how much water you are allowed to push through it this month. A 1 Gbps pipe with 1 TB of transfer means you can blast at full speed — and blow through your entire monthly allowance in about 2 hours and 13 minutes. That is not hypothetical. I once helped a client debug a $47 overage bill on DigitalOcean: their automated backup job ran rsync at full throughput for 4 hours every night, chewing through the entire 2 TB transfer allocation in the first night alone. They switched to incremental backups with restic and the next month's overage was zero.
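The pipe arithmetic is worth scripting once so you can sanity-check any plan before buying. A minimal sketch in shell and awk; the only assumption is the usual decimal accounting providers use (1 TB = 10^12 bytes, 1 Gbps = 10^9 bits/s):

```shell
# Time to exhaust a monthly transfer cap at full port speed
cap_tb=1        # monthly transfer cap in TB
port_gbps=1     # port speed in Gbps

awk -v cap="$cap_tb" -v port="$port_gbps" 'BEGIN {
  bytes   = cap * 1e12         # decimal TB, as providers meter it
  rate    = port * 1e9 / 8     # Gbps -> bytes per second (~125 MB/s at 1 Gbps)
  seconds = bytes / rate
  printf "%dh %dm at full throughput\n", seconds / 3600, (seconds % 3600) / 60
}'
# 1 TB at 1 Gbps -> 2h 13m at full throughput
```

Swap in `cap_tb=32` for a Contabo-style plan and the answer jumps to nearly three days of sustained full-rate transfer.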
For perspective on what typical workloads consume:
| Workload | Monthly Transfer | 1 TB Cap? |
|---|---|---|
| Blog / Static site (50K visitors) | 50–150 GB | Plenty |
| WordPress + images (100K visitors) | 200–500 GB | Fine |
| E-commerce (200K visitors) | 500 GB–1 TB | Tight |
| Video streaming / downloads | 2–20 TB | Need Contabo/Hetzner |
| CDN origin server | 5–50 TB | Need Contabo/Hetzner |
| Game server (Minecraft, 50 players) | 100–300 GB | Plenty |
This is why Contabo (32 TB, $5.50/mo) and Hetzner (20 TB, $4.59/mo) are so popular for media-heavy sites — the included transfer is 4-32x what competitors offer at the same price. A video site that would cost $200/month in DigitalOcean overages runs within Contabo's included allocation. The trade-off is that Contabo's network quality is lower (800 Mbps actual, 2.0 ms latency in our tests). For throughput-sensitive work where every millisecond matters, Vultr or DigitalOcean wins. For data-heavy work where total volume matters more than speed, Contabo or Hetzner wins. Same product category, completely different optimization targets.
Private Networks — Free Performance Most People Never Enable
A private network — called VPC, VLAN, or internal network depending on the provider — connects your VPS instances over a dedicated, isolated link. The traffic never touches the public internet. It does not count against your transfer quota. It is free on every major provider that offers it. And most single-server users have never turned it on, which means they are missing one of the best features they are already paying for.
The moment you run more than one server — an app server talking to a database server, a load balancer distributing to multiple backends, a Redis cache sitting in front of your application — private networking stops being "nice to have" and becomes essential:
- Speed: Private network latency is 0.1–0.5 ms with 10 Gbps bandwidth on premium providers. Public internet connections between the same servers add 0.5–2.0 ms because they route through the provider's public network stack. That difference compounds: a page load triggering 50 database queries saves 25-75 ms just from the network layer.
- Cost: Private network traffic is free. If your app server and database exchange 500 GB/month of data, that is 500 GB of transfer you are not paying for. At DigitalOcean's overage rate of $0.01/GB, that is $5/month saved — free by using a feature you already have.
- Security: Database connections over private networks never touch the public internet. No encryption overhead needed (though you can still encrypt if compliance requires it). No exposed database ports for attackers to find. This is why our VPS security guide recommends always binding databases to private network interfaces.
Provider Private Network Comparison
| Provider | Feature Name | Internal Speed | Free? | Cross-DC? |
|---|---|---|---|---|
| Vultr | VPC 2.0 | 10 Gbps | Yes | No |
| DigitalOcean | VPC | 10 Gbps | Yes | No |
| Linode (Akamai) | VLAN | 10 Gbps | Yes | No |
| Hetzner | Private Network | 10 Gbps | Yes | No |
| Kamatera | Private Network | 1 Gbps | Yes | No |
| Contabo | Private Network | 1 Gbps | Yes | No |
| Hostinger | — | — | — | — |
| RackNerd | — | — | — | — |
Two providers conspicuously absent: Hostinger and RackNerd. Neither offers private networking. If your deployment will ever grow beyond a single server — and most non-trivial deployments eventually do — this is a deal-breaker. I consider private network support a hard requirement for any provider I recommend for production use. The lack of it on Hostinger is particularly puzzling given their otherwise competitive performance numbers.
Setting Up Private Networking on Vultr VPC 2.0
# After enabling VPC in the Vultr dashboard, configure the interface
# Check which interface is the private network
ip addr show
# You should see an interface (often ens7 or enp6s0) with a 10.x.x.x address
# If not auto-configured, set it manually:
sudo ip addr add 10.1.96.5/20 dev ens7
sudo ip link set ens7 up
# Make it persistent (Ubuntu 24.04 with Netplan)
sudo tee /etc/netplan/60-private.yaml <<'EOF'
network:
  version: 2
  ethernets:
    ens7:
      addresses:
        - 10.1.96.5/20
      mtu: 1450
EOF
sudo netplan apply
# Test connectivity to another VPS on the same VPC
ping 10.1.96.6
# Bind your database to the private IP only
# In /etc/mysql/mariadb.conf.d/50-server.cnf:
# bind-address = 10.1.96.5
WireGuard Tunnels Between VPS Instances
Private networks work within a single datacenter. The second your infrastructure spans multiple datacenters, multiple regions, or multiple providers, you need encrypted tunnels. I use this constantly: a Vultr web server in New Jersey talking to a Hetzner database in Ashburn. A staging server at RackNerd connecting to production at DigitalOcean for migration testing. WireGuard is the only VPN I deploy for server-to-server links in 2026.
Why WireGuard and not OpenVPN or IPsec? Because WireGuard lives in the Linux kernel (since 5.6), adds roughly 3% throughput overhead (compared to OpenVPN's 15-20%), configures in 10 lines instead of 200, and I can set up a tunnel between two servers in under 8 minutes. The last time I configured IPsec between two VPS instances it took an afternoon and I am still not confident it was correct.
# ========== SERVER A (e.g., Vultr New Jersey) ==========
sudo apt install wireguard -y
# Generate keys
wg genkey | sudo tee /etc/wireguard/private.key
sudo chmod 600 /etc/wireguard/private.key
sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key
# Create config
sudo tee /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = PASTE_SERVER_A_PRIVATE_KEY
Address = 10.100.0.1/24
ListenPort = 51820
[Peer]
PublicKey = PASTE_SERVER_B_PUBLIC_KEY
AllowedIPs = 10.100.0.2/32
Endpoint = SERVER_B_PUBLIC_IP:51820
PersistentKeepalive = 25
EOF
# ========== SERVER B (e.g., Hetzner Ashburn) ==========
# Same key generation steps, then:
sudo tee /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = PASTE_SERVER_B_PRIVATE_KEY
Address = 10.100.0.2/24
ListenPort = 51820
[Peer]
PublicKey = PASTE_SERVER_A_PUBLIC_KEY
AllowedIPs = 10.100.0.1/32
Endpoint = SERVER_A_PUBLIC_IP:51820
PersistentKeepalive = 25
EOF
# ========== ON BOTH SERVERS ==========
# Allow WireGuard port in firewall
sudo nft add rule inet filter input udp dport 51820 accept
# Or with UFW: sudo ufw allow 51820/udp
# Start the tunnel
sudo wg-quick up wg0
# Enable on boot
sudo systemctl enable wg-quick@wg0
# Verify the tunnel
sudo wg show
ping 10.100.0.2 # from Server A
ping 10.100.0.1 # from Server B
One important requirement: WireGuard needs kernel module access, which means a KVM-based VPS. Every provider in our reviews uses KVM, so this is not a practical limitation. The PersistentKeepalive setting at 25 seconds prevents NAT tables from expiring the connection on providers that use NAT for internal routing. Without it, the tunnel drops after periods of inactivity. I learned this the hard way when a database replication job failed at 3 AM because the tunnel had silently died.
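That 3 AM failure is preventable with a tiny watchdog. A sketch, not a standard tool: `wg0` and the `10.100.0.x` addresses match the configs above, but the script name, path, and cron schedule are my own choices:

```shell
#!/usr/bin/env bash
# wg-watchdog: bounce the tunnel if the peer stops answering over it.
# Install via crontab -e:  */5 * * * * /usr/local/bin/wg-watchdog --run
PEER_IP="${PEER_IP:-10.100.0.2}"   # the other end's tunnel address

tunnel_alive() {
  # Three pings, 2 s timeout each, across the tunnel interface
  ping -c 3 -W 2 "$PEER_IP" > /dev/null 2>&1
}

restart_tunnel() {
  logger -t wg-watchdog "peer $PEER_IP unreachable, restarting wg0"
  wg-quick down wg0 && wg-quick up wg0
}

# Only act when invoked with --run, so sourcing the file is side-effect free
if [ "${1:-}" = "--run" ] && ! tunnel_alive; then
  restart_tunnel
fi
```

With PersistentKeepalive set correctly the watchdog should never fire; it exists for the cases where a provider-side NAT or routing change kills the tunnel anyway.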
Peering and Transit — The Hidden Quality Metric
This section explains why the $5 Vultr server beat the $36 Kamatera server. Two providers can both advertise "1 Gbps" bandwidth and deliver measurably different real-world performance. The difference is almost entirely in how they connect to the rest of the internet — a detail that never appears on any pricing page.
- Peering: Direct physical connections between two networks at an Internet Exchange (IX). When Vultr peers with Comcast at the Equinix NY facility, traffic between Vultr servers and Comcast customers takes the shortest possible path — sometimes literally across the same building. Peering costs almost nothing per gigabyte.
- Transit: Paying a third-party network (a "transit provider") to carry your traffic to destinations you do not peer with directly. Transit adds extra network hops, each adding latency. Budget VPS providers rely more heavily on transit because maintaining peering relationships at dozens of Internet Exchanges requires significant infrastructure investment.
You can actually measure peering quality before you buy. PeeringDB.com publishes the peering relationships and IX connections for every network. Here is what the numbers look like for the providers I benchmark regularly:
| Provider | ASN | IX Connections | Peering Category |
|---|---|---|---|
| Vultr | AS20473 | 35+ IXs worldwide | Premium |
| DigitalOcean | AS14061 | 30+ IXs worldwide | Premium |
| Linode/Akamai | AS63949 | 20+ IXs (Akamai backbone) | Premium |
| Hetzner | AS24940 | 25+ IXs (DE-CIX anchor) | Premium |
| Contabo | AS40021 | 5–10 IXs | Mid-tier |
| RackNerd | AS36352 (ColoCrossing) | Limited IX presence | Budget |
More IX connections means more direct paths to more networks, which means lower latency to more users. Vultr's 35+ IX connections and DigitalOcean's 30+ are why they consistently lead our network benchmarks. RackNerd's limited IX presence is why their latency numbers are 2-3x higher — the traffic takes longer paths through transit providers instead of direct peering. For a personal blog, that extra 1.5ms per request is invisible. For an API handling 10,000 requests per second, it adds 15 seconds of cumulative latency per second. Context matters enormously in networking.
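The cumulative-latency arithmetic is simple enough to check yourself — extra milliseconds per request times requests per second. A throwaway helper (the `added_latency` name is mine):

```shell
# Added latency per request x request rate = total extra wait incurred
# across all requests, per second of operation
added_latency() {
  awk -v rps="$1" -v extra_ms="$2" 'BEGIN { printf "%.1f s\n", rps * extra_ms / 1000 }'
}

added_latency 10000 1.5   # API at 10k req/s, 1.5 ms peering penalty -> 15.0 s
added_latency 1 1.5       # personal blog, one request at a time     -> 0.0 s
```

The second call is the point: the same peering penalty is either catastrophic or invisible depending entirely on request volume.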
BGP — The Protocol Running Underneath Everything
BGP (Border Gateway Protocol) is the routing system that holds the internet together. Every packet that reaches your VPS was guided there by BGP. Most VPS users never interact with it directly. But understanding the basics helps you diagnose mysterious network issues and evaluate provider claims about network quality.
Key Concepts
- AS Number (ASN): Every network has a unique identifier. Vultr is AS20473, DigitalOcean is AS14061. You can look up any IP's ASN to identify which network it belongs to.
- BGP Announcements: When your VPS provider assigns you an IP, they "announce" that IP range via BGP to their peers and transit providers. This is how every other network on the internet learns where to send traffic destined for your server.
- AS Path: The sequence of networks traffic traverses. Shorter paths = fewer hops = lower latency. Premium providers with direct peering have AS paths of 2-3 hops to major ISPs. Budget providers might be 4-6 hops.
- BGP Communities: Some providers (notably Vultr and BuyVM) offer BGP sessions that let you control how your IP routes are announced — useful for anycast, traffic engineering, and multi-provider failover.
# Look up the ASN for any IP address
whois -h whois.radb.net 149.28.100.1
# Shows: AS20473 (Vultr)
# View the BGP path from your VPS to a destination
traceroute -A google.com
# The -A flag shows ASN at each hop, revealing the network path
# Check how many BGP prefixes your provider announces
whois -h whois.radb.net -- '-i origin AS20473' | grep route: | wc -l
Most people reading this will never touch BGP directly. But if you need IP portability between providers (switching without changing IPs), multi-provider failover, or anycast routing for a DNS service or CDN, BGP session support at Vultr ($5/mo additional) or BuyVM lets you announce your own IP space. I have set this up exactly once, for a client running a global DNS service that needed seamless failover. It took two days of configuration. It worked beautifully. I never want to touch it again.
The Network Diagnostics Toolkit
These four tools diagnose 99% of network problems I encounter. They are the first commands I run when someone reports "the site is slow" — before looking at application logs, before checking CPU, before anything else. Because nine times out of ten, "slow" is a network issue, not a server issue.
mtr — The Best Network Diagnostic Tool
# Install mtr
sudo apt install mtr -y
# Run a 100-packet report to a destination
mtr --report --report-cycles 100 google.com
# Example output interpretation:
# HOST Loss% Snt Last Avg Best Wrst StDev
# 1. gateway 0.0% 100 0.3 0.4 0.2 1.1 0.2 ← Provider internal
# 2. ix-peer 0.0% 100 0.8 0.9 0.7 1.5 0.2 ← Internet exchange
# 3. isp-router 0.0% 100 1.2 1.3 1.0 2.8 0.3 ← Destination ISP
# 4. google-edge 0.0% 100 0.9 1.0 0.8 1.8 0.2 ← Final destination
# What to look for:
# - Packet loss at a single hop: that router is congested
# - Sudden latency jump between two hops: congested or long-distance link
# - Loss that appears at one hop but not subsequent hops: ICMP deprioritization (harmless)
# - Progressive loss that increases at each hop: real congestion problem
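The interpretation rules above can be automated for monitoring. A rough sketch in awk over `mtr --report`-style output — the sample report below is fabricated for illustration, and the heuristic (intermediate-hop loss that clears downstream is ICMP deprioritization, loss persisting to the final hop is real) is the same one stated above:

```shell
# Classify packet loss in an mtr report (sample data is made up)
report='  1. gateway      0.0%  100  0.3  0.4  0.2  1.1  0.2
  2. ix-peer     15.0%  100  0.8  0.9  0.7  1.5  0.2
  3. isp-router   0.0%  100  1.2  1.3  1.0  2.8  0.3
  4. google-edge  0.0%  100  0.9  1.0  0.8  1.8  0.2'

echo "$report" | awk '
  { hop[NR] = $2; loss[NR] = $3 + 0 }   # "15.0%" + 0 -> 15
  END {
    tag = (loss[NR] > 0) ? " (real: persists to final hop)" : " (likely ICMP deprioritization)"
    for (i = 1; i < NR; i++)
      if (loss[i] > 0) print hop[i] ": " loss[i] "% loss" tag
    if (loss[NR] > 0) print hop[NR] ": " loss[NR] "% loss at final hop (real)"
    else              print "final hop clean: end-to-end path is healthy"
  }'
```

On the sample above this flags `ix-peer` as likely ICMP deprioritization, since the final hop shows zero loss.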
traceroute — Path Discovery
# Standard traceroute
traceroute bestusavps.com
# TCP traceroute — bypasses ICMP filters (many networks block ICMP)
traceroute -T -p 443 bestusavps.com
# UDP traceroute with AS path information
traceroute -A bestusavps.com
# Paris traceroute — avoids load balancer artifacts
sudo apt install paris-traceroute -y
paris-traceroute bestusavps.com
iperf3 — Actual Throughput Measurement
# On the receiving VPS (server mode):
iperf3 -s
# On the sending VPS (client mode):
# Single stream test (30 seconds)
iperf3 -c SERVER_IP -t 30
# Multi-stream test (more realistic for real traffic)
iperf3 -c SERVER_IP -t 30 -P 4
# Bidirectional test
iperf3 -c SERVER_IP -t 30 --bidir
# Test with different packet sizes
iperf3 -c SERVER_IP -t 30 -M 1400 # Simulate typical web traffic
ss — Socket Statistics (Better Than netstat)
# All listening TCP ports with process names
ss -tlnp
# All active connections with state
ss -tanp
# Show TCP connection details (window size, RTT, congestion control)
ss -ti
# Count connections per state
ss -s
# Find connections to a specific port
ss -tnp dst :3306
Provider Network Benchmarks — The Numbers Nobody Else Publishes
Every number below comes from our standardized testing methodology: iperf3 between two VPS instances in the same datacenter, mtr to Cloudflare's anycast network for latency, and real-world HTTP transfer tests using curl against a 100MB test file. All tests run on each provider's lowest-tier US plans to show what budget customers actually get:
| Provider | Throughput | Intra-DC Latency | Jitter | Private Net | US DCs | Transfer | Price |
|---|---|---|---|---|---|---|---|
| DigitalOcean | 980 Mbps | 0.8 ms | 0.1 ms | VPC | 3 | 1 TB | $6/mo |
| Hetzner | 960 Mbps | 0.9 ms | 0.1 ms | Yes | 2 | 20 TB | $4.59/mo |
| Vultr | 950 Mbps | 0.9 ms | 0.1 ms | VPC 2.0 | 9 | 2 TB | $5/mo |
| UpCloud | 950 Mbps | 1.0 ms | 0.2 ms | Yes | 2 | 2 TB | $7/mo |
| Linode | 940 Mbps | 1.0 ms | 0.2 ms | VLAN | 4 | 1 TB | $5/mo |
| Kamatera | 920 Mbps | 1.2 ms | 0.3 ms | Yes | 3 | 5 TB | $4/mo |
| Hostinger | 900 Mbps | 1.3 ms | 0.3 ms | No | 2 | 2 TB | $4.99/mo |
| Contabo | 800 Mbps | 2.0 ms | 0.5 ms | Yes | 2 | 32 TB | $5.50/mo |
| RackNerd | 750 Mbps | 2.5 ms | 0.8 ms | No | 3 | 2 TB | $1.49/mo |
The gap between top-tier and budget-tier networking is real but not always relevant. DigitalOcean at 0.8ms versus RackNerd at 2.5ms — that 1.7ms difference is invisible on a blog. On a real-time API handling 5,000 requests per second, it adds 8.5 seconds of cumulative latency per second of operation. I have moved clients from budget to premium providers purely for the networking, with zero hardware changes, and seen API response times drop 30-40%. Same code. Same RAM. Same CPU. Better pipes.
The jitter column matters for real-time applications: VoIP, gaming, video streaming. Low jitter means consistent latency. High jitter means latency spikes that cause buffering, voice drops, and rubber-banding. RackNerd's 0.8ms jitter is fine for web hosting but would be painful for a game server. Vultr's 0.1ms jitter is why I recommend them for Minecraft servers and forex trading VPS.
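You do not need iperf3 for a rough jitter reading: the `mdev` field in ping's `rtt min/avg/max/mdev` summary line is a standard-deviation measure of latency, which is a reasonable jitter proxy. A parsing sketch — the summary string below is a stand-in for real output, normally captured with something like `ping -c 100 -i 0.2 1.1.1.1 | tail -1`:

```shell
# mdev in ping's summary line approximates jitter
summary='rtt min/avg/max/mdev = 0.812/1.034/1.498/0.121 ms'   # stand-in sample

echo "$summary" | awk -F'[/ ]' '{ print "avg " $(NF-3) " ms, jitter (mdev) " $(NF-1) " ms" }'
# -> avg 1.034 ms, jitter (mdev) 0.121 ms
```

A provider whose mdev routinely exceeds ~0.5 ms intra-region is worth scrutinizing before hosting anything real-time on it.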
TCP BBR and Kernel Tuning
TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) is Google's congestion control algorithm, and it is the single highest-impact network optimization you can make in two lines of configuration. BBR models the network path to find optimal sending rates instead of reacting to packet loss like traditional algorithms (CUBIC, Reno). On lossy or high-latency connections — which describes most of the internet — BBR delivers measurably better throughput.
I ran A/B tests on identical Vultr instances serving a 10MB download to 500 test clients across the US. CUBIC (default): average 47 Mbps. BBR: average 68 Mbps. A 45% improvement from two sysctl lines. The improvement is most pronounced on connections with packet loss — exactly the conditions your users on mobile networks or congested ISPs experience.
# Enable TCP BBR (Ubuntu 22.04+ and Debian 12+)
sudo tee /etc/sysctl.d/99-network-performance.conf <<'EOF'
# ===== TCP BBR Congestion Control =====
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
# ===== Network Buffer Tuning =====
# Increase receive buffer (default is too small for high-bandwidth)
net.core.rmem_max = 16777216
net.core.rmem_default = 1048576
# Increase send buffer
net.core.wmem_max = 16777216
net.core.wmem_default = 1048576
# TCP buffer auto-tuning range
net.ipv4.tcp_rmem = 4096 1048576 16777216
net.ipv4.tcp_wmem = 4096 1048576 16777216
# ===== Connection Handling =====
# Increase maximum connections
net.core.somaxconn = 65535
net.core.netdev_max_backlog = 65535
# Reuse TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
# TCP keepalive (detect dead connections faster)
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
EOF
# Apply all settings
sudo sysctl --system
# Verify BBR is active
sysctl net.ipv4.tcp_congestion_control
# Output: net.ipv4.tcp_congestion_control = bbr
These tuning parameters are safe for any VPS from 1GB RAM upward. The buffer sizes (16MB max) handle high-bandwidth connections efficiently without consuming excessive memory. The tcp_tw_reuse setting reuses TIME_WAIT sockets for new outbound connections, which matters for servers that open thousands of short-lived upstream connections (reverse proxies, API clients). See our security guide for additional sysctl hardening that complements these performance settings.
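A typo in a sysctl drop-in fails silently on apply, so I like to lint the file first. A rough sketch — the `validate_sysctl` helper is my own invention, not part of any standard tooling; it only checks that every non-comment line looks like `dotted.key = value`:

```shell
# Reject any non-comment, non-blank line that is not "dotted.key = value"
validate_sysctl() {
  awk '!/^[[:space:]]*#/ && !/^[[:space:]]*$/ &&
       $0 !~ /^[a-z0-9_.]+[[:space:]]*=[[:space:]]*[^[:space:]]/ {
    printf "bad line %d: %s\n", NR, $0; bad = 1
  } END { exit bad }' "$1"
}

# Usage (assumed workflow, matching the drop-in created above):
# validate_sysctl /etc/sysctl.d/99-network-performance.conf && sudo sysctl --system
```

It will not catch a misspelled but well-formed key (the kernel just ignores unknown keys with a warning), but it does catch the mangled-paste class of errors.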
The Optimization Playbook
Five optimizations ordered by impact-per-minute-spent. The first one is free and takes 30 seconds. The last requires an architectural decision but pays dividends at scale.
- Choose a datacenter close to your users. US East (New York, New Jersey, Ashburn) for East Coast and European visitors. US Central (Dallas, Chicago) for nationwide coverage. US West (Los Angeles, Silicon Valley) for West Coast and Asian visitors. Vultr's 9 US locations give the most geographic precision. Linode's 4 US datacenters cover the major regions. Use our VPS calculator to match workload to plan.
- Enable TCP BBR. Two sysctl lines, 45% throughput improvement on lossy connections. There is no reason not to do this on every server immediately. See the configuration above.
- Use private networks for internal traffic. Database queries, cache lookups, and inter-service communication over private networks: faster (10 Gbps internal), free (zero transfer charges), and more secure (never publicly exposed). This is the correct architecture, not an optimization hack.
- Put a CDN in front of static assets. Cloudflare (free tier) or your provider's CDN offloads bandwidth and serves static files from edge locations near users. Your VPS handles dynamic requests while the CDN serves images, CSS, JS, and fonts. On a content-heavy site, this cuts VPS transfer consumption by 60-80% and improves global load times.
- Monitor with mtr regularly. Run periodic mtr traces from your VPS to key destinations and from user locations to your VPS. Sudden changes in hop count or latency indicate routing changes, peering disputes, or congestion that may require a datacenter migration or provider escalation. I run automated mtr checks every 6 hours on all production servers.
Transfer Economics — Who Gives You the Most Data
Transfer pricing is where the biggest cost surprises hide. Providers who look cheap on base price can become expensive once you account for bandwidth consumption. Here is the effective cost per TB of transfer on each provider's entry-level plan:
| Provider | Plan Price | Included Transfer | Overage Cost | Effective $/TB |
|---|---|---|---|---|
| Hetzner | $4.59/mo | 20 TB | $1.19/TB extra | $0.23/TB |
| Contabo | $5.50/mo | 32 TB | Contact for options | $0.17/TB |
| Vultr | $5/mo | 2 TB | $0.01/GB ($10/TB) | $2.50/TB |
| Linode | $5/mo | 1 TB | $0.01/GB ($10/TB) | $5.00/TB |
| DigitalOcean | $6/mo | 1 TB | $0.01/GB ($10/TB) | $6.00/TB |
| Kamatera | $4/mo | 5 TB | $0.01/GB ($10/TB) | $0.80/TB |
| RackNerd | $1.49/mo | 2 TB | Varies | $0.75/TB |
The delta is enormous. Serving 10 TB/month of video or large downloads would cost $90 in DigitalOcean overages on top of the $6 base, versus $0 extra on Contabo where 10 TB is well within the 32 TB included allocation. This is exactly the kind of analysis that never shows up in "best VPS" listicles that only compare base price. Our price comparison tool accounts for transfer costs.
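The effective-cost column is just plan price divided by included transfer. A throwaway helper makes it easy to run for any plan you are evaluating (the `cost_per_tb` name is mine):

```shell
# $/TB of included transfer = monthly price / included TB
cost_per_tb() {
  awk -v price="$1" -v tb="$2" 'BEGIN { printf "$%.2f/TB\n", price / tb }'
}

cost_per_tb 4.59 20    # Hetzner       -> $0.23/TB
cost_per_tb 5.50 32    # Contabo       -> $0.17/TB
cost_per_tb 6.00 1     # DigitalOcean  -> $6.00/TB
```

Run it against the plan you are actually considering, not the entry tier: included transfer often scales sublinearly with price, so the effective $/TB can get worse as you upgrade.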
Frequently Asked Questions
What is a good latency for a VPS?
Under 1 ms intra-datacenter is excellent and achievable with premium providers. Under 20 ms to end users in the same country is good. Under 50 ms for same-continent connections is acceptable. Over 100 ms becomes noticeable in interactive applications. For context, DigitalOcean hits 0.8 ms intra-DC in our benchmarks while RackNerd averages 2.5 ms.
Does bandwidth affect my website speed?
Only at scale. A typical web page is 2-5 MB. At 1 Gbps, your VPS serves that to about 25 users simultaneously at full speed. For most websites, CPU and disk I/O are the bottleneck long before bandwidth. Bandwidth matters for video streaming, large file downloads, high-traffic APIs, and CDN origin servers.
What happens when I exceed my monthly transfer limit?
It varies dramatically. Vultr and DigitalOcean charge $0.01/GB overage. AWS Lightsail charges $0.09/GB. Hetzner throttles your speed instead of billing. Contabo contacts you to discuss options rather than auto-billing. Always verify your provider's overage policy before deploying bandwidth-heavy workloads.
Can I get a dedicated IP on a VPS?
Yes. Every VPS comes with at least one dedicated public IPv4 address. Most providers also assign a free IPv6 address. Additional IPv4 addresses cost $1.19–$5/mo due to IPv4 exhaustion. See our IPv4 vs IPv6 guide for pricing and allocation details across all providers.
Which VPS provider has the best network?
DigitalOcean leads raw network benchmarks (980 Mbps, 0.8 ms). For the best combination of network quality and US coverage, Vultr (9 US locations, 950 Mbps) wins. For the best transfer-per-dollar, Hetzner ($4.59/mo for 20 TB) and Contabo ($5.50/mo for 32 TB) dominate. Budget pick: RackNerd at $1.49/mo offers adequate networking for non-latency-sensitive workloads.
Is a 1 Gbps VPS actually 1 Gbps?
The 1 Gbps figure is port speed — the theoretical maximum. Real throughput depends on network quality, peering, and tenant density on the physical NIC. In our iperf3 tests, premium providers hit 940-980 Mbps consistently. Budget providers deliver 750-800 Mbps. Nobody hits exactly 1000 Mbps because of protocol overhead, but top providers get remarkably close.
How do I reduce latency between my VPS and users?
Four approaches by impact: (1) Choose a datacenter close to your users — biggest single improvement. (2) Enable TCP BBR — two sysctl lines, measurable throughput gains. (3) Use a CDN for static assets — Cloudflare's free tier works well. (4) Use private networks for inter-server communication. Combined, these can cut perceived latency by 50% or more.
What is the difference between bandwidth and transfer?
Bandwidth = port speed (how fast, measured in Gbps). Transfer = monthly data cap (how much, measured in TB). A 1 Gbps port with 1 TB transfer means you can blast at full speed but exhaust your monthly quota in about 2 hours and 13 minutes. Most plans include 1-5 TB. Contabo (32 TB) and Hetzner (20 TB) are exceptional for data-heavy workloads.
$5 Beat $36 — Because Networking Matters
CPU and RAM are easy to compare. Network quality is not. Our benchmarks measure what the marketing pages skip: actual throughput, real latency, and the peering infrastructure underneath.