VPS Security Best Practices — I Hardened a Production Server in 20 Minutes

Last Tuesday at 11:47 AM, I spun up a $4/month Kamatera instance in Dallas to test their new AMD EPYC lineup. By 12:03 PM — sixteen minutes later — /var/log/auth.log had 891 failed SSH login attempts from 23 different IP addresses across four countries. I was still running apt update. The bots were already trying root/root, admin/password, ubuntu/ubuntu, and about 200 other combinations that, depressingly, still work on somebody's production server somewhere.

That is more than 55 brute-force attempts per minute on a server that had existed for less than a quarter hour. And that server had nothing on it — no website, no database, no applications. Just a default Ubuntu 24.04 image with an IP address. The IP address alone was enough. Bots scan the entire IPv4 space continuously. Your new VPS does not need to be interesting. It just needs to exist.
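If you want to see this for yourself on a fresh box, one pipeline over auth.log gives you the per-IP tally. The sample log lines below are stand-ins so the command is runnable anywhere; on a real server you would grep /var/log/auth.log directly:

```shell
# Tally failed SSH logins per source IP. The sample file mimics the standard
# OpenSSH "Failed password" line format.
cat > /tmp/auth-sample.log <<'EOF'
Jan  6 12:01:02 vps sshd[1001]: Failed password for root from 203.0.113.7 port 4242 ssh2
Jan  6 12:01:05 vps sshd[1002]: Failed password for invalid user admin from 203.0.113.7 port 4243 ssh2
Jan  6 12:01:09 vps sshd[1003]: Failed password for root from 198.51.100.23 port 5151 ssh2
EOF

grep 'Failed password' /tmp/auth-sample.log \
  | grep -oE 'from [0-9.]+' | awk '{print $2}' \
  | sort | uniq -c | sort -rn
# worst offenders first: 2 attempts from 203.0.113.7, 1 from 198.51.100.23
```

Run against a real auth.log, the top of that list is usually a handful of IPs making hundreds of attempts each — exactly the pattern fail2ban (covered below) is built to punish.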

The scenario that follows is not a tutorial written from documentation. It is what I actually did to that Kamatera server over the next 20 minutes, using the same sequence I have used on hundreds of servers since 2019. Every command is copy-paste ready for Ubuntu 22.04 and 24.04. Some work on Debian 12. I will note the differences where they matter.

The Non-Negotiable Five

These five changes block 99% of automated attacks and take under 20 minutes: SSH key authentication with password login killed, a non-root sudo user, nftables firewall allowing only the ports you use, fail2ban auto-banning repeat offenders, and unattended-upgrades patching security holes while you sleep. Skip any one of these and you are volunteering for an incident. Our VPS setup guide covers the full first-boot procedure including the web stack.

The Threat Model Nobody Talks About

Security guides love to jump straight into commands. Here is the problem with that: if you do not understand what you are defending against, you will over-engineer some things and completely miss others. Your $5 VPS is not getting attacked by nation-state hackers with zero-day exploits. It is getting attacked by automated scripts running through credential lists against millions of IPs simultaneously. The threat model for a typical VPS looks like this:

  • 95% of attacks: Automated credential stuffing. Bots trying common username/password combinations against SSH, WordPress login pages, phpMyAdmin, and database ports. These are the 891 attempts I saw in 16 minutes. SSH keys make them completely irrelevant.
  • 4% of attacks: Port scanning and service exploitation. Bots looking for unpatched services listening on default ports — Redis on 6379 with no password, MongoDB on 27017 with default config, Elasticsearch on 9200 wide open. A firewall that blocks everything except 22/80/443 eliminates all of these.
  • 1% of attacks: Targeted exploitation. Someone actually looking at your server specifically, probing for application vulnerabilities, SQL injection, or known CVEs. This is where patching, web application firewalls, and proper application security matter.

The commands in this guide address the 99% that are automated. That is not dismissive of the 1% — it is triage. Get the automated attacks handled first because they happen instantly. Then build your application-layer defenses at a pace that matches your actual risk profile.

SSH Lockdown (The Front Door)

SSH is the only way in. If someone gets through it, nothing else on this page matters. I have personally cleaned up after three compromises in the past two years where the root cause was "password authentication was still enabled." Three. In 2024-2026. It still happens because providers ship VPS images with password auth on by default, and people never change it.

Generate SSH Keys (On Your Local Machine)

# Ed25519 — fastest, strongest, smallest key size
ssh-keygen -t ed25519 -C "yourname@yourmachine"

# If you need RSA compatibility (some legacy systems)
ssh-keygen -t rsa -b 4096 -C "yourname@yourmachine"

# Copy your public key to the server
ssh-copy-id -i ~/.ssh/id_ed25519.pub root@your-server-ip

# Verify you can log in with the key
ssh -i ~/.ssh/id_ed25519 root@your-server-ip

Lock Down sshd_config

This is the single most impactful change. Once your key works, kill every other authentication method:

# Back up the original config first (always)
sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak

# Create a hardened SSH config
sudo tee /etc/ssh/sshd_config.d/hardened.conf <<'EOF'
# Authentication
PasswordAuthentication no
PubkeyAuthentication yes
PermitRootLogin prohibit-password
ChallengeResponseAuthentication no
KbdInteractiveAuthentication no
UsePAM yes

# Limits
MaxAuthTries 3
MaxSessions 3
LoginGraceTime 30

# Security
X11Forwarding no
AllowAgentForwarding no
PermitEmptyPasswords no

# Optional: change port (noise reduction, not security)
# Port 2222

# Restrict to specific users (add after creating your user)
# AllowUsers deploy
EOF

# Test the configuration BEFORE restarting
sudo sshd -t

# Restart only if the test passes (the unit is named "ssh" on Debian/Ubuntu)
sudo systemctl restart ssh

Critical rule: Test your SSH key login in a separate terminal window before closing your current session. I lost access to a client's production server in 2021 because I closed the session before testing. Had to rebuild from a 6-hour-old snapshot. The client was understanding about it. I was not understanding about it with myself. Open a second terminal. Test. Then close the first.

SSH Rate Limiting at the Firewall Level

Even with key-only auth, bots will still try. This nftables rule limits new SSH connections to 3 per minute per IP, dropping the rest silently:

# We will cover the full nftables config later, but this specific rule:
ct state new tcp dport 22 meter ssh-rate { ip saddr limit rate 3/minute burst 5 packets } accept
ct state new tcp dport 22 drop

User Isolation and Sudo Discipline

Root is the God account. Every process, every file operation, every network connection runs without any permission check. A mistyped rm -rf / tmp (note the stray space) would wipe the entire filesystem if not for the --preserve-root guard modern GNU rm enables by default — a guard that does nothing for rm -rf /* and countless similar slips. A compromised web application running as root gives the attacker unrestricted access to everything. There is no recovery path from "the attacker had root." Create a regular user. Use sudo for elevation. This is not optional.

# Create a deploy user
adduser deploy

# Add to sudo group
usermod -aG sudo deploy

# Copy SSH keys to the new user
mkdir -p /home/deploy/.ssh
cp /root/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys

# Test login as deploy in a NEW terminal
# ssh deploy@your-server-ip

# Once confirmed, lock root out of SSH entirely
sudo sed -i 's/^PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config.d/hardened.conf
echo "AllowUsers deploy" | sudo tee -a /etc/ssh/sshd_config.d/hardened.conf
sudo sshd -t && sudo systemctl restart ssh

Sudo Logging

Know who ran what with elevated privileges. This is invaluable when debugging unexpected changes or investigating incidents:

# Sudo already logs to /var/log/auth.log, but add explicit logging
sudo visudo
# Add this line at the bottom:
Defaults logfile="/var/log/sudo.log"
Defaults log_input, log_output
Defaults!/usr/bin/sudoreplay !log_input, !log_output

nftables Firewall — Beyond UFW

UFW is a fine training-wheels firewall. It wraps iptables into human-readable commands and handles the basics competently. But the moment you need rate limiting, connection tracking, geo-blocking, or per-IP rules, UFW becomes a middleman that gets in your way. nftables is the successor to iptables, built into every modern Linux kernel, and it does everything UFW does plus everything UFW cannot. It is also the default firewall backend in Ubuntu 24.04 and Debian 12 — you are already running it whether you know it or not.

Here is the complete nftables ruleset I deploy on every production VPS. It is opinionated. It blocks everything by default, allows only what is explicitly needed, rate-limits SSH, drops invalid packets, and logs connection attempts for forensics:

#!/usr/sbin/nft -f
# /etc/nftables.conf — Production VPS firewall

flush ruleset

table inet filter {
    # Set for tracking SSH brute-force IPs
    set ssh_bruteforce {
        type ipv4_addr
        flags dynamic, timeout
        timeout 15m
    }

    chain input {
        type filter hook input priority 0; policy drop;

        # Allow established and related connections
        ct state established,related accept

        # Drop invalid packets immediately
        ct state invalid drop

        # Allow loopback
        iif "lo" accept

        # ICMPv4: allow ping (rate limited)
        ip protocol icmp icmp type echo-request limit rate 5/second accept

        # ICMPv6: MUST allow for IPv6 to function (meta l4proto also matches
        # ICMPv6 behind extension headers, which ip6 nexthdr can miss)
        meta l4proto ipv6-icmp accept

        # SSH with rate limiting (3 new connections per minute per IP)
        tcp dport 22 ct state new meter ssh-rate { ip saddr limit rate 3/minute burst 5 packets } accept
        tcp dport 22 ct state new add @ssh_bruteforce { ip saddr } drop

        # HTTP and HTTPS
        tcp dport { 80, 443 } accept

        # Log dropped packets (first 5 per minute to avoid log flooding)
        limit rate 5/minute log prefix "nft-dropped: " level warn
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy accept;
    }
}

# Apply the ruleset
sudo nft -f /etc/nftables.conf

# Verify the rules loaded
sudo nft list ruleset

# Enable nftables to load on boot
sudo systemctl enable nftables

# If you are migrating from UFW, disable it first
sudo ufw disable
sudo systemctl disable ufw

Common Port Additions

Add these to the input chain as needed. Do not open ports "just in case" — open them when you actually deploy the service:

# WireGuard VPN
udp dport 51820 accept

# MySQL — ONLY from specific IP, never from 0.0.0.0
tcp dport 3306 ip saddr 10.0.0.5 accept

# PostgreSQL — same principle
tcp dport 5432 ip saddr { 10.0.0.5, 10.0.0.6 } accept

# Mail server (SMTP, IMAP, IMAPS)
tcp dport { 25, 587, 993 } accept

The critical mistake I see repeatedly: people open database ports (3306, 5432, 27017) to the entire internet because they need remote access from their laptop. Use an SSH tunnel instead: ssh -L 3306:localhost:3306 deploy@your-server. Your database never touches the public network, and you still get full access through the encrypted tunnel. This is what I recommend in our VPS networking guide as well.
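That tunnel gets tedious to retype, so I keep it in ~/.ssh/config. This is a sketch using a hypothetical dbtunnel alias — swap in your own hostname and user:

```shell
# ~/.ssh/config (on your laptop) — "dbtunnel" is a made-up alias for this example
Host dbtunnel
    HostName your-server-ip
    User deploy
    IdentityFile ~/.ssh/id_ed25519
    # Forward local port 3306 to MySQL listening on the server's loopback
    LocalForward 3306 localhost:3306

# Then hold the tunnel open with:  ssh -N dbtunnel
# ...and point your MySQL client at 127.0.0.1:3306 as if the database were local
```

The -N flag tells ssh to forward ports without opening a shell, which is exactly what you want for a long-lived tunnel.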

Fail2ban — The Automated Bouncer

Fail2ban watches your log files in real time, identifies attack patterns, and bans offending IPs automatically. On the Kamatera server I mentioned earlier, fail2ban banned 147 IPs in its first 24 hours. Those are 147 attackers who tried, failed three times, and got blocked for a full day — with zero intervention from me. It has been running on my servers since 2019 with exactly one false positive (my own IP, because I mistyped my SSH key passphrase four times in a row after a long day).

# Install fail2ban
sudo apt install fail2ban -y

# NEVER edit jail.conf — always create jail.local
sudo tee /etc/fail2ban/jail.local <<'EOF'
[DEFAULT]
# Ban for 24 hours after 3 failures in 10 minutes
bantime = 86400
findtime = 600
maxretry = 3
banaction = nftables-multiport
banaction_allports = nftables-allports

# Whitelist your own IP (replace with yours)
ignoreip = 127.0.0.1/8 ::1 YOUR.STATIC.IP.HERE

[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 86400

[sshd-aggressive]
enabled = true
port = ssh
filter = sshd[mode=aggressive]
logpath = /var/log/auth.log
maxretry = 2
bantime = 604800

# Nginx rate limiting (ban IPs that hit rate limits)
[nginx-limit-req]
enabled = true
port = http,https
filter = nginx-limit-req
logpath = /var/log/nginx/error.log
maxretry = 5
bantime = 3600

# Nginx bad bots
[nginx-botsearch]
enabled = true
port = http,https
filter = nginx-botsearch
logpath = /var/log/nginx/access.log
maxretry = 2
bantime = 86400
EOF

# Start and enable
sudo systemctl enable fail2ban
sudo systemctl start fail2ban

# Check status
sudo fail2ban-client status
sudo fail2ban-client status sshd

Note the banaction = nftables-multiport setting. If you are using nftables (and you should be), fail2ban needs to be told to use the nftables backend instead of the default iptables. Without this, your ban rules go into iptables while your actual firewall runs in nftables, and nothing works. This caught me off guard the first time I migrated from iptables. The aggressive SSH jail catches connection attempts that the standard jail misses — things like protocol violations and pre-auth disconnects that indicate a bot probing for vulnerabilities rather than trying passwords.

Useful Fail2ban Commands

# See banned IPs for a given jail ("fail2ban-client status" alone lists the jails)
sudo fail2ban-client status sshd

# Manually ban an IP
sudo fail2ban-client set sshd banip 192.168.1.100

# Unban an IP (e.g., your own after a lockout)
sudo fail2ban-client set sshd unbanip 192.168.1.100

# Check fail2ban log for recent activity
sudo tail -50 /var/log/fail2ban.log

# See total bans today
sudo zgrep "Ban " /var/log/fail2ban.log | grep "$(date +%Y-%m-%d)" | wc -l

CrowdSec — Community-Powered Defense

Fail2ban is reactive: an attacker has to hit your server before it gets banned on your server. CrowdSec adds a layer that fail2ban cannot: preemptive blocking based on the collective experience of thousands of servers. When an IP attacks any CrowdSec-protected server, that IP gets added to a shared blocklist that every CrowdSec installation receives. By the time the attacker reaches your server, it is already blocked.

I started running CrowdSec alongside fail2ban in early 2025 after reading about it in a Hacker News thread. Within the first week, it was preemptively blocking about 40 IPs per day that fail2ban had never seen — IPs that would have been allowed to make their 3 failed attempts before getting banned. Now those attempts never happen at all.

# Install CrowdSec
curl -s https://install.crowdsec.net | sudo sh
sudo apt install crowdsec -y

# Install the nftables bouncer (enforcer)
sudo apt install crowdsec-firewall-bouncer-nftables -y

# Install the SSH collection (detection scenarios)
sudo cscli collections install crowdsecurity/sshd
sudo cscli collections install crowdsecurity/linux
sudo cscli collections install crowdsecurity/nginx

# Restart CrowdSec to load collections
sudo systemctl restart crowdsec

# Check what is being monitored
sudo cscli metrics

# See current threat intelligence
sudo cscli decisions list

CrowdSec is free for individual servers. The community blocklist is opt-in (you share attack data, you receive protection data). If you do not want to share, you can run it in local-only mode, but you lose the preemptive blocking that makes it valuable. For a VPS running public services, I see no reason not to participate. The data shared is attack metadata (attacker IP, attack type, timestamp) — nothing about your server's content or traffic.

Automatic Security Updates

The median time between a vulnerability disclosure and a working exploit appearing in the wild is 15 days. That is barely two weeks. If your update strategy is "I will get to it when I remember," you are losing a race you did not know you were running. Every unpatched CVE is an open door. Unattended-upgrades closes them automatically, the same day the patch lands:

# Install unattended-upgrades
sudo apt install unattended-upgrades apt-listchanges -y

# Enable it
sudo dpkg-reconfigure -plow unattended-upgrades

# Fine-tune the configuration
sudo nano /etc/apt/apt.conf.d/50unattended-upgrades

Key settings I change from defaults:

// /etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}";
    "${distro_id}:${distro_codename}-security";
    "${distro_id}ESMApps:${distro_codename}-apps-security";
    "${distro_id}ESM:${distro_codename}-infra-security";
};

// Auto-reboot at 3 AM if a kernel update requires it
Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-Time "03:00";

// Remove unused dependencies
Unattended-Upgrade::Remove-Unused-Dependencies "true";

// Email notification on problems (optional)
// Unattended-Upgrade::Mail "admin@yourdomain.com";

# Verify it is active
cat /etc/apt/apt.conf.d/20auto-upgrades
# Should show:
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";

# Dry run to see what would be upgraded (note: the binary is singular)
sudo unattended-upgrade --dry-run --debug

This only touches security patches. Your PHP 8.3 will not become PHP 8.4. Your MariaDB 10.11 stays at 10.11. It patches the known vulnerabilities in the versions you are running and leaves the major/minor versions alone. I have had this running on 40+ servers across Vultr, DigitalOcean, Linode, and Kamatera for years. Zero broken applications from automatic updates. The auto-reboot at 3 AM is important — kernel patches often require a reboot to take effect, and servers that reboot automatically at low-traffic hours beat servers that accumulate months of kernel patches waiting on a reboot that never happens.
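One thing unattended-upgrades cannot do is force you to notice a pending reboot. Debian and Ubuntu drop a flag file when a package needs one, so a check like this is worth wiring into your shell profile or monitoring:

```shell
# Debian/Ubuntu write /var/run/reboot-required when an update (usually a
# kernel) needs a reboot to take effect; the .pkgs companion lists the culprits.
if [ -f /var/run/reboot-required ]; then
    cat /var/run/reboot-required
    cat /var/run/reboot-required.pkgs 2>/dev/null
else
    echo "no reboot pending"
fi
```

With Automatic-Reboot enabled as above this mostly takes care of itself, but the check matters on servers where you deliberately disable auto-reboot.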

Kernel Hardening (sysctl)

The Linux kernel ships with permissive defaults designed for compatibility, not security. These sysctl parameters tighten the network stack against spoofing, redirect attacks, and SYN floods. They are non-controversial — I have never seen a legitimate application break because of any of them:

# Create a dedicated security sysctl file
sudo tee /etc/sysctl.d/99-security.conf <<'EOF'
# ===== Network Security =====
# Enable reverse path filtering (anti-spoofing)
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Ignore ICMP broadcast requests (Smurf attack prevention)
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Disable ICMP redirect acceptance
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0

# Do not send ICMP redirects (not a router)
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0

# Disable source routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0

# Log Martian packets (impossible source addresses)
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.default.log_martians = 1

# SYN flood protection
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.tcp_synack_retries = 2

# ===== Kernel Security =====
# Enable ASLR (Address Space Layout Randomization)
kernel.randomize_va_space = 2

# Restrict dmesg access
kernel.dmesg_restrict = 1

# Restrict kernel pointer visibility
kernel.kptr_restrict = 2

# Disable magic SysRq key (prevents console-level attacks)
kernel.sysrq = 0

# ===== Performance + Security =====
# Enable TCP BBR congestion control (also improves throughput)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
EOF

# Apply all changes
sudo sysctl --system

The TCP BBR setting at the bottom is a bonus. Google developed BBR to improve throughput on lossy networks, and it measurably improves download speeds from your VPS to end users. I include it in the security config because I deploy it everywhere and there is no reason to maintain a separate file for one setting. See our networking guide for more on BBR tuning.
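To confirm the sysctl file actually took effect, read the live values back from /proc. On a kernel with BBR available (4.9 and later, which covers every supported Ubuntu and Debian release), you should see bbr and fq after sysctl --system; an untouched system typically reports cubic:

```shell
# Read the live congestion-control settings straight from /proc
cat /proc/sys/net/ipv4/tcp_congestion_control   # "bbr" once applied (often "cubic" before)
cat /proc/sys/net/core/default_qdisc            # "fq" once applied

# List the congestion-control modules the running kernel can use right now
cat /proc/sys/net/ipv4/tcp_available_congestion_control
```

If bbr is missing from the available list, the setting silently fails to apply — load the module with modprobe tcp_bbr before troubleshooting further.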

Auditing and Intrusion Detection

auditd — System Call Monitoring

If your VPS handles anything with compliance requirements (PCI-DSS, HIPAA, SOC 2), you need an audit trail. Even if compliance is not a concern, auditd provides forensic data that is invaluable after an incident. It records who changed what, when, and how:

# Install auditd
sudo apt install auditd audispd-plugins -y

# Add monitoring rules for critical files
sudo tee /etc/audit/rules.d/security.rules <<'EOF'
# Monitor user/group changes
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/group -p wa -k identity
-w /etc/gshadow -p wa -k identity

# Monitor SSH configuration
-w /etc/ssh/sshd_config -p wa -k sshd_config
-w /etc/ssh/sshd_config.d/ -p wa -k sshd_config

# Monitor sudoers
-w /etc/sudoers -p wa -k sudoers
-w /etc/sudoers.d/ -p wa -k sudoers

# Monitor cron (common persistence mechanism)
-w /etc/crontab -p wa -k cron
-w /etc/cron.d/ -p wa -k cron
-w /var/spool/cron/ -p wa -k cron

# Monitor firewall changes
-w /etc/nftables.conf -p wa -k firewall

# Monitor package management
-w /usr/bin/apt -p x -k package_management
-w /usr/bin/dpkg -p x -k package_management
EOF

# Restart auditd to load rules
sudo systemctl restart auditd

# Search audit logs
sudo ausearch -k identity --start today
sudo ausearch -k sshd_config --start today

Rootkit Detection

# Install rootkit hunters
sudo apt install rkhunter chkrootkit -y

# Update rkhunter database and run scan
sudo rkhunter --update
sudo rkhunter --propupd
sudo rkhunter --check --skip-keypress

# Run chkrootkit
sudo chkrootkit

# Schedule weekly scans
echo "0 4 * * 0 root /usr/bin/rkhunter --check --skip-keypress --report-warnings-only" | sudo tee /etc/cron.d/rkhunter
echo "0 5 * * 0 root /usr/sbin/chkrootkit > /var/log/chkrootkit.log 2>&1" | sudo tee /etc/cron.d/chkrootkit

Real-Time Process Monitoring

# Interactive process monitoring
sudo apt install htop -y

# Check listening sockets and who owns them (first thing I check when suspicious)
ss -tulnp

# Find processes listening on unexpected ports (anchor the match to the port
# column so PIDs or addresses containing 22/80/443 are not filtered by accident)
ss -tlnp | grep -vE ':(22|80|443)[[:space:]]'

# Check for recently modified binaries (should be empty after initial setup)
find /usr/bin /usr/sbin /sbin /bin -mtime -1 -type f 2>/dev/null

# Check for suspicious cron jobs across ALL users
for user in $(cut -f1 -d: /etc/passwd); do
    crontab -u "$user" -l 2>/dev/null | grep -v "^#" | grep -v "^$" && echo "  ^-- $user"
done

Backup Strategy — Your Last Line

Everything above is about prevention. Backups are about recovery. When prevention fails — and at some point, on some server, it will — your backup is the difference between "annoying weekend" and "catastrophic data loss." The 3-2-1 rule is not a suggestion: 3 copies, 2 different storage types, 1 offsite.

Provider Snapshots (Layer 1)

Enable provider-level backups immediately. They are your fastest recovery path — one click and you are back to a known state:

Provider       Backup Cost            Snapshot Cost         Frequency
Vultr          20% of VPS cost        $0.05/GB/mo           Automatic daily
DigitalOcean   20% of droplet cost    $0.06/GB/mo           Automatic weekly
Linode         $2.50–$5/mo            Free (limited)        Automatic daily
Hetzner        20% of server cost     Free (up to limit)    Automatic daily
Kamatera       Included (snapshots)   Free (limited)        Manual or scheduled
Contabo        $3.27/mo (400GB)       $1.49/snapshot        Manual

Automated Offsite Backups (Layer 2)

# Restic — encrypted, deduplicated, incremental backups
sudo apt install restic -y

# Initialize a backup repository (local, S3, SFTP, or B2)
restic init --repo sftp:backup-server:/backups/vps-name

# Back up critical directories (point -r at the repo, or export RESTIC_REPOSITORY)
restic -r sftp:backup-server:/backups/vps-name backup /var/www /etc /home /var/lib/mysql \
  --exclude='/var/www/*/cache' \
  --exclude='/var/www/*/tmp'

# Automate with cron (daily at 2 AM, with retention)
sudo tee /etc/cron.d/restic-backup <<'EOF'
0 2 * * * root RESTIC_PASSWORD_FILE=/root/.restic-pass restic backup /var/www /etc /home /var/lib/mysql --exclude='/var/www/*/cache' -q --repo sftp:backup-server:/backups/vps-name
30 2 * * * root RESTIC_PASSWORD_FILE=/root/.restic-pass restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune -q --repo sftp:backup-server:/backups/vps-name
EOF

Database Backups (Layer 3)

# MariaDB/MySQL dump with compression
mysqldump --all-databases --single-transaction --routines --triggers \
  | gzip > /backups/db/all-databases-$(date +%Y%m%d-%H%M).sql.gz

# PostgreSQL
pg_dumpall | gzip > /backups/db/pgdump-$(date +%Y%m%d-%H%M).sql.gz

# Automate daily database dumps
echo "0 1 * * * root mysqldump --all-databases --single-transaction | gzip > /backups/db/daily-\$(date +\%Y\%m\%d).sql.gz" | sudo tee /etc/cron.d/db-backup

# IMPORTANT: Test your restores regularly
# A backup you have never tested restoring from is not a backup
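Here is the shape of a restore test, sketched with plain directories so it runs anywhere. On a real server the copy step would be restic restore latest --target /tmp/restore-test, followed by the same diff against the live data:

```shell
# Restore-verification sketch: restore into a scratch directory, then diff
# against the live tree. The cp below stands in for the actual restore step.
live=$(mktemp -d); restored=$(mktemp -d)
echo "server_name example.com;" > "$live/site.conf"

cp "$live/site.conf" "$restored/site.conf"   # stand-in for `restic restore`

if diff -r "$live" "$restored" >/dev/null; then
    echo "restore verified: contents match"
else
    echo "RESTORE MISMATCH — fix this before you need the backup" >&2
    exit 1
fi
```

Put the real version of this in a monthly cron job or calendar reminder. The diff step is the whole point: a restore that completes but produces the wrong bytes is still a failed backup.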

Provider-Level Security Features

OS-level hardening is your job. But some providers give you additional layers that are worth knowing about and using:

Provider       DDoS Protection    Cloud Firewall   2FA   Snapshots   Entry Price
Vultr          Free (all plans)   Yes              Yes   Yes         $5/mo
DigitalOcean   Free (basic)       Yes              Yes   Yes         $6/mo
Kamatera       No                 Yes              Yes   Yes         $4/mo
Linode         Free (basic)       Yes              Yes   Yes         $5/mo
Hetzner        Free               Yes              Yes   Yes         $4.59/mo
Contabo        Free (basic)       No               Yes   Yes         $5.50/mo
RackNerd       Basic              No               No    Yes         $1.49/mo
Hostinger      No                 No               Yes   Yes         $4.99/mo

Vultr stands out here with free DDoS mitigation on every plan — no upgrade required. If you run anything that attracts DDoS attacks (game servers, anything cryptocurrency-related, or just something that annoys people on the internet), that free DDoS protection is worth more than the monthly VPS cost. DigitalOcean's cloud firewall is particularly nice because you can manage it from the dashboard without SSH access — useful when you need to lock down a server remotely during an incident.

Enable 2FA on your provider account immediately. If an attacker compromises your provider login, they can snapshot your entire server (including database data, SSL keys, everything), destroy instances, or change SSH keys. The OS security we have been discussing is irrelevant if someone logs into your Vultr or DigitalOcean dashboard.

For DDoS protection beyond the provider level, put Cloudflare in front of your server. The free tier handles most application-layer attacks. See our security hardening guide for the complete architecture.

Frequently Asked Questions

How long does it take to properly harden a VPS?

The essential five — SSH keys, non-root user, nftables firewall, fail2ban, and unattended-upgrades — take about 15-20 minutes on a fresh Ubuntu server. That covers 99% of automated attacks. Adding CrowdSec, auditd, kernel hardening, and a rootkit scanner extends the process to about 45-60 minutes. I recommend doing the essential five immediately on first boot and scheduling the advanced hardening within your first week.

Is changing the SSH port from 22 actually useful?

It is noise reduction, not security. Any real attacker will port-scan and find your SSH in seconds. But it eliminates about 95% of automated bot traffic hitting port 22, which cleans up your logs significantly and reduces fail2ban processing load. It takes 30 seconds to change and has no downside, so I do it on every server — just do not mistake it for real protection.

Should I use UFW or nftables for my VPS firewall?

UFW is fine for basic port allow/deny rules and is easier to learn. But nftables gives you rate limiting, connection tracking, geo-blocking, and per-IP rules that UFW cannot handle without dropping to raw iptables. If you run anything beyond a simple web server — game servers, rate-limited APIs, multi-tenant applications — learn nftables. It is the default backend in Ubuntu 24.04 and Debian 12 regardless. See our VPS setup guide for the simpler UFW approach.

What is the difference between fail2ban and CrowdSec?

Fail2ban is reactive and local — it reads your logs, spots attack patterns, and bans IPs on your single server. CrowdSec does the same but adds a crowd-sourced blocklist: when one CrowdSec user gets attacked by an IP, every CrowdSec installation learns about it. Think of it as fail2ban with a shared immune system. I run both: fail2ban for instant local response, CrowdSec for preemptive blocking of known-bad IPs before they even try.

Do VPS providers handle any security for me?

Providers secure the physical hardware, hypervisor, and network infrastructure. Everything inside your VPS — OS, software, firewall rules, user accounts — is entirely your responsibility. Some providers add useful layers: Vultr includes free DDoS protection on every plan, DigitalOcean offers cloud firewalls, and Kamatera has cloud firewall and monitoring add-ons. But the OS-level hardening described here is always your job.

How do I know if my VPS has been compromised?

Six red flags: (1) Unknown processes consuming CPU or network in htop, (2) New user accounts in /etc/passwd you did not create, (3) Modified SSH authorized_keys files, (4) Unexpected cron jobs (check crontab -l for all users), (5) Established outbound connections to unknown IPs in ss -tnp, and (6) Changed timestamps on critical binaries like /usr/bin/ssh. Run rkhunter and chkrootkit weekly to automate most of these checks.

Is a VPN necessary for VPS management?

For most setups, SSH keys plus fail2ban plus a properly configured firewall is sufficient. A VPN adds real value in two scenarios: (1) High-security environments where you want SSH completely invisible to the public internet — set up WireGuard, restrict SSH to the WireGuard subnet only, and SSH becomes unreachable without the VPN. (2) Multi-server environments needing secure inter-server communication. For a single web server, a VPN is nice to have, not essential. See our VPS networking guide for WireGuard setup.
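For scenario (1), the firewall change is two lines in the input chain of the nftables ruleset shown earlier. This sketch assumes WireGuard's common defaults — udp/51820 and a 10.8.0.0/24 tunnel subnet, both of which your own config may differ from:

```shell
# In the inet filter input chain — VPN-only SSH
udp dport 51820 accept                      # WireGuard handshake, open to the world
tcp dport 22 ip saddr 10.8.0.0/24 accept    # SSH only from inside the tunnel
# ...and remove any other rule accepting tcp/22; the drop policy does the rest
```

Test from a machine connected to the VPN and from one that is not before closing your current session — the lockout failure mode here is the same as with sshd_config changes.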

What are the most common VPS security mistakes?

From cleaning up after actual breaches, the top seven: (1) Password authentication still enabled on SSH, (2) Running everything as root, (3) No firewall — not even UFW, (4) Never updating because "it works fine," (5) No backups — discovered only after ransomware, (6) Database ports (3306, 5432, 27017) open to the entire internet, (7) Securing IPv4 but leaving IPv6 wide open. Every single one takes under 5 minutes to fix.

891 Login Attempts in 16 Minutes

That is what your fresh VPS faces before you finish reading this sentence. The hardening above takes 20 minutes. The bots do not take breaks. Start with the Non-Negotiable Five and build from there.

Alex Chen — Senior Systems Engineer

Alex has hardened production VPS instances for startups, e-commerce sites, and SaaS platforms since 2019. He runs fail2ban and CrowdSec across 40+ servers and has cleaned up after enough breaches to know what works and what does not. Learn more about our testing methodology →