Troubleshooting Very Average WireGuard Performance on IONOS

I’ve been obsessing over this on and off for about a month, and I just can’t seem to figure it out. The situation is as follows:

public internet ----> VPS <====WG Tunnel====> pfSense ----> LAN/Server

I have a WireGuard tunnel set up to route all traffic between my pfSense LAN and a VPS. This means everything on that subnet sees its public IP as that of the VPS, and likewise public-facing services are accessible only through the VPS and not the WAN address of my pfSense. This is done via iptables rules on the VPS to masquerade and forward the appropriate traffic and ports. To my slight surprise, this works perfectly!
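
For illustration, the VPS-side rules look roughly like this. The interface names, subnet, and port-forward target are placeholders rather than my actual config: wg0 is the tunnel, eth0 the public interface, and 10.0.0.0/24 the routed LAN.

    # Allow the VPS to route between the public interface and the tunnel
    sysctl -w net.ipv4.ip_forward=1

    # Masquerade LAN traffic leaving via the VPS's public IP
    iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE

    # Forward an inbound service port (HTTPS here) down the tunnel
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
    iptables -A FORWARD -i eth0 -o wg0 -p tcp -d 10.0.0.2 --dport 443 -j ACCEPT
    iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT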

The whole purpose of this is to have a public IP which isn’t residential, isn’t on the Spamhaus blocklist and generally has a good reputation among email providers and search engines. I also detest the idea of Virgin Media performing deep packet inspection (innocent or not) and blocking some DNS queries.

The only bone I have to pick is throughput. The thing is… it’s OK. But ONLY OK. It’s not bad enough to point to something obviously wrong, like MTU or MSS. It’s working as it should, but I’m not getting the performance I could be. I’m only getting about 150 Mbps when I should be seeing a theoretical maximum of 800 Mbps download.

As far as I can test, the network is not the bottleneck. iperf3 in every direction, using every protocol, on every peer shows speeds of AT LEAST 500 Mbps (with the exception of my WAN upload, which is only 100 Mbps).
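
If you want to reproduce the tests, this is the sort of thing I mean (hostname is a placeholder):

    # On the far end:
    iperf3 -s

    # TCP in both directions (-R reverses, so the server sends)
    iperf3 -c vps.example.com
    iperf3 -c vps.example.com -R

    # UDP at a target bitrate, both directions
    iperf3 -c vps.example.com -u -b 500M
    iperf3 -c vps.example.com -u -b 500M -R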

As far as I can see, CPU is also not the bottleneck. htop shows low usage on both ends.

So what gives? Am I doing something wrong?

Things I’ve checked:

Iperf Results

doesn’t even matter what’s what… it’s all the same.

WG Performance Tuning Guide

Followed this guide to a T, but unfortunately none of the settings made any difference. Left congestion control at BBR, because why not. Great write-up, definitely worth a skim.
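
For context, these are the kinds of knobs the guide walks through; the values here are illustrative, not a recommendation:

    # Bigger socket buffers for sustained high-bandwidth UDP
    sysctl -w net.core.rmem_max=26214400
    sysctl -w net.core.wmem_max=26214400

    # Congestion control left at BBR, paired with the fq qdisc
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr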

Checking Adapter Settings

I also checked all the adapter settings using ethtool. GRO was on, and almost everything else was marked [fixed] (i.e. not changeable). I did, however, enable rx-udp-gro-forwarding and rx-gro-list on the recommendation of this Tailscale guide.
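
For reference, the toggles in question look like this (eth0 assumed; substitute your interface):

    # Check the current offload settings
    ethtool -k eth0 | grep -E 'gro|gso'

    # What I ended up enabling
    ethtool -K eth0 rx-udp-gro-forwarding on rx-gro-list on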

Not exactly my use case but figured it couldn’t hurt. Unfortunately it didn’t help either.

Crappy ISP? Virgin Media Modems Perhaps?

I valiantly battled a previous issue whereby the Virgin Media SuperHub 3 would shit itself if presented with more than a few UDP packets in modem mode. This was a VERY irritating problem at the time, but we have since upgraded to a SuperHub 5, where (as far as I can tell) this problem does not exist. No one else seems to have reported it on the Hub 5, and my iperf UDP tests confirm as much.

MTU/MSS

Also done this to death. Tried everything between 1380 and 1420 on both ends of the tunnel with not much difference. I have set it back to 1420, since bigger can’t hurt when the difference is negligible. No problem here.
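
For completeness, these are the two places the knobs live; wg0 and the clamp rule are shown as examples, not necessarily what you need:

    # In the [Interface] section of the WireGuard config
    MTU = 1420

    # MSS clamping for TCP passing through the tunnel (on the VPS)
    iptables -t mangle -A FORWARD -o wg0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu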

Power Savings & C States on pfSense Side

Disabled and set to high performance in BIOS. I can route at over gigabit speeds, so no problems here.
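
If you'd rather sanity-check this from the OS than reboot into the BIOS, pfSense (FreeBSD) exposes it via sysctl; these are the two I'd look at:

    # Current CPU frequency and the deepest C-state the OS will use
    sysctl dev.cpu.0.freq
    sysctl dev.cpu.0.cx_lowest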

So It’s a VPS Problem?

Note the VPS has 1 vCPU and 1 GB of RAM. Not stunning, but should be enough, no?

Well, that was my assumption, so I rented two other VPSes: one from Hostworld and another from IONOS (where the current WG VPS lives), both with 2 cores and 2 GB of memory. The Hostworld 2 vCPU VPS yielded only about 100 Mbps to my home network, and the IONOS 2 vCPU averaged 142 Mbps. Between the two IONOS servers I got about 170 Mbps. Slightly better on average, but not groundbreaking, and I’ve seen that kind of speed to my pfSense under optimal circumstances.

Ok then… how do we solve it? An extra core didn’t seem to do anything! A problem for another day or a forum somewhere. I’m off to have some dinner.

2024 UPDATE:

It was indeed the VPS. I migrated to Fasthosts and got the advertised link speed of the VPS instance: about 400 Mbps.

Interestingly, I called up IONOS before I made the switch. I explained my situation and was met with a suspiciously knowing “yeaaaaaah”. It sounds like this was a known problem, but the rep couldn’t give me any more information. This made me think they might be artificially limiting UDP bandwidth.

BUT WAIT THERE’S MORE!

VMware recently changed its pricing, which saw Fasthosts migrate its VPSes over to Proxmox instead. You know who else uses Proxmox? IONOS.

The performance instantly dropped. Speeds are still better than my old IONOS instance, probably due to having a few more VPS resources, but still wayyy down from what they were before the Proxmox migration. It’s also worth noting that Fasthosts’ IP block is listed as belonging to IONOS. So I’d put good odds on IONOS and Fasthosts having some kind of agreement: maybe they’re in the same DC, or maybe Fasthosts migrated their VMware VPSes to IONOS’ Proxmox environment in a pinch to avoid paying VMware. The announcement and the subsequent migration were so close together that I find it difficult to believe Fasthosts deployed that much infrastructure of their own in such a short time.

Since TCP speed tests are still fine, I’m not sure if this is an inherent problem with UDP on Proxmox, the underlying hardware, or whether Fasthosts are sharing IONOS’ infrastructure, which is already set up to limit UDP to IONOS’ specifications. Who knows. I’ll call Fasthosts and see what they have to say.
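
If you want to test for a UDP cap yourself, stepping iperf3 through increasing bitrates makes a throttle fairly obvious: the offered load keeps rising while actual throughput flatlines. Hostname is a placeholder again:

    # Step up the offered UDP load and watch where throughput stops scaling
    for rate in 100M 200M 400M 800M; do
        iperf3 -c vps.example.com -u -b $rate -t 10
    done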
