Subject: Issue w/ TCP Performance
From: Robert Straw @ 2018-12-13  1:10 UTC
To: wireguard

Greetings,

I am running into a rather strange issue with WireGuard performance. I
have a link which I expect to saturate at roughly 200 Mbit/s or more.
When I run iperf against the server's public IP, everything works as
expected:

[  5] local 45.xxx.xxx.xxx port 43458 connected to 65.xxx.xxx.xxx port 5201
[ ID] Interval           Transfer     Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  20.4 MBytes   171 Mbits/sec 0   2.08 MBytes
[  5]   1.00-2.00   sec  30.0 MBytes   252 Mbits/sec 1   1.93 MBytes
[  5]   2.00-3.01   sec  32.5 MBytes   271 Mbits/sec 0   2.10 MBytes
[  5]   3.01-4.01   sec  31.2 MBytes   262 Mbits/sec 0   2.24 MBytes
[  5]   4.01-5.00   sec  32.5 MBytes   275 Mbits/sec 0   2.36 MBytes
[  5]   5.00-6.00   sec  35.0 MBytes   294 Mbits/sec 1   2.38 MBytes
[  5]   6.00-7.00   sec  37.5 MBytes   314 Mbits/sec 0   2.40 MBytes
[  5]   7.00-8.00   sec  36.2 MBytes   304 Mbits/sec 0   2.41 MBytes
[  5]   8.00-9.01   sec  36.2 MBytes   302 Mbits/sec 0   2.42 MBytes
[  5]   9.01-10.00  sec  37.5 MBytes   316 Mbits/sec 0   2.44 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate Retr
[  5]   0.00-10.00  sec   329 MBytes   276 Mbits/sec 2             sender
[  5]   0.00-10.07  sec   329 MBytes   274 Mbits/sec                  receiver

iperf Done.
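
For reference, that run was a plain default `iperf3` TCP test pointed
straight at the server's public IP, along these lines (the placeholder
stands in for the redacted address):

  # baseline: TCP over the WAN, no tunnel involved
  iperf3 -c <server-public-ip>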


However, if I instead run the same test over a WireGuard tunnel, I see
only about 50 Mbit/s, a mere fraction of my link speed:

Connecting to host 10.43.0.1, port 5201
[  5] local 10.43.0.2 port 39528 connected to 10.43.0.1 port 5201
[ ID] Interval           Transfer     Bitrate Retr  Cwnd
[  5]   0.00-1.00   sec  2.89 MBytes  24.2 Mbits/sec 0    232 KBytes
[  5]   1.00-2.00   sec  7.48 MBytes  62.6 Mbits/sec 3    359 KBytes
[  5]   2.00-3.00   sec  7.36 MBytes  62.0 Mbits/sec 0    418 KBytes
[  5]   3.00-4.00   sec  5.21 MBytes  43.6 Mbits/sec 5    227 KBytes
[  5]   4.00-5.01   sec  4.41 MBytes  36.7 Mbits/sec 2    244 KBytes
[  5]   5.01-6.00   sec  5.03 MBytes  42.6 Mbits/sec 0    258 KBytes
[  5]   6.00-7.00   sec  4.48 MBytes  37.6 Mbits/sec 3    206 KBytes
[  5]   7.00-8.00   sec  4.41 MBytes  37.0 Mbits/sec 0    236 KBytes
[  5]   8.00-9.01   sec  4.41 MBytes  36.7 Mbits/sec 0    255 KBytes
[  5]   9.01-10.00  sec  4.97 MBytes  42.1 Mbits/sec 0    263 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate Retr
[  5]   0.00-10.00  sec  50.7 MBytes  42.5 Mbits/sec 13             sender
[  5]   0.00-10.04  sec  49.9 MBytes  41.6 Mbits/sec                  receiver

iperf Done.
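
That is the same default TCP test, just pointed at the peer's tunnel
address (the host shown in the output above):

  # same TCP test, but through the WireGuard tunnel
  iperf3 -c 10.43.0.1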


What I find most interesting, though, is that if I run `iperf` in UDP
mode and set the target rate to roughly my link speed, I get the
expected line rate, with minimal loss on either end:

Connecting to host 10.43.0.1, port 5201
[  5] local 10.43.0.2 port 50476 connected to 10.43.0.1 port 5201
[ ID] Interval           Transfer     Bitrate Total Datagrams
[  5]   0.00-1.01   sec  23.1 MBytes   191 Mbits/sec 17683
[  5]   1.01-2.00   sec  22.3 MBytes   190 Mbits/sec 17127
[  5]   2.00-3.00   sec  26.1 MBytes   219 Mbits/sec 19995
[  5]   3.00-4.00   sec  23.8 MBytes   200 Mbits/sec 18258
[  5]   4.00-5.00   sec  23.8 MBytes   199 Mbits/sec 18220
[  5]   5.00-6.00   sec  23.9 MBytes   201 Mbits/sec 18356
[  5]   6.00-7.00   sec  23.8 MBytes   200 Mbits/sec 18275
[  5]   7.00-8.00   sec  23.8 MBytes   200 Mbits/sec 18262
[  5]   8.00-9.00   sec  23.7 MBytes   199 Mbits/sec 18170
[  5]   9.00-10.00  sec  23.6 MBytes   198 Mbits/sec 18125
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate Jitter    Lost/Total Datagrams
[  5]   0.00-10.00  sec   238 MBytes   200 Mbits/sec 0.000 ms  0/182471 (0%)  sender
[  5]   0.00-10.09  sec   238 MBytes   198 Mbits/sec 0.105 ms  0/182467 (0%)  receiver

iperf Done.
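
The UDP run above was something along these lines; the 200M pacing
target is just my rough estimate of the link speed:

  # UDP through the tunnel, paced at roughly the link rate
  iperf3 -c 10.43.0.1 -u -b 200M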

---

Unfortunately, most of my use cases for WireGuard involve bulk TCP
transfers between hosts. Does anyone have any idea why a TCP stream
over a WireGuard VPN would be so slow while a UDP stream works as
expected? I am using version `0.0.20181119` of the module, built
against kernel 4.19.8. All of my net.ipv4 settings are at their
defaults, I believe. (Also, just FYI, I don't seem to be saturating the
CPU or anything: I can push multiple gigabits of TCP through the tunnel
on the less powerful node when the packets don't have to traverse the
WAN.)
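
In case it helps, here is roughly what I checked on both ends (wg0
stands in for my actual interface name); everything appears to be at
the defaults:

  # check the tunnel MTU (wg0 is a placeholder for the interface name)
  ip link show dev wg0

  # TCP congestion control and buffer defaults
  sysctl net.ipv4.tcp_congestion_control
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem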

Thanks,
Rob
