WireGuard Archive on lore.kernel.org
* Significant packet loss on a wg interface
@ 2019-07-09 22:05 Ian Blackburn
  0 siblings, 0 replies; only message in thread
From: Ian Blackburn @ 2019-07-09 22:05 UTC (permalink / raw)
  To: wireguard


Hello,

I've been evaluating WireGuard as a replacement for a setup that uses
OpenVPN. Initial tests look promising in terms of the system resources required
(much less CPU than OpenVPN), but I'm seeing a fair amount of packet loss and
I can't work out why.

The scenario is a public API endpoint that devices ping with a reasonably
hefty payload. The payload is received by nginx, which proxies it over the
tunnel (across the public network) to a downstream server.

The WireGuard version is 0.0.20190406-1.

The test server is an Intel i5-4460 running Debian with the 4.19.0-5-amd64
kernel.

load average: 2.18, 2.12, 2.12  
%Cpu(s): 25.2 us, 3.0 sy, 0.0 ni, 68.3 id, 0.0 wa, 0.0 hi, 3.5 si, 0.0 st

So the traffic isn't exceptionally heavy, it's pretty stable in terms of
volume, and the machine isn't doing anything else.

Looking at the wg0 interface, I see it dropping a fair number of RX packets.
Doing some maths with /sys/class/net/wg0/statistics, the interface is
receiving about 600KB/sec and around 5000pps. The RX dropped counter is rising
at about 120-150pps (roughly 2-3%), and this shows up as an error to the
sender, which then has to explicitly retry (this is how I became aware of the
problem in the first place).
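
For reference, this is roughly how I'm deriving those numbers -- just a quick
sketch that samples the RX counters a second apart and works out the rates
from the deltas:

	#!/bin/sh
	# Sample the wg0 RX counters one second apart and derive
	# packets/sec, KB/sec and dropped packets/sec from the deltas.
	IF=wg0
	S=/sys/class/net/$IF/statistics
	P1=$(cat "$S/rx_packets"); B1=$(cat "$S/rx_bytes"); D1=$(cat "$S/rx_dropped")
	sleep 1
	P2=$(cat "$S/rx_packets"); B2=$(cat "$S/rx_bytes"); D2=$(cat "$S/rx_dropped")
	echo "rx:      $((P2 - P1)) pps, $(( (B2 - B1) / 1024 )) KB/s"
	echo "dropped: $((D2 - D1)) pps"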

The underlying eth0 interface isn't seeing a single packet dropped or any
errors.

eth0 MTU is 1500, wg0 MTU is 1420 (I haven't touched these).

I've tried raising txqueuelen, and raising net.core.rmem_max and
net.core.rmem_default to stupidly high values, with no difference.
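
Concretely, that first round of changes was along these lines (values here are
illustrative rather than the exact ones I used):

	# Illustrative values only.
	ip link set dev wg0 txqueuelen 10000
	sysctl -w net.core.rmem_max=67108864
	sysctl -w net.core.rmem_default=67108864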

I've also tried setting net.ipv4.tcp_rmem='16384 33554432 67108864' and
increasing net.core.netdev_max_backlog and net.ipv4.udp_mem, but nothing
changes. So rather than trying even more random changes, I'm wondering if
anybody recognizes the symptoms, and what the fix is? I think that covers it,
but feel free to ask for other metrics.
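
For completeness, that second batch looked roughly like this (the tcp_rmem
value is the one quoted above; the other values are illustrative):

	# Illustrative values apart from tcp_rmem; udp_mem takes three
	# page counts (min / pressure / max).
	sysctl -w net.ipv4.tcp_rmem='16384 33554432 67108864'
	sysctl -w net.core.netdev_max_backlog=5000
	sysctl -w net.ipv4.udp_mem='764912 1019885 1529824'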

The exact same machine running OpenVPN dropped nothing (although user CPU was
closer to 60%).

Thanks,  
Ian.  
