WireGuard Archive on lore.kernel.org
* Wireguard Bug?
@ 2019-05-12 13:44 Ryan Whelan
  2019-05-12 15:41 ` Jason A. Donenfeld
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Ryan Whelan @ 2019-05-12 13:44 UTC (permalink / raw)
  To: WireGuard mailing list

[-- Attachment #1.1: Type: text/plain, Size: 6725 bytes --]

I am building a system which coordinates the meshing of WireGuard devices.
Currently, all the devices are running on an embedded platform (AMD Geode
LX500) and work as expected.  However, when a 64bit KVM host is introduced
for testing, all the 32bit hosts running on the Geode platform report the
following warning and drop offline temporarily.  The CPU usage on the KVM
host spikes, and all the CPU time is spent in the kernel threads servicing
the WG interfaces.

I'm using kernel 4.19.41 and have seen the issue with both the latest WG
snapshot and a build from master.

Is this a bug in WG?

May 10 18:36:25 buildroot kern.warn kernel: WARNING: CPU: 0 PID: 9 at kernel/workqueue.c:1442 __queue_work+0x1d4/0x2aa
May 10 18:36:25 buildroot kern.warn kernel: CPU: 0 PID: 9 Comm: ksoftirqd/0 Tainted: G        W         4.19.41-banana #1
May 10 18:36:25 buildroot kern.warn kernel: EIP: __queue_work+0x1d4/0x2aa
May 10 18:36:25 buildroot kern.warn kernel: Code: c1 e8 4e 09 1d 00 c7 05 30 d3 5f c1 00 00 00 00 c7 45 e4 00 00 00 00 e9 76 fe ff ff 8b 3d 80 3c 60 c1 89 78 10 e9 32 ff ff ff <0f> 0b b8 01 00 00 00 e8 1c 82 00 00 a1 a8 9e 5f c1 85 c0 0f 85 3b
May 10 18:36:25 buildroot kern.warn kernel: EAX: 00000000 EBX: cfb63044 ECX: 00000003 EDX: cfb63048
May 10 18:36:25 buildroot kern.warn kernel: ESI: cfdfbb00 EDI: cfab29c0 EBP: cf847d38 ESP: cf847d1c
May 10 18:36:25 buildroot kern.warn kernel: DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010086
May 10 18:36:25 buildroot kern.warn kernel: CR0: 80050033 CR2: a7bb5058 CR3: 0eebe000 CR4: 00000090
May 10 18:36:25 buildroot kern.warn kernel: Call Trace:
May 10 18:36:25 buildroot kern.warn kernel:  queue_work_on+0x19/0x1d
May 10 18:36:25 buildroot kern.warn kernel:  wg_packet_receive+0x4a5/0x557
May 10 18:36:25 buildroot kern.warn kernel:  ? send6+0x1fc/0x1fc
May 10 18:36:25 buildroot kern.warn kernel:  wg_receive+0x16/0x25
May 10 18:36:25 buildroot kern.warn kernel:  udp_queue_rcv_skb+0x254/0x324
May 10 18:36:25 buildroot kern.warn kernel:  udp_unicast_rcv_skb.isra.12+0x68/0x7a
May 10 18:36:25 buildroot kern.warn kernel:  __udp4_lib_rcv+0x413/0x88b
May 10 18:36:25 buildroot kern.warn kernel:  udp_rcv+0x12/0x14
May 10 18:36:25 buildroot kern.warn kernel:  ip_local_deliver_finish+0x82/0x1ff
May 10 18:36:25 buildroot kern.warn kernel:  ip_local_deliver+0xc0/0xcb
May 10 18:36:25 buildroot kern.warn kernel:  ? ip_sublist_rcv_finish+0x41/0x41
May 10 18:36:25 buildroot kern.warn kernel:  ip_rcv_finish+0x24/0x2a
May 10 18:36:25 buildroot kern.warn kernel:  ip_rcv+0xa1/0xaf
May 10 18:36:25 buildroot kern.warn kernel:  ? ip_rcv_finish_core.isra.0+0x331/0x331
May 10 18:36:25 buildroot kern.warn kernel:  __netif_receive_skb_one_core+0x3f/0x59
May 10 18:36:25 buildroot kern.warn kernel:  __netif_receive_skb+0x16/0x4f
May 10 18:36:25 buildroot kern.warn kernel:  netif_receive_skb_internal+0x26/0xaf
May 10 18:36:25 buildroot kern.warn kernel:  netif_receive_skb+0x8/0xa
May 10 18:36:25 buildroot kern.warn kernel:  rhine_napipoll+0x5ef/0x9c6
May 10 18:36:25 buildroot kern.warn kernel:  net_rx_action+0x197/0x24d
May 10 18:36:25 buildroot kern.warn kernel:  __do_softirq+0xd6/0x1ae
May 10 18:36:25 buildroot kern.warn kernel:  run_ksoftirqd+0x21/0x24
May 10 18:36:25 buildroot kern.warn kernel:  smpboot_thread_fn+0x137/0x1ea
May 10 18:36:25 buildroot kern.warn kernel:  kthread+0xbe/0xea
May 10 18:36:25 buildroot kern.warn kernel:  ? sort_range+0x18/0x18
May 10 18:36:25 buildroot kern.warn kernel:  ? __kthread_create_on_node+0x13e/0x13e
May 10 18:36:25 buildroot kern.warn kernel:  ret_from_fork+0x19/0x24
May 10 18:36:25 buildroot kern.warn kernel: ---[ end trace c200a14cd22c0ee1 ]---
May 10 18:36:25 buildroot kern.warn kernel: WARNING: CPU: 0 PID: 9 at kernel/workqueue.c:1442 __queue_work+0x1d4/0x2aa
May 10 18:36:25 buildroot kern.warn kernel: CPU: 0 PID: 9 Comm: ksoftirqd/0 Tainted: G        W         4.19.41-banana #1
May 10 18:36:25 buildroot kern.warn kernel: EIP: __queue_work+0x1d4/0x2aa
May 10 18:36:25 buildroot kern.warn kernel: Code: c1 e8 4e 09 1d 00 c7 05 30 d3 5f c1 00 00 00 00 c7 45 e4 00 00 00 00 e9 76 fe ff ff 8b 3d 80 3c 60 c1 89 78 10 e9 32 ff ff ff <0f> 0b b8 01 00 00 00 e8 1c 82 00 00 a1 a8 9e 5f c1 85 c0 0f 85 3b
May 10 18:36:25 buildroot kern.warn kernel: EAX: 00000000 EBX: cfb62044 ECX: 00000003 EDX: cfb62048
May 10 18:36:25 buildroot kern.warn kernel: ESI: cfdfbb00 EDI: cfab29c0 EBP: cf847d38 ESP: cf847d1c
May 10 18:36:25 buildroot kern.warn kernel: DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010086
May 10 18:36:25 buildroot kern.warn kernel: CR0: 80050033 CR2: a7bb5058 CR3: 0eebe000 CR4: 00000090
May 10 18:36:25 buildroot kern.warn kernel: Call Trace:
May 10 18:36:25 buildroot kern.warn kernel:  queue_work_on+0x19/0x1d
May 10 18:36:25 buildroot kern.warn kernel:  wg_packet_receive+0x4a5/0x557
May 10 18:36:25 buildroot kern.warn kernel:  ? send6+0x1fc/0x1fc
May 10 18:36:25 buildroot kern.warn kernel:  wg_receive+0x16/0x25
May 10 18:36:25 buildroot kern.warn kernel:  udp_queue_rcv_skb+0x254/0x324
May 10 18:36:25 buildroot kern.warn kernel:  udp_unicast_rcv_skb.isra.12+0x68/0x7a
May 10 18:36:25 buildroot kern.warn kernel:  __udp4_lib_rcv+0x413/0x88b
May 10 18:36:25 buildroot kern.warn kernel:  udp_rcv+0x12/0x14
May 10 18:36:25 buildroot kern.warn kernel:  ip_local_deliver_finish+0x82/0x1ff
May 10 18:36:25 buildroot kern.warn kernel:  ip_local_deliver+0xc0/0xcb
May 10 18:36:25 buildroot kern.warn kernel:  ? ip_sublist_rcv_finish+0x41/0x41
May 10 18:36:25 buildroot kern.warn kernel:  ip_rcv_finish+0x24/0x2a
May 10 18:36:25 buildroot kern.warn kernel:  ip_rcv+0xa1/0xaf
May 10 18:36:25 buildroot kern.warn kernel:  ? ip_rcv_finish_core.isra.0+0x331/0x331
May 10 18:36:25 buildroot kern.warn kernel:  __netif_receive_skb_one_core+0x3f/0x59
May 10 18:36:25 buildroot kern.warn kernel:  __netif_receive_skb+0x16/0x4f
May 10 18:36:25 buildroot kern.warn kernel:  netif_receive_skb_internal+0x26/0xaf
May 10 18:36:25 buildroot kern.warn kernel:  netif_receive_skb+0x8/0xa
May 10 18:36:25 buildroot kern.warn kernel:  rhine_napipoll+0x5ef/0x9c6
May 10 18:36:25 buildroot kern.warn kernel:  net_rx_action+0x197/0x24d
May 10 18:36:25 buildroot kern.warn kernel:  __do_softirq+0xd6/0x1ae
May 10 18:36:25 buildroot kern.warn kernel:  run_ksoftirqd+0x21/0x24
May 10 18:36:25 buildroot kern.warn kernel:  smpboot_thread_fn+0x137/0x1ea
May 10 18:36:25 buildroot kern.warn kernel:  kthread+0xbe/0xea
May 10 18:36:25 buildroot kern.warn kernel:  ? sort_range+0x18/0x18
May 10 18:36:25 buildroot kern.warn kernel:  ? __kthread_create_on_node+0x13e/0x13e
May 10 18:36:25 buildroot kern.warn kernel:  ret_from_fork+0x19/0x24
May 10 18:36:25 buildroot kern.warn kernel: ---[ end trace c200a14cd22c0ee2 ]---


[-- Attachment #2: Type: text/plain, Size: 148 bytes --]

_______________________________________________
WireGuard mailing list
WireGuard@lists.zx2c4.com
https://lists.zx2c4.com/mailman/listinfo/wireguard

* Re: Wireguard Bug?
  2019-05-12 13:44 Wireguard Bug? Ryan Whelan
@ 2019-05-12 15:41 ` Jason A. Donenfeld
  2019-05-12 23:02 ` Lonnie Abelbeck
  2019-06-14 11:56 ` Jason A. Donenfeld
  2 siblings, 0 replies; 6+ messages in thread
From: Jason A. Donenfeld @ 2019-05-12 15:41 UTC (permalink / raw)
  To: Ryan Whelan; +Cc: WireGuard mailing list

Hey Ryan,

Can you confirm the following?

- You can easily reproduce this in a matter of seconds.
- The stacktrace you sent is from the 32bit machine.
- The 64bit KVM machine, after triggering the stacktrace on the 32bit
machines, starts using tons of CPU.

Could you also send the .config of the 32bit machine and perhaps any
additional interesting information about the 64bit KVM machine?

Jason

* Re: Wireguard Bug?
  2019-05-12 13:44 Wireguard Bug? Ryan Whelan
  2019-05-12 15:41 ` Jason A. Donenfeld
@ 2019-05-12 23:02 ` Lonnie Abelbeck
  2019-06-14 11:56 ` Jason A. Donenfeld
  2 siblings, 0 replies; 6+ messages in thread
From: Lonnie Abelbeck @ 2019-05-12 23:02 UTC (permalink / raw)
  To: Ryan Whelan; +Cc: WireGuard mailing list



> On May 12, 2019, at 8:44 AM, Ryan Whelan <rcwhelan@gmail.com> wrote:
> 
> I am building a system which coordinates the meshing of wireguard devices.  Currently, all the devices are running on an embedded platform (AMD Geode LX500) and works as expected.  However, when introducing a 64bit KVM host for testing, all the 32bit hosts running on the Geode platform, report the following warning and drop offline temporarily.  The CPU usage on KVM host spikes and all the CPU time is spent on the kernel threads servicing the WG interfaces.
> 
> I'm using kernel 4.19.41 and have seen the issue with both the last WG snapshot as well building from master.

Hi Ryan,

Did you mean "AMD Geode LX800 @ 500 MHz"? (e.g. ALIX/net5501)

If so, I have a couple of those using kernel 3.16.64 (i586) mixed with 3.16.64 (x86_64), and WG works well between them, with no issues like the ones you reported.  iperf3 over WG runs at 23.8 Mbits/sec.

Lonnie
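For anyone wanting to repeat that measurement, the test was presumably along these lines (a sketch with hypothetical in-tunnel addresses; run each command on the respective peer):

```shell
# Throughput test across the WireGuard tunnel itself, i.e. targeting the
# peer's *tunnel* address, not its underlay address.
# On one peer, start the server:
iperf3 -s
# On the other peer (10.0.0.1 is assumed to be the first peer's WG address):
iperf3 -c 10.0.0.1 -t 10
```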


* Re: Wireguard Bug?
  2019-05-12 13:44 Wireguard Bug? Ryan Whelan
  2019-05-12 15:41 ` Jason A. Donenfeld
  2019-05-12 23:02 ` Lonnie Abelbeck
@ 2019-06-14 11:56 ` Jason A. Donenfeld
  2 siblings, 0 replies; 6+ messages in thread
From: Jason A. Donenfeld @ 2019-06-14 11:56 UTC (permalink / raw)
  To: Ryan Whelan; +Cc: WireGuard mailing list

[-- Attachment #1: Type: text/plain, Size: 141 bytes --]

Hey Ryan,

If you still have a reliable test rig for the bug, would you try
running your kernel compiled with the attached patch?

Jason

[-- Attachment #2: willitwork.diff --]
[-- Type: text/x-patch, Size: 455 bytes --]

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index d8a2084b88db..3860c09d6ac1 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1454,6 +1454,9 @@ bool queue_work_on(int cpu, struct workqueue_struct *wq,
 
 	local_irq_save(flags);
 
+	/* Pair with the smp_wmb() in set_work_pool_and_clear_pending. */
+	smp_rmb();
+
 	if (!test_and_set_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work))) {
 		__queue_work(cpu, wq, work);
 		ret = true;

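For readers unfamiliar with the barrier pairing the patch relies on, here is a userspace sketch (not kernel code; names like `run_demo` and the C11-atomics analogues of `smp_wmb()`/`smp_rmb()` are illustrative assumptions):

```c
#include <pthread.h>
#include <stdatomic.h>

/* The "writer" plays the role of set_work_pool_and_clear_pending(): it
 * publishes data, issues a release fence (the smp_wmb() analogue), then
 * clears the pending flag.  The "reader" plays the role of queue_work_on():
 * it observes the cleared flag, issues an acquire fence (the smp_rmb()
 * analogue), then reads the data, so it cannot see stale values from
 * before the flag was cleared. */

static int payload;          /* data published by the writer            */
static atomic_int pending;   /* analogue of WORK_STRUCT_PENDING_BIT     */
static int observed;         /* what the reader ends up seeing          */

static void *writer(void *arg)
{
    (void)arg;
    payload = 42;                                 /* publish the data   */
    atomic_thread_fence(memory_order_release);    /* ~ smp_wmb()        */
    atomic_store_explicit(&pending, 0, memory_order_relaxed);
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    while (atomic_load_explicit(&pending, memory_order_relaxed))
        ;                                         /* spin until cleared */
    atomic_thread_fence(memory_order_acquire);    /* ~ smp_rmb()        */
    observed = payload;                           /* must now see 42    */
    return NULL;
}

/* Run one writer/reader round; returns the value the reader observed. */
static int run_demo(void)
{
    pthread_t w, r;

    payload = 0;
    observed = -1;
    atomic_store(&pending, 1);
    pthread_create(&r, NULL, reader, NULL);
    pthread_create(&w, NULL, writer, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return observed;
}
```

Without the acquire fence, the reader would be allowed (on weakly ordered hardware, or after compiler reordering) to read `payload` before it re-checked the flag, which is exactly the staleness the added `smp_rmb()` guards against.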

* Re: WireGuard Bug?
  2019-05-17  6:34 WireGuard Bug? . .
@ 2019-05-18 17:03 ` Lucian Cristian
  0 siblings, 0 replies; 6+ messages in thread
From: Lucian Cristian @ 2019-05-18 17:03 UTC (permalink / raw)
  To: wireguard

[-- Attachment #1.1: Type: text/plain, Size: 1390 bytes --]

On 17.05.2019 09:34, . . wrote:
> Hi,
>
> I am using WireGuard on a Raspberry Pi 3 B+ with Raspbian Stretch and 
> 4.14.98-v7+ kernel.
> Now this works great for me and is very efficient, however I tried to 
> add a lot of routes on one of the "spoke/client" nodes, 517 routes to 
> be exact.
> If I do this, WireGuard stops working, tcpdump shows the traffic being 
> sent out the wg0 interface but never actually being processed by 
> wireguard, meaning the encapsulated packet to the "hub" never leaves.
>
> So I tried doing this with wg instead of wg-quick and this works fine 
> until I actually add a lot of the routes to the routing table, the 
> sweet spot seems to be 384. If 383 routes are present in the routing 
> table, wg will still work but if I add one more, all previously 
> working ones dont anymore, if I reduce it again to <=383 then it 
> starts working again. wg itself doesnt mind having all those routes 
> (wg show) but I wonder if it tries to read the routing table as well 
> for some reason?
>
> Appreciate any insight/help on this, thanks.
> Chris
>
>
>
> _______________________________________________
> WireGuard mailing list
> WireGuard@lists.zx2c4.com
> https://lists.zx2c4.com/mailman/listinfo/wireguard

Did you try using dynamic routing, or can it not be applied?  I only have
262 routes available, so I can't confirm whether dynamic routing will work.


Regards




* WireGuard Bug?
@ 2019-05-17  6:34 . .
  2019-05-18 17:03 ` Lucian Cristian
  0 siblings, 1 reply; 6+ messages in thread
From: . . @ 2019-05-17  6:34 UTC (permalink / raw)
  To: wireguard

[-- Attachment #1.1: Type: text/plain, Size: 1033 bytes --]

Hi,

I am using WireGuard on a Raspberry Pi 3 B+ with Raspbian Stretch and 4.14.98-v7+ kernel.
This works great for me and is very efficient; however, I tried to add a lot of routes on one of the "spoke/client" nodes, 517 routes to be exact.
When I do this, WireGuard stops working: tcpdump shows the traffic being sent out the wg0 interface but never actually being processed by WireGuard, meaning the encapsulated packet to the "hub" never leaves.

So I tried doing this with wg instead of wg-quick, and it works fine until I actually add a lot of routes to the routing table; the threshold seems to be 384. With 383 routes present in the routing table, wg still works, but if I add one more, all previously working routes stop working; if I reduce the count to <=383, everything starts working again. wg itself doesn't mind having all those routes (wg show), but I wonder if it tries to read the routing table as well for some reason?
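The route flood described above can be sketched like this (hypothetical prefixes; requires root, an existing wg0 interface, and unused 10.0.0.0/8 space -- the 384 figure is an observation on this setup, not a documented limit):

```shell
#!/bin/sh
# Install ~400 distinct /24 routes through wg0 and count what went in.
for i in $(seq 1 400); do
    ip route add "10.$((i / 256)).$((i % 256)).0/24" dev wg0
done
ip route show dev wg0 | wc -l
```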

Appreciate any insight/help on this, thanks.
Chris



