wireguard.lists.zx2c4.com archive mirror
* Kernel panic on Linux 5.2.7 using Wireguard 0.0.20190702 on Fedora 30
@ 2019-08-15  9:41 Damon
  2019-08-25 15:44 ` Jason A. Donenfeld
  0 siblings, 1 reply; 2+ messages in thread
From: Damon @ 2019-08-15  9:41 UTC (permalink / raw)
  To: wireguard

Hey guys,

I'm using wireguard-dkms 1:0.0.20190702-1.fc30 on Fedora 30. If I send data through the tunnel as part of a long-running task, I run into a kernel panic. The behavior is reproducible, though the panic only occurs after some time. The peer is also running Fedora 30 with the same WireGuard version. Both systems are up to date.

I've attached the kernel panic log. I'm happy to provide any additional information if necessary.

Best,
Damon

[ 5094.906244] BUG: unable to handle page fault for address: ffff8f584996c000
[ 5094.913136] #PF: supervisor write access in kernel mode
[ 5094.918373] #PF: error_code(0x0002) - not-present page
[ 5094.923515] PGD ce001067 P4D ce001067 PUD ce005067 PMD 10996b063 PTE 0
[ 5094.930061] Oops: 0002 [#1] SMP NOPTI
[ 5094.933739] CPU: 1 PID: 10041 Comm: kworker/1:1 Tainted: G           OE     5.2.7-200.fc30.x86_64 #1
[ 5094.942882] Hardware name: PC Engines apu2/apu2, BIOS v4.9.0.4 04/03/2019
[ 5094.949698] Workqueue: wg-crypt-wg-i1 wg_packet_tx_worker [wireguard]
[ 5094.956157] RIP: 0010:__memset+0x24/0x30
[ 5094.960091] Code: 90 90 90 90 90 90 0f 1f 44 00 00 49 89 f9 48 89 d1 83 e2 07 48 c1 e9 03 40 0f b6 f6 48 b8 01 01 01 01 01 01 01 01 48 0f af c6 <f3> 48 ab 89 d1 f3 aa 4c 89 c8 c3 90 49 89 f9 40 88 f0 48 89 d1 f3
[ 5094.978841] RSP: 0018:ffffa42d82157a38 EFLAGS: 00010216
[ 5094.984076] RAX: 0000000000000000 RBX: 00000000fffffffe RCX: 000000001ffdf000
[ 5094.991219] RDX: 0000000000000006 RSI: 0000000000000000 RDI: ffff8f584996bffa
[ 5094.998359] RBP: ffff8f586408f000 R08: 0000000000000000 R09: ffff8f5849864002
[ 5095.005501] R10: ffff8f5849860000 R11: 000000000000007c R12: ffff8f585aa563c0
[ 5095.012641] R13: 000000000000000b R14: 0000000000535049 R15: ffff8f5846f33100
[ 5095.019776] FS:  0000000000000000(0000) GS:ffff8f586aa80000(0000) knlGS:0000000000000000
[ 5095.027869] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5095.033623] CR2: ffff8f584996c000 CR3: 0000000118614000 CR4: 00000000000406e0
[ 5095.040764] Call Trace:
[ 5095.043242]  cdc_ncm_fill_tx_frame+0x597/0x700 [cdc_ncm]
[ 5095.048572]  cdc_mbim_tx_fixup+0x181/0x230 [cdc_mbim]
[ 5095.053642]  usbnet_start_xmit+0x62/0x700 [usbnet]
[ 5095.058450]  dev_hard_start_xmit+0x8d/0x1e0
[ 5095.062658]  sch_direct_xmit+0x140/0x300
[ 5095.066597]  __dev_queue_xmit+0x571/0x980
[ 5095.070621]  ip_finish_output2+0x2c8/0x580
[ 5095.074758]  ? nf_confirm+0xcc/0xf0 [nf_conntrack]
[ 5095.079559]  ? ip_finish_output+0x139/0x270
[ 5095.083755]  ip_output+0x71/0xf0
[ 5095.086998]  ? ip_finish_output2+0x580/0x580
[ 5095.091279]  iptunnel_xmit+0x163/0x210
[ 5095.095056]  send4+0x128/0x3a0 [wireguard]
[ 5095.099182]  wg_socket_send_skb_to_peer+0x98/0xb0 [wireguard]
[ 5095.104957]  wg_packet_tx_worker+0xa3/0x210 [wireguard]
[ 5095.110197]  process_one_work+0x19d/0x380
[ 5095.114219]  worker_thread+0x50/0x3b0
[ 5095.117895]  kthread+0xfb/0x130
[ 5095.121048]  ? process_one_work+0x380/0x380
[ 5095.125240]  ? kthread_park+0x80/0x80
[ 5095.128919]  ret_from_fork+0x22/0x40
[ 5095.132506] Modules linked in: nf_conntrack_netlink xt_addrtype br_netfilter ccm 8021q garp mrp xt_MASQUERADE nf_conntrack_netbios_ns nf_conntrack_broadcast xt_CT bridge stp llc wireguard(OE) ip6_udp_tunnel udp_tunnel ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 ipt_REJECT
[ 5095.132661]  crc32c_intel mmc_core
[ 5095.223417] CR2: ffff8f584996c000
[ 5095.226751] ---[ end trace cbcf3c94fa645e9e ]---
[ 5095.231383] RIP: 0010:__memset+0x24/0x30
[ 5095.235313] Code: 90 90 90 90 90 90 0f 1f 44 00 00 49 89 f9 48 89 d1 83 e2 07 48 c1 e9 03 40 0f b6 f6 48 b8 01 01 01 01 01 01 01 01 48 0f af c6 <f3> 48 ab 89 d1 f3 aa 4c 89 c8 c3 90 49 89 f9 40 88 f0 48 89 d1 f3
[ 5095.254092] RSP: 0018:ffffa42d82157a38 EFLAGS: 00010216
[ 5095.259327] RAX: 0000000000000000 RBX: 00000000fffffffe RCX: 000000001ffdf000
[ 5095.266466] RDX: 0000000000000006 RSI: 0000000000000000 RDI: ffff8f584996bffa
[ 5095.273608] RBP: ffff8f586408f000 R08: 0000000000000000 R09: ffff8f5849864002
[ 5095.280743] R10: ffff8f5849860000 R11: 000000000000007c R12: ffff8f585aa563c0
[ 5095.287882] R13: 000000000000000b R14: 0000000000535049 R15: ffff8f5846f33100
[ 5095.295019] FS:  0000000000000000(0000) GS:ffff8f586aa80000(0000) knlGS:0000000000000000
[ 5095.303111] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5095.308863] CR2: ffff8f584996c000 CR3: 0000000118614000 CR4: 00000000000406e0
[ 5095.315999] Kernel panic - not syncing: Fatal exception in interrupt
[ 5095.322397] Kernel Offset: 0x2f000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
[ 5095.333183] ---[ end Kernel panic - not syncing: Fatal exception in interrupt ]---
_______________________________________________
WireGuard mailing list
WireGuard@lists.zx2c4.com
https://lists.zx2c4.com/mailman/listinfo/wireguard


* Re: Kernel panic on Linux 5.2.7 using Wireguard 0.0.20190702 on Fedora 30
  2019-08-15  9:41 Kernel panic on Linux 5.2.7 using Wireguard 0.0.20190702 on Fedora 30 Damon
@ 2019-08-25 15:44 ` Jason A. Donenfeld
  0 siblings, 0 replies; 2+ messages in thread
From: Jason A. Donenfeld @ 2019-08-25 15:44 UTC (permalink / raw)
  To: Damon; +Cc: WireGuard mailing list

Is this problem still present in 5.2.10?


