linux-kernel.vger.kernel.org archive mirror
* [RT] Question about i40e threaded irq
@ 2021-05-11  6:09 Juri Lelli
  2021-05-11  6:24 ` Stefan Assmann
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Juri Lelli @ 2021-05-11  6:09 UTC (permalink / raw)
  To: Thomas Gleixner, Sebastian Andrzej Siewior; +Cc: linux-rt-users, LKML, sassmann

Hi,

The following has been reported on RT.

[ 2007.106483] list_add corruption. next->prev should be prev (ffff8d86a0aadd00), but was ffff8d86a0aadec8. (next=ffff8d86a0aadd00).
[ 2007.118155] ------------[ cut here ]------------
[ 2007.118156] kernel BUG at lib/list_debug.c:25!
[ 2007.118160] invalid opcode: 0000 [#1] PREEMPT_RT SMP NOPTI
[ 2007.118162] CPU: 36 PID: 4651 Comm: irq/429-i40e-en Kdump: loaded Tainted: G          I  #1
[ 2007.118163] Hardware name: Dell Inc. PowerEdge R740xd/04FC42, BIOS 2.6.4 04/09/2020
[ 2007.118168] RIP: 0010:__list_add_valid.cold.0+0x12/0x28
[ 2007.118169] Code: 85 5d 00 00 00 48 8b 50 08 48 39 f2 0f 85 42 00 00 00 b8 01 00 00 00 c3 48 89 d1 48 c7 c7 20 1e 6e ad 48 89 c2 e8 e0 11 cd ff <0f> 0b 48 89 c1 4c 89 c6 48 c7 c7 78 1e 6e ad e8 cc 11 cd ff 0f 0b
[ 2007.118170] RSP: 0018:ffffa598d9b0be68 EFLAGS: 00010246
[ 2007.118171] RAX: 0000000000000075 RBX: ffff8d86a0aadd00 RCX: 0000000000000001
[ 2007.118171] RDX: 0000000000000000 RSI: ffffffffad6cf723 RDI: 00000000ffffffff
[ 2007.118172] RBP: ffff8d8694112010 R08: 0000000000000000 R09: 0000000000000a36
[ 2007.118173] R10: 0000000000000000 R11: 0000000000000001 R12: ffff8d86a0aadd00
[ 2007.118173] R13: ffffffffac71f240 R14: ffff8d869d3758a0 R15: ffff8d993af88000
[ 2007.118174] FS:  0000000000000000(0000) GS:ffff8d86a0a80000(0000) knlGS:0000000000000000
[ 2007.118174] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 2007.118175] CR2: 00007fa2d29211a0 CR3: 00000011d8c0e002 CR4: 00000000007606e0
[ 2007.118175] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 2007.118176] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 2007.118176] PKRU: 55555554
[ 2007.118176] Call Trace:
[ 2007.118180]  __napi_schedule_irqoff+0x34/0x60
[ 2007.118191]  i40e_msix_clean_rings+0x3f/0x50 [i40e]
[ 2007.118195]  irq_forced_thread_fn+0x30/0x80
[ 2007.118196]  irq_thread+0xdd/0x180
[ 2007.118198]  ? wake_threads_waitq+0x30/0x30
[ 2007.118198]  ? irq_thread_check_affinity+0x20/0x20
[ 2007.118202]  kthread+0x112/0x130
[ 2007.118203]  ? kthread_flush_work_fn+0x10/0x10
[ 2007.118207]  ret_from_fork+0x1f/0x40

The following tracing bits were then collected, which seem
relevant.

irq/532--4667   36....2 13343.788389: softirq_raise:        vec=3 [action=NET_RX]
irq/532--4667   36....2 13343.788391: kernel_stack:         <stack trace>
=> trace_event_raw_event_softirq (ffffffff83eb6f77)
=> __raise_softirq_irqoff (ffffffff83eb7acd)
=> i40e_msix_clean_rings (ffffffffc03fe6ef)
=> irq_forced_thread_fn (ffffffff83f1f2a0)
=> irq_thread (ffffffff83f1f58d)
=> kthread (ffffffff83ed5f42)
=> ret_from_fork (ffffffff8480023f)

irq/529--4664   36d.h.2 13343.788402: softirq_raise:        vec=3 [action=NET_RX]
irq/529--4664   36d.h.2 13343.788404: kernel_stack:         <stack trace>
=> trace_event_raw_event_softirq (ffffffff83eb6f77)
=> __raise_softirq_irqoff (ffffffff83eb7acd)
=> rps_trigger_softirq (ffffffff8452fa49)
=> flush_smp_call_function_queue (ffffffff83f591c3)
=> smp_call_function_single_interrupt (ffffffff8480294b)
=> call_function_single_interrupt (ffffffff84801c3f)
=> __list_add_valid (ffffffff8424b050)
=> __napi_schedule_irqoff (ffffffff8452fa94)
=> i40e_msix_clean_rings (ffffffffc03fe6ef)
=> irq_forced_thread_fn (ffffffff83f1f2a0)
=> irq_thread (ffffffff83f1f58d)
=> kthread (ffffffff83ed5f42)
=> ret_from_fork (ffffffff8480023f)

My understanding is that rps_trigger_softirq() sneaked in while the
i40e_msix_clean_rings() threaded irq was running and, since the latter
uses napi_schedule_irqoff(), the softnet_data poll_list eventually got
corrupted.
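
For reference, the difference between the two paths is roughly the
following (an abridged, from-memory sketch of net/core/dev.c, not a
verbatim quote):

```c
/* __napi_schedule() masks interrupts around the per-CPU poll_list
 * add; __napi_schedule_irqoff() assumes the caller already runs with
 * interrupts disabled and skips that protection. On RT the forced
 * irq thread runs with interrupts enabled, so the "irqoff" variant's
 * assumption no longer holds and the list add can race with a real
 * interrupt touching the same per-CPU list. */
void __napi_schedule(struct napi_struct *n)
{
	unsigned long flags;

	local_irq_save(flags);
	____napi_schedule(this_cpu_ptr(&softnet_data), n);
	local_irq_restore(flags);
}

void __napi_schedule_irqoff(struct napi_struct *n)
{
	____napi_schedule(this_cpu_ptr(&softnet_data), n);
}
```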

Now, doing s/napi_schedule_irqoff/napi_schedule/ in the i40e driver
seems to cure the problem. I'm not sure there isn't a more general
solution, though. Is it expected that napi_schedule_irqoff() users on
RT know what they are doing, or do we want/need to fix this in a more
general way?
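
A more general fix might look something like the following (untested
sketch, just to illustrate the idea of not trusting the caller's
"irqs off" assumption when the handler has been force-threaded):

```c
/* Sketch only: on PREEMPT_RT, fall back to the irq-safe variant so
 * every napi_schedule_irqoff() caller is implicitly safe, instead of
 * requiring each driver to know it may run in a preemptible thread. */
void __napi_schedule_irqoff(struct napi_struct *n)
{
	if (IS_ENABLED(CONFIG_PREEMPT_RT))
		__napi_schedule(n);
	else
		____napi_schedule(this_cpu_ptr(&softnet_data), n);
}
```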

Thanks!

Juri




Thread overview: 15+ messages
2021-05-11  6:09 [RT] Question about i40e threaded irq Juri Lelli
2021-05-11  6:24 ` Stefan Assmann
2021-05-11  7:46 ` Thomas Gleixner
2021-05-12 21:43 ` [PATCH net-next] net: Treat __napi_schedule_irqoff() as __napi_schedule() on PREEMPT_RT Sebastian Andrzej Siewior
2021-05-12 22:28   ` Thomas Gleixner
2021-05-13  0:50     ` Steven Rostedt
2021-05-13 16:43       ` Steven Rostedt
2021-05-14 12:11         ` Thomas Gleixner
2021-05-14 18:56     ` Jakub Kicinski
2021-05-14 19:44       ` Alison Chaiken
2021-05-14 21:53         ` Thomas Gleixner
2021-05-14 20:16       ` Thomas Gleixner
2021-05-14 20:38         ` Jakub Kicinski
2021-05-13  5:12   ` Juri Lelli
2021-05-13 20:20   ` patchwork-bot+netdevbpf
