* [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
@ 2020-10-10 4:31 Mike Galbraith
2020-10-12 16:45 ` Sebastian Andrzej Siewior
2020-12-09 10:05 ` Peter Zijlstra
0 siblings, 2 replies; 9+ messages in thread
From: Mike Galbraith @ 2020-10-10 4:31 UTC (permalink / raw)
To: Sebastian Andrzej Siewior; +Cc: tglx, linux-rt-users, lkml
[ 47.844511] ======================================================
[ 47.844511] WARNING: possible circular locking dependency detected
[ 47.844512] 5.9.0.gc85fb28-rt14-rt #1 Tainted: G E
[ 47.844513] ------------------------------------------------------
[ 47.844514] perl/2751 is trying to acquire lock:
[ 47.844515] ffff92cadec5a410 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip+0x127/0x2c0
[ 47.844521]
but task is already holding lock:
[ 47.844522] ffffffffa8871468 (&h->listening_hash[i].lock){+.+.}-{0:0}, at: listening_get_next.isra.41+0xd7/0x130
[ 47.844528]
which lock already depends on the new lock.
[ 47.844528]
the existing dependency chain (in reverse order) is:
[ 47.844529]
-> #1 (&h->listening_hash[i].lock){+.+.}-{0:0}:
[ 47.844532] rt_spin_lock+0x2b/0xc0
[ 47.844536] __inet_hash+0x68/0x320
[ 47.844539] inet_hash+0x31/0x60
[ 47.844541] inet_csk_listen_start+0xaf/0xe0
[ 47.844543] inet_listen+0x86/0x150
[ 47.844546] __sys_listen+0x58/0x80
[ 47.844548] __x64_sys_listen+0x12/0x20
[ 47.844549] do_syscall_64+0x33/0x40
[ 47.844552] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 47.844555]
-> #0 ((softirq_ctrl.lock).lock){+.+.}-{2:2}:
[ 47.844557] __lock_acquire+0x1343/0x1890
[ 47.844560] lock_acquire+0x92/0x410
[ 47.844562] rt_spin_lock+0x2b/0xc0
[ 47.844564] __local_bh_disable_ip+0x127/0x2c0
[ 47.844566] sock_i_ino+0x22/0x60
[ 47.844569] tcp4_seq_show+0x14f/0x420
[ 47.844571] seq_read+0x27c/0x420
[ 47.844574] proc_reg_read+0x5c/0x80
[ 47.844576] vfs_read+0xd1/0x1d0
[ 47.844580] ksys_read+0x87/0xc0
[ 47.844581] do_syscall_64+0x33/0x40
[ 47.844583] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 47.844585]
other info that might help us debug this:
[ 47.844585] Possible unsafe locking scenario:
[ 47.844586]        CPU0                    CPU1
[ 47.844586]        ----                    ----
[ 47.844587]   lock(&h->listening_hash[i].lock);
[ 47.844588]                                lock((softirq_ctrl.lock).lock);
[ 47.844588]                                lock(&h->listening_hash[i].lock);
[ 47.844589]   lock((softirq_ctrl.lock).lock);
[ 47.844590]
*** DEADLOCK ***
[ 47.844590] 3 locks held by perl/2751:
[ 47.844591] #0: ffff92ca6525a4e0 (&p->lock){+.+.}-{0:0}, at: seq_read+0x37/0x420
[ 47.844594] #1: ffffffffa8871468 (&h->listening_hash[i].lock){+.+.}-{0:0}, at: listening_get_next.isra.41+0xd7/0x130
[ 47.844597] #2: ffffffffa74b90e0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xc0
[ 47.844600]
stack backtrace:
[ 47.844601] CPU: 1 PID: 2751 Comm: perl Kdump: loaded Tainted: G E 5.9.0.gc85fb28-rt14-rt #1
[ 47.844603] Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.20C 09/23/2013
[ 47.844604] Call Trace:
[ 47.844606] dump_stack+0x77/0x9b
[ 47.844611] check_noncircular+0x148/0x160
[ 47.844616] ? __lock_acquire+0x1343/0x1890
[ 47.844617] __lock_acquire+0x1343/0x1890
[ 47.844621] lock_acquire+0x92/0x410
[ 47.844623] ? __local_bh_disable_ip+0x127/0x2c0
[ 47.844626] ? sock_i_ino+0x5/0x60
[ 47.844628] rt_spin_lock+0x2b/0xc0
[ 47.844630] ? __local_bh_disable_ip+0x127/0x2c0
[ 47.844631] __local_bh_disable_ip+0x127/0x2c0
[ 47.844634] sock_i_ino+0x22/0x60
[ 47.844635] tcp4_seq_show+0x14f/0x420
[ 47.844640] seq_read+0x27c/0x420
[ 47.844643] proc_reg_read+0x5c/0x80
[ 47.844645] vfs_read+0xd1/0x1d0
[ 47.844648] ksys_read+0x87/0xc0
[ 47.844649] ? lockdep_hardirqs_on+0x78/0x100
[ 47.844652] do_syscall_64+0x33/0x40
[ 47.844654] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 47.844656] RIP: 0033:0x7fb3f3c23e51
[ 47.844658] Code: 7d 81 20 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb ba 0f 1f 80 00 00 00 00 8b 05 1a c3 20 00 48 63 ff 85 c0 75 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 55 53 48 89 d5 48 89
[ 47.844660] RSP: 002b:00007ffd7604f108 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 47.844661] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb3f3c23e51
[ 47.844662] RDX: 0000000000002000 RSI: 000055dbff4da600 RDI: 0000000000000003
[ 47.844662] RBP: 0000000000002000 R08: 000055dbff4d9290 R09: 000055dbff4da600
[ 47.844663] R10: ffffffffffffffb0 R11: 0000000000000246 R12: 000055dbff4da600
[ 47.844664] R13: 000055dbff4ae260 R14: 000055dbff4d92c0 R15: 0000000000000003
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-10-10 4:31 [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat Mike Galbraith
@ 2020-10-12 16:45 ` Sebastian Andrzej Siewior
2020-10-12 18:34 ` Mike Galbraith
2020-12-09 10:05 ` Peter Zijlstra
1 sibling, 1 reply; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-10-12 16:45 UTC (permalink / raw)
To: Mike Galbraith; +Cc: tglx, linux-rt-users, lkml
On 2020-10-10 06:31:57 [+0200], Mike Galbraith wrote:
so this then. Do you have more of these?
----------->8--------------------
Subject: [PATCH] tcp: Remove superfluous BH-disable around listening_hash
Commit
9652dc2eb9e40 ("tcp: relax listening_hash operations")
removed the need to disable bottom half while acquiring
listening_hash.lock. There are still two callers left which disable
bottom half before the lock is acquired.
Drop local_bh_disable() around __inet_hash(), which acquires
listening_hash->lock, and instead invoke inet_ehash_nolisten() with BH
disabled. inet_unhash() now conditionally disables BH, depending on which
lock it acquires.
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
net/ipv4/inet_hashtables.c | 19 ++++++++++++-------
net/ipv6/inet6_hashtables.c | 5 +----
2 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 239e54474b653..fcb105cbb5465 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -585,7 +585,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
int err = 0;
if (sk->sk_state != TCP_LISTEN) {
+ local_bh_disable();
inet_ehash_nolisten(sk, osk);
+ local_bh_enable();
return 0;
}
WARN_ON(!sk_unhashed(sk));
@@ -617,11 +619,8 @@ int inet_hash(struct sock *sk)
{
int err = 0;
- if (sk->sk_state != TCP_CLOSE) {
- local_bh_disable();
+ if (sk->sk_state != TCP_CLOSE)
err = __inet_hash(sk, NULL);
- local_bh_enable();
- }
return err;
}
@@ -632,17 +631,20 @@ void inet_unhash(struct sock *sk)
struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
struct inet_listen_hashbucket *ilb = NULL;
spinlock_t *lock;
+ bool state_listen;
if (sk_unhashed(sk))
return;
if (sk->sk_state == TCP_LISTEN) {
+ state_listen = true;
ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
- lock = &ilb->lock;
+ spin_lock(&ilb->lock);
} else {
+ state_listen = false;
lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+ spin_lock_bh(lock);
}
- spin_lock_bh(lock);
if (sk_unhashed(sk))
goto unlock;
@@ -655,7 +657,10 @@ void inet_unhash(struct sock *sk)
__sk_nulls_del_node_init_rcu(sk);
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
unlock:
- spin_unlock_bh(lock);
+ if (state_listen)
+ spin_unlock(&ilb->lock);
+ else
+ spin_unlock_bh(lock);
}
EXPORT_SYMBOL_GPL(inet_unhash);
diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
index 2d3add9e61162..50fd17cbf3ec7 100644
--- a/net/ipv6/inet6_hashtables.c
+++ b/net/ipv6/inet6_hashtables.c
@@ -335,11 +335,8 @@ int inet6_hash(struct sock *sk)
{
int err = 0;
- if (sk->sk_state != TCP_CLOSE) {
- local_bh_disable();
+ if (sk->sk_state != TCP_CLOSE)
err = __inet_hash(sk, NULL);
- local_bh_enable();
- }
return err;
}
--
2.28.0
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-10-12 16:45 ` Sebastian Andrzej Siewior
@ 2020-10-12 18:34 ` Mike Galbraith
2020-10-13 3:00 ` Mike Galbraith
0 siblings, 1 reply; 9+ messages in thread
From: Mike Galbraith @ 2020-10-12 18:34 UTC (permalink / raw)
To: Sebastian Andrzej Siewior; +Cc: tglx, linux-rt-users, lkml
On Mon, 2020-10-12 at 18:45 +0200, Sebastian Andrzej Siewior wrote:
> On 2020-10-10 06:31:57 [+0200], Mike Galbraith wrote:
>
> so this then. Do you have more of these?
Nope, nothing was hiding behind it, all better now.
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-10-12 18:34 ` Mike Galbraith
@ 2020-10-13 3:00 ` Mike Galbraith
2020-10-14 10:22 ` Sebastian Andrzej Siewior
0 siblings, 1 reply; 9+ messages in thread
From: Mike Galbraith @ 2020-10-13 3:00 UTC (permalink / raw)
To: Sebastian Andrzej Siewior; +Cc: tglx, linux-rt-users, lkml
On Mon, 2020-10-12 at 20:34 +0200, Mike Galbraith wrote:
> On Mon, 2020-10-12 at 18:45 +0200, Sebastian Andrzej Siewior wrote:
> > On 2020-10-10 06:31:57 [+0200], Mike Galbraith wrote:
> >
> > so this then. Do you have more of these?
>
> Nope....
Well, I do have a gripe from 5.6-rt, which I just took a moment to
confirm in virgin source, but that kernel is probably EOL.
[ 24.613988] ======================================================
[ 24.613988] WARNING: possible circular locking dependency detected
[ 24.613989] 5.6.19-rt12-rt #3 Tainted: G E
[ 24.613990] ------------------------------------------------------
[ 24.613991] ksoftirqd/0/10 is trying to acquire lock:
[ 24.613992] ffff94a639fd6a48 (&sch->q.lock){+...}, at: sch_direct_xmit+0x81/0x2f0
[ 24.613998]
but task is already holding lock:
[ 24.613998] ffff94a639fd6a80 (&(&sch->running)->seqcount){+...}, at: br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614007]
which lock already depends on the new lock.
[ 24.614008]
the existing dependency chain (in reverse order) is:
[ 24.614009]
-> #1 (&(&sch->running)->seqcount){+...}:
[ 24.614010] __dev_queue_xmit+0xc86/0xda0
[ 24.614012] br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614017] br_forward_finish+0x10a/0x1b0 [bridge]
[ 24.614021] __br_forward+0x17d/0x340 [bridge]
[ 24.614024] br_dev_xmit+0x432/0x560 [bridge]
[ 24.614029] dev_hard_start_xmit+0xc5/0x3f0
[ 24.614030] __dev_queue_xmit+0x973/0xda0
[ 24.614031] ip6_finish_output2+0x290/0x980
[ 24.614033] ip6_output+0x6d/0x260
[ 24.614034] mld_sendpack+0x1d9/0x360
[ 24.614035] mld_ifc_timer_expire+0x1f7/0x370
[ 24.614036] call_timer_fn+0x98/0x390
[ 24.614038] run_timer_softirq+0x591/0x720
[ 24.614040] __do_softirq+0xca/0x561
[ 24.614042] run_ksoftirqd+0x45/0x70
[ 24.614043] smpboot_thread_fn+0x266/0x320
[ 24.614045] kthread+0x11c/0x140
[ 24.614047] ret_from_fork+0x24/0x50
[ 24.614049]
-> #0 (&sch->q.lock){+...}:
[ 24.614050] __lock_acquire+0x115a/0x1440
[ 24.614052] lock_acquire+0x93/0x230
[ 24.614053] rt_spin_lock+0x78/0xd0
[ 24.614055] sch_direct_xmit+0x81/0x2f0
[ 24.614056] __dev_queue_xmit+0xcd7/0xda0
[ 24.614057] br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614062] br_forward_finish+0x10a/0x1b0 [bridge]
[ 24.614067] __br_forward+0x17d/0x340 [bridge]
[ 24.614072] br_dev_xmit+0x432/0x560 [bridge]
[ 24.614076] dev_hard_start_xmit+0xc5/0x3f0
[ 24.614077] __dev_queue_xmit+0x973/0xda0
[ 24.614078] ip6_finish_output2+0x290/0x980
[ 24.614079] ip6_output+0x6d/0x260
[ 24.614080] mld_sendpack+0x1d9/0x360
[ 24.614081] mld_ifc_timer_expire+0x1f7/0x370
[ 24.614082] call_timer_fn+0x98/0x390
[ 24.614084] run_timer_softirq+0x591/0x720
[ 24.614085] __do_softirq+0xca/0x561
[ 24.614086] run_ksoftirqd+0x45/0x70
[ 24.614087] smpboot_thread_fn+0x266/0x320
[ 24.614089] kthread+0x11c/0x140
[ 24.614090] ret_from_fork+0x24/0x50
[ 24.614091]
other info that might help us debug this:
[ 24.614092] Possible unsafe locking scenario:
[ 24.614092]        CPU0                    CPU1
[ 24.614093]        ----                    ----
[ 24.614093]   lock(&(&sch->running)->seqcount);
[ 24.614094]                                lock(&sch->q.lock);
[ 24.614095]                                lock(&(&sch->running)->seqcount);
[ 24.614096]   lock(&sch->q.lock);
[ 24.614097]
*** DEADLOCK ***
[ 24.614097] 20 locks held by ksoftirqd/0/10:
[ 24.614098] #0: ffffffffa2485fc0 (rcu_read_lock){....}, at: rt_spin_lock+0x5/0xd0
[ 24.614101] #1: ffff94a65ec1b5a0 (per_cpu_ptr(&bh_lock.lock, cpu)){....}, at: __local_bh_disable_ip+0xda/0x1c0
[ 24.614103] #2: ffffffffa2485fc0 (rcu_read_lock){....}, at: __local_bh_disable_ip+0x106/0x1c0
[ 24.614105] #3: ffffffffa2485fc0 (rcu_read_lock){....}, at: rt_spin_lock+0x5/0xd0
[ 24.614107] #4: ffff94a65ec1c1e0 (&base->expiry_lock){+...}, at: run_timer_softirq+0x3e3/0x720
[ 24.614110] #5: ffffb3bd40077d70 ((&idev->mc_ifc_timer)){+...}, at: call_timer_fn+0x5/0x390
[ 24.614113] #6: ffffffffa2485fc0 (rcu_read_lock){....}, at: mld_sendpack+0x5/0x360
[ 24.614116] #7: ffffffffa2485fc0 (rcu_read_lock){....}, at: __local_bh_disable_ip+0x106/0x1c0
[ 24.614118] #8: ffffffffa2485fa0 (rcu_read_lock_bh){....}, at: ip6_finish_output2+0x7a/0x980
[ 24.614121] #9: ffffffffa2485fc0 (rcu_read_lock){....}, at: __local_bh_disable_ip+0x106/0x1c0
[ 24.614124] #10: ffffffffa2485fa0 (rcu_read_lock_bh){....}, at: __dev_queue_xmit+0x63/0xda0
[ 24.614126] #11: ffffffffa2485fc0 (rcu_read_lock){....}, at: br_dev_xmit+0x5/0x560 [bridge]
[ 24.614133] #12: ffffffffa2485fc0 (rcu_read_lock){....}, at: __local_bh_disable_ip+0x106/0x1c0
[ 24.614135] #13: ffffffffa2485fa0 (rcu_read_lock_bh){....}, at: __dev_queue_xmit+0x63/0xda0
[ 24.614138] #14: ffffffffa2485fc0 (rcu_read_lock){....}, at: rt_spin_lock+0x5/0xd0
[ 24.614140] #15: ffff94a639fd6d60 (&dev->qdisc_tx_busylock_key){+...}, at: __dev_queue_xmit+0x89e/0xda0
[ 24.614143] #16: ffffffffa2485fc0 (rcu_read_lock){....}, at: rt_spin_lock+0x5/0xd0
[ 24.614145] #17: ffff94a639fd6b40 (&dev->qdisc_running_key){+...}, at: __dev_queue_xmit+0xc52/0xda0
[ 24.614148] #18: ffff94a639fd6a80 (&(&sch->running)->seqcount){+...}, at: br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614154] #19: ffffffffa2485fc0 (rcu_read_lock){....}, at: rt_spin_lock+0x5/0xd0
[ 24.614155]
stack backtrace:
[ 24.614156] CPU: 0 PID: 10 Comm: ksoftirqd/0 Kdump: loaded Tainted: G E 5.6.19-rt12-rt #3
[ 24.614157] Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.20C 09/23/2013
[ 24.614158] Call Trace:
[ 24.614160] dump_stack+0x71/0x9b
[ 24.614163] check_noncircular+0x155/0x170
[ 24.614166] ? __lock_acquire+0x115a/0x1440
[ 24.614168] __lock_acquire+0x115a/0x1440
[ 24.614172] lock_acquire+0x93/0x230
[ 24.614173] ? sch_direct_xmit+0x81/0x2f0
[ 24.614177] rt_spin_lock+0x78/0xd0
[ 24.614178] ? sch_direct_xmit+0x81/0x2f0
[ 24.614180] sch_direct_xmit+0x81/0x2f0
[ 24.614182] __dev_queue_xmit+0xcd7/0xda0
[ 24.614184] ? find_held_lock+0x2d/0x90
[ 24.614186] ? br_forward_finish+0xde/0x1b0 [bridge]
[ 24.614192] ? br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614197] br_dev_queue_push_xmit+0x79/0x160 [bridge]
[ 24.614203] br_forward_finish+0x10a/0x1b0 [bridge]
[ 24.614210] __br_forward+0x17d/0x340 [bridge]
[ 24.614216] ? br_flood+0x98/0x120 [bridge]
[ 24.614222] br_dev_xmit+0x432/0x560 [bridge]
[ 24.614228] dev_hard_start_xmit+0xc5/0x3f0
[ 24.614232] __dev_queue_xmit+0x973/0xda0
[ 24.614233] ? mark_held_locks+0x2d/0x80
[ 24.614235] ? eth_header+0x25/0xc0
[ 24.614238] ? ip6_finish_output2+0x290/0x980
[ 24.614239] ip6_finish_output2+0x290/0x980
[ 24.614242] ? ip6_mtu+0x135/0x1b0
[ 24.614246] ? ip6_output+0x6d/0x260
[ 24.614247] ip6_output+0x6d/0x260
[ 24.614249] ? __ip6_finish_output+0x210/0x210
[ 24.614252] mld_sendpack+0x1d9/0x360
[ 24.614255] ? mld_ifc_timer_expire+0x119/0x370
[ 24.614256] mld_ifc_timer_expire+0x1f7/0x370
[ 24.614258] ? mld_dad_timer_expire+0xb0/0xb0
[ 24.614259] ? mld_dad_timer_expire+0xb0/0xb0
[ 24.614260] call_timer_fn+0x98/0x390
[ 24.614263] ? mld_dad_timer_expire+0xb0/0xb0
[ 24.614264] run_timer_softirq+0x591/0x720
[ 24.614267] __do_softirq+0xca/0x561
[ 24.614271] ? smpboot_thread_fn+0x28/0x320
[ 24.614273] ? smpboot_thread_fn+0x70/0x320
[ 24.614274] run_ksoftirqd+0x45/0x70
[ 24.614275] smpboot_thread_fn+0x266/0x320
[ 24.614277] ? smpboot_register_percpu_thread+0xe0/0xe0
[ 24.614278] kthread+0x11c/0x140
[ 24.614280] ? kthread_park+0x90/0x90
[ 24.614282] ret_from_fork+0x24/0x50
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-10-13 3:00 ` Mike Galbraith
@ 2020-10-14 10:22 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-10-14 10:22 UTC (permalink / raw)
To: Mike Galbraith; +Cc: tglx, linux-rt-users, lkml
On 2020-10-13 05:00:18 [+0200], Mike Galbraith wrote:
> Well, I do have a gripe from 5.6-rt, which I just took a moment to
> confirm in virgin source, but that kernel is probably EOL.
Yes. But the patch I posted for v5.9 should also work on v5.6.
Sebastian
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-10-10 4:31 [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat Mike Galbraith
2020-10-12 16:45 ` Sebastian Andrzej Siewior
@ 2020-12-09 10:05 ` Peter Zijlstra
2020-12-09 10:25 ` Mike Galbraith
2020-12-09 11:47 ` Sebastian Andrzej Siewior
1 sibling, 2 replies; 9+ messages in thread
From: Peter Zijlstra @ 2020-12-09 10:05 UTC (permalink / raw)
To: Mike Galbraith
Cc: Sebastian Andrzej Siewior, tglx, linux-rt-users, lkml,
Boqun Feng, Ingo Molnar, Will Deacon
On Sat, Oct 10, 2020 at 06:31:57AM +0200, Mike Galbraith wrote:
>
> [ 47.844511] ======================================================
> [ 47.844511] WARNING: possible circular locking dependency detected
> [ 47.844512] 5.9.0.gc85fb28-rt14-rt #1 Tainted: G E
> [ 47.844513] ------------------------------------------------------
> [ 47.844514] perl/2751 is trying to acquire lock:
> [ 47.844515] ffff92cadec5a410 ((softirq_ctrl.lock).lock){+.+.}-{2:2}, at: __local_bh_disable_ip+0x127/0x2c0
> [ 47.844521]
> but task is already holding lock:
> [ 47.844522] ffffffffa8871468 (&h->listening_hash[i].lock){+.+.}-{0:0}, at: listening_get_next.isra.41+0xd7/0x130
> [ 47.844528]
> which lock already depends on the new lock.
>
> [ 47.844528]
> the existing dependency chain (in reverse order) is:
> [ 47.844529]
> -> #1 (&h->listening_hash[i].lock){+.+.}-{0:0}:
> [ 47.844532] rt_spin_lock+0x2b/0xc0
> [ 47.844536] __inet_hash+0x68/0x320
> [ 47.844539] inet_hash+0x31/0x60
> [ 47.844541] inet_csk_listen_start+0xaf/0xe0
> [ 47.844543] inet_listen+0x86/0x150
> [ 47.844546] __sys_listen+0x58/0x80
> [ 47.844548] __x64_sys_listen+0x12/0x20
> [ 47.844549] do_syscall_64+0x33/0x40
> [ 47.844552] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 47.844555]
> -> #0 ((softirq_ctrl.lock).lock){+.+.}-{2:2}:
> [ 47.844557] __lock_acquire+0x1343/0x1890
> [ 47.844560] lock_acquire+0x92/0x410
> [ 47.844562] rt_spin_lock+0x2b/0xc0
> [ 47.844564] __local_bh_disable_ip+0x127/0x2c0
> [ 47.844566] sock_i_ino+0x22/0x60
> [ 47.844569] tcp4_seq_show+0x14f/0x420
> [ 47.844571] seq_read+0x27c/0x420
> [ 47.844574] proc_reg_read+0x5c/0x80
> [ 47.844576] vfs_read+0xd1/0x1d0
> [ 47.844580] ksys_read+0x87/0xc0
> [ 47.844581] do_syscall_64+0x33/0x40
> [ 47.844583] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 47.844585]
> other info that might help us debug this:
>
> [ 47.844585] Possible unsafe locking scenario:
>
> [ 47.844586]        CPU0                    CPU1
> [ 47.844586]        ----                    ----
> [ 47.844587]   lock(&h->listening_hash[i].lock);
> [ 47.844588]                                lock((softirq_ctrl.lock).lock);
> [ 47.844588]                                lock(&h->listening_hash[i].lock);
> [ 47.844589]   lock((softirq_ctrl.lock).lock);
> [ 47.844590]
> *** DEADLOCK ***
>
> [ 47.844590] 3 locks held by perl/2751:
> [ 47.844591] #0: ffff92ca6525a4e0 (&p->lock){+.+.}-{0:0}, at: seq_read+0x37/0x420
> [ 47.844594] #1: ffffffffa8871468 (&h->listening_hash[i].lock){+.+.}-{0:0}, at: listening_get_next.isra.41+0xd7/0x130
> [ 47.844597] #2: ffffffffa74b90e0 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xc0
> [ 47.844600]
> stack backtrace:
> [ 47.844601] CPU: 1 PID: 2751 Comm: perl Kdump: loaded Tainted: G E 5.9.0.gc85fb28-rt14-rt #1
> [ 47.844603] Hardware name: MEDION MS-7848/MS-7848, BIOS M7848W08.20C 09/23/2013
> [ 47.844604] Call Trace:
> [ 47.844606] dump_stack+0x77/0x9b
> [ 47.844611] check_noncircular+0x148/0x160
> [ 47.844616] ? __lock_acquire+0x1343/0x1890
> [ 47.844617] __lock_acquire+0x1343/0x1890
> [ 47.844621] lock_acquire+0x92/0x410
> [ 47.844623] ? __local_bh_disable_ip+0x127/0x2c0
> [ 47.844626] ? sock_i_ino+0x5/0x60
> [ 47.844628] rt_spin_lock+0x2b/0xc0
> [ 47.844630] ? __local_bh_disable_ip+0x127/0x2c0
> [ 47.844631] __local_bh_disable_ip+0x127/0x2c0
> [ 47.844634] sock_i_ino+0x22/0x60
> [ 47.844635] tcp4_seq_show+0x14f/0x420
> [ 47.844640] seq_read+0x27c/0x420
> [ 47.844643] proc_reg_read+0x5c/0x80
> [ 47.844645] vfs_read+0xd1/0x1d0
> [ 47.844648] ksys_read+0x87/0xc0
> [ 47.844649] ? lockdep_hardirqs_on+0x78/0x100
> [ 47.844652] do_syscall_64+0x33/0x40
> [ 47.844654] entry_SYSCALL_64_after_hwframe+0x44/0xa9
> [ 47.844656] RIP: 0033:0x7fb3f3c23e51
> [ 47.844658] Code: 7d 81 20 00 f7 d8 64 89 02 48 c7 c0 ff ff ff ff eb ba 0f 1f 80 00 00 00 00 8b 05 1a c3 20 00 48 63 ff 85 c0 75 13 31 c0 0f 05 <48> 3d 00 f0 ff ff 77 57 f3 c3 0f 1f 44 00 00 55 53 48 89 d5 48 89
> [ 47.844660] RSP: 002b:00007ffd7604f108 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
> [ 47.844661] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb3f3c23e51
> [ 47.844662] RDX: 0000000000002000 RSI: 000055dbff4da600 RDI: 0000000000000003
> [ 47.844662] RBP: 0000000000002000 R08: 000055dbff4d9290 R09: 000055dbff4da600
> [ 47.844663] R10: ffffffffffffffb0 R11: 0000000000000246 R12: 000055dbff4da600
> [ 47.844664] R13: 000055dbff4ae260 R14: 000055dbff4d92c0 R15: 0000000000000003
So I've been looking at these local_lock vs lockdep splats for a bit,
and unlike the IRQ inversions as reported here:
https://lore.kernel.org/linux-usb/20201029174348.omqiwjqy64tebg5z@linutronix.de/
I think the above is an actual real problem (for RT).
AFAICT the above translates to:
  inet_listen()
    lock_sock()
      spin_lock_bh(&sk->sk_lock.slock);
        acquire(softirq_ctrl);
        acquire(&sk->sk_lock.slock);

    inet_csk_listen_start()
      sk->sk_prot->hash() := inet_hash()
        local_bh_disable()
        __inet_hash()
          spin_lock(&ilb->lock);
            acquire(&ilb->lock);

  ----

  tcp4_seq_next()
    listening_get_next()
      spin_lock(&ilb->lock);
        acquire(&ilb->lock);

  tcp4_seq_show()
    get_tcp4_sock()
      sock_i_ino()
        read_lock_bh(&sk->sk_callback_lock);
          acquire(softirq_ctrl)   // <---- whoops
          acquire(&sk->sk_callback_lock)
Which you can run in two tasks on the same CPU (and thus get the same
softirq_ctrl local_lock), and deadlock.
By holding softirq_ctrl we serialize against softirq-context
(in-softirq) but that isn't relevant here, since neither context is
that.
On !RT there isn't a problem because softirq_ctrl isn't an actual lock,
but the moment that turns into a real lock (like on RT) you're up a
creek.
In general we have the rule that as long as a lock is only ever used
from task context (like the above ilb->lock, afaict) then it doesn't
matter if you also take it with (soft)irqs disabled or not. But this
softirq scheme breaks that. If you ever take a lock with BH disabled,
you must now always take it with BH disabled, otherwise you risk
deadlocks against the softirq_ctrl lock.
Or am I missing something obvious (again) ?
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-12-09 10:05 ` Peter Zijlstra
@ 2020-12-09 10:25 ` Mike Galbraith
2020-12-09 10:32 ` Sebastian Andrzej Siewior
2020-12-09 11:47 ` Sebastian Andrzej Siewior
1 sibling, 1 reply; 9+ messages in thread
From: Mike Galbraith @ 2020-12-09 10:25 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Sebastian Andrzej Siewior, tglx, linux-rt-users, lkml,
Boqun Feng, Ingo Molnar, Will Deacon
On Wed, 2020-12-09 at 11:05 +0100, Peter Zijlstra wrote:
>
> > [ 47.844585] Possible unsafe locking scenario:
> >
> > [ 47.844586]        CPU0                    CPU1
> > [ 47.844586]        ----                    ----
> > [ 47.844587]   lock(&h->listening_hash[i].lock);
> > [ 47.844588]                                lock((softirq_ctrl.lock).lock);
> > [ 47.844588]                                lock(&h->listening_hash[i].lock);
> > [ 47.844589]   lock((softirq_ctrl.lock).lock);
> > [ 47.844590]
> > *** DEADLOCK ***
> >
> >
> So I've been looking at these local_lock vs lockdep splats for a bit,
> and unlike the IRQ inversions as reported here:
>
> https://lore.kernel.org/linux-usb/20201029174348.omqiwjqy64tebg5z@linutronix.de/
>
> I think the above is an actual real problem (for RT).
>
> AFAICT the above translates to:
>
>   inet_listen()
>     lock_sock()
>       spin_lock_bh(&sk->sk_lock.slock);
>         acquire(softirq_ctrl);
>         acquire(&sk->sk_lock.slock);
>
>     inet_csk_listen_start()
>       sk->sk_prot->hash() := inet_hash()
>         local_bh_disable()
>         __inet_hash()
>           spin_lock(&ilb->lock);
>             acquire(&ilb->lock);
>
>   ----
>
>   tcp4_seq_next()
>     listening_get_next()
>       spin_lock(&ilb->lock);
>         acquire(&ilb->lock);
>
>   tcp4_seq_show()
>     get_tcp4_sock()
>       sock_i_ino()
>         read_lock_bh(&sk->sk_callback_lock);
>           acquire(softirq_ctrl)   // <---- whoops
>           acquire(&sk->sk_callback_lock)
>
>
> Which you can run in two tasks on the same CPU (and thus get the same
> softirq_ctrl local_lock), and deadlock.
>
> By holding softirq_ctrl we serialize against softirq-context
> (in-softirq) but that isn't relevant here, since neither context is
> that.
>
> On !RT there isn't a problem because softirq_ctrl isn't an actual lock,
> but the moment that turns into a real lock (like on RT) you're up a
> creek.
>
> In general we have the rule that as long as a lock is only ever used
> from task context (like the above ilb->lock, afaict) then it doesn't
> matter if you also take it with (soft)irqs disabled or not. But this
> softirq scheme breaks that. If you ever take a lock with BH disabled,
> you must now always take it with BH disabled, otherwise you risk
> deadlocks against the softirq_ctrl lock.
>
> Or am I missing something obvious (again) ?
Sebastian fixed this via...
From 0fe43be6c32e05d0dd692069d41a40c5453a2195 Mon Sep 17 00:00:00 2001
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Date: Mon, 12 Oct 2020 17:33:54 +0200
Subject: tcp: Remove superfluous BH-disable around listening_hash
Commit
9652dc2eb9e40 ("tcp: relax listening_hash operations")
removed the need to disable bottom half while acquiring
listening_hash.lock. There are still two callers left which disable
bottom half before the lock is acquired.
Drop local_bh_disable() around __inet_hash(), which acquires
listening_hash->lock, and instead invoke inet_ehash_nolisten() with BH
disabled. inet_unhash() now conditionally disables BH, depending on which
lock it acquires.
Reported-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/linux-rt-users/12d6f9879a97cd56c09fb53dee343cbb14f7f1f7.camel@gmx.de/
Signed-off-by: Mike Galbraith <efault@gmx.de>
---
net/ipv4/inet_hashtables.c | 19 ++++++++++++-------
net/ipv6/inet6_hashtables.c | 5 +----
2 files changed, 13 insertions(+), 11 deletions(-)
diff --git a/net/ipv4/inet_hashtables.c b/net/ipv4/inet_hashtables.c
index 45fb450b4522..5fb95030e7c0 100644
--- a/net/ipv4/inet_hashtables.c
+++ b/net/ipv4/inet_hashtables.c
@@ -635,7 +635,9 @@ int __inet_hash(struct sock *sk, struct sock *osk)
int err = 0;
if (sk->sk_state != TCP_LISTEN) {
+ local_bh_disable();
inet_ehash_nolisten(sk, osk, NULL);
+ local_bh_enable();
return 0;
}
WARN_ON(!sk_unhashed(sk));
@@ -667,11 +669,8 @@ int inet_hash(struct sock *sk)
{
int err = 0;
- if (sk->sk_state != TCP_CLOSE) {
- local_bh_disable();
+ if (sk->sk_state != TCP_CLOSE)
err = __inet_hash(sk, NULL);
- local_bh_enable();
- }
return err;
}
@@ -682,17 +681,20 @@ void inet_unhash(struct sock *sk)
struct inet_hashinfo *hashinfo = sk->sk_prot->h.hashinfo;
struct inet_listen_hashbucket *ilb = NULL;
spinlock_t *lock;
+ bool state_listen;
if (sk_unhashed(sk))
return;
if (sk->sk_state == TCP_LISTEN) {
+ state_listen = true;
ilb = &hashinfo->listening_hash[inet_sk_listen_hashfn(sk)];
- lock = &ilb->lock;
+ spin_lock(&ilb->lock);
} else {
+ state_listen = false;
lock = inet_ehash_lockp(hashinfo, sk->sk_hash);
+ spin_lock_bh(lock);
}
- spin_lock_bh(lock);
if (sk_unhashed(sk))
goto unlock;
@@ -705,7 +707,10 @@ void inet_unhash(struct sock *sk)
__sk_nulls_del_node_init_rcu(sk);
sock_prot_inuse_add(sock_net(sk), sk->sk_prot, -1);
unlock:
- spin_unlock_bh(lock);
+ if (state_listen)
+ spin_unlock(&ilb->lock);
+ else
+ spin_unlock_bh(lock);
}
EXPORT_SYMBOL_GPL(inet_unhash);
diff --git a/net/ipv6/inet6_hashtables.c b/net/ipv6/inet6_hashtables.c
index 55c290d55605..9bad345cba9a 100644
--- a/net/ipv6/inet6_hashtables.c
+++ b/net/ipv6/inet6_hashtables.c
@@ -333,11 +333,8 @@ int inet6_hash(struct sock *sk)
{
int err = 0;
- if (sk->sk_state != TCP_CLOSE) {
- local_bh_disable();
+ if (sk->sk_state != TCP_CLOSE)
err = __inet_hash(sk, NULL);
- local_bh_enable();
- }
return err;
}
--
2.29.2
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-12-09 10:25 ` Mike Galbraith
@ 2020-12-09 10:32 ` Sebastian Andrzej Siewior
0 siblings, 0 replies; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-12-09 10:32 UTC (permalink / raw)
To: Mike Galbraith
Cc: Peter Zijlstra, tglx, linux-rt-users, lkml, Boqun Feng,
Ingo Molnar, Will Deacon
On 2020-12-09 11:25:34 [+0100], Mike Galbraith wrote:
> Sebastian fixed this via...
We are still looking into whether we go down that road or update lockdep instead.
Sebastian
* Re: [RT] 5.9-rt14 softirq_ctrl.lock vs listening_hash[i].lock lockdep splat
2020-12-09 10:05 ` Peter Zijlstra
2020-12-09 10:25 ` Mike Galbraith
@ 2020-12-09 11:47 ` Sebastian Andrzej Siewior
1 sibling, 0 replies; 9+ messages in thread
From: Sebastian Andrzej Siewior @ 2020-12-09 11:47 UTC (permalink / raw)
To: Peter Zijlstra
Cc: Mike Galbraith, tglx, linux-rt-users, lkml, Boqun Feng,
Ingo Molnar, Will Deacon
On 2020-12-09 11:05:45 [+0100], Peter Zijlstra wrote:
> In general we have the rule that as long as a lock is only ever used
> from task context (like the above ilb->lock, afaict) then it doesn't
> matter if you also take it with (soft)irqs disabled or not. But this
> softirq scheme breaks that. If you ever take a lock with BH disabled,
> you must now always take it with BH disabled, otherwise you risk
> deadlocks against the softirq_ctrl lock.
>
> Or am I missing something obvious (again) ?
No. With this explanation it makes sense. Thank you.
Sebastian