From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com, paulmck@linux.vnet.ibm.com,
	catalin.marinas@arm.com, Will Deacon <will.deacon@arm.com>
Subject: [PATCH 02/10] locking/qspinlock: Remove unbounded cmpxchg loop from locking slowpath
Date: Thu,  5 Apr 2018 17:58:59 +0100
Message-ID: <1522947547-24081-3-git-send-email-will.deacon@arm.com>
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>

The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see a
lockword of (n,0,0). With pending now set, simply queuing could lead to
deadlock as the head of the queue may not have observed the pending flag
being cleared. Conversely, if the head of the queue did observe pending
being cleared, then it could transition the lock from (n,0,0) -> (0,0,1)
meaning that any attempt to "undo" our setting of the pending bit could
race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 kernel/locking/qspinlock.c | 80 ++++++++++++++++++++--------------------
 1 file changed, 35 insertions(+), 45 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a192af2fe378..b75361d23ea5 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -294,7 +294,7 @@ static __always_inline u32 __pv_wait_head_or_lock(struct qspinlock *lock,
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
-	u32 new, old, tail;
+	u32 old, tail;
 	int idx;
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -306,58 +306,48 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
+	 * If we observe any contention; queue.
+	 */
+	if (val & ~_Q_LOCKED_MASK)
+		goto queue;
+
+	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	for (;;) {
+	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
-		 * If we observe any contention; queue.
+		 * we're pending, wait for the owner to go away.
+		 *
+		 * *,1,1 -> *,1,0
+		 *
+		 * this wait loop must be a load-acquire such that we match the
+		 * store-release that clears the locked bit and create lock
+		 * sequentiality; this is because not all
+		 * clear_pending_set_locked() implementations imply full
+		 * barriers.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
-			goto queue;
-
-		new = _Q_LOCKED_VAL;
-		if (val == new)
-			new |= _Q_PENDING_VAL;
-
+		if (val & _Q_LOCKED_MASK)
+			smp_cond_load_acquire(&lock->val.counter,
+					      !(VAL & _Q_LOCKED_MASK));
 		/*
-		 * Acquire semantic is required here as the function may
-		 * return immediately if the lock was free.
+		 * take ownership and clear the pending bit.
+		 *
+		 * *,1,0 -> *,0,1
 		 */
-		old = atomic_cmpxchg_acquire(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
-
-	/*
-	 * we won the trylock
-	 */
-	if (new == _Q_LOCKED_VAL)
+		clear_pending_set_locked(lock);
 		return;
+	}
 
 	/*
-	 * we're pending, wait for the owner to go away.
-	 *
-	 * *,1,1 -> *,1,0
-	 *
-	 * this wait loop must be a load-acquire such that we match the
-	 * store-release that clears the locked bit and create lock
-	 * sequentiality; this is because not all clear_pending_set_locked()
-	 * implementations imply full barriers.
-	 */
-	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-	/*
-	 * take ownership and clear the pending bit.
-	 *
-	 * *,1,0 -> *,0,1
+	 * If pending was clear but there are waiters in the queue, then
+	 * we need to undo our setting of pending before we queue ourselves.
 	 */
-	clear_pending_set_locked(lock);
-	return;
+	if (!(val & _Q_PENDING_MASK))
+		atomic_andnot(_Q_PENDING_VAL, &lock->val);
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
@@ -461,15 +451,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * claim the lock:
 	 *
 	 *   n,0,0 -> 0,0,1 : lock, uncontended
-	 *   *,0,0 -> *,0,1 : lock, contended
+	 *   *,*,0 -> *,*,1 : lock, contended
 	 *
-	 * If the queue head is the only one in the queue (lock value == tail),
-	 * clear the tail code and grab the lock. Otherwise, we only need
-	 * to grab the lock.
+	 * If the queue head is the only one in the queue (lock value == tail)
+	 * and nobody is pending, clear the tail code and grab the lock.
+	 * Otherwise, we only need to grab the lock.
 	 */
 	for (;;) {
 		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail) {
+		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
 			set_locked(lock);
 			break;
 		}
-- 
2.1.4
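To make the fetch_or-based fast path easy to experiment with outside the
kernel tree, here is a minimal userspace sketch of the same idea. It uses
C11 <stdatomic.h> rather than the kernel's atomic_t API, and the bit
layout, mask values and function name below are simplified assumptions
for illustration, not the kernel's real definitions from
qspinlock_types.h:

#include <stdatomic.h>
#include <stdbool.h>

#define Q_LOCKED_VAL	(1U << 0)	/* (*,*,1): locked byte */
#define Q_PENDING_VAL	(1U << 8)	/* (*,1,*): pending bit */
#define Q_LOCKED_MASK	0xffU
#define Q_PENDING_MASK	(0xffU << 8)
/* bits 16..31 would encode the MCS tail: (n,*,*) */

static bool pending_trylock(atomic_uint *lockword)
{
	unsigned int val;

	/*
	 * A single bounded fetch_or replaces the old unbounded cmpxchg
	 * loop: always set pending, then decide from the old value.
	 */
	val = atomic_fetch_or_explicit(lockword, Q_PENDING_VAL,
				       memory_order_acquire);

	if (val & ~Q_LOCKED_MASK) {
		/*
		 * Contended: tail and/or pending already set. If pending
		 * was clear but waiters were queued, undo our pending bit
		 * before falling back to the queue, as in the patch.
		 */
		if (!(val & Q_PENDING_MASK))
			atomic_fetch_and_explicit(lockword, ~Q_PENDING_VAL,
						  memory_order_relaxed);
		return false;
	}

	/* We own pending; wait for the owner to go away: *,1,1 -> *,1,0 */
	while (atomic_load_explicit(lockword, memory_order_acquire) &
	       Q_LOCKED_MASK)
		;

	/*
	 * Take ownership and clear pending: *,1,0 -> *,0,1. The wrapping
	 * add clears pending and sets locked in one RMW while leaving any
	 * concurrently-installed tail bits intact.
	 */
	atomic_fetch_add_explicit(lockword, Q_LOCKED_VAL - Q_PENDING_VAL,
				  memory_order_relaxed);
	return true;
}

A caller that gets false back would fall through to MCS queuing, much as
the "goto queue" path does in the patch. The key property is that the
fast path now issues a fixed number of atomic RMW operations rather than
looping, so it can no longer be starved by a lock word oscillating
between (0,0,0) and (0,0,1).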