* (no subject)
@ 2018-12-13 14:09 Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 01/10] locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath() Sebastian Andrzej Siewior
                   ` (11 more replies)
  0 siblings, 12 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable; +Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner

Hi,

this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
Provide liveness guarantee") for the v4.9 stable tree.
For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
x86.
For v4.9 it has been decided to do a minimal backport of the final fix
(including all its dependencies).
With this backport I can't reproduce the issue in the latest v4.9-RT
tree. I was able to boot (and use) an arm64 box with these patches so it
is not broken in an obvious way.

Sebastian


* [PATCH STABLE v4.9 01/10] locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath()
  2018-12-13 14:09 Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 02/10] locking/qspinlock: Ensure node is initialised before updating prev->next Sebastian Andrzej Siewior
                   ` (10 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Paul E. McKenney, Ingo Molnar, Sebastian Andrzej Siewior

From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>

commit 548095dea63ffc016d39c35b32c628d033638aca upstream.

Queued spinlocks are not used by DEC Alpha, and furthermore operations
such as READ_ONCE() and release/relaxed RMW atomics are being changed
to imply smp_read_barrier_depends().  This commit therefore removes the
now-redundant smp_read_barrier_depends() from queued_spin_lock_slowpath(),
and adjusts the comments accordingly.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a72f5df643f8d..8710fbe8d26c0 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -169,7 +169,7 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  * @tail : The new queue tail code word
  * Return: The previous queue tail code word
  *
- * xchg(lock, tail)
+ * xchg(lock, tail), which heads an address dependency
  *
  * p,*,* -> n,*,* ; prev = xchg(lock, node)
  */
@@ -533,13 +533,11 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
 		/*
-		 * The above xchg_tail() is also a load of @lock which generates,
-		 * through decode_tail(), a pointer.
-		 *
-		 * The address dependency matches the RELEASE of xchg_tail()
-		 * such that the access to @prev must happen after.
+		 * The above xchg_tail() is also a load of @lock which
+		 * generates, through decode_tail(), a pointer.  The address
+		 * dependency matches the RELEASE of xchg_tail() such that
+		 * the subsequent access to @prev happens after.
 		 */
-		smp_read_barrier_depends();
 
 		WRITE_ONCE(prev->next, node);
 
-- 
2.20.0


* [PATCH STABLE v4.9 02/10] locking/qspinlock: Ensure node is initialised before updating prev->next
  2018-12-13 14:09 Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 01/10] locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath() Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 03/10] locking/qspinlock: Bound spinning on pending->locked transition in slowpath Sebastian Andrzej Siewior
                   ` (9 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Linus Torvalds, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit 95bcade33a8af38755c9b0636e36a36ad3789fe6 upstream.

When a locker ends up queuing on the qspinlock locking slowpath, we
initialise the relevant mcs node and publish it indirectly by updating
the tail portion of the lock word using xchg_tail. If we find that there
was a pre-existing locker in the queue, we subsequently update their
->next field to point at our node so that we are notified when it's our
turn to take the lock.

This can be roughly illustrated as follows:

  /* Initialise the fields in node and encode a pointer to node in tail */
  tail = initialise_node(node);

  /*
   * Exchange tail into the lockword using an atomic read-modify-write
   * operation with release semantics
   */
  old = xchg_tail(lock, tail);

  /* If there was a pre-existing waiter ... */
  if (old & _Q_TAIL_MASK) {
	prev = decode_tail(old);
	smp_read_barrier_depends();

	/* ... then update their ->next field to point to node. */
	WRITE_ONCE(prev->next, node);
  }

The conditional update of prev->next therefore relies on the address
dependency from the result of xchg_tail ensuring order against the
prior initialisation of node. However, since the release semantics of
the xchg_tail operation apply only to the write portion of the RmW,
then this ordering is not guaranteed and it is possible for the CPU
to return old before the writes to node have been published, consequently
allowing us to point prev->next to an uninitialised node.

This patch fixes the problem by making the update of prev->next a RELEASE
operation, which also removes the reliance on dependency ordering.
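
For reference, a sketch of the corrected sequence (same illustrative style
as above, mirroring the hunk below; the comments are mine):

  /* Initialise the fields in node and encode a pointer to node in tail */
  tail = initialise_node(node);

  /*
   * Exchange tail into the lockword; the release semantics only order the
   * initialisation of node against the write half of the RmW
   */
  old = xchg_tail(lock, tail);

  /* If there was a pre-existing waiter ... */
  if (old & _Q_TAIL_MASK) {
	prev = decode_tail(old);

	/*
	 * ... then publish node with RELEASE semantics, so the stores to
	 * node are ordered before the store to prev->next regardless of
	 * how the read half of xchg_tail is ordered
	 */
	smp_store_release(&prev->next, node);
  }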

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/1518528177-19169-2-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 8710fbe8d26c0..6fce84401dba1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -532,14 +532,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 */
 	if (old & _Q_TAIL_MASK) {
 		prev = decode_tail(old);
-		/*
-		 * The above xchg_tail() is also a load of @lock which
-		 * generates, through decode_tail(), a pointer.  The address
-		 * dependency matches the RELEASE of xchg_tail() such that
-		 * the subsequent access to @prev happens after.
-		 */
 
-		WRITE_ONCE(prev->next, node);
+		/*
+		 * We must ensure that the stores to @node are observed before
+		 * the write to prev->next. The address dependency from
+		 * xchg_tail is not sufficient to ensure this because the read
+		 * component of xchg_tail is unordered with respect to the
+		 * initialisation of @node.
+		 */
+		smp_store_release(&prev->next, node);
 
 		pv_wait_node(node, prev);
 		arch_mcs_spin_lock_contended(&node->locked);
-- 
2.20.0


* [PATCH STABLE v4.9 03/10] locking/qspinlock: Bound spinning on pending->locked transition in slowpath
  2018-12-13 14:09 Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 01/10] locking: Remove smp_read_barrier_depends() from queued_spin_lock_slowpath() Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 02/10] locking/qspinlock: Ensure node is initialised before updating prev->next Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 04/10] locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock' Sebastian Andrzej Siewior
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Linus Torvalds, boqun.feng, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit 6512276d97b160d90b53285bd06f7f201459a7e3 upstream.

If a locker taking the qspinlock slowpath reads a lock value indicating
that only the pending bit is set, then it will spin whilst the
concurrent pending->locked transition takes effect.

Unfortunately, there is no guarantee that such a transition will ever be
observed since concurrent lockers could continuously set pending and
hand over the lock amongst themselves, leading to starvation. Whilst
this would probably resolve in practice, it means that it is not
possible to prove liveness properties about the lock and means that lock
acquisition time is unbounded.

Rather than removing the pending->locked spinning from the slowpath
altogether (which has been shown to heavily penalise a 2-threaded
locking stress test on x86), this patch replaces the explicit spinning
with a call to atomic_cond_read_relaxed and allows the architecture to
provide a bound on the number of spins. For architectures that can
respond to changes in cacheline state in their smp_cond_load implementation,
it should be sufficient to use the default bound of 1.
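
As a rough sketch of the result (mirroring the hunk below; the v4.9
backport uses smp_cond_load_acquire() in place of upstream's
atomic_cond_read_relaxed(), VAL is the lockword value re-loaded on each
iteration, and the comments are mine):

	/*
	 * Wait for a pending->locked hand-over, but only for a bounded
	 * number of lockword reloads so that forward progress is guaranteed
	 * even if other lockers keep re-setting the pending bit.
	 */
	if (val == _Q_PENDING_VAL) {
		int cnt = _Q_PENDING_LOOPS;	/* defaults to 1; archs may override */

		val = smp_cond_load_acquire(&lock->val.counter,
					    (VAL != _Q_PENDING_VAL) || !cnt--);
	}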

Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-4-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c | 20 +++++++++++++++++---
 1 file changed, 17 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 6fce84401dba1..a8da1fc5222eb 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -75,6 +75,18 @@
 #define MAX_NODES	4
 #endif
 
+/*
+ * The pending bit spinning loop count.
+ * This heuristic is used to limit the number of lockword accesses
+ * made by atomic_cond_read_relaxed when waiting for the lock to
+ * transition out of the "== _Q_PENDING_VAL" state. We don't spin
+ * indefinitely because there's no guarantee that we'll make forward
+ * progress.
+ */
+#ifndef _Q_PENDING_LOOPS
+#define _Q_PENDING_LOOPS	1
+#endif
+
 /*
  * Per-CPU queue node structures; we can never have more than 4 nested
  * contexts: task, softirq, hardirq, nmi.
@@ -422,13 +434,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 		return;
 
 	/*
-	 * wait for in-progress pending->locked hand-overs
+	 * Wait for in-progress pending->locked hand-overs with a bounded
+	 * number of spins so that we guarantee forward progress.
 	 *
 	 * 0,1,0 -> 0,0,1
 	 */
 	if (val == _Q_PENDING_VAL) {
-		while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL)
-			cpu_relax();
+		int cnt = _Q_PENDING_LOOPS;
+		val = smp_cond_load_acquire(&lock->val.counter,
+					       (VAL != _Q_PENDING_VAL) || !cnt--);
 	}
 
 	/*
-- 
2.20.0


* [PATCH STABLE v4.9 04/10] locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock'
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (2 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 03/10] locking/qspinlock: Bound spinning on pending->locked transition in slowpath Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 05/10] locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath Sebastian Andrzej Siewior
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Boqun Feng, Linus Torvalds, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit 625e88be1f41b53cec55827c984e4a89ea8ee9f9 upstream.

'struct __qspinlock' provides a handy union of fields so that
subcomponents of the lockword can be accessed by name, without having to
manage shifts and masks explicitly and take endianness into account.

This is useful in qspinlock.h and also potentially in arch headers, so
move the 'struct __qspinlock' into 'struct qspinlock' and kill the extra
definition.
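
The effect on callers is that the local cast to the shadow structure
disappears; a minimal before/after sketch taken from set_locked() in the
hunks below (comments are mine):

/* Before: subfields only reachable through a cast to struct __qspinlock */
static __always_inline void set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
}

/* After: the union of subfields lives in struct qspinlock itself */
static __always_inline void set_locked(struct qspinlock *lock)
{
	WRITE_ONCE(lock->locked, _Q_LOCKED_VAL);
}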

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Acked-by: Boqun Feng <boqun.feng@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-3-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/include/asm/qspinlock.h          |  2 +-
 arch/x86/include/asm/qspinlock_paravirt.h |  3 +-
 include/asm-generic/qspinlock_types.h     | 32 +++++++++++++++-
 kernel/locking/qspinlock.c                | 46 ++---------------------
 kernel/locking/qspinlock_paravirt.h       | 34 ++++++-----------
 5 files changed, 46 insertions(+), 71 deletions(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index eaba080760300..e07cc206919d4 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -14,7 +14,7 @@
  */
 static inline void native_queued_spin_unlock(struct qspinlock *lock)
 {
-	smp_store_release((u8 *)lock, 0);
+	smp_store_release(&lock->locked, 0);
 }
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/include/asm/qspinlock_paravirt.h b/arch/x86/include/asm/qspinlock_paravirt.h
index 9d55f9b6e167c..fc75415ae9719 100644
--- a/arch/x86/include/asm/qspinlock_paravirt.h
+++ b/arch/x86/include/asm/qspinlock_paravirt.h
@@ -21,8 +21,7 @@ PV_CALLEE_SAVE_REGS_THUNK(__pv_queued_spin_unlock_slowpath);
  *
  * void __pv_queued_spin_unlock(struct qspinlock *lock)
  * {
- *	struct __qspinlock *l = (void *)lock;
- *	u8 lockval = cmpxchg(&l->locked, _Q_LOCKED_VAL, 0);
+ *	u8 lockval = cmpxchg(&lock->locked, _Q_LOCKED_VAL, 0);
  *
  *	if (likely(lockval == _Q_LOCKED_VAL))
  *		return;
diff --git a/include/asm-generic/qspinlock_types.h b/include/asm-generic/qspinlock_types.h
index 034acd0c4956b..0763f065b975a 100644
--- a/include/asm-generic/qspinlock_types.h
+++ b/include/asm-generic/qspinlock_types.h
@@ -29,13 +29,41 @@
 #endif
 
 typedef struct qspinlock {
-	atomic_t	val;
+	union {
+		atomic_t val;
+
+		/*
+		 * By using the whole 2nd least significant byte for the
+		 * pending bit, we can allow better optimization of the lock
+		 * acquisition for the pending bit holder.
+		 */
+#ifdef __LITTLE_ENDIAN
+		struct {
+			u8	locked;
+			u8	pending;
+		};
+		struct {
+			u16	locked_pending;
+			u16	tail;
+		};
+#else
+		struct {
+			u16	tail;
+			u16	locked_pending;
+		};
+		struct {
+			u8	reserved[2];
+			u8	pending;
+			u8	locked;
+		};
+#endif
+	};
 } arch_spinlock_t;
 
 /*
  * Initializier
  */
-#define	__ARCH_SPIN_LOCK_UNLOCKED	{ ATOMIC_INIT(0) }
+#define	__ARCH_SPIN_LOCK_UNLOCKED	{ .val = ATOMIC_INIT(0) }
 
 /*
  * Bitfields in the atomic value:
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index a8da1fc5222eb..cc98050e8bec0 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -125,40 +125,6 @@ static inline __pure struct mcs_spinlock *decode_tail(u32 tail)
 
 #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
-/*
- * By using the whole 2nd least significant byte for the pending bit, we
- * can allow better optimization of the lock acquisition for the pending
- * bit holder.
- *
- * This internal structure is also used by the set_locked function which
- * is not restricted to _Q_PENDING_BITS == 8.
- */
-struct __qspinlock {
-	union {
-		atomic_t val;
-#ifdef __LITTLE_ENDIAN
-		struct {
-			u8	locked;
-			u8	pending;
-		};
-		struct {
-			u16	locked_pending;
-			u16	tail;
-		};
-#else
-		struct {
-			u16	tail;
-			u16	locked_pending;
-		};
-		struct {
-			u8	reserved[2];
-			u8	pending;
-			u8	locked;
-		};
-#endif
-	};
-};
-
 #if _Q_PENDING_BITS == 8
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
@@ -170,9 +136,7 @@ struct __qspinlock {
  */
 static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
-	WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
+	WRITE_ONCE(lock->locked_pending, _Q_LOCKED_VAL);
 }
 
 /*
@@ -187,13 +151,11 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
  */
 static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 {
-	struct __qspinlock *l = (void *)lock;
-
 	/*
 	 * Use release semantics to make sure that the MCS node is properly
 	 * initialized before changing the tail code.
 	 */
-	return (u32)xchg_release(&l->tail,
+	return (u32)xchg_release(&lock->tail,
 				 tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET;
 }
 
@@ -248,9 +210,7 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
  */
 static __always_inline void set_locked(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
-	WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
+	WRITE_ONCE(lock->locked, _Q_LOCKED_VAL);
 }
 
 
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index e3b5520005db7..3e33336911ffe 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -69,10 +69,8 @@ struct pv_node {
 #define queued_spin_trylock(l)	pv_queued_spin_steal_lock(l)
 static inline bool pv_queued_spin_steal_lock(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
 	if (!(atomic_read(&lock->val) & _Q_LOCKED_PENDING_MASK) &&
-	    (cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0)) {
+	    (cmpxchg(&lock->locked, 0, _Q_LOCKED_VAL) == 0)) {
 		qstat_inc(qstat_pv_lock_stealing, true);
 		return true;
 	}
@@ -87,16 +85,12 @@ static inline bool pv_queued_spin_steal_lock(struct qspinlock *lock)
 #if _Q_PENDING_BITS == 8
 static __always_inline void set_pending(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
-	WRITE_ONCE(l->pending, 1);
+	WRITE_ONCE(lock->pending, 1);
 }
 
 static __always_inline void clear_pending(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
-	WRITE_ONCE(l->pending, 0);
+	WRITE_ONCE(lock->pending, 0);
 }
 
 /*
@@ -106,10 +100,8 @@ static __always_inline void clear_pending(struct qspinlock *lock)
  */
 static __always_inline int trylock_clear_pending(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
-
-	return !READ_ONCE(l->locked) &&
-	       (cmpxchg(&l->locked_pending, _Q_PENDING_VAL, _Q_LOCKED_VAL)
+	return !READ_ONCE(lock->locked) &&
+	       (cmpxchg(&lock->locked_pending, _Q_PENDING_VAL, _Q_LOCKED_VAL)
 			== _Q_PENDING_VAL);
 }
 #else /* _Q_PENDING_BITS == 8 */
@@ -353,7 +345,6 @@ static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
 static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
 {
 	struct pv_node *pn = (struct pv_node *)node;
-	struct __qspinlock *l = (void *)lock;
 
 	/*
 	 * If the vCPU is indeed halted, advance its state to match that of
@@ -372,7 +363,7 @@ static void pv_kick_node(struct qspinlock *lock, struct mcs_spinlock *node)
 	 * the hash table later on at unlock time, no atomic instruction is
 	 * needed.
 	 */
-	WRITE_ONCE(l->locked, _Q_SLOW_VAL);
+	WRITE_ONCE(lock->locked, _Q_SLOW_VAL);
 	(void)pv_hash(lock, pn);
 }
 
@@ -387,7 +378,6 @@ static u32
 pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 {
 	struct pv_node *pn = (struct pv_node *)node;
-	struct __qspinlock *l = (void *)lock;
 	struct qspinlock **lp = NULL;
 	int waitcnt = 0;
 	int loop;
@@ -438,13 +428,13 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 			 *
 			 * Matches the smp_rmb() in __pv_queued_spin_unlock().
 			 */
-			if (xchg(&l->locked, _Q_SLOW_VAL) == 0) {
+			if (xchg(&lock->locked, _Q_SLOW_VAL) == 0) {
 				/*
 				 * The lock was free and now we own the lock.
 				 * Change the lock value back to _Q_LOCKED_VAL
 				 * and unhash the table.
 				 */
-				WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
+				WRITE_ONCE(lock->locked, _Q_LOCKED_VAL);
 				WRITE_ONCE(*lp, NULL);
 				goto gotlock;
 			}
@@ -452,7 +442,7 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 		WRITE_ONCE(pn->state, vcpu_hashed);
 		qstat_inc(qstat_pv_wait_head, true);
 		qstat_inc(qstat_pv_wait_again, waitcnt);
-		pv_wait(&l->locked, _Q_SLOW_VAL);
+		pv_wait(&lock->locked, _Q_SLOW_VAL);
 
 		/*
 		 * Because of lock stealing, the queue head vCPU may not be
@@ -477,7 +467,6 @@ pv_wait_head_or_lock(struct qspinlock *lock, struct mcs_spinlock *node)
 __visible void
 __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 {
-	struct __qspinlock *l = (void *)lock;
 	struct pv_node *node;
 
 	if (unlikely(locked != _Q_SLOW_VAL)) {
@@ -506,7 +495,7 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 	 * Now that we have a reference to the (likely) blocked pv_node,
 	 * release the lock.
 	 */
-	smp_store_release(&l->locked, 0);
+	smp_store_release(&lock->locked, 0);
 
 	/*
 	 * At this point the memory pointed at by lock can be freed/reused,
@@ -532,7 +521,6 @@ __pv_queued_spin_unlock_slowpath(struct qspinlock *lock, u8 locked)
 #ifndef __pv_queued_spin_unlock
 __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 {
-	struct __qspinlock *l = (void *)lock;
 	u8 locked;
 
 	/*
@@ -540,7 +528,7 @@ __visible void __pv_queued_spin_unlock(struct qspinlock *lock)
 	 * unhash. Otherwise it would be possible to have multiple @lock
 	 * entries, which would be BAD.
 	 */
-	locked = cmpxchg_release(&l->locked, _Q_LOCKED_VAL, 0);
+	locked = cmpxchg_release(&lock->locked, _Q_LOCKED_VAL, 0);
 	if (likely(locked == _Q_LOCKED_VAL))
 		return;
 
-- 
2.20.0


* [PATCH STABLE v4.9 05/10] locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (3 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 04/10] locking/qspinlock: Merge 'struct __qspinlock' into 'struct qspinlock' Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 06/10] locking/qspinlock: Remove duplicate clear_pending() function from PV code Sebastian Andrzej Siewior
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Linus Torvalds, boqun.feng, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit 59fb586b4a07b4e1a0ee577140ab4842ba451acd upstream.

The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg() loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).

Unfortunately, the cmpxchg() loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg() loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see
a lockword of (n,0,0). With pending now set, simply queuing could lead
to deadlock as the head of the queue may not have observed the pending
flag being cleared. Conversely, if the head of the queue did observe
pending being cleared, then it could transition the lock from (n,0,0) ->
(0,0,1) meaning that any attempt to "undo" our setting of the pending
bit could race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.
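
Condensed from the hunks below, the resulting pending-bit path looks
roughly like this (comments are mine):

	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
	if (!(val & ~_Q_LOCKED_MASK)) {
		/*
		 * Pending is ours; wait for the current owner to release the
		 * lock, then take it by clearing pending and setting locked.
		 */
		if (val & _Q_LOCKED_MASK)
			smp_cond_load_acquire(&lock->val.counter,
					      !(VAL & _Q_LOCKED_MASK));

		clear_pending_set_locked(lock);	/* *,1,0 -> *,0,1 */
		return;
	}

	/*
	 * Contended: if pending was clear, undo our setting of it before
	 * falling through to the MCS queue.
	 */
	if (!(val & _Q_PENDING_MASK))
		clear_pending(lock);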

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-6-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c          | 102 ++++++++++++++++------------
 kernel/locking/qspinlock_paravirt.h |   5 --
 2 files changed, 58 insertions(+), 49 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cc98050e8bec0..e7ab99a1f4387 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -126,6 +126,17 @@ static inline __pure struct mcs_spinlock *decode_tail(u32 tail)
 #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
 #if _Q_PENDING_BITS == 8
+/**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	WRITE_ONCE(lock->pending, 0);
+}
+
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
@@ -161,6 +172,17 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 
 #else /* _Q_PENDING_BITS == 8 */
 
+/**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	atomic_andnot(_Q_PENDING_VAL, &lock->val);
+}
+
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
@@ -382,7 +404,7 @@ EXPORT_SYMBOL(queued_spin_unlock_wait);
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
-	u32 new, old, tail;
+	u32 old, tail;
 	int idx;
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -405,59 +427,51 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 					       (VAL != _Q_PENDING_VAL) || !cnt--);
 	}
 
+	/*
+	 * If we observe any contention; queue.
+	 */
+	if (val & ~_Q_LOCKED_MASK)
+		goto queue;
+
 	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	for (;;) {
+	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
-		 * If we observe any contention; queue.
+		 * We're pending, wait for the owner to go away.
+		 *
+		 * *,1,1 -> *,1,0
+		 *
+		 * this wait loop must be a load-acquire such that we match the
+		 * store-release that clears the locked bit and create lock
+		 * sequentiality; this is because not all
+		 * clear_pending_set_locked() implementations imply full
+		 * barriers.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
-			goto queue;
-
-		new = _Q_LOCKED_VAL;
-		if (val == new)
-			new |= _Q_PENDING_VAL;
+		if (val & _Q_LOCKED_MASK) {
+			smp_cond_load_acquire(&lock->val.counter,
+					      !(VAL & _Q_LOCKED_MASK));
+		}
 
 		/*
-		 * Acquire semantic is required here as the function may
-		 * return immediately if the lock was free.
+		 * take ownership and clear the pending bit.
+		 *
+		 * *,1,0 -> *,0,1
 		 */
-		old = atomic_cmpxchg_acquire(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
+		clear_pending_set_locked(lock);
+		return;
 	}
 
 	/*
-	 * we won the trylock
+	 * If pending was clear but there are waiters in the queue, then
+	 * we need to undo our setting of pending before we queue ourselves.
 	 */
-	if (new == _Q_LOCKED_VAL)
-		return;
-
-	/*
-	 * we're pending, wait for the owner to go away.
-	 *
-	 * *,1,1 -> *,1,0
-	 *
-	 * this wait loop must be a load-acquire such that we match the
-	 * store-release that clears the locked bit and create lock
-	 * sequentiality; this is because not all clear_pending_set_locked()
-	 * implementations imply full barriers.
-	 */
-	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-	/*
-	 * take ownership and clear the pending bit.
-	 *
-	 * *,1,0 -> *,0,1
-	 */
-	clear_pending_set_locked(lock);
-	return;
+	if (!(val & _Q_PENDING_MASK))
+		clear_pending(lock);
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
@@ -561,15 +575,15 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * claim the lock:
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
-	 * *,0,0 -> *,0,1 : lock, contended
+	 * *,*,0 -> *,*,1 : lock, contended
 	 *
-	 * If the queue head is the only one in the queue (lock value == tail),
-	 * clear the tail code and grab the lock. Otherwise, we only need
-	 * to grab the lock.
+	 * If the queue head is the only one in the queue (lock value == tail)
+	 * and nobody is pending, clear the tail code and grab the lock.
+	 * Otherwise, we only need to grab the lock.
 	 */
 	for (;;) {
 		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail) {
+		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
 			set_locked(lock);
 			break;
 		}
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 3e33336911ffe..9c07c72fb10e9 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -88,11 +88,6 @@ static __always_inline void set_pending(struct qspinlock *lock)
 	WRITE_ONCE(lock->pending, 1);
 }
 
-static __always_inline void clear_pending(struct qspinlock *lock)
-{
-	WRITE_ONCE(lock->pending, 0);
-}
-
 /*
  * The pending bit check in pv_queued_spin_steal_lock() isn't a memory
  * barrier. Therefore, an atomic cmpxchg() is used to acquire the lock
-- 
2.20.0


* [PATCH STABLE v4.9 06/10] locking/qspinlock: Remove duplicate clear_pending() function from PV code
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (4 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 05/10] locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 07/10] locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue Sebastian Andrzej Siewior
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Linus Torvalds, boqun.feng, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit 3bea9adc96842b8a7345c7fb202c16ae9c8d5b25 upstream.

The native clear_pending() function is identical to the PV version, so the
latter can simply be removed.

This fixes the build for systems with >= 16K CPUs using the PV lock implementation.

Reported-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/20180427101619.GB21705@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock_paravirt.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 9c07c72fb10e9..af2a24d484aab 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -105,11 +105,6 @@ static __always_inline void set_pending(struct qspinlock *lock)
 	atomic_or(_Q_PENDING_VAL, &lock->val);
 }
 
-static __always_inline void clear_pending(struct qspinlock *lock)
-{
-	atomic_andnot(_Q_PENDING_VAL, &lock->val);
-}
-
 static __always_inline int trylock_clear_pending(struct qspinlock *lock)
 {
 	int val = atomic_read(&lock->val);
-- 
2.20.0


* [PATCH STABLE v4.9 07/10] locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (5 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 06/10] locking/qspinlock: Remove duplicate clear_pending() function from PV code Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 08/10] locking/qspinlock: Re-order code Sebastian Andrzej Siewior
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Linus Torvalds, boqun.feng, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit c61da58d8a9ba9238250a548f00826eaf44af0f7 upstream.

When a queued locker reaches the head of the queue, it claims the lock
by setting _Q_LOCKED_VAL in the lockword. If there isn't contention, it
must also clear the tail as part of this operation so that subsequent
lockers can avoid taking the slowpath altogether.

Currently this is expressed as a cmpxchg() loop that practically only
runs up to two iterations. This is confusing to the reader and unhelpful
to the compiler. Rewrite the cmpxchg() loop without the loop, so that a
failed cmpxchg() implies that there is contention and we just need to
write to _Q_LOCKED_VAL without considering the rest of the lockword.
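
The resulting claim sequence at the head of the queue can be sketched as
follows (mirroring the hunk below; the added comments are mine):

	/* In the PV case we might already have _Q_LOCKED_VAL set */
	if ((val & _Q_TAIL_MASK) == tail) {
		/* One shot: clear the tail and grab the lock in one go */
		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
		if (old == val)
			goto release;	/* No contention */
	}

	/* Either somebody is queued behind us or _Q_PENDING_VAL is set */
	set_locked(lock);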

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-7-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e7ab99a1f4387..ba5dc86a4d831 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -581,24 +581,21 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * and nobody is pending, clear the tail code and grab the lock.
 	 * Otherwise, we only need to grab the lock.
 	 */
-	for (;;) {
-		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
-			set_locked(lock);
-			break;
-		}
+
+	/* In the PV case we might already have _Q_LOCKED_VAL set */
+	if ((val & _Q_TAIL_MASK) == tail) {
 		/*
 		 * The smp_cond_load_acquire() call above has provided the
-		 * necessary acquire semantics required for locking. At most
-		 * two iterations of this loop may be ran.
+		 * necessary acquire semantics required for locking.
 		 */
 		old = atomic_cmpxchg_relaxed(&lock->val, val, _Q_LOCKED_VAL);
 		if (old == val)
-			goto release;	/* No contention */
-
-		val = old;
+			goto release; /* No contention */
 	}
 
+	/* Either somebody is queued behind us or _Q_PENDING_VAL is set */
+	set_locked(lock);
+
 	/*
 	 * contended path; wait for next if not observed yet, release.
 	 */
-- 
2.20.0


* [PATCH STABLE v4.9 08/10] locking/qspinlock: Re-order code
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (6 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 07/10] locking/qspinlock: Kill cmpxchg() loop when claiming lock from head of queue Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 09/10] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound Sebastian Andrzej Siewior
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Linus Torvalds, andrea.parri, longman, Ingo Molnar,
	Sebastian Andrzej Siewior

From: Peter Zijlstra <peterz@infradead.org>

commit 53bf57fab7321fb42b703056a4c80fc9d986d170 upstream.

Flip the branch condition after atomic_fetch_or_acquire(_Q_PENDING_VAL)
such that we lose the indent. This also results in a more natural code
flow IMO.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Link: https://lkml.kernel.org/r/20181003130257.156322446@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/locking/qspinlock.c | 54 ++++++++++++++++++--------------------
 1 file changed, 26 insertions(+), 28 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index ba5dc86a4d831..f493a4fce6243 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -440,38 +440,36 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
 	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
-	if (!(val & ~_Q_LOCKED_MASK)) {
-		/*
-		 * We're pending, wait for the owner to go away.
-		 *
-		 * *,1,1 -> *,1,0
-		 *
-		 * this wait loop must be a load-acquire such that we match the
-		 * store-release that clears the locked bit and create lock
-		 * sequentiality; this is because not all
-		 * clear_pending_set_locked() implementations imply full
-		 * barriers.
-		 */
-		if (val & _Q_LOCKED_MASK) {
-			smp_cond_load_acquire(&lock->val.counter,
-					      !(VAL & _Q_LOCKED_MASK));
-		}
-
-		/*
-		 * take ownership and clear the pending bit.
-		 *
-		 * *,1,0 -> *,0,1
-		 */
-		clear_pending_set_locked(lock);
-		return;
+	/*
+	 * If we observe any contention; undo and queue.
+	 */
+	if (unlikely(val & ~_Q_LOCKED_MASK)) {
+		if (!(val & _Q_PENDING_MASK))
+			clear_pending(lock);
+		goto queue;
 	}
 
 	/*
-	 * If pending was clear but there are waiters in the queue, then
-	 * we need to undo our setting of pending before we queue ourselves.
+	 * We're pending, wait for the owner to go away.
+	 *
+	 * 0,1,1 -> 0,1,0
+	 *
+	 * this wait loop must be a load-acquire such that we match the
+	 * store-release that clears the locked bit and create lock
+	 * sequentiality; this is because not all
+	 * clear_pending_set_locked() implementations imply full
+	 * barriers.
 	 */
-	if (!(val & _Q_PENDING_MASK))
-		clear_pending(lock);
+	if (val & _Q_LOCKED_MASK)
+		smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
+
+	/*
+	 * take ownership and clear the pending bit.
+	 *
+	 * 0,1,0 -> 0,0,1
+	 */
+	clear_pending_set_locked(lock);
+	return;
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
-- 
2.20.0


* [PATCH STABLE v4.9 09/10] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (7 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 08/10] locking/qspinlock: Re-order code Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:09 ` [PATCH STABLE v4.9 10/10] locking/qspinlock, x86: Provide liveness guarantee Sebastian Andrzej Siewior
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Waiman Long, Linus Torvalds, boqun.feng, linux-arm-kernel,
	paulmck, Ingo Molnar, Sebastian Andrzej Siewior

From: Will Deacon <will.deacon@arm.com>

commit b247be3fe89b6aba928bf80f4453d1c4ba8d2063 upstream.

On x86, atomic_cond_read_relaxed will busy-wait with a cpu_relax() loop,
so it is desirable to increase the number of times we spin on the qspinlock
lockword when it is found to be transitioning from pending to locked.

According to Waiman Long:

 | Ideally, the spinning times should be at least a few times the typical
 | cacheline load time from memory which I think can be down to 100ns or
 | so for each cacheline load with the newest systems or up to several
 | hundreds ns for older systems.

which in his benchmarking corresponded to 512 iterations.
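
In other words, x86 just overrides the default bound used by the earlier
bounded-spin patch; a sketch of how the two pieces fit together (comments
are mine):

/* arch/x86/include/asm/qspinlock.h, as in the hunk below */
#define _Q_PENDING_LOOPS	(1 << 9)

	/*
	 * kernel/locking/qspinlock.c (from patch 03/10): the lockword is now
	 * re-read up to roughly 512 times before the waiter gives up on the
	 * pending->locked hand-over and queues instead.
	 */
	if (val == _Q_PENDING_VAL) {
		int cnt = _Q_PENDING_LOOPS;

		val = smp_cond_load_acquire(&lock->val.counter,
					    (VAL != _Q_PENDING_VAL) || !cnt--);
	}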

Suggested-by: Waiman Long <longman@redhat.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-5-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/include/asm/qspinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index e07cc206919d4..8b1ba1607091c 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -5,6 +5,8 @@
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
 
+#define _Q_PENDING_LOOPS	(1 << 9)
+
 #define	queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
-- 
2.20.0


* [PATCH STABLE v4.9 10/10] locking/qspinlock, x86: Provide liveness guarantee
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (8 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 09/10] locking/qspinlock/x86: Increase _Q_PENDING_LOOPS upper bound Sebastian Andrzej Siewior
@ 2018-12-13 14:09 ` Sebastian Andrzej Siewior
  2018-12-13 14:33 ` your mail Sasha Levin
  2018-12-14  7:11 ` Greg KH
  11 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 14:09 UTC (permalink / raw)
  To: stable
  Cc: Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner,
	Linus Torvalds, andrea.parri, longman, Ingo Molnar,
	Sebastian Andrzej Siewior

From: Peter Zijlstra <peterz@infradead.org>

commit 7aa54be2976550f17c11a1c3e3630002dea39303 upstream.

On x86 we cannot do fetch_or() with a single instruction and thus end up
using a cmpxchg loop, which reduces determinism. Replace the fetch_or()
with a composite operation: tas-pending + load.

Using two instructions of course opens a window we previously did not
have. Consider the scenario:

	CPU0		CPU1		CPU2

 1)	lock
	  trylock -> (0,0,1)

 2)			lock
			  trylock /* fail */

 3)	unlock -> (0,0,0)

 4)					lock
					  trylock -> (0,0,1)

 5)			  tas-pending -> (0,1,1)
			  load-val <- (0,1,0) from 3

 6)			  clear-pending-set-locked -> (0,0,1)

			  FAIL: _2_ owners

where 5) is our new composite operation. When we consider each part of
the qspinlock state as a separate variable (as we can when
_Q_PENDING_BITS == 8) then the above is entirely possible, because
tas-pending will only RmW the pending byte, so the later load is able
to observe prior tail and lock state (but not earlier than its own
trylock, which operates on the whole word, due to coherence).

To avoid this we need 2 things:

 - the load must come after the tas-pending (obviously, otherwise it
   can trivially observe prior state).

 - the tas-pending must be a full-word RmW instruction (it cannot be an
   XCHGB, for example), such that we cannot observe other state prior to
   setting pending.

On x86 we can realize this by using "LOCK BTS m32, r32" for
tas-pending followed by a regular load.

Note that observing later state is not a problem:

 - if we fail to observe a later unlock, we'll simply spin-wait for
   that store to become visible.

 - if we observe a later xchg_tail(), there is no difference from that
   xchg_tail() having taken place before the tas-pending.
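
A sketch of the resulting composite operation on x86, mirroring the hunks
below (the generic code keeps atomic_fetch_or_acquire() as a fallback; the
comments are mine):

static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
{
	u32 val = 0;

	/*
	 * Set the pending bit with a full-word locked RmW ("LOCK BTS" on
	 * lock->val) and record its previous value.
	 */
	if (__queued_RMW_btsl(lock))
		val |= _Q_PENDING_VAL;

	/*
	 * This load is ordered after the tas-pending above, so the tail and
	 * locked bits observed here cannot be older than the pending update.
	 */
	val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;

	return val;
}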

Suggested-by: Will Deacon <will.deacon@arm.com>
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: andrea.parri@amarulasolutions.com
Cc: longman@redhat.com
Fixes: 59fb586b4a07 ("locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath")
Link: https://lkml.kernel.org/r/20181003130957.183726335@infradead.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bigeasy: GEN_BINARY_RMWcc macro redo]
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 arch/x86/include/asm/qspinlock.h | 21 +++++++++++++++++++++
 kernel/locking/qspinlock.c       | 17 ++++++++++++++++-
 2 files changed, 37 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 8b1ba1607091c..9e78e963afb84 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -4,9 +4,30 @@
 #include <asm/cpufeature.h>
 #include <asm-generic/qspinlock_types.h>
 #include <asm/paravirt.h>
+#include <asm/rmwcc.h>
 
 #define _Q_PENDING_LOOPS	(1 << 9)
 
+#define queued_fetch_set_pending_acquire queued_fetch_set_pending_acquire
+
+static __always_inline bool __queued_RMW_btsl(struct qspinlock *lock)
+{
+	GEN_BINARY_RMWcc(LOCK_PREFIX "btsl", lock->val.counter,
+			 "I", _Q_PENDING_OFFSET, "%0", c);
+}
+
+static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
+{
+	u32 val = 0;
+
+	if (__queued_RMW_btsl(lock))
+		val |= _Q_PENDING_VAL;
+
+	val |= atomic_read(&lock->val) & ~_Q_PENDING_MASK;
+
+	return val;
+}
+
 #define	queued_spin_unlock queued_spin_unlock
 /**
  * queued_spin_unlock - release a queued spinlock
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index f493a4fce6243..0ed478e10071d 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -224,6 +224,20 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 }
 #endif /* _Q_PENDING_BITS == 8 */
 
+/**
+ * queued_fetch_set_pending_acquire - fetch the whole lock value and set pending
+ * @lock : Pointer to queued spinlock structure
+ * Return: The previous lock value
+ *
+ * *,*,* -> *,1,*
+ */
+#ifndef queued_fetch_set_pending_acquire
+static __always_inline u32 queued_fetch_set_pending_acquire(struct qspinlock *lock)
+{
+	return atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+}
+#endif
+
 /**
  * set_locked - Set the lock bit and own the lock
  * @lock: Pointer to queued spinlock structure
@@ -439,7 +453,8 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	val = queued_fetch_set_pending_acquire(lock);
+
 	/*
 	 * If we observe any contention; undo and queue.
 	 */
-- 
2.20.0


* Re: your mail
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (9 preceding siblings ...)
  2018-12-13 14:09 ` [PATCH STABLE v4.9 10/10] locking/qspinlock, x86: Provide liveness guarantee Sebastian Andrzej Siewior
@ 2018-12-13 14:33 ` Sasha Levin
  2018-12-13 14:39   ` Sasha Levin
  2018-12-13 14:53   ` Peter Zijlstra
  2018-12-14  7:11 ` Greg KH
  11 siblings, 2 replies; 16+ messages in thread
From: Sasha Levin @ 2018-12-13 14:33 UTC (permalink / raw)
  To: Backport, for, cachelinestarvationonx86
  Cc: stable, Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner

On Thu, Dec 13, 2018 at 03:09:15PM +0100, Sebastian Andrzej Siewior wrote:
>Hi,
>
>this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
>Provide liveness guarantee") for the v4.9 stable tree.
>For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
>x86.
>For v4.9 it has been decided to do a minimal backport of the final fix
>(including all its dependencies).
>With this backport I can't reproduce the issue in the latest v4.9-RT
>tree. I was able to boot (and use) an arm64 box with these patches so it
>is not broken in an obvious way.

What about 4.14? I see that at least some of these patches are missing
there.

--
Thanks,
Sasha


* Re: your mail
  2018-12-13 14:33 ` your mail Sasha Levin
@ 2018-12-13 14:39   ` Sasha Levin
  2018-12-13 15:13     ` Sebastian Andrzej Siewior
  2018-12-13 14:53   ` Peter Zijlstra
  1 sibling, 1 reply; 16+ messages in thread
From: Sasha Levin @ 2018-12-13 14:39 UTC (permalink / raw)
  To: bigeasy
  Cc: stable, Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner

On Thu, Dec 13, 2018 at 09:33:36AM -0500, Sasha Levin wrote:
>On Thu, Dec 13, 2018 at 03:09:15PM +0100, Sebastian Andrzej Siewior wrote:
>>Hi,
>>
>>this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
>>Provide liveness guarantee") for the v4.9 stable tree.
>>For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
>>x86.
>>For v4.9 it has been decided to do a minimal backport of the final fix
>>(including all its dependencies).
>>With this backport I can't reproduce the issue in the latest v4.9-RT
>>tree. I was able to boot (and use) an arm64 box with these patches so it
>>is not broken in an obvious way.
>
>What about 4.14? I see that at least some of these patches are missing
>there.

Hrm, there was something really fishy with the headers on your original
mail...

--
Thanks,
Sasha


* Re: your mail
  2018-12-13 14:33 ` your mail Sasha Levin
  2018-12-13 14:39   ` Sasha Levin
@ 2018-12-13 14:53   ` Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Peter Zijlstra @ 2018-12-13 14:53 UTC (permalink / raw)
  To: Sasha Levin
  Cc: Backport, for, cachelinestarvationonx86, stable, Will Deacon,
	Thomas Gleixner, Daniel Wagner

On Thu, Dec 13, 2018 at 09:33:36AM -0500, Sasha Levin wrote:
> On Thu, Dec 13, 2018 at 03:09:15PM +0100, Sebastian Andrzej Siewior wrote:
> > Hi,
> > 
> > this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
> > Provide liveness guarantee") for the v4.9 stable tree.
> > For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
> > x86.
> > For v4.9 it has been decided to do a minimal backport of the final fix
> > (including all its dependencies).
> > With this backport I can't reproduce the issue in the latest v4.9-RT
> > tree. I was able to boot (and use) an arm64 box with these patches so it
> > is not broken in an obvious way.
> 
> What about 4.14? I see that at least some of these patches are missing
> there.

4.14 can have a simpler backport indeed


* Re: your mail
  2018-12-13 14:39   ` Sasha Levin
@ 2018-12-13 15:13     ` Sebastian Andrzej Siewior
  0 siblings, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2018-12-13 15:13 UTC (permalink / raw)
  To: Sasha Levin
  Cc: stable, Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner

On 2018-12-13 09:39:46 [-0500], Sasha Levin wrote:
> On Thu, Dec 13, 2018 at 09:33:36AM -0500, Sasha Levin wrote:
> > On Thu, Dec 13, 2018 at 03:09:15PM +0100, Sebastian Andrzej Siewior wrote:
> > > Hi,
> > > 
> > > this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
> > > Provide liveness guarantee") for the v4.9 stable tree.
> > > For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
> > > x86.
> > > For v4.9 it has been decided to do a minimal backport of the final fix
> > > (including all its dependencies).
> > > With this backport I can't reproduce the issue in the latest v4.9-RT
> > > tree. I was able to boot (and use) an arm64 box with these patches so it
> > > is not broken in an obvious way.
> > 
> > What about 4.14? I see that at least some of these patches are missing
> > there.

so v4.4 was easy peasy, v4.9 required a little work. So let me worry
about v4.14 and v4.19 next.

> Hrm, there was something really fishy with the headers on your original
> mail...

Interesting and I'm sorry. So infradead.org rejected all of my emails
saying something similar but I have no clue what went wrong.

Sebastian


* Re: your mail
  2018-12-13 14:09 Sebastian Andrzej Siewior
                   ` (10 preceding siblings ...)
  2018-12-13 14:33 ` your mail Sasha Levin
@ 2018-12-14  7:11 ` Greg KH
  11 siblings, 0 replies; 16+ messages in thread
From: Greg KH @ 2018-12-14  7:11 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: stable, Peter Zijlstra, Will Deacon, Thomas Gleixner, Daniel Wagner

On Thu, Dec 13, 2018 at 03:09:15PM +0100, Sebastian Andrzej Siewior wrote:
> Hi,
> 
> this is a backport of commit 7aa54be297655 ("locking/qspinlock, x86:
> Provide liveness guarantee") for the v4.9 stable tree.
> For the v4.4 tree the ARCH_USE_QUEUED_SPINLOCKS option got disabled on
> x86.
> For v4.9 it has been decided to do a minimal backport of the final fix
> (including all its dependencies).
> With this backport I can't reproduce the issue in the latest v4.9-RT
> tree. I was able to boot (and use) an arm64 box with these patches so it
> is not broken in an obvious way.

As Peter said, a 4.14 backport would be simpler, but I would prefer to
wait for that before accepting these patches into 4.9.

I don't want people to move from 4.9 to 4.14 and hit a regression right
away.  So if we could get a 4.14 backport first, that would be wonderful
and then allow me to take the 4.9 patches.

Seeing patches in 4.9 and 4.19 but not in 4.14 does not make me feel
good :)

And given the horrible header mistakes on this series, I'm going to drop
them to prevent anyone else from having to hand-edit them in order to
get things cleaned up.  Please resend this series after you have done
that.

thanks,

greg k-h

