From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	stable@vger.kernel.org, Will Deacon <will.deacon@arm.com>,
	"Peter Zijlstra (Intel)" <peterz@infradead.org>,
	Waiman Long <longman@redhat.com>,
	Linus Torvalds <torvalds@linux-foundation.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	boqun.feng@gmail.com, linux-arm-kernel@lists.infradead.org,
	paulmck@linux.vnet.ibm.com, Ingo Molnar <mingo@kernel.org>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Sasha Levin <sashal@kernel.org>
Subject: [PATCH 4.9 24/61] locking/qspinlock: Remove unbounded cmpxchg() loop from locking slowpath
Date: Thu, 20 Dec 2018 10:18:24 +0100
Message-ID: <20181220085844.710354700@linuxfoundation.org>
In-Reply-To: <20181220085843.743900603@linuxfoundation.org>

4.9-stable review patch.  If anyone has any objections, please let me know.

------------------

commit 59fb586b4a07b4e1a0ee577140ab4842ba451acd upstream.

The qspinlock locking slowpath utilises a "pending" bit as a simple form
of an embedded test-and-set lock that can avoid the overhead of explicit
queuing in cases where the lock is held but uncontended. This bit is
managed using a cmpxchg() loop which tries to transition the uncontended
lock word from (0,0,0) -> (0,0,1) or (0,0,1) -> (0,1,1).
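
Purely for illustration (this is not part of the patch), a minimal
userspace sketch of that pre-patch trylock/pending loop using C11
atomics. The bit layout assumes the common NR_CPUS < 16K configuration
(locked byte in bits 0-7, pending in bit 8, tail in bits 16-31); the
MCS queueing path and the paravirt variants are omitted.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define Q_LOCKED_VAL   (1U << 0)	/* (0,0,1) */
#define Q_PENDING_VAL  (1U << 8)	/* (0,1,0) */
#define Q_LOCKED_MASK  0xffU

/*
 * Returns true if we either took the lock or became the pending
 * waiter; false means contention was seen and the caller must queue.
 */
static bool trylock_or_pending(_Atomic uint32_t *lock, uint32_t val)
{
	for (;;) {
		uint32_t new, old;

		/* Any tail or pending bits visible: give up and queue. */
		if (val & ~Q_LOCKED_MASK)
			return false;

		/* (0,0,0) -> (0,0,1) trylock; (0,0,1) -> (0,1,1) pending. */
		new = Q_LOCKED_VAL;
		if (val == new)
			new |= Q_PENDING_VAL;

		old = val;
		if (atomic_compare_exchange_strong_explicit(lock, &old, new,
							    memory_order_acquire,
							    memory_order_relaxed))
			return true;

		/*
		 * Lost the race: retry with the value just observed.
		 * This retry is unbounded, which is the starvation
		 * window the patch closes.
		 */
		val = old;
	}
}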

Unfortunately, the cmpxchg() loop is unbounded and lockers can be starved
indefinitely if the lock word is seen to oscillate between unlocked
(0,0,0) and locked (0,0,1). This could happen if concurrent lockers are
able to take the lock in the cmpxchg() loop without queuing and pass it
around amongst themselves.

This patch fixes the problem by unconditionally setting _Q_PENDING_VAL
using atomic_fetch_or, and then inspecting the old value to see whether
we need to spin on the current lock owner, or whether we now effectively
hold the lock. The tricky scenario is when concurrent lockers end up
queuing on the lock and the lock becomes available, causing us to see
a lock word of (n,0,0). With pending now set, simply queuing could lead
to deadlock as the head of the queue may not have observed the pending
flag being cleared. Conversely, if the head of the queue did observe
pending being cleared, then it could transition the lock from (n,0,0) ->
(0,0,1) meaning that any attempt to "undo" our setting of the pending
bit could race with a concurrent locker trying to set it.

We handle this race by preserving the pending bit when taking the lock
after reaching the head of the queue and leaving the tail entry intact
if we saw pending set, because we know that the tail is going to be
updated shortly.
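
Again purely as illustration (the authoritative change is the diff
below), the reworked pending path sketched with C11 atomics under the
same assumed bit layout; names mirror the patch, but the queueing code
that handles the head-of-queue race described above is omitted.

#include <stdatomic.h>
#include <stdint.h>

#define Q_LOCKED_VAL    (1U << 0)
#define Q_PENDING_VAL   (1U << 8)
#define Q_LOCKED_MASK   0xffU
#define Q_PENDING_MASK  0xff00U

static void pending_path_sketch(_Atomic uint32_t *lock, uint32_t val)
{
	/* Any tail or pending waiter already visible: go queue. */
	if (val & ~Q_LOCKED_MASK)
		goto queue;

	/*
	 * Unconditionally set pending; the returned old value tells us
	 * whether we now effectively own the pending slot.
	 */
	val = atomic_fetch_or_explicit(lock, Q_PENDING_VAL,
				       memory_order_acquire);
	if (!(val & ~Q_LOCKED_MASK)) {
		/* Wait for the current owner (if any) to release. */
		while (atomic_load_explicit(lock, memory_order_acquire) &
		       Q_LOCKED_MASK)
			;
		/*
		 * Take ownership and clear pending with one atomic add:
		 * adding (Q_LOCKED_VAL - Q_PENDING_VAL) flips pending
		 * 1->0 and locked 0->1 while leaving the tail untouched.
		 */
		atomic_fetch_add_explicit(lock,
					  (uint32_t)(Q_LOCKED_VAL - Q_PENDING_VAL),
					  memory_order_relaxed);
		return;
	}

	/*
	 * Waiters were already queued. Undo pending only if our
	 * fetch_or was the one that set it, then fall through to queue.
	 */
	if (!(val & Q_PENDING_MASK))
		atomic_fetch_and_explicit(lock, ~Q_PENDING_VAL,
					  memory_order_relaxed);
queue:
	;	/* MCS queueing path omitted in this sketch. */
}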

Signed-off-by: Will Deacon <will.deacon@arm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Waiman Long <longman@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: boqun.feng@gmail.com
Cc: linux-arm-kernel@lists.infradead.org
Cc: paulmck@linux.vnet.ibm.com
Link: http://lkml.kernel.org/r/1524738868-31318-6-git-send-email-will.deacon@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 kernel/locking/qspinlock.c          | 102 ++++++++++++++++------------
 kernel/locking/qspinlock_paravirt.h |   5 --
 2 files changed, 58 insertions(+), 49 deletions(-)

diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index cc98050e8bec..e7ab99a1f438 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -126,6 +126,17 @@ static inline __pure struct mcs_spinlock *decode_tail(u32 tail)
 #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK)
 
 #if _Q_PENDING_BITS == 8
+/**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	WRITE_ONCE(lock->pending, 0);
+}
+
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
@@ -161,6 +172,17 @@ static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
 
 #else /* _Q_PENDING_BITS == 8 */
 
+/**
+ * clear_pending - clear the pending bit.
+ * @lock: Pointer to queued spinlock structure
+ *
+ * *,1,* -> *,0,*
+ */
+static __always_inline void clear_pending(struct qspinlock *lock)
+{
+	atomic_andnot(_Q_PENDING_VAL, &lock->val);
+}
+
 /**
  * clear_pending_set_locked - take ownership and clear the pending bit.
  * @lock: Pointer to queued spinlock structure
@@ -382,7 +404,7 @@ EXPORT_SYMBOL(queued_spin_unlock_wait);
 void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 {
 	struct mcs_spinlock *prev, *next, *node;
-	u32 new, old, tail;
+	u32 old, tail;
 	int idx;
 
 	BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS));
@@ -405,59 +427,51 @@ void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
 					       (VAL != _Q_PENDING_VAL) || !cnt--);
 	}
 
+	/*
+	 * If we observe any contention; queue.
+	 */
+	if (val & ~_Q_LOCKED_MASK)
+		goto queue;
+
 	/*
 	 * trylock || pending
 	 *
 	 * 0,0,0 -> 0,0,1 ; trylock
 	 * 0,0,1 -> 0,1,1 ; pending
 	 */
-	for (;;) {
+	val = atomic_fetch_or_acquire(_Q_PENDING_VAL, &lock->val);
+	if (!(val & ~_Q_LOCKED_MASK)) {
 		/*
-		 * If we observe any contention; queue.
+		 * We're pending, wait for the owner to go away.
+		 *
+		 * *,1,1 -> *,1,0
+		 *
+		 * this wait loop must be a load-acquire such that we match the
+		 * store-release that clears the locked bit and create lock
+		 * sequentiality; this is because not all
+		 * clear_pending_set_locked() implementations imply full
+		 * barriers.
 		 */
-		if (val & ~_Q_LOCKED_MASK)
-			goto queue;
-
-		new = _Q_LOCKED_VAL;
-		if (val == new)
-			new |= _Q_PENDING_VAL;
+		if (val & _Q_LOCKED_MASK) {
+			smp_cond_load_acquire(&lock->val.counter,
+					      !(VAL & _Q_LOCKED_MASK));
+		}
 
 		/*
-		 * Acquire semantic is required here as the function may
-		 * return immediately if the lock was free.
+		 * take ownership and clear the pending bit.
+		 *
+		 * *,1,0 -> *,0,1
 		 */
-		old = atomic_cmpxchg_acquire(&lock->val, val, new);
-		if (old == val)
-			break;
-
-		val = old;
-	}
-
-	/*
-	 * we won the trylock
-	 */
-	if (new == _Q_LOCKED_VAL)
+		clear_pending_set_locked(lock);
 		return;
+	}
 
 	/*
-	 * we're pending, wait for the owner to go away.
-	 *
-	 * *,1,1 -> *,1,0
-	 *
-	 * this wait loop must be a load-acquire such that we match the
-	 * store-release that clears the locked bit and create lock
-	 * sequentiality; this is because not all clear_pending_set_locked()
-	 * implementations imply full barriers.
-	 */
-	smp_cond_load_acquire(&lock->val.counter, !(VAL & _Q_LOCKED_MASK));
-
-	/*
-	 * take ownership and clear the pending bit.
-	 *
-	 * *,1,0 -> *,0,1
+	 * If pending was clear but there are waiters in the queue, then
+	 * we need to undo our setting of pending before we queue ourselves.
 	 */
-	clear_pending_set_locked(lock);
-	return;
+	if (!(val & _Q_PENDING_MASK))
+		clear_pending(lock);
 
 	/*
 	 * End of pending bit optimistic spinning and beginning of MCS
@@ -561,15 +575,15 @@ locked:
 	 * claim the lock:
 	 *
 	 * n,0,0 -> 0,0,1 : lock, uncontended
-	 * *,0,0 -> *,0,1 : lock, contended
+	 * *,*,0 -> *,*,1 : lock, contended
 	 *
-	 * If the queue head is the only one in the queue (lock value == tail),
-	 * clear the tail code and grab the lock. Otherwise, we only need
-	 * to grab the lock.
+	 * If the queue head is the only one in the queue (lock value == tail)
+	 * and nobody is pending, clear the tail code and grab the lock.
+	 * Otherwise, we only need to grab the lock.
 	 */
 	for (;;) {
 		/* In the PV case we might already have _Q_LOCKED_VAL set */
-		if ((val & _Q_TAIL_MASK) != tail) {
+		if ((val & _Q_TAIL_MASK) != tail || (val & _Q_PENDING_MASK)) {
 			set_locked(lock);
 			break;
 		}
diff --git a/kernel/locking/qspinlock_paravirt.h b/kernel/locking/qspinlock_paravirt.h
index 3e33336911ff..9c07c72fb10e 100644
--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -88,11 +88,6 @@ static __always_inline void set_pending(struct qspinlock *lock)
 	WRITE_ONCE(lock->pending, 1);
 }
 
-static __always_inline void clear_pending(struct qspinlock *lock)
-{
-	WRITE_ONCE(lock->pending, 0);
-}
-
 /*
  * The pending bit check in pv_queued_spin_steal_lock() isn't a memory
  * barrier. Therefore, an atomic cmpxchg() is used to acquire the lock
-- 
2.19.1