* [PATCH v3 0/4] locking/rwsem: Add reader-owned state to the owner field
From: Waiman Long @ 2016-05-12 22:56 UTC
  To: Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, Davidlohr Bueso, Jason Low, Dave Chinner,
	Peter Hurley, Scott J Norton, Douglas Hatch, Waiman Long

 v2->v3:
  - Make minor code changes as suggested by PeterZ & Peter Hurley.
  - Add 2 minor patches (#2 & #3) to improve the rwsem code.
  - Add a 4th patch to streamline the rwsem_optimistic_spin() code.

 v1->v2:
  - Add rwsem_is_reader_owned() helper & rename rwsem_reader_owned()
    to rwsem_set_reader_owned().
  - Add more comments to clarify the purpose of some of the code
    changes.

Patch 1 is the main patch of this series, whereas patches 2 & 3 are
just minor patches to improve the efficiency of the rwsem code. Patch
4 streamlines rwsem_optimistic_spin() to make it simpler.

Waiman Long (4):
  locking/rwsem: Add reader-owned state to the owner field
  locking/rwsem: Don't wake up one's own task
  locking/rwsem: Improve reader wakeup code
  locking/rwsem: Streamline the rwsem_optimistic_spin() code

 kernel/locking/rwsem-xadd.c |   75 ++++++++++++++++++++++++------------------
 kernel/locking/rwsem.c      |    8 +++-
 kernel/locking/rwsem.h      |   41 +++++++++++++++++++++++
 3 files changed, 90 insertions(+), 34 deletions(-)

* [PATCH v3 1/4] locking/rwsem: Add reader-owned state to the owner field
From: Waiman Long @ 2016-05-12 22:56 UTC
  To: Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, Davidlohr Bueso, Jason Low, Dave Chinner,
	Peter Hurley, Scott J Norton, Douglas Hatch, Waiman Long

Currently, it is not possible to reliably determine whether a reader
owns an rwsem by looking at the content of the rwsem data structure.
This patch adds a new state RWSEM_READER_OWNED to the owner field
to indicate that readers currently own the lock. This enables us to
address the following 2 issues in the rwsem optimistic spinning code:

 1) rwsem_can_spin_on_owner() will disallow optimistic spinning if
    the owner field is NULL, which can mean either that readers own
    the lock or that the owning writer hasn't set the owner field yet.
    In the latter case, we miss the chance to do optimistic spinning.

 2) While a writer is waiting in the OSQ and a reader takes the lock,
    the writer will continue to spin in the main rwsem_optimistic_spin()
    loop once it comes out of the OSQ, as the owner field is still NULL.
    This wastes CPU cycles if some of the readers are sleeping.

With the new state, optimistic spinning can go forward as long as
the owner field is not RWSEM_READER_OWNED and, if the owner is set,
the owning writer is running; spinning stops immediately once the
reader-owned state is reached.
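
Schematically, the owner field then has three states, and the spin
decision reduces to two helper checks (a sketch condensed from the
rwsem.h hunk below):

  /* Sketch of the three owner states introduced by this patch. */
  #define RWSEM_READER_OWNED	((struct task_struct *)1UL)

  static inline bool rwsem_owner_is_writer(struct task_struct *owner)
  {
	/* Any non-NULL value other than RWSEM_READER_OWNED is a writer. */
	return owner && owner != RWSEM_READER_OWNED;
  }

  static inline bool rwsem_owner_is_reader(struct task_struct *owner)
  {
	return owner == RWSEM_READER_OWNED;
  }

  /*
   * Spin decision:
   *  owner == NULL               - lock free or owner not set: keep spinning
   *  owner == RWSEM_READER_OWNED - readers hold the lock: stop spinning
   *  any other non-zero value    - a writer holds it: spin while it runs
   */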

On a 4-socket Haswell machine running a 4.6-rc1 based kernel, fio
was run with multithreaded randrw and randwrite tests against the
same file on an XFS partition backed by an NVDIMM. The aggregated
bandwidths before and after the patch were as follows:

  Test      BW before patch     BW after patch  % change
  ----      ---------------     --------------  --------
  randrw         988 MB/s          1192 MB/s      +21%
  randwrite     1513 MB/s          1623 MB/s      +7.3%

The perf profiles of the rwsem_down_write_failed() function in the
randrw test before (first line) and after (second line) the patch were:

   19.95%  5.88%  fio  [kernel.vmlinux]  [k] rwsem_down_write_failed
   14.20%  1.52%  fio  [kernel.vmlinux]  [k] rwsem_down_write_failed

The actual CPU cycles spent in rwsem_down_write_failed() dropped from
5.88% to 1.52% after the patch.

The xfstests suite was also run, and no regression was observed.

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Acked-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dave@stgolabs.net>
---
 kernel/locking/rwsem-xadd.c |   41 ++++++++++++++++++++++-------------------
 kernel/locking/rwsem.c      |    8 ++++++--
 kernel/locking/rwsem.h      |   41 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 69 insertions(+), 21 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 09e30c6..7ccab5c 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -155,6 +155,12 @@ __rwsem_do_wake(struct rw_semaphore *sem, enum rwsem_wake_type wake_type)
 			/* Last active locker left. Retry waking readers. */
 			goto try_reader_grant;
 		}
+		/*
+		 * It is not really necessary to set it to reader-owned here,
+		 * but it gives the spinners an early indication that the
+		 * readers now have the lock.
+		 */
+		rwsem_set_reader_owned(sem);
 	}
 
 	/* Grant an infinite number of read locks to the readers at the front
@@ -306,16 +312,11 @@ static inline bool rwsem_can_spin_on_owner(struct rw_semaphore *sem)
 
 	rcu_read_lock();
 	owner = READ_ONCE(sem->owner);
-	if (!owner) {
-		long count = READ_ONCE(sem->count);
+	if (!rwsem_owner_is_writer(owner)) {
 		/*
-		 * If sem->owner is not set, yet we have just recently entered the
-		 * slowpath with the lock being active, then there is a possibility
-		 * reader(s) may have the lock. To be safe, bail spinning in these
-		 * situations.
+		 * Don't spin if the rwsem is reader-owned.
 		 */
-		if (count & RWSEM_ACTIVE_MASK)
-			ret = false;
+		ret = !rwsem_owner_is_reader(owner);
 		goto done;
 	}
 
@@ -328,8 +329,6 @@ done:
 static noinline
 bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 {
-	long count;
-
 	rcu_read_lock();
 	while (sem->owner == owner) {
 		/*
@@ -350,16 +349,11 @@ bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 	}
 	rcu_read_unlock();
 
-	if (READ_ONCE(sem->owner))
-		return true; /* new owner, continue spinning */
-
 	/*
-	 * When the owner is not set, the lock could be free or
-	 * held by readers. Check the counter to verify the
-	 * state.
+	 * If there is a new owner or the owner is not set, we continue
+	 * spinning.
 	 */
-	count = READ_ONCE(sem->count);
-	return (count == 0 || count == RWSEM_WAITING_BIAS);
+	return !rwsem_owner_is_reader(sem->owner);
 }
 
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
@@ -378,7 +372,16 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 
 	while (true) {
 		owner = READ_ONCE(sem->owner);
-		if (owner && !rwsem_spin_on_owner(sem, owner))
+		/*
+		 * Don't spin if
+		 * 1) the owner is a reader as we can't determine if the
+		 *    reader is actively running or not.
+		 * 2) The rwsem_spin_on_owner() returns false which means
+		 *    the owner isn't running.
+		 */
+		if (rwsem_owner_is_reader(owner) ||
+		   (rwsem_owner_is_writer(owner) &&
+		   !rwsem_spin_on_owner(sem, owner)))
 			break;
 
 		/* wait_lock will be acquired if write_lock is obtained */
diff --git a/kernel/locking/rwsem.c b/kernel/locking/rwsem.c
index c817216..5838f56 100644
--- a/kernel/locking/rwsem.c
+++ b/kernel/locking/rwsem.c
@@ -22,6 +22,7 @@ void __sched down_read(struct rw_semaphore *sem)
 	rwsem_acquire_read(&sem->dep_map, 0, 0, _RET_IP_);
 
 	LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
+	rwsem_set_reader_owned(sem);
 }
 
 EXPORT_SYMBOL(down_read);
@@ -33,8 +34,10 @@ int down_read_trylock(struct rw_semaphore *sem)
 {
 	int ret = __down_read_trylock(sem);
 
-	if (ret == 1)
+	if (ret == 1) {
 		rwsem_acquire_read(&sem->dep_map, 0, 1, _RET_IP_);
+		rwsem_set_reader_owned(sem);
+	}
 	return ret;
 }
 
@@ -124,7 +127,7 @@ void downgrade_write(struct rw_semaphore *sem)
 	 * lockdep: a downgraded write will live on as a write
 	 * dependency.
 	 */
-	rwsem_clear_owner(sem);
+	rwsem_set_reader_owned(sem);
 	__downgrade_write(sem);
 }
 
@@ -138,6 +141,7 @@ void down_read_nested(struct rw_semaphore *sem, int subclass)
 	rwsem_acquire_read(&sem->dep_map, subclass, 0, _RET_IP_);
 
 	LOCK_CONTENDED(sem, __down_read_trylock, __down_read);
+	rwsem_set_reader_owned(sem);
 }
 
 EXPORT_SYMBOL(down_read_nested);
diff --git a/kernel/locking/rwsem.h b/kernel/locking/rwsem.h
index 870ed9a..8f43ba2 100644
--- a/kernel/locking/rwsem.h
+++ b/kernel/locking/rwsem.h
@@ -1,3 +1,20 @@
+/*
+ * The owner field of the rw_semaphore structure will be set to
+ * RWSEM_READER_OWNED when a reader grabs the lock. A writer will clear
+ * the owner field when it unlocks. A reader, on the other hand, will
+ * not touch the owner field when it unlocks.
+ *
+ * In essence, the owner field now has the following 3 states:
+ *  1) 0
+ *     - lock is free or the owner hasn't set the field yet
+ *  2) RWSEM_READER_OWNED
+ *     - lock is currently or previously owned by readers (the lock may
+ *       be free or a new writer may not have set the field yet)
+ *  3) Other non-zero value
+ *     - a writer owns the lock
+ */
+#define RWSEM_READER_OWNED	((struct task_struct *)1UL)
+
 #ifdef CONFIG_RWSEM_SPIN_ON_OWNER
 static inline void rwsem_set_owner(struct rw_semaphore *sem)
 {
@@ -9,6 +26,26 @@ static inline void rwsem_clear_owner(struct rw_semaphore *sem)
 	sem->owner = NULL;
 }
 
+static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
+{
+	/*
+	 * We check the owner value first to make sure that we will only
+	 * do a write to the rwsem cacheline when it is really necessary
+	 * to minimize cacheline contention.
+	 */
+	if (sem->owner != RWSEM_READER_OWNED)
+		sem->owner = RWSEM_READER_OWNED;
+}
+
+static inline bool rwsem_owner_is_writer(struct task_struct *owner)
+{
+	return owner && owner != RWSEM_READER_OWNED;
+}
+
+static inline bool rwsem_owner_is_reader(struct task_struct *owner)
+{
+	return owner == RWSEM_READER_OWNED;
+}
 #else
 static inline void rwsem_set_owner(struct rw_semaphore *sem)
 {
@@ -17,4 +54,8 @@ static inline void rwsem_set_owner(struct rw_semaphore *sem)
 static inline void rwsem_clear_owner(struct rw_semaphore *sem)
 {
 }
+
+static inline void rwsem_set_reader_owned(struct rw_semaphore *sem)
+{
+}
 #endif
-- 
1.7.1

* [PATCH v3 2/4] locking/rwsem: Don't wake up one's own task
From: Waiman Long @ 2016-05-12 22:56 UTC
  To: Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, Davidlohr Bueso, Jason Low, Dave Chinner,
	Peter Hurley, Scott J Norton, Douglas Hatch, Waiman Long

As a reader in rwsem_down_read_failed() queues itself and may then
call __rwsem_do_wake(sem, RWSEM_WAKE_ANY), it is possible that the
reader will try to wake up its own task. This patch adds a check to
make sure that this won't happen.
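
A sketch of the interleaving (function names as in rwsem-xadd.c;
this is an illustration, not part of the patch):

  /*
   * How a reader can end up waking itself:
   *
   *   reader R: rwsem_down_read_failed()
   *     -> queues its own waiter on sem->wait_list
   *     -> sees there are no active lockers
   *     -> calls __rwsem_do_wake(sem, RWSEM_WAKE_ANY)
   *        -> wakes every reader at the front of the wait list,
   *           including R's own entry
   *
   * The fix simply skips the redundant wake_up_process() call when
   * the task to be woken is the caller itself:
   */
  if (tsk != current)
	wake_up_process(tsk);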

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 kernel/locking/rwsem-xadd.c |    3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 7ccab5c..22f7d58 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -202,7 +202,8 @@ __rwsem_do_wake(struct rw_semaphore *sem, enum rwsem_wake_type wake_type)
 		 */
 		smp_mb();
 		waiter->task = NULL;
-		wake_up_process(tsk);
+		if (tsk != current)
+			wake_up_process(tsk);
 		put_task_struct(tsk);
 	} while (--loop);
 
-- 
1.7.1

* [PATCH v3 3/4] locking/rwsem: Improve reader wakeup code
From: Waiman Long @ 2016-05-12 22:56 UTC
  To: Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, Davidlohr Bueso, Jason Low, Dave Chinner,
	Peter Hurley, Scott J Norton, Douglas Hatch, Waiman Long

In __rwsem_do_wake(), the reader wakeup code will assume a writer
has stolen the lock if the active reader/writer count is not 0.
However, this is not as reliable an indicator as the original
"< RWSEM_WAITING_BIAS" check. If another reader is present, the code
will still break out and exit even if the writer is gone. This patch
changes it to check the same "< RWSEM_WAITING_BIAS" condition to
reduce the chance of a false positive.
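
To see why, consider a worked example using the standard x86-64 bias
values (a sketch; the 32-bit values differ but the argument is the
same):

  #define RWSEM_ACTIVE_BIAS		0x00000001L
  #define RWSEM_ACTIVE_MASK		0xffffffffL
  #define RWSEM_WAITING_BIAS		(-RWSEM_ACTIVE_MASK - 1)
  #define RWSEM_ACTIVE_WRITE_BIAS	(RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)

  /*
   * The wakeup path grants a reader, loses the race to a stealing
   * writer, and undoes the grant. By the time the undo completes,
   * the writer has already released the lock but another reader has
   * taken it via the fast path, so with waiters still queued:
   *
   *   count == RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS
   *
   * Old test: count & RWSEM_ACTIVE_MASK is 1 (non-zero), so the code
   * bails out even though the queued readers could be granted the
   * lock alongside the active reader.
   *
   * New test: count < RWSEM_WAITING_BIAS is false, so the code retries
   * the reader grant; it only trips when a writer's
   * RWSEM_ACTIVE_WRITE_BIAS is actually in the count.
   */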

Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 kernel/locking/rwsem-xadd.c |   11 ++++++++---
 1 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 22f7d58..6c08ad9 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -148,9 +148,14 @@ __rwsem_do_wake(struct rw_semaphore *sem, enum rwsem_wake_type wake_type)
  try_reader_grant:
 		oldcount = rwsem_atomic_update(adjustment, sem) - adjustment;
 		if (unlikely(oldcount < RWSEM_WAITING_BIAS)) {
-			/* A writer stole the lock. Undo our reader grant. */
-			if (rwsem_atomic_update(-adjustment, sem) &
-						RWSEM_ACTIVE_MASK)
+			/*
+			 * If the count is still less than RWSEM_WAITING_BIAS
+			 * after removing the adjustment, it is assumed that
+			 * a writer has stolen the lock. We have to undo our
+			 * reader grant.
+			 */
+			if (rwsem_atomic_update(-adjustment, sem)
+			    < RWSEM_WAITING_BIAS)
 				goto out;
 			/* Last active locker left. Retry waking readers. */
 			goto try_reader_grant;
-- 
1.7.1

* [PATCH v3 4/4] locking/rwsem: Streamline the rwsem_optimistic_spin() code
From: Waiman Long @ 2016-05-12 22:56 UTC
  To: Peter Zijlstra, Ingo Molnar
  Cc: linux-kernel, Davidlohr Bueso, Jason Low, Dave Chinner,
	Peter Hurley, Scott J Norton, Douglas Hatch, Waiman Long

This patch moves the owner loading and checking code entirely inside of
rwsem_spin_on_owner() to simplify the logic of the rwsem_optimistic_spin()
loop.
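
Condensed, the main loop then reads as follows (a sketch assembled
from the hunks below; the trailing cpu_relax_lowlatency() is
unchanged from the existing code):

  while (rwsem_spin_on_owner(sem)) {
	/* Try to acquire the lock. */
	if (rwsem_try_write_lock_unqueued(sem)) {
		taken = true;
		break;
	}

	/*
	 * When there's no owner, we might have preempted between the
	 * owner acquiring the lock and setting the owner field. If
	 * we're an RT task that will live-lock because we won't let
	 * the owner complete.
	 */
	if (!sem->owner && (need_resched() || rt_task(current)))
		break;

	cpu_relax_lowlatency();
  }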

Suggested-by: Peter Hurley <peter@hurleysoftware.com>
Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
---
 kernel/locking/rwsem-xadd.c |   38 ++++++++++++++++++++------------------
 1 files changed, 20 insertions(+), 18 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 6c08ad9..5788b63 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -332,9 +332,16 @@ done:
 	return ret;
 }
 
-static noinline
-bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
+/*
+ * Return true only if we can still spin on the owner field of the rwsem.
+ */
+static noinline bool rwsem_spin_on_owner(struct rw_semaphore *sem)
 {
+	struct task_struct *owner = READ_ONCE(sem->owner);
+
+	if (!rwsem_owner_is_writer(owner))
+		goto out;
+
 	rcu_read_lock();
 	while (sem->owner == owner) {
 		/*
@@ -354,7 +361,7 @@ bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 		cpu_relax_lowlatency();
 	}
 	rcu_read_unlock();
-
+out:
 	/*
 	 * If there is a new owner or the owner is not set, we continue
 	 * spinning.
@@ -364,7 +371,6 @@ bool rwsem_spin_on_owner(struct rw_semaphore *sem, struct task_struct *owner)
 
 static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 {
-	struct task_struct *owner;
 	bool taken = false;
 
 	preempt_disable();
@@ -376,21 +382,17 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 	if (!osq_lock(&sem->osq))
 		goto done;
 
-	while (true) {
-		owner = READ_ONCE(sem->owner);
+	/*
+	 * Optimistically spin on the owner field and attempt to acquire the
+	 * lock whenever the owner changes. Spinning will be stopped when:
+	 *  1) the owning writer isn't running; or
+	 *  2) readers own the lock as we can't determine if they are
+	 *     actively running or not.
+	 */
+	while (rwsem_spin_on_owner(sem)) {
 		/*
-		 * Don't spin if
-		 * 1) the owner is a reader as we can't determine if the
-		 *    reader is actively running or not.
-		 * 2) The rwsem_spin_on_owner() returns false which means
-		 *    the owner isn't running.
+		 * Try to acquire the lock
 		 */
-		if (rwsem_owner_is_reader(owner) ||
-		   (rwsem_owner_is_writer(owner) &&
-		   !rwsem_spin_on_owner(sem, owner)))
-			break;
-
-		/* wait_lock will be acquired if write_lock is obtained */
 		if (rwsem_try_write_lock_unqueued(sem)) {
 			taken = true;
 			break;
@@ -402,7 +404,7 @@ static bool rwsem_optimistic_spin(struct rw_semaphore *sem)
 		 * we're an RT task that will live-lock because we won't let
 		 * the owner complete.
 		 */
-		if (!owner && (need_resched() || rt_task(current)))
+		if (!sem->owner && (need_resched() || rt_task(current)))
 			break;
 
 		/*
-- 
1.7.1

* Re: [PATCH v3 3/4] locking/rwsem: Improve reader wakeup code
From: Peter Hurley @ 2016-05-17 17:30 UTC
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Davidlohr Bueso,
	Jason Low, Dave Chinner, Scott J Norton, Douglas Hatch

On 05/12/2016 03:56 PM, Waiman Long wrote:
> In __rwsem_do_wake(), the reader wakeup code will assume a writer
> has stolen the lock if the active reader/writer count is not 0.
> However, this is not as reliable an indicator as the original
> "< RWSEM_WAITING_BIAS" check. If another reader is present, the code
> will still break out and exit even if the writer is gone. This patch
> changes it to check the same "< RWSEM_WAITING_BIAS" condition to
> reduce the chance of a false positive.

Nice.

Reviewed-by: Peter Hurley <peter@hurleysoftware.com>

* Re: [PATCH v3 2/4] locking/rwsem: Don't wake up one's own task
From: Peter Hurley @ 2016-05-17 17:45 UTC
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Davidlohr Bueso,
	Jason Low, Dave Chinner, Scott J Norton, Douglas Hatch

On 05/12/2016 03:56 PM, Waiman Long wrote:
> As a reader in rwsem_down_read_failed() queues itself and may then
> call __rwsem_do_wake(sem, RWSEM_WAKE_ANY), it is possible that the
> reader will try to wake up its own task. This patch adds a check to
> make sure that this won't happen.

Although there's no particular harm in the current code, this at
least spells out that this condition is normal (i.e., a failed reader
waking itself while waking the other waiting readers).

Reviewed-by: Peter Hurley <peter@hurleysoftware.com>

* Re: [PATCH v3 4/4] locking/rwsem: Streamline the rwsem_optimistic_spin() code
From: Peter Hurley @ 2016-05-17 17:47 UTC
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, linux-kernel, Davidlohr Bueso,
	Jason Low, Dave Chinner, Scott J Norton, Douglas Hatch

On 05/12/2016 03:56 PM, Waiman Long wrote:
> This patch moves the owner loading and checking code entirely inside of
> rwsem_spin_on_owner() to simplify the logic of the rwsem_optimistic_spin()
> loop.

Thanks for this.

Reviewed-by: Peter Hurley <peter@hurleysoftware.com>
