* [PATCH v3] locking/rwsem: Avoid double checking before try acquiring write lock
@ 2014-09-17  0:16 Jason Low
  2014-09-17  9:34 ` Davidlohr Bueso
  2014-10-03  5:29 ` [tip:locking/core] " tip-bot for Jason Low
  0 siblings, 2 replies; 4+ messages in thread
From: Jason Low @ 2014-09-17  0:16 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Tim Chen, Peter Hurley, Davidlohr Bueso
  Cc: linux-kernel, Aswin Chandramouleeswaran, Chegu Vinod, Jason Low

Commit 9b0fc9c09f1b checks whether there are known active lockers
in order to avoid a write trylock using an expensive cmpxchg()
when it likely wouldn't get the lock.

However, a subsequent patch was added such that we directly
check for sem->count == RWSEM_WAITING_BIAS right before trying
that cmpxchg(). Thus, commit 9b0fc9c09f1b now just adds overhead.
This patch modifies it so that we only check whether
count == RWSEM_WAITING_BIAS.

Also, add a comment on why we do an "extra check" of count
before the cmpxchg().

Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Jason Low <jason.low2@hp.com>
---
 kernel/locking/rwsem-xadd.c |   20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 12166ec..7628c3f 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -250,16 +250,18 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
 
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
-	if (!(count & RWSEM_ACTIVE_MASK)) {
-		/* try acquiring the write lock */
-		if (sem->count == RWSEM_WAITING_BIAS &&
-		    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
-			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-			if (!list_is_singular(&sem->wait_list))
-				rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
-			return true;
-		}
+	/*
+	 * Try acquiring the write lock. Check count first in order
+	 * to reduce unnecessary expensive cmpxchg() operations.
+	 */
+	if (count == RWSEM_WAITING_BIAS &&
+	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
+		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
+		if (!list_is_singular(&sem->wait_list))
+			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+		return true;
 	}
+
 	return false;
 }
 
-- 
1.7.1





* Re: [PATCH v3] locking/rwsem: Avoid double checking before try acquiring write lock
  2014-09-17  0:16 [PATCH v3] locking/rwsem: Avoid double checking before try acquiring write lock Jason Low
@ 2014-09-17  9:34 ` Davidlohr Bueso
  2014-09-17 16:47   ` Jason Low
  2014-10-03  5:29 ` [tip:locking/core] " tip-bot for Jason Low
  1 sibling, 1 reply; 4+ messages in thread
From: Davidlohr Bueso @ 2014-09-17  9:34 UTC (permalink / raw)
  To: Jason Low
  Cc: Peter Zijlstra, Ingo Molnar, Tim Chen, Peter Hurley,
	linux-kernel, Aswin Chandramouleeswaran, Chegu Vinod

On Tue, 2014-09-16 at 17:16 -0700, Jason Low wrote:
> Commit 9b0fc9c09f1b checks whether there are known active lockers
> in order to avoid a write trylock using an expensive cmpxchg()
> when it likely wouldn't get the lock.

Ah, I remember you had this one in your queue for some time :)

> However, a subsequent patch was added such that we directly
> check for sem->count == RWSEM_WAITING_BIAS right before trying
> that cmpxchg(). Thus, commit 9b0fc9c09f1b now just adds overhead.
> This patch modifies it so that we only check whether
> count == RWSEM_WAITING_BIAS.
> 
> Also, add a comment on why we do an "extra check" of count
> before the cmpxchg().

heh, so people don't try to remove the "redundant" check!

> Cc: Peter Hurley <peter@hurleysoftware.com>
> Cc: Tim Chen <tim.c.chen@linux.intel.com>

Acked-by: Davidlohr Bueso <dave@stgolabs.net>

> Signed-off-by: Jason Low <jason.low2@hp.com>



* Re: [PATCH v3] locking/rwsem: Avoid double checking before try acquiring write lock
  2014-09-17  9:34 ` Davidlohr Bueso
@ 2014-09-17 16:47   ` Jason Low
  0 siblings, 0 replies; 4+ messages in thread
From: Jason Low @ 2014-09-17 16:47 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Peter Zijlstra, Ingo Molnar, Tim Chen, Peter Hurley,
	linux-kernel, Aswin Chandramouleeswaran, Chegu Vinod, jason.low2

On Wed, 2014-09-17 at 11:34 +0200, Davidlohr Bueso wrote:
> On Tue, 2014-09-16 at 17:16 -0700, Jason Low wrote:
> > Commit 9b0fc9c09f1b checks whether there are known active lockers
> > in order to avoid a write trylock using an expensive cmpxchg()
> > when it likely wouldn't get the lock.
> 
> Ah, I remember you had this one in your queue for some time :)
> 
> > However, a subsequent patch was added such that we directly
> > check for sem->count == RWSEM_WAITING_BIAS right before trying
> > that cmpxchg(). Thus, commit 9b0fc9c09f1b now just adds overhead.
> > This patch modifies it so that we only check whether
> > count == RWSEM_WAITING_BIAS.
> > 
> > Also, add a comment on why we do an "extra check" of count
> > before the cmpxchg().
> 
> heh, so people don't try to remove the "redundant" check!

Right, I've already seen at least one such patch :)

> > Cc: Peter Hurley <peter@hurleysoftware.com>
> > Cc: Tim Chen <tim.c.chen@linux.intel.com>
> 
> Acked-by: Davidlohr Bueso <dave@stgolabs.net>

Thanks!



* [tip:locking/core] locking/rwsem: Avoid double checking before try acquiring write lock
  2014-09-17  0:16 [PATCH v3] locking/rwsem: Avoid double checking before try acquiring write lock Jason Low
  2014-09-17  9:34 ` Davidlohr Bueso
@ 2014-10-03  5:29 ` tip-bot for Jason Low
  1 sibling, 0 replies; 4+ messages in thread
From: tip-bot for Jason Low @ 2014-10-03  5:29 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, torvalds, peterz, tim.c.chen, peter,
	jason.low2, dbueso, chegu_vinod, tglx, aswin

Commit-ID:  debfab74e453f079cd8b12b0604387a8c510ef3a
Gitweb:     http://git.kernel.org/tip/debfab74e453f079cd8b12b0604387a8c510ef3a
Author:     Jason Low <jason.low2@hp.com>
AuthorDate: Tue, 16 Sep 2014 17:16:57 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 3 Oct 2014 06:09:29 +0200

locking/rwsem: Avoid double checking before try acquiring write lock

Commit 9b0fc9c09f1b ("rwsem: skip initial trylock in rwsem_down_write_failed")
checks whether there are known active lockers in order to avoid a write trylock
using an expensive cmpxchg() when it likely wouldn't get the lock.

However, a subsequent patch was added such that we directly
check for sem->count == RWSEM_WAITING_BIAS right before trying
that cmpxchg().

Thus, commit 9b0fc9c09f1b now just adds overhead.

This patch modifies it so that we only check whether
count == RWSEM_WAITING_BIAS.

Also, add a comment on why we do an "extra check" of count
before the cmpxchg().

Signed-off-by: Jason Low <jason.low2@hp.com>
Acked-by: Davidlohr Bueso <dbueso@suse.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Aswin Chandramouleeswaran <aswin@hp.com>
Cc: Chegu Vinod <chegu_vinod@hp.com>
Cc: Peter Hurley <peter@hurleysoftware.com>
Cc: Tim Chen <tim.c.chen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: http://lkml.kernel.org/r/1410913017.2447.22.camel@j-VirtualBox
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/locking/rwsem-xadd.c | 20 +++++++++++---------
 1 file changed, 11 insertions(+), 9 deletions(-)

diff --git a/kernel/locking/rwsem-xadd.c b/kernel/locking/rwsem-xadd.c
index 12166ec..7628c3f 100644
--- a/kernel/locking/rwsem-xadd.c
+++ b/kernel/locking/rwsem-xadd.c
@@ -250,16 +250,18 @@ EXPORT_SYMBOL(rwsem_down_read_failed);
 
 static inline bool rwsem_try_write_lock(long count, struct rw_semaphore *sem)
 {
-	if (!(count & RWSEM_ACTIVE_MASK)) {
-		/* try acquiring the write lock */
-		if (sem->count == RWSEM_WAITING_BIAS &&
-		    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
-			    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
-			if (!list_is_singular(&sem->wait_list))
-				rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
-			return true;
-		}
+	/*
+	 * Try acquiring the write lock. Check count first in order
+	 * to reduce unnecessary expensive cmpxchg() operations.
+	 */
+	if (count == RWSEM_WAITING_BIAS &&
+	    cmpxchg(&sem->count, RWSEM_WAITING_BIAS,
+		    RWSEM_ACTIVE_WRITE_BIAS) == RWSEM_WAITING_BIAS) {
+		if (!list_is_singular(&sem->wait_list))
+			rwsem_atomic_update(RWSEM_WAITING_BIAS, sem);
+		return true;
 	}
+
 	return false;
 }
 

