linux-kernel.vger.kernel.org archive mirror
* [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock
@ 2016-06-03  8:38 Pan Xinhui
  2016-06-08 14:27 ` [tip:locking/core] locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock() tip-bot for Pan Xinhui
  2016-06-13 19:45 ` [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock Davidlohr Bueso
  0 siblings, 2 replies; 5+ messages in thread
From: Pan Xinhui @ 2016-06-03  8:38 UTC (permalink / raw)
  To: linux-kernel, linux-arch; +Cc: arnd, peterz, waiman.long, Pan Xinhui

The existing version uses a heavy barrier while only release semantics
are required. So use atomic_sub_return_release() instead.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
---
 include/asm-generic/qspinlock.h | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 35a52a8..8947cd2 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -92,10 +92,9 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 {
 	/*
-	 * smp_mb__before_atomic() in order to guarantee release semantics
-	 */
-	smp_mb__before_atomic();
-	atomic_sub(_Q_LOCKED_VAL, &lock->val);
+	 * unlock() needs release semantics:
+	 */
+	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
 }
 #endif
 
-- 
1.9.1
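
For context: on an architecture that provides no native _release
variant, the generic fallbacks in include/linux/atomic.h build
atomic_sub_return_release() out of the _relaxed op plus the very same
barrier, so the change should be a no-op there and only pay off where
a real release form exists (e.g. lwsync-based on PPC). Roughly (an
abridged sketch; double-check against your tree):

	#ifndef __atomic_op_release
	#define __atomic_op_release(op, args...)			\
	({								\
		smp_mb__before_atomic();	/* full barrier */	\
		op##_relaxed(args);		/* relaxed RMW */	\
	})
	#endif

	#ifndef atomic_sub_return_release
	#define atomic_sub_return_release(...)				\
		__atomic_op_release(atomic_sub_return, __VA_ARGS__)
	#endif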


* [tip:locking/core] locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock()
  2016-06-03  8:38 [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock Pan Xinhui
@ 2016-06-08 14:27 ` tip-bot for Pan Xinhui
  2016-06-13 19:45 ` [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock Davidlohr Bueso
  1 sibling, 0 replies; 5+ messages in thread
From: tip-bot for Pan Xinhui @ 2016-06-08 14:27 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, xinhui.pan, mingo, torvalds, paulmck, akpm, peterz,
	linux-kernel, hpa

Commit-ID:  ca50e426f96c905e7d14a9c7a6bd4e0330516047
Gitweb:     http://git.kernel.org/tip/ca50e426f96c905e7d14a9c7a6bd4e0330516047
Author:     Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
AuthorDate: Fri, 3 Jun 2016 16:38:14 +0800
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Wed, 8 Jun 2016 15:17:01 +0200

locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock()

The existing version uses a heavy barrier while only release semantics
are required. So use atomic_sub_return_release() instead.

Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arnd@arndb.de
Cc: waiman.long@hp.com
Link: http://lkml.kernel.org/r/1464943094-3129-1-git-send-email-xinhui.pan@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/asm-generic/qspinlock.h | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index 05f05f1..9f0681b 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -111,10 +111,9 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 static __always_inline void queued_spin_unlock(struct qspinlock *lock)
 {
 	/*
-	 * smp_mb__before_atomic() in order to guarantee release semantics
+	 * unlock() needs release semantics:
 	 */
-	smp_mb__before_atomic();
-	atomic_sub(_Q_LOCKED_VAL, &lock->val);
+	(void)atomic_sub_return_release(_Q_LOCKED_VAL, &lock->val);
 }
 #endif
 


* Re: [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock
  2016-06-03  8:38 [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock Pan Xinhui
  2016-06-08 14:27 ` [tip:locking/core] locking/qspinlock: Use atomic_sub_return_release() in queued_spin_unlock() tip-bot for Pan Xinhui
@ 2016-06-13 19:45 ` Davidlohr Bueso
  2016-06-14  5:52   ` Boqun Feng
  1 sibling, 1 reply; 5+ messages in thread
From: Davidlohr Bueso @ 2016-06-13 19:45 UTC (permalink / raw)
  To: Pan Xinhui; +Cc: linux-kernel, linux-arch, arnd, peterz, waiman.long

On Fri, 03 Jun 2016, Pan Xinhui wrote:

>The existing version uses a heavy barrier while only release semantics
>are required. So use atomic_sub_return_release() instead.
>
>Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>

I just noticed this change in -tip and, while I know that saving a barrier
in core spinlock paths is perhaps a worthy exception, I cannot help but
wonder if this is the beginning of the end for smp_mb__{before,after}_atomic().


* Re: [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock
  2016-06-13 19:45 ` [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock Davidlohr Bueso
@ 2016-06-14  5:52   ` Boqun Feng
  2016-06-14 12:04     ` Peter Zijlstra
  0 siblings, 1 reply; 5+ messages in thread
From: Boqun Feng @ 2016-06-14  5:52 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: Pan Xinhui, linux-kernel, linux-arch, arnd, peterz, waiman.long


On Mon, Jun 13, 2016 at 12:45:23PM -0700, Davidlohr Bueso wrote:
> On Fri, 03 Jun 2016, Pan Xinhui wrote:
> 
> > The existing version uses a heavy barrier while only release semantics
> > are required. So use atomic_sub_return_release() instead.
> > 
> > Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
> 
> I just noticed this change in -tip and, while I know that saving a barrier
> in core spinlock paths is perhaps a worthy exception, I cannot help but
> wonder if this is the beginning of the end for smp_mb__{before,after}_atomic().

This is surely a good direction, I think: using _acquire and _release
primitives to replace those barriers. However, we should do this
carefully, because the _acquire and _release primitives are RCpc on
PPC; IOW, an ACQUIRE and RELEASE pair is neither a full barrier nor
does it provide global transitivity. I'm worried that there are some
users depending on the full-barrier semantics, which means we must
audit each use carefully before we make the change.
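
To make that concrete, a litmus-style sketch (mine, not from the
patch), with x = y = 0 initially and CPU0 holding the lock:

	CPU0				CPU1
	----				----
	WRITE_ONCE(x, 1);		WRITE_ONCE(y, 1);
	queued_spin_unlock(&lock);	smp_mb();
	r0 = READ_ONCE(y);		r1 = READ_ONCE(x);

With the old smp_mb__before_atomic() in unlock(), r0 == 0 && r1 == 0
is forbidden: the full barrier orders CPU0's store against its later
load. With a RELEASE-only unlock(), the load of y may be reordered
before the RELEASE, so that outcome becomes allowed on PPC/ARM64
(x86's locked RMW remains a full barrier either way).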

Besides, if we want to do the conversion, we'd better have _acquire and
_release variants for non-value-returning atomic operations.

I remember you were working on those variants. How is that going?

Regards,
Boqun



* Re: [PATCH] locking/qspinlock: Use atomic_sub_return_release in queued_spin_unlock
  2016-06-14  5:52   ` Boqun Feng
@ 2016-06-14 12:04     ` Peter Zijlstra
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2016-06-14 12:04 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Davidlohr Bueso, Pan Xinhui, linux-kernel, linux-arch, arnd, waiman.long

On Tue, Jun 14, 2016 at 01:52:53PM +0800, Boqun Feng wrote:
> On Mon, Jun 13, 2016 at 12:45:23PM -0700, Davidlohr Bueso wrote:
> > On Fri, 03 Jun 2016, Pan Xinhui wrote:
> > 
> > > The existing version uses a heavy barrier while only release semantics
> > > are required. So use atomic_sub_return_release() instead.
> > > 
> > > Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>
> > 
> > I just noticed this change in -tip and, while I know that saving a barrier
> > in core spinlock paths is perhaps a worthy exception, I cannot help but
> > wonder if this is the beginning of the end for smp_mb__{before,after}_atomic().
> 
> This is surely a good direction, I think: using _acquire and _release
> primitives to replace those barriers. However, we should do this
> carefully, because the _acquire and _release primitives are RCpc on
> PPC; IOW, an ACQUIRE and RELEASE pair is neither a full barrier nor
> does it provide global transitivity. I'm worried that there are some
> users depending on the full-barrier semantics, which means we must
> audit each use carefully before we make the change.

Very good point indeed. And yes, the whole RCpc thing, but also the
tricky wandering store on PPC/ARM64 ACQUIRE makes for lots of 'fun' we
can do without.

> Besides, if we want to do the conversion, we'd better have _acquire and
> _release variants for non-value-returning atomic operations.

Indeed, I've been tempted to introduce those before.
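
Something like the below, presumably (a hypothetical sketch of just
the generic fallback; the name and placement are made up here, and
architectures would override it with a native release instruction):

	/* hypothetical generic fallback, not in any tree */
	#ifndef atomic_sub_release
	#define atomic_sub_release(i, v)				\
	do {								\
		smp_mb__before_atomic();	/* stronger than RELEASE */ \
		atomic_sub(i, v);					\
	} while (0)
	#endif

That would let queued_spin_unlock() drop the (void) cast and the
value-returning op entirely:

	atomic_sub_release(_Q_LOCKED_VAL, &lock->val);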

> I remember you were working on those variants. How is that going?

Ah, if Davidlohr is working on that, brilliant, less work for me ;-)

