From: Will Deacon <will.deacon@arm.com>
To: linux-kernel@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, peterz@infradead.org,
	mingo@kernel.org, boqun.feng@gmail.com,
	paulmck@linux.vnet.ibm.com, catalin.marinas@arm.com,
	Will Deacon <will.deacon@arm.com>
Subject: [PATCH 06/10] barriers: Introduce smp_cond_load_relaxed and atomic_cond_read_relaxed
Date: Thu,  5 Apr 2018 17:59:03 +0100
Message-ID: <1522947547-24081-7-git-send-email-will.deacon@arm.com>
In-Reply-To: <1522947547-24081-1-git-send-email-will.deacon@arm.com>

Whilst we currently provide smp_cond_load_acquire and
atomic_cond_read_acquire, there are cases where the ACQUIRE semantics are
not required because of a subsequent fence or release operation once the
conditional loop has exited.

This patch adds relaxed versions of the conditional spinning primitives
to avoid unnecessary barrier overhead on architectures such as arm64.
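
For illustration only (not part of the patch text): a minimal sketch of
the kind of caller this targets, where the wait loop can be relaxed
because a subsequent RELEASE operation supplies the required ordering.
The struct, field and function names here are hypothetical.

#include <linux/atomic.h>

struct foo {
	atomic_t state;
};

static void foo_handover(struct foo *f)
{
	/*
	 * Hypothetical example: spin with no ordering. We only need the
	 * load of 'state' to be ordered before the handover store below,
	 * and the RELEASE in atomic_set_release() already guarantees
	 * that, so an ACQUIRE in the loop would be redundant.
	 */
	atomic_cond_read_relaxed(&f->state, VAL != 0);

	atomic_set_release(&f->state, 0);
}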

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/barrier.h | 27 +++++++++++++++++++++------
 include/linux/atomic.h        |  2 ++
 2 files changed, 23 insertions(+), 6 deletions(-)

diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h
index fe297b599b0a..305e03b19a26 100644
--- a/include/asm-generic/barrier.h
+++ b/include/asm-generic/barrier.h
@@ -221,18 +221,17 @@ do {									\
 #endif
 
 /**
- * smp_cond_load_acquire() - (Spin) wait for cond with ACQUIRE ordering
+ * smp_cond_load_relaxed() - (Spin) wait for cond with no ordering guarantees
  * @ptr: pointer to the variable to wait on
  * @cond: boolean expression to wait for
  *
- * Equivalent to using smp_load_acquire() on the condition variable but employs
- * the control dependency of the wait to reduce the barrier on many platforms.
+ * Equivalent to using READ_ONCE() on the condition variable.
  *
  * Due to C lacking lambda expressions we load the value of *ptr into a
  * pre-named variable @VAL to be used in @cond.
  */
-#ifndef smp_cond_load_acquire
-#define smp_cond_load_acquire(ptr, cond_expr) ({		\
+#ifndef smp_cond_load_relaxed
+#define smp_cond_load_relaxed(ptr, cond_expr) ({		\
 	typeof(ptr) __PTR = (ptr);				\
 	typeof(*ptr) VAL;					\
 	for (;;) {						\
@@ -241,10 +240,26 @@ do {									\
 			break;					\
 		cpu_relax();					\
 	}							\
-	smp_acquire__after_ctrl_dep();				\
 	VAL;							\
 })
 #endif
 
+/**
+ * smp_cond_load_acquire() - (Spin) wait for cond with ACQUIRE ordering
+ * @ptr: pointer to the variable to wait on
+ * @cond: boolean expression to wait for
+ *
+ * Equivalent to using smp_load_acquire() on the condition variable but employs
+ * the control dependency of the wait to reduce the barrier on many platforms.
+ */
+#ifndef smp_cond_load_acquire
+#define smp_cond_load_acquire(ptr, cond_expr) ({		\
+	typeof(*ptr) _val;					\
+	_val = smp_cond_load_relaxed(ptr, cond_expr);		\
+	smp_acquire__after_ctrl_dep();				\
+	_val;							\
+})
+#endif
+
 #endif /* !__ASSEMBLY__ */
 #endif /* __ASM_GENERIC_BARRIER_H */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 8b276fd9a127..01ce3997cb42 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -654,6 +654,7 @@ static inline int atomic_dec_if_positive(atomic_t *v)
 }
 #endif
 
+#define atomic_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
 
 #ifdef CONFIG_GENERIC_ATOMIC64
@@ -1075,6 +1076,7 @@ static inline long long atomic64_fetch_andnot_release(long long i, atomic64_t *v
 }
 #endif
 
+#define atomic64_cond_read_relaxed(v, c)	smp_cond_load_relaxed(&(v)->counter, (c))
 #define atomic64_cond_read_acquire(v, c)	smp_cond_load_acquire(&(v)->counter, (c))
 
 #include <asm-generic/atomic-long.h>
-- 
2.1.4
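
For readers unfamiliar with the @VAL convention mentioned in the
kerneldoc, here is a usage sketch of the new primitive (illustrative
only, not from the patch); 'flag', wait_for_done() and DONE are
hypothetical names.

#include <linux/atomic.h>

#define DONE	1		/* hypothetical completion value */

unsigned long flag;		/* set to DONE by another CPU */

void wait_for_done(void)
{
	/*
	 * Each iteration re-reads 'flag' with READ_ONCE(); the freshly
	 * loaded value is bound to VAL for use in the condition.
	 */
	smp_cond_load_relaxed(&flag, VAL == DONE);

	/*
	 * No ACQUIRE is implied, so any ordering against subsequent
	 * accesses must be supplied by the caller, e.g. with a fence.
	 */
	smp_mb();
}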
