* [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
From: guoren @ 2022-04-12  3:49 UTC
  To: guoren, arnd, palmer, mark.rutland, will, peterz, boqun.feng
  Cc: linux-arch, linux-kernel, linux-riscv, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

This patch series contains one cleanup and some optimizations for
atomic operations.

Changes in V2:
 - Fix the LR/SC memory barrier semantic problems pointed out by
   Mark Rutland
 - Combine the patches into one series
 - Separate the AMO optimization from the LR/SC optimization for
   easier review

Guo Ren (3):
  riscv: atomic: Cleanup unnecessary definition
  riscv: atomic: Optimize acquire and release for AMO operations
  riscv: atomic: Optimize memory barrier semantics of LRSC-pairs

 arch/riscv/include/asm/atomic.h  | 70 ++++++++++++++++++++++++++++++--
 arch/riscv/include/asm/cmpxchg.h | 42 +++++--------------
 2 files changed, 76 insertions(+), 36 deletions(-)

-- 
2.25.1


* [PATCH V2 1/3] riscv: atomic: Cleanup unnecessary definition
From: guoren @ 2022-04-12  3:49 UTC
  To: guoren, arnd, palmer, mark.rutland, will, peterz, boqun.feng
  Cc: linux-arch, linux-kernel, linux-riscv, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

cmpxchg32 and cmpxchg32_local have never been used in Linux, so
remove them from cmpxchg.h.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 arch/riscv/include/asm/cmpxchg.h | 12 ------------
 1 file changed, 12 deletions(-)

diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 36dc962f6343..12debce235e5 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -348,18 +348,6 @@
 #define arch_cmpxchg_local(ptr, o, n)					\
 	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
 
-#define cmpxchg32(ptr, o, n)						\
-({									\
-	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
-	arch_cmpxchg((ptr), (o), (n));					\
-})
-
-#define cmpxchg32_local(ptr, o, n)					\
-({									\
-	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
-	arch_cmpxchg_relaxed((ptr), (o), (n))				\
-})
-
 #define arch_cmpxchg64(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-- 
2.25.1


* [PATCH V2 2/3] riscv: atomic: Optimize acquire and release for AMO operations
From: guoren @ 2022-04-12  3:49 UTC
  To: guoren, arnd, palmer, mark.rutland, will, peterz, boqun.feng
  Cc: linux-arch, linux-kernel, linux-riscv, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The current acquire & release implementations from atomic-arch-
fallback.h use __atomic_acquire/release_fence(), which emits an extra
"fence r, rw"/"fence rw, w" instruction after/before the AMO
instruction. RISC-V AMO instructions can encode acquire and release
semantics in the instruction itself, and using this feature gains us
two benefits (see the sketch below):
 - One instruction fewer per operation
 - Avoidance of the strong "fence r, rw"/"fence rw, w" barriers,
   which order all preceding/subsequent accesses just to protect a
   single atomic instruction

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 arch/riscv/include/asm/atomic.h  | 64 ++++++++++++++++++++++++++++++++
 arch/riscv/include/asm/cmpxchg.h | 12 ++----
 2 files changed, 68 insertions(+), 8 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index ac9bdf4fc404..20ce8b83bc18 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -99,6 +99,30 @@ c_type arch_atomic##prefix##_fetch_##op##_relaxed(c_type i,		\
 	return ret;							\
 }									\
 static __always_inline							\
+c_type arch_atomic##prefix##_fetch_##op##_acquire(c_type i,		\
+					     atomic##prefix##_t *v)	\
+{									\
+	register c_type ret;						\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op "." #asm_type ".aq %1, %2, %0"	\
+		: "+A" (v->counter), "=r" (ret)				\
+		: "r" (I)						\
+		: "memory");						\
+	return ret;							\
+}									\
+static __always_inline							\
+c_type arch_atomic##prefix##_fetch_##op##_release(c_type i,		\
+					     atomic##prefix##_t *v)	\
+{									\
+	register c_type ret;						\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op "." #asm_type ".rl %1, %2, %0"	\
+		: "+A" (v->counter), "=r" (ret)				\
+		: "r" (I)						\
+		: "memory");						\
+	return ret;							\
+}									\
+static __always_inline							\
 c_type arch_atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
 {									\
 	register c_type ret;						\
@@ -118,6 +142,18 @@ c_type arch_atomic##prefix##_##op##_return_relaxed(c_type i,		\
         return arch_atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I;	\
 }									\
 static __always_inline							\
+c_type arch_atomic##prefix##_##op##_return_acquire(c_type i,		\
+					      atomic##prefix##_t *v)	\
+{									\
+        return arch_atomic##prefix##_fetch_##op##_acquire(i, v) c_op I;	\
+}									\
+static __always_inline							\
+c_type arch_atomic##prefix##_##op##_return_release(c_type i,		\
+					      atomic##prefix##_t *v)	\
+{									\
+        return arch_atomic##prefix##_fetch_##op##_release(i, v) c_op I;	\
+}									\
+static __always_inline							\
 c_type arch_atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
 {									\
         return arch_atomic##prefix##_fetch_##op(i, v) c_op I;		\
@@ -140,22 +176,38 @@ ATOMIC_OPS(sub, add, +, -i)
 
 #define arch_atomic_add_return_relaxed	arch_atomic_add_return_relaxed
 #define arch_atomic_sub_return_relaxed	arch_atomic_sub_return_relaxed
+#define arch_atomic_add_return_acquire	arch_atomic_add_return_acquire
+#define arch_atomic_sub_return_acquire	arch_atomic_sub_return_acquire
+#define arch_atomic_add_return_release	arch_atomic_add_return_release
+#define arch_atomic_sub_return_release	arch_atomic_sub_return_release
 #define arch_atomic_add_return		arch_atomic_add_return
 #define arch_atomic_sub_return		arch_atomic_sub_return
 
 #define arch_atomic_fetch_add_relaxed	arch_atomic_fetch_add_relaxed
 #define arch_atomic_fetch_sub_relaxed	arch_atomic_fetch_sub_relaxed
+#define arch_atomic_fetch_add_acquire	arch_atomic_fetch_add_acquire
+#define arch_atomic_fetch_sub_acquire	arch_atomic_fetch_sub_acquire
+#define arch_atomic_fetch_add_release	arch_atomic_fetch_add_release
+#define arch_atomic_fetch_sub_release	arch_atomic_fetch_sub_release
 #define arch_atomic_fetch_add		arch_atomic_fetch_add
 #define arch_atomic_fetch_sub		arch_atomic_fetch_sub
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
 #define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+#define arch_atomic64_add_return_acquire	arch_atomic64_add_return_acquire
+#define arch_atomic64_sub_return_acquire	arch_atomic64_sub_return_acquire
+#define arch_atomic64_add_return_release	arch_atomic64_add_return_release
+#define arch_atomic64_sub_return_release	arch_atomic64_sub_return_release
 #define arch_atomic64_add_return		arch_atomic64_add_return
 #define arch_atomic64_sub_return		arch_atomic64_sub_return
 
 #define arch_atomic64_fetch_add_relaxed	arch_atomic64_fetch_add_relaxed
 #define arch_atomic64_fetch_sub_relaxed	arch_atomic64_fetch_sub_relaxed
+#define arch_atomic64_fetch_add_acquire	arch_atomic64_fetch_add_acquire
+#define arch_atomic64_fetch_sub_acquire	arch_atomic64_fetch_sub_acquire
+#define arch_atomic64_fetch_add_release	arch_atomic64_fetch_add_release
+#define arch_atomic64_fetch_sub_release	arch_atomic64_fetch_sub_release
 #define arch_atomic64_fetch_add		arch_atomic64_fetch_add
 #define arch_atomic64_fetch_sub		arch_atomic64_fetch_sub
 #endif
@@ -178,6 +230,12 @@ ATOMIC_OPS(xor, xor, i)
 #define arch_atomic_fetch_and_relaxed	arch_atomic_fetch_and_relaxed
 #define arch_atomic_fetch_or_relaxed	arch_atomic_fetch_or_relaxed
 #define arch_atomic_fetch_xor_relaxed	arch_atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and_acquire	arch_atomic_fetch_and_acquire
+#define arch_atomic_fetch_or_acquire	arch_atomic_fetch_or_acquire
+#define arch_atomic_fetch_xor_acquire	arch_atomic_fetch_xor_acquire
+#define arch_atomic_fetch_and_release	arch_atomic_fetch_and_release
+#define arch_atomic_fetch_or_release	arch_atomic_fetch_or_release
+#define arch_atomic_fetch_xor_release	arch_atomic_fetch_xor_release
 #define arch_atomic_fetch_and		arch_atomic_fetch_and
 #define arch_atomic_fetch_or		arch_atomic_fetch_or
 #define arch_atomic_fetch_xor		arch_atomic_fetch_xor
@@ -186,6 +244,12 @@ ATOMIC_OPS(xor, xor, i)
 #define arch_atomic64_fetch_and_relaxed	arch_atomic64_fetch_and_relaxed
 #define arch_atomic64_fetch_or_relaxed	arch_atomic64_fetch_or_relaxed
 #define arch_atomic64_fetch_xor_relaxed	arch_atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_and_acquire	arch_atomic64_fetch_and_acquire
+#define arch_atomic64_fetch_or_acquire	arch_atomic64_fetch_or_acquire
+#define arch_atomic64_fetch_xor_acquire	arch_atomic64_fetch_xor_acquire
+#define arch_atomic64_fetch_and_release	arch_atomic64_fetch_and_release
+#define arch_atomic64_fetch_or_release	arch_atomic64_fetch_or_release
+#define arch_atomic64_fetch_xor_release	arch_atomic64_fetch_xor_release
 #define arch_atomic64_fetch_and		arch_atomic64_fetch_and
 #define arch_atomic64_fetch_or		arch_atomic64_fetch_or
 #define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 12debce235e5..1af8db92250b 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -52,16 +52,14 @@
 	switch (size) {							\
 	case 4:								\
 		__asm__ __volatile__ (					\
-			"	amoswap.w %0, %2, %1\n"			\
-			RISCV_ACQUIRE_BARRIER				\
+			"	amoswap.w.aq %0, %2, %1\n"		\
 			: "=r" (__ret), "+A" (*__ptr)			\
 			: "r" (__new)					\
 			: "memory");					\
 		break;							\
 	case 8:								\
 		__asm__ __volatile__ (					\
-			"	amoswap.d %0, %2, %1\n"			\
-			RISCV_ACQUIRE_BARRIER				\
+			"	amoswap.d.aq %0, %2, %1\n"		\
 			: "=r" (__ret), "+A" (*__ptr)			\
 			: "r" (__new)					\
 			: "memory");					\
@@ -87,16 +85,14 @@
 	switch (size) {							\
 	case 4:								\
 		__asm__ __volatile__ (					\
-			RISCV_RELEASE_BARRIER				\
-			"	amoswap.w %0, %2, %1\n"			\
+			"	amoswap.w.rl %0, %2, %1\n"		\
 			: "=r" (__ret), "+A" (*__ptr)			\
 			: "r" (__new)					\
 			: "memory");					\
 		break;							\
 	case 8:								\
 		__asm__ __volatile__ (					\
-			RISCV_RELEASE_BARRIER				\
-			"	amoswap.d %0, %2, %1\n"			\
+			"	amoswap.d.rl %0, %2, %1\n"		\
 			: "=r" (__ret), "+A" (*__ptr)			\
 			: "r" (__new)					\
 			: "memory");					\
-- 
2.25.1


* [PATCH V2 3/3] riscv: atomic: Optimize memory barrier semantics of LRSC-pairs
From: guoren @ 2022-04-12  3:49 UTC
  To: guoren, arnd, palmer, mark.rutland, will, peterz, boqun.feng
  Cc: linux-arch, linux-kernel, linux-riscv, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

The current implementation takes the same approach as commit
8e86f0b409a4 ("arm64: atomics: fix use of acquire + release for full
barrier semantics"). RISC-V can fold acquire and release semantics
into the LR/SC instructions themselves, which reduces the instruction
cost. The reasons for the optimization are (see the sketch below):
 - One fence instruction fewer per operation
 - An LR/SC instruction annotated with acquire/release is cheaper
   than ACQUIRE_BARRIER/RELEASE_BARRIER, which order all preceding
   loads/subsequent stores just to protect the single LR/SC
   instruction
 - Moving the acquire/release barrier into the loop should not cause
   extra performance problems from the micro-architectural point of
   view, because LR and SC are already ordered within the loop by the
   RVWMO rules
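
For illustration (a sketch, not taken verbatim from the patch; the
function names are hypothetical, and whether sc.w.aqrl alone provides
full-barrier semantics is debated later in this thread), a
fully-ordered compare-and-swap changes roughly like this:

  /* Before: sc.w.rl plus a trailing full fence on the success path. */
  static inline int cas_fenced(int *p, int old, int new)
  {
  	int prev, rc;

  	__asm__ __volatile__ (
  		"0:	lr.w %0, %2\n"
  		"	bne  %0, %3, 1f\n"
  		"	sc.w.rl %1, %4, %2\n"
  		"	bnez %1, 0b\n"
  		"	fence rw, rw\n"
  		"1:\n"
  		: "=&r" (prev), "=&r" (rc), "+A" (*p)
  		: "r" (old), "r" (new)
  		: "memory");
  	return prev;
  }

  /* After: the same ordering requested via sc.w.aqrl alone. */
  static inline int cas_aqrl(int *p, int old, int new)
  {
  	int prev, rc;

  	__asm__ __volatile__ (
  		"0:	lr.w %0, %2\n"
  		"	bne  %0, %3, 1f\n"
  		"	sc.w.aqrl %1, %4, %2\n"
  		"	bnez %1, 0b\n"
  		"1:\n"
  		: "=&r" (prev), "=&r" (rc), "+A" (*p)
  		: "r" (old), "r" (new)
  		: "memory");
  	return prev;
  }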

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Mark Rutland <mark.rutland@arm.com>
---
 arch/riscv/include/asm/atomic.h  |  6 ++----
 arch/riscv/include/asm/cmpxchg.h | 18 ++++++------------
 2 files changed, 8 insertions(+), 16 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 20ce8b83bc18..4aaf5b01e7c6 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -382,9 +382,8 @@ static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset)
 		"0:	lr.w     %[p],  %[c]\n"
 		"	sub      %[rc], %[p], %[o]\n"
 		"	bltz     %[rc], 1f\n"
-		"	sc.w.rl  %[rc], %[rc], %[c]\n"
+		"	sc.w.aqrl %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		: [o]"r" (offset)
@@ -404,9 +403,8 @@ static __always_inline s64 arch_atomic64_sub_if_positive(atomic64_t *v, s64 offs
 		"0:	lr.d     %[p],  %[c]\n"
 		"	sub      %[rc], %[p], %[o]\n"
 		"	bltz     %[rc], 1f\n"
-		"	sc.d.rl  %[rc], %[rc], %[c]\n"
+		"	sc.d.aqrl %[rc], %[rc], %[c]\n"
 		"	bnez     %[rc], 0b\n"
-		"	fence    rw, rw\n"
 		"1:\n"
 		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
 		: [o]"r" (offset)
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 1af8db92250b..dfb51c98324d 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -215,9 +215,8 @@
 		__asm__ __volatile__ (					\
 			"0:	lr.w %0, %2\n"				\
 			"	bne  %0, %z3, 1f\n"			\
-			"	sc.w %1, %z4, %2\n"			\
+			"	sc.w.aq %1, %z4, %2\n"			\
 			"	bnez %1, 0b\n"				\
-			RISCV_ACQUIRE_BARRIER				\
 			"1:\n"						\
 			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
 			: "rJ" ((long)__old), "rJ" (__new)		\
@@ -227,9 +226,8 @@
 		__asm__ __volatile__ (					\
 			"0:	lr.d %0, %2\n"				\
 			"	bne %0, %z3, 1f\n"			\
-			"	sc.d %1, %z4, %2\n"			\
+			"	sc.d.aq %1, %z4, %2\n"			\
 			"	bnez %1, 0b\n"				\
-			RISCV_ACQUIRE_BARRIER				\
 			"1:\n"						\
 			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
 			: "rJ" (__old), "rJ" (__new)			\
@@ -259,8 +257,7 @@
 	switch (size) {							\
 	case 4:								\
 		__asm__ __volatile__ (					\
-			RISCV_RELEASE_BARRIER				\
-			"0:	lr.w %0, %2\n"				\
+			"0:	lr.w.rl %0, %2\n"			\
 			"	bne  %0, %z3, 1f\n"			\
 			"	sc.w %1, %z4, %2\n"			\
 			"	bnez %1, 0b\n"				\
@@ -271,8 +268,7 @@
 		break;							\
 	case 8:								\
 		__asm__ __volatile__ (					\
-			RISCV_RELEASE_BARRIER				\
-			"0:	lr.d %0, %2\n"				\
+			"0:	lr.d.rl %0, %2\n"			\
 			"	bne %0, %z3, 1f\n"			\
 			"	sc.d %1, %z4, %2\n"			\
 			"	bnez %1, 0b\n"				\
@@ -307,9 +303,8 @@
 		__asm__ __volatile__ (					\
 			"0:	lr.w %0, %2\n"				\
 			"	bne  %0, %z3, 1f\n"			\
-			"	sc.w.rl %1, %z4, %2\n"			\
+			"	sc.w.aqrl %1, %z4, %2\n"		\
 			"	bnez %1, 0b\n"				\
-			"	fence rw, rw\n"				\
 			"1:\n"						\
 			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
 			: "rJ" ((long)__old), "rJ" (__new)		\
@@ -319,9 +314,8 @@
 		__asm__ __volatile__ (					\
 			"0:	lr.d %0, %2\n"				\
 			"	bne %0, %z3, 1f\n"			\
-			"	sc.d.rl %1, %z4, %2\n"			\
+			"	sc.d.aqrl %1, %z4, %2\n"		\
 			"	bnez %1, 0b\n"				\
-			"	fence rw, rw\n"				\
 			"1:\n"						\
 			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
 			: "rJ" (__old), "rJ" (__new)			\
-- 
2.25.1


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
From: Boqun Feng @ 2022-04-13 15:46 UTC
  To: guoren
  Cc: arnd, palmer, mark.rutland, will, peterz, linux-arch,
	linux-kernel, linux-riscv, Guo Ren, Andrea Parri

[Cc Andrea]

On Tue, Apr 12, 2022 at 11:49:54AM +0800, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
> 
> These patch series contain one cleanup and some optimizations for
> atomic operations.
> 

It seems to me that you are basically reverting 5ce6c1f3535f
("riscv/atomic: Strengthen implementations with fences"). That commit
fixed a memory ordering issue; could you explain why the issue no
longer needs a fix?

Regards,
Boqun

> Changes in V2:
>  - Fixup LR/SC memory barrier semantic problems which pointed by
>    Rutland
>  - Combine patches into one patchset series
>  - Separate AMO optimization & LRSC optimization for convenience
>    patch review
> 
> Guo Ren (3):
>   riscv: atomic: Cleanup unnecessary definition
>   riscv: atomic: Optimize acquire and release for AMO operations
>   riscv: atomic: Optimize memory barrier semantics of LRSC-pairs
> 
>  arch/riscv/include/asm/atomic.h  | 70 ++++++++++++++++++++++++++++++--
>  arch/riscv/include/asm/cmpxchg.h | 42 +++++--------------
>  2 files changed, 76 insertions(+), 36 deletions(-)
> 
> -- 
> 2.25.1
> 

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
From: Guo Ren @ 2022-04-16 16:49 UTC
  To: Boqun Feng, Andrea Parri, Daniel Lustig, Paul E. McKenney
  Cc: Arnd Bergmann, Palmer Dabbelt, Mark Rutland, Will Deacon,
	Peter Zijlstra, linux-arch, Linux Kernel Mailing List,
	linux-riscv, Guo Ren

Hi Boqun,

On Wed, Apr 13, 2022 at 11:46 PM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> [Cc Andrea]
>
> On Tue, Apr 12, 2022 at 11:49:54AM +0800, guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > These patch series contain one cleanup and some optimizations for
> > atomic operations.
> >
>
> Seems to me that you are basically reverting 5ce6c1f3535f
> ("riscv/atomic: Strengthen implementations with fences"). That commit
> fixed an memory ordering issue, could you explain why the issue no
> longer needs a fix?

I'm not reverting the prior patch, just optimizing it.

The RISC-V “A” Standard Extension for Atomic Instructions spec says:
If only the aq bit is set, the atomic memory operation is treated as
an acquire access, i.e., no following memory operations on this RISC-V
hart can be observed to take place before the acquire memory
operation.
-                       "       amoswap.w %0, %2, %1\n"                 \
-                       RISCV_ACQUIRE_BARRIER                           \
+                       "       amoswap.w.aq %0, %2, %1\n"              \
So RISCV_ACQUIRE_BARRIER is "fence r, rw", and its "r" predecessor
set over-constrains ordering just to protect the amoswap.w; using
amoswap.w.aq here is more appropriate.

If only the rl bit is set, the atomic memory operation is treated as a
release access, i.e., the release memory operation cannot be observed
to take place before any earlier memory operations on this RISC-V
hart.
-                       RISCV_RELEASE_BARRIER                           \
-                       "       amoswap.w %0, %2, %1\n"                 \
+                       "       amoswap.w.rl %0, %2, %1\n"              \
So RISCV_RELEASE_BARRIER is "fence rw, w", and its "w" successor set
over-constrains ordering just to protect the amoswap.w; using
amoswap.w.rl here is more appropriate.

If both the aq and rl bits are set, the atomic memory operation is
sequentially consistent and cannot be observed to happen before any
earlier memory operations or after any later memory operations in the
same RISC-V hart and to the same address domain.
                "0:     lr.w     %[p],  %[c]\n"
                "       sub      %[rc], %[p], %[o]\n"
                "       bltz     %[rc], 1f\n".
-               "       sc.w.rl  %[rc], %[rc], %[c]\n"
+               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
                "       bnez     %[rc], 0b\n"
-               "       fence    rw, rw\n"
                "1:\n"
So ".rl" plus "fence rw, rw" over-constrains ordering; using sc.w.aqrl alone is more appropriate.
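
For reference, the barrier macros discussed above are defined in
arch/riscv/include/asm/fence.h (as of this thread) as:

  #define RISCV_ACQUIRE_BARRIER		"\tfence r , rw\n"
  #define RISCV_RELEASE_BARRIER		"\tfence rw,  w\n"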

>
> Regards,
> Boqun
>
> > Changes in V2:
> >  - Fixup LR/SC memory barrier semantic problems which pointed by
> >    Rutland
> >  - Combine patches into one patchset series
> >  - Separate AMO optimization & LRSC optimization for convenience
> >    patch review
> >
> > Guo Ren (3):
> >   riscv: atomic: Cleanup unnecessary definition
> >   riscv: atomic: Optimize acquire and release for AMO operations
> >   riscv: atomic: Optimize memory barrier semantics of LRSC-pairs
> >
> >  arch/riscv/include/asm/atomic.h  | 70 ++++++++++++++++++++++++++++++--
> >  arch/riscv/include/asm/cmpxchg.h | 42 +++++--------------
> >  2 files changed, 76 insertions(+), 36 deletions(-)
> >
> > --
> > 2.25.1
> >



--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
From: Boqun Feng @ 2022-04-17  2:26 UTC
  To: Guo Ren
  Cc: Andrea Parri, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
[...]
> 
> If both the aq and rl bits are set, the atomic memory operation is
> sequentially consistent and cannot be observed to happen before any
> earlier memory operations or after any later memory operations in the
> same RISC-V hart and to the same address domain.
>                 "0:     lr.w     %[p],  %[c]\n"
>                 "       sub      %[rc], %[p], %[o]\n"
>                 "       bltz     %[rc], 1f\n".
> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>                 "       bnez     %[rc], 0b\n"
> -               "       fence    rw, rw\n"
>                 "1:\n"
> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> 

Can .aqrl order memory accesses before and after it (not against itself,
against each other), i.e. act as a full memory barrier? For example, can
we end up with u == 1, v == 1, r1 == 0 on P0, and r1 == 0 on P1 in the
following litmus test?

    C lr-sc-aqrl-pair-vs-full-barrier
    
    {}
    
    P0(int *x, int *y, atomic_t *u)
    {
            int r0;
            int r1;
    
            WRITE_ONCE(*x, 1);
            r0 = atomic_cmpxchg(u, 0, 1);
            r1 = READ_ONCE(*y);
    }
    
    P1(int *x, int *y, atomic_t *v)
    {
            int r0;
            int r1;
    
            WRITE_ONCE(*y, 1);
            r0 = atomic_cmpxchg(v, 0, 1);
            r1 = READ_ONCE(*x);
    }
    
    exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
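
(A test like this can be checked against the Linux-kernel memory
model with herd7 and the model files under tools/memory-model in the
kernel tree; a usage sketch, assuming herd7 is installed and the test
is saved as lr-sc-aqrl-pair-vs-full-barrier.litmus. Note this checks
the LKMM-level expectation; the instruction-level RISC-V question
needs the RVWMO model instead.)

    $ cd tools/memory-model
    $ herd7 -conf linux-kernel.cfg lr-sc-aqrl-pair-vs-full-barrier.litmus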

Regards,
Boqun

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
From: Guo Ren @ 2022-04-17  4:51 UTC
  To: Boqun Feng
  Cc: Andrea Parri, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

Hi Boqun & Andrea,

On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> [...]
> >
> > If both the aq and rl bits are set, the atomic memory operation is
> > sequentially consistent and cannot be observed to happen before any
> > earlier memory operations or after any later memory operations in the
> > same RISC-V hart and to the same address domain.
> >                 "0:     lr.w     %[p],  %[c]\n"
> >                 "       sub      %[rc], %[p], %[o]\n"
> >                 "       bltz     %[rc], 1f\n".
> > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> >                 "       bnez     %[rc], 0b\n"
> > -               "       fence    rw, rw\n"
> >                 "1:\n"
> > So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> >
>
> Can .aqrl order memory accesses before and after it (not against itself,
> against each other), i.e. act as a full memory barrier? For example, can
From the RVWMO spec description, the .aqrl annotation has the same
effect as appending a "fence rw, rw" to the AMO instruction, so it's RCsc.

Not only .aqrl; I think the sequence below can also be RCsc when
sc.w.aq is executed:
A: Pre-Access
B: lr.w.rl ADDR-0
...
C: sc.w.aq ADDR-0
D: Post-Access
Because sc.w.aq has an overlapping-address & data dependency on
lr.w.rl, the global memory order should be A->B->C->D when sc.w.aq
is executed. For the amoswap

The purpose of the whole patchset is to reduce the usage of
standalone "fence rw, rw" instructions and maximize the usage of the
.aq/.rl/.aqrl annotations of RISC-V.

                __asm__ __volatile__ (                                  \
                        "0:     lr.w %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w.rl %1, %z4, %2\n"                  \
                        "       bnez %1, 0b\n"                          \
                        "       fence rw, rw\n"                         \
                        "1:\n"                                          \

> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> following litmus test?
>
>     C lr-sc-aqrl-pair-vs-full-barrier
>
>     {}
>
>     P0(int *x, int *y, atomic_t *u)
>     {
>             int r0;
>             int r1;
>
>             WRITE_ONCE(*x, 1);
>             r0 = atomic_cmpxchg(u, 0, 1);
>             r1 = READ_ONCE(*y);
>     }
>
>     P1(int *x, int *y, atomic_t *v)
>     {
>             int r0;
>             int r1;
>
>             WRITE_ONCE(*y, 1);
>             r0 = atomic_cmpxchg(v, 0, 1);
>             r1 = READ_ONCE(*x);
>     }
>
>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
I think my patchset won't affect the above ordering guarantee. The
current RISC-V implementation only gives RCsc ordering when the
compared value matches at least once. So I prefer that RISC-V cmpxchg
be:


-                       "0:     lr.w %0, %2\n"                          \
+                      "0:     lr.w.rl %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w.rl %1, %z4, %2\n"                  \
                        "       bnez %1, 0b\n"                          \
-                       "       fence rw, rw\n"                         \
                        "1:\n"                                          \
+                        "       fence w, rw\n"                    \

To give atomic_cmpxchg unconditional RCsc ordering.

>
> Regards,
> Boqun



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-17  4:51         ` Guo Ren
@ 2022-04-17  6:30           ` Boqun Feng
  -1 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2022-04-17  6:30 UTC (permalink / raw)
  To: Guo Ren
  Cc: Andrea Parri, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren


On Sun, Apr 17, 2022 at 12:51:38PM +0800, Guo Ren wrote:
> Hi Boqun & Andrea,
> 
> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> >
> > On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > [...]
> > >
> > > If both the aq and rl bits are set, the atomic memory operation is
> > > sequentially consistent and cannot be observed to happen before any
> > > earlier memory operations or after any later memory operations in the
> > > same RISC-V hart and to the same address domain.
> > >                 "0:     lr.w     %[p],  %[c]\n"
> > >                 "       sub      %[rc], %[p], %[o]\n"
> > >                 "       bltz     %[rc], 1f\n".
> > > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > >                 "       bnez     %[rc], 0b\n"
> > > -               "       fence    rw, rw\n"
> > >                 "1:\n"
> > > So .rl + fence rw, rw over-constrains; using only sc.w.aqrl is more proper.
> > >
> >
> > Can .aqrl order memory accesses before and after it (not against itself,
> > against each other), i.e. act as a full memory barrier? For example, can
> From the RVWMO spec description, the .aqrl annotation has the same
> effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> 

Thanks for the confirmation, btw, where can I find the RVWMO spec?

> Not only .aqrl: I think the sequence below could also be RCsc when
> sc.w.aq is executed:
> A: Pre-Access
> B: lr.w.rl ADDR-0
> ...
> C: sc.w.aq ADDR-0
> D: Post-Access
> Because sc.w.aq has an overlapping address and a data dependency on lr.w.rl, the
> global memory order should be A->B->C->D when sc.w.aq is executed. For
> the amoswap
> 
> The purpose of the whole patchset is to reduce the usage of
> independent fence rw, rw instructions, and maximize the usage of the
> .aq/.rl/.aqrl annotations of RISC-V.
> 
>                 __asm__ __volatile__ (                                  \
>                         "0:     lr.w %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w.rl %1, %z4, %2\n"                  \
>                         "       bnez %1, 0b\n"                          \
>                         "       fence rw, rw\n"                         \
>                         "1:\n"                                          \
> 
> > we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > following litmus test?
> >
> >     C lr-sc-aqrl-pair-vs-full-barrier
> >
> >     {}
> >
> >     P0(int *x, int *y, atomic_t *u)
> >     {
> >             int r0;
> >             int r1;
> >
> >             WRITE_ONCE(*x, 1);
> >             r0 = atomic_cmpxchg(u, 0, 1);
> >             r1 = READ_ONCE(*y);
> >     }
> >
> >     P1(int *x, int *y, atomic_t *v)
> >     {
> >             int r0;
> >             int r1;
> >
> >             WRITE_ONCE(*y, 1);
> >             r0 = atomic_cmpxchg(v, 0, 1);
> >             r1 = READ_ONCE(*x);
> >     }
> >
> >     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> I think my patchset won't affect the above ordering guarantee. The
> current RISC-V implementation only gives RCsc ordering when the loaded
> value matches the expected one at least once (i.e., when the cmpxchg
> succeeds). So I'd prefer the RISC-V cmpxchg to be:
> 
> 
> -                       "0:     lr.w %0, %2\n"                          \
> +                      "0:     lr.w.rl %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w.rl %1, %z4, %2\n"                  \
>                         "       bnez %1, 0b\n"                          \
> -                       "       fence rw, rw\n"                         \
>                         "1:\n"                                          \
> +                        "       fence w, rw\n"                    \
> 
> To give unconditional RCsc ordering for atomic_cmpxchg.
> 

Note that the Linux kernel doesn't require cmpxchg() to provide any
ordering if it fails to update the memory location. So you won't need to
strengthen atomic_cmpxchg().
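
For illustration only (not from the thread): a caller that does need
ordering on the failure path adds its own barrier, e.g.

/* Hypothetical caller; names are made up for the example. */
static void claim_slot(atomic_t *u)
{
	/* Fully ordered only when the cmpxchg succeeds. */
	if (atomic_cmpxchg(u, 0, 1) != 0)
		smp_mb();	/* a failed RMW implies no ordering */
}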

Regards,
Boqun

> >
> > Regards,
> > Boqun
> 
> 
> 
> -- 
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-17  6:30           ` Boqun Feng
@ 2022-04-17  6:45             ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-17  6:45 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Andrea Parri, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Sun, Apr 17, 2022 at 2:31 PM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> On Sun, Apr 17, 2022 at 12:51:38PM +0800, Guo Ren wrote:
> > Hi Boqun & Andrea,
> >
> > On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > >
> > > On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > [...]
> > > >
> > > > If both the aq and rl bits are set, the atomic memory operation is
> > > > sequentially consistent and cannot be observed to happen before any
> > > > earlier memory operations or after any later memory operations in the
> > > > same RISC-V hart and to the same address domain.
> > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > >                 "       sub      %[rc], %[p], %[o]\n"
> > > >                 "       bltz     %[rc], 1f\n".
> > > > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > >                 "       bnez     %[rc], 0b\n"
> > > > -               "       fence    rw, rw\n"
> > > >                 "1:\n"
> > > > So .rl + fence rw, rw over-constrains; only using sc.w.aqrl is more proper.
> > > >
> > >
> > > Can .aqrl order memory accesses before and after it (not against itself,
> > > against each other), i.e. act as a full memory barrier? For example, can
> > From the RVWMO spec description, the .aqrl annotation has the same
> > effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> >
>
> Thanks for the confirmation, btw, where can I find the RVWMO spec?
RVWMO section:
https://five-embeddev.com/riscv-isa-manual/latest/rvwmo.html#ch:memorymodel

ATOMIC instructions:
https://five-embeddev.com/riscv-isa-manual/latest/a.html#atomics

>
> > Not only .aqrl: I think the sequence below could also be RCsc when
> > sc.w.aq is executed:
> > A: Pre-Access
> > B: lr.w.rl ADDR-0
> > ...
> > C: sc.w.aq ADDR-0
> > D: Post-Access
> > Because sc.w.aq has an overlapping address and a data dependency on lr.w.rl, the
> > global memory order should be A->B->C->D when sc.w.aq is executed. For
> > the amoswap
> >
> > The purpose of the whole patchset is to reduce the usage of
> > independent fence rw, rw instructions, and maximize the usage of the
> > .aq/.rl/.aqrl annotations of RISC-V.
> >
> >                 __asm__ __volatile__ (                                  \
> >                         "0:     lr.w %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> >                         "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> >
> > > we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > following litmus test?
> > >
> > >     C lr-sc-aqrl-pair-vs-full-barrier
> > >
> > >     {}
> > >
> > >     P0(int *x, int *y, atomic_t *u)
> > >     {
> > >             int r0;
> > >             int r1;
> > >
> > >             WRITE_ONCE(*x, 1);
> > >             r0 = atomic_cmpxchg(u, 0, 1);
> > >             r1 = READ_ONCE(*y);
> > >     }
> > >
> > >     P1(int *x, int *y, atomic_t *v)
> > >     {
> > >             int r0;
> > >             int r1;
> > >
> > >             WRITE_ONCE(*y, 1);
> > >             r0 = atomic_cmpxchg(v, 0, 1);
> > >             r1 = READ_ONCE(*x);
> > >     }
> > >
> > >     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > I think my patchset won't affect the above ordering guarantee. The
> > current RISC-V implementation only gives RCsc ordering when the loaded
> > value matches the expected one at least once (i.e., when the cmpxchg
> > succeeds). So I'd prefer the RISC-V cmpxchg to be:
> >
> >
> > -                       "0:     lr.w %0, %2\n"                          \
> > +                      "0:     lr.w.rl %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> > -                       "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> > +                        "       fence w, rw\n"                    \
> >
> > To give unconditional RCsc ordering for atomic_cmpxchg.
> >
>
> Note that the Linux kernel doesn't require cmpxchg() to provide any
> ordering if it fails to update the memory location. So you won't need to
> strengthen atomic_cmpxchg().
Thx for the clarification.

>
> Regards,
> Boqun
>
> > >
> > > Regards,
> > > Boqun
> >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
> >
> > ML: https://lore.kernel.org/linux-csky/



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-16 16:49     ` Guo Ren
@ 2022-04-18 23:41       ` Andrea Parri
  -1 siblings, 0 replies; 42+ messages in thread
From: Andrea Parri @ 2022-04-18 23:41 UTC (permalink / raw)
  To: Guo Ren
  Cc: Boqun Feng, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

> > Seems to me that you are basically reverting 5ce6c1f3535f
> > ("riscv/atomic: Strengthen implementations with fences"). That commit
> > fixed a memory ordering issue, could you explain why the issue no
> > longer needs a fix?
> 
> I'm not reverting the prior patch, just optimizing it.
> 
> In the RISC-V “A” Standard Extension for Atomic Instructions spec, it says:

With reference to the RISC-V herd specification at:

  https://github.com/riscv/riscv-isa-manual.git

the issue (more precisely, lr-sc-aqrl-pair-vs-full-barrier) seems to _no longer_
need a fix since commit:

  03a5e722fc0f ("Updates to the memory consistency model spec")

(here is a template, to double-check:

  https://github.com/litmus-tests/litmus-tests-riscv/blob/master/tests/non-mixed-size/HAND/LR-SC-NOT-FENCE.litmus )

I defer to Daniel/others for a "bi-section" of the prose specification.
;-)

  Andrea


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-17  4:51         ` Guo Ren
@ 2022-04-19 17:12           ` Dan Lustig
  -1 siblings, 0 replies; 42+ messages in thread
From: Dan Lustig @ 2022-04-19 17:12 UTC (permalink / raw)
  To: Guo Ren, Boqun Feng
  Cc: Andrea Parri, Paul E. McKenney, Arnd Bergmann, Palmer Dabbelt,
	Mark Rutland, Will Deacon, Peter Zijlstra, linux-arch,
	Linux Kernel Mailing List, linux-riscv, Guo Ren

On 4/17/2022 12:51 AM, Guo Ren wrote:
> Hi Boqun & Andrea,
> 
> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>>
>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
>> [...]
>>>
>>> If both the aq and rl bits are set, the atomic memory operation is
>>> sequentially consistent and cannot be observed to happen before any
>>> earlier memory operations or after any later memory operations in the
>>> same RISC-V hart and to the same address domain.
>>>                 "0:     lr.w     %[p],  %[c]\n"
>>>                 "       sub      %[rc], %[p], %[o]\n"
>>>                 "       bltz     %[rc], 1f\n".
>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>>>                 "       bnez     %[rc], 0b\n"
>>> -               "       fence    rw, rw\n"
>>>                 "1:\n"
>>> So .rl + fence rw, rw over-constrains; only using sc.w.aqrl is more proper.
>>>
>>
>> Can .aqrl order memory accesses before and after it (not against itself,
>> against each other), i.e. act as a full memory barrier? For example, can
> From the RVWMO spec description, the .aqrl annotation has the same
> effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> 
> Not only .aqrl: I think the sequence below could also be RCsc when
> sc.w.aq is executed:
> A: Pre-Access
> B: lr.w.rl ADDR-0
> ...
> C: sc.w.aq ADDR-0
> D: Post-Access
> Because sc.w.aq has an overlapping address and a data dependency on lr.w.rl, the
> global memory order should be A->B->C->D when sc.w.aq is executed. For
> the amoswap

These opcodes aren't actually meaningful, unfortunately.

Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
on an LR instruction unless the aq bit is also set, nor should software
set the aq bit on an SC instruction unless the rl bit is also set."
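
In other words, the single-bit forms that remain sensible pair .aq with
LR and .rl with SC. A sketch under assumed names (illustrative, not
in-tree code):

/* Exchange loop using only the recommended one-bit annotations. */
static inline unsigned int xchg32_acq_rel_sketch(unsigned int *p,
						 unsigned int new)
{
	unsigned int old, rc;

	__asm__ __volatile__ (
		"0:	lr.w.aq	%0, %1\n"	/* aq alone on LR: fine */
		"	sc.w.rl	%2, %3, %1\n"	/* rl alone on SC: fine */
		"	bnez	%2, 0b\n"
		: "=&r" (old), "+A" (*p), "=&r" (rc)
		: "r" (new)
		: "memory");

	/* lr.w.rl and sc.w.aq on their own are the discouraged forms. */
	return old;
}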

Dan

> The purpose of the whole patchset is to reduce the usage of
> independent fence rw, rw instructions, and maximize the usage of the
> .aq/.rl/.aqrl annotations of RISC-V.
> 
>                 __asm__ __volatile__ (                                  \
>                         "0:     lr.w %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w.rl %1, %z4, %2\n"                  \
>                         "       bnez %1, 0b\n"                          \
>                         "       fence rw, rw\n"                         \
>                         "1:\n"                                          \
> 
>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
>> following litmus test?
>>
>>     C lr-sc-aqrl-pair-vs-full-barrier
>>
>>     {}
>>
>>     P0(int *x, int *y, atomic_t *u)
>>     {
>>             int r0;
>>             int r1;
>>
>>             WRITE_ONCE(*x, 1);
>>             r0 = atomic_cmpxchg(u, 0, 1);
>>             r1 = READ_ONCE(*y);
>>     }
>>
>>     P1(int *x, int *y, atomic_t *v)
>>     {
>>             int r0;
>>             int r1;
>>
>>             WRITE_ONCE(*y, 1);
>>             r0 = atomic_cmpxchg(v, 0, 1);
>>             r1 = READ_ONCE(*x);
>>     }
>>
>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> I think my patchset won't affect the above ordering guarantee. The
> current RISC-V implementation only gives RCsc ordering when the loaded
> value matches the expected one at least once (i.e., when the cmpxchg
> succeeds). So I'd prefer the RISC-V cmpxchg to be:
> 
> 
> -                       "0:     lr.w %0, %2\n"                          \
> +                      "0:     lr.w.rl %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w.rl %1, %z4, %2\n"                  \
>                         "       bnez %1, 0b\n"                          \
> -                       "       fence rw, rw\n"                         \
>                         "1:\n"                                          \
> +                        "       fence w, rw\n"                    \
> 
> To give unconditional RCsc ordering for atomic_cmpxchg.
> 
>>
>> Regards,
>> Boqun
> 
> 
> 


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-18 23:41       ` Andrea Parri
@ 2022-04-19 17:13         ` Dan Lustig
  -1 siblings, 0 replies; 42+ messages in thread
From: Dan Lustig @ 2022-04-19 17:13 UTC (permalink / raw)
  To: Andrea Parri, Guo Ren
  Cc: Boqun Feng, Paul E. McKenney, Arnd Bergmann, Palmer Dabbelt,
	Mark Rutland, Will Deacon, Peter Zijlstra, linux-arch,
	Linux Kernel Mailing List, linux-riscv, Guo Ren

On 4/18/2022 7:41 PM, Andrea Parri wrote:
>>> Seems to me that you are basically reverting 5ce6c1f3535f
>>> ("riscv/atomic: Strengthen implementations with fences"). That commit
>>> fixed a memory ordering issue, could you explain why the issue no
>>> longer needs a fix?
>>
>> I'm not reverting the prior patch, just optimizing it.
>>
>> In the RISC-V “A” Standard Extension for Atomic Instructions spec, it says:
> 
> With reference to the RISC-V herd specification at:
> 
>   https://github.com/riscv/riscv-isa-manual.git
> 
> the issue (more precisely, lr-sc-aqrl-pair-vs-full-barrier) seems to _no longer_
> need a fix since commit:
> 
>   03a5e722fc0f ("Updates to the memory consistency model spec")
> 
> (here is a template, to double-check:
> 
>   https://github.com/litmus-tests/litmus-tests-riscv/blob/master/tests/non-mixed-size/HAND/LR-SC-NOT-FENCE.litmus )
> 
> I defer to Daniel/others for a "bi-section" of the prose specification.
> ;-)

What is the question exactly?

Dan

> 
>   Andrea


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-19 17:12           ` Dan Lustig
@ 2022-04-20  5:33             ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-20  5:33 UTC (permalink / raw)
  To: Dan Lustig
  Cc: Boqun Feng, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

Thx Dan,

On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
>
> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > Hi Boqun & Andrea,
> >
> > On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> >>
> >> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> >> [...]
> >>>
> >>> If both the aq and rl bits are set, the atomic memory operation is
> >>> sequentially consistent and cannot be observed to happen before any
> >>> earlier memory operations or after any later memory operations in the
> >>> same RISC-V hart and to the same address domain.
> >>>                 "0:     lr.w     %[p],  %[c]\n"
> >>>                 "       sub      %[rc], %[p], %[o]\n"
> >>>                 "       bltz     %[rc], 1f\n".
> >>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> >>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> >>>                 "       bnez     %[rc], 0b\n"
> >>> -               "       fence    rw, rw\n"
> >>>                 "1:\n"
> >>> So .rl + fence rw, rw over-constrains; only using sc.w.aqrl is more proper.
> >>>
> >>
> >> Can .aqrl order memory accesses before and after it (not against itself,
> >> against each other), i.e. act as a full memory barrier? For example, can
> > From the RVWMO spec description, the .aqrl annotation has the same
> > effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> >
> > Not only .aqrl: I think the sequence below could also be RCsc when
> > sc.w.aq is executed:
> > A: Pre-Access
> > B: lr.w.rl ADDR-0
> > ...
> > C: sc.w.aq ADDR-0
> > D: Post-Access
> > Because sc.w.aq has an overlapping address and a data dependency on lr.w.rl, the
> > global memory order should be A->B->C->D when sc.w.aq is executed. For
> > the amoswap
>
> These opcodes aren't actually meaningful, unfortunately.
>
> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> on an LR instruction unless the aq bit is also set, nor should software
> set the aq bit on an SC instruction unless the rl bit is also set."
1. Oh, I'd missed the latter half of the ISA manual. But why can't we
utilize lr.rl & sc.aq in software to guarantee the
sequence?

2. Using .aqrl to replace the fence rw, rw is okay according to the ISA
manual, right? And it saves a fence instruction, for better performance:
                "0:     lr.w     %[p],  %[c]\n"
                 "       sub      %[rc], %[p], %[o]\n"
                 "       bltz     %[rc], 1f\n".
 -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
 +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
                 "       bnez     %[rc], 0b\n"
 -               "       fence    rw, rw\n"

>
> Dan
>
> > The purpose of the whole patchset is to reduce the usage of
> > independent fence rw, rw instructions, and maximize the usage of the
> > .aq/.rl/.aqrl annotations of RISC-V.
> >
> >                 __asm__ __volatile__ (                                  \
> >                         "0:     lr.w %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> >                         "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> >
> >> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> >> following litmus test?
> >>
> >>     C lr-sc-aqrl-pair-vs-full-barrier
> >>
> >>     {}
> >>
> >>     P0(int *x, int *y, atomic_t *u)
> >>     {
> >>             int r0;
> >>             int r1;
> >>
> >>             WRITE_ONCE(*x, 1);
> >>             r0 = atomic_cmpxchg(u, 0, 1);
> >>             r1 = READ_ONCE(*y);
> >>     }
> >>
> >>     P1(int *x, int *y, atomic_t *v)
> >>     {
> >>             int r0;
> >>             int r1;
> >>
> >>             WRITE_ONCE(*y, 1);
> >>             r0 = atomic_cmpxchg(v, 0, 1);
> >>             r1 = READ_ONCE(*x);
> >>     }
> >>
> >>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > I think my patchset won't affect the above ordering guarantee. The
> > current RISC-V implementation only gives RCsc ordering when the loaded
> > value matches the expected one at least once (i.e., when the cmpxchg
> > succeeds). So I'd prefer the RISC-V cmpxchg to be:
> >
> >
> > -                       "0:     lr.w %0, %2\n"                          \
> > +                      "0:     lr.w.rl %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w.rl %1, %z4, %2\n"                  \
> >                         "       bnez %1, 0b\n"                          \
> > -                       "       fence rw, rw\n"                         \
> >                         "1:\n"                                          \
> > +                        "       fence w, rw\n"                    \
> >
> > To give unconditional RCsc ordering for atomic_cmpxchg.
> >
> >>
> >> Regards,
> >> Boqun
> >
> >
> >



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-20  5:33             ` Guo Ren
@ 2022-04-20 17:03               ` Dan Lustig
  -1 siblings, 0 replies; 42+ messages in thread
From: Dan Lustig @ 2022-04-20 17:03 UTC (permalink / raw)
  To: Guo Ren
  Cc: Boqun Feng, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On 4/20/2022 1:33 AM, Guo Ren wrote:
> Thx Dan,
> 
> On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
>>
>> On 4/17/2022 12:51 AM, Guo Ren wrote:
>>> Hi Boqun & Andrea,
>>>
>>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>>>>
>>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
>>>> [...]
>>>>>
>>>>> If both the aq and rl bits are set, the atomic memory operation is
>>>>> sequentially consistent and cannot be observed to happen before any
>>>>> earlier memory operations or after any later memory operations in the
>>>>> same RISC-V hart and to the same address domain.
>>>>>                 "0:     lr.w     %[p],  %[c]\n"
>>>>>                 "       sub      %[rc], %[p], %[o]\n"
>>>>>                 "       bltz     %[rc], 1f\n".
>>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
>>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>>>>>                 "       bnez     %[rc], 0b\n"
>>>>> -               "       fence    rw, rw\n"
>>>>>                 "1:\n"
>>>>> So .rl + fence rw, rw over-constrains; only using sc.w.aqrl is more proper.
>>>>>
>>>>
>>>> Can .aqrl order memory accesses before and after it (not against itself,
>>>> against each other), i.e. act as a full memory barrier? For example, can
>>> From the RVWMO spec description, the .aqrl annotation has the same
>>> effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
>>>
>>> Not only .aqrl: I think the sequence below could also be RCsc when
>>> sc.w.aq is executed:
>>> A: Pre-Access
>>> B: lr.w.rl ADDR-0
>>> ...
>>> C: sc.w.aq ADDR-0
>>> D: Post-Access
>>> Because sc.w.aq has an overlapping address and a data dependency on lr.w.rl, the
>>> global memory order should be A->B->C->D when sc.w.aq is executed. For
>>> the amoswap
>>
>> These opcodes aren't actually meaningful, unfortunately.
>>
>> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
>> on an LR instruction unless the aq bit is also set, nor should software
>> set the aq bit on an SC instruction unless the rl bit is also set."
> 1. Oh, I'd missed the latter half of the ISA manual. But why can't we
> utilize lr.rl & sc.aq in software to guarantee the
> sequence?

lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
Plus, they just aren't common operations to begin with, e.g., there
is no smp_store_acquire() or smp_load_release(), nor are there
equivalents in C/C++ atomics.
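
For contrast, the one-way combinations the kernel does provide pair
acquire with loads and release with stores; a minimal usage sketch
(names made up for the example):

/* Writer publishes data; reader consumes it. */
static void publish(int *data, int *flag)
{
	*data = 42;
	smp_store_release(flag, 1);	/* store-release does exist */
}

static int consume(int *data, int *flag)
{
	if (!smp_load_acquire(flag))	/* load-acquire does exist */
		return -1;
	return *data;			/* ordered after the flag load */
}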

> 2. Using .aqrl to replace the fence rw, rw is okay according to the ISA
> manual, right? And it saves a fence instruction, for better performance:
>                 "0:     lr.w     %[p],  %[c]\n"
>                  "       sub      %[rc], %[p], %[o]\n"
>                  "       bltz     %[rc], 1f\n".
>  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
>  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>                  "       bnez     %[rc], 0b\n"
>  -               "       fence    rw, rw\n"

Yes, using .aqrl is valid.

Dan

>>
>> Dan
>>
>>> The purpose of the whole patchset is to reduce the usage of
>>> independent fence rw, rw instructions, and maximize the usage of the
>>> .aq/.rl/.aqrl annotations of RISC-V.
>>>
>>>                 __asm__ __volatile__ (                                  \
>>>                         "0:     lr.w %0, %2\n"                          \
>>>                         "       bne  %0, %z3, 1f\n"                     \
>>>                         "       sc.w.rl %1, %z4, %2\n"                  \
>>>                         "       bnez %1, 0b\n"                          \
>>>                         "       fence rw, rw\n"                         \
>>>                         "1:\n"                                          \
>>>
>>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
>>>> following litmus test?
>>>>
>>>>     C lr-sc-aqrl-pair-vs-full-barrier
>>>>
>>>>     {}
>>>>
>>>>     P0(int *x, int *y, atomic_t *u)
>>>>     {
>>>>             int r0;
>>>>             int r1;
>>>>
>>>>             WRITE_ONCE(*x, 1);
>>>>             r0 = atomic_cmpxchg(u, 0, 1);
>>>>             r1 = READ_ONCE(*y);
>>>>     }
>>>>
>>>>     P1(int *x, int *y, atomic_t *v)
>>>>     {
>>>>             int r0;
>>>>             int r1;
>>>>
>>>>             WRITE_ONCE(*y, 1);
>>>>             r0 = atomic_cmpxchg(v, 0, 1);
>>>>             r1 = READ_ONCE(*x);
>>>>     }
>>>>
>>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
>>> I think my patchset won't affect the above ordering guarantee. The
>>> current RISC-V implementation only gives RCsc ordering when the loaded
>>> value matches the expected one at least once (i.e., when the cmpxchg
>>> succeeds). So I'd prefer the RISC-V cmpxchg to be:
>>>
>>>
>>> -                       "0:     lr.w %0, %2\n"                          \
>>> +                      "0:     lr.w.rl %0, %2\n"                          \
>>>                         "       bne  %0, %z3, 1f\n"                     \
>>>                         "       sc.w.rl %1, %z4, %2\n"                  \
>>>                         "       bnez %1, 0b\n"                          \
>>> -                       "       fence rw, rw\n"                         \
>>>                         "1:\n"                                          \
>>> +                        "       fence w, rw\n"                    \
>>>
>>> To give unconditional RCsc ordering for atomic_cmpxchg.
>>>
>>>>
>>>> Regards,
>>>> Boqun
>>>
>>>
>>>
> 
> 
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
@ 2022-04-20 17:03               ` Dan Lustig
  0 siblings, 0 replies; 42+ messages in thread
From: Dan Lustig @ 2022-04-20 17:03 UTC (permalink / raw)
  To: Guo Ren
  Cc: Boqun Feng, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On 4/20/2022 1:33 AM, Guo Ren wrote:
> Thx Dan,
> 
> On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
>>
>> On 4/17/2022 12:51 AM, Guo Ren wrote:
>>> Hi Boqun & Andrea,
>>>
>>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>>>>
>>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
>>>> [...]
>>>>>
>>>>> If both the aq and rl bits are set, the atomic memory operation is
>>>>> sequentially consistent and cannot be observed to happen before any
>>>>> earlier memory operations or after any later memory operations in the
>>>>> same RISC-V hart and to the same address domain.
>>>>>                 "0:     lr.w     %[p],  %[c]\n"
>>>>>                 "       sub      %[rc], %[p], %[o]\n"
>>>>>                 "       bltz     %[rc], 1f\n".
>>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
>>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>>>>>                 "       bnez     %[rc], 0b\n"
>>>>> -               "       fence    rw, rw\n"
>>>>>                 "1:\n"
>>>>> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
>>>>>
>>>>
>>>> Can .aqrl order memory accesses before and after it (not against itself,
>>>> against each other), i.e. act as a full memory barrier? For example, can
>>> From the RVWMO spec description, the .aqrl annotation appends the same
>>> effect with "fence rw, rw" to the AMO instruction, so it's RCsc.
>>>
>>> Not only .aqrl, and I think the below also could be an RCsc when
>>> sc.w.aq is executed:
>>> A: Pre-Access
>>> B: lr.w.rl ADDR-0
>>> ...
>>> C: sc.w.aq ADDR-0
>>> D: Post-Acess
>>> Because sc.w.aq has overlap address & data dependency on lr.w.rl, the
>>> global memory order should be A->B->C->D when sc.w.aq is executed. For
>>> the amoswap
>>
>> These opcodes aren't actually meaningful, unfortunately.
>>
>> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
>> on an LR instruction unless the aq bit is also set, nor should software
>> set the aq bit on an SC instruction unless the rl bit is also set."
> 1. Oh, I've missed the behind half of the ISA manual. But why can't we
> utilize lr.rl & sc.aq in software programming to guarantee the
> sequence?

lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
Plus, they just aren't common operations to begin with, e.g., there
is no smp_store_acquire() or smp_load_release(), nor are there
equivalents in C/C++ atomics.

> 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> right? And it removes a fence instruction, gaining better performance:
>                 "0:     lr.w     %[p],  %[c]\n"
>                  "       sub      %[rc], %[p], %[o]\n"
>                  "       bltz     %[rc], 1f\n".
>  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
>  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
>                  "       bnez     %[rc], 0b\n"
>  -               "       fence    rw, rw\n"

Yes, using .aqrl is valid.
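
For clarity, applying the quoted diff gives the following shape for the
loop (reassembled here purely for illustration; the actual patch is
authoritative):

	"0:     lr.w      %[p],  %[c]\n"
	"       sub       %[rc], %[p], %[o]\n"
	"       bltz      %[rc], 1f\n"
	"       sc.w.aqrl %[rc], %[rc], %[c]\n"
	"       bnez      %[rc], 0b\n"
	"1:\n"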

Dan

>>
>> Dan
>>
>>> The purpose of the whole patchset is to reduce the usage of
>>> independent fence rw, rw instructions, and maximize the usage of the
>>> .aq/.rl/.aqrl annotations of RISC-V.
>>>
>>>                 __asm__ __volatile__ (                                  \
>>>                         "0:     lr.w %0, %2\n"                          \
>>>                         "       bne  %0, %z3, 1f\n"                     \
>>>                         "       sc.w.rl %1, %z4, %2\n"                  \
>>>                         "       bnez %1, 0b\n"                          \
>>>                         "       fence rw, rw\n"                         \
>>>                         "1:\n"                                          \
>>>
>>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
>>>> following litmus test?
>>>>
>>>>     C lr-sc-aqrl-pair-vs-full-barrier
>>>>
>>>>     {}
>>>>
>>>>     P0(int *x, int *y, atomic_t *u)
>>>>     {
>>>>             int r0;
>>>>             int r1;
>>>>
>>>>             WRITE_ONCE(*x, 1);
>>>>             r0 = atomic_cmpxchg(u, 0, 1);
>>>>             r1 = READ_ONCE(*y);
>>>>     }
>>>>
>>>>     P1(int *x, int *y, atomic_t *v)
>>>>     {
>>>>             int r0;
>>>>             int r1;
>>>>
>>>>             WRITE_ONCE(*y, 1);
>>>>             r0 = atomic_cmpxchg(v, 0, 1);
>>>>             r1 = READ_ONCE(*x);
>>>>     }
>>>>
>>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
>>> I think my patchset won't affect the above ordering guarantee. The current
>>> RISC-V implementation only gives RCsc when the comparison succeeds at
>>> least once. So I'd prefer RISC-V cmpxchg to be:
>>>
>>>
>>> -                       "0:     lr.w %0, %2\n"                          \
>>> +                      "0:     lr.w.rl %0, %2\n"                          \
>>>                         "       bne  %0, %z3, 1f\n"                     \
>>>                         "       sc.w.rl %1, %z4, %2\n"                  \
>>>                         "       bnez %1, 0b\n"                          \
>>> -                       "       fence rw, rw\n"                         \
>>>                         "1:\n"                                          \
>>> +                        "       fence w, rw\n"                    \
>>>
>>> To give an unconditional RCsc ordering for atomic_cmpxchg.
>>>
>>>>
>>>> Regards,
>>>> Boqun
>>>
>>>
>>>
> 
> 
> 

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-20 17:03               ` Dan Lustig
@ 2022-04-21  9:39                 ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-21  9:39 UTC (permalink / raw)
  To: Dan Lustig
  Cc: Boqun Feng, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

Hi Dan,

On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
>
> On 4/20/2022 1:33 AM, Guo Ren wrote:
> > Thx Dan,
> >
> > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> >>
> >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> >>> Hi Boqun & Andrea,
> >>>
> >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> >>>>
> >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> >>>> [...]
> >>>>>
> >>>>> If both the aq and rl bits are set, the atomic memory operation is
> >>>>> sequentially consistent and cannot be observed to happen before any
> >>>>> earlier memory operations or after any later memory operations in the
> >>>>> same RISC-V hart and to the same address domain.
> >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> >>>>>                 "       bltz     %[rc], 1f\n"
> >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> >>>>>                 "       bnez     %[rc], 0b\n"
> >>>>> -               "       fence    rw, rw\n"
> >>>>>                 "1:\n"
> >>>>> So .rl + fence rw, rw over-constrains; using only sc.w.aqrl is more appropriate.
> >>>>>
> >>>>
> >>>> Can .aqrl order memory accesses before and after it (not against itself,
> >>>> against each other), i.e. act as a full memory barrier? For example, can
> >>> From the RVWMO spec description, the .aqrl annotation has the same
> >>> effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> >>>
> >>> Not only .aqrl; I think the sequence below could also be RCsc when
> >>> the sc.w.aq is executed:
> >>> A: Pre-Access
> >>> B: lr.w.rl ADDR-0
> >>> ...
> >>> C: sc.w.aq ADDR-0
> >>> D: Post-Access
> >>> Because sc.w.aq has overlapping address and data dependencies on lr.w.rl,
> >>> the global memory order should be A->B->C->D when sc.w.aq is executed. For
> >>> the amoswap
> >>
> >> These opcodes aren't actually meaningful, unfortunately.
> >>
> >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> >> on an LR instruction unless the aq bit is also set, nor should software
> >> set the aq bit on an SC instruction unless the rl bit is also set."
> > 1. Oh, I've missed the latter half of the ISA manual. But why can't we
> > utilize lr.rl & sc.aq in software to guarantee the
> > ordering?
>
> lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> Plus, they just aren't common operations to begin with, e.g., there
> is no smp_store_acquire() or smp_load_release(), nor are there
> equivalents in C/C++ atomics.
First, thanks for pointing out that my patch violates the rules defined
in the ISA manual. I've dropped these parts in v3.

It's easy to make hardware support lr.rl & sc.aq (e.g., our hardware
supports them). I agree there are no equivalents in C/C++ atomics. But
they are useful for LR/SC pairs to implement atomic acquire/release
semantics. Compare the two:
A): fence rw, r; lr
B): lr.rl
A carries an extra "fence ,r" effect in its semantics, so it over-commits
from a software design point of view.
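
To make the comparison concrete, here is roughly how the two options
would look inside __cmpxchg_release (option B is hypothetical, since
the ISA manual discourages lr.rl without .aq, as discussed above):

	/* A): release via a leading fence, the form argued for here */
	"	fence rw, r\n"
	"0:	lr.w %0, %2\n"

	/* B): release annotation folded into the load-reserved itself */
	"0:	lr.w.rl %0, %2\n"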

P.S. The current definitions have a problem:
#define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
#define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"

#define __cmpxchg_release(ptr, old, new, size)                          \
...
                __asm__ __volatile__ (                                  \
                        RISCV_RELEASE_BARRIER                           \
                        "0:     lr.w %0, %2\n"                          \

That means "fence rw, w" can't prevent lr.w beyond the fence, we need
a "fence.rw. r" here. Here is the Fixup patch which I'm preparing:

From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
From: Guo Ren <guoren@linux.alibaba.com>
Date: Thu, 21 Apr 2022 16:44:48 +0800
Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
 implementation

The current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock, not atomics.

__cmpxchg_release(ptr, old, new, size)
...
        __asm__ __volatile__ (
                        RISCV_RELEASE_BARRIER
                        "0:     lr.w %0, %2\n"

The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
we need "fence rw, r -> lr.w" here. Atomic acquire is the same.

Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andrea Parri <parri.andrea@gmail.com>
Cc: Dan Lustig <dlustig@nvidia.com>
Cc: stable@vger.kernel.org
---
 arch/riscv/include/asm/atomic.h  | 4 ++--
 arch/riscv/include/asm/cmpxchg.h | 8 ++++----
 arch/riscv/include/asm/fence.h   | 4 ++++
 3 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index aef8aa9ac4f4..7cd66eba6ec3 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -20,10 +20,10 @@
 #include <asm/barrier.h>

 #define __atomic_acquire_fence()                                       \
-       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
+       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")

 #define __atomic_release_fence()                                       \
-       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
+       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");

 static __always_inline int arch_atomic_read(const atomic_t *v)
 {
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 9269fceb86e0..605edc2fca3b 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -217,7 +217,7 @@
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w %1, %z4, %2\n"                     \
                        "       bnez %1, 0b\n"                          \
-                       RISCV_ACQUIRE_BARRIER                           \
+                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
                        "1:\n"                                          \
                        : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
                        : "rJ" ((long)__old), "rJ" (__new)              \
@@ -229,7 +229,7 @@
                        "       bne %0, %z3, 1f\n"                      \
                        "       sc.d %1, %z4, %2\n"                     \
                        "       bnez %1, 0b\n"                          \
-                       RISCV_ACQUIRE_BARRIER                           \
+                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
                        "1:\n"                                          \
                        : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
                        : "rJ" (__old), "rJ" (__new)                    \
@@ -259,7 +259,7 @@
        switch (size) {                                                 \
        case 4:                                                         \
                __asm__ __volatile__ (                                  \
-                       RISCV_RELEASE_BARRIER                           \
+                       RISCV_ATOMIC_RELEASE_BARRIER                    \
                        "0:     lr.w %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w %1, %z4, %2\n"                     \
@@ -271,7 +271,7 @@
                break;                                                  \
        case 8:                                                         \
                __asm__ __volatile__ (                                  \
-                       RISCV_RELEASE_BARRIER                           \
+                       RISCV_ATOMIC_RELEASE_BARRIER                    \
                        "0:     lr.d %0, %2\n"                          \
                        "       bne %0, %z3, 1f\n"                      \
                        "       sc.d %1, %z4, %2\n"                     \
diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
index 2b443a3a487f..4e446d64f04f 100644
--- a/arch/riscv/include/asm/fence.h
+++ b/arch/riscv/include/asm/fence.h
@@ -4,9 +4,13 @@
 #ifdef CONFIG_SMP
 #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
 #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
+#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
+#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
 #else
 #define RISCV_ACQUIRE_BARRIER
 #define RISCV_RELEASE_BARRIER
+#define RISCV_ATOMIC_ACQUIRE_BARRIER
+#define RISCV_ATOMIC_RELEASE_BARRIER
 #endif

 #endif /* _ASM_RISCV_FENCE_H */


>
> > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > right? And it removes a fence instruction, gaining better performance:
> >                 "0:     lr.w     %[p],  %[c]\n"
> >                  "       sub      %[rc], %[p], %[o]\n"
> >                  "       bltz     %[rc], 1f\n"
> >  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> >  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> >                  "       bnez     %[rc], 0b\n"
> >  -               "       fence    rw, rw\n"
>
> Yes, using .aqrl is valid.
Thanks, and I think the below is also valid, right?

-                       RISCV_RELEASE_BARRIER                           \
-                       "       amoswap.w %0, %2, %1\n"                 \
+                       "       amoswap.w.rl %0, %2, %1\n"              \

-                       "       amoswap.d %0, %2, %1\n"                 \
-                       RISCV_ACQUIRE_BARRIER                           \
+                       "       amoswap.d.aq %0, %2, %1\n"              \
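
If so, the xchg variants could drop their explicit fences entirely. A
sketch of the acquire case, modeled on the existing __xchg macros
(illustrative only, with simplified constraints; not the merged code):

	__asm__ __volatile__ (
		/* .aq orders the AMO's load part before later accesses */
		"	amoswap.w.aq %0, %2, %1\n"
		: "=r" (__ret), "+A" (*__ptr)
		: "r" (__new)
		: "memory");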

>
> Dan
>
> >>
> >> Dan
> >>
> >>> The purpose of the whole patchset is to reduce the usage of
> >>> independent fence rw, rw instructions, and maximize the usage of the
> >>> .aq/.rl/.aqrl annotations of RISC-V.
> >>>
> >>>                 __asm__ __volatile__ (                                  \
> >>>                         "0:     lr.w %0, %2\n"                          \
> >>>                         "       bne  %0, %z3, 1f\n"                     \
> >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> >>>                         "       bnez %1, 0b\n"                          \
> >>>                         "       fence rw, rw\n"                         \
> >>>                         "1:\n"                                          \
> >>>
> >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> >>>> following litmus test?
> >>>>
> >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> >>>>
> >>>>     {}
> >>>>
> >>>>     P0(int *x, int *y, atomic_t *u)
> >>>>     {
> >>>>             int r0;
> >>>>             int r1;
> >>>>
> >>>>             WRITE_ONCE(*x, 1);
> >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> >>>>             r1 = READ_ONCE(*y);
> >>>>     }
> >>>>
> >>>>     P1(int *x, int *y, atomic_t *v)
> >>>>     {
> >>>>             int r0;
> >>>>             int r1;
> >>>>
> >>>>             WRITE_ONCE(*y, 1);
> >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> >>>>             r1 = READ_ONCE(*x);
> >>>>     }
> >>>>
> >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> >>> I think my patchset won't affect the above ordering guarantee. The current
> >>> RISC-V implementation only gives RCsc when the comparison succeeds at
> >>> least once. So I'd prefer RISC-V cmpxchg to be:
> >>>
> >>>
> >>> -                       "0:     lr.w %0, %2\n"                          \
> >>> +                      "0:     lr.w.rl %0, %2\n"                          \
> >>>                         "       bne  %0, %z3, 1f\n"                     \
> >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> >>>                         "       bnez %1, 0b\n"                          \
> >>> -                       "       fence rw, rw\n"                         \
> >>>                         "1:\n"                                          \
> >>> +                        "       fence w, rw\n"                    \
> >>>
> >>> To give an unconditional RCsc ordering for atomic_cmpxchg.
> >>>
> >>>>
> >>>> Regards,
> >>>> Boqun
> >>>
> >>>
> >>>
> >
> >
> >



--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-21  9:39                 ` Guo Ren
@ 2022-04-21 22:56                   ` Boqun Feng
  -1 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2022-04-21 22:56 UTC (permalink / raw)
  To: Guo Ren
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> Hi Dan,
> 
> On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> >
> > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > Thx Dan,
> > >
> > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > >>
> > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > >>> Hi Boqun & Andrea,
> > >>>
> > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > >>>>
> > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > >>>> [...]
> > >>>>>
> > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > >>>>> sequentially consistent and cannot be observed to happen before any
> > >>>>> earlier memory operations or after any later memory operations in the
> > >>>>> same RISC-V hart and to the same address domain.
> > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > >>>>>                 "       bltz     %[rc], 1f\n"
> > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > >>>>>                 "       bnez     %[rc], 0b\n"
> > >>>>> -               "       fence    rw, rw\n"
> > >>>>>                 "1:\n"
> > >>>>> So .rl + fence rw, rw over-constrains; using only sc.w.aqrl is more appropriate.
> > >>>>>
> > >>>>
> > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > >>> From the RVWMO spec description, the .aqrl annotation has the same
> > >>> effect as appending "fence rw, rw" to the AMO instruction, so it's RCsc.
> > >>>
> > >>> Not only .aqrl; I think the sequence below could also be RCsc when
> > >>> the sc.w.aq is executed:
> > >>> A: Pre-Access
> > >>> B: lr.w.rl ADDR-0
> > >>> ...
> > >>> C: sc.w.aq ADDR-0
> > >>> D: Post-Access
> > >>> Because sc.w.aq has overlapping address and data dependencies on lr.w.rl,
> > >>> the global memory order should be A->B->C->D when sc.w.aq is executed. For
> > >>> the amoswap
> > >>
> > >> These opcodes aren't actually meaningful, unfortunately.
> > >>
> > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > >> on an LR instruction unless the aq bit is also set, nor should software
> > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > 1. Oh, I've missed the latter half of the ISA manual. But why can't we
> > > utilize lr.rl & sc.aq in software to guarantee the
> > > ordering?
> >
> > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > Plus, they just aren't common operations to begin with, e.g., there
> > is no smp_store_acquire() or smp_load_release(), nor are there
> > equivalents in C/C++ atomics.
> First, thanks for pointing out that my patch violates the rules defined
> in the ISA manual. I've dropped these parts in v3.
> 
> It's easy to make hardware support lr.rl & sc.aq (e.g., our hardware
> supports them). I agree there are no equivalents in C/C++ atomics. But
> they are useful for LR/SC pairs to implement atomic acquire/release
> semantics. Compare the two:
> A): fence rw, r; lr
> B): lr.rl
> A carries an extra "fence ,r" effect in its semantics, so it over-commits
> from a software design point of view.
> 
> P.S. The current definitions have a problem:
> #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> 
> #define __cmpxchg_release(ptr, old, new, size)                          \
> ...
>                 __asm__ __volatile__ (                                  \
>                         RISCV_RELEASE_BARRIER                           \
>                         "0:     lr.w %0, %2\n"                          \
> 
> That means "fence rw, w" can't prevent lr.w beyond the fence, we need
> a "fence.rw. r" here. Here is the Fixup patch which I'm preparing:
> 

That's not true. Note that RELEASE semantics apply only to the
write/store part of a read-modify-write atomic; similarly, ACQUIRE
applies only to the read/load part. For example, the following litmus
test can observe its exists clause being true.

	{}

	P0(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = cmpxchg_acquire(x, 0, 1);
		r1 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r0 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)
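
(For anyone who wants to reproduce this: the test above is in
Linux-kernel memory model format, so it can be checked with herd7 and
the model files under tools/memory-model/ in the kernel tree, along the
lines of "herd7 -conf linux-kernel.cfg test.litmus", where test.litmus
is a placeholder file name. The model should report the exists clause
as reachable, matching the claim above.)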

Regards,
Boqun

> From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> From: Guo Ren <guoren@linux.alibaba.com>
> Date: Thu, 21 Apr 2022 16:44:48 +0800
> Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
>  implementation
> 
> The current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock, not atomics.
> 
> __cmpxchg_release(ptr, old, new, size)
> ...
>         __asm__ __volatile__ (
>                         RISCV_RELEASE_BARRIER
>                         "0:     lr.w %0, %2\n"
> 
> The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> 
> Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Andrea Parri <parri.andrea@gmail.com>
> Cc: Dan Lustig <dlustig@nvidia.com>
> Cc: stable@vger.kernel.org
> ---
>  arch/riscv/include/asm/atomic.h  | 4 ++--
>  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
>  arch/riscv/include/asm/fence.h   | 4 ++++
>  3 files changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index aef8aa9ac4f4..7cd66eba6ec3 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -20,10 +20,10 @@
>  #include <asm/barrier.h>
> 
>  #define __atomic_acquire_fence()                                       \
> -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> 
>  #define __atomic_release_fence()                                       \
> -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> 
>  static __always_inline int arch_atomic_read(const atomic_t *v)
>  {
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index 9269fceb86e0..605edc2fca3b 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -217,7 +217,7 @@
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w %1, %z4, %2\n"                     \
>                         "       bnez %1, 0b\n"                          \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
>                         "1:\n"                                          \
>                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
>                         : "rJ" ((long)__old), "rJ" (__new)              \
> @@ -229,7 +229,7 @@
>                         "       bne %0, %z3, 1f\n"                      \
>                         "       sc.d %1, %z4, %2\n"                     \
>                         "       bnez %1, 0b\n"                          \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
>                         "1:\n"                                          \
>                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
>                         : "rJ" (__old), "rJ" (__new)                    \
> @@ -259,7 +259,7 @@
>         switch (size) {                                                 \
>         case 4:                                                         \
>                 __asm__ __volatile__ (                                  \
> -                       RISCV_RELEASE_BARRIER                           \
> +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
>                         "0:     lr.w %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w %1, %z4, %2\n"                     \
> @@ -271,7 +271,7 @@
>                 break;                                                  \
>         case 8:                                                         \
>                 __asm__ __volatile__ (                                  \
> -                       RISCV_RELEASE_BARRIER                           \
> +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
>                         "0:     lr.d %0, %2\n"                          \
>                         "       bne %0, %z3, 1f\n"                      \
>                         "       sc.d %1, %z4, %2\n"                     \
> diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> index 2b443a3a487f..4e446d64f04f 100644
> --- a/arch/riscv/include/asm/fence.h
> +++ b/arch/riscv/include/asm/fence.h
> @@ -4,9 +4,13 @@
>  #ifdef CONFIG_SMP
>  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
>  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
>  #else
>  #define RISCV_ACQUIRE_BARRIER
>  #define RISCV_RELEASE_BARRIER
> +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> +#define RISCV_ATOMIC_RELEASE_BARRIER
>  #endif
> 
>  #endif /* _ASM_RISCV_FENCE_H */
> 
> 
> >
> > > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > > right? And it removes a fence instruction, gaining better performance:
> > >                 "0:     lr.w     %[p],  %[c]\n"
> > >                  "       bltz     %[rc], 1f\n"
> > >                  "       bltz     %[rc], 1f\n".
> > >  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > >  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > >                  "       bnez     %[rc], 0b\n"
> > >  -               "       fence    rw, rw\n"
> >
> > Yes, using .aqrl is valid.
> Thanks, and I think the below is also valid, right?
> 
> -                       RISCV_RELEASE_BARRIER                           \
> -                       "       amoswap.w %0, %2, %1\n"                 \
> +                       "       amoswap.w.rl %0, %2, %1\n"              \
> 
> -                       "       amoswap.d %0, %2, %1\n"                 \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       "       amoswap.d.aq %0, %2, %1\n"              \
> 
> >
> > Dan
> >
> > >>
> > >> Dan
> > >>
> > >>> The purpose of the whole patchset is to reduce the usage of
> > >>> independent fence rw, rw instructions, and maximize the usage of the
> > >>> .aq/.rl/.aqrl annotations of RISC-V.
> > >>>
> > >>>                 __asm__ __volatile__ (                                  \
> > >>>                         "0:     lr.w %0, %2\n"                          \
> > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > >>>                         "       bnez %1, 0b\n"                          \
> > >>>                         "       fence rw, rw\n"                         \
> > >>>                         "1:\n"                                          \
> > >>>
> > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > >>>> following litmus test?
> > >>>>
> > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > >>>>
> > >>>>     {}
> > >>>>
> > >>>>     P0(int *x, int *y, atomic_t *u)
> > >>>>     {
> > >>>>             int r0;
> > >>>>             int r1;
> > >>>>
> > >>>>             WRITE_ONCE(*x, 1);
> > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > >>>>             r1 = READ_ONCE(*y);
> > >>>>     }
> > >>>>
> > >>>>     P1(int *x, int *y, atomic_t *v)
> > >>>>     {
> > >>>>             int r0;
> > >>>>             int r1;
> > >>>>
> > >>>>             WRITE_ONCE(*y, 1);
> > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > >>>>             r1 = READ_ONCE(*x);
> > >>>>     }
> > >>>>
> > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > >>> I think my patchset won't affect the above ordering guarantee. The current
> > >>> RISC-V implementation only gives RCsc when the comparison succeeds at
> > >>> least once. So I'd prefer RISC-V cmpxchg to be:
> > >>>
> > >>>
> > >>> -                       "0:     lr.w %0, %2\n"                          \
> > >>> +                      "0:     lr.w.rl %0, %2\n"                          \
> > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > >>>                         "       bnez %1, 0b\n"                          \
> > >>> -                       "       fence rw, rw\n"                         \
> > >>>                         "1:\n"                                          \
> > >>> +                        "       fence w, rw\n"                    \
> > >>>
> > >>> To give an unconditional RCsc ordering for atomic_cmpxchg.
> > >>>
> > >>>>
> > >>>> Regards,
> > >>>> Boqun
> > >>>
> > >>>
> > >>>
> > >
> > >
> > >
> 
> 
> 
> --
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
@ 2022-04-21 22:56                   ` Boqun Feng
  0 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2022-04-21 22:56 UTC (permalink / raw)
  To: Guo Ren
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren


[-- Attachment #1.1: Type: text/plain, Size: 14085 bytes --]

On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> Hi Dan,
> 
> On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> >
> > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > Thx Dan,
> > >
> > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > >>
> > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > >>> Hi Boqun & Andrea,
> > >>>
> > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > >>>>
> > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > >>>> [...]
> > >>>>>
> > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > >>>>> sequentially consistent and cannot be observed to happen before any
> > >>>>> earlier memory operations or after any later memory operations in the
> > >>>>> same RISC-V hart and to the same address domain.
> > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > >>>>>                 "       bltz     %[rc], 1f\n".
> > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > >>>>>                 "       bnez     %[rc], 0b\n"
> > >>>>> -               "       fence    rw, rw\n"
> > >>>>>                 "1:\n"
> > >>>>> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> > >>>>>
> > >>>>
> > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > >>> From the RVWMO spec description, the .aqrl annotation appends the same
> > >>> effect with "fence rw, rw" to the AMO instruction, so it's RCsc.
> > >>>
> > >>> Not only .aqrl, and I think the below also could be an RCsc when
> > >>> sc.w.aq is executed:
> > >>> A: Pre-Access
> > >>> B: lr.w.rl ADDR-0
> > >>> ...
> > >>> C: sc.w.aq ADDR-0
> > >>> D: Post-Acess
> > >>> Because sc.w.aq has overlap address & data dependency on lr.w.rl, the
> > >>> global memory order should be A->B->C->D when sc.w.aq is executed. For
> > >>> the amoswap
> > >>
> > >> These opcodes aren't actually meaningful, unfortunately.
> > >>
> > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > >> on an LR instruction unless the aq bit is also set, nor should software
> > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > 1. Oh, I've missed the behind half of the ISA manual. But why can't we
> > > utilize lr.rl & sc.aq in software programming to guarantee the
> > > sequence?
> >
> > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > Plus, they just aren't common operations to begin with, e.g., there
> > is no smp_store_acquire() or smp_load_release(), nor are there
> > equivalents in C/C++ atomics.
> First, thx for pointing out that my patch violates the rules defined
> in the ISA manual. I've abandoned these parts in v3.
> 
> It's easy to let hw support lr.rl & sc.aq (eg: our hardware supports
> them). I agree there are no equivalents in C/C++ atomics. But they are
> useful for LR/SC pairs to implement atomic_acqurie/release semantics.
> Compare below:
> A): fence rw, r; lr
> B): lr.rl
> The A has another "fence ,r" effect in semantics, it's over commit
> from a software design view.
> 
> ps: Current definition has problems:
> #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> 
> #define __cmpxchg_release(ptr, old, new, size)                          \
> ...
>                 __asm__ __volatile__ (                                  \
>                         RISCV_RELEASE_BARRIER                           \
>                         "0:     lr.w %0, %2\n"                          \
> 
> That means "fence rw, w" can't prevent lr.w beyond the fence, we need
> a "fence.rw. r" here. Here is the Fixup patch which I'm preparing:
> 

That's not true. Note that RELEASE semantics only applies to the
write/store part of a read-modify-write atomic, similarly, ACQUIRE only
applies to the read/load part. For example, the following litmus test
can observe the exists clause being true.

	{}

	P0(int *x, int *y)
	{
		int r0;
		int r1;

		r0 = cmpxchg_acquire(x, 0, 1);
		r1 = READ_ONCE(*y);
	}

	P1(int *x, int *y)
	{
		int r0;

		WRITE_ONCE(*y, 1);
		smp_mb();
		r0 = READ_ONCE(*x);
	}

	exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)

Regards,
Boqun

> From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> From: Guo Ren <guoren@linux.alibaba.com>
> Date: Thu, 21 Apr 2022 16:44:48 +0800
> Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
>  implementation
> 
> Current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock not atomic.
> 
> __cmpxchg_release(ptr, old, new, size)
> ...
>         __asm__ __volatile__ (
>                         RISCV_RELEASE_BARRIER
>                         "0:     lr.w %0, %2\n"
> 
> The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> 
> Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Andrea Parri <parri.andrea@gmail.com>
> Cc: Dan Lustig <dlustig@nvidia.com>
> Cc: stable@vger.kernel.org
> ---
>  arch/riscv/include/asm/atomic.h  | 4 ++--
>  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
>  arch/riscv/include/asm/fence.h   | 4 ++++
>  3 files changed, 10 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index aef8aa9ac4f4..7cd66eba6ec3 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -20,10 +20,10 @@
>  #include <asm/barrier.h>
> 
>  #define __atomic_acquire_fence()                                       \
> -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> 
>  #define __atomic_release_fence()                                       \
> -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> 
>  static __always_inline int arch_atomic_read(const atomic_t *v)
>  {
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index 9269fceb86e0..605edc2fca3b 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -217,7 +217,7 @@
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w %1, %z4, %2\n"                     \
>                         "       bnez %1, 0b\n"                          \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
>                         "1:\n"                                          \
>                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
>                         : "rJ" ((long)__old), "rJ" (__new)              \
> @@ -229,7 +229,7 @@
>                         "       bne %0, %z3, 1f\n"                      \
>                         "       sc.d %1, %z4, %2\n"                     \
>                         "       bnez %1, 0b\n"                          \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
>                         "1:\n"                                          \
>                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
>                         : "rJ" (__old), "rJ" (__new)                    \
> @@ -259,7 +259,7 @@
>         switch (size) {                                                 \
>         case 4:                                                         \
>                 __asm__ __volatile__ (                                  \
> -                       RISCV_RELEASE_BARRIER                           \
> +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
>                         "0:     lr.w %0, %2\n"                          \
>                         "       bne  %0, %z3, 1f\n"                     \
>                         "       sc.w %1, %z4, %2\n"                     \
> @@ -271,7 +271,7 @@
>                 break;                                                  \
>         case 8:                                                         \
>                 __asm__ __volatile__ (                                  \
> -                       RISCV_RELEASE_BARRIER                           \
> +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
>                         "0:     lr.d %0, %2\n"                          \
>                         "       bne %0, %z3, 1f\n"                      \
>                         "       sc.d %1, %z4, %2\n"                     \
> diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> index 2b443a3a487f..4e446d64f04f 100644
> --- a/arch/riscv/include/asm/fence.h
> +++ b/arch/riscv/include/asm/fence.h
> @@ -4,9 +4,13 @@
>  #ifdef CONFIG_SMP
>  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
>  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
>  #else
>  #define RISCV_ACQUIRE_BARRIER
>  #define RISCV_RELEASE_BARRIER
> +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> +#define RISCV_ATOMIC_RELEASE_BARRIER
>  #endif
> 
>  #endif /* _ASM_RISCV_FENCE_H */
> 
> 
> >
> > > 2. Using .aqrl to replace the fence rw, rw is okay to ISA manual,
> > > right? And reducing a fence instruction to gain better performance:
> > >                 "0:     lr.w     %[p],  %[c]\n"
> > >                  "       sub      %[rc], %[p], %[o]\n"
> > >                  "       bltz     %[rc], 1f\n".
> > >  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > >  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > >                  "       bnez     %[rc], 0b\n"
> > >  -               "       fence    rw, rw\n"
> >
> > Yes, using .aqrl is valid.
> Thx and I think the below is also valid, right?
> 
> -                       RISCV_RELEASE_BARRIER                           \
> -                       "       amoswap.w %0, %2, %1\n"                 \
> +                       "       amoswap.w.rl %0, %2, %1\n"              \
> 
> -                       "       amoswap.d %0, %2, %1\n"                 \
> -                       RISCV_ACQUIRE_BARRIER                           \
> +                       "       amoswap.d.aq %0, %2, %1\n"              \
> 
> >
> > Dan
> >
> > >>
> > >> Dan
> > >>
> > >>> The purpose of the whole patchset is to reduce the usage of
> > >>> independent fence rw, rw instructions, and maximize the usage of the
> > >>> .aq/.rl/.aqrl aonntation of RISC-V.
> > >>>
> > >>>                 __asm__ __volatile__ (                                  \
> > >>>                         "0:     lr.w %0, %2\n"                          \
> > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > >>>                         "       bnez %1, 0b\n"                          \
> > >>>                         "       fence rw, rw\n"                         \
> > >>>                         "1:\n"                                          \
> > >>>
> > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > >>>> following litmus test?
> > >>>>
> > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > >>>>
> > >>>>     {}
> > >>>>
> > >>>>     P0(int *x, int *y, atomic_t *u)
> > >>>>     {
> > >>>>             int r0;
> > >>>>             int r1;
> > >>>>
> > >>>>             WRITE_ONCE(*x, 1);
> > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > >>>>             r1 = READ_ONCE(*y);
> > >>>>     }
> > >>>>
> > >>>>     P1(int *x, int *y, atomic_t *v)
> > >>>>     {
> > >>>>             int r0;
> > >>>>             int r1;
> > >>>>
> > >>>>             WRITE_ONCE(*y, 1);
> > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > >>>>             r1 = READ_ONCE(*x);
> > >>>>     }
> > >>>>
> > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > >>> I think my patchset won't affect the above ordering guarantee. The
> > >>> current RISC-V implementation only gives RCsc when the compare succeeds
> > >>> (the old value matches) at least once. So I prefer RISC-V cmpxchg should be:
> > >>>
> > >>>
> > >>> -                       "0:     lr.w %0, %2\n"                          \
> > >>> +                      "0:     lr.w.rl %0, %2\n"                          \
> > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > >>>                         "       bnez %1, 0b\n"                          \
> > >>> -                       "       fence rw, rw\n"                         \
> > >>>                         "1:\n"                                          \
> > >>> +                        "       fence w, rw\n"                    \
> > >>>
> > >>> To give an unconditional RCsc for atomic_cmpxchg.
> > >>>
> > >>>>
> > >>>> Regards,
> > >>>> Boqun
> > >>>
> > >>>
> > >>>
> > >
> > >
> > >
> 
> 
> 
> --
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-21 22:56                   ` Boqun Feng
@ 2022-04-22  1:56                     ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-22  1:56 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Fri, Apr 22, 2022 at 6:56 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> > Hi Dan,
> >
> > On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > >
> > > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > > Thx Dan,
> > > >
> > > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > >>
> > > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > > >>> Hi Boqun & Andrea,
> > > >>>
> > > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > > >>>>
> > > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > >>>> [...]
> > > >>>>>
> > > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > > >>>>> sequentially consistent and cannot be observed to happen before any
> > > >>>>> earlier memory operations or after any later memory operations in the
> > > >>>>> same RISC-V hart and to the same address domain.
> > > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > > >>>>>                 "       bltz     %[rc], 1f\n"
> > > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > >>>>>                 "       bnez     %[rc], 0b\n"
> > > >>>>> -               "       fence    rw, rw\n"
> > > >>>>>                 "1:\n"
> > > >>>>> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> > > >>>>>
> > > >>>>
> > > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > > >>> From the RVWMO spec description, the .aqrl annotation appends the same
> > > >>> effect as a "fence rw, rw" to the AMO instruction, so it's RCsc.
> > > >>>
> > > >>> Not only .aqrl; I think the below could also be RCsc when
> > > >>> sc.w.aq is executed:
> > > >>> A: Pre-Access
> > > >>> B: lr.w.rl ADDR-0
> > > >>> ...
> > > >>> C: sc.w.aq ADDR-0
> > > >>> D: Post-Access
> > > >>> Because sc.w.aq has overlapping address & data dependencies on lr.w.rl,
> > > >>> the global memory order should be A->B->C->D when sc.w.aq is executed. For
> > > >>> the amoswap
> > > >>
> > > >> These opcodes aren't actually meaningful, unfortunately.
> > > >>
> > > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > > >> on an LR instruction unless the aq bit is also set, nor should software
> > > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > > 1. Oh, I've missed the latter half of the ISA manual text. But why can't we
> > > > utilize lr.rl & sc.aq in software programming to guarantee the
> > > > sequence?
> > >
> > > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > > Plus, they just aren't common operations to begin with, e.g., there
> > > is no smp_store_acquire() or smp_load_release(), nor are there
> > > equivalents in C/C++ atomics.
> > First, thx for pointing out that my patch violates the rules defined
> > in the ISA manual. I've abandoned these parts in v3.
> >
> > It's easy to let hw support lr.rl & sc.aq (e.g., our hardware supports
> > them). I agree there are no equivalents in C/C++ atomics. But they are
> > useful for LR/SC pairs to implement atomic_acquire/release semantics.
> > Compare below:
> > A): fence rw, r; lr
> > B): lr.rl
> > A has an extra "fence ,r" effect in its semantics; it is an
> > over-commitment from a software design point of view.
> >
> > ps: Current definition has problems:
> > #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> > #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> >
> > #define __cmpxchg_release(ptr, old, new, size)                          \
> > ...
> >                 __asm__ __volatile__ (                                  \
> >                         RISCV_RELEASE_BARRIER                           \
> >                         "0:     lr.w %0, %2\n"                          \
> >
> > That means "fence rw, w" can't prevent the lr.w from being reordered
> > before the fence; we need a "fence rw, r" here. Here is the fixup
> > patch which I'm preparing:
> >
>
> That's not true. Note that RELEASE semantics only applies to the
> write/store part of a read-modify-write atomic, similarly, ACQUIRE only
I just want to point out that the "atomic" mentioned here only applies
to RISC-V LR/SC sequences. For AMO instructions, it has been clarified
that the AMO is treated as a whole:

     - .aq:   If the aq bit is set, then no later memory operations
              in this RISC-V hart can be observed to take place
              before the AMO.
     - .rl:   If the rl bit is set, then other RISC-V harts will not
              observe the AMO before memory accesses preceding the
              AMO in this RISC-V hart.
     - .aqrl: Setting both the aq and the rl bit on an AMO makes the
              sequence sequentially consistent, meaning that it cannot
              be reordered with earlier or later memory operations
              from the same hart.
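
As a concrete illustration (my sketch, not kernel code; the helper name
and the operands are made up), this is how the strongest annotation
attaches to an AMO in inline asm:

	/* amoswap with .aqrl: the AMO behaves as if it were a fully
	 * ordered (RCsc) operation, per the list above */
	static inline int amoswap_w_aqrl(int *p, int v)
	{
		int ret;

		__asm__ __volatile__ (
			"	amoswap.w.aqrl %0, %2, %1\n"
			: "=&r" (ret), "+A" (*p)
			: "r" (v)
			: "memory");
		return ret;
	}

Dropping the aq bit (amoswap.w.rl) keeps only the release half, and
dropping the rl bit (amoswap.w.aq) keeps only the acquire half.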

> applies to the read/load part. For example, the following litmus test
> can observe the exists clause being true.
Thx for pointing out, that means changing "fence rw, w" to "fence rw,
r" is more strict and it would lower performance, right?

>
>         {}
>
>         P0(int *x, int *y)
>         {
>                 int r0;
>                 int r1;
>
>                 r0 = cmpxchg_acquire(x, 0, 1);
>                 r1 = READ_ONCE(*y);
Oh, the READ_ONCE could be reordered before the write/store part of
cmpxchg_acquire, right? We shouldn't prevent it.

>         }
>
>         P1(int *x, int *y)
>         {
>                 int r0;
>
>                 WRITE_ONCE(*y, 1);
>                 smp_mb();
>                 r0 = READ_ONCE(*x);
>         }
>
>         exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)
>
> Regards,
> Boqun
>
> > From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> > From: Guo Ren <guoren@linux.alibaba.com>
> > Date: Thu, 21 Apr 2022 16:44:48 +0800
> > Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
> >  implementation
> >
> > Current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock not atomic.
> >
> > __cmpxchg_release(ptr, old, new, size)
> > ...
> >         __asm__ __volatile__ (
> >                         RISCV_RELEASE_BARRIER
> >                         "0:     lr.w %0, %2\n"
> >
> > The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> > we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> >
> > Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Mark Rutland <mark.rutland@arm.com>
> > Cc: Andrea Parri <parri.andrea@gmail.com>
> > Cc: Dan Lustig <dlustig@nvidia.com>
> > Cc: stable@vger.kernel.org
> > ---
> >  arch/riscv/include/asm/atomic.h  | 4 ++--
> >  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
> >  arch/riscv/include/asm/fence.h   | 4 ++++
> >  3 files changed, 10 insertions(+), 6 deletions(-)
> >
> > diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> > index aef8aa9ac4f4..7cd66eba6ec3 100644
> > --- a/arch/riscv/include/asm/atomic.h
> > +++ b/arch/riscv/include/asm/atomic.h
> > @@ -20,10 +20,10 @@
> >  #include <asm/barrier.h>
> >
> >  #define __atomic_acquire_fence()                                       \
> > -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> > +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> >
> >  #define __atomic_release_fence()                                       \
> > -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> > +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> >
> >  static __always_inline int arch_atomic_read(const atomic_t *v)
> >  {
> > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > index 9269fceb86e0..605edc2fca3b 100644
> > --- a/arch/riscv/include/asm/cmpxchg.h
> > +++ b/arch/riscv/include/asm/cmpxchg.h
> > @@ -217,7 +217,7 @@
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w %1, %z4, %2\n"                     \
> >                         "       bnez %1, 0b\n"                          \
> > -                       RISCV_ACQUIRE_BARRIER                           \
> > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> >                         "1:\n"                                          \
> >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> >                         : "rJ" ((long)__old), "rJ" (__new)              \
> > @@ -229,7 +229,7 @@
> >                         "       bne %0, %z3, 1f\n"                      \
> >                         "       sc.d %1, %z4, %2\n"                     \
> >                         "       bnez %1, 0b\n"                          \
> > -                       RISCV_ACQUIRE_BARRIER                           \
> > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> >                         "1:\n"                                          \
> >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> >                         : "rJ" (__old), "rJ" (__new)                    \
> > @@ -259,7 +259,7 @@
> >         switch (size) {                                                 \
> >         case 4:                                                         \
> >                 __asm__ __volatile__ (                                  \
> > -                       RISCV_RELEASE_BARRIER                           \
> > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> >                         "0:     lr.w %0, %2\n"                          \
> >                         "       bne  %0, %z3, 1f\n"                     \
> >                         "       sc.w %1, %z4, %2\n"                     \
> > @@ -271,7 +271,7 @@
> >                 break;                                                  \
> >         case 8:                                                         \
> >                 __asm__ __volatile__ (                                  \
> > -                       RISCV_RELEASE_BARRIER                           \
> > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> >                         "0:     lr.d %0, %2\n"                          \
> >                         "       bne %0, %z3, 1f\n"                      \
> >                         "       sc.d %1, %z4, %2\n"                     \
> > diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> > index 2b443a3a487f..4e446d64f04f 100644
> > --- a/arch/riscv/include/asm/fence.h
> > +++ b/arch/riscv/include/asm/fence.h
> > @@ -4,9 +4,13 @@
> >  #ifdef CONFIG_SMP
> >  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
> >  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> > +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> > +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
> >  #else
> >  #define RISCV_ACQUIRE_BARRIER
> >  #define RISCV_RELEASE_BARRIER
> > +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> > +#define RISCV_ATOMIC_RELEASE_BARRIER
> >  #endif
> >
> >  #endif /* _ASM_RISCV_FENCE_H */
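
With this patch applied, __cmpxchg_release() would expand to roughly
the following (my hand expansion of the diff above, 32-bit case, for
illustration only):

	__asm__ __volatile__ (
		"	fence rw,  r\n"		/* RISCV_ATOMIC_RELEASE_BARRIER:
						 * prevents the lr.w below from
						 * being reordered ahead of
						 * earlier accesses */
		"0:	lr.w %0, %2\n"
		"	bne  %0, %z3, 1f\n"
		"	sc.w %1, %z4, %2\n"
		"	bnez %1, 0b\n"
		"1:\n"
		: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)
		: "rJ" ((long)__old), "rJ" (__new)
		: "memory");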
> >
> >
> > >
> > > > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > > > right? And it removes a fence instruction, gaining better performance:
> > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > >                  "       sub      %[rc], %[p], %[o]\n"
> > > >                  "       bltz     %[rc], 1f\n"
> > > >  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > >  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > >                  "       bnez     %[rc], 0b\n"
> > > >  -               "       fence    rw, rw\n"
> > >
> > > Yes, using .aqrl is valid.
> > Thx and I think the below is also valid, right?
> >
> > -                       RISCV_RELEASE_BARRIER                           \
> > -                       "       amoswap.w %0, %2, %1\n"                 \
> > +                       "       amoswap.w.rl %0, %2, %1\n"              \
> >
> > -                       "       amoswap.d %0, %2, %1\n"                 \
> > -                       RISCV_ACQUIRE_BARRIER                           \
> > +                       "       amoswap.d.aq %0, %2, %1\n"              \
> >
> > >
> > > Dan
> > >
> > > >>
> > > >> Dan
> > > >>
> > > >>> The purpose of the whole patchset is to reduce the usage of
> > > >>> independent fence rw, rw instructions, and maximize the usage of the
> > > >>> .aq/.rl/.aqrl annotations of RISC-V.
> > > >>>
> > > >>>                 __asm__ __volatile__ (                                  \
> > > >>>                         "0:     lr.w %0, %2\n"                          \
> > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > >>>                         "       bnez %1, 0b\n"                          \
> > > >>>                         "       fence rw, rw\n"                         \
> > > >>>                         "1:\n"                                          \
> > > >>>
> > > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > >>>> following litmus test?
> > > >>>>
> > > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > > >>>>
> > > >>>>     {}
> > > >>>>
> > > >>>>     P0(int *x, int *y, atomic_t *u)
> > > >>>>     {
> > > >>>>             int r0;
> > > >>>>             int r1;
> > > >>>>
> > > >>>>             WRITE_ONCE(*x, 1);
> > > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > > >>>>             r1 = READ_ONCE(*y);
> > > >>>>     }
> > > >>>>
> > > >>>>     P1(int *x, int *y, atomic_t *v)
> > > >>>>     {
> > > >>>>             int r0;
> > > >>>>             int r1;
> > > >>>>
> > > >>>>             WRITE_ONCE(*y, 1);
> > > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > > >>>>             r1 = READ_ONCE(*x);
> > > >>>>     }
> > > >>>>
> > > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > > >>> I think my patchset won't affect the above ordering guarantee. The
> > > >>> current RISC-V implementation only gives RCsc when the compare succeeds
> > > >>> (the old value matches) at least once. So I prefer RISC-V cmpxchg should be:
> > > >>>
> > > >>>
> > > >>> -                       "0:     lr.w %0, %2\n"                          \
> > > >>> +                      "0:     lr.w.rl %0, %2\n"                          \
> > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > >>>                         "       bnez %1, 0b\n"                          \
> > > >>> -                       "       fence rw, rw\n"                         \
> > > >>>                         "1:\n"                                          \
> > > >>> +                        "       fence w, rw\n"                    \
> > > >>>
> > > >>> To give an unconditional RCsc for atomic_cmpxchg.
> > > >>>
> > > >>>>
> > > >>>> Regards,
> > > >>>> Boqun
> > > >>>
> > > >>>
> > > >>>
> > > >
> > > >
> > > >
> >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
> >
> > ML: https://lore.kernel.org/linux-csky/



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-22  1:56                     ` Guo Ren
@ 2022-04-22  3:11                       ` Boqun Feng
  -1 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2022-04-22  3:11 UTC (permalink / raw)
  To: Guo Ren
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Fri, Apr 22, 2022 at 09:56:21AM +0800, Guo Ren wrote:
> On Fri, Apr 22, 2022 at 6:56 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> >
> > On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> > > Hi Dan,
> > >
> > > On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > >
> > > > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > > > Thx Dan,
> > > > >
> > > > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > > >>
> > > > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > > > >>> Hi Boqun & Andrea,
> > > > >>>
> > > > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > > > >>>>
> > > > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > > >>>> [...]
> > > > >>>>>
> > > > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > > > >>>>> sequentially consistent and cannot be observed to happen before any
> > > > >>>>> earlier memory operations or after any later memory operations in the
> > > > >>>>> same RISC-V hart and to the same address domain.
> > > > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > > > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > > > >>>>>                 "       bltz     %[rc], 1f\n"
> > > > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > >>>>>                 "       bnez     %[rc], 0b\n"
> > > > >>>>> -               "       fence    rw, rw\n"
> > > > >>>>>                 "1:\n"
> > > > >>>>> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> > > > >>>>>
> > > > >>>>
> > > > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > > > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > > > >>> From the RVWMO spec description, the .aqrl annotation appends the same
> > > > >>> effect as a "fence rw, rw" to the AMO instruction, so it's RCsc.
> > > > >>>
> > > > >>> Not only .aqrl; I think the below could also be RCsc when
> > > > >>> sc.w.aq is executed:
> > > > >>> A: Pre-Access
> > > > >>> B: lr.w.rl ADDR-0
> > > > >>> ...
> > > > >>> C: sc.w.aq ADDR-0
> > > > >>> D: Post-Access
> > > > >>> Because sc.w.aq has overlapping address & data dependencies on lr.w.rl,
> > > > >>> the global memory order should be A->B->C->D when sc.w.aq is executed. For
> > > > >>> the amoswap
> > > > >>
> > > > >> These opcodes aren't actually meaningful, unfortunately.
> > > > >>
> > > > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > > > >> on an LR instruction unless the aq bit is also set, nor should software
> > > > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > > > 1. Oh, I've missed the latter half of the ISA manual text. But why can't we
> > > > > utilize lr.rl & sc.aq in software programming to guarantee the
> > > > > sequence?
> > > >
> > > > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > > > Plus, they just aren't common operations to begin with, e.g., there
> > > > is no smp_store_acquire() or smp_load_release(), nor are there
> > > > equivalents in C/C++ atomics.
> > > First, thx for pointing out that my patch violates the rules defined
> > > in the ISA manual. I've abandoned these parts in v3.
> > >
> > > It's easy to let hw support lr.rl & sc.aq (e.g., our hardware supports
> > > them). I agree there are no equivalents in C/C++ atomics. But they are
> > > useful for LR/SC pairs to implement atomic_acquire/release semantics.
> > > Compare below:
> > > A): fence rw, r; lr
> > > B): lr.rl
> > > A has an extra "fence ,r" effect in its semantics; it is an
> > > over-commitment from a software design point of view.
> > >
> > > ps: Current definition has problems:
> > > #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> > > #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> > >
> > > #define __cmpxchg_release(ptr, old, new, size)                          \
> > > ...
> > >                 __asm__ __volatile__ (                                  \
> > >                         RISCV_RELEASE_BARRIER                           \
> > >                         "0:     lr.w %0, %2\n"                          \
> > >
> > > That means "fence rw, w" can't prevent the lr.w from being reordered
> > > before the fence; we need a "fence rw, r" here. Here is the fixup
> > > patch which I'm preparing:
> > >
> >
> > That's not true. Note that RELEASE semantics only applies to the
> > write/store part of a read-modify-write atomic, similarly, ACQUIRE only
> I just want to point out that the "atomic" mentioned here only applies
> to RISC-V LR/SC sequences. For AMO instructions, it has been clarified
> that the AMO is treated as a whole:
> 
>      - .aq:   If the aq bit is set, then no later memory operations
>               in this RISC-V hart can be observed to take place
>               before the AMO.
>      - .rl:   If the rl bit is set, then other RISC-V harts will not
>               observe the AMO before memory accesses preceding the
>               AMO in this RISC-V hart.
>      - .aqrl: Setting both the aq and the rl bit on an AMO makes the
>               sequence sequentially consistent, meaning that it cannot
>               be reordered with earlier or later memory operations
>               from the same hart.
> 
> > applies to the read/load part. For example, the following litmus test
> > can observe the exists clause being true.
> Thx for pointing out, that means changing "fence rw, w" to "fence rw,
> r" is more strict and it would lower performance, right?

Yes, I think it's more strict but honestly I don't know the performance
impact ;-)

> 
> >
> >         {}
> >
> >         P0(int *x, int *y)
> >         {
> >                 int r0;
> >                 int r1;
> >
> >                 r0 = cmpxchg_acquire(x, 0, 1);
> >                 r1 = READ_ONCE(*y);
> Oh, the READ_ONCE could be reordered before the write/store part of
> cmpxchg_acquire, right? We shouldn't prevent it.

Right, the reordering is allowed by the API of Linux atomics and you
don't have to prevent it.
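
(If a caller really does need that read ordered after the store part,
here is a sketch of the usual options, reusing the litmus variables
above; this is general LKMM advice, not tied to this patchset:)

	/* fully ordered on success: the READ_ONCE() below cannot be
	 * observed before the update to *x */
	r0 = cmpxchg(x, 0, 1);
	r1 = READ_ONCE(*y);

	/* or keep the acquire variant and add a full barrier */
	r0 = cmpxchg_acquire(x, 0, 1);
	smp_mb();
	r1 = READ_ONCE(*y);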

Regards,
Boqun

> 
> >         }
> >
> >         P1(int *x, int *y)
> >         {
> >                 int r0;
> >
> >                 WRITE_ONCE(*y, 1);
> >                 smp_mb();
> >                 r0 = READ_ONCE(*x);
> >         }
> >
> >         exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)
> >
> > Regards,
> > Boqun
> >
> > > From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> > > From: Guo Ren <guoren@linux.alibaba.com>
> > > Date: Thu, 21 Apr 2022 16:44:48 +0800
> > > Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
> > >  implementation
> > >
> > > Current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock not atomic.
> > >
> > > __cmpxchg_release(ptr, old, new, size)
> > > ...
> > >         __asm__ __volatile__ (
> > >                         RISCV_RELEASE_BARRIER
> > >                         "0:     lr.w %0, %2\n"
> > >
> > > The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> > > we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> > >
> > > Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > > Signed-off-by: Guo Ren <guoren@kernel.org>
> > > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Andrea Parri <parri.andrea@gmail.com>
> > > Cc: Dan Lustig <dlustig@nvidia.com>
> > > Cc: stable@vger.kernel.org
> > > ---
> > >  arch/riscv/include/asm/atomic.h  | 4 ++--
> > >  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
> > >  arch/riscv/include/asm/fence.h   | 4 ++++
> > >  3 files changed, 10 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> > > index aef8aa9ac4f4..7cd66eba6ec3 100644
> > > --- a/arch/riscv/include/asm/atomic.h
> > > +++ b/arch/riscv/include/asm/atomic.h
> > > @@ -20,10 +20,10 @@
> > >  #include <asm/barrier.h>
> > >
> > >  #define __atomic_acquire_fence()                                       \
> > > -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> > > +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> > >
> > >  #define __atomic_release_fence()                                       \
> > > -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> > > +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> > >
> > >  static __always_inline int arch_atomic_read(const atomic_t *v)
> > >  {
> > > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > > index 9269fceb86e0..605edc2fca3b 100644
> > > --- a/arch/riscv/include/asm/cmpxchg.h
> > > +++ b/arch/riscv/include/asm/cmpxchg.h
> > > @@ -217,7 +217,7 @@
> > >                         "       bne  %0, %z3, 1f\n"                     \
> > >                         "       sc.w %1, %z4, %2\n"                     \
> > >                         "       bnez %1, 0b\n"                          \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > >                         "1:\n"                                          \
> > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > >                         : "rJ" ((long)__old), "rJ" (__new)              \
> > > @@ -229,7 +229,7 @@
> > >                         "       bne %0, %z3, 1f\n"                      \
> > >                         "       sc.d %1, %z4, %2\n"                     \
> > >                         "       bnez %1, 0b\n"                          \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > >                         "1:\n"                                          \
> > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > >                         : "rJ" (__old), "rJ" (__new)                    \
> > > @@ -259,7 +259,7 @@
> > >         switch (size) {                                                 \
> > >         case 4:                                                         \
> > >                 __asm__ __volatile__ (                                  \
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > >                         "0:     lr.w %0, %2\n"                          \
> > >                         "       bne  %0, %z3, 1f\n"                     \
> > >                         "       sc.w %1, %z4, %2\n"                     \
> > > @@ -271,7 +271,7 @@
> > >                 break;                                                  \
> > >         case 8:                                                         \
> > >                 __asm__ __volatile__ (                                  \
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > >                         "0:     lr.d %0, %2\n"                          \
> > >                         "       bne %0, %z3, 1f\n"                      \
> > >                         "       sc.d %1, %z4, %2\n"                     \
> > > diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> > > index 2b443a3a487f..4e446d64f04f 100644
> > > --- a/arch/riscv/include/asm/fence.h
> > > +++ b/arch/riscv/include/asm/fence.h
> > > @@ -4,9 +4,13 @@
> > >  #ifdef CONFIG_SMP
> > >  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
> > >  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> > > +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
> > >  #else
> > >  #define RISCV_ACQUIRE_BARRIER
> > >  #define RISCV_RELEASE_BARRIER
> > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> > > +#define RISCV_ATOMIC_RELEASE_BARRIER
> > >  #endif
> > >
> > >  #endif /* _ASM_RISCV_FENCE_H */
> > >
> > >
> > > >
> > > > > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > > > > right? And it removes a fence instruction, gaining better performance:
> > > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > > >                  "       sub      %[rc], %[p], %[o]\n"
> > > > >                  "       bltz     %[rc], 1f\n"
> > > > >  -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > >  +              "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > >                  "       bnez     %[rc], 0b\n"
> > > > >  -               "       fence    rw, rw\n"
> > > >
> > > > Yes, using .aqrl is valid.
> > > Thx and I think the below is also valid, right?
> > >
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > -                       "       amoswap.w %0, %2, %1\n"                 \
> > > +                       "       amoswap.w.rl %0, %2, %1\n"              \
> > >
> > > -                       "       amoswap.d %0, %2, %1\n"                 \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       "       amoswap.d.aq %0, %2, %1\n"              \
> > >
> > > >
> > > > Dan
> > > >
> > > > >>
> > > > >> Dan
> > > > >>
> > > > >>> The purpose of the whole patchset is to reduce the usage of
> > > > >>> independent fence rw, rw instructions, and maximize the usage of the
> > > > >>> .aq/.rl/.aqrl annotations of RISC-V.
> > > > >>>
> > > > >>>                 __asm__ __volatile__ (                                  \
> > > > >>>                         "0:     lr.w %0, %2\n"                          \
> > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > >>>                         "       fence rw, rw\n"                         \
> > > > >>>                         "1:\n"                                          \
> > > > >>>
> > > > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > > >>>> following litmus test?
> > > > >>>>
> > > > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > > > >>>>
> > > > >>>>     {}
> > > > >>>>
> > > > >>>>     P0(int *x, int *y, atomic_t *u)
> > > > >>>>     {
> > > > >>>>             int r0;
> > > > >>>>             int r1;
> > > > >>>>
> > > > >>>>             WRITE_ONCE(*x, 1);
> > > > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > > > >>>>             r1 = READ_ONCE(*y);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     P1(int *x, int *y, atomic_t *v)
> > > > >>>>     {
> > > > >>>>             int r0;
> > > > >>>>             int r1;
> > > > >>>>
> > > > >>>>             WRITE_ONCE(*y, 1);
> > > > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > > > >>>>             r1 = READ_ONCE(*x);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > > > >>> I think my patchset won't affect the above ordering guarantee. The
> > > > >>> current RISC-V implementation only gives RCsc when the compare succeeds
> > > > >>> (the old value matches) at least once. So I prefer RISC-V cmpxchg should be:
> > > > >>>
> > > > >>>
> > > > >>> -                       "0:     lr.w %0, %2\n"                          \
> > > > >>> +                      "0:     lr.w.rl %0, %2\n"                          \
> > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > >>> -                       "       fence rw, rw\n"                         \
> > > > >>>                         "1:\n"                                          \
> > > > >>> +                        "       fence w, rw\n"                    \
> > > > >>>
> > > > >>> To give an unconditional RCsc for atomic_cmpxchg.
> > > > >>>
> > > > >>>>
> > > > >>>> Regards,
> > > > >>>> Boqun
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >
> > > > >
> > > > >
> > >
> > >
> > >
> > > --
> > > Best Regards
> > >  Guo Ren
> > >
> > > ML: https://lore.kernel.org/linux-csky/
> 
> 
> 
> -- 
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
@ 2022-04-22  3:11                       ` Boqun Feng
  0 siblings, 0 replies; 42+ messages in thread
From: Boqun Feng @ 2022-04-22  3:11 UTC (permalink / raw)
  To: Guo Ren
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren


[-- Attachment #1.1: Type: text/plain, Size: 17039 bytes --]

On Fri, Apr 22, 2022 at 09:56:21AM +0800, Guo Ren wrote:
> On Fri, Apr 22, 2022 at 6:56 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> >
> > On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> > > Hi Dan,
> > >
> > > On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > >
> > > > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > > > Thx Dan,
> > > > >
> > > > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > > >>
> > > > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > > > >>> Hi Boqun & Andrea,
> > > > >>>
> > > > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > > > >>>>
> > > > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > > >>>> [...]
> > > > >>>>>
> > > > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > > > >>>>> sequentially consistent and cannot be observed to happen before any
> > > > >>>>> earlier memory operations or after any later memory operations in the
> > > > >>>>> same RISC-V hart and to the same address domain.
> > > > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > > > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > > > >>>>>                 "       bltz     %[rc], 1f\n".
> > > > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > >>>>>                 "       bnez     %[rc], 0b\n"
> > > > >>>>> -               "       fence    rw, rw\n"
> > > > >>>>>                 "1:\n"
> > > > >>>>> So .rl + fence rw, rw is over constraints, only using sc.w.aqrl is more proper.
> > > > >>>>>
> > > > >>>>
> > > > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > > > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > > > >>> From the RVWMO spec description, the .aqrl annotation appends the same
> > > > >>> effect with "fence rw, rw" to the AMO instruction, so it's RCsc.
> > > > >>>
> > > > >>> Not only .aqrl, and I think the below also could be an RCsc when
> > > > >>> sc.w.aq is executed:
> > > > >>> A: Pre-Access
> > > > >>> B: lr.w.rl ADDR-0
> > > > >>> ...
> > > > >>> C: sc.w.aq ADDR-0
> > > > >>> D: Post-Acess
> > > > >>> Because sc.w.aq has overlap address & data dependency on lr.w.rl, the
> > > > >>> global memory order should be A->B->C->D when sc.w.aq is executed. For
> > > > >>> the amoswap
> > > > >>
> > > > >> These opcodes aren't actually meaningful, unfortunately.
> > > > >>
> > > > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > > > >> on an LR instruction unless the aq bit is also set, nor should software
> > > > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > > > 1. Oh, I've missed the behind half of the ISA manual. But why can't we
> > > > > utilize lr.rl & sc.aq in software programming to guarantee the
> > > > > sequence?
> > > >
> > > > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > > > Plus, they just aren't common operations to begin with, e.g., there
> > > > is no smp_store_acquire() or smp_load_release(), nor are there
> > > > equivalents in C/C++ atomics.
> > > First, thx for pointing out that my patch violates the rules defined
> > > in the ISA manual. I've abandoned these parts in v3.
> > >
> > > It's easy to let hw support lr.rl & sc.aq (eg: our hardware supports
> > > them). I agree there are no equivalents in C/C++ atomics. But they are
> > > useful for LR/SC pairs to implement atomic_acqurie/release semantics.
> > > Compare below:
> > > A): fence rw, r; lr
> > > B): lr.rl
> > > The A has another "fence ,r" effect in semantics, it's over commit
> > > from a software design view.
> > >
> > > ps: Current definition has problems:
> > > #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> > > #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> > >
> > > #define __cmpxchg_release(ptr, old, new, size)                          \
> > > ...
> > >                 __asm__ __volatile__ (                                  \
> > >                         RISCV_RELEASE_BARRIER                           \
> > >                         "0:     lr.w %0, %2\n"                          \
> > >
> > > That means "fence rw, w" can't prevent lr.w beyond the fence, we need
> > > a "fence.rw. r" here. Here is the Fixup patch which I'm preparing:
> > >
> >
> > That's not true. Note that RELEASE semantics only applies to the
> > write/store part of a read-modify-write atomic, similarly, ACQUIRE only
> I just want to point out that the "atomic" mentioned here is only for
> RISC-V LR/SC AMO instructions. It has been clarified to tread AMO
> instruction as the whole part for other AMO instructions.
> 
>      - .aq:   If the aq bit is set, then no later memory operations
>               in this RISC-V hart can be observed to take place
>               before the AMO.
>      - .rl:   If the rl bit is set, then other RISC-V harts will not
>               observe the AMO before memory accesses preceding the
>               AMO in this RISC-V hart.
>      - .aqrl: Setting both the aq and the rl bit on an AMO makes the
>               sequence sequentially consistent, meaning that it cannot
>               be reordered with earlier or later memory operations
>               from the same hart.
> 
> > applies to the read/load part. For example, the following litmus test
> > can observe the exists clause being true.
> Thx for pointing out; that means changing "fence rw, w" to "fence rw, r"
> is more strict and it would lower performance, right?

Yes, I think it's more strict but honestly I don't know the performance
impact ;-)

> 
> >
> >         {}
> >
> >         P0(int *x, int *y)
> >         {
> >                 int r0;
> >                 int r1;
> >
> >                 r0 = cmpxchg_acquire(x, 0, 1);
> >                 r1 = READ_ONCE(*y);
> Oh, READ_ONCE could be reordered ahead of the write/store part of
> cmpxchg_acquire, right? We shouldn't prevent that.

Right, the reordering is allowed by the API of Linux atomics and you
don't have to prevent it.

Regards,
Boqun

> 
> >         }
> >
> >         P1(int *x, int *y)
> >         {
> >                 int r0;
> >
> >                 WRITE_ONCE(*y, 1);
> >                 smp_mb();
> >                 r0 = READ_ONCE(*x);
> >         }
> >
> >         exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)
> >
> > Regards,
> > Boqun
> >
> > > From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> > > From: Guo Ren <guoren@linux.alibaba.com>
> > > Date: Thu, 21 Apr 2022 16:44:48 +0800
> > > Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
> > >  implementation
> > >
> > > The current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock, not atomics.
> > >
> > > __cmpxchg_release(ptr, old, new, size)
> > > ...
> > >         __asm__ __volatile__ (
> > >                         RISCV_RELEASE_BARRIER
> > >                         "0:     lr.w %0, %2\n"
> > >
> > > The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> > > we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> > >
> > > Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > > Signed-off-by: Guo Ren <guoren@kernel.org>
> > > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > Cc: Andrea Parri <parri.andrea@gmail.com>
> > > Cc: Dan Lustig <dlustig@nvidia.com>
> > > Cc: stable@vger.kernel.org
> > > ---
> > >  arch/riscv/include/asm/atomic.h  | 4 ++--
> > >  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
> > >  arch/riscv/include/asm/fence.h   | 4 ++++
> > >  3 files changed, 10 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> > > index aef8aa9ac4f4..7cd66eba6ec3 100644
> > > --- a/arch/riscv/include/asm/atomic.h
> > > +++ b/arch/riscv/include/asm/atomic.h
> > > @@ -20,10 +20,10 @@
> > >  #include <asm/barrier.h>
> > >
> > >  #define __atomic_acquire_fence()                                       \
> > > -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> > > +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> > >
> > >  #define __atomic_release_fence()                                       \
> > > -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> > > +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> > >
> > >  static __always_inline int arch_atomic_read(const atomic_t *v)
> > >  {
> > > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > > index 9269fceb86e0..605edc2fca3b 100644
> > > --- a/arch/riscv/include/asm/cmpxchg.h
> > > +++ b/arch/riscv/include/asm/cmpxchg.h
> > > @@ -217,7 +217,7 @@
> > >                         "       bne  %0, %z3, 1f\n"                     \
> > >                         "       sc.w %1, %z4, %2\n"                     \
> > >                         "       bnez %1, 0b\n"                          \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > >                         "1:\n"                                          \
> > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > >                         : "rJ" ((long)__old), "rJ" (__new)              \
> > > @@ -229,7 +229,7 @@
> > >                         "       bne %0, %z3, 1f\n"                      \
> > >                         "       sc.d %1, %z4, %2\n"                     \
> > >                         "       bnez %1, 0b\n"                          \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > >                         "1:\n"                                          \
> > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > >                         : "rJ" (__old), "rJ" (__new)                    \
> > > @@ -259,7 +259,7 @@
> > >         switch (size) {                                                 \
> > >         case 4:                                                         \
> > >                 __asm__ __volatile__ (                                  \
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > >                         "0:     lr.w %0, %2\n"                          \
> > >                         "       bne  %0, %z3, 1f\n"                     \
> > >                         "       sc.w %1, %z4, %2\n"                     \
> > > @@ -271,7 +271,7 @@
> > >                 break;                                                  \
> > >         case 8:                                                         \
> > >                 __asm__ __volatile__ (                                  \
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > >                         "0:     lr.d %0, %2\n"                          \
> > >                         "       bne %0, %z3, 1f\n"                      \
> > >                         "       sc.d %1, %z4, %2\n"                     \
> > > diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> > > index 2b443a3a487f..4e446d64f04f 100644
> > > --- a/arch/riscv/include/asm/fence.h
> > > +++ b/arch/riscv/include/asm/fence.h
> > > @@ -4,9 +4,13 @@
> > >  #ifdef CONFIG_SMP
> > >  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
> > >  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> > > +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
> > >  #else
> > >  #define RISCV_ACQUIRE_BARRIER
> > >  #define RISCV_RELEASE_BARRIER
> > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> > > +#define RISCV_ATOMIC_RELEASE_BARRIER
> > >  #endif
> > >
> > >  #endif /* _ASM_RISCV_FENCE_H */
> > >
> > >
> > > >
> > > > > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > > > > right? And it saves a fence instruction, gaining better performance:
> > > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > > >                 "       sub      %[rc], %[p], %[o]\n"
> > > > >                 "       bltz     %[rc], 1f\n"
> > > > > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > >                 "       bnez     %[rc], 0b\n"
> > > > > -               "       fence    rw, rw\n"
> > > >
> > > > Yes, using .aqrl is valid.
> > > Thx and I think the below is also valid, right?
> > >
> > > -                       RISCV_RELEASE_BARRIER                           \
> > > -                       "       amoswap.w %0, %2, %1\n"                 \
> > > +                       "       amoswap.w.rl %0, %2, %1\n"              \
> > >
> > > -                       "       amoswap.d %0, %2, %1\n"                 \
> > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > +                       "       amoswap.d.aq %0, %2, %1\n"              \
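
Unlike LR/SC, standalone .aq/.rl bits on AMO instructions are architecturally
meaningful, so the release half above could plausibly be wrapped as in the
following sketch (the function name and plain "r" constraint are illustrative,
not the kernel's macro, which uses "rJ"):

    /* Sketch: 32-bit xchg with release semantics via amoswap.w.rl,
     * replacing the separate release fence. */
    static inline int xchg32_release_sketch(int *p, int v)
    {
            int ret;

            __asm__ __volatile__ (
                    "       amoswap.w.rl %0, %2, %1\n"
                    : "=r" (ret), "+A" (*p)
                    : "r" (v)
                    : "memory");
            return ret;
    }
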
> > >
> > > >
> > > > Dan
> > > >
> > > > >>
> > > > >> Dan
> > > > >>
> > > > >>> The purpose of the whole patchset is to reduce the usage of
> > > > >>> independent fence rw, rw instructions, and maximize the usage of the
> > > > >>> .aq/.rl/.aqrl annotations of RISC-V.
> > > > >>>
> > > > >>>                 __asm__ __volatile__ (                                  \
> > > > >>>                         "0:     lr.w %0, %2\n"                          \
> > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > >>>                         "       fence rw, rw\n"                         \
> > > > >>>                         "1:\n"                                          \
> > > > >>>
> > > > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > > >>>> following litmus test?
> > > > >>>>
> > > > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > > > >>>>
> > > > >>>>     {}
> > > > >>>>
> > > > >>>>     P0(int *x, int *y, atomic_t *u)
> > > > >>>>     {
> > > > >>>>             int r0;
> > > > >>>>             int r1;
> > > > >>>>
> > > > >>>>             WRITE_ONCE(*x, 1);
> > > > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > > > >>>>             r1 = READ_ONCE(*y);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     P1(int *x, int *y, atomic_t *v)
> > > > >>>>     {
> > > > >>>>             int r0;
> > > > >>>>             int r1;
> > > > >>>>
> > > > >>>>             WRITE_ONCE(*y, 1);
> > > > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > > > >>>>             r1 = READ_ONCE(*x);
> > > > >>>>     }
> > > > >>>>
> > > > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > > > >>> I think my patchset won't affect the above ordering guarantee. The current
> > > > >>> RISC-V implementation only gives RCsc ordering when the comparison succeeds
> > > > >>> (the original value matches) at least once. So I prefer RISC-V cmpxchg to be:
> > > > >>>
> > > > >>>
> > > > >>> -                       "0:     lr.w %0, %2\n"                          \
> > > > >>> +                       "0:     lr.w.rl %0, %2\n"                       \
> > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > >>> -                       "       fence rw, rw\n"                         \
> > > > >>>                         "1:\n"                                          \
> > > > >>> +                       "       fence w, rw\n"                          \
> > > > >>>
> > > > >>> To give unconditional RCsc ordering for atomic_cmpxchg.
> > > > >>>
> > > > >>>>
> > > > >>>> Regards,
> > > > >>>> Boqun
> > > > >>>
> > > > >>>
> > > > >>>
> > > > >
> > > > >
> > > > >
> > >
> > >
> > >
> > > --
> > > Best Regards
> > >  Guo Ren
> > >
> > > ML: https://lore.kernel.org/linux-csky/
> 
> 
> 
> -- 
> Best Regards
>  Guo Ren
> 
> ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-22  3:11                       ` Boqun Feng
@ 2022-04-24  7:52                         ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-24  7:52 UTC (permalink / raw)
  To: Boqun Feng
  Cc: Dan Lustig, Andrea Parri, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Fri, Apr 22, 2022 at 11:11 AM Boqun Feng <boqun.feng@gmail.com> wrote:
>
> On Fri, Apr 22, 2022 at 09:56:21AM +0800, Guo Ren wrote:
> > On Fri, Apr 22, 2022 at 6:56 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > >
> > > On Thu, Apr 21, 2022 at 05:39:09PM +0800, Guo Ren wrote:
> > > > Hi Dan,
> > > >
> > > > On Thu, Apr 21, 2022 at 1:03 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > > >
> > > > > On 4/20/2022 1:33 AM, Guo Ren wrote:
> > > > > > Thx Dan,
> > > > > >
> > > > > > On Wed, Apr 20, 2022 at 1:12 AM Dan Lustig <dlustig@nvidia.com> wrote:
> > > > > >>
> > > > > >> On 4/17/2022 12:51 AM, Guo Ren wrote:
> > > > > >>> Hi Boqun & Andrea,
> > > > > >>>
> > > > > >>> On Sun, Apr 17, 2022 at 10:26 AM Boqun Feng <boqun.feng@gmail.com> wrote:
> > > > > >>>>
> > > > > >>>> On Sun, Apr 17, 2022 at 12:49:44AM +0800, Guo Ren wrote:
> > > > > >>>> [...]
> > > > > >>>>>
> > > > > >>>>> If both the aq and rl bits are set, the atomic memory operation is
> > > > > >>>>> sequentially consistent and cannot be observed to happen before any
> > > > > >>>>> earlier memory operations or after any later memory operations in the
> > > > > >>>>> same RISC-V hart and to the same address domain.
> > > > > >>>>>                 "0:     lr.w     %[p],  %[c]\n"
> > > > > >>>>>                 "       sub      %[rc], %[p], %[o]\n"
> > > > > >>>>>                 "       bltz     %[rc], 1f\n"
> > > > > >>>>> -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > > >>>>> +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > > >>>>>                 "       bnez     %[rc], 0b\n"
> > > > > >>>>> -               "       fence    rw, rw\n"
> > > > > >>>>>                 "1:\n"
> > > > > >>>>> So .rl + fence rw, rw is over-constrained; using only sc.w.aqrl is more proper.
> > > > > >>>>>
> > > > > >>>>
> > > > > >>>> Can .aqrl order memory accesses before and after it (not against itself,
> > > > > >>>> against each other), i.e. act as a full memory barrier? For example, can
> > > > > >>> From the RVWMO spec description, the .aqrl annotation adds the same
> > > > > >>> effect as a "fence rw, rw" to the AMO instruction, so it's RCsc.
> > > > > >>>
> > > > > >>> Not only .aqrl; I think the below could also be RCsc when
> > > > > >>> sc.w.aq is executed:
> > > > > >>> A: Pre-Access
> > > > > >>> B: lr.w.rl ADDR-0
> > > > > >>> ...
> > > > > >>> C: sc.w.aq ADDR-0
> > > > > >>> D: Post-Access
> > > > > >>> Because sc.w.aq has an overlapping address & data dependency on lr.w.rl, the
> > > > > >>> global memory order should be A->B->C->D when sc.w.aq is executed. For
> > > > > >>> the amoswap
> > > > > >>
> > > > > >> These opcodes aren't actually meaningful, unfortunately.
> > > > > >>
> > > > > >> Quoting the ISA manual chapter 10.2: "Software should not set the rl bit
> > > > > >> on an LR instruction unless the aq bit is also set, nor should software
> > > > > >> set the aq bit on an SC instruction unless the rl bit is also set."
> > > > > > 1. Oh, I've missed the latter half of the ISA manual. But why can't we
> > > > > > utilize lr.rl & sc.aq in software programming to guarantee the
> > > > > > sequence?
> > > > >
> > > > > lr.aq and sc.rl map more naturally to hardware than lr.rl and sc.aq.
> > > > > Plus, they just aren't common operations to begin with, e.g., there
> > > > > is no smp_store_acquire() or smp_load_release(), nor are there
> > > > > equivalents in C/C++ atomics.
> > > > First, thx for pointing out that my patch violates the rules defined
> > > > in the ISA manual. I've abandoned these parts in v3.
> > > >
> > > > It's easy to let hw support lr.rl & sc.aq (e.g., our hardware supports
> > > > them). I agree there are no equivalents in C/C++ atomics. But they are
> > > > useful for LR/SC pairs to implement atomic_acquire/release semantics.
> > > > Compare below:
> > > > A): fence rw, r; lr
> > > > B): lr.rl
> > > > The A form has an extra "fence, r" effect in its semantics (it orders all
> > > > later loads, not just the lr), so it's an over-commit from a software design view.
> > > >
> > > > ps: Current definition has problems:
> > > > #define RISCV_ACQUIRE_BARRIER           "\tfence r , rw\n"
> > > > #define RISCV_RELEASE_BARRIER           "\tfence rw,  w\n"
> > > >
> > > > #define __cmpxchg_release(ptr, old, new, size)                          \
> > > > ...
> > > >                 __asm__ __volatile__ (                                  \
> > > >                         RISCV_RELEASE_BARRIER                           \
> > > >                         "0:     lr.w %0, %2\n"                          \
> > > >
> > > > That means "fence rw, w" can't prevent the lr.w from being reordered before
> > > > the fence; we'd need a "fence rw, r" here. Here is the fixup patch I'm preparing:
> > > >
> > >
> > > That's not true. Note that RELEASE semantics only applies to the
> > > write/store part of a read-modify-write atomic, similarly, ACQUIRE only
> > I just want to point out that the "atomic" mentioned here only refers
> > to RISC-V LR/SC instructions. The spec has clarified that the other AMO
> > instructions are treated as a single, whole operation:
> >
> >      - .aq:   If the aq bit is set, then no later memory operations
> >               in this RISC-V hart can be observed to take place
> >               before the AMO.
> >      - .rl:   If the rl bit is set, then other RISC-V harts will not
> >               observe the AMO before memory accesses preceding the
> >               AMO in this RISC-V hart.
> >      - .aqrl: Setting both the aq and the rl bit on an AMO makes the
> >               sequence sequentially consistent, meaning that it cannot
> >               be reordered with earlier or later memory operations
> >               from the same hart.
> >
> > > applies to the read/load part. For example, the following litmus test
> > > can observe the exists clause being true.
> > Thx for pointing out; that means changing "fence rw, w" to "fence rw, r"
> > is more strict and it would lower performance, right?
>
> Yes, I think it's more strict but honestly I don't know the performance
> impact ;-)
>
> >
> > >
> > >         {}
> > >
> > >         P0(int *x, int *y)
> > >         {
> > >                 int r0;
> > >                 int r1;
> > >
> > >                 r0 = cmpxchg_acquire(x, 0, 1);
> > >                 r1 = READ_ONCE(*y);
> > Oh, READ_ONCE could be reordered ahead of the write/store part of
> > cmpxchg_acquire, right? We shouldn't prevent that.
>
> Right, the reordering is allowed by the API of Linux atomics and you
> don't have to prevent it.
Thx, you are right, I got it.

>
> Regards,
> Boqun
>
> >
> > >         }
> > >
> > >         P1(int *x, int *y)
> > >         {
> > >                 int r0;
> > >
> > >                 WRITE_ONCE(*y, 1);
> > >                 smp_mb();
> > >                 r0 = READ_ONCE(*x);
> > >         }
> > >
> > >         exists (0:r0=0 /\ 0:r1=0 /\ 1:r0=0)
> > >
> > > Regards,
> > > Boqun
> > >
> > > > From 14c93aca0c3b10cf134791cf491b459972a36ec4 Mon Sep 17 00:00:00 2001
> > > > From: Guo Ren <guoren@linux.alibaba.com>
> > > > Date: Thu, 21 Apr 2022 16:44:48 +0800
> > > > Subject: [PATCH] riscv: atomic: Fixup wrong __atomic_acquire/release_fence
> > > >  implementation
> > > >
> > > > The current RISCV_ACQUIRE/RELEASE_BARRIER is for spin_lock, not atomics.
> > > >
> > > > __cmpxchg_release(ptr, old, new, size)
> > > > ...
> > > >         __asm__ __volatile__ (
> > > >                         RISCV_RELEASE_BARRIER
> > > >                         "0:     lr.w %0, %2\n"
> > > >
> > > > The "fence rw, w -> lr.w" is invalid and lr would beyond fence, so
> > > > we need "fence rw, r -> lr.w" here. Atomic acquire is the same.
> > > >
> > > > Fixes: 0123f4d76ca6 ("riscv/spinlock: Strengthen implementations with fences")
> > > > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > > > Signed-off-by: Guo Ren <guoren@kernel.org>
> > > > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > > > Cc: Mark Rutland <mark.rutland@arm.com>
> > > > Cc: Andrea Parri <parri.andrea@gmail.com>
> > > > Cc: Dan Lustig <dlustig@nvidia.com>
> > > > Cc: stable@vger.kernel.org
> > > > ---
> > > >  arch/riscv/include/asm/atomic.h  | 4 ++--
> > > >  arch/riscv/include/asm/cmpxchg.h | 8 ++++----
> > > >  arch/riscv/include/asm/fence.h   | 4 ++++
> > > >  3 files changed, 10 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> > > > index aef8aa9ac4f4..7cd66eba6ec3 100644
> > > > --- a/arch/riscv/include/asm/atomic.h
> > > > +++ b/arch/riscv/include/asm/atomic.h
> > > > @@ -20,10 +20,10 @@
> > > >  #include <asm/barrier.h>
> > > >
> > > >  #define __atomic_acquire_fence()                                       \
> > > > -       __asm__ __volatile__(RISCV_ACQUIRE_BARRIER "" ::: "memory")
> > > > +       __asm__ __volatile__(RISCV_ATOMIC_ACQUIRE_BARRIER "":::"memory")
> > > >
> > > >  #define __atomic_release_fence()                                       \
> > > > -       __asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
> > > > +       __asm__ __volatile__(RISCV_ATOMIC_RELEASE_BARRIER"" ::: "memory");
> > > >
> > > >  static __always_inline int arch_atomic_read(const atomic_t *v)
> > > >  {
> > > > diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> > > > index 9269fceb86e0..605edc2fca3b 100644
> > > > --- a/arch/riscv/include/asm/cmpxchg.h
> > > > +++ b/arch/riscv/include/asm/cmpxchg.h
> > > > @@ -217,7 +217,7 @@
> > > >                         "       bne  %0, %z3, 1f\n"                     \
> > > >                         "       sc.w %1, %z4, %2\n"                     \
> > > >                         "       bnez %1, 0b\n"                          \
> > > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > > >                         "1:\n"                                          \
> > > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > > >                         : "rJ" ((long)__old), "rJ" (__new)              \
> > > > @@ -229,7 +229,7 @@
> > > >                         "       bne %0, %z3, 1f\n"                      \
> > > >                         "       sc.d %1, %z4, %2\n"                     \
> > > >                         "       bnez %1, 0b\n"                          \
> > > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > > +                       RISCV_ATOMIC_ACQUIRE_BARRIER                    \
> > > >                         "1:\n"                                          \
> > > >                         : "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)    \
> > > >                         : "rJ" (__old), "rJ" (__new)                    \
> > > > @@ -259,7 +259,7 @@
> > > >         switch (size) {                                                 \
> > > >         case 4:                                                         \
> > > >                 __asm__ __volatile__ (                                  \
> > > > -                       RISCV_RELEASE_BARRIER                           \
> > > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > > >                         "0:     lr.w %0, %2\n"                          \
> > > >                         "       bne  %0, %z3, 1f\n"                     \
> > > >                         "       sc.w %1, %z4, %2\n"                     \
> > > > @@ -271,7 +271,7 @@
> > > >                 break;                                                  \
> > > >         case 8:                                                         \
> > > >                 __asm__ __volatile__ (                                  \
> > > > -                       RISCV_RELEASE_BARRIER                           \
> > > > +                       RISCV_ATOMIC_RELEASE_BARRIER                    \
> > > >                         "0:     lr.d %0, %2\n"                          \
> > > >                         "       bne %0, %z3, 1f\n"                      \
> > > >                         "       sc.d %1, %z4, %2\n"                     \
> > > > diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
> > > > index 2b443a3a487f..4e446d64f04f 100644
> > > > --- a/arch/riscv/include/asm/fence.h
> > > > +++ b/arch/riscv/include/asm/fence.h
> > > > @@ -4,9 +4,13 @@
> > > >  #ifdef CONFIG_SMP
> > > >  #define RISCV_ACQUIRE_BARRIER          "\tfence r , rw\n"
> > > >  #define RISCV_RELEASE_BARRIER          "\tfence rw,  w\n"
> > > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER   "\tfence w , rw\n"
> > > > +#define RISCV_ATOMIC_RELEASE_BARRIER   "\tfence rw,  r\n"
> > > >  #else
> > > >  #define RISCV_ACQUIRE_BARRIER
> > > >  #define RISCV_RELEASE_BARRIER
> > > > +#define RISCV_ATOMIC_ACQUIRE_BARRIER
> > > > +#define RISCV_ATOMIC_RELEASE_BARRIER
> > > >  #endif
> > > >
> > > >  #endif /* _ASM_RISCV_FENCE_H */
> > > >
> > > >
> > > > >
> > > > > > 2. Using .aqrl to replace the fence rw, rw is okay per the ISA manual,
> > > > > > right? And it saves a fence instruction, gaining better performance:
> > > > > >                 "0:     lr.w     %[p],  %[c]\n"
> > > > > >                 "       sub      %[rc], %[p], %[o]\n"
> > > > > >                 "       bltz     %[rc], 1f\n"
> > > > > > -               "       sc.w.rl  %[rc], %[rc], %[c]\n"
> > > > > > +               "       sc.w.aqrl %[rc], %[rc], %[c]\n"
> > > > > >                 "       bnez     %[rc], 0b\n"
> > > > > > -               "       fence    rw, rw\n"
> > > > >
> > > > > Yes, using .aqrl is valid.
> > > > Thx and I think the below is also valid, right?
> > > >
> > > > -                       RISCV_RELEASE_BARRIER                           \
> > > > -                       "       amoswap.w %0, %2, %1\n"                 \
> > > > +                       "       amoswap.w.rl %0, %2, %1\n"              \
> > > >
> > > > -                       "       amoswap.d %0, %2, %1\n"                 \
> > > > -                       RISCV_ACQUIRE_BARRIER                           \
> > > > +                       "       amoswap.d.aq %0, %2, %1\n"              \
> > > >
> > > > >
> > > > > Dan
> > > > >
> > > > > >>
> > > > > >> Dan
> > > > > >>
> > > > > >>> The purpose of the whole patchset is to reduce the usage of
> > > > > >>> independent fence rw, rw instructions, and maximize the usage of the
> > > > > >>> .aq/.rl/.aqrl annotations of RISC-V.
> > > > > >>>
> > > > > >>>                 __asm__ __volatile__ (                                  \
> > > > > >>>                         "0:     lr.w %0, %2\n"                          \
> > > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > > >>>                         "       fence rw, rw\n"                         \
> > > > > >>>                         "1:\n"                                          \
> > > > > >>>
> > > > > >>>> we end up with u == 1, v == 1, r1 on P0 is 0 and r1 on P1 is 0, for the
> > > > > >>>> following litmus test?
> > > > > >>>>
> > > > > >>>>     C lr-sc-aqrl-pair-vs-full-barrier
> > > > > >>>>
> > > > > >>>>     {}
> > > > > >>>>
> > > > > >>>>     P0(int *x, int *y, atomic_t *u)
> > > > > >>>>     {
> > > > > >>>>             int r0;
> > > > > >>>>             int r1;
> > > > > >>>>
> > > > > >>>>             WRITE_ONCE(*x, 1);
> > > > > >>>>             r0 = atomic_cmpxchg(u, 0, 1);
> > > > > >>>>             r1 = READ_ONCE(*y);
> > > > > >>>>     }
> > > > > >>>>
> > > > > >>>>     P1(int *x, int *y, atomic_t *v)
> > > > > >>>>     {
> > > > > >>>>             int r0;
> > > > > >>>>             int r1;
> > > > > >>>>
> > > > > >>>>             WRITE_ONCE(*y, 1);
> > > > > >>>>             r0 = atomic_cmpxchg(v, 0, 1);
> > > > > >>>>             r1 = READ_ONCE(*x);
> > > > > >>>>     }
> > > > > >>>>
> > > > > >>>>     exists (u=1 /\ v=1 /\ 0:r1=0 /\ 1:r1=0)
> > > > > >>> I think my patchset won't affect the above ordering guarantee. The current
> > > > > >>> RISC-V implementation only gives RCsc ordering when the comparison succeeds
> > > > > >>> (the original value matches) at least once. So I prefer RISC-V cmpxchg to be:
> > > > > >>>
> > > > > >>>
> > > > > >>> -                       "0:     lr.w %0, %2\n"                          \
> > > > > >>> +                       "0:     lr.w.rl %0, %2\n"                       \
> > > > > >>>                         "       bne  %0, %z3, 1f\n"                     \
> > > > > >>>                         "       sc.w.rl %1, %z4, %2\n"                  \
> > > > > >>>                         "       bnez %1, 0b\n"                          \
> > > > > >>> -                       "       fence rw, rw\n"                         \
> > > > > >>>                         "1:\n"                                          \
> > > > > >>> +                       "       fence w, rw\n"                          \
> > > > > >>>
> > > > > >>> To give unconditional RCsc ordering for atomic_cmpxchg.
> > > > > >>>
> > > > > >>>>
> > > > > >>>> Regards,
> > > > > >>>> Boqun
> > > > > >>>
> > > > > >>>
> > > > > >>>
> > > > > >
> > > > > >
> > > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Best Regards
> > > >  Guo Ren
> > > >
> > > > ML: https://lore.kernel.org/linux-csky/
> >
> >
> >
> > --
> > Best Regards
> >  Guo Ren
> >
> > ML: https://lore.kernel.org/linux-csky/



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

* Re: [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage
  2022-04-18 23:41       ` Andrea Parri
@ 2022-04-24  8:33         ` Guo Ren
  -1 siblings, 0 replies; 42+ messages in thread
From: Guo Ren @ 2022-04-24  8:33 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Boqun Feng, Daniel Lustig, Paul E. McKenney, Arnd Bergmann,
	Palmer Dabbelt, Mark Rutland, Will Deacon, Peter Zijlstra,
	linux-arch, Linux Kernel Mailing List, linux-riscv, Guo Ren

On Tue, Apr 19, 2022 at 7:41 AM Andrea Parri <parri.andrea@gmail.com> wrote:
>
> > > Seems to me that you are basically reverting 5ce6c1f3535f
> > > ("riscv/atomic: Strengthen implementations with fences"). That commit
> > > fixed a memory ordering issue, could you explain why the issue no
> > > longer needs a fix?
> >
> > I'm not reverting the prior patch, just optimizing it.
> >
> > In the RISC-V “A” Standard Extension for Atomic Instructions spec, it says:
>
> With reference to the RISC-V herd specification at:
>
>   https://github.com/riscv/riscv-isa-manual.git
>
> the issue (better: lr-sc-aqrl-pair-vs-full-barrier) seems to _no longer_
> need a fix since commit:
                        "0:     lr.w %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w.rl %1, %z4, %2\n"                  \
                        "       bnez %1, 0b\n"                          \
                        "       fence rw, rw\n"                         \
Above is the current implementation, and its logic is inconsistent: it
mixes the .rl annotation with a trailing full fence. If we wanted a plain
full barrier, we would implement it like below:
                        "       fence rw, w\n"                          \
                        "0:     lr.w %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w %1, %z4, %2\n"                     \
                        "       bnez %1, 0b\n"                          \
                        "       fence rw, rw\n"                         \
The above would let the plain lr.w & sc.w execute fastest. If we think
.aq/.rl won't affect the forward-progress guarantee, we should implement
it like below:
                        "0:     lr.w %0, %2\n"                          \
                        "       bne  %0, %z3, 1f\n"                     \
                        "       sc.w.aqrl %1, %z4, %2\n"                \
                        "       bnez %1, 0b\n"                          \

Using .aqrl is better than sc.w.rl + fence rw, rw: an lr/sc.rl pair has
the same forward-progress guarantee as an lr/sc.aqrl pair, and only the
sc.rl part affects the speed of the lr/sc loop. Second, it saves one
fence rw, rw. So for riscv we needn't put a full barrier after the sc
as arm64 does; we can use .aqrl instead.
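
Put together, a sketch of the fully ordered cmpxchg argued for here (the
function name and plain "r" constraints are illustrative; the kernel macro
uses "rJ" constraints and %z operand modifiers):

    /* Sketch: fully ordered 32-bit cmpxchg using sc.w.aqrl in place
     * of sc.w.rl plus a trailing fence rw, rw. */
    static inline int cmpxchg32_aqrl_sketch(int *p, int old, int new)
    {
            int ret, rc;

            __asm__ __volatile__ (
                    "0:     lr.w      %0, %2\n"
                    "       bne       %0, %3, 1f\n"
                    "       sc.w.aqrl %1, %4, %2\n"
                    "       bnez      %1, 0b\n"
                    "1:\n"
                    : "=&r" (ret), "=&r" (rc), "+A" (*p)
                    : "r" (old), "r" (new)
                    : "memory");
            return ret;
    }
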

>
>   03a5e722fc0f ("Updates to the memory consistency model spec")
>
> (here a template, to double check:
>
>   https://github.com/litmus-tests/litmus-tests-riscv/blob/master/tests/non-mixed-size/HAND/LR-SC-NOT-FENCE.litmus )
>
> I defer to Daniel/others for a "bi-section" of the prose specification.
> ;-)
>
>   Andrea



-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 42+ messages in thread

end of thread, other threads:[~2022-04-24  8:34 UTC | newest]

Thread overview: 42+ messages
2022-04-12  3:49 [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage guoren
2022-04-12  3:49 ` guoren
2022-04-12  3:49 ` [PATCH V2 1/3] riscv: atomic: Cleanup unnecessary definition guoren
2022-04-12  3:49   ` guoren
2022-04-12  3:49 ` [PATCH V2 2/3] riscv: atomic: Optimize acquire and release for AMO operations guoren
2022-04-12  3:49   ` guoren
2022-04-12  3:49 ` [PATCH V2 3/3] riscv: atomic: Optimize memory barrier semantics of LRSC-pairs guoren
2022-04-12  3:49   ` guoren
2022-04-13 15:46 ` [PATCH V2 0/3] riscv: atomic: Optimize AMO instructions usage Boqun Feng
2022-04-13 15:46   ` Boqun Feng
2022-04-16 16:49   ` Guo Ren
2022-04-16 16:49     ` Guo Ren
2022-04-17  2:26     ` Boqun Feng
2022-04-17  2:26       ` Boqun Feng
2022-04-17  4:51       ` Guo Ren
2022-04-17  4:51         ` Guo Ren
2022-04-17  6:30         ` Boqun Feng
2022-04-17  6:30           ` Boqun Feng
2022-04-17  6:45           ` Guo Ren
2022-04-17  6:45             ` Guo Ren
2022-04-19 17:12         ` Dan Lustig
2022-04-19 17:12           ` Dan Lustig
2022-04-20  5:33           ` Guo Ren
2022-04-20  5:33             ` Guo Ren
2022-04-20 17:03             ` Dan Lustig
2022-04-20 17:03               ` Dan Lustig
2022-04-21  9:39               ` Guo Ren
2022-04-21  9:39                 ` Guo Ren
2022-04-21 22:56                 ` Boqun Feng
2022-04-21 22:56                   ` Boqun Feng
2022-04-22  1:56                   ` Guo Ren
2022-04-22  1:56                     ` Guo Ren
2022-04-22  3:11                     ` Boqun Feng
2022-04-22  3:11                       ` Boqun Feng
2022-04-24  7:52                       ` Guo Ren
2022-04-24  7:52                         ` Guo Ren
2022-04-18 23:41     ` Andrea Parri
2022-04-18 23:41       ` Andrea Parri
2022-04-19 17:13       ` Dan Lustig
2022-04-19 17:13         ` Dan Lustig
2022-04-24  8:33       ` Guo Ren
2022-04-24  8:33         ` Guo Ren
