* [PATCH V6 0/2] riscv: Support qspinlock with generic headers
@ 2022-06-21 14:49 guoren
  2022-06-21 14:49 ` [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h guoren
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: guoren @ 2022-06-21 14:49 UTC (permalink / raw)
  To: palmer, arnd, peterz, longman, boqun.feng, Conor.Dooley,
	chenhuacai, kernel, r, shorne
  Cc: linux-riscv, linux-arch, linux-kernel, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

Enable qspinlock and meet the requirements mentioned in commit
a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size
atomics").

RISC-V LR/SC pairs can provide either a strong or a weak
forward-progress guarantee, depending on the micro-architecture. The
RISC-V ISA spec lays out several restrictions that allow hardware to
provide a strict forward-progress guarantee (RISC-V User ISA - 8.3
Eventual Success of Store-Conditional Instructions):
We restricted the length of LR/SC loops to fit within 64 contiguous
instruction bytes in the base ISA to avoid undue restrictions on
instruction cache and TLB size and associativity. Similarly, we
disallowed other loads and stores within the loops to avoid restrictions
on data-cache associativity in simple implementations that track the
reservation within a private cache. The restrictions on branches and
jumps limit the time that can be spent in the sequence. Floating-point
operations and integer multiply/divide were disallowed to simplify the
operating system’s emulation of these instructions on implementations
lacking appropriate hardware support.
Software is not forbidden from using unconstrained LR/SC sequences, but
portable software must detect the case that the sequence repeatedly
fails, then fall back to an alternate code sequence that does not rely
on an unconstrained LR/SC sequence. Implementations are permitted to
unconditionally fail any unconstrained LR/SC sequence.

For example, some riscv hardware such as BOOMv3 & XiangShan can
provide a strict, strong forward-progress guarantee (the cache line is
kept in an exclusive state for a back-off period, and only this core's
interrupts can break the LR/SC pair).
QEMU's riscv target currently gives only a weak forward-progress
guarantee because of an incorrect implementation [1].
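
For illustration, a constrained LR/SC sequence of the kind the spec
permits looks roughly like the sketch below (kernel-style C with a
made-up function name, not code from this series):

#include <linux/types.h>

/*
 * A 32-bit atomic exchange written as a constrained LR/SC sequence:
 * the retry loop contains only lr.w, sc.w and a single backward
 * branch -- no other loads/stores, no FP or mul/div -- and fits
 * well within 64 instruction bytes.
 */
static inline u32 constrained_xchg32(u32 *ptr, u32 newval)
{
	u32 old, err;

	__asm__ __volatile__ (
		"0:	lr.w	%0, %2\n"	/* acquire the reservation */
		"	sc.w	%1, %3, %2\n"	/* try to commit newval */
		"	bnez	%1, 0b\n"	/* retry if reservation lost */
		: "=&r" (old), "=&r" (err), "+A" (*ptr)
		: "r" (newval)
		: "memory");

	return old;
}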

The first version of this patch was posted in January 2019 [2].

[1] https://github.com/qemu/qemu/blob/master/target/riscv/insn_trans/trans_rva.c.inc
[2] https://lore.kernel.org/linux-riscv/20190211043829.30096-1-michaeljclark@mac.com/#r

Changes in V6:
 - Fix a Clang compile problem reported by kernel test robot
   <lkp@intel.com>
 - Clean up asm-generic/spinlock.h
 - Remove the changelog from the main patch commit message, as
   suggested by Conor.Dooley@microchip.com
 - Remove "default y if NUMA" from Kconfig

Changes in V5:
 - Update the comment with the RISC-V forward-progress guarantee.
 - Return to the V3 direction and optimize the asm code.

Changes in V4:
 - Remove custom sub-word xchg implementation
 - Add ARCH_USE_QUEUED_SPINLOCKS_XCHG32 in locking/qspinlock

Changes in V3:
 - Apply coding-convention advice from Peter Zijlstra

Changes in V2:
 - Coding-convention fixes in cmpxchg.h
 - Re-implement short xchg
 - Remove char & cmpxchg implementations

Guo Ren (2):
  asm-generic: spinlock: Move qspinlock & ticket-lock into generic
    spinlock.h
  riscv: Add qspinlock support

 arch/riscv/Kconfig                    |  8 +++
 arch/riscv/include/asm/Kbuild         |  2 +
 arch/riscv/include/asm/cmpxchg.h      | 17 +++++
 include/asm-generic/spinlock.h        | 90 ++------------------------
 include/asm-generic/spinlock_types.h  | 14 ++--
 include/asm-generic/tspinlock.h       | 92 +++++++++++++++++++++++++++
 include/asm-generic/tspinlock_types.h | 17 +++++
 7 files changed, 146 insertions(+), 94 deletions(-)
 create mode 100644 include/asm-generic/tspinlock.h
 create mode 100644 include/asm-generic/tspinlock_types.h

-- 
2.36.1



* [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h
  2022-06-21 14:49 [PATCH V6 0/2] riscv: Support qspinlock with generic headers guoren
@ 2022-06-21 14:49 ` guoren
  2022-06-23  8:33   ` Arnd Bergmann
  2022-06-21 14:49 ` [PATCH V6 2/2] riscv: Add qspinlock support guoren
  2022-06-21 15:08 ` [PATCH V6 0/2] riscv: Support qspinlock with generic headers Waiman Long
  2 siblings, 1 reply; 7+ messages in thread
From: guoren @ 2022-06-21 14:49 UTC (permalink / raw)
  To: palmer, arnd, peterz, longman, boqun.feng, Conor.Dooley,
	chenhuacai, kernel, r, shorne
  Cc: linux-riscv, linux-arch, linux-kernel, Guo Ren, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

Move the ticket-lock implementation into tspinlock.h and let the
generic spinlock.h select between qspinlock and ticket-lock via
CONFIG_ARCH_USE_QUEUED_SPINLOCKS.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Arnd Bergmann <arnd@arndb.de>
---
 include/asm-generic/spinlock.h        | 90 ++------------------------
 include/asm-generic/spinlock_types.h  | 14 ++--
 include/asm-generic/tspinlock.h       | 92 +++++++++++++++++++++++++++
 include/asm-generic/tspinlock_types.h | 17 +++++
 4 files changed, 119 insertions(+), 94 deletions(-)
 create mode 100644 include/asm-generic/tspinlock.h
 create mode 100644 include/asm-generic/tspinlock_types.h

diff --git a/include/asm-generic/spinlock.h b/include/asm-generic/spinlock.h
index fdfebcb050f4..4eca2488af38 100644
--- a/include/asm-generic/spinlock.h
+++ b/include/asm-generic/spinlock.h
@@ -1,92 +1,12 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-
-/*
- * 'Generic' ticket-lock implementation.
- *
- * It relies on atomic_fetch_add() having well defined forward progress
- * guarantees under contention. If your architecture cannot provide this, stick
- * to a test-and-set lock.
- *
- * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
- * sub-word of the value. This is generally true for anything LL/SC although
- * you'd be hard pressed to find anything useful in architecture specifications
- * about this. If your architecture cannot do this you might be better off with
- * a test-and-set.
- *
- * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
- * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
- * a full fence after the spin to upgrade the otherwise-RCpc
- * atomic_cond_read_acquire().
- *
- * The implementation uses smp_cond_load_acquire() to spin, so if the
- * architecture has WFE like instructions to sleep instead of poll for word
- * modifications be sure to implement that (see ARM64 for example).
- *
- */
-
 #ifndef __ASM_GENERIC_SPINLOCK_H
 #define __ASM_GENERIC_SPINLOCK_H
 
-#include <linux/atomic.h>
-#include <asm-generic/spinlock_types.h>
-
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	u32 val = atomic_fetch_add(1<<16, lock);
-	u16 ticket = val >> 16;
-
-	if (ticket == (u16)val)
-		return;
-
-	/*
-	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
-	 * custom cond_read_rcsc() here we just emit a full fence.  We only
-	 * need the prior reads before subsequent writes ordering from
-	 * smb_mb(), but as atomic_cond_read_acquire() just emits reads and we
-	 * have no outstanding writes due to the atomic_fetch_add() the extra
-	 * orderings are free.
-	 */
-	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
-	smp_mb();
-}
-
-static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
-{
-	u32 old = atomic_read(lock);
-
-	if ((old >> 16) != (old & 0xffff))
-		return false;
-
-	return atomic_try_cmpxchg(lock, &old, old + (1<<16)); /* SC, for RCsc */
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
-	u32 val = atomic_read(lock);
-
-	smp_store_release(ptr, (u16)val + 1);
-}
-
-static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	u32 val = atomic_read(lock);
-
-	return ((val >> 16) != (val & 0xffff));
-}
-
-static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-	u32 val = atomic_read(lock);
-
-	return (s16)((val >> 16) - (val & 0xffff)) > 1;
-}
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	return !arch_spin_is_locked(&lock);
-}
-
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
 #include <asm/qrwlock.h>
+#else
+#include <asm-generic/tspinlock.h>
+#endif
 
 #endif /* __ASM_GENERIC_SPINLOCK_H */
diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
index 8962bb730945..9875c1d058b3 100644
--- a/include/asm-generic/spinlock_types.h
+++ b/include/asm-generic/spinlock_types.h
@@ -3,15 +3,11 @@
 #ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
 #define __ASM_GENERIC_SPINLOCK_TYPES_H
 
-#include <linux/types.h>
-typedef atomic_t arch_spinlock_t;
-
-/*
- * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
- * include.
- */
+#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
+#include <asm-generic/qspinlock_types.h>
 #include <asm/qrwlock_types.h>
-
-#define __ARCH_SPIN_LOCK_UNLOCKED	ATOMIC_INIT(0)
+#else
+#include <asm-generic/tspinlock_types.h>
+#endif
 
 #endif /* __ASM_GENERIC_SPINLOCK_TYPES_H */
diff --git a/include/asm-generic/tspinlock.h b/include/asm-generic/tspinlock.h
new file mode 100644
index 000000000000..def7b8f0f4f4
--- /dev/null
+++ b/include/asm-generic/tspinlock.h
@@ -0,0 +1,92 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * 'Generic' ticket-lock implementation.
+ *
+ * It relies on atomic_fetch_add() having well defined forward progress
+ * guarantees under contention. If your architecture cannot provide this, stick
+ * to a test-and-set lock.
+ *
+ * It also relies on atomic_fetch_add() being safe vs smp_store_release() on a
+ * sub-word of the value. This is generally true for anything LL/SC although
+ * you'd be hard pressed to find anything useful in architecture specifications
+ * about this. If your architecture cannot do this you might be better off with
+ * a test-and-set.
+ *
+ * It further assumes atomic_*_release() + atomic_*_acquire() is RCpc and hence
+ * uses atomic_fetch_add() which is RCsc to create an RCsc hot path, along with
+ * a full fence after the spin to upgrade the otherwise-RCpc
+ * atomic_cond_read_acquire().
+ *
+ * The implementation uses smp_cond_load_acquire() to spin, so if the
+ * architecture has WFE like instructions to sleep instead of poll for word
+ * modifications be sure to implement that (see ARM64 for example).
+ *
+ */
+
+#ifndef __ASM_GENERIC_TSPINLOCK_H
+#define __ASM_GENERIC_TSPINLOCK_H
+
+#include <linux/atomic.h>
+#include <asm-generic/tspinlock_types.h>
+
+static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	u32 val = atomic_fetch_add(1<<16, lock);
+	u16 ticket = val >> 16;
+
+	if (ticket == (u16)val)
+		return;
+
+	/*
+	 * atomic_cond_read_acquire() is RCpc, but rather than defining a
+	 * custom cond_read_rcsc() here we just emit a full fence.  We only
+	 * need the prior reads before subsequent writes ordering from
+	 * smp_mb(), but as atomic_cond_read_acquire() just emits reads and we
+	 * have no outstanding writes due to the atomic_fetch_add() the extra
+	 * orderings are free.
+	 */
+	atomic_cond_read_acquire(lock, ticket == (u16)VAL);
+	smp_mb();
+}
+
+static __always_inline bool arch_spin_trylock(arch_spinlock_t *lock)
+{
+	u32 old = atomic_read(lock);
+
+	if ((old >> 16) != (old & 0xffff))
+		return false;
+
+	return atomic_try_cmpxchg(lock, &old, old + (1<<16)); /* SC, for RCsc */
+}
+
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	u16 *ptr = (u16 *)lock + IS_ENABLED(CONFIG_CPU_BIG_ENDIAN);
+	u32 val = atomic_read(lock);
+
+	smp_store_release(ptr, (u16)val + 1);
+}
+
+static __always_inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(lock);
+
+	return ((val >> 16) != (val & 0xffff));
+}
+
+static __always_inline int arch_spin_is_contended(arch_spinlock_t *lock)
+{
+	u32 val = atomic_read(lock);
+
+	return (s16)((val >> 16) - (val & 0xffff)) > 1;
+}
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	return !arch_spin_is_locked(&lock);
+}
+
+#include <asm/qrwlock.h>
+
+#endif /* __ASM_GENERIC_TSPINLOCK_H */
diff --git a/include/asm-generic/tspinlock_types.h b/include/asm-generic/tspinlock_types.h
new file mode 100644
index 000000000000..ca3ea5acd172
--- /dev/null
+++ b/include/asm-generic/tspinlock_types.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ASM_GENERIC_TSPINLOCK_TYPES_H
+#define __ASM_GENERIC_TSPINLOCK_TYPES_H
+
+#include <linux/types.h>
+typedef atomic_t arch_spinlock_t;
+
+/*
+ * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
+ * include.
+ */
+#include <asm/qrwlock_types.h>
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	ATOMIC_INIT(0)
+
+#endif /* __ASM_GENERIC_TSPINLOCK_TYPES_H */
-- 
2.36.1
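
For reference, the ticket-lock code moved above packs two 16-bit
counters into the 32-bit lock word: the "next" counter in the high
half and the "owner" counter in the low half. A non-atomic C model
with made-up helper names (illustration only; the real code uses
atomic_fetch_add() and smp_store_release()):

#include <linux/types.h>

/* Non-atomic model of the ticket-lock word: next:16 | owner:16. */
static inline u16 ticket_take(u32 *lock)
{
	u32 old = *lock;	/* atomic_fetch_add(1 << 16, lock) */

	*lock = old + (1 << 16);
	return old >> 16;	/* my ticket is the previous "next" */
}

static inline int ticket_is_mine(u32 *lock, u16 ticket)
{
	return (u16)*lock == ticket;	/* owner reached my ticket? */
}

static inline void ticket_release(u32 *lock)
{
	/* smp_store_release() on the owner halfword in the real code */
	*lock = (*lock & 0xffff0000U) | (u16)(*lock + 1);
}

arch_spin_lock() takes a ticket and then spins until ticket_is_mine()
holds; arch_spin_unlock() corresponds to ticket_release().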



* [PATCH V6 2/2] riscv: Add qspinlock support
  2022-06-21 14:49 [PATCH V6 0/2] riscv: Support qspinlock with generic headers guoren
  2022-06-21 14:49 ` [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h guoren
@ 2022-06-21 14:49 ` guoren
  2022-06-21 15:08 ` [PATCH V6 0/2] riscv: Support qspinlock with generic headers Waiman Long
  2 siblings, 0 replies; 7+ messages in thread
From: guoren @ 2022-06-21 14:49 UTC (permalink / raw)
  To: palmer, arnd, peterz, longman, boqun.feng, Conor.Dooley,
	chenhuacai, kernel, r, shorne
  Cc: linux-riscv, linux-arch, linux-kernel, Guo Ren, Guo Ren

From: Guo Ren <guoren@linux.alibaba.com>

Enable qspinlock according to the requirements mentioned in commit
a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size
atomics").

 - RISC-V atomic_*_release()/atomic_*_acquire() are implemented with
   their own relaxed versions plus an acquire/release fence for RCsc
   synchronization.

 - RISC-V LR/SC pairs can provide a strong or a weak forward-progress
   guarantee, depending on the micro-architecture. The RISC-V ISA spec
   lays out several restrictions that allow hardware to provide a
   strict forward-progress guarantee (RISC-V User ISA - 8.3 Eventual
   Success of Store-Conditional Instructions). Some riscv cores such
   as BOOMv3 & XiangShan can provide a strict, strong forward-progress
   guarantee (the cache line is kept in an exclusive state for a
   back-off period, and only this core's interrupts can break the
   LR/SC pair).

 - RISC-V provides a cheap atomic_fetch_or_acquire() with RCsc
   semantics.

 - RISC-V provides only a relaxed xchg16 to support qspinlock.

Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
Signed-off-by: Guo Ren <guoren@kernel.org>
Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Waiman Long <longman@redhat.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Palmer Dabbelt <palmer@rivosinc.com>
---
 arch/riscv/Kconfig               |  8 ++++++++
 arch/riscv/include/asm/Kbuild    |  2 ++
 arch/riscv/include/asm/cmpxchg.h | 17 +++++++++++++++++
 3 files changed, 27 insertions(+)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index 32ffef9f6e5b..e1b57cb89189 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -333,6 +333,14 @@ config NODES_SHIFT
 	  Specify the maximum number of NUMA Nodes available on the target
 	  system.  Increases memory reserved to accommodate various tables.
 
+config RISCV_USE_QUEUED_SPINLOCKS
+	bool "Use queued spinlocks instead of ticket locks"
+	depends on SMP && MMU
+	select ARCH_USE_QUEUED_SPINLOCKS
+	help
+	  Make sure your micro-arch's LL/SC has a strong forward-progress
+	  guarantee. Otherwise, stay with ticket locks.
+
 config RISCV_ALTERNATIVE
 	bool
 	depends on !XIP_KERNEL
diff --git a/arch/riscv/include/asm/Kbuild b/arch/riscv/include/asm/Kbuild
index 504f8b7e72d4..2cce98c7b653 100644
--- a/arch/riscv/include/asm/Kbuild
+++ b/arch/riscv/include/asm/Kbuild
@@ -2,7 +2,9 @@
 generic-y += early_ioremap.h
 generic-y += flat.h
 generic-y += kvm_para.h
+generic-y += mcs_spinlock.h
 generic-y += parport.h
+generic-y += qspinlock.h
 generic-y += spinlock.h
 generic-y += spinlock_types.h
 generic-y += qrwlock.h
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 12debce235e5..492104d45a23 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -17,6 +17,23 @@
 	__typeof__(new) __new = (new);					\
 	__typeof__(*(ptr)) __ret;					\
 	switch (size) {							\
+	case 2: { 							\
+		u32 temp;						\
+		u32 shif = ((ulong)__ptr & 2) ? 16 : 0;			\
+		u32 mask = 0xffff << shif;				\
+		__ptr = (__typeof__(ptr))((ulong)__ptr & ~(ulong)2);	\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	and  %1, %0, %z3\n"			\
+			"	or   %1, %1, %z4\n"			\
+			"	sc.w %1, %1, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			: "=&r" (__ret), "=&r" (temp), "+A" (*__ptr)	\
+			: "rJ" (~mask), "rJ" (__new << shif)		\
+			: "memory");					\
+		__ret = (__ret & mask) >> shif;				\
+		break;							\
+	}								\
 	case 4:								\
 		__asm__ __volatile__ (					\
 			"	amoswap.w %0, %2, %1\n"			\
-- 
2.36.1
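
For reference, the sub-word masking logic in the new case 2 above can
be modeled in plain C as below. This is an illustrative, non-atomic
sketch with a made-up helper name; the real code performs the same
splice inside the LR/SC loop so that the whole read-modify-write is
atomic:

#include <linux/types.h>

/*
 * Non-atomic C model of the case 2 hunk above (little-endian): the
 * addressed halfword sits at bit 0 or bit 16 of its containing
 * 32-bit word, and only that half is swapped.
 */
static inline u16 xchg16_model(u16 *p, u16 newval)
{
	unsigned long addr = (unsigned long)p;
	u32 *word = (u32 *)(addr & ~2UL);	/* u16* is 2-aligned, so this is the u32 */
	u32 shift = (addr & 2) ? 16 : 0;	/* select the halfword */
	u32 mask = 0xffffU << shift;
	u32 old = *word;			/* lr.w in the real code */

	*word = (old & ~mask) | ((u32)newval << shift);	/* sc.w */

	return (old & mask) >> shift;
}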



* Re: [PATCH V6 0/2] riscv: Support qspinlock with generic headers
  2022-06-21 14:49 [PATCH V6 0/2] riscv: Support qspinlock with generic headers guoren
  2022-06-21 14:49 ` [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h guoren
  2022-06-21 14:49 ` [PATCH V6 2/2] riscv: Add qspinlock support guoren
@ 2022-06-21 15:08 ` Waiman Long
  2022-06-22  1:51   ` Guo Ren
  2 siblings, 1 reply; 7+ messages in thread
From: Waiman Long @ 2022-06-21 15:08 UTC (permalink / raw)
  To: guoren, palmer, arnd, peterz, boqun.feng, Conor.Dooley,
	chenhuacai, kernel, r, shorne
  Cc: linux-riscv, linux-arch, linux-kernel, Guo Ren

On 6/21/22 10:49, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> Enable qspinlock and meet the requirements mentioned in commit
> a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of mixed-size
> atomics").
>
> RISC-V LR/SC pairs can provide either a strong or a weak
> forward-progress guarantee, depending on the micro-architecture. The
> RISC-V ISA spec lays out several restrictions that allow hardware to
> provide a strict forward-progress guarantee (RISC-V User ISA - 8.3
> Eventual Success of Store-Conditional Instructions):
> We restricted the length of LR/SC loops to fit within 64 contiguous
> instruction bytes in the base ISA to avoid undue restrictions on
> instruction cache and TLB size and associativity. Similarly, we

Does the 64 contiguous bytes need to be cacheline aligned?

Regards,
Longman



* Re: [PATCH V6 0/2] riscv: Support qspinlock with generic headers
  2022-06-21 15:08 ` [PATCH V6 0/2] riscv: Support qspinlock with generic headers Waiman Long
@ 2022-06-22  1:51   ` Guo Ren
  0 siblings, 0 replies; 7+ messages in thread
From: Guo Ren @ 2022-06-22  1:51 UTC (permalink / raw)
  To: Waiman Long
  Cc: Palmer Dabbelt, Arnd Bergmann, Peter Zijlstra, Boqun Feng,
	Conor.Dooley, Huacai Chen, Xuerui Wang, hev, Stafford Horne,
	linux-riscv, linux-arch, Linux Kernel Mailing List, Guo Ren

On Tue, Jun 21, 2022 at 11:08 PM Waiman Long <longman@redhat.com> wrote:
>
> On 6/21/22 10:49, guoren@kernel.org wrote:
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > Enable qspinlock and meet the requirements mentioned in commit
> > a8ad07e5240c9 ("asm-generic: qspinlock: Indicate the use of
> > mixed-size atomics").
> >
> > RISC-V LR/SC pairs can provide either a strong or a weak
> > forward-progress guarantee, depending on the micro-architecture. The
> > RISC-V ISA spec lays out several restrictions that allow hardware to
> > provide a strict forward-progress guarantee (RISC-V User ISA - 8.3
> > Eventual Success of Store-Conditional Instructions):
> > We restricted the length of LR/SC loops to fit within 64 contiguous
> > instruction bytes in the base ISA to avoid undue restrictions on
> > instruction cache and TLB size and associativity. Similarly, we
>
> Does the 64 contiguous bytes need to be cacheline aligned?
No, it need not be cacheline aligned. They are instructions, and the
IFU & issue units guarantee that; the programmer needn't worry about
it.

>
> Regards,
> Longman
>


-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/


* Re: [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h
  2022-06-21 14:49 ` [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h guoren
@ 2022-06-23  8:33   ` Arnd Bergmann
  2022-06-23  9:17     ` Guo Ren
  0 siblings, 1 reply; 7+ messages in thread
From: Arnd Bergmann @ 2022-06-23  8:33 UTC (permalink / raw)
  To: Guo Ren
  Cc: Palmer Dabbelt, Arnd Bergmann, Peter Zijlstra, Waiman Long,
	Boqun Feng, Conor Dooley, Huacai Chen, Xuerui Wang, Rui Wang,
	Stafford Horne, linux-riscv, linux-arch,
	Linux Kernel Mailing List, Guo Ren

On Tue, Jun 21, 2022 at 4:49 PM <guoren@kernel.org> wrote:
>
> From: Guo Ren <guoren@linux.alibaba.com>
>
> Move the ticket-lock implementation into tspinlock.h and let the
> generic spinlock.h select between qspinlock and ticket-lock via
> CONFIG_ARCH_USE_QUEUED_SPINLOCKS.
>
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: Arnd Bergmann <arnd@arndb.de>
> ---
>  include/asm-generic/spinlock.h        | 90 ++------------------------
>  include/asm-generic/spinlock_types.h  | 14 ++--
>  include/asm-generic/tspinlock.h       | 92 +++++++++++++++++++++++++++
>  include/asm-generic/tspinlock_types.h | 17 +++++

Unless someone has a very good argument for the "tspinlock" name, I would
prefer naming the new file ticket_spinlock.h. While the 'qspinlock' name has
an established meaning already, this is not the case for 'tspinlock', and
the longer name would be less confusing in my opinion.

> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> +#include <asm/qspinlock.h>
>  #include <asm/qrwlock.h>
> +#else
> +#include <asm-generic/tspinlock.h>
> +#endif

As Huacai Chen suggested in the other thread, the asm/qrwlock.h include should
be outside of the #ifdef here.

> diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
> index 8962bb730945..9875c1d058b3 100644
> --- a/include/asm-generic/spinlock_types.h
> +++ b/include/asm-generic/spinlock_types.h
> @@ -3,15 +3,11 @@
>  #ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
>  #define __ASM_GENERIC_SPINLOCK_TYPES_H
>
> -#include <linux/types.h>
> -typedef atomic_t arch_spinlock_t;
> -
> -/*
> - * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
> - * include.
> - */
> +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> +#include <asm-generic/qspinlock_types.h>
>  #include <asm/qrwlock_types.h>
> -
> -#define __ARCH_SPIN_LOCK_UNLOCKED      ATOMIC_INIT(0)
> +#else
> +#include <asm-generic/tspinlock_types.h>
> +#endif

I don't think this file warrants the extra indirection, since both
versions have only a
few lines. Just put it all into one file, and change the files that include
asm-generic/qspinlock_types.h to use asm-generic/spinlock_types.h instead.

      Arnd
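
A merged asm-generic/spinlock_types.h along the lines suggested here
might look roughly like the sketch below (illustrative only, not a
posted patch):

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
#define __ASM_GENERIC_SPINLOCK_TYPES_H

#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
#include <asm-generic/qspinlock_types.h>
#else
#include <linux/types.h>
typedef atomic_t arch_spinlock_t;

#define __ARCH_SPIN_LOCK_UNLOCKED	ATOMIC_INIT(0)
#endif

/*
 * qrwlock_types depends on arch_spinlock_t, so this include must come
 * after the type is defined in either configuration.
 */
#include <asm/qrwlock_types.h>

#endif /* __ASM_GENERIC_SPINLOCK_TYPES_H */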


* Re: [PATCH V6 1/2] asm-generic: spinlock: Move qspinlock & ticket-lock into generic spinlock.h
  2022-06-23  8:33   ` Arnd Bergmann
@ 2022-06-23  9:17     ` Guo Ren
  0 siblings, 0 replies; 7+ messages in thread
From: Guo Ren @ 2022-06-23  9:17 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Palmer Dabbelt, Peter Zijlstra, Waiman Long, Boqun Feng,
	Conor Dooley, Huacai Chen, Xuerui Wang, Rui Wang, Stafford Horne,
	linux-riscv, linux-arch, Linux Kernel Mailing List, Guo Ren

On Thu, Jun 23, 2022 at 4:33 PM Arnd Bergmann <arnd@arndb.de> wrote:
>
> On Tue, Jun 21, 2022 at 4:49 PM <guoren@kernel.org> wrote:
> >
> > From: Guo Ren <guoren@linux.alibaba.com>
> >
> > Move the ticket-lock implementation into tspinlock.h and let the
> > generic spinlock.h select between qspinlock and ticket-lock via
> > CONFIG_ARCH_USE_QUEUED_SPINLOCKS.
> >
> > Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> > Signed-off-by: Guo Ren <guoren@kernel.org>
> > Cc: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Cc: Arnd Bergmann <arnd@arndb.de>
> > ---
> >  include/asm-generic/spinlock.h        | 90 ++------------------------
> >  include/asm-generic/spinlock_types.h  | 14 ++--
> >  include/asm-generic/tspinlock.h       | 92 +++++++++++++++++++++++++++
> >  include/asm-generic/tspinlock_types.h | 17 +++++
>
> Unless someone has a very good argument for the "tspinlock" name, I would
> prefer naming the new file ticket_spinlock.h. While the 'qspinlock' name has
> an established meaning already, this is not the case for 'tspinlock', and
> the longer name would be less confusing in my opinion.
Okay. ticket_spinlock is also good to me.

>
> > +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> > +#include <asm/qspinlock.h>
> >  #include <asm/qrwlock.h>
> > +#else
> > +#include <asm-generic/tspinlock.h>
> > +#endif
>
> As Huacai Chen suggested in the other thread, the asm/qrwlock.h include should
> be outside of the #ifdef here.
Okay


>
> > diff --git a/include/asm-generic/spinlock_types.h b/include/asm-generic/spinlock_types.h
> > index 8962bb730945..9875c1d058b3 100644
> > --- a/include/asm-generic/spinlock_types.h
> > +++ b/include/asm-generic/spinlock_types.h
> > @@ -3,15 +3,11 @@
> >  #ifndef __ASM_GENERIC_SPINLOCK_TYPES_H
> >  #define __ASM_GENERIC_SPINLOCK_TYPES_H
> >
> > -#include <linux/types.h>
> > -typedef atomic_t arch_spinlock_t;
> > -
> > -/*
> > - * qrwlock_types depends on arch_spinlock_t, so we must typedef that before the
> > - * include.
> > - */
> > +#ifdef CONFIG_ARCH_USE_QUEUED_SPINLOCKS
> > +#include <asm-generic/qspinlock_types.h>
> >  #include <asm/qrwlock_types.h>
> > -
> > -#define __ARCH_SPIN_LOCK_UNLOCKED      ATOMIC_INIT(0)
> > +#else
> > +#include <asm-generic/tspinlock_types.h>
> > +#endif
>
> I don't think this file warrants the extra indirection, since both
> versions have only a
> few lines. Just put it all into one file, and change the files that include
> asm-generic/qspinlock_types.h to use asm-generic/spinlock_types.h instead.

Okay, I'll try that.


>
>       Arnd



--
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

