linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
@ 2018-03-05 18:24 Andrea Parri
  2018-03-07  2:02 ` Palmer Dabbelt
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Parri @ 2018-03-05 18:24 UTC (permalink / raw)
  To: Palmer Dabbelt, Albert Ou
  Cc: Daniel Lustig, Alan Stern, Will Deacon, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Paul McKenney, Akira Yokosawa, Ingo Molnar,
	Linus Torvalds, linux-riscv, linux-kernel, Andrea Parri

Current implementations map locking operations using .rl and .aq
annotations.  However, this mapping is unsound w.r.t. the kernel
memory consistency model (LKMM) [1]:

Referring to the "unlock-lock-read-ordering" test reported below,
Daniel wrote:

  "I think an RCpc interpretation of .aq and .rl would in fact
   allow the two normal loads in P1 to be reordered [...]

   The intuition would be that the amoswap.w.aq can forward from
   the amoswap.w.rl while that's still in the store buffer, and
   then the lw x3,0(x4) can also perform while the amoswap.w.rl
   is still in the store buffer, all before the lw x1,0(x2)
   executes.  That's not forbidden unless the amoswaps are RCsc,
   unless I'm missing something.

   Likewise even if the unlock()/lock() is between two stores.
   A control dependency might originate from the load part of
   the amoswap.w.aq, but there still would have to be something
   to ensure that this load part in fact performs after the store
   part of the amoswap.w.rl performs globally, and that's not
   automatic under RCpc."

Simulation of the RISC-V memory consistency model confirmed this
expectation.

In order to "synchronize" LKMM and RISC-V's implementation, this
commit strengthens the implementations of the locking operations
by replacing .rl and .aq with the use of ("lightweight") fences,
respectively "fence rw,  w" and "fence r , rw".

C unlock-lock-read-ordering

{}
/* s initially owned by P1 */

P0(int *x, int *y)
{
        WRITE_ONCE(*x, 1);
        smp_wmb();
        WRITE_ONCE(*y, 1);
}

P1(int *x, int *y, spinlock_t *s)
{
        int r0;
        int r1;

        r0 = READ_ONCE(*y);
        spin_unlock(s);
        spin_lock(s);
        r1 = READ_ONCE(*x);
}

exists (1:r0=1 /\ 1:r1=0)
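
For reference, the current .aq/.rl mapping renders P1 roughly as the
sequence below (a sketch: the loads use the registers from Daniel's
quote, the lock address and swap registers are illustrative, and the
spin_lock() retry loop is elided):

        lw            x1, 0(x2)     # r0 = READ_ONCE(*y)
        amoswap.w.rl  x0, x0, (x6)  # spin_unlock(s): releasing store of 0
        amoswap.w.aq  x5, x7, (x6)  # spin_lock(s): acquiring swap of 1
        lw            x3, 0(x4)     # r1 = READ_ONCE(*x)

Under an RCpc reading of .aq and .rl, the acquiring swap may read the
value forwarded from the releasing swap while the latter still sits in
the store buffer, so nothing prevents the two lw instructions from being
reordered and the "exists" clause from being satisfied.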

[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
    https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
    https://marc.info/?l=linux-kernel&m=151633436614259&w=2

Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <albert@sifive.com>
Cc: Daniel Lustig <dlustig@nvidia.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-riscv@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
---
 arch/riscv/include/asm/fence.h    | 12 ++++++++++++
 arch/riscv/include/asm/spinlock.h | 29 +++++++++++++++--------------
 2 files changed, 27 insertions(+), 14 deletions(-)
 create mode 100644 arch/riscv/include/asm/fence.h

diff --git a/arch/riscv/include/asm/fence.h b/arch/riscv/include/asm/fence.h
new file mode 100644
index 0000000000000..2b443a3a487f3
--- /dev/null
+++ b/arch/riscv/include/asm/fence.h
@@ -0,0 +1,12 @@
+#ifndef _ASM_RISCV_FENCE_H
+#define _ASM_RISCV_FENCE_H
+
+#ifdef CONFIG_SMP
+#define RISCV_ACQUIRE_BARRIER		"\tfence r , rw\n"
+#define RISCV_RELEASE_BARRIER		"\tfence rw,  w\n"
+#else
+#define RISCV_ACQUIRE_BARRIER
+#define RISCV_RELEASE_BARRIER
+#endif
+
+#endif	/* _ASM_RISCV_FENCE_H */
diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
index 2fd27e8ef1fd6..8eb26d1ede819 100644
--- a/arch/riscv/include/asm/spinlock.h
+++ b/arch/riscv/include/asm/spinlock.h
@@ -17,6 +17,7 @@
 
 #include <linux/kernel.h>
 #include <asm/current.h>
+#include <asm/fence.h>
 
 /*
  * Simple spin lock operations.  These provide no fairness guarantees.
@@ -28,10 +29,7 @@
 
 static inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-	__asm__ __volatile__ (
-		"amoswap.w.rl x0, x0, %0"
-		: "=A" (lock->lock)
-		:: "memory");
+	smp_store_release(&lock->lock, 0);
 }
 
 static inline int arch_spin_trylock(arch_spinlock_t *lock)
@@ -39,7 +37,8 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	int tmp = 1, busy;
 
 	__asm__ __volatile__ (
-		"amoswap.w.aq %0, %2, %1"
+		"	amoswap.w %0, %2, %1\n"
+		RISCV_ACQUIRE_BARRIER
 		: "=r" (busy), "+A" (lock->lock)
 		: "r" (tmp)
 		: "memory");
@@ -68,8 +67,9 @@ static inline void arch_read_lock(arch_rwlock_t *lock)
 		"1:	lr.w	%1, %0\n"
 		"	bltz	%1, 1b\n"
 		"	addi	%1, %1, 1\n"
-		"	sc.w.aq	%1, %1, %0\n"
+		"	sc.w	%1, %1, %0\n"
 		"	bnez	%1, 1b\n"
+		RISCV_ACQUIRE_BARRIER
 		: "+A" (lock->lock), "=&r" (tmp)
 		:: "memory");
 }
@@ -82,8 +82,9 @@ static inline void arch_write_lock(arch_rwlock_t *lock)
 		"1:	lr.w	%1, %0\n"
 		"	bnez	%1, 1b\n"
 		"	li	%1, -1\n"
-		"	sc.w.aq	%1, %1, %0\n"
+		"	sc.w	%1, %1, %0\n"
 		"	bnez	%1, 1b\n"
+		RISCV_ACQUIRE_BARRIER
 		: "+A" (lock->lock), "=&r" (tmp)
 		:: "memory");
 }
@@ -96,8 +97,9 @@ static inline int arch_read_trylock(arch_rwlock_t *lock)
 		"1:	lr.w	%1, %0\n"
 		"	bltz	%1, 1f\n"
 		"	addi	%1, %1, 1\n"
-		"	sc.w.aq	%1, %1, %0\n"
+		"	sc.w	%1, %1, %0\n"
 		"	bnez	%1, 1b\n"
+		RISCV_ACQUIRE_BARRIER
 		"1:\n"
 		: "+A" (lock->lock), "=&r" (busy)
 		:: "memory");
@@ -113,8 +115,9 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
 		"1:	lr.w	%1, %0\n"
 		"	bnez	%1, 1f\n"
 		"	li	%1, -1\n"
-		"	sc.w.aq	%1, %1, %0\n"
+		"	sc.w	%1, %1, %0\n"
 		"	bnez	%1, 1b\n"
+		RISCV_ACQUIRE_BARRIER
 		"1:\n"
 		: "+A" (lock->lock), "=&r" (busy)
 		:: "memory");
@@ -125,7 +128,8 @@ static inline int arch_write_trylock(arch_rwlock_t *lock)
 static inline void arch_read_unlock(arch_rwlock_t *lock)
 {
 	__asm__ __volatile__(
-		"amoadd.w.rl x0, %1, %0"
+		RISCV_RELEASE_BARRIER
+		"	amoadd.w x0, %1, %0\n"
 		: "+A" (lock->lock)
 		: "r" (-1)
 		: "memory");
@@ -133,10 +137,7 @@ static inline void arch_read_unlock(arch_rwlock_t *lock)
 
 static inline void arch_write_unlock(arch_rwlock_t *lock)
 {
-	__asm__ __volatile__ (
-		"amoswap.w.rl x0, x0, %0"
-		: "=A" (lock->lock)
-		:: "memory");
+	smp_store_release(&lock->lock, 0);
 }
 
 #endif /* _ASM_RISCV_SPINLOCK_H */
-- 
2.7.4


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-05 18:24 [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences Andrea Parri
@ 2018-03-07  2:02 ` Palmer Dabbelt
  2018-03-07 10:52   ` Andrea Parri
  0 siblings, 1 reply; 8+ messages in thread
From: Palmer Dabbelt @ 2018-03-07  2:02 UTC (permalink / raw)
  To: parri.andrea
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel, parri.andrea

On Mon, 05 Mar 2018 10:24:09 PST (-0800), parri.andrea@gmail.com wrote:
> Current implementations map locking operations using .rl and .aq
> annotations.  However, this mapping is unsound w.r.t. the kernel
> memory consistency model (LKMM) [1]:
>
> Referring to the "unlock-lock-read-ordering" test reported below,
> Daniel wrote:
>
>   "I think an RCpc interpretation of .aq and .rl would in fact
>    allow the two normal loads in P1 to be reordered [...]
>
>    The intuition would be that the amoswap.w.aq can forward from
>    the amoswap.w.rl while that's still in the store buffer, and
>    then the lw x3,0(x4) can also perform while the amoswap.w.rl
>    is still in the store buffer, all before the lw x1,0(x2)
>    executes.  That's not forbidden unless the amoswaps are RCsc,
>    unless I'm missing something.
>
>    Likewise even if the unlock()/lock() is between two stores.
>    A control dependency might originate from the load part of
>    the amoswap.w.aq, but there still would have to be something
>    to ensure that this load part in fact performs after the store
>    part of the amoswap.w.rl performs globally, and that's not
>    automatic under RCpc."
>
> Simulation of the RISC-V memory consistency model confirmed this
> expectation.
>
> In order to "synchronize" LKMM and RISC-V's implementation, this
> commit strengthens the implementations of the locking operations
> by replacing .rl and .aq with the use of ("lightweight") fences,
> respectively "fence rw,  w" and "fence r , rw".
>
> C unlock-lock-read-ordering
>
> {}
> /* s initially owned by P1 */
>
> P0(int *x, int *y)
> {
>         WRITE_ONCE(*x, 1);
>         smp_wmb();
>         WRITE_ONCE(*y, 1);
> }
>
> P1(int *x, int *y, spinlock_t *s)
> {
>         int r0;
>         int r1;
>
>         r0 = READ_ONCE(*y);
>         spin_unlock(s);
>         spin_lock(s);
>         r1 = READ_ONCE(*x);
> }
>
> exists (1:r0=1 /\ 1:r1=0)
>
> [1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
>     https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
>     https://marc.info/?l=linux-kernel&m=151633436614259&w=2
>
> Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Albert Ou <albert@sifive.com>
> Cc: Daniel Lustig <dlustig@nvidia.com>
> Cc: Alan Stern <stern@rowland.harvard.edu>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: David Howells <dhowells@redhat.com>
> Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> Cc: Luc Maranget <luc.maranget@inria.fr>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Akira Yokosawa <akiyks@gmail.com>
> Cc: Ingo Molnar <mingo@kernel.org>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: linux-riscv@lists.infradead.org
> Cc: linux-kernel@vger.kernel.org
> ---
>  arch/riscv/include/asm/fence.h    | 12 ++++++++++++
>  arch/riscv/include/asm/spinlock.h | 29 +++++++++++++++--------------
>  2 files changed, 27 insertions(+), 14 deletions(-)
>  create mode 100644 arch/riscv/include/asm/fence.h

Oh, sorry about this -- I thought I'd deleted all this code, but I guess I just 
wrote a patch and then forgot about it.  Here's my original patch, which I have 
marked as a WIP:

commit 39908f1f8b75ae88ce44dc77b8219a94078ad298
Author: Palmer Dabbelt <palmer@sifive.com>
Date:   Tue Dec 5 16:26:50 2017 -0800

    RISC-V: Use generic spin and rw locks

    This might not be exactly the right thing to do: we could use LR/SC to
    produce slightly better locks by rolling the tests into the LR/SC.  I'm
    going to defer that until I get a better handle on the new memory model
    and just be safe here: after some discussion I'm pretty sure the AMOs
    are good, and cmpxchg is safe (by being way too strong).

    Since we'd want to rewrite the spinlocks anyway so they queue, I don't
    see any reason to keep the old implementations around.

    Signed-off-by: Palmer Dabbelt <palmer@sifive.com>

diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
index 2fd27e8ef1fd..9b166ea81fe5 100644
--- a/arch/riscv/include/asm/spinlock.h
+++ b/arch/riscv/include/asm/spinlock.h
@@ -15,128 +15,7 @@
 #ifndef _ASM_RISCV_SPINLOCK_H
 #define _ASM_RISCV_SPINLOCK_H

-#include <linux/kernel.h>
-#include <asm/current.h>
-
-/*
- * Simple spin lock operations.  These provide no fairness guarantees.
- */
-
-/* FIXME: Replace this with a ticket lock, like MIPS. */
-
-#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
-
-static inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__asm__ __volatile__ (
-		"amoswap.w.rl x0, x0, %0"
-		: "=A" (lock->lock)
-		:: "memory");
-}
-
-static inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	int tmp = 1, busy;
-
-	__asm__ __volatile__ (
-		"amoswap.w.aq %0, %2, %1"
-		: "=r" (busy), "+A" (lock->lock)
-		: "r" (tmp)
-		: "memory");
-
-	return !busy;
-}
-
-static inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	while (1) {
-		if (arch_spin_is_locked(lock))
-			continue;
-
-		if (arch_spin_trylock(lock))
-			break;
-	}
-}
-
-/***********************************************************/
-
-static inline void arch_read_lock(arch_rwlock_t *lock)
-{
-	int tmp;
-
-	__asm__ __volatile__(
-		"1:	lr.w	%1, %0\n"
-		"	bltz	%1, 1b\n"
-		"	addi	%1, %1, 1\n"
-		"	sc.w.aq	%1, %1, %0\n"
-		"	bnez	%1, 1b\n"
-		: "+A" (lock->lock), "=&r" (tmp)
-		:: "memory");
-}
-
-static inline void arch_write_lock(arch_rwlock_t *lock)
-{
-	int tmp;
-
-	__asm__ __volatile__(
-		"1:	lr.w	%1, %0\n"
-		"	bnez	%1, 1b\n"
-		"	li	%1, -1\n"
-		"	sc.w.aq	%1, %1, %0\n"
-		"	bnez	%1, 1b\n"
-		: "+A" (lock->lock), "=&r" (tmp)
-		:: "memory");
-}
-
-static inline int arch_read_trylock(arch_rwlock_t *lock)
-{
-	int busy;
-
-	__asm__ __volatile__(
-		"1:	lr.w	%1, %0\n"
-		"	bltz	%1, 1f\n"
-		"	addi	%1, %1, 1\n"
-		"	sc.w.aq	%1, %1, %0\n"
-		"	bnez	%1, 1b\n"
-		"1:\n"
-		: "+A" (lock->lock), "=&r" (busy)
-		:: "memory");
-
-	return !busy;
-}
-
-static inline int arch_write_trylock(arch_rwlock_t *lock)
-{
-	int busy;
-
-	__asm__ __volatile__(
-		"1:	lr.w	%1, %0\n"
-		"	bnez	%1, 1f\n"
-		"	li	%1, -1\n"
-		"	sc.w.aq	%1, %1, %0\n"
-		"	bnez	%1, 1b\n"
-		"1:\n"
-		: "+A" (lock->lock), "=&r" (busy)
-		:: "memory");
-
-	return !busy;
-}
-
-static inline void arch_read_unlock(arch_rwlock_t *lock)
-{
-	__asm__ __volatile__(
-		"amoadd.w.rl x0, %1, %0"
-		: "+A" (lock->lock)
-		: "r" (-1)
-		: "memory");
-}
-
-static inline void arch_write_unlock(arch_rwlock_t *lock)
-{
-	__asm__ __volatile__ (
-		"amoswap.w.rl x0, x0, %0"
-		: "=A" (lock->lock)
-		:: "memory");
-}
+#include <asm-generic/qspinlock.h>
+#include <asm-generic/qrwlock.h>

 #endif /* _ASM_RISCV_SPINLOCK_H */


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-07  2:02 ` Palmer Dabbelt
@ 2018-03-07 10:52   ` Andrea Parri
  2018-03-07 18:33     ` Palmer Dabbelt
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Parri @ 2018-03-07 10:52 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Tue, Mar 06, 2018 at 06:02:28PM -0800, Palmer Dabbelt wrote:
> On Mon, 05 Mar 2018 10:24:09 PST (-0800), parri.andrea@gmail.com wrote:
> >Current implementations map locking operations using .rl and .aq
> >annotations.  However, this mapping is unsound w.r.t. the kernel
> >memory consistency model (LKMM) [1]:
> >
> >Referring to the "unlock-lock-read-ordering" test reported below,
> >Daniel wrote:
> >
> >  "I think an RCpc interpretation of .aq and .rl would in fact
> >   allow the two normal loads in P1 to be reordered [...]
> >
> >   The intuition would be that the amoswap.w.aq can forward from
> >   the amoswap.w.rl while that's still in the store buffer, and
> >   then the lw x3,0(x4) can also perform while the amoswap.w.rl
> >   is still in the store buffer, all before the lw x1,0(x2)
> >   executes.  That's not forbidden unless the amoswaps are RCsc,
> >   unless I'm missing something.
> >
> >   Likewise even if the unlock()/lock() is between two stores.
> >   A control dependency might originate from the load part of
> >   the amoswap.w.aq, but there still would have to be something
> >   to ensure that this load part in fact performs after the store
> >   part of the amoswap.w.rl performs globally, and that's not
> >   automatic under RCpc."
> >
> >Simulation of the RISC-V memory consistency model confirmed this
> >expectation.
> >
> >In order to "synchronize" LKMM and RISC-V's implementation, this
> >commit strengthens the implementations of the locking operations
> >by replacing .rl and .aq with the use of ("lightweight") fences,
> >respectively "fence rw,  w" and "fence r , rw".
> >
> >C unlock-lock-read-ordering
> >
> >{}
> >/* s initially owned by P1 */
> >
> >P0(int *x, int *y)
> >{
> >        WRITE_ONCE(*x, 1);
> >        smp_wmb();
> >        WRITE_ONCE(*y, 1);
> >}
> >
> >P1(int *x, int *y, spinlock_t *s)
> >{
> >        int r0;
> >        int r1;
> >
> >        r0 = READ_ONCE(*y);
> >        spin_unlock(s);
> >        spin_lock(s);
> >        r1 = READ_ONCE(*x);
> >}
> >
> >exists (1:r0=1 /\ 1:r1=0)
> >
> >[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
> >    https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
> >    https://marc.info/?l=linux-kernel&m=151633436614259&w=2
> >
> >Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> >Cc: Palmer Dabbelt <palmer@sifive.com>
> >Cc: Albert Ou <albert@sifive.com>
> >Cc: Daniel Lustig <dlustig@nvidia.com>
> >Cc: Alan Stern <stern@rowland.harvard.edu>
> >Cc: Will Deacon <will.deacon@arm.com>
> >Cc: Peter Zijlstra <peterz@infradead.org>
> >Cc: Boqun Feng <boqun.feng@gmail.com>
> >Cc: Nicholas Piggin <npiggin@gmail.com>
> >Cc: David Howells <dhowells@redhat.com>
> >Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> >Cc: Luc Maranget <luc.maranget@inria.fr>
> >Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> >Cc: Akira Yokosawa <akiyks@gmail.com>
> >Cc: Ingo Molnar <mingo@kernel.org>
> >Cc: Linus Torvalds <torvalds@linux-foundation.org>
> >Cc: linux-riscv@lists.infradead.org
> >Cc: linux-kernel@vger.kernel.org
> >---
> > arch/riscv/include/asm/fence.h    | 12 ++++++++++++
> > arch/riscv/include/asm/spinlock.h | 29 +++++++++++++++--------------
> > 2 files changed, 27 insertions(+), 14 deletions(-)
> > create mode 100644 arch/riscv/include/asm/fence.h
> 
> Oh, sorry about this -- I thought I'd deleted all this code, but I guess I
> just wrote a patch and then forgot about it.  Here's my original patch,
> which I have marked as a WIP:

No problem.


> 
> commit 39908f1f8b75ae88ce44dc77b8219a94078ad298
> Author: Palmer Dabbelt <palmer@sifive.com>
> Date:   Tue Dec 5 16:26:50 2017 -0800
> 
>    RISC-V: Use generic spin and rw locks
> 
>    This might not be exactly the right thing to do: we could use LR/SC to
>    produce slightly better locks by rolling the tests into the LR/SC.  I'm
>    going to defer that until I get a better handle on the new memory model
>    and just be safe here: after some discussion I'm pretty sure the AMOs
>    are good, and cmpxchg is safe (by being way too strong).

I'm pretty sure you lost me (and a few other people) here.

IIUC, this says: "what we've been discussing within the last few weeks is
going to change", but not much else...

Or am I misunderstanding? You mean cmpxchg, ... as in my patch 2/2?


> 
>    Since we'd want to rewrite the spinlocks anyway so they queue, I don't
>    see any reason to keep the old implementations around.

Keep in mind that queued locks were written and optimized for x86.  arm64
only recently adopted qrwlocks:

  087133ac90763cd339b6b67f2998f87dcc136c52
  ("locking/qrwlock, arm64: Move rwlock implementation over to qrwlocks")

This certainly needs further testing and reviewing. (Nit: your patch does
not compile on any of the "riscv" branches I'm currently tracking...)
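
FWIW, pulling in the generic queued locks typically also needs some
Kconfig and header glue; a rough sketch (not tested on riscv) would be
something like:

  config RISCV
          select ARCH_USE_QUEUED_SPINLOCKS
          select ARCH_USE_QUEUED_RWLOCKS

plus an asm/spinlock_types.h including asm-generic/qspinlock_types.h and
asm-generic/qrwlock_types.h, and the corresponding generic-y entries
(e.g., mcs_spinlock.h) in the arch Kbuild.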

  Andrea


> 
>    Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
> 
> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> index 2fd27e8ef1fd..9b166ea81fe5 100644
> --- a/arch/riscv/include/asm/spinlock.h
> +++ b/arch/riscv/include/asm/spinlock.h
> @@ -15,128 +15,7 @@
> #ifndef _ASM_RISCV_SPINLOCK_H
> #define _ASM_RISCV_SPINLOCK_H
> 
> -#include <linux/kernel.h>
> -#include <asm/current.h>
> -
> -/*
> - * Simple spin lock operations.  These provide no fairness guarantees.
> - */
> -
> -/* FIXME: Replace this with a ticket lock, like MIPS. */
> -
> -#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
> -
> -static inline void arch_spin_unlock(arch_spinlock_t *lock)
> -{
> -	__asm__ __volatile__ (
> -		"amoswap.w.rl x0, x0, %0"
> -		: "=A" (lock->lock)
> -		:: "memory");
> -}
> -
> -static inline int arch_spin_trylock(arch_spinlock_t *lock)
> -{
> -	int tmp = 1, busy;
> -
> -	__asm__ __volatile__ (
> -		"amoswap.w.aq %0, %2, %1"
> -		: "=r" (busy), "+A" (lock->lock)
> -		: "r" (tmp)
> -		: "memory");
> -
> -	return !busy;
> -}
> -
> -static inline void arch_spin_lock(arch_spinlock_t *lock)
> -{
> -	while (1) {
> -		if (arch_spin_is_locked(lock))
> -			continue;
> -
> -		if (arch_spin_trylock(lock))
> -			break;
> -	}
> -}
> -
> -/***********************************************************/
> -
> -static inline void arch_read_lock(arch_rwlock_t *lock)
> -{
> -	int tmp;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bltz	%1, 1b\n"
> -		"	addi	%1, %1, 1\n"
> -		"	sc.w.aq	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		: "+A" (lock->lock), "=&r" (tmp)
> -		:: "memory");
> -}
> -
> -static inline void arch_write_lock(arch_rwlock_t *lock)
> -{
> -	int tmp;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		"	li	%1, -1\n"
> -		"	sc.w.aq	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		: "+A" (lock->lock), "=&r" (tmp)
> -		:: "memory");
> -}
> -
> -static inline int arch_read_trylock(arch_rwlock_t *lock)
> -{
> -	int busy;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bltz	%1, 1f\n"
> -		"	addi	%1, %1, 1\n"
> -		"	sc.w.aq	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		"1:\n"
> -		: "+A" (lock->lock), "=&r" (busy)
> -		:: "memory");
> -
> -	return !busy;
> -}
> -
> -static inline int arch_write_trylock(arch_rwlock_t *lock)
> -{
> -	int busy;
> -
> -	__asm__ __volatile__(
> -		"1:	lr.w	%1, %0\n"
> -		"	bnez	%1, 1f\n"
> -		"	li	%1, -1\n"
> -		"	sc.w.aq	%1, %1, %0\n"
> -		"	bnez	%1, 1b\n"
> -		"1:\n"
> -		: "+A" (lock->lock), "=&r" (busy)
> -		:: "memory");
> -
> -	return !busy;
> -}
> -
> -static inline void arch_read_unlock(arch_rwlock_t *lock)
> -{
> -	__asm__ __volatile__(
> -		"amoadd.w.rl x0, %1, %0"
> -		: "+A" (lock->lock)
> -		: "r" (-1)
> -		: "memory");
> -}
> -
> -static inline void arch_write_unlock(arch_rwlock_t *lock)
> -{
> -	__asm__ __volatile__ (
> -		"amoswap.w.rl x0, x0, %0"
> -		: "=A" (lock->lock)
> -		:: "memory");
> -}
> +#include <asm-generic/qspinlock.h>
> +#include <asm-generic/qrwlock.h>
> 
> #endif /* _ASM_RISCV_SPINLOCK_H */
> 


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-07 10:52   ` Andrea Parri
@ 2018-03-07 18:33     ` Palmer Dabbelt
  2018-03-08 21:03       ` Andrea Parri
  0 siblings, 1 reply; 8+ messages in thread
From: Palmer Dabbelt @ 2018-03-07 18:33 UTC (permalink / raw)
  To: parri.andrea
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Wed, 07 Mar 2018 02:52:42 PST (-0800), parri.andrea@gmail.com wrote:
> On Tue, Mar 06, 2018 at 06:02:28PM -0800, Palmer Dabbelt wrote:
>> On Mon, 05 Mar 2018 10:24:09 PST (-0800), parri.andrea@gmail.com wrote:
>> >Current implementations map locking operations using .rl and .aq
>> >annotations.  However, this mapping is unsound w.r.t. the kernel
>> >memory consistency model (LKMM) [1]:
>> >
>> >Referring to the "unlock-lock-read-ordering" test reported below,
>> >Daniel wrote:
>> >
>> >  "I think an RCpc interpretation of .aq and .rl would in fact
>> >   allow the two normal loads in P1 to be reordered [...]
>> >
>> >   The intuition would be that the amoswap.w.aq can forward from
>> >   the amoswap.w.rl while that's still in the store buffer, and
>> >   then the lw x3,0(x4) can also perform while the amoswap.w.rl
>> >   is still in the store buffer, all before the lw x1,0(x2)
>> >   executes.  That's not forbidden unless the amoswaps are RCsc,
>> >   unless I'm missing something.
>> >
>> >   Likewise even if the unlock()/lock() is between two stores.
>> >   A control dependency might originate from the load part of
>> >   the amoswap.w.aq, but there still would have to be something
>> >   to ensure that this load part in fact performs after the store
>> >   part of the amoswap.w.rl performs globally, and that's not
>> >   automatic under RCpc."
>> >
>> >Simulation of the RISC-V memory consistency model confirmed this
>> >expectation.
>> >
>> >In order to "synchronize" LKMM and RISC-V's implementation, this
>> >commit strengthens the implementations of the locking operations
>> >by replacing .rl and .aq with the use of ("lightweight") fences,
>> >respectively "fence rw,  w" and "fence r , rw".
>> >
>> >C unlock-lock-read-ordering
>> >
>> >{}
>> >/* s initially owned by P1 */
>> >
>> >P0(int *x, int *y)
>> >{
>> >        WRITE_ONCE(*x, 1);
>> >        smp_wmb();
>> >        WRITE_ONCE(*y, 1);
>> >}
>> >
>> >P1(int *x, int *y, spinlock_t *s)
>> >{
>> >        int r0;
>> >        int r1;
>> >
>> >        r0 = READ_ONCE(*y);
>> >        spin_unlock(s);
>> >        spin_lock(s);
>> >        r1 = READ_ONCE(*x);
>> >}
>> >
>> >exists (1:r0=1 /\ 1:r1=0)
>> >
>> >[1] https://marc.info/?l=linux-kernel&m=151930201102853&w=2
>> >    https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/hKywNHBkAXM
>> >    https://marc.info/?l=linux-kernel&m=151633436614259&w=2
>> >
>> >Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
>> >Cc: Palmer Dabbelt <palmer@sifive.com>
>> >Cc: Albert Ou <albert@sifive.com>
>> >Cc: Daniel Lustig <dlustig@nvidia.com>
>> >Cc: Alan Stern <stern@rowland.harvard.edu>
>> >Cc: Will Deacon <will.deacon@arm.com>
>> >Cc: Peter Zijlstra <peterz@infradead.org>
>> >Cc: Boqun Feng <boqun.feng@gmail.com>
>> >Cc: Nicholas Piggin <npiggin@gmail.com>
>> >Cc: David Howells <dhowells@redhat.com>
>> >Cc: Jade Alglave <j.alglave@ucl.ac.uk>
>> >Cc: Luc Maranget <luc.maranget@inria.fr>
>> >Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
>> >Cc: Akira Yokosawa <akiyks@gmail.com>
>> >Cc: Ingo Molnar <mingo@kernel.org>
>> >Cc: Linus Torvalds <torvalds@linux-foundation.org>
>> >Cc: linux-riscv@lists.infradead.org
>> >Cc: linux-kernel@vger.kernel.org
>> >---
>> > arch/riscv/include/asm/fence.h    | 12 ++++++++++++
>> > arch/riscv/include/asm/spinlock.h | 29 +++++++++++++++--------------
>> > 2 files changed, 27 insertions(+), 14 deletions(-)
>> > create mode 100644 arch/riscv/include/asm/fence.h
>>
>> Oh, sorry about this -- I thought I'd deleted all this code, but I guess I
>> just wrote a patch and then forgot about it.  Here's my original patch,
>> which I have marked as a WIP:
>
> No problem.
>
>
>>
>> commit 39908f1f8b75ae88ce44dc77b8219a94078ad298
>> Author: Palmer Dabbelt <palmer@sifive.com>
>> Date:   Tue Dec 5 16:26:50 2017 -0800
>>
>>    RISC-V: Use generic spin and rw locks
>>
>>    This might not be exactly the right thing to do: we could use LR/SC to
>>    produce slightly better locks by rolling the tests into the LR/SC.  I'm
>>    going to defer that until I get a better handle on the new memory model
>>    and just be safe here: after some discussion I'm pretty sure the AMOs
>>    are good, and cmpxchg is safe (by being way too strong).
>
> I'm pretty sure you lost me (and a few other people) here.
>
> IIUC, this says: "what we've been discussing within the last few weeks is
> going to change", but not much else...
>
> Or am I misunderstanding? You mean cmpxchg, ... as in my patch 2/2?

Well, it was what we were discussing for the past few weeks before Dec 5th (as 
that's when I wrote the patch).  It's more of a note for myself than a proper 
commit message, and I've also forgotten what I was talking about.

>>
>>    Since we'd want to rewrite the spinlocks anyway so they queue, I don't
>>    see any reason to keep the old implementations around.
>
> Keep in mind that queued locks were written and optimized for x86.  arm64
> only recently adopted qrwlocks:
>
>   087133ac90763cd339b6b67f2998f87dcc136c52
>   ("locking/qrwlock, arm64: Move rwlock implementation over to qrwlocks")
>
> This certainly needs further testing and reviewing. (Nit: your patch does
> not compile on any of the "riscv" branches I'm currently tracking...)

That's probably why it was just floating around and not sent out :).  I went 
and talked to Andrew and we think there's actually a reasonable argument for 
some spinlocks that are similar to what we currently have.  The ISA manual 
describes some canonical spinlock code, which has the advantage of being 
smaller and being defined as a target for microarchitectural pattern matching.
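
For reference, the manual's sample looks roughly like this (from memory,
so treat it as a sketch rather than the exact text):

        li            t0, 1          # initialize the swap value
    again:
        lw            t1, (a0)       # check whether the lock is held
        bnez          t1, again      # retry if held
        amoswap.w.aq  t1, t0, (a0)   # attempt to acquire the lock
        bnez          t1, again      # retry if the lock was already held
        # ... critical section ...
        amoswap.w.rl  x0, x0, (a0)   # release the lock by storing 0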

I'm going to go produce a new set of spinlocks, I think it'll be a bit more 
coherent then.

I'm keeping your other patch in my queue for now, it generally looks good but I 
haven't looked closely yet.

Thanks!

>
>   Andrea
>
>
>>
>>    Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
>>
>> diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
>> index 2fd27e8ef1fd..9b166ea81fe5 100644
>> --- a/arch/riscv/include/asm/spinlock.h
>> +++ b/arch/riscv/include/asm/spinlock.h
>> @@ -15,128 +15,7 @@
>> #ifndef _ASM_RISCV_SPINLOCK_H
>> #define _ASM_RISCV_SPINLOCK_H
>>
>> -#include <linux/kernel.h>
>> -#include <asm/current.h>
>> -
>> -/*
>> - * Simple spin lock operations.  These provide no fairness guarantees.
>> - */
>> -
>> -/* FIXME: Replace this with a ticket lock, like MIPS. */
>> -
>> -#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
>> -
>> -static inline void arch_spin_unlock(arch_spinlock_t *lock)
>> -{
>> -	__asm__ __volatile__ (
>> -		"amoswap.w.rl x0, x0, %0"
>> -		: "=A" (lock->lock)
>> -		:: "memory");
>> -}
>> -
>> -static inline int arch_spin_trylock(arch_spinlock_t *lock)
>> -{
>> -	int tmp = 1, busy;
>> -
>> -	__asm__ __volatile__ (
>> -		"amoswap.w.aq %0, %2, %1"
>> -		: "=r" (busy), "+A" (lock->lock)
>> -		: "r" (tmp)
>> -		: "memory");
>> -
>> -	return !busy;
>> -}
>> -
>> -static inline void arch_spin_lock(arch_spinlock_t *lock)
>> -{
>> -	while (1) {
>> -		if (arch_spin_is_locked(lock))
>> -			continue;
>> -
>> -		if (arch_spin_trylock(lock))
>> -			break;
>> -	}
>> -}
>> -
>> -/***********************************************************/
>> -
>> -static inline void arch_read_lock(arch_rwlock_t *lock)
>> -{
>> -	int tmp;
>> -
>> -	__asm__ __volatile__(
>> -		"1:	lr.w	%1, %0\n"
>> -		"	bltz	%1, 1b\n"
>> -		"	addi	%1, %1, 1\n"
>> -		"	sc.w.aq	%1, %1, %0\n"
>> -		"	bnez	%1, 1b\n"
>> -		: "+A" (lock->lock), "=&r" (tmp)
>> -		:: "memory");
>> -}
>> -
>> -static inline void arch_write_lock(arch_rwlock_t *lock)
>> -{
>> -	int tmp;
>> -
>> -	__asm__ __volatile__(
>> -		"1:	lr.w	%1, %0\n"
>> -		"	bnez	%1, 1b\n"
>> -		"	li	%1, -1\n"
>> -		"	sc.w.aq	%1, %1, %0\n"
>> -		"	bnez	%1, 1b\n"
>> -		: "+A" (lock->lock), "=&r" (tmp)
>> -		:: "memory");
>> -}
>> -
>> -static inline int arch_read_trylock(arch_rwlock_t *lock)
>> -{
>> -	int busy;
>> -
>> -	__asm__ __volatile__(
>> -		"1:	lr.w	%1, %0\n"
>> -		"	bltz	%1, 1f\n"
>> -		"	addi	%1, %1, 1\n"
>> -		"	sc.w.aq	%1, %1, %0\n"
>> -		"	bnez	%1, 1b\n"
>> -		"1:\n"
>> -		: "+A" (lock->lock), "=&r" (busy)
>> -		:: "memory");
>> -
>> -	return !busy;
>> -}
>> -
>> -static inline int arch_write_trylock(arch_rwlock_t *lock)
>> -{
>> -	int busy;
>> -
>> -	__asm__ __volatile__(
>> -		"1:	lr.w	%1, %0\n"
>> -		"	bnez	%1, 1f\n"
>> -		"	li	%1, -1\n"
>> -		"	sc.w.aq	%1, %1, %0\n"
>> -		"	bnez	%1, 1b\n"
>> -		"1:\n"
>> -		: "+A" (lock->lock), "=&r" (busy)
>> -		:: "memory");
>> -
>> -	return !busy;
>> -}
>> -
>> -static inline void arch_read_unlock(arch_rwlock_t *lock)
>> -{
>> -	__asm__ __volatile__(
>> -		"amoadd.w.rl x0, %1, %0"
>> -		: "+A" (lock->lock)
>> -		: "r" (-1)
>> -		: "memory");
>> -}
>> -
>> -static inline void arch_write_unlock(arch_rwlock_t *lock)
>> -{
>> -	__asm__ __volatile__ (
>> -		"amoswap.w.rl x0, x0, %0"
>> -		: "=A" (lock->lock)
>> -		:: "memory");
>> -}
>> +#include <asm-generic/qspinlock.h>
>> +#include <asm-generic/qrwlock.h>
>>
>> #endif /* _ASM_RISCV_SPINLOCK_H */
>>


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-07 18:33     ` Palmer Dabbelt
@ 2018-03-08 21:03       ` Andrea Parri
  2018-03-08 22:11         ` Palmer Dabbelt
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Parri @ 2018-03-08 21:03 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Wed, Mar 07, 2018 at 10:33:49AM -0800, Palmer Dabbelt wrote:

[...]

> I'm going to go produce a new set of spinlocks, I think it'll be a bit more
> coherent then.
> 
> I'm keeping your other patch in my queue for now, it generally looks good
> but I haven't looked closely yet.

Patches 1 and 2 address the same issue ("release-to-acquire"); this is also
expressed, more or less explicitly, in the corresponding commit messages:
it might make sense to "queue" them together, and to build the new locks
on top of these (even if this meant "rewrite all of/a large portion of
spinlock.h"...).

  Andrea


> 
> Thanks!
> 
> >
> >  Andrea
> >
> >
> >>
> >>   Signed-off-by: Palmer Dabbelt <palmer@sifive.com>
> >>
> >>diff --git a/arch/riscv/include/asm/spinlock.h b/arch/riscv/include/asm/spinlock.h
> >>index 2fd27e8ef1fd..9b166ea81fe5 100644
> >>--- a/arch/riscv/include/asm/spinlock.h
> >>+++ b/arch/riscv/include/asm/spinlock.h
> >>@@ -15,128 +15,7 @@
> >>#ifndef _ASM_RISCV_SPINLOCK_H
> >>#define _ASM_RISCV_SPINLOCK_H
> >>
> >>-#include <linux/kernel.h>
> >>-#include <asm/current.h>
> >>-
> >>-/*
> >>- * Simple spin lock operations.  These provide no fairness guarantees.
> >>- */
> >>-
> >>-/* FIXME: Replace this with a ticket lock, like MIPS. */
> >>-
> >>-#define arch_spin_is_locked(x)	(READ_ONCE((x)->lock) != 0)
> >>-
> >>-static inline void arch_spin_unlock(arch_spinlock_t *lock)
> >>-{
> >>-	__asm__ __volatile__ (
> >>-		"amoswap.w.rl x0, x0, %0"
> >>-		: "=A" (lock->lock)
> >>-		:: "memory");
> >>-}
> >>-
> >>-static inline int arch_spin_trylock(arch_spinlock_t *lock)
> >>-{
> >>-	int tmp = 1, busy;
> >>-
> >>-	__asm__ __volatile__ (
> >>-		"amoswap.w.aq %0, %2, %1"
> >>-		: "=r" (busy), "+A" (lock->lock)
> >>-		: "r" (tmp)
> >>-		: "memory");
> >>-
> >>-	return !busy;
> >>-}
> >>-
> >>-static inline void arch_spin_lock(arch_spinlock_t *lock)
> >>-{
> >>-	while (1) {
> >>-		if (arch_spin_is_locked(lock))
> >>-			continue;
> >>-
> >>-		if (arch_spin_trylock(lock))
> >>-			break;
> >>-	}
> >>-}
> >>-
> >>-/***********************************************************/
> >>-
> >>-static inline void arch_read_lock(arch_rwlock_t *lock)
> >>-{
> >>-	int tmp;
> >>-
> >>-	__asm__ __volatile__(
> >>-		"1:	lr.w	%1, %0\n"
> >>-		"	bltz	%1, 1b\n"
> >>-		"	addi	%1, %1, 1\n"
> >>-		"	sc.w.aq	%1, %1, %0\n"
> >>-		"	bnez	%1, 1b\n"
> >>-		: "+A" (lock->lock), "=&r" (tmp)
> >>-		:: "memory");
> >>-}
> >>-
> >>-static inline void arch_write_lock(arch_rwlock_t *lock)
> >>-{
> >>-	int tmp;
> >>-
> >>-	__asm__ __volatile__(
> >>-		"1:	lr.w	%1, %0\n"
> >>-		"	bnez	%1, 1b\n"
> >>-		"	li	%1, -1\n"
> >>-		"	sc.w.aq	%1, %1, %0\n"
> >>-		"	bnez	%1, 1b\n"
> >>-		: "+A" (lock->lock), "=&r" (tmp)
> >>-		:: "memory");
> >>-}
> >>-
> >>-static inline int arch_read_trylock(arch_rwlock_t *lock)
> >>-{
> >>-	int busy;
> >>-
> >>-	__asm__ __volatile__(
> >>-		"1:	lr.w	%1, %0\n"
> >>-		"	bltz	%1, 1f\n"
> >>-		"	addi	%1, %1, 1\n"
> >>-		"	sc.w.aq	%1, %1, %0\n"
> >>-		"	bnez	%1, 1b\n"
> >>-		"1:\n"
> >>-		: "+A" (lock->lock), "=&r" (busy)
> >>-		:: "memory");
> >>-
> >>-	return !busy;
> >>-}
> >>-
> >>-static inline int arch_write_trylock(arch_rwlock_t *lock)
> >>-{
> >>-	int busy;
> >>-
> >>-	__asm__ __volatile__(
> >>-		"1:	lr.w	%1, %0\n"
> >>-		"	bnez	%1, 1f\n"
> >>-		"	li	%1, -1\n"
> >>-		"	sc.w.aq	%1, %1, %0\n"
> >>-		"	bnez	%1, 1b\n"
> >>-		"1:\n"
> >>-		: "+A" (lock->lock), "=&r" (busy)
> >>-		:: "memory");
> >>-
> >>-	return !busy;
> >>-}
> >>-
> >>-static inline void arch_read_unlock(arch_rwlock_t *lock)
> >>-{
> >>-	__asm__ __volatile__(
> >>-		"amoadd.w.rl x0, %1, %0"
> >>-		: "+A" (lock->lock)
> >>-		: "r" (-1)
> >>-		: "memory");
> >>-}
> >>-
> >>-static inline void arch_write_unlock(arch_rwlock_t *lock)
> >>-{
> >>-	__asm__ __volatile__ (
> >>-		"amoswap.w.rl x0, x0, %0"
> >>-		: "=A" (lock->lock)
> >>-		:: "memory");
> >>-}
> >>+#include <asm-generic/qspinlock.h>
> >>+#include <asm-generic/qrwlock.h>
> >>
> >>#endif /* _ASM_RISCV_SPINLOCK_H */
> >>


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-08 21:03       ` Andrea Parri
@ 2018-03-08 22:11         ` Palmer Dabbelt
  2018-03-09 12:16           ` Andrea Parri
  0 siblings, 1 reply; 8+ messages in thread
From: Palmer Dabbelt @ 2018-03-08 22:11 UTC (permalink / raw)
  To: parri.andrea
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Thu, 08 Mar 2018 13:03:03 PST (-0800), parri.andrea@gmail.com wrote:
> On Wed, Mar 07, 2018 at 10:33:49AM -0800, Palmer Dabbelt wrote:
>
> [...]
>
>> I'm going to go produce a new set of spinlocks, I think it'll be a bit more
>> coherent then.
>>
>> I'm keeping your other patch in my queue for now, it generally looks good
>> but I haven't looked closely yet.
>
> Patches 1 and 2 address the same issue ("release-to-acquire"); this is also
> expressed, more or less explicitly, in the corresponding commit messages:
> it might make sense to "queue" them together, and to build the new locks
> on top of these (even if this meant "rewrite all of/a large portion of
> spinlock.h"...).

I agree.  IIRC you had a fixup to the first pair of patches, can you submit a 
v2?


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-08 22:11         ` Palmer Dabbelt
@ 2018-03-09 12:16           ` Andrea Parri
  2018-03-09 18:07             ` Palmer Dabbelt
  0 siblings, 1 reply; 8+ messages in thread
From: Andrea Parri @ 2018-03-09 12:16 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Thu, Mar 08, 2018 at 02:11:12PM -0800, Palmer Dabbelt wrote:
> On Thu, 08 Mar 2018 13:03:03 PST (-0800), parri.andrea@gmail.com wrote:
> >On Wed, Mar 07, 2018 at 10:33:49AM -0800, Palmer Dabbelt wrote:
> >
> >[...]
> >
> >>I'm going to go produce a new set of spinlocks, I think it'll be a bit more
> >>coherent then.
> >>
> >>I'm keeping your other patch in my queue for now, it generally looks good
> >>but I haven't looked closely yet.
> >
> >Patches 1 and 2 address the same issue ("release-to-acquire"); this is also
> >expressed, more or less explicitly, in the corresponding commit messages:
> >it might make sense to "queue" them together, and to build the new locks
> >on top of these (even if this meant "rewrite all of/a large portion of
> >spinlock.h"...).
> 
> I agree.  IIRC you had a fixup to the first pair of patches, can you submit
> a v2?

I've just sent it (with updated changelog).

  Andrea


* Re: [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences
  2018-03-09 12:16           ` Andrea Parri
@ 2018-03-09 18:07             ` Palmer Dabbelt
  0 siblings, 0 replies; 8+ messages in thread
From: Palmer Dabbelt @ 2018-03-09 18:07 UTC (permalink / raw)
  To: parri.andrea
  Cc: albert, Daniel Lustig, stern, Will Deacon, peterz, boqun.feng,
	npiggin, dhowells, j.alglave, luc.maranget, paulmck, akiyks,
	mingo, Linus Torvalds, linux-riscv, linux-kernel

On Fri, 09 Mar 2018 04:16:43 PST (-0800), parri.andrea@gmail.com wrote:
> On Thu, Mar 08, 2018 at 02:11:12PM -0800, Palmer Dabbelt wrote:
>> On Thu, 08 Mar 2018 13:03:03 PST (-0800), parri.andrea@gmail.com wrote:
>> >On Wed, Mar 07, 2018 at 10:33:49AM -0800, Palmer Dabbelt wrote:
>> >
>> >[...]
>> >
>> >>I'm going to go produce a new set of spinlocks, I think it'll be a bit more
>> >>coherent then.
>> >>
>> >>I'm keeping your other patch in my queue for now, it generally looks good
>> >>but I haven't looked closely yet.
>> >
>> >Patches 1 and 2 address the same issue ("release-to-acquire"); this is also
>> >expressed, more or less explicitly, in the corresponding commit messages:
>> >it might make sense to "queue" them together, and to build the new locks
>> >on top of these (even if this meant "rewrite all of/a large portion of
>> >spinlock.h"...).
>>
>> I agree.  IIRC you had a fixup to the first pair of patches, can you submit
>> a v2?
>
> I've just sent it (with updated changelog).

Thanks!


Thread overview: 8+ messages
2018-03-05 18:24 [RFC PATCH 1/2] riscv/spinlock: Strengthen implementations with fences Andrea Parri
2018-03-07  2:02 ` Palmer Dabbelt
2018-03-07 10:52   ` Andrea Parri
2018-03-07 18:33     ` Palmer Dabbelt
2018-03-08 21:03       ` Andrea Parri
2018-03-08 22:11         ` Palmer Dabbelt
2018-03-09 12:16           ` Andrea Parri
2018-03-09 18:07             ` Palmer Dabbelt
