linux-kernel.vger.kernel.org archive mirror
* [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
@ 2016-06-06 11:42 Michael Ellerman
  2016-06-06 11:56 ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Ellerman @ 2016-06-06 11:42 UTC (permalink / raw)
  To: linuxppc-dev, Linux Kernel Mailing List
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Paul E. McKenney, Peter Zijlstra, Will Deacon, Boqun Feng

From: Boqun Feng <boqun.feng@gmail.com>

There is an ordering issue with spin_unlock_wait() on powerpc, because
the spin_lock primitive is an ACQUIRE and an ACQUIRE only orders the
load part of the operation against memory operations following it.
Therefore the following event sequence can happen:

CPU 1			CPU 2			CPU 3

==================	====================	==============
						spin_unlock(&lock);
			spin_lock(&lock):
			  r1 = *lock; // r1 == 0;
o = object;		o = READ_ONCE(object); // reordered here
object = NULL;
smp_mb();
spin_unlock_wait(&lock);
			  *lock = 1;
smp_mb();
o->dead = true;         < o = READ_ONCE(object); > // reordered upwards
			if (o) // true
				BUG_ON(o->dead); // true!!
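
For illustration, the scenario above corresponds to a usage pattern
along the following lines (a sketch only, with invented names; not taken
from any in-tree caller):

	struct foo {
		bool dead;
		/* ... */
	};

	static struct foo *object;
	static DEFINE_SPINLOCK(lock);

	/* CPU 1: retire the object */
	static void retire_object(void)
	{
		struct foo *o = object;

		object = NULL;
		smp_mb();
		spin_unlock_wait(&lock);	/* wait for any current lock holder */
		smp_mb();
		o->dead = true;
	}

	/* CPU 2: use the object while holding the lock */
	static void use_object(void)
	{
		struct foo *o;

		spin_lock(&lock);
		o = READ_ONCE(object);
		if (o)
			BUG_ON(o->dead);	/* must never fire */
		spin_unlock(&lock);
	}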

To fix this, we add a "nop" ll/sc loop in arch_spin_unlock_wait() on
ppc. The "nop" ll/sc loop reads the lock value and writes it back
atomically; this synchronizes the view of the lock on CPU1 with that on
CPU2. Therefore in the scenario above, either CPU2 will fail to get the
lock at first or CPU1 will see the lock acquired by CPU2; either case
eliminates the bug. This is a similar idea to what Will Deacon did for
ARM64 in:

  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")

Furthermore, if the "nop" ll/sc finds the lock held, we don't need to
repeat the "nop" ll/sc trick; a normal load+check loop waiting for the
lock to be released is enough. In that case spin_unlock_wait() is called
while someone is holding the lock, and the store part of the "nop" ll/sc
happens before the lock release by the current lock holder:

	"nop" ll/sc -> spin_unlock()

and the lock release happens before the next lock acquisition:

	spin_unlock() -> spin_lock() <next holder>

which means the "nop" ll/sc happens before the next lock acquisition:

	"nop" ll/sc -> spin_unlock() -> spin_lock() <next holder>

With an smp_mb() preceding spin_unlock_wait(), the store to object is
guaranteed to be observed by the next lock holder:

	STORE -> smp_mb() -> "nop" ll/sc
	-> spin_unlock() -> spin_lock() <next holder>

This patch therefore fixes the issue and also cleans up
arch_spin_unlock_wait() a little by removing superfluous memory
barriers in loops and consolidating the PPC32 and PPC64 implementations
into one.

Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
[mpe: Inline the "nop" ll/sc loop and set EH=0, munge change log]
Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/include/asm/spinlock.h | 38 +++++++++++++++++++++++++++++++------
 arch/powerpc/lib/locks.c            | 16 ----------------
 2 files changed, 32 insertions(+), 22 deletions(-)

v3 (mpe):
 - Inline the ll/sc loop.
 - Change the EH on the LWARX to 0
 - Rewrite change log to cope with the fact we removed arch_spin_is_locked_sync()

v1-->v2:

 - Improve the commit log, suggested by Peter Zijlstra
 - Keep the two smp_mb()s for safety; they could be removed later once
   all the users have been audited and fixed.

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 523673d7583c..5f84fc147664 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -162,12 +162,38 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 	lock->slock = 0;
 }
 
-#ifdef CONFIG_PPC64
-extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
-#else
-#define arch_spin_unlock_wait(lock) \
-	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
-#endif
+static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
+{
+	arch_spinlock_t lock_val;
+
+	smp_mb();
+
+	/*
+	 * Atomically load and store back the lock value (unchanged). This
+	 * ensures that our observation of the lock value is ordered with
+	 * respect to other lock operations.
+	 */
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
+"	stwcx. %0, 0, %2\n"
+"	bne- 1b\n"
+	: "=&r" (lock_val), "+m" (*lock)
+	: "r" (lock)
+	: "cr0", "xer");
+
+	if (arch_spin_value_unlocked(lock_val))
+		goto out;
+
+	while (!arch_spin_value_unlocked(*lock)) {
+		HMT_low();
+		if (SHARED_PROCESSOR)
+			__spin_yield(lock);
+	}
+	HMT_medium();
+
+out:
+	smp_mb();
+}
 
 /*
  * Read-write spinlocks, allowing multiple readers
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index f7deebdf3365..b7b1237d4aa6 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -68,19 +68,3 @@ void __rw_yield(arch_rwlock_t *rw)
 		get_hard_smp_processor_id(holder_cpu), yield_count);
 }
 #endif
-
-void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_mb();
-
-	while (lock->slock) {
-		HMT_low();
-		if (SHARED_PROCESSOR)
-			__spin_yield(lock);
-	}
-	HMT_medium();
-
-	smp_mb();
-}
-
-EXPORT_SYMBOL(arch_spin_unlock_wait);
-- 
2.5.0


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-06 11:42 [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait() Michael Ellerman
@ 2016-06-06 11:56 ` Peter Zijlstra
  2016-06-06 12:17   ` Michael Ellerman
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-06 11:56 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> +	arch_spinlock_t lock_val;
> +
> +	smp_mb();
> +
> +	/*
> +	 * Atomically load and store back the lock value (unchanged). This
> +	 * ensures that our observation of the lock value is ordered with
> +	 * respect to other lock operations.
> +	 */
> +	__asm__ __volatile__(
> +"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
> +"	stwcx. %0, 0, %2\n"
> +"	bne- 1b\n"
> +	: "=&r" (lock_val), "+m" (*lock)
> +	: "r" (lock)
> +	: "cr0", "xer");
> +
> +	if (arch_spin_value_unlocked(lock_val))
> +		goto out;
> +
> +	while (!arch_spin_value_unlocked(*lock)) {
> +		HMT_low();
> +		if (SHARED_PROCESSOR)
> +			__spin_yield(lock);
> +	}
> +	HMT_medium();
> +
> +out:
> +	smp_mb();
> +}

Why the move to in-line this implementation? It looks like a fairly big
function.


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-06 11:56 ` Peter Zijlstra
@ 2016-06-06 12:17   ` Michael Ellerman
  2016-06-06 14:46     ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Ellerman @ 2016-06-06 12:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Mon, 2016-06-06 at 13:56 +0200, Peter Zijlstra wrote:
> On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> > +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> > +{
> > +	arch_spinlock_t lock_val;
> > +
> > +	smp_mb();
> > +
> > +	/*
> > +	 * Atomically load and store back the lock value (unchanged). This
> > +	 * ensures that our observation of the lock value is ordered with
> > +	 * respect to other lock operations.
> > +	 */
> > +	__asm__ __volatile__(
> > +"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
> > +"	stwcx. %0, 0, %2\n"
> > +"	bne- 1b\n"
> > +	: "=&r" (lock_val), "+m" (*lock)
> > +	: "r" (lock)
> > +	: "cr0", "xer");
> > +
> > +	if (arch_spin_value_unlocked(lock_val))
> > +		goto out;
> > +
> > +	while (!arch_spin_value_unlocked(*lock)) {
> > +		HMT_low();
> > +		if (SHARED_PROCESSOR)
> > +			__spin_yield(lock);
> > +	}
> > +	HMT_medium();
> > +
> > +out:
> > +	smp_mb();
> > +}
> 
> Why the move to in-line this implementation? It looks like a fairly big
> function.

I agree it's not pretty.

I just didn't think having it out-of-line made it easier to understand. The
previous version had:

  static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
  {
  	...
  	if (!arch_spin_is_locked_sync(lock))
  		goto out;

Then elsewhere:

  static inline bool arch_spin_is_locked_sync(arch_spinlock_t *lock)
  {
  	...
  	return !arch_spin_value_unlocked(tmp);
  }


So two negations and one routine called "locked" and one "unlocked", which just
didn't read well IMHO.

Another minor concern was that someone might be "clever" and call the _sync()
version manually (though hopefully we'd catch that in review).

I'm not beholden to v3 though if you hate it.

cheers


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-06 12:17   ` Michael Ellerman
@ 2016-06-06 14:46     ` Peter Zijlstra
  2016-06-08 11:20       ` Michael Ellerman
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-06 14:46 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Mon, Jun 06, 2016 at 10:17:25PM +1000, Michael Ellerman wrote:
> On Mon, 2016-06-06 at 13:56 +0200, Peter Zijlstra wrote:
> > On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> > > +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> > > +{
> > > +	arch_spinlock_t lock_val;
> > > +
> > > +	smp_mb();
> > > +
> > > +	/*
> > > +	 * Atomically load and store back the lock value (unchanged). This
> > > +	 * ensures that our observation of the lock value is ordered with
> > > +	 * respect to other lock operations.
> > > +	 */
> > > +	__asm__ __volatile__(
> > > +"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
> > > +"	stwcx. %0, 0, %2\n"
> > > +"	bne- 1b\n"
> > > +	: "=&r" (lock_val), "+m" (*lock)
> > > +	: "r" (lock)
> > > +	: "cr0", "xer");
> > > +
> > > +	if (arch_spin_value_unlocked(lock_val))
> > > +		goto out;
> > > +
> > > +	while (!arch_spin_value_unlocked(*lock)) {
> > > +		HMT_low();
> > > +		if (SHARED_PROCESSOR)
> > > +			__spin_yield(lock);
> > > +	}
> > > +	HMT_medium();
> > > +
> > > +out:
> > > +	smp_mb();
> > > +}
> > 
> > Why the move to in-line this implementation? It looks like a fairly big
> > function.
> 
> I agree it's not pretty.

> 
> I'm not beholden to v3 though if you hate it.

I don't mind; it's just that I am in a similar boat with qspinlock and
chose the other option. So I just figured I'd ask :-)


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-06 14:46     ` Peter Zijlstra
@ 2016-06-08 11:20       ` Michael Ellerman
  2016-06-08 12:35         ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Ellerman @ 2016-06-08 11:20 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Mon, 2016-06-06 at 16:46 +0200, Peter Zijlstra wrote:
> On Mon, Jun 06, 2016 at 10:17:25PM +1000, Michael Ellerman wrote:
> > On Mon, 2016-06-06 at 13:56 +0200, Peter Zijlstra wrote:
> > > On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> > > 
> > > Why the move to in-line this implementation? It looks like a fairly big
> > > function.
> > 
> > I agree it's not pretty.
> 
> > I'm not beholden to v3 though if you hate it.
> 
> I don't mind; it's just that I am in a similar boat with qspinlock and
> chose the other option. So I just figured I'd ask :-)

OK. I'll go with inline and we'll see which version gets "cleaned-up" by a
janitor first ;)

cheers


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-08 11:20       ` Michael Ellerman
@ 2016-06-08 12:35         ` Peter Zijlstra
  2016-06-08 13:49           ` Michael Ellerman
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-08 12:35 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Wed, Jun 08, 2016 at 09:20:45PM +1000, Michael Ellerman wrote:
> On Mon, 2016-06-06 at 16:46 +0200, Peter Zijlstra wrote:
> > On Mon, Jun 06, 2016 at 10:17:25PM +1000, Michael Ellerman wrote:
> > > On Mon, 2016-06-06 at 13:56 +0200, Peter Zijlstra wrote:
> > > > On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> > > > 
> > > > Why the move to in-line this implementation? It looks like a fairly big
> > > > function.
> > > 
> > > I agree it's not pretty.
> > 
> > > I'm not beholden to v3 though if you hate it.
> > 
> > I don't mind; it's just that I am in a similar boat with qspinlock and
> > chose the other option. So I just figured I'd ask :-)
> 
> OK. I'll go with inline and we'll see which version gets "cleaned-up" by a
> janitor first ;)

Ok; what tree does this go in? I have this dependent series which I'd
like to get sorted and merged somewhere.


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-08 12:35         ` Peter Zijlstra
@ 2016-06-08 13:49           ` Michael Ellerman
  2016-06-08 13:59             ` Peter Zijlstra
  0 siblings, 1 reply; 13+ messages in thread
From: Michael Ellerman @ 2016-06-08 13:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Wed, 2016-06-08 at 14:35 +0200, Peter Zijlstra wrote:
> On Wed, Jun 08, 2016 at 09:20:45PM +1000, Michael Ellerman wrote:
> > On Mon, 2016-06-06 at 16:46 +0200, Peter Zijlstra wrote:
> > > On Mon, Jun 06, 2016 at 10:17:25PM +1000, Michael Ellerman wrote:
> > > > On Mon, 2016-06-06 at 13:56 +0200, Peter Zijlstra wrote:
> > > > > On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> > > > > 
> > > > > Why the move to in-line this implementation? It looks like a fairly big
> > > > > function.
> > > > 
> > > > I agree it's not pretty.
> > > 
> > > > I'm not beholden to v3 though if you hate it.
> > > 
> > > I don't mind; it's just that I am in a similar boat with qspinlock and
> > > chose the other option. So I just figured I'd ask :-)
> > 
> > OK. I'll go with inline and we'll see which version gets "cleaned-up" by a
> > janitor first ;)
> 
> Ok; what tree does this go in? I have this dependent series which I'd
> like to get sorted and merged somewhere.

Ah sorry, I didn't realise. I was going to put it in my next (which doesn't
exist yet but hopefully will early next week).

I'll make a topic branch with just that commit based on rc2 or rc3?

cheers


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-08 13:49           ` Michael Ellerman
@ 2016-06-08 13:59             ` Peter Zijlstra
  2016-06-09 12:23               ` Michael Ellerman
  0 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-08 13:59 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon, Boqun Feng

On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:

> > Ok; what tree does this go in? I have this dependent series which I'd
> > like to get sorted and merged somewhere.
> 
> Ah sorry, I didn't realise. I was going to put it in my next (which doesn't
> exist yet but hopefully will early next week).
> 
> I'll make a topic branch with just that commit based on rc2 or rc3?

Works for me; thanks!


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-08 13:59             ` Peter Zijlstra
@ 2016-06-09 12:23               ` Michael Ellerman
  2016-06-09 17:25                 ` Boqun Feng
  2016-06-09 17:50                 ` Peter Zijlstra
  0 siblings, 2 replies; 13+ messages in thread
From: Michael Ellerman @ 2016-06-09 12:23 UTC (permalink / raw)
  To: Peter Zijlstra, Boqun Feng
  Cc: linuxppc-dev, Linux Kernel Mailing List, Benjamin Herrenschmidt,
	Paul Mackerras, Paul E. McKenney, Will Deacon

On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
>
> > > Ok; what tree does this go in? I have this dependent series which I'd
> > > like to get sorted and merged somewhere.
> > 
> > Ah sorry, I didn't realise. I was going to put it in my next (which doesn't
> > exist yet but hopefully will early next week).
> > 
> > I'll make a topic branch with just that commit based on rc2 or rc3?
> 
> Works for me; thanks!
 
Unfortunately the patch isn't 100%.

It's causing some of my machines to lock up hard, which isn't surprising when
you look at the generated code for the non-atomic spin loop:

  c00000000009af48:	7c 21 0b 78 	mr      r1,r1					# HMT_LOW
  c00000000009af4c:	40 9e ff fc 	bne     cr7,c00000000009af48 <.do_exit+0x6d8>

Which is a spin loop waiting for a result in cr7, but with no comparison.

The problem seems to be that we did:

@@ -184,7 +184,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
 	if (arch_spin_value_unlocked(lock_val))
 		goto out;
 
-	while (lock->slock) {
+	while (!arch_spin_value_unlocked(*lock)) {
 		HMT_low();
 		if (SHARED_PROCESSOR)
 			__spin_yield(lock);

Which seems to be hiding the fact that lock->slock is volatile from the
compiler, even though arch_spin_value_unlocked() is inline. Not sure if that's
our bug or gcc's.

Will sleep on it.

cheers


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-09 12:23               ` Michael Ellerman
@ 2016-06-09 17:25                 ` Boqun Feng
  2016-06-10  3:06                   ` Boqun Feng
  2016-06-09 17:50                 ` Peter Zijlstra
  1 sibling, 1 reply; 13+ messages in thread
From: Boqun Feng @ 2016-06-09 17:25 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Peter Zijlstra, linuxppc-dev, Linux Kernel Mailing List,
	Benjamin Herrenschmidt, Paul Mackerras, Paul E. McKenney,
	Will Deacon


On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> >
> > > > Ok; what tree does this go in? I have this dependent series which I'd
> > > > like to get sorted and merged somewhere.
> > > 
> > > Ah sorry, I didn't realise. I was going to put it in my next (which doesn't
> > > exist yet but hopefully will early next week).
> > > 
> > > I'll make a topic branch with just that commit based on rc2 or rc3?
> > 
> > Works for me; thanks!
>  
> Unfortunately the patch isn't 100%.
> 
> It's causing some of my machines to lock up hard, which isn't surprising when
> you look at the generated code for the non-atomic spin loop:
> 
>   c00000000009af48:	7c 21 0b 78 	mr      r1,r1					# HMT_LOW
>   c00000000009af4c:	40 9e ff fc 	bne     cr7,c00000000009af48 <.do_exit+0x6d8>
> 

There is not even any code checking for SHARED_PROCESSOR here, so I assume
your config is !PPC_SPLPAR.

> Which is a spin loop waiting for a result in cr7, but with no comparison.
> 
> The problem seems to be that we did:
> 
> @@ -184,7 +184,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
>  	if (arch_spin_value_unlocked(lock_val))
>  		goto out;
>  
> -	while (lock->slock) {
> +	while (!arch_spin_value_unlocked(*lock)) {
>  		HMT_low();
>  		if (SHARED_PROCESSOR)
>  			__spin_yield(lock);
> 

And as I also did a consolidation in this patch, we now share the same
piece of arch_spin_unlock_wait(), so with !PPC_SPLPAR the previous loop
became:

	while (!arch_spin_value_unlocked(*lock)) {
 		HMT_low();
	}

and HMT_low() is not a compiler barrier, so the compiler is free to
hoist the load out of the loop...
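
Roughly, it can treat the loop as if it had been written like this (an
illustrative sketch, not actual compiler output):

	arch_spinlock_t lock_val = *lock;	/* lock value read only once */

	while (!arch_spin_value_unlocked(lock_val))
		HMT_low();	/* spins forever if the lock was held */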

> Which seems to be hiding the fact that lock->slock is volatile from the
> compiler, even though arch_spin_value_unlocked() is inline. Not sure if that's
> our bug or gcc's.
> 

I think the load in arch_spin_value_unlocked() is not treated as
volatile, because the function takes the value of the lock rather than
the address of the lock as its parameter, which makes it a pure function.
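
For reference, the helper is roughly just this (simplified; the real
definition is in arch/powerpc/include/asm/spinlock.h):

	static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
	{
		/* Works on a copy of the lock, so no volatile access here. */
		return lock.slock == 0;
	}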

To fix this we can add a READ_ONCE() for the read of the lock value,
like the following:

	while (!arch_spin_value_unlocked(READ_ONCE(*lock))) {
		HMT_low();
		...

Or would you prefer to simply use lock->slock, which is a volatile
variable already?

Or maybe we can refactor the code a little like this:

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
       arch_spinlock_t lock_val;

       smp_mb();

       /*
        * Atomically load and store back the lock value (unchanged).  This
        * ensures that our observation of the lock value is ordered with
        * respect to other lock operations.
        */
       __asm__ __volatile__(
"1:    " PPC_LWARX(%0, 0, %2, 0) "\n"
"      stwcx. %0, 0, %2\n"
"      bne- 1b\n"
       : "=&r" (lock_val), "+m" (*lock)
       : "r" (lock)
       : "cr0", "xer");

       while (!arch_spin_value_unlocked(lock_val)) {
               HMT_low();
               if (SHARED_PROCESSOR)
                       __spin_yield(lock);

               lock_val = READ_ONCE(*lock);
       }
       HMT_medium();

       smp_mb();
}

> Will sleep on it.
> 

Bedtime for me too. I will run more tests on the three proposals above
tomorrow and see how things go.

Regards,
Boqun

> cheers
> 



* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-09 12:23               ` Michael Ellerman
  2016-06-09 17:25                 ` Boqun Feng
@ 2016-06-09 17:50                 ` Peter Zijlstra
  2016-06-10  0:57                   ` Michael Ellerman
  1 sibling, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2016-06-09 17:50 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Boqun Feng, linuxppc-dev, Linux Kernel Mailing List,
	Benjamin Herrenschmidt, Paul Mackerras, Paul E. McKenney,
	Will Deacon

On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> Unfortunately the patch isn't 100%.

So what I'll do; since my patch is trying to ensure all implementations
of spin_unlock_wait() provide ACQUIRE semantics, and this patch does
indeed do so, is skip touching PPC entirely and hope this patch lands
before 4.8.


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-09 17:50                 ` Peter Zijlstra
@ 2016-06-10  0:57                   ` Michael Ellerman
  0 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2016-06-10  0:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Boqun Feng, linuxppc-dev, Linux Kernel Mailing List,
	Benjamin Herrenschmidt, Paul Mackerras, Paul E. McKenney,
	Will Deacon

On Thu, 2016-06-09 at 19:50 +0200, Peter Zijlstra wrote:
> On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> > Unfortunately the patch isn't 100%.
> 
> So what I'll do; since my patch is trying to ensure all implementations
> of spin_unlock_wait() provide ACQUIRE semantics, and this patch does
> indeed do so, is skip touching PPC entirely and hope this patch lands
> before 4.8.

OK.

I don't see any reason it wouldn't make 4.8; we just need to decide whether we
use the old "while (lock->slock)" or one of the other options Boqun proposed.

cheers


* Re: [PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-09 17:25                 ` Boqun Feng
@ 2016-06-10  3:06                   ` Boqun Feng
  0 siblings, 0 replies; 13+ messages in thread
From: Boqun Feng @ 2016-06-10  3:06 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Peter Zijlstra, linuxppc-dev, Linux Kernel Mailing List,
	Benjamin Herrenschmidt, Paul Mackerras, Paul E. McKenney,
	Will Deacon


On Fri, Jun 10, 2016 at 01:25:03AM +0800, Boqun Feng wrote:
> On Thu, Jun 09, 2016 at 10:23:28PM +1000, Michael Ellerman wrote:
> > On Wed, 2016-06-08 at 15:59 +0200, Peter Zijlstra wrote:
> > > On Wed, Jun 08, 2016 at 11:49:20PM +1000, Michael Ellerman wrote:
> > >
> > > > > Ok; what tree does this go in? I have this dependent series which I'd
> > > > > like to get sorted and merged somewhere.
> > > > 
> > > > Ah sorry, I didn't realise. I was going to put it in my next (which doesn't
> > > > exist yet but hopefully will early next week).
> > > > 
> > > > I'll make a topic branch with just that commit based on rc2 or rc3?
> > > 
> > > Works for me; thanks!
> >  
> > Unfortunately the patch isn't 100%.
> > 
> > It's causing some of my machines to lock up hard, which isn't surprising when
> > you look at the generated code for the non-atomic spin loop:
> > 
> >   c00000000009af48:	7c 21 0b 78 	mr      r1,r1					# HMT_LOW
> >   c00000000009af4c:	40 9e ff fc 	bne     cr7,c00000000009af48 <.do_exit+0x6d8>
> > 
> 
> There is not even any code checking for SHARED_PROCESSOR here, so I assume
> your config is !PPC_SPLPAR.
> 
> > Which is a spin loop waiting for a result in cr7, but with no comparison.
> > 
> > The problem seems to be that we did:
> > 
> > @@ -184,7 +184,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> >  	if (arch_spin_value_unlocked(lock_val))
> >  		goto out;
> >  
> > -	while (lock->slock) {
> > +	while (!arch_spin_value_unlocked(*lock)) {
> >  		HMT_low();
> >  		if (SHARED_PROCESSOR)
> >  			__spin_yield(lock);
> > 
> 
> And as I also did a consolidation in this patch, we now share the same
> piece of arch_spin_unlock_wait(), so with !PPC_SPLPAR the previous loop
> became:
> 
> 	while (!arch_spin_value_unlocked(*lock)) {
>  		HMT_low();
> 	}
> 
> and HMT_low() is not a compiler barrier, so the compiler is free to
> hoist the load out of the loop...
> 
> > Which seems to be hiding the fact that lock->slock is volatile from the
> > compiler, even though arch_spin_value_unlocked() is inline. Not sure if that's
> > our bug or gcc's.
> > 
> 
> I think the load in arch_spin_value_unlocked() is not treated as
> volatile, because the function takes the value of the lock rather than
> the address of the lock as its parameter, which makes it a pure function.
> 
> To fix this we can add a READ_ONCE() for the read of the lock value,
> like the following:
> 
> 	while (!arch_spin_value_unlocked(READ_ONCE(*lock))) {
> 		HMT_low();
> 		...
> 
> Or would you prefer to simply use lock->slock, which is a volatile
> variable already?
> 
> Or maybe we can refactor the code a little like this:
> 
> static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> {
>        arch_spinlock_t lock_val;
> 
>        smp_mb();
> 
>        /*
>         * Atomically load and store back the lock value (unchanged).  This
>         * ensures that our observation of the lock value is ordered with
>         * respect to other lock operations.
>         */
>        __asm__ __volatile__(
> "1:    " PPC_LWARX(%0, 0, %2, 0) "\n"
> "      stwcx. %0, 0, %2\n"
> "      bne- 1b\n"
>        : "=&r" (lock_val), "+m" (*lock)
>        : "r" (lock)
>        : "cr0", "xer");
> 
>        while (!arch_spin_value_unlocked(lock_val)) {
>                HMT_low();
>                if (SHARED_PROCESSOR)
>                        __spin_yield(lock);
> 
>                lock_val = READ_ONCE(*lock);
>        }
>        HMT_medium();
> 
>        smp_mb();
> }
> 

This version will generate the correct code for the loop if !PPC_SPLPAR:

c00000000009fa70:       78 0b 21 7c     mr      r1,r1
c00000000009fa74:       ec 06 37 81     lwz     r9,1772(r23)
c00000000009fa78:       00 00 a9 2f     cmpdi   cr7,r9,0
c00000000009fa7c:       f4 ff 9e 40     bne     cr7,c00000000009fa70 <do_exit+0xf0>
c00000000009fa80:       78 13 42 7c     mr      r2,r2

The reason I used arch_spin_value_unlocked() was to be consistent with
arch_spin_is_locked(), but most of our other lock primitives use
->slock directly. So I don't see a strong reason for us to use
arch_spin_value_unlocked() here. That said, this version does save a few
lines of code and makes the logic a little clearer, I think.

Thoughts?

Regards,
Boqun


