linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] powerpc: spinlock: Fix spin_unlock_wait()
@ 2016-06-03  3:49 Boqun Feng
  2016-06-06  4:52 ` [v2] " Michael Ellerman
  2016-06-06 11:14 ` Michael Ellerman
  0 siblings, 2 replies; 4+ messages in thread
From: Boqun Feng @ 2016-06-03  3:49 UTC (permalink / raw)
  To: linuxppc-dev, linux-kernel
  Cc: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman,
	Paul E. McKenney, Peter Zijlstra, Will Deacon, Boqun Feng

There is an ordering issue with spin_unlock_wait() on powerpc: the
spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the load
part of the operation against memory operations following it.
Therefore the following event sequence can happen:

CPU 1			CPU 2			CPU 3
==================	====================	==============
						spin_unlock(&lock);
			spin_lock(&lock):
			  r1 = *lock; // r1 == 0;
o = object;		o = READ_ONCE(object); // reordered here
object = NULL;
smp_mb();
spin_unlock_wait(&lock);
			  *lock = 1;
smp_mb();
o->dead = true;         < o = READ_ONCE(object); > // reordered upwards
			if (o) // true
				BUG_ON(o->dead); // true!!
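
For concreteness, here is a minimal C sketch of the usage pattern the
diagram abstracts; the names (struct obj, object, obj_lock, obj_kill(),
obj_use()) are purely illustrative and not taken from any real
spin_unlock_wait() caller, and the sketch assumes <linux/spinlock.h>,
<linux/bug.h> and <linux/compiler.h>:

struct obj {
	bool dead;
};

static struct obj *object;
static DEFINE_SPINLOCK(obj_lock);

/* CPU 1: unpublish the object, wait out any current critical section,
 * then retire it. */
void obj_kill(void)
{
	struct obj *o = object;		/* o = object;			*/

	object = NULL;			/* object = NULL;		*/
	smp_mb();
	spin_unlock_wait(&obj_lock);	/* wait for any lock holder	*/
	smp_mb();
	o->dead = true;			/* o->dead = true;		*/
}

/* CPU 2: use the object under the lock; this BUG_ON() is the one that
 * can fire in the scenario above. */
void obj_use(void)
{
	struct obj *o;

	spin_lock(&obj_lock);
	o = READ_ONCE(object);
	if (o)
		BUG_ON(o->dead);
	spin_unlock(&obj_lock);
}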

To fix this, we add a "nop" ll/sc loop, arch_spin_is_locked_sync(), to
arch_spin_unlock_wait() on ppc. The "nop" ll/sc loop reads the lock
value and writes it back atomically; this synchronizes the view of the
lock on CPU1 with that on CPU2, so in the scenario above either CPU2
fails to get the lock at first or CPU1 sees the lock acquired by CPU2.
Both cases eliminate this bug. This is similar to what Will Deacon did
for ARM64 in:

  d86b8da04dfa ("arm64: spinlock: serialise spin_unlock_wait against concurrent lockers")
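
As a rough illustration only, the read-and-write-back idea can be
sketched with generic kernel atomics (the function name is hypothetical
and this is not the powerpc implementation, which is the lwarx/stwcx.
loop added below):

/*
 * Sketch: atomically write back the value we just read.  The store puts
 * this observation into the lock word's update order, so a concurrent
 * ll/sc lock attempt either becomes visible to us here, or fails its
 * store-conditional and retries, ending up ordered after us.
 */
static inline bool spin_is_locked_sync_sketch(atomic_t *lock)
{
	int val;

	do {
		val = atomic_read(lock);
	} while (atomic_cmpxchg_relaxed(lock, val, val) != val);

	return val != 0;
}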

Furthermore, if arch_spin_is_locked_sync() finds the lock is held, we
don't need to do the "nop" ll/sc trick again; we can just do a normal
load+check loop waiting for the lock to be released, because in that
case spin_unlock_wait() was called while someone was holding the lock,
and the store part of arch_spin_is_locked_sync() happens before the
lock release of the current lock holder:

	arch_spin_is_locked_sync() -> spin_unlock()

and the lock release happens before the next lock acquisition:

	spin_unlock() -> spin_lock() <next holder>

which means arch_spin_is_locked_sync() happens before the next lock
acquisition:

	arch_spin_is_locked_sync() -> spin_unlock() -> spin_lock() <next holder>

With the smp_mb() preceding spin_unlock_wait(), the store to object is
guaranteed to be observed by the next lock holder:

	STORE -> smp_mb() -> arch_spin_is_locked_sync()
	-> spin_unlock() -> spin_lock() <next holder>

This patch therefore fixes the issue and also cleans up
arch_spin_unlock_wait() a little by removing the superfluous memory
barriers in the loops and consolidating the PPC32 and PPC64
implementations into one.

Suggested-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
Reviewed-by: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
---
v1-->v2:

*	Improve the commit log, suggested by Peter Zijlstra

*	Keep the two smp_mb()s for safety; they could be removed later
	once all users have been audited and fixed.

 arch/powerpc/include/asm/spinlock.h | 42 +++++++++++++++++++++++++++++++------
 arch/powerpc/lib/locks.c            | 16 --------------
 2 files changed, 36 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 523673d7583c..2ed893662866 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -64,6 +64,25 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 }
 
 /*
+ * Use an ll/sc loop to read the lock value; the STORE part of this operation
+ * is used to make later lock operations observe it.
+ */
+static inline bool arch_spin_is_locked_sync(arch_spinlock_t *lock)
+{
+	arch_spinlock_t tmp;
+
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0, 0, %2, 1) "\n"
+"	stwcx. %0, 0, %2\n"
+"	bne- 1b\n"
+	: "=&r" (tmp), "+m" (*lock)
+	: "r" (lock)
+	: "cr0", "xer");
+
+	return !arch_spin_value_unlocked(tmp);
+}
+
+/*
  * This returns the old value in the lock, so we succeeded
  * in getting the lock if the return value is 0.
  */
@@ -162,12 +181,23 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
 	lock->slock = 0;
 }
 
-#ifdef CONFIG_PPC64
-extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
-#else
-#define arch_spin_unlock_wait(lock) \
-	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
-#endif
+static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
+{
+	smp_mb();
+
+	if (!arch_spin_is_locked_sync(lock))
+		goto out;
+
+	while (!arch_spin_value_unlocked(*lock)) {
+		HMT_low();
+		if (SHARED_PROCESSOR)
+			__spin_yield(lock);
+	}
+	HMT_medium();
+
+out:
+	smp_mb();
+}
 
 /*
  * Read-write spinlocks, allowing multiple readers
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index f7deebdf3365..b7b1237d4aa6 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -68,19 +68,3 @@ void __rw_yield(arch_rwlock_t *rw)
 		get_hard_smp_processor_id(holder_cpu), yield_count);
 }
 #endif
-
-void arch_spin_unlock_wait(arch_spinlock_t *lock)
-{
-	smp_mb();
-
-	while (lock->slock) {
-		HMT_low();
-		if (SHARED_PROCESSOR)
-			__spin_yield(lock);
-	}
-	HMT_medium();
-
-	smp_mb();
-}
-
-EXPORT_SYMBOL(arch_spin_unlock_wait);
-- 
2.8.3


* Re: [v2] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-03  3:49 [PATCH v2] powerpc: spinlock: Fix spin_unlock_wait() Boqun Feng
@ 2016-06-06  4:52 ` Michael Ellerman
  2016-06-06  4:59   ` Boqun Feng
  2016-06-06 11:14 ` Michael Ellerman
  1 sibling, 1 reply; 4+ messages in thread
From: Michael Ellerman @ 2016-06-06  4:52 UTC (permalink / raw)
  To: Boqun Feng, linuxppc-dev, linux-kernel
  Cc: Peter Zijlstra, Boqun Feng, Will Deacon, Paul Mackerras,
	Paul E. McKenney

On Fri, 2016-03-06 at 03:49:48 UTC, Boqun Feng wrote:
> There is an ordering issue with spin_unlock_wait() on powerpc: the
> spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the load
> part of the operation against memory operations following it.

...
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 523673d7583c..2ed893662866 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -162,12 +181,23 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
>  	lock->slock = 0;
>  }
>  
> -#ifdef CONFIG_PPC64
> -extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
> -#else
> -#define arch_spin_unlock_wait(lock) \
> -	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
> -#endif
> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> +	smp_mb();
> +
> +	if (!arch_spin_is_locked_sync(lock))
> +		goto out;
> +
> +	while (!arch_spin_value_unlocked(*lock)) {
> +		HMT_low();
> +		if (SHARED_PROCESSOR)
> +			__spin_yield(lock);
> +	}
> +	HMT_medium();
> +
> +out:
> +	smp_mb();
> +}

I think this would actually be easier to follow if it was all just in one routine:

static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	arch_spinlock_t lock_val;

	smp_mb();

	/*
	 * Atomically load and store back the lock value (unchanged). This
	 * ensures that our observation of the lock value is ordered with
	 * respect to other lock operations.
	 */
	__asm__ __volatile__(
"1:	" PPC_LWARX(%0, 0, %2, 1) "\n"
"	stwcx. %0, 0, %2\n"
"	bne- 1b\n"
	: "=&r" (lock_val), "+m" (*lock)
	: "r" (lock)
	: "cr0", "xer");

	if (arch_spin_value_unlocked(lock_val))
		goto out;

	while (!arch_spin_value_unlocked(*lock)) {
		HMT_low();
		if (SHARED_PROCESSOR)
			__spin_yield(lock);
	}
	HMT_medium();

out:
	smp_mb();
}


Thoughts?

cheers


* Re: [v2] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-06  4:52 ` [v2] " Michael Ellerman
@ 2016-06-06  4:59   ` Boqun Feng
  0 siblings, 0 replies; 4+ messages in thread
From: Boqun Feng @ 2016-06-06  4:59 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: linuxppc-dev, linux-kernel, Peter Zijlstra, Will Deacon,
	Paul Mackerras, Paul E. McKenney

On Mon, Jun 06, 2016 at 02:52:05PM +1000, Michael Ellerman wrote:
> On Fri, 2016-03-06 at 03:49:48 UTC, Boqun Feng wrote:
> > There is an ordering issue with spin_unlock_wait() on powerpc: the
> > spin_lock primitive is an ACQUIRE, and an ACQUIRE only orders the load
> > part of the operation against memory operations following it.
> 
> ...
> > diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> > index 523673d7583c..2ed893662866 100644
> > --- a/arch/powerpc/include/asm/spinlock.h
> > +++ b/arch/powerpc/include/asm/spinlock.h
> > @@ -162,12 +181,23 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
> >  	lock->slock = 0;
> >  }
> >  
> > -#ifdef CONFIG_PPC64
> > -extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
> > -#else
> > -#define arch_spin_unlock_wait(lock) \
> > -	do { while (arch_spin_is_locked(lock)) cpu_relax(); } while (0)
> > -#endif
> > +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> > +{
> > +	smp_mb();
> > +
> > +	if (!arch_spin_is_locked_sync(lock))
> > +		goto out;
> > +
> > +	while (!arch_spin_value_unlocked(*lock)) {
> > +		HMT_low();
> > +		if (SHARED_PROCESSOR)
> > +			__spin_yield(lock);
> > +	}
> > +	HMT_medium();
> > +
> > +out:
> > +	smp_mb();
> > +}
> 
> I think this would actually be easier to follow if it was all just in one routine:
> 
> static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> {
> 	arch_spinlock_t lock_val;
> 
> 	smp_mb();
> 
> 	/*
> 	 * Atomically load and store back the lock value (unchanged). This
> 	 * ensures that our observation of the lock value is ordered with
> 	 * respect to other lock operations.
> 	 */
> 	__asm__ __volatile__(
> "1:	" PPC_LWARX(%0, 0, %2, 1) "\n"
> "	stwcx. %0, 0, %2\n"
> "	bne- 1b\n"
> 	: "=&r" (lock_val), "+m" (*lock)
> 	: "r" (lock)
> 	: "cr0", "xer");
> 
> 	if (arch_spin_value_unlocked(lock_val))
> 		goto out;
> 
> 	while (!arch_spin_value_unlocked(*lock)) {
> 		HMT_low();
> 		if (SHARED_PROCESSOR)
> 			__spin_yield(lock);
> 	}
> 	HMT_medium();
> 
> out:
> 	smp_mb();
> }
> 
> 
> Thoughts?
> 

Makes sense. I admit that I sort of overdesigned this by introducing
arch_spin_is_locked_sync().

This version is better, thank you!

Regards,
Boqun

> cheers



* Re: [v2] powerpc: spinlock: Fix spin_unlock_wait()
  2016-06-03  3:49 [PATCH v2] powerpc: spinlock: Fix spin_unlock_wait() Boqun Feng
  2016-06-06  4:52 ` [v2] " Michael Ellerman
@ 2016-06-06 11:14 ` Michael Ellerman
  1 sibling, 0 replies; 4+ messages in thread
From: Michael Ellerman @ 2016-06-06 11:14 UTC (permalink / raw)
  To: Boqun Feng, linuxppc-dev, linux-kernel
  Cc: Peter Zijlstra, Boqun Feng, Will Deacon, Paul Mackerras,
	Paul E. McKenney

On Fri, 2016-03-06 at 03:49:48 UTC, Boqun Feng wrote:
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 523673d7583c..2ed893662866 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -64,6 +64,25 @@ static inline int arch_spin_is_locked(arch_spinlock_t *lock)
>  }
>  
>  /*
> > + * Use an ll/sc loop to read the lock value; the STORE part of this operation
> > + * is used to make later lock operations observe it.
> + */
> +static inline bool arch_spin_is_locked_sync(arch_spinlock_t *lock)
> +{
> +	arch_spinlock_t tmp;
> +
> +	__asm__ __volatile__(
> +"1:	" PPC_LWARX(%0, 0, %2, 1) "\n"

As discussed offline I think we want to use EH=0 here (the final parameter).

From the ISA:

  The value of EH provides a hint as to whether the program will perform a
  subsequent store to the word in storage addressed by EA before some other
  processor attempts to modify it.

  0 Other programs might attempt to modify the word in storage addressed by EA
    regardless of the result of the corresponding stwcx. instruction.

cheers
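
For illustration, a sketch of the same ll/sc fragment with the hint
flipped to EH=0, using lock_val and lock as declared in the routine
quoted earlier in the thread; this is only a sketch of the suggestion,
not a quote of the final committed version:

	__asm__ __volatile__(
"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"	/* EH=0 hint, per the ISA text above */
"	stwcx. %0, 0, %2\n"
"	bne- 1b\n"
	: "=&r" (lock_val), "+m" (*lock)
	: "r" (lock)
	: "cr0", "xer");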
