* [PATCH 0/3] arm/arm64: tcg_baremetal_tests inspired patches
From: Andrew Jones @ 2015-06-25 16:12 UTC
  To: kvm, kvmarm; +Cc: christoffer.dall, pbonzini

While porting the test in virtualopensystems' tcg_baremetal_tests
to kvm-unit-tests, I found a couple of places to improve the spinlock
implementation. I also added a way to build a single test that
doesn't necessarily have an entry in the makefile, which should be handy
for experimental tests.

Andrew Jones (3):
  arm/arm64: spinlocks: fix memory barriers
  arm/arm64: allow building a single test
  arm/arm64: speed up spinlocks and atomic ops

 config/config-arm-common.mak | 6 ++++++
 lib/arm/asm/mmu-api.h        | 4 ++++
 lib/arm/mmu.c                | 3 +++
 lib/arm/spinlock.c           | 8 ++++----
 lib/arm64/spinlock.c         | 5 ++---
 5 files changed, 19 insertions(+), 7 deletions(-)

-- 
2.4.3



* [PATCH 1/3] arm/arm64: spinlocks: fix memory barriers
From: Andrew Jones @ 2015-06-25 16:12 UTC
  To: kvm, kvmarm; +Cc: pbonzini

It shouldn't be necessary to use a barrier on the way into
spin_lock. We'll be focused on a single address until we get
it (exclusively) set, and then we'll do a barrier on the way
out. It does, however, make sense to do a barrier on the way
into spin_unlock, i.e. to ensure what we did in the critical
section is ordered wrt what we do outside it, before we
announce that we're outside.
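
Put differently, the ordering argued for above is acquire on the
way out of spin_lock and release on the way into spin_unlock. A
minimal sketch of that intent using GCC's __atomic builtins, for
illustration only (the patch below keeps full barriers):

/* Test-and-set lock: ACQUIRE on the exchange keeps the critical
 * section after lock acquisition, RELEASE on the store keeps it
 * before the lock release; no barrier before the spin loop. */
static inline void sketch_lock(int *v)
{
	while (__atomic_exchange_n(v, 1, __ATOMIC_ACQUIRE))
		; /* spin on the one address we care about */
}

static inline void sketch_unlock(int *v)
{
	__atomic_store_n(v, 0, __ATOMIC_RELEASE);
}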

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/spinlock.c   | 8 ++++----
 lib/arm64/spinlock.c | 5 ++---
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/lib/arm/spinlock.c b/lib/arm/spinlock.c
index 3b023ceaebf71..116ea5d7db930 100644
--- a/lib/arm/spinlock.c
+++ b/lib/arm/spinlock.c
@@ -7,10 +7,9 @@ void spin_lock(struct spinlock *lock)
 {
 	u32 val, fail;
 
-	dmb();
-
 	if (!mmu_enabled()) {
 		lock->v = 1;
+		smp_mb();
 		return;
 	}
 
@@ -25,11 +24,12 @@ void spin_lock(struct spinlock *lock)
 		: "r" (&lock->v)
 		: "cc" );
 	} while (fail);
-	dmb();
+
+	smp_mb();
 }
 
 void spin_unlock(struct spinlock *lock)
 {
+	smp_mb();
 	lock->v = 0;
-	dmb();
 }
diff --git a/lib/arm64/spinlock.c b/lib/arm64/spinlock.c
index 68b68b75ba60d..a3907f03cacda 100644
--- a/lib/arm64/spinlock.c
+++ b/lib/arm64/spinlock.c
@@ -13,10 +13,9 @@ void spin_lock(struct spinlock *lock)
 {
 	u32 val, fail;
 
-	smp_mb();
-
 	if (!mmu_enabled()) {
 		lock->v = 1;
+		smp_mb();
 		return;
 	}
 
@@ -35,9 +34,9 @@ void spin_lock(struct spinlock *lock)
 
 void spin_unlock(struct spinlock *lock)
 {
+	smp_mb();
 	if (mmu_enabled())
 		asm volatile("stlrh wzr, [%0]" :: "r" (&lock->v));
 	else
 		lock->v = 0;
-	smp_mb();
 }
-- 
2.4.3


* [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Andrew Jones @ 2015-06-25 16:12 UTC
  To: kvm, kvmarm; +Cc: christoffer.dall, pbonzini

Spinlock torture tests made it clear that checking mmu_enabled()
every time we call spin_lock is a bad idea. As most tests will
want the MMU enabled the entire time, just hard code
mmu_enabled() to true. Tests that want to play with the MMU can
be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
back.
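
A test that really does play with the MMU would then opt back in
with something like the following in its makefile fragment
(hypothetical; nothing in-tree defines it yet):

CFLAGS += -DCONFIG_MAY_DISABLE_MMU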

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 lib/arm/asm/mmu-api.h | 4 ++++
 lib/arm/mmu.c         | 3 +++
 2 files changed, 7 insertions(+)

diff --git a/lib/arm/asm/mmu-api.h b/lib/arm/asm/mmu-api.h
index 68dc707d67241..1a4d91163c398 100644
--- a/lib/arm/asm/mmu-api.h
+++ b/lib/arm/asm/mmu-api.h
@@ -1,7 +1,11 @@
 #ifndef __ASMARM_MMU_API_H_
 #define __ASMARM_MMU_API_H_
 extern pgd_t *mmu_idmap;
+#ifdef CONFIG_MAY_DISABLE_MMU
 extern bool mmu_enabled(void);
+#else
+#define mmu_enabled() (1)
+#endif
 extern void mmu_set_enabled(void);
 extern void mmu_enable(pgd_t *pgtable);
 extern void mmu_enable_idmap(void);
diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
index 732000a8eb088..405717b6332bf 100644
--- a/lib/arm/mmu.c
+++ b/lib/arm/mmu.c
@@ -15,11 +15,14 @@ extern unsigned long etext;
 pgd_t *mmu_idmap;
 
 static cpumask_t mmu_enabled_cpumask;
+
+#ifdef CONFIG_MAY_DISABLE_MMU
 bool mmu_enabled(void)
 {
 	struct thread_info *ti = current_thread_info();
 	return cpumask_test_cpu(ti->cpu, &mmu_enabled_cpumask);
 }
+#endif
 
 void mmu_set_enabled(void)
 {
-- 
2.4.3



* [PATCH 3/3] arm/arm64: allow building a single test
From: Andrew Jones @ 2015-06-25 16:12 UTC
  To: kvm, kvmarm; +Cc: pbonzini

This is mostly useful for building new tests that don't yet (and
may never) have entries in the makefiles (config-arm*.mak). Of course
it can also be used to build tests that do have entries, in order
to avoid building all tests when the plan is to run just the one.

Just do 'make TEST=some-test' to use it, where 'some-test' matches
the name of the source file, i.e. arm/some-test.c.
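
For example, with a hypothetical new test arm/my-experiment.c:

$ make TEST=my-experiment    # builds only arm/my-experiment.flat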

Signed-off-by: Andrew Jones <drjones@redhat.com>
---
 config/config-arm-common.mak | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/config/config-arm-common.mak b/config/config-arm-common.mak
index 314261ef60cf7..2ad29c115f946 100644
--- a/config/config-arm-common.mak
+++ b/config/config-arm-common.mak
@@ -12,6 +12,11 @@ endif
 tests-common = \
 	$(TEST_DIR)/selftest.flat
 
+ifneq ($(TEST),)
+	tests = $(TEST_DIR)/$(TEST).flat
+	tests-common =
+endif
+
 all: test_cases
 
 ##################################################################
@@ -69,4 +74,5 @@ generated_files = $(asm-offsets)
 
 test_cases: $(generated_files) $(tests-common) $(tests)
 
+$(TEST_DIR)/$(TEST).elf: $(cstart.o) $(TEST_DIR)/$(TEST).o
 $(TEST_DIR)/selftest.elf: $(cstart.o) $(TEST_DIR)/selftest.o
-- 
2.4.3


* Re: [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Paolo Bonzini @ 2015-06-25 16:23 UTC
  To: Andrew Jones, kvm, kvmarm; +Cc: christoffer.dall



On 25/06/2015 18:12, Andrew Jones wrote:
> Spinlock torture tests made it clear that checking mmu_enabled()
> every time we call spin_lock is a bad idea. As most tests will
> want the MMU enabled the entire time, just hard code
> mmu_enabled() to true. Tests that want to play with the MMU can
> be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
> back.

This doesn't work if you compile mmu.o just once.  Can you make
something like

static inline bool mmu_enabled(void)
{
	return disabled_mmu_cpu_count == 0 || __mmu_enabled();
}

...

bool __mmu_enabled(void)
{
	struct thread_info *ti = current_thread_info();
	return cpumask_test_cpu(ti->cpu, &mmu_enabled_cpumask);
}

?

Paolo


* Re: [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Andrew Jones @ 2015-06-25 16:55 UTC
  To: Paolo Bonzini; +Cc: kvm, kvmarm, christoffer.dall

On Thu, Jun 25, 2015 at 06:23:48PM +0200, Paolo Bonzini wrote:
> 
> 
> On 25/06/2015 18:12, Andrew Jones wrote:
> > Spinlock torture tests made it clear that checking mmu_enabled()
> > every time we call spin_lock is a bad idea. As most tests will
> > want the MMU enabled the entire time, just hard code
> > mmu_enabled() to true. Tests that want to play with the MMU can
> > be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
> > back.
> 
> This doesn't work if you compile mmu.o just once.  Can you make
> something like
> 
> static inline bool mmu_enabled(void)
> {
> 	return disabled_mmu_cpu_count == 0 || __mmu_enabled();
> }
> 
> ...
> 
> bool __mmu_enabled(void)
> {
> 	struct thread_info *ti = current_thread_info();
> 	return cpumask_test_cpu(ti->cpu, &mmu_enabled_cpumask);
> }
> 
> ?

Agreed. But I might as well actually add the support for disabling
the mmu, if I'm going to add yet another variable dependent on it.
We should drop this patch from this series, and I'll submit another
series of a few patches that
  - introduce mmu_disable
  - switch to assuming the mmu is enabled, and then manage disabled
    state instead
  - optimize mmu_enabled()
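
For the last item I'm thinking of something along the lines of
your suggestion, i.e. (untested sketch, names provisional):

/* mmu-api.h: the fast path just reads a counter; only when some
 * cpu has disabled its mmu do we fall back to the cpumask test */
extern int disabled_mmu_cpu_count;
extern bool __mmu_enabled(void);

static inline bool mmu_enabled(void)
{
	return disabled_mmu_cpu_count == 0 || __mmu_enabled();
}

/* mmu.c */
bool __mmu_enabled(void)
{
	struct thread_info *ti = current_thread_info();
	return cpumask_test_cpu(ti->cpu, &mmu_enabled_cpumask);
}

void mmu_disable(void)
{
	struct thread_info *ti = current_thread_info();

	disabled_mmu_cpu_count++;	/* mmu_enable would decrement */
	cpumask_clear_cpu(ti->cpu, &mmu_enabled_cpumask);
	/* ... actually turn the mmu off for this cpu ... */
}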

Thanks,
drew

> 
> Paolo


* Re: [PATCH 1/3] arm/arm64: spinlocks: fix memory barriers
From: Christoffer Dall @ 2015-06-29 10:27 UTC
  To: Andrew Jones; +Cc: kvm, kvmarm, pbonzini

On Thu, Jun 25, 2015 at 06:12:17PM +0200, Andrew Jones wrote:
> It shouldn't be necessary to use a barrier on the way into
> spin_lock. We'll be focused on a single address until we get
> it (exclusively) set, and then we'll do a barrier on the way
> out. It does, however, make sense to do a barrier on the way
> into spin_unlock, i.e. to ensure what we did in the critical
> section is ordered wrt what we do outside it, before we
> announce that we're outside.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  lib/arm/spinlock.c   | 8 ++++----
>  lib/arm64/spinlock.c | 5 ++---
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/arm/spinlock.c b/lib/arm/spinlock.c
> index 3b023ceaebf71..116ea5d7db930 100644
> --- a/lib/arm/spinlock.c
> +++ b/lib/arm/spinlock.c
> @@ -7,10 +7,9 @@ void spin_lock(struct spinlock *lock)
>  {
>  	u32 val, fail;
>  
> -	dmb();
> -
>  	if (!mmu_enabled()) {
>  		lock->v = 1;
> +		smp_mb();
>  		return;
>  	}
>  
> @@ -25,11 +24,12 @@ void spin_lock(struct spinlock *lock)
>  		: "r" (&lock->v)
>  		: "cc" );
>  	} while (fail);
> -	dmb();
> +
> +	smp_mb();
>  }
>  
>  void spin_unlock(struct spinlock *lock)
>  {
> +	smp_mb();
>  	lock->v = 0;
> -	dmb();
>  }
> diff --git a/lib/arm64/spinlock.c b/lib/arm64/spinlock.c
> index 68b68b75ba60d..a3907f03cacda 100644
> --- a/lib/arm64/spinlock.c
> +++ b/lib/arm64/spinlock.c
> @@ -13,10 +13,9 @@ void spin_lock(struct spinlock *lock)
>  {
>  	u32 val, fail;
>  
> -	smp_mb();
> -
>  	if (!mmu_enabled()) {
>  		lock->v = 1;
> +		smp_mb();
>  		return;
>  	}
>  
> @@ -35,9 +34,9 @@ void spin_lock(struct spinlock *lock)
>  
>  void spin_unlock(struct spinlock *lock)
>  {
> +	smp_mb();
>  	if (mmu_enabled())
>  		asm volatile("stlrh wzr, [%0]" :: "r" (&lock->v));
>  	else
>  		lock->v = 0;
> -	smp_mb();
>  }
> -- 
> 2.4.3
> 
Looks good to me.

-Christoffer


* Re: [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Christoffer Dall @ 2015-06-29 10:28 UTC
  To: Andrew Jones; +Cc: kvm, kvmarm, pbonzini

On Thu, Jun 25, 2015 at 06:12:18PM +0200, Andrew Jones wrote:
> Spinlock torture tests made it clear that checking mmu_enabled()
> every time we call spin_lock is a bad idea.

why a bad idea?  Does it break, is it slow?

> As most tests will
> want the MMU enabled the entire time, just hard code
> mmu_enabled() to true. Tests that want to play with the MMU can
> be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
> back.

If we don't care about performance, why this added complexity?

> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  lib/arm/asm/mmu-api.h | 4 ++++
>  lib/arm/mmu.c         | 3 +++
>  2 files changed, 7 insertions(+)
> 
> diff --git a/lib/arm/asm/mmu-api.h b/lib/arm/asm/mmu-api.h
> index 68dc707d67241..1a4d91163c398 100644
> --- a/lib/arm/asm/mmu-api.h
> +++ b/lib/arm/asm/mmu-api.h
> @@ -1,7 +1,11 @@
>  #ifndef __ASMARM_MMU_API_H_
>  #define __ASMARM_MMU_API_H_
>  extern pgd_t *mmu_idmap;
> +#ifdef CONFIG_MAY_DISABLE_MMU
>  extern bool mmu_enabled(void);
> +#else
> +#define mmu_enabled() (1)
> +#endif
>  extern void mmu_set_enabled(void);
>  extern void mmu_enable(pgd_t *pgtable);
>  extern void mmu_enable_idmap(void);
> diff --git a/lib/arm/mmu.c b/lib/arm/mmu.c
> index 732000a8eb088..405717b6332bf 100644
> --- a/lib/arm/mmu.c
> +++ b/lib/arm/mmu.c
> @@ -15,11 +15,14 @@ extern unsigned long etext;
>  pgd_t *mmu_idmap;
>  
>  static cpumask_t mmu_enabled_cpumask;
> +
> +#ifdef CONFIG_MAY_DISABLE_MMU
>  bool mmu_enabled(void)
>  {
>  	struct thread_info *ti = current_thread_info();
>  	return cpumask_test_cpu(ti->cpu, &mmu_enabled_cpumask);
>  }
> +#endif
>  
>  void mmu_set_enabled(void)
>  {
> -- 
> 2.4.3
> 


* Re: [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Andrew Jones @ 2015-06-29 10:44 UTC
  To: Christoffer Dall; +Cc: pbonzini, kvmarm, kvm

On Mon, Jun 29, 2015 at 12:28:32PM +0200, Christoffer Dall wrote:
> On Thu, Jun 25, 2015 at 06:12:18PM +0200, Andrew Jones wrote:
> > Spinlock torture tests made it clear that checking mmu_enabled()
> > every time we call spin_lock is a bad idea.
> 
> why a bad idea?  Does it break, is it slow?

Just slow, but really slow. After porting vos' spinlock test
over, there were three implementations to compare: this one,
gcc-builtin, and none. 'none' doesn't really matter, as it's not
"real". gcc-builtin took about 6 seconds to complete on my
machine (an x86 notebook; recall it's a tcg test), and this one
took 20 seconds.

> 
> > As most tests will
> > want the MMU enabled the entire time, just hard code
> > mmu_enabled() to true. Tests that want to play with the MMU can
> > be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
> > back.
> 
> If we don't care about performance, why this added complexity?

I think the series I sent that allows us to optimize mmu_enabled()
has about the same level of complexity (not much), but now the
test also takes only 6 seconds. So, IMO, the extra _count
variable is worth it.

Thanks,
drew


* Re: [PATCH 2/3] arm/arm64: speed up spinlocks and atomic ops
From: Christoffer Dall @ 2015-06-29 12:53 UTC
  To: Andrew Jones; +Cc: kvm, kvmarm, pbonzini

On Mon, Jun 29, 2015 at 12:44:27PM +0200, Andrew Jones wrote:
> On Mon, Jun 29, 2015 at 12:28:32PM +0200, Christoffer Dall wrote:
> > On Thu, Jun 25, 2015 at 06:12:18PM +0200, Andrew Jones wrote:
> > > Spinlock torture tests made it clear that checking mmu_enabled()
> > > every time we call spin_lock is a bad idea.
> > 
> > why a bad idea?  Does it break, is it slow?
> 
> Just slow, but really slow. After porting vos' spinlock test
> over, there were three implementations to compare: this one,
> gcc-builtin, and none. 'none' doesn't really matter, as it's not
> "real". gcc-builtin took about 6 seconds to complete on my
> machine (an x86 notebook; recall it's a tcg test), and this one
> took 20 seconds.
> 
> > 
> > > As most tests will
> > > want the MMU enabled the entire time, just hard code
> > > mmu_enabled() to true. Tests that want to play with the MMU can
> > > be compiled with CONFIG_MAY_DISABLE_MMU to get the actual check
> > > back.
> > 
> > If we don't care about performance, why this added complexity?
> 
> I think the series I sent that allows us to optimize mmu_enabled()
> has about the same level of complexity (not much), but now the
> test also takes only 6 seconds. So, IMO, the extra _count
> variable is worth it.
> 
OK, sounds good. I hadn't seen the other series before I glanced at
this one, and the other one didn't look that complicated.

-Christoffer


* Re: [PATCH 1/3] arm/arm64: spinlocks: fix memory barriers
From: Paolo Bonzini @ 2015-07-03 17:42 UTC
  To: Andrew Jones, kvm, kvmarm



On 25/06/2015 18:12, Andrew Jones wrote:
> It shouldn't be necessary to use a barrier on the way into
> spin_lock. We'll be focused on a single address until we get
> it (exclusively) set, and then we'll do a barrier on the way
> out. It does, however, make sense to do a barrier on the way
> into spin_unlock, i.e. to ensure what we did in the critical
> section is ordered wrt what we do outside it, before we
> announce that we're outside.
> 
> Signed-off-by: Andrew Jones <drjones@redhat.com>
> ---
>  lib/arm/spinlock.c   | 8 ++++----
>  lib/arm64/spinlock.c | 5 ++---
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/arm/spinlock.c b/lib/arm/spinlock.c
> index 3b023ceaebf71..116ea5d7db930 100644
> --- a/lib/arm/spinlock.c
> +++ b/lib/arm/spinlock.c
> @@ -7,10 +7,9 @@ void spin_lock(struct spinlock *lock)
>  {
>  	u32 val, fail;
>  
> -	dmb();
> -
>  	if (!mmu_enabled()) {
>  		lock->v = 1;
> +		smp_mb();
>  		return;
>  	}
>  
> @@ -25,11 +24,12 @@ void spin_lock(struct spinlock *lock)
>  		: "r" (&lock->v)
>  		: "cc" );
>  	} while (fail);
> -	dmb();
> +
> +	smp_mb();
>  }
>  
>  void spin_unlock(struct spinlock *lock)
>  {
> +	smp_mb();
>  	lock->v = 0;
> -	dmb();
>  }
> diff --git a/lib/arm64/spinlock.c b/lib/arm64/spinlock.c
> index 68b68b75ba60d..a3907f03cacda 100644
> --- a/lib/arm64/spinlock.c
> +++ b/lib/arm64/spinlock.c
> @@ -13,10 +13,9 @@ void spin_lock(struct spinlock *lock)
>  {
>  	u32 val, fail;
>  
> -	smp_mb();
> -
>  	if (!mmu_enabled()) {
>  		lock->v = 1;
> +		smp_mb();
>  		return;
>  	}
>  
> @@ -35,9 +34,9 @@ void spin_lock(struct spinlock *lock)
>  
>  void spin_unlock(struct spinlock *lock)
>  {
> +	smp_mb();
>  	if (mmu_enabled())
>  		asm volatile("stlrh wzr, [%0]" :: "r" (&lock->v));
>  	else
>  		lock->v = 0;
> -	smp_mb();
>  }
> 

Applied, thanks.

Paolo

