All of lore.kernel.org
* [PATCH] [bugfix] replace unnessary ldax with common ldr
@ 2016-08-30  6:35 ` Kenneth Lee
  0 siblings, 0 replies; 11+ messages in thread
From: Kenneth Lee @ 2016-08-30  6:35 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel, linux-kernel, Kenneth Lee

(add comment for the previous mail, sorry for the duplication)

There is no store-exclusive pairing with this load-exclusive. The exclusive
load is unnecessary and gives a misleading hint to the cache system.

Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
---
 arch/arm64/include/asm/spinlock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index c85e96d..3334c4f 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 	 */
 "	sevl\n"
 "2:	wfe\n"
-"	ldaxrh	%w2, %4\n"
+"	ldrh	%w2, %4\n"
 "	eor	%w1, %w2, %w0, lsr #16\n"
 "	cbnz	%w1, 2b\n"
 	/* We got the lock. Critical section starts here. */
-- 
1.9.1

* Re: [PATCH] [bugfix] replace unnessary ldax with common ldr
  2016-08-30  6:35 ` Kenneth Lee
@ 2016-08-30  9:07   ` Catalin Marinas
  -1 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2016-08-30  9:07 UTC (permalink / raw)
  To: Kenneth Lee; +Cc: Will Deacon, linux-kernel, linux-arm-kernel

On Tue, Aug 30, 2016 at 02:35:31PM +0800, Kenneth Lee wrote:
> (add comment for the previous mail, sorry for the duplication)
> 
> There is no store_ex pairing with this load_ex. It is not necessary and
> gave wrong hint to the cache system.
> 
> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
> ---
>  arch/arm64/include/asm/spinlock.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index c85e96d..3334c4f 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
>  	 */
>  "	sevl\n"
>  "2:	wfe\n"
> -"	ldaxrh	%w2, %4\n"
> +"	ldrh	%w2, %4\n"
>  "	eor	%w1, %w2, %w0, lsr #16\n"
>  "	cbnz	%w1, 2b\n"
>  	/* We got the lock. Critical section starts here. */

This is needed because the arch_spin_unlock() code only uses an STLR
without an explicit SEV (like we have on AArch32). An event is
automatically generated when the exclusive monitor is cleared by STLR.
But without setting it with a load exclusive in arch_spin_lock() (even
though it does not acquire the lock), there won't be anything to clear,
hence no event to be generated. In this case, the WFE would wait
indefinitely.
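
To make the interaction explicit, here is a rough sketch of the two
sides (not the literal kernel code; the exact arch_spin_unlock()
operands below are only illustrative):

  Waiter, arch_spin_lock() slow path:

	sevl			// pend a local event so that the first
				// wfe falls straight through
  2:	wfe			// sleep until an event is pending
	ldaxrh	%w2, %4		// re-read the owner ticket *and* set
				// the exclusive monitor
	eor	%w1, %w2, %w0, lsr #16
	cbnz	%w1, 2b		// not our turn yet: back to wfe

  Unlocker, arch_spin_unlock():

	stlrh	%w1, %0		// store-release of the new owner ticket;
				// clearing the monitor set by the waiter's
				// ldaxrh generates the event that wakes
				// the wfe

So with a plain ldrh there is no exclusive monitor for the stlrh to
clear, hence no event and nothing to wake the wfe.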

-- 
Catalin

* Re: [PATCH] [bugfix] replace unnessary ldax with common ldr
  2016-08-30  9:07   ` Catalin Marinas
@ 2016-08-31 13:30     ` Vladimir Murzin
  -1 siblings, 0 replies; 11+ messages in thread
From: Vladimir Murzin @ 2016-08-31 13:30 UTC (permalink / raw)
  To: Catalin Marinas, Kenneth Lee; +Cc: Will Deacon, linux-kernel, linux-arm-kernel

On 30/08/16 10:07, Catalin Marinas wrote:
> On Tue, Aug 30, 2016 at 02:35:31PM +0800, Kenneth Lee wrote:
>> (add comment for the previous mail, sorry for the duplication)
>>
>> There is no store_ex pairing with this load_ex. It is not necessary and
>> gave wrong hint to the cache system.
>>
>> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
>> ---
>>  arch/arm64/include/asm/spinlock.h | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
>> index c85e96d..3334c4f 100644
>> --- a/arch/arm64/include/asm/spinlock.h
>> +++ b/arch/arm64/include/asm/spinlock.h
>> @@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
>>  	 */
>>  "	sevl\n"
>>  "2:	wfe\n"
>> -"	ldaxrh	%w2, %4\n"
>> +"	ldrh	%w2, %4\n"
>>  "	eor	%w1, %w2, %w0, lsr #16\n"
>>  "	cbnz	%w1, 2b\n"
>>  	/* We got the lock. Critical section starts here. */
> 
> This is needed because the arch_spin_unlock() code only uses an STLR
> without an explicit SEV (like we have on AArch32). An event is
> automatically generated when the exclusive monitor is cleared by STLR.
> But without setting it with a load exclusive in arch_spin_lock() (even
> though it does not acquire the lock), there won't be anything to clear,
> hence no event to be generated. In this case, the WFE would wait
> indefinitely.
> 

Maybe worth adding this as a comment, no?

Cheers
Vladimir

* Re: [PATCH] [bugfix] replace unnessary ldax with common ldr
  2016-08-30  9:07   ` Catalin Marinas
@ 2016-09-01  3:44   ` Liguozhu (Kenneth)
  -1 siblings, 0 replies; 11+ messages in thread
From: Liguozhu (Kenneth) @ 2016-09-01  3:44 UTC (permalink / raw)
  To: linux-arm-kernel

Thanks for the clarification. 

Adding a comment there would be nice :)


-Kenneth Lee (Hisilicon)


-----Original Message-----
From: Catalin Marinas [mailto:catalin.marinas at arm.com]
Sent: 30 August 2016 17:07
To: Liguozhu (Kenneth)
Cc: Will Deacon; linux-kernel at vger.kernel.org; linux-arm-kernel at lists.infradead.org
Subject: Re: [PATCH] [bugfix] replace unnessary ldax with common ldr

On Tue, Aug 30, 2016 at 02:35:31PM +0800, Kenneth Lee wrote:
> (add comment for the previous mail, sorry for the duplication)
> 
> There is no store_ex pairing with this load_ex. It is not necessary and
> gave wrong hint to the cache system.
> 
> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
> ---
>  arch/arm64/include/asm/spinlock.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> index c85e96d..3334c4f 100644
> --- a/arch/arm64/include/asm/spinlock.h
> +++ b/arch/arm64/include/asm/spinlock.h
> @@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
>  	 */
>  "	sevl\n"
>  "2:	wfe\n"
> -"	ldaxrh	%w2, %4\n"
> +"	ldrh	%w2, %4\n"
>  "	eor	%w1, %w2, %w0, lsr #16\n"
>  "	cbnz	%w1, 2b\n"
>  	/* We got the lock. Critical section starts here. */

This is needed because the arch_spin_unlock() code only uses an STLR
without an explicit SEV (like we have on AArch32). An event is
automatically generated when the exclusive monitor is cleared by STLR.
But without setting it with a load exclusive in arch_spin_lock() (even
though it does not acquire the lock), there won't be anything to clear,
hence no event to be generated. In this case, the WFE would wait
indefinitely.

-- 
Catalin

* Re: [PATCH] [bugfix] replace unnessary ldax with common ldr
  2016-08-31 13:30     ` Vladimir Murzin
@ 2016-09-01 10:20       ` Catalin Marinas
  -1 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2016-09-01 10:20 UTC (permalink / raw)
  To: Vladimir Murzin; +Cc: Kenneth Lee, Will Deacon, linux-kernel, linux-arm-kernel

On Wed, Aug 31, 2016 at 02:30:40PM +0100, Vladimir Murzin wrote:
> On 30/08/16 10:07, Catalin Marinas wrote:
> > On Tue, Aug 30, 2016 at 02:35:31PM +0800, Kenneth Lee wrote:
> >> (add comment for the previous mail, sorry for the duplication)
> >>
> >> There is no store_ex pairing with this load_ex. It is not necessary and
> >> gave wrong hint to the cache system.
> >>
> >> Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
> >> ---
> >>  arch/arm64/include/asm/spinlock.h | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
> >> index c85e96d..3334c4f 100644
> >> --- a/arch/arm64/include/asm/spinlock.h
> >> +++ b/arch/arm64/include/asm/spinlock.h
> >> @@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
> >>  	 */
> >>  "	sevl\n"
> >>  "2:	wfe\n"
> >> -"	ldaxrh	%w2, %4\n"
> >> +"	ldrh	%w2, %4\n"
> >>  "	eor	%w1, %w2, %w0, lsr #16\n"
> >>  "	cbnz	%w1, 2b\n"
> >>  	/* We got the lock. Critical section starts here. */
> > 
> > This is needed because the arch_spin_unlock() code only uses an STLR
> > without an explicit SEV (like we have on AArch32). An event is
> > automatically generated when the exclusive monitor is cleared by STLR.
> > But without setting it with a load exclusive in arch_spin_lock() (even
> > though it does not acquire the lock), there won't be anything to clear,
> > hence no event to be generated. In this case, the WFE would wait
> > indefinitely.
> 
> Maybe worth to add this as a comment, no?

Yes, we just need to find someone to send a patch ;).
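
For illustration only (untested, not an actual submission), the
comment could say something like this, just above the ldaxrh in
arch_spin_lock():

	/*
	 * The load-exclusive below is not taking the lock; it sets the
	 * exclusive monitor so that the STLR in arch_spin_unlock()
	 * clears it and generates the event that wakes this WFE.
	 * A plain load here would leave the WFE with nothing to wake
	 * it up.
	 */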

-- 
Catalin

* [PATCH] [bugfix] replace unnessary ldax with common ldr
@ 2016-08-30  4:13 ` Kenneth Lee
  0 siblings, 0 replies; 11+ messages in thread
From: Kenneth Lee @ 2016-08-30  4:13 UTC (permalink / raw)
  To: Catalin Marinas, Will Deacon; +Cc: linux-arm-kernel, linux-kernel, Kenneth Lee

Signed-off-by: Kenneth Lee <liguozhu@hisilicon.com>
---
 arch/arm64/include/asm/spinlock.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index c85e96d..3334c4f 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -63,7 +63,7 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 	 */
 "	sevl\n"
 "2:	wfe\n"
-"	ldaxrh	%w2, %4\n"
+"	ldrh	%w2, %4\n"
 "	eor	%w1, %w2, %w0, lsr #16\n"
 "	cbnz	%w1, 2b\n"
 	/* We got the lock. Critical section starts here. */
-- 
1.9.1

end of thread (newest message: 2016-09-01 10:20 UTC)

Thread overview: 11+ messages
2016-08-30  6:35 [PATCH] [bugfix] replace unnessary ldax with common ldr Kenneth Lee
2016-08-30  9:07 ` Catalin Marinas
2016-08-31 13:30   ` Vladimir Murzin
2016-09-01 10:20     ` Catalin Marinas
2016-09-01  3:44   ` Re: " Liguozhu (Kenneth)
  -- strict thread matches above, loose matches on Subject: below --
2016-08-30  4:13 Kenneth Lee
