linux-kernel.vger.kernel.org archive mirror
* Re: Re: Re: Re: [PATCH v3] arm: Adding support for atomic half word exchange
@ 2015-10-06  8:03 Sarbojit Ganguly
  2015-10-06 14:54 ` Will Deacon
  0 siblings, 1 reply; 2+ messages in thread
From: Sarbojit Ganguly @ 2015-10-06  8:03 UTC (permalink / raw)
  To: Sarbojit Ganguly, Will Deacon
  Cc: linux, catalin.marinas, Waiman.Long, peterz, VIKRAM MUPPARTHI,
	linux-kernel, SUNEEL KUMAR SURIMANI, SHARAN ALLUR, torvalds,
	linux-arm-kernel

Hello Will,
Here is version 3 of the patch, correcting the earlier issues.

v2 -> v3 : Removed the Qspinlock-related comment and changed !defined to
#ifndef.
v1 -> v2 : Extended the guard code to cover the byte exchange case as
well, following Will Deacon's suggestion.
Checkpatch has been run and its findings were addressed.

Qspinlock on ARM requires a half-word atomic exchange, which was not
previously supported, so __xchg() is extended to provide it. ARMv6 and
lower do not support ldrex{b,h}, so guard code is added to prevent
build breaks.

Signed-off-by: Sarbojit Ganguly <ganguly.s@samsung.com>
---
 arch/arm/include/asm/cmpxchg.h | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 916a274..c6436c1 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 
 	switch (size) {
 #if __LINUX_ARM_ARCH__ >= 6
+#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
 	case 1:
 		asm volatile("@	__xchg1\n"
 		"1:	ldrexb	%0, [%3]\n"
@@ -49,6 +50,17 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 			: "r" (x), "r" (ptr)
 			: "memory", "cc");
 		break;
+	case 2:
+		asm volatile("@	__xchg2\n"
+		"1:	ldrexh	%0, [%3]\n"
+		"	strexh	%1, %2, [%3]\n"
+		"	teq	%1, #0\n"
+		"	bne	1b"
+			: "=&r" (ret), "=&r" (tmp)
+			: "r" (x), "r" (ptr)
+			: "memory", "cc");
+		break;
+#endif
 	case 4:
 		asm volatile("@	__xchg4\n"
 		"1:	ldrex	%0, [%3]\n"
-- 
1.9.1


------- Original Message -------
Sender : Sarbojit Ganguly<ganguly.s@samsung.com> Technical Lead/SRI-Bangalore-AP Systems 1/Samsung Electronics
Date : Oct 06, 2015 08:38 (GMT+05:30)
Title : Re: Re: Re: [PATCH v2] arm: Adding support for atomic half word exchange

Hello Will,

Thank you so much for the review. I have thought it over and it makes
sense not to have that comment in cmpxchg.h. I will also change !defined
to #ifndef and quickly post a v3.

Regards,
Sarbojit

------- Original Message -------
Sender : Will Deacon
Date : Oct 05, 2015 21:30 (GMT+05:30)
Title : Re: Re: [PATCH v2] arm: Adding support for atomic half word exchange

On Mon, Oct 05, 2015 at 01:10:53PM +0000, Sarbojit Ganguly wrote:
> My sincere apologies for the format issue. This was due to the e-mail editor
> which reformats the text.
> I am reposting the patch, please let me know if it is ok this time.
> 
> 
> v1 -> v2 : Extended the guard code to cover the byte exchange case as 
> well, following Will Deacon's suggestion.
> Checkpatch has been run and its findings were addressed.
> 
> Qspinlock on ARM requires a half-word atomic exchange, which was not
> previously supported, so __xchg() is extended to provide it. ARMv6 and
> lower do not support ldrex{b,h}, so guard code is added to prevent
> build breaks.
> 
> Signed-off-by: Sarbojit Ganguly 
> ---
>  arch/arm/include/asm/cmpxchg.h | 17 +++++++++++++++++
>  1 file changed, 17 insertions(+)
> 
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index 916a274..a53cbeb 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  
>  	switch (size) {
>  #if __LINUX_ARM_ARCH__ >= 6
> +#if !defined(CONFIG_CPU_V6)

#ifndef ? (to match the __cmpxchg code).

>  	case 1:
>  		asm volatile("@	__xchg1\n"
>  		"1:	ldrexb	%0, [%3]\n"
> @@ -49,6 +50,22 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  			: "r" (x), "r" (ptr)
>  			: "memory", "cc");
>  		break;
> +
> +	/*
> +	 * Half-word atomic exchange, required
> +	 * for Qspinlock support on ARM.
> +	 */

I think I said it before, but I don't think this comment is of any real
value.

Other than those, this looks ok to me.

Will


* Re: Re: Re: Re: [PATCH v3] arm: Adding support for atomic half word exchange
  2015-10-06  8:03 Re: Re: Re: [PATCH v3] arm: Adding support for atomic half word exchange Sarbojit Ganguly
@ 2015-10-06 14:54 ` Will Deacon
  0 siblings, 0 replies; 2+ messages in thread
From: Will Deacon @ 2015-10-06 14:54 UTC (permalink / raw)
  To: Sarbojit Ganguly
  Cc: linux, catalin.marinas, Waiman.Long, peterz, VIKRAM MUPPARTHI,
	linux-kernel, SUNEEL KUMAR SURIMANI, SHARAN ALLUR, torvalds,
	linux-arm-kernel

On Tue, Oct 06, 2015 at 08:03:02AM +0000, Sarbojit Ganguly wrote:
> Here is version 3 of the patch, correcting the earlier issues.

This looks good to me now:

  Acked-by: Will Deacon <will.deacon@arm.com>

> v2 -> v3 : Removed the Qspinlock-related comment and changed !defined to
> #ifndef.
> v1 -> v2 : Extended the guard code to cover the byte exchange case as
> well, following Will Deacon's suggestion.
> Checkpatch has been run and its findings were addressed.

The part of your text up until here doesn't belong in the commit message.
You'll also need to send this to Russell's patch system.

Will

> Qspinlock on ARM requires a half-word atomic exchange, which was not
> previously supported, so __xchg() is extended to provide it. ARMv6 and
> lower do not support ldrex{b,h}, so guard code is added to prevent
> build breaks.
> 
> Signed-off-by: Sarbojit Ganguly <ganguly.s@samsung.com>
> ---
>  arch/arm/include/asm/cmpxchg.h | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index 916a274..c6436c1 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -39,6 +39,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  
>  	switch (size) {
>  #if __LINUX_ARM_ARCH__ >= 6
> +#ifndef CONFIG_CPU_V6 /* MIN ARCH >= V6K */
>  	case 1:
>  		asm volatile("@	__xchg1\n"
>  		"1:	ldrexb	%0, [%3]\n"
> @@ -49,6 +50,17 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  			: "r" (x), "r" (ptr)
>  			: "memory", "cc");
>  		break;
> +	case 2:
> +		asm volatile("@	__xchg2\n"
> +		"1:	ldrexh	%0, [%3]\n"
> +		"	strexh	%1, %2, [%3]\n"
> +		"	teq	%1, #0\n"
> +		"	bne	1b"
> +			: "=&r" (ret), "=&r" (tmp)
> +			: "r" (x), "r" (ptr)
> +			: "memory", "cc");
> +		break;
> +#endif
>  	case 4:
>  		asm volatile("@	__xchg4\n"
>  		"1:	ldrex	%0, [%3]\n"
> -- 
> 1.9.1

