linux-hexagon.vger.kernel.org archive mirror
* [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local
@ 2023-11-21 14:23 wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size wuqiang.matt
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt

The arc, openrisc and hexagon architectures do not have
arch_cmpxchg_local() defined, so the use of try_cmpxchg_local() in
lib/objpool.c fails to build, as reported by the kernel test robot.

Patch 1 improves the data size checking logic for arc; patches 2/3/4
implement arch_cmpxchg[64]_local for arc/openrisc/hexagon. Patch 5
defines arch_cmpxchg_local as the existing __cmpxchg_local rather than
the generic variant.

wuqiang.matt (5):
  arch,locking/atomic: arc: arch_cmpxchg should check data size
  arch,locking/atomic: arc: add arch_cmpxchg[64]_local
  arch,locking/atomic: openrisc: add arch_cmpxchg[64]_local
  arch,locking/atomic: hexagon: add arch_cmpxchg[64]_local
  arch,locking/atomic: xtensa: define arch_cmpxchg_local as
    __cmpxchg_local

 arch/arc/include/asm/cmpxchg.h      | 40 ++++++++++++++++++----
 arch/hexagon/include/asm/cmpxchg.h  | 51 ++++++++++++++++++++++++++++-
 arch/openrisc/include/asm/cmpxchg.h |  6 ++++
 arch/xtensa/include/asm/cmpxchg.h   |  2 +-
 4 files changed, 91 insertions(+), 8 deletions(-)

-- 
2.40.1



* [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size
  2023-11-21 14:23 [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local wuqiang.matt
@ 2023-11-21 14:23 ` wuqiang.matt
  2023-11-22 22:17   ` Andi Shyti
  2023-11-21 14:23 ` [PATCH v3 2/5] arch,locking/atomic: arc: add arch_cmpxchg[64]_local wuqiang.matt
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt

arch_cmpxchg() should check data size rather than pointer size when
CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
emphasize its explicit support of the 32-bit data size, with BUILD_BUG_ON()
added to avoid any possible misuse with unsupported data types.

When CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
to achieve SMP-safety, so the BUILD_BUG_ON() check is unnecessary.

v2 -> v3:
  - Patches regrouped, with the improvement for xtensa included
  - Comments refined to address why these changes are needed

v1 -> v2:
  - Try using native cmpxchg variants if available, as Arnd advised

Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/arc/include/asm/cmpxchg.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index e138fde067de..bf46514f6f12 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -18,14 +18,16 @@
  * if (*ptr == @old)
  *      *ptr = @new
  */
-#define __cmpxchg(ptr, old, new)					\
+#define __cmpxchg_32(ptr, old, new)					\
 ({									\
 	__typeof__(*(ptr)) _prev;					\
 									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+									\
 	__asm__ __volatile__(						\
-	"1:	llock  %0, [%1]	\n"					\
+	"1:	llock  %0, [%1]		\n"				\
 	"	brne   %0, %2, 2f	\n"				\
-	"	scond  %3, [%1]	\n"					\
+	"	scond  %3, [%1]		\n"				\
 	"	bnz     1b		\n"				\
 	"2:				\n"				\
 	: "=&r"(_prev)	/* Early clobber prevent reg reuse */		\
@@ -47,7 +49,7 @@
 									\
 	switch(sizeof((_p_))) {						\
 	case 4:								\
-		_prev_ = __cmpxchg(_p_, _o_, _n_);			\
+		_prev_ = __cmpxchg_32(_p_, _o_, _n_);			\
 		break;							\
 	default:							\
 		BUILD_BUG();						\
@@ -65,8 +67,6 @@
 	__typeof__(*(ptr)) _prev_;					\
 	unsigned long __flags;						\
 									\
-	BUILD_BUG_ON(sizeof(_p_) != 4);					\
-									\
 	/*								\
 	 * spin lock/unlock provide the needed smp_mb() before/after	\
 	 */								\
-- 
2.40.1



* [PATCH v3 2/5] arch,locking/atomic: arc: add arch_cmpxchg[64]_local
  2023-11-21 14:23 [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size wuqiang.matt
@ 2023-11-21 14:23 ` wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 3/5] arch,locking/atomic: openrisc: " wuqiang.matt
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt,
	kernel test robot

arc doesn't have arch_cmpxchg_local() implemented, which causes
build failures for any reference to try_cmpxchg_local(),
as reported by the kernel test robot.

This patch implements arch_cmpxchg[64]_local() with the native
cmpxchg variant if the corresponding data size is supported;
otherwise generic_cmpxchg[64]_local() is used.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202310272207.tLPflya4-lkp@intel.com/

Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/arc/include/asm/cmpxchg.h | 28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index bf46514f6f12..91429f2350df 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -80,6 +80,34 @@
 
 #endif
 
+/*
+ * always make arch_cmpxchg[64]_local available, native cmpxchg
+ * will be used if available, then generic_cmpxchg[64]_local
+ */
+#include <asm-generic/cmpxchg-local.h>
+static inline unsigned long __cmpxchg_local(volatile void *ptr,
+				      unsigned long old,
+				      unsigned long new, int size)
+{
+	switch (size) {
+#ifdef CONFIG_ARC_HAS_LLSC
+	case 4:
+		return __cmpxchg_32((int32_t *)ptr, old, new);
+#endif
+	default:
+		return __generic_cmpxchg_local(ptr, old, new, size);
+	}
+
+	return old;
+}
+#define arch_cmpxchg_local(ptr, o, n) ({				\
+	(__typeof__(*ptr))__cmpxchg_local((ptr),			\
+					(unsigned long)(o),		\
+					(unsigned long)(n),		\
+					sizeof(*(ptr)));		\
+})
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+
 /*
  * xchg
  */
-- 
2.40.1



* [PATCH v3 3/5] arch,locking/atomic: openrisc: add arch_cmpxchg[64]_local
  2023-11-21 14:23 [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 2/5] arch,locking/atomic: arc: add arch_cmpxchg[64]_local wuqiang.matt
@ 2023-11-21 14:23 ` wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 4/5] arch,locking/atomic: hexagon: " wuqiang.matt
  2023-11-21 14:23 ` [PATCH v3 5/5] arch,locking/atomic: xtensa: define arch_cmpxchg_local as __cmpxchg_local wuqiang.matt
  4 siblings, 0 replies; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt,
	kernel test robot

openrisc doesn't have arch_cmpxchg_local() implemented, which causes
build failures for any reference to try_cmpxchg_local(),
as reported by the kernel test robot.

This patch implements arch_cmpxchg[64]_local() with the native
cmpxchg variant if the corresponding data size is supported;
otherwise generic_cmpxchg[64]_local() is used.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202310272207.tLPflya4-lkp@intel.com/

Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/openrisc/include/asm/cmpxchg.h | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index 8ee151c072e4..f1ffe8b6f5ef 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -139,6 +139,12 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 					       (unsigned long)(n),	\
 					       sizeof(*(ptr)));		\
 	})
+#define arch_cmpxchg_local arch_cmpxchg
+
+/* always make arch_cmpxchg64_local available for openrisc */
+#include <asm-generic/cmpxchg-local.h>
+
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 /*
  * This function doesn't exist, so you'll get a linker error if
-- 
2.40.1



* [PATCH v3 4/5] arch,locking/atomic: hexagon: add arch_cmpxchg[64]_local
  2023-11-21 14:23 [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local wuqiang.matt
                   ` (2 preceding siblings ...)
  2023-11-21 14:23 ` [PATCH v3 3/5] arch,locking/atomic: openrisc: " wuqiang.matt
@ 2023-11-21 14:23 ` wuqiang.matt
  2023-11-22 16:55   ` Brian Cain
  2023-11-21 14:23 ` [PATCH v3 5/5] arch,locking/atomic: xtensa: define arch_cmpxchg_local as __cmpxchg_local wuqiang.matt
  4 siblings, 1 reply; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt,
	kernel test robot

hexagon doesn't have arch_cmpxchg_local() implemented, which causes
build failures for any reference to try_cmpxchg_local(),
as reported by the kernel test robot.

This patch implements arch_cmpxchg[64]_local() with the native
cmpxchg variant if the corresponding data size is supported;
otherwise generic_cmpxchg[64]_local() is used.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202310272207.tLPflya4-lkp@intel.com/

Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
---
 arch/hexagon/include/asm/cmpxchg.h | 51 +++++++++++++++++++++++++++++-
 1 file changed, 50 insertions(+), 1 deletion(-)

diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
index bf6cf5579cf4..302fa30f25aa 100644
--- a/arch/hexagon/include/asm/cmpxchg.h
+++ b/arch/hexagon/include/asm/cmpxchg.h
@@ -8,6 +8,8 @@
 #ifndef _ASM_CMPXCHG_H
 #define _ASM_CMPXCHG_H
 
+#include <linux/build_bug.h>
+
 /*
  * __arch_xchg - atomically exchange a register and a memory location
  * @x: value to swap
@@ -51,13 +53,15 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
  *  variable casting.
  */
 
-#define arch_cmpxchg(ptr, old, new)				\
+#define __cmpxchg_32(ptr, old, new)				\
 ({								\
 	__typeof__(ptr) __ptr = (ptr);				\
 	__typeof__(*(ptr)) __old = (old);			\
 	__typeof__(*(ptr)) __new = (new);			\
 	__typeof__(*(ptr)) __oldval = 0;			\
 								\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);			\
+								\
 	asm volatile(						\
 		"1:	%0 = memw_locked(%1);\n"		\
 		"	{ P0 = cmp.eq(%0,%2);\n"		\
@@ -72,4 +76,49 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
 	__oldval;						\
 })
 
+#define __cmpxchg(ptr, old, val, size)				\
+({								\
+	__typeof__(*(ptr)) oldval;				\
+								\
+	switch (size) {						\
+	case 4:							\
+		oldval = __cmpxchg_32(ptr, old, val);		\
+		break;						\
+	default:						\
+		BUILD_BUG();					\
+		oldval = val;					\
+		break;						\
+	}							\
+								\
+	oldval;							\
+})
+
+#define arch_cmpxchg(ptr, o, n)	__cmpxchg((ptr), (o), (n), sizeof(*(ptr)))
+
+/*
+ * always make arch_cmpxchg[64]_local available, native cmpxchg
+ * will be used if available, then generic_cmpxchg[64]_local
+ */
+#include <asm-generic/cmpxchg-local.h>
+
+#define arch_cmpxchg_local(ptr, old, val)			\
+({								\
+	__typeof__(*(ptr)) __retval;				\
+	int __size = sizeof(*(ptr));				\
+								\
+	switch (__size) {					\
+	case 4:							\
+		__retval = __cmpxchg_32(ptr, old, val);		\
+		break;						\
+	default:						\
+		__retval = __generic_cmpxchg_local(ptr, old,	\
+						val, __size);	\
+		break;						\
+	}							\
+								\
+	__retval;						\
+})
+
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+
 #endif /* _ASM_CMPXCHG_H */
-- 
2.40.1



* [PATCH v3 5/5] arch,locking/atomic: xtensa: define arch_cmpxchg_local as __cmpxchg_local
  2023-11-21 14:23 [PATCH v3 0/5] arch,locking/atomic: add arch_cmpxchg[64]_local wuqiang.matt
                   ` (3 preceding siblings ...)
  2023-11-21 14:23 ` [PATCH v3 4/5] arch,locking/atomic: hexagon: " wuqiang.matt
@ 2023-11-21 14:23 ` wuqiang.matt
  4 siblings, 0 replies; 9+ messages in thread
From: wuqiang.matt @ 2023-11-21 14:23 UTC (permalink / raw)
  To: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux, wuqiang.matt

The xtensa architecture already has __cmpxchg_local() defined but unused.
Its sole purpose is to implement arch_cmpxchg_local(), just as other
architectures such as x86, arm and powerpc define arch_cmpxchg_local()
in terms of their own __cmpxchg_local().

Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
---
 arch/xtensa/include/asm/cmpxchg.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/xtensa/include/asm/cmpxchg.h b/arch/xtensa/include/asm/cmpxchg.h
index 675a11ea8de7..956c9925df1c 100644
--- a/arch/xtensa/include/asm/cmpxchg.h
+++ b/arch/xtensa/include/asm/cmpxchg.h
@@ -108,7 +108,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
  * them available.
  */
 #define arch_cmpxchg_local(ptr, o, n)				  	       \
-	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
+	((__typeof__(*(ptr)))__cmpxchg_local((ptr), (unsigned long)(o),        \
 			(unsigned long)(n), sizeof(*(ptr))))
 #define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #define arch_cmpxchg64(ptr, o, n)    arch_cmpxchg64_local((ptr), (o), (n))
-- 
2.40.1



* RE: [PATCH v3 4/5] arch,locking/atomic: hexagon: add arch_cmpxchg[64]_local
  2023-11-21 14:23 ` [PATCH v3 4/5] arch,locking/atomic: hexagon: " wuqiang.matt
@ 2023-11-22 16:55   ` Brian Cain
  0 siblings, 0 replies; 9+ messages in thread
From: Brian Cain @ 2023-11-22 16:55 UTC (permalink / raw)
  To: wuqiang.matt, ubizjak, mark.rutland, vgupta, jonas,
	stefan.kristiansson, shorne, chris, jcmvbkbc, geert, andi.shyti,
	mingo, palmer, andrzej.hajda, arnd, peterz, mhiramat
  Cc: linux-arch, linux-snps-arc, linux-kernel, linux-hexagon,
	linux-openrisc, linux-trace-kernel, mattwu, linux,
	kernel test robot



> -----Original Message-----
> From: wuqiang.matt <wuqiang.matt@bytedance.com>
> Sent: Tuesday, November 21, 2023 8:24 AM
> To: ubizjak@gmail.com; mark.rutland@arm.com; vgupta@kernel.org; Brian
> Cain <bcain@quicinc.com>; jonas@southpole.se;
> stefan.kristiansson@saunalahti.fi; shorne@gmail.com; chris@zankel.net;
> jcmvbkbc@gmail.com; geert@linux-m68k.org; andi.shyti@linux.intel.com;
> mingo@kernel.org; palmer@rivosinc.com; andrzej.hajda@intel.com;
> arnd@arndb.de; peterz@infradead.org; mhiramat@kernel.org
> Cc: linux-arch@vger.kernel.org; linux-snps-arc@lists.infradead.org; linux-
> kernel@vger.kernel.org; linux-hexagon@vger.kernel.org; linux-
> openrisc@vger.kernel.org; linux-trace-kernel@vger.kernel.org;
> mattwu@163.com; linux@roeck-us.net; wuqiang.matt
> <wuqiang.matt@bytedance.com>; kernel test robot <lkp@intel.com>
> Subject: [PATCH v3 4/5] arch,locking/atomic: hexagon: add
> arch_cmpxchg[64]_local
> 
> hexagon doesn't have arch_cmpxchg_local() implemented, which causes
> build failures for any reference to try_cmpxchg_local(),
> as reported by the kernel test robot.
> 
> This patch implements arch_cmpxchg[64]_local() with the native
> cmpxchg variant if the corresponding data size is supported;
> otherwise generic_cmpxchg[64]_local() is used.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202310272207.tLPflya4-
> lkp@intel.com/
> 
> Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>  arch/hexagon/include/asm/cmpxchg.h | 51 +++++++++++++++++++++++++++++-
>  1 file changed, 50 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/hexagon/include/asm/cmpxchg.h
> b/arch/hexagon/include/asm/cmpxchg.h
> index bf6cf5579cf4..302fa30f25aa 100644
> --- a/arch/hexagon/include/asm/cmpxchg.h
> +++ b/arch/hexagon/include/asm/cmpxchg.h
> @@ -8,6 +8,8 @@
>  #ifndef _ASM_CMPXCHG_H
>  #define _ASM_CMPXCHG_H
> 
> +#include <linux/build_bug.h>
> +
>  /*
>   * __arch_xchg - atomically exchange a register and a memory location
>   * @x: value to swap
> @@ -51,13 +53,15 @@ __arch_xchg(unsigned long x, volatile void *ptr, int
> size)
>   *  variable casting.
>   */
> 
> -#define arch_cmpxchg(ptr, old, new)                            \
> +#define __cmpxchg_32(ptr, old, new)                            \
>  ({                                                             \
>         __typeof__(ptr) __ptr = (ptr);                          \
>         __typeof__(*(ptr)) __old = (old);                       \
>         __typeof__(*(ptr)) __new = (new);                       \
>         __typeof__(*(ptr)) __oldval = 0;                        \
>                                                                 \
> +       BUILD_BUG_ON(sizeof(*(ptr)) != 4);                      \
> +                                                               \
>         asm volatile(                                           \
>                 "1:     %0 = memw_locked(%1);\n"                \
>                 "       { P0 = cmp.eq(%0,%2);\n"                \
> @@ -72,4 +76,49 @@ __arch_xchg(unsigned long x, volatile void *ptr, int size)
>         __oldval;                                               \
>  })
> 
> +#define __cmpxchg(ptr, old, val, size)                         \
> +({                                                             \
> +       __typeof__(*(ptr)) oldval;                              \
> +                                                               \
> +       switch (size) {                                         \
> +       case 4:                                                 \
> +               oldval = __cmpxchg_32(ptr, old, val);           \
> +               break;                                          \
> +       default:                                                \
> +               BUILD_BUG();                                    \
> +               oldval = val;                                   \
> +               break;                                          \
> +       }                                                       \
> +                                                               \
> +       oldval;                                                 \
> +})
> +
> +#define arch_cmpxchg(ptr, o, n)        __cmpxchg((ptr), (o), (n), sizeof(*(ptr)))
> +
> +/*
> + * always make arch_cmpxchg[64]_local available, native cmpxchg
> + * will be used if available, then generic_cmpxchg[64]_local
> + */
> +#include <asm-generic/cmpxchg-local.h>
> +
> +#define arch_cmpxchg_local(ptr, old, val)                      \
> +({                                                             \
> +       __typeof__(*(ptr)) __retval;                            \
> +       int __size = sizeof(*(ptr));                            \
> +                                                               \
> +       switch (__size) {                                       \
> +       case 4:                                                 \
> +               __retval = __cmpxchg_32(ptr, old, val);         \
> +               break;                                          \
> +       default:                                                \
> +               __retval = __generic_cmpxchg_local(ptr, old,    \
> +                                               val, __size);   \
> +               break;                                          \
> +       }                                                       \
> +                                                               \
> +       __retval;                                               \
> +})
> +
> +#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr),
> (o), (n))
> +
>  #endif /* _ASM_CMPXCHG_H */
> --
> 2.40.1

Acked-by: Brian Cain <bcain@quicinc.com>


* Re: [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size
  2023-11-21 14:23 ` [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size wuqiang.matt
@ 2023-11-22 22:17   ` Andi Shyti
  2023-11-23  0:06     ` wuqiang.matt
  0 siblings, 1 reply; 9+ messages in thread
From: Andi Shyti @ 2023-11-22 22:17 UTC (permalink / raw)
  To: wuqiang.matt
  Cc: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, andi.shyti, mingo, palmer,
	andrzej.hajda, arnd, peterz, mhiramat, linux-arch,
	linux-snps-arc, linux-kernel, linux-hexagon, linux-openrisc,
	linux-trace-kernel, mattwu, linux

Hi Wuqiang,

On Tue, Nov 21, 2023 at 10:23:43PM +0800, wuqiang.matt wrote:
> arch_cmpxchg() should check data size rather than pointer size when
> CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
> emphasize its explicit support of the 32-bit data size, with BUILD_BUG_ON()
> added to avoid any possible misuse with unsupported data types.
> 
> When CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
> to achieve SMP-safety, so the BUILD_BUG_ON() check is unnecessary.
> 
> v2 -> v3:
>   - Patches regrouped, with the improvement for xtensa included
>   - Comments refined to address why these changes are needed
> 
> v1 -> v2:
>   - Try using native cmpxchg variants if available, as Arnd advised
> 
> Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
> ---
>  arch/arc/include/asm/cmpxchg.h | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
> index e138fde067de..bf46514f6f12 100644
> --- a/arch/arc/include/asm/cmpxchg.h
> +++ b/arch/arc/include/asm/cmpxchg.h
> @@ -18,14 +18,16 @@
>   * if (*ptr == @old)
>   *      *ptr = @new
>   */
> -#define __cmpxchg(ptr, old, new)					\
> +#define __cmpxchg_32(ptr, old, new)					\
>  ({									\
>  	__typeof__(*(ptr)) _prev;					\
>  									\
> +	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
> +									\
>  	__asm__ __volatile__(						\
> -	"1:	llock  %0, [%1]	\n"					\
> +	"1:	llock  %0, [%1]		\n"				\
>  	"	brne   %0, %2, 2f	\n"				\
> -	"	scond  %3, [%1]	\n"					\
> +	"	scond  %3, [%1]		\n"				\
>  	"	bnz     1b		\n"				\
>  	"2:				\n"				\
>  	: "=&r"(_prev)	/* Early clobber prevent reg reuse */		\
> @@ -47,7 +49,7 @@
>  									\
>  	switch(sizeof((_p_))) {						\
>  	case 4:								\
> -		_prev_ = __cmpxchg(_p_, _o_, _n_);			\
> +		_prev_ = __cmpxchg_32(_p_, _o_, _n_);			\
>  		break;							\
>  	default:							\
>  		BUILD_BUG();						\
> @@ -65,8 +67,6 @@
>  	__typeof__(*(ptr)) _prev_;					\
>  	unsigned long __flags;						\
>  									\
> -	BUILD_BUG_ON(sizeof(_p_) != 4);					\
> -									\

I think I made some comments here that have not been addressed or
replied.

Thanks,
Andi

>  	/*								\
>  	 * spin lock/unlock provide the needed smp_mb() before/after	\
>  	 */								\
> -- 
> 2.40.1


* Re: [PATCH v3 1/5] arch,locking/atomic: arc: arch_cmpxchg should check data size
  2023-11-22 22:17   ` Andi Shyti
@ 2023-11-23  0:06     ` wuqiang.matt
  0 siblings, 0 replies; 9+ messages in thread
From: wuqiang.matt @ 2023-11-23  0:06 UTC (permalink / raw)
  To: Andi Shyti
  Cc: ubizjak, mark.rutland, vgupta, bcain, jonas, stefan.kristiansson,
	shorne, chris, jcmvbkbc, geert, mingo, palmer, andrzej.hajda,
	arnd, peterz, mhiramat, linux-arch, linux-snps-arc, linux-kernel,
	linux-hexagon, linux-openrisc, linux-trace-kernel, mattwu, linux

Hello Andi,

On 2023/11/23 06:17, Andi Shyti wrote:
> Hi Wuqiang,
> 
> On Tue, Nov 21, 2023 at 10:23:43PM +0800, wuqiang.matt wrote:
>> arch_cmpxchg() should check data size rather than pointer size when
>> CONFIG_ARC_HAS_LLSC is defined. So rename __cmpxchg to __cmpxchg_32 to
>> emphasize its explicit support of the 32-bit data size, with BUILD_BUG_ON()
>> added to avoid any possible misuse with unsupported data types.
>>
>> When CONFIG_ARC_HAS_LLSC is undefined, arch_cmpxchg() uses a spinlock
>> to achieve SMP-safety, so the BUILD_BUG_ON() check is unnecessary.
>>
>> v2 -> v3:
>>    - Patches regrouped, with the improvement for xtensa included
>>    - Comments refined to address why these changes are needed
>>
>> v1 -> v2:
>>    - Try using native cmpxchg variants if available, as Arnd advised

BTW, the changelog should be in the cover letter. I'll correct it in the
next version, so don't bother resending and making more noise.

>> Signed-off-by: wuqiang.matt <wuqiang.matt@bytedance.com>
>> Reviewed-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
>> ---
>>   arch/arc/include/asm/cmpxchg.h | 12 ++++++------
>>   1 file changed, 6 insertions(+), 6 deletions(-)
>>
>> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
>> index e138fde067de..bf46514f6f12 100644
>> --- a/arch/arc/include/asm/cmpxchg.h
>> +++ b/arch/arc/include/asm/cmpxchg.h
>> @@ -18,14 +18,16 @@
>>    * if (*ptr == @old)
>>    *      *ptr = @new
>>    */
>> -#define __cmpxchg(ptr, old, new)					\
>> +#define __cmpxchg_32(ptr, old, new)					\
>>   ({									\
>>   	__typeof__(*(ptr)) _prev;					\
>>   									\
>> +	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
>> +									\
>>   	__asm__ __volatile__(						\
>> -	"1:	llock  %0, [%1]	\n"					\
>> +	"1:	llock  %0, [%1]		\n"				\
>>   	"	brne   %0, %2, 2f	\n"				\
>> -	"	scond  %3, [%1]	\n"					\
>> +	"	scond  %3, [%1]		\n"				\
>>   	"	bnz     1b		\n"				\
>>   	"2:				\n"				\
>>   	: "=&r"(_prev)	/* Early clobber prevent reg reuse */		\
>> @@ -47,7 +49,7 @@
>>   									\
>>   	switch(sizeof((_p_))) {						\
>>   	case 4:								\
>> -		_prev_ = __cmpxchg(_p_, _o_, _n_);			\
>> +		_prev_ = __cmpxchg_32(_p_, _o_, _n_);			\
>>   		break;							\
>>   	default:							\
>>   		BUILD_BUG();						\
>> @@ -65,8 +67,6 @@
>>   	__typeof__(*(ptr)) _prev_;					\
>>   	unsigned long __flags;						\
>>   									\
>> -	BUILD_BUG_ON(sizeof(_p_) != 4);					\
>> -									\
> 
> I think I made some comments here that have not been addressed or
> replied.

Sorry, I haven't seen your message. Could you resend? I rechecked
my mailbox and the mailing lists, but no luck.


> Thanks,
> Andi

Regards,
Wuqiang

> 
>>   	/*								\
>>   	 * spin lock/unlock provide the needed smp_mb() before/after	\
>>   	 */								\
>> -- 
>> 2.40.1


