* [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC
@ 2021-05-10  9:37 Mark Rutland
  2021-05-10  9:37 ` [PATCH 01/33] locking/atomic: make ARCH_ATOMIC a Kconfig symbol Mark Rutland
                   ` (33 more replies)
  0 siblings, 34 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

This series (based on v5.13-rc1) converts all architectures to
ARCH_ATOMIC. This will allow the use of instrumented atomics on all
architectures (e.g. for KASAN and similar), and simplifies the core
atomic code (which should allow for easier rework of the fallbacks and
other bits in future).
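
For reference, with ARCH_ATOMIC the architecture provides arch_atomic_*()
and the generated wrappers in asm-generic/atomic-instrumented.h supply
atomic_*() on top, roughly like this (a simplified sketch rather than the
verbatim generated header):

  static __always_inline int
  atomic_add_return(int i, atomic_t *v)
  {
  	/* tell KASAN/KCSAN about the access, then call the arch op */
  	instrument_atomic_read_write(v, sizeof(*v));
  	return arch_atomic_add_return(i, v);
  }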

The series is split into three parts:

1) Some preparatory work readies architectures and common code for the
   conversion. In this phase h8300 and microblaze are
   converted to use the asm-generic atomics exclusively, and the
   asm-generic implementations are made to function with or without
   ARCH_ATOMIC.

2) Architectures are converted one-by-one to use the ARCH_ATOMIC
   interface. I've converted each architecture with its own patch (even
   where the conversion is trivial) to make review and bisection easier.

3) The code handling !ARCH_ATOMIC is removed.

Note: I've generated the patches with:

  git format-patch -C -M -D 

... so the preimage of include/linux/atomic-fallback.h is not included
in the diff when it is deleted.

The series can also be found in my atomics/arch-atomic branch on
kernel.org:

  https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/arch-atomic
  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/arch-atomic

Thanks,
Mark.

Mark Rutland (33):
  locking/atomic: make ARCH_ATOMIC a Kconfig symbol
  locking/atomic: net: use linux/atomic.h for xchg & cmpxchg
  locking/atomic: h8300: use asm-generic exclusively
  locking/atomic: microblaze: use asm-generic exclusively
  locking/atomic: openrisc: avoid asm-generic/atomic.h
  locking/atomic: atomic: remove stale comments
  locking/atomic: atomic: remove redundant include
  locking/atomic: atomic: simplify ifdeffery
  locking/atomic: atomic: support ARCH_ATOMIC
  locking/atomic: atomic64: support ARCH_ATOMIC
  locking/atomic: cmpxchg: make `generic` a prefix
  locking/atomic: cmpxchg: support ARCH_ATOMIC
  locking/atomic: alpha: move to ARCH_ATOMIC
  locking/atomic: arc: move to ARCH_ATOMIC
  locking/atomic: arm: move to ARCH_ATOMIC
  locking/atomic: csky: move to ARCH_ATOMIC
  locking/atomic: h8300: move to ARCH_ATOMIC
  locking/atomic: hexagon: move to ARCH_ATOMIC
  locking/atomic: ia64: move to ARCH_ATOMIC
  locking/atomic: m68k: move to ARCH_ATOMIC
  locking/atomic: microblaze: move to ARCH_ATOMIC
  locking/atomic: mips: move to ARCH_ATOMIC
  locking/atomic: nds32: move to ARCH_ATOMIC
  locking/atomic: nios2: move to ARCH_ATOMIC
  locking/atomic: openrisc: move to ARCH_ATOMIC
  locking/atomic: parisc: move to ARCH_ATOMIC
  locking/atomic: powerpc: move to ARCH_ATOMIC
  locking/atomic: riscv: move to ARCH_ATOMIC
  locking/atomic: sh: move to ARCH_ATOMIC
  locking/atomic: sparc: move to ARCH_ATOMIC
  locking/atomic: xtensa: move to ARCH_ATOMIC
  locking/atomic: delete !ARCH_ATOMIC remnants
  locking/atomics: atomic-instrumented: simplify ifdeffery

 arch/alpha/include/asm/atomic.h           |   88 +-
 arch/alpha/include/asm/cmpxchg.h          |   12 +-
 arch/arc/include/asm/atomic.h             |   56 +-
 arch/arc/include/asm/cmpxchg.h            |    8 +-
 arch/arm/include/asm/atomic.h             |   96 +-
 arch/arm/include/asm/cmpxchg.h            |   20 +-
 arch/arm64/include/asm/atomic.h           |    2 -
 arch/csky/include/asm/cmpxchg.h           |    8 +-
 arch/h8300/include/asm/Kbuild             |    1 +
 arch/h8300/include/asm/atomic.h           |   97 --
 arch/h8300/include/asm/cmpxchg.h          |   66 -
 arch/hexagon/include/asm/atomic.h         |   28 +-
 arch/hexagon/include/asm/cmpxchg.h        |    4 +-
 arch/ia64/include/asm/atomic.h            |   74 +-
 arch/ia64/include/asm/cmpxchg.h           |   16 +
 arch/ia64/include/uapi/asm/cmpxchg.h      |   10 +-
 arch/m68k/include/asm/atomic.h            |   60 +-
 arch/m68k/include/asm/cmpxchg.h           |   10 +-
 arch/microblaze/include/asm/Kbuild        |    1 +
 arch/microblaze/include/asm/atomic.h      |   28 -
 arch/microblaze/include/asm/cmpxchg.h     |    9 -
 arch/mips/include/asm/atomic.h            |   55 +-
 arch/mips/include/asm/cmpxchg.h           |   22 +-
 arch/mips/kernel/cmpxchg.c                |    4 +-
 arch/openrisc/include/asm/atomic.h        |   42 +-
 arch/openrisc/include/asm/cmpxchg.h       |    4 +-
 arch/parisc/include/asm/atomic.h          |   34 +-
 arch/parisc/include/asm/cmpxchg.h         |   14 +-
 arch/powerpc/include/asm/atomic.h         |  140 +-
 arch/powerpc/include/asm/cmpxchg.h        |   30 +-
 arch/riscv/include/asm/atomic.h           |  128 +-
 arch/riscv/include/asm/cmpxchg.h          |   34 +-
 arch/s390/include/asm/atomic.h            |    2 -
 arch/sh/include/asm/atomic-grb.h          |    6 +-
 arch/sh/include/asm/atomic-irq.h          |    6 +-
 arch/sh/include/asm/atomic-llsc.h         |    6 +-
 arch/sh/include/asm/atomic.h              |    8 +-
 arch/sh/include/asm/cmpxchg.h             |    4 +-
 arch/sparc/include/asm/atomic_32.h        |   38 +-
 arch/sparc/include/asm/atomic_64.h        |   36 +-
 arch/sparc/include/asm/cmpxchg_32.h       |   12 +-
 arch/sparc/include/asm/cmpxchg_64.h       |   12 +-
 arch/sparc/lib/atomic32.c                 |   24 +-
 arch/sparc/lib/atomic_64.S                |   42 +-
 arch/x86/include/asm/atomic.h             |    2 -
 arch/xtensa/include/asm/atomic.h          |   26 +-
 arch/xtensa/include/asm/cmpxchg.h         |   14 +-
 include/asm-generic/atomic-instrumented.h |  498 +-----
 include/asm-generic/atomic.h              |  118 +-
 include/asm-generic/atomic64.h            |   45 +-
 include/asm-generic/cmpxchg-local.h       |    4 +-
 include/asm-generic/cmpxchg.h             |   42 +-
 include/linux/atomic-fallback.h           | 2595 -----------------------------
 include/linux/atomic.h                    |    4 -
 lib/atomic64.c                            |   36 +-
 net/core/filter.c                         |    2 +-
 net/sunrpc/xprtmultipath.c                |    2 +-
 scripts/atomic/check-atomics.sh           |    1 -
 scripts/atomic/gen-atomic-instrumented.sh |   51 +-
 scripts/atomic/gen-atomics.sh             |    1 -
 60 files changed, 750 insertions(+), 4088 deletions(-)
 delete mode 100644 arch/h8300/include/asm/atomic.h
 delete mode 100644 arch/h8300/include/asm/cmpxchg.h
 create mode 100644 arch/ia64/include/asm/cmpxchg.h
 delete mode 100644 arch/microblaze/include/asm/atomic.h
 delete mode 100644 arch/microblaze/include/asm/cmpxchg.h
 delete mode 100644 include/linux/atomic-fallback.h

-- 
2.11.0



* [PATCH 01/33] locking/atomic: make ARCH_ATOMIC a Kconfig symbol
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 02/33] locking/atomic: net: use linux/atomic.h for xchg & cmpxchg Mark Rutland
                   ` (32 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Subsequent patches will move architectures over to the ARCH_ATOMIC API,
after preparing the asm-generic atomic implementations to function with
or without ARCH_ATOMIC.

As some architectures use the asm-generic implementations exclusively
(and don't have a local atomic.h), and to avoid the risk that
ARCH_ATOMIC isn't defined in some cases where we expect it to be, let's
make the ARCH_ATOMIC macro a Kconfig symbol instead, so that we can
guarantee it is consistently available where needed.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/Kconfig                    | 3 +++
 arch/arm64/Kconfig              | 1 +
 arch/arm64/include/asm/atomic.h | 2 --
 arch/s390/Kconfig               | 1 +
 arch/s390/include/asm/atomic.h  | 2 --
 arch/um/Kconfig                 | 1 +
 arch/x86/Kconfig                | 1 +
 arch/x86/include/asm/atomic.h   | 2 --
 include/linux/atomic.h          | 2 +-
 9 files changed, 8 insertions(+), 7 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index c45b770d3579..3fb3b12d4a95 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -11,6 +11,9 @@ source "arch/$(SRCARCH)/Kconfig"
 
 menu "General architecture-dependent options"
 
+config ARCH_ATOMIC
+	bool
+
 config CRASH_CORE
 	bool
 
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 9f1d8566bbf9..62ab429d1f42 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -9,6 +9,7 @@ config ARM64
 	select ACPI_MCFG if (ACPI && PCI)
 	select ACPI_SPCR_TABLE if ACPI
 	select ACPI_PPTT if ACPI
+	select ARCH_ATOMIC
 	select ARCH_HAS_DEBUG_WX
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
diff --git a/arch/arm64/include/asm/atomic.h b/arch/arm64/include/asm/atomic.h
index b56a4b2bc248..c9979273d389 100644
--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -223,6 +223,4 @@ static __always_inline long arch_atomic64_dec_if_positive(atomic64_t *v)
 
 #define arch_atomic64_dec_if_positive		arch_atomic64_dec_if_positive
 
-#define ARCH_ATOMIC
-
 #endif /* __ASM_ATOMIC_H */
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index b4c7c34069f8..85374a36c69e 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -58,6 +58,7 @@ config S390
 	# Note: keep this list sorted alphabetically
 	#
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT
+	select ARCH_ATOMIC
 	select ARCH_32BIT_USTAT_F_TINODE
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index 7c93c6573524..7138d189cc42 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -147,6 +147,4 @@ ATOMIC64_OPS(xor)
 #define arch_atomic64_fetch_sub(_i, _v)  arch_atomic64_fetch_add(-(s64)(_i), _v)
 #define arch_atomic64_sub(_i, _v)	 arch_atomic64_add(-(s64)(_i), _v)
 
-#define ARCH_ATOMIC
-
 #endif /* __ARCH_S390_ATOMIC__  */
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 57cfd9a1c082..4370a9521ea4 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -5,6 +5,7 @@ menu "UML-specific options"
 config UML
 	bool
 	default y
+	select ARCH_ATOMIC
 	select ARCH_EPHEMERAL_INODES
 	select ARCH_HAS_KCOV
 	select ARCH_NO_PREEMPT
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0045e1b44190..11a27563033d 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -58,6 +58,7 @@ config X86
 	#
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
+	select ARCH_ATOMIC
 	select ARCH_32BIT_OFF_T			if X86_32
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index f732741ad7c7..5e754e895767 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -269,6 +269,4 @@ static __always_inline int arch_atomic_fetch_xor(int i, atomic_t *v)
 # include <asm/atomic64_64.h>
 #endif
 
-#define ARCH_ATOMIC
-
 #endif /* _ASM_X86_ATOMIC_H */
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 571a11008ab5..4f8d83f9e480 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -77,7 +77,7 @@
 	__ret;								\
 })
 
-#ifdef ARCH_ATOMIC
+#ifdef CONFIG_ARCH_ATOMIC
 #include <linux/atomic-arch-fallback.h>
 #include <asm-generic/atomic-instrumented.h>
 #else
-- 
2.11.0



* [PATCH 02/33] locking/atomic: net: use linux/atomic.h for xchg & cmpxchg
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
  2021-05-10  9:37 ` [PATCH 01/33] locking/atomic: make ARCH_ATOMIC a Kconfig symbol Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 03/33] locking/atomic: h8300: use asm-generic exclusively Mark Rutland
                   ` (31 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

As xchg*() and cmpxchg*() may be instrumented by atomic-instrumented.h,
it's necessary to include <linux/atomic.h> to use these, rather than
<asm/cmpxchg.h>, which is effectively an arch-internal header.

In a couple of places we include <asm/cmpxchg.h>, but get away with this
as <linux/atomic.h> gets pulled in indirectly by another include. Before
we convert more architectures to use atomic-instrumented.h, let's fix
these up to use <linux/atomic.h> so that we don't make things more
fragile.
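
As a purely illustrative example (claim_slot() and its arguments are
made up for this text, not taken from the patch), the pattern we want
callers to follow is:

  #include <linux/atomic.h>	/* not <asm/cmpxchg.h> */

  static unsigned long claim_slot(unsigned long *slot, unsigned long owner)
  {
  	/* may resolve to the instrumented cmpxchg() wrapper where the
  	 * architecture uses atomic-instrumented.h */
  	return cmpxchg(slot, 0UL, owner);
  }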

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 net/core/filter.c          | 2 +-
 net/sunrpc/xprtmultipath.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/net/core/filter.c b/net/core/filter.c
index cae56d08a670..ce4ae1a19a71 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -17,6 +17,7 @@
  * Kris Katterjohn - Added many additional checks in bpf_check_classic()
  */
 
+#include <linux/atomic.h>
 #include <linux/module.h>
 #include <linux/types.h>
 #include <linux/mm.h>
@@ -41,7 +42,6 @@
 #include <linux/timer.h>
 #include <linux/uaccess.h>
 #include <asm/unaligned.h>
-#include <asm/cmpxchg.h>
 #include <linux/filter.h>
 #include <linux/ratelimit.h>
 #include <linux/seccomp.h>
diff --git a/net/sunrpc/xprtmultipath.c b/net/sunrpc/xprtmultipath.c
index 78c075a68c04..1b4073131c6f 100644
--- a/net/sunrpc/xprtmultipath.c
+++ b/net/sunrpc/xprtmultipath.c
@@ -7,13 +7,13 @@
  * Trond Myklebust <trond.myklebust@primarydata.com>
  *
  */
+#include <linux/atomic.h>
 #include <linux/types.h>
 #include <linux/kref.h>
 #include <linux/list.h>
 #include <linux/rcupdate.h>
 #include <linux/rculist.h>
 #include <linux/slab.h>
-#include <asm/cmpxchg.h>
 #include <linux/spinlock.h>
 #include <linux/sunrpc/xprt.h>
 #include <linux/sunrpc/addr.h>
-- 
2.11.0



* [PATCH 03/33] locking/atomic: h8300: use asm-generic exclusively
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
  2021-05-10  9:37 ` [PATCH 01/33] locking/atomic: make ARCH_ATOMIC a Kconfig symbol Mark Rutland
  2021-05-10  9:37 ` [PATCH 02/33] locking/atomic: net: use linux/atomic.h for xchg & cmpxchg Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 04/33] locking/atomic: microblaze: " Mark Rutland
                   ` (30 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

As h8300's implementation of the atomics doesn't use any arch-specific
functionality, and its implementation of cmpxchg only uses assembly to
non-atomically swap two elements in memory, we may as well use the
asm-generic atomic.h and cmpxchg.h, and avoid the duplicate code.
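
For reference, the generic cmpxchg that h8300 picks up via
asm-generic/cmpxchg.h boils down to an IRQ-disable sequence, which is
sufficient on a UP-only architecture. A rough sketch of the 32-bit case
(generic_cmpxchg_u32() is just a name for this sketch; the real helper
also switches on the access size):

  static inline unsigned long generic_cmpxchg_u32(volatile u32 *ptr,
  						  unsigned long old,
  						  unsigned long new)
  {
  	unsigned long flags, prev;

  	/* compare-and-swap with interrupts disabled */
  	raw_local_irq_save(flags);
  	prev = *ptr;
  	if (prev == old)
  		*ptr = new;
  	raw_local_irq_restore(flags);

  	return prev;
  }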

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
---
 arch/h8300/include/asm/Kbuild    |  1 +
 arch/h8300/include/asm/atomic.h  | 97 ----------------------------------------
 arch/h8300/include/asm/cmpxchg.h | 66 ---------------------------
 3 files changed, 1 insertion(+), 163 deletions(-)
 delete mode 100644 arch/h8300/include/asm/atomic.h
 delete mode 100644 arch/h8300/include/asm/cmpxchg.h

diff --git a/arch/h8300/include/asm/Kbuild b/arch/h8300/include/asm/Kbuild
index 60ee7f0d60a8..e23139c8fc0d 100644
--- a/arch/h8300/include/asm/Kbuild
+++ b/arch/h8300/include/asm/Kbuild
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 generic-y += asm-offsets.h
+generic-y += cmpxchg.h
 generic-y += extable.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h
deleted file mode 100644
index a990d151f163..000000000000
diff --git a/arch/h8300/include/asm/cmpxchg.h b/arch/h8300/include/asm/cmpxchg.h
deleted file mode 100644
index c64bb38ce242..000000000000
-- 
2.11.0



* [PATCH 04/33] locking/atomic: microblaze: use asm-generic exclusively
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (2 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 03/33] locking/atomic: h8300: use asm-generic exclusively Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h Mark Rutland
                   ` (29 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Microblaze provides its own implementation of atomic_dec_if_positive(),
but nothing else. For a while now, the conditional inc/dec ops have been
optional, and the core code will provide generic implementations using
the code templates in scripts/atomic/fallbacks/.
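
For reference, the generated fallback is a try_cmpxchg() loop along
these lines (a sketch of the template's output rather than a verbatim
copy):

  static __always_inline int atomic_dec_if_positive(atomic_t *v)
  {
  	int dec, c = atomic_read(v);

  	do {
  		dec = c - 1;
  		if (unlikely(dec < 0))
  			break;
  	} while (!atomic_try_cmpxchg(v, &c, dec));

  	return dec;
  }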

For simplicity, and for consistency with the other conditional atomic
ops, let's drop the microblaze implementation of
atomic_dec_if_positive(), and use the generic implementation.

With that, we can also drop the local asm/atomic.h and asm/cmpxchg.h
headers, as asm-generic/atomic.h is mandatory-y, and we can pull in
asm-generic/cmpxchg.h via generic-y. This matches what nios2 and nds32
do today.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/microblaze/include/asm/Kbuild    |  1 +
 arch/microblaze/include/asm/atomic.h  | 28 ----------------------------
 arch/microblaze/include/asm/cmpxchg.h |  9 ---------
 3 files changed, 1 insertion(+), 37 deletions(-)
 delete mode 100644 arch/microblaze/include/asm/atomic.h
 delete mode 100644 arch/microblaze/include/asm/cmpxchg.h

diff --git a/arch/microblaze/include/asm/Kbuild b/arch/microblaze/include/asm/Kbuild
index 29b0e557aa7c..a055f5dbe00a 100644
--- a/arch/microblaze/include/asm/Kbuild
+++ b/arch/microblaze/include/asm/Kbuild
@@ -1,5 +1,6 @@
 # SPDX-License-Identifier: GPL-2.0
 generated-y += syscall_table.h
+generic-y += cmpxchg.h
 generic-y += extable.h
 generic-y += kvm_para.h
 generic-y += mcs_spinlock.h
diff --git a/arch/microblaze/include/asm/atomic.h b/arch/microblaze/include/asm/atomic.h
deleted file mode 100644
index 41e9aff23a62..000000000000
diff --git a/arch/microblaze/include/asm/cmpxchg.h b/arch/microblaze/include/asm/cmpxchg.h
deleted file mode 100644
index 3523b51aab36..000000000000
-- 
2.11.0



* [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (3 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 04/33] locking/atomic: microblaze: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-13 21:44   ` Stafford Horne
  2021-05-10  9:37 ` [PATCH 06/33] locking/atomic: atomic: remove stale comments Mark Rutland
                   ` (28 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

OpenRISC is the only architecture which uses asm-generic/atomic.h and
also provides its own implementation of some functions, requiring
ifdeffery in the asm-generic header. As OpenRISC provides the vast
majority of functions itself, it would be simpler overall if it also
provided the few functions it cribs from asm-generic.

This patch decouples OpenRISC from asm-generic/atomic.h. Subsequent
patches will simplify the asm-generic implementation.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Will Deacon <will@kernel.org>
---
 arch/openrisc/include/asm/atomic.h | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index b589fac39b92..cb86970d3859 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -121,6 +121,12 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 }
 #define atomic_fetch_add_unless	atomic_fetch_add_unless
 
-#include <asm-generic/atomic.h>
+#define atomic_read(v)			READ_ONCE((v)->counter)
+#define atomic_set(v,i)			WRITE_ONCE((v)->counter, (i))
+
+#include <asm/cmpxchg.h>
+
+#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
+#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
 
 #endif /* __ASM_OPENRISC_ATOMIC_H */
-- 
2.11.0



* [PATCH 06/33] locking/atomic: atomic: remove stale comments
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (4 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 07/33] locking/atomic: atomic: remove redundant include Mark Rutland
                   ` (27 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

The commentary in asm-generic/atomic.h is stale; let's bring it up to
date:

* The block comment at the start of the file mentions this is only
  usable on UP systems, but is immediately followed by an SMP
  implementation using cmpxchg. Let's delete the misleading statement.

* A comment near the end of the file was originally at the top of the
  file, but over time rework has shuffled it near the end, and it's long
  been superseded by the block comment at the top of the file. Let's
  remove it.

* Since asm-generic/atomic.h isn't the canonical documentation for the
  atomic ops, and since the existing comments are not in kerneldoc
  format, we don't need to document the semantics of each operation here
  (and this would be better done in a centralised document). Let's
  remove these comments.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic.h | 39 ++-------------------------------------
 1 file changed, 2 insertions(+), 37 deletions(-)

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 11f96f40f4a7..ebacbc6b363b 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 /*
- * Generic C implementation of atomic counter operations. Usable on
- * UP systems only. Do not include in machine independent code.
+ * Generic C implementation of atomic counter operations. Do not include in
+ * machine independent code.
  *
  * Copyright (C) 2007 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
@@ -12,23 +12,6 @@
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
-/*
- * atomic_$op() - $op integer to atomic variable
- * @i: integer value to $op
- * @v: pointer to the atomic variable
- *
- * Atomically $ops @i to @v. Does not strictly guarantee a memory-barrier, use
- * smp_mb__{before,after}_atomic().
- */
-
-/*
- * atomic_$op_return() - $op interer to atomic variable and returns the result
- * @i: integer value to $op
- * @v: pointer to the atomic variable
- *
- * Atomically $ops @i to @v. Does imply a full memory barrier.
- */
-
 #ifdef CONFIG_SMP
 
 /* we can build all atomic primitives from cmpxchg */
@@ -154,28 +137,10 @@ ATOMIC_OP(xor, ^)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-/*
- * Atomic operations that C can't guarantee us.  Useful for
- * resource counting etc..
- */
-
-/**
- * atomic_read - read atomic variable
- * @v: pointer of type atomic_t
- *
- * Atomically reads the value of @v.
- */
 #ifndef atomic_read
 #define atomic_read(v)	READ_ONCE((v)->counter)
 #endif
 
-/**
- * atomic_set - set atomic variable
- * @v: pointer of type atomic_t
- * @i: required value
- *
- * Atomically sets the value of @v to @i.
- */
 #define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
 
 #include <linux/irqflags.h>
-- 
2.11.0



* [PATCH 07/33] locking/atomic: atomic: remove redundant include
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (5 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 06/33] locking/atomic: atomic: remove stale comments Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 08/33] locking/atomic: atomic: simplify ifdeffery Mark Rutland
                   ` (26 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Since commit:

  560cb12a4080a48b ("locking,arch: Rewrite generic atomic support")

... we conditionally include <linux/irqflags.h> before defining atomics
using locking, and hence do not need to do so unconditionally later in
the header.

This patch removes the redundant include.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index ebacbc6b363b..d4bf803d6491 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -143,8 +143,6 @@ ATOMIC_OP(xor, ^)
 
 #define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
 
-#include <linux/irqflags.h>
-
 static inline void atomic_add(int i, atomic_t *v)
 {
 	atomic_add_return(i, v);
-- 
2.11.0



* [PATCH 08/33] locking/atomic: atomic: simplify ifdeffery
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (6 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 07/33] locking/atomic: atomic: remove redundant include Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 09/33] locking/atomic: atomic: support ARCH_ATOMIC Mark Rutland
                   ` (25 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Now that asm-generic/atomic.h is only used by architectures without any
architecture-specific atomic definitions, we know that there will be no
architecture-specific implementations to override, and can remove the
ifdeffery this has previously required, bringing it into line with
asm-generic/atomic64.h.

At the same time, we can implement atomic_add() and atomic_sub()
directly using ATOMIC_OP(), since we know architectures won't provide
atomic_add_return() or atomic_sub_return().

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic.h | 46 ++++----------------------------------------
 1 file changed, 4 insertions(+), 42 deletions(-)

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index d4bf803d6491..316c82a27b0a 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -93,65 +93,27 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 
 #endif /* CONFIG_SMP */
 
-#ifndef atomic_add_return
 ATOMIC_OP_RETURN(add, +)
-#endif
-
-#ifndef atomic_sub_return
 ATOMIC_OP_RETURN(sub, -)
-#endif
 
-#ifndef atomic_fetch_add
 ATOMIC_FETCH_OP(add, +)
-#endif
-
-#ifndef atomic_fetch_sub
 ATOMIC_FETCH_OP(sub, -)
-#endif
-
-#ifndef atomic_fetch_and
 ATOMIC_FETCH_OP(and, &)
-#endif
-
-#ifndef atomic_fetch_or
 ATOMIC_FETCH_OP(or, |)
-#endif
-
-#ifndef atomic_fetch_xor
 ATOMIC_FETCH_OP(xor, ^)
-#endif
 
-#ifndef atomic_and
+ATOMIC_OP(add, +)
+ATOMIC_OP(sub, -)
 ATOMIC_OP(and, &)
-#endif
-
-#ifndef atomic_or
 ATOMIC_OP(or, |)
-#endif
-
-#ifndef atomic_xor
 ATOMIC_OP(xor, ^)
-#endif
 
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#ifndef atomic_read
-#define atomic_read(v)	READ_ONCE((v)->counter)
-#endif
-
-#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
-
-static inline void atomic_add(int i, atomic_t *v)
-{
-	atomic_add_return(i, v);
-}
-
-static inline void atomic_sub(int i, atomic_t *v)
-{
-	atomic_sub_return(i, v);
-}
+#define atomic_read(v)			READ_ONCE((v)->counter)
+#define atomic_set(v, i)		WRITE_ONCE(((v)->counter), (i))
 
 #define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
 #define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
-- 
2.11.0



* [PATCH 09/33] locking/atomic: atomic: support ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (7 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 08/33] locking/atomic: atomic: simplify ifdeffery Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 10/33] locking/atomic: atomic64: " Mark Rutland
                   ` (24 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as this will
enable functionality such as instrumentation, and once all architectures
are converted it will be possible to make significant cleanups to the
atomic headers.

A number of architectures use asm-generic/atomic.h, and it's impractical
to convert the header and all these architectures in one go. To make it
possible to convert them one-by-one, let's make the asm-generic
implementation function as either atomic_*() or arch_atomic_*()
depending on whether ARCH_ATOMIC is selected. To do this, the C
implementations are prefixed as generic_atomic_*(), and preprocessor
definitions map atomic_*()/arch_atomic_*() onto these as
appropriate.
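
Condensed to a single op, the resulting shape is roughly (see the diff
below for the real header):

  static inline int generic_atomic_add_return(int i, atomic_t *v)
  {
  	int c, old;

  	c = v->counter;
  	while ((old = __ga_cmpxchg(&v->counter, c, c + i)) != c)
  		c = old;

  	return c + i;
  }

  #ifdef CONFIG_ARCH_ATOMIC
  #define arch_atomic_add_return	generic_atomic_add_return
  #else
  #define atomic_add_return		generic_atomic_add_return
  #endif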

Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header
can be simplified and/or removed entirely.

For existing users (none of which select ARCH_ATOMIC), there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic.h | 71 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 62 insertions(+), 9 deletions(-)

diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 316c82a27b0a..649060fa0fe8 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -12,39 +12,47 @@
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
+#ifdef CONFIG_ARCH_ATOMIC
+#define __ga_cmpxchg	arch_cmpxchg
+#define __ga_xchg	arch_xchg
+#else
+#define __ga_cmpxchg	cmpxchg
+#define __ga_xchg	xchg
+#endif
+
 #ifdef CONFIG_SMP
 
 /* we can build all atomic primitives from cmpxchg */
 
 #define ATOMIC_OP(op, c_op)						\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void generic_atomic_##op(int i, atomic_t *v)		\
 {									\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)		\
+	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int generic_atomic_##op##_return(int i, atomic_t *v)	\
 {									\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)		\
+	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 									\
 	return c c_op i;						\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int generic_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = cmpxchg(&v->counter, c, c c_op i)) != c)		\
+	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 									\
 	return c;							\
@@ -55,7 +63,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 #include <linux/irqflags.h>
 
 #define ATOMIC_OP(op, c_op)						\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void generic_atomic_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 									\
@@ -65,7 +73,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int generic_atomic_##op##_return(int i, atomic_t *v)	\
 {									\
 	unsigned long flags;						\
 	int ret;							\
@@ -78,7 +86,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int generic_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 	int ret;							\
@@ -112,10 +120,55 @@ ATOMIC_OP(xor, ^)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
+#undef __ga_cmpxchg
+#undef __ga_xchg
+
+#ifdef CONFIG_ARCH_ATOMIC
+
+#define arch_atomic_add_return			generic_atomic_add_return
+#define arch_atomic_sub_return			generic_atomic_sub_return
+
+#define arch_atomic_fetch_add			generic_atomic_fetch_add
+#define arch_atomic_fetch_sub			generic_atomic_fetch_sub
+#define arch_atomic_fetch_and			generic_atomic_fetch_and
+#define arch_atomic_fetch_or			generic_atomic_fetch_or
+#define arch_atomic_fetch_xor			generic_atomic_fetch_xor
+
+#define arch_atomic_add				generic_atomic_add
+#define arch_atomic_sub				generic_atomic_sub
+#define arch_atomic_and				generic_atomic_and
+#define arch_atomic_or				generic_atomic_or
+#define arch_atomic_xor				generic_atomic_xor
+
+#define arch_atomic_read(v)			READ_ONCE((v)->counter)
+#define arch_atomic_set(v, i)			WRITE_ONCE(((v)->counter), (i))
+
+#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
+#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
+
+#else /* CONFIG_ARCH_ATOMIC */
+
+#define atomic_add_return		generic_atomic_add_return
+#define atomic_sub_return		generic_atomic_sub_return
+
+#define atomic_fetch_add		generic_atomic_fetch_add
+#define atomic_fetch_sub		generic_atomic_fetch_sub
+#define atomic_fetch_and		generic_atomic_fetch_and
+#define atomic_fetch_or			generic_atomic_fetch_or
+#define atomic_fetch_xor		generic_atomic_fetch_xor
+
+#define atomic_add			generic_atomic_add
+#define atomic_sub			generic_atomic_sub
+#define atomic_and			generic_atomic_and
+#define atomic_or			generic_atomic_or
+#define atomic_xor			generic_atomic_xor
+
 #define atomic_read(v)			READ_ONCE((v)->counter)
 #define atomic_set(v, i)		WRITE_ONCE(((v)->counter), (i))
 
 #define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
 #define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
 
+#endif /* CONFIG_ARCH_ATOMIC */
+
 #endif /* __ASM_GENERIC_ATOMIC_H */
-- 
2.11.0



* [PATCH 10/33] locking/atomic: atomic64: support ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (8 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 09/33] locking/atomic: atomic: support ARCH_ATOMIC Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 11/33] locking/atomic: cmpxchg: make `generic` a prefix Mark Rutland
                   ` (23 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as this will
enable functionality such as instrumentation, and once all architectures
are converted it will be possible to make significant cleanups to the
atomic headers.

A number of architectures use asm-generic/atomic64.h, and it's
impractical to convert the header and all these architectures in one go.
To make it possible to convert them one-by-one, let's make the
asm-generic implementation function as either atomic64_*() or
arch_atomic64_*() depending on whether ARCH_ATOMIC is selected. To do
this, the generic implementations are prefixed as generic_atomic64_*(),
and preprocessor definitions map atomic64_*()/arch_atomic64_*() onto
these as appropriate.

Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header
can be simplified and/or removed entirely.

For existing users (none of which select ARCH_ATOMIC), there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic64.h | 74 ++++++++++++++++++++++++++++++++++--------
 lib/atomic64.c                 | 36 ++++++++++----------
 2 files changed, 79 insertions(+), 31 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 370f01d4450f..c8c7d9fae820 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -15,19 +15,17 @@ typedef struct {
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-extern s64 atomic64_read(const atomic64_t *v);
-extern void atomic64_set(atomic64_t *v, s64 i);
-
-#define atomic64_set_release(v, i)	atomic64_set((v), (i))
+extern s64 generic_atomic64_read(const atomic64_t *v);
+extern void generic_atomic64_set(atomic64_t *v, s64 i);
 
 #define ATOMIC64_OP(op)							\
-extern void	 atomic64_##op(s64 a, atomic64_t *v);
+extern void generic_atomic64_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OP_RETURN(op)						\
-extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
+extern s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v);
 
 #define ATOMIC64_FETCH_OP(op)						\
-extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
+extern s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OPS(op)	ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
 
@@ -46,11 +44,61 @@ ATOMIC64_OPS(xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-extern s64 atomic64_dec_if_positive(atomic64_t *v);
-#define atomic64_dec_if_positive atomic64_dec_if_positive
-extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
-extern s64 atomic64_xchg(atomic64_t *v, s64 new);
-extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+extern s64 generic_atomic64_dec_if_positive(atomic64_t *v);
+extern s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 generic_atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
+
+#ifdef CONFIG_ARCH_ATOMIC
+
+#define arch_atomic64_read		generic_atomic64_read
+#define arch_atomic64_set		generic_atomic64_set
+#define arch_atomic64_set_release	generic_atomic64_set
+
+#define arch_atomic64_add		generic_atomic64_add
+#define arch_atomic64_add_return	generic_atomic64_add_return
+#define arch_atomic64_fetch_add		generic_atomic64_fetch_add
+#define arch_atomic64_sub		generic_atomic64_sub
+#define arch_atomic64_sub_return	generic_atomic64_sub_return
+#define arch_atomic64_fetch_sub		generic_atomic64_fetch_sub
+
+#define arch_atomic64_and		generic_atomic64_and
+#define arch_atomic64_fetch_and		generic_atomic64_fetch_and
+#define arch_atomic64_or		generic_atomic64_or
+#define arch_atomic64_fetch_or		generic_atomic64_fetch_or
+#define arch_atomic64_xor		generic_atomic64_xor
+#define arch_atomic64_fetch_xor		generic_atomic64_fetch_xor
+
+#define arch_atomic64_dec_if_positive	generic_atomic64_dec_if_positive
+#define arch_atomic64_cmpxchg		generic_atomic64_cmpxchg
+#define arch_atomic64_xchg		generic_atomic64_xchg
+#define arch_atomic64_fetch_add_unless	generic_atomic64_fetch_add_unless
+
+#else /* CONFIG_ARCH_ATOMIC */
+
+#define atomic64_read			generic_atomic64_read
+#define atomic64_set			generic_atomic64_set
+#define atomic64_set_release		generic_atomic64_set
+
+#define atomic64_add			generic_atomic64_add
+#define atomic64_add_return		generic_atomic64_add_return
+#define atomic64_fetch_add		generic_atomic64_fetch_add
+#define atomic64_sub			generic_atomic64_sub
+#define atomic64_sub_return		generic_atomic64_sub_return
+#define atomic64_fetch_sub		generic_atomic64_fetch_sub
+
+#define atomic64_and			generic_atomic64_and
+#define atomic64_fetch_and		generic_atomic64_fetch_and
+#define atomic64_or			generic_atomic64_or
+#define atomic64_fetch_or		generic_atomic64_fetch_or
+#define atomic64_xor			generic_atomic64_xor
+#define atomic64_fetch_xor		generic_atomic64_fetch_xor
+
+#define atomic64_dec_if_positive	generic_atomic64_dec_if_positive
+#define atomic64_cmpxchg		generic_atomic64_cmpxchg
+#define atomic64_xchg			generic_atomic64_xchg
+#define atomic64_fetch_add_unless	generic_atomic64_fetch_add_unless
+
+#endif /* CONFIG_ARCH_ATOMIC */
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index e98c85a99787..3df653994177 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -42,7 +42,7 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 	return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
 }
 
-s64 atomic64_read(const atomic64_t *v)
+s64 generic_atomic64_read(const atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -53,9 +53,9 @@ s64 atomic64_read(const atomic64_t *v)
 	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
-EXPORT_SYMBOL(atomic64_read);
+EXPORT_SYMBOL(generic_atomic64_read);
 
-void atomic64_set(atomic64_t *v, s64 i)
+void generic_atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -64,10 +64,10 @@ void atomic64_set(atomic64_t *v, s64 i)
 	v->counter = i;
 	raw_spin_unlock_irqrestore(lock, flags);
 }
-EXPORT_SYMBOL(atomic64_set);
+EXPORT_SYMBOL(generic_atomic64_set);
 
 #define ATOMIC64_OP(op, c_op)						\
-void atomic64_##op(s64 a, atomic64_t *v)				\
+void generic_atomic64_##op(s64 a, atomic64_t *v)			\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -76,10 +76,10 @@ void atomic64_##op(s64 a, atomic64_t *v)				\
 	v->counter c_op a;						\
 	raw_spin_unlock_irqrestore(lock, flags);			\
 }									\
-EXPORT_SYMBOL(atomic64_##op);
+EXPORT_SYMBOL(generic_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op, c_op)					\
-s64 atomic64_##op##_return(s64 a, atomic64_t *v)			\
+s64 generic_atomic64_##op##_return(s64 a, atomic64_t *v)		\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -90,10 +90,10 @@ s64 atomic64_##op##_return(s64 a, atomic64_t *v)			\
 	raw_spin_unlock_irqrestore(lock, flags);			\
 	return val;							\
 }									\
-EXPORT_SYMBOL(atomic64_##op##_return);
+EXPORT_SYMBOL(generic_atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-s64 atomic64_fetch_##op(s64 a, atomic64_t *v)				\
+s64 generic_atomic64_fetch_##op(s64 a, atomic64_t *v)			\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -105,7 +105,7 @@ s64 atomic64_fetch_##op(s64 a, atomic64_t *v)				\
 	raw_spin_unlock_irqrestore(lock, flags);			\
 	return val;							\
 }									\
-EXPORT_SYMBOL(atomic64_fetch_##op);
+EXPORT_SYMBOL(generic_atomic64_fetch_##op);
 
 #define ATOMIC64_OPS(op, c_op)						\
 	ATOMIC64_OP(op, c_op)						\
@@ -130,7 +130,7 @@ ATOMIC64_OPS(xor, ^=)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-s64 atomic64_dec_if_positive(atomic64_t *v)
+s64 generic_atomic64_dec_if_positive(atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -143,9 +143,9 @@ s64 atomic64_dec_if_positive(atomic64_t *v)
 	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
-EXPORT_SYMBOL(atomic64_dec_if_positive);
+EXPORT_SYMBOL(generic_atomic64_dec_if_positive);
 
-s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
+s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -158,9 +158,9 @@ s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
-EXPORT_SYMBOL(atomic64_cmpxchg);
+EXPORT_SYMBOL(generic_atomic64_cmpxchg);
 
-s64 atomic64_xchg(atomic64_t *v, s64 new)
+s64 generic_atomic64_xchg(atomic64_t *v, s64 new)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -172,9 +172,9 @@ s64 atomic64_xchg(atomic64_t *v, s64 new)
 	raw_spin_unlock_irqrestore(lock, flags);
 	return val;
 }
-EXPORT_SYMBOL(atomic64_xchg);
+EXPORT_SYMBOL(generic_atomic64_xchg);
 
-s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -188,4 +188,4 @@ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 
 	return val;
 }
-EXPORT_SYMBOL(atomic64_fetch_add_unless);
+EXPORT_SYMBOL(generic_atomic64_fetch_add_unless);
-- 
2.11.0



* [PATCH 11/33] locking/atomic: cmpxchg: make `generic` a prefix
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (9 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 10/33] locking/atomic: atomic64: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-11  9:29   ` Geert Uytterhoeven
  2021-05-10  9:37 ` [PATCH 12/33] locking/atomic: cmpxchg: support ARCH_ATOMIC Mark Rutland
                   ` (22 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

The asm-generic implementations of cmpxchg_local() and cmpxchg64_local()
use a `_generic` suffix to distinguish themselves from arch code or
wrappers used elsewhere.

Subsequent patches will add ARCH_ATOMIC support to these
implementations, and will distinguish more functions with a `generic`
portion. To align with how ARCH_ATOMIC uses an `arch_` prefix, it would
be helpful to use a `generic_` prefix rather than a `_generic` suffix.

In preparation for this, this patch renames the existing functions to
make `generic` a prefix rather than a suffix. There should be no
functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/include/asm/cmpxchg.h      | 6 +++---
 arch/m68k/include/asm/cmpxchg.h     | 2 +-
 arch/mips/include/asm/cmpxchg.h     | 2 +-
 arch/parisc/include/asm/cmpxchg.h   | 4 ++--
 arch/powerpc/include/asm/cmpxchg.h  | 2 +-
 arch/sparc/include/asm/cmpxchg_32.h | 4 ++--
 arch/sparc/include/asm/cmpxchg_64.h | 2 +-
 arch/xtensa/include/asm/cmpxchg.h   | 6 +++---
 include/asm-generic/cmpxchg-local.h | 4 ++--
 include/asm-generic/cmpxchg.h       | 4 ++--
 10 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 8b701f8e175c..06bd8cea861a 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -135,13 +135,13 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
  * them available.
  */
 #define cmpxchg_local(ptr, o, n) ({					\
-	(__typeof(*ptr))__cmpxchg_local_generic((ptr),			\
+	(__typeof(*ptr))__generic_cmpxchg_local((ptr),			\
 					        (unsigned long)(o),	\
 					        (unsigned long)(n),	\
 					        sizeof(*(ptr)));	\
 })
 
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 #include <asm-generic/cmpxchg.h>
 
@@ -224,7 +224,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 #ifdef CONFIG_CPU_V6	/* min ARCH == ARMv6 */
 	case 1:
 	case 2:
-		ret = __cmpxchg_local_generic(ptr, old, new, size);
+		ret = __generic_cmpxchg_local(ptr, old, new, size);
 		break;
 #endif
 	default:
diff --git a/arch/m68k/include/asm/cmpxchg.h b/arch/m68k/include/asm/cmpxchg.h
index a4aa82021d3b..7629c9c1ed5b 100644
--- a/arch/m68k/include/asm/cmpxchg.h
+++ b/arch/m68k/include/asm/cmpxchg.h
@@ -80,7 +80,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int siz
 
 #include <asm-generic/cmpxchg-local.h>
 
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 extern unsigned long __invalid_cmpxchg_size(volatile void *,
 					    unsigned long, unsigned long, int);
diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
index ed8f3f3c4304..c7e0455d4d46 100644
--- a/arch/mips/include/asm/cmpxchg.h
+++ b/arch/mips/include/asm/cmpxchg.h
@@ -222,7 +222,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 #else
 
 # include <asm-generic/cmpxchg-local.h>
-# define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+# define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 # ifdef CONFIG_SMP
 
diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
index 84ee232278a6..c2015654b684 100644
--- a/arch/parisc/include/asm/cmpxchg.h
+++ b/arch/parisc/include/asm/cmpxchg.h
@@ -98,7 +98,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 #endif
 	case 4:	return __cmpxchg_u32(ptr, old, new_);
 	default:
-		return __cmpxchg_local_generic(ptr, old, new_, size);
+		return __generic_cmpxchg_local(ptr, old, new_, size);
 	}
 }
 
@@ -116,7 +116,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	cmpxchg_local((ptr), (o), (n));					\
 })
 #else
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #endif
 
 #define cmpxchg64(ptr, o, n) __cmpxchg_u64(ptr, o, n)
diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
index cf091c4c22e5..69f52fdcf064 100644
--- a/arch/powerpc/include/asm/cmpxchg.h
+++ b/arch/powerpc/include/asm/cmpxchg.h
@@ -524,7 +524,7 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 })
 #else
 #include <asm-generic/cmpxchg-local.h>
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #endif
 
 #endif /* __KERNEL__ */
diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h
index a53d744d4212..86e3da1d973d 100644
--- a/arch/sparc/include/asm/cmpxchg_32.h
+++ b/arch/sparc/include/asm/cmpxchg_32.h
@@ -73,8 +73,8 @@ u64 __cmpxchg_u64(u64 *ptr, u64 old, u64 new);
  * them available.
  */
 #define cmpxchg_local(ptr, o, n)				  	       \
-	((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
+	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
 			(unsigned long)(n), sizeof(*(ptr))))
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 #endif /* __ARCH_SPARC_CMPXCHG__ */
diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h
index 316faa0130ba..8915b577b92f 100644
--- a/arch/sparc/include/asm/cmpxchg_64.h
+++ b/arch/sparc/include/asm/cmpxchg_64.h
@@ -189,7 +189,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	case 4:
 	case 8:	return __cmpxchg(ptr, old, new, size);
 	default:
-		return __cmpxchg_local_generic(ptr, old, new, size);
+		return __generic_cmpxchg_local(ptr, old, new, size);
 	}
 
 	return old;
diff --git a/arch/xtensa/include/asm/cmpxchg.h b/arch/xtensa/include/asm/cmpxchg.h
index a175f8aec3fb..9c4d6e5316ce 100644
--- a/arch/xtensa/include/asm/cmpxchg.h
+++ b/arch/xtensa/include/asm/cmpxchg.h
@@ -97,7 +97,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	case 4:
 		return __cmpxchg_u32(ptr, old, new);
 	default:
-		return __cmpxchg_local_generic(ptr, old, new, size);
+		return __generic_cmpxchg_local(ptr, old, new, size);
 	}
 
 	return old;
@@ -108,9 +108,9 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
  * them available.
  */
 #define cmpxchg_local(ptr, o, n)				  	       \
-	((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
+	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
 			(unsigned long)(n), sizeof(*(ptr))))
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #define cmpxchg64(ptr, o, n)    cmpxchg64_local((ptr), (o), (n))
 
 /*
diff --git a/include/asm-generic/cmpxchg-local.h b/include/asm-generic/cmpxchg-local.h
index f17f14f84d09..380cdc824e4b 100644
--- a/include/asm-generic/cmpxchg-local.h
+++ b/include/asm-generic/cmpxchg-local.h
@@ -12,7 +12,7 @@ extern unsigned long wrong_size_cmpxchg(volatile void *ptr)
  * Generic version of __cmpxchg_local (disables interrupts). Takes an unsigned
  * long parameter, supporting various types of architectures.
  */
-static inline unsigned long __cmpxchg_local_generic(volatile void *ptr,
+static inline unsigned long __generic_cmpxchg_local(volatile void *ptr,
 		unsigned long old, unsigned long new, int size)
 {
 	unsigned long flags, prev;
@@ -51,7 +51,7 @@ static inline unsigned long __cmpxchg_local_generic(volatile void *ptr,
 /*
  * Generic version of __cmpxchg64_local. Takes an u64 parameter.
  */
-static inline u64 __cmpxchg64_local_generic(volatile void *ptr,
+static inline u64 __generic_cmpxchg64_local(volatile void *ptr,
 		u64 old, u64 new)
 {
 	u64 prev;
diff --git a/include/asm-generic/cmpxchg.h b/include/asm-generic/cmpxchg.h
index 9a24510cd8c1..b9d54c7afc52 100644
--- a/include/asm-generic/cmpxchg.h
+++ b/include/asm-generic/cmpxchg.h
@@ -94,13 +94,13 @@ unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 
 #ifndef cmpxchg_local
 #define cmpxchg_local(ptr, o, n) ({					       \
-	((__typeof__(*(ptr)))__cmpxchg_local_generic((ptr), (unsigned long)(o),\
+	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
 			(unsigned long)(n), sizeof(*(ptr))));		       \
 })
 #endif
 
 #ifndef cmpxchg64_local
-#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
+#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #endif
 
 #define cmpxchg(ptr, o, n)	cmpxchg_local((ptr), (o), (n))
-- 
2.11.0


* [PATCH 12/33] locking/atomic: cmpxchg: support ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (10 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 11/33] locking/atomic: cmpxchg: make `generic` a prefix Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 13/33] locking/atomic: alpha: move to ARCH_ATOMIC Mark Rutland
                   ` (21 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as this will
make it much easier to generically enable atomic functionality (e.g.
debug logic in the instrumented wrappers), and once all architectures
are converted it will be possible to make significant cleanups to the
atomic headers.

A number of architectures use asm-generic/cmpxchg.h or
asm-generic/cmpxchg-local.h, and it's impractical to convert the headers
and all these architectures in one go. To make it possible to convert
them one-by-one, let's make the asm-generic implementation function as
either cmpxchg*() or arch_cmpxchg*() depending on whether ARCH_ATOMIC is
selected. To do this, the generic implementations are prefixed as
generic_cmpxchg_*(), and preprocessor definitions map
cmpxchg_*()/arch_cmpxchg_*() onto these as appropriate.
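
As an illustrative sketch (not part of this patch), an architecture
that selects ARCH_ATOMIC and provides its own arch_cmpxchg_local()
would define it before including the generic header, and the remaining
operations fall back to the generic_*() forms; "foo" below is a
hypothetical architecture:

  /* hypothetical arch/foo/include/asm/cmpxchg.h */
  #define arch_cmpxchg_local(ptr, o, n)	foo_cmpxchg_local((ptr), (o), (n))

  /* arch_xchg() and arch_cmpxchg64_local() fall back to generic_*() */
  #include <asm-generic/cmpxchg.h>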

Once all users are moved over to ARCH_ATOMIC the ifdeffery in the header
can be simplified and/or removed entirely.

For existing users (none of which select ARCH_ATOMIC), there should be
no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/cmpxchg.h | 61 +++++++++++++++++++++++++++++++------------
 1 file changed, 44 insertions(+), 17 deletions(-)

diff --git a/include/asm-generic/cmpxchg.h b/include/asm-generic/cmpxchg.h
index b9d54c7afc52..98c931199089 100644
--- a/include/asm-generic/cmpxchg.h
+++ b/include/asm-generic/cmpxchg.h
@@ -14,16 +14,14 @@
 #include <linux/types.h>
 #include <linux/irqflags.h>
 
-#ifndef xchg
-
 /*
  * This function doesn't exist, so you'll get a linker error if
  * something tries to do an invalidly-sized xchg().
  */
-extern void __xchg_called_with_bad_pointer(void);
+extern void __generic_xchg_called_with_bad_pointer(void);
 
 static inline
-unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
+unsigned long __generic_xchg(unsigned long x, volatile void *ptr, int size)
 {
 	unsigned long ret, flags;
 
@@ -75,35 +73,64 @@ unsigned long __xchg(unsigned long x, volatile void *ptr, int size)
 #endif /* CONFIG_64BIT */
 
 	default:
-		__xchg_called_with_bad_pointer();
+		__generic_xchg_called_with_bad_pointer();
 		return x;
 	}
 }
 
-#define xchg(ptr, x) ({							\
-	((__typeof__(*(ptr)))						\
-		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr))));	\
+#define generic_xchg(ptr, x) ({							\
+	((__typeof__(*(ptr)))							\
+		__generic_xchg((unsigned long)(x), (ptr), sizeof(*(ptr))));	\
 })
 
-#endif /* xchg */
-
 /*
  * Atomic compare and exchange.
  */
 #include <asm-generic/cmpxchg-local.h>
 
-#ifndef cmpxchg_local
-#define cmpxchg_local(ptr, o, n) ({					       \
-	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
-			(unsigned long)(n), sizeof(*(ptr))));		       \
+#define generic_cmpxchg_local(ptr, o, n) ({					\
+	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),	\
+			(unsigned long)(n), sizeof(*(ptr))));			\
 })
+
+#define generic_cmpxchg64_local(ptr, o, n) \
+	__generic_cmpxchg64_local((ptr), (o), (n))
+
+
+#ifdef CONFIG_ARCH_ATOMIC
+
+#ifndef arch_xchg
+#define arch_xchg		generic_xchg
+#endif
+
+#ifndef arch_cmpxchg_local
+#define arch_cmpxchg_local	generic_cmpxchg_local
+#endif
+
+#ifndef arch_cmpxchg64_local
+#define arch_cmpxchg64_local	generic_cmpxchg64_local
+#endif
+
+#define arch_cmpxchg		arch_cmpxchg_local
+#define arch_cmpxchg64		arch_cmpxchg64_local
+
+#else /* CONFIG_ARCH_ATOMIC */
+
+#ifndef xchg
+#define xchg			generic_xchg
+#endif
+
+#ifndef cmpxchg_local
+#define cmpxchg_local		generic_cmpxchg_local
 #endif
 
 #ifndef cmpxchg64_local
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define cmpxchg64_local		generic_cmpxchg64_local
 #endif
 
-#define cmpxchg(ptr, o, n)	cmpxchg_local((ptr), (o), (n))
-#define cmpxchg64(ptr, o, n)	cmpxchg64_local((ptr), (o), (n))
+#define cmpxchg			cmpxchg_local
+#define cmpxchg64		cmpxchg64_local
+
+#endif /* CONFIG_ARCH_ATOMIC */
 
 #endif /* __ASM_GENERIC_CMPXCHG_H */
-- 
2.11.0


* [PATCH 13/33] locking/atomic: alpha: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (11 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 12/33] locking/atomic: cmpxchg: support ARCH_ATOMIC Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 14/33] locking/atomic: arc: " Mark Rutland
                   ` (20 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates alpha to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
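
For reference, the common wrappers generated in
include/asm-generic/atomic-instrumented.h look roughly like the
following (illustrative sketch only, not part of this patch):

  static __always_inline int
  atomic_read(const atomic_t *v)
  {
  	/* instrument the access (e.g. for KASAN), then call the arch op */
  	instrument_atomic_read(v, sizeof(*v));
  	return arch_atomic_read(v);
  }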

Note: xchg_local() is NOT currently part of the generic arch_atomic
API, and so is not instrumented.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Will Deacon <will@kernel.org>
---
 arch/alpha/Kconfig               |  1 +
 arch/alpha/include/asm/atomic.h  | 88 +++++++++++++++++++++-------------------
 arch/alpha/include/asm/cmpxchg.h | 12 +++---
 3 files changed, 54 insertions(+), 47 deletions(-)

diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 5998106faa60..7920fc2e2a2a 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,6 +2,7 @@
 config ALPHA
 	bool
 	default y
+	select ARCH_ATOMIC
 	select ARCH_32BIT_USTAT_F_TINODE
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index e41c113c6688..60a2872986a2 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -26,11 +26,11 @@
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-#define atomic_read(v)		READ_ONCE((v)->counter)
-#define atomic64_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic64_read(v)	READ_ONCE((v)->counter)
 
-#define atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
-#define atomic64_set(v,i)	WRITE_ONCE((v)->counter, (i))
+#define arch_atomic_set(v,i)	WRITE_ONCE((v)->counter, (i))
+#define arch_atomic64_set(v,i)	WRITE_ONCE((v)->counter, (i))
 
 /*
  * To get proper branch prediction for the main line, we must branch
@@ -39,7 +39,7 @@
  */
 
 #define ATOMIC_OP(op, asm_op)						\
-static __inline__ void atomic_##op(int i, atomic_t * v)			\
+static __inline__ void arch_atomic_##op(int i, atomic_t * v)		\
 {									\
 	unsigned long temp;						\
 	__asm__ __volatile__(						\
@@ -55,7 +55,7 @@ static __inline__ void atomic_##op(int i, atomic_t * v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op, asm_op)					\
-static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
+static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 {									\
 	long temp, result;						\
 	__asm__ __volatile__(						\
@@ -74,7 +74,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 }
 
 #define ATOMIC_FETCH_OP(op, asm_op)					\
-static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
+static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 {									\
 	long temp, result;						\
 	__asm__ __volatile__(						\
@@ -92,7 +92,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		\
+static __inline__ void arch_atomic64_##op(s64 i, atomic64_t * v)	\
 {									\
 	s64 temp;							\
 	__asm__ __volatile__(						\
@@ -108,7 +108,8 @@ static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, asm_op)					\
-static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)	\
+static __inline__ s64							\
+arch_atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)		\
 {									\
 	s64 temp, result;						\
 	__asm__ __volatile__(						\
@@ -127,7 +128,8 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, asm_op)					\
-static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)	\
+static __inline__ s64							\
+arch_atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)		\
 {									\
 	s64 temp, result;						\
 	__asm__ __volatile__(						\
@@ -155,18 +157,18 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)	\
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
 
-#define atomic_add_return_relaxed	atomic_add_return_relaxed
-#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
-#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
-#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+#define arch_atomic_add_return_relaxed	arch_atomic_add_return_relaxed
+#define arch_atomic_sub_return_relaxed	arch_atomic_sub_return_relaxed
+#define arch_atomic_fetch_add_relaxed	arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_sub_relaxed	arch_atomic_fetch_sub_relaxed
 
-#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
-#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
+#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+#define arch_atomic64_fetch_add_relaxed		arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_sub_relaxed		arch_atomic64_fetch_sub_relaxed
 
-#define atomic_andnot atomic_andnot
-#define atomic64_andnot atomic64_andnot
+#define arch_atomic_andnot		arch_atomic_andnot
+#define arch_atomic64_andnot		arch_atomic64_andnot
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, asm)						\
@@ -180,15 +182,15 @@ ATOMIC_OPS(andnot, bic)
 ATOMIC_OPS(or, bis)
 ATOMIC_OPS(xor, xor)
 
-#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
-#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
-#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
-#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and_relaxed		arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_andnot_relaxed	arch_atomic_fetch_andnot_relaxed
+#define arch_atomic_fetch_or_relaxed		arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_xor_relaxed		arch_atomic_fetch_xor_relaxed
 
-#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
-#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_and_relaxed		arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_andnot_relaxed	arch_atomic64_fetch_andnot_relaxed
+#define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_xor_relaxed		arch_atomic64_fetch_xor_relaxed
 
 #undef ATOMIC_OPS
 #undef ATOMIC64_FETCH_OP
@@ -198,14 +200,18 @@ ATOMIC_OPS(xor, xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define atomic64_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic64_cmpxchg(v, old, new) \
+	(arch_cmpxchg(&((v)->counter), old, new))
+#define arch_atomic64_xchg(v, new) \
+	(arch_xchg(&((v)->counter), new))
 
-#define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, old, new) \
+	(arch_cmpxchg(&((v)->counter), old, new))
+#define arch_atomic_xchg(v, new) \
+	(arch_xchg(&((v)->counter), new))
 
 /**
- * atomic_fetch_add_unless - add unless the number is a given value
+ * arch_atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -213,7 +219,7 @@ ATOMIC_OPS(xor, xor)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int c, new, old;
 	smp_mb();
@@ -234,10 +240,10 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	smp_mb();
 	return old;
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
 /**
- * atomic64_fetch_add_unless - add unless the number is a given value
+ * arch_atomic64_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -245,7 +251,7 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 c, new, old;
 	smp_mb();
@@ -266,16 +272,16 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 	smp_mb();
 	return old;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
 /*
- * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic_t
  *
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 old, tmp;
 	smp_mb();
@@ -295,6 +301,6 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 	smp_mb();
 	return old - 1;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
 #endif /* _ALPHA_ATOMIC_H */
diff --git a/arch/alpha/include/asm/cmpxchg.h b/arch/alpha/include/asm/cmpxchg.h
index 6c7c39452471..6e0a850aa9d3 100644
--- a/arch/alpha/include/asm/cmpxchg.h
+++ b/arch/alpha/include/asm/cmpxchg.h
@@ -17,7 +17,7 @@
 				       sizeof(*(ptr)));			\
 })
 
-#define cmpxchg_local(ptr, o, n)					\
+#define arch_cmpxchg_local(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -26,7 +26,7 @@
 					  sizeof(*(ptr)));		\
 })
 
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_local((ptr), (o), (n));					\
@@ -42,7 +42,7 @@
  * The leading and the trailing memory barriers guarantee that these
  * operations are fully ordered.
  */
-#define xchg(ptr, x)							\
+#define arch_xchg(ptr, x)						\
 ({									\
 	__typeof__(*(ptr)) __ret;					\
 	__typeof__(*(ptr)) _x_ = (x);					\
@@ -53,7 +53,7 @@
 	__ret;								\
 })
 
-#define cmpxchg(ptr, o, n)						\
+#define arch_cmpxchg(ptr, o, n)						\
 ({									\
 	__typeof__(*(ptr)) __ret;					\
 	__typeof__(*(ptr)) _o_ = (o);					\
@@ -65,10 +65,10 @@
 	__ret;								\
 })
 
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg((ptr), (o), (n));					\
+	arch_cmpxchg((ptr), (o), (n));					\
 })
 
 #undef ____cmpxchg
-- 
2.11.0


* [PATCH 14/33] locking/atomic: arc: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (12 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 13/33] locking/atomic: alpha: move to ARCH_ATOMIC Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10 14:57   ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 15/33] locking/atomic: arm: " Mark Rutland
                   ` (19 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates arc to ARCH_ATOMIC. The arch
code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code
wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/arc/Kconfig               |  1 +
 arch/arc/include/asm/atomic.h  | 56 +++++++++++++++++++++---------------------
 arch/arc/include/asm/cmpxchg.h |  8 +++---
 3 files changed, 33 insertions(+), 32 deletions(-)

diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 2d98501c0897..098ecc72d048 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -6,6 +6,7 @@
 config ARC
 	def_bool y
 	select ARC_TIMERS
+	select ARCH_ATOMIC
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DMA_PREP_COHERENT
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 5afc79c9b2f5..1576a7ebec01 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -14,14 +14,14 @@
 #include <asm/barrier.h>
 #include <asm/smp.h>
 
-#define atomic_read(v)  READ_ONCE((v)->counter)
+#define arch_atomic_read(v)  READ_ONCE((v)->counter)
 
 #ifdef CONFIG_ARC_HAS_LLSC
 
-#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
 
 #define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned int val;						\
 									\
@@ -37,7 +37,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned int val;						\
 									\
@@ -63,7 +63,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned int val, orig;						\
 									\
@@ -94,11 +94,11 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 #ifndef CONFIG_SMP
 
  /* violating atomic_xxx API locking protocol in UP for optimization sake */
-#define atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_set(v, i) WRITE_ONCE(((v)->counter), (i))
 
 #else
 
-static inline void atomic_set(atomic_t *v, int i)
+static inline void arch_atomic_set(atomic_t *v, int i)
 {
 	/*
 	 * Independent of hardware support, all of the atomic_xxx() APIs need
@@ -116,7 +116,7 @@ static inline void atomic_set(atomic_t *v, int i)
 	atomic_ops_unlock(flags);
 }
 
-#define atomic_set_release(v, i)	atomic_set((v), (i))
+#define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
 #endif
 
@@ -126,7 +126,7 @@ static inline void atomic_set(atomic_t *v, int i)
  */
 
 #define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long flags;						\
 									\
@@ -136,7 +136,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 	unsigned long temp;						\
@@ -154,7 +154,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 	unsigned long orig;						\
@@ -180,8 +180,8 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
-#define atomic_andnot		atomic_andnot
-#define atomic_fetch_andnot	atomic_fetch_andnot
+#define arch_atomic_andnot		arch_atomic_andnot
+#define arch_atomic_fetch_andnot	arch_atomic_fetch_andnot
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
@@ -220,7 +220,7 @@ typedef struct {
 
 #define ATOMIC64_INIT(a) { (a) }
 
-static inline s64 atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	s64 val;
 
@@ -232,7 +232,7 @@ static inline s64 atomic64_read(const atomic64_t *v)
 	return val;
 }
 
-static inline void atomic64_set(atomic64_t *v, s64 a)
+static inline void arch_atomic64_set(atomic64_t *v, s64 a)
 {
 	/*
 	 * This could have been a simple assignment in "C" but would need
@@ -253,7 +253,7 @@ static inline void atomic64_set(atomic64_t *v, s64 a)
 }
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(s64 a, atomic64_t *v)			\
+static inline void arch_atomic64_##op(s64 a, atomic64_t *v)		\
 {									\
 	s64 val;							\
 									\
@@ -270,7 +270,7 @@ static inline void atomic64_##op(s64 a, atomic64_t *v)			\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)		        	\
-static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v)		\
+static inline s64 arch_atomic64_##op##_return(s64 a, atomic64_t *v)	\
 {									\
 	s64 val;							\
 									\
@@ -293,7 +293,7 @@ static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v)		\
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)		        		\
-static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v)		\
+static inline s64 arch_atomic64_fetch_##op(s64 a, atomic64_t *v)	\
 {									\
 	s64 val, orig;							\
 									\
@@ -320,8 +320,8 @@ static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v)		\
 	ATOMIC64_OP_RETURN(op, op1, op2)				\
 	ATOMIC64_FETCH_OP(op, op1, op2)
 
-#define atomic64_andnot		atomic64_andnot
-#define atomic64_fetch_andnot	atomic64_fetch_andnot
+#define arch_atomic64_andnot		arch_atomic64_andnot
+#define arch_atomic64_fetch_andnot	arch_atomic64_fetch_andnot
 
 ATOMIC64_OPS(add, add.f, adc)
 ATOMIC64_OPS(sub, sub.f, sbc)
@@ -336,7 +336,7 @@ ATOMIC64_OPS(xor, xor, xor)
 #undef ATOMIC64_OP
 
 static inline s64
-atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
+arch_atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
 {
 	s64 prev;
 
@@ -358,7 +358,7 @@ atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
 	return prev;
 }
 
-static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
+static inline s64 arch_atomic64_xchg(atomic64_t *ptr, s64 new)
 {
 	s64 prev;
 
@@ -379,14 +379,14 @@ static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
 }
 
 /**
- * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic64_t
  *
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
 
-static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 val;
 
@@ -408,10 +408,10 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 
 	return val;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
 /**
- * atomic64_fetch_add_unless - add unless the number is a given value
+ * arch_atomic64_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -419,7 +419,7 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, if it was not @u.
  * Returns the old value of @v
  */
-static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 old, temp;
 
@@ -443,7 +443,7 @@ static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 
 	return old;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
 #endif	/* !CONFIG_GENERIC_ATOMIC64 */
 
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index 9b87e162e539..87666980b78a 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -63,7 +63,7 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
 
 #endif
 
-#define cmpxchg(ptr, o, n) ({				\
+#define arch_cmpxchg(ptr, o, n) ({			\
 	(typeof(*(ptr)))__cmpxchg((ptr),		\
 				  (unsigned long)(o),	\
 				  (unsigned long)(n));	\
@@ -75,7 +75,7 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
  *  !LLSC: cmpxchg() has to use an external lock atomic_ops_lock to guarantee
  *         semantics, and this lock also happens to be used by atomic_*()
  */
-#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
 
 
 /*
@@ -123,7 +123,7 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
 
 #if !defined(CONFIG_ARC_HAS_LLSC) && defined(CONFIG_SMP)
 
-#define xchg(ptr, with)			\
+#define arch_xchg(ptr, with)		\
 ({					\
 	unsigned long flags;		\
 	typeof(*(ptr)) old_val;		\
@@ -136,7 +136,7 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
 
 #else
 
-#define xchg(ptr, with)  _xchg(ptr, with)
+#define arch_xchg(ptr, with)  _xchg(ptr, with)
 
 #endif
 
-- 
2.11.0


* [PATCH 15/33] locking/atomic: arm: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (13 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 14/33] locking/atomic: arc: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-13 10:49   ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 16/33] locking/atomic: csky: " Mark Rutland
                   ` (18 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates arm to ARCH_ATOMIC. The arch
code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code
wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will@kernel.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/atomic.h  | 96 +++++++++++++++++++++---------------------
 arch/arm/include/asm/cmpxchg.h | 16 +++----
 3 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 24804f11302d..b7334a6643b9 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -3,6 +3,7 @@ config ARM
 	bool
 	default y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
 	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index 455eb19a5ac1..db8512d9a918 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -22,8 +22,8 @@
  * strex/ldrex monitor on some implementations. The reason we can use it for
  * atomic_set() is the clrex or dummy strex done on every exception return.
  */
-#define atomic_read(v)	READ_ONCE((v)->counter)
-#define atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
 
 #if __LINUX_ARM_ARCH__ >= 6
 
@@ -34,7 +34,7 @@
  */
 
 #define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -52,7 +52,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
+static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -73,7 +73,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
+static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 {									\
 	unsigned long tmp;						\
 	int result, val;						\
@@ -93,17 +93,17 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 	return result;							\
 }
 
-#define atomic_add_return_relaxed	atomic_add_return_relaxed
-#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
-#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
-#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+#define arch_atomic_add_return_relaxed		arch_atomic_add_return_relaxed
+#define arch_atomic_sub_return_relaxed		arch_atomic_sub_return_relaxed
+#define arch_atomic_fetch_add_relaxed		arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_sub_relaxed		arch_atomic_fetch_sub_relaxed
 
-#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
-#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
-#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
-#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and_relaxed		arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_andnot_relaxed	arch_atomic_fetch_andnot_relaxed
+#define arch_atomic_fetch_or_relaxed		arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_xor_relaxed		arch_atomic_fetch_xor_relaxed
 
-static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
+static inline int arch_atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
 {
 	int oldval;
 	unsigned long res;
@@ -123,9 +123,9 @@ static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
 
 	return oldval;
 }
-#define atomic_cmpxchg_relaxed		atomic_cmpxchg_relaxed
+#define arch_atomic_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
 
-static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int oldval, newval;
 	unsigned long tmp;
@@ -151,7 +151,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 
 	return oldval;
 }
-#define atomic_fetch_add_unless		atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless		arch_atomic_fetch_add_unless
 
 #else /* ARM_ARCH_6 */
 
@@ -160,7 +160,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #endif
 
 #define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long flags;						\
 									\
@@ -170,7 +170,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 	int val;							\
@@ -184,7 +184,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 	int val;							\
@@ -197,7 +197,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 	return val;							\
 }
 
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
 	unsigned long flags;
@@ -211,7 +211,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 	return ret;
 }
 
-#define atomic_fetch_andnot		atomic_fetch_andnot
+#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
 
 #endif /* __LINUX_ARM_ARCH__ */
 
@@ -223,7 +223,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 
-#define atomic_andnot atomic_andnot
+#define arch_atomic_andnot arch_atomic_andnot
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, c_op, asm_op)					\
@@ -240,7 +240,7 @@ ATOMIC_OPS(xor, ^=, eor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
@@ -250,7 +250,7 @@ typedef struct {
 #define ATOMIC64_INIT(i) { (i) }
 
 #ifdef CONFIG_ARM_LPAE
-static inline s64 atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	s64 result;
 
@@ -263,7 +263,7 @@ static inline s64 atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, s64 i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("@ atomic64_set\n"
 "	strd	%2, %H2, [%1]"
@@ -272,7 +272,7 @@ static inline void atomic64_set(atomic64_t *v, s64 i)
 	);
 }
 #else
-static inline s64 atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	s64 result;
 
@@ -285,7 +285,7 @@ static inline s64 atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, s64 i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	s64 tmp;
 
@@ -302,7 +302,7 @@ static inline void atomic64_set(atomic64_t *v, s64 i)
 #endif
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(s64 i, atomic64_t *v)			\
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v)		\
 {									\
 	s64 result;							\
 	unsigned long tmp;						\
@@ -322,7 +322,7 @@ static inline void atomic64_##op(s64 i, atomic64_t *v)			\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)				\
 static inline s64							\
-atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
+arch_atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)		\
 {									\
 	s64 result;							\
 	unsigned long tmp;						\
@@ -345,7 +345,7 @@ atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)					\
 static inline s64							\
-atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
+arch_atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)		\
 {									\
 	s64 result, val;						\
 	unsigned long tmp;						\
@@ -374,34 +374,34 @@ atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
 ATOMIC64_OPS(add, adds, adc)
 ATOMIC64_OPS(sub, subs, sbc)
 
-#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
-#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
+#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+#define arch_atomic64_fetch_add_relaxed		arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_sub_relaxed		arch_atomic64_fetch_sub_relaxed
 
 #undef ATOMIC64_OPS
 #define ATOMIC64_OPS(op, op1, op2)					\
 	ATOMIC64_OP(op, op1, op2)					\
 	ATOMIC64_FETCH_OP(op, op1, op2)
 
-#define atomic64_andnot atomic64_andnot
+#define arch_atomic64_andnot arch_atomic64_andnot
 
 ATOMIC64_OPS(and, and, and)
 ATOMIC64_OPS(andnot, bic, bic)
 ATOMIC64_OPS(or,  orr, orr)
 ATOMIC64_OPS(xor, eor, eor)
 
-#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
-#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
-#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_and_relaxed		arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_andnot_relaxed	arch_atomic64_fetch_andnot_relaxed
+#define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_xor_relaxed		arch_atomic64_fetch_xor_relaxed
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
+static inline s64 arch_atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
 {
 	s64 oldval;
 	unsigned long res;
@@ -422,9 +422,9 @@ static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
 
 	return oldval;
 }
-#define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
+#define arch_atomic64_cmpxchg_relaxed	arch_atomic64_cmpxchg_relaxed
 
-static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
+static inline s64 arch_atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
 {
 	s64 result;
 	unsigned long tmp;
@@ -442,9 +442,9 @@ static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
 
 	return result;
 }
-#define atomic64_xchg_relaxed		atomic64_xchg_relaxed
+#define arch_atomic64_xchg_relaxed		arch_atomic64_xchg_relaxed
 
-static inline s64 atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 result;
 	unsigned long tmp;
@@ -470,9 +470,9 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 
 	return result;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
-static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 oldval, newval;
 	unsigned long tmp;
@@ -500,7 +500,7 @@ static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 
 	return oldval;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
 #endif /* !CONFIG_GENERIC_ATOMIC64 */
 #endif
diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
index 06bd8cea861a..4dfe538dfc68 100644
--- a/arch/arm/include/asm/cmpxchg.h
+++ b/arch/arm/include/asm/cmpxchg.h
@@ -114,7 +114,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 	return ret;
 }
 
-#define xchg_relaxed(ptr, x) ({						\
+#define arch_xchg_relaxed(ptr, x) ({					\
 	(__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr),		\
 				   sizeof(*(ptr)));			\
 })
@@ -128,20 +128,20 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
 #error "SMP is not supported on this platform"
 #endif
 
-#define xchg xchg_relaxed
+#define arch_xchg arch_xchg_relaxed
 
 /*
  * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
  * them available.
  */
-#define cmpxchg_local(ptr, o, n) ({					\
+#define arch_cmpxchg_local(ptr, o, n) ({				\
 	(__typeof(*ptr))__generic_cmpxchg_local((ptr),			\
 					        (unsigned long)(o),	\
 					        (unsigned long)(n),	\
 					        sizeof(*(ptr)));	\
 })
 
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 #include <asm-generic/cmpxchg.h>
 
@@ -207,7 +207,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	return oldval;
 }
 
-#define cmpxchg_relaxed(ptr,o,n) ({					\
+#define arch_cmpxchg_relaxed(ptr,o,n) ({				\
 	(__typeof__(*(ptr)))__cmpxchg((ptr),				\
 				      (unsigned long)(o),		\
 				      (unsigned long)(n),		\
@@ -234,7 +234,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	return ret;
 }
 
-#define cmpxchg_local(ptr, o, n) ({					\
+#define arch_cmpxchg_local(ptr, o, n) ({				\
 	(__typeof(*ptr))__cmpxchg_local((ptr),				\
 				        (unsigned long)(o),		\
 				        (unsigned long)(n),		\
@@ -266,13 +266,13 @@ static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
 	return oldval;
 }
 
-#define cmpxchg64_relaxed(ptr, o, n) ({					\
+#define arch_cmpxchg64_relaxed(ptr, o, n) ({				\
 	(__typeof__(*(ptr)))__cmpxchg64((ptr),				\
 					(unsigned long long)(o),	\
 					(unsigned long long)(n));	\
 })
 
-#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) arch_cmpxchg64_relaxed((ptr), (o), (n))
 
 #endif	/* __LINUX_ARM_ARCH__ >= 6 */
 
-- 
2.11.0


* [PATCH 16/33] locking/atomic: csky: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (14 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 15/33] locking/atomic: arm: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10 13:27   ` Guo Ren
  2021-05-10  9:37 ` [PATCH 17/33] locking/atomic: h8300: " Mark Rutland
                   ` (17 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates csky to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Guo Ren <guoren@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/csky/Kconfig               | 1 +
 arch/csky/include/asm/cmpxchg.h | 8 ++++----
 2 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 8de5b987edb9..3521f14bcd96 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -2,6 +2,7 @@
 config CSKY
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
diff --git a/arch/csky/include/asm/cmpxchg.h b/arch/csky/include/asm/cmpxchg.h
index dabc8e46ce7b..d1bef11f8dc9 100644
--- a/arch/csky/include/asm/cmpxchg.h
+++ b/arch/csky/include/asm/cmpxchg.h
@@ -31,7 +31,7 @@ extern void __bad_xchg(void);
 	__ret;							\
 })
 
-#define xchg_relaxed(ptr, x) \
+#define arch_xchg_relaxed(ptr, x) \
 		(__xchg_relaxed((x), (ptr), sizeof(*(ptr))))
 
 #define __cmpxchg_relaxed(ptr, old, new, size)			\
@@ -61,14 +61,14 @@ extern void __bad_xchg(void);
 	__ret;							\
 })
 
-#define cmpxchg_relaxed(ptr, o, n) \
+#define arch_cmpxchg_relaxed(ptr, o, n) \
 	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
 
-#define cmpxchg(ptr, o, n) 					\
+#define arch_cmpxchg(ptr, o, n) 				\
 ({								\
 	__typeof__(*(ptr)) __ret;				\
 	__smp_release_fence();					\
-	__ret = cmpxchg_relaxed(ptr, o, n);			\
+	__ret = arch_cmpxchg_relaxed(ptr, o, n);		\
 	__smp_acquire_fence();					\
 	__ret;							\
 })
-- 
2.11.0


* [PATCH 17/33] locking/atomic: h8300: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (15 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 16/33] locking/atomic: csky: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 18/33] locking/atomic: hexagon: " Mark Rutland
                   ` (16 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates h8300 to ARCH_ATOMIC, using
the asm-generic implementations.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
---
 arch/h8300/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/h8300/Kconfig b/arch/h8300/Kconfig
index 3e3e0f16f7e0..bdf05ad3206a 100644
--- a/arch/h8300/Kconfig
+++ b/arch/h8300/Kconfig
@@ -2,6 +2,7 @@
 config H8300
         def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
 	select BINFMT_FLAT_OLD_ALWAYS_RAM
-- 
2.11.0


* [PATCH 18/33] locking/atomic: hexagon: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (16 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 17/33] locking/atomic: h8300: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 19/33] locking/atomic: ia64: " Mark Rutland
                   ` (15 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates hexagon to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Brian Cain <bcain@codeaurora.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/hexagon/Kconfig               |  1 +
 arch/hexagon/include/asm/atomic.h  | 28 ++++++++++++++--------------
 arch/hexagon/include/asm/cmpxchg.h |  4 ++--
 3 files changed, 17 insertions(+), 16 deletions(-)

diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 44a409967af1..1368954ef679 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -5,6 +5,7 @@ comment "Linux Kernel Configuration for Hexagon"
 config HEXAGON
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_NO_PREEMPT
 	# Other pending projects/to-do items.
diff --git a/arch/hexagon/include/asm/atomic.h b/arch/hexagon/include/asm/atomic.h
index 4ab895d7111f..6e94f8d04146 100644
--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -14,7 +14,7 @@
 
 /*  Normal writes in our arch don't clear lock reservations  */
 
-static inline void atomic_set(atomic_t *v, int new)
+static inline void arch_atomic_set(atomic_t *v, int new)
 {
 	asm volatile(
 		"1:	r6 = memw_locked(%0);\n"
@@ -26,26 +26,26 @@ static inline void atomic_set(atomic_t *v, int new)
 	);
 }
 
-#define atomic_set_release(v, i)	atomic_set((v), (i))
+#define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
 /**
- * atomic_read - reads a word, atomically
+ * arch_atomic_read - reads a word, atomically
  * @v: pointer to atomic value
  *
  * Assumes all word reads on our architecture are atomic.
  */
-#define atomic_read(v)		READ_ONCE((v)->counter)
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
 /**
- * atomic_xchg - atomic
+ * arch_atomic_xchg - atomic
  * @v: pointer to memory to change
  * @new: new value (technically passed in a register -- see xchg)
  */
-#define atomic_xchg(v, new)	(xchg(&((v)->counter), (new)))
+#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), (new)))
 
 
 /**
- * atomic_cmpxchg - atomic compare-and-exchange values
+ * arch_atomic_cmpxchg - atomic compare-and-exchange values
  * @v: pointer to value to change
  * @old:  desired old value to match
  * @new:  new value to put in
@@ -61,7 +61,7 @@ static inline void atomic_set(atomic_t *v, int new)
  *
  * "old" is "expected" old val, __oldval is actual old value
  */
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int __oldval;
 
@@ -81,7 +81,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 }
 
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	int output;							\
 									\
@@ -97,7 +97,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	int output;							\
 									\
@@ -114,7 +114,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int output, val;						\
 									\
@@ -148,7 +148,7 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP
 
 /**
- * atomic_fetch_add_unless - add unless the number is a given value
+ * arch_atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer to value
  * @a: amount to add
  * @u: unless value is equal to u
@@ -157,7 +157,7 @@ ATOMIC_OPS(xor)
  *
  */
 
-static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int __oldval;
 	register int tmp;
@@ -180,6 +180,6 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	);
 	return __oldval;
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
 #endif
diff --git a/arch/hexagon/include/asm/cmpxchg.h b/arch/hexagon/include/asm/cmpxchg.h
index 92b8a02e588a..cdb705e1496a 100644
--- a/arch/hexagon/include/asm/cmpxchg.h
+++ b/arch/hexagon/include/asm/cmpxchg.h
@@ -42,7 +42,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
  * Atomically swap the contents of a register with memory.  Should be atomic
  * between multiple CPU's and within interrupts on the same CPU.
  */
-#define xchg(ptr, v) ((__typeof__(*(ptr)))__xchg((unsigned long)(v), (ptr), \
+#define arch_xchg(ptr, v) ((__typeof__(*(ptr)))__xchg((unsigned long)(v), (ptr), \
 	sizeof(*(ptr))))
 
 /*
@@ -51,7 +51,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr,
  *  variable casting.
  */
 
-#define cmpxchg(ptr, old, new)					\
+#define arch_cmpxchg(ptr, old, new)				\
 ({								\
 	__typeof__(ptr) __ptr = (ptr);				\
 	__typeof__(*(ptr)) __old = (old);			\
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 19/33] locking/atomic: ia64: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (17 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 18/33] locking/atomic: hexagon: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 20/33] locking/atomic: m68k: " Mark Rutland
                   ` (14 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates ia64 to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
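
For reference, the ia64 ops below remain compare-and-exchange loops;
stripped of the macro machinery, each op has roughly this shape
(illustrative sketch only, not part of the patch):

  /* Sketch of ia64's cmpxchg-loop read-modify-write ops. */
  static inline int sketch_atomic_add(int i, atomic_t *v)
  {
          int old, new;

          do {
                  old = arch_atomic_read(v);
                  new = old + i;
          } while (arch_atomic_cmpxchg(v, old, new) != old);

          return new;
  }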

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/ia64/Kconfig                    |  1 +
 arch/ia64/include/asm/atomic.h       | 74 ++++++++++++++++++------------------
 arch/ia64/include/asm/cmpxchg.h      | 16 ++++++++
 arch/ia64/include/uapi/asm/cmpxchg.h | 10 +++--
 4 files changed, 61 insertions(+), 40 deletions(-)
 create mode 100644 arch/ia64/include/asm/cmpxchg.h

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index 279252e3e0f7..c5414dcd5d0d 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -8,6 +8,7 @@ menu "Processor type and features"
 
 config IA64
 	bool
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_MARK_CLEAN
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index f267d956458f..266c429b9137 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -21,11 +21,11 @@
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-#define atomic_read(v)		READ_ONCE((v)->counter)
-#define atomic64_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic64_read(v)	READ_ONCE((v)->counter)
 
-#define atomic_set(v,i)		WRITE_ONCE(((v)->counter), (i))
-#define atomic64_set(v,i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic64_set(v,i)	WRITE_ONCE(((v)->counter), (i))
 
 #define ATOMIC_OP(op, c_op)						\
 static __inline__ int							\
@@ -36,7 +36,7 @@ ia64_atomic_##op (int i, atomic_t *v)					\
 									\
 	do {								\
 		CMPXCHG_BUGCHECK(v);					\
-		old = atomic_read(v);					\
+		old = arch_atomic_read(v);				\
 		new = old c_op i;					\
 	} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic_t)) != old); \
 	return new;							\
@@ -51,7 +51,7 @@ ia64_atomic_fetch_##op (int i, atomic_t *v)				\
 									\
 	do {								\
 		CMPXCHG_BUGCHECK(v);					\
-		old = atomic_read(v);					\
+		old = arch_atomic_read(v);				\
 		new = old c_op i;					\
 	} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic_t)) != old); \
 	return old;							\
@@ -74,7 +74,7 @@ ATOMIC_OPS(sub, -)
 #define __ia64_atomic_const(i)	0
 #endif
 
-#define atomic_add_return(i,v)						\
+#define arch_atomic_add_return(i,v)					\
 ({									\
 	int __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -82,7 +82,7 @@ ATOMIC_OPS(sub, -)
 		: ia64_atomic_add(__ia64_aar_i, v);			\
 })
 
-#define atomic_sub_return(i,v)						\
+#define arch_atomic_sub_return(i,v)					\
 ({									\
 	int __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -90,7 +90,7 @@ ATOMIC_OPS(sub, -)
 		: ia64_atomic_sub(__ia64_asr_i, v);			\
 })
 
-#define atomic_fetch_add(i,v)						\
+#define arch_atomic_fetch_add(i,v)					\
 ({									\
 	int __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -98,7 +98,7 @@ ATOMIC_OPS(sub, -)
 		: ia64_atomic_fetch_add(__ia64_aar_i, v);		\
 })
 
-#define atomic_fetch_sub(i,v)						\
+#define arch_atomic_fetch_sub(i,v)					\
 ({									\
 	int __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -110,13 +110,13 @@ ATOMIC_FETCH_OP(and, &)
 ATOMIC_FETCH_OP(or, |)
 ATOMIC_FETCH_OP(xor, ^)
 
-#define atomic_and(i,v)	(void)ia64_atomic_fetch_and(i,v)
-#define atomic_or(i,v)	(void)ia64_atomic_fetch_or(i,v)
-#define atomic_xor(i,v)	(void)ia64_atomic_fetch_xor(i,v)
+#define arch_atomic_and(i,v)	(void)ia64_atomic_fetch_and(i,v)
+#define arch_atomic_or(i,v)	(void)ia64_atomic_fetch_or(i,v)
+#define arch_atomic_xor(i,v)	(void)ia64_atomic_fetch_xor(i,v)
 
-#define atomic_fetch_and(i,v)	ia64_atomic_fetch_and(i,v)
-#define atomic_fetch_or(i,v)	ia64_atomic_fetch_or(i,v)
-#define atomic_fetch_xor(i,v)	ia64_atomic_fetch_xor(i,v)
+#define arch_atomic_fetch_and(i,v)	ia64_atomic_fetch_and(i,v)
+#define arch_atomic_fetch_or(i,v)	ia64_atomic_fetch_or(i,v)
+#define arch_atomic_fetch_xor(i,v)	ia64_atomic_fetch_xor(i,v)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP
@@ -131,7 +131,7 @@ ia64_atomic64_##op (s64 i, atomic64_t *v)				\
 									\
 	do {								\
 		CMPXCHG_BUGCHECK(v);					\
-		old = atomic64_read(v);					\
+		old = arch_atomic64_read(v);				\
 		new = old c_op i;					\
 	} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic64_t)) != old); \
 	return new;							\
@@ -146,7 +146,7 @@ ia64_atomic64_fetch_##op (s64 i, atomic64_t *v)				\
 									\
 	do {								\
 		CMPXCHG_BUGCHECK(v);					\
-		old = atomic64_read(v);					\
+		old = arch_atomic64_read(v);				\
 		new = old c_op i;					\
 	} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic64_t)) != old); \
 	return old;							\
@@ -159,7 +159,7 @@ ia64_atomic64_fetch_##op (s64 i, atomic64_t *v)				\
 ATOMIC64_OPS(add, +)
 ATOMIC64_OPS(sub, -)
 
-#define atomic64_add_return(i,v)					\
+#define arch_atomic64_add_return(i,v)					\
 ({									\
 	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -167,7 +167,7 @@ ATOMIC64_OPS(sub, -)
 		: ia64_atomic64_add(__ia64_aar_i, v);			\
 })
 
-#define atomic64_sub_return(i,v)					\
+#define arch_atomic64_sub_return(i,v)					\
 ({									\
 	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -175,7 +175,7 @@ ATOMIC64_OPS(sub, -)
 		: ia64_atomic64_sub(__ia64_asr_i, v);			\
 })
 
-#define atomic64_fetch_add(i,v)						\
+#define arch_atomic64_fetch_add(i,v)					\
 ({									\
 	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -183,7 +183,7 @@ ATOMIC64_OPS(sub, -)
 		: ia64_atomic64_fetch_add(__ia64_aar_i, v);		\
 })
 
-#define atomic64_fetch_sub(i,v)						\
+#define arch_atomic64_fetch_sub(i,v)					\
 ({									\
 	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
@@ -195,29 +195,29 @@ ATOMIC64_FETCH_OP(and, &)
 ATOMIC64_FETCH_OP(or, |)
 ATOMIC64_FETCH_OP(xor, ^)
 
-#define atomic64_and(i,v)	(void)ia64_atomic64_fetch_and(i,v)
-#define atomic64_or(i,v)	(void)ia64_atomic64_fetch_or(i,v)
-#define atomic64_xor(i,v)	(void)ia64_atomic64_fetch_xor(i,v)
+#define arch_atomic64_and(i,v)	(void)ia64_atomic64_fetch_and(i,v)
+#define arch_atomic64_or(i,v)	(void)ia64_atomic64_fetch_or(i,v)
+#define arch_atomic64_xor(i,v)	(void)ia64_atomic64_fetch_xor(i,v)
 
-#define atomic64_fetch_and(i,v)	ia64_atomic64_fetch_and(i,v)
-#define atomic64_fetch_or(i,v)	ia64_atomic64_fetch_or(i,v)
-#define atomic64_fetch_xor(i,v)	ia64_atomic64_fetch_xor(i,v)
+#define arch_atomic64_fetch_and(i,v)	ia64_atomic64_fetch_and(i,v)
+#define arch_atomic64_fetch_or(i,v)	ia64_atomic64_fetch_or(i,v)
+#define arch_atomic64_fetch_xor(i,v)	ia64_atomic64_fetch_xor(i,v)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_FETCH_OP
 #undef ATOMIC64_OP
 
-#define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, old, new) (arch_cmpxchg(&((v)->counter), old, new))
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
-#define atomic64_cmpxchg(v, old, new) \
-	(cmpxchg(&((v)->counter), old, new))
-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic64_cmpxchg(v, old, new) \
+	(arch_cmpxchg(&((v)->counter), old, new))
+#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
-#define atomic_add(i,v)			(void)atomic_add_return((i), (v))
-#define atomic_sub(i,v)			(void)atomic_sub_return((i), (v))
+#define arch_atomic_add(i,v)		(void)arch_atomic_add_return((i), (v))
+#define arch_atomic_sub(i,v)		(void)arch_atomic_sub_return((i), (v))
 
-#define atomic64_add(i,v)		(void)atomic64_add_return((i), (v))
-#define atomic64_sub(i,v)		(void)atomic64_sub_return((i), (v))
+#define arch_atomic64_add(i,v)		(void)arch_atomic64_add_return((i), (v))
+#define arch_atomic64_sub(i,v)		(void)arch_atomic64_sub_return((i), (v))
 
 #endif /* _ASM_IA64_ATOMIC_H */
diff --git a/arch/ia64/include/asm/cmpxchg.h b/arch/ia64/include/asm/cmpxchg.h
new file mode 100644
index 000000000000..94ef84429843
--- /dev/null
+++ b/arch/ia64/include/asm/cmpxchg.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_IA64_CMPXCHG_H
+#define _ASM_IA64_CMPXCHG_H
+
+#include <uapi/asm/cmpxchg.h>
+
+#define arch_xchg(ptr, x)	\
+({(__typeof__(*(ptr))) __xchg((unsigned long) (x), (ptr), sizeof(*(ptr)));})
+
+#define arch_cmpxchg(ptr, o, n)		cmpxchg_acq((ptr), (o), (n))
+#define arch_cmpxchg64(ptr, o, n)	cmpxchg_acq((ptr), (o), (n))
+
+#define arch_cmpxchg_local		arch_cmpxchg
+#define arch_cmpxchg64_local		arch_cmpxchg64
+
+#endif /* _ASM_IA64_CMPXCHG_H */
diff --git a/arch/ia64/include/uapi/asm/cmpxchg.h b/arch/ia64/include/uapi/asm/cmpxchg.h
index 5d90307fd6e0..926c6cb1e029 100644
--- a/arch/ia64/include/uapi/asm/cmpxchg.h
+++ b/arch/ia64/include/uapi/asm/cmpxchg.h
@@ -1,6 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef _ASM_IA64_CMPXCHG_H
-#define _ASM_IA64_CMPXCHG_H
+#ifndef _UAPI_ASM_IA64_CMPXCHG_H
+#define _UAPI_ASM_IA64_CMPXCHG_H
 
 /*
  * Compare/Exchange, forked from asm/intrinsics.h
@@ -53,8 +53,10 @@ extern void ia64_xchg_called_with_bad_pointer(void);
 	__xchg_result;							\
 })
 
+#ifndef __KERNEL__
 #define xchg(ptr, x)							\
 ({(__typeof__(*(ptr))) __xchg((unsigned long) (x), (ptr), sizeof(*(ptr)));})
+#endif
 
 /*
  * Atomic compare and exchange.  Compare OLD with MEM, if identical,
@@ -126,12 +128,14 @@ extern long ia64_cmpxchg_called_with_bad_pointer(void);
  * we had to back-pedal and keep the "legacy" behavior of a full fence :-(
  */
 
+#ifndef __KERNEL__
 /* for compatibility with other platforms: */
 #define cmpxchg(ptr, o, n)	cmpxchg_acq((ptr), (o), (n))
 #define cmpxchg64(ptr, o, n)	cmpxchg_acq((ptr), (o), (n))
 
 #define cmpxchg_local		cmpxchg
 #define cmpxchg64_local		cmpxchg64
+#endif
 
 #ifdef CONFIG_IA64_DEBUG_CMPXCHG
 # define CMPXCHG_BUGCHECK_DECL	int _cmpxchg_bugcheck_count = 128;
@@ -152,4 +156,4 @@ do {									\
 
 #endif /* !__ASSEMBLY__ */
 
-#endif /* _ASM_IA64_CMPXCHG_H */
+#endif /* _UAPI_ASM_IA64_CMPXCHG_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (18 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 19/33] locking/atomic: ia64: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-11  9:27   ` Geert Uytterhoeven
  2021-05-10  9:37 ` [PATCH 21/33] locking/atomic: microblaze: " Mark Rutland
                   ` (13 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
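
A note on the self-referential defines in the diff (e.g.
"#define arch_atomic_inc arch_atomic_inc"): these tell the generic
fallback layer that the arch supplies the op. A minimal sketch of that
detection, using the same idiom:

  /*
   * Sketch only: if the arch did not provide arch_atomic_inc(), the
   * fallback headers emit a generic version built on arch_atomic_add().
   */
  #ifndef arch_atomic_inc
  static __always_inline void
  arch_atomic_inc(atomic_t *v)
  {
          arch_atomic_add(1, v);
  }
  #define arch_atomic_inc arch_atomic_inc
  #endif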

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/m68k/Kconfig               |  1 +
 arch/m68k/include/asm/atomic.h  | 60 ++++++++++++++++++++---------------------
 arch/m68k/include/asm/cmpxchg.h | 10 +++----
 3 files changed, 36 insertions(+), 35 deletions(-)

diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index 372e4e69c43a..d1d91ac47f51 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -3,6 +3,7 @@ config M68K
 	bool
 	default y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
diff --git a/arch/m68k/include/asm/atomic.h b/arch/m68k/include/asm/atomic.h
index 756c5cc58f94..8637bf8a2f65 100644
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -16,8 +16,8 @@
  * We do not have SMP m68k systems, so we don't have to deal with that.
  */
 
-#define atomic_read(v)		READ_ONCE((v)->counter)
-#define atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
 
 /*
  * The ColdFire parts cannot do some immediate to memory operations,
@@ -30,7 +30,7 @@
 #endif
 
 #define ATOMIC_OP(op, c_op, asm_op)					\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	__asm__ __volatile__(#asm_op "l %1,%0" : "+m" (*v) : ASM_DI (i));\
 }									\
@@ -38,7 +38,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 #ifdef CONFIG_RMW_INSNS
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	int t, tmp;							\
 									\
@@ -48,12 +48,12 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 			"	casl %2,%1,%0\n"			\
 			"	jne 1b"					\
 			: "+m" (*v), "=&d" (t), "=&d" (tmp)		\
-			: "g" (i), "2" (atomic_read(v)));		\
+			: "g" (i), "2" (arch_atomic_read(v)));		\
 	return t;							\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int t, tmp;							\
 									\
@@ -63,14 +63,14 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 			"	casl %2,%1,%0\n"			\
 			"	jne 1b"					\
 			: "+m" (*v), "=&d" (t), "=&d" (tmp)		\
-			: "g" (i), "2" (atomic_read(v)));		\
+			: "g" (i), "2" (arch_atomic_read(v)));		\
 	return tmp;							\
 }
 
 #else
 
 #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
-static inline int atomic_##op##_return(int i, atomic_t * v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t * v)	\
 {									\
 	unsigned long flags;						\
 	int t;								\
@@ -83,7 +83,7 @@ static inline int atomic_##op##_return(int i, atomic_t * v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
-static inline int atomic_fetch_##op(int i, atomic_t * v)		\
+static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 {									\
 	unsigned long flags;						\
 	int t;								\
@@ -120,27 +120,27 @@ ATOMIC_OPS(xor, ^=, eor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_inc(atomic_t *v)
+static inline void arch_atomic_inc(atomic_t *v)
 {
 	__asm__ __volatile__("addql #1,%0" : "+m" (*v));
 }
-#define atomic_inc atomic_inc
+#define arch_atomic_inc arch_atomic_inc
 
-static inline void atomic_dec(atomic_t *v)
+static inline void arch_atomic_dec(atomic_t *v)
 {
 	__asm__ __volatile__("subql #1,%0" : "+m" (*v));
 }
-#define atomic_dec atomic_dec
+#define arch_atomic_dec arch_atomic_dec
 
-static inline int atomic_dec_and_test(atomic_t *v)
+static inline int arch_atomic_dec_and_test(atomic_t *v)
 {
 	char c;
 	__asm__ __volatile__("subql #1,%1; seq %0" : "=d" (c), "+m" (*v));
 	return c != 0;
 }
-#define atomic_dec_and_test atomic_dec_and_test
+#define arch_atomic_dec_and_test arch_atomic_dec_and_test
 
-static inline int atomic_dec_and_test_lt(atomic_t *v)
+static inline int arch_atomic_dec_and_test_lt(atomic_t *v)
 {
 	char c;
 	__asm__ __volatile__(
@@ -150,49 +150,49 @@ static inline int atomic_dec_and_test_lt(atomic_t *v)
 	return c != 0;
 }
 
-static inline int atomic_inc_and_test(atomic_t *v)
+static inline int arch_atomic_inc_and_test(atomic_t *v)
 {
 	char c;
 	__asm__ __volatile__("addql #1,%1; seq %0" : "=d" (c), "+m" (*v));
 	return c != 0;
 }
-#define atomic_inc_and_test atomic_inc_and_test
+#define arch_atomic_inc_and_test arch_atomic_inc_and_test
 
 #ifdef CONFIG_RMW_INSNS
 
-#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
 #else /* !CONFIG_RMW_INSNS */
 
-static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	unsigned long flags;
 	int prev;
 
 	local_irq_save(flags);
-	prev = atomic_read(v);
+	prev = arch_atomic_read(v);
 	if (prev == old)
-		atomic_set(v, new);
+		arch_atomic_set(v, new);
 	local_irq_restore(flags);
 	return prev;
 }
 
-static inline int atomic_xchg(atomic_t *v, int new)
+static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
 	unsigned long flags;
 	int prev;
 
 	local_irq_save(flags);
-	prev = atomic_read(v);
-	atomic_set(v, new);
+	prev = arch_atomic_read(v);
+	arch_atomic_set(v, new);
 	local_irq_restore(flags);
 	return prev;
 }
 
 #endif /* !CONFIG_RMW_INSNS */
 
-static inline int atomic_sub_and_test(int i, atomic_t *v)
+static inline int arch_atomic_sub_and_test(int i, atomic_t *v)
 {
 	char c;
 	__asm__ __volatile__("subl %2,%1; seq %0"
@@ -200,9 +200,9 @@ static inline int atomic_sub_and_test(int i, atomic_t *v)
 			     : ASM_DI (i));
 	return c != 0;
 }
-#define atomic_sub_and_test atomic_sub_and_test
+#define arch_atomic_sub_and_test arch_atomic_sub_and_test
 
-static inline int atomic_add_negative(int i, atomic_t *v)
+static inline int arch_atomic_add_negative(int i, atomic_t *v)
 {
 	char c;
 	__asm__ __volatile__("addl %2,%1; smi %0"
@@ -210,6 +210,6 @@ static inline int atomic_add_negative(int i, atomic_t *v)
 			     : ASM_DI (i));
 	return c != 0;
 }
-#define atomic_add_negative atomic_add_negative
+#define arch_atomic_add_negative arch_atomic_add_negative
 
 #endif /* __ARCH_M68K_ATOMIC __ */
diff --git a/arch/m68k/include/asm/cmpxchg.h b/arch/m68k/include/asm/cmpxchg.h
index 7629c9c1ed5b..e8ca4b0ccefa 100644
--- a/arch/m68k/include/asm/cmpxchg.h
+++ b/arch/m68k/include/asm/cmpxchg.h
@@ -76,11 +76,11 @@ static inline unsigned long __xchg(unsigned long x, volatile void * ptr, int siz
 }
 #endif
 
-#define xchg(ptr,x) ({(__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)));})
+#define arch_xchg(ptr,x) ({(__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)));})
 
 #include <asm-generic/cmpxchg-local.h>
 
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 extern unsigned long __invalid_cmpxchg_size(volatile void *,
 					    unsigned long, unsigned long, int);
@@ -118,14 +118,14 @@ static inline unsigned long __cmpxchg(volatile void *p, unsigned long old,
 	return old;
 }
 
-#define cmpxchg(ptr, o, n)						    \
+#define arch_cmpxchg(ptr, o, n)						    \
 	({(__typeof__(*(ptr)))__cmpxchg((ptr), (unsigned long)(o),	    \
 			(unsigned long)(n), sizeof(*(ptr)));})
-#define cmpxchg_local(ptr, o, n)					    \
+#define arch_cmpxchg_local(ptr, o, n)					    \
 	({(__typeof__(*(ptr)))__cmpxchg((ptr), (unsigned long)(o),	    \
 			(unsigned long)(n), sizeof(*(ptr)));})
 
-#define cmpxchg64(ptr, o, n)	cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64(ptr, o, n)	arch_cmpxchg64_local((ptr), (o), (n))
 
 #else
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 21/33] locking/atomic: microblaze: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (19 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 20/33] locking/atomic: m68k: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 22/33] locking/atomic: mips: " Mark Rutland
                   ` (12 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates microblaze to ARCH_ATOMIC,
using the asm-generic implementations.
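
For context, the asm-generic ops that microblaze now picks up boil
down, on a non-SMP configuration, to something like the following
(simplified sketch, not part of this patch):

  /* Sketch of an asm-generic style atomic op on a UP configuration. */
  static inline int sketch_atomic_add_return(int i, atomic_t *v)
  {
          unsigned long flags;
          int ret;

          local_irq_save(flags);
          ret = (v->counter += i);
          local_irq_restore(flags);

          return ret;
  }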

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Michal Simek <monstr@monstr.eu>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/microblaze/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 0660f47012bc..5a52922dc225 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -2,6 +2,7 @@
 config MICROBLAZE
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_NO_SWAP
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_GCOV_PROFILE_ALL
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 22/33] locking/atomic: mips: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (20 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 21/33] locking/atomic: microblaze: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 23/33] locking/atomic: nds32: " Mark Rutland
                   ` (11 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates mips to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
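
Note that mips continues to provide only the _relaxed forms of the
return/fetch ops; the common fallback layer builds the fully-ordered
variants around them, roughly as follows (simplified sketch):

  /* Sketch of how a fully-ordered op is composed from the _relaxed one. */
  static __always_inline int
  arch_atomic_add_return(int i, atomic_t *v)
  {
          int ret;

          __atomic_pre_full_fence();
          ret = arch_atomic_add_return_relaxed(i, v);
          __atomic_post_full_fence();

          return ret;
  }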

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Will Deacon <will@kernel.org>
---
 arch/mips/Kconfig               |  1 +
 arch/mips/include/asm/atomic.h  | 55 ++++++++++++++++++++++-------------------
 arch/mips/include/asm/cmpxchg.h | 22 ++++++++---------
 arch/mips/kernel/cmpxchg.c      |  4 +--
 4 files changed, 43 insertions(+), 39 deletions(-)

diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index ed51970c08e7..55b4da96872f 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -3,6 +3,7 @@ config MIPS
 	bool
 	default y
 	select ARCH_32BIT_OFF_T if !64BIT
+	select ARCH_ATOMIC
 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
 	select ARCH_HAS_DEBUG_VIRTUAL if !64BIT
 	select ARCH_HAS_FORTIFY_SOURCE
diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 27ad76791539..95e1f7f3597f 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -25,24 +25,25 @@
 #include <asm/war.h>
 
 #define ATOMIC_OPS(pfx, type)						\
-static __always_inline type pfx##_read(const pfx##_t *v)		\
+static __always_inline type arch_##pfx##_read(const pfx##_t *v)		\
 {									\
 	return READ_ONCE(v->counter);					\
 }									\
 									\
-static __always_inline void pfx##_set(pfx##_t *v, type i)		\
+static __always_inline void arch_##pfx##_set(pfx##_t *v, type i)	\
 {									\
 	WRITE_ONCE(v->counter, i);					\
 }									\
 									\
-static __always_inline type pfx##_cmpxchg(pfx##_t *v, type o, type n)	\
+static __always_inline type						\
+arch_##pfx##_cmpxchg(pfx##_t *v, type o, type n)			\
 {									\
-	return cmpxchg(&v->counter, o, n);				\
+	return arch_cmpxchg(&v->counter, o, n);				\
 }									\
 									\
-static __always_inline type pfx##_xchg(pfx##_t *v, type n)		\
+static __always_inline type arch_##pfx##_xchg(pfx##_t *v, type n)	\
 {									\
-	return xchg(&v->counter, n);					\
+	return arch_xchg(&v->counter, n);				\
 }
 
 ATOMIC_OPS(atomic, int)
@@ -53,7 +54,7 @@ ATOMIC_OPS(atomic64, s64)
 #endif
 
 #define ATOMIC_OP(pfx, op, type, c_op, asm_op, ll, sc)			\
-static __inline__ void pfx##_##op(type i, pfx##_t * v)			\
+static __inline__ void arch_##pfx##_##op(type i, pfx##_t * v)		\
 {									\
 	type temp;							\
 									\
@@ -80,7 +81,8 @@ static __inline__ void pfx##_##op(type i, pfx##_t * v)			\
 }
 
 #define ATOMIC_OP_RETURN(pfx, op, type, c_op, asm_op, ll, sc)		\
-static __inline__ type pfx##_##op##_return_relaxed(type i, pfx##_t * v)	\
+static __inline__ type							\
+arch_##pfx##_##op##_return_relaxed(type i, pfx##_t * v)			\
 {									\
 	type temp, result;						\
 									\
@@ -113,7 +115,8 @@ static __inline__ type pfx##_##op##_return_relaxed(type i, pfx##_t * v)	\
 }
 
 #define ATOMIC_FETCH_OP(pfx, op, type, c_op, asm_op, ll, sc)		\
-static __inline__ type pfx##_fetch_##op##_relaxed(type i, pfx##_t * v)	\
+static __inline__ type							\
+arch_##pfx##_fetch_##op##_relaxed(type i, pfx##_t * v)			\
 {									\
 	int temp, result;						\
 									\
@@ -153,18 +156,18 @@ static __inline__ type pfx##_fetch_##op##_relaxed(type i, pfx##_t * v)	\
 ATOMIC_OPS(atomic, add, int, +=, addu, ll, sc)
 ATOMIC_OPS(atomic, sub, int, -=, subu, ll, sc)
 
-#define atomic_add_return_relaxed	atomic_add_return_relaxed
-#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
-#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
-#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
+#define arch_atomic_add_return_relaxed	arch_atomic_add_return_relaxed
+#define arch_atomic_sub_return_relaxed	arch_atomic_sub_return_relaxed
+#define arch_atomic_fetch_add_relaxed	arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_sub_relaxed	arch_atomic_fetch_sub_relaxed
 
 #ifdef CONFIG_64BIT
 ATOMIC_OPS(atomic64, add, s64, +=, daddu, lld, scd)
 ATOMIC_OPS(atomic64, sub, s64, -=, dsubu, lld, scd)
-# define atomic64_add_return_relaxed	atomic64_add_return_relaxed
-# define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
-# define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
-# define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
+# define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
+# define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+# define arch_atomic64_fetch_add_relaxed	arch_atomic64_fetch_add_relaxed
+# define arch_atomic64_fetch_sub_relaxed	arch_atomic64_fetch_sub_relaxed
 #endif /* CONFIG_64BIT */
 
 #undef ATOMIC_OPS
@@ -176,17 +179,17 @@ ATOMIC_OPS(atomic, and, int, &=, and, ll, sc)
 ATOMIC_OPS(atomic, or, int, |=, or, ll, sc)
 ATOMIC_OPS(atomic, xor, int, ^=, xor, ll, sc)
 
-#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
-#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
-#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and_relaxed	arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_or_relaxed	arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_xor_relaxed	arch_atomic_fetch_xor_relaxed
 
 #ifdef CONFIG_64BIT
 ATOMIC_OPS(atomic64, and, s64, &=, and, lld, scd)
 ATOMIC_OPS(atomic64, or, s64, |=, or, lld, scd)
 ATOMIC_OPS(atomic64, xor, s64, ^=, xor, lld, scd)
-# define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
-# define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
-# define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
+# define arch_atomic64_fetch_and_relaxed	arch_atomic64_fetch_and_relaxed
+# define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
+# define arch_atomic64_fetch_xor_relaxed	arch_atomic64_fetch_xor_relaxed
 #endif
 
 #undef ATOMIC_OPS
@@ -203,7 +206,7 @@ ATOMIC_OPS(atomic64, xor, s64, ^=, xor, lld, scd)
  * The function returns the old value of @v minus @i.
  */
 #define ATOMIC_SIP_OP(pfx, type, op, ll, sc)				\
-static __inline__ int pfx##_sub_if_positive(type i, pfx##_t * v)	\
+static __inline__ int arch_##pfx##_sub_if_positive(type i, pfx##_t * v)	\
 {									\
 	type temp, result;						\
 									\
@@ -255,11 +258,11 @@ static __inline__ int pfx##_sub_if_positive(type i, pfx##_t * v)	\
 }
 
 ATOMIC_SIP_OP(atomic, int, subu, ll, sc)
-#define atomic_dec_if_positive(v)	atomic_sub_if_positive(1, v)
+#define arch_atomic_dec_if_positive(v)	arch_atomic_sub_if_positive(1, v)
 
 #ifdef CONFIG_64BIT
 ATOMIC_SIP_OP(atomic64, s64, dsubu, lld, scd)
-#define atomic64_dec_if_positive(v)	atomic64_sub_if_positive(1, v)
+#define arch_atomic64_dec_if_positive(v)	arch_atomic64_sub_if_positive(1, v)
 #endif
 
 #undef ATOMIC_SIP_OP
diff --git a/arch/mips/include/asm/cmpxchg.h b/arch/mips/include/asm/cmpxchg.h
index c7e0455d4d46..0b983800f48b 100644
--- a/arch/mips/include/asm/cmpxchg.h
+++ b/arch/mips/include/asm/cmpxchg.h
@@ -90,7 +90,7 @@ unsigned long __xchg(volatile void *ptr, unsigned long x, int size)
 	}
 }
 
-#define xchg(ptr, x)							\
+#define arch_xchg(ptr, x)						\
 ({									\
 	__typeof__(*(ptr)) __res;					\
 									\
@@ -175,14 +175,14 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	}
 }
 
-#define cmpxchg_local(ptr, old, new)					\
+#define arch_cmpxchg_local(ptr, old, new)				\
 	((__typeof__(*(ptr)))						\
 		__cmpxchg((ptr),					\
 			  (unsigned long)(__typeof__(*(ptr)))(old),	\
 			  (unsigned long)(__typeof__(*(ptr)))(new),	\
 			  sizeof(*(ptr))))
 
-#define cmpxchg(ptr, old, new)						\
+#define arch_cmpxchg(ptr, old, new)					\
 ({									\
 	__typeof__(*(ptr)) __res;					\
 									\
@@ -194,7 +194,7 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	if (__SYNC_loongson3_war == 0)					\
 		smp_mb__before_llsc();					\
 									\
-	__res = cmpxchg_local((ptr), (old), (new));			\
+	__res = arch_cmpxchg_local((ptr), (old), (new));		\
 									\
 	/*								\
 	 * In the Loongson3 workaround case __cmpxchg_asm() already	\
@@ -208,21 +208,21 @@ unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 })
 
 #ifdef CONFIG_64BIT
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
   ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_local((ptr), (o), (n));					\
+	arch_cmpxchg_local((ptr), (o), (n));				\
   })
 
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
   ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg((ptr), (o), (n));					\
+	arch_cmpxchg((ptr), (o), (n));					\
   })
 #else
 
 # include <asm-generic/cmpxchg-local.h>
-# define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+# define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 # ifdef CONFIG_SMP
 
@@ -294,7 +294,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
 	return ret;
 }
 
-#  define cmpxchg64(ptr, o, n) ({					\
+#  define arch_cmpxchg64(ptr, o, n) ({					\
 	unsigned long long __old = (__typeof__(*(ptr)))(o);		\
 	unsigned long long __new = (__typeof__(*(ptr)))(n);		\
 	__typeof__(*(ptr)) __res;					\
@@ -317,7 +317,7 @@ static inline unsigned long __cmpxchg64(volatile void *ptr,
 })
 
 # else /* !CONFIG_SMP */
-#  define cmpxchg64(ptr, o, n) cmpxchg64_local((ptr), (o), (n))
+#  define arch_cmpxchg64(ptr, o, n) arch_cmpxchg64_local((ptr), (o), (n))
 # endif /* !CONFIG_SMP */
 #endif /* !CONFIG_64BIT */
 
diff --git a/arch/mips/kernel/cmpxchg.c b/arch/mips/kernel/cmpxchg.c
index 89107deb03fc..ac9c8cfb2ba9 100644
--- a/arch/mips/kernel/cmpxchg.c
+++ b/arch/mips/kernel/cmpxchg.c
@@ -41,7 +41,7 @@ unsigned long __xchg_small(volatile void *ptr, unsigned long val, unsigned int s
 	do {
 		old32 = load32;
 		new32 = (load32 & ~mask) | (val << shift);
-		load32 = cmpxchg(ptr32, old32, new32);
+		load32 = arch_cmpxchg(ptr32, old32, new32);
 	} while (load32 != old32);
 
 	return (load32 & mask) >> shift;
@@ -97,7 +97,7 @@ unsigned long __cmpxchg_small(volatile void *ptr, unsigned long old,
 		 */
 		old32 = (load32 & ~mask) | (old << shift);
 		new32 = (load32 & ~mask) | (new << shift);
-		load32 = cmpxchg(ptr32, old32, new32);
+		load32 = arch_cmpxchg(ptr32, old32, new32);
 		if (load32 == old32)
 			return old;
 	}
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 23/33] locking/atomic: nds32: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (21 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 22/33] locking/atomic: mips: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 24/33] locking/atomic: nios2: " Mark Rutland
                   ` (10 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates nds32 to ARCH_ATOMIC, using
the asm-generic implementations.
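
Selecting ARCH_ATOMIC is all an asm-generic user needs; during the
transition the common atomic header dispatches between the two schemes
roughly as follows (simplified sketch of the header selection):

  #ifdef CONFIG_ARCH_ATOMIC
  #include <linux/atomic-arch-fallback.h>
  #include <asm-generic/atomic-instrumented.h>
  #else
  #include <linux/atomic-fallback.h>
  #endif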

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Nick Hu <nickhu@andestech.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: Will Deacon <will@kernel.org>
---
 arch/nds32/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig
index 62313902d75d..352913573aee 100644
--- a/arch/nds32/Kconfig
+++ b/arch/nds32/Kconfig
@@ -7,6 +7,7 @@
 config NDS32
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 24/33] locking/atomic: nios2: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (22 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 23/33] locking/atomic: nds32: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 25/33] locking/atomic: openrisc: " Mark Rutland
                   ` (9 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates nios2 to ARCH_ATOMIC, using
the asm-generic implementations.
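
As with the other conversions, nothing changes for callers; a
hypothetical snippet keeps using the un-prefixed API, which now
resolves to the instrumented wrappers:

  /* Hypothetical caller; names are illustrative. */
  static atomic_t example_refcnt = ATOMIC_INIT(0);

  static void example_get(void)
  {
          /* expands to the instrumented wrapper around arch_atomic_inc() */
          atomic_inc(&example_refcnt);
  }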

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Ley Foon Tan <ley.foon.tan@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/nios2/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index c24955c81c92..67dae88c5b53 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -2,6 +2,7 @@
 config NIOS2
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 25/33] locking/atomic: openrisc: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (23 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 24/33] locking/atomic: nios2: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-15 21:47   ` Stafford Horne
  2021-05-10  9:37 ` [PATCH 26/33] locking/atomic: parisc: " Mark Rutland
                   ` (8 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates openrisc to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
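
The header below keeps openrisc's ll/sc-based
arch_atomic_fetch_add_unless(); the generic layer then derives the more
familiar helpers from it, roughly (simplified sketch):

  /* Sketch of the helpers derived from arch_atomic_fetch_add_unless(). */
  static __always_inline bool
  arch_atomic_add_unless(atomic_t *v, int a, int u)
  {
          return arch_atomic_fetch_add_unless(v, a, u) != u;
  }

  static __always_inline bool
  arch_atomic_inc_not_zero(atomic_t *v)
  {
          return arch_atomic_add_unless(v, 1, 0);
  }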

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jonas Bonn <jonas@southpole.se>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stafford Horne <shorne@gmail.com>
Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
Cc: Will Deacon <will@kernel.org>
---
 arch/openrisc/Kconfig               |  1 +
 arch/openrisc/include/asm/atomic.h  | 42 ++++++++++++++++++++-----------------
 arch/openrisc/include/asm/cmpxchg.h |  4 ++--
 3 files changed, 26 insertions(+), 21 deletions(-)

diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index 591acc5990dc..8c50bc9674f5 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -7,6 +7,7 @@
 config OPENRISC
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_SET_UNCACHED
 	select ARCH_HAS_DMA_CLEAR_UNCACHED
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
index cb86970d3859..326167e4783a 100644
--- a/arch/openrisc/include/asm/atomic.h
+++ b/arch/openrisc/include/asm/atomic.h
@@ -13,7 +13,7 @@
 
 /* Atomically perform op with v->counter and i */
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	int tmp;							\
 									\
@@ -30,7 +30,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 
 /* Atomically perform op with v->counter and i, return the result */
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	int tmp;							\
 									\
@@ -49,7 +49,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 
 /* Atomically perform op with v->counter and i, return orig v->counter */
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int tmp, old;							\
 									\
@@ -75,6 +75,8 @@ ATOMIC_FETCH_OP(and)
 ATOMIC_FETCH_OP(or)
 ATOMIC_FETCH_OP(xor)
 
+ATOMIC_OP(add)
+ATOMIC_OP(sub)
 ATOMIC_OP(and)
 ATOMIC_OP(or)
 ATOMIC_OP(xor)
@@ -83,16 +85,18 @@ ATOMIC_OP(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define atomic_add_return	atomic_add_return
-#define atomic_sub_return	atomic_sub_return
-#define atomic_fetch_add	atomic_fetch_add
-#define atomic_fetch_sub	atomic_fetch_sub
-#define atomic_fetch_and	atomic_fetch_and
-#define atomic_fetch_or		atomic_fetch_or
-#define atomic_fetch_xor	atomic_fetch_xor
-#define atomic_and	atomic_and
-#define atomic_or	atomic_or
-#define atomic_xor	atomic_xor
+#define arch_atomic_add_return	arch_atomic_add_return
+#define arch_atomic_sub_return	arch_atomic_sub_return
+#define arch_atomic_fetch_add	arch_atomic_fetch_add
+#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
+#define arch_atomic_fetch_and	arch_atomic_fetch_and
+#define arch_atomic_fetch_or	arch_atomic_fetch_or
+#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
+#define arch_atomic_add		arch_atomic_add
+#define arch_atomic_sub		arch_atomic_sub
+#define arch_atomic_and		arch_atomic_and
+#define arch_atomic_or		arch_atomic_or
+#define arch_atomic_xor		arch_atomic_xor
 
 /*
  * Atomically add a to v->counter as long as v is not already u.
@@ -100,7 +104,7 @@ ATOMIC_OP(xor)
  *
  * This is often used through atomic_inc_not_zero()
  */
-static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int old, tmp;
 
@@ -119,14 +123,14 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 
 	return old;
 }
-#define atomic_fetch_add_unless	atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless	arch_atomic_fetch_add_unless
 
-#define atomic_read(v)			READ_ONCE((v)->counter)
-#define atomic_set(v,i)			WRITE_ONCE((v)->counter, (i))
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
+#define arch_atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
 
 #include <asm/cmpxchg.h>
 
-#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
-#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
+#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
+#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
 
 #endif /* __ASM_OPENRISC_ATOMIC_H */
diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index f9cd43a39d72..79fd16162ccb 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -132,7 +132,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
 	}
 }
 
-#define cmpxchg(ptr, o, n)						\
+#define arch_cmpxchg(ptr, o, n)						\
 	({								\
 		(__typeof__(*(ptr))) __cmpxchg((ptr),			\
 					       (unsigned long)(o),	\
@@ -161,7 +161,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long with,
 	}
 }
 
-#define xchg(ptr, with) 						\
+#define arch_xchg(ptr, with) 						\
 	({								\
 		(__typeof__(*(ptr))) __xchg((ptr),			\
 					    (unsigned long)(with),	\
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 26/33] locking/atomic: parisc: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (24 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 25/33] locking/atomic: openrisc: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 27/33] locking/atomic: powerpc: " Mark Rutland
                   ` (7 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates parisc to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
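
For reference, parisc's atomics are backed by a hash of spinlocks (see
_atomic_spin_lock_irqsave() in the diff); stripped of the macros, each
op has roughly this shape (illustrative only):

  /* Sketch of a spinlock-backed atomic op, as parisc implements them. */
  static inline int sketch_atomic_add_return(int i, atomic_t *v)
  {
          unsigned long flags;
          int ret;

          _atomic_spin_lock_irqsave(v, flags);
          ret = (v->counter += i);
          _atomic_spin_unlock_irqrestore(v, flags);

          return ret;
  }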

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: "James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Helge Deller <deller@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/parisc/Kconfig               |  1 +
 arch/parisc/include/asm/atomic.h  | 34 +++++++++++++++++-----------------
 arch/parisc/include/asm/cmpxchg.h | 12 ++++++------
 3 files changed, 24 insertions(+), 23 deletions(-)

diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index bde9907bc5b2..bfa120a4add1 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -2,6 +2,7 @@
 config PARISC
 	def_bool y
 	select ARCH_32BIT_OFF_T if !64BIT
+	select ARCH_ATOMIC
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select HAVE_IDE
 	select HAVE_FUNCTION_TRACER
diff --git a/arch/parisc/include/asm/atomic.h b/arch/parisc/include/asm/atomic.h
index 21b375c67e53..dd5a299ada69 100644
--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -56,7 +56,7 @@ extern arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
  * are atomic, so a reader never sees inconsistent values.
  */
 
-static __inline__ void atomic_set(atomic_t *v, int i)
+static __inline__ void arch_atomic_set(atomic_t *v, int i)
 {
 	unsigned long flags;
 	_atomic_spin_lock_irqsave(v, flags);
@@ -66,19 +66,19 @@ static __inline__ void atomic_set(atomic_t *v, int i)
 	_atomic_spin_unlock_irqrestore(v, flags);
 }
 
-#define atomic_set_release(v, i)	atomic_set((v), (i))
+#define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
-static __inline__ int atomic_read(const atomic_t *v)
+static __inline__ int arch_atomic_read(const atomic_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /* exported interface */
-#define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
 
 #define ATOMIC_OP(op, c_op)						\
-static __inline__ void atomic_##op(int i, atomic_t *v)			\
+static __inline__ void arch_atomic_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long flags;						\
 									\
@@ -88,7 +88,7 @@ static __inline__ void atomic_##op(int i, atomic_t *v)			\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-static __inline__ int atomic_##op##_return(int i, atomic_t *v)		\
+static __inline__ int arch_atomic_##op##_return(int i, atomic_t *v)	\
 {									\
 	unsigned long flags;						\
 	int ret;							\
@@ -101,7 +101,7 @@ static __inline__ int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-static __inline__ int atomic_fetch_##op(int i, atomic_t *v)		\
+static __inline__ int arch_atomic_fetch_##op(int i, atomic_t *v)	\
 {									\
 	unsigned long flags;						\
 	int ret;							\
@@ -141,7 +141,7 @@ ATOMIC_OPS(xor, ^=)
 #define ATOMIC64_INIT(i) { (i) }
 
 #define ATOMIC64_OP(op, c_op)						\
-static __inline__ void atomic64_##op(s64 i, atomic64_t *v)		\
+static __inline__ void arch_atomic64_##op(s64 i, atomic64_t *v)		\
 {									\
 	unsigned long flags;						\
 									\
@@ -151,7 +151,7 @@ static __inline__ void atomic64_##op(s64 i, atomic64_t *v)		\
 }
 
 #define ATOMIC64_OP_RETURN(op, c_op)					\
-static __inline__ s64 atomic64_##op##_return(s64 i, atomic64_t *v)	\
+static __inline__ s64 arch_atomic64_##op##_return(s64 i, atomic64_t *v)	\
 {									\
 	unsigned long flags;						\
 	s64 ret;							\
@@ -164,7 +164,7 @@ static __inline__ s64 atomic64_##op##_return(s64 i, atomic64_t *v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-static __inline__ s64 atomic64_fetch_##op(s64 i, atomic64_t *v)		\
+static __inline__ s64 arch_atomic64_fetch_##op(s64 i, atomic64_t *v)	\
 {									\
 	unsigned long flags;						\
 	s64 ret;							\
@@ -200,7 +200,7 @@ ATOMIC64_OPS(xor, ^=)
 #undef ATOMIC64_OP
 
 static __inline__ void
-atomic64_set(atomic64_t *v, s64 i)
+arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned long flags;
 	_atomic_spin_lock_irqsave(v, flags);
@@ -210,18 +210,18 @@ atomic64_set(atomic64_t *v, s64 i)
 	_atomic_spin_unlock_irqrestore(v, flags);
 }
 
-#define atomic64_set_release(v, i)	atomic64_set((v), (i))
+#define arch_atomic64_set_release(v, i)	arch_atomic64_set((v), (i))
 
 static __inline__ s64
-atomic64_read(const atomic64_t *v)
+arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /* exported interface */
-#define atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic64_cmpxchg(v, o, n) \
+	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
 #endif /* !CONFIG_64BIT */
 
diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
index c2015654b684..5f274be10567 100644
--- a/arch/parisc/include/asm/cmpxchg.h
+++ b/arch/parisc/include/asm/cmpxchg.h
@@ -44,7 +44,7 @@ __xchg(unsigned long x, volatile void *ptr, int size)
 **		if (((unsigned long)p & 0xf) == 0)
 **			return __ldcw(p);
 */
-#define xchg(ptr, x)							\
+#define arch_xchg(ptr, x)						\
 ({									\
 	__typeof__(*(ptr)) __ret;					\
 	__typeof__(*(ptr)) _x_ = (x);					\
@@ -78,7 +78,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 	return old;
 }
 
-#define cmpxchg(ptr, o, n)						 \
+#define arch_cmpxchg(ptr, o, n)						 \
 ({									 \
 	__typeof__(*(ptr)) _o_ = (o);					 \
 	__typeof__(*(ptr)) _n_ = (n);					 \
@@ -106,19 +106,19 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
  * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
  * them available.
  */
-#define cmpxchg_local(ptr, o, n)					\
+#define arch_cmpxchg_local(ptr, o, n)					\
 	((__typeof__(*(ptr)))__cmpxchg_local((ptr), (unsigned long)(o),	\
 			(unsigned long)(n), sizeof(*(ptr))))
 #ifdef CONFIG_64BIT
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_local((ptr), (o), (n));					\
 })
 #else
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #endif
 
-#define cmpxchg64(ptr, o, n) __cmpxchg_u64(ptr, o, n)
+#define arch_cmpxchg64(ptr, o, n) __cmpxchg_u64(ptr, o, n)
 
 #endif /* _ASM_PARISC_CMPXCHG_H_ */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 27/33] locking/atomic: powerpc: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (25 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 26/33] locking/atomic: parisc: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-11  9:16   ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 28/33] locking/atomic: riscv: " Mark Rutland
                   ` (6 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
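
For context, the regular wrappers generated into
include/asm-generic/atomic-instrumented.h are roughly of this shape
(a simplified sketch, not the verbatim generated code):

  static __always_inline int
  atomic_read(const atomic_t *v)
  {
          instrument_atomic_read(v, sizeof(*v));
          return arch_atomic_read(v);
  }

so once the arch provides arch_atomic_read(), sanitizer checks can be
applied to every atomic_read() caller without arch-specific changes.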

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/powerpc/Kconfig               |   1 +
 arch/powerpc/include/asm/atomic.h  | 140 +++++++++++++++++++------------------
 arch/powerpc/include/asm/cmpxchg.h |  30 ++++----
 3 files changed, 89 insertions(+), 82 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 088dd2afcfe4..d143c2b616f0 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -118,6 +118,7 @@ config PPC
 	# Please keep this list sorted alphabetically.
 	#
 	select ARCH_32BIT_OFF_T if PPC32
+	select ARCH_ATOMIC
 	select ARCH_ENABLE_MEMORY_HOTPLUG
 	select ARCH_ENABLE_MEMORY_HOTREMOVE
 	select ARCH_HAS_COPY_MC			if PPC64
diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 61c6e8b200e8..5f2a8e2514d8 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -23,7 +23,7 @@
 #define __atomic_release_fence()					\
 	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory")
 
-static __inline__ int atomic_read(const atomic_t *v)
+static __inline__ int arch_atomic_read(const atomic_t *v)
 {
 	int t;
 
@@ -32,13 +32,13 @@ static __inline__ int atomic_read(const atomic_t *v)
 	return t;
 }
 
-static __inline__ void atomic_set(atomic_t *v, int i)
+static __inline__ void arch_atomic_set(atomic_t *v, int i)
 {
 	__asm__ __volatile__("stw%U0%X0 %1,%0" : "=m"UPD_CONSTR(v->counter) : "r"(i));
 }
 
 #define ATOMIC_OP(op, asm_op)						\
-static __inline__ void atomic_##op(int a, atomic_t *v)			\
+static __inline__ void arch_atomic_##op(int a, atomic_t *v)		\
 {									\
 	int t;								\
 									\
@@ -53,7 +53,7 @@ static __inline__ void atomic_##op(int a, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN_RELAXED(op, asm_op)				\
-static inline int atomic_##op##_return_relaxed(int a, atomic_t *v)	\
+static inline int arch_atomic_##op##_return_relaxed(int a, atomic_t *v)	\
 {									\
 	int t;								\
 									\
@@ -70,7 +70,7 @@ static inline int atomic_##op##_return_relaxed(int a, atomic_t *v)	\
 }
 
 #define ATOMIC_FETCH_OP_RELAXED(op, asm_op)				\
-static inline int atomic_fetch_##op##_relaxed(int a, atomic_t *v)	\
+static inline int arch_atomic_fetch_##op##_relaxed(int a, atomic_t *v)	\
 {									\
 	int res, t;							\
 									\
@@ -94,11 +94,11 @@ static inline int atomic_fetch_##op##_relaxed(int a, atomic_t *v)	\
 ATOMIC_OPS(add, add)
 ATOMIC_OPS(sub, subf)
 
-#define atomic_add_return_relaxed atomic_add_return_relaxed
-#define atomic_sub_return_relaxed atomic_sub_return_relaxed
+#define arch_atomic_add_return_relaxed arch_atomic_add_return_relaxed
+#define arch_atomic_sub_return_relaxed arch_atomic_sub_return_relaxed
 
-#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
-#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
+#define arch_atomic_fetch_add_relaxed arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_sub_relaxed arch_atomic_fetch_sub_relaxed
 
 #undef ATOMIC_OPS
 #define ATOMIC_OPS(op, asm_op)						\
@@ -109,16 +109,16 @@ ATOMIC_OPS(and, and)
 ATOMIC_OPS(or, or)
 ATOMIC_OPS(xor, xor)
 
-#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
-#define atomic_fetch_or_relaxed  atomic_fetch_or_relaxed
-#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and_relaxed arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_or_relaxed  arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_xor_relaxed arch_atomic_fetch_xor_relaxed
 
 #undef ATOMIC_OPS
 #undef ATOMIC_FETCH_OP_RELAXED
 #undef ATOMIC_OP_RETURN_RELAXED
 #undef ATOMIC_OP
 
-static __inline__ void atomic_inc(atomic_t *v)
+static __inline__ void arch_atomic_inc(atomic_t *v)
 {
 	int t;
 
@@ -131,9 +131,9 @@ static __inline__ void atomic_inc(atomic_t *v)
 	: "r" (&v->counter)
 	: "cc", "xer");
 }
-#define atomic_inc atomic_inc
+#define arch_atomic_inc arch_atomic_inc
 
-static __inline__ int atomic_inc_return_relaxed(atomic_t *v)
+static __inline__ int arch_atomic_inc_return_relaxed(atomic_t *v)
 {
 	int t;
 
@@ -149,7 +149,7 @@ static __inline__ int atomic_inc_return_relaxed(atomic_t *v)
 	return t;
 }
 
-static __inline__ void atomic_dec(atomic_t *v)
+static __inline__ void arch_atomic_dec(atomic_t *v)
 {
 	int t;
 
@@ -162,9 +162,9 @@ static __inline__ void atomic_dec(atomic_t *v)
 	: "r" (&v->counter)
 	: "cc", "xer");
 }
-#define atomic_dec atomic_dec
+#define arch_atomic_dec arch_atomic_dec
 
-static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
+static __inline__ int arch_atomic_dec_return_relaxed(atomic_t *v)
 {
 	int t;
 
@@ -180,17 +180,20 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
 	return t;
 }
 
-#define atomic_inc_return_relaxed atomic_inc_return_relaxed
-#define atomic_dec_return_relaxed atomic_dec_return_relaxed
+#define arch_atomic_inc_return_relaxed arch_atomic_inc_return_relaxed
+#define arch_atomic_dec_return_relaxed arch_atomic_dec_return_relaxed
 
-#define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
-#define atomic_cmpxchg_relaxed(v, o, n) \
-	cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define atomic_cmpxchg_acquire(v, o, n) \
-	cmpxchg_acquire(&((v)->counter), (o), (n))
+#define arch_atomic_cmpxchg(v, o, n) \
+	(arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_cmpxchg_relaxed(v, o, n) \
+	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
+#define arch_atomic_cmpxchg_acquire(v, o, n) \
+	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
 
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
-#define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
+#define arch_atomic_xchg(v, new) \
+	(arch_xchg(&((v)->counter), new))
+#define arch_atomic_xchg_relaxed(v, new) \
+	arch_xchg_relaxed(&((v)->counter), (new))
 
 /*
  * Don't want to override the generic atomic_try_cmpxchg_acquire, because
@@ -199,7 +202,7 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
  * would be a surprise).
  */
 static __always_inline bool
-atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
+arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
 {
 	int r, o = *old;
 
@@ -229,7 +232,7 @@ atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static __inline__ int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int t;
 
@@ -250,7 +253,7 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 
 	return t;
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
 /**
  * atomic_inc_not_zero - increment unless the number is zero
@@ -259,7 +262,7 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * Atomically increments @v by 1, so long as @v is non-zero.
  * Returns non-zero if @v was non-zero, and zero otherwise.
  */
-static __inline__ int atomic_inc_not_zero(atomic_t *v)
+static __inline__ int arch_atomic_inc_not_zero(atomic_t *v)
 {
 	int t1, t2;
 
@@ -280,14 +283,14 @@ static __inline__ int atomic_inc_not_zero(atomic_t *v)
 
 	return t1;
 }
-#define atomic_inc_not_zero(v) atomic_inc_not_zero((v))
+#define arch_atomic_inc_not_zero(v) arch_atomic_inc_not_zero((v))
 
 /*
  * Atomically test *v and decrement if it is greater than 0.
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static __inline__ int atomic_dec_if_positive(atomic_t *v)
+static __inline__ int arch_atomic_dec_if_positive(atomic_t *v)
 {
 	int t;
 
@@ -307,13 +310,13 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
 
 	return t;
 }
-#define atomic_dec_if_positive atomic_dec_if_positive
+#define arch_atomic_dec_if_positive arch_atomic_dec_if_positive
 
 #ifdef __powerpc64__
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-static __inline__ s64 atomic64_read(const atomic64_t *v)
+static __inline__ s64 arch_atomic64_read(const atomic64_t *v)
 {
 	s64 t;
 
@@ -322,13 +325,13 @@ static __inline__ s64 atomic64_read(const atomic64_t *v)
 	return t;
 }
 
-static __inline__ void atomic64_set(atomic64_t *v, s64 i)
+static __inline__ void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"UPD_CONSTR(v->counter) : "r"(i));
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(s64 a, atomic64_t *v)		\
+static __inline__ void arch_atomic64_##op(s64 a, atomic64_t *v)		\
 {									\
 	s64 t;								\
 									\
@@ -344,7 +347,7 @@ static __inline__ void atomic64_##op(s64 a, atomic64_t *v)		\
 
 #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op)				\
 static inline s64							\
-atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)			\
+arch_atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)		\
 {									\
 	s64 t;								\
 									\
@@ -362,7 +365,7 @@ atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)			\
 
 #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op)				\
 static inline s64							\
-atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)			\
+arch_atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)		\
 {									\
 	s64 res, t;							\
 									\
@@ -386,11 +389,11 @@ atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)			\
 ATOMIC64_OPS(add, add)
 ATOMIC64_OPS(sub, subf)
 
-#define atomic64_add_return_relaxed atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
+#define arch_atomic64_add_return_relaxed arch_atomic64_add_return_relaxed
+#define arch_atomic64_sub_return_relaxed arch_atomic64_sub_return_relaxed
 
-#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
+#define arch_atomic64_fetch_add_relaxed arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_sub_relaxed arch_atomic64_fetch_sub_relaxed
 
 #undef ATOMIC64_OPS
 #define ATOMIC64_OPS(op, asm_op)					\
@@ -401,16 +404,16 @@ ATOMIC64_OPS(and, and)
 ATOMIC64_OPS(or, or)
 ATOMIC64_OPS(xor, xor)
 
-#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
-#define atomic64_fetch_or_relaxed  atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_and_relaxed arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_or_relaxed  arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_xor_relaxed arch_atomic64_fetch_xor_relaxed
 
 #undef ATOPIC64_OPS
 #undef ATOMIC64_FETCH_OP_RELAXED
 #undef ATOMIC64_OP_RETURN_RELAXED
 #undef ATOMIC64_OP
 
-static __inline__ void atomic64_inc(atomic64_t *v)
+static __inline__ void arch_atomic64_inc(atomic64_t *v)
 {
 	s64 t;
 
@@ -423,9 +426,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
 	: "r" (&v->counter)
 	: "cc", "xer");
 }
-#define atomic64_inc atomic64_inc
+#define arch_atomic64_inc arch_atomic64_inc
 
-static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
+static __inline__ s64 arch_atomic64_inc_return_relaxed(atomic64_t *v)
 {
 	s64 t;
 
@@ -441,7 +444,7 @@ static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
 	return t;
 }
 
-static __inline__ void atomic64_dec(atomic64_t *v)
+static __inline__ void arch_atomic64_dec(atomic64_t *v)
 {
 	s64 t;
 
@@ -454,9 +457,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
 	: "r" (&v->counter)
 	: "cc", "xer");
 }
-#define atomic64_dec atomic64_dec
+#define arch_atomic64_dec arch_atomic64_dec
 
-static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
+static __inline__ s64 arch_atomic64_dec_return_relaxed(atomic64_t *v)
 {
 	s64 t;
 
@@ -472,14 +475,14 @@ static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
 	return t;
 }
 
-#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed
-#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed
+#define arch_atomic64_inc_return_relaxed arch_atomic64_inc_return_relaxed
+#define arch_atomic64_dec_return_relaxed arch_atomic64_dec_return_relaxed
 
 /*
  * Atomically test *v and decrement if it is greater than 0.
  * The function returns the old value of *v minus 1.
  */
-static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
+static __inline__ s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	s64 t;
 
@@ -498,16 +501,19 @@ static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
 
 	return t;
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
-#define atomic64_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
-#define atomic64_cmpxchg_relaxed(v, o, n) \
-	cmpxchg_relaxed(&((v)->counter), (o), (n))
-#define atomic64_cmpxchg_acquire(v, o, n) \
-	cmpxchg_acquire(&((v)->counter), (o), (n))
+#define arch_atomic64_cmpxchg(v, o, n) \
+	(arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic64_cmpxchg_relaxed(v, o, n) \
+	arch_cmpxchg_relaxed(&((v)->counter), (o), (n))
+#define arch_atomic64_cmpxchg_acquire(v, o, n) \
+	arch_cmpxchg_acquire(&((v)->counter), (o), (n))
 
-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
-#define atomic64_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
+#define arch_atomic64_xchg(v, new) \
+	(arch_xchg(&((v)->counter), new))
+#define arch_atomic64_xchg_relaxed(v, new) \
+	arch_xchg_relaxed(&((v)->counter), (new))
 
 /**
  * atomic64_fetch_add_unless - add unless the number is a given value
@@ -518,7 +524,7 @@ static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	s64 t;
 
@@ -539,7 +545,7 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 
 	return t;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 
 /**
  * atomic_inc64_not_zero - increment unless the number is zero
@@ -548,7 +554,7 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
  * Atomically increments @v by 1, so long as @v is non-zero.
  * Returns non-zero if @v was non-zero, and zero otherwise.
  */
-static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
+static __inline__ int arch_atomic64_inc_not_zero(atomic64_t *v)
 {
 	s64 t1, t2;
 
@@ -569,7 +575,7 @@ static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
 
 	return t1 != 0;
 }
-#define atomic64_inc_not_zero(v) atomic64_inc_not_zero((v))
+#define arch_atomic64_inc_not_zero(v) arch_atomic64_inc_not_zero((v))
 
 #endif /* __powerpc64__ */
 
diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
index 69f52fdcf064..05f246c0e36e 100644
--- a/arch/powerpc/include/asm/cmpxchg.h
+++ b/arch/powerpc/include/asm/cmpxchg.h
@@ -185,14 +185,14 @@ __xchg_relaxed(void *ptr, unsigned long x, unsigned int size)
 	BUILD_BUG_ON_MSG(1, "Unsupported size for __xchg_local");
 	return x;
 }
-#define xchg_local(ptr,x)						     \
+#define arch_xchg_local(ptr,x)						     \
   ({									     \
      __typeof__(*(ptr)) _x_ = (x);					     \
      (__typeof__(*(ptr))) __xchg_local((ptr),				     \
      		(unsigned long)_x_, sizeof(*(ptr))); 			     \
   })
 
-#define xchg_relaxed(ptr, x)						\
+#define arch_xchg_relaxed(ptr, x)					\
 ({									\
 	__typeof__(*(ptr)) _x_ = (x);					\
 	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
@@ -467,7 +467,7 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 	BUILD_BUG_ON_MSG(1, "Unsupported size for __cmpxchg_acquire");
 	return old;
 }
-#define cmpxchg(ptr, o, n)						 \
+#define arch_cmpxchg(ptr, o, n)						 \
   ({									 \
      __typeof__(*(ptr)) _o_ = (o);					 \
      __typeof__(*(ptr)) _n_ = (n);					 \
@@ -476,7 +476,7 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
   })
 
 
-#define cmpxchg_local(ptr, o, n)					 \
+#define arch_cmpxchg_local(ptr, o, n)					 \
   ({									 \
      __typeof__(*(ptr)) _o_ = (o);					 \
      __typeof__(*(ptr)) _n_ = (n);					 \
@@ -484,7 +484,7 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 				    (unsigned long)_n_, sizeof(*(ptr))); \
   })
 
-#define cmpxchg_relaxed(ptr, o, n)					\
+#define arch_cmpxchg_relaxed(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -493,7 +493,7 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 			sizeof(*(ptr)));				\
 })
 
-#define cmpxchg_acquire(ptr, o, n)					\
+#define arch_cmpxchg_acquire(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -502,29 +502,29 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
 			sizeof(*(ptr)));				\
 })
 #ifdef CONFIG_PPC64
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
   ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg((ptr), (o), (n));					\
+	arch_cmpxchg((ptr), (o), (n));					\
   })
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
   ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_local((ptr), (o), (n));					\
+	arch_cmpxchg_local((ptr), (o), (n));				\
   })
-#define cmpxchg64_relaxed(ptr, o, n)					\
+#define arch_cmpxchg64_relaxed(ptr, o, n)				\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_relaxed((ptr), (o), (n));				\
+	arch_cmpxchg_relaxed((ptr), (o), (n));				\
 })
-#define cmpxchg64_acquire(ptr, o, n)					\
+#define arch_cmpxchg64_acquire(ptr, o, n)				\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_acquire((ptr), (o), (n));				\
+	arch_cmpxchg_acquire((ptr), (o), (n));				\
 })
 #else
 #include <asm-generic/cmpxchg-local.h>
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 #endif
 
 #endif /* __KERNEL__ */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 28/33] locking/atomic: riscv: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (26 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 27/33] locking/atomic: powerpc: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-22 19:02   ` Palmer Dabbelt
  2021-05-10  9:37 ` [PATCH 29/33] locking/atomic: sh: " Mark Rutland
                   ` (5 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates riscv to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
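
For context, the generic fallbacks in include/linux/atomic-arch-fallback.h
only kick in for variants the arch does not define, which is why the
arch_atomic_*() implementations below are accompanied by same-named
#defines. A missing fully-ordered variant is built roughly like this
(a sketch, not the verbatim generated code):

  #ifndef arch_atomic_add_return
  static __always_inline int
  arch_atomic_add_return(int i, atomic_t *v)
  {
          int ret;
          __atomic_pre_full_fence();
          ret = arch_atomic_add_return_relaxed(i, v);
          __atomic_post_full_fence();
          return ret;
  }
  #define arch_atomic_add_return arch_atomic_add_return
  #endif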

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Palmer Dabbelt <palmer@dabbelt.com>
Cc: Paul Walmsley <paul.walmsley@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/riscv/Kconfig               |   1 +
 arch/riscv/include/asm/atomic.h  | 128 +++++++++++++++++++--------------------
 arch/riscv/include/asm/cmpxchg.h |  34 +++++------
 3 files changed, 82 insertions(+), 81 deletions(-)

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index a8ad8eb76120..c59b9f4a9d62 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -12,6 +12,7 @@ config 32BIT
 
 config RISCV
 	def_bool y
+	select ARCH_ATOMIC
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 400a8c8b6de7..ac9bdf4fc404 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -25,22 +25,22 @@
 #define __atomic_release_fence()					\
 	__asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
 
-static __always_inline int atomic_read(const atomic_t *v)
+static __always_inline int arch_atomic_read(const atomic_t *v)
 {
 	return READ_ONCE(v->counter);
 }
-static __always_inline void atomic_set(atomic_t *v, int i)
+static __always_inline void arch_atomic_set(atomic_t *v, int i)
 {
 	WRITE_ONCE(v->counter, i);
 }
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC64_INIT(i) { (i) }
-static __always_inline s64 atomic64_read(const atomic64_t *v)
+static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE(v->counter);
 }
-static __always_inline void atomic64_set(atomic64_t *v, s64 i)
+static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -53,7 +53,7 @@ static __always_inline void atomic64_set(atomic64_t *v, s64 i)
  */
 #define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)		\
 static __always_inline							\
-void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
+void arch_atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)	\
 {									\
 	__asm__ __volatile__ (						\
 		"	amo" #asm_op "." #asm_type " zero, %1, %0"	\
@@ -87,7 +87,7 @@ ATOMIC_OPS(xor, xor,  i)
  */
 #define ATOMIC_FETCH_OP(op, asm_op, I, asm_type, c_type, prefix)	\
 static __always_inline							\
-c_type atomic##prefix##_fetch_##op##_relaxed(c_type i,			\
+c_type arch_atomic##prefix##_fetch_##op##_relaxed(c_type i,		\
 					     atomic##prefix##_t *v)	\
 {									\
 	register c_type ret;						\
@@ -99,7 +99,7 @@ c_type atomic##prefix##_fetch_##op##_relaxed(c_type i,			\
 	return ret;							\
 }									\
 static __always_inline							\
-c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
+c_type arch_atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
 {									\
 	register c_type ret;						\
 	__asm__ __volatile__ (						\
@@ -112,15 +112,15 @@ c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
 
 #define ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_type, c_type, prefix)	\
 static __always_inline							\
-c_type atomic##prefix##_##op##_return_relaxed(c_type i,			\
+c_type arch_atomic##prefix##_##op##_return_relaxed(c_type i,		\
 					      atomic##prefix##_t *v)	\
 {									\
-        return atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I;	\
+        return arch_atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I;	\
 }									\
 static __always_inline							\
-c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
+c_type arch_atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
 {									\
-        return atomic##prefix##_fetch_##op(i, v) c_op I;		\
+        return arch_atomic##prefix##_fetch_##op(i, v) c_op I;		\
 }
 
 #ifdef CONFIG_GENERIC_ATOMIC64
@@ -138,26 +138,26 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
 ATOMIC_OPS(add, add, +,  i)
 ATOMIC_OPS(sub, add, +, -i)
 
-#define atomic_add_return_relaxed	atomic_add_return_relaxed
-#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
-#define atomic_add_return		atomic_add_return
-#define atomic_sub_return		atomic_sub_return
+#define arch_atomic_add_return_relaxed	arch_atomic_add_return_relaxed
+#define arch_atomic_sub_return_relaxed	arch_atomic_sub_return_relaxed
+#define arch_atomic_add_return		arch_atomic_add_return
+#define arch_atomic_sub_return		arch_atomic_sub_return
 
-#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
-#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
-#define atomic_fetch_add		atomic_fetch_add
-#define atomic_fetch_sub		atomic_fetch_sub
+#define arch_atomic_fetch_add_relaxed	arch_atomic_fetch_add_relaxed
+#define arch_atomic_fetch_sub_relaxed	arch_atomic_fetch_sub_relaxed
+#define arch_atomic_fetch_add		arch_atomic_fetch_add
+#define arch_atomic_fetch_sub		arch_atomic_fetch_sub
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
-#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
-#define atomic64_add_return		atomic64_add_return
-#define atomic64_sub_return		atomic64_sub_return
-
-#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
-#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
-#define atomic64_fetch_add		atomic64_fetch_add
-#define atomic64_fetch_sub		atomic64_fetch_sub
+#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
+#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
+#define arch_atomic64_add_return		arch_atomic64_add_return
+#define arch_atomic64_sub_return		arch_atomic64_sub_return
+
+#define arch_atomic64_fetch_add_relaxed	arch_atomic64_fetch_add_relaxed
+#define arch_atomic64_fetch_sub_relaxed	arch_atomic64_fetch_sub_relaxed
+#define arch_atomic64_fetch_add		arch_atomic64_fetch_add
+#define arch_atomic64_fetch_sub		arch_atomic64_fetch_sub
 #endif
 
 #undef ATOMIC_OPS
@@ -175,20 +175,20 @@ ATOMIC_OPS(and, and, i)
 ATOMIC_OPS( or,  or, i)
 ATOMIC_OPS(xor, xor, i)
 
-#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
-#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
-#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
-#define atomic_fetch_and		atomic_fetch_and
-#define atomic_fetch_or			atomic_fetch_or
-#define atomic_fetch_xor		atomic_fetch_xor
+#define arch_atomic_fetch_and_relaxed	arch_atomic_fetch_and_relaxed
+#define arch_atomic_fetch_or_relaxed	arch_atomic_fetch_or_relaxed
+#define arch_atomic_fetch_xor_relaxed	arch_atomic_fetch_xor_relaxed
+#define arch_atomic_fetch_and		arch_atomic_fetch_and
+#define arch_atomic_fetch_or		arch_atomic_fetch_or
+#define arch_atomic_fetch_xor		arch_atomic_fetch_xor
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
-#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
-#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
-#define atomic64_fetch_and		atomic64_fetch_and
-#define atomic64_fetch_or		atomic64_fetch_or
-#define atomic64_fetch_xor		atomic64_fetch_xor
+#define arch_atomic64_fetch_and_relaxed	arch_atomic64_fetch_and_relaxed
+#define arch_atomic64_fetch_or_relaxed	arch_atomic64_fetch_or_relaxed
+#define arch_atomic64_fetch_xor_relaxed	arch_atomic64_fetch_xor_relaxed
+#define arch_atomic64_fetch_and		arch_atomic64_fetch_and
+#define arch_atomic64_fetch_or		arch_atomic64_fetch_or
+#define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor
 #endif
 
 #undef ATOMIC_OPS
@@ -197,7 +197,7 @@ ATOMIC_OPS(xor, xor, i)
 #undef ATOMIC_OP_RETURN
 
 /* This is required to provide a full barrier on success. */
-static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
        int prev, rc;
 
@@ -214,10 +214,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 		: "memory");
 	return prev;
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
+static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
        s64 prev;
        long rc;
@@ -235,7 +235,7 @@ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
 		: "memory");
 	return prev;
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
+#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
 #endif
 
 /*
@@ -244,45 +244,45 @@ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
  */
 #define ATOMIC_OP(c_t, prefix, size)					\
 static __always_inline							\
-c_t atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)		\
+c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)	\
 {									\
 	return __xchg_relaxed(&(v->counter), n, size);			\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)		\
+c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)	\
 {									\
 	return __xchg_acquire(&(v->counter), n, size);			\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)		\
+c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)	\
 {									\
 	return __xchg_release(&(v->counter), n, size);			\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)			\
+c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)		\
 {									\
 	return __xchg(&(v->counter), n, size);				\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,		\
+c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,	\
 				     c_t o, c_t n)			\
 {									\
 	return __cmpxchg_relaxed(&(v->counter), o, n, size);		\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,		\
+c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,	\
 				     c_t o, c_t n)			\
 {									\
 	return __cmpxchg_acquire(&(v->counter), o, n, size);		\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,		\
+c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,	\
 				     c_t o, c_t n)			\
 {									\
 	return __cmpxchg_release(&(v->counter), o, n, size);		\
 }									\
 static __always_inline							\
-c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
+c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
 {									\
 	return __cmpxchg(&(v->counter), o, n, size);			\
 }
@@ -298,19 +298,19 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
 
 ATOMIC_OPS()
 
-#define atomic_xchg_relaxed atomic_xchg_relaxed
-#define atomic_xchg_acquire atomic_xchg_acquire
-#define atomic_xchg_release atomic_xchg_release
-#define atomic_xchg atomic_xchg
-#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
-#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
-#define atomic_cmpxchg_release atomic_cmpxchg_release
-#define atomic_cmpxchg atomic_cmpxchg
+#define arch_atomic_xchg_relaxed	arch_atomic_xchg_relaxed
+#define arch_atomic_xchg_acquire	arch_atomic_xchg_acquire
+#define arch_atomic_xchg_release	arch_atomic_xchg_release
+#define arch_atomic_xchg		arch_atomic_xchg
+#define arch_atomic_cmpxchg_relaxed	arch_atomic_cmpxchg_relaxed
+#define arch_atomic_cmpxchg_acquire	arch_atomic_cmpxchg_acquire
+#define arch_atomic_cmpxchg_release	arch_atomic_cmpxchg_release
+#define arch_atomic_cmpxchg		arch_atomic_cmpxchg
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP
 
-static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
+static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset)
 {
        int prev, rc;
 
@@ -328,10 +328,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 	return prev - offset;
 }
 
-#define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
+#define arch_atomic_dec_if_positive(v)	arch_atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
+static __always_inline s64 arch_atomic64_sub_if_positive(atomic64_t *v, s64 offset)
 {
        s64 prev;
        long rc;
@@ -350,7 +350,7 @@ static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
 	return prev - offset;
 }
 
-#define atomic64_dec_if_positive(v)	atomic64_sub_if_positive(v, 1)
+#define arch_atomic64_dec_if_positive(v)	arch_atomic64_sub_if_positive(v, 1)
 #endif
 
 #endif /* _ASM_RISCV_ATOMIC_H */
diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
index 262e5bbb2776..36dc962f6343 100644
--- a/arch/riscv/include/asm/cmpxchg.h
+++ b/arch/riscv/include/asm/cmpxchg.h
@@ -37,7 +37,7 @@
 	__ret;								\
 })
 
-#define xchg_relaxed(ptr, x)						\
+#define arch_xchg_relaxed(ptr, x)					\
 ({									\
 	__typeof__(*(ptr)) _x_ = (x);					\
 	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
@@ -72,7 +72,7 @@
 	__ret;								\
 })
 
-#define xchg_acquire(ptr, x)						\
+#define arch_xchg_acquire(ptr, x)					\
 ({									\
 	__typeof__(*(ptr)) _x_ = (x);					\
 	(__typeof__(*(ptr))) __xchg_acquire((ptr),			\
@@ -107,7 +107,7 @@
 	__ret;								\
 })
 
-#define xchg_release(ptr, x)						\
+#define arch_xchg_release(ptr, x)					\
 ({									\
 	__typeof__(*(ptr)) _x_ = (x);					\
 	(__typeof__(*(ptr))) __xchg_release((ptr),			\
@@ -140,7 +140,7 @@
 	__ret;								\
 })
 
-#define xchg(ptr, x)							\
+#define arch_xchg(ptr, x)						\
 ({									\
 	__typeof__(*(ptr)) _x_ = (x);					\
 	(__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr)));	\
@@ -149,13 +149,13 @@
 #define xchg32(ptr, x)							\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
-	xchg((ptr), (x));						\
+	arch_xchg((ptr), (x));						\
 })
 
 #define xchg64(ptr, x)							\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	xchg((ptr), (x));						\
+	arch_xchg((ptr), (x));						\
 })
 
 /*
@@ -199,7 +199,7 @@
 	__ret;								\
 })
 
-#define cmpxchg_relaxed(ptr, o, n)					\
+#define arch_cmpxchg_relaxed(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -245,7 +245,7 @@
 	__ret;								\
 })
 
-#define cmpxchg_acquire(ptr, o, n)					\
+#define arch_cmpxchg_acquire(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -291,7 +291,7 @@
 	__ret;								\
 })
 
-#define cmpxchg_release(ptr, o, n)					\
+#define arch_cmpxchg_release(ptr, o, n)					\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -337,7 +337,7 @@
 	__ret;								\
 })
 
-#define cmpxchg(ptr, o, n)						\
+#define arch_cmpxchg(ptr, o, n)						\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -345,31 +345,31 @@
 				       _o_, _n_, sizeof(*(ptr)));	\
 })
 
-#define cmpxchg_local(ptr, o, n)					\
+#define arch_cmpxchg_local(ptr, o, n)					\
 	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
 
 #define cmpxchg32(ptr, o, n)						\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
-	cmpxchg((ptr), (o), (n));					\
+	arch_cmpxchg((ptr), (o), (n));					\
 })
 
 #define cmpxchg32_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
-	cmpxchg_relaxed((ptr), (o), (n))				\
+	arch_cmpxchg_relaxed((ptr), (o), (n))				\
 })
 
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg((ptr), (o), (n));					\
+	arch_cmpxchg((ptr), (o), (n));					\
 })
 
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
-	cmpxchg_relaxed((ptr), (o), (n));				\
+	arch_cmpxchg_relaxed((ptr), (o), (n));				\
 })
 
 #endif /* _ASM_RISCV_CMPXCHG_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 29/33] locking/atomic: sh: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (27 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 28/33] locking/atomic: riscv: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 30/33] locking/atomic: sparc: " Mark Rutland
                   ` (4 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates sh to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
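
Callers are unaffected by the conversion: they keep using the
un-prefixed API, which now resolves to instrumentation plus the
arch_atomic_*() implementation (or a generated fallback). A hypothetical
caller, for illustration only:

  static atomic_t nr_users = ATOMIC_INIT(0);

  static void foo_open(void)
  {
          atomic_inc(&nr_users);  /* instrumented wrapper -> arch_atomic_inc() */
  }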

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Rich Felker <dalias@libc.org>
Cc: Will Deacon <will@kernel.org>
Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
---
 arch/sh/Kconfig                   | 1 +
 arch/sh/include/asm/atomic-grb.h  | 6 +++---
 arch/sh/include/asm/atomic-irq.h  | 6 +++---
 arch/sh/include/asm/atomic-llsc.h | 6 +++---
 arch/sh/include/asm/atomic.h      | 8 ++++----
 arch/sh/include/asm/cmpxchg.h     | 4 ++--
 6 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 68129537e350..d2925cbb6fa4 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -2,6 +2,7 @@
 config SUPERH
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM && MMU
 	select ARCH_ENABLE_MEMORY_HOTREMOVE if SPARSEMEM && MMU
 	select ARCH_HAVE_CUSTOM_GPIO_H
diff --git a/arch/sh/include/asm/atomic-grb.h b/arch/sh/include/asm/atomic-grb.h
index aace62d42288..059791fd394f 100644
--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -3,7 +3,7 @@
 #define __ASM_SH_ATOMIC_GRB_H
 
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	int tmp;							\
 									\
@@ -23,7 +23,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	int tmp;							\
 									\
@@ -45,7 +45,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	int res, tmp;							\
 									\
diff --git a/arch/sh/include/asm/atomic-irq.h b/arch/sh/include/asm/atomic-irq.h
index ee523bd2120f..7665de9d00d0 100644
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -11,7 +11,7 @@
  */
 
 #define ATOMIC_OP(op, c_op)						\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long flags;						\
 									\
@@ -21,7 +21,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long temp, flags;					\
 									\
@@ -35,7 +35,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long temp, flags;					\
 									\
diff --git a/arch/sh/include/asm/atomic-llsc.h b/arch/sh/include/asm/atomic-llsc.h
index 1d06e4d288dc..b63dcfbfa14e 100644
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -17,7 +17,7 @@
  */
 
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long tmp;						\
 									\
@@ -32,7 +32,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long temp;						\
 									\
@@ -50,7 +50,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long res, temp;					\
 									\
diff --git a/arch/sh/include/asm/atomic.h b/arch/sh/include/asm/atomic.h
index 7c2a8a703b9a..528bfeda78f5 100644
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -19,8 +19,8 @@
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
-#define atomic_read(v)		READ_ONCE((v)->counter)
-#define atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
+#define arch_atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
 
 #if defined(CONFIG_GUSA_RB)
 #include <asm/atomic-grb.h>
@@ -30,8 +30,8 @@
 #include <asm/atomic-irq.h>
 #endif
 
-#define atomic_xchg(v, new)		(xchg(&((v)->counter), new))
-#define atomic_cmpxchg(v, o, n)		(cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_xchg(v, new)	(arch_xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, o, n)	(arch_cmpxchg(&((v)->counter), (o), (n)))
 
 #endif /* CONFIG_CPU_J2 */
 
diff --git a/arch/sh/include/asm/cmpxchg.h b/arch/sh/include/asm/cmpxchg.h
index e9501d85c278..0ed9b3f4a577 100644
--- a/arch/sh/include/asm/cmpxchg.h
+++ b/arch/sh/include/asm/cmpxchg.h
@@ -45,7 +45,7 @@ extern void __xchg_called_with_bad_pointer(void);
 	__xchg__res;					\
 })
 
-#define xchg(ptr,x)	\
+#define arch_xchg(ptr,x)	\
 	((__typeof__(*(ptr)))__xchg((ptr),(unsigned long)(x), sizeof(*(ptr))))
 
 /* This function doesn't exist, so you'll get a linker error
@@ -63,7 +63,7 @@ static inline unsigned long __cmpxchg(volatile void * ptr, unsigned long old,
 	return old;
 }
 
-#define cmpxchg(ptr,o,n)						 \
+#define arch_cmpxchg(ptr,o,n)						 \
   ({									 \
      __typeof__(*(ptr)) _o_ = (o);					 \
      __typeof__(*(ptr)) _n_ = (n);					 \
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 30/33] locking/atomic: sparc: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (28 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 29/33] locking/atomic: sh: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10  9:37 ` [PATCH 31/33] locking/atomic: xtensa: " Mark Rutland
                   ` (3 subsequent siblings)
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates sparc to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
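
For context, the xchg()/cmpxchg() families are wrapped the same way as
the atomics; the instrumented wrapper is roughly of this shape (a
sketch; the exact instrumentation hook used by the generated header may
differ):

  #define cmpxchg(ptr, ...)                                             \
  ({                                                                    \
          typeof(ptr) __ai_ptr = (ptr);                                 \
          instrument_atomic_read_write(__ai_ptr, sizeof(*__ai_ptr));    \
          arch_cmpxchg(__ai_ptr, __VA_ARGS__);                          \
  })

This is also why the out-of-line sparc implementations and their
EXPORT_SYMBOL()s below move to the arch_*() names: the inline wrappers
(including those in modules) now reference the arch_*() symbols
directly.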

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/atomic_32.h  | 38 ++++++++++++++++-----------------
 arch/sparc/include/asm/atomic_64.h  | 36 +++++++++++++++----------------
 arch/sparc/include/asm/cmpxchg_32.h | 10 ++++-----
 arch/sparc/include/asm/cmpxchg_64.h | 10 ++++-----
 arch/sparc/lib/atomic32.c           | 24 ++++++++++-----------
 arch/sparc/lib/atomic_64.S          | 42 ++++++++++++++++++-------------------
 7 files changed, 81 insertions(+), 80 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 164a5254c91c..46790083e918 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -13,6 +13,7 @@ config 64BIT
 config SPARC
 	bool
 	default y
+	select ARCH_ATOMIC
 	select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select DMA_OPS
diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index efad5532f169..38537109ec5d 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -18,30 +18,30 @@
 #include <asm/barrier.h>
 #include <asm-generic/atomic64.h>
 
-int atomic_add_return(int, atomic_t *);
-int atomic_fetch_add(int, atomic_t *);
-int atomic_fetch_and(int, atomic_t *);
-int atomic_fetch_or(int, atomic_t *);
-int atomic_fetch_xor(int, atomic_t *);
-int atomic_cmpxchg(atomic_t *, int, int);
-int atomic_xchg(atomic_t *, int);
-int atomic_fetch_add_unless(atomic_t *, int, int);
-void atomic_set(atomic_t *, int);
+int arch_atomic_add_return(int, atomic_t *);
+int arch_atomic_fetch_add(int, atomic_t *);
+int arch_atomic_fetch_and(int, atomic_t *);
+int arch_atomic_fetch_or(int, atomic_t *);
+int arch_atomic_fetch_xor(int, atomic_t *);
+int arch_atomic_cmpxchg(atomic_t *, int, int);
+int arch_atomic_xchg(atomic_t *, int);
+int arch_atomic_fetch_add_unless(atomic_t *, int, int);
+void arch_atomic_set(atomic_t *, int);
 
-#define atomic_fetch_add_unless	atomic_fetch_add_unless
+#define arch_atomic_fetch_add_unless	arch_atomic_fetch_add_unless
 
-#define atomic_set_release(v, i)	atomic_set((v), (i))
+#define arch_atomic_set_release(v, i)	arch_atomic_set((v), (i))
 
-#define atomic_read(v)          READ_ONCE((v)->counter)
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
-#define atomic_add(i, v)	((void)atomic_add_return( (int)(i), (v)))
-#define atomic_sub(i, v)	((void)atomic_add_return(-(int)(i), (v)))
+#define arch_atomic_add(i, v)	((void)arch_atomic_add_return( (int)(i), (v)))
+#define arch_atomic_sub(i, v)	((void)arch_atomic_add_return(-(int)(i), (v)))
 
-#define atomic_and(i, v)	((void)atomic_fetch_and((i), (v)))
-#define atomic_or(i, v)		((void)atomic_fetch_or((i), (v)))
-#define atomic_xor(i, v)	((void)atomic_fetch_xor((i), (v)))
+#define arch_atomic_and(i, v)	((void)arch_atomic_fetch_and((i), (v)))
+#define arch_atomic_or(i, v)	((void)arch_atomic_fetch_or((i), (v)))
+#define arch_atomic_xor(i, v)	((void)arch_atomic_fetch_xor((i), (v)))
 
-#define atomic_sub_return(i, v)	(atomic_add_return(-(int)(i), (v)))
-#define atomic_fetch_sub(i, v)  (atomic_fetch_add (-(int)(i), (v)))
+#define arch_atomic_sub_return(i, v)	(arch_atomic_add_return(-(int)(i), (v)))
+#define arch_atomic_fetch_sub(i, v)	(arch_atomic_fetch_add (-(int)(i), (v)))
 
 #endif /* !(__ARCH_SPARC_ATOMIC__) */
diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 6b235d3d1d9d..077891686715 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -14,23 +14,23 @@
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-#define atomic_read(v)		READ_ONCE((v)->counter)
-#define atomic64_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic_read(v)	READ_ONCE((v)->counter)
+#define arch_atomic64_read(v)	READ_ONCE((v)->counter)
 
-#define atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
-#define atomic64_set(v, i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
+#define arch_atomic64_set(v, i)	WRITE_ONCE(((v)->counter), (i))
 
 #define ATOMIC_OP(op)							\
-void atomic_##op(int, atomic_t *);					\
-void atomic64_##op(s64, atomic64_t *);
+void arch_atomic_##op(int, atomic_t *);					\
+void arch_atomic64_##op(s64, atomic64_t *);
 
 #define ATOMIC_OP_RETURN(op)						\
-int atomic_##op##_return(int, atomic_t *);				\
-s64 atomic64_##op##_return(s64, atomic64_t *);
+int arch_atomic_##op##_return(int, atomic_t *);				\
+s64 arch_atomic64_##op##_return(s64, atomic64_t *);
 
 #define ATOMIC_FETCH_OP(op)						\
-int atomic_fetch_##op(int, atomic_t *);					\
-s64 atomic64_fetch_##op(s64, atomic64_t *);
+int arch_atomic_fetch_##op(int, atomic_t *);				\
+s64 arch_atomic64_fetch_##op(s64, atomic64_t *);
 
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)
 
@@ -49,18 +49,18 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define atomic_cmpxchg(v, o, n) (cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_cmpxchg(v, o, n) (arch_cmpxchg(&((v)->counter), (o), (n)))
 
-static inline int atomic_xchg(atomic_t *v, int new)
+static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
-	return xchg(&v->counter, new);
+	return arch_xchg(&v->counter, new);
 }
 
-#define atomic64_cmpxchg(v, o, n) \
-	((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
-#define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic64_cmpxchg(v, o, n) \
+	((__typeof__((v)->counter))arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic64_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
-s64 atomic64_dec_if_positive(atomic64_t *v);
-#define atomic64_dec_if_positive atomic64_dec_if_positive
+s64 arch_atomic64_dec_if_positive(atomic64_t *v);
+#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
 
 #endif /* !(__ARCH_SPARC64_ATOMIC__) */
diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h
index 86e3da1d973d..27a57a3a7597 100644
--- a/arch/sparc/include/asm/cmpxchg_32.h
+++ b/arch/sparc/include/asm/cmpxchg_32.h
@@ -25,7 +25,7 @@ static inline unsigned long __xchg(unsigned long x, __volatile__ void * ptr, int
 	return x;
 }
 
-#define xchg(ptr,x) ({(__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)));})
+#define arch_xchg(ptr,x) ({(__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr)));})
 
 /* Emulate cmpxchg() the same way we emulate atomics,
  * by hashing the object address and indexing into an array
@@ -55,7 +55,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 	return old;
 }
 
-#define cmpxchg(ptr, o, n)						\
+#define arch_cmpxchg(ptr, o, n)						\
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
@@ -64,7 +64,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 })
 
 u64 __cmpxchg_u64(u64 *ptr, u64 old, u64 new);
-#define cmpxchg64(ptr, old, new)	__cmpxchg_u64(ptr, old, new)
+#define arch_cmpxchg64(ptr, old, new)	__cmpxchg_u64(ptr, old, new)
 
 #include <asm-generic/cmpxchg-local.h>
 
@@ -72,9 +72,9 @@ u64 __cmpxchg_u64(u64 *ptr, u64 old, u64 new);
  * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
  * them available.
  */
-#define cmpxchg_local(ptr, o, n)				  	       \
+#define arch_cmpxchg_local(ptr, o, n)				  	       \
 	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
 			(unsigned long)(n), sizeof(*(ptr))))
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 #endif /* __ARCH_SPARC_CMPXCHG__ */
diff --git a/arch/sparc/include/asm/cmpxchg_64.h b/arch/sparc/include/asm/cmpxchg_64.h
index 8915b577b92f..8c39a9981187 100644
--- a/arch/sparc/include/asm/cmpxchg_64.h
+++ b/arch/sparc/include/asm/cmpxchg_64.h
@@ -52,7 +52,7 @@ static inline unsigned long xchg64(__volatile__ unsigned long *m, unsigned long
 	return val;
 }
 
-#define xchg(ptr,x)							\
+#define arch_xchg(ptr,x)							\
 ({	__typeof__(*(ptr)) __ret;					\
 	__ret = (__typeof__(*(ptr)))					\
 		__xchg((unsigned long)(x), (ptr), sizeof(*(ptr)));	\
@@ -168,7 +168,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
 	return old;
 }
 
-#define cmpxchg(ptr,o,n)						 \
+#define arch_cmpxchg(ptr,o,n)						 \
   ({									 \
      __typeof__(*(ptr)) _o_ = (o);					 \
      __typeof__(*(ptr)) _n_ = (n);					 \
@@ -195,14 +195,14 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
 	return old;
 }
 
-#define cmpxchg_local(ptr, o, n)				  	\
+#define arch_cmpxchg_local(ptr, o, n)				  	\
 	((__typeof__(*(ptr)))__cmpxchg_local((ptr), (unsigned long)(o),	\
 			(unsigned long)(n), sizeof(*(ptr))))
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
   ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_local((ptr), (o), (n));					\
   })
-#define cmpxchg64(ptr, o, n)	cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64(ptr, o, n)	arch_cmpxchg64_local((ptr), (o), (n))
 
 #endif /* __ARCH_SPARC64_CMPXCHG__ */
diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c
index 281fa634bb1a..8b81d0f00c97 100644
--- a/arch/sparc/lib/atomic32.c
+++ b/arch/sparc/lib/atomic32.c
@@ -29,7 +29,7 @@ static DEFINE_SPINLOCK(dummy);
 #endif /* SMP */
 
 #define ATOMIC_FETCH_OP(op, c_op)					\
-int atomic_fetch_##op(int i, atomic_t *v)				\
+int arch_atomic_fetch_##op(int i, atomic_t *v)				\
 {									\
 	int ret;							\
 	unsigned long flags;						\
@@ -41,10 +41,10 @@ int atomic_fetch_##op(int i, atomic_t *v)				\
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);			\
 	return ret;							\
 }									\
-EXPORT_SYMBOL(atomic_fetch_##op);
+EXPORT_SYMBOL(arch_atomic_fetch_##op);
 
 #define ATOMIC_OP_RETURN(op, c_op)					\
-int atomic_##op##_return(int i, atomic_t *v)				\
+int arch_atomic_##op##_return(int i, atomic_t *v)			\
 {									\
 	int ret;							\
 	unsigned long flags;						\
@@ -55,7 +55,7 @@ int atomic_##op##_return(int i, atomic_t *v)				\
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);			\
 	return ret;							\
 }									\
-EXPORT_SYMBOL(atomic_##op##_return);
+EXPORT_SYMBOL(arch_atomic_##op##_return);
 
 ATOMIC_OP_RETURN(add, +=)
 
@@ -67,7 +67,7 @@ ATOMIC_FETCH_OP(xor, ^=)
 #undef ATOMIC_FETCH_OP
 #undef ATOMIC_OP_RETURN
 
-int atomic_xchg(atomic_t *v, int new)
+int arch_atomic_xchg(atomic_t *v, int new)
 {
 	int ret;
 	unsigned long flags;
@@ -78,9 +78,9 @@ int atomic_xchg(atomic_t *v, int new)
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 	return ret;
 }
-EXPORT_SYMBOL(atomic_xchg);
+EXPORT_SYMBOL(arch_atomic_xchg);
 
-int atomic_cmpxchg(atomic_t *v, int old, int new)
+int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
 	unsigned long flags;
@@ -93,9 +93,9 @@ int atomic_cmpxchg(atomic_t *v, int old, int new)
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 	return ret;
 }
-EXPORT_SYMBOL(atomic_cmpxchg);
+EXPORT_SYMBOL(arch_atomic_cmpxchg);
 
-int atomic_fetch_add_unless(atomic_t *v, int a, int u)
+int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	int ret;
 	unsigned long flags;
@@ -107,10 +107,10 @@ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 	return ret;
 }
-EXPORT_SYMBOL(atomic_fetch_add_unless);
+EXPORT_SYMBOL(arch_atomic_fetch_add_unless);
 
 /* Atomic operations are already serializing */
-void atomic_set(atomic_t *v, int i)
+void arch_atomic_set(atomic_t *v, int i)
 {
 	unsigned long flags;
 
@@ -118,7 +118,7 @@ void atomic_set(atomic_t *v, int i)
 	v->counter = i;
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 }
-EXPORT_SYMBOL(atomic_set);
+EXPORT_SYMBOL(arch_atomic_set);
 
 unsigned long ___set_bit(unsigned long *addr, unsigned long mask)
 {
diff --git a/arch/sparc/lib/atomic_64.S b/arch/sparc/lib/atomic_64.S
index 456b65a30ecf..8245d4a97301 100644
--- a/arch/sparc/lib/atomic_64.S
+++ b/arch/sparc/lib/atomic_64.S
@@ -19,7 +19,7 @@
 	 */
 
 #define ATOMIC_OP(op)							\
-ENTRY(atomic_##op) /* %o0 = increment, %o1 = atomic_ptr */		\
+ENTRY(arch_atomic_##op) /* %o0 = increment, %o1 = atomic_ptr */		\
 	BACKOFF_SETUP(%o2);						\
 1:	lduw	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -30,11 +30,11 @@ ENTRY(atomic_##op) /* %o0 = increment, %o1 = atomic_ptr */		\
 	retl;								\
 	 nop;								\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic_##op);							\
-EXPORT_SYMBOL(atomic_##op);
+ENDPROC(arch_atomic_##op);						\
+EXPORT_SYMBOL(arch_atomic_##op);
 
 #define ATOMIC_OP_RETURN(op)						\
-ENTRY(atomic_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */	\
+ENTRY(arch_atomic_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */\
 	BACKOFF_SETUP(%o2);						\
 1:	lduw	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -45,11 +45,11 @@ ENTRY(atomic_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */	\
 	retl;								\
 	 sra	%g1, 0, %o0;						\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic_##op##_return);						\
-EXPORT_SYMBOL(atomic_##op##_return);
+ENDPROC(arch_atomic_##op##_return);					\
+EXPORT_SYMBOL(arch_atomic_##op##_return);
 
 #define ATOMIC_FETCH_OP(op)						\
-ENTRY(atomic_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
+ENTRY(arch_atomic_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
 	BACKOFF_SETUP(%o2);						\
 1:	lduw	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -60,8 +60,8 @@ ENTRY(atomic_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
 	retl;								\
 	 sra	%g1, 0, %o0;						\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic_fetch_##op);						\
-EXPORT_SYMBOL(atomic_fetch_##op);
+ENDPROC(arch_atomic_fetch_##op);					\
+EXPORT_SYMBOL(arch_atomic_fetch_##op);
 
 ATOMIC_OP(add)
 ATOMIC_OP_RETURN(add)
@@ -85,7 +85,7 @@ ATOMIC_FETCH_OP(xor)
 #undef ATOMIC_OP
 
 #define ATOMIC64_OP(op)							\
-ENTRY(atomic64_##op) /* %o0 = increment, %o1 = atomic_ptr */		\
+ENTRY(arch_atomic64_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
 	BACKOFF_SETUP(%o2);						\
 1:	ldx	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -96,11 +96,11 @@ ENTRY(atomic64_##op) /* %o0 = increment, %o1 = atomic_ptr */		\
 	retl;								\
 	 nop;								\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic64_##op);							\
-EXPORT_SYMBOL(atomic64_##op);
+ENDPROC(arch_atomic64_##op);						\
+EXPORT_SYMBOL(arch_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op)						\
-ENTRY(atomic64_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */	\
+ENTRY(arch_atomic64_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */	\
 	BACKOFF_SETUP(%o2);						\
 1:	ldx	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -111,11 +111,11 @@ ENTRY(atomic64_##op##_return) /* %o0 = increment, %o1 = atomic_ptr */	\
 	retl;								\
 	 op	%g1, %o0, %o0;						\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic64_##op##_return);					\
-EXPORT_SYMBOL(atomic64_##op##_return);
+ENDPROC(arch_atomic64_##op##_return);					\
+EXPORT_SYMBOL(arch_atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op)						\
-ENTRY(atomic64_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
+ENTRY(arch_atomic64_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
 	BACKOFF_SETUP(%o2);						\
 1:	ldx	[%o1], %g1;						\
 	op	%g1, %o0, %g7;						\
@@ -126,8 +126,8 @@ ENTRY(atomic64_fetch_##op) /* %o0 = increment, %o1 = atomic_ptr */	\
 	retl;								\
 	 mov	%g1, %o0;						\
 2:	BACKOFF_SPIN(%o2, %o3, 1b);					\
-ENDPROC(atomic64_fetch_##op);						\
-EXPORT_SYMBOL(atomic64_fetch_##op);
+ENDPROC(arch_atomic64_fetch_##op);					\
+EXPORT_SYMBOL(arch_atomic64_fetch_##op);
 
 ATOMIC64_OP(add)
 ATOMIC64_OP_RETURN(add)
@@ -150,7 +150,7 @@ ATOMIC64_FETCH_OP(xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-ENTRY(atomic64_dec_if_positive) /* %o0 = atomic_ptr */
+ENTRY(arch_atomic64_dec_if_positive) /* %o0 = atomic_ptr */
 	BACKOFF_SETUP(%o2)
 1:	ldx	[%o0], %g1
 	brlez,pn %g1, 3f
@@ -162,5 +162,5 @@ ENTRY(atomic64_dec_if_positive) /* %o0 = atomic_ptr */
 3:	retl
 	 sub	%g1, 1, %o0
 2:	BACKOFF_SPIN(%o2, %o3, 1b)
-ENDPROC(atomic64_dec_if_positive)
-EXPORT_SYMBOL(atomic64_dec_if_positive)
+ENDPROC(arch_atomic64_dec_if_positive)
+EXPORT_SYMBOL(arch_atomic64_dec_if_positive)
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 31/33] locking/atomic: xtensa: move to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (29 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 30/33] locking/atomic: sparc: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10 14:19   ` Max Filippov
  2021-05-10  9:37 ` [PATCH 32/33] locking/atomic: delete !ARCH_ATOMIC remnants Mark Rutland
                   ` (2 subsequent siblings)
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

We'd like all architectures to convert to ARCH_ATOMIC, as once all
architectures are converted it will be possible to make significant
cleanups to the atomics headers, and this will make it much easier to
generically enable atomic functionality (e.g. debug logic in the
instrumented wrappers).

As a step towards that, this patch migrates xtensa to ARCH_ATOMIC. The
arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
code wraps these with optional instrumentation to provide the regular
functions.
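
For reference only (this sketch is not part of the patch), the layering
for a single op ends up looking roughly like the following; the
instrumented wrapper is generated by the scripts under scripts/atomic,
not written by hand:

  /* arch/xtensa/include/asm/atomic.h -- arch-provided primitive */
  static inline void arch_atomic_add(int i, atomic_t *v);

  /* asm-generic/atomic-instrumented.h -- generated wrapper providing
   * the regular name, with sanitizer instrumentation layered on top */
  static __always_inline void
  atomic_add(int i, atomic_t *v)
  {
  	instrument_atomic_read_write(v, sizeof(*v));
  	arch_atomic_add(i, v);
  }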

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Chris Zankel <chris@zankel.net>
Cc: Max Filippov <jcmvbkbc@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/xtensa/Kconfig               |  1 +
 arch/xtensa/include/asm/atomic.h  | 26 +++++++++++++-------------
 arch/xtensa/include/asm/cmpxchg.h | 10 +++++-----
 3 files changed, 19 insertions(+), 18 deletions(-)

diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 2332b2156993..39bb9bdae6b1 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -2,6 +2,7 @@
 config XTENSA
 	def_bool y
 	select ARCH_32BIT_OFF_T
+	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT if !MMU
 	select ARCH_HAS_DMA_PREP_COHERENT if MMU
 	select ARCH_HAS_SYNC_DMA_FOR_CPU if MMU
diff --git a/arch/xtensa/include/asm/atomic.h b/arch/xtensa/include/asm/atomic.h
index 744c2f463845..4361fe4247e3 100644
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -43,7 +43,7 @@
  *
  * Atomically reads the value of @v.
  */
-#define atomic_read(v)		READ_ONCE((v)->counter)
+#define arch_atomic_read(v)		READ_ONCE((v)->counter)
 
 /**
  * atomic_set - set atomic variable
@@ -52,11 +52,11 @@
  *
  * Atomically sets the value of @v to @i.
  */
-#define atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
+#define arch_atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
 
 #if XCHAL_HAVE_EXCLUSIVE
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
+static inline void arch_atomic_##op(int i, atomic_t *v)			\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -74,7 +74,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t *v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -95,7 +95,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -116,7 +116,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
 
 #elif XCHAL_HAVE_S32C1I
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t * v)			\
+static inline void arch_atomic_##op(int i, atomic_t * v)		\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -135,7 +135,7 @@ static inline void atomic_##op(int i, atomic_t * v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t * v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t * v)	\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -157,7 +157,7 @@ static inline int atomic_##op##_return(int i, atomic_t * v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t * v)		\
+static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 {									\
 	unsigned long tmp;						\
 	int result;							\
@@ -180,7 +180,7 @@ static inline int atomic_fetch_##op(int i, atomic_t * v)		\
 #else /* XCHAL_HAVE_S32C1I */
 
 #define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t * v)			\
+static inline void arch_atomic_##op(int i, atomic_t * v)		\
 {									\
 	unsigned int vval;						\
 									\
@@ -198,7 +198,7 @@ static inline void atomic_##op(int i, atomic_t * v)			\
 }									\
 
 #define ATOMIC_OP_RETURN(op)						\
-static inline int atomic_##op##_return(int i, atomic_t * v)		\
+static inline int arch_atomic_##op##_return(int i, atomic_t * v)	\
 {									\
 	unsigned int vval;						\
 									\
@@ -218,7 +218,7 @@ static inline int atomic_##op##_return(int i, atomic_t * v)		\
 }
 
 #define ATOMIC_FETCH_OP(op)						\
-static inline int atomic_fetch_##op(int i, atomic_t * v)		\
+static inline int arch_atomic_fetch_##op(int i, atomic_t * v)		\
 {									\
 	unsigned int tmp, vval;						\
 									\
@@ -257,7 +257,7 @@ ATOMIC_OPS(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_cmpxchg(v, o, n) ((int)arch_cmpxchg(&((v)->counter), (o), (n)))
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
 #endif /* _XTENSA_ATOMIC_H */
diff --git a/arch/xtensa/include/asm/cmpxchg.h b/arch/xtensa/include/asm/cmpxchg.h
index 9c4d6e5316ce..3699e2818efb 100644
--- a/arch/xtensa/include/asm/cmpxchg.h
+++ b/arch/xtensa/include/asm/cmpxchg.h
@@ -80,7 +80,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new, int size)
 	}
 }
 
-#define cmpxchg(ptr,o,n)						      \
+#define arch_cmpxchg(ptr,o,n)						      \
 	({ __typeof__(*(ptr)) _o_ = (o);				      \
 	   __typeof__(*(ptr)) _n_ = (n);				      \
 	   (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,	      \
@@ -107,11 +107,11 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
  * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
  * them available.
  */
-#define cmpxchg_local(ptr, o, n)				  	       \
+#define arch_cmpxchg_local(ptr, o, n)				  	       \
 	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
 			(unsigned long)(n), sizeof(*(ptr))))
-#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
-#define cmpxchg64(ptr, o, n)    cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
+#define arch_cmpxchg64(ptr, o, n)    arch_cmpxchg64_local((ptr), (o), (n))
 
 /*
  * xchg_u32
@@ -169,7 +169,7 @@ static inline unsigned long xchg_u32(volatile int * m, unsigned long val)
 #endif
 }
 
-#define xchg(ptr,x) \
+#define arch_xchg(ptr,x) \
 	((__typeof__(*(ptr)))__xchg((unsigned long)(x),(ptr),sizeof(*(ptr))))
 
 static inline u32 xchg_small(volatile void *ptr, u32 x, int size)
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 32/33] locking/atomic: delete !ARCH_ATOMIC remnants
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (30 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 31/33] locking/atomic: xtensa: " Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-11  9:30   ` Geert Uytterhoeven
  2021-05-10  9:37 ` [PATCH 33/33] locking/atomics: atomic-instrumented: simplify ifdeffery Mark Rutland
  2021-05-10 14:56 ` [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Arnd Bergmann
  33 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Now that all architectures implement ARCH_ATOMIC, we can make it
mandatory, removing the Kconfig symbol and logic for !ARCH_ATOMIC.

There should be no functional change as a result of this patch.
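
As an illustration (simplified from the asm-generic/cmpxchg.h hunk
below), the kind of selection logic this removes is:

  #ifdef CONFIG_ARCH_ATOMIC
  #ifndef arch_xchg
  #define arch_xchg	generic_xchg
  #endif
  #else /* !CONFIG_ARCH_ATOMIC */
  #ifndef xchg
  #define xchg		generic_xchg
  #endif
  #endif

... after which only the arch_-prefixed half remains, unconditionally.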

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 arch/Kconfig                    |    3 -
 arch/alpha/Kconfig              |    1 -
 arch/arc/Kconfig                |    1 -
 arch/arm/Kconfig                |    1 -
 arch/arm64/Kconfig              |    1 -
 arch/csky/Kconfig               |    1 -
 arch/h8300/Kconfig              |    1 -
 arch/hexagon/Kconfig            |    1 -
 arch/ia64/Kconfig               |    1 -
 arch/m68k/Kconfig               |    1 -
 arch/microblaze/Kconfig         |    1 -
 arch/mips/Kconfig               |    1 -
 arch/nds32/Kconfig              |    1 -
 arch/nios2/Kconfig              |    1 -
 arch/openrisc/Kconfig           |    1 -
 arch/parisc/Kconfig             |    1 -
 arch/powerpc/Kconfig            |    1 -
 arch/riscv/Kconfig              |    1 -
 arch/s390/Kconfig               |    1 -
 arch/sh/Kconfig                 |    1 -
 arch/sparc/Kconfig              |    1 -
 arch/um/Kconfig                 |    1 -
 arch/x86/Kconfig                |    1 -
 arch/xtensa/Kconfig             |    1 -
 include/asm-generic/atomic.h    |   44 +-
 include/asm-generic/atomic64.h  |   29 -
 include/asm-generic/cmpxchg.h   |   21 -
 include/linux/atomic-fallback.h | 2595 ---------------------------------------
 include/linux/atomic.h          |    4 -
 scripts/atomic/check-atomics.sh |    1 -
 scripts/atomic/gen-atomics.sh   |    1 -
 31 files changed, 3 insertions(+), 2718 deletions(-)
 delete mode 100644 include/linux/atomic-fallback.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 3fb3b12d4a95..c45b770d3579 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -11,9 +11,6 @@ source "arch/$(SRCARCH)/Kconfig"
 
 menu "General architecture-dependent options"
 
-config ARCH_ATOMIC
-	bool
-
 config CRASH_CORE
 	bool
 
diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 7920fc2e2a2a..5998106faa60 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -2,7 +2,6 @@
 config ALPHA
 	bool
 	default y
-	select ARCH_ATOMIC
 	select ARCH_32BIT_USTAT_F_TINODE
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index 098ecc72d048..2d98501c0897 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -6,7 +6,6 @@
 config ARC
 	def_bool y
 	select ARC_TIMERS
-	select ARCH_ATOMIC
 	select ARCH_HAS_CACHE_LINE_SIZE
 	select ARCH_HAS_DEBUG_VM_PGTABLE
 	select ARCH_HAS_DMA_PREP_COHERENT
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index b7334a6643b9..24804f11302d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -3,7 +3,6 @@ config ARM
 	bool
 	default y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DEBUG_VIRTUAL if MMU
 	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 62ab429d1f42..9f1d8566bbf9 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -9,7 +9,6 @@ config ARM64
 	select ACPI_MCFG if (ACPI && PCI)
 	select ACPI_SPCR_TABLE if ACPI
 	select ACPI_PPTT if ACPI
-	select ARCH_ATOMIC
 	select ARCH_HAS_DEBUG_WX
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if HUGETLB_PAGE && MIGRATION
diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
index 3521f14bcd96..8de5b987edb9 100644
--- a/arch/csky/Kconfig
+++ b/arch/csky/Kconfig
@@ -2,7 +2,6 @@
 config CSKY
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_GCOV_PROFILE_ALL
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
diff --git a/arch/h8300/Kconfig b/arch/h8300/Kconfig
index bdf05ad3206a..3e3e0f16f7e0 100644
--- a/arch/h8300/Kconfig
+++ b/arch/h8300/Kconfig
@@ -2,7 +2,6 @@
 config H8300
         def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select BINFMT_FLAT_ARGVP_ENVP_ON_STACK
 	select BINFMT_FLAT_OLD_ALWAYS_RAM
diff --git a/arch/hexagon/Kconfig b/arch/hexagon/Kconfig
index 1368954ef679..44a409967af1 100644
--- a/arch/hexagon/Kconfig
+++ b/arch/hexagon/Kconfig
@@ -5,7 +5,6 @@ comment "Linux Kernel Configuration for Hexagon"
 config HEXAGON
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
 	select ARCH_NO_PREEMPT
 	# Other pending projects/to-do items.
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index c5414dcd5d0d..279252e3e0f7 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -8,7 +8,6 @@ menu "Processor type and features"
 
 config IA64
 	bool
-	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_MARK_CLEAN
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select ARCH_MIGHT_HAVE_PC_SERIO
diff --git a/arch/m68k/Kconfig b/arch/m68k/Kconfig
index d1d91ac47f51..372e4e69c43a 100644
--- a/arch/m68k/Kconfig
+++ b/arch/m68k/Kconfig
@@ -3,7 +3,6 @@ config M68K
 	bool
 	default y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT
 	select ARCH_HAS_DMA_PREP_COHERENT if HAS_DMA && MMU && !COLDFIRE
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE if HAS_DMA
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 5a52922dc225..0660f47012bc 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -2,7 +2,6 @@
 config MICROBLAZE
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_NO_SWAP
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_GCOV_PROFILE_ALL
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 55b4da96872f..ed51970c08e7 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -3,7 +3,6 @@ config MIPS
 	bool
 	default y
 	select ARCH_32BIT_OFF_T if !64BIT
-	select ARCH_ATOMIC
 	select ARCH_BINFMT_ELF_STATE if MIPS_FP_SUPPORT
 	select ARCH_HAS_DEBUG_VIRTUAL if !64BIT
 	select ARCH_HAS_FORTIFY_SOURCE
diff --git a/arch/nds32/Kconfig b/arch/nds32/Kconfig
index 352913573aee..62313902d75d 100644
--- a/arch/nds32/Kconfig
+++ b/arch/nds32/Kconfig
@@ -7,7 +7,6 @@
 config NDS32
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 67dae88c5b53..c24955c81c92 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -2,7 +2,6 @@
 config NIOS2
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_PREP_COHERENT
 	select ARCH_HAS_SYNC_DMA_FOR_CPU
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index 8c50bc9674f5..591acc5990dc 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -7,7 +7,6 @@
 config OPENRISC
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_DMA_SET_UNCACHED
 	select ARCH_HAS_DMA_CLEAR_UNCACHED
 	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index bfa120a4add1..bde9907bc5b2 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -2,7 +2,6 @@
 config PARISC
 	def_bool y
 	select ARCH_32BIT_OFF_T if !64BIT
-	select ARCH_ATOMIC
 	select ARCH_MIGHT_HAVE_PC_PARPORT
 	select HAVE_IDE
 	select HAVE_FUNCTION_TRACER
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index d143c2b616f0..088dd2afcfe4 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -118,7 +118,6 @@ config PPC
 	# Please keep this list sorted alphabetically.
 	#
 	select ARCH_32BIT_OFF_T if PPC32
-	select ARCH_ATOMIC
 	select ARCH_ENABLE_MEMORY_HOTPLUG
 	select ARCH_ENABLE_MEMORY_HOTREMOVE
 	select ARCH_HAS_COPY_MC			if PPC64
diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
index c59b9f4a9d62..a8ad8eb76120 100644
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ -12,7 +12,6 @@ config 32BIT
 
 config RISCV
 	def_bool y
-	select ARCH_ATOMIC
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index 85374a36c69e..b4c7c34069f8 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -58,7 +58,6 @@ config S390
 	# Note: keep this list sorted alphabetically
 	#
 	imply IMA_SECURE_AND_OR_TRUSTED_BOOT
-	select ARCH_ATOMIC
 	select ARCH_32BIT_USTAT_F_TINODE
 	select ARCH_BINFMT_ELF_STATE
 	select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index d2925cbb6fa4..68129537e350 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -2,7 +2,6 @@
 config SUPERH
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_ENABLE_MEMORY_HOTPLUG if SPARSEMEM && MMU
 	select ARCH_ENABLE_MEMORY_HOTREMOVE if SPARSEMEM && MMU
 	select ARCH_HAVE_CUSTOM_GPIO_H
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 46790083e918..164a5254c91c 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -13,7 +13,6 @@ config 64BIT
 config SPARC
 	bool
 	default y
-	select ARCH_ATOMIC
 	select ARCH_MIGHT_HAVE_PC_PARPORT if SPARC64 && PCI
 	select ARCH_MIGHT_HAVE_PC_SERIO
 	select DMA_OPS
diff --git a/arch/um/Kconfig b/arch/um/Kconfig
index 4370a9521ea4..57cfd9a1c082 100644
--- a/arch/um/Kconfig
+++ b/arch/um/Kconfig
@@ -5,7 +5,6 @@ menu "UML-specific options"
 config UML
 	bool
 	default y
-	select ARCH_ATOMIC
 	select ARCH_EPHEMERAL_INODES
 	select ARCH_HAS_KCOV
 	select ARCH_NO_PREEMPT
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 11a27563033d..0045e1b44190 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -58,7 +58,6 @@ config X86
 	#
 	select ACPI_LEGACY_TABLES_LOOKUP	if ACPI
 	select ACPI_SYSTEM_POWER_STATES_SUPPORT	if ACPI
-	select ARCH_ATOMIC
 	select ARCH_32BIT_OFF_T			if X86_32
 	select ARCH_CLOCKSOURCE_INIT
 	select ARCH_ENABLE_HUGEPAGE_MIGRATION if X86_64 && HUGETLB_PAGE && MIGRATION
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index 39bb9bdae6b1..2332b2156993 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -2,7 +2,6 @@
 config XTENSA
 	def_bool y
 	select ARCH_32BIT_OFF_T
-	select ARCH_ATOMIC
 	select ARCH_HAS_BINFMT_FLAT if !MMU
 	select ARCH_HAS_DMA_PREP_COHERENT if MMU
 	select ARCH_HAS_SYNC_DMA_FOR_CPU if MMU
diff --git a/include/asm-generic/atomic.h b/include/asm-generic/atomic.h
index 649060fa0fe8..04b8be9f1a77 100644
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -12,14 +12,6 @@
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
-#ifdef CONFIG_ARCH_ATOMIC
-#define __ga_cmpxchg	arch_cmpxchg
-#define __ga_xchg	arch_xchg
-#else
-#define __ga_cmpxchg	cmpxchg
-#define __ga_xchg	xchg
-#endif
-
 #ifdef CONFIG_SMP
 
 /* we can build all atomic primitives from cmpxchg */
@@ -30,7 +22,7 @@ static inline void generic_atomic_##op(int i, atomic_t *v)		\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
+	while ((old = arch_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 }
 
@@ -40,7 +32,7 @@ static inline int generic_atomic_##op##_return(int i, atomic_t *v)	\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
+	while ((old = arch_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 									\
 	return c c_op i;						\
@@ -52,7 +44,7 @@ static inline int generic_atomic_fetch_##op(int i, atomic_t *v)		\
 	int c, old;							\
 									\
 	c = v->counter;							\
-	while ((old = __ga_cmpxchg(&v->counter, c, c c_op i)) != c)	\
+	while ((old = arch_cmpxchg(&v->counter, c, c c_op i)) != c)	\
 		c = old;						\
 									\
 	return c;							\
@@ -120,11 +112,6 @@ ATOMIC_OP(xor, ^)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-#undef __ga_cmpxchg
-#undef __ga_xchg
-
-#ifdef CONFIG_ARCH_ATOMIC
-
 #define arch_atomic_add_return			generic_atomic_add_return
 #define arch_atomic_sub_return			generic_atomic_sub_return
 
@@ -146,29 +133,4 @@ ATOMIC_OP(xor, ^)
 #define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
 #define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
 
-#else /* CONFIG_ARCH_ATOMIC */
-
-#define atomic_add_return		generic_atomic_add_return
-#define atomic_sub_return		generic_atomic_sub_return
-
-#define atomic_fetch_add		generic_atomic_fetch_add
-#define atomic_fetch_sub		generic_atomic_fetch_sub
-#define atomic_fetch_and		generic_atomic_fetch_and
-#define atomic_fetch_or			generic_atomic_fetch_or
-#define atomic_fetch_xor		generic_atomic_fetch_xor
-
-#define atomic_add			generic_atomic_add
-#define atomic_sub			generic_atomic_sub
-#define atomic_and			generic_atomic_and
-#define atomic_or			generic_atomic_or
-#define atomic_xor			generic_atomic_xor
-
-#define atomic_read(v)			READ_ONCE((v)->counter)
-#define atomic_set(v, i)		WRITE_ONCE(((v)->counter), (i))
-
-#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
-#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
-
-#endif /* CONFIG_ARCH_ATOMIC */
-
 #endif /* __ASM_GENERIC_ATOMIC_H */
diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index c8c7d9fae820..100d24b02e52 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -49,8 +49,6 @@ extern s64 generic_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
 extern s64 generic_atomic64_xchg(atomic64_t *v, s64 new);
 extern s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
 
-#ifdef CONFIG_ARCH_ATOMIC
-
 #define arch_atomic64_read		generic_atomic64_read
 #define arch_atomic64_set		generic_atomic64_set
 #define arch_atomic64_set_release	generic_atomic64_set
@@ -74,31 +72,4 @@ extern s64 generic_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
 #define arch_atomic64_xchg		generic_atomic64_xchg
 #define arch_atomic64_fetch_add_unless	generic_atomic64_fetch_add_unless
 
-#else /* CONFIG_ARCH_ATOMIC */
-
-#define atomic64_read			generic_atomic64_read
-#define atomic64_set			generic_atomic64_set
-#define atomic64_set_release		generic_atomic64_set
-
-#define atomic64_add			generic_atomic64_add
-#define atomic64_add_return		generic_atomic64_add_return
-#define atomic64_fetch_add		generic_atomic64_fetch_add
-#define atomic64_sub			generic_atomic64_sub
-#define atomic64_sub_return		generic_atomic64_sub_return
-#define atomic64_fetch_sub		generic_atomic64_fetch_sub
-
-#define atomic64_and			generic_atomic64_and
-#define atomic64_fetch_and		generic_atomic64_fetch_and
-#define atomic64_or			generic_atomic64_or
-#define atomic64_fetch_or		generic_atomic64_fetch_or
-#define atomic64_xor			generic_atomic64_xor
-#define atomic64_fetch_xor		generic_atomic64_fetch_xor
-
-#define atomic64_dec_if_positive	generic_atomic64_dec_if_positive
-#define atomic64_cmpxchg		generic_atomic64_cmpxchg
-#define atomic64_xchg			generic_atomic64_xchg
-#define atomic64_fetch_add_unless	generic_atomic64_fetch_add_unless
-
-#endif /* CONFIG_ARCH_ATOMIC */
-
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/include/asm-generic/cmpxchg.h b/include/asm-generic/cmpxchg.h
index 98c931199089..dca4419922a9 100644
--- a/include/asm-generic/cmpxchg.h
+++ b/include/asm-generic/cmpxchg.h
@@ -97,8 +97,6 @@ unsigned long __generic_xchg(unsigned long x, volatile void *ptr, int size)
 	__generic_cmpxchg64_local((ptr), (o), (n))
 
 
-#ifdef CONFIG_ARCH_ATOMIC
-
 #ifndef arch_xchg
 #define arch_xchg		generic_xchg
 #endif
@@ -114,23 +112,4 @@ unsigned long __generic_xchg(unsigned long x, volatile void *ptr, int size)
 #define arch_cmpxchg		arch_cmpxchg_local
 #define arch_cmpxchg64		arch_cmpxchg64_local
 
-#else /* CONFIG_ARCH_ATOMIC */
-
-#ifndef xchg
-#define xchg			generic_xchg
-#endif
-
-#ifndef cmpxchg_local
-#define cmpxchg_local		generic_cmpxchg_local
-#endif
-
-#ifndef cmpxchg64_local
-#define cmpxchg64_local		generic_cmpxchg64_local
-#endif
-
-#define cmpxchg			cmpxchg_local
-#define cmpxchg64		cmpxchg64_local
-
-#endif /* CONFIG_ARCH_ATOMIC */
-
 #endif /* __ASM_GENERIC_CMPXCHG_H */
diff --git a/include/linux/atomic-fallback.h b/include/linux/atomic-fallback.h
deleted file mode 100644
index 2a3f55d98be9..000000000000
diff --git a/include/linux/atomic.h b/include/linux/atomic.h
index 4f8d83f9e480..ed1d3ffd5b9d 100644
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -77,12 +77,8 @@
 	__ret;								\
 })
 
-#ifdef CONFIG_ARCH_ATOMIC
 #include <linux/atomic-arch-fallback.h>
 #include <asm-generic/atomic-instrumented.h>
-#else
-#include <linux/atomic-fallback.h>
-#endif
 
 #include <asm-generic/atomic-long.h>
 
diff --git a/scripts/atomic/check-atomics.sh b/scripts/atomic/check-atomics.sh
index 82748d42ecc5..9c7fbd4bcbce 100755
--- a/scripts/atomic/check-atomics.sh
+++ b/scripts/atomic/check-atomics.sh
@@ -17,7 +17,6 @@ cat <<EOF |
 asm-generic/atomic-instrumented.h
 asm-generic/atomic-long.h
 linux/atomic-arch-fallback.h
-linux/atomic-fallback.h
 EOF
 while read header; do
 	OLDSUM="$(tail -n 1 ${LINUXDIR}/include/${header})"
diff --git a/scripts/atomic/gen-atomics.sh b/scripts/atomic/gen-atomics.sh
index d29e159ef489..f776a574224d 100755
--- a/scripts/atomic/gen-atomics.sh
+++ b/scripts/atomic/gen-atomics.sh
@@ -11,7 +11,6 @@ cat <<EOF |
 gen-atomic-instrumented.sh      asm-generic/atomic-instrumented.h
 gen-atomic-long.sh              asm-generic/atomic-long.h
 gen-atomic-fallback.sh          linux/atomic-arch-fallback.h		arch_
-gen-atomic-fallback.sh          linux/atomic-fallback.h
 EOF
 while read script header args; do
 	/bin/sh ${ATOMICDIR}/${script} ${ATOMICTBL} ${args} > ${LINUXDIR}/include/${header}
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* [PATCH 33/33] locking/atomics: atomic-instrumented: simplify ifdeffery
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (31 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 32/33] locking/atomic: delete !ARCH_ATOMIC remnants Mark Rutland
@ 2021-05-10  9:37 ` Mark Rutland
  2021-05-10 14:56 ` [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Arnd Bergmann
  33 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-10  9:37 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	palmer, paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

Now that all architectures implement ARCH_ATOMIC, the fallbacks are
generated before the instrumented wrappers. Consequently,
atomic-instrumented.h can assume that the whole set of atomic
functions has already been generated, and it no longer needs to
provide a preprocessor definition for every atomic it wraps.

This patch removes the redundant ifdeffery.

There should be no functional change as a result of this patch.
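
For example (simplified from the generated diff below), a wrapper that
previously had to guard against a missing arch_*() primitive:

  #if defined(arch_atomic_fetch_inc)
  static __always_inline int
  atomic_fetch_inc(atomic_t *v)
  {
  	instrument_atomic_read_write(v, sizeof(*v));
  	return arch_atomic_fetch_inc(v);
  }
  #define atomic_fetch_inc atomic_fetch_inc
  #endif

... can now drop the #if / #define / #endif lines, since the fallback
generation guarantees that arch_atomic_fetch_inc() exists.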

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will@kernel.org>
---
 include/asm-generic/atomic-instrumented.h | 498 +-----------------------------
 scripts/atomic/gen-atomic-instrumented.sh |  51 +--
 2 files changed, 3 insertions(+), 546 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 888b6cfeed91..bc45af52c93b 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -27,17 +27,13 @@ atomic_read(const atomic_t *v)
 	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read(v);
 }
-#define atomic_read atomic_read
 
-#if defined(arch_atomic_read_acquire)
 static __always_inline int
 atomic_read_acquire(const atomic_t *v)
 {
 	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read_acquire(v);
 }
-#define atomic_read_acquire atomic_read_acquire
-#endif
 
 static __always_inline void
 atomic_set(atomic_t *v, int i)
@@ -45,17 +41,13 @@ atomic_set(atomic_t *v, int i)
 	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set(v, i);
 }
-#define atomic_set atomic_set
 
-#if defined(arch_atomic_set_release)
 static __always_inline void
 atomic_set_release(atomic_t *v, int i)
 {
 	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set_release(v, i);
 }
-#define atomic_set_release atomic_set_release
-#endif
 
 static __always_inline void
 atomic_add(int i, atomic_t *v)
@@ -63,87 +55,62 @@ atomic_add(int i, atomic_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_add(i, v);
 }
-#define atomic_add atomic_add
 
-#if !defined(arch_atomic_add_return_relaxed) || defined(arch_atomic_add_return)
 static __always_inline int
 atomic_add_return(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_return(i, v);
 }
-#define atomic_add_return atomic_add_return
-#endif
 
-#if defined(arch_atomic_add_return_acquire)
 static __always_inline int
 atomic_add_return_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_return_acquire(i, v);
 }
-#define atomic_add_return_acquire atomic_add_return_acquire
-#endif
 
-#if defined(arch_atomic_add_return_release)
 static __always_inline int
 atomic_add_return_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_return_release(i, v);
 }
-#define atomic_add_return_release atomic_add_return_release
-#endif
 
-#if defined(arch_atomic_add_return_relaxed)
 static __always_inline int
 atomic_add_return_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_return_relaxed(i, v);
 }
-#define atomic_add_return_relaxed atomic_add_return_relaxed
-#endif
 
-#if !defined(arch_atomic_fetch_add_relaxed) || defined(arch_atomic_fetch_add)
 static __always_inline int
 atomic_fetch_add(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_add(i, v);
 }
-#define atomic_fetch_add atomic_fetch_add
-#endif
 
-#if defined(arch_atomic_fetch_add_acquire)
 static __always_inline int
 atomic_fetch_add_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_acquire(i, v);
 }
-#define atomic_fetch_add_acquire atomic_fetch_add_acquire
-#endif
 
-#if defined(arch_atomic_fetch_add_release)
 static __always_inline int
 atomic_fetch_add_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_release(i, v);
 }
-#define atomic_fetch_add_release atomic_fetch_add_release
-#endif
 
-#if defined(arch_atomic_fetch_add_relaxed)
 static __always_inline int
 atomic_fetch_add_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_relaxed(i, v);
 }
-#define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
-#endif
 
 static __always_inline void
 atomic_sub(int i, atomic_t *v)
@@ -151,267 +118,188 @@ atomic_sub(int i, atomic_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_sub(i, v);
 }
-#define atomic_sub atomic_sub
 
-#if !defined(arch_atomic_sub_return_relaxed) || defined(arch_atomic_sub_return)
 static __always_inline int
 atomic_sub_return(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_sub_return(i, v);
 }
-#define atomic_sub_return atomic_sub_return
-#endif
 
-#if defined(arch_atomic_sub_return_acquire)
 static __always_inline int
 atomic_sub_return_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_sub_return_acquire(i, v);
 }
-#define atomic_sub_return_acquire atomic_sub_return_acquire
-#endif
 
-#if defined(arch_atomic_sub_return_release)
 static __always_inline int
 atomic_sub_return_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_sub_return_release(i, v);
 }
-#define atomic_sub_return_release atomic_sub_return_release
-#endif
 
-#if defined(arch_atomic_sub_return_relaxed)
 static __always_inline int
 atomic_sub_return_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_sub_return_relaxed(i, v);
 }
-#define atomic_sub_return_relaxed atomic_sub_return_relaxed
-#endif
 
-#if !defined(arch_atomic_fetch_sub_relaxed) || defined(arch_atomic_fetch_sub)
 static __always_inline int
 atomic_fetch_sub(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub(i, v);
 }
-#define atomic_fetch_sub atomic_fetch_sub
-#endif
 
-#if defined(arch_atomic_fetch_sub_acquire)
 static __always_inline int
 atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_acquire(i, v);
 }
-#define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
-#endif
 
-#if defined(arch_atomic_fetch_sub_release)
 static __always_inline int
 atomic_fetch_sub_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_release(i, v);
 }
-#define atomic_fetch_sub_release atomic_fetch_sub_release
-#endif
 
-#if defined(arch_atomic_fetch_sub_relaxed)
 static __always_inline int
 atomic_fetch_sub_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_relaxed(i, v);
 }
-#define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
-#endif
 
-#if defined(arch_atomic_inc)
 static __always_inline void
 atomic_inc(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_inc(v);
 }
-#define atomic_inc atomic_inc
-#endif
 
-#if defined(arch_atomic_inc_return)
 static __always_inline int
 atomic_inc_return(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_return(v);
 }
-#define atomic_inc_return atomic_inc_return
-#endif
 
-#if defined(arch_atomic_inc_return_acquire)
 static __always_inline int
 atomic_inc_return_acquire(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_return_acquire(v);
 }
-#define atomic_inc_return_acquire atomic_inc_return_acquire
-#endif
 
-#if defined(arch_atomic_inc_return_release)
 static __always_inline int
 atomic_inc_return_release(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_return_release(v);
 }
-#define atomic_inc_return_release atomic_inc_return_release
-#endif
 
-#if defined(arch_atomic_inc_return_relaxed)
 static __always_inline int
 atomic_inc_return_relaxed(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_return_relaxed(v);
 }
-#define atomic_inc_return_relaxed atomic_inc_return_relaxed
-#endif
 
-#if defined(arch_atomic_fetch_inc)
 static __always_inline int
 atomic_fetch_inc(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc(v);
 }
-#define atomic_fetch_inc atomic_fetch_inc
-#endif
 
-#if defined(arch_atomic_fetch_inc_acquire)
 static __always_inline int
 atomic_fetch_inc_acquire(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_acquire(v);
 }
-#define atomic_fetch_inc_acquire atomic_fetch_inc_acquire
-#endif
 
-#if defined(arch_atomic_fetch_inc_release)
 static __always_inline int
 atomic_fetch_inc_release(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_release(v);
 }
-#define atomic_fetch_inc_release atomic_fetch_inc_release
-#endif
 
-#if defined(arch_atomic_fetch_inc_relaxed)
 static __always_inline int
 atomic_fetch_inc_relaxed(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_relaxed(v);
 }
-#define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed
-#endif
 
-#if defined(arch_atomic_dec)
 static __always_inline void
 atomic_dec(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_dec(v);
 }
-#define atomic_dec atomic_dec
-#endif
 
-#if defined(arch_atomic_dec_return)
 static __always_inline int
 atomic_dec_return(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_return(v);
 }
-#define atomic_dec_return atomic_dec_return
-#endif
 
-#if defined(arch_atomic_dec_return_acquire)
 static __always_inline int
 atomic_dec_return_acquire(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_return_acquire(v);
 }
-#define atomic_dec_return_acquire atomic_dec_return_acquire
-#endif
 
-#if defined(arch_atomic_dec_return_release)
 static __always_inline int
 atomic_dec_return_release(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_return_release(v);
 }
-#define atomic_dec_return_release atomic_dec_return_release
-#endif
 
-#if defined(arch_atomic_dec_return_relaxed)
 static __always_inline int
 atomic_dec_return_relaxed(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_return_relaxed(v);
 }
-#define atomic_dec_return_relaxed atomic_dec_return_relaxed
-#endif
 
-#if defined(arch_atomic_fetch_dec)
 static __always_inline int
 atomic_fetch_dec(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec(v);
 }
-#define atomic_fetch_dec atomic_fetch_dec
-#endif
 
-#if defined(arch_atomic_fetch_dec_acquire)
 static __always_inline int
 atomic_fetch_dec_acquire(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_acquire(v);
 }
-#define atomic_fetch_dec_acquire atomic_fetch_dec_acquire
-#endif
 
-#if defined(arch_atomic_fetch_dec_release)
 static __always_inline int
 atomic_fetch_dec_release(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_release(v);
 }
-#define atomic_fetch_dec_release atomic_fetch_dec_release
-#endif
 
-#if defined(arch_atomic_fetch_dec_relaxed)
 static __always_inline int
 atomic_fetch_dec_relaxed(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_relaxed(v);
 }
-#define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed
-#endif
 
 static __always_inline void
 atomic_and(int i, atomic_t *v)
@@ -419,97 +307,69 @@ atomic_and(int i, atomic_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_and(i, v);
 }
-#define atomic_and atomic_and
 
-#if !defined(arch_atomic_fetch_and_relaxed) || defined(arch_atomic_fetch_and)
 static __always_inline int
 atomic_fetch_and(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_and(i, v);
 }
-#define atomic_fetch_and atomic_fetch_and
-#endif
 
-#if defined(arch_atomic_fetch_and_acquire)
 static __always_inline int
 atomic_fetch_and_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_acquire(i, v);
 }
-#define atomic_fetch_and_acquire atomic_fetch_and_acquire
-#endif
 
-#if defined(arch_atomic_fetch_and_release)
 static __always_inline int
 atomic_fetch_and_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_release(i, v);
 }
-#define atomic_fetch_and_release atomic_fetch_and_release
-#endif
 
-#if defined(arch_atomic_fetch_and_relaxed)
 static __always_inline int
 atomic_fetch_and_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_relaxed(i, v);
 }
-#define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
-#endif
 
-#if defined(arch_atomic_andnot)
 static __always_inline void
 atomic_andnot(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_andnot(i, v);
 }
-#define atomic_andnot atomic_andnot
-#endif
 
-#if defined(arch_atomic_fetch_andnot)
 static __always_inline int
 atomic_fetch_andnot(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot(i, v);
 }
-#define atomic_fetch_andnot atomic_fetch_andnot
-#endif
 
-#if defined(arch_atomic_fetch_andnot_acquire)
 static __always_inline int
 atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_acquire(i, v);
 }
-#define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
-#endif
 
-#if defined(arch_atomic_fetch_andnot_release)
 static __always_inline int
 atomic_fetch_andnot_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_release(i, v);
 }
-#define atomic_fetch_andnot_release atomic_fetch_andnot_release
-#endif
 
-#if defined(arch_atomic_fetch_andnot_relaxed)
 static __always_inline int
 atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_relaxed(i, v);
 }
-#define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
-#endif
 
 static __always_inline void
 atomic_or(int i, atomic_t *v)
@@ -517,47 +377,34 @@ atomic_or(int i, atomic_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_or(i, v);
 }
-#define atomic_or atomic_or
 
-#if !defined(arch_atomic_fetch_or_relaxed) || defined(arch_atomic_fetch_or)
 static __always_inline int
 atomic_fetch_or(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_or(i, v);
 }
-#define atomic_fetch_or atomic_fetch_or
-#endif
 
-#if defined(arch_atomic_fetch_or_acquire)
 static __always_inline int
 atomic_fetch_or_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_acquire(i, v);
 }
-#define atomic_fetch_or_acquire atomic_fetch_or_acquire
-#endif
 
-#if defined(arch_atomic_fetch_or_release)
 static __always_inline int
 atomic_fetch_or_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_release(i, v);
 }
-#define atomic_fetch_or_release atomic_fetch_or_release
-#endif
 
-#if defined(arch_atomic_fetch_or_relaxed)
 static __always_inline int
 atomic_fetch_or_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_relaxed(i, v);
 }
-#define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
-#endif
 
 static __always_inline void
 atomic_xor(int i, atomic_t *v)
@@ -565,129 +412,91 @@ atomic_xor(int i, atomic_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic_xor(i, v);
 }
-#define atomic_xor atomic_xor
 
-#if !defined(arch_atomic_fetch_xor_relaxed) || defined(arch_atomic_fetch_xor)
 static __always_inline int
 atomic_fetch_xor(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor(i, v);
 }
-#define atomic_fetch_xor atomic_fetch_xor
-#endif
 
-#if defined(arch_atomic_fetch_xor_acquire)
 static __always_inline int
 atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_acquire(i, v);
 }
-#define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
-#endif
 
-#if defined(arch_atomic_fetch_xor_release)
 static __always_inline int
 atomic_fetch_xor_release(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_release(i, v);
 }
-#define atomic_fetch_xor_release atomic_fetch_xor_release
-#endif
 
-#if defined(arch_atomic_fetch_xor_relaxed)
 static __always_inline int
 atomic_fetch_xor_relaxed(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_relaxed(i, v);
 }
-#define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
-#endif
 
-#if !defined(arch_atomic_xchg_relaxed) || defined(arch_atomic_xchg)
 static __always_inline int
 atomic_xchg(atomic_t *v, int i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_xchg(v, i);
 }
-#define atomic_xchg atomic_xchg
-#endif
 
-#if defined(arch_atomic_xchg_acquire)
 static __always_inline int
 atomic_xchg_acquire(atomic_t *v, int i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_xchg_acquire(v, i);
 }
-#define atomic_xchg_acquire atomic_xchg_acquire
-#endif
 
-#if defined(arch_atomic_xchg_release)
 static __always_inline int
 atomic_xchg_release(atomic_t *v, int i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_xchg_release(v, i);
 }
-#define atomic_xchg_release atomic_xchg_release
-#endif
 
-#if defined(arch_atomic_xchg_relaxed)
 static __always_inline int
 atomic_xchg_relaxed(atomic_t *v, int i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_xchg_relaxed(v, i);
 }
-#define atomic_xchg_relaxed atomic_xchg_relaxed
-#endif
 
-#if !defined(arch_atomic_cmpxchg_relaxed) || defined(arch_atomic_cmpxchg)
 static __always_inline int
 atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg(v, old, new);
 }
-#define atomic_cmpxchg atomic_cmpxchg
-#endif
 
-#if defined(arch_atomic_cmpxchg_acquire)
 static __always_inline int
 atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_acquire(v, old, new);
 }
-#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
-#endif
 
-#if defined(arch_atomic_cmpxchg_release)
 static __always_inline int
 atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_release(v, old, new);
 }
-#define atomic_cmpxchg_release atomic_cmpxchg_release
-#endif
 
-#if defined(arch_atomic_cmpxchg_relaxed)
 static __always_inline int
 atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_relaxed(v, old, new);
 }
-#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
-#endif
 
-#if defined(arch_atomic_try_cmpxchg)
 static __always_inline bool
 atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
@@ -695,10 +504,7 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg(v, old, new);
 }
-#define atomic_try_cmpxchg atomic_try_cmpxchg
-#endif
 
-#if defined(arch_atomic_try_cmpxchg_acquire)
 static __always_inline bool
 atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
@@ -706,10 +512,7 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_acquire(v, old, new);
 }
-#define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire
-#endif
 
-#if defined(arch_atomic_try_cmpxchg_release)
 static __always_inline bool
 atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
@@ -717,10 +520,7 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_release(v, old, new);
 }
-#define atomic_try_cmpxchg_release atomic_try_cmpxchg_release
-#endif
 
-#if defined(arch_atomic_try_cmpxchg_relaxed)
 static __always_inline bool
 atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
@@ -728,108 +528,76 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
 }
-#define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed
-#endif
 
-#if defined(arch_atomic_sub_and_test)
 static __always_inline bool
 atomic_sub_and_test(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_sub_and_test(i, v);
 }
-#define atomic_sub_and_test atomic_sub_and_test
-#endif
 
-#if defined(arch_atomic_dec_and_test)
 static __always_inline bool
 atomic_dec_and_test(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_and_test(v);
 }
-#define atomic_dec_and_test atomic_dec_and_test
-#endif
 
-#if defined(arch_atomic_inc_and_test)
 static __always_inline bool
 atomic_inc_and_test(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_and_test(v);
 }
-#define atomic_inc_and_test atomic_inc_and_test
-#endif
 
-#if defined(arch_atomic_add_negative)
 static __always_inline bool
 atomic_add_negative(int i, atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_negative(i, v);
 }
-#define atomic_add_negative atomic_add_negative
-#endif
 
-#if defined(arch_atomic_fetch_add_unless)
 static __always_inline int
 atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_unless(v, a, u);
 }
-#define atomic_fetch_add_unless atomic_fetch_add_unless
-#endif
 
-#if defined(arch_atomic_add_unless)
 static __always_inline bool
 atomic_add_unless(atomic_t *v, int a, int u)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_add_unless(v, a, u);
 }
-#define atomic_add_unless atomic_add_unless
-#endif
 
-#if defined(arch_atomic_inc_not_zero)
 static __always_inline bool
 atomic_inc_not_zero(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_not_zero(v);
 }
-#define atomic_inc_not_zero atomic_inc_not_zero
-#endif
 
-#if defined(arch_atomic_inc_unless_negative)
 static __always_inline bool
 atomic_inc_unless_negative(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_inc_unless_negative(v);
 }
-#define atomic_inc_unless_negative atomic_inc_unless_negative
-#endif
 
-#if defined(arch_atomic_dec_unless_positive)
 static __always_inline bool
 atomic_dec_unless_positive(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_unless_positive(v);
 }
-#define atomic_dec_unless_positive atomic_dec_unless_positive
-#endif
 
-#if defined(arch_atomic_dec_if_positive)
 static __always_inline int
 atomic_dec_if_positive(atomic_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic_dec_if_positive(v);
 }
-#define atomic_dec_if_positive atomic_dec_if_positive
-#endif
 
 static __always_inline s64
 atomic64_read(const atomic64_t *v)
@@ -837,17 +605,13 @@ atomic64_read(const atomic64_t *v)
 	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic64_read(v);
 }
-#define atomic64_read atomic64_read
 
-#if defined(arch_atomic64_read_acquire)
 static __always_inline s64
 atomic64_read_acquire(const atomic64_t *v)
 {
 	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic64_read_acquire(v);
 }
-#define atomic64_read_acquire atomic64_read_acquire
-#endif
 
 static __always_inline void
 atomic64_set(atomic64_t *v, s64 i)
@@ -855,17 +619,13 @@ atomic64_set(atomic64_t *v, s64 i)
 	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_set(v, i);
 }
-#define atomic64_set atomic64_set
 
-#if defined(arch_atomic64_set_release)
 static __always_inline void
 atomic64_set_release(atomic64_t *v, s64 i)
 {
 	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_set_release(v, i);
 }
-#define atomic64_set_release atomic64_set_release
-#endif
 
 static __always_inline void
 atomic64_add(s64 i, atomic64_t *v)
@@ -873,87 +633,62 @@ atomic64_add(s64 i, atomic64_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_add(i, v);
 }
-#define atomic64_add atomic64_add
 
-#if !defined(arch_atomic64_add_return_relaxed) || defined(arch_atomic64_add_return)
 static __always_inline s64
 atomic64_add_return(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_return(i, v);
 }
-#define atomic64_add_return atomic64_add_return
-#endif
 
-#if defined(arch_atomic64_add_return_acquire)
 static __always_inline s64
 atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_return_acquire(i, v);
 }
-#define atomic64_add_return_acquire atomic64_add_return_acquire
-#endif
 
-#if defined(arch_atomic64_add_return_release)
 static __always_inline s64
 atomic64_add_return_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_return_release(i, v);
 }
-#define atomic64_add_return_release atomic64_add_return_release
-#endif
 
-#if defined(arch_atomic64_add_return_relaxed)
 static __always_inline s64
 atomic64_add_return_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_return_relaxed(i, v);
 }
-#define atomic64_add_return_relaxed atomic64_add_return_relaxed
-#endif
 
-#if !defined(arch_atomic64_fetch_add_relaxed) || defined(arch_atomic64_fetch_add)
 static __always_inline s64
 atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add(i, v);
 }
-#define atomic64_fetch_add atomic64_fetch_add
-#endif
 
-#if defined(arch_atomic64_fetch_add_acquire)
 static __always_inline s64
 atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_acquire(i, v);
 }
-#define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_add_release)
 static __always_inline s64
 atomic64_fetch_add_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_release(i, v);
 }
-#define atomic64_fetch_add_release atomic64_fetch_add_release
-#endif
 
-#if defined(arch_atomic64_fetch_add_relaxed)
 static __always_inline s64
 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_relaxed(i, v);
 }
-#define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
-#endif
 
 static __always_inline void
 atomic64_sub(s64 i, atomic64_t *v)
@@ -961,267 +696,188 @@ atomic64_sub(s64 i, atomic64_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_sub(i, v);
 }
-#define atomic64_sub atomic64_sub
 
-#if !defined(arch_atomic64_sub_return_relaxed) || defined(arch_atomic64_sub_return)
 static __always_inline s64
 atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_sub_return(i, v);
 }
-#define atomic64_sub_return atomic64_sub_return
-#endif
 
-#if defined(arch_atomic64_sub_return_acquire)
 static __always_inline s64
 atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_acquire(i, v);
 }
-#define atomic64_sub_return_acquire atomic64_sub_return_acquire
-#endif
 
-#if defined(arch_atomic64_sub_return_release)
 static __always_inline s64
 atomic64_sub_return_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_release(i, v);
 }
-#define atomic64_sub_return_release atomic64_sub_return_release
-#endif
 
-#if defined(arch_atomic64_sub_return_relaxed)
 static __always_inline s64
 atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_relaxed(i, v);
 }
-#define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
-#endif
 
-#if !defined(arch_atomic64_fetch_sub_relaxed) || defined(arch_atomic64_fetch_sub)
 static __always_inline s64
 atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub(i, v);
 }
-#define atomic64_fetch_sub atomic64_fetch_sub
-#endif
 
-#if defined(arch_atomic64_fetch_sub_acquire)
 static __always_inline s64
 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_acquire(i, v);
 }
-#define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_sub_release)
 static __always_inline s64
 atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_release(i, v);
 }
-#define atomic64_fetch_sub_release atomic64_fetch_sub_release
-#endif
 
-#if defined(arch_atomic64_fetch_sub_relaxed)
 static __always_inline s64
 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_relaxed(i, v);
 }
-#define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
-#endif
 
-#if defined(arch_atomic64_inc)
 static __always_inline void
 atomic64_inc(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_inc(v);
 }
-#define atomic64_inc atomic64_inc
-#endif
 
-#if defined(arch_atomic64_inc_return)
 static __always_inline s64
 atomic64_inc_return(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_return(v);
 }
-#define atomic64_inc_return atomic64_inc_return
-#endif
 
-#if defined(arch_atomic64_inc_return_acquire)
 static __always_inline s64
 atomic64_inc_return_acquire(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_acquire(v);
 }
-#define atomic64_inc_return_acquire atomic64_inc_return_acquire
-#endif
 
-#if defined(arch_atomic64_inc_return_release)
 static __always_inline s64
 atomic64_inc_return_release(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_release(v);
 }
-#define atomic64_inc_return_release atomic64_inc_return_release
-#endif
 
-#if defined(arch_atomic64_inc_return_relaxed)
 static __always_inline s64
 atomic64_inc_return_relaxed(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_relaxed(v);
 }
-#define atomic64_inc_return_relaxed atomic64_inc_return_relaxed
-#endif
 
-#if defined(arch_atomic64_fetch_inc)
 static __always_inline s64
 atomic64_fetch_inc(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc(v);
 }
-#define atomic64_fetch_inc atomic64_fetch_inc
-#endif
 
-#if defined(arch_atomic64_fetch_inc_acquire)
 static __always_inline s64
 atomic64_fetch_inc_acquire(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_acquire(v);
 }
-#define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_inc_release)
 static __always_inline s64
 atomic64_fetch_inc_release(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_release(v);
 }
-#define atomic64_fetch_inc_release atomic64_fetch_inc_release
-#endif
 
-#if defined(arch_atomic64_fetch_inc_relaxed)
 static __always_inline s64
 atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_relaxed(v);
 }
-#define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed
-#endif
 
-#if defined(arch_atomic64_dec)
 static __always_inline void
 atomic64_dec(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_dec(v);
 }
-#define atomic64_dec atomic64_dec
-#endif
 
-#if defined(arch_atomic64_dec_return)
 static __always_inline s64
 atomic64_dec_return(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_return(v);
 }
-#define atomic64_dec_return atomic64_dec_return
-#endif
 
-#if defined(arch_atomic64_dec_return_acquire)
 static __always_inline s64
 atomic64_dec_return_acquire(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_acquire(v);
 }
-#define atomic64_dec_return_acquire atomic64_dec_return_acquire
-#endif
 
-#if defined(arch_atomic64_dec_return_release)
 static __always_inline s64
 atomic64_dec_return_release(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_release(v);
 }
-#define atomic64_dec_return_release atomic64_dec_return_release
-#endif
 
-#if defined(arch_atomic64_dec_return_relaxed)
 static __always_inline s64
 atomic64_dec_return_relaxed(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_relaxed(v);
 }
-#define atomic64_dec_return_relaxed atomic64_dec_return_relaxed
-#endif
 
-#if defined(arch_atomic64_fetch_dec)
 static __always_inline s64
 atomic64_fetch_dec(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec(v);
 }
-#define atomic64_fetch_dec atomic64_fetch_dec
-#endif
 
-#if defined(arch_atomic64_fetch_dec_acquire)
 static __always_inline s64
 atomic64_fetch_dec_acquire(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_acquire(v);
 }
-#define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_dec_release)
 static __always_inline s64
 atomic64_fetch_dec_release(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_release(v);
 }
-#define atomic64_fetch_dec_release atomic64_fetch_dec_release
-#endif
 
-#if defined(arch_atomic64_fetch_dec_relaxed)
 static __always_inline s64
 atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_relaxed(v);
 }
-#define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed
-#endif
 
 static __always_inline void
 atomic64_and(s64 i, atomic64_t *v)
@@ -1229,97 +885,69 @@ atomic64_and(s64 i, atomic64_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_and(i, v);
 }
-#define atomic64_and atomic64_and
 
-#if !defined(arch_atomic64_fetch_and_relaxed) || defined(arch_atomic64_fetch_and)
 static __always_inline s64
 atomic64_fetch_and(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and(i, v);
 }
-#define atomic64_fetch_and atomic64_fetch_and
-#endif
 
-#if defined(arch_atomic64_fetch_and_acquire)
 static __always_inline s64
 atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_acquire(i, v);
 }
-#define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_and_release)
 static __always_inline s64
 atomic64_fetch_and_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_release(i, v);
 }
-#define atomic64_fetch_and_release atomic64_fetch_and_release
-#endif
 
-#if defined(arch_atomic64_fetch_and_relaxed)
 static __always_inline s64
 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_relaxed(i, v);
 }
-#define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
-#endif
 
-#if defined(arch_atomic64_andnot)
 static __always_inline void
 atomic64_andnot(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_andnot(i, v);
 }
-#define atomic64_andnot atomic64_andnot
-#endif
 
-#if defined(arch_atomic64_fetch_andnot)
 static __always_inline s64
 atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot(i, v);
 }
-#define atomic64_fetch_andnot atomic64_fetch_andnot
-#endif
 
-#if defined(arch_atomic64_fetch_andnot_acquire)
 static __always_inline s64
 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_acquire(i, v);
 }
-#define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_andnot_release)
 static __always_inline s64
 atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_release(i, v);
 }
-#define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
-#endif
 
-#if defined(arch_atomic64_fetch_andnot_relaxed)
 static __always_inline s64
 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_relaxed(i, v);
 }
-#define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
-#endif
 
 static __always_inline void
 atomic64_or(s64 i, atomic64_t *v)
@@ -1327,47 +955,34 @@ atomic64_or(s64 i, atomic64_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_or(i, v);
 }
-#define atomic64_or atomic64_or
 
-#if !defined(arch_atomic64_fetch_or_relaxed) || defined(arch_atomic64_fetch_or)
 static __always_inline s64
 atomic64_fetch_or(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or(i, v);
 }
-#define atomic64_fetch_or atomic64_fetch_or
-#endif
 
-#if defined(arch_atomic64_fetch_or_acquire)
 static __always_inline s64
 atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_acquire(i, v);
 }
-#define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_or_release)
 static __always_inline s64
 atomic64_fetch_or_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_release(i, v);
 }
-#define atomic64_fetch_or_release atomic64_fetch_or_release
-#endif
 
-#if defined(arch_atomic64_fetch_or_relaxed)
 static __always_inline s64
 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_relaxed(i, v);
 }
-#define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
-#endif
 
 static __always_inline void
 atomic64_xor(s64 i, atomic64_t *v)
@@ -1375,129 +990,91 @@ atomic64_xor(s64 i, atomic64_t *v)
 	instrument_atomic_read_write(v, sizeof(*v));
 	arch_atomic64_xor(i, v);
 }
-#define atomic64_xor atomic64_xor
 
-#if !defined(arch_atomic64_fetch_xor_relaxed) || defined(arch_atomic64_fetch_xor)
 static __always_inline s64
 atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor(i, v);
 }
-#define atomic64_fetch_xor atomic64_fetch_xor
-#endif
 
-#if defined(arch_atomic64_fetch_xor_acquire)
 static __always_inline s64
 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_acquire(i, v);
 }
-#define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
-#endif
 
-#if defined(arch_atomic64_fetch_xor_release)
 static __always_inline s64
 atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_release(i, v);
 }
-#define atomic64_fetch_xor_release atomic64_fetch_xor_release
-#endif
 
-#if defined(arch_atomic64_fetch_xor_relaxed)
 static __always_inline s64
 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_relaxed(i, v);
 }
-#define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
-#endif
 
-#if !defined(arch_atomic64_xchg_relaxed) || defined(arch_atomic64_xchg)
 static __always_inline s64
 atomic64_xchg(atomic64_t *v, s64 i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_xchg(v, i);
 }
-#define atomic64_xchg atomic64_xchg
-#endif
 
-#if defined(arch_atomic64_xchg_acquire)
 static __always_inline s64
 atomic64_xchg_acquire(atomic64_t *v, s64 i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_xchg_acquire(v, i);
 }
-#define atomic64_xchg_acquire atomic64_xchg_acquire
-#endif
 
-#if defined(arch_atomic64_xchg_release)
 static __always_inline s64
 atomic64_xchg_release(atomic64_t *v, s64 i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_xchg_release(v, i);
 }
-#define atomic64_xchg_release atomic64_xchg_release
-#endif
 
-#if defined(arch_atomic64_xchg_relaxed)
 static __always_inline s64
 atomic64_xchg_relaxed(atomic64_t *v, s64 i)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_xchg_relaxed(v, i);
 }
-#define atomic64_xchg_relaxed atomic64_xchg_relaxed
-#endif
 
-#if !defined(arch_atomic64_cmpxchg_relaxed) || defined(arch_atomic64_cmpxchg)
 static __always_inline s64
 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg(v, old, new);
 }
-#define atomic64_cmpxchg atomic64_cmpxchg
-#endif
 
-#if defined(arch_atomic64_cmpxchg_acquire)
 static __always_inline s64
 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_acquire(v, old, new);
 }
-#define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire
-#endif
 
-#if defined(arch_atomic64_cmpxchg_release)
 static __always_inline s64
 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_release(v, old, new);
 }
-#define atomic64_cmpxchg_release atomic64_cmpxchg_release
-#endif
 
-#if defined(arch_atomic64_cmpxchg_relaxed)
 static __always_inline s64
 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_relaxed(v, old, new);
 }
-#define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
-#endif
 
-#if defined(arch_atomic64_try_cmpxchg)
 static __always_inline bool
 atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
@@ -1505,10 +1082,7 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg(v, old, new);
 }
-#define atomic64_try_cmpxchg atomic64_try_cmpxchg
-#endif
 
-#if defined(arch_atomic64_try_cmpxchg_acquire)
 static __always_inline bool
 atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
@@ -1516,10 +1090,7 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_acquire(v, old, new);
 }
-#define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire
-#endif
 
-#if defined(arch_atomic64_try_cmpxchg_release)
 static __always_inline bool
 atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
@@ -1527,10 +1098,7 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_release(v, old, new);
 }
-#define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release
-#endif
 
-#if defined(arch_atomic64_try_cmpxchg_relaxed)
 static __always_inline bool
 atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 {
@@ -1538,218 +1106,161 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 	instrument_atomic_read_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 }
-#define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed
-#endif
 
-#if defined(arch_atomic64_sub_and_test)
 static __always_inline bool
 atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_sub_and_test(i, v);
 }
-#define atomic64_sub_and_test atomic64_sub_and_test
-#endif
 
-#if defined(arch_atomic64_dec_and_test)
 static __always_inline bool
 atomic64_dec_and_test(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_and_test(v);
 }
-#define atomic64_dec_and_test atomic64_dec_and_test
-#endif
 
-#if defined(arch_atomic64_inc_and_test)
 static __always_inline bool
 atomic64_inc_and_test(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_and_test(v);
 }
-#define atomic64_inc_and_test atomic64_inc_and_test
-#endif
 
-#if defined(arch_atomic64_add_negative)
 static __always_inline bool
 atomic64_add_negative(s64 i, atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_negative(i, v);
 }
-#define atomic64_add_negative atomic64_add_negative
-#endif
 
-#if defined(arch_atomic64_fetch_add_unless)
 static __always_inline s64
 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_unless(v, a, u);
 }
-#define atomic64_fetch_add_unless atomic64_fetch_add_unless
-#endif
 
-#if defined(arch_atomic64_add_unless)
 static __always_inline bool
 atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
-#define atomic64_add_unless atomic64_add_unless
-#endif
 
-#if defined(arch_atomic64_inc_not_zero)
 static __always_inline bool
 atomic64_inc_not_zero(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_not_zero(v);
 }
-#define atomic64_inc_not_zero atomic64_inc_not_zero
-#endif
 
-#if defined(arch_atomic64_inc_unless_negative)
 static __always_inline bool
 atomic64_inc_unless_negative(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_inc_unless_negative(v);
 }
-#define atomic64_inc_unless_negative atomic64_inc_unless_negative
-#endif
 
-#if defined(arch_atomic64_dec_unless_positive)
 static __always_inline bool
 atomic64_dec_unless_positive(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_unless_positive(v);
 }
-#define atomic64_dec_unless_positive atomic64_dec_unless_positive
-#endif
 
-#if defined(arch_atomic64_dec_if_positive)
 static __always_inline s64
 atomic64_dec_if_positive(atomic64_t *v)
 {
 	instrument_atomic_read_write(v, sizeof(*v));
 	return arch_atomic64_dec_if_positive(v);
 }
-#define atomic64_dec_if_positive atomic64_dec_if_positive
-#endif
 
-#if !defined(arch_xchg_relaxed) || defined(arch_xchg)
 #define xchg(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_xchg(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_xchg_acquire)
 #define xchg_acquire(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_xchg_acquire(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_xchg_release)
 #define xchg_release(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_xchg_release(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_xchg_relaxed)
 #define xchg_relaxed(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_xchg_relaxed(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if !defined(arch_cmpxchg_relaxed) || defined(arch_cmpxchg)
 #define cmpxchg(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg_acquire)
 #define cmpxchg_acquire(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg_release)
 #define cmpxchg_release(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg_release(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg_relaxed)
 #define cmpxchg_relaxed(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if !defined(arch_cmpxchg64_relaxed) || defined(arch_cmpxchg64)
 #define cmpxchg64(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg64(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg64_acquire)
 #define cmpxchg64_acquire(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg64_release)
 #define cmpxchg64_release(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_cmpxchg64_relaxed)
 #define cmpxchg64_relaxed(ptr, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
 	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr)); \
 	arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__); \
 })
-#endif
 
-#if !defined(arch_try_cmpxchg_relaxed) || defined(arch_try_cmpxchg)
 #define try_cmpxchg(ptr, oldp, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
@@ -1758,9 +1269,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 	instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \
 	arch_try_cmpxchg(__ai_ptr, __ai_oldp, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_try_cmpxchg_acquire)
 #define try_cmpxchg_acquire(ptr, oldp, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
@@ -1769,9 +1278,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 	instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \
 	arch_try_cmpxchg_acquire(__ai_ptr, __ai_oldp, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_try_cmpxchg_release)
 #define try_cmpxchg_release(ptr, oldp, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
@@ -1780,9 +1287,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 	instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \
 	arch_try_cmpxchg_release(__ai_ptr, __ai_oldp, __VA_ARGS__); \
 })
-#endif
 
-#if defined(arch_try_cmpxchg_relaxed)
 #define try_cmpxchg_relaxed(ptr, oldp, ...) \
 ({ \
 	typeof(ptr) __ai_ptr = (ptr); \
@@ -1791,7 +1296,6 @@ atomic64_dec_if_positive(atomic64_t *v)
 	instrument_atomic_write(__ai_oldp, sizeof(*__ai_oldp)); \
 	arch_try_cmpxchg_relaxed(__ai_ptr, __ai_oldp, __VA_ARGS__); \
 })
-#endif
 
 #define cmpxchg_local(ptr, ...) \
 ({ \
@@ -1830,4 +1334,4 @@ atomic64_dec_if_positive(atomic64_t *v)
 })
 
 #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
-// 4bec382e44520f4d8267e42620054db26a659ea3
+// 1d7c3a25aca5c7fb031c307be4c3d24c7b48fcd5
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index 5766ffcec7c5..b0c45aee19d7 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -41,34 +41,6 @@ gen_params_checks()
 	done
 }
 
-# gen_guard(meta, atomic, pfx, name, sfx, order)
-gen_guard()
-{
-	local meta="$1"; shift
-	local atomic="$1"; shift
-	local pfx="$1"; shift
-	local name="$1"; shift
-	local sfx="$1"; shift
-	local order="$1"; shift
-
-	local atomicname="arch_${atomic}_${pfx}${name}${sfx}${order}"
-
-	local template="$(find_fallback_template "${pfx}" "${name}" "${sfx}" "${order}")"
-
-	# We definitely need a preprocessor symbol for this atomic if it is an
-	# ordering variant, or if there's a generic fallback.
-	if [ ! -z "${order}" ] || [ ! -z "${template}" ]; then
-		printf "defined(${atomicname})"
-		return
-	fi
-
-	# If this is a base variant, but a relaxed variant *may* exist, then we
-	# only have a preprocessor symbol if the relaxed variant isn't defined
-	if meta_has_relaxed "${meta}"; then
-		printf "!defined(${atomicname}_relaxed) || defined(${atomicname})"
-	fi
-}
-
 #gen_proto_order_variant(meta, pfx, name, sfx, order, atomic, int, arg...)
 gen_proto_order_variant()
 {
@@ -82,16 +54,12 @@ gen_proto_order_variant()
 
 	local atomicname="${atomic}_${pfx}${name}${sfx}${order}"
 
-	local guard="$(gen_guard "${meta}" "${atomic}" "${pfx}" "${name}" "${sfx}" "${order}")"
-
 	local ret="$(gen_ret_type "${meta}" "${int}")"
 	local params="$(gen_params "${int}" "${atomic}" "$@")"
 	local checks="$(gen_params_checks "${meta}" "$@")"
 	local args="$(gen_args "$@")"
 	local retstmt="$(gen_ret_stmt "${meta}")"
 
-	[ ! -z "${guard}" ] && printf "#if ${guard}\n"
-
 cat <<EOF
 static __always_inline ${ret}
 ${atomicname}(${params})
@@ -99,11 +67,8 @@ ${atomicname}(${params})
 ${checks}
 	${retstmt}arch_${atomicname}(${args});
 }
-#define ${atomicname} ${atomicname}
 EOF
 
-	[ ! -z "${guard}" ] && printf "#endif\n"
-
 	printf "\n"
 }
 
@@ -139,19 +104,6 @@ EOF
 	fi
 }
 
-gen_optional_xchg()
-{
-	local name="$1"; shift
-	local sfx="$1"; shift
-	local guard="defined(arch_${name}${sfx})"
-
-	[ -z "${sfx}" ] && guard="!defined(arch_${name}_relaxed) || defined(arch_${name})"
-
-	printf "#if ${guard}\n"
-	gen_xchg "${name}${sfx}" ""
-	printf "#endif\n\n"
-}
-
 cat << EOF
 // SPDX-License-Identifier: GPL-2.0
 
@@ -188,7 +140,8 @@ done
 
 for xchg in "xchg" "cmpxchg" "cmpxchg64" "try_cmpxchg"; do
 	for order in "" "_acquire" "_release" "_relaxed"; do
-		gen_optional_xchg "${xchg}" "${order}"
+		gen_xchg "${xchg}${order}" ""
+		printf "\n"
 	done
 done
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 16/33] locking/atomic: csky: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 16/33] locking/atomic: csky: " Mark Rutland
@ 2021-05-10 13:27   ` Guo Ren
  0 siblings, 0 replies; 57+ messages in thread
From: Guo Ren @ 2021-05-10 13:27 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, boqun.feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker, David Miller,
	Vincent Chen, Helge Deller, Geert Uytterhoeven, Greentime Hu,
	Ivan Kokshaysky, James E.J. Bottomley, Max Filippov, Jonas Bonn,
	Ley Foon Tan, Russell King, Matt Turner, Michal Simek,
	Michael Ellerman, Nick Hu, Palmer Dabbelt, Paul Mackerras,
	Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Mon, May 10, 2021 at 5:41 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates csky to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Guo Ren <guoren@kernel.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/csky/Kconfig               | 1 +
>  arch/csky/include/asm/cmpxchg.h | 8 ++++----
>  2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/arch/csky/Kconfig b/arch/csky/Kconfig
> index 8de5b987edb9..3521f14bcd96 100644
> --- a/arch/csky/Kconfig
> +++ b/arch/csky/Kconfig
> @@ -2,6 +2,7 @@
>  config CSKY
>         def_bool y
>         select ARCH_32BIT_OFF_T
> +       select ARCH_ATOMIC
>         select ARCH_HAS_DMA_PREP_COHERENT
>         select ARCH_HAS_GCOV_PROFILE_ALL
>         select ARCH_HAS_SYNC_DMA_FOR_CPU
> diff --git a/arch/csky/include/asm/cmpxchg.h b/arch/csky/include/asm/cmpxchg.h
> index dabc8e46ce7b..d1bef11f8dc9 100644
> --- a/arch/csky/include/asm/cmpxchg.h
> +++ b/arch/csky/include/asm/cmpxchg.h
> @@ -31,7 +31,7 @@ extern void __bad_xchg(void);
>         __ret;                                                  \
>  })
>
> -#define xchg_relaxed(ptr, x) \
> +#define arch_xchg_relaxed(ptr, x) \
>                 (__xchg_relaxed((x), (ptr), sizeof(*(ptr))))
>
>  #define __cmpxchg_relaxed(ptr, old, new, size)                 \
> @@ -61,14 +61,14 @@ extern void __bad_xchg(void);
>         __ret;                                                  \
>  })
>
> -#define cmpxchg_relaxed(ptr, o, n) \
> +#define arch_cmpxchg_relaxed(ptr, o, n) \
>         (__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
>
> -#define cmpxchg(ptr, o, n)                                     \
> +#define arch_cmpxchg(ptr, o, n)                                \
>  ({                                                             \
>         __typeof__(*(ptr)) __ret;                               \
>         __smp_release_fence();                                  \
> -       __ret = cmpxchg_relaxed(ptr, o, n);                     \
> +       __ret = arch_cmpxchg_relaxed(ptr, o, n);                \
>         __smp_acquire_fence();                                  \
>         __ret;                                                  \
>  })
> --
> 2.11.0
>
Acked-by: Guo Ren <guoren@kernel.org>

-- 
Best Regards
 Guo Ren

ML: https://lore.kernel.org/linux-csky/

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 31/33] locking/atomic: xtensa: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 31/33] locking/atomic: xtensa: " Mark Rutland
@ 2021-05-10 14:19   ` Max Filippov
  0 siblings, 0 replies; 57+ messages in thread
From: Max Filippov @ 2021-05-10 14:19 UTC (permalink / raw)
  To: Mark Rutland
  Cc: LKML, Will Deacon, boqun.feng, Peter Zijlstra, aou,
	Arnd Bergmann, bcain, Benjamin Herrenschmidt, Chris Zankel,
	Rich Felker, David S. Miller, deanbo422, Helge Deller,
	Geert Uytterhoeven, Greentime Hu, Guo Ren, Ivan Kokshaysky,
	James E.J. Bottomley, Jonas Bonn, ley.foon.tan, Russell King,
	Matt Turner, Michal Simek, Michael Ellerman, nickhu,
	Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Richard Henderson,
	Stafford Horne, Stefan Kristiansson, Thomas Bogendoerfer,
	Vineet Gupta, Yoshinori Sato

On Mon, May 10, 2021 at 2:44 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates xtensa to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Chris Zankel <chris@zankel.net>
> Cc: Max Filippov <jcmvbkbc@gmail.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/xtensa/Kconfig               |  1 +
>  arch/xtensa/include/asm/atomic.h  | 26 +++++++++++++-------------
>  arch/xtensa/include/asm/cmpxchg.h | 10 +++++-----
>  3 files changed, 19 insertions(+), 18 deletions(-)

Acked-by: Max Filippov <jcmvbkbc@gmail.com>

-- 
Thanks.
-- Max

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC
  2021-05-10  9:37 [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Mark Rutland
                   ` (32 preceding siblings ...)
  2021-05-10  9:37 ` [PATCH 33/33] locking/atomics: atomic-instrumented: simplify ifdeffery Mark Rutland
@ 2021-05-10 14:56 ` Arnd Bergmann
  2021-05-11  9:27   ` Mark Rutland
  33 siblings, 1 reply; 57+ messages in thread
From: Arnd Bergmann @ 2021-05-10 14:56 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Brian Cain, Benjamin Herrenschmidt,
	Chris Zankel, Rich Felker, David Miller, Vincent Chen,
	Helge Deller, Geert Uytterhoeven, Greentime Hu, Guo Ren,
	Ivan Kokshaysky, James Bottomley, Max Filippov, Jonas Bonn,
	Ley Foon Tan, Russell King - ARM Linux, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Mon, May 10, 2021 at 11:39 AM Mark Rutland <mark.rutland@arm.com> wrote:
>
> This series (based on v5.13-rc1) converts all architectures to
> ARCH_ATOMIC. This will allow the use of instrumented atomics on all
> architectures (e.g. for KASAN and similar), and simplifies the core
> atomic code (which should allow for easier rework of the fallbacks and
> other bits in future).
>
> The series is split into three parts:
>
> 1) Some preparatory work is done to prepare architectures and common
>    code for the conversion. In this phase h8300 and microblaze are
>    converted to use the asm-generic atomics exclusively, and the
>    asm-generic implementations are made to function with or without
>    ARCH_ATOMIC.
>
> 2) Architectures are converted one-by-one to use the ARCH_ATOMIC
>    interface. I've converted each architecture with its own patch (even
>    where the conversion is trivial) to make review and bisection easier.
>
> 3) The code handling !ARCH_ATOMIC is removed.
>
> Note: I've generated the patches with:
>
>   git format-patch -C -M -D
>
> ... so the preimage of include/linux/atomic-fallback.h is not included
> in the diff when it is deleted.
>
> The series can also be found in my atomics/arch-atomic branch on
> kernel.org:
>
>   https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/arch-atomic
>   git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/arch-atomic

Nice work!

I had recently experimented with cleanups around cmpxchg(), adding
fixed-length versions of it which can be used to implement the atomics
with less bloated preprocessor output. My patches will clearly conflict
with your series, but it seems that we should do yours first in any
case.
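
Roughly what I have in mind is something like the sketch below; the
names are made up for illustration (there is no cmpxchg32() helper in
the tree today), so treat it as the idea rather than the actual
patches:

/*
 * Hypothetical sketch: a fixed-width helper avoids expanding the
 * sizeof() switch of the type-generic cmpxchg() macro at every call
 * site, which is where much of the preprocessor bloat comes from.
 */
static __always_inline u32 cmpxchg32(u32 *ptr, u32 old, u32 new)
{
	return arch_cmpxchg(ptr, old, new);
}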

I'm happy to put this into the asm-generic tree once you have
collected enough Acks.

       Arnd

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 14/33] locking/atomic: arc: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 14/33] locking/atomic: arc: " Mark Rutland
@ 2021-05-10 14:57   ` Mark Rutland
  2021-05-10 19:48     ` Vineet Gupta
  0 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-10 14:57 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, palmer,
	paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

On Mon, May 10, 2021 at 10:37:34AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
> 
> As a step towards that, this patch migrates arc to ARCH_ATOMIC. The arch
> code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code
> wraps these with optional instrumentation to provide the regular
> functions.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Vineet Gupta <vgupta@synopsys.com>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arc/Kconfig               |  1 +
>  arch/arc/include/asm/atomic.h  | 56 +++++++++++++++++++++---------------------
>  arch/arc/include/asm/cmpxchg.h |  8 +++---
>  3 files changed, 33 insertions(+), 32 deletions(-)

I evidently missed atomic_xchg() in asm/cmpxchg.h, so the fixup below is
needed. I've pushed that to my branch on kernel.org for now.
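
For reference, the wrapper generated into atomic-instrumented.h now
calls the arch_-prefixed op directly; it is roughly of this shape
(sketch of the generated code, following the same pattern as the
atomic64_* wrappers in the instrumented header):

static __always_inline int
atomic_xchg(atomic_t *v, int i)
{
	instrument_atomic_read_write(v, sizeof(*v));
	return arch_atomic_xchg(v, i);
}

... so arc needs to provide arch_atomic_xchg() and arch_xchg() rather
than the bare names.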

Mark.

---->8----
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index 87666980b78a..d1781bdf6527 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -153,6 +153,6 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
  *         can't be clobbered by others. Thus no serialization required when
  *         atomic_xchg is involved.
  */
-#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
+#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
 
 #endif

^ permalink raw reply related	[flat|nested] 57+ messages in thread

* Re: [PATCH 14/33] locking/atomic: arc: move to ARCH_ATOMIC
  2021-05-10 14:57   ` Mark Rutland
@ 2021-05-10 19:48     ` Vineet Gupta
  0 siblings, 0 replies; 57+ messages in thread
From: Vineet Gupta @ 2021-05-10 19:48 UTC (permalink / raw)
  To: Mark Rutland, linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, palmer,
	paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, ysato

On 5/10/21 7:57 AM, Mark Rutland wrote:
> On Mon, May 10, 2021 at 10:37:34AM +0100, Mark Rutland wrote:
>> We'd like all architectures to convert to ARCH_ATOMIC, as once all
>> architectures are converted it will be possible to make significant
>> cleanups to the atomics headers, and this will make it much easier to
>> generically enable atomic functionality (e.g. debug logic in the
>> instrumented wrappers).
>>
>> As a step towards that, this patch migrates arc to ARCH_ATOMIC. The arch
>> code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code
>> wraps these with optional instrumentation to provide the regular
>> functions.
>>
>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>> Cc: Boqun Feng <boqun.feng@gmail.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Vineet Gupta <vgupta@synopsys.com>
>> Cc: Will Deacon <will@kernel.org>

Acked-by: Vineet Gupta <vgupta@synopsys.com>    #arch/arc

FWIW, I do have a bunch of ARC-specific cleanups and updates for
atomics/bitops/cmpxchg from my arc64 work, which I'm planning to send
out soon. That will be independent of this changeset for now, but we'll
have this one merged first.

Thx for doing this.
-Vineet

>> ---
>>   arch/arc/Kconfig               |  1 +
>>   arch/arc/include/asm/atomic.h  | 56 +++++++++++++++++++++---------------------
>>   arch/arc/include/asm/cmpxchg.h |  8 +++---
>>   3 files changed, 33 insertions(+), 32 deletions(-)
> I evidently missed atomic_xchg() in asm/cmpxchg.h, so the fixup below is
> needed. I've pushed that to my branch on kernel.org for now.
>
> Mark.
>
> ---->8----
> diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
> index 87666980b78a..d1781bdf6527 100644
> --- a/arch/arc/include/asm/cmpxchg.h
> +++ b/arch/arc/include/asm/cmpxchg.h
> @@ -153,6 +153,6 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
>    *         can't be clobbered by others. Thus no serialization required when
>    *         atomic_xchg is involved.
>    */
> -#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
> +#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
>   
>   #endif


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 27/33] locking/atomic: powerpc: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 27/33] locking/atomic: powerpc: " Mark Rutland
@ 2021-05-11  9:16   ` Mark Rutland
  2021-05-11  9:57     ` Mark Rutland
  0 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-11  9:16 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, palmer,
	paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

On Mon, May 10, 2021 at 10:37:47AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
> 
> As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/powerpc/Kconfig               |   1 +
>  arch/powerpc/include/asm/atomic.h  | 140 +++++++++++++++++++------------------
>  arch/powerpc/include/asm/cmpxchg.h |  30 ++++----
>  3 files changed, 89 insertions(+), 82 deletions(-)

The kbuild test robot spotted a couple of bits I'd got wrong; I've noted
those below (and both are now fixed in my kernel.org branch).

>  static __always_inline bool
> -atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> +arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)

Since this isn't part of the core atomic API, and is used directly by
powerpc's spinlock implementation, this should have stayed as-is (or we
should use the `arch_` prefix consistently and update the spinlock code).

I've dropped the `arch_` prefix for now.
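
(For context, the sole user is powerpc's qspinlock fast path, which
looks roughly like the below -- paraphrased rather than quoted
verbatim:)

static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
	u32 val = 0;

	if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
		return;

	queued_spin_lock_slowpath(lock, val);
}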

[...]

>  /**
>   * atomic64_fetch_add_unless - add unless the number is a given value
> @@ -518,7 +524,7 @@ static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
>   * Atomically adds @a to @v, so long as it was not @u.
>   * Returns the old value of @v.
>   */
> -static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> +static __inline__ s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  {
>  	s64 t;
>  
> @@ -539,7 +545,7 @@ static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  
>  	return t;
>  }
> -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> +#define arch_atomic64_fetch_add_unless atomic64_fetch_add_unless

Looks like I forgot the `arch_` prefix on the right hand side here; this
should have been:

#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
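
The self-#define is what the ifdeffery in the fallback header keys off
to tell whether the arch provides its own implementation; the generated
fallback is roughly of this shape (a sketch, not the literal header):

#ifndef arch_atomic64_fetch_add_unless
static __always_inline s64
arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
{
	s64 c = arch_atomic64_read(v);

	do {
		if (unlikely(c == u))
			break;
	} while (!arch_atomic64_try_cmpxchg(v, &c, c + a));

	return c;
}
#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
#endif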

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC
  2021-05-10 14:56 ` [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC Arnd Bergmann
@ 2021-05-11  9:27   ` Mark Rutland
  2021-05-12  7:28     ` Peter Zijlstra
  0 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-11  9:27 UTC (permalink / raw)
  To: Arnd Bergmann, Peter Zijlstra
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng, Albert Ou,
	Brian Cain, Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David Miller, Vincent Chen, Helge Deller, Geert Uytterhoeven,
	Greentime Hu, Guo Ren, Ivan Kokshaysky, James Bottomley,
	Max Filippov, Jonas Bonn, Ley Foon Tan, Russell King - ARM Linux,
	Matt Turner, Michal Simek, Michael Ellerman, Nick Hu,
	Palmer Dabbelt, Paul Mackerras, Paul Walmsley, Richard Henderson,
	Stafford Horne, Stefan Kristiansson, Thomas Bogendoerfer,
	Vineet Gupta, Yoshinori Sato

On Mon, May 10, 2021 at 04:56:03PM +0200, Arnd Bergmann wrote:
> On Mon, May 10, 2021 at 11:39 AM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > This series (based on v5.13-rc1) converts all architectures to
> > ARCH_ATOMIC. This will allow the use of instrumented atomics on all
> > architectures (e.g. for KASAN and similar), and simplifies the core
> > atomic code (which should allow for easier rework of the fallbacks and
> > other bits in future).
> >
> > The series is split into three parts:
> >
> > 1) Some preparatory work is done to prepare architectures and common
> >    code for the conversion. In this phase h8300 and microblaze are
> >    converted to use the asm-generic atomics exclusively, and the
> >    asm-generic implementations are made to function with or without
> >    ARCH_ATOMIC.
> >
> > 2) Architectures are converted one-by-one to use the ARCH_ATOMIC
> >    interface. I've converted each architecture with its own patch (even
> >    where the conversion is trivial) to make review and bisection easier.
> >
> > 3) The code handling !ARCH_ATOMIC is removed.
> >
> > Note: I've generated the patches with:
> >
> >   git format-patch -C -M -D
> >
> > ... so the preimage of include/linux/atomic-fallback.h is not included
> > in the diff when it is deleted.
> >
> > The series can also be found in my atomics/arch-atomic branch on
> > kernel.org:
> >
> >   https://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git/log/?h=atomics/arch-atomic
> >   git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/arch-atomic
> 
> Nice work!
> 
> I had recently tried experimented with cleanups around cmpxchg(),
> adding fixed-length
> versions of that which can be used to implement the atomics with less bloated
> preprocessor output. My patches will clearly conflict with your
> series, but it seems
> that we should do yours first in any case.

Ah; sorry for stepping on your toes with this. Simplifying the
xchg/cmpxchg glue sounds like a nice cleanup too!

> I'm happy to put this into the asm-generic tree once you have
> collected enough Acks.

Great! Going via asm-generic sounds good to me. The kbuild test robot
has found a few issues, so I'll probably have a v2 with those fixed and
some acks accumulated.

Peter, does that make sense to you, or did you have reasons to want this
to go via tip?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 20/33] locking/atomic: m68k: " Mark Rutland
@ 2021-05-11  9:27   ` Geert Uytterhoeven
  2021-05-11 10:00     ` Mark Rutland
  0 siblings, 1 reply; 57+ messages in thread
From: Geert Uytterhoeven @ 2021-05-11  9:27 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

Hi Mark,

On Mon, May 10, 2021 at 11:42 AM Mark Rutland <mark.rutland@arm.com> wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

I even gave this series a spin on ARAnyM and QEMU, and everything
seems to be fine.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 11/33] locking/atomic: cmpxchg: make `generic` a prefix
  2021-05-10  9:37 ` [PATCH 11/33] locking/atomic: cmpxchg: make `generic` a prefix Mark Rutland
@ 2021-05-11  9:29   ` Geert Uytterhoeven
  0 siblings, 0 replies; 57+ messages in thread
From: Geert Uytterhoeven @ 2021-05-11  9:29 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Mon, May 10, 2021 at 11:40 AM Mark Rutland <mark.rutland@arm.com> wrote:
> The asm-generic implementations of cmpxchg_local() and cmpxchg64_local()
> use a `_generic` suffix to distinguish themselves from arch code or
> wrappers used elsewhere.
>
> Subsequent patches will add ARCH_ATOMIC support to these
> implementations, and will distinguish more functions with a `generic`
> portion. To align with how ARCH_ATOMIC uses an `arch_` prefix, it would
> be helpful to use a `generic_` prefix rather than a `_generic` suffix.
>
> In preparation for this, this patch renames the existing functions to
> make `generic` a prefix rather than a suffix. There should be no
> functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

>  arch/m68k/include/asm/cmpxchg.h     | 2 +-

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 32/33] locking/atomic: delete !ARCH_ATOMIC remnants
  2021-05-10  9:37 ` [PATCH 32/33] locking/atomic: delete !ARCH_ATOMIC remnants Mark Rutland
@ 2021-05-11  9:30   ` Geert Uytterhoeven
  0 siblings, 0 replies; 57+ messages in thread
From: Geert Uytterhoeven @ 2021-05-11  9:30 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Mon, May 10, 2021 at 11:44 AM Mark Rutland <mark.rutland@arm.com> wrote:
> Now that all architectures implement ARCH_ATOMIC, we can make it
> mandatory, removing the Kconfig symbol and logic for !ARCH_ATOMIC.
>
> There should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>

>  arch/m68k/Kconfig               |    1 -

FWIW
Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 27/33] locking/atomic: powerpc: move to ARCH_ATOMIC
  2021-05-11  9:16   ` Mark Rutland
@ 2021-05-11  9:57     ` Mark Rutland
  0 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-11  9:57 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, palmer,
	paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

On Tue, May 11, 2021 at 10:16:21AM +0100, Mark Rutland wrote:
> On Mon, May 10, 2021 at 10:37:47AM +0100, Mark Rutland wrote:
> > We'd like all architectures to convert to ARCH_ATOMIC, as once all
> > architectures are converted it will be possible to make significant
> > cleanups to the atomics headers, and this will make it much easier to
> > generically enable atomic functionality (e.g. debug logic in the
> > instrumented wrappers).
> > 
> > As a step towards that, this patch migrates powerpc to ARCH_ATOMIC. The
> > arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> > code wraps these with optional instrumentation to provide the regular
> > functions.
> > 
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Michael Ellerman <mpe@ellerman.id.au>
> > Cc: Paul Mackerras <paulus@samba.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/powerpc/Kconfig               |   1 +
> >  arch/powerpc/include/asm/atomic.h  | 140 +++++++++++++++++++------------------
> >  arch/powerpc/include/asm/cmpxchg.h |  30 ++++----
> >  3 files changed, 89 insertions(+), 82 deletions(-)
> 
> The kbuild test robot spotted a couple of bits I'd got wrong; I've noted
> those below (and both are now fixed in my kernel.org branch).
> 
> >  static __always_inline bool
> > -atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> > +arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
> 
> Since this isn't part of the core atomic API, and is used directly by
> powerpc's spinlock implementation, this should have stayed as-is (or we
> should use the `arch_` prefix consistently and update the spinlock code).
> 
> I've dropped the `arch_` prefix for now.

On second thought (and having hit a similar issue on m68k), I've added
the `arch_` prefix and updated the caller, to more clearly separate the
common atomic API and the arch-specific bits.
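
As a rough sketch of what that ends up looking like (the caller body here
is recalled from the powerpc qspinlock fast path, so treat it as
illustrative rather than verbatim):

  /* arch-private helper: keeps the arch_ prefix */
  static __always_inline bool
  arch_atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new);

  /* spinlock code: the caller follows the rename */
  static __always_inline void queued_spin_lock(struct qspinlock *lock)
  {
          u32 val = 0;

          if (likely(arch_atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
                  return;

          queued_spin_lock_slowpath(lock, val);
  }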

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-11  9:27   ` Geert Uytterhoeven
@ 2021-05-11 10:00     ` Mark Rutland
  2021-05-11 10:27       ` Geert Uytterhoeven
  0 siblings, 1 reply; 57+ messages in thread
From: Mark Rutland @ 2021-05-11 10:00 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Tue, May 11, 2021 at 11:27:45AM +0200, Geert Uytterhoeven wrote:
> Hi Mark,

Hi Geert,

> On Mon, May 10, 2021 at 11:42 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > We'd like all architectures to convert to ARCH_ATOMIC, as once all
> > architectures are converted it will be possible to make significant
> > cleanups to the atomics headers, and this will make it much easier to
> > generically enable atomic functionality (e.g. debug logic in the
> > instrumented wrappers).
> >
> > As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
> > arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> > code wraps these with optional instrumentation to provide the regular
> > functions.
> >
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> 
> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
> 
> I even gave this series a spin on ARAnyM and QEMU, and everything
> seems to be fine.

Thanks! That's much appreciated; I was able to build-test, but didn't
have a suitable setup for boot testing.

The Kbuild test robot noticed I broke one configuration by renaming 
atomic_dec_and_test_lt(), since it's not part of the common atomic API,
and is used in get_mmu_context() in <asm/mmu_context.h>.

On the principle of keeping the common atomic API separate from
arch-specific bits, I've kept the `arch_` prefix and updated the caller.
Does that sound good to you?
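
For concreteness, that means something like the below; the loop body is
recalled from the ColdFire code, so take it as a sketch rather than the
exact diff:

  /* ColdFire-only helper: keeps the arch_ prefix */
  static inline int arch_atomic_dec_and_test_lt(atomic_t *v);

  /* get_mmu_context() in <asm/mmu_context.h> is updated to match, while
   * calls into the common API (atomic_inc() etc.) stay unprefixed.
   */
  while (arch_atomic_dec_and_test_lt(&nr_free_contexts)) {
          atomic_inc(&nr_free_contexts);
          steal_context();
  }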

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-11 10:00     ` Mark Rutland
@ 2021-05-11 10:27       ` Geert Uytterhoeven
  2021-05-12 14:15         ` Greg Ungerer
  0 siblings, 1 reply; 57+ messages in thread
From: Geert Uytterhoeven @ 2021-05-11 10:27 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato, Greg Ungerer

Hi Mark,

CC Greg

On Tue, May 11, 2021 at 12:00 PM Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, May 11, 2021 at 11:27:45AM +0200, Geert Uytterhoeven wrote:
> > On Mon, May 10, 2021 at 11:42 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > > We'd like all architectures to convert to ARCH_ATOMIC, as once all
> > > architectures are converted it will be possible to make significant
> > > cleanups to the atomics headers, and this will make it much easier to
> > > generically enable atomic functionality (e.g. debug logic in the
> > > instrumented wrappers).
> > >
> > > As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
> > > arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> > > code wraps these with optional instrumentation to provide the regular
> > > functions.
> > >
> > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> >
> > Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
> >
> > I even gave this series a spin on ARAnyM and QEMU, and everything
> > seems to be fine.
>
> Thanks! That's much appreciated; I was able to build-test, but didn't
> have a suitable setup for boot testing.
>
> The Kbuild test robot noticed I broke one configuration by renaming
> atomic_dec_and_test_lt(), since it's not part of the common atomic API,
> and is used in get_mmu_context() in <asm/mmu_context.h>.

Ah, that's in the part for Coldfire, so I (and my builds) didn't notice.

> On the principle of keeping the common atomic API separate from
> arch-specific bits, I've kept the `arch_` prefix and updated the caller.
> Does that sound good to you?

That's fine for me, but as this is Coldfire, I'd like to defer to Greg.

Gr{oetje,eeting}s,

                        Geert

-- 
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC
  2021-05-11  9:27   ` Mark Rutland
@ 2021-05-12  7:28     ` Peter Zijlstra
  2021-05-12  7:49       ` Arnd Bergmann
  0 siblings, 1 reply; 57+ messages in thread
From: Peter Zijlstra @ 2021-05-12  7:28 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Arnd Bergmann, Linux Kernel Mailing List, Will Deacon,
	Boqun Feng, Albert Ou, Brian Cain, Benjamin Herrenschmidt,
	Chris Zankel, Rich Felker, David Miller, Vincent Chen,
	Helge Deller, Geert Uytterhoeven, Greentime Hu, Guo Ren,
	Ivan Kokshaysky, James Bottomley, Max Filippov, Jonas Bonn,
	Ley Foon Tan, Russell King - ARM Linux, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Tue, May 11, 2021 at 10:27:01AM +0100, Mark Rutland wrote:
> > I'm happy to put this into the asm-generic tree once you have
> > collected enough Acks.
> 
> Great! Going via asm-generic sounds good to me. The kbuild test robot
> has found a few issues, so I'll probably have a v2 with those fixed and
> some acks accumulated.
> 
> Peter, does that make sense to you, or did you have reasons to want this
> to go via tip?

I was expecting this to go through tip/locking/core, and I think I
briefly talked to arnd about that on IRC, but whatever works I suppose.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 00/33] locking/atomic: convert all architectures to ARCH_ATOMIC
  2021-05-12  7:28     ` Peter Zijlstra
@ 2021-05-12  7:49       ` Arnd Bergmann
  0 siblings, 0 replies; 57+ messages in thread
From: Arnd Bergmann @ 2021-05-12  7:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Albert Ou, Brian Cain, Benjamin Herrenschmidt, Chris Zankel,
	Rich Felker, David Miller, Vincent Chen, Helge Deller,
	Geert Uytterhoeven, Greentime Hu, Guo Ren, Ivan Kokshaysky,
	James Bottomley, Max Filippov, Jonas Bonn, Ley Foon Tan,
	Russell King - ARM Linux, Matt Turner, Michal Simek,
	Michael Ellerman, Nick Hu, Palmer Dabbelt, Paul Mackerras,
	Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Wed, May 12, 2021 at 9:28 AM Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, May 11, 2021 at 10:27:01AM +0100, Mark Rutland wrote:
> > > I'm happy to put this into the asm-generic tree once you have
> > > collected enough Acks.
> >
> > Great! Going via asm-generic sounds good to me. The kbuild test robot
> > has found a few issues, so I'll probably have a v2 with those fixed and
> > some acks accumulated.
> >
> > Peter, does that make sense to you, or did you have reasons to want this
> > to go via tip?
>
> I was expecting this to go through tip/locking/core, and I think I
> briefly talked to arnd about that on IRC, but whatever works I suppose.

Yes, that's fine with me if it helps you.

      Arnd

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-11 10:27       ` Geert Uytterhoeven
@ 2021-05-12 14:15         ` Greg Ungerer
  2021-05-12 16:33           ` Mark Rutland
  0 siblings, 1 reply; 57+ messages in thread
From: Greg Ungerer @ 2021-05-12 14:15 UTC (permalink / raw)
  To: Geert Uytterhoeven, Mark Rutland
  Cc: Linux Kernel Mailing List, Will Deacon, Boqun Feng,
	Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

Hi Mark, Geert,

On 11/5/21 8:27 pm, Geert Uytterhoeven wrote:
> Hi Mark,
> 
> CC Greg
> 
> On Tue, May 11, 2021 at 12:00 PM Mark Rutland <mark.rutland@arm.com> wrote:
>> On Tue, May 11, 2021 at 11:27:45AM +0200, Geert Uytterhoeven wrote:
>>> On Mon, May 10, 2021 at 11:42 AM Mark Rutland <mark.rutland@arm.com> wrote:
>>>> We'd like all architectures to convert to ARCH_ATOMIC, as once all
>>>> architectures are converted it will be possible to make significant
>>>> cleanups to the atomics headers, and this will make it much easier to
>>>> generically enable atomic functionality (e.g. debug logic in the
>>>> instrumented wrappers).
>>>>
>>>> As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
>>>> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
>>>> code wraps these with optional instrumentation to provide the regular
>>>> functions.
>>>>
>>>> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
>>>
>>> Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
>>> Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
>>>
>>> I even gave this series a spin on ARAnyM and QEMU, and everything
>>> seems to be fine.
>>
>> Thanks! That's much appreciated; I was able to build-test, but didn't
>> have a suitable setup for boot testing.
>>
>> The Kbuild test robot noticed I broke one configuration by renaming
>> atomic_dec_and_test_lt(), since it's not part of the common atomic API,
>> and is used in get_mmu_context() in <asm/mmu_context.h>.
> 
> Ah, that's in the part for Coldfire, so I (and my builds) didn't notice.
> 
>> On the principle of keeping the common atomic API separate from
>> arch-specific bits, I've kept the `arch_` prefix and updated the caller.
>> Does that sound good to you?
> 
> That's fine for me, but as this is Coldfire, I'd like to defer to Greg.

Looks fine to me too.

Acked-by: Greg Ungerer <gerg@linux-m68k.org>

Regards
Greg


^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 20/33] locking/atomic: m68k: move to ARCH_ATOMIC
  2021-05-12 14:15         ` Greg Ungerer
@ 2021-05-12 16:33           ` Mark Rutland
  0 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-12 16:33 UTC (permalink / raw)
  To: Greg Ungerer
  Cc: Geert Uytterhoeven, Linux Kernel Mailing List, Will Deacon,
	Boqun Feng, Peter Zijlstra, Albert Ou, Arnd Bergmann, Brian Cain,
	Benjamin Herrenschmidt, Chris Zankel, Rich Felker,
	David S. Miller, Vincent Chen, Helge Deller, Greentime Hu,
	Guo Ren, Ivan Kokshaysky, James Bottomley, Max Filippov,
	Jonas Bonn, Ley Foon Tan, Russell King, Matt Turner,
	Michal Simek, Michael Ellerman, Nick Hu, Palmer Dabbelt,
	Paul Mackerras, Paul Walmsley, Richard Henderson, Stafford Horne,
	Stefan Kristiansson, Thomas Bogendoerfer, Vineet Gupta,
	Yoshinori Sato

On Thu, May 13, 2021 at 12:15:58AM +1000, Greg Ungerer wrote:
> On 11/5/21 8:27 pm, Geert Uytterhoeven wrote:
> > On Tue, May 11, 2021 at 12:00 PM Mark Rutland <mark.rutland@arm.com> wrote:
> > > On Tue, May 11, 2021 at 11:27:45AM +0200, Geert Uytterhoeven wrote:
> > > > On Mon, May 10, 2021 at 11:42 AM Mark Rutland <mark.rutland@arm.com> wrote:
> > > > > We'd like all architectures to convert to ARCH_ATOMIC, as once all
> > > > > architectures are converted it will be possible to make significant
> > > > > cleanups to the atomics headers, and this will make it much easier to
> > > > > generically enable atomic functionality (e.g. debug logic in the
> > > > > instrumented wrappers).
> > > > > 
> > > > > As a step towards that, this patch migrates m68k to ARCH_ATOMIC. The
> > > > > arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> > > > > code wraps these with optional instrumentation to provide the regular
> > > > > functions.
> > > > > 
> > > > > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > > > 
> > > > Reviewed-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > > Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > > > 
> > > > I even gave this series a spin on ARAnyM and QEMU, and everything
> > > > seems to be fine.
> > > 
> > > Thanks! That's much appreciated; I was able to build-test, but didn't
> > > have a suitable setup for boot testing.
> > > 
> > > The Kbuild test robot noticed I broke one configuration by renaming
> > > atomic_dec_and_test_lt(), since it's not part of the common atomic API,
> > > and is used in get_mmu_context() in <asm/mmu_context.h>.
> > 
> > Ah, that's in the part for Coldfire, so I (and my builds) didn't notice.
> > 
> > > On the principle of keeping the common atomic API separate from
> > > arch-specific bits, I've kept the `arch_` prefix and updated the caller.
> > > Does that sound good to you?
> > 
> > That's fine for me, but as this is Coldfire, I'd like to defer to Greg.
> 
> Looks fine to me too.
> 
> Acked-by: Greg Ungerer <gerg@linux-m68k.org>

Thanks!

Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 15/33] locking/atomic: arm: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 15/33] locking/atomic: arm: " Mark Rutland
@ 2021-05-13 10:49   ` Mark Rutland
  0 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-13 10:49 UTC (permalink / raw)
  To: linux-kernel, will, boqun.feng, peterz
  Cc: aou, arnd, bcain, benh, chris, dalias, davem, deanbo422, deller,
	geert, green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, palmer,
	paulus, paul.walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

On Mon, May 10, 2021 at 10:37:35AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
> 
> As a step towards that, this patch migrates arm to ARCH_ATOMIC. The arch
> code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common code
> wraps these with optional instrumentation to provide the regular
> functions.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Russell King <linux@armlinux.org.uk>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/arm/Kconfig               |  1 +
>  arch/arm/include/asm/atomic.h  | 96 +++++++++++++++++++++---------------------
>  arch/arm/include/asm/cmpxchg.h | 16 +++----
>  3 files changed, 57 insertions(+), 56 deletions(-)

The Kbuild test robot pointed out I'd forgotten to update sync_cmpxchg()
in <asm/sync_bitops.h>, so I've fixed that up in my kernel.org branch
for now.
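
For reference, the fixup should just be the sync_cmpxchg() define in
<asm/sync_bitops.h> following the rename; roughly (a sketch, not the exact
hunk):

  -#define sync_cmpxchg			cmpxchg
  +#define sync_cmpxchg			arch_cmpxchg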

Mark.

> 
> diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
> index 24804f11302d..b7334a6643b9 100644
> --- a/arch/arm/Kconfig
> +++ b/arch/arm/Kconfig
> @@ -3,6 +3,7 @@ config ARM
>  	bool
>  	default y
>  	select ARCH_32BIT_OFF_T
> +	select ARCH_ATOMIC
>  	select ARCH_HAS_BINFMT_FLAT
>  	select ARCH_HAS_DEBUG_VIRTUAL if MMU
>  	select ARCH_HAS_DMA_WRITE_COMBINE if !ARM_DMA_MEM_BUFFERABLE
> diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
> index 455eb19a5ac1..db8512d9a918 100644
> --- a/arch/arm/include/asm/atomic.h
> +++ b/arch/arm/include/asm/atomic.h
> @@ -22,8 +22,8 @@
>   * strex/ldrex monitor on some implementations. The reason we can use it for
>   * atomic_set() is the clrex or dummy strex done on every exception return.
>   */
> -#define atomic_read(v)	READ_ONCE((v)->counter)
> -#define atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
> +#define arch_atomic_read(v)	READ_ONCE((v)->counter)
> +#define arch_atomic_set(v,i)	WRITE_ONCE(((v)->counter), (i))
>  
>  #if __LINUX_ARM_ARCH__ >= 6
>  
> @@ -34,7 +34,7 @@
>   */
>  
>  #define ATOMIC_OP(op, c_op, asm_op)					\
> -static inline void atomic_##op(int i, atomic_t *v)			\
> +static inline void arch_atomic_##op(int i, atomic_t *v)			\
>  {									\
>  	unsigned long tmp;						\
>  	int result;							\
> @@ -52,7 +52,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
>  }									\
>  
>  #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
> -static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
> +static inline int arch_atomic_##op##_return_relaxed(int i, atomic_t *v)	\
>  {									\
>  	unsigned long tmp;						\
>  	int result;							\
> @@ -73,7 +73,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v)	\
>  }
>  
>  #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
> -static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
> +static inline int arch_atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
>  {									\
>  	unsigned long tmp;						\
>  	int result, val;						\
> @@ -93,17 +93,17 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
>  	return result;							\
>  }
>  
> -#define atomic_add_return_relaxed	atomic_add_return_relaxed
> -#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
> -#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
> -#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
> +#define arch_atomic_add_return_relaxed		arch_atomic_add_return_relaxed
> +#define arch_atomic_sub_return_relaxed		arch_atomic_sub_return_relaxed
> +#define arch_atomic_fetch_add_relaxed		arch_atomic_fetch_add_relaxed
> +#define arch_atomic_fetch_sub_relaxed		arch_atomic_fetch_sub_relaxed
>  
> -#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
> -#define atomic_fetch_andnot_relaxed	atomic_fetch_andnot_relaxed
> -#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
> -#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
> +#define arch_atomic_fetch_and_relaxed		arch_atomic_fetch_and_relaxed
> +#define arch_atomic_fetch_andnot_relaxed	arch_atomic_fetch_andnot_relaxed
> +#define arch_atomic_fetch_or_relaxed		arch_atomic_fetch_or_relaxed
> +#define arch_atomic_fetch_xor_relaxed		arch_atomic_fetch_xor_relaxed
>  
> -static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
> +static inline int arch_atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
>  {
>  	int oldval;
>  	unsigned long res;
> @@ -123,9 +123,9 @@ static inline int atomic_cmpxchg_relaxed(atomic_t *ptr, int old, int new)
>  
>  	return oldval;
>  }
> -#define atomic_cmpxchg_relaxed		atomic_cmpxchg_relaxed
> +#define arch_atomic_cmpxchg_relaxed		arch_atomic_cmpxchg_relaxed
>  
> -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
> +static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  {
>  	int oldval, newval;
>  	unsigned long tmp;
> @@ -151,7 +151,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  
>  	return oldval;
>  }
> -#define atomic_fetch_add_unless		atomic_fetch_add_unless
> +#define arch_atomic_fetch_add_unless		arch_atomic_fetch_add_unless
>  
>  #else /* ARM_ARCH_6 */
>  
> @@ -160,7 +160,7 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  #endif
>  
>  #define ATOMIC_OP(op, c_op, asm_op)					\
> -static inline void atomic_##op(int i, atomic_t *v)			\
> +static inline void arch_atomic_##op(int i, atomic_t *v)			\
>  {									\
>  	unsigned long flags;						\
>  									\
> @@ -170,7 +170,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
>  }									\
>  
>  #define ATOMIC_OP_RETURN(op, c_op, asm_op)				\
> -static inline int atomic_##op##_return(int i, atomic_t *v)		\
> +static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
>  {									\
>  	unsigned long flags;						\
>  	int val;							\
> @@ -184,7 +184,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
>  }
>  
>  #define ATOMIC_FETCH_OP(op, c_op, asm_op)				\
> -static inline int atomic_fetch_##op(int i, atomic_t *v)			\
> +static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
>  {									\
>  	unsigned long flags;						\
>  	int val;							\
> @@ -197,7 +197,7 @@ static inline int atomic_fetch_##op(int i, atomic_t *v)			\
>  	return val;							\
>  }
>  
> -static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
> +static inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
>  {
>  	int ret;
>  	unsigned long flags;
> @@ -211,7 +211,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>  	return ret;
>  }
>  
> -#define atomic_fetch_andnot		atomic_fetch_andnot
> +#define arch_atomic_fetch_andnot		arch_atomic_fetch_andnot
>  
>  #endif /* __LINUX_ARM_ARCH__ */
>  
> @@ -223,7 +223,7 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>  ATOMIC_OPS(add, +=, add)
>  ATOMIC_OPS(sub, -=, sub)
>  
> -#define atomic_andnot atomic_andnot
> +#define arch_atomic_andnot arch_atomic_andnot
>  
>  #undef ATOMIC_OPS
>  #define ATOMIC_OPS(op, c_op, asm_op)					\
> @@ -240,7 +240,7 @@ ATOMIC_OPS(xor, ^=, eor)
>  #undef ATOMIC_OP_RETURN
>  #undef ATOMIC_OP
>  
> -#define atomic_xchg(v, new) (xchg(&((v)->counter), new))
> +#define arch_atomic_xchg(v, new) (arch_xchg(&((v)->counter), new))
>  
>  #ifndef CONFIG_GENERIC_ATOMIC64
>  typedef struct {
> @@ -250,7 +250,7 @@ typedef struct {
>  #define ATOMIC64_INIT(i) { (i) }
>  
>  #ifdef CONFIG_ARM_LPAE
> -static inline s64 atomic64_read(const atomic64_t *v)
> +static inline s64 arch_atomic64_read(const atomic64_t *v)
>  {
>  	s64 result;
>  
> @@ -263,7 +263,7 @@ static inline s64 atomic64_read(const atomic64_t *v)
>  	return result;
>  }
>  
> -static inline void atomic64_set(atomic64_t *v, s64 i)
> +static inline void arch_atomic64_set(atomic64_t *v, s64 i)
>  {
>  	__asm__ __volatile__("@ atomic64_set\n"
>  "	strd	%2, %H2, [%1]"
> @@ -272,7 +272,7 @@ static inline void atomic64_set(atomic64_t *v, s64 i)
>  	);
>  }
>  #else
> -static inline s64 atomic64_read(const atomic64_t *v)
> +static inline s64 arch_atomic64_read(const atomic64_t *v)
>  {
>  	s64 result;
>  
> @@ -285,7 +285,7 @@ static inline s64 atomic64_read(const atomic64_t *v)
>  	return result;
>  }
>  
> -static inline void atomic64_set(atomic64_t *v, s64 i)
> +static inline void arch_atomic64_set(atomic64_t *v, s64 i)
>  {
>  	s64 tmp;
>  
> @@ -302,7 +302,7 @@ static inline void atomic64_set(atomic64_t *v, s64 i)
>  #endif
>  
>  #define ATOMIC64_OP(op, op1, op2)					\
> -static inline void atomic64_##op(s64 i, atomic64_t *v)			\
> +static inline void arch_atomic64_##op(s64 i, atomic64_t *v)		\
>  {									\
>  	s64 result;							\
>  	unsigned long tmp;						\
> @@ -322,7 +322,7 @@ static inline void atomic64_##op(s64 i, atomic64_t *v)			\
>  
>  #define ATOMIC64_OP_RETURN(op, op1, op2)				\
>  static inline s64							\
> -atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
> +arch_atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)		\
>  {									\
>  	s64 result;							\
>  	unsigned long tmp;						\
> @@ -345,7 +345,7 @@ atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
>  
>  #define ATOMIC64_FETCH_OP(op, op1, op2)					\
>  static inline s64							\
> -atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
> +arch_atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)		\
>  {									\
>  	s64 result, val;						\
>  	unsigned long tmp;						\
> @@ -374,34 +374,34 @@ atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
>  ATOMIC64_OPS(add, adds, adc)
>  ATOMIC64_OPS(sub, subs, sbc)
>  
> -#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
> -#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
> -#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
> -#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
> +#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
> +#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
> +#define arch_atomic64_fetch_add_relaxed		arch_atomic64_fetch_add_relaxed
> +#define arch_atomic64_fetch_sub_relaxed		arch_atomic64_fetch_sub_relaxed
>  
>  #undef ATOMIC64_OPS
>  #define ATOMIC64_OPS(op, op1, op2)					\
>  	ATOMIC64_OP(op, op1, op2)					\
>  	ATOMIC64_FETCH_OP(op, op1, op2)
>  
> -#define atomic64_andnot atomic64_andnot
> +#define arch_atomic64_andnot arch_atomic64_andnot
>  
>  ATOMIC64_OPS(and, and, and)
>  ATOMIC64_OPS(andnot, bic, bic)
>  ATOMIC64_OPS(or,  orr, orr)
>  ATOMIC64_OPS(xor, eor, eor)
>  
> -#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
> -#define atomic64_fetch_andnot_relaxed	atomic64_fetch_andnot_relaxed
> -#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
> -#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
> +#define arch_atomic64_fetch_and_relaxed		arch_atomic64_fetch_and_relaxed
> +#define arch_atomic64_fetch_andnot_relaxed	arch_atomic64_fetch_andnot_relaxed
> +#define arch_atomic64_fetch_or_relaxed		arch_atomic64_fetch_or_relaxed
> +#define arch_atomic64_fetch_xor_relaxed		arch_atomic64_fetch_xor_relaxed
>  
>  #undef ATOMIC64_OPS
>  #undef ATOMIC64_FETCH_OP
>  #undef ATOMIC64_OP_RETURN
>  #undef ATOMIC64_OP
>  
> -static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
> +static inline s64 arch_atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
>  {
>  	s64 oldval;
>  	unsigned long res;
> @@ -422,9 +422,9 @@ static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
>  
>  	return oldval;
>  }
> -#define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
> +#define arch_atomic64_cmpxchg_relaxed	arch_atomic64_cmpxchg_relaxed
>  
> -static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
> +static inline s64 arch_atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
>  {
>  	s64 result;
>  	unsigned long tmp;
> @@ -442,9 +442,9 @@ static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
>  
>  	return result;
>  }
> -#define atomic64_xchg_relaxed		atomic64_xchg_relaxed
> +#define arch_atomic64_xchg_relaxed		arch_atomic64_xchg_relaxed
>  
> -static inline s64 atomic64_dec_if_positive(atomic64_t *v)
> +static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
>  {
>  	s64 result;
>  	unsigned long tmp;
> @@ -470,9 +470,9 @@ static inline s64 atomic64_dec_if_positive(atomic64_t *v)
>  
>  	return result;
>  }
> -#define atomic64_dec_if_positive atomic64_dec_if_positive
> +#define arch_atomic64_dec_if_positive arch_atomic64_dec_if_positive
>  
> -static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> +static inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  {
>  	s64 oldval, newval;
>  	unsigned long tmp;
> @@ -500,7 +500,7 @@ static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  
>  	return oldval;
>  }
> -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> +#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
>  
>  #endif /* !CONFIG_GENERIC_ATOMIC64 */
>  #endif
> diff --git a/arch/arm/include/asm/cmpxchg.h b/arch/arm/include/asm/cmpxchg.h
> index 06bd8cea861a..4dfe538dfc68 100644
> --- a/arch/arm/include/asm/cmpxchg.h
> +++ b/arch/arm/include/asm/cmpxchg.h
> @@ -114,7 +114,7 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  	return ret;
>  }
>  
> -#define xchg_relaxed(ptr, x) ({						\
> +#define arch_xchg_relaxed(ptr, x) ({					\
>  	(__typeof__(*(ptr)))__xchg((unsigned long)(x), (ptr),		\
>  				   sizeof(*(ptr)));			\
>  })
> @@ -128,20 +128,20 @@ static inline unsigned long __xchg(unsigned long x, volatile void *ptr, int size
>  #error "SMP is not supported on this platform"
>  #endif
>  
> -#define xchg xchg_relaxed
> +#define arch_xchg arch_xchg_relaxed
>  
>  /*
>   * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
>   * them available.
>   */
> -#define cmpxchg_local(ptr, o, n) ({					\
> +#define arch_cmpxchg_local(ptr, o, n) ({				\
>  	(__typeof(*ptr))__generic_cmpxchg_local((ptr),			\
>  					        (unsigned long)(o),	\
>  					        (unsigned long)(n),	\
>  					        sizeof(*(ptr)));	\
>  })
>  
> -#define cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
> +#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
>  
>  #include <asm-generic/cmpxchg.h>
>  
> @@ -207,7 +207,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
>  	return oldval;
>  }
>  
> -#define cmpxchg_relaxed(ptr,o,n) ({					\
> +#define arch_cmpxchg_relaxed(ptr,o,n) ({				\
>  	(__typeof__(*(ptr)))__cmpxchg((ptr),				\
>  				      (unsigned long)(o),		\
>  				      (unsigned long)(n),		\
> @@ -234,7 +234,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
>  	return ret;
>  }
>  
> -#define cmpxchg_local(ptr, o, n) ({					\
> +#define arch_cmpxchg_local(ptr, o, n) ({				\
>  	(__typeof(*ptr))__cmpxchg_local((ptr),				\
>  				        (unsigned long)(o),		\
>  				        (unsigned long)(n),		\
> @@ -266,13 +266,13 @@ static inline unsigned long long __cmpxchg64(unsigned long long *ptr,
>  	return oldval;
>  }
>  
> -#define cmpxchg64_relaxed(ptr, o, n) ({					\
> +#define arch_cmpxchg64_relaxed(ptr, o, n) ({				\
>  	(__typeof__(*(ptr)))__cmpxchg64((ptr),				\
>  					(unsigned long long)(o),	\
>  					(unsigned long long)(n));	\
>  })
>  
> -#define cmpxchg64_local(ptr, o, n) cmpxchg64_relaxed((ptr), (o), (n))
> +#define arch_cmpxchg64_local(ptr, o, n) arch_cmpxchg64_relaxed((ptr), (o), (n))
>  
>  #endif	/* __LINUX_ARM_ARCH__ >= 6 */
>  
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h
  2021-05-10  9:37 ` [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h Mark Rutland
@ 2021-05-13 21:44   ` Stafford Horne
  2021-05-14 10:17     ` Mark Rutland
  0 siblings, 1 reply; 57+ messages in thread
From: Stafford Horne @ 2021-05-13 21:44 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, will, boqun.feng, peterz, aou, arnd, bcain, benh,
	chris, dalias, davem, deanbo422, deller, geert, green.hu, guoren,
	ink, James.Bottomley, jcmvbkbc, jonas, ley.foon.tan, linux,
	mattst88, monstr, mpe, nickhu, palmer, paulus, paul.walmsley,
	rth, stefan.kristiansson, tsbogend, vgupta, ysato

On Mon, May 10, 2021 at 10:37:25AM +0100, Mark Rutland wrote:
> OpenRISC is the only architecture which uses asm-generic/atomic.h and
> also provides its own implementation of some functions, requiring
> ifdeffery in the asm-generic header. As OpenRISC provides the vast
> majority of functions itself, it would be simpler overall if it also
> provided the few functions it cribs from asm-generic.
> 
> This patch decouples OpenRISC from asm-generic/atomic.h. Subsequent
> patches will simplify the asm-generic implementation.

This is fine, but just a thought.  If OpenRISC is the only architecture to do
this, then after this patch could we remove the ifndefs from
asm-generic/atomic.h as additional cleanup?

> There should be no functional change as a result of this patch.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Jonas Bonn <jonas@southpole.se>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Stafford Horne <shorne@gmail.com>
> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/openrisc/include/asm/atomic.h | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
> index b589fac39b92..cb86970d3859 100644
> --- a/arch/openrisc/include/asm/atomic.h
> +++ b/arch/openrisc/include/asm/atomic.h
> @@ -121,6 +121,12 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  }
>  #define atomic_fetch_add_unless	atomic_fetch_add_unless
>  
> -#include <asm-generic/atomic.h>
> +#define atomic_read(v)			READ_ONCE((v)->counter)
> +#define atomic_set(v,i)			WRITE_ONCE((v)->counter, (i))
> +
> +#include <asm/cmpxchg.h>
> +
> +#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
> +#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))

Acked-by: Stafford Horne <shorne@gmail.com>

>  #endif /* __ASM_OPENRISC_ATOMIC_H */
> -- 
> 2.11.0
> 

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 05/33] locking/atomic: openrisc: avoid asm-generic/atomic.h
  2021-05-13 21:44   ` Stafford Horne
@ 2021-05-14 10:17     ` Mark Rutland
  0 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-14 10:17 UTC (permalink / raw)
  To: Stafford Horne
  Cc: linux-kernel, will, boqun.feng, peterz, aou, arnd, bcain, benh,
	chris, dalias, davem, deanbo422, deller, geert, green.hu, guoren,
	ink, James.Bottomley, jcmvbkbc, jonas, ley.foon.tan, linux,
	mattst88, monstr, mpe, nickhu, palmer, paulus, paul.walmsley,
	rth, stefan.kristiansson, tsbogend, vgupta, ysato

On Fri, May 14, 2021 at 06:44:25AM +0900, Stafford Horne wrote:
> On Mon, May 10, 2021 at 10:37:25AM +0100, Mark Rutland wrote:
> > OpenRISC is the only architecture which uses asm-generic/atomic.h and
> > also provides its own implementation of some functions, requiring
> > ifdeffery in the asm-generic header. As OpenRISC provides the vast
> > majority of functions itself, it would be simpler overall if it also
> > provided the few functions it cribs from asm-generic.
> > 
> > This patch decouples OpenRISC from asm-generic/atomic.h. Subsequent
> > patches will simplify the asm-generic implementation.
> 
> This is fine, but just a thought.  If OpenRISC is the only architecture to do
> this then after this patch could we remove the ifndefs from
> asm-generic/atomic.h as additional cleanup?

Yup! That's done in patch 8. :)

I'll expand the wording in this commit message to explicitly mention
that subsequent patches will remove the ifdeffery, rather than just
simplifying the asm-generic implementation.
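
For anyone following along, the ifdeffery in question is the per-function
override guard in include/asm-generic/atomic.h, roughly (recalled, so
treat it as illustrative):

  /* before: guarded so an arch could override individual ops */
  #ifndef atomic_add_return
  ATOMIC_OP_RETURN(add, +)
  #endif

  /* after patch 8, with no arch overriding anything, the guard goes away */
  ATOMIC_OP_RETURN(add, +)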

[...]

> Acked-by: Stafford Horne <shorne@gmail.com>

Thanks!

Mark.

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 25/33] locking/atomic: openrisc: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 25/33] locking/atomic: openrisc: " Mark Rutland
@ 2021-05-15 21:47   ` Stafford Horne
  0 siblings, 0 replies; 57+ messages in thread
From: Stafford Horne @ 2021-05-15 21:47 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, will, boqun.feng, peterz, aou, arnd, bcain, benh,
	chris, dalias, davem, deanbo422, deller, geert, green.hu, guoren,
	ink, James.Bottomley, jcmvbkbc, jonas, ley.foon.tan, linux,
	mattst88, monstr, mpe, nickhu, palmer, paulus, paul.walmsley,
	rth, stefan.kristiansson, tsbogend, vgupta, ysato

On Mon, May 10, 2021 at 10:37:45AM +0100, Mark Rutland wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
> 
> As a step towards that, this patch migrates openrisc to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
> 
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Jonas Bonn <jonas@southpole.se>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Stafford Horne <shorne@gmail.com>
> Cc: Stefan Kristiansson <stefan.kristiansson@saunalahti.fi>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/openrisc/Kconfig               |  1 +
>  arch/openrisc/include/asm/atomic.h  | 42 ++++++++++++++++++++-----------------
>  arch/openrisc/include/asm/cmpxchg.h |  4 ++--
>  3 files changed, 26 insertions(+), 21 deletions(-)
> 
> diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
> index 591acc5990dc..8c50bc9674f5 100644
> --- a/arch/openrisc/Kconfig
> +++ b/arch/openrisc/Kconfig
> @@ -7,6 +7,7 @@
>  config OPENRISC
>  	def_bool y
>  	select ARCH_32BIT_OFF_T
> +	select ARCH_ATOMIC
>  	select ARCH_HAS_DMA_SET_UNCACHED
>  	select ARCH_HAS_DMA_CLEAR_UNCACHED
>  	select ARCH_HAS_SYNC_DMA_FOR_DEVICE
> diff --git a/arch/openrisc/include/asm/atomic.h b/arch/openrisc/include/asm/atomic.h
> index cb86970d3859..326167e4783a 100644
> --- a/arch/openrisc/include/asm/atomic.h
> +++ b/arch/openrisc/include/asm/atomic.h
> @@ -13,7 +13,7 @@
>  
>  /* Atomically perform op with v->counter and i */
>  #define ATOMIC_OP(op)							\
> -static inline void atomic_##op(int i, atomic_t *v)			\
> +static inline void arch_atomic_##op(int i, atomic_t *v)			\
>  {									\
>  	int tmp;							\
>  									\
> @@ -30,7 +30,7 @@ static inline void atomic_##op(int i, atomic_t *v)			\
>  
>  /* Atomically perform op with v->counter and i, return the result */
>  #define ATOMIC_OP_RETURN(op)						\
> -static inline int atomic_##op##_return(int i, atomic_t *v)		\
> +static inline int arch_atomic_##op##_return(int i, atomic_t *v)		\
>  {									\
>  	int tmp;							\
>  									\
> @@ -49,7 +49,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v)		\
>  
>  /* Atomically perform op with v->counter and i, return orig v->counter */
>  #define ATOMIC_FETCH_OP(op)						\
> -static inline int atomic_fetch_##op(int i, atomic_t *v)			\
> +static inline int arch_atomic_fetch_##op(int i, atomic_t *v)		\
>  {									\
>  	int tmp, old;							\
>  									\
> @@ -75,6 +75,8 @@ ATOMIC_FETCH_OP(and)
>  ATOMIC_FETCH_OP(or)
>  ATOMIC_FETCH_OP(xor)
>  
> +ATOMIC_OP(add)
> +ATOMIC_OP(sub)
>  ATOMIC_OP(and)
>  ATOMIC_OP(or)
>  ATOMIC_OP(xor)
> @@ -83,16 +85,18 @@ ATOMIC_OP(xor)
>  #undef ATOMIC_OP_RETURN
>  #undef ATOMIC_OP
>  
> -#define atomic_add_return	atomic_add_return
> -#define atomic_sub_return	atomic_sub_return
> -#define atomic_fetch_add	atomic_fetch_add
> -#define atomic_fetch_sub	atomic_fetch_sub
> -#define atomic_fetch_and	atomic_fetch_and
> -#define atomic_fetch_or		atomic_fetch_or
> -#define atomic_fetch_xor	atomic_fetch_xor
> -#define atomic_and	atomic_and
> -#define atomic_or	atomic_or
> -#define atomic_xor	atomic_xor
> +#define arch_atomic_add_return	arch_atomic_add_return
> +#define arch_atomic_sub_return	arch_atomic_sub_return
> +#define arch_atomic_fetch_add	arch_atomic_fetch_add
> +#define arch_atomic_fetch_sub	arch_atomic_fetch_sub
> +#define arch_atomic_fetch_and	arch_atomic_fetch_and
> +#define arch_atomic_fetch_or	arch_atomic_fetch_or
> +#define arch_atomic_fetch_xor	arch_atomic_fetch_xor
> +#define arch_atomic_add		arch_atomic_add
> +#define arch_atomic_sub		arch_atomic_sub
> +#define arch_atomic_and		arch_atomic_and
> +#define arch_atomic_or		arch_atomic_or
> +#define arch_atomic_xor		arch_atomic_xor
>  
>  /*
>   * Atomically add a to v->counter as long as v is not already u.
> @@ -100,7 +104,7 @@ ATOMIC_OP(xor)
>   *
>   * This is often used through atomic_inc_not_zero()
>   */
> -static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
> +static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  {
>  	int old, tmp;
>  
> @@ -119,14 +123,14 @@ static inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  
>  	return old;
>  }
> -#define atomic_fetch_add_unless	atomic_fetch_add_unless
> +#define arch_atomic_fetch_add_unless	arch_atomic_fetch_add_unless
>  
> -#define atomic_read(v)			READ_ONCE((v)->counter)
> -#define atomic_set(v,i)			WRITE_ONCE((v)->counter, (i))
> +#define arch_atomic_read(v)		READ_ONCE((v)->counter)
> +#define arch_atomic_set(v,i)		WRITE_ONCE((v)->counter, (i))
>  
>  #include <asm/cmpxchg.h>
>  
> -#define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
> -#define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
> +#define arch_atomic_xchg(ptr, v)		(arch_xchg(&(ptr)->counter, (v)))
> +#define arch_atomic_cmpxchg(v, old, new)	(arch_cmpxchg(&((v)->counter), (old), (new)))
>  
>  #endif /* __ASM_OPENRISC_ATOMIC_H */
> diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
> index f9cd43a39d72..79fd16162ccb 100644
> --- a/arch/openrisc/include/asm/cmpxchg.h
> +++ b/arch/openrisc/include/asm/cmpxchg.h
> @@ -132,7 +132,7 @@ static inline unsigned long __cmpxchg(volatile void *ptr, unsigned long old,
>  	}
>  }
>  
> -#define cmpxchg(ptr, o, n)						\
> +#define arch_cmpxchg(ptr, o, n)						\
>  	({								\
>  		(__typeof__(*(ptr))) __cmpxchg((ptr),			\
>  					       (unsigned long)(o),	\
> @@ -161,7 +161,7 @@ static inline unsigned long __xchg(volatile void *ptr, unsigned long with,
>  	}
>  }
>  
> -#define xchg(ptr, with) 						\
> +#define arch_xchg(ptr, with) 						\
>  	({								\
>  		(__typeof__(*(ptr))) __xchg((ptr),			\
>  					    (unsigned long)(with),	\

I was questioning some of this, so I took your branch for a spin.  This builds
and boots and all looks good to me on OpenRISC.

The changes all look good; I was just wondering if all of the refactors were
done correctly for OpenRISC.
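
For what it's worth, the unprefixed atomic_*() functions come back from the
common instrumented wrappers; a simplified sketch of the generated
asm-generic/atomic-instrumented.h (from memory, so illustrative):

  static __always_inline int
  atomic_add_return(int i, atomic_t *v)
  {
          instrument_atomic_read_write(v, sizeof(*v));
          return arch_atomic_add_return(i, v);
  }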

Acked-by: Stafford Horne <shorne@gmail.com>

^ permalink raw reply	[flat|nested] 57+ messages in thread

* Re: [PATCH 28/33] locking/atomic: riscv: move to ARCH_ATOMIC
  2021-05-10  9:37 ` [PATCH 28/33] locking/atomic: riscv: " Mark Rutland
@ 2021-05-22 19:02   ` Palmer Dabbelt
  2021-05-24 12:10     ` Mark Rutland
  0 siblings, 1 reply; 57+ messages in thread
From: Palmer Dabbelt @ 2021-05-22 19:02 UTC (permalink / raw)
  To: mark.rutland
  Cc: linux-kernel, will, boqun.feng, peterz, aou, Arnd Bergmann,
	bcain, benh, chris, dalias, davem, deanbo422, deller, geert,
	green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mark.rutland, mattst88, monstr, mpe, nickhu,
	paulus, Paul Walmsley, rth, shorne, stefan.kristiansson,
	tsbogend, vgupta, ysato

On Mon, 10 May 2021 02:37:48 PDT (-0700), mark.rutland@arm.com wrote:
> We'd like all architectures to convert to ARCH_ATOMIC, as once all
> architectures are converted it will be possible to make significant
> cleanups to the atomics headers, and this will make it much easier to
> generically enable atomic functionality (e.g. debug logic in the
> instrumented wrappers).
>
> As a step towards that, this patch migrates riscv to ARCH_ATOMIC. The
> arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> code wraps these with optional instrumentation to provide the regular
> functions.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>
> Cc: Paul Walmsley <paul.walmsley@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will@kernel.org>
> ---
>  arch/riscv/Kconfig               |   1 +
>  arch/riscv/include/asm/atomic.h  | 128 +++++++++++++++++++--------------------
>  arch/riscv/include/asm/cmpxchg.h |  34 +++++------
>  3 files changed, 82 insertions(+), 81 deletions(-)
>
> diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
> index a8ad8eb76120..c59b9f4a9d62 100644
> --- a/arch/riscv/Kconfig
> +++ b/arch/riscv/Kconfig
> @@ -12,6 +12,7 @@ config 32BIT
>
>  config RISCV
>  	def_bool y
> +	select ARCH_ATOMIC
>  	select ARCH_CLOCKSOURCE_INIT
>  	select ARCH_SUPPORTS_ATOMIC_RMW
>  	select ARCH_SUPPORTS_DEBUG_PAGEALLOC if MMU
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 400a8c8b6de7..ac9bdf4fc404 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -25,22 +25,22 @@
>  #define __atomic_release_fence()					\
>  	__asm__ __volatile__(RISCV_RELEASE_BARRIER "" ::: "memory");
>
> -static __always_inline int atomic_read(const atomic_t *v)
> +static __always_inline int arch_atomic_read(const atomic_t *v)
>  {
>  	return READ_ONCE(v->counter);
>  }
> -static __always_inline void atomic_set(atomic_t *v, int i)
> +static __always_inline void arch_atomic_set(atomic_t *v, int i)
>  {
>  	WRITE_ONCE(v->counter, i);
>  }
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC64_INIT(i) { (i) }
> -static __always_inline s64 atomic64_read(const atomic64_t *v)
> +static __always_inline s64 arch_atomic64_read(const atomic64_t *v)
>  {
>  	return READ_ONCE(v->counter);
>  }
> -static __always_inline void atomic64_set(atomic64_t *v, s64 i)
> +static __always_inline void arch_atomic64_set(atomic64_t *v, s64 i)
>  {
>  	WRITE_ONCE(v->counter, i);
>  }
> @@ -53,7 +53,7 @@ static __always_inline void atomic64_set(atomic64_t *v, s64 i)
>   */
>  #define ATOMIC_OP(op, asm_op, I, asm_type, c_type, prefix)		\
>  static __always_inline							\
> -void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
> +void arch_atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)	\
>  {									\
>  	__asm__ __volatile__ (						\
>  		"	amo" #asm_op "." #asm_type " zero, %1, %0"	\
> @@ -87,7 +87,7 @@ ATOMIC_OPS(xor, xor,  i)
>   */
>  #define ATOMIC_FETCH_OP(op, asm_op, I, asm_type, c_type, prefix)	\
>  static __always_inline							\
> -c_type atomic##prefix##_fetch_##op##_relaxed(c_type i,			\
> +c_type arch_atomic##prefix##_fetch_##op##_relaxed(c_type i,		\
>  					     atomic##prefix##_t *v)	\
>  {									\
>  	register c_type ret;						\
> @@ -99,7 +99,7 @@ c_type atomic##prefix##_fetch_##op##_relaxed(c_type i,			\
>  	return ret;							\
>  }									\
>  static __always_inline							\
> -c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
> +c_type arch_atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
>  {									\
>  	register c_type ret;						\
>  	__asm__ __volatile__ (						\
> @@ -112,15 +112,15 @@ c_type atomic##prefix##_fetch_##op(c_type i, atomic##prefix##_t *v)	\
>
>  #define ATOMIC_OP_RETURN(op, asm_op, c_op, I, asm_type, c_type, prefix)	\
>  static __always_inline							\
> -c_type atomic##prefix##_##op##_return_relaxed(c_type i,			\
> +c_type arch_atomic##prefix##_##op##_return_relaxed(c_type i,		\
>  					      atomic##prefix##_t *v)	\
>  {									\
> -        return atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I;	\
> +        return arch_atomic##prefix##_fetch_##op##_relaxed(i, v) c_op I;	\
>  }									\
>  static __always_inline							\
> -c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
> +c_type arch_atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
>  {									\
> -        return atomic##prefix##_fetch_##op(i, v) c_op I;		\
> +        return arch_atomic##prefix##_fetch_##op(i, v) c_op I;		\
>  }
>
>  #ifdef CONFIG_GENERIC_ATOMIC64
> @@ -138,26 +138,26 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
>  ATOMIC_OPS(add, add, +,  i)
>  ATOMIC_OPS(sub, add, +, -i)
>
> -#define atomic_add_return_relaxed	atomic_add_return_relaxed
> -#define atomic_sub_return_relaxed	atomic_sub_return_relaxed
> -#define atomic_add_return		atomic_add_return
> -#define atomic_sub_return		atomic_sub_return
> +#define arch_atomic_add_return_relaxed	arch_atomic_add_return_relaxed
> +#define arch_atomic_sub_return_relaxed	arch_atomic_sub_return_relaxed
> +#define arch_atomic_add_return		arch_atomic_add_return
> +#define arch_atomic_sub_return		arch_atomic_sub_return
>
> -#define atomic_fetch_add_relaxed	atomic_fetch_add_relaxed
> -#define atomic_fetch_sub_relaxed	atomic_fetch_sub_relaxed
> -#define atomic_fetch_add		atomic_fetch_add
> -#define atomic_fetch_sub		atomic_fetch_sub
> +#define arch_atomic_fetch_add_relaxed	arch_atomic_fetch_add_relaxed
> +#define arch_atomic_fetch_sub_relaxed	arch_atomic_fetch_sub_relaxed
> +#define arch_atomic_fetch_add		arch_atomic_fetch_add
> +#define arch_atomic_fetch_sub		arch_atomic_fetch_sub
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -#define atomic64_add_return_relaxed	atomic64_add_return_relaxed
> -#define atomic64_sub_return_relaxed	atomic64_sub_return_relaxed
> -#define atomic64_add_return		atomic64_add_return
> -#define atomic64_sub_return		atomic64_sub_return
> -
> -#define atomic64_fetch_add_relaxed	atomic64_fetch_add_relaxed
> -#define atomic64_fetch_sub_relaxed	atomic64_fetch_sub_relaxed
> -#define atomic64_fetch_add		atomic64_fetch_add
> -#define atomic64_fetch_sub		atomic64_fetch_sub
> +#define arch_atomic64_add_return_relaxed	arch_atomic64_add_return_relaxed
> +#define arch_atomic64_sub_return_relaxed	arch_atomic64_sub_return_relaxed
> +#define arch_atomic64_add_return		arch_atomic64_add_return
> +#define arch_atomic64_sub_return		arch_atomic64_sub_return
> +
> +#define arch_atomic64_fetch_add_relaxed	arch_atomic64_fetch_add_relaxed
> +#define arch_atomic64_fetch_sub_relaxed	arch_atomic64_fetch_sub_relaxed
> +#define arch_atomic64_fetch_add		arch_atomic64_fetch_add
> +#define arch_atomic64_fetch_sub		arch_atomic64_fetch_sub
>  #endif
>
>  #undef ATOMIC_OPS
> @@ -175,20 +175,20 @@ ATOMIC_OPS(and, and, i)
>  ATOMIC_OPS( or,  or, i)
>  ATOMIC_OPS(xor, xor, i)
>
> -#define atomic_fetch_and_relaxed	atomic_fetch_and_relaxed
> -#define atomic_fetch_or_relaxed		atomic_fetch_or_relaxed
> -#define atomic_fetch_xor_relaxed	atomic_fetch_xor_relaxed
> -#define atomic_fetch_and		atomic_fetch_and
> -#define atomic_fetch_or			atomic_fetch_or
> -#define atomic_fetch_xor		atomic_fetch_xor
> +#define arch_atomic_fetch_and_relaxed	arch_atomic_fetch_and_relaxed
> +#define arch_atomic_fetch_or_relaxed	arch_atomic_fetch_or_relaxed
> +#define arch_atomic_fetch_xor_relaxed	arch_atomic_fetch_xor_relaxed
> +#define arch_atomic_fetch_and		arch_atomic_fetch_and
> +#define arch_atomic_fetch_or		arch_atomic_fetch_or
> +#define arch_atomic_fetch_xor		arch_atomic_fetch_xor
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -#define atomic64_fetch_and_relaxed	atomic64_fetch_and_relaxed
> -#define atomic64_fetch_or_relaxed	atomic64_fetch_or_relaxed
> -#define atomic64_fetch_xor_relaxed	atomic64_fetch_xor_relaxed
> -#define atomic64_fetch_and		atomic64_fetch_and
> -#define atomic64_fetch_or		atomic64_fetch_or
> -#define atomic64_fetch_xor		atomic64_fetch_xor
> +#define arch_atomic64_fetch_and_relaxed	arch_atomic64_fetch_and_relaxed
> +#define arch_atomic64_fetch_or_relaxed	arch_atomic64_fetch_or_relaxed
> +#define arch_atomic64_fetch_xor_relaxed	arch_atomic64_fetch_xor_relaxed
> +#define arch_atomic64_fetch_and		arch_atomic64_fetch_and
> +#define arch_atomic64_fetch_or		arch_atomic64_fetch_or
> +#define arch_atomic64_fetch_xor		arch_atomic64_fetch_xor
>  #endif
>
>  #undef ATOMIC_OPS
> @@ -197,7 +197,7 @@ ATOMIC_OPS(xor, xor, i)
>  #undef ATOMIC_OP_RETURN
>
>  /* This is required to provide a full barrier on success. */
> -static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
> +static __always_inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  {
>         int prev, rc;
>
> @@ -214,10 +214,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  		: "memory");
>  	return prev;
>  }
> -#define atomic_fetch_add_unless atomic_fetch_add_unless
> +#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
> +static __always_inline s64 arch_atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  {
>         s64 prev;
>         long rc;
> @@ -235,7 +235,7 @@ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
>  		: "memory");
>  	return prev;
>  }
> -#define atomic64_fetch_add_unless atomic64_fetch_add_unless
> +#define arch_atomic64_fetch_add_unless arch_atomic64_fetch_add_unless
>  #endif
>
>  /*
> @@ -244,45 +244,45 @@ static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u
>   */
>  #define ATOMIC_OP(c_t, prefix, size)					\
>  static __always_inline							\
> -c_t atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)		\
> +c_t arch_atomic##prefix##_xchg_relaxed(atomic##prefix##_t *v, c_t n)	\
>  {									\
>  	return __xchg_relaxed(&(v->counter), n, size);			\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)		\
> +c_t arch_atomic##prefix##_xchg_acquire(atomic##prefix##_t *v, c_t n)	\
>  {									\
>  	return __xchg_acquire(&(v->counter), n, size);			\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)		\
> +c_t arch_atomic##prefix##_xchg_release(atomic##prefix##_t *v, c_t n)	\
>  {									\
>  	return __xchg_release(&(v->counter), n, size);			\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)			\
> +c_t arch_atomic##prefix##_xchg(atomic##prefix##_t *v, c_t n)		\
>  {									\
>  	return __xchg(&(v->counter), n, size);				\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,		\
> +c_t arch_atomic##prefix##_cmpxchg_relaxed(atomic##prefix##_t *v,	\
>  				     c_t o, c_t n)			\
>  {									\
>  	return __cmpxchg_relaxed(&(v->counter), o, n, size);		\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,		\
> +c_t arch_atomic##prefix##_cmpxchg_acquire(atomic##prefix##_t *v,	\
>  				     c_t o, c_t n)			\
>  {									\
>  	return __cmpxchg_acquire(&(v->counter), o, n, size);		\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,		\
> +c_t arch_atomic##prefix##_cmpxchg_release(atomic##prefix##_t *v,	\
>  				     c_t o, c_t n)			\
>  {									\
>  	return __cmpxchg_release(&(v->counter), o, n, size);		\
>  }									\
>  static __always_inline							\
> -c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
> +c_t arch_atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
>  {									\
>  	return __cmpxchg(&(v->counter), o, n, size);			\
>  }
> @@ -298,19 +298,19 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
>
>  ATOMIC_OPS()
>
> -#define atomic_xchg_relaxed atomic_xchg_relaxed
> -#define atomic_xchg_acquire atomic_xchg_acquire
> -#define atomic_xchg_release atomic_xchg_release
> -#define atomic_xchg atomic_xchg
> -#define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
> -#define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
> -#define atomic_cmpxchg_release atomic_cmpxchg_release
> -#define atomic_cmpxchg atomic_cmpxchg
> +#define arch_atomic_xchg_relaxed	arch_atomic_xchg_relaxed
> +#define arch_atomic_xchg_acquire	arch_atomic_xchg_acquire
> +#define arch_atomic_xchg_release	arch_atomic_xchg_release
> +#define arch_atomic_xchg		arch_atomic_xchg
> +#define arch_atomic_cmpxchg_relaxed	arch_atomic_cmpxchg_relaxed
> +#define arch_atomic_cmpxchg_acquire	arch_atomic_cmpxchg_acquire
> +#define arch_atomic_cmpxchg_release	arch_atomic_cmpxchg_release
> +#define arch_atomic_cmpxchg		arch_atomic_cmpxchg
>
>  #undef ATOMIC_OPS
>  #undef ATOMIC_OP
>
> -static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
> +static __always_inline int arch_atomic_sub_if_positive(atomic_t *v, int offset)
>  {
>         int prev, rc;
>
> @@ -328,10 +328,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
>  	return prev - offset;
>  }
>
> -#define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
> +#define arch_atomic_dec_if_positive(v)	arch_atomic_sub_if_positive(v, 1)
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
> +static __always_inline s64 arch_atomic64_sub_if_positive(atomic64_t *v, s64 offset)
>  {
>         s64 prev;
>         long rc;
> @@ -350,7 +350,7 @@ static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
>  	return prev - offset;
>  }
>
> -#define atomic64_dec_if_positive(v)	atomic64_sub_if_positive(v, 1)
> +#define arch_atomic64_dec_if_positive(v)	arch_atomic64_sub_if_positive(v, 1)
>  #endif
>
>  #endif /* _ASM_RISCV_ATOMIC_H */
> diff --git a/arch/riscv/include/asm/cmpxchg.h b/arch/riscv/include/asm/cmpxchg.h
> index 262e5bbb2776..36dc962f6343 100644
> --- a/arch/riscv/include/asm/cmpxchg.h
> +++ b/arch/riscv/include/asm/cmpxchg.h
> @@ -37,7 +37,7 @@
>  	__ret;								\
>  })
>
> -#define xchg_relaxed(ptr, x)						\
> +#define arch_xchg_relaxed(ptr, x)					\
>  ({									\
>  	__typeof__(*(ptr)) _x_ = (x);					\
>  	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
> @@ -72,7 +72,7 @@
>  	__ret;								\
>  })
>
> -#define xchg_acquire(ptr, x)						\
> +#define arch_xchg_acquire(ptr, x)					\
>  ({									\
>  	__typeof__(*(ptr)) _x_ = (x);					\
>  	(__typeof__(*(ptr))) __xchg_acquire((ptr),			\
> @@ -107,7 +107,7 @@
>  	__ret;								\
>  })
>
> -#define xchg_release(ptr, x)						\
> +#define arch_xchg_release(ptr, x)					\
>  ({									\
>  	__typeof__(*(ptr)) _x_ = (x);					\
>  	(__typeof__(*(ptr))) __xchg_release((ptr),			\
> @@ -140,7 +140,7 @@
>  	__ret;								\
>  })
>
> -#define xchg(ptr, x)							\
> +#define arch_xchg(ptr, x)						\
>  ({									\
>  	__typeof__(*(ptr)) _x_ = (x);					\
>  	(__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr)));	\
> @@ -149,13 +149,13 @@
>  #define xchg32(ptr, x)							\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
> -	xchg((ptr), (x));						\
> +	arch_xchg((ptr), (x));						\
>  })
>
>  #define xchg64(ptr, x)							\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
> -	xchg((ptr), (x));						\
> +	arch_xchg((ptr), (x));						\
>  })
>
>  /*
> @@ -199,7 +199,7 @@
>  	__ret;								\
>  })
>
> -#define cmpxchg_relaxed(ptr, o, n)					\
> +#define arch_cmpxchg_relaxed(ptr, o, n)					\
>  ({									\
>  	__typeof__(*(ptr)) _o_ = (o);					\
>  	__typeof__(*(ptr)) _n_ = (n);					\
> @@ -245,7 +245,7 @@
>  	__ret;								\
>  })
>
> -#define cmpxchg_acquire(ptr, o, n)					\
> +#define arch_cmpxchg_acquire(ptr, o, n)					\
>  ({									\
>  	__typeof__(*(ptr)) _o_ = (o);					\
>  	__typeof__(*(ptr)) _n_ = (n);					\
> @@ -291,7 +291,7 @@
>  	__ret;								\
>  })
>
> -#define cmpxchg_release(ptr, o, n)					\
> +#define arch_cmpxchg_release(ptr, o, n)					\
>  ({									\
>  	__typeof__(*(ptr)) _o_ = (o);					\
>  	__typeof__(*(ptr)) _n_ = (n);					\
> @@ -337,7 +337,7 @@
>  	__ret;								\
>  })
>
> -#define cmpxchg(ptr, o, n)						\
> +#define arch_cmpxchg(ptr, o, n)						\
>  ({									\
>  	__typeof__(*(ptr)) _o_ = (o);					\
>  	__typeof__(*(ptr)) _n_ = (n);					\
> @@ -345,31 +345,31 @@
>  				       _o_, _n_, sizeof(*(ptr)));	\
>  })
>
> -#define cmpxchg_local(ptr, o, n)					\
> +#define arch_cmpxchg_local(ptr, o, n)					\
>  	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
>
>  #define cmpxchg32(ptr, o, n)						\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
> -	cmpxchg((ptr), (o), (n));					\
> +	arch_cmpxchg((ptr), (o), (n));					\
>  })
>
>  #define cmpxchg32_local(ptr, o, n)					\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
> -	cmpxchg_relaxed((ptr), (o), (n))				\
> +	arch_cmpxchg_relaxed((ptr), (o), (n))				\
>  })
>
> -#define cmpxchg64(ptr, o, n)						\
> +#define arch_cmpxchg64(ptr, o, n)					\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
> -	cmpxchg((ptr), (o), (n));					\
> +	arch_cmpxchg((ptr), (o), (n));					\
>  })
>
> -#define cmpxchg64_local(ptr, o, n)					\
> +#define arch_cmpxchg64_local(ptr, o, n)					\
>  ({									\
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
> -	cmpxchg_relaxed((ptr), (o), (n));				\
> +	arch_cmpxchg_relaxed((ptr), (o), (n));				\
>  })
>
>  #endif /* _ASM_RISCV_CMPXCHG_H */

Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>

Sorry this took a bit; I had this put on the slow queue because I 
thought it would conflict with Guo's patch set, but from reading the rest 
of that thread it appears this just supersedes it, which makes things 
much simpler.

I'm assuming this would be for 5.14, and you'd be merging it along with 
the prep work?  I don't think we have anything else in the works that 
would require atomic changes for 5.14 yet, so that seems OK for now, but 
it'd be nice to have some shared tag I could pull in, both so I can test 
this and in case we have more to put on top of it.  I'm fine waiting 
until something comes up if that's a headache on your end, though.

Thanks for the cleanup!


* Re: [PATCH 28/33] locking/atomic: riscv: move to ARCH_ATOMIC
  2021-05-22 19:02   ` Palmer Dabbelt
@ 2021-05-24 12:10     ` Mark Rutland
  0 siblings, 0 replies; 57+ messages in thread
From: Mark Rutland @ 2021-05-24 12:10 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-kernel, will, boqun.feng, peterz, aou, Arnd Bergmann,
	bcain, benh, chris, dalias, davem, deanbo422, deller, geert,
	green.hu, guoren, ink, James.Bottomley, jcmvbkbc, jonas,
	ley.foon.tan, linux, mattst88, monstr, mpe, nickhu, paulus,
	Paul Walmsley, rth, shorne, stefan.kristiansson, tsbogend,
	vgupta, ysato

On Sat, May 22, 2021 at 12:02:45PM -0700, Palmer Dabbelt wrote:
> On Mon, 10 May 2021 02:37:48 PDT (-0700), mark.rutland@arm.com wrote:
> > We'd like all architectures to convert to ARCH_ATOMIC, as once all
> > architectures are converted it will be possible to make significant
> > cleanups to the atomics headers, and this will make it much easier to
> > generically enable atomic functionality (e.g. debug logic in the
> > instrumented wrappers).
> > 
> > As a step towards that, this patch migrates riscv to ARCH_ATOMIC. The
> > arch code provides arch_{atomic,atomic64,xchg,cmpxchg}*(), and common
> > code wraps these with optional instrumentation to provide the regular
> > functions.
> > 
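To make the "common code wraps these" part concrete, here is a minimal
sketch of one such wrapper (simplified; the generated
include/linux/atomic-instrumented.h provides the real versions for every
operation and ordering variant, using the hooks from
<linux/instrumented.h>):

  static __always_inline int
  atomic_fetch_add_unless(atomic_t *v, int a, int u)
  {
          /* report the access to KASAN/KCSAN before doing the operation */
          instrument_atomic_read_write(v, sizeof(*v));
          /* defer to whatever the architecture supplies */
          return arch_atomic_fetch_add_unless(v, a, u);
  }

So the architecture only has to provide the arch_*() side, which is what
the renames in this patch do.
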
> > Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> > Cc: Albert Ou <aou@eecs.berkeley.edu>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>
> > Cc: Paul Walmsley <paul.walmsley@sifive.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Will Deacon <will@kernel.org>
> > ---
> >  arch/riscv/Kconfig               |   1 +
> >  arch/riscv/include/asm/atomic.h  | 128 +++++++++++++++++++--------------------
> >  arch/riscv/include/asm/cmpxchg.h |  34 +++++------
> >  3 files changed, 82 insertions(+), 81 deletions(-)

[...]

> Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
> Acked-by: Palmer Dabbelt <palmerdabbelt@google.com>

Thanks!

> Sorry this took a bit; I had this put on the slow queue because I thought it
> would conflict with Guo's patch set, but from reading the rest of that thread
> it appears this just supersedes it, which makes things much simpler.
> 
> I'm assuming this would be for 5.14, and you'd be merging it along with the
> prep work?

Yup; the plan is to take the whole series through the tip tree for
v5.14.

I'll post a v2 soon with some minor fixups.

> I don't think we have anything else in the works that would
> require atomic changes for 5.14 yet, so that seems OK for now, but it'd be
> nice to have some shared tag I could pull in, both so I can test this and in
> case we have more to put on top of it.  I'm fine waiting until something
> comes up if that's a headache on your end, though.

I think we can arrange a stable tag soon.

The Kbuild test robot is still finding a few edge cases with the
preprocessor glue on a few architectures, so my branch isn't stable yet,
but I think the core bit is solid, so if you wanted to test you should
be able to fetch my branch from:

  git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/arch-atomic

Thanks,
Mark.
