* [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64
@ 2018-02-26 15:04 ` Will Deacon
  0 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

Hi everyone,

This is version two of the RFC I previously posted here:

  https://www.spinics.net/lists/arm-kernel/msg634719.html

Changes since v1 include:

  * Fixed __clear_bit_unlock to work on archs with lock-based atomics
  * Moved lock ops into bitops/lock.h
  * Fixed build breakage on lesser-spotted architectures

Trying to fix the circular #includes introduced by pulling atomic.h
into bitops/lock.h has been driving me insane. I've ended up moving some
basic BIT definitions into bits.h, but this might all be better in
const.h which is being proposed by Masahiro. Feedback is especially
welcome on this part.

I've not bothered optimising for the case of a 64-bit, big-endian
architecture that uses the generic implementation of atomic64_t because
it's both messy and hypothetical. The code here should still work
correctly for that case, it just sucks (as does the implementation
currently in mainline).

Cheers,

Will

--->8

Will Deacon (12):
  h8300: Don't include linux/kernel.h in asm/atomic.h
  m68k: Don't use asm-generic/bitops/lock.h
  asm-generic: Move some macros from linux/bitops.h to a new bits.h file
  openrisc: Don't pull in all of linux/bitops.h in asm/cmpxchg.h
  sh: Don't pull in all of linux/bitops.h in asm/cmpxchg-xchg.h
  arm64: fpsimd: include <linux/init.h> in fpsimd.h
  arm64: lse: Include compiler_types.h and export.h for out-of-line
    LL/SC
  arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  asm-generic/bitops/atomic.h: Rewrite using atomic_fetch_*
  asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*
  arm64: Replace our atomic/lock bitop implementations with asm-generic
  arm64: bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h>

 arch/arm64/include/asm/bitops.h     |  21 +---
 arch/arm64/include/asm/cmpxchg.h    |   2 +-
 arch/arm64/include/asm/fpsimd.h     |   1 +
 arch/arm64/include/asm/lse.h        |   2 +
 arch/arm64/lib/Makefile             |   2 +-
 arch/arm64/lib/bitops.S             |  76 ---------------
 arch/h8300/include/asm/atomic.h     |   4 +-
 arch/m68k/include/asm/bitops.h      |   6 +-
 arch/openrisc/include/asm/cmpxchg.h |   3 +-
 arch/sh/include/asm/cmpxchg-xchg.h  |   3 +-
 include/asm-generic/bitops/atomic.h | 188 +++++++-----------------------------
 include/asm-generic/bitops/lock.h   |  68 ++++++++++---
 include/asm-generic/bits.h          |  26 +++++
 include/linux/bitops.h              |  22 +----
 14 files changed, 135 insertions(+), 289 deletions(-)
 delete mode 100644 arch/arm64/lib/bitops.S
 create mode 100644 include/asm-generic/bits.h

-- 
2.1.4

^ permalink raw reply	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 01/12] h8300: Don't include linux/kernel.h in asm/atomic.h
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon,
	Yoshinori Sato

linux/kernel.h isn't needed by asm/atomic.h and will result in circular
dependencies when the asm-generic atomic bitops are built around the
atomic_long_t interface.

Remove the broad include and replace it with linux/compiler.h for
READ_ONCE etc and asm/irqflags.h for arch_local_irq_save etc.

Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/h8300/include/asm/atomic.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/h8300/include/asm/atomic.h b/arch/h8300/include/asm/atomic.h
index 941e7554e886..b174dec099bf 100644
--- a/arch/h8300/include/asm/atomic.h
+++ b/arch/h8300/include/asm/atomic.h
@@ -2,8 +2,10 @@
 #ifndef __ARCH_H8300_ATOMIC__
 #define __ARCH_H8300_ATOMIC__
 
+#include <linux/compiler.h>
 #include <linux/types.h>
 #include <asm/cmpxchg.h>
+#include <asm/irqflags.h>
 
 /*
  * Atomic operations that C can't guarantee us.  Useful for
@@ -15,8 +17,6 @@
 #define atomic_read(v)		READ_ONCE((v)->counter)
 #define atomic_set(v, i)	WRITE_ONCE(((v)->counter), (i))
 
-#include <linux/kernel.h>
-
 #define ATOMIC_OP_RETURN(op, c_op)				\
 static inline int atomic_##op##_return(int i, atomic_t *v)	\
 {								\
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 02/12] m68k: Don't use asm-generic/bitops/lock.h
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

asm-generic/bitops/lock.h is shortly going to be built on top of the
atomic_long_* API, which introduces a nasty circular dependency for
m68k where linux/atomic.h pulls in linux/bitops.h via:

	linux/atomic.h
	asm/atomic.h
	linux/irqflags.h
	asm/irqflags.h
	linux/preempt.h
	asm/preempt.h
	asm-generic/preempt.h
	linux/thread_info.h
	asm/thread_info.h
	asm/page.h
	asm-generic/getorder.h
	linux/log2.h
	linux/bitops.h

Since m68k isn't SMP and doesn't support ACQUIRE/RELEASE barriers, we
can just define the lock bitops in terms of the atomic bitops in the
asm/bitops.h header.

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/m68k/include/asm/bitops.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/arch/m68k/include/asm/bitops.h b/arch/m68k/include/asm/bitops.h
index 93b47b1f6fb4..18193419f97d 100644
--- a/arch/m68k/include/asm/bitops.h
+++ b/arch/m68k/include/asm/bitops.h
@@ -515,12 +515,16 @@ static inline int __fls(int x)
 
 #endif
 
+/* Simple test-and-set bit locks */
+#define test_and_set_bit_lock	test_and_set_bit
+#define clear_bit_unlock	clear_bit
+#define __clear_bit_unlock	clear_bit_unlock
+
 #include <asm-generic/bitops/ext2-atomic.h>
 #include <asm-generic/bitops/le.h>
 #include <asm-generic/bitops/fls64.h>
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
-#include <asm-generic/bitops/lock.h>
 #endif /* __KERNEL__ */
 
 #endif /* _M68K_BITOPS_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 03/12] asm-generic: Move some macros from linux/bitops.h to a new bits.h file
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

In preparation for implementing the asm-generic atomic bitops in terms
of atomic_long_*, we need to prevent asm/atomic.h implementations from
pulling in linux/bitops.h. A common reason for this include is for the
BITS_PER_BYTE definition, so move this and some other BIT and masking
macros into a new header file, asm-generic/bits.h

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/bits.h | 26 ++++++++++++++++++++++++++
 include/linux/bitops.h     | 22 +---------------------
 2 files changed, 27 insertions(+), 21 deletions(-)
 create mode 100644 include/asm-generic/bits.h

diff --git a/include/asm-generic/bits.h b/include/asm-generic/bits.h
new file mode 100644
index 000000000000..738f8038440b
--- /dev/null
+++ b/include/asm-generic/bits.h
@@ -0,0 +1,26 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_GENERIC_BITS_H
+#define __ASM_GENERIC_BITS_H
+#include <asm/bitsperlong.h>
+
+#define BIT(nr)			(1UL << (nr))
+#define BIT_ULL(nr)		(1ULL << (nr))
+#define BIT_MASK(nr)		(1UL << ((nr) % BITS_PER_LONG))
+#define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
+#define BIT_ULL_MASK(nr)	(1ULL << ((nr) % BITS_PER_LONG_LONG))
+#define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
+#define BITS_PER_BYTE		8
+
+/*
+ * Create a contiguous bitmask starting at bit position @l and ending at
+ * position @h. For example
+ * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
+ */
+#define GENMASK(h, l) \
+	(((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
+
+#define GENMASK_ULL(h, l) \
+	(((~0ULL) - (1ULL << (l)) + 1) & \
+	 (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
+
+#endif	/* __ASM_GENERIC_BITS_H */
diff --git a/include/linux/bitops.h b/include/linux/bitops.h
index 4cac4e1a72ff..57ba7f67b360 100644
--- a/include/linux/bitops.h
+++ b/include/linux/bitops.h
@@ -2,29 +2,9 @@
 #ifndef _LINUX_BITOPS_H
 #define _LINUX_BITOPS_H
 #include <asm/types.h>
+#include <asm-generic/bits.h>
 
-#ifdef	__KERNEL__
-#define BIT(nr)			(1UL << (nr))
-#define BIT_ULL(nr)		(1ULL << (nr))
-#define BIT_MASK(nr)		(1UL << ((nr) % BITS_PER_LONG))
-#define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
-#define BIT_ULL_MASK(nr)	(1ULL << ((nr) % BITS_PER_LONG_LONG))
-#define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
-#define BITS_PER_BYTE		8
 #define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
-#endif
-
-/*
- * Create a contiguous bitmask starting at bit position @l and ending at
- * position @h. For example
- * GENMASK_ULL(39, 21) gives us the 64bit vector 0x000000ffffe00000.
- */
-#define GENMASK(h, l) \
-	(((~0UL) - (1UL << (l)) + 1) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-
-#define GENMASK_ULL(h, l) \
-	(((~0ULL) - (1ULL << (l)) + 1) & \
-	 (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
 
 extern unsigned int __sw_hweight8(unsigned int w);
 extern unsigned int __sw_hweight16(unsigned int w);
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 04/12] openrisc: Don't pull in all of linux/bitops.h in asm/cmpxchg.h
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

The openrisc implementation of asm/cmpxchg.h pulls in linux/bitops.h
so that it can refer to BITS_PER_BYTE. It also transitively relies on
this pulling in linux/compiler.h for READ_ONCE.

Replace the #include with asm-generic/bits.h and linux/compiler.h

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/openrisc/include/asm/cmpxchg.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/openrisc/include/asm/cmpxchg.h b/arch/openrisc/include/asm/cmpxchg.h
index d29f7db53906..94b578388fe2 100644
--- a/arch/openrisc/include/asm/cmpxchg.h
+++ b/arch/openrisc/include/asm/cmpxchg.h
@@ -16,8 +16,9 @@
 #ifndef __ASM_OPENRISC_CMPXCHG_H
 #define __ASM_OPENRISC_CMPXCHG_H
 
+#include  <linux/compiler.h>
 #include  <linux/types.h>
-#include  <linux/bitops.h>
+#include  <asm-generic/bits.h>
 
 #define __HAVE_ARCH_CMPXCHG 1
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 05/12] sh: Don't pull in all of linux/bitops.h in asm/cmpxchg-xchg.h
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

The sh implementation of asm/cmpxchg-xchg.h pulls in linux/bitops.h
so that it can refer to BITS_PER_BYTE. It also transitively relies on
this pulling in linux/compiler.h for READ_ONCE.

Replace the #include with asm-generic/bits.h and linux/compiler.h

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/sh/include/asm/cmpxchg-xchg.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/sh/include/asm/cmpxchg-xchg.h b/arch/sh/include/asm/cmpxchg-xchg.h
index 1e881f5db659..41c290efa3c4 100644
--- a/arch/sh/include/asm/cmpxchg-xchg.h
+++ b/arch/sh/include/asm/cmpxchg-xchg.h
@@ -8,7 +8,8 @@
  * This work is licensed under the terms of the GNU GPL, version 2.  See the
  * file "COPYING" in the main directory of this archive for more details.
  */
-#include <linux/bitops.h>
+#include <linux/compiler.h>
+#include <asm-generic/bits.h>
 #include <asm/byteorder.h>
 
 /*
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 06/12] arm64: fpsimd: include <linux/init.h> in fpsimd.h
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

fpsimd.h uses the __init annotation, so pull in linux/init.h

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/fpsimd.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
index 8857a0f0d0f7..fc3527b985ca 100644
--- a/arch/arm64/include/asm/fpsimd.h
+++ b/arch/arm64/include/asm/fpsimd.h
@@ -22,6 +22,7 @@
 #ifndef __ASSEMBLY__
 
 #include <linux/cache.h>
+#include <linux/init.h>
 #include <linux/stddef.h>
 
 /*
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 07/12] arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

When the LL/SC atomics are moved out-of-line, they are annotated as
notrace and exported to modules. Ensure we pull in the relevant include
files so that these macros are defined when we need them.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/lse.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm64/include/asm/lse.h b/arch/arm64/include/asm/lse.h
index eec95768eaad..e612a6be113f 100644
--- a/arch/arm64/include/asm/lse.h
+++ b/arch/arm64/include/asm/lse.h
@@ -4,6 +4,8 @@
 
 #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
 
+#include <linux/compiler_types.h>
+#include <linux/export.h>
 #include <linux/stringify.h>
 #include <asm/alternative.h>
 
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 08/12] arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

Having asm/cmpxchg.h pull in linux/bug.h is problematic because this
ends up pulling in the atomic bitops which themselves may be built on
top of atomic.h and cmpxchg.h.

Instead, just include build_bug.h for the definition of BUILD_BUG.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/cmpxchg.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
index ae852add053d..bc9e07bc6428 100644
--- a/arch/arm64/include/asm/cmpxchg.h
+++ b/arch/arm64/include/asm/cmpxchg.h
@@ -18,7 +18,7 @@
 #ifndef __ASM_CMPXCHG_H
 #define __ASM_CMPXCHG_H
 
-#include <linux/bug.h>
+#include <linux/build_bug.h>
 
 #include <asm/atomic.h>
 #include <asm/barrier.h>
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 09/12] asm-generic/bitops/atomic.h: Rewrite using atomic_fetch_*
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

The atomic bitops can actually be implemented pretty efficiently using
the atomic_fetch_* ops, rather than explicit use of spinlocks.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/bitops/atomic.h | 188 +++++++-----------------------------
 1 file changed, 33 insertions(+), 155 deletions(-)

diff --git a/include/asm-generic/bitops/atomic.h b/include/asm-generic/bitops/atomic.h
index 04deffaf5f7d..bca92586c2f6 100644
--- a/include/asm-generic/bitops/atomic.h
+++ b/include/asm-generic/bitops/atomic.h
@@ -2,189 +2,67 @@
 #ifndef _ASM_GENERIC_BITOPS_ATOMIC_H_
 #define _ASM_GENERIC_BITOPS_ATOMIC_H_
 
-#include <asm/types.h>
-#include <linux/irqflags.h>
-
-#ifdef CONFIG_SMP
-#include <asm/spinlock.h>
-#include <asm/cache.h>		/* we use L1_CACHE_BYTES */
-
-/* Use an array of spinlocks for our atomic_ts.
- * Hash function to index into a different SPINLOCK.
- * Since "a" is usually an address, use one spinlock per cacheline.
- */
-#  define ATOMIC_HASH_SIZE 4
-#  define ATOMIC_HASH(a) (&(__atomic_hash[ (((unsigned long) a)/L1_CACHE_BYTES) & (ATOMIC_HASH_SIZE-1) ]))
-
-extern arch_spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] __lock_aligned;
-
-/* Can't use raw_spin_lock_irq because of #include problems, so
- * this is the substitute */
-#define _atomic_spin_lock_irqsave(l,f) do {	\
-	arch_spinlock_t *s = ATOMIC_HASH(l);	\
-	local_irq_save(f);			\
-	arch_spin_lock(s);			\
-} while(0)
-
-#define _atomic_spin_unlock_irqrestore(l,f) do {	\
-	arch_spinlock_t *s = ATOMIC_HASH(l);		\
-	arch_spin_unlock(s);				\
-	local_irq_restore(f);				\
-} while(0)
-
-
-#else
-#  define _atomic_spin_lock_irqsave(l,f) do { local_irq_save(f); } while (0)
-#  define _atomic_spin_unlock_irqrestore(l,f) do { local_irq_restore(f); } while (0)
-#endif
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <asm/barrier.h>
 
 /*
- * NMI events can occur at any time, including when interrupts have been
- * disabled by *_irqsave().  So you can get NMI events occurring while a
- * *_bit function is holding a spin lock.  If the NMI handler also wants
- * to do bit manipulation (and they do) then you can get a deadlock
- * between the original caller of *_bit() and the NMI handler.
- *
- * by Keith Owens
+ * Implementation of atomic bitops using atomic-fetch ops.
+ * See Documentation/atomic_bitops.txt for details.
  */
 
-/**
- * set_bit - Atomically set a bit in memory
- * @nr: the bit to set
- * @addr: the address to start counting from
- *
- * This function is atomic and may not be reordered.  See __set_bit()
- * if you do not require the atomic guarantees.
- *
- * Note: there are no guarantees that this function will not be reordered
- * on non x86 architectures, so if you are writing portable code,
- * make sure not to rely on its reordering guarantees.
- *
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
-static inline void set_bit(int nr, volatile unsigned long *addr)
+static inline void set_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p  |= mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_fetch_or_relaxed(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * clear_bit - Clears a bit in memory
- * @nr: Bit to clear
- * @addr: Address to start counting from
- *
- * clear_bit() is atomic and may not be reordered.  However, it does
- * not contain a memory barrier, so if it is used for locking purposes,
- * you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
- * in order to ensure changes are visible on other processors.
- */
-static inline void clear_bit(int nr, volatile unsigned long *addr)
+static inline void clear_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p &= ~mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_fetch_andnot_relaxed(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * change_bit - Toggle a bit in memory
- * @nr: Bit to change
- * @addr: Address to start counting from
- *
- * change_bit() is atomic and may not be reordered. It may be
- * reordered on other architectures than x86.
- * Note that @nr may be almost arbitrarily large; this function is not
- * restricted to acting on a single-word quantity.
- */
-static inline void change_bit(int nr, volatile unsigned long *addr)
+static inline void change_bit(unsigned int nr, volatile unsigned long *p)
 {
-	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	*p ^= mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	atomic_long_fetch_xor_relaxed(BIT_MASK(nr), (atomic_long_t *)p);
 }
 
-/**
- * test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It may be reordered on other architectures than x86.
- * It also implies a memory barrier.
- */
-static inline int test_and_set_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_set_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
 
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old | mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	if (READ_ONCE(*p) & mask)
+		return 1;
 
-	return (old & mask) != 0;
+	old = atomic_long_fetch_or(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
-/**
- * test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It can be reorderdered on other architectures other than x86.
- * It also implies a memory barrier.
- */
-static inline int test_and_clear_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
 
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old & ~mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
+	p += BIT_WORD(nr);
+	if (!(READ_ONCE(*p) & mask))
+		return 0;
 
-	return (old & mask) != 0;
+	old = atomic_long_fetch_andnot(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
-/**
- * test_and_change_bit - Change a bit and return its old value
- * @nr: Bit to change
- * @addr: Address to count from
- *
- * This operation is atomic and cannot be reordered.
- * It also implies a memory barrier.
- */
-static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
+static inline int test_and_change_bit(unsigned int nr, volatile unsigned long *p)
 {
+	long old;
 	unsigned long mask = BIT_MASK(nr);
-	unsigned long *p = ((unsigned long *)addr) + BIT_WORD(nr);
-	unsigned long old;
-	unsigned long flags;
-
-	_atomic_spin_lock_irqsave(p, flags);
-	old = *p;
-	*p = old ^ mask;
-	_atomic_spin_unlock_irqrestore(p, flags);
 
-	return (old & mask) != 0;
+	p += BIT_WORD(nr);
+	old = atomic_long_fetch_xor(mask, (atomic_long_t *)p);
+	return !!(old & mask);
 }
 
 #endif /* _ASM_GENERIC_BITOPS_ATOMIC_H */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 10/12] asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

The lock bitops can be implemented more efficiently using the atomic_fetch_*
ops, which provide finer-grained control over the memory ordering semantics
than the bitops.

Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/bitops/lock.h | 68 ++++++++++++++++++++++++++++++++-------
 1 file changed, 56 insertions(+), 12 deletions(-)

diff --git a/include/asm-generic/bitops/lock.h b/include/asm-generic/bitops/lock.h
index 67ab280ad134..3ae021368f48 100644
--- a/include/asm-generic/bitops/lock.h
+++ b/include/asm-generic/bitops/lock.h
@@ -2,6 +2,10 @@
 #ifndef _ASM_GENERIC_BITOPS_LOCK_H_
 #define _ASM_GENERIC_BITOPS_LOCK_H_
 
+#include <linux/atomic.h>
+#include <linux/compiler.h>
+#include <asm/barrier.h>
+
 /**
  * test_and_set_bit_lock - Set a bit and return its old value, for lock
  * @nr: Bit to set
@@ -11,7 +15,20 @@
  * the returned value is 0.
  * It can be used to implement bit locks.
  */
-#define test_and_set_bit_lock(nr, addr)	test_and_set_bit(nr, addr)
+static inline int test_and_set_bit_lock(unsigned int nr,
+					volatile unsigned long *p)
+{
+	long old;
+	unsigned long mask = BIT_MASK(nr);
+
+	p += BIT_WORD(nr);
+	if (READ_ONCE(*p) & mask)
+		return 1;
+
+	old = atomic_long_fetch_or_acquire(mask, (atomic_long_t *)p);
+	return !!(old & mask);
+}
+
 
 /**
  * clear_bit_unlock - Clear a bit in memory, for unlock
@@ -20,11 +37,11 @@
  *
  * This operation is atomic and provides release barrier semantics.
  */
-#define clear_bit_unlock(nr, addr)	\
-do {					\
-	smp_mb__before_atomic();	\
-	clear_bit(nr, addr);		\
-} while (0)
+static inline void clear_bit_unlock(unsigned int nr, volatile unsigned long *p)
+{
+	p += BIT_WORD(nr);
+	atomic_long_fetch_andnot_release(BIT_MASK(nr), (atomic_long_t *)p);
+}
 
 /**
  * __clear_bit_unlock - Clear a bit in memory, for unlock
@@ -37,11 +54,38 @@ do {					\
  *
  * See for example x86's implementation.
  */
-#define __clear_bit_unlock(nr, addr)	\
-do {					\
-	smp_mb__before_atomic();	\
-	clear_bit(nr, addr);		\
-} while (0)
+static inline void __clear_bit_unlock(unsigned int nr,
+				      volatile unsigned long *p)
+{
+	unsigned long old;
 
-#endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */
+	p += BIT_WORD(nr);
+	old = READ_ONCE(*p);
+	old &= ~BIT_MASK(nr);
+	atomic_long_set_release((atomic_long_t *)p, old);
+}
+
+/**
+ * clear_bit_unlock_is_negative_byte - Clear a bit in memory and test if bottom
+ *                                     byte is negative, for unlock.
+ * @nr: the bit to clear
+ * @addr: the address to start counting from
+ *
+ * This is a bit of a one-trick-pony for the filemap code, which clears
+ * PG_locked and tests PG_waiters,
+ */
+#ifndef clear_bit_unlock_is_negative_byte
+static inline bool clear_bit_unlock_is_negative_byte(unsigned int nr,
+						     volatile unsigned long *p)
+{
+	long old;
+	unsigned long mask = BIT_MASK(nr);
+
+	p += BIT_WORD(nr);
+	old = atomic_long_fetch_andnot_release(mask, (atomic_long_t *)p);
+	return !!(old & BIT(7));
+}
+#define clear_bit_unlock_is_negative_byte clear_bit_unlock_is_negative_byte
+#endif
 
+#endif /* _ASM_GENERIC_BITOPS_LOCK_H_ */
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 11/12] arm64: Replace our atomic/lock bitop implementations with asm-generic
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:04   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:04 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

The asm-generic/bitops/{atomic,lock}.h implementations are built around
the atomic-fetch ops, which we implement efficiently for both LSE and
LL/SC systems. Use that instead of our hand-rolled, out-of-line bitops.S.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/bitops.h | 14 ++------
 arch/arm64/lib/Makefile         |  2 +-
 arch/arm64/lib/bitops.S         | 76 -----------------------------------------
 3 files changed, 3 insertions(+), 89 deletions(-)
 delete mode 100644 arch/arm64/lib/bitops.S

diff --git a/arch/arm64/include/asm/bitops.h b/arch/arm64/include/asm/bitops.h
index 9c19594ce7cb..13501460be6b 100644
--- a/arch/arm64/include/asm/bitops.h
+++ b/arch/arm64/include/asm/bitops.h
@@ -17,22 +17,11 @@
 #define __ASM_BITOPS_H
 
 #include <linux/compiler.h>
-#include <asm/barrier.h>
 
 #ifndef _LINUX_BITOPS_H
 #error only <linux/bitops.h> can be included directly
 #endif
 
-/*
- * Little endian assembly atomic bitops.
- */
-extern void set_bit(int nr, volatile unsigned long *p);
-extern void clear_bit(int nr, volatile unsigned long *p);
-extern void change_bit(int nr, volatile unsigned long *p);
-extern int test_and_set_bit(int nr, volatile unsigned long *p);
-extern int test_and_clear_bit(int nr, volatile unsigned long *p);
-extern int test_and_change_bit(int nr, volatile unsigned long *p);
-
 #include <asm-generic/bitops/builtin-__ffs.h>
 #include <asm-generic/bitops/builtin-ffs.h>
 #include <asm-generic/bitops/builtin-__fls.h>
@@ -44,8 +33,9 @@ extern int test_and_change_bit(int nr, volatile unsigned long *p);
 
 #include <asm-generic/bitops/sched.h>
 #include <asm-generic/bitops/hweight.h>
-#include <asm-generic/bitops/lock.h>
 
+#include <asm-generic/bitops/atomic.h>
+#include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/non-atomic.h>
 #include <asm-generic/bitops/le.h>
 
diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile
index 4e696f96451f..73095a04c0ad 100644
--- a/arch/arm64/lib/Makefile
+++ b/arch/arm64/lib/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0
-lib-y		:= bitops.o clear_user.o delay.o copy_from_user.o	\
+lib-y		:= clear_user.o delay.o copy_from_user.o		\
 		   copy_to_user.o copy_in_user.o copy_page.o		\
 		   clear_page.o memchr.o memcpy.o memmove.o memset.o	\
 		   memcmp.o strcmp.o strncmp.o strlen.o strnlen.o	\
diff --git a/arch/arm64/lib/bitops.S b/arch/arm64/lib/bitops.S
deleted file mode 100644
index 43ac736baa5b..000000000000
--- a/arch/arm64/lib/bitops.S
+++ /dev/null
@@ -1,76 +0,0 @@
-/*
- * Based on arch/arm/lib/bitops.h
- *
- * Copyright (C) 2013 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program.  If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <linux/linkage.h>
-#include <asm/assembler.h>
-#include <asm/lse.h>
-
-/*
- * x0: bits 5:0  bit offset
- *     bits 31:6 word offset
- * x1: address
- */
-	.macro	bitop, name, llsc, lse
-ENTRY(	\name	)
-	and	w3, w0, #63		// Get bit offset
-	eor	w0, w0, w3		// Clear low bits
-	mov	x2, #1
-	add	x1, x1, x0, lsr #3	// Get word offset
-alt_lse "	prfm	pstl1strm, [x1]",	"nop"
-	lsl	x3, x2, x3		// Create mask
-
-alt_lse	"1:	ldxr	x2, [x1]",		"\lse	x3, [x1]"
-alt_lse	"	\llsc	x2, x2, x3",		"nop"
-alt_lse	"	stxr	w0, x2, [x1]",		"nop"
-alt_lse	"	cbnz	w0, 1b",		"nop"
-
-	ret
-ENDPROC(\name	)
-	.endm
-
-	.macro	testop, name, llsc, lse
-ENTRY(	\name	)
-	and	w3, w0, #63		// Get bit offset
-	eor	w0, w0, w3		// Clear low bits
-	mov	x2, #1
-	add	x1, x1, x0, lsr #3	// Get word offset
-alt_lse "	prfm	pstl1strm, [x1]",	"nop"
-	lsl	x4, x2, x3		// Create mask
-
-alt_lse	"1:	ldxr	x2, [x1]",		"\lse	x4, x2, [x1]"
-	lsr	x0, x2, x3
-alt_lse	"	\llsc	x2, x2, x4",		"nop"
-alt_lse	"	stlxr	w5, x2, [x1]",		"nop"
-alt_lse	"	cbnz	w5, 1b",		"nop"
-alt_lse	"	dmb	ish",			"nop"
-
-	and	x0, x0, #1
-	ret
-ENDPROC(\name	)
-	.endm
-
-/*
- * Atomic bit operations.
- */
-	bitop	change_bit, eor, steor
-	bitop	clear_bit, bic, stclr
-	bitop	set_bit, orr, stset
-
-	testop	test_and_change_bit, eor, ldeoral
-	testop	test_and_clear_bit, bic, ldclral
-	testop	test_and_set_bit, orr, ldsetal
-- 
2.1.4

^ permalink raw reply related	[flat|nested] 42+ messages in thread

* [RFC PATCH v2 12/12] arm64: bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h>
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-02-26 15:05   ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-26 15:05 UTC (permalink / raw)
  To: linux-kernel
  Cc: peterz, mingo, linux-arm-kernel, yamada.masahiro, Will Deacon

asm-generic/bitops/ext2-atomic-setbit.h provides the ext2 atomic bitop
definitions, so we don't need to define our own.

Signed-off-by: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/bitops.h | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/arm64/include/asm/bitops.h b/arch/arm64/include/asm/bitops.h
index 13501460be6b..10d536b1af74 100644
--- a/arch/arm64/include/asm/bitops.h
+++ b/arch/arm64/include/asm/bitops.h
@@ -38,11 +38,6 @@
 #include <asm-generic/bitops/lock.h>
 #include <asm-generic/bitops/non-atomic.h>
 #include <asm-generic/bitops/le.h>
-
-/*
- * Ext2 is defined to use little-endian byte ordering.
- */
-#define ext2_set_bit_atomic(lock, nr, p)	test_and_set_bit_le(nr, p)
-#define ext2_clear_bit_atomic(lock, nr, p)	test_and_clear_bit_le(nr, p)
+#include <asm-generic/bitops/ext2-atomic-setbit.h>
 
 #endif /* __ASM_BITOPS_H */
-- 
2.1.4


* Re: [RFC PATCH v2 06/12] arm64: fpsimd: include <linux/init.h> in fpsimd.h
  2018-02-26 15:04   ` Will Deacon
@ 2018-02-26 15:37     ` Mark Rutland
  -1 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2018-02-26 15:37 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, peterz, yamada.masahiro, mingo, linux-arm-kernel

On Mon, Feb 26, 2018 at 03:04:54PM +0000, Will Deacon wrote:
> fpsimd.h uses the __init annotation, so pull in linux/init.h
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>

Having skimmed through, this looks like all we need.

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/fpsimd.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index 8857a0f0d0f7..fc3527b985ca 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -22,6 +22,7 @@
>  #ifndef __ASSEMBLY__
>  
>  #include <linux/cache.h>
> +#include <linux/init.h>
>  #include <linux/stddef.h>
>  
>  /*
> -- 
> 2.1.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [RFC PATCH v2 07/12] arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC
  2018-02-26 15:04   ` Will Deacon
@ 2018-02-26 15:42     ` Mark Rutland
  -1 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2018-02-26 15:42 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, peterz, yamada.masahiro, mingo, linux-arm-kernel

On Mon, Feb 26, 2018 at 03:04:55PM +0000, Will Deacon wrote:
> When the LL/SC atomics are moved out-of-line, they are annotated as
> notrace and exported to modules. Ensure we pull in the relevant include
> files so that these macros are defined when we need them.
> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/lse.h | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/lse.h b/arch/arm64/include/asm/lse.h
> index eec95768eaad..e612a6be113f 100644
> --- a/arch/arm64/include/asm/lse.h
> +++ b/arch/arm64/include/asm/lse.h
> @@ -4,6 +4,8 @@
>  
>  #if defined(CONFIG_AS_LSE) && defined(CONFIG_ARM64_LSE_ATOMICS)
>  
> +#include <linux/compiler_types.h>
> +#include <linux/export.h>
>  #include <linux/stringify.h>
>  #include <asm/alternative.h>

I think we should include <asm/cpucaps.h> since we explicitly use
ARM64_HAS_LSE_ATOMICS here.

Otherwise, I don't see that we need anything else here. With that, or if we
decide that <asm/alternative.h> will always include the definition of cpucaps:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.


* Re: [RFC PATCH v2 08/12] arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  2018-02-26 15:04   ` Will Deacon
@ 2018-02-26 15:48     ` Mark Rutland
  -1 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2018-02-26 15:48 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, peterz, yamada.masahiro, mingo, linux-arm-kernel

On Mon, Feb 26, 2018 at 03:04:56PM +0000, Will Deacon wrote:
> Having asm/cmpxchg.h pull in linux/bug.h is problematic because this
> ends up pulling in the atomic bitops which themselves may be built on
> top of atomic.h and cmpxchg.h.
> 
> Instead, just include build_bug.h for the definition of BUILD_BUG.

We also use VM_BUG_ON(), defined in <linux/mmdebug.h>, which includes
<linux/bug.h>.

... so I think we still have some fragility here, albeit no worse than before.

We also miss includes for:

* <linux/percpu-defs.h> (raw_cpu_ptr)
* <linux/preempt.h> (preempt_disable, preempt_enable)
* <linux/compiler.h> (unreachable)

I'm not sure if those are made worse by this change.

Mark.

> 
> Signed-off-by: Will Deacon <will.deacon@arm.com>
> ---
>  arch/arm64/include/asm/cmpxchg.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/cmpxchg.h b/arch/arm64/include/asm/cmpxchg.h
> index ae852add053d..bc9e07bc6428 100644
> --- a/arch/arm64/include/asm/cmpxchg.h
> +++ b/arch/arm64/include/asm/cmpxchg.h
> @@ -18,7 +18,7 @@
>  #ifndef __ASM_CMPXCHG_H
>  #define __ASM_CMPXCHG_H
>  
> -#include <linux/bug.h>
> +#include <linux/build_bug.h>
>  
>  #include <asm/atomic.h>
>  #include <asm/barrier.h>
> -- 
> 2.1.4
> 
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


* Re: [RFC PATCH v2 08/12] arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  2018-02-26 15:48     ` Mark Rutland
@ 2018-02-27 17:33       ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-02-27 17:33 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, peterz, yamada.masahiro, mingo, linux-arm-kernel

On Mon, Feb 26, 2018 at 03:48:49PM +0000, Mark Rutland wrote:
> On Mon, Feb 26, 2018 at 03:04:56PM +0000, Will Deacon wrote:
> > Having asm/cmpxchg.h pull in linux/bug.h is problematic because this
> > ends up pulling in the atomic bitops which themselves may be built on
> > top of atomic.h and cmpxchg.h.
> > 
> > Instead, just include build_bug.h for the definition of BUILD_BUG.
> 
> We also use VM_BUG_ON(), defined in <linux/mmdebug.h>, which includes
> <linux/bug.h>.
> 
> ... so I think we still have some fragility here, albeit no worse than before.
> 
> We also miss includes for:
> 
> * <linux/percpu-defs.h> (raw_cpu_ptr)
> * <linux/preempt.h> (preempt_disable, preempt_enable)

Hmm, we can't include this one because it pulls in linux/bitops.h. I've
moved the percpu cmpxchg stuff into asm/percpu.h, but that too is missing
the linux/preempt.h #include, so I've added that as well.

Generally, I think if we want to clean up our #includes then that's better
done as a separate series rather than as a piecemeal effort, which will
likely fail to identify many of the underlying problems.

Will


* Re: [RFC PATCH v2 08/12] arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
  2018-02-27 17:33       ` Will Deacon
@ 2018-02-27 17:34         ` Mark Rutland
  -1 siblings, 0 replies; 42+ messages in thread
From: Mark Rutland @ 2018-02-27 17:34 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, peterz, yamada.masahiro, mingo, linux-arm-kernel

On Tue, Feb 27, 2018 at 05:33:23PM +0000, Will Deacon wrote:
> On Mon, Feb 26, 2018 at 03:48:49PM +0000, Mark Rutland wrote:
> > On Mon, Feb 26, 2018 at 03:04:56PM +0000, Will Deacon wrote:
> > > Having asm/cmpxchg.h pull in linux/bug.h is problematic because this
> > > ends up pulling in the atomic bitops which themselves may be built on
> > > top of atomic.h and cmpxchg.h.
> > > 
> > > Instead, just include build_bug.h for the definition of BUILD_BUG.
> > 
> > We also use VM_BUG_ON(), defined in <linux/mmdebug.h>, which includes
> > <linux/bug.h>.
> > 
> > ... so I think we still have some fragility here, albeit no worse than before.
> > 
> > We also miss includes for:
> > 
> > * <linux/percpu-defs.h> (raw_cpu_ptr)
> > * <linux/preempt.h> (preempt_disable, preempt_enable)
> 
> Hmm, we can't include this one because it pulls in linux/bitops.h. I've
> moved the percpu cmpxchg stuff into asm/percpu.h, but that too is missing
> the linux/preempt.h #include, so I've added that as well.
> 
> Generally, I think if we want to clean up our #includes then that's better
> done as a separate series rather than as a piecemeal effort, which will
> likely fail to identify many of the underlying problems.

Sure thing; the above shouldn't hold up this series.

Mark.


* Re: [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64
  2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
@ 2018-03-01  7:16   ` Masahiro Yamada
  -1 siblings, 0 replies; 42+ messages in thread
From: Masahiro Yamada @ 2018-03-01  7:16 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linux Kernel Mailing List, Peter Zijlstra (Intel),
	Ingo Molnar, linux-arm-kernel

2018-02-27 0:04 GMT+09:00 Will Deacon <will.deacon@arm.com>:
> Hi everyone,
>
> This is version two of the RFC I previously posted here:
>
>   https://www.spinics.net/lists/arm-kernel/msg634719.html
>
> Changes since v1 include:
>
>   * Fixed __clear_bit_unlock to work on archs with lock-based atomics
>   * Moved lock ops into bitops/lock.h
>   * Fixed build breakage on lesser-spotted architectures
>
> Trying to fix the circular #includes introduced by pulling atomic.h
> into bitops/lock.h has been driving me insane. I've ended up moving some
> basic BIT definitions into bits.h, but this might all be better in
> const.h which is being proposed by Masahiro. Feedback is especially
> welcome on this part.


Info for reviewers:

You can see my patches at the following:

1/5: https://patchwork.kernel.org/patch/10235457/
2/5: https://patchwork.kernel.org/patch/10235461/
3/5: https://patchwork.kernel.org/patch/10235463/
4/5: https://patchwork.kernel.org/patch/10235469/
5/5: https://patchwork.kernel.org/patch/10235471/


5/5 conflicts with Will's 2/12. Fortunately, it is at the tail of the
series, so it is easy to pick, drop, or change once we decide how to
organize things.

> I've not bothered optimising for the case of a 64-bit, big-endian
> architecture that uses the generic implementation of atomic64_t because
> it's both messy and hypothetical. The code here should still work
> correctly for that case, it just sucks (as does the implementation
> currently in mainline).
>
> Cheers,
>
> Will
>
> --->8
>
> Will Deacon (12):
>   h8300: Don't include linux/kernel.h in asm/atomic.h
>   m68k: Don't use asm-generic/bitops/lock.h
>   asm-generic: Move some macros from linux/bitops.h to a new bits.h file
>   openrisc: Don't pull in all of linux/bitops.h in asm/cmpxchg.h
>   sh: Don't pull in all of linux/bitops.h in asm/cmpxchg-xchg.h
>   arm64: fpsimd: include <linux/init.h> in fpsimd.h
>   arm64: lse: Include compiler_types.h and export.h for out-of-line
>     LL/SC
>   arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG
>   asm-generic/bitops/atomic.h: Rewrite using atomic_fetch_*
>   asm-generic/bitops/lock.h: Rewrite using atomic_fetch_*
>   arm64: Replace our atomic/lock bitop implementations with asm-generic
>   arm64: bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h>
>
>  arch/arm64/include/asm/bitops.h     |  21 +---
>  arch/arm64/include/asm/cmpxchg.h    |   2 +-
>  arch/arm64/include/asm/fpsimd.h     |   1 +
>  arch/arm64/include/asm/lse.h        |   2 +
>  arch/arm64/lib/Makefile             |   2 +-
>  arch/arm64/lib/bitops.S             |  76 ---------------
>  arch/h8300/include/asm/atomic.h     |   4 +-
>  arch/m68k/include/asm/bitops.h      |   6 +-
>  arch/openrisc/include/asm/cmpxchg.h |   3 +-
>  arch/sh/include/asm/cmpxchg-xchg.h  |   3 +-
>  include/asm-generic/bitops/atomic.h | 188 +++++++-----------------------------
>  include/asm-generic/bitops/lock.h   |  68 ++++++++++---
>  include/asm-generic/bits.h          |  26 +++++
>  include/linux/bitops.h              |  22 +----
>  14 files changed, 135 insertions(+), 289 deletions(-)
>  delete mode 100644 arch/arm64/lib/bitops.S
>  create mode 100644 include/asm-generic/bits.h
>
> --
> 2.1.4
>



-- 
Best Regards
Masahiro Yamada


* Re: [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64
  2018-03-01  7:16   ` Masahiro Yamada
@ 2018-03-12  3:56     ` Masahiro Yamada
  -1 siblings, 0 replies; 42+ messages in thread
From: Masahiro Yamada @ 2018-03-12  3:56 UTC (permalink / raw)
  To: Will Deacon
  Cc: Linux Kernel Mailing List, Peter Zijlstra (Intel),
	Ingo Molnar, linux-arm-kernel

Hi Will,


2018-03-01 16:16 GMT+09:00 Masahiro Yamada <yamada.masahiro@socionext.com>:
> 2018-02-27 0:04 GMT+09:00 Will Deacon <will.deacon@arm.com>:
>> Hi everyone,
>>
>> This is version two of the RFC I previously posted here:
>>
>>   https://www.spinics.net/lists/arm-kernel/msg634719.html
>>
>> Changes since v1 include:
>>
>>   * Fixed __clear_bit_unlock to work on archs with lock-based atomics
>>   * Moved lock ops into bitops/lock.h
>>   * Fixed build breakage on lesser-spotted architectures
>>
>> Trying to fix the circular #includes introduced by pulling atomic.h
>> into bitops/lock.h has been driving me insane. I've ended up moving some
>> basic BIT definitions into bits.h, but this might all be better in
>> const.h which is being proposed by Masahiro. Feedback is especially
>> welcome on this part.
>
>
> Info for reviewers:
>
> You can see my patches at the following:
>
> 1/5: https://patchwork.kernel.org/patch/10235457/
> 2/5: https://patchwork.kernel.org/patch/10235461/
> 3/5: https://patchwork.kernel.org/patch/10235463/
> 4/5: https://patchwork.kernel.org/patch/10235469/
> 5/5: https://patchwork.kernel.org/patch/10235471/
>
>
> 5/5 has conflict with Will's 2/12.
>
> Fortunately, it is at the tail of the series.
> It is easy to pick/drop/change
> when we decide how to organize it.


No comments so far about this part.

I think your approach is better, since putting the BIT* macros into a
single header is more consistent.

So, I will ask Andrew to drop mine.

However, I think <linux/bits.h> would make more sense than
<asm-generic/bits.h>: these macros are really arch-agnostic, so we
would not expect an <asm/bits.h> that could fall back to
<asm-generic/bits.h>, right?

-- 
Best Regards
Masahiro Yamada


* Re: [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64
  2018-03-12  3:56     ` Masahiro Yamada
@ 2018-03-19 17:21       ` Will Deacon
  -1 siblings, 0 replies; 42+ messages in thread
From: Will Deacon @ 2018-03-19 17:21 UTC (permalink / raw)
  To: Masahiro Yamada
  Cc: Linux Kernel Mailing List, Peter Zijlstra (Intel),
	Ingo Molnar, linux-arm-kernel

Hi Masahiro,

On Mon, Mar 12, 2018 at 12:56:28PM +0900, Masahiro Yamada wrote:
> 2018-03-01 16:16 GMT+09:00 Masahiro Yamada <yamada.masahiro@socionext.com>:
> > 2018-02-27 0:04 GMT+09:00 Will Deacon <will.deacon@arm.com>:
> >> Hi everyone,
> >>
> >> This is version two of the RFC I previously posted here:
> >>
> >>   https://www.spinics.net/lists/arm-kernel/msg634719.html
> >>
> >> Changes since v1 include:
> >>
> >>   * Fixed __clear_bit_unlock to work on archs with lock-based atomics
> >>   * Moved lock ops into bitops/lock.h
> >>   * Fixed build breakage on lesser-spotted architectures
> >>
> >> Trying to fix the circular #includes introduced by pulling atomic.h
> >> into bitops/lock.h has been driving me insane. I've ended up moving some
> >> basic BIT definitions into bits.h, but this might all be better in
> >> const.h which is being proposed by Masahiro. Feedback is especially
> >> welcome on this part.
> >
> >
> > Info for reviewers:
> >
> > You can see my patches at the following:
> >
> > 1/5: https://patchwork.kernel.org/patch/10235457/
> > 2/5: https://patchwork.kernel.org/patch/10235461/
> > 3/5: https://patchwork.kernel.org/patch/10235463/
> > 4/5: https://patchwork.kernel.org/patch/10235469/
> > 5/5: https://patchwork.kernel.org/patch/10235471/
> >
> >
> > 5/5 has conflict with Will's 2/12.
> >
> > Fortunately, it is at the tail of the series.
> > It is easy to pick/drop/change
> > when we decide how to organize it.
> 
> 
> No comments so far about this part.
> 
> I think your approach is better
> since putting BIT* macros into a single header
> is more consistent.
> 
> So, I will ask Andrew to drop mine.

Thanks.

> However, I think <linux/bits.h> will make more sense
> than <asm-generic/bits.h>
> 
> These macros are really arch-agnostic.
> So, we would not expect to have <asm/bits.h>
> that could fall back to <asm-generic/bits.h>, right?

That's fair. I'll do a respin using linux/*.

Cheers,

Will


end of thread, other threads:[~2018-03-19 17:22 UTC | newest]

Thread overview: 42+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-02-26 15:04 [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64 Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic, lock}.h " Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 01/12] h8300: Don't include linux/kernel.h in asm/atomic.h Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 02/12] m68k: Don't use asm-generic/bitops/lock.h Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 03/12] asm-generic: Move some macros from linux/bitops.h to a new bits.h file Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 04/12] openrisc: Don't pull in all of linux/bitops.h in asm/cmpxchg.h Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 05/12] sh: Don't pull in all of linux/bitops.h in asm/cmpxchg-xchg.h Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 06/12] arm64: fpsimd: include <linux/init.h> in fpsimd.h Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:37   ` Mark Rutland
2018-02-26 15:37     ` Mark Rutland
2018-02-26 15:04 ` [RFC PATCH v2 07/12] arm64: lse: Include compiler_types.h and export.h for out-of-line LL/SC Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:42   ` Mark Rutland
2018-02-26 15:42     ` Mark Rutland
2018-02-26 15:04 ` [RFC PATCH v2 08/12] arm64: cmpxchg: Include build_bug.h instead of bug.h for BUILD_BUG Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:48   ` Mark Rutland
2018-02-26 15:48     ` Mark Rutland
2018-02-27 17:33     ` Will Deacon
2018-02-27 17:33       ` Will Deacon
2018-02-27 17:34       ` Mark Rutland
2018-02-27 17:34         ` Mark Rutland
2018-02-26 15:04 ` [RFC PATCH v2 09/12] asm-generic/bitops/atomic.h: Rewrite using atomic_fetch_* Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 10/12] asm-generic/bitops/lock.h: " Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:04 ` [RFC PATCH v2 11/12] arm64: Replace our atomic/lock bitop implementations with asm-generic Will Deacon
2018-02-26 15:04   ` Will Deacon
2018-02-26 15:05 ` [RFC PATCH v2 12/12] arm64: bitops: Include <asm-generic/bitops/ext2-atomic-setbit.h> Will Deacon
2018-02-26 15:05   ` Will Deacon
2018-03-01  7:16 ` [RFC PATCH v2 00/12] Rewrite asm-generic/bitops/{atomic,lock}.h and use on arm64 Masahiro Yamada
2018-03-01  7:16   ` Masahiro Yamada
2018-03-12  3:56   ` Masahiro Yamada
2018-03-12  3:56     ` Masahiro Yamada
2018-03-19 17:21     ` Will Deacon
2018-03-19 17:21       ` Will Deacon
