linux-kernel.vger.kernel.org archive mirror
* [RFC][PATCH 00/24] arch: Provide atomic logic ops
@ 2015-07-09 17:28 Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 01/24] alpha: Provide atomic_{or,xor,and} Peter Zijlstra
                   ` (24 more replies)
  0 siblings, 25 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:28 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

Currently the tree contains an incoherent mess of atomic_{set,clear}_mask()
and atomic_or() (but no atomic_{and,nand,xor}()).

The archs that implement atomic_{set,clear}_mask() are not even consistent
about its signature.

Implement atomic_{or,and,xor}() on all archs and deprecate
atomic_{set,clear}_mask().

Notes:
 - FRV got a total rewrite of its atomic implementation,
 - Blackfin could use one, or at least some macro help,
 - TILE still needs to be done.

The series has been compile-tested by the build bot; only TILE fails to build.

---
 arch/alpha/include/asm/atomic.h        |    6 +
 arch/arc/include/asm/atomic.h          |    4 
 arch/arm/include/asm/atomic.h          |    6 +
 arch/arm64/include/asm/atomic.h        |    6 +
 arch/avr32/include/asm/atomic.h        |   12 ++
 arch/blackfin/include/asm/atomic.h     |   16 ++-
 arch/blackfin/kernel/bfin_ksyms.c      |    7 -
 arch/blackfin/mach-bf561/atomic.S      |   30 +++---
 arch/blackfin/mach-common/smp.c        |    2 
 arch/frv/include/asm/atomic.h          |  107 ++++++++++++------------
 arch/frv/include/asm/atomic_defs.h     |  143 +++++++++++++++++++++++++++++++++
 arch/frv/include/asm/bitops.h          |   99 ++--------------------
 arch/frv/kernel/dma.c                  |    6 -
 arch/frv/lib/Makefile                  |    2 
 arch/frv/lib/atomic-lib.c              |    7 +
 arch/frv/lib/atomic-ops.S              |  110 -------------------------
 arch/frv/lib/atomic64-ops.S            |   94 ---------------------
 arch/hexagon/include/asm/atomic.h      |    3 
 arch/ia64/include/asm/atomic.h         |   24 ++++-
 arch/m32r/include/asm/atomic.h         |   44 ----------
 arch/m32r/kernel/smp.c                 |    4 
 arch/m68k/include/asm/atomic.h         |   13 ---
 arch/metag/include/asm/atomic_lnkget.h |   37 --------
 arch/metag/include/asm/atomic_lock1.h  |   23 -----
 arch/mips/include/asm/atomic.h         |    6 +
 arch/mn10300/include/asm/atomic.h      |   70 ----------------
 arch/mn10300/mm/tlb-smp.c              |    2 
 arch/parisc/include/asm/atomic.h       |    6 +
 arch/powerpc/include/asm/atomic.h      |    6 +
 arch/powerpc/kernel/misc_32.S          |   19 ----
 arch/s390/include/asm/atomic.h         |   41 +++++----
 arch/s390/kernel/time.c                |    4 
 arch/s390/kvm/interrupt.c              |   28 +++---
 arch/s390/kvm/kvm-s390.c               |   24 ++---
 arch/sh/include/asm/atomic-grb.h       |   42 ---------
 arch/sh/include/asm/atomic-irq.h       |   21 ----
 arch/sh/include/asm/atomic-llsc.h      |   31 -------
 arch/sparc/include/asm/atomic_32.h     |    3 
 arch/sparc/include/asm/atomic_64.h     |    3 
 arch/sparc/lib/atomic32.c              |   22 ++++-
 arch/sparc/lib/atomic_64.S             |    6 +
 arch/sparc/lib/ksyms.c                 |    3 
 arch/x86/include/asm/atomic.h          |   25 +++--
 arch/x86/include/asm/atomic64_32.h     |   14 +++
 arch/x86/include/asm/atomic64_64.h     |   15 +++
 arch/xtensa/include/asm/atomic.h       |   72 ----------------
 drivers/gpu/drm/i915/i915_drv.c        |    2 
 drivers/gpu/drm/i915/i915_gem.c        |    2 
 drivers/gpu/drm/i915/i915_irq.c        |    4 
 drivers/s390/scsi/zfcp_aux.c           |    2 
 drivers/s390/scsi/zfcp_erp.c           |   62 +++++++-------
 drivers/s390/scsi/zfcp_fc.c            |    8 -
 drivers/s390/scsi/zfcp_fsf.c           |   26 +++---
 drivers/s390/scsi/zfcp_qdio.c          |   14 +--
 include/asm-generic/atomic.h           |   11 +-
 include/asm-generic/atomic64.h         |    3 
 include/linux/atomic.h                 |   30 +++---
 lib/atomic64.c                         |    3 
 58 files changed, 569 insertions(+), 866 deletions(-)



* [RFC][PATCH 01/24] alpha: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
@ 2015-07-09 17:28 ` Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 02/24] arc: " Peter Zijlstra
                   ` (23 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:28 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-alpha-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 614 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
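
The new ops follow the existing atomic_add()/atomic_sub() template:
void return and, as with all non-value-returning atomics, no implied
memory barrier. The contract every arch ends up providing (a sketch of
the interface, not a literal header excerpt):

	void atomic_and(int i, atomic_t *v);	/* v->counter &= i */
	void atomic_or(int i, atomic_t *v);	/* v->counter |= i */
	void atomic_xor(int i, atomic_t *v);	/* v->counter ^= i */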

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/alpha/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -109,6 +109,12 @@ static __inline__ long atomic64_##op##_r
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 02/24] arc: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 01/24] alpha: Provide atomic_{or,xor,and} Peter Zijlstra
@ 2015-07-09 17:28 ` Peter Zijlstra
  2015-07-10  4:30   ` Vineet Gupta
  2015-07-09 17:28 ` [RFC][PATCH 03/24] arm: " Peter Zijlstra
                   ` (22 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:28 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-arc-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 1033 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arc/include/asm/atomic.h |   14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -125,13 +125,23 @@ static inline int atomic_##op##_return(i
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
 ATOMIC_OP(and, &=, and)
-
-#define atomic_clear_mask(mask, v) atomic_and(~(mask), (v))
+ATOMIC_OP(or, |=, or)
+ATOMIC_OP(xor, ^=, xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
+
 /**
  * __atomic_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t




* [RFC][PATCH 03/24] arm: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 01/24] alpha: Provide atomic_{or,xor,and} Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 02/24] arc: " Peter Zijlstra
@ 2015-07-09 17:28 ` Peter Zijlstra
  2015-07-09 18:02   ` Peter Zijlstra
  2015-07-09 17:28 ` [RFC][PATCH 04/24] arm64: " Peter Zijlstra
                   ` (21 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:28 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-arm-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 852 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -193,6 +193,9 @@ static inline int __atomic_add_unless(at
 
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
+ATOMIC_OP(and, &=, and)
+ATOMIC_OP(or, |=, orr)
+ATOMIC_OP(xor, ^=, eor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -320,6 +323,9 @@ static inline long long atomic64_##op##_
 
 ATOMIC64_OPS(add, adds, adc)
 ATOMIC64_OPS(sub, subs, sbc)
+ATOMIC64_OP(and, and, and)
+ATOMIC64_OP(or, orr, orr)
+ATOMIC64_OP(xor, eor, eor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (2 preceding siblings ...)
  2015-07-09 17:28 ` [RFC][PATCH 03/24] arm: " Peter Zijlstra
@ 2015-07-09 17:28 ` Peter Zijlstra
  2015-07-10  8:42   ` Will Deacon
  2015-07-15 16:01   ` Will Deacon
  2015-07-09 17:29 ` [RFC][PATCH 05/24] avr32: " Peter Zijlstra
                   ` (20 subsequent siblings)
  24 siblings, 2 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:28 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-arm64-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 810 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arm64/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/arm64/include/asm/atomic.h
+++ b/arch/arm64/include/asm/atomic.h
@@ -84,6 +84,9 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add, add)
 ATOMIC_OPS(sub, sub)
+ATOMIC_OP(and, and)
+ATOMIC_OP(or, orr)
+ATOMIC_OP(xor, eor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -182,6 +185,9 @@ static inline long atomic64_##op##_retur
 
 ATOMIC64_OPS(add, add)
 ATOMIC64_OPS(sub, sub)
+ATOMIC64_OP(and, and)
+ATOMIC64_OP(or, orr)
+ATOMIC64_OP(xor, eor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 05/24] avr32: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (3 preceding siblings ...)
  2015-07-09 17:28 ` [RFC][PATCH 04/24] arm64: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 06/24] blackfin: " Peter Zijlstra
                   ` (19 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-avr32-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 811 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
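
avr32 only had the value-returning flavour, so the void ops are built
by instantiating the existing ATOMIC_OP_RETURN() template and throwing
the result away; ATOMIC_OP(and, and) below expands to (sketch):

	static inline void atomic_and(int i, atomic_t *v)
	{
		(void)__atomic_and_return(i, v);
	}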


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/avr32/include/asm/atomic.h |   12 ++++++++++++
 1 file changed, 12 insertions(+)

--- a/arch/avr32/include/asm/atomic.h
+++ b/arch/avr32/include/asm/atomic.h
@@ -44,6 +44,18 @@ static inline int __atomic_##op##_return
 ATOMIC_OP_RETURN(sub, sub, rKs21)
 ATOMIC_OP_RETURN(add, add, r)
 
+#define ATOMIC_OP(op, asm_op)						\
+ATOMIC_OP_RETURN(op, asm_op, r)						\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	(void)__atomic_##op##_return(i, v);				\
+}
+
+ATOMIC_OP(and, and)
+ATOMIC_OP(or, or)
+ATOMIC_OP(xor, eor)
+
+#undef ATOMIC_OP
 #undef ATOMIC_OP_RETURN
 
 /*




* [RFC][PATCH 06/24] blackfin: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (4 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 05/24] avr32: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 07/24] hexagon: " Peter Zijlstra
                   ` (18 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-blackfin-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 4898 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

TODO: use inline asm or at least asm macros to collapse the lot.
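
The subtle part is the ___raw_bit_clear_asm change below:
___raw_atomic_clear_asm negated the mask internally (r3 = ~r1), while
___raw_atomic_and_asm takes it as-is, so the caller now has to build
the final AND mask itself. In C terms (sketch):

	/* old: negation hidden inside the callee */
	__raw_atomic_clear_asm(ptr, 1 << bit);	/* *ptr &= ~(1 << bit) */

	/* new: caller passes the ready-made AND mask */
	__raw_atomic_and_asm(ptr, ~(1 << bit));	/* *ptr &= ~(1 << bit) */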


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/blackfin/include/asm/atomic.h |   26 +++++++++++++++++++-------
 arch/blackfin/kernel/bfin_ksyms.c  |    7 ++++---
 arch/blackfin/mach-bf561/atomic.S  |   30 +++++++++++++++---------------
 3 files changed, 38 insertions(+), 25 deletions(-)

--- a/arch/blackfin/include/asm/atomic.h
+++ b/arch/blackfin/include/asm/atomic.h
@@ -16,19 +16,31 @@
 #include <linux/types.h>
 
 asmlinkage int __raw_uncached_fetch_asm(const volatile int *ptr);
-asmlinkage int __raw_atomic_update_asm(volatile int *ptr, int value);
-asmlinkage int __raw_atomic_clear_asm(volatile int *ptr, int value);
-asmlinkage int __raw_atomic_set_asm(volatile int *ptr, int value);
+asmlinkage int __raw_atomic_add_asm(volatile int *ptr, int value);
+
+asmlinkage int __raw_atomic_and_asm(volatile int *ptr, int value);
+asmlinkage int __raw_atomic_or_asm(volatile int *ptr, int value);
 asmlinkage int __raw_atomic_xor_asm(volatile int *ptr, int value);
 asmlinkage int __raw_atomic_test_asm(const volatile int *ptr, int value);
 
 #define atomic_read(v) __raw_uncached_fetch_asm(&(v)->counter)
 
-#define atomic_add_return(i, v) __raw_atomic_update_asm(&(v)->counter, i)
-#define atomic_sub_return(i, v) __raw_atomic_update_asm(&(v)->counter, -(i))
+#define atomic_add_return(i, v) __raw_atomic_add_asm(&(v)->counter, i)
+#define atomic_sub_return(i, v) __raw_atomic_add_asm(&(v)->counter, -(i))
 
-#define atomic_clear_mask(m, v) __raw_atomic_clear_asm(&(v)->counter, m)
-#define atomic_set_mask(m, v)   __raw_atomic_set_asm(&(v)->counter, m)
+#define atomic_or(i, v)  (void)__raw_atomic_or_asm(&(v)->counter, i)
+#define atomic_and(i, v) (void)__raw_atomic_and_asm(&(v)->counter, i)
+#define atomic_xor(i, v) (void)__raw_atomic_xor_asm(&(v)->counter, i)
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
 
 #endif
 
--- a/arch/blackfin/kernel/bfin_ksyms.c
+++ b/arch/blackfin/kernel/bfin_ksyms.c
@@ -83,11 +83,12 @@ EXPORT_SYMBOL(insl);
 EXPORT_SYMBOL(insl_16);
 
 #ifdef CONFIG_SMP
-EXPORT_SYMBOL(__raw_atomic_update_asm);
-EXPORT_SYMBOL(__raw_atomic_clear_asm);
-EXPORT_SYMBOL(__raw_atomic_set_asm);
+EXPORT_SYMBOL(__raw_atomic_add_asm);
+EXPORT_SYMBOL(__raw_atomic_and_asm);
+EXPORT_SYMBOL(__raw_atomic_or_asm);
 EXPORT_SYMBOL(__raw_atomic_xor_asm);
 EXPORT_SYMBOL(__raw_atomic_test_asm);
+
 EXPORT_SYMBOL(__raw_xchg_1_asm);
 EXPORT_SYMBOL(__raw_xchg_2_asm);
 EXPORT_SYMBOL(__raw_xchg_4_asm);
--- a/arch/blackfin/mach-bf561/atomic.S
+++ b/arch/blackfin/mach-bf561/atomic.S
@@ -587,10 +587,10 @@ ENDPROC(___raw_write_unlock_asm)
  * r0 = ptr
  * r1 = value
  *
- * Add a signed value to a 32bit word and return the new value atomically.
+ * ADD a signed value to a 32bit word and return the new value atomically.
  * Clobbers: r3:0, p1:0
  */
-ENTRY(___raw_atomic_update_asm)
+ENTRY(___raw_atomic_add_asm)
 	p1 = r0;
 	r3 = r1;
 	[--sp] = rets;
@@ -603,19 +603,19 @@ ENTRY(___raw_atomic_update_asm)
 	r0 = r3;
 	rets = [sp++];
 	rts;
-ENDPROC(___raw_atomic_update_asm)
+ENDPROC(___raw_atomic_add_asm)
 
 /*
  * r0 = ptr
  * r1 = mask
  *
- * Clear the mask bits from a 32bit word and return the old 32bit value
+ * AND the mask bits from a 32bit word and return the old 32bit value
  * atomically.
  * Clobbers: r3:0, p1:0
  */
-ENTRY(___raw_atomic_clear_asm)
+ENTRY(___raw_atomic_and_asm)
 	p1 = r0;
-	r3 = ~r1;
+	r3 = r1;
 	[--sp] = rets;
 	call _get_core_lock;
 	r2 = [p1];
@@ -627,17 +627,17 @@ ENTRY(___raw_atomic_clear_asm)
 	r0 = r3;
 	rets = [sp++];
 	rts;
-ENDPROC(___raw_atomic_clear_asm)
+ENDPROC(___raw_atomic_and_asm)
 
 /*
  * r0 = ptr
  * r1 = mask
  *
- * Set the mask bits into a 32bit word and return the old 32bit value
+ * OR the mask bits into a 32bit word and return the old 32bit value
  * atomically.
  * Clobbers: r3:0, p1:0
  */
-ENTRY(___raw_atomic_set_asm)
+ENTRY(___raw_atomic_or_asm)
 	p1 = r0;
 	r3 = r1;
 	[--sp] = rets;
@@ -651,7 +651,7 @@ ENTRY(___raw_atomic_set_asm)
 	r0 = r3;
 	rets = [sp++];
 	rts;
-ENDPROC(___raw_atomic_set_asm)
+ENDPROC(___raw_atomic_or_asm)
 
 /*
  * r0 = ptr
@@ -787,7 +787,7 @@ ENTRY(___raw_bit_set_asm)
 	r2 = r1;
 	r1 = 1;
 	r1 <<= r2;
-	jump ___raw_atomic_set_asm
+	jump ___raw_atomic_or_asm
 ENDPROC(___raw_bit_set_asm)
 
 /*
@@ -798,10 +798,10 @@ ENDPROC(___raw_bit_set_asm)
  * Clobbers: r3:0, p1:0
  */
 ENTRY(___raw_bit_clear_asm)
-	r2 = r1;
-	r1 = 1;
-	r1 <<= r2;
-	jump ___raw_atomic_clear_asm
+	r2 = 1;
+	r2 <<= r1;
+	r1 = ~r2;
+	jump ___raw_atomic_and_asm
 ENDPROC(___raw_bit_clear_asm)
 
 /*




* [RFC][PATCH 07/24] hexagon: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (5 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 06/24] blackfin: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 08/24] ia64: " Peter Zijlstra
                   ` (17 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-hexagon-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 561 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/hexagon/include/asm/atomic.h |    3 +++
 1 file changed, 3 insertions(+)

--- a/arch/hexagon/include/asm/atomic.h
+++ b/arch/hexagon/include/asm/atomic.h
@@ -131,6 +131,9 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN




* [RFC][PATCH 08/24] ia64: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (6 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 07/24] hexagon: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 09/24] m32r: " Peter Zijlstra
                   ` (16 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-ia64-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 1733 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
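
ia64 reuses the existing ATOMIC_OP() cmpxchg-loop template (the same
one add/sub use) and the new atomic_{and,or,xor} macros simply discard
the value that loop returns. Roughly, for 'and' (a sketch eliding the
CMPXCHG_BUGCHECK debug bits):

	static __inline__ int
	ia64_atomic_and(int i, atomic_t *v)
	{
		__s32 old, new;

		do {
			old = atomic_read(v);
			new = old & i;
		} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic_t)) != old);
		return new;
	}

	#define atomic_and(i,v)	(void)ia64_atomic_and(i,v)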

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/ia64/include/asm/atomic.h |   24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -45,8 +45,6 @@ ia64_atomic_##op (int i, atomic_t *v)
 ATOMIC_OP(add, +)
 ATOMIC_OP(sub, -)
 
-#undef ATOMIC_OP
-
 #define atomic_add_return(i,v)						\
 ({									\
 	int __ia64_aar_i = (i);						\
@@ -71,6 +69,16 @@ ATOMIC_OP(sub, -)
 		: ia64_atomic_sub(__ia64_asr_i, v);			\
 })
 
+ATOMIC_OP(and, &)
+ATOMIC_OP(or, |)
+ATOMIC_OP(xor, ^)
+
+#define atomic_and(i,v)	(void)ia64_atomic_and(i,v)
+#define atomic_or(i,v)	(void)ia64_atomic_or(i,v)
+#define atomic_xor(i,v)	(void)ia64_atomic_xor(i,v)
+
+#undef ATOMIC_OP
+
 #define ATOMIC64_OP(op, c_op)						\
 static __inline__ long							\
 ia64_atomic64_##op (__s64 i, atomic64_t *v)				\
@@ -89,8 +97,6 @@ ia64_atomic64_##op (__s64 i, atomic64_t
 ATOMIC64_OP(add, +)
 ATOMIC64_OP(sub, -)
 
-#undef ATOMIC64_OP
-
 #define atomic64_add_return(i,v)					\
 ({									\
 	long __ia64_aar_i = (i);					\
@@ -115,6 +121,16 @@ ATOMIC64_OP(sub, -)
 		: ia64_atomic64_sub(__ia64_asr_i, v);			\
 })
 
+ATOMIC64_OP(and, &)
+ATOMIC64_OP(or, |)
+ATOMIC64_OP(xor, ^)
+
+#define atomic64_and(i,v)	(void)ia64_atomic64_and(i,v)
+#define atomic64_or(i,v)	(void)ia64_atomic64_or(i,v)
+#define atomic64_xor(i,v)	(void)ia64_atomic64_xor(i,v)
+
+#undef ATOMIC64_OP
+
 #define atomic_cmpxchg(v, old, new) (cmpxchg(&((v)->counter), old, new))
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 




* [RFC][PATCH 09/24] m32r: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (7 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 08/24] ia64: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 10/24] m68k: " Peter Zijlstra
                   ` (15 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-m32r-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 1814 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/m32r/include/asm/atomic.h |   41 +++++++----------------------------------
 1 file changed, 7 insertions(+), 34 deletions(-)

--- a/arch/m32r/include/asm/atomic.h
+++ b/arch/m32r/include/asm/atomic.h
@@ -93,6 +93,9 @@ static __inline__ int atomic_##op##_retu
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -240,44 +243,14 @@ static __inline__ int __atomic_add_unles
 }
 
 
-static __inline__ void atomic_clear_mask(unsigned long  mask, atomic_t *addr)
+static __inline__ __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-	unsigned long flags;
-	unsigned long tmp;
-
-	local_irq_save(flags);
-	__asm__ __volatile__ (
-		"# atomic_clear_mask		\n\t"
-		DCACHE_CLEAR("%0", "r5", "%1")
-		M32R_LOCK" %0, @%1;		\n\t"
-		"and	%0, %2;			\n\t"
-		M32R_UNLOCK" %0, @%1;		\n\t"
-		: "=&r" (tmp)
-		: "r" (addr), "r" (~mask)
-		: "memory"
-		__ATOMIC_CLOBBER
-	);
-	local_irq_restore(flags);
+	atomic_and(~mask, v);
 }
 
-static __inline__ void atomic_set_mask(unsigned long  mask, atomic_t *addr)
+static __inline__ __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-	unsigned long flags;
-	unsigned long tmp;
-
-	local_irq_save(flags);
-	__asm__ __volatile__ (
-		"# atomic_set_mask		\n\t"
-		DCACHE_CLEAR("%0", "r5", "%1")
-		M32R_LOCK" %0, @%1;		\n\t"
-		"or	%0, %2;			\n\t"
-		M32R_UNLOCK" %0, @%1;		\n\t"
-		: "=&r" (tmp)
-		: "r" (addr), "r" (mask)
-		: "memory"
-		__ATOMIC_CLOBBER
-	);
-	local_irq_restore(flags);
+	atomic_or(mask, v);
 }
 
 #endif	/* _ASM_M32R_ATOMIC_H */




* [RFC][PATCH 10/24] m68k: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (8 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 09/24] m32r: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-10  9:13   ` Geert Uytterhoeven
  2015-07-09 17:29 ` [RFC][PATCH 11/24] metag: " Peter Zijlstra
                   ` (14 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-m68k-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 1283 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/m68k/include/asm/atomic.h |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -82,6 +82,9 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add, +=, add)
 ATOMIC_OPS(sub, -=, sub)
+ATOMIC_OP(and, &=, and)
+ATOMIC_OP(or, |=, or)
+ATOMIC_OP(xor, ^=, eor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -176,14 +179,14 @@ static inline int atomic_add_negative(in
 	return c != 0;
 }
 
-static inline void atomic_clear_mask(unsigned long mask, unsigned long *v)
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-	__asm__ __volatile__("andl %1,%0" : "+m" (*v) : ASM_DI (~(mask)));
+	atomic_and(~mask, v);
 }
 
-static inline void atomic_set_mask(unsigned long mask, unsigned long *v)
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-	__asm__ __volatile__("orl %1,%0" : "+m" (*v) : ASM_DI (mask));
+	atomic_or(mask, v);
 }
 
 static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)




* [RFC][PATCH 11/24] metag: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (9 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 10/24] m68k: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 12/24] mips: " Peter Zijlstra
                   ` (13 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-metag-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2672 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/metag/include/asm/atomic_lnkget.h |   35 ++++++---------------------------
 arch/metag/include/asm/atomic_lock1.h  |   21 ++++++-------------
 2 files changed, 14 insertions(+), 42 deletions(-)

--- a/arch/metag/include/asm/atomic_lnkget.h
+++ b/arch/metag/include/asm/atomic_lnkget.h
@@ -73,43 +73,22 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-	int temp;
-
-	asm volatile (
-		"1:	LNKGETD %0, [%1]\n"
-		"	AND	%0, %0, %2\n"
-		"	LNKSETD	[%1] %0\n"
-		"	DEFR	%0, TXSTAT\n"
-		"	ANDT	%0, %0, #HI(0x3f000000)\n"
-		"	CMPT	%0, #HI(0x02000000)\n"
-		"	BNZ	1b\n"
-		: "=&d" (temp)
-		: "da" (&v->counter), "bd" (~mask)
-		: "cc");
+	atomic_and(~mask, v);
 }
 
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-	int temp;
-
-	asm volatile (
-		"1:	LNKGETD %0, [%1]\n"
-		"	OR	%0, %0, %2\n"
-		"	LNKSETD	[%1], %0\n"
-		"	DEFR	%0, TXSTAT\n"
-		"	ANDT	%0, %0, #HI(0x3f000000)\n"
-		"	CMPT	%0, #HI(0x02000000)\n"
-		"	BNZ	1b\n"
-		: "=&d" (temp)
-		: "da" (&v->counter), "bd" (mask)
-		: "cc");
+	atomic_or(mask, v);
 }
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
--- a/arch/metag/include/asm/atomic_lock1.h
+++ b/arch/metag/include/asm/atomic_lock1.h
@@ -68,29 +68,22 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
+ATOMIC_OP(and, &=)
+ATOMIC_OP(or, |=)
+ATOMIC_OP(xor, ^=)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-	unsigned long flags;
-
-	__global_lock1(flags);
-	fence();
-	v->counter &= ~mask;
-	__global_unlock1(flags);
+	atomic_and(~mask, v);
 }
 
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-	unsigned long flags;
-
-	__global_lock1(flags);
-	fence();
-	v->counter |= mask;
-	__global_unlock1(flags);
+	atomic_or(mask, v);
 }
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)




* [RFC][PATCH 12/24] mips: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (10 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 11/24] metag: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 18:45   ` Ralf Baechle
  2015-07-09 17:29 ` [RFC][PATCH 13/24] mn10300: " Peter Zijlstra
                   ` (12 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-mips-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 854 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/mips/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -136,6 +136,9 @@ static __inline__ int atomic_##op##_retu
 
 ATOMIC_OPS(add, +=, addu)
 ATOMIC_OPS(sub, -=, subu)
+ATOMIC_OP(and, &=, and)
+ATOMIC_OP(or, |=, or)
+ATOMIC_OP(xor, ^=, xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -416,6 +419,9 @@ static __inline__ long atomic64_##op##_r
 
 ATOMIC64_OPS(add, +=, daddu)
 ATOMIC64_OPS(sub, -=, dsubu)
+ATOMIC64_OP(and, &=, and)
+ATOMIC64_OP(or, |=, or)
+ATOMIC64_OP(xor, ^=, xor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 13/24] mn10300: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (11 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 12/24] mips: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 14/24] parisc: " Peter Zijlstra
                   ` (11 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-mn10300-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2291 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/mn10300/include/asm/atomic.h |   54 ++++----------------------------------
 1 file changed, 7 insertions(+), 47 deletions(-)

--- a/arch/mn10300/include/asm/atomic.h
+++ b/arch/mn10300/include/asm/atomic.h
@@ -88,6 +88,9 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -134,31 +137,9 @@ static inline void atomic_dec(atomic_t *
  *
  * Atomically clears the bits set in mask from the memory word specified.
  */
-static inline void atomic_clear_mask(unsigned long mask, unsigned long *addr)
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-#ifdef CONFIG_SMP
-	int status;
-
-	asm volatile(
-		"1:	mov	%3,(_AAR,%2)	\n"
-		"	mov	(_ADR,%2),%0	\n"
-		"	and	%4,%0		\n"
-		"	mov	%0,(_ADR,%2)	\n"
-		"	mov	(_ADR,%2),%0	\n"	/* flush */
-		"	mov	(_ASR,%2),%0	\n"
-		"	or	%0,%0		\n"
-		"	bne	1b		\n"
-		: "=&r"(status), "=m"(*addr)
-		: "a"(ATOMIC_OPS_BASE_ADDR), "r"(addr), "r"(~mask)
-		: "memory", "cc");
-#else
-	unsigned long flags;
-
-	mask = ~mask;
-	flags = arch_local_cli_save();
-	*addr &= mask;
-	arch_local_irq_restore(flags);
-#endif
+	atomic_and(~mask, v);
 }
 
 /**
@@ -168,30 +149,9 @@ static inline void atomic_clear_mask(uns
  *
  * Atomically sets the bits set in mask from the memory word specified.
  */
-static inline void atomic_set_mask(unsigned long mask, unsigned long *addr)
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-#ifdef CONFIG_SMP
-	int status;
-
-	asm volatile(
-		"1:	mov	%3,(_AAR,%2)	\n"
-		"	mov	(_ADR,%2),%0	\n"
-		"	or	%4,%0		\n"
-		"	mov	%0,(_ADR,%2)	\n"
-		"	mov	(_ADR,%2),%0	\n"	/* flush */
-		"	mov	(_ASR,%2),%0	\n"
-		"	or	%0,%0		\n"
-		"	bne	1b		\n"
-		: "=&r"(status), "=m"(*addr)
-		: "a"(ATOMIC_OPS_BASE_ADDR), "r"(addr), "r"(mask)
-		: "memory", "cc");
-#else
-	unsigned long flags;
-
-	flags = arch_local_cli_save();
-	*addr |= mask;
-	arch_local_irq_restore(flags);
-#endif
+	atomic_or(mask, v);
 }
 
 #endif /* __KERNEL__ */




* [RFC][PATCH 14/24] parisc: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (12 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 13/24] mn10300: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 15/24] powerpc: " Peter Zijlstra
                   ` (10 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-parisc-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 805 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/parisc/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/parisc/include/asm/atomic.h
+++ b/arch/parisc/include/asm/atomic.h
@@ -125,6 +125,9 @@ static __inline__ int atomic_##op##_retu
 
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
+ATOMIC_OP(and, &=)
+ATOMIC_OP(or, |=)
+ATOMIC_OP(xor, ^=)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -185,6 +188,9 @@ static __inline__ s64 atomic64_##op##_re
 
 ATOMIC64_OPS(add, +=)
 ATOMIC64_OPS(sub, -=)
+ATOMIC64_OP(and, &=)
+ATOMIC64_OP(or, |=)
+ATOMIC64_OP(xor, ^=)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 15/24] powerpc: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (13 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 14/24] parisc: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 21:49   ` Benjamin Herrenschmidt
  2015-07-09 17:29 ` [RFC][PATCH 16/24] sh: " Peter Zijlstra
                   ` (9 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-powerpc-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 816 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/powerpc/include/asm/atomic.h |    6 ++++++
 1 file changed, 6 insertions(+)

--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -66,6 +66,9 @@ static __inline__ int atomic_##op##_retu
 
 ATOMIC_OPS(add, add)
 ATOMIC_OPS(sub, subf)
+ATOMIC_OP(and, and)
+ATOMIC_OP(or, or)
+ATOMIC_OP(xor, xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -304,6 +307,9 @@ static __inline__ long atomic64_##op##_r
 
 ATOMIC64_OPS(add, add)
 ATOMIC64_OPS(sub, subf)
+ATOMIC64_OP(and, and)
+ATOMIC64_OP(or, or)
+ATOMIC64_OP(xor, xor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 16/24] sh: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (14 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 15/24] powerpc: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 17/24] sparc: " Peter Zijlstra
                   ` (8 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-sh-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 4269 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/sh/include/asm/atomic-grb.h  |   42 ++------------------------------------
 arch/sh/include/asm/atomic-irq.h  |   21 ++-----------------
 arch/sh/include/asm/atomic-llsc.h |   31 ++--------------------------
 arch/sh/include/asm/atomic.h      |   10 +++++++++
 4 files changed, 19 insertions(+), 85 deletions(-)

--- a/arch/sh/include/asm/atomic-grb.h
+++ b/arch/sh/include/asm/atomic-grb.h
@@ -47,48 +47,12 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	int tmp;
-	unsigned int _mask = ~mask;
-
-	__asm__ __volatile__ (
-		"   .align 2              \n\t"
-		"   mova    1f,   r0      \n\t" /* r0 = end point */
-		"   mov    r15,   r1      \n\t" /* r1 = saved sp */
-		"   mov    #-6,   r15     \n\t" /* LOGIN: r15 = size */
-		"   mov.l  @%1,   %0      \n\t" /* load  old value */
-		"   and     %2,   %0      \n\t" /* add */
-		"   mov.l   %0,   @%1     \n\t" /* store new value */
-		"1: mov     r1,   r15     \n\t" /* LOGOUT */
-		: "=&r" (tmp),
-		  "+r"  (v)
-		: "r"   (_mask)
-		: "memory" , "r0", "r1");
-}
-
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	int tmp;
-
-	__asm__ __volatile__ (
-		"   .align 2              \n\t"
-		"   mova    1f,   r0      \n\t" /* r0 = end point */
-		"   mov    r15,   r1      \n\t" /* r1 = saved sp */
-		"   mov    #-6,   r15     \n\t" /* LOGIN: r15 = size */
-		"   mov.l  @%1,   %0      \n\t" /* load  old value */
-		"   or      %2,   %0      \n\t" /* or */
-		"   mov.l   %0,   @%1     \n\t" /* store new value */
-		"1: mov     r1,   r15     \n\t" /* LOGOUT */
-		: "=&r" (tmp),
-		  "+r"  (v)
-		: "r"   (mask)
-		: "memory" , "r0", "r1");
-}
-
 #endif /* __ASM_SH_ATOMIC_GRB_H */
--- a/arch/sh/include/asm/atomic-irq.h
+++ b/arch/sh/include/asm/atomic-irq.h
@@ -37,27 +37,12 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add, +=)
 ATOMIC_OPS(sub, -=)
+ATOMIC_OP(and, &=)
+ATOMIC_OP(or, |=)
+ATOMIC_OP(xor, ^=)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
-	v->counter &= ~mask;
-	raw_local_irq_restore(flags);
-}
-
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	unsigned long flags;
-
-	raw_local_irq_save(flags);
-	v->counter |= mask;
-	raw_local_irq_restore(flags);
-}
-
 #endif /* __ASM_SH_ATOMIC_IRQ_H */
--- a/arch/sh/include/asm/atomic-llsc.h
+++ b/arch/sh/include/asm/atomic-llsc.h
@@ -52,37 +52,12 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	unsigned long tmp;
-
-	__asm__ __volatile__ (
-"1:	movli.l @%2, %0		! atomic_clear_mask	\n"
-"	and	%1, %0					\n"
-"	movco.l	%0, @%2					\n"
-"	bf	1b					\n"
-	: "=&z" (tmp)
-	: "r" (~mask), "r" (&v->counter)
-	: "t");
-}
-
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	unsigned long tmp;
-
-	__asm__ __volatile__ (
-"1:	movli.l @%2, %0		! atomic_set_mask	\n"
-"	or	%1, %0					\n"
-"	movco.l	%0, @%2					\n"
-"	bf	1b					\n"
-	: "=&z" (tmp)
-	: "r" (mask), "r" (&v->counter)
-	: "t");
-}
-
 #endif /* __ASM_SH_ATOMIC_LLSC_H */
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -25,6 +25,16 @@
 #include <asm/atomic-irq.h>
 #endif
 
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
+
 #define atomic_add_negative(a, v)	(atomic_add_return((a), (v)) < 0)
 #define atomic_dec_return(v)		atomic_sub_return(1, (v))
 #define atomic_inc_return(v)		atomic_add_return(1, (v))




* [RFC][PATCH 17/24] sparc: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (15 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 16/24] sh: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 18:05   ` David Miller
  2015-07-09 17:29 ` [RFC][PATCH 18/24] xtensa: " Peter Zijlstra
                   ` (7 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-sparc-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2989 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
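
sparc32 has no suitable hardware atomic, so atomic32.c implements the
new ops under the existing ATOMIC_HASH() spinlocks; the old ATOMIC_OP()
template is split into a value-returning and a void variant, the latter
generating e.g. (sketch):

	void atomic_or(int i, atomic_t *v)
	{
		unsigned long flags;

		spin_lock_irqsave(ATOMIC_HASH(v), flags);
		v->counter |= i;
		spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
	}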


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/sparc/include/asm/atomic_32.h |    3 +++
 arch/sparc/include/asm/atomic_64.h |    3 +++
 arch/sparc/lib/atomic32.c          |   22 +++++++++++++++++++---
 arch/sparc/lib/atomic_64.S         |    6 ++++++
 arch/sparc/lib/ksyms.c             |    3 +++
 5 files changed, 34 insertions(+), 3 deletions(-)

--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -21,6 +21,9 @@
 #define ATOMIC_INIT(i)  { (i) }
 
 int atomic_add_return(int, atomic_t *);
+void atomic_and(int, atomic_t *);
+void atomic_or(int, atomic_t *);
+void atomic_xor(int, atomic_t *);
 int atomic_cmpxchg(atomic_t *, int, int);
 int atomic_xchg(atomic_t *, int);
 int __atomic_add_unless(atomic_t *, int, int);
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -32,6 +32,9 @@ long atomic64_##op##_return(long, atomic
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
--- a/arch/sparc/lib/atomic32.c
+++ b/arch/sparc/lib/atomic32.c
@@ -27,22 +27,38 @@ static DEFINE_SPINLOCK(dummy);
 
 #endif /* SMP */
 
-#define ATOMIC_OP(op, cop)						\
+#define ATOMIC_OP_RETURN(op, c_op)					\
 int atomic_##op##_return(int i, atomic_t *v)				\
 {									\
 	int ret;							\
 	unsigned long flags;						\
 	spin_lock_irqsave(ATOMIC_HASH(v), flags);			\
 									\
-	ret = (v->counter cop i);					\
+	ret = (v->counter c_op i);					\
 									\
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);			\
 	return ret;							\
 }									\
 EXPORT_SYMBOL(atomic_##op##_return);
 
-ATOMIC_OP(add, +=)
+#define ATOMIC_OP(op, c_op)						\
+void atomic_##op(int i, atomic_t *v)					\
+{									\
+	unsigned long flags;						\
+	spin_lock_irqsave(ATOMIC_HASH(v), flags);			\
+									\
+	v->counter c_op i;						\
+									\
+	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);			\
+}									\
+EXPORT_SYMBOL(atomic_##op);
+
+ATOMIC_OP_RETURN(add, +=)
+ATOMIC_OP(and, &=)
+ATOMIC_OP(or, |=)
+ATOMIC_OP(xor, ^=)
 
+#undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
 int atomic_xchg(atomic_t *v, int new)
--- a/arch/sparc/lib/atomic_64.S
+++ b/arch/sparc/lib/atomic_64.S
@@ -47,6 +47,9 @@ ENDPROC(atomic_##op##_return);
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
@@ -84,6 +87,9 @@ ENDPROC(atomic64_##op##_return);
 
 ATOMIC64_OPS(add)
 ATOMIC64_OPS(sub)
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN
--- a/arch/sparc/lib/ksyms.c
+++ b/arch/sparc/lib/ksyms.c
@@ -111,6 +111,9 @@ EXPORT_SYMBOL(atomic64_##op##_return);
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN




* [RFC][PATCH 18/24] xtensa: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (16 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 17/24] sparc: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 19/24] s390: " Peter Zijlstra
                   ` (6 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-xtensa-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2705 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/xtensa/include/asm/atomic.h |   82 ++++++---------------------------------
 1 file changed, 13 insertions(+), 69 deletions(-)

--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -144,11 +144,24 @@ static inline int atomic_##op##_return(i
 
 ATOMIC_OPS(add)
 ATOMIC_OPS(sub)
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
 
 #undef ATOMIC_OPS
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
 /**
  * atomic_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
@@ -250,75 +263,6 @@ static __inline__ int __atomic_add_unles
 	return c;
 }
 
-
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-#if XCHAL_HAVE_S32C1I
-	unsigned long tmp;
-	int result;
-
-	__asm__ __volatile__(
-			"1:     l32i    %1, %3, 0\n"
-			"       wsr     %1, scompare1\n"
-			"       and     %0, %1, %2\n"
-			"       s32c1i  %0, %3, 0\n"
-			"       bne     %0, %1, 1b\n"
-			: "=&a" (result), "=&a" (tmp)
-			: "a" (~mask), "a" (v)
-			: "memory"
-			);
-#else
-	unsigned int all_f = -1;
-	unsigned int vval;
-
-	__asm__ __volatile__(
-			"       rsil    a15,"__stringify(LOCKLEVEL)"\n"
-			"       l32i    %0, %2, 0\n"
-			"       xor     %1, %4, %3\n"
-			"       and     %0, %0, %4\n"
-			"       s32i    %0, %2, 0\n"
-			"       wsr     a15, ps\n"
-			"       rsync\n"
-			: "=&a" (vval), "=a" (mask)
-			: "a" (v), "a" (all_f), "1" (mask)
-			: "a15", "memory"
-			);
-#endif
-}
-
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-#if XCHAL_HAVE_S32C1I
-	unsigned long tmp;
-	int result;
-
-	__asm__ __volatile__(
-			"1:     l32i    %1, %3, 0\n"
-			"       wsr     %1, scompare1\n"
-			"       or      %0, %1, %2\n"
-			"       s32c1i  %0, %3, 0\n"
-			"       bne     %0, %1, 1b\n"
-			: "=&a" (result), "=&a" (tmp)
-			: "a" (mask), "a" (v)
-			: "memory"
-			);
-#else
-	unsigned int vval;
-
-	__asm__ __volatile__(
-			"       rsil    a15,"__stringify(LOCKLEVEL)"\n"
-			"       l32i    %0, %2, 0\n"
-			"       or      %0, %0, %1\n"
-			"       s32i    %0, %2, 0\n"
-			"       wsr     a15, ps\n"
-			"       rsync\n"
-			: "=&a" (vval)
-			: "a" (mask), "a" (v)
-			: "a15", "memory"
-			);
-#endif
-}
-
 #endif /* __KERNEL__ */
 
 #endif /* _XTENSA_ATOMIC_H */




* [RFC][PATCH 19/24] s390: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (17 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 18/24] xtensa: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-10  7:17   ` Heiko Carstens
  2015-07-09 17:29 ` [RFC][PATCH 20/24] x86: " Peter Zijlstra
                   ` (5 subsequent siblings)
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-s390-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 3401 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
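
s390 already has two flavours of __ATOMIC_LOOP for the new strings to
plug into: with the interlocked-access facility (z196 and later) a
single instruction, e.g. "lax"/"laxg" (load and exclusive or), and on
older machines a compare-and-swap loop applying "xr"/"xgr" to the old
value. Either way the caller sees the same thing (sketch):

	atomic_xor(0x1, &v);		/* flip bit 0 of v->counter */
	atomic64_and(~0xffL, &v64);	/* clear the low byte */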

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/s390/include/asm/atomic.h |   45 ++++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 14 deletions(-)

--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -28,6 +28,7 @@
 #define __ATOMIC_AND	"lan"
 #define __ATOMIC_ADD	"laa"
 #define __ATOMIC_BARRIER "bcr	14,0\n"
+#define __ATOMIC_XOR	"lax"
 
 #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier)		\
 ({									\
@@ -50,6 +51,7 @@
 #define __ATOMIC_AND	"nr"
 #define __ATOMIC_ADD	"ar"
 #define __ATOMIC_BARRIER "\n"
+#define __ATOMIC_XOR	"xr"
 
 #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier)		\
 ({									\
@@ -118,14 +120,26 @@ static inline void atomic_add(int i, ato
 #define atomic_dec_return(_v)		atomic_sub_return(1, _v)
 #define atomic_dec_and_test(_v)		(atomic_sub_return(1, _v) == 0)
 
-static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
+#define ATOMIC_OP(op, OP)						\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	__ATOMIC_LOOP(v, i, __ATOMIC_##OP, __ATOMIC_NO_BARRIER);	\
+}
+
+ATOMIC_OP(and, AND)
+ATOMIC_OP(or, OR)
+ATOMIC_OP(xor, XOR)
+
+#undef ATOMIC_OP
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
 {
-	__ATOMIC_LOOP(v, ~mask, __ATOMIC_AND, __ATOMIC_NO_BARRIER);
+	atomic_and(~mask, v);
 }
 
-static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
 {
-	__ATOMIC_LOOP(v, mask, __ATOMIC_OR, __ATOMIC_NO_BARRIER);
+	atomic_or(mask, v);
 }
 
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
@@ -167,6 +181,7 @@ static inline int __atomic_add_unless(at
 #define __ATOMIC64_OR	"laog"
 #define __ATOMIC64_AND	"lang"
 #define __ATOMIC64_ADD	"laag"
+#define __ATOMIC64_XOR	"laxg"
 #define __ATOMIC64_BARRIER "bcr	14,0\n"
 
 #define __ATOMIC64_LOOP(ptr, op_val, op_string, __barrier)		\
@@ -189,6 +204,7 @@ static inline int __atomic_add_unless(at
 #define __ATOMIC64_OR	"ogr"
 #define __ATOMIC64_AND	"ngr"
 #define __ATOMIC64_ADD	"agr"
+#define __ATOMIC64_XOR	"xgr"
 #define __ATOMIC64_BARRIER "\n"
 
 #define __ATOMIC64_LOOP(ptr, op_val, op_string, __barrier)		\
@@ -247,16 +263,6 @@ static inline void atomic64_add(long lon
 	__ATOMIC64_LOOP(v, i, __ATOMIC64_ADD, __ATOMIC64_NO_BARRIER);
 }
 
-static inline void atomic64_clear_mask(unsigned long mask, atomic64_t *v)
-{
-	__ATOMIC64_LOOP(v, ~mask, __ATOMIC64_AND, __ATOMIC64_NO_BARRIER);
-}
-
-static inline void atomic64_set_mask(unsigned long mask, atomic64_t *v)
-{
-	__ATOMIC64_LOOP(v, mask, __ATOMIC64_OR, __ATOMIC64_NO_BARRIER);
-}
-
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
 static inline long long atomic64_cmpxchg(atomic64_t *v,
@@ -270,6 +276,17 @@ static inline long long atomic64_cmpxchg
 	return old;
 }
 
+#define ATOMIC64_OP(op, OP)						\
+static inline void atomic64_##op(long i, atomic64_t *v)			\
+{									\
+	__ATOMIC64_LOOP(v, i, __ATOMIC64_##OP, __ATOMIC64_NO_BARRIER);	\
+}
+
+ATOMIC64_OP(and, AND)
+ATOMIC64_OP(or, OR)
+ATOMIC64_OP(xor, XOR)
+
+#undef ATOMIC64_OP
 #undef __ATOMIC64_LOOP
 
 static inline int atomic64_add_unless(atomic64_t *v, long long i, long long u)




* [RFC][PATCH 20/24] x86: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (18 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 19/24] s390: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 21/24] atomic: " Peter Zijlstra
                   ` (4 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-x86-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2855 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
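
For reference, ATOMIC_OP(or) below expands to roughly the following
(a sketch reconstructed from the macro, not a verbatim quote):

	static inline void atomic_or(int i, atomic_t *v)
	{
		/* LOCK_PREFIX makes the read-modify-write "orl" atomic on SMP */
		asm volatile(LOCK_PREFIX "orl %1,%0"
				: "+m" (v->counter)	/* in/out memory operand */
				: "ir" (i)		/* immediate or register */
				: "memory");
	}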


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/atomic.h      |   35 ++++++++++++++++++++++++++---------
 arch/x86/include/asm/atomic64_32.h |   14 ++++++++++++++
 arch/x86/include/asm/atomic64_64.h |   15 +++++++++++++++
 3 files changed, 55 insertions(+), 9 deletions(-)

--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -182,6 +182,23 @@ static inline int atomic_xchg(atomic_t *
 	return xchg(&v->counter, new);
 }
 
+#define ATOMIC_OP(op)							\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	asm volatile(LOCK_PREFIX #op"l %1,%0"				\
+			: "+m" (v->counter)				\
+			: "ir" (i)					\
+			: "memory");					\
+}
+
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
+#define CONFIG_ARCH_HAS_ATOMIC_OR
+
+#undef ATOMIC_OP
+
 /**
  * __atomic_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t
@@ -219,15 +236,15 @@ static __always_inline short int atomic_
 	return *v;
 }
 
-/* These are x86-specific, used by some header files */
-#define atomic_clear_mask(mask, addr)				\
-	asm volatile(LOCK_PREFIX "andl %0,%1"			\
-		     : : "r" (~(mask)), "m" (*(addr)) : "memory")
-
-#define atomic_set_mask(mask, addr)				\
-	asm volatile(LOCK_PREFIX "orl %0,%1"			\
-		     : : "r" ((unsigned)(mask)), "m" (*(addr))	\
-		     : "memory")
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
 
 #ifdef CONFIG_X86_32
 # include <asm/atomic64_32.h>
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -313,4 +313,18 @@ static inline long long atomic64_dec_if_
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
+#define ATOMIC64_OP(op, c_op)						\
+static inline void atomic64_##op(long long i, atomic64_t *v)		\
+{									\
+	long long old, c = 0;						\
+	while ((old = atomic64_cmpxchg(v, c, c c_op i)) != c)		\
+		c = old;						\
+}
+
+ATOMIC64_OP(and, &)
+ATOMIC64_OP(or, |)
+ATOMIC64_OP(xor, ^)
+
+#undef ATOMIC64_OP
+
 #endif /* _ASM_X86_ATOMIC64_32_H */
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -220,4 +220,19 @@ static inline long atomic64_dec_if_posit
 	return dec;
 }
 
+#define ATOMIC64_OP(op)							\
+static inline void atomic64_##op(long i, atomic64_t *v)			\
+{									\
+	asm volatile(LOCK_PREFIX #op"q %1,%0"				\
+			: "+m" (v->counter)				\
+			: "ir" (i)					\
+			: "memory");					\
+}
+
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
+
+#undef ATOMIC64_OP
+
 #endif /* _ASM_X86_ATOMIC64_64_H */




* [RFC][PATCH 21/24] atomic: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (19 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 20/24] x86: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 22/24] frv: Rewrite atomic implementation Peter Zijlstra
                   ` (3 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-generic-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 2603 bytes --]

Implement atomic logic ops -- atomic_{or,xor,and}.

These will replace the atomic_{set,clear}_mask functions that are
available on some archs.
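
Where no better primitive exists this reduces to a cmpxchg loop; a
minimal sketch of that shape, mirroring the atomic_or() fallback this
patch removes from include/linux/atomic.h:

	static inline void atomic_xor(int i, atomic_t *v)
	{
		int old;
		int new;

		do {
			old = atomic_read(v);
			new = old ^ i;		/* apply the op */
		} while (atomic_cmpxchg(v, old, new) != old);
	}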


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/x86/include/asm/atomic.h  |    2 --
 include/asm-generic/atomic.h   |   21 ++++++++++++++++-----
 include/asm-generic/atomic64.h |    3 +++
 include/linux/atomic.h         |   13 -------------
 lib/atomic64.c                 |    3 +++
 5 files changed, 22 insertions(+), 20 deletions(-)

--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -195,8 +195,6 @@ ATOMIC_OP(and)
 ATOMIC_OP(or)
 ATOMIC_OP(xor)
 
-#define CONFIG_ARCH_HAS_ATOMIC_OR
-
 #undef ATOMIC_OP
 
 /**
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -98,20 +98,31 @@ ATOMIC_OP_RETURN(add, +)
 ATOMIC_OP_RETURN(sub, -)
 #endif
 
-#ifndef atomic_clear_mask
+#ifndef atomic_and
 ATOMIC_OP(and, &)
-#define atomic_clear_mask(i, v) atomic_and(~(i), (v))
 #endif
 
-#ifndef atomic_set_mask
-#define CONFIG_ARCH_HAS_ATOMIC_OR
+#ifndef atomic_or
 ATOMIC_OP(or, |)
-#define atomic_set_mask(i, v)	atomic_or((i), (v))
+#endif
+
+#ifndef atomic_xor
+ATOMIC_OP(xor, ^)
 #endif
 
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
+
 /*
  * Atomic operations that C can't guarantee us.  Useful for
  * resource counting etc..
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -31,6 +31,9 @@ extern long long atomic64_##op##_return(
 
 ATOMIC64_OPS(add)
 ATOMIC64_OPS(sub)
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -111,19 +111,6 @@ static inline int atomic_dec_if_positive
 }
 #endif
 
-#ifndef CONFIG_ARCH_HAS_ATOMIC_OR
-static inline void atomic_or(int i, atomic_t *v)
-{
-	int old;
-	int new;
-
-	do {
-		old = atomic_read(v);
-		new = old | i;
-	} while (atomic_cmpxchg(v, old, new) != old);
-}
-#endif /* #ifndef CONFIG_ARCH_HAS_ATOMIC_OR */
-
 #include <asm-generic/atomic-long.h>
 #ifdef CONFIG_GENERIC_ATOMIC64
 #include <asm-generic/atomic64.h>
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -102,6 +102,9 @@ EXPORT_SYMBOL(atomic64_##op##_return);
 
 ATOMIC64_OPS(add, +=)
 ATOMIC64_OPS(sub, -=)
+ATOMIC64_OP(and, &=)
+ATOMIC64_OP(or, |=)
+ATOMIC64_OP(xor, ^=)
 
 #undef ATOMIC64_OPS
 #undef ATOMIC64_OP_RETURN




* [RFC][PATCH 22/24] frv: Rewrite atomic implementation
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (20 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 21/24] atomic: " Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 17:29 ` [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions Peter Zijlstra
                   ` (2 subsequent siblings)
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-frv-atomic_logic_ops.patch --]
[-- Type: text/plain, Size: 23025 bytes --]

Mostly complete rewrite of the FRV atomic implementation: instead of
using assembly files, use inline assembler.

The out-of-line CONFIG option makes a bit of a mess of things, but a
little CPP trickery gets that done too.
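
A rough sketch of that trickery (hypothetical names and a stand-in,
non-atomic body; the real thing is atomic_defs.h below) -- one header
serves the inline build, the out-of-line users and the out-of-line
library:

	#ifndef __ATOMIC_LIB__
	# ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
	#  define ATOMIC_OP(op)	extern int __atomic_##op(int i, int *v);
	# else
	#  define ATOMIC_QUALS	static inline	/* users get inline copies */
	# endif
	#else /* included from lib/atomic-lib.c */
	# ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
	#  define ATOMIC_QUALS			/* emit external definitions */
	# else
	#  define ATOMIC_OP(op)			/* inline copies already exist */
	# endif
	#endif

	#ifndef ATOMIC_OP
	# define ATOMIC_OP(op)					\
	ATOMIC_QUALS int __atomic_##op(int i, int *v)		\
	{							\
		return *v += i;		/* stand-in body */	\
	}
	#endif

	ATOMIC_OP(add)
	#undef ATOMIC_OP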

FRV already had the atomic logic ops, but under a non-standard name;
the reimplementation provides the generic names as well as the
intermediate form required for the bitops implementation.

The slightly inconsistent __atomic32_fetch_##op naming is because
__atomic_fetch_##op conflicts with GCC builtin functions.

The 64-bit atomic ops use the inline assembly %Ln construct to access
the low word register (r+1). AFAIK this construct was not previously
used in the kernel and is completely undocumented, but I found it in
the FRV GCC code and it seems to work.
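
A minimal sketch of the construct, assuming "e" is the even/odd
register-pair constraint with %0 naming the high word and %L0 the low
word (r+1); the ld/cst retry loop of the real macro is elided, so
this is illustrative rather than atomic:

	static inline long long add64_sketch(long long i, long long *v)
	{
		long long val = *v;

		asm("addcc	%L0,%L1,%L0,icc0\n\t"	/* low words, set carry */
		    "addx	%0,%1,%0,icc0"		/* high words plus carry */
		    : "+e"(val)
		    : "e"(i)
		    : "icc0");

		*v = val;
		return val;
	}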

FRV had a non-standard definition of atomic_{clear,set}_mask() which
would work on types other than atomic_t; the one user relying on that
(arch/frv/kernel/dma.c) got converted to use the new intermediate
form.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/frv/include/asm/atomic.h      |  117 ++++++++++++++++--------------
 arch/frv/include/asm/atomic_defs.h |  143 +++++++++++++++++++++++++++++++++++++
 arch/frv/include/asm/bitops.h      |   99 ++-----------------------
 arch/frv/kernel/dma.c              |    6 -
 arch/frv/lib/Makefile              |    2 
 arch/frv/lib/atomic-lib.c          |    7 +
 arch/frv/lib/atomic-ops.S          |  110 ----------------------------
 arch/frv/lib/atomic64-ops.S        |   94 ------------------------
 8 files changed, 228 insertions(+), 350 deletions(-)

--- a/arch/frv/include/asm/atomic.h
+++ b/arch/frv/include/asm/atomic.h
@@ -15,7 +15,6 @@
 #define _ASM_ATOMIC_H
 
 #include <linux/types.h>
-#include <asm/spr-regs.h>
 #include <asm/cmpxchg.h>
 #include <asm/barrier.h>
 
@@ -23,6 +22,8 @@
 #error not SMP safe
 #endif
 
+#include <asm/atomic_defs.h>
+
 /*
  * Atomic operations that C can't guarantee us.  Useful for
  * resource counting etc..
@@ -34,56 +35,26 @@
 #define atomic_read(v)		ACCESS_ONCE((v)->counter)
 #define atomic_set(v, i)	(((v)->counter) = (i))
 
-#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
-static inline int atomic_add_return(int i, atomic_t *v)
+static inline int atomic_inc_return(atomic_t *v)
 {
-	unsigned long val;
+	return __atomic_add_return(1, &v->counter);
+}
 
-	asm("0:						\n"
-	    "	orcc		gr0,gr0,gr0,icc3	\n"	/* set ICC3.Z */
-	    "	ckeq		icc3,cc7		\n"
-	    "	ld.p		%M0,%1			\n"	/* LD.P/ORCR must be atomic */
-	    "	orcr		cc7,cc7,cc3		\n"	/* set CC3 to true */
-	    "	add%I2		%1,%2,%1		\n"
-	    "	cst.p		%1,%M0		,cc3,#1	\n"
-	    "	corcc		gr29,gr29,gr0	,cc3,#1	\n"	/* clear ICC3.Z if store happens */
-	    "	beq		icc3,#0,0b		\n"
-	    : "+U"(v->counter), "=&r"(val)
-	    : "NPr"(i)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
+static inline int atomic_dec_return(atomic_t *v)
+{
+	return __atomic_sub_return(1, &v->counter);
+}
 
-	return val;
+static inline int atomic_add_return(int i, atomic_t *v)
+{
+	return __atomic_add_return(i, &v->counter);
 }
 
 static inline int atomic_sub_return(int i, atomic_t *v)
 {
-	unsigned long val;
-
-	asm("0:						\n"
-	    "	orcc		gr0,gr0,gr0,icc3	\n"	/* set ICC3.Z */
-	    "	ckeq		icc3,cc7		\n"
-	    "	ld.p		%M0,%1			\n"	/* LD.P/ORCR must be atomic */
-	    "	orcr		cc7,cc7,cc3		\n"	/* set CC3 to true */
-	    "	sub%I2		%1,%2,%1		\n"
-	    "	cst.p		%1,%M0		,cc3,#1	\n"
-	    "	corcc		gr29,gr29,gr0	,cc3,#1	\n"	/* clear ICC3.Z if store happens */
-	    "	beq		icc3,#0,0b		\n"
-	    : "+U"(v->counter), "=&r"(val)
-	    : "NPr"(i)
-	    : "memory", "cc7", "cc3", "icc3"
-	    );
-
-	return val;
+	return __atomic_sub_return(i, &v->counter);
 }
 
-#else
-
-extern int atomic_add_return(int i, atomic_t *v);
-extern int atomic_sub_return(int i, atomic_t *v);
-
-#endif
-
 static inline int atomic_add_negative(int i, atomic_t *v)
 {
 	return atomic_add_return(i, v) < 0;
@@ -101,17 +72,14 @@ static inline void atomic_sub(int i, ato
 
 static inline void atomic_inc(atomic_t *v)
 {
-	atomic_add_return(1, v);
+	atomic_inc_return(v);
 }
 
 static inline void atomic_dec(atomic_t *v)
 {
-	atomic_sub_return(1, v);
+	atomic_dec_return(v);
 }
 
-#define atomic_dec_return(v)		atomic_sub_return(1, (v))
-#define atomic_inc_return(v)		atomic_add_return(1, (v))
-
 #define atomic_sub_and_test(i,v)	(atomic_sub_return((i), (v)) == 0)
 #define atomic_dec_and_test(v)		(atomic_sub_return(1, (v)) == 0)
 #define atomic_inc_and_test(v)		(atomic_add_return(1, (v)) == 0)
@@ -120,18 +88,19 @@ static inline void atomic_dec(atomic_t *
  * 64-bit atomic ops
  */
 typedef struct {
-	volatile long long counter;
+	long long counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-static inline long long atomic64_read(atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
 	long long counter;
 
 	asm("ldd%I1 %M1,%0"
 	    : "=e"(counter)
 	    : "m"(v->counter));
+
 	return counter;
 }
 
@@ -142,10 +111,25 @@ static inline void atomic64_set(atomic64
 		     : "e"(i));
 }
 
-extern long long atomic64_inc_return(atomic64_t *v);
-extern long long atomic64_dec_return(atomic64_t *v);
-extern long long atomic64_add_return(long long i, atomic64_t *v);
-extern long long atomic64_sub_return(long long i, atomic64_t *v);
+static inline long long atomic64_inc_return(atomic64_t *v)
+{
+	return __atomic64_add_return(1, &v->counter);
+}
+
+static inline long long atomic64_dec_return(atomic64_t *v)
+{
+	return __atomic64_sub_return(1, &v->counter);
+}
+
+static inline long long atomic64_add_return(long long i, atomic64_t *v)
+{
+	return __atomic64_add_return(i, &v->counter);
+}
+
+static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+{
+	return __atomic64_sub_return(i, &v->counter);
+}
 
 static inline long long atomic64_add_negative(long long i, atomic64_t *v)
 {
@@ -176,6 +160,7 @@ static inline void atomic64_dec(atomic64
 #define atomic64_dec_and_test(v)	(atomic64_dec_return((v)) == 0)
 #define atomic64_inc_and_test(v)	(atomic64_inc_return((v)) == 0)
 
+
 #define atomic_cmpxchg(v, old, new)	(cmpxchg(&(v)->counter, old, new))
 #define atomic_xchg(v, new)		(xchg(&(v)->counter, new))
 #define atomic64_cmpxchg(v, old, new)	(__cmpxchg_64(old, new, &(v)->counter))
@@ -196,5 +181,31 @@ static __inline__ int __atomic_add_unles
 	return c;
 }
 
+#define ATOMIC_OP(op)							\
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	(void)__atomic32_fetch_##op(i, &v->counter);			\
+}									\
+									\
+static inline void atomic64_##op(long long i, atomic64_t *v)		\
+{									\
+	(void)__atomic64_fetch_##op(i, &v->counter);			\
+}
+
+ATOMIC_OP(or)
+ATOMIC_OP(and)
+ATOMIC_OP(xor)
+
+#undef ATOMIC_OP
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_and(~mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
 
 #endif /* _ASM_ATOMIC_H */
--- /dev/null
+++ b/arch/frv/include/asm/atomic_defs.h
@@ -0,0 +1,143 @@
+
+#include <asm/spr-regs.h>
+
+#ifndef __ATOMIC_LIB__
+
+#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+
+#define ATOMIC_EXPORT(x)
+
+#define ATOMIC_OP_RETURN(op)						\
+extern int __atomic_##op##_return(int i, int *v);			\
+extern long long __atomic64_##op##_return(long long i, long long *v);
+
+#define ATOMIC_FETCH_OP(op)						\
+extern int __atomic32_fetch_##op(int i, int *v);				\
+extern long long __atomic64_fetch_##op(long long i, long long *v);
+
+#else
+#define ATOMIC_QUALS	static inline
+#endif
+
+#else /* __ATOMIC_LIB__ */
+
+#ifdef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
+#define ATOMIC_QUALS
+#define ATOMIC_EXPORT(x)	EXPORT_SYMBOL(x)
+#else
+#define ATOMIC_OP_RETURN(op)
+#define ATOMIC_FETCH_OP(op)
+#endif
+
+#endif /* __ATOMIC_LIB__ */
+
+#ifndef ATOMIC_OP_RETURN
+#define ATOMIC_OP_RETURN(op)						\
+ATOMIC_QUALS int __atomic_##op##_return(int i, int *v)			\
+{									\
+	int val;							\
+									\
+	asm volatile(							\
+	    "0:						\n"		\
+	    "	orcc		gr0,gr0,gr0,icc3	\n"		\
+	    "	ckeq		icc3,cc7		\n"		\
+	    "	ld.p		%M0,%1			\n"		\
+	    "	orcr		cc7,cc7,cc3		\n"		\
+	    "   "#op"%I2	%1,%2,%1		\n"		\
+	    "	cst.p		%1,%M0		,cc3,#1	\n"		\
+	    "	corcc		gr29,gr29,gr0	,cc3,#1	\n"		\
+	    "	beq		icc3,#0,0b		\n"		\
+	    : "+m"(*v), "=&r"(val)					\
+	    : "NPr"(i)							\
+	    : "memory", "cc7", "cc3", "icc3"				\
+	    );								\
+									\
+	return val;							\
+}									\
+ATOMIC_EXPORT(__atomic_##op##_return);					\
+									\
+ATOMIC_QUALS long long __atomic64_##op##_return(long long i, long long *v)	\
+{									\
+	long long val;							\
+									\
+	asm volatile(							\
+	    "0:						\n"		\
+	    "	orcc		gr0,gr0,gr0,icc3	\n"		\
+	    "	ckeq		icc3,cc7		\n"		\
+	    "	ldd.p		%M0,%1			\n"		\
+	    "	orcr		cc7,cc7,cc3		\n"		\
+	    "   "#op"%I2cc	%L1,%L2,%L1,icc0	\n"		\
+	    "   "#op"x%I2	%1,%2,%1,icc0		\n"		\
+	    "	cstd.p		%1,%M0		,cc3,#1	\n"		\
+	    "	corcc		gr29,gr29,gr0	,cc3,#1	\n"		\
+	    "	beq		icc3,#0,0b		\n"		\
+	    : "=m"(*v), "=&e"(val)					\
+	    : "NPe"(i)							\
+	    : "memory", "cc7", "cc3", "icc0", "icc3"			\
+	    );								\
+									\
+	return val;							\
+}									\
+ATOMIC_EXPORT(__atomic64_##op##_return);
+#endif
+
+#ifndef ATOMIC_FETCH_OP
+#define ATOMIC_FETCH_OP(op)						\
+ATOMIC_QUALS int __atomic32_fetch_##op(int i, int *v)			\
+{									\
+	int old, tmp;							\
+									\
+	asm volatile(							\
+		"0:						\n"	\
+		"	orcc		gr0,gr0,gr0,icc3	\n"	\
+		"	ckeq		icc3,cc7		\n"	\
+		"	ld.p		%M0,%1			\n"	\
+		"	orcr		cc7,cc7,cc3		\n"	\
+		"	"#op"%I3	%1,%3,%2		\n"	\
+		"	cst.p		%2,%M0		,cc3,#1	\n"	\
+		"	corcc		gr29,gr29,gr0	,cc3,#1	\n"	\
+		"	beq		icc3,#0,0b		\n"	\
+		: "+m"(*v), "=&r"(old), "=r"(tmp)			\
+		: "NPr"(i)						\
+		: "memory", "cc7", "cc3", "icc3"			\
+		);							\
+									\
+	return old;							\
+}									\
+ATOMIC_EXPORT(__atomic32_fetch_##op);					\
+									\
+ATOMIC_QUALS long long __atomic64_fetch_##op(long long i, long long *v)	\
+{									\
+	long long old, tmp;						\
+									\
+	asm volatile(							\
+		"0:						\n"	\
+		"	orcc		gr0,gr0,gr0,icc3	\n"	\
+		"	ckeq		icc3,cc7		\n"	\
+		"	ldd.p		%M0,%1			\n"	\
+		"	orcr		cc7,cc7,cc3		\n"	\
+		"	"#op"%I3	%L1,%L3,%L2		\n"	\
+		"	"#op"%I3	%1,%3,%2		\n"	\
+		"	cstd.p		%2,%M0		,cc3,#1	\n"	\
+		"	corcc		gr29,gr29,gr0	,cc3,#1	\n"	\
+		"	beq		icc3,#0,0b		\n"	\
+		: "+m"(*v), "=&e"(old), "=e"(tmp)			\
+		: "NPe"(i)						\
+		: "memory", "cc7", "cc3", "icc3"			\
+		);							\
+									\
+	return old;							\
+}									\
+ATOMIC_EXPORT(__atomic64_fetch_##op);
+#endif
+
+ATOMIC_FETCH_OP(or)
+ATOMIC_FETCH_OP(and)
+ATOMIC_FETCH_OP(xor)
+
+ATOMIC_OP_RETURN(add)
+ATOMIC_OP_RETURN(sub)
+
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_EXPORT
--- a/arch/frv/include/asm/bitops.h
+++ b/arch/frv/include/asm/bitops.h
@@ -25,109 +25,30 @@
 
 #include <asm-generic/bitops/ffz.h>
 
-#ifndef CONFIG_FRV_OUTOFLINE_ATOMIC_OPS
-static inline
-unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-		"0:						\n"
-		"	orcc		gr0,gr0,gr0,icc3	\n"	/* set ICC3.Z */
-		"	ckeq		icc3,cc7		\n"
-		"	ld.p		%M0,%1			\n"	/* LD.P/ORCR are atomic */
-		"	orcr		cc7,cc7,cc3		\n"	/* set CC3 to true */
-		"	and%I3		%1,%3,%2		\n"
-		"	cst.p		%2,%M0		,cc3,#1	\n"	/* if store happens... */
-		"	corcc		gr29,gr29,gr0	,cc3,#1	\n"	/* ... clear ICC3.Z */
-		"	beq		icc3,#0,0b		\n"
-		: "+U"(*v), "=&r"(old), "=r"(tmp)
-		: "NPr"(~mask)
-		: "memory", "cc7", "cc3", "icc3"
-		);
-
-	return old;
-}
-
-static inline
-unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-		"0:						\n"
-		"	orcc		gr0,gr0,gr0,icc3	\n"	/* set ICC3.Z */
-		"	ckeq		icc3,cc7		\n"
-		"	ld.p		%M0,%1			\n"	/* LD.P/ORCR are atomic */
-		"	orcr		cc7,cc7,cc3		\n"	/* set CC3 to true */
-		"	or%I3		%1,%3,%2		\n"
-		"	cst.p		%2,%M0		,cc3,#1	\n"	/* if store happens... */
-		"	corcc		gr29,gr29,gr0	,cc3,#1	\n"	/* ... clear ICC3.Z */
-		"	beq		icc3,#0,0b		\n"
-		: "+U"(*v), "=&r"(old), "=r"(tmp)
-		: "NPr"(mask)
-		: "memory", "cc7", "cc3", "icc3"
-		);
-
-	return old;
-}
-
-static inline
-unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v)
-{
-	unsigned long old, tmp;
-
-	asm volatile(
-		"0:						\n"
-		"	orcc		gr0,gr0,gr0,icc3	\n"	/* set ICC3.Z */
-		"	ckeq		icc3,cc7		\n"
-		"	ld.p		%M0,%1			\n"	/* LD.P/ORCR are atomic */
-		"	orcr		cc7,cc7,cc3		\n"	/* set CC3 to true */
-		"	xor%I3		%1,%3,%2		\n"
-		"	cst.p		%2,%M0		,cc3,#1	\n"	/* if store happens... */
-		"	corcc		gr29,gr29,gr0	,cc3,#1	\n"	/* ... clear ICC3.Z */
-		"	beq		icc3,#0,0b		\n"
-		: "+U"(*v), "=&r"(old), "=r"(tmp)
-		: "NPr"(mask)
-		: "memory", "cc7", "cc3", "icc3"
-		);
-
-	return old;
-}
-
-#else
-
-extern unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v);
-extern unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v);
-extern unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v);
-
-#endif
-
-#define atomic_clear_mask(mask, v)	atomic_test_and_ANDNOT_mask((mask), (v))
-#define atomic_set_mask(mask, v)	atomic_test_and_OR_mask((mask), (v))
+#include <asm/atomic_defs.h>
 
 static inline int test_and_clear_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_ANDNOT_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_and(~mask, ptr) & mask) != 0;
 }
 
 static inline int test_and_set_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_OR_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_or(mask, ptr) & mask) != 0;
 }
 
 static inline int test_and_change_bit(unsigned long nr, volatile void *addr)
 {
-	volatile unsigned long *ptr = addr;
-	unsigned long mask = 1UL << (nr & 31);
+	unsigned int *ptr = (void *)addr;
+	unsigned int mask = 1UL << (nr & 31);
 	ptr += nr >> 5;
-	return (atomic_test_and_XOR_mask(mask, ptr) & mask) != 0;
+	return (__atomic32_fetch_xor(mask, ptr) & mask) != 0;
 }
 
 static inline void clear_bit(unsigned long nr, volatile void *addr)
--- a/arch/frv/kernel/dma.c
+++ b/arch/frv/kernel/dma.c
@@ -109,13 +109,13 @@ static struct frv_dma_channel frv_dma_ch
 
 static DEFINE_RWLOCK(frv_dma_channels_lock);
 
-unsigned long frv_dma_inprogress;
+unsigned int frv_dma_inprogress;
 
 #define frv_clear_dma_inprogress(channel) \
-	atomic_clear_mask(1 << (channel), &frv_dma_inprogress);
+	(void)__atomic32_fetch_and(~(1 << (channel)), &frv_dma_inprogress);
 
 #define frv_set_dma_inprogress(channel) \
-	atomic_set_mask(1 << (channel), &frv_dma_inprogress);
+	(void)__atomic32_fetch_or(1 << (channel), &frv_dma_inprogress);
 
 /*****************************************************************************/
 /*
--- a/arch/frv/lib/Makefile
+++ b/arch/frv/lib/Makefile
@@ -5,4 +5,4 @@
 lib-y := \
 	__ashldi3.o __lshrdi3.o __muldi3.o __ashrdi3.o __negdi2.o __ucmpdi2.o \
 	checksum.o memcpy.o memset.o atomic-ops.o atomic64-ops.o \
-	outsl_ns.o outsl_sw.o insl_ns.o insl_sw.o cache.o
+	outsl_ns.o outsl_sw.o insl_ns.o insl_sw.o cache.o atomic-lib.o
--- /dev/null
+++ b/arch/frv/lib/atomic-lib.c
@@ -0,0 +1,7 @@
+
+#include <linux/export.h>
+#include <asm/atomic.h>
+
+#define __ATOMIC_LIB__
+
+#include <asm/atomic_defs.h>
--- a/arch/frv/lib/atomic-ops.S
+++ b/arch/frv/lib/atomic-ops.S
@@ -19,116 +19,6 @@
 
 ###############################################################################
 #
-# unsigned long atomic_test_and_ANDNOT_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl		atomic_test_and_ANDNOT_mask
-        .type		atomic_test_and_ANDNOT_mask,@function
-atomic_test_and_ANDNOT_mask:
-	not.p		gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ld.p		@(gr9,gr0),gr8			/* LD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	and		gr8,gr10,gr11
-	cst.p		gr11,@(gr9,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic_test_and_ANDNOT_mask, .-atomic_test_and_ANDNOT_mask
-
-###############################################################################
-#
-# unsigned long atomic_test_and_OR_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl		atomic_test_and_OR_mask
-        .type		atomic_test_and_OR_mask,@function
-atomic_test_and_OR_mask:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ld.p		@(gr9,gr0),gr8			/* LD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	or		gr8,gr10,gr11
-	cst.p		gr11,@(gr9,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic_test_and_OR_mask, .-atomic_test_and_OR_mask
-
-###############################################################################
-#
-# unsigned long atomic_test_and_XOR_mask(unsigned long mask, volatile unsigned long *v);
-#
-###############################################################################
-	.globl		atomic_test_and_XOR_mask
-        .type		atomic_test_and_XOR_mask,@function
-atomic_test_and_XOR_mask:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ld.p		@(gr9,gr0),gr8			/* LD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	xor		gr8,gr10,gr11
-	cst.p		gr11,@(gr9,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic_test_and_XOR_mask, .-atomic_test_and_XOR_mask
-
-###############################################################################
-#
-# int atomic_add_return(int i, atomic_t *v)
-#
-###############################################################################
-	.globl		atomic_add_return
-        .type		atomic_add_return,@function
-atomic_add_return:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ld.p		@(gr9,gr0),gr8			/* LD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	add		gr8,gr10,gr8
-	cst.p		gr8,@(gr9,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic_add_return, .-atomic_add_return
-
-###############################################################################
-#
-# int atomic_sub_return(int i, atomic_t *v)
-#
-###############################################################################
-	.globl		atomic_sub_return
-        .type		atomic_sub_return,@function
-atomic_sub_return:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ld.p		@(gr9,gr0),gr8			/* LD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	sub		gr8,gr10,gr8
-	cst.p		gr8,@(gr9,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic_sub_return, .-atomic_sub_return
-
-###############################################################################
-#
 # uint32_t __xchg_32(uint32_t i, uint32_t *v)
 #
 ###############################################################################
--- a/arch/frv/lib/atomic64-ops.S
+++ b/arch/frv/lib/atomic64-ops.S
@@ -20,100 +20,6 @@
 
 ###############################################################################
 #
-# long long atomic64_inc_return(atomic64_t *v)
-#
-###############################################################################
-	.globl		atomic64_inc_return
-        .type		atomic64_inc_return,@function
-atomic64_inc_return:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ldd.p		@(gr10,gr0),gr8			/* LDD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	addicc		gr9,#1,gr9,icc0
-	addxi		gr8,#0,gr8,icc0
-	cstd.p		gr8,@(gr10,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic64_inc_return, .-atomic64_inc_return
-
-###############################################################################
-#
-# long long atomic64_dec_return(atomic64_t *v)
-#
-###############################################################################
-	.globl		atomic64_dec_return
-        .type		atomic64_dec_return,@function
-atomic64_dec_return:
-	or.p		gr8,gr8,gr10
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ldd.p		@(gr10,gr0),gr8			/* LDD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	subicc		gr9,#1,gr9,icc0
-	subxi		gr8,#0,gr8,icc0
-	cstd.p		gr8,@(gr10,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic64_dec_return, .-atomic64_dec_return
-
-###############################################################################
-#
-# long long atomic64_add_return(long long i, atomic64_t *v)
-#
-###############################################################################
-	.globl		atomic64_add_return
-        .type		atomic64_add_return,@function
-atomic64_add_return:
-	or.p		gr8,gr8,gr4
-	or		gr9,gr9,gr5
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ldd.p		@(gr10,gr0),gr8			/* LDD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	addcc		gr9,gr5,gr9,icc0
-	addx		gr8,gr4,gr8,icc0
-	cstd.p		gr8,@(gr10,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic64_add_return, .-atomic64_add_return
-
-###############################################################################
-#
-# long long atomic64_sub_return(long long i, atomic64_t *v)
-#
-###############################################################################
-	.globl		atomic64_sub_return
-        .type		atomic64_sub_return,@function
-atomic64_sub_return:
-	or.p		gr8,gr8,gr4
-	or		gr9,gr9,gr5
-0:
-	orcc		gr0,gr0,gr0,icc3		/* set ICC3.Z */
-	ckeq		icc3,cc7
-	ldd.p		@(gr10,gr0),gr8			/* LDD.P/ORCR must be atomic */
-	orcr		cc7,cc7,cc3			/* set CC3 to true */
-	subcc		gr9,gr5,gr9,icc0
-	subx		gr8,gr4,gr8,icc0
-	cstd.p		gr8,@(gr10,gr0)		,cc3,#1
-	corcc		gr29,gr29,gr0		,cc3,#1	/* clear ICC3.Z if store happens */
-	beq		icc3,#0,0b
-	bralr
-
-	.size		atomic64_sub_return, .-atomic64_sub_return
-
-###############################################################################
-#
 # uint64_t __xchg_64(uint64_t i, uint64_t *v)
 #
 ###############################################################################




* [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (21 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 22/24] frv: Rewrite atomic implementation Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-10  9:10   ` Geert Uytterhoeven
  2015-07-09 17:29 ` [RFC][PATCH 24/24] atomic: Replace atomic_{set,clear}_mask() usage Peter Zijlstra
  2015-07-09 20:38 ` [PATCH] tile: Provide atomic_{or,xor,and} Chris Metcalf
  24 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-atomic-linux-ops.patch --]
[-- Type: text/plain, Size: 9165 bytes --]

Move the now generic definitions of atomic_{set,clear}_mask() into
linux/atomic.h to avoid endless and pointless repetition.

Also, provide an atomic_nand() wrapper for those few archs that can
implement that.
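
As a usage sketch (hypothetical caller; the wrappers themselves are in
the linux/atomic.h hunk below):

	static void frob_flags(atomic_t *flags)
	{
		atomic_set_mask(0x04, flags);	/* now just atomic_or(0x04, flags) */
		atomic_clear_mask(0x01, flags);	/* now just atomic_nand(0x01, flags) */
	}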


Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/arc/include/asm/atomic.h          |   10 ----------
 arch/blackfin/include/asm/atomic.h     |   10 ----------
 arch/frv/include/asm/atomic.h          |   10 ----------
 arch/m32r/include/asm/atomic.h         |   11 -----------
 arch/m68k/include/asm/atomic.h         |   10 ----------
 arch/metag/include/asm/atomic_lnkget.h |   10 ----------
 arch/metag/include/asm/atomic_lock1.h  |   10 ----------
 arch/mn10300/include/asm/atomic.h      |   24 ------------------------
 arch/powerpc/kernel/misc_32.S          |   19 -------------------
 arch/s390/include/asm/atomic.h         |   10 ----------
 arch/sh/include/asm/atomic.h           |   10 ----------
 arch/x86/include/asm/atomic.h          |   10 ----------
 arch/xtensa/include/asm/atomic.h       |   10 ----------
 include/asm-generic/atomic.h           |   10 ----------
 include/linux/atomic.h                 |   17 +++++++++++++++++
 15 files changed, 17 insertions(+), 164 deletions(-)

--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -151,16 +151,6 @@ ATOMIC_OP(xor, ^=, xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 /**
  * __atomic_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t
--- a/arch/blackfin/include/asm/atomic.h
+++ b/arch/blackfin/include/asm/atomic.h
@@ -32,16 +32,6 @@ asmlinkage int __raw_atomic_test_asm(con
 #define atomic_and(i, v) (void)__raw_atomic_and_asm(&(v)->counter, i)
 #define atomic_xor(i, v) (void)__raw_atomic_xor_asm(&(v)->counter, i)
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #endif
 
 #include <asm-generic/atomic.h>
--- a/arch/frv/include/asm/atomic.h
+++ b/arch/frv/include/asm/atomic.h
@@ -198,14 +198,4 @@ ATOMIC_OP(xor)
 
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #endif /* _ASM_ATOMIC_H */
--- a/arch/m32r/include/asm/atomic.h
+++ b/arch/m32r/include/asm/atomic.h
@@ -242,15 +242,4 @@ static __inline__ int __atomic_add_unles
 	return c;
 }
 
-
-static __inline__ __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static __inline__ __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #endif	/* _ASM_M32R_ATOMIC_H */
--- a/arch/m68k/include/asm/atomic.h
+++ b/arch/m68k/include/asm/atomic.h
@@ -173,16 +173,6 @@ static inline int atomic_add_negative(in
 	return c != 0;
 }
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 static __inline__ int __atomic_add_unless(atomic_t *v, int a, int u)
 {
 	int c, old;
--- a/arch/metag/include/asm/atomic_lnkget.h
+++ b/arch/metag/include/asm/atomic_lnkget.h
@@ -81,16 +81,6 @@ ATOMIC_OP(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int result, temp;
--- a/arch/metag/include/asm/atomic_lock1.h
+++ b/arch/metag/include/asm/atomic_lock1.h
@@ -76,16 +76,6 @@ ATOMIC_OP(xor, ^=)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
 	int ret;
--- a/arch/mn10300/include/asm/atomic.h
+++ b/arch/mn10300/include/asm/atomic.h
@@ -130,30 +130,6 @@ static inline void atomic_dec(atomic_t *
 #define atomic_xchg(ptr, v)		(xchg(&(ptr)->counter, (v)))
 #define atomic_cmpxchg(v, old, new)	(cmpxchg(&((v)->counter), (old), (new)))
 
-/**
- * atomic_clear_mask - Atomically clear bits in memory
- * @mask: Mask of the bits to be cleared
- * @v: pointer to word in memory
- *
- * Atomically clears the bits set in mask from the memory word specified.
- */
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-/**
- * atomic_set_mask - Atomically set bits in memory
- * @mask: Mask of the bits to be set
- * @v: pointer to word in memory
- *
- * Atomically sets the bits set in mask from the memory word specified.
- */
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #endif /* __KERNEL__ */
 #endif /* CONFIG_SMP */
 #endif /* _ASM_ATOMIC_H */
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -596,25 +596,6 @@ _GLOBAL(copy_page)
 	b	2b
 
 /*
- * void atomic_clear_mask(atomic_t mask, atomic_t *addr)
- * void atomic_set_mask(atomic_t mask, atomic_t *addr);
- */
-_GLOBAL(atomic_clear_mask)
-10:	lwarx	r5,0,r4
-	andc	r5,r5,r3
-	PPC405_ERR77(0,r4)
-	stwcx.	r5,0,r4
-	bne-	10b
-	blr
-_GLOBAL(atomic_set_mask)
-10:	lwarx	r5,0,r4
-	or	r5,r5,r3
-	PPC405_ERR77(0,r4)
-	stwcx.	r5,0,r4
-	bne-	10b
-	blr
-
-/*
  * Extended precision shifts.
  *
  * Updated to be valid for shift counts from 0 to 63 inclusive.
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -132,16 +132,6 @@ ATOMIC_OP(xor, XOR)
 
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
 static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
--- a/arch/sh/include/asm/atomic.h
+++ b/arch/sh/include/asm/atomic.h
@@ -25,16 +25,6 @@
 #include <asm/atomic-irq.h>
 #endif
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #define atomic_add_negative(a, v)	(atomic_add_return((a), (v)) < 0)
 #define atomic_dec_return(v)		atomic_sub_return(1, (v))
 #define atomic_inc_return(v)		atomic_add_return(1, (v))
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -234,16 +234,6 @@ static __always_inline short int atomic_
 	return *v;
 }
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 #ifdef CONFIG_X86_32
 # include <asm/atomic64_32.h>
 #else
--- a/arch/xtensa/include/asm/atomic.h
+++ b/arch/xtensa/include/asm/atomic.h
@@ -152,16 +152,6 @@ ATOMIC_OP(xor)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
 /**
  * atomic_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
--- a/include/asm-generic/atomic.h
+++ b/include/asm-generic/atomic.h
@@ -113,16 +113,6 @@ ATOMIC_OP(xor, ^)
 #undef ATOMIC_OP_RETURN
 #undef ATOMIC_OP
 
-static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_and(~mask, v);
-}
-
-static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
-{
-	atomic_or(mask, v);
-}
-
 /*
  * Atomic operations that C can't guarantee us.  Useful for
  * resource counting etc..
--- a/include/linux/atomic.h
+++ b/include/linux/atomic.h
@@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
 #define atomic_inc_not_zero(v)		atomic_add_unless((v), 1, 0)
 #endif
 
+#ifndef atomic_nand
+static inline void atomic_nand(int i, atomic_t *v)
+{
+	atomic_and(~i, v);
+}
+#endif
+
+static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_nand(mask, v);
+}
+
+static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
+{
+	atomic_or(mask, v);
+}
+
 /**
  * atomic_inc_not_zero_hint - increment if not null
  * @v: pointer of type atomic_t




* [RFC][PATCH 24/24] atomic: Replace atomic_{set,clear}_mask() usage
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (22 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions Peter Zijlstra
@ 2015-07-09 17:29 ` Peter Zijlstra
  2015-07-09 20:38 ` [PATCH] tile: Provide atomic_{or,xor,and} Chris Metcalf
  24 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 17:29 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo, peterz

[-- Attachment #1: peterz-atomic-mask-remove.patch --]
[-- Type: text/plain, Size: 29265 bytes --]

Replace the deprecated atomic_{set,clear}_mask() usage with the now
ubiquitous atomic_{or,nand}() functions.
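
The conversion itself is mechanical; a sketch with hypothetical
helpers (the real call sites are in the hunks below):

	static void set_status_bits(atomic_t *status, unsigned int mask)
	{
		atomic_or(mask, status);	/* was: atomic_set_mask(mask, status) */
	}

	static void clear_status_bits(atomic_t *status, unsigned int mask)
	{
		atomic_nand(mask, status);	/* was: atomic_clear_mask(mask, status) */
	}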

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 arch/blackfin/mach-common/smp.c |    2 -
 arch/m32r/kernel/smp.c          |    4 +-
 arch/mn10300/mm/tlb-smp.c       |    2 -
 arch/s390/kernel/time.c         |    4 +-
 arch/s390/kvm/interrupt.c       |   28 +++++++++---------
 arch/s390/kvm/kvm-s390.c        |   24 +++++++--------
 drivers/gpu/drm/i915/i915_drv.c |    2 -
 drivers/gpu/drm/i915/i915_gem.c |    2 -
 drivers/gpu/drm/i915/i915_irq.c |    4 +-
 drivers/s390/scsi/zfcp_aux.c    |    2 -
 drivers/s390/scsi/zfcp_erp.c    |   62 ++++++++++++++++++++--------------------
 drivers/s390/scsi/zfcp_fc.c     |    8 ++---
 drivers/s390/scsi/zfcp_fsf.c    |   26 ++++++++--------
 drivers/s390/scsi/zfcp_qdio.c   |   14 ++++-----
 14 files changed, 92 insertions(+), 92 deletions(-)

--- a/arch/blackfin/mach-common/smp.c
+++ b/arch/blackfin/mach-common/smp.c
@@ -195,7 +195,7 @@ void send_ipi(const struct cpumask *cpum
 	local_irq_save(flags);
 	for_each_cpu(cpu, cpumask) {
 		bfin_ipi_data = &per_cpu(bfin_ipi, cpu);
-		atomic_set_mask((1 << msg), &bfin_ipi_data->bits);
+		atomic_or((1 << msg), &bfin_ipi_data->bits);
 		atomic_inc(&bfin_ipi_data->count);
 	}
 	local_irq_restore(flags);
--- a/arch/m32r/kernel/smp.c
+++ b/arch/m32r/kernel/smp.c
@@ -156,7 +156,7 @@ void smp_flush_cache_all(void)
 	cpumask_clear_cpu(smp_processor_id(), &cpumask);
 	spin_lock(&flushcache_lock);
 	mask=cpumask_bits(&cpumask);
-	atomic_set_mask(*mask, (atomic_t *)&flushcache_cpumask);
+	atomic_or(*mask, (atomic_t *)&flushcache_cpumask);
 	send_IPI_mask(&cpumask, INVALIDATE_CACHE_IPI, 0);
 	_flush_cache_copyback_all();
 	while (flushcache_cpumask)
@@ -407,7 +407,7 @@ static void flush_tlb_others(cpumask_t c
 	flush_vma = vma;
 	flush_va = va;
 	mask=cpumask_bits(&cpumask);
-	atomic_set_mask(*mask, (atomic_t *)&flush_cpumask);
+	atomic_or(*mask, (atomic_t *)&flush_cpumask);
 
 	/*
 	 * We have to send the IPI only to
--- a/arch/mn10300/mm/tlb-smp.c
+++ b/arch/mn10300/mm/tlb-smp.c
@@ -119,7 +119,7 @@ static void flush_tlb_others(cpumask_t c
 	flush_mm = mm;
 	flush_va = va;
 #if NR_CPUS <= BITS_PER_LONG
-	atomic_set_mask(cpumask.bits[0], &flush_cpumask.bits[0]);
+	atomic_or(cpumask.bits[0], (atomic_t *)&flush_cpumask.bits[0]);
 #else
 #error Not supported.
 #endif
--- a/arch/s390/kernel/time.c
+++ b/arch/s390/kernel/time.c
@@ -381,7 +381,7 @@ static void disable_sync_clock(void *dum
 	 * increase the "sequence" counter to avoid the race of an
 	 * etr event and the complete recovery against get_sync_clock.
 	 */
-	atomic_clear_mask(0x80000000, sw_ptr);
+	atomic_nand(0x80000000, sw_ptr);
 	atomic_inc(sw_ptr);
 }
 
@@ -392,7 +392,7 @@ static void disable_sync_clock(void *dum
 static void enable_sync_clock(void)
 {
 	atomic_t *sw_ptr = this_cpu_ptr(&clock_sync_word);
-	atomic_set_mask(0x80000000, sw_ptr);
+	atomic_or(0x80000000, sw_ptr);
 }
 
 /*
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -170,20 +170,20 @@ static unsigned long deliverable_irqs(st
 
 static void __set_cpu_idle(struct kvm_vcpu *vcpu)
 {
-	atomic_set_mask(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags);
 	set_bit(vcpu->vcpu_id, vcpu->arch.local_int.float_int->idle_mask);
 }
 
 static void __unset_cpu_idle(struct kvm_vcpu *vcpu)
 {
-	atomic_clear_mask(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags);
+	atomic_nand(CPUSTAT_WAIT, &vcpu->arch.sie_block->cpuflags);
 	clear_bit(vcpu->vcpu_id, vcpu->arch.local_int.float_int->idle_mask);
 }
 
 static void __reset_intercept_indicators(struct kvm_vcpu *vcpu)
 {
-	atomic_clear_mask(CPUSTAT_IO_INT | CPUSTAT_EXT_INT | CPUSTAT_STOP_INT,
-			  &vcpu->arch.sie_block->cpuflags);
+	atomic_nand(CPUSTAT_IO_INT | CPUSTAT_EXT_INT | CPUSTAT_STOP_INT,
+		    &vcpu->arch.sie_block->cpuflags);
 	vcpu->arch.sie_block->lctl = 0x0000;
 	vcpu->arch.sie_block->ictl &= ~(ICTL_LPSW | ICTL_STCTL | ICTL_PINT);
 
@@ -196,7 +196,7 @@ static void __reset_intercept_indicators
 
 static void __set_cpuflag(struct kvm_vcpu *vcpu, u32 flag)
 {
-	atomic_set_mask(flag, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(flag, &vcpu->arch.sie_block->cpuflags);
 }
 
 static void set_intercept_indicators_io(struct kvm_vcpu *vcpu)
@@ -919,7 +919,7 @@ void kvm_s390_clear_local_irqs(struct kv
 	spin_unlock(&li->lock);
 
 	/* clear pending external calls set by sigp interpretation facility */
-	atomic_clear_mask(CPUSTAT_ECALL_PEND, li->cpuflags);
+	atomic_nand(CPUSTAT_ECALL_PEND, li->cpuflags);
 	vcpu->kvm->arch.sca->cpu[vcpu->vcpu_id].sigp_ctrl = 0;
 }
 
@@ -1020,7 +1020,7 @@ static int __inject_pfault_init(struct k
 
 	li->irq.ext = irq->u.ext;
 	set_bit(IRQ_PEND_PFAULT_INIT, &li->pending_irqs);
-	atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+	atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
 	return 0;
 }
 
@@ -1035,7 +1035,7 @@ static int __inject_extcall_sigpif(struc
 		/* another external call is pending */
 		return -EBUSY;
 	}
-	atomic_set_mask(CPUSTAT_ECALL_PEND, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(CPUSTAT_ECALL_PEND, &vcpu->arch.sie_block->cpuflags);
 	return 0;
 }
 
@@ -1133,7 +1133,7 @@ static int __inject_sigp_emergency(struc
 
 	set_bit(irq->u.emerg.code, li->sigp_emerg_pending);
 	set_bit(IRQ_PEND_EXT_EMERGENCY, &li->pending_irqs);
-	atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+	atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
 	return 0;
 }
 
@@ -1177,7 +1177,7 @@ static int __inject_ckc(struct kvm_vcpu
 				   0, 0, 2);
 
 	set_bit(IRQ_PEND_EXT_CLOCK_COMP, &li->pending_irqs);
-	atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+	atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
 	return 0;
 }
 
@@ -1190,7 +1190,7 @@ static int __inject_cpu_timer(struct kvm
 				   0, 0, 2);
 
 	set_bit(IRQ_PEND_EXT_CPU_TIMER, &li->pending_irqs);
-	atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+	atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
 	return 0;
 }
 
@@ -1369,13 +1369,13 @@ static void __floating_irq_kick(struct k
 	spin_lock(&li->lock);
 	switch (type) {
 	case KVM_S390_MCHK:
-		atomic_set_mask(CPUSTAT_STOP_INT, li->cpuflags);
+		atomic_or(CPUSTAT_STOP_INT, li->cpuflags);
 		break;
 	case KVM_S390_INT_IO_MIN...KVM_S390_INT_IO_MAX:
-		atomic_set_mask(CPUSTAT_IO_INT, li->cpuflags);
+		atomic_or(CPUSTAT_IO_INT, li->cpuflags);
 		break;
 	default:
-		atomic_set_mask(CPUSTAT_EXT_INT, li->cpuflags);
+		atomic_or(CPUSTAT_EXT_INT, li->cpuflags);
 		break;
 	}
 	spin_unlock(&li->lock);
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -1215,12 +1215,12 @@ void kvm_arch_vcpu_load(struct kvm_vcpu
 	}
 	restore_access_regs(vcpu->run->s.regs.acrs);
 	gmap_enable(vcpu->arch.gmap);
-	atomic_set_mask(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
 }
 
 void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 {
-	atomic_clear_mask(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
+	atomic_nand(CPUSTAT_RUNNING, &vcpu->arch.sie_block->cpuflags);
 	gmap_disable(vcpu->arch.gmap);
 	if (test_kvm_facility(vcpu->kvm, 129)) {
 		save_fp_ctl(&vcpu->run->s.regs.fpc);
@@ -1422,13 +1422,13 @@ int kvm_arch_vcpu_runnable(struct kvm_vc
 
 void kvm_s390_vcpu_block(struct kvm_vcpu *vcpu)
 {
-	atomic_set_mask(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
+	atomic_or(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
 	exit_sie(vcpu);
 }
 
 void kvm_s390_vcpu_unblock(struct kvm_vcpu *vcpu)
 {
-	atomic_clear_mask(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
+	atomic_nand(PROG_BLOCK_SIE, &vcpu->arch.sie_block->prog20);
 }
 
 static void kvm_s390_vcpu_request(struct kvm_vcpu *vcpu)
@@ -1448,7 +1448,7 @@ static void kvm_s390_vcpu_request_handle
  * return immediately. */
 void exit_sie(struct kvm_vcpu *vcpu)
 {
-	atomic_set_mask(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(CPUSTAT_STOP_INT, &vcpu->arch.sie_block->cpuflags);
 	while (vcpu->arch.sie_block->prog0c & PROG_IN_SIE)
 		cpu_relax();
 }
@@ -1672,19 +1672,19 @@ int kvm_arch_vcpu_ioctl_set_guest_debug(
 	if (dbg->control & KVM_GUESTDBG_ENABLE) {
 		vcpu->guest_debug = dbg->control;
 		/* enforce guest PER */
-		atomic_set_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
+		atomic_or(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
 
 		if (dbg->control & KVM_GUESTDBG_USE_HW_BP)
 			rc = kvm_s390_import_bp_data(vcpu, dbg);
 	} else {
-		atomic_clear_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
+		atomic_nand(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
 		vcpu->arch.guestdbg.last_bp = 0;
 	}
 
 	if (rc) {
 		vcpu->guest_debug = 0;
 		kvm_s390_clear_bp_data(vcpu);
-		atomic_clear_mask(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
+		atomic_nand(CPUSTAT_P, &vcpu->arch.sie_block->cpuflags);
 	}
 
 	return rc;
@@ -1771,7 +1771,7 @@ static int kvm_s390_handle_requests(stru
 	if (kvm_check_request(KVM_REQ_ENABLE_IBS, vcpu)) {
 		if (!ibs_enabled(vcpu)) {
 			trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 1);
-			atomic_set_mask(CPUSTAT_IBS,
+			atomic_or(CPUSTAT_IBS,
 					&vcpu->arch.sie_block->cpuflags);
 		}
 		goto retry;
@@ -1780,7 +1780,7 @@ static int kvm_s390_handle_requests(stru
 	if (kvm_check_request(KVM_REQ_DISABLE_IBS, vcpu)) {
 		if (ibs_enabled(vcpu)) {
 			trace_kvm_s390_enable_disable_ibs(vcpu->vcpu_id, 0);
-			atomic_clear_mask(CPUSTAT_IBS,
+			atomic_nand(CPUSTAT_IBS,
 					  &vcpu->arch.sie_block->cpuflags);
 		}
 		goto retry;
@@ -2280,7 +2280,7 @@ void kvm_s390_vcpu_start(struct kvm_vcpu
 		__disable_ibs_on_all_vcpus(vcpu->kvm);
 	}
 
-	atomic_clear_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	atomic_nand(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
 	/*
 	 * Another VCPU might have used IBS while we were offline.
 	 * Let's play safe and flush the VCPU at startup.
@@ -2306,7 +2306,7 @@ void kvm_s390_vcpu_stop(struct kvm_vcpu
 	/* SIGP STOP and SIGP STOP AND STORE STATUS has been fully processed */
 	kvm_s390_clear_stop_irq(vcpu);
 
-	atomic_set_mask(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
+	atomic_or(CPUSTAT_STOPPED, &vcpu->arch.sie_block->cpuflags);
 	__disable_ibs_on_vcpu(vcpu);
 
 	for (i = 0; i < online_vcpus; i++) {
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -748,7 +748,7 @@ static int i915_drm_resume(struct drm_de
 	mutex_lock(&dev->struct_mutex);
 	if (i915_gem_init_hw(dev)) {
 		DRM_ERROR("failed to re-initialize GPU, declaring wedged!\n");
-		atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
+		atomic_or(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
 	}
 	mutex_unlock(&dev->struct_mutex);
 
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -5092,7 +5092,7 @@ int i915_gem_init(struct drm_device *dev
 		 * for all other failure, such as an allocation failure, bail.
 		 */
 		DRM_ERROR("Failed to initialize GPU, declaring it wedged\n");
-		atomic_set_mask(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
+		atomic_or(I915_WEDGED, &dev_priv->gpu_error.reset_counter);
 		ret = 0;
 	}
 
--- a/drivers/gpu/drm/i915/i915_irq.c
+++ b/drivers/gpu/drm/i915/i915_irq.c
@@ -2446,7 +2446,7 @@ static void i915_reset_and_wakeup(struct
 			kobject_uevent_env(&dev->primary->kdev->kobj,
 					   KOBJ_CHANGE, reset_done_event);
 		} else {
-			atomic_set_mask(I915_WEDGED, &error->reset_counter);
+			atomic_or(I915_WEDGED, &error->reset_counter);
 		}
 
 		/*
@@ -2574,7 +2574,7 @@ void i915_handle_error(struct drm_device
 	i915_report_and_clear_eir(dev);
 
 	if (wedged) {
-		atomic_set_mask(I915_RESET_IN_PROGRESS_FLAG,
+		atomic_or(I915_RESET_IN_PROGRESS_FLAG,
 				&dev_priv->gpu_error.reset_counter);
 
 		/*
--- a/drivers/s390/scsi/zfcp_aux.c
+++ b/drivers/s390/scsi/zfcp_aux.c
@@ -529,7 +529,7 @@ struct zfcp_port *zfcp_port_enqueue(stru
 	list_add_tail(&port->list, &adapter->port_list);
 	write_unlock_irq(&adapter->port_list_lock);
 
-	atomic_set_mask(status | ZFCP_STATUS_COMMON_RUNNING, &port->status);
+	atomic_or(status | ZFCP_STATUS_COMMON_RUNNING, &port->status);
 
 	return port;
 
--- a/drivers/s390/scsi/zfcp_erp.c
+++ b/drivers/s390/scsi/zfcp_erp.c
@@ -190,7 +190,7 @@ static struct zfcp_erp_action *zfcp_erp_
 		if (!(act_status & ZFCP_STATUS_ERP_NO_REF))
 			if (scsi_device_get(sdev))
 				return NULL;
-		atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE,
+		atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE,
 				&zfcp_sdev->status);
 		erp_action = &zfcp_sdev->erp_action;
 		memset(erp_action, 0, sizeof(struct zfcp_erp_action));
@@ -206,7 +206,7 @@ static struct zfcp_erp_action *zfcp_erp_
 		if (!get_device(&port->dev))
 			return NULL;
 		zfcp_erp_action_dismiss_port(port);
-		atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status);
+		atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &port->status);
 		erp_action = &port->erp_action;
 		memset(erp_action, 0, sizeof(struct zfcp_erp_action));
 		erp_action->port = port;
@@ -217,7 +217,7 @@ static struct zfcp_erp_action *zfcp_erp_
 	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
 		kref_get(&adapter->ref);
 		zfcp_erp_action_dismiss_adapter(adapter);
-		atomic_set_mask(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status);
+		atomic_or(ZFCP_STATUS_COMMON_ERP_INUSE, &adapter->status);
 		erp_action = &adapter->erp_action;
 		memset(erp_action, 0, sizeof(struct zfcp_erp_action));
 		if (!(atomic_read(&adapter->status) &
@@ -254,7 +254,7 @@ static int zfcp_erp_action_enqueue(int w
 	act = zfcp_erp_setup_act(need, act_status, adapter, port, sdev);
 	if (!act)
 		goto out;
-	atomic_set_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status);
+	atomic_or(ZFCP_STATUS_ADAPTER_ERP_PENDING, &adapter->status);
 	++adapter->erp_total_count;
 	list_add_tail(&act->list, &adapter->erp_ready_head);
 	wake_up(&adapter->erp_ready_wq);
@@ -486,14 +486,14 @@ static void zfcp_erp_adapter_unblock(str
 {
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status))
 		zfcp_dbf_rec_run("eraubl1", &adapter->erp_action);
-	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status);
+	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &adapter->status);
 }
 
 static void zfcp_erp_port_unblock(struct zfcp_port *port)
 {
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status))
 		zfcp_dbf_rec_run("erpubl1", &port->erp_action);
-	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status);
+	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &port->status);
 }
 
 static void zfcp_erp_lun_unblock(struct scsi_device *sdev)
@@ -502,7 +502,7 @@ static void zfcp_erp_lun_unblock(struct
 
 	if (status_change_set(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status))
 		zfcp_dbf_rec_run("erlubl1", &sdev_to_zfcp(sdev)->erp_action);
-	atomic_set_mask(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status);
+	atomic_or(ZFCP_STATUS_COMMON_UNBLOCKED, &zfcp_sdev->status);
 }
 
 static void zfcp_erp_action_to_running(struct zfcp_erp_action *erp_action)
@@ -642,7 +642,7 @@ static void zfcp_erp_wakeup(struct zfcp_
 	read_lock_irqsave(&adapter->erp_lock, flags);
 	if (list_empty(&adapter->erp_ready_head) &&
 	    list_empty(&adapter->erp_running_head)) {
-			atomic_clear_mask(ZFCP_STATUS_ADAPTER_ERP_PENDING,
+			atomic_nand(ZFCP_STATUS_ADAPTER_ERP_PENDING,
 					  &adapter->status);
 			wake_up(&adapter->erp_done_wqh);
 	}
@@ -665,16 +665,16 @@ static int zfcp_erp_adapter_strat_fsf_xc
 	int sleep = 1;
 	struct zfcp_adapter *adapter = erp_action->adapter;
 
-	atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status);
+	atomic_nand(ZFCP_STATUS_ADAPTER_XCONFIG_OK, &adapter->status);
 
 	for (retries = 7; retries; retries--) {
-		atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
+		atomic_nand(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
 				  &adapter->status);
 		write_lock_irq(&adapter->erp_lock);
 		zfcp_erp_action_to_running(erp_action);
 		write_unlock_irq(&adapter->erp_lock);
 		if (zfcp_fsf_exchange_config_data(erp_action)) {
-			atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
+			atomic_nand(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
 					  &adapter->status);
 			return ZFCP_ERP_FAILED;
 		}
@@ -692,7 +692,7 @@ static int zfcp_erp_adapter_strat_fsf_xc
 		sleep *= 2;
 	}
 
-	atomic_clear_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
+	atomic_nand(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
 			  &adapter->status);
 
 	if (!(atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_XCONFIG_OK))
@@ -764,7 +764,7 @@ static void zfcp_erp_adapter_strategy_cl
 	/* all ports and LUNs are closed */
 	zfcp_erp_clear_adapter_status(adapter, ZFCP_STATUS_COMMON_OPEN);
 
-	atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
+	atomic_nand(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
 			  ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
 }
 
@@ -773,7 +773,7 @@ static int zfcp_erp_adapter_strategy_ope
 	struct zfcp_adapter *adapter = act->adapter;
 
 	if (zfcp_qdio_open(adapter->qdio)) {
-		atomic_clear_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
+		atomic_nand(ZFCP_STATUS_ADAPTER_XCONFIG_OK |
 				  ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED,
 				  &adapter->status);
 		return ZFCP_ERP_FAILED;
@@ -784,7 +784,7 @@ static int zfcp_erp_adapter_strategy_ope
 		return ZFCP_ERP_FAILED;
 	}
 
-	atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &adapter->status);
+	atomic_or(ZFCP_STATUS_COMMON_OPEN, &adapter->status);
 
 	return ZFCP_ERP_SUCCEEDED;
 }
@@ -948,7 +948,7 @@ static void zfcp_erp_lun_strategy_clears
 {
 	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
 
-	atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED,
+	atomic_nand(ZFCP_STATUS_COMMON_ACCESS_DENIED,
 			  &zfcp_sdev->status);
 }
 
@@ -1187,18 +1187,18 @@ static void zfcp_erp_action_dequeue(stru
 	switch (erp_action->action) {
 	case ZFCP_ERP_ACTION_REOPEN_LUN:
 		zfcp_sdev = sdev_to_zfcp(erp_action->sdev);
-		atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE,
+		atomic_nand(ZFCP_STATUS_COMMON_ERP_INUSE,
 				  &zfcp_sdev->status);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_PORT_FORCED:
 	case ZFCP_ERP_ACTION_REOPEN_PORT:
-		atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE,
+		atomic_nand(ZFCP_STATUS_COMMON_ERP_INUSE,
 				  &erp_action->port->status);
 		break;
 
 	case ZFCP_ERP_ACTION_REOPEN_ADAPTER:
-		atomic_clear_mask(ZFCP_STATUS_COMMON_ERP_INUSE,
+		atomic_nand(ZFCP_STATUS_COMMON_ERP_INUSE,
 				  &erp_action->adapter->status);
 		break;
 	}
@@ -1422,19 +1422,19 @@ void zfcp_erp_set_adapter_status(struct
 	unsigned long flags;
 	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
 
-	atomic_set_mask(mask, &adapter->status);
+	atomic_or(mask, &adapter->status);
 
 	if (!common_mask)
 		return;
 
 	read_lock_irqsave(&adapter->port_list_lock, flags);
 	list_for_each_entry(port, &adapter->port_list, list)
-		atomic_set_mask(common_mask, &port->status);
+		atomic_or(common_mask, &port->status);
 	read_unlock_irqrestore(&adapter->port_list_lock, flags);
 
 	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
 	__shost_for_each_device(sdev, adapter->scsi_host)
-		atomic_set_mask(common_mask, &sdev_to_zfcp(sdev)->status);
+		atomic_or(common_mask, &sdev_to_zfcp(sdev)->status);
 	spin_unlock_irqrestore(adapter->scsi_host->host_lock, flags);
 }
 
@@ -1453,7 +1453,7 @@ void zfcp_erp_clear_adapter_status(struc
 	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
 	u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED;
 
-	atomic_clear_mask(mask, &adapter->status);
+	atomic_nand(mask, &adapter->status);
 
 	if (!common_mask)
 		return;
@@ -1463,7 +1463,7 @@ void zfcp_erp_clear_adapter_status(struc
 
 	read_lock_irqsave(&adapter->port_list_lock, flags);
 	list_for_each_entry(port, &adapter->port_list, list) {
-		atomic_clear_mask(common_mask, &port->status);
+		atomic_nand(common_mask, &port->status);
 		if (clear_counter)
 			atomic_set(&port->erp_counter, 0);
 	}
@@ -1471,7 +1471,7 @@ void zfcp_erp_clear_adapter_status(struc
 
 	spin_lock_irqsave(adapter->scsi_host->host_lock, flags);
 	__shost_for_each_device(sdev, adapter->scsi_host) {
-		atomic_clear_mask(common_mask, &sdev_to_zfcp(sdev)->status);
+		atomic_nand(common_mask, &sdev_to_zfcp(sdev)->status);
 		if (clear_counter)
 			atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0);
 	}
@@ -1491,7 +1491,7 @@ void zfcp_erp_set_port_status(struct zfc
 	u32 common_mask = mask & ZFCP_COMMON_FLAGS;
 	unsigned long flags;
 
-	atomic_set_mask(mask, &port->status);
+	atomic_or(mask, &port->status);
 
 	if (!common_mask)
 		return;
@@ -1499,7 +1499,7 @@ void zfcp_erp_set_port_status(struct zfc
 	spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags);
 	__shost_for_each_device(sdev, port->adapter->scsi_host)
 		if (sdev_to_zfcp(sdev)->port == port)
-			atomic_set_mask(common_mask,
+			atomic_or(common_mask,
 					&sdev_to_zfcp(sdev)->status);
 	spin_unlock_irqrestore(port->adapter->scsi_host->host_lock, flags);
 }
@@ -1518,7 +1518,7 @@ void zfcp_erp_clear_port_status(struct z
 	u32 clear_counter = mask & ZFCP_STATUS_COMMON_ERP_FAILED;
 	unsigned long flags;
 
-	atomic_clear_mask(mask, &port->status);
+	atomic_nand(mask, &port->status);
 
 	if (!common_mask)
 		return;
@@ -1529,7 +1529,7 @@ void zfcp_erp_clear_port_status(struct z
 	spin_lock_irqsave(port->adapter->scsi_host->host_lock, flags);
 	__shost_for_each_device(sdev, port->adapter->scsi_host)
 		if (sdev_to_zfcp(sdev)->port == port) {
-			atomic_clear_mask(common_mask,
+			atomic_nand(common_mask,
 					  &sdev_to_zfcp(sdev)->status);
 			if (clear_counter)
 				atomic_set(&sdev_to_zfcp(sdev)->erp_counter, 0);
@@ -1546,7 +1546,7 @@ void zfcp_erp_set_lun_status(struct scsi
 {
 	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
 
-	atomic_set_mask(mask, &zfcp_sdev->status);
+	atomic_or(mask, &zfcp_sdev->status);
 }
 
 /**
@@ -1558,7 +1558,7 @@ void zfcp_erp_clear_lun_status(struct sc
 {
 	struct zfcp_scsi_dev *zfcp_sdev = sdev_to_zfcp(sdev);
 
-	atomic_clear_mask(mask, &zfcp_sdev->status);
+	atomic_nand(mask, &zfcp_sdev->status);
 
 	if (mask & ZFCP_STATUS_COMMON_ERP_FAILED)
 		atomic_set(&zfcp_sdev->erp_counter, 0);
--- a/drivers/s390/scsi/zfcp_fc.c
+++ b/drivers/s390/scsi/zfcp_fc.c
@@ -508,7 +508,7 @@ static void zfcp_fc_adisc_handler(void *
 	/* port is good, unblock rport without going through erp */
 	zfcp_scsi_schedule_rport_register(port);
  out:
-	atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
+	atomic_nand(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
 	put_device(&port->dev);
 	kmem_cache_free(zfcp_fc_req_cache, fc_req);
 }
@@ -564,14 +564,14 @@ void zfcp_fc_link_test_work(struct work_
 	if (atomic_read(&port->status) & ZFCP_STATUS_PORT_LINK_TEST)
 		goto out;
 
-	atomic_set_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
+	atomic_or(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
 
 	retval = zfcp_fc_adisc(port);
 	if (retval == 0)
 		return;
 
 	/* send of ADISC was not possible */
-	atomic_clear_mask(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
+	atomic_nand(ZFCP_STATUS_PORT_LINK_TEST, &port->status);
 	zfcp_erp_port_forced_reopen(port, 0, "fcltwk1");
 
 out:
@@ -640,7 +640,7 @@ static void zfcp_fc_validate_port(struct
 	if (!(atomic_read(&port->status) & ZFCP_STATUS_COMMON_NOESC))
 		return;
 
-	atomic_clear_mask(ZFCP_STATUS_COMMON_NOESC, &port->status);
+	atomic_nand(ZFCP_STATUS_COMMON_NOESC, &port->status);
 
 	if ((port->supported_classes != 0) ||
 	    !list_empty(&port->unit_list))
--- a/drivers/s390/scsi/zfcp_fsf.c
+++ b/drivers/s390/scsi/zfcp_fsf.c
@@ -114,7 +114,7 @@ static void zfcp_fsf_link_down_info_eval
 	if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED)
 		return;
 
-	atomic_set_mask(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
+	atomic_or(ZFCP_STATUS_ADAPTER_LINK_UNPLUGGED, &adapter->status);
 
 	zfcp_scsi_schedule_rports_block(adapter);
 
@@ -345,7 +345,7 @@ static void zfcp_fsf_protstatus_eval(str
 		zfcp_erp_adapter_shutdown(adapter, 0, "fspse_3");
 		break;
 	case FSF_PROT_HOST_CONNECTION_INITIALIZING:
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
+		atomic_or(ZFCP_STATUS_ADAPTER_HOST_CON_INIT,
 				&adapter->status);
 		break;
 	case FSF_PROT_DUPLICATE_REQUEST_ID:
@@ -554,7 +554,7 @@ static void zfcp_fsf_exchange_config_dat
 			zfcp_erp_adapter_shutdown(adapter, 0, "fsecdh1");
 			return;
 		}
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
+		atomic_or(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
 				&adapter->status);
 		break;
 	case FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE:
@@ -567,7 +567,7 @@ static void zfcp_fsf_exchange_config_dat
 
 		/* avoids adapter shutdown to be able to recognize
 		 * events such as LINK UP */
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
+		atomic_or(ZFCP_STATUS_ADAPTER_XCONFIG_OK,
 				&adapter->status);
 		zfcp_fsf_link_down_info_eval(req,
 			&qtcb->header.fsf_status_qual.link_down_info);
@@ -1394,9 +1394,9 @@ static void zfcp_fsf_open_port_handler(s
 		break;
 	case FSF_GOOD:
 		port->handle = header->port_handle;
-		atomic_set_mask(ZFCP_STATUS_COMMON_OPEN |
+		atomic_or(ZFCP_STATUS_COMMON_OPEN |
 				ZFCP_STATUS_PORT_PHYS_OPEN, &port->status);
-		atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_BOXED,
+		atomic_nand(ZFCP_STATUS_COMMON_ACCESS_BOXED,
 		                  &port->status);
 		/* check whether D_ID has changed during open */
 		/*
@@ -1677,10 +1677,10 @@ static void zfcp_fsf_close_physical_port
 	case FSF_PORT_BOXED:
 		/* can't use generic zfcp_erp_modify_port_status because
 		 * ZFCP_STATUS_COMMON_OPEN must not be reset for the port */
-		atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status);
+		atomic_nand(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status);
 		shost_for_each_device(sdev, port->adapter->scsi_host)
 			if (sdev_to_zfcp(sdev)->port == port)
-				atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN,
+				atomic_nand(ZFCP_STATUS_COMMON_OPEN,
 						  &sdev_to_zfcp(sdev)->status);
 		zfcp_erp_set_port_status(port, ZFCP_STATUS_COMMON_ACCESS_BOXED);
 		zfcp_erp_port_reopen(port, ZFCP_STATUS_COMMON_ERP_FAILED,
@@ -1700,10 +1700,10 @@ static void zfcp_fsf_close_physical_port
 		/* can't use generic zfcp_erp_modify_port_status because
 		 * ZFCP_STATUS_COMMON_OPEN must not be reset for the port
 		 */
-		atomic_clear_mask(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status);
+		atomic_nand(ZFCP_STATUS_PORT_PHYS_OPEN, &port->status);
 		shost_for_each_device(sdev, port->adapter->scsi_host)
 			if (sdev_to_zfcp(sdev)->port == port)
-				atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN,
+				atomic_nand(ZFCP_STATUS_COMMON_OPEN,
 						  &sdev_to_zfcp(sdev)->status);
 		break;
 	}
@@ -1766,7 +1766,7 @@ static void zfcp_fsf_open_lun_handler(st
 
 	zfcp_sdev = sdev_to_zfcp(sdev);
 
-	atomic_clear_mask(ZFCP_STATUS_COMMON_ACCESS_DENIED |
+	atomic_nand(ZFCP_STATUS_COMMON_ACCESS_DENIED |
 			  ZFCP_STATUS_COMMON_ACCESS_BOXED,
 			  &zfcp_sdev->status);
 
@@ -1822,7 +1822,7 @@ static void zfcp_fsf_open_lun_handler(st
 
 	case FSF_GOOD:
 		zfcp_sdev->lun_handle = header->lun_handle;
-		atomic_set_mask(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status);
+		atomic_or(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status);
 		break;
 	}
 }
@@ -1913,7 +1913,7 @@ static void zfcp_fsf_close_lun_handler(s
 		}
 		break;
 	case FSF_GOOD:
-		atomic_clear_mask(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status);
+		atomic_nand(ZFCP_STATUS_COMMON_OPEN, &zfcp_sdev->status);
 		break;
 	}
 }
--- a/drivers/s390/scsi/zfcp_qdio.c
+++ b/drivers/s390/scsi/zfcp_qdio.c
@@ -349,7 +349,7 @@ void zfcp_qdio_close(struct zfcp_qdio *q
 
 	/* clear QDIOUP flag, thus do_QDIO is not called during qdio_shutdown */
 	spin_lock_irq(&qdio->req_q_lock);
-	atomic_clear_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status);
+	atomic_nand(ZFCP_STATUS_ADAPTER_QDIOUP, &adapter->status);
 	spin_unlock_irq(&qdio->req_q_lock);
 
 	wake_up(&qdio->req_q_wq);
@@ -384,7 +384,7 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
 	if (atomic_read(&adapter->status) & ZFCP_STATUS_ADAPTER_QDIOUP)
 		return -EIO;
 
-	atomic_clear_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
+	atomic_nand(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
 			  &qdio->adapter->status);
 
 	zfcp_qdio_setup_init_data(&init_data, qdio);
@@ -396,14 +396,14 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
 		goto failed_qdio;
 
 	if (ssqd.qdioac2 & CHSC_AC2_DATA_DIV_ENABLED)
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED,
+		atomic_or(ZFCP_STATUS_ADAPTER_DATA_DIV_ENABLED,
 				&qdio->adapter->status);
 
 	if (ssqd.qdioac2 & CHSC_AC2_MULTI_BUFFER_ENABLED) {
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
+		atomic_or(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
 		qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER;
 	} else {
-		atomic_clear_mask(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
+		atomic_nand(ZFCP_STATUS_ADAPTER_MB_ACT, &adapter->status);
 		qdio->max_sbale_per_sbal = QDIO_MAX_ELEMENTS_PER_BUFFER - 1;
 	}
 
@@ -427,7 +427,7 @@ int zfcp_qdio_open(struct zfcp_qdio *qdi
 	/* set index of first available SBALS / number of available SBALS */
 	qdio->req_q_idx = 0;
 	atomic_set(&qdio->req_q_free, QDIO_MAX_BUFFERS_PER_Q);
-	atomic_set_mask(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);
+	atomic_or(ZFCP_STATUS_ADAPTER_QDIOUP, &qdio->adapter->status);
 
 	if (adapter->scsi_host) {
 		adapter->scsi_host->sg_tablesize = qdio->max_sbale_per_req;
@@ -499,6 +499,6 @@ void zfcp_qdio_siosl(struct zfcp_adapter
 
 	rc = ccw_device_siosl(adapter->ccw_device);
 	if (!rc)
-		atomic_set_mask(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
+		atomic_or(ZFCP_STATUS_ADAPTER_SIOSL_ISSUED,
 				&adapter->status);
 }



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 03/24] arm: Provide atomic_{or,xor,and}
  2015-07-09 17:28 ` [RFC][PATCH 03/24] arm: " Peter Zijlstra
@ 2015-07-09 18:02   ` Peter Zijlstra
  2015-07-10 10:24     ` Russell King - ARM Linux
  0 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 18:02 UTC (permalink / raw)
  To: linux-kernel, linux-arch
  Cc: rth, vgupta, linux, will.deacon, hskinnemoen, realmz6, dhowells,
	rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo

On Thu, Jul 09, 2015 at 07:28:58PM +0200, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/arm/include/asm/atomic.h |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> --- a/arch/arm/include/asm/atomic.h
> +++ b/arch/arm/include/asm/atomic.h
> @@ -193,6 +193,9 @@ static inline int __atomic_add_unless(at
>  
>  ATOMIC_OPS(add, +=, add)
>  ATOMIC_OPS(sub, -=, sub)
> +ATOMIC_OP(and, &=, and)
> +ATOMIC_OP(or, |=, orr)
> +ATOMIC_OP(xor, ^=, eor)
>  
>  #undef ATOMIC_OPS
>  #undef ATOMIC_OP_RETURN
> @@ -320,6 +323,9 @@ static inline long long atomic64_##op##_
>  
>  ATOMIC64_OPS(add, adds, adc)
>  ATOMIC64_OPS(sub, subs, sbc)
> +ATOMIC64_OP(and, and, and)
> +ATOMIC64_OP(or, or, or)

Hmm, reading through them, this should be:

ATOMIC64_OP(or, orr, orr)

I suppose, not sure why the compiler didn't complain, maybe because
there aren't any users..

> +ATOMIC64_OP(xor, eor, eor)
>  
>  #undef ATOMIC64_OPS
>  #undef ATOMIC64_OP_RETURN
> 
> 
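The reason nothing complained: these macros only splice the mnemonic
into an asm string inside a static inline, so nothing is checked until
a caller instantiates the function. A reduced sketch of the pattern
(simplified illustration, not necessarily the exact arch/arm code):

#define ATOMIC64_OP(op, op1, op2)					\
static inline void atomic64_##op(long long i, atomic64_t *v)		\
{									\
	long long result;						\
	unsigned long tmp;						\
									\
	__asm__ __volatile__("@ atomic64_" #op "\n"			\
"1:	ldrexd	%0, %H0, [%3]\n"					\
"	" #op1 "	%Q0, %Q0, %Q4\n" /* op1 pasted as text: low word  */ \
"	" #op2 "	%R0, %R0, %R4\n" /* op2 pasted as text: high word */ \
"	strexd	%1, %0, %H0, [%3]\n"					\
"	teq	%1, #0\n"						\
"	bne	1b"							\
	: "=&r" (result), "=&r" (tmp), "+Qo" (v->counter)		\
	: "r" (&v->counter), "r" (i)					\
	: "cc");							\
}

With no users of atomic64_or(), gcc never emits the asm and the
assembler never sees the invalid "or" (ARM spells it "orr"), so it
sails through the build.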

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 17/24] sparc: Provide atomic_{or,xor,and}
  2015-07-09 17:29 ` [RFC][PATCH 17/24] sparc: " Peter Zijlstra
@ 2015-07-09 18:05   ` David Miller
  0 siblings, 0 replies; 54+ messages in thread
From: David Miller @ 2015-07-09 18:05 UTC (permalink / raw)
  To: peterz
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, heiko.carstens, cmetcalf, mingo

From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 09 Jul 2015 19:29:12 +0200

> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 12/24] mips: Provide atomic_{or,xor,and}
  2015-07-09 17:29 ` [RFC][PATCH 12/24] mips: " Peter Zijlstra
@ 2015-07-09 18:45   ` Ralf Baechle
  0 siblings, 0 replies; 54+ messages in thread
From: Ralf Baechle @ 2015-07-09 18:45 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Thu, Jul 09, 2015 at 07:29:07PM +0200, Peter Zijlstra wrote:

> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.

Acked-by: Ralf Baechle <ralf@linux-mips.org>

^ permalink raw reply	[flat|nested] 54+ messages in thread

* [PATCH] tile: Provide atomic_{or,xor,and}
  2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
                   ` (23 preceding siblings ...)
  2015-07-09 17:29 ` [RFC][PATCH 24/24] atomic: Replace atomic_{set,clear}_mask() usage Peter Zijlstra
@ 2015-07-09 20:38 ` Chris Metcalf
  2015-07-09 20:49   ` Peter Zijlstra
  2015-07-27 12:17   ` [tip:locking/arch-atomic] " tip-bot for Chris Metcalf
  24 siblings, 2 replies; 54+ messages in thread
From: Chris Metcalf @ 2015-07-09 20:38 UTC (permalink / raw)
  To: linux-kernel, linux-arch, peterz
  Cc: Chris Metcalf, rth, vgupta, linux, will.deacon, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, mingo

Implement atomic logic ops -- atomic_{or,xor,and}.

For tilegx, these are relatively straightforward; the architecture
provides atomic "or" and "and", both 32-bit and 64-bit.  To support
xor we provide a loop using "cmpexch".

For the older 32-bit tilepro architecture, we have to extend
the set of low-level assembly routines to include 32-bit "and",
as well as all three 64-bit routines.  Somewhat confusingly,
some 32-bit versions are already used by the bitops inlines, with
parameter types appropriate for bitops, so we have to do a bit of
casting to match "int" to "unsigned long".

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
---
Peter, I'm guessing you should just take this into your series,
rather than my pushing it through the tile tree.

 arch/tile/include/asm/atomic_32.h | 28 +++++++++++++++++++++++++++
 arch/tile/include/asm/atomic_64.h | 40 +++++++++++++++++++++++++++++++++++++++
 arch/tile/lib/atomic_32.c         | 23 ++++++++++++++++++++++
 arch/tile/lib/atomic_asm_32.S     |  4 ++++
 4 files changed, 95 insertions(+)

diff --git a/arch/tile/include/asm/atomic_32.h b/arch/tile/include/asm/atomic_32.h
index 1b109fad9fff..d320ce253d86 100644
--- a/arch/tile/include/asm/atomic_32.h
+++ b/arch/tile/include/asm/atomic_32.h
@@ -34,6 +34,19 @@ static inline void atomic_add(int i, atomic_t *v)
 	_atomic_xchg_add(&v->counter, i);
 }
 
+#define ATOMIC_OP(op)							\
+unsigned long _atomic_##op(volatile unsigned long *p, unsigned long mask); \
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	_atomic_##op((unsigned long *)&v->counter, i);			\
+}
+
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
+#undef ATOMIC_OP
+
 /**
  * atomic_add_return - add integer and return
  * @v: pointer of type atomic_t
@@ -113,6 +126,17 @@ static inline void atomic64_add(long long i, atomic64_t *v)
 	_atomic64_xchg_add(&v->counter, i);
 }
 
+#define ATOMIC64_OP(op)						\
+long long _atomic64_##op(long long *v, long long n);		\
+static inline void atomic64_##op(long long i, atomic64_t *v)	\
+{								\
+	_atomic64_##op(&v->counter, i);				\
+}
+
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
+
 /**
  * atomic64_add_return - add integer and return
  * @v: pointer of type atomic64_t
@@ -225,6 +249,7 @@ extern struct __get_user __atomic_xchg_add(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xchg_add_unless(volatile int *p,
 						  int *lock, int o, int n);
 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n);
+extern struct __get_user __atomic_and(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n);
 extern long long __atomic64_cmpxchg(volatile long long *p, int *lock,
@@ -234,6 +259,9 @@ extern long long __atomic64_xchg_add(volatile long long *p, int *lock,
 					long long n);
 extern long long __atomic64_xchg_add_unless(volatile long long *p,
 					int *lock, long long o, long long n);
+extern long long __atomic64_and(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_or(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_xor(volatile long long *p, int *lock, long long n);
 
 /* Return failure from the atomic wrappers. */
 struct __get_user __atomic_bad_address(int __user *addr);
diff --git a/arch/tile/include/asm/atomic_64.h b/arch/tile/include/asm/atomic_64.h
index 0496970cef82..096a56d6ead4 100644
--- a/arch/tile/include/asm/atomic_64.h
+++ b/arch/tile/include/asm/atomic_64.h
@@ -58,6 +58,26 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 	return oldval;
 }
 
+static inline void atomic_and(int i, atomic_t *v)
+{
+	__insn_fetchand4((void *)&v->counter, i);
+}
+
+static inline void atomic_or(int i, atomic_t *v)
+{
+	__insn_fetchor4((void *)&v->counter, i);
+}
+
+static inline void atomic_xor(int i, atomic_t *v)
+{
+	int guess, oldval = v->counter;
+	do {
+		guess = oldval;
+		__insn_mtspr(SPR_CMPEXCH_VALUE, guess);
+		oldval = __insn_cmpexch4(&v->counter, guess ^ i);
+	} while (guess != oldval);
+}
+
 /* Now the true 64-bit operations. */
 
 #define ATOMIC64_INIT(i)	{ (i) }
@@ -91,6 +111,26 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 	return oldval != u;
 }
 
+static inline void atomic64_and(long i, atomic64_t *v)
+{
+	__insn_fetchand((void *)&v->counter, i);
+}
+
+static inline void atomic64_or(long i, atomic64_t *v)
+{
+	__insn_fetchor((void *)&v->counter, i);
+}
+
+static inline void atomic64_xor(long i, atomic64_t *v)
+{
+	long guess, oldval = v->counter;
+	do {
+		guess = oldval;
+		__insn_mtspr(SPR_CMPEXCH_VALUE, guess);
+		oldval = __insn_cmpexch(&v->counter, guess ^ i);
+	} while (guess != oldval);
+}
+
 #define atomic64_sub_return(i, v)	atomic64_add_return(-(i), (v))
 #define atomic64_sub(i, v)		atomic64_add(-(i), (v))
 #define atomic64_inc_return(v)		atomic64_add_return(1, (v))
diff --git a/arch/tile/lib/atomic_32.c b/arch/tile/lib/atomic_32.c
index c89b211fd9e7..298df1e9912a 100644
--- a/arch/tile/lib/atomic_32.c
+++ b/arch/tile/lib/atomic_32.c
@@ -94,6 +94,12 @@ unsigned long _atomic_or(volatile unsigned long *p, unsigned long mask)
 }
 EXPORT_SYMBOL(_atomic_or);
 
+unsigned long _atomic_and(volatile unsigned long *p, unsigned long mask)
+{
+	return __atomic_and((int *)p, __atomic_setup(p), mask).val;
+}
+EXPORT_SYMBOL(_atomic_and);
+
 unsigned long _atomic_andn(volatile unsigned long *p, unsigned long mask)
 {
 	return __atomic_andn((int *)p, __atomic_setup(p), mask).val;
@@ -136,6 +142,23 @@ long long _atomic64_cmpxchg(long long *v, long long o, long long n)
 }
 EXPORT_SYMBOL(_atomic64_cmpxchg);
 
+long long _atomic64_and(long long *v, long long n)
+{
+	return __atomic64_and(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_and);
+
+long long _atomic64_or(long long *v, long long n)
+{
+	return __atomic64_or(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_or);
+
+long long _atomic64_xor(long long *v, long long n)
+{
+	return __atomic64_xor(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_xor);
 
 /*
  * If any of the atomic or futex routines hit a bad address (not in
diff --git a/arch/tile/lib/atomic_asm_32.S b/arch/tile/lib/atomic_asm_32.S
index 6bda3132cd61..f611265633d6 100644
--- a/arch/tile/lib/atomic_asm_32.S
+++ b/arch/tile/lib/atomic_asm_32.S
@@ -178,6 +178,7 @@ atomic_op _xchg_add, 32, "add r24, r22, r2"
 atomic_op _xchg_add_unless, 32, \
 	"sne r26, r22, r2; { bbns r26, 3f; add r24, r22, r3 }"
 atomic_op _or, 32, "or r24, r22, r2"
+atomic_op _and, 32, "and r24, r22, r2"
 atomic_op _andn, 32, "nor r2, r2, zero; and r24, r22, r2"
 atomic_op _xor, 32, "xor r24, r22, r2"
 
@@ -191,6 +192,9 @@ atomic_op 64_xchg_add_unless, 64, \
 	{ bbns r26, 3f; add r24, r22, r4 }; \
 	{ bbns r27, 3f; add r25, r23, r5 }; \
 	slt_u r26, r24, r22; add r25, r25, r26"
+atomic_op 64_or, 64, "{ or r24, r22, r2; or r25, r23, r3 }"
+atomic_op 64_and, 64, "{ and r24, r22, r2; and r25, r23, r3 }"
+atomic_op 64_xor, 64, "{ xor r24, r22, r2; xor r25, r23, r3 }"
 
 	jrp     lr              /* happy backtracer */
 
-- 
2.1.2


^ permalink raw reply related	[flat|nested] 54+ messages in thread

* Re: [PATCH] tile: Provide atomic_{or,xor,and}
  2015-07-09 20:38 ` [PATCH] tile: Provide atomic_{or,xor,and} Chris Metcalf
@ 2015-07-09 20:49   ` Peter Zijlstra
  2015-07-27 12:17   ` [tip:locking/arch-atomic] " tip-bot for Chris Metcalf
  1 sibling, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-09 20:49 UTC (permalink / raw)
  To: Chris Metcalf
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, heiko.carstens, davem, mingo

On Thu, Jul 09, 2015 at 04:38:17PM -0400, Chris Metcalf wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> For tilegx, these are relatively straightforward; the architecture
> provides atomic "or" and "and", both 32-bit and 64-bit.  To support
> xor we provide a loop using "cmpexch".
> 
> For the older 32-bit tilepro architecture, we have to extend
> the set of low-level assembly routines to include 32-bit "and",
> as well as all three 64-bit routines.  Somewhat confusingly,
> some 32-bit versions are already used by the bitops inlines, with
> parameter types appropriate for bitops, so we have to do a bit of
> casting to match "int" to "unsigned long".
> 
> Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
> ---
> Peter, I'm guessing you should just take this into your series,
> rather than my pushing it through the tile tree.

Awesome, thanks! Yeah, I'll collect the lot.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 15/24] powerpc: Provide atomic_{or,xor,and}
  2015-07-09 17:29 ` [RFC][PATCH 15/24] powerpc: " Peter Zijlstra
@ 2015-07-09 21:49   ` Benjamin Herrenschmidt
  0 siblings, 0 replies; 54+ messages in thread
From: Benjamin Herrenschmidt @ 2015-07-09 21:49 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, heiko.carstens, davem, cmetcalf, mingo

On Thu, 2015-07-09 at 19:29 +0200, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/powerpc/include/asm/atomic.h |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> --- a/arch/powerpc/include/asm/atomic.h
> +++ b/arch/powerpc/include/asm/atomic.h
> @@ -66,6 +66,9 @@ static __inline__ int atomic_##op##_retu
>  
>  ATOMIC_OPS(add, add)
>  ATOMIC_OPS(sub, subf)
> +ATOMIC_OP(and, and)
> +ATOMIC_OP(or, or)
> +ATOMIC_OP(xor, xor)
>  
>  #undef ATOMIC_OPS
>  #undef ATOMIC_OP_RETURN
> @@ -304,6 +307,9 @@ static __inline__ long atomic64_##op##_r
>  
>  ATOMIC64_OPS(add, add)
>  ATOMIC64_OPS(sub, subf)
> +ATOMIC64_OP(and, and)
> +ATOMIC64_OP(or, or)
> +ATOMIC64_OP(xor, xor)

As long as you are ok that they are non-ordered atomics (no barrier in
them), then

Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
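
To spell out "non-ordered": these void ops imply no barriers, so any
caller that does need ordering has to bracket them explicitly. A hedged
usage sketch (FLAG_PENDING and st are made-up names):

	smp_mb__before_atomic();	/* order earlier accesses */
	atomic_or(FLAG_PENDING, &st);	/* relaxed RMW, no implicit barrier */
	smp_mb__after_atomic();		/* order later accesses */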

>  #undef ATOMIC64_OPS
>  #undef ATOMIC64_OP_RETURN
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe
> linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at  http://www.tux.org/lkml/



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 02/24] arc: Provide atomic_{or,xor,and}
  2015-07-09 17:28 ` [RFC][PATCH 02/24] arc: " Peter Zijlstra
@ 2015-07-10  4:30   ` Vineet Gupta
  2015-07-10  7:05     ` Peter Zijlstra
  0 siblings, 1 reply; 54+ messages in thread
From: Vineet Gupta @ 2015-07-10  4:30 UTC (permalink / raw)
  To: Peter Zijlstra, linux-kernel, linux-arch
  Cc: rth, Vineet.Gupta1, linux, will.deacon, hskinnemoen, realmz6,
	dhowells, rkuo, tony.luck, geert, james.hogan, ralf, jejb, benh,
	heiko.carstens, davem, cmetcalf, mingo

On Thursday 09 July 2015 11:26 PM, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
>
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Acked-by: Vineet Gupta <vgupta@synopsys.com>

Since we are on the topic, the cmpxchg() loop in arch/arc/kernel/smp.c still
irritates me.
Do we need a new set of primitives to operate atomically on non-atomic_t
data, or is the fundamental problem that the data is *not* atomic_t
despite requiring such semantics, and thus needs to be converted first?

-Vineet



^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 02/24] arc: Provide atomic_{or,xor,and}
  2015-07-10  4:30   ` Vineet Gupta
@ 2015-07-10  7:05     ` Peter Zijlstra
  2015-07-13 12:43       ` Vineet Gupta
  0 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10  7:05 UTC (permalink / raw)
  To: Vineet Gupta
  Cc: linux-kernel, linux-arch, rth, linux, will.deacon, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 04:30:46AM +0000, Vineet Gupta wrote:
> 
> Since we are on the topic, the cmpxchg() loop in arch/arc/kernel/smp.c still
> irritates me.
> Do we need a new set of primitives to operate atomically on non-atomic_t
> data, or is the fundamental problem that the data is *not* atomic_t
> despite requiring such semantics, and thus needs to be converted first?

So if you look at the last patch, there are already a few sites that do
things like:

+       atomic_or(*mask, (atomic_t *)&flushcache_cpumask);

Which is of course ugly as hell, but does work.

Esp. inside arch code.
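
For reference, the open-coded loop such a site replaces looks roughly
like so (a hedged sketch, assuming the mask word is a plain unsigned
long):

	unsigned long old, new;

	do {
		old = flushcache_cpumask;	/* plain load */
		new = old | *mask;
	} while (cmpxchg(&flushcache_cpumask, old, new) != old);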

Now the 'problem' with cmpxchg/xchg, the instructions working on !atomic
data, is:

  http://lkml.kernel.org/r/alpine.LRH.2.02.1406011342470.20831@file01.intranet.prod.int.rdu2.redhat.com
  http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net

And note that includes some arc.

Adding more such primitives will only make it harder on those already
'broken' archs.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 19/24] s390: Provide atomic_{or,xor,and}
  2015-07-09 17:29 ` [RFC][PATCH 19/24] s390: " Peter Zijlstra
@ 2015-07-10  7:17   ` Heiko Carstens
  2015-07-10 10:22     ` Peter Zijlstra
  0 siblings, 1 reply; 54+ messages in thread
From: Heiko Carstens @ 2015-07-10  7:17 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, davem, cmetcalf, mingo

On Thu, Jul 09, 2015 at 07:29:14PM +0200, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/s390/include/asm/atomic.h |   45 ++++++++++++++++++++++++++++-------------
>  1 file changed, 31 insertions(+), 14 deletions(-)
> 
> --- a/arch/s390/include/asm/atomic.h
> +++ b/arch/s390/include/asm/atomic.h
> @@ -28,6 +28,7 @@
>  #define __ATOMIC_AND	"lan"
>  #define __ATOMIC_ADD	"laa"
>  #define __ATOMIC_BARRIER "bcr	14,0\n"
> +#define __ATOMIC_XOR	"lax"
> 
>  #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier)		\
>  ({									\
> @@ -50,6 +51,7 @@
>  #define __ATOMIC_AND	"nr"
>  #define __ATOMIC_ADD	"ar"
>  #define __ATOMIC_BARRIER "\n"
> +#define __ATOMIC_XOR	"xr"

Would you mind moving the two XOR defines above the BARRIER?
Just to keep it consistent with ATOMIC64 stuff within this patch ;)

>  #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier)		\
>  ({									\
> @@ -118,14 +120,26 @@ static inline void atomic_add(int i, ato
>  #define atomic_dec_return(_v)		atomic_sub_return(1, _v)
>  #define atomic_dec_and_test(_v)		(atomic_sub_return(1, _v) == 0)
> 
> -static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
> +#define ATOMIC_OP(op, OP)						\
> +static inline void atomic_##op(int i, atomic_t *v)			\
> +{									\
> +	__ATOMIC_LOOP(v, i, __ATOMIC_##OP, __ATOMIC_NO_BARRIER);	\
> +}
> +
> +ATOMIC_OP(and, AND)
> +ATOMIC_OP(or, OR)
> +ATOMIC_OP(xor, XOR)
> +
> +#undef ATOMIC_OP
> +
> +static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
>  {
> -	__ATOMIC_LOOP(v, ~mask, __ATOMIC_AND, __ATOMIC_NO_BARRIER);
> +	atomic_and(~mask, v);
>  }
> 
> -static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
> +static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
>  {
> -	__ATOMIC_LOOP(v, mask, __ATOMIC_OR, __ATOMIC_NO_BARRIER);
> +	atomic_or(mask, v);
>  }

If you insist on the __deprecated (no problem with that), I'd like to apply
your patch to the s390 tree so I can convert all users.
I would like to avoid seeing tons of warnings.

Besides that:

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-09 17:28 ` [RFC][PATCH 04/24] arm64: " Peter Zijlstra
@ 2015-07-10  8:42   ` Will Deacon
  2015-07-10 16:23     ` Peter Zijlstra
  2015-07-15 16:01   ` Will Deacon
  1 sibling, 1 reply; 54+ messages in thread
From: Will Deacon @ 2015-07-10  8:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

Hi Peter,

On Thu, Jul 09, 2015 at 06:28:59PM +0100, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---

Whilst this is pretty straight-forward, I have some serious rework on arm64
atomic.h pending, so do you mind if I take this via the arm64 tree and
resolve the conflicts myself?

Cheers,

Will

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-09 17:29 ` [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions Peter Zijlstra
@ 2015-07-10  9:10   ` Geert Uytterhoeven
  2015-07-10  9:13     ` Vineet Gupta
  2015-07-10 10:39     ` Peter Zijlstra
  0 siblings, 2 replies; 54+ messages in thread
From: Geert Uytterhoeven @ 2015-07-10  9:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Linux-Arch, Richard Henderson, Vineet Gupta,
	Russell King, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Chris Metcalf, Ingo Molnar

Hi Peter,

On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> --- a/include/linux/atomic.h
> +++ b/include/linux/atomic.h
> @@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
>  #define atomic_inc_not_zero(v)         atomic_add_unless((v), 1, 0)
>  #endif
>
> +#ifndef atomic_nand
> +static inline void atomic_nand(int i, atomic_t *v)
> +{
> +       atomic_and(~i, v);

That sounds like a misnomer...

Your NAND is "A & ~B", while my[*] NAND is "~(A & B)"?

[*] https://en.wikipedia.org/wiki/NAND_logic
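
A quick user-space check makes the difference concrete (illustrative
snippet only, nothing kernel-specific):

#include <stdio.h>

int main(void)
{
	unsigned int v = 0xff, i = 0x0f;

	printf("v & ~i   = %#x\n", v & ~i);	/* 0xf0: the bit-clear above */
	printf("~(v & i) = %#x\n", ~(v & i));	/* 0xfffffff0: a true NAND */
	return 0;
}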

What about atomic_clear()? (Is atomic_bic() too ARM-centric?)

> +}
> +#endif
> +
> +static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
> +{
> +       atomic_nand(mask, v);
> +}

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 10/24] m68k: Provide atomic_{or,xor,and}
  2015-07-09 17:29 ` [RFC][PATCH 10/24] m68k: " Peter Zijlstra
@ 2015-07-10  9:13   ` Geert Uytterhoeven
  0 siblings, 0 replies; 54+ messages in thread
From: Geert Uytterhoeven @ 2015-07-10  9:13 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, Linux-Arch, Richard Henderson, Vineet Gupta,
	Russell King, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Chris Metcalf, Ingo Molnar

On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
>
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>

(it builds, even if I add a user of atomic_xor())

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10  9:10   ` Geert Uytterhoeven
@ 2015-07-10  9:13     ` Vineet Gupta
  2015-07-10 10:39     ` Peter Zijlstra
  1 sibling, 0 replies; 54+ messages in thread
From: Vineet Gupta @ 2015-07-10  9:13 UTC (permalink / raw)
  To: Geert Uytterhoeven, Peter Zijlstra
  Cc: linux-kernel, Linux-Arch, Richard Henderson, Russell King,
	Will Deacon, Håvard Skinnemoen, Miao Steven, David Howells,
	Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Chris Metcalf, Ingo Molnar

On Friday 10 July 2015 02:40 PM, Geert Uytterhoeven wrote:
> Hi Peter,
>
> On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> > --- a/include/linux/atomic.h
>> > +++ b/include/linux/atomic.h
>> > @@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
>> >  #define atomic_inc_not_zero(v)         atomic_add_unless((v), 1, 0)
>> >  #endif
>> >
>> > +#ifndef atomic_nand
>> > +static inline void atomic_nand(int i, atomic_t *v)
>> > +{
>> > +       atomic_and(~i, v);
> That sounds like a misnomer...
>
> Your NAND is "A & ~B", while my[*] NAND is "~(A & B)"?
>
> [*] https://en.wikipedia.org/wiki/NAND_logic
>
> What about atomic_clear()? (Is atomic_bic() too ARM-centric?)
>

ARM + ARC centric :-)

We have the BIC instruction as well, which does the same: A & ~B.

-Vineet

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 19/24] s390: Provide atomic_{or,xor,and}
  2015-07-10  7:17   ` Heiko Carstens
@ 2015-07-10 10:22     ` Peter Zijlstra
  2015-07-10 10:52       ` Heiko Carstens
  2015-07-10 11:28       ` Peter Zijlstra
  0 siblings, 2 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10 10:22 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 09:17:09AM +0200, Heiko Carstens wrote:

> > @@ -50,6 +51,7 @@
> >  #define __ATOMIC_AND	"nr"
> >  #define __ATOMIC_ADD	"ar"
> >  #define __ATOMIC_BARRIER "\n"
> > +#define __ATOMIC_XOR	"xr"
> 
> Would you mind moving the two XOR defines above the BARRIER?
> Just to keep it consistent with ATOMIC64 stuff within this patch ;)

Oh, duh, done.

> >  #define __ATOMIC_LOOP(ptr, op_val, op_string, __barrier)		\
> >  ({									\
> > @@ -118,14 +120,26 @@ static inline void atomic_add(int i, ato
> >  #define atomic_dec_return(_v)		atomic_sub_return(1, _v)
> >  #define atomic_dec_and_test(_v)		(atomic_sub_return(1, _v) == 0)
> > 
> > -static inline void atomic_clear_mask(unsigned int mask, atomic_t *v)
> > +#define ATOMIC_OP(op, OP)						\
> > +static inline void atomic_##op(int i, atomic_t *v)			\
> > +{									\
> > +	__ATOMIC_LOOP(v, i, __ATOMIC_##OP, __ATOMIC_NO_BARRIER);	\
> > +}
> > +
> > +ATOMIC_OP(and, AND)
> > +ATOMIC_OP(or, OR)
> > +ATOMIC_OP(xor, XOR)
> > +
> > +#undef ATOMIC_OP
> > +
> > +static inline __deprecated void atomic_clear_mask(unsigned int mask, atomic_t *v)
> >  {
> > -	__ATOMIC_LOOP(v, ~mask, __ATOMIC_AND, __ATOMIC_NO_BARRIER);
> > +	atomic_and(~mask, v);
> >  }
> > 
> > -static inline void atomic_set_mask(unsigned int mask, atomic_t *v)
> > +static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
> >  {
> > -	__ATOMIC_LOOP(v, mask, __ATOMIC_OR, __ATOMIC_NO_BARRIER);
> > +	atomic_or(mask, v);
> >  }
> 
> If you insist on the __deprecated (no problem with that), I'd like to apply
> your patch to the s390 tree so I can convert all users.
> I would like to avoid seeing tons of warnings.

See the last patch in this series, it does that conversion. Although I
might need to double check I got them all, its been a while since I did
that patch.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 03/24] arm: Provide atomic_{or,xor,and}
  2015-07-09 18:02   ` Peter Zijlstra
@ 2015-07-10 10:24     ` Russell King - ARM Linux
  0 siblings, 0 replies; 54+ messages in thread
From: Russell King - ARM Linux @ 2015-07-10 10:24 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, will.deacon, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Thu, Jul 09, 2015 at 08:02:23PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 09, 2015 at 07:28:58PM +0200, Peter Zijlstra wrote:
> > @@ -320,6 +323,9 @@ static inline long long atomic64_##op##_
> >  
> >  ATOMIC64_OPS(add, adds, adc)
> >  ATOMIC64_OPS(sub, subs, sbc)
> > +ATOMIC64_OP(and, and, and)
> > +ATOMIC64_OP(or, or, or)
> 
> Hmm, reading through them, this should be:
> 
> ATOMIC64_OP(or, orr, orr)
> 
> I suppose, not sure why the compiler didn't complain, maybe because
> there aren't any users..

Yep, as it creates a static inline function, the code will only get
produced if something uses it, at which point the assembler would have
picked up on the error.

In any case, with that modification, the patch then _looks_ correct to
me for both atomic and atomic64 additions.  Not tested myself.

I guess as you're only looking for comments at the moment, there's
little point in acking it just yet.

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10  9:10   ` Geert Uytterhoeven
  2015-07-10  9:13     ` Vineet Gupta
@ 2015-07-10 10:39     ` Peter Zijlstra
  2015-07-10 13:34       ` Chris Metcalf
  1 sibling, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10 10:39 UTC (permalink / raw)
  To: Geert Uytterhoeven
  Cc: linux-kernel, Linux-Arch, Richard Henderson, Vineet Gupta,
	Russell King, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Chris Metcalf, Ingo Molnar

On Fri, Jul 10, 2015 at 11:10:33AM +0200, Geert Uytterhoeven wrote:
> Hi Peter,
> 
> On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> > --- a/include/linux/atomic.h
> > +++ b/include/linux/atomic.h
> > @@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
> >  #define atomic_inc_not_zero(v)         atomic_add_unless((v), 1, 0)
> >  #endif
> >
> > +#ifndef atomic_nand
> > +static inline void atomic_nand(int i, atomic_t *v)
> > +{
> > +       atomic_and(~i, v);
> 
> That sounds like a misnomer...
> 
> Your NAND is "A & ~B", while my[*] NAND is "~(A & B)"?
> 
> [*] https://en.wikipedia.org/wiki/NAND_logic

Right you are.

> What about atomic_clear()? (Is atomic_bic() too ARM-centric?)

atomic_and_not() ?

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 19/24] s390: Provide atomic_{or,xor,and}
  2015-07-10 10:22     ` Peter Zijlstra
@ 2015-07-10 10:52       ` Heiko Carstens
  2015-07-10 11:28       ` Peter Zijlstra
  1 sibling, 0 replies; 54+ messages in thread
From: Heiko Carstens @ 2015-07-10 10:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 12:22:10PM +0200, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 09:17:09AM +0200, Heiko Carstens wrote:
> > > +static inline __deprecated void atomic_set_mask(unsigned int mask, atomic_t *v)
> > >  {
> > > -	__ATOMIC_LOOP(v, mask, __ATOMIC_OR, __ATOMIC_NO_BARRIER);
> > > +	atomic_or(mask, v);
> > >  }
> > 
> > If you insist on the __deprecated (no problem with that), I'd like to apply
> > your patch to the s390 tree so I can convert all users.
> > I would like to avoid seeing tons of warnings.
> 
> See the last patch in this series, it does that conversion. Although I
> might need to double check I got them all, its been a while since I did
> that patch.

Ah right, I missed that. Then I'm happy with all the changes.
Thanks for doing this nice cleanup!


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 19/24] s390: Provide atomic_{or,xor,and}
  2015-07-10 10:22     ` Peter Zijlstra
  2015-07-10 10:52       ` Heiko Carstens
@ 2015-07-10 11:28       ` Peter Zijlstra
  1 sibling, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10 11:28 UTC (permalink / raw)
  To: Heiko Carstens
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, will.deacon,
	hskinnemoen, realmz6, dhowells, rkuo, tony.luck, geert,
	james.hogan, ralf, jejb, benh, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 12:22:10PM +0200, Peter Zijlstra wrote:
> > If you insist on the __deprecated (no problem with that), I'd like to apply
> > your patch to the s390 tree so I can convert all users.
> > I would like to avoid seeing tons of warnings.
> 
> See the last patch in this series, it does that conversion. Although I
> might need to double check I got them all, its been a while since I did
> that patch.

There were indeed a few stray ones, cleaned them up too.

That grep also found a whole new architecture... somehow I missed h8300,
sorted that too.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10 10:39     ` Peter Zijlstra
@ 2015-07-10 13:34       ` Chris Metcalf
  2015-07-10 13:42         ` Russell King - ARM Linux
  0 siblings, 1 reply; 54+ messages in thread
From: Chris Metcalf @ 2015-07-10 13:34 UTC (permalink / raw)
  To: Peter Zijlstra, Geert Uytterhoeven
  Cc: linux-kernel, Linux-Arch, Richard Henderson, Vineet Gupta,
	Russell King, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Ingo Molnar

On 7/10/2015 6:39 AM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 11:10:33AM +0200, Geert Uytterhoeven wrote:
>> Hi Peter,
>>
>> On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>>> --- a/include/linux/atomic.h
>>> +++ b/include/linux/atomic.h
>>> @@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
>>>   #define atomic_inc_not_zero(v)         atomic_add_unless((v), 1, 0)
>>>   #endif
>>>
>>> +#ifndef atomic_nand
>>> +static inline void atomic_nand(int i, atomic_t *v)
>>> +{
>>> +       atomic_and(~i, v);
>> That sounds like a misnomer...
>>
>> Your NAND is "A & ~B", while my[*] NAND is "~(A & B)"?
>>
>> [*] https://en.wikipedia.org/wiki/NAND_logic
> Right you are.
>
>> What about atomic_clear()? (Is atomic_bic() too ARM-centric?)
> atomic_and_not() ?

I've seen this as ANDN (as opposed to NAND).  That's the name I used in
the tilepro atomics as the thing that implements the bitmask clear operation.
SPARC also has an "andn" instruction with this semantics.

-- 
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10 13:34       ` Chris Metcalf
@ 2015-07-10 13:42         ` Russell King - ARM Linux
  2015-07-10 16:27           ` Peter Zijlstra
  0 siblings, 1 reply; 54+ messages in thread
From: Russell King - ARM Linux @ 2015-07-10 13:42 UTC (permalink / raw)
  To: Chris Metcalf
  Cc: Peter Zijlstra, Geert Uytterhoeven, linux-kernel, Linux-Arch,
	Richard Henderson, Vineet Gupta, Will Deacon,
	Håvard Skinnemoen, Miao Steven, David Howells, Richard Kuo,
	Tony Luck, James Hogan, Ralf Baechle, James E.J. Bottomley,
	Benjamin Herrenschmidt, Heiko Carstens, David S. Miller,
	Ingo Molnar

On Fri, Jul 10, 2015 at 09:34:04AM -0400, Chris Metcalf wrote:
> On 7/10/2015 6:39 AM, Peter Zijlstra wrote:
> >On Fri, Jul 10, 2015 at 11:10:33AM +0200, Geert Uytterhoeven wrote:
> >>Hi Peter,
> >>
> >>On Thu, Jul 9, 2015 at 7:29 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> >>>--- a/include/linux/atomic.h
> >>>+++ b/include/linux/atomic.h
> >>>@@ -28,6 +28,23 @@ static inline int atomic_add_unless(atom
> >>>  #define atomic_inc_not_zero(v)         atomic_add_unless((v), 1, 0)
> >>>  #endif
> >>>
> >>>+#ifndef atomic_nand
> >>>+static inline void atomic_nand(int i, atomic_t *v)
> >>>+{
> >>>+       atomic_and(~i, v);
> >>That sounds like a misnomer...
> >>
> >>Your NAND is "A & ~B", while my[*] NAND is "~(A & B)"?
> >>
> >>[*] https://en.wikipedia.org/wiki/NAND_logic
> >Right you are.
> >
> >>What about atomic_clear()? (Is atomic_bic() too ARM-centric?)
> >atomic_and_not() ?
> 
> I've seen this as ANDN (as opposed to NAND).  That's the name I used in
> the tilepro atomics as the thing that implements the bitmask clear operation.
> SPARC also has an "andn" instruction with this semantics.

The obvious question though is whether we have an established name for this
operation elsewhere in the kernel, and whether we should have consistency.
In include/linux, we already have (grepping for 'and_*not'):

include/linux/nodemask.h:#define nodes_andnot(dst, src1, src2) \
include/linux/bitmap.h:extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
include/linux/cpumask.h:static inline int cpumask_andnot(struct cpumask *dstp,

We also have:

include/linux/signal.h:#define _sig_andn(x,y)       ((x) & ~(y))

which seems to be the only instance of "andn" in include/.

-- 
FTTC broadband for 0.8mile line: currently at 10.5Mbps down 400kbps up
according to speedtest.net.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-10  8:42   ` Will Deacon
@ 2015-07-10 16:23     ` Peter Zijlstra
  2015-07-13  9:29       ` Will Deacon
  0 siblings, 1 reply; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10 16:23 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 09:42:59AM +0100, Will Deacon wrote:
> Hi Peter,
> 
> On Thu, Jul 09, 2015 at 06:28:59PM +0100, Peter Zijlstra wrote:
> > Implement atomic logic ops -- atomic_{or,xor,and}.
> > 
> > These will replace the atomic_{set,clear}_mask functions that are
> > available on some archs.
> > 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> 
> Whilst this is pretty straight-forward, I have some serious rework on arm64
> atomic.h pending, so do you mind if I take this via the arm64 tree and
> resolve the conflicts myself?

Are those public anywhere? The thing is, at the end of this series I
pretty much assume all archs will have these ops available, so whatever
tree I stick the rest in would have to pull in your branch too.

Also, IF you apply locally, do not forget to s/or/orr/ on the 64bit
versions; I seem to have missed updating those, just like I did on 32bit
arm.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10 13:42         ` Russell King - ARM Linux
@ 2015-07-10 16:27           ` Peter Zijlstra
  2015-07-10 17:35             ` Chris Metcalf
  2015-07-10 19:45             ` Chris Metcalf
  0 siblings, 2 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-10 16:27 UTC (permalink / raw)
  To: Russell King - ARM Linux
  Cc: Chris Metcalf, Geert Uytterhoeven, linux-kernel, Linux-Arch,
	Richard Henderson, Vineet Gupta, Will Deacon,
	Håvard Skinnemoen, Miao Steven, David Howells, Richard Kuo,
	Tony Luck, James Hogan, Ralf Baechle, James E.J. Bottomley,
	Benjamin Herrenschmidt, Heiko Carstens, David S. Miller,
	Ingo Molnar

On Fri, Jul 10, 2015 at 02:42:56PM +0100, Russell King - ARM Linux wrote:
> The obvious question though is whether we have an established name for this
> operation elsewhere in the kernel, and whether we should have consistency.

Consistency is good.

> In include/linux, we already have (grepping for 'and_*not'):
> 
> include/linux/nodemask.h:#define nodes_andnot(dst, src1, src2) \
> include/linux/bitmap.h:extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
> include/linux/cpumask.h:static inline int cpumask_andnot(struct cpumask *dstp,
> 
> We also have:
> 
> include/linux/signal.h:#define _sig_andn(x,y)       ((x) & ~(y))
> 
> which seems to be the only instance of "andn" in include/.

How about I rename the _sig_andn one to _sig_andnot, and go with
atomic_andnot, to match the *mask functions.
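
The generic fallback would then be the quoted atomic_nand() above under
its corrected name (hedged sketch):

#ifndef atomic_andnot
static inline void atomic_andnot(int i, atomic_t *v)
{
	atomic_and(~i, v);
}
#endif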


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10 16:27           ` Peter Zijlstra
@ 2015-07-10 17:35             ` Chris Metcalf
  2015-07-10 19:45             ` Chris Metcalf
  1 sibling, 0 replies; 54+ messages in thread
From: Chris Metcalf @ 2015-07-10 17:35 UTC (permalink / raw)
  To: Peter Zijlstra, Russell King - ARM Linux
  Cc: Geert Uytterhoeven, linux-kernel, Linux-Arch, Richard Henderson,
	Vineet Gupta, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Ingo Molnar

On 07/10/2015 12:27 PM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 02:42:56PM +0100, Russell King - ARM Linux wrote:
>> The obvious question though is whether we have an established name for this
>> operation elsewhere in the kernel, and whether we should have consistency.
> Consistency is good.
>
>> In include/linux, we already have (grepping for 'and_*not'):
>>
>> include/linux/nodemask.h:#define nodes_andnot(dst, src1, src2) \
>> include/linux/bitmap.h:extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
>> include/linux/cpumask.h:static inline int cpumask_andnot(struct cpumask *dstp,
>>
>> We also have:
>>
>> include/linux/signal.h:#define _sig_andn(x,y)       ((x) & ~(y))
>>
>> which seems to be the only instance of "andn" in include/.
> How about I rename the _sig_andn one to _sig_andnot, and go with
> atomic_andnot, to match the *mask functions?

I'll respin my patch to just tweak tilepro's "andn" to use
"andnot" as well while I'm at it, then.  Making "andnot" a stand-alone
patch would cause conflicts so it might as well go in with your change.

-- 
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions
  2015-07-10 16:27           ` Peter Zijlstra
  2015-07-10 17:35             ` Chris Metcalf
@ 2015-07-10 19:45             ` Chris Metcalf
  1 sibling, 0 replies; 54+ messages in thread
From: Chris Metcalf @ 2015-07-10 19:45 UTC (permalink / raw)
  To: Peter Zijlstra, Russell King - ARM Linux
  Cc: Geert Uytterhoeven, linux-kernel, Linux-Arch, Richard Henderson,
	Vineet Gupta, Will Deacon, Håvard Skinnemoen, Miao Steven,
	David Howells, Richard Kuo, Tony Luck, James Hogan, Ralf Baechle,
	James E.J. Bottomley, Benjamin Herrenschmidt, Heiko Carstens,
	David S. Miller, Ingo Molnar

On 07/10/2015 12:27 PM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 02:42:56PM +0100, Russell King - ARM Linux wrote:
>> The obvious question though is whether we have an established name for this
>> operation elsewhere in the kernel, and whether we should have consistency.
> Consistency is good.
>
>> In include/linux, we already have (grepping for 'and_*not'):
>>
>> include/linux/nodemask.h:#define nodes_andnot(dst, src1, src2) \
>> include/linux/bitmap.h:extern int __bitmap_andnot(unsigned long *dst, const unsigned long *bitmap1,
>> include/linux/cpumask.h:static inline int cpumask_andnot(struct cpumask *dstp,
>>
>> We also have:
>>
>> include/linux/signal.h:#define _sig_andn(x,y)       ((x) & ~(y))
>>
>> which seems to be the only instance of "andn" in include/.
> How about I rename the _sig_andn one to _sig_andnot, and go with
> atomic_andnot, to match the *mask functions?

On further examination, there is also FUTEX_OP_ANDN, which is what
originally inspired me to use the name atomic_andn().  So I think
churning the nomenclature around for tilepro isn't really particularly
helpful, and I won't bother.

In any case I think "andn" and "andnot" are both fine names for
atomic_xxx :-)

-- 
Chris Metcalf, EZChip Semiconductor
http://www.ezchip.com


^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-10 16:23     ` Peter Zijlstra
@ 2015-07-13  9:29       ` Will Deacon
  0 siblings, 0 replies; 54+ messages in thread
From: Will Deacon @ 2015-07-13  9:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Fri, Jul 10, 2015 at 05:23:56PM +0100, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 09:42:59AM +0100, Will Deacon wrote:
> > On Thu, Jul 09, 2015 at 06:28:59PM +0100, Peter Zijlstra wrote:
> > > Implement atomic logic ops -- atomic_{or,xor,and}.
> > > 
> > > These will replace the atomic_{set,clear}_mask functions that are
> > > available on some archs.
> > > 
> > > 
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > 
> > Whilst this is pretty straightforward, I have some serious rework on arm64
> > atomic.h pending, so do you mind if I take this via the arm64 tree and
> > resolve the conflicts myself?
> 
> Are those public anywhere? The thing is, at the end of this series I
> pretty much assume all archs will have these ops available, so whatever
> tree I stick the rest in would have to pull in your branch too.

Spammed you with the series now. Not sure you'd want to pull all that lot
via -tip, though.

> Also, IF you apply locally, do not forget to s/or/orr/ on the 64bit
> versions, I seem to have missed updating those, just like I did on 32bit
> arm.

Ok, thanks for the heads up.

Will

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 02/24] arc: Provide atomic_{or,xor,and}
  2015-07-10  7:05     ` Peter Zijlstra
@ 2015-07-13 12:43       ` Vineet Gupta
  0 siblings, 0 replies; 54+ messages in thread
From: Vineet Gupta @ 2015-07-13 12:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, linux, will.deacon, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf

On Friday 10 July 2015 12:35 PM, Peter Zijlstra wrote:
> On Fri, Jul 10, 2015 at 04:30:46AM +0000, Vineet Gupta wrote:
>> > 
>> > Since we are on the topic, the cmpxchg() loop in arch/arc/kernel/smp.c still
>> > irritates me.
>> > Do we need a new set of primitives to operate atomically on non-atomic_t data,
>> > or does that mean that the data *not* being atomic_t but requiring such
>> > semantics is the fundamental problem, and it thus needs to be converted first?
> So if you look at the last patch, there's already a few sites that do
> things like:
>
> +       atomic_or(*mask, (atomic_t *)&flushcache_cpumask);
>
> Which is of course ugly as hell, but does work.
>
> Esp. inside arch code.

Right - I don't have issues with using this API - but it requires atomic_t data.
The specific cmpxchg() loop that I'm referring to does not operate on atomic_t -
so does that need to be converted to atomic_t first?
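
To illustrate the two options with a made-up flag word (hypothetical names,
not the actual arch/arc/kernel/smp.c code; kernel-context sketch assuming
<linux/atomic.h>):

	static unsigned long plain_pending;
	static atomic_t atomic_pending = ATOMIC_INIT(0);

	/* 1) Open-coded cmpxchg() loop on non-atomic_t data: */
	static void set_pending_cmpxchg(unsigned long msg)
	{
		unsigned long old, new;

		do {
			old = plain_pending;
			new = old | msg;
		} while (cmpxchg(&plain_pending, old, new) != old);
	}

	/* 2) Convert the data to atomic_t and use the new primitive: */
	static void set_pending_atomic(int msg)
	{
		atomic_or(msg, &atomic_pending);
	}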

>
> Now the 'problem' with cmpxchg/xchg, the instructions working on !atomic
> data is:
>
>   http://lkml.kernel.org/r/alpine.LRH.2.02.1406011342470.20831@file01.intranet.prod.int.rdu2.redhat.com
>   http://lkml.kernel.org/r/20140606175316.GV13930@laptop.programming.kicks-ass.net
>
> And note that includes some arc.

Correct, so we don't mix cmpxchg() with normal load/store.

>
> Adding more such primitives will only make it harder on those already
> 'broken' archs.

Not sure I follow here - my point was not so much about expanding the
atomic_*() API, but whether it makes sense to have "some" API for non-atomic_t
data vs. converting the non-atomic_t data to atomic_t and then using the
existing API, since the data not being atomic_t is the fundamental problem in
such cases.

-Vineet

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-09 17:28 ` [RFC][PATCH 04/24] arm64: " Peter Zijlstra
  2015-07-10  8:42   ` Will Deacon
@ 2015-07-15 16:01   ` Will Deacon
  2015-07-15 16:46     ` Peter Zijlstra
  1 sibling, 1 reply; 54+ messages in thread
From: Will Deacon @ 2015-07-15 16:01 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Thu, Jul 09, 2015 at 06:28:59PM +0100, Peter Zijlstra wrote:
> Implement atomic logic ops -- atomic_{or,xor,and}.
> 
> These will replace the atomic_{set,clear}_mask functions that are
> available on some archs.
> 
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  arch/arm64/include/asm/atomic.h |    6 ++++++
>  1 file changed, 6 insertions(+)
> 
> --- a/arch/arm64/include/asm/atomic.h
> +++ b/arch/arm64/include/asm/atomic.h
> @@ -84,6 +84,9 @@ static inline int atomic_##op##_return(i
>  
>  ATOMIC_OPS(add, add)
>  ATOMIC_OPS(sub, sub)
> +ATOMIC_OP(and, and)
> +ATOMIC_OP(or, orr)

FYI, but without selecting CONFIG_ARCH_HAS_ATOMIC_OR this change leads to
build errors:

                 from include/linux/seqlock.h:35,
                 from include/linux/time.h:5,
                 from include/uapi/linux/timex.h:56,
                 from include/linux/timex.h:56,
                 from include/linux/sched.h:19,
                 from arch/arm64/kernel/asm-offsets.c:21:
include/linux/atomic.h:115:20: error: redefinition of ‘atomic_or’
 static inline void atomic_or(int i, atomic_t *v)
                    ^
In file included from include/linux/atomic.h:4:0,
                 from include/linux/spinlock.h:416,
                 from include/linux/seqlock.h:35,
                 from include/linux/time.h:5,
                 from include/uapi/linux/timex.h:56,
                 from include/linux/timex.h:56,
                 from include/linux/sched.h:19,
                 from arch/arm64/kernel/asm-offsets.c:21:
arch/arm64/include/asm/atomic.h:48:20: note: previous definition of ‘atomic_or’ was here
 static inline void atomic_##op(int i, atomic_t *v)   \
                    ^
arch/arm64/include/asm/atomic.h:88:1: note: in expansion of macro ‘ATOMIC_OP’
 ATOMIC_OP(or, orr)
 ^
make[2]: *** [arch/arm64/kernel/asm-offsets.s] Error 1
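
(For context, the clash comes from the generic fallback in
include/linux/atomic.h, which is only compiled out when the arch defines
CONFIG_ARCH_HAS_ATOMIC_OR before it is included -- roughly, from memory:

	#ifndef CONFIG_ARCH_HAS_ATOMIC_OR
	static inline void atomic_or(int i, atomic_t *v)
	{
		int old, new;

		do {
			old = atomic_read(v);
			new = old | i;
		} while (atomic_cmpxchg(v, old, new) != old);
	}
	#endif

so the arm64 definition above ends up as a redefinition.)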

Will

^ permalink raw reply	[flat|nested] 54+ messages in thread

* Re: [RFC][PATCH 04/24] arm64: Provide atomic_{or,xor,and}
  2015-07-15 16:01   ` Will Deacon
@ 2015-07-15 16:46     ` Peter Zijlstra
  0 siblings, 0 replies; 54+ messages in thread
From: Peter Zijlstra @ 2015-07-15 16:46 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, linux-arch, rth, vgupta, linux, hskinnemoen,
	realmz6, dhowells, rkuo, tony.luck, geert, james.hogan, ralf,
	jejb, benh, heiko.carstens, davem, cmetcalf, mingo

On Wed, Jul 15, 2015 at 05:01:09PM +0100, Will Deacon wrote:
> On Thu, Jul 09, 2015 at 06:28:59PM +0100, Peter Zijlstra wrote:
> > Implement atomic logic ops -- atomic_{or,xor,and}.
> > 
> > These will replace the atomic_{set,clear}_mask functions that are
> > available on some archs.
> > 
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  arch/arm64/include/asm/atomic.h |    6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > --- a/arch/arm64/include/asm/atomic.h
> > +++ b/arch/arm64/include/asm/atomic.h
> > @@ -84,6 +84,9 @@ static inline int atomic_##op##_return(i
> >  
> >  ATOMIC_OPS(add, add)
> >  ATOMIC_OPS(sub, sub)
> > +ATOMIC_OP(and, and)
> > +ATOMIC_OP(or, orr)
> 
> FYI, but without selecting CONFIG_ARCH_HAS_ATOMIC_OR this change leads to
> build errors:

Yeah, I already ran into that; I've a new set cooking which fixes all these
issues. I get the interesting build failures after a day or so (from the
build-bot).

I just pushed a fresh set to it and was hoping to be able to post
tomorrow.

^ permalink raw reply	[flat|nested] 54+ messages in thread

* [tip:locking/arch-atomic] tile: Provide atomic_{or,xor,and}
  2015-07-09 20:38 ` [PATCH] tile: Provide atomic_{or,xor,and} Chris Metcalf
  2015-07-09 20:49   ` Peter Zijlstra
@ 2015-07-27 12:17   ` tip-bot for Chris Metcalf
  1 sibling, 0 replies; 54+ messages in thread
From: tip-bot for Chris Metcalf @ 2015-07-27 12:17 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: mingo, peterz, cmetcalf, linux-kernel, hpa, tglx

Commit-ID:  2957c035395e492463d7f589af9dd32388967bbb
Gitweb:     http://git.kernel.org/tip/2957c035395e492463d7f589af9dd32388967bbb
Author:     Chris Metcalf <cmetcalf@ezchip.com>
AuthorDate: Thu, 9 Jul 2015 16:38:17 -0400
Committer:  Thomas Gleixner <tglx@linutronix.de>
CommitDate: Mon, 27 Jul 2015 14:06:24 +0200

tile: Provide atomic_{or,xor,and}

Implement atomic logic ops -- atomic_{or,xor,and}.

For tilegx, these are relatively straightforward; the architecture
provides atomic "or" and "and", both 32-bit and 64-bit.  To support
xor we provide a loop using "cmpexch".

For the older 32-bit tilepro architecture, we have to extend
the set of low-level assembly routines to include 32-bit "and",
as well as all three 64-bit routines.  Somewhat confusingly,
some 32-bit versions are already used by the bitops inlines, with
parameter types appropriate for bitops, so we have to do a bit of
casting to match "int" to "unsigned long".

Signed-off-by: Chris Metcalf <cmetcalf@ezchip.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1436474297-32187-1-git-send-email-cmetcalf@ezchip.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/tile/include/asm/atomic_32.h | 30 ++++++++++++++++++++++++++++
 arch/tile/include/asm/atomic_64.h | 42 +++++++++++++++++++++++++++++++++++++++
 arch/tile/lib/atomic_32.c         | 23 +++++++++++++++++++++
 arch/tile/lib/atomic_asm_32.S     |  4 ++++
 4 files changed, 99 insertions(+)

diff --git a/arch/tile/include/asm/atomic_32.h b/arch/tile/include/asm/atomic_32.h
index 1b109fa..9423792 100644
--- a/arch/tile/include/asm/atomic_32.h
+++ b/arch/tile/include/asm/atomic_32.h
@@ -34,6 +34,21 @@ static inline void atomic_add(int i, atomic_t *v)
 	_atomic_xchg_add(&v->counter, i);
 }
 
+#define ATOMIC_OP(op)							\
+unsigned long _atomic_##op(volatile unsigned long *p, unsigned long mask); \
+static inline void atomic_##op(int i, atomic_t *v)			\
+{									\
+	_atomic_##op((unsigned long *)&v->counter, i);			\
+}
+
+#define CONFIG_ARCH_HAS_ATOMIC_OR
+
+ATOMIC_OP(and)
+ATOMIC_OP(or)
+ATOMIC_OP(xor)
+
+#undef ATOMIC_OP
+
 /**
  * atomic_add_return - add integer and return
  * @v: pointer of type atomic_t
@@ -113,6 +128,17 @@ static inline void atomic64_add(long long i, atomic64_t *v)
 	_atomic64_xchg_add(&v->counter, i);
 }
 
+#define ATOMIC64_OP(op)						\
+long long _atomic64_##op(long long *v, long long n);		\
+static inline void atomic64_##op(long long i, atomic64_t *v)	\
+{								\
+	_atomic64_##op(&v->counter, i);				\
+}
+
+ATOMIC64_OP(and)
+ATOMIC64_OP(or)
+ATOMIC64_OP(xor)
+
 /**
  * atomic64_add_return - add integer and return
  * @v: pointer of type atomic64_t
@@ -225,6 +251,7 @@ extern struct __get_user __atomic_xchg_add(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xchg_add_unless(volatile int *p,
 						  int *lock, int o, int n);
 extern struct __get_user __atomic_or(volatile int *p, int *lock, int n);
+extern struct __get_user __atomic_and(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_andn(volatile int *p, int *lock, int n);
 extern struct __get_user __atomic_xor(volatile int *p, int *lock, int n);
 extern long long __atomic64_cmpxchg(volatile long long *p, int *lock,
@@ -234,6 +261,9 @@ extern long long __atomic64_xchg_add(volatile long long *p, int *lock,
 					long long n);
 extern long long __atomic64_xchg_add_unless(volatile long long *p,
 					int *lock, long long o, long long n);
+extern long long __atomic64_and(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_or(volatile long long *p, int *lock, long long n);
+extern long long __atomic64_xor(volatile long long *p, int *lock, long long n);
 
 /* Return failure from the atomic wrappers. */
 struct __get_user __atomic_bad_address(int __user *addr);
diff --git a/arch/tile/include/asm/atomic_64.h b/arch/tile/include/asm/atomic_64.h
index 0496970..d07d9fc 100644
--- a/arch/tile/include/asm/atomic_64.h
+++ b/arch/tile/include/asm/atomic_64.h
@@ -58,6 +58,28 @@ static inline int __atomic_add_unless(atomic_t *v, int a, int u)
 	return oldval;
 }
 
+#define CONFIG_ARCH_HAS_ATOMIC_OR
+
+static inline void atomic_and(int i, atomic_t *v)
+{
+	__insn_fetchand4((void *)&v->counter, i);
+}
+
+static inline void atomic_or(int i, atomic_t *v)
+{
+	__insn_fetchor4((void *)&v->counter, i);
+}
+
+static inline void atomic_xor(int i, atomic_t *v)
+{
+	int guess, oldval = v->counter;
+	do {
+		guess = oldval;
+		__insn_mtspr(SPR_CMPEXCH_VALUE, guess);
+		oldval = __insn_cmpexch4(&v->counter, guess ^ i);
+	} while (guess != oldval);
+}
+
 /* Now the true 64-bit operations. */
 
 #define ATOMIC64_INIT(i)	{ (i) }
@@ -91,6 +113,26 @@ static inline long atomic64_add_unless(atomic64_t *v, long a, long u)
 	return oldval != u;
 }
 
+static inline void atomic64_and(long i, atomic64_t *v)
+{
+	__insn_fetchand((void *)&v->counter, i);
+}
+
+static inline void atomic64_or(long i, atomic64_t *v)
+{
+	__insn_fetchor((void *)&v->counter, i);
+}
+
+static inline void atomic64_xor(long i, atomic64_t *v)
+{
+	long guess, oldval = v->counter;
+	do {
+		guess = oldval;
+		__insn_mtspr(SPR_CMPEXCH_VALUE, guess);
+		oldval = __insn_cmpexch(&v->counter, guess ^ i);
+	} while (guess != oldval);
+}
+
 #define atomic64_sub_return(i, v)	atomic64_add_return(-(i), (v))
 #define atomic64_sub(i, v)		atomic64_add(-(i), (v))
 #define atomic64_inc_return(v)		atomic64_add_return(1, (v))
diff --git a/arch/tile/lib/atomic_32.c b/arch/tile/lib/atomic_32.c
index c89b211..298df1e 100644
--- a/arch/tile/lib/atomic_32.c
+++ b/arch/tile/lib/atomic_32.c
@@ -94,6 +94,12 @@ unsigned long _atomic_or(volatile unsigned long *p, unsigned long mask)
 }
 EXPORT_SYMBOL(_atomic_or);
 
+unsigned long _atomic_and(volatile unsigned long *p, unsigned long mask)
+{
+	return __atomic_and((int *)p, __atomic_setup(p), mask).val;
+}
+EXPORT_SYMBOL(_atomic_and);
+
 unsigned long _atomic_andn(volatile unsigned long *p, unsigned long mask)
 {
 	return __atomic_andn((int *)p, __atomic_setup(p), mask).val;
@@ -136,6 +142,23 @@ long long _atomic64_cmpxchg(long long *v, long long o, long long n)
 }
 EXPORT_SYMBOL(_atomic64_cmpxchg);
 
+long long _atomic64_and(long long *v, long long n)
+{
+	return __atomic64_and(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_and);
+
+long long _atomic64_or(long long *v, long long n)
+{
+	return __atomic64_or(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_or);
+
+long long _atomic64_xor(long long *v, long long n)
+{
+	return __atomic64_xor(v, __atomic_setup(v), n);
+}
+EXPORT_SYMBOL(_atomic64_xor);
 
 /*
  * If any of the atomic or futex routines hit a bad address (not in
diff --git a/arch/tile/lib/atomic_asm_32.S b/arch/tile/lib/atomic_asm_32.S
index 6bda313..f611265 100644
--- a/arch/tile/lib/atomic_asm_32.S
+++ b/arch/tile/lib/atomic_asm_32.S
@@ -178,6 +178,7 @@ atomic_op _xchg_add, 32, "add r24, r22, r2"
 atomic_op _xchg_add_unless, 32, \
 	"sne r26, r22, r2; { bbns r26, 3f; add r24, r22, r3 }"
 atomic_op _or, 32, "or r24, r22, r2"
+atomic_op _and, 32, "and r24, r22, r2"
 atomic_op _andn, 32, "nor r2, r2, zero; and r24, r22, r2"
 atomic_op _xor, 32, "xor r24, r22, r2"
 
@@ -191,6 +192,9 @@ atomic_op 64_xchg_add_unless, 64, \
 	{ bbns r26, 3f; add r24, r22, r4 }; \
 	{ bbns r27, 3f; add r25, r23, r5 }; \
 	slt_u r26, r24, r22; add r25, r25, r26"
+atomic_op 64_or, 64, "{ or r24, r22, r2; or r25, r23, r3 }"
+atomic_op 64_and, 64, "{ and r24, r22, r2; and r25, r23, r3 }"
+atomic_op 64_xor, 64, "{ xor r24, r22, r2; xor r25, r23, r3 }"
 
 	jrp     lr              /* happy backtracer */
 

^ permalink raw reply related	[flat|nested] 54+ messages in thread

end of thread, other threads:[~2015-07-27 12:17 UTC | newest]

Thread overview: 54+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-07-09 17:28 [RFC][PATCH 00/24] arch: Provide atomic logic ops Peter Zijlstra
2015-07-09 17:28 ` [RFC][PATCH 01/24] alpha: Provide atomic_{or,xor,and} Peter Zijlstra
2015-07-09 17:28 ` [RFC][PATCH 02/24] arc: " Peter Zijlstra
2015-07-10  4:30   ` Vineet Gupta
2015-07-10  7:05     ` Peter Zijlstra
2015-07-13 12:43       ` Vineet Gupta
2015-07-09 17:28 ` [RFC][PATCH 03/24] arm: " Peter Zijlstra
2015-07-09 18:02   ` Peter Zijlstra
2015-07-10 10:24     ` Russell King - ARM Linux
2015-07-09 17:28 ` [RFC][PATCH 04/24] arm64: " Peter Zijlstra
2015-07-10  8:42   ` Will Deacon
2015-07-10 16:23     ` Peter Zijlstra
2015-07-13  9:29       ` Will Deacon
2015-07-15 16:01   ` Will Deacon
2015-07-15 16:46     ` Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 05/24] avr32: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 06/24] blackfin: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 07/24] hexagon: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 08/24] ia64: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 09/24] m32r: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 10/24] m68k: " Peter Zijlstra
2015-07-10  9:13   ` Geert Uytterhoeven
2015-07-09 17:29 ` [RFC][PATCH 11/24] metag: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 12/24] mips: " Peter Zijlstra
2015-07-09 18:45   ` Ralf Baechle
2015-07-09 17:29 ` [RFC][PATCH 13/24] mn10300: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 14/24] parisc: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 15/24] powerpc: " Peter Zijlstra
2015-07-09 21:49   ` Benjamin Herrenschmidt
2015-07-09 17:29 ` [RFC][PATCH 16/24] sh: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 17/24] sparc: " Peter Zijlstra
2015-07-09 18:05   ` David Miller
2015-07-09 17:29 ` [RFC][PATCH 18/24] xtensa: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 19/24] s390: " Peter Zijlstra
2015-07-10  7:17   ` Heiko Carstens
2015-07-10 10:22     ` Peter Zijlstra
2015-07-10 10:52       ` Heiko Carstens
2015-07-10 11:28       ` Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 20/24] x86: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 21/24] atomic: " Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 22/24] frv: Rewrite atomic implementation Peter Zijlstra
2015-07-09 17:29 ` [RFC][PATCH 23/24] atomic: Collapse all atomic_{set,clear}_mask definitions Peter Zijlstra
2015-07-10  9:10   ` Geert Uytterhoeven
2015-07-10  9:13     ` Vineet Gupta
2015-07-10 10:39     ` Peter Zijlstra
2015-07-10 13:34       ` Chris Metcalf
2015-07-10 13:42         ` Russell King - ARM Linux
2015-07-10 16:27           ` Peter Zijlstra
2015-07-10 17:35             ` Chris Metcalf
2015-07-10 19:45             ` Chris Metcalf
2015-07-09 17:29 ` [RFC][PATCH 24/24] atomic: Replace atomic_{set,clear}_mask() usage Peter Zijlstra
2015-07-09 20:38 ` [PATCH] tile: Provide atomic_{or,xor,and} Chris Metcalf
2015-07-09 20:49   ` Peter Zijlstra
2015-07-27 12:17   ` [tip:locking/arch-atomic] " tip-bot for Chris Metcalf
