* [PATCH 00/18] locking/atomic: atomic64 type cleanup
@ 2019-05-22 13:22 Mark Rutland
  2019-05-22 13:22 ` [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion Mark Rutland
                   ` (19 more replies)
  0 siblings, 20 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

Currently architectures return inconsistent types for atomic64 ops. Some return
long (e.g. powerpc), some return long long (e.g. arc), and some return s64
(e.g. x86).

This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
before they can be printed [1]).
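
As a minimal sketch of the problem (hypothetical code, not taken from any of
these patches), a driver that wants a single portable format specifier
currently has to cast the return value to a fixed type first:

	#include <linux/atomic.h>
	#include <linux/printk.h>

	static atomic64_t stats = ATOMIC64_INIT(0);

	static void report_stats(void)
	{
		/*
		 * atomic64_read() may return long, long long, or s64 depending
		 * on the architecture, so cast to a known 64-bit type and use
		 * its format specifier.
		 */
		pr_info("stats: %lld\n", (s64)atomic64_read(&stats));
	}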

This series reworks all the atomic64 implementations to use s64 as the base
type for atomic64_t (as discussed [2]), and to ensure that this type is
consistently used for parameters and return values in the API, avoiding further
problems in this area.

This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
branch [3] on kernel.org.

Thanks,
Mark.

[1] https://lkml.kernel.org/r/20190310183051.87303-1-cai@lca.pw
[2] https://lkml.kernel.org/r/20190313091844.GA24390@hirez.programming.kicks-ass.net
[3] git://git.kernel.org/pub/scm/linux/kernel/git/mark/linux.git atomics/type-cleanup

Mark Rutland (18):
  locking/atomic: crypto: nx: prepare for atomic64_read() conversion
  locking/atomic: s390/pci: prepare for atomic64_read() conversion
  locking/atomic: generic: use s64 for atomic64
  locking/atomic: alpha: use s64 for atomic64
  locking/atomic: arc: use s64 for atomic64
  locking/atomic: arm: use s64 for atomic64
  locking/atomic: arm64: use s64 for atomic64
  locking/atomic: ia64: use s64 for atomic64
  locking/atomic: mips: use s64 for atomic64
  locking/atomic: powerpc: use s64 for atomic64
  locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument
  locking/atomic: riscv: use s64 for atomic64
  locking/atomic: s390: use s64 for atomic64
  locking/atomic: sparc: use s64 for atomic64
  locking/atomic: x86: use s64 for atomic64
  locking/atomic: use s64 for atomic64_t on 64-bit
  locking/atomic: crypto: nx: remove redundant casts
  locking/atomic: s390/pci: remove redundant casts

 arch/alpha/include/asm/atomic.h       | 20 +++++------
 arch/arc/include/asm/atomic.h         | 41 +++++++++++-----------
 arch/arm/include/asm/atomic.h         | 50 +++++++++++++-------------
 arch/arm64/include/asm/atomic_ll_sc.h | 20 +++++------
 arch/arm64/include/asm/atomic_lse.h   | 34 +++++++++---------
 arch/ia64/include/asm/atomic.h        | 20 +++++------
 arch/mips/include/asm/atomic.h        | 22 ++++++------
 arch/powerpc/include/asm/atomic.h     | 44 +++++++++++------------
 arch/riscv/include/asm/atomic.h       | 44 ++++++++++++-----------
 arch/s390/include/asm/atomic.h        | 38 ++++++++++----------
 arch/s390/pci/pci_debug.c             |  2 +-
 arch/sparc/include/asm/atomic_64.h    |  8 ++---
 arch/x86/include/asm/atomic64_32.h    | 66 +++++++++++++++++------------------
 arch/x86/include/asm/atomic64_64.h    | 38 ++++++++++----------
 drivers/crypto/nx/nx-842-pseries.c    |  6 ++--
 include/asm-generic/atomic64.h        | 20 +++++------
 include/linux/types.h                 |  2 +-
 lib/atomic64.c                        | 32 ++++++++---------
 18 files changed, 252 insertions(+), 255 deletions(-)

-- 
2.11.0



* [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:34   ` [tip:locking/core] locking/atomic, crypto/nx: Prepare " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 02/18] locking/atomic: s390/pci: prepare " Mark Rutland
                   ` (18 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the nx-842 code to treat the
return value of atomic64_read() as s64, using explicit casts. These
casts will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 57932848361b..9432e9e42afe 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -869,8 +869,8 @@ static ssize_t nx842_##_name##_show(struct device *dev,		\
 	rcu_read_lock();						\
 	local_devdata = rcu_dereference(devdata);			\
 	if (local_devdata)						\
-		p = snprintf(buf, PAGE_SIZE, "%ld\n",			\
-		       atomic64_read(&local_devdata->counters->_name));	\
+		p = snprintf(buf, PAGE_SIZE, "%lld\n",			\
+		       (s64)atomic64_read(&local_devdata->counters->_name));	\
 	rcu_read_unlock();						\
 	return p;							\
 }
@@ -922,17 +922,17 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	}
 
 	for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
-		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n",
+		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
 			       i ? (2<<(i-1)) : 0, (2<<i)-1,
-			       atomic64_read(&times[i]));
+			       (s64)atomic64_read(&times[i]));
 		bytes_remain -= bytes;
 		p += bytes;
 	}
 	/* The last bucket holds everything over
 	 * 2<<(NX842_HIST_SLOTS - 2) us */
-	bytes = snprintf(p, bytes_remain, "%uus - :\t%ld\n",
+	bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
 			2<<(NX842_HIST_SLOTS - 2),
-			atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+			(s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
 	p += bytes;
 
 	rcu_read_unlock();
-- 
2.11.0



* [PATCH 02/18] locking/atomic: s390/pci: prepare for atomic64_read() conversion
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
  2019-05-22 13:22 ` [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:34   ` [tip:locking/core] locking/atomic, s390/pci: Prepare " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 03/18] locking/atomic: generic: use s64 for atomic64 Mark Rutland
                   ` (17 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the s390 pci debug code to
treat the return value of atomic64_read() as s64, using an explicit
cast. This cast will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/s390/pci/pci_debug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 6b48ca7760a7..45eccf79e990 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
-		seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i],
-			   atomic64_read(counter));
+		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+			   (s64)atomic64_read(counter));
 }
 
 static int pci_perf_show(struct seq_file *m, void *v)
-- 
2.11.0



* [PATCH 03/18] locking/atomic: generic: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
  2019-05-22 13:22 ` [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion Mark Rutland
  2019-05-22 13:22 ` [PATCH 02/18] locking/atomic: s390/pci: prepare " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-05-22 21:16   ` Arnd Bergmann
  2019-06-03 13:35   ` [tip:locking/core] locking/atomic: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 04/18] locking/atomic: alpha: use " Mark Rutland
                   ` (16 subsequent siblings)
  19 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the generic atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 include/asm-generic/atomic64.h | 20 ++++++++++----------
 lib/atomic64.c                 | 32 ++++++++++++++++----------------
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index 97b28b7f1f29..fc7b831ed632 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -14,24 +14,24 @@
 #include <linux/types.h>
 
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-extern long long atomic64_read(const atomic64_t *v);
-extern void	 atomic64_set(atomic64_t *v, long long i);
+extern s64 atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, s64 i);
 
 #define atomic64_set_release(v, i)	atomic64_set((v), (i))
 
 #define ATOMIC64_OP(op)							\
-extern void	 atomic64_##op(long long a, atomic64_t *v);
+extern void	 atomic64_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OP_RETURN(op)						\
-extern long long atomic64_##op##_return(long long a, atomic64_t *v);
+extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
 
 #define ATOMIC64_FETCH_OP(op)						\
-extern long long atomic64_fetch_##op(long long a, atomic64_t *v);
+extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OPS(op)	ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
 
@@ -50,11 +50,11 @@ ATOMIC64_OPS(xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern s64 atomic64_dec_if_positive(atomic64_t *v);
 #define atomic64_dec_if_positive atomic64_dec_if_positive
-extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
-extern long long atomic64_xchg(atomic64_t *v, long long new);
-extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
+extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
 #define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index 1d91e31eceec..62f218bf50a0 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -46,11 +46,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 	return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
 }
 
-long long atomic64_read(const atomic64_t *v)
+s64 atomic64_read(const atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -59,7 +59,7 @@ long long atomic64_read(const atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_read);
 
-void atomic64_set(atomic64_t *v, long long i)
+void atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -71,7 +71,7 @@ void atomic64_set(atomic64_t *v, long long i)
 EXPORT_SYMBOL(atomic64_set);
 
 #define ATOMIC64_OP(op, c_op)						\
-void atomic64_##op(long long a, atomic64_t *v)				\
+void atomic64_##op(s64 a, atomic64_t *v)				\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -83,11 +83,11 @@ void atomic64_##op(long long a, atomic64_t *v)				\
 EXPORT_SYMBOL(atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op, c_op)					\
-long long atomic64_##op##_return(long long a, atomic64_t *v)		\
+s64 atomic64_##op##_return(s64 a, atomic64_t *v)			\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
-	long long val;							\
+	s64 val;							\
 									\
 	raw_spin_lock_irqsave(lock, flags);				\
 	val = (v->counter c_op a);					\
@@ -97,11 +97,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v)		\
 EXPORT_SYMBOL(atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-long long atomic64_fetch_##op(long long a, atomic64_t *v)		\
+s64 atomic64_fetch_##op(s64 a, atomic64_t *v)				\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
-	long long val;							\
+	s64 val;							\
 									\
 	raw_spin_lock_irqsave(lock, flags);				\
 	val = v->counter;						\
@@ -134,11 +134,11 @@ ATOMIC64_OPS(xor, ^=)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-long long atomic64_dec_if_positive(atomic64_t *v)
+s64 atomic64_dec_if_positive(atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter - 1;
@@ -149,11 +149,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_dec_if_positive);
 
-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -164,11 +164,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
 }
 EXPORT_SYMBOL(atomic64_cmpxchg);
 
-long long atomic64_xchg(atomic64_t *v, long long new)
+s64 atomic64_xchg(atomic64_t *v, s64 new)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -178,11 +178,11 @@ long long atomic64_xchg(atomic64_t *v, long long new)
 }
 EXPORT_SYMBOL(atomic64_xchg);
 
-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
+s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
-- 
2.11.0



* [PATCH 04/18] locking/atomic: alpha: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (2 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 03/18] locking/atomic: generic: use s64 for atomic64 Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:36   ` [tip:locking/core] locking/atomic, alpha: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 05/18] locking/atomic: arc: use " Mark Rutland
                   ` (15 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the alpha atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.
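
(For reference, the generic atomic64_t that atomic64_read() picks its type up
from is declared in include/linux/types.h, roughly as below for 64-bit, until
the later "locking/atomic: use s64 for atomic64_t on 64-bit" patch switches it
over to s64.)

	/* include/linux/types.h, CONFIG_64BIT case (approximate, pre-conversion) */
	typedef struct {
		long counter;
	} atomic64_t;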

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 150a1c5d6a2c..2144530d1428 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(long i, atomic64_t * v)		\
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		\
 {									\
-	unsigned long temp;						\
+	s64 temp;							\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %0,%1\n"						\
 	"	" #asm_op " %0,%2,%0\n"					\
@@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v)		\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, asm_op)					\
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)	\
 {									\
-	long temp, result;						\
+	s64 temp, result;						\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %0,%1\n"						\
 	"	" #asm_op " %0,%3,%2\n"					\
@@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, asm_op)					\
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)	\
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)	\
 {									\
-	long temp, result;						\
+	s64 temp, result;						\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %2,%1\n"						\
 	"	" #asm_op " %2,%3,%0\n"					\
@@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long c, new, old;
+	s64 c, new, old;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l	%[old],%[mem]\n"
@@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long old, tmp;
+	s64 old, tmp;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l	%[old],%[mem]\n"
-- 
2.11.0



* [PATCH 05/18] locking/atomic: arc: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (3 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 04/18] locking/atomic: alpha: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-05-23 23:10   ` Vineet Gupta
  2019-06-03 13:36   ` [tip:locking/core] locking/atomic, arc: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 06/18] locking/atomic: arm: use " Mark Rutland
                   ` (14 subsequent siblings)
  19 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the arc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than u64, matching the generated headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Vineet Gupta <vgupta@synopsys.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arc/include/asm/atomic.h | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 158af079838d..2c75df55d0d2 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -324,14 +324,14 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
  */
 
 typedef struct {
-	aligned_u64 counter;
+	s64 __aligned(8) counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(a) { (a) }
 
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	unsigned long long val;
+	s64 val;
 
 	__asm__ __volatile__(
 	"	ldd   %0, [%1]	\n"
@@ -341,7 +341,7 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return val;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long a)
+static inline void atomic64_set(atomic64_t *v, s64 a)
 {
 	/*
 	 * This could have been a simple assignment in "C" but would need
@@ -362,9 +362,9 @@ static inline void atomic64_set(atomic64_t *v, long long a)
 }
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(long long a, atomic64_t *v)		\
+static inline void atomic64_##op(s64 a, atomic64_t *v)			\
 {									\
-	unsigned long long val;						\
+	s64 val;							\
 									\
 	__asm__ __volatile__(						\
 	"1:				\n"				\
@@ -375,13 +375,13 @@ static inline void atomic64_##op(long long a, atomic64_t *v)		\
 	"	bnz     1b		\n"				\
 	: "=&r"(val)							\
 	: "r"(&v->counter), "ir"(a)					\
-	: "cc");						\
+	: "cc");							\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)		        	\
-static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
+static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v)		\
 {									\
-	unsigned long long val;						\
+	s64 val;							\
 									\
 	smp_mb();							\
 									\
@@ -402,9 +402,9 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)		        		\
-static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
+static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v)		\
 {									\
-	unsigned long long val, orig;					\
+	s64 val, orig;							\
 									\
 	smp_mb();							\
 									\
@@ -444,10 +444,10 @@ ATOMIC64_OPS(xor, xor, xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline long long
-atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
+static inline s64
+atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
 {
-	long long prev;
+	s64 prev;
 
 	smp_mb();
 
@@ -467,9 +467,9 @@ atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
 	return prev;
 }
 
-static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
 {
-	long long prev;
+	s64 prev;
 
 	smp_mb();
 
@@ -495,9 +495,9 @@ static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
  * the atomic variable, v, was not decremented.
  */
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long val;
+	s64 val;
 
 	smp_mb();
 
@@ -528,10 +528,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, if it was not @u.
  * Returns the old value of @v
  */
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
-						  long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long long old, temp;
+	s64 old, temp;
 
 	smp_mb();
 
-- 
2.11.0



* [PATCH 06/18] locking/atomic: arm: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (4 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 05/18] locking/atomic: arc: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:37   ` [tip:locking/core] locking/atomic, arm: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 07/18] locking/atomic: arm64: use " Mark Rutland
                   ` (13 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index f74756641410..d45c41f6f69c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i) { (i) }
 
 #ifdef CONFIG_ARM_LPAE
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrd	%0, %H0, [%1]"
@@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("@ atomic64_set\n"
 "	strd	%2, %H2, [%1]"
@@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 	);
 }
 #else
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrexd	%0, %H0, [%1]"
@@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
-	long long tmp;
+	s64 tmp;
 
 	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_set\n"
@@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 #endif
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(long long i, atomic64_t *v)		\
+static inline void atomic64_##op(s64 i, atomic64_t *v)			\
 {									\
-	long long result;						\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v)		\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)				\
-static inline long long							\
-atomic64_##op##_return_relaxed(long long i, atomic64_t *v)		\
+static inline s64							\
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
 {									\
-	long long result;						\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v)		\
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)					\
-static inline long long							\
-atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v)		\
+static inline s64							\
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
 {									\
-	long long result, val;						\
+	s64 result, val;						\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline long long
-atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
 {
-	long long oldval;
+	s64 oldval;
 	unsigned long res;
 
 	prefetchw(&ptr->counter);
@@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
 }
 #define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
 
-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	prefetchw(&ptr->counter);
@@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
 }
 #define atomic64_xchg_relaxed		atomic64_xchg_relaxed
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	smp_mb();
@@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 }
 #define atomic64_dec_if_positive atomic64_dec_if_positive
 
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
-						  long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long long oldval, newval;
+	s64 oldval, newval;
 	unsigned long tmp;
 
 	smp_mb();
-- 
2.11.0



* [PATCH 07/18] locking/atomic: arm64: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (5 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 06/18] locking/atomic: arm: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:38   ` [tip:locking/core] locking/atomic, arm64: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 08/18] locking/atomic: ia64: use " Mark Rutland
                   ` (12 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as
long, as this variable is also used to hold the pointer to the
atomic64_t.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
 arch/arm64/include/asm/atomic_lse.h   | 34 +++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index e321293e0c89..f3b12d7f431f 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)
 
 #define ATOMIC64_OP(op, asm_op)						\
 __LL_SC_INLINE void							\
-__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v))		\
+__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v))		\
 {									\
-	long result;							\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "\n"				\
@@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v))		\
 __LL_SC_EXPORT(arch_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op)		\
-__LL_SC_INLINE long							\
-__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
+__LL_SC_INLINE s64							\
+__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
 {									\
-	long result;							\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "_return" #name "\n"		\
@@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
 __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
 
 #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op)		\
-__LL_SC_INLINE long							\
-__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v))	\
+__LL_SC_INLINE s64							\
+__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v))	\
 {									\
-	long result, val;						\
+	s64 result, val;						\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_fetch_" #op #name "\n"		\
@@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-__LL_SC_INLINE long
+__LL_SC_INLINE s64
 __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
 {
-	long result;
+	s64 result;
 	unsigned long tmp;
 
 	asm volatile("// atomic64_dec_if_positive\n"
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 9256a3921e4b..c53832b08af7 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB(        , al, "memory")
 
 #define __LL_SC_ATOMIC64(op)	__LL_SC_CALL(arch_atomic64_##op)
 #define ATOMIC64_OP(op, asm_op)						\
-static inline void arch_atomic64_##op(long i, atomic64_t *v)		\
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v)		\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op),	\
@@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd)
 #undef ATOMIC64_OP
 
 #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...)			\
-static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd)
 #undef ATOMIC64_FETCH_OPS
 
 #define ATOMIC64_OP_ADD_RETURN(name, mb, cl...)				\
-static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN(        , al, "memory")
 
 #undef ATOMIC64_OP_ADD_RETURN
 
-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
-static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v)	\
+static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND(        , al, "memory")
 
 #undef ATOMIC64_FETCH_OP_AND
 
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...)				\
-static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN(        , al, "memory")
 #undef ATOMIC64_OP_SUB_RETURN
 
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
-static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v)	\
+static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB(        , al, "memory")
 
 #undef ATOMIC64_FETCH_OP_SUB
 
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	register long x0 asm ("x0") = (long)v;
 
-- 
2.11.0



* [PATCH 08/18] locking/atomic: ia64: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (6 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 07/18] locking/atomic: arm64: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:39   ` [tip:locking/core] locking/atomic, ia64: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 09/18] locking/atomic: mips: use " Mark Rutland
                   ` (11 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the ia64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 206530d0751b..50440f3ddc43 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^)
 #undef ATOMIC_OP
 
 #define ATOMIC64_OP(op, c_op)						\
-static __inline__ long							\
-ia64_atomic64_##op (__s64 i, atomic64_t *v)				\
+static __inline__ s64							\
+ia64_atomic64_##op (s64 i, atomic64_t *v)				\
 {									\
-	__s64 old, new;							\
+	s64 old, new;							\
 	CMPXCHG_BUGCHECK_DECL						\
 									\
 	do {								\
@@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v)				\
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-static __inline__ long							\
-ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v)			\
+static __inline__ s64							\
+ia64_atomic64_fetch_##op (s64 i, atomic64_t *v)				\
 {									\
-	__s64 old, new;							\
+	s64 old, new;							\
 	CMPXCHG_BUGCHECK_DECL						\
 									\
 	do {								\
@@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_add_return(i,v)					\
 ({									\
-	long __ia64_aar_i = (i);					\
+	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter)	\
 		: ia64_atomic64_add(__ia64_aar_i, v);			\
@@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_sub_return(i,v)					\
 ({									\
-	long __ia64_asr_i = (i);					\
+	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter)	\
 		: ia64_atomic64_sub(__ia64_asr_i, v);			\
@@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_add(i,v)						\
 ({									\
-	long __ia64_aar_i = (i);					\
+	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq)	\
 		: ia64_atomic64_fetch_add(__ia64_aar_i, v);		\
@@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_sub(i,v)						\
 ({									\
-	long __ia64_asr_i = (i);					\
+	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq)	\
 		: ia64_atomic64_fetch_sub(__ia64_asr_i, v);		\
-- 
2.11.0



* [PATCH 09/18] locking/atomic: mips: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (7 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 08/18] locking/atomic: ia64: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:39   ` [tip:locking/core] locking/atomic, mips: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 10/18] locking/atomic: powerpc: use " Mark Rutland
                   ` (10 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: James Hogan <jhogan@kernel.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 94096299fc56..9a82dd11c0e9 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
 #define atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))
 
 #define ATOMIC64_OP(op, c_op, asm_op)					      \
-static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		      \
 {									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
 }
 
 #define ATOMIC64_OP_RETURN(op, c_op, asm_op)				      \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)   \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op, asm_op)				      \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)  \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)    \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor)
  * Atomically test @v and subtract @i if @v is greater or equal than @i.
  * The function returns the old value of @v minus @i.
  */
-static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v)
+static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v)
 {
-	long result;
+	s64 result;
 
 	smp_mb__before_llsc();
 
 	if (kernel_uses_llsc) {
-		long temp;
+		s64 temp;
 
 		__asm__ __volatile__(
 		"	.set	push					\n"
-- 
2.11.0



* [PATCH 10/18] locking/atomic: powerpc: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (8 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 09/18] locking/atomic: mips: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-05-23 13:27   ` Michael Ellerman
  2019-06-03 13:40   ` [tip:locking/core] locking/atomic, powerpc: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument Mark Rutland
                   ` (9 subsequent siblings)
  19 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the powerpc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 52eafaf74054..31c231ea56b7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-static __inline__ long atomic64_read(const atomic64_t *v)
+static __inline__ s64 atomic64_read(const atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
 
 	return t;
 }
 
-static __inline__ void atomic64_set(atomic64_t *v, long i)
+static __inline__ void atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
+static __inline__ void atomic64_##op(s64 a, atomic64_t *v)		\
 {									\
-	long t;								\
+	s64 t;								\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%3		# atomic64_" #op "\n"			\
@@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
 }
 
 #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op)				\
-static inline long							\
-atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
+static inline s64							\
+atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)			\
 {									\
-	long t;								\
+	s64 t;								\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%3		# atomic64_" #op "_return_relaxed\n"	\
@@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
 }
 
 #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op)				\
-static inline long							\
-atomic64_fetch_##op##_relaxed(long a, atomic64_t *v)			\
+static inline s64							\
+atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)			\
 {									\
-	long res, t;							\
+	s64 res, t;							\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%4		# atomic64_fetch_" #op "_relaxed\n"	\
@@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)
 
 static __inline__ void atomic64_inc(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_inc\n\
@@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
 }
 #define atomic64_inc atomic64_inc
 
-static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_inc_return_relaxed\n"
@@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
 
 static __inline__ void atomic64_dec(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_dec\n\
@@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
 }
 #define atomic64_dec atomic64_dec
 
-static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_dec_return_relaxed\n"
@@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
  * Atomically test *v and decrement if it is greater than 0.
  * The function returns the old value of *v minus 1.
  */
-static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
+static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 	PPC_ATOMIC_ENTRY_BARRIER
@@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__ (
 	PPC_ATOMIC_ENTRY_BARRIER
@@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
  */
 static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
 {
-	long t1, t2;
+	s64 t1, t2;
 
 	__asm__ __volatile__ (
 	PPC_ATOMIC_ENTRY_BARRIER
-- 
2.11.0



* [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (9 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 10/18] locking/atomic: powerpc: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-05-22 19:06   ` Palmer Dabbelt
  2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Fix " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64 Mark Rutland
                   ` (8 subsequent siblings)
  19 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

Presently the riscv implementation of atomic64_sub_if_positive() takes
a 32-bit offset value rather than a 64-bit offset value as it should do.
Thus, if called with a 64-bit offset, the value will be unexpectedly
truncated to 32 bits.

Fix this by taking the offset as a long rather than an int.
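
As a hypothetical illustration (not part of the patch), a caller passing a
64-bit offset through the current int parameter silently loses the upper 32
bits before the subtraction takes place:

	#include <linux/atomic.h>

	static atomic64_t bytes_left = ATOMIC64_INIT(0x200000000LL);

	static void consume(void)
	{
		/*
		 * With an int offset parameter, 0x100000000 is truncated to
		 * its low 32 bits (0 here), so nothing is subtracted; with a
		 * long (64-bit) offset the full value is used.
		 */
		atomic64_sub_if_positive(&bytes_left, 0x100000000LL);
	}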

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
---
 arch/riscv/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 93826771b616..c9e18289d65c 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
+static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
 {
        long prev, rc;
 
-- 
2.11.0



* [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (10 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-05-22 19:06   ` Palmer Dabbelt
  2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 13/18] locking/atomic: s390: use " Mark Rutland
                   ` (7 subsequent siblings)
  19 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the s390 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.
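
For reference, the generic definition this refers to is (from
include/linux/types.h, untouched until patch 16 of this series):

	#ifdef CONFIG_64BIT
	typedef struct {
		long counter;
	} atomic64_t;
	#endif

so the value loaded via READ_ONCE(v->counter) is still a long here.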

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Palmer Dabbelt <palmer@sifive.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index c9e18289d65c..bffebc57357d 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC64_INIT(i) { (i) }
-static __always_inline long atomic64_read(const atomic64_t *v)
+static __always_inline s64 atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE(v->counter);
 }
-static __always_inline void atomic64_set(atomic64_t *v, long i)
+static __always_inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_OP (op, asm_op, I, w,  int,   )
+        ATOMIC_OP (op, asm_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_OP (op, asm_op, I, w,  int,   )				\
-        ATOMIC_OP (op, asm_op, I, d, long, 64)
+        ATOMIC_OP (op, asm_op, I, w, int,   )				\
+        ATOMIC_OP (op, asm_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(add, add,  i)
@@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, c_op, I)					\
-        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )
+        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, c_op, I)					\
-        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )		\
-        ATOMIC_FETCH_OP( op, asm_op,       I, d, long, 64)		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
+        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )		\
+        ATOMIC_FETCH_OP( op, asm_op,       I, d, s64, 64)		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(add, add, +,  i)
@@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )
+        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )			\
-        ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
+        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )			\
+        ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(and, and, i)
@@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #define atomic_fetch_add_unless atomic_fetch_add_unless
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-       long prev, rc;
+       s64 prev;
+       long rc;
 
 	__asm__ __volatile__ (
 		"0:	lr.d     %[p],  %[c]\n"
@@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS()							\
-	ATOMIC_OP( int,   , 4)
+	ATOMIC_OP(int,   , 4)
 #else
 #define ATOMIC_OPS()							\
-	ATOMIC_OP( int,   , 4)						\
-	ATOMIC_OP(long, 64, 8)
+	ATOMIC_OP(int,   , 4)						\
+	ATOMIC_OP(s64, 64, 8)
 #endif
 
 ATOMIC_OPS()
@@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
+static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
 {
-       long prev, rc;
+       s64 prev;
+       long rc;
 
 	__asm__ __volatile__ (
 		"0:	lr.d     %[p],  %[c]\n"
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 13/18] locking/atomic: s390: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (11 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64 Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:42   ` [tip:locking/core] locking/atomic, s390: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 14/18] locking/atomic: sparc: use " Mark Rutland
                   ` (6 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the s390 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

The s390-internal __atomic64_*() ops are also used by the s390 bitops,
and expect pointers to long. Since atomic64_t::counter will be converted
to s64 in a subsequent patch, pointers to this are explicitly cast to
pointers to long when passed to __atomic64_*() ops.
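
To illustrate the cast pattern (a simplified, hypothetical prototype;
the real s390 helpers are macro-generated in
arch/s390/include/asm/atomic_ops.h):

	/* the s390-internal helpers operate on plain long, e.g.: */
	void __atomic64_add(long i, long *ptr);

	/*
	 * In-kernel s64 is long long, so once atomic64_t::counter becomes
	 * s64, &v->counter is no longer a long * and needs the cast:
	 */
	__atomic64_add(i, (long *)&v->counter);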

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index fd20ab5d4cf7..491ad53a0d4e 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 
 #define ATOMIC64_INIT(i)  { (i) }
 
-static inline long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long c;
+	s64 c;
 
 	asm volatile(
 		"	lg	%0,%1\n"
@@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v)
 	return c;
 }
 
-static inline void atomic64_set(atomic64_t *v, long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	asm volatile(
 		"	stg	%1,%0\n"
 		: "=Q" (v->counter) : "d" (i));
 }
 
-static inline long atomic64_add_return(long i, atomic64_t *v)
+static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
 {
-	return __atomic64_add_barrier(i, &v->counter) + i;
+	return __atomic64_add_barrier(i, (long *)&v->counter) + i;
 }
 
-static inline long atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
 {
-	return __atomic64_add_barrier(i, &v->counter);
+	return __atomic64_add_barrier(i, (long *)&v->counter);
 }
 
-static inline void atomic64_add(long i, atomic64_t *v)
+static inline void atomic64_add(s64 i, atomic64_t *v)
 {
 #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
 	if (__builtin_constant_p(i) && (i > -129) && (i < 128)) {
-		__atomic64_add_const(i, &v->counter);
+		__atomic64_add_const(i, (long *)&v->counter);
 		return;
 	}
 #endif
-	__atomic64_add(i, &v->counter);
+	__atomic64_add(i, (long *)&v->counter);
 }
 
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
-	return __atomic64_cmpxchg(&v->counter, old, new);
+	return __atomic64_cmpxchg((long *)&v->counter, old, new);
 }
 
 #define ATOMIC64_OPS(op)						\
-static inline void atomic64_##op(long i, atomic64_t *v)			\
+static inline void atomic64_##op(s64 i, atomic64_t *v)			\
 {									\
-	__atomic64_##op(i, &v->counter);				\
+	__atomic64_##op(i, (long *)&v->counter);			\
 }									\
-static inline long atomic64_fetch_##op(long i, atomic64_t *v)		\
+static inline long atomic64_fetch_##op(s64 i, atomic64_t *v)		\
 {									\
-	return __atomic64_##op##_barrier(i, &v->counter);		\
+	return __atomic64_##op##_barrier(i, (long *)&v->counter);	\
 }
 
 ATOMIC64_OPS(and)
@@ -145,8 +145,8 @@ ATOMIC64_OPS(xor)
 
 #undef ATOMIC64_OPS
 
-#define atomic64_sub_return(_i, _v)	atomic64_add_return(-(long)(_i), _v)
-#define atomic64_fetch_sub(_i, _v)	atomic64_fetch_add(-(long)(_i), _v)
-#define atomic64_sub(_i, _v)		atomic64_add(-(long)(_i), _v)
+#define atomic64_sub_return(_i, _v)	atomic64_add_return(-(s64)(_i), _v)
+#define atomic64_fetch_sub(_i, _v)	atomic64_fetch_add(-(s64)(_i), _v)
+#define atomic64_sub(_i, _v)		atomic64_add(-(s64)(_i), _v)
 
 #endif /* __ARCH_S390_ATOMIC__  */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 14/18] locking/atomic: sparc: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (12 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 13/18] locking/atomic: s390: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:43   ` [tip:locking/core] locking/atomic, sparc: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 15/18] locking/atomic: x86: use " Mark Rutland
                   ` (5 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the sparc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/sparc/include/asm/atomic_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 6963482c81d8..b60448397d4f 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -23,15 +23,15 @@
 
 #define ATOMIC_OP(op)							\
 void atomic_##op(int, atomic_t *);					\
-void atomic64_##op(long, atomic64_t *);
+void atomic64_##op(s64, atomic64_t *);
 
 #define ATOMIC_OP_RETURN(op)						\
 int atomic_##op##_return(int, atomic_t *);				\
-long atomic64_##op##_return(long, atomic64_t *);
+s64 atomic64_##op##_return(s64, atomic64_t *);
 
 #define ATOMIC_FETCH_OP(op)						\
 int atomic_fetch_##op(int, atomic_t *);					\
-long atomic64_fetch_##op(long, atomic64_t *);
+s64 atomic64_fetch_##op(s64, atomic64_t *);
 
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)
 
@@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
 	((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-long atomic64_dec_if_positive(atomic64_t *v);
+s64 atomic64_dec_if_positive(atomic64_t *v);
 #define atomic64_dec_if_positive atomic64_dec_if_positive
 
 #endif /* !(__ARCH_SPARC64_ATOMIC__) */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 15/18] locking/atomic: x86: use s64 for atomic64
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (13 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 14/18] locking/atomic: sparc: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:44   ` [tip:locking/core] locking/atomic, x86: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit Mark Rutland
                   ` (4 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

As a step towards making the atomic64 API use consistent types treewide,
let's have the x86 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or long long, matching the
generated headers.

Note that the x86 arch_atomic64 implementation is already wrapped by the
generic instrumented atomic64 implementation, which uses s64
consistently.
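
For reference, the instrumented wrapper already has this shape (a
sketch of the generated include/asm-generic/atomic-instrumented.h;
details such as the KASAN hook may differ):

	static inline s64 atomic64_read(const atomic64_t *v)
	{
		kasan_check_read(v, sizeof(*v));
		return arch_atomic64_read(v);
	}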

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++--------------------
 arch/x86/include/asm/atomic64_64.h | 38 +++++++++++-----------
 2 files changed, 51 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 6a5b0ec460da..52cfaecb13f9 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -9,7 +9,7 @@
 /* An 64bit atomic type */
 
 typedef struct {
-	u64 __aligned(8) counter;
+	s64 __aligned(8) counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(val)	{ (val) }
@@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless);
  * the old value.
  */
 
-static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
-					      long long n)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	return arch_cmpxchg64(&v->counter, o, n);
 }
@@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
  * Atomically xchgs the value of @v to @n and returns
  * the old value.
  */
-static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
 {
-	long long o;
+	s64 o;
 	unsigned high = (unsigned)(n >> 32);
 	unsigned low = (unsigned)n;
 	alternative_atomic64(xchg, "=&A" (o),
@@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
  *
  * Atomically sets the value of @v to @n.
  */
-static inline void arch_atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned high = (unsigned)(i >> 32);
 	unsigned low = (unsigned)i;
@@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i)
  *
  * Atomically reads the value of @v and returns it.
  */
-static inline long long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
-	long long r;
+	s64 r;
 	alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
 	return r;
 }
@@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v)
  *
  * Atomically adds @i to @v and returns @i + *@v
  */
-static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(add_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 /*
  * Other variants with different arithmetic operators:
  */
-static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(sub_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 	return i;
 }
 
-static inline long long arch_atomic64_inc_return(atomic64_t *v)
+static inline s64 arch_atomic64_inc_return(atomic64_t *v)
 {
-	long long a;
+	s64 a;
 	alternative_atomic64(inc_return, "=&A" (a),
 			     "S" (v) : "memory", "ecx");
 	return a;
 }
 #define arch_atomic64_inc_return arch_atomic64_inc_return
 
-static inline long long arch_atomic64_dec_return(atomic64_t *v)
+static inline s64 arch_atomic64_dec_return(atomic64_t *v)
 {
-	long long a;
+	s64 a;
 	alternative_atomic64(dec_return, "=&A" (a),
 			     "S" (v) : "memory", "ecx");
 	return a;
@@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v)
  *
  * Atomically adds @i to @v.
  */
-static inline long long arch_atomic64_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(add, add_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(sub, sub_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns non-zero if the add was done, zero otherwise.
  */
-static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
-					   long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned low = (unsigned)u;
 	unsigned high = (unsigned)(u >> 32);
@@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
 }
 #define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
 
-static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long r;
+	s64 r;
 	alternative_atomic64(dec_if_positive, "=&A" (r),
 			     "S" (v) : "ecx", "memory");
 	return r;
@@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
-static inline void arch_atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
@@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline void arch_atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
@@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline void arch_atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
@@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
 		c = old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..703b7dfd45e0 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -17,7 +17,7 @@
  * Atomically reads the value of @v.
  * Doesn't imply a read memory barrier.
  */
-static inline long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
@@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v)
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void arch_atomic64_set(atomic64_t *v, long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i)
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
@@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
 }
@@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
 }
@@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v)
+static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
-static inline long arch_atomic64_sub_return(long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	return arch_atomic64_add_return(-i, v);
 }
 
-static inline long arch_atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
 	return arch_cmpxchg(&v->counter, old, new);
 }
 
 #define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new)
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
 	return try_cmpxchg(&v->counter, old, new);
 }
 
-static inline long arch_atomic64_xchg(atomic64_t *v, long new)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new)
 {
 	return arch_xchg(&v->counter, new);
 }
 
-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
 			: "+m" (v->counter)
@@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 
@@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
 	return val;
 }
 
-static inline void arch_atomic64_or(long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "orq %1,%0"
 			: "+m" (v->counter)
@@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 
@@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
 	return val;
 }
 
-static inline void arch_atomic64_xor(long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorq %1,%0"
 			: "+m" (v->counter)
@@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (14 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 15/18] locking/atomic: x86: use " Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:44   ` [tip:locking/core] locking/atomic: Use " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts Mark Rutland
                   ` (3 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

Now that all architectures use s64 consistently as the base type for the
atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use
s64 as the underlying type for atomic64_t, rather than long, matching
the generated headers.

On architectures where atomic64_read(v) is READ_ONCE(v->counter), this
patch will cause the return type of atomic64_read() to be s64.

As of this patch, the atomic64 API can be relied upon to consistently
return s64 where a value rather than a boolean condition is returned. This
should make code more robust, and simpler, allowing for the removal of
casts previously required to ensure consistent types.
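
A before/after sketch of the cleanup this enables (hypothetical
pr_info() caller; the real removals are in the last two patches of this
series):

	/* before: atomic64_read() might return long or long long */
	pr_info("count: %lld\n", (s64)atomic64_read(&v));

	/* after: atomic64_read() returns s64 everywhere, no cast needed */
	pr_info("count: %lld\n", atomic64_read(&v));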

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 include/linux/types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 231114ae38f4..05030f608be3 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -174,7 +174,7 @@ typedef struct {
 
 #ifdef CONFIG_64BIT
 typedef struct {
-	long counter;
+	s64 counter;
 } atomic64_t;
 #endif
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (15 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:45   ` [tip:locking/core] locking/atomic, crypto/nx: Remove " tip-bot for Mark Rutland
  2019-05-22 13:22 ` [PATCH 18/18] locking/atomic: s390/pci: remove " Mark Rutland
                   ` (2 subsequent siblings)
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 drivers/crypto/nx/nx-842-pseries.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 9432e9e42afe..5cf77729a438 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -870,7 +870,7 @@ static ssize_t nx842_##_name##_show(struct device *dev,		\
 	local_devdata = rcu_dereference(devdata);			\
 	if (local_devdata)						\
 		p = snprintf(buf, PAGE_SIZE, "%lld\n",			\
-		       (s64)atomic64_read(&local_devdata->counters->_name));	\
+		       atomic64_read(&local_devdata->counters->_name));	\
 	rcu_read_unlock();						\
 	return p;							\
 }
@@ -924,7 +924,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
 		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
 			       i ? (2<<(i-1)) : 0, (2<<i)-1,
-			       (s64)atomic64_read(&times[i]));
+			       atomic64_read(&times[i]));
 		bytes_remain -= bytes;
 		p += bytes;
 	}
@@ -932,7 +932,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	 * 2<<(NX842_HIST_SLOTS - 2) us */
 	bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
 			2<<(NX842_HIST_SLOTS - 2),
-			(s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+			atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
 	p += bytes;
 
 	rcu_read_unlock();
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [PATCH 18/18] locking/atomic: s390/pci: remove redundant casts
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (16 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts Mark Rutland
@ 2019-05-22 13:22 ` Mark Rutland
  2019-06-03 13:46   ` [tip:locking/core] locking/atomic, s390/pci: Remove " tip-bot for Mark Rutland
  2019-05-22 21:18 ` [PATCH 00/18] locking/atomic: atomic64 type cleanup Arnd Bergmann
  2019-05-23  8:30 ` Andrea Parri
  19 siblings, 1 reply; 60+ messages in thread
From: Mark Rutland @ 2019-05-22 13:22 UTC (permalink / raw)
  To: linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, mpe, palmer, paul.burton, paulus, ralf, rth,
	stable, tglx, tony.luck, vgupta

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
---
 arch/s390/pci/pci_debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 45eccf79e990..3408c0df3ebf 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -75,7 +75,7 @@ static void pci_sw_counter_show(struct seq_file *m)
 
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
 		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
-			   (s64)atomic64_read(counter));
+			   atomic64_read(counter));
 }
 
 static int pci_perf_show(struct seq_file *m, void *v)
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument
  2019-05-22 13:22 ` [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument Mark Rutland
@ 2019-05-22 19:06   ` Palmer Dabbelt
  2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Fix " tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: Palmer Dabbelt @ 2019-05-22 19:06 UTC (permalink / raw)
  To: mark.rutland
  Cc: linux-kernel, peterz, Will Deacon, aou, Arnd Bergmann, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mark.rutland, mattst88, mingo, mpe, paul.burton,
	paulus, ralf, rth, stable, tglx, tony.luck, vgupta

On Wed, 22 May 2019 06:22:43 PDT (-0700), mark.rutland@arm.com wrote:
> Presently the riscv implementation of atomic64_sub_if_positive() takes
> a 32-bit offset value rather than a 64-bit offset value as it should do.
> Thus, if called with a 64-bit offset, the value will be unexpectedly
> truncated to 32 bits.
>
> Fix this by taking the offset as a long rather than an int.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: stable@vger.kernel.org
> ---
>  arch/riscv/include/asm/atomic.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index 93826771b616..c9e18289d65c 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -336,7 +336,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
>  #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
> +static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
>  {
>         long prev, rc;

Reviewed-by: Palmer Dabbelt <palmer@sifive.com>

Thanks!

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64 Mark Rutland
@ 2019-05-22 19:06   ` Palmer Dabbelt
  2019-05-23 10:23     ` Mark Rutland
  2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Use " tip-bot for Mark Rutland
  1 sibling, 1 reply; 60+ messages in thread
From: Palmer Dabbelt @ 2019-05-22 19:06 UTC (permalink / raw)
  To: mark.rutland
  Cc: linux-kernel, peterz, Will Deacon, aou, Arnd Bergmann, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mark.rutland, mattst88, mingo, mpe, paul.burton,
	paulus, ralf, rth, stable, tglx, tony.luck, vgupta

On Wed, 22 May 2019 06:22:44 PDT (-0700), mark.rutland@arm.com wrote:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the s390 atomic64 implementation use s64 as the underlying

and apparently the RISC-V one as well? :)

> type for atomic64_t, rather than long, matching the generated headers.
>
> As atomic64_read() depends on the generic definition of atomic64_t, this
> still returns long on 64-bit. This will be converted in a subsequent
> patch.
>
> Otherwise, there should be no functional change as a result of this patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Albert Ou <aou@eecs.berkeley.edu>
> Cc: Palmer Dabbelt <palmer@sifive.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
>  1 file changed, 23 insertions(+), 21 deletions(-)
>
> diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
> index c9e18289d65c..bffebc57357d 100644
> --- a/arch/riscv/include/asm/atomic.h
> +++ b/arch/riscv/include/asm/atomic.h
> @@ -42,11 +42,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC64_INIT(i) { (i) }
> -static __always_inline long atomic64_read(const atomic64_t *v)
> +static __always_inline s64 atomic64_read(const atomic64_t *v)
>  {
>  	return READ_ONCE(v->counter);
>  }
> -static __always_inline void atomic64_set(atomic64_t *v, long i)
> +static __always_inline void atomic64_set(atomic64_t *v, s64 i)
>  {
>  	WRITE_ONCE(v->counter, i);
>  }
> @@ -70,11 +70,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
>
>  #ifdef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC_OPS(op, asm_op, I)					\
> -        ATOMIC_OP (op, asm_op, I, w,  int,   )
> +        ATOMIC_OP (op, asm_op, I, w, int,   )
>  #else
>  #define ATOMIC_OPS(op, asm_op, I)					\
> -        ATOMIC_OP (op, asm_op, I, w,  int,   )				\
> -        ATOMIC_OP (op, asm_op, I, d, long, 64)
> +        ATOMIC_OP (op, asm_op, I, w, int,   )				\
> +        ATOMIC_OP (op, asm_op, I, d, s64, 64)
>  #endif
>
>  ATOMIC_OPS(add, add,  i)
> @@ -131,14 +131,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
>
>  #ifdef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC_OPS(op, asm_op, c_op, I)					\
> -        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
> -        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )
> +        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
> +        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )
>  #else
>  #define ATOMIC_OPS(op, asm_op, c_op, I)					\
> -        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
> -        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )		\
> -        ATOMIC_FETCH_OP( op, asm_op,       I, d, long, 64)		\
> -        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
> +        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
> +        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )		\
> +        ATOMIC_FETCH_OP( op, asm_op,       I, d, s64, 64)		\
> +        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
>  #endif
>
>  ATOMIC_OPS(add, add, +,  i)
> @@ -170,11 +170,11 @@ ATOMIC_OPS(sub, add, +, -i)
>
>  #ifdef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC_OPS(op, asm_op, I)					\
> -        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )
> +        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )
>  #else
>  #define ATOMIC_OPS(op, asm_op, I)					\
> -        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )			\
> -        ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
> +        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )			\
> +        ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
>  #endif
>
>  ATOMIC_OPS(and, and, i)
> @@ -223,9 +223,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
>  #define atomic_fetch_add_unless atomic_fetch_add_unless
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> +static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  {
> -       long prev, rc;
> +       s64 prev;
> +       long rc;
>
>  	__asm__ __volatile__ (
>  		"0:	lr.d     %[p],  %[c]\n"
> @@ -294,11 +295,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
>
>  #ifdef CONFIG_GENERIC_ATOMIC64
>  #define ATOMIC_OPS()							\
> -	ATOMIC_OP( int,   , 4)
> +	ATOMIC_OP(int,   , 4)
>  #else
>  #define ATOMIC_OPS()							\
> -	ATOMIC_OP( int,   , 4)						\
> -	ATOMIC_OP(long, 64, 8)
> +	ATOMIC_OP(int,   , 4)						\
> +	ATOMIC_OP(s64, 64, 8)
>  #endif
>
>  ATOMIC_OPS()
> @@ -336,9 +337,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
>  #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
>
>  #ifndef CONFIG_GENERIC_ATOMIC64
> -static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
> +static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
>  {
> -       long prev, rc;
> +       s64 prev;
> +       long rc;
>
>  	__asm__ __volatile__ (
>  		"0:	lr.d     %[p],  %[c]\n"

Reviwed-by: Palmer Dabbelt <palmer@sifive.com>

Thanks!

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 03/18] locking/atomic: generic: use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 03/18] locking/atomic: generic: use s64 for atomic64 Mark Rutland
@ 2019-05-22 21:16   ` Arnd Bergmann
  2019-06-03 13:35   ` [tip:locking/core] locking/atomic: Use " tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: Arnd Bergmann @ 2019-05-22 21:16 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Peter Zijlstra, Will Deacon,
	Albert Ou, Borislav Petkov, Catalin Marinas, David Miller,
	Fenghua Yu, Heiko Carstens, Herbert Xu, Ivan Kokshaysky,
	James Hogan, Russell King - ARM Linux, Matt Turner, Ingo Molnar,
	Michael Ellerman, Palmer Dabbelt, Paul Burton, Paul Mackerras,
	Ralf Baechle, Richard Henderson, # 3.4.x, Thomas Gleixner,
	Tony Luck, Vineet Gupta

On Wed, May 22, 2019 at 3:23 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the generic atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than long long, matching the generated
> headers.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>

Acked-by: Arnd Bergmann <arnd@arndb.de>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (17 preceding siblings ...)
  2019-05-22 13:22 ` [PATCH 18/18] locking/atomic: s390/pci: remove " Mark Rutland
@ 2019-05-22 21:18 ` Arnd Bergmann
  2019-05-23 10:28   ` Mark Rutland
  2019-05-23  8:30 ` Andrea Parri
  19 siblings, 1 reply; 60+ messages in thread
From: Arnd Bergmann @ 2019-05-22 21:18 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Linux Kernel Mailing List, Peter Zijlstra, Will Deacon,
	Albert Ou, Borislav Petkov, Catalin Marinas, David Miller,
	Fenghua Yu, Heiko Carstens, Herbert Xu, Ivan Kokshaysky,
	James Hogan, Russell King - ARM Linux, Matt Turner, Ingo Molnar,
	Michael Ellerman, Palmer Dabbelt, Paul Burton, Paul Mackerras,
	Ralf Baechle, Richard Henderson, # 3.4.x, Thomas Gleixner,
	Tony Luck, Vineet Gupta

On Wed, May 22, 2019 at 3:23 PM Mark Rutland <mark.rutland@arm.com> wrote:
>
> Currently architectures return inconsistent types for atomic64 ops. Some return
> long (e..g. powerpc), some return long long (e.g. arc), and some return s64
> (e.g. x86).
>
> This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
> before they can be printed [1]).
>
> This series reworks all the atomic64 implementations to use s64 as the base
> type for atomic64_t (as discussed [2]), and to ensure that this type is
> consistently used for parameters and return values in the API, avoiding further
> problems in this area.
>
> This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
> branch [3] on kernel.org.

Nice cleanup!

I've provided an explicit Ack for the asm-generic patch if someone wants
to pick up the entire series, but I can also put it all into my asm-generic
tree if you want, after more people have had a chance to take a look.

     Arnd

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
                   ` (18 preceding siblings ...)
  2019-05-22 21:18 ` [PATCH 00/18] locking/atomic: atomic64 type cleanup Arnd Bergmann
@ 2019-05-23  8:30 ` Andrea Parri
  2019-05-23 10:19   ` Mark Rutland
  19 siblings, 1 reply; 60+ messages in thread
From: Andrea Parri @ 2019-05-23  8:30 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, peterz, will.deacon, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta

Hi Mark,

On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
> Currently architectures return inconsistent types for atomic64 ops. Some return
> long (e..g. powerpc), some return long long (e.g. arc), and some return s64
> (e.g. x86).

(only partially related, but probably worth asking:)

While reading the series, I realized that the following expression:

	atomic64_t v;
        ...
	typeof(v.counter) my_val = atomic64_set(&v, VAL);

is a valid expression on some architectures (in part., on architectures
which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
(This is because WRITE_ONCE() can be used as an rvalue in the above
assignment; TBH, I don't know the reason for allowing such rvalue use.)
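
(Concretely, the two shapes I have in mind, as simplified sketches
rather than the exact per-architecture definitions:)

	/*
	 * Shape 1: a macro expanding to WRITE_ONCE(), which is an
	 * expression, so the assignment above compiles.
	 */
	#define atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))

	/*
	 * Shape 2: an inline function returning void, so using it as an
	 * rvalue does not compile.
	 */
	static inline void atomic64_set(atomic64_t *v, s64 i)
	{
		WRITE_ONCE(v->counter, i);
	}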

IIUC, similar considerations hold for atomic_set().

The question is whether this is a known/"expected" inconsistency in the
implementation of atomic64_set() or if this would also need to be fixed
/addressed (say in a different patchset)?

Thanks,
  Andrea

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-23  8:30 ` Andrea Parri
@ 2019-05-23 10:19   ` Mark Rutland
  2019-05-23 11:20     ` Andrea Parri
  2019-05-24 10:37     ` Peter Zijlstra
  0 siblings, 2 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-23 10:19 UTC (permalink / raw)
  To: Andrea Parri
  Cc: linux-kernel, peterz, will.deacon, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta

On Thu, May 23, 2019 at 10:30:13AM +0200, Andrea Parri wrote:
> Hi Mark,

Hi Andrea,

> On Wed, May 22, 2019 at 02:22:32PM +0100, Mark Rutland wrote:
> > Currently architectures return inconsistent types for atomic64 ops. Some return
> > long (e..g. powerpc), some return long long (e.g. arc), and some return s64
> > (e.g. x86).
> 
> (only partially related, but probably worth asking:)
> 
> While reading the series, I realized that the following expression:
> 
> 	atomic64_t v;
>         ...
> 	typeof(v.counter) my_val = atomic64_set(&v, VAL);
> 
> is a valid expression on some architectures (in part., on architectures
> which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
> (This is because WRITE_ONCE() can be used as an rvalue in the above
> assignment; TBH, I don't know the reason for allowing such rvalue use.)
> 
> IIUC, similar considerations hold for atomic_set().
> 
> The question is whether this is a known/"expected" inconsistency in the
> implementation of atomic64_set() or if this would also need to be fixed
> /addressed (say in a different patchset)?

In either case, I don't think the intent is that they should be used that way,
and from a quick scan, I can only find a single relevant instance today:

[mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);


[mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l
0
[mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l
0

Any architectures implementing arch_atomic_* will have both of these functions
returning void. Currently that's x86 and arm64, but (time permitting) I intend
to migrate other architectures, so I guess we'll have to fix the above up as
required.

I think it's best to avoid the construct above.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64
  2019-05-22 19:06   ` Palmer Dabbelt
@ 2019-05-23 10:23     ` Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-23 10:23 UTC (permalink / raw)
  To: Palmer Dabbelt
  Cc: linux-kernel, peterz, Will Deacon, aou, Arnd Bergmann, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, paul.burton, paulus, ralf,
	rth, stable, tglx, tony.luck, vgupta

On Wed, May 22, 2019 at 12:06:31PM -0700, Palmer Dabbelt wrote:
> On Wed, 22 May 2019 06:22:44 PDT (-0700), mark.rutland@arm.com wrote:
> > As a step towards making the atomic64 API use consistent types treewide,
> > let's have the s390 atomic64 implementation use s64 as the underlying
> 
> and apparently the RISC-V one as well? :)

Heh. You can guess which commit message I wrote first...

> Reviwed-by: Palmer Dabbelt <palmer@sifive.com>

Cheers! I'll add an extra 'e' when I fold this in. :)

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-22 21:18 ` [PATCH 00/18] locking/atomic: atomic64 type cleanup Arnd Bergmann
@ 2019-05-23 10:28   ` Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: Mark Rutland @ 2019-05-23 10:28 UTC (permalink / raw)
  To: Arnd Bergmann
  Cc: Linux Kernel Mailing List, Peter Zijlstra, Will Deacon,
	Albert Ou, Borislav Petkov, Catalin Marinas, David Miller,
	Fenghua Yu, Heiko Carstens, Herbert Xu, Ivan Kokshaysky,
	James Hogan, Russell King - ARM Linux, Matt Turner, Ingo Molnar,
	Michael Ellerman, Palmer Dabbelt, Paul Burton, Paul Mackerras,
	Ralf Baechle, Richard Henderson, # 3.4.x, Thomas Gleixner,
	Tony Luck, Vineet Gupta

On Wed, May 22, 2019 at 11:18:59PM +0200, Arnd Bergmann wrote:
> On Wed, May 22, 2019 at 3:23 PM Mark Rutland <mark.rutland@arm.com> wrote:
> >
> > Currently architectures return inconsistent types for atomic64 ops. Some return
> > long (e..g. powerpc), some return long long (e.g. arc), and some return s64
> > (e.g. x86).
> >
> > This is a bit messy, and causes unnecessary pain (e.g. as values must be cast
> > before they can be printed [1]).
> >
> > This series reworks all the atomic64 implementations to use s64 as the base
> > type for atomic64_t (as discussed [2]), and to ensure that this type is
> > consistently used for parameters and return values in the API, avoiding further
> > problems in this area.
> >
> > This series (based on v5.1-rc1) can also be found in my atomics/type-cleanup
> > branch [3] on kernel.org.
> 
> Nice cleanup!
> 
> I've provided an explicit Ack for the asm-generic patch if someone wants
> to pick up the entire series, but I can also put it all into my asm-generic
> tree if you want, after more people have had a chance to take a look.

Thanks!

I had assumed that this would go through the tip tree, as previous
atomic rework had, but I have no preference as to how this gets merged.

I'm not sure what the policy is, so I'll leave it to Peter and Will to
say.

Mark.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-23 10:19   ` Mark Rutland
@ 2019-05-23 11:20     ` Andrea Parri
  2019-05-24 10:37     ` Peter Zijlstra
  1 sibling, 0 replies; 60+ messages in thread
From: Andrea Parri @ 2019-05-23 11:20 UTC (permalink / raw)
  To: Mark Rutland
  Cc: linux-kernel, peterz, will.deacon, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta

> > While reading the series, I realized that the following expression:
> > 
> > 	atomic64_t v;
> >         ...
> > 	typeof(v.counter) my_val = atomic64_set(&v, VAL);
> > 
> > is a valid expression on some architectures (in part., on architectures
> > which #define atomic64_set() to WRITE_ONCE()) but is invalid on others.
> > (This is because WRITE_ONCE() can be used as an rvalue in the above
> > assignment; TBH, I don't know the reason for allowing such rvalue use.)
> > 
> > IIUC, similar considerations hold for atomic_set().
> > 
> > The question is whether this is a known/"expected" inconsistency in the
> > implementation of atomic64_set() or if this would also need to be fixed
> > /addressed (say in a different patchset)?
> 
> In either case, I don't think the intent is that they should be used that way,
> and from a quick scan, I can only find a single relevant instance today:
> 
> [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
> include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);
> 
> 
> [mark@lakrids:~/src/linux]% git grep '=\s+atomic_set' | wc -l
> 0
> [mark@lakrids:~/src/linux]% git grep '=\s+atomic64_set' | wc -l
> 0
> 
> Any architectures implementing arch_atomic_* will have both of these functions
> returning void. Currently that's x86 and arm64, but (time permitting) I intend
> to migrate other architectures, so I guess we'll have to fix the above up as
> required.
> 
> I think it's best to avoid the construct above.

Thank you for the clarification, Mark.  I agree with you that it'd be
better to avoid such constructs.  (FWIW, it is not currently possible
to use them in litmus tests for the LKMM...)

Thanks,
  Andrea

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 10/18] locking/atomic: powerpc: use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 10/18] locking/atomic: powerpc: use " Mark Rutland
@ 2019-05-23 13:27   ` Michael Ellerman
  2019-06-03 13:40   ` [tip:locking/core] locking/atomic, powerpc: Use " tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: Michael Ellerman @ 2019-05-23 13:27 UTC (permalink / raw)
  To: Mark Rutland, linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mark.rutland,
	mattst88, mingo, palmer, paul.burton, paulus, ralf, rth, stable,
	tglx, tony.luck, vgupta

Mark Rutland <mark.rutland@arm.com> writes:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the powerpc atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than long, matching the generated headers.
>
> As atomic64_read() depends on the generic definition of atomic64_t, this
> still returns long on 64-bit. This will be converted in a subsequent
> patch.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> ---
>  arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
>  1 file changed, 22 insertions(+), 22 deletions(-)

Conversion looks good to me.

Reviewed-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)

cheers

> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> index 52eafaf74054..31c231ea56b7 100644
> --- a/arch/powerpc/include/asm/atomic.h
> +++ b/arch/powerpc/include/asm/atomic.h
> @@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
>  
>  #define ATOMIC64_INIT(i)	{ (i) }
>  
> -static __inline__ long atomic64_read(const atomic64_t *v)
> +static __inline__ s64 atomic64_read(const atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
>  
>  	return t;
>  }
>  
> -static __inline__ void atomic64_set(atomic64_t *v, long i)
> +static __inline__ void atomic64_set(atomic64_t *v, s64 i)
>  {
>  	__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
>  }
>  
>  #define ATOMIC64_OP(op, asm_op)						\
> -static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
> +static __inline__ void atomic64_##op(s64 a, atomic64_t *v)		\
>  {									\
> -	long t;								\
> +	s64 t;								\
>  									\
>  	__asm__ __volatile__(						\
>  "1:	ldarx	%0,0,%3		# atomic64_" #op "\n"			\
> @@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
>  }
>  
>  #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op)				\
> -static inline long							\
> -atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
> +static inline s64							\
> +atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)			\
>  {									\
> -	long t;								\
> +	s64 t;								\
>  									\
>  	__asm__ __volatile__(						\
>  "1:	ldarx	%0,0,%3		# atomic64_" #op "_return_relaxed\n"	\
> @@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
>  }
>  
>  #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op)				\
> -static inline long							\
> -atomic64_fetch_##op##_relaxed(long a, atomic64_t *v)			\
> +static inline s64							\
> +atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)			\
>  {									\
> -	long res, t;							\
> +	s64 res, t;							\
>  									\
>  	__asm__ __volatile__(						\
>  "1:	ldarx	%0,0,%4		# atomic64_fetch_" #op "_relaxed\n"	\
> @@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)
>  
>  static __inline__ void atomic64_inc(atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__(
>  "1:	ldarx	%0,0,%2		# atomic64_inc\n\
> @@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
>  }
>  #define atomic64_inc atomic64_inc
>  
> -static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
> +static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__(
>  "1:	ldarx	%0,0,%2		# atomic64_inc_return_relaxed\n"
> @@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
>  
>  static __inline__ void atomic64_dec(atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__(
>  "1:	ldarx	%0,0,%2		# atomic64_dec\n\
> @@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
>  }
>  #define atomic64_dec atomic64_dec
>  
> -static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
> +static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__(
>  "1:	ldarx	%0,0,%2		# atomic64_dec_return_relaxed\n"
> @@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
>   * Atomically test *v and decrement if it is greater than 0.
>   * The function returns the old value of *v minus 1.
>   */
> -static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
> +static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__(
>  	PPC_ATOMIC_ENTRY_BARRIER
> @@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
>   * Atomically adds @a to @v, so long as it was not @u.
>   * Returns the old value of @v.
>   */
> -static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
> +static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
>  {
> -	long t;
> +	s64 t;
>  
>  	__asm__ __volatile__ (
>  	PPC_ATOMIC_ENTRY_BARRIER
> @@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
>   */
>  static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
>  {
> -	long t1, t2;
> +	s64 t1, t2;
>  
>  	__asm__ __volatile__ (
>  	PPC_ATOMIC_ENTRY_BARRIER
> -- 
> 2.11.0

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 05/18] locking/atomic: arc: use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 05/18] locking/atomic: arc: use " Mark Rutland
@ 2019-05-23 23:10   ` Vineet Gupta
  2019-06-03 13:36   ` [tip:locking/core] locking/atomic, arc: Use " tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: Vineet Gupta @ 2019-05-23 23:10 UTC (permalink / raw)
  To: Mark Rutland, linux-kernel, peterz, will.deacon
  Cc: aou, arnd, bp, catalin.marinas, davem, fenghua.yu,
	heiko.carstens, herbert, ink, jhogan, linux, mattst88, mingo,
	mpe, palmer, paul.burton, paulus, ralf, rth, stable, tglx,
	tony.luck

On 5/22/19 6:24 AM, Mark Rutland wrote:
> As a step towards making the atomic64 API use consistent types treewide,
> let's have the arc atomic64 implementation use s64 as the underlying
> type for atomic64_t, rather than u64, matching the generated headers.
>
> Otherwise, there should be no functional change as a result of this
> patch.
>
> Signed-off-by: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Vineet Gupta <vgupta@synopsys.com>
> Cc: Will Deacon <will.deacon@arm.com>

Thx for the cleanup Mark.

Acked-By: Vineet Gupta <vgupta@synopsys.com>   # for ARC bits

-Vineet

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-23 10:19   ` Mark Rutland
  2019-05-23 11:20     ` Andrea Parri
@ 2019-05-24 10:37     ` Peter Zijlstra
  2019-05-24 11:18       ` Peter Zijlstra
  1 sibling, 1 reply; 60+ messages in thread
From: Peter Zijlstra @ 2019-05-24 10:37 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrea Parri, linux-kernel, will.deacon, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:

> [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
> include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);
> 

Oh boy, what a load of crap you just did find.

How about something like the below? I've not read how that buffer is
used, but the below preserves all broken without using atomic*_t.

---
diff --git a/include/linux/vmw_vmci_defs.h b/include/linux/vmw_vmci_defs.h
index 0c06178e4985..8ee472118f54 100644
--- a/include/linux/vmw_vmci_defs.h
+++ b/include/linux/vmw_vmci_defs.h
@@ -438,8 +438,8 @@ enum {
 struct vmci_queue_header {
 	/* All fields are 64bit and aligned. */
 	struct vmci_handle handle;	/* Identifier. */
-	atomic64_t producer_tail;	/* Offset in this queue. */
-	atomic64_t consumer_head;	/* Offset in peer queue. */
+	u64 producer_tail;	/* Offset in this queue. */
+	u64 consumer_head;	/* Offset in peer queue. */
 };
 
 /*
@@ -740,13 +740,9 @@ static inline void *vmci_event_data_payload(struct vmci_event_data *ev_data)
  * prefix will be used, so correctness isn't an issue, but using a
  * 64bit operation still adds unnecessary overhead.
  */
-static inline u64 vmci_q_read_pointer(atomic64_t *var)
+static inline u64 vmci_q_read_pointer(u64 *var)
 {
-#if defined(CONFIG_X86_32)
-	return atomic_read((atomic_t *)var);
-#else
-	return atomic64_read(var);
-#endif
+	return READ_ONCE(*(unsigned long *)var);
 }
 
 /*
@@ -755,23 +751,17 @@ static inline u64 vmci_q_read_pointer(atomic64_t *var)
  * never exceeds a 32bit value in this case. On 32bit SMP, using a
  * locked cmpxchg8b adds unnecessary overhead.
  */
-static inline void vmci_q_set_pointer(atomic64_t *var,
-				      u64 new_val)
+static inline void vmci_q_set_pointer(u64 *var, u64 new_val)
 {
-#if defined(CONFIG_X86_32)
-	return atomic_set((atomic_t *)var, (u32)new_val);
-#else
-	return atomic64_set(var, new_val);
-#endif
+	/* XXX buggered on big-endian */
+	WRITE_ONCE(*(unsigned long *)var, (unsigned long)new_val);
 }
 
 /*
  * Helper to add a given offset to a head or tail pointer. Wraps the
  * value of the pointer around the max size of the queue.
  */
-static inline void vmci_qp_add_pointer(atomic64_t *var,
-				       size_t add,
-				       u64 size)
+static inline void vmci_qp_add_pointer(u64 *var, size_t add, u64 size)
 {
 	u64 new_val = vmci_q_read_pointer(var);
 
@@ -848,8 +838,8 @@ static inline void vmci_q_header_init(struct vmci_queue_header *q_header,
 				      const struct vmci_handle handle)
 {
 	q_header->handle = handle;
-	atomic64_set(&q_header->producer_tail, 0);
-	atomic64_set(&q_header->consumer_head, 0);
+	q_header->producer_tail = 0;
+	q_header->consumer_head = 0;
 }
 
 /*

^ permalink raw reply related	[flat|nested] 60+ messages in thread
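As a side note on the "XXX buggered on big-endian" comment in the patch
above: on a 32-bit big-endian configuration, casting the u64 pointer to
unsigned long * makes the 32-bit store land on the most significant half of
the counter. A small standalone sketch (plain C, with a uint32_t store
standing in for the 32-bit unsigned long; it bends strict aliasing the same
way the kernel code does) of the aliasing involved:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t var = 0;

	/* Stand-in for WRITE_ONCE(*(unsigned long *)&var, ...) on 32-bit. */
	*(uint32_t *)&var = 0x12345678u;

	/*
	 * Little-endian: the store aliases the low half, so
	 * var == 0x0000000012345678, matching the old 32-bit atomic_set()
	 * path. Big-endian: the store aliases the high half, so
	 * var == 0x1234567800000000, which is not the intended value.
	 */
	printf("var = 0x%016llx\n", (unsigned long long)var);
	return 0;
}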

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 10:37     ` Peter Zijlstra
@ 2019-05-24 11:18       ` Peter Zijlstra
  2019-05-24 11:38         ` Greg KH
  2019-05-24 11:42         ` Will Deacon
  0 siblings, 2 replies; 60+ messages in thread
From: Peter Zijlstra @ 2019-05-24 11:18 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrea Parri, linux-kernel, will.deacon, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
> 
> > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
> > include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);
> > 
> 
> Oh boy, what a load of crap you just did find.
> 
> How about something like the below? I've not read how that buffer is
> > used, but the below preserves all the existing breakage without using atomic*_t.

Clarified by something along these lines?

---
 Documentation/atomic_t.txt | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..125c95ddbbc0 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
 implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
 smp_store_release() respectively.
 
+Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
+you do not in fact need atomic_t at all and are doing it wrong.
+
 The one detail to this is that atomic_set{}() should be observable to the RMW
 ops. That is:
 

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 11:18       ` Peter Zijlstra
@ 2019-05-24 11:38         ` Greg KH
  2019-05-24 11:42         ` Will Deacon
  1 sibling, 0 replies; 60+ messages in thread
From: Greg KH @ 2019-05-24 11:38 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Andrea Parri, linux-kernel, will.deacon, aou, arnd,
	bp, catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert,
	ink, jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton,
	paulus, ralf, rth, stable, tglx, tony.luck, vgupta, jhansen,
	vdasa, aditr, Steven Rostedt

On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
> On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> > On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
> > 
> > > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > > include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
> > > include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);
> > > 
> > 
> > Oh boy, what a load of crap you just did find.
> > 
> > How about something like the below? I've not read how that buffer is
> > used, but the below preserves all the existing breakage without using atomic*_t.
> 
> Clarified by something along these lines?
> 
> ---
>  Documentation/atomic_t.txt | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..125c95ddbbc0 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
>  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
>  smp_store_release() respectively.
>  
> +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> +you do not in fact need atomic_t at all and are doing it wrong.
> +
>  The one detail to this is that atomic_set{}() should be observable to the RMW
>  ops. That is:
>  

I like it!

Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 11:18       ` Peter Zijlstra
  2019-05-24 11:38         ` Greg KH
@ 2019-05-24 11:42         ` Will Deacon
  2019-05-24 11:52           ` Peter Zijlstra
  1 sibling, 1 reply; 60+ messages in thread
From: Will Deacon @ 2019-05-24 11:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Mark Rutland, Andrea Parri, linux-kernel, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Fri, May 24, 2019 at 01:18:07PM +0200, Peter Zijlstra wrote:
> On Fri, May 24, 2019 at 12:37:31PM +0200, Peter Zijlstra wrote:
> > On Thu, May 23, 2019 at 11:19:26AM +0100, Mark Rutland wrote:
> > 
> > > [mark@lakrids:~/src/linux]% git grep '\(return\|=\)\s\+atomic\(64\)\?_set'
> > > include/linux/vmw_vmci_defs.h:  return atomic_set((atomic_t *)var, (u32)new_val);
> > > include/linux/vmw_vmci_defs.h:  return atomic64_set(var, new_val);
> > > 
> > 
> > Oh boy, what a load of crap you just did find.
> > 
> > How about something like the below? I've not read how that buffer is
> > used, but the below preserves all the existing breakage without using atomic*_t.
> 
> Clarified by something along these lines?
> 
> ---
>  Documentation/atomic_t.txt | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..125c95ddbbc0 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
>  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
>  smp_store_release() respectively.
>  

Not sure you need a new paragraph here.

> +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> +you do not in fact need atomic_t at all and are doing it wrong.
> +

That makes sense to me, although I now find that the sentence below is a bit
confusing because it sounds like it's a caveat relating to only using
Non-RMW ops.

>  The one detail to this is that atomic_set{}() should be observable to the RMW
>  ops. That is:

How about changing this to be:

  "A subtle detail of atomic_set{}() is that it should be observable..."

With that:

Acked-by: Will Deacon <will.deacon@arm.com>

Will

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 11:42         ` Will Deacon
@ 2019-05-24 11:52           ` Peter Zijlstra
  2019-05-24 22:43             ` Andrea Parri
  2019-06-03 13:46             ` [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage tip-bot for Peter Zijlstra
  0 siblings, 2 replies; 60+ messages in thread
From: Peter Zijlstra @ 2019-05-24 11:52 UTC (permalink / raw)
  To: Will Deacon
  Cc: Mark Rutland, Andrea Parri, linux-kernel, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Fri, May 24, 2019 at 12:42:20PM +0100, Will Deacon wrote:

> > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > index dca3fb0554db..125c95ddbbc0 100644
> > --- a/Documentation/atomic_t.txt
> > +++ b/Documentation/atomic_t.txt
> > @@ -83,6 +83,9 @@ The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> >  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> >  smp_store_release() respectively.
> >  
> 
> Not sure you need a new paragraph here.
> 
> > +Therefore, if you find yourself only using the Non-RMW operations of atomic_t,
> > +you do not in fact need atomic_t at all and are doing it wrong.
> > +
> 
> That makes sense to me, although I now find that the sentence below is a bit
> confusing because it sounds like it's a caveat relating to only using
> Non-RMW ops.
> 
> >  The one detail to this is that atomic_set{}() should be observable to the RMW
> >  ops. That is:
> 
> How about changing this to be:
> 
>   "A subtle detail of atomic_set{}() is that it should be observable..."

Done, find below.

---
Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage

Clarify that pure non-RMW usage of atomic_t is pointless, there is
nothing 'magical' about atomic_set() / atomic_read().

This is something that seems to confuse people, because I happen upon it
semi-regularly.

Acked-by: Will Deacon <will.deacon@arm.com>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 Documentation/atomic_t.txt | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..89eae7f6b360 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -81,9 +81,11 @@ SEMANTICS
 
 The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
 implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively.
+smp_store_release() respectively. Therefore, if you find yourself only using
+the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
+and are doing it wrong.
 
-The one detail to this is that atomic_set{}() should be observable to the RMW
+A subtle detail of atomic_set{}() is that it should be observable to the RMW
 ops. That is:
 
   C atomic-set

^ permalink raw reply related	[flat|nested] 60+ messages in thread
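To illustrate the point the documentation change is making, a hypothetical
kernel-style sketch (pending/mark_pending() are made-up names): when
atomic_read() and atomic_set() are the only operations ever applied, they
are just a plain load and a plain store, so a plain scalar with
READ_ONCE()/WRITE_ONCE() says the same thing:

/* Only ever read and written whole; no RMW ops anywhere. */
static atomic_t pending = ATOMIC_INIT(0);

static void mark_pending(void)
{
	atomic_set(&pending, 1);	/* plain store */
}

static int is_pending(void)
{
	return atomic_read(&pending);	/* plain load */
}

/* The equivalent without atomic_t: */
static int pending2;

static void mark_pending2(void)
{
	WRITE_ONCE(pending2, 1);
}

static int is_pending2(void)
{
	return READ_ONCE(pending2);
}

The moment an atomic_inc() or atomic_cmpxchg() turns up, atomic_t earns its
keep; until then the two forms are interchangeable.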

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 11:52           ` Peter Zijlstra
@ 2019-05-24 22:43             ` Andrea Parri
  2019-05-28 10:47               ` Peter Zijlstra
  2019-06-03 13:46             ` [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage tip-bot for Peter Zijlstra
  1 sibling, 1 reply; 60+ messages in thread
From: Andrea Parri @ 2019-05-24 22:43 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, linux-kernel, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

> ---
> Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
> 
> Clarify that pure non-RMW usage of atomic_t is pointless, there is
> nothing 'magical' about atomic_set() / atomic_read().
> 
> This is something that seems to confuse people, because I happen upon it
> semi-regularly.
> 
> Acked-by: Will Deacon <will.deacon@arm.com>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  Documentation/atomic_t.txt | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..89eae7f6b360 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -81,9 +81,11 @@ SEMANTICS
>  
>  The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
>  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> -smp_store_release() respectively.
> +smp_store_release() respectively. Therefore, if you find yourself only using
> +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> +and are doing it wrong.

The counterargument (not so theoretical, just look around in the kernel!) is:
we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it is harder, or at
least should be harder, to forget to use atomic_read() and atomic_set()...
IAC, I wouldn't call any of them 'wrong'.

  Andrea


>  
> -The one detail to this is that atomic_set{}() should be observable to the RMW
> +A subtle detail of atomic_set{}() is that it should be observable to the RMW
>  ops. That is:
>  
>    C atomic-set

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-24 22:43             ` Andrea Parri
@ 2019-05-28 10:47               ` Peter Zijlstra
  2019-05-28 11:15                 ` Andrea Parri
  0 siblings, 1 reply; 60+ messages in thread
From: Peter Zijlstra @ 2019-05-28 10:47 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Will Deacon, Mark Rutland, linux-kernel, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
> > ---
> > Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > 
> > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > nothing 'magical' about atomic_set() / atomic_read().
> > 
> > This is something that seems to confuse people, because I happen upon it
> > semi-regularly.
> > 
> > Acked-by: Will Deacon <will.deacon@arm.com>
> > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > ---
> >  Documentation/atomic_t.txt | 6 ++++--
> >  1 file changed, 4 insertions(+), 2 deletions(-)
> > 
> > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > index dca3fb0554db..89eae7f6b360 100644
> > --- a/Documentation/atomic_t.txt
> > +++ b/Documentation/atomic_t.txt
> > @@ -81,9 +81,11 @@ SEMANTICS
> >  
> >  The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> >  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> > -smp_store_release() respectively.
> > +smp_store_release() respectively. Therefore, if you find yourself only using
> > +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> > +and are doing it wrong.
> 
> The counterargument (not so theoretical, just look around in the kernel!) is:
> we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it is harder, or at
> least should be harder, to forget to use atomic_read() and atomic_set()...
> IAC, I wouldn't call any of them 'wrong'.

I'm thinking you mean that the type system isn't helping us with
READ/WRITE_ONCE() like it does with atomic_t? And while I agree that
there is room for improvement there, that doesn't mean we should start
using atomic*_t all over the place for that.

Part of the problem with READ/WRITE_ONCE() is that it serves a dual
purpose; we've tried to untangle that at some point, but Linus wasn't
having it.

^ permalink raw reply	[flat|nested] 60+ messages in thread
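For what it's worth, a hypothetical sketch of the type-system point
(guarded/bare are made-up names): atomic_t hides the value behind a struct
member, so a bare access does not compile and every access is funnelled
through atomic_read()/atomic_set(), whereas a plain scalar is happily
accessed without READ_ONCE():

static atomic_t guarded = ATOMIC_INIT(0);
static unsigned long bare;

static void my_example(void)
{
	long x;

	x = atomic_read(&guarded);	/* forced through the accessor */
#if 0
	x = guarded;			/* error: assigning a struct to a long */
#endif

	x = bare;			/* compiles, but lacks READ_ONCE() */
	x = READ_ONCE(bare);		/* what was probably intended */
}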

* Re: [PATCH 00/18] locking/atomic: atomic64 type cleanup
  2019-05-28 10:47               ` Peter Zijlstra
@ 2019-05-28 11:15                 ` Andrea Parri
  0 siblings, 0 replies; 60+ messages in thread
From: Andrea Parri @ 2019-05-28 11:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Will Deacon, Mark Rutland, linux-kernel, aou, arnd, bp,
	catalin.marinas, davem, fenghua.yu, heiko.carstens, herbert, ink,
	jhogan, linux, mattst88, mingo, mpe, palmer, paul.burton, paulus,
	ralf, rth, stable, tglx, tony.luck, vgupta, gregkh, jhansen,
	vdasa, aditr, Steven Rostedt

On Tue, May 28, 2019 at 12:47:19PM +0200, Peter Zijlstra wrote:
> On Sat, May 25, 2019 at 12:43:40AM +0200, Andrea Parri wrote:
> > > ---
> > > Subject: Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > > 
> > > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > > nothing 'magical' about atomic_set() / atomic_read().
> > > 
> > > This is something that seems to confuse people, because I happen upon it
> > > semi-regularly.
> > > 
> > > Acked-by: Will Deacon <will.deacon@arm.com>
> > > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > ---
> > >  Documentation/atomic_t.txt | 6 ++++--
> > >  1 file changed, 4 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> > > index dca3fb0554db..89eae7f6b360 100644
> > > --- a/Documentation/atomic_t.txt
> > > +++ b/Documentation/atomic_t.txt
> > > @@ -81,9 +81,11 @@ SEMANTICS
> > >  
> > >  The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
> > >  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> > > -smp_store_release() respectively.
> > > +smp_store_release() respectively. Therefore, if you find yourself only using
> > > +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> > > +and are doing it wrong.
> > 
> > The counterargument (not so theoretical, just look around in the kernel!) is:
> > we all 'forget' to use READ_ONCE() and WRITE_ONCE(); it is harder, or at
> > least should be harder, to forget to use atomic_read() and atomic_set()...
> > IAC, I wouldn't call any of them 'wrong'.
> 
> I'm thinking you mean that the type system isn't helping us with
> READ/WRITE_ONCE() like it does with atomic_t?

Yep.


> And while I agree that
> there is room for improvement there, that doesn't mean we should start
> using atomic*_t all over the place for that.

Agreed.  But this still doesn't explain that "and are doing it wrong",
AFAICT; maybe just remove that part?

  Andrea


> 
> Part of the problem with READ/WRITE_ONCE() is that it serves a dual
> purpose; we've tried to untangle that at some point, but Linus wasn't
> having it.

^ permalink raw reply	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, crypto/nx: Prepare for atomic64_read() conversion
  2019-05-22 13:22 ` [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion Mark Rutland
@ 2019-06-03 13:34   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, mark.rutland, peterz, will.deacon, hpa, torvalds,
	tglx, herbert, mingo

Commit-ID:  90fde663aed0a1c27e50dd1bf3f121141b2fe9f2
Gitweb:     https://git.kernel.org/tip/90fde663aed0a1c27e50dd1bf3f121141b2fe9f2
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:33 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, crypto/nx: Prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the nx-842 code to treat the
return value of atomic64_read() as s64, using explicit casts. These
casts will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-2-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 drivers/crypto/nx/nx-842-pseries.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 5c4aa606208c..938332ce3b60 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -856,8 +856,8 @@ static ssize_t nx842_##_name##_show(struct device *dev,		\
 	rcu_read_lock();						\
 	local_devdata = rcu_dereference(devdata);			\
 	if (local_devdata)						\
-		p = snprintf(buf, PAGE_SIZE, "%ld\n",			\
-		       atomic64_read(&local_devdata->counters->_name));	\
+		p = snprintf(buf, PAGE_SIZE, "%lld\n",			\
+		       (s64)atomic64_read(&local_devdata->counters->_name));	\
 	rcu_read_unlock();						\
 	return p;							\
 }
@@ -909,17 +909,17 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	}
 
 	for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
-		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%ld\n",
+		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
 			       i ? (2<<(i-1)) : 0, (2<<i)-1,
-			       atomic64_read(&times[i]));
+			       (s64)atomic64_read(&times[i]));
 		bytes_remain -= bytes;
 		p += bytes;
 	}
 	/* The last bucket holds everything over
 	 * 2<<(NX842_HIST_SLOTS - 2) us */
-	bytes = snprintf(p, bytes_remain, "%uus - :\t%ld\n",
+	bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
 			2<<(NX842_HIST_SLOTS - 2),
-			atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+			(s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
 	p += bytes;
 
 	rcu_read_unlock();

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, s390/pci: Prepare for atomic64_read() conversion
  2019-05-22 13:22 ` [PATCH 02/18] locking/atomic: s390/pci: prepare " Mark Rutland
@ 2019-06-03 13:34   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:34 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mingo, linux-kernel, will.deacon, tglx, heiko.carstens,
	mark.rutland, peterz, hpa, torvalds

Commit-ID:  982164d62a4b2097c0db28ae7c31fc905af26bb8
Gitweb:     https://git.kernel.org/tip/982164d62a4b2097c0db28ae7c31fc905af26bb8
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:34 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, s390/pci: Prepare for atomic64_read() conversion

The return type of atomic64_read() varies by architecture. It may return
long (e.g. powerpc), long long (e.g. arm), or s64 (e.g. x86_64). This is
somewhat painful, and mandates the use of explicit casts in some cases
(e.g. when printing the return value).

To ameliorate matters, subsequent patches will make the atomic64 API
consistently use s64.

As a preparatory step, this patch updates the s390 pci debug code to
treat the return value of atomic64_read() as s64, using an explicit
cast. This cast will be removed once the s64 conversion is complete.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-3-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/s390/pci/pci_debug.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 6b48ca7760a7..45eccf79e990 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -74,8 +74,8 @@ static void pci_sw_counter_show(struct seq_file *m)
 	int i;
 
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
-		seq_printf(m, "%26s:\t%lu\n", pci_sw_names[i],
-			   atomic64_read(counter));
+		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
+			   (s64)atomic64_read(counter));
 }
 
 static int pci_perf_show(struct seq_file *m, void *v)

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 03/18] locking/atomic: generic: use s64 for atomic64 Mark Rutland
  2019-05-22 21:16   ` Arnd Bergmann
@ 2019-06-03 13:35   ` tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:35 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, mingo, will.deacon, tglx, mark.rutland, arnd, linux-kernel,
	peterz, torvalds

Commit-ID:  9255813d5841e158f033e0d83d455bffdae009a4
Gitweb:     https://git.kernel.org/tip/9255813d5841e158f033e0d83d455bffdae009a4
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:35 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the generic atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-4-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/asm-generic/atomic64.h | 20 ++++++++++----------
 lib/atomic64.c                 | 32 ++++++++++++++++----------------
 2 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/include/asm-generic/atomic64.h b/include/asm-generic/atomic64.h
index d7a15096fb3b..370f01d4450f 100644
--- a/include/asm-generic/atomic64.h
+++ b/include/asm-generic/atomic64.h
@@ -10,24 +10,24 @@
 #include <linux/types.h>
 
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-extern long long atomic64_read(const atomic64_t *v);
-extern void	 atomic64_set(atomic64_t *v, long long i);
+extern s64 atomic64_read(const atomic64_t *v);
+extern void atomic64_set(atomic64_t *v, s64 i);
 
 #define atomic64_set_release(v, i)	atomic64_set((v), (i))
 
 #define ATOMIC64_OP(op)							\
-extern void	 atomic64_##op(long long a, atomic64_t *v);
+extern void	 atomic64_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OP_RETURN(op)						\
-extern long long atomic64_##op##_return(long long a, atomic64_t *v);
+extern s64 atomic64_##op##_return(s64 a, atomic64_t *v);
 
 #define ATOMIC64_FETCH_OP(op)						\
-extern long long atomic64_fetch_##op(long long a, atomic64_t *v);
+extern s64 atomic64_fetch_##op(s64 a, atomic64_t *v);
 
 #define ATOMIC64_OPS(op)	ATOMIC64_OP(op) ATOMIC64_OP_RETURN(op) ATOMIC64_FETCH_OP(op)
 
@@ -46,11 +46,11 @@ ATOMIC64_OPS(xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-extern long long atomic64_dec_if_positive(atomic64_t *v);
+extern s64 atomic64_dec_if_positive(atomic64_t *v);
 #define atomic64_dec_if_positive atomic64_dec_if_positive
-extern long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n);
-extern long long atomic64_xchg(atomic64_t *v, long long new);
-extern long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u);
+extern s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n);
+extern s64 atomic64_xchg(atomic64_t *v, s64 new);
+extern s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u);
 #define atomic64_fetch_add_unless atomic64_fetch_add_unless
 
 #endif  /*  _ASM_GENERIC_ATOMIC64_H  */
diff --git a/lib/atomic64.c b/lib/atomic64.c
index 7e6905751522..e98c85a99787 100644
--- a/lib/atomic64.c
+++ b/lib/atomic64.c
@@ -42,11 +42,11 @@ static inline raw_spinlock_t *lock_addr(const atomic64_t *v)
 	return &atomic64_lock[addr & (NR_LOCKS - 1)].lock;
 }
 
-long long atomic64_read(const atomic64_t *v)
+s64 atomic64_read(const atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -55,7 +55,7 @@ long long atomic64_read(const atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_read);
 
-void atomic64_set(atomic64_t *v, long long i)
+void atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
@@ -67,7 +67,7 @@ void atomic64_set(atomic64_t *v, long long i)
 EXPORT_SYMBOL(atomic64_set);
 
 #define ATOMIC64_OP(op, c_op)						\
-void atomic64_##op(long long a, atomic64_t *v)				\
+void atomic64_##op(s64 a, atomic64_t *v)				\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
@@ -79,11 +79,11 @@ void atomic64_##op(long long a, atomic64_t *v)				\
 EXPORT_SYMBOL(atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(op, c_op)					\
-long long atomic64_##op##_return(long long a, atomic64_t *v)		\
+s64 atomic64_##op##_return(s64 a, atomic64_t *v)			\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
-	long long val;							\
+	s64 val;							\
 									\
 	raw_spin_lock_irqsave(lock, flags);				\
 	val = (v->counter c_op a);					\
@@ -93,11 +93,11 @@ long long atomic64_##op##_return(long long a, atomic64_t *v)		\
 EXPORT_SYMBOL(atomic64_##op##_return);
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-long long atomic64_fetch_##op(long long a, atomic64_t *v)		\
+s64 atomic64_fetch_##op(s64 a, atomic64_t *v)				\
 {									\
 	unsigned long flags;						\
 	raw_spinlock_t *lock = lock_addr(v);				\
-	long long val;							\
+	s64 val;							\
 									\
 	raw_spin_lock_irqsave(lock, flags);				\
 	val = v->counter;						\
@@ -130,11 +130,11 @@ ATOMIC64_OPS(xor, ^=)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-long long atomic64_dec_if_positive(atomic64_t *v)
+s64 atomic64_dec_if_positive(atomic64_t *v)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter - 1;
@@ -145,11 +145,11 @@ long long atomic64_dec_if_positive(atomic64_t *v)
 }
 EXPORT_SYMBOL(atomic64_dec_if_positive);
 
-long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+s64 atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -160,11 +160,11 @@ long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
 }
 EXPORT_SYMBOL(atomic64_cmpxchg);
 
-long long atomic64_xchg(atomic64_t *v, long long new)
+s64 atomic64_xchg(atomic64_t *v, s64 new)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;
@@ -174,11 +174,11 @@ long long atomic64_xchg(atomic64_t *v, long long new)
 }
 EXPORT_SYMBOL(atomic64_xchg);
 
-long long atomic64_fetch_add_unless(atomic64_t *v, long long a, long long u)
+s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned long flags;
 	raw_spinlock_t *lock = lock_addr(v);
-	long long val;
+	s64 val;
 
 	raw_spin_lock_irqsave(lock, flags);
 	val = v->counter;

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, alpha: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 04/18] locking/atomic: alpha: use " Mark Rutland
@ 2019-06-03 13:36   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, rth, peterz, will.deacon, mark.rutland, linux-kernel,
	mattst88, ink, hpa, mingo, tglx

Commit-ID:  0203fdc160a8c8d8651a3b79aa453ec36cfbd867
Gitweb:     https://git.kernel.org/tip/0203fdc160a8c8d8651a3b79aa453ec36cfbd867
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:36 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, alpha: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the alpha atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matt Turner <mattst88@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Richard Henderson <rth@twiddle.net>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-5-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/alpha/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h
index 150a1c5d6a2c..2144530d1428 100644
--- a/arch/alpha/include/asm/atomic.h
+++ b/arch/alpha/include/asm/atomic.h
@@ -93,9 +93,9 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v)	\
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(long i, atomic64_t * v)		\
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		\
 {									\
-	unsigned long temp;						\
+	s64 temp;							\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %0,%1\n"						\
 	"	" #asm_op " %0,%2,%0\n"					\
@@ -109,9 +109,9 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v)		\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, asm_op)					\
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)	\
 {									\
-	long temp, result;						\
+	s64 temp, result;						\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %0,%1\n"						\
 	"	" #asm_op " %0,%3,%2\n"					\
@@ -128,9 +128,9 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, asm_op)					\
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)	\
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)	\
 {									\
-	long temp, result;						\
+	s64 temp, result;						\
 	__asm__ __volatile__(						\
 	"1:	ldq_l %2,%1\n"						\
 	"	" #asm_op " %2,%3,%0\n"					\
@@ -246,9 +246,9 @@ static __inline__ int atomic_fetch_add_unless(atomic_t *v, int a, int u)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long c, new, old;
+	s64 c, new, old;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l	%[old],%[mem]\n"
@@ -276,9 +276,9 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long old, tmp;
+	s64 old, tmp;
 	smp_mb();
 	__asm__ __volatile__(
 	"1:	ldq_l	%[old],%[mem]\n"

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, arc: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 05/18] locking/atomic: arc: use " Mark Rutland
  2019-05-23 23:10   ` Vineet Gupta
@ 2019-06-03 13:36   ` tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:36 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, peterz, mark.rutland, torvalds, vgupta, will.deacon,
	linux-kernel, mingo, tglx

Commit-ID:  16fbad086976574b99ea7019c0504d0194e95dc3
Gitweb:     https://git.kernel.org/tip/16fbad086976574b99ea7019c0504d0194e95dc3
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:37 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than u64, matching the generated headers.

Otherwise, there should be no functional change as a result of this
patch.

Acked-By: Vineet Gupta <vgupta@synopsys.com>
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Link: https://lkml.kernel.org/r/20190522132250.26499-6-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arc/include/asm/atomic.h | 41 ++++++++++++++++++++---------------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 158af079838d..2c75df55d0d2 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -324,14 +324,14 @@ ATOMIC_OPS(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
  */
 
 typedef struct {
-	aligned_u64 counter;
+	s64 __aligned(8) counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(a) { (a) }
 
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	unsigned long long val;
+	s64 val;
 
 	__asm__ __volatile__(
 	"	ldd   %0, [%1]	\n"
@@ -341,7 +341,7 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return val;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long a)
+static inline void atomic64_set(atomic64_t *v, s64 a)
 {
 	/*
 	 * This could have been a simple assignment in "C" but would need
@@ -362,9 +362,9 @@ static inline void atomic64_set(atomic64_t *v, long long a)
 }
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(long long a, atomic64_t *v)		\
+static inline void atomic64_##op(s64 a, atomic64_t *v)			\
 {									\
-	unsigned long long val;						\
+	s64 val;							\
 									\
 	__asm__ __volatile__(						\
 	"1:				\n"				\
@@ -375,13 +375,13 @@ static inline void atomic64_##op(long long a, atomic64_t *v)		\
 	"	bnz     1b		\n"				\
 	: "=&r"(val)							\
 	: "r"(&v->counter), "ir"(a)					\
-	: "cc");						\
+	: "cc");							\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)		        	\
-static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
+static inline s64 atomic64_##op##_return(s64 a, atomic64_t *v)		\
 {									\
-	unsigned long long val;						\
+	s64 val;							\
 									\
 	smp_mb();							\
 									\
@@ -402,9 +402,9 @@ static inline long long atomic64_##op##_return(long long a, atomic64_t *v)	\
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)		        		\
-static inline long long atomic64_fetch_##op(long long a, atomic64_t *v)	\
+static inline s64 atomic64_fetch_##op(s64 a, atomic64_t *v)		\
 {									\
-	unsigned long long val, orig;					\
+	s64 val, orig;							\
 									\
 	smp_mb();							\
 									\
@@ -444,10 +444,10 @@ ATOMIC64_OPS(xor, xor, xor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline long long
-atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
+static inline s64
+atomic64_cmpxchg(atomic64_t *ptr, s64 expected, s64 new)
 {
-	long long prev;
+	s64 prev;
 
 	smp_mb();
 
@@ -467,9 +467,9 @@ atomic64_cmpxchg(atomic64_t *ptr, long long expected, long long new)
 	return prev;
 }
 
-static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg(atomic64_t *ptr, s64 new)
 {
-	long long prev;
+	s64 prev;
 
 	smp_mb();
 
@@ -495,9 +495,9 @@ static inline long long atomic64_xchg(atomic64_t *ptr, long long new)
  * the atomic variable, v, was not decremented.
  */
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long val;
+	s64 val;
 
 	smp_mb();
 
@@ -528,10 +528,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, if it was not @u.
  * Returns the old value of @v
  */
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
-						  long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long long old, temp;
+	s64 old, temp;
 
 	smp_mb();
 

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, arm: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 06/18] locking/atomic: arm: use " Mark Rutland
@ 2019-06-03 13:37   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:37 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mark.rutland, tglx, will.deacon, mingo, linux, linux-kernel, hpa,
	peterz, torvalds

Commit-ID:  ef4cdc09260e2b0576423ca708e245e7549aa8e0
Gitweb:     https://git.kernel.org/tip/ef4cdc09260e2b0576423ca708e245e7549aa8e0
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:38 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arm: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long long, matching the generated
headers.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-7-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arm/include/asm/atomic.h | 50 +++++++++++++++++++++----------------------
 1 file changed, 24 insertions(+), 26 deletions(-)

diff --git a/arch/arm/include/asm/atomic.h b/arch/arm/include/asm/atomic.h
index f74756641410..d45c41f6f69c 100644
--- a/arch/arm/include/asm/atomic.h
+++ b/arch/arm/include/asm/atomic.h
@@ -249,15 +249,15 @@ ATOMIC_OPS(xor, ^=, eor)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 typedef struct {
-	long long counter;
+	s64 counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(i) { (i) }
 
 #ifdef CONFIG_ARM_LPAE
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrd	%0, %H0, [%1]"
@@ -268,7 +268,7 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("@ atomic64_set\n"
 "	strd	%2, %H2, [%1]"
@@ -277,9 +277,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 	);
 }
 #else
-static inline long long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long long result;
+	s64 result;
 
 	__asm__ __volatile__("@ atomic64_read\n"
 "	ldrexd	%0, %H0, [%1]"
@@ -290,9 +290,9 @@ static inline long long atomic64_read(const atomic64_t *v)
 	return result;
 }
 
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
-	long long tmp;
+	s64 tmp;
 
 	prefetchw(&v->counter);
 	__asm__ __volatile__("@ atomic64_set\n"
@@ -307,9 +307,9 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 #endif
 
 #define ATOMIC64_OP(op, op1, op2)					\
-static inline void atomic64_##op(long long i, atomic64_t *v)		\
+static inline void atomic64_##op(s64 i, atomic64_t *v)			\
 {									\
-	long long result;						\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -326,10 +326,10 @@ static inline void atomic64_##op(long long i, atomic64_t *v)		\
 }									\
 
 #define ATOMIC64_OP_RETURN(op, op1, op2)				\
-static inline long long							\
-atomic64_##op##_return_relaxed(long long i, atomic64_t *v)		\
+static inline s64							\
+atomic64_##op##_return_relaxed(s64 i, atomic64_t *v)			\
 {									\
-	long long result;						\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -349,10 +349,10 @@ atomic64_##op##_return_relaxed(long long i, atomic64_t *v)		\
 }
 
 #define ATOMIC64_FETCH_OP(op, op1, op2)					\
-static inline long long							\
-atomic64_fetch_##op##_relaxed(long long i, atomic64_t *v)		\
+static inline s64							\
+atomic64_fetch_##op##_relaxed(s64 i, atomic64_t *v)			\
 {									\
-	long long result, val;						\
+	s64 result, val;						\
 	unsigned long tmp;						\
 									\
 	prefetchw(&v->counter);						\
@@ -406,10 +406,9 @@ ATOMIC64_OPS(xor, eor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-static inline long long
-atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
+static inline s64 atomic64_cmpxchg_relaxed(atomic64_t *ptr, s64 old, s64 new)
 {
-	long long oldval;
+	s64 oldval;
 	unsigned long res;
 
 	prefetchw(&ptr->counter);
@@ -430,9 +429,9 @@ atomic64_cmpxchg_relaxed(atomic64_t *ptr, long long old, long long new)
 }
 #define atomic64_cmpxchg_relaxed	atomic64_cmpxchg_relaxed
 
-static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
+static inline s64 atomic64_xchg_relaxed(atomic64_t *ptr, s64 new)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	prefetchw(&ptr->counter);
@@ -450,9 +449,9 @@ static inline long long atomic64_xchg_relaxed(atomic64_t *ptr, long long new)
 }
 #define atomic64_xchg_relaxed		atomic64_xchg_relaxed
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long result;
+	s64 result;
 	unsigned long tmp;
 
 	smp_mb();
@@ -478,10 +477,9 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 }
 #define atomic64_dec_if_positive atomic64_dec_if_positive
 
-static inline long long atomic64_fetch_add_unless(atomic64_t *v, long long a,
-						  long long u)
+static inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long long oldval, newval;
+	s64 oldval, newval;
 	unsigned long tmp;
 
 	smp_mb();


* [tip:locking/core] locking/atomic, arm64: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 07/18] locking/atomic: arm64: use " Mark Rutland
@ 2019-06-03 13:38   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:38 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, catalin.marinas, peterz, hpa, mark.rutland, mingo,
	will.deacon, torvalds, tglx

Commit-ID:  16f18688af7ea6c65f6daa3efb4661415e2e6041
Gitweb:     https://git.kernel.org/tip/16f18688af7ea6c65f6daa3efb4661415e2e6041
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:39 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, arm64: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the arm64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Note that in arch_atomic64_dec_if_positive(), the x0 variable is left as
long, as this variable is also used to hold the pointer to the
atomic64_t.

Otherwise, there should be no functional change as a result of this patch.
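
To make the note about x0 concrete, the pattern in question looks roughly
like the skeleton below (asm body elided, so this is a non-functional
sketch rather than the real implementation):

static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
{
        /* x0 first carries the pointer, and is then reused for the result */
        register long x0 asm ("x0") = (long)v;

        /* ... LL/SC or LSE asm sequence rewrites x0 with the new s64 value ... */
        return x0;
}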

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-8-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arm64/include/asm/atomic_ll_sc.h | 20 ++++++++++----------
 arch/arm64/include/asm/atomic_lse.h   | 34 +++++++++++++++++-----------------
 2 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/atomic_ll_sc.h b/arch/arm64/include/asm/atomic_ll_sc.h
index e321293e0c89..f3b12d7f431f 100644
--- a/arch/arm64/include/asm/atomic_ll_sc.h
+++ b/arch/arm64/include/asm/atomic_ll_sc.h
@@ -133,9 +133,9 @@ ATOMIC_OPS(xor, eor)
 
 #define ATOMIC64_OP(op, asm_op)						\
 __LL_SC_INLINE void							\
-__LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v))		\
+__LL_SC_PREFIX(arch_atomic64_##op(s64 i, atomic64_t *v))		\
 {									\
-	long result;							\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "\n"				\
@@ -150,10 +150,10 @@ __LL_SC_PREFIX(arch_atomic64_##op(long i, atomic64_t *v))		\
 __LL_SC_EXPORT(arch_atomic64_##op);
 
 #define ATOMIC64_OP_RETURN(name, mb, acq, rel, cl, op, asm_op)		\
-__LL_SC_INLINE long							\
-__LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
+__LL_SC_INLINE s64							\
+__LL_SC_PREFIX(arch_atomic64_##op##_return##name(s64 i, atomic64_t *v))\
 {									\
-	long result;							\
+	s64 result;							\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_" #op "_return" #name "\n"		\
@@ -172,10 +172,10 @@ __LL_SC_PREFIX(arch_atomic64_##op##_return##name(long i, atomic64_t *v))\
 __LL_SC_EXPORT(arch_atomic64_##op##_return##name);
 
 #define ATOMIC64_FETCH_OP(name, mb, acq, rel, cl, op, asm_op)		\
-__LL_SC_INLINE long							\
-__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(long i, atomic64_t *v))	\
+__LL_SC_INLINE s64							\
+__LL_SC_PREFIX(arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v))	\
 {									\
-	long result, val;						\
+	s64 result, val;						\
 	unsigned long tmp;						\
 									\
 	asm volatile("// atomic64_fetch_" #op #name "\n"		\
@@ -225,10 +225,10 @@ ATOMIC64_OPS(xor, eor)
 #undef ATOMIC64_OP_RETURN
 #undef ATOMIC64_OP
 
-__LL_SC_INLINE long
+__LL_SC_INLINE s64
 __LL_SC_PREFIX(arch_atomic64_dec_if_positive(atomic64_t *v))
 {
-	long result;
+	s64 result;
 	unsigned long tmp;
 
 	asm volatile("// atomic64_dec_if_positive\n"
diff --git a/arch/arm64/include/asm/atomic_lse.h b/arch/arm64/include/asm/atomic_lse.h
index 9256a3921e4b..c53832b08af7 100644
--- a/arch/arm64/include/asm/atomic_lse.h
+++ b/arch/arm64/include/asm/atomic_lse.h
@@ -224,9 +224,9 @@ ATOMIC_FETCH_OP_SUB(        , al, "memory")
 
 #define __LL_SC_ATOMIC64(op)	__LL_SC_CALL(arch_atomic64_##op)
 #define ATOMIC64_OP(op, asm_op)						\
-static inline void arch_atomic64_##op(long i, atomic64_t *v)		\
+static inline void arch_atomic64_##op(s64 i, atomic64_t *v)		\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(__LL_SC_ATOMIC64(op),	\
@@ -244,9 +244,9 @@ ATOMIC64_OP(add, stadd)
 #undef ATOMIC64_OP
 
 #define ATOMIC64_FETCH_OP(name, mb, op, asm_op, cl...)			\
-static inline long arch_atomic64_fetch_##op##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_fetch_##op##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -276,9 +276,9 @@ ATOMIC64_FETCH_OPS(add, ldadd)
 #undef ATOMIC64_FETCH_OPS
 
 #define ATOMIC64_OP_ADD_RETURN(name, mb, cl...)				\
-static inline long arch_atomic64_add_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_add_return##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -302,9 +302,9 @@ ATOMIC64_OP_ADD_RETURN(        , al, "memory")
 
 #undef ATOMIC64_OP_ADD_RETURN
 
-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -320,9 +320,9 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_FETCH_OP_AND(name, mb, cl...)				\
-static inline long arch_atomic64_fetch_and##name(long i, atomic64_t *v)	\
+static inline s64 arch_atomic64_fetch_and##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -346,9 +346,9 @@ ATOMIC64_FETCH_OP_AND(        , al, "memory")
 
 #undef ATOMIC64_FETCH_OP_AND
 
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
-	register long x0 asm ("x0") = i;
+	register s64 x0 asm ("x0") = i;
 	register atomic64_t *x1 asm ("x1") = v;
 
 	asm volatile(ARM64_LSE_ATOMIC_INSN(
@@ -364,9 +364,9 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 }
 
 #define ATOMIC64_OP_SUB_RETURN(name, mb, cl...)				\
-static inline long arch_atomic64_sub_return##name(long i, atomic64_t *v)\
+static inline s64 arch_atomic64_sub_return##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -392,9 +392,9 @@ ATOMIC64_OP_SUB_RETURN(        , al, "memory")
 #undef ATOMIC64_OP_SUB_RETURN
 
 #define ATOMIC64_FETCH_OP_SUB(name, mb, cl...)				\
-static inline long arch_atomic64_fetch_sub##name(long i, atomic64_t *v)	\
+static inline s64 arch_atomic64_fetch_sub##name(s64 i, atomic64_t *v)	\
 {									\
-	register long x0 asm ("x0") = i;				\
+	register s64 x0 asm ("x0") = i;					\
 	register atomic64_t *x1 asm ("x1") = v;				\
 									\
 	asm volatile(ARM64_LSE_ATOMIC_INSN(				\
@@ -418,7 +418,7 @@ ATOMIC64_FETCH_OP_SUB(        , al, "memory")
 
 #undef ATOMIC64_FETCH_OP_SUB
 
-static inline long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	register long x0 asm ("x0") = (long)v;
 


* [tip:locking/core] locking/atomic, ia64: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 08/18] locking/atomic: ia64: use " Mark Rutland
@ 2019-06-03 13:39   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: will.deacon, mingo, peterz, mark.rutland, tony.luck, hpa,
	fenghua.yu, torvalds, tglx, linux-kernel

Commit-ID:  d84e28d250150adc6526dcce4ca089e2b57430f3
Gitweb:     https://git.kernel.org/tip/d84e28d250150adc6526dcce4ca089e2b57430f3
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:40 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, ia64: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the ia64 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this
patch.
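
As background (my reading of the generic type headers, not something this
patch changes): in kernel-internal code, s64 is simply the preferred
spelling of __s64, so the switch is about naming consistency rather than
representation.

/* include/uapi/asm-generic/int-ll64.h */
__extension__ typedef __signed__ long long __s64;

/* include/asm-generic/int-ll64.h, pulled in via <linux/types.h> */
typedef __s64 s64;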

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Tony Luck <tony.luck@intel.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-9-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/ia64/include/asm/atomic.h | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/arch/ia64/include/asm/atomic.h b/arch/ia64/include/asm/atomic.h
index 206530d0751b..50440f3ddc43 100644
--- a/arch/ia64/include/asm/atomic.h
+++ b/arch/ia64/include/asm/atomic.h
@@ -124,10 +124,10 @@ ATOMIC_FETCH_OP(xor, ^)
 #undef ATOMIC_OP
 
 #define ATOMIC64_OP(op, c_op)						\
-static __inline__ long							\
-ia64_atomic64_##op (__s64 i, atomic64_t *v)				\
+static __inline__ s64							\
+ia64_atomic64_##op (s64 i, atomic64_t *v)				\
 {									\
-	__s64 old, new;							\
+	s64 old, new;							\
 	CMPXCHG_BUGCHECK_DECL						\
 									\
 	do {								\
@@ -139,10 +139,10 @@ ia64_atomic64_##op (__s64 i, atomic64_t *v)				\
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op)					\
-static __inline__ long							\
-ia64_atomic64_fetch_##op (__s64 i, atomic64_t *v)			\
+static __inline__ s64							\
+ia64_atomic64_fetch_##op (s64 i, atomic64_t *v)				\
 {									\
-	__s64 old, new;							\
+	s64 old, new;							\
 	CMPXCHG_BUGCHECK_DECL						\
 									\
 	do {								\
@@ -162,7 +162,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_add_return(i,v)					\
 ({									\
-	long __ia64_aar_i = (i);					\
+	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter)	\
 		: ia64_atomic64_add(__ia64_aar_i, v);			\
@@ -170,7 +170,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_sub_return(i,v)					\
 ({									\
-	long __ia64_asr_i = (i);					\
+	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter)	\
 		: ia64_atomic64_sub(__ia64_asr_i, v);			\
@@ -178,7 +178,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_add(i,v)						\
 ({									\
-	long __ia64_aar_i = (i);					\
+	s64 __ia64_aar_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq)	\
 		: ia64_atomic64_fetch_add(__ia64_aar_i, v);		\
@@ -186,7 +186,7 @@ ATOMIC64_OPS(sub, -)
 
 #define atomic64_fetch_sub(i,v)						\
 ({									\
-	long __ia64_asr_i = (i);					\
+	s64 __ia64_asr_i = (i);						\
 	__ia64_atomic_const(i)						\
 		? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq)	\
 		: ia64_atomic64_fetch_sub(__ia64_asr_i, v);		\


* [tip:locking/core] locking/atomic, mips: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 09/18] locking/atomic: mips: use " Mark Rutland
@ 2019-06-03 13:39   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:39 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: will.deacon, paul.burton, tglx, mark.rutland, linux-kernel,
	torvalds, peterz, jhogan, mingo, ralf, hpa

Commit-ID:  d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Gitweb:     https://git.kernel.org/tip/d184cf1a449ca4cb0d86f3dd82c3337c645ea6c0
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:41 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, mips: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the mips atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or __s64, matching the generated
headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.
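
To make the "still returns long on 64-bit" remark concrete: on MIPS,
atomic64_read() is just a READ_ONCE() of the counter, so its type follows
the generic 64-bit definition of atomic64_t, which at this point in the
series still looks roughly like:

/* include/linux/types.h, CONFIG_64BIT, before the later treewide patch */
typedef struct {
        long counter;
} atomic64_t;

/* arch/mips/include/asm/atomic.h */
#define atomic64_read(v)        READ_ONCE((v)->counter)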

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: James Hogan <jhogan@kernel.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Burton <paul.burton@mips.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ralf Baechle <ralf@linux-mips.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paulus@samba.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-10-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/mips/include/asm/atomic.h | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/arch/mips/include/asm/atomic.h b/arch/mips/include/asm/atomic.h
index 94096299fc56..9a82dd11c0e9 100644
--- a/arch/mips/include/asm/atomic.h
+++ b/arch/mips/include/asm/atomic.h
@@ -254,10 +254,10 @@ static __inline__ int atomic_sub_if_positive(int i, atomic_t * v)
 #define atomic64_set(v, i)	WRITE_ONCE((v)->counter, (i))
 
 #define ATOMIC64_OP(op, c_op, asm_op)					      \
-static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
+static __inline__ void atomic64_##op(s64 i, atomic64_t * v)		      \
 {									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -280,12 +280,12 @@ static __inline__ void atomic64_##op(long i, atomic64_t * v)		      \
 }
 
 #define ATOMIC64_OP_RETURN(op, c_op, asm_op)				      \
-static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
+static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v)   \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -314,12 +314,12 @@ static __inline__ long atomic64_##op##_return_relaxed(long i, atomic64_t * v) \
 }
 
 #define ATOMIC64_FETCH_OP(op, c_op, asm_op)				      \
-static __inline__ long atomic64_fetch_##op##_relaxed(long i, atomic64_t * v)  \
+static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v)    \
 {									      \
-	long result;							      \
+	s64 result;							      \
 									      \
 	if (kernel_uses_llsc) {						      \
-		long temp;						      \
+		s64 temp;						      \
 									      \
 		loongson_llsc_mb();					      \
 		__asm__ __volatile__(					      \
@@ -386,14 +386,14 @@ ATOMIC64_OPS(xor, ^=, xor)
  * Atomically test @v and subtract @i if @v is greater or equal than @i.
  * The function returns the old value of @v minus @i.
  */
-static __inline__ long atomic64_sub_if_positive(long i, atomic64_t * v)
+static __inline__ s64 atomic64_sub_if_positive(s64 i, atomic64_t * v)
 {
-	long result;
+	s64 result;
 
 	smp_mb__before_llsc();
 
 	if (kernel_uses_llsc) {
-		long temp;
+		s64 temp;
 
 		__asm__ __volatile__(
 		"	.set	push					\n"


* [tip:locking/core] locking/atomic, powerpc: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 10/18] locking/atomic: powerpc: use " Mark Rutland
  2019-05-23 13:27   ` Michael Ellerman
@ 2019-06-03 13:40   ` tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:40 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: hpa, mark.rutland, peterz, mingo, torvalds, will.deacon, mpe,
	tglx, paulus, linux-kernel

Commit-ID:  8cd8de59748ba71b476d1b7101f9ecaccd5cb8c2
Gitweb:     https://git.kernel.org/tip/8cd8de59748ba71b476d1b7101f9ecaccd5cb8c2
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:42 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, powerpc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the powerpc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this
patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Michael Ellerman <mpe@ellerman.id.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-11-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/powerpc/include/asm/atomic.h | 44 +++++++++++++++++++--------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 52eafaf74054..31c231ea56b7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -297,24 +297,24 @@ static __inline__ int atomic_dec_if_positive(atomic_t *v)
 
 #define ATOMIC64_INIT(i)	{ (i) }
 
-static __inline__ long atomic64_read(const atomic64_t *v)
+static __inline__ s64 atomic64_read(const atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__("ld%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
 
 	return t;
 }
 
-static __inline__ void atomic64_set(atomic64_t *v, long i)
+static __inline__ void atomic64_set(atomic64_t *v, s64 i)
 {
 	__asm__ __volatile__("std%U0%X0 %1,%0" : "=m"(v->counter) : "r"(i));
 }
 
 #define ATOMIC64_OP(op, asm_op)						\
-static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
+static __inline__ void atomic64_##op(s64 a, atomic64_t *v)		\
 {									\
-	long t;								\
+	s64 t;								\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%3		# atomic64_" #op "\n"			\
@@ -327,10 +327,10 @@ static __inline__ void atomic64_##op(long a, atomic64_t *v)		\
 }
 
 #define ATOMIC64_OP_RETURN_RELAXED(op, asm_op)				\
-static inline long							\
-atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
+static inline s64							\
+atomic64_##op##_return_relaxed(s64 a, atomic64_t *v)			\
 {									\
-	long t;								\
+	s64 t;								\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%3		# atomic64_" #op "_return_relaxed\n"	\
@@ -345,10 +345,10 @@ atomic64_##op##_return_relaxed(long a, atomic64_t *v)			\
 }
 
 #define ATOMIC64_FETCH_OP_RELAXED(op, asm_op)				\
-static inline long							\
-atomic64_fetch_##op##_relaxed(long a, atomic64_t *v)			\
+static inline s64							\
+atomic64_fetch_##op##_relaxed(s64 a, atomic64_t *v)			\
 {									\
-	long res, t;							\
+	s64 res, t;							\
 									\
 	__asm__ __volatile__(						\
 "1:	ldarx	%0,0,%4		# atomic64_fetch_" #op "_relaxed\n"	\
@@ -396,7 +396,7 @@ ATOMIC64_OPS(xor, xor)
 
 static __inline__ void atomic64_inc(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_inc\n\
@@ -409,9 +409,9 @@ static __inline__ void atomic64_inc(atomic64_t *v)
 }
 #define atomic64_inc atomic64_inc
 
-static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_inc_return_relaxed(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_inc_return_relaxed\n"
@@ -427,7 +427,7 @@ static __inline__ long atomic64_inc_return_relaxed(atomic64_t *v)
 
 static __inline__ void atomic64_dec(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_dec\n\
@@ -440,9 +440,9 @@ static __inline__ void atomic64_dec(atomic64_t *v)
 }
 #define atomic64_dec atomic64_dec
 
-static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
+static __inline__ s64 atomic64_dec_return_relaxed(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 "1:	ldarx	%0,0,%2		# atomic64_dec_return_relaxed\n"
@@ -463,9 +463,9 @@ static __inline__ long atomic64_dec_return_relaxed(atomic64_t *v)
  * Atomically test *v and decrement if it is greater than 0.
  * The function returns the old value of *v minus 1.
  */
-static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
+static __inline__ s64 atomic64_dec_if_positive(atomic64_t *v)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__(
 	PPC_ATOMIC_ENTRY_BARRIER
@@ -502,9 +502,9 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __inline__ s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	long t;
+	s64 t;
 
 	__asm__ __volatile__ (
 	PPC_ATOMIC_ENTRY_BARRIER
@@ -534,7 +534,7 @@ static __inline__ long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
  */
 static __inline__ int atomic64_inc_not_zero(atomic64_t *v)
 {
-	long t1, t2;
+	s64 t1, t2;
 
 	__asm__ __volatile__ (
 	PPC_ATOMIC_ENTRY_BARRIER


* [tip:locking/core] locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument
  2019-05-22 13:22 ` [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument Mark Rutland
  2019-05-22 19:06   ` Palmer Dabbelt
@ 2019-06-03 13:41   ` tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:41 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, peterz, will.deacon, hpa, palmer, mingo, mark.rutland,
	torvalds, aou, linux-kernel

Commit-ID:  33e42ef571979fe6601ac838d338eb599d842a6d
Gitweb:     https://git.kernel.org/tip/33e42ef571979fe6601ac838d338eb599d842a6d
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:43 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, riscv: Fix atomic64_sub_if_positive() offset argument

Presently the riscv implementation of atomic64_sub_if_positive() takes
a 32-bit offset value rather than a 64-bit offset value as it should do.
Thus, if called with a 64-bit offset, the value will be unexpectedly
truncated to 32 bits.

Fix this by taking the offset as a long rather than an int.
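
A hypothetical caller showing the hazard (the value is made up for
illustration; in-tree callers only pass small offsets today):

static void example(atomic64_t *v)
{
        /* with an int offset, 0x100000001 is truncated to 1 before the
         * subtraction; with a long offset the full value is used */
        atomic64_sub_if_positive(v, 0x100000001LL);
}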

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-12-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/riscv/include/asm/atomic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 9038aeb900a6..9c263bd9d5ad 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -332,7 +332,7 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, int offset)
+static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
 {
        long prev, rc;
 


* [tip:locking/core] locking/atomic, riscv: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64 Mark Rutland
  2019-05-22 19:06   ` Palmer Dabbelt
@ 2019-06-03 13:41   ` tip-bot for Mark Rutland
  1 sibling, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:41 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: mark.rutland, hpa, peterz, aou, linux-kernel, tglx, mingo,
	will.deacon, torvalds, palmer

Commit-ID:  0754211847d7a228f1c34a49fd122979dfd19a1a
Gitweb:     https://git.kernel.org/tip/0754211847d7a228f1c34a49fd122979dfd19a1a
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:44 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, riscv: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the RISC-V atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long on 64-bit. This will be converted in a subsequent
patch.

Otherwise, there should be no functional change as a result of this patch.
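
For reference, my rough by-hand expansion of one updated instantiation,
ATOMIC_OP(add, add, i, d, s64, 64), showing where the new c_type ends up
(paraphrased, not generated output):

static __always_inline void atomic64_add(s64 i, atomic64_t *v)
{
        __asm__ __volatile__ (
                "       amoadd.d zero, %1, %0"
                : "+A" (v->counter)
                : "r" (i)
                : "memory");
}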

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Palmer Dabbelt <palmer@sifive.com>
Cc: Albert Ou <aou@eecs.berkeley.edu>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-13-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/riscv/include/asm/atomic.h | 44 +++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/arch/riscv/include/asm/atomic.h b/arch/riscv/include/asm/atomic.h
index 9c263bd9d5ad..96f95c9ebd97 100644
--- a/arch/riscv/include/asm/atomic.h
+++ b/arch/riscv/include/asm/atomic.h
@@ -38,11 +38,11 @@ static __always_inline void atomic_set(atomic_t *v, int i)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC64_INIT(i) { (i) }
-static __always_inline long atomic64_read(const atomic64_t *v)
+static __always_inline s64 atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE(v->counter);
 }
-static __always_inline void atomic64_set(atomic64_t *v, long i)
+static __always_inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -66,11 +66,11 @@ void atomic##prefix##_##op(c_type i, atomic##prefix##_t *v)		\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_OP (op, asm_op, I, w,  int,   )
+        ATOMIC_OP (op, asm_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_OP (op, asm_op, I, w,  int,   )				\
-        ATOMIC_OP (op, asm_op, I, d, long, 64)
+        ATOMIC_OP (op, asm_op, I, w, int,   )				\
+        ATOMIC_OP (op, asm_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(add, add,  i)
@@ -127,14 +127,14 @@ c_type atomic##prefix##_##op##_return(c_type i, atomic##prefix##_t *v)	\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, c_op, I)					\
-        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )
+        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, c_op, I)					\
-        ATOMIC_FETCH_OP( op, asm_op,       I, w,  int,   )		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w,  int,   )		\
-        ATOMIC_FETCH_OP( op, asm_op,       I, d, long, 64)		\
-        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, long, 64)
+        ATOMIC_FETCH_OP( op, asm_op,       I, w, int,   )		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, w, int,   )		\
+        ATOMIC_FETCH_OP( op, asm_op,       I, d, s64, 64)		\
+        ATOMIC_OP_RETURN(op, asm_op, c_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(add, add, +,  i)
@@ -166,11 +166,11 @@ ATOMIC_OPS(sub, add, +, -i)
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )
+        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )
 #else
 #define ATOMIC_OPS(op, asm_op, I)					\
-        ATOMIC_FETCH_OP(op, asm_op, I, w,  int,   )			\
-        ATOMIC_FETCH_OP(op, asm_op, I, d, long, 64)
+        ATOMIC_FETCH_OP(op, asm_op, I, w, int,   )			\
+        ATOMIC_FETCH_OP(op, asm_op, I, d, s64, 64)
 #endif
 
 ATOMIC_OPS(and, and, i)
@@ -219,9 +219,10 @@ static __always_inline int atomic_fetch_add_unless(atomic_t *v, int a, int u)
 #define atomic_fetch_add_unless atomic_fetch_add_unless
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_fetch_add_unless(atomic64_t *v, long a, long u)
+static __always_inline s64 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-       long prev, rc;
+       s64 prev;
+       long rc;
 
 	__asm__ __volatile__ (
 		"0:	lr.d     %[p],  %[c]\n"
@@ -290,11 +291,11 @@ c_t atomic##prefix##_cmpxchg(atomic##prefix##_t *v, c_t o, c_t n)	\
 
 #ifdef CONFIG_GENERIC_ATOMIC64
 #define ATOMIC_OPS()							\
-	ATOMIC_OP( int,   , 4)
+	ATOMIC_OP(int,   , 4)
 #else
 #define ATOMIC_OPS()							\
-	ATOMIC_OP( int,   , 4)						\
-	ATOMIC_OP(long, 64, 8)
+	ATOMIC_OP(int,   , 4)						\
+	ATOMIC_OP(s64, 64, 8)
 #endif
 
 ATOMIC_OPS()
@@ -332,9 +333,10 @@ static __always_inline int atomic_sub_if_positive(atomic_t *v, int offset)
 #define atomic_dec_if_positive(v)	atomic_sub_if_positive(v, 1)
 
 #ifndef CONFIG_GENERIC_ATOMIC64
-static __always_inline long atomic64_sub_if_positive(atomic64_t *v, long offset)
+static __always_inline s64 atomic64_sub_if_positive(atomic64_t *v, s64 offset)
 {
-       long prev, rc;
+       s64 prev;
+       long rc;
 
 	__asm__ __volatile__ (
 		"0:	lr.d     %[p],  %[c]\n"


* [tip:locking/core] locking/atomic, s390: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 13/18] locking/atomic: s390: use " Mark Rutland
@ 2019-06-03 13:42   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, mingo, torvalds, hpa, heiko.carstens, mark.rutland,
	will.deacon, peterz, tglx

Commit-ID:  0ca94800762e8a2f7037c9b02ba74aff8016dd82
Gitweb:     https://git.kernel.org/tip/0ca94800762e8a2f7037c9b02ba74aff8016dd82
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:45 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, s390: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the s390 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

The s390-internal __atomic64_*() ops are also used by the s390 bitops,
and expect pointers to long. Since atomic64_t::counter will be converted
to s64 in a subsequent patch, pointers to it are explicitly cast to
pointers to long when passed to __atomic64_*() ops.

Otherwise, there should be no functional change as a result of this
patch.
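
For context, my paraphrase of the shared helper signatures this refers to
(from arch/s390/include/asm/atomic_ops.h; they stay long-based because the
bitops code uses them on plain longs):

long __atomic64_add(long val, long *ptr);
long __atomic64_add_barrier(long val, long *ptr);
long __atomic64_cmpxchg(long *ptr, long old, long new);

Hence the (long *)&v->counter casts in the wrappers below, once the counter
itself becomes s64.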

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-14-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/s390/include/asm/atomic.h | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index fd20ab5d4cf7..491ad53a0d4e 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -84,9 +84,9 @@ static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 
 #define ATOMIC64_INIT(i)  { (i) }
 
-static inline long atomic64_read(const atomic64_t *v)
+static inline s64 atomic64_read(const atomic64_t *v)
 {
-	long c;
+	s64 c;
 
 	asm volatile(
 		"	lg	%0,%1\n"
@@ -94,49 +94,49 @@ static inline long atomic64_read(const atomic64_t *v)
 	return c;
 }
 
-static inline void atomic64_set(atomic64_t *v, long i)
+static inline void atomic64_set(atomic64_t *v, s64 i)
 {
 	asm volatile(
 		"	stg	%1,%0\n"
 		: "=Q" (v->counter) : "d" (i));
 }
 
-static inline long atomic64_add_return(long i, atomic64_t *v)
+static inline s64 atomic64_add_return(s64 i, atomic64_t *v)
 {
-	return __atomic64_add_barrier(i, &v->counter) + i;
+	return __atomic64_add_barrier(i, (long *)&v->counter) + i;
 }
 
-static inline long atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
 {
-	return __atomic64_add_barrier(i, &v->counter);
+	return __atomic64_add_barrier(i, (long *)&v->counter);
 }
 
-static inline void atomic64_add(long i, atomic64_t *v)
+static inline void atomic64_add(s64 i, atomic64_t *v)
 {
 #ifdef CONFIG_HAVE_MARCH_Z196_FEATURES
 	if (__builtin_constant_p(i) && (i > -129) && (i < 128)) {
-		__atomic64_add_const(i, &v->counter);
+		__atomic64_add_const(i, (long *)&v->counter);
 		return;
 	}
 #endif
-	__atomic64_add(i, &v->counter);
+	__atomic64_add(i, (long *)&v->counter);
 }
 
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
-	return __atomic64_cmpxchg(&v->counter, old, new);
+	return __atomic64_cmpxchg((long *)&v->counter, old, new);
 }
 
 #define ATOMIC64_OPS(op)						\
-static inline void atomic64_##op(long i, atomic64_t *v)			\
+static inline void atomic64_##op(s64 i, atomic64_t *v)			\
 {									\
-	__atomic64_##op(i, &v->counter);				\
+	__atomic64_##op(i, (long *)&v->counter);			\
 }									\
-static inline long atomic64_fetch_##op(long i, atomic64_t *v)		\
+static inline long atomic64_fetch_##op(s64 i, atomic64_t *v)		\
 {									\
-	return __atomic64_##op##_barrier(i, &v->counter);		\
+	return __atomic64_##op##_barrier(i, (long *)&v->counter);	\
 }
 
 ATOMIC64_OPS(and)
@@ -145,8 +145,8 @@ ATOMIC64_OPS(xor)
 
 #undef ATOMIC64_OPS
 
-#define atomic64_sub_return(_i, _v)	atomic64_add_return(-(long)(_i), _v)
-#define atomic64_fetch_sub(_i, _v)	atomic64_fetch_add(-(long)(_i), _v)
-#define atomic64_sub(_i, _v)		atomic64_add(-(long)(_i), _v)
+#define atomic64_sub_return(_i, _v)	atomic64_add_return(-(s64)(_i), _v)
+#define atomic64_fetch_sub(_i, _v)	atomic64_fetch_add(-(s64)(_i), _v)
+#define atomic64_sub(_i, _v)		atomic64_add(-(s64)(_i), _v)
 
 #endif /* __ARCH_S390_ATOMIC__  */


* [tip:locking/core] locking/atomic, sparc: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 14/18] locking/atomic: sparc: use " Mark Rutland
@ 2019-06-03 13:43   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:43 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, peterz, hpa, mingo, torvalds, linux-kernel, davem,
	will.deacon, mark.rutland

Commit-ID:  04e8851af767153c0878cc79ce30c0d8806eec43
Gitweb:     https://git.kernel.org/tip/04e8851af767153c0878cc79ce30c0d8806eec43
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:46 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:56 +0200

locking/atomic, sparc: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the sparc atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long, matching the generated headers.

As atomic64_read() depends on the generic definition of atomic64_t, this
still returns long. This will be converted in a subsequent patch.

Otherwise, there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: David S. Miller <davem@davemloft.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-15-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/sparc/include/asm/atomic_64.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/sparc/include/asm/atomic_64.h b/arch/sparc/include/asm/atomic_64.h
index 6963482c81d8..b60448397d4f 100644
--- a/arch/sparc/include/asm/atomic_64.h
+++ b/arch/sparc/include/asm/atomic_64.h
@@ -23,15 +23,15 @@
 
 #define ATOMIC_OP(op)							\
 void atomic_##op(int, atomic_t *);					\
-void atomic64_##op(long, atomic64_t *);
+void atomic64_##op(s64, atomic64_t *);
 
 #define ATOMIC_OP_RETURN(op)						\
 int atomic_##op##_return(int, atomic_t *);				\
-long atomic64_##op##_return(long, atomic64_t *);
+s64 atomic64_##op##_return(s64, atomic64_t *);
 
 #define ATOMIC_FETCH_OP(op)						\
 int atomic_fetch_##op(int, atomic_t *);					\
-long atomic64_fetch_##op(long, atomic64_t *);
+s64 atomic64_fetch_##op(s64, atomic64_t *);
 
 #define ATOMIC_OPS(op) ATOMIC_OP(op) ATOMIC_OP_RETURN(op) ATOMIC_FETCH_OP(op)
 
@@ -61,7 +61,7 @@ static inline int atomic_xchg(atomic_t *v, int new)
 	((__typeof__((v)->counter))cmpxchg(&((v)->counter), (o), (n)))
 #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
 
-long atomic64_dec_if_positive(atomic64_t *v);
+s64 atomic64_dec_if_positive(atomic64_t *v);
 #define atomic64_dec_if_positive atomic64_dec_if_positive
 
 #endif /* !(__ARCH_SPARC64_ATOMIC__) */


* [tip:locking/core] locking/atomic, x86: Use s64 for atomic64
  2019-05-22 13:22 ` [PATCH 15/18] locking/atomic: x86: use " Mark Rutland
@ 2019-06-03 13:44   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:44 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: peterz, mingo, tglx, torvalds, mark.rutland, hpa, will.deacon,
	linux-kernel, linux, bp

Commit-ID:  79c53a83d7a31a5b5c7bafce4f0723bebf26836a
Gitweb:     https://git.kernel.org/tip/79c53a83d7a31a5b5c7bafce4f0723bebf26836a
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:47 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, x86: Use s64 for atomic64

As a step towards making the atomic64 API use consistent types treewide,
let's have the x86 atomic64 implementation use s64 as the underlying
type for atomic64_t, rather than long or long long, matching the
generated headers.

Note that the x86 arch_atomic64 implementation is already wrapped by the
generic instrumented atomic64 implementation, which uses s64
consistently.

Otherwise, there should be no functional change as a result of this
patch.
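
For reference, the instrumented wrapping mentioned above looks roughly like
this (quoted from memory of include/asm-generic/atomic-instrumented.h, so
treat it as a sketch):

static inline s64 atomic64_read(const atomic64_t *v)
{
        kasan_check_read(v, sizeof(*v));
        return arch_atomic64_read(v);
}

static inline s64 atomic64_fetch_add(s64 i, atomic64_t *v)
{
        kasan_check_write(v, sizeof(*v));
        return arch_atomic64_fetch_add(i, v);
}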

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Russell King <linux@armlinux.org.uk>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-16-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/x86/include/asm/atomic64_32.h | 66 ++++++++++++++++++--------------------
 arch/x86/include/asm/atomic64_64.h | 38 +++++++++++-----------
 2 files changed, 51 insertions(+), 53 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 6a5b0ec460da..52cfaecb13f9 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -9,7 +9,7 @@
 /* An 64bit atomic type */
 
 typedef struct {
-	u64 __aligned(8) counter;
+	s64 __aligned(8) counter;
 } atomic64_t;
 
 #define ATOMIC64_INIT(val)	{ (val) }
@@ -71,8 +71,7 @@ ATOMIC64_DECL(add_unless);
  * the old value.
  */
 
-static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
-					      long long n)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 o, s64 n)
 {
 	return arch_cmpxchg64(&v->counter, o, n);
 }
@@ -85,9 +84,9 @@ static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
  * Atomically xchgs the value of @v to @n and returns
  * the old value.
  */
-static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 n)
 {
-	long long o;
+	s64 o;
 	unsigned high = (unsigned)(n >> 32);
 	unsigned low = (unsigned)n;
 	alternative_atomic64(xchg, "=&A" (o),
@@ -103,7 +102,7 @@ static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
  *
  * Atomically sets the value of @v to @n.
  */
-static inline void arch_atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	unsigned high = (unsigned)(i >> 32);
 	unsigned low = (unsigned)i;
@@ -118,9 +117,9 @@ static inline void arch_atomic64_set(atomic64_t *v, long long i)
  *
  * Atomically reads the value of @v and returns it.
  */
-static inline long long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
-	long long r;
+	s64 r;
 	alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
 	return r;
 }
@@ -132,7 +131,7 @@ static inline long long arch_atomic64_read(const atomic64_t *v)
  *
  * Atomically adds @i to @v and returns @i + *@v
  */
-static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(add_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -143,7 +142,7 @@ static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 /*
  * Other variants with different arithmetic operators:
  */
-static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	alternative_atomic64(sub_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -151,18 +150,18 @@ static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 	return i;
 }
 
-static inline long long arch_atomic64_inc_return(atomic64_t *v)
+static inline s64 arch_atomic64_inc_return(atomic64_t *v)
 {
-	long long a;
+	s64 a;
 	alternative_atomic64(inc_return, "=&A" (a),
 			     "S" (v) : "memory", "ecx");
 	return a;
 }
 #define arch_atomic64_inc_return arch_atomic64_inc_return
 
-static inline long long arch_atomic64_dec_return(atomic64_t *v)
+static inline s64 arch_atomic64_dec_return(atomic64_t *v)
 {
-	long long a;
+	s64 a;
 	alternative_atomic64(dec_return, "=&A" (a),
 			     "S" (v) : "memory", "ecx");
 	return a;
@@ -176,7 +175,7 @@ static inline long long arch_atomic64_dec_return(atomic64_t *v)
  *
  * Atomically adds @i to @v.
  */
-static inline long long arch_atomic64_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(add, add_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -191,7 +190,7 @@ static inline long long arch_atomic64_add(long long i, atomic64_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	__alternative_atomic64(sub, sub_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -234,8 +233,7 @@ static inline void arch_atomic64_dec(atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns non-zero if the add was done, zero otherwise.
  */
-static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
-					   long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
 	unsigned low = (unsigned)u;
 	unsigned high = (unsigned)(u >> 32);
@@ -254,9 +252,9 @@ static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
 }
 #define arch_atomic64_inc_not_zero arch_atomic64_inc_not_zero
 
-static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
+static inline s64 arch_atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long r;
+	s64 r;
 	alternative_atomic64(dec_if_positive, "=&A" (r),
 			     "S" (v) : "ecx", "memory");
 	return r;
@@ -266,17 +264,17 @@ static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
-static inline void arch_atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
@@ -284,17 +282,17 @@ static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline void arch_atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
@@ -302,17 +300,17 @@ static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline void arch_atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 }
 
-static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
@@ -320,9 +318,9 @@ static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 	return old;
 }
 
-static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
-	long long old, c = 0;
+	s64 old, c = 0;
 
 	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
 		c = old;
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..703b7dfd45e0 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -17,7 +17,7 @@
  * Atomically reads the value of @v.
  * Doesn't imply a read memory barrier.
  */
-static inline long arch_atomic64_read(const atomic64_t *v)
+static inline s64 arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
@@ -29,7 +29,7 @@ static inline long arch_atomic64_read(const atomic64_t *v)
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void arch_atomic64_set(atomic64_t *v, long i)
+static inline void arch_atomic64_set(atomic64_t *v, s64 i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -41,7 +41,7 @@ static inline void arch_atomic64_set(atomic64_t *v, long i)
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -55,7 +55,7 @@ static __always_inline void arch_atomic64_add(long i, atomic64_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline void arch_atomic64_sub(long i, atomic64_t *v)
+static inline void arch_atomic64_sub(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
@@ -71,7 +71,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline bool arch_atomic64_sub_and_test(long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, e, "er", i);
 }
@@ -142,7 +142,7 @@ static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(s64 i, atomic64_t *v)
 {
 	return GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, s, "er", i);
 }
@@ -155,43 +155,43 @@ static inline bool arch_atomic64_add_negative(long i, atomic64_t *v)
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline long arch_atomic64_add_return(long i, atomic64_t *v)
+static __always_inline s64 arch_atomic64_add_return(s64 i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
-static inline long arch_atomic64_sub_return(long i, atomic64_t *v)
+static inline s64 arch_atomic64_sub_return(s64 i, atomic64_t *v)
 {
 	return arch_atomic64_add_return(-i, v);
 }
 
-static inline long arch_atomic64_fetch_add(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_add(s64 i, atomic64_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static inline long arch_atomic64_fetch_sub(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-static inline long arch_atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline s64 arch_atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
 	return arch_cmpxchg(&v->counter, old, new);
 }
 
 #define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
-static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, long new)
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
 	return try_cmpxchg(&v->counter, old, new);
 }
 
-static inline long arch_atomic64_xchg(atomic64_t *v, long new)
+static inline s64 arch_atomic64_xchg(atomic64_t *v, s64 new)
 {
 	return arch_xchg(&v->counter, new);
 }
 
-static inline void arch_atomic64_and(long i, atomic64_t *v)
+static inline void arch_atomic64_and(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
 			: "+m" (v->counter)
@@ -199,7 +199,7 @@ static inline void arch_atomic64_and(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_and(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 
@@ -208,7 +208,7 @@ static inline long arch_atomic64_fetch_and(long i, atomic64_t *v)
 	return val;
 }
 
-static inline void arch_atomic64_or(long i, atomic64_t *v)
+static inline void arch_atomic64_or(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "orq %1,%0"
 			: "+m" (v->counter)
@@ -216,7 +216,7 @@ static inline void arch_atomic64_or(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_or(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 
@@ -225,7 +225,7 @@ static inline long arch_atomic64_fetch_or(long i, atomic64_t *v)
 	return val;
 }
 
-static inline void arch_atomic64_xor(long i, atomic64_t *v)
+static inline void arch_atomic64_xor(s64 i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorq %1,%0"
 			: "+m" (v->counter)
@@ -233,7 +233,7 @@ static inline void arch_atomic64_xor(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long arch_atomic64_fetch_xor(long i, atomic64_t *v)
+static inline s64 arch_atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
 	s64 val = arch_atomic64_read(v);
 

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic: Use s64 for atomic64_t on 64-bit
  2019-05-22 13:22 ` [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit Mark Rutland
@ 2019-06-03 13:44   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:44 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, mark.rutland, hpa, torvalds, linux-kernel, peterz, mingo,
	will.deacon

Commit-ID:  3724921396dd1a07c93e3516b8d7c9ff570d17a9
Gitweb:     https://git.kernel.org/tip/3724921396dd1a07c93e3516b8d7c9ff570d17a9
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:48 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic: Use s64 for atomic64_t on 64-bit

Now that all architectures use s64 consistently as the base type for the
atomic64 API, let's have the CONFIG_64BIT definition of atomic64_t use
s64 as the underlying type as well, rather than long, matching the
generated headers.

On architectures where atomic64_read(v) is READ_ONCE(v->counter), this
patch will cause the return type of atomic64_read() to be s64.

As of this patch, the atomic64 API can be relied upon to consistently
return s64 wherever a value rather than a boolean condition is returned.
This should make code more robust and simpler, allowing for the removal
of casts previously required to ensure consistent types.
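
For illustration only (not part of the patch; the example_counter and
example_show names are made up): with this in place, the result of
atomic64_read() can be passed straight to a %lld format specifier with
no cast, on both 32-bit and 64-bit builds:

  /* Illustrative sketch only; not from this series. */
  #include <linux/atomic.h>
  #include <linux/printk.h>

  static atomic64_t example_counter = ATOMIC64_INIT(0);

  static void example_show(void)
  {
  	/*
  	 * atomic64_read() now returns s64 (long long in-kernel), which
  	 * matches %lld directly, so no explicit (s64) cast is needed.
  	 */
  	pr_info("counter: %lld\n", atomic64_read(&example_counter));
  }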

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-17-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/types.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/linux/types.h b/include/linux/types.h
index 231114ae38f4..05030f608be3 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -174,7 +174,7 @@ typedef struct {
 
 #ifdef CONFIG_64BIT
 typedef struct {
-	long counter;
+	s64 counter;
 } atomic64_t;
 #endif
 

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, crypto/nx: Remove redundant casts
  2019-05-22 13:22 ` [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts Mark Rutland
@ 2019-06-03 13:45   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:45 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, linux-kernel, mingo, tglx, herbert, will.deacon,
	peterz, mark.rutland, hpa

Commit-ID:  2af7a0f91c3a645ec9f1cd68e41472021746db35
Gitweb:     https://git.kernel.org/tip/2af7a0f91c3a645ec9f1cd68e41472021746db35
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:49 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, crypto/nx: Remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: heiko.carstens@de.ibm.com
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-18-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 drivers/crypto/nx/nx-842-pseries.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/nx/nx-842-pseries.c b/drivers/crypto/nx/nx-842-pseries.c
index 938332ce3b60..2de5e3672e42 100644
--- a/drivers/crypto/nx/nx-842-pseries.c
+++ b/drivers/crypto/nx/nx-842-pseries.c
@@ -857,7 +857,7 @@ static ssize_t nx842_##_name##_show(struct device *dev,		\
 	local_devdata = rcu_dereference(devdata);			\
 	if (local_devdata)						\
 		p = snprintf(buf, PAGE_SIZE, "%lld\n",			\
-		       (s64)atomic64_read(&local_devdata->counters->_name));	\
+		       atomic64_read(&local_devdata->counters->_name));	\
 	rcu_read_unlock();						\
 	return p;							\
 }
@@ -911,7 +911,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	for (i = 0; i < (NX842_HIST_SLOTS - 2); i++) {
 		bytes = snprintf(p, bytes_remain, "%u-%uus:\t%lld\n",
 			       i ? (2<<(i-1)) : 0, (2<<i)-1,
-			       (s64)atomic64_read(&times[i]));
+			       atomic64_read(&times[i]));
 		bytes_remain -= bytes;
 		p += bytes;
 	}
@@ -919,7 +919,7 @@ static ssize_t nx842_timehist_show(struct device *dev,
 	 * 2<<(NX842_HIST_SLOTS - 2) us */
 	bytes = snprintf(p, bytes_remain, "%uus - :\t%lld\n",
 			2<<(NX842_HIST_SLOTS - 2),
-			(s64)atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
+			atomic64_read(&times[(NX842_HIST_SLOTS - 1)]));
 	p += bytes;
 
 	rcu_read_unlock();

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] locking/atomic, s390/pci: Remove redundant casts
  2019-05-22 13:22 ` [PATCH 18/18] locking/atomic: s390/pci: remove " Mark Rutland
@ 2019-06-03 13:46   ` tip-bot for Mark Rutland
  0 siblings, 0 replies; 60+ messages in thread
From: tip-bot for Mark Rutland @ 2019-06-03 13:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: heiko.carstens, mingo, will.deacon, torvalds, hpa, mark.rutland,
	peterz, tglx, linux-kernel

Commit-ID:  6a6a9d5fb9f26d2c2127497f3a42adbeb5ccc2a4
Gitweb:     https://git.kernel.org/tip/6a6a9d5fb9f26d2c2127497f3a42adbeb5ccc2a4
Author:     Mark Rutland <mark.rutland@arm.com>
AuthorDate: Wed, 22 May 2019 14:22:50 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

locking/atomic, s390/pci: Remove redundant casts

Now that atomic64_read() returns s64 consistently, we don't need to
explicitly cast its return value. Drop the redundant casts.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: aou@eecs.berkeley.edu
Cc: arnd@arndb.de
Cc: bp@alien8.de
Cc: catalin.marinas@arm.com
Cc: davem@davemloft.net
Cc: fenghua.yu@intel.com
Cc: herbert@gondor.apana.org.au
Cc: ink@jurassic.park.msu.ru
Cc: jhogan@kernel.org
Cc: linux@armlinux.org.uk
Cc: mattst88@gmail.com
Cc: mpe@ellerman.id.au
Cc: palmer@sifive.com
Cc: paul.burton@mips.com
Cc: paulus@samba.org
Cc: ralf@linux-mips.org
Cc: rth@twiddle.net
Cc: tony.luck@intel.com
Cc: vgupta@synopsys.com
Link: https://lkml.kernel.org/r/20190522132250.26499-19-mark.rutland@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/s390/pci/pci_debug.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index 45eccf79e990..3408c0df3ebf 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -75,7 +75,7 @@ static void pci_sw_counter_show(struct seq_file *m)
 
 	for (i = 0; i < ARRAY_SIZE(pci_sw_names); i++, counter++)
 		seq_printf(m, "%26s:\t%llu\n", pci_sw_names[i],
-			   (s64)atomic64_read(counter));
+			   atomic64_read(counter));
 }
 
 static int pci_perf_show(struct seq_file *m, void *v)

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage
  2019-05-24 11:52           ` Peter Zijlstra
  2019-05-24 22:43             ` Andrea Parri
@ 2019-06-03 13:46             ` tip-bot for Peter Zijlstra
  2019-06-06  8:44               ` Andrea Parri
  1 sibling, 1 reply; 60+ messages in thread
From: tip-bot for Peter Zijlstra @ 2019-06-03 13:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, mingo, gregkh, tglx, hpa, peterz, linux-kernel, will.deacon

Commit-ID:  fff9b6c7d26943a8eb32b58364b7ec6b9369746a
Gitweb:     https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Fri, 24 May 2019 13:52:31 +0200
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Mon, 3 Jun 2019 12:32:57 +0200

Documentation/atomic_t.txt: Clarify pure non-rmw usage

Clarify that pure non-RMW usage of atomic_t is pointless, there is
nothing 'magical' about atomic_set() / atomic_read().

This is something that seems to confuse people, because I happen upon it
semi-regularly.
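
As a purely illustrative sketch of the point (the example_state name and
helpers below are made up): a value that is only ever loaded and stored,
never updated with a read-modify-write op, does not need atomic_t at all;
plain READ_ONCE()/WRITE_ONCE() on an ordinary int give the same guarantees:

  /* Illustrative sketch only; not part of this commit. */
  #include <linux/compiler.h>

  static int example_state;	/* no atomic_t needed without RMW ops */

  static void example_set_state(int v)
  {
  	WRITE_ONCE(example_state, v);	/* all atomic_set() canonically does */
  }

  static int example_get_state(void)
  {
  	return READ_ONCE(example_state);	/* all atomic_read() canonically does */
  }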

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 Documentation/atomic_t.txt | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..89eae7f6b360 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -81,9 +81,11 @@ Non-RMW ops:
 
 The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
 implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
-smp_store_release() respectively.
+smp_store_release() respectively. Therefore, if you find yourself only using
+the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
+and are doing it wrong.
 
-The one detail to this is that atomic_set{}() should be observable to the RMW
+A subtle detail of atomic_set{}() is that it should be observable to the RMW
 ops. That is:
 
   C atomic-set

^ permalink raw reply related	[flat|nested] 60+ messages in thread

* Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage
  2019-06-03 13:46             ` [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage tip-bot for Peter Zijlstra
@ 2019-06-06  8:44               ` Andrea Parri
  2019-06-06  9:04                 ` Peter Zijlstra
  0 siblings, 1 reply; 60+ messages in thread
From: Andrea Parri @ 2019-06-06  8:44 UTC (permalink / raw)
  To: torvalds, mingo, gregkh, tglx, hpa, peterz, will.deacon, linux-kernel
  Cc: linux-tip-commits

On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> Commit-ID:  fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> Gitweb:     https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> Author:     Peter Zijlstra <peterz@infradead.org>
> AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> Committer:  Ingo Molnar <mingo@kernel.org>
> CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
> 
> Documentation/atomic_t.txt: Clarify pure non-rmw usage
> 
> Clarify that pure non-RMW usage of atomic_t is pointless, there is
> nothing 'magical' about atomic_set() / atomic_read().
> 
> This is something that seems to confuse people, because I happen upon it
> semi-regularly.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Acked-by: Will Deacon <will.deacon@arm.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net
> Signed-off-by: Ingo Molnar <mingo@kernel.org>

I'd appreciate it if you could Cc: me on future changes to this doc
(as currently suggested by get_maintainer.pl).

This is particularly annoying when you spend time reviewing such
changes:

  https://lkml.kernel.org/r/20190528111558.GA9106@andrea

Thanks,
  Andrea


> ---
>  Documentation/atomic_t.txt | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
> index dca3fb0554db..89eae7f6b360 100644
> --- a/Documentation/atomic_t.txt
> +++ b/Documentation/atomic_t.txt
> @@ -81,9 +81,11 @@ Non-RMW ops:
>  
>  The non-RMW ops are (typically) regular LOADs and STOREs and are canonically
>  implemented using READ_ONCE(), WRITE_ONCE(), smp_load_acquire() and
> -smp_store_release() respectively.
> +smp_store_release() respectively. Therefore, if you find yourself only using
> +the Non-RMW operations of atomic_t, you do not in fact need atomic_t at all
> +and are doing it wrong.
>  
> -The one detail to this is that atomic_set{}() should be observable to the RMW
> +A subtle detail of atomic_set{}() is that it should be observable to the RMW
>  ops. That is:
>  
>    C atomic-set

^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage
  2019-06-06  8:44               ` Andrea Parri
@ 2019-06-06  9:04                 ` Peter Zijlstra
  2019-06-06  9:11                   ` Andrea Parri
  0 siblings, 1 reply; 60+ messages in thread
From: Peter Zijlstra @ 2019-06-06  9:04 UTC (permalink / raw)
  To: Andrea Parri
  Cc: torvalds, mingo, gregkh, tglx, hpa, will.deacon, linux-kernel,
	linux-tip-commits

On Thu, Jun 06, 2019 at 10:44:21AM +0200, Andrea Parri wrote:
> On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> > Commit-ID:  fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > Gitweb:     https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > Author:     Peter Zijlstra <peterz@infradead.org>
> > AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> > Committer:  Ingo Molnar <mingo@kernel.org>
> > CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
> > 
> > Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > 
> > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > nothing 'magical' about atomic_set() / atomic_read().
> > 
> > This is something that seems to confuse people, because I happen upon it
> > semi-regularly.
> > 
> > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > Acked-by: Will Deacon <will.deacon@arm.com>
> > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net
> > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> I'd appreciate it if you could Cc: me on future changes to this doc
> (as currently suggested by get_maintainer.pl).
> 
> This is particularly annoying when you spend time reviewing such
> changes:
> 
>   https://lkml.kernel.org/r/20190528111558.GA9106@andrea

Sure, I hadn't realized the LKMM entry had appropriated this file; I
considered it part of the ATOMIC entry there.


^ permalink raw reply	[flat|nested] 60+ messages in thread

* Re: [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage
  2019-06-06  9:04                 ` Peter Zijlstra
@ 2019-06-06  9:11                   ` Andrea Parri
  0 siblings, 0 replies; 60+ messages in thread
From: Andrea Parri @ 2019-06-06  9:11 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: torvalds, mingo, gregkh, tglx, hpa, will.deacon, linux-kernel,
	linux-tip-commits

On Thu, Jun 06, 2019 at 11:04:39AM +0200, Peter Zijlstra wrote:
> On Thu, Jun 06, 2019 at 10:44:21AM +0200, Andrea Parri wrote:
> > On Mon, Jun 03, 2019 at 06:46:54AM -0700, tip-bot for Peter Zijlstra wrote:
> > > Commit-ID:  fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > > Gitweb:     https://git.kernel.org/tip/fff9b6c7d26943a8eb32b58364b7ec6b9369746a
> > > Author:     Peter Zijlstra <peterz@infradead.org>
> > > AuthorDate: Fri, 24 May 2019 13:52:31 +0200
> > > Committer:  Ingo Molnar <mingo@kernel.org>
> > > CommitDate: Mon, 3 Jun 2019 12:32:57 +0200
> > > 
> > > Documentation/atomic_t.txt: Clarify pure non-rmw usage
> > > 
> > > Clarify that pure non-RMW usage of atomic_t is pointless, there is
> > > nothing 'magical' about atomic_set() / atomic_read().
> > > 
> > > This is something that seems to confuse people, because I happen upon it
> > > semi-regularly.
> > > 
> > > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> > > Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> > > Acked-by: Will Deacon <will.deacon@arm.com>
> > > Cc: Linus Torvalds <torvalds@linux-foundation.org>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Thomas Gleixner <tglx@linutronix.de>
> > > Link: https://lkml.kernel.org/r/20190524115231.GN2623@hirez.programming.kicks-ass.net
> > > Signed-off-by: Ingo Molnar <mingo@kernel.org>
> > 
> > I'd appreciate it if you could Cc: me on future changes to this doc
> > (as currently suggested by get_maintainer.pl).
> > 
> > This is particularly annoying when you spend time reviewing such
> > changes:
> > 
> >   https://lkml.kernel.org/r/20190528111558.GA9106@andrea
> 
> Sure, I hadn't realized the LKMM entry had appropriated this file; I
> considered it part of the ATOMIC entry there.

Thanks.  Well, that was not a 'secret'; cf.

  70b83069f70d ("tools/memory-model: Add informal LKMM documentation to MAINTAINERS")

  Andrea

^ permalink raw reply	[flat|nested] 60+ messages in thread

end of thread, other threads:[~2019-06-06  9:11 UTC | newest]

Thread overview: 60+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-05-22 13:22 [PATCH 00/18] locking/atomic: atomic64 type cleanup Mark Rutland
2019-05-22 13:22 ` [PATCH 01/18] locking/atomic: crypto: nx: prepare for atomic64_read() conversion Mark Rutland
2019-06-03 13:34   ` [tip:locking/core] locking/atomic, crypto/nx: Prepare " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 02/18] locking/atomic: s390/pci: prepare " Mark Rutland
2019-06-03 13:34   ` [tip:locking/core] locking/atomic, s390/pci: Prepare " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 03/18] locking/atomic: generic: use s64 for atomic64 Mark Rutland
2019-05-22 21:16   ` Arnd Bergmann
2019-06-03 13:35   ` [tip:locking/core] locking/atomic: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 04/18] locking/atomic: alpha: use " Mark Rutland
2019-06-03 13:36   ` [tip:locking/core] locking/atomic, alpha: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 05/18] locking/atomic: arc: use " Mark Rutland
2019-05-23 23:10   ` Vineet Gupta
2019-06-03 13:36   ` [tip:locking/core] locking/atomic, arc: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 06/18] locking/atomic: arm: use " Mark Rutland
2019-06-03 13:37   ` [tip:locking/core] locking/atomic, arm: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 07/18] locking/atomic: arm64: use " Mark Rutland
2019-06-03 13:38   ` [tip:locking/core] locking/atomic, arm64: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 08/18] locking/atomic: ia64: use " Mark Rutland
2019-06-03 13:39   ` [tip:locking/core] locking/atomic, ia64: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 09/18] locking/atomic: mips: use " Mark Rutland
2019-06-03 13:39   ` [tip:locking/core] locking/atomic, mips: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 10/18] locking/atomic: powerpc: use " Mark Rutland
2019-05-23 13:27   ` Michael Ellerman
2019-06-03 13:40   ` [tip:locking/core] locking/atomic, powerpc: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 11/18] locking/atomic: riscv: fix atomic64_sub_if_positive() offset argument Mark Rutland
2019-05-22 19:06   ` Palmer Dabbelt
2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Fix " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 12/18] locking/atomic: riscv: use s64 for atomic64 Mark Rutland
2019-05-22 19:06   ` Palmer Dabbelt
2019-05-23 10:23     ` Mark Rutland
2019-06-03 13:41   ` [tip:locking/core] locking/atomic, riscv: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 13/18] locking/atomic: s390: use " Mark Rutland
2019-06-03 13:42   ` [tip:locking/core] locking/atomic, s390: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 14/18] locking/atomic: sparc: use " Mark Rutland
2019-06-03 13:43   ` [tip:locking/core] locking/atomic, sparc: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 15/18] locking/atomic: x86: use " Mark Rutland
2019-06-03 13:44   ` [tip:locking/core] locking/atomic, x86: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 16/18] locking/atomic: use s64 for atomic64_t on 64-bit Mark Rutland
2019-06-03 13:44   ` [tip:locking/core] locking/atomic: Use " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 17/18] locking/atomic: crypto: nx: remove redundant casts Mark Rutland
2019-06-03 13:45   ` [tip:locking/core] locking/atomic, crypto/nx: Remove " tip-bot for Mark Rutland
2019-05-22 13:22 ` [PATCH 18/18] locking/atomic: s390/pci: remove " Mark Rutland
2019-06-03 13:46   ` [tip:locking/core] locking/atomic, s390/pci: Remove " tip-bot for Mark Rutland
2019-05-22 21:18 ` [PATCH 00/18] locking/atomic: atomic64 type cleanup Arnd Bergmann
2019-05-23 10:28   ` Mark Rutland
2019-05-23  8:30 ` Andrea Parri
2019-05-23 10:19   ` Mark Rutland
2019-05-23 11:20     ` Andrea Parri
2019-05-24 10:37     ` Peter Zijlstra
2019-05-24 11:18       ` Peter Zijlstra
2019-05-24 11:38         ` Greg KH
2019-05-24 11:42         ` Will Deacon
2019-05-24 11:52           ` Peter Zijlstra
2019-05-24 22:43             ` Andrea Parri
2019-05-28 10:47               ` Peter Zijlstra
2019-05-28 11:15                 ` Andrea Parri
2019-06-03 13:46             ` [tip:locking/core] Documentation/atomic_t.txt: Clarify pure non-rmw usage tip-bot for Peter Zijlstra
2019-06-06  8:44               ` Andrea Parri
2019-06-06  9:04                 ` Peter Zijlstra
2019-06-06  9:11                   ` Andrea Parri
