* [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations
@ 2017-03-28 16:15 Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 1/8] x86: remove unused atomic_inc_short() Dmitry Vyukov
                   ` (8 more replies)
  0 siblings, 9 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov

KASAN uses compiler instrumentation to intercept all memory accesses,
but it does not see memory accesses done in assembly code. One notable
user of assembly is atomic operations. Frequently, for example, an
atomic reference decrement is the last access to an object and a good
candidate for a racy use-after-free.

Atomic operations are defined in per-arch files, but KASAN
instrumentation is required for every arch that supports KASAN. Later
we will need similar hooks for KMSAN (uninitialized-use detector) and
KTSAN (data-race detector).

This change introduces wrappers around atomic operations that can be
used to add KASAN/KMSAN/KTSAN instrumentation across several archs,
and adds KASAN checks to them.

This series switches only the x86 arch to the wrappers; arm64 will be
switched later. We also plan to instrument bitops in a similar way.
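
As a rough sketch of the idea (simplified; not the exact code added by
this series), each instrumented wrapper checks the memory it is about
to access and then calls into the arch implementation, e.g.:

	static __always_inline void atomic_inc(atomic_t *v)
	{
		kasan_check_write(v, sizeof(*v));
		arch_atomic_inc(v);
	}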

Within a day it found its first bug:

BUG: KASAN: use-after-free in atomic_dec_and_test
arch/x86/include/asm/atomic.h:123 [inline] at addr ffff880079c30158
Write of size 4 by task syz-executor6/25698
CPU: 2 PID: 25698 Comm: syz-executor6 Not tainted 4.10.0+ #302
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
Call Trace:
 kasan_check_write+0x14/0x20 mm/kasan/kasan.c:344
 atomic_dec_and_test arch/x86/include/asm/atomic.h:123 [inline]
 put_task_struct include/linux/sched/task.h:93 [inline]
 put_ctx+0xcf/0x110 kernel/events/core.c:1131
 perf_event_release_kernel+0x3ad/0xc90 kernel/events/core.c:4322
 perf_release+0x37/0x50 kernel/events/core.c:4338
 __fput+0x332/0x800 fs/file_table.c:209
 ____fput+0x15/0x20 fs/file_table.c:245
 task_work_run+0x197/0x260 kernel/task_work.c:116
 exit_task_work include/linux/task_work.h:21 [inline]
 do_exit+0xb38/0x29c0 kernel/exit.c:880
 do_group_exit+0x149/0x420 kernel/exit.c:984
 get_signal+0x7e0/0x1820 kernel/signal.c:2318
 do_signal+0xd2/0x2190 arch/x86/kernel/signal.c:808
 exit_to_usermode_loop+0x200/0x2a0 arch/x86/entry/common.c:157
 syscall_return_slowpath arch/x86/entry/common.c:191 [inline]
 do_syscall_64+0x6fc/0x930 arch/x86/entry/common.c:286
 entry_SYSCALL64_slow_path+0x25/0x25
RIP: 0033:0x4458d9
RSP: 002b:00007f3f07187cf8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
RAX: fffffffffffffe00 RBX: 00000000007080c8 RCX: 00000000004458d9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000007080c8
RBP: 00000000007080a8 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f3f071889c0 R15: 00007f3f07188700
Object at ffff880079c30140, in cache task_struct size: 5376
Allocated:
PID = 25681
 kmem_cache_alloc_node+0x122/0x6f0 mm/slab.c:3662
 alloc_task_struct_node kernel/fork.c:153 [inline]
 dup_task_struct kernel/fork.c:495 [inline]
 copy_process.part.38+0x19c8/0x4aa0 kernel/fork.c:1560
 copy_process kernel/fork.c:1531 [inline]
 _do_fork+0x200/0x1010 kernel/fork.c:1994
 SYSC_clone kernel/fork.c:2104 [inline]
 SyS_clone+0x37/0x50 kernel/fork.c:2098
 do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281
 return_from_SYSCALL_64+0x0/0x7a
Freed:
PID = 25681
 __cache_free mm/slab.c:3514 [inline]
 kmem_cache_free+0x71/0x240 mm/slab.c:3774
 free_task_struct kernel/fork.c:158 [inline]
 free_task+0x151/0x1d0 kernel/fork.c:370
 copy_process.part.38+0x18e5/0x4aa0 kernel/fork.c:1931
 copy_process kernel/fork.c:1531 [inline]
 _do_fork+0x200/0x1010 kernel/fork.c:1994
 SYSC_clone kernel/fork.c:2104 [inline]
 SyS_clone+0x37/0x50 kernel/fork.c:2098
 do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281
 return_from_SYSCALL_64+0x0/0x7a

Dmitry Vyukov (8):
  x86: remove unused atomic_inc_short()
  x86: un-macro-ify atomic ops implementation
  x86: use long long for 64-bit atomic ops
  asm-generic: add atomic-instrumented.h
  x86: switch atomic.h to use atomic-instrumented.h
  kasan: allow kasan_check_read/write() to accept pointers to volatiles
  asm-generic: add KASAN instrumentation to atomic operations
  asm-generic, x86: add comments for atomic instrumentation

 arch/tile/lib/atomic_asm_32.S             |   3 +-
 arch/x86/include/asm/atomic.h             | 174 +++++++------
 arch/x86/include/asm/atomic64_32.h        | 153 ++++++-----
 arch/x86/include/asm/atomic64_64.h        | 155 ++++++-----
 arch/x86/include/asm/cmpxchg.h            |  14 +-
 arch/x86/include/asm/cmpxchg_32.h         |   8 +-
 arch/x86/include/asm/cmpxchg_64.h         |   4 +-
 include/asm-generic/atomic-instrumented.h | 417 ++++++++++++++++++++++++++++++
 include/linux/kasan-checks.h              |  10 +-
 include/linux/types.h                     |   2 +-
 mm/kasan/kasan.c                          |   4 +-
 11 files changed, 719 insertions(+), 225 deletions(-)
 create mode 100644 include/asm-generic/atomic-instrumented.h

-- 
2.12.2.564.g063fe858b8-goog


* [PATCH 1/8] x86: remove unused atomic_inc_short()
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15 ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, Thomas Gleixner, H. Peter Anvin

It is completely unused and implemented only on x86.
Remove it.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Suggested-by: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
Cc: kasan-dev@googlegroups.com
---
 arch/tile/lib/atomic_asm_32.S |  3 +--
 arch/x86/include/asm/atomic.h | 13 -------------
 2 files changed, 1 insertion(+), 15 deletions(-)

diff --git a/arch/tile/lib/atomic_asm_32.S b/arch/tile/lib/atomic_asm_32.S
index 1a70e6c0f259..94709ab41ed8 100644
--- a/arch/tile/lib/atomic_asm_32.S
+++ b/arch/tile/lib/atomic_asm_32.S
@@ -24,8 +24,7 @@
  * has an opportunity to return -EFAULT to the user if needed.
  * The 64-bit routines just return a "long long" with the value,
  * since they are only used from kernel space and don't expect to fault.
- * Support for 16-bit ops is included in the framework but we don't provide
- * any (x86_64 has an atomic_inc_short(), so we might want to some day).
+ * Support for 16-bit ops is included in the framework but we don't provide any.
  *
  * Note that the caller is advised to issue a suitable L1 or L2
  * prefetch on the address being manipulated to avoid extra stalls.
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index caa5798c92f4..33380b871463 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -246,19 +246,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 	return c;
 }
 
-/**
- * atomic_inc_short - increment of a short integer
- * @v: pointer to type int
- *
- * Atomically adds 1 to @v
- * Returns the new value of @u
- */
-static __always_inline short int atomic_inc_short(short int *v)
-{
-	asm(LOCK_PREFIX "addw $1, %0" : "+m" (*v));
-	return *v;
-}
-
 #ifdef CONFIG_X86_32
 # include <asm/atomic64_32.h>
 #else
-- 
2.12.2.564.g063fe858b8-goog


* [PATCH 2/8] x86: un-macro-ify atomic ops implementation
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 1/8] x86: remove unused atomic_inc_short() Dmitry Vyukov
@ 2017-03-28 16:15 ` Dmitry Vyukov
  2017-03-28 16:15   ` Dmitry Vyukov
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, Thomas Gleixner, H. Peter Anvin

CPP macros turn perfectly readable code into an unreadable,
unmaintainable mess. Ingo suggested writing the operations out
explicitly instead. Do this.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Suggested-by: Ingo Molnar <mingo@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
Cc: kasan-dev@googlegroups.com
---
 arch/x86/include/asm/atomic.h      | 80 ++++++++++++++++++++++++--------------
 arch/x86/include/asm/atomic64_32.h | 77 ++++++++++++++++++++++++------------
 arch/x86/include/asm/atomic64_64.h | 67 ++++++++++++++++++++-----------
 3 files changed, 148 insertions(+), 76 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 33380b871463..8d7f6e579be4 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -197,35 +197,57 @@ static inline int atomic_xchg(atomic_t *v, int new)
 	return xchg(&v->counter, new);
 }
 
-#define ATOMIC_OP(op)							\
-static inline void atomic_##op(int i, atomic_t *v)			\
-{									\
-	asm volatile(LOCK_PREFIX #op"l %1,%0"				\
-			: "+m" (v->counter)				\
-			: "ir" (i)					\
-			: "memory");					\
-}
-
-#define ATOMIC_FETCH_OP(op, c_op)					\
-static inline int atomic_fetch_##op(int i, atomic_t *v)			\
-{									\
-	int val = atomic_read(v);					\
-	do {								\
-	} while (!atomic_try_cmpxchg(v, &val, val c_op i));		\
-	return val;							\
-}
-
-#define ATOMIC_OPS(op, c_op)						\
-	ATOMIC_OP(op)							\
-	ATOMIC_FETCH_OP(op, c_op)
-
-ATOMIC_OPS(and, &)
-ATOMIC_OPS(or , |)
-ATOMIC_OPS(xor, ^)
-
-#undef ATOMIC_OPS
-#undef ATOMIC_FETCH_OP
-#undef ATOMIC_OP
+static inline void atomic_and(int i, atomic_t *v)
+{
+	asm volatile(LOCK_PREFIX "andl %1,%0"
+			: "+m" (v->counter)
+			: "ir" (i)
+			: "memory");
+}
+
+static inline int atomic_fetch_and(int i, atomic_t *v)
+{
+	int val = atomic_read(v);
+
+	do {
+	} while (!atomic_try_cmpxchg(v, &val, val & i));
+	return val;
+}
+
+static inline void atomic_or(int i, atomic_t *v)
+{
+	asm volatile(LOCK_PREFIX "orl %1,%0"
+			: "+m" (v->counter)
+			: "ir" (i)
+			: "memory");
+}
+
+static inline int atomic_fetch_or(int i, atomic_t *v)
+{
+	int val = atomic_read(v);
+
+	do {
+	} while (!atomic_try_cmpxchg(v, &val, val | i));
+	return val;
+}
+
+
+static inline void atomic_xor(int i, atomic_t *v)
+{
+	asm volatile(LOCK_PREFIX "xorl %1,%0"
+			: "+m" (v->counter)
+			: "ir" (i)
+			: "memory");
+}
+
+static inline int atomic_fetch_xor(int i, atomic_t *v)
+{
+	int val = atomic_read(v);
+
+	do {
+	} while (!atomic_try_cmpxchg(v, &val, val ^ i));
+	return val;
+}
 
 /**
  * __atomic_add_unless - add unless the number is already a given value
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index 71d7705fb303..f107fef7bfcc 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -312,37 +312,66 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
-#define ATOMIC64_OP(op, c_op)						\
-static inline void atomic64_##op(long long i, atomic64_t *v)		\
-{									\
-	long long old, c = 0;						\
-	while ((old = atomic64_cmpxchg(v, c, c c_op i)) != c)		\
-		c = old;						\
+static inline void atomic64_and(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+		c = old;
 }
 
-#define ATOMIC64_FETCH_OP(op, c_op)					\
-static inline long long atomic64_fetch_##op(long long i, atomic64_t *v)	\
-{									\
-	long long old, c = 0;						\
-	while ((old = atomic64_cmpxchg(v, c, c c_op i)) != c)		\
-		c = old;						\
-	return old;							\
+static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+		c = old;
+	return old;
 }
 
-ATOMIC64_FETCH_OP(add, +)
+static inline void atomic64_or(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
 
-#define atomic64_fetch_sub(i, v)	atomic64_fetch_add(-(i), (v))
+	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+		c = old;
+}
+
+static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+		c = old;
+	return old;
+}
 
-#define ATOMIC64_OPS(op, c_op)						\
-	ATOMIC64_OP(op, c_op)						\
-	ATOMIC64_FETCH_OP(op, c_op)
+static inline void atomic64_xor(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+		c = old;
+}
 
-ATOMIC64_OPS(and, &)
-ATOMIC64_OPS(or, |)
-ATOMIC64_OPS(xor, ^)
+static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+		c = old;
+	return old;
+}
 
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP
+static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+{
+	long long old, c = 0;
+
+	while ((old = atomic64_cmpxchg(v, c, c + i)) != c)
+		c = old;
+	return old;
+}
+
+#define atomic64_fetch_sub(i, v)	atomic64_fetch_add(-(i), (v))
 
 #endif /* _ASM_X86_ATOMIC64_32_H */
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 6189a433c9a9..8db8879a6d8c 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -226,34 +226,55 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 	return dec;
 }
 
-#define ATOMIC64_OP(op)							\
-static inline void atomic64_##op(long i, atomic64_t *v)			\
-{									\
-	asm volatile(LOCK_PREFIX #op"q %1,%0"				\
-			: "+m" (v->counter)				\
-			: "er" (i)					\
-			: "memory");					\
+static inline void atomic64_and(long i, atomic64_t *v)
+{
+	asm volatile(LOCK_PREFIX "andq %1,%0"
+			: "+m" (v->counter)
+			: "er" (i)
+			: "memory");
 }
 
-#define ATOMIC64_FETCH_OP(op, c_op)					\
-static inline long atomic64_fetch_##op(long i, atomic64_t *v)		\
-{									\
-	long val = atomic64_read(v);					\
-	do {								\
-	} while (!atomic64_try_cmpxchg(v, &val, val c_op i));		\
-	return val;							\
+static inline long atomic64_fetch_and(long i, atomic64_t *v)
+{
+	long val = atomic64_read(v);
+
+	do {
+	} while (!atomic64_try_cmpxchg(v, &val, val & i));
+	return val;
 }
 
-#define ATOMIC64_OPS(op, c_op)						\
-	ATOMIC64_OP(op)							\
-	ATOMIC64_FETCH_OP(op, c_op)
+static inline void atomic64_or(long i, atomic64_t *v)
+{
+	asm volatile(LOCK_PREFIX "orq %1,%0"
+			: "+m" (v->counter)
+			: "er" (i)
+			: "memory");
+}
 
-ATOMIC64_OPS(and, &)
-ATOMIC64_OPS(or, |)
-ATOMIC64_OPS(xor, ^)
+static inline long atomic64_fetch_or(long i, atomic64_t *v)
+{
+	long val = atomic64_read(v);
 
-#undef ATOMIC64_OPS
-#undef ATOMIC64_FETCH_OP
-#undef ATOMIC64_OP
+	do {
+	} while (!atomic64_try_cmpxchg(v, &val, val | i));
+	return val;
+}
+
+static inline void atomic64_xor(long i, atomic64_t *v)
+{
+	asm volatile(LOCK_PREFIX "xorq %1,%0"
+			: "+m" (v->counter)
+			: "er" (i)
+			: "memory");
+}
+
+static inline long atomic64_fetch_xor(long i, atomic64_t *v)
+{
+	long val = atomic64_read(v);
+
+	do {
+	} while (!atomic64_try_cmpxchg(v, &val, val ^ i));
+	return val;
+}
 
 #endif /* _ASM_X86_ATOMIC64_64_H */
-- 
2.12.2.564.g063fe858b8-goog


* [PATCH 3/8] x86: use long long for 64-bit atomic ops
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

Some 64-bit atomic operations use 'long long' as the operand/return
type (e.g. asm-generic/atomic64.h, arch/x86/include/asm/atomic64_32.h),
while others use 'long' (e.g. arch/x86/include/asm/atomic64_64.h).
This makes it impossible to write portable code. For example, there is
no format specifier that prints the result of atomic64_read() without
warnings. atomic64_try_cmpxchg() is almost impossible to use in a
portable fashion because it requires either 'long *' or 'long long *'
as its argument, depending on the arch.
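
As a hedged illustration (hypothetical caller, not part of this patch):
with 'long' on some configurations and 'long long' on others, no single
format specifier is warning-free everywhere:

	atomic64_t v = ATOMIC64_INIT(0);

	pr_info("%lld\n", atomic64_read(&v)); /* warns where it returns 'long' */
	pr_info("%ld\n", atomic64_read(&v));  /* warns where it returns 'long long' */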

Switch arch/x86/include/asm/atomic64_64.h to 'long long'.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 arch/x86/include/asm/atomic64_64.h | 54 +++++++++++++++++++-------------------
 include/linux/types.h              |  2 +-
 2 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index 8db8879a6d8c..a62982a2b534 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -16,7 +16,7 @@
  * Atomically reads the value of @v.
  * Doesn't imply a read memory barrier.
  */
-static inline long atomic64_read(const atomic64_t *v)
+static inline long long atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
@@ -28,7 +28,7 @@ static inline long atomic64_read(const atomic64_t *v)
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void atomic64_set(atomic64_t *v, long i)
+static inline void atomic64_set(atomic64_t *v, long long i)
 {
 	WRITE_ONCE(v->counter, i);
 }
@@ -40,7 +40,7 @@ static inline void atomic64_set(atomic64_t *v, long i)
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void atomic64_add(long i, atomic64_t *v)
+static __always_inline void atomic64_add(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -54,7 +54,7 @@ static __always_inline void atomic64_add(long i, atomic64_t *v)
  *
  * Atomically subtracts @i from @v.
  */
-static inline void atomic64_sub(long i, atomic64_t *v)
+static inline void atomic64_sub(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
@@ -70,7 +70,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline bool atomic64_sub_and_test(long i, atomic64_t *v)
+static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
 }
@@ -136,7 +136,7 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool atomic64_add_negative(long i, atomic64_t *v)
+static inline bool atomic64_add_negative(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
 }
@@ -148,22 +148,22 @@ static inline bool atomic64_add_negative(long i, atomic64_t *v)
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline long atomic64_add_return(long i, atomic64_t *v)
+static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
-static inline long atomic64_sub_return(long i, atomic64_t *v)
+static inline long long atomic64_sub_return(long long i, atomic64_t *v)
 {
 	return atomic64_add_return(-i, v);
 }
 
-static inline long atomic64_fetch_add(long i, atomic64_t *v)
+static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static inline long atomic64_fetch_sub(long i, atomic64_t *v)
+static inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, -i);
 }
@@ -171,18 +171,18 @@ static inline long atomic64_fetch_sub(long i, atomic64_t *v)
 #define atomic64_inc_return(v)  (atomic64_add_return(1, (v)))
 #define atomic64_dec_return(v)  (atomic64_sub_return(1, (v)))
 
-static inline long atomic64_cmpxchg(atomic64_t *v, long old, long new)
+static inline long long atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
 {
 	return cmpxchg(&v->counter, old, new);
 }
 
 #define atomic64_try_cmpxchg atomic64_try_cmpxchg
-static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long *old, long new)
+static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
 {
 	return try_cmpxchg(&v->counter, old, new);
 }
 
-static inline long atomic64_xchg(atomic64_t *v, long new)
+static inline long long atomic64_xchg(atomic64_t *v, long long new)
 {
 	return xchg(&v->counter, new);
 }
@@ -193,12 +193,12 @@ static inline long atomic64_xchg(atomic64_t *v, long new)
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
  *
- * Atomically adds @a to @v, so long as it was not @u.
+ * Atomically adds @a to @v, so long long as it was not @u.
  * Returns the old value of @v.
  */
-static inline bool atomic64_add_unless(atomic64_t *v, long a, long u)
+static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
 {
-	long c = atomic64_read(v);
+	long long c = atomic64_read(v);
 	do {
 		if (unlikely(c == u))
 			return false;
@@ -215,9 +215,9 @@ static inline bool atomic64_add_unless(atomic64_t *v, long a, long u)
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long atomic64_dec_if_positive(atomic64_t *v)
+static inline long long atomic64_dec_if_positive(atomic64_t *v)
 {
-	long dec, c = atomic64_read(v);
+	long long dec, c = atomic64_read(v);
 	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
@@ -226,7 +226,7 @@ static inline long atomic64_dec_if_positive(atomic64_t *v)
 	return dec;
 }
 
-static inline void atomic64_and(long i, atomic64_t *v)
+static inline void atomic64_and(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
 			: "+m" (v->counter)
@@ -234,16 +234,16 @@ static inline void atomic64_and(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long atomic64_fetch_and(long i, atomic64_t *v)
+static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
 {
-	long val = atomic64_read(v);
+	long long val = atomic64_read(v);
 
 	do {
 	} while (!atomic64_try_cmpxchg(v, &val, val & i));
 	return val;
 }
 
-static inline void atomic64_or(long i, atomic64_t *v)
+static inline void atomic64_or(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "orq %1,%0"
 			: "+m" (v->counter)
@@ -251,16 +251,16 @@ static inline void atomic64_or(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long atomic64_fetch_or(long i, atomic64_t *v)
+static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
 {
-	long val = atomic64_read(v);
+	long long val = atomic64_read(v);
 
 	do {
 	} while (!atomic64_try_cmpxchg(v, &val, val | i));
 	return val;
 }
 
-static inline void atomic64_xor(long i, atomic64_t *v)
+static inline void atomic64_xor(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorq %1,%0"
 			: "+m" (v->counter)
@@ -268,9 +268,9 @@ static inline void atomic64_xor(long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long atomic64_fetch_xor(long i, atomic64_t *v)
+static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
 {
-	long val = atomic64_read(v);
+	long long val = atomic64_read(v);
 
 	do {
 	} while (!atomic64_try_cmpxchg(v, &val, val ^ i));
diff --git a/include/linux/types.h b/include/linux/types.h
index 1e7bd24848fc..569fc6db1bd5 100644
--- a/include/linux/types.h
+++ b/include/linux/types.h
@@ -177,7 +177,7 @@ typedef struct {
 
 #ifdef CONFIG_64BIT
 typedef struct {
-	long counter;
+	long long counter;
 } atomic64_t;
 #endif
 
-- 
2.12.2.564.g063fe858b8-goog


* [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

The new header allows wrapping per-arch atomic operations and adding
common functionality to all of them.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 include/asm-generic/atomic-instrumented.h | 319 ++++++++++++++++++++++++++++++
 1 file changed, 319 insertions(+)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
new file mode 100644
index 000000000000..fd483115d4c6
--- /dev/null
+++ b/include/asm-generic/atomic-instrumented.h
@@ -0,0 +1,319 @@
+#ifndef _LINUX_ATOMIC_INSTRUMENTED_H
+#define _LINUX_ATOMIC_INSTRUMENTED_H
+
+static __always_inline int atomic_read(const atomic_t *v)
+{
+	return arch_atomic_read(v);
+}
+
+static __always_inline long long atomic64_read(const atomic64_t *v)
+{
+	return arch_atomic64_read(v);
+}
+
+static __always_inline void atomic_set(atomic_t *v, int i)
+{
+	arch_atomic_set(v, i);
+}
+
+static __always_inline void atomic64_set(atomic64_t *v, long long i)
+{
+	arch_atomic64_set(v, i);
+}
+
+static __always_inline int atomic_xchg(atomic_t *v, int i)
+{
+	return arch_atomic_xchg(v, i);
+}
+
+static __always_inline long long atomic64_xchg(atomic64_t *v, long long i)
+{
+	return arch_atomic64_xchg(v, i);
+}
+
+static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return arch_atomic_cmpxchg(v, old, new);
+}
+
+static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
+						  long long new)
+{
+	return arch_atomic64_cmpxchg(v, old, new);
+}
+
+#ifdef arch_atomic_try_cmpxchg
+#define atomic_try_cmpxchg atomic_try_cmpxchg
+static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+{
+	return arch_atomic_try_cmpxchg(v, old, new);
+}
+#endif
+
+#ifdef arch_atomic64_try_cmpxchg
+#define atomic64_try_cmpxchg atomic64_try_cmpxchg
+static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old,
+						 long long new)
+{
+	return arch_atomic64_try_cmpxchg(v, old, new);
+}
+#endif
+
+static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
+{
+	return __arch_atomic_add_unless(v, a, u);
+}
+
+
+static __always_inline bool atomic64_add_unless(atomic64_t *v, long long a,
+						long long u)
+{
+	return arch_atomic64_add_unless(v, a, u);
+}
+
+static __always_inline void atomic_inc(atomic_t *v)
+{
+	arch_atomic_inc(v);
+}
+
+static __always_inline void atomic64_inc(atomic64_t *v)
+{
+	arch_atomic64_inc(v);
+}
+
+static __always_inline void atomic_dec(atomic_t *v)
+{
+	arch_atomic_dec(v);
+}
+
+static __always_inline void atomic64_dec(atomic64_t *v)
+{
+	arch_atomic64_dec(v);
+}
+
+static __always_inline void atomic_add(int i, atomic_t *v)
+{
+	arch_atomic_add(i, v);
+}
+
+static __always_inline void atomic64_add(long long i, atomic64_t *v)
+{
+	arch_atomic64_add(i, v);
+}
+
+static __always_inline void atomic_sub(int i, atomic_t *v)
+{
+	arch_atomic_sub(i, v);
+}
+
+static __always_inline void atomic64_sub(long long i, atomic64_t *v)
+{
+	arch_atomic64_sub(i, v);
+}
+
+static __always_inline void atomic_and(int i, atomic_t *v)
+{
+	arch_atomic_and(i, v);
+}
+
+static __always_inline void atomic64_and(long long i, atomic64_t *v)
+{
+	arch_atomic64_and(i, v);
+}
+
+static __always_inline void atomic_or(int i, atomic_t *v)
+{
+	arch_atomic_or(i, v);
+}
+
+static __always_inline void atomic64_or(long long i, atomic64_t *v)
+{
+	arch_atomic64_or(i, v);
+}
+
+static __always_inline void atomic_xor(int i, atomic_t *v)
+{
+	arch_atomic_xor(i, v);
+}
+
+static __always_inline void atomic64_xor(long long i, atomic64_t *v)
+{
+	arch_atomic64_xor(i, v);
+}
+
+static __always_inline int atomic_inc_return(atomic_t *v)
+{
+	return arch_atomic_inc_return(v);
+}
+
+static __always_inline long long atomic64_inc_return(atomic64_t *v)
+{
+	return arch_atomic64_inc_return(v);
+}
+
+static __always_inline int atomic_dec_return(atomic_t *v)
+{
+	return arch_atomic_dec_return(v);
+}
+
+static __always_inline long long atomic64_dec_return(atomic64_t *v)
+{
+	return arch_atomic64_dec_return(v);
+}
+
+static __always_inline long long atomic64_inc_not_zero(atomic64_t *v)
+{
+	return arch_atomic64_inc_not_zero(v);
+}
+
+static __always_inline long long atomic64_dec_if_positive(atomic64_t *v)
+{
+	return arch_atomic64_dec_if_positive(v);
+}
+
+static __always_inline bool atomic_dec_and_test(atomic_t *v)
+{
+	return arch_atomic_dec_and_test(v);
+}
+
+static __always_inline bool atomic64_dec_and_test(atomic64_t *v)
+{
+	return arch_atomic64_dec_and_test(v);
+}
+
+static __always_inline bool atomic_inc_and_test(atomic_t *v)
+{
+	return arch_atomic_inc_and_test(v);
+}
+
+static __always_inline bool atomic64_inc_and_test(atomic64_t *v)
+{
+	return arch_atomic64_inc_and_test(v);
+}
+
+static __always_inline int atomic_add_return(int i, atomic_t *v)
+{
+	return arch_atomic_add_return(i, v);
+}
+
+static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
+{
+	return arch_atomic64_add_return(i, v);
+}
+
+static __always_inline int atomic_sub_return(int i, atomic_t *v)
+{
+	return arch_atomic_sub_return(i, v);
+}
+
+static __always_inline long long atomic64_sub_return(long long i, atomic64_t *v)
+{
+	return arch_atomic64_sub_return(i, v);
+}
+
+static __always_inline int atomic_fetch_add(int i, atomic_t *v)
+{
+	return arch_atomic_fetch_add(i, v);
+}
+
+static __always_inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+{
+	return arch_atomic64_fetch_add(i, v);
+}
+
+static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
+{
+	return arch_atomic_fetch_sub(i, v);
+}
+
+static __always_inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
+{
+	return arch_atomic64_fetch_sub(i, v);
+}
+
+static __always_inline int atomic_fetch_and(int i, atomic_t *v)
+{
+	return arch_atomic_fetch_and(i, v);
+}
+
+static __always_inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+{
+	return arch_atomic64_fetch_and(i, v);
+}
+
+static __always_inline int atomic_fetch_or(int i, atomic_t *v)
+{
+	return arch_atomic_fetch_or(i, v);
+}
+
+static __always_inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+{
+	return arch_atomic64_fetch_or(i, v);
+}
+
+static __always_inline int atomic_fetch_xor(int i, atomic_t *v)
+{
+	return arch_atomic_fetch_xor(i, v);
+}
+
+static __always_inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+{
+	return arch_atomic64_fetch_xor(i, v);
+}
+
+static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
+{
+	return arch_atomic_sub_and_test(i, v);
+}
+
+static __always_inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
+{
+	return arch_atomic64_sub_and_test(i, v);
+}
+
+static __always_inline bool atomic_add_negative(int i, atomic_t *v)
+{
+	return arch_atomic_add_negative(i, v);
+}
+
+static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
+{
+	return arch_atomic64_add_negative(i, v);
+}
+
+#define cmpxchg(ptr, old, new)				\
+({							\
+	arch_cmpxchg((ptr), (old), (new));		\
+})
+
+#define sync_cmpxchg(ptr, old, new)			\
+({							\
+	arch_sync_cmpxchg((ptr), (old), (new));		\
+})
+
+#define cmpxchg_local(ptr, old, new)			\
+({							\
+	arch_cmpxchg_local((ptr), (old), (new));	\
+})
+
+#define cmpxchg64(ptr, old, new)			\
+({							\
+	arch_cmpxchg64((ptr), (old), (new));		\
+})
+
+#define cmpxchg64_local(ptr, old, new)			\
+({							\
+	arch_cmpxchg64_local((ptr), (old), (new));	\
+})
+
+#define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
+({									\
+	arch_cmpxchg_double((p1), (p2), (o1), (o2), (n1), (n2));	\
+})
+
+#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2)			\
+({									\
+	arch_cmpxchg_double_local((p1), (p2), (o1), (o2), (n1), (n2));	\
+})
+
+#endif /* _LINUX_ATOMIC_INSTRUMENTED_H */
-- 
2.12.2.564.g063fe858b8-goog


* [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

Add an arch_ prefix to all atomic operations and include
<asm-generic/atomic-instrumented.h>. This will allow adding KASAN
instrumentation to all atomic ops.

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 arch/x86/include/asm/atomic.h      | 110 ++++++++++++++++++++-----------------
 arch/x86/include/asm/atomic64_32.h | 106 +++++++++++++++++------------------
 arch/x86/include/asm/atomic64_64.h | 110 ++++++++++++++++++-------------------
 arch/x86/include/asm/cmpxchg.h     |  14 ++---
 arch/x86/include/asm/cmpxchg_32.h  |   8 +--
 arch/x86/include/asm/cmpxchg_64.h  |   4 +-
 6 files changed, 181 insertions(+), 171 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 8d7f6e579be4..92dd59f24eba 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -16,36 +16,42 @@
 #define ATOMIC_INIT(i)	{ (i) }
 
 /**
- * atomic_read - read atomic variable
+ * arch_atomic_read - read atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically reads the value of @v.
  */
-static __always_inline int atomic_read(const atomic_t *v)
+static __always_inline int arch_atomic_read(const atomic_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /**
- * atomic_set - set atomic variable
+ * arch_atomic_set - set atomic variable
  * @v: pointer of type atomic_t
  * @i: required value
  *
  * Atomically sets the value of @v to @i.
  */
-static __always_inline void atomic_set(atomic_t *v, int i)
+static __always_inline void arch_atomic_set(atomic_t *v, int i)
 {
+	/*
+	 * We could use WRITE_ONCE_NOCHECK() if it exists, similar to
+	 * READ_ONCE_NOCHECK() in arch_atomic_read(). But there is no such
+	 * thing at the moment, and introducing it for this case is not
+	 * worth it.
+	 */
 	WRITE_ONCE(v->counter, i);
 }
 
 /**
- * atomic_add - add integer to atomic variable
+ * arch_atomic_add - add integer to atomic variable
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void atomic_add(int i, atomic_t *v)
+static __always_inline void arch_atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
@@ -53,13 +59,13 @@ static __always_inline void atomic_add(int i, atomic_t *v)
 }
 
 /**
- * atomic_sub - subtract integer from atomic variable
+ * arch_atomic_sub - subtract integer from atomic variable
  * @i: integer value to subtract
  * @v: pointer of type atomic_t
  *
  * Atomically subtracts @i from @v.
  */
-static __always_inline void atomic_sub(int i, atomic_t *v)
+static __always_inline void arch_atomic_sub(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
 		     : "+m" (v->counter)
@@ -67,7 +73,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
 }
 
 /**
- * atomic_sub_and_test - subtract value from variable and test result
+ * arch_atomic_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer of type atomic_t
  *
@@ -75,63 +81,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
+static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
 }
 
 /**
- * atomic_inc - increment atomic variable
+ * arch_atomic_inc - increment atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1.
  */
-static __always_inline void atomic_inc(atomic_t *v)
+static __always_inline void arch_atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
 		     : "+m" (v->counter));
 }
 
 /**
- * atomic_dec - decrement atomic variable
+ * arch_atomic_dec - decrement atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically decrements @v by 1.
  */
-static __always_inline void atomic_dec(atomic_t *v)
+static __always_inline void arch_atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
 		     : "+m" (v->counter));
 }
 
 /**
- * atomic_dec_and_test - decrement and test
+ * arch_atomic_dec_and_test - decrement and test
  * @v: pointer of type atomic_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static __always_inline bool atomic_dec_and_test(atomic_t *v)
+static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
 }
 
 /**
- * atomic_inc_and_test - increment and test
+ * arch_atomic_inc_and_test - increment and test
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static __always_inline bool atomic_inc_and_test(atomic_t *v)
+static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
 }
 
 /**
- * atomic_add_negative - add and test if negative
+ * arch_atomic_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
@@ -139,65 +145,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static __always_inline bool atomic_add_negative(int i, atomic_t *v)
+static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
 }
 
 /**
- * atomic_add_return - add integer and return
+ * arch_atomic_add_return - add integer and return
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline int atomic_add_return(int i, atomic_t *v)
+static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
 /**
- * atomic_sub_return - subtract integer and return
+ * arch_atomic_sub_return - subtract integer and return
  * @v: pointer of type atomic_t
  * @i: integer value to subtract
  *
  * Atomically subtracts @i from @v and returns @v - @i
  */
-static __always_inline int atomic_sub_return(int i, atomic_t *v)
+static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
 {
-	return atomic_add_return(-i, v);
+	return arch_atomic_add_return(-i, v);
 }
 
-#define atomic_inc_return(v)  (atomic_add_return(1, v))
-#define atomic_dec_return(v)  (atomic_sub_return(1, v))
+#define arch_atomic_inc_return(v)  (arch_atomic_add_return(1, v))
+#define arch_atomic_dec_return(v)  (arch_atomic_sub_return(1, v))
 
-static __always_inline int atomic_fetch_add(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
-	return cmpxchg(&v->counter, old, new);
+	return arch_cmpxchg(&v->counter, old, new);
 }
 
-#define atomic_try_cmpxchg atomic_try_cmpxchg
-static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
-	return try_cmpxchg(&v->counter, old, new);
+	return arch_try_cmpxchg(&v->counter, old, new);
 }
 
-static inline int atomic_xchg(atomic_t *v, int new)
+static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
 	return xchg(&v->counter, new);
 }
 
-static inline void atomic_and(int i, atomic_t *v)
+static inline void arch_atomic_and(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "andl %1,%0"
 			: "+m" (v->counter)
@@ -205,16 +211,16 @@ static inline void atomic_and(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_and(int i, atomic_t *v)
+static inline int arch_atomic_fetch_and(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val & i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val & i));
 	return val;
 }
 
-static inline void atomic_or(int i, atomic_t *v)
+static inline void arch_atomic_or(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "orl %1,%0"
 			: "+m" (v->counter)
@@ -222,17 +228,17 @@ static inline void atomic_or(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_or(int i, atomic_t *v)
+static inline int arch_atomic_fetch_or(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val | i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val | i));
 	return val;
 }
 
 
-static inline void atomic_xor(int i, atomic_t *v)
+static inline void arch_atomic_xor(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorl %1,%0"
 			: "+m" (v->counter)
@@ -240,17 +246,17 @@ static inline void atomic_xor(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_xor(int i, atomic_t *v)
+static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val ^ i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val ^ i));
 	return val;
 }
 
 /**
- * __atomic_add_unless - add unless the number is already a given value
+ * __arch_atomic_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -258,13 +264,13 @@ static inline int atomic_fetch_xor(int i, atomic_t *v)
  * Atomically adds @a to @v, so long as @v was not already @u.
  * Returns the old value of @v.
  */
-static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
+static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
-	int c = atomic_read(v);
+	int c = arch_atomic_read(v);
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!atomic_try_cmpxchg(v, &c, c + a));
+	} while (!arch_atomic_try_cmpxchg(v, &c, c + a));
 	return c;
 }
 
@@ -274,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 # include <asm/atomic64_64.h>
 #endif
 
+#include <asm-generic/atomic-instrumented.h>
+
 #endif /* _ASM_X86_ATOMIC_H */
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index f107fef7bfcc..8501e4fc5054 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -61,7 +61,7 @@ ATOMIC64_DECL(add_unless);
 #undef ATOMIC64_EXPORT
 
 /**
- * atomic64_cmpxchg - cmpxchg atomic64 variable
+ * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
  * @v: pointer to type atomic64_t
  * @o: expected value
  * @n: new value
@@ -70,20 +70,21 @@ ATOMIC64_DECL(add_unless);
  * the old value.
  */
 
-static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
+					      long long n)
 {
-	return cmpxchg64(&v->counter, o, n);
+	return arch_cmpxchg64(&v->counter, o, n);
 }
 
 /**
- * atomic64_xchg - xchg atomic64 variable
+ * arch_atomic64_xchg - xchg atomic64 variable
  * @v: pointer to type atomic64_t
  * @n: value to assign
  *
  * Atomically xchgs the value of @v to @n and returns
  * the old value.
  */
-static inline long long atomic64_xchg(atomic64_t *v, long long n)
+static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
 {
 	long long o;
 	unsigned high = (unsigned)(n >> 32);
@@ -95,13 +96,13 @@ static inline long long atomic64_xchg(atomic64_t *v, long long n)
 }
 
 /**
- * atomic64_set - set atomic64 variable
+ * arch_atomic64_set - set atomic64 variable
  * @v: pointer to type atomic64_t
  * @i: value to assign
  *
  * Atomically sets the value of @v to @n.
  */
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, long long i)
 {
 	unsigned high = (unsigned)(i >> 32);
 	unsigned low = (unsigned)i;
@@ -111,12 +112,12 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 }
 
 /**
- * atomic64_read - read atomic64 variable
+ * arch_atomic64_read - read atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically reads the value of @v and returns it.
  */
-static inline long long atomic64_read(const atomic64_t *v)
+static inline long long arch_atomic64_read(const atomic64_t *v)
 {
 	long long r;
 	alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
@@ -124,13 +125,13 @@ static inline long long atomic64_read(const atomic64_t *v)
  }
 
 /**
- * atomic64_add_return - add and return
+ * arch_atomic64_add_return - add and return
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v and returns @i + *@v
  */
-static inline long long atomic64_add_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 {
 	alternative_atomic64(add_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -141,7 +142,7 @@ static inline long long atomic64_add_return(long long i, atomic64_t *v)
 /*
  * Other variants with different arithmetic operators:
  */
-static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 {
 	alternative_atomic64(sub_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -149,7 +150,7 @@ static inline long long atomic64_sub_return(long long i, atomic64_t *v)
 	return i;
 }
 
-static inline long long atomic64_inc_return(atomic64_t *v)
+static inline long long arch_atomic64_inc_return(atomic64_t *v)
 {
 	long long a;
 	alternative_atomic64(inc_return, "=&A" (a),
@@ -157,7 +158,7 @@ static inline long long atomic64_inc_return(atomic64_t *v)
 	return a;
 }
 
-static inline long long atomic64_dec_return(atomic64_t *v)
+static inline long long arch_atomic64_dec_return(atomic64_t *v)
 {
 	long long a;
 	alternative_atomic64(dec_return, "=&A" (a),
@@ -166,13 +167,13 @@ static inline long long atomic64_dec_return(atomic64_t *v)
 }
 
 /**
- * atomic64_add - add integer to atomic64 variable
+ * arch_atomic64_add - add integer to atomic64 variable
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v.
  */
-static inline long long atomic64_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_add(long long i, atomic64_t *v)
 {
 	__alternative_atomic64(add, add_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -181,13 +182,13 @@ static inline long long atomic64_add(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub - subtract the atomic64 variable
+ * arch_atomic64_sub - subtract the atomic64 variable
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
  * Atomically subtracts @i from @v.
  */
-static inline long long atomic64_sub(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
 {
 	__alternative_atomic64(sub, sub_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -196,7 +197,7 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub_and_test - subtract value from variable and test result
+ * arch_atomic64_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
@@ -204,46 +205,46 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline int atomic64_sub_and_test(long long i, atomic64_t *v)
+static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v)
 {
-	return atomic64_sub_return(i, v) == 0;
+	return arch_atomic64_sub_return(i, v) == 0;
 }
 
 /**
- * atomic64_inc - increment atomic64 variable
+ * arch_atomic64_inc - increment atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1.
  */
-static inline void atomic64_inc(atomic64_t *v)
+static inline void arch_atomic64_inc(atomic64_t *v)
 {
 	__alternative_atomic64(inc, inc_return, /* no output */,
 			       "S" (v) : "memory", "eax", "ecx", "edx");
 }
 
 /**
- * atomic64_dec - decrement atomic64 variable
+ * arch_atomic64_dec - decrement atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1.
  */
-static inline void atomic64_dec(atomic64_t *v)
+static inline void arch_atomic64_dec(atomic64_t *v)
 {
 	__alternative_atomic64(dec, dec_return, /* no output */,
 			       "S" (v) : "memory", "eax", "ecx", "edx");
 }
 
 /**
- * atomic64_dec_and_test - decrement and test
+ * arch_atomic64_dec_and_test - decrement and test
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static inline int atomic64_dec_and_test(atomic64_t *v)
+static inline int arch_atomic64_dec_and_test(atomic64_t *v)
 {
-	return atomic64_dec_return(v) == 0;
+	return arch_atomic64_dec_return(v) == 0;
 }
 
 /**
@@ -254,13 +255,13 @@ static inline int atomic64_dec_and_test(atomic64_t *v)
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static inline int atomic64_inc_and_test(atomic64_t *v)
+static inline int arch_atomic64_inc_and_test(atomic64_t *v)
 {
-	return atomic64_inc_return(v) == 0;
+	return arch_atomic64_inc_return(v) == 0;
 }
 
 /**
- * atomic64_add_negative - add and test if negative
+ * arch_atomic64_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
@@ -268,13 +269,13 @@ static inline int atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline int atomic64_add_negative(long long i, atomic64_t *v)
+static inline int arch_atomic64_add_negative(long long i, atomic64_t *v)
 {
-	return atomic64_add_return(i, v) < 0;
+	return arch_atomic64_add_return(i, v) < 0;
 }
 
 /**
- * atomic64_add_unless - add unless the number is a given value
+ * arch_atomic64_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -282,7 +283,8 @@ static inline int atomic64_add_negative(long long i, atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns non-zero if the add was done, zero otherwise.
  */
-static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
+					   long long u)
 {
 	unsigned low = (unsigned)u;
 	unsigned high = (unsigned)(u >> 32);
@@ -293,7 +295,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 }
 
 
-static inline int atomic64_inc_not_zero(atomic64_t *v)
+static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
 {
 	int r;
 	alternative_atomic64(inc_not_zero, "=&a" (r),
@@ -301,7 +303,7 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
 	return r;
 }
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	long long r;
 	alternative_atomic64(dec_if_positive, "=&A" (r),
@@ -312,66 +314,66 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
-static inline void atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 	return old;
 }
 
-static inline void atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 	return old;
 }
 
-static inline void atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 	return old;
 }
 
-static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c + i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
 		c = old;
 	return old;
 }
 
-#define atomic64_fetch_sub(i, v)	atomic64_fetch_add(-(i), (v))
+#define arch_atomic64_fetch_sub(i, v)	arch_atomic64_fetch_add(-(i), (v))
 
 #endif /* _ASM_X86_ATOMIC64_32_H */
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index a62982a2b534..6b6873e4d4e8 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -10,37 +10,37 @@
 #define ATOMIC64_INIT(i)	{ (i) }
 
 /**
- * atomic64_read - read atomic64 variable
+ * arch_atomic64_read - read atomic64 variable
  * @v: pointer of type atomic64_t
  *
  * Atomically reads the value of @v.
  * Doesn't imply a read memory barrier.
  */
-static inline long long atomic64_read(const atomic64_t *v)
+static inline long long arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /**
- * atomic64_set - set atomic64 variable
+ * arch_atomic64_set - set atomic64 variable
  * @v: pointer to type atomic64_t
  * @i: required value
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, long long i)
 {
 	WRITE_ONCE(v->counter, i);
 }
 
 /**
- * atomic64_add - add integer to atomic64 variable
+ * arch_atomic64_add - add integer to atomic64 variable
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void atomic64_add(long long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -48,13 +48,13 @@ static __always_inline void atomic64_add(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub - subtract the atomic64 variable
+ * arch_atomic64_sub - subtract the atomic64 variable
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
  * Atomically subtracts @i from @v.
  */
-static inline void atomic64_sub(long long i, atomic64_t *v)
+static inline void arch_atomic64_sub(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
@@ -62,7 +62,7 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub_and_test - subtract value from variable and test result
+ * arch_atomic64_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
@@ -70,18 +70,18 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
 }
 
 /**
- * atomic64_inc - increment atomic64 variable
+ * arch_atomic64_inc - increment atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1.
  */
-static __always_inline void atomic64_inc(atomic64_t *v)
+static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "incq %0"
 		     : "=m" (v->counter)
@@ -89,12 +89,12 @@ static __always_inline void atomic64_inc(atomic64_t *v)
 }
 
 /**
- * atomic64_dec - decrement atomic64 variable
+ * arch_atomic64_dec - decrement atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1.
  */
-static __always_inline void atomic64_dec(atomic64_t *v)
+static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "decq %0"
 		     : "=m" (v->counter)
@@ -102,33 +102,33 @@ static __always_inline void atomic64_dec(atomic64_t *v)
 }
 
 /**
- * atomic64_dec_and_test - decrement and test
+ * arch_atomic64_dec_and_test - decrement and test
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static inline bool atomic64_dec_and_test(atomic64_t *v)
+static inline bool arch_atomic64_dec_and_test(atomic64_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
 }
 
 /**
- * atomic64_inc_and_test - increment and test
+ * arch_atomic64_inc_and_test - increment and test
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static inline bool atomic64_inc_and_test(atomic64_t *v)
+static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
 }
 
 /**
- * atomic64_add_negative - add and test if negative
+ * arch_atomic64_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
@@ -136,59 +136,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool atomic64_add_negative(long long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
 }
 
 /**
- * atomic64_add_return - add and return
+ * arch_atomic64_add_return - add and return
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
+static __always_inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
-static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 {
-	return atomic64_add_return(-i, v);
+	return arch_atomic64_add_return(-i, v);
 }
 
-static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_sub(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-#define atomic64_inc_return(v)  (atomic64_add_return(1, (v)))
-#define atomic64_dec_return(v)  (atomic64_sub_return(1, (v)))
+#define arch_atomic64_inc_return(v)  (arch_atomic64_add_return(1, (v)))
+#define arch_atomic64_dec_return(v)  (arch_atomic64_sub_return(1, (v)))
 
-static inline long long atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
+static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
 {
-	return cmpxchg(&v->counter, old, new);
+	return arch_cmpxchg(&v->counter, old, new);
 }
 
-#define atomic64_try_cmpxchg atomic64_try_cmpxchg
-static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
+#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
 {
-	return try_cmpxchg(&v->counter, old, new);
+	return arch_try_cmpxchg(&v->counter, old, new);
 }
 
-static inline long long atomic64_xchg(atomic64_t *v, long long new)
+static inline long long arch_atomic64_xchg(atomic64_t *v, long long new)
 {
 	return xchg(&v->counter, new);
 }
 
 /**
- * atomic64_add_unless - add unless the number is a given value
+ * arch_atomic64_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -196,37 +196,37 @@ static inline long long atomic64_xchg(atomic64_t *v, long long new)
 * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
+static inline bool arch_atomic64_add_unless(atomic64_t *v, long long a, long long u)
 {
-	long long c = atomic64_read(v);
+	long long c = arch_atomic64_read(v);
 	do {
 		if (unlikely(c == u))
 			return false;
-	} while (!atomic64_try_cmpxchg(v, &c, c + a));
+	} while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
 	return true;
 }
 
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0)
 
 /*
- * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic_t
  *
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long dec, c = atomic64_read(v);
+	long long dec, c = arch_atomic64_read(v);
 	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
 			break;
-	} while (!atomic64_try_cmpxchg(v, &c, dec));
+	} while (!arch_atomic64_try_cmpxchg(v, &c, dec));
 	return dec;
 }
 
-static inline void atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
 			: "+m" (v->counter)
@@ -234,16 +234,16 @@ static inline void atomic64_and(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val & i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
 	return val;
 }
 
-static inline void atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "orq %1,%0"
 			: "+m" (v->counter)
@@ -251,16 +251,16 @@ static inline void atomic64_or(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val | i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
 	return val;
 }
 
-static inline void atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorq %1,%0"
 			: "+m" (v->counter)
@@ -268,12 +268,12 @@ static inline void atomic64_xor(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val ^ i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
 	return val;
 }
 
diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index fb961db51a2a..b4e70a0b1238 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -144,20 +144,20 @@ extern void __add_wrong_size(void)
 # include <asm/cmpxchg_64.h>
 #endif
 
-#define cmpxchg(ptr, old, new)						\
+#define arch_cmpxchg(ptr, old, new)					\
 	__cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define sync_cmpxchg(ptr, old, new)					\
+#define arch_sync_cmpxchg(ptr, old, new)				\
 	__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define cmpxchg_local(ptr, old, new)					\
+#define arch_cmpxchg_local(ptr, old, new)				\
 	__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
 
 
 #define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock)		\
 ({									\
 	bool success;							\
-	__typeof__(_ptr) _old = (_pold);				\
+	__typeof__(_pold) _old = (_pold);				\
 	__typeof__(*(_ptr)) __old = *_old;				\
 	__typeof__(*(_ptr)) __new = (_new);				\
 	switch (size) {							\
@@ -219,7 +219,7 @@ extern void __add_wrong_size(void)
 #define __try_cmpxchg(ptr, pold, new, size)				\
 	__raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
 
-#define try_cmpxchg(ptr, pold, new)					\
+#define arch_try_cmpxchg(ptr, pold, new)				\
 	__try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
 
 /*
@@ -248,10 +248,10 @@ extern void __add_wrong_size(void)
 	__ret;								\
 })
 
-#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(LOCK_PREFIX, p1, p2, o1, o2, n1, n2)
 
-#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(, p1, p2, o1, o2, n1, n2)
 
 #endif	/* ASM_X86_CMPXCHG_H */
diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
index e4959d023af8..d897291d2bf9 100644
--- a/arch/x86/include/asm/cmpxchg_32.h
+++ b/arch/x86/include/asm/cmpxchg_32.h
@@ -35,10 +35,10 @@ static inline void set_64bit(volatile u64 *ptr, u64 value)
 }
 
 #ifdef CONFIG_X86_CMPXCHG64
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 	((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
 					 (unsigned long long)(n)))
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 	((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
 					       (unsigned long long)(n)))
 #endif
@@ -75,7 +75,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
  * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
  */
 
-#define cmpxchg64(ptr, o, n)					\
+#define arch_cmpxchg64(ptr, o, n)				\
 ({								\
 	__typeof__(*(ptr)) __ret;				\
 	__typeof__(*(ptr)) __old = (o);				\
@@ -92,7 +92,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
 	__ret; })
 
 
-#define cmpxchg64_local(ptr, o, n)				\
+#define arch_cmpxchg64_local(ptr, o, n)				\
 ({								\
 	__typeof__(*(ptr)) __ret;				\
 	__typeof__(*(ptr)) __old = (o);				\
diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index caa23a34c963..fafaebacca2d 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -6,13 +6,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
 	*ptr = val;
 }
 
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg((ptr), (o), (n));					\
 })
 
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_local((ptr), (o), (n));					\
-- 
2.12.2.564.g063fe858b8-goog

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
@ 2017-03-28 16:15   ` Dmitry Vyukov
  0 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

Add the arch_ prefix to all atomic operations and include
<asm-generic/atomic-instrumented.h>. This will allow us to add
KASAN instrumentation to all atomic ops.
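
For reference, the generic wrappers in <asm-generic/atomic-instrumented.h>
are expected to look roughly like this (illustrative sketch only, not the
final header; kasan_check_read()/kasan_check_write() are the KASAN check
hooks used by this series):

	static __always_inline int atomic_read(const atomic_t *v)
	{
		/* Check the access before forwarding to the arch op. */
		kasan_check_read(v, sizeof(*v));
		return arch_atomic_read(v);
	}

	static __always_inline void atomic_set(atomic_t *v, int i)
	{
		kasan_check_write(v, sizeof(*v));
		arch_atomic_set(v, i);
	}

Callers continue to use the unprefixed atomic_*() names; only the arch
implementations are renamed.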

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 arch/x86/include/asm/atomic.h      | 110 ++++++++++++++++++++-----------------
 arch/x86/include/asm/atomic64_32.h | 106 +++++++++++++++++------------------
 arch/x86/include/asm/atomic64_64.h | 110 ++++++++++++++++++-------------------
 arch/x86/include/asm/cmpxchg.h     |  14 ++---
 arch/x86/include/asm/cmpxchg_32.h  |   8 +--
 arch/x86/include/asm/cmpxchg_64.h  |   4 +-
 6 files changed, 181 insertions(+), 171 deletions(-)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 8d7f6e579be4..92dd59f24eba 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -16,36 +16,42 @@
 #define ATOMIC_INIT(i)	{ (i) }
 
 /**
- * atomic_read - read atomic variable
+ * arch_atomic_read - read atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically reads the value of @v.
  */
-static __always_inline int atomic_read(const atomic_t *v)
+static __always_inline int arch_atomic_read(const atomic_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /**
- * atomic_set - set atomic variable
+ * arch_atomic_set - set atomic variable
  * @v: pointer of type atomic_t
  * @i: required value
  *
  * Atomically sets the value of @v to @i.
  */
-static __always_inline void atomic_set(atomic_t *v, int i)
+static __always_inline void arch_atomic_set(atomic_t *v, int i)
 {
+	/*
+	 * We could use WRITE_ONCE_NOCHECK() if it existed, similar to
+	 * READ_ONCE_NOCHECK() in arch_atomic_read(). But there is no such
+	 * thing at the moment, and introducing it just for this case is
+	 * not worth it.
+	 */
 	WRITE_ONCE(v->counter, i);
 }
 
 /**
- * atomic_add - add integer to atomic variable
+ * arch_atomic_add - add integer to atomic variable
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void atomic_add(int i, atomic_t *v)
+static __always_inline void arch_atomic_add(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "addl %1,%0"
 		     : "+m" (v->counter)
@@ -53,13 +59,13 @@ static __always_inline void atomic_add(int i, atomic_t *v)
 }
 
 /**
- * atomic_sub - subtract integer from atomic variable
+ * arch_atomic_sub - subtract integer from atomic variable
  * @i: integer value to subtract
  * @v: pointer of type atomic_t
  *
  * Atomically subtracts @i from @v.
  */
-static __always_inline void atomic_sub(int i, atomic_t *v)
+static __always_inline void arch_atomic_sub(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "subl %1,%0"
 		     : "+m" (v->counter)
@@ -67,7 +73,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
 }
 
 /**
- * atomic_sub_and_test - subtract value from variable and test result
+ * arch_atomic_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer of type atomic_t
  *
@@ -75,63 +81,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
+static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
 }
 
 /**
- * atomic_inc - increment atomic variable
+ * arch_atomic_inc - increment atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1.
  */
-static __always_inline void atomic_inc(atomic_t *v)
+static __always_inline void arch_atomic_inc(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "incl %0"
 		     : "+m" (v->counter));
 }
 
 /**
- * atomic_dec - decrement atomic variable
+ * arch_atomic_dec - decrement atomic variable
  * @v: pointer of type atomic_t
  *
  * Atomically decrements @v by 1.
  */
-static __always_inline void atomic_dec(atomic_t *v)
+static __always_inline void arch_atomic_dec(atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "decl %0"
 		     : "+m" (v->counter));
 }
 
 /**
- * atomic_dec_and_test - decrement and test
+ * arch_atomic_dec_and_test - decrement and test
  * @v: pointer of type atomic_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static __always_inline bool atomic_dec_and_test(atomic_t *v)
+static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
 }
 
 /**
- * atomic_inc_and_test - increment and test
+ * arch_atomic_inc_and_test - increment and test
  * @v: pointer of type atomic_t
  *
  * Atomically increments @v by 1
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static __always_inline bool atomic_inc_and_test(atomic_t *v)
+static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
 }
 
 /**
- * atomic_add_negative - add and test if negative
+ * arch_atomic_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
@@ -139,65 +145,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static __always_inline bool atomic_add_negative(int i, atomic_t *v)
+static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
 }
 
 /**
- * atomic_add_return - add integer and return
+ * arch_atomic_add_return - add integer and return
  * @i: integer value to add
  * @v: pointer of type atomic_t
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline int atomic_add_return(int i, atomic_t *v)
+static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
 /**
- * atomic_sub_return - subtract integer and return
+ * arch_atomic_sub_return - subtract integer and return
  * @v: pointer of type atomic_t
  * @i: integer value to subtract
  *
  * Atomically subtracts @i from @v and returns @v - @i
  */
-static __always_inline int atomic_sub_return(int i, atomic_t *v)
+static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
 {
-	return atomic_add_return(-i, v);
+	return arch_atomic_add_return(-i, v);
 }
 
-#define atomic_inc_return(v)  (atomic_add_return(1, v))
-#define atomic_dec_return(v)  (atomic_sub_return(1, v))
+#define arch_atomic_inc_return(v)  (arch_atomic_add_return(1, v))
+#define arch_atomic_dec_return(v)  (arch_atomic_sub_return(1, v))
 
-static __always_inline int atomic_fetch_add(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
+static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
 {
-	return cmpxchg(&v->counter, old, new);
+	return arch_cmpxchg(&v->counter, old, new);
 }
 
-#define atomic_try_cmpxchg atomic_try_cmpxchg
-static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
+#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
+static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
-	return try_cmpxchg(&v->counter, old, new);
+	return arch_try_cmpxchg(&v->counter, old, new);
 }
 
-static inline int atomic_xchg(atomic_t *v, int new)
+static inline int arch_atomic_xchg(atomic_t *v, int new)
 {
 	return xchg(&v->counter, new);
 }
 
-static inline void atomic_and(int i, atomic_t *v)
+static inline void arch_atomic_and(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "andl %1,%0"
 			: "+m" (v->counter)
@@ -205,16 +211,16 @@ static inline void atomic_and(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_and(int i, atomic_t *v)
+static inline int arch_atomic_fetch_and(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val & i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val & i));
 	return val;
 }
 
-static inline void atomic_or(int i, atomic_t *v)
+static inline void arch_atomic_or(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "orl %1,%0"
 			: "+m" (v->counter)
@@ -222,17 +228,17 @@ static inline void atomic_or(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_or(int i, atomic_t *v)
+static inline int arch_atomic_fetch_or(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val | i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val | i));
 	return val;
 }
 
 
-static inline void atomic_xor(int i, atomic_t *v)
+static inline void arch_atomic_xor(int i, atomic_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorl %1,%0"
 			: "+m" (v->counter)
@@ -240,17 +246,17 @@ static inline void atomic_xor(int i, atomic_t *v)
 			: "memory");
 }
 
-static inline int atomic_fetch_xor(int i, atomic_t *v)
+static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
 {
-	int val = atomic_read(v);
+	int val = arch_atomic_read(v);
 
 	do {
-	} while (!atomic_try_cmpxchg(v, &val, val ^ i));
+	} while (!arch_atomic_try_cmpxchg(v, &val, val ^ i));
 	return val;
 }
 
 /**
- * __atomic_add_unless - add unless the number is already a given value
+ * __arch_atomic_add_unless - add unless the number is already a given value
  * @v: pointer of type atomic_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -258,13 +264,13 @@ static inline int atomic_fetch_xor(int i, atomic_t *v)
  * Atomically adds @a to @v, so long as @v was not already @u.
  * Returns the old value of @v.
  */
-static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
+static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u)
 {
-	int c = atomic_read(v);
+	int c = arch_atomic_read(v);
 	do {
 		if (unlikely(c == u))
 			break;
-	} while (!atomic_try_cmpxchg(v, &c, c + a));
+	} while (!arch_atomic_try_cmpxchg(v, &c, c + a));
 	return c;
 }
 
@@ -274,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 # include <asm/atomic64_64.h>
 #endif
 
+#include <asm-generic/atomic-instrumented.h>
+
 #endif /* _ASM_X86_ATOMIC_H */
diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
index f107fef7bfcc..8501e4fc5054 100644
--- a/arch/x86/include/asm/atomic64_32.h
+++ b/arch/x86/include/asm/atomic64_32.h
@@ -61,7 +61,7 @@ ATOMIC64_DECL(add_unless);
 #undef ATOMIC64_EXPORT
 
 /**
- * atomic64_cmpxchg - cmpxchg atomic64 variable
+ * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
  * @v: pointer to type atomic64_t
  * @o: expected value
  * @n: new value
@@ -70,20 +70,21 @@ ATOMIC64_DECL(add_unless);
  * the old value.
  */
 
-static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
+static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
+					      long long n)
 {
-	return cmpxchg64(&v->counter, o, n);
+	return arch_cmpxchg64(&v->counter, o, n);
 }
 
 /**
- * atomic64_xchg - xchg atomic64 variable
+ * arch_atomic64_xchg - xchg atomic64 variable
  * @v: pointer to type atomic64_t
  * @n: value to assign
  *
  * Atomically xchgs the value of @v to @n and returns
  * the old value.
  */
-static inline long long atomic64_xchg(atomic64_t *v, long long n)
+static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
 {
 	long long o;
 	unsigned high = (unsigned)(n >> 32);
@@ -95,13 +96,13 @@ static inline long long atomic64_xchg(atomic64_t *v, long long n)
 }
 
 /**
- * atomic64_set - set atomic64 variable
+ * arch_atomic64_set - set atomic64 variable
  * @v: pointer to type atomic64_t
  * @i: value to assign
  *
  * Atomically sets the value of @v to @n.
  */
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, long long i)
 {
 	unsigned high = (unsigned)(i >> 32);
 	unsigned low = (unsigned)i;
@@ -111,12 +112,12 @@ static inline void atomic64_set(atomic64_t *v, long long i)
 }
 
 /**
- * atomic64_read - read atomic64 variable
+ * arch_atomic64_read - read atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically reads the value of @v and returns it.
  */
-static inline long long atomic64_read(const atomic64_t *v)
+static inline long long arch_atomic64_read(const atomic64_t *v)
 {
 	long long r;
 	alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
@@ -124,13 +125,13 @@ static inline long long atomic64_read(const atomic64_t *v)
  }
 
 /**
- * atomic64_add_return - add and return
+ * arch_atomic64_add_return - add and return
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v and returns @i + *@v
  */
-static inline long long atomic64_add_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 {
 	alternative_atomic64(add_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -141,7 +142,7 @@ static inline long long atomic64_add_return(long long i, atomic64_t *v)
 /*
  * Other variants with different arithmetic operators:
  */
-static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 {
 	alternative_atomic64(sub_return,
 			     ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -149,7 +150,7 @@ static inline long long atomic64_sub_return(long long i, atomic64_t *v)
 	return i;
 }
 
-static inline long long atomic64_inc_return(atomic64_t *v)
+static inline long long arch_atomic64_inc_return(atomic64_t *v)
 {
 	long long a;
 	alternative_atomic64(inc_return, "=&A" (a),
@@ -157,7 +158,7 @@ static inline long long atomic64_inc_return(atomic64_t *v)
 	return a;
 }
 
-static inline long long atomic64_dec_return(atomic64_t *v)
+static inline long long arch_atomic64_dec_return(atomic64_t *v)
 {
 	long long a;
 	alternative_atomic64(dec_return, "=&A" (a),
@@ -166,13 +167,13 @@ static inline long long atomic64_dec_return(atomic64_t *v)
 }
 
 /**
- * atomic64_add - add integer to atomic64 variable
+ * arch_atomic64_add - add integer to atomic64 variable
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v.
  */
-static inline long long atomic64_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_add(long long i, atomic64_t *v)
 {
 	__alternative_atomic64(add, add_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -181,13 +182,13 @@ static inline long long atomic64_add(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub - subtract the atomic64 variable
+ * arch_atomic64_sub - subtract the atomic64 variable
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
  * Atomically subtracts @i from @v.
  */
-static inline long long atomic64_sub(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
 {
 	__alternative_atomic64(sub, sub_return,
 			       ASM_OUTPUT2("+A" (i), "+c" (v)),
@@ -196,7 +197,7 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub_and_test - subtract value from variable and test result
+ * arch_atomic64_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
@@ -204,46 +205,46 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline int atomic64_sub_and_test(long long i, atomic64_t *v)
+static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v)
 {
-	return atomic64_sub_return(i, v) == 0;
+	return arch_atomic64_sub_return(i, v) == 0;
 }
 
 /**
- * atomic64_inc - increment atomic64 variable
+ * arch_atomic64_inc - increment atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1.
  */
-static inline void atomic64_inc(atomic64_t *v)
+static inline void arch_atomic64_inc(atomic64_t *v)
 {
 	__alternative_atomic64(inc, inc_return, /* no output */,
 			       "S" (v) : "memory", "eax", "ecx", "edx");
 }
 
 /**
- * atomic64_dec - decrement atomic64 variable
+ * arch_atomic64_dec - decrement atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1.
  */
-static inline void atomic64_dec(atomic64_t *v)
+static inline void arch_atomic64_dec(atomic64_t *v)
 {
 	__alternative_atomic64(dec, dec_return, /* no output */,
 			       "S" (v) : "memory", "eax", "ecx", "edx");
 }
 
 /**
- * atomic64_dec_and_test - decrement and test
+ * arch_atomic64_dec_and_test - decrement and test
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static inline int atomic64_dec_and_test(atomic64_t *v)
+static inline int arch_atomic64_dec_and_test(atomic64_t *v)
 {
-	return atomic64_dec_return(v) == 0;
+	return arch_atomic64_dec_return(v) == 0;
 }
 
 /**
@@ -254,13 +255,13 @@ static inline int atomic64_dec_and_test(atomic64_t *v)
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static inline int atomic64_inc_and_test(atomic64_t *v)
+static inline int arch_atomic64_inc_and_test(atomic64_t *v)
 {
-	return atomic64_inc_return(v) == 0;
+	return arch_atomic64_inc_return(v) == 0;
 }
 
 /**
- * atomic64_add_negative - add and test if negative
+ * arch_atomic64_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
@@ -268,13 +269,13 @@ static inline int atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline int atomic64_add_negative(long long i, atomic64_t *v)
+static inline int arch_atomic64_add_negative(long long i, atomic64_t *v)
 {
-	return atomic64_add_return(i, v) < 0;
+	return arch_atomic64_add_return(i, v) < 0;
 }
 
 /**
- * atomic64_add_unless - add unless the number is a given value
+ * arch_atomic64_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -282,7 +283,8 @@ static inline int atomic64_add_negative(long long i, atomic64_t *v)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns non-zero if the add was done, zero otherwise.
  */
-static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
+static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
+					   long long u)
 {
 	unsigned low = (unsigned)u;
 	unsigned high = (unsigned)(u >> 32);
@@ -293,7 +295,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
 }
 
 
-static inline int atomic64_inc_not_zero(atomic64_t *v)
+static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
 {
 	int r;
 	alternative_atomic64(inc_not_zero, "=&a" (r),
@@ -301,7 +303,7 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
 	return r;
 }
 
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
 	long long r;
 	alternative_atomic64(dec_if_positive, "=&A" (r),
@@ -312,66 +314,66 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
 #undef alternative_atomic64
 #undef __alternative_atomic64
 
-static inline void atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
 		c = old;
 	return old;
 }
 
-static inline void atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
 		c = old;
 	return old;
 }
 
-static inline void atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 }
 
-static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
 		c = old;
 	return old;
 }
 
-static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
 {
 	long long old, c = 0;
 
-	while ((old = atomic64_cmpxchg(v, c, c + i)) != c)
+	while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
 		c = old;
 	return old;
 }
 
-#define atomic64_fetch_sub(i, v)	atomic64_fetch_add(-(i), (v))
+#define arch_atomic64_fetch_sub(i, v)	arch_atomic64_fetch_add(-(i), (v))
 
 #endif /* _ASM_X86_ATOMIC64_32_H */
diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
index a62982a2b534..6b6873e4d4e8 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -10,37 +10,37 @@
 #define ATOMIC64_INIT(i)	{ (i) }
 
 /**
- * atomic64_read - read atomic64 variable
+ * arch_atomic64_read - read atomic64 variable
  * @v: pointer of type atomic64_t
  *
  * Atomically reads the value of @v.
  * Doesn't imply a read memory barrier.
  */
-static inline long long atomic64_read(const atomic64_t *v)
+static inline long long arch_atomic64_read(const atomic64_t *v)
 {
 	return READ_ONCE((v)->counter);
 }
 
 /**
- * atomic64_set - set atomic64 variable
+ * arch_atomic64_set - set atomic64 variable
  * @v: pointer to type atomic64_t
  * @i: required value
  *
  * Atomically sets the value of @v to @i.
  */
-static inline void atomic64_set(atomic64_t *v, long long i)
+static inline void arch_atomic64_set(atomic64_t *v, long long i)
 {
 	WRITE_ONCE(v->counter, i);
 }
 
 /**
- * atomic64_add - add integer to atomic64 variable
+ * arch_atomic64_add - add integer to atomic64 variable
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v.
  */
-static __always_inline void atomic64_add(long long i, atomic64_t *v)
+static __always_inline void arch_atomic64_add(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "addq %1,%0"
 		     : "=m" (v->counter)
@@ -48,13 +48,13 @@ static __always_inline void atomic64_add(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub - subtract the atomic64 variable
+ * arch_atomic64_sub - subtract the atomic64 variable
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
  * Atomically subtracts @i from @v.
  */
-static inline void atomic64_sub(long long i, atomic64_t *v)
+static inline void arch_atomic64_sub(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "subq %1,%0"
 		     : "=m" (v->counter)
@@ -62,7 +62,7 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
 }
 
 /**
- * atomic64_sub_and_test - subtract value from variable and test result
+ * arch_atomic64_sub_and_test - subtract value from variable and test result
  * @i: integer value to subtract
  * @v: pointer to type atomic64_t
  *
@@ -70,18 +70,18 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
  * true if the result is zero, or false for all
  * other cases.
  */
-static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
+static inline bool arch_atomic64_sub_and_test(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
 }
 
 /**
- * atomic64_inc - increment atomic64 variable
+ * arch_atomic64_inc - increment atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1.
  */
-static __always_inline void atomic64_inc(atomic64_t *v)
+static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "incq %0"
 		     : "=m" (v->counter)
@@ -89,12 +89,12 @@ static __always_inline void atomic64_inc(atomic64_t *v)
 }
 
 /**
- * atomic64_dec - decrement atomic64 variable
+ * arch_atomic64_dec - decrement atomic64 variable
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1.
  */
-static __always_inline void atomic64_dec(atomic64_t *v)
+static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "decq %0"
 		     : "=m" (v->counter)
@@ -102,33 +102,33 @@ static __always_inline void atomic64_dec(atomic64_t *v)
 }
 
 /**
- * atomic64_dec_and_test - decrement and test
+ * arch_atomic64_dec_and_test - decrement and test
  * @v: pointer to type atomic64_t
  *
  * Atomically decrements @v by 1 and
  * returns true if the result is 0, or false for all other
  * cases.
  */
-static inline bool atomic64_dec_and_test(atomic64_t *v)
+static inline bool arch_atomic64_dec_and_test(atomic64_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
 }
 
 /**
- * atomic64_inc_and_test - increment and test
+ * arch_atomic64_inc_and_test - increment and test
  * @v: pointer to type atomic64_t
  *
  * Atomically increments @v by 1
  * and returns true if the result is zero, or false for all
  * other cases.
  */
-static inline bool atomic64_inc_and_test(atomic64_t *v)
+static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
 {
 	GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
 }
 
 /**
- * atomic64_add_negative - add and test if negative
+ * arch_atomic64_add_negative - add and test if negative
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
@@ -136,59 +136,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
  * if the result is negative, or false when
  * result is greater than or equal to zero.
  */
-static inline bool atomic64_add_negative(long long i, atomic64_t *v)
+static inline bool arch_atomic64_add_negative(long long i, atomic64_t *v)
 {
 	GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
 }
 
 /**
- * atomic64_add_return - add and return
+ * arch_atomic64_add_return - add and return
  * @i: integer value to add
  * @v: pointer to type atomic64_t
  *
  * Atomically adds @i to @v and returns @i + @v
  */
-static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
+static __always_inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
 {
 	return i + xadd(&v->counter, i);
 }
 
-static inline long long atomic64_sub_return(long long i, atomic64_t *v)
+static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
 {
-	return atomic64_add_return(-i, v);
+	return arch_atomic64_add_return(-i, v);
 }
 
-static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, i);
 }
 
-static inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_sub(long long i, atomic64_t *v)
 {
 	return xadd(&v->counter, -i);
 }
 
-#define atomic64_inc_return(v)  (atomic64_add_return(1, (v)))
-#define atomic64_dec_return(v)  (atomic64_sub_return(1, (v)))
+#define arch_atomic64_inc_return(v)  (arch_atomic64_add_return(1, (v)))
+#define arch_atomic64_dec_return(v)  (arch_atomic64_sub_return(1, (v)))
 
-static inline long long atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
+static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
 {
-	return cmpxchg(&v->counter, old, new);
+	return arch_cmpxchg(&v->counter, old, new);
 }
 
-#define atomic64_try_cmpxchg atomic64_try_cmpxchg
-static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
+#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
+static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
 {
-	return try_cmpxchg(&v->counter, old, new);
+	return arch_try_cmpxchg(&v->counter, old, new);
 }
 
-static inline long long atomic64_xchg(atomic64_t *v, long long new)
+static inline long long arch_atomic64_xchg(atomic64_t *v, long long new)
 {
 	return xchg(&v->counter, new);
 }
 
 /**
- * atomic64_add_unless - add unless the number is a given value
+ * arch_atomic64_add_unless - add unless the number is a given value
  * @v: pointer of type atomic64_t
  * @a: the amount to add to v...
  * @u: ...unless v is equal to u.
@@ -196,37 +196,37 @@ static inline long long atomic64_xchg(atomic64_t *v, long long new)
  * Atomically adds @a to @v, so long as it was not @u.
  * Returns the old value of @v.
  */
-static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
+static inline bool arch_atomic64_add_unless(atomic64_t *v, long long a, long long u)
 {
-	long long c = atomic64_read(v);
+	long long c = arch_atomic64_read(v);
 	do {
 		if (unlikely(c == u))
 			return false;
-	} while (!atomic64_try_cmpxchg(v, &c, c + a));
+	} while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
 	return true;
 }
 
-#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
+#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0)
 
 /*
- * atomic64_dec_if_positive - decrement by 1 if old value positive
+ * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
  * @v: pointer of type atomic64_t
  *
  * The function returns the old value of *v minus 1, even if
  * the atomic variable, v, was not decremented.
  */
-static inline long long atomic64_dec_if_positive(atomic64_t *v)
+static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
 {
-	long long dec, c = atomic64_read(v);
+	long long dec, c = arch_atomic64_read(v);
 	do {
 		dec = c - 1;
 		if (unlikely(dec < 0))
 			break;
-	} while (!atomic64_try_cmpxchg(v, &c, dec));
+	} while (!arch_atomic64_try_cmpxchg(v, &c, dec));
 	return dec;
 }
 
-static inline void atomic64_and(long long i, atomic64_t *v)
+static inline void arch_atomic64_and(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "andq %1,%0"
 			: "+m" (v->counter)
@@ -234,16 +234,16 @@ static inline void atomic64_and(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val & i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
 	return val;
 }
 
-static inline void atomic64_or(long long i, atomic64_t *v)
+static inline void arch_atomic64_or(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "orq %1,%0"
 			: "+m" (v->counter)
@@ -251,16 +251,16 @@ static inline void atomic64_or(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val | i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
 	return val;
 }
 
-static inline void atomic64_xor(long long i, atomic64_t *v)
+static inline void arch_atomic64_xor(long long i, atomic64_t *v)
 {
 	asm volatile(LOCK_PREFIX "xorq %1,%0"
 			: "+m" (v->counter)
@@ -268,12 +268,12 @@ static inline void atomic64_xor(long long i, atomic64_t *v)
 			: "memory");
 }
 
-static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
+static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
 {
-	long long val = atomic64_read(v);
+	long long val = arch_atomic64_read(v);
 
 	do {
-	} while (!atomic64_try_cmpxchg(v, &val, val ^ i));
+	} while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
 	return val;
 }
 
diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
index fb961db51a2a..b4e70a0b1238 100644
--- a/arch/x86/include/asm/cmpxchg.h
+++ b/arch/x86/include/asm/cmpxchg.h
@@ -144,20 +144,20 @@ extern void __add_wrong_size(void)
 # include <asm/cmpxchg_64.h>
 #endif
 
-#define cmpxchg(ptr, old, new)						\
+#define arch_cmpxchg(ptr, old, new)					\
 	__cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define sync_cmpxchg(ptr, old, new)					\
+#define arch_sync_cmpxchg(ptr, old, new)				\
 	__sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
 
-#define cmpxchg_local(ptr, old, new)					\
+#define arch_cmpxchg_local(ptr, old, new)				\
 	__cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
 
 
 #define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock)		\
 ({									\
 	bool success;							\
-	__typeof__(_ptr) _old = (_pold);				\
+	__typeof__(_pold) _old = (_pold);				\
 	__typeof__(*(_ptr)) __old = *_old;				\
 	__typeof__(*(_ptr)) __new = (_new);				\
 	switch (size) {							\
@@ -219,7 +219,7 @@ extern void __add_wrong_size(void)
 #define __try_cmpxchg(ptr, pold, new, size)				\
 	__raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
 
-#define try_cmpxchg(ptr, pold, new)					\
+#define arch_try_cmpxchg(ptr, pold, new)				\
 	__try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
 
 /*
@@ -248,10 +248,10 @@ extern void __add_wrong_size(void)
 	__ret;								\
 })
 
-#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(LOCK_PREFIX, p1, p2, o1, o2, n1, n2)
 
-#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
+#define arch_cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
 	__cmpxchg_double(, p1, p2, o1, o2, n1, n2)
 
 #endif	/* ASM_X86_CMPXCHG_H */
diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
index e4959d023af8..d897291d2bf9 100644
--- a/arch/x86/include/asm/cmpxchg_32.h
+++ b/arch/x86/include/asm/cmpxchg_32.h
@@ -35,10 +35,10 @@ static inline void set_64bit(volatile u64 *ptr, u64 value)
 }
 
 #ifdef CONFIG_X86_CMPXCHG64
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 	((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
 					 (unsigned long long)(n)))
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 	((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
 					       (unsigned long long)(n)))
 #endif
@@ -75,7 +75,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
  * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
  */
 
-#define cmpxchg64(ptr, o, n)					\
+#define arch_cmpxchg64(ptr, o, n)				\
 ({								\
 	__typeof__(*(ptr)) __ret;				\
 	__typeof__(*(ptr)) __old = (o);				\
@@ -92,7 +92,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
 	__ret; })
 
 
-#define cmpxchg64_local(ptr, o, n)				\
+#define arch_cmpxchg64_local(ptr, o, n)				\
 ({								\
 	__typeof__(*(ptr)) __ret;				\
 	__typeof__(*(ptr)) __old = (o);				\
diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
index caa23a34c963..fafaebacca2d 100644
--- a/arch/x86/include/asm/cmpxchg_64.h
+++ b/arch/x86/include/asm/cmpxchg_64.h
@@ -6,13 +6,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
 	*ptr = val;
 }
 
-#define cmpxchg64(ptr, o, n)						\
+#define arch_cmpxchg64(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg((ptr), (o), (n));					\
 })
 
-#define cmpxchg64_local(ptr, o, n)					\
+#define arch_cmpxchg64_local(ptr, o, n)					\
 ({									\
 	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
 	cmpxchg_local((ptr), (o), (n));					\
-- 
2.12.2.564.g063fe858b8-goog


^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 6/8] kasan: allow kasan_check_read/write() to accept pointers to volatiles
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, Thomas Gleixner, H. Peter Anvin, linux-mm

Currently kasan_check_read/write() accept 'const void*'; make them
accept 'const volatile void*' instead. This is required for
instrumentation of atomic operations, and there is no reason not to
allow it.
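
To illustrate why the volatile qualifier matters, here is a minimal
user-space sketch (the stub body is only illustrative, it is not the
real KASAN hook): an instrumented helper that forwards a pointer to
volatile-qualified storage compiles cleanly only because the check
prototype is 'const volatile void *'.

  #include <stdio.h>

  /* Stub standing in for the real KASAN hook; note the qualifiers. */
  static void kasan_check_write(const volatile void *p, unsigned int size)
  {
  	printf("checked %u-byte write at %p\n", size, (void *)p);
  }

  /*
   * Atomic-style code routinely handles volatile-qualified storage.
   * With a plain 'const void *' prototype the call below would discard
   * the volatile qualifier and the compiler would warn.
   */
  static void instrumented_store(volatile int *p, int val)
  {
  	kasan_check_write(p, sizeof(*p));
  	*p = val;
  }

  int main(void)
  {
  	volatile int counter;

  	instrumented_store(&counter, 1);
  	return 0;
  }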

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-mm@kvack.org
Cc: kasan-dev@googlegroups.com
---
 include/linux/kasan-checks.h | 10 ++++++----
 mm/kasan/kasan.c             |  4 ++--
 2 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/include/linux/kasan-checks.h b/include/linux/kasan-checks.h
index b7f8aced7870..41960fecf783 100644
--- a/include/linux/kasan-checks.h
+++ b/include/linux/kasan-checks.h
@@ -2,11 +2,13 @@
 #define _LINUX_KASAN_CHECKS_H
 
 #ifdef CONFIG_KASAN
-void kasan_check_read(const void *p, unsigned int size);
-void kasan_check_write(const void *p, unsigned int size);
+void kasan_check_read(const volatile void *p, unsigned int size);
+void kasan_check_write(const volatile void *p, unsigned int size);
 #else
-static inline void kasan_check_read(const void *p, unsigned int size) { }
-static inline void kasan_check_write(const void *p, unsigned int size) { }
+static inline void kasan_check_read(const volatile void *p, unsigned int size)
+{ }
+static inline void kasan_check_write(const volatile void *p, unsigned int size)
+{ }
 #endif
 
 #endif
diff --git a/mm/kasan/kasan.c b/mm/kasan/kasan.c
index 98b27195e38b..db46e66eb1d4 100644
--- a/mm/kasan/kasan.c
+++ b/mm/kasan/kasan.c
@@ -333,13 +333,13 @@ static void check_memory_region(unsigned long addr,
 	check_memory_region_inline(addr, size, write, ret_ip);
 }
 
-void kasan_check_read(const void *p, unsigned int size)
+void kasan_check_read(const volatile void *p, unsigned int size)
 {
 	check_memory_region((unsigned long)p, size, false, _RET_IP_);
 }
 EXPORT_SYMBOL(kasan_check_read);
 
-void kasan_check_write(const void *p, unsigned int size)
+void kasan_check_write(const volatile void *p, unsigned int size)
 {
 	check_memory_region((unsigned long)p, size, true, _RET_IP_);
 }
-- 
2.12.2.564.g063fe858b8-goog

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

KASAN uses compiler instrumentation to intercept all memory accesses.
But it does not see memory accesses done in assembly code.
One notable user of assembly code is atomic operations. Frequently,
for example, an atomic reference decrement is the last access to an
object and a good candidate for a racy use-after-free.

Add manual KASAN checks to atomic operations.
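
Condensed, the added wrappers all follow the same pattern. Here is a
stand-alone, user-space sketch with stubbed helpers (the stubs and the
__atomic builtins only stand in for the real kernel implementations;
the actual wrappers defer to the arch_ ops, see the diff below):

  #include <stdio.h>

  typedef struct { int counter; } atomic_t;

  /* Stubs standing in for the real KASAN hooks. */
  static void kasan_check_read(const volatile void *p, unsigned int size)
  {
  	printf("KASAN read check:  %u bytes at %p\n", size, (void *)p);
  }

  static void kasan_check_write(const volatile void *p, unsigned int size)
  {
  	printf("KASAN write check: %u bytes at %p\n", size, (void *)p);
  }

  /* Stand-ins for the arch-provided ops (x86 uses inline asm instead). */
  static int arch_atomic_read(const atomic_t *v)
  {
  	return __atomic_load_n(&v->counter, __ATOMIC_RELAXED);
  }

  static void arch_atomic_add(int i, atomic_t *v)
  {
  	(void)__atomic_fetch_add(&v->counter, i, __ATOMIC_RELAXED);
  }

  /* Instrumented wrappers: check the access, then do the real op. */
  static int atomic_read(const atomic_t *v)
  {
  	kasan_check_read(v, sizeof(*v));
  	return arch_atomic_read(v);
  }

  static void atomic_add(int i, atomic_t *v)
  {
  	kasan_check_write(v, sizeof(*v));
  	arch_atomic_add(i, v);
  }

  int main(void)
  {
  	atomic_t v = { 0 };

  	atomic_add(3, &v);
  	printf("counter = %d\n", atomic_read(&v));
  	return 0;
  }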

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>,
Cc: Andrew Morton <akpm@linux-foundation.org>,
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
Cc: Ingo Molnar <mingo@redhat.com>,
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 include/asm-generic/atomic-instrumented.h | 76 +++++++++++++++++++++++++++++--
 1 file changed, 72 insertions(+), 4 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index fd483115d4c6..7f8eb761f896 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -1,44 +1,54 @@
 #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
 #define _LINUX_ATOMIC_INSTRUMENTED_H
 
+#include <linux/kasan-checks.h>
+
 static __always_inline int atomic_read(const atomic_t *v)
 {
+	kasan_check_read(v, sizeof(*v));
 	return arch_atomic_read(v);
 }
 
 static __always_inline long long atomic64_read(const atomic64_t *v)
 {
+	kasan_check_read(v, sizeof(*v));
 	return arch_atomic64_read(v);
 }
 
 static __always_inline void atomic_set(atomic_t *v, int i)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_set(v, i);
 }
 
 static __always_inline void atomic64_set(atomic64_t *v, long long i)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_set(v, i);
 }
 
 static __always_inline int atomic_xchg(atomic_t *v, int i)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_xchg(v, i);
 }
 
 static __always_inline long long atomic64_xchg(atomic64_t *v, long long i)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_xchg(v, i);
 }
 
 static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg(v, old, new);
 }
 
 static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
 						  long long new)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg(v, old, new);
 }
 
@@ -46,6 +56,8 @@ static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
 #define atomic_try_cmpxchg atomic_try_cmpxchg
 static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
+	kasan_check_write(v, sizeof(*v));
+	kasan_check_read(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg(v, old, new);
 }
 #endif
@@ -55,12 +67,15 @@ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old,
 						 long long new)
 {
+	kasan_check_write(v, sizeof(*v));
+	kasan_check_read(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg(v, old, new);
 }
 #endif
 
 static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 {
+	kasan_check_write(v, sizeof(*v));
 	return __arch_atomic_add_unless(v, a, u);
 }
 
@@ -68,242 +83,295 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
 static __always_inline bool atomic64_add_unless(atomic64_t *v, long long a,
 						long long u)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
 
 static __always_inline void atomic_inc(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_inc(v);
 }
 
 static __always_inline void atomic64_inc(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_inc(v);
 }
 
 static __always_inline void atomic_dec(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_dec(v);
 }
 
 static __always_inline void atomic64_dec(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_dec(v);
 }
 
 static __always_inline void atomic_add(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_add(i, v);
 }
 
 static __always_inline void atomic64_add(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_add(i, v);
 }
 
 static __always_inline void atomic_sub(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_sub(i, v);
 }
 
 static __always_inline void atomic64_sub(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_sub(i, v);
 }
 
 static __always_inline void atomic_and(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_and(i, v);
 }
 
 static __always_inline void atomic64_and(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_and(i, v);
 }
 
 static __always_inline void atomic_or(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_or(i, v);
 }
 
 static __always_inline void atomic64_or(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_or(i, v);
 }
 
 static __always_inline void atomic_xor(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic_xor(i, v);
 }
 
 static __always_inline void atomic64_xor(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	arch_atomic64_xor(i, v);
 }
 
 static __always_inline int atomic_inc_return(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_inc_return(v);
 }
 
 static __always_inline long long atomic64_inc_return(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_inc_return(v);
 }
 
 static __always_inline int atomic_dec_return(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_dec_return(v);
 }
 
 static __always_inline long long atomic64_dec_return(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_dec_return(v);
 }
 
 static __always_inline long long atomic64_inc_not_zero(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_inc_not_zero(v);
 }
 
 static __always_inline long long atomic64_dec_if_positive(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_dec_if_positive(v);
 }
 
 static __always_inline bool atomic_dec_and_test(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_dec_and_test(v);
 }
 
 static __always_inline bool atomic64_dec_and_test(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_dec_and_test(v);
 }
 
 static __always_inline bool atomic_inc_and_test(atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_inc_and_test(v);
 }
 
 static __always_inline bool atomic64_inc_and_test(atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_inc_and_test(v);
 }
 
 static __always_inline int atomic_add_return(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_add_return(i, v);
 }
 
 static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_return(i, v);
 }
 
 static __always_inline int atomic_sub_return(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_sub_return(i, v);
 }
 
 static __always_inline long long atomic64_sub_return(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_sub_return(i, v);
 }
 
 static __always_inline int atomic_fetch_add(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_fetch_add(i, v);
 }
 
 static __always_inline long long atomic64_fetch_add(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add(i, v);
 }
 
 static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub(i, v);
 }
 
 static __always_inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub(i, v);
 }
 
 static __always_inline int atomic_fetch_and(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_fetch_and(i, v);
 }
 
 static __always_inline long long atomic64_fetch_and(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and(i, v);
 }
 
 static __always_inline int atomic_fetch_or(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_fetch_or(i, v);
 }
 
 static __always_inline long long atomic64_fetch_or(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or(i, v);
 }
 
 static __always_inline int atomic_fetch_xor(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor(i, v);
 }
 
 static __always_inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor(i, v);
 }
 
 static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_sub_and_test(i, v);
 }
 
 static __always_inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_sub_and_test(i, v);
 }
 
 static __always_inline bool atomic_add_negative(int i, atomic_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic_add_negative(i, v);
 }
 
 static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
 {
+	kasan_check_write(v, sizeof(*v));
 	return arch_atomic64_add_negative(i, v);
 }
 
 #define cmpxchg(ptr, old, new)				\
 ({							\
+	__typeof__(ptr) ___ptr = (ptr);			\
+	kasan_check_write(___ptr, sizeof(*___ptr));	\
 	arch_cmpxchg((ptr), (old), (new));		\
 })
 
 #define sync_cmpxchg(ptr, old, new)			\
 ({							\
-	arch_sync_cmpxchg((ptr), (old), (new));		\
+	__typeof__(ptr) ___ptr = (ptr);			\
+	kasan_check_write(___ptr, sizeof(*___ptr));	\
+	arch_sync_cmpxchg(___ptr, (old), (new));	\
 })
 
 #define cmpxchg_local(ptr, old, new)			\
 ({							\
-	arch_cmpxchg_local((ptr), (old), (new));	\
+	__typeof__(ptr) ____ptr = (ptr);		\
+	kasan_check_write(____ptr, sizeof(*____ptr));	\
+	arch_cmpxchg_local(____ptr, (old), (new));	\
 })
 
 #define cmpxchg64(ptr, old, new)			\
 ({							\
-	arch_cmpxchg64((ptr), (old), (new));		\
+	__typeof__(ptr) ____ptr = (ptr);		\
+	kasan_check_write(____ptr, sizeof(*____ptr));	\
+	arch_cmpxchg64(____ptr, (old), (new));		\
 })
 
 #define cmpxchg64_local(ptr, old, new)			\
 ({							\
-	arch_cmpxchg64_local((ptr), (old), (new));	\
+	__typeof__(ptr) ____ptr = (ptr);		\
+	kasan_check_write(____ptr, sizeof(*____ptr));	\
+	arch_cmpxchg64_local(____ptr, (old), (new));	\
 })
 
 #define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
-- 
2.12.2.564.g063fe858b8-goog

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [PATCH 8/8] asm-generic, x86: add comments for atomic instrumentation
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
@ 2017-03-28 16:15   ` Dmitry Vyukov
  2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
                     ` (7 subsequent siblings)
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:15 UTC (permalink / raw)
  To: mark.rutland, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86,
	Dmitry Vyukov, linux-mm

The comments were kept out of the earlier code changes to make those
patches easier to read. Add them separately here to explain some
non-obvious aspects of the implementation.
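
For reference, the opt-in layout that the new header comment describes
looks schematically as follows (a sketch of a hypothetical arch header;
"arch/foo" is made up and is not a real file in the tree):

  /* arch/foo/include/asm/atomic.h (schematic) */

  static __always_inline int arch_atomic_read(const atomic_t *v)
  {
  	return READ_ONCE((v)->counter);
  }

  static __always_inline void arch_atomic_set(atomic_t *v, int i)
  {
  	WRITE_ONCE(v->counter, i);
  }

  /*
   * ... all other ops, likewise with the arch_ prefix.  Ops that are
   * implemented in terms of other atomics must call the arch_ variants
   * directly so that the KASAN check is not done twice.
   */

  /* Included last, so the wrappers see the arch_ definitions above. */
  #include <asm-generic/atomic-instrumented.h>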

Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: kasan-dev@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Cc: x86@kernel.org
---
 arch/x86/include/asm/atomic.h             |  7 +++++++
 include/asm-generic/atomic-instrumented.h | 30 ++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 92dd59f24eba..b2a2220c7ac2 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -23,6 +23,13 @@
  */
 static __always_inline int arch_atomic_read(const atomic_t *v)
 {
+	/*
+	 * Note: READ_ONCE() here leads to double instrumentation as
+	 * both READ_ONCE() and atomic_read() contain instrumentation.
+	 * This is a deliberate choice. READ_ONCE_NOCHECK() is compiled to a
+	 * non-inlined function call that considerably increases binary size
+	 * and stack usage under KASAN.
+	 */
 	return READ_ONCE((v)->counter);
 }
 
diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 7f8eb761f896..1134af090976 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -1,3 +1,15 @@
+/*
+ * This file provides wrappers with KASAN instrumentation for atomic operations.
+ * To use this functionality an arch's atomic.h file needs to define all
+ * atomic operations with arch_ prefix (e.g. arch_atomic_read()) and include
+ * this file at the end. This file provides atomic_read() that forwards to
+ * arch_atomic_read() for actual atomic operation.
+ * Note: if an arch atomic operation is implemented by means of other atomic
+ * operations (e.g. atomic_read()/atomic_cmpxchg() loop), then it needs to use
+ * arch_ variants (i.e. arch_atomic_read()/arch_atomic_cmpxchg()) to avoid
+ * double instrumentation.
+ */
+
 #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
 #define _LINUX_ATOMIC_INSTRUMENTED_H
 
@@ -339,6 +351,15 @@ static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
 	return arch_atomic64_add_negative(i, v);
 }
 
+/*
+ * In the following macros we need to be careful to not clash with arch_ macros.
+ * arch_xchg() can be defined as a statement expression as well: if we
+ * define a __ptr variable, and arch_xchg() also defines a __ptr variable,
+ * and we pass __ptr as an argument to arch_xchg(), it will use its own
+ * __ptr instead of ours. This leads to unpleasant crashes. To avoid the
+ * problem the following macros declare variables with lots of underscores.
+ */
+
 #define cmpxchg(ptr, old, new)				\
 ({							\
 	__typeof__(ptr) ___ptr = (ptr);			\
@@ -374,6 +395,15 @@ static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
 	arch_cmpxchg64_local(____ptr, (old), (new));	\
 })
 
+/*
+ * Originally we had the following code here:
+ *     __typeof__(p1) ____p1 = (p1);
+ *     kasan_check_write(____p1, 2 * sizeof(*____p1));
+ *     arch_cmpxchg_double(____p1, (p2), (o1), (o2), (n1), (n2));
+ * But it leads to compilation failures (see gcc issue 72873).
+ * So for now it's left non-instrumented.
+ * There are few callers of cmpxchg_double(), so it's not critical.
+ */
 #define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
 ({									\
 	arch_cmpxchg_double((p1), (p2), (o1), (o2), (n1), (n2));	\
-- 
2.12.2.564.g063fe858b8-goog

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* Re: [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-28 16:25     ` Dmitry Vyukov
  -1 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:25 UTC (permalink / raw)
  To: Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Andrew Morton, Will Deacon, Andrey Ryabinin, kasan-dev, LKML,
	x86, Dmitry Vyukov, linux-mm

On Tue, Mar 28, 2017 at 6:15 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> Add arch_ prefix to all atomic operations and include
> <asm-generic/atomic-instrumented.h>. This will allow us
> to add KASAN instrumentation to all atomic ops.
>
> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: x86@kernel.org
> ---
>  arch/x86/include/asm/atomic.h      | 110 ++++++++++++++++++++-----------------
>  arch/x86/include/asm/atomic64_32.h | 106 +++++++++++++++++------------------
>  arch/x86/include/asm/atomic64_64.h | 110 ++++++++++++++++++-------------------
>  arch/x86/include/asm/cmpxchg.h     |  14 ++---
>  arch/x86/include/asm/cmpxchg_32.h  |   8 +--
>  arch/x86/include/asm/cmpxchg_64.h  |   4 +-
>  6 files changed, 181 insertions(+), 171 deletions(-)
>
> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> index 8d7f6e579be4..92dd59f24eba 100644
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -16,36 +16,42 @@
>  #define ATOMIC_INIT(i) { (i) }
>
>  /**
> - * atomic_read - read atomic variable
> + * arch_atomic_read - read atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically reads the value of @v.
>   */
> -static __always_inline int atomic_read(const atomic_t *v)
> +static __always_inline int arch_atomic_read(const atomic_t *v)
>  {
>         return READ_ONCE((v)->counter);
>  }
>
>  /**
> - * atomic_set - set atomic variable
> + * arch_atomic_set - set atomic variable
>   * @v: pointer of type atomic_t
>   * @i: required value
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static __always_inline void atomic_set(atomic_t *v, int i)
> +static __always_inline void arch_atomic_set(atomic_t *v, int i)
>  {
> +       /*
> +        * We could use WRITE_ONCE_NOCHECK() if it exists, similar to
> +        * thing at the moment, and introducing it for this case is not
> +        * worth it.
> +        * worth it.
> +        */
>         WRITE_ONCE(v->counter, i);
>  }
>
>  /**
> - * atomic_add - add integer to atomic variable
> + * arch_atomic_add - add integer to atomic variable
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static __always_inline void atomic_add(int i, atomic_t *v)
> +static __always_inline void arch_atomic_add(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "addl %1,%0"
>                      : "+m" (v->counter)
> @@ -53,13 +59,13 @@ static __always_inline void atomic_add(int i, atomic_t *v)
>  }
>
>  /**
> - * atomic_sub - subtract integer from atomic variable
> + * arch_atomic_sub - subtract integer from atomic variable
>   * @i: integer value to subtract
>   * @v: pointer of type atomic_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static __always_inline void atomic_sub(int i, atomic_t *v)
> +static __always_inline void arch_atomic_sub(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "subl %1,%0"
>                      : "+m" (v->counter)
> @@ -67,7 +73,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
>  }
>
>  /**
> - * atomic_sub_and_test - subtract value from variable and test result
> + * arch_atomic_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer of type atomic_t
>   *
> @@ -75,63 +81,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
> +static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
>  }
>
>  /**
> - * atomic_inc - increment atomic variable
> + * arch_atomic_inc - increment atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static __always_inline void atomic_inc(atomic_t *v)
> +static __always_inline void arch_atomic_inc(atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "incl %0"
>                      : "+m" (v->counter));
>  }
>
>  /**
> - * atomic_dec - decrement atomic variable
> + * arch_atomic_dec - decrement atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static __always_inline void atomic_dec(atomic_t *v)
> +static __always_inline void arch_atomic_dec(atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "decl %0"
>                      : "+m" (v->counter));
>  }
>
>  /**
> - * atomic_dec_and_test - decrement and test
> + * arch_atomic_dec_and_test - decrement and test
>   * @v: pointer of type atomic_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static __always_inline bool atomic_dec_and_test(atomic_t *v)
> +static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic_inc_and_test - increment and test
> + * arch_atomic_inc_and_test - increment and test
>   * @v: pointer of type atomic_t
>   *
>   * Atomically increments @v by 1
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static __always_inline bool atomic_inc_and_test(atomic_t *v)
> +static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic_add_negative - add and test if negative
> + * arch_atomic_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
> @@ -139,65 +145,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static __always_inline bool atomic_add_negative(int i, atomic_t *v)
> +static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
>  }
>
>  /**
> - * atomic_add_return - add integer and return
> + * arch_atomic_add_return - add integer and return
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
>   * Atomically adds @i to @v and returns @i + @v
>   */
> -static __always_inline int atomic_add_return(int i, atomic_t *v)
> +static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
>  {
>         return i + xadd(&v->counter, i);
>  }
>
>  /**
> - * atomic_sub_return - subtract integer and return
> + * arch_atomic_sub_return - subtract integer and return
>   * @v: pointer of type atomic_t
>   * @i: integer value to subtract
>   *
>   * Atomically subtracts @i from @v and returns @v - @i
>   */
> -static __always_inline int atomic_sub_return(int i, atomic_t *v)
> +static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
>  {
> -       return atomic_add_return(-i, v);
> +       return arch_atomic_add_return(-i, v);
>  }
>
> -#define atomic_inc_return(v)  (atomic_add_return(1, v))
> -#define atomic_dec_return(v)  (atomic_sub_return(1, v))
> +#define arch_atomic_inc_return(v)  (arch_atomic_add_return(1, v))
> +#define arch_atomic_dec_return(v)  (arch_atomic_sub_return(1, v))
>
> -static __always_inline int atomic_fetch_add(int i, atomic_t *v)
> +static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v)
>  {
>         return xadd(&v->counter, i);
>  }
>
> -static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
> +static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v)
>  {
>         return xadd(&v->counter, -i);
>  }
>
> -static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
> +static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
>  {
> -       return cmpxchg(&v->counter, old, new);
> +       return arch_cmpxchg(&v->counter, old, new);
>  }
>
> -#define atomic_try_cmpxchg atomic_try_cmpxchg
> -static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> +#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
> +static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  {
> -       return try_cmpxchg(&v->counter, old, new);
> +       return arch_try_cmpxchg(&v->counter, old, new);
>  }
>
> -static inline int atomic_xchg(atomic_t *v, int new)
> +static inline int arch_atomic_xchg(atomic_t *v, int new)
>  {
>         return xchg(&v->counter, new);
>  }
>
> -static inline void atomic_and(int i, atomic_t *v)
> +static inline void arch_atomic_and(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "andl %1,%0"
>                         : "+m" (v->counter)
> @@ -205,16 +211,16 @@ static inline void atomic_and(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_and(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_and(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val & i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val & i));
>         return val;
>  }
>
> -static inline void atomic_or(int i, atomic_t *v)
> +static inline void arch_atomic_or(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "orl %1,%0"
>                         : "+m" (v->counter)
> @@ -222,17 +228,17 @@ static inline void atomic_or(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_or(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_or(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val | i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val | i));
>         return val;
>  }
>
>
> -static inline void atomic_xor(int i, atomic_t *v)
> +static inline void arch_atomic_xor(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "xorl %1,%0"
>                         : "+m" (v->counter)
> @@ -240,17 +246,17 @@ static inline void atomic_xor(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_xor(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val ^ i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val ^ i));
>         return val;
>  }
>
>  /**
> - * __atomic_add_unless - add unless the number is already a given value
> + * __arch_atomic_add_unless - add unless the number is already a given value
>   * @v: pointer of type atomic_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -258,13 +264,13 @@ static inline int atomic_fetch_xor(int i, atomic_t *v)
>   * Atomically adds @a to @v, so long as @v was not already @u.
>   * Returns the old value of @v.
>   */
> -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
> +static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u)
>  {
> -       int c = atomic_read(v);
> +       int c = arch_atomic_read(v);
>         do {
>                 if (unlikely(c == u))
>                         break;
> -       } while (!atomic_try_cmpxchg(v, &c, c + a));
> +       } while (!arch_atomic_try_cmpxchg(v, &c, c + a));
>         return c;
>  }
>
> @@ -274,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  # include <asm/atomic64_64.h>
>  #endif
>
> +#include <asm-generic/atomic-instrumented.h>
> +
>  #endif /* _ASM_X86_ATOMIC_H */
> diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
> index f107fef7bfcc..8501e4fc5054 100644
> --- a/arch/x86/include/asm/atomic64_32.h
> +++ b/arch/x86/include/asm/atomic64_32.h
> @@ -61,7 +61,7 @@ ATOMIC64_DECL(add_unless);
>  #undef ATOMIC64_EXPORT
>
>  /**
> - * atomic64_cmpxchg - cmpxchg atomic64 variable
> + * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @o: expected value
>   * @n: new value
> @@ -70,20 +70,21 @@ ATOMIC64_DECL(add_unless);
>   * the old value.
>   */
>
> -static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
> +static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
> +                                             long long n)
>  {
> -       return cmpxchg64(&v->counter, o, n);
> +       return arch_cmpxchg64(&v->counter, o, n);
>  }
>
>  /**
> - * atomic64_xchg - xchg atomic64 variable
> + * arch_atomic64_xchg - xchg atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @n: value to assign
>   *
>   * Atomically xchgs the value of @v to @n and returns
>   * the old value.
>   */
> -static inline long long atomic64_xchg(atomic64_t *v, long long n)
> +static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
>  {
>         long long o;
>         unsigned high = (unsigned)(n >> 32);
> @@ -95,13 +96,13 @@ static inline long long atomic64_xchg(atomic64_t *v, long long n)
>  }
>
>  /**
> - * atomic64_set - set atomic64 variable
> + * arch_atomic64_set - set atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @i: value to assign
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static inline void atomic64_set(atomic64_t *v, long long i)
> +static inline void arch_atomic64_set(atomic64_t *v, long long i)
>  {
>         unsigned high = (unsigned)(i >> 32);
>         unsigned low = (unsigned)i;
> @@ -111,12 +112,12 @@ static inline void atomic64_set(atomic64_t *v, long long i)
>  }
>
>  /**
> - * atomic64_read - read atomic64 variable
> + * arch_atomic64_read - read atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically reads the value of @v and returns it.
>   */
> -static inline long long atomic64_read(const atomic64_t *v)
> +static inline long long arch_atomic64_read(const atomic64_t *v)
>  {
>         long long r;
>         alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
> @@ -124,13 +125,13 @@ static inline long long atomic64_read(const atomic64_t *v)
>   }
>
>  /**
> - * atomic64_add_return - add and return
> + * arch_atomic64_add_return - add and return
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v and returns @i + *@v
>   */
> -static inline long long atomic64_add_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
>  {
>         alternative_atomic64(add_return,
>                              ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -141,7 +142,7 @@ static inline long long atomic64_add_return(long long i, atomic64_t *v)
>  /*
>   * Other variants with different arithmetic operators:
>   */
> -static inline long long atomic64_sub_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
>  {
>         alternative_atomic64(sub_return,
>                              ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -149,7 +150,7 @@ static inline long long atomic64_sub_return(long long i, atomic64_t *v)
>         return i;
>  }
>
> -static inline long long atomic64_inc_return(atomic64_t *v)
> +static inline long long arch_atomic64_inc_return(atomic64_t *v)
>  {
>         long long a;
>         alternative_atomic64(inc_return, "=&A" (a),
> @@ -157,7 +158,7 @@ static inline long long atomic64_inc_return(atomic64_t *v)
>         return a;
>  }
>
> -static inline long long atomic64_dec_return(atomic64_t *v)
> +static inline long long arch_atomic64_dec_return(atomic64_t *v)
>  {
>         long long a;
>         alternative_atomic64(dec_return, "=&A" (a),
> @@ -166,13 +167,13 @@ static inline long long atomic64_dec_return(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_add - add integer to atomic64 variable
> + * arch_atomic64_add - add integer to atomic64 variable
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static inline long long atomic64_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_add(long long i, atomic64_t *v)
>  {
>         __alternative_atomic64(add, add_return,
>                                ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -181,13 +182,13 @@ static inline long long atomic64_add(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub - subtract the atomic64 variable
> + * arch_atomic64_sub - subtract the atomic64 variable
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static inline long long atomic64_sub(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
>  {
>         __alternative_atomic64(sub, sub_return,
>                                ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -196,7 +197,7 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub_and_test - subtract value from variable and test result
> + * arch_atomic64_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
> @@ -204,46 +205,46 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline int atomic64_sub_and_test(long long i, atomic64_t *v)
> +static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
> -       return atomic64_sub_return(i, v) == 0;
> +       return arch_atomic64_sub_return(i, v) == 0;
>  }
>
>  /**
> - * atomic64_inc - increment atomic64 variable
> + * arch_atomic64_inc - increment atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static inline void atomic64_inc(atomic64_t *v)
> +static inline void arch_atomic64_inc(atomic64_t *v)
>  {
>         __alternative_atomic64(inc, inc_return, /* no output */,
>                                "S" (v) : "memory", "eax", "ecx", "edx");
>  }
>
>  /**
> - * atomic64_dec - decrement atomic64 variable
> + * arch_atomic64_dec - decrement atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static inline void atomic64_dec(atomic64_t *v)
> +static inline void arch_atomic64_dec(atomic64_t *v)
>  {
>         __alternative_atomic64(dec, dec_return, /* no output */,
>                                "S" (v) : "memory", "eax", "ecx", "edx");
>  }
>
>  /**
> - * atomic64_dec_and_test - decrement and test
> + * arch_atomic64_dec_and_test - decrement and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static inline int atomic64_dec_and_test(atomic64_t *v)
> +static inline int arch_atomic64_dec_and_test(atomic64_t *v)
>  {
> -       return atomic64_dec_return(v) == 0;
> +       return arch_atomic64_dec_return(v) == 0;
>  }
>
>  /**
> @@ -254,13 +255,13 @@ static inline int atomic64_dec_and_test(atomic64_t *v)
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline int atomic64_inc_and_test(atomic64_t *v)
> +static inline int arch_atomic64_inc_and_test(atomic64_t *v)
>  {
> -       return atomic64_inc_return(v) == 0;
> +       return arch_atomic64_inc_return(v) == 0;
>  }
>
>  /**
> - * atomic64_add_negative - add and test if negative
> + * arch_atomic64_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
> @@ -268,13 +269,13 @@ static inline int atomic64_inc_and_test(atomic64_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static inline int atomic64_add_negative(long long i, atomic64_t *v)
> +static inline int arch_atomic64_add_negative(long long i, atomic64_t *v)
>  {
> -       return atomic64_add_return(i, v) < 0;
> +       return arch_atomic64_add_return(i, v) < 0;
>  }
>
>  /**
> - * atomic64_add_unless - add unless the number is a given value
> + * arch_atomic64_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic64_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -282,7 +283,8 @@ static inline int atomic64_add_negative(long long i, atomic64_t *v)
>   * Atomically adds @a to @v, so long as it was not @u.
>   * Returns non-zero if the add was done, zero otherwise.
>   */
> -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
> +static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
> +                                          long long u)
>  {
>         unsigned low = (unsigned)u;
>         unsigned high = (unsigned)(u >> 32);
> @@ -293,7 +295,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
>  }
>
>
> -static inline int atomic64_inc_not_zero(atomic64_t *v)
> +static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
>  {
>         int r;
>         alternative_atomic64(inc_not_zero, "=&a" (r),
> @@ -301,7 +303,7 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
>         return r;
>  }
>
> -static inline long long atomic64_dec_if_positive(atomic64_t *v)
> +static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
>  {
>         long long r;
>         alternative_atomic64(dec_if_positive, "=&A" (r),
> @@ -312,66 +314,66 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
>  #undef alternative_atomic64
>  #undef __alternative_atomic64
>
> -static inline void atomic64_and(long long i, atomic64_t *v)
> +static inline void arch_atomic64_and(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline void atomic64_or(long long i, atomic64_t *v)
> +static inline void arch_atomic64_or(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline void atomic64_xor(long long i, atomic64_t *v)
> +static inline void arch_atomic64_xor(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c + i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
>                 c = old;
>         return old;
>  }
>
> -#define atomic64_fetch_sub(i, v)       atomic64_fetch_add(-(i), (v))
> +#define arch_atomic64_fetch_sub(i, v)  arch_atomic64_fetch_add(-(i), (v))
>
>  #endif /* _ASM_X86_ATOMIC64_32_H */
> diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
> index a62982a2b534..6b6873e4d4e8 100644
> --- a/arch/x86/include/asm/atomic64_64.h
> +++ b/arch/x86/include/asm/atomic64_64.h
> @@ -10,37 +10,37 @@
>  #define ATOMIC64_INIT(i)       { (i) }
>
>  /**
> - * atomic64_read - read atomic64 variable
> + * arch_atomic64_read - read atomic64 variable
>   * @v: pointer of type atomic64_t
>   *
>   * Atomically reads the value of @v.
>   * Doesn't imply a read memory barrier.
>   */
> -static inline long long atomic64_read(const atomic64_t *v)
> +static inline long long arch_atomic64_read(const atomic64_t *v)
>  {
>         return READ_ONCE((v)->counter);
>  }
>
>  /**
> - * atomic64_set - set atomic64 variable
> + * arch_atomic64_set - set atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @i: required value
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static inline void atomic64_set(atomic64_t *v, long long i)
> +static inline void arch_atomic64_set(atomic64_t *v, long long i)
>  {
>         WRITE_ONCE(v->counter, i);
>  }
>
>  /**
> - * atomic64_add - add integer to atomic64 variable
> + * arch_atomic64_add - add integer to atomic64 variable
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static __always_inline void atomic64_add(long long i, atomic64_t *v)
> +static __always_inline void arch_atomic64_add(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "addq %1,%0"
>                      : "=m" (v->counter)
> @@ -48,13 +48,13 @@ static __always_inline void atomic64_add(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub - subtract the atomic64 variable
> + * arch_atomic64_sub - subtract the atomic64 variable
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static inline void atomic64_sub(long long i, atomic64_t *v)
> +static inline void arch_atomic64_sub(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "subq %1,%0"
>                      : "=m" (v->counter)
> @@ -62,7 +62,7 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub_and_test - subtract value from variable and test result
> + * arch_atomic64_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
> @@ -70,18 +70,18 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
> +static inline bool arch_atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
>  }
>
>  /**
> - * atomic64_inc - increment atomic64 variable
> + * arch_atomic64_inc - increment atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static __always_inline void atomic64_inc(atomic64_t *v)
> +static __always_inline void arch_atomic64_inc(atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "incq %0"
>                      : "=m" (v->counter)
> @@ -89,12 +89,12 @@ static __always_inline void atomic64_inc(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_dec - decrement atomic64 variable
> + * arch_atomic64_dec - decrement atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static __always_inline void atomic64_dec(atomic64_t *v)
> +static __always_inline void arch_atomic64_dec(atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "decq %0"
>                      : "=m" (v->counter)
> @@ -102,33 +102,33 @@ static __always_inline void atomic64_dec(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_dec_and_test - decrement and test
> + * arch_atomic64_dec_and_test - decrement and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static inline bool atomic64_dec_and_test(atomic64_t *v)
> +static inline bool arch_atomic64_dec_and_test(atomic64_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic64_inc_and_test - increment and test
> + * arch_atomic64_inc_and_test - increment and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline bool atomic64_inc_and_test(atomic64_t *v)
> +static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic64_add_negative - add and test if negative
> + * arch_atomic64_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
> @@ -136,59 +136,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static inline bool atomic64_add_negative(long long i, atomic64_t *v)
> +static inline bool arch_atomic64_add_negative(long long i, atomic64_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
>  }
>
>  /**
> - * atomic64_add_return - add and return
> + * arch_atomic64_add_return - add and return
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v and returns @i + @v
>   */
> -static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
> +static __always_inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
>  {
>         return i + xadd(&v->counter, i);
>  }
>
> -static inline long long atomic64_sub_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
>  {
> -       return atomic64_add_return(-i, v);
> +       return arch_atomic64_add_return(-i, v);
>  }
>
> -static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
>  {
>         return xadd(&v->counter, i);
>  }
>
> -static inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_sub(long long i, atomic64_t *v)
>  {
>         return xadd(&v->counter, -i);
>  }
>
> -#define atomic64_inc_return(v)  (atomic64_add_return(1, (v)))
> -#define atomic64_dec_return(v)  (atomic64_sub_return(1, (v)))
> +#define arch_atomic64_inc_return(v)  (arch_atomic64_add_return(1, (v)))
> +#define arch_atomic64_dec_return(v)  (arch_atomic64_sub_return(1, (v)))
>
> -static inline long long atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
> +static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
>  {
> -       return cmpxchg(&v->counter, old, new);
> +       return arch_cmpxchg(&v->counter, old, new);
>  }
>
> -#define atomic64_try_cmpxchg atomic64_try_cmpxchg
> -static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
> +#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
> +static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
>  {
> -       return try_cmpxchg(&v->counter, old, new);
> +       return arch_try_cmpxchg(&v->counter, old, new);
>  }
>
> -static inline long long atomic64_xchg(atomic64_t *v, long long new)
> +static inline long long arch_atomic64_xchg(atomic64_t *v, long long new)
>  {
>         return xchg(&v->counter, new);
>  }
>
>  /**
> - * atomic64_add_unless - add unless the number is a given value
> + * arch_atomic64_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic64_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -196,37 +196,37 @@ static inline long long atomic64_xchg(atomic64_t *v, long long new)
>   * Atomically adds @a to @v, so long as it was not @u.
>   * Returns the old value of @v.
>   */
> -static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
> +static inline bool arch_atomic64_add_unless(atomic64_t *v, long long a, long long u)
>  {
> -       long long c = atomic64_read(v);
> +       long long c = arch_atomic64_read(v);
>         do {
>                 if (unlikely(c == u))
>                         return false;
> -       } while (!atomic64_try_cmpxchg(v, &c, c + a));
> +       } while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
>         return true;
>  }
>
> -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
> +#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0)
>
>  /*
> - * atomic64_dec_if_positive - decrement by 1 if old value positive
> + * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
>   * @v: pointer of type atomic_t
>   *
>   * The function returns the old value of *v minus 1, even if
>   * the atomic variable, v, was not decremented.
>   */
> -static inline long long atomic64_dec_if_positive(atomic64_t *v)
> +static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
>  {
> -       long long dec, c = atomic64_read(v);
> +       long long dec, c = arch_atomic64_read(v);
>         do {
>                 dec = c - 1;
>                 if (unlikely(dec < 0))
>                         break;
> -       } while (!atomic64_try_cmpxchg(v, &c, dec));
> +       } while (!arch_atomic64_try_cmpxchg(v, &c, dec));
>         return dec;
>  }
>
> -static inline void atomic64_and(long long i, atomic64_t *v)
> +static inline void arch_atomic64_and(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "andq %1,%0"
>                         : "+m" (v->counter)
> @@ -234,16 +234,16 @@ static inline void atomic64_and(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val & i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
>         return val;
>  }
>
> -static inline void atomic64_or(long long i, atomic64_t *v)
> +static inline void arch_atomic64_or(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "orq %1,%0"
>                         : "+m" (v->counter)
> @@ -251,16 +251,16 @@ static inline void atomic64_or(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val | i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
>         return val;
>  }
>
> -static inline void atomic64_xor(long long i, atomic64_t *v)
> +static inline void arch_atomic64_xor(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "xorq %1,%0"
>                         : "+m" (v->counter)
> @@ -268,12 +268,12 @@ static inline void atomic64_xor(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val ^ i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
>         return val;
>  }
>
> diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
> index fb961db51a2a..b4e70a0b1238 100644
> --- a/arch/x86/include/asm/cmpxchg.h
> +++ b/arch/x86/include/asm/cmpxchg.h
> @@ -144,20 +144,20 @@ extern void __add_wrong_size(void)
>  # include <asm/cmpxchg_64.h>
>  #endif
>
> -#define cmpxchg(ptr, old, new)                                         \
> +#define arch_cmpxchg(ptr, old, new)                                    \
>         __cmpxchg(ptr, old, new, sizeof(*(ptr)))
>
> -#define sync_cmpxchg(ptr, old, new)                                    \
> +#define arch_sync_cmpxchg(ptr, old, new)                               \
>         __sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
>
> -#define cmpxchg_local(ptr, old, new)                                   \
> +#define arch_cmpxchg_local(ptr, old, new)                              \
>         __cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
>
>
>  #define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock)               \
>  ({                                                                     \
>         bool success;                                                   \
> -       __typeof__(_ptr) _old = (_pold);                                \
> +       __typeof__(_pold) _old = (_pold);                               \

I think this is not necessary after switching atomic64 to long long.
Will drop this from v2.
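
For context, a minimal standalone sketch of the pointer-type mismatch that
the __typeof__(_pold) spelling avoids; the long/long long stand-ins below are
illustrative assumptions, not the exact kernel types:

	long counter;		/* stand-in for a 'long'-typed v->counter */
	long long expected;	/* stand-in for the caller's *old value */

	void demo(void)
	{
		long *ptr = &counter;
		long long *pold = &expected;

		/* _old typed after the counter pointer: incompatible-pointer warning */
		__typeof__(ptr) old_a = pold;
		/* _old typed after the caller's pointer: clean */
		__typeof__(pold) old_b = pold;

		(void)old_a;
		(void)old_b;
	}

Once the counter and the atomic64 API agree on long long, the two spellings
are equivalent, which is why this hunk can go away.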


>         __typeof__(*(_ptr)) __old = *_old;                              \
>         __typeof__(*(_ptr)) __new = (_new);                             \
>         switch (size) {                                                 \
> @@ -219,7 +219,7 @@ extern void __add_wrong_size(void)
>  #define __try_cmpxchg(ptr, pold, new, size)                            \
>         __raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
>
> -#define try_cmpxchg(ptr, pold, new)                                    \
> +#define arch_try_cmpxchg(ptr, pold, new)                               \
>         __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))


Is try_cmpxchg() part of the public interface like cmpxchg(), or only a
helper used to implement atomic_try_cmpxchg()?
If it's the latter, then we don't need to wrap it.
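
To illustrate the distinction, the generic wrappers added by this series sit
on top of the arch_ functions roughly like the sketch below (the exact shape
of atomic-instrumented.h and the kasan_check_read()/kasan_check_write()
helpers are assumptions based on the rest of the series, not quotes from it):

	static __always_inline int atomic_read(const atomic_t *v)
	{
		kasan_check_read(v, sizeof(*v));
		return arch_atomic_read(v);
	}

	static __always_inline void atomic_inc(atomic_t *v)
	{
		kasan_check_write(v, sizeof(*v));
		arch_atomic_inc(v);
	}

A primitive that is only ever called from inside the arch_ implementations
never sees an instrumented caller, so giving it an arch_ name would add churn
without adding coverage.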

>  /*
> @@ -248,10 +248,10 @@ extern void __add_wrong_size(void)
>         __ret;                                                          \
>  })
>
> -#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
> +#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
>         __cmpxchg_double(LOCK_PREFIX, p1, p2, o1, o2, n1, n2)
>
> -#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
> +#define arch_cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
>         __cmpxchg_double(, p1, p2, o1, o2, n1, n2)
>
>  #endif /* ASM_X86_CMPXCHG_H */
> diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
> index e4959d023af8..d897291d2bf9 100644
> --- a/arch/x86/include/asm/cmpxchg_32.h
> +++ b/arch/x86/include/asm/cmpxchg_32.h
> @@ -35,10 +35,10 @@ static inline void set_64bit(volatile u64 *ptr, u64 value)
>  }
>
>  #ifdef CONFIG_X86_CMPXCHG64
> -#define cmpxchg64(ptr, o, n)                                           \
> +#define arch_cmpxchg64(ptr, o, n)                                      \
>         ((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
>                                          (unsigned long long)(n)))
> -#define cmpxchg64_local(ptr, o, n)                                     \
> +#define arch_cmpxchg64_local(ptr, o, n)                                        \
>         ((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
>                                                (unsigned long long)(n)))
>  #endif
> @@ -75,7 +75,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
>   * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
>   */
>
> -#define cmpxchg64(ptr, o, n)                                   \
> +#define arch_cmpxchg64(ptr, o, n)                              \
>  ({                                                             \
>         __typeof__(*(ptr)) __ret;                               \
>         __typeof__(*(ptr)) __old = (o);                         \
> @@ -92,7 +92,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
>         __ret; })
>
>
> -#define cmpxchg64_local(ptr, o, n)                             \
> +#define arch_cmpxchg64_local(ptr, o, n)                                \
>  ({                                                             \
>         __typeof__(*(ptr)) __ret;                               \
>         __typeof__(*(ptr)) __old = (o);                         \
> diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
> index caa23a34c963..fafaebacca2d 100644
> --- a/arch/x86/include/asm/cmpxchg_64.h
> +++ b/arch/x86/include/asm/cmpxchg_64.h
> @@ -6,13 +6,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
>         *ptr = val;
>  }
>
> -#define cmpxchg64(ptr, o, n)                                           \
> +#define arch_cmpxchg64(ptr, o, n)                                      \
>  ({                                                                     \
>         BUILD_BUG_ON(sizeof(*(ptr)) != 8);                              \
>         cmpxchg((ptr), (o), (n));                                       \
>  })
>
> -#define cmpxchg64_local(ptr, o, n)                                     \
> +#define arch_cmpxchg64_local(ptr, o, n)                                        \
>  ({                                                                     \
>         BUILD_BUG_ON(sizeof(*(ptr)) != 8);                              \
>         cmpxchg_local((ptr), (o), (n));                                 \
> --
> 2.12.2.564.g063fe858b8-goog
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
@ 2017-03-28 16:25     ` Dmitry Vyukov
  0 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:25 UTC (permalink / raw)
  To: Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Andrew Morton, Will Deacon, Andrey Ryabinin, kasan-dev, LKML,
	x86, Dmitry Vyukov, linux-mm

On Tue, Mar 28, 2017 at 6:15 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> Add arch_ prefix to all atomic operations and include
> <asm-generic/atomic-instrumented.h>. This will allow us
> to add KASAN instrumentation to all atomic ops.
>
> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Ingo Molnar <mingo@redhat.com>
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: x86@kernel.org
> ---
>  arch/x86/include/asm/atomic.h      | 110 ++++++++++++++++++++-----------------
>  arch/x86/include/asm/atomic64_32.h | 106 +++++++++++++++++------------------
>  arch/x86/include/asm/atomic64_64.h | 110 ++++++++++++++++++-------------------
>  arch/x86/include/asm/cmpxchg.h     |  14 ++---
>  arch/x86/include/asm/cmpxchg_32.h  |   8 +--
>  arch/x86/include/asm/cmpxchg_64.h  |   4 +-
>  6 files changed, 181 insertions(+), 171 deletions(-)
>
> diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
> index 8d7f6e579be4..92dd59f24eba 100644
> --- a/arch/x86/include/asm/atomic.h
> +++ b/arch/x86/include/asm/atomic.h
> @@ -16,36 +16,42 @@
>  #define ATOMIC_INIT(i) { (i) }
>
>  /**
> - * atomic_read - read atomic variable
> + * arch_atomic_read - read atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically reads the value of @v.
>   */
> -static __always_inline int atomic_read(const atomic_t *v)
> +static __always_inline int arch_atomic_read(const atomic_t *v)
>  {
>         return READ_ONCE((v)->counter);
>  }
>
>  /**
> - * atomic_set - set atomic variable
> + * arch_atomic_set - set atomic variable
>   * @v: pointer of type atomic_t
>   * @i: required value
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static __always_inline void atomic_set(atomic_t *v, int i)
> +static __always_inline void arch_atomic_set(atomic_t *v, int i)
>  {
> +       /*
> +        * We could use WRITE_ONCE_NOCHECK() if it exists, similar to
> +        * READ_ONCE_NOCHECK() in arch_atomic_read(). But there is no such
> +        * thing at the moment, and introducing it for this case is not
> +        * worth it.
> +        */
>         WRITE_ONCE(v->counter, i);
>  }
>
>  /**
> - * atomic_add - add integer to atomic variable
> + * arch_atomic_add - add integer to atomic variable
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static __always_inline void atomic_add(int i, atomic_t *v)
> +static __always_inline void arch_atomic_add(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "addl %1,%0"
>                      : "+m" (v->counter)
> @@ -53,13 +59,13 @@ static __always_inline void atomic_add(int i, atomic_t *v)
>  }
>
>  /**
> - * atomic_sub - subtract integer from atomic variable
> + * arch_atomic_sub - subtract integer from atomic variable
>   * @i: integer value to subtract
>   * @v: pointer of type atomic_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static __always_inline void atomic_sub(int i, atomic_t *v)
> +static __always_inline void arch_atomic_sub(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "subl %1,%0"
>                      : "+m" (v->counter)
> @@ -67,7 +73,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
>  }
>
>  /**
> - * atomic_sub_and_test - subtract value from variable and test result
> + * arch_atomic_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer of type atomic_t
>   *
> @@ -75,63 +81,63 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
> +static __always_inline bool arch_atomic_sub_and_test(int i, atomic_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "subl", v->counter, "er", i, "%0", e);
>  }
>
>  /**
> - * atomic_inc - increment atomic variable
> + * arch_atomic_inc - increment atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static __always_inline void atomic_inc(atomic_t *v)
> +static __always_inline void arch_atomic_inc(atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "incl %0"
>                      : "+m" (v->counter));
>  }
>
>  /**
> - * atomic_dec - decrement atomic variable
> + * arch_atomic_dec - decrement atomic variable
>   * @v: pointer of type atomic_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static __always_inline void atomic_dec(atomic_t *v)
> +static __always_inline void arch_atomic_dec(atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "decl %0"
>                      : "+m" (v->counter));
>  }
>
>  /**
> - * atomic_dec_and_test - decrement and test
> + * arch_atomic_dec_and_test - decrement and test
>   * @v: pointer of type atomic_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static __always_inline bool atomic_dec_and_test(atomic_t *v)
> +static __always_inline bool arch_atomic_dec_and_test(atomic_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "decl", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic_inc_and_test - increment and test
> + * arch_atomic_inc_and_test - increment and test
>   * @v: pointer of type atomic_t
>   *
>   * Atomically increments @v by 1
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static __always_inline bool atomic_inc_and_test(atomic_t *v)
> +static __always_inline bool arch_atomic_inc_and_test(atomic_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "incl", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic_add_negative - add and test if negative
> + * arch_atomic_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
> @@ -139,65 +145,65 @@ static __always_inline bool atomic_inc_and_test(atomic_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static __always_inline bool atomic_add_negative(int i, atomic_t *v)
> +static __always_inline bool arch_atomic_add_negative(int i, atomic_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "addl", v->counter, "er", i, "%0", s);
>  }
>
>  /**
> - * atomic_add_return - add integer and return
> + * arch_atomic_add_return - add integer and return
>   * @i: integer value to add
>   * @v: pointer of type atomic_t
>   *
>   * Atomically adds @i to @v and returns @i + @v
>   */
> -static __always_inline int atomic_add_return(int i, atomic_t *v)
> +static __always_inline int arch_atomic_add_return(int i, atomic_t *v)
>  {
>         return i + xadd(&v->counter, i);
>  }
>
>  /**
> - * atomic_sub_return - subtract integer and return
> + * arch_atomic_sub_return - subtract integer and return
>   * @v: pointer of type atomic_t
>   * @i: integer value to subtract
>   *
>   * Atomically subtracts @i from @v and returns @v - @i
>   */
> -static __always_inline int atomic_sub_return(int i, atomic_t *v)
> +static __always_inline int arch_atomic_sub_return(int i, atomic_t *v)
>  {
> -       return atomic_add_return(-i, v);
> +       return arch_atomic_add_return(-i, v);
>  }
>
> -#define atomic_inc_return(v)  (atomic_add_return(1, v))
> -#define atomic_dec_return(v)  (atomic_sub_return(1, v))
> +#define arch_atomic_inc_return(v)  (arch_atomic_add_return(1, v))
> +#define arch_atomic_dec_return(v)  (arch_atomic_sub_return(1, v))
>
> -static __always_inline int atomic_fetch_add(int i, atomic_t *v)
> +static __always_inline int arch_atomic_fetch_add(int i, atomic_t *v)
>  {
>         return xadd(&v->counter, i);
>  }
>
> -static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
> +static __always_inline int arch_atomic_fetch_sub(int i, atomic_t *v)
>  {
>         return xadd(&v->counter, -i);
>  }
>
> -static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
> +static __always_inline int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
>  {
> -       return cmpxchg(&v->counter, old, new);
> +       return arch_cmpxchg(&v->counter, old, new);
>  }
>
> -#define atomic_try_cmpxchg atomic_try_cmpxchg
> -static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
> +#define arch_atomic_try_cmpxchg arch_atomic_try_cmpxchg
> +static __always_inline bool arch_atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  {
> -       return try_cmpxchg(&v->counter, old, new);
> +       return arch_try_cmpxchg(&v->counter, old, new);
>  }
>
> -static inline int atomic_xchg(atomic_t *v, int new)
> +static inline int arch_atomic_xchg(atomic_t *v, int new)
>  {
>         return xchg(&v->counter, new);
>  }
>
> -static inline void atomic_and(int i, atomic_t *v)
> +static inline void arch_atomic_and(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "andl %1,%0"
>                         : "+m" (v->counter)
> @@ -205,16 +211,16 @@ static inline void atomic_and(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_and(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_and(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val & i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val & i));
>         return val;
>  }
>
> -static inline void atomic_or(int i, atomic_t *v)
> +static inline void arch_atomic_or(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "orl %1,%0"
>                         : "+m" (v->counter)
> @@ -222,17 +228,17 @@ static inline void atomic_or(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_or(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_or(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val | i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val | i));
>         return val;
>  }
>
>
> -static inline void atomic_xor(int i, atomic_t *v)
> +static inline void arch_atomic_xor(int i, atomic_t *v)
>  {
>         asm volatile(LOCK_PREFIX "xorl %1,%0"
>                         : "+m" (v->counter)
> @@ -240,17 +246,17 @@ static inline void atomic_xor(int i, atomic_t *v)
>                         : "memory");
>  }
>
> -static inline int atomic_fetch_xor(int i, atomic_t *v)
> +static inline int arch_atomic_fetch_xor(int i, atomic_t *v)
>  {
> -       int val = atomic_read(v);
> +       int val = arch_atomic_read(v);
>
>         do {
> -       } while (!atomic_try_cmpxchg(v, &val, val ^ i));
> +       } while (!arch_atomic_try_cmpxchg(v, &val, val ^ i));
>         return val;
>  }
>
>  /**
> - * __atomic_add_unless - add unless the number is already a given value
> + * __arch_atomic_add_unless - add unless the number is already a given value
>   * @v: pointer of type atomic_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -258,13 +264,13 @@ static inline int atomic_fetch_xor(int i, atomic_t *v)
>   * Atomically adds @a to @v, so long as @v was not already @u.
>   * Returns the old value of @v.
>   */
> -static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
> +static __always_inline int __arch_atomic_add_unless(atomic_t *v, int a, int u)
>  {
> -       int c = atomic_read(v);
> +       int c = arch_atomic_read(v);
>         do {
>                 if (unlikely(c == u))
>                         break;
> -       } while (!atomic_try_cmpxchg(v, &c, c + a));
> +       } while (!arch_atomic_try_cmpxchg(v, &c, c + a));
>         return c;
>  }
>
> @@ -274,4 +280,6 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  # include <asm/atomic64_64.h>
>  #endif
>
> +#include <asm-generic/atomic-instrumented.h>
> +
>  #endif /* _ASM_X86_ATOMIC_H */
> diff --git a/arch/x86/include/asm/atomic64_32.h b/arch/x86/include/asm/atomic64_32.h
> index f107fef7bfcc..8501e4fc5054 100644
> --- a/arch/x86/include/asm/atomic64_32.h
> +++ b/arch/x86/include/asm/atomic64_32.h
> @@ -61,7 +61,7 @@ ATOMIC64_DECL(add_unless);
>  #undef ATOMIC64_EXPORT
>
>  /**
> - * atomic64_cmpxchg - cmpxchg atomic64 variable
> + * arch_atomic64_cmpxchg - cmpxchg atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @o: expected value
>   * @n: new value
> @@ -70,20 +70,21 @@ ATOMIC64_DECL(add_unless);
>   * the old value.
>   */
>
> -static inline long long atomic64_cmpxchg(atomic64_t *v, long long o, long long n)
> +static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long o,
> +                                             long long n)
>  {
> -       return cmpxchg64(&v->counter, o, n);
> +       return arch_cmpxchg64(&v->counter, o, n);
>  }
>
>  /**
> - * atomic64_xchg - xchg atomic64 variable
> + * arch_atomic64_xchg - xchg atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @n: value to assign
>   *
>   * Atomically xchgs the value of @v to @n and returns
>   * the old value.
>   */
> -static inline long long atomic64_xchg(atomic64_t *v, long long n)
> +static inline long long arch_atomic64_xchg(atomic64_t *v, long long n)
>  {
>         long long o;
>         unsigned high = (unsigned)(n >> 32);
> @@ -95,13 +96,13 @@ static inline long long atomic64_xchg(atomic64_t *v, long long n)
>  }
>
>  /**
> - * atomic64_set - set atomic64 variable
> + * arch_atomic64_set - set atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @i: value to assign
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static inline void atomic64_set(atomic64_t *v, long long i)
> +static inline void arch_atomic64_set(atomic64_t *v, long long i)
>  {
>         unsigned high = (unsigned)(i >> 32);
>         unsigned low = (unsigned)i;
> @@ -111,12 +112,12 @@ static inline void atomic64_set(atomic64_t *v, long long i)
>  }
>
>  /**
> - * atomic64_read - read atomic64 variable
> + * arch_atomic64_read - read atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically reads the value of @v and returns it.
>   */
> -static inline long long atomic64_read(const atomic64_t *v)
> +static inline long long arch_atomic64_read(const atomic64_t *v)
>  {
>         long long r;
>         alternative_atomic64(read, "=&A" (r), "c" (v) : "memory");
> @@ -124,13 +125,13 @@ static inline long long atomic64_read(const atomic64_t *v)
>   }
>
>  /**
> - * atomic64_add_return - add and return
> + * arch_atomic64_add_return - add and return
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v and returns @i + *@v
>   */
> -static inline long long atomic64_add_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
>  {
>         alternative_atomic64(add_return,
>                              ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -141,7 +142,7 @@ static inline long long atomic64_add_return(long long i, atomic64_t *v)
>  /*
>   * Other variants with different arithmetic operators:
>   */
> -static inline long long atomic64_sub_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
>  {
>         alternative_atomic64(sub_return,
>                              ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -149,7 +150,7 @@ static inline long long atomic64_sub_return(long long i, atomic64_t *v)
>         return i;
>  }
>
> -static inline long long atomic64_inc_return(atomic64_t *v)
> +static inline long long arch_atomic64_inc_return(atomic64_t *v)
>  {
>         long long a;
>         alternative_atomic64(inc_return, "=&A" (a),
> @@ -157,7 +158,7 @@ static inline long long atomic64_inc_return(atomic64_t *v)
>         return a;
>  }
>
> -static inline long long atomic64_dec_return(atomic64_t *v)
> +static inline long long arch_atomic64_dec_return(atomic64_t *v)
>  {
>         long long a;
>         alternative_atomic64(dec_return, "=&A" (a),
> @@ -166,13 +167,13 @@ static inline long long atomic64_dec_return(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_add - add integer to atomic64 variable
> + * arch_atomic64_add - add integer to atomic64 variable
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static inline long long atomic64_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_add(long long i, atomic64_t *v)
>  {
>         __alternative_atomic64(add, add_return,
>                                ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -181,13 +182,13 @@ static inline long long atomic64_add(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub - subtract the atomic64 variable
> + * arch_atomic64_sub - subtract the atomic64 variable
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static inline long long atomic64_sub(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub(long long i, atomic64_t *v)
>  {
>         __alternative_atomic64(sub, sub_return,
>                                ASM_OUTPUT2("+A" (i), "+c" (v)),
> @@ -196,7 +197,7 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub_and_test - subtract value from variable and test result
> + * arch_atomic64_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
> @@ -204,46 +205,46 @@ static inline long long atomic64_sub(long long i, atomic64_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline int atomic64_sub_and_test(long long i, atomic64_t *v)
> +static inline int arch_atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
> -       return atomic64_sub_return(i, v) == 0;
> +       return arch_atomic64_sub_return(i, v) == 0;
>  }
>
>  /**
> - * atomic64_inc - increment atomic64 variable
> + * arch_atomic64_inc - increment atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static inline void atomic64_inc(atomic64_t *v)
> +static inline void arch_atomic64_inc(atomic64_t *v)
>  {
>         __alternative_atomic64(inc, inc_return, /* no output */,
>                                "S" (v) : "memory", "eax", "ecx", "edx");
>  }
>
>  /**
> - * atomic64_dec - decrement atomic64 variable
> + * arch_atomic64_dec - decrement atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static inline void atomic64_dec(atomic64_t *v)
> +static inline void arch_atomic64_dec(atomic64_t *v)
>  {
>         __alternative_atomic64(dec, dec_return, /* no output */,
>                                "S" (v) : "memory", "eax", "ecx", "edx");
>  }
>
>  /**
> - * atomic64_dec_and_test - decrement and test
> + * arch_atomic64_dec_and_test - decrement and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static inline int atomic64_dec_and_test(atomic64_t *v)
> +static inline int arch_atomic64_dec_and_test(atomic64_t *v)
>  {
> -       return atomic64_dec_return(v) == 0;
> +       return arch_atomic64_dec_return(v) == 0;
>  }
>
>  /**
> @@ -254,13 +255,13 @@ static inline int atomic64_dec_and_test(atomic64_t *v)
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline int atomic64_inc_and_test(atomic64_t *v)
> +static inline int arch_atomic64_inc_and_test(atomic64_t *v)
>  {
> -       return atomic64_inc_return(v) == 0;
> +       return arch_atomic64_inc_return(v) == 0;
>  }
>
>  /**
> - * atomic64_add_negative - add and test if negative
> + * arch_atomic64_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
> @@ -268,13 +269,13 @@ static inline int atomic64_inc_and_test(atomic64_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static inline int atomic64_add_negative(long long i, atomic64_t *v)
> +static inline int arch_atomic64_add_negative(long long i, atomic64_t *v)
>  {
> -       return atomic64_add_return(i, v) < 0;
> +       return arch_atomic64_add_return(i, v) < 0;
>  }
>
>  /**
> - * atomic64_add_unless - add unless the number is a given value
> + * arch_atomic64_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic64_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -282,7 +283,8 @@ static inline int atomic64_add_negative(long long i, atomic64_t *v)
>   * Atomically adds @a to @v, so long as it was not @u.
>   * Returns non-zero if the add was done, zero otherwise.
>   */
> -static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
> +static inline int arch_atomic64_add_unless(atomic64_t *v, long long a,
> +                                          long long u)
>  {
>         unsigned low = (unsigned)u;
>         unsigned high = (unsigned)(u >> 32);
> @@ -293,7 +295,7 @@ static inline int atomic64_add_unless(atomic64_t *v, long long a, long long u)
>  }
>
>
> -static inline int atomic64_inc_not_zero(atomic64_t *v)
> +static inline int arch_atomic64_inc_not_zero(atomic64_t *v)
>  {
>         int r;
>         alternative_atomic64(inc_not_zero, "=&a" (r),
> @@ -301,7 +303,7 @@ static inline int atomic64_inc_not_zero(atomic64_t *v)
>         return r;
>  }
>
> -static inline long long atomic64_dec_if_positive(atomic64_t *v)
> +static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
>  {
>         long long r;
>         alternative_atomic64(dec_if_positive, "=&A" (r),
> @@ -312,66 +314,66 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
>  #undef alternative_atomic64
>  #undef __alternative_atomic64
>
> -static inline void atomic64_and(long long i, atomic64_t *v)
> +static inline void arch_atomic64_and(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c & i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c & i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline void atomic64_or(long long i, atomic64_t *v)
> +static inline void arch_atomic64_or(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c | i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c | i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline void atomic64_xor(long long i, atomic64_t *v)
> +static inline void arch_atomic64_xor(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
>                 c = old;
>  }
>
> -static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c ^ i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c ^ i)) != c)
>                 c = old;
>         return old;
>  }
>
> -static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
>  {
>         long long old, c = 0;
>
> -       while ((old = atomic64_cmpxchg(v, c, c + i)) != c)
> +       while ((old = arch_atomic64_cmpxchg(v, c, c + i)) != c)
>                 c = old;
>         return old;
>  }
>
> -#define atomic64_fetch_sub(i, v)       atomic64_fetch_add(-(i), (v))
> +#define arch_atomic64_fetch_sub(i, v)  arch_atomic64_fetch_add(-(i), (v))
>
>  #endif /* _ASM_X86_ATOMIC64_32_H */
> diff --git a/arch/x86/include/asm/atomic64_64.h b/arch/x86/include/asm/atomic64_64.h
> index a62982a2b534..6b6873e4d4e8 100644
> --- a/arch/x86/include/asm/atomic64_64.h
> +++ b/arch/x86/include/asm/atomic64_64.h
> @@ -10,37 +10,37 @@
>  #define ATOMIC64_INIT(i)       { (i) }
>
>  /**
> - * atomic64_read - read atomic64 variable
> + * arch_atomic64_read - read atomic64 variable
>   * @v: pointer of type atomic64_t
>   *
>   * Atomically reads the value of @v.
>   * Doesn't imply a read memory barrier.
>   */
> -static inline long long atomic64_read(const atomic64_t *v)
> +static inline long long arch_atomic64_read(const atomic64_t *v)
>  {
>         return READ_ONCE((v)->counter);
>  }
>
>  /**
> - * atomic64_set - set atomic64 variable
> + * arch_atomic64_set - set atomic64 variable
>   * @v: pointer to type atomic64_t
>   * @i: required value
>   *
>   * Atomically sets the value of @v to @i.
>   */
> -static inline void atomic64_set(atomic64_t *v, long long i)
> +static inline void arch_atomic64_set(atomic64_t *v, long long i)
>  {
>         WRITE_ONCE(v->counter, i);
>  }
>
>  /**
> - * atomic64_add - add integer to atomic64 variable
> + * arch_atomic64_add - add integer to atomic64 variable
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v.
>   */
> -static __always_inline void atomic64_add(long long i, atomic64_t *v)
> +static __always_inline void arch_atomic64_add(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "addq %1,%0"
>                      : "=m" (v->counter)
> @@ -48,13 +48,13 @@ static __always_inline void atomic64_add(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub - subtract the atomic64 variable
> + * arch_atomic64_sub - subtract the atomic64 variable
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically subtracts @i from @v.
>   */
> -static inline void atomic64_sub(long long i, atomic64_t *v)
> +static inline void arch_atomic64_sub(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "subq %1,%0"
>                      : "=m" (v->counter)
> @@ -62,7 +62,7 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
>  }
>
>  /**
> - * atomic64_sub_and_test - subtract value from variable and test result
> + * arch_atomic64_sub_and_test - subtract value from variable and test result
>   * @i: integer value to subtract
>   * @v: pointer to type atomic64_t
>   *
> @@ -70,18 +70,18 @@ static inline void atomic64_sub(long long i, atomic64_t *v)
>   * true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
> +static inline bool arch_atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "subq", v->counter, "er", i, "%0", e);
>  }
>
>  /**
> - * atomic64_inc - increment atomic64 variable
> + * arch_atomic64_inc - increment atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1.
>   */
> -static __always_inline void atomic64_inc(atomic64_t *v)
> +static __always_inline void arch_atomic64_inc(atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "incq %0"
>                      : "=m" (v->counter)
> @@ -89,12 +89,12 @@ static __always_inline void atomic64_inc(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_dec - decrement atomic64 variable
> + * arch_atomic64_dec - decrement atomic64 variable
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1.
>   */
> -static __always_inline void atomic64_dec(atomic64_t *v)
> +static __always_inline void arch_atomic64_dec(atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "decq %0"
>                      : "=m" (v->counter)
> @@ -102,33 +102,33 @@ static __always_inline void atomic64_dec(atomic64_t *v)
>  }
>
>  /**
> - * atomic64_dec_and_test - decrement and test
> + * arch_atomic64_dec_and_test - decrement and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically decrements @v by 1 and
>   * returns true if the result is 0, or false for all other
>   * cases.
>   */
> -static inline bool atomic64_dec_and_test(atomic64_t *v)
> +static inline bool arch_atomic64_dec_and_test(atomic64_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "decq", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic64_inc_and_test - increment and test
> + * arch_atomic64_inc_and_test - increment and test
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically increments @v by 1
>   * and returns true if the result is zero, or false for all
>   * other cases.
>   */
> -static inline bool atomic64_inc_and_test(atomic64_t *v)
> +static inline bool arch_atomic64_inc_and_test(atomic64_t *v)
>  {
>         GEN_UNARY_RMWcc(LOCK_PREFIX "incq", v->counter, "%0", e);
>  }
>
>  /**
> - * atomic64_add_negative - add and test if negative
> + * arch_atomic64_add_negative - add and test if negative
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
> @@ -136,59 +136,59 @@ static inline bool atomic64_inc_and_test(atomic64_t *v)
>   * if the result is negative, or false when
>   * result is greater than or equal to zero.
>   */
> -static inline bool atomic64_add_negative(long long i, atomic64_t *v)
> +static inline bool arch_atomic64_add_negative(long long i, atomic64_t *v)
>  {
>         GEN_BINARY_RMWcc(LOCK_PREFIX "addq", v->counter, "er", i, "%0", s);
>  }
>
>  /**
> - * atomic64_add_return - add and return
> + * arch_atomic64_add_return - add and return
>   * @i: integer value to add
>   * @v: pointer to type atomic64_t
>   *
>   * Atomically adds @i to @v and returns @i + @v
>   */
> -static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
> +static __always_inline long long arch_atomic64_add_return(long long i, atomic64_t *v)
>  {
>         return i + xadd(&v->counter, i);
>  }
>
> -static inline long long atomic64_sub_return(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_sub_return(long long i, atomic64_t *v)
>  {
> -       return atomic64_add_return(-i, v);
> +       return arch_atomic64_add_return(-i, v);
>  }
>
> -static inline long long atomic64_fetch_add(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_add(long long i, atomic64_t *v)
>  {
>         return xadd(&v->counter, i);
>  }
>
> -static inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_sub(long long i, atomic64_t *v)
>  {
>         return xadd(&v->counter, -i);
>  }
>
> -#define atomic64_inc_return(v)  (atomic64_add_return(1, (v)))
> -#define atomic64_dec_return(v)  (atomic64_sub_return(1, (v)))
> +#define arch_atomic64_inc_return(v)  (arch_atomic64_add_return(1, (v)))
> +#define arch_atomic64_dec_return(v)  (arch_atomic64_sub_return(1, (v)))
>
> -static inline long long atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
> +static inline long long arch_atomic64_cmpxchg(atomic64_t *v, long long old, long long new)
>  {
> -       return cmpxchg(&v->counter, old, new);
> +       return arch_cmpxchg(&v->counter, old, new);
>  }
>
> -#define atomic64_try_cmpxchg atomic64_try_cmpxchg
> -static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
> +#define arch_atomic64_try_cmpxchg arch_atomic64_try_cmpxchg
> +static __always_inline bool arch_atomic64_try_cmpxchg(atomic64_t *v, long long *old, long long new)
>  {
> -       return try_cmpxchg(&v->counter, old, new);
> +       return arch_try_cmpxchg(&v->counter, old, new);
>  }
>
> -static inline long long atomic64_xchg(atomic64_t *v, long long new)
> +static inline long long arch_atomic64_xchg(atomic64_t *v, long long new)
>  {
>         return xchg(&v->counter, new);
>  }
>
>  /**
> - * atomic64_add_unless - add unless the number is a given value
> + * arch_atomic64_add_unless - add unless the number is a given value
>   * @v: pointer of type atomic64_t
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
> @@ -196,37 +196,37 @@ static inline long long atomic64_xchg(atomic64_t *v, long long new)
>   * Atomically adds @a to @v, so long long as it was not @u.
>   * Returns the old value of @v.
>   */
> -static inline bool atomic64_add_unless(atomic64_t *v, long long a, long long u)
> +static inline bool arch_atomic64_add_unless(atomic64_t *v, long long a, long long u)
>  {
> -       long long c = atomic64_read(v);
> +       long long c = arch_atomic64_read(v);
>         do {
>                 if (unlikely(c == u))
>                         return false;
> -       } while (!atomic64_try_cmpxchg(v, &c, c + a));
> +       } while (!arch_atomic64_try_cmpxchg(v, &c, c + a));
>         return true;
>  }
>
> -#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
> +#define arch_atomic64_inc_not_zero(v) arch_atomic64_add_unless((v), 1, 0)
>
>  /*
> - * atomic64_dec_if_positive - decrement by 1 if old value positive
> + * arch_atomic64_dec_if_positive - decrement by 1 if old value positive
>   * @v: pointer of type atomic_t
>   *
>   * The function returns the old value of *v minus 1, even if
>   * the atomic variable, v, was not decremented.
>   */
> -static inline long long atomic64_dec_if_positive(atomic64_t *v)
> +static inline long long arch_atomic64_dec_if_positive(atomic64_t *v)
>  {
> -       long long dec, c = atomic64_read(v);
> +       long long dec, c = arch_atomic64_read(v);
>         do {
>                 dec = c - 1;
>                 if (unlikely(dec < 0))
>                         break;
> -       } while (!atomic64_try_cmpxchg(v, &c, dec));
> +       } while (!arch_atomic64_try_cmpxchg(v, &c, dec));
>         return dec;
>  }
>
> -static inline void atomic64_and(long long i, atomic64_t *v)
> +static inline void arch_atomic64_and(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "andq %1,%0"
>                         : "+m" (v->counter)
> @@ -234,16 +234,16 @@ static inline void atomic64_and(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_and(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_and(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val & i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val & i));
>         return val;
>  }
>
> -static inline void atomic64_or(long long i, atomic64_t *v)
> +static inline void arch_atomic64_or(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "orq %1,%0"
>                         : "+m" (v->counter)
> @@ -251,16 +251,16 @@ static inline void atomic64_or(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_or(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_or(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val | i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val | i));
>         return val;
>  }
>
> -static inline void atomic64_xor(long long i, atomic64_t *v)
> +static inline void arch_atomic64_xor(long long i, atomic64_t *v)
>  {
>         asm volatile(LOCK_PREFIX "xorq %1,%0"
>                         : "+m" (v->counter)
> @@ -268,12 +268,12 @@ static inline void atomic64_xor(long long i, atomic64_t *v)
>                         : "memory");
>  }
>
> -static inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
> +static inline long long arch_atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
> -       long long val = atomic64_read(v);
> +       long long val = arch_atomic64_read(v);
>
>         do {
> -       } while (!atomic64_try_cmpxchg(v, &val, val ^ i));
> +       } while (!arch_atomic64_try_cmpxchg(v, &val, val ^ i));
>         return val;
>  }
>
> diff --git a/arch/x86/include/asm/cmpxchg.h b/arch/x86/include/asm/cmpxchg.h
> index fb961db51a2a..b4e70a0b1238 100644
> --- a/arch/x86/include/asm/cmpxchg.h
> +++ b/arch/x86/include/asm/cmpxchg.h
> @@ -144,20 +144,20 @@ extern void __add_wrong_size(void)
>  # include <asm/cmpxchg_64.h>
>  #endif
>
> -#define cmpxchg(ptr, old, new)                                         \
> +#define arch_cmpxchg(ptr, old, new)                                    \
>         __cmpxchg(ptr, old, new, sizeof(*(ptr)))
>
> -#define sync_cmpxchg(ptr, old, new)                                    \
> +#define arch_sync_cmpxchg(ptr, old, new)                               \
>         __sync_cmpxchg(ptr, old, new, sizeof(*(ptr)))
>
> -#define cmpxchg_local(ptr, old, new)                                   \
> +#define arch_cmpxchg_local(ptr, old, new)                              \
>         __cmpxchg_local(ptr, old, new, sizeof(*(ptr)))
>
>
>  #define __raw_try_cmpxchg(_ptr, _pold, _new, size, lock)               \
>  ({                                                                     \
>         bool success;                                                   \
> -       __typeof__(_ptr) _old = (_pold);                                \
> +       __typeof__(_pold) _old = (_pold);                               \

I think this is not necessary after switching atomic64 to long long.
Will drop this from v2.


>         __typeof__(*(_ptr)) __old = *_old;                              \
>         __typeof__(*(_ptr)) __new = (_new);                             \
>         switch (size) {                                                 \
> @@ -219,7 +219,7 @@ extern void __add_wrong_size(void)
>  #define __try_cmpxchg(ptr, pold, new, size)                            \
>         __raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
>
> -#define try_cmpxchg(ptr, pold, new)                                    \
> +#define arch_try_cmpxchg(ptr, pold, new)                               \
>         __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))


Is try_cmpxchg() part of the public interface like cmpxchg(), or only a
helper to implement atomic_try_cmpxchg()?
If it's the latter, then we don't need to wrap it.

>  /*
> @@ -248,10 +248,10 @@ extern void __add_wrong_size(void)
>         __ret;                                                          \
>  })
>
> -#define cmpxchg_double(p1, p2, o1, o2, n1, n2) \
> +#define arch_cmpxchg_double(p1, p2, o1, o2, n1, n2) \
>         __cmpxchg_double(LOCK_PREFIX, p1, p2, o1, o2, n1, n2)
>
> -#define cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
> +#define arch_cmpxchg_double_local(p1, p2, o1, o2, n1, n2) \
>         __cmpxchg_double(, p1, p2, o1, o2, n1, n2)
>
>  #endif /* ASM_X86_CMPXCHG_H */
> diff --git a/arch/x86/include/asm/cmpxchg_32.h b/arch/x86/include/asm/cmpxchg_32.h
> index e4959d023af8..d897291d2bf9 100644
> --- a/arch/x86/include/asm/cmpxchg_32.h
> +++ b/arch/x86/include/asm/cmpxchg_32.h
> @@ -35,10 +35,10 @@ static inline void set_64bit(volatile u64 *ptr, u64 value)
>  }
>
>  #ifdef CONFIG_X86_CMPXCHG64
> -#define cmpxchg64(ptr, o, n)                                           \
> +#define arch_cmpxchg64(ptr, o, n)                                      \
>         ((__typeof__(*(ptr)))__cmpxchg64((ptr), (unsigned long long)(o), \
>                                          (unsigned long long)(n)))
> -#define cmpxchg64_local(ptr, o, n)                                     \
> +#define arch_cmpxchg64_local(ptr, o, n)                                        \
>         ((__typeof__(*(ptr)))__cmpxchg64_local((ptr), (unsigned long long)(o), \
>                                                (unsigned long long)(n)))
>  #endif
> @@ -75,7 +75,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
>   * to simulate the cmpxchg8b on the 80386 and 80486 CPU.
>   */
>
> -#define cmpxchg64(ptr, o, n)                                   \
> +#define arch_cmpxchg64(ptr, o, n)                              \
>  ({                                                             \
>         __typeof__(*(ptr)) __ret;                               \
>         __typeof__(*(ptr)) __old = (o);                         \
> @@ -92,7 +92,7 @@ static inline u64 __cmpxchg64_local(volatile u64 *ptr, u64 old, u64 new)
>         __ret; })
>
>
> -#define cmpxchg64_local(ptr, o, n)                             \
> +#define arch_cmpxchg64_local(ptr, o, n)                                \
>  ({                                                             \
>         __typeof__(*(ptr)) __ret;                               \
>         __typeof__(*(ptr)) __old = (o);                         \
> diff --git a/arch/x86/include/asm/cmpxchg_64.h b/arch/x86/include/asm/cmpxchg_64.h
> index caa23a34c963..fafaebacca2d 100644
> --- a/arch/x86/include/asm/cmpxchg_64.h
> +++ b/arch/x86/include/asm/cmpxchg_64.h
> @@ -6,13 +6,13 @@ static inline void set_64bit(volatile u64 *ptr, u64 val)
>         *ptr = val;
>  }
>
> -#define cmpxchg64(ptr, o, n)                                           \
> +#define arch_cmpxchg64(ptr, o, n)                                      \
>  ({                                                                     \
>         BUILD_BUG_ON(sizeof(*(ptr)) != 8);                              \
>         cmpxchg((ptr), (o), (n));                                       \
>  })
>
> -#define cmpxchg64_local(ptr, o, n)                                     \
> +#define arch_cmpxchg64_local(ptr, o, n)                                        \
>  ({                                                                     \
>         BUILD_BUG_ON(sizeof(*(ptr)) != 8);                              \
>         cmpxchg_local((ptr), (o), (n));                                 \
> --
> 2.12.2.564.g063fe858b8-goog
>


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations
  2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
                   ` (7 preceding siblings ...)
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-28 16:26 ` Dmitry Vyukov
  8 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-28 16:26 UTC (permalink / raw)
  To: Mark Rutland, Peter Zijlstra, Ingo Molnar
  Cc: Andrew Morton, Will Deacon, Andrey Ryabinin, kasan-dev, LKML,
	x86, Dmitry Vyukov

Andrew,

This will go to tip/locking/core since it contains a bunch of
conflicting patches.
So please drop the following patches of mine from -mm:

x86: remove unused atomic_inc_short()
x86, asm-generic: add KASAN instrumentation to bitops
x86: s/READ_ONCE_NOCHECK/READ_ONCE/ in arch_atomic_read()
kasan: allow kasan_check_read/write() to accept pointers to volatiles
asm-generic, x86: wrap atomic operations
asm-generic: add KASAN instrumentation to atomic operations
asm-generic: fix compilation failure in cmpxchg_double()




On Tue, Mar 28, 2017 at 6:15 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> KASAN uses compiler instrumentation to intercept all memory accesses.
> But it does not see memory accesses done in assembly code.
> One notable user of assembly code is atomic operations. Frequently,
> for example, an atomic reference decrement is the last access to an
> object and a good candidate for a racy use-after-free.
>
> Atomic operations are defined in arch files, but KASAN instrumentation
> is required for several archs that support KASAN. Later we will need
> similar hooks for KMSAN (uninit use detector) and KTSAN (data race
> detector).
>
> This change introduces wrappers around atomic operations that can be
> used to add KASAN/KMSAN/KTSAN instrumentation across several archs,
> and adds KASAN checks to them.
>
> This patch uses the wrappers only for x86 arch. Arm64 will be switched
> later. And we also plan to instrument bitops in a similar way.
>
> Within a day it has found its first bug:
>
> BUG: KASAN: use-after-free in atomic_dec_and_test
> arch/x86/include/asm/atomic.h:123 [inline] at addr ffff880079c30158
> Write of size 4 by task syz-executor6/25698
> CPU: 2 PID: 25698 Comm: syz-executor6 Not tainted 4.10.0+ #302
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
> Call Trace:
>  kasan_check_write+0x14/0x20 mm/kasan/kasan.c:344
>  atomic_dec_and_test arch/x86/include/asm/atomic.h:123 [inline]
>  put_task_struct include/linux/sched/task.h:93 [inline]
>  put_ctx+0xcf/0x110 kernel/events/core.c:1131
>  perf_event_release_kernel+0x3ad/0xc90 kernel/events/core.c:4322
>  perf_release+0x37/0x50 kernel/events/core.c:4338
>  __fput+0x332/0x800 fs/file_table.c:209
>  ____fput+0x15/0x20 fs/file_table.c:245
>  task_work_run+0x197/0x260 kernel/task_work.c:116
>  exit_task_work include/linux/task_work.h:21 [inline]
>  do_exit+0xb38/0x29c0 kernel/exit.c:880
>  do_group_exit+0x149/0x420 kernel/exit.c:984
>  get_signal+0x7e0/0x1820 kernel/signal.c:2318
>  do_signal+0xd2/0x2190 arch/x86/kernel/signal.c:808
>  exit_to_usermode_loop+0x200/0x2a0 arch/x86/entry/common.c:157
>  syscall_return_slowpath arch/x86/entry/common.c:191 [inline]
>  do_syscall_64+0x6fc/0x930 arch/x86/entry/common.c:286
>  entry_SYSCALL64_slow_path+0x25/0x25
> RIP: 0033:0x4458d9
> RSP: 002b:00007f3f07187cf8 EFLAGS: 00000246 ORIG_RAX: 00000000000000ca
> RAX: fffffffffffffe00 RBX: 00000000007080c8 RCX: 00000000004458d9
> RDX: 0000000000000000 RSI: 0000000000000000 RDI: 00000000007080c8
> RBP: 00000000007080a8 R08: 0000000000000000 R09: 0000000000000000
> R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
> R13: 0000000000000000 R14: 00007f3f071889c0 R15: 00007f3f07188700
> Object at ffff880079c30140, in cache task_struct size: 5376
> Allocated:
> PID = 25681
>  kmem_cache_alloc_node+0x122/0x6f0 mm/slab.c:3662
>  alloc_task_struct_node kernel/fork.c:153 [inline]
>  dup_task_struct kernel/fork.c:495 [inline]
>  copy_process.part.38+0x19c8/0x4aa0 kernel/fork.c:1560
>  copy_process kernel/fork.c:1531 [inline]
>  _do_fork+0x200/0x1010 kernel/fork.c:1994
>  SYSC_clone kernel/fork.c:2104 [inline]
>  SyS_clone+0x37/0x50 kernel/fork.c:2098
>  do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281
>  return_from_SYSCALL_64+0x0/0x7a
> Freed:
> PID = 25681
>  __cache_free mm/slab.c:3514 [inline]
>  kmem_cache_free+0x71/0x240 mm/slab.c:3774
>  free_task_struct kernel/fork.c:158 [inline]
>  free_task+0x151/0x1d0 kernel/fork.c:370
>  copy_process.part.38+0x18e5/0x4aa0 kernel/fork.c:1931
>  copy_process kernel/fork.c:1531 [inline]
>  _do_fork+0x200/0x1010 kernel/fork.c:1994
>  SYSC_clone kernel/fork.c:2104 [inline]
>  SyS_clone+0x37/0x50 kernel/fork.c:2098
>  do_syscall_64+0x2e8/0x930 arch/x86/entry/common.c:281
>  return_from_SYSCALL_64+0x0/0x7a
>
> Dmitry Vyukov (8):
>   x86: remove unused atomic_inc_short()
>   x86: un-macro-ify atomic ops implementation
>   x86: use long long for 64-bit atomic ops
>   asm-generic: add atomic-instrumented.h
>   x86: switch atomic.h to use atomic-instrumented.h
>   kasan: allow kasan_check_read/write() to accept pointers to volatiles
>   asm-generic: add KASAN instrumentation to atomic operations
>   asm-generic, x86: add comments for atomic instrumentation
>
>  arch/tile/lib/atomic_asm_32.S             |   3 +-
>  arch/x86/include/asm/atomic.h             | 174 +++++++------
>  arch/x86/include/asm/atomic64_32.h        | 153 ++++++-----
>  arch/x86/include/asm/atomic64_64.h        | 155 ++++++-----
>  arch/x86/include/asm/cmpxchg.h            |  14 +-
>  arch/x86/include/asm/cmpxchg_32.h         |   8 +-
>  arch/x86/include/asm/cmpxchg_64.h         |   4 +-
>  include/asm-generic/atomic-instrumented.h | 417 ++++++++++++++++++++++++++++++
>  include/linux/kasan-checks.h              |  10 +-
>  include/linux/types.h                     |   2 +-
>  mm/kasan/kasan.c                          |   4 +-
>  11 files changed, 719 insertions(+), 225 deletions(-)
>  create mode 100644 include/asm-generic/atomic-instrumented.h
>
> --
> 2.12.2.564.g063fe858b8-goog
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 3/8] x86: use long long for 64-bit atomic ops
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-28 21:32     ` Matthew Wilcox
  -1 siblings, 0 replies; 44+ messages in thread
From: Matthew Wilcox @ 2017-03-28 21:32 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: mark.rutland, peterz, mingo, akpm, will.deacon, aryabinin,
	kasan-dev, linux-kernel, x86, linux-mm

On Tue, Mar 28, 2017 at 06:15:40PM +0200, Dmitry Vyukov wrote:
> @@ -193,12 +193,12 @@ static inline long atomic64_xchg(atomic64_t *v, long new)
>   * @a: the amount to add to v...
>   * @u: ...unless v is equal to u.
>   *
> - * Atomically adds @a to @v, so long as it was not @u.
> + * Atomically adds @a to @v, so long long as it was not @u.
>   * Returns the old value of @v.
>   */

That's a clbuttic mistake!

https://www.google.com/search?q=clbuttic

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-28 21:35     ` Matthew Wilcox
  -1 siblings, 0 replies; 44+ messages in thread
From: Matthew Wilcox @ 2017-03-28 21:35 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: mark.rutland, peterz, mingo, akpm, will.deacon, aryabinin,
	kasan-dev, linux-kernel, x86, linux-mm

On Tue, Mar 28, 2017 at 06:15:41PM +0200, Dmitry Vyukov wrote:
> The new header allows us to wrap per-arch atomic operations
> and add common functionality to all of them.

Why a new header instead of putting this in linux/atomic.h?

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-28 21:35     ` Matthew Wilcox
@ 2017-03-29  8:21       ` Dmitry Vyukov
  -1 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-29  8:21 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Mark Rutland, Peter Zijlstra, Ingo Molnar, Andrew Morton,
	Will Deacon, Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Tue, Mar 28, 2017 at 11:35 PM, Matthew Wilcox <willy@infradead.org> wrote:
> On Tue, Mar 28, 2017 at 06:15:41PM +0200, Dmitry Vyukov wrote:
>> The new header allows us to wrap per-arch atomic operations
>> and add common functionality to all of them.
>
> Why a new header instead of putting this in linux/atomic.h?


Only a subset of archs includes this header. If we pre-include it for
all arches without changing their atomic.h, we will break the build. We
could, of course, play some tricks with the preprocessor.
It's also large enough to be worth a separate header, IMO.
Also, a reasonable counter-question: why put it into linux/atomic.h
instead of a new header? :)
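
Roughly, the layout we end up with looks like this (a sketch rather
than a verbatim quote of the patches; the exact include point in the
x86 header is simplified here):

/* arch/x86/include/asm/atomic.h (sketch) */
static __always_inline int arch_atomic_read(const atomic_t *v)
{
	/* the arch now only provides arch_*-prefixed ops */
	return READ_ONCE((v)->counter);
}
/* ... the remaining arch_atomic*() and arch_cmpxchg*() ops ... */

/* pulled in last; defines the unprefixed atomic_*() wrappers, so only
 * arches that did the arch_* rename ever see this header */
#include <asm-generic/atomic-instrumented.h>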

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-28 21:35     ` Matthew Wilcox
@ 2017-03-29 13:27       ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 13:27 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Dmitry Vyukov, peterz, mingo, akpm, will.deacon, aryabinin,
	kasan-dev, linux-kernel, x86, linux-mm

On Tue, Mar 28, 2017 at 02:35:13PM -0700, Matthew Wilcox wrote:
> On Tue, Mar 28, 2017 at 06:15:41PM +0200, Dmitry Vyukov wrote:
> > The new header allows us to wrap per-arch atomic operations
> > and add common functionality to all of them.
> 
> Why a new header instead of putting this in linux/atomic.h?

The idea was that doing it this way allowed architectures to switch over
to the arch_* naming without a flag day. Currently this only matters for
KASAN, which is only supported by a couple of architectures (arm64,
x86).

I seem to recall that there was an issue that prevented us from solving
this with ifdeffery early in linux/atomic.h like:

#ifdef arch_op
#define op(...) ({ 		\
	kasan_whatever(...)	\
	arch_op(...)		\
})
#endif

... but I can't recall specifically what it was.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
  2017-03-28 16:25     ` Dmitry Vyukov
@ 2017-03-29 13:37       ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 13:37 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Andrew Morton, Will Deacon,
	Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Tue, Mar 28, 2017 at 06:25:07PM +0200, Dmitry Vyukov wrote:
> On Tue, Mar 28, 2017 at 6:15 PM, Dmitry Vyukov <dvyukov@google.com> wrote:

> >  #define __try_cmpxchg(ptr, pold, new, size)                            \
> >         __raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
> >
> > -#define try_cmpxchg(ptr, pold, new)                                    \
> > +#define arch_try_cmpxchg(ptr, pold, new)                               \
> >         __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
> 
> Is try_cmpxchg() part of the public interface like cmpxchg(), or only a
> helper to implement atomic_try_cmpxchg()?
> If it's the latter, then we don't need to wrap it.

De facto, it's an x86-specific helper. It was added in commit:

    a9ebf306f52c756c ("locking/atomic: Introduce atomic_try_cmpxchg()")

... which did not add try_cmpxchg to any generic header.

If it was meant to be part of the public interface, we'd need a generic
definition.
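
If it ever were meant to be public, a generic fallback on top of
cmpxchg() could look roughly like the sketch below (not something this
series adds; the name and exact form are illustrative only):

#ifndef try_cmpxchg
#define try_cmpxchg(ptr, pold, new)				\
({								\
	typeof(ptr) ___p = (ptr);				\
	typeof(pold) ___pp = (pold);				\
	typeof(*(ptr)) ___old = *___pp;				\
	typeof(*(ptr)) ___cur = cmpxchg(___p, ___old, (new));	\
	if (___cur != ___old)					\
		*___pp = ___cur;				\
	___cur == ___old;					\
})
#endif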

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-29 14:00     ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 14:00 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: peterz, mingo, akpm, will.deacon, aryabinin, kasan-dev,
	linux-kernel, x86, linux-mm

On Tue, Mar 28, 2017 at 06:15:44PM +0200, Dmitry Vyukov wrote:
> KASAN uses compiler instrumentation to intercept all memory accesses.
> But it does not see memory accesses done in assembly code.
> One notable user of assembly code is atomic operations. Frequently,
> for example, an atomic reference decrement is the last access to an
> object and a good candidate for a racy use-after-free.
> 
> Add manual KASAN checks to atomic operations.
> 
> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>,
> Cc: Andrew Morton <akpm@linux-foundation.org>,
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
> Cc: Ingo Molnar <mingo@redhat.com>,
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: x86@kernel.org

FWIW, I think that structuring the file this way will make it easier to
add the {acquire,release,relaxed} variants (as arm64 will need),
so this looks good to me.
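
For instance, an _acquire form would presumably just repeat the same
pattern, something like the sketch below (the arch_*_acquire op and its
same-named macro marker are assumed here; this series does not add them
yet):

#ifdef arch_atomic_add_return_acquire
static __always_inline int atomic_add_return_acquire(int i, atomic_t *v)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_add_return_acquire(i, v);
}
#endif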

As a heads-up, I wanted to have a go at that, but I wasn't able to apply
patch two onwards on v4.11-rc{3,4} or next-20170329. I was not able to
cleanly revert the instrumentation patches currently in next-20170329,
since other patches were built atop them.

It would be nice to see that sorted out.

Thanks,
Mark.

> ---
>  include/asm-generic/atomic-instrumented.h | 76 +++++++++++++++++++++++++++++--
>  1 file changed, 72 insertions(+), 4 deletions(-)
> 
> diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
> index fd483115d4c6..7f8eb761f896 100644
> --- a/include/asm-generic/atomic-instrumented.h
> +++ b/include/asm-generic/atomic-instrumented.h
> @@ -1,44 +1,54 @@
>  #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
>  #define _LINUX_ATOMIC_INSTRUMENTED_H
>  
> +#include <linux/kasan-checks.h>
> +
>  static __always_inline int atomic_read(const atomic_t *v)
>  {
> +	kasan_check_read(v, sizeof(*v));
>  	return arch_atomic_read(v);
>  }
>  
>  static __always_inline long long atomic64_read(const atomic64_t *v)
>  {
> +	kasan_check_read(v, sizeof(*v));
>  	return arch_atomic64_read(v);
>  }
>  
>  static __always_inline void atomic_set(atomic_t *v, int i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_set(v, i);
>  }
>  
>  static __always_inline void atomic64_set(atomic64_t *v, long long i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_set(v, i);
>  }
>  
>  static __always_inline int atomic_xchg(atomic_t *v, int i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_xchg(v, i);
>  }
>  
>  static __always_inline long long atomic64_xchg(atomic64_t *v, long long i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_xchg(v, i);
>  }
>  
>  static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_cmpxchg(v, old, new);
>  }
>  
>  static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>  						  long long new)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_cmpxchg(v, old, new);
>  }
>  
> @@ -46,6 +56,8 @@ static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>  #define atomic_try_cmpxchg atomic_try_cmpxchg
>  static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  {
> +	kasan_check_write(v, sizeof(*v));
> +	kasan_check_read(old, sizeof(*old));
>  	return arch_atomic_try_cmpxchg(v, old, new);
>  }
>  #endif
> @@ -55,12 +67,15 @@ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old,
>  						 long long new)
>  {
> +	kasan_check_write(v, sizeof(*v));
> +	kasan_check_read(old, sizeof(*old));
>  	return arch_atomic64_try_cmpxchg(v, old, new);
>  }
>  #endif
>  
>  static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return __arch_atomic_add_unless(v, a, u);
>  }
>  
> @@ -68,242 +83,295 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  static __always_inline bool atomic64_add_unless(atomic64_t *v, long long a,
>  						long long u)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_unless(v, a, u);
>  }
>  
>  static __always_inline void atomic_inc(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_inc(v);
>  }
>  
>  static __always_inline void atomic64_inc(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_inc(v);
>  }
>  
>  static __always_inline void atomic_dec(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_dec(v);
>  }
>  
>  static __always_inline void atomic64_dec(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_dec(v);
>  }
>  
>  static __always_inline void atomic_add(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_add(i, v);
>  }
>  
>  static __always_inline void atomic64_add(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_add(i, v);
>  }
>  
>  static __always_inline void atomic_sub(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_sub(i, v);
>  }
>  
>  static __always_inline void atomic64_sub(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_sub(i, v);
>  }
>  
>  static __always_inline void atomic_and(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_and(i, v);
>  }
>  
>  static __always_inline void atomic64_and(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_and(i, v);
>  }
>  
>  static __always_inline void atomic_or(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_or(i, v);
>  }
>  
>  static __always_inline void atomic64_or(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_or(i, v);
>  }
>  
>  static __always_inline void atomic_xor(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_xor(i, v);
>  }
>  
>  static __always_inline void atomic64_xor(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_xor(i, v);
>  }
>  
>  static __always_inline int atomic_inc_return(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_inc_return(v);
>  }
>  
>  static __always_inline long long atomic64_inc_return(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_return(v);
>  }
>  
>  static __always_inline int atomic_dec_return(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_dec_return(v);
>  }
>  
>  static __always_inline long long atomic64_dec_return(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_return(v);
>  }
>  
>  static __always_inline long long atomic64_inc_not_zero(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_not_zero(v);
>  }
>  
>  static __always_inline long long atomic64_dec_if_positive(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_if_positive(v);
>  }
>  
>  static __always_inline bool atomic_dec_and_test(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_dec_and_test(v);
>  }
>  
>  static __always_inline bool atomic64_dec_and_test(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_and_test(v);
>  }
>  
>  static __always_inline bool atomic_inc_and_test(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_inc_and_test(v);
>  }
>  
>  static __always_inline bool atomic64_inc_and_test(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_and_test(v);
>  }
>  
>  static __always_inline int atomic_add_return(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_add_return(i, v);
>  }
>  
>  static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_return(i, v);
>  }
>  
>  static __always_inline int atomic_sub_return(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_sub_return(i, v);
>  }
>  
>  static __always_inline long long atomic64_sub_return(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_sub_return(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_add(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_add(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_add(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_add(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_sub(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_sub(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_and(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_and(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_and(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_and(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_or(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_or(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_or(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_or(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_xor(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_xor(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_xor(i, v);
>  }
>  
>  static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_sub_and_test(i, v);
>  }
>  
>  static __always_inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_sub_and_test(i, v);
>  }
>  
>  static __always_inline bool atomic_add_negative(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_add_negative(i, v);
>  }
>  
>  static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_negative(i, v);
>  }
>  
>  #define cmpxchg(ptr, old, new)				\
>  ({							\
> +	__typeof__(ptr) ___ptr = (ptr);			\
> +	kasan_check_write(___ptr, sizeof(*___ptr));	\
>  	arch_cmpxchg((ptr), (old), (new));		\
>  })
>  
>  #define sync_cmpxchg(ptr, old, new)			\
>  ({							\
> -	arch_sync_cmpxchg((ptr), (old), (new));		\
> +	__typeof__(ptr) ___ptr = (ptr);			\
> +	kasan_check_write(___ptr, sizeof(*___ptr));	\
> +	arch_sync_cmpxchg(___ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg_local(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg_local((ptr), (old), (new));	\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg_local(____ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg64(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg64((ptr), (old), (new));		\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg64(____ptr, (old), (new));		\
>  })
>  
>  #define cmpxchg64_local(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg64_local((ptr), (old), (new));	\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg64_local(____ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
> -- 
> 2.12.2.564.g063fe858b8-goog
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations
@ 2017-03-29 14:00     ` Mark Rutland
  0 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 14:00 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: peterz, mingo, akpm, will.deacon, aryabinin, kasan-dev,
	linux-kernel, x86, linux-mm

On Tue, Mar 28, 2017 at 06:15:44PM +0200, Dmitry Vyukov wrote:
> KASAN uses compiler instrumentation to intercept all memory accesses.
> But it does not see memory accesses done in assembly code.
> One notable user of assembly code is atomic operations. Frequently,
> for example, an atomic reference decrement is the last access to an
> object and a good candidate for a racy use-after-free.
> 
> Add manual KASAN checks to atomic operations.
> 
> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Will Deacon <will.deacon@arm.com>,
> Cc: Andrew Morton <akpm@linux-foundation.org>,
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
> Cc: Ingo Molnar <mingo@redhat.com>,
> Cc: kasan-dev@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> Cc: x86@kernel.org

FWIW, I think that structuring the file this way will make it easier to
add the {acquire,release,relaxed} variants (as arm64 will need),
so this looks good to me.

As a heads-up, I wanted to have a go at that, but I wasn't able to apply
patch two onwards on v4.11-rc{3,4} or next-20170329. I was not able to
cleanly revert the instrumentation patches currently in next-20170329,
since other patches were built atop them.

It would be nice to see that sorted out.

Thanks,
Mark.

> ---
>  include/asm-generic/atomic-instrumented.h | 76 +++++++++++++++++++++++++++++--
>  1 file changed, 72 insertions(+), 4 deletions(-)
> 
> diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
> index fd483115d4c6..7f8eb761f896 100644
> --- a/include/asm-generic/atomic-instrumented.h
> +++ b/include/asm-generic/atomic-instrumented.h
> @@ -1,44 +1,54 @@
>  #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
>  #define _LINUX_ATOMIC_INSTRUMENTED_H
>  
> +#include <linux/kasan-checks.h>
> +
>  static __always_inline int atomic_read(const atomic_t *v)
>  {
> +	kasan_check_read(v, sizeof(*v));
>  	return arch_atomic_read(v);
>  }
>  
>  static __always_inline long long atomic64_read(const atomic64_t *v)
>  {
> +	kasan_check_read(v, sizeof(*v));
>  	return arch_atomic64_read(v);
>  }
>  
>  static __always_inline void atomic_set(atomic_t *v, int i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_set(v, i);
>  }
>  
>  static __always_inline void atomic64_set(atomic64_t *v, long long i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_set(v, i);
>  }
>  
>  static __always_inline int atomic_xchg(atomic_t *v, int i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_xchg(v, i);
>  }
>  
>  static __always_inline long long atomic64_xchg(atomic64_t *v, long long i)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_xchg(v, i);
>  }
>  
>  static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_cmpxchg(v, old, new);
>  }
>  
>  static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>  						  long long new)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_cmpxchg(v, old, new);
>  }
>  
> @@ -46,6 +56,8 @@ static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>  #define atomic_try_cmpxchg atomic_try_cmpxchg
>  static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  {
> +	kasan_check_write(v, sizeof(*v));
> +	kasan_check_read(old, sizeof(*old));
>  	return arch_atomic_try_cmpxchg(v, old, new);
>  }
>  #endif
> @@ -55,12 +67,15 @@ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>  static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old,
>  						 long long new)
>  {
> +	kasan_check_write(v, sizeof(*v));
> +	kasan_check_read(old, sizeof(*old));
>  	return arch_atomic64_try_cmpxchg(v, old, new);
>  }
>  #endif
>  
>  static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return __arch_atomic_add_unless(v, a, u);
>  }
>  
> @@ -68,242 +83,295 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>  static __always_inline bool atomic64_add_unless(atomic64_t *v, long long a,
>  						long long u)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_unless(v, a, u);
>  }
>  
>  static __always_inline void atomic_inc(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_inc(v);
>  }
>  
>  static __always_inline void atomic64_inc(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_inc(v);
>  }
>  
>  static __always_inline void atomic_dec(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_dec(v);
>  }
>  
>  static __always_inline void atomic64_dec(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_dec(v);
>  }
>  
>  static __always_inline void atomic_add(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_add(i, v);
>  }
>  
>  static __always_inline void atomic64_add(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_add(i, v);
>  }
>  
>  static __always_inline void atomic_sub(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_sub(i, v);
>  }
>  
>  static __always_inline void atomic64_sub(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_sub(i, v);
>  }
>  
>  static __always_inline void atomic_and(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_and(i, v);
>  }
>  
>  static __always_inline void atomic64_and(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_and(i, v);
>  }
>  
>  static __always_inline void atomic_or(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_or(i, v);
>  }
>  
>  static __always_inline void atomic64_or(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_or(i, v);
>  }
>  
>  static __always_inline void atomic_xor(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic_xor(i, v);
>  }
>  
>  static __always_inline void atomic64_xor(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	arch_atomic64_xor(i, v);
>  }
>  
>  static __always_inline int atomic_inc_return(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_inc_return(v);
>  }
>  
>  static __always_inline long long atomic64_inc_return(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_return(v);
>  }
>  
>  static __always_inline int atomic_dec_return(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_dec_return(v);
>  }
>  
>  static __always_inline long long atomic64_dec_return(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_return(v);
>  }
>  
>  static __always_inline long long atomic64_inc_not_zero(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_not_zero(v);
>  }
>  
>  static __always_inline long long atomic64_dec_if_positive(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_if_positive(v);
>  }
>  
>  static __always_inline bool atomic_dec_and_test(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_dec_and_test(v);
>  }
>  
>  static __always_inline bool atomic64_dec_and_test(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_dec_and_test(v);
>  }
>  
>  static __always_inline bool atomic_inc_and_test(atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_inc_and_test(v);
>  }
>  
>  static __always_inline bool atomic64_inc_and_test(atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_inc_and_test(v);
>  }
>  
>  static __always_inline int atomic_add_return(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_add_return(i, v);
>  }
>  
>  static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_return(i, v);
>  }
>  
>  static __always_inline int atomic_sub_return(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_sub_return(i, v);
>  }
>  
>  static __always_inline long long atomic64_sub_return(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_sub_return(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_add(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_add(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_add(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_add(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_sub(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_sub(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_and(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_and(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_and(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_and(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_or(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_or(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_or(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_or(i, v);
>  }
>  
>  static __always_inline int atomic_fetch_xor(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_fetch_xor(i, v);
>  }
>  
>  static __always_inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_fetch_xor(i, v);
>  }
>  
>  static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_sub_and_test(i, v);
>  }
>  
>  static __always_inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_sub_and_test(i, v);
>  }
>  
>  static __always_inline bool atomic_add_negative(int i, atomic_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic_add_negative(i, v);
>  }
>  
>  static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
>  {
> +	kasan_check_write(v, sizeof(*v));
>  	return arch_atomic64_add_negative(i, v);
>  }
>  
>  #define cmpxchg(ptr, old, new)				\
>  ({							\
> +	__typeof__(ptr) ___ptr = (ptr);			\
> +	kasan_check_write(___ptr, sizeof(*___ptr));	\
>  	arch_cmpxchg((ptr), (old), (new));		\
>  })
>  
>  #define sync_cmpxchg(ptr, old, new)			\
>  ({							\
> -	arch_sync_cmpxchg((ptr), (old), (new));		\
> +	__typeof__(ptr) ___ptr = (ptr);			\
> +	kasan_check_write(___ptr, sizeof(*___ptr));	\
> +	arch_sync_cmpxchg(___ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg_local(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg_local((ptr), (old), (new));	\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg_local(____ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg64(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg64((ptr), (old), (new));		\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg64(____ptr, (old), (new));		\
>  })
>  
>  #define cmpxchg64_local(ptr, old, new)			\
>  ({							\
> -	arch_cmpxchg64_local((ptr), (old), (new));	\
> +	__typeof__(ptr) ____ptr = (ptr);		\
> +	kasan_check_write(____ptr, sizeof(*____ptr));	\
> +	arch_cmpxchg64_local(____ptr, (old), (new));	\
>  })
>  
>  #define cmpxchg_double(p1, p2, o1, o2, n1, n2)				\
> -- 
> 2.12.2.564.g063fe858b8-goog
> 

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations
  2017-03-29 14:00     ` Mark Rutland
@ 2017-03-29 15:52       ` Dmitry Vyukov
  -1 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-03-29 15:52 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Peter Zijlstra, Ingo Molnar, Andrew Morton, Will Deacon,
	Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Wed, Mar 29, 2017 at 4:00 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, Mar 28, 2017 at 06:15:44PM +0200, Dmitry Vyukov wrote:
>> KASAN uses compiler instrumentation to intercept all memory accesses.
>> But it does not see memory accesses done in assembly code.
>> One notable user of assembly code is atomic operations. Frequently,
>> for example, an atomic reference decrement is the last access to an
>> object and a good candidate for a racy use-after-free.
>>
>> Add manual KASAN checks to atomic operations.
>>
>> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Will Deacon <will.deacon@arm.com>,
>> Cc: Andrew Morton <akpm@linux-foundation.org>,
>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
>> Cc: Ingo Molnar <mingo@redhat.com>,
>> Cc: kasan-dev@googlegroups.com
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> Cc: x86@kernel.org
>
> FWIW, I think that structuring the file this way will make it easier to
> add the {acquire,release,relaxed} variants (as arm64 will need),
> so this looks good to me.
>
> As a heads-up, I wanted to have a go at that, but I wasn't able to apply
> patch two onwards on v4.11-rc{3,4} or next-20170329. I was not able to
> cleanly revert the instrumentation patches currently in next-20170329,
> since other patches built atop of them.

I based it on git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
locking/core


> It would be nice to see that sorted out.
>
> Thanks,
> Mark.
>
>> ---
>>  include/asm-generic/atomic-instrumented.h | 76 +++++++++++++++++++++++++++++--
>>  1 file changed, 72 insertions(+), 4 deletions(-)
>>
>> diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
>> index fd483115d4c6..7f8eb761f896 100644
>> --- a/include/asm-generic/atomic-instrumented.h
>> +++ b/include/asm-generic/atomic-instrumented.h
>> @@ -1,44 +1,54 @@
>>  #ifndef _LINUX_ATOMIC_INSTRUMENTED_H
>>  #define _LINUX_ATOMIC_INSTRUMENTED_H
>>
>> +#include <linux/kasan-checks.h>
>> +
>>  static __always_inline int atomic_read(const atomic_t *v)
>>  {
>> +     kasan_check_read(v, sizeof(*v));
>>       return arch_atomic_read(v);
>>  }
>>
>>  static __always_inline long long atomic64_read(const atomic64_t *v)
>>  {
>> +     kasan_check_read(v, sizeof(*v));
>>       return arch_atomic64_read(v);
>>  }
>>
>>  static __always_inline void atomic_set(atomic_t *v, int i)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_set(v, i);
>>  }
>>
>>  static __always_inline void atomic64_set(atomic64_t *v, long long i)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_set(v, i);
>>  }
>>
>>  static __always_inline int atomic_xchg(atomic_t *v, int i)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_xchg(v, i);
>>  }
>>
>>  static __always_inline long long atomic64_xchg(atomic64_t *v, long long i)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_xchg(v, i);
>>  }
>>
>>  static __always_inline int atomic_cmpxchg(atomic_t *v, int old, int new)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_cmpxchg(v, old, new);
>>  }
>>
>>  static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>>                                                 long long new)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_cmpxchg(v, old, new);
>>  }
>>
>> @@ -46,6 +56,8 @@ static __always_inline long long atomic64_cmpxchg(atomic64_t *v, long long old,
>>  #define atomic_try_cmpxchg atomic_try_cmpxchg
>>  static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>> +     kasan_check_read(old, sizeof(*old));
>>       return arch_atomic_try_cmpxchg(v, old, new);
>>  }
>>  #endif
>> @@ -55,12 +67,15 @@ static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
>>  static __always_inline bool atomic64_try_cmpxchg(atomic64_t *v, long long *old,
>>                                                long long new)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>> +     kasan_check_read(old, sizeof(*old));
>>       return arch_atomic64_try_cmpxchg(v, old, new);
>>  }
>>  #endif
>>
>>  static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return __arch_atomic_add_unless(v, a, u);
>>  }
>>
>> @@ -68,242 +83,295 @@ static __always_inline int __atomic_add_unless(atomic_t *v, int a, int u)
>>  static __always_inline bool atomic64_add_unless(atomic64_t *v, long long a,
>>                                               long long u)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_add_unless(v, a, u);
>>  }
>>
>>  static __always_inline void atomic_inc(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_inc(v);
>>  }
>>
>>  static __always_inline void atomic64_inc(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_inc(v);
>>  }
>>
>>  static __always_inline void atomic_dec(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_dec(v);
>>  }
>>
>>  static __always_inline void atomic64_dec(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_dec(v);
>>  }
>>
>>  static __always_inline void atomic_add(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_add(i, v);
>>  }
>>
>>  static __always_inline void atomic64_add(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_add(i, v);
>>  }
>>
>>  static __always_inline void atomic_sub(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_sub(i, v);
>>  }
>>
>>  static __always_inline void atomic64_sub(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_sub(i, v);
>>  }
>>
>>  static __always_inline void atomic_and(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_and(i, v);
>>  }
>>
>>  static __always_inline void atomic64_and(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_and(i, v);
>>  }
>>
>>  static __always_inline void atomic_or(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_or(i, v);
>>  }
>>
>>  static __always_inline void atomic64_or(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_or(i, v);
>>  }
>>
>>  static __always_inline void atomic_xor(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic_xor(i, v);
>>  }
>>
>>  static __always_inline void atomic64_xor(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       arch_atomic64_xor(i, v);
>>  }
>>
>>  static __always_inline int atomic_inc_return(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_inc_return(v);
>>  }
>>
>>  static __always_inline long long atomic64_inc_return(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_inc_return(v);
>>  }
>>
>>  static __always_inline int atomic_dec_return(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_dec_return(v);
>>  }
>>
>>  static __always_inline long long atomic64_dec_return(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_dec_return(v);
>>  }
>>
>>  static __always_inline long long atomic64_inc_not_zero(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_inc_not_zero(v);
>>  }
>>
>>  static __always_inline long long atomic64_dec_if_positive(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_dec_if_positive(v);
>>  }
>>
>>  static __always_inline bool atomic_dec_and_test(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_dec_and_test(v);
>>  }
>>
>>  static __always_inline bool atomic64_dec_and_test(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_dec_and_test(v);
>>  }
>>
>>  static __always_inline bool atomic_inc_and_test(atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_inc_and_test(v);
>>  }
>>
>>  static __always_inline bool atomic64_inc_and_test(atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_inc_and_test(v);
>>  }
>>
>>  static __always_inline int atomic_add_return(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_add_return(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_add_return(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_add_return(i, v);
>>  }
>>
>>  static __always_inline int atomic_sub_return(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_sub_return(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_sub_return(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_sub_return(i, v);
>>  }
>>
>>  static __always_inline int atomic_fetch_add(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_fetch_add(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_fetch_add(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_fetch_add(i, v);
>>  }
>>
>>  static __always_inline int atomic_fetch_sub(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_fetch_sub(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_fetch_sub(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_fetch_sub(i, v);
>>  }
>>
>>  static __always_inline int atomic_fetch_and(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_fetch_and(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_fetch_and(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_fetch_and(i, v);
>>  }
>>
>>  static __always_inline int atomic_fetch_or(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_fetch_or(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_fetch_or(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_fetch_or(i, v);
>>  }
>>
>>  static __always_inline int atomic_fetch_xor(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_fetch_xor(i, v);
>>  }
>>
>>  static __always_inline long long atomic64_fetch_xor(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_fetch_xor(i, v);
>>  }
>>
>>  static __always_inline bool atomic_sub_and_test(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_sub_and_test(i, v);
>>  }
>>
>>  static __always_inline bool atomic64_sub_and_test(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_sub_and_test(i, v);
>>  }
>>
>>  static __always_inline bool atomic_add_negative(int i, atomic_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic_add_negative(i, v);
>>  }
>>
>>  static __always_inline bool atomic64_add_negative(long long i, atomic64_t *v)
>>  {
>> +     kasan_check_write(v, sizeof(*v));
>>       return arch_atomic64_add_negative(i, v);
>>  }
>>
>>  #define cmpxchg(ptr, old, new)                               \
>>  ({                                                   \
>> +     __typeof__(ptr) ___ptr = (ptr);                 \
>> +     kasan_check_write(___ptr, sizeof(*___ptr));     \
>>       arch_cmpxchg((ptr), (old), (new));              \
>>  })
>>
>>  #define sync_cmpxchg(ptr, old, new)                  \
>>  ({                                                   \
>> -     arch_sync_cmpxchg((ptr), (old), (new));         \
>> +     __typeof__(ptr) ___ptr = (ptr);                 \
>> +     kasan_check_write(___ptr, sizeof(*___ptr));     \
>> +     arch_sync_cmpxchg(___ptr, (old), (new));        \
>>  })
>>
>>  #define cmpxchg_local(ptr, old, new)                 \
>>  ({                                                   \
>> -     arch_cmpxchg_local((ptr), (old), (new));        \
>> +     __typeof__(ptr) ____ptr = (ptr);                \
>> +     kasan_check_write(____ptr, sizeof(*____ptr));   \
>> +     arch_cmpxchg_local(____ptr, (old), (new));      \
>>  })
>>
>>  #define cmpxchg64(ptr, old, new)                     \
>>  ({                                                   \
>> -     arch_cmpxchg64((ptr), (old), (new));            \
>> +     __typeof__(ptr) ____ptr = (ptr);                \
>> +     kasan_check_write(____ptr, sizeof(*____ptr));   \
>> +     arch_cmpxchg64(____ptr, (old), (new));          \
>>  })
>>
>>  #define cmpxchg64_local(ptr, old, new)                       \
>>  ({                                                   \
>> -     arch_cmpxchg64_local((ptr), (old), (new));      \
>> +     __typeof__(ptr) ____ptr = (ptr);                \
>> +     kasan_check_write(____ptr, sizeof(*____ptr));   \
>> +     arch_cmpxchg64_local(____ptr, (old), (new));    \
>>  })
>>
>>  #define cmpxchg_double(p1, p2, o1, o2, n1, n2)                               \
>> --
>> 2.12.2.564.g063fe858b8-goog
>>
>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations
  2017-03-29 15:52       ` Dmitry Vyukov
@ 2017-03-29 15:56         ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 15:56 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Peter Zijlstra, Ingo Molnar, Andrew Morton, Will Deacon,
	Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Wed, Mar 29, 2017 at 05:52:43PM +0200, Dmitry Vyukov wrote:
> On Wed, Mar 29, 2017 at 4:00 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Tue, Mar 28, 2017 at 06:15:44PM +0200, Dmitry Vyukov wrote:
> >> KASAN uses compiler instrumentation to intercept all memory accesses.
> >> But it does not see memory accesses done in assembly code.
> >> One notable user of assembly code is atomic operations. Frequently,
> >> for example, an atomic reference decrement is the last access to an
> >> object and a good candidate for a racy use-after-free.
> >>
> >> Add manual KASAN checks to atomic operations.
> >>
> >> Signed-off-by: Dmitry Vyukov <dvyukov@google.com>
> >> Cc: Mark Rutland <mark.rutland@arm.com>
> >> Cc: Peter Zijlstra <peterz@infradead.org>
> >> Cc: Will Deacon <will.deacon@arm.com>,
> >> Cc: Andrew Morton <akpm@linux-foundation.org>,
> >> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>,
> >> Cc: Ingo Molnar <mingo@redhat.com>,
> >> Cc: kasan-dev@googlegroups.com
> >> Cc: linux-mm@kvack.org
> >> Cc: linux-kernel@vger.kernel.org
> >> Cc: x86@kernel.org
> >
> > FWIW, I think that structuring the file this way will make it easier to
> > add the {acquire,release,relaxed} variants (as arm64 will need),
> > so this looks good to me.
> >
> > As a heads-up, I wanted to have a go at that, but I wasn't able to apply
> > patch two onwards on v4.11-rc{3,4} or next-20170329. I was not able to
> > cleanly revert the instrumentation patches currently in next-20170329,
> > since other patches built atop of them.
> 
> I based it on git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git
> locking/core

Ah; I should have guessed. ;)

Thanks for the pointer!  I'll give that a go shortly.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-28 16:15   ` Dmitry Vyukov
@ 2017-03-29 17:15     ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-29 17:15 UTC (permalink / raw)
  To: Dmitry Vyukov, peterz, mingo
  Cc: akpm, will.deacon, aryabinin, kasan-dev, linux-kernel, x86, linux-mm

Hi,

On Tue, Mar 28, 2017 at 06:15:41PM +0200, Dmitry Vyukov wrote:
> The new header allows wrapping per-arch atomic operations
> and adding common functionality to all of them.

I had a quick look at what it would take to have arm64 use this, and I
have a couple of thoughts.

> +static __always_inline int atomic_xchg(atomic_t *v, int i)
> +{
> +	return arch_atomic_xchg(v, i);
> +}

I generally agree that avoiding several layers of CPP aids readability
here, and as-is I think this is fine.

However, avoiding CPP entirely will mean that the file becomes painfully
verbose when support for {relaxed,acquire,release}-order variants is
added.

Just considering atomic_xchg{,_relaxed,_acquire,_release}(), for
example:

----
static __always_inline int atomic_xchg(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_xchg(v, i);
}

#ifdef arch_atomic_xchg_relaxed
static __always_inline int atomic_xchg_relaxed(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_xchg_relaxed(v, i);
}
#define atomic_xchg_relaxed atomic_xchg_relaxed
#endif

#ifdef arch_atomic_xchg_acquire
static __always_inline int atomic_xchg_acquire(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_xchg_acquire(v, i);
}
#define atomic_xchg_acquire atomic_xchg_acquire
#endif

#ifdef arch_atomic_xchg_release
static __always_inline int atomic_xchg_release(atomic_t *v, int i)
{
	kasan_check_write(v, sizeof(*v));
	return arch_atomic_xchg_release(v, i);
}
#define atomic_xchg_release atomic_xchg_release
#endif
----


With some minimal CPP, it can be a lot more manageable:

----
#define INSTR_ATOMIC_XCHG(order)					\
static __always_inline int atomic_xchg##order(atomic_t *v, int i)	\
{									\
	kasan_check_write(v, sizeof(*v));				\
	return arch_atomic_xchg##order(v, i);				\
}

INSTR_ATOMIC_XCHG()

#ifdef arch_atomic_xchg_relaxed
INSTR_ATOMIC_XCHG(_relaxed)
#define atomic_xchg_relaxed atomic_xchg_relaxed
#endif

#ifdef arch_atomic_xchg_acquire
INSTR_ATOMIC_XCHG(_acquire)
#define atomic_xchg_acquire atomic_xchg_acquire
#endif

#ifdef arch_atomic_xchg_relaxed
INSTR_ATOMIC_XCHG(_relaxed)
#define atomic_xchg_relaxed atomic_xchg_relaxed
#endif
----


Is there any objection to some light CPP usage as above for adding the
{relaxed,acquire,release} variants?

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-29 17:15     ` Mark Rutland
@ 2017-03-30  6:43       ` Ingo Molnar
  -1 siblings, 0 replies; 44+ messages in thread
From: Ingo Molnar @ 2017-03-30  6:43 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Dmitry Vyukov, peterz, mingo, akpm, will.deacon, aryabinin,
	kasan-dev, linux-kernel, x86, linux-mm


* Mark Rutland <mark.rutland@arm.com> wrote:

> With some minimal CPP, it can be a lot more manageable:
> 
> ----
> #define INSTR_ATOMIC_XCHG(order)					\
> static __always_inline int atomic_xchg##order(atomic_t *v, int i)	\
> {									\
> 	kasan_check_write(v, sizeof(*v));				\
> 	arch_atomic_xchg##order(v, i);					\
> }
> 
> #define INSTR_ATOMIC_XCHG()
> 
> #ifdef arch_atomic_xchg_relaxed
> INSTR_ATOMIC_XCHG(_relaxed)
> #define atomic_xchg_relaxed atomic_xchg_relaxed
> #endif
> 
> #ifdef arch_atomic_xchg_acquire
> INSTR_ATOMIC_XCHG(_acquire)
> #define atomic_xchg_acquire atomic_xchg_acquire
> #endif
> 
> #ifdef arch_atomic_xchg_relaxed
> INSTR_ATOMIC_XCHG(_relaxed)
> #define atomic_xchg_relaxed atomic_xchg_relaxed
> #endif

Yeah, small detail: the third one wants to be _release, right?

> Is there any objection to some light CPP usage as above for adding the
> {relaxed,acquire,release} variants?

No objection from me to that way of writing it, this still looks very readable, 
and probably more readable than the verbose variants. It's similar in style to 
linux/atomic.h which has a good balance of C versus CPP.
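
E.g. the ordering fallbacks there get by with roughly this much CPP
(quoted from memory, so treat it as a sketch rather than the exact code):

----
#ifndef atomic_xchg_relaxed
#define atomic_xchg_relaxed	atomic_xchg
#define atomic_xchg_acquire	atomic_xchg
#define atomic_xchg_release	atomic_xchg
#endif
----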

What I objected to was the deep nested code generation approach in the original 
patch.

CPP is fine in many circumstances, but there's a level of (ab-)use where it 
becomes counterproductive.

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 4/8] asm-generic: add atomic-instrumented.h
  2017-03-30  6:43       ` Ingo Molnar
@ 2017-03-30 10:40         ` Mark Rutland
  -1 siblings, 0 replies; 44+ messages in thread
From: Mark Rutland @ 2017-03-30 10:40 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Dmitry Vyukov, peterz, mingo, akpm, will.deacon, aryabinin,
	kasan-dev, linux-kernel, x86, linux-mm

On Thu, Mar 30, 2017 at 08:43:39AM +0200, Ingo Molnar wrote:
> 
> * Mark Rutland <mark.rutland@arm.com> wrote:
> 
> > With some minimal CPP, it can be a lot more manageable:
> > 
> > ----
> > #define INSTR_ATOMIC_XCHG(order)					\
> > static __always_inline int atomic_xchg##order(atomic_t *v, int i)	\
> > {									\
> > 	kasan_check_write(v, sizeof(*v));				\
> > 	arch_atomic_xchg##order(v, i);					\
> > }
> > 
> > #define INSTR_ATOMIC_XCHG()
> > 
> > #ifdef arch_atomic_xchg_relaxed
> > INSTR_ATOMIC_XCHG(_relaxed)
> > #define atomic_xchg_relaxed atomic_xchg_relaxed
> > #endif
> > 
> > #ifdef arch_atomic_xchg_acquire
> > INSTR_ATOMIC_XCHG(_acquire)
> > #define atomic_xchg_acquire atomic_xchg_acquire
> > #endif
> > 
> > #ifdef arch_atomic_xchg_relaxed
> > INSTR_ATOMIC_XCHG(_relaxed)
> > #define atomic_xchg_relaxed atomic_xchg_relaxed
> > #endif
> 
> Yeah, small detail: the third one wants to be _release, right?

Yes; my bad.
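
I.e. that third block should have read:

----
#ifdef arch_atomic_xchg_release
INSTR_ATOMIC_XCHG(_release)
#define atomic_xchg_release atomic_xchg_release
#endif
----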

> > Is there any objection to some light CPP usage as above for adding the
> > {relaxed,acquire,release} variants?
> 
> No objection from me to that way of writing it, this still looks very readable, 
> and probably more readable than the verbose variants. It's similar in style to 
> linux/atomic.h which has a good balance of C versus CPP.

Great. I'll follow the above pattern when adding the ordering variants.
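
For a two-argument op the generator looks much the same, something like
this (again just a sketch):

----
#define INSTR_ATOMIC_ADD_RETURN(order)					\
static __always_inline int atomic_add_return##order(int i, atomic_t *v)\
{									\
	kasan_check_write(v, sizeof(*v));				\
	return arch_atomic_add_return##order(i, v);			\
}
----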

> What I objected to was the deep nested code generation approach in the original 
> patch.
> 
> CPP is fine in many circumstances, but there's a level of (ab-)use where it 
> becomes counterproductive.

Sure, that makes sense to me.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h
  2017-03-29 13:37       ` Mark Rutland
@ 2017-05-26 19:28         ` Dmitry Vyukov
  -1 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-05-26 19:28 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Peter Zijlstra, Ingo Molnar, Andrew Morton, Will Deacon,
	Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Wed, Mar 29, 2017 at 3:37 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> On Tue, Mar 28, 2017 at 06:25:07PM +0200, Dmitry Vyukov wrote:
>> On Tue, Mar 28, 2017 at 6:15 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
>
>> >  #define __try_cmpxchg(ptr, pold, new, size)                            \
>> >         __raw_try_cmpxchg((ptr), (pold), (new), (size), LOCK_PREFIX)
>> >
>> > -#define try_cmpxchg(ptr, pold, new)                                    \
>> > +#define arch_try_cmpxchg(ptr, pold, new)                               \
>> >         __try_cmpxchg((ptr), (pold), (new), sizeof(*(ptr)))
>>
>> Is try_cmpxchg() part of the public interface, like cmpxchg, or only a
>> helper to implement atomic_try_cmpxchg()?
>> If it's the latter, then we don't need to wrap them.
>
> De facto, it's an x86-specific helper. It was added in commit:
>
>     a9ebf306f52c756c ("locking/atomic: Introduce atomic_try_cmpxchg()")
>
> ... which did not add try_cmpxchg to any generic header.
>
> If it was meant to be part of the public interface, we'd need a generic
> definition.

Fixed in v2:
https://groups.google.com/forum/#!topic/kasan-dev/3PoGcuMku-w

^ permalink raw reply	[flat|nested] 44+ messages in thread
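
To make the outcome of this exchange concrete: with the x86 macro renamed to
arch_try_cmpxchg(), only the atomic_t-level operation needs an instrumented
wrapper, and try_cmpxchg() itself stays an arch-internal helper. A minimal
sketch of such a wrapper follows; it is illustrative only (not the code from
the v2 series linked above) and assumes arch_atomic_try_cmpxchg() and
kasan_check_write() are in scope.

----
static __always_inline bool atomic_try_cmpxchg(atomic_t *v, int *old, int new)
{
	/* @old is read and, on failure, written back with the value
	 * observed in @v, so it is checked as a write here as well. */
	kasan_check_write(v, sizeof(*v));
	kasan_check_write(old, sizeof(*old));
	return arch_atomic_try_cmpxchg(v, old, new);
}
----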


* Re: [PATCH 3/8] x86: use long long for 64-bit atomic ops
  2017-03-28 21:32     ` Matthew Wilcox
@ 2017-05-26 19:29       ` Dmitry Vyukov
  -1 siblings, 0 replies; 44+ messages in thread
From: Dmitry Vyukov @ 2017-05-26 19:29 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Mark Rutland, Peter Zijlstra, Ingo Molnar, Andrew Morton,
	Will Deacon, Andrey Ryabinin, kasan-dev, LKML, x86, linux-mm

On Tue, Mar 28, 2017 at 11:32 PM, Matthew Wilcox <willy@infradead.org> wrote:
> On Tue, Mar 28, 2017 at 06:15:40PM +0200, Dmitry Vyukov wrote:
>> @@ -193,12 +193,12 @@ static inline long atomic64_xchg(atomic64_t *v, long new)
>>   * @a: the amount to add to v...
>>   * @u: ...unless v is equal to u.
>>   *
>> - * Atomically adds @a to @v, so long as it was not @u.
>> + * Atomically adds @a to @v, so long long as it was not @u.
>>   * Returns the old value of @v.
>>   */
>
> That's a clbuttic mistake!
>
> https://www.google.com/search?q=clbuttic


Fixed in v2:
https://groups.google.com/forum/#!topic/kasan-dev/3PoGcuMku-w
Thanks

^ permalink raw reply	[flat|nested] 44+ messages in thread
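
For readers puzzling over the joke: the hunk above mechanically turned "long"
into "long long" even inside the English phrase "so long as" in the kernel-doc,
the same blind-substitution failure mode the "clbuttic" link pokes fun at. A
sketch of the intended before/after, reconstructed from the quoted diff only
(the real fix is in the v2 series linked above):

----
/*
 * Intended wording (only the C types change, not the sentence):
 *	Atomically adds @a to @v, so long as it was not @u.
 *
 * What the mechanical replacement produced instead:
 *	Atomically adds @a to @v, so long long as it was not @u.
 */
----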


end of thread, other threads:[~2017-05-26 19:29 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-28 16:15 [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 1/8] x86: remove unused atomic_inc_short() Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 2/8] x86: un-macro-ify atomic ops implementation Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 3/8] x86: use long long for 64-bit atomic ops Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-28 21:32   ` Matthew Wilcox
2017-03-28 21:32     ` Matthew Wilcox
2017-05-26 19:29     ` Dmitry Vyukov
2017-05-26 19:29       ` Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 4/8] asm-generic: add atomic-instrumented.h Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-28 21:35   ` Matthew Wilcox
2017-03-28 21:35     ` Matthew Wilcox
2017-03-29  8:21     ` Dmitry Vyukov
2017-03-29  8:21       ` Dmitry Vyukov
2017-03-29 13:27     ` Mark Rutland
2017-03-29 13:27       ` Mark Rutland
2017-03-29 17:15   ` Mark Rutland
2017-03-29 17:15     ` Mark Rutland
2017-03-30  6:43     ` Ingo Molnar
2017-03-30  6:43       ` Ingo Molnar
2017-03-30 10:40       ` Mark Rutland
2017-03-30 10:40         ` Mark Rutland
2017-03-28 16:15 ` [PATCH 5/8] x86: switch atomic.h to use atomic-instrumented.h Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-28 16:25   ` Dmitry Vyukov
2017-03-28 16:25     ` Dmitry Vyukov
2017-03-29 13:37     ` Mark Rutland
2017-03-29 13:37       ` Mark Rutland
2017-05-26 19:28       ` Dmitry Vyukov
2017-05-26 19:28         ` Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 6/8] kasan: allow kasan_check_read/write() to accept pointers to volatiles Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-28 16:15 ` [PATCH 7/8] asm-generic: add KASAN instrumentation to atomic operations Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-29 14:00   ` Mark Rutland
2017-03-29 14:00     ` Mark Rutland
2017-03-29 15:52     ` Dmitry Vyukov
2017-03-29 15:52       ` Dmitry Vyukov
2017-03-29 15:56       ` Mark Rutland
2017-03-29 15:56         ` Mark Rutland
2017-03-28 16:15 ` [PATCH 8/8] asm-generic, x86: add comments for atomic instrumentation Dmitry Vyukov
2017-03-28 16:15   ` Dmitry Vyukov
2017-03-28 16:26 ` [PATCH 0/8] x86, kasan: add KASAN checks to atomic operations Dmitry Vyukov
