linux-kernel.vger.kernel.org archive mirror
* [PATCH 1/5] include/linux: Add instrumented.h infrastructure
@ 2020-01-20 14:19 Marco Elver
  2020-01-20 14:19 ` [PATCH 2/5] asm-generic, atomic-instrumented: Use generic instrumented.h Marco Elver
                   ` (6 more replies)
  0 siblings, 7 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-20 14:19 UTC (permalink / raw)
  To: elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, peterz, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

This adds instrumented.h, which provides generic wrappers for memory
access instrumentation that the compiler cannot emit for various
sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
future this will also include KMSAN instrumentation.
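
For illustration (not part of this patch), a caller that wraps an
uninstrumented arch primitive invokes the wrapper immediately before the
access, so that both KASAN and KCSAN see it. A rough sketch, along the
lines of what instrumented bitops might do (cf. patch 3/5):

	static inline void set_bit(long nr, volatile unsigned long *addr)
	{
		/* Report the word actually touched to the sanitizers. */
		instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
		arch_set_bit(nr, addr);
	}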

Note that copy_{to,from}_user require special instrumentation, with
hooks before and after the access, since the actual number of bytes
accessed may only be known after the access (currently this is relevant
for KCSAN, and will also be relevant for KMSAN in future).
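
A minimal sketch of the intended call pattern, assuming a simplified
copy_to_user() built on raw_copy_to_user() (which returns the number of
bytes not copied); the exact callers are wired up separately:

	static inline unsigned long
	copy_to_user(void __user *to, const void *from, unsigned long n)
	{
		unsigned long left;

		instrument_copy_to_user_pre(from, n);	/* check full range */
		left = raw_copy_to_user(to, from, n);	/* may copy < n bytes */
		instrument_copy_to_user_post(from, n, left); /* n - left bytes read */
		return left;
	}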

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
 1 file changed, 153 insertions(+)
 create mode 100644 include/linux/instrumented.h

diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
new file mode 100644
index 000000000000..9f83c8520223
--- /dev/null
+++ b/include/linux/instrumented.h
@@ -0,0 +1,153 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * This header provides generic wrappers for memory access instrumentation that
+ * the compiler cannot emit for: KASAN, KCSAN.
+ */
+#ifndef _LINUX_INSTRUMENTED_H
+#define _LINUX_INSTRUMENTED_H
+
+#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
+#include <linux/kcsan-checks.h>
+#include <linux/types.h>
+
+/**
+ * instrument_read - instrument regular read access
+ *
+ * Instrument a regular read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @v: address of access
+ * @size: size of access
+ */
+static __always_inline void instrument_read(const volatile void *v, size_t size)
+{
+	kasan_check_read(v, size);
+	kcsan_check_read(v, size);
+}
+
+/**
+ * instrument_write - instrument regular write access
+ *
+ * Instrument a regular write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @v: address of access
+ * @size: size of access
+ */
+static __always_inline void instrument_write(const volatile void *v, size_t size)
+{
+	kasan_check_write(v, size);
+	kcsan_check_write(v, size);
+}
+
+/**
+ * instrument_atomic_read - instrument atomic read access
+ *
+ * Instrument an atomic read access. The instrumentation should be inserted
+ * before the actual read happens.
+ *
+ * @v: address of access
+ * @size: size of access
+ */
+static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
+{
+	kasan_check_read(v, size);
+	kcsan_check_atomic_read(v, size);
+}
+
+/**
+ * instrument_atomic_write - instrument atomic write access
+ *
+ * Instrument an atomic write access. The instrumentation should be inserted
+ * before the actual write happens.
+ *
+ * @v: address of access
+ * @size: size of access
+ */
+static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
+{
+	kasan_check_write(v, size);
+	kcsan_check_atomic_write(v, size);
+}
+
+/**
+ * instrument_copy_to_user_pre - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory that are due to copy_to_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted before the accesses. At this point the
+ * actual number of bytes accessed is not yet known.
+ *
+ * @src: source address
+ * @size: maximum access size
+ */
+static __always_inline void
+instrument_copy_to_user_pre(const volatile void *src, size_t size)
+{
+	/* Check before, to warn before potential memory corruption. */
+	kasan_check_read(src, size);
+}
+
+/**
+ * instrument_copy_to_user_post - instrument reads of copy_to_user
+ *
+ * Instrument reads from kernel memory that are due to copy_to_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted after the accesses. At this point the
+ * actual number of bytes accessed should be known.
+ *
+ * @src: source address
+ * @size: maximum access size
+ * @left: number of bytes that were not copied
+ */
+static __always_inline void
+instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
+{
+	/* Check after, to avoid false positive if memory was not accessed. */
+	kcsan_check_read(src, size - left);
+}
+
+/**
+ * instrument_copy_from_user_pre - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory that are due to copy_from_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted before the accesses. At this point the
+ * actual number of bytes accessed is not yet known.
+ *
+ * @dst: destination address
+ * @size: maximum access size
+ */
+static __always_inline void
+instrument_copy_from_user_pre(const volatile void *dst, size_t size)
+{
+	/* Check before, to warn before potential memory corruption. */
+	kasan_check_write(dst, size);
+}
+
+/**
+ * instrument_copy_from_user_post - instrument writes of copy_from_user
+ *
+ * Instrument writes to kernel memory that are due to copy_from_user (and
+ * variants).
+ *
+ * The instrumentation must be inserted after the accesses. At this point the
+ * actual number of bytes accessed should be known.
+ *
+ * @dst: destination address
+ * @size: maximum access size
+ * @left: number of bytes that were not copied
+ */
+static __always_inline void
+instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
+{
+	/* Check after, to avoid false positive if memory was not accessed. */
+	kcsan_check_write(dst, size - left);
+}
+
+#endif /* _LINUX_INSTRUMENTED_H */
-- 
2.25.0.341.g760bfbb309-goog



* [PATCH 2/5] asm-generic, atomic-instrumented: Use generic instrumented.h
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
@ 2020-01-20 14:19 ` Marco Elver
  2020-01-20 14:19 ` [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops Marco Elver
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-20 14:19 UTC (permalink / raw)
  To: elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, peterz, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

This switches atomic-instrumented.h to use the generic instrumentation
wrappers provided by instrumented.h.

No functional change intended.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
---
 include/asm-generic/atomic-instrumented.h | 395 +++++++++++-----------
 scripts/atomic/gen-atomic-instrumented.sh |  19 +-
 2 files changed, 194 insertions(+), 220 deletions(-)

diff --git a/include/asm-generic/atomic-instrumented.h b/include/asm-generic/atomic-instrumented.h
index 63869ded73ac..379986e40159 100644
--- a/include/asm-generic/atomic-instrumented.h
+++ b/include/asm-generic/atomic-instrumented.h
@@ -19,25 +19,12 @@
 
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
-	kasan_check_read(v, size);
-	kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
-	kasan_check_write(v, size);
-	kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>
 
 static __always_inline int
 atomic_read(const atomic_t *v)
 {
-	__atomic_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read(v);
 }
 #define atomic_read atomic_read
@@ -46,7 +33,7 @@ atomic_read(const atomic_t *v)
 static __always_inline int
 atomic_read_acquire(const atomic_t *v)
 {
-	__atomic_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic_read_acquire(v);
 }
 #define atomic_read_acquire atomic_read_acquire
@@ -55,7 +42,7 @@ atomic_read_acquire(const atomic_t *v)
 static __always_inline void
 atomic_set(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set(v, i);
 }
 #define atomic_set atomic_set
@@ -64,7 +51,7 @@ atomic_set(atomic_t *v, int i)
 static __always_inline void
 atomic_set_release(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_set_release(v, i);
 }
 #define atomic_set_release atomic_set_release
@@ -73,7 +60,7 @@ atomic_set_release(atomic_t *v, int i)
 static __always_inline void
 atomic_add(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_add(i, v);
 }
 #define atomic_add atomic_add
@@ -82,7 +69,7 @@ atomic_add(int i, atomic_t *v)
 static __always_inline int
 atomic_add_return(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return(i, v);
 }
 #define atomic_add_return atomic_add_return
@@ -92,7 +79,7 @@ atomic_add_return(int i, atomic_t *v)
 static __always_inline int
 atomic_add_return_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_acquire(i, v);
 }
 #define atomic_add_return_acquire atomic_add_return_acquire
@@ -102,7 +89,7 @@ atomic_add_return_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_add_return_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_release(i, v);
 }
 #define atomic_add_return_release atomic_add_return_release
@@ -112,7 +99,7 @@ atomic_add_return_release(int i, atomic_t *v)
 static __always_inline int
 atomic_add_return_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_return_relaxed(i, v);
 }
 #define atomic_add_return_relaxed atomic_add_return_relaxed
@@ -122,7 +109,7 @@ atomic_add_return_relaxed(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_add(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add(i, v);
 }
 #define atomic_fetch_add atomic_fetch_add
@@ -132,7 +119,7 @@ atomic_fetch_add(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_add_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_acquire(i, v);
 }
 #define atomic_fetch_add_acquire atomic_fetch_add_acquire
@@ -142,7 +129,7 @@ atomic_fetch_add_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_add_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_release(i, v);
 }
 #define atomic_fetch_add_release atomic_fetch_add_release
@@ -152,7 +139,7 @@ atomic_fetch_add_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_add_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_relaxed(i, v);
 }
 #define atomic_fetch_add_relaxed atomic_fetch_add_relaxed
@@ -161,7 +148,7 @@ atomic_fetch_add_relaxed(int i, atomic_t *v)
 static __always_inline void
 atomic_sub(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_sub(i, v);
 }
 #define atomic_sub atomic_sub
@@ -170,7 +157,7 @@ atomic_sub(int i, atomic_t *v)
 static __always_inline int
 atomic_sub_return(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return(i, v);
 }
 #define atomic_sub_return atomic_sub_return
@@ -180,7 +167,7 @@ atomic_sub_return(int i, atomic_t *v)
 static __always_inline int
 atomic_sub_return_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_acquire(i, v);
 }
 #define atomic_sub_return_acquire atomic_sub_return_acquire
@@ -190,7 +177,7 @@ atomic_sub_return_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_sub_return_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_release(i, v);
 }
 #define atomic_sub_return_release atomic_sub_return_release
@@ -200,7 +187,7 @@ atomic_sub_return_release(int i, atomic_t *v)
 static __always_inline int
 atomic_sub_return_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_return_relaxed(i, v);
 }
 #define atomic_sub_return_relaxed atomic_sub_return_relaxed
@@ -210,7 +197,7 @@ atomic_sub_return_relaxed(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_sub(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub(i, v);
 }
 #define atomic_fetch_sub atomic_fetch_sub
@@ -220,7 +207,7 @@ atomic_fetch_sub(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_sub_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_acquire(i, v);
 }
 #define atomic_fetch_sub_acquire atomic_fetch_sub_acquire
@@ -230,7 +217,7 @@ atomic_fetch_sub_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_sub_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_release(i, v);
 }
 #define atomic_fetch_sub_release atomic_fetch_sub_release
@@ -240,7 +227,7 @@ atomic_fetch_sub_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_sub_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_sub_relaxed(i, v);
 }
 #define atomic_fetch_sub_relaxed atomic_fetch_sub_relaxed
@@ -250,7 +237,7 @@ atomic_fetch_sub_relaxed(int i, atomic_t *v)
 static __always_inline void
 atomic_inc(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_inc(v);
 }
 #define atomic_inc atomic_inc
@@ -260,7 +247,7 @@ atomic_inc(atomic_t *v)
 static __always_inline int
 atomic_inc_return(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_return(v);
 }
 #define atomic_inc_return atomic_inc_return
@@ -270,7 +257,7 @@ atomic_inc_return(atomic_t *v)
 static __always_inline int
 atomic_inc_return_acquire(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_return_acquire(v);
 }
 #define atomic_inc_return_acquire atomic_inc_return_acquire
@@ -280,7 +267,7 @@ atomic_inc_return_acquire(atomic_t *v)
 static __always_inline int
 atomic_inc_return_release(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_return_release(v);
 }
 #define atomic_inc_return_release atomic_inc_return_release
@@ -290,7 +277,7 @@ atomic_inc_return_release(atomic_t *v)
 static __always_inline int
 atomic_inc_return_relaxed(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_return_relaxed(v);
 }
 #define atomic_inc_return_relaxed atomic_inc_return_relaxed
@@ -300,7 +287,7 @@ atomic_inc_return_relaxed(atomic_t *v)
 static __always_inline int
 atomic_fetch_inc(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc(v);
 }
 #define atomic_fetch_inc atomic_fetch_inc
@@ -310,7 +297,7 @@ atomic_fetch_inc(atomic_t *v)
 static __always_inline int
 atomic_fetch_inc_acquire(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_acquire(v);
 }
 #define atomic_fetch_inc_acquire atomic_fetch_inc_acquire
@@ -320,7 +307,7 @@ atomic_fetch_inc_acquire(atomic_t *v)
 static __always_inline int
 atomic_fetch_inc_release(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_release(v);
 }
 #define atomic_fetch_inc_release atomic_fetch_inc_release
@@ -330,7 +317,7 @@ atomic_fetch_inc_release(atomic_t *v)
 static __always_inline int
 atomic_fetch_inc_relaxed(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_inc_relaxed(v);
 }
 #define atomic_fetch_inc_relaxed atomic_fetch_inc_relaxed
@@ -340,7 +327,7 @@ atomic_fetch_inc_relaxed(atomic_t *v)
 static __always_inline void
 atomic_dec(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_dec(v);
 }
 #define atomic_dec atomic_dec
@@ -350,7 +337,7 @@ atomic_dec(atomic_t *v)
 static __always_inline int
 atomic_dec_return(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_return(v);
 }
 #define atomic_dec_return atomic_dec_return
@@ -360,7 +347,7 @@ atomic_dec_return(atomic_t *v)
 static __always_inline int
 atomic_dec_return_acquire(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_return_acquire(v);
 }
 #define atomic_dec_return_acquire atomic_dec_return_acquire
@@ -370,7 +357,7 @@ atomic_dec_return_acquire(atomic_t *v)
 static __always_inline int
 atomic_dec_return_release(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_return_release(v);
 }
 #define atomic_dec_return_release atomic_dec_return_release
@@ -380,7 +367,7 @@ atomic_dec_return_release(atomic_t *v)
 static __always_inline int
 atomic_dec_return_relaxed(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_return_relaxed(v);
 }
 #define atomic_dec_return_relaxed atomic_dec_return_relaxed
@@ -390,7 +377,7 @@ atomic_dec_return_relaxed(atomic_t *v)
 static __always_inline int
 atomic_fetch_dec(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec(v);
 }
 #define atomic_fetch_dec atomic_fetch_dec
@@ -400,7 +387,7 @@ atomic_fetch_dec(atomic_t *v)
 static __always_inline int
 atomic_fetch_dec_acquire(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_acquire(v);
 }
 #define atomic_fetch_dec_acquire atomic_fetch_dec_acquire
@@ -410,7 +397,7 @@ atomic_fetch_dec_acquire(atomic_t *v)
 static __always_inline int
 atomic_fetch_dec_release(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_release(v);
 }
 #define atomic_fetch_dec_release atomic_fetch_dec_release
@@ -420,7 +407,7 @@ atomic_fetch_dec_release(atomic_t *v)
 static __always_inline int
 atomic_fetch_dec_relaxed(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_dec_relaxed(v);
 }
 #define atomic_fetch_dec_relaxed atomic_fetch_dec_relaxed
@@ -429,7 +416,7 @@ atomic_fetch_dec_relaxed(atomic_t *v)
 static __always_inline void
 atomic_and(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_and(i, v);
 }
 #define atomic_and atomic_and
@@ -438,7 +425,7 @@ atomic_and(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_and(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_and(i, v);
 }
 #define atomic_fetch_and atomic_fetch_and
@@ -448,7 +435,7 @@ atomic_fetch_and(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_and_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_acquire(i, v);
 }
 #define atomic_fetch_and_acquire atomic_fetch_and_acquire
@@ -458,7 +445,7 @@ atomic_fetch_and_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_and_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_release(i, v);
 }
 #define atomic_fetch_and_release atomic_fetch_and_release
@@ -468,7 +455,7 @@ atomic_fetch_and_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_and_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_and_relaxed(i, v);
 }
 #define atomic_fetch_and_relaxed atomic_fetch_and_relaxed
@@ -478,7 +465,7 @@ atomic_fetch_and_relaxed(int i, atomic_t *v)
 static __always_inline void
 atomic_andnot(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_andnot(i, v);
 }
 #define atomic_andnot atomic_andnot
@@ -488,7 +475,7 @@ atomic_andnot(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_andnot(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot(i, v);
 }
 #define atomic_fetch_andnot atomic_fetch_andnot
@@ -498,7 +485,7 @@ atomic_fetch_andnot(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_andnot_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_acquire(i, v);
 }
 #define atomic_fetch_andnot_acquire atomic_fetch_andnot_acquire
@@ -508,7 +495,7 @@ atomic_fetch_andnot_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_andnot_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_release(i, v);
 }
 #define atomic_fetch_andnot_release atomic_fetch_andnot_release
@@ -518,7 +505,7 @@ atomic_fetch_andnot_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_andnot_relaxed(i, v);
 }
 #define atomic_fetch_andnot_relaxed atomic_fetch_andnot_relaxed
@@ -527,7 +514,7 @@ atomic_fetch_andnot_relaxed(int i, atomic_t *v)
 static __always_inline void
 atomic_or(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_or(i, v);
 }
 #define atomic_or atomic_or
@@ -536,7 +523,7 @@ atomic_or(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_or(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_or(i, v);
 }
 #define atomic_fetch_or atomic_fetch_or
@@ -546,7 +533,7 @@ atomic_fetch_or(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_or_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_acquire(i, v);
 }
 #define atomic_fetch_or_acquire atomic_fetch_or_acquire
@@ -556,7 +543,7 @@ atomic_fetch_or_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_or_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_release(i, v);
 }
 #define atomic_fetch_or_release atomic_fetch_or_release
@@ -566,7 +553,7 @@ atomic_fetch_or_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_or_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_or_relaxed(i, v);
 }
 #define atomic_fetch_or_relaxed atomic_fetch_or_relaxed
@@ -575,7 +562,7 @@ atomic_fetch_or_relaxed(int i, atomic_t *v)
 static __always_inline void
 atomic_xor(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic_xor(i, v);
 }
 #define atomic_xor atomic_xor
@@ -584,7 +571,7 @@ atomic_xor(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_xor(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor(i, v);
 }
 #define atomic_fetch_xor atomic_fetch_xor
@@ -594,7 +581,7 @@ atomic_fetch_xor(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_xor_acquire(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_acquire(i, v);
 }
 #define atomic_fetch_xor_acquire atomic_fetch_xor_acquire
@@ -604,7 +591,7 @@ atomic_fetch_xor_acquire(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_xor_release(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_release(i, v);
 }
 #define atomic_fetch_xor_release atomic_fetch_xor_release
@@ -614,7 +601,7 @@ atomic_fetch_xor_release(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_xor_relaxed(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_xor_relaxed(i, v);
 }
 #define atomic_fetch_xor_relaxed atomic_fetch_xor_relaxed
@@ -624,7 +611,7 @@ atomic_fetch_xor_relaxed(int i, atomic_t *v)
 static __always_inline int
 atomic_xchg(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_xchg(v, i);
 }
 #define atomic_xchg atomic_xchg
@@ -634,7 +621,7 @@ atomic_xchg(atomic_t *v, int i)
 static __always_inline int
 atomic_xchg_acquire(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_xchg_acquire(v, i);
 }
 #define atomic_xchg_acquire atomic_xchg_acquire
@@ -644,7 +631,7 @@ atomic_xchg_acquire(atomic_t *v, int i)
 static __always_inline int
 atomic_xchg_release(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_xchg_release(v, i);
 }
 #define atomic_xchg_release atomic_xchg_release
@@ -654,7 +641,7 @@ atomic_xchg_release(atomic_t *v, int i)
 static __always_inline int
 atomic_xchg_relaxed(atomic_t *v, int i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_xchg_relaxed(v, i);
 }
 #define atomic_xchg_relaxed atomic_xchg_relaxed
@@ -664,7 +651,7 @@ atomic_xchg_relaxed(atomic_t *v, int i)
 static __always_inline int
 atomic_cmpxchg(atomic_t *v, int old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg(v, old, new);
 }
 #define atomic_cmpxchg atomic_cmpxchg
@@ -674,7 +661,7 @@ atomic_cmpxchg(atomic_t *v, int old, int new)
 static __always_inline int
 atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_acquire(v, old, new);
 }
 #define atomic_cmpxchg_acquire atomic_cmpxchg_acquire
@@ -684,7 +671,7 @@ atomic_cmpxchg_acquire(atomic_t *v, int old, int new)
 static __always_inline int
 atomic_cmpxchg_release(atomic_t *v, int old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_release(v, old, new);
 }
 #define atomic_cmpxchg_release atomic_cmpxchg_release
@@ -694,7 +681,7 @@ atomic_cmpxchg_release(atomic_t *v, int old, int new)
 static __always_inline int
 atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_cmpxchg_relaxed(v, old, new);
 }
 #define atomic_cmpxchg_relaxed atomic_cmpxchg_relaxed
@@ -704,8 +691,8 @@ atomic_cmpxchg_relaxed(atomic_t *v, int old, int new)
 static __always_inline bool
 atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg(v, old, new);
 }
 #define atomic_try_cmpxchg atomic_try_cmpxchg
@@ -715,8 +702,8 @@ atomic_try_cmpxchg(atomic_t *v, int *old, int new)
 static __always_inline bool
 atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_acquire(v, old, new);
 }
 #define atomic_try_cmpxchg_acquire atomic_try_cmpxchg_acquire
@@ -726,8 +713,8 @@ atomic_try_cmpxchg_acquire(atomic_t *v, int *old, int new)
 static __always_inline bool
 atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_release(v, old, new);
 }
 #define atomic_try_cmpxchg_release atomic_try_cmpxchg_release
@@ -737,8 +724,8 @@ atomic_try_cmpxchg_release(atomic_t *v, int *old, int new)
 static __always_inline bool
 atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic_try_cmpxchg_relaxed(v, old, new);
 }
 #define atomic_try_cmpxchg_relaxed atomic_try_cmpxchg_relaxed
@@ -748,7 +735,7 @@ atomic_try_cmpxchg_relaxed(atomic_t *v, int *old, int new)
 static __always_inline bool
 atomic_sub_and_test(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_sub_and_test(i, v);
 }
 #define atomic_sub_and_test atomic_sub_and_test
@@ -758,7 +745,7 @@ atomic_sub_and_test(int i, atomic_t *v)
 static __always_inline bool
 atomic_dec_and_test(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_and_test(v);
 }
 #define atomic_dec_and_test atomic_dec_and_test
@@ -768,7 +755,7 @@ atomic_dec_and_test(atomic_t *v)
 static __always_inline bool
 atomic_inc_and_test(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_and_test(v);
 }
 #define atomic_inc_and_test atomic_inc_and_test
@@ -778,7 +765,7 @@ atomic_inc_and_test(atomic_t *v)
 static __always_inline bool
 atomic_add_negative(int i, atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_negative(i, v);
 }
 #define atomic_add_negative atomic_add_negative
@@ -788,7 +775,7 @@ atomic_add_negative(int i, atomic_t *v)
 static __always_inline int
 atomic_fetch_add_unless(atomic_t *v, int a, int u)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_fetch_add_unless(v, a, u);
 }
 #define atomic_fetch_add_unless atomic_fetch_add_unless
@@ -798,7 +785,7 @@ atomic_fetch_add_unless(atomic_t *v, int a, int u)
 static __always_inline bool
 atomic_add_unless(atomic_t *v, int a, int u)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_add_unless(v, a, u);
 }
 #define atomic_add_unless atomic_add_unless
@@ -808,7 +795,7 @@ atomic_add_unless(atomic_t *v, int a, int u)
 static __always_inline bool
 atomic_inc_not_zero(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_not_zero(v);
 }
 #define atomic_inc_not_zero atomic_inc_not_zero
@@ -818,7 +805,7 @@ atomic_inc_not_zero(atomic_t *v)
 static __always_inline bool
 atomic_inc_unless_negative(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_inc_unless_negative(v);
 }
 #define atomic_inc_unless_negative atomic_inc_unless_negative
@@ -828,7 +815,7 @@ atomic_inc_unless_negative(atomic_t *v)
 static __always_inline bool
 atomic_dec_unless_positive(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_unless_positive(v);
 }
 #define atomic_dec_unless_positive atomic_dec_unless_positive
@@ -838,7 +825,7 @@ atomic_dec_unless_positive(atomic_t *v)
 static __always_inline int
 atomic_dec_if_positive(atomic_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic_dec_if_positive(v);
 }
 #define atomic_dec_if_positive atomic_dec_if_positive
@@ -847,7 +834,7 @@ atomic_dec_if_positive(atomic_t *v)
 static __always_inline s64
 atomic64_read(const atomic64_t *v)
 {
-	__atomic_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic64_read(v);
 }
 #define atomic64_read atomic64_read
@@ -856,7 +843,7 @@ atomic64_read(const atomic64_t *v)
 static __always_inline s64
 atomic64_read_acquire(const atomic64_t *v)
 {
-	__atomic_check_read(v, sizeof(*v));
+	instrument_atomic_read(v, sizeof(*v));
 	return arch_atomic64_read_acquire(v);
 }
 #define atomic64_read_acquire atomic64_read_acquire
@@ -865,7 +852,7 @@ atomic64_read_acquire(const atomic64_t *v)
 static __always_inline void
 atomic64_set(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_set(v, i);
 }
 #define atomic64_set atomic64_set
@@ -874,7 +861,7 @@ atomic64_set(atomic64_t *v, s64 i)
 static __always_inline void
 atomic64_set_release(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_set_release(v, i);
 }
 #define atomic64_set_release atomic64_set_release
@@ -883,7 +870,7 @@ atomic64_set_release(atomic64_t *v, s64 i)
 static __always_inline void
 atomic64_add(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_add(i, v);
 }
 #define atomic64_add atomic64_add
@@ -892,7 +879,7 @@ atomic64_add(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_add_return(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_return(i, v);
 }
 #define atomic64_add_return atomic64_add_return
@@ -902,7 +889,7 @@ atomic64_add_return(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_add_return_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_return_acquire(i, v);
 }
 #define atomic64_add_return_acquire atomic64_add_return_acquire
@@ -912,7 +899,7 @@ atomic64_add_return_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_add_return_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_return_release(i, v);
 }
 #define atomic64_add_return_release atomic64_add_return_release
@@ -922,7 +909,7 @@ atomic64_add_return_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_add_return_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_return_relaxed(i, v);
 }
 #define atomic64_add_return_relaxed atomic64_add_return_relaxed
@@ -932,7 +919,7 @@ atomic64_add_return_relaxed(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_add(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add(i, v);
 }
 #define atomic64_fetch_add atomic64_fetch_add
@@ -942,7 +929,7 @@ atomic64_fetch_add(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_acquire(i, v);
 }
 #define atomic64_fetch_add_acquire atomic64_fetch_add_acquire
@@ -952,7 +939,7 @@ atomic64_fetch_add_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_add_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_release(i, v);
 }
 #define atomic64_fetch_add_release atomic64_fetch_add_release
@@ -962,7 +949,7 @@ atomic64_fetch_add_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_relaxed(i, v);
 }
 #define atomic64_fetch_add_relaxed atomic64_fetch_add_relaxed
@@ -971,7 +958,7 @@ atomic64_fetch_add_relaxed(s64 i, atomic64_t *v)
 static __always_inline void
 atomic64_sub(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_sub(i, v);
 }
 #define atomic64_sub atomic64_sub
@@ -980,7 +967,7 @@ atomic64_sub(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_sub_return(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_sub_return(i, v);
 }
 #define atomic64_sub_return atomic64_sub_return
@@ -990,7 +977,7 @@ atomic64_sub_return(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_acquire(i, v);
 }
 #define atomic64_sub_return_acquire atomic64_sub_return_acquire
@@ -1000,7 +987,7 @@ atomic64_sub_return_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_sub_return_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_release(i, v);
 }
 #define atomic64_sub_return_release atomic64_sub_return_release
@@ -1010,7 +997,7 @@ atomic64_sub_return_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_sub_return_relaxed(i, v);
 }
 #define atomic64_sub_return_relaxed atomic64_sub_return_relaxed
@@ -1020,7 +1007,7 @@ atomic64_sub_return_relaxed(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_sub(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub(i, v);
 }
 #define atomic64_fetch_sub atomic64_fetch_sub
@@ -1030,7 +1017,7 @@ atomic64_fetch_sub(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_acquire(i, v);
 }
 #define atomic64_fetch_sub_acquire atomic64_fetch_sub_acquire
@@ -1040,7 +1027,7 @@ atomic64_fetch_sub_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_release(i, v);
 }
 #define atomic64_fetch_sub_release atomic64_fetch_sub_release
@@ -1050,7 +1037,7 @@ atomic64_fetch_sub_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_sub_relaxed(i, v);
 }
 #define atomic64_fetch_sub_relaxed atomic64_fetch_sub_relaxed
@@ -1060,7 +1047,7 @@ atomic64_fetch_sub_relaxed(s64 i, atomic64_t *v)
 static __always_inline void
 atomic64_inc(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_inc(v);
 }
 #define atomic64_inc atomic64_inc
@@ -1070,7 +1057,7 @@ atomic64_inc(atomic64_t *v)
 static __always_inline s64
 atomic64_inc_return(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_return(v);
 }
 #define atomic64_inc_return atomic64_inc_return
@@ -1080,7 +1067,7 @@ atomic64_inc_return(atomic64_t *v)
 static __always_inline s64
 atomic64_inc_return_acquire(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_acquire(v);
 }
 #define atomic64_inc_return_acquire atomic64_inc_return_acquire
@@ -1090,7 +1077,7 @@ atomic64_inc_return_acquire(atomic64_t *v)
 static __always_inline s64
 atomic64_inc_return_release(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_release(v);
 }
 #define atomic64_inc_return_release atomic64_inc_return_release
@@ -1100,7 +1087,7 @@ atomic64_inc_return_release(atomic64_t *v)
 static __always_inline s64
 atomic64_inc_return_relaxed(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_return_relaxed(v);
 }
 #define atomic64_inc_return_relaxed atomic64_inc_return_relaxed
@@ -1110,7 +1097,7 @@ atomic64_inc_return_relaxed(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_inc(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc(v);
 }
 #define atomic64_fetch_inc atomic64_fetch_inc
@@ -1120,7 +1107,7 @@ atomic64_fetch_inc(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_inc_acquire(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_acquire(v);
 }
 #define atomic64_fetch_inc_acquire atomic64_fetch_inc_acquire
@@ -1130,7 +1117,7 @@ atomic64_fetch_inc_acquire(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_inc_release(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_release(v);
 }
 #define atomic64_fetch_inc_release atomic64_fetch_inc_release
@@ -1140,7 +1127,7 @@ atomic64_fetch_inc_release(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_inc_relaxed(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_inc_relaxed(v);
 }
 #define atomic64_fetch_inc_relaxed atomic64_fetch_inc_relaxed
@@ -1150,7 +1137,7 @@ atomic64_fetch_inc_relaxed(atomic64_t *v)
 static __always_inline void
 atomic64_dec(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_dec(v);
 }
 #define atomic64_dec atomic64_dec
@@ -1160,7 +1147,7 @@ atomic64_dec(atomic64_t *v)
 static __always_inline s64
 atomic64_dec_return(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_return(v);
 }
 #define atomic64_dec_return atomic64_dec_return
@@ -1170,7 +1157,7 @@ atomic64_dec_return(atomic64_t *v)
 static __always_inline s64
 atomic64_dec_return_acquire(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_acquire(v);
 }
 #define atomic64_dec_return_acquire atomic64_dec_return_acquire
@@ -1180,7 +1167,7 @@ atomic64_dec_return_acquire(atomic64_t *v)
 static __always_inline s64
 atomic64_dec_return_release(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_release(v);
 }
 #define atomic64_dec_return_release atomic64_dec_return_release
@@ -1190,7 +1177,7 @@ atomic64_dec_return_release(atomic64_t *v)
 static __always_inline s64
 atomic64_dec_return_relaxed(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_return_relaxed(v);
 }
 #define atomic64_dec_return_relaxed atomic64_dec_return_relaxed
@@ -1200,7 +1187,7 @@ atomic64_dec_return_relaxed(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_dec(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec(v);
 }
 #define atomic64_fetch_dec atomic64_fetch_dec
@@ -1210,7 +1197,7 @@ atomic64_fetch_dec(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_dec_acquire(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_acquire(v);
 }
 #define atomic64_fetch_dec_acquire atomic64_fetch_dec_acquire
@@ -1220,7 +1207,7 @@ atomic64_fetch_dec_acquire(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_dec_release(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_release(v);
 }
 #define atomic64_fetch_dec_release atomic64_fetch_dec_release
@@ -1230,7 +1217,7 @@ atomic64_fetch_dec_release(atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_dec_relaxed(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_dec_relaxed(v);
 }
 #define atomic64_fetch_dec_relaxed atomic64_fetch_dec_relaxed
@@ -1239,7 +1226,7 @@ atomic64_fetch_dec_relaxed(atomic64_t *v)
 static __always_inline void
 atomic64_and(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_and(i, v);
 }
 #define atomic64_and atomic64_and
@@ -1248,7 +1235,7 @@ atomic64_and(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_and(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and(i, v);
 }
 #define atomic64_fetch_and atomic64_fetch_and
@@ -1258,7 +1245,7 @@ atomic64_fetch_and(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_acquire(i, v);
 }
 #define atomic64_fetch_and_acquire atomic64_fetch_and_acquire
@@ -1268,7 +1255,7 @@ atomic64_fetch_and_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_and_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_release(i, v);
 }
 #define atomic64_fetch_and_release atomic64_fetch_and_release
@@ -1278,7 +1265,7 @@ atomic64_fetch_and_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_and_relaxed(i, v);
 }
 #define atomic64_fetch_and_relaxed atomic64_fetch_and_relaxed
@@ -1288,7 +1275,7 @@ atomic64_fetch_and_relaxed(s64 i, atomic64_t *v)
 static __always_inline void
 atomic64_andnot(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_andnot(i, v);
 }
 #define atomic64_andnot atomic64_andnot
@@ -1298,7 +1285,7 @@ atomic64_andnot(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_andnot(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot(i, v);
 }
 #define atomic64_fetch_andnot atomic64_fetch_andnot
@@ -1308,7 +1295,7 @@ atomic64_fetch_andnot(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_acquire(i, v);
 }
 #define atomic64_fetch_andnot_acquire atomic64_fetch_andnot_acquire
@@ -1318,7 +1305,7 @@ atomic64_fetch_andnot_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_release(i, v);
 }
 #define atomic64_fetch_andnot_release atomic64_fetch_andnot_release
@@ -1328,7 +1315,7 @@ atomic64_fetch_andnot_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_andnot_relaxed(i, v);
 }
 #define atomic64_fetch_andnot_relaxed atomic64_fetch_andnot_relaxed
@@ -1337,7 +1324,7 @@ atomic64_fetch_andnot_relaxed(s64 i, atomic64_t *v)
 static __always_inline void
 atomic64_or(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_or(i, v);
 }
 #define atomic64_or atomic64_or
@@ -1346,7 +1333,7 @@ atomic64_or(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_or(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or(i, v);
 }
 #define atomic64_fetch_or atomic64_fetch_or
@@ -1356,7 +1343,7 @@ atomic64_fetch_or(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_acquire(i, v);
 }
 #define atomic64_fetch_or_acquire atomic64_fetch_or_acquire
@@ -1366,7 +1353,7 @@ atomic64_fetch_or_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_or_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_release(i, v);
 }
 #define atomic64_fetch_or_release atomic64_fetch_or_release
@@ -1376,7 +1363,7 @@ atomic64_fetch_or_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_or_relaxed(i, v);
 }
 #define atomic64_fetch_or_relaxed atomic64_fetch_or_relaxed
@@ -1385,7 +1372,7 @@ atomic64_fetch_or_relaxed(s64 i, atomic64_t *v)
 static __always_inline void
 atomic64_xor(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	arch_atomic64_xor(i, v);
 }
 #define atomic64_xor atomic64_xor
@@ -1394,7 +1381,7 @@ atomic64_xor(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_xor(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor(i, v);
 }
 #define atomic64_fetch_xor atomic64_fetch_xor
@@ -1404,7 +1391,7 @@ atomic64_fetch_xor(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_acquire(i, v);
 }
 #define atomic64_fetch_xor_acquire atomic64_fetch_xor_acquire
@@ -1414,7 +1401,7 @@ atomic64_fetch_xor_acquire(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_release(i, v);
 }
 #define atomic64_fetch_xor_release atomic64_fetch_xor_release
@@ -1424,7 +1411,7 @@ atomic64_fetch_xor_release(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_xor_relaxed(i, v);
 }
 #define atomic64_fetch_xor_relaxed atomic64_fetch_xor_relaxed
@@ -1434,7 +1421,7 @@ atomic64_fetch_xor_relaxed(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_xchg(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_xchg(v, i);
 }
 #define atomic64_xchg atomic64_xchg
@@ -1444,7 +1431,7 @@ atomic64_xchg(atomic64_t *v, s64 i)
 static __always_inline s64
 atomic64_xchg_acquire(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_xchg_acquire(v, i);
 }
 #define atomic64_xchg_acquire atomic64_xchg_acquire
@@ -1454,7 +1441,7 @@ atomic64_xchg_acquire(atomic64_t *v, s64 i)
 static __always_inline s64
 atomic64_xchg_release(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_xchg_release(v, i);
 }
 #define atomic64_xchg_release atomic64_xchg_release
@@ -1464,7 +1451,7 @@ atomic64_xchg_release(atomic64_t *v, s64 i)
 static __always_inline s64
 atomic64_xchg_relaxed(atomic64_t *v, s64 i)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_xchg_relaxed(v, i);
 }
 #define atomic64_xchg_relaxed atomic64_xchg_relaxed
@@ -1474,7 +1461,7 @@ atomic64_xchg_relaxed(atomic64_t *v, s64 i)
 static __always_inline s64
 atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg(v, old, new);
 }
 #define atomic64_cmpxchg atomic64_cmpxchg
@@ -1484,7 +1471,7 @@ atomic64_cmpxchg(atomic64_t *v, s64 old, s64 new)
 static __always_inline s64
 atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_acquire(v, old, new);
 }
 #define atomic64_cmpxchg_acquire atomic64_cmpxchg_acquire
@@ -1494,7 +1481,7 @@ atomic64_cmpxchg_acquire(atomic64_t *v, s64 old, s64 new)
 static __always_inline s64
 atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_release(v, old, new);
 }
 #define atomic64_cmpxchg_release atomic64_cmpxchg_release
@@ -1504,7 +1491,7 @@ atomic64_cmpxchg_release(atomic64_t *v, s64 old, s64 new)
 static __always_inline s64
 atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_cmpxchg_relaxed(v, old, new);
 }
 #define atomic64_cmpxchg_relaxed atomic64_cmpxchg_relaxed
@@ -1514,8 +1501,8 @@ atomic64_cmpxchg_relaxed(atomic64_t *v, s64 old, s64 new)
 static __always_inline bool
 atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg(v, old, new);
 }
 #define atomic64_try_cmpxchg atomic64_try_cmpxchg
@@ -1525,8 +1512,8 @@ atomic64_try_cmpxchg(atomic64_t *v, s64 *old, s64 new)
 static __always_inline bool
 atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_acquire(v, old, new);
 }
 #define atomic64_try_cmpxchg_acquire atomic64_try_cmpxchg_acquire
@@ -1536,8 +1523,8 @@ atomic64_try_cmpxchg_acquire(atomic64_t *v, s64 *old, s64 new)
 static __always_inline bool
 atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_release(v, old, new);
 }
 #define atomic64_try_cmpxchg_release atomic64_try_cmpxchg_release
@@ -1547,8 +1534,8 @@ atomic64_try_cmpxchg_release(atomic64_t *v, s64 *old, s64 new)
 static __always_inline bool
 atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 {
-	__atomic_check_write(v, sizeof(*v));
-	__atomic_check_write(old, sizeof(*old));
+	instrument_atomic_write(v, sizeof(*v));
+	instrument_atomic_write(old, sizeof(*old));
 	return arch_atomic64_try_cmpxchg_relaxed(v, old, new);
 }
 #define atomic64_try_cmpxchg_relaxed atomic64_try_cmpxchg_relaxed
@@ -1558,7 +1545,7 @@ atomic64_try_cmpxchg_relaxed(atomic64_t *v, s64 *old, s64 new)
 static __always_inline bool
 atomic64_sub_and_test(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_sub_and_test(i, v);
 }
 #define atomic64_sub_and_test atomic64_sub_and_test
@@ -1568,7 +1555,7 @@ atomic64_sub_and_test(s64 i, atomic64_t *v)
 static __always_inline bool
 atomic64_dec_and_test(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_and_test(v);
 }
 #define atomic64_dec_and_test atomic64_dec_and_test
@@ -1578,7 +1565,7 @@ atomic64_dec_and_test(atomic64_t *v)
 static __always_inline bool
 atomic64_inc_and_test(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_and_test(v);
 }
 #define atomic64_inc_and_test atomic64_inc_and_test
@@ -1588,7 +1575,7 @@ atomic64_inc_and_test(atomic64_t *v)
 static __always_inline bool
 atomic64_add_negative(s64 i, atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_negative(i, v);
 }
 #define atomic64_add_negative atomic64_add_negative
@@ -1598,7 +1585,7 @@ atomic64_add_negative(s64 i, atomic64_t *v)
 static __always_inline s64
 atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_fetch_add_unless(v, a, u);
 }
 #define atomic64_fetch_add_unless atomic64_fetch_add_unless
@@ -1608,7 +1595,7 @@ atomic64_fetch_add_unless(atomic64_t *v, s64 a, s64 u)
 static __always_inline bool
 atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_add_unless(v, a, u);
 }
 #define atomic64_add_unless atomic64_add_unless
@@ -1618,7 +1605,7 @@ atomic64_add_unless(atomic64_t *v, s64 a, s64 u)
 static __always_inline bool
 atomic64_inc_not_zero(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_not_zero(v);
 }
 #define atomic64_inc_not_zero atomic64_inc_not_zero
@@ -1628,7 +1615,7 @@ atomic64_inc_not_zero(atomic64_t *v)
 static __always_inline bool
 atomic64_inc_unless_negative(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_inc_unless_negative(v);
 }
 #define atomic64_inc_unless_negative atomic64_inc_unless_negative
@@ -1638,7 +1625,7 @@ atomic64_inc_unless_negative(atomic64_t *v)
 static __always_inline bool
 atomic64_dec_unless_positive(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_unless_positive(v);
 }
 #define atomic64_dec_unless_positive atomic64_dec_unless_positive
@@ -1648,7 +1635,7 @@ atomic64_dec_unless_positive(atomic64_t *v)
 static __always_inline s64
 atomic64_dec_if_positive(atomic64_t *v)
 {
-	__atomic_check_write(v, sizeof(*v));
+	instrument_atomic_write(v, sizeof(*v));
 	return arch_atomic64_dec_if_positive(v);
 }
 #define atomic64_dec_if_positive atomic64_dec_if_positive
@@ -1658,7 +1645,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define xchg(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_xchg(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1667,7 +1654,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define xchg_acquire(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_xchg_acquire(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1676,7 +1663,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define xchg_release(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_xchg_release(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1685,7 +1672,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define xchg_relaxed(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_xchg_relaxed(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1694,7 +1681,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1703,7 +1690,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg_acquire(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg_acquire(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1712,7 +1699,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg_release(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg_release(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1721,7 +1708,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg_relaxed(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg_relaxed(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1730,7 +1717,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg64(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg64(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1739,7 +1726,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg64_acquire(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg64_acquire(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1748,7 +1735,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg64_release(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg64_release(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1757,7 +1744,7 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg64_relaxed(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg64_relaxed(__ai_ptr, __VA_ARGS__);				\
 })
 #endif
@@ -1765,28 +1752,28 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg_local(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg_local(__ai_ptr, __VA_ARGS__);				\
 })
 
 #define cmpxchg64_local(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_cmpxchg64_local(__ai_ptr, __VA_ARGS__);				\
 })
 
 #define sync_cmpxchg(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, sizeof(*__ai_ptr));		\
 	arch_sync_cmpxchg(__ai_ptr, __VA_ARGS__);				\
 })
 
 #define cmpxchg_double(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr));		\
 	arch_cmpxchg_double(__ai_ptr, __VA_ARGS__);				\
 })
 
@@ -1794,9 +1781,9 @@ atomic64_dec_if_positive(atomic64_t *v)
 #define cmpxchg_double_local(ptr, ...)						\
 ({									\
 	typeof(ptr) __ai_ptr = (ptr);					\
-	__atomic_check_write(__ai_ptr, 2 * sizeof(*__ai_ptr));		\
+	instrument_atomic_write(__ai_ptr, 2 * sizeof(*__ai_ptr));		\
 	arch_cmpxchg_double_local(__ai_ptr, __VA_ARGS__);				\
 })
 
 #endif /* _ASM_GENERIC_ATOMIC_INSTRUMENTED_H */
-// 7b7e2af0e75c8ecb6f02298a7075f503f30d244c
+// 89bf97f3a7509b740845e51ddf31055b48a81f40
diff --git a/scripts/atomic/gen-atomic-instrumented.sh b/scripts/atomic/gen-atomic-instrumented.sh
index fb4222548b22..6afadf73da17 100755
--- a/scripts/atomic/gen-atomic-instrumented.sh
+++ b/scripts/atomic/gen-atomic-instrumented.sh
@@ -20,7 +20,7 @@ gen_param_check()
 	# We don't write to constant parameters
 	[ ${type#c} != ${type} ] && rw="read"
 
-	printf "\t__atomic_check_${rw}(${name}, sizeof(*${name}));\n"
+	printf "\tinstrument_atomic_${rw}(${name}, sizeof(*${name}));\n"
 }
 
 #gen_param_check(arg...)
@@ -107,7 +107,7 @@ cat <<EOF
 #define ${xchg}(ptr, ...)						\\
 ({									\\
 	typeof(ptr) __ai_ptr = (ptr);					\\
-	__atomic_check_write(__ai_ptr, ${mult}sizeof(*__ai_ptr));		\\
+	instrument_atomic_write(__ai_ptr, ${mult}sizeof(*__ai_ptr));		\\
 	arch_${xchg}(__ai_ptr, __VA_ARGS__);				\\
 })
 EOF
@@ -148,20 +148,7 @@ cat << EOF
 
 #include <linux/build_bug.h>
 #include <linux/compiler.h>
-#include <linux/kasan-checks.h>
-#include <linux/kcsan-checks.h>
-
-static __always_inline void __atomic_check_read(const volatile void *v, size_t size)
-{
-	kasan_check_read(v, size);
-	kcsan_check_atomic_read(v, size);
-}
-
-static __always_inline void __atomic_check_write(const volatile void *v, size_t size)
-{
-	kasan_check_write(v, size);
-	kcsan_check_atomic_write(v, size);
-}
+#include <linux/instrumented.h>
 
 EOF
 
-- 
2.25.0.341.g760bfbb309-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
  2020-01-20 14:19 ` [PATCH 2/5] asm-generic, atomic-instrumented: Use generic instrumented.h Marco Elver
@ 2020-01-20 14:19 ` Marco Elver
  2020-01-20 14:40   ` Peter Zijlstra
  2020-01-20 14:19 ` [PATCH 4/5] iov_iter: Use generic instrumented.h Marco Elver
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Marco Elver @ 2020-01-20 14:19 UTC (permalink / raw)
  To: elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, peterz, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

Add explicit KCSAN checks for bitops.

Note that test_bit() is an atomic bitop, and we instrument it as such,
although it is in the non-atomic header. Currently it cannot be moved:
 http://lkml.kernel.org/r/87pnh5dlmn.fsf@dja-thinkpad.axtens.net

Signed-off-by: Marco Elver <elver@google.com>
---
 include/asm-generic/bitops/instrumented-atomic.h | 14 +++++++-------
 include/asm-generic/bitops/instrumented-lock.h   | 10 +++++-----
 .../asm-generic/bitops/instrumented-non-atomic.h | 16 ++++++++--------
 3 files changed, 20 insertions(+), 20 deletions(-)

diff --git a/include/asm-generic/bitops/instrumented-atomic.h b/include/asm-generic/bitops/instrumented-atomic.h
index 18ce3c9e8eec..fb2cb33a4013 100644
--- a/include/asm-generic/bitops/instrumented-atomic.h
+++ b/include/asm-generic/bitops/instrumented-atomic.h
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * set_bit - Atomically set a bit in memory
@@ -25,7 +25,7 @@
  */
 static inline void set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_set_bit(nr, addr);
 }
 
@@ -38,7 +38,7 @@ static inline void set_bit(long nr, volatile unsigned long *addr)
  */
 static inline void clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit(nr, addr);
 }
 
@@ -54,7 +54,7 @@ static inline void clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline void change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_change_bit(nr, addr);
 }
 
@@ -67,7 +67,7 @@ static inline void change_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit(nr, addr);
 }
 
@@ -80,7 +80,7 @@ static inline bool test_and_set_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_clear_bit(nr, addr);
 }
 
@@ -93,7 +93,7 @@ static inline bool test_and_clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_change_bit(nr, addr);
 }
 
diff --git a/include/asm-generic/bitops/instrumented-lock.h b/include/asm-generic/bitops/instrumented-lock.h
index ec53fdeea9ec..b9bec468ae03 100644
--- a/include/asm-generic/bitops/instrumented-lock.h
+++ b/include/asm-generic/bitops/instrumented-lock.h
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_LOCK_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * clear_bit_unlock - Clear a bit in memory, for unlock
@@ -22,7 +22,7 @@
  */
 static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	arch_clear_bit_unlock(nr, addr);
 }
 
@@ -37,7 +37,7 @@ static inline void clear_bit_unlock(long nr, volatile unsigned long *addr)
  */
 static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit_unlock(nr, addr);
 }
 
@@ -52,7 +52,7 @@ static inline void __clear_bit_unlock(long nr, volatile unsigned long *addr)
  */
 static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_and_set_bit_lock(nr, addr);
 }
 
@@ -71,7 +71,7 @@ static inline bool test_and_set_bit_lock(long nr, volatile unsigned long *addr)
 static inline bool
 clear_bit_unlock_is_negative_byte(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch_clear_bit_unlock_is_negative_byte(nr, addr);
 }
 /* Let everybody know we have it. */
diff --git a/include/asm-generic/bitops/instrumented-non-atomic.h b/include/asm-generic/bitops/instrumented-non-atomic.h
index 95ff28d128a1..20f788a25ef9 100644
--- a/include/asm-generic/bitops/instrumented-non-atomic.h
+++ b/include/asm-generic/bitops/instrumented-non-atomic.h
@@ -11,7 +11,7 @@
 #ifndef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 #define _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H
 
-#include <linux/kasan-checks.h>
+#include <linux/instrumented.h>
 
 /**
  * __set_bit - Set a bit in memory
@@ -24,7 +24,7 @@
  */
 static inline void __set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___set_bit(nr, addr);
 }
 
@@ -39,7 +39,7 @@ static inline void __set_bit(long nr, volatile unsigned long *addr)
  */
 static inline void __clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___clear_bit(nr, addr);
 }
 
@@ -54,7 +54,7 @@ static inline void __clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline void __change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	arch___change_bit(nr, addr);
 }
 
@@ -68,7 +68,7 @@ static inline void __change_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_set_bit(nr, addr);
 }
 
@@ -82,7 +82,7 @@ static inline bool __test_and_set_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_clear_bit(nr, addr);
 }
 
@@ -96,7 +96,7 @@ static inline bool __test_and_clear_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
 {
-	kasan_check_write(addr + BIT_WORD(nr), sizeof(long));
+	instrument_write(addr + BIT_WORD(nr), sizeof(long));
 	return arch___test_and_change_bit(nr, addr);
 }
 
@@ -107,7 +107,7 @@ static inline bool __test_and_change_bit(long nr, volatile unsigned long *addr)
  */
 static inline bool test_bit(long nr, const volatile unsigned long *addr)
 {
-	kasan_check_read(addr + BIT_WORD(nr), sizeof(long));
+	instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
 	return arch_test_bit(nr, addr);
 }
 
-- 
2.25.0.341.g760bfbb309-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 4/5] iov_iter: Use generic instrumented.h
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
  2020-01-20 14:19 ` [PATCH 2/5] asm-generic, atomic-instrumented: Use generic instrumented.h Marco Elver
  2020-01-20 14:19 ` [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops Marco Elver
@ 2020-01-20 14:19 ` Marco Elver
  2020-01-20 14:19 ` [PATCH 5/5] copy_to_user, copy_from_user: " Marco Elver
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-20 14:19 UTC (permalink / raw)
  To: elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, peterz, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

This replaces the KASAN instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
---
 lib/iov_iter.c | 28 +++++++++++++++++++---------
 1 file changed, 19 insertions(+), 9 deletions(-)

diff --git a/lib/iov_iter.c b/lib/iov_iter.c
index fb29c02c6a3c..f06f6f1dd686 100644
--- a/lib/iov_iter.c
+++ b/lib/iov_iter.c
@@ -8,6 +8,7 @@
 #include <linux/splice.h>
 #include <net/checksum.h>
 #include <linux/scatterlist.h>
+#include <linux/instrumented.h>
 
 #define PIPE_PARANOIA /* for now */
 
@@ -137,20 +138,26 @@
 
 static int copyout(void __user *to, const void *from, size_t n)
 {
+	size_t res = n;
+
 	if (access_ok(to, n)) {
-		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_pre(from, n);
+		res = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_post(from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static int copyin(void *to, const void __user *from, size_t n)
 {
+	size_t res = n;
+
 	if (access_ok(from, n)) {
-		kasan_check_write(to, n);
-		n = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_pre(to, n);
+		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_post(to, n, res);
 	}
-	return n;
+	return res;
 }
 
 static size_t copy_page_to_iter_iovec(struct page *page, size_t offset, size_t bytes,
@@ -638,11 +645,14 @@ EXPORT_SYMBOL(_copy_to_iter);
 #ifdef CONFIG_ARCH_HAS_UACCESS_MCSAFE
 static int copyout_mcsafe(void __user *to, const void *from, size_t n)
 {
+	size_t res = n;
+
 	if (access_ok(to, n)) {
-		kasan_check_read(from, n);
-		n = copy_to_user_mcsafe((__force void *) to, from, n);
+		instrument_copy_to_user_pre(from, n);
+		res = copy_to_user_mcsafe((__force void *) to, from, n);
+		instrument_copy_to_user_post(from, n, res);
 	}
-	return n;
+	return res;
 }
 
 static unsigned long memcpy_mcsafe_to_page(struct page *page, size_t offset,
-- 
2.25.0.341.g760bfbb309-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* [PATCH 5/5] copy_to_user, copy_from_user: Use generic instrumented.h
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
                   ` (2 preceding siblings ...)
  2020-01-20 14:19 ` [PATCH 4/5] iov_iter: Use generic instrumented.h Marco Elver
@ 2020-01-20 14:19 ` Marco Elver
  2020-01-20 14:51   ` Dmitry Vyukov
  2020-01-20 14:25 ` [PATCH 1/5] include/linux: Add instrumented.h infrastructure Alexander Potapenko
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 29+ messages in thread
From: Marco Elver @ 2020-01-20 14:19 UTC (permalink / raw)
  To: elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, peterz, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

This replaces the KASAN instrumentation with generic instrumentation,
implicitly adding KCSAN instrumentation support.

For KASAN no functional change is intended.

Suggested-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Marco Elver <elver@google.com>
---
 include/linux/uaccess.h | 46 +++++++++++++++++++++++++++++------------
 lib/usercopy.c          | 14 ++++++++-----
 2 files changed, 42 insertions(+), 18 deletions(-)

diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
index 67f016010aad..d3f2d9a8cae3 100644
--- a/include/linux/uaccess.h
+++ b/include/linux/uaccess.h
@@ -2,9 +2,9 @@
 #ifndef __LINUX_UACCESS_H__
 #define __LINUX_UACCESS_H__
 
+#include <linux/instrumented.h>
 #include <linux/sched.h>
 #include <linux/thread_info.h>
-#include <linux/kasan-checks.h>
 
 #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
 
@@ -58,18 +58,26 @@
 static __always_inline __must_check unsigned long
 __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
 {
-	kasan_check_write(to, n);
+	unsigned long res;
+
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_pre(to, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_post(to, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
-	kasan_check_write(to, n);
 	check_object_size(to, n, false);
-	return raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_pre(to, n);
+	res = raw_copy_from_user(to, from, n);
+	instrument_copy_from_user_post(to, n, res);
+	return res;
 }
 
 /**
@@ -88,18 +96,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
 static __always_inline __must_check unsigned long
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
-	kasan_check_read(from, n);
+	unsigned long res;
+
 	check_object_size(from, n, true);
-	return raw_copy_to_user(to, from, n);
+	instrument_copy_to_user_pre(from, n);
+	res = raw_copy_to_user(to, from, n);
+	instrument_copy_to_user_post(from, n, res);
+	return res;
 }
 
 static __always_inline __must_check unsigned long
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res;
+
 	might_fault();
-	kasan_check_read(from, n);
 	check_object_size(from, n, true);
-	return raw_copy_to_user(to, from, n);
+	instrument_copy_to_user_pre(from, n);
+	res = raw_copy_to_user(to, from, n);
+	instrument_copy_to_user_post(from, n, res);
+	return res;
 }
 
 #ifdef INLINE_COPY_FROM_USER
@@ -109,8 +125,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
 	unsigned long res = n;
 	might_fault();
 	if (likely(access_ok(from, n))) {
-		kasan_check_write(to, n);
+		instrument_copy_from_user_pre(to, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_post(to, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
@@ -125,12 +142,15 @@ _copy_from_user(void *, const void __user *, unsigned long);
 static inline __must_check unsigned long
 _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res = n;
+
 	might_fault();
 	if (access_ok(to, n)) {
-		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_pre(from, n);
+		res = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_post(from, n, res);
 	}
-	return n;
+	return res;
 }
 #else
 extern __must_check unsigned long
diff --git a/lib/usercopy.c b/lib/usercopy.c
index cbb4d9ec00f2..1c20d4423b86 100644
--- a/lib/usercopy.c
+++ b/lib/usercopy.c
@@ -1,6 +1,7 @@
 // SPDX-License-Identifier: GPL-2.0
-#include <linux/uaccess.h>
 #include <linux/bitops.h>
+#include <linux/instrumented.h>
+#include <linux/uaccess.h>
 
 /* out-of-line parts */
 
@@ -10,8 +11,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
 	unsigned long res = n;
 	might_fault();
 	if (likely(access_ok(from, n))) {
-		kasan_check_write(to, n);
+		instrument_copy_from_user_pre(to, n);
 		res = raw_copy_from_user(to, from, n);
+		instrument_copy_from_user_post(to, n, res);
 	}
 	if (unlikely(res))
 		memset(to + (n - res), 0, res);
@@ -23,12 +25,14 @@ EXPORT_SYMBOL(_copy_from_user);
 #ifndef INLINE_COPY_TO_USER
 unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	unsigned long res = n;
 	might_fault();
 	if (likely(access_ok(to, n))) {
-		kasan_check_read(from, n);
-		n = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_pre(from, n);
+		res = raw_copy_to_user(to, from, n);
+		instrument_copy_to_user_post(from, n, res);
 	}
-	return n;
+	return res;
 }
 EXPORT_SYMBOL(_copy_to_user);
 #endif
-- 
2.25.0.341.g760bfbb309-goog


^ permalink raw reply related	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
                   ` (3 preceding siblings ...)
  2020-01-20 14:19 ` [PATCH 5/5] copy_to_user, copy_from_user: " Marco Elver
@ 2020-01-20 14:25 ` Alexander Potapenko
  2020-01-20 14:34 ` Dmitry Vyukov
  2020-01-20 14:45 ` Dmitry Vyukov
  6 siblings, 0 replies; 29+ messages in thread
From: Alexander Potapenko @ 2020-01-20 14:25 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Dmitriy Vyukov, kasan-dev, LKML,
	Mark Rutland, will, Peter Zijlstra, boqun.feng, Arnd Bergmann,
	Al Viro, christophe.leroy, Daniel Axtens, mpe, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, christian.brauner, daniel, cyphar,
	Kees Cook, linux-arch

On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user require special instrumentation,
> providing hooks before and after the access, since we may need to know
> the actual bytes accessed (currently this is relevant for KCSAN, and is
> also relevant in future for KMSAN).
>
> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Marco Elver <elver@google.com>
Acked-by: Alexander Potapenko <glider@google.com>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
                   ` (4 preceding siblings ...)
  2020-01-20 14:25 ` [PATCH 1/5] include/linux: Add instrumented.h infrastructure Alexander Potapenko
@ 2020-01-20 14:34 ` Dmitry Vyukov
  2020-01-20 15:53   ` Marco Elver
  2020-01-20 14:45 ` Dmitry Vyukov
  6 siblings, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 14:34 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML,
	Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens,
	Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar,
	Christian Brauner, Daniel Borkmann, cyphar, Kees Cook,
	linux-arch

On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user require special instrumentation,
> providing hooks before and after the access, since we may need to know
> the actual bytes accessed (currently this is relevant for KCSAN, and is
> also relevant in future for KMSAN).

What will the KMSAN instrumentation look like?

> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Marco Elver <elver@google.com>
> ---
>  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
>  1 file changed, 153 insertions(+)
>  create mode 100644 include/linux/instrumented.h
>
> diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> new file mode 100644
> index 000000000000..9f83c8520223
> --- /dev/null
> +++ b/include/linux/instrumented.h
> @@ -0,0 +1,153 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * This header provides generic wrappers for memory access instrumentation that
> + * the compiler cannot emit for: KASAN, KCSAN.
> + */
> +#ifndef _LINUX_INSTRUMENTED_H
> +#define _LINUX_INSTRUMENTED_H
> +
> +#include <linux/compiler.h>
> +#include <linux/kasan-checks.h>
> +#include <linux/kcsan-checks.h>
> +#include <linux/types.h>
> +
> +/**
> + * instrument_read - instrument regular read access
> + *
> + * Instrument a regular read access. The instrumentation should be inserted
> + * before the actual read happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_read(const volatile void *v, size_t size)
> +{
> +       kasan_check_read(v, size);
> +       kcsan_check_read(v, size);
> +}
> +
> +/**
> + * instrument_write - instrument regular write access
> + *
> + * Instrument a regular write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_write(const volatile void *v, size_t size)
> +{
> +       kasan_check_write(v, size);
> +       kcsan_check_write(v, size);
> +}
> +
> +/**
> + * instrument_atomic_read - instrument atomic read access
> + *
> + * Instrument an atomic read access. The instrumentation should be inserted
> + * before the actual read happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> +{
> +       kasan_check_read(v, size);
> +       kcsan_check_atomic_read(v, size);
> +}
> +
> +/**
> + * instrument_atomic_write - instrument atomic write access
> + *
> + * Instrument an atomic write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> +{
> +       kasan_check_write(v, size);
> +       kcsan_check_atomic_write(v, size);
> +}
> +
> +/**
> + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> +{
> +       /* Check before, to warn before potential memory corruption. */
> +       kasan_check_read(src, size);
> +}
> +
> +/**
> + * instrument_copy_to_user_post - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> +{
> +       /* Check after, to avoid false positive if memory was not accessed. */
> +       kcsan_check_read(src, size - left);

Why don't we check the full range?
A kernel intending to copy something racy to user-space already looks
like a bug to me, even if user-space has that page unmapped. User-space
can always make the full range succeed. What am I missing?


> +}
> +
> +/**
> + * instrument_copy_from_user_pre - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_from_user_pre(const volatile void *dst, size_t size)
> +{
> +       /* Check before, to warn before potential memory corruption. */
> +       kasan_check_write(dst, size);
> +}
> +
> +/**
> + * instrument_copy_from_user_post - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
> +{
> +       /* Check after, to avoid false positive if memory was not accessed. */
> +       kcsan_check_write(dst, size - left);
> +}
> +
> +#endif /* _LINUX_INSTRUMENTED_H */
> --
> 2.25.0.341.g760bfbb309-goog
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 14:19 ` [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops Marco Elver
@ 2020-01-20 14:40   ` Peter Zijlstra
  2020-01-20 16:27     ` Paul E. McKenney
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2020-01-20 14:40 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, andreyknvl, glider, dvyukov, kasan-dev, linux-kernel,
	mark.rutland, will, boqun.feng, arnd, viro, christophe.leroy,
	dja, mpe, rostedt, mhiramat, mingo, christian.brauner, daniel,
	cyphar, keescook, linux-arch

On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> Add explicit KCSAN checks for bitops.
> 
> Note that test_bit() is an atomic bitop, and we instrument it as such,

Well, it is 'atomic' in the same way that atomic_read() is. Both are
very much not atomic ops, but are part of an interface that facilitates
atomic operations.
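
For reference, the access underneath is just a plain load -- roughly
what the generic fallback looks like (illustrative sketch only; the
real definition lives in the per-arch/asm-generic bitops headers):

static inline int test_bit(long nr, const volatile unsigned long *addr)
{
	/* Plain read of one word; concurrent atomic RMW ops may race with it. */
	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1)));
}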

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
                   ` (5 preceding siblings ...)
  2020-01-20 14:34 ` Dmitry Vyukov
@ 2020-01-20 14:45 ` Dmitry Vyukov
  2020-01-20 14:58   ` Dmitry Vyukov
  2020-01-21 13:01   ` Dmitry Vyukov
  6 siblings, 2 replies; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 14:45 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML,
	Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens,
	Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar,
	Christian Brauner, Daniel Borkmann, cyphar, Kees Cook,
	linux-arch

On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
>
> This adds instrumented.h, which provides generic wrappers for memory
> access instrumentation that the compiler cannot emit for various
> sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> future this will also include KMSAN instrumentation.
>
> Note that, copy_{to,from}_user require special instrumentation,
> providing hooks before and after the access, since we may need to know
> the actual bytes accessed (currently this is relevant for KCSAN, and is
> also relevant in future for KMSAN).
>
> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Marco Elver <elver@google.com>
> ---
>  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
>  1 file changed, 153 insertions(+)
>  create mode 100644 include/linux/instrumented.h
>
> diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> new file mode 100644
> index 000000000000..9f83c8520223
> --- /dev/null
> +++ b/include/linux/instrumented.h
> @@ -0,0 +1,153 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +/*
> + * This header provides generic wrappers for memory access instrumentation that
> + * the compiler cannot emit for: KASAN, KCSAN.
> + */
> +#ifndef _LINUX_INSTRUMENTED_H
> +#define _LINUX_INSTRUMENTED_H
> +
> +#include <linux/compiler.h>
> +#include <linux/kasan-checks.h>
> +#include <linux/kcsan-checks.h>
> +#include <linux/types.h>
> +
> +/**
> + * instrument_read - instrument regular read access
> + *
> + * Instrument a regular read access. The instrumentation should be inserted
> + * before the actual read happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */

Based on offline discussion, that's what we add for KMSAN:

> +static __always_inline void instrument_read(const volatile void *v, size_t size)
> +{
> +       kasan_check_read(v, size);
> +       kcsan_check_read(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_write - instrument regular write access
> + *
> + * Instrument a regular write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_write(const volatile void *v, size_t size)
> +{
> +       kasan_check_write(v, size);
> +       kcsan_check_write(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_atomic_read - instrument atomic read access
> + *
> + * Instrument an atomic read access. The instrumentation should be inserted
> + * before the actual read happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> +{
> +       kasan_check_read(v, size);
> +       kcsan_check_atomic_read(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_atomic_write - instrument atomic write access
> + *
> + * Instrument an atomic write access. The instrumentation should be inserted
> + * before the actual write happens.
> + *
> + * @ptr address of access
> + * @size size of access
> + */
> +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> +{
> +       kasan_check_write(v, size);
> +       kcsan_check_atomic_write(v, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> +{
> +       /* Check before, to warn before potential memory corruption. */
> +       kasan_check_read(src, size);

KMSAN: check that (src,size) is initialized

> +}
> +
> +/**
> + * instrument_copy_to_user_post - instrument reads of copy_to_user
> + *
> + * Instrument reads from kernel memory, that are due to copy_to_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> +{
> +       /* Check after, to avoid false positive if memory was not accessed. */
> +       kcsan_check_read(src, size - left);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_from_user_pre - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted before the accesses. At this point the
> + * actual number of bytes accessed is not yet known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + */
> +static __always_inline void
> +instrument_copy_from_user_pre(const volatile void *dst, size_t size)
> +{
> +       /* Check before, to warn before potential memory corruption. */
> +       kasan_check_write(dst, size);

KMSAN: nothing

> +}
> +
> +/**
> + * instrument_copy_from_user_post - instrument writes of copy_from_user
> + *
> + * Instrument writes to kernel memory, that are due to copy_from_user (and
> + * variants).
> + *
> + * The instrumentation must be inserted after the accesses. At this point the
> + * actual number of bytes accessed should be known.
> + *
> + * @dst destination address
> + * @size maximum access size
> + * @left number of bytes left that were not copied
> + */
> +static __always_inline void
> +instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
> +{
> +       /* Check after, to avoid false positive if memory was not accessed. */
> +       kcsan_check_write(dst, size - left);

KMSAN: mark (dst, size-left) as initialized
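
To make the two non-trivial cases concrete, a rough sketch of how the
user-copy hooks might look once KMSAN support is wired up. Note that
kmsan_check_memory() and kmsan_unpoison_memory() are placeholder names
used purely for illustration here, not an existing upstream API:

static __always_inline void
instrument_copy_to_user_pre(const volatile void *src, size_t size)
{
	/* Check before, to warn before potential memory corruption. */
	kasan_check_read(src, size);
	/* KMSAN: warn if any byte about to be copied to user is uninitialized. */
	kmsan_check_memory((const void *)src, size);
}

static __always_inline void
instrument_copy_from_user_post(const volatile void *dst, size_t size,
			       size_t left)
{
	/* Check after, to avoid false positive if memory was not accessed. */
	kcsan_check_write(dst, size - left);
	/* KMSAN: the bytes actually copied from user are now initialized. */
	kmsan_unpoison_memory((const void *)dst, size - left);
}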

> +}
> +
> +#endif /* _LINUX_INSTRUMENTED_H */
> --
> 2.25.0.341.g760bfbb309-goog
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] copy_to_user, copy_from_user: Use generic instrumented.h
  2020-01-20 14:19 ` [PATCH 5/5] copy_to_user, copy_from_user: " Marco Elver
@ 2020-01-20 14:51   ` Dmitry Vyukov
  2020-01-20 15:05     ` Marco Elver
  0 siblings, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 14:51 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML,
	Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens,
	Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar,
	Christian Brauner, Daniel Borkmann, cyphar, Kees Cook,
	linux-arch

On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
>
> This replaces the KASAN instrumentation with generic instrumentation,
> implicitly adding KCSAN instrumentation support.
>
> For KASAN no functional change is intended.
>
> Suggested-by: Arnd Bergmann <arnd@arndb.de>
> Signed-off-by: Marco Elver <elver@google.com>
> ---
>  include/linux/uaccess.h | 46 +++++++++++++++++++++++++++++------------
>  lib/usercopy.c          | 14 ++++++++-----
>  2 files changed, 42 insertions(+), 18 deletions(-)
>
> diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> index 67f016010aad..d3f2d9a8cae3 100644
> --- a/include/linux/uaccess.h
> +++ b/include/linux/uaccess.h
> @@ -2,9 +2,9 @@
>  #ifndef __LINUX_UACCESS_H__
>  #define __LINUX_UACCESS_H__
>
> +#include <linux/instrumented.h>
>  #include <linux/sched.h>
>  #include <linux/thread_info.h>
> -#include <linux/kasan-checks.h>
>
>  #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
>
> @@ -58,18 +58,26 @@
>  static __always_inline __must_check unsigned long
>  __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
>  {
> -       kasan_check_write(to, n);
> +       unsigned long res;
> +
>         check_object_size(to, n, false);
> -       return raw_copy_from_user(to, from, n);
> +       instrument_copy_from_user_pre(to, n);
> +       res = raw_copy_from_user(to, from, n);
> +       instrument_copy_from_user_post(to, n, res);
> +       return res;
>  }

There is also something called strncpy_from_user() that has kasan
instrumentation now:
https://elixir.bootlin.com/linux/v5.5-rc6/source/lib/strncpy_from_user.c#L117

>  static __always_inline __must_check unsigned long
>  __copy_from_user(void *to, const void __user *from, unsigned long n)
>  {
> +       unsigned long res;
> +
>         might_fault();
> -       kasan_check_write(to, n);
>         check_object_size(to, n, false);
> -       return raw_copy_from_user(to, from, n);
> +       instrument_copy_from_user_pre(to, n);
> +       res = raw_copy_from_user(to, from, n);
> +       instrument_copy_from_user_post(to, n, res);
> +       return res;
>  }
>
>  /**
> @@ -88,18 +96,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
>  static __always_inline __must_check unsigned long
>  __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
>  {
> -       kasan_check_read(from, n);
> +       unsigned long res;
> +
>         check_object_size(from, n, true);
> -       return raw_copy_to_user(to, from, n);
> +       instrument_copy_to_user_pre(from, n);
> +       res = raw_copy_to_user(to, from, n);
> +       instrument_copy_to_user_post(from, n, res);
> +       return res;
>  }
>
>  static __always_inline __must_check unsigned long
>  __copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> +       unsigned long res;
> +
>         might_fault();
> -       kasan_check_read(from, n);
>         check_object_size(from, n, true);
> -       return raw_copy_to_user(to, from, n);
> +       instrument_copy_to_user_pre(from, n);
> +       res = raw_copy_to_user(to, from, n);
> +       instrument_copy_to_user_post(from, n, res);
> +       return res;
>  }
>
>  #ifdef INLINE_COPY_FROM_USER
> @@ -109,8 +125,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
>         unsigned long res = n;
>         might_fault();
>         if (likely(access_ok(from, n))) {
> -               kasan_check_write(to, n);
> +               instrument_copy_from_user_pre(to, n);
>                 res = raw_copy_from_user(to, from, n);
> +               instrument_copy_from_user_post(to, n, res);
>         }
>         if (unlikely(res))
>                 memset(to + (n - res), 0, res);
> @@ -125,12 +142,15 @@ _copy_from_user(void *, const void __user *, unsigned long);
>  static inline __must_check unsigned long
>  _copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> +       unsigned long res = n;
> +
>         might_fault();
>         if (access_ok(to, n)) {
> -               kasan_check_read(from, n);
> -               n = raw_copy_to_user(to, from, n);
> +               instrument_copy_to_user_pre(from, n);
> +               res = raw_copy_to_user(to, from, n);
> +               instrument_copy_to_user_post(from, n, res);
>         }
> -       return n;
> +       return res;
>  }
>  #else
>  extern __must_check unsigned long
> diff --git a/lib/usercopy.c b/lib/usercopy.c
> index cbb4d9ec00f2..1c20d4423b86 100644
> --- a/lib/usercopy.c
> +++ b/lib/usercopy.c
> @@ -1,6 +1,7 @@
>  // SPDX-License-Identifier: GPL-2.0
> -#include <linux/uaccess.h>
>  #include <linux/bitops.h>
> +#include <linux/instrumented.h>
> +#include <linux/uaccess.h>
>
>  /* out-of-line parts */
>
> @@ -10,8 +11,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
>         unsigned long res = n;
>         might_fault();
>         if (likely(access_ok(from, n))) {
> -               kasan_check_write(to, n);
> +               instrument_copy_from_user_pre(to, n);
>                 res = raw_copy_from_user(to, from, n);
> +               instrument_copy_from_user_post(to, n, res);
>         }
>         if (unlikely(res))
>                 memset(to + (n - res), 0, res);
> @@ -23,12 +25,14 @@ EXPORT_SYMBOL(_copy_from_user);
>  #ifndef INLINE_COPY_TO_USER
>  unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
>  {
> +       unsigned long res = n;
>         might_fault();
>         if (likely(access_ok(to, n))) {
> -               kasan_check_read(from, n);
> -               n = raw_copy_to_user(to, from, n);
> +               instrument_copy_to_user_pre(from, n);
> +               res = raw_copy_to_user(to, from, n);
> +               instrument_copy_to_user_post(from, n, res);
>         }
> -       return n;
> +       return res;
>  }
>  EXPORT_SYMBOL(_copy_to_user);
>  #endif
> --
> 2.25.0.341.g760bfbb309-goog
>

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:45 ` Dmitry Vyukov
@ 2020-01-20 14:58   ` Dmitry Vyukov
  2020-01-20 15:09     ` Dmitry Vyukov
  2020-01-21 13:01   ` Dmitry Vyukov
  1 sibling, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 14:58 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML,
	Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens,
	Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar,
	Christian Brauner, Daniel Borkmann, cyphar, Kees Cook,
	linux-arch

On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> >
> > This adds instrumented.h, which provides generic wrappers for memory
> > access instrumentation that the compiler cannot emit for various
> > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > future this will also include KMSAN instrumentation.
> >
> > Note that, copy_{to,from}_user require special instrumentation,
> > providing hooks before and after the access, since we may need to know
> > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > also relevant in future for KMSAN).
> >
> > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > Signed-off-by: Marco Elver <elver@google.com>
> > ---
> >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 153 insertions(+)
> >  create mode 100644 include/linux/instrumented.h
> >
> > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > new file mode 100644
> > index 000000000000..9f83c8520223
> > --- /dev/null
> > +++ b/include/linux/instrumented.h
> > @@ -0,0 +1,153 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +/*
> > + * This header provides generic wrappers for memory access instrumentation that
> > + * the compiler cannot emit for: KASAN, KCSAN.
> > + */
> > +#ifndef _LINUX_INSTRUMENTED_H
> > +#define _LINUX_INSTRUMENTED_H
> > +
> > +#include <linux/compiler.h>
> > +#include <linux/kasan-checks.h>
> > +#include <linux/kcsan-checks.h>
> > +#include <linux/types.h>
> > +
> > +/**
> > + * instrument_read - instrument regular read access
> > + *
> > + * Instrument a regular read access. The instrumentation should be inserted
> > + * before the actual read happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
>
> Based on offline discussion, that's what we add for KMSAN:
>
> > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_read(v, size);
> > +       kcsan_check_read(v, size);
>
> KMSAN: nothing

KMSAN also has instrumentation in
copy_to_user_page/copy_from_user_page. Do we need to do anything for
KASAN/KCSAN for these functions?
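
If it turns out something is needed there, one possible shape, modelled
on the asm-generic style fallback (purely an illustrative sketch -- the
real definitions are per-arch, and whether plain instrument_read()/
instrument_write() is the right level of checking is exactly the
question):

#define copy_to_user_page(vma, page, vaddr, dst, src, len)		\
	do {								\
		/* Both dst and src are kernel addresses here. */	\
		instrument_read(src, len);				\
		instrument_write(dst, len);				\
		memcpy(dst, src, len);					\
		flush_icache_user_range(vma, page, vaddr, len);		\
	} while (0)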

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 5/5] copy_to_user, copy_from_user: Use generic instrumented.h
  2020-01-20 14:51   ` Dmitry Vyukov
@ 2020-01-20 15:05     ` Marco Elver
  0 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-20 15:05 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, 20 Jan 2020 at 15:52, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> >
> > This replaces the KASAN instrumentation with generic instrumentation,
> > implicitly adding KCSAN instrumentation support.
> >
> > For KASAN no functional change is intended.
> >
> > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > Signed-off-by: Marco Elver <elver@google.com>
> > ---
> >  include/linux/uaccess.h | 46 +++++++++++++++++++++++++++++------------
> >  lib/usercopy.c          | 14 ++++++++-----
> >  2 files changed, 42 insertions(+), 18 deletions(-)
> >
> > diff --git a/include/linux/uaccess.h b/include/linux/uaccess.h
> > index 67f016010aad..d3f2d9a8cae3 100644
> > --- a/include/linux/uaccess.h
> > +++ b/include/linux/uaccess.h
> > @@ -2,9 +2,9 @@
> >  #ifndef __LINUX_UACCESS_H__
> >  #define __LINUX_UACCESS_H__
> >
> > +#include <linux/instrumented.h>
> >  #include <linux/sched.h>
> >  #include <linux/thread_info.h>
> > -#include <linux/kasan-checks.h>
> >
> >  #define uaccess_kernel() segment_eq(get_fs(), KERNEL_DS)
> >
> > @@ -58,18 +58,26 @@
> >  static __always_inline __must_check unsigned long
> >  __copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
> >  {
> > -       kasan_check_write(to, n);
> > +       unsigned long res;
> > +
> >         check_object_size(to, n, false);
> > -       return raw_copy_from_user(to, from, n);
> > +       instrument_copy_from_user_pre(to, n);
> > +       res = raw_copy_from_user(to, from, n);
> > +       instrument_copy_from_user_post(to, n, res);
> > +       return res;
> >  }
>
> There is also something called strncpy_from_user() that has kasan
> instrumentation now:
> https://elixir.bootlin.com/linux/v5.5-rc6/source/lib/strncpy_from_user.c#L117

Yes, however, I think it's a special case for KASAN. The
implementation is already instrumented by the compiler. In the
original commit it says (1771c6e1a567e):

"Note: Unlike others strncpy_from_user() is written mostly in C and KASAN
    sees memory accesses in it.  However, it makes sense to add explicit
    check for all @count bytes that *potentially* could be written to the
    kernel."

I don't think we want unconditional double-instrumentation here. Let
me know if you think otherwise.

Thanks,
-- Marco

> >  static __always_inline __must_check unsigned long
> >  __copy_from_user(void *to, const void __user *from, unsigned long n)
> >  {
> > +       unsigned long res;
> > +
> >         might_fault();
> > -       kasan_check_write(to, n);
> >         check_object_size(to, n, false);
> > -       return raw_copy_from_user(to, from, n);
> > +       instrument_copy_from_user_pre(to, n);
> > +       res = raw_copy_from_user(to, from, n);
> > +       instrument_copy_from_user_post(to, n, res);
> > +       return res;
> >  }
> >
> >  /**
> > @@ -88,18 +96,26 @@ __copy_from_user(void *to, const void __user *from, unsigned long n)
> >  static __always_inline __must_check unsigned long
> >  __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
> >  {
> > -       kasan_check_read(from, n);
> > +       unsigned long res;
> > +
> >         check_object_size(from, n, true);
> > -       return raw_copy_to_user(to, from, n);
> > +       instrument_copy_to_user_pre(from, n);
> > +       res = raw_copy_to_user(to, from, n);
> > +       instrument_copy_to_user_post(from, n, res);
> > +       return res;
> >  }
> >
> >  static __always_inline __must_check unsigned long
> >  __copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long res;
> > +
> >         might_fault();
> > -       kasan_check_read(from, n);
> >         check_object_size(from, n, true);
> > -       return raw_copy_to_user(to, from, n);
> > +       instrument_copy_to_user_pre(from, n);
> > +       res = raw_copy_to_user(to, from, n);
> > +       instrument_copy_to_user_post(from, n, res);
> > +       return res;
> >  }
> >
> >  #ifdef INLINE_COPY_FROM_USER
> > @@ -109,8 +125,9 @@ _copy_from_user(void *to, const void __user *from, unsigned long n)
> >         unsigned long res = n;
> >         might_fault();
> >         if (likely(access_ok(from, n))) {
> > -               kasan_check_write(to, n);
> > +               instrument_copy_from_user_pre(to, n);
> >                 res = raw_copy_from_user(to, from, n);
> > +               instrument_copy_from_user_post(to, n, res);
> >         }
> >         if (unlikely(res))
> >                 memset(to + (n - res), 0, res);
> > @@ -125,12 +142,15 @@ _copy_from_user(void *, const void __user *, unsigned long);
> >  static inline __must_check unsigned long
> >  _copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long res = n;
> > +
> >         might_fault();
> >         if (access_ok(to, n)) {
> > -               kasan_check_read(from, n);
> > -               n = raw_copy_to_user(to, from, n);
> > +               instrument_copy_to_user_pre(from, n);
> > +               res = raw_copy_to_user(to, from, n);
> > +               instrument_copy_to_user_post(from, n, res);
> >         }
> > -       return n;
> > +       return res;
> >  }
> >  #else
> >  extern __must_check unsigned long
> > diff --git a/lib/usercopy.c b/lib/usercopy.c
> > index cbb4d9ec00f2..1c20d4423b86 100644
> > --- a/lib/usercopy.c
> > +++ b/lib/usercopy.c
> > @@ -1,6 +1,7 @@
> >  // SPDX-License-Identifier: GPL-2.0
> > -#include <linux/uaccess.h>
> >  #include <linux/bitops.h>
> > +#include <linux/instrumented.h>
> > +#include <linux/uaccess.h>
> >
> >  /* out-of-line parts */
> >
> > @@ -10,8 +11,9 @@ unsigned long _copy_from_user(void *to, const void __user *from, unsigned long n
> >         unsigned long res = n;
> >         might_fault();
> >         if (likely(access_ok(from, n))) {
> > -               kasan_check_write(to, n);
> > +               instrument_copy_from_user_pre(to, n);
> >                 res = raw_copy_from_user(to, from, n);
> > +               instrument_copy_from_user_post(to, n, res);
> >         }
> >         if (unlikely(res))
> >                 memset(to + (n - res), 0, res);
> > @@ -23,12 +25,14 @@ EXPORT_SYMBOL(_copy_from_user);
> >  #ifndef INLINE_COPY_TO_USER
> >  unsigned long _copy_to_user(void __user *to, const void *from, unsigned long n)
> >  {
> > +       unsigned long res = n;
> >         might_fault();
> >         if (likely(access_ok(to, n))) {
> > -               kasan_check_read(from, n);
> > -               n = raw_copy_to_user(to, from, n);
> > +               instrument_copy_to_user_pre(from, n);
> > +               res = raw_copy_to_user(to, from, n);
> > +               instrument_copy_to_user_post(from, n, res);
> >         }
> > -       return n;
> > +       return res;
> >  }
> >  EXPORT_SYMBOL(_copy_to_user);
> >  #endif
> > --
> > 2.25.0.341.g760bfbb309-goog
> >


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:58   ` Dmitry Vyukov
@ 2020-01-20 15:09     ` Dmitry Vyukov
  2020-01-20 15:40       ` Marco Elver
  0 siblings, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 15:09 UTC (permalink / raw)
  To: Marco Elver
  Cc: paulmck, Andrey Konovalov, Alexander Potapenko, kasan-dev, LKML,
	Mark Rutland, Will Deacon, Peter Zijlstra, Boqun Feng,
	Arnd Bergmann, Al Viro, Christophe Leroy, Daniel Axtens,
	Michael Ellerman, Steven Rostedt, Masami Hiramatsu, Ingo Molnar,
	Christian Brauner, Daniel Borkmann, cyphar, Kees Cook,
	linux-arch

On Mon, Jan 20, 2020 at 3:58 PM Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvyukov@google.com> wrote:
> >
> > On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> > >
> > > This adds instrumented.h, which provides generic wrappers for memory
> > > access instrumentation that the compiler cannot emit for various
> > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > future this will also include KMSAN instrumentation.
> > >
> > > Note that, copy_{to,from}_user require special instrumentation,
> > > providing hooks before and after the access, since we may need to know
> > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > also relevant in future for KMSAN).
> > >
> > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > Signed-off-by: Marco Elver <elver@google.com>
> > > ---
> > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > >  1 file changed, 153 insertions(+)
> > >  create mode 100644 include/linux/instrumented.h
> > >
> > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > new file mode 100644
> > > index 000000000000..9f83c8520223
> > > --- /dev/null
> > > +++ b/include/linux/instrumented.h
> > > @@ -0,0 +1,153 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +
> > > +/*
> > > + * This header provides generic wrappers for memory access instrumentation that
> > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > + */
> > > +#ifndef _LINUX_INSTRUMENTED_H
> > > +#define _LINUX_INSTRUMENTED_H
> > > +
> > > +#include <linux/compiler.h>
> > > +#include <linux/kasan-checks.h>
> > > +#include <linux/kcsan-checks.h>
> > > +#include <linux/types.h>
> > > +
> > > +/**
> > > + * instrument_read - instrument regular read access
> > > + *
> > > + * Instrument a regular read access. The instrumentation should be inserted
> > > + * before the actual read happens.
> > > + *
> > > + * @ptr address of access
> > > + * @size size of access
> > > + */
> >
> > Based on offline discussion, that's what we add for KMSAN:
> >
> > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > +{
> > > +       kasan_check_read(v, size);
> > > +       kcsan_check_read(v, size);
> >
> > KMSAN: nothing
>
> KMSAN also has instrumentation in
> copy_to_user_page/copy_from_user_page. Do we need to do anything for
> KASAN/KCSAN for these functions?


There is also copy_user_highpage.

And ioread/write8/16/32_rep: do we need any instrumentation there? It
seems we want both KASAN and KCSAN too. One may argue that KCSAN
instrumentation there is not super critical at this point, but KASAN
instrumentation is important, if anything to prevent silent memory
corruptions. How do we instrument there? I don't see how it maps to
any of the existing instrumentation functions.

There are also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb, which
do not seem to map to any of the instrumentation functions.


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 15:09     ` Dmitry Vyukov
@ 2020-01-20 15:40       ` Marco Elver
  2020-01-20 16:06         ` Dmitry Vyukov
  0 siblings, 1 reply; 29+ messages in thread
From: Marco Elver @ 2020-01-20 15:40 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, 20 Jan 2020 at 16:09, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:58 PM Dmitry Vyukov <dvyukov@google.com> wrote:
> >
> > On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvyukov@google.com> wrote:
> > >
> > > On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> > > >
> > > > This adds instrumented.h, which provides generic wrappers for memory
> > > > access instrumentation that the compiler cannot emit for various
> > > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > > future this will also include KMSAN instrumentation.
> > > >
> > > > Note that, copy_{to,from}_user require special instrumentation,
> > > > providing hooks before and after the access, since we may need to know
> > > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > > also relevant in future for KMSAN).
> > > >
> > > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > ---
> > > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > > >  1 file changed, 153 insertions(+)
> > > >  create mode 100644 include/linux/instrumented.h
> > > >
> > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > new file mode 100644
> > > > index 000000000000..9f83c8520223
> > > > --- /dev/null
> > > > +++ b/include/linux/instrumented.h
> > > > @@ -0,0 +1,153 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +
> > > > +/*
> > > > + * This header provides generic wrappers for memory access instrumentation that
> > > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > > + */
> > > > +#ifndef _LINUX_INSTRUMENTED_H
> > > > +#define _LINUX_INSTRUMENTED_H
> > > > +
> > > > +#include <linux/compiler.h>
> > > > +#include <linux/kasan-checks.h>
> > > > +#include <linux/kcsan-checks.h>
> > > > +#include <linux/types.h>
> > > > +
> > > > +/**
> > > > + * instrument_read - instrument regular read access
> > > > + *
> > > > + * Instrument a regular read access. The instrumentation should be inserted
> > > > + * before the actual read happens.
> > > > + *
> > > > + * @ptr address of access
> > > > + * @size size of access
> > > > + */
> > >
> > > Based on offline discussion, that's what we add for KMSAN:
> > >
> > > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > > +{
> > > > +       kasan_check_read(v, size);
> > > > +       kcsan_check_read(v, size);
> > >
> > > KMSAN: nothing
> >
> > KMSAN also has instrumentation in
> > copy_to_user_page/copy_from_user_page. Do we need to do anything for
> > KASAN/KCSAN for these functions?

copy_to_user_page/copy_from_user_page can be instrumented with
instrument_copy_{to,from}_user_. I prefer keeping this series with no
functional change intended for KASAN at least.
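
For illustration, a rough sketch of what that could look like (the
helper name and signature here are made up for the example, not the
actual arch interface):

	#include <linux/instrumented.h>
	#include <linux/string.h>

	/* Sketch only: wrap the kernel-side access of a copy_to_user_page()
	 * style helper with the pre/post hooks from patch 1. */
	static inline void copy_to_user_page_checked(void *uvaddr, const void *src, size_t len)
	{
		instrument_copy_to_user_pre(src, len);
		memcpy(uvaddr, src, len);	/* stand-in for the actual arch copy */
		instrument_copy_to_user_post(src, len, 0);	/* nothing left uncopied */
	}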

> There is also copy_user_highpage.
>
> And ioread/write8/16/32_rep: do we need any instrumentation there. It
> seems we want both KSAN and KCSAN too. One may argue that KCSAN
> instrumentation there is to super critical at this point, but KASAN
> instrumentation is important, if anything to prevent silent memory
> corruptions. How do we instrument there? I don't see how it maps to
> any of the existing instrumentation functions.

These should be able to use the regular instrument_{read,write}. I
prefer keeping this series with no functional change intended for
KASAN at least.
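
For example, a minimal sketch for the 8-bit variants could be the
following (the wrapper names are hypothetical; the 16/32-bit variants
would use count * sizeof(u16) / sizeof(u32) as the access size):

	#include <linux/instrumented.h>
	#include <linux/io.h>

	/* Sketch only: the kernel buffer is written by ioread8_rep() and
	 * read by iowrite8_rep(), so instrument it accordingly. */
	static inline void ioread8_rep_checked(void __iomem *addr, void *buffer,
					       unsigned int count)
	{
		instrument_write(buffer, count);
		ioread8_rep(addr, buffer, count);
	}

	static inline void iowrite8_rep_checked(void __iomem *addr, const void *buffer,
						unsigned int count)
	{
		instrument_read(buffer, count);
		iowrite8_rep(addr, buffer, count);
	}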

> There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> does not seem to map to any of the instrumentation functions.

For now, I would rather have some one-off special instrumentation,
like for KMSAN. Coming up with a unified interface here, without the
use-cases even being settled, seems hard to justify. Once
instrumentation for these has settled, unifying the interface would
have better justification.

This patch series is merely supposed to introduce instrumented.h and
replace the kasan_checks (also implicitly introducing kcsan_checks
there), however, with no further functional change intended.

I propose that, for adding entirely new instrumentation for both KASAN
and KCSAN, we send a separate patch-series.

Thanks,
-- Marco


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:34 ` Dmitry Vyukov
@ 2020-01-20 15:53   ` Marco Elver
  0 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-20 15:53 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, 20 Jan 2020 at 15:34, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> >
> > This adds instrumented.h, which provides generic wrappers for memory
> > access instrumentation that the compiler cannot emit for various
> > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > future this will also include KMSAN instrumentation.
> >
> > Note that, copy_{to,from}_user require special instrumentation,
> > providing hooks before and after the access, since we may need to know
> > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > also relevant in future for KMSAN).
>
> How will KMSAN instrumentation look like?
>
> > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > Signed-off-by: Marco Elver <elver@google.com>
> > ---
> >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 153 insertions(+)
> >  create mode 100644 include/linux/instrumented.h
> >
> > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > new file mode 100644
> > index 000000000000..9f83c8520223
> > --- /dev/null
> > +++ b/include/linux/instrumented.h
> > @@ -0,0 +1,153 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +/*
> > + * This header provides generic wrappers for memory access instrumentation that
> > + * the compiler cannot emit for: KASAN, KCSAN.
> > + */
> > +#ifndef _LINUX_INSTRUMENTED_H
> > +#define _LINUX_INSTRUMENTED_H
> > +
> > +#include <linux/compiler.h>
> > +#include <linux/kasan-checks.h>
> > +#include <linux/kcsan-checks.h>
> > +#include <linux/types.h>
> > +
> > +/**
> > + * instrument_read - instrument regular read access
> > + *
> > + * Instrument a regular read access. The instrumentation should be inserted
> > + * before the actual read happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_read(v, size);
> > +       kcsan_check_read(v, size);
> > +}
> > +
> > +/**
> > + * instrument_write - instrument regular write access
> > + *
> > + * Instrument a regular write access. The instrumentation should be inserted
> > + * before the actual write happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_write(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_write(v, size);
> > +       kcsan_check_write(v, size);
> > +}
> > +
> > +/**
> > + * instrument_atomic_read - instrument atomic read access
> > + *
> > + * Instrument an atomic read access. The instrumentation should be inserted
> > + * before the actual read happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_read(v, size);
> > +       kcsan_check_atomic_read(v, size);
> > +}
> > +
> > +/**
> > + * instrument_atomic_write - instrument atomic write access
> > + *
> > + * Instrument an atomic write access. The instrumentation should be inserted
> > + * before the actual write happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_write(v, size);
> > +       kcsan_check_atomic_write(v, size);
> > +}
> > +
> > +/**
> > + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> > + *
> > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted before the accesses. At this point the
> > + * actual number of bytes accessed is not yet known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + */
> > +static __always_inline void
> > +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> > +{
> > +       /* Check before, to warn before potential memory corruption. */
> > +       kasan_check_read(src, size);
> > +}
> > +
> > +/**
> > + * instrument_copy_to_user_post - instrument reads of copy_to_user
> > + *
> > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted after the accesses. At this point the
> > + * actual number of bytes accessed should be known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + * @left number of bytes left that were not copied
> > + */
> > +static __always_inline void
> > +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> > +{
> > +       /* Check after, to avoid false positive if memory was not accessed. */
> > +       kcsan_check_read(src, size - left);
>
> Why don't we check the full range?
> Kernel intending to copy something racy to user already looks like a
> bug to me, even if user-space has that page unmapped. User-space can
> always make the full range succeed. What am I missing?

Fair enough. I can move this into the pre-hooks in v2.
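
(Roughly, the pre-hook would then become the following sketch, with the
KCSAN check over the full range moved before the access:)

	static __always_inline void
	instrument_copy_to_user_pre(const volatile void *src, size_t size)
	{
		/* Check the full range before the access, as suggested. */
		kasan_check_read(src, size);
		kcsan_check_read(src, size);
	}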

However, note that this leaves us with a bunch of empty post-hooks in
the patch. While this will probably change when we get KMSAN, is it
reasonable to keep them empty for now?

Thanks,
-- Marco


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 15:40       ` Marco Elver
@ 2020-01-20 16:06         ` Dmitry Vyukov
  2020-01-20 16:25           ` Marco Elver
  0 siblings, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 16:06 UTC (permalink / raw)
  To: Marco Elver
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, Jan 20, 2020 at 4:40 PM Marco Elver <elver@google.com> wrote:
> > > > > This adds instrumented.h, which provides generic wrappers for memory
> > > > > access instrumentation that the compiler cannot emit for various
> > > > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > > > future this will also include KMSAN instrumentation.
> > > > >
> > > > > Note that, copy_{to,from}_user require special instrumentation,
> > > > > providing hooks before and after the access, since we may need to know
> > > > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > > > also relevant in future for KMSAN).
> > > > >
> > > > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > > ---
> > > > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > > > >  1 file changed, 153 insertions(+)
> > > > >  create mode 100644 include/linux/instrumented.h
> > > > >
> > > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > > new file mode 100644
> > > > > index 000000000000..9f83c8520223
> > > > > --- /dev/null
> > > > > +++ b/include/linux/instrumented.h
> > > > > @@ -0,0 +1,153 @@
> > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > +
> > > > > +/*
> > > > > + * This header provides generic wrappers for memory access instrumentation that
> > > > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > > > + */
> > > > > +#ifndef _LINUX_INSTRUMENTED_H
> > > > > +#define _LINUX_INSTRUMENTED_H
> > > > > +
> > > > > +#include <linux/compiler.h>
> > > > > +#include <linux/kasan-checks.h>
> > > > > +#include <linux/kcsan-checks.h>
> > > > > +#include <linux/types.h>
> > > > > +
> > > > > +/**
> > > > > + * instrument_read - instrument regular read access
> > > > > + *
> > > > > + * Instrument a regular read access. The instrumentation should be inserted
> > > > > + * before the actual read happens.
> > > > > + *
> > > > > + * @ptr address of access
> > > > > + * @size size of access
> > > > > + */
> > > >
> > > > Based on offline discussion, that's what we add for KMSAN:
> > > >
> > > > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > > > +{
> > > > > +       kasan_check_read(v, size);
> > > > > +       kcsan_check_read(v, size);
> > > >
> > > > KMSAN: nothing
> > >
> > > KMSAN also has instrumentation in
> > > copy_to_user_page/copy_from_user_page. Do we need to do anything for
> > > KASAN/KCSAN for these functions?
>
> copy_to_user_page/copy_from_user_page can be instrumented with
> instrument_copy_{to,from}_user_. I prefer keeping this series with no
> functional change intended for KASAN at least.
>
> > There is also copy_user_highpage.
> >
> > And ioread/write8/16/32_rep: do we need any instrumentation there. It
> > seems we want both KSAN and KCSAN too. One may argue that KCSAN
> > instrumentation there is to super critical at this point, but KASAN
> > instrumentation is important, if anything to prevent silent memory
> > corruptions. How do we instrument there? I don't see how it maps to
> > any of the existing instrumentation functions.
>
> These should be able to use the regular instrument_{read,write}. I
> prefer keeping this series with no functional change intended for
> KASAN at least.

instrument_{read,write} will not contain any KMSAN instrumentation,
which means we will effectively remove KMSAN instrumentation, which is
weird because we instrumented these functions because of KMSAN in the
first place...

> > There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> > does not seem to map to any of the instrumentation functions.
>
> For now, I would rather that there are some one-off special
> instrumentation, like for KMSAN. Coming up with a unified interface
> here that, without the use-cases even settled, seems hard to justify.
> Once instrumentation for these have settled, unifying the interface
> would have better justification.

I would assume they may also require an annotation that checks the
memory region under all 3 tools, and we don't have such an annotation
(same as the previous case and effectively copy_to_user). I would
expect such an annotation will be used in more places once we start
looking for more opportunities.

> This patch series is merely supposed to introduce instrumented.h and
> replace the kasan_checks (also implicitly introducing kcsan_checks
> there), however, with no further functional change intended.
>
> I propose that adding entirely new instrumentation for both KASAN and
> KCSAN, we should send a separate patch-series.
>
> Thanks,
> -- Marco


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 16:06         ` Dmitry Vyukov
@ 2020-01-20 16:25           ` Marco Elver
  2020-01-20 16:39             ` Dmitry Vyukov
  0 siblings, 1 reply; 29+ messages in thread
From: Marco Elver @ 2020-01-20 16:25 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, 20 Jan 2020 at 17:06, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 4:40 PM Marco Elver <elver@google.com> wrote:
> > > > > > This adds instrumented.h, which provides generic wrappers for memory
> > > > > > access instrumentation that the compiler cannot emit for various
> > > > > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > > > > future this will also include KMSAN instrumentation.
> > > > > >
> > > > > > Note that, copy_{to,from}_user require special instrumentation,
> > > > > > providing hooks before and after the access, since we may need to know
> > > > > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > > > > also relevant in future for KMSAN).
> > > > > >
> > > > > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > > > ---
> > > > > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > > > > >  1 file changed, 153 insertions(+)
> > > > > >  create mode 100644 include/linux/instrumented.h
> > > > > >
> > > > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > > > new file mode 100644
> > > > > > index 000000000000..9f83c8520223
> > > > > > --- /dev/null
> > > > > > +++ b/include/linux/instrumented.h
> > > > > > @@ -0,0 +1,153 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +
> > > > > > +/*
> > > > > > + * This header provides generic wrappers for memory access instrumentation that
> > > > > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > > > > + */
> > > > > > +#ifndef _LINUX_INSTRUMENTED_H
> > > > > > +#define _LINUX_INSTRUMENTED_H
> > > > > > +
> > > > > > +#include <linux/compiler.h>
> > > > > > +#include <linux/kasan-checks.h>
> > > > > > +#include <linux/kcsan-checks.h>
> > > > > > +#include <linux/types.h>
> > > > > > +
> > > > > > +/**
> > > > > > + * instrument_read - instrument regular read access
> > > > > > + *
> > > > > > + * Instrument a regular read access. The instrumentation should be inserted
> > > > > > + * before the actual read happens.
> > > > > > + *
> > > > > > + * @ptr address of access
> > > > > > + * @size size of access
> > > > > > + */
> > > > >
> > > > > Based on offline discussion, that's what we add for KMSAN:
> > > > >
> > > > > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > > > > +{
> > > > > > +       kasan_check_read(v, size);
> > > > > > +       kcsan_check_read(v, size);
> > > > >
> > > > > KMSAN: nothing
> > > >
> > > > KMSAN also has instrumentation in
> > > > copy_to_user_page/copy_from_user_page. Do we need to do anything for
> > > > KASAN/KCSAN for these functions?
> >
> > copy_to_user_page/copy_from_user_page can be instrumented with
> > instrument_copy_{to,from}_user_. I prefer keeping this series with no
> > functional change intended for KASAN at least.
> >
> > > There is also copy_user_highpage.
> > >
> > > And ioread/write8/16/32_rep: do we need any instrumentation there. It
> > > seems we want both KSAN and KCSAN too. One may argue that KCSAN
> > > instrumentation there is to super critical at this point, but KASAN
> > > instrumentation is important, if anything to prevent silent memory
> > > corruptions. How do we instrument there? I don't see how it maps to
> > > any of the existing instrumentation functions.
> >
> > These should be able to use the regular instrument_{read,write}. I
> > prefer keeping this series with no functional change intended for
> > KASAN at least.
>
> instrument_{read,write} will not contain any KMSAN instrumentation,
> which means we will effectively remove KMSAN instrumentation, which is
> weird because we instrumented these functions because of KMSAN in the
> first place...
>
> > > There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> > > does not seem to map to any of the instrumentation functions.
> >
> > For now, I would rather that there are some one-off special
> > instrumentation, like for KMSAN. Coming up with a unified interface
> > here that, without the use-cases even settled, seems hard to justify.
> > Once instrumentation for these have settled, unifying the interface
> > would have better justification.
>
> I would assume they may also require an annotation that checks the
> memory region under all 3 tools and we don't have such annotation
> (same as the previous case and effectively copy_to_user). I would
> expect such annotation will be used in more places once we start
> looking for more opportunities.

Agreed, I'm certainly not against adding these. We may need to
introduce 'instrument_dma_' etc. However, would it be reasonable to do
this in a separate follow-up patch-series, to avoid stalling bitops
instrumentation?  Assuming that the 8 hooks in instrumented.h right
now are reasonable, and such future changes add new hooks, I think
that would be the more pragmatic approach.

Thanks,
-- Marco

>
> > This patch series is merely supposed to introduce instrumented.h and
> > replace the kasan_checks (also implicitly introducing kcsan_checks
> > there), however, with no further functional change intended.
> >
> > I propose that adding entirely new instrumentation for both KASAN and
> > KCSAN, we should send a separate patch-series.
> >
> > Thanks,
> > -- Marco


* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 14:40   ` Peter Zijlstra
@ 2020-01-20 16:27     ` Paul E. McKenney
  2020-01-20 16:52       ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Paul E. McKenney @ 2020-01-20 16:27 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Mon, Jan 20, 2020 at 03:40:48PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> > Add explicit KCSAN checks for bitops.
> > 
> > Note that test_bit() is an atomic bitop, and we instrument it as such,
> 
> Well, it is 'atomic' in the same way that atomic_read() is. Both are
> very much not atomic ops, but are part of an interface that facilitates
> atomic operations.

True, but they all are either inline assembly or have either an
implicit or explicit cast to volatile, so they could be treated
the same as atomic_read(), correct?  If not, what am I missing?

(There is one exception, but it is in arch/x86/boot/bitops.h,
which I believe is UP-only, correct?)
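
(For reference, the instrumented wrapper in question is roughly the
following sketch, assuming the arch_test_bit() naming of the
asm-generic instrumented-bitops headers:)

	static inline bool test_bit(long nr, const volatile unsigned long *addr)
	{
		instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
		return arch_test_bit(nr, addr);
	}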

						Thanx, Paul


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 16:25           ` Marco Elver
@ 2020-01-20 16:39             ` Dmitry Vyukov
  2020-01-21  9:44               ` Marco Elver
  0 siblings, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-20 16:39 UTC (permalink / raw)
  To: Marco Elver
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, Jan 20, 2020 at 5:25 PM Marco Elver <elver@google.com> wrote:
> > > > > > > This adds instrumented.h, which provides generic wrappers for memory
> > > > > > > access instrumentation that the compiler cannot emit for various
> > > > > > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > > > > > future this will also include KMSAN instrumentation.
> > > > > > >
> > > > > > > Note that, copy_{to,from}_user require special instrumentation,
> > > > > > > providing hooks before and after the access, since we may need to know
> > > > > > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > > > > > also relevant in future for KMSAN).
> > > > > > >
> > > > > > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > > > > ---
> > > > > > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > > > > > >  1 file changed, 153 insertions(+)
> > > > > > >  create mode 100644 include/linux/instrumented.h
> > > > > > >
> > > > > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > > > > new file mode 100644
> > > > > > > index 000000000000..9f83c8520223
> > > > > > > --- /dev/null
> > > > > > > +++ b/include/linux/instrumented.h
> > > > > > > @@ -0,0 +1,153 @@
> > > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > > +
> > > > > > > +/*
> > > > > > > + * This header provides generic wrappers for memory access instrumentation that
> > > > > > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > > > > > + */
> > > > > > > +#ifndef _LINUX_INSTRUMENTED_H
> > > > > > > +#define _LINUX_INSTRUMENTED_H
> > > > > > > +
> > > > > > > +#include <linux/compiler.h>
> > > > > > > +#include <linux/kasan-checks.h>
> > > > > > > +#include <linux/kcsan-checks.h>
> > > > > > > +#include <linux/types.h>
> > > > > > > +
> > > > > > > +/**
> > > > > > > + * instrument_read - instrument regular read access
> > > > > > > + *
> > > > > > > + * Instrument a regular read access. The instrumentation should be inserted
> > > > > > > + * before the actual read happens.
> > > > > > > + *
> > > > > > > + * @ptr address of access
> > > > > > > + * @size size of access
> > > > > > > + */
> > > > > >
> > > > > > Based on offline discussion, that's what we add for KMSAN:
> > > > > >
> > > > > > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > > > > > +{
> > > > > > > +       kasan_check_read(v, size);
> > > > > > > +       kcsan_check_read(v, size);
> > > > > >
> > > > > > KMSAN: nothing
> > > > >
> > > > > KMSAN also has instrumentation in
> > > > > copy_to_user_page/copy_from_user_page. Do we need to do anything for
> > > > > KASAN/KCSAN for these functions?
> > >
> > > copy_to_user_page/copy_from_user_page can be instrumented with
> > > instrument_copy_{to,from}_user_. I prefer keeping this series with no
> > > functional change intended for KASAN at least.
> > >
> > > > There is also copy_user_highpage.
> > > >
> > > > And ioread/write8/16/32_rep: do we need any instrumentation there. It
> > > > seems we want both KSAN and KCSAN too. One may argue that KCSAN
> > > > instrumentation there is to super critical at this point, but KASAN
> > > > instrumentation is important, if anything to prevent silent memory
> > > > corruptions. How do we instrument there? I don't see how it maps to
> > > > any of the existing instrumentation functions.
> > >
> > > These should be able to use the regular instrument_{read,write}. I
> > > prefer keeping this series with no functional change intended for
> > > KASAN at least.
> >
> > instrument_{read,write} will not contain any KMSAN instrumentation,
> > which means we will effectively remove KMSAN instrumentation, which is
> > weird because we instrumented these functions because of KMSAN in the
> > first place...
> >
> > > > There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> > > > does not seem to map to any of the instrumentation functions.
> > >
> > > For now, I would rather that there are some one-off special
> > > instrumentation, like for KMSAN. Coming up with a unified interface
> > > here that, without the use-cases even settled, seems hard to justify.
> > > Once instrumentation for these have settled, unifying the interface
> > > would have better justification.
> >
> > I would assume they may also require an annotation that checks the
> > memory region under all 3 tools and we don't have such annotation
> > (same as the previous case and effectively copy_to_user). I would
> > expect such annotation will be used in more places once we start
> > looking for more opportunities.
>
> Agreed, I'm certainly not against adding these. We may need to
> introduce 'instrument_dma_' etc. However, would it be reasonable to do
> this in a separate follow-up patch-series, to avoid stalling bitops
> instrumentation?  Assuming that the 8 hooks in instrumented.h right
> now are reasonable, and such future changes add new hooks, I think
> that would be the more pragmatic approach.

I think it would be a wrong direction. Just like this change does not
introduce all of instrument_test_and_set_bit,
instrument___clear_bit_unlock, instrument_copyin,
instrument_copyout_mcsafe, instrument_atomic_andnot, .... All of these
can be grouped into a very small set of cases with respect to what
type of memory access they do from the point of view of sanitizers.
And we introduce instrumentation for these _types_ of accesses, rather
than application functions (we don't care much if the access is for
atomic operations, copy to/from user, usb, dma, skb or something
else). It seems that our set of instrumentation annotations can't
handle some very basic cases...


* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 16:27     ` Paul E. McKenney
@ 2020-01-20 16:52       ` Peter Zijlstra
  2020-01-20 20:23         ` Paul E. McKenney
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2020-01-20 16:52 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Mon, Jan 20, 2020 at 08:27:25AM -0800, Paul E. McKenney wrote:
> On Mon, Jan 20, 2020 at 03:40:48PM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> > > Add explicit KCSAN checks for bitops.
> > > 
> > > Note that test_bit() is an atomic bitop, and we instrument it as such,
> > 
> > Well, it is 'atomic' in the same way that atomic_read() is. Both are
> > very much not atomic ops, but are part of an interface that facilitates
> > atomic operations.
> 
> True, but they all are either inline assembly or have either an
> implicit or explicit cast to volatile, so they could be treated
> the same as atomic_read(), correct?  If not, what am I missing?

Sure, but that is due to instrumentation requirements, not anything
else.

Also note the distinct lack of __test_bit(), to mirror the non-atomic
__set_bit() and __clear_bit().


* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 16:52       ` Peter Zijlstra
@ 2020-01-20 20:23         ` Paul E. McKenney
  2020-01-21  9:15           ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Paul E. McKenney @ 2020-01-20 20:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Mon, Jan 20, 2020 at 05:52:23PM +0100, Peter Zijlstra wrote:
> On Mon, Jan 20, 2020 at 08:27:25AM -0800, Paul E. McKenney wrote:
> > On Mon, Jan 20, 2020 at 03:40:48PM +0100, Peter Zijlstra wrote:
> > > On Mon, Jan 20, 2020 at 03:19:25PM +0100, Marco Elver wrote:
> > > > Add explicit KCSAN checks for bitops.
> > > > 
> > > > Note that test_bit() is an atomic bitop, and we instrument it as such,
> > > 
> > > Well, it is 'atomic' in the same way that atomic_read() is. Both are
> > > very much not atomic ops, but are part of an interface that facilitates
> > > atomic operations.
> > 
> > True, but they all are either inline assembly or have either an
> > implicit or explicit cast to volatile, so they could be treated
> > the same as atomic_read(), correct?  If not, what am I missing?
> 
> Sure, but that is due to instrumentation requirements, not anything
> else.
> 
> Also note the distinct lack of __test_bit(), to mirror the non-atomic
> __set_bit() and __clear_bit().

OK, I will bite.  ;-)

We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
and atomic_set() are considered to be non-racy, right?

							Thanx, Paul


* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-20 20:23         ` Paul E. McKenney
@ 2020-01-21  9:15           ` Peter Zijlstra
  2020-01-21 14:21             ` Paul E. McKenney
  0 siblings, 1 reply; 29+ messages in thread
From: Peter Zijlstra @ 2020-01-21  9:15 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> and atomic_set() are considered to be non-racy, right?

What is racy? :-) You can make data races with atomic_{read,set}() just
fine.

Anyway, traditionally we call the read-modify-write stuff atomic, not
the trivial load-store stuff. The only reason we care about the
load-store stuff in the first place is because C compilers are shit.

atomic_read() / test_bit() are just a load, all we need is the C
compiler not to be an ass and split it. Yes, we've invented the term
single-copy atomicity for that, but that doesn't make it more or less of
a load.

And exactly because it is just a load, there is no __test_bit(), which
would be the exact same load.


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 16:39             ` Dmitry Vyukov
@ 2020-01-21  9:44               ` Marco Elver
  0 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-21  9:44 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, 20 Jan 2020 at 17:39, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 5:25 PM Marco Elver <elver@google.com> wrote:
> > > > > > > > This adds instrumented.h, which provides generic wrappers for memory
> > > > > > > > access instrumentation that the compiler cannot emit for various
> > > > > > > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > > > > > > future this will also include KMSAN instrumentation.
> > > > > > > >
> > > > > > > > Note that, copy_{to,from}_user require special instrumentation,
> > > > > > > > providing hooks before and after the access, since we may need to know
> > > > > > > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > > > > > > also relevant in future for KMSAN).
> > > > > > > >
> > > > > > > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > > > > > > Signed-off-by: Marco Elver <elver@google.com>
> > > > > > > > ---
> > > > > > > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > > > > > > >  1 file changed, 153 insertions(+)
> > > > > > > >  create mode 100644 include/linux/instrumented.h
> > > > > > > >
> > > > > > > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > > > > > > new file mode 100644
> > > > > > > > index 000000000000..9f83c8520223
> > > > > > > > --- /dev/null
> > > > > > > > +++ b/include/linux/instrumented.h
> > > > > > > > @@ -0,0 +1,153 @@
> > > > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > > > +
> > > > > > > > +/*
> > > > > > > > + * This header provides generic wrappers for memory access instrumentation that
> > > > > > > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > > > > > > + */
> > > > > > > > +#ifndef _LINUX_INSTRUMENTED_H
> > > > > > > > +#define _LINUX_INSTRUMENTED_H
> > > > > > > > +
> > > > > > > > +#include <linux/compiler.h>
> > > > > > > > +#include <linux/kasan-checks.h>
> > > > > > > > +#include <linux/kcsan-checks.h>
> > > > > > > > +#include <linux/types.h>
> > > > > > > > +
> > > > > > > > +/**
> > > > > > > > + * instrument_read - instrument regular read access
> > > > > > > > + *
> > > > > > > > + * Instrument a regular read access. The instrumentation should be inserted
> > > > > > > > + * before the actual read happens.
> > > > > > > > + *
> > > > > > > > + * @ptr address of access
> > > > > > > > + * @size size of access
> > > > > > > > + */
> > > > > > >
> > > > > > > Based on offline discussion, that's what we add for KMSAN:
> > > > > > >
> > > > > > > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > > > > > > +{
> > > > > > > > +       kasan_check_read(v, size);
> > > > > > > > +       kcsan_check_read(v, size);
> > > > > > >
> > > > > > > KMSAN: nothing
> > > > > >
> > > > > > KMSAN also has instrumentation in
> > > > > > copy_to_user_page/copy_from_user_page. Do we need to do anything for
> > > > > > KASAN/KCSAN for these functions?
> > > >
> > > > copy_to_user_page/copy_from_user_page can be instrumented with
> > > > instrument_copy_{to,from}_user_. I prefer keeping this series with no
> > > > functional change intended for KASAN at least.
> > > >
> > > > > There is also copy_user_highpage.
> > > > >
> > > > > And ioread/write8/16/32_rep: do we need any instrumentation there. It
> > > > > seems we want both KSAN and KCSAN too. One may argue that KCSAN
> > > > > instrumentation there is to super critical at this point, but KASAN
> > > > > instrumentation is important, if anything to prevent silent memory
> > > > > corruptions. How do we instrument there? I don't see how it maps to
> > > > > any of the existing instrumentation functions.
> > > >
> > > > These should be able to use the regular instrument_{read,write}. I
> > > > prefer keeping this series with no functional change intended for
> > > > KASAN at least.
> > >
> > > instrument_{read,write} will not contain any KMSAN instrumentation,
> > > which means we will effectively remove KMSAN instrumentation, which is
> > > weird because we instrumented these functions because of KMSAN in the
> > > first place...

I missed this. Yes, you're right.

> > > > > There is also kmsan_check_skb/kmsan_handle_dma/kmsan_handle_urb that
> > > > > does not seem to map to any of the instrumentation functions.
> > > >
> > > > For now, I would rather that there are some one-off special
> > > > instrumentation, like for KMSAN. Coming up with a unified interface
> > > > here that, without the use-cases even settled, seems hard to justify.
> > > > Once instrumentation for these have settled, unifying the interface
> > > > would have better justification.
> > >
> > > I would assume they may also require an annotation that checks the
> > > memory region under all 3 tools and we don't have such annotation
> > > (same as the previous case and effectively copy_to_user). I would
> > > expect such annotation will be used in more places once we start
> > > looking for more opportunities.
> >
> > Agreed, I'm certainly not against adding these. We may need to
> > introduce 'instrument_dma_' etc. However, would it be reasonable to do
> > this in a separate follow-up patch-series, to avoid stalling bitops
> > instrumentation?  Assuming that the 8 hooks in instrumented.h right
> > now are reasonable, and such future changes add new hooks, I think
> > that would be the more pragmatic approach.
>
> I think it would be a wrong direction. Just like this change does not
> introduce all of instrument_test_and_set_bit,
> instrument___clear_bit_unlock, instrument_copyin,
> instrument_copyout_mcsafe, instrument_atomic_andnot, .... All of these
> can be grouped into a very small set of cases with respect to what
> type of memory access they do from the point of view of sanitizers.
> And we introduce instrumentation for these _types_ of accesses, rather
> than application functions (we don't care much if the access is for
> atomic operations, copy to/from user, usb, dma, skb or something
> else). It seems that our set of instrumentation annotations can't
> handle some very basic cases...

With the ioread/write, dma, skb, urb, user-copy cases in mind, it just
appears to me that attempting to find a minimal unifying set of
instrumentation hooks might lead us in circles, given we only have the
following options:

1. Do not introduce 'instrumented.h', and drop this series. With KMSAN
in mind, this is what I mentioned I preferred in the first place: just
add a few dozen lines in each place we need to instrument. Yes, yes,
it's not as convenient, but at least we'll know it'll be correct
without having to reason about all cases and all sanitizers all at
once (with KMSAN not even being in any kernel tree).

2. This patch series, keeping 'instrumented.h', but only keeping what
we use right now. This is knowing we'll likely have to add a number of
special cases (user-copy, ioread/write, etc.) for now. Again,
KASAN/KCSAN probably want the same thing, but I don't know how much
conflict there will be with KMSAN after all is said and done. We will
incrementally add what is required, and unify things later. This
will also satisfy Arnd's constraint of no empty functions:
http://lkml.kernel.org/r/CAK8P3a32sVU4umk2FLnWnMGMQxThvMHAKxVM+G4X-hMgpBsXMA@mail.gmail.com

3. Try to find a minimal set of instrumentation hooks that cater to
all tools (KASAN, KCSAN, KMSAN). Without even having all
instrumentation (without the 'instrumented.h' infrastructure) in
place, I feel this will not be too successful. I think we can do this
once we have instrumentation for all tools, in all places. Then
unifying all of them should be a non-functional-change refactor.
Essentially, this option depends on (1).

However, now we have some constraints which are difficult to satisfy
all at once:
1. Essentially we were told to avoid (1), based on Arnd's suggestion
to simplify the instrumentation. Therefore we thought (2) would be a
good idea.
2. Now that we know what (2) looks like, it seems you prefer (3),
because we should also cater to KMSAN with this patch series.
3. No unused hooks.

Given we have a KMSAN<-(1)<-(3) dependency, but we were told to avoid
(1) and empty functions, and KMSAN hasn't yet landed, we can't
reasonably do (3). Since you dislike (2), we're stuck.

Any options I missed?

Thanks,
-- Marco


* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-20 14:45 ` Dmitry Vyukov
  2020-01-20 14:58   ` Dmitry Vyukov
@ 2020-01-21 13:01   ` Dmitry Vyukov
  2020-01-21 16:14     ` Marco Elver
  1 sibling, 1 reply; 29+ messages in thread
From: Dmitry Vyukov @ 2020-01-21 13:01 UTC (permalink / raw)
  To: Marco Elver
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> >
> > This adds instrumented.h, which provides generic wrappers for memory
> > access instrumentation that the compiler cannot emit for various
> > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > future this will also include KMSAN instrumentation.
> >
> > Note that, copy_{to,from}_user require special instrumentation,
> > providing hooks before and after the access, since we may need to know
> > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > also relevant in future for KMSAN).
> >
> > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > Signed-off-by: Marco Elver <elver@google.com>
> > ---
> >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 153 insertions(+)
> >  create mode 100644 include/linux/instrumented.h
> >
> > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > new file mode 100644
> > index 000000000000..9f83c8520223
> > --- /dev/null
> > +++ b/include/linux/instrumented.h
> > @@ -0,0 +1,153 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +
> > +/*
> > + * This header provides generic wrappers for memory access instrumentation that
> > + * the compiler cannot emit for: KASAN, KCSAN.
> > + */
> > +#ifndef _LINUX_INSTRUMENTED_H
> > +#define _LINUX_INSTRUMENTED_H
> > +
> > +#include <linux/compiler.h>
> > +#include <linux/kasan-checks.h>
> > +#include <linux/kcsan-checks.h>
> > +#include <linux/types.h>
> > +
> > +/**
> > + * instrument_read - instrument regular read access
> > + *
> > + * Instrument a regular read access. The instrumentation should be inserted
> > + * before the actual read happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
>
> Based on offline discussion, that's what we add for KMSAN:
>
> > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_read(v, size);
> > +       kcsan_check_read(v, size);
>
> KMSAN: nothing
>
> > +}
> > +
> > +/**
> > + * instrument_write - instrument regular write access
> > + *
> > + * Instrument a regular write access. The instrumentation should be inserted
> > + * before the actual write happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_write(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_write(v, size);
> > +       kcsan_check_write(v, size);
>
> KMSAN: nothing
>
> > +}
> > +
> > +/**
> > + * instrument_atomic_read - instrument atomic read access
> > + *
> > + * Instrument an atomic read access. The instrumentation should be inserted
> > + * before the actual read happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_read(v, size);
> > +       kcsan_check_atomic_read(v, size);
>
> KMSAN: nothing
>
> > +}
> > +
> > +/**
> > + * instrument_atomic_write - instrument atomic write access
> > + *
> > + * Instrument an atomic write access. The instrumentation should be inserted
> > + * before the actual write happens.
> > + *
> > + * @ptr address of access
> > + * @size size of access
> > + */
> > +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> > +{
> > +       kasan_check_write(v, size);
> > +       kcsan_check_atomic_write(v, size);
>
> KMSAN: nothing
>
> > +}
> > +
> > +/**
> > + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> > + *
> > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted before the accesses. At this point the
> > + * actual number of bytes accessed is not yet known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + */
> > +static __always_inline void
> > +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> > +{
> > +       /* Check before, to warn before potential memory corruption. */
> > +       kasan_check_read(src, size);
>
> KMSAN: check that (src,size) is initialized
>
> > +}
> > +
> > +/**
> > + * instrument_copy_to_user_post - instrument reads of copy_to_user
> > + *
> > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted after the accesses. At this point the
> > + * actual number of bytes accessed should be known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + * @left number of bytes left that were not copied
> > + */
> > +static __always_inline void
> > +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> > +{
> > +       /* Check after, to avoid false positive if memory was not accessed. */
> > +       kcsan_check_read(src, size - left);
>
> KMSAN: nothing

One detail I noticed for KMSAN is that kmsan_copy_to_user has a
special case when the @to address is in kernel space (compat syscalls
doing tricky things): in that case it only copies metadata. We can't
handle this with the existing annotations.


 * actually copied to ensure there was no information leak. If @to belongs to
 * the kernel space (which is possible for compat syscalls), KMSAN just copies
 * the metadata.
 */
void kmsan_copy_to_user(const void *to, const void *from,
			size_t to_copy, size_t left);
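
To make the limitation concrete, here is a rough sketch (illustrative
only, not a proposal for the actual interface) of a post-copy hook
that also receives the user-space destination, so a KMSAN build could
simply forward both pointers to kmsan_copy_to_user() and let it decide
between checking @from for initializedness and copying metadata for a
kernel-space @to. The instrument_copy_to_user_post() shape below is an
assumption; it relies on the includes already pulled in by
instrumented.h plus a declaration of kmsan_copy_to_user():

static __always_inline void
instrument_copy_to_user_post(const void *to, const volatile void *from,
			     size_t size, size_t left)
{
	/* Check after the copy, so only the accessed bytes are reported. */
	kcsan_check_read(from, size - left);
	/*
	 * KMSAN needs both pointers: for a kernel-space @to (compat
	 * syscalls) it only copies metadata instead of checking that
	 * @from is initialized.  The cast drops volatile, which
	 * kmsan_copy_to_user() does not take.
	 */
	kmsan_copy_to_user(to, (const void *)from, size, left);
}

This is only meant to show why src-only pre/post hooks cannot cover
the KMSAN case, not how it should eventually be resolved.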


> > +}
> > +
> > +/**
> > + * instrument_copy_from_user_pre - instrument writes of copy_from_user
> > + *
> > + * Instrument writes to kernel memory, that are due to copy_from_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted before the accesses. At this point the
> > + * actual number of bytes accessed is not yet known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + */
> > +static __always_inline void
> > +instrument_copy_from_user_pre(const volatile void *dst, size_t size)
> > +{
> > +       /* Check before, to warn before potential memory corruption. */
> > +       kasan_check_write(dst, size);
>
> KMSAN: nothing
>
> > +}
> > +
> > +/**
> > + * instrument_copy_from_user_post - instrument writes of copy_from_user
> > + *
> > + * Instrument writes to kernel memory, that are due to copy_from_user (and
> > + * variants).
> > + *
> > + * The instrumentation must be inserted after the accesses. At this point the
> > + * actual number of bytes accessed should be known.
> > + *
> > + * @dst destination address
> > + * @size maximum access size
> > + * @left number of bytes left that were not copied
> > + */
> > +static __always_inline void
> > +instrument_copy_from_user_post(const volatile void *dst, size_t size, size_t left)
> > +{
> > +       /* Check after, to avoid false positive if memory was not accessed. */
> > +       kcsan_check_write(dst, size - left);
>
> KMSAN: mark (dst, size-left) as initialized
>
> > +}
> > +
> > +#endif /* _LINUX_INSTRUMENTED_H */
> > --
> > 2.25.0.341.g760bfbb309-goog
> >

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-21  9:15           ` Peter Zijlstra
@ 2020-01-21 14:21             ` Paul E. McKenney
  2020-01-21 14:47               ` Peter Zijlstra
  0 siblings, 1 reply; 29+ messages in thread
From: Paul E. McKenney @ 2020-01-21 14:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Tue, Jan 21, 2020 at 10:15:01AM +0100, Peter Zijlstra wrote:
> On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> > We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> > and atomic_set() are considered to be non-racy, right?
> 
> What is racy? :-) You can make data races with atomic_{read,set}() just
> fine.

Like "fairness", lots of definitions of "racy".  ;-)

> Anyway, traditionally we call the read-modify-write stuff atomic, not
> the trivial load-store stuff. The only reason we care about the
> load-store stuff in the first place is because C compilers are shit.
> 
> atomic_read() / test_bit() are just a load, all we need is the C
> compiler not to be an ass and split it. Yes, we've invented the term
> single-copy atomicity for that, but that doesn't make it more or less of
> a load.
> 
> And exactly because it is just a load, there is no __test_bit(), which
> would be the exact same load.

Very good!  Shouldn't KCSAN then define test_bit() as non-racy just as
for atomic_read()?
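
Concretely, "defining test_bit() as non-racy" could look something
like the sketch below (illustrative only; the arch_test_bit() naming
is assumed from the asm-generic instrumented bitops, and
instrument_atomic_read() is the wrapper from patch 1/5), i.e. the
load is instrumented as an atomic read so KCSAN does not flag
concurrent writers:

static inline bool test_bit(long nr, const volatile unsigned long *addr)
{
	/* Treat the load as atomic for KCSAN, like atomic_read(). */
	instrument_atomic_read(addr + BIT_WORD(nr), sizeof(long));
	return arch_test_bit(nr, addr);
}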

							Thanx, Paul

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-21 14:21             ` Paul E. McKenney
@ 2020-01-21 14:47               ` Peter Zijlstra
  2020-01-21 15:07                 ` Marco Elver
  2020-01-21 16:16                 ` Paul E. McKenney
  0 siblings, 2 replies; 29+ messages in thread
From: Peter Zijlstra @ 2020-01-21 14:47 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Tue, Jan 21, 2020 at 06:21:09AM -0800, Paul E. McKenney wrote:
> On Tue, Jan 21, 2020 at 10:15:01AM +0100, Peter Zijlstra wrote:
> > On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> > > We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> > > and atomic_set() are considered to be non-racy, right?
> > 
> > What is racy? :-) You can make data races with atomic_{read,set}() just
> > fine.
> 
> Like "fairness", lots of definitions of "racy".  ;-)
> 
> > Anyway, traditionally we call the read-modify-write stuff atomic, not
> > the trivial load-store stuff. The only reason we care about the
> > load-store stuff in the first place is because C compilers are shit.
> > 
> > atomic_read() / test_bit() are just a load, all we need is the C
> > compiler not to be an ass and split it. Yes, we've invented the term
> > single-copy atomicity for that, but that doesn't make it more or less of
> > a load.
> > 
> > And exactly because it is just a load, there is no __test_bit(), which
> > would be the exact same load.
> 
> Very good!  Shouldn't KCSAN then define test_bit() as non-racy just as
> for atomic_read()?

Sure it does; but my comment was aimed at the gripe that test_bit()
lives in the non-atomic bitops header. That is arguably entirely
correct.

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-21 14:47               ` Peter Zijlstra
@ 2020-01-21 15:07                 ` Marco Elver
  2020-01-21 16:16                 ` Paul E. McKenney
  1 sibling, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-21 15:07 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	Dmitry Vyukov, kasan-dev, LKML, Mark Rutland, Will Deacon,
	Boqun Feng, Arnd Bergmann, Al Viro, christophe leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Tue, 21 Jan 2020 at 15:47, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Jan 21, 2020 at 06:21:09AM -0800, Paul E. McKenney wrote:
> > On Tue, Jan 21, 2020 at 10:15:01AM +0100, Peter Zijlstra wrote:
> > > On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> > > > We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> > > > and atomic_set() are considered to be non-racy, right?
> > >
> > > What is racy? :-) You can make data races with atomic_{read,set}() just
> > > fine.
> >
> > Like "fairness", lots of definitions of "racy".  ;-)
> >
> > > Anyway, traditionally we call the read-modify-write stuff atomic, not
> > > the trivial load-store stuff. The only reason we care about the
> > > load-store stuff in the first place is because C compilers are shit.
> > >
> > > atomic_read() / test_bit() are just a load, all we need is the C
> > > compiler not to be an ass and split it. Yes, we've invented the term
> > > single-copy atomicity for that, but that doesn't make it more or less of
> > > a load.
> > >
> > > And exactly because it is just a load, there is no __test_bit(), which
> > > would be the exact same load.
> >
> > Very good!  Shouldn't KCSAN then define test_bit() as non-racy just as
> > for atomic_read()?
>
> Sure it does; but my comment was aimed at the gripe that test_bit()
> lives in the non-atomic bitops header. That is arguably entirely
> correct.

I will also point out that test_bit() is listed in
Documentation/atomic_bitops.txt.  What I gather from
atomic_bitops.txt is that the semantics of test_bit() are simply
those of an unordered atomic operation: the interface promises that
the load will be executed as one indivisible step, i.e. (single-copy)
atomically (after compiler optimizations etc.).

Although at this point probably not too important, I checked Alpha's
implementation of test_bit(), and there is no
smp_read_barrier_depends(). Is it safe to say that test_bit() should
then be weaker in terms of ordering than READ_ONCE()?

My assumption was that test_bit() is as strong as READ_ONCE().
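
For reference, a paraphrase of what I believe the generic
implementation boils down to, next to a purely hypothetical
READ_ONCE()-based variant (the second function does not exist
anywhere; it is only here to make the ordering difference concrete):

/*
 * Roughly the generic test_bit() (paraphrased from
 * include/asm-generic/bitops/non-atomic.h): a single volatile word
 * load, single-copy atomic in practice, but with no dependency
 * barrier.
 */
static inline int test_bit(int nr, const volatile unsigned long *addr)
{
	return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_LONG - 1)));
}

/*
 * Hypothetical variant for comparison only: as I understand it,
 * READ_ONCE() currently implies smp_read_barrier_depends() on Alpha,
 * so this would be strictly stronger than the plain load above.
 */
static inline int test_bit_read_once(int nr,
				     const volatile unsigned long *addr)
{
	return 1UL & (READ_ONCE(addr[BIT_WORD(nr)]) >>
		      (nr & (BITS_PER_LONG - 1)));
}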

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 1/5] include/linux: Add instrumented.h infrastructure
  2020-01-21 13:01   ` Dmitry Vyukov
@ 2020-01-21 16:14     ` Marco Elver
  0 siblings, 0 replies; 29+ messages in thread
From: Marco Elver @ 2020-01-21 16:14 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Paul E. McKenney, Andrey Konovalov, Alexander Potapenko,
	kasan-dev, LKML, Mark Rutland, Will Deacon, Peter Zijlstra,
	Boqun Feng, Arnd Bergmann, Al Viro, Christophe Leroy,
	Daniel Axtens, Michael Ellerman, Steven Rostedt,
	Masami Hiramatsu, Ingo Molnar, Christian Brauner,
	Daniel Borkmann, cyphar, Kees Cook, linux-arch

On Tue, 21 Jan 2020 at 14:01, Dmitry Vyukov <dvyukov@google.com> wrote:
>
> On Mon, Jan 20, 2020 at 3:45 PM Dmitry Vyukov <dvyukov@google.com> wrote:
> >
> > On Mon, Jan 20, 2020 at 3:19 PM Marco Elver <elver@google.com> wrote:
> > >
> > > This adds instrumented.h, which provides generic wrappers for memory
> > > access instrumentation that the compiler cannot emit for various
> > > sanitizers. Currently this unifies KASAN and KCSAN instrumentation. In
> > > future this will also include KMSAN instrumentation.
> > >
> > > Note that, copy_{to,from}_user require special instrumentation,
> > > providing hooks before and after the access, since we may need to know
> > > the actual bytes accessed (currently this is relevant for KCSAN, and is
> > > also relevant in future for KMSAN).
> > >
> > > Suggested-by: Arnd Bergmann <arnd@arndb.de>
> > > Signed-off-by: Marco Elver <elver@google.com>
> > > ---
> > >  include/linux/instrumented.h | 153 +++++++++++++++++++++++++++++++++++
> > >  1 file changed, 153 insertions(+)
> > >  create mode 100644 include/linux/instrumented.h
> > >
> > > diff --git a/include/linux/instrumented.h b/include/linux/instrumented.h
> > > new file mode 100644
> > > index 000000000000..9f83c8520223
> > > --- /dev/null
> > > +++ b/include/linux/instrumented.h
> > > @@ -0,0 +1,153 @@
> > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > +
> > > +/*
> > > + * This header provides generic wrappers for memory access instrumentation that
> > > + * the compiler cannot emit for: KASAN, KCSAN.
> > > + */
> > > +#ifndef _LINUX_INSTRUMENTED_H
> > > +#define _LINUX_INSTRUMENTED_H
> > > +
> > > +#include <linux/compiler.h>
> > > +#include <linux/kasan-checks.h>
> > > +#include <linux/kcsan-checks.h>
> > > +#include <linux/types.h>
> > > +
> > > +/**
> > > + * instrument_read - instrument regular read access
> > > + *
> > > + * Instrument a regular read access. The instrumentation should be inserted
> > > + * before the actual read happens.
> > > + *
> > > + * @ptr address of access
> > > + * @size size of access
> > > + */
> >
> > Based on offline discussion, that's what we add for KMSAN:
> >
> > > +static __always_inline void instrument_read(const volatile void *v, size_t size)
> > > +{
> > > +       kasan_check_read(v, size);
> > > +       kcsan_check_read(v, size);
> >
> > KMSAN: nothing
> >
> > > +}
> > > +
> > > +/**
> > > + * instrument_write - instrument regular write access
> > > + *
> > > + * Instrument a regular write access. The instrumentation should be inserted
> > > + * before the actual write happens.
> > > + *
> > > + * @ptr address of access
> > > + * @size size of access
> > > + */
> > > +static __always_inline void instrument_write(const volatile void *v, size_t size)
> > > +{
> > > +       kasan_check_write(v, size);
> > > +       kcsan_check_write(v, size);
> >
> > KMSAN: nothing
> >
> > > +}
> > > +
> > > +/**
> > > + * instrument_atomic_read - instrument atomic read access
> > > + *
> > > + * Instrument an atomic read access. The instrumentation should be inserted
> > > + * before the actual read happens.
> > > + *
> > > + * @ptr address of access
> > > + * @size size of access
> > > + */
> > > +static __always_inline void instrument_atomic_read(const volatile void *v, size_t size)
> > > +{
> > > +       kasan_check_read(v, size);
> > > +       kcsan_check_atomic_read(v, size);
> >
> > KMSAN: nothing
> >
> > > +}
> > > +
> > > +/**
> > > + * instrument_atomic_write - instrument atomic write access
> > > + *
> > > + * Instrument an atomic write access. The instrumentation should be inserted
> > > + * before the actual write happens.
> > > + *
> > > + * @ptr address of access
> > > + * @size size of access
> > > + */
> > > +static __always_inline void instrument_atomic_write(const volatile void *v, size_t size)
> > > +{
> > > +       kasan_check_write(v, size);
> > > +       kcsan_check_atomic_write(v, size);
> >
> > KMSAN: nothing
> >
> > > +}
> > > +
> > > +/**
> > > + * instrument_copy_to_user_pre - instrument reads of copy_to_user
> > > + *
> > > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > > + * variants).
> > > + *
> > > + * The instrumentation must be inserted before the accesses. At this point the
> > > + * actual number of bytes accessed is not yet known.
> > > + *
> > > + * @dst destination address
> > > + * @size maximum access size
> > > + */
> > > +static __always_inline void
> > > +instrument_copy_to_user_pre(const volatile void *src, size_t size)
> > > +{
> > > +       /* Check before, to warn before potential memory corruption. */
> > > +       kasan_check_read(src, size);
> >
> > KMSAN: check that (src,size) is initialized
> >
> > > +}
> > > +
> > > +/**
> > > + * instrument_copy_to_user_post - instrument reads of copy_to_user
> > > + *
> > > + * Instrument reads from kernel memory, that are due to copy_to_user (and
> > > + * variants).
> > > + *
> > > + * The instrumentation must be inserted after the accesses. At this point the
> > > + * actual number of bytes accessed should be known.
> > > + *
> > > + * @dst destination address
> > > + * @size maximum access size
> > > + * @left number of bytes left that were not copied
> > > + */
> > > +static __always_inline void
> > > +instrument_copy_to_user_post(const volatile void *src, size_t size, size_t left)
> > > +{
> > > +       /* Check after, to avoid false positive if memory was not accessed. */
> > > +       kcsan_check_read(src, size - left);
> >
> > KMSAN: nothing
>
> One detail I noticed for KMSAN is that kmsan_copy_to_user has a
> special case when the @to address is in kernel space (compat syscalls
> doing tricky things): in that case it only copies metadata. We can't
> handle this with the existing annotations.
>
>
>  * actually copied to ensure there was no information leak. If @to belongs to
>  * the kernel space (which is possible for compat syscalls), KMSAN just copies
>  * the metadata.
>  */
> void kmsan_copy_to_user(const void *to, const void *from,
> 			size_t to_copy, size_t left);

Sent v2: http://lkml.kernel.org/r/20200121160512.70887-1-elver@google.com
I hope it'll satisfy our various constraints for now.

Thanks,
-- Marco

^ permalink raw reply	[flat|nested] 29+ messages in thread

* Re: [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops
  2020-01-21 14:47               ` Peter Zijlstra
  2020-01-21 15:07                 ` Marco Elver
@ 2020-01-21 16:16                 ` Paul E. McKenney
  1 sibling, 0 replies; 29+ messages in thread
From: Paul E. McKenney @ 2020-01-21 16:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Marco Elver, andreyknvl, glider, dvyukov, kasan-dev,
	linux-kernel, mark.rutland, will, boqun.feng, arnd, viro,
	christophe.leroy, dja, mpe, rostedt, mhiramat, mingo,
	christian.brauner, daniel, cyphar, keescook, linux-arch

On Tue, Jan 21, 2020 at 03:47:16PM +0100, Peter Zijlstra wrote:
> On Tue, Jan 21, 2020 at 06:21:09AM -0800, Paul E. McKenney wrote:
> > On Tue, Jan 21, 2020 at 10:15:01AM +0100, Peter Zijlstra wrote:
> > > On Mon, Jan 20, 2020 at 12:23:59PM -0800, Paul E. McKenney wrote:
> > > > We also don't have __atomic_read() and __atomic_set(), yet atomic_read()
> > > > and atomic_set() are considered to be non-racy, right?
> > > 
> > > What is racy? :-) You can make data races with atomic_{read,set}() just
> > > fine.
> > 
> > Like "fairness", lots of definitions of "racy".  ;-)
> > 
> > > Anyway, traditionally we call the read-modify-write stuff atomic, not
> > > the trivial load-store stuff. The only reason we care about the
> > > load-store stuff in the first place is because C compilers are shit.
> > > 
> > > atomic_read() / test_bit() are just a load, all we need is the C
> > > compiler not to be an ass and split it. Yes, we've invented the term
> > > single-copy atomicity for that, but that doesn't make it more or less of
> > > a load.
> > > 
> > > And exactly because it is just a load, there is no __test_bit(), which
> > > would be the exact same load.
> > 
> > Very good!  Shouldn't KCSAN then define test_bit() as non-racy just as
> > for atomic_read()?
> 
> Sure it does; but my comment was aimed at the gripe that test_bit()
> lives in the non-atomic bitops header. That is arguably entirely
> correct.

Fair enough!

							Thanx, Paul

^ permalink raw reply	[flat|nested] 29+ messages in thread

end of thread, other threads:[~2020-01-21 16:16 UTC | newest]

Thread overview: 29+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-20 14:19 [PATCH 1/5] include/linux: Add instrumented.h infrastructure Marco Elver
2020-01-20 14:19 ` [PATCH 2/5] asm-generic, atomic-instrumented: Use generic instrumented.h Marco Elver
2020-01-20 14:19 ` [PATCH 3/5] asm-generic, kcsan: Add KCSAN instrumentation for bitops Marco Elver
2020-01-20 14:40   ` Peter Zijlstra
2020-01-20 16:27     ` Paul E. McKenney
2020-01-20 16:52       ` Peter Zijlstra
2020-01-20 20:23         ` Paul E. McKenney
2020-01-21  9:15           ` Peter Zijlstra
2020-01-21 14:21             ` Paul E. McKenney
2020-01-21 14:47               ` Peter Zijlstra
2020-01-21 15:07                 ` Marco Elver
2020-01-21 16:16                 ` Paul E. McKenney
2020-01-20 14:19 ` [PATCH 4/5] iov_iter: Use generic instrumented.h Marco Elver
2020-01-20 14:19 ` [PATCH 5/5] copy_to_user, copy_from_user: " Marco Elver
2020-01-20 14:51   ` Dmitry Vyukov
2020-01-20 15:05     ` Marco Elver
2020-01-20 14:25 ` [PATCH 1/5] include/linux: Add instrumented.h infrastructure Alexander Potapenko
2020-01-20 14:34 ` Dmitry Vyukov
2020-01-20 15:53   ` Marco Elver
2020-01-20 14:45 ` Dmitry Vyukov
2020-01-20 14:58   ` Dmitry Vyukov
2020-01-20 15:09     ` Dmitry Vyukov
2020-01-20 15:40       ` Marco Elver
2020-01-20 16:06         ` Dmitry Vyukov
2020-01-20 16:25           ` Marco Elver
2020-01-20 16:39             ` Dmitry Vyukov
2020-01-21  9:44               ` Marco Elver
2020-01-21 13:01   ` Dmitry Vyukov
2020-01-21 16:14     ` Marco Elver

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).