* [PATCH 0/3] kcov: support comparison operands collection
@ 2017-08-30 16:23 Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
                   ` (2 more replies)
  0 siblings, 3 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-08-30 16:23 UTC (permalink / raw)
  To: akpm, linux-mm; +Cc: tchibo, Dmitry Vyukov

Enables kcov to collect comparison operands from instrumented code.
This is done by using Clang's -fsanitize-coverage=trace-cmp
instrumentation (currently not available for GCC).

The comparison operands are very useful in fuzz testing. For example,
syzkaller uses them to reach the interiors of conditional statements
with far fewer attempts, making previously unreachable code reachable.

To allow separate collection of coverage and comparison operands, two
different working modes are implemented. Mode selection is done via a
KCOV_ENABLE ioctl call with the corresponding argument value.

Clang instrumentation:
https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
Syzkaller:
https://github.com/google/syzkaller

Victor Chibotaru (3):
  kcov: support comparison operands collection
  Makefile: support flag -fsanitize-coverage=trace-cmp
  kcov: update documentation

 Documentation/dev-tools/kcov.rst |  94 +++++++++++++++++-
 Makefile                         |   5 +-
 include/linux/kcov.h             |  12 ++-
 include/uapi/linux/kcov.h        |  32 ++++++
 kernel/kcov.c                    | 203 ++++++++++++++++++++++++++++++++-------
 lib/Kconfig.debug                |   8 ++
 scripts/Makefile.kcov            |   6 ++
 scripts/Makefile.lib             |   6 ++
 8 files changed, 322 insertions(+), 44 deletions(-)
 create mode 100644 scripts/Makefile.kcov

-- 
2.14.1.581.gf28d330327-goog

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: dont@kvack.org

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 16:23 [PATCH 0/3] kcov: support comparison operands collection Dmitry Vyukov
@ 2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-08-30 16:23 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: tchibo, Mark Rutland, Alexander Popov, Andrey Ryabinin,
	Kees Cook, Vegard Nossum, Quentin Casasnovas, syzkaller,
	linux-kernel

From: Victor Chibotaru <tchibo@google.com>

Enables kcov to collect comparison operands from instrumented code.
This is done by using Clang's -fsanitize-coverage=trace-cmp
instrumentation (currently not available for GCC).

The comparison operands are very useful in fuzz testing. For example,
syzkaller uses them to reach the interiors of conditional statements
with far fewer attempts, making previously unreachable code reachable.

To allow separate collection of coverage and comparison operands, two
different working modes are implemented. Mode selection is done via a
KCOV_ENABLE ioctl call with the corresponding argument value.

Signed-off-by: Victor Chibotaru <tchibo@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: syzkaller@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
Clang instrumentation:
https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
Syzkaller:
https://github.com/google/syzkaller
---
 include/linux/kcov.h      |  12 ++-
 include/uapi/linux/kcov.h |  32 ++++++++
 kernel/kcov.c             | 203 ++++++++++++++++++++++++++++++++++++++--------
 3 files changed, 209 insertions(+), 38 deletions(-)

diff --git a/include/linux/kcov.h b/include/linux/kcov.h
index 2883ac98c280..87e2a44f1bab 100644
--- a/include/linux/kcov.h
+++ b/include/linux/kcov.h
@@ -7,19 +7,23 @@ struct task_struct;
 
 #ifdef CONFIG_KCOV
 
-void kcov_task_init(struct task_struct *t);
-void kcov_task_exit(struct task_struct *t);
-
 enum kcov_mode {
 	/* Coverage collection is not enabled yet. */
 	KCOV_MODE_DISABLED = 0,
+	/* KCOV was initialized, but tracing mode hasn't been chosen yet. */
+	KCOV_MODE_INIT = 1,
 	/*
 	 * Tracing coverage collection mode.
 	 * Covered PCs are collected in a per-task buffer.
 	 */
-	KCOV_MODE_TRACE = 1,
+	KCOV_MODE_TRACE_PC = 2,
+	/* Collecting comparison operands mode. */
+	KCOV_MODE_TRACE_CMP = 3,
 };
 
+void kcov_task_init(struct task_struct *t);
+void kcov_task_exit(struct task_struct *t);
+
 #else
 
 static inline void kcov_task_init(struct task_struct *t) {}
diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
index 574e22ec640d..a0bc3e6a5ff7 100644
--- a/include/uapi/linux/kcov.h
+++ b/include/uapi/linux/kcov.h
@@ -7,4 +7,36 @@
 #define KCOV_ENABLE			_IO('c', 100)
 #define KCOV_DISABLE			_IO('c', 101)
 
+enum {
+	/*
+	 * Tracing coverage collection mode.
+	 * Covered PCs are collected in a per-task buffer.
+	 * In the new KCOV version the mode is chosen by calling
+	 * ioctl(fd, KCOV_ENABLE, mode). In older versions the mode argument
+	 * had to be 0 in such a call, so for backward compatibility
+	 * KCOV_TRACE_PC is defined as 0.
+	 */
+	KCOV_TRACE_PC = 0,
+	/* Collecting comparison operands mode. */
+	KCOV_TRACE_CMP = 1,
+};
+
+/*
+ * Defines the format for the types of collected comparisons.
+ */
+enum kcov_cmp_type {
+	/*
+	 * LSB shows whether one of the arguments is a compile-time constant.
+	 */
+	KCOV_CMP_CONST = 1,
+	/*
+	 * Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
+	 */
+	KCOV_CMP_SIZE1 = 0,
+	KCOV_CMP_SIZE2 = 2,
+	KCOV_CMP_SIZE4 = 4,
+	KCOV_CMP_SIZE8 = 6,
+	KCOV_CMP_SIZE_MASK = 6,
+};
+
 #endif /* _LINUX_KCOV_IOCTLS_H */
diff --git a/kernel/kcov.c b/kernel/kcov.c
index cd771993f96f..2abce5dfa2df 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -21,13 +21,21 @@
 #include <linux/kcov.h>
 #include <asm/setup.h>
 
+/* Number of words written per one comparison. */
+#define KCOV_WORDS_PER_CMP 3
+
 /*
  * kcov descriptor (one per opened debugfs file).
  * State transitions of the descriptor:
  *  - initial state after open()
  *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
  *  - then, mmap() call (several calls are allowed but not useful)
- *  - then, repeated enable/disable for a task (only one task a time allowed)
+ *  - then, ioctl(KCOV_ENABLE, arg), where arg is
+ *	KCOV_TRACE_PC - to trace only the PCs
+ *	or
+ *	KCOV_TRACE_CMP - to trace only the comparison operands
+ *  - then, ioctl(KCOV_DISABLE) to disable the task.
+ * Enabling/disabling ioctls can be repeated (only one task at a time allowed).
  */
 struct kcov {
 	/*
@@ -47,6 +55,30 @@ struct kcov {
 	struct task_struct	*t;
 };
 
+static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
+{
+	enum kcov_mode mode;
+
+	/*
+	 * We are interested in code coverage as a function of syscall inputs,
+	 * so we ignore code executed in interrupts.
+	 */
+	if (!t || !in_task())
+		return false;
+	mode = READ_ONCE(t->kcov_mode);
+	/*
+	 * There is some code that runs in interrupts but for which
+	 * in_interrupt() returns false (e.g. preempt_schedule_irq()).
+	 * READ_ONCE()/barrier() effectively provides load-acquire wrt
+	 * interrupts, there are paired barrier()/WRITE_ONCE() in
+	 * kcov_ioctl_locked().
+	 */
+	barrier();
+	if (mode != needed_mode)
+		return false;
+	return true;
+}
+
 /*
  * Entry point from instrumented code.
  * This is called once per basic-block/edge.
@@ -54,44 +86,136 @@ struct kcov {
 void notrace __sanitizer_cov_trace_pc(void)
 {
 	struct task_struct *t;
-	enum kcov_mode mode;
+	unsigned long *area;
+	unsigned long ip = _RET_IP_;
+	unsigned long pos;
 
 	t = current;
-	/*
-	 * We are interested in code coverage as a function of a syscall inputs,
-	 * so we ignore code executed in interrupts.
-	 */
-	if (!t || !in_task())
+	if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
 		return;
-	mode = READ_ONCE(t->kcov_mode);
-	if (mode == KCOV_MODE_TRACE) {
-		unsigned long *area;
-		unsigned long pos;
-		unsigned long ip = _RET_IP_;
 
 #ifdef CONFIG_RANDOMIZE_BASE
-		ip -= kaslr_offset();
+	ip -= kaslr_offset();
 #endif
 
-		/*
-		 * There is some code that runs in interrupts but for which
-		 * in_interrupt() returns false (e.g. preempt_schedule_irq()).
-		 * READ_ONCE()/barrier() effectively provides load-acquire wrt
-		 * interrupts, there are paired barrier()/WRITE_ONCE() in
-		 * kcov_ioctl_locked().
-		 */
-		barrier();
-		area = t->kcov_area;
-		/* The first word is number of subsequent PCs. */
-		pos = READ_ONCE(area[0]) + 1;
-		if (likely(pos < t->kcov_size)) {
-			area[pos] = ip;
-			WRITE_ONCE(area[0], pos);
-		}
+	area = t->kcov_area;
+	/* The first word is number of subsequent PCs. */
+	pos = READ_ONCE(area[0]) + 1;
+	if (likely(pos < t->kcov_size)) {
+		area[pos] = ip;
+		WRITE_ONCE(area[0], pos);
 	}
 }
 EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
 
+#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
+static void write_comp_data(u64 type, u64 arg1, u64 arg2)
+{
+	struct task_struct *t;
+	u64 *area;
+	u64 count, start_index, end_pos, max_pos;
+
+	t = current;
+	if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
+		return;
+
+	/*
+	 * We write all comparison arguments and types as u64.
+	 * The buffer was allocated for t->kcov_size unsigned longs.
+	 */
+	area = (u64 *)t->kcov_area;
+	max_pos = t->kcov_size * sizeof(unsigned long);
+
+	count = READ_ONCE(area[0]);
+
+	/* Every record is KCOV_WORDS_PER_CMP words. */
+	start_index = 1 + count * KCOV_WORDS_PER_CMP;
+	end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
+	if (likely(end_pos <= max_pos)) {
+		area[start_index] = type;
+		area[start_index + 1] = arg1;
+		area[start_index + 2] = arg2;
+		WRITE_ONCE(area[0], count + 1);
+	}
+}
+
+void notrace __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE1, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_cmp1);
+
+void notrace __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE2, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2);
+
+void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE4, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_cmp4);
+
+void notrace __sanitizer_cov_trace_cmp8(u64 arg1, u64 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE8, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_cmp8);
+
+void notrace __sanitizer_cov_trace_const_cmp1(u8 arg1, u8 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE1 | KCOV_CMP_CONST, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp1);
+
+void notrace __sanitizer_cov_trace_const_cmp2(u16 arg1, u16 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE2 | KCOV_CMP_CONST, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2);
+
+void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE4 | KCOV_CMP_CONST, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp4);
+
+void notrace __sanitizer_cov_trace_const_cmp8(u64 arg1, u64 arg2)
+{
+	write_comp_data(KCOV_CMP_SIZE8 | KCOV_CMP_CONST, arg1, arg2);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp8);
+
+void notrace __sanitizer_cov_trace_switch(u64 val, u64 *cases)
+{
+	u64 i;
+	u64 count = cases[0];
+	u64 size = cases[1];
+	u64 type = KCOV_CMP_CONST;
+
+	switch (size) {
+	case 8:
+		type |= KCOV_CMP_SIZE1;
+		break;
+	case 16:
+		type |= KCOV_CMP_SIZE2;
+		break;
+	case 32:
+		type |= KCOV_CMP_SIZE4;
+		break;
+	case 64:
+		type |= KCOV_CMP_SIZE8;
+		break;
+	default:
+		return;
+	}
+	for (i = 0; i < count; i++)
+		write_comp_data(type, cases[i + 2], val);
+}
+EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
+#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
+
 static void kcov_get(struct kcov *kcov)
 {
 	atomic_inc(&kcov->refcount);
@@ -128,6 +252,7 @@ void kcov_task_exit(struct task_struct *t)
 	/* Just to not leave dangling references behind. */
 	kcov_task_init(t);
 	kcov->t = NULL;
+	kcov->mode = KCOV_MODE_INIT;
 	spin_unlock(&kcov->lock);
 	kcov_put(kcov);
 }
@@ -146,7 +271,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 
 	spin_lock(&kcov->lock);
 	size = kcov->size * sizeof(unsigned long);
-	if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
+	if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
 	    vma->vm_end - vma->vm_start != size) {
 		res = -EINVAL;
 		goto exit;
@@ -175,6 +300,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
 	kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
 	if (!kcov)
 		return -ENOMEM;
+	kcov->mode = KCOV_MODE_DISABLED;
 	atomic_set(&kcov->refcount, 1);
 	spin_lock_init(&kcov->lock);
 	filep->private_data = kcov;
@@ -210,7 +336,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
 			return -EINVAL;
 		kcov->size = size;
-		kcov->mode = KCOV_MODE_TRACE;
+		kcov->mode = KCOV_MODE_INIT;
 		return 0;
 	case KCOV_ENABLE:
 		/*
@@ -220,17 +346,25 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 		 * at task exit or voluntary by KCOV_DISABLE. After that it can
 		 * be enabled for another task.
 		 */
-		unused = arg;
-		if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
-		    kcov->area == NULL)
+		if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
 			return -EINVAL;
 		if (kcov->t != NULL)
 			return -EBUSY;
+		if (arg == KCOV_TRACE_PC)
+			kcov->mode = KCOV_MODE_TRACE_PC;
+		else if (arg == KCOV_TRACE_CMP)
+#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
+			kcov->mode = KCOV_MODE_TRACE_CMP;
+#else
+		return -ENOTSUPP;
+#endif
+		else
+			return -EINVAL;
 		t = current;
 		/* Cache in task struct for performance. */
 		t->kcov_size = kcov->size;
 		t->kcov_area = kcov->area;
-		/* See comment in __sanitizer_cov_trace_pc(). */
+		/* See comment in check_kcov_mode(). */
 		barrier();
 		WRITE_ONCE(t->kcov_mode, kcov->mode);
 		t->kcov = kcov;
@@ -248,6 +382,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			return -EINVAL;
 		kcov_task_init(t);
 		kcov->t = NULL;
+		kcov->mode = KCOV_MODE_INIT;
 		kcov_put(kcov);
 		return 0;
 	default:
-- 
2.14.1.581.gf28d330327-goog

^ permalink raw reply related	[flat|nested] 21+ messages in thread


* [PATCH 2/3] Makefile: support flag -fsanitize-coverage=trace-cmp
  2017-08-30 16:23 [PATCH 0/3] kcov: support comparison operands collection Dmitry Vyukov
@ 2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-08-30 16:23 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: tchibo, Mark Rutland, Alexander Popov, Andrey Ryabinin,
	Kees Cook, Vegard Nossum, Quentin Casasnovas, syzkaller,
	linux-kernel

From: Victor Chibotaru <tchibo@google.com>

The flag enables Clang instrumentation of comparison operations
(currently not supported by GCC). This instrumentation is needed by the
new KCOV device to collect comparison operands.

Signed-off-by: Victor Chibotaru <tchibo@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: syzkaller@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
Clang instrumentation:
https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
---
 Makefile              | 5 +++--
 lib/Kconfig.debug     | 8 ++++++++
 scripts/Makefile.kcov | 6 ++++++
 scripts/Makefile.lib  | 6 ++++++
 4 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/Makefile b/Makefile
index f9703f3223eb..bb117c42a785 100644
--- a/Makefile
+++ b/Makefile
@@ -374,7 +374,7 @@ AFLAGS_KERNEL	=
 LDFLAGS_vmlinux =
 CFLAGS_GCOV	:= -fprofile-arcs -ftest-coverage -fno-tree-loop-im $(call cc-disable-warning,maybe-uninitialized,)
 CFLAGS_KCOV	:= $(call cc-option,-fsanitize-coverage=trace-pc,)
-
+CFLAGS_KCOV_COMPS := $(call cc-option,-fsanitize-coverage=trace-cmp,)
 
 # Use USERINCLUDE when you must reference the UAPI directories only.
 USERINCLUDE    := \
@@ -420,7 +420,7 @@ export MAKE AWK GENKSYMS INSTALLKERNEL PERL PYTHON UTS_MACHINE
 export HOSTCXX HOSTCXXFLAGS LDFLAGS_MODULE CHECK CHECKFLAGS
 
 export KBUILD_CPPFLAGS NOSTDINC_FLAGS LINUXINCLUDE OBJCOPYFLAGS LDFLAGS
-export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KCOV CFLAGS_KASAN CFLAGS_UBSAN
+export KBUILD_CFLAGS CFLAGS_KERNEL CFLAGS_MODULE CFLAGS_GCOV CFLAGS_KCOV CFLAGS_KCOV_COMPS CFLAGS_KASAN CFLAGS_UBSAN
 export KBUILD_AFLAGS AFLAGS_KERNEL AFLAGS_MODULE
 export KBUILD_AFLAGS_MODULE KBUILD_CFLAGS_MODULE KBUILD_LDFLAGS_MODULE
 export KBUILD_AFLAGS_KERNEL KBUILD_CFLAGS_KERNEL
@@ -822,6 +822,7 @@ KBUILD_CFLAGS   += $(call cc-option,-Werror=designated-init)
 KBUILD_ARFLAGS := $(call ar-option,D)
 
 include scripts/Makefile.kasan
+include scripts/Makefile.kcov
 include scripts/Makefile.extrawarn
 include scripts/Makefile.ubsan
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index d7e3f0bfe91e..85fc0db2e8af 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -777,6 +777,14 @@ config KCOV
 
 	  For more details, see Documentation/dev-tools/kcov.rst.
 
+config KCOV_ENABLE_COMPARISONS
+	bool "Enable comparison operands collection by KCOV"
+	depends on KCOV
+	default n
+	help
+	  KCOV also exposes operands of every comparison in instrumented code.
+	  Note: currently only available if compiled with Clang.
+
 config KCOV_INSTRUMENT_ALL
 	bool "Instrument all code by default"
 	depends on KCOV
diff --git a/scripts/Makefile.kcov b/scripts/Makefile.kcov
new file mode 100644
index 000000000000..5d6e644cefed
--- /dev/null
+++ b/scripts/Makefile.kcov
@@ -0,0 +1,6 @@
+ifeq ($(CONFIG_KCOV_ENABLE_COMPARISONS),y)
+ifneq ($(cc-name),clang)
+  $(warning Cannot use CONFIG_KCOV_ENABLE_COMPARISONS: \
+            -fsanitize=trace-cmp is not supported by compiler)
+endif
+endif
diff --git a/scripts/Makefile.lib b/scripts/Makefile.lib
index 58c05e5d9870..bb38cd33e15c 100644
--- a/scripts/Makefile.lib
+++ b/scripts/Makefile.lib
@@ -142,6 +142,12 @@ _c_flags += $(if $(patsubst n%,, \
 	$(CFLAGS_KCOV))
 endif
 
+ifeq ($(CONFIG_KCOV_ENABLE_COMPARISONS),y)
+_c_flags += $(if $(patsubst n%,, \
+	$(KCOV_INSTRUMENT_$(basetarget).o)$(KCOV_INSTRUMENT)$(CONFIG_KCOV_INSTRUMENT_ALL)), \
+	$(CFLAGS_KCOV_COMPS))
+endif
+
 # If building the kernel in a separate objtree expand all occurrences
 # of -Idir to -I$(srctree)/dir except for absolute paths (starting with '/').
 
-- 
2.14.1.581.gf28d330327-goog

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 3/3] kcov: update documentation
  2017-08-30 16:23 [PATCH 0/3] kcov: support comparison operands collection Dmitry Vyukov
@ 2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2017-08-30 16:23   ` Dmitry Vyukov
  2 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-08-30 16:23 UTC (permalink / raw)
  To: akpm, linux-mm
  Cc: tchibo, Mark Rutland, Alexander Popov, Andrey Ryabinin,
	Kees Cook, Vegard Nossum, Quentin Casasnovas, syzkaller,
	linux-kernel

From: Victor Chibotaru <tchibo@google.com>

The updated documentation describes new KCOV mode for collecting
comparison operands.

Signed-off-by: Victor Chibotaru <tchibo@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Alexander Popov <alex.popov@linux.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
Cc: syzkaller@googlegroups.com
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---
 Documentation/dev-tools/kcov.rst | 94 ++++++++++++++++++++++++++++++++++++++--
 1 file changed, 90 insertions(+), 4 deletions(-)

diff --git a/Documentation/dev-tools/kcov.rst b/Documentation/dev-tools/kcov.rst
index 44886c91e112..080cb603bf79 100644
--- a/Documentation/dev-tools/kcov.rst
+++ b/Documentation/dev-tools/kcov.rst
@@ -12,19 +12,30 @@ To achieve this goal it does not collect coverage in soft/hard interrupts
 and instrumentation of some inherently non-deterministic parts of kernel is
 disabled (e.g. scheduler, locking).
 
-Usage
------
+kcov is also able to collect comparison operands from the instrumented code
+(this feature currently requires that the kernel is compiled with clang).
+
+Prerequisites
+-------------
 
 Configure the kernel with::
 
         CONFIG_KCOV=y
 
 CONFIG_KCOV requires gcc built on revision 231296 or later.
+
+If the comparison operands need to be collected, set::
+
+	CONFIG_KCOV_ENABLE_COMPARISONS=y
+
 Profiling data will only become accessible once debugfs has been mounted::
 
         mount -t debugfs none /sys/kernel/debug
 
-The following program demonstrates kcov usage from within a test program:
+Coverage collection
+-------------------
+The following program demonstrates coverage collection from within a test
+program using kcov:
 
 .. code-block:: c
 
@@ -44,6 +55,9 @@ The following program demonstrates kcov usage from within a test program:
     #define KCOV_DISABLE			_IO('c', 101)
     #define COVER_SIZE			(64<<10)
 
+    #define KCOV_TRACE_PC  0
+    #define KCOV_TRACE_CMP 1
+
     int main(int argc, char **argv)
     {
 	int fd;
@@ -64,7 +78,7 @@ The following program demonstrates kcov usage from within a test program:
 	if ((void*)cover == MAP_FAILED)
 		perror("mmap"), exit(1);
 	/* Enable coverage collection on the current thread. */
-	if (ioctl(fd, KCOV_ENABLE, 0))
+	if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC))
 		perror("ioctl"), exit(1);
 	/* Reset coverage from the tail of the ioctl() call. */
 	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
@@ -111,3 +125,75 @@ The interface is fine-grained to allow efficient forking of test processes.
 That is, a parent process opens /sys/kernel/debug/kcov, enables trace mode,
 mmaps coverage buffer and then forks child processes in a loop. Child processes
 only need to enable coverage (disable happens automatically on thread end).
+
+Comparison operands collection
+------------------------------
+Comparison operands collection is similar to coverage collection:
+
+.. code-block:: c
+
+    /* Same includes and defines as above. */
+
+     enum kcov_cmp_type {
+	/*
+	* LSB shows whether the first argument is a compile-time constant.
+	*/
+	KCOV_CMP_CONST = 1,
+	/*
+	* Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
+	*/
+	KCOV_CMP_SIZE1 = 0,
+	KCOV_CMP_SIZE2 = 2,
+	KCOV_CMP_SIZE4 = 4,
+	KCOV_CMP_SIZE8 = 6,
+	KCOV_CMP_SIZE_MASK = 6,
+    };
+
+    int main(int argc, char **argv)
+    {
+	int fd;
+	uint64_t *cover, type, arg1, arg2, is_const, size;
+	unsigned long n, i;
+
+	fd = open("/sys/kernel/debug/kcov", O_RDWR);
+	if (fd == -1)
+		perror("open"), exit(1);
+	if (ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE))
+		perror("ioctl"), exit(1);
+	/*
+	* Note that the buffer pointer is of type uint64_t*, because all
+	* the comparison operands are promoted to uint64_t.
+	*/
+	cover = (uint64_t *)mmap(NULL, COVER_SIZE * sizeof(unsigned long),
+				     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+	if ((void*)cover == MAP_FAILED)
+		perror("mmap"), exit(1);
+	/* Note KCOV_TRACE_CMP instead of KCOV_TRACE_PC. */
+	if (ioctl(fd, KCOV_ENABLE, KCOV_TRACE_CMP))
+		perror("ioctl"), exit(1);
+	__atomic_store_n(&cover[0], 0, __ATOMIC_RELAXED);
+	read(-1, NULL, 0);
+	/* Read number of comparisons collected. */
+	n = __atomic_load_n(&cover[0], __ATOMIC_RELAXED);
+	for (i = 0; i < n; i++) {
+		type = cover[i*3 + 1];
+		/* arg1 and arg2 - operands of the comparison. */
+		arg1 = cover[i*3 + 2];
+		arg2 = cover[i*3 + 3];
+		/* size == KCOV_CMP_SIZEi. */
+		size = type & KCOV_CMP_SIZE_MASK;
+		/* is_const - shows whether arg1 is a compile-time constant.*/
+		is_const = type & KCOV_CMP_CONST;
+	}
+	if (ioctl(fd, KCOV_DISABLE, 0))
+		perror("ioctl"), exit(1);
+	/* Free resources. */
+	if (munmap(cover, COVER_SIZE * sizeof(unsigned long)))
+		perror("munmap"), exit(1);
+	if (close(fd))
+		perror("close"), exit(1);
+	return 0;
+    }
+
+Note that the kcov modes (coverage collection or comparison operands) are
+mutually exclusive.
-- 
2.14.1.581.gf28d330327-goog

^ permalink raw reply related	[flat|nested] 21+ messages in thread
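
The buffer layout the documentation above describes (a leading u64 record count in cover[0], followed by type/arg1/arg2 triples) can be exercised in plain userspace against a hand-built buffer. The sketch below is illustrative only; it reuses the KCOV_CMP_* values from the documented enum and does not talk to a kcov-enabled kernel:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define KCOV_CMP_CONST     1
#define KCOV_CMP_SIZE_MASK 6

/* Decode the i-th comparison record from a KCOV_TRACE_CMP-style buffer. */
static void decode(const uint64_t *cover, unsigned long i,
		   uint64_t *type, uint64_t *arg1, uint64_t *arg2)
{
	*type = cover[i * 3 + 1];
	*arg1 = cover[i * 3 + 2];
	*arg2 = cover[i * 3 + 3];
}

int main(void)
{
	/* Hand-built buffer imitating what KCOV_TRACE_CMP would produce:
	 * cover[0] holds the record count, then type/arg1/arg2 triples. */
	uint64_t cover[] = {
		2,
		/* 8-byte compare, first operand constant: type = 6 | 1. */
		7, 0xdeadbeef, 0xcafe,
		/* 4-byte compare, no constant operand: type = 4. */
		4, 42, 43,
	};
	uint64_t type, arg1, arg2;

	decode(cover, 0, &type, &arg1, &arg2);
	assert((type & KCOV_CMP_SIZE_MASK) == 6);  /* KCOV_CMP_SIZE8 */
	assert(type & KCOV_CMP_CONST);
	decode(cover, 1, &type, &arg1, &arg2);
	assert((type & KCOV_CMP_SIZE_MASK) == 4);  /* KCOV_CMP_SIZE4 */
	assert(!(type & KCOV_CMP_CONST));
	printf("decoded 2 records OK\n");
	return 0;
}
```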

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 16:23   ` Dmitry Vyukov
@ 2017-08-30 18:23     ` Mark Rutland
  -1 siblings, 0 replies; 21+ messages in thread
From: Mark Rutland @ 2017-08-30 18:23 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: akpm, linux-mm, tchibo, Alexander Popov, Andrey Ryabinin,
	Kees Cook, Vegard Nossum, Quentin Casasnovas, syzkaller,
	linux-kernel

Hi,

On Wed, Aug 30, 2017 at 06:23:29PM +0200, Dmitry Vyukov wrote:
> From: Victor Chibotaru <tchibo@google.com>
> 
> Enables kcov to collect comparison operands from instrumented code.
> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> (currently not available for GCC).

What's needed to build the kernel with Clang these days?

I was under the impression that it still wasn't possible to build arm64
with clang due to a number of missing features (e.g. the %a assembler
output template).

> The comparison operands help a lot in fuzz testing. E.g. they are
> used in Syzkaller to cover the interiors of conditional statements
> with way less attempts and thus make previously unreachable code
> reachable.
> 
> To allow separate collection of coverage and comparison operands two
> different work modes are implemented. Mode selection is now done via
> a KCOV_ENABLE ioctl call with corresponding argument value.
> 
> Signed-off-by: Victor Chibotaru <tchibo@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
> Cc: syzkaller@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Clang instrumentation:
> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow

How stable is this?

The comment at the end says "This interface is a subject to change."

[...]

> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index cd771993f96f..2abce5dfa2df 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -21,13 +21,21 @@
>  #include <linux/kcov.h>
>  #include <asm/setup.h>
>  
> +/* Number of words written per one comparison. */
> +#define KCOV_WORDS_PER_CMP 3

Could you please expand the comment to cover what a "word" is?

Generally, "word" is an ambiguous term, and it's used inconsistently in
this file as of this patch. For comparison coverage, a "word" is assumed
to always be 64-bit (which makes sense given 64-bit comparisons), but
for branch coverage a "word" refers to an unsigned long, which would be
32-bit on a 32-bit platform.

[...]

> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)

Perhaps kcov_mode_is_active()?

That would better describe what is being checked.

> +{
> +	enum kcov_mode mode;
> +
> +	/*
> +	 * We are interested in code coverage as a function of a syscall inputs,
> +	 * so we ignore code executed in interrupts.
> +	 */
> +	if (!t || !in_task())
> +		return false;
> +	mode = READ_ONCE(t->kcov_mode);
> +	/*
> +	 * There is some code that runs in interrupts but for which
> +	 * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> +	 * READ_ONCE()/barrier() effectively provides load-acquire wrt
> +	 * interrupts, there are paired barrier()/WRITE_ONCE() in
> +	 * kcov_ioctl_locked().
> +	 */
> +	barrier();
> +	if (mode != needed_mode)
> +		return false;
> +	return true;

This would be simpler as:

	barrier();
	return mode == needed_mode;

[...]

> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
> +{
> +	struct task_struct *t;
> +	u64 *area;
> +	u64 count, start_index, end_pos, max_pos;
> +
> +	t = current;
> +	if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
> +		return;
> +
> +	/*
> +	 * We write all comparison arguments and types as u64.
> +	 * The buffer was allocated for t->kcov_size unsigned longs.
> +	 */
> +	area = (u64 *)t->kcov_area;
> +	max_pos = t->kcov_size * sizeof(unsigned long);

Perhaps it would make more sense for t->kcov_size to be in bytes, if
different options will have differing record sizes?

> +
> +	count = READ_ONCE(area[0]);
> +
> +	/* Every record is KCOV_WORDS_PER_CMP words. */

As above, please be explicit about what a "word" is, or avoid using
"word" terminology.

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 18:23     ` Mark Rutland
@ 2017-08-30 19:04       ` Alexander Potapenko
  -1 siblings, 0 replies; 21+ messages in thread
From: Alexander Potapenko @ 2017-08-30 19:04 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Dmitry Vyukov, Andrew Morton, Linux Memory Management List,
	Victor Chibotaru, Alexander Popov, Andrey Ryabinin, Kees Cook,
	Vegard Nossum, Quentin Casasnovas, syzkaller, LKML

On Wed, Aug 30, 2017 at 8:23 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> Hi,
>
> On Wed, Aug 30, 2017 at 06:23:29PM +0200, Dmitry Vyukov wrote:
>> From: Victor Chibotaru <tchibo@google.com>
>>
>> Enables kcov to collect comparison operands from instrumented code.
>> This is done by using Clang's -fsanitize=trace-cmp instrumentation
>> (currently not available for GCC).
Hi Mark,

> What's needed to build the kernel with Clang these days?
Shameless plug: https://github.com/ramosian-glider/clang-kernel-build
We are currently one patch away from booting the Clang-built kernel on x86.
The patch in my repo is far from perfect, Josh Poimboeuf has a better
one at https://git.kernel.org/pub/scm/linux/kernel/git/jpoimboe/linux.git/log/?h=ASM_CALL

> I was under the impression that it still wasn't possible to build arm64
> with clang due to a number of missing features (e.g. the %a assembler
> output template).
Sorry, I haven't experimented with ARM much. Nick Desaulniers or Greg
Hackmann should know the details; I'll ask them.
I think the current status is "almost there".
>> The comparison operands help a lot in fuzz testing. E.g. they are
>> used in Syzkaller to cover the interiors of conditional statements
>> with way less attempts and thus make previously unreachable code
>> reachable.
>>
>> To allow separate collection of coverage and comparison operands two
>> different work modes are implemented. Mode selection is now done via
>> a KCOV_ENABLE ioctl call with corresponding argument value.
>>
>> Signed-off-by: Victor Chibotaru <tchibo@google.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Alexander Popov <alex.popov@linux.com>
>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Vegard Nossum <vegard.nossum@oracle.com>
>> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
>> Cc: syzkaller@googlegroups.com
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>> Clang instrumentation:
>> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
>
> How stable is this?
>
> The comment at the end says "This interface is a subject to change."
>
> [...]
>
>> diff --git a/kernel/kcov.c b/kernel/kcov.c
>> index cd771993f96f..2abce5dfa2df 100644
>> --- a/kernel/kcov.c
>> +++ b/kernel/kcov.c
>> @@ -21,13 +21,21 @@
>>  #include <linux/kcov.h>
>>  #include <asm/setup.h>
>>
>> +/* Number of words written per one comparison. */
>> +#define KCOV_WORDS_PER_CMP 3
>
> Could you please expand the comment to cover what a "word" is?
>
> Generally, "word" is an ambiguous term, and it's used inconsistently in
> this file as of this patch. For comparison coverage, a "word" is assumed
> to always be 64-bit (which makes sense given 64-bit comparisons), but
> for branch coverage a "word" refers to an unsigned long, which would be
> 32-bit on a 32-bit platform.
>
> [...]
>
>> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
>
> Perhaps kcov_mode_is_active()?
>
> That would better describe what is being checked.
>
>> +{
>> +     enum kcov_mode mode;
>> +
>> +     /*
>> +      * We are interested in code coverage as a function of a syscall inputs,
>> +      * so we ignore code executed in interrupts.
>> +      */
>> +     if (!t || !in_task())
>> +             return false;
>> +     mode = READ_ONCE(t->kcov_mode);
>> +     /*
>> +      * There is some code that runs in interrupts but for which
>> +      * in_interrupt() returns false (e.g. preempt_schedule_irq()).
>> +      * READ_ONCE()/barrier() effectively provides load-acquire wrt
>> +      * interrupts, there are paired barrier()/WRITE_ONCE() in
>> +      * kcov_ioctl_locked().
>> +      */
>> +     barrier();
>> +     if (mode != needed_mode)
>> +             return false;
>> +     return true;
>
> This would be simpler as:
>
>         barrier();
>         return mode == needed_mode;
>
> [...]
>
>> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
>> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
>> +{
>> +     struct task_struct *t;
>> +     u64 *area;
>> +     u64 count, start_index, end_pos, max_pos;
>> +
>> +     t = current;
>> +     if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
>> +             return;
>> +
>> +     /*
>> +      * We write all comparison arguments and types as u64.
>> +      * The buffer was allocated for t->kcov_size unsigned longs.
>> +      */
>> +     area = (u64 *)t->kcov_area;
>> +     max_pos = t->kcov_size * sizeof(unsigned long);
>
> Perhaps it would make more sense for t->kcov_size to be in bytes, if
> different options will have differing record sizes?
>
>> +
>> +     count = READ_ONCE(area[0]);
>> +
>> +     /* Every record is KCOV_WORDS_PER_CMP words. */
>
> As above, please be explicit about what a "word" is, or avoid using
> "word" terminology.
>
> Thanks,
> Mark.
>
> --
> You received this message because you are subscribed to the Google Groups "syzkaller" group.
> To unsubscribe from this group and stop receiving emails from it, send an email to syzkaller+unsubscribe@googlegroups.com.
> For more options, visit https://groups.google.com/d/optout.



-- 
Alexander Potapenko
Software Engineer

Google Germany GmbH
Erika-Mann-Straße, 33
80636 München

Geschäftsführer: Paul Manicle, Halimah DeLaine Prado
Registergericht und -nummer: Hamburg, HRB 86891
Sitz der Gesellschaft: Hamburg

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 18:23     ` Mark Rutland
@ 2017-08-30 19:08       ` Dmitry Vyukov
  -1 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-08-30 19:08 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrew Morton, linux-mm, Victor Chibotaru, Alexander Popov,
	Andrey Ryabinin, Kees Cook, Vegard Nossum, Quentin Casasnovas,
	syzkaller, LKML

On Wed, Aug 30, 2017 at 8:23 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> Hi,
>
> On Wed, Aug 30, 2017 at 06:23:29PM +0200, Dmitry Vyukov wrote:
>> From: Victor Chibotaru <tchibo@google.com>
>>
>> Enables kcov to collect comparison operands from instrumented code.
>> This is done by using Clang's -fsanitize=trace-cmp instrumentation
>> (currently not available for GCC).
>
> What's needed to build the kernel with Clang these days?
>
> I was under the impression that it still wasn't possible to build arm64
> with clang due to a number of missing features (e.g. the %a assembler
> output template).
>
>> The comparison operands help a lot in fuzz testing. E.g. they are
>> used in Syzkaller to cover the interiors of conditional statements
>> with way less attempts and thus make previously unreachable code
>> reachable.
>>
>> To allow separate collection of coverage and comparison operands two
>> different work modes are implemented. Mode selection is now done via
>> a KCOV_ENABLE ioctl call with corresponding argument value.
>>
>> Signed-off-by: Victor Chibotaru <tchibo@google.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Mark Rutland <mark.rutland@arm.com>
>> Cc: Alexander Popov <alex.popov@linux.com>
>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
>> Cc: Kees Cook <keescook@chromium.org>
>> Cc: Vegard Nossum <vegard.nossum@oracle.com>
>> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
>> Cc: syzkaller@googlegroups.com
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>> Clang instrumentation:
>> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
>
> How stable is this?
>
> The comment at the end says "This interface is a subject to change."


The intention is that this is no longer subject to change (since we
are now using it in the kernel).
I've mailed a change to the docs: https://reviews.llvm.org/D37303

FWIW, there is a patch in flight that adds this instrumentation to GCC:
https://groups.google.com/forum/#!topic/syzkaller/CSLynn6nI-A
It seems to be stalled in the review phase, though.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 19:08       ` Dmitry Vyukov
@ 2017-08-31  9:31         ` Mark Rutland
  -1 siblings, 0 replies; 21+ messages in thread
From: Mark Rutland @ 2017-08-31  9:31 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrew Morton, linux-mm, Victor Chibotaru, Alexander Popov,
	Andrey Ryabinin, Kees Cook, Vegard Nossum, Quentin Casasnovas,
	syzkaller, LKML

On Wed, Aug 30, 2017 at 09:08:43PM +0200, Dmitry Vyukov wrote:
> On Wed, Aug 30, 2017 at 8:23 PM, Mark Rutland <mark.rutland@arm.com> wrote:
> > On Wed, Aug 30, 2017 at 06:23:29PM +0200, Dmitry Vyukov wrote:
> >> From: Victor Chibotaru <tchibo@google.com>
> >>
> >> Enables kcov to collect comparison operands from instrumented code.
> >> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> >> (currently not available for GCC).

> >> Clang instrumentation:
> >> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
> >
> > How stable is this?
> >
> > The comment at the end says "This interface is a subject to change."
> 
> The intention is that this is no longer subject to change (since we
> are now using it in the kernel).
> I've mailed a change to the docs: https://reviews.llvm.org/D37303

Ok; thanks for confirming.

> FWIW, there is a patch in flight that adds this instrumentation to GCC:
> https://groups.google.com/forum/#!topic/syzkaller/CSLynn6nI-A
> It seems to be stalled in the review phase, though.

Let's hope it gets unblocked soon. :)

Thanks,
Mark.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 16:23   ` Dmitry Vyukov
@ 2017-08-31 13:27     ` Andrey Konovalov
  -1 siblings, 0 replies; 21+ messages in thread
From: Andrey Konovalov @ 2017-08-31 13:27 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrew Morton, Linux Memory Management List, Victor Chibotaru,
	Mark Rutland, Alexander Popov, Andrey Ryabinin, Kees Cook,
	Vegard Nossum, Quentin Casasnovas, syzkaller, LKML

On Wed, Aug 30, 2017 at 6:23 PM, 'Dmitry Vyukov' via syzkaller
<syzkaller@googlegroups.com> wrote:
> From: Victor Chibotaru <tchibo@google.com>
>
> Enables kcov to collect comparison operands from instrumented code.
> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> (currently not available for GCC).
>
> The comparison operands help a lot in fuzz testing. E.g. they are
> used in Syzkaller to cover the interiors of conditional statements
> with way less attempts and thus make previously unreachable code
> reachable.
>
> To allow separate collection of coverage and comparison operands two
> different work modes are implemented. Mode selection is now done via
> a KCOV_ENABLE ioctl call with corresponding argument value.
>
> Signed-off-by: Victor Chibotaru <tchibo@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
> Cc: syzkaller@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Clang instrumentation:
> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
> Syzkaller:
> https://github.com/google/syzkaller
> ---
>  include/linux/kcov.h      |  12 ++-
>  include/uapi/linux/kcov.h |  32 ++++++++
>  kernel/kcov.c             | 203 ++++++++++++++++++++++++++++++++++++++--------
>  3 files changed, 209 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/kcov.h b/include/linux/kcov.h
> index 2883ac98c280..87e2a44f1bab 100644
> --- a/include/linux/kcov.h
> +++ b/include/linux/kcov.h
> @@ -7,19 +7,23 @@ struct task_struct;
>
>  #ifdef CONFIG_KCOV
>
> -void kcov_task_init(struct task_struct *t);
> -void kcov_task_exit(struct task_struct *t);
> -
>  enum kcov_mode {
>         /* Coverage collection is not enabled yet. */
>         KCOV_MODE_DISABLED = 0,
> +       /* KCOV was initialized, but tracing mode hasn't been chosen yet. */
> +       KCOV_MODE_INIT = 1,
>         /*
>          * Tracing coverage collection mode.
>          * Covered PCs are collected in a per-task buffer.
>          */
> -       KCOV_MODE_TRACE = 1,
> +       KCOV_MODE_TRACE_PC = 2,
> +       /* Collecting comparison operands mode. */
> +       KCOV_MODE_TRACE_CMP = 3,
>  };
>
> +void kcov_task_init(struct task_struct *t);
> +void kcov_task_exit(struct task_struct *t);
> +
>  #else
>
>  static inline void kcov_task_init(struct task_struct *t) {}
> diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
> index 574e22ec640d..a0bc3e6a5ff7 100644
> --- a/include/uapi/linux/kcov.h
> +++ b/include/uapi/linux/kcov.h
> @@ -7,4 +7,36 @@
>  #define KCOV_ENABLE                    _IO('c', 100)
>  #define KCOV_DISABLE                   _IO('c', 101)
>
> +enum {
> +       /*
> +        * Tracing coverage collection mode.
> +        * Covered PCs are collected in a per-task buffer.
> +        * In new KCOV version the mode is chosen by calling
> +        * ioctl(fd, KCOV_ENABLE, mode). In older versions the mode argument
> +        * was supposed to be 0 in such a call. So, for reasons of backward
> +        * compatibility, we have chosen the value KCOV_TRACE_PC to be 0.
> +        */
> +       KCOV_TRACE_PC = 0,
> +       /* Collecting comparison operands mode. */
> +       KCOV_TRACE_CMP = 1,
> +};
> +
> +/*
> + * Defines the format for the types of collected comparisons.
> + */
> +enum kcov_cmp_type {
> +       /*
> +        * LSB shows whether one of the arguments is a compile-time constant.
> +        */
> +       KCOV_CMP_CONST = 1,
> +       /*
> +        * Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
> +        */
> +       KCOV_CMP_SIZE1 = 0,
> +       KCOV_CMP_SIZE2 = 2,
> +       KCOV_CMP_SIZE4 = 4,
> +       KCOV_CMP_SIZE8 = 6,
> +       KCOV_CMP_SIZE_MASK = 6,
> +};
> +
>  #endif /* _LINUX_KCOV_IOCTLS_H */
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index cd771993f96f..2abce5dfa2df 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -21,13 +21,21 @@
>  #include <linux/kcov.h>
>  #include <asm/setup.h>
>
> +/* Number of words written per one comparison. */
> +#define KCOV_WORDS_PER_CMP 3
> +
>  /*
>   * kcov descriptor (one per opened debugfs file).
>   * State transitions of the descriptor:
>   *  - initial state after open()
>   *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
>   *  - then, mmap() call (several calls are allowed but not useful)
> - *  - then, repeated enable/disable for a task (only one task a time allowed)
> + *  - then, ioctl(KCOV_ENABLE, arg), where arg is
> + *     KCOV_TRACE_PC - to trace only the PCs
> + *     or
> + *     KCOV_TRACE_CMP - to trace only the comparison operands
> + *  - then, ioctl(KCOV_DISABLE) to disable the task.
> + * Enabling/disabling ioctls can be repeated (only one task a time allowed).
>   */
>  struct kcov {
>         /*
> @@ -47,6 +55,30 @@ struct kcov {
>         struct task_struct      *t;
>  };
>
> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
> +{
> +       enum kcov_mode mode;
> +
> +       /*
> +        * We are interested in code coverage as a function of a syscall inputs,
> +        * so we ignore code executed in interrupts.
> +        */
> +       if (!t || !in_task())
> +               return false;
> +       mode = READ_ONCE(t->kcov_mode);
> +       /*
> +        * There is some code that runs in interrupts but for which
> +        * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> +        * READ_ONCE()/barrier() effectively provides load-acquire wrt
> +        * interrupts, there are paired barrier()/WRITE_ONCE() in
> +        * kcov_ioctl_locked().
> +        */
> +       barrier();
> +       if (mode != needed_mode)
> +               return false;
> +       return true;
> +}
> +
>  /*
>   * Entry point from instrumented code.
>   * This is called once per basic-block/edge.
> @@ -54,44 +86,136 @@ struct kcov {
>  void notrace __sanitizer_cov_trace_pc(void)
>  {
>         struct task_struct *t;
> -       enum kcov_mode mode;
> +       unsigned long *area;
> +       unsigned long ip = _RET_IP_;
> +       unsigned long pos;
>
>         t = current;
> -       /*
> -        * We are interested in code coverage as a function of a syscall inputs,
> -        * so we ignore code executed in interrupts.
> -        */
> -       if (!t || !in_task())
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
>                 return;
> -       mode = READ_ONCE(t->kcov_mode);
> -       if (mode == KCOV_MODE_TRACE) {
> -               unsigned long *area;
> -               unsigned long pos;
> -               unsigned long ip = _RET_IP_;
>
>  #ifdef CONFIG_RANDOMIZE_BASE
> -               ip -= kaslr_offset();
> +       ip -= kaslr_offset();
>  #endif
>
> -               /*
> -                * There is some code that runs in interrupts but for which
> -                * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> -                * READ_ONCE()/barrier() effectively provides load-acquire wrt
> -                * interrupts, there are paired barrier()/WRITE_ONCE() in
> -                * kcov_ioctl_locked().
> -                */
> -               barrier();
> -               area = t->kcov_area;
> -               /* The first word is number of subsequent PCs. */
> -               pos = READ_ONCE(area[0]) + 1;
> -               if (likely(pos < t->kcov_size)) {
> -                       area[pos] = ip;
> -                       WRITE_ONCE(area[0], pos);
> -               }
> +       area = t->kcov_area;
> +       /* The first word is number of subsequent PCs. */
> +       pos = READ_ONCE(area[0]) + 1;
> +       if (likely(pos < t->kcov_size)) {
> +               area[pos] = ip;
> +               WRITE_ONCE(area[0], pos);
>         }
>  }
>  EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
>
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
> +{
> +       struct task_struct *t;
> +       u64 *area;
> +       u64 count, start_index, end_pos, max_pos;
> +
> +       t = current;
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
> +               return;
> +
> +       /*
> +        * We write all comparison arguments and types as u64.
> +        * The buffer was allocated for t->kcov_size unsigned longs.
> +        */
> +       area = (u64 *)t->kcov_area;
> +       max_pos = t->kcov_size * sizeof(unsigned long);
> +
> +       count = READ_ONCE(area[0]);
> +
> +       /* Every record is KCOV_WORDS_PER_CMP words. */
> +       start_index = 1 + count * KCOV_WORDS_PER_CMP;
> +       end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
> +       if (likely(end_pos <= max_pos)) {
> +               area[start_index] = type;
> +               area[start_index + 1] = arg1;
> +               area[start_index + 2] = arg2;

It might make sense to also store the PC of the comparison here. This
would allow tracing the operands of specific comparisons.

> +               WRITE_ONCE(area[0], count + 1);
> +       }
> +}
> +
> +void notrace __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp1);
> +
> +void notrace __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2);
> +
> +void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp4);
> +
> +void notrace __sanitizer_cov_trace_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp8);
> +
> +void notrace __sanitizer_cov_trace_const_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp1);
> +
> +void notrace __sanitizer_cov_trace_const_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2);
> +
> +void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp4);
> +
> +void notrace __sanitizer_cov_trace_const_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp8);
> +
> +void notrace __sanitizer_cov_trace_switch(u64 val, u64 *cases)
> +{
> +       u64 i;
> +       u64 count = cases[0];
> +       u64 size = cases[1];
> +       u64 type = KCOV_CMP_CONST;
> +
> +       switch (size) {
> +       case 8:
> +               type |= KCOV_CMP_SIZE1;
> +               break;
> +       case 16:
> +               type |= KCOV_CMP_SIZE2;
> +               break;
> +       case 32:
> +               type |= KCOV_CMP_SIZE4;
> +               break;
> +       case 64:
> +               type |= KCOV_CMP_SIZE8;
> +               break;
> +       default:
> +               return;
> +       }
> +       for (i = 0; i < count; i++)
> +               write_comp_data(type, cases[i + 2], val);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
> +#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
> +
>  static void kcov_get(struct kcov *kcov)
>  {
>         atomic_inc(&kcov->refcount);
> @@ -128,6 +252,7 @@ void kcov_task_exit(struct task_struct *t)
>         /* Just to not leave dangling references behind. */
>         kcov_task_init(t);
>         kcov->t = NULL;
> +       kcov->mode = KCOV_MODE_INIT;
>         spin_unlock(&kcov->lock);
>         kcov_put(kcov);
>  }
> @@ -146,7 +271,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>
>         spin_lock(&kcov->lock);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
> +       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
> @@ -175,6 +300,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
>         kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
>         if (!kcov)
>                 return -ENOMEM;
> +       kcov->mode = KCOV_MODE_DISABLED;
>         atomic_set(&kcov->refcount, 1);
>         spin_lock_init(&kcov->lock);
>         filep->private_data = kcov;
> @@ -210,7 +336,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
>                 kcov->size = size;
> -               kcov->mode = KCOV_MODE_TRACE;
> +               kcov->mode = KCOV_MODE_INIT;
>                 return 0;
>         case KCOV_ENABLE:
>                 /*
> @@ -220,17 +346,25 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                  * at task exit or voluntary by KCOV_DISABLE. After that it can
>                  * be enabled for another task.
>                  */
> -               unused = arg;
> -               if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
> -                   kcov->area == NULL)
> +               if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
>                         return -EINVAL;
>                 if (kcov->t != NULL)
>                         return -EBUSY;
> +               if (arg == KCOV_TRACE_PC)
> +                       kcov->mode = KCOV_MODE_TRACE_PC;
> +               else if (arg == KCOV_TRACE_CMP)
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +                       kcov->mode = KCOV_MODE_TRACE_CMP;
> +#else
> +               return -ENOTSUPP;
> +#endif
> +               else
> +                       return -EINVAL;
>                 t = current;
>                 /* Cache in task struct for performance. */
>                 t->kcov_size = kcov->size;
>                 t->kcov_area = kcov->area;
> -               /* See comment in __sanitizer_cov_trace_pc(). */
> +               /* See comment in check_kcov_mode(). */
>                 barrier();
>                 WRITE_ONCE(t->kcov_mode, kcov->mode);
>                 t->kcov = kcov;
> @@ -248,6 +382,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                         return -EINVAL;
>                 kcov_task_init(t);
>                 kcov->t = NULL;
> +               kcov->mode = KCOV_MODE_INIT;
>                 kcov_put(kcov);
>                 return 0;
>         default:
> --
> 2.14.1.581.gf28d330327-goog
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
@ 2017-08-31 13:27     ` Andrey Konovalov
  0 siblings, 0 replies; 21+ messages in thread
From: Andrey Konovalov @ 2017-08-31 13:27 UTC (permalink / raw)
  To: Dmitry Vyukov
  Cc: Andrew Morton, Linux Memory Management List, Victor Chibotaru,
	Mark Rutland, Alexander Popov, Andrey Ryabinin, Kees Cook,
	Vegard Nossum, Quentin Casasnovas, syzkaller, LKML

On Wed, Aug 30, 2017 at 6:23 PM, 'Dmitry Vyukov' via syzkaller
<syzkaller@googlegroups.com> wrote:
> From: Victor Chibotaru <tchibo@google.com>
>
> Enables kcov to collect comparison operands from instrumented code.
> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> (currently not available for GCC).
>
> The comparison operands help a lot in fuzz testing. E.g. they are
> used in Syzkaller to cover the interiors of conditional statements
> with way less attempts and thus make previously unreachable code
> reachable.
>
> To allow separate collection of coverage and comparison operands two
> different work modes are implemented. Mode selection is now done via
> a KCOV_ENABLE ioctl call with corresponding argument value.
>
> Signed-off-by: Victor Chibotaru <tchibo@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
> Cc: syzkaller@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Clang instrumentation:
> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
> Syzkaller:
> https://github.com/google/syzkaller
> ---
>  include/linux/kcov.h      |  12 ++-
>  include/uapi/linux/kcov.h |  32 ++++++++
>  kernel/kcov.c             | 203 ++++++++++++++++++++++++++++++++++++++--------
>  3 files changed, 209 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/kcov.h b/include/linux/kcov.h
> index 2883ac98c280..87e2a44f1bab 100644
> --- a/include/linux/kcov.h
> +++ b/include/linux/kcov.h
> @@ -7,19 +7,23 @@ struct task_struct;
>
>  #ifdef CONFIG_KCOV
>
> -void kcov_task_init(struct task_struct *t);
> -void kcov_task_exit(struct task_struct *t);
> -
>  enum kcov_mode {
>         /* Coverage collection is not enabled yet. */
>         KCOV_MODE_DISABLED = 0,
> +       /* KCOV was initialized, but tracing mode hasn't been chosen yet. */
> +       KCOV_MODE_INIT = 1,
>         /*
>          * Tracing coverage collection mode.
>          * Covered PCs are collected in a per-task buffer.
>          */
> -       KCOV_MODE_TRACE = 1,
> +       KCOV_MODE_TRACE_PC = 2,
> +       /* Collecting comparison operands mode. */
> +       KCOV_MODE_TRACE_CMP = 3,
>  };
>
> +void kcov_task_init(struct task_struct *t);
> +void kcov_task_exit(struct task_struct *t);
> +
>  #else
>
>  static inline void kcov_task_init(struct task_struct *t) {}
> diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
> index 574e22ec640d..a0bc3e6a5ff7 100644
> --- a/include/uapi/linux/kcov.h
> +++ b/include/uapi/linux/kcov.h
> @@ -7,4 +7,36 @@
>  #define KCOV_ENABLE                    _IO('c', 100)
>  #define KCOV_DISABLE                   _IO('c', 101)
>
> +enum {
> +       /*
> +        * Tracing coverage collection mode.
> +        * Covered PCs are collected in a per-task buffer.
> +        * In the new KCOV version the mode is chosen by calling
> +        * ioctl(fd, KCOV_ENABLE, mode). In older versions the mode argument
> +        * was supposed to be 0 in such a call. So, for reasons of backward
> +        * compatibility, we have chosen the value KCOV_TRACE_PC to be 0.
> +        */
> +       KCOV_TRACE_PC = 0,
> +       /* Collecting comparison operands mode. */
> +       KCOV_TRACE_CMP = 1,
> +};
> +
> +/*
> + * Defines the format for the types of collected comparisons.
> + */
> +enum kcov_cmp_type {
> +       /*
> +        * LSB shows whether one of the arguments is a compile-time constant.
> +        */
> +       KCOV_CMP_CONST = 1,
> +       /*
> +        * Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
> +        */
> +       KCOV_CMP_SIZE1 = 0,
> +       KCOV_CMP_SIZE2 = 2,
> +       KCOV_CMP_SIZE4 = 4,
> +       KCOV_CMP_SIZE8 = 6,
> +       KCOV_CMP_SIZE_MASK = 6,
> +};
> +
>  #endif /* _LINUX_KCOV_IOCTLS_H */
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index cd771993f96f..2abce5dfa2df 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -21,13 +21,21 @@
>  #include <linux/kcov.h>
>  #include <asm/setup.h>
>
> +/* Number of words written per one comparison. */
> +#define KCOV_WORDS_PER_CMP 3
> +
>  /*
>   * kcov descriptor (one per opened debugfs file).
>   * State transitions of the descriptor:
>   *  - initial state after open()
>   *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
>   *  - then, mmap() call (several calls are allowed but not useful)
> - *  - then, repeated enable/disable for a task (only one task a time allowed)
> + *  - then, ioctl(KCOV_ENABLE, arg), where arg is
> + *     KCOV_TRACE_PC - to trace only the PCs
> + *     or
> + *     KCOV_TRACE_CMP - to trace only the comparison operands
> + *  - then, ioctl(KCOV_DISABLE) to disable the task.
> + * Enabling/disabling ioctls can be repeated (only one task at a time allowed).
>   */
>  struct kcov {
>         /*
> @@ -47,6 +55,30 @@ struct kcov {
>         struct task_struct      *t;
>  };
>
> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
> +{
> +       enum kcov_mode mode;
> +
> +       /*
> +        * We are interested in code coverage as a function of syscall inputs,
> +        * so we ignore code executed in interrupts.
> +        */
> +       if (!t || !in_task())
> +               return false;
> +       mode = READ_ONCE(t->kcov_mode);
> +       /*
> +        * There is some code that runs in interrupts but for which
> +        * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> +        * READ_ONCE()/barrier() effectively provides load-acquire wrt
> +        * interrupts, there are paired barrier()/WRITE_ONCE() in
> +        * kcov_ioctl_locked().
> +        */
> +       barrier();
> +       if (mode != needed_mode)
> +               return false;
> +       return true;
> +}
> +
>  /*
>   * Entry point from instrumented code.
>   * This is called once per basic-block/edge.
> @@ -54,44 +86,136 @@ struct kcov {
>  void notrace __sanitizer_cov_trace_pc(void)
>  {
>         struct task_struct *t;
> -       enum kcov_mode mode;
> +       unsigned long *area;
> +       unsigned long ip = _RET_IP_;
> +       unsigned long pos;
>
>         t = current;
> -       /*
> -        * We are interested in code coverage as a function of a syscall inputs,
> -        * so we ignore code executed in interrupts.
> -        */
> -       if (!t || !in_task())
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
>                 return;
> -       mode = READ_ONCE(t->kcov_mode);
> -       if (mode == KCOV_MODE_TRACE) {
> -               unsigned long *area;
> -               unsigned long pos;
> -               unsigned long ip = _RET_IP_;
>
>  #ifdef CONFIG_RANDOMIZE_BASE
> -               ip -= kaslr_offset();
> +       ip -= kaslr_offset();
>  #endif
>
> -               /*
> -                * There is some code that runs in interrupts but for which
> -                * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> -                * READ_ONCE()/barrier() effectively provides load-acquire wrt
> -                * interrupts, there are paired barrier()/WRITE_ONCE() in
> -                * kcov_ioctl_locked().
> -                */
> -               barrier();
> -               area = t->kcov_area;
> -               /* The first word is number of subsequent PCs. */
> -               pos = READ_ONCE(area[0]) + 1;
> -               if (likely(pos < t->kcov_size)) {
> -                       area[pos] = ip;
> -                       WRITE_ONCE(area[0], pos);
> -               }
> +       area = t->kcov_area;
> +       /* The first word is number of subsequent PCs. */
> +       pos = READ_ONCE(area[0]) + 1;
> +       if (likely(pos < t->kcov_size)) {
> +               area[pos] = ip;
> +               WRITE_ONCE(area[0], pos);
>         }
>  }
>  EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
>
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
> +{
> +       struct task_struct *t;
> +       u64 *area;
> +       u64 count, start_index, end_pos, max_pos;
> +
> +       t = current;
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
> +               return;
> +
> +       /*
> +        * We write all comparison arguments and types as u64.
> +        * The buffer was allocated for t->kcov_size unsigned longs.
> +        */
> +       area = (u64 *)t->kcov_area;
> +       max_pos = t->kcov_size * sizeof(unsigned long);
> +
> +       count = READ_ONCE(area[0]);
> +
> +       /* Every record is KCOV_WORDS_PER_CMP words. */
> +       start_index = 1 + count * KCOV_WORDS_PER_CMP;
> +       end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
> +       if (likely(end_pos <= max_pos)) {
> +               area[start_index] = type;
> +               area[start_index + 1] = arg1;
> +               area[start_index + 2] = arg2;

It might make sense to also store the PC of the comparison here. That
would allow tracing the operands of specific comparisons.

> +               WRITE_ONCE(area[0], count + 1);
> +       }
> +}
> +
> +void notrace __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp1);
> +
> +void notrace __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2);
> +
> +void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp4);
> +
> +void notrace __sanitizer_cov_trace_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp8);
> +
> +void notrace __sanitizer_cov_trace_const_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp1);
> +
> +void notrace __sanitizer_cov_trace_const_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2);
> +
> +void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp4);
> +
> +void notrace __sanitizer_cov_trace_const_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp8);
> +
> +void notrace __sanitizer_cov_trace_switch(u64 val, u64 *cases)
> +{
> +       u64 i;
> +       u64 count = cases[0];
> +       u64 size = cases[1];
> +       u64 type = KCOV_CMP_CONST;
> +
> +       switch (size) {
> +       case 8:
> +               type |= KCOV_CMP_SIZE1;
> +               break;
> +       case 16:
> +               type |= KCOV_CMP_SIZE2;
> +               break;
> +       case 32:
> +               type |= KCOV_CMP_SIZE4;
> +               break;
> +       case 64:
> +               type |= KCOV_CMP_SIZE8;
> +               break;
> +       default:
> +               return;
> +       }
> +       for (i = 0; i < count; i++)
> +               write_comp_data(type, cases[i + 2], val);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
> +#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
> +
>  static void kcov_get(struct kcov *kcov)
>  {
>         atomic_inc(&kcov->refcount);
> @@ -128,6 +252,7 @@ void kcov_task_exit(struct task_struct *t)
>         /* Just to not leave dangling references behind. */
>         kcov_task_init(t);
>         kcov->t = NULL;
> +       kcov->mode = KCOV_MODE_INIT;
>         spin_unlock(&kcov->lock);
>         kcov_put(kcov);
>  }
> @@ -146,7 +271,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>
>         spin_lock(&kcov->lock);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
> +       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
> @@ -175,6 +300,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
>         kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
>         if (!kcov)
>                 return -ENOMEM;
> +       kcov->mode = KCOV_MODE_DISABLED;
>         atomic_set(&kcov->refcount, 1);
>         spin_lock_init(&kcov->lock);
>         filep->private_data = kcov;
> @@ -210,7 +336,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
>                 kcov->size = size;
> -               kcov->mode = KCOV_MODE_TRACE;
> +               kcov->mode = KCOV_MODE_INIT;
>                 return 0;
>         case KCOV_ENABLE:
>                 /*
> @@ -220,17 +346,25 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                  * at task exit or voluntary by KCOV_DISABLE. After that it can
>                  * be enabled for another task.
>                  */
> -               unused = arg;
> -               if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
> -                   kcov->area == NULL)
> +               if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
>                         return -EINVAL;
>                 if (kcov->t != NULL)
>                         return -EBUSY;
> +               if (arg == KCOV_TRACE_PC)
> +                       kcov->mode = KCOV_MODE_TRACE_PC;
> +               else if (arg == KCOV_TRACE_CMP)
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +                       kcov->mode = KCOV_MODE_TRACE_CMP;
> +#else
> +                       return -ENOTSUPP;
> +#endif
> +               else
> +                       return -EINVAL;
>                 t = current;
>                 /* Cache in task struct for performance. */
>                 t->kcov_size = kcov->size;
>                 t->kcov_area = kcov->area;
> -               /* See comment in __sanitizer_cov_trace_pc(). */
> +               /* See comment in check_kcov_mode(). */
>                 barrier();
>                 WRITE_ONCE(t->kcov_mode, kcov->mode);
>                 t->kcov = kcov;
> @@ -248,6 +382,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                         return -EINVAL;
>                 kcov_task_init(t);
>                 kcov->t = NULL;
> +               kcov->mode = KCOV_MODE_INIT;
>                 kcov_put(kcov);
>                 return 0;
>         default:
> --
> 2.14.1.581.gf28d330327-goog
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
       [not found]   ` <CAPZ9YJZUPYs8nbwG9aO1uCfr7vPY7PNr1WPpvOxP8d+vkMiDJw@mail.gmail.com>
@ 2017-09-12 17:41       ` Dmitry Vyukov
  0 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-09-12 17:41 UTC (permalink / raw)
  To: Andrea Parri, Andrew Morton, LKML, Mark Rutland,
	Alexander Potapenko, Kees Cook, Vegard Nossum,
	Quentin Casasnovas, syzkaller, linux-mm

On Thu, Sep 7, 2017 at 6:29 PM, Andrea Parri <parri.andrea@gmail.com> wrote:
> Hi Dmitry,
>
>
> On Aug 30, 2017 6:26 PM, "Dmitry Vyukov" <dvyukov@google.com> wrote:
>
> From: Victor Chibotaru <tchibo@google.com>
>
> Enables kcov to collect comparison operands from instrumented code.
> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> (currently not available for GCC).
>
> The comparison operands help a lot in fuzz testing. E.g. they are
> used in syzkaller to cover the interiors of conditional statements
> with far fewer attempts, thus making previously unreachable code
> reachable.
>
> To allow separate collection of coverage and comparison operands two
> different work modes are implemented. Mode selection is now done via
> a KCOV_ENABLE ioctl call with corresponding argument value.
>
> Signed-off-by: Victor Chibotaru <tchibo@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
> Cc: syzkaller@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Clang instrumentation:
> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
> Syzkaller:
> https://github.com/google/syzkaller
> ---
>  include/linux/kcov.h      |  12 ++-
>  include/uapi/linux/kcov.h |  32 ++++++++
>  kernel/kcov.c             | 203 ++++++++++++++++++++++++++++++++++++++--------
>  3 files changed, 209 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/kcov.h b/include/linux/kcov.h
> index 2883ac98c280..87e2a44f1bab 100644
> --- a/include/linux/kcov.h
> +++ b/include/linux/kcov.h
> @@ -7,19 +7,23 @@ struct task_struct;
>
>  #ifdef CONFIG_KCOV
>
> -void kcov_task_init(struct task_struct *t);
> -void kcov_task_exit(struct task_struct *t);
> -
>  enum kcov_mode {
>         /* Coverage collection is not enabled yet. */
>         KCOV_MODE_DISABLED = 0,
> +       /* KCOV was initialized, but tracing mode hasn't been chosen yet. */
> +       KCOV_MODE_INIT = 1,
>         /*
>          * Tracing coverage collection mode.
>          * Covered PCs are collected in a per-task buffer.
>          */
> -       KCOV_MODE_TRACE = 1,
> +       KCOV_MODE_TRACE_PC = 2,
> +       /* Collecting comparison operands mode. */
> +       KCOV_MODE_TRACE_CMP = 3,
>  };
>
> +void kcov_task_init(struct task_struct *t);
> +void kcov_task_exit(struct task_struct *t);
> +
>  #else
>
>  static inline void kcov_task_init(struct task_struct *t) {}
> diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
> index 574e22ec640d..a0bc3e6a5ff7 100644
> --- a/include/uapi/linux/kcov.h
> +++ b/include/uapi/linux/kcov.h
> @@ -7,4 +7,36 @@
>  #define KCOV_ENABLE                    _IO('c', 100)
>  #define KCOV_DISABLE                   _IO('c', 101)
>
> +enum {
> +       /*
> +        * Tracing coverage collection mode.
> +        * Covered PCs are collected in a per-task buffer.
> +        * In the new KCOV version the mode is chosen by calling
> +        * ioctl(fd, KCOV_ENABLE, mode). In older versions the mode argument
> +        * was supposed to be 0 in such a call. So, for reasons of backward
> +        * compatibility, we have chosen the value KCOV_TRACE_PC to be 0.
> +        */
> +       KCOV_TRACE_PC = 0,
> +       /* Collecting comparison operands mode. */
> +       KCOV_TRACE_CMP = 1,
> +};
> +
> +/*
> + * Defines the format for the types of collected comparisons.
> + */
> +enum kcov_cmp_type {
> +       /*
> +        * LSB shows whether one of the arguments is a compile-time constant.
> +        */
> +       KCOV_CMP_CONST = 1,
> +       /*
> +        * Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
> +        */
> +       KCOV_CMP_SIZE1 = 0,
> +       KCOV_CMP_SIZE2 = 2,
> +       KCOV_CMP_SIZE4 = 4,
> +       KCOV_CMP_SIZE8 = 6,
> +       KCOV_CMP_SIZE_MASK = 6,
> +};
> +
>  #endif /* _LINUX_KCOV_IOCTLS_H */
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index cd771993f96f..2abce5dfa2df 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -21,13 +21,21 @@
>  #include <linux/kcov.h>
>  #include <asm/setup.h>
>
> +/* Number of words written per one comparison. */
> +#define KCOV_WORDS_PER_CMP 3
> +
>  /*
>   * kcov descriptor (one per opened debugfs file).
>   * State transitions of the descriptor:
>   *  - initial state after open()
>   *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
>   *  - then, mmap() call (several calls are allowed but not useful)
> - *  - then, repeated enable/disable for a task (only one task a time allowed)
> + *  - then, ioctl(KCOV_ENABLE, arg), where arg is
> + *     KCOV_TRACE_PC - to trace only the PCs
> + *     or
> + *     KCOV_TRACE_CMP - to trace only the comparison operands
> + *  - then, ioctl(KCOV_DISABLE) to disable the task.
> + * Enabling/disabling ioctls can be repeated (only one task at a time allowed).
>   */
>  struct kcov {
>         /*
> @@ -47,6 +55,30 @@ struct kcov {
>         struct task_struct      *t;
>  };
>
> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
> +{
> +       enum kcov_mode mode;
> +
> +       /*
> +        * We are interested in code coverage as a function of syscall inputs,
> +        * so we ignore code executed in interrupts.
> +        */
> +       if (!t || !in_task())
> +               return false;
> +       mode = READ_ONCE(t->kcov_mode);
> +       /*
> +        * There is some code that runs in interrupts but for which
> +        * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> +        * READ_ONCE()/barrier() effectively provides load-acquire wrt
> +        * interrupts, there are paired barrier()/WRITE_ONCE() in
> +        * kcov_ioctl_locked().
>
>
> Could you elaborate on this: what do you mean by "... provides load-acquire
> _wrt_ interrupts?" and why so?
>
>   Andrea


Without this barrier we can get the following situation. Suppose the
ioctl sets t->kcov_mode but has not yet set t->kcov_area (and/or
__sanitizer_cov_trace_pc reads the area ahead of the mode), and then an
interrupt arrives on this task for which in_task() returns true
(observed for preempt_schedule_irq()). The interrupt can then conclude
that tracing is enabled and write to the area, which is still NULL,
causing a panic.
We effectively need an acquire/release pair here, but only between the
task and an interrupt running on that task. So we don't need hardware
barriers, only compiler ordering.




> +        */
> +       barrier();
> +       if (mode != needed_mode)
> +               return false;
> +       return true;
> +}
> +
>  /*
>   * Entry point from instrumented code.
>   * This is called once per basic-block/edge.
> @@ -54,44 +86,136 @@ struct kcov {
>  void notrace __sanitizer_cov_trace_pc(void)
>  {
>         struct task_struct *t;
> -       enum kcov_mode mode;
> +       unsigned long *area;
> +       unsigned long ip = _RET_IP_;
> +       unsigned long pos;
>
>         t = current;
> -       /*
> -        * We are interested in code coverage as a function of a syscall inputs,
> -        * so we ignore code executed in interrupts.
> -        */
> -       if (!t || !in_task())
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
>                 return;
> -       mode = READ_ONCE(t->kcov_mode);
> -       if (mode == KCOV_MODE_TRACE) {
> -               unsigned long *area;
> -               unsigned long pos;
> -               unsigned long ip = _RET_IP_;
>
>  #ifdef CONFIG_RANDOMIZE_BASE
> -               ip -= kaslr_offset();
> +       ip -= kaslr_offset();
>  #endif
>
> -               /*
> -                * There is some code that runs in interrupts but for which
> -                * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> -                * READ_ONCE()/barrier() effectively provides load-acquire wrt
> -                * interrupts, there are paired barrier()/WRITE_ONCE() in
> -                * kcov_ioctl_locked().
> -                */
> -               barrier();
> -               area = t->kcov_area;
> -               /* The first word is number of subsequent PCs. */
> -               pos = READ_ONCE(area[0]) + 1;
> -               if (likely(pos < t->kcov_size)) {
> -                       area[pos] = ip;
> -                       WRITE_ONCE(area[0], pos);
> -               }
> +       area = t->kcov_area;
> +       /* The first word is number of subsequent PCs. */
> +       pos = READ_ONCE(area[0]) + 1;
> +       if (likely(pos < t->kcov_size)) {
> +               area[pos] = ip;
> +               WRITE_ONCE(area[0], pos);
>         }
>  }
>  EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
>
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
> +{
> +       struct task_struct *t;
> +       u64 *area;
> +       u64 count, start_index, end_pos, max_pos;
> +
> +       t = current;
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
> +               return;
> +
> +       /*
> +        * We write all comparison arguments and types as u64.
> +        * The buffer was allocated for t->kcov_size unsigned longs.
> +        */
> +       area = (u64 *)t->kcov_area;
> +       max_pos = t->kcov_size * sizeof(unsigned long);
> +
> +       count = READ_ONCE(area[0]);
> +
> +       /* Every record is KCOV_WORDS_PER_CMP words. */
> +       start_index = 1 + count * KCOV_WORDS_PER_CMP;
> +       end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
> +       if (likely(end_pos <= max_pos)) {
> +               area[start_index] = type;
> +               area[start_index + 1] = arg1;
> +               area[start_index + 2] = arg2;
> +               WRITE_ONCE(area[0], count + 1);
> +       }
> +}
> +
> +void notrace __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp1);
> +
> +void notrace __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2);
> +
> +void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp4);
> +
> +void notrace __sanitizer_cov_trace_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp8);
> +
> +void notrace __sanitizer_cov_trace_const_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp1);
> +
> +void notrace __sanitizer_cov_trace_const_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2);
> +
> +void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp4);
> +
> +void notrace __sanitizer_cov_trace_const_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp8);
> +
> +void notrace __sanitizer_cov_trace_switch(u64 val, u64 *cases)
> +{
> +       u64 i;
> +       u64 count = cases[0];
> +       u64 size = cases[1];
> +       u64 type = KCOV_CMP_CONST;
> +
> +       switch (size) {
> +       case 8:
> +               type |= KCOV_CMP_SIZE1;
> +               break;
> +       case 16:
> +               type |= KCOV_CMP_SIZE2;
> +               break;
> +       case 32:
> +               type |= KCOV_CMP_SIZE4;
> +               break;
> +       case 64:
> +               type |= KCOV_CMP_SIZE8;
> +               break;
> +       default:
> +               return;
> +       }
> +       for (i = 0; i < count; i++)
> +               write_comp_data(type, cases[i + 2], val);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
> +#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
> +
>  static void kcov_get(struct kcov *kcov)
>  {
>         atomic_inc(&kcov->refcount);
> @@ -128,6 +252,7 @@ void kcov_task_exit(struct task_struct *t)
>         /* Just to not leave dangling references behind. */
>         kcov_task_init(t);
>         kcov->t = NULL;
> +       kcov->mode = KCOV_MODE_INIT;
>         spin_unlock(&kcov->lock);
>         kcov_put(kcov);
>  }
> @@ -146,7 +271,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>
>         spin_lock(&kcov->lock);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
> +       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
> @@ -175,6 +300,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
>         kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
>         if (!kcov)
>                 return -ENOMEM;
> +       kcov->mode = KCOV_MODE_DISABLED;
>         atomic_set(&kcov->refcount, 1);
>         spin_lock_init(&kcov->lock);
>         filep->private_data = kcov;
> @@ -210,7 +336,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
>                 kcov->size = size;
> -               kcov->mode = KCOV_MODE_TRACE;
> +               kcov->mode = KCOV_MODE_INIT;
>                 return 0;
>         case KCOV_ENABLE:
>                 /*
> @@ -220,17 +346,25 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                  * at task exit or voluntary by KCOV_DISABLE. After that it can
>                  * be enabled for another task.
>                  */
> -               unused = arg;
> -               if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
> -                   kcov->area == NULL)
> +               if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
>                         return -EINVAL;
>                 if (kcov->t != NULL)
>                         return -EBUSY;
> +               if (arg == KCOV_TRACE_PC)
> +                       kcov->mode = KCOV_MODE_TRACE_PC;
> +               else if (arg == KCOV_TRACE_CMP)
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +                       kcov->mode = KCOV_MODE_TRACE_CMP;
> +#else
> +                       return -ENOTSUPP;
> +#endif
> +               else
> +                       return -EINVAL;
>                 t = current;
>                 /* Cache in task struct for performance. */
>                 t->kcov_size = kcov->size;
>                 t->kcov_area = kcov->area;
> -               /* See comment in __sanitizer_cov_trace_pc(). */
> +               /* See comment in check_kcov_mode(). */
>                 barrier();
>                 WRITE_ONCE(t->kcov_mode, kcov->mode);
>                 t->kcov = kcov;
> @@ -248,6 +382,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                         return -EINVAL;
>                 kcov_task_init(t);
>                 kcov->t = NULL;
> +               kcov->mode = KCOV_MODE_INIT;
>                 kcov_put(kcov);
>                 return 0;
>         default:
> --
> 2.14.1.581.gf28d330327-goog
>
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
@ 2017-09-12 17:41       ` Dmitry Vyukov
  0 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-09-12 17:41 UTC (permalink / raw)
  To: Andrea Parri, Andrew Morton, LKML, Mark Rutland,
	Alexander Potapenko, Kees Cook, Vegard Nossum,
	Quentin Casasnovas, syzkaller, linux-mm

On Thu, Sep 7, 2017 at 6:29 PM, Andrea Parri <parri.andrea@gmail.com> wrote:
> Hi Dmitry,
>
>
> On Aug 30, 2017 6:26 PM, "Dmitry Vyukov" <dvyukov@google.com> wrote:
>
> From: Victor Chibotaru <tchibo@google.com>
>
> Enables kcov to collect comparison operands from instrumented code.
> This is done by using Clang's -fsanitize=trace-cmp instrumentation
> (currently not available for GCC).
>
> The comparison operands help a lot in fuzz testing. E.g. they are
> used in Syzkaller to cover the interiors of conditional statements
> with way less attempts and thus make previously unreachable code
> reachable.
>
> To allow separate collection of coverage and comparison operands two
> different work modes are implemented. Mode selection is now done via
> a KCOV_ENABLE ioctl call with corresponding argument value.
>
> Signed-off-by: Victor Chibotaru <tchibo@google.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Alexander Popov <alex.popov@linux.com>
> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Vegard Nossum <vegard.nossum@oracle.com>
> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
> Cc: syzkaller@googlegroups.com
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> Clang instrumentation:
> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
> Syzkaller:
> https://github.com/google/syzkaller
> ---
>  include/linux/kcov.h      |  12 ++-
>  include/uapi/linux/kcov.h |  32 ++++++++
>  kernel/kcov.c             | 203
> ++++++++++++++++++++++++++++++++++++++--------
>  3 files changed, 209 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/kcov.h b/include/linux/kcov.h
> index 2883ac98c280..87e2a44f1bab 100644
> --- a/include/linux/kcov.h
> +++ b/include/linux/kcov.h
> @@ -7,19 +7,23 @@ struct task_struct;
>
>  #ifdef CONFIG_KCOV
>
> -void kcov_task_init(struct task_struct *t);
> -void kcov_task_exit(struct task_struct *t);
> -
>  enum kcov_mode {
>         /* Coverage collection is not enabled yet. */
>         KCOV_MODE_DISABLED = 0,
> +       /* KCOV was initialized, but tracing mode hasn't been chosen yet. */
> +       KCOV_MODE_INIT = 1,
>         /*
>          * Tracing coverage collection mode.
>          * Covered PCs are collected in a per-task buffer.
>          */
> -       KCOV_MODE_TRACE = 1,
> +       KCOV_MODE_TRACE_PC = 2,
> +       /* Collecting comparison operands mode. */
> +       KCOV_MODE_TRACE_CMP = 3,
>  };
>
> +void kcov_task_init(struct task_struct *t);
> +void kcov_task_exit(struct task_struct *t);
> +
>  #else
>
>  static inline void kcov_task_init(struct task_struct *t) {}
> diff --git a/include/uapi/linux/kcov.h b/include/uapi/linux/kcov.h
> index 574e22ec640d..a0bc3e6a5ff7 100644
> --- a/include/uapi/linux/kcov.h
> +++ b/include/uapi/linux/kcov.h
> @@ -7,4 +7,36 @@
>  #define KCOV_ENABLE                    _IO('c', 100)
>  #define KCOV_DISABLE                   _IO('c', 101)
>
> +enum {
> +       /*
> +        * Tracing coverage collection mode.
> +        * Covered PCs are collected in a per-task buffer.
> +        * In new KCOV version the mode is chosen by calling
> +        * ioctl(fd, KCOV_ENABLE, mode). In older versions the mode argument
> +        * was supposed to be 0 in such a call. So, for reasons of backward
> +        * compatibility, we have chosen the value KCOV_TRACE_PC to be 0.
> +        */
> +       KCOV_TRACE_PC = 0,
> +       /* Collecting comparison operands mode. */
> +       KCOV_TRACE_CMP = 1,
> +};
> +
> +/*
> + * Defines the format for the types of collected comparisons.
> + */
> +enum kcov_cmp_type {
> +       /*
> +        * LSB shows whether one of the arguments is a compile-time constant.
> +        */
> +       KCOV_CMP_CONST = 1,
> +       /*
> +        * Second and third LSBs contain the size of arguments (1/2/4/8 bytes).
> +        */
> +       KCOV_CMP_SIZE1 = 0,
> +       KCOV_CMP_SIZE2 = 2,
> +       KCOV_CMP_SIZE4 = 4,
> +       KCOV_CMP_SIZE8 = 6,
> +       KCOV_CMP_SIZE_MASK = 6,
> +};
> +
>  #endif /* _LINUX_KCOV_IOCTLS_H */
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index cd771993f96f..2abce5dfa2df 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -21,13 +21,21 @@
>  #include <linux/kcov.h>
>  #include <asm/setup.h>
>
> +/* Number of words written per one comparison. */
> +#define KCOV_WORDS_PER_CMP 3
> +
>  /*
>   * kcov descriptor (one per opened debugfs file).
>   * State transitions of the descriptor:
>   *  - initial state after open()
>   *  - then there must be a single ioctl(KCOV_INIT_TRACE) call
>   *  - then, mmap() call (several calls are allowed but not useful)
> - *  - then, repeated enable/disable for a task (only one task a time allowed)
> + *  - then, ioctl(KCOV_ENABLE, arg), where arg is
> + *     KCOV_TRACE_PC - to trace only the PCs
> + *     or
> + *     KCOV_TRACE_CMP - to trace only the comparison operands
> + *  - then, ioctl(KCOV_DISABLE) to disable the task.
> + * Enabling/disabling ioctls can be repeated (only one task at a time is allowed).
>   */
>  struct kcov {
>         /*
> @@ -47,6 +55,30 @@ struct kcov {
>         struct task_struct      *t;
>  };
>
> +static bool check_kcov_mode(enum kcov_mode needed_mode, struct task_struct *t)
> +{
> +       enum kcov_mode mode;
> +
> +       /*
> +        * We are interested in code coverage as a function of syscall inputs,
> +        * so we ignore code executed in interrupts.
> +        */
> +       if (!t || !in_task())
> +               return false;
> +       mode = READ_ONCE(t->kcov_mode);
> +       /*
> +        * There is some code that runs in interrupts but for which
> +        * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> +        * READ_ONCE()/barrier() effectively provides load-acquire wrt
> +        * interrupts, there are paired barrier()/WRITE_ONCE() in
> +        * kcov_ioctl_locked().
>
>
> Could you elaborate on this: what do you mean by "... provides load-acquire
> _wrt_ interrupts?" and why so?
>
>   Andrea


Without this barrier we can get the following situation.
Consider that ioctl sets t->kcov_mode but does not set t->kcov_area
yet (and/or __sanitizer_cov_trace_pc reads area ahead of mode), then
an interrupt on this task comes but in_task() returns true (observed
for preempt_schedule_irq()). Now the interrupt can decide that tracing
is enabled and write to area (which is NULL) which will cause panic
and crash.
We effectively need an acquire/release pair here, but only between the
task and an interrupt running on this task. So we don't need hardware
barriers, only compiler ordering.




> +        */
> +       barrier();
> +       if (mode != needed_mode)
> +               return false;
> +       return true;
> +}
> +
>  /*
>   * Entry point from instrumented code.
>   * This is called once per basic-block/edge.
> @@ -54,44 +86,136 @@ struct kcov {
>  void notrace __sanitizer_cov_trace_pc(void)
>  {
>         struct task_struct *t;
> -       enum kcov_mode mode;
> +       unsigned long *area;
> +       unsigned long ip = _RET_IP_;
> +       unsigned long pos;
>
>         t = current;
> -       /*
> -        * We are interested in code coverage as a function of a syscall inputs,
> -        * so we ignore code executed in interrupts.
> -        */
> -       if (!t || !in_task())
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_PC, t))
>                 return;
> -       mode = READ_ONCE(t->kcov_mode);
> -       if (mode == KCOV_MODE_TRACE) {
> -               unsigned long *area;
> -               unsigned long pos;
> -               unsigned long ip = _RET_IP_;
>
>  #ifdef CONFIG_RANDOMIZE_BASE
> -               ip -= kaslr_offset();
> +       ip -= kaslr_offset();
>  #endif
>
> -               /*
> -                * There is some code that runs in interrupts but for which
> -                * in_interrupt() returns false (e.g. preempt_schedule_irq()).
> -                * READ_ONCE()/barrier() effectively provides load-acquire wrt
> -                * interrupts, there are paired barrier()/WRITE_ONCE() in
> -                * kcov_ioctl_locked().
> -                */
> -               barrier();
> -               area = t->kcov_area;
> -               /* The first word is number of subsequent PCs. */
> -               pos = READ_ONCE(area[0]) + 1;
> -               if (likely(pos < t->kcov_size)) {
> -                       area[pos] = ip;
> -                       WRITE_ONCE(area[0], pos);
> -               }
> +       area = t->kcov_area;
> +       /* The first word is number of subsequent PCs. */
> +       pos = READ_ONCE(area[0]) + 1;
> +       if (likely(pos < t->kcov_size)) {
> +               area[pos] = ip;
> +               WRITE_ONCE(area[0], pos);
>         }
>  }
>  EXPORT_SYMBOL(__sanitizer_cov_trace_pc);
>
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +static void write_comp_data(u64 type, u64 arg1, u64 arg2)
> +{
> +       struct task_struct *t;
> +       u64 *area;
> +       u64 count, start_index, end_pos, max_pos;
> +
> +       t = current;
> +       if (!check_kcov_mode(KCOV_MODE_TRACE_CMP, t))
> +               return;
> +
> +       /*
> +        * We write all comparison arguments and types as u64.
> +        * The buffer was allocated for t->kcov_size unsigned longs.
> +        */
> +       area = (u64 *)t->kcov_area;
> +       max_pos = t->kcov_size * sizeof(unsigned long);
> +
> +       count = READ_ONCE(area[0]);
> +
> +       /* Every record is KCOV_WORDS_PER_CMP words. */
> +       start_index = 1 + count * KCOV_WORDS_PER_CMP;
> +       end_pos = (start_index + KCOV_WORDS_PER_CMP) * sizeof(u64);
> +       if (likely(end_pos <= max_pos)) {
> +               area[start_index] = type;
> +               area[start_index + 1] = arg1;
> +               area[start_index + 2] = arg2;
> +               WRITE_ONCE(area[0], count + 1);
> +       }
> +}
> +
> +void notrace __sanitizer_cov_trace_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp1);
> +
> +void notrace __sanitizer_cov_trace_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp2);
> +
> +void notrace __sanitizer_cov_trace_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp4);
> +
> +void notrace __sanitizer_cov_trace_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_cmp8);
> +
> +void notrace __sanitizer_cov_trace_const_cmp1(u8 arg1, u8 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE1 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp1);
> +
> +void notrace __sanitizer_cov_trace_const_cmp2(u16 arg1, u16 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE2 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp2);
> +
> +void notrace __sanitizer_cov_trace_const_cmp4(u32 arg1, u32 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE4 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp4);
> +
> +void notrace __sanitizer_cov_trace_const_cmp8(u64 arg1, u64 arg2)
> +{
> +       write_comp_data(KCOV_CMP_SIZE8 | KCOV_CMP_CONST, arg1, arg2);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_const_cmp8);
> +
> +void notrace __sanitizer_cov_trace_switch(u64 val, u64 *cases)
> +{
> +       u64 i;
> +       u64 count = cases[0];
> +       u64 size = cases[1];
> +       u64 type = KCOV_CMP_CONST;
> +
> +       switch (size) {
> +       case 8:
> +               type |= KCOV_CMP_SIZE1;
> +               break;
> +       case 16:
> +               type |= KCOV_CMP_SIZE2;
> +               break;
> +       case 32:
> +               type |= KCOV_CMP_SIZE4;
> +               break;
> +       case 64:
> +               type |= KCOV_CMP_SIZE8;
> +               break;
> +       default:
> +               return;
> +       }
> +       for (i = 0; i < count; i++)
> +               write_comp_data(type, cases[i + 2], val);
> +}
> +EXPORT_SYMBOL(__sanitizer_cov_trace_switch);
> +#endif /* ifdef CONFIG_KCOV_ENABLE_COMPARISONS */
> +
>  static void kcov_get(struct kcov *kcov)
>  {
>         atomic_inc(&kcov->refcount);
> @@ -128,6 +252,7 @@ void kcov_task_exit(struct task_struct *t)
>         /* Just to not leave dangling references behind. */
>         kcov_task_init(t);
>         kcov->t = NULL;
> +       kcov->mode = KCOV_MODE_INIT;
>         spin_unlock(&kcov->lock);
>         kcov_put(kcov);
>  }
> @@ -146,7 +271,7 @@ static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>
>         spin_lock(&kcov->lock);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode == KCOV_MODE_DISABLED || vma->vm_pgoff != 0 ||
> +       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
> @@ -175,6 +300,7 @@ static int kcov_open(struct inode *inode, struct file *filep)
>         kcov = kzalloc(sizeof(*kcov), GFP_KERNEL);
>         if (!kcov)
>                 return -ENOMEM;
> +       kcov->mode = KCOV_MODE_DISABLED;
>         atomic_set(&kcov->refcount, 1);
>         spin_lock_init(&kcov->lock);
>         filep->private_data = kcov;
> @@ -210,7 +336,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
>                 kcov->size = size;
> -               kcov->mode = KCOV_MODE_TRACE;
> +               kcov->mode = KCOV_MODE_INIT;
>                 return 0;
>         case KCOV_ENABLE:
>                 /*
> @@ -220,17 +346,25 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                  * at task exit or voluntary by KCOV_DISABLE. After that it can
>                  * be enabled for another task.
>                  */
> -               unused = arg;
> -               if (unused != 0 || kcov->mode == KCOV_MODE_DISABLED ||
> -                   kcov->area == NULL)
> +               if (kcov->mode != KCOV_MODE_INIT || !kcov->area)
>                         return -EINVAL;
>                 if (kcov->t != NULL)
>                         return -EBUSY;
> +               if (arg == KCOV_TRACE_PC)
> +                       kcov->mode = KCOV_MODE_TRACE_PC;
> +               else if (arg == KCOV_TRACE_CMP)
> +#ifdef CONFIG_KCOV_ENABLE_COMPARISONS
> +                       kcov->mode = KCOV_MODE_TRACE_CMP;
> +#else
> +               return -ENOTSUPP;
> +#endif
> +               else
> +                       return -EINVAL;
>                 t = current;
>                 /* Cache in task struct for performance. */
>                 t->kcov_size = kcov->size;
>                 t->kcov_area = kcov->area;
> -               /* See comment in __sanitizer_cov_trace_pc(). */
> +               /* See comment in check_kcov_mode(). */
>                 barrier();
>                 WRITE_ONCE(t->kcov_mode, kcov->mode);
>                 t->kcov = kcov;
> @@ -248,6 +382,7 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                         return -EINVAL;
>                 kcov_task_init(t);
>                 kcov->t = NULL;
> +               kcov->mode = KCOV_MODE_INIT;
>                 kcov_put(kcov);
>                 return 0;
>         default:
> --
> 2.14.1.581.gf28d330327-goog
>
>


^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 1/3] kcov: support comparison operands collection
  2017-08-30 19:08       ` Dmitry Vyukov
@ 2017-09-12 17:41         ` Dmitry Vyukov
  -1 siblings, 0 replies; 21+ messages in thread
From: Dmitry Vyukov @ 2017-09-12 17:41 UTC (permalink / raw)
  To: Mark Rutland
  Cc: Andrew Morton, linux-mm, Victor Chibotaru, Alexander Popov,
	Andrey Ryabinin, Kees Cook, Vegard Nossum, Quentin Casasnovas,
	syzkaller, LKML

On Wed, Aug 30, 2017 at 9:08 PM, Dmitry Vyukov <dvyukov@google.com> wrote:
> On Wed, Aug 30, 2017 at 8:23 PM, Mark Rutland <mark.rutland@arm.com> wrote:
>> Hi,
>>
>> On Wed, Aug 30, 2017 at 06:23:29PM +0200, Dmitry Vyukov wrote:
>>> From: Victor Chibotaru <tchibo@google.com>
>>>
>>> Enables kcov to collect comparison operands from instrumented code.
>>> This is done by using Clang's -fsanitize=trace-cmp instrumentation
>>> (currently not available for GCC).
>>
>> What's needed to build the kernel with Clang these days?
>>
>> I was under the impression that it still wasn't possible to build arm64
>> with clang due to a number of missing features (e.g. the %a assembler
>> output template).
>>
>>> The comparison operands help a lot in fuzz testing. E.g. they are
>>> used in Syzkaller to cover the interiors of conditional statements
>>> with far fewer attempts and thus make previously unreachable code
>>> reachable.
>>>
>>> To allow separate collection of coverage and comparison operands two
>>> different work modes are implemented. Mode selection is now done via
>>> a KCOV_ENABLE ioctl call with corresponding argument value.
>>>
>>> Signed-off-by: Victor Chibotaru <tchibo@google.com>
>>> Cc: Andrew Morton <akpm@linux-foundation.org>
>>> Cc: Mark Rutland <mark.rutland@arm.com>
>>> Cc: Alexander Popov <alex.popov@linux.com>
>>> Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
>>> Cc: Kees Cook <keescook@chromium.org>
>>> Cc: Vegard Nossum <vegard.nossum@oracle.com>
>>> Cc: Quentin Casasnovas <quentin.casasnovas@oracle.com>
>>> Cc: syzkaller@googlegroups.com
>>> Cc: linux-mm@kvack.org
>>> Cc: linux-kernel@vger.kernel.org
>>> ---
>>> Clang instrumentation:
>>> https://clang.llvm.org/docs/SanitizerCoverage.html#tracing-data-flow
>>
>> How stable is this?
>>
>> The comment at the end says "This interface is a subject to change."
>
>
> The intention is that this is not subject to change anymore (since we
> are using it in kernel).
> I've mailed change to docs: https://reviews.llvm.org/D37303
>
> FWIW, there is patch in flight that adds this instrumentation to gcc:
> https://groups.google.com/forum/#!topic/syzkaller/CSLynn6nI-A
> It seems to be stalled on review phase, though.


Good news is that this is submitted to gcc in 251801.

^ permalink raw reply	[flat|nested] 21+ messages in thread


end of thread, other threads:[~2017-09-12 17:42 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-08-30 16:23 [PATCH 0/3] kcov: support comparison operands collection Dmitry Vyukov
2017-08-30 16:23 ` [PATCH 1/3] " Dmitry Vyukov
2017-08-30 16:23   ` Dmitry Vyukov
2017-08-30 18:23   ` Mark Rutland
2017-08-30 18:23     ` Mark Rutland
2017-08-30 19:04     ` Alexander Potapenko
2017-08-30 19:04       ` Alexander Potapenko
2017-08-30 19:08     ` Dmitry Vyukov
2017-08-30 19:08       ` Dmitry Vyukov
2017-08-31  9:31       ` Mark Rutland
2017-08-31  9:31         ` Mark Rutland
2017-09-12 17:41       ` Dmitry Vyukov
2017-09-12 17:41         ` Dmitry Vyukov
2017-08-31 13:27   ` Andrey Konovalov
2017-08-31 13:27     ` Andrey Konovalov
     [not found]   ` <CAPZ9YJZUPYs8nbwG9aO1uCfr7vPY7PNr1WPpvOxP8d+vkMiDJw@mail.gmail.com>
2017-09-12 17:41     ` Dmitry Vyukov
2017-09-12 17:41       ` Dmitry Vyukov
2017-08-30 16:23 ` [PATCH 2/3] Makefile: support flag -fsanitizer-coverage=trace-cmp Dmitry Vyukov
2017-08-30 16:23   ` Dmitry Vyukov
2017-08-30 16:23 ` [PATCH 3/3] kcov: update documentation Dmitry Vyukov
2017-08-30 16:23   ` Dmitry Vyukov
