From: Marco Elver <elver@google.com>
To: elver@google.com
Cc: linux-kernel@vger.kernel.org,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>,
	Andrey Ryabinin <aryabinin@virtuozzo.com>,
	Alexander Potapenko <glider@google.com>,
	Dmitry Vyukov <dvyukov@google.com>,
	Andrey Konovalov <andreyknvl@google.com>,
	Mark Rutland <mark.rutland@arm.com>,
	Peter Zijlstra <peterz@infradead.org>,
	x86@kernel.org, kasan-dev@googlegroups.com
Subject: [PATCH 1/2] kernel/fork: Add support for stack-end guard page
Date: Fri, 19 Jul 2019 15:28:17 +0200
Message-ID: <20190719132818.40258-1-elver@google.com>

Enabling STACK_GUARD_PAGE helps catch kernel stack overflows immediately
rather than letting them cause difficult-to-diagnose corruption. Note that,
unlike virtually-mapped kernel stacks, this effectively wastes an entire page
of memory per stack; however, it may provide extra protection in
configurations that cannot use virtually-mapped kernel stacks.
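
For concreteness, a rough sketch of the resulting numbers on x86-64 with
KASAN enabled (assuming KASAN_STACK_ORDER == 1 and 4 KiB pages; without
KASAN, drop the +1):

	THREAD_SIZE_ORDER = 2 + 1 = 3              /* order-3, 8 pages */
	allocation        = 4096 << 3 = 32 KiB
	STACK_GUARD_SIZE  = 4096                   /* 1 guard page     */
	THREAD_SIZE       = 32 KiB - 4 KiB = 28 KiB of usable stack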

The motivation for this patch is that KASAN cannot use virtually-mapped
kernel stacks to detect stack overflows. An alternative would be to implement
support for vmapped stacks in KASAN, but that would add significant extra
complexity. While the stack-end guard page approach here wastes a page, it is
significantly simpler than the alternative. We assume that the extra cost of
a page can be justified in the cases where STACK_GUARD_PAGE would be enabled.

Note that an earlier prototype of this patch used the 'set_memory_{ro,rw}'
functions, which flush the TLBs. This, however, turned out to be unacceptably
expensive, especially when run with fuzzers such as syzkaller, as the kernel
would encounter frequent RCU timeouts. The current approach of not flushing
the TLB is therefore best-effort, but it works in the test cases considered --
any comments on better alternatives or improvements are welcome.
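
For illustration, a minimal sketch contrasting the two variants (return
values and error handling omitted; this is not the exact prototype code):

	/* Earlier prototype: write-protect the guard page via
	 * set_memory_ro(), which flushes TLBs on all CPUs. */
	set_memory_ro((unsigned long)page_address(page), 1);

	/* Current approach: unmap the page from the direct map without
	 * flushing TLBs; best-effort, as a stale TLB entry may let the
	 * first stray access through. */
	set_direct_map_invalid_noflush(page);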

Signed-off-by: Marco Elver <elver@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: kasan-dev@googlegroups.com
---
 arch/Kconfig                         | 15 +++++++++++++++
 arch/x86/include/asm/page_64_types.h |  8 +++++++-
 include/linux/sched/task_stack.h     | 11 +++++++++--
 kernel/fork.c                        | 22 +++++++++++++++++++++-
 4 files changed, 52 insertions(+), 4 deletions(-)

diff --git a/arch/Kconfig b/arch/Kconfig
index e8d19c3cb91f..cca3258fff1f 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -935,6 +935,21 @@ config LOCK_EVENT_COUNTS
 	  the chance of application behavior change because of timing
 	  differences. The counts are reported via debugfs.
 
+config STACK_GUARD_PAGE
+	default n
+	bool "Use stack-end page as guard page"
+	depends on !VMAP_STACK && ARCH_HAS_SET_DIRECT_MAP && THREAD_INFO_IN_TASK && !STACK_GROWSUP
+	help
+	  Enable this if you want to use the stack-end page as a guard page.
+	  This causes kernel stack overflows to be caught immediately rather
+	  than causing difficult-to-diagnose corruption. Note that, unlike
+	  virtually-mapped kernel stacks, this effectively wastes an entire
+	  page of memory; however, it may provide extra protection in
+	  configurations that cannot use virtually-mapped kernel stacks.
+	  Note that this option does not implicitly increase the default
+	  stack size. The main use-case is for KASAN to avoid reporting
+	  misleading bugs due to stack overflow.
+
 source "kernel/gcov/Kconfig"
 
 source "scripts/gcc-plugins/Kconfig"
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 288b065955b7..b218b5713c02 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -12,8 +12,14 @@
 #define KASAN_STACK_ORDER 0
 #endif
 
+#ifdef CONFIG_STACK_GUARD_PAGE
+#define STACK_GUARD_SIZE PAGE_SIZE
+#else
+#define STACK_GUARD_SIZE 0
+#endif
+
 #define THREAD_SIZE_ORDER	(2 + KASAN_STACK_ORDER)
-#define THREAD_SIZE  (PAGE_SIZE << THREAD_SIZE_ORDER)
+#define THREAD_SIZE  ((PAGE_SIZE << THREAD_SIZE_ORDER) - STACK_GUARD_SIZE)
 
 #define EXCEPTION_STACK_ORDER (0 + KASAN_STACK_ORDER)
 #define EXCEPTION_STKSZ (PAGE_SIZE << EXCEPTION_STACK_ORDER)
diff --git a/include/linux/sched/task_stack.h b/include/linux/sched/task_stack.h
index 2413427e439c..7ee86ad0a282 100644
--- a/include/linux/sched/task_stack.h
+++ b/include/linux/sched/task_stack.h
@@ -11,6 +11,13 @@
 
 #ifdef CONFIG_THREAD_INFO_IN_TASK
 
+#ifndef STACK_GUARD_SIZE
+#ifdef CONFIG_STACK_GUARD_PAGE
+#error "Architecture not compatible with STACK_GUARD_PAGE"
+#endif
+#define STACK_GUARD_SIZE 0
+#endif
+
 /*
  * When accessing the stack of a non-current task that might exit, use
  * try_get_task_stack() instead.  task_stack_page will return a pointer
@@ -18,14 +25,14 @@
  */
 static inline void *task_stack_page(const struct task_struct *task)
 {
-	return task->stack;
+	return task->stack + STACK_GUARD_SIZE;
 }
 
 #define setup_thread_stack(new,old)	do { } while(0)
 
 static inline unsigned long *end_of_stack(const struct task_struct *task)
 {
-	return task->stack;
+	return task->stack + STACK_GUARD_SIZE;
 }
 
 #elif !defined(__HAVE_THREAD_FUNCTIONS)
diff --git a/kernel/fork.c b/kernel/fork.c
index d8ae0f1b4148..22033b03f7da 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include <linux/livepatch.h>
 #include <linux/thread_info.h>
 #include <linux/stackleak.h>
+#include <linux/set_memory.h>
 
 #include <asm/pgtable.h>
 #include <asm/pgalloc.h>
@@ -249,6 +250,14 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 					     THREAD_SIZE_ORDER);
 
 	if (likely(page)) {
+		if (IS_ENABLED(CONFIG_STACK_GUARD_PAGE)) {
+			/*
+			 * Best effort: skip the TLB flush here to avoid the
+			 * overhead of a global TLB flush per stack allocation.
+			 */
+			set_direct_map_invalid_noflush(page);
+		}
+
 		tsk->stack = page_address(page);
 		return tsk->stack;
 	}
@@ -258,6 +267,7 @@ static unsigned long *alloc_thread_stack_node(struct task_struct *tsk, int node)
 
 static inline void free_thread_stack(struct task_struct *tsk)
 {
+	struct page *stack_page;
 #ifdef CONFIG_VMAP_STACK
 	struct vm_struct *vm = task_stack_vm_area(tsk);
 
@@ -285,7 +295,17 @@ static inline void free_thread_stack(struct task_struct *tsk)
 	}
 #endif
 
-	__free_pages(virt_to_page(tsk->stack), THREAD_SIZE_ORDER);
+	stack_page = virt_to_page(tsk->stack);
+
+	if (IS_ENABLED(CONFIG_STACK_GUARD_PAGE)) {
+		/*
+		 * Avoid flushing TLBs, and instead rely on spurious fault
+		 * handling to deal with any stale TLB entries.
+		 */
+		set_direct_map_default_noflush(stack_page);
+	}
+
+	__free_pages(stack_page, THREAD_SIZE_ORDER);
 }
 # else
 static struct kmem_cache *thread_stack_cache;
-- 
2.22.0.657.g960e92d24f-goog

