* [PATCH v4 00/12] mm: Hardened usercopy
@ 2016-07-20 20:26 Kees Cook
  2016-07-20 20:26 ` [PATCH v4 01/12] mm: Add is_migrate_cma_page Kees Cook
                   ` (12 more replies)
  0 siblings, 13 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:26 UTC (permalink / raw)
  To: linux-arm-kernel

Hi,

[This is now in my kspp -next tree, though I'd really love to add some
additional explicit Tested-bys, Reviewed-bys, or Acked-bys. If you've
looked through any part of this or have done any testing, please consider
sending an email with your "*-by:" line. :)]

This is a start of the mainline port of PAX_USERCOPY[1]. After writing
tests (now in lkdtm in -next) for Casey's earlier port[2], I kept tweaking
things further and further until I ended up with a whole new patch series.
To that end, I took Rik, Laura, and other people's feedback along with
additional changes and clean-ups.

Based on my understanding, PAX_USERCOPY was designed to catch a
few classes of flaws (mainly bad bounds checking) around the use of
copy_to_user()/copy_from_user(). These changes don't touch get_user() and
put_user(), since these operate on constant sized lengths, and tend to be
much less vulnerable. There are effectively three distinct protections in
the whole series, each of which I've given a separate CONFIG, though this
patch set is only the first of the three intended protections. (Generally
speaking, PAX_USERCOPY covers what I'm calling CONFIG_HARDENED_USERCOPY
(this) and CONFIG_HARDENED_USERCOPY_WHITELIST (future), and
PAX_USERCOPY_SLABS covers CONFIG_HARDENED_USERCOPY_SPLIT_KMALLOC
(future).)

This series, which adds CONFIG_HARDENED_USERCOPY, checks that objects
being copied to/from userspace meet certain criteria (an illustrative
sketch of the kind of bug this catches follows the list):
- if address is a heap object, the size must not exceed the object's
  allocated size. (This will catch all kinds of heap overflow flaws.)
- if address range is in the current process stack, it must be within
  a valid stack frame (if such checking is possible) or at least entirely
  within the current process's stack. (This could catch large lengths that
  would have extended beyond the current process stack, or overflows if
  their length extends back into the original stack.)
- if the address range is part of kernel data, rodata, or bss, allow it.
- if address range is page-allocated, it must not span multiple
  allocations (excepting Reserved and CMA pages).
- if address is within the kernel text, reject it.
- everything else is accepted
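
As an illustrative sketch of the heap case (not code from this series;
the structure, function, and length handling are hypothetical), this is
the kind of exposure the heap-object check refuses:

#include <linux/slab.h>
#include <linux/types.h>
#include <linux/uaccess.h>

struct example_record {		/* hypothetical */
	u32 id;
	char name[32];
};

static long example_get_record(void __user *ubuf, size_t len)
{
	struct example_record *rec = kmalloc(sizeof(*rec), GFP_KERNEL);
	long ret = 0;

	if (!rec)
		return -ENOMEM;
	/*
	 * Bug: "len" is user-controlled and never compared against
	 * sizeof(*rec). Without the check, len > sizeof(*rec) leaks
	 * neighbouring heap memory; with CONFIG_HARDENED_USERCOPY the
	 * copy is rejected because it exceeds the object's allocated
	 * size.
	 */
	if (copy_to_user(ubuf, rec, len))
		ret = -EFAULT;
	kfree(rec);
	return ret;
}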

The patches in the series are:
- Support for examination of CMA page types:
	1- mm: Add is_migrate_cma_page
- Support for arch-specific stack frame checking (which will likely be
  replaced in the future by Josh's more comprehensive unwinder):
        2- mm: Implement stack frame object validation
- The core copy_to/from_user() checks, without the slab object checks:
        3- mm: Hardened usercopy
- Per-arch enablement of the protection:
        4- x86/uaccess: Enable hardened usercopy
        5- ARM: uaccess: Enable hardened usercopy
        6- arm64/uaccess: Enable hardened usercopy
        7- ia64/uaccess: Enable hardened usercopy
        8- powerpc/uaccess: Enable hardened usercopy
        9- sparc/uaccess: Enable hardened usercopy
       10- s390/uaccess: Enable hardened usercopy
- The heap allocator implementation of object size checking:
       11- mm: SLAB hardened usercopy support
       12- mm: SLUB hardened usercopy support

Some notes:

- This is expected to apply on top of -next which contains fixes for the
  position of _etext on both arm and arm64, though it has some conflicts
  with KASAN that should be trivial to fix up. Also in -next are the
  tests for this protection (in lkdtm), prefixed with USERCOPY_.

- I couldn't detect a measurable performance change with these features
  enabled. Kernel build times were unchanged, hackbench was unchanged,
  etc. I think we could flip this to "on by default" at some point, but
  for now, I'm leaving it off until I can get some more definitive
  measurements. I would love it if someone with greater familiarity with
  perf could give this a spin and report results.

- The SLOB support extracted from grsecurity seems entirely broken. I
  have no idea what's going on there; I spent my time testing SLAB and
  SLUB. Having someone else look at SLOB would be nice, but this series
  doesn't depend on it.

Additional features that would be nice, but aren't blocking this series:

- Needs more architecture support for stack frame checking (only x86 now,
  but it seems Josh will have a good solution for this soon).


Thanks!

-Kees

[1] https://grsecurity.net/download.php "grsecurity - test kernel patch"
[2] http://www.openwall.com/lists/kernel-hardening/2016/05/19/5

v4:
- handle CMA pages, labbott
- update stack checker comments, labbott
- check for vmalloc addresses, labbott
- deal with KASAN in -next changing arm64 copy*user calls
- check for linear mappings at runtime instead of via CONFIG

v3:
- switch to using BUG for better Oops integration
- when checking page allocations, check each for Reserved
- use enums for the stack check return for readability

v2:
- added s390 support
- handle slub red zone
- disallow writes to rodata area
- stack frame walker now CONFIG-controlled arch-specific helper


* [PATCH v4 01/12] mm: Add is_migrate_cma_page
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
@ 2016-07-20 20:26 ` Kees Cook
  2016-07-20 20:26 ` [PATCH v4 02/12] mm: Implement stack frame object validation Kees Cook
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:26 UTC (permalink / raw)
  To: linux-arm-kernel

From: Laura Abbott <labbott@redhat.com>

Code such as hardened user copy[1] needs a way to tell if a
page is CMA or not. Add is_migrate_cma_page in a similar way
to is_migrate_isolate_page.

[1]http://article.gmane.org/gmane.linux.kernel.mm/155238
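
As a rough usage sketch (illustrative only; the helper below is
hypothetical, but it mirrors how the usercopy check later in this series
walks a range of pages):

#include <linux/mm.h>
#include <linux/mmzone.h>

/* Illustrative only: report whether every page backing [ptr, ptr + n)
 * is a CMA page, the way the hardened usercopy check treats a span of
 * CMA-backed pages as a single allocation. */
static bool example_range_is_all_cma(const void *ptr, unsigned long n)
{
	const void *end = ptr + n - 1;

	for (; ptr <= end; ptr += PAGE_SIZE) {
		if (!is_migrate_cma_page(virt_to_head_page(ptr)))
			return false;
	}
	return true;
}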

Signed-off-by: Laura Abbott <labbott@redhat.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 include/linux/mmzone.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 02069c23486d..c8478b29f070 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -68,8 +68,10 @@ extern char * const migratetype_names[MIGRATE_TYPES];
 
 #ifdef CONFIG_CMA
 #  define is_migrate_cma(migratetype) unlikely((migratetype) == MIGRATE_CMA)
+#  define is_migrate_cma_page(_page) (get_pageblock_migratetype(_page) == MIGRATE_CMA)
 #else
 #  define is_migrate_cma(migratetype) false
+#  define is_migrate_cma_page(_page) false
 #endif
 
 #define for_each_migratetype_order(order, type) \
-- 
2.7.4


* [PATCH v4 02/12] mm: Implement stack frame object validation
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
  2016-07-20 20:26 ` [PATCH v4 01/12] mm: Add is_migrate_cma_page Kees Cook
@ 2016-07-20 20:26 ` Kees Cook
  2016-07-20 20:26 ` [PATCH v4 03/12] mm: Hardened usercopy Kees Cook
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:26 UTC (permalink / raw)
  To: linux-arm-kernel

This creates a per-architecture function, arch_within_stack_frames(), that
should validate whether a given object is entirely contained by a kernel
stack frame. The initial implementation is for x86.

This is based on code from PaX.
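
To illustrate what the frame walk distinguishes, here is a hedged sketch
(the two functions and the user-supplied length are hypothetical):

#include <linux/string.h>
#include <linux/uaccess.h>

/* Illustrative only: the buffer lies entirely within this frame's local
 * variables, so the walk can return 1 and the copy is allowed. */
static long example_frame_ok(void __user *ubuf)
{
	char report[64];

	memset(report, 0, sizeof(report));
	return copy_to_user(ubuf, report, sizeof(report)) ? -EFAULT : 0;
}

/* Illustrative only: if "len" exceeds sizeof(report), the copy runs past
 * the locals into the saved frame pointer and return address, so the walk
 * returns -1 and the copy is rejected. */
static long example_frame_leak(void __user *ubuf, unsigned long len)
{
	char report[64];

	memset(report, 0, sizeof(report));
	return copy_to_user(ubuf, report, len) ? -EFAULT : 0;
}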

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/Kconfig                       |  9 ++++++++
 arch/x86/Kconfig                   |  1 +
 arch/x86/include/asm/thread_info.h | 44 ++++++++++++++++++++++++++++++++++++++
 include/linux/thread_info.h        |  9 ++++++++
 4 files changed, 63 insertions(+)

diff --git a/arch/Kconfig b/arch/Kconfig
index d794384a0404..5e2776562035 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -424,6 +424,15 @@ config CC_STACKPROTECTOR_STRONG
 
 endchoice
 
+config HAVE_ARCH_WITHIN_STACK_FRAMES
+	bool
+	help
+	  An architecture should select this if it can walk the kernel stack
+	  frames to determine if an object is part of either the arguments
+	  or local variables (i.e. that it excludes saved return addresses,
+	  and similar) by implementing an inline arch_within_stack_frames(),
+	  which is used by CONFIG_HARDENED_USERCOPY.
+
 config HAVE_CONTEXT_TRACKING
 	bool
 	help
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 0a7b885964ba..4407f596b72c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,6 +91,7 @@ config X86
 	select HAVE_ARCH_SOFT_DIRTY		if X86_64
 	select HAVE_ARCH_TRACEHOOK
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+	select HAVE_ARCH_WITHIN_STACK_FRAMES
 	select HAVE_EBPF_JIT			if X86_64
 	select HAVE_CC_STACKPROTECTOR
 	select HAVE_CMPXCHG_DOUBLE
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index 30c133ac05cd..ab386f1336f2 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -180,6 +180,50 @@ static inline unsigned long current_stack_pointer(void)
 	return sp;
 }
 
+/*
+ * Walks up the stack frames to make sure that the specified object is
+ * entirely contained by a single stack frame.
+ *
+ * Returns:
+ *		 1 if within a frame
+ *		-1 if placed across a frame boundary (or outside stack)
+ *		 0 unable to determine (no frame pointers, etc)
+ */
+static inline int arch_within_stack_frames(const void * const stack,
+					   const void * const stackend,
+					   const void *obj, unsigned long len)
+{
+#if defined(CONFIG_FRAME_POINTER)
+	const void *frame = NULL;
+	const void *oldframe;
+
+	oldframe = __builtin_frame_address(1);
+	if (oldframe)
+		frame = __builtin_frame_address(2);
+	/*
+	 * low ----------------------------------------------> high
+	 * [saved bp][saved ip][args][local vars][saved bp][saved ip]
+	 *                     ^----------------^
+	 *               allow copies only within here
+	 */
+	while (stack <= frame && frame < stackend) {
+		/*
+		 * If obj + len extends past the last frame, this
+		 * check won't pass and the next frame will be 0,
+		 * causing us to bail out and correctly report
+		 * the copy as invalid.
+		 */
+		if (obj + len <= frame)
+			return obj >= oldframe + 2 * sizeof(void *) ? 1 : -1;
+		oldframe = frame;
+		frame = *(const void * const *)frame;
+	}
+	return -1;
+#else
+	return 0;
+#endif
+}
+
 #else /* !__ASSEMBLY__ */
 
 #ifdef CONFIG_X86_64
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index b4c2a485b28a..3d5c80b4391d 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -146,6 +146,15 @@ static inline bool test_and_clear_restore_sigmask(void)
 #error "no set_restore_sigmask() provided and default one won't work"
 #endif
 
+#ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES
+static inline int arch_within_stack_frames(const void * const stack,
+					   const void * const stackend,
+					   const void *obj, unsigned long len)
+{
+	return 0;
+}
+#endif
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
-- 
2.7.4


* [PATCH v4 03/12] mm: Hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
  2016-07-20 20:26 ` [PATCH v4 01/12] mm: Add is_migrate_cma_page Kees Cook
  2016-07-20 20:26 ` [PATCH v4 02/12] mm: Implement stack frame object validation Kees Cook
@ 2016-07-20 20:26 ` Kees Cook
  2016-07-20 20:26 ` [PATCH v4 04/12] x86/uaccess: Enable hardened usercopy Kees Cook
                   ` (9 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:26 UTC (permalink / raw)
  To: linux-arm-kernel

This is the start of porting PAX_USERCOPY into the mainline kernel. This
is the first set of features, controlled by CONFIG_HARDENED_USERCOPY. The
work is based on code by PaX Team and Brad Spengler, and an earlier port
from Casey Schaufler. Additional non-slab page tests are from Rik van Riel.

This patch contains the logic for validating several conditions when
performing copy_to_user() and copy_from_user() on the kernel object
being copied to/from (a rough sketch of the per-arch call site follows
the list):
- address range doesn't wrap around
- address range isn't NULL or zero-allocated (with a non-zero copy size)
- if on the slab allocator:
  - copy size must be less than or equal to the object size (when the
    check is implemented in the allocator, which appears in subsequent
    patches)
- otherwise, object must not span page allocations (excepting Reserved
  and CMA ranges)
- if on the stack
  - object must not extend before/after the current process stack
  - object must be contained by a valid stack frame (when there is
    arch/build support for identifying stack frames)
- object must not overlap with kernel text
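
The entry point for architectures is check_object_size(); the per-arch
patches later in the series call it from their copy routines, roughly
like this sketch (arch_raw_copy_to_user() is a hypothetical stand-in for
whatever low-level copy an architecture already provides):

#include <linux/thread_info.h>

static inline unsigned long
example_copy_to_user(void __user *to, const void *from, unsigned long n)
{
	/* Validate the kernel-side object before the actual copy. */
	check_object_size(from, n, true);
	return arch_raw_copy_to_user(to, from, n);	/* hypothetical */
}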

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
---
 include/linux/slab.h        |  12 ++
 include/linux/thread_info.h |  15 +++
 mm/Makefile                 |   4 +
 mm/usercopy.c               | 268 ++++++++++++++++++++++++++++++++++++++++++++
 security/Kconfig            |  28 +++++
 5 files changed, 327 insertions(+)
 create mode 100644 mm/usercopy.c

diff --git a/include/linux/slab.h b/include/linux/slab.h
index aeb3e6d00a66..96a16a3fb7cb 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -155,6 +155,18 @@ void kfree(const void *);
 void kzfree(const void *);
 size_t ksize(const void *);
 
+#ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page);
+#else
+static inline const char *__check_heap_object(const void *ptr,
+					      unsigned long n,
+					      struct page *page)
+{
+	return NULL;
+}
+#endif
+
 /*
  * Some archs want to perform DMA into kmalloc caches and need a guaranteed
  * alignment larger than the alignment of a 64-bit integer.
diff --git a/include/linux/thread_info.h b/include/linux/thread_info.h
index 3d5c80b4391d..f24b99eac969 100644
--- a/include/linux/thread_info.h
+++ b/include/linux/thread_info.h
@@ -155,6 +155,21 @@ static inline int arch_within_stack_frames(const void * const stack,
 }
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+extern void __check_object_size(const void *ptr, unsigned long n,
+					bool to_user);
+
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{
+	__check_object_size(ptr, n, to_user);
+}
+#else
+static inline void check_object_size(const void *ptr, unsigned long n,
+				     bool to_user)
+{ }
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 #endif	/* __KERNEL__ */
 
 #endif /* _LINUX_THREAD_INFO_H */
diff --git a/mm/Makefile b/mm/Makefile
index 78c6f7dedb83..32d37247c7e5 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -21,6 +21,9 @@ KCOV_INSTRUMENT_memcontrol.o := n
 KCOV_INSTRUMENT_mmzone.o := n
 KCOV_INSTRUMENT_vmstat.o := n
 
+# Since __builtin_frame_address does work as used, disable the warning.
+CFLAGS_usercopy.o += $(call cc-disable-warning, frame-address)
+
 mmu-y			:= nommu.o
 mmu-$(CONFIG_MMU)	:= gup.o highmem.o memory.o mincore.o \
 			   mlock.o mmap.o mprotect.o mremap.o msync.o rmap.o \
@@ -99,3 +102,4 @@ obj-$(CONFIG_USERFAULTFD) += userfaultfd.o
 obj-$(CONFIG_IDLE_PAGE_TRACKING) += page_idle.o
 obj-$(CONFIG_FRAME_VECTOR) += frame_vector.o
 obj-$(CONFIG_DEBUG_PAGE_REF) += debug_page_ref.o
+obj-$(CONFIG_HARDENED_USERCOPY) += usercopy.o
diff --git a/mm/usercopy.c b/mm/usercopy.c
new file mode 100644
index 000000000000..8ebae91a6b55
--- /dev/null
+++ b/mm/usercopy.c
@@ -0,0 +1,268 @@
+/*
+ * This implements the various checks for CONFIG_HARDENED_USERCOPY*,
+ * which are designed to protect kernel memory from needless exposure
+ * and overwrite under many unintended conditions. This code is based
+ * on PAX_USERCOPY, which is:
+ *
+ * Copyright (C) 2001-2016 PaX Team, Bradley Spengler, Open Source
+ * Security Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/mm.h>
+#include <linux/slab.h>
+#include <asm/sections.h>
+
+enum {
+	BAD_STACK = -1,
+	NOT_STACK = 0,
+	GOOD_FRAME,
+	GOOD_STACK,
+};
+
+/*
+ * Checks if a given pointer and length is contained by the current
+ * stack frame (if possible).
+ *
+ * Returns:
+ *	NOT_STACK: not at all on the stack
+ *	GOOD_FRAME: fully within a valid stack frame
+ *	GOOD_STACK: fully on the stack (when can't do frame-checking)
+ *	BAD_STACK: error condition (invalid stack position or bad stack frame)
+ */
+static noinline int check_stack_object(const void *obj, unsigned long len)
+{
+	const void * const stack = task_stack_page(current);
+	const void * const stackend = stack + THREAD_SIZE;
+	int ret;
+
+	/* Object is not on the stack at all. */
+	if (obj + len <= stack || stackend <= obj)
+		return NOT_STACK;
+
+	/*
+	 * Reject: object partially overlaps the stack (passing the
+	 * check above means at least one end is within the stack,
+	 * so if this check fails, the other end is outside the stack).
+	 */
+	if (obj < stack || stackend < obj + len)
+		return BAD_STACK;
+
+	/* Check if object is safely within a valid frame. */
+	ret = arch_within_stack_frames(stack, stackend, obj, len);
+	if (ret)
+		return ret;
+
+	return GOOD_STACK;
+}
+
+static void report_usercopy(const void *ptr, unsigned long len,
+			    bool to_user, const char *type)
+{
+	pr_emerg("kernel memory %s attempt detected %s %p (%s) (%lu bytes)\n",
+		to_user ? "exposure" : "overwrite",
+		to_user ? "from" : "to", ptr, type ? : "unknown", len);
+	/*
+	 * For greater effect, it would be nice to do do_group_exit(),
+	 * but BUG() actually hooks all the lock-breaking and per-arch
+	 * Oops code, so that is used here instead.
+	 */
+	BUG();
+}
+
+/* Returns true if any portion of [ptr,ptr+n) overlaps with [low,high). */
+static bool overlaps(const void *ptr, unsigned long n, unsigned long low,
+		     unsigned long high)
+{
+	unsigned long check_low = (uintptr_t)ptr;
+	unsigned long check_high = check_low + n;
+
+	/* Does not overlap if entirely above or entirely below. */
+	if (check_low >= high || check_high <= low)
+		return false;
+
+	return true;
+}
+
+/* Is this address range in the kernel text area? */
+static inline const char *check_kernel_text_object(const void *ptr,
+						   unsigned long n)
+{
+	unsigned long textlow = (unsigned long)_stext;
+	unsigned long texthigh = (unsigned long)_etext;
+	unsigned long textlow_linear, texthigh_linear;
+
+	if (overlaps(ptr, n, textlow, texthigh))
+		return "<kernel text>";
+
+	/*
+	 * Some architectures have virtual memory mappings with a secondary
+	 * mapping of the kernel text, i.e. there is more than one virtual
+	 * kernel address that points to the kernel image. It is usually
+	 * when there is a separate linear physical memory mapping, in that
+	 * __pa() is not just the reverse of __va(). This can be detected
+	 * and checked:
+	 */
+	textlow_linear = (unsigned long)__va(__pa(textlow));
+	/* No different mapping: we're done. */
+	if (textlow_linear == textlow)
+		return NULL;
+
+	/* Check the secondary mapping... */
+	texthigh_linear = (unsigned long)__va(__pa(texthigh));
+	if (overlaps(ptr, n, textlow_linear, texthigh_linear))
+		return "<linear kernel text>";
+
+	return NULL;
+}
+
+static inline const char *check_bogus_address(const void *ptr, unsigned long n)
+{
+	/* Reject if object wraps past end of memory. */
+	if (ptr + n < ptr)
+		return "<wrapped address>";
+
+	/* Reject if NULL or ZERO-allocation. */
+	if (ZERO_OR_NULL_PTR(ptr))
+		return "<null>";
+
+	return NULL;
+}
+
+static inline const char *check_heap_object(const void *ptr, unsigned long n,
+					    bool to_user)
+{
+	struct page *page, *endpage;
+	const void *end = ptr + n - 1;
+	bool is_reserved, is_cma;
+
+	/*
+	 * Some architectures (arm64) return true for virt_addr_valid() on
+	 * vmalloced addresses. Work around this by checking for vmalloc
+	 * first.
+	 */
+	if (is_vmalloc_addr(ptr))
+		return NULL;
+
+	if (!virt_addr_valid(ptr))
+		return NULL;
+
+	page = virt_to_head_page(ptr);
+
+	/* Check slab allocator for flags and size. */
+	if (PageSlab(page))
+		return __check_heap_object(ptr, n, page);
+
+	/*
+	 * Sometimes the kernel data regions are not marked Reserved (see
+	 * check below). And sometimes [_sdata,_edata) does not cover
+	 * rodata and/or bss, so check each range explicitly.
+	 */
+
+	/* Allow reads of kernel rodata region (if not marked as Reserved). */
+	if (ptr >= (const void *)__start_rodata &&
+	    end <= (const void *)__end_rodata) {
+		if (!to_user)
+			return "<rodata>";
+		return NULL;
+	}
+
+	/* Allow kernel data region (if not marked as Reserved). */
+	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
+		return NULL;
+
+	/* Allow kernel bss region (if not marked as Reserved). */
+	if (ptr >= (const void *)__bss_start &&
+	    end <= (const void *)__bss_stop)
+		return NULL;
+
+	/* Is the object wholly within one base page? */
+	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
+		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
+		return NULL;
+
+	/* Allow if start and end are inside the same compound page. */
+	endpage = virt_to_head_page(end);
+	if (likely(endpage == page))
+		return NULL;
+
+	/*
+	 * Reject if range is entirely either Reserved (i.e. special or
+	 * device memory), or CMA. Otherwise, reject since the object spans
+	 * several independently allocated pages.
+	 */
+	is_reserved = PageReserved(page);
+	is_cma = is_migrate_cma_page(page);
+	if (!is_reserved && !is_cma)
+		goto reject;
+
+	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
+		page = virt_to_head_page(ptr);
+		if (is_reserved && !PageReserved(page))
+			goto reject;
+		if (is_cma && !is_migrate_cma_page(page))
+			goto reject;
+	}
+
+	return NULL;
+
+reject:
+	return "<spans multiple pages>";
+}
+
+/*
+ * Validates that the given object is:
+ * - not bogus address
+ * - known-safe heap or stack object
+ * - not in kernel text
+ */
+void __check_object_size(const void *ptr, unsigned long n, bool to_user)
+{
+	const char *err;
+
+	/* Skip all tests if size is zero. */
+	if (!n)
+		return;
+
+	/* Check for invalid addresses. */
+	err = check_bogus_address(ptr, n);
+	if (err)
+		goto report;
+
+	/* Check for bad heap object. */
+	err = check_heap_object(ptr, n, to_user);
+	if (err)
+		goto report;
+
+	/* Check for bad stack object. */
+	switch (check_stack_object(ptr, n)) {
+	case NOT_STACK:
+		/* Object is not touching the current process stack. */
+		break;
+	case GOOD_FRAME:
+	case GOOD_STACK:
+		/*
+		 * Object is either in the correct frame (when it
+		 * is possible to check) or just generally on the
+		 * process stack (when frame checking not available).
+		 */
+		return;
+	default:
+		err = "<process stack>";
+		goto report;
+	}
+
+	/* Check for object in kernel to avoid text exposure. */
+	err = check_kernel_text_object(ptr, n);
+	if (!err)
+		return;
+
+report:
+	report_usercopy(ptr, n, to_user, err);
+}
+EXPORT_SYMBOL(__check_object_size);
diff --git a/security/Kconfig b/security/Kconfig
index 176758cdfa57..df28f2b6f3e1 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -118,6 +118,34 @@ config LSM_MMAP_MIN_ADDR
 	  this low address space will need the permission specific to the
 	  systems running LSM.
 
+config HAVE_HARDENED_USERCOPY_ALLOCATOR
+	bool
+	help
+	  The heap allocator implements __check_heap_object() for
+	  validating memory ranges against heap object sizes in
+	  support of CONFIG_HARDENED_USERCOPY.
+
+config HAVE_ARCH_HARDENED_USERCOPY
+	bool
+	help
+	  The architecture supports CONFIG_HARDENED_USERCOPY by
+	  calling check_object_size() just before performing the
+	  userspace copies in the low level implementation of
+	  copy_to_user() and copy_from_user().
+
+config HARDENED_USERCOPY
+	bool "Harden memory copies between kernel and userspace"
+	depends on HAVE_ARCH_HARDENED_USERCOPY
+	select BUG
+	help
+	  This option checks for obviously wrong memory regions when
+	  copying memory to/from the kernel (via copy_to_user() and
+	  copy_from_user() functions) by rejecting memory ranges that
+	  are larger than the specified heap object, span multiple
+	  separately allocated pages, are not on the process stack,
+	  or are part of the kernel text. This kills entire classes
+	  of heap overflow exploits and similar kernel memory exposures.
+
 source security/selinux/Kconfig
 source security/smack/Kconfig
 source security/tomoyo/Kconfig
-- 
2.7.4


* [PATCH v4 04/12] x86/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (2 preceding siblings ...)
  2016-07-20 20:26 ` [PATCH v4 03/12] mm: Hardened usercopy Kees Cook
@ 2016-07-20 20:26 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 05/12] ARM: uaccess: " Kees Cook
                   ` (8 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:26 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on x86. This is done both in
copy_*_user() and __copy_*_user() because copy_*_user() actually calls
down to _copy_*_user() and not __copy_*_user().

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
---
 arch/x86/Kconfig                  |  1 +
 arch/x86/include/asm/uaccess.h    | 10 ++++++----
 arch/x86/include/asm/uaccess_32.h |  2 ++
 arch/x86/include/asm/uaccess_64.h |  2 ++
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4407f596b72c..762a0349633c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -80,6 +80,7 @@ config X86
 	select HAVE_ALIGNED_STRUCT_PAGE		if SLUB
 	select HAVE_AOUT			if X86_32
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP		if X86_64 || X86_PAE
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN			if X86_64 && SPARSEMEM_VMEMMAP
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index 2982387ba817..d3312f0fcdfc 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -742,9 +742,10 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
 	 * case, and do only runtime checking for non-constant sizes.
 	 */
 
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(to, n, false);
 		n = _copy_from_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_from_user_overflow();
 	else
 		__copy_from_user_overflow(sz, n);
@@ -762,9 +763,10 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
 	might_fault();
 
 	/* See the comment in copy_from_user() above. */
-	if (likely(sz < 0 || sz >= n))
+	if (likely(sz < 0 || sz >= n)) {
+		check_object_size(from, n, true);
 		n = _copy_to_user(to, from, n);
-	else if(__builtin_constant_p(n))
+	} else if (__builtin_constant_p(n))
 		copy_to_user_overflow();
 	else
 		__copy_to_user_overflow(sz, n);
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 4b32da24faaf..7d3bdd1ed697 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -37,6 +37,7 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
 static __always_inline unsigned long __must_check
 __copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	return __copy_to_user_ll(to, from, n);
 }
 
@@ -95,6 +96,7 @@ static __always_inline unsigned long
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
 	might_fault();
+	check_object_size(to, n, false);
 	if (__builtin_constant_p(n)) {
 		unsigned long ret;
 
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 2eac2aa3e37f..673059a109fe 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -54,6 +54,7 @@ int __copy_from_user_nocheck(void *dst, const void __user *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(dst, size, false);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic(dst, (__force void *)src, size);
 	switch (size) {
@@ -119,6 +120,7 @@ int __copy_to_user_nocheck(void __user *dst, const void *src, unsigned size)
 {
 	int ret = 0;
 
+	check_object_size(src, size, true);
 	if (!__builtin_constant_p(size))
 		return copy_user_generic((__force void *)dst, src, size);
 	switch (size) {
-- 
2.7.4


* [PATCH v4 05/12] ARM: uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (3 preceding siblings ...)
  2016-07-20 20:26 ` [PATCH v4 04/12] x86/uaccess: Enable hardened usercopy Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 06/12] arm64/uaccess: " Kees Cook
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on arm.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm/Kconfig               |  1 +
 arch/arm/include/asm/uaccess.h | 11 +++++++++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 90542db1220d..f56b29b3f57e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -35,6 +35,7 @@ config ARM
 	select HARDIRQS_SW_RESEND
 	select HAVE_ARCH_AUDITSYSCALL if (AEABI && !OABI_COMPAT)
 	select HAVE_ARCH_BITREVERSE if (CPU_32v7M || CPU_32v7) && !CPU_32v6
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL if !XIP_KERNEL && !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_KGDB if !CPU_ENDIAN_BE32 && MMU
 	select HAVE_ARCH_MMAP_RND_BITS if MMU
diff --git a/arch/arm/include/asm/uaccess.h b/arch/arm/include/asm/uaccess.h
index 35c9db857ebe..7fb59199c6bb 100644
--- a/arch/arm/include/asm/uaccess.h
+++ b/arch/arm/include/asm/uaccess.h
@@ -496,7 +496,10 @@ arm_copy_from_user(void *to, const void __user *from, unsigned long n);
 static inline unsigned long __must_check
 __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(to, n, false);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_from_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
@@ -511,11 +514,15 @@ static inline unsigned long __must_check
 __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
 #ifndef CONFIG_UACCESS_WITH_MEMCPY
-	unsigned int __ua_flags = uaccess_save_and_enable();
+	unsigned int __ua_flags;
+
+	check_object_size(from, n, true);
+	__ua_flags = uaccess_save_and_enable();
 	n = arm_copy_to_user(to, from, n);
 	uaccess_restore(__ua_flags);
 	return n;
 #else
+	check_object_size(from, n, true);
 	return arm_copy_to_user(to, from, n);
 #endif
 }
-- 
2.7.4


* [PATCH v4 06/12] arm64/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (4 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 05/12] ARM: uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 07/12] ia64/uaccess: " Kees Cook
                   ` (6 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on arm64. As done by KASAN in -next,
this renames the low-level functions to __arch_copy_*_user() so that a
static inline can do additional work before the copy.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/arm64/Kconfig               |  1 +
 arch/arm64/include/asm/uaccess.h | 29 ++++++++++++++++++++++-------
 arch/arm64/kernel/arm64ksyms.c   |  4 ++--
 arch/arm64/lib/copy_from_user.S  |  4 ++--
 arch/arm64/lib/copy_to_user.S    |  4 ++--
 5 files changed, 29 insertions(+), 13 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5a0a691d4220..9cdb2322c811 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -51,6 +51,7 @@ config ARM64
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_BITREVERSE
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_HUGE_VMAP
 	select HAVE_ARCH_JUMP_LABEL
 	select HAVE_ARCH_KASAN if SPARSEMEM_VMEMMAP && !(ARM64_16K_PAGES && ARM64_VA_BITS_48)
diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h
index 9e397a542756..92848b00e3cd 100644
--- a/arch/arm64/include/asm/uaccess.h
+++ b/arch/arm64/include/asm/uaccess.h
@@ -256,24 +256,39 @@ do {									\
 		-EFAULT;						\
 })
 
-extern unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n);
-extern unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_from_user(void *to, const void __user *from, unsigned long n);
+extern unsigned long __must_check __arch_copy_to_user(void __user *to, const void *from, unsigned long n);
 extern unsigned long __must_check __copy_in_user(void __user *to, const void __user *from, unsigned long n);
 extern unsigned long __must_check __clear_user(void __user *addr, unsigned long n);
 
+static inline unsigned long __must_check __copy_from_user(void *to, const void __user *from, unsigned long n)
+{
+	check_object_size(to, n, false);
+	return __arch_copy_from_user(to, from, n);
+}
+
+static inline unsigned long __must_check __copy_to_user(void __user *to, const void *from, unsigned long n)
+{
+	check_object_size(from, n, true);
+	return __arch_copy_to_user(to, from, n);
+}
+
 static inline unsigned long __must_check copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (access_ok(VERIFY_READ, from, n))
-		n = __copy_from_user(to, from, n);
-	else /* security hole - plug it */
+	if (access_ok(VERIFY_READ, from, n)) {
+		check_object_size(to, n, false);
+		n = __arch_copy_from_user(to, from, n);
+	} else /* security hole - plug it */
 		memset(to, 0, n);
 	return n;
 }
 
 static inline unsigned long __must_check copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (access_ok(VERIFY_WRITE, to, n))
-		n = __copy_to_user(to, from, n);
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		check_object_size(from, n, true);
+		n = __arch_copy_to_user(to, from, n);
+	}
 	return n;
 }
 
diff --git a/arch/arm64/kernel/arm64ksyms.c b/arch/arm64/kernel/arm64ksyms.c
index 678f30b05a45..2dc44406a7ad 100644
--- a/arch/arm64/kernel/arm64ksyms.c
+++ b/arch/arm64/kernel/arm64ksyms.c
@@ -34,8 +34,8 @@ EXPORT_SYMBOL(copy_page);
 EXPORT_SYMBOL(clear_page);
 
 	/* user mem (segment) */
-EXPORT_SYMBOL(__copy_from_user);
-EXPORT_SYMBOL(__copy_to_user);
+EXPORT_SYMBOL(__arch_copy_from_user);
+EXPORT_SYMBOL(__arch_copy_to_user);
 EXPORT_SYMBOL(__clear_user);
 EXPORT_SYMBOL(__copy_in_user);
 
diff --git a/arch/arm64/lib/copy_from_user.S b/arch/arm64/lib/copy_from_user.S
index 17e8306dca29..0b90497d4424 100644
--- a/arch/arm64/lib/copy_from_user.S
+++ b/arch/arm64/lib/copy_from_user.S
@@ -66,7 +66,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_from_user)
+ENTRY(__arch_copy_from_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -75,7 +75,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0				// Nothing to copy
 	ret
-ENDPROC(__copy_from_user)
+ENDPROC(__arch_copy_from_user)
 
 	.section .fixup,"ax"
 	.align	2
diff --git a/arch/arm64/lib/copy_to_user.S b/arch/arm64/lib/copy_to_user.S
index 21faae60f988..7a7efe255034 100644
--- a/arch/arm64/lib/copy_to_user.S
+++ b/arch/arm64/lib/copy_to_user.S
@@ -65,7 +65,7 @@
 	.endm
 
 end	.req	x5
-ENTRY(__copy_to_user)
+ENTRY(__arch_copy_to_user)
 ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(0)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	add	end, x0, x2
@@ -74,7 +74,7 @@ ALTERNATIVE("nop", __stringify(SET_PSTATE_PAN(1)), ARM64_ALT_PAN_NOT_UAO, \
 	    CONFIG_ARM64_PAN)
 	mov	x0, #0
 	ret
-ENDPROC(__copy_to_user)
+ENDPROC(__arch_copy_to_user)
 
 	.section .fixup,"ax"
 	.align	2
-- 
2.7.4


* [PATCH v4 07/12] ia64/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (5 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 06/12] arm64/uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 08/12] powerpc/uaccess: " Kees Cook
                   ` (5 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on ia64.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/ia64/Kconfig               |  1 +
 arch/ia64/include/asm/uaccess.h | 18 +++++++++++++++---
 2 files changed, 16 insertions(+), 3 deletions(-)

diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index f80758cb7157..32a87ef516a0 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -53,6 +53,7 @@ config IA64
 	select MODULES_USE_ELF_RELA
 	select ARCH_USE_CMPXCHG_LOCKREF
 	select HAVE_ARCH_AUDITSYSCALL
+	select HAVE_ARCH_HARDENED_USERCOPY
 	default y
 	help
 	  The Itanium Processor Family is Intel's 64-bit successor to
diff --git a/arch/ia64/include/asm/uaccess.h b/arch/ia64/include/asm/uaccess.h
index 2189d5ddc1ee..465c70982f40 100644
--- a/arch/ia64/include/asm/uaccess.h
+++ b/arch/ia64/include/asm/uaccess.h
@@ -241,12 +241,18 @@ extern unsigned long __must_check __copy_user (void __user *to, const void __use
 static inline unsigned long
 __copy_to_user (void __user *to, const void *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(from, count, true);
+
 	return __copy_user(to, (__force void __user *) from, count);
 }
 
 static inline unsigned long
 __copy_from_user (void *to, const void __user *from, unsigned long count)
 {
+	if (!__builtin_constant_p(count))
+		check_object_size(to, count, false);
+
 	return __copy_user((__force void __user *) to, from, count);
 }
 
@@ -258,8 +264,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	const void *__cu_from = (from);							\
 	long __cu_len = (n);								\
 											\
-	if (__access_ok(__cu_to, __cu_len, get_fs()))					\
-		__cu_len = __copy_user(__cu_to, (__force void __user *) __cu_from, __cu_len);	\
+	if (__access_ok(__cu_to, __cu_len, get_fs())) {					\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_from, __cu_len, true);			\
+		__cu_len = __copy_user(__cu_to, (__force void __user *)  __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
@@ -270,8 +279,11 @@ __copy_from_user (void *to, const void __user *from, unsigned long count)
 	long __cu_len = (n);								\
 											\
 	__chk_user_ptr(__cu_from);							\
-	if (__access_ok(__cu_from, __cu_len, get_fs()))					\
+	if (__access_ok(__cu_from, __cu_len, get_fs())) {				\
+		if (!__builtin_constant_p(n))						\
+			check_object_size(__cu_to, __cu_len, false);			\
 		__cu_len = __copy_user((__force void __user *) __cu_to, __cu_from, __cu_len);	\
+	}										\
 	__cu_len;									\
 })
 
-- 
2.7.4


* [PATCH v4 08/12] powerpc/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (6 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 07/12] ia64/uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 09/12] sparc/uaccess: " Kees Cook
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on powerpc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
---
 arch/powerpc/Kconfig               |  1 +
 arch/powerpc/include/asm/uaccess.h | 21 +++++++++++++++++++--
 2 files changed, 20 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 01f7464d9fea..b7a18b2604be 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -164,6 +164,7 @@ config PPC
 	select ARCH_HAS_UBSAN_SANITIZE_ALL
 	select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
 	select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config GENERIC_CSUM
 	def_bool CPU_LITTLE_ENDIAN
diff --git a/arch/powerpc/include/asm/uaccess.h b/arch/powerpc/include/asm/uaccess.h
index b7c20f0b8fbe..c1dc6c14deb8 100644
--- a/arch/powerpc/include/asm/uaccess.h
+++ b/arch/powerpc/include/asm/uaccess.h
@@ -310,10 +310,15 @@ static inline unsigned long copy_from_user(void *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_READ, from, n))
+	if (access_ok(VERIFY_READ, from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_tofrom_user((__force void __user *)to, from, n);
+	}
 	if ((unsigned long)from < TASK_SIZE) {
 		over = (unsigned long)from + n - TASK_SIZE;
+		if (!__builtin_constant_p(n - over))
+			check_object_size(to, n - over, false);
 		return __copy_tofrom_user((__force void __user *)to, from,
 				n - over) + over;
 	}
@@ -325,10 +330,15 @@ static inline unsigned long copy_to_user(void __user *to,
 {
 	unsigned long over;
 
-	if (access_ok(VERIFY_WRITE, to, n))
+	if (access_ok(VERIFY_WRITE, to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_tofrom_user(to, (__force void __user *)from, n);
+	}
 	if ((unsigned long)to < TASK_SIZE) {
 		over = (unsigned long)to + n - TASK_SIZE;
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n - over, true);
 		return __copy_tofrom_user(to, (__force void __user *)from,
 				n - over) + over;
 	}
@@ -372,6 +382,10 @@ static inline unsigned long __copy_from_user_inatomic(void *to,
 		if (ret == 0)
 			return 0;
 	}
+
+	if (!__builtin_constant_p(n))
+		check_object_size(to, n, false);
+
 	return __copy_tofrom_user((__force void __user *)to, from, n);
 }
 
@@ -398,6 +412,9 @@ static inline unsigned long __copy_to_user_inatomic(void __user *to,
 		if (ret == 0)
 			return 0;
 	}
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
+
 	return __copy_tofrom_user(to, (__force const void __user *)from, n);
 }
 
-- 
2.7.4


* [PATCH v4 09/12] sparc/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (7 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 08/12] powerpc/uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 10/12] s390/uaccess: " Kees Cook
                   ` (3 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on sparc.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/sparc/Kconfig                  |  1 +
 arch/sparc/include/asm/uaccess_32.h | 14 ++++++++++----
 arch/sparc/include/asm/uaccess_64.h | 11 +++++++++--
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 546293d9e6c5..59b09600dd32 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -43,6 +43,7 @@ config SPARC
 	select OLD_SIGSUSPEND
 	select ARCH_HAS_SG_CHAIN
 	select CPU_NO_EFFICIENT_FFS
+	select HAVE_ARCH_HARDENED_USERCOPY
 
 config SPARC32
 	def_bool !64BIT
diff --git a/arch/sparc/include/asm/uaccess_32.h b/arch/sparc/include/asm/uaccess_32.h
index 57aca2792d29..341a5a133f48 100644
--- a/arch/sparc/include/asm/uaccess_32.h
+++ b/arch/sparc/include/asm/uaccess_32.h
@@ -248,22 +248,28 @@ unsigned long __copy_user(void __user *to, const void __user *from, unsigned lon
 
 static inline unsigned long copy_to_user(void __user *to, const void *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) to, n))
+	if (n && __access_ok((unsigned long) to, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(from, n, true);
 		return __copy_user(to, (__force void __user *) from, n);
-	else
+	} else
 		return n;
 }
 
 static inline unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	if (!__builtin_constant_p(n))
+		check_object_size(from, n, true);
 	return __copy_user(to, (__force void __user *) from, n);
 }
 
 static inline unsigned long copy_from_user(void *to, const void __user *from, unsigned long n)
 {
-	if (n && __access_ok((unsigned long) from, n))
+	if (n && __access_ok((unsigned long) from, n)) {
+		if (!__builtin_constant_p(n))
+			check_object_size(to, n, false);
 		return __copy_user((__force void __user *) to, from, n);
-	else
+	} else
 		return n;
 }
 
diff --git a/arch/sparc/include/asm/uaccess_64.h b/arch/sparc/include/asm/uaccess_64.h
index e9a51d64974d..8bda94fab8e8 100644
--- a/arch/sparc/include/asm/uaccess_64.h
+++ b/arch/sparc/include/asm/uaccess_64.h
@@ -210,8 +210,12 @@ unsigned long copy_from_user_fixup(void *to, const void __user *from,
 static inline unsigned long __must_check
 copy_from_user(void *to, const void __user *from, unsigned long size)
 {
-	unsigned long ret = ___copy_from_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(to, size, false);
+
+	ret = ___copy_from_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_from_user_fixup(to, from, size);
 
@@ -227,8 +231,11 @@ unsigned long copy_to_user_fixup(void __user *to, const void *from,
 static inline unsigned long __must_check
 copy_to_user(void __user *to, const void *from, unsigned long size)
 {
-	unsigned long ret = ___copy_to_user(to, from, size);
+	unsigned long ret;
 
+	if (!__builtin_constant_p(size))
+		check_object_size(from, size, true);
+	ret = ___copy_to_user(to, from, size);
 	if (unlikely(ret))
 		ret = copy_to_user_fixup(to, from, size);
 	return ret;
-- 
2.7.4


* [PATCH v4 10/12] s390/uaccess: Enable hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (8 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 09/12] sparc/uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 11/12] mm: SLAB hardened usercopy support Kees Cook
                   ` (2 subsequent siblings)
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Enables CONFIG_HARDENED_USERCOPY checks on s390.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/s390/Kconfig       | 1 +
 arch/s390/lib/uaccess.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index a8c259059adf..9f694311c9ed 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -122,6 +122,7 @@ config S390
 	select HAVE_ALIGNED_STRUCT_PAGE if SLUB
 	select HAVE_ARCH_AUDITSYSCALL
 	select HAVE_ARCH_EARLY_PFN_TO_NID
+	select HAVE_ARCH_HARDENED_USERCOPY
 	select HAVE_ARCH_JUMP_LABEL
 	select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
 	select HAVE_ARCH_SECCOMP_FILTER
diff --git a/arch/s390/lib/uaccess.c b/arch/s390/lib/uaccess.c
index ae4de559e3a0..6986c20166f0 100644
--- a/arch/s390/lib/uaccess.c
+++ b/arch/s390/lib/uaccess.c
@@ -104,6 +104,7 @@ static inline unsigned long copy_from_user_mvcp(void *x, const void __user *ptr,
 
 unsigned long __copy_from_user(void *to, const void __user *from, unsigned long n)
 {
+	check_object_size(to, n, false);
 	if (static_branch_likely(&have_mvcos))
 		return copy_from_user_mvcos(to, from, n);
 	return copy_from_user_mvcp(to, from, n);
@@ -177,6 +178,7 @@ static inline unsigned long copy_to_user_mvcs(void __user *ptr, const void *x,
 
 unsigned long __copy_to_user(void __user *to, const void *from, unsigned long n)
 {
+	check_object_size(from, n, true);
 	if (static_branch_likely(&have_mvcos))
 		return copy_to_user_mvcos(to, from, n);
 	return copy_to_user_mvcs(to, from, n);
-- 
2.7.4


* [PATCH v4 11/12] mm: SLAB hardened usercopy support
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (9 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 10/12] s390/uaccess: " Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-20 20:27 ` [PATCH v4 12/12] mm: SLUB " Kees Cook
  2016-07-23  0:36 ` [PATCH v4 00/12] mm: Hardened usercopy Laura Abbott
  12 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLAB allocator to catch any copies that may span objects.

Based on code from PaX and grsecurity.

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Valdis Kletnieks <valdis.kletnieks@vt.edu>
---
 init/Kconfig |  1 +
 mm/slab.c    | 30 ++++++++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index f755a602d4a1..798c2020ee7c 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1757,6 +1757,7 @@ choice
 
 config SLAB
 	bool "SLAB"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	  The regular slab allocator that is established and known to work
 	  well in all environments. It organizes cache hot objects in
diff --git a/mm/slab.c b/mm/slab.c
index cc8bbc1e6bc9..5e2d5f349aca 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -4477,6 +4477,36 @@ static int __init slab_proc_init(void)
 module_init(slab_proc_init);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *cachep;
+	unsigned int objnr;
+	unsigned long offset;
+
+	/* Find and validate object. */
+	cachep = page->slab_cache;
+	objnr = obj_to_index(cachep, page, (void *)ptr);
+	BUG_ON(objnr >= cachep->num);
+
+	/* Find offset within object. */
+	offset = ptr - index_to_obj(cachep, page, objnr) - obj_offset(cachep);
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= cachep->object_size && n <= cachep->object_size - offset)
+		return NULL;
+
+	return cachep->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 /**
  * ksize - get the actual amount of memory allocated for a given object
  * @objp: Pointer to the object
-- 
2.7.4


* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (10 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 11/12] mm: SLAB hardened usercopy support Kees Cook
@ 2016-07-20 20:27 ` Kees Cook
  2016-07-25 19:16   ` Laura Abbott
  2016-07-23  0:36 ` [PATCH v4 00/12] mm: Hardened usercopy Laura Abbott
  12 siblings, 1 reply; 21+ messages in thread
From: Kees Cook @ 2016-07-20 20:27 UTC (permalink / raw)
  To: linux-arm-kernel

Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
SLUB allocator to catch any copies that may span objects. Includes a
redzone handling fix discovered by Michael Ellerman.

Based on code from PaX and grsecurity.
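
For the redzone adjustment, a worked example with assumed numbers may
help (the sizes below are made up, not taken from a real cache):

/*
 * Illustrative arithmetic only. With SLAB_RED_ZONE debugging each slot
 * is laid out as [left redzone][object][right redzone/padding]. Suppose
 * s->size = 96, s->red_left_pad = 16, slab_ksize(s) = 64, and the copy
 * starts 24 bytes into the slot:
 *
 *	offset = (ptr - page_address(page)) % 96;	-> 24
 *	offset < 16?  no, so not inside the left redzone
 *	offset -= 16;					-> 8
 *	allowed iff offset <= 64 && n <= 64 - 8, i.e. at most 56 bytes.
 */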

Signed-off-by: Kees Cook <keescook@chromium.org>
Tested-by: Michael Ellerman <mpe@ellerman.id.au>
---
 init/Kconfig |  1 +
 mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 798c2020ee7c..1c4711819dfd 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1765,6 +1765,7 @@ config SLAB
 
 config SLUB
 	bool "SLUB (Unqueued Allocator)"
+	select HAVE_HARDENED_USERCOPY_ALLOCATOR
 	help
 	   SLUB is a slab allocator that minimizes cache line usage
 	   instead of managing queues of cached objects (SLAB approach).
diff --git a/mm/slub.c b/mm/slub.c
index 825ff4505336..7dee3d9a5843 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
 EXPORT_SYMBOL(__kmalloc_node);
 #endif
 
+#ifdef CONFIG_HARDENED_USERCOPY
+/*
+ * Rejects objects that are incorrectly sized.
+ *
+ * Returns NULL if check passes, otherwise const char * to name of cache
+ * to indicate an error.
+ */
+const char *__check_heap_object(const void *ptr, unsigned long n,
+				struct page *page)
+{
+	struct kmem_cache *s;
+	unsigned long offset;
+	size_t object_size;
+
+	/* Find object and usable object size. */
+	s = page->slab_cache;
+	object_size = slab_ksize(s);
+
+	/* Find offset within object. */
+	offset = (ptr - page_address(page)) % s->size;
+
+	/* Adjust for redzone and reject if within the redzone. */
+	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
+		if (offset < s->red_left_pad)
+			return s->name;
+		offset -= s->red_left_pad;
+	}
+
+	/* Allow address range falling entirely within object size. */
+	if (offset <= object_size && n <= object_size - offset)
+		return NULL;
+
+	return s->name;
+}
+#endif /* CONFIG_HARDENED_USERCOPY */
+
 static size_t __ksize(const void *object)
 {
 	struct page *page;
-- 
2.7.4


* [PATCH v4 00/12] mm: Hardened usercopy
  2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
                   ` (11 preceding siblings ...)
  2016-07-20 20:27 ` [PATCH v4 12/12] mm: SLUB " Kees Cook
@ 2016-07-23  0:36 ` Laura Abbott
  2016-07-25 17:50   ` Kees Cook
  12 siblings, 1 reply; 21+ messages in thread
From: Laura Abbott @ 2016-07-23  0:36 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/20/2016 01:26 PM, Kees Cook wrote:
> Hi,
> [...]

Do you have/plan to have LKDTM or the like tests for this? I started reviewing
the slub code and was about to write some test cases for myself. I did that
for CMA as well which is a decent indicator these should all go somewhere.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 00/12] mm: Hardened usercopy
  2016-07-23  0:36 ` [PATCH v4 00/12] mm: Hardened usercopy Laura Abbott
@ 2016-07-25 17:50   ` Kees Cook
  0 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-07-25 17:50 UTC (permalink / raw)
  To: linux-arm-kernel

On Fri, Jul 22, 2016 at 5:36 PM, Laura Abbott <labbott@redhat.com> wrote:
> On 07/20/2016 01:26 PM, Kees Cook wrote:
>>
>> Hi,
>>
>> [This is now in my kspp -next tree, though I'd really love to add some
>> additional explicit Tested-bys, Reviewed-bys, or Acked-bys. If you've
>> looked through any part of this or have done any testing, please consider
>> sending an email with your "*-by:" line. :)]
>>
>> This is a start of the mainline port of PAX_USERCOPY[1]. After writing
>> tests (now in lkdtm in -next) for Casey's earlier port[2], I kept tweaking
>> things further and further until I ended up with a whole new patch series.
>> To that end, I took Rik, Laura, and other people's feedback along with
>> additional changes and clean-ups.
>>
>> Based on my understanding, PAX_USERCOPY was designed to catch a
>> few classes of flaws (mainly bad bounds checking) around the use of
>> copy_to_user()/copy_from_user(). These changes don't touch get_user() and
>> put_user(), since these operate on constant sized lengths, and tend to be
>> much less vulnerable. There are effectively three distinct protections in
>> the whole series, each of which I've given a separate CONFIG, though this
>> patch set is only the first of the three intended protections. (Generally
>> speaking, PAX_USERCOPY covers what I'm calling CONFIG_HARDENED_USERCOPY
>> (this) and CONFIG_HARDENED_USERCOPY_WHITELIST (future), and
>> PAX_USERCOPY_SLABS covers CONFIG_HARDENED_USERCOPY_SPLIT_KMALLOC
>> (future).)
>>
>> This series, which adds CONFIG_HARDENED_USERCOPY, checks that objects
>> being copied to/from userspace meet certain criteria:
>> - if address is a heap object, the size must not exceed the object's
>>   allocated size. (This will catch all kinds of heap overflow flaws.)
>> - if address range is in the current process stack, it must be within
>>   a valid stack frame (if such checking is possible) or at least entirely
>>   within the current process's stack. (This could catch large lengths that
>>   would have extended beyond the current process stack, or overflows if
>>   their length extends back into the original stack.)
>> - if the address range is part of kernel data, rodata, or bss, allow it.
>> - if address range is page-allocated, that it doesn't span multiple
>>   allocations (excepting Reserved and CMA pages).
>> - if address is within the kernel text, reject it.
>> - everything else is accepted
>>
>> The patches in the series are:
>> - Support for examination of CMA page types:
>>         1- mm: Add is_migrate_cma_page
>> - Support for arch-specific stack frame checking (which will likely be
>>   replaced in the future by Josh's more comprehensive unwinder):
>>         2- mm: Implement stack frame object validation
>> - The core copy_to/from_user() checks, without the slab object checks:
>>         3- mm: Hardened usercopy
>> - Per-arch enablement of the protection:
>>         4- x86/uaccess: Enable hardened usercopy
>>         5- ARM: uaccess: Enable hardened usercopy
>>         6- arm64/uaccess: Enable hardened usercopy
>>         7- ia64/uaccess: Enable hardened usercopy
>>         8- powerpc/uaccess: Enable hardened usercopy
>>         9- sparc/uaccess: Enable hardened usercopy
>>        10- s390/uaccess: Enable hardened usercopy
>> - The heap allocator implementation of object size checking:
>>        11- mm: SLAB hardened usercopy support
>>        12- mm: SLUB hardened usercopy support
>>
>> Some notes:
>>
>> - This is expected to apply on top of -next which contains fixes for the
>>   position of _etext on both arm and arm64, though it has some conflicts
>>   with KASAN that should be trivial to fix up. Also in -next are the
>>   tests for this protection (in lkdtm), prefixed with USERCOPY_.
>>
>> - I couldn't detect a measurable performance change with these features
>>   enabled. Kernel build times were unchanged, hackbench was unchanged,
>>   etc. I think we could flip this to "on by default" at some point, but
>>   for now, I'm leaving it off until I can get some more definitive
>>   measurements. I would love if someone with greater familiarity with
>>   perf could give this a spin and report results.
>>
>> - The SLOB support extracted from grsecurity seems entirely broken. I
>>   have no idea what's going on there, I spent my time testing SLAB and
>>   SLUB. Having someone else look at SLOB would be nice, but this series
>>   doesn't depend on it.
>>
>> Additional features that would be nice, but aren't blocking this series:
>>
>> - Needs more architecture support for stack frame checking (only x86 now,
>>   but it seems Josh will have a good solution for this soon).
>>
>>
>> Thanks!
>>
>> -Kees
>>
>> [1] https://grsecurity.net/download.php "grsecurity - test kernel patch"
>> [2] http://www.openwall.com/lists/kernel-hardening/2016/05/19/5
>>
>> v4:
>> - handle CMA pages, labbott
>> - update stack checker comments, labbott
>> - check for vmalloc addresses, labbott
>> - deal with KASAN in -next changing arm64 copy*user calls
>> - check for linear mappings at runtime instead of via CONFIG
>>
>> v3:
>> - switch to using BUG for better Oops integration
>> - when checking page allocations, check each for Reserved
>> - use enums for the stack check return for readability
>>
>> v2:
>> - added s390 support
>> - handle slub red zone
>> - disallow writes to rodata area
>> - stack frame walker now CONFIG-controlled arch-specific helper
>>
>
> Do you have/plan to have LKDTM or the like tests for this? I started
> reviewing
> the slub code and was about to write some test cases for myself. I did that
> for CMA as well which is a decent indicator these should all go somewhere.

Yeah, there is an entire section of tests in lkdtm for the usercopy
protection. I didn't add anything for CMA or multipage allocations
yet, though. Feel free to add those if you have a moment! :) It's on
my todo list.
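
For anyone wanting to extend those, the existing usercopy tests boil down
to something like the sketch below (illustrative only, not the actual
lkdtm code; the function name, sizes, and messages are made up):

/* Sketch only -- not the actual lkdtm test code. */
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>
#include <linux/printk.h>

static noinline void usercopy_heap_size_sketch(void __user *user_addr)
{
	size_t good_len = 512;
	size_t bad_len = 1024;		/* larger than the allocation */
	unsigned char *buf;

	buf = kmalloc(good_len, GFP_KERNEL);
	if (!buf)
		return;
	memset(buf, 'A', good_len);

	/* Within the object: the hardened check should stay quiet. */
	if (copy_to_user(user_addr, buf, good_len))
		pr_warn("in-bounds copy_to_user unexpectedly failed\n");

	/*
	 * Past the end of the object: with CONFIG_HARDENED_USERCOPY the
	 * size check should BUG here, before any data is copied.
	 */
	if (copy_to_user(user_addr, buf, bad_len))
		pr_warn("out-of-bounds copy_to_user returned an error\n");

	kfree(buf);
}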

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-20 20:27 ` [PATCH v4 12/12] mm: SLUB " Kees Cook
@ 2016-07-25 19:16   ` Laura Abbott
  2016-07-25 20:45     ` Kees Cook
  2016-07-25 21:42     ` Rik van Riel
  0 siblings, 2 replies; 21+ messages in thread
From: Laura Abbott @ 2016-07-25 19:16 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/20/2016 01:27 PM, Kees Cook wrote:
> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
> SLUB allocator to catch any copies that may span objects. Includes a
> redzone handling fix discovered by Michael Ellerman.
>
> Based on code from PaX and grsecurity.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
> ---
>  init/Kconfig |  1 +
>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>  2 files changed, 37 insertions(+)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index 798c2020ee7c..1c4711819dfd 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1765,6 +1765,7 @@ config SLAB
>
>  config SLUB
>  	bool "SLUB (Unqueued Allocator)"
> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
>  	help
>  	   SLUB is a slab allocator that minimizes cache line usage
>  	   instead of managing queues of cached objects (SLAB approach).
> diff --git a/mm/slub.c b/mm/slub.c
> index 825ff4505336..7dee3d9a5843 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int node)
>  EXPORT_SYMBOL(__kmalloc_node);
>  #endif
>
> +#ifdef CONFIG_HARDENED_USERCOPY
> +/*
> + * Rejects objects that are incorrectly sized.
> + *
> + * Returns NULL if check passes, otherwise const char * to name of cache
> + * to indicate an error.
> + */
> +const char *__check_heap_object(const void *ptr, unsigned long n,
> +				struct page *page)
> +{
> +	struct kmem_cache *s;
> +	unsigned long offset;
> +	size_t object_size;
> +
> +	/* Find object and usable object size. */
> +	s = page->slab_cache;
> +	object_size = slab_ksize(s);
> +
> +	/* Find offset within object. */
> +	offset = (ptr - page_address(page)) % s->size;
> +
> +	/* Adjust for redzone and reject if within the redzone. */
> +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
> +		if (offset < s->red_left_pad)
> +			return s->name;
> +		offset -= s->red_left_pad;
> +	}
> +
> +	/* Allow address range falling entirely within object size. */
> +	if (offset <= object_size && n <= object_size - offset)
> +		return NULL;
> +
> +	return s->name;
> +}
> +#endif /* CONFIG_HARDENED_USERCOPY */
> +

I compared this against what check_valid_pointer does for SLUB_DEBUG
checking. I was hoping we could utilize that function to avoid
duplication but a) __check_heap_object needs to allow accesses anywhere
in the object, not just the beginning b) accessing page->objects
is racy without the addition of locking in SLUB_DEBUG.
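
For reference, the SLUB_DEBUG helper being compared against looks roughly
like this (a simplified paraphrase of mm/slub.c from around that time, not
the exact code; the real version also compensates for the left redzone):

static inline int check_valid_pointer(struct kmem_cache *s,
				      struct page *page, void *object)
{
	void *base = page_address(page);

	if (!object)
		return 1;
	/*
	 * Only object *starts* are considered valid: the pointer has to sit
	 * above the page base, below the last object, and exactly on an
	 * s->size boundary -- and page->objects is consulted, which is what
	 * makes reusing this racy without the SLUB_DEBUG locking.
	 */
	if (object < base ||
	    object >= base + page->objects * s->size ||
	    (object - base) % s->size)
		return 0;
	return 1;
}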

Still, the ptr < page_address(page) check from __check_heap_object would
be good to add to avoid generating garbage large offsets and trying to
infer C math.
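
To make that concrete, a worked example (values made up, 64-bit assumed):

/*
 * Suppose ptr ends up 16 bytes below page_address(page):
 *
 *   ptr - page_address(page)   ==  -16
 *   -16 % s->size              ==  -16                  (C99 truncation)
 *   (unsigned long)(-16)       ==  0xfffffffffffffff0   stored in offset
 *
 * The later "offset <= object_size" test happens to reject that, but only
 * as a side effect of how the signed-to-unsigned conversions fall out; the
 * explicit ptr < page_address(page) test below avoids having to reason
 * about that arithmetic at all.
 */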

diff --git a/mm/slub.c b/mm/slub.c
index 7dee3d9..5370e4f 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr, unsigned long n,
         s = page->slab_cache;
         object_size = slab_ksize(s);
  
+       if (ptr < page_address(page))
+               return s->name;
+
         /* Find offset within object. */
         offset = (ptr - page_address(page)) % s->size;
  

With that, you can add

Reviewed-by: Laura Abbott <labbott@redhat.com>

>  static size_t __ksize(const void *object)
>  {
>  	struct page *page;
>

Thanks,
Laura

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-25 19:16   ` Laura Abbott
@ 2016-07-25 20:45     ` Kees Cook
  2016-07-26  0:54       ` Laura Abbott
  2016-07-25 21:42     ` Rik van Riel
  1 sibling, 1 reply; 21+ messages in thread
From: Kees Cook @ 2016-07-25 20:45 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:
> On 07/20/2016 01:27 PM, Kees Cook wrote:
>>
>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>> SLUB allocator to catch any copies that may span objects. Includes a
>> redzone handling fix discovered by Michael Ellerman.
>>
>> Based on code from PaX and grsecurity.
>>
>> Signed-off-by: Kees Cook <keescook@chromium.org>
>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
>> ---
>>  init/Kconfig |  1 +
>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>  2 files changed, 37 insertions(+)
>>
>> diff --git a/init/Kconfig b/init/Kconfig
>> index 798c2020ee7c..1c4711819dfd 100644
>> --- a/init/Kconfig
>> +++ b/init/Kconfig
>> @@ -1765,6 +1765,7 @@ config SLAB
>>
>>  config SLUB
>>         bool "SLUB (Unqueued Allocator)"
>> +       select HAVE_HARDENED_USERCOPY_ALLOCATOR
>>         help
>>            SLUB is a slab allocator that minimizes cache line usage
>>            instead of managing queues of cached objects (SLAB approach).
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 825ff4505336..7dee3d9a5843 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int
>> node)
>>  EXPORT_SYMBOL(__kmalloc_node);
>>  #endif
>>
>> +#ifdef CONFIG_HARDENED_USERCOPY
>> +/*
>> + * Rejects objects that are incorrectly sized.
>> + *
>> + * Returns NULL if check passes, otherwise const char * to name of cache
>> + * to indicate an error.
>> + */
>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>> +                               struct page *page)
>> +{
>> +       struct kmem_cache *s;
>> +       unsigned long offset;
>> +       size_t object_size;
>> +
>> +       /* Find object and usable object size. */
>> +       s = page->slab_cache;
>> +       object_size = slab_ksize(s);
>> +
>> +       /* Find offset within object. */
>> +       offset = (ptr - page_address(page)) % s->size;
>> +
>> +       /* Adjust for redzone and reject if within the redzone. */
>> +       if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
>> +               if (offset < s->red_left_pad)
>> +                       return s->name;
>> +               offset -= s->red_left_pad;
>> +       }
>> +
>> +       /* Allow address range falling entirely within object size. */
>> +       if (offset <= object_size && n <= object_size - offset)
>> +               return NULL;
>> +
>> +       return s->name;
>> +}
>> +#endif /* CONFIG_HARDENED_USERCOPY */
>> +
>
>
> I compared this against what check_valid_pointer does for SLUB_DEBUG
> checking. I was hoping we could utilize that function to avoid
> duplication but a) __check_heap_object needs to allow accesses anywhere
> in the object, not just the beginning b) accessing page->objects
> is racy without the addition of locking in SLUB_DEBUG.
>
> Still, the ptr < page_address(page) check from __check_heap_object would
> be good to add to avoid generating garbage large offsets and trying to
> infer C math.
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 7dee3d9..5370e4f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr,
> unsigned long n,
>         s = page->slab_cache;
>         object_size = slab_ksize(s);
>  +       if (ptr < page_address(page))
> +               return s->name;
> +
>         /* Find offset within object. */
>         offset = (ptr - page_address(page)) % s->size;
>
> With that, you can add
>
> Reviewed-by: Laura Abbott <labbott@redhat.com>

Cool, I'll add that.

Should I add your reviewed-by for this patch only or for the whole series?

Thanks!

-Kees

>
>>  static size_t __ksize(const void *object)
>>  {
>>         struct page *page;
>>
>
> Thanks,
> Laura



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-25 19:16   ` Laura Abbott
  2016-07-25 20:45     ` Kees Cook
@ 2016-07-25 21:42     ` Rik van Riel
  2016-07-25 23:29       ` Laura Abbott
  1 sibling, 1 reply; 21+ messages in thread
From: Rik van Riel @ 2016-07-25 21:42 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
> On 07/20/2016 01:27 PM, Kees Cook wrote:
> > Under CONFIG_HARDENED_USERCOPY, this adds object size checking to
> > the
> > SLUB allocator to catch any copies that may span objects. Includes
> > a
> > redzone handling fix discovered by Michael Ellerman.
> > 
> > Based on code from PaX and grsecurity.
> > 
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > Tested-by: Michael Ellerman <mpe@ellerman.id.au>
> > ---
> >  init/Kconfig |  1 +
> >  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
> >  2 files changed, 37 insertions(+)
> > 
> > diff --git a/init/Kconfig b/init/Kconfig
> > index 798c2020ee7c..1c4711819dfd 100644
> > --- a/init/Kconfig
> > +++ b/init/Kconfig
> > @@ -1765,6 +1765,7 @@ config SLAB
> > 
> >  config SLUB
> >  	bool "SLUB (Unqueued Allocator)"
> > +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
> >  	help
> >  	   SLUB is a slab allocator that minimizes cache line
> > usage
> >  	   instead of managing queues of cached objects (SLAB
> > approach).
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 825ff4505336..7dee3d9a5843 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t
> > flags, int node)
> >  EXPORT_SYMBOL(__kmalloc_node);
> >  #endif
> > 
> > +#ifdef CONFIG_HARDENED_USERCOPY
> > +/*
> > + * Rejects objects that are incorrectly sized.
> > + *
> > + * Returns NULL if check passes, otherwise const char * to name of
> > cache
> > + * to indicate an error.
> > + */
> > +const char *__check_heap_object(const void *ptr, unsigned long n,
> > +				struct page *page)
> > +{
> > +	struct kmem_cache *s;
> > +	unsigned long offset;
> > +	size_t object_size;
> > +
> > +	/* Find object and usable object size. */
> > +	s = page->slab_cache;
> > +	object_size = slab_ksize(s);
> > +
> > +	/* Find offset within object. */
> > +	offset = (ptr - page_address(page)) % s->size;
> > +
> > +	/* Adjust for redzone and reject if within the redzone. */
> > +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
> > +		if (offset < s->red_left_pad)
> > +			return s->name;
> > +		offset -= s->red_left_pad;
> > +	}
> > +
> > +	/* Allow address range falling entirely within object
> > size. */
> > +	if (offset <= object_size && n <= object_size - offset)
> > +		return NULL;
> > +
> > +	return s->name;
> > +}
> > +#endif /* CONFIG_HARDENED_USERCOPY */
> > +
> 
> I compared this against what check_valid_pointer does for SLUB_DEBUG
> checking. I was hoping we could utilize that function to avoid
> duplication but a) __check_heap_object needs to allow accesses
> anywhere
> in the object, not just the beginning b) accessing page->objects
> is racy without the addition of locking in SLUB_DEBUG.
> 
> Still, the ptr < page_address(page) check from __check_heap_object
> would
> be good to add to avoid generating garbage large offsets and trying
> to
> infer C math.
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 7dee3d9..5370e4f 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void
> *ptr, unsigned long n,
>          s = page->slab_cache;
>          object_size = slab_ksize(s);
>   
> +       if (ptr < page_address(page))
> +               return s->name;
> +
>          /* Find offset within object. */
>          offset = (ptr - page_address(page)) % s->size;
> 

I don't get it, isn't that already guaranteed because we
look for the page that ptr is in, before __check_heap_object
is called?

Specifically, in patch 3/12:

+???????page = virt_to_head_page(ptr);
+
+???????/* Check slab allocator for flags and size. */
+???????if (PageSlab(page))
+???????????????return __check_heap_object(ptr, n, page);

How can that generate a ptr that is not inside the page?

What am I overlooking?  And, should it be in the changelog or
a comment? :)

-- 

All Rights Reversed.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-25 21:42     ` Rik van Riel
@ 2016-07-25 23:29       ` Laura Abbott
  2016-07-26  0:22         ` Rik van Riel
  0 siblings, 1 reply; 21+ messages in thread
From: Laura Abbott @ 2016-07-25 23:29 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/25/2016 02:42 PM, Rik van Riel wrote:
> On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
>> On 07/20/2016 01:27 PM, Kees Cook wrote:
>>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to
>>> the
>>> SLUB allocator to catch any copies that may span objects. Includes
>>> a
>>> redzone handling fix discovered by Michael Ellerman.
>>>
>>> Based on code from PaX and grsecurity.
>>>
>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
>>> ---
>>>  init/Kconfig |  1 +
>>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 37 insertions(+)
>>>
>>> diff --git a/init/Kconfig b/init/Kconfig
>>> index 798c2020ee7c..1c4711819dfd 100644
>>> --- a/init/Kconfig
>>> +++ b/init/Kconfig
>>> @@ -1765,6 +1765,7 @@ config SLAB
>>>
>>>  config SLUB
>>>  	bool "SLUB (Unqueued Allocator)"
>>> +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
>>>  	help
>>>  	   SLUB is a slab allocator that minimizes cache line
>>> usage
>>>  	   instead of managing queues of cached objects (SLAB
>>> approach).
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 825ff4505336..7dee3d9a5843 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t
>>> flags, int node)
>>>  EXPORT_SYMBOL(__kmalloc_node);
>>>  #endif
>>>
>>> +#ifdef CONFIG_HARDENED_USERCOPY
>>> +/*
>>> + * Rejects objects that are incorrectly sized.
>>> + *
>>> + * Returns NULL if check passes, otherwise const char * to name of
>>> cache
>>> + * to indicate an error.
>>> + */
>>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>>> +				struct page *page)
>>> +{
>>> +	struct kmem_cache *s;
>>> +	unsigned long offset;
>>> +	size_t object_size;
>>> +
>>> +	/* Find object and usable object size. */
>>> +	s = page->slab_cache;
>>> +	object_size = slab_ksize(s);
>>> +
>>> +	/* Find offset within object. */
>>> +	offset = (ptr - page_address(page)) % s->size;
>>> +
>>> +	/* Adjust for redzone and reject if within the redzone. */
>>> +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
>>> +		if (offset < s->red_left_pad)
>>> +			return s->name;
>>> +		offset -= s->red_left_pad;
>>> +	}
>>> +
>>> +	/* Allow address range falling entirely within object
>>> size. */
>>> +	if (offset <= object_size && n <= object_size - offset)
>>> +		return NULL;
>>> +
>>> +	return s->name;
>>> +}
>>> +#endif /* CONFIG_HARDENED_USERCOPY */
>>> +
>>
>> I compared this against what check_valid_pointer does for SLUB_DEBUG
>> checking. I was hoping we could utilize that function to avoid
>> duplication but a) __check_heap_object needs to allow accesses
>> anywhere
>> in the object, not just the beginning b) accessing page->objects
>> is racy without the addition of locking in SLUB_DEBUG.
>>
>> Still, the ptr < page_address(page) check from __check_heap_object
>> would
>> be good to add to avoid generating garbage large offsets and trying
>> to
>> infer C math.
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 7dee3d9..5370e4f 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void
>> *ptr, unsigned long n,
>>          s = page->slab_cache;
>>          object_size = slab_ksize(s);
>>
>> +       if (ptr < page_address(page))
>> +               return s->name;
>> +
>>          /* Find offset within object. */
>>          offset = (ptr - page_address(page)) % s->size;
>>
>
> I don't get it, isn't that already guaranteed because we
> look for the page that ptr is in, before __check_heap_object
> is called?
>
> Specifically, in patch 3/12:
>
> +       page = virt_to_head_page(ptr);
> +
> +       /* Check slab allocator for flags and size. */
> +       if (PageSlab(page))
> +               return __check_heap_object(ptr, n, page);
>
> How can that generate a ptr that is not inside the page?
>
> What am I overlooking?  And, should it be in the changelog or
> a comment? :)
>


I ran into the subtraction issue when the vmalloc detection wasn't
working on ARM64; somehow virt_to_head_page returned a page
that happened to have PageSlab set. I agree that if everything is
working properly this is redundant, but given the type of feature this
is, a little bit of redundancy against a system running off into the
weeds or bad patches might be warranted.

I'm not super attached to the check if other maintainers think it
is redundant. Updating the __check_heap_object header comment
with a note of what we are assuming could work.
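
To spell that scenario out, here is a sketch of the calling side (the
wrapper name is made up and this is not how patch 3/12 actually structures
its checks; virt_to_head_page(), PageSlab(), is_vmalloc_addr() and
__check_heap_object() are the real helpers):

/* Sketch only -- not the code from patch 3/12. */
static const char *check_heap_sketch(const void *ptr, unsigned long n)
{
	struct page *page;

	/*
	 * If this test is skipped or broken (as in the ARM64 case above),
	 * the linear-map arithmetic below runs on an address it was never
	 * valid for...
	 */
	if (is_vmalloc_addr(ptr))
		return NULL;

	/*
	 * ...and the struct page it lands on is essentially arbitrary: it
	 * can have PageSlab set by accident, and its page_address() need
	 * not be <= ptr, which is the case the extra
	 * ptr < page_address(page) test guards against.
	 */
	page = virt_to_head_page(ptr);
	if (PageSlab(page))
		return __check_heap_object(ptr, n, page);

	return NULL;
}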

Thanks,
Laura

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-25 23:29       ` Laura Abbott
@ 2016-07-26  0:22         ` Rik van Riel
  0 siblings, 0 replies; 21+ messages in thread
From: Rik van Riel @ 2016-07-26  0:22 UTC (permalink / raw)
  To: linux-arm-kernel

On Mon, 2016-07-25 at 16:29 -0700, Laura Abbott wrote:
> On 07/25/2016 02:42 PM, Rik van Riel wrote:
> > On Mon, 2016-07-25 at 12:16 -0700, Laura Abbott wrote:
> > > On 07/20/2016 01:27 PM, Kees Cook wrote:
> > > > Under CONFIG_HARDENED_USERCOPY, this adds object size checking
> > > > to
> > > > the
> > > > SLUB allocator to catch any copies that may span objects.
> > > > Includes
> > > > a
> > > > redzone handling fix discovered by Michael Ellerman.
> > > > 
> > > > Based on code from PaX and grsecurity.
> > > > 
> > > > Signed-off-by: Kees Cook <keescook@chromium.org>
> > > > Tested-by: Michael Ellerman <mpe@ellerman.id.au>
> > > > ---
> > > >  init/Kconfig |  1 +
> > > >  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
> > > >  2 files changed, 37 insertions(+)
> > > > 
> > > > diff --git a/init/Kconfig b/init/Kconfig
> > > > index 798c2020ee7c..1c4711819dfd 100644
> > > > --- a/init/Kconfig
> > > > +++ b/init/Kconfig
> > > > @@ -1765,6 +1765,7 @@ config SLAB
> > > > 
> > > >  config SLUB
> > > >  	bool "SLUB (Unqueued Allocator)"
> > > > +	select HAVE_HARDENED_USERCOPY_ALLOCATOR
> > > >  	help
> > > >  	   SLUB is a slab allocator that minimizes cache line
> > > > usage
> > > >  	   instead of managing queues of cached objects (SLAB
> > > > approach).
> > > > diff --git a/mm/slub.c b/mm/slub.c
> > > > index 825ff4505336..7dee3d9a5843 100644
> > > > --- a/mm/slub.c
> > > > +++ b/mm/slub.c
> > > > @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t
> > > > flags, int node)
> > > >  EXPORT_SYMBOL(__kmalloc_node);
> > > >  #endif
> > > > 
> > > > +#ifdef CONFIG_HARDENED_USERCOPY
> > > > +/*
> > > > + * Rejects objects that are incorrectly sized.
> > > > + *
> > > > + * Returns NULL if check passes, otherwise const char * to
> > > > name of
> > > > cache
> > > > + * to indicate an error.
> > > > + */
> > > > +const char *__check_heap_object(const void *ptr, unsigned long
> > > > n,
> > > > +				struct page *page)
> > > > +{
> > > > +	struct kmem_cache *s;
> > > > +	unsigned long offset;
> > > > +	size_t object_size;
> > > > +
> > > > +	/* Find object and usable object size. */
> > > > +	s = page->slab_cache;
> > > > +	object_size = slab_ksize(s);
> > > > +
> > > > +	/* Find offset within object. */
> > > > +	offset = (ptr - page_address(page)) % s->size;
> > > > +
> > > > +	/* Adjust for redzone and reject if within the
> > > > redzone. */
> > > > +	if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
> > > > +		if (offset < s->red_left_pad)
> > > > +			return s->name;
> > > > +		offset -= s->red_left_pad;
> > > > +	}
> > > > +
> > > > +	/* Allow address range falling entirely within object
> > > > size. */
> > > > +	if (offset <= object_size && n <= object_size -
> > > > offset)
> > > > +		return NULL;
> > > > +
> > > > +	return s->name;
> > > > +}
> > > > +#endif /* CONFIG_HARDENED_USERCOPY */
> > > > +
> > > 
> > > I compared this against what check_valid_pointer does for
> > > SLUB_DEBUG
> > > checking. I was hoping we could utilize that function to avoid
> > > duplication but a) __check_heap_object needs to allow accesses
> > > anywhere
> > > in the object, not just the beginning b) accessing page->objects
> > > is racy without the addition of locking in SLUB_DEBUG.
> > > 
> > > Still, the ptr < page_address(page) check from
> > > __check_heap_object
> > > would
> > > be good to add to avoid generating garbage large offsets and
> > > trying
> > > to
> > > infer C math.
> > > 
> > > diff --git a/mm/slub.c b/mm/slub.c
> > > index 7dee3d9..5370e4f 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void
> > > *ptr, unsigned long n,
> > >          s = page->slab_cache;
> > >          object_size = slab_ksize(s);
> > > 
> > > +       if (ptr < page_address(page))
> > > +               return s->name;
> > > +
> > >          /* Find offset within object. */
> > >          offset = (ptr - page_address(page)) % s->size;
> > > 
> > 
> > I don't get it, isn't that already guaranteed because we
> > look for the page that ptr is in, before __check_heap_object
> > is called?
> > 
> > Specifically, in patch 3/12:
> > 
> > +       page = virt_to_head_page(ptr);
> > +
> > +       /* Check slab allocator for flags and size. */
> > +       if (PageSlab(page))
> > +               return __check_heap_object(ptr, n, page);
> > 
> > How can that generate a ptr that is not inside the page?
> > 
> > What am I overlooking?  And, should it be in the changelog or
> > a comment? :)
> > 
> 
> 
> I ran into the subtraction issue when the vmalloc detection wasn't
> working on ARM64, somehow virt_to_head_page turned into a page
> that happened to have PageSlab set. I agree if everything is working
> properly this is redundant but given the type of feature this is, a
> little bit of redundancy against a system running off into the weeds
> or bad patches might be warranted.
> 
That's fair.  I have no objection to the check, but would
like to see it documented, since it does look a little out
of place.

-- 

All Rights Reversed.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* [PATCH v4 12/12] mm: SLUB hardened usercopy support
  2016-07-25 20:45     ` Kees Cook
@ 2016-07-26  0:54       ` Laura Abbott
  0 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-07-26  0:54 UTC (permalink / raw)
  To: linux-arm-kernel

On 07/25/2016 01:45 PM, Kees Cook wrote:
> On Mon, Jul 25, 2016 at 12:16 PM, Laura Abbott <labbott@redhat.com> wrote:
>> On 07/20/2016 01:27 PM, Kees Cook wrote:
>>>
>>> Under CONFIG_HARDENED_USERCOPY, this adds object size checking to the
>>> SLUB allocator to catch any copies that may span objects. Includes a
>>> redzone handling fix discovered by Michael Ellerman.
>>>
>>> Based on code from PaX and grsecurity.
>>>
>>> Signed-off-by: Kees Cook <keescook@chromium.org>
>>> Tested-by: Michael Ellerman <mpe@ellerman.id.au>
>>> ---
>>>  init/Kconfig |  1 +
>>>  mm/slub.c    | 36 ++++++++++++++++++++++++++++++++++++
>>>  2 files changed, 37 insertions(+)
>>>
>>> diff --git a/init/Kconfig b/init/Kconfig
>>> index 798c2020ee7c..1c4711819dfd 100644
>>> --- a/init/Kconfig
>>> +++ b/init/Kconfig
>>> @@ -1765,6 +1765,7 @@ config SLAB
>>>
>>>  config SLUB
>>>         bool "SLUB (Unqueued Allocator)"
>>> +       select HAVE_HARDENED_USERCOPY_ALLOCATOR
>>>         help
>>>            SLUB is a slab allocator that minimizes cache line usage
>>>            instead of managing queues of cached objects (SLAB approach).
>>> diff --git a/mm/slub.c b/mm/slub.c
>>> index 825ff4505336..7dee3d9a5843 100644
>>> --- a/mm/slub.c
>>> +++ b/mm/slub.c
>>> @@ -3614,6 +3614,42 @@ void *__kmalloc_node(size_t size, gfp_t flags, int
>>> node)
>>>  EXPORT_SYMBOL(__kmalloc_node);
>>>  #endif
>>>
>>> +#ifdef CONFIG_HARDENED_USERCOPY
>>> +/*
>>> + * Rejects objects that are incorrectly sized.
>>> + *
>>> + * Returns NULL if check passes, otherwise const char * to name of cache
>>> + * to indicate an error.
>>> + */
>>> +const char *__check_heap_object(const void *ptr, unsigned long n,
>>> +                               struct page *page)
>>> +{
>>> +       struct kmem_cache *s;
>>> +       unsigned long offset;
>>> +       size_t object_size;
>>> +
>>> +       /* Find object and usable object size. */
>>> +       s = page->slab_cache;
>>> +       object_size = slab_ksize(s);
>>> +
>>> +       /* Find offset within object. */
>>> +       offset = (ptr - page_address(page)) % s->size;
>>> +
>>> +       /* Adjust for redzone and reject if within the redzone. */
>>> +       if (kmem_cache_debug(s) && s->flags & SLAB_RED_ZONE) {
>>> +               if (offset < s->red_left_pad)
>>> +                       return s->name;
>>> +               offset -= s->red_left_pad;
>>> +       }
>>> +
>>> +       /* Allow address range falling entirely within object size. */
>>> +       if (offset <= object_size && n <= object_size - offset)
>>> +               return NULL;
>>> +
>>> +       return s->name;
>>> +}
>>> +#endif /* CONFIG_HARDENED_USERCOPY */
>>> +
>>
>>
>> I compared this against what check_valid_pointer does for SLUB_DEBUG
>> checking. I was hoping we could utilize that function to avoid
>> duplication but a) __check_heap_object needs to allow accesses anywhere
>> in the object, not just the beginning b) accessing page->objects
>> is racy without the addition of locking in SLUB_DEBUG.
>>
>> Still, the ptr < page_address(page) check from __check_heap_object would
>> be good to add to avoid generating garbage large offsets and trying to
>> infer C math.
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 7dee3d9..5370e4f 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -3632,6 +3632,9 @@ const char *__check_heap_object(const void *ptr,
>> unsigned long n,
>>         s = page->slab_cache;
>>         object_size = slab_ksize(s);
>>  +       if (ptr < page_address(page))
>> +               return s->name;
>> +
>>         /* Find offset within object. */
>>         offset = (ptr - page_address(page)) % s->size;
>>
>> With that, you can add
>>
>> Reviewed-by: Laura Abbott <labbott@redhat.com>
>
> Cool, I'll add that.
>
> Should I add your reviewed-by for this patch only or for the whole series?
>
> Thanks!
>
> -Kees
>

Just this patch for now; I'm working through a couple of others.

>>
>>>  static size_t __ksize(const void *object)
>>>  {
>>>         struct page *page;
>>>
>>
>> Thanks,
>> Laura
>
>
>

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-07-26  0:54 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-20 20:26 [PATCH v4 00/12] mm: Hardened usercopy Kees Cook
2016-07-20 20:26 ` [PATCH v4 01/12] mm: Add is_migrate_cma_page Kees Cook
2016-07-20 20:26 ` [PATCH v4 02/12] mm: Implement stack frame object validation Kees Cook
2016-07-20 20:26 ` [PATCH v4 03/12] mm: Hardened usercopy Kees Cook
2016-07-20 20:26 ` [PATCH v4 04/12] x86/uaccess: Enable hardened usercopy Kees Cook
2016-07-20 20:27 ` [PATCH v4 05/12] ARM: uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 06/12] arm64/uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 07/12] ia64/uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 08/12] powerpc/uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 09/12] sparc/uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 10/12] s390/uaccess: " Kees Cook
2016-07-20 20:27 ` [PATCH v4 11/12] mm: SLAB hardened usercopy support Kees Cook
2016-07-20 20:27 ` [PATCH v4 12/12] mm: SLUB " Kees Cook
2016-07-25 19:16   ` Laura Abbott
2016-07-25 20:45     ` Kees Cook
2016-07-26  0:54       ` Laura Abbott
2016-07-25 21:42     ` Rik van Riel
2016-07-25 23:29       ` Laura Abbott
2016-07-26  0:22         ` Rik van Riel
2016-07-23  0:36 ` [PATCH v4 00/12] mm: Hardened usercopy Laura Abbott
2016-07-25 17:50   ` Kees Cook
