* [PATCH v4 0/4] Assorted improvements to usercopy
@ 2021-12-16 21:53 Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-16 21:53 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

The HARDENED_USERCOPY_PAGESPAN config option is hard to turn on because
much of the kernel uses non-compound high-order page allocations.
This patchset extracts the valuable parts of HARDENED_USERCOPY_PAGESPAN
and then removes the remainder.

v4:
 - Add the fourth patch to remove HARDENED_USERCOPY_PAGESPAN
v3:
 - Remove a now-unused variable
v2:
 - Prevent a NULL pointer dereference when a vmalloc-range pointer
   doesn't have an associated allocation (me)
 - Report better offsets than "0" (Kees)

Matthew Wilcox (Oracle) (4):
  mm/usercopy: Check kmap addresses properly
  mm/usercopy: Detect vmalloc overruns
  mm/usercopy: Detect compound page overruns
  usercopy: Remove HARDENED_USERCOPY_PAGESPAN

 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++
 mm/usercopy.c                    | 98 +++++++++-----------------------
 security/Kconfig                 | 13 +----
 4 files changed, 40 insertions(+), 82 deletions(-)

-- 
2.33.0


^ permalink raw reply	[flat|nested] 6+ messages in thread

* [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly
  2021-12-16 21:53 [PATCH v4 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
@ 2021-12-16 21:53 ` Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-16 21:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening, William Kucharski

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
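The check this patch adds can be sketched as a small userspace model (names such as kmap_copy_ok and MODEL_PAGE_SIZE are illustrative, not kernel API): a copy from a kmap'd highmem page is only valid if it stays within the single page that was mapped, regardless of the size of the underlying allocation.

```c
#include <stdbool.h>
#include <stdint.h>

#define MODEL_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

/*
 * Rough model of the kmap branch added to check_heap_object():
 * compute the address of the last byte of the page containing ptr
 * (mirroring "(unsigned long)ptr | (PAGE_SIZE - 1)" in the patch)
 * and reject any copy whose last byte lies beyond it.
 */
static bool kmap_copy_ok(uintptr_t ptr, unsigned long n)
{
	uintptr_t page_end = ptr | (MODEL_PAGE_SIZE - 1);

	return ptr + n - 1 <= page_end;
}
```

A copy of exactly one page starting at a page boundary passes; the same length starting one byte later crosses into the next page and is rejected.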

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+					offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
-- 
2.33.0



* [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns
  2021-12-16 21:53 [PATCH v4 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
@ 2021-12-16 21:53 ` Matthew Wilcox (Oracle)
  2021-12-17 13:07   ` Mark Hemment
  2021-12-16 21:53 ` [PATCH v4 3/4] mm/usercopy: Detect compound page overruns Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
  3 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-16 21:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening, William Kucharski

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.
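The bounds logic this patch adds can be modelled in userspace as follows (vmalloc_copy_ok is a hypothetical name; in the kernel the base and size come from the vm_struct returned by find_vm_area()): the copy's offset into the area plus its length must not exceed the area's size.

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Rough model of the vmalloc branch added to check_heap_object():
 * vm_addr/vm_size stand in for vm->addr/vm->size of the vm_struct
 * describing the area that contains ptr.
 */
static bool vmalloc_copy_ok(uintptr_t ptr, unsigned long n,
			    uintptr_t vm_addr, unsigned long vm_size)
{
	unsigned long offset = ptr - vm_addr;

	return offset + n <= vm_size;
}
```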

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
 #include <asm/sections.h>
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
-- 
2.33.0



* [PATCH v4 3/4] mm/usercopy: Detect compound page overruns
  2021-12-16 21:53 [PATCH v4 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
@ 2021-12-16 21:53 ` Matthew Wilcox (Oracle)
  2021-12-16 21:53 ` [PATCH v4 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-16 21:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening, William Kucharski

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
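The new PageHead() branch can be sketched like so (compound_copy_ok and MODEL_PAGE_SIZE are illustrative names, not kernel API): the offset from the start of the compound page plus the copy length must fit within page_size(page), which for an order-N compound page is PAGE_SIZE << N.

```c
#include <stdbool.h>
#include <stdint.h>

#define MODEL_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

/*
 * Rough model of the compound-page branch added to check_heap_object():
 * head_addr stands in for page_address(page) of the head page, and
 * MODEL_PAGE_SIZE << order models page_size(page).
 */
static bool compound_copy_ok(uintptr_t ptr, unsigned long n,
			     uintptr_t head_addr, unsigned int order)
{
	unsigned long size = MODEL_PAGE_SIZE << order;
	unsigned long offset = ptr - head_addr;

	return offset + n <= size;
}
```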

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
Reviewed-by: William Kucharski <william.kucharski@oracle.com>
---
 mm/usercopy.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..db2e8c4f79fd 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -194,11 +193,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +252,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
-- 
2.33.0



* [PATCH v4 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2021-12-16 21:53 [PATCH v4 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2021-12-16 21:53 ` [PATCH v4 3/4] mm/usercopy: Detect compound page overruns Matthew Wilcox (Oracle)
@ 2021-12-16 21:53 ` Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-12-16 21:53 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index db2e8c4f79fd..1ee1f0d74828 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -157,64 +157,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - page_address(page);
 		if (offset + n > page_size(page))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, page, to_user);
 	}
 }
 
diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
-- 
2.33.0



* Re: [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns
  2021-12-16 21:53 ` [PATCH v4 2/4] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
@ 2021-12-17 13:07   ` Mark Hemment
  0 siblings, 0 replies; 6+ messages in thread
From: Mark Hemment @ 2021-12-17 13:07 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: Kees Cook, linux-mm, linux-hardening, William Kucharski

On Thu, 16 Dec 2021 at 21:55, Matthew Wilcox (Oracle)
<willy@infradead.org> wrote:
>
> If you have a vmalloc() allocation, or an address from calling vmap(),
> you cannot overrun the vm_area which describes it, regardless of the
> size of the underlying allocation.  This probably doesn't do much for
> security because vmalloc comes with guard pages these days, but it
> prevents usercopy aborts when copying to a vmap() of smaller pages.
...
> +               offset = ptr - vm->addr;
> +               if (offset + n > vm->size)
> +                       usercopy_abort("vmalloc", NULL, to_user, offset, n);
> +               return;
> +       }

Instead of vm->size, call get_vm_area_size() so any guard page is
trimmed from the length.

Mark
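The difference can be illustrated with a small model (usable_size and copy_ok are hypothetical names; in the kernel, get_vm_area_size() returns vm->size minus the one trailing guard page that vmalloc() areas carry): a copy that reaches into the guard page passes a check against vm->size but fails against the trimmed size.

```c
#include <stdbool.h>
#include <stdint.h>

#define MODEL_PAGE_SIZE 4096UL	/* stand-in for the kernel's PAGE_SIZE */

/* Model of get_vm_area_size(): vm->size minus the trailing guard page. */
static unsigned long usable_size(unsigned long vm_size)
{
	return vm_size - MODEL_PAGE_SIZE;
}

/* The bounds check from the patch, parameterized on the limit used. */
static bool copy_ok(unsigned long offset, unsigned long n,
		    unsigned long limit)
{
	return offset + n <= limit;
}
```

For a two-page allocation, vm->size covers three pages (two usable plus the guard), so a copy extending into the guard page would slip past a vm->size check but is caught by the trimmed limit.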


