linux-hardening.vger.kernel.org archive mirror
* [PATCH 0/4] Assorted improvements to usercopy
@ 2022-01-10 23:15 Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-01-10 23:15 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

The HARDENED_USERCOPY_PAGESPAN config option is hard to turn on because
much of the kernel uses non-compound high-order page allocations.
This patchset extracts the valuable parts of HARDENED_USERCOPY_PAGESPAN
and then removes the remainder.
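
For context, the kind of allocation that trips the pagespan check is an
ordinary high-order allocation made without __GFP_COMP. A minimal sketch
(the buffer and the "ubuf" user pointer are illustrative, not taken from
any particular driver):

	/* A non-compound order-2 allocation, as much of the kernel does. */
	struct page *page = alloc_pages(GFP_KERNEL, 2);
	void *buf = page_address(page);

	/*
	 * Nothing marks these four pages as one allocation, so a copy
	 * that crosses a page boundary inside buf looks like a pagespan
	 * violation to HARDENED_USERCOPY_PAGESPAN even though it is
	 * perfectly valid.
	 */
	if (copy_to_user(ubuf, buf + PAGE_SIZE - 64, 128))
		return -EFAULT;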

v5:
 - Use get_vm_area_size() instead of ->size directly (Mark Hemment)
 - Rebase to current Linus (the slab merge changed a lot of code)
v4:
 - Add the fourth patch to remove HARDENED_USERCOPY_PAGESPAN
v3:
 - Remove a now-unused variable
v2:
 - Prevent a NULL pointer dereference when a vmalloc-range pointer
   doesn't have an associated allocation (me)
 - Report better offsets than "0" (Kees)

Matthew Wilcox (Oracle) (4):
  mm/usercopy: Check kmap addresses properly
  mm/usercopy: Detect vmalloc overruns
  mm/usercopy: Detect large folio overruns
  usercopy: Remove HARDENED_USERCOPY_PAGESPAN

 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++
 mm/usercopy.c                    | 97 +++++++++-----------------------
 security/Kconfig                 | 13 +----
 4 files changed, 39 insertions(+), 82 deletions(-)

-- 
2.33.0


* [PATCH 1/4] mm/usercopy: Check kmap addresses properly
  2022-01-10 23:15 [PATCH 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
@ 2022-01-10 23:15 ` Matthew Wilcox (Oracle)
  2022-05-10  3:37   ` Andrew Morton
  2022-01-10 23:15 ` [PATCH 2/4] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-01-10 23:15 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
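
To illustrate the boundary rule (a hypothetical caller, not code from
this series): pages in the kmap area are mapped independently, so two
virtually adjacent kmap addresses generally refer to unrelated physical
pages:

	/* Two separately kmapped highmem pages. */
	void *a = kmap(page1);
	void *b = kmap(page2);

	/*
	 * Even if b happens to directly follow a in the PKMAP area,
	 * copying past the end of a's page would run into whatever
	 * PKMAP slot comes next, so usercopy must reject any range
	 * that crosses a page boundary here.
	 */
	kunmap(page2);
	kunmap(page1);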

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 0a0b2b09b1b8..01fb76d101b0 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -149,6 +149,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -234,6 +239,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index d0d268135d96..2d13bc3bd83b 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -229,12 +229,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_slab().
-	 */
-	folio = page_folio(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+					offset_in_page(ptr), n);
+		return;
+	}
+
+	folio = virt_to_folio(ptr);
 
 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
-- 
2.33.0


* [PATCH 2/4] mm/usercopy: Detect vmalloc overruns
  2022-01-10 23:15 [PATCH 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
@ 2022-01-10 23:15 ` Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 3/4] mm/usercopy: Detect large folio overruns Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
  3 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-01-10 23:15 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.
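
A minimal sketch of the vmap() case mentioned above (error handling
omitted; "pages" is an assumed array of four order-0 struct page
pointers and "ubuf" an assumed user pointer):

	/* Four unrelated order-0 pages made virtually contiguous. */
	void *buf = vmap(pages, 4, VM_MAP, PAGE_KERNEL);

	/*
	 * This copy crosses a page boundary inside the mapping.  With
	 * this patch it is checked against the vm_area instead:
	 * allowed while offset + n <= get_vm_area_size(area), aborted
	 * once it would overrun the area.
	 */
	if (copy_to_user(ubuf, buf + PAGE_SIZE - 64, 128))
		err = -EFAULT;
	vunmap(buf);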

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 2d13bc3bd83b..dcf71b7e3098 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
 #include <asm/sections.h>
@@ -238,6 +239,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *area = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!area) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - area->addr;
+		if (offset + n > get_vm_area_size(area))
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	folio = virt_to_folio(ptr);
 
 	if (folio_test_slab(folio)) {
-- 
2.33.0


* [PATCH 3/4] mm/usercopy: Detect large folio overruns
  2022-01-10 23:15 [PATCH 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
  2022-01-10 23:15 ` [PATCH 2/4] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
@ 2022-01-10 23:15 ` Matthew Wilcox (Oracle)
  2022-01-31 14:28   ` David Hildenbrand
  2022-01-10 23:15 ` [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
  3 siblings, 1 reply; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-01-10 23:15 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN and convert it to use folios so it's
enabled for more people.
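
As an illustration of what the folio check covers (a sketch; "ubuf" is
an assumed user pointer):

	/* An order-3 compound allocation is a single large folio. */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 3);
	void *buf = page_address(page);
	struct folio *folio = virt_to_folio(buf);

	/*
	 * folio_size(folio) is 8 * PAGE_SIZE here, so copies are
	 * permitted anywhere satisfying offset + n <= folio_size(folio)
	 * and aborted beyond that, page boundaries notwithstanding.
	 */
	if (copy_to_user(ubuf, buf + PAGE_SIZE - 64, 128))
		return -EFAULT;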

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
 mm/usercopy.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index dcf71b7e3098..e1cb98087a05 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -164,7 +164,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
 	const void *end = ptr + n - 1;
-	struct page *endpage;
 	bool is_reserved, is_cma;
 
 	/*
@@ -195,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -259,6 +253,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (folio_test_slab(folio)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, folio_slab(folio), to_user);
+	} else if (folio_test_large(folio)) {
+		unsigned long offset = ptr - folio_address(folio);
+		if (offset + n > folio_size(folio))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, folio_page(folio, 0), to_user);
-- 
2.33.0


* [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2022-01-10 23:15 [PATCH 0/4] Assorted improvements to usercopy Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2022-01-10 23:15 ` [PATCH 3/4] mm/usercopy: Detect large folio overruns Matthew Wilcox (Oracle)
@ 2022-01-10 23:15 ` Matthew Wilcox (Oracle)
  2022-01-12 23:08   ` Kees Cook
  2022-01-31 14:27   ` David Hildenbrand
  3 siblings, 2 replies; 11+ messages in thread
From: Matthew Wilcox (Oracle) @ 2022-01-10 23:15 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

There isn't enough information to make this a useful check any more;
the useful parts of it were moved in earlier patches, so remove this
set of checks now.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/usercopy.c    | 61 ------------------------------------------------
 security/Kconfig | 13 +----------
 2 files changed, 1 insertion(+), 73 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index e1cb98087a05..94831945d9e7 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -158,64 +158,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		unsigned long offset = ptr - folio_address(folio);
 		if (offset + n > folio_size(folio))
 			usercopy_abort("page alloc", NULL, to_user, offset, n);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, folio_page(folio, 0), to_user);
 	}
 }
 
diff --git a/security/Kconfig b/security/Kconfig
index 0b847f435beb..5b289b329a51 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -160,20 +160,9 @@ config HARDENED_USERCOPY
 	  copy_from_user() functions) by rejecting memory ranges that
 	  are larger than the specified heap object, span multiple
 	  separately allocated pages, are not on the process stack,
-	  or are part of the kernel text. This kills entire classes
+	  or are part of the kernel text. This prevents entire classes
 	  of heap overflow exploits and similar kernel memory exposures.
 
-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
-- 
2.33.0


* Re: [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2022-01-10 23:15 ` [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
@ 2022-01-12 23:08   ` Kees Cook
  2022-01-31 14:27   ` David Hildenbrand
  1 sibling, 0 replies; 11+ messages in thread
From: Kees Cook @ 2022-01-12 23:08 UTC
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, linux-hardening

On Mon, Jan 10, 2022 at 11:15:30PM +0000, Matthew Wilcox (Oracle) wrote:
> There isn't enough information to make this a useful check any more;
> the useful parts of it were moved in earlier patches, so remove this
> set of checks now.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>

Thank you!

Acked-by: Kees Cook <keescook@chromium.org>

> ---
>  mm/usercopy.c    | 61 ------------------------------------------------
>  security/Kconfig | 13 +----------
>  2 files changed, 1 insertion(+), 73 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index e1cb98087a05..94831945d9e7 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -158,64 +158,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
>  		usercopy_abort("null address", NULL, to_user, ptr, n);
>  }
>  
> -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> -static inline void check_page_span(const void *ptr, unsigned long n,
> -				   struct page *page, bool to_user)
> -{
> -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> -	const void *end = ptr + n - 1;
> -	bool is_reserved, is_cma;
> -
> -	/*
> -	 * Sometimes the kernel data regions are not marked Reserved (see
> -	 * check below). And sometimes [_sdata,_edata) does not cover
> -	 * rodata and/or bss, so check each range explicitly.
> -	 */
> -
> -	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__start_rodata &&
> -	    end <= (const void *)__end_rodata) {
> -		if (!to_user)
> -			usercopy_abort("rodata", NULL, to_user, 0, n);
> -		return;
> -	}
> -
> -	/* Allow kernel data region (if not marked as Reserved). */
> -	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> -		return;
> -
> -	/* Allow kernel bss region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__bss_start &&
> -	    end <= (const void *)__bss_stop)
> -		return;
> -
> -	/* Is the object wholly within one base page? */
> -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> -		return;
> -
> -	/*
> -	 * Reject if range is entirely either Reserved (i.e. special or
> -	 * device memory), or CMA. Otherwise, reject since the object spans
> -	 * several independently allocated pages.
> -	 */
> -	is_reserved = PageReserved(page);
> -	is_cma = is_migrate_cma_page(page);
> -	if (!is_reserved && !is_cma)
> -		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> -
> -	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> -		page = virt_to_head_page(ptr);
> -		if (is_reserved && !PageReserved(page))
> -			usercopy_abort("spans Reserved and non-Reserved pages",
> -				       NULL, to_user, 0, n);
> -		if (is_cma && !is_migrate_cma_page(page))
> -			usercopy_abort("spans CMA and non-CMA pages", NULL,
> -				       to_user, 0, n);
> -	}
> -#endif
> -}
> -
>  static inline void check_heap_object(const void *ptr, unsigned long n,
>  				     bool to_user)
>  {
> @@ -257,9 +199,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  		unsigned long offset = ptr - folio_address(folio);
>  		if (offset + n > folio_size(folio))
>  			usercopy_abort("page alloc", NULL, to_user, offset, n);
> -	} else {
> -		/* Verify object does not incorrectly span multiple pages. */
> -		check_page_span(ptr, n, folio_page(folio, 0), to_user);
>  	}
>  }
>  
> diff --git a/security/Kconfig b/security/Kconfig
> index 0b847f435beb..5b289b329a51 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -160,20 +160,9 @@ config HARDENED_USERCOPY
>  	  copy_from_user() functions) by rejecting memory ranges that
>  	  are larger than the specified heap object, span multiple
>  	  separately allocated pages, are not on the process stack,
> -	  or are part of the kernel text. This kills entire classes
> +	  or are part of the kernel text. This prevents entire classes
>  	  of heap overflow exploits and similar kernel memory exposures.
>  
> -config HARDENED_USERCOPY_PAGESPAN
> -	bool "Refuse to copy allocations that span multiple pages"
> -	depends on HARDENED_USERCOPY
> -	depends on EXPERT
> -	help
> -	  When a multi-page allocation is done without __GFP_COMP,
> -	  hardened usercopy will reject attempts to copy it. There are,
> -	  however, several cases of this in the kernel that have not all
> -	  been removed. This config is intended to be used only while
> -	  trying to find such users.
> -
>  config FORTIFY_SOURCE
>  	bool "Harden common str/mem functions against buffer overflows"
>  	depends on ARCH_HAS_FORTIFY_SOURCE
> -- 
> 2.33.0
> 

-- 
Kees Cook

* Re: [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2022-01-10 23:15 ` [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Matthew Wilcox (Oracle)
  2022-01-12 23:08   ` Kees Cook
@ 2022-01-31 14:27   ` David Hildenbrand
  2022-01-31 19:27     ` Matthew Wilcox
  1 sibling, 1 reply; 11+ messages in thread
From: David Hildenbrand @ 2022-01-31 14:27 UTC
  To: Matthew Wilcox (Oracle), Kees Cook; +Cc: linux-mm, linux-hardening

On 11.01.22 00:15, Matthew Wilcox (Oracle) wrote:
> There isn't enough information to make this a useful check any more;
> the useful parts of it were moved in earlier patches, so remove this
> set of checks now.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>


Very nice cleanup IMHO

Reviewed-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb


* Re: [PATCH 3/4] mm/usercopy: Detect large folio overruns
  2022-01-10 23:15 ` [PATCH 3/4] mm/usercopy: Detect large folio overruns Matthew Wilcox (Oracle)
@ 2022-01-31 14:28   ` David Hildenbrand
  0 siblings, 0 replies; 11+ messages in thread
From: David Hildenbrand @ 2022-01-31 14:28 UTC
  To: Matthew Wilcox (Oracle), Kees Cook; +Cc: linux-mm, linux-hardening

On 11.01.22 00:15, Matthew Wilcox (Oracle) wrote:
> Move the compound page overrun detection out of
> CONFIG_HARDENED_USERCOPY_PAGESPAN and convert it to use folios so it's
> enabled for more people.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kees Cook <keescook@chromium.org>
> ---
>  mm/usercopy.c | 10 ++++------
>  1 file changed, 4 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index dcf71b7e3098..e1cb98087a05 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -164,7 +164,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  {
>  #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
>  	const void *end = ptr + n - 1;
> -	struct page *endpage;
>  	bool is_reserved, is_cma;
>  
>  	/*
> @@ -195,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
>  		return;
>  
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
>  	/*
>  	 * Reject if range is entirely either Reserved (i.e. special or
>  	 * device memory), or CMA. Otherwise, reject since the object spans
> @@ -259,6 +253,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  	if (folio_test_slab(folio)) {
>  		/* Check slab allocator for flags and size. */
>  		__check_heap_object(ptr, n, folio_slab(folio), to_user);
> +	} else if (folio_test_large(folio)) {
> +		unsigned long offset = ptr - folio_address(folio);

Nit: I'd have added an empty line.

> +		if (offset + n > folio_size(folio))
> +			usercopy_abort("page alloc", NULL, to_user, offset, n);
>  	} else {
>  		/* Verify object does not incorrectly span multiple pages. */
>  		check_page_span(ptr, n, folio_page(folio, 0), to_user);

LGTM

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


* Re: [PATCH 4/4] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2022-01-31 14:27   ` David Hildenbrand
@ 2022-01-31 19:27     ` Matthew Wilcox
  0 siblings, 0 replies; 11+ messages in thread
From: Matthew Wilcox @ 2022-01-31 19:27 UTC
  To: David Hildenbrand; +Cc: Kees Cook, linux-mm, linux-hardening

On Mon, Jan 31, 2022 at 03:27:02PM +0100, David Hildenbrand wrote:
> On 11.01.22 00:15, Matthew Wilcox (Oracle) wrote:
> > There isn't enough information to make this a useful check any more;
> > the useful parts of it were moved in earlier patches, so remove this
> > set of checks now.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> 
> 
> Very nice cleanup IMHO
> 
> Reviewed-by: David Hildenbrand <david@redhat.com>

Thanks.  I'm thinking about adding another check.  If the order-0
page has a non-NULL ->mapping, I believe that is enough to determine
that it's a page cache / KSM / Anon page, and as such is guaranteed
to actually be order-0 (and not part of a positive-order allocation
without __GFP_COMP set).  But that would require a bit more auditing,
or just throwing it in and seeing what breaks *whistles*.
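
Purely as a sketch of that idea (untested; a fragment written against
the names in check_heap_object() as they stand after this series):

	} else if (folio->mapping) {
		/*
		 * Page cache / KSM / anon pages are genuinely order-0,
		 * so the copy must fit within this one page.
		 */
		if (offset_in_page(ptr) + n > PAGE_SIZE)
			usercopy_abort("order-0 page", NULL, to_user,
				       offset_in_page(ptr), n);
	}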

* Re: [PATCH 1/4] mm/usercopy: Check kmap addresses properly
  2022-01-10 23:15 ` [PATCH 1/4] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
@ 2022-05-10  3:37   ` Andrew Morton
  2022-05-10 22:01     ` Kees Cook
  0 siblings, 1 reply; 11+ messages in thread
From: Andrew Morton @ 2022-05-10  3:37 UTC
  To: Matthew Wilcox (Oracle); +Cc: Kees Cook, linux-mm, linux-hardening

On Mon, 10 Jan 2022 23:15:27 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:

> If you are copying to an address in the kmap region, you may not copy
> across a page boundary,

In the source, the destination, or in both, and why may we not?

> no matter what the size of the underlying
> allocation.  You can't kmap() a slab page because slab pages always
> come from low memory.

Why not?  kmap() does

	if (!PageHighMem(page))
		addr = page_address(page);
	else
		addr = kmap_high(page);



* Re: [PATCH 1/4] mm/usercopy: Check kmap addresses properly
  2022-05-10  3:37   ` Andrew Morton
@ 2022-05-10 22:01     ` Kees Cook
  0 siblings, 0 replies; 11+ messages in thread
From: Kees Cook @ 2022-05-10 22:01 UTC
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, linux-hardening

On Mon, May 09, 2022 at 08:37:42PM -0700, Andrew Morton wrote:
> On Mon, 10 Jan 2022 23:15:27 +0000 "Matthew Wilcox (Oracle)" <willy@infradead.org> wrote:
> 
> > If you are copying to an address in the kmap region, you may not copy
> > across a page boundary,
> 
> In the source, the destination or in both, and why may we not?

This depends on direction. For copying to userspace, the source (kmap).
For copying from userspace, the destination (kmap).

> > no matter what the size of the underlying
> > allocation.  You can't kmap() a slab page because slab pages always
> > come from low memory.

As in, it'll be processed as a slab page instead of as a kmap address by
the usercopy checks?

-- 
Kees Cook
