* [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
@ 2019-05-10 20:43 Kees Cook
  2019-05-11  0:41 ` Laura Abbott
  2019-07-02 17:11 ` Kees Cook
  0 siblings, 2 replies; 8+ messages in thread
From: Kees Cook @ 2019-05-10 20:43 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Eric Biggers, Matthew Wilcox, Laura Abbott, Rik van Riel, linux-kernel

This feature continues to cause more problems than it solves[1]. Its
intention was to check the bounds of page-allocator allocations by using
__GFP_COMP, for which we would need to find all missing __GFP_COMP
markings. This work has been on hold and there is an argument[2]
that such markings are not even the correct signal for checking for
same-allocation pages. Instead of depending on BROKEN, this just removes
it entirely. It can be trivially reverted if/when a better solution for
tracking page allocator sizes is found.

[1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
[2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
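
As a rough illustration of the ambiguity (hypothetical driver code, not
taken from the tree; user_ptr stands in for a userspace pointer): a
multi-page allocation made without __GFP_COMP leaves no metadata on its
tail pages, so hardened usercopy cannot distinguish a legitimate
spanning copy from an overflow into an unrelated neighboring page:

	/* Four contiguous pages, but no compound-page metadata. */
	char *buf = (char *)__get_free_pages(GFP_KERNEL, 2);

	/*
	 * Crosses from page 0 into page 1. From struct page alone it is
	 * undecidable whether this stays inside one allocation, which is
	 * exactly where the PAGESPAN check produced false positives.
	 */
	copy_to_user(user_ptr, buf + PAGE_SIZE - 8, 64);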

Suggested-by: Eric Biggers <ebiggers@kernel.org>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/usercopy.c    | 67 ------------------------------------------------
 security/Kconfig | 11 --------
 2 files changed, 78 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 14faadcedd06..15dc1bf03303 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
 		usercopy_abort("null address", NULL, to_user, ptr, n);
 }
 
-/* Checks for allocs that are marked in some way as spanning multiple pages. */
-static inline void check_page_span(const void *ptr, unsigned long n,
-				   struct page *page, bool to_user)
-{
-#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
-	const void *end = ptr + n - 1;
-	struct page *endpage;
-	bool is_reserved, is_cma;
-
-	/*
-	 * Sometimes the kernel data regions are not marked Reserved (see
-	 * check below). And sometimes [_sdata,_edata) does not cover
-	 * rodata and/or bss, so check each range explicitly.
-	 */
-
-	/* Allow reads of kernel rodata region (if not marked as Reserved). */
-	if (ptr >= (const void *)__start_rodata &&
-	    end <= (const void *)__end_rodata) {
-		if (!to_user)
-			usercopy_abort("rodata", NULL, to_user, 0, n);
-		return;
-	}
-
-	/* Allow kernel data region (if not marked as Reserved). */
-	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
-		return;
-
-	/* Allow kernel bss region (if not marked as Reserved). */
-	if (ptr >= (const void *)__bss_start &&
-	    end <= (const void *)__bss_stop)
-		return;
-
-	/* Is the object wholly within one base page? */
-	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
-		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
-		return;
-
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
-	/*
-	 * Reject if range is entirely either Reserved (i.e. special or
-	 * device memory), or CMA. Otherwise, reject since the object spans
-	 * several independently allocated pages.
-	 */
-	is_reserved = PageReserved(page);
-	is_cma = is_migrate_cma_page(page);
-	if (!is_reserved && !is_cma)
-		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
-
-	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
-		page = virt_to_head_page(ptr);
-		if (is_reserved && !PageReserved(page))
-			usercopy_abort("spans Reserved and non-Reserved pages",
-				       NULL, to_user, 0, n);
-		if (is_cma && !is_migrate_cma_page(page))
-			usercopy_abort("spans CMA and non-CMA pages", NULL,
-				       to_user, 0, n);
-	}
-#endif
-}
-
 static inline void check_heap_object(const void *ptr, unsigned long n,
 				     bool to_user)
 {
@@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
-	} else {
-		/* Verify object does not incorrectly span multiple pages. */
-		check_page_span(ptr, n, page, to_user);
 	}
 }
 
diff --git a/security/Kconfig b/security/Kconfig
index 353cfef71d4e..8392647f5a4c 100644
--- a/security/Kconfig
+++ b/security/Kconfig
@@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
 	  Booting with "slab_common.usercopy_fallback=Y/N" can change
 	  this setting.
 
-config HARDENED_USERCOPY_PAGESPAN
-	bool "Refuse to copy allocations that span multiple pages"
-	depends on HARDENED_USERCOPY
-	depends on EXPERT
-	help
-	  When a multi-page allocation is done without __GFP_COMP,
-	  hardened usercopy will reject attempts to copy it. There are,
-	  however, several cases of this in the kernel that have not all
-	  been removed. This config is intended to be used only while
-	  trying to find such users.
-
 config FORTIFY_SOURCE
 	bool "Harden common str/mem functions against buffer overflows"
 	depends on ARCH_HAS_FORTIFY_SOURCE
-- 
2.17.1


-- 
Kees Cook


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-10 20:43 [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Kees Cook
@ 2019-05-11  0:41 ` Laura Abbott
  2019-05-12  0:03   ` Kees Cook
  2019-07-02 17:11 ` Kees Cook
  1 sibling, 1 reply; 8+ messages in thread
From: Laura Abbott @ 2019-05-11  0:41 UTC (permalink / raw)
  To: Kees Cook, Andrew Morton
  Cc: Eric Biggers, Matthew Wilcox, Rik van Riel, linux-kernel

On 5/10/19 3:43 PM, Kees Cook wrote:
> This feature continues to cause more problems than it solves[1]. Its
> intention was to check the bounds of page-allocator allocations by using
> __GFP_COMP, for which we would need to find all missing __GFP_COMP
> markings. This work has been on hold and there is an argument[2]
> that such markings are not even the correct signal for checking for
> same-allocation pages. Instead of depending on BROKEN, this just removes
> it entirely. It can be trivially reverted if/when a better solution for
> tracking page allocator sizes is found.
> 
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> 
> Suggested-by: Eric Biggers <ebiggers@kernel.org>
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>   mm/usercopy.c    | 67 ------------------------------------------------
>   security/Kconfig | 11 --------
>   2 files changed, 78 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 14faadcedd06..15dc1bf03303 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
>   		usercopy_abort("null address", NULL, to_user, ptr, n);
>   }
>   
> -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> -static inline void check_page_span(const void *ptr, unsigned long n,
> -				   struct page *page, bool to_user)
> -{
> -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> -	const void *end = ptr + n - 1;
> -	struct page *endpage;
> -	bool is_reserved, is_cma;
> -
> -	/*
> -	 * Sometimes the kernel data regions are not marked Reserved (see
> -	 * check below). And sometimes [_sdata,_edata) does not cover
> -	 * rodata and/or bss, so check each range explicitly.
> -	 */
> -
> -	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__start_rodata &&
> -	    end <= (const void *)__end_rodata) {
> -		if (!to_user)
> -			usercopy_abort("rodata", NULL, to_user, 0, n);
> -		return;
> -	}
> -
> -	/* Allow kernel data region (if not marked as Reserved). */
> -	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> -		return;
> -
> -	/* Allow kernel bss region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__bss_start &&
> -	    end <= (const void *)__bss_stop)
> -		return;
> -


I agree the page spanning is broken but is it worth keeping the
checks against __rodata __bss etc.?

> -	/* Is the object wholly within one base page? */
> -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> -		return;
> -
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
> -	/*
> -	 * Reject if range is entirely either Reserved (i.e. special or
> -	 * device memory), or CMA. Otherwise, reject since the object spans
> -	 * several independently allocated pages.
> -	 */
> -	is_reserved = PageReserved(page);
> -	is_cma = is_migrate_cma_page(page);
> -	if (!is_reserved && !is_cma)
> -		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> -
> -	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> -		page = virt_to_head_page(ptr);
> -		if (is_reserved && !PageReserved(page))
> -			usercopy_abort("spans Reserved and non-Reserved pages",
> -				       NULL, to_user, 0, n);
> -		if (is_cma && !is_migrate_cma_page(page))
> -			usercopy_abort("spans CMA and non-CMA pages", NULL,
> -				       to_user, 0, n);
> -	}
> -#endif
> -}
> -
>   static inline void check_heap_object(const void *ptr, unsigned long n,
>   				     bool to_user)
>   {
> @@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>   	if (PageSlab(page)) {
>   		/* Check slab allocator for flags and size. */
>   		__check_heap_object(ptr, n, page, to_user);
> -	} else {
> -		/* Verify object does not incorrectly span multiple pages. */
> -		check_page_span(ptr, n, page, to_user);
>   	}
>   }
>   
> diff --git a/security/Kconfig b/security/Kconfig
> index 353cfef71d4e..8392647f5a4c 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
>   	  Booting with "slab_common.usercopy_fallback=Y/N" can change
>   	  this setting.
>   
> -config HARDENED_USERCOPY_PAGESPAN
> -	bool "Refuse to copy allocations that span multiple pages"
> -	depends on HARDENED_USERCOPY
> -	depends on EXPERT
> -	help
> -	  When a multi-page allocation is done without __GFP_COMP,
> -	  hardened usercopy will reject attempts to copy it. There are,
> -	  however, several cases of this in the kernel that have not all
> -	  been removed. This config is intended to be used only while
> -	  trying to find such users.
> -
>   config FORTIFY_SOURCE
>   	bool "Harden common str/mem functions against buffer overflows"
>   	depends on ARCH_HAS_FORTIFY_SOURCE
> 



* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-11  0:41 ` Laura Abbott
@ 2019-05-12  0:03   ` Kees Cook
  2019-05-12  4:11     ` Matthew Wilcox
  0 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2019-05-12  0:03 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Andrew Morton, Eric Biggers, Matthew Wilcox, Rik van Riel, linux-kernel

On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> On 5/10/19 3:43 PM, Kees Cook wrote:
> > This feature continues to cause more problems than it solves[1]. Its
> > intention was to check the bounds of page-allocator allocations by using
> > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > markings. This work has been on hold and there is an argument[2]
> > that such markings are not even the correct signal for checking for
> > same-allocation pages. Instead of depending on BROKEN, this just removes
> > it entirely. It can be trivially reverted if/when a better solution for
> > tracking page allocator sizes is found.
> > 
> > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > 
> > Suggested-by: Eric Biggers <ebiggers@kernel.org>
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> >   mm/usercopy.c    | 67 ------------------------------------------------
> >   security/Kconfig | 11 --------
> >   2 files changed, 78 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 14faadcedd06..15dc1bf03303 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
> >   		usercopy_abort("null address", NULL, to_user, ptr, n);
> >   }
> > -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> > -static inline void check_page_span(const void *ptr, unsigned long n,
> > -				   struct page *page, bool to_user)
> > -{
> > -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> > -	const void *end = ptr + n - 1;
> > -	struct page *endpage;
> > -	bool is_reserved, is_cma;
> > -
> > -	/*
> > -	 * Sometimes the kernel data regions are not marked Reserved (see
> > -	 * check below). And sometimes [_sdata,_edata) does not cover
> > -	 * rodata and/or bss, so check each range explicitly.
> > -	 */
> > -
> > -	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)__start_rodata &&
> > -	    end <= (const void *)__end_rodata) {
> > -		if (!to_user)
> > -			usercopy_abort("rodata", NULL, to_user, 0, n);
> > -		return;
> > -	}
> > -
> > -	/* Allow kernel data region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> > -		return;
> > -
> > -	/* Allow kernel bss region (if not marked as Reserved). */
> > -	if (ptr >= (const void *)__bss_start &&
> > -	    end <= (const void *)__bss_stop)
> > -		return;
> > -
> 
> 
> I agree the page spanning is broken but is it worth keeping the
> checks against __rodata __bss etc.?

They're all just white-listing later checks (except RODATA which is
doing a cheap RO test which is redundant on any architecture with actual
rodata...) so they don't have any value in staying without the rest of
the page allocator logic.

> 
> > -	/* Is the object wholly within one base page? */
> > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > -		return;
> > -
> > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > -	endpage = virt_to_head_page(end);
> > -	if (likely(endpage == page))
> > -		return;
> > -
> > -	/*
> > -	 * Reject if range is entirely either Reserved (i.e. special or
> > -	 * device memory), or CMA. Otherwise, reject since the object spans
> > -	 * several independently allocated pages.
> > -	 */
> > -	is_reserved = PageReserved(page);
> > -	is_cma = is_migrate_cma_page(page);
> > -	if (!is_reserved && !is_cma)
> > -		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> > -
> > -	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> > -		page = virt_to_head_page(ptr);
> > -		if (is_reserved && !PageReserved(page))
> > -			usercopy_abort("spans Reserved and non-Reserved pages",
> > -				       NULL, to_user, 0, n);
> > -		if (is_cma && !is_migrate_cma_page(page))
> > -			usercopy_abort("spans CMA and non-CMA pages", NULL,
> > -				       to_user, 0, n);
> > -	}

We _could_ keep the mixed CMA/reserved/neither check if we really wanted
to, but that's such a corner case of a corner case, I'm not sure it's
worth doing the virt_to_head_page() across the whole span to figure
it out.

I really wish we had size of allocation reliably held somewhere. We'll
need it for doing memory tagging of the page allocator too...

-- 
Kees Cook


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-12  0:03   ` Kees Cook
@ 2019-05-12  4:11     ` Matthew Wilcox
  2019-05-13 21:32       ` Kees Cook
  0 siblings, 1 reply; 8+ messages in thread
From: Matthew Wilcox @ 2019-05-12  4:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: Laura Abbott, Andrew Morton, Eric Biggers, Rik van Riel, linux-kernel

On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > This feature continues to cause more problems than it solves[1]. Its
> > > intention was to check the bounds of page-allocator allocations by using
> > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > markings. This work has been on hold and there is an argument[2]
> > > that such markings are not even the correct signal for checking for
> > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > it entirely. It can be trivially reverted if/when a better solution for
> > > tracking page allocator sizes is found.
> > > 
> > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > 
> > I agree the page spanning is broken but is it worth keeping the
> > checks against __rodata __bss etc.?
> 
> They're all just white-listing later checks (except RODATA which is
> doing a cheap RO test which is redundant on any architecture with actual
> rodata...) so they don't have any value in staying without the rest of
> the page allocator logic.
> 
> > > -	/* Is the object wholly within one base page? */
> > > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > -		return;
> > > -
> > > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > -	endpage = virt_to_head_page(end);
> > > -	if (likely(endpage == page))
> > > -		return;
> 
> We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> to, but that's such a corner case of a corner case, I'm not sure it's
> worth doing the virt_to_head_page() across the whole span to figure
> it out.

I'd delete that first check, because it's a subset of the second check,

	/* Is the object wholly within a single (base or compound) page? */
	endpage = virt_to_head_page(end);
	if (likely(endpage == page))
		return;

	/*
	 * If the start and end are more than MAX_ORDER apart, they must
	 * be from separate allocations
	 */
	if (n >= (PAGE_SIZE << MAX_ORDER))
		usercopy_abort("spans too many pages", NULL, to_user, 0, n);

	/*
	 * If neither page is compound, we can't tell if the object is
	 * within a single allocation or not
	 */
	if (!PageCompound(page) && !PageCompound(endpage))
		return;

> I really wish we had size of allocation reliably held somewhere. We'll
> need it for doing memory tagging of the page allocator too...

I think we'll need to store those allocations in a separate data structure
on the side.  As far as the rest of the kernel is concerned, those struct
pages belong to them once the page allocator has given them.
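
A minimal sketch of that idea (entirely hypothetical; no such tracking
exists in the tree today), assuming only the existing XArray API: record
the head PFN and order at allocation time, and a bounds check could
later look the allocation size back up:

	/* Hypothetical side-table mapping head PFN -> allocation order. */
	static DEFINE_XARRAY(page_alloc_orders);

	static void record_page_alloc(struct page *page, unsigned int order)
	{
		xa_store(&page_alloc_orders, page_to_pfn(page),
			 xa_mk_value(order), GFP_KERNEL);
	}

	static unsigned long page_alloc_size(struct page *page)
	{
		void *entry = xa_load(&page_alloc_orders, page_to_pfn(page));

		/* Pages that were never recorded are treated as order-0. */
		return PAGE_SIZE << (entry ? xa_to_value(entry) : 0);
	}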


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-12  4:11     ` Matthew Wilcox
@ 2019-05-13 21:32       ` Kees Cook
  2019-06-10 22:30         ` Eric Biggers
  0 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2019-05-13 21:32 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Laura Abbott, Andrew Morton, Eric Biggers, Rik van Riel, linux-kernel

On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
> On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > > This feature continues to cause more problems than it solves[1]. Its
> > > > intention was to check the bounds of page-allocator allocations by using
> > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > > markings. This work has been on hold and there is an argument[2]
> > > > that such markings are not even the correct signal for checking for
> > > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > > it entirely. It can be trivially reverted if/when a better solution for
> > > > tracking page allocator sizes is found.
> > > > 
> > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > > 
> > > I agree the page spanning is broken but is it worth keeping the
> > > checks against __rodata __bss etc.?
> > 
> > They're all just white-listing later checks (except RODATA which is
> > doing a cheap RO test which is redundant on any architecture with actual
> > rodata...) so they don't have any value in staying without the rest of
> > the page allocator logic.
> > 
> > > > -	/* Is the object wholly within one base page? */
> > > > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > > -		return;
> > > > -
> > > > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > > -	endpage = virt_to_head_page(end);
> > > > -	if (likely(endpage == page))
> > > > -		return;
> > 
> > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> > to, but that's such a corner case of a corner case, I'm not sure it's
> > worth doing the virt_to_head_page() across the whole span to figure
> > it out.
> 
> I'd delete that first check, because it's a subset of the second check,

It seemed easier to short-circuit with a math test before doing the
slightly more expensive virt_to_head_page(end) call. Do you think
that's sensible?

> 
> 	/* Is the object wholly within a single (base or compound) page? */
> 	endpage = virt_to_head_page(end);
> 	if (likely(endpage == page))
> 		return;
> 
> 	/*
> 	 * If the start and end are more than MAX_ORDER apart, they must
> 	 * be from separate allocations
> 	 */
> 	if (n >= (PAGE_SIZE << MAX_ORDER))
> 		usercopy_abort("spans too many pages", NULL, to_user, 0, n);
> 
> 	/*
> 	 * If neither page is compound, we can't tell if the object is
> 	 * within a single allocation or not
> 	 */
> 	if (!PageCompound(page) && !PageCompound(endpage))
> 		return;
> 
> > I really wish we had size of allocation reliably held somewhere. We'll
> > need it for doing memory tagging of the page allocator too...
> 
> I think we'll need to store those allocations in a separate data structure
> on the side.  As far as the rest of the kernel is concerned, those struct
> pages belong to them once the page allocator has given them.

Okay, let me work up a page-type refactoring while allocation size can
stay back-burnered.

-- 
Kees Cook


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-13 21:32       ` Kees Cook
@ 2019-06-10 22:30         ` Eric Biggers
  2019-06-11  1:07           ` Kees Cook
  0 siblings, 1 reply; 8+ messages in thread
From: Eric Biggers @ 2019-06-10 22:30 UTC (permalink / raw)
  To: Kees Cook
  Cc: Matthew Wilcox, Laura Abbott, Andrew Morton, Rik van Riel, linux-kernel

On Mon, May 13, 2019 at 02:32:43PM -0700, Kees Cook wrote:
> On Sat, May 11, 2019 at 09:11:42PM -0700, Matthew Wilcox wrote:
> > On Sat, May 11, 2019 at 05:03:08PM -0700, Kees Cook wrote:
> > > On Fri, May 10, 2019 at 08:41:43PM -0400, Laura Abbott wrote:
> > > > On 5/10/19 3:43 PM, Kees Cook wrote:
> > > > > This feature continues to cause more problems than it solves[1]. Its
> > > > > intention was to check the bounds of page-allocator allocations by using
> > > > > __GFP_COMP, for which we would need to find all missing __GFP_COMP
> > > > > markings. This work has been on hold and there is an argument[2]
> > > > > that such markings are not even the correct signal for checking for
> > > > > same-allocation pages. Instead of depending on BROKEN, this just removes
> > > > > it entirely. It can be trivially reverted if/when a better solution for
> > > > > tracking page allocator sizes is found.
> > > > > 
> > > > > [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> > > > > [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> > > > 
> > > > I agree the page spanning is broken but is it worth keeping the
> > > > checks against __rodata __bss etc.?
> > > 
> > > They're all just white-listing later checks (except RODATA which is
> > > doing a cheap RO test which is redundant on any architecture with actual
> > > rodata...) so they don't have any value in staying without the rest of
> > > the page allocator logic.
> > > 
> > > > > -	/* Is the object wholly within one base page? */
> > > > > -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> > > > > -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> > > > > -		return;
> > > > > -
> > > > > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > > > > -	endpage = virt_to_head_page(end);
> > > > > -	if (likely(endpage == page))
> > > > > -		return;
> > > 
> > > We _could_ keep the mixed CMA/reserved/neither check if we really wanted
> > > to, but that's such a corner case of a corner case, I'm not sure it's
> > > worth doing the virt_to_head_page() across the whole span to figure
> > > it out.
> > 
> > I'd delete that first check, because it's a subset of the second check,
> 
> It seemed easier to short-circuit with a math test before doing the
> slightly more expensive virt_to_head_page(end) call. Do you think
> that's sensible?
> 
> > 
> > 	/* Is the object wholly within a single (base or compound) page? */
> > 	endpage = virt_to_head_page(end);
> > 	if (likely(endpage == page))
> > 		return;
> > 
> > 	/*
> > 	 * If the start and end are more than MAX_ORDER apart, they must
> > 	 * be from separate allocations
> > 	 */
> > 	if (n >= (PAGE_SIZE << MAX_ORDER))
> > 		usercopy_abort("spans too many pages", NULL, to_user, 0, n);
> > 
> > 	/*
> > 	 * If neither page is compound, we can't tell if the object is
> > 	 * within a single allocation or not
> > 	 */
> > 	if (!PageCompound(page) && !PageCompound(endpage))
> > 		return;
> > 
> > > I really wish we had size of allocation reliably held somewhere. We'll
> > > need it for doing memory tagging of the page allocator too...
> > 
> > I think we'll need to store those allocations in a separate data structure
> > on the side.  As far as the rest of the kernel is concerned, those struct
> > pages belong to them once the page allocator has given them.
> 
> Okay, let me work up a page-type refactoring while allocation size can
> stay back-burnered.
> 
> -- 
> Kees Cook

Any progress on this patch?

- Eric


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-06-10 22:30         ` Eric Biggers
@ 2019-06-11  1:07           ` Kees Cook
  0 siblings, 0 replies; 8+ messages in thread
From: Kees Cook @ 2019-06-11  1:07 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Matthew Wilcox, Laura Abbott, Andrew Morton, Rik van Riel, linux-kernel

On Mon, Jun 10, 2019 at 03:30:55PM -0700, Eric Biggers wrote:
> Any progress on this patch?

I have not had time yet; sorry. If anyone else would like to take a stab
at it, I'd appreciate it. :)

-- 
Kees Cook


* Re: [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN
  2019-05-10 20:43 [PATCH] usercopy: Remove HARDENED_USERCOPY_PAGESPAN Kees Cook
  2019-05-11  0:41 ` Laura Abbott
@ 2019-07-02 17:11 ` Kees Cook
  1 sibling, 0 replies; 8+ messages in thread
From: Kees Cook @ 2019-07-02 17:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Eric Biggers, Matthew Wilcox, Laura Abbott, Rik van Riel, linux-kernel

On Fri, May 10, 2019 at 01:43:36PM -0700, Kees Cook wrote:
> This feature continues to cause more problems than it solves[1]. Its
> intention was to check the bounds of page-allocator allocations by using
> __GFP_COMP, for which we would need to find all missing __GFP_COMP
> markings. This work has been on hold and there is an argument[2]
> that such markings are not even the correct signal for checking for
> same-allocation pages. Instead of depending on BROKEN, this just removes
> it entirely. It can be trivially reverted if/when a better solution for
> tracking page allocator sizes is found.
> 
> [1] https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg37479.html
> [2] https://lkml.kernel.org/r/20190415022412.GA29714@bombadil.infradead.org
> 
> Suggested-by: Eric Biggers <ebiggers@kernel.org>
> Signed-off-by: Kees Cook <keescook@chromium.org>

So, after looking at this more, I think I'm going to keep this patch,
and we can add new sanity checks keyed to individual page flags. (See
below.)

Andrew, can you apply this to -mm please?

> ---
>  mm/usercopy.c    | 67 ------------------------------------------------
>  security/Kconfig | 11 --------
>  2 files changed, 78 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 14faadcedd06..15dc1bf03303 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -159,70 +159,6 @@ static inline void check_bogus_address(const unsigned long ptr, unsigned long n,
>  		usercopy_abort("null address", NULL, to_user, ptr, n);
>  }
>  
> -/* Checks for allocs that are marked in some way as spanning multiple pages. */
> -static inline void check_page_span(const void *ptr, unsigned long n,
> -				   struct page *page, bool to_user)
> -{
> -#ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
> -	const void *end = ptr + n - 1;
> -	struct page *endpage;
> -	bool is_reserved, is_cma;
> -
> -	/*
> -	 * Sometimes the kernel data regions are not marked Reserved (see
> -	 * check below). And sometimes [_sdata,_edata) does not cover
> -	 * rodata and/or bss, so check each range explicitly.
> -	 */
> -
> -	/* Allow reads of kernel rodata region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__start_rodata &&
> -	    end <= (const void *)__end_rodata) {
> -		if (!to_user)
> -			usercopy_abort("rodata", NULL, to_user, 0, n);
> -		return;
> -	}
> -
> -	/* Allow kernel data region (if not marked as Reserved). */
> -	if (ptr >= (const void *)_sdata && end <= (const void *)_edata)
> -		return;
> -
> -	/* Allow kernel bss region (if not marked as Reserved). */
> -	if (ptr >= (const void *)__bss_start &&
> -	    end <= (const void *)__bss_stop)
> -		return;
> -
> -	/* Is the object wholly within one base page? */
> -	if (likely(((unsigned long)ptr & (unsigned long)PAGE_MASK) ==
> -		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> -		return;
> -
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
> -	/*
> -	 * Reject if range is entirely either Reserved (i.e. special or
> -	 * device memory), or CMA. Otherwise, reject since the object spans
> -	 * several independently allocated pages.
> -	 */
> -	is_reserved = PageReserved(page);
> -	is_cma = is_migrate_cma_page(page);
> -	if (!is_reserved && !is_cma)
> -		usercopy_abort("spans multiple pages", NULL, to_user, 0, n);
> -
> -	for (ptr += PAGE_SIZE; ptr <= end; ptr += PAGE_SIZE) {
> -		page = virt_to_head_page(ptr);
> -		if (is_reserved && !PageReserved(page))
> -			usercopy_abort("spans Reserved and non-Reserved pages",
> -				       NULL, to_user, 0, n);
> -		if (is_cma && !is_migrate_cma_page(page))
> -			usercopy_abort("spans CMA and non-CMA pages", NULL,
> -				       to_user, 0, n);
> -	}
> -#endif
> -}
> -
>  static inline void check_heap_object(const void *ptr, unsigned long n,
>  				     bool to_user)
>  {
> @@ -236,9 +172,6 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  	if (PageSlab(page)) {
>  		/* Check slab allocator for flags and size. */
>  		__check_heap_object(ptr, n, page, to_user);
> -	} else {
> -		/* Verify object does not incorrectly span multiple pages. */
> -		check_page_span(ptr, n, page, to_user);
>  	}

In the future, instead of this catch-all "else", we can add things like:

	} else if (PageCompound(page)) {
		... do some check for compound pages ...
	} else if (PageReserved(page)) {
		... etc ...
	}
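
For instance, the compound branch could bound the copy against the head
page. This is a sketch only, valid just for allocations actually made
with __GFP_COMP (the abort reason string is invented here):

	} else if (PageCompound(page)) {
		/* Assumes page == virt_to_head_page(ptr), as in check_heap_object(). */
		unsigned long offset = ptr - page_address(page);

		if (offset + n > (PAGE_SIZE << compound_order(page)))
			usercopy_abort("compound page overflow", NULL,
				       to_user, offset, n);
	}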

But for 5.3, I think we need to just entirely drop the PAGESPAN thing.

-Kees

>  }
>  
> diff --git a/security/Kconfig b/security/Kconfig
> index 353cfef71d4e..8392647f5a4c 100644
> --- a/security/Kconfig
> +++ b/security/Kconfig
> @@ -176,17 +176,6 @@ config HARDENED_USERCOPY_FALLBACK
>  	  Booting with "slab_common.usercopy_fallback=Y/N" can change
>  	  this setting.
>  
> -config HARDENED_USERCOPY_PAGESPAN
> -	bool "Refuse to copy allocations that span multiple pages"
> -	depends on HARDENED_USERCOPY
> -	depends on EXPERT
> -	help
> -	  When a multi-page allocation is done without __GFP_COMP,
> -	  hardened usercopy will reject attempts to copy it. There are,
> -	  however, several cases of this in the kernel that have not all
> -	  been removed. This config is intended to be used only while
> -	  trying to find such users.
> -
>  config FORTIFY_SOURCE
>  	bool "Harden common str/mem functions against buffer overflows"
>  	depends on ARCH_HAS_FORTIFY_SOURCE
> -- 
> 2.17.1
> 
> 
> -- 
> Kees Cook

-- 
Kees Cook
