* [PATCH 0/3] Assorted improvements to usercopy
From: Matthew Wilcox (Oracle) @ 2021-10-04 22:42 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

We must prohibit page boundary crossing for kmap() addresses.
vmap() addresses are limited by the length of the mapping, and
compound pages are limited by the size of the page.

These should probably all have test cases?
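
As a condensed sketch (stitched together from the diffs below, not new
code), the interesting part of check_heap_object() ends up looking like
this once all three patches are applied:

	if (is_kmap_addr(ptr)) {
		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);

		if ((unsigned long)ptr + n - 1 > page_end)
			usercopy_abort("kmap", NULL, to_user, 0, n);
		return;
	}

	if (is_vmalloc_addr(ptr)) {
		struct vm_struct *vm = find_vm_area(ptr);

		if (ptr + n > vm->addr + vm->size)
			usercopy_abort("vmalloc", NULL, to_user, 0, n);
		return;
	}

	page = virt_to_head_page(ptr);

	if (PageSlab(page)) {
		/* Check slab allocator for flags and size. */
		__check_heap_object(ptr, n, page, to_user);
	} else if (PageHead(page)) {
		/* A compound allocation */
		if (ptr + n > page_address(page) + page_size(page))
			usercopy_abort("page alloc", NULL, to_user, 0, n);
	} else {
		/* Verify object does not incorrectly span multiple pages. */
		check_page_span(ptr, n, page, to_user);
	}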

Matthew Wilcox (Oracle) (3):
  mm/usercopy: Check kmap addresses properly
  mm/usercopy: Detect vmalloc overruns
  mm/usercopy: Detect compound page overruns

 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 33 +++++++++++++++++++++-----------
 3 files changed, 33 insertions(+), 11 deletions(-)

-- 
2.32.0




* [PATCH 1/3] mm/usercopy: Check kmap addresses properly
From: Matthew Wilcox (Oracle) @ 2021-10-04 22:42 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.
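
As an illustration (hypothetical caller; 'page' is a highmem struct page
and 'uptr' a user-space pointer), the new check behaves like this:

	void *p = kmap(page);

	/* Stays within the mapped page: allowed. */
	copy_from_user(p, uptr, PAGE_SIZE);

	/* Crosses into whatever happens to be mapped next: aborts. */
	copy_from_user(p + PAGE_SIZE - 8, uptr, 16);

	kunmap(page);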

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 15 +++++++++------
 3 files changed, 20 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 4aa1031d3e4c..97d6dc836749 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -143,6 +143,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -223,6 +228,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..ac95b22fbbce 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,15 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user, 0, n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
-- 
2.32.0




* [PATCH 2/3] mm/usercopy: Detect vmalloc overruns
From: Matthew Wilcox (Oracle) @ 2021-10-04 22:42 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.
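
For example (illustrative only; 'pages' is an array of four order-0
pages and 'uptr' a user-space pointer), copies are now bounded by the
vm_area rather than by the individual pages backing it:

	void *v = vmap(pages, 4, VM_MAP, PAGE_KERNEL);

	copy_to_user(uptr, v, 4 * PAGE_SIZE);	/* OK: inside the vm_area */
	copy_to_user(uptr, v, 8 * PAGE_SIZE);	/* overruns the mapping: aborts */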

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/usercopy.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index ac95b22fbbce..7bfc4f9ed1e4 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
 #include <asm/sections.h>
@@ -236,6 +237,14 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+
+		if (ptr + n > vm->addr + vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, 0, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
-- 
2.32.0




* [PATCH 3/3] mm/usercopy: Detect compound page overruns
From: Matthew Wilcox (Oracle) @ 2021-10-04 22:42 UTC
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
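
Illustration (hypothetical caller; 'uptr' is a user-space pointer):

	/* An order-2, __GFP_COMP allocation: one compound page of 4 pages. */
	u8 *p = (u8 *)__get_free_pages(GFP_KERNEL | __GFP_COMP, 2);

	copy_from_user(p, uptr, 4 * PAGE_SIZE);		/* OK: within page_size() */
	copy_from_user(p + 16, uptr, 4 * PAGE_SIZE);	/* past the end: aborts */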

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/usercopy.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 7bfc4f9ed1e4..e395462961d5 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -250,6 +245,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		if (ptr + n > page_address(page) + page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, 0, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
-- 
2.32.0




* Re: [PATCH 1/3] mm/usercopy: Check kmap addresses properly
From: Kees Cook @ 2021-10-05 21:23 UTC
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Thomas Gleixner

On Mon, Oct 04, 2021 at 11:42:21PM +0100, Matthew Wilcox (Oracle) wrote:
> If you are copying to an address in the kmap region, you may not copy
> across a page boundary, no matter what the size of the underlying
> allocation.  You can't kmap() a slab page because slab pages always
> come from low memory.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  arch/x86/include/asm/highmem.h   |  1 +
>  include/linux/highmem-internal.h | 10 ++++++++++
>  mm/usercopy.c                    | 15 +++++++++------
>  3 files changed, 20 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
> index 032e020853aa..731ee7cc40a5 100644
> --- a/arch/x86/include/asm/highmem.h
> +++ b/arch/x86/include/asm/highmem.h
> @@ -26,6 +26,7 @@
>  #include <asm/tlbflush.h>
>  #include <asm/paravirt.h>
>  #include <asm/fixmap.h>
> +#include <asm/pgtable_areas.h>
>  
>  /* declarations for highmem.c */
>  extern unsigned long highstart_pfn, highend_pfn;
> diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
> index 4aa1031d3e4c..97d6dc836749 100644
> --- a/include/linux/highmem-internal.h
> +++ b/include/linux/highmem-internal.h
> @@ -143,6 +143,11 @@ static inline void totalhigh_pages_add(long count)
>  	atomic_long_add(count, &_totalhigh_pages);
>  }
>  
> +static inline bool is_kmap_addr(const void *x)
> +{
> +	unsigned long addr = (unsigned long)x;
> +	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
> +}
>  #else /* CONFIG_HIGHMEM */
>  
>  static inline struct page *kmap_to_page(void *addr)
> @@ -223,6 +228,11 @@ static inline void __kunmap_atomic(void *addr)
>  static inline unsigned int nr_free_highpages(void) { return 0; }
>  static inline unsigned long totalhigh_pages(void) { return 0UL; }
>  
> +static inline bool is_kmap_addr(const void *x)
> +{
> +	return false;
> +}
> +
>  #endif /* CONFIG_HIGHMEM */
>  
>  /*
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index b3de3c4eefba..ac95b22fbbce 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -228,12 +228,15 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  	if (!virt_addr_valid(ptr))
>  		return;
>  
> -	/*
> -	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
> -	 * highmem page or fallback to virt_to_page(). The following
> -	 * is effectively a highmem-aware virt_to_head_page().
> -	 */
> -	page = compound_head(kmap_to_page((void *)ptr));
> +	if (is_kmap_addr(ptr)) {
> +		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
> +
> +		if ((unsigned long)ptr + n - 1 > page_end)
> +			usercopy_abort("kmap", NULL, to_user, 0, n);

It's likely not worth getting an offset here, but "0" above could be
something like "ptr - PKMAP_ADDR(0)".

Either way:

Acked-by: Kees Cook <keescook@chromium.org>

Thanks!

-Kees

> +		return;
> +	}
> +
> +	page = virt_to_head_page(ptr);
>  
>  	if (PageSlab(page)) {
>  		/* Check slab allocator for flags and size. */
> -- 
> 2.32.0
> 

-- 
Kees Cook



* Re: [PATCH 2/3] mm/usercopy: Detect vmalloc overruns
From: Kees Cook @ 2021-10-05 21:25 UTC
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Thomas Gleixner

On Mon, Oct 04, 2021 at 11:42:22PM +0100, Matthew Wilcox (Oracle) wrote:
> If you have a vmalloc() allocation, or an address from calling vmap(),
> you cannot overrun the vm_area which describes it, regardless of the
> size of the underlying allocation.  This probably doesn't do much for
> security because vmalloc comes with guard pages these days, but it
> prevents usercopy aborts when copying to a vmap() of smaller pages.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/usercopy.c | 9 +++++++++
>  1 file changed, 9 insertions(+)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index ac95b22fbbce..7bfc4f9ed1e4 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -17,6 +17,7 @@
>  #include <linux/sched/task.h>
>  #include <linux/sched/task_stack.h>
>  #include <linux/thread_info.h>
> +#include <linux/vmalloc.h>
>  #include <linux/atomic.h>
>  #include <linux/jump_label.h>
>  #include <asm/sections.h>
> @@ -236,6 +237,14 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  		return;
>  	}
>  
> +	if (is_vmalloc_addr(ptr)) {
> +		struct vm_struct *vm = find_vm_area(ptr);
> +
> +		if (ptr + n > vm->addr + vm->size)
> +			usercopy_abort("vmalloc", NULL, to_user, 0, n);

This "0" is easy to make "ptr - vm->addr". With that fixed:

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

> +		return;
> +	}
> +
>  	page = virt_to_head_page(ptr);
>  
>  	if (PageSlab(page)) {
> -- 
> 2.32.0
> 

-- 
Kees Cook



* Re: [PATCH 3/3] mm/usercopy: Detect compound page overruns
From: Kees Cook @ 2021-10-05 21:26 UTC
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Thomas Gleixner

On Mon, Oct 04, 2021 at 11:42:23PM +0100, Matthew Wilcox (Oracle) wrote:
> Move the compound page overrun detection out of
> CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>  mm/usercopy.c | 9 ++++-----
>  1 file changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 7bfc4f9ed1e4..e395462961d5 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
>  		return;
>  
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
>  	/*
>  	 * Reject if range is entirely either Reserved (i.e. special or
>  	 * device memory), or CMA. Otherwise, reject since the object spans
> @@ -250,6 +245,10 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
>  	if (PageSlab(page)) {
>  		/* Check slab allocator for flags and size. */
>  		__check_heap_object(ptr, n, page, to_user);
> +	} else if (PageHead(page)) {
> +		/* A compound allocation */
> +		if (ptr + n > page_address(page) + page_size(page))
> +			usercopy_abort("page alloc", NULL, to_user, 0, n);

"0" could be "ptr - page_address(page)", I think? With that:

Acked-by: Kees Cook <keescook@chromium.org>

-Kees

>  	} else {
>  		/* Verify object does not incorrectly span multiple pages. */
>  		check_page_span(ptr, n, page, to_user);
> -- 
> 2.32.0
> 

-- 
Kees Cook



* Re: [PATCH 0/3] Assorted improvements to usercopy
From: Kees Cook @ 2021-10-05 21:27 UTC
  To: Matthew Wilcox (Oracle); +Cc: linux-mm, Thomas Gleixner

On Mon, Oct 04, 2021 at 11:42:20PM +0100, Matthew Wilcox (Oracle) wrote:
> We must prohibit page boundary crossing for kmap() addresses.
> vmap() addresses are limited by the length of the mapping, and
> compound pages are limited by the size of the page.
> 
> These should probably all have test cases?
> 
> Matthew Wilcox (Oracle) (3):
>   mm/usercopy: Check kmap addresses properly
>   mm/usercopy: Detect vmalloc overruns
>   mm/usercopy: Detect compound page overruns

Thanks! This is a nice additional bit of checking. I wonder if the CMA
and Reserved pieces should be extracted from the PAGESPAN check too?
Probably that CONFIG should be renamed as well now. :P

-Kees

> 
>  arch/x86/include/asm/highmem.h   |  1 +
>  include/linux/highmem-internal.h | 10 ++++++++++
>  mm/usercopy.c                    | 33 +++++++++++++++++++++-----------
>  3 files changed, 33 insertions(+), 11 deletions(-)
> 
> -- 
> 2.32.0
> 

-- 
Kees Cook



* Re: [PATCH 1/3] mm/usercopy: Check kmap addresses properly
From: Matthew Wilcox @ 2021-10-05 21:43 UTC
  To: Kees Cook; +Cc: linux-mm, Thomas Gleixner

On Tue, Oct 05, 2021 at 02:23:09PM -0700, Kees Cook wrote:
> > +	if (is_kmap_addr(ptr)) {
> > +		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
> > +
> > +		if ((unsigned long)ptr + n - 1 > page_end)
> > +			usercopy_abort("kmap", NULL, to_user, 0, n);
> 
> It's likely not worth getting an offset here, but "0" above could be
> something like "ptr - PKMAP_ADDR(0)".

Mmm.  offset_in_page(ptr) should do the trick, no?
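
ie something like (sketch):

	if ((unsigned long)ptr + n - 1 > page_end)
		usercopy_abort("kmap", NULL, to_user,
			       offset_in_page(ptr), n);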

> Either way:
> 
> Acked-by: Kees Cook <keescook@chromium.org>
> 
> Thanks!
> 
> -Kees
> 
> > +		return;
> > +	}
> > +
> > +	page = virt_to_head_page(ptr);
> >  
> >  	if (PageSlab(page)) {
> >  		/* Check slab allocator for flags and size. */
> > -- 
> > 2.32.0
> > 
> 
> -- 
> Kees Cook



* Re: [PATCH 1/3] mm/usercopy: Check kmap addresses properly
From: Kees Cook @ 2021-10-05 21:54 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Thomas Gleixner

On Tue, Oct 05, 2021 at 10:43:13PM +0100, Matthew Wilcox wrote:
> On Tue, Oct 05, 2021 at 02:23:09PM -0700, Kees Cook wrote:
> > > +	if (is_kmap_addr(ptr)) {
> > > +		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
> > > +
> > > +		if ((unsigned long)ptr + n - 1 > page_end)
> > > +			usercopy_abort("kmap", NULL, to_user, 0, n);
> > 
> > It's likely not worth getting an offset here, but "0" above could be
> > something like "ptr - PKMAP_ADDR(0)".
> 
> Mmm.  offset_in_page(ptr) should do the trick, no?

Ah yeah, that'd be good!

-Kees

> 
> > Either way:
> > 
> > Acked-by: Kees Cook <keescook@chromium.org>
> > 
> > Thanks!
> > 
> > -Kees
> > 
> > > +		return;
> > > +	}
> > > +
> > > +	page = virt_to_head_page(ptr);
> > >  
> > >  	if (PageSlab(page)) {
> > >  		/* Check slab allocator for flags and size. */
> > > -- 
> > > 2.32.0
> > > 
> > 
> > -- 
> > Kees Cook

-- 
Kees Cook



* Re: [PATCH 3/3] mm/usercopy: Detect compound page overruns
From: Matthew Wilcox @ 2021-10-05 22:12 UTC
  To: Kees Cook; +Cc: linux-mm, Thomas Gleixner

On Tue, Oct 05, 2021 at 02:26:37PM -0700, Kees Cook wrote:
> On Mon, Oct 04, 2021 at 11:42:23PM +0100, Matthew Wilcox (Oracle) wrote:
> > +	} else if (PageHead(page)) {
> > +		/* A compound allocation */
> > +		if (ptr + n > page_address(page) + page_size(page))
> > +			usercopy_abort("page alloc", NULL, to_user, 0, n);
> 
> "0" could be "ptr - page_address(page)", I think? With that:
> 
> Acked-by: Kees Cook <keescook@chromium.org>

Right, so that can be:

	} else if (PageHead(page)) {
		/* A compound allocation */
		unsigned long offset = ptr - page_address(page);

		if (offset + n > page_size(page))
			usercopy_abort("page alloc", NULL, to_user, offset, n);

which saves us calling page_address() twice.  Probably GCC is smart
enough to CSE it anyway, but it also avoids splitting at the 80 column
boundary ;-)



* Re: [PATCH 3/3] mm/usercopy: Detect compound page overruns
From: Kees Cook @ 2021-10-05 22:55 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Thomas Gleixner

On Tue, Oct 05, 2021 at 11:12:47PM +0100, Matthew Wilcox wrote:
> On Tue, Oct 05, 2021 at 02:26:37PM -0700, Kees Cook wrote:
> > On Mon, Oct 04, 2021 at 11:42:23PM +0100, Matthew Wilcox (Oracle) wrote:
> > > +	} else if (PageHead(page)) {
> > > +		/* A compound allocation */
> > > +		if (ptr + n > page_address(page) + page_size(page))
> > > +			usercopy_abort("page alloc", NULL, to_user, 0, n);
> > 
> > "0" could be "ptr - page_address(page)", I think? With that:
> > 
> > Acked-by: Kees Cook <keescook@chromium.org>
> 
> Right, so that can be:
> 
> 	} else if (PageHead(page)) {
> 		/* A compound allocation */
> 		unsigned long offset = ptr - page_address(page);
>
> 		if (offset + n > page_size(page))
> 			usercopy_abort("page alloc", NULL, to_user, offset, n);
> 
> which saves us calling page_address() twice.  Probably GCC is smart
> enough to CSE it anyway, but it also avoids splitting at the 80 column
> boundary ;-)

Perfect, yes!

-- 
Kees Cook



* Re: [PATCH 2/3] mm/usercopy: Detect vmalloc overruns
From: Matthew Wilcox @ 2021-10-06  1:26 UTC
  To: Kees Cook; +Cc: linux-mm, Thomas Gleixner

On Tue, Oct 05, 2021 at 02:25:23PM -0700, Kees Cook wrote:
> On Mon, Oct 04, 2021 at 11:42:22PM +0100, Matthew Wilcox (Oracle) wrote:
> > If you have a vmalloc() allocation, or an address from calling vmap(),
> > you cannot overrun the vm_area which describes it, regardless of the
> > size of the underlying allocation.  This probably doesn't do much for
> > security because vmalloc comes with guard pages these days, but it
> > prevents usercopy aborts when copying to a vmap() of smaller pages.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > ---
> >  mm/usercopy.c | 9 +++++++++
> >  1 file changed, 9 insertions(+)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index ac95b22fbbce..7bfc4f9ed1e4 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -17,6 +17,7 @@
> >  #include <linux/sched/task.h>
> >  #include <linux/sched/task_stack.h>
> >  #include <linux/thread_info.h>
> > +#include <linux/vmalloc.h>
> >  #include <linux/atomic.h>
> >  #include <linux/jump_label.h>
> >  #include <asm/sections.h>
> > @@ -236,6 +237,14 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
> >  		return;
> >  	}
> >  
> > +	if (is_vmalloc_addr(ptr)) {
> > +		struct vm_struct *vm = find_vm_area(ptr);
> > +
> > +		if (ptr + n > vm->addr + vm->size)
> > +			usercopy_abort("vmalloc", NULL, to_user, 0, n);
> 
> This "0" is easy to make "ptr - vm->addr". With that fixed:
> 
> Acked-by: Kees Cook <keescook@chromium.org>

Looking at this again, if we do ...

	char *p = vmalloc(2 * PAGE_SIZE);
	copy_from_user(p + 2 * PAGE_SIZE, ...);

then 'vm' can be NULL.  I think.  While we can't catch everything, a
NULL pointer dereference here seems a little unfriendly?  So how about
this:

        if (is_vmalloc_addr(ptr)) {
                struct vm_struct *vm = find_vm_area(ptr);
                unsigned long offset;

                if (!vm) {
                        usercopy_abort("vmalloc", NULL, to_user, 0, n);
                        return;
                }

                offset = ptr - vm->addr;
                if (offset + n > vm->size)
                        usercopy_abort("vmalloc", NULL, to_user, offset, n);
                return;
        }

Do we want to distinguish the two cases somehow?



* Re: [PATCH 2/3] mm/usercopy: Detect vmalloc overruns
From: Kees Cook @ 2021-10-06  3:02 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Thomas Gleixner

On Wed, Oct 06, 2021 at 02:26:41AM +0100, Matthew Wilcox wrote:
> On Tue, Oct 05, 2021 at 02:25:23PM -0700, Kees Cook wrote:
> > On Mon, Oct 04, 2021 at 11:42:22PM +0100, Matthew Wilcox (Oracle) wrote:
> > > If you have a vmalloc() allocation, or an address from calling vmap(),
> > > you cannot overrun the vm_area which describes it, regardless of the
> > > size of the underlying allocation.  This probably doesn't do much for
> > > security because vmalloc comes with guard pages these days, but it
> > > prevents usercopy aborts when copying to a vmap() of smaller pages.
> > > 
> > > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > > ---
> > >  mm/usercopy.c | 9 +++++++++
> > >  1 file changed, 9 insertions(+)
> > > 
> > > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > > index ac95b22fbbce..7bfc4f9ed1e4 100644
> > > --- a/mm/usercopy.c
> > > +++ b/mm/usercopy.c
> > > @@ -17,6 +17,7 @@
> > >  #include <linux/sched/task.h>
> > >  #include <linux/sched/task_stack.h>
> > >  #include <linux/thread_info.h>
> > > +#include <linux/vmalloc.h>
> > >  #include <linux/atomic.h>
> > >  #include <linux/jump_label.h>
> > >  #include <asm/sections.h>
> > > @@ -236,6 +237,14 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
> > >  		return;
> > >  	}
> > >  
> > > +	if (is_vmalloc_addr(ptr)) {
> > > +		struct vm_struct *vm = find_vm_area(ptr);
> > > +
> > > +		if (ptr + n > vm->addr + vm->size)
> > > +			usercopy_abort("vmalloc", NULL, to_user, 0, n);
> > 
> > This "0" is easy to make "ptr - vm->addr". With that fixed:
> > 
> > Acked-by: Kees Cook <keescook@chromium.org>
> 
> Looking at this again, if we do ...
> 
> 	char *p = vmalloc(2 * PAGE_SIZE);
> 	copy_from_user(p + 2 * PAGE_SIZE, ...);
> 
> then 'vm' can be NULL.  I think.  While we can't catch everything, a
> NULL pointer dereference here seems a little unfriendly?  So how about
> this:

Oh right, because ptr will be in a guard page (or otherwise unallocated)
but within the vmalloc range?

> 
>         if (is_vmalloc_addr(ptr)) {
>                 struct vm_struct *vm = find_vm_area(ptr);
>                 unsigned long offset;
> 
>                 if (!vm) {
>                         usercopy_abort("vmalloc", NULL, to_user, 0, n);
>                         return;
>                 }
> 
>                 offset = ptr - vm->addr;
>                 if (offset + n > vm->size)
>                         usercopy_abort("vmalloc", NULL, to_user, offset, n);
>                 return;
>         }
> 
> Do we want to distinguish the two cases somehow?

I'd report the first's "details" as "unmapped" or something:

	usercopy_abort("vmalloc", "unmapped", to_user, 0, n);

and the latter is fine as-is.

-- 
Kees Cook


