* [PATCH v2 0/3] Assorted improvements to usercopy
@ 2021-10-06 12:42 Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 1/3] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-10-06 12:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

For kmap() addresses we must prohibit copies that cross a page boundary.
vmap() addresses are limited by the length of the mapping, and
compound pages are limited by the size of the compound page.

These should probably all have test cases?
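
As a rough illustration, a test for the vmalloc case might look something
like the sketch below (purely hypothetical -- the function name and ubuf
parameter are made up, and this is not proposed lkdtm code; it assumes
<linux/mm.h>, <linux/vmalloc.h> and <linux/uaccess.h>):

	static void usercopy_test_vmap_overrun(void __user *ubuf)
	{
		/* Keep the size out of __builtin_constant_p() so the
		 * runtime check in check_object_size() actually runs. */
		volatile size_t fuzz = 0;
		struct page *pages[2];
		void *vaddr;

		pages[0] = alloc_page(GFP_KERNEL);
		pages[1] = alloc_page(GFP_KERNEL);
		if (!pages[0] || !pages[1])
			goto free;

		vaddr = vmap(pages, 2, VM_MAP, PAGE_KERNEL);
		if (!vaddr)
			goto free;

		/* Deliberately larger than the two-page mapping: with patch
		 * 2/3 applied this should hit usercopy_abort(). */
		if (copy_to_user(ubuf, vaddr, 4 * PAGE_SIZE + fuzz))
			pr_info("vmap overrun copy faulted\n");

		vunmap(vaddr);
	free:
		if (pages[1])
			__free_page(pages[1]);
		if (pages[0])
			__free_page(pages[0]);
	}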

v2:
 - Prevent a NULL pointer dereference when a vmalloc-range pointer
   doesn't have an associated allocation (me)
 - Report better offsets than "0" (Kees)

Matthew Wilcox (Oracle) (3):
  mm/usercopy: Check kmap addresses properly
  mm/usercopy: Detect vmalloc overruns
  mm/usercopy: Detect compound page overruns

 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++
 mm/usercopy.c                    | 42 +++++++++++++++++++++++---------
 3 files changed, 42 insertions(+), 11 deletions(-)

-- 
2.32.0




* [PATCH v2 1/3] mm/usercopy: Check kmap addresses properly
  2021-10-06 12:42 [PATCH v2 0/3] Assorted improvements to usercopy Matthew Wilcox (Oracle)
@ 2021-10-06 12:42 ` Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 2/3] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 3/3] mm/usercopy: Detect compound page overruns Matthew Wilcox (Oracle)
  2 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-10-06 12:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

If you are copying to an address in the kmap region, you may not copy
across a page boundary, no matter what the size of the underlying
allocation.  You can't kmap() a slab page because slab pages always
come from low memory.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
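(Not part of the patch -- just an illustration of the rule above.  A
well-behaved caller copying out of a highmem page bounds each copy to the
single page it mapped, roughly like this hypothetical snippet:)

	/* Copy up to 'len' bytes starting at 'offset' within one (possibly
	 * highmem) page; the chunk can never extend past the one-page
	 * kmap() mapping, so it never crosses a page boundary. */
	size_t chunk = min_t(size_t, len, PAGE_SIZE - offset);
	void *kaddr = kmap(page);

	if (copy_to_user(ubuf, kaddr + offset, chunk))
		err = -EFAULT;
	kunmap(page);
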
 arch/x86/include/asm/highmem.h   |  1 +
 include/linux/highmem-internal.h | 10 ++++++++++
 mm/usercopy.c                    | 16 ++++++++++------
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 032e020853aa..731ee7cc40a5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -26,6 +26,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/fixmap.h>
+#include <asm/pgtable_areas.h>
 
 /* declarations for highmem.c */
 extern unsigned long highstart_pfn, highend_pfn;
diff --git a/include/linux/highmem-internal.h b/include/linux/highmem-internal.h
index 4aa1031d3e4c..97d6dc836749 100644
--- a/include/linux/highmem-internal.h
+++ b/include/linux/highmem-internal.h
@@ -143,6 +143,11 @@ static inline void totalhigh_pages_add(long count)
 	atomic_long_add(count, &_totalhigh_pages);
 }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	unsigned long addr = (unsigned long)x;
+	return addr >= PKMAP_ADDR(0) && addr < PKMAP_ADDR(LAST_PKMAP);
+}
 #else /* CONFIG_HIGHMEM */
 
 static inline struct page *kmap_to_page(void *addr)
@@ -223,6 +228,11 @@ static inline void __kunmap_atomic(void *addr)
 static inline unsigned int nr_free_highpages(void) { return 0; }
 static inline unsigned long totalhigh_pages(void) { return 0UL; }
 
+static inline bool is_kmap_addr(const void *x)
+{
+	return false;
+}
+
 #endif /* CONFIG_HIGHMEM */
 
 /*
diff --git a/mm/usercopy.c b/mm/usercopy.c
index b3de3c4eefba..8c039302465f 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -228,12 +228,16 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (!virt_addr_valid(ptr))
 		return;
 
-	/*
-	 * When CONFIG_HIGHMEM=y, kmap_to_page() will give either the
-	 * highmem page or fallback to virt_to_page(). The following
-	 * is effectively a highmem-aware virt_to_head_page().
-	 */
-	page = compound_head(kmap_to_page((void *)ptr));
+	if (is_kmap_addr(ptr)) {
+		unsigned long page_end = (unsigned long)ptr | (PAGE_SIZE - 1);
+
+		if ((unsigned long)ptr + n - 1 > page_end)
+			usercopy_abort("kmap", NULL, to_user,
+					offset_in_page(ptr), n);
+		return;
+	}
+
+	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
-- 
2.32.0




* [PATCH v2 2/3] mm/usercopy: Detect vmalloc overruns
  2021-10-06 12:42 [PATCH v2 0/3] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 1/3] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
@ 2021-10-06 12:42 ` Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 3/3] mm/usercopy: Detect compound page overruns Matthew Wilcox (Oracle)
  2 siblings, 0 replies; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-10-06 12:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

If you have a vmalloc() allocation, or an address from calling vmap(),
you cannot overrun the vm_area which describes it, regardless of the
size of the underlying allocation.  This probably doesn't do much for
security because vmalloc comes with guard pages these days, but it
prevents usercopy aborts when copying to a vmap() of smaller pages.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
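(Not part of the patch -- an illustration of the "vmap() of smaller pages"
case mentioned above, with hypothetical names: a buffer stitched together
from discrete order-0 pages but presented as one virtually contiguous
mapping.)

	void *buf = vmap(pages, nr_pages, VM_MAP, PAGE_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* This copy spans several underlying (non-compound) pages, but it
	 * stays inside the vm_area, so hardened usercopy must not reject
	 * it; only a copy running past the end of the mapping should now
	 * abort. */
	if (copy_to_user(ubuf, buf, nr_pages * PAGE_SIZE))
		return -EFAULT;
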
 mm/usercopy.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 8c039302465f..63476e1506e0 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -17,6 +17,7 @@
 #include <linux/sched/task.h>
 #include <linux/sched/task_stack.h>
 #include <linux/thread_info.h>
+#include <linux/vmalloc.h>
 #include <linux/atomic.h>
 #include <linux/jump_label.h>
 #include <asm/sections.h>
@@ -237,6 +238,21 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 		return;
 	}
 
+	if (is_vmalloc_addr(ptr)) {
+		struct vm_struct *vm = find_vm_area(ptr);
+		unsigned long offset;
+
+		if (!vm) {
+			usercopy_abort("vmalloc", "no area", to_user, 0, n);
+			return;
+		}
+
+		offset = ptr - vm->addr;
+		if (offset + n > vm->size)
+			usercopy_abort("vmalloc", NULL, to_user, offset, n);
+		return;
+	}
+
 	page = virt_to_head_page(ptr);
 
 	if (PageSlab(page)) {
-- 
2.32.0




* [PATCH v2 3/3] mm/usercopy: Detect compound page overruns
  2021-10-06 12:42 [PATCH v2 0/3] Assorted improvements to usercopy Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 1/3] mm/usercopy: Check kmap addresses properly Matthew Wilcox (Oracle)
  2021-10-06 12:42 ` [PATCH v2 2/3] mm/usercopy: Detect vmalloc overruns Matthew Wilcox (Oracle)
@ 2021-10-06 12:42 ` Matthew Wilcox (Oracle)
  2021-10-06 14:08   ` Matthew Wilcox
  2 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox (Oracle) @ 2021-10-06 12:42 UTC (permalink / raw)
  To: Kees Cook; +Cc: Matthew Wilcox (Oracle), linux-mm, Thomas Gleixner

Move the compound page overrun detection out of
CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kees Cook <keescook@chromium.org>
---
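(Not part of the patch -- a hypothetical sketch of the case this now
covers; the function and its arguments are made up.  Note the copy length
must not be a compile-time constant, or check_object_size() short-circuits
and the runtime check never runs.)

	/* An order-2 compound (__GFP_COMP) allocation: copies within its
	 * four pages are allowed, while anything running past
	 * page_size(page) should now be rejected even without
	 * CONFIG_HARDENED_USERCOPY_PAGESPAN. */
	static int copy_compound_to_user(void __user *ubuf, size_t offset,
					 size_t len)
	{
		struct page *page = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);
		void *buf;
		int ret = 0;

		if (!page)
			return -ENOMEM;
		buf = page_address(page);

		/* Fine while offset + len <= page_size(page); an overrun
		 * triggers usercopy_abort("page alloc", ...). */
		if (copy_to_user(ubuf, buf + offset, len))
			ret = -EFAULT;

		__free_pages(page, 2);
		return ret;
	}
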
 mm/usercopy.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/mm/usercopy.c b/mm/usercopy.c
index 63476e1506e0..b825c4344917 100644
--- a/mm/usercopy.c
+++ b/mm/usercopy.c
@@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
 		return;
 
-	/* Allow if fully inside the same compound (__GFP_COMP) page. */
-	endpage = virt_to_head_page(end);
-	if (likely(endpage == page))
-		return;
-
 	/*
 	 * Reject if range is entirely either Reserved (i.e. special or
 	 * device memory), or CMA. Otherwise, reject since the object spans
@@ -258,6 +253,11 @@ static inline void check_heap_object(const void *ptr, unsigned long n,
 	if (PageSlab(page)) {
 		/* Check slab allocator for flags and size. */
 		__check_heap_object(ptr, n, page, to_user);
+	} else if (PageHead(page)) {
+		/* A compound allocation */
+		unsigned long offset = ptr - page_address(page);
+		if (offset + n > page_size(page))
+			usercopy_abort("page alloc", NULL, to_user, offset, n);
 	} else {
 		/* Verify object does not incorrectly span multiple pages. */
 		check_page_span(ptr, n, page, to_user);
-- 
2.32.0




* Re: [PATCH v2 3/3] mm/usercopy: Detect compound page overruns
  2021-10-06 12:42 ` [PATCH v2 3/3] mm/usercopy: Detect compound page overruns Matthew Wilcox (Oracle)
@ 2021-10-06 14:08   ` Matthew Wilcox
  2021-10-06 22:07     ` Kees Cook
  0 siblings, 1 reply; 6+ messages in thread
From: Matthew Wilcox @ 2021-10-06 14:08 UTC (permalink / raw)
  To: Kees Cook; +Cc: linux-mm, Thomas Gleixner

On Wed, Oct 06, 2021 at 01:42:26PM +0100, Matthew Wilcox (Oracle) wrote:
> Move the compound page overrun detection out of
> CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kees Cook <keescook@chromium.org>
> ---
>  mm/usercopy.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/usercopy.c b/mm/usercopy.c
> index 63476e1506e0..b825c4344917 100644
> --- a/mm/usercopy.c
> +++ b/mm/usercopy.c
> @@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
>  		return;
>  
> -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> -	endpage = virt_to_head_page(end);
> -	if (likely(endpage == page))
> -		return;
> -
>  	/*
>  	 * Reject if range is entirely either Reserved (i.e. special or
>  	 * device memory), or CMA. Otherwise, reject since the object spans

Needs an extra hunk to avoid a warning with that config:

@@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
 {
 #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
        const void *end = ptr + n - 1;
-       struct page *endpage;
        bool is_reserved, is_cma;

        /*

I'll wait a few days and send a v3.



* Re: [PATCH v2 3/3] mm/usercopy: Detect compound page overruns
  2021-10-06 14:08   ` Matthew Wilcox
@ 2021-10-06 22:07     ` Kees Cook
  0 siblings, 0 replies; 6+ messages in thread
From: Kees Cook @ 2021-10-06 22:07 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-mm, Thomas Gleixner

On Wed, Oct 06, 2021 at 03:08:46PM +0100, Matthew Wilcox wrote:
> On Wed, Oct 06, 2021 at 01:42:26PM +0100, Matthew Wilcox (Oracle) wrote:
> > Move the compound page overrun detection out of
> > CONFIG_HARDENED_USERCOPY_PAGESPAN so it's enabled for more people.
> > 
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > Acked-by: Kees Cook <keescook@chromium.org>
> > ---
> >  mm/usercopy.c | 10 +++++-----
> >  1 file changed, 5 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/usercopy.c b/mm/usercopy.c
> > index 63476e1506e0..b825c4344917 100644
> > --- a/mm/usercopy.c
> > +++ b/mm/usercopy.c
> > @@ -194,11 +194,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
> >  		   ((unsigned long)end & (unsigned long)PAGE_MASK)))
> >  		return;
> >  
> > -	/* Allow if fully inside the same compound (__GFP_COMP) page. */
> > -	endpage = virt_to_head_page(end);
> > -	if (likely(endpage == page))
> > -		return;
> > -
> >  	/*
> >  	 * Reject if range is entirely either Reserved (i.e. special or
> >  	 * device memory), or CMA. Otherwise, reject since the object spans
> 
> Needs an extra hunk to avoid a warning with that config:

Ah yeah, good catch.

> 
> @@ -163,7 +163,6 @@ static inline void check_page_span(const void *ptr, unsigned long n,
>  {
>  #ifdef CONFIG_HARDENED_USERCOPY_PAGESPAN
>         const void *end = ptr + n - 1;
> -       struct page *endpage;
>         bool is_reserved, is_cma;
> 
>         /*
> 
> I'll wait a few days and send a v3.

When you send v3, can you CC linux-hardening@vger.kernel.org too?

Thanks for poking at this!

-Kees

-- 
Kees Cook


