* [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
@ 2021-09-16 10:41 Peter Zijlstra
  2021-09-16 11:23 ` Christoph Hellwig
                   ` (3 more replies)
  0 siblings, 4 replies; 5+ messages in thread
From: Peter Zijlstra @ 2021-09-16 10:41 UTC (permalink / raw)
  To: Andrew Morton, Christoph Hellwig, Will Deacon
  Cc: andreyknvl, linux-kernel, linux-mm, Mel Gorman, keescook


The vmalloc guard pages are added on top of each allocation, thereby
isolating any two allocations from one another. The top guard of the
lower allocation is the bottom guard of the higher allocation, and so
on.

Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
isolating separate allocations.
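
As an aside, here is a minimal user-space sketch of why the trailing
guard matters (not kernel code; PAGE_SIZE and the NO_GUARD bit are
illustrative stand-ins for the real constants): without the guard there
is no hole between an allocation and whatever sits directly above it,
so an overrun walks straight into the neighbour.

  #include <stdio.h>

  #define PAGE_SIZE	4096UL
  #define NO_GUARD	0x1UL		/* stand-in for VM_NO_GUARD */

  /*
   * Bytes actually reserved for a request of @pages, mirroring the idea
   * that one extra trailing guard page is reserved unless asked not to.
   */
  static unsigned long reserved_bytes(unsigned long pages, unsigned long flags)
  {
  	return (pages + ((flags & NO_GUARD) ? 0 : 1)) * PAGE_SIZE;
  }

  int main(void)
  {
  	unsigned long usable = 4 * PAGE_SIZE;	/* a 4-page allocation */

  	/* The next area begins where this reservation ends. */
  	printf("hole above allocation with guard:    %lu bytes\n",
  	       reserved_bytes(4, 0) - usable);
  	printf("hole above allocation without guard: %lu bytes\n",
  	       reserved_bytes(4, NO_GUARD) - usable);
  	return 0;
  }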

There are only two in-tree users of this flag, neither of which uses it
through the exported interface. Ensure it stays this way.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 include/linux/vmalloc.h | 2 +-
 mm/vmalloc.c            | 7 +++++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 671d402c3778..10e9571ff0b2 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -22,7 +22,7 @@ struct notifier_block;		/* in notifier.h */
 #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
 #define VM_DMA_COHERENT		0x00000010	/* dma_alloc_coherent */
 #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
-#define VM_NO_GUARD		0x00000040      /* don't add guard page */
+#define VM_NO_GUARD		0x00000040      /* ***DANGEROUS*** don't add guard page */
 #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
 #define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
 #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index d77830ff604c..01927ebea267 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -2743,6 +2743,13 @@ void *vmap(struct page **pages, unsigned int count,
 
 	might_sleep();
 
+	/*
+	 * Your top guard is someone else's bottom guard. Not having a top
+	 * guard compromises someone else's mappings too.
+	 */
+	if (WARN_ON_ONCE(flags & VM_NO_GUARD))
+		flags &= ~VM_NO_GUARD;
+
 	if (count > totalram_pages())
 		return NULL;
 
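For illustration, a hypothetical out-of-tree caller (the function and
its arguments below are made up for this sketch) that still passes the
flag would now trip the warning once and get a guarded mapping anyway:

  #include <linux/mm.h>
  #include <linux/vmalloc.h>

  /*
   * Illustrative only: map @npages pages contiguously. After this patch
   * vmap() warns about VM_NO_GUARD and strips it, so the resulting area
   * keeps its trailing guard page.
   */
  static void *map_buffer(struct page **pages, unsigned int npages)
  {
  	return vmap(pages, npages, VM_MAP | VM_NO_GUARD, PAGE_KERNEL);
  }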

* Re: [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
  2021-09-16 10:41 [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap() Peter Zijlstra
@ 2021-09-16 11:23 ` Christoph Hellwig
  2021-09-16 12:34 ` David Hildenbrand
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 5+ messages in thread
From: Christoph Hellwig @ 2021-09-16 11:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andrew Morton, Christoph Hellwig, Will Deacon, andreyknvl,
	linux-kernel, linux-mm, Mel Gorman, keescook

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>

* Re: [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
  2021-09-16 10:41 [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap() Peter Zijlstra
  2021-09-16 11:23 ` Christoph Hellwig
@ 2021-09-16 12:34 ` David Hildenbrand
  2021-09-16 13:30 ` Will Deacon
  2021-09-16 15:57 ` Kees Cook
  3 siblings, 0 replies; 5+ messages in thread
From: David Hildenbrand @ 2021-09-16 12:34 UTC (permalink / raw)
  To: Peter Zijlstra, Andrew Morton, Christoph Hellwig, Will Deacon
  Cc: andreyknvl, linux-kernel, linux-mm, Mel Gorman, keescook

On 16.09.21 12:41, Peter Zijlstra wrote:
> 
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation, and so
> on.
> 
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
> 
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>   include/linux/vmalloc.h | 2 +-
>   mm/vmalloc.c            | 7 +++++++
>   2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 671d402c3778..10e9571ff0b2 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -22,7 +22,7 @@ struct notifier_block;		/* in notifier.h */
>   #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
>   #define VM_DMA_COHERENT		0x00000010	/* dma_alloc_coherent */
>   #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
> -#define VM_NO_GUARD		0x00000040      /* don't add guard page */
> +#define VM_NO_GUARD		0x00000040      /* ***DANGEROUS*** don't add guard page */
>   #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
>   #define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
>   #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d77830ff604c..01927ebea267 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2743,6 +2743,13 @@ void *vmap(struct page **pages, unsigned int count,
>   
>   	might_sleep();
>   
> +	/*
> +	 * Your top guard is someone else's bottom guard. Not having a top
> +	 * guard compromises someone else's mappings too.
> +	 */
> +	if (WARN_ON_ONCE(flags & VM_NO_GUARD))
> +		flags &= ~VM_NO_GUARD;
> +
>   	if (count > totalram_pages())
>   		return NULL;
>   
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


* Re: [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
  2021-09-16 10:41 [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap() Peter Zijlstra
  2021-09-16 11:23 ` Christoph Hellwig
  2021-09-16 12:34 ` David Hildenbrand
@ 2021-09-16 13:30 ` Will Deacon
  2021-09-16 15:57 ` Kees Cook
  3 siblings, 0 replies; 5+ messages in thread
From: Will Deacon @ 2021-09-16 13:30 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andrew Morton, Christoph Hellwig, andreyknvl, linux-kernel,
	linux-mm, Mel Gorman, keescook

On Thu, Sep 16, 2021 at 12:41:56PM +0200, Peter Zijlstra wrote:
> 
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation, and so
> on.
> 
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
> 
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>  include/linux/vmalloc.h | 2 +-
>  mm/vmalloc.c            | 7 +++++++
>  2 files changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 671d402c3778..10e9571ff0b2 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -22,7 +22,7 @@ struct notifier_block;		/* in notifier.h */
>  #define VM_USERMAP		0x00000008	/* suitable for remap_vmalloc_range */
>  #define VM_DMA_COHERENT		0x00000010	/* dma_alloc_coherent */
>  #define VM_UNINITIALIZED	0x00000020	/* vm_struct is not fully initialized */
> -#define VM_NO_GUARD		0x00000040      /* don't add guard page */
> +#define VM_NO_GUARD		0x00000040      /* ***DANGEROUS*** don't add guard page */
>  #define VM_KASAN		0x00000080      /* has allocated kasan shadow memory */
>  #define VM_FLUSH_RESET_PERMS	0x00000100	/* reset direct map and flush TLB on unmap, can't be freed in atomic context */
>  #define VM_MAP_PUT_PAGES	0x00000200	/* put pages and free array in vfree */
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index d77830ff604c..01927ebea267 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -2743,6 +2743,13 @@ void *vmap(struct page **pages, unsigned int count,
>  
>  	might_sleep();
>  
> +	/*
> +	 * Your top guard is someone else's bottom guard. Not having a top
> +	 * guard compromises someone else's mappings too.
> +	 */
> +	if (WARN_ON_ONCE(flags & VM_NO_GUARD))
> +		flags &= ~VM_NO_GUARD;
> +
>  	if (count > totalram_pages())
>  		return NULL;

Acked-by: Will Deacon <will@kernel.org>

Thanks!

Will

* Re: [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap()
  2021-09-16 10:41 [PATCH] mm/vmalloc: Don't allow VM_NO_GUARD on vmap() Peter Zijlstra
                   ` (2 preceding siblings ...)
  2021-09-16 13:30 ` Will Deacon
@ 2021-09-16 15:57 ` Kees Cook
  3 siblings, 0 replies; 5+ messages in thread
From: Kees Cook @ 2021-09-16 15:57 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Andrew Morton, Christoph Hellwig, Will Deacon, andreyknvl,
	linux-kernel, linux-mm, Mel Gorman

On Thu, Sep 16, 2021 at 12:41:56PM +0200, Peter Zijlstra wrote:
> 
> The vmalloc guard pages are added on top of each allocation, thereby
> isolating any two allocations from one another. The top guard of the
> lower allocation is the bottom guard of the higher allocation, and so
> on.
> 
> Therefore VM_NO_GUARD is dangerous; it breaks the basic premise of
> isolating separate allocations.
> 
> There are only two in-tree users of this flag, neither of which uses it
> through the exported interface. Ensure it stays this way.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Yes, please. :)

Acked-by: Kees Cook <keescook@chromium.org>

-- 
Kees Cook
