* [PATCH v2 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
@ 2020-06-25 21:55 Kees Cook
  2020-06-25 21:55 ` [PATCH v2 1/2] " Kees Cook
  2020-06-25 21:55 ` [PATCH v2 2/2] slab: Add naive detection of double free Kees Cook
  0 siblings, 2 replies; 5+ messages in thread
From: Kees Cook @ 2020-06-25 21:55 UTC (permalink / raw)
  To: akpm
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Hi,

In reviewing Vlastimil Babka's latest slub debug series, I realized[1]
that several checks under CONFIG_SLAB_FREELIST_HARDENED weren't being
applied to SLAB. Fix this by expanding the Kconfig coverage and adding a
simple double-free check for SLAB.

v2:
- rebase to -mmots
- drop SLOB support (willy)
v1: https://lore.kernel.org/lkml/20200617195349.3471794-1-keescook@chromium.org/

Thanks!

-Kees

[1] https://lore.kernel.org/lkml/202006171039.FBDF2D7F4A@keescook/

Kees Cook (2):
  mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
  slab: Add naive detection of double free

 init/Kconfig |  9 +++++----
 mm/slab.c    | 14 ++++++++++++--
 2 files changed, 17 insertions(+), 6 deletions(-)

-- 
2.25.1


* [PATCH v2 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
  2020-06-25 21:55 [PATCH v2 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB Kees Cook
@ 2020-06-25 21:55 ` Kees Cook
  2020-08-04 17:15   ` Vlastimil Babka
  2020-06-25 21:55 ` [PATCH v2 2/2] slab: Add naive detection of double free Kees Cook
  1 sibling, 1 reply; 5+ messages in thread
From: Kees Cook @ 2020-06-25 21:55 UTC (permalink / raw)
  To: akpm
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Include SLAB caches when performing kmem_cache pointer verification. A
defense against such corruption[1] should be applied to all the
allocators. With this added, the "SLAB_FREE_CROSS" and "SLAB_FREE_PAGE"
LKDTM tests now pass on SLAB:

  lkdtm: Performing direct entry SLAB_FREE_CROSS
  lkdtm: Attempting cross-cache slab free ...
  ------------[ cut here ]------------
  cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
  WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
  ...
  lkdtm: Performing direct entry SLAB_FREE_PAGE
  lkdtm: Attempting non-Slab slab free ...
  ------------[ cut here ]------------
  virt_to_cache: Object is not a Slab page!
  WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0
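
For reference, the cross-cache pattern that SLAB_FREE_CROSS exercises looks
roughly like the sketch below (illustrative only; the demo caches and init
function are hypothetical, not the LKDTM code). With this change, the SLAB
free path runs the same cache_from_obj() verification as SLUB and WARNs on
the mismatched free instead of silently accepting it:

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/slab.h>

  static struct kmem_cache *demo_cache_a, *demo_cache_b;

  static int __init cross_free_demo_init(void)
  {
          void *obj;

          demo_cache_a = kmem_cache_create("demo-heap-a", 64, 0, 0, NULL);
          demo_cache_b = kmem_cache_create("demo-heap-b", 64, 0, 0, NULL);
          if (!demo_cache_a || !demo_cache_b)
                  return -ENOMEM;

          obj = kmem_cache_alloc(demo_cache_a, GFP_KERNEL);
          if (!obj)
                  return -ENOMEM;

          /* Free into the wrong cache: expect the WARNING shown above. */
          kmem_cache_free(demo_cache_b, obj);
          return 0;
  }
  module_init(cross_free_demo_init);
  MODULE_LICENSE("GPL");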

Additionally, clean up the neighboring Kconfig entries for clarity and
readability, and drop the redundant "default n" line.

[1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf

Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index a46aa8f3174d..7542d28c6f61 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1885,9 +1885,8 @@ config SLAB_MERGE_DEFAULT
 	  command line.
 
 config SLAB_FREELIST_RANDOM
-	default n
+	bool "Randomize slab freelist"
 	depends on SLAB || SLUB
-	bool "SLAB freelist randomization"
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -1895,12 +1894,14 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLUB
+	depends on SLAB || SLUB
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This options makes minor performance
 	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods.
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
 
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
-- 
2.25.1



* [PATCH v2 2/2] slab: Add naive detection of double free
  2020-06-25 21:55 [PATCH v2 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB Kees Cook
  2020-06-25 21:55 ` [PATCH v2 1/2] " Kees Cook
@ 2020-06-25 21:55 ` Kees Cook
  2020-08-04 17:18   ` Vlastimil Babka
  1 sibling, 1 reply; 5+ messages in thread
From: Kees Cook @ 2020-06-25 21:55 UTC (permalink / raw)
  To: akpm
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Similar to commit ce6fa91b9363 ("mm/slub.c: add a naive detection
of double free or corruption"), add a very cheap double-free check
for SLAB under CONFIG_SLAB_FREELIST_HARDENED. With this added, the
"SLAB_FREE_DOUBLE" LKDTM test passes under SLAB:

  lkdtm: Performing direct entry SLAB_FREE_DOUBLE
  lkdtm: Attempting double slab free ...
  ------------[ cut here ]------------
  WARNING: CPU: 2 PID: 2193 at mm/slab.c:757 ___cache_free+0x325/0x390
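
For reference, the trivial double free that this check catches looks roughly
like the sketch below (illustrative only; the cache name and init function
are hypothetical, not the LKDTM code). The new __free_one() helper compares
the incoming pointer against the most recently queued entry in the per-CPU
array cache, so an immediate back-to-back free of the same object trips the
WARN_ON_ONCE() instead of being queued twice:

  #include <linux/init.h>
  #include <linux/module.h>
  #include <linux/slab.h>

  static int __init double_free_demo_init(void)
  {
          struct kmem_cache *cache;
          void *obj;

          cache = kmem_cache_create("demo-double-free", 64, 0, 0, NULL);
          if (!cache)
                  return -ENOMEM;

          obj = kmem_cache_alloc(cache, GFP_KERNEL);
          if (!obj)
                  return -ENOMEM;

          kmem_cache_free(cache, obj);
          /* Second free of the same object: expect the WARNING above. */
          kmem_cache_free(cache, obj);
          return 0;
  }
  module_init(double_free_demo_init);
  MODULE_LICENSE("GPL");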

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slab.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index ebac5e400ad0..bbff6705ab2b 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -749,6 +749,16 @@ static void drain_alien_cache(struct kmem_cache *cachep,
 	}
 }
 
+/* &alien->lock must be held by alien callers. */
+static __always_inline void __free_one(struct array_cache *ac, void *objp)
+{
+	/* Avoid trivial double-free. */
+	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
+	    WARN_ON_ONCE(ac->avail > 0 && ac->entry[ac->avail - 1] == objp))
+		return;
+	ac->entry[ac->avail++] = objp;
+}
+
 static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 				int node, int page_node)
 {
@@ -767,7 +777,7 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 			STATS_INC_ACOVERFLOW(cachep);
 			__drain_alien_cache(cachep, ac, page_node, &list);
 		}
-		ac->entry[ac->avail++] = objp;
+		__free_one(ac, objp);
 		spin_unlock(&alien->lock);
 		slabs_destroy(cachep, &list);
 	} else {
@@ -3457,7 +3467,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 		}
 	}
 
-	ac->entry[ac->avail++] = objp;
+	__free_one(ac, objp);
 }
 
 /**
-- 
2.25.1



* Re: [PATCH v2 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB
  2020-06-25 21:55 ` [PATCH v2 1/2] " Kees Cook
@ 2020-08-04 17:15   ` Vlastimil Babka
  0 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2020-08-04 17:15 UTC (permalink / raw)
  To: Kees Cook, akpm
  Cc: Roman Gushchin, Christoph Lameter, Alexander Popov, Pekka Enberg,
	David Rientjes, Joonsoo Kim, vinmenon, Matthew Garrett,
	Jann Horn, Vijayanand Jitta, linux-mm, linux-kernel

On 6/25/20 11:55 PM, Kees Cook wrote:
> Include SLAB caches when performing kmem_cache pointer verification. A
> defense against such corruption[1] should be applied to all the
> allocators. With this added, the "SLAB_FREE_CROSS" and "SLAB_FREE_PAGE"
> LKDTM tests now pass on SLAB:
> 
>   lkdtm: Performing direct entry SLAB_FREE_CROSS
>   lkdtm: Attempting cross-cache slab free ...
>   ------------[ cut here ]------------
>   cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
>   WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
>   ...
>   lkdtm: Performing direct entry SLAB_FREE_PAGE
>   lkdtm: Attempting non-Slab slab free ...
>   ------------[ cut here ]------------
>   virt_to_cache: Object is not a Slab page!
>   WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0
> 
> Additionally, clean up the neighboring Kconfig entries for clarity and
> readability, and drop the redundant "default n" line.
> 
> [1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf
> 
> Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  init/Kconfig | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/init/Kconfig b/init/Kconfig
> index a46aa8f3174d..7542d28c6f61 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1885,9 +1885,8 @@ config SLAB_MERGE_DEFAULT
>  	  command line.
>  
>  config SLAB_FREELIST_RANDOM
> -	default n
> +	bool "Randomize slab freelist"
>  	depends on SLAB || SLUB
> -	bool "SLAB freelist randomization"
>  	help
>  	  Randomizes the freelist order used on creating new pages. This
>  	  security feature reduces the predictability of the kernel slab
> @@ -1895,12 +1894,14 @@ config SLAB_FREELIST_RANDOM
>  
>  config SLAB_FREELIST_HARDENED
>  	bool "Harden slab freelist metadata"
> -	depends on SLUB
> +	depends on SLAB || SLUB
>  	help
>  	  Many kernel heap attacks try to target slab cache metadata and
>  	  other infrastructure. This options makes minor performance
>  	  sacrifices to harden the kernel slab allocator against common
> -	  freelist exploit methods.
> +	  freelist exploit methods. Some slab implementations have more
> +	  sanity-checking than others. This option is most effective with
> +	  CONFIG_SLUB.
>  
>  config SHUFFLE_PAGE_ALLOCATOR
>  	bool "Page allocator randomization"
> 



* Re: [PATCH v2 2/2] slab: Add naive detection of double free
  2020-06-25 21:55 ` [PATCH v2 2/2] slab: Add naive detection of double free Kees Cook
@ 2020-08-04 17:18   ` Vlastimil Babka
  0 siblings, 0 replies; 5+ messages in thread
From: Vlastimil Babka @ 2020-08-04 17:18 UTC (permalink / raw)
  To: Kees Cook, akpm
  Cc: Roman Gushchin, Christoph Lameter, Alexander Popov, Pekka Enberg,
	David Rientjes, Joonsoo Kim, vinmenon, Matthew Garrett,
	Jann Horn, Vijayanand Jitta, linux-mm, linux-kernel

On 6/25/20 11:55 PM, Kees Cook wrote:
> Similar to commit ce6fa91b9363 ("mm/slub.c: add a naive detection
> of double free or corruption"), add a very cheap double-free check
> for SLAB under CONFIG_SLAB_FREELIST_HARDENED. With this added, the
> "SLAB_FREE_DOUBLE" LKDTM test passes under SLAB:
> 
>   lkdtm: Performing direct entry SLAB_FREE_DOUBLE
>   lkdtm: Attempting double slab free ...
>   ------------[ cut here ]------------
>   WARNING: CPU: 2 PID: 2193 at mm/slab.c:757 ___cache_free+0x325/0x390
> 
> Signed-off-by: Kees Cook <keescook@chromium.org>

No idea how much it helps in practice wrt security, but implementation-wise it
seems fine, so:

Acked-by: Vlastimil Babka <vbabka@suse.cz>

Maybe you don't want to warn just once, though? We had a similar discussion
about cache_from_obj().

> ---
>  mm/slab.c | 14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slab.c b/mm/slab.c
> index ebac5e400ad0..bbff6705ab2b 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -749,6 +749,16 @@ static void drain_alien_cache(struct kmem_cache *cachep,
>  	}
>  }
>  
> +/* &alien->lock must be held by alien callers. */
> +static __always_inline void __free_one(struct array_cache *ac, void *objp)
> +{
> +	/* Avoid trivial double-free. */
> +	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
> +	    WARN_ON_ONCE(ac->avail > 0 && ac->entry[ac->avail - 1] == objp))
> +		return;
> +	ac->entry[ac->avail++] = objp;
> +}
> +
>  static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
>  				int node, int page_node)
>  {
> @@ -767,7 +777,7 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
>  			STATS_INC_ACOVERFLOW(cachep);
>  			__drain_alien_cache(cachep, ac, page_node, &list);
>  		}
> -		ac->entry[ac->avail++] = objp;
> +		__free_one(ac, objp);
>  		spin_unlock(&alien->lock);
>  		slabs_destroy(cachep, &list);
>  	} else {
> @@ -3457,7 +3467,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
>  		}
>  	}
>  
> -	ac->entry[ac->avail++] = objp;
> +	__free_one(ac, objp);
>  }
>  
>  /**
> 



