* [PATCH 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
@ 2020-06-17 19:53 Kees Cook
  2020-06-17 19:53 ` [PATCH 1/2] " Kees Cook
  2020-06-17 19:53 ` [PATCH 2/2] slab: Add naive detection of double free Kees Cook
  0 siblings, 2 replies; 5+ messages in thread
From: Kees Cook @ 2020-06-17 19:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Hi,

In reviewing Vlastimil Babka's latest slub debug series, I realized[1]
that several checks under CONFIG_SLAB_FREELIST_HARDENED weren't being
applied to SLAB (or SLOB). This series fixes that by expanding the Kconfig
coverage and moving the cache_from_obj() check back into the common code.
It also adds a simple double-free check for SLAB.
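
With this applied, a SLAB-based kernel can enable the hardening directly;
for illustration, a minimal config fragment would be:

  CONFIG_SLAB=y
  CONFIG_SLAB_FREELIST_HARDENED=y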

Thanks!

-Kees

[1] https://lore.kernel.org/lkml/202006171039.FBDF2D7F4A@keescook/

Kees Cook (2):
  mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
  slab: Add naive detection of double free

 init/Kconfig |  8 ++++----
 mm/slab.c    | 22 ++++++++++++----------
 mm/slab.h    | 31 +++++++++++++++++++++++++++++++
 mm/slub.c    | 25 +------------------------
 4 files changed, 48 insertions(+), 38 deletions(-)

-- 
2.25.1



* [PATCH 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
  2020-06-17 19:53 [PATCH 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB Kees Cook
@ 2020-06-17 19:53 ` Kees Cook
  2020-06-17 20:01   ` Matthew Wilcox
  2020-06-17 19:53 ` [PATCH 2/2] slab: Add naive detection of double free Kees Cook
  1 sibling, 1 reply; 5+ messages in thread
From: Kees Cook @ 2020-06-17 19:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Include SLAB and SLOB caches when performing kmem_cache pointer
verification. A defense against such corruption[1] should be applied
to all the allocators. With this added, the "SLAB_FREE_CROSS" and
"SLAB_FREE_PAGE" LKDTM tests now pass on SLAB:

  lkdtm: Performing direct entry SLAB_FREE_CROSS
  lkdtm: Attempting cross-cache slab free ...
  ------------[ cut here ]------------
  cache_from_obj: Wrong slab cache. lkdtm-heap-b but object is from lkdtm-heap-a
  WARNING: CPU: 2 PID: 2195 at mm/slab.h:530 kmem_cache_free+0x8d/0x1d0
  ...
  lkdtm: Performing direct entry SLAB_FREE_PAGE
  lkdtm: Attempting non-Slab slab free ...
  ------------[ cut here ]------------
  virt_to_cache: Object is not a Slab page!
  WARNING: CPU: 1 PID: 2202 at mm/slab.h:489 kmem_cache_free+0x196/0x1d0
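
For illustration, the cross-cache confusion that cache_from_obj() catches
looks roughly like the sketch below (the caches and struct names here are
hypothetical, not part of this patch):

  #include <linux/slab.h>

  struct obj_a { unsigned long a; };
  struct obj_b { unsigned long b; };

  static struct kmem_cache *cache_a, *cache_b;

  static void cross_cache_free_sketch(void)
  {
  	struct obj_a *p;

  	cache_a = kmem_cache_create("example-a", sizeof(struct obj_a), 0, 0, NULL);
  	cache_b = kmem_cache_create("example-b", sizeof(struct obj_b), 0, 0, NULL);
  	if (!cache_a || !cache_b)
  		return;

  	p = kmem_cache_alloc(cache_a, GFP_KERNEL);
  	if (!p)
  		return;

  	/*
  	 * Wrong cache: the check WARNs (as in the log above) and uses the
  	 * object's actual cache, cache_a, for the free instead of trusting
  	 * the cache_b argument.
  	 */
  	kmem_cache_free(cache_b, p);
  }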

Additionally, clean up the neighboring Kconfig entries for clarity and
readability, and drop the redundant "default n".

[1] https://github.com/ThomasKing2014/slides/raw/master/Building%20universal%20Android%20rooting%20with%20a%20type%20confusion%20vulnerability.pdf

Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist hardening")
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 init/Kconfig |  8 ++++----
 mm/slab.c    |  8 --------
 mm/slab.h    | 31 +++++++++++++++++++++++++++++++
 mm/slub.c    | 25 +------------------------
 4 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/init/Kconfig b/init/Kconfig
index a46aa8f3174d..b5e616e5fd2f 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1885,9 +1885,8 @@ config SLAB_MERGE_DEFAULT
 	  command line.
 
 config SLAB_FREELIST_RANDOM
-	default n
+	bool "Randomize slab freelist"
 	depends on SLAB || SLUB
-	bool "SLAB freelist randomization"
 	help
 	  Randomizes the freelist order used on creating new pages. This
 	  security feature reduces the predictability of the kernel slab
@@ -1895,12 +1894,13 @@ config SLAB_FREELIST_RANDOM
 
 config SLAB_FREELIST_HARDENED
 	bool "Harden slab freelist metadata"
-	depends on SLUB
 	help
 	  Many kernel heap attacks try to target slab cache metadata and
 	  other infrastructure. This options makes minor performance
 	  sacrifices to harden the kernel slab allocator against common
-	  freelist exploit methods.
+	  freelist exploit methods. Some slab implementations have more
+	  sanity-checking than others. This option is most effective with
+	  CONFIG_SLUB.
 
 config SHUFFLE_PAGE_ALLOCATOR
 	bool "Page allocator randomization"
diff --git a/mm/slab.c b/mm/slab.c
index 6134c4c36d4c..9350062ffc1a 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3672,14 +3672,6 @@ void *__kmalloc_track_caller(size_t size, gfp_t flags, unsigned long caller)
 }
 EXPORT_SYMBOL(__kmalloc_track_caller);
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	if (memcg_kmem_enabled())
-		return virt_to_cache(x);
-	else
-		return s;
-}
-
 /**
  * kmem_cache_free - Deallocate an object
  * @cachep: The cache the allocation was from.
diff --git a/mm/slab.h b/mm/slab.h
index a2696d306b62..090d8b8e7bf8 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -467,6 +467,20 @@ static inline void memcg_link_cache(struct kmem_cache *s,
 
 #endif /* CONFIG_MEMCG_KMEM */
 
+#ifdef CONFIG_SLUB_DEBUG
+extern inline int kmem_cache_debug_flags(struct kmem_cache *s,
+					 slab_flags_t flags);
+extern inline void print_tracking(struct kmem_cache *s, void *object);
+#else
+static inline int kmem_cache_debug_flags(struct kmem_cache *s,
+					 slab_flags_t flags)
+{
+	return 0;
+}
+static inline void print_tracking(struct kmem_cache *s, void *object)
+{ }
+#endif
+
 static inline struct kmem_cache *virt_to_cache(const void *obj)
 {
 	struct page *page;
@@ -503,6 +517,23 @@ static __always_inline void uncharge_slab_page(struct page *page, int order,
 	memcg_uncharge_slab(page, order, s);
 }
 
+static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
+{
+	struct kmem_cache *cachep;
+
+	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
+	    !memcg_kmem_enabled() &&
+	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
+		return s;
+
+	cachep = virt_to_cache(x);
+	if (WARN(cachep && !slab_equal_or_root(cachep, s),
+		  "%s: Wrong slab cache. %s but object is from %s\n",
+		  __func__, s->name, cachep->name))
+		print_tracking(cachep, x);
+	return cachep;
+}
+
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
 #ifndef CONFIG_SLUB
diff --git a/mm/slub.c b/mm/slub.c
index f7a1d8537674..cd4891448db4 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -120,7 +120,6 @@ DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
 #else
 DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
 #endif
-#endif
 
 /*
  * Returns true if any of the specified slub_debug flags is enabled for the
@@ -129,12 +128,11 @@ DEFINE_STATIC_KEY_FALSE(slub_debug_enabled);
  */
 static inline int kmem_cache_debug_flags(struct kmem_cache *s, slab_flags_t flags)
 {
-#ifdef CONFIG_SLUB_DEBUG
 	if (static_branch_unlikely(&slub_debug_enabled))
 		return s->flags & flags;
-#endif
 	return 0;
 }
+#endif
 
 static inline int kmem_cache_debug(struct kmem_cache *s)
 {
@@ -1524,10 +1522,6 @@ static bool freelist_corrupted(struct kmem_cache *s, struct page *page,
 {
 	return false;
 }
-
-static void print_tracking(struct kmem_cache *s, void *object)
-{
-}
 #endif /* CONFIG_SLUB_DEBUG */
 
 /*
@@ -3179,23 +3173,6 @@ void ___cache_free(struct kmem_cache *cache, void *x, unsigned long addr)
 }
 #endif
 
-static inline struct kmem_cache *cache_from_obj(struct kmem_cache *s, void *x)
-{
-	struct kmem_cache *cachep;
-
-	if (!IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
-	    !memcg_kmem_enabled() &&
-	    !kmem_cache_debug_flags(s, SLAB_CONSISTENCY_CHECKS))
-		return s;
-
-	cachep = virt_to_cache(x);
-	if (WARN(cachep && !slab_equal_or_root(cachep, s),
-		  "%s: Wrong slab cache. %s but object is from %s\n",
-		  __func__, s->name, cachep->name))
-		print_tracking(cachep, x);
-	return cachep;
-}
-
 void kmem_cache_free(struct kmem_cache *s, void *x)
 {
 	s = cache_from_obj(s, x);
-- 
2.25.1



* [PATCH 2/2] slab: Add naive detection of double free
  2020-06-17 19:53 [PATCH 0/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB Kees Cook
  2020-06-17 19:53 ` [PATCH 1/2] " Kees Cook
@ 2020-06-17 19:53 ` Kees Cook
  1 sibling, 0 replies; 5+ messages in thread
From: Kees Cook @ 2020-06-17 19:53 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Vlastimil Babka, Roman Gushchin, Christoph Lameter,
	Alexander Popov, Pekka Enberg, David Rientjes, Joonsoo Kim,
	vinmenon, Matthew Garrett, Jann Horn, Vijayanand Jitta, linux-mm,
	linux-kernel

Similar to commit ce6fa91b9363 ("mm/slub.c: add a naive detection
of double free or corruption"), add a very cheap double-free check
for SLAB under CONFIG_SLAB_FREELIST_HARDENED. With this added, the
"SLAB_FREE_DOUBLE" LKDTM test passes under SLAB:

  lkdtm: Performing direct entry SLAB_FREE_DOUBLE
  lkdtm: Attempting double slab free ...
  ------------[ cut here ]------------
  WARNING: CPU: 2 PID: 2193 at mm/slab.c:757 ___cache_free+0x325/0x390
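
For illustration, the trivial pattern this detects is an immediate re-free
of the most recently freed object on the same CPU's array cache; a minimal
sketch (hypothetical cache, not from the patch):

  #include <linux/slab.h>

  static void double_free_sketch(struct kmem_cache *cache)
  {
  	void *obj = kmem_cache_alloc(cache, GFP_KERNEL);

  	if (!obj)
  		return;

  	kmem_cache_free(cache, obj);
  	/* Second free of the same object: WARNs once and is ignored. */
  	kmem_cache_free(cache, obj);
  }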

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slab.c | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 9350062ffc1a..c4e3a194b271 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -749,6 +749,16 @@ static void drain_alien_cache(struct kmem_cache *cachep,
 	}
 }
 
+/* &alien->lock must be held by alien callers. */
+static __always_inline void __free_one(struct array_cache *ac, void *objp)
+{
+	/* Avoid trivial double-free. */
+	if (IS_ENABLED(CONFIG_SLAB_FREELIST_HARDENED) &&
+	    WARN_ON_ONCE(ac->avail > 0 && ac->entry[ac->avail - 1] == objp))
+		return;
+	ac->entry[ac->avail++] = objp;
+}
+
 static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 				int node, int page_node)
 {
@@ -767,7 +777,7 @@ static int __cache_free_alien(struct kmem_cache *cachep, void *objp,
 			STATS_INC_ACOVERFLOW(cachep);
 			__drain_alien_cache(cachep, ac, page_node, &list);
 		}
-		ac->entry[ac->avail++] = objp;
+		__free_one(ac, objp);
 		spin_unlock(&alien->lock);
 		slabs_destroy(cachep, &list);
 	} else {
@@ -3466,7 +3476,7 @@ void ___cache_free(struct kmem_cache *cachep, void *objp,
 		}
 	}
 
-	ac->entry[ac->avail++] = objp;
+	__free_one(ac, objp);
 }
 
 /**
-- 
2.25.1



* Re: [PATCH 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
  2020-06-17 19:53 ` [PATCH 1/2] " Kees Cook
@ 2020-06-17 20:01   ` Matthew Wilcox
  2020-06-18 20:37     ` Kees Cook
  0 siblings, 1 reply; 5+ messages in thread
From: Matthew Wilcox @ 2020-06-17 20:01 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Christoph Lameter, Alexander Popov, Pekka Enberg, David Rientjes,
	Joonsoo Kim, vinmenon, Matthew Garrett, Jann Horn,
	Vijayanand Jitta, linux-mm, linux-kernel

On Wed, Jun 17, 2020 at 12:53:48PM -0700, Kees Cook wrote:
> Include SLAB and SLOB caches when performing kmem_cache pointer

... SLOB?  Really?  Objects from different kmem caches are mixed together
on the same page with SLOB (at least last time I looked).  So how does
this work?

> verification. A defense against such corruption[1] should be applied
> to all the allocators. With this added, the "SLAB_FREE_CROSS" and
> "SLAB_FREE_PAGE" LKDTM tests now pass on SLAB:


* Re: [PATCH 1/2] mm: Expand CONFIG_SLAB_FREELIST_HARDENED to include SLAB and SLOB
  2020-06-17 20:01   ` Matthew Wilcox
@ 2020-06-18 20:37     ` Kees Cook
  0 siblings, 0 replies; 5+ messages in thread
From: Kees Cook @ 2020-06-18 20:37 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Andrew Morton, Vlastimil Babka, Roman Gushchin,
	Christoph Lameter, Alexander Popov, Pekka Enberg, David Rientjes,
	Joonsoo Kim, vinmenon, Matthew Garrett, Jann Horn,
	Vijayanand Jitta, linux-mm, linux-kernel

On Wed, Jun 17, 2020 at 01:01:51PM -0700, Matthew Wilcox wrote:
> On Wed, Jun 17, 2020 at 12:53:48PM -0700, Kees Cook wrote:
> > Include SLAB and SLOB caches when performing kmem_cache pointer
> 
> ... SLOB?  Really?  Objects from different kmem caches are mixed together
> on the same page with SLOB (at least last time I looked).  So how does
> this work?

Hmm. I'm not sure. I can't even boot a SLOB kernel these days (even
without these patches). But if pages are shared between kmem caches on
SLOB, then I certainly can't add this check for it. :) I'll adjust this
patch.

-- 
Kees Cook

