* [PATCH v4 0/3] Actually fix freelist pointer vs redzoning
@ 2021-06-08 18:39 Kees Cook
  2021-06-08 18:39 ` [PATCH v4 1/3] mm/slub: Clarify verification reporting Kees Cook
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Vlastimil Babka, Marco Elver, Christoph Lameter, Lin,
	Zhenpeng, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

v4:
- remove redundant size check
v3: https://lore.kernel.org/lkml/20201015033712.1491731-1-keescook@chromium.org
v2: https://lore.kernel.org/lkml/20201009195411.4018141-1-keescook@chromium.org
v1: https://lore.kernel.org/lkml/20201008233443.3335464-1-keescook@chromium.org

This fixes redzoning vs the freelist pointer (both for middle-position
and very small caches). Both are "theoretical" fixes, in that I see no
evidence of such small-sized caches actually being used in the kernel,
but that's no reason to let the bugs continue to exist, especially
since people doing local development keep tripping over them. :)

Thanks!

-Kees


Kees Cook (3):
  mm/slub: Clarify verification reporting
  mm/slub: Fix redzoning for small allocations
  mm/slub: Actually fix freelist pointer vs redzoning

 Documentation/vm/slub.rst | 10 +++++-----
 mm/slab_common.c          |  3 +--
 mm/slub.c                 | 36 +++++++++++++++---------------------
 3 files changed, 21 insertions(+), 28 deletions(-)

-- 
2.25.1



* [PATCH v4 1/3] mm/slub: Clarify verification reporting
  2021-06-08 18:39 [PATCH v4 0/3] Actually fix freelist pointer vs redzoning Kees Cook
@ 2021-06-08 18:39 ` Kees Cook
  2021-06-08 18:39 ` [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations Kees Cook
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 9+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Vlastimil Babka, Marco Elver, Christoph Lameter, Lin,
	Zhenpeng, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

Instead of repeating "Redzone" and "Poison", clarify which sides of
those zones got tripped. Additionally fix column alignment in the
trailer.

Before:

BUG test (Tainted: G    B            ): Redzone overwritten
...
Redzone (____ptrval____): bb bb bb bb bb bb bb bb      ........
Object (____ptrval____): f6 f4 a5 40 1d e8            ...@..
Redzone (____ptrval____): 1a aa                        ..
Padding (____ptrval____): 00 00 00 00 00 00 00 00      ........

After:

BUG test (Tainted: G    B            ): Right Redzone overwritten
...
Redzone  (____ptrval____): bb bb bb bb bb bb bb bb      ........
Object   (____ptrval____): f6 f4 a5 40 1d e8            ...@..
Redzone  (____ptrval____): 1a aa                        ..
Padding  (____ptrval____): 00 00 00 00 00 00 00 00      ........
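
For local testing, a report like the "After" example can be reproduced
by deliberately writing past a redzoned object. Below is a minimal,
hypothetical module sketch (the cache name and size are made up for
illustration; it assumes CONFIG_SLUB_DEBUG=y, and booting with
"slub_debug=ZF" works equally well in place of the explicit flags):

  #include <linux/module.h>
  #include <linux/slab.h>

  /* Hypothetical demo: overwrite one byte past a redzoned object. */
  static int __init redzone_demo_init(void)
  {
  	struct kmem_cache *c;
  	u8 *obj;

  	c = kmem_cache_create("redzone-demo", 8, 0,
  			      SLAB_RED_ZONE | SLAB_CONSISTENCY_CHECKS, NULL);
  	if (!c)
  		return -ENOMEM;

  	obj = kmem_cache_alloc(c, GFP_KERNEL);
  	if (obj) {
  		obj[8] = 0x00;           /* one byte past object_size */
  		kmem_cache_free(c, obj); /* free-time check reports it */
  	}
  	kmem_cache_destroy(c);
  	return 0;
  }
  module_init(redzone_demo_init);
  MODULE_LICENSE("GPL");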

The earlier commits that slowly resulted in the "Before" reporting were:

  d86bd1bece6f ("mm/slub: support left redzone")
  ffc79d288000 ("slub: use print_hex_dump")
  2492268472e7 ("SLUB: change error reporting format to follow lockdep loosely")

Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/lkml/cfdb11d7-fb8e-e578-c939-f7f5fb69a6bd@suse.cz/
---
 Documentation/vm/slub.rst | 10 +++++-----
 mm/slub.c                 | 14 +++++++-------
 2 files changed, 12 insertions(+), 12 deletions(-)

diff --git a/Documentation/vm/slub.rst b/Documentation/vm/slub.rst
index 03f294a638bd..d3028554b1e9 100644
--- a/Documentation/vm/slub.rst
+++ b/Documentation/vm/slub.rst
@@ -181,7 +181,7 @@ SLUB Debug output
 Here is a sample of slub debug output::
 
  ====================================================================
- BUG kmalloc-8: Redzone overwritten
+ BUG kmalloc-8: Right Redzone overwritten
  --------------------------------------------------------------------
 
  INFO: 0xc90f6d28-0xc90f6d2b. First byte 0x00 instead of 0xcc
@@ -189,10 +189,10 @@ Here is a sample of slub debug output::
  INFO: Object 0xc90f6d20 @offset=3360 fp=0xc90f6d58
  INFO: Allocated in get_modalias+0x61/0xf5 age=53 cpu=1 pid=554
 
- Bytes b4 0xc90f6d10:  00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
-   Object 0xc90f6d20:  31 30 31 39 2e 30 30 35                         1019.005
-  Redzone 0xc90f6d28:  00 cc cc cc                                     .
-  Padding 0xc90f6d50:  5a 5a 5a 5a 5a 5a 5a 5a                         ZZZZZZZZ
+ Bytes b4 (0xc90f6d10): 00 00 00 00 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
+ Object   (0xc90f6d20): 31 30 31 39 2e 30 30 35                         1019.005
+ Redzone  (0xc90f6d28): 00 cc cc cc                                     .
+ Padding  (0xc90f6d50): 5a 5a 5a 5a 5a 5a 5a 5a                         ZZZZZZZZ
 
    [<c010523d>] dump_trace+0x63/0x1eb
    [<c01053df>] show_trace_log_lvl+0x1a/0x2f
diff --git a/mm/slub.c b/mm/slub.c
index 3f96e099817a..f91d9fe7d0d8 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -712,15 +712,15 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 	       p, p - addr, get_freepointer(s, p));
 
 	if (s->flags & SLAB_RED_ZONE)
-		print_section(KERN_ERR, "Redzone ", p - s->red_left_pad,
+		print_section(KERN_ERR, "Redzone  ", p - s->red_left_pad,
 			      s->red_left_pad);
 	else if (p > addr + 16)
 		print_section(KERN_ERR, "Bytes b4 ", p - 16, 16);
 
-	print_section(KERN_ERR, "Object ", p,
+	print_section(KERN_ERR,         "Object   ", p,
 		      min_t(unsigned int, s->object_size, PAGE_SIZE));
 	if (s->flags & SLAB_RED_ZONE)
-		print_section(KERN_ERR, "Redzone ", p + s->object_size,
+		print_section(KERN_ERR, "Redzone  ", p + s->object_size,
 			s->inuse - s->object_size);
 
 	off = get_info_end(s);
@@ -732,7 +732,7 @@ static void print_trailer(struct kmem_cache *s, struct page *page, u8 *p)
 
 	if (off != size_from_object(s))
 		/* Beginning of the filler is the free pointer */
-		print_section(KERN_ERR, "Padding ", p + off,
+		print_section(KERN_ERR, "Padding  ", p + off,
 			      size_from_object(s) - off);
 
 	dump_stack();
@@ -909,11 +909,11 @@ static int check_object(struct kmem_cache *s, struct page *page,
 	u8 *endobject = object + s->object_size;
 
 	if (s->flags & SLAB_RED_ZONE) {
-		if (!check_bytes_and_report(s, page, object, "Redzone",
+		if (!check_bytes_and_report(s, page, object, "Left Redzone",
 			object - s->red_left_pad, val, s->red_left_pad))
 			return 0;
 
-		if (!check_bytes_and_report(s, page, object, "Redzone",
+		if (!check_bytes_and_report(s, page, object, "Right Redzone",
 			endobject, val, s->inuse - s->object_size))
 			return 0;
 	} else {
@@ -928,7 +928,7 @@ static int check_object(struct kmem_cache *s, struct page *page,
 		if (val != SLUB_RED_ACTIVE && (s->flags & __OBJECT_POISON) &&
 			(!check_bytes_and_report(s, page, p, "Poison", p,
 					POISON_FREE, s->object_size - 1) ||
-			 !check_bytes_and_report(s, page, p, "Poison",
+			 !check_bytes_and_report(s, page, p, "End Poison",
 				p + s->object_size - 1, POISON_END, 1)))
 			return 0;
 		/*
-- 
2.25.1



* [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations
  2021-06-08 18:39 [PATCH v4 0/3] Actually fix freelist pointer vs redzoning Kees Cook
  2021-06-08 18:39 ` [PATCH v4 1/3] mm/slub: Clarify verification reporting Kees Cook
@ 2021-06-08 18:39 ` Kees Cook
  2021-06-11  9:13   ` Vlastimil Babka
  2021-06-08 18:39 ` [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning Kees Cook
  2021-06-08 20:53 ` [PATCH v4 0/3] " Andrew Morton
  3 siblings, 1 reply; 9+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, stable, Vlastimil Babka, Marco Elver,
	Christoph Lameter, Lin, Zhenpeng, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Roman Gushchin, linux-kernel, linux-doc, linux-mm

The redzone area for SLUB exists between s->object_size and s->inuse
(which is at least the word-aligned object_size). If a cache were created
with an object_size smaller than sizeof(void *), the freelist pointer
stored inside the object would overwrite the redzone (e.g. with boot param
"slub_debug=ZF"):

BUG test (Tainted: G    B            ): Right Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone  (____ptrval____): bb bb bb bb bb bb bb bb    ........
Object   (____ptrval____): f6 f4 a5 40 1d e8          ...@..
Redzone  (____ptrval____): 1a aa                      ..
Padding  (____ptrval____): 00 00 00 00 00 00 00 00    ........

Store the freelist pointer out of line when object_size is smaller than
sizeof(void *) and redzoning is enabled.

Additionally remove the "smaller than sizeof(void *)" check under
CONFIG_DEBUG_VM in kmem_cache_sanity_check() as it is now redundant:
SLAB and SLOB both handle small sizes.

(Note that no caches within this size range are known to exist in the
kernel currently.)
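
To make the new relocation condition concrete, here is a worked layout
for a hypothetical cache (illustrative values, not from this series),
assuming a 64-bit kernel with CONFIG_SLUB_DEBUG=y:

  /*
   * Hypothetical: kmem_cache_create("tiny", 3, 0, SLAB_RED_ZONE, NULL)
   *
   * Layout before this patch (byte offsets):
   *   0..2   object         (s->object_size = 3)
   *   3..7   right redzone  (s->inuse = 8)
   * With no ctor, RCU, or poisoning, the 8-byte freelist pointer stayed
   * in-object at offset 0 and wrote bytes 0..7 on free, clobbering
   * redzone bytes 3..7 (the same failure mode as the report above).
   *
   * Layout after this patch:
   *   0..2   object
   *   3..7   right redzone
   *   8..15  out-of-line freelist pointer (s->offset = s->inuse = 8)
   */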

Fixes: 81819f0fc828 ("SLUB core")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slab_common.c | 3 +--
 mm/slub.c        | 8 +++++---
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index a4a571428c51..7cab77655f11 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -97,8 +97,7 @@ EXPORT_SYMBOL(kmem_cache_size);
 #ifdef CONFIG_DEBUG_VM
 static int kmem_cache_sanity_check(const char *name, unsigned int size)
 {
-	if (!name || in_interrupt() || size < sizeof(void *) ||
-		size > KMALLOC_MAX_SIZE) {
+	if (!name || in_interrupt() || size > KMALLOC_MAX_SIZE) {
 		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
 		return -EINVAL;
 	}
diff --git a/mm/slub.c b/mm/slub.c
index f91d9fe7d0d8..f58cfd456548 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3734,15 +3734,17 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 */
 	s->inuse = size;
 
-	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
-		s->ctor)) {
+	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
+	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
+	    s->ctor) {
 		/*
 		 * Relocate free pointer after the object if it is not
 		 * permitted to overwrite the first word of the object on
 		 * kmem_cache_free.
 		 *
 		 * This is the case if we do RCU, have a constructor or
-		 * destructor or are poisoning the objects.
+		 * destructor, are poisoning the objects, or are
+		 * redzoning an object smaller than sizeof(void *).
 		 *
 		 * The assumption that s->offset >= s->inuse means free
 		 * pointer is outside of the object is used in the
-- 
2.25.1



* [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
  2021-06-08 18:39 [PATCH v4 0/3] Actually fix freelist pointer vs redzoning Kees Cook
  2021-06-08 18:39 ` [PATCH v4 1/3] mm/slub: Clarify verification reporting Kees Cook
  2021-06-08 18:39 ` [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations Kees Cook
@ 2021-06-08 18:39 ` Kees Cook
  2021-06-08 20:56   ` Andrew Morton
  2021-06-08 20:53 ` [PATCH v4 0/3] " Andrew Morton
  3 siblings, 1 reply; 9+ messages in thread
From: Kees Cook @ 2021-06-08 18:39 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

It turns out that SLUB redzoning ("slub_debug=Z") checks from
s->object_size rather than from s->inuse (which is normally bumped
to make room for the freelist pointer), so a cache created with an
object size less than 24 would have the freelist pointer written beyond
s->object_size, causing the redzone to be corrupted by the freelist
pointer. This was very visible with "slub_debug=ZF":

BUG test (Tainted: G    B            ): Right Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........

Adjust the offset to stay within s->object_size.

(Note that no caches in this size range are known to exist in the
kernel currently.)
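
To see why switching to ALIGN_DOWN() against s->object_size matters,
here is a worked example with illustrative values (not taken from the
report above), assuming a 64-bit kernel:

  /*
   * Hypothetical: a redzoned 20-byte cache, sizeof(void *) == 8.
   * The word-aligned size is ALIGN(20, 8) = 24.
   *
   * Old: s->offset = ALIGN(24 / 2, 8) = 16
   *      The freelist pointer occupies bytes 16..23, but the redzone
   *      check starts at s->object_size = 20, so bytes 20..23 show up
   *      as "Right Redzone overwritten".
   *
   * New: s->offset = ALIGN_DOWN(20 / 2, 8) = 8
   *      The pointer occupies bytes 8..15, entirely inside the object.
   */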

Reported-by: Marco Elver <elver@google.com>
Reported-by: "Lin, Zhenpeng" <zplin@psu.edu>
Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
Cc: stable@vger.kernel.org
Tested-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/lkml/CANpmjNOwZ5VpKQn+SYWovTkFB4VsT-RPwyENBmaK0dLcpqStkA@mail.gmail.com
Signed-off-by: Kees Cook <keescook@chromium.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Link: https://lore.kernel.org/lkml/0f7dd7b2-7496-5e2d-9488-2ec9f8e90441@suse.cz/
---
 mm/slub.c | 14 +++-----------
 1 file changed, 3 insertions(+), 11 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index f58cfd456548..fe30df460fad 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	unsigned int freepointer_area;
 	unsigned int order;
 
 	/*
@@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * the possible location of the free pointer.
 	 */
 	size = ALIGN(size, sizeof(void *));
-	/*
-	 * This is the area of the object where a freepointer can be
-	 * safely written. If redzoning adds more to the inuse size, we
-	 * can't use that portion for writing the freepointer, so
-	 * s->offset must be limited within this for the general case.
-	 */
-	freepointer_area = size;
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
@@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 
 	/*
 	 * With that we have determined the number of bytes in actual use
-	 * by the object. This is the potential offset to the free pointer.
+	 * by the object and redzoning.
 	 */
 	s->inuse = size;
 
@@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
-	} else if (freepointer_area > sizeof(void *)) {
+	} else {
 		/*
 		 * Store freelist pointer near middle of object to keep
 		 * it away from the edges of the object to avoid small
 		 * sized over/underflows from neighboring allocations.
 		 */
-		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
+		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.25.1



* Re: [PATCH v4 0/3] Actually fix freelist pointer vs redzoning
  2021-06-08 18:39 [PATCH v4 0/3] Actually fix freelist pointer vs redzoning Kees Cook
                   ` (2 preceding siblings ...)
  2021-06-08 18:39 ` [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning Kees Cook
@ 2021-06-08 20:53 ` Andrew Morton
  2021-06-08 23:08   ` Kees Cook
  3 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2021-06-08 20:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Vlastimil Babka, Marco Elver, Christoph Lameter, Lin, Zhenpeng,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin,
	linux-kernel, linux-doc, linux-mm

On Tue,  8 Jun 2021 11:39:52 -0700 Kees Cook <keescook@chromium.org> wrote:

> This fixes redzoning vs the freelist pointer (both for middle-position
> and very small caches). Both are "theoretical" fixes, in that I see no
> evidence of such small-sized caches actually being used in the kernel,
> but that's no reason to let the bugs continue to exist, especially
> since people doing local development keep tripping over them. :)

So I don't think this is suitable -stable material?

It's a bit odd that patches 2&3 were cc:stable but #1 was not.  Makes
one afraid that 2&3 might have had a dependency anyway.

So I'm thinking that the whole series can just be for 5.14-rc1, in the
sent order.



* Re: [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
  2021-06-08 18:39 ` [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning Kees Cook
@ 2021-06-08 20:56   ` Andrew Morton
  2021-06-08 23:11     ` Kees Cook
  0 siblings, 1 reply; 9+ messages in thread
From: Andrew Morton @ 2021-06-08 20:56 UTC (permalink / raw)
  To: Kees Cook
  Cc: Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

On Tue,  8 Jun 2021 11:39:55 -0700 Kees Cook <keescook@chromium.org> wrote:

> It turns out that SLUB redzoning ("slub_debug=Z") checks from
> s->object_size rather than from s->inuse (which is normally bumped
> to make room for the freelist pointer), so a cache created with an
> object size less than 24 would have the freelist pointer written beyond
> s->object_size, causing the redzone to be corrupted by the freelist
> pointer. This was very visible with "slub_debug=ZF":
> 
> BUG test (Tainted: G    B            ): Right Redzone overwritten
> -----------------------------------------------------------------------------
> 
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> 
> Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
> Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
> Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........
> 
> Adjust the offset to stay within s->object_size.
> 
> (Note that no caches in this size range are known to exist in the
> kernel currently.)

We already have
https://lkml.kernel.org/r/6746FEEA-FD69-4792-8DDA-C78F5FE7DA02@psu.edu.
Is this patch better?

> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  {
>  	slab_flags_t flags = s->flags;
>  	unsigned int size = s->object_size;
> -	unsigned int freepointer_area;
>  	unsigned int order;
>  
>  	/*
> @@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  	 * the possible location of the free pointer.
>  	 */
>  	size = ALIGN(size, sizeof(void *));
> -	/*
> -	 * This is the area of the object where a freepointer can be
> -	 * safely written. If redzoning adds more to the inuse size, we
> -	 * can't use that portion for writing the freepointer, so
> -	 * s->offset must be limited within this for the general case.
> -	 */
> -	freepointer_area = size;
>  
>  #ifdef CONFIG_SLUB_DEBUG
>  	/*
> @@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  
>  	/*
>  	 * With that we have determined the number of bytes in actual use
> -	 * by the object. This is the potential offset to the free pointer.
> +	 * by the object and redzoning.
>  	 */
>  	s->inuse = size;
>  
> @@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  		 */
>  		s->offset = size;
>  		size += sizeof(void *);
> -	} else if (freepointer_area > sizeof(void *)) {
> +	} else {
>  		/*
>  		 * Store freelist pointer near middle of object to keep
>  		 * it away from the edges of the object to avoid small
>  		 * sized over/underflows from neighboring allocations.
>  		 */
> -		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> +		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
>  	}
>  
>  #ifdef CONFIG_SLUB_DEBUG
> -- 
> 2.25.1


* Re: [PATCH v4 0/3] Actually fix freelist pointer vs redzoning
  2021-06-08 20:53 ` [PATCH v4 0/3] " Andrew Morton
@ 2021-06-08 23:08   ` Kees Cook
  0 siblings, 0 replies; 9+ messages in thread
From: Kees Cook @ 2021-06-08 23:08 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Vlastimil Babka, Marco Elver, Christoph Lameter, Lin, Zhenpeng,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin,
	linux-kernel, linux-doc, linux-mm

On Tue, Jun 08, 2021 at 01:53:27PM -0700, Andrew Morton wrote:
> On Tue,  8 Jun 2021 11:39:52 -0700 Kees Cook <keescook@chromium.org> wrote:
> 
> > This fixes redzoning vs the freelist pointer (both for middle-position
> > and very small caches). Both are "theoretical" fixes, in that I see no
> > evidence of such small-sized caches actually being used in the kernel,
> > but that's no reason to let the bugs continue to exist, especially
> > since people doing local development keep tripping over them. :)
> 
> So I don't think this is suitable -stable material?

Yeah, I think it's -stable material, but I'd like some bake time in
-next just in case. zplin saw a 2 * sizeof(void *) case in the kernel
that would trip over the issue.

> It's a bit odd that patches 2&3 were cc:stable but #1 was not.  Makes
> one afraid that 2&3 might have had a dependency anyway.

#1 is entirely cosmetic. It should also be fine to put into -stable, but
since it had no operational impact, I figured it didn't need to be.

> So I'm thinking that the whole series can just be for 5.14-rc1, in the
> sent order.

Unless I'm missing something big, yeah, that would be my preference too.
(And -stable can pick it up then.)

-- 
Kees Cook


* Re: [PATCH v4 3/3] mm/slub: Actually fix freelist pointer vs redzoning
  2021-06-08 20:56   ` Andrew Morton
@ 2021-06-08 23:11     ` Kees Cook
  0 siblings, 0 replies; 9+ messages in thread
From: Kees Cook @ 2021-06-08 23:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Marco Elver, Lin, Zhenpeng, stable, Vlastimil Babka,
	Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Roman Gushchin, linux-kernel, linux-doc, linux-mm

On Tue, Jun 08, 2021 at 01:56:33PM -0700, Andrew Morton wrote:
> On Tue,  8 Jun 2021 11:39:55 -0700 Kees Cook <keescook@chromium.org> wrote:
> 
> > It turns out that SLUB redzoning ("slub_debug=Z") checks from
> > s->object_size rather than from s->inuse (which is normally bumped
> > to make room for the freelist pointer), so a cache created with an
> > object size less than 24 would have the freelist pointer written beyond
> > s->object_size, causing the redzone to be corrupted by the freelist
> > pointer. This was very visible with "slub_debug=ZF":
> > 
> > BUG test (Tainted: G    B            ): Right Redzone overwritten
> > -----------------------------------------------------------------------------
> > 
> > INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> > INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> > INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> > 
> > Redzone  (____ptrval____): bb bb bb bb bb bb bb bb               ........
> > Object   (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> > Redzone  (____ptrval____): 40 1d e8 1a aa                        @....
> > Padding  (____ptrval____): 00 00 00 00 00 00 00 00               ........
> > 
> > Adjust the offset to stay within s->object_size.
> > 
> > (Note that no caches in this size range are known to exist in the
> > kernel currently.)
> 
> We already have
> https://lkml.kernel.org/r/6746FEEA-FD69-4792-8DDA-C78F5FE7DA02@psu.edu.
> Is this patch better?

Yes, I believe so, since it reduces code and corrects the size checking
more directly (and more clearly demonstrates the redzone calculation
problem in the commit log).

-Kees

> 
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3689,7 +3689,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  {
> >  	slab_flags_t flags = s->flags;
> >  	unsigned int size = s->object_size;
> > -	unsigned int freepointer_area;
> >  	unsigned int order;
> >  
> >  	/*
> > @@ -3698,13 +3697,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  	 * the possible location of the free pointer.
> >  	 */
> >  	size = ALIGN(size, sizeof(void *));
> > -	/*
> > -	 * This is the area of the object where a freepointer can be
> > -	 * safely written. If redzoning adds more to the inuse size, we
> > -	 * can't use that portion for writing the freepointer, so
> > -	 * s->offset must be limited within this for the general case.
> > -	 */
> > -	freepointer_area = size;
> >  
> >  #ifdef CONFIG_SLUB_DEBUG
> >  	/*
> > @@ -3730,7 +3722,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  
> >  	/*
> >  	 * With that we have determined the number of bytes in actual use
> > -	 * by the object. This is the potential offset to the free pointer.
> > +	 * by the object and redzoning.
> >  	 */
> >  	s->inuse = size;
> >  
> > @@ -3753,13 +3745,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >  		 */
> >  		s->offset = size;
> >  		size += sizeof(void *);
> > -	} else if (freepointer_area > sizeof(void *)) {
> > +	} else {
> >  		/*
> >  		 * Store freelist pointer near middle of object to keep
> >  		 * it away from the edges of the object to avoid small
> >  		 * sized over/underflows from neighboring allocations.
> >  		 */
> > -		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> > +		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
> >  	}
> >  
> >  #ifdef CONFIG_SLUB_DEBUG
> > -- 
> > 2.25.1

-- 
Kees Cook


* Re: [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations
  2021-06-08 18:39 ` [PATCH v4 2/3] mm/slub: Fix redzoning for small allocations Kees Cook
@ 2021-06-11  9:13   ` Vlastimil Babka
  0 siblings, 0 replies; 9+ messages in thread
From: Vlastimil Babka @ 2021-06-11  9:13 UTC (permalink / raw)
  To: Kees Cook, Andrew Morton
  Cc: stable, Marco Elver, Christoph Lameter, Lin, Zhenpeng,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Roman Gushchin,
	linux-kernel, linux-doc, linux-mm

On 6/8/21 8:39 PM, Kees Cook wrote:
> The redzone area for SLUB exists between s->object_size and s->inuse
> (which is at least the word-aligned object_size). If a cache were created
> with an object_size smaller than sizeof(void *), the freelist pointer
> stored inside the object would overwrite the redzone (e.g. with boot param
> "slub_debug=ZF"):
> 
> BUG test (Tainted: G    B            ): Right Redzone overwritten
> -----------------------------------------------------------------------------
> 
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> 
> Redzone  (____ptrval____): bb bb bb bb bb bb bb bb    ........
> Object   (____ptrval____): f6 f4 a5 40 1d e8          ...@..
> Redzone  (____ptrval____): 1a aa                      ..
> Padding  (____ptrval____): 00 00 00 00 00 00 00 00    ........
> 
> Store the freelist pointer out of line when object_size is smaller than
> sizeof(void *) and redzoning is enabled.
> 
> Additionally remove the "smaller than sizeof(void *)" check under
> CONFIG_DEBUG_VM in kmem_cache_sanity_check() as it is now redundant:
> SLAB and SLOB both handle small sizes.
> 
> (Note that no caches within this size range are known to exist in the
> kernel currently.)
> 
> Fixes: 81819f0fc828 ("SLUB core")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>

Acked-by: Vlastimil Babka <vbabka@suse.cz>

> ---
>  mm/slab_common.c | 3 +--
>  mm/slub.c        | 8 +++++---
>  2 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index a4a571428c51..7cab77655f11 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -97,8 +97,7 @@ EXPORT_SYMBOL(kmem_cache_size);
>  #ifdef CONFIG_DEBUG_VM
>  static int kmem_cache_sanity_check(const char *name, unsigned int size)
>  {
> -	if (!name || in_interrupt() || size < sizeof(void *) ||
> -		size > KMALLOC_MAX_SIZE) {
> +	if (!name || in_interrupt() || size > KMALLOC_MAX_SIZE) {
>  		pr_err("kmem_cache_create(%s) integrity check failed\n", name);
>  		return -EINVAL;
>  	}
> diff --git a/mm/slub.c b/mm/slub.c
> index f91d9fe7d0d8..f58cfd456548 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3734,15 +3734,17 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  	 */
>  	s->inuse = size;
>  
> -	if (((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
> -		s->ctor)) {
> +	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
> +	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *)) ||
> +	    s->ctor) {
>  		/*
>  		 * Relocate free pointer after the object if it is not
>  		 * permitted to overwrite the first word of the object on
>  		 * kmem_cache_free.
>  		 *
>  		 * This is the case if we do RCU, have a constructor or
> -		 * destructor or are poisoning the objects.
> +		 * destructor, are poisoning the objects, or are
> +		 * redzoning an object smaller than sizeof(void *).
>  		 *
>  		 * The assumption that s->offset >= s->inuse means free
>  		 * pointer is outside of the object is used in the
> 



