* [PATCH] slub: Actually fix freelist pointer vs redzoning
@ 2020-10-08 23:34 Kees Cook
  2020-10-09 12:12 ` Marco Elver
  2020-10-15  2:36 ` Waiman Long
  0 siblings, 2 replies; 4+ messages in thread
From: Kees Cook @ 2020-10-08 23:34 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Kees Cook, Marco Elver, stable, Pekka Enberg, Waiman Long,
	Christoph Lameter, David Rientjes, Joonsoo Kim, linux-kernel,
	linux-mm

It turns out that SLUB redzoning ("slub_debug=Z") checks from
s->object_size rather than from s->inuse (which is normally bumped to
make room for the freelist pointer), so a cache created with an object
size less than 24 bytes would have its freelist pointer written beyond
s->object_size, causing the redzone to corrupt the freelist pointer.
This was very visible with "slub_debug=ZF":

BUG test (Tainted: G    B            ): Redzone overwritten
-----------------------------------------------------------------------------

INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620

Redzone (____ptrval____): bb bb bb bb bb bb bb bb               ........
Object  (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
Redzone (____ptrval____): 40 1d e8 1a aa                        @....
Padding (____ptrval____): 00 00 00 00 00 00 00 00               ........

Adjust the offset to stay within s->object_size.

(Note that there appear to be no such small-sized caches in the kernel
currently.)
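
To make the arithmetic concrete, consider a hypothetical 20-byte cache
under "slub_debug=Z" on a 64-bit kernel (sizeof(void *) == 8); the
numbers below just apply the before/after formulas from the diff:

	/* Before this patch: */
	freepointer_area = ALIGN(20, 8);                /* = 24 */
	s->offset = ALIGN(freepointer_area / 2, 8);     /* = ALIGN(12, 8) = 16 */
	/* The 8-byte freelist pointer spans bytes 16..23, but the
	 * redzone check starts at s->object_size (20), so bytes
	 * 20..23 land in the redzone -- the corruption shown above. */

	/* After this patch: */
	s->offset = ALIGN_DOWN(s->object_size / 2, 8);  /* = ALIGN_DOWN(10, 8) = 8 */
	/* The pointer spans bytes 8..15, inside the object. */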

Reported-by: Marco Elver <elver@google.com>
Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
Cc: stable@vger.kernel.org
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 mm/slub.c | 17 +++++------------
 1 file changed, 5 insertions(+), 12 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 68c02b2eecd9..979f5da26992 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3641,7 +3641,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 {
 	slab_flags_t flags = s->flags;
 	unsigned int size = s->object_size;
-	unsigned int freepointer_area;
 	unsigned int order;
 
 	/*
@@ -3650,13 +3649,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	 * the possible location of the free pointer.
 	 */
 	size = ALIGN(size, sizeof(void *));
-	/*
-	 * This is the area of the object where a freepointer can be
-	 * safely written. If redzoning adds more to the inuse size, we
-	 * can't use that portion for writing the freepointer, so
-	 * s->offset must be limited within this for the general case.
-	 */
-	freepointer_area = size;
 
 #ifdef CONFIG_SLUB_DEBUG
 	/*
@@ -3682,7 +3674,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 
 	/*
 	 * With that we have determined the number of bytes in actual use
-	 * by the object. This is the potential offset to the free pointer.
+	 * by the object and redzoning.
 	 */
 	s->inuse = size;
 
@@ -3694,7 +3686,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 * kmem_cache_free.
 		 *
 		 * This is the case if we do RCU, have a constructor or
-		 * destructor or are poisoning the objects.
+		 * destructor, are poisoning the objects, or are
+		 * redzoning an object smaller than sizeof(void *).
 		 *
 		 * The assumption that s->offset >= s->inuse means free
 		 * pointer is outside of the object is used in the
@@ -3703,13 +3696,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 		 */
 		s->offset = size;
 		size += sizeof(void *);
-	} else if (freepointer_area > sizeof(void *)) {
+	} else {
 		/*
 		 * Store freelist pointer near middle of object to keep
 		 * it away from the edges of the object to avoid small
 		 * sized over/underflows from neighboring allocations.
 		 */
-		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
+		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
 	}
 
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.25.1



* Re: [PATCH] slub: Actually fix freelist pointer vs redzoning
  2020-10-08 23:34 [PATCH] slub: Actually fix freelist pointer vs redzoning Kees Cook
@ 2020-10-09 12:12 ` Marco Elver
  2020-10-15  2:36 ` Waiman Long
  1 sibling, 0 replies; 4+ messages in thread
From: Marco Elver @ 2020-10-09 12:12 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, stable, Pekka Enberg, Waiman Long,
	Christoph Lameter, David Rientjes, Joonsoo Kim, LKML,
	Linux Memory Management List

On Fri, 9 Oct 2020 at 01:34, Kees Cook <keescook@chromium.org> wrote:
> It turns out that SLUB redzoning ("slub_debug=Z") checks from
> s->object_size rather than from s->inuse (which is normally bumped to
> make room for the freelist pointer), so a cache created with an object
> size less than 24 bytes would have its freelist pointer written beyond
> s->object_size, causing the redzone to corrupt the freelist pointer.
> This was very visible with "slub_debug=ZF":
>
> BUG test (Tainted: G    B            ): Redzone overwritten
> -----------------------------------------------------------------------------
>
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
>
> Redzone (____ptrval____): bb bb bb bb bb bb bb bb               ........
> Object  (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> Redzone (____ptrval____): 40 1d e8 1a aa                        @....
> Padding (____ptrval____): 00 00 00 00 00 00 00 00               ........
>
> Adjust the offset to stay within s->object_size.
>
> (Note that there appear to be no such small-sized caches in the kernel
> currently.)
>
> Reported-by: Marco Elver <elver@google.com>
> Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
> Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>  mm/slub.c | 17 +++++------------
>  1 file changed, 5 insertions(+), 12 deletions(-)

Tested-by: Marco Elver <elver@google.com>

Thank you!

> diff --git a/mm/slub.c b/mm/slub.c
> index 68c02b2eecd9..979f5da26992 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3641,7 +3641,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>  {
>         slab_flags_t flags = s->flags;
>         unsigned int size = s->object_size;
> -       unsigned int freepointer_area;
>         unsigned int order;
>
>         /*
> @@ -3650,13 +3649,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>          * the possible location of the free pointer.
>          */
>         size = ALIGN(size, sizeof(void *));
> -       /*
> -        * This is the area of the object where a freepointer can be
> -        * safely written. If redzoning adds more to the inuse size, we
> -        * can't use that portion for writing the freepointer, so
> -        * s->offset must be limited within this for the general case.
> -        */
> -       freepointer_area = size;
>
>  #ifdef CONFIG_SLUB_DEBUG
>         /*
> @@ -3682,7 +3674,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>
>         /*
>          * With that we have determined the number of bytes in actual use
> -        * by the object. This is the potential offset to the free pointer.
> +        * by the object and redzoning.
>          */
>         s->inuse = size;
>
> @@ -3694,7 +3686,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>                  * kmem_cache_free.
>                  *
>                  * This is the case if we do RCU, have a constructor or
> -                * destructor or are poisoning the objects.
> +                * destructor, are poisoning the objects, or are
> +                * redzoning an object smaller than sizeof(void *).
>                  *
>                  * The assumption that s->offset >= s->inuse means free
>                  * pointer is outside of the object is used in the
> @@ -3703,13 +3696,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>                  */
>                 s->offset = size;
>                 size += sizeof(void *);
> -       } else if (freepointer_area > sizeof(void *)) {
> +       } else {
>                 /*
>                  * Store freelist pointer near middle of object to keep
>                  * it away from the edges of the object to avoid small
>                  * sized over/underflows from neighboring allocations.
>                  */
> -               s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> +               s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
>         }
>
>  #ifdef CONFIG_SLUB_DEBUG
> --
> 2.25.1
>


* Re: [PATCH] slub: Actually fix freelist pointer vs redzoning
  2020-10-08 23:34 [PATCH] slub: Actually fix freelist pointer vs redzoning Kees Cook
  2020-10-09 12:12 ` Marco Elver
@ 2020-10-15  2:36 ` Waiman Long
  2020-10-15  3:08   ` Kees Cook
  1 sibling, 1 reply; 4+ messages in thread
From: Waiman Long @ 2020-10-15  2:36 UTC (permalink / raw)
  To: Kees Cook, Andrew Morton
  Cc: Marco Elver, stable, Pekka Enberg, Christoph Lameter,
	David Rientjes, Joonsoo Kim, linux-kernel, linux-mm

On 10/8/20 7:34 PM, Kees Cook wrote:
> It turns out that SLUB redzoning ("slub_debug=Z") checks from
> s->object_size rather than from s->inuse (which is normally bumped to
> make room for the freelist pointer), so a cache created with an object
> size less than 24 bytes would have its freelist pointer written beyond
> s->object_size, causing the redzone to corrupt the freelist pointer.
> This was very visible with "slub_debug=ZF":
>
> BUG test (Tainted: G    B            ): Redzone overwritten
> -----------------------------------------------------------------------------
>
> INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
>
> Redzone (____ptrval____): bb bb bb bb bb bb bb bb               ........
> Object  (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> Redzone (____ptrval____): 40 1d e8 1a aa                        @....
> Padding (____ptrval____): 00 00 00 00 00 00 00 00               ........
>
> Adjust the offset to stay within s->object_size.
>
> (Note that there appear to be no such small-sized caches in the kernel
> currently.)
>
> Reported-by: Marco Elver <elver@google.com>
> Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
> Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
> Cc: stable@vger.kernel.org
> Signed-off-by: Kees Cook <keescook@chromium.org>
> ---
>   mm/slub.c | 17 +++++------------
>   1 file changed, 5 insertions(+), 12 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 68c02b2eecd9..979f5da26992 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3641,7 +3641,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>   {
>   	slab_flags_t flags = s->flags;
>   	unsigned int size = s->object_size;
> -	unsigned int freepointer_area;
>   	unsigned int order;
>   
>   	/*
> @@ -3650,13 +3649,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>   	 * the possible location of the free pointer.
>   	 */
>   	size = ALIGN(size, sizeof(void *));
> -	/*
> -	 * This is the area of the object where a freepointer can be
> -	 * safely written. If redzoning adds more to the inuse size, we
> -	 * can't use that portion for writing the freepointer, so
> -	 * s->offset must be limited within this for the general case.
> -	 */
> -	freepointer_area = size;
>   
>   #ifdef CONFIG_SLUB_DEBUG
>   	/*
> @@ -3682,7 +3674,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>   
>   	/*
>   	 * With that we have determined the number of bytes in actual use
> -	 * by the object. This is the potential offset to the free pointer.
> +	 * by the object and redzoning.
>   	 */
>   	s->inuse = size;
>   
> @@ -3694,7 +3686,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>   		 * kmem_cache_free.
>   		 *
>   		 * This is the case if we do RCU, have a constructor or
> -		 * destructor or are poisoning the objects.
> +		 * destructor, are poisoning the objects, or are
> +		 * redzoning an object smaller than sizeof(void *).
>   		 *
>   		 * The assumption that s->offset >= s->inuse means free
>   		 * pointer is outside of the object is used in the

There is no check that routes us into this if condition (putting the
free pointer after the object) when redzoning an object smaller than
sizeof(void *). In that sense, the additional comment isn't correct.

Should we add that check even though we don't have slab objects with 
such a small size in the kernel?
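
For reference, the branch in question is guarded only by something like
the following (abbreviated from mm/slub.c of this era, quoted from
memory, so the exact flags may differ):

	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) ||
	    s->ctor) {
		/* free pointer placed after the object */
		s->offset = size;
		size += sizeof(void *);
	} else {
		...
	}

i.e. nothing here tests SLAB_RED_ZONE or compares s->object_size
against sizeof(void *).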

> @@ -3703,13 +3696,13 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
>   		 */
>   		s->offset = size;
>   		size += sizeof(void *);
> -	} else if (freepointer_area > sizeof(void *)) {
> +	} else {
>   		/*
>   		 * Store freelist pointer near middle of object to keep
>   		 * it away from the edges of the object to avoid small
>   		 * sized over/underflows from neighboring allocations.
>   		 */
> -		s->offset = ALIGN(freepointer_area / 2, sizeof(void *));
> +		s->offset = ALIGN_DOWN(s->object_size / 2, sizeof(void *));
>   	}
>   
>   #ifdef CONFIG_SLUB_DEBUG

Other than the above, the patch looks good.

Cheers,
Longman



* Re: [PATCH] slub: Actually fix freelist pointer vs redzoning
  2020-10-15  2:36 ` Waiman Long
@ 2020-10-15  3:08   ` Kees Cook
  0 siblings, 0 replies; 4+ messages in thread
From: Kees Cook @ 2020-10-15  3:08 UTC (permalink / raw)
  To: Waiman Long
  Cc: Andrew Morton, Marco Elver, stable, Pekka Enberg,
	Christoph Lameter, David Rientjes, Joonsoo Kim, linux-kernel,
	linux-mm

On Wed, Oct 14, 2020 at 10:36:01PM -0400, Waiman Long wrote:
> On 10/8/20 7:34 PM, Kees Cook wrote:
> > It turns out that SLUB redzoning ("slub_debug=Z") checks from
> > s->object_size rather than from s->inuse (which is normally bumped to
> > make room for the freelist pointer), so a cache created with an object
> > size less than 24 bytes would have its freelist pointer written beyond
> > s->object_size, causing the redzone to corrupt the freelist pointer.
> > This was very visible with "slub_debug=ZF":
> > 
> > BUG test (Tainted: G    B            ): Redzone overwritten
> > -----------------------------------------------------------------------------
> > 
> > INFO: 0xffff957ead1c05de-0xffff957ead1c05df @offset=1502. First byte 0x1a instead of 0xbb
> > INFO: Slab 0xffffef3950b47000 objects=170 used=170 fp=0x0000000000000000 flags=0x8000000000000200
> > INFO: Object 0xffff957ead1c05d8 @offset=1496 fp=0xffff957ead1c0620
> > 
> > Redzone (____ptrval____): bb bb bb bb bb bb bb bb               ........
> > Object  (____ptrval____): 00 00 00 00 00 f6 f4 a5               ........
> > Redzone (____ptrval____): 40 1d e8 1a aa                        @....
> > Padding (____ptrval____): 00 00 00 00 00 00 00 00               ........
> > 
> > Adjust the offset to stay within s->object_size.
> > 
> > (Note that there appear to be no such small-sized caches in the kernel
> > currently.)
> > 
> > Reported-by: Marco Elver <elver@google.com>
> > Link: https://lore.kernel.org/linux-mm/20200807160627.GA1420741@elver.google.com/
> > Fixes: 89b83f282d8b ("slub: avoid redzone when choosing freepointer location")
> > Cc: stable@vger.kernel.org
> > Signed-off-by: Kees Cook <keescook@chromium.org>
> > ---
> >   mm/slub.c | 17 +++++------------
> >   1 file changed, 5 insertions(+), 12 deletions(-)
> > 
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 68c02b2eecd9..979f5da26992 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3641,7 +3641,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >   {
> >   	slab_flags_t flags = s->flags;
> >   	unsigned int size = s->object_size;
> > -	unsigned int freepointer_area;
> >   	unsigned int order;
> >   	/*
> > @@ -3650,13 +3649,6 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >   	 * the possible location of the free pointer.
> >   	 */
> >   	size = ALIGN(size, sizeof(void *));
> > -	/*
> > -	 * This is the area of the object where a freepointer can be
> > -	 * safely written. If redzoning adds more to the inuse size, we
> > -	 * can't use that portion for writing the freepointer, so
> > -	 * s->offset must be limited within this for the general case.
> > -	 */
> > -	freepointer_area = size;
> >   #ifdef CONFIG_SLUB_DEBUG
> >   	/*
> > @@ -3682,7 +3674,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >   	/*
> >   	 * With that we have determined the number of bytes in actual use
> > -	 * by the object. This is the potential offset to the free pointer.
> > +	 * by the object and redzoning.
> >   	 */
> >   	s->inuse = size;
> > @@ -3694,7 +3686,8 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
> >   		 * kmem_cache_free.
> >   		 *
> >   		 * This is the case if we do RCU, have a constructor or
> > -		 * destructor or are poisoning the objects.
> > +		 * destructor, are poisoning the objects, or are
> > +		 * redzoning an object smaller than sizeof(void *).
> >   		 *
> >   		 * The assumption that s->offset >= s->inuse means free
> >   		 * pointer is outside of the object is used in the
> 
> There is no check that routes us into this if condition (putting the
> free pointer after the object) when redzoning an object smaller than
> sizeof(void *). In that sense, the additional comment isn't correct.

Right -- that's part of why I did my v2 series[1]: I failed to find
anywhere that this had been enforced, so I explicitly added it[2]. If no
one objects, I could fold that check into this fix, just for robustness:
cache creation is not a fast path, and the size test is a trivial check.
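
Folded in, the condition might look something like this (a sketch only;
the exact form in v3 may differ):

	if ((flags & (SLAB_TYPESAFE_BY_RCU | SLAB_POISON)) || s->ctor ||
	    ((flags & SLAB_RED_ZONE) && s->object_size < sizeof(void *))) {
		/* Too small to hold the pointer in-object; place it after. */
		s->offset = size;
		size += sizeof(void *);
	}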

> Should we add that check even though we don't have slab objects with such a
> small size in the kernel?

I'd like to... I will spin a v3.

Thanks!

[1] https://lore.kernel.org/lkml/20201009195411.4018141-1-keescook@chromium.org/
[2] https://lore.kernel.org/lkml/20201009195411.4018141-3-keescook@chromium.org/

-- 
Kees Cook
