* [PATCH] slub: Potential stack overflow
From: Eric Dumazet @ 2010-03-24 11:40 UTC
  To: Christoph Lameter; +Cc: Pekka J Enberg, linux-kernel

I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
slabs with many objects, since list_slab_objects() and process_slab()
use DECLARE_BITMAP(map, page->objects);

With 65535 bits, we use 8192 bytes of stack...
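
For reference, DECLARE_BITMAP() is just a fixed-size array of unsigned
long on the stack; a sketch of the sizing arithmetic, using the
2.6.33-era macros and assuming 64-bit longs:

/* From include/linux/types.h and include/linux/bitops.h (abridged) */
#define BITS_PER_BYTE		8
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))
#define BITS_TO_LONGS(nr)	DIV_ROUND_UP(nr, BITS_PER_BYTE * sizeof(long))
#define DECLARE_BITMAP(name, bits) \
	unsigned long name[BITS_TO_LONGS(bits)]

/*
 * With page->objects == 65535 and sizeof(long) == 8:
 *	BITS_TO_LONGS(65535) = (65535 + 63) / 64 = 1024 longs
 *	1024 longs * 8 bytes = 8192 bytes, all on the stack
 */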

A possible fix is to lower MAX_OBJS_PER_PAGE so that these bitmaps don't
use more than a third of THREAD_SIZE. I suspect plain memory allocation
in these functions is not an option.

Using a non-dynamic (fixed-size) stack allocation also makes the problem
obvious to anybody who runs checkstack.pl.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
diff --git a/mm/slub.c b/mm/slub.c
index b364844..adf04c1 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -167,7 +167,13 @@
 
 #define OO_SHIFT	16
 #define OO_MASK		((1 << OO_SHIFT) - 1)
+
+#ifdef CONFIG_SLUB_DEBUG
+/* We use an on-stack bitmap while debugging; make sure this won't be too big */
+#define MAX_OBJS_PER_PAGE	min_t(int, 65535, 8*(THREAD_SIZE/3))
+#else
 #define MAX_OBJS_PER_PAGE	65535 /* since page.objects is u16 */
+#endif
 
 /* Internal SLUB flags */
 #define __OBJECT_POISON		0x80000000 /* Poison object */
@@ -2426,7 +2432,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	DECLARE_BITMAP(map, page->objects);
+	DECLARE_BITMAP(map, MAX_OBJS_PER_PAGE);
 
 	bitmap_zero(map, page->objects);
 	slab_err(s, page, "%s", text);
@@ -3651,7 +3657,7 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 		struct page *page, enum track_item alloc)
 {
 	void *addr = page_address(page);
-	DECLARE_BITMAP(map, page->objects);
+	DECLARE_BITMAP(map, MAX_OBJS_PER_PAGE);
 	void *p;
 
 	bitmap_zero(map, page->objects);




* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-24 19:16 UTC
  To: Eric Dumazet; +Cc: Pekka J Enberg, linux-kernel

On Wed, 24 Mar 2010, Eric Dumazet wrote:

> I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
> slabs with many objects, since list_slab_objects() and process_slab()
> use DECLARE_BITMAP(map, page->objects);

Maybe we'd better allocate the bitmap via kmalloc() then.



* Re: [PATCH] slub: Potential stack overflow
From: Eric Dumazet @ 2010-03-24 19:22 UTC
  To: Christoph Lameter; +Cc: Pekka J Enberg, linux-kernel

On Wednesday 24 March 2010 at 14:16 -0500, Christoph Lameter wrote:
> On Wed, 24 Mar 2010, Eric Dumazet wrote:
> 
> > I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
> > slabs with many objects, since list_slab_objects() and process_slab()
> > use DECLARE_BITMAP(map, page->objects);
> 
> Maybe we'd better allocate the bitmap via kmalloc() then.
> 

Hmm...

Are we allowed to nest in these two functions?

GFP_KERNEL or GFP_ATOMIC?

These are debugging functions; what happens if kmalloc() returns NULL?





* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-24 19:49 UTC
  To: Eric Dumazet; +Cc: Pekka J Enberg, linux-kernel

On Wed, 24 Mar 2010, Eric Dumazet wrote:

> Are we allowed to nest in these two functions?

This is kmem_cache_close(); there is no danger of nesting.

> These are debugging functions; what happens if kmalloc() returns NULL?

Then you return ENOMEM and the user gets an error. We already do that in
validate_slab_cache().

Hmmm... In this case we are called from list_slab_objects(), which gets
called from free_partial() (which took a spinlock!), which gets called
from kmem_cache_close().

It's just a debugging aid, so it is no problem if it fails. GFP_ATOMIC?
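
For context, roughly the call chain in question, abridged from
2.6.33-era mm/slub.c (a sketch; details of that era's code may differ
slightly):

/*
 * free_partial() holds n->list_lock, a spinlock, while it walks the
 * partial list, so list_slab_objects() runs in atomic context and any
 * allocation it makes must not sleep: GFP_ATOMIC, not GFP_KERNEL.
 */
static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
{
	unsigned long flags;
	struct page *page, *h;

	spin_lock_irqsave(&n->list_lock, flags);	/* atomic from here on */
	list_for_each_entry_safe(page, h, &n->partial, lru) {
		if (!page->inuse) {
			/* unused slab, just discard it */
			list_del(&page->lru);
			discard_slab(s, page);
			n->nr_partial--;
		} else {
			/* objects leaked; report them, spinlock still held */
			list_slab_objects(s, page,
				"Objects remaining on kmem_cache_close()");
		}
	}
	spin_unlock_irqrestore(&n->list_lock, flags);
}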




* Re: [PATCH] slub: Potential stack overflow
From: Eric Dumazet @ 2010-03-24 21:03 UTC
  To: Christoph Lameter; +Cc: Pekka J Enberg, linux-kernel

On Wednesday 24 March 2010 at 14:49 -0500, Christoph Lameter wrote:
> On Wed, 24 Mar 2010, Eric Dumazet wrote:
> 
> > Are we allowed to nest in these two functions?
> 
> This is kmem_cache_close(); there is no danger of nesting.
> 
> > These are debugging functions; what happens if kmalloc() returns NULL?
> 
> Then you return ENOMEM and the user gets an error. We already do that in
> validate_slab_cache().
> 
> Hmmm... In this case we are called from list_slab_objects(), which gets
> called from free_partial() (which took a spinlock!), which gets called
> from kmem_cache_close().
> 
> It's just a debugging aid, so it is no problem if it fails. GFP_ATOMIC?

OK, here is the second version of the patch. Thanks!


[PATCH] slub: Potential stack overflow

I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
slabs with many objects, since list_slab_objects() and process_slab()
use DECLARE_BITMAP(map, page->objects);

With 65535 bits, we use 8192 bytes of stack...

A possible solution is to allocate the bitmap dynamically, using GFP_ATOMIC,
and do nothing if the allocation fails.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
---
diff --git a/mm/slub.c b/mm/slub.c
index b364844..5ee857a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2426,9 +2426,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	DECLARE_BITMAP(map, page->objects);
+	long *map = kzalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
+			    GFP_ATOMIC);
 
-	bitmap_zero(map, page->objects);
+	if (!map)
+		return;
 	slab_err(s, page, "%s", text);
 	slab_lock(page);
 	for_each_free_object(p, s, page->freelist)
@@ -2443,6 +2445,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 		}
 	}
 	slab_unlock(page);
+	kfree(map);
 #endif
 }
 
@@ -3651,16 +3654,19 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
 		struct page *page, enum track_item alloc)
 {
 	void *addr = page_address(page);
-	DECLARE_BITMAP(map, page->objects);
+	long *map = kzalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
+			    GFP_ATOMIC);
 	void *p;
 
-	bitmap_zero(map, page->objects);
+	if (!map)
+		return;
 	for_each_free_object(p, s, page->freelist)
 		set_bit(slab_index(p, s, addr), map);
 
 	for_each_object(p, s, addr, page->objects)
 		if (!test_bit(slab_index(p, s, addr), map))
 			add_location(t, s, get_track(s, p, alloc));
+	kfree(map);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,




* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-24 21:10 UTC
  To: Eric Dumazet; +Cc: Pekka J Enberg, linux-kernel

On Wed, 24 Mar 2010, Eric Dumazet wrote:

> @@ -3651,16 +3654,19 @@ static void process_slab(struct loc_track *t, struct kmem_cache *s,
>  		struct page *page, enum track_item alloc)
>  {
>  	void *addr = page_address(page);
> -	DECLARE_BITMAP(map, page->objects);
> +	long *map = kzalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
> +			    GFP_ATOMIC);
>  	void *p;
>
> -	bitmap_zero(map, page->objects);
> +	if (!map)
> +		return;
>  	for_each_free_object(p, s, page->freelist)
>  		set_bit(slab_index(p, s, addr), map);
>
>  	for_each_object(p, s, addr, page->objects)
>  		if (!test_bit(slab_index(p, s, addr), map))
>  			add_location(t, s, get_track(s, p, alloc));
> +	kfree(map);
>  }
>

Hmmm... That's another case. We should allocate the map higher up, I
guess, and pass the address in so that one allocation can be used for
all slabs. validate_slab_cache() does that.



* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-24 21:14 UTC
  To: Eric Dumazet; +Cc: Pekka J Enberg, linux-kernel

Here is a patch for the second case. I think it's better since it results
in an error display and it avoids the allocation for each slab. Add this
piece to your patch?

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>

---
 mm/slub.c |   12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2010-03-24 16:10:32.000000000 -0500
+++ linux-2.6/mm/slub.c	2010-03-24 16:13:06.000000000 -0500
@@ -3648,10 +3648,10 @@ static int add_location(struct loc_track
 }

 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *map)
 {
 	void *addr = page_address(page);
-	DECLARE_BITMAP(map, page->objects);
 	void *p;

 	bitmap_zero(map, page->objects);
@@ -3670,8 +3670,10 @@ static int list_locations(struct kmem_ca
 	unsigned long i;
 	struct loc_track t = { 0, 0, NULL };
 	int node;
+	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
+				sizeof(unsigned long), GFP_KERNEL);

-	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
+	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
 			GFP_TEMPORARY))
 		return sprintf(buf, "Out of memory\n");

@@ -3688,9 +3690,9 @@ static int list_locations(struct kmem_ca

 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		list_for_each_entry(page, &n->full, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}



* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-24 21:25 UTC
  To: Eric Dumazet; +Cc: Pekka J Enberg, linux-kernel

Here is a patch for the second case. I think it's better since it results
in an error display and it avoids the allocation for each slab. Add this
piece to your patch?

V1->V2 Fix missing kfree

Signed-off-by: Christoph Lameter <cl@linux-foundation.org>

---
 mm/slub.c |   13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c	2010-03-24 16:23:19.000000000 -0500
+++ linux-2.6/mm/slub.c	2010-03-24 16:24:21.000000000 -0500
@@ -3648,10 +3648,10 @@ static int add_location(struct loc_track
 }

 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *map)
 {
 	void *addr = page_address(page);
-	DECLARE_BITMAP(map, page->objects);
 	void *p;

 	bitmap_zero(map, page->objects);
@@ -3670,8 +3670,10 @@ static int list_locations(struct kmem_ca
 	unsigned long i;
 	struct loc_track t = { 0, 0, NULL };
 	int node;
+	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
+				sizeof(unsigned long), GFP_KERNEL);

-	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
+	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
 			GFP_TEMPORARY))
 		return sprintf(buf, "Out of memory\n");

@@ -3688,11 +3690,12 @@ static int list_locations(struct kmem_ca

 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		list_for_each_entry(page, &n->full, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
+	kfree(map);

 	for (i = 0; i < t.count; i++) {
 		struct location *l = &t.loc[i];


* Re: [PATCH] slub: Potential stack overflow
From: Eric Dumazet @ 2010-03-24 21:25 UTC
  To: Christoph Lameter; +Cc: Pekka J Enberg, linux-kernel

On Wednesday 24 March 2010 at 16:14 -0500, Christoph Lameter wrote:
> Here is a patch for the second case. I think it's better since it results
> in an error display and it avoids the allocation for each slab. Add this
> piece to your patch?
> 
> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>

Sure, here is the third version:

Thanks

[PATCH] slub: Potential stack overflow

I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
slabs with many objects, since list_slab_objects() and process_slab()
use DECLARE_BITMAP(map, page->objects);

With 65535 bits, we use 8192 bytes of stack...

Switch these allocations to dynamic allocations.

Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
---
 mm/slub.c |   25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index b364844..7dc8e73 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2426,9 +2426,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 #ifdef CONFIG_SLUB_DEBUG
 	void *addr = page_address(page);
 	void *p;
-	DECLARE_BITMAP(map, page->objects);
+	long *map = kzalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
+			    GFP_ATOMIC);
 
-	bitmap_zero(map, page->objects);
+	if (!map)
+		return;
 	slab_err(s, page, "%s", text);
 	slab_lock(page);
 	for_each_free_object(p, s, page->freelist)
@@ -2443,6 +2445,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
 		}
 	}
 	slab_unlock(page);
+	kfree(map);
 #endif
 }
 
@@ -3648,10 +3651,10 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		long *map)
 {
 	void *addr = page_address(page);
-	DECLARE_BITMAP(map, page->objects);
 	void *p;
 
 	bitmap_zero(map, page->objects);
@@ -3670,11 +3673,14 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	unsigned long i;
 	struct loc_track t = { 0, 0, NULL };
 	int node;
+	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
+				     sizeof(unsigned long), GFP_KERNEL);
 
-	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
-			GFP_TEMPORARY))
+	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
+				     GFP_TEMPORARY)) {
+		kfree(map);
 		return sprintf(buf, "Out of memory\n");
-
+	}
 	/* Push back cpu slabs */
 	flush_all(s);
 
@@ -3688,9 +3694,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		list_for_each_entry(page, &n->full, lru)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
@@ -3741,6 +3747,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	}
 
 	free_loc_track(&t);
+	kfree(map);
 	if (!t.count)
 		len += sprintf(buf, "No data\n");
 	return len;




* Re: [PATCH] slub: Potential stack overflow
From: Eric Dumazet @ 2010-03-24 21:30 UTC
  To: Christoph Lameter; +Cc: Pekka J Enberg, linux-kernel

On Wednesday 24 March 2010 at 16:25 -0500, Christoph Lameter wrote:
> Here is a patch for the second case. I think it's better since it results
> in an error display and it avoids the allocation for each slab. Add this
> piece to your patch?

Yes, I did it.

> 
> V1->V2 Fix missing kfree
> 
> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
> 
> ---
>  mm/slub.c |   13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
> 
> Index: linux-2.6/mm/slub.c
> ===================================================================
> --- linux-2.6.orig/mm/slub.c	2010-03-24 16:23:19.000000000 -0500
> +++ linux-2.6/mm/slub.c	2010-03-24 16:24:21.000000000 -0500
> @@ -3648,10 +3648,10 @@ static int add_location(struct loc_track
>  }
> 
>  static void process_slab(struct loc_track *t, struct kmem_cache *s,
> -		struct page *page, enum track_item alloc)
> +		struct page *page, enum track_item alloc,
> +		unsigned long *map)
>  {
>  	void *addr = page_address(page);
> -	DECLARE_BITMAP(map, page->objects);
>  	void *p;
> 
>  	bitmap_zero(map, page->objects);
> @@ -3670,8 +3670,10 @@ static int list_locations(struct kmem_ca
>  	unsigned long i;
>  	struct loc_track t = { 0, 0, NULL };
>  	int node;
> +	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
> +				sizeof(unsigned long), GFP_KERNEL);
> 
> -	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
> +	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
>  			GFP_TEMPORARY))

I also added a kfree(map); here.

>  		return sprintf(buf, "Out of memory\n");
> 
> @@ -3688,11 +3690,12 @@ static int list_locations(struct kmem_ca
> 
>  		spin_lock_irqsave(&n->list_lock, flags);
>  		list_for_each_entry(page, &n->partial, lru)
> -			process_slab(&t, s, page, alloc);
> +			process_slab(&t, s, page, alloc, map);
>  		list_for_each_entry(page, &n->full, lru)
> -			process_slab(&t, s, page, alloc);
> +			process_slab(&t, s, page, alloc, map);
>  		spin_unlock_irqrestore(&n->list_lock, flags);
>  	}
> +	kfree(map);
> 
>  	for (i = 0; i < t.count; i++) {
>  		struct location *l = &t.loc[i];
> --





* Re: [PATCH] slub: Potential stack overflow
From: Pekka Enberg @ 2010-03-25 19:29 UTC
  To: Eric Dumazet; +Cc: Christoph Lameter, linux-kernel

Eric Dumazet wrote:
> On Wednesday 24 March 2010 at 16:14 -0500, Christoph Lameter wrote:
>> Here is a patch for the second case. I think it's better since it results
>> in an error display and it avoids the allocation for each slab. Add this
>> piece to your patch?
>>
>> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
> 
> Sure, here is the third version:

Christoph, does this look OK to you? I think Eric has all your later 
additions and kfree() fixlets here.

> [PATCH] slub: Potential stack overflow
> 
> I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
> slabs with many objects, since list_slab_objects() and process_slab()
> use DECLARE_BITMAP(map, page->objects);
> 
> With 65535 bits, we use 8192 bytes of stack...
> 
> Switch these allocations to dynamic allocations.
> 
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
> ---
>  mm/slub.c |   25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
> diff --git a/mm/slub.c b/mm/slub.c
> index b364844..7dc8e73 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2426,9 +2426,11 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
>  #ifdef CONFIG_SLUB_DEBUG
>  	void *addr = page_address(page);
>  	void *p;
> -	DECLARE_BITMAP(map, page->objects);
> +	long *map = kzalloc(BITS_TO_LONGS(page->objects) * sizeof(long),
> +			    GFP_ATOMIC);
>  
> -	bitmap_zero(map, page->objects);
> +	if (!map)
> +		return;
>  	slab_err(s, page, "%s", text);
>  	slab_lock(page);
>  	for_each_free_object(p, s, page->freelist)
> @@ -2443,6 +2445,7 @@ static void list_slab_objects(struct kmem_cache *s, struct page *page,
>  		}
>  	}
>  	slab_unlock(page);
> +	kfree(map);
>  #endif
>  }
>  
> @@ -3648,10 +3651,10 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
>  }
>  
>  static void process_slab(struct loc_track *t, struct kmem_cache *s,
> -		struct page *page, enum track_item alloc)
> +		struct page *page, enum track_item alloc,
> +		long *map)
>  {
>  	void *addr = page_address(page);
> -	DECLARE_BITMAP(map, page->objects);
>  	void *p;
>  
>  	bitmap_zero(map, page->objects);
> @@ -3670,11 +3673,14 @@ static int list_locations(struct kmem_cache *s, char *buf,
>  	unsigned long i;
>  	struct loc_track t = { 0, 0, NULL };
>  	int node;
> +	unsigned long *map = kmalloc(BITS_TO_LONGS(oo_objects(s->max)) *
> +				     sizeof(unsigned long), GFP_KERNEL);
>  
> -	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
> -			GFP_TEMPORARY))
> +	if (!map || !alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
> +				     GFP_TEMPORARY)) {
> +		kfree(map);
>  		return sprintf(buf, "Out of memory\n");
> -
> +	}
>  	/* Push back cpu slabs */
>  	flush_all(s);
>  
> @@ -3688,9 +3694,9 @@ static int list_locations(struct kmem_cache *s, char *buf,
>  
>  		spin_lock_irqsave(&n->list_lock, flags);
>  		list_for_each_entry(page, &n->partial, lru)
> -			process_slab(&t, s, page, alloc);
> +			process_slab(&t, s, page, alloc, map);
>  		list_for_each_entry(page, &n->full, lru)
> -			process_slab(&t, s, page, alloc);
> +			process_slab(&t, s, page, alloc, map);
>  		spin_unlock_irqrestore(&n->list_lock, flags);
>  	}
>  
> @@ -3741,6 +3747,7 @@ static int list_locations(struct kmem_cache *s, char *buf,
>  	}
>  
>  	free_loc_track(&t);
> +	kfree(map);
>  	if (!t.count)
>  		len += sprintf(buf, "No data\n");
>  	return len;
> 
> 



* Re: [PATCH] slub: Potential stack overflow
From: Christoph Lameter @ 2010-03-25 21:03 UTC
  To: Pekka Enberg; +Cc: Eric Dumazet, linux-kernel

On Thu, 25 Mar 2010, Pekka Enberg wrote:

> Christoph, does this look OK to you? I think Eric has all your later additions
> and kfree() fixlets here.

Yes, I just don't know how to add an Ack given that there is already a
signoff.


* Re: [PATCH] slub: Potential stack overflow
From: Pekka Enberg @ 2010-03-28 17:10 UTC
  To: Eric Dumazet; +Cc: Christoph Lameter, linux-kernel

Eric Dumazet wrote:
> On Wednesday 24 March 2010 at 16:14 -0500, Christoph Lameter wrote:
>> Here is a patch for the second case. I think it's better since it results
>> in an error display and it avoids the allocation for each slab. Add this
>> piece to your patch?
>>
>> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>
> 
> Sure, here is the third version:
> 
> Thanks
> 
> [PATCH] slub: Potential stack overflow
> 
> I discovered that we can overflow the stack if CONFIG_SLUB_DEBUG=y and we use
> slabs with many objects, since list_slab_objects() and process_slab()
> use DECLARE_BITMAP(map, page->objects);
> 
> With 65535 bits, we use 8192 bytes of stack...
> 
> Switch these allocations to dynamic allocations.
> 
> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
> Signed-off-by: Christoph Lameter <cl@linux-foundation.org>

Applied.


Thread overview: 13+ messages
2010-03-24 11:40 [PATCH] slub: Potential stack overflow Eric Dumazet
2010-03-24 19:16 ` Christoph Lameter
2010-03-24 19:22   ` Eric Dumazet
2010-03-24 19:49     ` Christoph Lameter
2010-03-24 21:03       ` Eric Dumazet
2010-03-24 21:10         ` Christoph Lameter
2010-03-24 21:14           ` Christoph Lameter
2010-03-24 21:25             ` Eric Dumazet
2010-03-25 19:29               ` Pekka Enberg
2010-03-25 21:03                 ` Christoph Lameter
2010-03-28 17:10               ` Pekka Enberg
2010-03-24 21:25             ` Christoph Lameter
2010-03-24 21:30               ` Eric Dumazet
