* [PATCH -next] slub: set PG_slab on all of slab pages
@ 2012-02-29  8:54 ` Namhyung Kim
  0 siblings, 0 replies; 22+ messages in thread
From: Namhyung Kim @ 2012-02-29  8:54 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, Matt Mackall
  Cc: Namhyung Kim, linux-mm, linux-kernel

Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
incorrectly calls free_pages() on an object in a tail page, she will be
confused by the undefined result. Setting the flag on all slab pages
would help her by emitting a warning from bad_page() in such a case.
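
A minimal sketch of the misuse in question; the sizes and the stray call
are hypothetical, not taken from the reported case:

	/*
	 * A kmalloc() of more than one page is backed by a high-order
	 * slab page under SLUB, so an address inside the object can
	 * land on a tail page.  Passing it to free_pages() bypasses
	 * the slab allocator entirely.
	 */
	void *obj = kmalloc(2 * PAGE_SIZE, GFP_KERNEL);
	unsigned long addr = (unsigned long)obj + PAGE_SIZE; /* tail page */

	free_pages(addr, 0);	/* wrong API: should have been kfree(obj) */
	/* With PG_slab set on tail pages, this can now trip bad_page(). */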

Reported-by: Sangseok Lee <sangseok.lee@lge.com>
Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>
---
 mm/slub.c |   12 ++++++++++--
 1 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 33bab2aca882..575baacbec9b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1287,6 +1287,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	struct page *page;
 	struct kmem_cache_order_objects oo = s->oo;
 	gfp_t alloc_gfp;
+	int i;
 
 	flags &= gfp_allowed_mask;
 
@@ -1320,6 +1321,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
 	if (!page)
 		return NULL;
 
+	for (i = 0; i < 1 << oo_order(oo); i++)
+		__SetPageSlab(page + i);
+
 	if (kmemcheck_enabled
 		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
 		int pages = 1 << oo_order(oo);
@@ -1369,7 +1373,6 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
 
 	inc_slabs_node(s, page_to_nid(page), page->objects);
 	page->slab = s;
-	page->flags |= 1 << PG_slab;
 
 	start = page_address(page);
 
@@ -1396,6 +1399,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 {
 	int order = compound_order(page);
 	int pages = 1 << order;
+	int i;
 
 	if (kmem_cache_debug(s)) {
 		void *p;
@@ -1413,7 +1417,11 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
 		NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
 		-pages);
 
-	__ClearPageSlab(page);
+	for (i = 0; i < pages; i++) {
+		BUG_ON(!PageSlab(page + i));
+		__ClearPageSlab(page + i);
+	}
+
 	reset_page_mapcount(page);
 	if (current->reclaim_state)
 		current->reclaim_state->reclaimed_slab += pages;
-- 
1.7.9


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-02-29  8:54 ` Namhyung Kim
@ 2012-02-29 15:24   ` Christoph Lameter
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2012-02-29 15:24 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Pekka Enberg, Matt Mackall, Namhyung Kim, linux-mm, linux-kernel

On Wed, 29 Feb 2012, Namhyung Kim wrote:

> Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
> incorrectly calls free_pages() on an object in a tail page, she will be
> confused by the undefined result. Setting the flag on all slab pages
> would help her by emitting a warning from bad_page() in such a case.

NAK

You cannot free a tail page of a compound higher order page independently.
You must free the whole compound.
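
For reference, a sketch of the constraint being stated; the order is
illustrative:

	/* A compound (__GFP_COMP) allocation is one atomic unit. */
	struct page *head = alloc_pages(GFP_KERNEL | __GFP_COMP, 2);

	__free_pages(head + 1, 0); /* invalid: a tail page on its own */
	__free_pages(head, 2);     /* valid: the whole compound at once */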


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-02-29 15:24   ` Christoph Lameter
@ 2012-03-01  7:30     ` Namhyung Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Namhyung Kim @ 2012-03-01  7:30 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Namhyung Kim, Pekka Enberg, Matt Mackall, linux-mm, linux-kernel

Hi,

2012-02-29, 09:24 -0600, Christoph Lameter wrote:
> On Wed, 29 Feb 2012, Namhyung Kim wrote:
> 
> > Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
> > incorrectly calls free_pages() on an object in a tail page, she will be
> > confused by the undefined result. Setting the flag on all slab pages
> > would help her by emitting a warning from bad_page() in such a case.
> 
> NAK
> 
> You cannot free a tail page of a compound higher order page independently.
> You must free the whole compound.
> 

I meant freeing a *slab object* that resides in a compound page using the
buddy system API (e.g. free_pages). I know it's definitely a programming
error. However there's no safety net to catch and/or warn about such
misbehavior AFAICS - except for the head page, which has PG_slab set -
when it happens by any chance.

Without it, it is possible to free part of the tail pages silently and
cause unexpected, not-so-funny results some time later. That would be
hard to track down.

When I ran such bad code using SLAB, I was notified immediately. That's
why I'd like to add this behavior to SLUB too. In addition, it will give
more correct values for slab pages when using /proc/kpageflags IMHO.
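
In that vein, a small userspace sketch (the helper is made up for the
example) that reads the slab bit for a given PFN; per
Documentation/vm/pagemap.txt, /proc/kpageflags exports one 64-bit flags
word per page frame and KPF_SLAB is bit 19:

	#include <fcntl.h>
	#include <stdint.h>
	#include <unistd.h>

	#define KPF_SLAB 19

	/* Return 1 if @pfn is flagged as slab, 0 if not, -1 on error. */
	static int pfn_is_slab(uint64_t pfn)
	{
		uint64_t flags;
		int ret, fd = open("/proc/kpageflags", O_RDONLY);

		if (fd < 0)
			return -1;
		ret = pread(fd, &flags, sizeof(flags), pfn * sizeof(flags));
		close(fd);
		if (ret != sizeof(flags))
			return -1;
		return !!(flags & (1ULL << KPF_SLAB));
	}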


-- 
Regards,
Namhyung Kim



^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-01  7:30     ` Namhyung Kim
@ 2012-03-01 15:03       ` Christoph Lameter
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2012-03-01 15:03 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Namhyung Kim, Pekka Enberg, Matt Mackall, linux-mm, linux-kernel

On Thu, 1 Mar 2012, Namhyung Kim wrote:

> > You cannot free a tail page of a compound higher order page independently.
> > You must free the whole compound.
> >
>
> I meant freeing a *slab object* that resides in a compound page using the
> buddy system API (e.g. free_pages). I know it's definitely a programming
> error. However there's no safety net to catch and/or warn about such
> misbehavior AFAICS - except for the head page, which has PG_slab set -
> when it happens by any chance.

?? One generally passes a struct page pointer to the page allocator. The
slab allocator takes pointers to objects. The calls that take a pointer to
an object must be given a page-aligned value.

> Without it, it is possible to free part of the tail pages silently and
> cause unexpected, not-so-funny results some time later. That would be
> hard to track down.

Ok then fix the page allocator to BUG() on tail pages. That is an issue
with the page allocator, not the slab allocator.

Adding PG_tail to the flags checked on free should do the trick (at least
for 64 bit).
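
A minimal sketch of what that might look like, assuming the page-flags
layout of this era (the macro name below is made up; PG_tail only exists
as a separate bit under CONFIG_PAGEFLAGS_EXTENDED, hence "at least for
64 bit"):

	/*
	 * Sketch only, not the actual patch: extend the set of flags
	 * that free_pages_check() treats as bad, so a stray free of a
	 * tail page is caught at free time.
	 */
	#ifdef CONFIG_PAGEFLAGS_EXTENDED
	#define PAGE_FLAGS_CHECK_AT_FREE_WITH_TAIL \
		(PAGE_FLAGS_CHECK_AT_FREE | 1UL << PG_tail)
	#endif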


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-01 15:03       ` Christoph Lameter
@ 2012-03-02  7:12         ` Namhyung Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Namhyung Kim @ 2012-03-02  7:12 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Namhyung Kim, Pekka Enberg, Matt Mackall, linux-mm, linux-kernel

2012-03-02 12:03 AM, Christoph Lameter wrote:
> On Thu, 1 Mar 2012, Namhyung Kim wrote:
>
>>> You cannot free a tail page of a compound higher order page independently.
>>> You must free the whole compound.
>>>
>>
>> I meant freeing a *slab object* that resides in a compound page using
>> the buddy system API (e.g. free_pages). I know it's definitely a
>> programming error. However there's no safety net to catch and/or warn
>> about such misbehavior AFAICS - except for the head page, which has
>> PG_slab set - when it happens by any chance.
>
> ?? One generally passes a struct page pointer to the page allocator. The
> slab allocator takes pointers to objects. The calls that take a pointer
> to an object must be given a page-aligned value.
>

Please see free_pages(). It converts the pointer using virt_to_page().
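
For reference, free_pages() of this era looked roughly like the following
(paraphrased from mm/page_alloc.c); virt_to_page() maps any address within
a page, aligned or not, back to its struct page:

	void free_pages(unsigned long addr, unsigned int order)
	{
		if (addr != 0) {
			VM_BUG_ON(!virt_addr_valid((void *)addr));
			__free_pages(virt_to_page((void *)addr), order);
		}
	}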


>> Without it, it is possible to free part of the tail pages silently and
>> cause unexpected, not-so-funny results some time later. That would be
>> hard to track down.
>
> Ok then fix the page allocator to BUG() on tail pages. That is an issue
> with the page allocator, not the slab allocator.
>
> Adding PG_tail to the flags checked on free should do the trick (at least
> for 64 bit).
>

Yeah, but doing that requires changing the free path of compound pages. It
seems freeing normal compound pages does not clear the PG_head/tail bits
before free_pages_check() is called. I guess moving destroy_compound_page()
into free_pages_prepare() would solve this issue, but I want to make sure
it's the right approach since I have no idea how it affects huge page
behavior.

Besides, as that check has no effect on 32-bit kernels, I still want to add
the PG_slab flag to those pages. If you care about the performance of the
hot path, how about adding it under a debug configuration at least?


Thanks,
Namhyung Kim

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-02  7:12         ` Namhyung Kim
@ 2012-03-02 16:13           ` Christoph Lameter
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2012-03-02 16:13 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Namhyung Kim, Pekka Enberg, Matt Mackall, linux-mm, linux-kernel

On Fri, 2 Mar 2012, Namhyung Kim wrote:

> > ?? One generally passes a struct page pointer to the page allocator.
> > The slab allocator takes pointers to objects. The calls that take a
> > pointer to an object must be given a page-aligned value.
> >
>
> Please see free_pages(). It converts the pointer using virt_to_page().

Sure. As I said, you still need a page-aligned value. If you were
successful in doing what you claim, then there is a bug in the page
allocator, because it allowed the freeing of a tail page out of a compound
page.


> > Adding PG_tail to the flags checked on free should do the trick (at least
> > for 64 bit).
> >
>
> Yeah, but doing that requires changing the free path of compound pages.
> It seems freeing normal compound pages does not clear the PG_head/tail
> bits before free_pages_check() is called. I guess moving
> destroy_compound_page() into free_pages_prepare() would solve this
> issue, but I want to make sure it's the right approach since I have no
> idea how it affects huge page behavior.

Freeing a tail page should cause a BUG() or some form of error handling.
It should not work.

> Besides, as that check has no effect on 32-bit kernels, I still want to
> add the PG_slab flag to those pages. If you care about the performance
> of the hot path, how about adding it under a debug configuration at
> least?

One reason *not* to mark each page is that it impacts the allocation and
free paths in the allocator.

The basic notion of compound pages is that the flags in the head page are
valid for all the pages in the compound. PG_slab is already set in the
head page, so the compound is marked as a slab page. Consulting
page->first_page->flags rather than page->flags will yield the correct
result.
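
A hedged illustration of that lookup, using the compound_head() helper of
this era (tail pages carry a first_page back-pointer); the wrapper name is
made up for the example:

	/*
	 * Sketch: test the slab bit on the head of the compound, since
	 * SLUB only marks the head page.
	 */
	static inline int page_backed_by_slab(struct page *page)
	{
		return PageSlab(compound_head(page));
	}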




^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-02-29  8:54 ` Namhyung Kim
@ 2012-03-04 10:34   ` Minchan Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2012-03-04 10:34 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Namhyung Kim,
	linux-mm, linux-kernel

Hi Namhyung,

On Wed, Feb 29, 2012 at 05:54:34PM +0900, Namhyung Kim wrote:
> Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
> incorrectly calls free_pages() on an object in a tail page, she will be
> confused by the undefined result. Setting the flag on all slab pages
> would help her by emitting a warning from bad_page() in such a case.
> 
> Reported-by: Sangseok Lee <sangseok.lee@lge.com>
> Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>

I read this thread and I feel we haven't reached the right point yet.
I think it's not a compound page problem.
We can face the same problem when we allocate a high-order page without
__GFP_COMP and free a middle page of it.

Fortunately, we can catch such a problem via put_page_testzero in
__free_pages if you enable CONFIG_DEBUG_VM.

Did you try that with CONFIG_DEBUG_VM?
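
For reference, the path being referred to looked roughly like this at the
time (paraphrased from include/linux/mm.h and mm/page_alloc.c): without
CONFIG_DEBUG_VM, the zero refcount of a tail page just underflows silently
and __free_pages_ok() is never reached.

	static inline int put_page_testzero(struct page *page)
	{
		VM_BUG_ON(atomic_read(&page->_count) == 0);
		return atomic_dec_and_test(&page->_count);
	}

	void __free_pages(struct page *page, unsigned int order)
	{
		if (put_page_testzero(page)) {
			if (order == 0)
				free_hot_cold_page(page, 0);
			else
				__free_pages_ok(page, order);
		}
	}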

> ---
>  mm/slub.c |   12 ++++++++++--
>  1 files changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 33bab2aca882..575baacbec9b 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -1287,6 +1287,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	struct page *page;
>  	struct kmem_cache_order_objects oo = s->oo;
>  	gfp_t alloc_gfp;
> +	int i;
>  
>  	flags &= gfp_allowed_mask;
>  
> @@ -1320,6 +1321,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>  	if (!page)
>  		return NULL;
>  
> +	for (i = 0; i < 1 << oo_order(oo); i++)
> +		__SetPageSlab(page + i);
> +
>  	if (kmemcheck_enabled
>  		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
>  		int pages = 1 << oo_order(oo);
> @@ -1369,7 +1373,6 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>  
>  	inc_slabs_node(s, page_to_nid(page), page->objects);
>  	page->slab = s;
> -	page->flags |= 1 << PG_slab;
>  
>  	start = page_address(page);
>  
> @@ -1396,6 +1399,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  {
>  	int order = compound_order(page);
>  	int pages = 1 << order;
> +	int i;
>  
>  	if (kmem_cache_debug(s)) {
>  		void *p;
> @@ -1413,7 +1417,11 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>  		NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
>  		-pages);
>  
> -	__ClearPageSlab(page);
> +	for (i = 0; i < pages; i++) {
> +		BUG_ON(!PageSlab(page + i));
> +		__ClearPageSlab(page + i);
> +	}
> +
>  	reset_page_mapcount(page);
>  	if (current->reclaim_state)
>  		current->reclaim_state->reclaimed_slab += pages;
> -- 
> 1.7.9
> 

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-04 10:34   ` Minchan Kim
@ 2012-03-05  8:42     ` Namhyung Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Namhyung Kim @ 2012-03-05  8:42 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, Namhyung Kim,
	linux-mm, linux-kernel

2012-03-04 7:34 PM, Minchan Kim wrote:
> Hi Namhyung,
>

Hi Minchan,
glad to see you here again :)


> On Wed, Feb 29, 2012 at 05:54:34PM +0900, Namhyung Kim wrote:
>> Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
>> incorrectly calls free_pages() on an object in a tail page, she will be
>> confused by the undefined result. Setting the flag on all slab pages
>> would help her by emitting a warning from bad_page() in such a case.
>>
>> Reported-by: Sangseok Lee <sangseok.lee@lge.com>
>> Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>
>
> I read this thread and I feel we haven't reached the right point yet.
> I think it's not a compound page problem.
> We can face the same problem when we allocate a high-order page without
> __GFP_COMP and free a middle page of it.
>
> Fortunately, we can catch such a problem via put_page_testzero in
> __free_pages if you enable CONFIG_DEBUG_VM.
>
> Did you try that with CONFIG_DEBUG_VM?
>

To be honest, I don't have the real test environment that brought up this
issue in the first place. On my simple test environment, enabling
CONFIG_DEBUG_VM emits a BUG when I try to free a middle page of the slab
pages. Thanks for pointing it out.

However I guess there was a chance to bypass that test somehow, since the
free did reach __free_pages_ok(). If the page count had already been 0,
free_pages() would have prevented it from getting to that function even
with CONFIG_DEBUG_VM disabled. But I don't think it's a kernel bug - it
seems entirely our fault :( I'll recheck and talk it over with my
colleagues.

Thanks,
Namhyung


>> ---
>>   mm/slub.c |   12 ++++++++++--
>>   1 files changed, 10 insertions(+), 2 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 33bab2aca882..575baacbec9b 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1287,6 +1287,7 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>>   	struct page *page;
>>   	struct kmem_cache_order_objects oo = s->oo;
>>   	gfp_t alloc_gfp;
>> +	int i;
>>
>>   	flags &= gfp_allowed_mask;
>>
>> @@ -1320,6 +1321,9 @@ static struct page *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
>>   	if (!page)
>>   		return NULL;
>>
>> +	for (i = 0; i < 1 << oo_order(oo); i++)
>> +		__SetPageSlab(page + i);
>> +
>>   	if (kmemcheck_enabled
>>   		&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
>>   		int pages = 1 << oo_order(oo);
>> @@ -1369,7 +1373,6 @@ static struct page *new_slab(struct kmem_cache *s, gfp_t flags, int node)
>>
>>   	inc_slabs_node(s, page_to_nid(page), page->objects);
>>   	page->slab = s;
>> -	page->flags |= 1 << PG_slab;
>>
>>   	start = page_address(page);
>>
>> @@ -1396,6 +1399,7 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>>   {
>>   	int order = compound_order(page);
>>   	int pages = 1 << order;
>> +	int i;
>>
>>   	if (kmem_cache_debug(s)) {
>>   		void *p;
>> @@ -1413,7 +1417,11 @@ static void __free_slab(struct kmem_cache *s, struct page *page)
>>   		NR_SLAB_RECLAIMABLE : NR_SLAB_UNRECLAIMABLE,
>>   		-pages);
>>
>> -	__ClearPageSlab(page);
>> +	for (i = 0; i < pages; i++) {
>> +		BUG_ON(!PageSlab(page + i));
>> +		__ClearPageSlab(page + i);
>> +	}
>> +
>>   	reset_page_mapcount(page);
>>   	if (current->reclaim_state)
>>   		current->reclaim_state->reclaimed_slab += pages;
>> --
>> 1.7.9
>>


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-05  8:42     ` Namhyung Kim
@ 2012-03-05 10:59       ` Minchan Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2012-03-05 10:59 UTC (permalink / raw)
  To: Namhyung Kim
  Cc: Minchan Kim, Christoph Lameter, Pekka Enberg, Matt Mackall,
	Namhyung Kim, linux-mm, linux-kernel

On Mon, Mar 05, 2012 at 05:42:56PM +0900, Namhyung Kim wrote:
> 2012-03-04 7:34 PM, Minchan Kim wrote:
> >Hi Namhyung,
> >
> 
> Hi Minchan,
> glad to see you here again :)

Thanks!

> 
> 
> >On Wed, Feb 29, 2012 at 05:54:34PM +0900, Namhyung Kim wrote:
> >>Unlike SLAB, SLUB doesn't set PG_slab on tail pages, so if a user
> >>incorrectly calls free_pages() on an object in a tail page, she will be
> >>confused by the undefined result. Setting the flag on all slab pages
> >>would help her by emitting a warning from bad_page() in such a case.
> >>
> >>Reported-by: Sangseok Lee <sangseok.lee@lge.com>
> >>Signed-off-by: Namhyung Kim <namhyung.kim@lge.com>
> >
> >I read this thread and I feel we haven't reached the right point yet.
> >I think it's not a compound page problem.
> >We can face the same problem when we allocate a high-order page without
> >__GFP_COMP and free a middle page of it.
> >
> >Fortunately, we can catch such a problem via put_page_testzero in
> >__free_pages if you enable CONFIG_DEBUG_VM.
> >
> >Did you try that with CONFIG_DEBUG_VM?
> >
> 
> To be honest, I don't have the real test environment that brought up
> this issue in the first place. On my simple test environment, enabling
> CONFIG_DEBUG_VM emits a BUG when I try to free a middle page of the
> slab pages. Thanks for pointing it out.
>
> However I guess there was a chance to bypass that test somehow, since
> the free did reach __free_pages_ok(). If the page count had already
> been 0, free_pages() would have prevented it from getting to that
> function even with CONFIG_DEBUG_VM disabled. But I don't think it's a
> kernel bug - it seems entirely our fault :( I'll recheck and talk it
> over with my colleagues.

Let me ask a question.
Could you see the bad page message triggered by PG_slab with SLUB after
you applied your patch?


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-04 10:34   ` Minchan Kim
@ 2012-03-05 14:48     ` Christoph Lameter
  -1 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2012-03-05 14:48 UTC (permalink / raw)
  To: Minchan Kim
  Cc: Namhyung Kim, Pekka Enberg, Matt Mackall, Namhyung Kim, linux-mm,
	linux-kernel

On Sun, 4 Mar 2012, Minchan Kim wrote:

> I read this thread and I feel we haven't reached the right point yet.
> I think it's not a compound page problem.
> We can face the same problem when we allocate a high-order page without
> __GFP_COMP and free a middle page of it.

Yes we can do that and doing such a thing seems to be more legitimate
since one could argue that the user did not request an atomic allocation
unit from the page allocator and therefore the freeing of individual
pages in that group is permissible. If memory serves me right we do that
sometimes.

However if compound pages are requested then such an atomic allocation
unit *was* requested, and the page allocator should not allow freeing
individual pages.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH -next] slub: set PG_slab on all of slab pages
  2012-03-05 14:48     ` Christoph Lameter
@ 2012-03-06  1:16       ` Minchan Kim
  -1 siblings, 0 replies; 22+ messages in thread
From: Minchan Kim @ 2012-03-06  1:16 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Minchan Kim, Namhyung Kim, Pekka Enberg, Matt Mackall,
	Namhyung Kim, linux-mm, linux-kernel

Hi Christoph,

On Mon, Mar 05, 2012 at 08:48:33AM -0600, Christoph Lameter wrote:
> On Sun, 4 Mar 2012, Minchan Kim wrote:
> 
> > I read this thread and I feel we haven't reached the right point yet.
> > I think it's not a compound page problem.
> > We can face the same problem when we allocate a high-order page without
> > __GFP_COMP and free a middle page of it.
> 
> Yes we can do that and doing such a thing seems to be more legitimate
> since one could argue that the user did not request an atomic allocation
> unit from the page allocator and therefore the freeing of individual
> pages in that group is permissible. If memory serves me right we do that
> sometimes.

To be legitimate, the user has to handle the subpages' reference counts
properly. But I don't think that's desirable. If the user wants it, he
should use split_page instead of modifying the reference counts directly.
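
A short sketch of that sanctioned pattern; the order is illustrative:

	/*
	 * split_page() gives each subpage of a non-compound high-order
	 * allocation its own reference count, so individual subpages
	 * may then be freed independently.
	 */
	struct page *page = alloc_pages(GFP_KERNEL, 2);	/* 4 pages */

	split_page(page, 2);		/* each subpage: refcount 1 */
	__free_pages(page + 1, 0);	/* now a valid single-page free */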

> 
> However if compound pages are requested then such an atomic allocation
> unit *was* requested, and the page allocator should not allow freeing
> individual pages.

Yes. In fact, I am not sure this problem is related to compound pages.
If it is a compound page, the tail page's ref count should be zero.
When the user calls __free_pages on a tail page by mistake, it should not
get into __free_pages_ok, but the reference count would underflow.
Later, when the head page is freed, we could catch it in free_pages_check.

So I asked Namhyung whether he can see the bad page message via PG_slab
when he uses SLUB with his patch. If the problem still happens, something
seems to be modifying the tail page's ref count directly without get_page.
That is apparently a BUG.


^ permalink raw reply	[flat|nested] 22+ messages in thread
