* [PATCH mmotm] mm: alloc_large_system_hash check order
@ 2009-04-29 21:09 ` Hugh Dickins
  0 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-04-29 21:09 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Mel Gorman, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
had better make its own check on the order.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
Should probably follow
page-allocator-do-not-sanity-check-order-in-the-fast-path-fix.patch

Cc'ed DaveM and netdev, just in case they're surprised it was asking for
so much, or disappointed it's not getting as much as it was asking for.
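
For the curious, rough arithmetic behind the order 11, as a standalone
sketch (the bucket count is made up; the real figure comes out of
tcp_init()'s sizing heuristics, but the page arithmetic is the point):

#include <stdio.h>

int main(void)
{
    unsigned long page_size  = 4096;        /* x86_64 base page */
    unsigned long buckets    = 512 * 1024;  /* hypothetical ehash bucket count */
    unsigned long bucketsize = 16;          /* assumed: two pointers per bucket on 64-bit */
    unsigned long size  = buckets * bucketsize;              /* 8MB table */
    unsigned long pages = (size + page_size - 1) / page_size;
    unsigned int order = 0;

    while ((1UL << order) < pages)          /* what get_order() computes */
        order++;

    /*
     * With the default MAX_ORDER of 11 the buddy allocator tops out
     * at order 10 (4MB), so this 8MB request is order 11 and trips
     * the new WARN_ON_ONCE(order >= MAX_ORDER).
     */
    printf("size = %luMB, order = %u\n", size >> 20, order);
    return 0;
}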

 mm/page_alloc.c |    5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

--- 2.6.30-rc3-mm1/mm/page_alloc.c	2009-04-29 21:01:08.000000000 +0100
+++ mmotm/mm/page_alloc.c	2009-04-29 21:12:04.000000000 +0100
@@ -4765,7 +4765,10 @@ void *__init alloc_large_system_hash(con
 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
 		else {
 			unsigned long order = get_order(size);
-			table = (void*) __get_free_pages(GFP_ATOMIC, order);
+
+			if (order < MAX_ORDER)
+				table = (void *)__get_free_pages(GFP_ATOMIC,
+								order);
 			/*
 			 * If bucketsize is not a power-of-two, we may free
 			 * some pages at the end of hash table.

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-04-29 21:09 ` Hugh Dickins
@ 2009-04-29 21:28   ` Andrew Morton
  -1 siblings, 0 replies; 35+ messages in thread
From: Andrew Morton @ 2009-04-29 21:28 UTC (permalink / raw)
  To: Hugh Dickins; +Cc: mel, andi, davem, netdev, linux-kernel, linux-mm

On Wed, 29 Apr 2009 22:09:48 +0100 (BST)
Hugh Dickins <hugh@veritas.com> wrote:

> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> had better make its own check on the order.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>
> ---
> Should probably follow
> page-allocator-do-not-sanity-check-order-in-the-fast-path-fix.patch
> 
> Cc'ed DaveM and netdev, just in case they're surprised it was asking for
> so much, or disappointed it's not getting as much as it was asking for.
> 
>  mm/page_alloc.c |    5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> --- 2.6.30-rc3-mm1/mm/page_alloc.c	2009-04-29 21:01:08.000000000 +0100
> +++ mmotm/mm/page_alloc.c	2009-04-29 21:12:04.000000000 +0100
> @@ -4765,7 +4765,10 @@ void *__init alloc_large_system_hash(con
>  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
>  		else {
>  			unsigned long order = get_order(size);
> -			table = (void*) __get_free_pages(GFP_ATOMIC, order);
> +
> +			if (order < MAX_ORDER)
> +				table = (void *)__get_free_pages(GFP_ATOMIC,
> +								order);
>  			/*
>  			 * If bucketsize is not a power-of-two, we may free
>  			 * some pages at the end of hash table.

yes, the code is a bit odd:

:	do {
: 		size = bucketsize << log2qty;
: 		if (flags & HASH_EARLY)
: 			table = alloc_bootmem_nopanic(size);
: 		else if (hashdist)
: 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
: 		else {
: 			unsigned long order = get_order(size);
: 			table = (void*) __get_free_pages(GFP_ATOMIC, order);
: 			/*
: 			 * If bucketsize is not a power-of-two, we may free
: 			 * some pages at the end of hash table.
: 			 */
: 			if (table) {
: 				unsigned long alloc_end = (unsigned long)table +
: 						(PAGE_SIZE << order);
: 				unsigned long used = (unsigned long)table +
: 						PAGE_ALIGN(size);
: 				split_page(virt_to_page(table), order);
: 				while (used < alloc_end) {
: 					free_page(used);
: 					used += PAGE_SIZE;
: 				}
: 			}
: 		}
: 	} while (!table && size > PAGE_SIZE && --log2qty);

In the case where it does the __vmalloc(), the order-11 allocation will
succeed.  But in the other cases, the allocation attempt will need to
be shrunk and we end up with a smaller hash table.  Is that sensible?

If we want to regularise all three cases, doing

	size = min(size, MAX_ORDER);

before starting the loop would be suitable, although the huge
__get_free_pages() might still fail.  (But it will then warn, won't it?
 And nobody is reporting that).

I was a bit iffy about adding the warning in the first place, but let it
go through due to its potential to lead us to code which isn't doing what
it thinks it's doing, or is being generally peculiar.


* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-04-29 21:09 ` Hugh Dickins
@ 2009-04-30  0:25   ` David Miller
  -1 siblings, 0 replies; 35+ messages in thread
From: David Miller @ 2009-04-30  0:25 UTC (permalink / raw)
  To: hugh; +Cc: akpm, mel, andi, netdev, linux-kernel, linux-mm

From: Hugh Dickins <hugh@veritas.com>
Date: Wed, 29 Apr 2009 22:09:48 +0100 (BST)

> Cc'ed DaveM and netdev, just in case they're surprised it was asking for
> so much, or disappointed it's not getting as much as it was asking for.

This is basically what should be happening, thanks for the note.

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-04-29 21:09 ` Hugh Dickins
@ 2009-04-30 13:25   ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-04-30 13:25 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> had better make its own check on the order.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Looks good

Reviewed-by: Mel Gorman <mel@csn.ul.ie>

As I was looking there, it seemed that alloc_large_system_hash() should be
using alloc_pages_exact() instead of having its own "give back the spare
pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
where it may catch other unwary callers.
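
For anyone not familiar with the helper, a minimal usage sketch (the
function names are the real ones; the caller and GFP flag here are made up):

#include <linux/types.h>
#include <linux/gfp.h>

/* made-up caller, just to show how the helper is used */
static void *make_table(size_t size)
{
    /*
     * alloc_pages_exact() rounds size up to a power-of-two number of
     * pages, split_page()s the block and hands the unused tail back,
     * so a non-power-of-two request doesn't waste the slack.
     */
    void *table = alloc_pages_exact(size, GFP_KERNEL);

    if (!table)
        return NULL;
    /* ... fill in the table ... */
    /* and eventually: free_pages_exact(table, size); */
    return table;
}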

How about adding the following patch on top of yours?

==== CUT HERE ====
Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic

alloc_large_system_hash() has logic for freeing unused pages at the end
of a power-of-two-pages-aligned buffer that is a duplicate of what is in
alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
alloc_pages_exact().

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
--- 
 mm/page_alloc.c |   27 +++++----------------------
 1 file changed, 5 insertions(+), 22 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b3da0f..c94b140 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
 	unsigned int order = get_order(size);
 	unsigned long addr;
 
+	if (order >= MAX_ORDER)
+		return NULL;
+
 	addr = __get_free_pages(gfp_mask, order);
 	if (addr) {
 		unsigned long alloc_end = addr + (PAGE_SIZE << order);
@@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
 			table = alloc_bootmem_nopanic(size);
 		else if (hashdist)
 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
-		else {
-			unsigned long order = get_order(size);
-
-			if (order < MAX_ORDER)
-				table = (void *)__get_free_pages(GFP_ATOMIC,
-								order);
-			/*
-			 * If bucketsize is not a power-of-two, we may free
-			 * some pages at the end of hash table.
-			 */
-			if (table) {
-				unsigned long alloc_end = (unsigned long)table +
-						(PAGE_SIZE << order);
-				unsigned long used = (unsigned long)table +
-						PAGE_ALIGN(size);
-				split_page(virt_to_page(table), order);
-				while (used < alloc_end) {
-					free_page(used);
-					used += PAGE_SIZE;
-				}
-			}
-		}
+		else
+			table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
 	} while (!table && size > PAGE_SIZE && --log2qty);
 
 	if (!table)

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-04-30 13:25   ` Mel Gorman
@ 2009-05-01 11:30     ` Hugh Dickins
  -1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 11:30 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Thu, 30 Apr 2009, Mel Gorman wrote:
> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> > On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> > to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> > order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> > had better make its own check on the order.
> > 
> > Signed-off-by: Hugh Dickins <hugh@veritas.com>
> 
> Looks good
> 
> Reviewed-by: Mel Gorman <mel@csn.ul.ie>

Thanks.

> 
> As I was looking there, it seemed that alloc_large_system_hash() should be
> using alloc_pages_exact() instead of having its own "give back the spare
> pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
> the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
> where it may catch other unwary callers.
> 
> How about adding the following patch on top of yours?

Well observed, yes indeed.  In fact, it even looks as if, shock horror,
alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!

> 
> ==== CUT HERE ====
> Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic
> 
> alloc_large_system_hash() has logic for freeing unused pages at the end
> of a power-of-two-pages-aligned buffer that is a duplicate of what is in
> alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
> alloc_pages_exact().
> 
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> --- 
>  mm/page_alloc.c |   27 +++++----------------------
>  1 file changed, 5 insertions(+), 22 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1b3da0f..c94b140 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
>  	unsigned int order = get_order(size);
>  	unsigned long addr;
>  
> +	if (order >= MAX_ORDER)
> +		return NULL;
> +

I suppose there could be an argument about whether we do or do not
want to skip the WARN_ON when it's in alloc_pages_exact().

I have no opinion on that; but DaveM's reply on large_system_hash
does make it clear that we're not interested in the warning there.

>  	addr = __get_free_pages(gfp_mask, order);
>  	if (addr) {
>  		unsigned long alloc_end = addr + (PAGE_SIZE << order);
> @@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
>  			table = alloc_bootmem_nopanic(size);
>  		else if (hashdist)
>  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> -		else {
> -			unsigned long order = get_order(size);
> -
> -			if (order < MAX_ORDER)
> -				table = (void *)__get_free_pages(GFP_ATOMIC,
> -								order);
> -			/*
> -			 * If bucketsize is not a power-of-two, we may free
> -			 * some pages at the end of hash table.
> -			 */

That's actually a helpful comment: it's easy to think we're dealing
in powers of two here when we may not be.  Maybe retain it with your
alloc_pages_exact call?

> -			if (table) {
> -				unsigned long alloc_end = (unsigned long)table +
> -						(PAGE_SIZE << order);
> -				unsigned long used = (unsigned long)table +
> -						PAGE_ALIGN(size);
> -				split_page(virt_to_page(table), order);
> -				while (used < alloc_end) {
> -					free_page(used);
> -					used += PAGE_SIZE;
> -				}
> -			}
> -		}
> +		else
> +			table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);

Do you actually need that PAGE_ALIGN on the size?

>  	} while (!table && size > PAGE_SIZE && --log2qty);
>  
>  	if (!table)

Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
I think that's a change we could make _if_ the large_system_hash
users ever ask for it, but _not_ one we should make surreptitiously.
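
To be concrete, that kind of fallback would amount to something like the
untested fragment below (shown only for illustration, not proposed):

    else {
        table = alloc_pages_exact(size, GFP_ATOMIC);
        /* hypothetical: spill over to vmalloc rather than shrink */
        if (!table)
            table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
    }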

Hugh

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 11:30     ` Hugh Dickins
@ 2009-05-01 11:46       ` Eric Dumazet
  -1 siblings, 0 replies; 35+ messages in thread
From: Eric Dumazet @ 2009-05-01 11:46 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Mel Gorman, Andrew Morton, Andi Kleen, David Miller, netdev,
	linux-kernel, linux-mm

Hugh Dickins wrote:
> On Thu, 30 Apr 2009, Mel Gorman wrote:
>> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
>>> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
>>> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
>>> order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
>>> had better make its own check on the order.

Well, I don't know why, since alloc_large_system_hash() already takes
care of retries, halving the size between tries.

>>>
>>> Signed-off-by: Hugh Dickins <hugh@veritas.com>
>> Looks good
>>
>> Reviewed-by: Mel Gorman <mel@csn.ul.ie>
> 
> Thanks.
> 
>> As I was looking there, it seemed that alloc_large_system_hash() should be
>> using alloc_pages_exact() instead of having its own "give back the spare
>> pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
>> the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
>> where it may catch other unwary callers.
>>
>> How about adding the following patch on top of yours?
> 
> Well observed, yes indeed.  In fact, it even looks as if, shock horror,
> alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
> Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!

As a matter of fact, I was planning to call my lawyer, so I'll reconsider
this and save some euros, thanks!

;)

It makes sense to use a helper function if it already exists, of course!


* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 11:46       ` Eric Dumazet
@ 2009-05-01 12:05       ` Hugh Dickins
  -1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 12:05 UTC (permalink / raw)
  To: Eric Dumazet
  Cc: Mel Gorman, Andrew Morton, Andi Kleen, David Miller, netdev,
	linux-kernel, linux-mm

On Fri, 1 May 2009, Eric Dumazet wrote:
> Hugh Dickins wrote:
> > On Thu, 30 Apr 2009, Mel Gorman wrote:
> >> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> >>> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> >>> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> >>> order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> >>> had better make its own check on the order.
> 
> Well, I don't know why, since alloc_large_system_hash() already takes
> care of retries, halving the size between tries.

Sorry, I wasn't clear: I just meant that if we keep that
WARN_ON_ONCE(order >= MAX_ORDER) in __alloc_pages_slowpath(),
then we need alloc_large_system_hash() to avoid the call to
__get_free_pages() in the order >= MAX_ORDER case,
precisely because we're happy with the way it halves and
falls back, so we don't want a noisy warning; and now that we know
that it could give that warning, it would be a shame for the
_ONCE to suppress more interesting warnings later.
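
(For reference, the _ONCE variants boil down to a static flag per call
site, roughly like the sketch below (not the kernel's exact macro), which
is why one known-harmless hit at boot eats the only report that call site
will ever give:)

#include <stdio.h>

/* roughly what WARN_ON_ONCE() does; illustrative, not the real macro */
#define MY_WARN_ON_ONCE(cond) ({                                  \
    static int warned;                                            \
    int hit = !!(cond);                                           \
    if (hit && !warned) {                                         \
        warned = 1;                                               \
        printf("WARNING at %s:%d\n", __FILE__, __LINE__);         \
    }                                                             \
    hit;                                                          \
})

static void try_alloc(unsigned int order)
{
    /* one call site: its static flag is shared by every caller */
    MY_WARN_ON_ONCE(order >= 11);
}

int main(void)
{
    try_alloc(11);  /* the known-benign boot-time hit prints the warning */
    try_alloc(15);  /* a later, more interesting offender is now silent */
    return 0;
}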

I certainly did not mean for alloc_large_system_hash() to fail
in the order >= MAX_ORDER case, nor did the patch do so.

Hugh


* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-04-29 21:28   ` Andrew Morton
@ 2009-05-01 13:40     ` Hugh Dickins
  -1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 13:40 UTC (permalink / raw)
  To: Andrew Morton; +Cc: mel, andi, davem, netdev, linux-kernel, linux-mm

On Wed, 29 Apr 2009, Andrew Morton wrote:
> 
> yes, the code is a bit odd:
> 
> :	do {
> : 		size = bucketsize << log2qty;
> : 		if (flags & HASH_EARLY)
> : 			table = alloc_bootmem_nopanic(size);
> : 		else if (hashdist)
> : 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> : 		else {
> : 			unsigned long order = get_order(size);
> : 			table = (void*) __get_free_pages(GFP_ATOMIC, order);
> : 			/*
> : 			 * If bucketsize is not a power-of-two, we may free
> : 			 * some pages at the end of hash table.
> : 			 */
> : 			if (table) {
> : 				unsigned long alloc_end = (unsigned long)table +
> : 						(PAGE_SIZE << order);
> : 				unsigned long used = (unsigned long)table +
> : 						PAGE_ALIGN(size);
> : 				split_page(virt_to_page(table), order);
> : 				while (used < alloc_end) {
> : 					free_page(used);
> : 					used += PAGE_SIZE;
> : 				}
> : 			}
> : 		}
> : 	} while (!table && size > PAGE_SIZE && --log2qty);
> 
> In the case where it does the __vmalloc(), the order-11 allocation will
> succeed.  But in the other cases, the allocation attempt will need to
> be shrunk and we end up with a smaller hash table.  Is that sensible?

It is a little odd, but the __vmalloc() route is used by default on
64-bit with CONFIG_NUMA, and this route otherwise.  (The hashdist
Doc isn't up-to-date on that; I'll send a patch.)

> 
> If we want to regularise all three cases, doing
> 
> 	size = min(size, MAX_ORDER);

If I take you literally, the resulting hash tables are going to
be rather small ;) but I know what you mean.

> 
> before starting the loop would be suitable, although the huge
> __get_free_pages() might still fail.

Oh, I don't feel a great urge to regularize these cases in such
a way.  I particularly don't feel like limiting 64-bit NUMA to
MAX_ORDER-1 size, if netdev have been happy with more until now.
Could consider a __vmalloc fallback when order is too large,
but let's not do so unless someone actually needs that.

> (But it will then warn, won't it?
>  And nobody is reporting that).

Well, it was hard to report it while mmotm's WARN_ON_ONCE was itself
oopsing.  With that fixed, I've reported it on x86_64 with 4GB
(without CONFIG_NUMA).

> 
> I was a bit iffy about adding the warning in the first place, but let it
> go through due to its potential to lead us to code which isn't doing what
> it thinks it's doing, or is being generally peculiar.

DaveM has confirmed that the code is doing what they want it to do.
So I think mmotm wants this patch (for alloc_large_system_hash to
keep away from that warning), plus Mel's improvement on top of it.

Hugh

* [PATCH 2.6.30] Doc: hashdist defaults on for 64bit
  2009-05-01 13:40     ` Hugh Dickins
@ 2009-05-01 13:45       ` Hugh Dickins
  -1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 13:45 UTC (permalink / raw)
  To: Andrew Morton; +Cc: mel, andi, davem, anton, netdev, linux-kernel, linux-mm

Update Doc: kernel boot parameter hashdist now defaults on for all 64bit NUMA.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
---

 Documentation/kernel-parameters.txt |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- 2.6.30-rc4/Documentation/kernel-parameters.txt	2009-04-30 06:39:30.000000000 +0100
+++ linux/Documentation/kernel-parameters.txt	2009-05-01 14:08:56.000000000 +0100
@@ -775,7 +775,7 @@ and is between 256 and 4096 characters.
 
 	hashdist=	[KNL,NUMA] Large hashes allocated during boot
 			are distributed across NUMA nodes.  Defaults on
-			for IA-64, off otherwise.
+			for 64bit NUMA, off otherwise.
 			Format: 0 | 1 (for off | on)
 
 	hcl=		[IA-64] SGI's Hardware Graph compatibility layer

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 14:00       ` Mel Gorman
@ 2009-05-01 13:59         ` Christoph Lameter
  -1 siblings, 0 replies; 35+ messages in thread
From: Christoph Lameter @ 2009-05-01 13:59 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
	linux-kernel, linux-mm

On Fri, 1 May 2009, Mel Gorman wrote:

> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
>
> I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> (according to the documentation). It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
>
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().

Can we fall back to a huge page mapped vmalloc? Like what the vmemmap code
does? Then we also would not have MAX_ORDER limitations.

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 11:30     ` Hugh Dickins
@ 2009-05-01 14:00       ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:00 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> On Thu, 30 Apr 2009, Mel Gorman wrote:
> > On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> > > On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> > > to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> > > order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> > > had better make its own check on the order.
> > > 
> > > Signed-off-by: Hugh Dickins <hugh@veritas.com>
> > 
> > Looks good
> > 
> > Reviewed-by: Mel Gorman <mel@csn.ul.ie>
> 
> Thanks.
> 
> > 
> > As I was looking there, it seemed that alloc_large_system_hash() should be
> > using alloc_pages_exact() instead of having its own "give back the spare
> > pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
> > the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
> > where it may catch other unwary callers.
> > 
> > How about adding the following patch on top of yours?
> 
> Well observed, yes indeed.  In fact, it even looks as if, shock horror,
> alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
> Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!
> 

*phew*.  We dodged a bullet there. I can put away my pitchfork and
flaming torch kit for another day.

> > 
> > ==== CUT HERE ====
> > Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic
> > 
> > alloc_large_system_hash() has logic for freeing unused pages at the end
> > of a power-of-two-pages-aligned buffer that is a duplicate of what is in
> > alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
> > alloc_pages_exact().
> > 
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > --- 
> >  mm/page_alloc.c |   27 +++++----------------------
> >  1 file changed, 5 insertions(+), 22 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 1b3da0f..c94b140 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
> >  	unsigned int order = get_order(size);
> >  	unsigned long addr;
> >  
> > +	if (order >= MAX_ORDER)
> > +		return NULL;
> > +
> 
> I suppose there could be an argument about whether we do or do not
> want to skip the WARN_ON when it's in alloc_pages_exact().
> 
> I have no opinion on that; but DaveM's reply on large_system_hash
> does make it clear that we're not interested in the warning there.
> 

That's a fair point. I've included a slightly modified patch below that
preserves the warning for alloc_pages_exact() being called with too
large an order.

It means we call get_order() twice in this path, but so what. It's not
even text bloat, since it's __init text that gets freed up anyway.

> >  	addr = __get_free_pages(gfp_mask, order);
> >  	if (addr) {
> >  		unsigned long alloc_end = addr + (PAGE_SIZE << order);
> > @@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> >  			table = alloc_bootmem_nopanic(size);
> >  		else if (hashdist)
> >  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> > -		else {
> > -			unsigned long order = get_order(size);
> > -
> > -			if (order < MAX_ORDER)
> > -				table = (void *)__get_free_pages(GFP_ATOMIC,
> > -								order);
> > -			/*
> > -			 * If bucketsize is not a power-of-two, we may free
> > -			 * some pages at the end of hash table.
> > -			 */
> 
> That's actually a helpful comment, it's easy to think we're dealing
> in powers of two here when we may not be.  Maybe retain it with your
> alloc_pages_exact call?
> 

Sure, it explains why alloc_pages_exact() is being used instead of
__get_free_pages() for those that are unfamiliar with the call.

> > -			if (table) {
> > -				unsigned long alloc_end = (unsigned long)table +
> > -						(PAGE_SIZE << order);
> > -				unsigned long used = (unsigned long)table +
> > -						PAGE_ALIGN(size);
> > -				split_page(virt_to_page(table), order);
> > -				while (used < alloc_end) {
> > -					free_page(used);
> > -					used += PAGE_SIZE;
> > -				}
> > -			}
> > -		}
> > +		else
> > +			table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
> 
> Do you actually need that PAGE_ALIGN on the size?
> 

Actually no. When I added it, it was because alloc_pages_exact() did not
obviously deal with unaligned sizes but it does. Sorry about that.

> >  	} while (!table && size > PAGE_SIZE && --log2qty);
> >  
> >  	if (!table)
> 
> Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> ought to fall back to __vmalloc() if the alloc_pages_exact() fails.

I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
(according to the documentation). It is used in the case that the caller is
willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
exchange for the pages being interleaved on different nodes so that access
to the hash table has average performance[*].

If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
a mysterious performance regression report for a workload that depended on
the hash tables performance but that there was enough memory for the hash
table to be allocated with vmalloc() instead of alloc_pages_exact().

[*] I speculate that on non-IA64 NUMA machines we see different
    performance for large filesystem benchmarks depending on whether we
    are running on the boot-CPU node, and on whether hashdist= is used.

> I think that's a change we could make _if_ the large_system_hash
> users ever ask for it, but _not_ one we should make surreptitiously.
> 

If they want it, they'll have to ask with hashdist=. Somehow I doubt it's
specified very often :/ .

Here is Take 2

==== CUT HERE ====

Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2

alloc_large_system_hash() has logic for freeing pages at the end
of an excessively large power-of-two buffer that is a duplicate of what
is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
to use alloc_pages_exact().

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
--- 
 mm/page_alloc.c |   21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b3da0f..8360d59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
 		else if (hashdist)
 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
 		else {
-			unsigned long order = get_order(size);
-
-			if (order < MAX_ORDER)
-				table = (void *)__get_free_pages(GFP_ATOMIC,
-								order);
 			/*
 			 * If bucketsize is not a power-of-two, we may free
-			 * some pages at the end of hash table.
+			 * some pages at the end of hash table which
+			 * alloc_pages_exact() automatically does
 			 */
-			if (table) {
-				unsigned long alloc_end = (unsigned long)table +
-						(PAGE_SIZE << order);
-				unsigned long used = (unsigned long)table +
-						PAGE_ALIGN(size);
-				split_page(virt_to_page(table), order);
-				while (used < alloc_end) {
-					free_page(used);
-					used += PAGE_SIZE;
-				}
-			}
+			if (get_order(size) < MAX_ORDER)
+				table = alloc_pages_exact(size, GFP_ATOMIC);
 		}
 	} while (!table && size > PAGE_SIZE && --log2qty);
 

^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
@ 2009-05-01 14:00       ` Mel Gorman
  0 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:00 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> On Thu, 30 Apr 2009, Mel Gorman wrote:
> > On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> > > On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> > > to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> > > order >= MAX_ORDER - it's hoping for order 11.  alloc_large_system_hash()
> > > had better make its own check on the order.
> > > 
> > > Signed-off-by: Hugh Dickins <hugh@veritas.com>
> > 
> > Looks good
> > 
> > Reviewed-by: Mel Gorman <mel@csn.ul.ie>
> 
> Thanks.
> 
> > 
> > As I was looking there, it seemed that alloc_large_system_hash() should be
> > using alloc_pages_exact() instead of having its own "give back the spare
> > pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
> > the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
> > where it may catch other unwary callers.
> > 
> > How about adding the following patch on top of yours?
> 
> Well observed, yes indeed.  In fact, it even looks as if, shock horror,
> alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
> Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!
> 

*phew*.  We dodged a bullet there. I can put away my pitchfork and
flaming torch kit for another day.

> > 
> > ==== CUT HERE ====
> > Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic
> > 
> > alloc_large_system_hash() has logic for freeing unused pages at the end
> > of an power-of-two-pages-aligned buffer that is a duplicate of what is in
> > alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
> > alloc_pages_exact().
> > 
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > --- 
> >  mm/page_alloc.c |   27 +++++----------------------
> >  1 file changed, 5 insertions(+), 22 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 1b3da0f..c94b140 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
> >  	unsigned int order = get_order(size);
> >  	unsigned long addr;
> >  
> > +	if (order >= MAX_ORDER)
> > +		return NULL;
> > +
> 
> I suppose there could be an argument about whether we do or do not
> want to skip the WARN_ON when it's in alloc_pages_exact().
> 
> I have no opinion on that; but DaveM's reply on large_system_hash
> does make it clear that we're not interested in the warning there.
> 

That's a fair point. I've included a slightly modified patch below that
preserves the warning for alloc_pages_exact() being called with a
too-large-an-order.

It means we call get_order() twice but in this path, so what. It's not
even text bloat as it's freed up.

> >  	addr = __get_free_pages(gfp_mask, order);
> >  	if (addr) {
> >  		unsigned long alloc_end = addr + (PAGE_SIZE << order);
> > @@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> >  			table = alloc_bootmem_nopanic(size);
> >  		else if (hashdist)
> >  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> > -		else {
> > -			unsigned long order = get_order(size);
> > -
> > -			if (order < MAX_ORDER)
> > -				table = (void *)__get_free_pages(GFP_ATOMIC,
> > -								order);
> > -			/*
> > -			 * If bucketsize is not a power-of-two, we may free
> > -			 * some pages at the end of hash table.
> > -			 */
> 
> That's actually a helpful comment, it's easy to think we're dealing
> in powers of two here when we may not be.  Maybe retain it with your
> alloc_pages_exact call?
> 

Sure, it explains why alloc_pages_exact() is being used instead of
__get_free_pages() for those that are unfamiliar with the call.

> > -			if (table) {
> > -				unsigned long alloc_end = (unsigned long)table +
> > -						(PAGE_SIZE << order);
> > -				unsigned long used = (unsigned long)table +
> > -						PAGE_ALIGN(size);
> > -				split_page(virt_to_page(table), order);
> > -				while (used < alloc_end) {
> > -					free_page(used);
> > -					used += PAGE_SIZE;
> > -				}
> > -			}
> > -		}
> > +		else
> > +			table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
> 
> Do you actually need that PAGE_ALIGN on the size?
> 

Actually no. When I added it, it was because alloc_pages_exact() did not
obviously deal with unaligned sizes but it does. Sorry about that.

> >  	} while (!table && size > PAGE_SIZE && --log2qty);
> >  
> >  	if (!table)
> 
> Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> ought to fall back to __vmalloc() if the alloc_pages_exact() fails.

I don't believe so. __vmalloc() is only used when hashdist= is specified or
on IA-64 (according to the documentation). It is used when the caller is
willing to accept the vmalloc() overhead (e.g. using base-page PTEs) in
exchange for the pages being interleaved across nodes, so that access to
the hash table has average performance[*].

If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
a mysterious performance regression report for a workload that depended on
the hash table's performance, but where there was enough memory that the
hash table ended up allocated with vmalloc() instead of alloc_pages_exact().

[*] I speculate that on non-IA64 NUMA machines we would see different
    performance for large filesystem benchmarks, depending on whether we
    happen to be running on the boot-CPU node and on whether hashdist=
    is used.
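
To make the trade-off concrete, the allocation loop in the patched function
boils down to roughly this (condensed; the HASH_EARLY guard and the size
calculation are paraphrased from memory rather than quoted):

	do {
		size = bucketsize << log2qty;
		if (flags & HASH_EARLY)
			/* boot-time tables: bootmem, not limited by MAX_ORDER */
			table = alloc_bootmem_nopanic(size);
		else if (hashdist)
			/* node-interleaved, base-page PTEs in vmalloc space */
			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
		else if (get_order(size) < MAX_ORDER)
			/* physically contiguous pages from the linear mapping */
			table = alloc_pages_exact(size, GFP_ATOMIC);
		/* note: no automatic vmalloc() fallback - shrink instead */
	} while (!table && size > PAGE_SIZE && --log2qty);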

> I think that's a change we could make _if_ the large_system_hash
> users ever ask for it, but _not_ one we should make surreptitiously.
> 

If they want it, they'll have to ask with hashdist=. Somehow I doubt it's
specified very often :/ .

Here is Take 2

==== CUT HERE ====

Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2

alloc_large_system_hash() has logic for freeing the unused pages at the
end of an over-sized power-of-two buffer, which duplicates what
alloc_pages_exact() does. This patch converts alloc_large_system_hash()
to use alloc_pages_exact().

Signed-off-by: Mel Gorman <mel@csn.ul.ie>
--- 
 mm/page_alloc.c |   21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b3da0f..8360d59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
 		else if (hashdist)
 			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
 		else {
-			unsigned long order = get_order(size);
-
-			if (order < MAX_ORDER)
-				table = (void *)__get_free_pages(GFP_ATOMIC,
-								order);
 			/*
 			 * If bucketsize is not a power-of-two, we may free
-			 * some pages at the end of hash table.
+			 * some pages at the end of hash table which
+			 * alloc_pages_exact() automatically does
 			 */
-			if (table) {
-				unsigned long alloc_end = (unsigned long)table +
-						(PAGE_SIZE << order);
-				unsigned long used = (unsigned long)table +
-						PAGE_ALIGN(size);
-				split_page(virt_to_page(table), order);
-				while (used < alloc_end) {
-					free_page(used);
-					used += PAGE_SIZE;
-				}
-			}
+			if (get_order(size) < MAX_ORDER)
+				table = alloc_pages_exact(size, GFP_ATOMIC);
 		}
 	} while (!table && size > PAGE_SIZE && --log2qty);
 


^ permalink raw reply related	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 14:00       ` Mel Gorman
@ 2009-05-01 14:12         ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:12 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Fri, May 01, 2009 at 03:00:15PM +0100, Mel Gorman wrote:
> > <SNIP>
> > 
> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> 
> I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> (according to the documentation).

I was foolish to believe the documentation. vmalloc() will be used by
default on 64-bit NUMA, not just IA-64.

> It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
> 
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().
> 

I think this point still holds. On a non-NUMA machine, we don't want to fall
back to using vmalloc() just because the machine happened to have enough
memory. It's really tricky to know for sure though - would there be enough
performance benefit from the bigger hash table to offset backing it with
base pages? It's probably unknowable, because it depends on the exact
hardware and on how the hash table is being used.

> [*] I speculate that on non-IA64 NUMA machines that we see different
>     performance for large filesystem benchmarks depending on whether we are
>     running on the boot-CPU node or not depending on whether hashdist=
>     is used or not.

This speculation is junk because using vmalloc() for hash tables is not
specific to IA-64.

> <SNIP>

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 14:00       ` Mel Gorman
@ 2009-05-01 14:28         ` Hugh Dickins
  -1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 14:28 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Fri, 1 May 2009, Mel Gorman wrote:
> On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> > 
> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> 
> I don't believe so. __vmalloc() is only used when hashdist= is used
> or on IA-64 (according to the documentation).

Doc out of date: hashdist's default "on" was extended to include
x86_64 ages ago, and to all 64-bit in 2.6.30-rc.

> It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
> 
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().
> 
> [*] I speculate that on non-IA64 NUMA machines that we see different
>     performance for large filesystem benchmarks depending on whether we are
>     running on the boot-CPU node or not depending on whether hashdist=
>     is used or not.

Now that will be "32bit NUMA machines".  I was going to say that's
a tiny sample, but I'm probably out of touch.  I thought NUMA-Q was
on its way out, but I see it's still there in the tree.  And presumably
nowadays there's a great swing to NUMA on ARM or netbooks or something.

> 
> > I think that's a change we could make _if_ the large_system_hash
> > users ever ask for it, but _not_ one we should make surreptitiously.
> > 
> 
> If they want it, they'll have to ask with hashdist=.

That's quite a good argument for taking it out from under CONFIG_NUMA.
The name "hashdist" would then be absurd, but we could delight our
grandchildren with the story of how it came to be so named.

> Somehow I doubt it's specified very often :/ .

Our intuitions match!  Which is probably why it got extended.

> 
> Here is Take 2
> 
> ==== CUT HERE ====
> 
> Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2
> 
> alloc_large_system_hash() has logic for freeing pages at the end
> of an excessively large power-of-two buffer that is a duplicate of what
> is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
> to use alloc_pages_exact().
> 
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>

Acked-by: Hugh Dickins <hugh@veritas.com>

> --- 
>  mm/page_alloc.c |   21 ++++-----------------
>  1 file changed, 4 insertions(+), 17 deletions(-)
> 
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1b3da0f..8360d59 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
>  		else if (hashdist)
>  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
>  		else {
> -			unsigned long order = get_order(size);
> -
> -			if (order < MAX_ORDER)
> -				table = (void *)__get_free_pages(GFP_ATOMIC,
> -								order);
>  			/*
>  			 * If bucketsize is not a power-of-two, we may free
> -			 * some pages at the end of hash table.
> +			 * some pages at the end of hash table which
> +			 * alloc_pages_exact() automatically does
>  			 */
> -			if (table) {
> -				unsigned long alloc_end = (unsigned long)table +
> -						(PAGE_SIZE << order);
> -				unsigned long used = (unsigned long)table +
> -						PAGE_ALIGN(size);
> -				split_page(virt_to_page(table), order);
> -				while (used < alloc_end) {
> -					free_page(used);
> -					used += PAGE_SIZE;
> -				}
> -			}
> +			if (get_order(size) < MAX_ORDER)
> +				table = alloc_pages_exact(size, GFP_ATOMIC);
>  		}
>  	} while (!table && size > PAGE_SIZE && --log2qty);
>  

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 2.6.30] Doc: hashdist defaults on for 64bit
  2009-05-01 13:45       ` Hugh Dickins
@ 2009-05-01 14:29         ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:29 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, andi, davem, anton, netdev, linux-kernel, linux-mm

On Fri, May 01, 2009 at 02:45:43PM +0100, Hugh Dickins wrote:
> Update Doc: kernel boot parameter hashdist now defaults on for all 64bit NUMA.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Acked-by: Mel Gorman <mel@csn.ul.ie>

> ---
> 
>  Documentation/kernel-parameters.txt |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- 2.6.30-rc4/Documentation/kernel-parameters.txt	2009-04-30 06:39:30.000000000 +0100
> +++ linux/Documentation/kernel-parameters.txt	2009-05-01 14:08:56.000000000 +0100
> @@ -775,7 +775,7 @@ and is between 256 and 4096 characters.
>  
>  	hashdist=	[KNL,NUMA] Large hashes allocated during boot
>  			are distributed across NUMA nodes.  Defaults on
> -			for IA-64, off otherwise.
> +			for 64bit NUMA, off otherwise.
>  			Format: 0 | 1 (for off | on)
>  
>  	hcl=		[IA-64] SGI's Hardware Graph compatibility layer
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 14:28         ` Hugh Dickins
@ 2009-05-01 14:43           ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:43 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm

On Fri, May 01, 2009 at 03:28:47PM +0100, Hugh Dickins wrote:
> On Fri, 1 May 2009, Mel Gorman wrote:
> > On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> > > 
> > > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > > way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> > > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> > 
> > I don't believe so. __vmalloc() is only used when hashdist= is used
> > or on IA-64 (according to the documentation).
> 
> Doc out of date, hashdist's default "on" was extended to include
> x86_64 ages ago, and to all 64-bit in 2.6.30-rc.
> 
> > It is used in the case that the caller is
> > willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> > exchange for the pages being interleaved on different nodes so that access
> > to the hash table has average performance[*]
> > 
> > If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> > a mysterious performance regression report for a workload that depended on
> > the hash tables performance but that there was enough memory for the hash
> > table to be allocated with vmalloc() instead of alloc_pages_exact().
> > 
> > [*] I speculate that on non-IA64 NUMA machines that we see different
> >     performance for large filesystem benchmarks depending on whether we are
> >     running on the boot-CPU node or not depending on whether hashdist=
> >     is used or not.
> 
> Now that will be "32bit NUMA machines".  I was going to say that's
> a tiny sample, but I'm probably out of touch.  I thought NUMA-Q was
> on its way out, but see it still there in the tree.  And presumably
> nowadays there's a great swing to NUMA on Arm or netbooks or something.
> 

NUMA-Q can probably be ignored in terms of relevance, but SuperH can have
32-bit NUMA judging from its Kconfig, and my understanding is that NUMA is
important to sh in general. I don't know about ARM. Either way, the comment
for HASHDIST_DEFAULT saying that 32-bit NUMA may not have enough vmalloc()
space looks like a good enough reason to avoid dipping into it.
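
For reference, HASHDIST_DEFAULT is defined in include/linux/bootmem.h and,
after the change Hugh mentions, is roughly of this shape - the exact #if
guards here are from memory and may differ:

#if defined(CONFIG_NUMA) && defined(CONFIG_64BIT)
#define HASHDIST_DEFAULT 1
#else
#define HASHDIST_DEFAULT 0
#endif
extern int hashdist;		/* distribute hashes across NUMA nodes? */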

> > > I think that's a change we could make _if_ the large_system_hash
> > > users ever ask for it, but _not_ one we should make surreptitiously.
> > > 
> > 
> > If they want it, they'll have to ask with hashdist=.
> 
> That's quite a good argument for taking it out from under CONFIG_NUMA.
> The name "hashdist" would then be absurd, but we could delight our
> grandchildren with the story of how it came to be so named.
> 

What is the equivalent of "It was a dark and stormy night" for tales
about kernel hacking?

If it were pulled out from under CONFIG_NUMA, it would need to be
64-bit-only to avoid consuming too much vmalloc space, but we'd still have
no clue whether the performance gain from the larger hash table (if any)
would offset the cost of using vmalloc.

> > Somehow I doubt it's specified very often :/ .
> 
> Our intuitions match!  Which is probably why it got extended.
> 

No doubt.

> > 
> > Here is Take 2
> > 
> > ==== CUT HERE ====
> > 
> > Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2
> > 
> > alloc_large_system_hash() has logic for freeing pages at the end
> > of an excessively large power-of-two buffer that is a duplicate of what
> > is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
> > to use alloc_pages_exact().
> > 
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> 
> Acked-by: Hugh Dickins <hugh@veritas.com>
> 

Thanks.

> > --- 
> >  mm/page_alloc.c |   21 ++++-----------------
> >  1 file changed, 4 insertions(+), 17 deletions(-)
> > 
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 1b3da0f..8360d59 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
> >  		else if (hashdist)
> >  			table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> >  		else {
> > -			unsigned long order = get_order(size);
> > -
> > -			if (order < MAX_ORDER)
> > -				table = (void *)__get_free_pages(GFP_ATOMIC,
> > -								order);
> >  			/*
> >  			 * If bucketsize is not a power-of-two, we may free
> > -			 * some pages at the end of hash table.
> > +			 * some pages at the end of hash table which
> > +			 * alloc_pages_exact() automatically does
> >  			 */
> > -			if (table) {
> > -				unsigned long alloc_end = (unsigned long)table +
> > -						(PAGE_SIZE << order);
> > -				unsigned long used = (unsigned long)table +
> > -						PAGE_ALIGN(size);
> > -				split_page(virt_to_page(table), order);
> > -				while (used < alloc_end) {
> > -					free_page(used);
> > -					used += PAGE_SIZE;
> > -				}
> > -			}
> > +			if (get_order(size) < MAX_ORDER)
> > +				table = alloc_pages_exact(size, GFP_ATOMIC);
> >  		}
> >  	} while (!table && size > PAGE_SIZE && --log2qty);
> >  
> 

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 13:59         ` Christoph Lameter
@ 2009-05-01 15:09           ` Mel Gorman
  -1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 15:09 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
	linux-kernel, linux-mm

On Fri, May 01, 2009 at 09:59:35AM -0400, Christoph Lameter wrote:
> On Fri, 1 May 2009, Mel Gorman wrote:
> 
> > > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > > way, it won't be limited by MAX_ORDER.  Makes one wonder whether it
> > > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> >
> > I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> > (according to the documentation). It is used in the case that the caller is
> > willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> > exchange for the pages being interleaved on different nodes so that access
> > to the hash table has average performance[*]
> >
> > If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> > a mysterious performance regression report for a workload that depended on
> > the hash tables performance but that there was enough memory for the hash
> > table to be allocated with vmalloc() instead of alloc_pages_exact().
> 
> Can we fall back to a huge page mapped vmalloc? Like what the vmemmap code
> does? Then we also would not have MAX_ORDER limitations.
> 

Potentially yes, although it would appear that it would only help the
networking hash table. Dentry and inode are both using the bootmem allocator
to allocate their tables, so they can already exceed MAX_ORDER limitations.

But IIRC, the vmemmap code depends on architecture-specific help from
vmemmap_populate() to put the map in the right place, and that help is not
universally available. It's likely that something similar would be needed to
support large hash tables. I think the networking guys would need to be
fairly sure the larger table would make a big difference before tackling
the problem.

-- 
Mel Gorman
Part-time Phd Student                          Linux Technology Center
University of Limerick                         IBM Dublin Software Lab

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
  2009-05-01 15:09           ` Mel Gorman
@ 2009-05-01 15:14             ` Christoph Lameter
  -1 siblings, 0 replies; 35+ messages in thread
From: Christoph Lameter @ 2009-05-01 15:14 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
	linux-kernel, linux-mm

On Fri, 1 May 2009, Mel Gorman wrote:

> But IIRC, the vmemmap code depends on architecture-specific help from
> vmemmap_populate() to place the map in the right place and it's not universally
> available. It's likely that similar would be needed to support large
> hash tables. I think the networking guys would need to be fairly sure
> the larger table would make a big difference before tackling the
> problem.

The same function could be used. Fallback to vmap is always possible.


^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH 2.6.30] Doc: hashdist defaults on for 64bit
  2009-05-01 13:45       ` Hugh Dickins
@ 2009-05-01 17:20         ` David Miller
  -1 siblings, 0 replies; 35+ messages in thread
From: David Miller @ 2009-05-01 17:20 UTC (permalink / raw)
  To: hugh; +Cc: akpm, mel, andi, anton, netdev, linux-kernel, linux-mm

From: Hugh Dickins <hugh@veritas.com>
Date: Fri, 1 May 2009 14:45:43 +0100 (BST)

> Update Doc: kernel boot parameter hashdist now defaults on for all 64bit NUMA.
> 
> Signed-off-by: Hugh Dickins <hugh@veritas.com>

Acked-by: David S. Miller <davem@davemloft.net>

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2009-05-01 17:20 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-04-29 21:09 [PATCH mmotm] mm: alloc_large_system_hash check order Hugh Dickins
2009-04-29 21:09 ` Hugh Dickins
2009-04-29 21:28 ` Andrew Morton
2009-04-29 21:28   ` Andrew Morton
2009-05-01 13:40   ` Hugh Dickins
2009-05-01 13:40     ` Hugh Dickins
2009-05-01 13:45     ` [PATCH 2.6.30] Doc: hashdist defaults on for 64bit Hugh Dickins
2009-05-01 13:45       ` Hugh Dickins
2009-05-01 14:29       ` Mel Gorman
2009-05-01 14:29         ` Mel Gorman
2009-05-01 17:20       ` David Miller
2009-05-01 17:20         ` David Miller
2009-04-30  0:25 ` [PATCH mmotm] mm: alloc_large_system_hash check order David Miller
2009-04-30  0:25   ` David Miller
2009-04-30 13:25 ` Mel Gorman
2009-04-30 13:25   ` Mel Gorman
2009-05-01 11:30   ` Hugh Dickins
2009-05-01 11:30     ` Hugh Dickins
2009-05-01 11:46     ` Eric Dumazet
2009-05-01 11:46       ` Eric Dumazet
2009-05-01 12:05       ` Hugh Dickins
2009-05-01 14:00     ` Mel Gorman
2009-05-01 14:00       ` Mel Gorman
2009-05-01 13:59       ` Christoph Lameter
2009-05-01 13:59         ` Christoph Lameter
2009-05-01 15:09         ` Mel Gorman
2009-05-01 15:09           ` Mel Gorman
2009-05-01 15:14           ` Christoph Lameter
2009-05-01 15:14             ` Christoph Lameter
2009-05-01 14:12       ` Mel Gorman
2009-05-01 14:12         ` Mel Gorman
2009-05-01 14:28       ` Hugh Dickins
2009-05-01 14:28         ` Hugh Dickins
2009-05-01 14:43         ` Mel Gorman
2009-05-01 14:43           ` Mel Gorman
