* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
@ 2009-05-01 11:30 ` Hugh Dickins
0 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 11:30 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm
On Thu, 30 Apr 2009, Mel Gorman wrote:
> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> > On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> > to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> > order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
> > had better make its own check on the order.
> >
> > Signed-off-by: Hugh Dickins <hugh@veritas.com>
>
> Looks good
>
> Reviewed-by: Mel Gorman <mel@csn.ul.ie>
Thanks.
>
> As I was looking there, it seemed that alloc_large_system_hash() should be
> using alloc_pages_exact() instead of having its own "give back the spare
> pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
> the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
> where it may catch other unwary callers.
>
> How about adding the following patch on top of yours?
Well observed, yes indeed. In fact, it even looks as if, shock horror,
alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!
>
> ==== CUT HERE ====
> Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic
>
> alloc_large_system_hash() has logic for freeing unused pages at the end
> of a power-of-two-pages-sized buffer that is a duplicate of what is in
> alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
> alloc_pages_exact().
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> ---
> mm/page_alloc.c | 27 +++++----------------------
> 1 file changed, 5 insertions(+), 22 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1b3da0f..c94b140 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
> unsigned int order = get_order(size);
> unsigned long addr;
>
> + if (order >= MAX_ORDER)
> + return NULL;
> +
I suppose there could be an argument about whether we do or do not
want to skip the WARN_ON when it's in alloc_pages_exact().
I have no opinion on that; but DaveM's reply on large_system_hash
does make it clear that we're not interested in the warning there.
> addr = __get_free_pages(gfp_mask, order);
> if (addr) {
> unsigned long alloc_end = addr + (PAGE_SIZE << order);
> @@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> table = alloc_bootmem_nopanic(size);
> else if (hashdist)
> table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> - else {
> - unsigned long order = get_order(size);
> -
> - if (order < MAX_ORDER)
> - table = (void *)__get_free_pages(GFP_ATOMIC,
> - order);
> - /*
> - * If bucketsize is not a power-of-two, we may free
> - * some pages at the end of hash table.
> - */
That's actually a helpful comment: it's easy to think we're dealing
in powers of two here when we may not be. Maybe retain it with your
alloc_pages_exact call?
> - if (table) {
> - unsigned long alloc_end = (unsigned long)table +
> - (PAGE_SIZE << order);
> - unsigned long used = (unsigned long)table +
> - PAGE_ALIGN(size);
> - split_page(virt_to_page(table), order);
> - while (used < alloc_end) {
> - free_page(used);
> - used += PAGE_SIZE;
> - }
> - }
> - }
> + else
> + table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
Do you actually need that PAGE_ALIGN on the size?
> } while (!table && size > PAGE_SIZE && --log2qty);
>
> if (!table)
Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
way, it won't be limited by MAX_ORDER. Makes one wonder whether it
ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
I think that's a change we could make _if_ the large_system_hash
users ever ask for it, but _not_ one we should make surreptitiously.
Hugh
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href="mailto:dont@kvack.org">email@kvack.org</a>
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 11:30 ` Hugh Dickins
@ 2009-05-01 11:46 ` Eric Dumazet
0 siblings, 0 replies; 35+ messages in thread
From: Eric Dumazet @ 2009-05-01 11:46 UTC (permalink / raw)
To: Hugh Dickins
Cc: Mel Gorman, Andrew Morton, Andi Kleen, David Miller, netdev,
linux-kernel, linux-mm
Hugh Dickins wrote:
> On Thu, 30 Apr 2009, Mel Gorman wrote:
>> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
>>> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
>>> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
>>> order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
>>> had better make its own check on the order.
Well, I don't know why, since alloc_large_system_hash() already takes
care of retries, halving the size between tries.
>>>
>>> Signed-off-by: Hugh Dickins <hugh@veritas.com>
>> Looks good
>>
>> Reviewed-by: Mel Gorman <mel@csn.ul.ie>
>
> Thanks.
>
>> As I was looking there, it seemed that alloc_large_system_hash() should be
>> using alloc_pages_exact() instead of having its own "give back the spare
>> pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
>> the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
>> where it may catch other unwary callers.
>>
>> How about adding the following patch on top of yours?
>
> Well observed, yes indeed. In fact, it even looks as if, shock horror,
> alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
> Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!
As a matter of fact, I was planning to call my lawyer, so I'll reconsider
this and save some euros, thanks!
;)
It makes sense to use a helper function if it already exists, of course!
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 11:46 ` Eric Dumazet
@ 2009-05-01 12:05 ` Hugh Dickins
0 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 12:05 UTC (permalink / raw)
To: Eric Dumazet
Cc: Mel Gorman, Andrew Morton, Andi Kleen, David Miller, netdev,
linux-kernel, linux-mm
On Fri, 1 May 2009, Eric Dumazet wrote:
> Hugh Dickins wrote:
> > On Thu, 30 Apr 2009, Mel Gorman wrote:
> >> On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> >>> On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> >>> to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> >>> order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
> >>> had better make its own check on the order.
>
> Well, I don't know why, since alloc_large_system_hash() already takes
> care of retries, halving the size between tries.
Sorry, I wasn't clear: I just meant that if we keep that
WARN_ON_ONCE(order >= MAX_ORDER) in __alloc_pages_slowpath(),
then we need alloc_large_system_hash() to avoid the call to
__get_free_pages() in the order >= MAX_ORDER case,
precisely because we're happy with the way it halves and
falls back, so don't want a noisy warning; and now that we know
that it could give that warning, it would be a shame for the
_ONCE to suppress more interesting warnings later.
I certainly did not mean for alloc_large_system_hash() to fail
in the order >= MAX_ORDER case, nor did the patch do so.
Hugh
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 11:30 ` Hugh Dickins
@ 2009-05-01 14:00 ` Mel Gorman
0 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:00 UTC (permalink / raw)
To: Hugh Dickins
Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm
On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> On Thu, 30 Apr 2009, Mel Gorman wrote:
> > On Wed, Apr 29, 2009 at 10:09:48PM +0100, Hugh Dickins wrote:
> > > On an x86_64 with 4GB ram, tcp_init()'s call to alloc_large_system_hash(),
> > > to allocate tcp_hashinfo.ehash, is now triggering an mmotm WARN_ON_ONCE on
> > > order >= MAX_ORDER - it's hoping for order 11. alloc_large_system_hash()
> > > had better make its own check on the order.
> > >
> > > Signed-off-by: Hugh Dickins <hugh@veritas.com>
> >
> > Looks good
> >
> > Reviewed-by: Mel Gorman <mel@csn.ul.ie>
>
> Thanks.
>
> >
> > As I was looking there, it seemed that alloc_large_system_hash() should be
> > using alloc_pages_exact() instead of having its own "give back the spare
> > pages at the end of the buffer" logic. If alloc_pages_exact() was used, then
> > the check for an order >= MAX_ORDER can be pushed down to alloc_pages_exact()
> > where it may catch other unwary callers.
> >
> > How about adding the following patch on top of yours?
>
> Well observed, yes indeed. In fact, it even looks as if, shock horror,
> alloc_pages_exact() was _plagiarized_ from alloc_large_system_hash().
> Blessed be the GPL, I'm sure we can skip the lengthy lawsuits!
>
*phew*. We dodged a bullet there. I can put away my pitchfork and
flaming torch kit for another day.
> >
> > ==== CUT HERE ====
> > Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic
> >
> > alloc_large_system_hash() has logic for freeing unused pages at the end
> > of a power-of-two-pages-sized buffer that is a duplicate of what is in
> > alloc_pages_exact(). This patch converts alloc_large_system_hash() to use
> > alloc_pages_exact().
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
> > ---
> > mm/page_alloc.c | 27 +++++----------------------
> > 1 file changed, 5 insertions(+), 22 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 1b3da0f..c94b140 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -1942,6 +1942,9 @@ void *alloc_pages_exact(size_t size, gfp_t gfp_mask)
> > unsigned int order = get_order(size);
> > unsigned long addr;
> >
> > + if (order >= MAX_ORDER)
> > + return NULL;
> > +
>
> I suppose there could be an argument about whether we do or do not
> want to skip the WARN_ON when it's in alloc_pages_exact().
>
> I have no opinion on that; but DaveM's reply on large_system_hash
> does make it clear that we're not interested in the warning there.
>
That's a fair point. I've included a slightly modified patch below that
preserves the warning when alloc_pages_exact() is called with too large
an order.
It means we call get_order() twice, but on this boot-time path that hardly
matters. It's not even text bloat, as the __init code is freed after boot.
> > addr = __get_free_pages(gfp_mask, order);
> > if (addr) {
> > unsigned long alloc_end = addr + (PAGE_SIZE << order);
> > @@ -4755,28 +4758,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> > table = alloc_bootmem_nopanic(size);
> > else if (hashdist)
> > table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> > - else {
> > - unsigned long order = get_order(size);
> > -
> > - if (order < MAX_ORDER)
> > - table = (void *)__get_free_pages(GFP_ATOMIC,
> > - order);
> > - /*
> > - * If bucketsize is not a power-of-two, we may free
> > - * some pages at the end of hash table.
> > - */
>
> That's actually a helpful comment: it's easy to think we're dealing
> in powers of two here when we may not be. Maybe retain it with your
> alloc_pages_exact call?
>
Sure, it explains why alloc_pages_exact() is being used instead of
__get_free_pages() for those that are unfamiliar with the call.
> > - if (table) {
> > - unsigned long alloc_end = (unsigned long)table +
> > - (PAGE_SIZE << order);
> > - unsigned long used = (unsigned long)table +
> > - PAGE_ALIGN(size);
> > - split_page(virt_to_page(table), order);
> > - while (used < alloc_end) {
> > - free_page(used);
> > - used += PAGE_SIZE;
> > - }
> > - }
> > - }
> > + else
> > + table = alloc_pages_exact(PAGE_ALIGN(size), GFP_ATOMIC);
>
> Do you actually need that PAGE_ALIGN on the size?
>
Actually no. When I added it, it was because alloc_pages_exact() did not
obviously deal with unaligned sizes, but it does. Sorry about that.
> > } while (!table && size > PAGE_SIZE && --log2qty);
> >
> > if (!table)
>
> Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
I don't believe so. __vmalloc() is only used when hashdist= is specified or
on IA-64 (according to the documentation). It is used when the caller is
willing to accept the vmalloc() overhead (e.g. using base-page PTEs) in
exchange for having the pages interleaved across nodes, so that access to
the hash table has average performance[*].
If we automatically fell back to vmalloc(), I bet 2c we'd eventually get a
mysterious performance-regression report for a workload that depended on the
hash table's performance, on a machine where there happened to be enough
memory for the table to end up allocated with vmalloc() instead of
alloc_pages_exact().
[*] I speculate that on non-IA64 NUMA machines we see different performance
for large filesystem benchmarks depending both on whether we are running on
the boot-CPU node and on whether hashdist= is used.
> I think that's a change we could make _if_ the large_system_hash
> users ever ask for it, but _not_ one we should make surreptitiously.
>
If they want it, they'll have to ask with hashdist=. Somehow I doubt it's
specified very often :/ .
Here is Take 2
==== CUT HERE ====
Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2
alloc_large_system_hash() has logic for freeing pages at the end
of an excessively large power-of-two buffer that is a duplicate of what
is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
to use alloc_pages_exact().
Signed-off-by: Mel Gorman <mel@csn.ul.ie>
---
mm/page_alloc.c | 21 ++++-----------------
1 file changed, 4 insertions(+), 17 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1b3da0f..8360d59 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
else if (hashdist)
table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
else {
- unsigned long order = get_order(size);
-
- if (order < MAX_ORDER)
- table = (void *)__get_free_pages(GFP_ATOMIC,
- order);
/*
* If bucketsize is not a power-of-two, we may free
- * some pages at the end of hash table.
+ * some pages at the end of hash table which
+ * alloc_pages_exact() automatically does
*/
- if (table) {
- unsigned long alloc_end = (unsigned long)table +
- (PAGE_SIZE << order);
- unsigned long used = (unsigned long)table +
- PAGE_ALIGN(size);
- split_page(virt_to_page(table), order);
- while (used < alloc_end) {
- free_page(used);
- used += PAGE_SIZE;
- }
- }
+ if (get_order(size) < MAX_ORDER)
+ table = alloc_pages_exact(size, GFP_ATOMIC);
}
} while (!table && size > PAGE_SIZE && --log2qty);
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 14:00 ` Mel Gorman
@ 2009-05-01 13:59 ` Christoph Lameter
0 siblings, 0 replies; 35+ messages in thread
From: Christoph Lameter @ 2009-05-01 13:59 UTC (permalink / raw)
To: Mel Gorman
Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
linux-kernel, linux-mm
On Fri, 1 May 2009, Mel Gorman wrote:
> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
>
> I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> (according to the documentation). It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
>
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().
Can we fall back to a huge page mapped vmalloc? Like what the vmemmap code
does? Then we also would not have MAX_ORDER limitations.
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 13:59 ` Christoph Lameter
@ 2009-05-01 15:09 ` Mel Gorman
0 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 15:09 UTC (permalink / raw)
To: Christoph Lameter
Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
linux-kernel, linux-mm
On Fri, May 01, 2009 at 09:59:35AM -0400, Christoph Lameter wrote:
> On Fri, 1 May 2009, Mel Gorman wrote:
>
> > > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > > way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> > > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> >
> > I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> > (according to the documentation). It is used in the case that the caller is
> > willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> > exchange for the pages being interleaved on different nodes so that access
> > to the hash table has average performance[*]
> >
> > If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> > a mysterious performance regression report for a workload that depended on
> > the hash tables performance but that there was enough memory for the hash
> > table to be allocated with vmalloc() instead of alloc_pages_exact().
>
> Can we fall back to a huge page mapped vmalloc? Like what the vmemmap code
> does? Then we also would not have MAX_ORDER limitations.
>
Potentially yes, although it would appear to help only the networking hash
table. Dentry and inode both use the bootmem allocator for their tables, so
they can already exceed MAX_ORDER limitations.
But IIRC, the vmemmap code depends on architecture-specific help from
vmemmap_populate() to place the map appropriately, and that is not
universally available. Something similar would likely be needed to support
large hash tables. I think the networking guys would need to be fairly sure
the larger table would make a big difference before tackling the problem.
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 15:09 ` Mel Gorman
@ 2009-05-01 15:14 ` Christoph Lameter
-1 siblings, 0 replies; 35+ messages in thread
From: Christoph Lameter @ 2009-05-01 15:14 UTC (permalink / raw)
To: Mel Gorman
Cc: Hugh Dickins, Andrew Morton, Andi Kleen, David Miller, netdev,
linux-kernel, linux-mm
On Fri, 1 May 2009, Mel Gorman wrote:
> But IIRC, the vmemmap code depends on architecture-specific help from
> vmemmap_populate() to place the map in the right place and it's not universally
> available. It's likely that similar would be needed to support large
> hash tables. I think the networking guys would need to be fairly sure
> the larger table would make a big difference before tackling the
> problem.
The same function could be used. Fallback to vmap is always possible.
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 14:00 ` Mel Gorman
@ 2009-05-01 14:12 ` Mel Gorman
-1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:12 UTC (permalink / raw)
To: Hugh Dickins
Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm
On Fri, May 01, 2009 at 03:00:15PM +0100, Mel Gorman wrote:
> > <SNIP>
> >
> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
>
> I don't believe so. __vmalloc() is only used when hashdist= is used or on IA-64
> (according to the documentation).
I was foolish to believe the documentation. vmalloc() will be used by
default on 64-bit NUMA, not just IA-64.
> It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
>
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().
>
I think this point still holds. On a non-NUMA machine, we don't want to fall
back to using vmalloc() just because the machine happened to have enough
memory. It's really tricky to know for sure though - will there be enough
performance benefits from having a bigger hash table to offset using base
pages to back it? It's probably unknowable because it depends on the exact
hardware and how the hash table is being used.
> [*] I speculate that on non-IA64 NUMA machines that we see different
> performance for large filesystem benchmarks depending on whether we are
> running on the boot-CPU node or not depending on whether hashdist=
> is used or not.
This speculation is junk because using vmalloc() for hash tables is not
specific to IA-64.
> <SNIP>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 14:00 ` Mel Gorman
@ 2009-05-01 14:28 ` Hugh Dickins
-1 siblings, 0 replies; 35+ messages in thread
From: Hugh Dickins @ 2009-05-01 14:28 UTC (permalink / raw)
To: Mel Gorman
Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm
On Fri, 1 May 2009, Mel Gorman wrote:
> On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> >
> > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
>
> I don't believe so. __vmalloc() is only used when hashdist= is used
> or on IA-64 (according to the documentation).
Doc out of date, hashdist's default "on" was extended to include
x86_64 ages ago, and to all 64-bit in 2.6.30-rc.
> It is used in the case that the caller is
> willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> exchange for the pages being interleaved on different nodes so that access
> to the hash table has average performance[*]
>
> If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> a mysterious performance regression report for a workload that depended on
> the hash tables performance but that there was enough memory for the hash
> table to be allocated with vmalloc() instead of alloc_pages_exact().
>
> [*] I speculate that on non-IA64 NUMA machines that we see different
> performance for large filesystem benchmarks depending on whether we are
> running on the boot-CPU node or not depending on whether hashdist=
> is used or not.
Now that will be "32bit NUMA machines". I was going to say that's
a tiny sample, but I'm probably out of touch. I thought NUMA-Q was
on its way out, but see it still there in the tree. And presumably
nowadays there's a great swing to NUMA on Arm or netbooks or something.
>
> > I think that's a change we could make _if_ the large_system_hash
> > users ever ask for it, but _not_ one we should make surreptitiously.
> >
>
> If they want it, they'll have to ask with hashdist=.
That's quite a good argument for taking it out from under CONFIG_NUMA.
The name "hashdist" would then be absurd, but we could delight our
grandchildren with the story of how it came to be so named.
> Somehow I doubt it's specified very often :/ .
Our intuitions match! Which is probably why it got extended.
>
> Here is Take 2
>
> ==== CUT HERE ====
>
> Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2
>
> alloc_large_system_hash() has logic for freeing pages at the end
> of an excessively large power-of-two buffer that is a duplicate of what
> is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
> to use alloc_pages_exact().
>
> Signed-off-by: Mel Gorman <mel@csn.ul.ie>
Acked-by: Hugh Dickins <hugh@veritas.com>
> ---
> mm/page_alloc.c | 21 ++++-----------------
> 1 file changed, 4 insertions(+), 17 deletions(-)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 1b3da0f..8360d59 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
> else if (hashdist)
> table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> else {
> - unsigned long order = get_order(size);
> -
> - if (order < MAX_ORDER)
> - table = (void *)__get_free_pages(GFP_ATOMIC,
> - order);
> /*
> * If bucketsize is not a power-of-two, we may free
> - * some pages at the end of hash table.
> + * some pages at the end of hash table which
> + * alloc_pages_exact() automatically does
> */
> - if (table) {
> - unsigned long alloc_end = (unsigned long)table +
> - (PAGE_SIZE << order);
> - unsigned long used = (unsigned long)table +
> - PAGE_ALIGN(size);
> - split_page(virt_to_page(table), order);
> - while (used < alloc_end) {
> - free_page(used);
> - used += PAGE_SIZE;
> - }
> - }
> + if (get_order(size) < MAX_ORDER)
> + table = alloc_pages_exact(size, GFP_ATOMIC);
> }
> } while (!table && size > PAGE_SIZE && --log2qty);
>
* Re: [PATCH mmotm] mm: alloc_large_system_hash check order
2009-05-01 14:28 ` Hugh Dickins
@ 2009-05-01 14:43 ` Mel Gorman
-1 siblings, 0 replies; 35+ messages in thread
From: Mel Gorman @ 2009-05-01 14:43 UTC (permalink / raw)
To: Hugh Dickins
Cc: Andrew Morton, Andi Kleen, David Miller, netdev, linux-kernel, linux-mm
On Fri, May 01, 2009 at 03:28:47PM +0100, Hugh Dickins wrote:
> On Fri, 1 May 2009, Mel Gorman wrote:
> > On Fri, May 01, 2009 at 12:30:03PM +0100, Hugh Dickins wrote:
> > >
> > > Andrew noticed another oddity: that if it goes the hashdist __vmalloc()
> > > way, it won't be limited by MAX_ORDER. Makes one wonder whether it
> > > ought to fall back to __vmalloc() if the alloc_pages_exact() fails.
> >
> > I don't believe so. __vmalloc() is only used when hashdist= is used
> > or on IA-64 (according to the documentation).
>
> Doc out of date, hashdist's default "on" was extended to include
> x86_64 ages ago, and to all 64-bit in 2.6.30-rc.
>
> > It is used in the case that the caller is
> > willing to deal with the vmalloc() overhead (e.g. using base page PTEs) in
> > exchange for the pages being interleaved on different nodes so that access
> > to the hash table has average performance[*]
> >
> > If we automatically fell back to vmalloc(), I bet 2c we'd eventually get
> > a mysterious performance regression report for a workload that depended on
> > the hash tables performance but that there was enough memory for the hash
> > table to be allocated with vmalloc() instead of alloc_pages_exact().
> >
> > [*] I speculate that on non-IA64 NUMA machines that we see different
> > performance for large filesystem benchmarks depending on whether we are
> > running on the boot-CPU node or not depending on whether hashdist=
> > is used or not.
>
> Now that will be "32bit NUMA machines". I was going to say that's
> a tiny sample, but I'm probably out of touch. I thought NUMA-Q was
> on its way out, but see it still there in the tree. And presumably
> nowadays there's a great swing to NUMA on Arm or netbooks or something.
>
NUMA-Q can probably be ignored in terms of relevance, but SuperH can have
32-bit NUMA judging from its Kconfig, and my understanding is that NUMA is
important to sh in general. I don't know about ARM. Either way, the comment
for HASHDIST_DEFAULT saying that 32-bit NUMA may not have enough vmalloc()
space looks like a good enough reason to avoid dipping into it.
> > > I think that's a change we could make _if_ the large_system_hash
> > > users ever ask for it, but _not_ one we should make surreptitiously.
> > >
> >
> > If they want it, they'll have to ask with hashdist=.
>
> That's quite a good argument for taking it out from under CONFIG_NUMA.
> The name "hashdist" would then be absurd, but we could delight our
> grandchildren with the story of how it came to be so named.
>
What is the equivalent of "It was a dark and stormy night" for tales
about kernel hacking?
If it were pulled out from under CONFIG_NUMA, it would need to be 64-bit-only
to avoid consuming too much vmalloc space, but we'd still have no clue whether
the performance gain (if any) from a larger hash table would offset the cost
of backing it with vmalloc.
> > Somehow I doubt it's specified very often :/ .
>
> Our intuitions match! Which is probably why it got extended.
>
No doubt.
> >
> > Here is Take 2
> >
> > ==== CUT HERE ====
> >
> > Use alloc_pages_exact() in alloc_large_system_hash() to avoid duplicated logic V2
> >
> > alloc_large_system_hash() has logic for freeing pages at the end
> > of an excessively large power-of-two buffer that is a duplicate of what
> > is in alloc_pages_exact(). This patch converts alloc_large_system_hash()
> > to use alloc_pages_exact().
> >
> > Signed-off-by: Mel Gorman <mel@csn.ul.ie>
>
> Acked-by: Hugh Dickins <hugh@veritas.com>
>
Thanks.
> > ---
> > mm/page_alloc.c | 21 ++++-----------------
> > 1 file changed, 4 insertions(+), 17 deletions(-)
> >
> > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > index 1b3da0f..8360d59 100644
> > --- a/mm/page_alloc.c
> > +++ b/mm/page_alloc.c
> > @@ -4756,26 +4756,13 @@ void *__init alloc_large_system_hash(const char *tablename,
> > else if (hashdist)
> > table = __vmalloc(size, GFP_ATOMIC, PAGE_KERNEL);
> > else {
> > - unsigned long order = get_order(size);
> > -
> > - if (order < MAX_ORDER)
> > - table = (void *)__get_free_pages(GFP_ATOMIC,
> > - order);
> > /*
> > * If bucketsize is not a power-of-two, we may free
> > - * some pages at the end of hash table.
> > + * some pages at the end of hash table which
> > + * alloc_pages_exact() automatically does
> > */
> > - if (table) {
> > - unsigned long alloc_end = (unsigned long)table +
> > - (PAGE_SIZE << order);
> > - unsigned long used = (unsigned long)table +
> > - PAGE_ALIGN(size);
> > - split_page(virt_to_page(table), order);
> > - while (used < alloc_end) {
> > - free_page(used);
> > - used += PAGE_SIZE;
> > - }
> > - }
> > + if (get_order(size) < MAX_ORDER)
> > + table = alloc_pages_exact(size, GFP_ATOMIC);
> > }
> > } while (!table && size > PAGE_SIZE && --log2qty);
> >
>
--
Mel Gorman
Part-time Phd Student Linux Technology Center
University of Limerick IBM Dublin Software Lab