* [v4 0/1] mm: Adaptive hash table scaling
@ 2017-05-20 17:06 ` Pavel Tatashin
  0 siblings, 0 replies; 20+ messages in thread
From: Pavel Tatashin @ 2017-05-20 17:06 UTC (permalink / raw)
  To: akpm, linux-kernel, linux-mm, mhocko

Changes from v3 to v4:
- Fixed a 32-bit overflow issue (adapt is now ull instead of ul)
- Applied changes suggested by Michal Hocko: use high_limit instead of
  a new flag to determine whether this new scaling should be used.

Pavel Tatashin (1):
  mm: Adaptive hash table scaling

 mm/page_alloc.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

-- 
2.13.0

^ permalink raw reply	[flat|nested] 20+ messages in thread

* [v4 1/1] mm: Adaptive hash table scaling
  2017-05-20 17:06 ` Pavel Tatashin
@ 2017-05-20 17:06   ` Pavel Tatashin
  -1 siblings, 0 replies; 20+ messages in thread
From: Pavel Tatashin @ 2017-05-20 17:06 UTC (permalink / raw)
  To: akpm, linux-kernel, linux-mm, mhocko

Allow hash tables to scale with memory, but at a slower pace: when no
high_limit is provided, every time memory quadruples the sizes of hash
tables only double instead of quadrupling as well. The algorithm only
starts working once memory size reaches a certain point, currently set
to 64G.

This is an example of the dentry hash table size, before and after, for
various memory configurations (an illustrative sketch follows the
table):

MEMORY    SCALE        HASH_SIZE
        old    new    old     new
    8G  13     13      8M      8M
   16G  13     13     16M     16M
   32G  13     13     32M     32M
   64G  13     13     64M     64M
  128G  13     14    128M     64M
  256G  13     14    256M    128M
  512G  13     15    512M    128M
 1024G  13     15   1024M    256M
 2048G  13     16   2048M    256M
 4096G  13     16   4096M    512M
 8192G  13     17   8192M    512M
16384G  13     17  16384M   1024M
32768G  13     18  32768M   1024M
65536G  13     18  65536M   2048M
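
For illustration only (not part of the patch), here is a minimal
userspace sketch that reproduces the "new" column above. The 4K page
size, the 8-byte bucket size and the base scale of 13 are assumptions
taken from the dentry-cache example:

/*
 * Sketch: derive the adaptive scale and resulting hash size for the
 * dentry cache.  Assumes 4K pages and 8-byte buckets (64-bit).
 */
#include <stdio.h>

int main(void)
{
	/* ADAPT_SCALE_BASE expressed in pages */
	const unsigned long long base_pages = (64ULL << 30) >> 12;
	unsigned long long mem;

	for (mem = 8ULL << 30; mem <= 65536ULL << 30; mem <<= 1) {
		unsigned long long entries = mem >> 12;	/* one per page */
		unsigned long long adapt;
		int scale = 13;			/* dentry cache scale */

		/* every quadrupling of memory past 64G bumps scale by one */
		for (adapt = base_pages; adapt < entries; adapt <<= 2)
			scale++;
		/* limit to 1 bucket per 2^scale bytes of memory */
		entries >>= (scale - 12);
		printf("%6lluG  scale=%d  hash=%lluM\n",
		       mem >> 30, scale, (entries * 8) >> 20);
	}
	return 0;
}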

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 mm/page_alloc.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 8afa63e81e73..15bba5c325a5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7169,6 +7169,17 @@ static unsigned long __init arch_reserved_kernel_pages(void)
 #endif
 
 /*
+ * Adaptive scale is meant to reduce sizes of hash tables on large memory
+ * machines. As memory size is increased the scale is also increased but at
+ * slower pace.  Starting from ADAPT_SCALE_BASE (64G), every time memory
+ * quadruples the scale is increased by one, which means the size of hash table
+ * only doubles, instead of quadrupling as well.
+ */
+#define ADAPT_SCALE_BASE	(64ull << 30)
+#define ADAPT_SCALE_SHIFT	2
+#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)
+
+/*
  * allocate a large system hash table from bootmem
  * - it is assumed that the hash table must contain an exact power-of-2
  *   quantity of entries
@@ -7199,6 +7210,14 @@ void *__init alloc_large_system_hash(const char *tablename,
 		if (PAGE_SHIFT < 20)
 			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
 
+		if (!high_limit) {
+			unsigned long long adapt;
+
+			for (adapt = ADAPT_SCALE_NPAGES; adapt < numentries;
+			     adapt <<= ADAPT_SCALE_SHIFT)
+				scale++;
+		}
+
 		/* limit to 1 bucket per 2^scale bytes of low memory */
 		if (scale > PAGE_SHIFT)
 			numentries >>= (scale - PAGE_SHIFT);
-- 
2.13.0

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-20 17:06   ` Pavel Tatashin
@ 2017-05-21  2:07     ` Andi Kleen
  -1 siblings, 0 replies; 20+ messages in thread
From: Andi Kleen @ 2017-05-21  2:07 UTC (permalink / raw)
  To: Pavel Tatashin; +Cc: akpm, linux-kernel, linux-mm, mhocko

Pavel Tatashin <pasha.tatashin@oracle.com> writes:

> Allow hash tables to scale with memory but at slower pace, when HASH_ADAPT
> is provided every time memory quadruples the sizes of hash tables will only
> double instead of quadrupling as well. This algorithm starts working only
> when memory size reaches a certain point, currently set to 64G.
>
> This is example of dentry hash table size, before and after four various
> memory configurations:

IMHO the scale is still too aggressive. I find it very unlikely
that a 1TB machine really needs 256MB of hash table, because the
number of files in use is unlikely to scale directly with memory.

Perhaps it should just be capped at some large size, e.g. 32M.

-Andi

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-21  2:07     ` Andi Kleen
@ 2017-05-21 12:58       ` Pasha Tatashin
  -1 siblings, 0 replies; 20+ messages in thread
From: Pasha Tatashin @ 2017-05-21 12:58 UTC (permalink / raw)
  To: Andi Kleen; +Cc: akpm, linux-kernel, linux-mm, mhocko

Hi Andi,

Thank you for looking at this. As I mentioned earlier, I would not want
to impose a cap. However, if you think that, for example, the dcache
needs a cap, there is already a mechanism for that via the high_limit
argument, so the client can be changed to provide that cap (see the
sketch below). This particular patch addresses the scaling problem for
everyone by making hash tables scale with memory at a slower pace.
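
For reference, a minimal sketch of what such a per-client cap could
look like; the call mirrors dcache_init_early() in fs/dcache.c of that
era, and the 32M figure is only an illustrative assumption:

	/*
	 * Sketch only: cap the dentry hash at ~32M by passing a non-zero
	 * high_limit (expressed in entries) as the last argument.
	 */
	dentry_hashtable =
		alloc_large_system_hash("Dentry cache",
					sizeof(struct hlist_bl_head),
					dhash_entries,
					13,
					HASH_EARLY,
					&d_hash_shift,
					&d_hash_mask,
					0,
					(32UL << 20) / sizeof(struct hlist_bl_head));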

Thank you,
Pasha

On 05/20/2017 10:07 PM, Andi Kleen wrote:
> Pavel Tatashin <pasha.tatashin@oracle.com> writes:
> 
>> Allow hash tables to scale with memory but at slower pace, when HASH_ADAPT
>> is provided every time memory quadruples the sizes of hash tables will only
>> double instead of quadrupling as well. This algorithm starts working only
>> when memory size reaches a certain point, currently set to 64G.
>>
>> This is example of dentry hash table size, before and after four various
>> memory configurations:
> 
> IMHO the scale is still too aggressive. I find it very unlikely
> that a 1TB machine really needs 256MB of hash table because
> number of used files are unlikely to directly scale with memory.
> 
> Perhaps should just cap it at some large size, e.g. 32M
> 
> -Andi
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
> 

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-21 12:58       ` Pasha Tatashin
@ 2017-05-21 16:35         ` Andi Kleen
  -1 siblings, 0 replies; 20+ messages in thread
From: Andi Kleen @ 2017-05-21 16:35 UTC (permalink / raw)
  To: Pasha Tatashin; +Cc: Andi Kleen, akpm, linux-kernel, linux-mm, mhocko

On Sun, May 21, 2017 at 08:58:25AM -0400, Pasha Tatashin wrote:
> Hi Andi,
> 
> Thank you for looking at this. I mentioned earlier, I would not want to
> impose a cap. However, if you think that for example dcache needs a cap,
> there is already a mechanism for that via high_limit argument, so the client

Lots of arguments are not the solution. Today this only affects a few
high-end systems, but we'll see many more large-memory systems in the
future. We don't want all these users to either waste their memory or
apply magic arguments.

> can be changed to provide that cap. However, this particular patch addresses
> scaling problem for everyone by making it scale with memory at a slower
> pace.

Yes, your patch goes in the right direction and should be applied.

It just could be even more aggressive.

Long term, all these hash tables probably need to be converted to
rhashtable so they can resize dynamically (see the sketch below).
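
For reference, a minimal sketch of what an rhashtable-backed table
looks like; the object layout and parameters here are illustrative
only, not a proposal for converting the dcache:

#include <linux/rhashtable.h>

struct my_obj {
	u32 key;
	struct rhash_head node;		/* linkage used by rhashtable */
};

static const struct rhashtable_params my_params = {
	.key_len	= sizeof(u32),
	.key_offset	= offsetof(struct my_obj, key),
	.head_offset	= offsetof(struct my_obj, node),
	.automatic_shrinking = true,	/* table shrinks as well as grows */
};

static struct rhashtable my_table;

static int my_table_init(void)
{
	/* the table starts small and resizes itself under load */
	return rhashtable_init(&my_table, &my_params);
}

static int my_obj_add(struct my_obj *obj)
{
	return rhashtable_insert_fast(&my_table, &obj->node, my_params);
}

static struct my_obj *my_obj_find(u32 key)
{
	return rhashtable_lookup_fast(&my_table, &key, my_params);
}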

-Andi

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-20 17:06   ` Pavel Tatashin
@ 2017-05-22  6:17     ` Michael Ellerman
  -1 siblings, 0 replies; 20+ messages in thread
From: Michael Ellerman @ 2017-05-22  6:17 UTC (permalink / raw)
  To: Pavel Tatashin, akpm, linux-kernel, linux-mm, mhocko

Pavel Tatashin <pasha.tatashin@oracle.com> writes:
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 8afa63e81e73..15bba5c325a5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -7169,6 +7169,17 @@ static unsigned long __init arch_reserved_kernel_pages(void)
>  #endif
>  
>  /*
> + * Adaptive scale is meant to reduce sizes of hash tables on large memory
> + * machines. As memory size is increased the scale is also increased but at
> + * slower pace.  Starting from ADAPT_SCALE_BASE (64G), every time memory
> + * quadruples the scale is increased by one, which means the size of hash table
> + * only doubles, instead of quadrupling as well.
> + */
> +#define ADAPT_SCALE_BASE	(64ull << 30)
> +#define ADAPT_SCALE_SHIFT	2
> +#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)
> +
> +/*
>   * allocate a large system hash table from bootmem
>   * - it is assumed that the hash table must contain an exact power-of-2
>   *   quantity of entries
> @@ -7199,6 +7210,14 @@ void *__init alloc_large_system_hash(const char *tablename,
>  		if (PAGE_SHIFT < 20)
>  			numentries = round_up(numentries, (1<<20)/PAGE_SIZE);
>  
> +		if (!high_limit) {
> +			unsigned long long adapt;
> +
> +			for (adapt = ADAPT_SCALE_NPAGES; adapt < numentries;
> +			     adapt <<= ADAPT_SCALE_SHIFT)
> +				scale++;
> +		}

This still doesn't work for me. The scale++ is overflowing according to
UBSAN (line 7221).

It looks like numentries is 194560.

00000950  68 0a 50 49 44 20 68 61  73 68 20 74 61 62 6c 65  |h.PID hash table|
00000960  20 65 6e 74 72 69 65 73  3a 20 34 30 39 36 20 28  | entries: 4096 (|
00000970  6f 72 64 65 72 3a 20 32  2c 20 31 36 33 38 34 20  |order: 2, 16384 |
00000980  62 79 74 65 73 29 0a 61  6c 6c 6f 63 5f 6c 61 72  |bytes).alloc_lar|
00000990  67 65 5f 73 79 73 74 65  6d 5f 68 61 73 68 3a 20  |ge_system_hash: |
000009a0  6e 75 6d 65 6e 74 72 69  65 73 20 31 39 34 35 36  |numentries 19456|
000009b0  30 0a 61 6c 6c 6f 63 5f  6c 61 72 67 65 5f 73 79  |0.alloc_large_sy|
000009c0  73 74 65 6d 5f 68 61 73  68 3a 20 61 64 61 70 74  |stem_hash: adapt|
000009d0  20 30 0a 3d 3d 3d 3d 3d  3d 3d 3d 3d 3d 3d 3d 3d  | 0.=============|
000009e0  3d 3d 3d 3d 3d 3d 3d 3d  3d 3d 3d 3d 3d 3d 3d 3d  |================|
*
00000a20  3d 3d 3d 0a 55 42 53 41  4e 3a 20 55 6e 64 65 66  |===.UBSAN: Undef|
00000a30  69 6e 65 64 20 62 65 68  61 76 69 6f 75 72 20 69  |ined behaviour i|
00000a40  6e 20 2e 2e 2f 6d 6d 2f  70 61 67 65 5f 61 6c 6c  |n ../mm/page_all|
00000a50  6f 63 2e 63 3a 37 32 32  31 3a 31 30 0a 73 69 67  |oc.c:7221:10.sig|
00000a60  6e 65 64 20 69 6e 74 65  67 65 72 20 6f 76 65 72  |ned integer over|
00000a70  66 6c 6f 77 3a 0a 32 31  34 37 34 38 33 36 34 37  |flow:.2147483647|
00000a80  20 2b 20 31 20 63 61 6e  6e 6f 74 20 62 65 20 72  | + 1 cannot be r|
00000a90  65 70 72 65 73 65 6e 74  65 64 20 69 6e 20 74 79  |epresented in ty|
00000aa0  70 65 20 27 69 6e 74 20  5b 34 5d 27 0a 43 50 55  |pe 'int [4]'.CPU|
00000ab0  3a 20 30 20 50 49 44 3a  20 30 20 43 6f 6d 6d 3a  |: 0 PID: 0 Comm:|
00000ac0  20 73 77 61 70 70 65 72  20 4e 6f 74 20 74 61 69  | swapper Not tai|
00000ad0  6e 74 65 64 20 34 2e 31  32 2e 30 2d 72 63 31 2d  |nted 4.12.0-rc1-|
00000ae0  67 63 63 2d 36 2e 33 2e  31 2d 30 30 31 38 32 2d  |gcc-6.3.1-00182-|
00000af0  67 36 37 64 30 36 38 37  32 32 34 61 39 2d 64 69  |g67d0687224a9-di|
00000b00  72 74 79 20 23 38 0a 43  61 6c 6c 20 54 72 61 63  |rty #8.Call Trac|
00000b10  65 3a 0a 5b 63 30 65 30  35 65 61 30 5d 20 5b 63  |e:.[c0e05ea0] [c|
00000b20  30 34 37 38 38 63 34 5d  20 75 62 73 61 6e 5f 65  |04788c4] ubsan_e|
00000b30  70 69 6c 6f 67 75 65 2b  30 78 31 38 2f 30 78 34  |pilogue+0x18/0x4|
00000b40  63 20 28 75 6e 72 65 6c  69 61 62 6c 65 29 0a 5b  |c (unreliable).[|
00000b50  63 30 65 30 35 65 62 30  5d 20 5b 63 30 34 37 39  |c0e05eb0] [c0479|
00000b60  32 36 30 5d 20 68 61 6e  64 6c 65 5f 6f 76 65 72  |260] handle_over|
00000b70  66 6c 6f 77 2b 30 78 62  63 2f 30 78 64 63 0a 5b  |flow+0xbc/0xdc.[|
00000b80  63 30 65 30 35 66 33 30  5d 20 5b 63 30 61 62 39  |c0e05f30] [c0ab9|
00000b90  38 66 38 5d 20 61 6c 6c  6f 63 5f 6c 61 72 67 65  |8f8] alloc_large|
00000ba0  5f 73 79 73 74 65 6d 5f  68 61 73 68 2b 30 78 65  |_system_hash+0xe|
00000bb0  34 2f 30 78 35 65 63 0a  5b 63 30 65 30 35 66 39  |4/0x5ec.[c0e05f9|
00000bc0  30 5d 20 5b 63 30 61 62  65 30 30 30 5d 20 76 66  |0] [c0abe000] vf|
00000bd0  73 5f 63 61 63 68 65 73  5f 69 6e 69 74 5f 65 61  |s_caches_init_ea|
00000be0  72 6c 79 2b 30 78 34 63  2f 30 78 36 34 0a 5b 63  |rly+0x4c/0x64.[c|
00000bf0  30 65 30 35 66 62 30 5d  20 5b 63 30 61 61 35 32  |0e05fb0] [c0aa52|
00000c00  31 38 5d 20 73 74 61 72  74 5f 6b 65 72 6e 65 6c  |18] start_kernel|
00000c10  2b 30 78 32 33 63 2f 30  78 33 63 34 0a 5b 63 30  |+0x23c/0x3c4.[c0|
00000c20  65 30 35 66 66 30 5d 20  5b 30 30 30 30 33 34 34  |e05ff0] [0000344|
00000c30  63 5d 20 30 78 33 34 34  63 0a 3d 3d 3d 3d 3d 3d  |c] 0x344c.======|
00000c40  3d 3d 3d 3d 3d 3d 3d 3d  3d 3d 3d 3d 3d 3d 3d 3d  |================|

cheers

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-20 17:06   ` Pavel Tatashin
@ 2017-05-22  9:29     ` Michal Hocko
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2017-05-22  9:29 UTC (permalink / raw)
  To: Pavel Tatashin; +Cc: akpm, linux-kernel, linux-mm

On Sat 20-05-17 13:06:53, Pavel Tatashin wrote:
[...]
>  /*
> + * Adaptive scale is meant to reduce sizes of hash tables on large memory
> + * machines. As memory size is increased the scale is also increased but at
> + * slower pace.  Starting from ADAPT_SCALE_BASE (64G), every time memory
> + * quadruples the scale is increased by one, which means the size of hash table
> + * only doubles, instead of quadrupling as well.
> + */
> +#define ADAPT_SCALE_BASE	(64ull << 30)

I have only noticed this email today because my incoming emails have not
been syncing since Friday. But this is _definitely_ not the right approach.
64G for 32b systems is _way_ off. We have only ~1G for the kernel. I've
already proposed scaling up to 32M for 32b systems and Andi seems to be
suggesting the same. So can we fold or apply the following instead?
---
From 6a17a022e82ac715a08a9f4707c1c29a58a2225b Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@suse.com>
Date: Mon, 22 May 2017 10:45:20 +0200
Subject: [PATCH] mm: fix adaptive hash table sizing for 32b systems

Guenter Roeck has noticed that many qemu boot tests on 32b systems hang
and bisected it to "mm: drop HASH_ADAPT". The patch itself only
makes HASH_ADAPT unconditional for all users, which shouldn't
matter. Except it does, because ADAPT_SCALE_BASE is 64GB, which is
outside the 32b word size, so the adapt_scale loop will never terminate
and HASH_EARLY allocations lock up with the patch, while we do not even
try to use the new hash adapt code because the early allocation
succeeded.

Fix this by reducing ADAPT_SCALE_BASE down to 32MB on 32b machines.

Fixes: mm: adaptive hash table scaling
Signed-off-by: Michal Hocko <mhocko@suse.com>
---
 mm/page_alloc.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a26e19c3e1ff..70c5fc1fb89a 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -7174,11 +7174,15 @@ static unsigned long __init arch_reserved_kernel_pages(void)
 /*
  * Adaptive scale is meant to reduce sizes of hash tables on large memory
  * machines. As memory size is increased the scale is also increased but at
- * slower pace.  Starting from ADAPT_SCALE_BASE (64G), every time memory
- * quadruples the scale is increased by one, which means the size of hash table
- * only doubles, instead of quadrupling as well.
+ * slower pace.  Starting from ADAPT_SCALE_BASE (64G on 64b systems and 32M
+ * on 32b), every time memory quadruples the scale is increased by one, which
+ * means the size of hash table only doubles, instead of quadrupling as well.
  */
+#if __BITS_PER_LONG == 64
 #define ADAPT_SCALE_BASE	(64ul << 30)
+#else
+#define ADAPT_SCALE_BASE	(32ul << 20)
+#endif
 #define ADAPT_SCALE_SHIFT	2
 #define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)
 
-- 
2.11.0
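
For context (an illustration, not part of the fix): with a 32-bit
unsigned long the 64G base truncates to zero, which is why the scaling
loop can never terminate. A standalone sketch, assuming 4K pages
(compile with -m32 to reproduce the truncation):

#include <stdio.h>

int main(void)
{
	/* With a 32-bit unsigned long, 64UL << 30 wraps around to 0. */
	unsigned long base   = 64UL << 30;
	unsigned long npages = base >> 12;	/* ADAPT_SCALE_NPAGES */

	/*
	 * With npages == 0, "adapt <<= ADAPT_SCALE_SHIFT" keeps adapt at
	 * 0, so "adapt < numentries" stays true and scale++ runs forever.
	 */
	printf("base=%lu npages=%lu\n", base, npages);
	return 0;
}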

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-22  9:29     ` Michal Hocko
@ 2017-05-22 13:18       ` Pasha Tatashin
  -1 siblings, 0 replies; 20+ messages in thread
From: Pasha Tatashin @ 2017-05-22 13:18 UTC (permalink / raw)
  To: Michal Hocko; +Cc: akpm, linux-kernel, linux-mm, Michael Ellerman

> 
> I have only noticed this email today because my incoming emails stopped
> syncing since Friday. But this is _definitely_ not the right approachh.
> 64G for 32b systems is _way_ off. We have only ~1G for the kernel. I've
> already proposed scaling up to 32M for 32b systems and Andi seems to be
> suggesting the same. So can we fold or apply the following instead?

Hi Michal,

Thank you for your suggestion. I will update the patch.

The 64G base is never meant to actually be reached on 32bit systems, as
adaptive scaling is simply not needed there. 32M and 64G are going to
behave exactly the same on such systems.

Here is the theoretical limit for the maximum hash size (dentry cache
example):

size of bucket: sizeof(struct hlist_bl_head) = 4 bytes
numentries:  (1 << 32) / PAGE_SIZE  = 1048576 (for 4K pages)
hash size: 4b * 1048576 = 4M

In practice it is going to be an order smaller, as the number of kernel
pages is less than (1 << 32).

However, I will apply your suggestions, as there seems to be a problem
with overflow when comparing ul vs. ull, as reported by Michael Ellerman,
and having a large base on 32bit systems will solve this issue. I will
revert all the quantities back to "ul".

Another approach is to make it a 64-bit-only macro, like this:

#if __BITS_PER_LONG > 32

#define ADAPT_SCALE_BASE     (64ull << 30)
#define ADAPT_SCALE_SHIFT    2
#define ADAPT_SCALE_NPAGES   (ADAPT_SCALE_BASE >> PAGE_SHIFT)

#define adapt_scale(high_limit, numentries, scalep)             \
       if (!(high_limit)) {                                     \
               unsigned long adapt;                             \
               for (adapt = ADAPT_SCALE_NPAGES; adapt <         \
                    (numentries); adapt <<= ADAPT_SCALE_SHIFT)  \
                       (*(scalep))++;                           \
       }
#else
#define adapt_scale(high_limit, numentries, scalep)
#endif

Pasha

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-22 13:18       ` Pasha Tatashin
@ 2017-05-22 13:38         ` Michal Hocko
  -1 siblings, 0 replies; 20+ messages in thread
From: Michal Hocko @ 2017-05-22 13:38 UTC (permalink / raw)
  To: Pasha Tatashin; +Cc: akpm, linux-kernel, linux-mm, Michael Ellerman

On Mon 22-05-17 09:18:58, Pasha Tatashin wrote:
> >
> >I have only noticed this email today because my incoming emails stopped
> >syncing since Friday. But this is _definitely_ not the right approachh.
> >64G for 32b systems is _way_ off. We have only ~1G for the kernel. I've
> >already proposed scaling up to 32M for 32b systems and Andi seems to be
> >suggesting the same. So can we fold or apply the following instead?
> 
> Hi Michal,
> 
> Thank you for your suggestion. I will update the patch.
> 
> 64G base for 32bit systems is not meant to be ever used, as the adaptive
> scaling for 32bit system is just not needed. 32M and 64G are going to be
> exactly the same on such systems.
> 
> Here is theoretical limit for the max hash size of entries (dentry cache
> example):
> 
> size of bucket: sizeof(struct hlist_bl_head) = 4 bytes
> numentries:  (1 << 32) / PAGE_SIZE  = 1048576 (for 4K pages)
> hash size: 4b * 1048576 = 4M
> 
> In practice it is going to be an order smaller, as number of kernel pages is
> less then (1<<32).

I haven't double-checked your math, but if the above is correct then I
would just go and disable the adaptive scaling for 32b altogether. More
on that below.

> However, I will apply your suggestions as there seems to be a problem of
> overflowing in comparing ul vs. ull as reported by Michael Ellerman, and
> having a large base on 32bit systems will solve this issue. I will revert
> back to "ul" all the quantities.

Yeah, that is just calling for trouble.
 
> Another approach is to make it a 64 bit only macro like this:
> 
> #if __BITS_PER_LONG > 32
> 
> #define ADAPT_SCALE_BASE     (64ull << 30)
> #define ADAPT_SCALE_SHIFT    2
> #define ADAPT_SCALE_NPAGES   (ADAPT_SCALE_BASE >> PAGE_SHIFT)
> 
> #define adapt_scale(high_limit, numentries, scalep)
>       if (!(high_limit)) {                                    \
>               unsigned long adapt;                            \
>               for (adapt = ADAPT_SCALE_NPAGES; adapt <        \
>                    (numentries); adapt <<= ADAPT_SCALE_SHIFT) \
>                       (*(scalep))++;                          \
>       }
> #else
> #define adapt_scale(high_limit, numentries scalep)
> #endif

This is just too ugly to live, really. If we do not need adaptive
scaling on 32b then just put #if __BITS_PER_LONG around the code; I
would be fine with that. A big fat warning explaining why this is 64b
only would be appropriate (see the sketch below).
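
Concretely, that suggestion could take roughly the following shape (a
sketch only, not an actual patch):

#if __BITS_PER_LONG > 32
/*
 * Adaptive scaling is 64b only: a 32b kernel can never come anywhere
 * near the 64G base, so the extra code would be dead weight there.
 */
#define ADAPT_SCALE_BASE	(64ul << 30)
#define ADAPT_SCALE_SHIFT	2
#define ADAPT_SCALE_NPAGES	(ADAPT_SCALE_BASE >> PAGE_SHIFT)
#endif

	...
#if __BITS_PER_LONG > 32
		if (!high_limit) {
			unsigned long adapt;

			for (adapt = ADAPT_SCALE_NPAGES; adapt < numentries;
			     adapt <<= ADAPT_SCALE_SHIFT)
				scale++;
		}
#endif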

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [v4 1/1] mm: Adaptive hash table scaling
  2017-05-22 13:38         ` Michal Hocko
@ 2017-05-22 13:41           ` Pasha Tatashin
  -1 siblings, 0 replies; 20+ messages in thread
From: Pasha Tatashin @ 2017-05-22 13:41 UTC (permalink / raw)
  To: Michal Hocko; +Cc: akpm, linux-kernel, linux-mm, Michael Ellerman

> 
> This is just too ugly to live, really. If we do not need adaptive
> scaling then just make it #if __BITS_PER_LONG around the code. I would
> be fine with this. A big fat warning explaining why this is 64b only
> would be appropriate.
> 

OK, let me prettify it somehow, and I will send a new patch out.

Pasha

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2017-05-22 13:41 UTC | newest]

Thread overview: 10 messages
2017-05-20 17:06 [v4 0/1] mm: Adaptive hash table scaling Pavel Tatashin
2017-05-20 17:06 ` [v4 1/1] " Pavel Tatashin
2017-05-21  2:07   ` Andi Kleen
2017-05-21 12:58     ` Pasha Tatashin
2017-05-21 16:35       ` Andi Kleen
2017-05-22  6:17   ` Michael Ellerman
2017-05-22  9:29   ` Michal Hocko
2017-05-22 13:18     ` Pasha Tatashin
2017-05-22 13:38       ` Michal Hocko
2017-05-22 13:41         ` Pasha Tatashin
