* [PATCH v2] x86/mm: use max memory block size on bare metal
@ 2020-06-09 22:54 Daniel Jordan
  2020-06-09 23:03 ` Daniel Jordan
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Daniel Jordan @ 2020-06-09 22:54 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Andrew Morton, Andy Lutomirski, Dave Hansen, David Hildenbrand,
	Michal Hocko, Pavel Tatashin, Peter Zijlstra, Steven Sistare,
	Daniel Jordan

Some of our servers spend significant time at kernel boot initializing
memory block sysfs directories and then creating symlinks between them
and the corresponding nodes.  The slowness happens because the machines
get stuck with the smallest supported memory block size on x86 (128M),
which results in 16,288 directories to cover the 2T of installed RAM.
The search for each memory block is noticeable even with
commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
xarray to accelerate lookup").

Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
on the end of boot memory") chooses the block size based on alignment
with memory end.  That addresses hotplug failures in qemu guests, but
for bare metal systems whose memory end isn't aligned to even the
smallest size, it leaves them at 128M.

Make kernels that aren't running on a hypervisor use the largest
supported size (2G) to minimize overhead on big machines.  Kernel boot
goes 7% faster on the aforementioned servers, shaving off half a second.

Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Sistare <steven.sistare@oracle.com>
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
---

Applies to 5.7 and today's mainline

 arch/x86/mm/init_64.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 8b5f73f5e207c..906fbdb060748 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -55,6 +55,7 @@
 #include <asm/uv/uv.h>
 #include <asm/setup.h>
 #include <asm/ftrace.h>
+#include <asm/hypervisor.h>
 
 #include "mm_internal.h"
 
@@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
 		goto done;
 	}
 
+	/*
+	 * Use max block size to minimize overhead on bare metal, where
+	 * alignment for memory hotplug isn't a concern.
+	 */
+	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
+		bz = MAX_BLOCK_SIZE;
+		goto done;
+	}
+
 	/* Find the largest allowed block size that aligns to memory end */
 	for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1) {
 		if (IS_ALIGNED(boot_mem_end, bz))
-- 
2.26.2
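As a rough sanity check on the numbers in the changelog (illustrative arithmetic only; the observed 16,288 is a bit below 2T/128M, presumably because the 2T figure is rounded and the memory map has holes):

/* Back-of-the-envelope memory block counts, illustrative values only. */
#include <stdio.h>

int main(void)
{
	unsigned long long ram      = 2ULL << 40;	/* ~2 TiB of installed RAM */
	unsigned long long blk_128m = 128ULL << 20;	/* MIN_MEMORY_BLOCK_SIZE on x86 */
	unsigned long long blk_2g   = 2ULL << 30;	/* MAX_BLOCK_SIZE */

	printf("128M blocks: %llu\n", ram / blk_128m);	/* 16384 sysfs directories */
	printf("2G blocks:   %llu\n", ram / blk_2g);	/* 1024, a 16x reduction */
	return 0;
}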



* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-09 22:54 [PATCH v2] x86/mm: use max memory block size on bare metal Daniel Jordan
@ 2020-06-09 23:03 ` Daniel Jordan
  2020-06-10  7:20 ` David Hildenbrand
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 13+ messages in thread
From: Daniel Jordan @ 2020-06-09 23:03 UTC (permalink / raw)
  To: linux-mm, linux-kernel
  Cc: Andrew Morton, Andy Lutomirski, Dave Hansen, David Hildenbrand,
	Michal Hocko, Pavel Tatashin, Peter Zijlstra, Steven Sistare,
	Daniel Jordan

On Tue, Jun 09, 2020 at 06:54:51PM -0400, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes.  The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M),
> which results in 16,288 directories to cover the 2T of installed RAM.
> The search for each memory block is noticeable even with
> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
> xarray to accelerate lookup").
> 
> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
> on the end of boot memory") chooses the block size based on alignment
> with memory end.  That addresses hotplug failures in qemu guests, but
> for bare metal systems whose memory end isn't aligned to even the
> smallest size, it leaves them at 128M.
> 
> Make kernels that aren't running on a hypervisor use the largest
> supported size (2G) to minimize overhead on big machines.  Kernel boot
> goes 7% faster on the aforementioned servers, shaving off half a second.
> 
> Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Steven Sistare <steven.sistare@oracle.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---

Forgot the changes since v1:

 - Thanks to David for the idea to make this conditional based on
   virtualization.
 - Update performance numbers to account for 4fb6eabf1037 (David)


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-09 22:54 [PATCH v2] x86/mm: use max memory block size on bare metal Daniel Jordan
  2020-06-09 23:03 ` Daniel Jordan
@ 2020-06-10  7:20 ` David Hildenbrand
  2020-06-10  7:30   ` David Hildenbrand
  2020-06-11 14:16 ` Dave Hansen
  2020-06-19 12:07 ` Michal Hocko
  3 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2020-06-10  7:20 UTC (permalink / raw)
  To: Daniel Jordan, linux-mm, linux-kernel
  Cc: Andrew Morton, Andy Lutomirski, Dave Hansen, Michal Hocko,
	Pavel Tatashin, Peter Zijlstra, Steven Sistare

On 10.06.20 00:54, Daniel Jordan wrote:
> Some of our servers spend significant time at kernel boot initializing
> memory block sysfs directories and then creating symlinks between them
> and the corresponding nodes.  The slowness happens because the machines
> get stuck with the smallest supported memory block size on x86 (128M),
> which results in 16,288 directories to cover the 2T of installed RAM.
> The search for each memory block is noticeable even with
> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
> xarray to accelerate lookup").
> 
> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
> on the end of boot memory") chooses the block size based on alignment
> with memory end.  That addresses hotplug failures in qemu guests, but
> for bare metal systems whose memory end isn't aligned to even the
> smallest size, it leaves them at 128M.
> 
> Make kernels that aren't running on a hypervisor use the largest
> supported size (2G) to minimize overhead on big machines.  Kernel boot
> goes 7% faster on the aforementioned servers, shaving off half a second.
> 
> Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Dave Hansen <dave.hansen@linux.intel.com>
> Cc: David Hildenbrand <david@redhat.com>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Steven Sistare <steven.sistare@oracle.com>
> Cc: linux-mm@kvack.org
> Cc: linux-kernel@vger.kernel.org
> ---
> 
> Applies to 5.7 and today's mainline
> 
>  arch/x86/mm/init_64.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 8b5f73f5e207c..906fbdb060748 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -55,6 +55,7 @@
>  #include <asm/uv/uv.h>
>  #include <asm/setup.h>
>  #include <asm/ftrace.h>
> +#include <asm/hypervisor.h>
>  
>  #include "mm_internal.h"
>  
> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
>  		goto done;
>  	}
>  
> +	/*
> +	 * Use max block size to minimize overhead on bare metal, where
> +	 * alignment for memory hotplug isn't a concern.
> +	 */
> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> +		bz = MAX_BLOCK_SIZE;
> +		goto done;
> +	}

I'd assume that BIOSes on physical machines with >= 64GB will not align
bigger (>= 2GB) DIMMs to something < 2GB.

Acked-by: David Hildenbrand <david@redhat.com>

> +
>  	/* Find the largest allowed block size that aligns to memory end */
>  	for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1) {
>  		if (IS_ALIGNED(boot_mem_end, bz))
> 


-- 
Thanks,

David / dhildenb



* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-10  7:20 ` David Hildenbrand
@ 2020-06-10  7:30   ` David Hildenbrand
  2020-06-10 17:16     ` Daniel Jordan
  0 siblings, 1 reply; 13+ messages in thread
From: David Hildenbrand @ 2020-06-10  7:30 UTC (permalink / raw)
  To: Daniel Jordan, linux-mm, linux-kernel
  Cc: Andrew Morton, Andy Lutomirski, Dave Hansen, Michal Hocko,
	Pavel Tatashin, Peter Zijlstra, Steven Sistare

On 10.06.20 09:20, David Hildenbrand wrote:
> On 10.06.20 00:54, Daniel Jordan wrote:
>> Some of our servers spend significant time at kernel boot initializing
>> memory block sysfs directories and then creating symlinks between them
>> and the corresponding nodes.  The slowness happens because the machines
>> get stuck with the smallest supported memory block size on x86 (128M),
>> which results in 16,288 directories to cover the 2T of installed RAM.
>> The search for each memory block is noticeable even with
>> commit 4fb6eabf1037 ("drivers/base/memory.c: cache memory blocks in
>> xarray to accelerate lookup").
>>
>> Commit 078eb6aa50dc ("x86/mm/memory_hotplug: determine block size based
>> on the end of boot memory") chooses the block size based on alignment
>> with memory end.  That addresses hotplug failures in qemu guests, but
>> for bare metal systems whose memory end isn't aligned to even the
>> smallest size, it leaves them at 128M.
>>
>> Make kernels that aren't running on a hypervisor use the largest
>> supported size (2G) to minimize overhead on big machines.  Kernel boot
>> goes 7% faster on the aforementioned servers, shaving off half a second.
>>
>> Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
>> Cc: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Andy Lutomirski <luto@kernel.org>
>> Cc: Dave Hansen <dave.hansen@linux.intel.com>
>> Cc: David Hildenbrand <david@redhat.com>
>> Cc: Michal Hocko <mhocko@kernel.org>
>> Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
>> Cc: Peter Zijlstra <peterz@infradead.org>
>> Cc: Steven Sistare <steven.sistare@oracle.com>
>> Cc: linux-mm@kvack.org
>> Cc: linux-kernel@vger.kernel.org
>> ---
>>
>> Applies to 5.7 and today's mainline
>>
>>  arch/x86/mm/init_64.c | 10 ++++++++++
>>  1 file changed, 10 insertions(+)
>>
>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>> index 8b5f73f5e207c..906fbdb060748 100644
>> --- a/arch/x86/mm/init_64.c
>> +++ b/arch/x86/mm/init_64.c
>> @@ -55,6 +55,7 @@
>>  #include <asm/uv/uv.h>
>>  #include <asm/setup.h>
>>  #include <asm/ftrace.h>
>> +#include <asm/hypervisor.h>
>>  
>>  #include "mm_internal.h"
>>  
>> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
>>  		goto done;
>>  	}
>>  
>> +	/*
>> +	 * Use max block size to minimize overhead on bare metal, where
>> +	 * alignment for memory hotplug isn't a concern.
>> +	 */
>> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
>> +		bz = MAX_BLOCK_SIZE;
>> +		goto done;
>> +	}
> 
> I'd assume that BIOSes on physical machines with >= 64GB will not align
> bigger (>= 2GB) DIMMs to something < 2GB.
> 
> Acked-by: David Hildenbrand <david@redhat.com>

FWIW, setup_arch() does the init_hypervisor_platform() call. I assume
that should be early enough.

We should really look into factoring out memory_block_size_bytes() into
common code, turning it into a simple global variable read. Then, we should
provide an interface to configure the memory block size during boot from
arch code (set_memory_block_size()).
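A minimal sketch of what that factoring-out could look like, assuming a single boot-time setter; the common-code form below is hypothetical and only the role of set_memory_block_size() is taken from the suggestion above:

/* Hypothetical common-code variant; not what the kernel implements today. */
#include <linux/init.h>
#include <linux/cache.h>
#include <linux/memory.h>

static unsigned long memory_block_size __ro_after_init = MIN_MEMORY_BLOCK_SIZE;

/* Arch code would call this once, early in boot (e.g. from setup_arch()). */
void __init set_memory_block_size(unsigned long size)
{
	memory_block_size = size;
}

/* memory_block_size_bytes() then collapses to a plain read of the global. */
unsigned long memory_block_size_bytes(void)
{
	return memory_block_size;
}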

-- 
Thanks,

David / dhildenb



* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-10  7:30   ` David Hildenbrand
@ 2020-06-10 17:16     ` Daniel Jordan
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Jordan @ 2020-06-10 17:16 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Daniel Jordan, linux-mm, linux-kernel, Andrew Morton,
	Andy Lutomirski, Dave Hansen, Michal Hocko, Pavel Tatashin,
	Peter Zijlstra, Steven Sistare

On Wed, Jun 10, 2020 at 09:30:00AM +0200, David Hildenbrand wrote:
> On 10.06.20 09:20, David Hildenbrand wrote:
> > On 10.06.20 00:54, Daniel Jordan wrote:
> >> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> >>  		goto done;
> >>  	}
> >>  
> >> +	/*
> >> +	 * Use max block size to minimize overhead on bare metal, where
> >> +	 * alignment for memory hotplug isn't a concern.
> >> +	 */
> >> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> >> +		bz = MAX_BLOCK_SIZE;
> >> +		goto done;
> >> +	}
> > 
> > I'd assume that BIOSes on physical machines with >= 64GB will not align
> > bigger (>= 2GB) DIMMs to something < 2GB.
> > 
> > Acked-by: David Hildenbrand <david@redhat.com>

Thanks!

> FWIW, setup_arch() does the init_hypervisor_platform() call. I assume
> that should be early enough.

I checked all the places where x86 uses memory_block_size_bytes(); it looks ok.


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-09 22:54 [PATCH v2] x86/mm: use max memory block size on bare metal Daniel Jordan
  2020-06-09 23:03 ` Daniel Jordan
  2020-06-10  7:20 ` David Hildenbrand
@ 2020-06-11 14:16 ` Dave Hansen
  2020-06-11 16:59   ` Daniel Jordan
  2020-06-19 12:07 ` Michal Hocko
  3 siblings, 1 reply; 13+ messages in thread
From: Dave Hansen @ 2020-06-11 14:16 UTC (permalink / raw)
  To: Daniel Jordan, linux-mm, linux-kernel
  Cc: Andrew Morton, Andy Lutomirski, Dave Hansen, David Hildenbrand,
	Michal Hocko, Pavel Tatashin, Peter Zijlstra, Steven Sistare

On 6/9/20 3:54 PM, Daniel Jordan wrote:
> +	/*
> +	 * Use max block size to minimize overhead on bare metal, where
> +	 * alignment for memory hotplug isn't a concern.
> +	 */
> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> +		bz = MAX_BLOCK_SIZE;
> +		goto done;
> +	}

What ends up being the worst case scenario?  Booting a really small
bare-metal x86 system, say with 64MB or 128MB of RAM?  What's the
overhead there?




* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-11 14:16 ` Dave Hansen
@ 2020-06-11 16:59   ` Daniel Jordan
  2020-06-11 17:05     ` Dave Hansen
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Jordan @ 2020-06-11 16:59 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Daniel Jordan, linux-mm, linux-kernel, Andrew Morton,
	Andy Lutomirski, Dave Hansen, David Hildenbrand, Michal Hocko,
	Pavel Tatashin, Peter Zijlstra, Steven Sistare

On Thu, Jun 11, 2020 at 07:16:02AM -0700, Dave Hansen wrote:
> On 6/9/20 3:54 PM, Daniel Jordan wrote:
> > +	/*
> > +	 * Use max block size to minimize overhead on bare metal, where
> > +	 * alignment for memory hotplug isn't a concern.
> > +	 */
> > +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
> > +		bz = MAX_BLOCK_SIZE;
> > +		goto done;
> > +	}
> 
> What ends up being the worst case scenario?  Booting a really small
> bare-metal x86 system, say with 64MB or 128MB of RAM?  What's the
> overhead there?

Might not be following you, so bear with me, but we only get to this check on a
system with a physical address end of at least MEM_SIZE_FOR_LARGE_BLOCK (64G),
and this would still (ever so slightly...) reduce overhead of memory block init
at boot in that case.
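For readers without the tree handy, the check being referred to sits just above the new hunk in probe_memory_block_size(); roughly (paraphrased from the 5.7-era code, so details may differ):

	/* Systems below 64G never reach the new hypervisor check. */
	if (boot_mem_end < MEM_SIZE_FOR_LARGE_BLOCK) {	/* 64G */
		bz = MIN_MEMORY_BLOCK_SIZE;		/* 128M on x86 */
		goto done;
	}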


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-11 16:59   ` Daniel Jordan
@ 2020-06-11 17:05     ` Dave Hansen
  2020-06-12  3:29       ` Daniel Jordan
  0 siblings, 1 reply; 13+ messages in thread
From: Dave Hansen @ 2020-06-11 17:05 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: linux-mm, linux-kernel, Andrew Morton, Andy Lutomirski,
	Dave Hansen, David Hildenbrand, Michal Hocko, Pavel Tatashin,
	Peter Zijlstra, Steven Sistare

On 6/11/20 9:59 AM, Daniel Jordan wrote:
> On Thu, Jun 11, 2020 at 07:16:02AM -0700, Dave Hansen wrote:
>> On 6/9/20 3:54 PM, Daniel Jordan wrote:
>>> +	/*
>>> +	 * Use max block size to minimize overhead on bare metal, where
>>> +	 * alignment for memory hotplug isn't a concern.
>>> +	 */
>>> +	if (hypervisor_is_type(X86_HYPER_NATIVE)) {
>>> +		bz = MAX_BLOCK_SIZE;
>>> +		goto done;
>>> +	}
>> What ends up being the worst case scenario?  Booting a really small
>> bare-metal x86 system, say with 64MB or 128MB of RAM?  What's the
>> overhead there?
> Might not be following you, so bear with me, but we only get to this check on a
> system with a physical address end of at least MEM_SIZE_FOR_LARGE_BLOCK (64G),
> and this would still (ever so slightly...) reduce overhead of memory block init
> at boot in that case.

Ahh, I see now.  That is just above the hunk you added, but just wasn't
in the diff context or mentioned in the changelog.

One other nit for this.  We *do* have actual hardware hotplug, and I'm
pretty sure the alignment guarantees for hardware hotplug are pretty
weak.  For instance, the alignment guarantees for persistent memory are
still only 64MB even on modern platforms.

Let's say we're on bare metal and we see an SRAT table that has some
areas that show that hotplug might happen there.  Is this patch still
ideal there?


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-11 17:05     ` Dave Hansen
@ 2020-06-12  3:29       ` Daniel Jordan
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Jordan @ 2020-06-12  3:29 UTC (permalink / raw)
  To: Dave Hansen
  Cc: Daniel Jordan, linux-mm, linux-kernel, Andrew Morton,
	Andy Lutomirski, Dave Hansen, David Hildenbrand, Michal Hocko,
	Pavel Tatashin, Peter Zijlstra, Steven Sistare

On Thu, Jun 11, 2020 at 10:05:38AM -0700, Dave Hansen wrote:
> One other nit for this.  We *do* have actual hardware hotplug, and I'm
> pretty sure the alignment guarantees for hardware hotplug are pretty
> weak.  For instance, the alignment guarantees for persistent memory are
> still only 64MB even on modern platforms.
>
> Let's say we're on bare metal and we see an SRAT table that has some
> areas that show that hotplug might happen there.  Is this patch still
> ideal there?

Well, not if there's concern about hardware hotplug.

My assumption going in was that this wasn't a problem in practice.
078eb6aa50dc50 ("x86/mm/memory_hotplug: determine block size based on the end
of boot memory") was merged in 2018 to address qemu hotplug failures and >64G
systems have used a 2G block since 2014 with no complaints about alignment
issues, to my knowledge anyway.


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-09 22:54 [PATCH v2] x86/mm: use max memory block size on bare metal Daniel Jordan
                   ` (2 preceding siblings ...)
  2020-06-11 14:16 ` Dave Hansen
@ 2020-06-19 12:07 ` Michal Hocko
  2020-06-22 19:17   ` Daniel Jordan
  3 siblings, 1 reply; 13+ messages in thread
From: Michal Hocko @ 2020-06-19 12:07 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: linux-mm, linux-kernel, Andrew Morton, Andy Lutomirski,
	Dave Hansen, David Hildenbrand, Pavel Tatashin, Peter Zijlstra,
	Steven Sistare

On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
[...]
> @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
>  		goto done;
>  	}
>  
> +	/*
> +	 * Use max block size to minimize overhead on bare metal, where
> +	 * alignment for memory hotplug isn't a concern.

This really begs a clarification why this is not a concern. Bare metal
can see physical memory hotadd as well. I just suspect that you do not
consider that to be very common so it is not a big deal? And I would
tend to agree but still we are just going to wait until first user
stumbles over this.

Btw. memblock interface just doesn't scale and it is a terrible
interface for large machines and for the memory hotplug in general (just
look at ppc and their insanely small memblocks).

Most usecases I have seen simply want to either offline some portion of
memory without a strong requirement of the physical memory range as long
as it is from a particular node or simply offline and remove the full
node.

I believe that we should think about a future interface rather than
trying to ducktape the blocksize anytime it causes problems. I would be
even tempted to simply add a kernel command line option 
memory_hotplug=disable,legacy,new_shiny

for disable it would simply drop all the sysfs crud and speed up boot
for most users who simply do not care about memory hotplug. new_shiny
would ideally provide an interface that would either export logically
hotpluggable memory ranges (e.g. DIMMs) or a query/action interface which
accepts physical ranges as input. Having gazillions of sysfs files is
simply unsustainable.
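A rough sketch of how such a switch might be wired up, purely to illustrate the idea; neither the memory_hotplug= option nor any of the names below exist in the kernel:

#include <linux/init.h>
#include <linux/cache.h>
#include <linux/errno.h>
#include <linux/string.h>

/* Hypothetical modes for the proposed memory_hotplug= parameter. */
enum memhp_mode {
	MEMHP_DISABLE,		/* no per-block sysfs at all */
	MEMHP_LEGACY,		/* today's memoryNNN directories */
	MEMHP_NEW_SHINY,	/* range-based query/action interface */
};

static enum memhp_mode memhp_mode __ro_after_init = MEMHP_LEGACY;

static int __init parse_memory_hotplug(char *str)
{
	if (!str)
		return -EINVAL;
	if (!strcmp(str, "disable"))
		memhp_mode = MEMHP_DISABLE;
	else if (!strcmp(str, "legacy"))
		memhp_mode = MEMHP_LEGACY;
	else if (!strcmp(str, "new_shiny"))
		memhp_mode = MEMHP_NEW_SHINY;
	else
		return -EINVAL;
	return 0;
}
early_param("memory_hotplug", parse_memory_hotplug);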
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-19 12:07 ` Michal Hocko
@ 2020-06-22 19:17   ` Daniel Jordan
  2020-06-26 12:47     ` Michal Hocko
  0 siblings, 1 reply; 13+ messages in thread
From: Daniel Jordan @ 2020-06-22 19:17 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Daniel Jordan, linux-mm, linux-kernel, Andrew Morton,
	Andy Lutomirski, Dave Hansen, David Hildenbrand, Pavel Tatashin,
	Peter Zijlstra, Steven Sistare

Hello Michal,

(I've been away and may be slow to respond for a little while)

On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
> [...]
> > @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> >  		goto done;
> >  	}
> >  
> > +	/*
> > +	 * Use max block size to minimize overhead on bare metal, where
> > +	 * alignment for memory hotplug isn't a concern.
> 
> This really begs a clarification why this is not a concern. Bare metal
> can see physical memory hotadd as well. I just suspect that you do not
> consider that to be very common so it is not a big deal?

It's not only uncommon, it's also that boot_mem_end on bare metal may not align
with any available memory block size.  For instance, this server's boot_mem_end
is only 4M aligned and FWIW my desktop's is 2M aligned.  As far as I can tell,
the logic that picks the size wasn't intended for bare metal.
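To make that concrete, a sketch with a made-up address (the 64G + 4M end below is purely illustrative); without the patch, or when a hypervisor is detected, the existing loop runs and falls through:

	/* Hypothetical bare-metal layout: >= 64G of RAM, boot_mem_end only 4M aligned. */
	unsigned long boot_mem_end = (64UL << 30) + (4UL << 20);
	unsigned long bz;

	/* The loop from probe_memory_block_size(): 2G, 1G, 512M and 256M all fail. */
	for (bz = MAX_BLOCK_SIZE; bz > MIN_MEMORY_BLOCK_SIZE; bz >>= 1) {
		if (IS_ALIGNED(boot_mem_end, bz))
			break;
	}
	/* Exits with bz == MIN_MEMORY_BLOCK_SIZE, i.e. the machine is stuck at 128M. */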

> And I would
> tend to agree but still we are just going to wait until first user
> stumbles over this.

This isn't something new with this patch, 2G has been the default on big
machines for years.  This is addressing an unintended side effect of
078eb6aa50dc50, which was for qemu, by restoring the original behavior on bare
metal to avoid oodles of sysfs files.

> Btw. memblock interface just doesn't scale and it is a terrible
> interface for large machines and for the memory hotplug in general (just
> look at ppc and their insanely small memblocks).

I agree that the status quo isn't ideal and is something to address going
forward.

> Most usecases I have seen simply want to either offline some portion of
> memory without a strong requirement of the physical memory range as long
> as it is from a particular node or simply offline and remove the full
> node.

Interesting, would've thought that removing a single bad DIMM for RAS purposes
would also be common relative to how often hotplug is done on real systems.

> I believe that we should think about a future interface rather than
> trying to ducktape the blocksize anytime it causes problems. I would be
> even tempted to simply add a kernel command line option 
> memory_hotplug=disable,legacy,new_shiny
> 
> for disable it would simply drop all the sysfs crud and speed up boot
> for most users who simply do not care about memory hotplug. new_shiny
> would ideally provide an interface that would either export logically
> hotpluggable memory ranges (e.g. DIMMs) or a query/action interface which
> accepts physical ranges as input. Having gazillions of sysfs files is
> simply unsustainable.

So in this idea, presumably the default would start off being legacy and then
later be changed to new_shiny?

If new_shiny scales well, maybe 'disable' wouldn't be needed and so using the
option could be avoided most of the time.  If some users really don't want it,
they can build without it.


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-22 19:17   ` Daniel Jordan
@ 2020-06-26 12:47     ` Michal Hocko
  2020-07-08 18:46       ` Daniel Jordan
  0 siblings, 1 reply; 13+ messages in thread
From: Michal Hocko @ 2020-06-26 12:47 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: linux-mm, linux-kernel, Andrew Morton, Andy Lutomirski,
	Dave Hansen, David Hildenbrand, Pavel Tatashin, Peter Zijlstra,
	Steven Sistare

On Mon 22-06-20 15:17:39, Daniel Jordan wrote:
> Hello Michal,
> 
> (I've been away and may be slow to respond for a little while)
> 
> On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> > On Tue 09-06-20 18:54:51, Daniel Jordan wrote:
> > [...]
> > > @@ -1390,6 +1391,15 @@ static unsigned long probe_memory_block_size(void)
> > >  		goto done;
> > >  	}
> > >  
> > > +	/*
> > > +	 * Use max block size to minimize overhead on bare metal, where
> > > +	 * alignment for memory hotplug isn't a concern.
> > 
> > This really begs a clarification why this is not a concern. Bare metal
> > can see physical memory hotadd as well. I just suspect that you do not
> > consider that to be very common so it is not a big deal?
> 
> It's not only uncommon, it's also that boot_mem_end on bare metal may not align
> with any available memory block size.  For instance, this server's boot_mem_end
> is only 4M aligned and FWIW my desktop's is 2M aligned.  As far as I can tell,
> the logic that picks the size wasn't intended for bare metal.

> 
> > And I would
> > tend to agree but still we are just going to wait until first user
> > stumbles over this.
> 
> This isn't something new with this patch, 2G has been the default on big
> machines for years.  This is addressing an unintended side effect of
> 078eb6aa50dc50, which was for qemu, by restoring the original behavior on bare
> metal to avoid oodles of sysfs files.

I am not really sure qemu was the target. I suspect it was just easier
to test in qemu.

[...]

> > I believe that we should think about a future interface rather than
> > trying to ducktape the blocksize anytime it causes problems. I would be
> > even tempted to simply add a kernel command line option 
> > memory_hotplug=disable,legacy,new_shiny
> > 
> > for disable it would simply drop all the sysfs crud and speed up boot
> > for most users who simply do not care about memory hotplug. new_shiny
> > would ideally provide an interface that would either export logically
> > hotpluggable memory ranges (e.g. DIMMs) or a query/action interface which
> > accepts physical ranges as input. Having gazillions of sysfs files is
> > simply unsustainable.
> 
> So in this idea, presumably the default would start off being legacy and then
> later be changed to new_shiny?

Well it really depends. Going with disable as a default would suit most
users much better because the vast majority simply doesn't use the
functionality. On the other hand real users would regress unless they
enable the option. Which is definitely not nice. Another and much less
intrusive change would be creating sysfs interface on-demand. So until
somebody actually tries to use the interface it won't exist. I haven't
tried to explore how complex that would be. I am not really familiar
with sysfs to be honest. But fundamentally nothing should prevent such a
solution.

Another option would be to create the sysfs interface only if there is
hotpluggable memory reported by the platform. But I am not sure we have a
proper interface for that for all arches.
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH v2] x86/mm: use max memory block size on bare metal
  2020-06-26 12:47     ` Michal Hocko
@ 2020-07-08 18:46       ` Daniel Jordan
  0 siblings, 0 replies; 13+ messages in thread
From: Daniel Jordan @ 2020-07-08 18:46 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Daniel Jordan, linux-mm, linux-kernel, Andrew Morton,
	Andy Lutomirski, Dave Hansen, David Hildenbrand, Pavel Tatashin,
	Peter Zijlstra, Steven Sistare

(I'm back now)

On Fri, Jun 26, 2020 at 02:47:06PM +0200, Michal Hocko wrote:
> On Mon 22-06-20 15:17:39, Daniel Jordan wrote:
> > Hello Michal,
> > 
> > (I've been away and may be slow to respond for a little while)
> > 
> > On Fri, Jun 19, 2020 at 02:07:04PM +0200, Michal Hocko wrote:
> > > I believe that we should think about a future interface rather than
> > > trying to ducktape the blocksize anytime it causes problems. I would be
> > > even tempted to simply add a kernel command line option 
> > > memory_hotplug=disable,legacy,new_shiny
> > > 
> > > for disable it would simply drop all the sysfs crud and speed up boot
> > > for most users who simply do not care about memory hotplug. new_shiny
> > > would ideally provide an interface that would either export logically
> > > hotplugable memory ranges (e.g. DIMMs) or a query/action interface which
> > > accepts physical ranges as input. Having gazillions of sysfs files is
> > > simply unsustainable.
> > 
> > So in this idea, presumably the default would start off being legacy and then
> > later be changed to new_shiny?
> 
> Well it really depends. Going with disable as a default would suit most
> users much better because the vast majority simply doesn't use the
> functionality. On the other hand real users would regress unless they
> enable the option. Which is definitely not nice.

Agreed.

> Another and much less
> intrusive change would be creating sysfs interface on-demand. So until
> somebody actually tries to use the interface it won't exist. I haven't
> tried to explore how complex that would be. I am not really familiar
> with sysfs to be honest. But fundamentally nothing should prevent such a
> solution.

Hm, don't know sysfs internals either.  It seems possible that someone (or
something...) navigating around could trigger creation unintentionally--for
instance, by reading the symlinks to the memory block dirs in
/sys/devices/system/node/nodeN when trying to find another file to read in the
same place.

> Another option would be to create the sysfs interface only if there is
> hotpluggable memory reported by the platform. But I am not sure we have a
> proper interface for that for all arches.

Systems that happen to have hotpluggable ranges but don't want the
overhead would still be stuck, though, it seems.


FWIW, the ideas for new_shiny sound promising.

