linux-mm.kvack.org archive mirror
* pcpu allocator on large NUMA machines
@ 2017-07-24 13:42 Michal Hocko
  2017-07-24 13:57 ` Tejun Heo
  0 siblings, 1 reply; 5+ messages in thread
From: Michal Hocko @ 2017-07-24 13:42 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Michael Ellerman, Jiri Kosina, linux-mm, LKML

Hi Tejun,
we are seeing strange pcpu allocation failures on a large ppc machine
running our older distribution kernel. Please note that I am not yet
sure this happens with the current up-to-date kernel, and I do not have
direct access to the machine, but it seems there have not been many
changes in the pcpu area since 4.4 that would make any difference.

The machine has 32TB of memory and 192 cores.

Warnings are as follows:
WARNING: at ../mm/vmalloc.c:2423
[...]
NIP [c00000000028e048] pcpu_get_vm_areas+0x698/0x6d0
LR [c000000000268be0] pcpu_create_chunk+0xb0/0x160
Call Trace:
[c000106c3948f900] [c000000000268684] pcpu_mem_zalloc+0x54/0xd0 (unreliable)
[c000106c3948f9d0] [c000000000268be0] pcpu_create_chunk+0xb0/0x160
[c000106c3948fa00] [c000000000269dc4] pcpu_alloc+0x284/0x740
[c000106c3948faf0] [c00000000017ac90] hotplug_cfd+0x100/0x150
[c000106c3948fb30] [c0000000000eabf8] notifier_call_chain+0x98/0x110
[c000106c3948fb80] [c0000000000bdae0] _cpu_up+0x150/0x210
[c000106c3948fc30] [c0000000000bdcbc] cpu_up+0x11c/0x140
[c000106c3948fcb0] [c000000000ac47dc] smp_init+0x110/0x118
[c000106c3948fd00] [c000000000aa4228] kernel_init_freeable+0x19c/0x364
[c000106c3948fdc0] [c00000000000bf58] kernel_init+0x28/0x150
[c000106c3948fe30] [c000000000009538] ret_from_kernel_thread+0x5c/0xa4

And the kernel log complains about the max_distance:
PERCPU: max_distance=0x1d452f940000 too large for vmalloc space 0x80000000000

The boot dies eventually...

Reducing the number of cores doesn't help, but reducing the size of
memory does.  Increasing the vmalloc space (to 56TB) helps as well. Our
older kernels (based on 4.4) booted just fine, and it seems that
ba4a648f12f4 ("powerpc/numa: Fix percpu allocations to be NUMA aware")
(which went to stable) changed the picture. Previously the same machine
consumed only ~400MB of vmalloc area per NUMA node:
0xd00007ffb8000000-0xd00007ffd0000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc
0xd00007ffd0000000-0xd00007ffe8000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc
0xd00007ffe8000000-0xd000080000000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc

My understanding of the pcpu allocator is basically close to zero, but
it seems weird to me that we would need many TB of vmalloc address
space just to allocate vmalloc areas that are in the range of hundreds
of MB. So I am wondering whether this is expected behavior of the
allocator or there is a problem somewhere else.

Michael has noted:
: On powerpc we use pcpu_embed_first_chunk(). That means we use the 1:1 linear
: mapping of kernel virtual to physical for the first per-cpu chunk (kernel 
: static percpu vars).
:
: Because of that, and because the percpu allocator wants to do node local
: allocations, the distance between the percpu areas ends up being dictated by 
: the distance between the real addresses of our NUMA nodes.
: 
: So if you boot a system with a lot of NUMA nodes, or with a very large
: distance between nodes, then you can hit the bug we have here.
: 
: Of course things have been complicated by the fact that the node-local
: part of the percpu allocation was broken until recently, and because
: most of us don't have access to these really large memory systems.

Let me know if you need further details.

Thanks!
-- 
Michal Hocko
SUSE Labs

* Re: pcpu allocator on large NUMA machines
  2017-07-24 13:42 pcpu allocator on large NUMA machines Michal Hocko
@ 2017-07-24 13:57 ` Tejun Heo
  2017-07-24 14:28   ` Michal Hocko
  0 siblings, 1 reply; 5+ messages in thread
From: Tejun Heo @ 2017-07-24 13:57 UTC (permalink / raw)
  To: Michal Hocko; +Cc: Michael Ellerman, Jiri Kosina, linux-mm, LKML

Hello,

On Mon, Jul 24, 2017 at 03:42:40PM +0200, Michal Hocko wrote:
> we are seeing strange pcpu allocation failures on a large ppc machine
> running our older distribution kernel. Please note that I am not yet
> sure this happens with the current up-to-date kernel, and I do not have
> direct access to the machine, but it seems there have not been many
> changes in the pcpu area since 4.4 that would make any difference.
> 
> The machine has 32TB of memory and 192 cores.
> 
> Warnings are as follows:
> WARNING: at ../mm/vmalloc.c:2423
> [...]
> NIP [c00000000028e048] pcpu_get_vm_areas+0x698/0x6d0
> LR [c000000000268be0] pcpu_create_chunk+0xb0/0x160
> Call Trace:
> [c000106c3948f900] [c000000000268684] pcpu_mem_zalloc+0x54/0xd0 (unreliable)
> [c000106c3948f9d0] [c000000000268be0] pcpu_create_chunk+0xb0/0x160
> [c000106c3948fa00] [c000000000269dc4] pcpu_alloc+0x284/0x740
> [c000106c3948faf0] [c00000000017ac90] hotplug_cfd+0x100/0x150
> [c000106c3948fb30] [c0000000000eabf8] notifier_call_chain+0x98/0x110
> [c000106c3948fb80] [c0000000000bdae0] _cpu_up+0x150/0x210
> [c000106c3948fc30] [c0000000000bdcbc] cpu_up+0x11c/0x140
> [c000106c3948fcb0] [c000000000ac47dc] smp_init+0x110/0x118
> [c000106c3948fd00] [c000000000aa4228] kernel_init_freeable+0x19c/0x364
> [c000106c3948fdc0] [c00000000000bf58] kernel_init+0x28/0x150
> [c000106c3948fe30] [c000000000009538] ret_from_kernel_thread+0x5c/0xa4

It looks like the spread between the NUMA addresses is larger than the
size of the vmalloc area.

> And the kernel log complains about the max_distance:
> PERCPU: max_distance=0x1d452f940000 too large for vmalloc space 0x80000000000

Yeah, that warning triggers when the distance becomes larger than 75%
of the vmalloc space.

> The boot dies eventually...
> 
> Reducing the number of cores doesn't help, but reducing the size of
> memory does.  Increasing the vmalloc space (to 56TB) helps as well. Our
> older kernels (based on 4.4) booted just fine, and it seems that
> ba4a648f12f4 ("powerpc/numa: Fix percpu allocations to be NUMA aware")
> (which went to stable) changed the picture. Previously the same machine
> consumed only ~400MB of vmalloc area per NUMA node:
> 0xd00007ffb8000000-0xd00007ffd0000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc
> 0xd00007ffd0000000-0xd00007ffe8000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc
> 0xd00007ffe8000000-0xd000080000000000 402653184 pcpu_get_vm_areas+0x0/0x6d0 vmalloc
> 
> My understanding of the pcpu allocator is basically close to zero, but
> it seems weird to me that we would need many TB of vmalloc address
> space just to allocate vmalloc areas that are in the range of hundreds
> of MB. So I am wondering whether this is expected behavior of the
> allocator or there is a problem somewhere else.

It's not actually using the entire region, but the area allocations try
to follow the same topology as the kernel linear address layout.  I.e.,
if the kernel addresses for different NUMA nodes are a certain distance
apart, the percpu allocator tries to replicate that distance for
dynamic allocations.  That allows the static and first dynamic areas to
stay in the kernel linear mapping, which helps reduce TLB pressure.
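
To make that concrete, here is a rough sketch of where the warning
comes from, condensed from what mm/percpu.c:pcpu_embed_first_chunk()
does (approximate and from memory, not the exact source):

	/* group base offsets mirror the linear-map distance between nodes */
	max_distance = 0;
	for (group = 0; group < ai->nr_groups; group++) {
		ai->groups[group].base_offset = areas[group] - base;
		max_distance = max_t(size_t, max_distance,
				     ai->groups[group].base_offset);
	}
	max_distance += ai->unit_size;

	/* warn if maximum distance is further than 75% of vmalloc space */
	if (max_distance > VMALLOC_TOTAL * 3 / 4)
		pr_warn("PERCPU: max_distance=0x%zx too large for vmalloc space 0x%lx\n",
			max_distance, VMALLOC_TOTAL);

On your machine the NUMA nodes are spread across a 32TB linear mapping,
so max_distance comes out around 29TB (0x1d452f940000), far beyond 75%
of the 8TB (0x80000000000) vmalloc space, which is exactly the message
you quoted.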

This optimization can be turned off when the vmalloc area isn't
spacious enough, by using pcpu_page_first_chunk() instead of
pcpu_embed_first_chunk() when initializing the percpu allocator.  Can
you see whether replacing that in arch/powerpc/kernel/setup_64.c fixes
the issue?  If so, all that remains is figuring out what conditions we
need to check to opt out of embedding the first chunk.  Note that
32-bit x86 does about the same thing.
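
For reference, the x86 setup code does roughly the following (an
abridged sketch of arch/x86/kernel/setup_percpu.c, not verbatim):

	rc = -EINVAL;
	if (pcpu_chosen_fc != PCPU_FC_PAGE) {
		/* first try to embed the first chunk in the linear mapping */
		rc = pcpu_embed_first_chunk(PERCPU_FIRST_CHUNK_RESERVE,
					    dyn_size, atom_size,
					    pcpu_cpu_distance,
					    pcpu_fc_alloc, pcpu_fc_free);
	}
	if (rc < 0)
		/* fall back to page-granular mappings in vmalloc space */
		rc = pcpu_page_first_chunk(PERCPU_FIRST_CHUNK_RESERVE,
					   pcpu_fc_alloc, pcpu_fc_free,
					   pcpup_populate_pte);

powerpc would need its own equivalents of the pcpu_fc_* callbacks and
the pte-populating helper to take the same fallback path.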

Thanks.

-- 
tejun

* Re: pcpu allocator on large NUMA machines
  2017-07-24 13:57 ` Tejun Heo
@ 2017-07-24 14:28   ` Michal Hocko
  2017-07-25  1:26     ` Michael Ellerman
  0 siblings, 1 reply; 5+ messages in thread
From: Michal Hocko @ 2017-07-24 14:28 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Michael Ellerman, Jiri Kosina, linux-mm, LKML

On Mon 24-07-17 09:57:14, Tejun Heo wrote:
> Hello,

Hi,
and thanks for the swift answer.

> On Mon, Jul 24, 2017 at 03:42:40PM +0200, Michal Hocko wrote:
[...]
> > My understanding of the pcpu allocator is basically close to zero, but
> > it seems weird to me that we would need many TB of vmalloc address
> > space just to allocate vmalloc areas that are in the range of hundreds
> > of MB. So I am wondering whether this is expected behavior of the
> > allocator or there is a problem somewhere else.
> 
> It's not actually using the entire region, but the area allocations try
> to follow the same topology as the kernel linear address layout.  I.e.,
> if the kernel addresses for different NUMA nodes are a certain distance
> apart, the percpu allocator tries to replicate that distance for
> dynamic allocations.  That allows the static and first dynamic areas to
> stay in the kernel linear mapping, which helps reduce TLB pressure.
> 
> This optimization can be turned off when the vmalloc area isn't
> spacious enough, by using pcpu_page_first_chunk() instead of
> pcpu_embed_first_chunk() when initializing the percpu allocator.

Thanks for the clarification, this is really helpful!

> Can you
> see whether replacing that in arch/powerpc/kernel/setup_64.c fixes
> the issue?  If so, all that remains is figuring out what conditions we
> need to check to opt out of embedding the first chunk.  Note that
> 32-bit x86 does about the same thing.

Hmm, I will need some help from the PPC guys here. I cannot find
anything ready-made to implement pcpup_populate_pte, and I am not
familiar enough with the ppc memory model to implement one myself.
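
For context, the callback that pcpu_page_first_chunk() wants is tiny on
x86 (arch/x86/kernel/setup_percpu.c), just a wrapper around an
arch-specific page-table helper:

	static void __init pcpup_populate_pte(unsigned long addr)
	{
		populate_extra_pte(addr);
	}

ppc doesn't seem to have a ready analogue of populate_extra_pte for
pre-faulting the percpu addresses in vmalloc space, which is the part I
would need help with.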
-- 
Michal Hocko
SUSE Labs

* Re: pcpu allocator on large NUMA machines
  2017-07-24 14:28   ` Michal Hocko
@ 2017-07-25  1:26     ` Michael Ellerman
  2017-07-25 16:39       ` Tejun Heo
  0 siblings, 1 reply; 5+ messages in thread
From: Michael Ellerman @ 2017-07-25  1:26 UTC (permalink / raw)
  To: Michal Hocko, Tejun Heo; +Cc: Jiri Kosina, linux-mm, LKML

Michal Hocko <mhocko@kernel.org> writes:

> On Mon 24-07-17 09:57:14, Tejun Heo wrote:
>> On Mon, Jul 24, 2017 at 03:42:40PM +0200, Michal Hocko wrote:
> [...]
>> > My understanding of the pcpu allocator is basically close to zero, but
>> > it seems weird to me that we would need many TB of vmalloc address
>> > space just to allocate vmalloc areas that are in the range of hundreds
>> > of MB. So I am wondering whether this is expected behavior of the
>> > allocator or there is a problem somewhere else.
>> 
>> It's not actually using the entire region, but the area allocations try
>> to follow the same topology as the kernel linear address layout.  I.e.,
>> if the kernel addresses for different NUMA nodes are a certain distance
>> apart, the percpu allocator tries to replicate that distance for
>> dynamic allocations.  That allows the static and first dynamic areas to
>> stay in the kernel linear mapping, which helps reduce TLB pressure.
>> 
>> This optimization can be turned off when the vmalloc area isn't
>> spacious enough, by using pcpu_page_first_chunk() instead of
>> pcpu_embed_first_chunk() when initializing the percpu allocator.
>
> Thanks for the clarification, this is really helpful!
>
>> Can you
>> see whether replacing that in arch/powerpc/kernel/setup_64.c fixes
>> the issue?  If so, all that remains is figuring out what conditions we
>> need to check to opt out of embedding the first chunk.  Note that
>> 32-bit x86 does about the same thing.
>
> Hmm, I will need some help from the PPC guys here. I cannot find
> anything ready-made to implement pcpup_populate_pte, and I am not
> familiar enough with the ppc memory model to implement one myself.

I don't think we want to stop using embed first chunk unless we have to.

We have code that accesses percpu variables in real mode (with the MMU
off), and that wouldn't work easily if the first chunk wasn't in the
linear mapping. So it's not just an optimisation for us.

We can fairly easily make the vmalloc space 56T, and I'm working on a
patch to make it ~500T on newer machines.

cheers

* Re: pcpu allocator on large NUMA machines
  2017-07-25  1:26     ` Michael Ellerman
@ 2017-07-25 16:39       ` Tejun Heo
  0 siblings, 0 replies; 5+ messages in thread
From: Tejun Heo @ 2017-07-25 16:39 UTC (permalink / raw)
  To: Michael Ellerman; +Cc: Michal Hocko, Jiri Kosina, linux-mm, LKML

Hello, Michael.

On Tue, Jul 25, 2017 at 11:26:03AM +1000, Michael Ellerman wrote:
> I don't think we want to stop using embed first chunk unless we have to.
> 
> We have code that accesses percpu variables in real mode (with the MMU
> off), and that wouldn't work easily if the first chunk wasn't in the
> linear mapping. So it's not just an optimisation for us.
> 
> We can fairly easily make the vmalloc space 56T, and I'm working on a
> patch to make it ~500T on newer machines.

Yeah, the only constraint is the size of the vmalloc area in relation
to the maximum spread across NUMA regions.  If the vmalloc space can be
made bigger, that'd be the best option.  Since the area the percpu
allocator actually uses is comparatively very small, the vmalloc space
doesn't have to be a lot larger either.

Thanks.

-- 
tejun

end of thread

Thread overview: 5+ messages
2017-07-24 13:42 pcpu allocator on large NUMA machines Michal Hocko
2017-07-24 13:57 ` Tejun Heo
2017-07-24 14:28   ` Michal Hocko
2017-07-25  1:26     ` Michael Ellerman
2017-07-25 16:39       ` Tejun Heo
