linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH] powerpc/numa: fix hot-added CPU on memory-less node
@ 2018-11-14 17:03 Laurent Vivier
  2018-11-15  9:19 ` Satheesh Rajendran
  0 siblings, 1 reply; 4+ messages in thread
From: Laurent Vivier @ 2018-11-14 17:03 UTC (permalink / raw)
  To: Michael Ellerman
  Cc: Laurent Vivier, Satheesh Rajendran, linux-kernel,
	Michael Bringmann, Nathan Fontenot, linuxppc-dev

Trying to hotplug a CPU on an empty NUMA node (without
memory or CPU) crashes the kernel when the CPU is onlined.

During the onlining process, the kernel calls start_secondary()
that ends by calling
set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
that relies on NODE_DATA(nid)->node_zonelists and in our case
NODE_DATA(nid) is NULL.

To fix that, add the same checking as we already have in
find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
the first online node.

Bug: https://github.com/linuxppc/linux/issues/184
Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
       (powerpc/numa: Ensure nodes initialized for hotplug)
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 arch/powerpc/mm/numa.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
index 3a048e98a132..1b2d25a3c984 100644
--- a/arch/powerpc/mm/numa.c
+++ b/arch/powerpc/mm/numa.c
@@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
 	if (nid < 0 || !node_possible(nid))
 		nid = first_online_node;
 
+	if (NODE_DATA(nid) == NULL) {
+		/*
+		 * Default to using the nearest node that has memory installed.
+		 * Otherwise, it would be necessary to patch the kernel MM code
+		 * to deal with more memoryless-node error conditions.
+		 */
+		nid = first_online_node;
+	}
+
 	map_cpu_to_node(lcpu, nid);
 	of_node_put(cpu);
 out:
-- 
2.17.2



* Re: [PATCH] powerpc/numa: fix hot-added CPU on memory-less node
  2018-11-14 17:03 [PATCH] powerpc/numa: fix hot-added CPU on memory-less node Laurent Vivier
@ 2018-11-15  9:19 ` Satheesh Rajendran
  2018-11-15 15:04   ` Laurent Vivier
  2018-11-21 19:06   ` Laurent Vivier
  0 siblings, 2 replies; 4+ messages in thread
From: Satheesh Rajendran @ 2018-11-15  9:19 UTC (permalink / raw)
  To: Laurent Vivier
  Cc: Satheesh Rajendran, linux-kernel, Michael Bringmann,
	Nathan Fontenot, linuxppc-dev

On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
> Trying to hotplug a CPU on an empty NUMA node (without
> memory or CPU) crashes the kernel when the CPU is onlined.
> 
> During the onlining process, the kernel calls start_secondary()
> that ends by calling
> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
> that relies on NODE_DATA(nid)->node_zonelists and in our case
> NODE_DATA(nid) is NULL.
> 
> To fix that, add the same checking as we already have in
> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
> the first online node.
> 
> Bug: https://github.com/linuxppc/linux/issues/184
> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
>        (powerpc/numa: Ensure nodes initialized for hotplug)
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  arch/powerpc/mm/numa.c | 9 +++++++++
>  1 file changed, 9 insertions(+)

This patch causes a regression in the cold-plug NUMA case (Case 1) and
in the hotplug + reboot case (Case 2): all vcpus end up in node 0.


Env: HW: Power8 Host.
Kernel: 4.20-rc2 + this patch

Case 1:
1. Boot a guest with 8 vcpus (all available), spread out across 4 NUMA nodes:
<vcpu placement='static'>8</vcpu>
...
   <numa>
      <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
      <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
      <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
    </numa>

2. Check lscpu --- all vcpus are added to node 0 --> NOK

# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0-7
NUMA node1 CPU(s):   
NUMA node2 CPU(s):   
NUMA node3 CPU(s): 

Without this patch it works fine:
# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0,1
NUMA node1 CPU(s):   2,3
NUMA node2 CPU(s):   4,5
NUMA node3 CPU(s):   6,7


Case 2:
1. Boot a guest with 8 vcpus (2 available, 6 possible), spread out across 4 NUMA nodes:
<vcpu placement='static' current='2'>8</vcpu>
...
   <numa>
      <cell id='0' cpus='0-1' memory='0' unit='KiB'/>
      <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
      <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
      <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
    </numa>

2. Hotplug all vcpus
# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        2
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0,1,4-7
NUMA node1 CPU(s):   2,3


3. reboot the guest
# lscpu
Architecture:        ppc64le
Byte Order:          Little Endian
CPU(s):              8
On-line CPU(s) list: 0-7
Thread(s) per core:  1
Core(s) per socket:  8
Socket(s):           1
NUMA node(s):        4
Model:               2.1 (pvr 004b 0201)
Model name:          POWER8 (architected), altivec supported
Hypervisor vendor:   KVM
Virtualization type: para
L1d cache:           64K
L1i cache:           32K
NUMA node0 CPU(s):   0-7
NUMA node1 CPU(s):
NUMA node2 CPU(s):
NUMA node3 CPU(s):


Without this patch, Case 2 crashes the guest during hotplug, i.e. the
issue reported in https://github.com/linuxppc/linux/issues/184

Regards,
-Satheesh.

> 
> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
> index 3a048e98a132..1b2d25a3c984 100644
> --- a/arch/powerpc/mm/numa.c
> +++ b/arch/powerpc/mm/numa.c
> @@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
>  	if (nid < 0 || !node_possible(nid))
>  		nid = first_online_node;
> 
> +	if (NODE_DATA(nid) == NULL) {
> +		/*
> +		 * Default to using the nearest node that has memory installed.
> +		 * Otherwise, it would be necessary to patch the kernel MM code
> +		 * to deal with more memoryless-node error conditions.
> +		 */
> +		nid = first_online_node;
> +	}
> +
>  	map_cpu_to_node(lcpu, nid);
>  	of_node_put(cpu);
>  out:
> -- 
> 2.17.2
> 



* Re: [PATCH] powerpc/numa: fix hot-added CPU on memory-less node
  2018-11-15  9:19 ` Satheesh Rajendran
@ 2018-11-15 15:04   ` Laurent Vivier
  2018-11-21 19:06   ` Laurent Vivier
  1 sibling, 0 replies; 4+ messages in thread
From: Laurent Vivier @ 2018-11-15 15:04 UTC (permalink / raw)
  To: Satheesh Rajendran
  Cc: Satheesh Rajendran, linux-kernel, Michael Bringmann,
	Nathan Fontenot, linuxppc-dev

On 15/11/2018 10:19, Satheesh Rajendran wrote:
> On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
>> Trying to hotplug a CPU on an empty NUMA node (without
>> memory or CPU) crashes the kernel when the CPU is onlined.
>>
>> During the onlining process, the kernel calls start_secondary()
>> that ends by calling
>> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
>> that relies on NODE_DATA(nid)->node_zonelists and in our case
>> NODE_DATA(nid) is NULL.
>>
>> To fix that, add the same checking as we already have in
>> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
>> the first online node.
>>
>> Bug: https://github.com/linuxppc/linux/issues/184
>> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
>>        (powerpc/numa: Ensure nodes initialized for hotplug)
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>>  arch/powerpc/mm/numa.c | 9 +++++++++
>>  1 file changed, 9 insertions(+)
> 
> This patch causes regression for cold plug numa case(Case 1) and 
> hotplug case + reboot(Case 2) with adding all vcpus into node 0.
> 
> 
> Env: HW: Power8 Host.
> Kernel: 4.20-rc2 + this patch
> 
> Case 1:
> 1. boot a guest with 8 vcpus(all available), spreadout in 4 numa nodes.
> <vcpu placement='static'>8</vcpu>
> ...
>    <numa>
>       <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
>       <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>       <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>       <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
>     </numa>
> 
> 2. Check lscpu --- all vcpus are added to node0 --> NOK
> 
> # lscpu
...
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):   
> NUMA node2 CPU(s):   
> NUMA node3 CPU(s): 
> 
> without this patch it was working fine.
> # lscpu
...
> NUMA node0 CPU(s):   0,1
> NUMA node1 CPU(s):   2,3
> NUMA node2 CPU(s):   4,5
> NUMA node3 CPU(s):   6,7
> 

Good point. Thank you.

I'm going to look into what happens in the cold-plug case and how it
manages to online CPUs on nodes whose NODE_DATA() is NULL (since that
is exactly what the patch changes).

Thanks,
Laurent


* Re: [PATCH] powerpc/numa: fix hot-added CPU on memory-less node
  2018-11-15  9:19 ` Satheesh Rajendran
  2018-11-15 15:04   ` Laurent Vivier
@ 2018-11-21 19:06   ` Laurent Vivier
  1 sibling, 0 replies; 4+ messages in thread
From: Laurent Vivier @ 2018-11-21 19:06 UTC (permalink / raw)
  To: Satheesh Rajendran, Michael Bringmann
  Cc: linuxppc-dev, Satheesh Rajendran, linux-kernel, Nathan Fontenot

On 15/11/2018 10:19, Satheesh Rajendran wrote:
> On Wed, Nov 14, 2018 at 06:03:19PM +0100, Laurent Vivier wrote:
>> Trying to hotplug a CPU on an empty NUMA node (without
>> memory or CPU) crashes the kernel when the CPU is onlined.
>>
>> During the onlining process, the kernel calls start_secondary()
>> that ends by calling
>> set_numa_mem(local_memory_node(numa_cpu_lookup_table[cpu]))
>> that relies on NODE_DATA(nid)->node_zonelists and in our case
>> NODE_DATA(nid) is NULL.
>>
>> To fix that, add the same checking as we already have in
>> find_and_online_cpu_nid(): if NODE_DATA() is NULL, use
>> the first online node.
>>
>> Bug: https://github.com/linuxppc/linux/issues/184
>> Fixes: ea05ba7c559c8e5a5946c3a94a2a266e9a6680a6
>>        (powerpc/numa: Ensure nodes initialized for hotplug)
>> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
>> ---
>>  arch/powerpc/mm/numa.c | 9 +++++++++
>>  1 file changed, 9 insertions(+)
> 
> This patch causes regression for cold plug numa case(Case 1) and 
> hotplug case + reboot(Case 2) with adding all vcpus into node 0.
> 
> 
> Env: HW: Power8 Host.
> Kernel: 4.20-rc2 + this patch
> 
> Case 1:
> 1. boot a guest with 8 vcpus(all available), spreadout in 4 numa nodes.
> <vcpu placement='static'>8</vcpu>
> ...
>    <numa>
>       <cell id='0' cpus='0-1' memory='4194304' unit='KiB'/>
>       <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>       <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>       <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
>     </numa>
> 
> 2. Check lscpu --- all vcpus are added to node0 --> NOK
> 
> # lscpu
...
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):   
> NUMA node2 CPU(s):   
> NUMA node3 CPU(s): 
> 
> without this patch it was working fine.
> # lscpu
...
> NUMA node0 CPU(s):   0,1
> NUMA node1 CPU(s):   2,3
> NUMA node2 CPU(s):   4,5
> NUMA node3 CPU(s):   6,7
> 
> 
> Case 2:
> 1. boot a guest with 8 vcpus(2 available, 6 possible), spreadout in 4 numa nodes.
> <vcpu placement='static' current='2'>8</vcpu>
> ...
>    <numa>
>       <cell id='0' cpus='0-1' memory='0' unit='KiB'/>
>       <cell id='1' cpus='2-3' memory='4194304' unit='KiB'/>
>       <cell id='2' cpus='4-5' memory='0' unit='KiB'/>
>       <cell id='3' cpus='6-7' memory='0' unit='KiB'/>
>     </numa>
> 
> 2. Hotplug all vcpus
> # lscpu
...
> NUMA node0 CPU(s):   0,1,4-7
> NUMA node1 CPU(s):   2,3
> 
> 
> 3. reboot the guest
> # lscpu
...
> NUMA node0 CPU(s):   0-7
> NUMA node1 CPU(s):
> NUMA node2 CPU(s):
> NUMA node3 CPU(s):
> 
> 
> Without this patch, Case 2 crashes the guest during hotplug, i.e
> issue reported in https://github.com/linuxppc/linux/issues/184
> 
> Regards,
> -Satheesh.
> 
>>
>> diff --git a/arch/powerpc/mm/numa.c b/arch/powerpc/mm/numa.c
>> index 3a048e98a132..1b2d25a3c984 100644
>> --- a/arch/powerpc/mm/numa.c
>> +++ b/arch/powerpc/mm/numa.c
>> @@ -483,6 +483,15 @@ static int numa_setup_cpu(unsigned long lcpu)
>>  	if (nid < 0 || !node_possible(nid))
>>  		nid = first_online_node;
>>
>> +	if (NODE_DATA(nid) == NULL) {
>> +		/*
>> +		 * Default to using the nearest node that has memory installed.
>> +		 * Otherwise, it would be necessary to patch the kernel MM code
>> +		 * to deal with more memoryless-node error conditions.
>> +		 */
>> +		nid = first_online_node;
>> +	}
>> +
>>  	map_cpu_to_node(lcpu, nid);
>>  	of_node_put(cpu);
>>  out:
>> -- 
>> 2.17.2
>>
> 

I have worked on this problem for a while, and I don't see any easy
fix. It seems the kernel is not ready to online a memory-less/CPU-less
node when someone hotplugs a CPU into it. I think we would have to fix
several areas to be able to do that.

Perhaps someone from IBM could have a better view on what we need?

Michael? Nathan?

Thanks,
Laurent

