* [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines
@ 2023-02-25  6:35 Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required Gavin Shan
                   ` (3 more replies)
  0 siblings, 4 replies; 9+ messages in thread
From: Gavin Shan @ 2023-02-25  6:35 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

For the arm64 and riscv architectures, the driver (drivers/base/arch_topology.c)
is used to populate the CPU topology in the Linux guest. It requires that
the CPUs in one cluster don't span multiple NUMA nodes. Otherwise, the Linux
scheduling domains can't be sorted out, as the following warning message
indicates. To avoid this confusion, this series warns about such
irregular configurations.

   -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
   -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
   -numa node,nodeid=2,cpus=4-5,memdev=ram2                \

   ------------[ cut here ]------------
   WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
   Modules linked in:
   CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
   pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
   pc : build_sched_domains+0x284/0x910
   lr : build_sched_domains+0x184/0x910
   sp : ffff80000804bd50
   x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
   x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
   x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
   x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
   x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
   x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
   x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
   x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
   x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
   x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
   Call trace:
    build_sched_domains+0x284/0x910
    sched_init_domains+0xac/0xe0
    sched_init_smp+0x48/0xc8
    kernel_init_freeable+0x140/0x1ac
    kernel_init+0x28/0x140
    ret_from_fork+0x10/0x20
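
For comparison, a layout where each cluster stays within one NUMA node
boots cleanly. A hypothetical regular configuration, assuming the same
memory backends, would be:

   -smp 6,maxcpus=6,sockets=3,clusters=1,cores=2,threads=1 \
   -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
   -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
   -numa node,nodeid=2,cpus=4-5,memdev=ram2                \

Each single-cluster socket maps to exactly one NUMA node, so no cluster
spans multiple nodes.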

PATCH[1] Warn about the irregular configuration if required
PATCH[2] Enable the validation for aarch64 machines
PATCH[3] Enable the validation for riscv machines

v2: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg01080.html
v1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00886.html

Changelog
=========
v3:
  * Validate cluster-to-NUMA instead of socket-to-NUMA
    boundary                                                  (Gavin)
  * Move the switch from MachineState to MachineClass         (Philippe)
  * Warning instead of rejecting the irregular configuration  (Daniel)
  * Comments to mention cluster-to-NUMA is platform instead
    of architectural choice                                   (Drew)
  * Drop PATCH[v2 1/4] related to qtests/numa-test            (Gavin)
v2:
  * Fix socket-NUMA-node boundary issues in qtests/numa-test  (Gavin)
  * Add helper set_numa_socket_boundary() and validate the
    boundary in the generic path                              (Philippe)

Gavin Shan (3):
  numa: Validate cluster and NUMA node boundary if required
  hw/arm: Validate cluster and NUMA node boundary
  hw/riscv: Validate cluster and NUMA node boundary

 hw/arm/sbsa-ref.c   |  2 ++
 hw/arm/virt.c       |  2 ++
 hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 hw/riscv/spike.c    |  2 ++
 hw/riscv/virt.c     |  2 ++
 include/hw/boards.h |  1 +
 6 files changed, 51 insertions(+)

-- 
2.23.0




* [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required
  2023-02-25  6:35 [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
@ 2023-02-25  6:35 ` Gavin Shan
  2023-03-13 11:40   ` Philippe Mathieu-Daudé
  2023-03-17  6:29   ` Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 2/3] hw/arm: Validate cluster and NUMA node boundary Gavin Shan
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 9+ messages in thread
From: Gavin Shan @ 2023-02-25  6:35 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

For some architectures like ARM64, multiple CPUs in one cluster can be
associated with different NUMA nodes, which is an irregular configuration
because it shouldn't happen in a bare-metal environment. The irregular
configuration causes the Linux guest to misbehave, as the following warning
messages indicate.

  -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
  -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
  -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
  -numa node,nodeid=2,cpus=4-5,memdev=ram2                \

  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
  Modules linked in:
  CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
  pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
  pc : build_sched_domains+0x284/0x910
  lr : build_sched_domains+0x184/0x910
  sp : ffff80000804bd50
  x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
  x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
  x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
  x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
  x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
  x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
  x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
  x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
  x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
  x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
  Call trace:
   build_sched_domains+0x284/0x910
   sched_init_domains+0xac/0xe0
   sched_init_smp+0x48/0xc8
   kernel_init_freeable+0x140/0x1ac
   kernel_init+0x28/0x140
   ret_from_fork+0x10/0x20

Improve the situation by warning when multiple CPUs in one cluster have
been associated with different NUMA nodes. However, one NUMA node is
still allowed to be associated with multiple clusters.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
 include/hw/boards.h |  1 +
 2 files changed, 43 insertions(+)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index f29e700ee4..3513df5a86 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -1252,6 +1252,45 @@ static void machine_numa_finish_cpu_init(MachineState *machine)
     g_string_free(s, true);
 }
 
+static void validate_cpu_cluster_to_numa_boundary(MachineState *ms)
+{
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    NumaState *state = ms->numa_state;
+    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
+    const CPUArchId *cpus = possible_cpus->cpus;
+    int len = possible_cpus->len, i, j;
+
+    if (state->num_nodes <= 1 || len <= 1) {
+        return;
+    }
+
+    /*
+     * The Linux scheduling domain can't be parsed when the multiple CPUs
+     * in one cluster have been associated with different NUMA nodes. However,
+     * it's fine to associate one NUMA node with CPUs in different clusters.
+     */
+    for (i = 0; i < len; i++) {
+        for (j = i + 1; j < len; j++) {
+            if (cpus[i].props.has_socket_id &&
+                cpus[i].props.has_cluster_id &&
+                cpus[i].props.has_node_id &&
+                cpus[j].props.has_socket_id &&
+                cpus[j].props.has_cluster_id &&
+                cpus[j].props.has_node_id &&
+                cpus[i].props.socket_id == cpus[j].props.socket_id &&
+                cpus[i].props.cluster_id == cpus[j].props.cluster_id &&
+                cpus[i].props.node_id != cpus[j].props.node_id) {
+                warn_report("CPU-%d and CPU-%d in socket-%ld-cluster-%ld "
+                             "have been associated with node-%ld and node-%ld "
+                             "respectively. It can cause OSes like Linux to"
+                             "misbehave", i, j, cpus[i].props.socket_id,
+                             cpus[i].props.cluster_id, cpus[i].props.node_id,
+                             cpus[j].props.node_id);
+            }
+        }
+    }
+}
+
 MemoryRegion *machine_consume_memdev(MachineState *machine,
                                      HostMemoryBackend *backend)
 {
@@ -1337,6 +1376,9 @@ void machine_run_board_init(MachineState *machine, const char *mem_path, Error *
         numa_complete_configuration(machine);
         if (machine->numa_state->num_nodes) {
             machine_numa_finish_cpu_init(machine);
+            if (machine_class->cpu_cluster_has_numa_boundary) {
+                validate_cpu_cluster_to_numa_boundary(machine);
+            }
         }
     }
 
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 6fbbfd56c8..c9793b2789 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -273,6 +273,7 @@ struct MachineClass {
     bool nvdimm_supported;
     bool numa_mem_supported;
     bool auto_enable_numa;
+    bool cpu_cluster_has_numa_boundary;
     SMPCompatProps smp_props;
     const char *default_ram_id;
 
-- 
2.23.0




* [PATCH v3 2/3] hw/arm: Validate cluster and NUMA node boundary
  2023-02-25  6:35 [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required Gavin Shan
@ 2023-02-25  6:35 ` Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 3/3] hw/riscv: " Gavin Shan
  2023-03-13  7:15 ` [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
  3 siblings, 0 replies; 9+ messages in thread
From: Gavin Shan @ 2023-02-25  6:35 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

There are two NUMA-aware ARM machines: 'virt' and 'sbsa-ref'. Both of
them are required to follow the cluster-NUMA-node boundary. Enable the
validation to warn about the irregular configuration where multiple
CPUs in one cluster have been associated with different NUMA nodes.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 hw/arm/sbsa-ref.c | 2 ++
 hw/arm/virt.c     | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/hw/arm/sbsa-ref.c b/hw/arm/sbsa-ref.c
index f778cb6d09..91d38af94c 100644
--- a/hw/arm/sbsa-ref.c
+++ b/hw/arm/sbsa-ref.c
@@ -864,6 +864,8 @@ static void sbsa_ref_class_init(ObjectClass *oc, void *data)
     mc->possible_cpu_arch_ids = sbsa_ref_possible_cpu_arch_ids;
     mc->cpu_index_to_instance_props = sbsa_ref_cpu_index_to_props;
     mc->get_default_cpu_node_id = sbsa_ref_get_default_cpu_node_id;
+    /* platform instead of architectural choice */
+    mc->cpu_cluster_has_numa_boundary = true;
 }
 
 static const TypeInfo sbsa_ref_info = {
diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index ac626b3bef..b73ac6eabb 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -3030,6 +3030,8 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     mc->smp_props.clusters_supported = true;
     mc->auto_enable_numa_with_memhp = true;
     mc->auto_enable_numa_with_memdev = true;
+    /* platform instead of architectural choice */
+    mc->cpu_cluster_has_numa_boundary = true;
     mc->default_ram_id = "mach-virt.ram";
 
     object_class_property_add(oc, "acpi", "OnOffAuto",
-- 
2.23.0




* [PATCH v3 3/3] hw/riscv: Validate cluster and NUMA node boundary
  2023-02-25  6:35 [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required Gavin Shan
  2023-02-25  6:35 ` [PATCH v3 2/3] hw/arm: Validate cluster and NUMA node boundary Gavin Shan
@ 2023-02-25  6:35 ` Gavin Shan
  2023-02-27 12:38   ` Daniel Henrique Barboza
  2023-03-13  7:15 ` [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
  3 siblings, 1 reply; 9+ messages in thread
From: Gavin Shan @ 2023-02-25  6:35 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

There are two NUMA-aware RISC-V machines: 'virt' and 'spike'. Both of
them are required to follow the cluster-NUMA-node boundary. Enable the
validation to warn about the irregular configuration where multiple
CPUs in one cluster have been associated with multiple NUMA nodes.

Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 hw/riscv/spike.c | 2 ++
 hw/riscv/virt.c  | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
index cc3f6dac17..b09b993634 100644
--- a/hw/riscv/spike.c
+++ b/hw/riscv/spike.c
@@ -357,6 +357,8 @@ static void spike_machine_class_init(ObjectClass *oc, void *data)
     mc->cpu_index_to_instance_props = riscv_numa_cpu_index_to_props;
     mc->get_default_cpu_node_id = riscv_numa_get_default_cpu_node_id;
     mc->numa_mem_supported = true;
+    /* platform instead of architectural choice */
+    mc->cpu_cluster_has_numa_boundary = true;
     mc->default_ram_id = "riscv.spike.ram";
 }
 
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index b81081c70b..e5bb168169 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -1636,6 +1636,8 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
     mc->cpu_index_to_instance_props = riscv_numa_cpu_index_to_props;
     mc->get_default_cpu_node_id = riscv_numa_get_default_cpu_node_id;
     mc->numa_mem_supported = true;
+    /* platform instead of architectural choice */
+    mc->cpu_cluster_has_numa_boundary = true;
     mc->default_ram_id = "riscv_virt_board.ram";
     assert(!mc->get_hotplug_handler);
     mc->get_hotplug_handler = virt_machine_get_hotplug_handler;
-- 
2.23.0




* Re: [PATCH v3 3/3] hw/riscv: Validate cluster and NUMA node boundary
  2023-02-25  6:35 ` [PATCH v3 3/3] hw/riscv: " Gavin Shan
@ 2023-02-27 12:38   ` Daniel Henrique Barboza
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2023-02-27 12:38 UTC (permalink / raw)
  To: Gavin Shan, qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, shan.gavin



On 2/25/23 03:35, Gavin Shan wrote:
> There are two NUMA-aware RISC-V machines: 'virt' and 'spike'. Both of
> them are required to follow the cluster-NUMA-node boundary. Enable the
> validation to warn about the irregular configuration where multiple
> CPUs in one cluster have been associated with multiple NUMA nodes.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---

Reviewed-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>

>   hw/riscv/spike.c | 2 ++
>   hw/riscv/virt.c  | 2 ++
>   2 files changed, 4 insertions(+)
> 
> diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
> index cc3f6dac17..b09b993634 100644
> --- a/hw/riscv/spike.c
> +++ b/hw/riscv/spike.c
> @@ -357,6 +357,8 @@ static void spike_machine_class_init(ObjectClass *oc, void *data)
>       mc->cpu_index_to_instance_props = riscv_numa_cpu_index_to_props;
>       mc->get_default_cpu_node_id = riscv_numa_get_default_cpu_node_id;
>       mc->numa_mem_supported = true;
> +    /* platform instead of architectural choice */
> +    mc->cpu_cluster_has_numa_boundary = true;
>       mc->default_ram_id = "riscv.spike.ram";
>   }
>   
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index b81081c70b..e5bb168169 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -1636,6 +1636,8 @@ static void virt_machine_class_init(ObjectClass *oc, void *data)
>       mc->cpu_index_to_instance_props = riscv_numa_cpu_index_to_props;
>       mc->get_default_cpu_node_id = riscv_numa_get_default_cpu_node_id;
>       mc->numa_mem_supported = true;
> +    /* platform instead of architectural choice */
> +    mc->cpu_cluster_has_numa_boundary = true;
>       mc->default_ram_id = "riscv_virt_board.ram";
>       assert(!mc->get_hotplug_handler);
>       mc->get_hotplug_handler = virt_machine_get_hotplug_handler;



* Re: [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines
  2023-02-25  6:35 [PATCH v3 0/3] NUMA: Apply cluster-NUMA-node boundary for aarch64 and riscv machines Gavin Shan
                   ` (2 preceding siblings ...)
  2023-02-25  6:35 ` [PATCH v3 3/3] hw/riscv: " Gavin Shan
@ 2023-03-13  7:15 ` Gavin Shan
  3 siblings, 0 replies; 9+ messages in thread
From: Gavin Shan @ 2023-03-13  7:15 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

On 2/25/23 2:35 PM, Gavin Shan wrote:
> For the arm64 and riscv architectures, the driver (drivers/base/arch_topology.c)
> is used to populate the CPU topology in the Linux guest. It requires that
> the CPUs in one cluster don't span multiple NUMA nodes. Otherwise, the Linux
> scheduling domains can't be sorted out, as the following warning message
> indicates. To avoid this confusion, this series warns about such
> irregular configurations.
> 
>     -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>     -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>     -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>     -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
> 
>     ------------[ cut here ]------------
>     WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>     Modules linked in:
>     CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>     pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>     pc : build_sched_domains+0x284/0x910
>     lr : build_sched_domains+0x184/0x910
>     sp : ffff80000804bd50
>     x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>     x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>     x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>     x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>     x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>     x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>     x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>     x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>     x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>     x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>     Call trace:
>      build_sched_domains+0x284/0x910
>      sched_init_domains+0xac/0xe0
>      sched_init_smp+0x48/0xc8
>      kernel_init_freeable+0x140/0x1ac
>      kernel_init+0x28/0x140
>      ret_from_fork+0x10/0x20
> 
> PATCH[1] Warn about the irregular configuration if required
> PATCH[2] Enable the validation for aarch64 machines
> PATCH[3] Enable the validation for riscv machines
> 
> v2: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg01080.html
> v1: https://lists.nongnu.org/archive/html/qemu-arm/2023-02/msg00886.html
> 
> Changelog
> =========
> v3:
>    * Validate cluster-to-NUMA instead of socket-to-NUMA
>      boundary                                                  (Gavin)
>    * Move the switch from MachineState to MachineClass         (Philippe)
>    * Warning instead of rejecting the irregular configuration  (Daniel)
>    * Comments to mention cluster-to-NUMA is platform instead
>      of architectural choice                                   (Drew)
>    * Drop PATCH[v2 1/4] related to qtests/numa-test            (Gavin)
> v2:
>    * Fix socket-NUMA-node boundary issues in qtests/numa-test  (Gavin)
>    * Add helper set_numa_socket_boundary() and validate the
>      boundary in the generic path                              (Philippe)
> 

Ping, Philippe and Igor. Please let me know if you have more
comments, thanks!

> Gavin Shan (3):
>    numa: Validate cluster and NUMA node boundary if required
>    hw/arm: Validate cluster and NUMA node boundary
>    hw/riscv: Validate cluster and NUMA node boundary
> 
>   hw/arm/sbsa-ref.c   |  2 ++
>   hw/arm/virt.c       |  2 ++
>   hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
>   hw/riscv/spike.c    |  2 ++
>   hw/riscv/virt.c     |  2 ++
>   include/hw/boards.h |  1 +
>   6 files changed, 51 insertions(+)
> 




* Re: [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required
  2023-02-25  6:35 ` [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required Gavin Shan
@ 2023-03-13 11:40   ` Philippe Mathieu-Daudé
  2023-03-14  6:23     ` Gavin Shan
  2023-03-17  6:29   ` Gavin Shan
  1 sibling, 1 reply; 9+ messages in thread
From: Philippe Mathieu-Daudé @ 2023-03-13 11:40 UTC (permalink / raw)
  To: Gavin Shan, qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, wangyanan55, palmer, alistair.francis,
	bin.meng, thuth, lvivier, pbonzini, imammedo, yihyu, ajones,
	berrange, dbarboza, shan.gavin

On 25/2/23 07:35, Gavin Shan wrote:
> For some architectures like ARM64, multiple CPUs in one cluster can be
> associated with different NUMA nodes, which is an irregular configuration
> because it shouldn't happen in a bare-metal environment. The irregular
> configuration causes the Linux guest to misbehave, as the following warning
> messages indicate.
> 
>    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
> 
>    ------------[ cut here ]------------
>    WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>    Modules linked in:
>    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>    pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>    pc : build_sched_domains+0x284/0x910
>    lr : build_sched_domains+0x184/0x910
>    sp : ffff80000804bd50
>    x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>    x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>    x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>    x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>    x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>    x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>    x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>    x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>    x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>    x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>    Call trace:
>     build_sched_domains+0x284/0x910
>     sched_init_domains+0xac/0xe0
>     sched_init_smp+0x48/0xc8
>     kernel_init_freeable+0x140/0x1ac
>     kernel_init+0x28/0x140
>     ret_from_fork+0x10/0x20
> 
> Improve the situation by warning when multiple CPUs in one cluster have
> been associated with different NUMA nodes. However, one NUMA node is
> still allowed to be associated with multiple clusters.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>   hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
>   include/hw/boards.h |  1 +
>   2 files changed, 43 insertions(+)
> 
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index f29e700ee4..3513df5a86 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -1252,6 +1252,45 @@ static void machine_numa_finish_cpu_init(MachineState *machine)
>       g_string_free(s, true);
>   }
>   
> +static void validate_cpu_cluster_to_numa_boundary(MachineState *ms)
> +{
> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
> +    NumaState *state = ms->numa_state;
> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
> +    const CPUArchId *cpus = possible_cpus->cpus;
> +    int len = possible_cpus->len, i, j;

(Nitpicking, 'len' variable is not very useful).
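
Untested, but a sketch of that simplification might look like:

   static void validate_cpu_cluster_to_numa_boundary(MachineState *ms)
   {
       MachineClass *mc = MACHINE_GET_CLASS(ms);
       NumaState *state = ms->numa_state;
       const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
       const CPUArchId *cpus = possible_cpus->cpus;
       int i, j;

       if (state->num_nodes <= 1 || possible_cpus->len <= 1) {
           return;
       }

       for (i = 0; i < possible_cpus->len; i++) {
           for (j = i + 1; j < possible_cpus->len; j++) {
               /* same property comparison and warn_report() as above */
           }
       }
   }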

> +
> +    if (state->num_nodes <= 1 || len <= 1) {
> +        return;
> +    }
> +
> +    /*
> +     * The Linux scheduling domain can't be parsed when the multiple CPUs
> +     * in one cluster have been associated with different NUMA nodes. However,
> +     * it's fine to associate one NUMA node with CPUs in different clusters.
> +     */
> +    for (i = 0; i < len; i++) {
> +        for (j = i + 1; j < len; j++) {
> +            if (cpus[i].props.has_socket_id &&
> +                cpus[i].props.has_cluster_id &&
> +                cpus[i].props.has_node_id &&
> +                cpus[j].props.has_socket_id &&
> +                cpus[j].props.has_cluster_id &&
> +                cpus[j].props.has_node_id &&
> +                cpus[i].props.socket_id == cpus[j].props.socket_id &&
> +                cpus[i].props.cluster_id == cpus[j].props.cluster_id &&
> +                cpus[i].props.node_id != cpus[j].props.node_id) {
> +                warn_report("CPU-%d and CPU-%d in socket-%ld-cluster-%ld "
> +                             "have been associated with node-%ld and node-%ld "
> +                             "respectively. It can cause OSes like Linux to"
> +                             "misbehave", i, j, cpus[i].props.socket_id,
> +                             cpus[i].props.cluster_id, cpus[i].props.node_id,
> +                             cpus[j].props.node_id);

machine_run_board_init() takes an Error* argument, but is only called
once by qemu_init_board() with errp=&error_fatal. I suppose using
warn_report() here is OK.

Acked-by: Philippe Mathieu-Daudé <philmd@linaro.org>

> +            }
> +        }
> +    }
> +}
> +
>   MemoryRegion *machine_consume_memdev(MachineState *machine,
>                                        HostMemoryBackend *backend)
>   {
> @@ -1337,6 +1376,9 @@ void machine_run_board_init(MachineState *machine, const char *mem_path, Error *
>           numa_complete_configuration(machine);
>           if (machine->numa_state->num_nodes) {
>               machine_numa_finish_cpu_init(machine);
> +            if (machine_class->cpu_cluster_has_numa_boundary) {
> +                validate_cpu_cluster_to_numa_boundary(machine);
> +            }
>           }
>       }




* Re: [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required
  2023-03-13 11:40   ` Philippe Mathieu-Daudé
@ 2023-03-14  6:23     ` Gavin Shan
  0 siblings, 0 replies; 9+ messages in thread
From: Gavin Shan @ 2023-03-14  6:23 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé, qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, wangyanan55, palmer, alistair.francis,
	bin.meng, thuth, lvivier, pbonzini, imammedo, yihyu, ajones,
	berrange, dbarboza, shan.gavin

On 3/13/23 7:40 PM, Philippe Mathieu-Daudé wrote:
> On 25/2/23 07:35, Gavin Shan wrote:
>> For some architectures like ARM64, multiple CPUs in one cluster can be
>> associated with different NUMA nodes, which is an irregular configuration
>> because it shouldn't happen in a bare-metal environment. The irregular
>> configuration causes the Linux guest to misbehave, as the following warning
>> messages indicate.
>>
>>    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>>    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>>    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>>    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
>>
>>    ------------[ cut here ]------------
>>    WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>>    Modules linked in:
>>    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>>    pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>>    pc : build_sched_domains+0x284/0x910
>>    lr : build_sched_domains+0x184/0x910
>>    sp : ffff80000804bd50
>>    x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>>    x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>>    x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>>    x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>>    x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>>    x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>>    x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>>    x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>>    x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>>    x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>>    Call trace:
>>     build_sched_domains+0x284/0x910
>>     sched_init_domains+0xac/0xe0
>>     sched_init_smp+0x48/0xc8
>>     kernel_init_freeable+0x140/0x1ac
>>     kernel_init+0x28/0x140
>>     ret_from_fork+0x10/0x20
>>
>> Improve the situation by warning when multiple CPUs in one cluster have
>> been associated with different NUMA nodes. However, one NUMA node is
>> still allowed to be associated with multiple clusters.
>>
>> Signed-off-by: Gavin Shan <gshan@redhat.com>
>> ---
>>   hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
>>   include/hw/boards.h |  1 +
>>   2 files changed, 43 insertions(+)
>>
>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>> index f29e700ee4..3513df5a86 100644
>> --- a/hw/core/machine.c
>> +++ b/hw/core/machine.c
>> @@ -1252,6 +1252,45 @@ static void machine_numa_finish_cpu_init(MachineState *machine)
>>       g_string_free(s, true);
>>   }
>> +static void validate_cpu_cluster_to_numa_boundary(MachineState *ms)
>> +{
>> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
>> +    NumaState *state = ms->numa_state;
>> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
>> +    const CPUArchId *cpus = possible_cpus->cpus;
>> +    int len = possible_cpus->len, i, j;
> 
> (Nitpicking, 'len' variable is not very useful).
> 

Yes, let's drop it if I need to post a new revision :)

>> +
>> +    if (state->num_nodes <= 1 || len <= 1) {
>> +        return;
>> +    }
>> +
>> +    /*
>> +     * The Linux scheduling domain can't be parsed when the multiple CPUs
>> +     * in one cluster have been associated with different NUMA nodes. However,
>> +     * it's fine to associate one NUMA node with CPUs in different clusters.
>> +     */
>> +    for (i = 0; i < len; i++) {
>> +        for (j = i + 1; j < len; j++) {
>> +            if (cpus[i].props.has_socket_id &&
>> +                cpus[i].props.has_cluster_id &&
>> +                cpus[i].props.has_node_id &&
>> +                cpus[j].props.has_socket_id &&
>> +                cpus[j].props.has_cluster_id &&
>> +                cpus[j].props.has_node_id &&
>> +                cpus[i].props.socket_id == cpus[j].props.socket_id &&
>> +                cpus[i].props.cluster_id == cpus[j].props.cluster_id &&
>> +                cpus[i].props.node_id != cpus[j].props.node_id) {
>> +                warn_report("CPU-%d and CPU-%d in socket-%ld-cluster-%ld "
>> +                             "have been associated with node-%ld and node-%ld "
>> +                             "respectively. It can cause OSes like Linux to"
>> +                             "misbehave", i, j, cpus[i].props.socket_id,
>> +                             cpus[i].props.cluster_id, cpus[i].props.node_id,
>> +                             cpus[j].props.node_id);
> 
> machine_run_board_init() takes an Error* argument, but is only called
> once by qemu_init_board() with errp=&error_fatal. I suppose using
> warn_report() here is OK.
> 
> Acked-by: Philippe Mathieu-Daudé <philmd@linaro.org>
> 

warn_report() here is correct because it's inappropriate to propagate the
warning message to @error_fatal through error_setg(). When the message
included in @error_fatal is handled and printed in util/error.c::error_handle(),
the QEMU process would be terminated unexpectedly.
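
For illustration only, the hypothetical alternative (not what the patch
does) would have been something like:

    /*
     * Hypothetical: reporting through errp. The only caller passes
     * errp=&error_fatal, so error_handle() would print the message and
     * then exit(1), killing QEMU over a mere warning.
     */
    error_setg(errp, "CPU-%d and CPU-%d in socket-%" PRId64 "-cluster-%" PRId64
               " have been associated with different NUMA nodes", i, j,
               cpus[i].props.socket_id, cpus[i].props.cluster_id);

warn_report() just prints to stderr and lets machine initialization
continue.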

>> +            }
>> +        }
>> +    }
>> +}
>> +
>>   MemoryRegion *machine_consume_memdev(MachineState *machine,
>>                                        HostMemoryBackend *backend)
>>   {
>> @@ -1337,6 +1376,9 @@ void machine_run_board_init(MachineState *machine, const char *mem_path, Error *
>>           numa_complete_configuration(machine);
>>           if (machine->numa_state->num_nodes) {
>>               machine_numa_finish_cpu_init(machine);
>> +            if (machine_class->cpu_cluster_has_numa_boundary) {
>> +                validate_cpu_cluster_to_numa_boundary(machine);
>> +            }
>>           }
>>       }

Thanks,
Gavin




* Re: [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required
  2023-02-25  6:35 ` [PATCH v3 1/3] numa: Validate cluster and NUMA node boundary if required Gavin Shan
  2023-03-13 11:40   ` Philippe Mathieu-Daudé
@ 2023-03-17  6:29   ` Gavin Shan
  1 sibling, 0 replies; 9+ messages in thread
From: Gavin Shan @ 2023-03-17  6:29 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-riscv, qemu-devel, rad, peter.maydell, quic_llindhol,
	eduardo, marcel.apfelbaum, philmd, wangyanan55, palmer,
	alistair.francis, bin.meng, thuth, lvivier, pbonzini, imammedo,
	yihyu, ajones, berrange, dbarboza, shan.gavin

On 2/25/23 2:35 PM, Gavin Shan wrote:
> For some architectures like ARM64, multiple CPUs in one cluster can be
> associated with different NUMA nodes, which is an irregular configuration
> because it shouldn't happen in a bare-metal environment. The irregular
> configuration causes the Linux guest to misbehave, as the following warning
> messages indicate.
> 
>    -smp 6,maxcpus=6,sockets=2,clusters=1,cores=3,threads=1 \
>    -numa node,nodeid=0,cpus=0-1,memdev=ram0                \
>    -numa node,nodeid=1,cpus=2-3,memdev=ram1                \
>    -numa node,nodeid=2,cpus=4-5,memdev=ram2                \
> 
>    ------------[ cut here ]------------
>    WARNING: CPU: 0 PID: 1 at kernel/sched/topology.c:2271 build_sched_domains+0x284/0x910
>    Modules linked in:
>    CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.14.0-268.el9.aarch64 #1
>    pstate: 00400005 (nzcv daif +PAN -UAO -TCO -DIT -SSBS BTYPE=--)
>    pc : build_sched_domains+0x284/0x910
>    lr : build_sched_domains+0x184/0x910
>    sp : ffff80000804bd50
>    x29: ffff80000804bd50 x28: 0000000000000002 x27: 0000000000000000
>    x26: ffff800009cf9a80 x25: 0000000000000000 x24: ffff800009cbf840
>    x23: ffff000080325000 x22: ffff0000005df800 x21: ffff80000a4ce508
>    x20: 0000000000000000 x19: ffff000080324440 x18: 0000000000000014
>    x17: 00000000388925c0 x16: 000000005386a066 x15: 000000009c10cc2e
>    x14: 00000000000001c0 x13: 0000000000000001 x12: ffff00007fffb1a0
>    x11: ffff00007fffb180 x10: ffff80000a4ce508 x9 : 0000000000000041
>    x8 : ffff80000a4ce500 x7 : ffff80000a4cf920 x6 : 0000000000000001
>    x5 : 0000000000000001 x4 : 0000000000000007 x3 : 0000000000000002
>    x2 : 0000000000001000 x1 : ffff80000a4cf928 x0 : 0000000000000001
>    Call trace:
>     build_sched_domains+0x284/0x910
>     sched_init_domains+0xac/0xe0
>     sched_init_smp+0x48/0xc8
>     kernel_init_freeable+0x140/0x1ac
>     kernel_init+0x28/0x140
>     ret_from_fork+0x10/0x20
> 
> Improve the situation by warning when multiple CPUs in one cluster have
> been associated with different NUMA nodes. However, one NUMA node is
> still allowed to be associated with multiple clusters.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>   hw/core/machine.c   | 42 ++++++++++++++++++++++++++++++++++++++++++
>   include/hw/boards.h |  1 +
>   2 files changed, 43 insertions(+)
> 
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index f29e700ee4..3513df5a86 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -1252,6 +1252,45 @@ static void machine_numa_finish_cpu_init(MachineState *machine)
>       g_string_free(s, true);
>   }
>   
> +static void validate_cpu_cluster_to_numa_boundary(MachineState *ms)
> +{
> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
> +    NumaState *state = ms->numa_state;
> +    const CPUArchIdList *possible_cpus = mc->possible_cpu_arch_ids(ms);
> +    const CPUArchId *cpus = possible_cpus->cpus;
> +    int len = possible_cpus->len, i, j;
> +
> +    if (state->num_nodes <= 1 || len <= 1) {
> +        return;
> +    }
> +
> +    /*
> +     * The Linux scheduling domain can't be parsed when the multiple CPUs
> +     * in one cluster have been associated with different NUMA nodes. However,
> +     * it's fine to associate one NUMA node with CPUs in different clusters.
> +     */
> +    for (i = 0; i < len; i++) {
> +        for (j = i + 1; j < len; j++) {
> +            if (cpus[i].props.has_socket_id &&
> +                cpus[i].props.has_cluster_id &&
> +                cpus[i].props.has_node_id &&
> +                cpus[j].props.has_socket_id &&
> +                cpus[j].props.has_cluster_id &&
> +                cpus[j].props.has_node_id &&
> +                cpus[i].props.socket_id == cpus[j].props.socket_id &&
> +                cpus[i].props.cluster_id == cpus[j].props.cluster_id &&
> +                cpus[i].props.node_id != cpus[j].props.node_id) {
> +                warn_report("CPU-%d and CPU-%d in socket-%ld-cluster-%ld "
> +                             "have been associated with node-%ld and node-%ld "
> +                             "respectively. It can cause OSes like Linux to"
> +                             "misbehave", i, j, cpus[i].props.socket_id,
> +                             cpus[i].props.cluster_id, cpus[i].props.node_id,
> +                             cpus[j].props.node_id);
> +            }
> +        }
> +    }
> +}
> +

A space is missing after "It can cause OSes like Linux to", so the message
prints as "...Linux tomisbehave". Fixed in v4, which was just posted.
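
The corrected call in v4 presumably reads:

                warn_report("CPU-%d and CPU-%d in socket-%ld-cluster-%ld "
                            "have been associated with node-%ld and node-%ld "
                            "respectively. It can cause OSes like Linux to "
                            "misbehave", i, j, cpus[i].props.socket_id,
                            cpus[i].props.cluster_id, cpus[i].props.node_id,
                            cpus[j].props.node_id);

i.e. with a trailing space after "to" so the string literals concatenate
into "... Linux to misbehave".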

>   MemoryRegion *machine_consume_memdev(MachineState *machine,
>                                        HostMemoryBackend *backend)
>   {
> @@ -1337,6 +1376,9 @@ void machine_run_board_init(MachineState *machine, const char *mem_path, Error *
>           numa_complete_configuration(machine);
>           if (machine->numa_state->num_nodes) {
>               machine_numa_finish_cpu_init(machine);
> +            if (machine_class->cpu_cluster_has_numa_boundary) {
> +                validate_cpu_cluster_to_numa_boundary(machine);
> +            }
>           }
>       }
>   
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 6fbbfd56c8..c9793b2789 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -273,6 +273,7 @@ struct MachineClass {
>       bool nvdimm_supported;
>       bool numa_mem_supported;
>       bool auto_enable_numa;
> +    bool cpu_cluster_has_numa_boundary;
>       SMPCompatProps smp_props;
>       const char *default_ram_id;
>   

Thanks,
Gavin




