* [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID
@ 2022-05-03 14:02 Gavin Shan
  2022-05-03 14:02 ` [PATCH v9 1/6] qapi/machine.json: Add cluster-id Gavin Shan
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:02 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

When the CPU-to-NUMA association isn't provided by the user, the default NUMA
node ID for the specific CPU is returned from virt_get_default_cpu_node_id().
Unfortunately, the default NUMA node ID breaks the socket boundary and leads to
broken CPU topology warning messages in the Linux guest. This series intends
to fix the issue by populating the full CPU topology on the arm/virt machine.

  PATCH[1/6] Adds cluster-id to the CPU instance properties
  PATCH[2/6] Fixes a test failure in qtest/numa-test/aarch64_numa_cpu()
  PATCH[3/6] Uses the SMP configuration to populate the CPU topology
  PATCH[4/6] Corrects the CPU/NUMA association in qtest/numa-test/aarch64_numa_cpu()
  PATCH[5/6] Fixes the broken CPU topology by considering the socket boundary
             when the default NUMA node ID is given
  PATCH[6/6] Uses the populated CPU topology to build the PPTT table, instead
             of calculating it again

Changelog
=========
v9:
   * Fix typo by replacing 'thrad-id' with 'thread-id' in
     PATCH[v9 2/6]'s commit log                                 (Igor)
   * Added PATCH[v9 4/6] to make CPU-to-NUMA association
     real                                                       (Igor)
v8:
   * Separate PATCH[v8 2/5] to fix test failure in qtest/
     numa-test/aarch64_numa_cpu()                               (Igor)
   * Improvements to coding style, changelog and comments      (Yanan)
v6/v7:
   * Fixed description for 'cluster-id' and 'core-id'           (Yanan)
   * Remove '% ms->smp.sockets' in socket ID calculation        (Yanan)
   * Fixed tests/qtest/numa-test/aarch64_numa_cpu()             (Yanan)
   * Initialized offset variables in build_pptt()               (Jonathan)
   * Added comments about the expected and sorted layout of
     cpus[n].props.*_id and assert() on the exceptional cases   (Igor)
v4/v5:
   * Split PATCH[v3 1/3] to PATCH[v5 1/4] and PATCH[v5 2/4].
     Verify or dump 'cluster-id' in various spots               (Yanan)
   * s/within cluster/within cluster\/die/ for 'core-id' in
     qapi/machine.json                                          (Igor)
   * Apply '% ms->smp.{sockets, clusters, cores, threads} in
     virt_possible_cpu_arch_ids() as x86 does                   (Igor)
   * Use [0 - possible_cpus->len] as ACPI processor UID to
     build PPTT table and PATCH[v3 4/4] is dropped              (Igor)
   * Simplified build_pptt() to add all entries in one loop
     on ms->possible_cpus                                       (Igor)
v3:
   * Split PATCH[v2 1/3] to PATCH[v3 1/4] and PATCH[v3 2/4]     (Yanan)
   * Don't take account of die ID in CPU topology population
     and added assert(!mc->smp_props.dies_supported)            (Yanan/Igor)
   * Assign cluster_id and use it when building PPTT table      (Yanan/Igor)
v2:
   * Populate the CPU topology in virt_possible_cpu_arch_ids()
     so that it can be reused in virt_get_default_cpu_node_id() (Igor)
   * Added PATCH[2/3] to use the existing CPU topology when the
     PPTT table is built                                        (Igor)
   * Added PATCH[3/3] to take thread ID as ACPI processor ID
     in MADT and SRAT table                                     (Gavin)

Gavin Shan (6):
  qapi/machine.json: Add cluster-id
  qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
  hw/arm/virt: Consider SMP configuration in CPU topology
  qtest/numa-test: Correct CPU and NUMA association in
    aarch64_numa_cpu()
  hw/arm/virt: Fix CPU's default NUMA node ID
  hw/acpi/aml-build: Use existing CPU topology to build PPTT table

 hw/acpi/aml-build.c        | 111 ++++++++++++++++---------------------
 hw/arm/virt.c              |  19 ++++++-
 hw/core/machine-hmp-cmds.c |   4 ++
 hw/core/machine.c          |  16 ++++++
 qapi/machine.json          |   6 +-
 tests/qtest/numa-test.c    |  19 +++++--
 6 files changed, 102 insertions(+), 73 deletions(-)

-- 
2.23.0



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v9 1/6] qapi/machine.json: Add cluster-id
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
@ 2022-05-03 14:02 ` Gavin Shan
  2022-05-03 14:03 ` [PATCH v9 2/6] qtest/numa-test: Specify CPU topology in aarch64_numa_cpu() Gavin Shan
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:02 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

This adds cluster-id to the CPU instance properties, which will be used
by the arm/virt machine. Besides, the cluster-id is also verified or
dumped in various spots:

  * hw/core/machine.c::machine_set_cpu_numa_node() to associate
    CPU with its NUMA node.

  * hw/core/machine.c::machine_numa_finish_cpu_init() to record
    CPU slots with no NUMA mapping set.

  * hw/core/machine-hmp-cmds.c::hmp_hotpluggable_cpus() to dump
    cluster-id.

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/core/machine-hmp-cmds.c |  4 ++++
 hw/core/machine.c          | 16 ++++++++++++++++
 qapi/machine.json          |  6 ++++--
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index 4e2f319aeb..5cb5eecbfc 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -77,6 +77,10 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
         if (c->has_die_id) {
             monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
         }
+        if (c->has_cluster_id) {
+            monitor_printf(mon, "    cluster-id: \"%" PRIu64 "\"\n",
+                           c->cluster_id);
+        }
         if (c->has_core_id) {
             monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
         }
diff --git a/hw/core/machine.c b/hw/core/machine.c
index cb9bbc844d..700c1e76b8 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -682,6 +682,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
             return;
         }
 
+        if (props->has_cluster_id && !slot->props.has_cluster_id) {
+            error_setg(errp, "cluster-id is not supported");
+            return;
+        }
+
         if (props->has_socket_id && !slot->props.has_socket_id) {
             error_setg(errp, "socket-id is not supported");
             return;
@@ -701,6 +706,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
                 continue;
         }
 
+        if (props->has_cluster_id &&
+            props->cluster_id != slot->props.cluster_id) {
+                continue;
+        }
+
         if (props->has_die_id && props->die_id != slot->props.die_id) {
                 continue;
         }
@@ -995,6 +1005,12 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
         }
         g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
     }
+    if (cpu->props.has_cluster_id) {
+        if (s->len) {
+            g_string_append_printf(s, ", ");
+        }
+        g_string_append_printf(s, "cluster-id: %"PRId64, cpu->props.cluster_id);
+    }
     if (cpu->props.has_core_id) {
         if (s->len) {
             g_string_append_printf(s, ", ");
diff --git a/qapi/machine.json b/qapi/machine.json
index d25a481ce4..4c417e32a5 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -868,10 +868,11 @@
 # @node-id: NUMA node ID the CPU belongs to
 # @socket-id: socket number within node/board the CPU belongs to
 # @die-id: die number within socket the CPU belongs to (since 4.1)
-# @core-id: core number within die the CPU belongs to
+# @cluster-id: cluster number within die the CPU belongs to (since 7.1)
+# @core-id: core number within cluster the CPU belongs to
 # @thread-id: thread number within core the CPU belongs to
 #
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
 #       but management should be prepared to pass through other
 #       properties with device_add command to allow for future
 #       interface extension. This also requires the filed names to be kept in
@@ -883,6 +884,7 @@
   'data': { '*node-id': 'int',
             '*socket-id': 'int',
             '*die-id': 'int',
+            '*cluster-id': 'int',
             '*core-id': 'int',
             '*thread-id': 'int'
   }
-- 
2.23.0




* [PATCH v9 2/6] qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
  2022-05-03 14:02 ` [PATCH v9 1/6] qapi/machine.json: Add cluster-id Gavin Shan
@ 2022-05-03 14:03 ` Gavin Shan
  2022-05-03 14:03 ` [PATCH v9 3/6] hw/arm/virt: Consider SMP configuration in CPU topology Gavin Shan
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:03 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

The CPU topology isn't enabled on the arm/virt machine yet, but we're
going to enable it in the next patch. Once the CPU topology is enabled,
"thread-id=1" becomes invalid because cores rather than threads are
preferred on the arm/virt machine: the two CPUs get 0 and 1 as their
core IDs, while their thread IDs are both 0. This triggers a test
failure, as the following message indicates:

  [14/21 qemu:qtest+qtest-aarch64 / qtest-aarch64/numa-test  ERROR
  1.48s   killed by signal 6 SIGABRT
  >>> G_TEST_DBUS_DAEMON=/home/gavin/sandbox/qemu.main/tests/dbus-vmstate-daemon.sh \
      QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon         \
      QTEST_QEMU_BINARY=./qemu-system-aarch64                                       \
      QTEST_QEMU_IMG=./qemu-img MALLOC_PERTURB_=83                                  \
      /home/gavin/sandbox/qemu.main/build/tests/qtest/numa-test --tap -k
  ――――――――――――――――――――――――――――――――――――――――――――――
  stderr:
  qemu-system-aarch64: -numa cpu,node-id=0,thread-id=1: no match found

This fixes the issue by providing a comprehensive SMP configuration
in aarch64_numa_cpu(). The SMP configuration isn't used until the
CPU topology is enabled in the next patch.
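The effect of the SMP configuration on the derived IDs can be sketched
with a small Python model of the index-to-ID formulas (an illustration,
not QEMU code):

```python
def core_thread_ids(cpus, cores, threads):
    """Model how each linear CPU index maps to (core-id, thread-id)
    once the topology is derived from the SMP configuration."""
    return [((n // threads) % cores, n % threads) for n in range(cpus)]

# Plain "-machine smp.cpus=2" expands to 2 cores with 1 thread each,
# so both CPUs end up with thread-id 0 and "thread-id=1" matches nothing:
assert core_thread_ids(2, cores=2, threads=1) == [(0, 0), (1, 0)]

# The explicit "smp.cores=1,smp.threads=2" used here keeps thread-id 0
# and 1 both valid, so the existing -numa cpu options still match:
assert core_thread_ids(2, cores=1, threads=2) == [(0, 0), (0, 1)]
```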

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
---
 tests/qtest/numa-test.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index 90bf68a5b3..aeda8c774c 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -223,7 +223,8 @@ static void aarch64_numa_cpu(const void *data)
     QTestState *qts;
     g_autofree char *cli = NULL;
 
-    cli = make_cli(data, "-machine smp.cpus=2 "
+    cli = make_cli(data, "-machine "
+        "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
         "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
         "-numa cpu,node-id=1,thread-id=0 "
         "-numa cpu,node-id=0,thread-id=1");
-- 
2.23.0




* [PATCH v9 3/6] hw/arm/virt: Consider SMP configuration in CPU topology
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
  2022-05-03 14:02 ` [PATCH v9 1/6] qapi/machine.json: Add cluster-id Gavin Shan
  2022-05-03 14:03 ` [PATCH v9 2/6] qtest/numa-test: Specify CPU topology in aarch64_numa_cpu() Gavin Shan
@ 2022-05-03 14:03 ` Gavin Shan
  2022-05-03 14:03 ` [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu() Gavin Shan
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:03 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

Currently, the SMP configuration isn't considered when the CPU
topology is populated. In this case, it's impossible to provide
the default CPU-to-NUMA mapping or association based on the socket
ID of the given CPU.

This takes the SMP configuration into account when the CPU topology
is populated. The die ID for the given CPU isn't assigned since dies
aren't supported on the arm/virt machine. Besides, the SMP
configuration used in qtest/numa-test/aarch64_numa_cpu() is corrected
to avoid a test failure.
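The ID assignment added here is a standard mixed-radix decomposition of
the linear CPU index. A minimal Python sketch of the same formulas
(illustrative only, not QEMU code):

```python
def cpu_topo_ids(n, clusters, cores, threads):
    """Decompose linear CPU index n into (socket, cluster, core, thread)
    IDs, mirroring the formulas in virt_possible_cpu_arch_ids()."""
    socket = n // (clusters * cores * threads)
    cluster = (n // (cores * threads)) % clusters
    core = (n // threads) % cores
    thread = n % threads
    return socket, cluster, core, thread

# 2 sockets x 1 cluster x 3 cores x 1 thread (the cover letter's example):
ids = [cpu_topo_ids(n, clusters=1, cores=3, threads=1) for n in range(6)]
assert ids == [(0, 0, 0, 0), (0, 0, 1, 0), (0, 0, 2, 0),
               (1, 0, 0, 0), (1, 0, 1, 0), (1, 0, 2, 0)]
```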

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/arm/virt.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index f94278935f..6df3981b1e 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2560,6 +2560,7 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
     int n;
     unsigned int max_cpus = ms->smp.max_cpus;
     VirtMachineState *vms = VIRT_MACHINE(ms);
+    MachineClass *mc = MACHINE_GET_CLASS(vms);
 
     if (ms->possible_cpus) {
         assert(ms->possible_cpus->len == max_cpus);
@@ -2573,8 +2574,20 @@ static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[n].type = ms->cpu_type;
         ms->possible_cpus->cpus[n].arch_id =
             virt_cpu_mp_affinity(vms, n);
+
+        assert(!mc->smp_props.dies_supported);
+        ms->possible_cpus->cpus[n].props.has_socket_id = true;
+        ms->possible_cpus->cpus[n].props.socket_id =
+            n / (ms->smp.clusters * ms->smp.cores * ms->smp.threads);
+        ms->possible_cpus->cpus[n].props.has_cluster_id = true;
+        ms->possible_cpus->cpus[n].props.cluster_id =
+            (n / (ms->smp.cores * ms->smp.threads)) % ms->smp.clusters;
+        ms->possible_cpus->cpus[n].props.has_core_id = true;
+        ms->possible_cpus->cpus[n].props.core_id =
+            (n / ms->smp.threads) % ms->smp.cores;
         ms->possible_cpus->cpus[n].props.has_thread_id = true;
-        ms->possible_cpus->cpus[n].props.thread_id = n;
+        ms->possible_cpus->cpus[n].props.thread_id =
+            n % ms->smp.threads;
     }
     return ms->possible_cpus;
 }
-- 
2.23.0




* [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
                   ` (2 preceding siblings ...)
  2022-05-03 14:03 ` [PATCH v9 3/6] hw/arm/virt: Consider SMP configuration in CPU topology Gavin Shan
@ 2022-05-03 14:03 ` Gavin Shan
  2022-05-04 13:46   ` Igor Mammedov
  2022-05-03 14:03 ` [PATCH v9 5/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:03 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

In aarch64_numa_cpu(), the CPU and NUMA association is something
like below. Two threads in the same core/cluster/socket are
associated with two individual NUMA nodes, which is unrealistic, as
Igor Mammedov pointed out. The association isn't expected to break
the NUMA-to-socket boundary; keeping that boundary intact matches
the real world.

    NUMA-node  socket  cluster   core   thread
    ------------------------------------------
        0       0        0        0      0
        1       0        0        0      1

This corrects the topology for the CPUs and their association with
NUMA nodes. After this patch is applied, the CPU and NUMA
association becomes something like below, which looks realistic.
Besides, the socket/cluster/core/thread IDs are all checked when
the NUMA node IDs are verified, which helps to confirm that the
CPU topology is properly populated.

    NUMA-node  socket  cluster   core   thread
    ------------------------------------------
       0        1        0        0       0
       1        0        0        0       0
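How the fully qualified -numa cpu options select CPU slots can be
sketched as follows; this is a simplified Python model of the matching
loop in machine_set_cpu_numa_node(), not QEMU code:

```python
def match_slots(slots, props):
    """Return the possible-CPU slots whose properties match every
    property given on a "-numa cpu" option (simplified model)."""
    return [s for s in slots if all(s[k] == v for k, v in props.items())]

# Slots as populated for smp.sockets=2,clusters=1,cores=1,threads=1:
slots = [{'socket-id': 0, 'cluster-id': 0, 'core-id': 0, 'thread-id': 0},
         {'socket-id': 1, 'cluster-id': 0, 'core-id': 0, 'thread-id': 0}]

# Each fully qualified option in this patch matches exactly one slot ...
assert match_slots(slots, {'socket-id': 1, 'cluster-id': 0,
                           'core-id': 0, 'thread-id': 0}) == [slots[1]]
# ... while the old "thread-id=1" no longer matches anything:
assert match_slots(slots, {'thread-id': 1}) == []
```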

Suggested-by: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Gavin Shan <gshan@redhat.com>
---
 tests/qtest/numa-test.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
index aeda8c774c..32e35daaae 100644
--- a/tests/qtest/numa-test.c
+++ b/tests/qtest/numa-test.c
@@ -224,17 +224,17 @@ static void aarch64_numa_cpu(const void *data)
     g_autofree char *cli = NULL;
 
     cli = make_cli(data, "-machine "
-        "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
+        "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
         "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
-        "-numa cpu,node-id=1,thread-id=0 "
-        "-numa cpu,node-id=0,thread-id=1");
+        "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
+        "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
     qts = qtest_init(cli);
     cpus = get_cpus(qts, &resp);
     g_assert(cpus);
 
     while ((e = qlist_pop(cpus))) {
         QDict *cpu, *props;
-        int64_t thread, node;
+        int64_t socket, cluster, core, thread, node;
 
         cpu = qobject_to(QDict, e);
         g_assert(qdict_haskey(cpu, "props"));
@@ -242,12 +242,18 @@ static void aarch64_numa_cpu(const void *data)
 
         g_assert(qdict_haskey(props, "node-id"));
         node = qdict_get_int(props, "node-id");
+        g_assert(qdict_haskey(props, "socket-id"));
+        socket = qdict_get_int(props, "socket-id");
+        g_assert(qdict_haskey(props, "cluster-id"));
+        cluster = qdict_get_int(props, "cluster-id");
+        g_assert(qdict_haskey(props, "core-id"));
+        core = qdict_get_int(props, "core-id");
         g_assert(qdict_haskey(props, "thread-id"));
         thread = qdict_get_int(props, "thread-id");
 
-        if (thread == 0) {
+        if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 1);
-        } else if (thread == 1) {
+        } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
             g_assert_cmpint(node, ==, 0);
         } else {
             g_assert(false);
-- 
2.23.0




* [PATCH v9 5/6] hw/arm/virt: Fix CPU's default NUMA node ID
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
                   ` (3 preceding siblings ...)
  2022-05-03 14:03 ` [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu() Gavin Shan
@ 2022-05-03 14:03 ` Gavin Shan
  2022-05-03 14:03 ` [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table Gavin Shan
  2022-05-09  3:08 ` [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
  6 siblings, 0 replies; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:03 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

When the CPU-to-NUMA association isn't explicitly provided by users,
the default one is given by mc->get_default_cpu_node_id(). However,
the CPU topology isn't fully considered in the default association,
which causes CPU topology broken warnings when booting a Linux guest.

For example, the following warning messages are observed when the
Linux guest is booted with the following command lines.

  /home/gavin/sandbox/qemu.main/build/qemu-system-aarch64 \
  -accel kvm -machine virt,gic-version=host               \
  -cpu host                                               \
  -smp 6,sockets=2,cores=3,threads=1                      \
  -m 1024M,slots=16,maxmem=64G                            \
  -object memory-backend-ram,id=mem0,size=128M            \
  -object memory-backend-ram,id=mem1,size=128M            \
  -object memory-backend-ram,id=mem2,size=128M            \
  -object memory-backend-ram,id=mem3,size=128M            \
  -object memory-backend-ram,id=mem4,size=128M            \
  -object memory-backend-ram,id=mem5,size=384M            \
  -numa node,nodeid=0,memdev=mem0                         \
  -numa node,nodeid=1,memdev=mem1                         \
  -numa node,nodeid=2,memdev=mem2                         \
  -numa node,nodeid=3,memdev=mem3                         \
  -numa node,nodeid=4,memdev=mem4                         \
  -numa node,nodeid=5,memdev=mem5
         :
  alternatives: patching kernel code
  BUG: arch topology borken
  the CLS domain not a subset of the MC domain
  <the above error log repeats>
  BUG: arch topology borken
  the DIE domain not a subset of the NODE domain

With the current implementation of mc->get_default_cpu_node_id(),
CPU#0 to CPU#5 are associated with NODE#0 to NODE#5 respectively.
That's incorrect because CPU#0/1/2 should be associated with the
same NUMA node, since they sit in the same socket.

This fixes the issue by considering the socket ID, which is populated
in virt_possible_cpu_arch_ids(), when the default CPU-to-NUMA
association is provided. With this applied, no more CPU topology
broken warnings are seen from the Linux guest. The 6 CPUs are
associated with NODE#0/1, while no CPUs are associated with
NODE#2/3/4/5.
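The before/after behaviour can be modeled in a few lines of Python (an
illustration of the mapping, not QEMU code):

```python
def default_node_old(idx, num_nodes):
    # Old behaviour: round-robin the CPU index over NUMA nodes.
    return idx % num_nodes

def default_node_new(socket_id, num_nodes):
    # New behaviour: derive the node from the CPU's socket ID.
    return socket_id % num_nodes

# 6 CPUs in 2 sockets (3 cores each) and 6 NUMA nodes, as above:
socket_ids = [n // 3 for n in range(6)]        # [0, 0, 0, 1, 1, 1]

# Old: one CPU per node, breaking the socket boundary.
assert [default_node_old(n, 6) for n in range(6)] == [0, 1, 2, 3, 4, 5]
# New: CPUs land on NODE#0/1 only; NODE#2..5 get no CPUs.
assert [default_node_new(s, 6) for s in socket_ids] == [0, 0, 0, 1, 1, 1]
```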

Signed-off-by: Gavin Shan <gshan@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
---
 hw/arm/virt.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/hw/arm/virt.c b/hw/arm/virt.c
index 6df3981b1e..766c9acd7f 100644
--- a/hw/arm/virt.c
+++ b/hw/arm/virt.c
@@ -2552,7 +2552,9 @@ virt_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 
 static int64_t virt_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-    return idx % ms->numa_state->num_nodes;
+    int64_t socket_id = ms->possible_cpus->cpus[idx].props.socket_id;
+
+    return socket_id % ms->numa_state->num_nodes;
 }
 
 static const CPUArchIdList *virt_possible_cpu_arch_ids(MachineState *ms)
-- 
2.23.0




* [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
                   ` (4 preceding siblings ...)
  2022-05-03 14:03 ` [PATCH v9 5/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
@ 2022-05-03 14:03 ` Gavin Shan
  2022-05-03 17:43   ` Michael S. Tsirkin
  2022-05-09  3:08 ` [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
  6 siblings, 1 reply; 11+ messages in thread
From: Gavin Shan @ 2022-05-03 14:03 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

When the PPTT table is built, the CPU topology is re-calculated, which
is unnecessary because the CPU topology has already been populated in
virt_possible_cpu_arch_ids() on the arm/virt machine.

This reworks build_pptt() to avoid the re-calculation by reusing the
existing IDs in ms->possible_cpus. Currently, the only user of
build_pptt() is the arm/virt machine.
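The single-pass construction can be sketched in Python as follows; this
is a simplified model of the reworked build_pptt() loop (node payloads
and table offsets omitted), relying on the sorted-ID assumption stated
in the patch:

```python
def pptt_nodes(cpus, threads_per_core=1, clusters_supported=True):
    """Walk sorted (socket, cluster, core) ID tuples once, emitting a
    container node whenever an upper-level ID changes and a leaf node
    per CPU (simplified model of the reworked build_pptt())."""
    nodes = []
    socket = cluster = core = None
    for n, (s, cl, c) in enumerate(cpus):
        if s != socket:
            socket, cluster, core = s, None, None
            nodes.append(('socket', s))
        if clusters_supported and cl != cluster:
            cluster, core = cl, None
            nodes.append(('cluster', cl))
        if threads_per_core == 1:
            nodes.append(('leaf-core', n))       # CPU index n is the UID
        else:
            if c != core:
                core = c
                nodes.append(('core', c))
            nodes.append(('leaf-thread', n))
    return nodes

# 2 sockets x 1 cluster x 2 cores, 1 thread each:
cpus = [(0, 0, 0), (0, 0, 1), (1, 0, 0), (1, 0, 1)]
assert pptt_nodes(cpus) == [('socket', 0), ('cluster', 0),
                            ('leaf-core', 0), ('leaf-core', 1),
                            ('socket', 1), ('cluster', 0),
                            ('leaf-core', 2), ('leaf-core', 3)]
```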

Signed-off-by: Gavin Shan <gshan@redhat.com>
Tested-by: Yanan Wang <wangyanan55@huawei.com>
Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
Acked-by: Igor Mammedov <imammedo@redhat.com>
---
 hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
 1 file changed, 48 insertions(+), 63 deletions(-)

diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
index 4086879ebf..e6bfac95c7 100644
--- a/hw/acpi/aml-build.c
+++ b/hw/acpi/aml-build.c
@@ -2002,86 +2002,71 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
                 const char *oem_id, const char *oem_table_id)
 {
     MachineClass *mc = MACHINE_GET_CLASS(ms);
-    GQueue *list = g_queue_new();
-    guint pptt_start = table_data->len;
-    guint parent_offset;
-    guint length, i;
-    int uid = 0;
-    int socket;
+    CPUArchIdList *cpus = ms->possible_cpus;
+    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
+    uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
+    uint32_t pptt_start = table_data->len;
+    int n;
     AcpiTable table = { .sig = "PPTT", .rev = 2,
                         .oem_id = oem_id, .oem_table_id = oem_table_id };
 
     acpi_table_begin(&table, table_data);
 
-    for (socket = 0; socket < ms->smp.sockets; socket++) {
-        g_queue_push_tail(list,
-            GUINT_TO_POINTER(table_data->len - pptt_start));
-        build_processor_hierarchy_node(
-            table_data,
-            /*
-             * Physical package - represents the boundary
-             * of a physical package
-             */
-            (1 << 0),
-            0, socket, NULL, 0);
-    }
-
-    if (mc->smp_props.clusters_supported) {
-        length = g_queue_get_length(list);
-        for (i = 0; i < length; i++) {
-            int cluster;
-
-            parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, cluster, NULL, 0);
-            }
+    /*
+     * This works with the assumption that cpus[n].props.*_id has been
+     * sorted from top to down levels in mc->possible_cpu_arch_ids().
+     * Otherwise, the unexpected and duplicated containers will be
+     * created.
+     */
+    for (n = 0; n < cpus->len; n++) {
+        if (cpus->cpus[n].props.socket_id != socket_id) {
+            assert(cpus->cpus[n].props.socket_id > socket_id);
+            socket_id = cpus->cpus[n].props.socket_id;
+            cluster_id = -1;
+            core_id = -1;
+            socket_offset = table_data->len - pptt_start;
+            build_processor_hierarchy_node(table_data,
+                (1 << 0), /* Physical package */
+                0, socket_id, NULL, 0);
         }
-    }
 
-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int core;
-
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (core = 0; core < ms->smp.cores; core++) {
-            if (ms->smp.threads > 1) {
-                g_queue_push_tail(list,
-                    GUINT_TO_POINTER(table_data->len - pptt_start));
-                build_processor_hierarchy_node(
-                    table_data,
-                    (0 << 0), /* not a physical package */
-                    parent_offset, core, NULL, 0);
-            } else {
-                build_processor_hierarchy_node(
-                    table_data,
-                    (1 << 1) | /* ACPI Processor ID valid */
-                    (1 << 3),  /* Node is a Leaf */
-                    parent_offset, uid++, NULL, 0);
+        if (mc->smp_props.clusters_supported) {
+            if (cpus->cpus[n].props.cluster_id != cluster_id) {
+                assert(cpus->cpus[n].props.cluster_id > cluster_id);
+                cluster_id = cpus->cpus[n].props.cluster_id;
+                core_id = -1;
+                cluster_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    socket_offset, cluster_id, NULL, 0);
             }
+        } else {
+            cluster_offset = socket_offset;
         }
-    }
 
-    length = g_queue_get_length(list);
-    for (i = 0; i < length; i++) {
-        int thread;
+        if (ms->smp.threads == 1) {
+            build_processor_hierarchy_node(table_data,
+                (1 << 1) | /* ACPI Processor ID valid */
+                (1 << 3),  /* Node is a Leaf */
+                cluster_offset, n, NULL, 0);
+        } else {
+            if (cpus->cpus[n].props.core_id != core_id) {
+                assert(cpus->cpus[n].props.core_id > core_id);
+                core_id = cpus->cpus[n].props.core_id;
+                core_offset = table_data->len - pptt_start;
+                build_processor_hierarchy_node(table_data,
+                    (0 << 0), /* Not a physical package */
+                    cluster_offset, core_id, NULL, 0);
+            }
 
-        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
-        for (thread = 0; thread < ms->smp.threads; thread++) {
-            build_processor_hierarchy_node(
-                table_data,
+            build_processor_hierarchy_node(table_data,
                 (1 << 1) | /* ACPI Processor ID valid */
                 (1 << 2) | /* Processor is a Thread */
                 (1 << 3),  /* Node is a Leaf */
-                parent_offset, uid++, NULL, 0);
+                core_offset, n, NULL, 0);
         }
     }
 
-    g_queue_free(list);
     acpi_table_end(linker, &table);
 }
 
-- 
2.23.0




* Re: [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table
  2022-05-03 14:03 ` [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table Gavin Shan
@ 2022-05-03 17:43   ` Michael S. Tsirkin
  0 siblings, 0 replies; 11+ messages in thread
From: Michael S. Tsirkin @ 2022-05-03 17:43 UTC (permalink / raw)
  To: Gavin Shan
  Cc: qemu-arm, qemu-devel, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

On Tue, May 03, 2022 at 10:03:04PM +0800, Gavin Shan wrote:
> When the PPTT table is built, the CPU topology is re-calculated, which
> is unnecessary because the CPU topology has already been populated in
> virt_possible_cpu_arch_ids() on the arm/virt machine.
> 
> This reworks build_pptt() to avoid the re-calculation by reusing the
> existing IDs in ms->possible_cpus. Currently, the only user of
> build_pptt() is the arm/virt machine.
> 
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> Tested-by: Yanan Wang <wangyanan55@huawei.com>
> Reviewed-by: Yanan Wang <wangyanan55@huawei.com>
> Acked-by: Igor Mammedov <imammedo@redhat.com>

Acked-by: Michael S. Tsirkin <mst@redhat.com>

> ---
>  hw/acpi/aml-build.c | 111 +++++++++++++++++++-------------------------
>  1 file changed, 48 insertions(+), 63 deletions(-)
> 
> diff --git a/hw/acpi/aml-build.c b/hw/acpi/aml-build.c
> index 4086879ebf..e6bfac95c7 100644
> --- a/hw/acpi/aml-build.c
> +++ b/hw/acpi/aml-build.c
> @@ -2002,86 +2002,71 @@ void build_pptt(GArray *table_data, BIOSLinker *linker, MachineState *ms,
>                  const char *oem_id, const char *oem_table_id)
>  {
>      MachineClass *mc = MACHINE_GET_CLASS(ms);
> -    GQueue *list = g_queue_new();
> -    guint pptt_start = table_data->len;
> -    guint parent_offset;
> -    guint length, i;
> -    int uid = 0;
> -    int socket;
> +    CPUArchIdList *cpus = ms->possible_cpus;
> +    int64_t socket_id = -1, cluster_id = -1, core_id = -1;
> +    uint32_t socket_offset = 0, cluster_offset = 0, core_offset = 0;
> +    uint32_t pptt_start = table_data->len;
> +    int n;
>      AcpiTable table = { .sig = "PPTT", .rev = 2,
>                          .oem_id = oem_id, .oem_table_id = oem_table_id };
>  
>      acpi_table_begin(&table, table_data);
>  
> -    for (socket = 0; socket < ms->smp.sockets; socket++) {
> -        g_queue_push_tail(list,
> -            GUINT_TO_POINTER(table_data->len - pptt_start));
> -        build_processor_hierarchy_node(
> -            table_data,
> -            /*
> -             * Physical package - represents the boundary
> -             * of a physical package
> -             */
> -            (1 << 0),
> -            0, socket, NULL, 0);
> -    }
> -
> -    if (mc->smp_props.clusters_supported) {
> -        length = g_queue_get_length(list);
> -        for (i = 0; i < length; i++) {
> -            int cluster;
> -
> -            parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -            for (cluster = 0; cluster < ms->smp.clusters; cluster++) {
> -                g_queue_push_tail(list,
> -                    GUINT_TO_POINTER(table_data->len - pptt_start));
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (0 << 0), /* not a physical package */
> -                    parent_offset, cluster, NULL, 0);
> -            }
> +    /*
> +     * This works with the assumption that cpus[n].props.*_id has been
> +     * sorted from top to down levels in mc->possible_cpu_arch_ids().
> +     * Otherwise, the unexpected and duplicated containers will be
> +     * created.
> +     */
> +    for (n = 0; n < cpus->len; n++) {
> +        if (cpus->cpus[n].props.socket_id != socket_id) {
> +            assert(cpus->cpus[n].props.socket_id > socket_id);
> +            socket_id = cpus->cpus[n].props.socket_id;
> +            cluster_id = -1;
> +            core_id = -1;
> +            socket_offset = table_data->len - pptt_start;
> +            build_processor_hierarchy_node(table_data,
> +                (1 << 0), /* Physical package */
> +                0, socket_id, NULL, 0);
>          }
> -    }
>  
> -    length = g_queue_get_length(list);
> -    for (i = 0; i < length; i++) {
> -        int core;
> -
> -        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (core = 0; core < ms->smp.cores; core++) {
> -            if (ms->smp.threads > 1) {
> -                g_queue_push_tail(list,
> -                    GUINT_TO_POINTER(table_data->len - pptt_start));
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (0 << 0), /* not a physical package */
> -                    parent_offset, core, NULL, 0);
> -            } else {
> -                build_processor_hierarchy_node(
> -                    table_data,
> -                    (1 << 1) | /* ACPI Processor ID valid */
> -                    (1 << 3),  /* Node is a Leaf */
> -                    parent_offset, uid++, NULL, 0);
> +        if (mc->smp_props.clusters_supported) {
> +            if (cpus->cpus[n].props.cluster_id != cluster_id) {
> +                assert(cpus->cpus[n].props.cluster_id > cluster_id);
> +                cluster_id = cpus->cpus[n].props.cluster_id;
> +                core_id = -1;
> +                cluster_offset = table_data->len - pptt_start;
> +                build_processor_hierarchy_node(table_data,
> +                    (0 << 0), /* Not a physical package */
> +                    socket_offset, cluster_id, NULL, 0);
>              }
> +        } else {
> +            cluster_offset = socket_offset;
>          }
> -    }
>  
> -    length = g_queue_get_length(list);
> -    for (i = 0; i < length; i++) {
> -        int thread;
> +        if (ms->smp.threads == 1) {
> +            build_processor_hierarchy_node(table_data,
> +                (1 << 1) | /* ACPI Processor ID valid */
> +                (1 << 3),  /* Node is a Leaf */
> +                cluster_offset, n, NULL, 0);
> +        } else {
> +            if (cpus->cpus[n].props.core_id != core_id) {
> +                assert(cpus->cpus[n].props.core_id > core_id);
> +                core_id = cpus->cpus[n].props.core_id;
> +                core_offset = table_data->len - pptt_start;
> +                build_processor_hierarchy_node(table_data,
> +                    (0 << 0), /* Not a physical package */
> +                    cluster_offset, core_id, NULL, 0);
> +            }
>  
> -        parent_offset = GPOINTER_TO_UINT(g_queue_pop_head(list));
> -        for (thread = 0; thread < ms->smp.threads; thread++) {
> -            build_processor_hierarchy_node(
> -                table_data,
> +            build_processor_hierarchy_node(table_data,
>                  (1 << 1) | /* ACPI Processor ID valid */
>                  (1 << 2) | /* Processor is a Thread */
>                  (1 << 3),  /* Node is a Leaf */
> -                parent_offset, uid++, NULL, 0);
> +                core_offset, n, NULL, 0);
>          }
>      }
>  
> -    g_queue_free(list);
>      acpi_table_end(linker, &table);
>  }
>  
> -- 
> 2.23.0




* Re: [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu()
  2022-05-03 14:03 ` [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu() Gavin Shan
@ 2022-05-04 13:46   ` Igor Mammedov
  0 siblings, 0 replies; 11+ messages in thread
From: Igor Mammedov @ 2022-05-04 13:46 UTC (permalink / raw)
  To: Gavin Shan
  Cc: qemu-arm, qemu-devel, mst, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

On Tue,  3 May 2022 22:03:02 +0800
Gavin Shan <gshan@redhat.com> wrote:

> In aarch64_numa_cpu(), the CPU and NUMA association is something
> like below: two threads in the same core/cluster/socket are
> associated with two separate NUMA nodes, which is unrealistic, as
> Igor Mammedov pointed out. The association isn't expected to break
> the NUMA-to-socket boundary, matching the real world.
> 
>     NUMA-node  socket  cluster   core   thread
>     ------------------------------------------
>         0       0        0        0      0
>         1       0        0        0      1
> 
> This corrects the topology for the CPUs and their association with
> NUMA nodes. After this patch is applied, the CPU and NUMA
> association becomes something like below, which is realistic.
> Besides, the socket/cluster/core/thread IDs are all checked when
> the NUMA node IDs are verified, which helps to confirm that the
> CPU topology is properly populated.
> 
>     NUMA-node  socket  cluster   core   thread
>     ------------------------------------------
>        0        1        0        0       0
>        1        0        0        0       0

I'd place 'socket 1' on node 0, so it wouldn't look odd.
But it probably doesn't matter, so

Acked-by: Igor Mammedov <imammedo@redhat.com>

> Suggested-by: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: Gavin Shan <gshan@redhat.com>
> ---
>  tests/qtest/numa-test.c | 18 ++++++++++++------
>  1 file changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/tests/qtest/numa-test.c b/tests/qtest/numa-test.c
> index aeda8c774c..32e35daaae 100644
> --- a/tests/qtest/numa-test.c
> +++ b/tests/qtest/numa-test.c
> @@ -224,17 +224,17 @@ static void aarch64_numa_cpu(const void *data)
>      g_autofree char *cli = NULL;
>  
>      cli = make_cli(data, "-machine "
> -        "smp.cpus=2,smp.sockets=1,smp.clusters=1,smp.cores=1,smp.threads=2 "
> +        "smp.cpus=2,smp.sockets=2,smp.clusters=1,smp.cores=1,smp.threads=1 "
>          "-numa node,nodeid=0,memdev=ram -numa node,nodeid=1 "
> -        "-numa cpu,node-id=1,thread-id=0 "
> -        "-numa cpu,node-id=0,thread-id=1");
> +        "-numa cpu,node-id=0,socket-id=1,cluster-id=0,core-id=0,thread-id=0 "
> +        "-numa cpu,node-id=1,socket-id=0,cluster-id=0,core-id=0,thread-id=0");
>      qts = qtest_init(cli);
>      cpus = get_cpus(qts, &resp);
>      g_assert(cpus);
>  
>      while ((e = qlist_pop(cpus))) {
>          QDict *cpu, *props;
> -        int64_t thread, node;
> +        int64_t socket, cluster, core, thread, node;
>  
>          cpu = qobject_to(QDict, e);
>          g_assert(qdict_haskey(cpu, "props"));
> @@ -242,12 +242,18 @@ static void aarch64_numa_cpu(const void *data)
>  
>          g_assert(qdict_haskey(props, "node-id"));
>          node = qdict_get_int(props, "node-id");
> +        g_assert(qdict_haskey(props, "socket-id"));
> +        socket = qdict_get_int(props, "socket-id");
> +        g_assert(qdict_haskey(props, "cluster-id"));
> +        cluster = qdict_get_int(props, "cluster-id");
> +        g_assert(qdict_haskey(props, "core-id"));
> +        core = qdict_get_int(props, "core-id");
>          g_assert(qdict_haskey(props, "thread-id"));
>          thread = qdict_get_int(props, "thread-id");
>  
> -        if (thread == 0) {
> +        if (socket == 0 && cluster == 0 && core == 0 && thread == 0) {
>              g_assert_cmpint(node, ==, 1);
> -        } else if (thread == 1) {
> +        } else if (socket == 1 && cluster == 0 && core == 0 && thread == 0) {
>              g_assert_cmpint(node, ==, 0);
>          } else {
>              g_assert(false);




* Re: [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID
  2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
                   ` (5 preceding siblings ...)
  2022-05-03 14:03 ` [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table Gavin Shan
@ 2022-05-09  3:08 ` Gavin Shan
  2022-05-09 10:47   ` Peter Maydell
  6 siblings, 1 reply; 11+ messages in thread
From: Gavin Shan @ 2022-05-09  3:08 UTC (permalink / raw)
  To: qemu-arm
  Cc: qemu-devel, mst, imammedo, ani, peter.maydell, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

Hi Peter and maintainers,

On 5/3/22 10:02 PM, Gavin Shan wrote:
> When the CPU-to-NUMA association isn't provided by the user, the default NUMA
> node ID for the specific CPU is returned from virt_get_default_cpu_node_id().
> Unfortunately, the default NUMA node ID breaks the socket boundary and leads
> to a broken-CPU-topology warning message in the Linux guest. This series
> fixes the issue by populating the full CPU topology on the arm/virt machine.
> 
>    PATCH[1/6] Add cluster-id to CPU instance property
>    PATCH[2/6] Fixes test failure in qtest/numa-test/aarch64_numa_cpu()
>    PATCH[3/6] Uses SMP configuration to populate CPU topology
>    PATCH[4/6] Corrects CPU/NUMA association in qtest/numa-test/aarch64_numa_cpu()
>    PATCH[5/6] Fixes the broken CPU topology by considering the socket boundary
>               when the default NUMA node ID is given
>    PATCH[6/6] Uses the populated CPU topology to build the PPTT table,
>               instead of calculating it again
> 
> Changelog
> =========
> v9:
>     * Fix typo by replacing 'thrad-id' with 'thread-id' in
>       PATCH[v9 2/6]'s commit log                                 (Igor)
>     * Added PATCH[v9 4/6] to make CPU-to-NUMA association
>       real                                                       (Igor)

[...]

> Gavin Shan (6):
>    qapi/machine.json: Add cluster-id
>    qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
>    hw/arm/virt: Consider SMP configuration in CPU topology
>    qtest/numa-test: Correct CPU and NUMA association in
>      aarch64_numa_cpu()
>    hw/arm/virt: Fix CPU's default NUMA node ID
>    hw/acpi/aml-build: Use existing CPU topology to build PPTT table
> 
>   hw/acpi/aml-build.c        | 111 ++++++++++++++++---------------------
>   hw/arm/virt.c              |  19 ++++++-
>   hw/core/machine-hmp-cmds.c |   4 ++
>   hw/core/machine.c          |  16 ++++++
>   qapi/machine.json          |   6 +-
>   tests/qtest/numa-test.c    |  19 +++++--
>   6 files changed, 102 insertions(+), 73 deletions(-)
> 

I think this series is ready to be merged. Could you help to see how it
can be merged into QEMU 7.1?

Thanks,
Gavin




* Re: [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID
  2022-05-09  3:08 ` [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
@ 2022-05-09 10:47   ` Peter Maydell
  0 siblings, 0 replies; 11+ messages in thread
From: Peter Maydell @ 2022-05-09 10:47 UTC (permalink / raw)
  To: Gavin Shan
  Cc: qemu-arm, qemu-devel, mst, imammedo, ani, eduardo,
	marcel.apfelbaum, f4bug, wangyanan55, eblake, armbru, thuth,
	lvivier, pbonzini, berrange, Jonathan.Cameron, drjones, zhenyzha,
	shan.gavin

On Mon, 9 May 2022 at 04:08, Gavin Shan <gshan@redhat.com> wrote:
>
> Hi Peter and maintainers,
>
> On 5/3/22 10:02 PM, Gavin Shan wrote:
> > When the CPU-to-NUMA association isn't provided by the user, the default NUMA
> > node ID for the specific CPU is returned from virt_get_default_cpu_node_id().
> > Unfortunately, the default NUMA node ID breaks the socket boundary and leads
> > to a broken-CPU-topology warning message in the Linux guest. This series
> > fixes the issue by populating the full CPU topology on the arm/virt machine.
> >
> >    PATCH[1/6] Add cluster-id to CPU instance property
> >    PATCH[2/6] Fixes test failure in qtest/numa-test/aarch64_numa_cpu()
> >    PATCH[3/6] Uses SMP configuration to populate CPU topology
> >    PATCH[4/6] Corrects CPU/NUMA association in qtest/numa-test/aarch64_numa_cpu()
> >    PATCH[5/6] Fixes the broken CPU topology by considering the socket boundary
> >               when the default NUMA node ID is given
> >    PATCH[6/6] Uses the populated CPU topology to build the PPTT table,
> >               instead of calculating it again
> >
> > Changelog
> > =========
> > v9:
> >     * Fix typo by replacing 'thrad-id' with 'thread-id' in
> >       PATCH[v9 2/6]'s commit log                                 (Igor)
> >     * Added PATCH[v9 4/6] to make CPU-to-NUMA association
> >       real                                                       (Igor)
>
> [...]
>
> > Gavin Shan (6):
> >    qapi/machine.json: Add cluster-id
> >    qtest/numa-test: Specify CPU topology in aarch64_numa_cpu()
> >    hw/arm/virt: Consider SMP configuration in CPU topology
> >    qtest/numa-test: Correct CPU and NUMA association in
> >      aarch64_numa_cpu()
> >    hw/arm/virt: Fix CPU's default NUMA node ID
> >    hw/acpi/aml-build: Use existing CPU topology to build PPTT table
> >
> >   hw/acpi/aml-build.c        | 111 ++++++++++++++++---------------------
> >   hw/arm/virt.c              |  19 ++++++-
> >   hw/core/machine-hmp-cmds.c |   4 ++
> >   hw/core/machine.c          |  16 ++++++
> >   qapi/machine.json          |   6 +-
> >   tests/qtest/numa-test.c    |  19 +++++--
> >   6 files changed, 102 insertions(+), 73 deletions(-)
> >
>
> I think this series is ready to be merged. Could you help to see how it
> can be merged into QEMU 7.1?



Applied to target-arm.next, thanks.

-- PMM



end of thread [~2022-05-09 11:28 UTC | newest]

Thread overview: 11+ messages
2022-05-03 14:02 [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
2022-05-03 14:02 ` [PATCH v9 1/6] qapi/machine.json: Add cluster-id Gavin Shan
2022-05-03 14:03 ` [PATCH v9 2/6] qtest/numa-test: Specify CPU topology in aarch64_numa_cpu() Gavin Shan
2022-05-03 14:03 ` [PATCH v9 3/6] hw/arm/virt: Consider SMP configuration in CPU topology Gavin Shan
2022-05-03 14:03 ` [PATCH v9 4/6] qtest/numa-test: Correct CPU and NUMA association in aarch64_numa_cpu() Gavin Shan
2022-05-04 13:46   ` Igor Mammedov
2022-05-03 14:03 ` [PATCH v9 5/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
2022-05-03 14:03 ` [PATCH v9 6/6] hw/acpi/aml-build: Use existing CPU topology to build PPTT table Gavin Shan
2022-05-03 17:43   ` Michael S. Tsirkin
2022-05-09  3:08 ` [PATCH v9 0/6] hw/arm/virt: Fix CPU's default NUMA node ID Gavin Shan
2022-05-09 10:47   ` Peter Maydell
