* [PATCH 0/6] pseries NUMA distance calculation
@ 2020-09-23 19:34 Daniel Henrique Barboza
  2020-09-23 19:34 ` [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
                   ` (5 more replies)
  0 siblings, 6 replies; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

Hi,

This series is a follow-up to the reworked pSeries NUMA
code that is already merged upstream. It contains some of
the patches that were presented in the first version of this
work [1], some of them revised based on the reviews received
there.

With this series, we're able to take user input into consideration
when setting up the NUMA topology of the guest. It is still an
approximation, but at least user input is not completely ignored.
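
For instance, given an illustrative command line fragment like the
one below (assumed for this example, not taken from the series
itself), the value set via '-numa dist' is now approximated in the
guest instead of being ignored:

  -machine pseries-5.2 \
  -numa node,nodeid=0,memdev=m0 -numa node,nodeid=1,memdev=m1 \
  -numa dist,src=0,dst=1,val=40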

The changes will only be effective with pseries-5.2 and newer
machines, and if more than one NUMA node is declared by the user.
The idea is that we don't want to tamper with legacy guest behavior.
Patch 6 has examples of how we are approximating NUMA distance
via user input.

The series was rebased using David's ppc-for-5.2 at
4cca31df828.


Changes carried over from [1]:
- patch 1 (former 4): same patch, added David's r-b
- patch 2 (former 2): the check for asymmetrical NUMA was moved
to spapr code as requested in the review
- patch 4 is a merge of former patches 5 and 6
- patch 5 (former 9): reworked
- patch 6 (former 10): same patch

Patch 3 is new in the series.



[1] https://lists.gnu.org/archive/html/qemu-devel/2020-08/msg03169.html



Daniel Henrique Barboza (6):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: translate regular NUMA distance to PAPR distance
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 184 +++++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 4 files changed, 402 insertions(+), 9 deletions(-)

-- 
2.26.2




* [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  2020-09-24  7:47   ` Greg Kurz
  2020-09-23 19:34 ` [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

The changes to come to NUMA support are all guest visible. In
theory we could just create a new 5_1 class option flag to
avoid the changes cascading to 5.1 and older machines. The
reality is that these changes are only relevant if the machine
has more than one NUMA node. There is no need to needlessly
change guest behavior that has been around for years.

This new helper will be used by the next patches to determine
whether we should retain the (soon to be) legacy NUMA behavior
in the pSeries machine. The new behavior will only be exposed
if:

- machine is pseries-5.2 and newer;
- more than one NUMA node is declared in NUMA state.

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c         | 12 ++++++++++++
 include/hw/ppc/spapr.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index e813c7cfb9..c5d8910a74 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -294,6 +294,15 @@ static hwaddr spapr_node0_size(MachineState *machine)
     return machine->ram_size;
 }
 
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
+{
+    MachineState *machine = MACHINE(spapr);
+    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
+
+    return smc->pre_5_2_numa_associativity ||
+           machine->numa_state->num_nodes <= 1;
+}
+
 static void add_str(GString *s, const gchar *s1)
 {
     g_string_append_len(s, s1, strlen(s1) + 1);
@@ -4522,8 +4531,11 @@ DEFINE_SPAPR_MACHINE(5_2, "5.2", true);
  */
 static void spapr_machine_5_1_class_options(MachineClass *mc)
 {
+    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
+
     spapr_machine_5_2_class_options(mc);
     compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
+    smc->pre_5_2_numa_associativity = true;
 }
 
 DEFINE_SPAPR_MACHINE(5_1, "5.1", false);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 114e819969..d1aae03b97 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -143,6 +143,7 @@ struct SpaprMachineClass {
     bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
     hwaddr rma_limit;          /* clamp the RMA to this size */
     bool pre_5_1_assoc_refpoints;
+    bool pre_5_2_numa_associativity;
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio, 
@@ -860,6 +861,7 @@ int spapr_max_server_number(SpaprMachineState *spapr);
 void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
                       uint64_t pte0, uint64_t pte1);
 void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr);
 
 /* DRC callbacks. */
 void spapr_core_release(DeviceState *dev);
-- 
2.26.2




* [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-23 19:34 ` [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  2020-09-24  8:01   ` Greg Kurz
  2020-09-23 19:34 ` [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

The pSeries machine does not support asymmetrical NUMA
configurations. This doesn't make much of a difference
since we're not using user input for pSeries NUMA setup,
but this will change in the next patches.

To avoid breaking existing setups, gate this change by
checking for legacy NUMA support.
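
For illustration, a command line fragment like the one below (an
assumed example, not part of this patch) declares different distances
for each direction between the same two nodes:

  -numa dist,src=0,dst=1,val=20 -numa dist,src=1,dst=0,val=40

Such a setup will now be rejected with an error on pseries-5.2 and
newer machines.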

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 64fe567f5d..36aaa273ee 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,24 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
+static bool spapr_numa_is_symmetrical(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            if (numa_info[src].distance[dst] !=
+                numa_info[dst].distance[src]) {
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
 
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
     }
+
+    /*
+     * Legacy NUMA guests (pseries-5.1 and order, or guests with only
+     * 1 NUMA node) will not benefit from anything we're going to do
+     * after this point.
+     */
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        return;
+    }
+
+    if (!spapr_numa_is_symmetrical(machine)) {
+        error_report("Asymmetrical NUMA topologies aren't supported "
+                     "in the pSeries machine");
+        exit(1);
+    }
+
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-23 19:34 ` [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
  2020-09-23 19:34 ` [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  2020-09-24  8:16   ` Greg Kurz
  2020-09-23 19:34 ` [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

QEMU allows the user to set NUMA distances in the command line.
For ACPI architectures like x86, this means that user input is
used to populate the SLIT table, and the guest perceives the
distances as the user chooses to.

PPC64 does not work that way. In the PAPR concept of NUMA,
associativity relations between the NUMA nodes are provided by
the device tree, and the guest kernel is free to calculate the
distances as it sees fit. Given how ACPI architectures work,
this puts the pSeries machine in a strange spot - users expect
to define NUMA distances like in the ACPI case, but QEMU does
not have control over it. To give pSeries users a similar
experience, we'll need to bring kernel specifics to QEMU
to approximate the NUMA distances.

The pSeries kernel works with the NUMA distance range 10,
20, 40, 80 and 160. The code starts at 10 (local distance) and
searches for a match in the first NUMA level between the
resources. If there is no match, the distance is doubled and
then it proceeds to try to match in the next NUMA level. Rinse
and repeat for MAX_DISTANCE_REF_POINTS levels.
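
As an illustration, here is a minimal sketch of that matching loop
(an approximation of the guest kernel logic written for this
explanation; names and details are not the actual kernel source):

    #include <stdint.h>

    /*
     * Illustrative sketch only: derive the distance between two
     * resources from their associativity arrays, one entry per
     * reference point, most significant NUMA level first.
     */
    static int guest_node_distance(const uint32_t *assoc_a,
                                   const uint32_t *assoc_b,
                                   int max_levels)
    {
        int distance = 10; /* local distance */
        int level;

        for (level = 0; level < max_levels; level++) {
            if (assoc_a[level] == assoc_b[level]) {
                break; /* first match defines the distance */
            }
            distance *= 2; /* no match: double, try the next level */
        }

        /* with 4 levels and no match at all, this returns 160 */
        return distance;
    }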

This patch introduces a spapr_numa_PAPRify_distances() helper
that translates the user distances to kernel distances, which
we're going to use to determine the associativity domains for
the NUMA nodes.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 36aaa273ee..180800b2f3 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
     return true;
 }
 
+/*
+ * This function will translate the user distances into
+ * what the kernel understands as possible values: 10
+ * (local distance), 20, 40, 80 and 160. Current heuristic
+ * is:
+ *
+ *  - distances between 11 and 30 -> rounded to 20
+ *  - distances between 31 and 60 -> rounded to 40
+ *  - distances between 61 and 120 -> rounded to 80
+ *  - everything above 120 -> 160
+ *
+ * This step can also be done in the same time as the NUMA
+ * associativity domains calculation, at the cost of extra
+ * complexity. We chose to keep it simpler.
+ *
+ * Note: this will overwrite the distance values in
+ * ms->numa_state->nodes.
+ */
+static void spapr_numa_PAPRify_distances(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t rounded_distance = 160;
+
+            if (distance > 11 && distance < 30) {
+                rounded_distance = 20;
+            } else if (distance > 31 && distance < 60) {
+                rounded_distance = 40;
+            } else if (distance > 61 && distance < 120) {
+                rounded_distance = 80;
+            }
+
+            numa_info[src].distance[dst] = rounded_distance;
+            numa_info[dst].distance[src] = rounded_distance;
+        }
+    }
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
         exit(1);
     }
 
+    spapr_numa_PAPRify_distances(machine);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (2 preceding siblings ...)
  2020-09-23 19:34 ` [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  2020-09-24  9:33   ` Greg Kurz
  2020-09-23 19:34 ` [PATCH 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
  2020-09-23 19:34 ` [PATCH 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  5 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

This is the first guest visible change introduced in
spapr_numa.c. The previous settings of both reference-points
and maxdomains were too restrictive, but enough for the
existing associativity we're setting in the resources.

We'll change that in the following patches, populating the
associativity arrays based on user input. For those changes
to be effective, reference-points and maxdomains must be
more flexible. After this patch, we'll have 4 distinct
levels of NUMA (0x4, 0x3, 0x2, 0x1) and maxdomains will
allow for any type of configuration the user intends to
do - under the scope and limitations of PAPR itself, of
course.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 27 ++++++++++++++++++---------
 1 file changed, 18 insertions(+), 9 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 180800b2f3..688391278e 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -222,21 +222,30 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
  */
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
+    MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
-        cpu_to_be32(0x4),
+        cpu_to_be32(0x3),
         cpu_to_be32(0x2),
+        cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
-    uint32_t maxdomains[] = {
-        cpu_to_be32(4),
-        maxdomain,
-        maxdomain,
-        maxdomain,
-        cpu_to_be32(spapr->gpu_numa_id),
-    };
+    uint32_t maxdomain = cpu_to_be32(ms->numa_state->num_nodes +
+                                     spapr->gpu_numa_id);
+    uint32_t maxdomains[] = {0x4, maxdomain, maxdomain, maxdomain, maxdomain};
+
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        refpoints[1] =  cpu_to_be32(0x4);
+        refpoints[2] =  cpu_to_be32(0x2);
+        nr_refpoints = 3;
+
+        maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
+        maxdomains[1] = maxdomain;
+        maxdomains[2] = maxdomain;
+        maxdomains[3] = maxdomain;
+        maxdomains[4] = cpu_to_be32(spapr->gpu_numa_id);
+    }
 
     if (smc->pre_5_1_assoc_refpoints) {
         nr_refpoints = 2;
-- 
2.26.2




* [PATCH 5/6] spapr_numa: consider user input when defining associativity
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (3 preceding siblings ...)
  2020-09-23 19:34 ` [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  2020-09-24 10:22   ` Greg Kurz
  2020-09-23 19:34 ` [PATCH 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  5 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

This patch puts all the pieces together to finally allow user
input when defining the NUMA topology of the spapr guest.

We have one more kernel restriction to handle in this patch:
the associativity array of node 0 must be filled with zeroes
[1]. The strategy below ensures that this will happen.

spapr_numa_define_associativity_domains() will read the distance
(already PAPRified) between the nodes from numa_state and determine
the appropriate NUMA level. The NUMA domains, processed in ascending
order, are going to be matched via NUMA levels, and the lowest
associativity domain value is assigned to that specific level for
both.

This will create a heuristic where the associativities of the first
nodes have higher priority and are re-used in new matches, instead of
overwriting them with a new associativity match. This is necessary
because neither QEMU nor the pSeries kernel supports multiple
associativity domains for each resource, meaning that we have to
decide which associativity relation is relevant.

Ultimately, all of this results in a best effort approximation of
the actual NUMA distances the user provided in the command line. Given
the nature of how PAPR itself interprets NUMA distances versus the
expectations raised by how ACPI SLIT works, there might be better
algorithms, but in the end they would also just result in another way
to approximate what the user really wanted.

To keep this commit message no longer than it already is, the next
patch will update the existing documentation in ppc-spapr-numa.rst
with more in-depth details and design considerations/drawbacks.

[1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 80 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 688391278e..c84f77cda7 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
     }
 }
 
+static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)
+{
+    uint8_t numa_level;
+
+    switch (distance) {
+    case 20:
+        numa_level = 0x3;
+        break;
+    case 40:
+        numa_level = 0x2;
+        break;
+    case 80:
+        numa_level = 0x1;
+        break;
+    default:
+        numa_level = 0;
+    }
+
+    return numa_level;
+}
+
+static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
+                                                    MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            /*
+             * This is how the associativity domain between A and B
+             * is calculated:
+             *
+             * - get the distance between them
+             * - get the correspondent NUMA level for this distance
+             * - the arrays were initialized with their own numa_ids,
+             * and we're calculating the distance in node_id ascending order,
+             * starting from node 0. This will have a cascade effect in the
+             * algorithm because the associativity domains that node 0 defines
+             * will be carried over to the other nodes, and node 1
+             * associativities will be carried over unless there's already a
+             * node 0 associativity assigned, and so on. This happens because
+             * we'll assign the lowest value of assoc_src and assoc_dst to be
+             * the associativity domain of both, for the given NUMA level.
+             *
+             * The PPC kernel expects the associativity domains of node 0 to
+             * be always 0, and this algorithm will grant that by default.
+             */
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
+            uint32_t assoc_src, assoc_dst;
+
+            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
+            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
+
+            if (assoc_src < assoc_dst) {
+                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
+            } else {
+                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
+            }
+        }
+    }
+
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int i, j, max_nodes_with_gpus;
+    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
 
     /*
      * For all associativity arrays: first position is the size,
@@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     for (i = 0; i < nb_numa_nodes; i++) {
         spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+
+        /*
+         * Fill all associativity domains of the node with node_id.
+         * This is required because the kernel makes valid associativity
+         * matches with the zeroes if we leave the matrix uninitialized.
+         */
+        if (!using_legacy_numa) {
+            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+            }
+        }
     }
 
     /*
@@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
      * 1 NUMA node) will not benefit from anything we're going to do
      * after this point.
      */
-    if (spapr_machine_using_legacy_numa(spapr)) {
+    if (using_legacy_numa) {
         return;
     }
 
@@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     }
 
     spapr_numa_PAPRify_distances(machine);
+    spapr_numa_define_associativity_domains(spapr, machine);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH 6/6] specs/ppc-spapr-numa: update with new NUMA support
  2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (4 preceding siblings ...)
  2020-09-23 19:34 ` [PATCH 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-09-23 19:34 ` Daniel Henrique Barboza
  5 siblings, 0 replies; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-23 19:34 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

This update provides more in-depth information about the
choices and drawbacks of the new NUMA support for the
spapr machine.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
 1 file changed, 213 insertions(+)

diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
index e762038022..994bfb996f 100644
--- a/docs/specs/ppc-spapr-numa.rst
+++ b/docs/specs/ppc-spapr-numa.rst
@@ -189,3 +189,216 @@ QEMU up to 5.1, as follows:
 
 This also means that user input in QEMU command line does not change the
 NUMA distancing inside the guest for the pseries machine.
+
+New NUMA mechanics for pseries in QEMU 5.2
+==========================================
+
+Starting in QEMU 5.2, the pseries machine now considers user input when
+setting the NUMA topology of the guest. The following changes were made:
+
+* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
+  for 4 distinct NUMA distance values based on the NUMA levels
+
+* ibm,max-associativity-domains was changed to support multiple associativity
+  domains in all NUMA levels. This is needed to ensure user flexibility
+
+* ibm,associativity for all resources now varies with user input
+
+These changes are only effective for pseries-5.2 and newer machines that are
+created with more than one NUMA node (not counting NUMA nodes created by
+the machine itself, e.g. NVLink 2 GPUs). The now-legacy support has been
+around for such a long time, with users seeing NUMA distances 10 and 40
+(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
+existing experience of those guests.
+
+To bring pseries users an experience closer to what x86 users have when
+tuning NUMA, we had to operate under the current pseries Linux kernel
+logic described in `How the pseries Linux guest calculates NUMA distances`_.
+The result is that we needed to translate the NUMA distance user input
+into pseries Linux kernel input.
+
+Translating user distance to kernel distance
+--------------------------------------------
+
+User input for NUMA distance can vary from 10 to 254. We need to translate
+that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
+This is how it is being done:
+
+* user distance 11 to 30 will be interpreted as 20
+* user distance 31 to 60 will be interpreted as 40
+* user distance 61 to 120 will be interpreted as 80
+* user distance 121 and beyond will be interpreted as 160
+* user distance 10 stays 10
+
+The reasoning behind this approximation is to avoid rounding anything up to
+the local distance (10), keeping it exclusive to the 4th NUMA level (which
+is still exclusive to the node_id). All other ranges were chosen at the
+developer's discretion of what would be (somewhat) sensible considering
+the user input.
+Any other strategy can be used here, but in the end the reality is that we'll
+have to accept that a large range of values will be translated to the same
+NUMA topology in the guest, e.g. this user input:
+
+::
+
+      0   1   2
+  0  10  31 120
+  1  31  10  30
+  2 120  30  10
+
+And this other user input:
+
+::
+
+      0   1   2
+  0  10  60  61
+  1  60  10  11
+  2  61  11  10
+
+Will both be translated to the same values internally:
+
+::
+
+      0   1   2
+  0  10  40  80
+  1  40  10  20
+  2  80  20  10
+
+Users are encouraged to use only the kernel values in the NUMA definition to
+avoid being taken by surprise by what the guest is actually seeing in the
+topology. There are enough potential surprises inherent to the
+associativity domain assignment process, discussed below.
+
+
+How associativity domains are assigned
+--------------------------------------
+
+LOPAPR allows more than one associativity array (or 'string') per allocated
+resource. This would be used to represent that the resource has multiple
+connections with the board, and then the operating system, when deciding
+NUMA distancing, should consider the associativity information that provides
+the shortest distance.
+
+The spapr implementation does not support multiple associativity arrays per
+resource, and neither does the pseries Linux kernel. We'll have to represent
+the NUMA topology using one associativity per resource, which means that
+choices and compromises are going to be made.
+
+Consider the following NUMA topology entered by user input:
+
+::
+
+      0   1   2   3
+  0  10  20  20  40
+  1  20  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+Honoring just the relative distances of node 0 to every other node, one possible
+value for all associativity arrays would be:
+
+* node 0: 0 B A 0
+* node 1: 0 0 A 1
+* node 2: 0 0 A 2
+* node 3: 0 B 0 3
+
+With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
+
+* distance from 0 to 1 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+
+The distances related to node 0 are well represented. Doing the same for
+node 1, and keeping in mind that we don't need to revisit node 0, the
+distance from node 1 to 2 is 80, matching at 0x4:
+
+* node 1: C 0 A 1
+* node 2: C 0 A 2
+
+Here we already have the first conflict. Even if we assign a new associativity
+domain at 0x4 for 1 and 2, and we do that in the code, the kernel will define
+the distance between 1 and 2 as 20, not 80, because both 1 and 2 have the "A"
+associativity domain from the previous step. If we decide to discard the
+associativity with "A" then the node 0 distances are compromised.
+
+Following up with the distance from 1 to 3 being 40 (a match in 0x2) we have another
+decision to make. These are the current associativity domains of each:
+
+* node 1: C 0 A 1
+* node 3: 0 B 0 3
+
+There is already an associativity domain at 0x2 in node 3, "B", which was assigned
+by the node 0 distances. If we define a new associativity domain at this level
+for 1 and 3 we would overwrite the existing associativity between 0 and 3.
+What the code does in this case is to re-use the existing domain: "B" is now
+assigned to the 0x2 level of node 1, resulting in the following associativity
+arrays:
+
+* node 0: 0 B A 0
+* node 1: C 0 A 1
+* node 2: C B A 2
+* node 3: 0 B 0 3
+
+In the last step we will analyze just nodes 2 and 3. The desired distance between
+2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a domain assigned in 0x3,
+A, so we do the same as we did in the previous case and assign it to node 3
+at 0x3. This is the end result for the associativity arrays:
+
+* node 0: 0 B A 0
+* node 1: C 0 A 1
+* node 2: C B A 2
+* node 3: 0 B A 3
+
+The kernel will read these arrays and will calculate the following NUMA topology for
+the guest:
+
+::
+
+      0   1   2   3
+  0  10  20  20  20
+  1  20  10  20  20
+  2  20  20  10  20
+  3  20  20  20  10
+
+This is not what the user wanted, but it is what the current logic and
+implementation constraints of the kernel and QEMU provide within the LOPAPR
+specification.
+
+Changing a single value, especially a low distance value, makes for drastic
+changes in the result. For example, with the same user input from above, but
+changing the node distance from 0 to 1 to 40:
+
+::
+
+      0   1   2   3
+  0  10  40  20  40
+  1  40  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+This is the result inside the guest, applying the same heuristics:
+
+::
+
+  $ numactl -H
+  available: 4 nodes (0-3)
+  (...)
+  node distances:
+  node   0   1   2   3
+    0:  10  40  20  20
+    1:  40  10  80  40
+    2:  20  80  10  20
+    3:  20  40  20  10
+
+This result is much closer to the user input and only a single distance was changed
+from the original.
+
+The kernel will always match with the shortest associativity domain possible,
+and we're attempting to retain the previously established relations between
+the nodes. This means that a distance equal to 20 between nodes A and B and
+the same distance 20 between nodes A and F will cause the distance between B
+and F to also be 20. The same happens to the other distances, but shorter
+distances take precedence in the distance calculation.
+
+Users are welcome to use this knowledge and experiment with the input to get
+the NUMA topology they want, or as close to it as possible. The important
+thing is to keep expectations in line with what we are capable of providing
+at this moment: an approximation.
-- 
2.26.2




* Re: [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper
  2020-09-23 19:34 ` [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
@ 2020-09-24  7:47   ` Greg Kurz
  0 siblings, 0 replies; 16+ messages in thread
From: Greg Kurz @ 2020-09-24  7:47 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed, 23 Sep 2020 16:34:53 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The changes to come to NUMA support are all guest visible. In
> theory we could just create a new 5_1 class option flag to
> avoid the changes cascading to 5.1 and older machines. The
> reality is that these changes are only relevant if the machine
> has more than one NUMA node. There is no need to needlessly
> change guest behavior that has been around for years.
> 
> This new helper will be used by the next patches to determine
> whether we should retain the (soon to be) legacy NUMA behavior
> in the pSeries machine. The new behavior will only be exposed
> if:
> 
> - machine is pseries-5.2 and newer;
> - more than one NUMA node is declared in NUMA state.
> 
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c         | 12 ++++++++++++
>  include/hw/ppc/spapr.h |  2 ++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index e813c7cfb9..c5d8910a74 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -294,6 +294,15 @@ static hwaddr spapr_node0_size(MachineState *machine)
>      return machine->ram_size;
>  }
>  
> +bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
> +{
> +    MachineState *machine = MACHINE(spapr);
> +    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
> +
> +    return smc->pre_5_2_numa_associativity ||
> +           machine->numa_state->num_nodes <= 1;
> +}
> +
>  static void add_str(GString *s, const gchar *s1)
>  {
>      g_string_append_len(s, s1, strlen(s1) + 1);
> @@ -4522,8 +4531,11 @@ DEFINE_SPAPR_MACHINE(5_2, "5.2", true);
>   */
>  static void spapr_machine_5_1_class_options(MachineClass *mc)
>  {
> +    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
> +
>      spapr_machine_5_2_class_options(mc);
>      compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
> +    smc->pre_5_2_numa_associativity = true;
>  }
>  
>  DEFINE_SPAPR_MACHINE(5_1, "5.1", false);
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 114e819969..d1aae03b97 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -143,6 +143,7 @@ struct SpaprMachineClass {
>      bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
>      hwaddr rma_limit;          /* clamp the RMA to this size */
>      bool pre_5_1_assoc_refpoints;
> +    bool pre_5_2_numa_associativity;
>  
>      void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
>                            uint64_t *buid, hwaddr *pio, 
> @@ -860,6 +861,7 @@ int spapr_max_server_number(SpaprMachineState *spapr);
>  void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
>                        uint64_t pte0, uint64_t pte1);
>  void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
> +bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr);
>  
>  /* DRC callbacks. */
>  void spapr_core_release(DeviceState *dev);




* Re: [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-23 19:34 ` [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-09-24  8:01   ` Greg Kurz
  2020-09-24 11:23     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 16+ messages in thread
From: Greg Kurz @ 2020-09-24  8:01 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed, 23 Sep 2020 16:34:54 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The pSeries machine does not support asymmetrical NUMA
> configurations. This doesn't make much of a difference
> since we're not using user input for pSeries NUMA setup,
> but this will change in the next patches.
> 
> To avoid breaking existing setups, gate this change by
> checking for legacy NUMA support.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 64fe567f5d..36aaa273ee 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -19,6 +19,24 @@
>  /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>  #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>  
> +static bool spapr_numa_is_symmetrical(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            if (numa_info[src].distance[dst] !=
> +                numa_info[dst].distance[src]) {
> +                return false;
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>  
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>      }
> +
> +    /*
> +     * Legacy NUMA guests (pseries-5.1 and order, or guests with only

s/order/older

> +     * 1 NUMA node) will not benefit from anything we're going to do
> +     * after this point.
> +     */
> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        return;
> +    }
> +
> +    if (!spapr_numa_is_symmetrical(machine)) {
> +        error_report("Asymmetrical NUMA topologies aren't supported "
> +                     "in the pSeries machine");
> +        exit(1);

Even if the code base is still heavily populated with exit(1), it seems
that exit(EXIT_FAILURE) is preferred.

Anyway,

Reviewed-by: Greg Kurz <groug@kaod.org>

> +    }
> +
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,




* Re: [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-23 19:34 ` [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
@ 2020-09-24  8:16   ` Greg Kurz
  2020-09-24 11:18     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 16+ messages in thread
From: Greg Kurz @ 2020-09-24  8:16 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed, 23 Sep 2020 16:34:55 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> QEMU allows the user to set NUMA distances in the command line.
> For ACPI architectures like x86, this means that user input is
> used to populate the SLIT table, and the guest perceives the
> distances as the user chooses to.
> 
> PPC64 does not work that way. In the PAPR concept of NUMA,
> associativity relations between the NUMA nodes are provided by
> the device tree, and the guest kernel is free to calculate the
> distances as it sees fit. Given how ACPI architectures work,
> this puts the pSeries machine in a strange spot - users expect
> to define NUMA distances like in the ACPI case, but QEMU does
> not have control over it. To give pSeries users a similar
> experience, we'll need to bring kernel specifics to QEMU
> to approximate the NUMA distances.
> 
> The pSeries kernel works with the NUMA distance range 10,
> 20, 40, 80 and 160. The code starts at 10 (local distance) and
> searches for a match in the first NUMA level between the
> resources. If there is no match, the distance is doubled and
> then it proceeds to try to match in the next NUMA level. Rinse
> and repeat for MAX_DISTANCE_REF_POINTS levels.
> 
> This patch introduces a spapr_numa_PAPRify_distances() helper

Funky naming but meaningful and funny, for me at least :)

> that translates the user distances to kernel distances, which
> we're going to use to determine the associativity domains for
> the NUMA nodes.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 36aaa273ee..180800b2f3 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>      return true;
>  }
>  
> +/*
> + * This function will translate the user distances into
> + * what the kernel understands as possible values: 10
> + * (local distance), 20, 40, 80 and 160. Current heuristic
> + * is:
> + *
> + *  - distances between 11 and 30 -> rounded to 20
> + *  - distances between 31 and 60 -> rounded to 40
> + *  - distances between 61 and 120 -> rounded to 80
> + *  - everything above 120 -> 160

It isn't clear what happens when the distances are exactly
30, 60 or 120...

> + *
> + * This step can also be done in the same time as the NUMA
> + * associativity domains calculation, at the cost of extra
> + * complexity. We chose to keep it simpler.
> + *
> + * Note: this will overwrite the distance values in
> + * ms->numa_state->nodes.
> + */
> +static void spapr_numa_PAPRify_distances(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t rounded_distance = 160;
> +
> +            if (distance > 11 && distance < 30) {
> +                rounded_distance = 20;
> +            } else if (distance > 31 && distance < 60) {
> +                rounded_distance = 40;
> +            } else if (distance > 61 && distance < 120) {
> +                rounded_distance = 80;
> +            }

... and this code doesn't convert them to PAPR-friendly values
actually. I guess < should be turned into <= .

> +
> +            numa_info[src].distance[dst] = rounded_distance;
> +            numa_info[dst].distance[src] = rounded_distance;
> +        }
> +    }
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>          exit(1);
>      }
>  
> +    spapr_numa_PAPRify_distances(machine);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,




* Re: [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings
  2020-09-23 19:34 ` [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-09-24  9:33   ` Greg Kurz
  0 siblings, 0 replies; 16+ messages in thread
From: Greg Kurz @ 2020-09-24  9:33 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed, 23 Sep 2020 16:34:56 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> This is the first guest visible change introduced in
> spapr_numa.c. The previous settings of both reference-points
> and maxdomains were too restrictive, but enough for the
> existing associativity we're setting in the resources.
> 
> We'll change that in the following patches, populating the
> associativity arrays based on user input. For those changes
> to be effective, reference-points and maxdomains must be
> more flexible. After this patch, we'll have 4 distinct
> levels of NUMA (0x4, 0x3, 0x2, 0x1) and maxdomains will
> allow for any type of configuration the user intends to
> do - under the scope and limitations of PAPR itself, of
> course.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 27 ++++++++++++++++++---------
>  1 file changed, 18 insertions(+), 9 deletions(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 180800b2f3..688391278e 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -222,21 +222,30 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>   */
>  void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>  {
> +    MachineState *ms = MACHINE(spapr);
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      uint32_t refpoints[] = {
>          cpu_to_be32(0x4),
> -        cpu_to_be32(0x4),
> +        cpu_to_be32(0x3),
>          cpu_to_be32(0x2),
> +        cpu_to_be32(0x1),
>      };
>      uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
> -    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
> -    uint32_t maxdomains[] = {
> -        cpu_to_be32(4),
> -        maxdomain,
> -        maxdomain,
> -        maxdomain,
> -        cpu_to_be32(spapr->gpu_numa_id),
> -    };
> +    uint32_t maxdomain = cpu_to_be32(ms->numa_state->num_nodes +
> +                                     spapr->gpu_numa_id);
> +    uint32_t maxdomains[] = {0x4, maxdomain, maxdomain, maxdomain, maxdomain};
> +

It seems maxdomains[0] should be cpu_to_be32(0x4) and spaces are missing.

Maybe keep the previous multi-line declaration style? This seems to produce
a nicer diff for the reviewer:

     uint32_t maxdomains[] = {
         cpu_to_be32(4),
         maxdomain,
         maxdomain,
         maxdomain,
-        cpu_to_be32(spapr->gpu_numa_id),
+        maxdomain,
     };

> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        refpoints[1] =  cpu_to_be32(0x4);
> +        refpoints[2] =  cpu_to_be32(0x2);

I'd rather have an explicit view of the legacy layouts for clarity...

> +        nr_refpoints = 3;
> +
> +        maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
> +        maxdomains[1] = maxdomain;
> +        maxdomains[2] = maxdomain;
> +        maxdomains[3] = maxdomain;
> +        maxdomains[4] = cpu_to_be32(spapr->gpu_numa_id);

... and here.

eg.

    if (spapr_machine_using_legacy_numa(spapr)) {
        uint32_t legacy_refpoints[] = {
            cpu_to_be32(0x4),
            cpu_to_be32(0x4),
            cpu_to_be32(0x2),
        };
        uint32_t legacy_maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
        uint32_t legacy_maxdomains[] = {
            cpu_to_be32(4),
            legacy_maxdomain,
            legacy_maxdomain,
            legacy_maxdomain,
            cpu_to_be32(spapr->gpu_numa_id),
        };

        nr_refpoints = 3;
        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));
    }

This allows us to instantly see how things are expected to appear
in the FDT, without having to mentally patch the refpoints[] and
maxdomains[] arrays. This also makes the diff easier to review.

> +    }
>  
>      if (smc->pre_5_1_assoc_refpoints) {
>          nr_refpoints = 2;




* Re: [PATCH 5/6] spapr_numa: consider user input when defining associativity
  2020-09-23 19:34 ` [PATCH 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-09-24 10:22   ` Greg Kurz
  2020-09-24 11:21     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 16+ messages in thread
From: Greg Kurz @ 2020-09-24 10:22 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed, 23 Sep 2020 16:34:57 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> This patch puts all the pieces together to finally allow user
> input when defining the NUMA topology of the spapr guest.
> 
> We have one more kernel restriction to handle in this patch:
> the associativity array of node 0 must be filled with zeroes
> [1]. The strategy below ensures that this will happen.
> 
> spapr_numa_define_associativity_domains() will read the distance
> (already PAPRified) between the nodes from numa_state and determine
> the appropriate NUMA level. The NUMA domains, processed in ascending
> order, are going to be matched via NUMA levels, and the lowest
> associativity domain value is assigned to that specific level for
> both.
> 
> This will create a heuristic where the associativities of the first
> nodes have higher priority and are re-used in new matches, instead of
> overwriting them with a new associativity match. This is necessary
> because neither QEMU nor the pSeries kernel supports multiple
> associativity domains for each resource, meaning that we have to
> decide which associativity relation is relevant.
> 
> Ultimately, all of this results in a best effort approximation of
> the actual NUMA distances the user provided in the command line. Given
> the nature of how PAPR itself interprets NUMA distances versus the
> expectations raised by how ACPI SLIT works, there might be better
> algorithms, but in the end they would also just result in another way
> to approximate what the user really wanted.
> 
> To keep this commit message no longer than it already is, the next
> patch will update the existing documentation in ppc-spapr-numa.rst
> with more in-depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 80 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 688391278e..c84f77cda7 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>      }
>  }
>  
> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)

The funky naming doesn't improve clarity IMHO. I'd rather make
it lowercase only.

> +{
> +    uint8_t numa_level;
> +
> +    switch (distance) {
> +    case 20:
> +        numa_level = 0x3;
> +        break;
> +    case 40:
> +        numa_level = 0x2;
> +        break;
> +    case 80:
> +        numa_level = 0x1;
> +        break;
> +    default:
> +        numa_level = 0;

Hmm... same level for distances 10 and 160? Is this correct?

> +    }
> +
> +    return numa_level;
> +}
> +
> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
> +                                                    MachineState *ms)

Passing ms seems to indicate that it could have a different value than spapr,
which is certainly not true.

I'd rather make it a local variable:

    MachineState *ms = MACHINE(spapr);

This is a slow path: we don't really mind doing dynamic type checking
multiple times.

> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * This is how the associativity domain between A and B
> +             * is calculated:
> +             *
> +             * - get the distance between them
> +             * - get the correspondent NUMA level for this distance
> +             * - the arrays were initialized with their own numa_ids,
> +             * and we're calculating the distance in node_id ascending order,
> +             * starting from node 0. This will have a cascade effect in the
> +             * algorithm because the associativity domains that node 0 defines
> +             * will be carried over to the other nodes, and node 1
> +             * associativities will be carried over unless there's already a
> +             * node 0 associativity assigned, and so on. This happens because
> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
> +             * the associativity domain of both, for the given NUMA level.
> +             *
> +             * The PPC kernel expects the associativity domains of node 0 to
> +             * be always 0, and this algorithm will grant that by default.
> +             */
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
> +            uint32_t assoc_src, assoc_dst;
> +
> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
> +
> +            if (assoc_src < assoc_dst) {
> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
> +            } else {
> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
> +            }
> +        }
> +    }
> +
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int i, j, max_nodes_with_gpus;
> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>  
>      /*
>       * For all associativity arrays: first position is the size,
> @@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      for (i = 0; i < nb_numa_nodes; i++) {
>          spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +
> +        /*
> +         * Fill all associativity domains of the node with node_id.
> +         * This is required because the kernel makes valid associativity

It would be appreciated to have a URL to the corresponding code in the
changelog.

> +         * matches with the zeroes if we leave the matrix uninitialized.
> +         */
> +        if (!using_legacy_numa) {
> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            }
> +        }
>      }
>  
>      /*
> @@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>       * 1 NUMA node) will not benefit from anything we're going to do
>       * after this point.
>       */
> -    if (spapr_machine_using_legacy_numa(spapr)) {
> +    if (using_legacy_numa) {
>          return;
>      }
>  
> @@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      }
>  
>      spapr_numa_PAPRify_distances(machine);
> +    spapr_numa_define_associativity_domains(spapr, machine);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,




* Re: [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-24  8:16   ` Greg Kurz
@ 2020-09-24 11:18     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 11:18 UTC (permalink / raw)
  To: Greg Kurz; +Cc: qemu-ppc, qemu-devel, david



On 9/24/20 5:16 AM, Greg Kurz wrote:
> On Wed, 23 Sep 2020 16:34:55 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> QEMU allows the user to set NUMA distances in the command line.
>> For ACPI architectures like x86, this means that user input is
>> used to populate the SLIT table, and the guest perceives the
>> distances as the user chooses to.
>>
>> PPC64 does not work that way. In the PAPR concept of NUMA,
>> associativity relations between the NUMA nodes are provided by
>> the device tree, and the guest kernel is free to calculate the
>> distances as it sees fit. Given how ACPI architectures work,
>> this puts the pSeries machine in a strange spot - users expect
>> to define NUMA distances like in the ACPI case, but QEMU does
>> not have control over it. To give pSeries users a similar
>> experience, we'll need to bring kernel specifics to QEMU
>> to approximate the NUMA distances.
>>
>> The pSeries kernel works with the NUMA distance range 10,
>> 20, 40, 80 and 160. The code starts at 10 (local distance) and
>> searches for a match in the first NUMA level between the
>> resources. If there is no match, the distance is doubled and
>> then it proceeds to try to match in the next NUMA level. Rinse
>> and repeat for MAX_DISTANCE_REF_POINTS levels.
>>
>> This patch introduces a spapr_numa_PAPRify_distances() helper
> 
> Funky naming but meaningful and funny, for me at least :)
> 
>> that translates the user distances to kernel distance, which
>> we're going to use to determine the associativity domains for
>> the NUMA nodes.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 36aaa273ee..180800b2f3 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>>       return true;
>>   }
>>   
>> +/*
>> + * This function will translate the user distances into
>> + * what the kernel understand as possible values: 10
>> + * (local distance), 20, 40, 80 and 160. Current heuristic
>> + * is:
>> + *
>> + *  - distances between 11 and 30 -> rounded to 20
>> + *  - distances between 31 and 60 -> rounded to 40
>> + *  - distances between 61 and 120 -> rounded to 80
>> + *  - everything above 120 -> 160
> 
> It isn't clear what happens when the distances are exactly
> 30, 60 or 120...

30 is rounded to 20, 60 is rounded to 40 and 120 is rounded to 80.
Perhaps I should change this to mention "between 11 and 30
inclusive" and so on.

> 
>> + *
>> + * This step can also be done in the same time as the NUMA
>> + * associativity domains calculation, at the cost of extra
>> + * complexity. We chose to keep it simpler.
>> + *
>> + * Note: this will overwrite the distance values in
>> + * ms->numa_state->nodes.
>> + */
>> +static void spapr_numa_PAPRify_distances(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t rounded_distance = 160;
>> +
>> +            if (distance > 11 && distance < 30) {
>> +                rounded_distance = 20;
>> +            } else if (distance > 31 && distance < 60) {
>> +                rounded_distance = 40;
>> +            } else if (distance > 61 && distance < 120) {
>> +                rounded_distance = 80;
>> +            }
> 
> ... and this code doesn't convert them to PAPR-friendly values
> actually. I guess < should be turned into <= .


Good catch. Yep, these need to be <=.
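
For the next spin, the body of spapr_numa_PAPRify_distances() could
end up looking something like this (sketch only; the early branch for
the local distance is an assumption on how we'll handle src == dst,
which the current code would wrongly round up to 160):

    uint8_t rounded_distance = 160;

    if (distance <= 10) {
        /* local distance: assumed to be passed through as is */
        rounded_distance = distance;
    } else if (distance <= 30) {
        rounded_distance = 20;
    } else if (distance <= 60) {
        rounded_distance = 40;
    } else if (distance <= 120) {
        rounded_distance = 80;
    }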


Thanks,


DHB

> 
>> +
>> +            numa_info[src].distance[dst] = rounded_distance;
>> +            numa_info[dst].distance[src] = rounded_distance;
>> +        }
>> +    }
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>           exit(1);
>>       }
>>   
>> +    spapr_numa_PAPRify_distances(machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 5/6] spapr_numa: consider user input when defining associativity
  2020-09-24 10:22   ` Greg Kurz
@ 2020-09-24 11:21     ` Daniel Henrique Barboza
  2020-09-24 11:32       ` Greg Kurz
  0 siblings, 1 reply; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 11:21 UTC (permalink / raw)
  To: Greg Kurz; +Cc: qemu-ppc, qemu-devel, david



On 9/24/20 7:22 AM, Greg Kurz wrote:
> On Wed, 23 Sep 2020 16:34:57 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> This patch puts all the pieces together to finally allow user
>> input when defining the NUMA topology of the spapr guest.
>>
>> We have one more kernel restriction to handle in this patch:
>> the associativity array of node 0 must be filled with zeroes
>> [1]. The strategy below ensures that this will happen.
>>
>> spapr_numa_define_associativity_domains() will read the distance
>> (already PAPRified) between the nodes from numa_state and determine
>> the appropriate NUMA level. The NUMA domains, processed in ascending
>> order, are going to be matched via NUMA levels, and the lowest
>> associativity domain value is assigned to that specific level for
>> both.
>>
>> This will create a heuristic where the associativities of the first
>> nodes have higher priority and are re-used in new matches, instead of
>> overwriting them with a new associativity match. This is necessary
>> because neither QEMU, nor the pSeries kernel, supports multiple
>> associativity domains for each resource, meaning that we have to
>> decide which associativity relation is relevant.
>>
>> Ultimately, all of this results in a best effort approximation for
>> the actual NUMA distances the user input in the command line. Given
>> the nature of how PAPR itself interprets NUMA distances versus the
>> expectations raised by how ACPI SLIT works, there might be better
>> algorithms but, in the end, it'll also result in another way to
>> approximate what the user really wanted.
>>
>> To keep this commit message no longer than it already is, the next
>> patch will update the existing documentation in ppc-spapr-numa.rst
>> with more in depth details and design considerations/drawbacks.
>>
>> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 80 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 688391278e..c84f77cda7 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>>       }
>>   }
>>   
>> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)
> 
> The funky naming doesn't improve clarity IMHO. I'd rather make
> it lowercase only.
> 
>> +{
>> +    uint8_t numa_level;
>> +
>> +    switch (distance) {
>> +    case 20:
>> +        numa_level = 0x3;
>> +        break;
>> +    case 40:
>> +        numa_level = 0x2;
>> +        break;
>> +    case 80:
>> +        numa_level = 0x1;
>> +        break;
>> +    default:
>> +        numa_level = 0;
> 
> Hmm... same level for distances 10 and 160 ? Is this correct ?


This will never be called with distance = 10 because we won't
evaluate the distance from a node to itself. But I'll put a
'case 10:' clause there that does nothing to make it clearer.



DHB

> 
>> +    }
>> +
>> +    return numa_level;
>> +}
>> +
>> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
>> +                                                    MachineState *ms)
> 
> Passing ms seems to indicate that it could have a different value than spapr,
> > which is certainly not true.
> 
> I'd rather make it a local variable:
> 
>      MachineState *ms = MACHINE(spapr);
> 
> > This is a slow path: we don't really care to do dynamic type checking
> multiple times.
> 
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            /*
>> +             * This is how the associativity domain between A and B
>> +             * is calculated:
>> +             *
>> +             * - get the distance between them
>> +             * - get the correspondent NUMA level for this distance
>> +             * - the arrays were initialized with their own numa_ids,
>> +             * and we're calculating the distance in node_id ascending order,
>> +             * starting from node 0. This will have a cascade effect in the
>> +             * algorithm because the associativity domains that node 0 defines
>> +             * will be carried over to the other nodes, and node 1
>> +             * associativities will be carried over unless there's already a
>> +             * node 0 associativity assigned, and so on. This happens because
>> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
>> +             * the associativity domain of both, for the given NUMA level.
>> +             *
>> +             * The PPC kernel expects the associativity domains of node 0 to
>> +             * be always 0, and this algorithm will grant that by default.
>> +             */
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
>> +            uint32_t assoc_src, assoc_dst;
>> +
>> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
>> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
>> +
>> +            if (assoc_src < assoc_dst) {
>> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
>> +            } else {
>> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
>> +            }
>> +        }
>> +    }
>> +
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>>       int nb_numa_nodes = machine->numa_state->num_nodes;
>>       int i, j, max_nodes_with_gpus;
>> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>>   
>>       /*
>>        * For all associativity arrays: first position is the size,
>> @@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       for (i = 0; i < nb_numa_nodes; i++) {
>>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>> +
>> +        /*
>> +         * Fill all associativity domains of the node with node_id.
>> +         * This is required because the kernel makes valid associativity
> 
> It would be appreciated to have an URL to the corresponding code in the
> changelog.
> 
>> +         * matches with the zeroes if we leave the matrix uninitialized.
>> +         */
>> +        if (!using_legacy_numa) {
>> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            }
>> +        }
>>       }
>>   
>>       /*
>> @@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>        * 1 NUMA node) will not benefit from anything we're going to do
>>        * after this point.
>>        */
>> -    if (spapr_machine_using_legacy_numa(spapr)) {
>> +    if (using_legacy_numa) {
>>           return;
>>       }
>>   
>> @@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       }
>>   
>>       spapr_numa_PAPRify_distances(machine);
>> +    spapr_numa_define_associativity_domains(spapr, machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-24  8:01   ` Greg Kurz
@ 2020-09-24 11:23     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 16+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 11:23 UTC (permalink / raw)
  To: Greg Kurz; +Cc: qemu-ppc, qemu-devel, david



On 9/24/20 5:01 AM, Greg Kurz wrote:
> On Wed, 23 Sep 2020 16:34:54 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> The pSeries machine does not support asymmetrical NUMA
>> configurations. This doesn't make much of a difference
>> since we're not using user input for pSeries NUMA setup,
>> but this will change in the next patches.
>>
>> To avoid breaking existing setups, gate this change by
>> checking for legacy NUMA support.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
>>   1 file changed, 34 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 64fe567f5d..36aaa273ee 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -19,6 +19,24 @@
>>   /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>>   #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>>   
>> +static bool spapr_numa_is_symmetrical(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            if (numa_info[src].distance[dst] !=
>> +                numa_info[dst].distance[src]) {
>> +                return false;
>> +            }
>> +        }
>> +    }
>> +
>> +    return true;
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>   
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>       }
>> +
>> +    /*
>> +     * Legacy NUMA guests (pseries-5.1 and order, or guests with only
> 
> s/order/older
> 
>> +     * 1 NUMA node) will not benefit from anything we're going to do
>> +     * after this point.
>> +     */
>> +    if (spapr_machine_using_legacy_numa(spapr)) {
>> +        return;
>> +    }
>> +
>> +    if (!spapr_numa_is_symmetrical(machine)) {
>> +        error_report("Asymmetrical NUMA topologies aren't supported "
>> +                     "in the pSeries machine");
>> +        exit(1);
> 
> Even if the code base is still heavily populated with exit(1), it seems
> that exit(EXIT_FAILURE) is preferred.
> 
> Anyway,
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>


Given that a new spin of this series is required, I'll change this to
exit(EXIT_FAILURE) there as well.
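
i.e. the check should then read:

    if (!spapr_numa_is_symmetrical(machine)) {
        error_report("Asymmetrical NUMA topologies aren't supported "
                     "in the pSeries machine");
        exit(EXIT_FAILURE);
    }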


Thanks,


DHB

> 
>> +    }
>> +
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 16+ messages in thread

* Re: [PATCH 5/6] spapr_numa: consider user input when defining associativity
  2020-09-24 11:21     ` Daniel Henrique Barboza
@ 2020-09-24 11:32       ` Greg Kurz
  0 siblings, 0 replies; 16+ messages in thread
From: Greg Kurz @ 2020-09-24 11:32 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Thu, 24 Sep 2020 08:21:47 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> 
> 
> On 9/24/20 7:22 AM, Greg Kurz wrote:
> > On Wed, 23 Sep 2020 16:34:57 -0300
> > Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> > 
> >> This patch puts all the pieces together to finally allow user
> >> input when defining the NUMA topology of the spapr guest.
> >>
> >> We have one more kernel restriction to handle in this patch:
> >> the associativity array of node 0 must be filled with zeroes
> >> [1]. The strategy below ensures that this will happen.
> >>
> >> spapr_numa_define_associativity_domains() will read the distance
> >> (already PAPRified) between the nodes from numa_state and determine
> >> the appropriate NUMA level. The NUMA domains, processed in ascending
> >> order, are going to be matched via NUMA levels, and the lowest
> >> associativity domain value is assigned to that specific level for
> >> both.
> >>
> >> This will create a heuristic where the associativities of the first
> >> nodes have higher priority and are re-used in new matches, instead of
> >> overwriting them with a new associativity match. This is necessary
> >> because neither QEMU, nor the pSeries kernel, supports multiple
> >> associativity domains for each resource, meaning that we have to
> >> decide which associativity relation is relevant.
> >>
> >> Ultimately, all of this results in a best effort approximation for
> >> the actual NUMA distances the user input in the command line. Given
> >> the nature of how PAPR itself interprets NUMA distances versus the
> >> expectations raised by how ACPI SLIT works, there might be better
> >> algorithms but, in the end, it'll also result in another way to
> >> approximate what the user really wanted.
> >>
> >> To keep this commit message no longer than it already is, the next
> >> patch will update the existing documentation in ppc-spapr-numa.rst
> >> with more in depth details and design considerations/drawbacks.
> >>
> >> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> >>
> >> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> >> ---
> >>   hw/ppc/spapr_numa.c | 81 ++++++++++++++++++++++++++++++++++++++++++++-
> >>   1 file changed, 80 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> >> index 688391278e..c84f77cda7 100644
> >> --- a/hw/ppc/spapr_numa.c
> >> +++ b/hw/ppc/spapr_numa.c
> >> @@ -80,12 +80,79 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
> >>       }
> >>   }
> >>   
> >> +static uint8_t spapr_numa_get_NUMA_level(uint8_t distance)
> > 
> > The funky naming doesn't improve clarity IMHO. I'd rather make
> > it lowercase only.
> > 
> >> +{
> >> +    uint8_t numa_level;
> >> +
> >> +    switch (distance) {
> >> +    case 20:
> >> +        numa_level = 0x3;
> >> +        break;
> >> +    case 40:
> >> +        numa_level = 0x2;
> >> +        break;
> >> +    case 80:
> >> +        numa_level = 0x1;
> >> +        break;
> >> +    default:
> >> +        numa_level = 0;
> > 
> > Hmm... same level for distances 10 and 160 ? Is this correct ?
> 
> 
> This will never be called with distance = 10 because we won't
> evaluate the distance from a node to itself. But I'll put a
> 'case 10:' clause there that does nothing to make it clearer.
> 

You should make it g_assert_not_reached() in this case.
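
i.e. something like this, with the lowercase naming I suggested
earlier (sketch only, the final naming is up to you):

    static uint8_t spapr_numa_get_numa_level(uint8_t distance)
    {
        switch (distance) {
        case 10:
            /* local distance: callers never pass it here */
            g_assert_not_reached();
        case 20:
            return 0x3;
        case 40:
            return 0x2;
        case 80:
            return 0x1;
        default:
            /* 160 */
            return 0;
        }
    }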

> 
> 
> DHB
> 
> > 
> >> +    }
> >> +
> >> +    return numa_level;
> >> +}
> >> +
> >> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr,
> >> +                                                    MachineState *ms)
> > 
> > Passing ms seems to indicate that it could have a different value than spapr,
> > > which is certainly not true.
> > 
> > I'd rather make it a local variable:
> > 
> >      MachineState *ms = MACHINE(spapr);
> > 
> > > This is a slow path: we don't really care to do dynamic type checking
> > multiple times.
> > 
> >> +{
> >> +    int src, dst;
> >> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> >> +    NodeInfo *numa_info = ms->numa_state->nodes;
> >> +
> >> +    for (src = 0; src < nb_numa_nodes; src++) {
> >> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> >> +            /*
> >> +             * This is how the associativity domain between A and B
> >> +             * is calculated:
> >> +             *
> >> +             * - get the distance between them
> >> +             * - get the correspondent NUMA level for this distance
> >> +             * - the arrays were initialized with their own numa_ids,
> >> +             * and we're calculating the distance in node_id ascending order,
> >> +             * starting from node 0. This will have a cascade effect in the
> >> +             * algorithm because the associativity domains that node 0 defines
> >> +             * will be carried over to the other nodes, and node 1
> >> +             * associativities will be carried over unless there's already a
> >> +             * node 0 associativity assigned, and so on. This happens because
> >> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
> >> +             * the associativity domain of both, for the given NUMA level.
> >> +             *
> >> +             * The PPC kernel expects the associativity domains of node 0 to
> >> +             * be always 0, and this algorithm will grant that by default.
> >> +             */
> >> +            uint8_t distance = numa_info[src].distance[dst];
> >> +            uint8_t n_level = spapr_numa_get_NUMA_level(distance);
> >> +            uint32_t assoc_src, assoc_dst;
> >> +
> >> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
> >> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
> >> +
> >> +            if (assoc_src < assoc_dst) {
> >> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
> >> +            } else {
> >> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
> >> +            }
> >> +        }
> >> +    }
> >> +
> >> +}
> >> +
> >>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
> >>                                      MachineState *machine)
> >>   {
> >>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
> >>       int nb_numa_nodes = machine->numa_state->num_nodes;
> >>       int i, j, max_nodes_with_gpus;
> >> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
> >>   
> >>       /*
> >>        * For all associativity arrays: first position is the size,
> >> @@ -99,6 +166,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
> >>       for (i = 0; i < nb_numa_nodes; i++) {
> >>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
> >>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> >> +
> >> +        /*
> >> +         * Fill all associativity domains of the node with node_id.
> >> +         * This is required because the kernel makes valid associativity
> > 
> > It would be appreciated to have an URL to the corresponding code in the
> > changelog.
> > 
> >> +         * matches with the zeroes if we leave the matrix uninitialized.
> >> +         */
> >> +        if (!using_legacy_numa) {
> >> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> >> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> >> +            }
> >> +        }
> >>       }
> >>   
> >>       /*
> >> @@ -128,7 +206,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
> >>        * 1 NUMA node) will not benefit from anything we're going to do
> >>        * after this point.
> >>        */
> >> -    if (spapr_machine_using_legacy_numa(spapr)) {
> >> +    if (using_legacy_numa) {
> >>           return;
> >>       }
> >>   
> >> @@ -139,6 +217,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
> >>       }
> >>   
> >>       spapr_numa_PAPRify_distances(machine);
> >> +    spapr_numa_define_associativity_domains(spapr, machine);
> >>   }
> >>   
> >>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> > 



^ permalink raw reply	[flat|nested] 16+ messages in thread

end of thread, other threads:[~2020-09-24 11:34 UTC | newest]

Thread overview: 16+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-23 19:34 [PATCH 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
2020-09-23 19:34 ` [PATCH 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
2020-09-24  7:47   ` Greg Kurz
2020-09-23 19:34 ` [PATCH 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
2020-09-24  8:01   ` Greg Kurz
2020-09-24 11:23     ` Daniel Henrique Barboza
2020-09-23 19:34 ` [PATCH 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
2020-09-24  8:16   ` Greg Kurz
2020-09-24 11:18     ` Daniel Henrique Barboza
2020-09-23 19:34 ` [PATCH 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
2020-09-24  9:33   ` Greg Kurz
2020-09-23 19:34 ` [PATCH 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
2020-09-24 10:22   ` Greg Kurz
2020-09-24 11:21     ` Daniel Henrique Barboza
2020-09-24 11:32       ` Greg Kurz
2020-09-23 19:34 ` [PATCH 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
