All of lore.kernel.org
* [PATCH v4 0/5] pseries NUMA distance calculation
@ 2020-10-07 17:28 Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
                   ` (6 more replies)
  0 siblings, 7 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

This fourth version is based on review comments and suggestions
from David in v3.

changes from v3:
- patch 4:
    * copied the explanation in spapr_numa_define_associativity_domains()
      to the commit message
    * return numa_level directly instead of calculating a temp
      value in spapr_numa_get_numa_level()
    * we're now setting assoc_src in all n_levels above it in 
      spapr_numa_define_associativity_domains()
- patch 5:
    * changed the documentation as suggested by David

v3 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg10443.html

Daniel Henrique Barboza (5):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 capstone                      |   2 +-
 docs/specs/ppc-spapr-numa.rst | 235 ++++++++++++++++++++++++++++++++--
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 185 ++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 5 files changed, 419 insertions(+), 17 deletions(-)

-- 
2.26.2



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v4 1/5] spapr: add spapr_machine_using_legacy_numa() helper
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
@ 2020-10-07 17:28 ` Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 2/5] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, Greg Kurz, david

The changes to come to NUMA support are all guest visible. In
theory we could just create a new 5_1 class option flag to
avoid the changes cascading to 5.1 and older machines. The
reality is that these changes are only relevant if the machine
has more than one NUMA node. There is no need to needlessly
change guest behavior that has been around for years.

This new helper will be used by the next patches to determine
whether we should retain the (soon to be) legacy NUMA behavior
in the pSeries machine. The new behavior will only be exposed
if:

- machine is pseries-5.2 and newer;
- more than one NUMA node is declared in NUMA state.

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c         | 12 ++++++++++++
 include/hw/ppc/spapr.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 4256794f3b..63315f2d0f 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -294,6 +294,15 @@ static hwaddr spapr_node0_size(MachineState *machine)
     return machine->ram_size;
 }
 
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
+{
+    MachineState *machine = MACHINE(spapr);
+    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
+
+    return smc->pre_5_2_numa_associativity ||
+           machine->numa_state->num_nodes <= 1;
+}
+
 static void add_str(GString *s, const gchar *s1)
 {
     g_string_append_len(s, s1, strlen(s1) + 1);
@@ -4519,8 +4528,11 @@ DEFINE_SPAPR_MACHINE(5_2, "5.2", true);
  */
 static void spapr_machine_5_1_class_options(MachineClass *mc)
 {
+    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
+
     spapr_machine_5_2_class_options(mc);
     compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
+    smc->pre_5_2_numa_associativity = true;
 }
 
 DEFINE_SPAPR_MACHINE(5_1, "5.1", false);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index bba8736269..bb47896f17 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -138,6 +138,7 @@ struct SpaprMachineClass {
     bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
     hwaddr rma_limit;          /* clamp the RMA to this size */
     bool pre_5_1_assoc_refpoints;
+    bool pre_5_2_numa_associativity;
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio, 
@@ -853,6 +854,7 @@ int spapr_max_server_number(SpaprMachineState *spapr);
 void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
                       uint64_t pte0, uint64_t pte1);
 void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr);
 
 /* DRC callbacks. */
 void spapr_core_release(DeviceState *dev);
-- 
2.26.2




* [PATCH v4 2/5] spapr_numa: forbid asymmetrical NUMA setups
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
@ 2020-10-07 17:28 ` Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 3/5] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, Greg Kurz, david

The pSeries machine does not support asymmetrical NUMA
configurations. This doesn't make much of a difference
yet, since we're not using user input for the pSeries
NUMA setup, but that will change in the next patches.

To avoid breaking existing setups, gate this change by
checking for legacy NUMA support.
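For reference, the symmetry requirement enforced here boils down to checking that the distance matrix is equal to its own transpose. A minimal standalone sketch of that predicate (the function name, the MAX_NODES bound and the plain int matrix are illustrative assumptions for this sketch, not QEMU's actual NodeInfo layout):

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_NODES 8

/*
 * Hypothetical standalone version of the symmetry check: the
 * pSeries machine only accepts a distance matrix where
 * dist[src][dst] == dist[dst][src] for every pair of nodes.
 */
bool numa_matrix_is_symmetrical(int nb_nodes,
                                const int dist[MAX_NODES][MAX_NODES])
{
    for (int src = 0; src < nb_nodes; src++) {
        for (int dst = src; dst < nb_nodes; dst++) {
            if (dist[src][dst] != dist[dst][src]) {
                return false;   /* asymmetric pair found: reject */
            }
        }
    }
    return true;
}
```

A topology failing this predicate makes the machine refuse to start, as the patch does with error_report() followed by exit().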

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 64fe567f5d..fe395e80a3 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,24 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
+static bool spapr_numa_is_symmetrical(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            if (numa_info[src].distance[dst] !=
+                numa_info[dst].distance[src]) {
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
 
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
     }
+
+    /*
+     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
+     * 1 NUMA node) will not benefit from anything we're going to do
+     * after this point.
+     */
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        return;
+    }
+
+    if (!spapr_numa_is_symmetrical(machine)) {
+        error_report("Asymmetrical NUMA topologies aren't supported "
+                     "in the pSeries machine");
+        exit(EXIT_FAILURE);
+    }
+
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v4 3/5] spapr_numa: change reference-points and maxdomain settings
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 2/5] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-10-07 17:28 ` Daniel Henrique Barboza
  2020-10-07 17:28 ` [PATCH v4 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, Greg Kurz, david

This is the first guest visible change introduced in
spapr_numa.c. The previous settings of both reference-points
and maxdomains were too restrictive, although sufficient for
the associativity we're currently setting in the resources.

We'll change that in the following patches, populating the
associativity arrays based on user input. For those changes
to be effective, reference-points and maxdomains must be
more flexible. After this patch, we'll have 4 distinct
NUMA levels (0x4, 0x3, 0x2, 0x1), and maxdomains will
allow for any configuration the user intends, within the
scope and limitations of PAPR itself, of course.

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 43 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index fe395e80a3..16badb1f4b 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -178,24 +178,51 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
  */
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
+    MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
-        cpu_to_be32(0x4),
+        cpu_to_be32(0x3),
         cpu_to_be32(0x2),
+        cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
+    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
     uint32_t maxdomains[] = {
         cpu_to_be32(4),
-        maxdomain,
-        maxdomain,
-        maxdomain,
-        cpu_to_be32(spapr->gpu_numa_id),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain)
     };
 
-    if (smc->pre_5_1_assoc_refpoints) {
-        nr_refpoints = 2;
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        uint32_t legacy_refpoints[] = {
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x2),
+        };
+        uint32_t legacy_maxdomain = spapr->gpu_numa_id > 1 ? 1 : 0;
+        uint32_t legacy_maxdomains[] = {
+            cpu_to_be32(4),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(spapr->gpu_numa_id),
+        };
+
+        G_STATIC_ASSERT(sizeof(legacy_refpoints) <= sizeof(refpoints));
+        G_STATIC_ASSERT(sizeof(legacy_maxdomains) <= sizeof(maxdomains));
+
+        nr_refpoints = 3;
+
+        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
+        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));
+
+        /* pseries-5.0 and older reference-points array is {0x4, 0x4} */
+        if (smc->pre_5_1_assoc_refpoints) {
+            nr_refpoints = 2;
+        }
     }
 
     _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
-- 
2.26.2




* [PATCH v4 4/5] spapr_numa: consider user input when defining associativity
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (2 preceding siblings ...)
  2020-10-07 17:28 ` [PATCH v4 3/5] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-10-07 17:28 ` Daniel Henrique Barboza
  2020-10-12 17:44   ` Philippe Mathieu-Daudé
  2020-10-07 17:28 ` [PATCH v4 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

A new function called spapr_numa_define_associativity_domains()
is created to calculate the associativity domains and change
the associativity arrays considering user input. This is how
the associativity domain between two NUMA nodes A and B is
calculated:

- get the distance D between them

- get the correspondent NUMA level 'n_level' for D. This is done
via a helper called spapr_numa_get_numa_level()

- all associativity arrays were initialized with their own
numa_ids, and we're calculating the distance in node_id ascending
order, starting from node id 0 (the first node retrieved by
numa_state). This will have a cascade effect in the algorithm because
the associativity domains that node 0 defines will be carried over to
other nodes, and node 1 associativities will be carried over after
taking node 0 associativities into account, and so on. This
happens because we'll assign assoc_src as the associativity domain
of dst as well, for all NUMA levels beyond and including n_level.

The PPC kernel expects the associativity domains of the first node
(node id 0) to be always 0 [1], and this algorithm will grant that
by default.
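For illustration, the distance-to-level heuristic described above, together with the distance the guest kernel derives back from a level match, can be sketched as standalone functions (names are illustrative, not QEMU's internal API; the inclusive range boundaries follow the description, and the derived distance assumes the 4 reference points this series sets up):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of the heuristic: translate a user distance
 * into the NUMA level (0x4..0x1) it will match at, with 0 meaning
 * "no match". Boundaries are inclusive, as described above.
 */
uint8_t numa_level_for_distance(uint8_t distance)
{
    if (distance == 10) {
        return 4;                      /* local distance: no rounding */
    } else if (distance >= 11 && distance <= 30) {
        return 3;                      /* rounded to 20 */
    } else if (distance >= 31 && distance <= 60) {
        return 2;                      /* rounded to 40 */
    } else if (distance >= 61 && distance <= 120) {
        return 1;                      /* rounded to 80 */
    }
    return 0;                          /* no match: kernel derives 160 */
}

/*
 * Distance the guest kernel derives back from a match at a given
 * level: 10 (local) doubled once per unmatched level, assuming
 * the 4 reference points {0x4, 0x3, 0x2, 0x1}.
 */
uint8_t guest_seen_distance(uint8_t user_distance)
{
    return 10 << (4 - numa_level_for_distance(user_distance));
}
```

Any two user matrices whose entries fall within the same rounding bands collapse to the same guest_seen_distance() values, which is the approximation discussed in the next patch's documentation.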

Ultimately, all of this results in a best-effort approximation of
the actual NUMA distances the user entered in the command line.
Given the nature of how PAPR itself interprets NUMA distances
versus the expectations raised by how ACPI SLIT works, there
might be better algorithms but, in the end, they would also only
be another way to approximate what the user really wanted.

To keep this commit message from growing even longer, the next
patch will update the existing documentation in ppc-spapr-numa.rst
with more in-depth details and design considerations/drawbacks.

[1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 capstone            |   2 +-
 hw/ppc/spapr_numa.c | 110 +++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 110 insertions(+), 2 deletions(-)

diff --git a/capstone b/capstone
index f8b1b83301..22ead3e0bf 160000
--- a/capstone
+++ b/capstone
@@ -1 +1 @@
-Subproject commit f8b1b833015a4ae47110ed068e0deb7106ced66d
+Subproject commit 22ead3e0bfdb87516656453336160e0a37b066bf
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 16badb1f4b..b50796bbe3 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -37,12 +37,108 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
     return true;
 }
 
+/*
+ * This function translates the user-provided distances into
+ * the values the kernel understands as possible: 10
+ * (local distance), 20, 40, 80 and 160, and returns the
+ * equivalent NUMA level for each. The current heuristic is:
+ *  - local distance (10) returns numa_level = 0x4, meaning there is
+ *    no rounding for local distance
+ *  - distances between 11 and 30 inclusive -> rounded to 20,
+ *    numa_level = 0x3
+ *  - distances between 31 and 60 inclusive -> rounded to 40,
+ *    numa_level = 0x2
+ *  - distances between 61 and 120 inclusive -> rounded to 80,
+ *    numa_level = 0x1
+ *  - everything above 120 returns numa_level = 0 to indicate that
+ *    there is no match. This will be calculated as distance = 160
+ *    by the kernel (as of v5.9)
+ */
+static uint8_t spapr_numa_get_numa_level(uint8_t distance)
+{
+    if (distance == 10) {
+        return 0x4;
+    } else if (distance >= 11 && distance <= 30) {
+        return 0x3;
+    } else if (distance >= 31 && distance <= 60) {
+        return 0x2;
+    } else if (distance >= 61 && distance <= 120) {
+        return 0x1;
+    }
+
+    return 0;
+}
+
+static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
+{
+    MachineState *ms = MACHINE(spapr);
+    NodeInfo *numa_info = ms->numa_state->nodes;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    int src, dst, i;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            /*
+             * This is how the associativity domain between A and B
+             * is calculated:
+             *
+             * - get the distance D between them
+             * - get the correspondent NUMA level 'n_level' for D
+             * - all associativity arrays were initialized with their own
+             * numa_ids, and we're calculating the distance in node_id
+             * ascending order, starting from node id 0 (the first node
+             * retrieved by numa_state). This will have a cascade effect in
+             * the algorithm because the associativity domains that node 0
+             * defines will be carried over to other nodes, and node 1
+             * associativities will be carried over after taking node 0
+             * associativities into account, and so on. This happens because
+             * we'll assign assoc_src as the associativity domain of dst
+             * as well, for all NUMA levels beyond and including n_level.
+             *
+             * The PPC kernel expects the associativity domains of node 0 to
+             * be always 0, and this algorithm will grant that by default.
+             */
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t n_level = spapr_numa_get_numa_level(distance);
+            uint32_t assoc_src;
+
+            /*
+             * n_level = 0 means that the distance is greater than our last
+             * rounded value (120). In this case there is no NUMA level match
+             * between src and dst and we can skip the rest of the loop.
+             *
+             * The Linux kernel will assume that the distance between src and
+             * dst, in this case of no match, is 10 (local distance) doubled
+             * for each NUMA level it didn't match. We have MAX_DISTANCE_REF_POINTS
+             * levels (4), so this gives us 10*2*2*2*2 = 160.
+             *
+             * This logic can be seen in the Linux kernel source code, as of
+             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
+             */
+            if (n_level == 0) {
+                continue;
+            }
+
+            /*
+             * We must assign all assoc_src to dst, starting from n_level
+             * and going up to 0x1.
+             */
+            for (i = n_level; i > 0; i--) {
+                assoc_src = spapr->numa_assoc_array[src][i];
+                spapr->numa_assoc_array[dst][i] = assoc_src;
+            }
+        }
+    }
+
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int i, j, max_nodes_with_gpus;
+    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
 
     /*
      * For all associativity arrays: first position is the size,
@@ -56,6 +152,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     for (i = 0; i < nb_numa_nodes; i++) {
         spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+
+        /*
+         * Fill all associativity domains of non-zero NUMA nodes with
+         * node_id. This is required because the default value (0) is
+         * considered a match with associativity domains of node 0.
+         */
+        if (!using_legacy_numa && i != 0) {
+            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+            }
+        }
     }
 
     /*
@@ -85,7 +192,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
      * 1 NUMA node) will not benefit from anything we're going to do
      * after this point.
      */
-    if (spapr_machine_using_legacy_numa(spapr)) {
+    if (using_legacy_numa) {
         return;
     }
 
@@ -95,6 +202,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
         exit(EXIT_FAILURE);
     }
 
+    spapr_numa_define_associativity_domains(spapr);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v4 5/5] specs/ppc-spapr-numa: update with new NUMA support
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (3 preceding siblings ...)
  2020-10-07 17:28 ` [PATCH v4 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-10-07 17:28 ` Daniel Henrique Barboza
  2020-10-08  9:13 ` [PATCH v4 0/5] pseries NUMA distance calculation Greg Kurz
  2020-10-08 23:52 ` David Gibson
  6 siblings, 0 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-07 17:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, david

This update provides more in-depth information about the
choices and drawbacks of the new NUMA support for the
spapr machine.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 docs/specs/ppc-spapr-numa.rst | 235 ++++++++++++++++++++++++++++++++--
 1 file changed, 227 insertions(+), 8 deletions(-)

diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
index e762038022..5fca2bdd8e 100644
--- a/docs/specs/ppc-spapr-numa.rst
+++ b/docs/specs/ppc-spapr-numa.rst
@@ -158,9 +158,235 @@ kernel tree). This results in the following distances:
 * resources four NUMA levels apart: 160
 
 
-Consequences for QEMU NUMA tuning
+pseries NUMA mechanics
+======================
+
+Starting in QEMU 5.2, the pseries machine considers user input when setting the
+NUMA topology of the guest. The overall design is:
+
+* ibm,associativity-reference-points is set to {0x4, 0x3, 0x2, 0x1}, allowing
+  for 4 distinct NUMA distance values based on the NUMA levels
+
+* ibm,max-associativity-domains supports multiple associativity domains in all
+  NUMA levels, granting user flexibility
+
+* ibm,associativity for all resources varies with user input
+
+These changes are only effective for pseries-5.2 and newer machines that are
+created with more than one NUMA node (disregarding NUMA nodes created by
+the machine itself, e.g. NVLink 2 GPUs). The now legacy support has been
+around for such a long time, with users seeing NUMA distances 10 and 40
+(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
+existing experience of those guests.
+
+To give pseries users the NUMA tuning experience that x86 users already
+have, we had to operate under the current pseries Linux kernel logic
+described in `How the pseries Linux guest calculates NUMA distances`_.
+The result is that we needed to translate NUMA distance user input into
+pseries Linux kernel input.
+
+Translating user distance to kernel distance
+--------------------------------------------
+
+User input for NUMA distance can vary from 10 to 254. We need to translate
+that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
+This is how it is being done:
+
+* user distance 11 to 30 will be interpreted as 20
+* user distance 31 to 60 will be interpreted as 40
+* user distance 61 to 120 will be interpreted as 80
+* user distance 121 and beyond will be interpreted as 160
+* user distance 10 stays 10
+
+The reasoning behind this approximation is to avoid any rounding up to the
+local distance (10), keeping it exclusive to the 4th NUMA level (which is
+still exclusive to the node_id). All other ranges were chosen at the
+developer's discretion as (somewhat) sensible given the user input.
+Any other strategy can be used here, but in the end the reality is that we'll
+have to accept that a large array of values will be translated to the same
+NUMA topology in the guest, e.g. this user input:
+
+::
+
+      0   1   2
+  0  10  31 120
+  1  31  10  30
+  2 120  30  10
+
+And this other user input:
+
+::
+
+      0   1   2
+  0  10  60  61
+  1  60  10  11
+  2  61  11  10
+
+Will both be translated to the same values internally:
+
+::
+
+      0   1   2
+  0  10  40  80
+  1  40  10  20
+  2  80  20  10
+
+Users are encouraged to use only the kernel values in the NUMA definition to
+avoid being taken by surprise by what the guest is actually seeing in the
+topology. There are enough potential surprises inherent to the
+associativity domain assignment process, discussed below.
+
+
+How associativity domains are assigned
+--------------------------------------
+
+LOPAPR allows more than one associativity array (or 'string') per allocated
+resource. This would be used to represent that the resource has multiple
+connections with the board, and then the operating system, when deciding
+NUMA distancing, should consider the associativity information that provides
+the shortest distance.
+
+The spapr implementation does not support multiple associativity arrays per
+resource, and neither does the pseries Linux kernel. We'll have to represent the
+NUMA topology using one associativity per resource, which means that choices
+and compromises are going to be made.
+
+Consider the following NUMA topology entered by user input:
+
+::
+
+      0   1   2   3
+  0  10  40  20  40
+  1  40  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+All the associativity arrays are initialized with NUMA id in all associativity
+domains:
+
+* node 0: 0 0 0 0
+* node 1: 1 1 1 1
+* node 2: 2 2 2 2
+* node 3: 3 3 3 3
+
+
+Honoring just the relative distances of node 0 to every other node, we find the
+NUMA level matches (considering the reference points {0x4, 0x3, 0x2, 0x1}) for
+each distance:
+
+* distance from 0 to 1 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+
+We'll copy the associativity domains of node 0 to all other nodes, based on
+the NUMA level matches. Between 0 and 1, with a match at 0x2, we'll copy
+the domains at 0x2 and 0x1 from node 0 to node 1. This will give us:
+
+* node 0: 0 0 0 0
+* node 1: 0 0 1 1
+
+Doing the same to node 2 and node 3, these are the associativity arrays
+after considering all matches with node 0:
+
+* node 0: 0 0 0 0
+* node 1: 0 0 1 1
+* node 2: 0 0 0 2
+* node 3: 0 0 3 3
+
+The distances related to node 0 are accounted for. For node 1, and keeping
+in mind that we don't need to revisit node 0 again, the distance from
+node 1 to 2 is 80, matching at 0x1, and distance from 1 to 3 is 40,
+match in 0x2. Repeating the same logic of copying all domains up to
+the NUMA level match:
+
+* node 0: 0 0 0 0
+* node 1: 1 0 1 1
+* node 2: 1 0 0 2
+* node 3: 1 0 3 3
+
+In the last step we will analyze just nodes 2 and 3. The desired distance
+between 2 and 3 is 20, i.e. a match in 0x3:
+
+* node 0: 0 0 0 0
+* node 1: 1 0 1 1
+* node 2: 1 0 0 2
+* node 3: 1 0 0 3
+
+
+The kernel will read these arrays and will calculate the following NUMA topology for
+the guest:
+
+::
+
+      0   1   2   3
+  0  10  40  20  20
+  1  40  10  40  40
+  2  20  40  10  20
+  3  20  40  20  10
+
+Note that this is not what the user wanted - the desired distance between
+0 and 3 is 40, but we calculated it as 20. This is what the current logic and
+implementation constraints of the kernel and QEMU will provide inside the
+LOPAPR specification.
+
+Users are welcome to use this knowledge and experiment with the input to get
+the NUMA topology they want, or as close to it as they can. The important
+thing is to keep expectations in line with what we are capable of providing
+at this moment: an approximation.
+
+Limitations of the implementation
 ---------------------------------
 
+As mentioned above, the pSeries NUMA distance logic is, in fact, a way to approximate
+user choice. Neither the Linux kernel nor PAPR itself provides QEMU with the means
+to fully map user input to the actual NUMA distances the guest will use. These
+constraints create two notable limitations in our support:
+
+* Asymmetrical topologies aren't supported. We only support NUMA topologies where
+  the distance from node A to B is always the same as B to A. We do not support
+  any A-B pair where the distance back and forth is asymmetric. For example, the
+  following topology isn't supported and the pSeries guest will not boot with this
+  user input:
+
+::
+
+      0   1
+  0  10  40
+  1  20  10
+
+
+* 'non-transitive' topologies will be poorly translated to the guest. This is the
+  kind of topology where the distance from a node A to B is X, B to C is X, but
+  the distance A to C is not X. E.g.:
+
+::
+
+      0   1   2   3
+  0  10  20  20  40
+  1  20  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+  In the example above, distance 0 to 2 is 20, 2 to 3 is 20, but 0 to 3 is 40.
+  The kernel will always match with the shortest associativity domain possible,
+  and we're attempting to retain the previously established relations between the
+  nodes. This means that a distance equal to 20 between nodes 0 and 2 and the
+  same distance 20 between nodes 2 and 3 will cause the distance between 0 and 3
+  to also be 20.
+
+
+Legacy (5.1 and older) pseries NUMA mechanics
+=============================================
+
+In short, we can summarize the NUMA distances seen in pseries Linux guests, using
+QEMU up to 5.1, as follows:
+
+* local distance, i.e. the distance of the resource to its own NUMA node: 10
+* if it's a NVLink GPU device, distance: 80
+* every other resource, distance: 40
+
 The way the pseries Linux guest calculates NUMA distances has a direct effect
 on what QEMU users can expect when doing NUMA tuning. As of QEMU 5.1, this is
 the default ibm,associativity-reference-points being used in the pseries
@@ -180,12 +406,5 @@ as far as NUMA distance goes:
   to the same third NUMA level, having distance = 40
 * for NVLink GPUs, distance = 80 from everything else
 
-In short, we can summarize the NUMA distances seem in pseries Linux guests, using
-QEMU up to 5.1, as follows:
-
-* local distance, i.e. the distance of the resource to its own NUMA node: 10
-* if it's a NVLink GPU device, distance: 80
-* every other resource, distance: 40
-
 This also means that user input in QEMU command line does not change the
 NUMA distancing inside the guest for the pseries machine.
-- 
2.26.2




* Re: [PATCH v4 0/5] pseries NUMA distance calculation
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (4 preceding siblings ...)
  2020-10-07 17:28 ` [PATCH v4 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
@ 2020-10-08  9:13 ` Greg Kurz
  2020-10-08 11:07   ` Daniel Henrique Barboza
  2020-10-08 23:52 ` David Gibson
  6 siblings, 1 reply; 11+ messages in thread
From: Greg Kurz @ 2020-10-08  9:13 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Wed,  7 Oct 2020 14:28:44 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> This forth version is based on review comments and suggestion

Series for SLOF ? ;-) ;-) ;-)

> from David in v3.
> 
> changes from v3:
> - patch 4:
>     * copied the explanation in spapr_numa_define_associativity_domains()
>       to the commit message
>     * return numa_level directly instead of calculating a temp
>       value in spapr_numa_get_numa_level()
>     * we're now setting assoc_src in all n_levels above it in 
>       spapr_numa_define_associativity_domains()
> - patch 5:
>     * changed the documentation as suggested by David
> 
> v3 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg10443.html
> 
> Daniel Henrique Barboza (5):
>   spapr: add spapr_machine_using_legacy_numa() helper
>   spapr_numa: forbid asymmetrical NUMA setups
>   spapr_numa: change reference-points and maxdomain settings
>   spapr_numa: consider user input when defining associativity
>   specs/ppc-spapr-numa: update with new NUMA support
> 
>  capstone                      |   2 +-
>  docs/specs/ppc-spapr-numa.rst | 235 ++++++++++++++++++++++++++++++++--
>  hw/ppc/spapr.c                |  12 ++
>  hw/ppc/spapr_numa.c           | 185 ++++++++++++++++++++++++--
>  include/hw/ppc/spapr.h        |   2 +
>  5 files changed, 419 insertions(+), 17 deletions(-)
> 




* Re: [PATCH v4 0/5] pseries NUMA distance calculation
  2020-10-08  9:13 ` [PATCH v4 0/5] pseries NUMA distance calculation Greg Kurz
@ 2020-10-08 11:07   ` Daniel Henrique Barboza
  0 siblings, 0 replies; 11+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-08 11:07 UTC (permalink / raw)
  To: Greg Kurz; +Cc: qemu-ppc, qemu-devel, david



On 10/8/20 6:13 AM, Greg Kurz wrote:
> On Wed,  7 Oct 2020 14:28:44 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> This forth version is based on review comments and suggestion
> 
> Series for SLOF ? ;-) ;-) ;-)


hahahaha sad thing is that a typo in "fourth" is the only viable
way for me to send a Forth series. SLOF is too damn hard!




> 
>> from David in v3.
>>
>> changes from v3:
>> - patch 4:
>>      * copied the explanation in spapr_numa_define_associativity_domains()
>>        to the commit message
>>      * return numa_level directly instead of calculating a temp
>>        value in spapr_numa_get_numa_level()
>>      * we're now setting assoc_src in all n_levels above it in
>>        spapr_numa_define_associativity_domains()
>> - patch 5:
>>      * changed the documentation as suggested by David
>>
>> v3 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg10443.html
>>
>> Daniel Henrique Barboza (5):
>>    spapr: add spapr_machine_using_legacy_numa() helper
>>    spapr_numa: forbid asymmetrical NUMA setups
>>    spapr_numa: change reference-points and maxdomain settings
>>    spapr_numa: consider user input when defining associativity
>>    specs/ppc-spapr-numa: update with new NUMA support
>>
>>   capstone                      |   2 +-
>>   docs/specs/ppc-spapr-numa.rst | 235 ++++++++++++++++++++++++++++++++--
>>   hw/ppc/spapr.c                |  12 ++
>>   hw/ppc/spapr_numa.c           | 185 ++++++++++++++++++++++++--
>>   include/hw/ppc/spapr.h        |   2 +
>>   5 files changed, 419 insertions(+), 17 deletions(-)
>>
> 



* Re: [PATCH v4 0/5] pseries NUMA distance calculation
  2020-10-07 17:28 [PATCH v4 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (5 preceding siblings ...)
  2020-10-08  9:13 ` [PATCH v4 0/5] pseries NUMA distance calculation Greg Kurz
@ 2020-10-08 23:52 ` David Gibson
  6 siblings, 0 replies; 11+ messages in thread
From: David Gibson @ 2020-10-08 23:52 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel


On Wed, Oct 07, 2020 at 02:28:44PM -0300, Daniel Henrique Barboza wrote:
> This forth version is based on review comments and suggestion
> from David in v3.
> 
> changes from v3:
> - patch 4:
>     * copied the explanation in spapr_numa_define_associativity_domains()
>       to the commit message
>     * return numa_level directly instead of calculating a temp
>       value in spapr_numa_get_numa_level()
>     * we're now setting assoc_src in all n_levels above it in 
>       spapr_numa_define_associativity_domains()
> - patch 5:
>     * changed the documentation as suggested by David
> 
> v3 link:
> https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg10443.html

Applied to ppc-for-5.2, thanks.

> 
> Daniel Henrique Barboza (5):
>   spapr: add spapr_machine_using_legacy_numa() helper
>   spapr_numa: forbid asymmetrical NUMA setups
>   spapr_numa: change reference-points and maxdomain settings
>   spapr_numa: consider user input when defining associativity
>   specs/ppc-spapr-numa: update with new NUMA support
> 
>  capstone                      |   2 +-
>  docs/specs/ppc-spapr-numa.rst | 235 ++++++++++++++++++++++++++++++++--
>  hw/ppc/spapr.c                |  12 ++
>  hw/ppc/spapr_numa.c           | 185 ++++++++++++++++++++++++--
>  include/hw/ppc/spapr.h        |   2 +
>  5 files changed, 419 insertions(+), 17 deletions(-)
> 

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v4 4/5] spapr_numa: consider user input when defining associativity
  2020-10-07 17:28 ` [PATCH v4 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-10-12 17:44   ` Philippe Mathieu-Daudé
  2020-10-12 22:48     ` David Gibson
  0 siblings, 1 reply; 11+ messages in thread
From: Philippe Mathieu-Daudé @ 2020-10-12 17:44 UTC (permalink / raw)
  To: Daniel Henrique Barboza, qemu-devel; +Cc: qemu-ppc, david

On 10/7/20 7:28 PM, Daniel Henrique Barboza wrote:
> A new function called spapr_numa_define_associativity_domains()
> is created to calculate the associativity domains and change
> the associativity arrays considering user input. This is how
> the associativity domain between two NUMA nodes A and B is
> calculated:
> 
> - get the distance D between them
> 
> - get the correspondent NUMA level 'n_level' for D. This is done
> via a helper called spapr_numa_get_numa_level()
> 
> - all associativity arrays were initialized with their own
> numa_ids, and we're calculating the distance in node_id ascending
> order, starting from node id 0 (the first node retrieved by
> numa_state). This will have a cascade effect in the algorithm because
> the associativity domains that node 0 defines will be carried over to
> other nodes, and node 1 associativities will be carried over after
> taking node 0 associativities into account, and so on. This
> happens because we'll assign assoc_src as the associativity domain
> of dst as well, for all NUMA levels beyond and including n_level.
> 
> The PPC kernel expects the associativity domains of the first node
> (node id 0) to be always 0 [1], and this algorithm will grant that
> by default.
> 
> Ultimately, all of this results in a best effort approximation for
> the actual NUMA distances the user input in the command line. Given
> the nature of how PAPR itself interprets NUMA distances versus the
> expectations risen by how ACPI SLIT works, there might be better
> algorithms but, in the end, it'll also result in another way to
> approximate what the user really wanted.
> 
> To keep this commit message no longer than it already is, the next
> patch will update the existing documentation in ppc-spapr-numa.rst
> with more in depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>   capstone            |   2 +-
>   hw/ppc/spapr_numa.c | 110 +++++++++++++++++++++++++++++++++++++++++++-
>   2 files changed, 110 insertions(+), 2 deletions(-)
> 
> diff --git a/capstone b/capstone
> index f8b1b83301..22ead3e0bf 160000
> --- a/capstone
> +++ b/capstone
> @@ -1 +1 @@
> -Subproject commit f8b1b833015a4ae47110ed068e0deb7106ced66d
> +Subproject commit 22ead3e0bfdb87516656453336160e0a37b066bf

Certainly unrelated to your patch.



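[Editorial aside: the cascading associativity-domain assignment described in the commit message quoted above can be sketched roughly as follows. The helper names, the four-level assumption, and the distance-to-level mapping are all hypothetical here; the actual implementation is the C code this patch adds to hw/ppc/spapr_numa.c.]

```python
MAX_LEVELS = 4  # assumed number of associativity levels below the root

def numa_level_for(distance):
    # Hypothetical distance -> NUMA level mapping; 4 is the closest
    # match (local), 0 means only the root domain is shared.
    return {10: 4, 20: 3, 40: 2, 80: 1}.get(distance, 0)

def define_associativity_domains(distances):
    n = len(distances)
    # Every node starts in its own domain at every level; index 0 is
    # the most distant level.
    assoc = [[node] * MAX_LEVELS for node in range(n)]
    for src in range(n):
        for dst in range(src + 1, n):
            level = numa_level_for(distances[src][dst])
            # Cascade: dst inherits src's domains at 'level' levels,
            # counting from the most distant one.  Node 0 is processed
            # first, so its domains (all zeroes) propagate outward,
            # which satisfies the PPC kernel expectation that node 0's
            # associativity domains are always 0.
            for lvl in range(level):
                assoc[dst][lvl] = assoc[src][lvl]
    return assoc

# Example reproducing the approximation effect mentioned in the docs:
# distance 20 between nodes 0-2 and between 2-3 leaves 0 and 3 sharing
# three associativity levels too, i.e. they also look distance-20 apart.
dists = [[10, 20, 20, 40],
         [20, 10, 40, 40],
         [20, 40, 10, 20],
         [40, 40, 20, 10]]
```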

* Re: [PATCH v4 4/5] spapr_numa: consider user input when defining associativity
  2020-10-12 17:44   ` Philippe Mathieu-Daudé
@ 2020-10-12 22:48     ` David Gibson
  0 siblings, 0 replies; 11+ messages in thread
From: David Gibson @ 2020-10-12 22:48 UTC (permalink / raw)
  To: Philippe Mathieu-Daudé; +Cc: Daniel Henrique Barboza, qemu-ppc, qemu-devel


On Mon, Oct 12, 2020 at 07:44:14PM +0200, Philippe Mathieu-Daudé wrote:
> On 10/7/20 7:28 PM, Daniel Henrique Barboza wrote:
> > A new function called spapr_numa_define_associativity_domains()
> > is created to calculate the associativity domains and change
> > the associativity arrays considering user input. This is how
> > the associativity domain between two NUMA nodes A and B is
> > calculated:
> > 
> > - get the distance D between them
> > 
> > - get the correspondent NUMA level 'n_level' for D. This is done
> > via a helper called spapr_numa_get_numa_level()
> > 
> > - all associativity arrays were initialized with their own
> > numa_ids, and we're calculating the distance in node_id ascending
> > order, starting from node id 0 (the first node retrieved by
> > numa_state). This will have a cascade effect in the algorithm because
> > the associativity domains that node 0 defines will be carried over to
> > other nodes, and node 1 associativities will be carried over after
> > taking node 0 associativities into account, and so on. This
> > happens because we'll assign assoc_src as the associativity domain
> > of dst as well, for all NUMA levels beyond and including n_level.
> > 
> > The PPC kernel expects the associativity domains of the first node
> > (node id 0) to be always 0 [1], and this algorithm will grant that
> > by default.
> > 
> > Ultimately, all of this results in a best effort approximation for
> > the actual NUMA distances the user input in the command line. Given
> > the nature of how PAPR itself interprets NUMA distances versus the
> > expectations risen by how ACPI SLIT works, there might be better
> > algorithms but, in the end, it'll also result in another way to
> > approximate what the user really wanted.
> > 
> > To keep this commit message no longer than it already is, the next
> > patch will update the existing documentation in ppc-spapr-numa.rst
> > with more in depth details and design considerations/drawbacks.
> > 
> > [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> > 
> > Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> > ---
> >   capstone            |   2 +-
> >   hw/ppc/spapr_numa.c | 110 +++++++++++++++++++++++++++++++++++++++++++-
> >   2 files changed, 110 insertions(+), 2 deletions(-)
> > 
> > diff --git a/capstone b/capstone
> > index f8b1b83301..22ead3e0bf 160000
> > --- a/capstone
> > +++ b/capstone
> > @@ -1 +1 @@
> > -Subproject commit f8b1b833015a4ae47110ed068e0deb7106ced66d
> > +Subproject commit 22ead3e0bfdb87516656453336160e0a37b066bf
> 
> Certainly unrelated to your patch.

Yeah, found and fixed that one already.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



