* [PATCH v3 0/5] pseries NUMA distance calculation
@ 2020-09-29 13:38 Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This third version is based on review comments and suggestions
from David.

changes from v2:
- patch 3 from v2: removed
- patch 2:
    * added David's R-b
- patch 3 (former 4 in v2):
    * added Greg and David R-bs
    * added G_STATIC_ASSERT() before memcpy()
- patch 4 (former 5 in v2):
    * clarified what 'node 0' means in the commit msg
    * rewrote a bit to clarify what the logic does
    * the translation of user distance to PAPR distance logic,
      previously presented in the former patch 3 in v2, was folded
      into spapr_numa_get_numa_level()
    * removed needless 'if' when assigning associativities in
      spapr_numa_define_associativity_domains()
- patch 5 (former 6 in v2):
    * moved the section describing the current logic up
    * created a new 'legacy' section describing the pre-5.2
      behavior
    * added a 'known limitations' section documenting that
      we don't support asymmetrical topologies and that we do a
      poor job of approximating 'non-transitive' topologies

v2 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg09073.html

Daniel Henrique Barboza (5):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 docs/specs/ppc-spapr-numa.rst | 206 +++++++++++++++++++++++++++++++++-
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 195 ++++++++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 4 files changed, 406 insertions(+), 9 deletions(-)

-- 
2.26.2




* [PATCH v3 1/5] spapr: add spapr_machine_using_legacy_numa() helper
  2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
@ 2020-09-29 13:38 ` Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 2/5] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The upcoming changes to NUMA support are all guest visible. In
theory we could just create a new 5_1 class option flag to
keep the changes from cascading down to 5.1 and older machines.
The reality is that these changes are only relevant if the machine
has more than one NUMA node. There is no need to needlessly change
guest behavior that has been around for years.

This new helper will be used by the next patches to determine
whether we should retain the (soon to be) legacy NUMA behavior
in the pSeries machine. The new behavior will only be exposed
if:

- machine is pseries-5.2 and newer;
- more than one NUMA node is declared in NUMA state.
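
As an illustration, the following patches are expected to gate the new
code paths with something like this (simplified sketch of the intended
usage):

    if (spapr_machine_using_legacy_numa(spapr)) {
        /* pseries-5.1 and older, or a single NUMA node: keep the
         * existing guest-visible behavior untouched */
        return;
    }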

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c         | 12 ++++++++++++
 include/hw/ppc/spapr.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index e813c7cfb9..c5d8910a74 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -294,6 +294,15 @@ static hwaddr spapr_node0_size(MachineState *machine)
     return machine->ram_size;
 }
 
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
+{
+    MachineState *machine = MACHINE(spapr);
+    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
+
+    return smc->pre_5_2_numa_associativity ||
+           machine->numa_state->num_nodes <= 1;
+}
+
 static void add_str(GString *s, const gchar *s1)
 {
     g_string_append_len(s, s1, strlen(s1) + 1);
@@ -4522,8 +4531,11 @@ DEFINE_SPAPR_MACHINE(5_2, "5.2", true);
  */
 static void spapr_machine_5_1_class_options(MachineClass *mc)
 {
+    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
+
     spapr_machine_5_2_class_options(mc);
     compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
+    smc->pre_5_2_numa_associativity = true;
 }
 
 DEFINE_SPAPR_MACHINE(5_1, "5.1", false);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 114e819969..d1aae03b97 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -143,6 +143,7 @@ struct SpaprMachineClass {
     bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
     hwaddr rma_limit;          /* clamp the RMA to this size */
     bool pre_5_1_assoc_refpoints;
+    bool pre_5_2_numa_associativity;
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio, 
@@ -860,6 +861,7 @@ int spapr_max_server_number(SpaprMachineState *spapr);
 void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
                       uint64_t pte0, uint64_t pte1);
 void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr);
 
 /* DRC callbacks. */
 void spapr_core_release(DeviceState *dev);
-- 
2.26.2




* [PATCH v3 2/5] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
@ 2020-09-29 13:38 ` Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 3/5] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The pSeries machine does not support asymmetrical NUMA
configurations. This doesn't make much of a difference
today, since we're not using user input for the pSeries NUMA
setup, but this will change in the next patches.

To avoid breaking existing setups, gate this change by
checking for legacy NUMA support.
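
For example, an asymmetrical setup along these lines (illustrative
command-line fragment using the -numa dist option, other options
omitted) will now be rejected at startup:

    -numa node,nodeid=0 -numa node,nodeid=1 \
    -numa dist,src=0,dst=1,val=40 \
    -numa dist,src=1,dst=0,val=20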

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 64fe567f5d..fe395e80a3 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,24 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
+static bool spapr_numa_is_symmetrical(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            if (numa_info[src].distance[dst] !=
+                numa_info[dst].distance[src]) {
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
 
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
     }
+
+    /*
+     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
+     * 1 NUMA node) will not benefit from anything we're going to do
+     * after this point.
+     */
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        return;
+    }
+
+    if (!spapr_numa_is_symmetrical(machine)) {
+        error_report("Asymmetrical NUMA topologies aren't supported "
+                     "in the pSeries machine");
+        exit(EXIT_FAILURE);
+    }
+
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v3 3/5] spapr_numa: change reference-points and maxdomain settings
  2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 2/5] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-09-29 13:38 ` Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
  2020-09-29 13:38 ` [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  4 siblings, 0 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This is the first guest-visible change introduced in
spapr_numa.c. The previous settings of both reference-points
and maxdomains were too restrictive, although sufficient for the
existing associativity we were setting in the resources.

We'll change that in the following patches, populating the
associativity arrays based on user input. For those changes
to be effective, reference-points and maxdomains must be
more flexible. After this patch, we'll have 4 distinct
NUMA levels (0x4, 0x3, 0x2, 0x1), and maxdomains will
allow for any type of configuration the user intends,
under the scope and limitations of PAPR itself, of
course.

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 43 +++++++++++++++++++++++++++++++++++--------
 1 file changed, 35 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index fe395e80a3..16badb1f4b 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -178,24 +178,51 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
  */
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
+    MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
-        cpu_to_be32(0x4),
+        cpu_to_be32(0x3),
         cpu_to_be32(0x2),
+        cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
+    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
     uint32_t maxdomains[] = {
         cpu_to_be32(4),
-        maxdomain,
-        maxdomain,
-        maxdomain,
-        cpu_to_be32(spapr->gpu_numa_id),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain)
     };
 
-    if (smc->pre_5_1_assoc_refpoints) {
-        nr_refpoints = 2;
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        uint32_t legacy_refpoints[] = {
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x2),
+        };
+        uint32_t legacy_maxdomain = spapr->gpu_numa_id > 1 ? 1 : 0;
+        uint32_t legacy_maxdomains[] = {
+            cpu_to_be32(4),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(spapr->gpu_numa_id),
+        };
+
+        G_STATIC_ASSERT(sizeof(legacy_refpoints) <= sizeof(refpoints));
+        G_STATIC_ASSERT(sizeof(legacy_maxdomains) <= sizeof(maxdomains));
+
+        nr_refpoints = 3;
+
+        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
+        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));
+
+        /* pseries-5.0 and older reference-points array is {0x4, 0x4} */
+        if (smc->pre_5_1_assoc_refpoints) {
+            nr_refpoints = 2;
+        }
     }
 
     _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
-- 
2.26.2




* [PATCH v3 4/5] spapr_numa: consider user input when defining associativity
  2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (2 preceding siblings ...)
  2020-09-29 13:38 ` [PATCH v3 3/5] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-09-29 13:38 ` Daniel Henrique Barboza
  2020-10-02  2:24   ` David Gibson
  2020-09-29 13:38 ` [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  4 siblings, 1 reply; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This patch puts all the pieces together to finally allow user
input when defining the NUMA topology of the spapr guest.

For each NUMA node A, starting at node id 0, the new
spapr_numa_define_associativity_domains() will:

- get the distance between node A and node B, starting with B = A + 1
- get the corresponding NUMA level for this distance
- assign the associativity domain for A and B at the given
NUMA level, using the lowest associativity domain value between
them
- if there are more NUMA nodes, increment B and repeat
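
As a worked example, take nodes 0 and 1 with a user distance of 40
between them: 40 maps to NUMA level 0x2, node 1 inherits node 0's
associativity domain (0) at that level, and the guest kernel will
read the distance back as 40.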

Since we always start at the first node (id = 0) and go in
ascending order, we are prioritizing any previous associativity
already calculated. This is necessary because neither QEMU nor
the pSeries kernel supports multiple associativity domains for
each resource, meaning that we have to decide which associativity
relation is relevant. Another side effect is that the first
NUMA node, node 0, will always have an associativity array
full of zeroes. This is intended - in fact, the Linux kernel
expects it (see [1] for more info).

Ultimately, all of this results in a best-effort approximation of
the NUMA distances the user entered on the command line. Given
the nature of how PAPR itself interprets NUMA distances versus the
expectations raised by how ACPI SLIT works, there might be better
algorithms, but in the end they would also just be another way to
approximate what the user really wanted.

To keep this commit message from getting any longer than it already is, the next
patch will update the existing documentation in ppc-spapr-numa.rst
with more in depth details and design considerations/drawbacks.

[1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 120 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 119 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 16badb1f4b..f3d43ceb1e 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -37,12 +37,118 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
     return true;
 }
 
+/*
+ * This function will translate the user distances into
+ * what the kernel understands as possible values: 10
+ * (local distance), 20, 40, 80 and 160, and return the equivalent
+ * NUMA level for each. Current heuristic is:
+ *  - local distance (10) returns numa_level = 0x4
+ *  - distances between 11 and 30 inclusive -> rounded to 20,
+ *    numa_level = 0x3
+ *  - distances between 31 and 60 inclusive -> rounded to 40,
+ *    numa_level = 0x2
+ *  - distances between 61 and 120 inclusive -> rounded to 80,
+ *    numa_level = 0x1
+ *  - everything above 120 returns numa_level = 0 to indicate that
+ *    there is no match. This will be calculated as distance = 160
+ *    by the kernel (as of v5.9)
+ */
+static uint8_t spapr_numa_get_numa_level(uint8_t distance)
+{
+    uint8_t rounded_distance = 160;
+    uint8_t numa_level;
+
+    if (distance > 11 && distance <= 30) {
+        rounded_distance = 20;
+    } else if (distance > 31 && distance <= 60) {
+        rounded_distance = 40;
+    } else if (distance > 61 && distance <= 120) {
+        rounded_distance = 80;
+    }
+
+    switch (rounded_distance) {
+    case 10:
+        numa_level = 0x4;
+        break;
+    case 20:
+        numa_level = 0x3;
+        break;
+    case 40:
+        numa_level = 0x2;
+        break;
+    case 80:
+        numa_level = 0x1;
+        break;
+    default:
+        numa_level = 0;
+    }
+
+    return numa_level;
+}
+
+static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
+{
+    MachineState *ms = MACHINE(spapr);
+    NodeInfo *numa_info = ms->numa_state->nodes;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    int src, dst;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            /*
+             * This is how the associativity domain between A and B
+             * is calculated:
+             *
+             * - get the distance between them
+             * - get the correspondent NUMA level for this distance
+             * - the arrays were initialized with their own numa_ids,
+             * and we're calculating the distance in node_id ascending order,
+             * starting from node 0. This will have a cascade effect in the
+             * algorithm because the associativity domains that node 0 defines
+             * will be carried over to the other nodes, and node 1
+             * associativities will be carried over unless there's already a
+             * node 0 associativity assigned, and so on. This happens because
+             * we'll assign assoc_src as the associativity domain of dst
+             * as well, for the given NUMA level.
+             *
+             * The PPC kernel expects the associativity domains of node 0 to
+             * be always 0, and this algorithm will grant that by default.
+             */
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t n_level = spapr_numa_get_numa_level(distance);
+            uint32_t assoc_src;
+
+            /*
+             * n_level = 0 means that the distance is greater than our last
+             * rounded value (120). In this case there is no NUMA level match
+             * between src and dst and we can skip the rest of the loop.
+             *
+             * The Linux kernel will assume that the distance between src and
+             * dst, in this case of no match, is 10 (local distance) doubled
+             * for each NUMA level it didn't match. We have MAX_DISTANCE_REF_POINTS
+             * levels (4), so this gives us 10*2*2*2*2 = 160.
+             *
+             * This logic can be seen in the Linux kernel source code, as of
+             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
+             */
+            if (n_level == 0) {
+                continue;
+            }
+
+            assoc_src = spapr->numa_assoc_array[src][n_level];
+            spapr->numa_assoc_array[dst][n_level] = assoc_src;
+        }
+    }
+
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int i, j, max_nodes_with_gpus;
+    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
 
     /*
      * For all associativity arrays: first position is the size,
@@ -56,6 +162,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     for (i = 0; i < nb_numa_nodes; i++) {
         spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+
+        /*
+         * Fill all associativity domains of non-zero NUMA nodes with
+         * node_id. This is required because the default value (0) is
+         * considered a match with associativity domains of node 0.
+         */
+        if (!using_legacy_numa && i != 0) {
+            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+            }
+        }
     }
 
     /*
@@ -85,7 +202,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
      * 1 NUMA node) will not benefit from anything we're going to do
      * after this point.
      */
-    if (spapr_machine_using_legacy_numa(spapr)) {
+    if (using_legacy_numa) {
         return;
     }
 
@@ -95,6 +212,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
         exit(EXIT_FAILURE);
     }
 
+    spapr_numa_define_associativity_domains(spapr);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support
  2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (3 preceding siblings ...)
  2020-09-29 13:38 ` [PATCH v3 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-09-29 13:38 ` Daniel Henrique Barboza
  2020-10-02  3:16   ` David Gibson
  4 siblings, 1 reply; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-29 13:38 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This update provides more in-depth information about the
choices and drawbacks of the new NUMA support for the
spapr machine.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 docs/specs/ppc-spapr-numa.rst | 206 +++++++++++++++++++++++++++++++++-
 1 file changed, 205 insertions(+), 1 deletion(-)

diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
index e762038022..6dd13bf97b 100644
--- a/docs/specs/ppc-spapr-numa.rst
+++ b/docs/specs/ppc-spapr-numa.rst
@@ -158,9 +158,213 @@ kernel tree). This results in the following distances:
 * resources four NUMA levels apart: 160
 
 
-Consequences for QEMU NUMA tuning
+pseries NUMA mechanics
+======================
+
+Starting in QEMU 5.2, the pseries machine considers user input when setting the
+NUMA topology of the guest. The following changes were made:
+
+* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
+  for 4 distinct NUMA distance values based on the NUMA levels
+
+* ibm,max-associativity-domains was changed to support multiple associativity
+  domains in all NUMA levels. This is needed to ensure user flexibility
+
+* ibm,associativity for all resources now varies with user input
+
+These changes are only effective for pseries-5.2 and newer machines that are
+created with more than one NUMA node (not counting NUMA nodes created by
+the machine itself, e.g. NVLink 2 GPUs). The now-legacy support has been
+around for a long time, with users seeing NUMA distances 10 and 40
+(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
+existing experience of those guests.
+
+To bring pseries guests closer to the user experience x86 users have when
+tuning NUMA, we had to operate under the current pseries Linux kernel logic
+described in `How the pseries Linux guest calculates NUMA distances`_. The
+result is that we needed to translate NUMA distance user input into pseries
+Linux kernel input.
+
+Translating user distance to kernel distance
+--------------------------------------------
+
+User input for NUMA distance can vary from 10 to 254. We need to translate
+that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
+This is how it is being done:
+
+* user distance 11 to 30 will be interpreted as 20
+* user distance 31 to 60 will be interpreted as 40
+* user distance 61 to 120 will be interpreted as 80
+* user distance 121 and beyond will be interpreted as 160
+* user distance 10 stays 10
+
+The reasoning behind this approximation is to avoid rounding anything to the
+local distance (10), keeping it exclusive to the 4th NUMA level (which is still
+exclusive to the node_id). All other ranges were chosen at the developer's
+discretion, based on what seemed (somewhat) sensible given the user input.
+Any other strategy can be used here, but in the end the reality is that we'll
+have to accept that a wide range of values will be translated to the same
+NUMA topology in the guest, e.g. this user input:
+
+::
+
+      0   1   2
+  0  10  31 120
+  1  31  10  30
+  2 120  30  10
+
+And this other user input:
+
+::
+
+      0   1   2
+  0  10  60  61
+  1  60  10  11
+  2  61  11  10
+
+Will both be translated to the same values internally:
+
+::
+
+      0   1   2
+  0  10  40  80
+  1  40  10  20
+  2  80  20  10
+
+Users are encouraged to use only the kernel values in the NUMA definition to
+avoid being taken by surprise by what the guest is actually seeing in the
+topology. There are already enough potential surprises inherent in the
+associativity domain assignment process, discussed below.
+
+
+How associativity domains are assigned
+--------------------------------------
+
+LOPAPR allows more than one associativity array (or 'string') per allocated
+resource. This would be used to represent that the resource has multiple
+connections with the board, and then the operating system, when deciding
+NUMA distances, should consider the associativity information that provides
+the shortest distance.
+
+The spapr implementation does not support multiple associativity arrays per
+resource, and neither does the pseries Linux kernel. We'll have to represent the
+NUMA topology using one associativity per resource, which means that choices
+and compromises are going to be made.
+
+Consider the following NUMA topology entered by user input:
+
+::
+
+      0   1   2   3
+  0  10  40  20  40
+  1  40  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+Honoring just the relative distances of node 0 to every other node, one possible
+value for all associativity arrays would be:
+
+* node 0: 0 0 0 0
+* node 1: 1 0 1 1
+* node 2: 2 2 0 2
+* node 3: 3 0 3 3
+
+With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
+
+* distance from 0 to 1 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+
+The distances related to node 0 are accounted for. For node 1, and keeping
+in mind that we don't need to revisit node 0 again, the distance from
+node 1 to 2 is 80, matching at 0x1, and the distance from 1 to 3 is 40,
+a match at 0x2:
+
+* node 0: 0 0 0 0
+* node 1: 1 0 1 1
+* node 2: 1 2 0 2
+* node 3: 3 0 3 3
+
+In the last step we will analyze just nodes 2 and 3. The desired distance
+between 2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a
+domain assigned in 0x3, 0. We'll preserve it to avoid dissolving the
+association between node 0 and node 2, and use it as node 3's domain
+at 0x3 as well:
+
+* node 0: 0 0 0 0
+* node 1: 1 0 1 1
+* node 2: 1 2 0 2
+* node 3: 3 0 0 3
+
+
+The kernel will read these arrays and will calculate the following NUMA topology for
+the guest:
+
+::
+
+      0   1   2   3
+  0  10  40  20  20
+  1  40  10  80  40
+  2  20  80  10  20
+  3  20  40  20  10
+
+Note that this is not what the user wanted - the desired distance between
+0 and 3 is 40, but we ended up calculating it as 20. This is what the current
+logic and implementation constraints of the kernel and QEMU can provide
+within the LOPAPR specification.
+
+
+Users are welcome to use this knowledge and experiment with the input to get
+the NUMA topology they want, or as close to it as they can get. The important
+thing is to keep expectations in line with what we are capable of providing at
+this moment: an approximation.
+
+Limitations of the implementation
 ---------------------------------
 
+As mentioned above, the pSeries NUMA distance logic is, in fact, a way to approximate
+user choice. Neither the Linux kernel nor PAPR itself provides QEMU with a way
+to fully map user input to the actual NUMA distances the guest will use. These
+constraints create two notable limitations in our support:
+
+* Asymmetrical topologies aren't supported. We only support NUMA topologies where
+  the distance from node A to B is always the same as from B to A; any A-B pair
+  where the distances differ is rejected. For example, the
+  following topology isn't supported and the pSeries guest will not boot with this
+  user input:
+
+::
+
+      0   1
+  0  10  40
+  1  20  10
+
+
+* 'non-transitive' topologies will be poorly translated to the guest. This is the
+  kind of topology where the distance from a node A to B is X, B to C is X, but
+  the distance A to C is not X. E.g.:
+
+::
+
+      0   1   2   3
+  0  10  20  20  40
+  1  20  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+  In the example above, distance 0 to 2 is 20, 2 to 3 is 20, but 0 to 3 is 40.
+  The kernel will always match with the shortest associativity domain possible,
+  and we're attempting to retain the previously established relations between the
+  nodes. This means that a distance equal to 20 between nodes 0 and 2 and the
+  same distance 20 between nodes 2 and 3 will cause the distance between 0 and 3
+  to also be 20.
+
+
+Legacy (5.1 and older) pseries NUMA mechanics
+=============================================
+
 The way the pseries Linux guest calculates NUMA distances has a direct effect
 on what QEMU users can expect when doing NUMA tuning. As of QEMU 5.1, this is
 the default ibm,associativity-reference-points being used in the pseries
-- 
2.26.2




* Re: [PATCH v3 4/5] spapr_numa: consider user input when defining associativity
  2020-09-29 13:38 ` [PATCH v3 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-10-02  2:24   ` David Gibson
  2020-10-02 13:02     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 9+ messages in thread
From: David Gibson @ 2020-10-02  2:24 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Tue, Sep 29, 2020 at 10:38:16AM -0300, Daniel Henrique Barboza wrote:
> This patch puts all the pieces together to finally allow user
> input when defining the NUMA topology of the spapr guest.
> 
> For each NUMA node A, starting at node id 0, the new
> spapr_numa_define_associativity_domains() will:
> 
> - get the distance between node A and B = A + 1
> - get the correspondent NUMA level for this distance
> - assign the associativity domain for A and B for the given
> NUMA level, using the lowest associativity domain value between
> them
> - if there is more NUMA nodes, increment B and repeat

I still find this description very confusing.  The one in the comment
is better, I think, can you maybe copy that one here.

> Since we always start at the first node (id = 0) and go in
> ascending order, we are prioritizing any previous associativity
> already calculated. This is necessary because neither QEMU, nor
> the pSeries kernel, supports multiple associativity domains for
> each resource, meaning that we have to decide which associativity
> relation is relevant. Another side effect is that the first
> NUMA node, node 0, will always have an associativity array
> full of zeroes. This is intended - in fact, the Linux kernel
> expects it (see [1] for more info).
> 
> Ultimately, all of this results in a best effort approximation for
> the actual NUMA distances the user input in the command line. Given
> the nature of how PAPR itself interprets NUMA distances versus the
> expectations risen by how ACPI SLIT works, there might be better
> algorithms but, in the end, it'll also result in another way to
> approximate what the user really wanted.
> 
> To keep this commit message no longer than it already is, the next
> patch will update the existing documentation in ppc-spapr-numa.rst
> with more in depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 120 +++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 119 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 16badb1f4b..f3d43ceb1e 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -37,12 +37,118 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>      return true;
>  }
>  
> +/*
> + * This function will translate the user distances into
> + * what the kernel understand as possible values: 10
> + * (local distance), 20, 40, 80 and 160, and return the equivalent
> + * NUMA level for each. Current heuristic is:
> + *  - local distance (10) returns numa_level = 0x4
> + *  - distances between 11 and 30 inclusive -> rounded to 20,
> + *    numa_level = 0x3
> + *  - distances between 31 and 60 inclusive -> rounded to 40,
> + *    numa_level = 0x2
> + *  - distances between 61 and 120 inclusive -> rounded to 80,
> + *    numa_level = 0x1
> + *  - everything above 120 returns numa_level = 0 to indicate that
> + *    there is no match. This will be calculated as disntace = 160
> + *    by the kernel (as of v5.9)
> + */
> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
> +{
> +    uint8_t rounded_distance = 160;
> +    uint8_t numa_level;
> +
> +    if (distance > 11 && distance <= 30) {
> +        rounded_distance = 20;
> +    } else if (distance > 31 && distance <= 60) {
> +        rounded_distance = 40;
> +    } else if (distance > 61 && distance <= 120) {
> +        rounded_distance = 80;
> +    }
> +
> +    switch (rounded_distance) {
> +    case 10:
> +        numa_level = 0x4;
> +        break;

Uh.. you could just return the numa_level from the if-else chain
without going via rounded_distance.  (You could put the rounded
distances in comments on each if clause, if you like).
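
Something like this, perhaps (untested sketch; note it uses the
inclusive bounds from the function comment):

    static uint8_t spapr_numa_get_numa_level(uint8_t distance)
    {
        if (distance == 10) {                          /* local */
            return 0x4;
        } else if (distance > 10 && distance <= 30) {  /* ~20 */
            return 0x3;
        } else if (distance <= 60) {                   /* ~40 */
            return 0x2;
        } else if (distance <= 120) {                  /* ~80 */
            return 0x1;
        }

        return 0;   /* no match, the kernel will compute 160 */
    }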

> +    case 20:
> +        numa_level = 0x3;
> +        break;
> +    case 40:
> +        numa_level = 0x2;
> +        break;
> +    case 80:
> +        numa_level = 0x1;
> +        break;
> +    default:
> +        numa_level = 0;
> +    }
> +
> +    return numa_level;
> +}
> +
> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    int src, dst;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * This is how the associativity domain between A and B
> +             * is calculated:
> +             *
> +             * - get the distance between them
> +             * - get the correspondent NUMA level for this distance
> +             * - the arrays were initialized with their own numa_ids,
> +             * and we're calculating the distance in node_id ascending order,
> +             * starting from node 0. This will have a cascade effect in the
> +             * algorithm because the associativity domains that node 0 defines
> +             * will be carried over to the other nodes, and node 1
> +             * associativities will be carried over unless there's already a
> +             * node 0 associativity assigned, and so on. This happens because
> +             * we'll assign assoc_src as the associativity domain of dst
> +             * as well, for the given NUMA level.
> +             *
> +             * The PPC kernel expects the associativity domains of node 0 to
> +             * be always 0, and this algorithm will grant that by default.
> +             */
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
> +            uint32_t assoc_src;
> +
> +            /*
> +             * n_level = 0 means that the distance is greater than our last
> +             * rounded value (120). In this case there is no NUMA level match
> +             * between src and dst and we can skip the remaining of the loop.
> +             *
> +             * The Linux kernel will assume that the distance between src and
> +             * dst, in this case of no match, is 10 (local distance) doubled
> +             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
> +             *
> +             * This logic can be seen in the Linux kernel source code, as of
> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
> +             */
> +            if (n_level == 0) {
> +                continue;
> +            }
> +
> +            assoc_src = spapr->numa_assoc_array[src][n_level];
> +            spapr->numa_assoc_array[dst][n_level] = assoc_src;

I'm still not convinced that having the entry at n_level match, but
not those at "coarser"/"more distant" levels be different makes sense.

> +        }
> +    }
> +
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int i, j, max_nodes_with_gpus;
> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>  
>      /*
>       * For all associativity arrays: first position is the size,
> @@ -56,6 +162,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      for (i = 0; i < nb_numa_nodes; i++) {
>          spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +
> +        /*
> +         * Fill all associativity domains of non-zero NUMA nodes with
> +         * node_id. This is required because the default value (0) is
> +         * considered a match with associativity domains of node 0.
> +         */
> +        if (!using_legacy_numa && i != 0) {
> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            }
> +        }
>      }
>  
>      /*
> @@ -85,7 +202,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>       * 1 NUMA node) will not benefit from anything we're going to do
>       * after this point.
>       */
> -    if (spapr_machine_using_legacy_numa(spapr)) {
> +    if (using_legacy_numa) {
>          return;
>      }
>  
> @@ -95,6 +212,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>          exit(EXIT_FAILURE);
>      }
>  
> +    spapr_numa_define_associativity_domains(spapr);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support
  2020-09-29 13:38 ` [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
@ 2020-10-02  3:16   ` David Gibson
  0 siblings, 0 replies; 9+ messages in thread
From: David Gibson @ 2020-10-02  3:16 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Tue, Sep 29, 2020 at 10:38:17AM -0300, Daniel Henrique Barboza wrote:
> This update provides more in depth information about the
> choices and drawbacks of the new NUMA support for the
> spapr machine.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  docs/specs/ppc-spapr-numa.rst | 206 +++++++++++++++++++++++++++++++++-
>  1 file changed, 205 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
> index e762038022..6dd13bf97b 100644
> --- a/docs/specs/ppc-spapr-numa.rst
> +++ b/docs/specs/ppc-spapr-numa.rst
> @@ -158,9 +158,213 @@ kernel tree). This results in the following distances:
>  * resources four NUMA levels apart: 160
>  
>  
> -Consequences for QEMU NUMA tuning
> +pseries NUMA mechanics
> +======================
> +
> +Starting in QEMU 5.2, the pseries machine considers user input when setting NUMA
> +topology of the guest. The following changes were made:

So, moving the new scheme to the top is good, along with that we
should describe it directly, rather than in terms of how it differed
from the old scheme.  The idea is that this should be a description of
what you need to know to use this right now primarily, and a history
only secondarily.

> +* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
> +  for 4 distinct NUMA distance values based on the NUMA levels
> +
> +* ibm,max-associativity-domains was changed to support multiple associativity
> +  domains in all NUMA levels. This is needed to ensure user flexibility
> +
> +* ibm,associativity for all resources now varies with user input
> +
> +These changes are only effective for pseries-5.2 and newer machines that are
> +created with more than one NUMA node (disconsidering NUMA nodes created by
> +the machine itself, e.g. NVLink 2 GPUs). The now legacy support has been
> +around for such a long time, with users seeing NUMA distances 10 and 40
> +(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
> +existing experience of those guests.
> +
> +To bring the user experience x86 users have when tuning up NUMA, we had
> +to operate under the current pseries Linux kernel logic described in
> +`How the pseries Linux guest calculates NUMA distances`_. The result
> +is that we needed to translate NUMA distance user input to pseries
> +Linux kernel input.
> +
> +Translating user distance to kernel distance
> +--------------------------------------------
> +
> +User input for NUMA distance can vary from 10 to 254. We need to translate
> +that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
> +This is how it is being done:
> +
> +* user distance 11 to 30 will be interpreted as 20
> +* user distance 31 to 60 will be interpreted as 40
> +* user distance 61 to 120 will be interpreted as 80
> +* user distance 121 and beyond will be interpreted as 160
> +* user distance 10 stays 10
> +
> +The reasoning behind this aproximation is to avoid any round up to the local
> +distance (10), keeping it exclusive to the 4th NUMA level (which is still
> +exclusive to the node_id). All other ranges were chosen under the developer
> +discretion of what would be (somewhat) sensible considering the user input.
> +Any other strategy can be used here, but in the end the reality is that we'll
> +have to accept that a large array of values will be translated to the same
> +NUMA topology in the guest, e.g. this user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  31 120
> +  1  31  10  30
> +  2 120  30  10
> +
> +And this other user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  60  61
> +  1  60  10  11
> +  2  61  11  10
> +
> +Will both be translated to the same values internally:
> +
> +::
> +
> +      0   1   2
> +  0  10  40  80
> +  1  40  10  20
> +  2  80  20  10
> +
> +Users are encouraged to use only the kernel values in the NUMA definition to
> +avoid being taken by surprise with that the guest is actually seeing in the
> +topology. There are enough potential surprises that are inherent to the
> +associativity domain assignment process, discussed below.
> +
> +
> +How associativity domains are assigned
> +--------------------------------------
> +
> +LOPAPR allows more than one associativity array (or 'string') per allocated
> +resource. This would be used to represent that the resource has multiple
> +connections with the board, and then the operational system, when deciding
> +NUMA distancing, should consider the associativity information that provides
> +the shortest distance.
> +
> +The spapr implementation does not support multiple associativity arrays per
> +resource, neither does the pseries Linux kernel. We'll have to represent the
> +NUMA topology using one associativity per resource, which means that choices
> +and compromises are going to be made.
> +
> +Consider the following NUMA topology entered by user input:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  40  20  40
> +  1  40  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +Honoring just the relative distances of node 0 to every other node, one possible
> +value for all associativity arrays would be:
> +
> +* node 0: 0 0 0 0
> +* node 1: 1 0 1 1
> +* node 2: 2 2 0 2
> +* node 3: 3 0 3 3
> +
> +With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
> +
> +* distance from 0 to 1 is 40 (no match at 0x4 and 0x3, will match
> +  at 0x2)
> +* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
> +* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
> +  at 0x2)
> +
> +The distances related to node 0 are accounted for. For node 1, and keeping
> +in mind that we don't need to revisit node 0 again, the distance from
> +node 1 to 2 is 80, matching at 0x4, and distance from 1 to 3 is 40,
> +match in 0x3:
> +
> +* node 0: 0 0 0 0
> +* node 1: 1 0 1 1
> +* node 2: 1 2 0 2
> +* node 3: 3 0 3 3
> +
> +In the last step we will analyze just nodes 2 and 3. The desired distance
> +between 2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a
> +domain assigned in 0x3, 0. We'll preserve it to avoid dissolving the
> +association between node 0 and node 2, and use it as a domain for
> +0x3 as well:
> +
> +* node 0: 0 0 0 0
> +* node 1: 1 0 1 1
> +* node 2: 1 2 0 2
> +* node 3: 3 0 0 3
> +
> +
> +The kernel will read these arrays and will calculate the following NUMA topology for
> +the guest:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  40  20  20
> +  1  40  10  80  40
> +  2  20  80  10  20
> +  3  20  40  20  10
> +
> +Note that this is not what the user wanted - the desired distance between
> +0 and 3 is 40, we calculated it as 20. This is what the current logic and
> +implementation constraints of the kernel and QEMU will provide inside the
> +LOPAPR specification.
> +
> +
> +Users are welcome to use this knowledge and experiment with the input to get
> +the NUMA topology they want, or as closer as they want. The important thing
> +is to keep expectations up to par with what we are capable of provide at this
> +moment: an approximation.
> +
> +Limitations of the implementation
>  ---------------------------------
>  
> +As mentioned above, the pSeries NUMA distance logic is, in fact, a way to approximate
> +user choice. The Linux kernel, and PAPR itself, does not provide QEMU with the ways
> +to fully map user input to actual NUMA distance the guest will use. These limitations
> +creates two notable limitations in our support:
> +
> +* Asymmetrical topologies aren't supported. We only support NUMA topologies where
> +  the distance from node A to B is always the same as B to A. We do not support
> +  any A-B pair where the distance back and forth is asymmetric. For example, the
> +  following topology isn't supported and the pSeries guest will not boot with this
> +  user input:
> +
> +::
> +
> +      0   1
> +  0  10  40
> +  1  20  10
> +
> +
> +* 'non-transitive' topologies will be poorly translated to the guest. This is the
> +  kind of topology where the distance from a node A to B is X, B to C is X, but
> +  the distance A to C is not X. E.g.:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  20  20  40
> +  1  20  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +  In the example above, distance 0 to 2 is 20, 2 to 3 is 20, but 0 to 3 is 40.
> +  The kernel will always match with the shortest associativity domain possible,
> +  and we're attempting to retain the previous established relations between the
> +  nodes. This means that a distance equal to 20 between nodes 0 and 2 and the
> +  same distance 20 between nodes 2 and 3 will cause the distance between 0 and 3
> +  to also be 20.
> +
> +
> +Legacy (5.1 and older) pseries NUMA mechanics
> +=============================================
> +
>  The way the pseries Linux guest calculates NUMA distances has a direct effect
>  on what QEMU users can expect when doing NUMA tuning. As of QEMU 5.1, this is
>  the default ibm,associativity-reference-points being used in the pseries

I'd suggest moving a discussion of the differences that were made here
down to this point.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v3 4/5] spapr_numa: consider user input when defining associativity
  2020-10-02  2:24   ` David Gibson
@ 2020-10-02 13:02     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 9+ messages in thread
From: Daniel Henrique Barboza @ 2020-10-02 13:02 UTC (permalink / raw)
  To: David Gibson; +Cc: qemu-ppc, qemu-devel, groug



On 10/1/20 11:24 PM, David Gibson wrote:
> On Tue, Sep 29, 2020 at 10:38:16AM -0300, Daniel Henrique Barboza wrote:
>> This patch puts all the pieces together to finally allow user
>> input when defining the NUMA topology of the spapr guest.
>>
>> For each NUMA node A, starting at node id 0, the new
>> spapr_numa_define_associativity_domains() will:
>>
>> - get the distance between node A and B = A + 1
>> - get the correspondent NUMA level for this distance
>> - assign the associativity domain for A and B for the given
>> NUMA level, using the lowest associativity domain value between
>> them
>> - if there is more NUMA nodes, increment B and repeat
> 
> I still find this description very confusing.  The one in the comment
> is better, I think, can you maybe copy that one here.

Got it.

> 
>> Since we always start at the first node (id = 0) and go in
>> ascending order, we are prioritizing any previous associativity
>> already calculated. This is necessary because neither QEMU, nor
>> the pSeries kernel, supports multiple associativity domains for
>> each resource, meaning that we have to decide which associativity
>> relation is relevant. Another side effect is that the first
>> NUMA node, node 0, will always have an associativity array
>> full of zeroes. This is intended - in fact, the Linux kernel
>> expects it (see [1] for more info).
>>
>> Ultimately, all of this results in a best effort approximation for
>> the actual NUMA distances the user input in the command line. Given
>> the nature of how PAPR itself interprets NUMA distances versus the
>> expectations risen by how ACPI SLIT works, there might be better
>> algorithms but, in the end, it'll also result in another way to
>> approximate what the user really wanted.
>>
>> To keep this commit message no longer than it already is, the next
>> patch will update the existing documentation in ppc-spapr-numa.rst
>> with more in depth details and design considerations/drawbacks.
>>
>> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 120 +++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 119 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 16badb1f4b..f3d43ceb1e 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -37,12 +37,118 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>>       return true;
>>   }
>>   
>> +/*
>> + * This function will translate the user distances into
>> + * what the kernel understand as possible values: 10
>> + * (local distance), 20, 40, 80 and 160, and return the equivalent
>> + * NUMA level for each. Current heuristic is:
>> + *  - local distance (10) returns numa_level = 0x4
>> + *  - distances between 11 and 30 inclusive -> rounded to 20,
>> + *    numa_level = 0x3
>> + *  - distances between 31 and 60 inclusive -> rounded to 40,
>> + *    numa_level = 0x2
>> + *  - distances between 61 and 120 inclusive -> rounded to 80,
>> + *    numa_level = 0x1
>> + *  - everything above 120 returns numa_level = 0 to indicate that
>> + *    there is no match. This will be calculated as disntace = 160
>> + *    by the kernel (as of v5.9)
>> + */
>> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
>> +{
>> +    uint8_t rounded_distance = 160;
>> +    uint8_t numa_level;
>> +
>> +    if (distance > 11 && distance <= 30) {
>> +        rounded_distance = 20;
>> +    } else if (distance > 31 && distance <= 60) {
>> +        rounded_distance = 40;
>> +    } else if (distance > 61 && distance <= 120) {
>> +        rounded_distance = 80;
>> +    }
>> +
>> +    switch (rounded_distance) {
>> +    case 10:
>> +        numa_level = 0x4;
>> +        break;
> 
> Uh.. you could just return the numa_level from the if-else chain
> without going via rounded_distance.  (You could put the rounded
> distances in comments on each if clause, if you like).

Makes sense.

> 
>> +    case 20:
>> +        numa_level = 0x3;
>> +        break;
>> +    case 40:
>> +        numa_level = 0x2;
>> +        break;
>> +    case 80:
>> +        numa_level = 0x1;
>> +        break;
>> +    default:
>> +        numa_level = 0;
>> +    }
>> +
>> +    return numa_level;
>> +}
>> +
>> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
>> +{
>> +    MachineState *ms = MACHINE(spapr);
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    int src, dst;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            /*
>> +             * This is how the associativity domain between A and B
>> +             * is calculated:
>> +             *
>> +             * - get the distance between them
>> +             * - get the correspondent NUMA level for this distance
>> +             * - the arrays were initialized with their own numa_ids,
>> +             * and we're calculating the distance in node_id ascending order,
>> +             * starting from node 0. This will have a cascade effect in the
>> +             * algorithm because the associativity domains that node 0 defines
>> +             * will be carried over to the other nodes, and node 1
>> +             * associativities will be carried over unless there's already a
>> +             * node 0 associativity assigned, and so on. This happens because
>> +             * we'll assign assoc_src as the associativity domain of dst
>> +             * as well, for the given NUMA level.
>> +             *
>> +             * The PPC kernel expects the associativity domains of node 0 to
>> +             * be always 0, and this algorithm will grant that by default.
>> +             */
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
>> +            uint32_t assoc_src;
>> +
>> +            /*
>> +             * n_level = 0 means that the distance is greater than our last
>> +             * rounded value (120). In this case there is no NUMA level match
>> +             * between src and dst and we can skip the remaining of the loop.
>> +             *
>> +             * The Linux kernel will assume that the distance between src and
>> +             * dst, in this case of no match, is 10 (local distance) doubled
>> +             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
>> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
>> +             *
>> +             * This logic can be seen in the Linux kernel source code, as of
>> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
>> +             */
>> +            if (n_level == 0) {
>> +                continue;
>> +            }
>> +
>> +            assoc_src = spapr->numa_assoc_array[src][n_level];
>> +            spapr->numa_assoc_array[dst][n_level] = assoc_src;
> 
> I'm still not convinced that having the entry at n_level match, but
> not those at "coarser"/"more distant" levels be different makes sense.

I was going to defend what I was doing here but then, reading PAPR yet again:

"The legal form of the “ibm,associativity” property is dependent upon the
setting of the “ibm,architecture-vec-5” property byte 5 bit 0. The bit value
of zero (...) this form is being deprecated for new implementations in favor
of the form indicated by the “ibm,architecture-vec-5” property byte 5 bit 0
having the value of one in which the “ibm,associativity” property string
***represents the strict physical hierarchy of the platform***".

My defense was going to be that I'm not cascading all levels larger than
n_level because I would be inferring topology, when I should be inferring
just performance distance. Apparently this is wrong and we should represent
physical hierarchy, and what you're proposing is the right way to do it.
I'll fix it in v4.
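
The idea for v4 would then be something along these lines (rough,
untested sketch; 'i' added to the local declarations): instead of
writing a single level, assign every level from n_level down to the
coarsest one:

    for (i = n_level; i > 0; i--) {
        assoc_src = spapr->numa_assoc_array[src][i];
        spapr->numa_assoc_array[dst][i] = assoc_src;
    }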



Thanks,


DHB

> 
>> +        }
>> +    }
>> +
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>>       int nb_numa_nodes = machine->numa_state->num_nodes;
>>       int i, j, max_nodes_with_gpus;
>> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>>   
>>       /*
>>        * For all associativity arrays: first position is the size,
>> @@ -56,6 +162,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       for (i = 0; i < nb_numa_nodes; i++) {
>>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>> +
>> +        /*
>> +         * Fill all associativity domains of non-zero NUMA nodes with
>> +         * node_id. This is required because the default value (0) is
>> +         * considered a match with associativity domains of node 0.
>> +         */
>> +        if (!using_legacy_numa && i != 0) {
>> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            }
>> +        }
>>       }
>>   
>>       /*
>> @@ -85,7 +202,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>        * 1 NUMA node) will not benefit from anything we're going to do
>>        * after this point.
>>        */
>> -    if (spapr_machine_using_legacy_numa(spapr)) {
>> +    if (using_legacy_numa) {
>>           return;
>>       }
>>   
>> @@ -95,6 +212,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>           exit(EXIT_FAILURE);
>>       }
>>   
>> +    spapr_numa_define_associativity_domains(spapr);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 



end of thread, other threads:[~2020-10-02 13:06 UTC | newest]

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-29 13:38 [PATCH v3 0/5] pseries NUMA distance calculation Daniel Henrique Barboza
2020-09-29 13:38 ` [PATCH v3 1/5] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
2020-09-29 13:38 ` [PATCH v3 2/5] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
2020-09-29 13:38 ` [PATCH v3 3/5] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
2020-09-29 13:38 ` [PATCH v3 4/5] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
2020-10-02  2:24   ` David Gibson
2020-10-02 13:02     ` Daniel Henrique Barboza
2020-09-29 13:38 ` [PATCH v3 5/5] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
2020-10-02  3:16   ` David Gibson
