* [PATCH v2 0/6] pseries NUMA distance calculation
@ 2020-09-24 19:50 Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
                   ` (5 more replies)
  0 siblings, 6 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This second version contains fixes based on the v1 review made
by Greg.

changes from v1:
- patch 1:
    * added Greg's R-b
- patch 2:
    * fixed 'order' typo
    * changed exit(1) to exit(EXIT_FAILURE)
    * added Greg's R-b
- patch 3:
    * clarified in the function comment the rounding of distance
values 30, 60 and 120
    * fixed a bug where distances 30, 60 and 120 weren't being
PAPRified (use <= instead of just <)
- patch 4:
    * made the changes multi-line and more explicit, making it
easier to see the differences between the current and legacy arrays
- patch 5:
    * turned spapr_numa_get_NUMA_level() to lowercase
    * added a switch clause for distance = 10
    * removed MachineState parameter of
spapr_numa_define_associativity_domains()
    * clarified the reason why the associativity domains need
to be initialized with 'node_id' in the comment
    * we're now handling distances > 120 appropriately (no
NUMA match case)


v1 link: https://lists.gnu.org/archive/html/qemu-devel/2020-09/msg08549.html


Daniel Henrique Barboza (6):
  spapr: add spapr_machine_using_legacy_numa() helper
  spapr_numa: forbid asymmetrical NUMA setups
  spapr_numa: translate regular NUMA distance to PAPR distance
  spapr_numa: change reference-points and maxdomain settings
  spapr_numa: consider user input when defining associativity
  specs/ppc-spapr-numa: update with new NUMA support

 docs/specs/ppc-spapr-numa.rst | 213 +++++++++++++++++++++++++++++++++
 hw/ppc/spapr.c                |  12 ++
 hw/ppc/spapr_numa.c           | 217 ++++++++++++++++++++++++++++++++--
 include/hw/ppc/spapr.h        |   2 +
 4 files changed, 436 insertions(+), 8 deletions(-)

-- 
2.26.2




* [PATCH v2 1/6] spapr: add spapr_machine_using_legacy_numa() helper
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The changes to come to NUMA support are all guest visible. In
theory we could just create a new 5_1 class option flag to
prevent the changes from cascading to 5.1 and older machines. The
reality is that these changes are only relevant if the machine has
more than one NUMA node. There is no need to needlessly change
guest behavior that has been around for years.

This new helper will be used by the next patches to determine
whether we should retain the (soon to be) legacy NUMA behavior
in the pSeries machine. The new behavior will only be exposed
if:

- machine is pseries-5.2 and newer;
- more than one NUMA node is declared in NUMA state.
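
In practice, callers are expected to simply bail out early when this
helper returns true. A minimal usage sketch (this is what the next
patch does in spapr_numa_associativity_init()):

    if (spapr_machine_using_legacy_numa(spapr)) {
        return;
    }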

Reviewed-by: Greg Kurz <groug@kaod.org>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c         | 12 ++++++++++++
 include/hw/ppc/spapr.h |  2 ++
 2 files changed, 14 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index e813c7cfb9..c5d8910a74 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -294,6 +294,15 @@ static hwaddr spapr_node0_size(MachineState *machine)
     return machine->ram_size;
 }
 
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
+{
+    MachineState *machine = MACHINE(spapr);
+    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
+
+    return smc->pre_5_2_numa_associativity ||
+           machine->numa_state->num_nodes <= 1;
+}
+
 static void add_str(GString *s, const gchar *s1)
 {
     g_string_append_len(s, s1, strlen(s1) + 1);
@@ -4522,8 +4531,11 @@ DEFINE_SPAPR_MACHINE(5_2, "5.2", true);
  */
 static void spapr_machine_5_1_class_options(MachineClass *mc)
 {
+    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
+
     spapr_machine_5_2_class_options(mc);
     compat_props_add(mc->compat_props, hw_compat_5_1, hw_compat_5_1_len);
+    smc->pre_5_2_numa_associativity = true;
 }
 
 DEFINE_SPAPR_MACHINE(5_1, "5.1", false);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 114e819969..d1aae03b97 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -143,6 +143,7 @@ struct SpaprMachineClass {
     bool smp_threads_vsmt; /* set VSMT to smp_threads by default */
     hwaddr rma_limit;          /* clamp the RMA to this size */
     bool pre_5_1_assoc_refpoints;
+    bool pre_5_2_numa_associativity;
 
     void (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio, 
@@ -860,6 +861,7 @@ int spapr_max_server_number(SpaprMachineState *spapr);
 void spapr_store_hpte(PowerPCCPU *cpu, hwaddr ptex,
                       uint64_t pte0, uint64_t pte1);
 void spapr_mce_req_event(PowerPCCPU *cpu, bool recovered);
+bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr);
 
 /* DRC callbacks. */
 void spapr_core_release(DeviceState *dev);
-- 
2.26.2




* [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-25  2:36   ` David Gibson
  2020-09-25  3:48   ` David Gibson
  2020-09-24 19:50 ` [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
                   ` (3 subsequent siblings)
  5 siblings, 2 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The pSeries machine does not support asymmetrical NUMA
configurations. This doesn't make much of a difference
since we're not using user input for pSeries NUMA setup,
but this will change in the next patches.

To avoid breaking existing setups, gate this change by
checking for legacy NUMA support.

Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
 1 file changed, 34 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 64fe567f5d..fe395e80a3 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,24 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
+static bool spapr_numa_is_symmetrical(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            if (numa_info[src].distance[dst] !=
+                numa_info[dst].distance[src]) {
+                return false;
+            }
+        }
+    }
+
+    return true;
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
 
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
     }
+
+    /*
+     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
+     * 1 NUMA node) will not benefit from anything we're going to do
+     * after this point.
+     */
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        return;
+    }
+
+    if (!spapr_numa_is_symmetrical(machine)) {
+        error_report("Asymmetrical NUMA topologies aren't supported "
+                     "in the pSeries machine");
+        exit(EXIT_FAILURE);
+    }
+
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-25  2:35   ` David Gibson
  2020-09-24 19:50 ` [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

QEMU allows the user to set NUMA distances in the command line.
For ACPI architectures like x86, this means that user input is
used to populate the SLIT table, and the guest perceives the
distances as the user chooses to.

PPC64 does not work that way. In the PAPR concept of NUMA,
associativity relations between the NUMA nodes are provided by
the device tree, and the guest kernel is free to calculate the
distances as it sees fit. Given how ACPI architectures work,
this puts the pSeries machine in a strange spot - users expect
to define NUMA distances like in the ACPI case, but QEMU does
not have control over it. To give pSeries users a similar
experience, we'll need to bring kernel specifics to QEMU
to approximate the NUMA distances.

The pSeries kernel works with the NUMA distance range 10,
20, 40, 80 and 160. The code starts at 10 (local distance) and
searches for a match in the first NUMA level between the
resources. If there is no match, the distance is doubled and
then it proceeds to try to match in the next NUMA level. Rinse
and repeat for MAX_DISTANCE_REF_POINTS levels.
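
For illustration, this is roughly the lookup the guest kernel ends up
doing. This is a simplified sketch of the logic found in
arch/powerpc/mm/numa.c, __node_distance(); the function name and array
layout here are illustrative, not the actual kernel code:

    static int node_distance_sketch(uint32_t *assoc_a, uint32_t *assoc_b,
                                    int ref_points_depth)
    {
        int distance = 10;  /* local distance */
        int i;

        for (i = 0; i < ref_points_depth; i++) {
            /* the first NUMA level where both resources share an
             * associativity domain ends the search */
            if (assoc_a[i] == assoc_b[i]) {
                break;
            }

            /* no match at this level: double the distance */
            distance *= 2;
        }

        return distance;  /* 10, 20, 40, 80 or 160 with 4 levels */
    }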

This patch introduces a spapr_numa_PAPRify_distances() helper
that translates the user distances to kernel distance, which
we're going to use to determine the associativity domains for
the NUMA nodes.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index fe395e80a3..990a5fce08 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
     return true;
 }
 
+/*
+ * This function will translate the user distances into
+ * what the kernel understands as possible values: 10
+ * (local distance), 20, 40, 80 and 160. Current heuristic
+ * is:
+ *
+ *  - distances between 11 and 30 inclusive -> rounded to 20
+ *  - distances between 31 and 60 inclusive -> rounded to 40
+ *  - distances between 61 and 120 inclusive -> rounded to 80
+ *  - everything above 120 -> 160
+ *
+ * This step can also be done at the same time as the NUMA
+ * associativity domains calculation, at the cost of extra
+ * complexity. We chose to keep it simpler.
+ *
+ * Note: this will overwrite the distance values in
+ * ms->numa_state->nodes.
+ */
+static void spapr_numa_PAPRify_distances(MachineState *ms)
+{
+    int src, dst;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    NodeInfo *numa_info = ms->numa_state->nodes;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t rounded_distance = 160;
+
+            if (distance >= 11 && distance <= 30) {
+                rounded_distance = 20;
+            } else if (distance >= 31 && distance <= 60) {
+                rounded_distance = 40;
+            } else if (distance >= 61 && distance <= 120) {
+                rounded_distance = 80;
+            }
+
+            numa_info[src].distance[dst] = rounded_distance;
+            numa_info[dst].distance[src] = rounded_distance;
+        }
+    }
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
@@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
         exit(EXIT_FAILURE);
     }
 
+    spapr_numa_PAPRify_distances(machine);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (2 preceding siblings ...)
  2020-09-24 19:50 ` [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-25  2:38   ` David Gibson
  2020-09-25 13:16   ` Greg Kurz
  2020-09-24 19:50 ` [PATCH v2 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
  2020-09-24 19:50 ` [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  5 siblings, 2 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This is the first guest visible change introduced in
spapr_numa.c. The previous settings of both reference-points
and maxdomains were too restrictive, but enough for the
existing associativity we're setting in the resources.

We'll change that in the following patches, populating the
associativity arrays based on user input. For those changes
to be effective, reference-points and maxdomains must be
more flexible. After this patch, we'll have 4 distinct
levels of NUMA (0x4, 0x3, 0x2, 0x1) and maxdomains will
allow for any type of configuration the user intends to
do - under the scope and limitations of PAPR itself, of
course.
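
For reference, this is what the rtas node will advertise after this
patch for a machine that is not using legacy NUMA (shown symbolically;
maxdomain is num_nodes + gpu_numa_id, as computed in the code below):

    ibm,associativity-reference-points = {0x4, 0x3, 0x2, 0x1}
    ibm,max-associativity-domains = {4, maxdomain, maxdomain,
                                     maxdomain, maxdomain}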

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 40 ++++++++++++++++++++++++++++++++--------
 1 file changed, 32 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 990a5fce08..ea33439a15 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -222,24 +222,48 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
  */
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
+    MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     uint32_t refpoints[] = {
         cpu_to_be32(0x4),
-        cpu_to_be32(0x4),
+        cpu_to_be32(0x3),
         cpu_to_be32(0x2),
+        cpu_to_be32(0x1),
     };
     uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
-    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
+    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
     uint32_t maxdomains[] = {
         cpu_to_be32(4),
-        maxdomain,
-        maxdomain,
-        maxdomain,
-        cpu_to_be32(spapr->gpu_numa_id),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain),
+        cpu_to_be32(maxdomain)
     };
 
-    if (smc->pre_5_1_assoc_refpoints) {
-        nr_refpoints = 2;
+    if (spapr_machine_using_legacy_numa(spapr)) {
+        uint32_t legacy_refpoints[] = {
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x4),
+            cpu_to_be32(0x2),
+        };
+        uint32_t legacy_maxdomain = spapr->gpu_numa_id > 1 ? 1 : 0;
+        uint32_t legacy_maxdomains[] = {
+            cpu_to_be32(4),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(legacy_maxdomain),
+            cpu_to_be32(spapr->gpu_numa_id),
+        };
+
+        nr_refpoints = 3;
+
+        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
+        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));
+
+        /* pseries-5.0 and older reference-points array is {0x4, 0x4} */
+        if (smc->pre_5_1_assoc_refpoints) {
+            nr_refpoints = 2;
+        }
     }
 
     _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
-- 
2.26.2




* [PATCH v2 5/6] spapr_numa: consider user input when defining associativity
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (3 preceding siblings ...)
  2020-09-24 19:50 ` [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-25  3:39   ` David Gibson
  2020-09-24 19:50 ` [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
  5 siblings, 1 reply; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This patch puts all the pieces together to finally allow user
input when defining the NUMA topology of the spapr guest.

We have one more kernel restriction to handle in this patch:
the associativity array of node 0 must be filled with zeroes
[1]. The strategy below ensures that this will happen.

spapr_numa_define_associativity_domains() will read the distance
(already PAPRified) between the nodes from numa_state and determine
the appropriate NUMA level. The NUMA domains, processed in ascending
order, are going to be matched via NUMA levels, and the lowest
associativity domain value is assigned to that specific level for
both.
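
As a minimal illustration of the matching, take a hypothetical guest
with two NUMA nodes at (already PAPRified) distance 20, with
MAX_DISTANCE_REF_POINTS = 4. The first cell of each array is its size
and the last cell is the node_id:

    node 0 starts as { 4, 0, 0, 0, 0 }
    node 1 starts as { 4, 1, 1, 1, 1 }

    distance 20 -> NUMA level 0x3, and the lowest domain value (0) is
    assigned to both nodes at that level:

    node 0 ends as   { 4, 0, 0, 0, 0 }
    node 1 ends as   { 4, 1, 1, 0, 1 }

The guest kernel then sees no match at the first reference point (the
node_id cell, so 10 is doubled to 20) and a match at the next one,
reporting a distance of 20 between the two nodes.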

This will create a heuristic where the associativities of the first
nodes have higher priority and are re-used in new matches, instead of
overwriting them with a new associativity match. This is necessary
because neither QEMU, nor the pSeries kernel, supports multiple
associativity domains for each resource, meaning that we have to
decide which associativity relation is relevant.

Ultimately, all of this results in a best effort approximation for
the actual NUMA distances the user input in the command line. Given
the nature of how PAPR itself interprets NUMA distances versus the
expectations raised by how ACPI SLIT works, there might be better
algorithms but, in the end, it'll also result in another way to
approximate what the user really wanted.

To keep this commit message no longer than it already is, the next
patch will update the existing documentation in ppc-spapr-numa.rst
with more in depth details and design considerations/drawbacks.

[1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 101 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 100 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index ea33439a15..21cae3f799 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -80,12 +80,99 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
     }
 }
 
+static uint8_t spapr_numa_get_numa_level(uint8_t distance)
+{
+    uint8_t numa_level;
+
+    switch (distance) {
+    case 10:
+        numa_level = 0x4;
+        break;
+    case 20:
+        numa_level = 0x3;
+        break;
+    case 40:
+        numa_level = 0x2;
+        break;
+    case 80:
+        numa_level = 0x1;
+        break;
+    default:
+        numa_level = 0;
+    }
+
+    return numa_level;
+}
+
+static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
+{
+    MachineState *ms = MACHINE(spapr);
+    NodeInfo *numa_info = ms->numa_state->nodes;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    int src, dst;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = src; dst < nb_numa_nodes; dst++) {
+            /*
+             * This is how the associativity domain between A and B
+             * is calculated:
+             *
+             * - get the distance between them
+             * - get the correspondent NUMA level for this distance
+             * - the arrays were initialized with their own numa_ids,
+             * and we're calculating the distance in node_id ascending order,
+             * starting from node 0. This will have a cascade effect in the
+             * algorithm because the associativity domains that node 0 defines
+             * will be carried over to the other nodes, and node 1
+             * associativities will be carried over unless there's already a
+             * node 0 associativity assigned, and so on. This happens because
+             * we'll assign the lowest value of assoc_src and assoc_dst to be
+             * the associativity domain of both, for the given NUMA level.
+             *
+             * The PPC kernel expects the associativity domains of node 0 to
+             * be always 0, and this algorithm will grant that by default.
+             */
+            uint8_t distance = numa_info[src].distance[dst];
+            uint8_t n_level = spapr_numa_get_numa_level(distance);
+            uint32_t assoc_src, assoc_dst;
+
+            /*
+             * n_level = 0 means that the distance is greater than our last
+             * rounded value (120). In this case there is no NUMA level match
+             * between src and dst and we can skip the rest of the loop.
+             *
+             * The Linux kernel will assume that the distance between src and
+             * dst, in this case of no match, is 10 (local distance) doubled
+             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
+             * levels (4), so this gives us 10*2*2*2*2 = 160.
+             *
+             * This logic can be seen in the Linux kernel source code, as of
+             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
+             */
+            if (n_level == 0) {
+                continue;
+            }
+
+            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
+            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
+
+            if (assoc_src < assoc_dst) {
+                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
+            } else {
+                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
+            }
+        }
+    }
+
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int i, j, max_nodes_with_gpus;
+    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
 
     /*
      * For all associativity arrays: first position is the size,
@@ -99,6 +186,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     for (i = 0; i < nb_numa_nodes; i++) {
         spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+
+        /*
+         * Fill all associativity domains of non-zero NUMA nodes with
+         * node_id. This is required because the default value (0) is
+         * considered a match with associativity domains of node 0.
+         */
+        if (!using_legacy_numa && i != 0) {
+            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+            }
+        }
     }
 
     /*
@@ -128,7 +226,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
      * 1 NUMA node) will not benefit from anything we're going to do
      * after this point.
      */
-    if (spapr_machine_using_legacy_numa(spapr)) {
+    if (using_legacy_numa) {
         return;
     }
 
@@ -139,6 +237,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     }
 
     spapr_numa_PAPRify_distances(machine);
+    spapr_numa_define_associativity_domains(spapr);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
-- 
2.26.2




* [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support
  2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
                   ` (4 preceding siblings ...)
  2020-09-24 19:50 ` [PATCH v2 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-09-24 19:50 ` Daniel Henrique Barboza
  2020-09-25  3:43   ` David Gibson
  5 siblings, 1 reply; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-24 19:50 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

This update provides more in depth information about the
choices and drawbacks of the new NUMA support for the
spapr machine.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
 1 file changed, 213 insertions(+)

diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
index e762038022..994bfb996f 100644
--- a/docs/specs/ppc-spapr-numa.rst
+++ b/docs/specs/ppc-spapr-numa.rst
@@ -189,3 +189,216 @@ QEMU up to 5.1, as follows:
 
 This also means that user input in QEMU command line does not change the
 NUMA distancing inside the guest for the pseries machine.
+
+New NUMA mechanics for pseries in QEMU 5.2
+==========================================
+
+Starting in QEMU 5.2, the pseries machine now considers user input when
+setting NUMA topology of the guest. The following changes were made:
+
+* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
+  for 4 distinct NUMA distance values based on the NUMA levels
+
+* ibm,max-associativity-domains was changed to support multiple associativity
+  domains in all NUMA levels. This is needed to ensure user flexibility
+
+* ibm,associativity for all resources now varies with user input
+
+These changes are only effective for pseries-5.2 and newer machines that are
+created with more than one NUMA node (disregarding NUMA nodes created by
+the machine itself, e.g. NVLink 2 GPUs). The now legacy support has been
+around for such a long time, with users seeing NUMA distances 10 and 40
+(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
+existing experience of those guests.
+
+To give pseries users an experience closer to the one x86 users have when
+tuning NUMA, we had to operate under the current pseries Linux kernel logic
+described in `How the pseries Linux guest calculates NUMA distances`_. The result
+is that we needed to translate NUMA distance user input to pseries
+Linux kernel input.
+
+Translating user distance to kernel distance
+--------------------------------------------
+
+User input for NUMA distance can vary from 10 to 254. We need to translate
+that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
+This is how it is being done:
+
+* user distance 11 to 30 will be interpreted as 20
+* user distance 31 to 60 will be interpreted as 40
+* user distance 61 to 120 will be interpreted as 80
+* user distance 121 and beyond will be interpreted as 160
+* user distance 10 stays 10
+
+The reasoning behind this approximation is to avoid any rounding up to the local
+distance (10), keeping it exclusive to the 4th NUMA level (which is still
+exclusive to the node_id). All other ranges were chosen at the developer's
+discretion, aiming at what would be (somewhat) sensible considering the user input.
+Any other strategy can be used here, but in the end the reality is that we'll
+have to accept that a large array of values will be translated to the same
+NUMA topology in the guest, e.g. this user input:
+
+::
+
+      0   1   2
+  0  10  31 120
+  1  31  10  30
+  2 120  30  10
+
+And this other user input:
+
+::
+
+      0   1   2
+  0  10  60  61
+  1  60  10  11
+  2  61  11  10
+
+Will both be translated to the same values internally:
+
+::
+
+      0   1   2
+  0  10  40  80
+  1  40  10  20
+  2  80  20  10
+
+Users are encouraged to use only the kernel values in the NUMA definition to
+avoid being taken by surprise with what the guest is actually seeing in the
+topology. There are enough potential surprises that are inherent to the
+associativity domain assignment process, discussed below.
+
+
+How associativity domains are assigned
+--------------------------------------
+
+LOPAPR allows more than one associativity array (or 'string') per allocated
+resource. This would be used to represent that the resource has multiple
+connections with the board, and then the operating system, when deciding
+NUMA distancing, should consider the associativity information that provides
+the shortest distance.
+
+The spapr implementation does not support multiple associativity arrays per
+resource, and neither does the pseries Linux kernel. We'll have to represent the
+NUMA topology using one associativity per resource, which means that choices
+and compromises are going to be made.
+
+Consider the following NUMA topology entered by user input:
+
+::
+
+      0   1   2   3
+  0  10  20  20  40
+  1  20  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+Honoring just the relative distances of node 0 to every other node, one possible
+value for all associativity arrays would be:
+
+* node 0: 0 B A 0
+* node 1: 0 0 A 1
+* node 2: 0 0 A 2
+* node 3: 0 B 0 3
+
+With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
+
+* distance from 0 to 1 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
+* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
+  at 0x2)
+
+The distances related to node 0 are well represented. Doing the same for node 1, and keeping
+in mind that we don't need to revisit node 0 again, the distance from node 1 to
+2 is 80, matching at 0x1:
+
+* node 1: C 0 A 1
+* node 2: C 0 A 2
+
+Here we already have the first conflict. Even if we assign a new associativity
+domain at 0x1 for 1 and 2, and we do that in the code, the kernel will define
+the distance between 1 and 2 as 20, not 80, because both 1 and 2 have the "A"
+associativity domain from the previous step. If we decide to discard the
+associativity with "A" then the node 0 distances are compromised.
+
+Following up with the distance from 1 to 3 being 40 (a match in 0x2) we have another
+decision to make. These are the current associativity domains of each:
+
+* node 1: C 0 A 1
+* node 3: 0 B 0 3
+
+There is already an associativity domain at 0x2 in node 3, "B", which was assigned
+by the node 0 distances. If we define a new associativity domain at this level
+for 1 and 3 we will overwrite the existing associativity between 0 and 3. What
+the code is doing in this case is to assign the existing domain to the
+current associativity, in this case, "B" is now assigned to the 0x2 of node 1,
+resulting in the following associativity arrays:
+
+* node 0: 0 B A 0
+* node 1: C B A 1
+* node 2: C 0 A 2
+* node 3: 0 B 0 3
+
+In the last step we will analyze just nodes 2 and 3. The desired distance between
+2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a domain assigned in 0x3,
+A, so we do the same as we did in the previous case and assign it to node 3
+at 0x3. This is the end result for the associativity arrays:
+
+* node 0: 0 B A 0
+* node 1: C B A 1
+* node 2: C 0 A 2
+* node 3: 0 B A 3
+
+The kernel will read these arrays and will calculate the following NUMA topology for
+the guest:
+
+::
+
+      0   1   2   3
+  0  10  20  20  20
+  1  20  10  20  20
+  2  20  20  10  20
+  3  20  20  20  10
+
+Which is not what the user wanted, but it is what the current logic and implementation
+constraints of the kernel and QEMU will provide inside the LOPAPR specification.
+
+Changing a single value, especially a low distance value, makes for drastic changes
+in the result. For example, with the same user input from above, but changing the
+node distance from 0 to 1 to 40:
+
+::
+
+      0   1   2   3
+  0  10  40  20  40
+  1  40  10  80  40
+  2  20  80  10  20
+  3  40  40  20  10
+
+This is the result inside the guest, applying the same heuristics:
+
+::
+
+  $ numactl -H
+  available: 4 nodes (0-3)
+  (...)
+  node distances:
+  node   0   1   2   3
+    0:  10  40  20  20
+    1:  40  10  80  40
+    2:  20  80  10  20
+    3:  20  40  20  10
+
+This result is much closer to the user input and only a single distance was changed
+from the original.
+
+The kernel will always match with the shortest associativity domain possible, and we're
+attempting to retain the previously established relations between the nodes. This means
+that a distance equal to 20 between nodes A and B and the same distance 20 between nodes
+A and F will cause the distance between B and F to also be 20. The same will happen to
+other distances, but shorter distances take precedence in the distance calculation.
+
+Users are welcome to use this knowledge and experiment with the input to get the
+NUMA topology they want, or as close to it as they can get. The important thing is to keep
+expectations in line with what we are capable of providing at this moment: an
+approximation.
-- 
2.26.2




* Re: [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-24 19:50 ` [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
@ 2020-09-25  2:35   ` David Gibson
  2020-09-25 12:44     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2020-09-25  2:35 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:55PM -0300, Daniel Henrique Barboza wrote:
> QEMU allows the user to set NUMA distances in the command line.
> For ACPI architectures like x86, this means that user input is
> used to populate the SLIT table, and the guest perceives the
> distances as the user chooses to.
> 
> PPC64 does not work that way. In the PAPR concept of NUMA,
> associativity relations between the NUMA nodes are provided by
> the device tree, and the guest kernel is free to calculate the
> distances as it sees fit. Given how ACPI architectures work,
> this puts the pSeries machine in a strange spot - users expect
> to define NUMA distances like in the ACPI case, but QEMU does
> not have control over it. To give pSeries users a similar
> experience, we'll need to bring kernel specifics to QEMU
> to approximate the NUMA distances.
> 
> The pSeries kernel works with the NUMA distance range 10,
> 20, 40, 80 and 160. The code starts at 10 (local distance) and
> searches for a match in the first NUMA level between the
> resources. If there is no match, the distance is doubled and
> then it proceeds to try to match in the next NUMA level. Rinse
> and repeat for MAX_DISTANCE_REF_POINTS levels.
> 
> This patch introduces a spapr_numa_PAPRify_distances() helper
> that translates the user distances to kernel distance, which
> we're going to use to determine the associativity domains for
> the NUMA nodes.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

The idea of rounding the distances like this seems pretty good to me.
Since each level is a multiple of a distance from the preivous one it
might be more theoretically correct to place the thresholds at the
geometric mean between each level, rather than the arithmetic mean.  I
very much doubt it makes much difference in practice though, and this
is simpler.
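
(For illustration: geometric means between adjacent PAPR values would
put the 20/40, 40/80 and 80/160 boundaries at roughly 28 (sqrt(20*40)),
57 (sqrt(40*80)) and 113 (sqrt(80*160)), versus the arithmetic means
30, 60 and 120 used here.)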

There is one nit I'm less happy with, though..

> ---
>  hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 44 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index fe395e80a3..990a5fce08 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>      return true;
>  }
>  
> +/*
> + * This function will translate the user distances into
> + * what the kernel understands as possible values: 10
> + * (local distance), 20, 40, 80 and 160. Current heuristic
> + * is:
> + *
> + *  - distances between 11 and 30 inclusive -> rounded to 20
> + *  - distances between 31 and 60 inclusive -> rounded to 40
> + *  - distances between 61 and 120 inclusive -> rounded to 80
> + *  - everything above 120 -> 160
> + *
> + * This step can also be done at the same time as the NUMA
> + * associativity domains calculation, at the cost of extra
> + * complexity. We chose to keep it simpler.
> + *
> + * Note: this will overwrite the distance values in
> + * ms->numa_state->nodes.
> + */
> +static void spapr_numa_PAPRify_distances(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t rounded_distance = 160;
> +
> +            if (distance >= 11 && distance <= 30) {
> +                rounded_distance = 20;
> +            } else if (distance >= 31 && distance <= 60) {
> +                rounded_distance = 40;
> +            } else if (distance >= 61 && distance <= 120) {
> +                rounded_distance = 80;
> +            }
> +
> +            numa_info[src].distance[dst] = rounded_distance;
> +            numa_info[dst].distance[src] = rounded_distance;

..I don't love the fact that we alter the distance table in place.
Even though it was never exposed to the guest, I'd prefer not to
destroy the information the user passed in.  It could lead to
surprising results with QMP introspection, and it may make future
extensions more difficult.

So I'd prefer to either (a) just leave the table as is and round
on-demand with a paprify_distance(NN) -> {20,40,80,..} type function,
or (b) create a parallel, spapr local, table with the rounded
distances
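
A minimal sketch of option (a), using a hypothetical helper name and
the same thresholds the patch documents, which would leave
ms->numa_state->nodes untouched:

    static uint8_t spapr_numa_paprify_distance(uint8_t distance)
    {
        /* 11..30 -> 20, 31..60 -> 40, 61..120 -> 80, above 120 -> 160 */
        if (distance <= 10) {
            return 10;  /* local distance */
        } else if (distance <= 30) {
            return 20;
        } else if (distance <= 60) {
            return 40;
        } else if (distance <= 120) {
            return 80;
        }

        return 160;
    }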

> +        }
> +    }
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>          exit(EXIT_FAILURE);
>      }
>  
> +    spapr_numa_PAPRify_distances(machine);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-24 19:50 ` [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
@ 2020-09-25  2:36   ` David Gibson
  2020-09-25  3:48   ` David Gibson
  1 sibling, 0 replies; 20+ messages in thread
From: David Gibson @ 2020-09-25  2:36 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
> The pSeries machine does not support asymmetrical NUMA
> configurations. This doesn't make much of a difference
> since we're not using user input for pSeries NUMA setup,
> but this will change in the next patches.
> 
> To avoid breaking existing setups, gate this change by
> checking for legacy NUMA support.
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

> ---
>  hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 64fe567f5d..fe395e80a3 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -19,6 +19,24 @@
>  /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>  #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>  
> +static bool spapr_numa_is_symmetrical(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            if (numa_info[src].distance[dst] !=
> +                numa_info[dst].distance[src]) {
> +                return false;
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>  
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>      }
> +
> +    /*
> +     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
> +     * 1 NUMA node) will not benefit from anything we're going to do
> +     * after this point.
> +     */
> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        return;
> +    }
> +
> +    if (!spapr_numa_is_symmetrical(machine)) {
> +        error_report("Asymmetrical NUMA topologies aren't supported "
> +                     "in the pSeries machine");
> +        exit(EXIT_FAILURE);
> +    }
> +
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings
  2020-09-24 19:50 ` [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
@ 2020-09-25  2:38   ` David Gibson
  2020-09-25 13:16   ` Greg Kurz
  1 sibling, 0 replies; 20+ messages in thread
From: David Gibson @ 2020-09-25  2:38 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:56PM -0300, Daniel Henrique Barboza wrote:
> This is the first guest visible change introduced in
> spapr_numa.c. The previous settings of both reference-points
> and maxdomains were too restrictive, but enough for the
> existing associativity we're setting in the resources.
> 
> We'll change that in the following patches, populating the
> associativity arrays based on user input. For those changes
> to be effective, reference-points and maxdomains must be
> more flexible. After this patch, we'll have 4 distinct
> levels of NUMA (0x4, 0x3, 0x2, 0x1) and maxdomains will
> allow for any type of configuration the user intends to
> do - under the scope and limitations of PAPR itself, of
> course.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

Although..

> ---
>  hw/ppc/spapr_numa.c | 40 ++++++++++++++++++++++++++++++++--------
>  1 file changed, 32 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 990a5fce08..ea33439a15 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -222,24 +222,48 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>   */
>  void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>  {
> +    MachineState *ms = MACHINE(spapr);
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      uint32_t refpoints[] = {
>          cpu_to_be32(0x4),
> -        cpu_to_be32(0x4),
> +        cpu_to_be32(0x3),
>          cpu_to_be32(0x2),
> +        cpu_to_be32(0x1),
>      };
>      uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
> -    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
> +    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
>      uint32_t maxdomains[] = {
>          cpu_to_be32(4),
> -        maxdomain,
> -        maxdomain,
> -        maxdomain,
> -        cpu_to_be32(spapr->gpu_numa_id),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain)
>      };
>  
> -    if (smc->pre_5_1_assoc_refpoints) {
> -        nr_refpoints = 2;
> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        uint32_t legacy_refpoints[] = {
> +            cpu_to_be32(0x4),
> +            cpu_to_be32(0x4),
> +            cpu_to_be32(0x2),
> +        };
> +        uint32_t legacy_maxdomain = spapr->gpu_numa_id > 1 ? 1 : 0;
> +        uint32_t legacy_maxdomains[] = {
> +            cpu_to_be32(4),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(spapr->gpu_numa_id),
> +        };
> +
> +        nr_refpoints = 3;
> +
> +        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
> +        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));

It would be nice to have a G_STATIC_ASSERT() or QEMU_BUILD_BUG_MSG()
to ensure that the two structures are the same size; if they become
different the memcpy is wildly unsafe.

> +        /* pseries-5.0 and older reference-points array is {0x4, 0x4} */
> +        if (smc->pre_5_1_assoc_refpoints) {
> +            nr_refpoints = 2;
> +        }
>      }
>  
>      _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



* Re: [PATCH v2 5/6] spapr_numa: consider user input when defining associativity
  2020-09-24 19:50 ` [PATCH v2 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
@ 2020-09-25  3:39   ` David Gibson
  2020-09-25 14:42     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2020-09-25  3:39 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:57PM -0300, Daniel Henrique Barboza wrote:
> This patch puts all the pieces together to finally allow user
> input when defining the NUMA topology of the spapr guest.
> 
> We have one more kernel restriction to handle in this patch:
> the associativity array of node 0 must be filled with zeroes
> [1]. The strategy below ensures that this will happen.

Can you clarify exactly what "node 0" means?  Qemu and the guest
kernel each have node numberings, which I don't think are necessarily
the same.  With PAPR's scheme, it's not totally clear what "node 0"
even means.

> spapr_numa_define_associativity_domains() will read the distance
> (already PAPRified) between the nodes from numa_state and determine
> the appropriate NUMA level. The NUMA domains, processed in ascending
> order, are going to be matched via NUMA levels, and the lowest
> associativity domain value is assigned to that specific level for
> both.
> 
> This will create a heuristic where the associativities of the first
> nodes have higher priority and are re-used in new matches, instead of
> overwriting them with a new associativity match. This is necessary
> because neither QEMU, nor the pSeries kernel, supports multiple
> associativity domains for each resource, meaning that we have to
> decide which associativity relation is relevant.

Hmm.  I find that a bit hard to follow.  So IIUC, what's going on is
you start out by treating every node as as distant as possible from
every other: they have a unique value at every level of the
associativity array.  Then you collapse together nodes that are
supposed to be closer by making some of their associativity entries
match.  Is that right?

> Ultimately, all of this results in a best effort approximation for
> the actual NUMA distances the user input in the command line. Given
> the nature of how PAPR itself interprets NUMA distances versus the
> expectations risen by how ACPI SLIT works, there might be better
> algorithms but, in the end, it'll also result in another way to
> approximate what the user really wanted.
> 
> To keep this commit message no longer than it already is, the next
> patch will update the existing documentation in ppc-spapr-numa.rst
> with more in depth details and design considerations/drawbacks.
> 
> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  hw/ppc/spapr_numa.c | 101 +++++++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 100 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index ea33439a15..21cae3f799 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -80,12 +80,99 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>      }
>  }
>  
> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
> +{
> +    uint8_t numa_level;

This function reinforces my feeling that pre-PAPRizing the distances
might not be the best idea.  It could go directly from the user
distance to level just as easily.

> +
> +    switch (distance) {
> +    case 10:
> +        numa_level = 0x4;
> +        break;
> +    case 20:
> +        numa_level = 0x3;
> +        break;
> +    case 40:
> +        numa_level = 0x2;
> +        break;
> +    case 80:
> +        numa_level = 0x1;
> +        break;
> +    default:
> +        numa_level = 0;
> +    }
> +
> +    return numa_level;
> +}
> +
> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    int src, dst;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * This is how the associativity domain between A and B
> +             * is calculated:
> +             *
> +             * - get the distance between them
> +             * - get the correspondent NUMA level for this distance
> +             * - the arrays were initialized with their own numa_ids,
> +             * and we're calculating the distance in node_id ascending order,
> +             * starting from node 0. This will have a cascade effect in the
> +             * algorithm because the associativity domains that node 0 defines
> +             * will be carried over to the other nodes, and node 1
> +             * associativities will be carried over unless there's already a
> +             * node 0 associativity assigned, and so on. This happens because
> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
> +             * the associativity domain of both, for the given NUMA level.

Ok, I follow this description better than the one in the commit message.

> +             * The PPC kernel expects the associativity domains of node 0 to
> +             * be always 0, and this algorithm will grant that by default.
> +             */
> +            uint8_t distance = numa_info[src].distance[dst];
> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
> +            uint32_t assoc_src, assoc_dst;
> +
> +            /*
> +             * n_level = 0 means that the distance is greater than our last
> +             * rounded value (120). In this case there is no NUMA level match
> +             * between src and dst and we can skip the rest of the loop.
> +             *
> +             * The Linux kernel will assume that the distance between src and
> +             * dst, in this case of no match, is 10 (local distance) doubled
> +             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
> +             *
> +             * This logic can be seen in the Linux kernel source code, as of
> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
> +             */
> +            if (n_level == 0) {
> +                continue;
> +            }
> +
> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
> +
> +            if (assoc_src < assoc_dst) {

So, I don't think you need this logic.  It doesn't really matter which
associativity value you let win, as long as you're consistent.  So you
can just base it on src < dst, rather than assoc_src < assoc_dst.
But src <= dst always, because of the way you've constructed the loops.

> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
> +            } else {
> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
> +            }

Don't you need to match the values at all levels >= n_level, rather
than just n_level?  AFAICT it doesn't make sense for two components to
be in the same associativity domain at one level, but different
domains at "bigger" levels (i.e. something like two cores sharing an
L2 cache, but not an L3 cache).

> +        }
> +    }
> +
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int i, j, max_nodes_with_gpus;
> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>  
>      /*
>       * For all associativity arrays: first position is the size,
> @@ -99,6 +186,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      for (i = 0; i < nb_numa_nodes; i++) {
>          spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +
> +        /*
> +         * Fill all associativity domains of non-zero NUMA nodes with
> +         * node_id. This is required because the default value (0) is
> +         * considered a match with associativity domains of node 0.
> +         */
> +        if (!using_legacy_numa && i != 0) {
> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            }
> +        }
>      }
>  
>      /*
> @@ -128,7 +226,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>       * 1 NUMA node) will not benefit from anything we're going to do
>       * after this point.
>       */
> -    if (spapr_machine_using_legacy_numa(spapr)) {
> +    if (using_legacy_numa) {
>          return;
>      }
>  
> @@ -139,6 +237,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      }
>  
>      spapr_numa_PAPRify_distances(machine);
> +    spapr_numa_define_associativity_domains(spapr);
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support
  2020-09-24 19:50 ` [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
@ 2020-09-25  3:43   ` David Gibson
  0 siblings, 0 replies; 20+ messages in thread
From: David Gibson @ 2020-09-25  3:43 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:58PM -0300, Daniel Henrique Barboza wrote:
> This update provides more in-depth information about the
> choices and drawbacks of the new NUMA support for the
> spapr machine.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---
>  docs/specs/ppc-spapr-numa.rst | 213 ++++++++++++++++++++++++++++++++++
>  1 file changed, 213 insertions(+)
> 
> diff --git a/docs/specs/ppc-spapr-numa.rst b/docs/specs/ppc-spapr-numa.rst
> index e762038022..994bfb996f 100644
> --- a/docs/specs/ppc-spapr-numa.rst
> +++ b/docs/specs/ppc-spapr-numa.rst
> @@ -189,3 +189,216 @@ QEMU up to 5.1, as follows:
>  
>  This also means that user input in QEMU command line does not change the
>  NUMA distancing inside the guest for the pseries machine.
> +
> +New NUMA mechanics for pseries in QEMU 5.2
> +==========================================

I'd suggest putting the new mechanics - which we intend to become the
normal mechanics - at the top of the document, with the old version later.

> +
> +Starting in QEMU 5.2, the pseries machine now considers user input when
> +setting NUMA topology of the guest. The following changes were made:
> +
> +* ibm,associativity-reference-points was changed to {0x4, 0x3, 0x2, 0x1}, allowing
> +  for 4 distinct NUMA distance values based on the NUMA levels
> +
> +* ibm,max-associativity-domains was changed to support multiple associativity
> +  domains in all NUMA levels. This is needed to ensure user flexibility
> +
> +* ibm,associativity for all resources now varies with user input
> +
> +These changes are only effective for pseries-5.2 and newer machines that are
> +created with more than one NUMA node (disregarding NUMA nodes created by
> +the machine itself, e.g. NVLink 2 GPUs). The now legacy support has been
> +around for such a long time, with users seeing NUMA distances 10 and 40
> +(and 80 if using NVLink2 GPUs), and there is no need to disrupt the
> +existing experience of those guests.
> +
> +To bring pseries users an experience closer to what x86 users have when
> +tuning NUMA, we had to operate under the current pseries Linux kernel logic
> +described in `How the pseries Linux guest calculates NUMA distances`_. The result
> +is that we needed to translate NUMA distance user input to pseries
> +Linux kernel input.
> +
> +Translating user distance to kernel distance
> +--------------------------------------------
> +
> +User input for NUMA distance can vary from 10 to 254. We need to translate
> +that to the values that the Linux kernel operates on (10, 20, 40, 80, 160).
> +This is how it is being done:
> +
> +* user distance 11 to 30 will be interpreted as 20
> +* user distance 31 to 60 will be interpreted as 40
> +* user distance 61 to 120 will be interpreted as 80
> +* user distance 121 and beyond will be interpreted as 160
> +* user distance 10 stays 10
> +
> +The reasoning behind this approximation is to avoid any rounding to the local
> +distance (10), keeping it exclusive to the 4th NUMA level (which is still
> +exclusive to the node_id). All other ranges were chosen at the developer's
> +discretion as to what would be (somewhat) sensible considering the user input.
> +Any other strategy can be used here, but in the end the reality is that we'll
> +have to accept that a large array of values will be translated to the same
> +NUMA topology in the guest, e.g. this user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  31 120
> +  1  31  10  30
> +  2 120  30  10
> +
> +And this other user input:
> +
> +::
> +
> +      0   1   2
> +  0  10  60  61
> +  1  60  10  11
> +  2  61  11  10
> +
> +Will both be translated to the same values internally:
> +
> +::
> +
> +      0   1   2
> +  0  10  40  80
> +  1  40  10  20
> +  2  80  20  10
> +
> +Users are encouraged to use only the kernel values in the NUMA definition to
> +avoid being taken by surprise by what the guest actually sees in the
> +topology. There are enough potential surprises that are inherent to the
> +associativity domain assignment process, discussed below.
> +
> +
> +How associativity domains are assigned
> +--------------------------------------
> +
> +LOPAPR allows more than one associativity array (or 'string') per allocated
> +resource. This would be used to represent that the resource has multiple
> +connections with the board, and then the operating system, when deciding
> +NUMA distancing, should consider the associativity information that provides
> +the shortest distance.
> +
> +The spapr implementation does not support multiple associativity arrays per
> +resource, and neither does the pseries Linux kernel. We'll have to represent the
> +NUMA topology using one associativity per resource, which means that choices
> +and compromises are going to be made.
> +
> +Consider the following NUMA topology entered by user input:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  20  20  40
> +  1  20  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +Honoring just the relative distances of node 0 to every other node, one possible
> +value for all associativity arrays would be:
> +
> +* node 0: 0 B A 0
> +* node 1: 0 0 A 1
> +* node 2: 0 0 A 2
> +* node 3: 0 B 0 3
> +
> +With the reference points {0x4, 0x3, 0x2, 0x1}, for node 0:
> +
> +* distance from 0 to 1 is 20 (no match at 0x4, will match at 0x3)
> +* distance from 0 to 2 is 20 (no match at 0x4, will match at 0x3)
> +* distance from 0 to 3 is 40 (no match at 0x4 and 0x3, will match
> +  at 0x2)
> +
> +The distances related to node 0 are well represented. Doing the same for node 1, and keeping
> +in mind that we don't need to revisit node 0 again, the distance from node 1 to
> +2 is 80, matching at 0x4:
> +
> +* node 1: C 0 A 1
> +* node 2: C 0 A 2
> +
> +Here we already have the first conflict. Even if we assign a new associativity
> +domain at 0x4 for 1 and 2, and we do that in the code, the kernel will define
> +the distance between 1 and 2 as 20, not 80, because both 1 and 2 have the "A"
> +associativity domain from the previous step. If we decide to discard the
> +associativity with "A" then the node 0 distances are compromised.
> +
> +Following up with the distance from 1 to 3 being 40 (a match in 0x2) we have another
> +decision to make. These are the current associativity domains of each:
> +
> +* node 1: C 0 A 1
> +* node 3: 0 B 0 3
> +
> +There is already an associativity domain at 0x2 in node 3, "B", which was assigned
> +by the node 0 distances. If we define a new associativity domain at this level
> +for 1 and 3 we will overwrite the existing associativity between 0 and 3. What
> +the code is doing in this case is to assign the existing domain to the
> +current associativity, in this case, "B" is now assigned to the 0x2 of node 1,
> +resulting in the following associativity arrays:
> +
> +* node 0: 0 B A 0
> +* node 1: C 0 A 1
> +* node 2: C B A 2
> +* node 3: 0 B 0 3
> +
> +In the last step we will analyze just nodes 2 and 3. The desired distance between
> +2 and 3 is 20, i.e. a match in 0x3. Node 2 already has a domain assigned in 0x3,
> +A, so we do the same as we did in the previous case and assign it to node 3
> +at 0x3. This is the end result for the associativity arrays:
> +
> +* node 0: 0 B A 0
> +* node 1: C 0 A 1
> +* node 2: C B A 2
> +* node 3: 0 B A 3
> +
> +The kernel will read these arrays and will calculate the following NUMA topology for
> +the guest:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  20  20  20
> +  1  20  10  20  20
> +  2  20  20  10  20
> +  3  20  20  20  10
> +
> +Which is not what the user wanted, but it is what the current logic and implementation
> +constraints of the kernel and QEMU will provide inside the LOPAPR specification.
> +
> +Changing a single value, especially a low distance value, makes for drastic changes
> +in the result. For example, with the same user input from above, but changing the
> +node distance from 0 to 1 to 40:
> +
> +::
> +
> +      0   1   2   3
> +  0  10  40  20  40
> +  1  40  10  80  40
> +  2  20  80  10  20
> +  3  40  40  20  10
> +
> +This is the result inside the guest, applying the same heuristics:
> +
> +::
> +
> +  $ numactl -H
> +  available: 4 nodes (0-3)
> +  (...)
> +  node distances:
> +  node   0   1   2   3
> +    0:  10  40  20  20
> +    1:  40  10  80  40
> +    2:  20  80  10  20
> +    3:  20  40  20  10
> +
> +This result is much closer to the user input and only a single distance was changed
> +from the original.
> +
> +The kernel will always match with the shortest associativity domain possible, and we're
> +attempting to retain the previously established relations between the nodes. This means
> +that a distance equal to 20 between nodes A and B and the same distance 20 between nodes
> +A and F will cause the distance between B and F to also be 20. The same will happen to
> +other distances, but shorter distances take precedence in the distance calculation.
> +
> +Users are welcome to use this knowledge and experiment with the input to get the
> +NUMA topology they want, or as close to it as they can get. The important thing is
> +to keep expectations in line with what we are capable of providing at this moment: an
> +approximation.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-24 19:50 ` [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
  2020-09-25  2:36   ` David Gibson
@ 2020-09-25  3:48   ` David Gibson
  2020-09-25 12:41     ` Daniel Henrique Barboza
  1 sibling, 1 reply; 20+ messages in thread
From: David Gibson @ 2020-09-25  3:48 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
> The pSeries machine does not support asymmetrical NUMA
> configurations. This doesn't make much of a difference
> since we're not using user input for pSeries NUMA setup,
> but this will change in the next patches.
> 
> To avoid breaking existing setups, gate this change by
> checking for legacy NUMA support.
> 
> Reviewed-by: Greg Kurz <groug@kaod.org>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

Having read the rest of the series, I realized there's another type of
configuration that PAPR can't represent, so possibly we should add
logic to catch that as well.  That's what I'm going to call
"non-transitive" configurations, e.g.

Node	0	1	2
0	10	20	40
1	20	10	20
2	40	20	10	

Basically the closeness of 0 to 1 and 1 to 2 forces them all to be in
the same domain at every PAPR level, even though 0-2 is supposed to be
more expensive.
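
For illustration only (not part of the series): roughly speaking, the rounded
distances have to satisfy dist(a, c) <= max(dist(a, b), dist(b, c)) for every
triple of nodes to be representable under this scheme. A standalone sketch of
such a check - hypothetical helper name, plain arrays instead of the QEMU
NodeInfo structures - fed with the table above:

    #include <stdbool.h>
    #include <stdio.h>

    #define NODES 3

    /* The example above: 0-1 and 1-2 are close (20), but 0-2 claims 40. */
    static const int distance[NODES][NODES] = {
        { 10, 20, 40 },
        { 20, 10, 20 },
        { 40, 20, 10 },
    };

    static bool numa_is_transitive(void)
    {
        int a, b, c;

        for (a = 0; a < NODES; a++) {
            for (b = 0; b < NODES; b++) {
                for (c = 0; c < NODES; c++) {
                    int max = distance[a][b] > distance[b][c] ?
                              distance[a][b] : distance[b][c];

                    if (distance[a][c] > max) {
                        printf("d(%d,%d)=%d, but d(%d,%d)=%d and d(%d,%d)=%d\n",
                               a, c, distance[a][c], a, b, distance[a][b],
                               b, c, distance[b][c]);
                        return false;
                    }
                }
            }
        }
        return true;
    }

    int main(void)
    {
        printf("representable: %s\n", numa_is_transitive() ? "yes" : "no");
        return 0;
    }

For this table it flags the 0-1-2 triple and reports the configuration as not
representable.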

> ---
>  hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
>  1 file changed, 34 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 64fe567f5d..fe395e80a3 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -19,6 +19,24 @@
>  /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>  #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>  
> +static bool spapr_numa_is_symmetrical(MachineState *ms)
> +{
> +    int src, dst;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = src; dst < nb_numa_nodes; dst++) {
> +            if (numa_info[src].distance[dst] !=
> +                numa_info[dst].distance[src]) {
> +                return false;
> +            }
> +        }
> +    }
> +
> +    return true;
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
> @@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>  
>          spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>      }
> +
> +    /*
> +     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
> +     * 1 NUMA node) will not benefit from anything we're going to do
> +     * after this point.
> +     */
> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        return;
> +    }
> +
> +    if (!spapr_numa_is_symmetrical(machine)) {
> +        error_report("Asymmetrical NUMA topologies aren't supported "
> +                     "in the pSeries machine");
> +        exit(EXIT_FAILURE);
> +    }
> +
>  }
>  
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-25  3:48   ` David Gibson
@ 2020-09-25 12:41     ` Daniel Henrique Barboza
  2020-09-26  7:49       ` David Gibson
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-25 12:41 UTC (permalink / raw)
  To: David Gibson; +Cc: qemu-ppc, qemu-devel, groug



On 9/25/20 12:48 AM, David Gibson wrote:
> On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
>> The pSeries machine does not support asymmetrical NUMA
>> configurations. This doesn't make much of a different
>> since we're not using user input for pSeries NUMA setup,
>> but this will change in the next patches.
>>
>> To avoid breaking existing setups, gate this change by
>> checking for legacy NUMA support.
>>
>> Reviewed-by: Greg Kurz <groug@kaod.org>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> 
> Having read the rest of the series, I realized there's another type of
> configuration that PAPR can't represent, so possibly we should add
> logic to catch that as well.  That's what I'm going to call
> "non-transitive" configurations, e.g.
> 
> Node	0	1	2
> 0	10	20	40
> 1	20	10	20
> 2	40	20	10	
> 
> Basically the closeness of 0 to 1 and 1 to 2 forces them all to be in
> the same domain at every PAPR level, even though 0-2 is supposed to be
> more expensive.


Yes, this is correct. I'm not sure how to proceed in this case
though. Should we error out?


DHB

> 
>> ---
>>   hw/ppc/spapr_numa.c | 34 ++++++++++++++++++++++++++++++++++
>>   1 file changed, 34 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 64fe567f5d..fe395e80a3 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -19,6 +19,24 @@
>>   /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>>   #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>>   
>> +static bool spapr_numa_is_symmetrical(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            if (numa_info[src].distance[dst] !=
>> +                numa_info[dst].distance[src]) {
>> +                return false;
>> +            }
>> +        }
>> +    }
>> +
>> +    return true;
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -61,6 +79,22 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>   
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>>       }
>> +
>> +    /*
>> +     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
>> +     * 1 NUMA node) will not benefit from anything we're going to do
>> +     * after this point.
>> +     */
>> +    if (spapr_machine_using_legacy_numa(spapr)) {
>> +        return;
>> +    }
>> +
>> +    if (!spapr_numa_is_symmetrical(machine)) {
>> +        error_report("Asymmetrical NUMA topologies aren't supported "
>> +                     "in the pSeries machine");
>> +        exit(EXIT_FAILURE);
>> +    }
>> +
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance
  2020-09-25  2:35   ` David Gibson
@ 2020-09-25 12:44     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-25 12:44 UTC (permalink / raw)
  To: David Gibson; +Cc: qemu-ppc, qemu-devel, groug



On 9/24/20 11:35 PM, David Gibson wrote:
> On Thu, Sep 24, 2020 at 04:50:55PM -0300, Daniel Henrique Barboza wrote:
>> QEMU allows the user to set NUMA distances in the command line.
>> For ACPI architectures like x86, this means that user input is
>> used to populate the SLIT table, and the guest perceives the
>> distances as the user chooses to.
>>
>> PPC64 does not work that way. In the PAPR concept of NUMA,
>> associativity relations between the NUMA nodes are provided by
>> the device tree, and the guest kernel is free to calculate the
>> distances as it sees fit. Given how ACPI architectures work,
>> this puts the pSeries machine in a strange spot - users expect
>> to define NUMA distances like in the ACPI case, but QEMU does
>> not have control over it. To give pSeries users a similar
>> experience, we'll need to bring kernel specifics to QEMU
>> to approximate the NUMA distances.
>>
>> The pSeries kernel works with the NUMA distance range 10,
>> 20, 40, 80 and 160. The code starts at 10 (local distance) and
>> searches for a match in the first NUMA level between the
>> resources. If there is no match, the distance is doubled and
>> then it proceeds to try to match in the next NUMA level. Rinse
>> and repeat for MAX_DISTANCE_REF_POINTS levels.
>>
>> This patch introduces a spapr_numa_PAPRify_distances() helper
>> that translates the user distances to kernel distance, which
>> we're going to use to determine the associativity domains for
>> the NUMA nodes.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> 
> The idea of rounding the distances like this seems pretty good to me.
Since each level is a multiple of the distance of the previous one, it
might be more theoretically correct to place the thresholds at the
geometric mean between each level, rather than the arithmetic mean.  I
very much doubt it makes much difference in practice though, and this
> is simpler.
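
(For reference, the geometric-mean thresholds would sit at roughly sqrt(20*40)
~= 28, sqrt(40*80) ~= 57 and sqrt(80*160) ~= 113, against the 30, 60 and 120
cut-offs the patch uses, so the two schemes only disagree for a handful of
input values.)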
> 
There is one nit I'm less happy with, though...
> 
>> ---
>>   hw/ppc/spapr_numa.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
>>   1 file changed, 44 insertions(+)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index fe395e80a3..990a5fce08 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -37,6 +37,49 @@ static bool spapr_numa_is_symmetrical(MachineState *ms)
>>       return true;
>>   }
>>   
>> +/*
>> + * This function will translate the user distances into
>> + * what the kernel understands as possible values: 10
>> + * (local distance), 20, 40, 80 and 160. Current heuristic
>> + * is:
>> + *
>> + *  - distances between 11 and 30 inclusive -> rounded to 20
>> + *  - distances between 31 and 60 inclusive -> rounded to 40
>> + *  - distances between 61 and 120 inclusive -> rounded to 80
>> + *  - everything above 120 -> 160
>> + *
>> + * This step can also be done in the same time as the NUMA
>> + * associativity domains calculation, at the cost of extra
>> + * complexity. We chose to keep it simpler.
>> + *
>> + * Note: this will overwrite the distance values in
>> + * ms->numa_state->nodes.
>> + */
>> +static void spapr_numa_PAPRify_distances(MachineState *ms)
>> +{
>> +    int src, dst;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t rounded_distance = 160;
>> +
>> +            if (distance > 11 && distance <= 30) {
>> +                rounded_distance = 20;
>> +            } else if (distance > 31 && distance <= 60) {
>> +                rounded_distance = 40;
>> +            } else if (distance > 61 && distance <= 120) {
>> +                rounded_distance = 80;
>> +            }
>> +
>> +            numa_info[src].distance[dst] = rounded_distance;
>> +            numa_info[dst].distance[src] = rounded_distance;
> 
> ..I don't love the fact that we alter the distance table in place.
> Even though it was never exposed to the guest, I'd prefer not to
> destroy the information the user passed in.  It could lead to
> surprising results with QMP introspection, and it may make future
> extensions more difficult.
> 
> So I'd prefer to either (a) just leave the table as is and round
> on-demand with a paprify_distance(NN) -> {20,40,80,..} type function,
> or (b) create a parallel, spapr local, table with the rounded
> distances


I did something similar with (a) in the very first version of this series.
I'll fall back to on-demand translation logic to avoid changing numa_info.
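
As a rough illustration of what option (a) could look like - hypothetical
helper name, standalone, using the same cut-offs as the patch:

    #include <stdint.h>
    #include <stdio.h>

    /* Round a user-supplied distance to one of the values the pseries kernel
     * works with, without ever touching the table in ms->numa_state. */
    static uint8_t papr_distance(uint8_t user_distance)
    {
        if (user_distance <= 10) {
            return 10;
        } else if (user_distance <= 30) {
            return 20;
        } else if (user_distance <= 60) {
            return 40;
        } else if (user_distance <= 120) {
            return 80;
        }

        return 160;
    }

    int main(void)
    {
        printf("%d %d %d\n", papr_distance(11), papr_distance(31),
               papr_distance(121));   /* prints: 20 40 160 */
        return 0;
    }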



Thanks,


DHB

> 
>> +        }
>> +    }
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>> @@ -95,6 +138,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>           exit(EXIT_FAILURE);
>>       }
>>   
>> +    spapr_numa_PAPRify_distances(machine);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings
  2020-09-24 19:50 ` [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
  2020-09-25  2:38   ` David Gibson
@ 2020-09-25 13:16   ` Greg Kurz
  1 sibling, 0 replies; 20+ messages in thread
From: Greg Kurz @ 2020-09-25 13:16 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Thu, 24 Sep 2020 16:50:56 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> This is the first guest visible change introduced in
> spapr_numa.c. The previous settings of both reference-points
> and maxdomains were too restrictive, but enough for the
> existing associativity we're setting in the resources.
> 
> We'll change that in the following patches, populating the
> associativity arrays based on user input. For those changes
> to be effective, reference-points and maxdomains must be
> more flexible. After this patch, we'll have 4 distinct
> levels of NUMA (0x4, 0x3, 0x2, 0x1) and maxdomains will
> allow for any type of configuration the user intends to
> do - under the scope and limitations of PAPR itself, of
> course.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr_numa.c | 40 ++++++++++++++++++++++++++++++++--------
>  1 file changed, 32 insertions(+), 8 deletions(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 990a5fce08..ea33439a15 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -222,24 +222,48 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>   */
>  void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>  {
> +    MachineState *ms = MACHINE(spapr);
>      SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>      uint32_t refpoints[] = {
>          cpu_to_be32(0x4),
> -        cpu_to_be32(0x4),
> +        cpu_to_be32(0x3),
>          cpu_to_be32(0x2),
> +        cpu_to_be32(0x1),
>      };
>      uint32_t nr_refpoints = ARRAY_SIZE(refpoints);
> -    uint32_t maxdomain = cpu_to_be32(spapr->gpu_numa_id > 1 ? 1 : 0);
> +    uint32_t maxdomain = ms->numa_state->num_nodes + spapr->gpu_numa_id;
>      uint32_t maxdomains[] = {
>          cpu_to_be32(4),
> -        maxdomain,
> -        maxdomain,
> -        maxdomain,
> -        cpu_to_be32(spapr->gpu_numa_id),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain),
> +        cpu_to_be32(maxdomain)
>      };
>  
> -    if (smc->pre_5_1_assoc_refpoints) {
> -        nr_refpoints = 2;
> +    if (spapr_machine_using_legacy_numa(spapr)) {
> +        uint32_t legacy_refpoints[] = {
> +            cpu_to_be32(0x4),
> +            cpu_to_be32(0x4),
> +            cpu_to_be32(0x2),
> +        };
> +        uint32_t legacy_maxdomain = spapr->gpu_numa_id > 1 ? 1 : 0;
> +        uint32_t legacy_maxdomains[] = {
> +            cpu_to_be32(4),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(legacy_maxdomain),
> +            cpu_to_be32(spapr->gpu_numa_id),
> +        };
> +
> +        nr_refpoints = 3;
> +
> +        memcpy(refpoints, legacy_refpoints, sizeof(legacy_refpoints));
> +        memcpy(maxdomains, legacy_maxdomains, sizeof(legacy_maxdomains));
> +
> +        /* pseries-5.0 and older reference-points array is {0x4, 0x4} */
> +        if (smc->pre_5_1_assoc_refpoints) {
> +            nr_refpoints = 2;
> +        }
>      }
>  
>      _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",



^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 5/6] spapr_numa: consider user input when defining associativity
  2020-09-25  3:39   ` David Gibson
@ 2020-09-25 14:42     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-25 14:42 UTC (permalink / raw)
  To: David Gibson; +Cc: qemu-ppc, qemu-devel, groug



On 9/25/20 12:39 AM, David Gibson wrote:
> On Thu, Sep 24, 2020 at 04:50:57PM -0300, Daniel Henrique Barboza wrote:
>> This patch puts all the pieces together to finally allow user
>> input when defining the NUMA topology of the spapr guest.
>>
>> We have one more kernel restriction to handle in this patch:
>> the associativity array of node 0 must be filled with zeroes
>> [1]. The strategy below ensures that this will happen.
> 
> Can you clarify exactly what "node 0" means?  Qemu and the guest
> kernel each have node numberings, which I don't think are necessarily
> the same.  With PAPR's scheme, it's not totally clear what "node 0"
> even means.
> 
>> spapr_numa_define_associativity_domains() will read the distance
>> (already PAPRified) between the nodes from numa_state and determine
>> the appropriate NUMA level. The NUMA domains, processed in ascending
>> order, are going to be matched via NUMA levels, and the lowest
>> associativity domain value is assigned to that specific level for
>> both.
>>
>> This will create a heuristic where the associativities of the first
>> nodes have higher priority and are re-used in new matches, instead of
>> overwriting them with a new associativity match. This is necessary
>> because neither QEMU, nor the pSeries kernel, supports multiple
>> associativity domains for each resource, meaning that we have to
>> decide which associativity relation is relevant.
> 
> Hmm.  I find that a bit hard to follow.  So IIUC, what's going on is
you start out by treating every node as distant as possible from
> every other: they have a unique value at every level of the
> associativity array.  Then you collapse together nodes that are
> supposed to be closer by making some of their associativity entries
> match.  Is that right?
> 
>> Ultimately, all of this results in a best effort approximation for
>> the actual NUMA distances the user input in the command line. Given
>> the nature of how PAPR itself interprets NUMA distances versus the
>> expectations risen by how ACPI SLIT works, there might be better
>> algorithms but, in the end, it'll also result in another way to
>> approximate what the user really wanted.
>>
>> To keep this commit message no longer than it already is, the next
>> patch will update the existing documentation in ppc-spapr-numa.rst
>> with more in depth details and design considerations/drawbacks.
>>
>> [1] https://lore.kernel.org/linuxppc-dev/5e8fbea3-8faf-0951-172a-b41a2138fbcf@gmail.com/
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
>>   hw/ppc/spapr_numa.c | 101 +++++++++++++++++++++++++++++++++++++++++++-
>>   1 file changed, 100 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index ea33439a15..21cae3f799 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -80,12 +80,99 @@ static void spapr_numa_PAPRify_distances(MachineState *ms)
>>       }
>>   }
>>   
>> +static uint8_t spapr_numa_get_numa_level(uint8_t distance)
>> +{
>> +    uint8_t numa_level;
> 
> This function reinforces my feeling that pre-PAPRizing the distances
> might not be the best idea.  It could go directly from the user
> distance to level just as easily.

Yeah, the logic from patch 3 will end up being folded into this
function.

> 
>> +
>> +    switch (distance) {
>> +    case 10:
>> +        numa_level = 0x4;
>> +        break;
>> +    case 20:
>> +        numa_level = 0x3;
>> +        break;
>> +    case 40:
>> +        numa_level = 0x2;
>> +        break;
>> +    case 80:
>> +        numa_level = 0x1;
>> +        break;
>> +    default:
>> +        numa_level = 0;
>> +    }
>> +
>> +    return numa_level;
>> +}
>> +
>> +static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
>> +{
>> +    MachineState *ms = MACHINE(spapr);
>> +    NodeInfo *numa_info = ms->numa_state->nodes;
>> +    int nb_numa_nodes = ms->numa_state->num_nodes;
>> +    int src, dst;
>> +
>> +    for (src = 0; src < nb_numa_nodes; src++) {
>> +        for (dst = src; dst < nb_numa_nodes; dst++) {
>> +            /*
>> +             * This is how the associativity domain between A and B
>> +             * is calculated:
>> +             *
>> +             * - get the distance between them
>> +             * - get the correspondent NUMA level for this distance
>> +             * - the arrays were initialized with their own numa_ids,
>> +             * and we're calculating the distance in node_id ascending order,
>> +             * starting from node 0. This will have a cascade effect in the
>> +             * algorithm because the associativity domains that node 0 defines
>> +             * will be carried over to the other nodes, and node 1
>> +             * associativities will be carried over unless there's already a
>> +             * node 0 associativity assigned, and so on. This happens because
>> +             * we'll assign the lowest value of assoc_src and assoc_dst to be
>> +             * the associativity domain of both, for the given NUMA level.
> 
> Ok, I follow this description better than the one in the commit message.
> 
>> +             * The PPC kernel expects the associativity domains of node 0 to
>> +             * be always 0, and this algorithm will grant that by default.
>> +             */
>> +            uint8_t distance = numa_info[src].distance[dst];
>> +            uint8_t n_level = spapr_numa_get_numa_level(distance);
>> +            uint32_t assoc_src, assoc_dst;
>> +
>> +            /*
>> +             * n_level = 0 means that the distance is greater than our last
>> +             * rounded value (120). In this case there is no NUMA level match
>> +             * between src and dst and we can skip the remaining of the loop.
>> +             *
>> +             * The Linux kernel will assume that the distance between src and
>> +             * dst, in this case of no match, is 10 (local distance) doubled
>> +             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
>> +             * levels (4), so this gives us 10*2*2*2*2 = 160.
>> +             *
>> +             * This logic can be seen in the Linux kernel source code, as of
>> +             * v5.9, in arch/powerpc/mm/numa.c, function __node_distance().
>> +             */
>> +            if (n_level == 0) {
>> +                continue;
>> +            }
>> +
>> +            assoc_src = be32_to_cpu(spapr->numa_assoc_array[src][n_level]);
>> +            assoc_dst = be32_to_cpu(spapr->numa_assoc_array[dst][n_level]);
>> +
>> +            if (assoc_src < assoc_dst) {
> 
> So, I don't think you need this logic.  It doesn't really matter which
> associativity value you let win, as long as you're consistent.  So you
> can just base it on src < dst, rather than assooc_src < assoc_dst.
> But src <= dst always, because of the way you've constructed the loops.
> 
>> +                spapr->numa_assoc_array[dst][n_level] = cpu_to_be32(assoc_src);
>> +            } else {
>> +                spapr->numa_assoc_array[src][n_level] = cpu_to_be32(assoc_dst);
>> +            }
> 
> Don't you need to match the values a all levels >= n_level, rather
> than just n_level?  AFAICT it doesn't make sense for two components to
> be in the same associativity domain at one level, but different
> domains at "bigger" levels (i.e. something like two cores sharing an
> L2 cache, but not an L3 cache).


I'm not sure what you meant, so let's go through an example. Let's say we
have something like:

      0    1     2
0    10   20   80
1    20   10   40
2    80   40   10

Starting arrays are:

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   1   1
2:  2   2   2   2

What the code will do is:

a) distance 0 to 1 is 20 (0x3 match), 0 to 2 is 80 (0x1 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   0   1
2:  0   2   2   2


b) distance from 1 to 2 is 40 (0x2 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  1   1   0   1
2:  0   1   2   2

What the kernel will do (via __node_distance()) is to stop at the first match.
So:

- distance 0 to 1: first match at 0x3, distance = 20:

    0x1 0x2 0x3 0x4
0:  0   0   0*   0
1:  1   1   0*   1

- distance 0 to 2: first match at 0x1, distance = 80:

    0x1 0x2 0x3 0x4
0:  0*   0   0   0
2:  0*   1   2   2

- distance 1 to 2: first match at 0x2, distance = 40:

    0x1 0x2 0x3 0x4
1:  1   *1   0   1
2:  0   *1   2   2


Note that, for this example, we have an exact match between user input and
what the kernel calculates.
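
For concreteness, here is a standalone rendition of that first-match lookup
(modeled on the behaviour described above, not the verbatim kernel source).
The arrays are the step (b) arrays, listed in reference-points order, i.e.
most specific level (0x4) first:

    #include <stdio.h>

    #define LOCAL_DISTANCE 10
    #define REF_POINTS     4   /* reference-points {0x4, 0x3, 0x2, 0x1} */

    /* Step (b) arrays in reference-points order (0x4, 0x3, 0x2, 0x1). */
    static const int assoc[3][REF_POINTS] = {
        { 0, 0, 0, 0 },   /* node 0 */
        { 1, 0, 1, 1 },   /* node 1 */
        { 2, 2, 1, 0 },   /* node 2 */
    };

    /* Start at the local distance and double it for every level, from the
     * most specific to the least specific, until the domains match. */
    static int node_distance(int a, int b)
    {
        int distance = LOCAL_DISTANCE;
        int i;

        for (i = 0; i < REF_POINTS; i++) {
            if (assoc[a][i] == assoc[b][i]) {
                break;
            }
            distance *= 2;
        }
        return distance;
    }

    int main(void)
    {
        printf("0-1: %d\n", node_distance(0, 1));   /* 20 */
        printf("0-2: %d\n", node_distance(0, 2));   /* 80 */
        printf("1-2: %d\n", node_distance(1, 2));   /* 40 */
        return 0;
    }

Running it prints 20, 80 and 40 for the three pairs, matching the walkthrough
above.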

Back in (a), the reason I'm not doing matches in all n_levels above the
first one is that it doesn't matter for the kernel (it'll stop at the
first match) and that it would lead to unintended matches between the earlier
and newer nodes, due to how I'm cascading the associativity domains.

Let's say we go back to (a) and do this:

a2) distance 0 to 1 is 20 (0x3 match), 0 to 2 is 80 (0x1 match):

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  0   0   0   1
2:  0   2   2   2

We forced the match on all n_levels above the first match in the associativity
array of 1. Then in (b), where we evaluate 1 to 2:

b2) distance 1 to 2 is 40 (0x2 match), cascading the already existing 0
associativity:

    0x1 0x2 0x3 0x4
0:  0   0   0   0
1:  0   0   0   1
2:  0   0   2   2

The kernel will read this as:

0 to 1: 20
0 to 2: 40 <--- this was 80 via user input
1 to 2: 40


In fact, reading what you said again, that sounds more like a vision in line
with the "Form 0" interpretation of the reference-points, where it would
represent something closer to the physical SMP topology itself.
With the "Form 1" format we're using, we're talking about 'near' and 'far'
resources and most significant boundaries, regardless of any correlation
with the underlying topology. So instead of thinking

"(i.e. something like two cores sharing an L2 cache, but not an L3 cache)"

We would think "the most significant performance boundary between nodes 0 and 1
is at the 0x2 position of the reference-points".


Hope that clarified a bit.


DHB


> 
>> +        }
>> +    }
>> +
>> +}
>> +
>>   void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>                                      MachineState *machine)
>>   {
>>       SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
>>       int nb_numa_nodes = machine->numa_state->num_nodes;
>>       int i, j, max_nodes_with_gpus;
>> +    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
>>   
>>       /*
>>        * For all associativity arrays: first position is the size,
>> @@ -99,6 +186,17 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       for (i = 0; i < nb_numa_nodes; i++) {
>>           spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
>>           spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
>> +
>> +        /*
>> +         * Fill all associativity domains of non-zero NUMA nodes with
>> +         * node_id. This is required because the default value (0) is
>> +         * considered a match with associativity domains of node 0.
>> +         */
>> +        if (!using_legacy_numa && i != 0) {
>> +            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
>> +                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            }
>> +        }
>>       }
>>   
>>       /*
>> @@ -128,7 +226,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>        * 1 NUMA node) will not benefit from anything we're going to do
>>        * after this point.
>>        */
>> -    if (spapr_machine_using_legacy_numa(spapr)) {
>> +    if (using_legacy_numa) {
>>           return;
>>       }
>>   
>> @@ -139,6 +237,7 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>       }
>>   
>>       spapr_numa_PAPRify_distances(machine);
>> +    spapr_numa_define_associativity_domains(spapr);
>>   }
>>   
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-25 12:41     ` Daniel Henrique Barboza
@ 2020-09-26  7:49       ` David Gibson
  2020-09-27 11:41         ` Daniel Henrique Barboza
  0 siblings, 1 reply; 20+ messages in thread
From: David Gibson @ 2020-09-26  7:49 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Fri, Sep 25, 2020 at 09:41:02AM -0300, Daniel Henrique Barboza wrote:
> 
> 
> On 9/25/20 12:48 AM, David Gibson wrote:
> > On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
> > > The pSeries machine does not support asymmetrical NUMA
> > > configurations. This doesn't make much of a different
> > > since we're not using user input for pSeries NUMA setup,
> > > but this will change in the next patches.
> > > 
> > > To avoid breaking existing setups, gate this change by
> > > checking for legacy NUMA support.
> > > 
> > > Reviewed-by: Greg Kurz <groug@kaod.org>
> > > Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> > 
> > Having read the rest of the series, I realized there's another type of
> > configuration that PAPR can't represent, so possibly we should add
> > logic to catch that as well.  That's what I'm going to call
> > "non-transitive" configurations, e.g.
> > 
> > Node	0	1	2
> > 0	10	20	40
> > 1	20	10	20
> > 2	40	20	10	
> > 
> > Basically the closeness of 0 to 1 and 1 to 2 forces them all to be in
> > the same domain at every PAPR level, even though 0-2 is supposed to be
> > more expensive.
> 
> Yes, this is correct. I'm not sure how to proceed in this case
> though. Should we error out?

Given that we're already erroring on asymmetric configurations, I
think it makes sense to error for these as well.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-26  7:49       ` David Gibson
@ 2020-09-27 11:41         ` Daniel Henrique Barboza
  2020-09-28  6:25           ` David Gibson
  0 siblings, 1 reply; 20+ messages in thread
From: Daniel Henrique Barboza @ 2020-09-27 11:41 UTC (permalink / raw)
  To: David Gibson; +Cc: qemu-ppc, qemu-devel, groug



On 9/26/20 4:49 AM, David Gibson wrote:
> On Fri, Sep 25, 2020 at 09:41:02AM -0300, Daniel Henrique Barboza wrote:
>>
>>
>> On 9/25/20 12:48 AM, David Gibson wrote:
>>> On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
>>>> The pSeries machine does not support asymmetrical NUMA
>>>> configurations. This doesn't make much of a different
>>>> since we're not using user input for pSeries NUMA setup,
>>>> but this will change in the next patches.
>>>>
>>>> To avoid breaking existing setups, gate this change by
>>>> checking for legacy NUMA support.
>>>>
>>>> Reviewed-by: Greg Kurz <groug@kaod.org>
>>>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>>>
>>> Having read the rest of the series, I realized there's another type of
>>> configuration that PAPR can't represent, so possibly we should add
>>> logic to catch that as well.  That's what I'm going to call
>>> "non-transitive" configurations, e.g.
>>>
>>> Node	0	1	2
>>> 0	10	20	40
>>> 1	20	10	20
>>> 2	40	20	10	
>>>
>>> Basically the closeness of 0 to 1 and 1 to 2 forces them all to be in
>>> the same domain at every PAPR level, even though 0-2 is supposed to be
>>> more expensive.
>>
>> Yes, this is correct. I'm not sure how to proceed in this case
>> though. Should we error out?
> 
> Given that we're already erroring on asymmetric configurations, I
> think it makes sense to error for these as well.

The thing is that asymmetrical configurations are an easy concept to enforce
for the user - the distance from A to B can't be different from B to A.

In the example you gave above, with 3 NUMA nodes, it is easy to spot where
the non-transitivity rule would hit. I'm afraid that if we add 2-3 more
NUMA nodes in the mix this will stop being straightforward, with more and
more combinations hitting the 'non-transitivity' rule, and erroring out
will end up being frustrating to the user.

I'd say that we should report this in the documentation as one more
limitation of the implementation (and PAPR). I wouldn't oppose
throwing a warning message either, letting the user know that the
approximation will be less precise than it already would be in this case.


Thanks,


DHB



> 


^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups
  2020-09-27 11:41         ` Daniel Henrique Barboza
@ 2020-09-28  6:25           ` David Gibson
  0 siblings, 0 replies; 20+ messages in thread
From: David Gibson @ 2020-09-28  6:25 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, groug


On Sun, Sep 27, 2020 at 08:41:30AM -0300, Daniel Henrique Barboza wrote:
> 
> 
> On 9/26/20 4:49 AM, David Gibson wrote:
> > On Fri, Sep 25, 2020 at 09:41:02AM -0300, Daniel Henrique Barboza wrote:
> > > 
> > > 
> > > On 9/25/20 12:48 AM, David Gibson wrote:
> > > > On Thu, Sep 24, 2020 at 04:50:54PM -0300, Daniel Henrique Barboza wrote:
> > > > > The pSeries machine does not support asymmetrical NUMA
> > > > > configurations. This doesn't make much of a different
> > > > > since we're not using user input for pSeries NUMA setup,
> > > > > but this will change in the next patches.
> > > > > 
> > > > > To avoid breaking existing setups, gate this change by
> > > > > checking for legacy NUMA support.
> > > > > 
> > > > > Reviewed-by: Greg Kurz <groug@kaod.org>
> > > > > Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> > > > 
> > > > Having read the rest of the series, I realized there's another type of
> > > > configuration that PAPR can't represent, so possibly we should add
> > > > logic to catch that as well.  That's what I'm going to call
> > > > "non-transitive" configurations, e.g.
> > > > 
> > > > Node	0	1	2
> > > > 0	10	20	40
> > > > 1	20	10	20
> > > > 2	40	20	10	
> > > > 
> > > > Basically the closeness of 0 to 1 and 1 to 2 forces them all to be in
> > > > the same domain at every PAPR level, even though 0-2 is supposed to be
> > > > more expensive.
> > > 
> > > Yes, this is correct. I'm not sure how to proceed in this case
> > > though. Should we error out?
> > 
> > Given that we're already erroring on asymmetric configurations, I
> > think it makes sense to error for these as well.
> 
> Thing is that asymmetrical configurations is an easy concept to enforce
> to the user - distance from A to B can't be different from B to A.
> 
> In the example you gave above, with 3 NUMA nodes, is easy to spot where
> the non-transitivity rule would hit. I'm afraid that if we add 2-3 more
> NUMA nodes in the mix this will stop being straightforward, with more and
> more combinations hitting the 'non-transitivity' rule, and erroring out
> will end up being frustrating to the user.
> 
> I'd say that we should report this in the documentation as one more
> limitation of the implementation (and PAPR). I wouldn't oppose with
> throwing a warning message either, letting the user know that the
> approximation will be less precise than it already would be in this
> case.

Hmm... yes, I see your point.  Ok, let's go with your suggestion.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson


^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2020-09-28  6:34 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-09-24 19:50 [PATCH v2 0/6] pseries NUMA distance calculation Daniel Henrique Barboza
2020-09-24 19:50 ` [PATCH v2 1/6] spapr: add spapr_machine_using_legacy_numa() helper Daniel Henrique Barboza
2020-09-24 19:50 ` [PATCH v2 2/6] spapr_numa: forbid asymmetrical NUMA setups Daniel Henrique Barboza
2020-09-25  2:36   ` David Gibson
2020-09-25  3:48   ` David Gibson
2020-09-25 12:41     ` Daniel Henrique Barboza
2020-09-26  7:49       ` David Gibson
2020-09-27 11:41         ` Daniel Henrique Barboza
2020-09-28  6:25           ` David Gibson
2020-09-24 19:50 ` [PATCH v2 3/6] spapr_numa: translate regular NUMA distance to PAPR distance Daniel Henrique Barboza
2020-09-25  2:35   ` David Gibson
2020-09-25 12:44     ` Daniel Henrique Barboza
2020-09-24 19:50 ` [PATCH v2 4/6] spapr_numa: change reference-points and maxdomain settings Daniel Henrique Barboza
2020-09-25  2:38   ` David Gibson
2020-09-25 13:16   ` Greg Kurz
2020-09-24 19:50 ` [PATCH v2 5/6] spapr_numa: consider user input when defining associativity Daniel Henrique Barboza
2020-09-25  3:39   ` David Gibson
2020-09-25 14:42     ` Daniel Henrique Barboza
2020-09-24 19:50 ` [PATCH v2 6/6] specs/ppc-spapr-numa: update with new NUMA support Daniel Henrique Barboza
2020-09-25  3:43   ` David Gibson
