* [PATCH v8 0/7] pSeries FORM2 affinity support
@ 2021-09-17 21:27 Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 1/7] spapr_numa.c: split FORM1 code into helpers Daniel Henrique Barboza
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

Hi,

This new version contains changes made in patch 7 after looking more
carefully into the auto-generated NUMA node issue. I also took the
opportunity to make a change that Greg requested in an earlier
review but didn't make it into v7.


Changes from v7:
- patch 3:
  * removed redundant 'no FORM2 available yet' comments
- patch 7:
  * removed the unneeded handling of numa_state->num_nodes
- v7 link: https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg04156.html

Changes from v6:
- patch 1:
  * added Greg's r-b
- patch 2:
  * added the missing NUMA nodes number check
  * added Greg's r-b
- patch 3 (former patch 4):
  * no changes.
- former patch 3 (associativity_reset()): dropped
- patch 4 (new):
  * added get_associativity()
  * do not allocate FORM1_assoc_array in the heap
- patch 5:
  * fixed typo
  * added new check function to be called in CAS
- patch 6:
  * do not allocate FORM2_assoc_array in the heap
- patch 7 (new):
  * FORM2 fixes to handle the implicitly added QEMU NUMA node when there's
  no NUMA node added by the user.
- v6 link: https://lists.gnu.org/archive/html/qemu-devel/2021-09/msg02892.html

Daniel Henrique Barboza (7):
  spapr_numa.c: split FORM1 code into helpers
  spapr_numa.c: scrap 'legacy_numa' concept
  spapr_numa.c: parametrize FORM1 macros
  spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array
  spapr: move FORM1 verifications to post CAS
  spapr_numa.c: FORM2 NUMA affinity support
  spapr_numa.c: handle auto NUMA node with no distance info

 hw/ppc/spapr.c              |  41 +---
 hw/ppc/spapr_hcall.c        |   7 +
 hw/ppc/spapr_numa.c         | 380 ++++++++++++++++++++++++++++++------
 include/hw/ppc/spapr.h      |  35 ++--
 include/hw/ppc/spapr_numa.h |   1 +
 include/hw/ppc/spapr_ovec.h |   1 +
 6 files changed, 354 insertions(+), 111 deletions(-)

-- 
2.31.1




* [PATCH v8 1/7] spapr_numa.c: split FORM1 code into helpers
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
@ 2021-09-17 21:27 ` Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 2/7] spapr_numa.c: scrap 'legacy_numa' concept Daniel Henrique Barboza
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The upcoming FORM2 NUMA affinity will support asymmetric NUMA topologies
and doesn't need to be concerned with all the legacy support for older
pseries FORM1 guests.

We're also not going to calculate associativity domains based on numa
distance (via spapr_numa_define_associativity_domains) since the
distances will be written directly into new DT properties.

Let's split FORM1 code into its own functions to allow for easier
insertion of FORM2 logic later on.

Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 35 +++++++++++++++++++++++++----------
 1 file changed, 25 insertions(+), 10 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 779f18b994..786def7c73 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -92,7 +92,7 @@ static uint8_t spapr_numa_get_numa_level(uint8_t distance)
     return 0;
 }
 
-static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
+static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
 {
     MachineState *ms = MACHINE(spapr);
     NodeInfo *numa_info = ms->numa_state->nodes;
@@ -155,8 +155,11 @@ static void spapr_numa_define_associativity_domains(SpaprMachineState *spapr)
 
 }
 
-void spapr_numa_associativity_init(SpaprMachineState *spapr,
-                                   MachineState *machine)
+/*
+ * Set NUMA machine state data based on FORM1 affinity semantics.
+ */
+static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
+                                           MachineState *machine)
 {
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
@@ -225,7 +228,13 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
         exit(EXIT_FAILURE);
     }
 
-    spapr_numa_define_associativity_domains(spapr);
+    spapr_numa_define_FORM1_domains(spapr);
+}
+
+void spapr_numa_associativity_init(SpaprMachineState *spapr,
+                                   MachineState *machine)
+{
+    spapr_numa_FORM1_affinity_init(spapr, machine);
 }
 
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
@@ -302,12 +311,8 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
     return ret;
 }
 
-/*
- * Helper that writes ibm,associativity-reference-points and
- * max-associativity-domains in the RTAS pointed by @rtas
- * in the DT @fdt.
- */
-void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
+static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
+                                           void *fdt, int rtas)
 {
     MachineState *ms = MACHINE(spapr);
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
@@ -365,6 +370,16 @@ void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
                      maxdomains, sizeof(maxdomains)));
 }
 
+/*
+ * Helper that writes ibm,associativity-reference-points and
+ * max-associativity-domains in the RTAS pointed by @rtas
+ * in the DT @fdt.
+ */
+void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
+{
+    spapr_numa_FORM1_write_rtas_dt(spapr, fdt, rtas);
+}
+
 static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
                                               SpaprMachineState *spapr,
                                               target_ulong opcode,
-- 
2.31.1




* [PATCH v8 2/7] spapr_numa.c: scrap 'legacy_numa' concept
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 1/7] spapr_numa.c: split FORM1 code into helpers Daniel Henrique Barboza
@ 2021-09-17 21:27 ` Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros Daniel Henrique Barboza
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

When first introduced, 'legacy_numa' was a way to refer to guests that
wouldn't be affected by associativity domain calculations: namely, the
ones with only 1 NUMA node, and pre-5.2 guests that shouldn't be
affected by it because it would be a userspace change. Calling them
'legacy_numa' was a convenient label for these cases.

We're about to introduce a new NUMA affinity, FORM2, and this concept
of 'legacy_numa' is now a bit misleading because, although it is called
'legacy', it is in fact a FORM1-exclusive constraint.

This patch removes spapr_machine_using_legacy_numa() and open codes the
conditions in each caller. While we're at it, move the chunk inside
spapr_numa_FORM1_affinity_init() that sets all numa_assoc_array domains
with 'node_id' to spapr_numa_define_FORM1_domains(). This chunk was
being executed if !pre_5_2_numa_associativity and num_nodes > 1, the
same conditions under which spapr_numa_define_FORM1_domains() is called
shortly after.

Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 47 +++++++++++++++++++--------------------------
 1 file changed, 20 insertions(+), 27 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 786def7c73..bf520d42b2 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,15 +19,6 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
-static bool spapr_machine_using_legacy_numa(SpaprMachineState *spapr)
-{
-    MachineState *machine = MACHINE(spapr);
-    SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(machine);
-
-    return smc->pre_5_2_numa_associativity ||
-           machine->numa_state->num_nodes <= 1;
-}
-
 static bool spapr_numa_is_symmetrical(MachineState *ms)
 {
     int src, dst;
@@ -97,7 +88,18 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
     MachineState *ms = MACHINE(spapr);
     NodeInfo *numa_info = ms->numa_state->nodes;
     int nb_numa_nodes = ms->numa_state->num_nodes;
-    int src, dst, i;
+    int src, dst, i, j;
+
+    /*
+     * Fill all associativity domains of non-zero NUMA nodes with
+     * node_id. This is required because the default value (0) is
+     * considered a match with associativity domains of node 0.
+     */
+    for (i = 1; i < nb_numa_nodes; i++) {
+        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+            spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+        }
+    }
 
     for (src = 0; src < nb_numa_nodes; src++) {
         for (dst = src; dst < nb_numa_nodes; dst++) {
@@ -164,7 +166,6 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     SpaprMachineClass *smc = SPAPR_MACHINE_GET_CLASS(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int i, j, max_nodes_with_gpus;
-    bool using_legacy_numa = spapr_machine_using_legacy_numa(spapr);
 
     /*
      * For all associativity arrays: first position is the size,
@@ -178,17 +179,6 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     for (i = 0; i < nb_numa_nodes; i++) {
         spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
         spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
-
-        /*
-         * Fill all associativity domains of non-zero NUMA nodes with
-         * node_id. This is required because the default value (0) is
-         * considered a match with associativity domains of node 0.
-         */
-        if (!using_legacy_numa && i != 0) {
-            for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
-                spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
-            }
-        }
     }
 
     /*
@@ -214,11 +204,13 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     }
 
     /*
-     * Legacy NUMA guests (pseries-5.1 and older, or guests with only
-     * 1 NUMA node) will not benefit from anything we're going to do
-     * after this point.
+     * Guests pseries-5.1 and older use zeroed associativity domains,
+     * i.e. no domain definition based on NUMA distance input.
+     *
+     * Same thing with guests that have only one NUMA node.
      */
-    if (using_legacy_numa) {
+    if (smc->pre_5_2_numa_associativity ||
+        machine->numa_state->num_nodes <= 1) {
         return;
     }
 
@@ -334,7 +326,8 @@ static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
         cpu_to_be32(maxdomain)
     };
 
-    if (spapr_machine_using_legacy_numa(spapr)) {
+    if (smc->pre_5_2_numa_associativity ||
+        ms->numa_state->num_nodes <= 1) {
         uint32_t legacy_refpoints[] = {
             cpu_to_be32(0x4),
             cpu_to_be32(0x4),
-- 
2.31.1




* [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 1/7] spapr_numa.c: split FORM1 code into helpers Daniel Henrique Barboza
  2021-09-17 21:27 ` [PATCH v8 2/7] spapr_numa.c: scrap 'legacy_numa' concept Daniel Henrique Barboza
@ 2021-09-17 21:27 ` Daniel Henrique Barboza
  2021-09-20  8:54   ` Greg Kurz
  2021-09-17 21:27 ` [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array Daniel Henrique Barboza
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The next preliminary step to introduce NUMA FORM2 affinity is to make
the existing code independent of FORM1 macros and values, i.e.
MAX_DISTANCE_REF_POINTS, NUMA_ASSOC_SIZE and VCPU_ASSOC_SIZE. This patch
accomplishes that by doing the following:

- move the NUMA related macros from spapr.h to spapr_numa.c where they
are used. spapr.h gets instead a 'NUMA_NODES_MAX_NUM' macro that is used
to refer to the maximum number of NUMA nodes, including GPU nodes, that
the machine can support;

- MAX_DISTANCE_REF_POINTS and NUMA_ASSOC_SIZE are renamed to
FORM1_DIST_REF_POINTS and FORM1_NUMA_ASSOC_SIZE. These FORM1 specific
macros are used in FORM1 init functions;

- code that uses MAX_DISTANCE_REF_POINTS now retrieves the
max_dist_ref_points value using get_max_dist_ref_points().
NUMA_ASSOC_SIZE is replaced by get_numa_assoc_size() and VCPU_ASSOC_SIZE
is replaced by get_vcpu_assoc_size(). These functions are used by the
generic device tree functions and h_home_node_associativity() and will
allow them to switch between FORM1 and FORM2 without changing their core
logic.
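
As a sketch of where this is headed, patch 6 of this series extends
these helpers to dispatch on the CAS-negotiated affinity, e.g.:

    static int get_numa_assoc_size(SpaprMachineState *spapr)
    {
        if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
            return FORM2_NUMA_ASSOC_SIZE;
        }

        return FORM1_NUMA_ASSOC_SIZE;
    }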

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c    | 74 ++++++++++++++++++++++++++++++------------
 include/hw/ppc/spapr.h | 28 ++++++++--------
 2 files changed, 67 insertions(+), 35 deletions(-)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index bf520d42b2..08e2d6aed8 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -19,6 +19,33 @@
 /* Moved from hw/ppc/spapr_pci_nvlink2.c */
 #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
 
+/*
+ * Retrieves max_dist_ref_points of the current NUMA affinity.
+ */
+static int get_max_dist_ref_points(SpaprMachineState *spapr)
+{
+    return FORM1_DIST_REF_POINTS;
+}
+
+/*
+ * Retrieves numa_assoc_size of the current NUMA affinity.
+ */
+static int get_numa_assoc_size(SpaprMachineState *spapr)
+{
+    return FORM1_NUMA_ASSOC_SIZE;
+}
+
+/*
+ * Retrieves vcpu_assoc_size of the current NUMA affinity.
+ *
+ * vcpu_assoc_size is the size of ibm,associativity array
+ * for CPUs, which has an extra element (vcpu_id) in the end.
+ */
+static int get_vcpu_assoc_size(SpaprMachineState *spapr)
+{
+    return get_numa_assoc_size(spapr) + 1;
+}
+
 static bool spapr_numa_is_symmetrical(MachineState *ms)
 {
     int src, dst;
@@ -96,7 +123,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
      * considered a match with associativity domains of node 0.
      */
     for (i = 1; i < nb_numa_nodes; i++) {
-        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
             spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
         }
     }
@@ -134,7 +161,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
              *
              * The Linux kernel will assume that the distance between src and
              * dst, in this case of no match, is 10 (local distance) doubled
-             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
+             * for each NUMA it didn't match. We have FORM1_DIST_REF_POINTS
              * levels (4), so this gives us 10*2*2*2*2 = 160.
              *
              * This logic can be seen in the Linux kernel source code, as of
@@ -169,7 +196,7 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
 
     /*
      * For all associativity arrays: first position is the size,
-     * position MAX_DISTANCE_REF_POINTS is always the numa_id,
+     * position FORM1_DIST_REF_POINTS is always the numa_id,
      * represented by the index 'i'.
      *
      * This will break on sparse NUMA setups, when/if QEMU starts
@@ -177,8 +204,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
      * 'i' will be a valid node_id set by the user.
      */
     for (i = 0; i < nb_numa_nodes; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
-        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
+        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -192,15 +219,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
 
     for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
+        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
 
-        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
+        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
             uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
                                  SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
             spapr->numa_assoc_array[i][j] = gpu_assoc;
         }
 
-        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
+        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -234,13 +261,15 @@ void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
 {
     _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
                       spapr->numa_assoc_array[nodeid],
-                      sizeof(spapr->numa_assoc_array[nodeid]))));
+                      get_numa_assoc_size(spapr) * sizeof(uint32_t))));
 }
 
 static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
                                            PowerPCCPU *cpu)
 {
-    uint32_t *vcpu_assoc = g_new(uint32_t, VCPU_ASSOC_SIZE);
+    int max_distance_ref_points = get_max_dist_ref_points(spapr);
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
+    uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
     int index = spapr_get_vcpu_id(cpu);
 
     /*
@@ -249,10 +278,10 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
      * 0, put cpu_id last, then copy the remaining associativity
      * domains.
      */
-    vcpu_assoc[0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS + 1);
-    vcpu_assoc[VCPU_ASSOC_SIZE - 1] = cpu_to_be32(index);
+    vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
+    vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
     memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
-           (VCPU_ASSOC_SIZE - 2) * sizeof(uint32_t));
+           (vcpu_assoc_size - 2) * sizeof(uint32_t));
 
     return vcpu_assoc;
 }
@@ -261,12 +290,13 @@ int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
                             int offset, PowerPCCPU *cpu)
 {
     g_autofree uint32_t *vcpu_assoc = NULL;
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
 
     vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, cpu);
 
     /* Advertise NUMA via ibm,associativity */
     return fdt_setprop(fdt, offset, "ibm,associativity", vcpu_assoc,
-                       VCPU_ASSOC_SIZE * sizeof(uint32_t));
+                       vcpu_assoc_size * sizeof(uint32_t));
 }
 
 
@@ -274,17 +304,18 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
                                          int offset)
 {
     MachineState *machine = MACHINE(spapr);
+    int max_distance_ref_points = get_max_dist_ref_points(spapr);
     int nb_numa_nodes = machine->numa_state->num_nodes;
     int nr_nodes = nb_numa_nodes ? nb_numa_nodes : 1;
     uint32_t *int_buf, *cur_index, buf_len;
     int ret, i;
 
     /* ibm,associativity-lookup-arrays */
-    buf_len = (nr_nodes * MAX_DISTANCE_REF_POINTS + 2) * sizeof(uint32_t);
+    buf_len = (nr_nodes * max_distance_ref_points + 2) * sizeof(uint32_t);
     cur_index = int_buf = g_malloc0(buf_len);
     int_buf[0] = cpu_to_be32(nr_nodes);
      /* Number of entries per associativity list */
-    int_buf[1] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
+    int_buf[1] = cpu_to_be32(max_distance_ref_points);
     cur_index += 2;
     for (i = 0; i < nr_nodes; i++) {
         /*
@@ -293,8 +324,8 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
          */
         uint32_t *associativity = spapr->numa_assoc_array[i];
         memcpy(cur_index, ++associativity,
-               sizeof(uint32_t) * MAX_DISTANCE_REF_POINTS);
-        cur_index += MAX_DISTANCE_REF_POINTS;
+               sizeof(uint32_t) * max_distance_ref_points);
+        cur_index += max_distance_ref_points;
     }
     ret = fdt_setprop(fdt, offset, "ibm,associativity-lookup-arrays", int_buf,
                       (cur_index - int_buf) * sizeof(uint32_t));
@@ -383,6 +414,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
     target_ulong procno = args[1];
     PowerPCCPU *tcpu;
     int idx, assoc_idx;
+    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
 
     /* only support procno from H_REGISTER_VPA */
     if (flags != 0x1) {
@@ -401,7 +433,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
      * 12 associativity domains for vcpus. Assert and bail if that's
      * not the case.
      */
-    G_STATIC_ASSERT((VCPU_ASSOC_SIZE - 1) <= 12);
+    g_assert((vcpu_assoc_size - 1) <= 12);
 
     vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, tcpu);
     /* assoc_idx starts at 1 to skip associativity size */
@@ -422,9 +454,9 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
          * macro. The ternary will fill the remaining registers with -1
          * after we went through vcpu_assoc[].
          */
-        a = assoc_idx < VCPU_ASSOC_SIZE ?
+        a = assoc_idx < vcpu_assoc_size ?
             be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
-        b = assoc_idx < VCPU_ASSOC_SIZE ?
+        b = assoc_idx < vcpu_assoc_size ?
             be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
 
         args[idx] = ASSOCIATIVITY(a, b);
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 637652ad16..814e087e98 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -100,23 +100,23 @@ typedef enum {
 
 #define FDT_MAX_SIZE                    0x200000
 
+/* Max number of GPUs per system */
+#define NVGPU_MAX_NUM              6
+
+/* Max number of NUMA nodes */
+#define NUMA_NODES_MAX_NUM         (MAX_NODES + NVGPU_MAX_NUM)
+
 /*
- * NUMA related macros. MAX_DISTANCE_REF_POINTS was taken
- * from Linux kernel arch/powerpc/mm/numa.h. It represents the
- * amount of associativity domains for non-CPU resources.
+ * NUMA FORM1 macros. FORM1_DIST_REF_POINTS was taken from
+ * MAX_DISTANCE_REF_POINTS in arch/powerpc/mm/numa.h from Linux
+ * kernel source. It represents the amount of associativity domains
+ * for non-CPU resources.
  *
- * NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
+ * FORM1_NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
  * array for any non-CPU resource.
- *
- * VCPU_ASSOC_SIZE represents the size of ibm,associativity array
- * for CPUs, which has an extra element (vcpu_id) in the end.
  */
-#define MAX_DISTANCE_REF_POINTS    4
-#define NUMA_ASSOC_SIZE            (MAX_DISTANCE_REF_POINTS + 1)
-#define VCPU_ASSOC_SIZE            (NUMA_ASSOC_SIZE + 1)
-
-/* Max number of these GPUsper a physical box */
-#define NVGPU_MAX_NUM                6
+#define FORM1_DIST_REF_POINTS            4
+#define FORM1_NUMA_ASSOC_SIZE            (FORM1_DIST_REF_POINTS + 1)
 
 typedef struct SpaprCapabilities SpaprCapabilities;
 struct SpaprCapabilities {
@@ -249,7 +249,7 @@ struct SpaprMachineState {
     unsigned gpu_numa_id;
     SpaprTpmProxy *tpm_proxy;
 
-    uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
+    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
-- 
2.31.1




* [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
                   ` (2 preceding siblings ...)
  2021-09-17 21:27 ` [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros Daniel Henrique Barboza
@ 2021-09-17 21:27 ` Daniel Henrique Barboza
  2021-09-20  9:21   ` Greg Kurz
  2021-09-17 21:28 ` [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS Daniel Henrique Barboza
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

Introducing a new NUMA affinity, FORM2, requires a new mechanism to
switch between affinity modes after CAS. Also, we want FORM2 data
structures and functions to be completely separated from the existing
FORM1 code, allowing us to avoid adding new code that inherits the
existing complexity of FORM1.

The idea of switching values used by the write_dt() functions in
spapr_numa.c was already introduced in the previous patch, and
the same approach will be used when dealing with the FORM1 and FORM2
arrays.

We can accomplish that by renaming the existing numa_assoc_array to
FORM1_assoc_array, which is now used exclusively to handle FORM1 affinity
data. A new helper get_associativity() is then introduced to be used by the
write_dt() functions to retrieve the current ibm,associativity array of
a given node, taking into account the affinity selection that might have
been made during CAS. All code that was using numa_assoc_array now needs
to retrieve the array by calling this function.

This will allow for an easier plug of FORM2 data later on.
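
To make this concrete, this is the shape get_associativity() takes once
FORM2 lands in patch 6, following the same dispatch pattern as the size
helpers:

    static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
    {
        if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
            return spapr->FORM2_assoc_array[node_id];
        }
        return spapr->FORM1_assoc_array[node_id];
    }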

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_hcall.c   |  1 +
 hw/ppc/spapr_numa.c    | 38 +++++++++++++++++++++++++-------------
 include/hw/ppc/spapr.h |  2 +-
 3 files changed, 27 insertions(+), 14 deletions(-)

diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 0e9a5b2e40..9056644890 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -17,6 +17,7 @@
 #include "kvm_ppc.h"
 #include "hw/ppc/fdt.h"
 #include "hw/ppc/spapr_ovec.h"
+#include "hw/ppc/spapr_numa.h"
 #include "mmu-book3s-v3.h"
 #include "hw/mem/memory-device.h"
 
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 08e2d6aed8..7339d00d20 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -46,6 +46,15 @@ static int get_vcpu_assoc_size(SpaprMachineState *spapr)
     return get_numa_assoc_size(spapr) + 1;
 }
 
+/*
+ * Retrieves the ibm,associativity array of NUMA node 'node_id'
+ * for the current NUMA affinity.
+ */
+static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
+{
+    return spapr->FORM1_assoc_array[node_id];
+}
+
 static bool spapr_numa_is_symmetrical(MachineState *ms)
 {
     int src, dst;
@@ -124,7 +133,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
      */
     for (i = 1; i < nb_numa_nodes; i++) {
         for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
-            spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
+            spapr->FORM1_assoc_array[i][j] = cpu_to_be32(i);
         }
     }
 
@@ -176,8 +185,8 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
              * and going up to 0x1.
              */
             for (i = n_level; i > 0; i--) {
-                assoc_src = spapr->numa_assoc_array[src][i];
-                spapr->numa_assoc_array[dst][i] = assoc_src;
+                assoc_src = spapr->FORM1_assoc_array[src][i];
+                spapr->FORM1_assoc_array[dst][i] = assoc_src;
             }
         }
     }
@@ -204,8 +213,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
      * 'i' will be a valid node_id set by the user.
      */
     for (i = 0; i < nb_numa_nodes; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
-        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
+        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
+        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -219,15 +228,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
 
     for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
-        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
+        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
 
         for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
             uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
                                  SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
-            spapr->numa_assoc_array[i][j] = gpu_assoc;
+            spapr->FORM1_assoc_array[i][j] = gpu_assoc;
         }
 
-        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
+        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
     }
 
     /*
@@ -259,8 +268,10 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
                                        int offset, int nodeid)
 {
+    uint32_t *associativity = get_associativity(spapr, nodeid);
+
     _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
-                      spapr->numa_assoc_array[nodeid],
+                      associativity,
                       get_numa_assoc_size(spapr) * sizeof(uint32_t))));
 }
 
@@ -270,6 +281,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
     int max_distance_ref_points = get_max_dist_ref_points(spapr);
     int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
     uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
+    uint32_t *associativity = get_associativity(spapr, cpu->node_id);
     int index = spapr_get_vcpu_id(cpu);
 
     /*
@@ -280,7 +292,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
      */
     vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
     vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
-    memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
+    memcpy(vcpu_assoc + 1, associativity + 1,
            (vcpu_assoc_size - 2) * sizeof(uint32_t));
 
     return vcpu_assoc;
@@ -319,10 +331,10 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
     cur_index += 2;
     for (i = 0; i < nr_nodes; i++) {
         /*
-         * For the lookup-array we use the ibm,associativity array,
-         * from numa_assoc_array. without the first element (size).
+         * For the lookup-array we use the ibm,associativity array of the
+         * current NUMA affinity, without the first element (size).
          */
-        uint32_t *associativity = spapr->numa_assoc_array[i];
+        uint32_t *associativity = get_associativity(spapr, i);
         memcpy(cur_index, ++associativity,
                sizeof(uint32_t) * max_distance_ref_points);
         cur_index += max_distance_ref_points;
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 814e087e98..6b3dfc5dc2 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -249,7 +249,7 @@ struct SpaprMachineState {
     unsigned gpu_numa_id;
     SpaprTpmProxy *tpm_proxy;
 
-    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
+    uint32_t FORM1_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
-- 
2.31.1




* [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
                   ` (3 preceding siblings ...)
  2021-09-17 21:27 ` [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array Daniel Henrique Barboza
@ 2021-09-17 21:28 ` Daniel Henrique Barboza
  2021-09-20  9:38   ` Greg Kurz
  2021-09-17 21:28 ` [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support Daniel Henrique Barboza
  2021-09-17 21:28 ` [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info Daniel Henrique Barboza
  6 siblings, 1 reply; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

FORM2 NUMA affinity is prepared to deal with empty (memory/CPU-less)
NUMA nodes. This is used by the DAX KMEM driver to locate a PAPR SCM
device that has a different latency than the NUMA node of the regular
memory. FORM2 is also able to deal with asymmetric NUMA distances
gracefully, something that our FORM1 implementation doesn't do.

Move these FORM1 verifications to a new function and wait until after
CAS, when we're sure that we're sticking with FORM1, to enforce them.

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c              | 33 -----------------------
 hw/ppc/spapr_hcall.c        |  6 +++++
 hw/ppc/spapr_numa.c         | 53 ++++++++++++++++++++++++++++++++-----
 include/hw/ppc/spapr_numa.h |  1 +
 4 files changed, 54 insertions(+), 39 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index d39fd4e644..ada85ee083 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2773,39 +2773,6 @@ static void spapr_machine_init(MachineState *machine)
     /* init CPUs */
     spapr_init_cpus(spapr);
 
-    /*
-     * check we don't have a memory-less/cpu-less NUMA node
-     * Firmware relies on the existing memory/cpu topology to provide the
-     * NUMA topology to the kernel.
-     * And the linux kernel needs to know the NUMA topology at start
-     * to be able to hotplug CPUs later.
-     */
-    if (machine->numa_state->num_nodes) {
-        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
-            /* check for memory-less node */
-            if (machine->numa_state->nodes[i].node_mem == 0) {
-                CPUState *cs;
-                int found = 0;
-                /* check for cpu-less node */
-                CPU_FOREACH(cs) {
-                    PowerPCCPU *cpu = POWERPC_CPU(cs);
-                    if (cpu->node_id == i) {
-                        found = 1;
-                        break;
-                    }
-                }
-                /* memory-less and cpu-less node */
-                if (!found) {
-                    error_report(
-                       "Memory-less/cpu-less nodes are not supported (node %d)",
-                                 i);
-                    exit(1);
-                }
-            }
-        }
-
-    }
-
     spapr->gpu_numa_id = spapr_numa_initial_nvgpu_numa_id(machine);
 
     /* Init numa_assoc_array */
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index 9056644890..222c1b6bbd 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -1198,6 +1198,12 @@ target_ulong do_client_architecture_support(PowerPCCPU *cpu,
     spapr->cas_pre_isa3_guest = !spapr_ovec_test(ov1_guest, OV1_PPC_3_00);
     spapr_ovec_cleanup(ov1_guest);
 
+    /*
+     * Check for NUMA affinity conditions now that we know which NUMA
+     * affinity the guest will use.
+     */
+    spapr_numa_associativity_check(spapr);
+
     /*
      * Ensure the guest asks for an interrupt mode we support;
      * otherwise terminate the boot.
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 7339d00d20..dfe4fada01 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -193,6 +193,48 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
 
 }
 
+static void spapr_numa_FORM1_affinity_check(MachineState *machine)
+{
+    int i;
+
+    /*
+     * Check we don't have a memory-less/cpu-less NUMA node
+     * Firmware relies on the existing memory/cpu topology to provide the
+     * NUMA topology to the kernel.
+     * And the linux kernel needs to know the NUMA topology at start
+     * to be able to hotplug CPUs later.
+     */
+    if (machine->numa_state->num_nodes) {
+        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
+            /* check for memory-less node */
+            if (machine->numa_state->nodes[i].node_mem == 0) {
+                CPUState *cs;
+                int found = 0;
+                /* check for cpu-less node */
+                CPU_FOREACH(cs) {
+                    PowerPCCPU *cpu = POWERPC_CPU(cs);
+                    if (cpu->node_id == i) {
+                        found = 1;
+                        break;
+                    }
+                }
+                /* memory-less and cpu-less node */
+                if (!found) {
+                    error_report(
+"Memory-less/cpu-less nodes are not supported with FORM1 NUMA (node %d)", i);
+                    exit(EXIT_FAILURE);
+                }
+            }
+        }
+    }
+
+    if (!spapr_numa_is_symmetrical(machine)) {
+        error_report(
+"Asymmetrical NUMA topologies aren't supported in the pSeries machine using FORM1 NUMA");
+        exit(EXIT_FAILURE);
+    }
+}
+
 /*
  * Set NUMA machine state data based on FORM1 affinity semantics.
  */
@@ -250,12 +292,6 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
         return;
     }
 
-    if (!spapr_numa_is_symmetrical(machine)) {
-        error_report("Asymmetrical NUMA topologies aren't supported "
-                     "in the pSeries machine");
-        exit(EXIT_FAILURE);
-    }
-
     spapr_numa_define_FORM1_domains(spapr);
 }
 
@@ -265,6 +301,11 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
     spapr_numa_FORM1_affinity_init(spapr, machine);
 }
 
+void spapr_numa_associativity_check(SpaprMachineState *spapr)
+{
+    spapr_numa_FORM1_affinity_check(MACHINE(spapr));
+}
+
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
                                        int offset, int nodeid)
 {
diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
index 6f9f02d3de..7cb3367400 100644
--- a/include/hw/ppc/spapr_numa.h
+++ b/include/hw/ppc/spapr_numa.h
@@ -24,6 +24,7 @@
  */
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine);
+void spapr_numa_associativity_check(SpaprMachineState *spapr);
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
 void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
                                        int offset, int nodeid);
-- 
2.31.1




* [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
                   ` (4 preceding siblings ...)
  2021-09-17 21:28 ` [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS Daniel Henrique Barboza
@ 2021-09-17 21:28 ` Daniel Henrique Barboza
  2021-09-20 15:10   ` Greg Kurz
  2021-09-17 21:28 ` [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info Daniel Henrique Barboza
  6 siblings, 1 reply; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Daniel Henrique Barboza, qemu-ppc, groug, david

The main feature of FORM2 affinity support is the separation of NUMA
distances from ibm,associativity information. This allows for a more
flexible and straightforward NUMA distance assignment without relying on
complex associations between several levels of NUMA via
ibm,associativity matches. Another feature is its extensibility. This base
support contains the facilities for NUMA distance assignment, but in the
future more facilities will be added for latency, performance, bandwidth
and so on.

This patch implements the base FORM2 affinity support as follows:

- the use of FORM2 associativity is indicated by using bit 2 of byte 5
of ibm,architecture-vec-5. A FORM2-aware guest can choose to use FORM1
or FORM2 affinity. Setting both forms will default to FORM2. We're not
advertising FORM2 for pseries-6.1 and older machine versions to prevent
guest-visible changes in those;

- ibm,associativity-reference-points has a new semantic. Instead of
being used to calculate distances via NUMA levels, it's now used to
indicate the primary domain index in the ibm,associativity array of
each resource. In our case it's set to { 1 }, matching the position
where we already place the numa_id (logical_domain_id);

- two new RTAS DT artifacts are introduced: ibm,numa-lookup-index-table
and ibm,numa-distance-table. The index table is used to list all the
NUMA logical domains of the platform, in ascending order, and allows for
sparse NUMA configurations (although QEMU ATM doesn't support that).
ibm,numa-distance-table is an array that contains all the distances from
the first NUMA node to all other nodes, then the distances from the
second NUMA node to all other nodes, and so on (see the worked example
after this list);

- get_max_dist_ref_points(), get_numa_assoc_size() and get_associativity()
now check for OV5_FORM2_AFFINITY and return FORM2 values if the guest
selected FORM2 affinity during CAS.
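
As a worked example (hypothetical values, not part of this patch), a
guest with two NUMA nodes and a distance of 20 between them would see,
rendered in dts form, roughly:

    ibm,associativity-reference-points = <0x1>;
    ibm,numa-lookup-index-table = <0x2 0x0 0x1>;     /* 2 nodes: ids 0, 1 */
    ibm,numa-distance-table = <0x4>, [0a 14 14 0a];  /* 10 20 20 10 */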

Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr.c              |   8 ++
 hw/ppc/spapr_numa.c         | 146 ++++++++++++++++++++++++++++++++++++
 include/hw/ppc/spapr.h      |   9 +++
 include/hw/ppc/spapr_ovec.h |   1 +
 4 files changed, 164 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index ada85ee083..babb662845 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2752,6 +2752,11 @@ static void spapr_machine_init(MachineState *machine)
 
     spapr_ovec_set(spapr->ov5, OV5_FORM1_AFFINITY);
 
+    /* Do not advertise FORM2 NUMA support for pseries-6.1 and older */
+    if (!smc->pre_6_2_numa_affinity) {
+        spapr_ovec_set(spapr->ov5, OV5_FORM2_AFFINITY);
+    }
+
     /* advertise support for dedicated HP event source to guests */
     if (spapr->use_hotplug_event_source) {
         spapr_ovec_set(spapr->ov5, OV5_HP_EVT);
@@ -4667,8 +4672,11 @@ DEFINE_SPAPR_MACHINE(6_2, "6.2", true);
  */
 static void spapr_machine_6_1_class_options(MachineClass *mc)
 {
+    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
+
     spapr_machine_6_2_class_options(mc);
     compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
+    smc->pre_6_2_numa_affinity = true;
 }
 
 DEFINE_SPAPR_MACHINE(6_1, "6.1", false);
diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index dfe4fada01..659513b405 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -24,6 +24,10 @@
  */
 static int get_max_dist_ref_points(SpaprMachineState *spapr)
 {
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        return FORM2_DIST_REF_POINTS;
+    }
+
     return FORM1_DIST_REF_POINTS;
 }
 
@@ -32,6 +36,10 @@ static int get_max_dist_ref_points(SpaprMachineState *spapr)
  */
 static int get_numa_assoc_size(SpaprMachineState *spapr)
 {
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        return FORM2_NUMA_ASSOC_SIZE;
+    }
+
     return FORM1_NUMA_ASSOC_SIZE;
 }
 
@@ -52,6 +60,9 @@ static int get_vcpu_assoc_size(SpaprMachineState *spapr)
  */
 static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
 {
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        return spapr->FORM2_assoc_array[node_id];
+    }
     return spapr->FORM1_assoc_array[node_id];
 }
 
@@ -295,14 +306,50 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
     spapr_numa_define_FORM1_domains(spapr);
 }
 
+/*
+ * Init NUMA FORM2 machine state data
+ */
+static void spapr_numa_FORM2_affinity_init(SpaprMachineState *spapr)
+{
+    int i;
+
+    /*
+     * For all resources but CPUs, FORM2 associativity arrays will
+     * be a size 2 array with the following format:
+     *
+     * ibm,associativity = {1, numa_id}
+     *
+     * CPUs will write an additional 'vcpu_id' on top of the arrays
+     * being initialized here. 'numa_id' is represented by the
+     * index 'i' of the loop.
+     *
+     * Given that this initialization is also valid for GPU associativity
+     * arrays, handle everything in one single step by populating the
+     * arrays up to NUMA_NODES_MAX_NUM.
+     */
+    for (i = 0; i < NUMA_NODES_MAX_NUM; i++) {
+        spapr->FORM2_assoc_array[i][0] = cpu_to_be32(1);
+        spapr->FORM2_assoc_array[i][1] = cpu_to_be32(i);
+    }
+}
+
 void spapr_numa_associativity_init(SpaprMachineState *spapr,
                                    MachineState *machine)
 {
     spapr_numa_FORM1_affinity_init(spapr, machine);
+    spapr_numa_FORM2_affinity_init(spapr);
 }
 
 void spapr_numa_associativity_check(SpaprMachineState *spapr)
 {
+    /*
+     * FORM2 does not have any restrictions we need to handle
+     * at CAS time, for now.
+     */
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        return;
+    }
+
     spapr_numa_FORM1_affinity_check(MACHINE(spapr));
 }
 
@@ -447,6 +494,100 @@ static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
                      maxdomains, sizeof(maxdomains)));
 }
 
+static void spapr_numa_FORM2_write_rtas_tables(SpaprMachineState *spapr,
+                                               void *fdt, int rtas)
+{
+    MachineState *ms = MACHINE(spapr);
+    NodeInfo *numa_info = ms->numa_state->nodes;
+    int nb_numa_nodes = ms->numa_state->num_nodes;
+    int distance_table_entries = nb_numa_nodes * nb_numa_nodes;
+    g_autofree uint32_t *lookup_index_table = NULL;
+    g_autofree uint32_t *distance_table = NULL;
+    int src, dst, i, distance_table_size;
+    uint8_t *node_distances;
+
+    /*
+     * ibm,numa-lookup-index-table: array with length and a
+     * list of NUMA ids present in the guest.
+     */
+    lookup_index_table = g_new0(uint32_t, nb_numa_nodes + 1);
+    lookup_index_table[0] = cpu_to_be32(nb_numa_nodes);
+
+    for (i = 0; i < nb_numa_nodes; i++) {
+        lookup_index_table[i + 1] = cpu_to_be32(i);
+    }
+
+    _FDT(fdt_setprop(fdt, rtas, "ibm,numa-lookup-index-table",
+                     lookup_index_table,
+                     (nb_numa_nodes + 1) * sizeof(uint32_t)));
+
+    /*
+     * ibm,numa-distance-table: contains all node distances. First
+     * element is the size of the table as uint32, followed up
+     * by all the uint8 distances from the first NUMA node, then all
+     * distances from the second NUMA node and so on.
+     *
+     * ibm,numa-lookup-index-table is used by guest to navigate this
+     * array because NUMA ids can be sparse (node 0 is the first,
+     * node 8 is the second ...).
+     */
+    distance_table = g_new0(uint32_t, distance_table_entries + 1);
+    distance_table[0] = cpu_to_be32(distance_table_entries);
+
+    node_distances = (uint8_t *)&distance_table[1];
+    i = 0;
+
+    for (src = 0; src < nb_numa_nodes; src++) {
+        for (dst = 0; dst < nb_numa_nodes; dst++) {
+            node_distances[i++] = numa_info[src].distance[dst];
+        }
+    }
+
+    distance_table_size = distance_table_entries * sizeof(uint8_t) +
+                          sizeof(uint32_t);
+    _FDT(fdt_setprop(fdt, rtas, "ibm,numa-distance-table",
+                     distance_table, distance_table_size));
+}
+
+/*
+ * This helper could be compressed in a single function with
+ * FORM1 logic since we're setting the same DT values, with the
+ * difference being a call to spapr_numa_FORM2_write_rtas_tables()
+ * in the end. The separation was made to avoid clogging FORM1 code
+ * which already has to deal with compat modes from previous
+ * QEMU machine types.
+ */
+static void spapr_numa_FORM2_write_rtas_dt(SpaprMachineState *spapr,
+                                           void *fdt, int rtas)
+{
+    MachineState *ms = MACHINE(spapr);
+    uint32_t number_nvgpus_nodes = spapr->gpu_numa_id -
+                                   spapr_numa_initial_nvgpu_numa_id(ms);
+
+    /*
+     * In FORM2, ibm,associativity-reference-points will point to
+     * the element in the ibm,associativity array that contains the
+     * primary domain index (for FORM2, the first element).
+     *
+     * This value (in our case, the numa-id) is then used as an index
+     * to retrieve all other attributes of the node (distance,
+     * bandwidth, latency) via ibm,numa-lookup-index-table and other
+     * ibm,numa-*-table properties.
+     */
+    uint32_t refpoints[] = { cpu_to_be32(1) };
+
+    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;
+    uint32_t maxdomains[] = { cpu_to_be32(1), cpu_to_be32(maxdomain) };
+
+    _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
+                     refpoints, sizeof(refpoints)));
+
+    _FDT(fdt_setprop(fdt, rtas, "ibm,max-associativity-domains",
+                     maxdomains, sizeof(maxdomains)));
+
+    spapr_numa_FORM2_write_rtas_tables(spapr, fdt, rtas);
+}
+
 /*
  * Helper that writes ibm,associativity-reference-points and
  * max-associativity-domains in the RTAS pointed by @rtas
@@ -454,6 +595,11 @@ static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
  */
 void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
 {
+    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
+        spapr_numa_FORM2_write_rtas_dt(spapr, fdt, rtas);
+        return;
+    }
+
     spapr_numa_FORM1_write_rtas_dt(spapr, fdt, rtas);
 }
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 6b3dfc5dc2..ee7504b976 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -118,6 +118,13 @@ typedef enum {
 #define FORM1_DIST_REF_POINTS            4
 #define FORM1_NUMA_ASSOC_SIZE            (FORM1_DIST_REF_POINTS + 1)
 
+/*
+ * FORM2 NUMA affinity has a single associativity domain, giving
+ * us a assoc size of 2.
+ */
+#define FORM2_DIST_REF_POINTS            1
+#define FORM2_NUMA_ASSOC_SIZE            (FORM2_DIST_REF_POINTS + 1)
+
 typedef struct SpaprCapabilities SpaprCapabilities;
 struct SpaprCapabilities {
     uint8_t caps[SPAPR_CAP_NUM];
@@ -145,6 +152,7 @@ struct SpaprMachineClass {
     hwaddr rma_limit;          /* clamp the RMA to this size */
     bool pre_5_1_assoc_refpoints;
     bool pre_5_2_numa_associativity;
+    bool pre_6_2_numa_affinity;
 
     bool (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
                           uint64_t *buid, hwaddr *pio,
@@ -250,6 +258,7 @@ struct SpaprMachineState {
     SpaprTpmProxy *tpm_proxy;
 
     uint32_t FORM1_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
+    uint32_t FORM2_assoc_array[NUMA_NODES_MAX_NUM][FORM2_NUMA_ASSOC_SIZE];
 
     Error *fwnmi_migration_blocker;
 };
diff --git a/include/hw/ppc/spapr_ovec.h b/include/hw/ppc/spapr_ovec.h
index 48b716a060..c3e8b98e7e 100644
--- a/include/hw/ppc/spapr_ovec.h
+++ b/include/hw/ppc/spapr_ovec.h
@@ -49,6 +49,7 @@ typedef struct SpaprOptionVector SpaprOptionVector;
 /* option vector 5 */
 #define OV5_DRCONF_MEMORY       OV_BIT(2, 2)
 #define OV5_FORM1_AFFINITY      OV_BIT(5, 0)
+#define OV5_FORM2_AFFINITY      OV_BIT(5, 2)
 #define OV5_HP_EVT              OV_BIT(6, 5)
 #define OV5_HPT_RESIZE          OV_BIT(6, 7)
 #define OV5_DRMEM_V2            OV_BIT(22, 0)
-- 
2.31.1




* [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info
  2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
                   ` (5 preceding siblings ...)
  2021-09-17 21:28 ` [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support Daniel Henrique Barboza
@ 2021-09-17 21:28 ` Daniel Henrique Barboza
  2021-09-20 15:22   ` Greg Kurz
  2021-09-21  9:16   ` Igor Mammedov
  6 siblings, 2 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-17 21:28 UTC (permalink / raw)
  To: qemu-devel; +Cc: Igor Mammedov, Daniel Henrique Barboza, qemu-ppc, groug, david

numa_complete_configuration() in hw/core/numa.c always adds a NUMA node
for the pSeries machine if none was specified, but without node distance
information for the single node created.

NUMA FORM1 affinity code didn't rely on numa_state information to do its
job, but FORM2 does. As it stands, this is the result for a pSeries guest
with NUMA FORM2 affinity when no NUMA node is specified:

$ numactl -H
available: 1 nodes (0)
node 0 cpus: 0
node 0 size: 16222 MB
node 0 free: 15681 MB
No distance information available.

This can be amended in spapr_numa_FORM2_write_rtas_tables(). We're
enforcing that the local distance (the distance from a node to itself) is
always 10. This allows for the proper creation of the NUMA distance tables,
fixing the output of 'numactl -H' in the guest:

$ numactl -H
available: 1 nodes (0)
node 0 cpus: 0
node 0 size: 16222 MB
node 0 free: 15685 MB
node distances:
node   0
  0:  10
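
In dts form, the tables emitted for this auto-generated single-node
case would look roughly like (illustrative values):

    ibm,numa-lookup-index-table = <0x1 0x0>;
    ibm,numa-distance-table = <0x1>, [0a];  /* node 0 to itself: 10 */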

CC: Igor Mammedov <imammedo@redhat.com>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
---
 hw/ppc/spapr_numa.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
index 659513b405..0cead2e7f5 100644
--- a/hw/ppc/spapr_numa.c
+++ b/hw/ppc/spapr_numa.c
@@ -539,6 +539,17 @@ static void spapr_numa_FORM2_write_rtas_tables(SpaprMachineState *spapr,
 
     for (src = 0; src < nb_numa_nodes; src++) {
         for (dst = 0; dst < nb_numa_nodes; dst++) {
+            /*
+             * We need to be explicit with the local distance
+             * value to cover the case where the user didn't add any
+             * NUMA nodes, but QEMU adds the default NUMA node without
+             * adding the numa_info to retrieve distance info from.
+             */
+            if (src == dst) {
+                node_distances[i++] = 10;
+                continue;
+            }
+
             node_distances[i++] = numa_info[src].distance[dst];
         }
     }
-- 
2.31.1




* Re: [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros
  2021-09-17 21:27 ` [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros Daniel Henrique Barboza
@ 2021-09-20  8:54   ` Greg Kurz
  0 siblings, 0 replies; 15+ messages in thread
From: Greg Kurz @ 2021-09-20  8:54 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:27:58 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The next preliminary step to introduce NUMA FORM2 affinity is to make
> the existing code independent of FORM1 macros and values, i.e.
> MAX_DISTANCE_REF_POINTS, NUMA_ASSOC_SIZE and VCPU_ASSOC_SIZE. This patch
> accomplishes that by doing the following:
> 
> - move the NUMA related macros from spapr.h to spapr_numa.c where they
> are used. spapr.h gets instead a 'NUMA_NODES_MAX_NUM' macro that is used
> to refer to the maximum number of NUMA nodes, including GPU nodes, that
> the machine can support;
> 
> - MAX_DISTANCE_REF_POINTS and NUMA_ASSOC_SIZE are renamed to
> FORM1_DIST_REF_POINTS and FORM1_NUMA_ASSOC_SIZE. These FORM1 specific
> macros are used in FORM1 init functions;
> 
> - code that uses MAX_DISTANCE_REF_POINTS now retrieves the
> max_dist_ref_points value using get_max_dist_ref_points().
> NUMA_ASSOC_SIZE is replaced by get_numa_assoc_size() and VCPU_ASSOC_SIZE
> is replaced by get_vcpu_assoc_size(). These functions are used by the
> generic device tree functions and h_home_node_associativity() and will
> allow them to switch between FORM1 and FORM2 without changing their core
> logic.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr_numa.c    | 74 ++++++++++++++++++++++++++++++------------
>  include/hw/ppc/spapr.h | 28 ++++++++--------
>  2 files changed, 67 insertions(+), 35 deletions(-)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index bf520d42b2..08e2d6aed8 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -19,6 +19,33 @@
>  /* Moved from hw/ppc/spapr_pci_nvlink2.c */
>  #define SPAPR_GPU_NUMA_ID           (cpu_to_be32(1))
>  
> +/*
> + * Retrieves max_dist_ref_points of the current NUMA affinity.
> + */
> +static int get_max_dist_ref_points(SpaprMachineState *spapr)
> +{
> +    return FORM1_DIST_REF_POINTS;
> +}
> +
> +/*
> + * Retrieves numa_assoc_size of the current NUMA affinity.
> + */
> +static int get_numa_assoc_size(SpaprMachineState *spapr)
> +{
> +    return FORM1_NUMA_ASSOC_SIZE;
> +}
> +
> +/*
> + * Retrieves vcpu_assoc_size of the current NUMA affinity.
> + *
> + * vcpu_assoc_size is the size of ibm,associativity array
> + * for CPUs, which has an extra element (vcpu_id) in the end.
> + */
> +static int get_vcpu_assoc_size(SpaprMachineState *spapr)
> +{
> +    return get_numa_assoc_size(spapr) + 1;
> +}
> +
>  static bool spapr_numa_is_symmetrical(MachineState *ms)
>  {
>      int src, dst;
> @@ -96,7 +123,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>       * considered a match with associativity domains of node 0.
>       */
>      for (i = 1; i < nb_numa_nodes; i++) {
> -        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
>              spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>          }
>      }
> @@ -134,7 +161,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>               *
>               * The Linux kernel will assume that the distance between src and
>               * dst, in this case of no match, is 10 (local distance) doubled
> -             * for each NUMA it didn't match. We have MAX_DISTANCE_REF_POINTS
> +             * for each NUMA it didn't match. We have FORM1_DIST_REF_POINTS
>               * levels (4), so this gives us 10*2*2*2*2 = 160.
>               *
>               * This logic can be seen in the Linux kernel source code, as of
> @@ -169,7 +196,7 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>  
>      /*
>       * For all associativity arrays: first position is the size,
> -     * position MAX_DISTANCE_REF_POINTS is always the numa_id,
> +     * position FORM1_DIST_REF_POINTS is always the numa_id,
>       * represented by the index 'i'.
>       *
>       * This will break on sparse NUMA setups, when/if QEMU starts
> @@ -177,8 +204,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>       * 'i' will be a valid node_id set by the user.
>       */
>      for (i = 0; i < nb_numa_nodes; i++) {
> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
> -        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
> +        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>      }
>  
>      /*
> @@ -192,15 +219,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>      max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
>  
>      for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
> +        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>  
> -        for (j = 1; j < MAX_DISTANCE_REF_POINTS; j++) {
> +        for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
>              uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
>                                   SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
>              spapr->numa_assoc_array[i][j] = gpu_assoc;
>          }
>  
> -        spapr->numa_assoc_array[i][MAX_DISTANCE_REF_POINTS] = cpu_to_be32(i);
> +        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>      }
>  
>      /*
> @@ -234,13 +261,15 @@ void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>  {
>      _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
>                        spapr->numa_assoc_array[nodeid],
> -                      sizeof(spapr->numa_assoc_array[nodeid]))));
> +                      get_numa_assoc_size(spapr) * sizeof(uint32_t))));
>  }
>  
>  static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>                                             PowerPCCPU *cpu)
>  {
> -    uint32_t *vcpu_assoc = g_new(uint32_t, VCPU_ASSOC_SIZE);
> +    int max_distance_ref_points = get_max_dist_ref_points(spapr);
> +    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
> +    uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
>      int index = spapr_get_vcpu_id(cpu);
>  
>      /*
> @@ -249,10 +278,10 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>       * 0, put cpu_id last, then copy the remaining associativity
>       * domains.
>       */
> -    vcpu_assoc[0] = cpu_to_be32(MAX_DISTANCE_REF_POINTS + 1);
> -    vcpu_assoc[VCPU_ASSOC_SIZE - 1] = cpu_to_be32(index);
> +    vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
> +    vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
>      memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
> -           (VCPU_ASSOC_SIZE - 2) * sizeof(uint32_t));
> +           (vcpu_assoc_size - 2) * sizeof(uint32_t));
>  
>      return vcpu_assoc;
>  }
> @@ -261,12 +290,13 @@ int spapr_numa_fixup_cpu_dt(SpaprMachineState *spapr, void *fdt,
>                              int offset, PowerPCCPU *cpu)
>  {
>      g_autofree uint32_t *vcpu_assoc = NULL;
> +    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
>  
>      vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, cpu);
>  
>      /* Advertise NUMA via ibm,associativity */
>      return fdt_setprop(fdt, offset, "ibm,associativity", vcpu_assoc,
> -                       VCPU_ASSOC_SIZE * sizeof(uint32_t));
> +                       vcpu_assoc_size * sizeof(uint32_t));
>  }
>  
>  
> @@ -274,17 +304,18 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>                                           int offset)
>  {
>      MachineState *machine = MACHINE(spapr);
> +    int max_distance_ref_points = get_max_dist_ref_points(spapr);
>      int nb_numa_nodes = machine->numa_state->num_nodes;
>      int nr_nodes = nb_numa_nodes ? nb_numa_nodes : 1;
>      uint32_t *int_buf, *cur_index, buf_len;
>      int ret, i;
>  
>      /* ibm,associativity-lookup-arrays */
> -    buf_len = (nr_nodes * MAX_DISTANCE_REF_POINTS + 2) * sizeof(uint32_t);
> +    buf_len = (nr_nodes * max_distance_ref_points + 2) * sizeof(uint32_t);
>      cur_index = int_buf = g_malloc0(buf_len);
>      int_buf[0] = cpu_to_be32(nr_nodes);
>       /* Number of entries per associativity list */
> -    int_buf[1] = cpu_to_be32(MAX_DISTANCE_REF_POINTS);
> +    int_buf[1] = cpu_to_be32(max_distance_ref_points);
>      cur_index += 2;
>      for (i = 0; i < nr_nodes; i++) {
>          /*
> @@ -293,8 +324,8 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>           */
>          uint32_t *associativity = spapr->numa_assoc_array[i];
>          memcpy(cur_index, ++associativity,
> -               sizeof(uint32_t) * MAX_DISTANCE_REF_POINTS);
> -        cur_index += MAX_DISTANCE_REF_POINTS;
> +               sizeof(uint32_t) * max_distance_ref_points);
> +        cur_index += max_distance_ref_points;
>      }
>      ret = fdt_setprop(fdt, offset, "ibm,associativity-lookup-arrays", int_buf,
>                        (cur_index - int_buf) * sizeof(uint32_t));
> @@ -383,6 +414,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>      target_ulong procno = args[1];
>      PowerPCCPU *tcpu;
>      int idx, assoc_idx;
> +    int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
>  
>      /* only support procno from H_REGISTER_VPA */
>      if (flags != 0x1) {
> @@ -401,7 +433,7 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>       * 12 associativity domains for vcpus. Assert and bail if that's
>       * not the case.
>       */
> -    G_STATIC_ASSERT((VCPU_ASSOC_SIZE - 1) <= 12);
> +    g_assert((vcpu_assoc_size - 1) <= 12);
>  
>      vcpu_assoc = spapr_numa_get_vcpu_assoc(spapr, tcpu);
>      /* assoc_idx starts at 1 to skip associativity size */
> @@ -422,9 +454,9 @@ static target_ulong h_home_node_associativity(PowerPCCPU *cpu,
>           * macro. The ternary will fill the remaining registers with -1
>           * after we went through vcpu_assoc[].
>           */
> -        a = assoc_idx < VCPU_ASSOC_SIZE ?
> +        a = assoc_idx < vcpu_assoc_size ?
>              be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
> -        b = assoc_idx < VCPU_ASSOC_SIZE ?
> +        b = assoc_idx < vcpu_assoc_size ?
>              be32_to_cpu(vcpu_assoc[assoc_idx++]) : -1;
>  
>          args[idx] = ASSOCIATIVITY(a, b);
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 637652ad16..814e087e98 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -100,23 +100,23 @@ typedef enum {
>  
>  #define FDT_MAX_SIZE                    0x200000
>  
> +/* Max number of GPUs per system */
> +#define NVGPU_MAX_NUM              6
> +
> +/* Max number of NUMA nodes */
> +#define NUMA_NODES_MAX_NUM         (MAX_NODES + NVGPU_MAX_NUM)
> +
>  /*
> - * NUMA related macros. MAX_DISTANCE_REF_POINTS was taken
> - * from Linux kernel arch/powerpc/mm/numa.h. It represents the
> - * amount of associativity domains for non-CPU resources.
> + * NUMA FORM1 macros. FORM1_DIST_REF_POINTS was taken from
> + * MAX_DISTANCE_REF_POINTS in arch/powerpc/mm/numa.h from Linux
> + * kernel source. It represents the amount of associativity domains
> + * for non-CPU resources.
>   *
> - * NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
> + * FORM1_NUMA_ASSOC_SIZE is the base array size of an ibm,associativity
>   * array for any non-CPU resource.
> - *
> - * VCPU_ASSOC_SIZE represents the size of ibm,associativity array
> - * for CPUs, which has an extra element (vcpu_id) in the end.
>   */
> -#define MAX_DISTANCE_REF_POINTS    4
> -#define NUMA_ASSOC_SIZE            (MAX_DISTANCE_REF_POINTS + 1)
> -#define VCPU_ASSOC_SIZE            (NUMA_ASSOC_SIZE + 1)
> -
> -/* Max number of these GPUsper a physical box */
> -#define NVGPU_MAX_NUM                6
> +#define FORM1_DIST_REF_POINTS            4
> +#define FORM1_NUMA_ASSOC_SIZE            (FORM1_DIST_REF_POINTS + 1)
>  
>  typedef struct SpaprCapabilities SpaprCapabilities;
>  struct SpaprCapabilities {
> @@ -249,7 +249,7 @@ struct SpaprMachineState {
>      unsigned gpu_numa_id;
>      SpaprTpmProxy *tpm_proxy;
>  
> -    uint32_t numa_assoc_array[MAX_NODES + NVGPU_MAX_NUM][NUMA_ASSOC_SIZE];
> +    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
>  
>      Error *fwnmi_migration_blocker;
>  };



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array
  2021-09-17 21:27 ` [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array Daniel Henrique Barboza
@ 2021-09-20  9:21   ` Greg Kurz
  2021-09-20 13:39     ` Daniel Henrique Barboza
  0 siblings, 1 reply; 15+ messages in thread
From: Greg Kurz @ 2021-09-20  9:21 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:27:59 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> Introducing a new NUMA affinity, FORM2, requires a new mechanism to
> switch between affinity modes after CAS. Also, we want FORM2 data
> structures and functions to be completely separated from the existing
> FORM1 code, allowing us to avoid adding new code that inherits the
> existing complexity of FORM1.
> 
> The idea of switching values used by the write_dt() functions in
> spapr_numa.c was already introduced in the previous patch, and
> the same approach will be used when dealing with the FORM1 and FORM2
> arrays.
> 
> We can accomplish that by renaming the existing numa_assoc_array
> to FORM1_assoc_array, which is now used exclusively to handle FORM1 affinity
> data. A new helper get_associativity() is then introduced to be used by the
> write_dt() functions to retrieve the current ibm,associativity array of
> a given node, after considering affinity selection that might have been
> done during CAS. All code that was using numa_assoc_array now needs to
> retrieve the array by calling this function.
> 
> This will allow for an easier plug of FORM2 data later on.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

This looks good. Just one suggestion, see below.

>  hw/ppc/spapr_hcall.c   |  1 +
>  hw/ppc/spapr_numa.c    | 38 +++++++++++++++++++++++++-------------
>  include/hw/ppc/spapr.h |  2 +-
>  3 files changed, 27 insertions(+), 14 deletions(-)
> 
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 0e9a5b2e40..9056644890 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -17,6 +17,7 @@
>  #include "kvm_ppc.h"
>  #include "hw/ppc/fdt.h"
>  #include "hw/ppc/spapr_ovec.h"
> +#include "hw/ppc/spapr_numa.h"
>  #include "mmu-book3s-v3.h"
>  #include "hw/mem/memory-device.h"
>  
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 08e2d6aed8..7339d00d20 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -46,6 +46,15 @@ static int get_vcpu_assoc_size(SpaprMachineState *spapr)
>      return get_numa_assoc_size(spapr) + 1;
>  }
>  
> +/*
> + * Retrieves the ibm,associativity array of NUMA node 'node_id'
> + * for the current NUMA affinity.
> + */
> +static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
> +{
> +    return spapr->FORM1_assoc_array[node_id];
> +}

All users of this helper only need to read the content of the
associativity array. And since these arrays are static, the
returned pointer should certainly not be passed to g_free()
for example. This wouldn't be detected by compilers though,
unless you have the helper to return a pointer to const
data. So I suggest you just do that for extra safety.
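
Something along these lines (untested sketch, same body as in the patch
with only the return type constified):

    static const uint32_t *get_associativity(SpaprMachineState *spapr,
                                             int node_id)
    {
        return spapr->FORM1_assoc_array[node_id];
    }

Read-only callers can then take a 'const uint32_t *', and the compiler
will warn on writes through the pointer or on passing it to g_free().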

> +
>  static bool spapr_numa_is_symmetrical(MachineState *ms)
>  {
>      int src, dst;
> @@ -124,7 +133,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>       */
>      for (i = 1; i < nb_numa_nodes; i++) {
>          for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
> -            spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
> +            spapr->FORM1_assoc_array[i][j] = cpu_to_be32(i);
>          }
>      }
>  
> @@ -176,8 +185,8 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>               * and going up to 0x1.
>               */
>              for (i = n_level; i > 0; i--) {
> -                assoc_src = spapr->numa_assoc_array[src][i];
> -                spapr->numa_assoc_array[dst][i] = assoc_src;
> +                assoc_src = spapr->FORM1_assoc_array[src][i];
> +                spapr->FORM1_assoc_array[dst][i] = assoc_src;
>              }
>          }
>      }
> @@ -204,8 +213,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>       * 'i' will be a valid node_id set by the user.
>       */
>      for (i = 0; i < nb_numa_nodes; i++) {
> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
> -        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
> +        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
> +        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>      }
>  
>      /*
> @@ -219,15 +228,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>      max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
>  
>      for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
> +        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>  
>          for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
>              uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
>                                   SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
> -            spapr->numa_assoc_array[i][j] = gpu_assoc;
> +            spapr->FORM1_assoc_array[i][j] = gpu_assoc;
>          }
>  
> -        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
> +        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>      }
>  
>      /*
> @@ -259,8 +268,10 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>                                         int offset, int nodeid)
>  {
> +    uint32_t *associativity = get_associativity(spapr, nodeid);
> +
>      _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
> -                      spapr->numa_assoc_array[nodeid],
> +                      associativity,
>                        get_numa_assoc_size(spapr) * sizeof(uint32_t))));
>  }
>  
> @@ -270,6 +281,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>      int max_distance_ref_points = get_max_dist_ref_points(spapr);
>      int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
>      uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
> +    uint32_t *associativity = get_associativity(spapr, cpu->node_id);
>      int index = spapr_get_vcpu_id(cpu);
>  
>      /*
> @@ -280,7 +292,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>       */
>      vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
>      vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
> -    memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
> +    memcpy(vcpu_assoc + 1, associativity + 1,
>             (vcpu_assoc_size - 2) * sizeof(uint32_t));
>  
>      return vcpu_assoc;
> @@ -319,10 +331,10 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>      cur_index += 2;
>      for (i = 0; i < nr_nodes; i++) {
>          /*
> -         * For the lookup-array we use the ibm,associativity array,
> -         * from numa_assoc_array. without the first element (size).
> +         * For the lookup-array we use the ibm,associativity array of the
> +         * current NUMA affinity, without the first element (size).
>           */
> -        uint32_t *associativity = spapr->numa_assoc_array[i];
> +        uint32_t *associativity = get_associativity(spapr, i);
>          memcpy(cur_index, ++associativity,
>                 sizeof(uint32_t) * max_distance_ref_points);
>          cur_index += max_distance_ref_points;
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 814e087e98..6b3dfc5dc2 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -249,7 +249,7 @@ struct SpaprMachineState {
>      unsigned gpu_numa_id;
>      SpaprTpmProxy *tpm_proxy;
>  
> -    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
> +    uint32_t FORM1_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
>  
>      Error *fwnmi_migration_blocker;
>  };



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS
  2021-09-17 21:28 ` [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS Daniel Henrique Barboza
@ 2021-09-20  9:38   ` Greg Kurz
  0 siblings, 0 replies; 15+ messages in thread
From: Greg Kurz @ 2021-09-20  9:38 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:28:00 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> FORM2 NUMA affinity is prepared to deal with empty (memory/cpu less)
> NUMA nodes. This is used by the DAX KMEM driver to locate a PAPR SCM
> device that has a different latency than the NUMA node of the regular
> memory. FORM2 is also able to deal with asymmetric NUMA
> distances gracefully, something that our FORM1 implementation doesn't
> do.
> 
> Move these FORM1 verifications to a new function and wait until after
> CAS, when we're sure that we're sticking with FORM1, to enforce them.
> 
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c              | 33 -----------------------
>  hw/ppc/spapr_hcall.c        |  6 +++++
>  hw/ppc/spapr_numa.c         | 53 ++++++++++++++++++++++++++++++++-----
>  include/hw/ppc/spapr_numa.h |  1 +
>  4 files changed, 54 insertions(+), 39 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index d39fd4e644..ada85ee083 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2773,39 +2773,6 @@ static void spapr_machine_init(MachineState *machine)
>      /* init CPUs */
>      spapr_init_cpus(spapr);
>  
> -    /*
> -     * check we don't have a memory-less/cpu-less NUMA node
> -     * Firmware relies on the existing memory/cpu topology to provide the
> -     * NUMA topology to the kernel.
> -     * And the linux kernel needs to know the NUMA topology at start
> -     * to be able to hotplug CPUs later.
> -     */
> -    if (machine->numa_state->num_nodes) {
> -        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> -            /* check for memory-less node */
> -            if (machine->numa_state->nodes[i].node_mem == 0) {
> -                CPUState *cs;
> -                int found = 0;
> -                /* check for cpu-less node */
> -                CPU_FOREACH(cs) {
> -                    PowerPCCPU *cpu = POWERPC_CPU(cs);
> -                    if (cpu->node_id == i) {
> -                        found = 1;
> -                        break;
> -                    }
> -                }
> -                /* memory-less and cpu-less node */
> -                if (!found) {
> -                    error_report(
> -                       "Memory-less/cpu-less nodes are not supported (node %d)",
> -                                 i);
> -                    exit(1);
> -                }
> -            }
> -        }
> -
> -    }
> -
>      spapr->gpu_numa_id = spapr_numa_initial_nvgpu_numa_id(machine);
>  
>      /* Init numa_assoc_array */
> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
> index 9056644890..222c1b6bbd 100644
> --- a/hw/ppc/spapr_hcall.c
> +++ b/hw/ppc/spapr_hcall.c
> @@ -1198,6 +1198,12 @@ target_ulong do_client_architecture_support(PowerPCCPU *cpu,
>      spapr->cas_pre_isa3_guest = !spapr_ovec_test(ov1_guest, OV1_PPC_3_00);
>      spapr_ovec_cleanup(ov1_guest);
>  
> +    /*
> +     * Check for NUMA affinity conditions now that we know which NUMA
> +     * affinity the guest will use.
> +     */
> +    spapr_numa_associativity_check(spapr);
> +
>      /*
>       * Ensure the guest asks for an interrupt mode we support;
>       * otherwise terminate the boot.
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 7339d00d20..dfe4fada01 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -193,6 +193,48 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>  
>  }
>  
> +static void spapr_numa_FORM1_affinity_check(MachineState *machine)
> +{
> +    int i;
> +
> +    /*
> +     * Check we don't have a memory-less/cpu-less NUMA node
> +     * Firmware relies on the existing memory/cpu topology to provide the
> +     * NUMA topology to the kernel.
> +     * And the linux kernel needs to know the NUMA topology at start
> +     * to be able to hotplug CPUs later.
> +     */
> +    if (machine->numa_state->num_nodes) {
> +        for (i = 0; i < machine->numa_state->num_nodes; ++i) {
> +            /* check for memory-less node */
> +            if (machine->numa_state->nodes[i].node_mem == 0) {
> +                CPUState *cs;
> +                int found = 0;
> +                /* check for cpu-less node */
> +                CPU_FOREACH(cs) {
> +                    PowerPCCPU *cpu = POWERPC_CPU(cs);
> +                    if (cpu->node_id == i) {
> +                        found = 1;
> +                        break;
> +                    }
> +                }
> +                /* memory-less and cpu-less node */
> +                if (!found) {
> +                    error_report(
> +"Memory-less/cpu-less nodes are not supported with FORM1 NUMA (node %d)", i);
> +                    exit(EXIT_FAILURE);
> +                }
> +            }
> +        }
> +    }
> +
> +    if (!spapr_numa_is_symmetrical(machine)) {
> +        error_report(
> +"Asymmetrical NUMA topologies aren't supported in the pSeries machine using FORM1 NUMA");
> +        exit(EXIT_FAILURE);
> +    }
> +}
> +
>  /*
>   * Set NUMA machine state data based on FORM1 affinity semantics.
>   */
> @@ -250,12 +292,6 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>          return;
>      }
>  
> -    if (!spapr_numa_is_symmetrical(machine)) {
> -        error_report("Asymmetrical NUMA topologies aren't supported "
> -                     "in the pSeries machine");
> -        exit(EXIT_FAILURE);
> -    }
> -
>      spapr_numa_define_FORM1_domains(spapr);
>  }
>  
> @@ -265,6 +301,11 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>      spapr_numa_FORM1_affinity_init(spapr, machine);
>  }
>  
> +void spapr_numa_associativity_check(SpaprMachineState *spapr)
> +{
> +    spapr_numa_FORM1_affinity_check(MACHINE(spapr));
> +}
> +
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>                                         int offset, int nodeid)
>  {
> diff --git a/include/hw/ppc/spapr_numa.h b/include/hw/ppc/spapr_numa.h
> index 6f9f02d3de..7cb3367400 100644
> --- a/include/hw/ppc/spapr_numa.h
> +++ b/include/hw/ppc/spapr_numa.h
> @@ -24,6 +24,7 @@
>   */
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine);
> +void spapr_numa_associativity_check(SpaprMachineState *spapr);
>  void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas);
>  void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>                                         int offset, int nodeid);



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array
  2021-09-20  9:21   ` Greg Kurz
@ 2021-09-20 13:39     ` Daniel Henrique Barboza
  0 siblings, 0 replies; 15+ messages in thread
From: Daniel Henrique Barboza @ 2021-09-20 13:39 UTC (permalink / raw)
  To: Greg Kurz; +Cc: qemu-ppc, qemu-devel, david



On 9/20/21 06:21, Greg Kurz wrote:
> On Fri, 17 Sep 2021 18:27:59 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
>> Introducing a new NUMA affinity, FORM2, requires a new mechanism to
>> switch between affinity modes after CAS. Also, we want FORM2 data
>> structures and functions to be completely separated from the existing
>> FORM1 code, allowing us to avoid adding new code that inherits the
>> existing complexity of FORM1.
>>
>> The idea of switching values used by the write_dt() functions in
>> spapr_numa.c was already introduced in the previous patch, and
>> the same approach will be used when dealing with the FORM1 and FORM2
>> arrays.
>>
>> We can accomplish that by renaming the existing numa_assoc_array
>> to FORM1_assoc_array, which is now used exclusively to handle FORM1 affinity
>> data. A new helper get_associativity() is then introduced to be used by the
>> write_dt() functions to retrieve the current ibm,associativity array of
>> a given node, after considering affinity selection that might have been
>> done during CAS. All code that was using numa_assoc_array now needs to
>> retrieve the array by calling this function.
>>
>> This will allow for an easier plug of FORM2 data later on.
>>
>> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
>> ---
> 
> This looks good. Just one suggestion, see below.
> 
>>   hw/ppc/spapr_hcall.c   |  1 +
>>   hw/ppc/spapr_numa.c    | 38 +++++++++++++++++++++++++-------------
>>   include/hw/ppc/spapr.h |  2 +-
>>   3 files changed, 27 insertions(+), 14 deletions(-)
>>
>> diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
>> index 0e9a5b2e40..9056644890 100644
>> --- a/hw/ppc/spapr_hcall.c
>> +++ b/hw/ppc/spapr_hcall.c
>> @@ -17,6 +17,7 @@
>>   #include "kvm_ppc.h"
>>   #include "hw/ppc/fdt.h"
>>   #include "hw/ppc/spapr_ovec.h"
>> +#include "hw/ppc/spapr_numa.h"
>>   #include "mmu-book3s-v3.h"
>>   #include "hw/mem/memory-device.h"
>>   
>> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
>> index 08e2d6aed8..7339d00d20 100644
>> --- a/hw/ppc/spapr_numa.c
>> +++ b/hw/ppc/spapr_numa.c
>> @@ -46,6 +46,15 @@ static int get_vcpu_assoc_size(SpaprMachineState *spapr)
>>       return get_numa_assoc_size(spapr) + 1;
>>   }
>>   
>> +/*
>> + * Retrieves the ibm,associativity array of NUMA node 'node_id'
>> + * for the current NUMA affinity.
>> + */
>> +static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
>> +{
>> +    return spapr->FORM1_assoc_array[node_id];
>> +}
> 
> All users of this helper only need to read the content of the
> associativity array. And since these arrays are static, the
> returned pointer should certainly not be passed to g_free()
> for example. This wouldn't be detected by compilers though,
> unless you have the helper to return a pointer to const
> data. So I suggest you just do that for extra safety.

Good point. I'll be more vigilant about the use of const pointers in these
cases. The extra safety is always welcome.


Daniel

> 
>> +
>>   static bool spapr_numa_is_symmetrical(MachineState *ms)
>>   {
>>       int src, dst;
>> @@ -124,7 +133,7 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>>        */
>>       for (i = 1; i < nb_numa_nodes; i++) {
>>           for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
>> -            spapr->numa_assoc_array[i][j] = cpu_to_be32(i);
>> +            spapr->FORM1_assoc_array[i][j] = cpu_to_be32(i);
>>           }
>>       }
>>   
>> @@ -176,8 +185,8 @@ static void spapr_numa_define_FORM1_domains(SpaprMachineState *spapr)
>>                * and going up to 0x1.
>>                */
>>               for (i = n_level; i > 0; i--) {
>> -                assoc_src = spapr->numa_assoc_array[src][i];
>> -                spapr->numa_assoc_array[dst][i] = assoc_src;
>> +                assoc_src = spapr->FORM1_assoc_array[src][i];
>> +                spapr->FORM1_assoc_array[dst][i] = assoc_src;
>>               }
>>           }
>>       }
>> @@ -204,8 +213,8 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>>        * 'i' will be a valid node_id set by the user.
>>        */
>>       for (i = 0; i < nb_numa_nodes; i++) {
>> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>> -        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>> +        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>> +        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>>       }
>>   
>>       /*
>> @@ -219,15 +228,15 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>>       max_nodes_with_gpus = nb_numa_nodes + NVGPU_MAX_NUM;
>>   
>>       for (i = nb_numa_nodes; i < max_nodes_with_gpus; i++) {
>> -        spapr->numa_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>> +        spapr->FORM1_assoc_array[i][0] = cpu_to_be32(FORM1_DIST_REF_POINTS);
>>   
>>           for (j = 1; j < FORM1_DIST_REF_POINTS; j++) {
>>               uint32_t gpu_assoc = smc->pre_5_1_assoc_refpoints ?
>>                                    SPAPR_GPU_NUMA_ID : cpu_to_be32(i);
>> -            spapr->numa_assoc_array[i][j] = gpu_assoc;
>> +            spapr->FORM1_assoc_array[i][j] = gpu_assoc;
>>           }
>>   
>> -        spapr->numa_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>> +        spapr->FORM1_assoc_array[i][FORM1_DIST_REF_POINTS] = cpu_to_be32(i);
>>       }
>>   
>>       /*
>> @@ -259,8 +268,10 @@ void spapr_numa_associativity_init(SpaprMachineState *spapr,
>>   void spapr_numa_write_associativity_dt(SpaprMachineState *spapr, void *fdt,
>>                                          int offset, int nodeid)
>>   {
>> +    uint32_t *associativity = get_associativity(spapr, nodeid);
>> +
>>       _FDT((fdt_setprop(fdt, offset, "ibm,associativity",
>> -                      spapr->numa_assoc_array[nodeid],
>> +                      associativity,
>>                         get_numa_assoc_size(spapr) * sizeof(uint32_t))));
>>   }
>>   
>> @@ -270,6 +281,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>>       int max_distance_ref_points = get_max_dist_ref_points(spapr);
>>       int vcpu_assoc_size = get_vcpu_assoc_size(spapr);
>>       uint32_t *vcpu_assoc = g_new(uint32_t, vcpu_assoc_size);
>> +    uint32_t *associativity = get_associativity(spapr, cpu->node_id);
>>       int index = spapr_get_vcpu_id(cpu);
>>   
>>       /*
>> @@ -280,7 +292,7 @@ static uint32_t *spapr_numa_get_vcpu_assoc(SpaprMachineState *spapr,
>>        */
>>       vcpu_assoc[0] = cpu_to_be32(max_distance_ref_points + 1);
>>       vcpu_assoc[vcpu_assoc_size - 1] = cpu_to_be32(index);
>> -    memcpy(vcpu_assoc + 1, spapr->numa_assoc_array[cpu->node_id] + 1,
>> +    memcpy(vcpu_assoc + 1, associativity + 1,
>>              (vcpu_assoc_size - 2) * sizeof(uint32_t));
>>   
>>       return vcpu_assoc;
>> @@ -319,10 +331,10 @@ int spapr_numa_write_assoc_lookup_arrays(SpaprMachineState *spapr, void *fdt,
>>       cur_index += 2;
>>       for (i = 0; i < nr_nodes; i++) {
>>           /*
>> -         * For the lookup-array we use the ibm,associativity array,
>> -         * from numa_assoc_array. without the first element (size).
>> +         * For the lookup-array we use the ibm,associativity array of the
>> +         * current NUMA affinity, without the first element (size).
>>            */
>> -        uint32_t *associativity = spapr->numa_assoc_array[i];
>> +        uint32_t *associativity = get_associativity(spapr, i);
>>           memcpy(cur_index, ++associativity,
>>                  sizeof(uint32_t) * max_distance_ref_points);
>>           cur_index += max_distance_ref_points;
>> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
>> index 814e087e98..6b3dfc5dc2 100644
>> --- a/include/hw/ppc/spapr.h
>> +++ b/include/hw/ppc/spapr.h
>> @@ -249,7 +249,7 @@ struct SpaprMachineState {
>>       unsigned gpu_numa_id;
>>       SpaprTpmProxy *tpm_proxy;
>>   
>> -    uint32_t numa_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
>> +    uint32_t FORM1_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
>>   
>>       Error *fwnmi_migration_blocker;
>>   };
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support
  2021-09-17 21:28 ` [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support Daniel Henrique Barboza
@ 2021-09-20 15:10   ` Greg Kurz
  0 siblings, 0 replies; 15+ messages in thread
From: Greg Kurz @ 2021-09-20 15:10 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:28:01 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> The main feature of FORM2 affinity support is the separation of NUMA
> distances from ibm,associativity information. This allows for a more
> flexible and straightforward NUMA distance assignment without relying on
> complex associations between several levels of NUMA via
> ibm,associativity matches. Another feature is its extensibility. This base
> support contains the facilities for NUMA distance assignment, but in the
> future more facilities will be added for latency, performance, bandwidth
> and so on.
> 
> This patch implements the base FORM2 affinity support as follows:
> 
> - the use of FORM2 associativity is indicated by using bit 2 of byte 5
> of ibm,architecture-vec-5. A FORM2 aware guest can choose to use FORM1
> or FORM2 affinity. Setting both forms will default to FORM2. We're not
> advertising FORM2 for pseries-6.1 and older machine versions to prevent
> guest visible changes in those;
> 
> - ibm,associativity-reference-points has a new semantic. Instead of
> being used to calculate distances via NUMA levels, it's now used to
> indicate the primary domain index in the ibm,associativity domain of
> each resource. In our case it's set to {0x4}, matching the position
> where we already place logical_domain_id;
> 
> - two new RTAS DT artifacts are introduced: ibm,numa-lookup-index-table
> and ibm,numa-distance-table. The index table is used to list all the
> NUMA logical domains of the platform, in ascending order, and allows for
> sparse NUMA configurations (although QEMU ATM doesn't support that).
> ibm,numa-distance-table is an array that contains all the distances from
> the first NUMA node to all other nodes, then the second NUMA node
> distances to all other nodes and so on (see the sketch below);
> 
> - get_max_dist_ref_points(), get_numa_assoc_size() and get_associativity()
> now checks for OV5_FORM2_AFFINITY and returns FORM2 values if the guest
> selected FORM2 affinity during CAS.
> 
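A concrete sketch of the two tables for a hypothetical two-node
topology with inter-node distance 20 (decimal values, following the
layout described above, not taken from an actual guest):

    ibm,numa-lookup-index-table = { 2, 0, 1 }
        /* nr of NUMA nodes, then the node ids in ascending order */
    ibm,numa-distance-table = { 4, 10, 20, 20, 10 }
        /* nr of uint8 entries, then distances 0->0, 0->1, 1->0, 1->1 */
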
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr.c              |   8 ++
>  hw/ppc/spapr_numa.c         | 146 ++++++++++++++++++++++++++++++++++++
>  include/hw/ppc/spapr.h      |   9 +++
>  include/hw/ppc/spapr_ovec.h |   1 +
>  4 files changed, 164 insertions(+)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index ada85ee083..babb662845 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -2752,6 +2752,11 @@ static void spapr_machine_init(MachineState *machine)
>  
>      spapr_ovec_set(spapr->ov5, OV5_FORM1_AFFINITY);
>  
> +    /* Do not advertise FORM2 NUMA support for pseries-6.1 and older */
> +    if (!smc->pre_6_2_numa_affinity) {
> +        spapr_ovec_set(spapr->ov5, OV5_FORM2_AFFINITY);
> +    }
> +
>      /* advertise support for dedicated HP event source to guests */
>      if (spapr->use_hotplug_event_source) {
>          spapr_ovec_set(spapr->ov5, OV5_HP_EVT);
> @@ -4667,8 +4672,11 @@ DEFINE_SPAPR_MACHINE(6_2, "6.2", true);
>   */
>  static void spapr_machine_6_1_class_options(MachineClass *mc)
>  {
> +    SpaprMachineClass *smc = SPAPR_MACHINE_CLASS(mc);
> +
>      spapr_machine_6_2_class_options(mc);
>      compat_props_add(mc->compat_props, hw_compat_6_1, hw_compat_6_1_len);
> +    smc->pre_6_2_numa_affinity = true;
>  }
>  
>  DEFINE_SPAPR_MACHINE(6_1, "6.1", false);
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index dfe4fada01..659513b405 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -24,6 +24,10 @@
>   */
>  static int get_max_dist_ref_points(SpaprMachineState *spapr)
>  {
> +    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
> +        return FORM2_DIST_REF_POINTS;
> +    }
> +
>      return FORM1_DIST_REF_POINTS;
>  }
>  
> @@ -32,6 +36,10 @@ static int get_max_dist_ref_points(SpaprMachineState *spapr)
>   */
>  static int get_numa_assoc_size(SpaprMachineState *spapr)
>  {
> +    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
> +        return FORM2_NUMA_ASSOC_SIZE;
> +    }
> +
>      return FORM1_NUMA_ASSOC_SIZE;
>  }
>  
> @@ -52,6 +60,9 @@ static int get_vcpu_assoc_size(SpaprMachineState *spapr)
>   */
>  static uint32_t *get_associativity(SpaprMachineState *spapr, int node_id)
>  {
> +    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
> +        return spapr->FORM2_assoc_array[node_id];
> +    }
>      return spapr->FORM1_assoc_array[node_id];
>  }
>  
> @@ -295,14 +306,50 @@ static void spapr_numa_FORM1_affinity_init(SpaprMachineState *spapr,
>      spapr_numa_define_FORM1_domains(spapr);
>  }
>  
> +/*
> + * Init NUMA FORM2 machine state data
> + */
> +static void spapr_numa_FORM2_affinity_init(SpaprMachineState *spapr)
> +{
> +    int i;
> +
> +    /*
> +     * For all resources but CPUs, FORM2 associativity arrays will
> +     * be a size 2 array with the following format:
> +     *
> +     * ibm,associativity = {1, numa_id}
> +     *
> +     * CPUs will write an additional 'vcpu_id' on top of the arrays
> +     * being initialized here. 'numa_id' is represented by the
> +     * index 'i' of the loop.
> +     *
> +     * Given that this initialization is also valid for GPU associativity
> +     * arrays, handle everything in one single step by populating the
> +     * arrays up to NUMA_NODES_MAX_NUM.
> +     */
> +    for (i = 0; i < NUMA_NODES_MAX_NUM; i++) {
> +        spapr->FORM2_assoc_array[i][0] = cpu_to_be32(1);
> +        spapr->FORM2_assoc_array[i][1] = cpu_to_be32(i);
> +    }
> +}
> +
>  void spapr_numa_associativity_init(SpaprMachineState *spapr,
>                                     MachineState *machine)
>  {
>      spapr_numa_FORM1_affinity_init(spapr, machine);
> +    spapr_numa_FORM2_affinity_init(spapr);
>  }
>  
>  void spapr_numa_associativity_check(SpaprMachineState *spapr)
>  {
> +    /*
> +     * FORM2 does not have any restrictions we need to handle
> +     * at CAS time, for now.
> +     */
> +    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
> +        return;
> +    }
> +
>      spapr_numa_FORM1_affinity_check(MACHINE(spapr));
>  }
>  
> @@ -447,6 +494,100 @@ static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
>                       maxdomains, sizeof(maxdomains)));
>  }
>  
> +static void spapr_numa_FORM2_write_rtas_tables(SpaprMachineState *spapr,
> +                                               void *fdt, int rtas)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +    NodeInfo *numa_info = ms->numa_state->nodes;
> +    int nb_numa_nodes = ms->numa_state->num_nodes;
> +    int distance_table_entries = nb_numa_nodes * nb_numa_nodes;
> +    g_autofree uint32_t *lookup_index_table = NULL;
> +    g_autofree uint32_t *distance_table = NULL;
> +    int src, dst, i, distance_table_size;
> +    uint8_t *node_distances;
> +
> +    /*
> +     * ibm,numa-lookup-index-table: array with length and a
> +     * list of NUMA ids present in the guest.
> +     */
> +    lookup_index_table = g_new0(uint32_t, nb_numa_nodes + 1);
> +    lookup_index_table[0] = cpu_to_be32(nb_numa_nodes);
> +
> +    for (i = 0; i < nb_numa_nodes; i++) {
> +        lookup_index_table[i + 1] = cpu_to_be32(i);
> +    }
> +
> +    _FDT(fdt_setprop(fdt, rtas, "ibm,numa-lookup-index-table",
> +                     lookup_index_table,
> +                     (nb_numa_nodes + 1) * sizeof(uint32_t)));
> +
> +    /*
> +     * ibm,numa-distance-table: contains all node distances. First
> +     * element is the size of the table as uint32, followed up
> +     * by all the uint8 distances from the first NUMA node, then all
> +     * distances from the second NUMA node and so on.
> +     *
> +     * ibm,numa-lookup-index-table is used by guest to navigate this
> +     * array because NUMA ids can be sparse (node 0 is the first,
> +     * node 8 is the second ...).
> +     */
> +    distance_table = g_new0(uint32_t, distance_table_entries + 1);
> +    distance_table[0] = cpu_to_be32(distance_table_entries);
> +
> +    node_distances = (uint8_t *)&distance_table[1];
> +    i = 0;
> +
> +    for (src = 0; src < nb_numa_nodes; src++) {
> +        for (dst = 0; dst < nb_numa_nodes; dst++) {
> +            node_distances[i++] = numa_info[src].distance[dst];
> +        }
> +    }
> +
> +    distance_table_size = distance_table_entries * sizeof(uint8_t) +
> +                          sizeof(uint32_t);
> +    _FDT(fdt_setprop(fdt, rtas, "ibm,numa-distance-table",
> +                     distance_table, distance_table_size));
> +}
> +
> +/*
> + * This helper could be merged into a single function with
> + * FORM1 logic since we're setting the same DT values, with the
> + * difference being a call to spapr_numa_FORM2_write_rtas_tables()
> + * in the end. The separation was made to avoid clogging FORM1 code
> + * which already has to deal with compat modes from previous
> + * QEMU machine types.
> + */
> +static void spapr_numa_FORM2_write_rtas_dt(SpaprMachineState *spapr,
> +                                           void *fdt, int rtas)
> +{
> +    MachineState *ms = MACHINE(spapr);
> +    uint32_t number_nvgpus_nodes = spapr->gpu_numa_id -
> +                                   spapr_numa_initial_nvgpu_numa_id(ms);
> +
> +    /*
> +     * In FORM2, ibm,associativity-reference-points will point to
> +     * the element in the ibm,associativity array that contains the
> +     * primary domain index (for FORM2, the first element).
> +     *
> +     * This value (in our case, the numa-id) is then used as an index
> +     * to retrieve all other attributes of the node (distance,
> +     * bandwidth, latency) via ibm,numa-lookup-index-table and other
> +     * ibm,numa-*-table properties.
> +     */
> +    uint32_t refpoints[] = { cpu_to_be32(1) };
> +
> +    uint32_t maxdomain = ms->numa_state->num_nodes + number_nvgpus_nodes;
> +    uint32_t maxdomains[] = { cpu_to_be32(1), cpu_to_be32(maxdomain) };
> +
> +    _FDT(fdt_setprop(fdt, rtas, "ibm,associativity-reference-points",
> +                     refpoints, sizeof(refpoints)));
> +
> +    _FDT(fdt_setprop(fdt, rtas, "ibm,max-associativity-domains",
> +                     maxdomains, sizeof(maxdomains)));
> +
> +    spapr_numa_FORM2_write_rtas_tables(spapr, fdt, rtas);
> +}
> +
>  /*
>   * Helper that writes ibm,associativity-reference-points and
>   * max-associativity-domains in the RTAS pointed by @rtas
> @@ -454,6 +595,11 @@ static void spapr_numa_FORM1_write_rtas_dt(SpaprMachineState *spapr,
>   */
>  void spapr_numa_write_rtas_dt(SpaprMachineState *spapr, void *fdt, int rtas)
>  {
> +    if (spapr_ovec_test(spapr->ov5_cas, OV5_FORM2_AFFINITY)) {
> +        spapr_numa_FORM2_write_rtas_dt(spapr, fdt, rtas);
> +        return;
> +    }
> +
>      spapr_numa_FORM1_write_rtas_dt(spapr, fdt, rtas);
>  }
>  
> diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
> index 6b3dfc5dc2..ee7504b976 100644
> --- a/include/hw/ppc/spapr.h
> +++ b/include/hw/ppc/spapr.h
> @@ -118,6 +118,13 @@ typedef enum {
>  #define FORM1_DIST_REF_POINTS            4
>  #define FORM1_NUMA_ASSOC_SIZE            (FORM1_DIST_REF_POINTS + 1)
>  
> +/*
> + * FORM2 NUMA affinity has a single associativity domain, giving
> + * us an assoc size of 2.
> + */
> +#define FORM2_DIST_REF_POINTS            1
> +#define FORM2_NUMA_ASSOC_SIZE            (FORM2_DIST_REF_POINTS + 1)
> +
>  typedef struct SpaprCapabilities SpaprCapabilities;
>  struct SpaprCapabilities {
>      uint8_t caps[SPAPR_CAP_NUM];
> @@ -145,6 +152,7 @@ struct SpaprMachineClass {
>      hwaddr rma_limit;          /* clamp the RMA to this size */
>      bool pre_5_1_assoc_refpoints;
>      bool pre_5_2_numa_associativity;
> +    bool pre_6_2_numa_affinity;
>  
>      bool (*phb_placement)(SpaprMachineState *spapr, uint32_t index,
>                            uint64_t *buid, hwaddr *pio,
> @@ -250,6 +258,7 @@ struct SpaprMachineState {
>      SpaprTpmProxy *tpm_proxy;
>  
>      uint32_t FORM1_assoc_array[NUMA_NODES_MAX_NUM][FORM1_NUMA_ASSOC_SIZE];
> +    uint32_t FORM2_assoc_array[NUMA_NODES_MAX_NUM][FORM2_NUMA_ASSOC_SIZE];
>  
>      Error *fwnmi_migration_blocker;
>  };
> diff --git a/include/hw/ppc/spapr_ovec.h b/include/hw/ppc/spapr_ovec.h
> index 48b716a060..c3e8b98e7e 100644
> --- a/include/hw/ppc/spapr_ovec.h
> +++ b/include/hw/ppc/spapr_ovec.h
> @@ -49,6 +49,7 @@ typedef struct SpaprOptionVector SpaprOptionVector;
>  /* option vector 5 */
>  #define OV5_DRCONF_MEMORY       OV_BIT(2, 2)
>  #define OV5_FORM1_AFFINITY      OV_BIT(5, 0)
> +#define OV5_FORM2_AFFINITY      OV_BIT(5, 2)
>  #define OV5_HP_EVT              OV_BIT(6, 5)
>  #define OV5_HPT_RESIZE          OV_BIT(6, 7)
>  #define OV5_DRMEM_V2            OV_BIT(22, 0)



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info
  2021-09-17 21:28 ` [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info Daniel Henrique Barboza
@ 2021-09-20 15:22   ` Greg Kurz
  2021-09-21  9:16   ` Igor Mammedov
  1 sibling, 0 replies; 15+ messages in thread
From: Greg Kurz @ 2021-09-20 15:22 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: Igor Mammedov, qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:28:02 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> numa_complete_configuration() in hw/core/numa.c always adds a NUMA node
> for the pSeries machine if none was specified, but without node distance
> information for the single node created.
> 
> NUMA FORM1 affinity code didn't rely on numa_state information to do its
> job, but FORM2 does. As it stands now, this is the result of a pSeries guest
> with NUMA FORM2 affinity when no NUMA node is specified:
> 
> $ numactl -H
> available: 1 nodes (0)
> node 0 cpus: 0
> node 0 size: 16222 MB
> node 0 free: 15681 MB
> No distance information available.
> 
> This can be amended in spapr_numa_FORM2_write_rtas_tables(). We're
> enforcing that the local distance (the distance from a node to itself) is
> always 10. This allows for the proper creation of the NUMA distance tables,
> fixing the output of 'numactl -H' in the guest:
> 
> $ numactl -H
> available: 1 nodes (0)
> node 0 cpus: 0
> node 0 size: 16222 MB
> node 0 free: 15685 MB
> node distances:
> node   0
>   0:  10
> 
> CC: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
> ---

Reviewed-by: Greg Kurz <groug@kaod.org>

>  hw/ppc/spapr_numa.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 659513b405..0cead2e7f5 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -539,6 +539,17 @@ static void spapr_numa_FORM2_write_rtas_tables(SpaprMachineState *spapr,
>  
>      for (src = 0; src < nb_numa_nodes; src++) {
>          for (dst = 0; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * We need to be explicit with the local distance
> +             * value to cover the case where the user didn't add any
> +             * NUMA nodes, but QEMU adds the default NUMA node without
> +             * populating numa_info with distance info to retrieve.
> +             */
> +            if (src == dst) {
> +                node_distances[i++] = 10;
> +                continue;
> +            }
> +
>              node_distances[i++] = numa_info[src].distance[dst];
>          }
>      }



^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info
  2021-09-17 21:28 ` [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info Daniel Henrique Barboza
  2021-09-20 15:22   ` Greg Kurz
@ 2021-09-21  9:16   ` Igor Mammedov
  1 sibling, 0 replies; 15+ messages in thread
From: Igor Mammedov @ 2021-09-21  9:16 UTC (permalink / raw)
  To: Daniel Henrique Barboza; +Cc: groug, qemu-ppc, qemu-devel, david

On Fri, 17 Sep 2021 18:28:02 -0300
Daniel Henrique Barboza <danielhb413@gmail.com> wrote:

> numa_complete_configuration() in hw/core/numa.c always adds a NUMA node
> for the pSeries machine if none was specified, but without node distance
> information for the single node created.
> 
> NUMA FORM1 affinity code didn't rely on numa_state information to do its
> job, but FORM2 does. As it stands now, this is the result of a pSeries guest
> with NUMA FORM2 affinity when no NUMA node is specified:
> 
> $ numactl -H
> available: 1 nodes (0)
> node 0 cpus: 0
> node 0 size: 16222 MB
> node 0 free: 15681 MB
> No distance information available.
> 
> This can be amended in spapr_numa_FORM2_write_rtas_tables(). We're
> enforcing that the local distance (the distance from a node to itself) is
> always 10. This allows for the proper creation of the NUMA distance tables,
> fixing the output of 'numactl -H' in the guest:
> 
> $ numactl -H
> available: 1 nodes (0)
> node 0 cpus: 0
> node 0 size: 16222 MB
> node 0 free: 15685 MB
> node distances:
> node   0
>   0:  10
> 
> CC: Igor Mammedov <imammedo@redhat.com>
> Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>

Acked-by: Igor Mammedov <imammedo@redhat.com>

> ---
>  hw/ppc/spapr_numa.c | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/hw/ppc/spapr_numa.c b/hw/ppc/spapr_numa.c
> index 659513b405..0cead2e7f5 100644
> --- a/hw/ppc/spapr_numa.c
> +++ b/hw/ppc/spapr_numa.c
> @@ -539,6 +539,17 @@ static void spapr_numa_FORM2_write_rtas_tables(SpaprMachineState *spapr,
>  
>      for (src = 0; src < nb_numa_nodes; src++) {
>          for (dst = 0; dst < nb_numa_nodes; dst++) {
> +            /*
> +             * We need to be explicit with the local distance
> +             * value to cover the case where the user didn't add any
> +             * NUMA nodes, but QEMU adds the default NUMA node without
> +             * populating numa_info with distance info to retrieve.
> +             */
> +            if (src == dst) {
> +                node_distances[i++] = 10;
> +                continue;
> +            }
> +
>              node_distances[i++] = numa_info[src].distance[dst];
>          }
>      }



^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2021-09-21  9:18 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-09-17 21:27 [PATCH v8 0/7] pSeries FORM2 affinity support Daniel Henrique Barboza
2021-09-17 21:27 ` [PATCH v8 1/7] spapr_numa.c: split FORM1 code into helpers Daniel Henrique Barboza
2021-09-17 21:27 ` [PATCH v8 2/7] spapr_numa.c: scrap 'legacy_numa' concept Daniel Henrique Barboza
2021-09-17 21:27 ` [PATCH v8 3/7] spapr_numa.c: parametrize FORM1 macros Daniel Henrique Barboza
2021-09-20  8:54   ` Greg Kurz
2021-09-17 21:27 ` [PATCH v8 4/7] spapr_numa.c: rename numa_assoc_array to FORM1_assoc_array Daniel Henrique Barboza
2021-09-20  9:21   ` Greg Kurz
2021-09-20 13:39     ` Daniel Henrique Barboza
2021-09-17 21:28 ` [PATCH v8 5/7] spapr: move FORM1 verifications to post CAS Daniel Henrique Barboza
2021-09-20  9:38   ` Greg Kurz
2021-09-17 21:28 ` [PATCH v8 6/7] spapr_numa.c: FORM2 NUMA affinity support Daniel Henrique Barboza
2021-09-20 15:10   ` Greg Kurz
2021-09-17 21:28 ` [PATCH v8 7/7] spapr_numa.c: handle auto NUMA node with no distance info Daniel Henrique Barboza
2021-09-20 15:22   ` Greg Kurz
2021-09-21  9:16   ` Igor Mammedov

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.