* [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models
@ 2019-09-06 19:11 Moger, Babu
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality Moger, Babu
                   ` (16 more replies)
  0 siblings, 17 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:11 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

This series fixes problems with APIC ID encoding for the AMD EPYC CPU models.
https://bugzilla.redhat.com/show_bug.cgi?id=1728166

This is the second pass, intended to give an idea of the changes required to
address the issue. The first pass is available at
https://patchwork.kernel.org/cover/11069785/

Currently, the APIC ID is decoded based on sockets/dies/cores/threads (a
minimal sketch of this generic decoding follows below). This appears to work
for most standard configurations for AMD and other vendors, but this decoding
does not follow AMD's APIC ID enumeration. In some cases it causes a CPU
topology inconsistency: while booting, the guest kernel validates the topology
and finds that it does not align with the EPYC models.
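
For reference, here is a minimal self-contained sketch of that generic
decoding. It roughly mirrors apicid_from_topo_ids() and the apicid_*_offset()
helpers in include/hw/i386/topology.h (see the later patches in this thread);
bits_for() is only an illustrative stand-in for apicid_bitwidth_for_count(),
and the values in main() are arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Illustrative stand-in for apicid_bitwidth_for_count(): ceil(log2(count)) */
static unsigned bits_for(unsigned count)
{
    unsigned w = 0;

    while ((1u << w) < count) {
        w++;
    }
    return w;
}

/* Pack pkg/die/core/thread IDs back to back, as the generic code does */
static uint32_t generic_apicid(unsigned nr_dies, unsigned nr_cores,
                               unsigned nr_threads, unsigned pkg_id,
                               unsigned die_id, unsigned core_id,
                               unsigned smt_id)
{
    unsigned core_off = bits_for(nr_threads);
    unsigned die_off  = core_off + bits_for(nr_cores);
    unsigned pkg_off  = die_off + bits_for(nr_dies);

    return (pkg_id << pkg_off) | (die_id << die_off) |
           (core_id << core_off) | smt_id;
}

int main(void)
{
    /* 1 die, 4 cores, 2 threads: socket 1, core 3, thread 1 */
    printf("0x%x\n", generic_apicid(1, 4, 2, 1, 0, 3, 1));
    return 0;
}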

To fix the problem, we need to build the topology as described in the
Processor Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1
Processors, available at https://www.amd.com/en/support/tech-docs

Here is the relevant text from the PPR:
"""
2.1.10.2.1.3 ApicId Enumeration Requirements

Operating systems are expected to use Core::X86::Cpuid::SizeId[ApicIdCoreIdSize],
the number of least significant bits in the Initial APIC ID that indicate core
ID within a processor, in constructing per-core CPUID masks.
Core::X86::Cpuid::SizeId[ApicIdCoreIdSize] determines the maximum number of
cores (MNC) that the processor could theoretically support, not the actual
number of cores that are actually implemented or enabled on the processor, as
indicated by Core::X86::Cpuid::SizeId[NC].

Each Core::X86::Apic::ApicId[ApicId] register is preset as follows:
• ApicId[6]   = Socket ID.
• ApicId[5:4] = Node ID.
• ApicId[3]   = Logical CCX L3 complex ID
• ApicId[2:0] = (SMT) ? {LogicalCoreID[1:0], ThreadId} : {1'b0, LogicalCoreID[1:0]}.
"""

v2:
  1. Introduced the new "epyc" property to enable the new epyc mode.
  2. Separated the epyc mode and non-epyc mode functions.
  3. Introduced function pointers in PCMachineState to handle the
     differences.
  4. Lightly tested different combinations to make sure things are working as
     expected.
  5. TODO: Setting the epyc feature bit still needs to be worked out. This
     feature is supported only on AMD EPYC models. I may need some guidance on
     that.

v1:
  https://patchwork.kernel.org/cover/11069785/

---

Babu Moger (16):
      numa: Split the numa functionality
      hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
      hw/i386: Introduce X86CPUTopoInfo to contain topology info
      machine: Add SMP Sockets in CpuTopology
      hw/i386: Simplify topology Offset/width Calculation
      hw/core: Add core complex id in X86CPU topology
      hw/386: Add new epyc mode topology decoding functions
      i386: Cleanup and use the new epyc mode topology functions
      hw/i386: Introduce initialize_topo_info function
      hw/i386: Introduce apicid_from_cpu_idx in PCMachineState
      Introduce topo_ids_from_apicid handler
      hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState
      machine: Add new epyc property in PCMachineState
      hw/i386: Introduce epyc mode function handlers
      i386: Fix pkg_id offset for epyc mode
      hw/core: Fix up the machine_set_cpu_numa_node for epyc


 hw/core/machine-hmp-cmds.c |    3 
 hw/core/machine.c          |   38 ++++++
 hw/core/numa.c             |  110 ++++++++++++----
 hw/i386/pc.c               |  143 +++++++++++++++------
 include/hw/boards.h        |    8 +
 include/hw/i386/pc.h       |    9 +
 include/hw/i386/topology.h |  294 +++++++++++++++++++++++++++++++++++---------
 include/sysemu/numa.h      |    2 
 qapi/machine.json          |    4 -
 target/i386/cpu.c          |  209 +++++++++++--------------------
 target/i386/cpu.h          |    1 
 vl.c                       |    3 
 12 files changed, 560 insertions(+), 264 deletions(-)

--
Signature


* [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
@ 2019-09-06 19:11 ` Moger, Babu
  2019-10-10  3:25   ` Eduardo Habkost
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs Moger, Babu
                   ` (15 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:11 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

To support the new epyc mode, we need to know the number of NUMA nodes
in advance in order to generate the APIC IDs correctly. So, split the NUMA
initialization into two: parse_numa initializes numa_info and updates
nb_numa_nodes, and parse_numa_node then does the NUMA node
initialization.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/core/numa.c        |  106 +++++++++++++++++++++++++++++++++++--------------
 include/sysemu/numa.h |    2 +
 vl.c                  |    2 +
 3 files changed, 80 insertions(+), 30 deletions(-)

diff --git a/hw/core/numa.c b/hw/core/numa.c
index a11431483c..27fa6b5e1d 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -55,14 +55,10 @@ bool have_numa_distance;
 NodeInfo numa_info[MAX_NODES];
 
 
-static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
+static void parse_numa_info(MachineState *ms, NumaNodeOptions *node,
                             Error **errp)
 {
-    Error *err = NULL;
     uint16_t nodenr;
-    uint16List *cpus = NULL;
-    MachineClass *mc = MACHINE_GET_CLASS(ms);
-    unsigned int max_cpus = ms->smp.max_cpus;
 
     if (node->has_nodeid) {
         nodenr = node->nodeid;
@@ -81,29 +77,6 @@ static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
         return;
     }
 
-    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
-        error_setg(errp, "NUMA is not supported by this machine-type");
-        return;
-    }
-    for (cpus = node->cpus; cpus; cpus = cpus->next) {
-        CpuInstanceProperties props;
-        if (cpus->value >= max_cpus) {
-            error_setg(errp,
-                       "CPU index (%" PRIu16 ")"
-                       " should be smaller than maxcpus (%d)",
-                       cpus->value, max_cpus);
-            return;
-        }
-        props = mc->cpu_index_to_instance_props(ms, cpus->value);
-        props.node_id = nodenr;
-        props.has_node_id = true;
-        machine_set_cpu_numa_node(ms, &props, &err);
-        if (err) {
-            error_propagate(errp, err);
-            return;
-        }
-    }
-
     have_memdevs = have_memdevs ? : node->has_memdev;
     have_mem = have_mem ? : node->has_mem;
     if ((node->has_mem && have_memdevs) || (node->has_memdev && have_mem)) {
@@ -177,7 +150,7 @@ void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp)
 
     switch (object->type) {
     case NUMA_OPTIONS_TYPE_NODE:
-        parse_numa_node(ms, &object->u.node, &err);
+        parse_numa_info(ms, &object->u.node, &err);
         if (err) {
             goto end;
         }
@@ -242,6 +215,73 @@ end:
     return 0;
 }
 
+void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
+{
+    MachineClass *mc = MACHINE_GET_CLASS(ms);
+    NumaNodeOptions *node = &object->u.node;
+    unsigned int max_cpus = ms->smp.max_cpus;
+    uint16List *cpus = NULL;
+    Error *err = NULL;
+    uint16_t nodenr;
+
+    if (node->has_nodeid) {
+        nodenr = node->nodeid;
+    } else {
+        error_setg(errp, "NUMA node information is not available");
+    }
+
+    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
+        error_setg(errp, "NUMA is not supported by this machine-type");
+        return;
+    }
+
+    for (cpus = node->cpus; cpus; cpus = cpus->next) {
+        CpuInstanceProperties props;
+        if (cpus->value >= max_cpus) {
+            error_setg(errp,
+                       "CPU index (%" PRIu16 ")"
+                       " should be smaller than maxcpus (%d)",
+                       cpus->value, max_cpus);
+            return;
+         }
+         props = mc->cpu_index_to_instance_props(ms, cpus->value);
+         props.node_id = nodenr;
+         props.has_node_id = true;
+         machine_set_cpu_numa_node(ms, &props, &err);
+         if (err) {
+            error_propagate(errp, err);
+            return;
+         }
+    }
+}
+
+static int parse_numa_node(void *opaque, QemuOpts *opts, Error **errp)
+{
+    NumaOptions *object = NULL;
+    MachineState *ms = MACHINE(opaque);
+    Error *err = NULL;
+    Visitor *v = opts_visitor_new(opts);
+
+    visit_type_NumaOptions(v, NULL, &object, &err);
+    visit_free(v);
+    if (err) {
+        goto end;
+    }
+
+    if (object->type == NUMA_OPTIONS_TYPE_NODE) {
+        set_numa_node_options(ms, object, &err);
+    }
+
+end:
+    qapi_free_NumaOptions(object);
+    if (err) {
+        error_propagate(errp, err);
+        return -1;
+    }
+
+    return 0;
+}
+
 /* If all node pair distances are symmetric, then only distances
  * in one direction are enough. If there is even one asymmetric
  * pair, though, then all distances must be provided. The
@@ -368,7 +408,7 @@ void numa_complete_configuration(MachineState *ms)
     if (ms->ram_slots > 0 && nb_numa_nodes == 0 &&
         mc->auto_enable_numa_with_memhp) {
             NumaNodeOptions node = { };
-            parse_numa_node(ms, &node, &error_abort);
+            parse_numa_info(ms, &node, &error_abort);
     }
 
     assert(max_numa_nodeid <= MAX_NODES);
@@ -448,6 +488,12 @@ void parse_numa_opts(MachineState *ms)
     qemu_opts_foreach(qemu_find_opts("numa"), parse_numa, ms, &error_fatal);
 }
 
+void parse_numa_node_opts(MachineState *ms)
+{
+    qemu_opts_foreach(qemu_find_opts("numa"), parse_numa_node,
+                      ms, &error_fatal);
+}
+
 void numa_cpu_pre_plug(const CPUArchId *slot, DeviceState *dev, Error **errp)
 {
     int node_id = object_property_get_int(OBJECT(dev), "node-id", &error_abort);
diff --git a/include/sysemu/numa.h b/include/sysemu/numa.h
index 01a263eba2..ca109adaa6 100644
--- a/include/sysemu/numa.h
+++ b/include/sysemu/numa.h
@@ -24,7 +24,9 @@ struct NumaNodeMem {
 extern NodeInfo numa_info[MAX_NODES];
 
 void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp);
+void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp);
 void parse_numa_opts(MachineState *ms);
+void parse_numa_node_opts(MachineState *ms);
 void numa_complete_configuration(MachineState *ms);
 void query_numa_node_mem(NumaNodeMem node_mem[]);
 extern QemuOptsList qemu_numa_opts;
diff --git a/vl.c b/vl.c
index b426b32134..711d2ae5da 100644
--- a/vl.c
+++ b/vl.c
@@ -4339,6 +4339,8 @@ int main(int argc, char **argv, char **envp)
     }
     parse_numa_opts(current_machine);
 
+    parse_numa_node_opts(current_machine);
+
     /* do monitor/qmp handling at preconfig state if requested */
     main_loop();
 



* [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality Moger, Babu
@ 2019-09-06 19:11 ` Moger, Babu
  2019-10-10  3:26   ` Eduardo Habkost
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info Moger, Babu
                   ` (14 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:11 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Rename a few data structures related to X86 topology.
X86CPUTopoIDs will hold the individual arch IDs. The next
patch introduces X86CPUTopoInfo, which will hold all the
topology information (like cores, threads, etc.).

Also add node_id and ccx_id. These will be required to support
the new epyc mode. There is no functional change.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c               |   60 ++++++++++++++++++++++----------------------
 include/hw/i386/topology.h |   42 ++++++++++++++++---------------
 2 files changed, 52 insertions(+), 50 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 549c437050..ada445f8f3 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2379,7 +2379,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     int idx;
     CPUState *cs;
     CPUArchId *cpu_slot;
-    X86CPUTopoInfo topo;
+    X86CPUTopoIDs topo_ids;
     X86CPU *cpu = X86_CPU(dev);
     CPUX86State *env = &cpu->env;
     MachineState *ms = MACHINE(hotplug_dev);
@@ -2432,12 +2432,12 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
             return;
         }
 
-        topo.pkg_id = cpu->socket_id;
-        topo.die_id = cpu->die_id;
-        topo.core_id = cpu->core_id;
-        topo.smt_id = cpu->thread_id;
+        topo_ids.pkg_id = cpu->socket_id;
+        topo_ids.die_id = cpu->die_id;
+        topo_ids.core_id = cpu->core_id;
+        topo_ids.smt_id = cpu->thread_id;
         cpu->apic_id = apicid_from_topo_ids(pcms->smp_dies, smp_cores,
-                                            smp_threads, &topo);
+                                            smp_threads, &topo_ids);
     }
 
     cpu_slot = pc_find_cpu_slot(MACHINE(pcms), cpu->apic_id, &idx);
@@ -2445,11 +2445,11 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         MachineState *ms = MACHINE(pcms);
 
         x86_topo_ids_from_apicid(cpu->apic_id, pcms->smp_dies,
-                                 smp_cores, smp_threads, &topo);
+                                 smp_cores, smp_threads, &topo_ids);
         error_setg(errp,
             "Invalid CPU [socket: %u, die: %u, core: %u, thread: %u] with"
             " APIC ID %" PRIu32 ", valid index range 0:%d",
-            topo.pkg_id, topo.die_id, topo.core_id, topo.smt_id,
+            topo_ids.pkg_id, topo_ids.die_id, topo_ids.core_id, topo_ids.smt_id,
             cpu->apic_id, ms->possible_cpus->len - 1);
         return;
     }
@@ -2467,34 +2467,34 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
      * once -smp refactoring is complete and there will be CPU private
      * CPUState::nr_cores and CPUState::nr_threads fields instead of globals */
     x86_topo_ids_from_apicid(cpu->apic_id, pcms->smp_dies,
-                             smp_cores, smp_threads, &topo);
-    if (cpu->socket_id != -1 && cpu->socket_id != topo.pkg_id) {
+                             smp_cores, smp_threads, &topo_ids);
+    if (cpu->socket_id != -1 && cpu->socket_id != topo_ids.pkg_id) {
         error_setg(errp, "property socket-id: %u doesn't match set apic-id:"
-            " 0x%x (socket-id: %u)", cpu->socket_id, cpu->apic_id, topo.pkg_id);
+            " 0x%x (socket-id: %u)", cpu->socket_id, cpu->apic_id, topo_ids.pkg_id);
         return;
     }
-    cpu->socket_id = topo.pkg_id;
+    cpu->socket_id = topo_ids.pkg_id;
 
-    if (cpu->die_id != -1 && cpu->die_id != topo.die_id) {
+    if (cpu->die_id != -1 && cpu->die_id != topo_ids.die_id) {
         error_setg(errp, "property die-id: %u doesn't match set apic-id:"
-            " 0x%x (die-id: %u)", cpu->die_id, cpu->apic_id, topo.die_id);
+            " 0x%x (die-id: %u)", cpu->die_id, cpu->apic_id, topo_ids.die_id);
         return;
     }
-    cpu->die_id = topo.die_id;
+    cpu->die_id = topo_ids.die_id;
 
-    if (cpu->core_id != -1 && cpu->core_id != topo.core_id) {
+    if (cpu->core_id != -1 && cpu->core_id != topo_ids.core_id) {
         error_setg(errp, "property core-id: %u doesn't match set apic-id:"
-            " 0x%x (core-id: %u)", cpu->core_id, cpu->apic_id, topo.core_id);
+            " 0x%x (core-id: %u)", cpu->core_id, cpu->apic_id, topo_ids.core_id);
         return;
     }
-    cpu->core_id = topo.core_id;
+    cpu->core_id = topo_ids.core_id;
 
-    if (cpu->thread_id != -1 && cpu->thread_id != topo.smt_id) {
+    if (cpu->thread_id != -1 && cpu->thread_id != topo_ids.smt_id) {
         error_setg(errp, "property thread-id: %u doesn't match set apic-id:"
-            " 0x%x (thread-id: %u)", cpu->thread_id, cpu->apic_id, topo.smt_id);
+            " 0x%x (thread-id: %u)", cpu->thread_id, cpu->apic_id, topo_ids.smt_id);
         return;
     }
-    cpu->thread_id = topo.smt_id;
+    cpu->thread_id = topo_ids.smt_id;
 
     if (hyperv_feat_enabled(cpu, HYPERV_FEAT_VPINDEX) &&
         !kvm_hv_vpindex_settable()) {
@@ -2840,14 +2840,14 @@ pc_cpu_index_to_props(MachineState *ms, unsigned cpu_index)
 
 static int64_t pc_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
-   X86CPUTopoInfo topo;
+   X86CPUTopoIDs topo_ids;
    PCMachineState *pcms = PC_MACHINE(ms);
 
    assert(idx < ms->possible_cpus->len);
    x86_topo_ids_from_apicid(ms->possible_cpus->cpus[idx].arch_id,
                             pcms->smp_dies, ms->smp.cores,
-                            ms->smp.threads, &topo);
-   return topo.pkg_id % nb_numa_nodes;
+                            ms->smp.threads, &topo_ids);
+   return topo_ids.pkg_id % nb_numa_nodes;
 }
 
 static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
@@ -2869,22 +2869,22 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
                                   sizeof(CPUArchId) * max_cpus);
     ms->possible_cpus->len = max_cpus;
     for (i = 0; i < ms->possible_cpus->len; i++) {
-        X86CPUTopoInfo topo;
+        X86CPUTopoIDs topo_ids;
 
         ms->possible_cpus->cpus[i].type = ms->cpu_type;
         ms->possible_cpus->cpus[i].vcpus_count = 1;
         ms->possible_cpus->cpus[i].arch_id = x86_cpu_apic_id_from_index(pcms, i);
         x86_topo_ids_from_apicid(ms->possible_cpus->cpus[i].arch_id,
                                  pcms->smp_dies, ms->smp.cores,
-                                 ms->smp.threads, &topo);
+                                 ms->smp.threads, &topo_ids);
         ms->possible_cpus->cpus[i].props.has_socket_id = true;
-        ms->possible_cpus->cpus[i].props.socket_id = topo.pkg_id;
+        ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
         ms->possible_cpus->cpus[i].props.has_die_id = true;
-        ms->possible_cpus->cpus[i].props.die_id = topo.die_id;
+        ms->possible_cpus->cpus[i].props.die_id = topo_ids.die_id;
         ms->possible_cpus->cpus[i].props.has_core_id = true;
-        ms->possible_cpus->cpus[i].props.core_id = topo.core_id;
+        ms->possible_cpus->cpus[i].props.core_id = topo_ids.core_id;
         ms->possible_cpus->cpus[i].props.has_thread_id = true;
-        ms->possible_cpus->cpus[i].props.thread_id = topo.smt_id;
+        ms->possible_cpus->cpus[i].props.thread_id = topo_ids.smt_id;
     }
     return ms->possible_cpus;
 }
diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index 4ff5b2da6c..0637743cdf 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -45,12 +45,14 @@
  */
 typedef uint32_t apic_id_t;
 
-typedef struct X86CPUTopoInfo {
+typedef struct X86CPUTopoIDs {
     unsigned pkg_id;
     unsigned die_id;
     unsigned core_id;
     unsigned smt_id;
-} X86CPUTopoInfo;
+    unsigned node_id;
+    unsigned ccx_id;
+} X86CPUTopoIDs;
 
 /* Return the bit width needed for 'count' IDs
  */
@@ -122,12 +124,12 @@ static inline unsigned apicid_pkg_offset(unsigned nr_dies,
 static inline apic_id_t apicid_from_topo_ids(unsigned nr_dies,
                                              unsigned nr_cores,
                                              unsigned nr_threads,
-                                             const X86CPUTopoInfo *topo)
+                                             const X86CPUTopoIDs *topo_ids)
 {
-    return (topo->pkg_id  << apicid_pkg_offset(nr_dies, nr_cores, nr_threads)) |
-           (topo->die_id  << apicid_die_offset(nr_dies, nr_cores, nr_threads)) |
-          (topo->core_id << apicid_core_offset(nr_dies, nr_cores, nr_threads)) |
-           topo->smt_id;
+    return (topo_ids->pkg_id  << apicid_pkg_offset(nr_dies, nr_cores, nr_threads)) |
+           (topo_ids->die_id  << apicid_die_offset(nr_dies, nr_cores, nr_threads)) |
+           (topo_ids->core_id << apicid_core_offset(nr_dies, nr_cores, nr_threads)) |
+           topo_ids->smt_id;
 }
 
 /* Calculate thread/core/package IDs for a specific topology,
@@ -137,12 +139,12 @@ static inline void x86_topo_ids_from_idx(unsigned nr_dies,
                                          unsigned nr_cores,
                                          unsigned nr_threads,
                                          unsigned cpu_index,
-                                         X86CPUTopoInfo *topo)
+                                         X86CPUTopoIDs *topo_ids)
 {
-    topo->pkg_id = cpu_index / (nr_dies * nr_cores * nr_threads);
-    topo->die_id = cpu_index / (nr_cores * nr_threads) % nr_dies;
-    topo->core_id = cpu_index / nr_threads % nr_cores;
-    topo->smt_id = cpu_index % nr_threads;
+    topo_ids->pkg_id = cpu_index / (nr_dies * nr_cores * nr_threads);
+    topo_ids->die_id = cpu_index / (nr_cores * nr_threads) % nr_dies;
+    topo_ids->core_id = cpu_index / nr_threads % nr_cores;
+    topo_ids->smt_id = cpu_index % nr_threads;
 }
 
 /* Calculate thread/core/package IDs for a specific topology,
@@ -152,17 +154,17 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
                                             unsigned nr_dies,
                                             unsigned nr_cores,
                                             unsigned nr_threads,
-                                            X86CPUTopoInfo *topo)
+                                            X86CPUTopoIDs *topo_ids)
 {
-    topo->smt_id = apicid &
+    topo_ids->smt_id = apicid &
             ~(0xFFFFFFFFUL << apicid_smt_width(nr_dies, nr_cores, nr_threads));
-    topo->core_id =
+    topo_ids->core_id =
             (apicid >> apicid_core_offset(nr_dies, nr_cores, nr_threads)) &
             ~(0xFFFFFFFFUL << apicid_core_width(nr_dies, nr_cores, nr_threads));
-    topo->die_id =
+    topo_ids->die_id =
             (apicid >> apicid_die_offset(nr_dies, nr_cores, nr_threads)) &
             ~(0xFFFFFFFFUL << apicid_die_width(nr_dies, nr_cores, nr_threads));
-    topo->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
+    topo_ids->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
 }
 
 /* Make APIC ID for the CPU 'cpu_index'
@@ -174,9 +176,9 @@ static inline apic_id_t x86_apicid_from_cpu_idx(unsigned nr_dies,
                                                 unsigned nr_threads,
                                                 unsigned cpu_index)
 {
-    X86CPUTopoInfo topo;
-    x86_topo_ids_from_idx(nr_dies, nr_cores, nr_threads, cpu_index, &topo);
-    return apicid_from_topo_ids(nr_dies, nr_cores, nr_threads, &topo);
+    X86CPUTopoIDs topo_ids;
+    x86_topo_ids_from_idx(nr_dies, nr_cores, nr_threads, cpu_index, &topo_ids);
+    return apicid_from_topo_ids(nr_dies, nr_cores, nr_threads, &topo_ids);
 }
 
 #endif /* HW_I386_TOPOLOGY_H */



* [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality Moger, Babu
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs Moger, Babu
@ 2019-09-06 19:11 ` Moger, Babu
  2019-10-11  2:29   ` Eduardo Habkost
  2019-10-11  3:54   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology Moger, Babu
                   ` (13 subsequent siblings)
  16 siblings, 2 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:11 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

This is an effort to re-arrange a few data structures for better
readability. Add X86CPUTopoInfo, which will hold all the topology
information required to build the CPU topology. There are no
functional changes.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c               |   40 +++++++++++++++++++++++++++-------------
 include/hw/i386/topology.h |   40 ++++++++++++++++++++++++++--------------
 2 files changed, 53 insertions(+), 27 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index ada445f8f3..95aab8e5e7 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -930,11 +930,15 @@ static uint32_t x86_cpu_apic_id_from_index(PCMachineState *pcms,
 {
     MachineState *ms = MACHINE(pcms);
     PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
+    X86CPUTopoInfo topo_info;
     uint32_t correct_id;
     static bool warned;
 
-    correct_id = x86_apicid_from_cpu_idx(pcms->smp_dies, ms->smp.cores,
-                                         ms->smp.threads, cpu_index);
+    topo_info.nr_dies = pcms->smp_dies;
+    topo_info.nr_cores = ms->smp.cores;
+    topo_info.nr_threads = ms->smp.threads;
+
+    correct_id = x86_apicid_from_cpu_idx(&topo_info, cpu_index);
     if (pcmc->compat_apic_id_mode) {
         if (cpu_index != correct_id && !warned && !qtest_enabled()) {
             error_report("APIC IDs set in compatibility mode, "
@@ -2386,6 +2390,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
     unsigned int smp_cores = ms->smp.cores;
     unsigned int smp_threads = ms->smp.threads;
+    X86CPUTopoInfo topo_info;
 
     if(!object_dynamic_cast(OBJECT(cpu), ms->cpu_type)) {
         error_setg(errp, "Invalid CPU type, expected cpu type: '%s'",
@@ -2393,6 +2398,10 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         return;
     }
 
+    topo_info.nr_dies = pcms->smp_dies;
+    topo_info.nr_cores = smp_cores;
+    topo_info.nr_threads = smp_threads;
+
     env->nr_dies = pcms->smp_dies;
 
     /*
@@ -2436,16 +2445,14 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         topo_ids.die_id = cpu->die_id;
         topo_ids.core_id = cpu->core_id;
         topo_ids.smt_id = cpu->thread_id;
-        cpu->apic_id = apicid_from_topo_ids(pcms->smp_dies, smp_cores,
-                                            smp_threads, &topo_ids);
+        cpu->apic_id = apicid_from_topo_ids(&topo_info, &topo_ids);
     }
 
     cpu_slot = pc_find_cpu_slot(MACHINE(pcms), cpu->apic_id, &idx);
     if (!cpu_slot) {
         MachineState *ms = MACHINE(pcms);
 
-        x86_topo_ids_from_apicid(cpu->apic_id, pcms->smp_dies,
-                                 smp_cores, smp_threads, &topo_ids);
+        x86_topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
         error_setg(errp,
             "Invalid CPU [socket: %u, die: %u, core: %u, thread: %u] with"
             " APIC ID %" PRIu32 ", valid index range 0:%d",
@@ -2466,8 +2473,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     /* TODO: move socket_id/core_id/thread_id checks into x86_cpu_realizefn()
      * once -smp refactoring is complete and there will be CPU private
      * CPUState::nr_cores and CPUState::nr_threads fields instead of globals */
-    x86_topo_ids_from_apicid(cpu->apic_id, pcms->smp_dies,
-                             smp_cores, smp_threads, &topo_ids);
+    x86_topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
     if (cpu->socket_id != -1 && cpu->socket_id != topo_ids.pkg_id) {
         error_setg(errp, "property socket-id: %u doesn't match set apic-id:"
             " 0x%x (socket-id: %u)", cpu->socket_id, cpu->apic_id, topo_ids.pkg_id);
@@ -2842,19 +2848,28 @@ static int64_t pc_get_default_cpu_node_id(const MachineState *ms, int idx)
 {
    X86CPUTopoIDs topo_ids;
    PCMachineState *pcms = PC_MACHINE(ms);
+   X86CPUTopoInfo topo_info;
+
+   topo_info.nr_dies = pcms->smp_dies;
+   topo_info.nr_cores = ms->smp.cores;
+   topo_info.nr_threads = ms->smp.threads;
 
    assert(idx < ms->possible_cpus->len);
    x86_topo_ids_from_apicid(ms->possible_cpus->cpus[idx].arch_id,
-                            pcms->smp_dies, ms->smp.cores,
-                            ms->smp.threads, &topo_ids);
+                            &topo_info, &topo_ids);
    return topo_ids.pkg_id % nb_numa_nodes;
 }
 
 static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
 {
     PCMachineState *pcms = PC_MACHINE(ms);
-    int i;
     unsigned int max_cpus = ms->smp.max_cpus;
+    X86CPUTopoInfo topo_info;
+    int i;
+
+    topo_info.nr_dies = pcms->smp_dies;
+    topo_info.nr_cores = ms->smp.cores;
+    topo_info.nr_threads = ms->smp.threads;
 
     if (ms->possible_cpus) {
         /*
@@ -2875,8 +2890,7 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[i].vcpus_count = 1;
         ms->possible_cpus->cpus[i].arch_id = x86_cpu_apic_id_from_index(pcms, i);
         x86_topo_ids_from_apicid(ms->possible_cpus->cpus[i].arch_id,
-                                 pcms->smp_dies, ms->smp.cores,
-                                 ms->smp.threads, &topo_ids);
+                                 &topo_info, &topo_ids);
         ms->possible_cpus->cpus[i].props.has_socket_id = true;
         ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
         ms->possible_cpus->cpus[i].props.has_die_id = true;
diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index 0637743cdf..906017e8e3 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -54,6 +54,14 @@ typedef struct X86CPUTopoIDs {
     unsigned ccx_id;
 } X86CPUTopoIDs;
 
+typedef struct X86CPUTopoInfo {
+    unsigned numa_nodes;
+    unsigned nr_sockets;
+    unsigned nr_dies;
+    unsigned nr_cores;
+    unsigned nr_threads;
+} X86CPUTopoInfo;
+
 /* Return the bit width needed for 'count' IDs
  */
 static unsigned apicid_bitwidth_for_count(unsigned count)
@@ -121,11 +129,13 @@ static inline unsigned apicid_pkg_offset(unsigned nr_dies,
  *
  * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
  */
-static inline apic_id_t apicid_from_topo_ids(unsigned nr_dies,
-                                             unsigned nr_cores,
-                                             unsigned nr_threads,
+static inline apic_id_t apicid_from_topo_ids(X86CPUTopoInfo *topo_info,
                                              const X86CPUTopoIDs *topo_ids)
 {
+    unsigned nr_dies = topo_info->nr_dies;
+    unsigned nr_cores = topo_info->nr_cores;
+    unsigned nr_threads = topo_info->nr_threads;
+
     return (topo_ids->pkg_id  << apicid_pkg_offset(nr_dies, nr_cores, nr_threads)) |
            (topo_ids->die_id  << apicid_die_offset(nr_dies, nr_cores, nr_threads)) |
            (topo_ids->core_id << apicid_core_offset(nr_dies, nr_cores, nr_threads)) |
@@ -135,12 +145,14 @@ static inline apic_id_t apicid_from_topo_ids(unsigned nr_dies,
 /* Calculate thread/core/package IDs for a specific topology,
  * based on (contiguous) CPU index
  */
-static inline void x86_topo_ids_from_idx(unsigned nr_dies,
-                                         unsigned nr_cores,
-                                         unsigned nr_threads,
+static inline void x86_topo_ids_from_idx(X86CPUTopoInfo *topo_info,
                                          unsigned cpu_index,
                                          X86CPUTopoIDs *topo_ids)
 {
+    unsigned nr_dies = topo_info->nr_dies;
+    unsigned nr_cores = topo_info->nr_cores;
+    unsigned nr_threads = topo_info->nr_threads;
+
     topo_ids->pkg_id = cpu_index / (nr_dies * nr_cores * nr_threads);
     topo_ids->die_id = cpu_index / (nr_cores * nr_threads) % nr_dies;
     topo_ids->core_id = cpu_index / nr_threads % nr_cores;
@@ -151,11 +163,13 @@ static inline void x86_topo_ids_from_idx(unsigned nr_dies,
  * based on APIC ID
  */
 static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
-                                            unsigned nr_dies,
-                                            unsigned nr_cores,
-                                            unsigned nr_threads,
+                                            X86CPUTopoInfo *topo_info,
                                             X86CPUTopoIDs *topo_ids)
 {
+    unsigned nr_dies = topo_info->nr_dies;
+    unsigned nr_cores = topo_info->nr_cores;
+    unsigned nr_threads = topo_info->nr_threads;
+
     topo_ids->smt_id = apicid &
             ~(0xFFFFFFFFUL << apicid_smt_width(nr_dies, nr_cores, nr_threads));
     topo_ids->core_id =
@@ -171,14 +185,12 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
  *
  * 'cpu_index' is a sequential, contiguous ID for the CPU.
  */
-static inline apic_id_t x86_apicid_from_cpu_idx(unsigned nr_dies,
-                                                unsigned nr_cores,
-                                                unsigned nr_threads,
+static inline apic_id_t x86_apicid_from_cpu_idx(X86CPUTopoInfo *topo_info,
                                                 unsigned cpu_index)
 {
     X86CPUTopoIDs topo_ids;
-    x86_topo_ids_from_idx(nr_dies, nr_cores, nr_threads, cpu_index, &topo_ids);
-    return apicid_from_topo_ids(nr_dies, nr_cores, nr_threads, &topo_ids);
+    x86_topo_ids_from_idx(topo_info, cpu_index, &topo_ids);
+    return apicid_from_topo_ids(topo_info, &topo_ids);
 }
 
 #endif /* HW_I386_TOPOLOGY_H */



* [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (2 preceding siblings ...)
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-10-11  2:31   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation Moger, Babu
                   ` (12 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Store the SMP sockets in CpuTopology. The socket information
is required to build the CPU topology in the new epyc mode.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/core/machine.c   |    1 +
 hw/i386/pc.c        |    1 +
 include/hw/boards.h |    2 ++
 vl.c                |    1 +
 4 files changed, 5 insertions(+)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index c58a8e594e..4034b7e903 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -795,6 +795,7 @@ static void smp_parse(MachineState *ms, QemuOpts *opts)
         ms->smp.cpus = cpus;
         ms->smp.cores = cores;
         ms->smp.threads = threads;
+        ms->smp.sockets = sockets;
     }
 
     if (ms->smp.cpus > 1) {
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 95aab8e5e7..9e1c3f9f57 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1609,6 +1609,7 @@ void pc_smp_parse(MachineState *ms, QemuOpts *opts)
         ms->smp.cpus = cpus;
         ms->smp.cores = cores;
         ms->smp.threads = threads;
+        ms->smp.sockets = sockets;
         pcms->smp_dies = dies;
     }
 
diff --git a/include/hw/boards.h b/include/hw/boards.h
index a71d1a53a5..12eb5032a5 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -245,12 +245,14 @@ typedef struct DeviceMemoryState {
  * @cpus: the number of present logical processors on the machine
  * @cores: the number of cores in one package
  * @threads: the number of threads in one core
+ * @sockets: the number of sockets on the machine
  * @max_cpus: the maximum number of logical processors on the machine
  */
 typedef struct CpuTopology {
     unsigned int cpus;
     unsigned int cores;
     unsigned int threads;
+    unsigned int sockets;
     unsigned int max_cpus;
 } CpuTopology;
 
diff --git a/vl.c b/vl.c
index 711d2ae5da..473a688779 100644
--- a/vl.c
+++ b/vl.c
@@ -3981,6 +3981,7 @@ int main(int argc, char **argv, char **envp)
     current_machine->smp.max_cpus = machine_class->default_cpus;
     current_machine->smp.cores = 1;
     current_machine->smp.threads = 1;
+    current_machine->smp.sockets = 1;
 
     machine_class->smp_parse(current_machine,
         qemu_opts_find(qemu_find_opts("smp-opts"), NULL));



* [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (3 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-10-11  2:32   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology Moger, Babu
                   ` (11 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Some parameters are passed unnecessarily for the offset/width
calculations. Remove those parameters from the function prototypes.
No functional change.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 include/hw/i386/topology.h |   45 ++++++++++++++++++--------------------------
 target/i386/cpu.c          |   12 ++++--------
 2 files changed, 22 insertions(+), 35 deletions(-)

diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index 906017e8e3..fb10863a66 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -73,46 +73,37 @@ static unsigned apicid_bitwidth_for_count(unsigned count)
 
 /* Bit width of the SMT_ID (thread ID) field on the APIC ID
  */
-static inline unsigned apicid_smt_width(unsigned nr_dies,
-                                        unsigned nr_cores,
-                                        unsigned nr_threads)
+static inline unsigned apicid_smt_width(unsigned nr_threads)
 {
     return apicid_bitwidth_for_count(nr_threads);
 }
 
 /* Bit width of the Core_ID field
  */
-static inline unsigned apicid_core_width(unsigned nr_dies,
-                                         unsigned nr_cores,
-                                         unsigned nr_threads)
+static inline unsigned apicid_core_width(unsigned nr_cores)
 {
     return apicid_bitwidth_for_count(nr_cores);
 }
 
 /* Bit width of the Die_ID field */
-static inline unsigned apicid_die_width(unsigned nr_dies,
-                                        unsigned nr_cores,
-                                        unsigned nr_threads)
+static inline unsigned apicid_die_width(unsigned nr_dies)
 {
     return apicid_bitwidth_for_count(nr_dies);
 }
 
 /* Bit offset of the Core_ID field
  */
-static inline unsigned apicid_core_offset(unsigned nr_dies,
-                                          unsigned nr_cores,
-                                          unsigned nr_threads)
+static inline unsigned apicid_core_offset(unsigned nr_threads)
 {
-    return apicid_smt_width(nr_dies, nr_cores, nr_threads);
+    return apicid_smt_width(nr_threads);
 }
 
 /* Bit offset of the Die_ID field */
-static inline unsigned apicid_die_offset(unsigned nr_dies,
-                                          unsigned nr_cores,
-                                           unsigned nr_threads)
+static inline unsigned apicid_die_offset(unsigned nr_cores,
+                                         unsigned nr_threads)
 {
-    return apicid_core_offset(nr_dies, nr_cores, nr_threads) +
-           apicid_core_width(nr_dies, nr_cores, nr_threads);
+    return apicid_core_offset(nr_threads) +
+           apicid_core_width(nr_cores);
 }
 
 /* Bit offset of the Pkg_ID (socket ID) field
@@ -121,8 +112,8 @@ static inline unsigned apicid_pkg_offset(unsigned nr_dies,
                                          unsigned nr_cores,
                                          unsigned nr_threads)
 {
-    return apicid_die_offset(nr_dies, nr_cores, nr_threads) +
-           apicid_die_width(nr_dies, nr_cores, nr_threads);
+    return apicid_die_offset(nr_cores, nr_threads) +
+           apicid_die_width(nr_dies);
 }
 
 /* Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
@@ -137,8 +128,8 @@ static inline apic_id_t apicid_from_topo_ids(X86CPUTopoInfo *topo_info,
     unsigned nr_threads = topo_info->nr_threads;
 
     return (topo_ids->pkg_id  << apicid_pkg_offset(nr_dies, nr_cores, nr_threads)) |
-           (topo_ids->die_id  << apicid_die_offset(nr_dies, nr_cores, nr_threads)) |
-           (topo_ids->core_id << apicid_core_offset(nr_dies, nr_cores, nr_threads)) |
+           (topo_ids->die_id  << apicid_die_offset(nr_cores, nr_threads)) |
+           (topo_ids->core_id << apicid_core_offset(nr_threads)) |
            topo_ids->smt_id;
 }
 
@@ -171,13 +162,13 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
     unsigned nr_threads = topo_info->nr_threads;
 
     topo_ids->smt_id = apicid &
-            ~(0xFFFFFFFFUL << apicid_smt_width(nr_dies, nr_cores, nr_threads));
+            ~(0xFFFFFFFFUL << apicid_smt_width(nr_threads));
     topo_ids->core_id =
-            (apicid >> apicid_core_offset(nr_dies, nr_cores, nr_threads)) &
-            ~(0xFFFFFFFFUL << apicid_core_width(nr_dies, nr_cores, nr_threads));
+            (apicid >> apicid_core_offset(nr_threads)) &
+            ~(0xFFFFFFFFUL << apicid_core_width(nr_cores));
     topo_ids->die_id =
-            (apicid >> apicid_die_offset(nr_dies, nr_cores, nr_threads)) &
-            ~(0xFFFFFFFFUL << apicid_die_width(nr_dies, nr_cores, nr_threads));
+            (apicid >> apicid_die_offset(nr_cores, nr_threads)) &
+            ~(0xFFFFFFFFUL << apicid_die_width(nr_dies));
     topo_ids->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
 }
 
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 19751e37a7..6d7f9b6b8b 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4260,8 +4260,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
                                     eax, ebx, ecx, edx);
                 break;
             case 3: /* L3 cache info */
-                die_offset = apicid_die_offset(env->nr_dies,
-                                        cs->nr_cores, cs->nr_threads);
+                die_offset = apicid_die_offset(cs->nr_cores, cs->nr_threads);
                 if (cpu->enable_l3_cache) {
                     encode_cache_cpuid4(env->cache_info_cpuid4.l3_cache,
                                         (1 << die_offset), cs->nr_cores,
@@ -4346,8 +4345,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
 
         switch (count) {
         case 0:
-            *eax = apicid_core_offset(env->nr_dies,
-                                      cs->nr_cores, cs->nr_threads);
+            *eax = apicid_core_offset(cs->nr_threads);
             *ebx = cs->nr_threads;
             *ecx |= CPUID_TOPOLOGY_LEVEL_SMT;
             break;
@@ -4377,14 +4375,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         *edx = cpu->apic_id;
         switch (count) {
         case 0:
-            *eax = apicid_core_offset(env->nr_dies, cs->nr_cores,
-                                                    cs->nr_threads);
+            *eax = apicid_core_offset(cs->nr_threads);
             *ebx = cs->nr_threads;
             *ecx |= CPUID_TOPOLOGY_LEVEL_SMT;
             break;
         case 1:
-            *eax = apicid_die_offset(env->nr_dies, cs->nr_cores,
-                                                   cs->nr_threads);
+            *eax = apicid_die_offset(cs->nr_cores, cs->nr_threads);
             *ebx = cs->nr_cores * cs->nr_threads;
             *ecx |= CPUID_TOPOLOGY_LEVEL_CORE;
             break;



* [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (4 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-09-06 19:20   ` Eric Blake
  2019-09-22 12:48   ` Michael S. Tsirkin
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions Moger, Babu
                   ` (10 subsequent siblings)
  16 siblings, 2 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Introduce the CPU core complex ID (ccx_id) in the X86CPU topology.
Each CCX can have up to 4 cores and they share the same L3 cache.
This information is required to build the topology in the
new epyc mode.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/core/machine-hmp-cmds.c |    3 +++
 hw/core/machine.c          |   13 +++++++++++++
 hw/i386/pc.c               |   10 ++++++++++
 include/hw/i386/topology.h |    1 +
 qapi/machine.json          |    4 +++-
 target/i386/cpu.c          |    2 ++
 target/i386/cpu.h          |    1 +
 7 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
index 1f66bda346..6c534779af 100644
--- a/hw/core/machine-hmp-cmds.c
+++ b/hw/core/machine-hmp-cmds.c
@@ -89,6 +89,9 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
         if (c->has_die_id) {
             monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
         }
+        if (c->has_ccx_id) {
+            monitor_printf(mon, "    ccx-id: \"%" PRIu64 "\"\n", c->ccx_id);
+        }
         if (c->has_core_id) {
             monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
         }
diff --git a/hw/core/machine.c b/hw/core/machine.c
index 4034b7e903..9a8586cf30 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -694,6 +694,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
             return;
         }
 
+        if (props->has_ccx_id && !slot->props.has_ccx_id) {
+            error_setg(errp, "ccx-id is not supported");
+            return;
+        }
+
         /* skip slots with explicit mismatch */
         if (props->has_thread_id && props->thread_id != slot->props.thread_id) {
                 continue;
@@ -707,6 +712,10 @@ void machine_set_cpu_numa_node(MachineState *machine,
                 continue;
         }
 
+        if (props->has_ccx_id && props->ccx_id != slot->props.ccx_id) {
+                continue;
+        }
+
         if (props->has_socket_id && props->socket_id != slot->props.socket_id) {
                 continue;
         }
@@ -1041,6 +1050,10 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
     if (cpu->props.has_die_id) {
         g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
     }
+
+    if (cpu->props.has_ccx_id) {
+        g_string_append_printf(s, "ccx-id: %"PRId64, cpu->props.ccx_id);
+    }
     if (cpu->props.has_core_id) {
         if (s->len) {
             g_string_append_printf(s, ", ");
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 9e1c3f9f57..f71389ad9f 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2444,6 +2444,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
 
         topo_ids.pkg_id = cpu->socket_id;
         topo_ids.die_id = cpu->die_id;
+        topo_ids.ccx_id = cpu->ccx_id;
         topo_ids.core_id = cpu->core_id;
         topo_ids.smt_id = cpu->thread_id;
         cpu->apic_id = apicid_from_topo_ids(&topo_info, &topo_ids);
@@ -2489,6 +2490,13 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     }
     cpu->die_id = topo_ids.die_id;
 
+    if (cpu->ccx_id != -1 && cpu->ccx_id != topo_ids.ccx_id) {
+        error_setg(errp, "property ccx-id: %u doesn't match set apic-id:"
+            " 0x%x (ccx-id: %u)", cpu->ccx_id, cpu->apic_id, topo_ids.ccx_id);
+        return;
+    }
+    cpu->ccx_id = topo_ids.ccx_id;
+
     if (cpu->core_id != -1 && cpu->core_id != topo_ids.core_id) {
         error_setg(errp, "property core-id: %u doesn't match set apic-id:"
             " 0x%x (core-id: %u)", cpu->core_id, cpu->apic_id, topo_ids.core_id);
@@ -2896,6 +2904,8 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
         ms->possible_cpus->cpus[i].props.has_die_id = true;
         ms->possible_cpus->cpus[i].props.die_id = topo_ids.die_id;
+        ms->possible_cpus->cpus[i].props.has_ccx_id = true;
+        ms->possible_cpus->cpus[i].props.ccx_id = topo_ids.ccx_id;
         ms->possible_cpus->cpus[i].props.has_core_id = true;
         ms->possible_cpus->cpus[i].props.core_id = topo_ids.core_id;
         ms->possible_cpus->cpus[i].props.has_thread_id = true;
diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index fb10863a66..5a61d53f05 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -170,6 +170,7 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
             (apicid >> apicid_die_offset(nr_cores, nr_threads)) &
             ~(0xFFFFFFFFUL << apicid_die_width(nr_dies));
     topo_ids->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
+    topo_ids->ccx_id = 0;
 }
 
 /* Make APIC ID for the CPU 'cpu_index'
diff --git a/qapi/machine.json b/qapi/machine.json
index 6db8a7e2ec..bb7627e698 100644
--- a/qapi/machine.json
+++ b/qapi/machine.json
@@ -597,9 +597,10 @@
 # @node-id: NUMA node ID the CPU belongs to
 # @socket-id: socket number within node/board the CPU belongs to
 # @die-id: die number within node/board the CPU belongs to (Since 4.1)
+# @ccx-id: core complex number within node/board the CPU belongs to (Since 4.1)
 # @core-id: core number within die the CPU belongs to
 # @thread-id: thread number within core the CPU belongs to
 #
-# Note: currently there are 5 properties that could be present
+# Note: currently there are 6 properties that could be present
 # but management should be prepared to pass through other
 # properties with device_add command to allow for future
 # interface extension. This also requires the filed names to be kept in
@@ -611,6 +612,7 @@
   'data': { '*node-id': 'int',
             '*socket-id': 'int',
             '*die-id': 'int',
+            '*ccx-id': 'int',
             '*core-id': 'int',
             '*thread-id': 'int'
   }
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 6d7f9b6b8b..ca02bc21ec 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5811,12 +5811,14 @@ static Property x86_cpu_properties[] = {
     DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, 0),
     DEFINE_PROP_INT32("core-id", X86CPU, core_id, 0),
     DEFINE_PROP_INT32("die-id", X86CPU, die_id, 0),
+    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, 0),
     DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, 0),
 #else
     DEFINE_PROP_UINT32("apic-id", X86CPU, apic_id, UNASSIGNED_APIC_ID),
     DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, -1),
     DEFINE_PROP_INT32("core-id", X86CPU, core_id, -1),
     DEFINE_PROP_INT32("die-id", X86CPU, die_id, -1),
+    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, -1),
     DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, -1),
 #endif
     DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 8b3dc5533e..db940cdb2a 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1508,6 +1508,7 @@ struct X86CPU {
     int32_t node_id; /* NUMA node this CPU belongs to */
     int32_t socket_id;
     int32_t die_id;
+    int32_t ccx_id;
     int32_t core_id;
     int32_t thread_id;
 



* [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (5 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-10-11  3:29   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions Moger, Babu
                   ` (9 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

These functions add support for building the new epyc mode topology
given SMP details such as NUMA nodes, cores, threads and sockets.
Subsequent patches will use these functions to build the topology;
a small worked example of the node/CCX arithmetic is included below.

The topology details are documented in the Processor Programming Reference
(PPR) for AMD Family 17h Model 01h, Revision B1 Processors, available at
https://www.amd.com/en/support/tech-docs
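
As a quick illustration (not part of the patch), the following standalone
program reproduces the node/CCX arithmetic of nodes_in_pkg() and
cores_in_ccx() from the diff below, for an assumed 1-socket, 32-core
configuration with no explicit NUMA nodes:

#include <stdio.h>

#define MAX_CCX            2
#define MAX_CORES_IN_NODE  8
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

int main(void)
{
    unsigned nr_sockets = 1, nr_cores = 32, numa_nodes = 1;

    /* Same logic as nodes_in_pkg(): an explicit NUMA config wins, else
     * split the cores into nodes of at most MAX_CORES_IN_NODE cores. */
    unsigned nodes = (numa_nodes > 1) ?
        DIV_ROUND_UP(numa_nodes, nr_sockets) :
        DIV_ROUND_UP(nr_cores, MAX_CORES_IN_NODE);

    /* Same logic as cores_in_ccx(): spread the cores across all CCXs */
    unsigned ccx_cores = DIV_ROUND_UP(nr_cores, nodes * MAX_CCX);

    printf("nodes per package: %u, cores per CCX: %u\n", nodes, ccx_cores);
    return 0;
}

With these inputs the package is split into 4 nodes of 2 CCXs with 4 cores
each, which is the layout the decoding functions below assume.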

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 include/hw/i386/topology.h |  174 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 174 insertions(+)

diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index 5a61d53f05..6fd4184f07 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -62,6 +62,22 @@ typedef struct X86CPUTopoInfo {
     unsigned nr_threads;
 } X86CPUTopoInfo;
 
+/*
+ * Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E
+ * Please refer to the AMD64 Architecture Programmer’s Manual Volume 3.
+ * Define the constants to build the cpu topology. Right now, TOPOEXT
+ * feature is enabled only on EPYC. So, these constants are based on
+ * EPYC supported configurations. We may need to handle the cases if
+ * these values change in future.
+ */
+
+/* Maximum core complexes in a node */
+#define MAX_CCX                  2
+/* Maximum cores in a core complex */
+#define MAX_CORES_IN_CCX         4
+/* Maximum cores in a node */
+#define MAX_CORES_IN_NODE        8
+
 /* Return the bit width needed for 'count' IDs
  */
 static unsigned apicid_bitwidth_for_count(unsigned count)
@@ -116,6 +132,164 @@ static inline unsigned apicid_pkg_offset(unsigned nr_dies,
            apicid_die_width(nr_dies);
 }
 
+/* Bit offset of the CCX_ID field */
+static inline unsigned apicid_ccx_offset(unsigned nr_cores,
+                                         unsigned nr_threads)
+{
+    return apicid_core_offset(nr_threads) +
+           apicid_core_width(nr_cores);
+}
+
+/* Bit width of the Die_ID field */
+static inline unsigned apicid_ccx_width(unsigned nr_ccxs)
+{
+    return apicid_bitwidth_for_count(nr_ccxs);
+}
+
+/* Bit offset of the node_id field */
+static inline unsigned apicid_node_offset(unsigned nr_ccxs,
+                                          unsigned nr_cores,
+                                          unsigned nr_threads)
+{
+    return apicid_ccx_offset(nr_cores, nr_threads) +
+           apicid_ccx_width(nr_ccxs);
+}
+
+/* Bit width of the node_id field */
+static inline unsigned apicid_node_width(unsigned nr_nodes)
+{
+    return apicid_bitwidth_for_count(nr_nodes);
+}
+
+/* Bit offset of the node_id field */
+static inline unsigned apicid_pkg_offset_epyc(unsigned nr_nodes,
+                                              unsigned nr_ccxs,
+                                              unsigned nr_cores,
+                                              unsigned nr_threads)
+{
+    return apicid_node_offset(nr_ccxs, nr_cores, nr_threads) +
+           apicid_node_width(nr_nodes);
+}
+
+/*
+ * Figure out the number of nodes required to build this config.
+ * Max cores in a nodes is 8
+ */
+static inline int nodes_in_pkg(X86CPUTopoInfo *topo_info)
+{
+    /*
+     * Use the user-supplied numa node count (numa_nodes > 1) if given,
+     * otherwise fall back to the standard configuration.
+     */
+    if (topo_info->numa_nodes > 1) {
+        return DIV_ROUND_UP(topo_info->numa_nodes, topo_info->nr_sockets);
+    } else {
+        return DIV_ROUND_UP(topo_info->nr_cores, MAX_CORES_IN_NODE);
+    }
+}
+
+/*
+ * Decide the number of cores in a core complex with the given nr_cores,
+ * using the constants MAX_CCX, MAX_CORES_IN_CCX and MAX_CORES_IN_NODE.
+ * Maintain symmetry as much as possible.
+ * The L3 cache is shared across all cores in a core complex, so this also
+ * tells us how many cores share the L3 cache.
+ */
+static inline int cores_in_ccx(X86CPUTopoInfo *topo_info)
+{
+    int nodes;
+
+    /* Get the number of nodes required to build this config */
+    nodes = nodes_in_pkg(topo_info);
+
+    /*
+     * Divide the cores across all the core complexes
+     * Return the rounded-up value
+     */
+    return DIV_ROUND_UP(topo_info->nr_cores, nodes * MAX_CCX);
+}
+
+/*
+ * Make APIC ID for the CPU based on Pkg_ID, Node_ID, CCX_ID, Core_ID, SMT_ID
+ *
+ * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
+ */
+static inline apic_id_t x86_apicid_from_topo_ids_epyc(X86CPUTopoInfo *topo_info,
+                                                  const X86CPUTopoIDs *topo_ids)
+{
+    unsigned nr_ccxs = MAX_CCX;
+    unsigned nr_nodes = nodes_in_pkg(topo_info);
+    unsigned nr_cores = MAX_CORES_IN_CCX;
+    unsigned nr_threads = topo_info->nr_threads;
+
+    return (topo_ids->pkg_id  << apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
+                                                        nr_cores, nr_threads)) |
+           (topo_ids->node_id  << apicid_node_offset(nr_ccxs, nr_cores,
+                                                     nr_threads)) |
+           (topo_ids->ccx_id  << apicid_ccx_offset(nr_cores, nr_threads)) |
+           (topo_ids->core_id << apicid_core_offset(nr_threads)) |
+           topo_ids->smt_id;
+}
+
+static inline void x86_topo_ids_from_idx_epyc(X86CPUTopoInfo *topo_info,
+                                              unsigned cpu_index,
+                                              X86CPUTopoIDs *topo_ids)
+{
+    unsigned nr_cores = topo_info->nr_cores;
+    unsigned nr_threads = topo_info->nr_threads;
+    unsigned core_index = cpu_index / nr_threads % nr_cores;
+    unsigned ccx_cores = cores_in_ccx(topo_info);
+
+    topo_ids->smt_id = cpu_index % nr_threads;
+    topo_ids->core_id = core_index % ccx_cores; /* core id inside the ccx */
+    topo_ids->ccx_id = (core_index % (ccx_cores * MAX_CCX)) / ccx_cores;
+    topo_ids->node_id = core_index / (ccx_cores * MAX_CCX);
+    topo_ids->pkg_id = cpu_index / (nr_cores * nr_threads);
+}
+
+/*
+ * Calculate the thread/core/ccx/node/package IDs for a specific topology,
+ * based on the APIC ID
+ */
+static inline void x86_topo_ids_from_apicid_epyc(apic_id_t apicid,
+                                            X86CPUTopoInfo *topo_info,
+                                            X86CPUTopoIDs *topo_ids)
+{
+    unsigned nr_nodes = nodes_in_pkg(topo_info);
+    unsigned nr_cores = MAX_CORES_IN_CCX;
+    unsigned nr_threads = topo_info->nr_threads;
+    unsigned nr_ccxs = MAX_CCX;
+
+    topo_ids->smt_id = apicid &
+                   ~(0xFFFFFFFFUL << apicid_smt_width(nr_threads));
+
+    topo_ids->core_id = (apicid >> apicid_core_offset(nr_threads)) &
+                   ~(0xFFFFFFFFUL << apicid_core_width(nr_cores));
+
+    topo_ids->ccx_id = (apicid >> apicid_ccx_offset(nr_cores, nr_threads)) &
+                   ~(0xFFFFFFFFUL << apicid_ccx_width(nr_ccxs));
+
+    topo_ids->node_id = (apicid >> apicid_node_offset(nr_ccxs, nr_cores,
+                                                      nr_threads)) &
+                   ~(0xFFFFFFFFUL << apicid_node_width(nr_nodes));
+
+    topo_ids->pkg_id = apicid >> apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
+                                                        nr_cores, nr_threads);
+}
+
+/*
+ * Make APIC ID for the CPU 'cpu_index'
+ *
+ * 'cpu_index' is a sequential, contiguous ID for the CPU.
+ */
+static inline apic_id_t x86_apicid_from_cpu_idx_epyc(X86CPUTopoInfo *topo_info,
+                                                     unsigned cpu_index)
+{
+    X86CPUTopoIDs topo_ids;
+    x86_topo_ids_from_idx_epyc(topo_info, cpu_index, &topo_ids);
+    return x86_apicid_from_topo_ids_epyc(topo_info, &topo_ids);
+}
+
 /* Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
  *
  * The caller must make sure core_id < nr_cores and smt_id < nr_threads.


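As a quick sanity check of the bit layout these helpers produce, consider an
EPYC-like configuration with 2 threads, 4 cores per CCX, 2 CCXs per node and
4 nodes per socket: the derived field widths come out as 1 (smt), 2 (core),
1 (ccx) and 2 (node) bits, with the package id above them. A minimal
standalone sketch, with the offsets hard-coded purely for illustration (the
helpers above derive them with apicid_bitwidth_for_count()):

    #include <stdio.h>

    int main(void)
    {
        /* sample IDs for one CPU in the configuration described above */
        unsigned smt_id = 1, core_id = 3, ccx_id = 1, node_id = 2, pkg_id = 1;

        /* smt at bit 0 (1 bit), core at bit 1 (2 bits), ccx at bit 3 (1 bit),
         * node at bit 4 (2 bits), pkg starting at bit 6 */
        unsigned apicid = (pkg_id << 6) | (node_id << 4) | (ccx_id << 3) |
                          (core_id << 1) | smt_id;

        printf("apicid = 0x%x\n", apicid);   /* prints apicid = 0x6f */
        return 0;
    }
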
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (6 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-10-11  3:51   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function Moger, Babu
                   ` (8 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Use the new epyc mode functions and delete the unused code.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 target/i386/cpu.c |  171 +++++++++++++++--------------------------------------
 1 file changed, 48 insertions(+), 123 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index ca02bc21ec..f25491a029 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -28,6 +28,7 @@
 #include "sysemu/kvm.h"
 #include "sysemu/hvf.h"
 #include "sysemu/cpus.h"
+#include "sysemu/numa.h"
 #include "kvm_i386.h"
 #include "sev_i386.h"
 
@@ -338,67 +339,19 @@ static void encode_cache_cpuid80000006(CPUCacheInfo *l2,
     }
 }
 
-/*
- * Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E
- * Please refer to the AMD64 Architecture Programmer’s Manual Volume 3.
- * Define the constants to build the cpu topology. Right now, TOPOEXT
- * feature is enabled only on EPYC. So, these constants are based on
- * EPYC supported configurations. We may need to handle the cases if
- * these values change in future.
- */
-/* Maximum core complexes in a node */
-#define MAX_CCX 2
-/* Maximum cores in a core complex */
-#define MAX_CORES_IN_CCX 4
-/* Maximum cores in a node */
-#define MAX_CORES_IN_NODE 8
-/* Maximum nodes in a socket */
-#define MAX_NODES_PER_SOCKET 4
-
-/*
- * Figure out the number of nodes required to build this config.
- * Max cores in a node is 8
- */
-static int nodes_in_socket(int nr_cores)
-{
-    int nodes;
-
-    nodes = DIV_ROUND_UP(nr_cores, MAX_CORES_IN_NODE);
-
-   /* Hardware does not support config with 3 nodes, return 4 in that case */
-    return (nodes == 3) ? 4 : nodes;
-}
-
-/*
- * Decide the number of cores in a core complex with the given nr_cores using
- * following set constants MAX_CCX, MAX_CORES_IN_CCX, MAX_CORES_IN_NODE and
- * MAX_NODES_PER_SOCKET. Maintain symmetry as much as possible
- * L3 cache is shared across all cores in a core complex. So, this will also
- * tell us how many cores are sharing the L3 cache.
- */
-static int cores_in_core_complex(int nr_cores)
-{
-    int nodes;
-
-    /* Check if we can fit all the cores in one core complex */
-    if (nr_cores <= MAX_CORES_IN_CCX) {
-        return nr_cores;
-    }
-    /* Get the number of nodes required to build this config */
-    nodes = nodes_in_socket(nr_cores);
-
-    /*
-     * Divide the cores accros all the core complexes
-     * Return rounded up value
-     */
-    return DIV_ROUND_UP(nr_cores, nodes * MAX_CCX);
-}
-
 /* Encode cache info for CPUID[8000001D] */
-static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
-                                uint32_t *eax, uint32_t *ebx,
-                                uint32_t *ecx, uint32_t *edx)
+static void encode_cache_cpuid8000001d(CPUCacheInfo *cache,
+                                       uint32_t *eax, uint32_t *ebx,
+                                       uint32_t *ecx, uint32_t *edx)
 {
+    MachineState *ms = MACHINE(qdev_get_machine());
+    X86CPUTopoInfo topo_info = {
+        .numa_nodes = nb_numa_nodes,
+        .nr_sockets = ms->smp.sockets,
+        .nr_cores = ms->smp.cores,
+        .nr_threads = ms->smp.threads,
+    };
+
     uint32_t l3_cores;
     assert(cache->size == cache->line_size * cache->associativity *
                           cache->partitions * cache->sets);
@@ -408,10 +361,10 @@ static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
 
     /* L3 is shared among multiple cores */
     if (cache->level == 3) {
-        l3_cores = cores_in_core_complex(cs->nr_cores);
-        *eax |= ((l3_cores * cs->nr_threads) - 1) << 14;
+        l3_cores = cores_in_ccx(&topo_info);
+        *eax |= ((l3_cores * topo_info.nr_threads) - 1) << 14;
     } else {
-        *eax |= ((cs->nr_threads - 1) << 14);
+        *eax |= ((topo_info.nr_threads - 1) << 14);
     }
 
     assert(cache->line_size > 0);
@@ -431,56 +384,28 @@ static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
            (cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
 }
 
-/* Data structure to hold the configuration info for a given core index */
-struct core_topology {
-    /* core complex id of the current core index */
-    int ccx_id;
-    /*
-     * Adjusted core index for this core in the topology
-     * This can be 0,1,2,3 with max 4 cores in a core complex
-     */
-    int core_id;
-    /* Node id for this core index */
-    int node_id;
-    /* Number of nodes in this config */
-    int num_nodes;
-};
-
-/*
- * Build the configuration closely match the EPYC hardware. Using the EPYC
- * hardware configuration values (MAX_CCX, MAX_CORES_IN_CCX, MAX_CORES_IN_NODE)
- * right now. This could change in future.
- * nr_cores : Total number of cores in the config
- * core_id  : Core index of the current CPU
- * topo     : Data structure to hold all the config info for this core index
- */
-static void build_core_topology(int nr_cores, int core_id,
-                                struct core_topology *topo)
-{
-    int nodes, cores_in_ccx;
-
-    /* First get the number of nodes required */
-    nodes = nodes_in_socket(nr_cores);
-
-    cores_in_ccx = cores_in_core_complex(nr_cores);
-
-    topo->node_id = core_id / (cores_in_ccx * MAX_CCX);
-    topo->ccx_id = (core_id % (cores_in_ccx * MAX_CCX)) / cores_in_ccx;
-    topo->core_id = core_id % cores_in_ccx;
-    topo->num_nodes = nodes;
-}
-
 /* Encode cache info for CPUID[8000001E] */
-static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
-                                       uint32_t *eax, uint32_t *ebx,
-                                       uint32_t *ecx, uint32_t *edx)
+static void encode_topo_cpuid8000001e(CPUX86State *env,
+                                      uint32_t *eax, uint32_t *ebx,
+                                      uint32_t *ecx, uint32_t *edx)
 {
-    struct core_topology topo = {0};
-    unsigned long nodes;
-    int shift;
+    X86CPUTopoIDs topo_ids = { 0 };
+    unsigned long nodes, shift;
+    X86CPU *cpu = env_archcpu(env);
+    CPUState *cs = env_cpu(env);
+    MachineState *ms = MACHINE(qdev_get_machine());
+    X86CPUTopoInfo topo_info = {
+        .numa_nodes = nb_numa_nodes,
+        .nr_sockets = ms->smp.sockets,
+        .nr_cores = ms->smp.cores,
+        .nr_threads = ms->smp.threads,
+    };
+
+    nodes = nodes_in_pkg(&topo_info);
+    x86_topo_ids_from_idx_epyc(&topo_info, cs->cpu_index, &topo_ids);
 
-    build_core_topology(cs->nr_cores, cpu->core_id, &topo);
     *eax = cpu->apic_id;
+
     /*
      * CPUID_Fn8000001E_EBX
      * 31:16 Reserved
@@ -496,11 +421,12 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
      *             3 Core complex id
      *           1:0 Core id
      */
-    if (cs->nr_threads - 1) {
-        *ebx = ((cs->nr_threads - 1) << 8) | (topo.node_id << 3) |
-                (topo.ccx_id << 2) | topo.core_id;
+    if (topo_info.nr_threads - 1) {
+        *ebx = ((topo_info.nr_threads - 1) << 8) | (topo_ids.node_id << 3) |
+                (topo_ids.ccx_id << 2) | topo_ids.core_id;
     } else {
-        *ebx = (topo.node_id << 4) | (topo.ccx_id << 3) | topo.core_id;
+        *ebx = (topo_ids.node_id << 4) | (topo_ids.ccx_id << 3) |
+                topo_ids.core_id;
     }
     /*
      * CPUID_Fn8000001E_ECX
@@ -510,9 +436,8 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
      *         2  Socket id
      *       1:0  Node id
      */
-    if (topo.num_nodes <= 4) {
-        *ecx = ((topo.num_nodes - 1) << 8) | (cpu->socket_id << 2) |
-                topo.node_id;
+    if (nodes <= 4) {
+        *ecx = ((nodes - 1) << 8) | (topo_ids.pkg_id << 2) | topo_ids.node_id;
     } else {
         /*
          * Node id fix up. Actual hardware supports up to 4 nodes. But with
@@ -527,12 +452,12 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
          * number of nodes. find_last_bit returns last set bit(0 based). Left
          * shift(+1) the socket id to represent all the nodes.
          */
-        nodes = topo.num_nodes - 1;
+        nodes = nodes - 1;
         shift = find_last_bit(&nodes, 8);
-        *ecx = ((topo.num_nodes - 1) << 8) | (cpu->socket_id << (shift + 1)) |
-                topo.node_id;
+        *ecx = (nodes << 8) | (topo_ids.pkg_id << (shift + 1)) | topo_ids.node_id;
     }
     *edx = 0;
+
 }
 
 /*
@@ -4580,19 +4505,19 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         }
         switch (count) {
         case 0: /* L1 dcache info */
-            encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
+            encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache,
                                        eax, ebx, ecx, edx);
             break;
         case 1: /* L1 icache info */
-            encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache, cs,
+            encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache,
                                        eax, ebx, ecx, edx);
             break;
         case 2: /* L2 cache info */
-            encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
+            encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache,
                                        eax, ebx, ecx, edx);
             break;
         case 3: /* L3 cache info */
-            encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
+            encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache,
                                        eax, ebx, ecx, edx);
             break;
         default: /* end of info */
@@ -4602,7 +4527,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         break;
     case 0x8000001E:
         assert(cpu->core_id <= 255);
-        encode_topo_cpuid8000001e(cs, cpu,
+        encode_topo_cpuid8000001e(env,
                                   eax, ebx, ecx, edx);
         break;
     case 0xC0000000:


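To see what the reworked encode_topo_cpuid8000001e produces, here is a rough
standalone sketch of the EBX/ECX packing for one sample CPU. The values are
made up for illustration and only the SMT and "nodes <= 4" branches are
mirrored; it is not the QEMU code itself:

    #include <stdio.h>

    int main(void)
    {
        unsigned nr_threads = 2, nodes = 2;               /* SMT, 2 nodes */
        unsigned pkg_id = 0, node_id = 1, ccx_id = 1, core_id = 2;

        /* EBX, SMT case: threads-1 in [15:8], node in [4:3], ccx in [2],
         * core in [1:0] */
        unsigned ebx = ((nr_threads - 1) << 8) | (node_id << 3) |
                       (ccx_id << 2) | core_id;

        /* ECX, nodes <= 4 case: nodes-1 in [10:8], socket in [2],
         * node in [1:0] */
        unsigned ecx = ((nodes - 1) << 8) | (pkg_id << 2) | node_id;

        printf("ebx=0x%x ecx=0x%x\n", ebx, ecx);          /* 0x10e, 0x101 */
        return 0;
    }
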
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (7 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-10-11  3:54   ` Eduardo Habkost
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 10/16] hw/i386: Introduce apicid_from_cpu_idx in PCMachineState Moger, Babu
                   ` (7 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Introduce initialize_topo_info to initialize the X86CPUTopoInfo
data structure used to build the topology. No functional change.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c |   29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index f71389ad9f..504e1ab083 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -918,6 +918,17 @@ bool e820_get_entry(int idx, uint32_t type, uint64_t *address, uint64_t *length)
     return false;
 }
 
+static inline void initialize_topo_info(X86CPUTopoInfo *topo_info,
+                                        PCMachineState *pcms,
+                                        const MachineState *ms)
+{
+    topo_info->nr_dies = pcms->smp_dies;
+    topo_info->nr_cores = ms->smp.cores;
+    topo_info->nr_threads = ms->smp.threads;
+    topo_info->nr_sockets = ms->smp.sockets;
+    topo_info->numa_nodes = nb_numa_nodes;
+}
+
 /* Calculates initial APIC ID for a specific CPU index
  *
  * Currently we need to be able to calculate the APIC ID from the CPU index
@@ -934,9 +945,7 @@ static uint32_t x86_cpu_apic_id_from_index(PCMachineState *pcms,
     uint32_t correct_id;
     static bool warned;
 
-    topo_info.nr_dies = pcms->smp_dies;
-    topo_info.nr_cores = ms->smp.cores;
-    topo_info.nr_threads = ms->smp.threads;
+    initialize_topo_info(&topo_info, pcms, ms);
 
     correct_id = x86_apicid_from_cpu_idx(&topo_info, cpu_index);
     if (pcmc->compat_apic_id_mode) {
@@ -2399,9 +2408,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         return;
     }
 
-    topo_info.nr_dies = pcms->smp_dies;
-    topo_info.nr_cores = smp_cores;
-    topo_info.nr_threads = smp_threads;
+    initialize_topo_info(&topo_info, pcms, ms);
 
     env->nr_dies = pcms->smp_dies;
 
@@ -2859,9 +2866,7 @@ static int64_t pc_get_default_cpu_node_id(const MachineState *ms, int idx)
    PCMachineState *pcms = PC_MACHINE(ms);
    X86CPUTopoInfo topo_info;
 
-   topo_info.nr_dies = pcms->smp_dies;
-   topo_info.nr_cores = ms->smp.cores;
-   topo_info.nr_threads = ms->smp.threads;
+   initialize_topo_info(&topo_info, pcms, ms);
 
    assert(idx < ms->possible_cpus->len);
    x86_topo_ids_from_apicid(ms->possible_cpus->cpus[idx].arch_id,
@@ -2876,9 +2881,6 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
     X86CPUTopoInfo topo_info;
     int i;
 
-    topo_info.nr_dies = pcms->smp_dies;
-    topo_info.nr_cores = ms->smp.cores;
-    topo_info.nr_threads = ms->smp.threads;
 
     if (ms->possible_cpus) {
         /*
@@ -2891,6 +2893,9 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
 
     ms->possible_cpus = g_malloc0(sizeof(CPUArchIdList) +
                                   sizeof(CPUArchId) * max_cpus);
+
+    initialize_topo_info(&topo_info, pcms, ms);
+
     ms->possible_cpus->len = max_cpus;
     for (i = 0; i < ms->possible_cpus->len; i++) {
         X86CPUTopoIDs topo_ids;


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 10/16] hw/i386: Introduce apicid_from_cpu_idx in PCMachineState
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (8 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 11/16] Introduce-topo_ids_from_apicid-handler Moger, Babu
                   ` (6 subsequent siblings)
  16 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Add function pointers in PCMachineState to handle the apic id specific
functionality. These will be initialized with the correct handlers
based on the selected mode.

x86_apicid_from_cpu_idx will be the default handler.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c         |    5 ++++-
 include/hw/i386/pc.h |    4 ++++
 2 files changed, 8 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 504e1ab083..69a6b82186 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -947,7 +947,7 @@ static uint32_t x86_cpu_apic_id_from_index(PCMachineState *pcms,
 
     initialize_topo_info(&topo_info, pcms, ms);
 
-    correct_id = x86_apicid_from_cpu_idx(&topo_info, cpu_index);
+    correct_id = pcms->apicid_from_cpu_idx(&topo_info, cpu_index);
     if (pcmc->compat_apic_id_mode) {
         if (cpu_index != correct_id && !warned && !qtest_enabled()) {
             error_report("APIC IDs set in compatibility mode, "
@@ -2828,6 +2828,9 @@ static void pc_machine_initfn(Object *obj)
     pcms->pit_enabled = true;
     pcms->smp_dies = 1;
 
+    /* Initialize the apic id related handlers */
+    pcms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx;
+
     pc_system_flash_create(pcms);
 }
 
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 859b64c51d..6cefefdd57 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -17,6 +17,7 @@
 #include "hw/mem/pc-dimm.h"
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/acpi_dev_interface.h"
+#include "hw/i386/topology.h"
 
 #define HPET_INTCAP "hpet-intcap"
 
@@ -66,6 +67,9 @@ struct PCMachineState {
     uint64_t numa_nodes;
     uint64_t *node_mem;
 
+    /* Apic id specific handlers */
+    uint32_t (*apicid_from_cpu_idx)(X86CPUTopoInfo *topo_info, unsigned cpu_index);
+
     /* Address space used by IOAPIC device. All IOAPIC interrupts
      * will be translated to MSI messages in the address space. */
     AddressSpace *ioapic_as;


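The shape of this change, a per-machine hook table filled with either the
default or the epyc handlers, can be sketched outside QEMU. The names below
are invented purely for illustration; the patch itself only wires up
apicid_from_cpu_idx:

    #include <stdio.h>

    struct machine {
        /* one hook per APIC ID operation, chosen at machine init time */
        unsigned (*apicid_from_cpu_idx)(unsigned cpu_index);
    };

    static unsigned default_apicid(unsigned cpu_index) { return cpu_index; }
    static unsigned epyc_apicid(unsigned cpu_index)    { return cpu_index | 0x10; }

    int main(void)
    {
        struct machine m = { .apicid_from_cpu_idx = default_apicid };
        int epyc_mode = 1;               /* e.g. driven by a machine property */

        if (epyc_mode) {
            m.apicid_from_cpu_idx = epyc_apicid;  /* swap in the epyc handler */
        }
        printf("apicid for cpu 3: 0x%x\n", m.apicid_from_cpu_idx(3));
        return 0;
    }

Call sites then go through the pointer unconditionally, so the mode decision
is made once instead of being re-checked at every APIC ID calculation.
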
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 11/16] Introduce-topo_ids_from_apicid-handler
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (9 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 10/16] hw/i386: Introduce apicid_from_cpu_idx in PCMachineState Moger, Babu
@ 2019-09-06 19:12 ` Moger, Babu
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 12/16] hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState Moger, Babu
                   ` (5 subsequent siblings)
  16 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:12 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

hw/i386: Introduce topo_ids_from_apicid handler in PCMachineState

Add the function pointer topo_ids_from_apicid in PCMachineState and
initialize it with the correct handler based on the selected mode.
x86_topo_ids_from_apicid will be the default handler.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c         |   13 +++++++------
 include/hw/i386/pc.h |    2 ++
 2 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 69a6b82186..c88de09350 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2461,7 +2461,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     if (!cpu_slot) {
         MachineState *ms = MACHINE(pcms);
 
-        x86_topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
+        pcms->topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
         error_setg(errp,
             "Invalid CPU [socket: %u, die: %u, core: %u, thread: %u] with"
             " APIC ID %" PRIu32 ", valid index range 0:%d",
@@ -2482,7 +2482,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
     /* TODO: move socket_id/core_id/thread_id checks into x86_cpu_realizefn()
      * once -smp refactoring is complete and there will be CPU private
      * CPUState::nr_cores and CPUState::nr_threads fields instead of globals */
-    x86_topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
+    pcms->topo_ids_from_apicid(cpu->apic_id, &topo_info, &topo_ids);
     if (cpu->socket_id != -1 && cpu->socket_id != topo_ids.pkg_id) {
         error_setg(errp, "property socket-id: %u doesn't match set apic-id:"
             " 0x%x (socket-id: %u)", cpu->socket_id, cpu->apic_id, topo_ids.pkg_id);
@@ -2830,6 +2830,7 @@ static void pc_machine_initfn(Object *obj)
 
     /* Initialize the apic id related handlers */
     pcms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx;
+    pcms->topo_ids_from_apicid = x86_topo_ids_from_apicid;
 
     pc_system_flash_create(pcms);
 }
@@ -2872,8 +2873,8 @@ static int64_t pc_get_default_cpu_node_id(const MachineState *ms, int idx)
    initialize_topo_info(&topo_info, pcms, ms);
 
    assert(idx < ms->possible_cpus->len);
-   x86_topo_ids_from_apicid(ms->possible_cpus->cpus[idx].arch_id,
-                            &topo_info, &topo_ids);
+   pcms->topo_ids_from_apicid(ms->possible_cpus->cpus[idx].arch_id,
+                              &topo_info, &topo_ids);
    return topo_ids.pkg_id % nb_numa_nodes;
 }
 
@@ -2906,8 +2907,8 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
         ms->possible_cpus->cpus[i].type = ms->cpu_type;
         ms->possible_cpus->cpus[i].vcpus_count = 1;
         ms->possible_cpus->cpus[i].arch_id = x86_cpu_apic_id_from_index(pcms, i);
-        x86_topo_ids_from_apicid(ms->possible_cpus->cpus[i].arch_id,
-                                 &topo_info, &topo_ids);
+        pcms->topo_ids_from_apicid(ms->possible_cpus->cpus[i].arch_id,
+                                   &topo_info, &topo_ids);
         ms->possible_cpus->cpus[i].props.has_socket_id = true;
         ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
         ms->possible_cpus->cpus[i].props.has_die_id = true;
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 6cefefdd57..9a40f123d0 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -69,6 +69,8 @@ struct PCMachineState {
 
     /* Apic id specific handlers */
     uint32_t (*apicid_from_cpu_idx)(X86CPUTopoInfo *topo_info, unsigned cpu_index);
+    void (*topo_ids_from_apicid)(apic_id_t apicid, X86CPUTopoInfo *topo_info,
+                                 X86CPUTopoIDs *topo_ids);
 
     /* Address space used by IOAPIC device. All IOAPIC interrupts
      * will be translated to MSI messages in the address space. */


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 12/16] hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (10 preceding siblings ...)
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 11/16] Introduce-topo_ids_from_apicid-handler Moger, Babu
@ 2019-09-06 19:13 ` Moger, Babu
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property " Moger, Babu
                   ` (4 subsequent siblings)
  16 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:13 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Add the function pointer apicid_from_topo_ids in PCMachineState and
initialize it with the correct handler based on the selected mode.
Also rename the handler apicid_from_topo_ids in topology.h to
x86_apicid_from_topo_ids for consistency. x86_apicid_from_topo_ids
will be the default handler.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c               |    3 ++-
 include/hw/i386/pc.h       |    2 ++
 include/hw/i386/topology.h |    4 ++--
 3 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c88de09350..959bd3821b 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2454,7 +2454,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
         topo_ids.ccx_id = cpu->ccx_id;
         topo_ids.core_id = cpu->core_id;
         topo_ids.smt_id = cpu->thread_id;
-        cpu->apic_id = apicid_from_topo_ids(&topo_info, &topo_ids);
+        cpu->apic_id = pcms->apicid_from_topo_ids(&topo_info, &topo_ids);
     }
 
     cpu_slot = pc_find_cpu_slot(MACHINE(pcms), cpu->apic_id, &idx);
@@ -2831,6 +2831,7 @@ static void pc_machine_initfn(Object *obj)
     /* Initialize the apic id related handlers */
     pcms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx;
     pcms->topo_ids_from_apicid = x86_topo_ids_from_apicid;
+    pcms->apicid_from_topo_ids = x86_apicid_from_topo_ids;
 
     pc_system_flash_create(pcms);
 }
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 9a40f123d0..d6f1189997 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -71,6 +71,8 @@ struct PCMachineState {
     uint32_t (*apicid_from_cpu_idx)(X86CPUTopoInfo *topo_info, unsigned cpu_index);
     void (*topo_ids_from_apicid)(apic_id_t apicid, X86CPUTopoInfo *topo_info,
                                  X86CPUTopoIDs *topo_ids);
+    apic_id_t (*apicid_from_topo_ids)(X86CPUTopoInfo *topo_info,
+                                      const X86CPUTopoIDs *topo_ids);
 
     /* Address space used by IOAPIC device. All IOAPIC interrupts
      * will be translated to MSI messages in the address space. */
diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
index 6fd4184f07..740e66970d 100644
--- a/include/hw/i386/topology.h
+++ b/include/hw/i386/topology.h
@@ -294,7 +294,7 @@ static inline apic_id_t x86_apicid_from_cpu_idx_epyc(X86CPUTopoInfo *topo_info,
  *
  * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
  */
-static inline apic_id_t apicid_from_topo_ids(X86CPUTopoInfo *topo_info,
+static inline apic_id_t x86_apicid_from_topo_ids(X86CPUTopoInfo *topo_info,
                                              const X86CPUTopoIDs *topo_ids)
 {
     unsigned nr_dies = topo_info->nr_dies;
@@ -356,7 +356,7 @@ static inline apic_id_t x86_apicid_from_cpu_idx(X86CPUTopoInfo *topo_info,
 {
     X86CPUTopoIDs topo_ids;
     x86_topo_ids_from_idx(topo_info, cpu_index, &topo_ids);
-    return apicid_from_topo_ids(topo_info, &topo_ids);
+    return x86_apicid_from_topo_ids(topo_info, &topo_ids);
 }
 
 #endif /* HW_I386_TOPOLOGY_H */


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property in PCMachineState
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (11 preceding siblings ...)
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 12/16] hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState Moger, Babu
@ 2019-09-06 19:13 ` Moger, Babu
  2019-10-11  3:59   ` Eduardo Habkost
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 14/16] hw/i386: Introduce epyc mode function handlers Moger, Babu
                   ` (3 subsequent siblings)
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:13 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Add a new epyc property in PCMachineState and also in MachineState.
This property will be used to initialize the mode specific handlers
that generate apic ids.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c         |   23 +++++++++++++++++++++++
 include/hw/boards.h  |    2 ++
 include/hw/i386/pc.h |    1 +
 3 files changed, 26 insertions(+)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 959bd3821b..14760523a9 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2810,6 +2810,22 @@ static void pc_machine_set_pit(Object *obj, bool value, Error **errp)
     pcms->pit_enabled = value;
 }
 
+static bool pc_machine_get_epyc(Object *obj, Error **errp)
+{
+    PCMachineState *pcms = PC_MACHINE(obj);
+
+    return pcms->epyc;
+}
+
+static void pc_machine_set_epyc(Object *obj, bool value, Error **errp)
+{
+    PCMachineState *pcms = PC_MACHINE(obj);
+    MachineState *ms = MACHINE(pcms);
+
+    pcms->epyc = value;
+    ms->epyc = value;
+}
+
 static void pc_machine_initfn(Object *obj)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
@@ -3015,6 +3031,13 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
 
     object_class_property_add_bool(oc, PC_MACHINE_PIT,
         pc_machine_get_pit, pc_machine_set_pit, &error_abort);
+
+    object_class_property_add_bool(oc, "epyc",
+        pc_machine_get_epyc, pc_machine_set_epyc, &error_abort);
+
+    object_class_property_set_description(oc, "epyc",
+        "Set on/off to use epyc mode", &error_abort);
+
 }
 
 static const TypeInfo pc_machine_info = {
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 12eb5032a5..0001d42e50 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -299,6 +299,8 @@ struct MachineState {
     AccelState *accelerator;
     CPUArchIdList *possible_cpus;
     CpuTopology smp;
+    bool epyc;
+
     struct NVDIMMState *nvdimms_state;
 };
 
diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index d6f1189997..cf9e7b0045 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -68,6 +68,7 @@ struct PCMachineState {
     uint64_t *node_mem;
 
     /* Apic id specific handlers */
+    bool epyc;
     uint32_t (*apicid_from_cpu_idx)(X86CPUTopoInfo *topo_info, unsigned cpu_index);
     void (*topo_ids_from_apicid)(apic_id_t apicid, X86CPUTopoInfo *topo_info,
                                  X86CPUTopoIDs *topo_ids);


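Assuming the standard machine property syntax, and that the property is
registered as shown above, the new mode would presumably be selected from the
command line with something along these lines (untested illustration):

    qemu-system-x86_64 -machine pc,epyc=on -cpu EPYC \
        -smp 16,sockets=1,cores=8,threads=2 ...
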
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 14/16] hw/i386: Introduce epyc mode function handlers
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (12 preceding siblings ...)
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property " Moger, Babu
@ 2019-09-06 19:13 ` Moger, Babu
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode Moger, Babu
                   ` (2 subsequent siblings)
  16 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:13 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Introduce the following handlers for the new epyc mode:
x86_apicid_from_cpu_idx_epyc: generate the apicid from the cpu index.
x86_topo_ids_from_apicid_epyc: generate the topo ids from the apic id.
x86_apicid_from_topo_ids_epyc: generate the apicid from the topo ids.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/i386/pc.c |    5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 14760523a9..59c7c4d8b2 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -2824,6 +2824,11 @@ static void pc_machine_set_epyc(Object *obj, bool value, Error **errp)
 
     pcms->epyc = value;
     ms->epyc = value;
+    if (pcms->epyc) {
+        pcms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx_epyc;
+        pcms->topo_ids_from_apicid = x86_topo_ids_from_apicid_epyc;
+        pcms->apicid_from_topo_ids = x86_apicid_from_topo_ids_epyc;
+    }
 }
 
 static void pc_machine_initfn(Object *obj)


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (13 preceding siblings ...)
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 14/16] hw/i386: Introduce epyc mode function handlers Moger, Babu
@ 2019-09-06 19:13 ` Moger, Babu
  2019-10-11  4:03   ` Eduardo Habkost
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc Moger, Babu
  2019-09-20 22:44 ` [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:13 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 target/i386/cpu.c |   24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index f25491a029..f8b1fc5c07 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4094,9 +4094,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
                    uint32_t *eax, uint32_t *ebx,
                    uint32_t *ecx, uint32_t *edx)
 {
+    MachineState *ms = MACHINE(qdev_get_machine());
     X86CPU *cpu = env_archcpu(env);
     CPUState *cs = env_cpu(env);
-    uint32_t die_offset;
+    uint32_t die_offset, pkg_offset;
     uint32_t limit;
     uint32_t signature[3];
 
@@ -4119,6 +4120,21 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         index = env->cpuid_level;
     }
 
+    if (ms->epyc) {
+        X86CPUTopoInfo topo_info = {
+            .numa_nodes = nb_numa_nodes,
+            .nr_sockets = ms->smp.sockets,
+            .nr_cores = ms->smp.cores,
+            .nr_threads = ms->smp.threads,
+        };
+        unsigned nodes = nodes_in_pkg(&topo_info);
+        pkg_offset = apicid_pkg_offset_epyc(nodes, MAX_CCX, MAX_CORES_IN_CCX,
+                                            cs->nr_threads);
+    } else {
+        pkg_offset = apicid_pkg_offset(env->nr_dies, cs->nr_cores,
+                                       cs->nr_threads);
+    }
+
     switch(index) {
     case 0:
         *eax = env->cpuid_level;
@@ -4275,8 +4291,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
             *ecx |= CPUID_TOPOLOGY_LEVEL_SMT;
             break;
         case 1:
-            *eax = apicid_pkg_offset(env->nr_dies,
-                                     cs->nr_cores, cs->nr_threads);
+            *eax = pkg_offset;
             *ebx = cs->nr_cores * cs->nr_threads;
             *ecx |= CPUID_TOPOLOGY_LEVEL_CORE;
             break;
@@ -4310,8 +4325,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
             *ecx |= CPUID_TOPOLOGY_LEVEL_CORE;
             break;
         case 2:
-            *eax = apicid_pkg_offset(env->nr_dies, cs->nr_cores,
-                                                   cs->nr_threads);
+            *eax = pkg_offset;
             *ebx = env->nr_dies * cs->nr_cores * cs->nr_threads;
             *ecx |= CPUID_TOPOLOGY_LEVEL_DIE;
             break;


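To make the fix concrete, here is a small standalone calculation of the two
package offsets for a guest with 1 socket, 8 cores, 2 threads and 4 numa
nodes (numbers chosen only for illustration; the widths are re-derived the
same way the topology helpers do it, not taken from QEMU):

    #include <stdio.h>

    /* same idea as apicid_bitwidth_for_count(): bits needed for 'count' IDs */
    static unsigned width(unsigned count)
    {
        return count > 1 ? 32 - __builtin_clz(count - 1) : 0;  /* gcc/clang */
    }

    int main(void)
    {
        unsigned threads = 2, cores = 8, dies = 1, nodes = 4;

        /* default layout: smt | core | die | pkg */
        unsigned pkg_off_default = width(threads) + width(cores) + width(dies);

        /* epyc layout: smt | core (4 per ccx) | ccx (2) | node | pkg */
        unsigned pkg_off_epyc = width(threads) + width(4) + width(2) +
                                width(nodes);

        printf("pkg offset: default=%u epyc=%u\n",
               pkg_off_default, pkg_off_epyc);  /* prints default=4 epyc=6 */
        return 0;
    }

With the default decoding the guest would use a package shift of 4 and fold
the node bits into the wrong level; reporting the epyc-aware offset of 6 in
CPUID leaf 0xB keeps it consistent with the APIC IDs built by the earlier
patches.
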
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (14 preceding siblings ...)
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode Moger, Babu
@ 2019-09-06 19:13 ` Moger, Babu
  2019-10-11  4:10   ` Eduardo Habkost
  2019-09-20 22:44 ` [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
  16 siblings, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:13 UTC (permalink / raw)
  To: ssg.sos.staff, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	eblake, armbru, imammedo
  Cc: qemu-devel

The current topology id match will not work for epyc mode when setting
the node id. In epyc mode, ids such as smt_id, thread_id, core_id,
ccx_id and socket_id can be the same for more than one CPU across
two numa nodes.

For example, we can have two CPUs with the following ids on two different nodes.
1. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=0
2. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=1

The function machine_set_cpu_numa_node will fail to find a match to assign
the node. Add a new function, machine_set_cpu_numa_node_epyc, to set the
node_id directly in epyc mode.

Signed-off-by: Babu Moger <babu.moger@amd.com>
---
 hw/core/machine.c   |   24 ++++++++++++++++++++++++
 hw/core/numa.c      |    6 +++++-
 include/hw/boards.h |    4 ++++
 3 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/hw/core/machine.c b/hw/core/machine.c
index 9a8586cf30..6bceefc6f3 100644
--- a/hw/core/machine.c
+++ b/hw/core/machine.c
@@ -741,6 +741,30 @@ void machine_set_cpu_numa_node(MachineState *machine,
     }
 }
 
+void machine_set_cpu_numa_node_epyc(MachineState *machine,
+                                    const CpuInstanceProperties *props,
+                                    unsigned index,
+                                    Error **errp)
+{
+    MachineClass *mc = MACHINE_GET_CLASS(machine);
+    CPUArchId *slot;
+
+    if (!mc->possible_cpu_arch_ids) {
+        error_setg(errp, "mapping of CPUs to NUMA node is not supported");
+        return;
+    }
+
+    /* disabling node mapping is not supported, forbid it */
+    assert(props->has_node_id);
+
+    /* force board to initialize possible_cpus if it hasn't been done yet */
+    mc->possible_cpu_arch_ids(machine);
+
+    slot = &machine->possible_cpus->cpus[index];
+    slot->props.node_id = props->node_id;
+    slot->props.has_node_id = props->has_node_id;
+}
+
 static void smp_parse(MachineState *ms, QemuOpts *opts)
 {
     if (opts) {
diff --git a/hw/core/numa.c b/hw/core/numa.c
index 27fa6b5e1d..a9e835aea6 100644
--- a/hw/core/numa.c
+++ b/hw/core/numa.c
@@ -247,7 +247,11 @@ void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
          props = mc->cpu_index_to_instance_props(ms, cpus->value);
          props.node_id = nodenr;
          props.has_node_id = true;
-         machine_set_cpu_numa_node(ms, &props, &err);
+         if (ms->epyc) {
+             machine_set_cpu_numa_node_epyc(ms, &props, cpus->value, &err);
+         } else {
+             machine_set_cpu_numa_node(ms, &props, &err);
+         }
          if (err) {
             error_propagate(errp, err);
             return;
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 0001d42e50..ec1b1c5a85 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -74,6 +74,10 @@ HotpluggableCPUList *machine_query_hotpluggable_cpus(MachineState *machine);
 void machine_set_cpu_numa_node(MachineState *machine,
                                const CpuInstanceProperties *props,
                                Error **errp);
+void machine_set_cpu_numa_node_epyc(MachineState *machine,
+                                    const CpuInstanceProperties *props,
+                                    unsigned index,
+                                    Error **errp);
 
 void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
 


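For reference, the two-CPU example above corresponds to a setup roughly like
the following (untested illustration, using the legacy cpus= binding that
set_numa_node_options handles and the epyc property from the earlier patch):

    qemu-system-x86_64 -machine pc,epyc=on -cpu EPYC \
        -smp 16,sockets=1,cores=8,threads=2 \
        -numa node,nodeid=0,cpus=0-7 -numa node,nodeid=1,cpus=8-15 ...

With the socket split across two guest numa nodes this way, the first CPU of
each node gets identical socket/ccx/core/thread ids and differs only in the
node id, which is what defeats the property-matching loop in
machine_set_cpu_numa_node and motivates setting the node id by cpu index
instead.
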
^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology Moger, Babu
@ 2019-09-06 19:20   ` Eric Blake
  2019-09-06 19:34     ` Moger, Babu
  2019-09-22 12:48   ` Michael S. Tsirkin
  1 sibling, 1 reply; 43+ messages in thread
From: Eric Blake @ 2019-09-06 19:20 UTC (permalink / raw)
  To: Moger, Babu, ssg.sos.staff, ehabkost, marcel.apfelbaum, mst,
	pbonzini, rth, armbru, imammedo
  Cc: qemu-devel

On 9/6/19 2:12 PM, Moger, Babu wrote:
> Introduce cpu core complex id (ccx_id) in x86CPU topology.
> Each CCX can have up to 4 cores and share the same L3 cache.
> This information is required to build the topology in the
> new epyc mode.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---

> +++ b/qapi/machine.json
> @@ -597,9 +597,10 @@
>  # @node-id: NUMA node ID the CPU belongs to
>  # @socket-id: socket number within node/board the CPU belongs to
>  # @die-id: die number within node/board the CPU belongs to (Since 4.1)
> +# @ccx-id: core complex number within node/board the CPU belongs to (Since 4.1)

4.2 now

>  # @core-id: core number within die the CPU belongs to# @thread-id: thread number within core the CPU belongs to

Pre-existing, but let's fix that missing newline while you're here.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology
  2019-09-06 19:20   ` Eric Blake
@ 2019-09-06 19:34     ` Moger, Babu
  0 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-06 19:34 UTC (permalink / raw)
  To: Eric Blake, ehabkost, marcel.apfelbaum, mst, pbonzini, rth,
	armbru, imammedo
  Cc: qemu-devel



On 9/6/19 2:20 PM, Eric Blake wrote:
> On 9/6/19 2:12 PM, Moger, Babu wrote:
>> Introduce cpu core complex id (ccx_id) in x86CPU topology.
>> Each CCX can have up to 4 cores and share the same L3 cache.
>> This information is required to build the topology in the
>> new epyc mode.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
> 
>> +++ b/qapi/machine.json
>> @@ -597,9 +597,10 @@
>>  # @node-id: NUMA node ID the CPU belongs to
>>  # @socket-id: socket number within node/board the CPU belongs to
>>  # @die-id: die number within node/board the CPU belongs to (Since 4.1)
>> +# @ccx-id: core complex number within node/board the CPU belongs to (Since 4.1)
> 
> 4.2 now
ok. Will fix.
> 
>>  # @core-id: core number within die the CPU belongs to# @thread-id: thread number within core the CPU belongs to
> 
> Pre-existing, but let's fix that missing newline while you're here.

Sure. will take care. thanks

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models
  2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
                   ` (15 preceding siblings ...)
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc Moger, Babu
@ 2019-09-20 22:44 ` Moger, Babu
  16 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-20 22:44 UTC (permalink / raw)
  To: ehabkost, marcel.apfelbaum, mst, pbonzini, rth, eblake, armbru, imammedo
  Cc: qemu-devel

Eduardo and all,
 Waiting for the feedback on this to move forward. Appreciate your time.
Thanks Babu

On 9/6/19 2:11 PM, Moger, Babu wrote:
> These series fixes the problems encoding APIC ID for AMD EPYC cpu models.
> https://bugzilla.redhat.com/show_bug.cgi?id=1728166
> 
> This is the second pass to give an idea of the changes required to address
> the issue. First pass is availabe at 
> https://patchwork.kernel.org/cover/11069785/
> 
> Currently, apic id is decoded based on sockets/dies/cores/threads. This appears
> to work for most standard configurations for AMD and other vendors. But this
> decoding does not follow AMD's APIC ID enumeration. In some cases this
> causes CPU topology inconstancy. While booting guest Kernel is trying to
> validate topology. It finds the topology not aligning to EPYC models.
> 
> To fix the problem we need to build the topology as per the
> Processor Programming Reference (PPR) for AMD Family 17h Model 01h, Revision B1
> Processors. It is available at https://www.amd.com/en/support/tech-docs
> 
> Here is the text from the PPR.
> 2.1.10.2.1.3
> ApicId Enumeration Requirements
> Operating systems are expected to use
> Core::X86::Cpuid::SizeId[ApicIdCoreIdSize], the number of least
> significant bits in the Initial APIC ID that indicate core ID within a
> processor, in constructing per-core CPUID
> masks. Core::X86::Cpuid::SizeId[ApicIdCoreIdSize] determines the maximum number
> of cores (MNC) that the
> processor could theoretically support, not the actual number of cores that are
> actually implemented or enabled on
> the processor, as indicated by Core::X86::Cpuid::SizeId[NC].
> Each Core::X86::Apic::ApicId[ApicId] register is preset as follows:
> • ApicId[6] = Socket ID.
> • ApicId[5:4] = Node ID.
> • ApicId[3] = Logical CCX L3 complex ID
> • ApicId[2:0]= (SMT) ? {LogicalCoreID[1:0],ThreadId} :
> {1'b0,LogicalCoreID[1:0]}.
> """
> 
> v2:
>   1. Introduced the new property epyc to enable new epyc mode.
>   2. Separated the epyc mode and non epyc mode function.
>   3. Introduced function pointers in PCMachineState to handle the
>      differences.
>   4. Mildly tested different combinations to make things are working as expected.
>   5. TODO : Setting the epyc feature bit needs to be worked out. This feature is
>      supported only on AMD EPYC models. I may need some guidance on that.
> 
> v1:
>   https://patchwork.kernel.org/cover/11069785/
> 
> ---
> 
> Babu Moger (16):
>       numa: Split the numa functionality
>       hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
>       hw/i386: Introduce X86CPUTopoInfo to contain topology info
>       machine: Add SMP Sockets in CpuTopology
>       hw/i386: Simplify topology Offset/width Calculation
>       hw/core: Add core complex id in X86CPU topology
>       hw/386: Add new epyc mode topology decoding functions
>       i386: Cleanup and use the new epyc mode topology functions
>       hw/i386: Introduce initialize_topo_info function
>       hw/i386: Introduce apicid_from_cpu_idx in PCMachineState
>       Introduce-topo_ids_from_apicid-handler
>       hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState
>       machine: Add new epyc property in PCMachineState
>       hw/i386: Introduce epyc mode function handlers
>       i386: Fix pkg_id offset for epyc mode
>       hw/core: Fix up the machine_set_cpu_numa_node for epyc
> 
> 
>  hw/core/machine-hmp-cmds.c |    3 
>  hw/core/machine.c          |   38 ++++++
>  hw/core/numa.c             |  110 ++++++++++++----
>  hw/i386/pc.c               |  143 +++++++++++++++------
>  include/hw/boards.h        |    8 +
>  include/hw/i386/pc.h       |    9 +
>  include/hw/i386/topology.h |  294 +++++++++++++++++++++++++++++++++++---------
>  include/sysemu/numa.h      |    2 
>  qapi/machine.json          |    4 -
>  target/i386/cpu.c          |  209 +++++++++++--------------------
>  target/i386/cpu.h          |    1 
>  vl.c                       |    3 
>  12 files changed, 560 insertions(+), 264 deletions(-)
> 
> --
> Signature
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology Moger, Babu
  2019-09-06 19:20   ` Eric Blake
@ 2019-09-22 12:48   ` Michael S. Tsirkin
  2019-09-23 14:38     ` Moger, Babu
  1 sibling, 1 reply; 43+ messages in thread
From: Michael S. Tsirkin @ 2019-09-22 12:48 UTC (permalink / raw)
  To: Moger, Babu
  Cc: ehabkost, ssg.sos.staff, armbru, qemu-devel, imammedo, pbonzini, rth

On Fri, Sep 06, 2019 at 07:12:18PM +0000, Moger, Babu wrote:
> Introduce cpu core complex id (ccx_id) in x86CPU topology.
> Each CCX can have up to 4 cores and share the same L3 cache.
> This information is required to build the topology in the
> new epyc mode.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  hw/core/machine-hmp-cmds.c |    3 +++
>  hw/core/machine.c          |   13 +++++++++++++
>  hw/i386/pc.c               |   10 ++++++++++
>  include/hw/i386/topology.h |    1 +
>  qapi/machine.json          |    4 +++-
>  target/i386/cpu.c          |    2 ++
>  target/i386/cpu.h          |    1 +
>  7 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
> index 1f66bda346..6c534779af 100644
> --- a/hw/core/machine-hmp-cmds.c
> +++ b/hw/core/machine-hmp-cmds.c
> @@ -89,6 +89,9 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
>          if (c->has_die_id) {
>              monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
>          }
> +        if (c->has_ccx_id) {
> +            monitor_printf(mon, "    ccx-id: \"%" PRIu64 "\"\n", c->ccx_id);
> +        }
>          if (c->has_core_id) {
>              monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
>          }
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 4034b7e903..9a8586cf30 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -694,6 +694,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
>              return;
>          }
>  
> +        if (props->has_ccx_id && !slot->props.has_ccx_id) {
> +            error_setg(errp, "ccx-id is not supported");
> +            return;
> +        }
> +
>          /* skip slots with explicit mismatch */
>          if (props->has_thread_id && props->thread_id != slot->props.thread_id) {
>                  continue;
> @@ -707,6 +712,10 @@ void machine_set_cpu_numa_node(MachineState *machine,
>                  continue;
>          }
>  
> +        if (props->has_ccx_id && props->ccx_id != slot->props.ccx_id) {
> +                continue;
> +        }
> +
>          if (props->has_socket_id && props->socket_id != slot->props.socket_id) {
>                  continue;
>          }
> @@ -1041,6 +1050,10 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
>      if (cpu->props.has_die_id) {
>          g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
>      }
> +
> +    if (cpu->props.has_ccx_id) {
> +        g_string_append_printf(s, "ccx-id: %"PRId64, cpu->props.ccx_id);
> +    }
>      if (cpu->props.has_core_id) {
>          if (s->len) {
>              g_string_append_printf(s, ", ");
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 9e1c3f9f57..f71389ad9f 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -2444,6 +2444,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
>  
>          topo_ids.pkg_id = cpu->socket_id;
>          topo_ids.die_id = cpu->die_id;
> +        topo_ids.ccx_id = cpu->ccx_id;
>          topo_ids.core_id = cpu->core_id;
>          topo_ids.smt_id = cpu->thread_id;
>          cpu->apic_id = apicid_from_topo_ids(&topo_info, &topo_ids);
> @@ -2489,6 +2490,13 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
>      }
>      cpu->die_id = topo_ids.die_id;
>  
> +    if (cpu->ccx_id != -1 && cpu->ccx_id != topo_ids.ccx_id) {
> +        error_setg(errp, "property ccx-id: %u doesn't match set apic-id:"
> +            " 0x%x (ccx-id: %u)", cpu->ccx_id, cpu->apic_id, topo_ids.ccx_id);
> +        return;
> +    }
> +    cpu->ccx_id = topo_ids.ccx_id;
> +
>      if (cpu->core_id != -1 && cpu->core_id != topo_ids.core_id) {
>          error_setg(errp, "property core-id: %u doesn't match set apic-id:"
>              " 0x%x (core-id: %u)", cpu->core_id, cpu->apic_id, topo_ids.core_id);
> @@ -2896,6 +2904,8 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
>          ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
>          ms->possible_cpus->cpus[i].props.has_die_id = true;
>          ms->possible_cpus->cpus[i].props.die_id = topo_ids.die_id;
> +        ms->possible_cpus->cpus[i].props.has_ccx_id = true;
> +        ms->possible_cpus->cpus[i].props.ccx_id = topo_ids.ccx_id;
>          ms->possible_cpus->cpus[i].props.has_core_id = true;
>          ms->possible_cpus->cpus[i].props.core_id = topo_ids.core_id;
>          ms->possible_cpus->cpus[i].props.has_thread_id = true;
> diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
> index fb10863a66..5a61d53f05 100644
> --- a/include/hw/i386/topology.h
> +++ b/include/hw/i386/topology.h
> @@ -170,6 +170,7 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
>              (apicid >> apicid_die_offset(nr_cores, nr_threads)) &
>              ~(0xFFFFFFFFUL << apicid_die_width(nr_dies));
>      topo_ids->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
> +    topo_ids->ccx_id = 0;
>  }
>  
>  /* Make APIC ID for the CPU 'cpu_index'
> diff --git a/qapi/machine.json b/qapi/machine.json
> index 6db8a7e2ec..bb7627e698 100644
> --- a/qapi/machine.json
> +++ b/qapi/machine.json
> @@ -597,9 +597,10 @@
>  # @node-id: NUMA node ID the CPU belongs to
>  # @socket-id: socket number within node/board the CPU belongs to
>  # @die-id: die number within node/board the CPU belongs to (Since 4.1)
> +# @ccx-id: core complex number within node/board the CPU belongs to (Since 4.1)

Can we come up with a non-AMD specific name?
E.g. would "last level cache" be correct for AMD?
Better ideas?

>  # @core-id: core number within die the CPU belongs to# @thread-id: thread number within core the CPU belongs to
>  #
> -# Note: currently there are 5 properties that could be present
> +# Note: currently there are 6 properties that could be present
>  # but management should be prepared to pass through other
>  # properties with device_add command to allow for future
>  # interface extension. This also requires the filed names to be kept in
> @@ -611,6 +612,7 @@
>    'data': { '*node-id': 'int',
>              '*socket-id': 'int',
>              '*die-id': 'int',
> +            '*ccx-id': 'int',
>              '*core-id': 'int',
>              '*thread-id': 'int'
>    }
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 6d7f9b6b8b..ca02bc21ec 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -5811,12 +5811,14 @@ static Property x86_cpu_properties[] = {
>      DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, 0),
>      DEFINE_PROP_INT32("core-id", X86CPU, core_id, 0),
>      DEFINE_PROP_INT32("die-id", X86CPU, die_id, 0),
> +    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, 0),
>      DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, 0),
>  #else
>      DEFINE_PROP_UINT32("apic-id", X86CPU, apic_id, UNASSIGNED_APIC_ID),
>      DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, -1),
>      DEFINE_PROP_INT32("core-id", X86CPU, core_id, -1),
>      DEFINE_PROP_INT32("die-id", X86CPU, die_id, -1),
> +    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, -1),
>      DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, -1),
>  #endif
>      DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
> index 8b3dc5533e..db940cdb2a 100644
> --- a/target/i386/cpu.h
> +++ b/target/i386/cpu.h
> @@ -1508,6 +1508,7 @@ struct X86CPU {
>      int32_t node_id; /* NUMA node this CPU belongs to */
>      int32_t socket_id;
>      int32_t die_id;
> +    int32_t ccx_id;
>      int32_t core_id;
>      int32_t thread_id;
>  
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology
  2019-09-22 12:48   ` Michael S. Tsirkin
@ 2019-09-23 14:38     ` Moger, Babu
  0 siblings, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-09-23 14:38 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: ehabkost, armbru, qemu-devel, imammedo, pbonzini, rth



On 9/22/19 7:48 AM, Michael S. Tsirkin wrote:
> On Fri, Sep 06, 2019 at 07:12:18PM +0000, Moger, Babu wrote:
>> Introduce the cpu core complex id (ccx_id) in the x86CPU topology.
>> Each CCX can have up to 4 cores and they share the same L3 cache.
>> This information is required to build the topology in the
>> new epyc mode.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  hw/core/machine-hmp-cmds.c |    3 +++
>>  hw/core/machine.c          |   13 +++++++++++++
>>  hw/i386/pc.c               |   10 ++++++++++
>>  include/hw/i386/topology.h |    1 +
>>  qapi/machine.json          |    4 +++-
>>  target/i386/cpu.c          |    2 ++
>>  target/i386/cpu.h          |    1 +
>>  7 files changed, 33 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/core/machine-hmp-cmds.c b/hw/core/machine-hmp-cmds.c
>> index 1f66bda346..6c534779af 100644
>> --- a/hw/core/machine-hmp-cmds.c
>> +++ b/hw/core/machine-hmp-cmds.c
>> @@ -89,6 +89,9 @@ void hmp_hotpluggable_cpus(Monitor *mon, const QDict *qdict)
>>          if (c->has_die_id) {
>>              monitor_printf(mon, "    die-id: \"%" PRIu64 "\"\n", c->die_id);
>>          }
>> +        if (c->has_ccx_id) {
>> +            monitor_printf(mon, "    ccx-id: \"%" PRIu64 "\"\n", c->ccx_id);
>> +        }
>>          if (c->has_core_id) {
>>              monitor_printf(mon, "    core-id: \"%" PRIu64 "\"\n", c->core_id);
>>          }
>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>> index 4034b7e903..9a8586cf30 100644
>> --- a/hw/core/machine.c
>> +++ b/hw/core/machine.c
>> @@ -694,6 +694,11 @@ void machine_set_cpu_numa_node(MachineState *machine,
>>              return;
>>          }
>>  
>> +        if (props->has_ccx_id && !slot->props.has_ccx_id) {
>> +            error_setg(errp, "ccx-id is not supported");
>> +            return;
>> +        }
>> +
>>          /* skip slots with explicit mismatch */
>>          if (props->has_thread_id && props->thread_id != slot->props.thread_id) {
>>                  continue;
>> @@ -707,6 +712,10 @@ void machine_set_cpu_numa_node(MachineState *machine,
>>                  continue;
>>          }
>>  
>> +        if (props->has_ccx_id && props->ccx_id != slot->props.ccx_id) {
>> +                continue;
>> +        }
>> +
>>          if (props->has_socket_id && props->socket_id != slot->props.socket_id) {
>>                  continue;
>>          }
>> @@ -1041,6 +1050,10 @@ static char *cpu_slot_to_string(const CPUArchId *cpu)
>>      if (cpu->props.has_die_id) {
>>          g_string_append_printf(s, "die-id: %"PRId64, cpu->props.die_id);
>>      }
>> +
>> +    if (cpu->props.has_ccx_id) {
>> +        g_string_append_printf(s, "ccx-id: %"PRId64, cpu->props.ccx_id);
>> +    }
>>      if (cpu->props.has_core_id) {
>>          if (s->len) {
>>              g_string_append_printf(s, ", ");
>> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
>> index 9e1c3f9f57..f71389ad9f 100644
>> --- a/hw/i386/pc.c
>> +++ b/hw/i386/pc.c
>> @@ -2444,6 +2444,7 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
>>  
>>          topo_ids.pkg_id = cpu->socket_id;
>>          topo_ids.die_id = cpu->die_id;
>> +        topo_ids.ccx_id = cpu->ccx_id;
>>          topo_ids.core_id = cpu->core_id;
>>          topo_ids.smt_id = cpu->thread_id;
>>          cpu->apic_id = apicid_from_topo_ids(&topo_info, &topo_ids);
>> @@ -2489,6 +2490,13 @@ static void pc_cpu_pre_plug(HotplugHandler *hotplug_dev,
>>      }
>>      cpu->die_id = topo_ids.die_id;
>>  
>> +    if (cpu->ccx_id != -1 && cpu->ccx_id != topo_ids.ccx_id) {
>> +        error_setg(errp, "property ccx-id: %u doesn't match set apic-id:"
>> +            " 0x%x (ccx-id: %u)", cpu->ccx_id, cpu->apic_id, topo_ids.ccx_id);
>> +        return;
>> +    }
>> +    cpu->ccx_id = topo_ids.ccx_id;
>> +
>>      if (cpu->core_id != -1 && cpu->core_id != topo_ids.core_id) {
>>          error_setg(errp, "property core-id: %u doesn't match set apic-id:"
>>              " 0x%x (core-id: %u)", cpu->core_id, cpu->apic_id, topo_ids.core_id);
>> @@ -2896,6 +2904,8 @@ static const CPUArchIdList *pc_possible_cpu_arch_ids(MachineState *ms)
>>          ms->possible_cpus->cpus[i].props.socket_id = topo_ids.pkg_id;
>>          ms->possible_cpus->cpus[i].props.has_die_id = true;
>>          ms->possible_cpus->cpus[i].props.die_id = topo_ids.die_id;
>> +        ms->possible_cpus->cpus[i].props.has_ccx_id = true;
>> +        ms->possible_cpus->cpus[i].props.ccx_id = topo_ids.ccx_id;
>>          ms->possible_cpus->cpus[i].props.has_core_id = true;
>>          ms->possible_cpus->cpus[i].props.core_id = topo_ids.core_id;
>>          ms->possible_cpus->cpus[i].props.has_thread_id = true;
>> diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
>> index fb10863a66..5a61d53f05 100644
>> --- a/include/hw/i386/topology.h
>> +++ b/include/hw/i386/topology.h
>> @@ -170,6 +170,7 @@ static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
>>              (apicid >> apicid_die_offset(nr_cores, nr_threads)) &
>>              ~(0xFFFFFFFFUL << apicid_die_width(nr_dies));
>>      topo_ids->pkg_id = apicid >> apicid_pkg_offset(nr_dies, nr_cores, nr_threads);
>> +    topo_ids->ccx_id = 0;
>>  }
>>  
>>  /* Make APIC ID for the CPU 'cpu_index'
>> diff --git a/qapi/machine.json b/qapi/machine.json
>> index 6db8a7e2ec..bb7627e698 100644
>> --- a/qapi/machine.json
>> +++ b/qapi/machine.json
>> @@ -597,9 +597,10 @@
>>  # @node-id: NUMA node ID the CPU belongs to
>>  # @socket-id: socket number within node/board the CPU belongs to
>>  # @die-id: die number within node/board the CPU belongs to (Since 4.1)
>> +# @ccx-id: core complex number within node/board the CPU belongs to (Since 4.1)
> 
> Can we come up with a non-AMD specific name?
> E.g. would "last level cache" be correct for AMD?
> Better ideas?

Yes. Last level cache (or llc_id) should work. Will change it in the next
revision. Thanks.
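Just to illustrate the shape of the rename (llc-id/llc_id are placeholder
names until the next revision settles on one), the property definition
would end up looking something like:

    /* hypothetical rename of the ccx-id property added in this patch */
    DEFINE_PROP_INT32("llc-id", X86CPU, llc_id, -1),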

> 
>>  # @core-id: core number within die the CPU belongs to
>>  # @thread-id: thread number within core the CPU belongs to
>>  #
>> -# Note: currently there are 5 properties that could be present
>> +# Note: currently there are 6 properties that could be present
>>  # but management should be prepared to pass through other
>>  # properties with device_add command to allow for future
>>  # interface extension. This also requires the filed names to be kept in
>> @@ -611,6 +612,7 @@
>>    'data': { '*node-id': 'int',
>>              '*socket-id': 'int',
>>              '*die-id': 'int',
>> +            '*ccx-id': 'int',
>>              '*core-id': 'int',
>>              '*thread-id': 'int'
>>    }
>> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
>> index 6d7f9b6b8b..ca02bc21ec 100644
>> --- a/target/i386/cpu.c
>> +++ b/target/i386/cpu.c
>> @@ -5811,12 +5811,14 @@ static Property x86_cpu_properties[] = {
>>      DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, 0),
>>      DEFINE_PROP_INT32("core-id", X86CPU, core_id, 0),
>>      DEFINE_PROP_INT32("die-id", X86CPU, die_id, 0),
>> +    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, 0),
>>      DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, 0),
>>  #else
>>      DEFINE_PROP_UINT32("apic-id", X86CPU, apic_id, UNASSIGNED_APIC_ID),
>>      DEFINE_PROP_INT32("thread-id", X86CPU, thread_id, -1),
>>      DEFINE_PROP_INT32("core-id", X86CPU, core_id, -1),
>>      DEFINE_PROP_INT32("die-id", X86CPU, die_id, -1),
>> +    DEFINE_PROP_INT32("ccx-id", X86CPU, ccx_id, -1),
>>      DEFINE_PROP_INT32("socket-id", X86CPU, socket_id, -1),
>>  #endif
>>      DEFINE_PROP_INT32("node-id", X86CPU, node_id, CPU_UNSET_NUMA_NODE_ID),
>> diff --git a/target/i386/cpu.h b/target/i386/cpu.h
>> index 8b3dc5533e..db940cdb2a 100644
>> --- a/target/i386/cpu.h
>> +++ b/target/i386/cpu.h
>> @@ -1508,6 +1508,7 @@ struct X86CPU {
>>      int32_t node_id; /* NUMA node this CPU belongs to */
>>      int32_t socket_id;
>>      int32_t die_id;
>> +    int32_t ccx_id;
>>      int32_t core_id;
>>      int32_t thread_id;
>>  
>>

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality Moger, Babu
@ 2019-10-10  3:25   ` Eduardo Habkost
  2019-12-02 20:18     ` Babu Moger
  0 siblings, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-10  3:25 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

Hi,

First of all, sorry for taking more than a month to start
reviewing this.

On Fri, Sep 06, 2019 at 07:11:43PM +0000, Moger, Babu wrote:
> To support the new epyc mode, we need to know the number of numa nodes
> in advance to generate the apic id correctly. [...]

This explains that we need to initialize numa_info earlier than
something...

>                                     [...] So, split the numa
> initialization into two. The function parse_numa initializes numa_info
> and updates nb_numa_nodes. And then parse_numa_node does the numa node
> initialization.

...but I miss what "something" is.

The sequence of events here will be:

* parse_numa_opts()
  * for each -numa option:
    * parse_numa()
      * set_numa_options()
        * parse_numa_info()
          * here ms->numa_state->num_nodes is incremented [1]
* parse_numa_node_opts()
  * for each -numa option:
    * parse_numa_node()
      * set_numa_node_options()
        * here are the operations that are being delayed by this
          patch [2]

What exactly makes it necessary for [2] to happen after [1] is
done for all NUMA nodes?

This needs to be clear in the code, otherwise somebody will try to refactor
this in the future and merge set_numa_node_options() and parse_numa_info()
again, not knowing why ordering between [1] and [2] is so important.

In addition to documenting it better, I suggest saving the CPU
index list in NodeInfo, instead of calling qemu_opts_foreach()
twice.  (Probably a good idea to document the new field as
internal, though.  We don't want machine-specific code to be
looking at the CPU index list.)

Also, would it work if the delayed initialization is done at
numa_complete_configuration() instead of a new
parse_numa_node_opts() function?  We already have 2 separate
steps in NUMA initialization (parse_numa_node() and
numa_complete_configuration()), so it would be nice to avoid
adding a 3rd one.

Putting all the suggestions together, the code would look like this:


static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
                            Error **errp)
{
    /* ... */
    numa_info[nodenr].cpu_indexes = QAPI_CLONE(uint16List, node->cpus);
    /* ... */
}

void numa_complete_configuration(MachineState *ms)
{
    /* ... */
    for (i = 0; i < ms->numa_state->num_nodes; i++) {
        /*
         * numa_node_complete_configuration() needs to be called after all
         * nodes were already parsed, because <insert reason here>,
         */
        numa_node_complete_configuration(ms, &numa_info[i]);
    }
}

void numa_node_complete_configuration(MachineState *ms, NodeInfo *node)
{
    MachineClass *mc = MACHINE_GET_CLASS(ms);
    int nodenr = node - numa_info;
    uint16List *cpu_index;
    Error *err = NULL;

    for (cpu_index = node->cpu_indexes; cpu_index; cpu_index = cpu_index->next) {
        CpuInstanceProperties props;
        props = mc->cpu_index_to_instance_props(ms, cpu_index->value);
        props.node_id = nodenr;
        props.has_node_id = true;
        machine_set_cpu_numa_node(ms, &props, &err);
    }
}


> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  hw/core/numa.c        |  106 +++++++++++++++++++++++++++++++++++--------------
>  include/sysemu/numa.h |    2 +
>  vl.c                  |    2 +
>  3 files changed, 80 insertions(+), 30 deletions(-)
> 
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index a11431483c..27fa6b5e1d 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -55,14 +55,10 @@ bool have_numa_distance;
>  NodeInfo numa_info[MAX_NODES];
>  
>  
> -static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
> +static void parse_numa_info(MachineState *ms, NumaNodeOptions *node,
>                              Error **errp)
>  {
> -    Error *err = NULL;
>      uint16_t nodenr;
> -    uint16List *cpus = NULL;
> -    MachineClass *mc = MACHINE_GET_CLASS(ms);
> -    unsigned int max_cpus = ms->smp.max_cpus;
>  
>      if (node->has_nodeid) {
>          nodenr = node->nodeid;
> @@ -81,29 +77,6 @@ static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
>          return;
>      }
>  
> -    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
> -        error_setg(errp, "NUMA is not supported by this machine-type");
> -        return;
> -    }
> -    for (cpus = node->cpus; cpus; cpus = cpus->next) {
> -        CpuInstanceProperties props;
> -        if (cpus->value >= max_cpus) {
> -            error_setg(errp,
> -                       "CPU index (%" PRIu16 ")"
> -                       " should be smaller than maxcpus (%d)",
> -                       cpus->value, max_cpus);
> -            return;
> -        }
> -        props = mc->cpu_index_to_instance_props(ms, cpus->value);
> -        props.node_id = nodenr;
> -        props.has_node_id = true;
> -        machine_set_cpu_numa_node(ms, &props, &err);
> -        if (err) {
> -            error_propagate(errp, err);
> -            return;
> -        }
> -    }
> -
>      have_memdevs = have_memdevs ? : node->has_memdev;
>      have_mem = have_mem ? : node->has_mem;
>      if ((node->has_mem && have_memdevs) || (node->has_memdev && have_mem)) {
> @@ -177,7 +150,7 @@ void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp)
>  
>      switch (object->type) {
>      case NUMA_OPTIONS_TYPE_NODE:
> -        parse_numa_node(ms, &object->u.node, &err);
> +        parse_numa_info(ms, &object->u.node, &err);
>          if (err) {
>              goto end;
>          }
> @@ -242,6 +215,73 @@ end:
>      return 0;
>  }
>  
> +void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
> +{
> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
> +    NumaNodeOptions *node = &object->u.node;
> +    unsigned int max_cpus = ms->smp.max_cpus;
> +    uint16List *cpus = NULL;
> +    Error *err = NULL;
> +    uint16_t nodenr;
> +
> +    if (node->has_nodeid) {
> +        nodenr = node->nodeid;
> +    } else {
> +        error_setg(errp, "NUMA node information is not available");
> +    }
> +
> +    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
> +        error_setg(errp, "NUMA is not supported by this machine-type");
> +        return;
> +    }
> +
> +    for (cpus = node->cpus; cpus; cpus = cpus->next) {
> +        CpuInstanceProperties props;
> +        if (cpus->value >= max_cpus) {
> +            error_setg(errp,
> +                       "CPU index (%" PRIu16 ")"
> +                       " should be smaller than maxcpus (%d)",
> +                       cpus->value, max_cpus);
> +            return;
> +         }
> +         props = mc->cpu_index_to_instance_props(ms, cpus->value);
> +         props.node_id = nodenr;
> +         props.has_node_id = true;
> +         machine_set_cpu_numa_node(ms, &props, &err);
> +         if (err) {
> +            error_propagate(errp, err);
> +            return;
> +         }
> +    }
> +}
> +
> +static int parse_numa_node(void *opaque, QemuOpts *opts, Error **errp)
> +{
> +    NumaOptions *object = NULL;
> +    MachineState *ms = MACHINE(opaque);
> +    Error *err = NULL;
> +    Visitor *v = opts_visitor_new(opts);
> +
> +    visit_type_NumaOptions(v, NULL, &object, &err);
> +    visit_free(v);
> +    if (err) {
> +        goto end;
> +    }
> +
> +    if (object->type == NUMA_OPTIONS_TYPE_NODE) {
> +        set_numa_node_options(ms, object, &err);
> +    }
> +
> +end:
> +    qapi_free_NumaOptions(object);
> +    if (err) {
> +        error_propagate(errp, err);
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
>  /* If all node pair distances are symmetric, then only distances
>   * in one direction are enough. If there is even one asymmetric
>   * pair, though, then all distances must be provided. The
> @@ -368,7 +408,7 @@ void numa_complete_configuration(MachineState *ms)
>      if (ms->ram_slots > 0 && nb_numa_nodes == 0 &&
>          mc->auto_enable_numa_with_memhp) {
>              NumaNodeOptions node = { };
> -            parse_numa_node(ms, &node, &error_abort);
> +            parse_numa_info(ms, &node, &error_abort);
>      }
>  
>      assert(max_numa_nodeid <= MAX_NODES);
> @@ -448,6 +488,12 @@ void parse_numa_opts(MachineState *ms)
>      qemu_opts_foreach(qemu_find_opts("numa"), parse_numa, ms, &error_fatal);
>  }
>  
> +void parse_numa_node_opts(MachineState *ms)
> +{
> +    qemu_opts_foreach(qemu_find_opts("numa"), parse_numa_node,
> +                      ms, &error_fatal);
> +}
> +
>  void numa_cpu_pre_plug(const CPUArchId *slot, DeviceState *dev, Error **errp)
>  {
>      int node_id = object_property_get_int(OBJECT(dev), "node-id", &error_abort);
> diff --git a/include/sysemu/numa.h b/include/sysemu/numa.h
> index 01a263eba2..ca109adaa6 100644
> --- a/include/sysemu/numa.h
> +++ b/include/sysemu/numa.h
> @@ -24,7 +24,9 @@ struct NumaNodeMem {
>  extern NodeInfo numa_info[MAX_NODES];
>  
>  void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp);
> +void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp);
>  void parse_numa_opts(MachineState *ms);
> +void parse_numa_node_opts(MachineState *ms);
>  void numa_complete_configuration(MachineState *ms);
>  void query_numa_node_mem(NumaNodeMem node_mem[]);
>  extern QemuOptsList qemu_numa_opts;
> diff --git a/vl.c b/vl.c
> index b426b32134..711d2ae5da 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -4339,6 +4339,8 @@ int main(int argc, char **argv, char **envp)
>      }
>      parse_numa_opts(current_machine);
>  
> +    parse_numa_node_opts(current_machine);
> +
>      /* do monitor/qmp handling at preconfig state if requested */
>      main_loop();
>  
> 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs Moger, Babu
@ 2019-10-10  3:26   ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-10  3:26 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:11:50PM +0000, Moger, Babu wrote:
> Rename a few data structures related to X86 topology.
> X86CPUTopoIDs will hold the individual arch ids. The next
> patch introduces X86CPUTopoInfo which will hold all the
> topology information (like cores, threads, etc.).
> 
> Add node_id and ccx_id. These will be required to support the
> new epyc mode. There is no functional change.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>

Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info Moger, Babu
@ 2019-10-11  2:29   ` Eduardo Habkost
  2019-12-02 20:23     ` Babu Moger
  2019-10-11  3:54   ` Eduardo Habkost
  1 sibling, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  2:29 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:11:57PM +0000, Moger, Babu wrote:
> This is an effort to re-arrange a few data structures for better
> readability. Add X86CPUTopoInfo which will hold all the topology
> information required to build the cpu topology. There are no
> functional changes.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  hw/i386/pc.c               |   40 +++++++++++++++++++++++++++-------------
>  include/hw/i386/topology.h |   40 ++++++++++++++++++++++++++--------------
>  2 files changed, 53 insertions(+), 27 deletions(-)
> 
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index ada445f8f3..95aab8e5e7 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -930,11 +930,15 @@ static uint32_t x86_cpu_apic_id_from_index(PCMachineState *pcms,
>  {
>      MachineState *ms = MACHINE(pcms);
>      PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
> +    X86CPUTopoInfo topo_info;
>      uint32_t correct_id;
>      static bool warned;
>  
> -    correct_id = x86_apicid_from_cpu_idx(pcms->smp_dies, ms->smp.cores,
> -                                         ms->smp.threads, cpu_index);
> +    topo_info.nr_dies = pcms->smp_dies;
> +    topo_info.nr_cores = ms->smp.cores;
> +    topo_info.nr_threads = ms->smp.threads;
> +
> +    correct_id = x86_apicid_from_cpu_idx(&topo_info, cpu_index);

If you are using the struct in function calls, please make sure
all fields are filled correctly, so we won't introduce bugs
accidentally if we start using the new fields inside the topology
functions.
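For example, a designated-initializer form that spells out every field
(this is only a sketch, using the same sources the later patches in this
series use) would make a missing field easy to spot:

    X86CPUTopoInfo topo_info = {
        .numa_nodes = nb_numa_nodes,
        .nr_sockets = ms->smp.sockets,
        .nr_dies    = pcms->smp_dies,
        .nr_cores   = ms->smp.cores,
        .nr_threads = ms->smp.threads,
    };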

Alternatively, you can leave the struct without the numa_nodes
and nr_sockets fields by now (because they are unused), and add
the fields in another patch.

Except for this, the patch looks good.  However, I don't yet
understand the use case for the `numa_nodes` field.  I will
probably comment on it once I see the patches that use the
new field.

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology Moger, Babu
@ 2019-10-11  2:31   ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  2:31 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:12:04PM +0000, Moger, Babu wrote:
> Store the smp sockets in CpuTopology. The socket information
> is required to build the cpu topology in the new epyc mode.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>

Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>

> ---
>  hw/core/machine.c   |    1 +
>  hw/i386/pc.c        |    1 +
>  include/hw/boards.h |    2 ++
>  vl.c                |    1 +
>  4 files changed, 5 insertions(+)
> 
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index c58a8e594e..4034b7e903 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -795,6 +795,7 @@ static void smp_parse(MachineState *ms, QemuOpts *opts)
>          ms->smp.cpus = cpus;
>          ms->smp.cores = cores;
>          ms->smp.threads = threads;
> +        ms->smp.sockets = sockets;
>      }
>  
>      if (ms->smp.cpus > 1) {
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 95aab8e5e7..9e1c3f9f57 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1609,6 +1609,7 @@ void pc_smp_parse(MachineState *ms, QemuOpts *opts)
>          ms->smp.cpus = cpus;
>          ms->smp.cores = cores;
>          ms->smp.threads = threads;
> +        ms->smp.sockets = sockets;
>          pcms->smp_dies = dies;
>      }
>  
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index a71d1a53a5..12eb5032a5 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -245,12 +245,14 @@ typedef struct DeviceMemoryState {
>   * @cpus: the number of present logical processors on the machine
>   * @cores: the number of cores in one package
>   * @threads: the number of threads in one core
> + * @sockets: the number of sockets on the machine
>   * @max_cpus: the maximum number of logical processors on the machine
>   */
>  typedef struct CpuTopology {
>      unsigned int cpus;
>      unsigned int cores;
>      unsigned int threads;
> +    unsigned int sockets;
>      unsigned int max_cpus;
>  } CpuTopology;
>  
> diff --git a/vl.c b/vl.c
> index 711d2ae5da..473a688779 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -3981,6 +3981,7 @@ int main(int argc, char **argv, char **envp)
>      current_machine->smp.max_cpus = machine_class->default_cpus;
>      current_machine->smp.cores = 1;
>      current_machine->smp.threads = 1;
> +    current_machine->smp.sockets = 1;
>  
>      machine_class->smp_parse(current_machine,
>          qemu_opts_find(qemu_find_opts("smp-opts"), NULL));
> 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation Moger, Babu
@ 2019-10-11  2:32   ` Eduardo Habkost
  2019-12-02 20:29     ` Babu Moger
  0 siblings, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  2:32 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:12:11PM +0000, Moger, Babu wrote:
> Some parameters are unnecessarily passed for offset/width
> calculation. Remove those parameters from function prototypes.
> No functional change.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>

Isn't it simpler to use the new X86CPUTopoInfo struct, to make
hard-to-spot mistakes less likely when calling those functions?
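Roughly along these lines (a sketch only, not the actual patch;
apicid_bitwidth_for_count() and friends are the existing helpers in
topology.h):

    static inline unsigned apicid_core_width(X86CPUTopoInfo *topo_info)
    {
        return apicid_bitwidth_for_count(topo_info->nr_cores);
    }

    static inline unsigned apicid_core_offset(X86CPUTopoInfo *topo_info)
    {
        return apicid_smt_width(topo_info) + apicid_smt_offset(topo_info);
    }

That way a caller can't accidentally swap two unsigned arguments.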

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions Moger, Babu
@ 2019-10-11  3:29   ` Eduardo Habkost
  2019-12-02 20:38     ` Babu Moger
  0 siblings, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  3:29 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:12:25PM +0000, Moger, Babu wrote:
> These functions add support for building the new epyc mode topology
> given smp details like numa nodes, cores, threads and sockets.
> Subsequent patches will use these functions to build the topology.
> 
> The topology details are available in Processor Programming Reference (PPR)
> for AMD Family 17h Model 01h, Revision B1 Processors.
> It is available at https://www.amd.com/en/support/tech-docs
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  include/hw/i386/topology.h |  174 ++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 174 insertions(+)
> 
> diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
> index 5a61d53f05..6fd4184f07 100644
> --- a/include/hw/i386/topology.h
> +++ b/include/hw/i386/topology.h
> @@ -62,6 +62,22 @@ typedef struct X86CPUTopoInfo {
>      unsigned nr_threads;
>  } X86CPUTopoInfo;
>  
> +/*
> + * Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E
> + * Please refer to the AMD64 Architecture Programmer’s Manual Volume 3.
> + * Define the constants to build the cpu topology. Right now, TOPOEXT
> + * feature is enabled only on EPYC. So, these constants are based on
> + * EPYC supported configurations. We may need to handle the cases if
> + * these values change in future.
> + */
> +
> +/* Maximum core complexes in a node */
> +#define MAX_CCX                  2

I suggest a more explicit name like MAX_CCX_IN_NODE.

> +/* Maximum cores in a core complex */
> +#define MAX_CORES_IN_CCX         4
> +/* Maximum cores in a node */
> +#define MAX_CORES_IN_NODE        8
> +

Having limits isn't necessarily bad, but if we look at the code
that uses those constants below:

[...]
> +/*
> + * Figure out the number of nodes required to build this config.
> + * Max cores in a nodes is 8
> + */
> +static inline int nodes_in_pkg(X86CPUTopoInfo *topo_info)
> +{
> +    /*
> +     * Create a config with user given (nr_nodes > 1) numa node config,
> +     * else go with a standard configuration
> +     */
> +    if (topo_info->numa_nodes > 1) {
> +        return DIV_ROUND_UP(topo_info->numa_nodes, topo_info->nr_sockets);
> +    } else {
> +        return DIV_ROUND_UP(topo_info->nr_cores, MAX_CORES_IN_NODE);
> +    }
> +}

Here we are trying to choose a default value for the number of
nodes, but it's better to let the CPU or machine code choose it.
Otherwise, this will be very painful to maintain if new CPUs with
different limits appear.  I would just let the number of nodes
per package be configurable on the command line.

> +
> +/*
> + * Decide the number of cores in a core complex with the given nr_cores using
> + * following set constants MAX_CCX, MAX_CORES_IN_CCX, MAX_CORES_IN_DIE and
> + * MAX_NODES_PER_SOCKET. Maintain symmetry as much as possible
> + * L3 cache is shared across all cores in a core complex. So, this will also
> + * tell us how many cores are sharing the L3 cache.
> + */
> +static inline int cores_in_ccx(X86CPUTopoInfo *topo_info)
> +{
> +    int nodes;
> +
> +    /* Get the number of nodes required to build this config */
> +    nodes = nodes_in_pkg(topo_info);
> +
> +    /*
> +     * Divide the cores accros all the core complexes
> +     * Return rounded up value
> +     */
> +    return DIV_ROUND_UP(topo_info->nr_cores, nodes * MAX_CCX);
> +}

Same here.  This will become painful to maintain if CPUs with a
different CCX-per-node limit appear.  I suggest just letting the
number of cores per CCX be configurable on the command line.
The "cores" option is already defined as "cores per die" when
nr_dies > 1.  We can easily define it to mean "cores per CCX"
when nr_ccxs > 1.

If you still want to impose limits on some of those parameters, I
suggest placing those limits in the CPU model table, not in
global constants.


> +
> +/*
> + * Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
> + *
> + * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
> + */
> +static inline apic_id_t x86_apicid_from_topo_ids_epyc(X86CPUTopoInfo *topo_info,
> +                                                  const X86CPUTopoIDs *topo_ids)
> +{
> +    unsigned nr_ccxs = MAX_CCX;

I suggest adding nr_ccxs to X86CPUTopoInfo, instead of making it
fixed.

> +    unsigned nr_nodes = nodes_in_pkg(topo_info);

Same here: isn't it better to have a nr_nodes_per_pkg field in
X86CPUTopoInfo, and make it configurable on the command line?

> +    unsigned nr_cores = MAX_CORES_IN_CCX;

nr_cores is already in X86CPUTopoInfo, why aren't you using it?

Similar questions at x86_topo_ids_from_apicid_epyc() [1].

> +    unsigned nr_threads = topo_info->nr_threads;
> +
> +    return (topo_ids->pkg_id  << apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
> +                                                        nr_cores, nr_threads)) |
> +           (topo_ids->node_id  << apicid_node_offset(nr_ccxs, nr_cores,
> +                                                     nr_threads)) |
> +           (topo_ids->ccx_id  << apicid_ccx_offset(nr_cores, nr_threads)) |
> +           (topo_ids->core_id << apicid_core_offset(nr_threads)) |
> +           topo_ids->smt_id;
> +}
> +
> +static inline void x86_topo_ids_from_idx_epyc(X86CPUTopoInfo *topo_info,
> +                                              unsigned cpu_index,
> +                                              X86CPUTopoIDs *topo_ids)
> +{
> +    unsigned nr_cores = topo_info->nr_cores;

Is nr_cores cores-per-ccx?  cores-per-node?  cores-per-pkg?


> +    unsigned nr_threads = topo_info->nr_threads;
> +    unsigned core_index = cpu_index / nr_threads % nr_cores;

It's hard to understand what this variable means.  Is it a unique
identifier for a core inside the whole system?  Is it unique
inside a package?  Unique inside a node?  Unique inside a ccx?


> +    unsigned ccx_cores = cores_in_ccx(topo_info);
> +
> +    topo_ids->smt_id = cpu_index % nr_threads;
> +    topo_ids->core_id = core_index % ccx_cores; /* core id inside the ccx */
> +    topo_ids->ccx_id = (core_index % (ccx_cores * MAX_CCX)) / ccx_cores;
> +    topo_ids->node_id = core_index / (ccx_cores * MAX_CCX);

It looks like we have one extra constraint we must ensure
elsewhere in the code: we need to make sure the node topology in
topo->node_id matches the node topology defined using -numa,
don't we?


> +    topo_ids->pkg_id = cpu_index / (nr_cores * nr_threads);

Now, these calculations are over my head.  It's hard to
understand what "nr_cores" means, or what "ccx_cores * MAX_CCX" is
calculating.

I will try to compare with the existing x86_topo_ids_from_idx()
function, to see if we can make the hierarchy clearer:

Existing code is:

    topo->pkg_id = cpu_index / (nr_dies * nr_cores * nr_threads);
    topo->die_id = cpu_index / (nr_cores * nr_threads) % nr_dies;
    topo->core_id = cpu_index / nr_threads % nr_cores;
    topo->smt_id = cpu_index % nr_threads;

The code is hierarchical, but not 100% clear.  Let's try to make
it clearer:

    unsigned threads_per_core = topo->nr_threads;
    unsigned   cores_per_die  = topo->nr_cores;
    unsigned    dies_per_pkg  = topo->nr_dies;

    unsigned threads_per_die  = threads_per_core * cores_per_die;
    unsigned threads_per_pkg  = threads_per_die  *  dies_per_pkg;

    unsigned thread_index = cpu_index;
    unsigned   core_index = cpu_index / threads_per_core;
    unsigned    die_index = cpu_index / threads_per_die;
    unsigned    pkg_index = cpu_index / threads_per_pkg;

    topo-> smt_id = thread_index % threads_per_core;
    topo->core_id =   core_index %   cores_per_die;
    topo-> die_id =    die_index %    dies_per_pkg;
    topo-> pkg_id =    pkg_index;

The logic above should be equivalent to the existing
x86_topo_ids_from_idx() function.

Can we make it better for the smt/core/ccx/node/pkg case too?
Let's try:

    unsigned threads_per_core = topo->nr_threads;
    unsigned   cores_per_ccx  = topo->nr_cores;
    unsigned    ccxs_per_node = topo->nr_ccxs;
    unsigned   nodes_per_pkg  = topo->nodes_per_pkg;

    unsigned threads_per_ccx  = threads_per_core * cores_per_ccx;
    unsigned threads_per_node = threads_per_ccx  *  ccxs_per_node;
    unsigned threads_per_pkg  = threads_per_node * nodes_per_pkg;

    unsigned thread_index = cpu_index;
    unsigned   core_index = cpu_index / threads_per_core;
    unsigned    ccx_index = cpu_index / threads_per_ccx;
    unsigned   node_index = cpu_index / threads_per_node;
    unsigned    pkg_index = cpu_index / threads_per_pkg;

    topo-> smt_id = thread_index % threads_per_core;
    topo->core_id =   core_index %   cores_per_ccx;
    topo-> ccx_id =    ccx_index %    ccxs_per_node;
    topo->node_id =   node_index %   nodes_per_pkg;
    topo-> pkg_id =    pkg_index;

The logic above probably isn't equivalent to the code you wrote.
The point here is that each variable is very clearly defined with
a "per_*" suffix to avoid confusion, and all parameters are
coming from X86CPUTopoInfo.

> +}
> +
> +/*
> + * Calculate thread/core/package IDs for a specific topology,
> + * based on APIC ID
> + */
> +static inline void x86_topo_ids_from_apicid_epyc(apic_id_t apicid,
> +                                            X86CPUTopoInfo *topo_info,
> +                                            X86CPUTopoIDs *topo_ids)
> +{
> +    unsigned nr_nodes = nodes_in_pkg(topo_info);
> +    unsigned nr_cores = MAX_CORES_IN_CCX;
> +    unsigned nr_threads = topo_info->nr_threads;
> +    unsigned nr_ccxs = MAX_CCX;

Same questions as in x86_apicid_from_topo_ids_epyc() [1]:
shouldn't nr_cores, nr_ccxs and nr_nodes come from
X86CPUTopoInfo?

> +
> +    topo_ids->smt_id = apicid &
> +                   ~(0xFFFFFFFFUL << apicid_smt_width(nr_threads));
> +
> +    topo_ids->core_id = (apicid >> apicid_core_offset(nr_threads)) &
> +                   ~(0xFFFFFFFFUL << apicid_core_width(nr_cores));
> +
> +    topo_ids->ccx_id = (apicid >> apicid_ccx_offset(nr_cores, nr_threads)) &
> +                   ~(0xFFFFFFFFUL << apicid_ccx_width(nr_ccxs));
> +
> +    topo_ids->node_id = (apicid >> apicid_node_offset(nr_ccxs, nr_cores,
> +                                                      nr_threads)) &
> +                   ~(0xFFFFFFFFUL << apicid_node_width(nr_nodes));
> +
> +    topo_ids->pkg_id = apicid >> apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
> +                                                        nr_cores, nr_threads);
> +}
> +
> +/*
> + * Make APIC ID for the CPU 'cpu_index'
> + *
> + * 'cpu_index' is a sequential, contiguous ID for the CPU.
> + */
> +static inline apic_id_t x86_apicid_from_cpu_idx_epyc(X86CPUTopoInfo *topo_info,
> +                                                     unsigned cpu_index)
> +{
> +    X86CPUTopoIDs topo_ids;
> +    x86_topo_ids_from_idx_epyc(topo_info, cpu_index, &topo_ids);
> +    return x86_apicid_from_topo_ids_epyc(topo_info, &topo_ids);
> +}

If we make nodes_per_pkg==1 by default, treat nr_ccxs and nr_dies
as synonyms, and make all parameters come from X86CPUTopoInfo,
can't we reuse exactly the same topology code for both EPYC and
non-EPYC?  It would be better than having two separate sets of
functions.

I have one additional request: please add unit tests for the
functions above to test-x86-cpuid.c.  There may be opportunities
for refactoring this code later, and unit tests will be important
to make sure we don't break anything.
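Something minimal along these lines would already help (a hypothetical
test reusing the function names from this series; the config below
assumes a single-socket, single-node topology):

    static void test_topo_bits_epyc(void)
    {
        X86CPUTopoInfo topo_info = {
            .numa_nodes = 1, .nr_sockets = 1, .nr_cores = 8, .nr_threads = 2,
        };

        /* threads 0 and 1 of core 0 must differ only in the SMT bit */
        g_assert_cmpuint(x86_apicid_from_cpu_idx_epyc(&topo_info, 0), ==, 0);
        g_assert_cmpuint(x86_apicid_from_cpu_idx_epyc(&topo_info, 1), ==, 1);
    }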

> +
>  /* Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
>   *
>   * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
> 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions Moger, Babu
@ 2019-10-11  3:51   ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  3:51 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:12:33PM +0000, Moger, Babu wrote:
> Use the new epyc mode functions and delete the unused code.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  target/i386/cpu.c |  171 +++++++++++++++--------------------------------------
>  1 file changed, 48 insertions(+), 123 deletions(-)
> 
[...]
>  /* Encode cache info for CPUID[8000001E] */
> -static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
> -                                       uint32_t *eax, uint32_t *ebx,
> -                                       uint32_t *ecx, uint32_t *edx)
> +static void encode_topo_cpuid8000001e(CPUX86State *env,
> +                                      uint32_t *eax, uint32_t *ebx,
> +                                      uint32_t *ecx, uint32_t *edx)
>  {
> -    struct core_topology topo = {0};
> -    unsigned long nodes;
> -    int shift;
> +    X86CPUTopoIDs topo_ids = { 0 };
> +    unsigned long nodes, shift;
> +    X86CPU *cpu = env_archcpu(env);
> +    CPUState *cs = env_cpu(env);
> +    MachineState *ms = MACHINE(qdev_get_machine());
> +    X86CPUTopoInfo topo_info = {
> +        .numa_nodes = nb_numa_nodes,
> +        .nr_sockets = ms->smp.sockets,
> +        .nr_cores = ms->smp.cores,
> +        .nr_threads = ms->smp.threads,
> +    };
> +
> +    nodes = nodes_in_pkg(&topo_info);
> +    x86_topo_ids_from_idx_epyc(&topo_info, cs->cpu_index, &topo_ids);

You probably want to use x86_topo_ids_from_apicid() instead.
cpu_index is a deprecated identifier for a CPU, and may not match
the actual CPU topology if using CPU hotplug.
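I.e., derive the IDs from the APIC ID that was already assigned to the
CPU, roughly:

    /* sketch: use the stable APIC ID rather than cpu_index */
    x86_topo_ids_from_apicid_epyc(cpu->apic_id, &topo_info, &topo_ids);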

>  
> -    build_core_topology(cs->nr_cores, cpu->core_id, &topo);
>      *eax = cpu->apic_id;
> +
>      /*
>       * CPUID_Fn8000001E_EBX
>       * 31:16 Reserved
> @@ -496,11 +421,12 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
>       *             3 Core complex id
>       *           1:0 Core id
>       */
> -    if (cs->nr_threads - 1) {
> -        *ebx = ((cs->nr_threads - 1) << 8) | (topo.node_id << 3) |
> -                (topo.ccx_id << 2) | topo.core_id;
> +    if (topo_info.nr_threads - 1) {
> +        *ebx = ((topo_info.nr_threads - 1) << 8) | (topo_ids.node_id << 3) |
> +                (topo_ids.ccx_id << 2) | topo_ids.core_id;
>      } else {
> -        *ebx = (topo.node_id << 4) | (topo.ccx_id << 3) | topo.core_id;
> +        *ebx = (topo_ids.node_id << 4) | (topo_ids.ccx_id << 3) |
> +                topo_ids.core_id;

It's nice to see we are deleting old code and reusing the same
topology functions.  However, I'm worried that the relationship
between the topology IDs in APIC ID (EAX) and the topology IDs in
EBX and ECX isn't clear.

I'm also worried there's no code enforcing a limit for
node_id/ccx_id/core_id to ensure they fit in the fields defined
above.  These hardcoded limits will become painful to maintain
once we have new CPUs with different constraints.  We need to
make the limits explicit in the code, and also be prepared for
them to change in the future depending on the CPU model.
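Even plain assertions would make the assumptions visible.  As an
illustration only (the widths below follow the SMT layout in the
comment above and would need to track the CPU model):

    assert(topo_ids.core_id <= 3);   /* 2-bit core id field        */
    assert(topo_ids.ccx_id <= 1);    /* single-bit core complex id */
    assert(topo_ids.node_id <= 3);   /* node id must fit ECX[1:0]  */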

>      }
>      /*
>       * CPUID_Fn8000001E_ECX
> @@ -510,9 +436,8 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
>       *         2  Socket id
>       *       1:0  Node id
>       */
> -    if (topo.num_nodes <= 4) {
> -        *ecx = ((topo.num_nodes - 1) << 8) | (cpu->socket_id << 2) |
> -                topo.node_id;
> +    if (nodes <= 4) {
> +        *ecx = ((nodes - 1) << 8) | (topo_ids.pkg_id << 2) | topo_ids.node_id;
>      } else {
>          /*
>           * Node id fix up. Actual hardware supports up to 4 nodes. But with
> @@ -527,12 +452,12 @@ static void encode_topo_cpuid8000001e(CPUState *cs, X86CPU *cpu,
>           * number of nodes. find_last_bit returns last set bit(0 based). Left
>           * shift(+1) the socket id to represent all the nodes.
>           */
> -        nodes = topo.num_nodes - 1;
> +        nodes = nodes - 1;
>          shift = find_last_bit(&nodes, 8);
> -        *ecx = ((topo.num_nodes - 1) << 8) | (cpu->socket_id << (shift + 1)) |
> -                topo.node_id;
> +        *ecx = (nodes << 8) | (topo_ids.pkg_id << (shift + 1)) | topo_ids.node_id;

Like above, we also need to ensure pkg_id and node_id are within
reasonable limits, somewhere in the code, and that the limits can
change in the future depending on the CPU model.

>      }
>      *edx = 0;
> +
>  }
>  
>  /*
> @@ -4580,19 +4505,19 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
>          }
>          switch (count) {
>          case 0: /* L1 dcache info */
> -            encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache, cs,
> +            encode_cache_cpuid8000001d(env->cache_info_amd.l1d_cache,
>                                         eax, ebx, ecx, edx);
>              break;
>          case 1: /* L1 icache info */
> -            encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache, cs,
> +            encode_cache_cpuid8000001d(env->cache_info_amd.l1i_cache,
>                                         eax, ebx, ecx, edx);
>              break;
>          case 2: /* L2 cache info */
> -            encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
> +            encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache,
>                                         eax, ebx, ecx, edx);
>              break;
>          case 3: /* L3 cache info */
> -            encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
> +            encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache,
>                                         eax, ebx, ecx, edx);
>              break;
>          default: /* end of info */
> @@ -4602,7 +4527,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
>          break;
>      case 0x8000001E:
>          assert(cpu->core_id <= 255);
> -        encode_topo_cpuid8000001e(cs, cpu,
> +        encode_topo_cpuid8000001e(env,
>                                    eax, ebx, ecx, edx);
>          break;
>      case 0xC0000000:
> 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info
  2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info Moger, Babu
  2019-10-11  2:29   ` Eduardo Habkost
@ 2019-10-11  3:54   ` Eduardo Habkost
  2019-12-02 20:25     ` Babu Moger
  1 sibling, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  3:54 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:11:57PM +0000, Moger, Babu wrote:
> This is an effort to re-arrange a few data structures for better
> readability. Add X86CPUTopoInfo which will hold all the topology
> information required to build the cpu topology. There are no
> functional changes.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
[...]
> +typedef struct X86CPUTopoInfo {
> +    unsigned numa_nodes;
> +    unsigned nr_sockets;
> +    unsigned nr_dies;
> +    unsigned nr_cores;
> +    unsigned nr_threads;
> +} X86CPUTopoInfo;

With more complex topologies, the meaning of each of those fields
may be ambiguous.  e.g.: is nr_cores cores per die, cores per
ccx, or cores per socket?  Maybe we should use this opportunity
to use more explicit names like threads_per_core, cores_per_die,
dies_per_socket.
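E.g. something like this, purely to illustrate the naming (not a
concrete proposal for the final layout):

    typedef struct X86CPUTopoInfo {
        unsigned nodes_per_pkg;
        unsigned dies_per_pkg;
        unsigned cores_per_die;
        unsigned threads_per_core;
    } X86CPUTopoInfo;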

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function
  2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function Moger, Babu
@ 2019-10-11  3:54   ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  3:54 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:12:40PM +0000, Moger, Babu wrote:
> Introduce initialize_topo_info to initialize the X86CPUTopoInfo
> data structure used to build the topology. No functional change.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>


Reviewed-by: Eduardo Habkost <ehabkost@redhat.com>

Maybe this could be squashed into patch 03/16.

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property in PCMachineState
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property " Moger, Babu
@ 2019-10-11  3:59   ` Eduardo Habkost
  2019-10-11 16:23     ` Moger, Babu
  2019-10-11 16:59     ` Moger, Babu
  0 siblings, 2 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  3:59 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:13:09PM +0000, Moger, Babu wrote:
> Adds new epyc property in PCMachineState and also in MachineState.
> This property will be used to initialize the mode specific handlers
> to generate apic ids.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
[...]
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 12eb5032a5..0001d42e50 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -299,6 +299,8 @@ struct MachineState {
>      AccelState *accelerator;
>      CPUArchIdList *possible_cpus;
>      CpuTopology smp;
> +    bool epyc;
> +

This won't scale at all when we start adding new CPU models with
different topology constraints.

I still have hope we can avoid having a separate set of topology ID
functions (see my reply to "hw/386: Add new epyc mode topology
decoding functions").  But if we really have to create separate
functions, we can make them part of the CPU model table, not a
boolean machine property.
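Roughly something like the following, attached to the CPU model
definition (the names here are invented; it's only meant to show the
shape of the idea):

    typedef struct X86CPUTopoHandlers {
        apic_id_t (*apicid_from_cpu_idx)(X86CPUTopoInfo *topo_info,
                                         unsigned cpu_index);
        apic_id_t (*apicid_from_topo_ids)(X86CPUTopoInfo *topo_info,
                                          const X86CPUTopoIDs *topo_ids);
        void (*topo_ids_from_apicid)(apic_id_t apicid,
                                     X86CPUTopoInfo *topo_info,
                                     X86CPUTopoIDs *topo_ids);
    } X86CPUTopoHandlers;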

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode Moger, Babu
@ 2019-10-11  4:03   ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  4:03 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:13:22PM +0000, Moger, Babu wrote:
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  target/i386/cpu.c |   24 +++++++++++++++++++-----
>  1 file changed, 19 insertions(+), 5 deletions(-)
> 
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index f25491a029..f8b1fc5c07 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -4094,9 +4094,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
>                     uint32_t *eax, uint32_t *ebx,
>                     uint32_t *ecx, uint32_t *edx)
>  {
> +    MachineState *ms = MACHINE(qdev_get_machine());
>      X86CPU *cpu = env_archcpu(env);
>      CPUState *cs = env_cpu(env);
> -    uint32_t die_offset;
> +    uint32_t die_offset, pkg_offset;
>      uint32_t limit;
>      uint32_t signature[3];
>  
> @@ -4119,6 +4120,21 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
>          index = env->cpuid_level;
>      }
>  
> +    if (ms->epyc) {
> +        X86CPUTopoInfo topo_info = {
> +            .numa_nodes = nb_numa_nodes,
> +            .nr_sockets = ms->smp.sockets,
> +            .nr_cores = ms->smp.cores,
> +            .nr_threads = ms->smp.threads,
> +        };

Why aren't you using initialize_topo_info() here?

> +        unsigned nodes = nodes_in_pkg(&topo_info);
> +        pkg_offset = apicid_pkg_offset_epyc(nodes, MAX_CCX, MAX_CORES_IN_CCX,
> +                                            cs->nr_threads);
> +    } else {
> +        pkg_offset = apicid_pkg_offset(env->nr_dies, cs->nr_cores,
> +                                       cs->nr_threads);
> +    }

I think we won't need this at all if we simply:

1) Add nodes_per_pkg to X86CPUTopoInfo;
2) make nodes_per_pkg configurable;
3) just use topo->nr_dies and topo->nr_cores instead of those
   constants (rough sketch below).
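Rough sketch of (3), assuming nodes_per_pkg and nr_ccxs become fields of
X86CPUTopoInfo as suggested earlier in the thread:

    /* no MAX_* constants; everything comes from topo_info */
    pkg_offset = apicid_pkg_offset_epyc(topo_info.nodes_per_pkg,
                                        topo_info.nr_ccxs,
                                        topo_info.nr_cores,
                                        topo_info.nr_threads);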

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc
  2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc Moger, Babu
@ 2019-10-11  4:10   ` Eduardo Habkost
  2019-12-02 20:44     ` Babu Moger
  0 siblings, 1 reply; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11  4:10 UTC (permalink / raw)
  To: Moger, Babu
  Cc: mst, armbru, qemu-devel, imammedo, pbonzini, ssg.sos.staff, rth

On Fri, Sep 06, 2019 at 07:13:29PM +0000, Moger, Babu wrote:
> The current topology id match will not work for epyc mode when setting
> the node id. In epyc mode, ids like smt_id, thread_id, core_id,
> ccx_id and socket_id can be the same for more than one CPU across
> two numa nodes.
> 
> For example, we can have two CPUs with the following ids on two different nodes:
> 1. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=0
> 2. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=1

I don't understand how that could happen.  If all IDs are the
same, how is it possible that those two cases should match two
different CPUs?

cpu_index_to_instance_props() must always return a unique
identifier for a single CPU.

> 
> The function machine_set_cpu_numa_node will fail to find a match to assign
> the node. Add a new function machine_set_cpu_numa_node_epyc to set the node_id
> directly in epyc mode.
> 
> Signed-off-by: Babu Moger <babu.moger@amd.com>
> ---
>  hw/core/machine.c   |   24 ++++++++++++++++++++++++
>  hw/core/numa.c      |    6 +++++-
>  include/hw/boards.h |    4 ++++
>  3 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/core/machine.c b/hw/core/machine.c
> index 9a8586cf30..6bceefc6f3 100644
> --- a/hw/core/machine.c
> +++ b/hw/core/machine.c
> @@ -741,6 +741,30 @@ void machine_set_cpu_numa_node(MachineState *machine,
>      }
>  }
>  
> +void machine_set_cpu_numa_node_epyc(MachineState *machine,
> +                                    const CpuInstanceProperties *props,
> +                                    unsigned index,
> +                                    Error **errp)
> +{
> +    MachineClass *mc = MACHINE_GET_CLASS(machine);
> +    CPUArchId *slot;
> +
> +    if (!mc->possible_cpu_arch_ids) {
> +        error_setg(errp, "mapping of CPUs to NUMA node is not supported");
> +        return;
> +    }
> +
> +    /* disabling node mapping is not supported, forbid it */
> +    assert(props->has_node_id);
> +
> +    /* force board to initialize possible_cpus if it hasn't been done yet */
> +    mc->possible_cpu_arch_ids(machine);
> +
> +    slot = &machine->possible_cpus->cpus[index];
> +    slot->props.node_id = props->node_id;
> +    slot->props.has_node_id = props->has_node_id;
> +}
> +
>  static void smp_parse(MachineState *ms, QemuOpts *opts)
>  {
>      if (opts) {
> diff --git a/hw/core/numa.c b/hw/core/numa.c
> index 27fa6b5e1d..a9e835aea6 100644
> --- a/hw/core/numa.c
> +++ b/hw/core/numa.c
> @@ -247,7 +247,11 @@ void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
>           props = mc->cpu_index_to_instance_props(ms, cpus->value);
>           props.node_id = nodenr;
>           props.has_node_id = true;
> -         machine_set_cpu_numa_node(ms, &props, &err);
> +         if (ms->epyc) {
> +             machine_set_cpu_numa_node_epyc(ms, &props, cpus->value, &err);
> +         } else {
> +             machine_set_cpu_numa_node(ms, &props, &err);
> +         }
>           if (err) {
>              error_propagate(errp, err);
>              return;
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 0001d42e50..ec1b1c5a85 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -74,6 +74,10 @@ HotpluggableCPUList *machine_query_hotpluggable_cpus(MachineState *machine);
>  void machine_set_cpu_numa_node(MachineState *machine,
>                                 const CpuInstanceProperties *props,
>                                 Error **errp);
> +void machine_set_cpu_numa_node_epyc(MachineState *machine,
> +                                    const CpuInstanceProperties *props,
> +                                    unsigned index,
> +                                    Error **errp);
>  
>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
>  
> 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property in PCMachineState
  2019-10-11  3:59   ` Eduardo Habkost
@ 2019-10-11 16:23     ` Moger, Babu
  2019-10-11 16:59     ` Moger, Babu
  1 sibling, 0 replies; 43+ messages in thread
From: Moger, Babu @ 2019-10-11 16:23 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 10:59 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:13:09PM +0000, Moger, Babu wrote:
>> Adds new epyc property in PCMachineState and also in MachineState.
>> This property will be used to initialize the mode specific handlers
>> to generate apic ids.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
> [...]
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index 12eb5032a5..0001d42e50 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -299,6 +299,8 @@ struct MachineState {
>>      AccelState *accelerator;
>>      CPUArchIdList *possible_cpus;
>>      CpuTopology smp;
>> +    bool epyc;
>> +
> 
> This won't scale at all when we start adding new CPU models with
> different topology constraints.

Yes, I was aware this could cause scaling issues. Let me see if we can do
anything different to avoid this.

> 
> I still have hope we can avoid having separate set of topology ID
> functions (see my reply to "hw/386: Add new epyc mode topology

Yes, that was my hope too (not to have separate topology functions). Let
me think through this a bit more.

> decoding functions").  But if we really have to create separate
> functions, we can make them part of the CPU model table, not a
> boolean machine property.
> 

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property in PCMachineState
  2019-10-11  3:59   ` Eduardo Habkost
  2019-10-11 16:23     ` Moger, Babu
@ 2019-10-11 16:59     ` Moger, Babu
  2019-10-11 19:03       ` Eduardo Habkost
  1 sibling, 1 reply; 43+ messages in thread
From: Moger, Babu @ 2019-10-11 16:59 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth


On 10/10/19 10:59 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:13:09PM +0000, Moger, Babu wrote:
>> Adds new epyc property in PCMachineState and also in MachineState.
>> This property will be used to initialize the mode specific handlers
>> to generate apic ids.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
> [...]
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index 12eb5032a5..0001d42e50 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -299,6 +299,8 @@ struct MachineState {
>>      AccelState *accelerator;
>>      CPUArchIdList *possible_cpus;
>>      CpuTopology smp;
>> +    bool epyc;
>> +
> 
> This won't scale at all when we start adding new CPU models with
> different topology constraints.

Yes, I was aware this could cause scaling issues. Let me see if we can
do anything different.

> 
> I still have hope we can avoid having separate set of topology ID
> functions (see my reply to "hw/386: Add new epyc mode topology

Yes, that was my hope too. Let me think through this a bit more. I will come
back on this.


> decoding functions").  But if we really have to create separate
> functions, we can make them part of the CPU model table, not a
> boolean machine property.
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property in PCMachineState
  2019-10-11 16:59     ` Moger, Babu
@ 2019-10-11 19:03       ` Eduardo Habkost
  0 siblings, 0 replies; 43+ messages in thread
From: Eduardo Habkost @ 2019-10-11 19:03 UTC (permalink / raw)
  To: Moger, Babu; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth

On Fri, Oct 11, 2019 at 04:59:37PM +0000, Moger, Babu wrote:
> 
> On 10/10/19 10:59 PM, Eduardo Habkost wrote:
> > On Fri, Sep 06, 2019 at 07:13:09PM +0000, Moger, Babu wrote:
> >> Adds new epyc property in PCMachineState and also in MachineState.
> >> This property will be used to initialize the mode specific handlers
> >> to generate apic ids.
> >>
> >> Signed-off-by: Babu Moger <babu.moger@amd.com>
> >> ---
> > [...]
> >> diff --git a/include/hw/boards.h b/include/hw/boards.h
> >> index 12eb5032a5..0001d42e50 100644
> >> --- a/include/hw/boards.h
> >> +++ b/include/hw/boards.h
> >> @@ -299,6 +299,8 @@ struct MachineState {
> >>      AccelState *accelerator;
> >>      CPUArchIdList *possible_cpus;
> >>      CpuTopology smp;
> >> +    bool epyc;
> >> +
> > 
> > This won't scale at all when we start adding new CPU models with
> > different topology constraints.
> 
> Yes, I was aware this could cause scaling issues. Let me see if we can
> do anything different.
> 
> > 
> > I still have hope we can avoid having separate set of topology ID
> > functions (see my reply to "hw/386: Add new epyc mode topology
> 
> Yes, that was my hope too. Let me think through this a bit more. I will come
> back on this.

If you don't manage to use a common function in the next version,
it's not a big deal.  My main request is to make the calculations
easier to follow (e.g. avoiding any expression with more than two
terms, and always using an explicit "_per_*" suffix in all
variables and constants).

There's one possible problem I didn't realize yesterday: we might
need a mechanism to force a field width to be larger than
apicid_bitwidth_for_count(number_of_ids) (e.g. having 2 bits for
core ID even if there's only 1 or 2 cores per CCX).  Maybe the
solution is to add optional field width parameters to
X86CPUTopoInfo.

Then we could redefine the width functions like this:

static inline unsigned apicid_core_width(X86CPUTopoInfo *topo)
{
    return MAX(apicid_bitwidth_for_count(topo->nr_cores), topo->min_core_bits);
}
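
As a quick sanity check of the forced-width behaviour (made-up numbers,
and assuming the optional min_core_bits field suggested above):

X86CPUTopoInfo topo_info = { .nr_cores = 2, .min_core_bits = 2 };
/* apicid_bitwidth_for_count(2) == 1, but the minimum width wins */
assert(apicid_core_width(&topo_info) == 2);

i.e. the core ID field keeps 2 bits even though a single bit would be
enough to number 2 cores.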


Maybe we could replace the collection of fields with arrays to make all
calculations generic.  Untested example:

typedef enum TopoField {
    TOPO_FIELD_THREADS = 0,
    TOPO_FIELD_CORES,
    TOPO_FIELD_CCXS,  /* AMD */
    TOPO_FIELD_DIES = TOPO_FIELD_CCXS, /* Intel */
    TOPO_FIELD_NODES,
    TOPO_FIELD_PKG,
    MAX_TOPO_FIELD,
} TopoField;

typedef struct TopoFieldDefinition {
    /* Number of IDs at this level */
    unsigned count;
    /* Minimum number of APIC ID bits for this level */
    unsigned min_width;
} TopoFieldDefinition;

typedef struct X86CPUTopoInfo {
    TopoFieldDefinition fields[MAX_TOPO_FIELD];
} X86CPUTopoInfo;

typedef struct X86CPUTopoIDs {
    unsigned ids[MAX_TOPO_FIELD];
} X86CPUTopoIDs;

static inline unsigned apicid_field_width(const X86CPUTopoInfo *topo, TopoField field)
{
    const TopoFieldDefinition *def = &topo->fields[field];
    return MAX(apicid_bitwidth_for_count(def->count), def->min_width);
}

static inline unsigned apicid_field_offset(const X86CPUTopoInfo *topo, TopoField field)
{
    if (field == 0) {
        return 0;
    }
    return apicid_field_offset(topo, field - 1) + apicid_field_width(topo, field - 1);
}


static inline apic_id_t apicid_from_topo_ids(const X86CPUTopoInfo *topo,
                                             const X86CPUTopoIDs *ids)
{
    TopoField field;
    apic_id_t r = 0;
    for (field = 0; field < MAX_TOPO_FIELD; field++) {
        unsigned offset = apicid_field_offset(topo, field);
        unsigned width = apicid_field_width(topo, field);
        assert(apicid_bitwidth_for_count(ids->ids[field] + 1) <= width);
        r = deposit64(r, offset, width, ids->ids[field]);
    }
    return r;
}

static inline void x86_topo_ids_from_apicid(apic_id_t apicid,
                                            const X86CPUTopoInfo *topo,
                                            X86CPUTopoIDs *ids)
{
    TopoField field;
    for (field = 0; field < MAX_TOPO_FIELD; field++) {
        unsigned offset = apicid_field_offset(topo, field);
        unsigned width = apicid_field_width(topo, field);
        ids->ids[field] = extract64(apicid, offset, width);
    }
}
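
To make the intended usage concrete, here is an (equally untested) sketch
of how the arrays could be filled in for an EPYC-like configuration.  The
numbers are made up for illustration, they are not taken from the patch:

X86CPUTopoInfo topo = {
    .fields[TOPO_FIELD_THREADS] = { .count = 2 },
    .fields[TOPO_FIELD_CORES]   = { .count = 2, .min_width = 2 },
    .fields[TOPO_FIELD_CCXS]    = { .count = 2 },
    .fields[TOPO_FIELD_NODES]   = { .count = 4 },
    .fields[TOPO_FIELD_PKG]     = { .count = 2 },
};
X86CPUTopoIDs ids = {
    .ids[TOPO_FIELD_THREADS] = 1,
    .ids[TOPO_FIELD_CORES]   = 1,
    .ids[TOPO_FIELD_CCXS]    = 1,
    .ids[TOPO_FIELD_NODES]   = 3,
    .ids[TOPO_FIELD_PKG]     = 0,
};

/* thread at bit 0, core at bits 2:1 (min_width forces 2 bits even for
 * 2 cores), ccx at bit 3, node at bits 5:4, pkg at bit 6 */
apic_id_t apicid = apicid_from_topo_ids(&topo, &ids);   /* == 0x3b */

The min_width on the core level is what keeps the core field at 2 bits
with only 2 cores per CCX, which is exactly the forcing described above.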

> 
> 
> > decoding functions").  But if we really have to create separate
> > functions, we can make them part of the CPU model table, not a
> > boolean machine property.
> > 

-- 
Eduardo


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality
  2019-10-10  3:25   ` Eduardo Habkost
@ 2019-12-02 20:18     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:18 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth


Eduardo,
Sorry for taking so long to respond to your comments; I started working on
something else, so it took a while to come back to this.

I am working on v3 right now and have taken care of most of your comments
on v2. I plan to submit v3 sometime this week.

On 10/9/19 10:25 PM, Eduardo Habkost wrote:
> Hi,
> 
> First of all, sorry for taking more than a month to start
> reviewing this.
> 
> On Fri, Sep 06, 2019 at 07:11:43PM +0000, Moger, Babu wrote:
>> To support new epyc mode, we need to know the number of numa nodes
>> in advance to generate apic id correctly. [...]
> 
> This explains that we need to initialize numa_info earlier than
> something...
> 
>>                                     [...] So, split the numa
>> initialization into two. The function parse_numa initializes numa_info
>> and updates nb_numa_nodes. And then parse_numa_node does the numa node
>> initialization.
> 
> ...but I miss what "something" is.
> 
> The sequence of events here will be:
> 
> * parse_numa_opts()
>   * for each -numa option:
>     * parse_numa()
>       * set_numa_options()
>         * parse_numa_info()
>           * here ms->numa_state->num_nodes is incremented [1]
> * parse_numa_node_opts()
>   * for each -numa option:
>     * parse_numa_node()
>       * set_numa_node_options()
>         * here are the operations that are being delayed by this
>           patch [2]
> 
> What exactly makes it necessary for [2] to happen after [1] is
> done for all NUMA nodes?

We don't need these changes anymore. Please see my comments below.

> 
> This needs to be clear in the code, otherwise somebody will try to refactor
> this in the future and merge set_numa_node_options() and parse_numa_info()
> again, not knowing why ordering between [1] and [2] is so important.
> 
> In addition to documenting it better, I suggest saving the CPU
> index list in NodeInfo, instead of calling qemu_opts_foreach()
> twice.  (Probably a good idea to document the new field as
> internal, though.  We don't want machine-specific code to be
> looking at the CPU index list.)
> 
> Also, would it work if the delayed initialization is done at
> numa_complete_configuration() instead of a new
> parse_numa_node_opts() function?  We already have 2 separate
> steps in NUMA initialization (parse_numa_node() and
> numa_complete_configuration()), so it would be nice to avoid
> adding a 3rd one.
> 
> Putting all the suggestions together, the code would look like this:
> 
> 
> static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
>                             Error **errp)
> {
>     /* ... */
>     numa_info[nodenr].cpu_indexes = QAPI_CLONE(node->cpus, uint16List);
>     /* ... */
> }
> 
> void numa_complete_configuration(MachineState *ms)
> {
>     /* ... */
>     for (i = 0; i < ms->numa_state->num_nodes; i++) {
>         /*
>          * numa_node_complete_configuration() needs to be called after all
>          * nodes were already parsed, because <insert reason here>,
>          */
>         numa_node_complete_configuration(ms, &numa_info[i]);
>     }
> }
> 
> void numa_node_complete_configuration(MachineState *ms, NodeInfo *node)
> {
>     for (cpu_index = node->cpu_indexes; cpu_index; cpu_index = cpu_index->next) {
>         CpuInstanceProperties props;
>         props = mc->cpu_index_to_instance_props(ms, cpu_index->value);
>         props.node_id = nodenr;
>         props.has_node_id = true;
>         machine_set_cpu_numa_node(ms, &props, &err);
>     }
> }

Yes, this works fine and also keeps the code simple. The only requirement
was to know the number of numa nodes in advance. Thanks.

> 
> 
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  hw/core/numa.c        |  106 +++++++++++++++++++++++++++++++++++--------------
>>  include/sysemu/numa.h |    2 +
>>  vl.c                  |    2 +
>>  3 files changed, 80 insertions(+), 30 deletions(-)
>>
>> diff --git a/hw/core/numa.c b/hw/core/numa.c
>> index a11431483c..27fa6b5e1d 100644
>> --- a/hw/core/numa.c
>> +++ b/hw/core/numa.c
>> @@ -55,14 +55,10 @@ bool have_numa_distance;
>>  NodeInfo numa_info[MAX_NODES];
>>  
>>  
>> -static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
>> +static void parse_numa_info(MachineState *ms, NumaNodeOptions *node,
>>                              Error **errp)
>>  {
>> -    Error *err = NULL;
>>      uint16_t nodenr;
>> -    uint16List *cpus = NULL;
>> -    MachineClass *mc = MACHINE_GET_CLASS(ms);
>> -    unsigned int max_cpus = ms->smp.max_cpus;
>>  
>>      if (node->has_nodeid) {
>>          nodenr = node->nodeid;
>> @@ -81,29 +77,6 @@ static void parse_numa_node(MachineState *ms, NumaNodeOptions *node,
>>          return;
>>      }
>>  
>> -    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
>> -        error_setg(errp, "NUMA is not supported by this machine-type");
>> -        return;
>> -    }
>> -    for (cpus = node->cpus; cpus; cpus = cpus->next) {
>> -        CpuInstanceProperties props;
>> -        if (cpus->value >= max_cpus) {
>> -            error_setg(errp,
>> -                       "CPU index (%" PRIu16 ")"
>> -                       " should be smaller than maxcpus (%d)",
>> -                       cpus->value, max_cpus);
>> -            return;
>> -        }
>> -        props = mc->cpu_index_to_instance_props(ms, cpus->value);
>> -        props.node_id = nodenr;
>> -        props.has_node_id = true;
>> -        machine_set_cpu_numa_node(ms, &props, &err);
>> -        if (err) {
>> -            error_propagate(errp, err);
>> -            return;
>> -        }
>> -    }
>> -
>>      have_memdevs = have_memdevs ? : node->has_memdev;
>>      have_mem = have_mem ? : node->has_mem;
>>      if ((node->has_mem && have_memdevs) || (node->has_memdev && have_mem)) {
>> @@ -177,7 +150,7 @@ void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp)
>>  
>>      switch (object->type) {
>>      case NUMA_OPTIONS_TYPE_NODE:
>> -        parse_numa_node(ms, &object->u.node, &err);
>> +        parse_numa_info(ms, &object->u.node, &err);
>>          if (err) {
>>              goto end;
>>          }
>> @@ -242,6 +215,73 @@ end:
>>      return 0;
>>  }
>>  
>> +void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
>> +{
>> +    MachineClass *mc = MACHINE_GET_CLASS(ms);
>> +    NumaNodeOptions *node = &object->u.node;
>> +    unsigned int max_cpus = ms->smp.max_cpus;
>> +    uint16List *cpus = NULL;
>> +    Error *err = NULL;
>> +    uint16_t nodenr;
>> +
>> +    if (node->has_nodeid) {
>> +        nodenr = node->nodeid;
>> +    } else {
>> +        error_setg(errp, "NUMA node information is not available");
>> +    }
>> +
>> +    if (!mc->cpu_index_to_instance_props || !mc->get_default_cpu_node_id) {
>> +        error_setg(errp, "NUMA is not supported by this machine-type");
>> +        return;
>> +    }
>> +
>> +    for (cpus = node->cpus; cpus; cpus = cpus->next) {
>> +        CpuInstanceProperties props;
>> +        if (cpus->value >= max_cpus) {
>> +            error_setg(errp,
>> +                       "CPU index (%" PRIu16 ")"
>> +                       " should be smaller than maxcpus (%d)",
>> +                       cpus->value, max_cpus);
>> +            return;
>> +         }
>> +         props = mc->cpu_index_to_instance_props(ms, cpus->value);
>> +         props.node_id = nodenr;
>> +         props.has_node_id = true;
>> +         machine_set_cpu_numa_node(ms, &props, &err);
>> +         if (err) {
>> +            error_propagate(errp, err);
>> +            return;
>> +         }
>> +    }
>> +}
>> +
>> +static int parse_numa_node(void *opaque, QemuOpts *opts, Error **errp)
>> +{
>> +    NumaOptions *object = NULL;
>> +    MachineState *ms = MACHINE(opaque);
>> +    Error *err = NULL;
>> +    Visitor *v = opts_visitor_new(opts);
>> +
>> +    visit_type_NumaOptions(v, NULL, &object, &err);
>> +    visit_free(v);
>> +    if (err) {
>> +        goto end;
>> +    }
>> +
>> +    if (object->type == NUMA_OPTIONS_TYPE_NODE) {
>> +        set_numa_node_options(ms, object, &err);
>> +    }
>> +
>> +end:
>> +    qapi_free_NumaOptions(object);
>> +    if (err) {
>> +        error_propagate(errp, err);
>> +        return -1;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>>  /* If all node pair distances are symmetric, then only distances
>>   * in one direction are enough. If there is even one asymmetric
>>   * pair, though, then all distances must be provided. The
>> @@ -368,7 +408,7 @@ void numa_complete_configuration(MachineState *ms)
>>      if (ms->ram_slots > 0 && nb_numa_nodes == 0 &&
>>          mc->auto_enable_numa_with_memhp) {
>>              NumaNodeOptions node = { };
>> -            parse_numa_node(ms, &node, &error_abort);
>> +            parse_numa_info(ms, &node, &error_abort);
>>      }
>>  
>>      assert(max_numa_nodeid <= MAX_NODES);
>> @@ -448,6 +488,12 @@ void parse_numa_opts(MachineState *ms)
>>      qemu_opts_foreach(qemu_find_opts("numa"), parse_numa, ms, &error_fatal);
>>  }
>>  
>> +void parse_numa_node_opts(MachineState *ms)
>> +{
>> +    qemu_opts_foreach(qemu_find_opts("numa"), parse_numa_node,
>> +                      ms, &error_fatal);
>> +}
>> +
>>  void numa_cpu_pre_plug(const CPUArchId *slot, DeviceState *dev, Error **errp)
>>  {
>>      int node_id = object_property_get_int(OBJECT(dev), "node-id", &error_abort);
>> diff --git a/include/sysemu/numa.h b/include/sysemu/numa.h
>> index 01a263eba2..ca109adaa6 100644
>> --- a/include/sysemu/numa.h
>> +++ b/include/sysemu/numa.h
>> @@ -24,7 +24,9 @@ struct NumaNodeMem {
>>  extern NodeInfo numa_info[MAX_NODES];
>>  
>>  void set_numa_options(MachineState *ms, NumaOptions *object, Error **errp);
>> +void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp);
>>  void parse_numa_opts(MachineState *ms);
>> +void parse_numa_node_opts(MachineState *ms);
>>  void numa_complete_configuration(MachineState *ms);
>>  void query_numa_node_mem(NumaNodeMem node_mem[]);
>>  extern QemuOptsList qemu_numa_opts;
>> diff --git a/vl.c b/vl.c
>> index b426b32134..711d2ae5da 100644
>> --- a/vl.c
>> +++ b/vl.c
>> @@ -4339,6 +4339,8 @@ int main(int argc, char **argv, char **envp)
>>      }
>>      parse_numa_opts(current_machine);
>>  
>> +    parse_numa_node_opts(current_machine);
>> +
>>      /* do monitor/qmp handling at preconfig state if requested */
>>      main_loop();
>>  
>>
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info
  2019-10-11  2:29   ` Eduardo Habkost
@ 2019-12-02 20:23     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:23 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 9:29 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:11:57PM +0000, Moger, Babu wrote:
>> This is an effort to re-arrange a few data structures for better
>> readability. Add X86CPUTopoInfo which will have all the topology
>> information required to build the cpu topology. There are no
>> functional changes.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  hw/i386/pc.c               |   40 +++++++++++++++++++++++++++-------------
>>  include/hw/i386/topology.h |   40 ++++++++++++++++++++++++++--------------
>>  2 files changed, 53 insertions(+), 27 deletions(-)
>>
>> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
>> index ada445f8f3..95aab8e5e7 100644
>> --- a/hw/i386/pc.c
>> +++ b/hw/i386/pc.c
>> @@ -930,11 +930,15 @@ static uint32_t x86_cpu_apic_id_from_index(PCMachineState *pcms,
>>  {
>>      MachineState *ms = MACHINE(pcms);
>>      PCMachineClass *pcmc = PC_MACHINE_GET_CLASS(pcms);
>> +    X86CPUTopoInfo topo_info;
>>      uint32_t correct_id;
>>      static bool warned;
>>  
>> -    correct_id = x86_apicid_from_cpu_idx(pcms->smp_dies, ms->smp.cores,
>> -                                         ms->smp.threads, cpu_index);
>> +    topo_info.nr_dies = pcms->smp_dies;
>> +    topo_info.nr_cores = ms->smp.cores;
>> +    topo_info.nr_threads = ms->smp.threads;
>> +
>> +    correct_id = x86_apicid_from_cpu_idx(&topo_info, cpu_index);
> 
> If you are using the struct in function calls, please make sure
> all fields are filled correctly, so we won't introduce bugs
> accidentally if we start using the new fields inside the topology
> functions.
> 
> Alternatively, you can leave the struct without the numa_nodes
> and nr_sockets fields by now (because they are unused), and add
> the fields in another patch.

Yes. I have split the patches and added the new fields separately.

> 
> Except for this, the patch looks good.  However, I don't
> understand the use case for the `numa_nodes` field yet.  I
> will probably comment on the `numa_nodes` field once I see the
> patches that use the new field.
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info
  2019-10-11  3:54   ` Eduardo Habkost
@ 2019-12-02 20:25     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:25 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 10:54 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:11:57PM +0000, Moger, Babu wrote:
>> This is an effort to re-arrange a few data structures for better
>> readability. Add X86CPUTopoInfo which will have all the topology
>> information required to build the cpu topology. There are no
>> functional changes.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
> [...]
>> +typedef struct X86CPUTopoInfo {
>> +    unsigned numa_nodes;
>> +    unsigned nr_sockets;
>> +    unsigned nr_dies;
>> +    unsigned nr_cores;
>> +    unsigned nr_threads;
>> +} X86CPUTopoInfo;
> 
> With more complex topologies, the meaning of each of those fields
> may be ambiguous.  e.g.: is nr_cores cores per die, cores per
> ccx, or cores per socket?  Maybe we should use this opportunity
> to use more explicit names like threads_per_core, cores_per_die,
> dies_per_socket.

Yes. I have changed it to:

typedef struct X86CPUTopoInfo {
    unsigned nodes_per_pkg;
    unsigned dies_per_pkg;
    unsigned cores_per_die;
    unsigned threads_per_core;
} X86CPUTopoInfo;
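
Just to show how the renamed fields compose, a small sketch of the kind of
helper this naming enables (the helper name here is only an example, not
necessarily what v3 uses):

static inline unsigned x86_threads_per_die(const X86CPUTopoInfo *topo_info)
{
    return topo_info->threads_per_core * topo_info->cores_per_die;
}

With the explicit _per_ suffixes it is unambiguous which level each count
refers to.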


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation
  2019-10-11  2:32   ` Eduardo Habkost
@ 2019-12-02 20:29     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:29 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 9:32 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:12:11PM +0000, Moger, Babu wrote:
>> Some parameters are unnecessarily passed for offset/width
>> calculation. Remove those parameters from function prototypes.
>> No functional change.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
> 
> Isn't it simpler to use the new X86CPUTopoInfo struct, to make
> hard-to-spot mistakes less likely when calling those functions?
> 

Yes. I now pass X86CPUTopoInfo to all the offset and width calculations.
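
For example, the change is roughly of this shape (only sketching the
signatures here; the exact v3 code may differ):

/* before: individual counts, easy to pass in the wrong order */
static inline unsigned apicid_core_offset(unsigned nr_threads);

/* after: a single struct carries all the topology information */
static inline unsigned apicid_core_offset(X86CPUTopoInfo *topo_info);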


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions
  2019-10-11  3:29   ` Eduardo Habkost
@ 2019-12-02 20:38     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:38 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 10:29 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:12:25PM +0000, Moger, Babu wrote:
>> These functions add support for building new epyc mode topology
>> given smp details like numa nodes, cores, threads and sockets.
>> Subsequent patches will use these functions to build the topology.
>>
>> The topology details are available in Processor Programming Reference (PPR)
>> for AMD Family 17h Model 01h, Revision B1 Processors.
>> It is available at https://www.amd.com/en/support/tech-docs
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  include/hw/i386/topology.h |  174 ++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 174 insertions(+)
>>
>> diff --git a/include/hw/i386/topology.h b/include/hw/i386/topology.h
>> index 5a61d53f05..6fd4184f07 100644
>> --- a/include/hw/i386/topology.h
>> +++ b/include/hw/i386/topology.h
>> @@ -62,6 +62,22 @@ typedef struct X86CPUTopoInfo {
>>      unsigned nr_threads;
>>  } X86CPUTopoInfo;
>>  
>> +/*
>> + * Definitions used for building CPUID Leaf 0x8000001D and 0x8000001E
>> + * Please refer to the AMD64 Architecture Programmer’s Manual Volume 3.
>> + * Define the constants to build the cpu topology. Right now, TOPOEXT
>> + * feature is enabled only on EPYC. So, these constants are based on
>> + * EPYC supported configurations. We may need to handle the cases if
>> + * these values change in future.
>> + */
>> +
>> +/* Maximum core complexes in a node */
>> +#define MAX_CCX                  2
> 
> I suggest a more explicit name like MAX_CCX_IN_NODE.
> 
>> +/* Maximum cores in a core complex */
>> +#define MAX_CORES_IN_CCX         4
>> +/* Maximum cores in a node */
>> +#define MAX_CORES_IN_NODE        8
>> +
> 
> Having limits isn't necessarily bad, but if we look at the code
> that use those constants below:
> 
> [...]
>> +/*
>> + * Figure out the number of nodes required to build this config.
>> + * Max cores in a nodes is 8
>> + */
>> +static inline int nodes_in_pkg(X86CPUTopoInfo *topo_info)
>> +{
>> +    /*
>> +     * Create a config with user given (nr_nodes > 1) numa node config,
>> +     * else go with a standard configuration
>> +     */
>> +    if (topo_info->numa_nodes > 1) {
>> +        return DIV_ROUND_UP(topo_info->numa_nodes, topo_info->nr_sockets);
>> +    } else {
>> +        return DIV_ROUND_UP(topo_info->nr_cores, MAX_CORES_IN_NODE);
>> +    }
>> +}
> 
> Here we are trying to choose a default value for the number of
> nodes, but it's better to let the CPU or machine code choose it.
> Otherwise, this will be very painful to maintain if new CPUs with
> different limits appear.  I would just let the number of nodes
> per package to be configurable in the command-line.
> 
>> +
>> +/*
>> + * Decide the number of cores in a core complex with the given nr_cores using
>> + * following set constants MAX_CCX, MAX_CORES_IN_CCX, MAX_CORES_IN_DIE and
>> + * MAX_NODES_PER_SOCKET. Maintain symmetry as much as possible
>> + * L3 cache is shared across all cores in a core complex. So, this will also
>> + * tell us how many cores are sharing the L3 cache.
>> + */
>> +static inline int cores_in_ccx(X86CPUTopoInfo *topo_info)
>> +{
>> +    int nodes;
>> +
>> +    /* Get the number of nodes required to build this config */
>> +    nodes = nodes_in_pkg(topo_info);
>> +
>> +    /*
>> +     * Divide the cores across all the core complexes
>> +     * Return rounded up value
>> +     */
>> +    return DIV_ROUND_UP(topo_info->nr_cores, nodes * MAX_CCX);
>> +}
> 
> Same here.  This will become painful to maintain if CPUs with a
> different CCX-per-node limit appear.  I suggest just letting the
> number of cores per CCX to be configurable in the command-line.
> The "cores" option is already defined as "cores per die" when
> nr_dies > 1.  We can easily define it to mean "cores per CCX"
> when nr_ccxs > 1.
> 
> If you still want to impose limits to some of those parameters, I
> suggest placing those limits in the CPU model table, not in
> global constants.
> 
> 
>> +
>> +/*
>> + * Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
>> + *
>> + * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
>> + */
>> +static inline apic_id_t x86_apicid_from_topo_ids_epyc(X86CPUTopoInfo *topo_info,
>> +                                                  const X86CPUTopoIDs *topo_ids)
>> +{
>> +    unsigned nr_ccxs = MAX_CCX;
> 
> I suggest adding nr_ccxs to X86CPUTopoInfo, instead of making it
> fixed.
> 
>> +    unsigned nr_nodes = nodes_in_pkg(topo_info);
> 
> Same here: isn't it better to have a nr_nodes_per_pkg field in
> X86CPUTopoInfo, and make it configurable in the command-line?
> 
>> +    unsigned nr_cores = MAX_CORES_IN_CCX;
> 
> nr_cores is already in X86CPUTopoInfo, why aren't you using it?
> 
> Similar questions at x86_topo_ids_from_apicid_epyc() [1].
> 
>> +    unsigned nr_threads = topo_info->nr_threads;
>> +
>> +    return (topo_ids->pkg_id  << apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
>> +                                                        nr_cores, nr_threads)) |
>> +           (topo_ids->node_id  << apicid_node_offset(nr_ccxs, nr_cores,
>> +                                                     nr_threads)) |
>> +           (topo_ids->ccx_id  << apicid_ccx_offset(nr_cores, nr_threads)) |
>> +           (topo_ids->core_id << apicid_core_offset(nr_threads)) |
>> +           topo_ids->smt_id;
>> +}
>> +
>> +static inline void x86_topo_ids_from_idx_epyc(X86CPUTopoInfo *topo_info,
>> +                                              unsigned cpu_index,
>> +                                              X86CPUTopoIDs *topo_ids)
>> +{
>> +    unsigned nr_cores = topo_info->nr_cores;
> 
> Is nr_cores cores-per-ccx?  cores-per-node?  cores-per-pkg?
> 
> 
>> +    unsigned nr_threads = topo_info->nr_threads;
>> +    unsigned core_index = cpu_index / nr_threads % nr_cores;
> 
> It's hard to understand what this variable means.  Is it a unique
> identifier for a core inside the whole system?  Is it unique
> inside a package?  Unique inside a node?  Unique inside a ccx?
> 
> 
>> +    unsigned ccx_cores = cores_in_ccx(topo_info);
>> +
>> +    topo_ids->smt_id = cpu_index % nr_threads;
>> +    topo_ids->core_id = core_index % ccx_cores; /* core id inside the ccx */
>> +    topo_ids->ccx_id = (core_index % (ccx_cores * MAX_CCX)) / ccx_cores;
>> +    topo_ids->node_id = core_index / (ccx_cores * MAX_CCX);
> 
> It looks like we have one extra constraint we must ensure
> elsewhere in the code: we need to make sure the node topology in
> topo->node_id matches the node topology defined using -numa,
> don't we?
> 
> 
>> +    topo_ids->pkg_id = cpu_index / (nr_cores * nr_threads);
> 
> Now, these calculations are over my head.  It's hard to
> understand what "nr_cores" means, or what "ccx_cores * MAX_CCX" is
> calculating.
> 
> I will try to compare with the existing x86_topo_ids_from_idx()
> function, to see if we can make the hierarchy clearer:
> 
> Existing code is:
> 
>     topo->pkg_id = cpu_index / (nr_dies * nr_cores * nr_threads);
>     topo->die_id = cpu_index / (nr_cores * nr_threads) % nr_dies;
>     topo->core_id = cpu_index / nr_threads % nr_cores;
>     topo->smt_id = cpu_index % nr_threads;
> 
> The code is hierarchical, but not 100% clear.  Let's try to make
> it clearer:
> 
>     unsigned threads_per_core = topo->nr_threads;
>     unsigned   cores_per_die  = topo->nr_cores;
>     unsigned    dies_per_pkg  = topo->nr_dies;
> 
>     unsigned threads_per_die  = threads_per_core * cores_per_die;
>     unsigned threads_per_pkg  = threads_per_die  *  dies_per_pkg;
> 
>     unsigned thread_index = cpu_index;
>     unsigned   core_index = cpu_index / threads_per_core;
>     unsigned    die_index = cpu_index / threads_per_die;
>     unsigned    pkg_index = cpu_index / threads_per_pkg;
> 
>     topo-> smt_id = thread_index % threads_per_core;
>     topo->core_id =   core_index %   cores_per_die;
>     topo-> die_id =    die_index %    dies_per_pkg;
>     topo-> pkg_id =    pkg_index;
> 
> The logic above should be equivalent to the existing
> x86_topo_ids_from_idx() function.
> 
> Can we make it better for the smt/core/ccx/node/pkg case too?
> Let's try:
> 
>     unsigned threads_per_core = topo->nr_threads;
>     unsigned   cores_per_ccx  = topo->nr_cores;
>     unsigned    ccxs_per_node = topo->nr_ccxs;
>     unsigned   nodes_per_pkg  = topo->nodes_per_pkg;
> 
>     unsigned threads_per_ccx  = threads_per_core * cores_per_ccx;
>     unsigned threads_per_node = threads_per_ccx  *  ccxs_per_node;
>     unsigned threads_per_pkg  = threads_per_node * nodes_per_pkg;
> 
>     unsigned thread_index = cpu_index;
>     unsigned   core_index = cpu_index / threads_per_core;
>     unsigned    ccx_index = cpu_index / threads_per_ccx;
>     unsigned   node_index = cpu_index / threads_per_node;
>     unsigned    pkg_index = cpu_index / threads_per_pkg;
> 
>     topo-> smt_id = thread_index % threads_per_core;
>     topo->core_id =   core_index %   cores_per_ccx;
>     topo-> ccx_id =    ccx_index %    ccxs_per_node;
>     topo->node_id =   node_index %   nodes_per_pkg;
>     topo-> pkg_id =    pkg_index;
> 
> The logic above probably isn't equivalent to the code you wrote.
> The point here is that each variable is very clearly defined with
> a "per_*" suffix to avoid confusion, and all parameters are
> coming from X86CPUTopoInfo.
> 
>> +}
>> +
>> +/*
>> + * Calculate thread/core/package IDs for a specific topology,
>> + * based on APIC ID
>> + */
>> +static inline void x86_topo_ids_from_apicid_epyc(apic_id_t apicid,
>> +                                            X86CPUTopoInfo *topo_info,
>> +                                            X86CPUTopoIDs *topo_ids)
>> +{
>> +    unsigned nr_nodes = nodes_in_pkg(topo_info);
>> +    unsigned nr_cores = MAX_CORES_IN_CCX;
>> +    unsigned nr_threads = topo_info->nr_threads;
>> +    unsigned nr_ccxs = MAX_CCX;
> 
> Same questions as in x86_apicid_from_topo_ids_epyc() [1]:
> shouldn't nr_cores, nr_ccxs and nr_nodes come from
> X86CPUTopoInfo?
> 
>> +
>> +    topo_ids->smt_id = apicid &
>> +                   ~(0xFFFFFFFFUL << apicid_smt_width(nr_threads));
>> +
>> +    topo_ids->core_id = (apicid >> apicid_core_offset(nr_threads)) &
>> +                   ~(0xFFFFFFFFUL << apicid_core_width(nr_cores));
>> +
>> +    topo_ids->ccx_id = (apicid >> apicid_ccx_offset(nr_cores, nr_threads)) &
>> +                   ~(0xFFFFFFFFUL << apicid_ccx_width(nr_ccxs));
>> +
>> +    topo_ids->node_id = (apicid >> apicid_node_offset(nr_ccxs, nr_cores,
>> +                                                      nr_threads)) &
>> +                   ~(0xFFFFFFFFUL << apicid_node_width(nr_nodes));
>> +
>> +    topo_ids->pkg_id = apicid >> apicid_pkg_offset_epyc(nr_nodes, nr_ccxs,
>> +                                                        nr_cores, nr_threads);
>> +}
>> +
>> +/*
>> + * Make APIC ID for the CPU 'cpu_index'
>> + *
>> + * 'cpu_index' is a sequential, contiguous ID for the CPU.
>> + */
>> +static inline apic_id_t x86_apicid_from_cpu_idx_epyc(X86CPUTopoInfo *topo_info,
>> +                                                     unsigned cpu_index)
>> +{
>> +    X86CPUTopoIDs topo_ids;
>> +    x86_topo_ids_from_idx_epyc(topo_info, cpu_index, &topo_ids);
>> +    return x86_apicid_from_topo_ids_epyc(topo_info, &topo_ids);
>> +}
> 
> If we make nodes_per_pkg==1 by default, treat nr_ccxs and nr_dies
> as synonyms, and make all parameters come from X86CPUTopoInfo,
> can't we reuse exactly the same topology code for both EPYC and
> non-EPYC?  It would be better than having two separate sets of
> functions.
> 
> I have one additional request: please add unit tests for the
> functions above to test-x86-cpuid.c.  There may be opportunities
> for refactoring this code later, and unit tests will be important
> to make sure we don't break anything.
> 
>> +
>>  /* Make APIC ID for the CPU based on Pkg_ID, Core_ID, SMT_ID
>>   *
>>   * The caller must make sure core_id < nr_cores and smt_id < nr_threads.
>>
> 

I have removed all the hardcoded values. All we need to know is the node
id when numa is configured; all the other fields work fine as they are now.
Please look at v3 for more information.
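
FWIW, a quick sanity check of the per_* hierarchy you sketched above, with
made-up numbers (2 threads per core, 4 cores per ccx, 2 ccxs per node,
2 nodes per package) and cpu_index = 13:

threads_per_ccx  = 2 * 4  = 8
threads_per_node = 8 * 2  = 16
threads_per_pkg  = 16 * 2 = 32

smt_id  = 13 % 2        = 1
core_id = (13 / 2) % 4  = 2
ccx_id  = (13 / 8) % 2  = 1
node_id = (13 / 16) % 2 = 0
pkg_id  = 13 / 32       = 0

which is what I would expect for that configuration.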


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc
  2019-10-11  4:10   ` Eduardo Habkost
@ 2019-12-02 20:44     ` Babu Moger
  0 siblings, 0 replies; 43+ messages in thread
From: Babu Moger @ 2019-12-02 20:44 UTC (permalink / raw)
  To: Eduardo Habkost; +Cc: mst, armbru, qemu-devel, imammedo, pbonzini, rth



On 10/10/19 11:10 PM, Eduardo Habkost wrote:
> On Fri, Sep 06, 2019 at 07:13:29PM +0000, Moger, Babu wrote:
>> Current topology id match will not work for epyc mode when setting
>> the node id. In epyc mode, ids like smt_id, thread_id, core_id,
>> ccx_id, socket_id can be the same for more than one CPU across
>> two numa nodes.
>>
>> For example, we can have two CPUs with the following ids on two different nodes.
>> 1. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=0
>> 2. smt_id=0, thread_id=0, core_id=0, ccx_id=0, socket_id=0, node_id=1
> 
> I don't understand how that could happen.  If all IDs are the
> same, how is it possible that those two cases should match two
> different CPUs?
> 
> cpu_index_to_instance_props() must always return an unique
> identifier for a single CPU.

I have simplified the apic id decoding in the v3 version. We don't need
these changes anymore.

> 
>>
>> The function machine_set_cpu_numa_node will fail to find a match to assign
>> the node. Added new function machine_set_cpu_numa_node_epyc to set the node_id
>> directly in epyc mode.
>>
>> Signed-off-by: Babu Moger <babu.moger@amd.com>
>> ---
>>  hw/core/machine.c   |   24 ++++++++++++++++++++++++
>>  hw/core/numa.c      |    6 +++++-
>>  include/hw/boards.h |    4 ++++
>>  3 files changed, 33 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/core/machine.c b/hw/core/machine.c
>> index 9a8586cf30..6bceefc6f3 100644
>> --- a/hw/core/machine.c
>> +++ b/hw/core/machine.c
>> @@ -741,6 +741,30 @@ void machine_set_cpu_numa_node(MachineState *machine,
>>      }
>>  }
>>  
>> +void machine_set_cpu_numa_node_epyc(MachineState *machine,
>> +                                    const CpuInstanceProperties *props,
>> +                                    unsigned index,
>> +                                    Error **errp)
>> +{
>> +    MachineClass *mc = MACHINE_GET_CLASS(machine);
>> +    CPUArchId *slot;
>> +
>> +    if (!mc->possible_cpu_arch_ids) {
>> +        error_setg(errp, "mapping of CPUs to NUMA node is not supported");
>> +        return;
>> +    }
>> +
>> +    /* disabling node mapping is not supported, forbid it */
>> +    assert(props->has_node_id);
>> +
>> +    /* force board to initialize possible_cpus if it hasn't been done yet */
>> +    mc->possible_cpu_arch_ids(machine);
>> +
>> +    slot = &machine->possible_cpus->cpus[index];
>> +    slot->props.node_id = props->node_id;
>> +    slot->props.has_node_id = props->has_node_id;
>> +}
>> +
>>  static void smp_parse(MachineState *ms, QemuOpts *opts)
>>  {
>>      if (opts) {
>> diff --git a/hw/core/numa.c b/hw/core/numa.c
>> index 27fa6b5e1d..a9e835aea6 100644
>> --- a/hw/core/numa.c
>> +++ b/hw/core/numa.c
>> @@ -247,7 +247,11 @@ void set_numa_node_options(MachineState *ms, NumaOptions *object, Error **errp)
>>           props = mc->cpu_index_to_instance_props(ms, cpus->value);
>>           props.node_id = nodenr;
>>           props.has_node_id = true;
>> -         machine_set_cpu_numa_node(ms, &props, &err);
>> +         if (ms->epyc) {
>> +             machine_set_cpu_numa_node_epyc(ms, &props, cpus->value, &err);
>> +         } else {
>> +             machine_set_cpu_numa_node(ms, &props, &err);
>> +         }
>>           if (err) {
>>              error_propagate(errp, err);
>>              return;
>> diff --git a/include/hw/boards.h b/include/hw/boards.h
>> index 0001d42e50..ec1b1c5a85 100644
>> --- a/include/hw/boards.h
>> +++ b/include/hw/boards.h
>> @@ -74,6 +74,10 @@ HotpluggableCPUList *machine_query_hotpluggable_cpus(MachineState *machine);
>>  void machine_set_cpu_numa_node(MachineState *machine,
>>                                 const CpuInstanceProperties *props,
>>                                 Error **errp);
>> +void machine_set_cpu_numa_node_epyc(MachineState *machine,
>> +                                    const CpuInstanceProperties *props,
>> +                                    unsigned index,
>> +                                    Error **errp);
>>  
>>  void machine_class_allow_dynamic_sysbus_dev(MachineClass *mc, const char *type);
>>  
>>
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2019-12-02 20:45 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-09-06 19:11 [Qemu-devel] [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu
2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 01/16] numa: Split the numa functionality Moger, Babu
2019-10-10  3:25   ` Eduardo Habkost
2019-12-02 20:18     ` Babu Moger
2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 02/16] hw/i386: Rename X86CPUTopoInfo structure to X86CPUTopoIDs Moger, Babu
2019-10-10  3:26   ` Eduardo Habkost
2019-09-06 19:11 ` [Qemu-devel] [RFC 2 PATCH 03/16] hw/i386: Introduce X86CPUTopoInfo to contain topology info Moger, Babu
2019-10-11  2:29   ` Eduardo Habkost
2019-12-02 20:23     ` Babu Moger
2019-10-11  3:54   ` Eduardo Habkost
2019-12-02 20:25     ` Babu Moger
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 04/16] machine: Add SMP Sockets in CpuTopology Moger, Babu
2019-10-11  2:31   ` Eduardo Habkost
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 05/16] hw/i386: Simplify topology Offset/width Calculation Moger, Babu
2019-10-11  2:32   ` Eduardo Habkost
2019-12-02 20:29     ` Babu Moger
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 06/16] hw/core: Add core complex id in X86CPU topology Moger, Babu
2019-09-06 19:20   ` Eric Blake
2019-09-06 19:34     ` Moger, Babu
2019-09-22 12:48   ` Michael S. Tsirkin
2019-09-23 14:38     ` Moger, Babu
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 07/16] hw/386: Add new epyc mode topology decoding functions Moger, Babu
2019-10-11  3:29   ` Eduardo Habkost
2019-12-02 20:38     ` Babu Moger
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 08/16] i386: Cleanup and use the new epyc mode topology functions Moger, Babu
2019-10-11  3:51   ` Eduardo Habkost
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 09/16] hw/i386: Introduce initialize_topo_info function Moger, Babu
2019-10-11  3:54   ` Eduardo Habkost
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 10/16] hw/i386: Introduce apicid_from_cpu_idx in PCMachineState Moger, Babu
2019-09-06 19:12 ` [Qemu-devel] [RFC 2 PATCH 11/16] Introduce-topo_ids_from_apicid-handler Moger, Babu
2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 12/16] hw/i386: Introduce apic_id_from_topo_ids handler in PCMachineState Moger, Babu
2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 13/16] machine: Add new epyc property " Moger, Babu
2019-10-11  3:59   ` Eduardo Habkost
2019-10-11 16:23     ` Moger, Babu
2019-10-11 16:59     ` Moger, Babu
2019-10-11 19:03       ` Eduardo Habkost
2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 14/16] hw/i386: Introduce epyc mode function handlers Moger, Babu
2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 15/16] i386: Fix pkg_id offset for epyc mode Moger, Babu
2019-10-11  4:03   ` Eduardo Habkost
2019-09-06 19:13 ` [Qemu-devel] [RFC 2 PATCH 16/16] hw/core: Fix up the machine_set_cpu_numa_node for epyc Moger, Babu
2019-10-11  4:10   ` Eduardo Habkost
2019-12-02 20:44     ` Babu Moger
2019-09-20 22:44 ` [RFC 2 PATCH 00/16] APIC ID fixes for AMD EPYC CPU models Moger, Babu

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).