* [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
@ 2015-12-10  6:15 Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores Bharata B Rao
                   ` (11 more replies)
  0 siblings, 12 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Hi,

This is an attempt to define a generic CPU device that serves as a
container for the underlying arch-specific CPU devices. The motivation
is to have an arch-neutral way to specify CPUs, mainly during hotplug.

Instead of each arch having its own semantics for specifying the
CPU, like

-device POWER8-powerpc64-cpu (pseries)
-device qemu64-x86_64-cpu (pc)
-device s390-cpu (s390)

this patchset introduces a new device named cpu-core that can be
used for all target archs as

-device cpu-core,socket="sid"

This adds a CPU core, with all its associated threads, into the specified
socket with id "sid". The number of target-architecture-specific CPU threads
created during this operation is based on the CPU topology specified with
the -smp sockets=S,cores=C,threads=T option. Likewise, the number of cores
that can be accommodated in one socket is dictated by the cores= parameter
of the same -smp option.

CPU sockets are represented by QOM objects, and the number of sockets
required to accommodate max_cpus is created at boot time. As cpu-core
devices are created, they are linked to the socket object specified by
the socket="sid" device property.

Thus the model consists of backend socket objects, each of which can be
considered a container of one or more cpu-core devices. Each cpu-core
object is linked to the appropriate backend socket object, and each CPU
thread device appears as a child object of its cpu-core device.

All the required socket objects are created upfront and can't be deleted.
Although socket objects can currently be created with the object_add
monitor command, I am planning to prevent that, so that a guest boots
with the required number of sockets and only CPU cores can be hotplugged
into them.

CPU hotplug granularity
-----------------------
CPU hotplug will now be done at cpu-core device granularity.

This patchset includes a patch that prevents topologies resulting in
partially filled cores. Hence with this patchset we will always have
fully filled cpu-core devices, both at boot time and during hotplug.

For archs like PowerPC, where there is no requirement to fully mirror
the physical system, hotplugging CPUs at core granularity is common.
While core-level hotplug fits naturally for such archs, for the others
that want socket-level hotplug, could higher-level tools like libvirt
perform multiple core hotplugs in response to one socket hotplug
request?

Are there archs that would need thread-level CPU addition?

Boot time CPUs as cpu-core devices
----------------------------------
In this patchset, I am converting the boot time CPU initialization
(from the -smp option) to initialize the required number of cpu-core
devices and link them with the appropriate socket objects.

Initially I thought we should be able to completely replace -smp with
-device cpu-core, but then I realized that at least the x86 and pseries
machine init code depends on the first CPU being available in order to
work correctly.

Currently I have converted boot CPUs to cpu-core devices only for the
PowerPC sPAPR and i386 PC targets. I am not really sure about the i386
changes; the intention in this iteration was to check whether it is
indeed possible to fit i386 into the cpu-core model. That said, I am
able to boot an x86 guest with this patchset.

NUMA
----
TODO: I haven't explicitly done anything for NUMA yet in this patchset.
I am considering adding a node=N option to the cpu-core device to
specify the NUMA node to which the CPU core belongs:

-device cpu-core,socket="sid",node=N

QOM composition tree
---------------------
The QOM composition tree for x86, where I don't have CPU hotplug enabled
but just initialize the boot CPUs as cpu-core devices, looks like this:

-smp 4,sockets=4,cores=2,threads=2,maxcpus=16

/machine (pc-i440fx-2.5-machine)
  /unattached (container)
    /device[0] (cpu-core)
      /thread[0] (qemu64-x86_64-cpu)
      /thread[1] (qemu64-x86_64-cpu)
    /device[4] (cpu-core)
      /thread[0] (qemu64-x86_64-cpu)
      /thread[1] (qemu64-x86_64-cpu)

For PowerPC where I have CPU hotplug enabled:

-smp 4,sockets=4,cores=2,threads=2,maxcpus=16 -device cpu-core,socket=cpu-socket1,id=core3

/machine (pseries-2.5-machine)
  /unattached (container)
    /device[1] (cpu-core)
      /thread[0] (host-powerpc64-cpu)
      /thread[1] (host-powerpc64-cpu)
    /device[2] (cpu-core)
      /thread[0] (host-powerpc64-cpu)
      /thread[1] (host-powerpc64-cpu)
  /peripheral (container)
    /core3 (cpu-core)
      /thread[0] (host-powerpc64-cpu)
      /thread[1] (host-powerpc64-cpu)

As can be seen, the boot CPUs and the hotplugged CPU come under separate
parents. I guess I should work towards getting both boot time and
hotplugged CPUs under the same parent?

Socket ID generation
---------------------
In the current approach the socket ID generation is somewhat implicit.
All socket objects are created with a fixed ID format: cpu-socket0,
cpu-socket1, etc. The machine init code of each arch is expected to use
the same IDs when creating cpu-core devices, to link each core to the
right object. The user also needs to know these IDs at device_add time.
Maybe I could add an "info cpu-sockets" command that reports all the
existing sockets and their core-occupancy status.

Finally, I understand that this is a simplistic model and it probably
won't support all the notions around CPU topology and hotplug that we
would like to support for all archs. The intention of this RFC is to
start somewhere and seek input from the community.

Bharata B Rao (9):
  vl: Don't allow CPU topologies with partially filled cores
  cpu: Store CPU typename in MachineState
  cpu: Don't realize CPU from cpu_generic_init()
  cpu: CPU socket backend
  vl: Create CPU socket backend objects
  cpu: Introduce CPU core device
  spapr: Convert boot CPUs into CPU core device initialization
  target-i386: Set apic_id during CPU initfn
  pc: Convert boot CPUs into CPU core device initialization

 hw/cpu/Makefile.objs        |  1 +
 hw/cpu/core.c               | 98 +++++++++++++++++++++++++++++++++++++++++++++
 hw/cpu/socket.c             | 48 ++++++++++++++++++++++
 hw/i386/pc.c                | 64 +++++++++--------------------
 hw/ppc/spapr.c              | 32 ++++++++++-----
 include/hw/boards.h         |  1 +
 include/hw/cpu/core.h       | 28 +++++++++++++
 include/hw/cpu/socket.h     | 26 ++++++++++++
 qom/cpu.c                   |  6 ---
 target-arm/helper.c         | 16 +++++++-
 target-cris/cpu.c           | 16 +++++++-
 target-i386/cpu.c           | 37 ++++++++++++++++-
 target-i386/cpu.h           |  1 +
 target-lm32/helper.c        | 16 +++++++-
 target-moxie/cpu.c          | 16 +++++++-
 target-openrisc/cpu.c       | 16 +++++++-
 target-ppc/translate_init.c | 16 +++++++-
 target-sh4/cpu.c            | 16 +++++++-
 target-tricore/helper.c     | 16 +++++++-
 target-unicore32/helper.c   | 16 +++++++-
 vl.c                        | 26 ++++++++++++
 21 files changed, 439 insertions(+), 73 deletions(-)
 create mode 100644 hw/cpu/core.c
 create mode 100644 hw/cpu/socket.c
 create mode 100644 include/hw/cpu/core.h
 create mode 100644 include/hw/cpu/socket.h

-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10 10:25   ` Daniel P. Berrange
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState Bharata B Rao
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Prevent guests from booting with CPU topologies that have partially
filled CPU cores, or that can result in partially filled CPU cores
after CPU hotplug, e.g.

-smp 15,sockets=1,cores=4,threads=4,maxcpus=16 or
-smp 15,sockets=1,cores=4,threads=4,maxcpus=17

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 vl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/vl.c b/vl.c
index 525929b..e656f53 100644
--- a/vl.c
+++ b/vl.c
@@ -1252,6 +1252,19 @@ static void smp_parse(QemuOpts *opts)
         smp_cores = cores > 0 ? cores : 1;
         smp_threads = threads > 0 ? threads : 1;
 
+        if (smp_cpus % smp_threads) {
+            error_report("cpu topology: "
+                         "smp_cpus (%u) should be multiple of threads (%u)",
+                         smp_cpus, smp_threads);
+            exit(1);
+        }
+
+        if (max_cpus % smp_threads) {
+            error_report("cpu topology: "
+                         "maxcpus (%u) should be multiple of threads (%u)",
+                         max_cpus, smp_threads);
+            exit(1);
+        }
     }
 
     if (max_cpus == 0) {
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-14 17:29   ` Eduardo Habkost
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 3/9] cpu: Don't realize CPU from cpu_generic_init() Bharata B Rao
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Storing the CPU typename in MachineState lets us create CPU threads
for all architectures in a uniform manner from arch-neutral code.

TODO: Touching only the i386 and spapr targets for now.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/i386/pc.c        | 1 +
 hw/ppc/spapr.c      | 2 ++
 include/hw/boards.h | 1 +
 3 files changed, 4 insertions(+)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 5e20e07..ffcd645 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1133,6 +1133,7 @@ void pc_cpus_init(PCMachineState *pcms)
         machine->cpu_model = "qemu32";
 #endif
     }
+    machine->cpu_type = TYPE_X86_CPU;
 
     apic_id_limit = pc_apic_id_limit(max_cpus);
     if (apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 030ee35..db441f2 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -1797,6 +1797,8 @@ static void ppc_spapr_init(MachineState *machine)
     if (machine->cpu_model == NULL) {
         machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
     }
+    machine->cpu_type = TYPE_POWERPC_CPU;
+
     for (i = 0; i < smp_cpus; i++) {
         cpu = cpu_ppc_init(machine->cpu_model);
         if (cpu == NULL) {
diff --git a/include/hw/boards.h b/include/hw/boards.h
index 24eb6f0..a1f9512 100644
--- a/include/hw/boards.h
+++ b/include/hw/boards.h
@@ -128,6 +128,7 @@ struct MachineState {
     char *kernel_cmdline;
     char *initrd_filename;
     const char *cpu_model;
+    const char *cpu_type;
     AccelState *accelerator;
 };
 
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 3/9] cpu: Don't realize CPU from cpu_generic_init()
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 4/9] cpu: CPU socket backend Bharata B Rao
                   ` (8 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Don't do CPU realization from cpu_generic_init(). With this change,
cpu_generic_init() can be used from instance_init to just create the
CPU threads; they can then be realized separately via the realizefn
call.

Convert the existing callers to do explicit realization.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 qom/cpu.c                   |  6 ------
 target-arm/helper.c         | 16 +++++++++++++++-
 target-cris/cpu.c           | 16 +++++++++++++++-
 target-lm32/helper.c        | 16 +++++++++++++++-
 target-moxie/cpu.c          | 16 +++++++++++++++-
 target-openrisc/cpu.c       | 16 +++++++++++++++-
 target-ppc/translate_init.c | 16 +++++++++++++++-
 target-sh4/cpu.c            | 16 +++++++++++++++-
 target-tricore/helper.c     | 16 +++++++++++++++-
 target-unicore32/helper.c   | 16 +++++++++++++++-
 10 files changed, 135 insertions(+), 15 deletions(-)

diff --git a/qom/cpu.c b/qom/cpu.c
index fb80d13..e7a17c1 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -63,13 +63,7 @@ CPUState *cpu_generic_init(const char *typename, const char *cpu_model)
     featurestr = strtok(NULL, ",");
     cc->parse_features(cpu, featurestr, &err);
     g_free(str);
-    if (err != NULL) {
-        goto out;
-    }
-
-    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
 
-out:
     if (err != NULL) {
         error_report_err(err);
         object_unref(OBJECT(cpu));
diff --git a/target-arm/helper.c b/target-arm/helper.c
index 4ecae61..0d8c94e 100644
--- a/target-arm/helper.c
+++ b/target-arm/helper.c
@@ -4546,7 +4546,21 @@ void register_cp_regs_for_features(ARMCPU *cpu)
 
 ARMCPU *cpu_arm_init(const char *cpu_model)
 {
-    return ARM_CPU(cpu_generic_init(TYPE_ARM_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_ARM_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return ARM_CPU(cpu);
+    }
 }
 
 void arm_cpu_register_gdb_regs_for_features(ARMCPU *cpu)
diff --git a/target-cris/cpu.c b/target-cris/cpu.c
index 8eaf5a5..d2c0822 100644
--- a/target-cris/cpu.c
+++ b/target-cris/cpu.c
@@ -89,7 +89,21 @@ static ObjectClass *cris_cpu_class_by_name(const char *cpu_model)
 
 CRISCPU *cpu_cris_init(const char *cpu_model)
 {
-    return CRIS_CPU(cpu_generic_init(TYPE_CRIS_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_CRIS_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return CRIS_CPU(cpu);
+    }
 }
 
 /* Sort alphabetically by VR. */
diff --git a/target-lm32/helper.c b/target-lm32/helper.c
index e26c133..49ac960 100644
--- a/target-lm32/helper.c
+++ b/target-lm32/helper.c
@@ -218,7 +218,21 @@ bool lm32_cpu_exec_interrupt(CPUState *cs, int interrupt_request)
 
 LM32CPU *cpu_lm32_init(const char *cpu_model)
 {
-    return LM32_CPU(cpu_generic_init(TYPE_LM32_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_LM32_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return LM32_CPU(cpu);
+    }
 }
 
 /* Some soc ignores the MSB on the address bus. Thus creating a shadow memory
diff --git a/target-moxie/cpu.c b/target-moxie/cpu.c
index 0c60c65..5989fa6 100644
--- a/target-moxie/cpu.c
+++ b/target-moxie/cpu.c
@@ -152,7 +152,21 @@ static const MoxieCPUInfo moxie_cpus[] = {
 
 MoxieCPU *cpu_moxie_init(const char *cpu_model)
 {
-    return MOXIE_CPU(cpu_generic_init(TYPE_MOXIE_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_MOXIE_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return MOXIE_CPU(cpu);
+    }
 }
 
 static void cpu_register(const MoxieCPUInfo *info)
diff --git a/target-openrisc/cpu.c b/target-openrisc/cpu.c
index cc5e2d1..873eafb 100644
--- a/target-openrisc/cpu.c
+++ b/target-openrisc/cpu.c
@@ -222,7 +222,21 @@ static void openrisc_cpu_register_types(void)
 
 OpenRISCCPU *cpu_openrisc_init(const char *cpu_model)
 {
-    return OPENRISC_CPU(cpu_generic_init(TYPE_OPENRISC_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_OPENRISC_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return OPENRISC_CPU(cpu);
+    }
 }
 
 /* Sort alphabetically by type name, except for "any". */
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index e88dc7f..d5ae53e 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -9373,7 +9373,21 @@ static ObjectClass *ppc_cpu_class_by_name(const char *name)
 
 PowerPCCPU *cpu_ppc_init(const char *cpu_model)
 {
-    return POWERPC_CPU(cpu_generic_init(TYPE_POWERPC_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_POWERPC_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return POWERPC_CPU(cpu);
+    }
 }
 
 /* Sort by PVR, ordering special case "host" last. */
diff --git a/target-sh4/cpu.c b/target-sh4/cpu.c
index d7e2fbd..e5151a0 100644
--- a/target-sh4/cpu.c
+++ b/target-sh4/cpu.c
@@ -155,7 +155,21 @@ static ObjectClass *superh_cpu_class_by_name(const char *cpu_model)
 
 SuperHCPU *cpu_sh4_init(const char *cpu_model)
 {
-    return SUPERH_CPU(cpu_generic_init(TYPE_SUPERH_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_SUPERH_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return SUPERH_CPU(cpu);
+    }
 }
 
 static void sh7750r_cpu_initfn(Object *obj)
diff --git a/target-tricore/helper.c b/target-tricore/helper.c
index 1808b28..7dcd176 100644
--- a/target-tricore/helper.c
+++ b/target-tricore/helper.c
@@ -83,7 +83,21 @@ int cpu_tricore_handle_mmu_fault(CPUState *cs, target_ulong address,
 
 TriCoreCPU *cpu_tricore_init(const char *cpu_model)
 {
-    return TRICORE_CPU(cpu_generic_init(TYPE_TRICORE_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_TRICORE_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return TRICORE_CPU(cpu);
+    }
 }
 
 static void tricore_cpu_list_entry(gpointer data, gpointer user_data)
diff --git a/target-unicore32/helper.c b/target-unicore32/helper.c
index ae63277..e47bb12 100644
--- a/target-unicore32/helper.c
+++ b/target-unicore32/helper.c
@@ -27,7 +27,21 @@
 
 UniCore32CPU *uc32_cpu_init(const char *cpu_model)
 {
-    return UNICORE32_CPU(cpu_generic_init(TYPE_UNICORE32_CPU, cpu_model));
+    CPUState *cpu = cpu_generic_init(TYPE_UNICORE32_CPU, cpu_model);
+    Error *err = NULL;
+
+    if (!cpu) {
+        return NULL;
+    }
+
+    object_property_set_bool(OBJECT(cpu), true, "realized", &err);
+    if (err != NULL) {
+        error_report_err(err);
+        object_unref(OBJECT(cpu));
+        return NULL;
+    } else {
+        return UNICORE32_CPU(cpu);
+    }
 }
 
 uint32_t HELPER(clo)(uint32_t x)
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 4/9] cpu: CPU socket backend
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (2 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 3/9] cpu: Don't realize CPU from cpu_generic_init() Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 5/9] vl: Create CPU socket backend objects Bharata B Rao
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Backend object for CPU socket.

TODO: Prevent creation of socket objects beyond what is needed by
max_cpus, so that all the required socket objects are pre-created
and the user can't ever add a socket slot.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/cpu/Makefile.objs    |  1 +
 hw/cpu/socket.c         | 48 ++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/cpu/socket.h | 26 ++++++++++++++++++++++++++
 3 files changed, 75 insertions(+)
 create mode 100644 hw/cpu/socket.c
 create mode 100644 include/hw/cpu/socket.h

diff --git a/hw/cpu/Makefile.objs b/hw/cpu/Makefile.objs
index 0954a18..93d1226 100644
--- a/hw/cpu/Makefile.objs
+++ b/hw/cpu/Makefile.objs
@@ -2,4 +2,5 @@ obj-$(CONFIG_ARM11MPCORE) += arm11mpcore.o
 obj-$(CONFIG_REALVIEW) += realview_mpcore.o
 obj-$(CONFIG_A9MPCORE) += a9mpcore.o
 obj-$(CONFIG_A15MPCORE) += a15mpcore.o
+obj-y += socket.o
 
diff --git a/hw/cpu/socket.c b/hw/cpu/socket.c
new file mode 100644
index 0000000..e0a2af9
--- /dev/null
+++ b/hw/cpu/socket.c
@@ -0,0 +1,48 @@
+/*
+ * CPU socket backend
+ *
+ * Copyright (C) 2015 Bharata B Rao <bharata@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "hw/cpu/socket.h"
+#include "qom/object_interfaces.h"
+
+static bool cpu_socket_can_be_deleted(UserCreatable *uc, Error **errp)
+{
+    return false;
+}
+
+static void cpu_socket_class_init(ObjectClass *oc, void *data)
+{
+    UserCreatableClass *ucc = USER_CREATABLE_CLASS(oc);
+
+    ucc->can_be_deleted = cpu_socket_can_be_deleted;
+}
+
+static void cpu_socket_instance_init(Object *obj)
+{
+    CPUSocket *socket = CPU_SOCKET(obj);
+
+    socket->nr_cores = 0;
+}
+
+static const TypeInfo cpu_socket_info = {
+    .name = TYPE_CPU_SOCKET,
+    .parent = TYPE_OBJECT,
+    .instance_init = cpu_socket_instance_init,
+    .instance_size = sizeof(CPUSocket),
+    .class_init = cpu_socket_class_init,
+    .interfaces = (InterfaceInfo[]) {
+        { TYPE_USER_CREATABLE },
+        { }
+    }
+};
+
+static void cpu_socket_register_types(void)
+{
+    type_register_static(&cpu_socket_info);
+}
+
+type_init(cpu_socket_register_types)
diff --git a/include/hw/cpu/socket.h b/include/hw/cpu/socket.h
new file mode 100644
index 0000000..ff29367
--- /dev/null
+++ b/include/hw/cpu/socket.h
@@ -0,0 +1,26 @@
+/*
+ * CPU socket backend
+ *
+ * Copyright (C) 2015 Bharata B Rao <bharata@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef HW_CPU_SOCKET_H
+#define HW_CPU_SOCKET_H
+
+#include "hw/qdev.h"
+
+#define TYPE_CPU_SOCKET "cpu-socket"
+#define CPU_SOCKET(obj) \
+    OBJECT_CHECK(CPUSocket, (obj), TYPE_CPU_SOCKET)
+
+typedef struct CPUSocket {
+    /* private */
+    Object parent;
+
+    /* public */
+    int nr_cores;
+} CPUSocket;
+
+#endif
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 5/9] vl: Create CPU socket backend objects
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (3 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 4/9] cpu: CPU socket backend Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 6/9] cpu: Introduce CPU core device Bharata B Rao
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Create as many CPU socket objects as necessary to accommodate
max_cpus.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 vl.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/vl.c b/vl.c
index e656f53..83d08c6 100644
--- a/vl.c
+++ b/vl.c
@@ -124,6 +124,7 @@ int main(int argc, char **argv)
 #include "crypto/init.h"
 #include "sysemu/replay.h"
 #include "qapi/qmp/qerror.h"
+#include "hw/cpu/socket.h"
 
 #define MAX_VIRTIO_CONSOLES 1
 #define MAX_SCLP_CONSOLES 1
@@ -3014,6 +3015,7 @@ int main(int argc, char **argv, char **envp)
     FILE *vmstate_dump_file = NULL;
     Error *main_loop_err = NULL;
     Error *err = NULL;
+    int sockets;
 
     qemu_init_cpu_loop();
     qemu_mutex_lock_iothread();
@@ -4154,6 +4156,17 @@ int main(int argc, char **argv, char **envp)
     }
 
     /*
+     * Create CPU socket objects which house CPU cores.
+     */
+    sockets = DIV_ROUND_UP(max_cpus, smp_cores * smp_threads);
+    for (i = 0; i < sockets; i++) {
+        char id[32];
+
+        snprintf(id, 32, "" TYPE_CPU_SOCKET "%d", i);
+        object_add(TYPE_CPU_SOCKET, id, NULL, NULL, &error_abort);
+    }
+
+    /*
      * Get the default machine options from the machine if it is not already
      * specified either by the configuration file or by the command line.
      */
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v0 6/9] cpu: Introduce CPU core device
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (4 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 5/9] vl: Create CPU socket backend objects Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 7/9] spapr: Convert boot CPUs into CPU core device initialization Bharata B Rao
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

The CPU core device is a container of CPU thread devices. The core
device links to a backend socket object; all the cores within a socket
defined in the topology specification link to the same socket object.
CPU hotplug is performed at the granularity of the CPU core device.
When hotplugged, the CPU core creates its CPU thread devices.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/cpu/Makefile.objs  |  2 +-
 hw/cpu/core.c         | 98 +++++++++++++++++++++++++++++++++++++++++++++++++++
 include/hw/cpu/core.h | 28 +++++++++++++++
 3 files changed, 127 insertions(+), 1 deletion(-)
 create mode 100644 hw/cpu/core.c
 create mode 100644 include/hw/cpu/core.h

diff --git a/hw/cpu/Makefile.objs b/hw/cpu/Makefile.objs
index 93d1226..de5c313 100644
--- a/hw/cpu/Makefile.objs
+++ b/hw/cpu/Makefile.objs
@@ -2,5 +2,5 @@ obj-$(CONFIG_ARM11MPCORE) += arm11mpcore.o
 obj-$(CONFIG_REALVIEW) += realview_mpcore.o
 obj-$(CONFIG_A9MPCORE) += a9mpcore.o
 obj-$(CONFIG_A15MPCORE) += a15mpcore.o
-obj-y += socket.o
+obj-y += socket.o core.o
 
diff --git a/hw/cpu/core.c b/hw/cpu/core.c
new file mode 100644
index 0000000..d14bd77
--- /dev/null
+++ b/hw/cpu/core.c
@@ -0,0 +1,98 @@
+/*
+ * CPU core device, acts as container of CPU thread devices.
+ *
+ * Copyright (C) 2015 Bharata B Rao <bharata@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#include "hw/cpu/core.h"
+#include "hw/boards.h"
+#include "sysemu/cpus.h"
+#include "qemu/error-report.h"
+
+static int cpu_core_realize_child(Object *child, void *opaque)
+{
+    Error **errp = opaque;
+
+    object_property_set_bool(child, true, "realized", errp);
+    if (*errp) {
+        return 1;
+    }
+
+    return 0;
+}
+
+static void cpu_core_realize(DeviceState *dev, Error **errp)
+{
+    CPUCore *core = CPU_CORE(OBJECT(dev));
+
+    if (!core->socket) {
+        error_setg(errp, "'" CPU_CORE_SOCKET_PROP "' property is not set");
+        return;
+    }
+    object_child_foreach(OBJECT(dev), cpu_core_realize_child, errp);
+}
+
+static void cpu_core_class_init(ObjectClass *oc, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(oc);
+
+    dc->realize = cpu_core_realize;
+}
+
+static void cpu_core_check_socket_is_full(Object *obj, const char *name,
+                                          Object *val, Error **errp)
+{
+    CPUSocket *socket = CPU_SOCKET(val);
+
+    if (socket->nr_cores == smp_cores) {
+        char *path = object_get_canonical_path_component(val);
+        error_setg(errp, "Socket already full: %s", path);
+        g_free(path);
+    } else {
+        socket->nr_cores++;
+        qdev_prop_allow_set_link_before_realize(obj, name, val, errp);
+    }
+}
+
+static void cpu_core_instance_init(Object *obj)
+{
+    int i;
+    CPUState *cpu;
+    MachineState *machine = MACHINE(qdev_get_machine());
+    CPUCore *core = CPU_CORE(obj);
+
+    object_property_add_link(obj, CPU_CORE_SOCKET_PROP, TYPE_CPU_SOCKET,
+                             (Object **)&core->socket,
+                             cpu_core_check_socket_is_full,
+                             OBJ_PROP_LINK_UNREF_ON_RELEASE,
+                             &error_abort);
+
+    /* Create as many CPU threads as specified in the topology */
+    for (i = 0; i < smp_threads; i++) {
+        cpu = cpu_generic_init(machine->cpu_type, machine->cpu_model);
+        if (!cpu) {
+            error_report("Unable to find CPU definition: %s",
+                          machine->cpu_model);
+            exit(EXIT_FAILURE);
+        }
+        object_property_add_child(obj, "thread[*]", OBJECT(cpu), &error_abort);
+        object_unref(OBJECT(cpu));
+    }
+}
+
+static const TypeInfo cpu_core_type_info = {
+    .name = TYPE_CPU_CORE,
+    .parent = TYPE_DEVICE,
+    .class_init = cpu_core_class_init,
+    .instance_init = cpu_core_instance_init,
+    .instance_size = sizeof(CPUCore),
+};
+
+static void cpu_core_register_types(void)
+{
+    type_register_static(&cpu_core_type_info);
+}
+
+type_init(cpu_core_register_types)
diff --git a/include/hw/cpu/core.h b/include/hw/cpu/core.h
new file mode 100644
index 0000000..0314098
--- /dev/null
+++ b/include/hw/cpu/core.h
@@ -0,0 +1,28 @@
+/*
+ * CPU core device.
+ *
+ * Copyright (C) 2015 Bharata B Rao <bharata@linux.vnet.ibm.com>
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+#ifndef HW_CPU_CORE_H
+#define HW_CPU_CORE_H
+
+#include "hw/qdev.h"
+#include "hw/cpu/socket.h"
+
+#define TYPE_CPU_CORE "cpu-core"
+#define CPU_CORE(obj) \
+    OBJECT_CHECK(CPUCore, (obj), TYPE_CPU_CORE)
+
+#define CPU_CORE_SOCKET_PROP "socket"
+
+typedef struct CPUCore {
+    /*< private >*/
+    DeviceState parent_obj;
+    /*< public >*/
+    CPUSocket *socket;
+} CPUCore;
+
+#endif
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [Qemu-devel] [RFC PATCH v0 7/9] spapr: Convert boot CPUs into CPU core device initialization
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (5 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 6/9] cpu: Introduce CPU core device Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn Bharata B Rao
                   ` (4 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Initialize boot CPUs specified with the -smp option as CPU core devices.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c | 30 ++++++++++++++++++++----------
 1 file changed, 20 insertions(+), 10 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index db441f2..9499871 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -63,7 +63,7 @@
 
 #include "hw/compat.h"
 #include "qemu-common.h"
-
+#include "hw/cpu/core.h"
 #include <libfdt.h>
 
 /* SLOF memory layout:
@@ -1713,9 +1713,8 @@ static void ppc_spapr_init(MachineState *machine)
     const char *kernel_filename = machine->kernel_filename;
     const char *kernel_cmdline = machine->kernel_cmdline;
     const char *initrd_filename = machine->initrd_filename;
-    PowerPCCPU *cpu;
     PCIHostState *phb;
-    int i;
+    int i, j;
     MemoryRegion *sysmem = get_system_memory();
     MemoryRegion *ram = g_new(MemoryRegion, 1);
     MemoryRegion *rma_region;
@@ -1727,6 +1726,7 @@ static void ppc_spapr_init(MachineState *machine)
     long load_limit, fw_size;
     bool kernel_le = false;
     char *filename;
+    int sockets = DIV_ROUND_UP(smp_cpus, smp_cores * smp_threads);
 
     msi_supported = true;
 
@@ -1799,13 +1799,16 @@ static void ppc_spapr_init(MachineState *machine)
     }
     machine->cpu_type = TYPE_POWERPC_CPU;
 
-    for (i = 0; i < smp_cpus; i++) {
-        cpu = cpu_ppc_init(machine->cpu_model);
-        if (cpu == NULL) {
-            fprintf(stderr, "Unable to find PowerPC CPU definition\n");
-            exit(1);
+    for (i = 0; i < sockets; i++) {
+        char sid[32];
+
+        snprintf(sid, 32, "" TYPE_CPU_SOCKET "%d", i);
+        for (j = 0; j < smp_cores; j++) {
+            Object *core = object_new(TYPE_CPU_CORE);
+
+            object_property_set_str(core, sid, "socket", &error_abort);
+            object_property_set_bool(core, true, "realized", &error_abort);
         }
-        spapr_cpu_init(spapr, cpu);
     }
 
     if (kvm_enabled()) {
@@ -2192,6 +2195,7 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
                                       DeviceState *dev, Error **errp)
 {
     sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
+    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
 
     if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
         int node;
@@ -2228,6 +2232,11 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
         }
 
         spapr_memory_plug(hotplug_dev, dev, node, errp);
+    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
+        CPUState *cs = CPU(dev);
+        PowerPCCPU *cpu = POWERPC_CPU(cs);
+
+        spapr_cpu_init(ms, cpu);
     }
 }
 
@@ -2242,7 +2251,8 @@ static void spapr_machine_device_unplug(HotplugHandler *hotplug_dev,
 static HotplugHandler *spapr_get_hotpug_handler(MachineState *machine,
                                              DeviceState *dev)
 {
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) ||
+        object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
         return HOTPLUG_HANDLER(machine);
     }
     return NULL;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (6 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 7/9] spapr: Convert boot CPUs into CPU core device initialization Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-14 17:44   ` Eduardo Habkost
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 9/9] pc: Convert boot CPUs into CPU core device initialization Bharata B Rao
                   ` (3 subsequent siblings)
  11 siblings, 1 reply; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Move the setting of apic_id back to the instance_init routine (x86_cpu_initfn).
This is needed to initialize X86 CPUs using the generic cpu-package device.

TODO: I am not fully aware of the general direction in which apic_id
changes in X86 have evolved and hence not sure if this is indeed aligned with
the X86 way of doing things. This is just to help the PoC implementation
that I have in this patchset to convert PC CPUs initialization into
cpu-package device based initialization.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/i386/pc.c      | 33 ---------------------------------
 target-i386/cpu.c | 37 +++++++++++++++++++++++++++++++++++--
 target-i386/cpu.h |  1 +
 3 files changed, 36 insertions(+), 35 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index ffcd645..80a4d98 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -670,39 +670,6 @@ bool e820_get_entry(int idx, uint32_t type, uint64_t *address, uint64_t *length)
     return false;
 }
 
-/* Enables contiguous-apic-ID mode, for compatibility */
-static bool compat_apic_id_mode;
-
-void enable_compat_apic_id_mode(void)
-{
-    compat_apic_id_mode = true;
-}
-
-/* Calculates initial APIC ID for a specific CPU index
- *
- * Currently we need to be able to calculate the APIC ID from the CPU index
- * alone (without requiring a CPU object), as the QEMU<->Seabios interfaces have
- * no concept of "CPU index", and the NUMA tables on fw_cfg need the APIC ID of
- * all CPUs up to max_cpus.
- */
-static uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index)
-{
-    uint32_t correct_id;
-    static bool warned;
-
-    correct_id = x86_apicid_from_cpu_idx(smp_cores, smp_threads, cpu_index);
-    if (compat_apic_id_mode) {
-        if (cpu_index != correct_id && !warned && !qtest_enabled()) {
-            error_report("APIC IDs set in compatibility mode, "
-                         "CPU topology won't match the configuration");
-            warned = true;
-        }
-        return cpu_index;
-    } else {
-        return correct_id;
-    }
-}
-
 /* Calculates the limit to CPU APIC ID values
  *
  * This function returns the limit for the APIC ID value, so that all
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index 11e5e39..c97a646 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -25,6 +25,7 @@
 #include "sysemu/kvm.h"
 #include "sysemu/cpus.h"
 #include "kvm_i386.h"
+#include "hw/i386/topology.h"
 
 #include "qemu/error-report.h"
 #include "qemu/option.h"
@@ -3028,6 +3029,39 @@ static void x86_cpu_register_feature_bit_props(X86CPU *cpu,
     g_strfreev(names);
 }
 
+/* Enables contiguous-apic-ID mode, for compatibility */
+static bool compat_apic_id_mode;
+
+void enable_compat_apic_id_mode(void)
+{
+    compat_apic_id_mode = true;
+}
+
+/* Calculates initial APIC ID for a specific CPU index
+ *
+ * Currently we need to be able to calculate the APIC ID from the CPU index
+ * alone (without requiring a CPU object), as the QEMU<->Seabios interfaces have
+ * no concept of "CPU index", and the NUMA tables on fw_cfg need the APIC ID of
+ * all CPUs up to max_cpus.
+ */
+uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index)
+{
+    uint32_t correct_id;
+    static bool warned;
+
+    correct_id = x86_apicid_from_cpu_idx(smp_cores, smp_threads, cpu_index);
+    if (compat_apic_id_mode) {
+        if (cpu_index != correct_id && !warned) {
+            error_report("APIC IDs set in compatibility mode, "
+                         "CPU topology won't match the configuration");
+            warned = true;
+        }
+        return cpu_index;
+    } else {
+        return correct_id;
+    }
+}
+
 static void x86_cpu_initfn(Object *obj)
 {
     CPUState *cs = CPU(obj);
@@ -3071,8 +3105,7 @@ static void x86_cpu_initfn(Object *obj)
     cpu->hyperv_spinlock_attempts = HYPERV_SPINLOCK_NEVER_RETRY;
 
 #ifndef CONFIG_USER_ONLY
-    /* Any code creating new X86CPU objects have to set apic-id explicitly */
-    cpu->apic_id = -1;
+    cpu->apic_id = x86_cpu_apic_id_from_index(cs->cpu_index);
 #endif
 
     for (w = 0; w < FEATURE_WORDS; w++) {
diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index fc4a605..a5368cf 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -1333,6 +1333,7 @@ void x86_cpu_change_kvm_default(const char *prop, const char *value);
 /* Return name of 32-bit register, from a R_* constant */
 const char *get_register_name_32(unsigned int reg);
 
+uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index);
 void enable_compat_apic_id_mode(void);
 
 #define APIC_DEFAULT_ADDRESS 0xfee00000
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* [Qemu-devel] [RFC PATCH v0 9/9] pc: Convert boot CPUs into CPU core device initialization
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (7 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn Bharata B Rao
@ 2015-12-10  6:15 ` Bharata B Rao
  2015-12-10 12:35 ` [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Igor Mammedov
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-10  6:15 UTC (permalink / raw)
  To: qemu-devel
  Cc: peter.maydell, ehabkost, Bharata B Rao, agraf, borntraeger,
	imammedo, pbonzini, afaerber, david

Initialize boot CPUs specified with -smp option as CPU core devices.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/i386/pc.c | 30 +++++++++++++++++-------------
 1 file changed, 17 insertions(+), 13 deletions(-)

diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 80a4d98..661e577 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -65,6 +65,7 @@
 #include "hw/mem/pc-dimm.h"
 #include "qapi/visitor.h"
 #include "qapi-visit.h"
+#include "hw/cpu/core.h"
 
 /* debug PC/ISA interrupts */
 //#define DEBUG_IRQ
@@ -1086,11 +1087,10 @@ void pc_hot_add_cpu(const int64_t id, Error **errp)
 
 void pc_cpus_init(PCMachineState *pcms)
 {
-    int i;
-    X86CPU *cpu = NULL;
+    int i, j;
     MachineState *machine = MACHINE(pcms);
-    Error *error = NULL;
     unsigned long apic_id_limit;
+    int sockets = DIV_ROUND_UP(smp_cpus, smp_cores * smp_threads);
 
     /* init CPUs */
     if (machine->cpu_model == NULL) {
@@ -1109,18 +1109,17 @@ void pc_cpus_init(PCMachineState *pcms)
         exit(1);
     }
 
-    for (i = 0; i < smp_cpus; i++) {
-        cpu = pc_new_cpu(machine->cpu_model, x86_cpu_apic_id_from_index(i),
-                         &error);
-        if (error) {
-            error_report_err(error);
-            exit(1);
+    for (i = 0; i < sockets; i++) {
+        char sid[32];
+
+        snprintf(sid, 32, "" TYPE_CPU_SOCKET "%d", i);
+        for (j = 0; j < smp_cores; j++) {
+            Object *core = object_new(TYPE_CPU_CORE);
+
+            object_property_set_str(core, sid, "socket", &error_abort);
+            object_property_set_bool(core, true, "realized", &error_abort);
         }
-        object_unref(OBJECT(cpu));
     }
-
-    /* tell smbios about cpuid version and features */
-    smbios_set_cpuid(cpu->env.cpuid_version, cpu->env.features[FEAT_1_EDX]);
 }
 
 /* pci-info ROM file. Little endian format */
@@ -1660,6 +1659,11 @@ static void pc_cpu_plug(HotplugHandler *hotplug_dev,
     HotplugHandlerClass *hhc;
     Error *local_err = NULL;
     PCMachineState *pcms = PC_MACHINE(hotplug_dev);
+    CPUState *cs = CPU(dev);
+    X86CPU *cpu = X86_CPU(cs);
+
+    /* tell smbios about cpuid version and features */
+    smbios_set_cpuid(cpu->env.cpuid_version, cpu->env.features[FEAT_1_EDX]);
 
     if (!dev->hotplugged) {
         goto out;
-- 
2.1.0

^ permalink raw reply related	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU topologies with partially filled cores Bharata B Rao
@ 2015-12-10 10:25   ` Daniel P. Berrange
  2015-12-11  3:24     ` Bharata B Rao
  0 siblings, 1 reply; 46+ messages in thread
From: Daniel P. Berrange @ 2015-12-10 10:25 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	pbonzini, imammedo, afaerber, david

On Thu, Dec 10, 2015 at 11:45:36AM +0530, Bharata B Rao wrote:
> Prevent guests from booting with CPU topologies that have partially
> filled CPU cores or can result in partially filled CPU cores after CPU
> hotplug like
> 
> -smp 15,sockets=1,cores=4,threads=4,maxcpus=16 or
> -smp 15,sockets=1,cores=4,threads=4,maxcpus=17 or
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> ---
>  vl.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/vl.c b/vl.c
> index 525929b..e656f53 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -1252,6 +1252,19 @@ static void smp_parse(QemuOpts *opts)
>          smp_cores = cores > 0 ? cores : 1;
>          smp_threads = threads > 0 ? threads : 1;
>  
> +        if (smp_cpus % smp_threads) {
> +            error_report("cpu topology: "
> +                         "smp_cpus (%u) should be multiple of threads (%u)",
> +                         smp_cpus, smp_threads);
> +            exit(1);
> +        }
> +
> +        if (max_cpus % smp_threads) {
> +            error_report("cpu topology: "
> +                         "maxcpus (%u) should be multiple of threads (%u)",
> +                         max_cpus, smp_threads);
> +            exit(1);
> +        }
>      }

Adding this seems like it has a pretty high chance of causing regression,
ie preventing previously working guests from booting with new QEMU. I
know adding the check makes sense from a semantic POV, but are we willing
to risk breaking people with such odd configurations ?

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (8 preceding siblings ...)
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 9/9] pc: Convert boot CPUs into CPU core device initialization Bharata B Rao
@ 2015-12-10 12:35 ` Igor Mammedov
  2015-12-11  3:57   ` Bharata B Rao
  2015-12-16 15:46   ` Andreas Färber
  2015-12-10 20:25 ` Matthew Rosato
  2015-12-16 15:19 ` Andreas Färber
  11 siblings, 2 replies; 46+ messages in thread
From: Igor Mammedov @ 2015-12-10 12:35 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	pbonzini, afaerber, david

On Thu, 10 Dec 2015 11:45:35 +0530
Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:

> Hi,
> 
> This is an attempt to define a generic CPU device that serves as a
> containing device to underlying arch-specific CPU devices. The
> motivation for this is to have an arch-neutral way to specify CPUs
> mainly during hotplug.
> 
> Instead of individual archs having their own semantics to specify the
> CPU like
> 
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
> 
> this patch introduces a new device named cpu-core that could be
> used for all target archs as
> 
> -device cpu-core,socket="sid"
> 
> This adds a CPU core with all its associated threads into the
> specified socket with id "sid". The number of target architecture
> specific CPU threads that get created during this operation is based
> on the CPU topology specified using -smp sockets=S,cores=C,threads=T
> option. Also the number of cores that can be accommodated in the same
> socket is dictated by the cores= parameter in the same -smp option.
> 
> CPU sockets are represented by QOM objects and the number of sockets
> required to fit in max_cpus are created at boottime. As cpu-core
> devices are created, they are linked to socket object specified by
> socket="sid" device property.
> 
> Thus the model consists of backend socket objects which can be
> considered as container of one or more cpu-core devices. Each
> cpu-core object is linked to the appropriate backend socket object.
> Each CPU thread device appears as child object of cpu-core device.
> 
> All the required socket objects are created upfront and they can't be
> deleted. Though currently socket objects can be created using
> object_add monitor command, I am planning to prevent that so that a
> guest boots with the required number of sockets and only CPU cores
> can be hotplugged into them.
> 
> CPU hotplug granularity
> -----------------------
> CPU hotplug will now be done in cpu-core device granularity.
> 
> This patchset includes a patch to prevent topologies that result in
> partially filled cores. Hence with this patchset, we will always
> have fully filled cpu-core devices both for boot time and during
> hotplug.
> 
> For archs like PowerPC, where there is no requirement to be fully
> similar to the physical system, hotplugging CPU at core granularity
> is common. While core level hotplug will fit in naturally for such
> archs, for others which want socket level hotplug, could higher level
> tools like libvirt perform multiple core hotplugs in response to one
> socket hotplug request ?
> 
> Are there archs that would need thread level CPU addition ?
there are,
currently the x86 target allows starting QEMU with 1 thread even if the
topology specifies more threads per core. The same applies to hotplug.

On top of that, I think the ACPI spec also treats CPU devices at the
per-thread level.

> 
> Boot time CPUs as cpu-core devices
> ----------------------------------
> In this patchset, I am converting the boot time CPU initialization
> (from -smp option) to initialize the required number of cpu-core
> devices and linking them with the appropriate socket objects.
> 
> Initially I thought we should be able to completely replace -smp with
> -device cpu-core, but then I realized that at least both x86 and
> pseries guests' machine init code has dependencies on first CPU being
> available for the machine init code to work correctly.
> 
> Currently I have converted boot CPUs to cpu-core devices for only the
> PowerPC sPAPR and i386 PC targets. I am not really sure about the i386
> changes and the intention in this iteration was to check if it is
> indeed possible to fit i386 into cpu-core model. Having said that I
> am able to boot an x86 guest with this patchset.
> 
> NUMA
> ----
> TODO: In this patchset, I haven't explicitly done anything for NUMA
> yet. I am thinking if we could add node=N option to cpu-core device.
> That could specify the NUMA node to which the CPU core belongs to.
> 
> -device cpu-core,socket="sid",node=N
> 
> QOM composition tree
> ---------------------
> QOM composition tree for x86 where I don't have CPU hotplug enabled,
> but just initializing boot CPUs as cpu-core devices appears like this:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16
with this series it would regress the following CLI:
  -smp 1,sockets=4,cores=2,threads=2,maxcpus=16


wrt CLI can't we do something like this?

-device some-cpu-model,socket=x[,core=y[,thread=z]]

for NUMA configs individual sockets IDs could be bound to
nodes via -numa ... option

and allow individual targets to use its own way to build CPUs?

For initial conversion of x86-cpus to device-add we could do pretty
much the same like we do now, where cpu devices will appear under:
/machine (pc-i440fx-2.5-machine)
  /unattached (container)
    /device[x] (qemu64-x86_64-cpu)

since we don't have to maintain/model dummy socket/core objects.

PowerPC could do the similar only at core level since it has
need for modeling core objects.

It doesn't change anything wrt current introspection state, since
cpus could be still found by mgmt tools that parse QOM tree.

We probably should split the 2 conflicting goals we are trying to meet here,

 1. make device-add/del work with cpus /
     drop support for cpu-add in favor of device_add 

 2. how to model QOM tree view for CPUs in arch independent manner
    to make mgmt layer life easier.

and work on them independently instead of arguing for years,
that would allow us to make progress in #1 while still thinking about
how to do #2 the right way if we really need it.

> 
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[0] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
>     /device[4] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
> 
> For PowerPC where I have CPU hotplug enabled:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16 -device
> cpu-core,socket=cpu-socket1,id=core3
> 
> /machine (pseries-2.5-machine)
>   /unattached (container)
>     /device[1] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>     /device[2] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>   /peripheral (container)
>     /core3 (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
> 
> As can be seen, the boot CPU and hotplugged CPU come under separate
> parents. Guess I should work towards getting both boot time and
> hotplugged CPUs under same parent ?
> 
> Socket ID generation
> ---------------------
> In the current approach the socket ID generation is implicit somewhat.
> All the sockets objects are created with pre-fixed format for ids like
> cpu-socket0, cpu-socket1 etc. And machine init code of each arch is
> expected to use the same when creating cpu-core devices to link the
> core to the right object. Even user needs to know these IDs during
> device_add time. May be I could add "info cpu-sockets" which gives
> information about all the existing sockets and their core-occupancy
> status.
> 
> Finally, I understand that this is a simplistic model and it wouldn't
> probably support all the notions around CPU topology and hotplug that
> we would like to support for all archs. The intention of this RFC is
> to start with somewhere and seek inputs from the community.
> 
> Bharata B Rao (9):
>   vl: Don't allow CPU topologies with partially filled cores
>   cpu: Store CPU typename in MachineState
>   cpu: Don't realize CPU from cpu_generic_init()
>   cpu: CPU socket backend
>   vl: Create CPU socket backend objects
>   cpu: Introduce CPU core device
>   spapr: Convert boot CPUs into CPU core device initialization
>   target-i386: Set apic_id during CPU initfn
>   pc: Convert boot CPUs into CPU core device initialization
> 
>  hw/cpu/Makefile.objs        |  1 +
>  hw/cpu/core.c               | 98 +++++++++++++++++++++++++++++++++++++++++++++
>  hw/cpu/socket.c             | 48 ++++++++++++++++++++++
>  hw/i386/pc.c                | 64 +++++++++--------------------
>  hw/ppc/spapr.c              | 32 ++++++++++-----
>  include/hw/boards.h         |  1 +
>  include/hw/cpu/core.h       | 28 +++++++++++++
>  include/hw/cpu/socket.h     | 26 ++++++++++++
>  qom/cpu.c                   |  6 ---
>  target-arm/helper.c         | 16 +++++++-
>  target-cris/cpu.c           | 16 +++++++-
>  target-i386/cpu.c           | 37 ++++++++++++++++-
>  target-i386/cpu.h           |  1 +
>  target-lm32/helper.c        | 16 +++++++-
>  target-moxie/cpu.c          | 16 +++++++-
>  target-openrisc/cpu.c       | 16 +++++++-
>  target-ppc/translate_init.c | 16 +++++++-
>  target-sh4/cpu.c            | 16 +++++++-
>  target-tricore/helper.c     | 16 +++++++-
>  target-unicore32/helper.c   | 16 +++++++-
>  vl.c                        | 26 ++++++++++++
>  21 files changed, 439 insertions(+), 73 deletions(-)
>  create mode 100644 hw/cpu/core.c
>  create mode 100644 hw/cpu/socket.c
>  create mode 100644 include/hw/cpu/core.h
>  create mode 100644 include/hw/cpu/socket.h
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (9 preceding siblings ...)
  2015-12-10 12:35 ` [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Igor Mammedov
@ 2015-12-10 20:25 ` Matthew Rosato
  2015-12-14  6:25   ` Bharata B Rao
  2015-12-16 15:19 ` Andreas Färber
  11 siblings, 1 reply; 46+ messages in thread
From: Matthew Rosato @ 2015-12-10 20:25 UTC (permalink / raw)
  To: Bharata B Rao, qemu-devel
  Cc: peter.maydell, ehabkost, agraf, borntraeger, pbonzini, imammedo,
	afaerber, david

On 12/10/2015 01:15 AM, Bharata B Rao wrote:
> Hi,
> 
> This is an attempt to define a generic CPU device that serves as a
> containing device to underlying arch-specific CPU devices. The motivation
> for this is to have an arch-neutral way to specify CPUs mainly during
> hotplug.
> 
> Instead of individual archs having their own semantics to specify the
> CPU like
> 
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
> 
> this patch introduces a new device named cpu-core that could be
> used for all target archs as
> 
> -device cpu-core,socket="sid"
> 
> This adds a CPU core with all its associated threads into the specified
> socket with id "sid". The number of target architecture specific CPU threads
> that get created during this operation is based on the CPU topology specified
> using -smp sockets=S,cores=C,threads=T option. Also the number of cores that
> can be accommodated in the same socket is dictated by the cores= parameter
> in the same -smp option.
> 
> CPU sockets are represented by QOM objects and the number of sockets required
> to fit in max_cpus are created at boottime. As cpu-core devices are
> created, they are linked to socket object specified by socket="sid" device
> property.
> 
> Thus the model consists of backend socket objects which can be considered
> as container of one or more cpu-core devices. Each cpu-core object is
> linked to the appropriate backend socket object. Each CPU thread device
> appears as child object of cpu-core device.
> 
> All the required socket objects are created upfront and they can't be deleted.
> Though currently socket objects can be created using object_add monitor
> command, I am planning to prevent that so that a guest boots with the
> required number of sockets and only CPU cores can be hotplugged into
> them.
> 
> CPU hotplug granularity
> -----------------------
> CPU hotplug will now be done in cpu-core device granularity.
> 
> This patchset includes a patch to prevent topologies that result in
> partially filled cores. Hence with this patchset, we will always
> have fully filled cpu-core devices both for boot time and during hotplug.
> 
> For archs like PowerPC, where there is no requirement to be fully
> similar to the physical system, hotplugging CPU at core granularity
> is common. While core level hotplug will fit in naturally for such
> archs, for others which want socket level hotplug, could higher level
> tools like libvirt perform multiple core hotplugs in response to one
> socket hotplug request ?
> 
> Are there archs that would need thread level CPU addition ?
> 
> Boot time CPUs as cpu-core devices
> ----------------------------------
> > In this patchset, I am converting the boot time CPU initialization
> (from -smp option) to initialize the required number of cpu-core
> devices and linking them with the appropriate socket objects.
> 
> Initially I thought we should be able to completely replace -smp with
> -device cpu-core, but then I realized that at least both x86 and pseries
> guests' machine init code has dependencies on first CPU being available
> for the machine init code to work correctly.
> 
> > Currently I have converted boot CPUs to cpu-core devices for only the PowerPC
> > sPAPR and i386 PC targets. I am not really sure about the i386 changes and the
> intention in this iteration was to check if it is indeed possible to
> fit i386 into cpu-core model. Having said that I am able to boot an x86
> guest with this patchset.

I attempted a quick conversion of s390 to cpu-core, but it looks like
we'd have an issue preventing s390 from using cpu-core immediately
-- it relies on cpu_generic_init, which s390 specifically avoids today
because we don't have support for cpu_models.  Not sure if other
architectures will have the same issue.

I agree with Igor's sentiment of separating the issue of device_add
hotplug vs generic QOM view -- s390 could support device_add/del for
s390-cpu now, but the addition of cpu-core just adds more requirements
before we can allow for hotplug, without providing any immediate benefit
since s390 doesn't currently surface any topology info to the guest.

Matt

> 
> NUMA
> ----
> TODO: In this patchset, I haven't explicitly done anything for NUMA yet.
> I am thinking if we could add node=N option to cpu-core device.
> That could specify the NUMA node to which the CPU core belongs to.
> 
> -device cpu-core,socket="sid",node=N
> 
> QOM composition tree
> ---------------------
> QOM composition tree for x86 where I don't have CPU hotplug enabled, but
> just initializing boot CPUs as cpu-core devices appears like this:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16
> 
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[0] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
>     /device[4] (cpu-core)
>       /thread[0] (qemu64-x86_64-cpu)
>       /thread[1] (qemu64-x86_64-cpu)
> 
> For PowerPC where I have CPU hotplug enabled:
> 
> -smp 4,sockets=4,cores=2,threads=2,maxcpus=16 -device cpu-core,socket=cpu-socket1,id=core3
> 
> /machine (pseries-2.5-machine)
>   /unattached (container)
>     /device[1] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>     /device[2] (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
>   /peripheral (container)
>     /core3 (cpu-core)
>       /thread[0] (host-powerpc64-cpu)
>       /thread[1] (host-powerpc64-cpu)
> 
> As can be seen, the boot CPU and hotplugged CPU come under separate
> parents. Guess I should work towards getting both boot time and hotplugged
> CPUs under same parent ?
> 
> Socket ID generation
> ---------------------
> In the current approach the socket ID generation is implicit somewhat.
> All the sockets objects are created with pre-fixed format for ids like
> cpu-socket0, cpu-socket1 etc. And machine init code of each arch is expected
> to use the same when creating cpu-core devices to link the core to the
> right object. Even user needs to know these IDs during device_add time.
> May be I could add "info cpu-sockets" which gives information about all
> the existing sockets and their core-occupancy status.
> 
> Finally, I understand that this is a simplistic model and it wouldn't probably
> support all the notions around CPU topology and hotplug that we would
> like to support for all archs. The intention of this RFC is to start
> with somewhere and seek inputs from the community.
> 
> Bharata B Rao (9):
>   vl: Don't allow CPU toplogies with partially filled cores
>   cpu: Store CPU typename in MachineState
>   cpu: Don't realize CPU from cpu_generic_init()
>   cpu: CPU socket backend
>   vl: Create CPU socket backend objects
>   cpu: Introduce CPU core device
>   spapr: Convert boot CPUs into CPU core device initialization
>   target-i386: Set apic_id during CPU initfn
>   pc: Convert boot CPUs into CPU core device initialization
> 
>  hw/cpu/Makefile.objs        |  1 +
>  hw/cpu/core.c               | 98 +++++++++++++++++++++++++++++++++++++++++++++
>  hw/cpu/socket.c             | 48 ++++++++++++++++++++++
>  hw/i386/pc.c                | 64 +++++++++--------------------
>  hw/ppc/spapr.c              | 32 ++++++++++-----
>  include/hw/boards.h         |  1 +
>  include/hw/cpu/core.h       | 28 +++++++++++++
>  include/hw/cpu/socket.h     | 26 ++++++++++++
>  qom/cpu.c                   |  6 ---
>  target-arm/helper.c         | 16 +++++++-
>  target-cris/cpu.c           | 16 +++++++-
>  target-i386/cpu.c           | 37 ++++++++++++++++-
>  target-i386/cpu.h           |  1 +
>  target-lm32/helper.c        | 16 +++++++-
>  target-moxie/cpu.c          | 16 +++++++-
>  target-openrisc/cpu.c       | 16 +++++++-
>  target-ppc/translate_init.c | 16 +++++++-
>  target-sh4/cpu.c            | 16 +++++++-
>  target-tricore/helper.c     | 16 +++++++-
>  target-unicore32/helper.c   | 16 +++++++-
>  vl.c                        | 26 ++++++++++++
>  21 files changed, 439 insertions(+), 73 deletions(-)
>  create mode 100644 hw/cpu/core.c
>  create mode 100644 hw/cpu/socket.c
>  create mode 100644 include/hw/cpu/core.h
>  create mode 100644 include/hw/cpu/socket.h
> 


* Re: [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU toplogies with partially filled cores
  2015-12-10 10:25   ` Daniel P. Berrange
@ 2015-12-11  3:24     ` Bharata B Rao
  2015-12-14 17:37       ` Eduardo Habkost
  0 siblings, 1 reply; 46+ messages in thread
From: Bharata B Rao @ 2015-12-11  3:24 UTC (permalink / raw)
  To: Daniel P. Berrange
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	pbonzini, imammedo, afaerber, david

On Thu, Dec 10, 2015 at 10:25:28AM +0000, Daniel P. Berrange wrote:
> On Thu, Dec 10, 2015 at 11:45:36AM +0530, Bharata B Rao wrote:
> > Prevent guests from booting with CPU topologies that have partially
> > filled CPU cores or can result in partially filled CPU cores after CPU
> > hotplug like
> > 
> > -smp 15,sockets=1,cores=4,threads=4,maxcpus=16 or
> > -smp 15,sockets=1,cores=4,threads=4,maxcpus=17 or
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > ---
> >  vl.c | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> > 
> > diff --git a/vl.c b/vl.c
> > index 525929b..e656f53 100644
> > --- a/vl.c
> > +++ b/vl.c
> > @@ -1252,6 +1252,19 @@ static void smp_parse(QemuOpts *opts)
> >          smp_cores = cores > 0 ? cores : 1;
> >          smp_threads = threads > 0 ? threads : 1;
> >  
> > +        if (smp_cpus % smp_threads) {
> > +            error_report("cpu topology: "
> > +                         "smp_cpus (%u) should be multiple of threads (%u)",
> > +                         smp_cpus, smp_threads);
> > +            exit(1);
> > +        }
> > +
> > +        if (max_cpus % smp_threads) {
> > +            error_report("cpu topology: "
> > +                         "maxcpus (%u) should be multiple of threads (%u)",
> > +                         max_cpus, smp_threads);
> > +            exit(1);
> > +        }
> >      }
> 
> Adding this seems like it has a pretty high chance of causing regression,
> ie preventing previously working guests from booting with new QEMU. I
> know adding the check makes sense from a semantic POV, but are we willing
> to risk breaking people with such odd configurations ?

I wasn't sure how much risk that would carry, and hence an older version
of the PowerPC CPU hotplug patchset did support such topologies:

https://lists.gnu.org/archive/html/qemu-ppc/2015-09/msg00102.html

But the code looked quite ugly supporting such a special case.

There was some discussion about this recently here:

http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html

from which I sensed that it may be OK to disallow such topologies.

Regards,
Bharata.


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10 12:35 ` [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Igor Mammedov
@ 2015-12-11  3:57   ` Bharata B Rao
  2015-12-15  5:27     ` Zhu Guihua
  2015-12-16 15:11     ` Igor Mammedov
  2015-12-16 15:46   ` Andreas Färber
  1 sibling, 2 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-11  3:57 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mjrosato, peter.maydell, zhugh.fnst, ehabkost, agraf, qemu-devel,
	borntraeger, pbonzini, afaerber, david

On Thu, Dec 10, 2015 at 01:35:05PM +0100, Igor Mammedov wrote:
> On Thu, 10 Dec 2015 11:45:35 +0530
> Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> 
> > Hi,
> > 
> > This is an attempt to define a generic CPU device that serves as a
> > containing device to underlying arch-specific CPU devices. The
> > motivation for this is to have an arch-neutral way to specify CPUs
> > mainly during hotplug.
> > 
> > Instead of individual archs having their own semantics to specify the
> > CPU like
> > 
> > -device POWER8-powerpc64-cpu (pseries)
> > -device qemu64-x86_64-cpu (pc)
> > -device s390-cpu (s390)
> > 
> > this patch introduces a new device named cpu-core that could be
> > used for all target archs as
> > 
> > -device cpu-core,socket="sid"
> > 
> > This adds a CPU core with all its associated threads into the
> > specified socket with id "sid". The number of target architecture
> > specific CPU threads that get created during this operation is based
> > on the CPU topology specified using -smp sockets=S,cores=C,threads=T
> > option. Also the number of cores that can be accommodated in the same
> > socket is dictated by the cores= parameter in the same -smp option.
> > 
> > CPU sockets are represented by QOM objects and the number of sockets
> > required to fit in max_cpus are created at boottime. As cpu-core
> > devices are created, they are linked to socket object specified by
> > socket="sid" device property.
> > 
> > Thus the model consists of backend socket objects which can be
> > considered as container of one or more cpu-core devices. Each
> > cpu-core object is linked to the appropriate backend socket object.
> > Each CPU thread device appears as child object of cpu-core device.
> > 
> > All the required socket objects are created upfront and they can't be
> > deleted. Though currently socket objects can be created using
> > object_add monitor command, I am planning to prevent that so that a
> > guest boots with the required number of sockets and only CPU cores
> > can be hotplugged into them.
> > 
> > CPU hotplug granularity
> > -----------------------
> > CPU hotplug will now be done in cpu-core device granularity.
> > 
> > This patchset includes a patch to prevent topologies that result in
> > partially filled cores. Hence with this patchset, we will always
> > have fully filled cpu-core devices both for boot time and during
> > hotplug.
> > 
> > For archs like PowerPC, where there is no requirement to be fully
> > similar to the physical system, hotplugging CPU at core granularity
> > is common. While core level hotplug will fit in naturally for such
> > archs, for others which want socket level hotplug, could higher level
> > tools like libvirt perform multiple core hotplugs in response to one
> > socket hotplug request ?
> > 
> > Are there archs that would need thread level CPU addition ?
> there are,
> currently x86 target allows to start QEMU with 1 thread even if
> topology specifies more threads per core. The same applies to hotplug.
> 
> On top of that I think ACPI spec also treats CPU devices on per threads
> level.
<snip>
> with this series it would regress following CLI
>   -smp 1,sockets=4,cores=2,threads=2,maxcpus=16

Yes, the first patch in this series explicitly prevents such topologies.

Though QEMU currently allows such topologies, as discussed
at http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html
is there a need to continue supporting them?
 
> 
> wrt CLI can't we do something like this?
> 
> -device some-cpu-model,socket=x[,core=y[,thread=z]]

We can; I just started with a simple homogeneous setup. As David Gibson
pointed out elsewhere, instead of taking the topology from globals,
making it part of each -device command line as you show above would pave
the way for heterogeneous setups, which will probably be needed in the
future. In that case, we wouldn't have to debate supporting topologies
with partially filled cores and sockets anyway. Also, supporting legacy
x86 cpu-add via the device_add method would probably be easier with the
semantics you showed.

> 
> for NUMA configs individual sockets IDs could be bound to
> nodes via -numa ... option

For PowerPC, a socket is not always a NUMA boundary: it can contain two
CPU packages/chips within a socket (a dual-chip module, DCM) and hence
two NUMA nodes within one socket.

> 
> and allow individual targets to use its own way to build CPUs?
> 
> For initial conversion of x86-cpus to device-add we could do pretty
> much the same like we do now, where cpu devices will appear under:
> /machine (pc-i440fx-2.5-machine)
>   /unattached (container)
>     /device[x] (qemu64-x86_64-cpu)
> 
> since we don't have to maintain/model dummy socket/core objects.
> 
> PowerPC could do the similar only at core level since it has
> need for modeling core objects.
> 
> It doesn't change anything wrt current introspection state, since
> cpus could be still found by mgmt tools that parse QOM tree.
> 
> We probably should split 2 conflicting goals we are trying to meet here,
> 
>  1. make device-add/dell work with cpus /
>      drop support for cpu-add in favor of device_add 
> 
>  2. how to model QOM tree view for CPUs in arch independent manner
>     to make mgmt layer life easier.
> 
> and work on them independently instead of arguing for years,
> that would allow us to make progress in #1 while still thinking about
> how to do #2 the right way if we really need it.

Makes sense; an s390 developer also recommends the same. Given that we
have CPU hotplug patchsets for x86, PowerPC and s390, all implementing
device_add semantics, pending on the list, can we hope to get them
merged for QEMU 2.6?

So as seen below, the device is either "cpu_model-cpu_type" or just "cpu_type".

-device POWER8-powerpc64-cpu (pseries)
-device qemu64-x86_64-cpu (pc)
-device s390-cpu (s390)

Are these going to be the final accepted semantics? Would libvirt be
able to work with these different CPU device names for different guests?

Regards,
Bharata.


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10 20:25 ` Matthew Rosato
@ 2015-12-14  6:25   ` Bharata B Rao
  0 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-14  6:25 UTC (permalink / raw)
  To: Matthew Rosato
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	pbonzini, imammedo, afaerber, david

On Thu, Dec 10, 2015 at 03:25:53PM -0500, Matthew Rosato wrote:
> On 12/10/2015 01:15 AM, Bharata B Rao wrote:
> > Hi,
> > 
> > This is an attempt to define a generic CPU device that serves as a
> > containing device to underlying arch-specific CPU devices. The motivation
> > for this is to have an arch-neutral way to specify CPUs mainly during
> > hotplug.
> > 
> > Instead of individual archs having their own semantics to specify the
> > CPU like
> > 
> > -device POWER8-powerpc64-cpu (pseries)
> > -device qemu64-x86_64-cpu (pc)
> > -device s390-cpu (s390)
> > 
> > this patch introduces a new device named cpu-core that could be
> > used for all target archs as
> > 
> > -device cpu-core,socket="sid"
> > 
> > This adds a CPU core with all its associated threads into the specified
> > socket with id "sid". The number of target architecture specific CPU threads
> > that get created during this operation is based on the CPU topology specified
> > using -smp sockets=S,cores=C,threads=T option. Also the number of cores that
> > can be accommodated in the same socket is dictated by the cores= parameter
> > in the same -smp option.
> > 
> > CPU sockets are represented by QOM objects and the number of sockets required
> > to fit in max_cpus are created at boottime. As cpu-core devices are
> > created, they are linked to socket object specified by socket="sid" device
> > property.
> > 
> > Thus the model consists of backend socket objects which can be considered
> > as container of one or more cpu-core devices. Each cpu-core object is
> > linked to the appropriate backend socket object. Each CPU thread device
> > appears as child object of cpu-core device.
> > 
> > All the required socket objects are created upfront and they can't be deleted.
> > Though currently socket objects can be created using object_add monitor
> > command, I am planning to prevent that so that a guest boots with the
> > required number of sockets and only CPU cores can be hotplugged into
> > them.
> > 
> > CPU hotplug granularity
> > -----------------------
> > CPU hotplug will now be done in cpu-core device granularity.
> > 
> > This patchset includes a patch to prevent topologies that result in
> > partially filled cores. Hence with this patchset, we will always
> > have fully filled cpu-core devices both for boot time and during hotplug.
> > 
> > For archs like PowerPC, where there is no requirement to be fully
> > similar to the physical system, hotplugging CPU at core granularity
> > is common. While core level hotplug will fit in naturally for such
> > archs, for others which want socket level hotplug, could higher level
> > tools like libvirt perform multiple core hotplugs in response to one
> > socket hotplug request ?
> > 
> > Are there archs that would need thread level CPU addition ?
> > 
> > Boot time CPUs as cpu-core devices
> > ----------------------------------
> > In this patchset, I am coverting the boot time CPU initialization
> > (from -smp option) to initialize the required number of cpu-core
> > devices and linking them with the appropriate socket objects.
> > 
> > Initially I thought we should be able to completely replace -smp with
> > -device cpu-core, but then I realized that at least both x86 and pseries
> > guests' machine init code has dependencies on first CPU being available
> > for the machine init code to work correctly.
> > 
> > Currently I have converted boot CPUs to cpu-core devices only PowerPC sPAPR
> > and i386 PC targets. I am not really sure about the i386 changes and the
> > intention in this iteration was to check if it is indeed possible to
> > fit i386 into cpu-core model. Having said that I am able to boot an x86
> > guest with this patchset.
> 
> I attempted a quick conversion for s390 to using cpu-core, but looks
> like we'd have an issue preventing s390 from using cpu-core immediately
> -- it relies on cpu_generic_init, which s390 specifically avoids today
> because we don't have support for cpu_models.  Not sure if other
> architectures will have the same issue.

I see that there are a few archs that don't have cpu_model; I guess we
can teach cpu_generic_init() to handle that. I will attempt it in the
next version if needed.

Regards,
Bharata.


* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState Bharata B Rao
@ 2015-12-14 17:29   ` Eduardo Habkost
  2015-12-15  8:38     ` Bharata B Rao
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-14 17:29 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, agraf, qemu-devel, borntraeger, imammedo,
	pbonzini, afaerber, david

On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> Storing CPU typename in MachineState lets us to create CPU threads
> for all architectures in uniform manner from arch-neutral code.
> 
> TODO: Touching only i386 and spapr targets for now
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>

Suggestions:

* Name the field "cpu_base_type" to indicate it is the base CPU
  class name, not the actual CPU class name used when creating
  CPUs.
* Put it in MachineClass, as it may be useful for code that
  runs before machine->init(), in the future.
* Maybe make it a CPUClass* field instead of a string?


> ---
>  hw/i386/pc.c        | 1 +
>  hw/ppc/spapr.c      | 2 ++
>  include/hw/boards.h | 1 +
>  3 files changed, 4 insertions(+)
> 
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 5e20e07..ffcd645 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1133,6 +1133,7 @@ void pc_cpus_init(PCMachineState *pcms)
>          machine->cpu_model = "qemu32";
>  #endif
>      }
> +    machine->cpu_type = TYPE_X86_CPU;
>  
>      apic_id_limit = pc_apic_id_limit(max_cpus);
>      if (apic_id_limit > ACPI_CPU_HOTPLUG_ID_LIMIT) {
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 030ee35..db441f2 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -1797,6 +1797,8 @@ static void ppc_spapr_init(MachineState *machine)
>      if (machine->cpu_model == NULL) {
>          machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
>      }
> +    machine->cpu_type = TYPE_POWERPC_CPU;
> +
>      for (i = 0; i < smp_cpus; i++) {
>          cpu = cpu_ppc_init(machine->cpu_model);
>          if (cpu == NULL) {
> diff --git a/include/hw/boards.h b/include/hw/boards.h
> index 24eb6f0..a1f9512 100644
> --- a/include/hw/boards.h
> +++ b/include/hw/boards.h
> @@ -128,6 +128,7 @@ struct MachineState {
>      char *kernel_cmdline;
>      char *initrd_filename;
>      const char *cpu_model;
> +    const char *cpu_type;
>      AccelState *accelerator;
>  };
>  
> -- 
> 2.1.0
> 

-- 
Eduardo


* Re: [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU toplogies with partially filled cores
  2015-12-11  3:24     ` Bharata B Rao
@ 2015-12-14 17:37       ` Eduardo Habkost
  2015-12-15  8:41         ` Bharata B Rao
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-14 17:37 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, pbonzini,
	imammedo, afaerber, david

On Fri, Dec 11, 2015 at 08:54:31AM +0530, Bharata B Rao wrote:
> On Thu, Dec 10, 2015 at 10:25:28AM +0000, Daniel P. Berrange wrote:
> > On Thu, Dec 10, 2015 at 11:45:36AM +0530, Bharata B Rao wrote:
> > > Prevent guests from booting with CPU topologies that have partially
> > > filled CPU cores or can result in partially filled CPU cores after CPU
> > > hotplug like
> > > 
> > > -smp 15,sockets=1,cores=4,threads=4,maxcpus=16 or
> > > -smp 15,sockets=1,cores=4,threads=4,maxcpus=17 or
> > > 
> > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > ---
> > >  vl.c | 13 +++++++++++++
> > >  1 file changed, 13 insertions(+)
> > > 
> > > diff --git a/vl.c b/vl.c
> > > index 525929b..e656f53 100644
> > > --- a/vl.c
> > > +++ b/vl.c
> > > @@ -1252,6 +1252,19 @@ static void smp_parse(QemuOpts *opts)
> > >          smp_cores = cores > 0 ? cores : 1;
> > >          smp_threads = threads > 0 ? threads : 1;
> > >  
> > > +        if (smp_cpus % smp_threads) {
> > > +            error_report("cpu topology: "
> > > +                         "smp_cpus (%u) should be multiple of threads (%u)",
> > > +                         smp_cpus, smp_threads);
> > > +            exit(1);
> > > +        }
> > > +
> > > +        if (max_cpus % smp_threads) {
> > > +            error_report("cpu topology: "
> > > +                         "maxcpus (%u) should be multiple of threads (%u)",
> > > +                         max_cpus, smp_threads);
> > > +            exit(1);
> > > +        }
> > >      }
> > 
> > Adding this seems like it has a pretty high chance of causing regression,
> > ie preventing previously working guests from booting with new QEMU. I
> > know adding the check makes sense from a semantic POV, but are we willing
> > to risk breaking people with such odd configurations ?
> 
> I wasn't sure about how much risk that would be and hence in my older
> version of PowerPC CPU hotplug patchset, I indeed supported such topologies:
> 
> https://lists.gnu.org/archive/html/qemu-ppc/2015-09/msg00102.html
> 
> But the code indeed looked ugly to support such special case.
> 
> There was some discussion about this recently here:
> 
> http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html
> 
> from where I sensed that it may be ok to dis-allow such topologies.

I want to be as strict as possible and disallow such topologies,
but Daniel has a point. Maybe we should make those checks
machine-specific, so we can make pc-*-2.5 and older allow those
broken configs.

If we make it a MachineClass::validate_smp_config() method, for
example, we could make TYPE_MACHINE point to a generic function
containing the checks you implemented above (so all machines have
those checks enabled by default), but let pc <= 2.5 override the
method.

-- 
Eduardo


* Re: [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn
  2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn Bharata B Rao
@ 2015-12-14 17:44   ` Eduardo Habkost
  2015-12-15  8:14     ` Bharata B Rao
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-14 17:44 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, agraf, qemu-devel, borntraeger, imammedo,
	pbonzini, afaerber, david

On Thu, Dec 10, 2015 at 11:45:43AM +0530, Bharata B Rao wrote:
> Move back the setting of apic_id to instance_init routine (x86_cpu_initfn)
> This is needed to initialize X86 CPUs using generic cpu-package device.

Could you explain where exactly apic_id will be used, to make it
necessary to initialize it earlier?

> 
> TODO: I am not fully aware of the general direction in which apic_id
> changes in X86 have evolved and hence not sure if this is indeed aligned with
> the X86 way of doing things. This is just to help the PoC implementation
> that I have in this patchset to convert PC CPUs initialization into
> cpu-package device based initialization.

You shouldn't initialize apic_id on initfn. APIC ID depends (and
will depend) on different CPU properties related to topology,
including (but not limited to) CPU index and CPU topology
properties we may introduce in the future, so it should be done
later (at realize time), not on initfn.

Also, cpu_index is initialized by cpu_exec_init(), and
cpu_exec_init() must not be called by initfn. The cpu_exec_init()
call should (and will) be moved to realize in x86 and all other
architectures.

> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> ---
>  hw/i386/pc.c      | 33 ---------------------------------
>  target-i386/cpu.c | 37 +++++++++++++++++++++++++++++++++++--
>  target-i386/cpu.h |  1 +
>  3 files changed, 36 insertions(+), 35 deletions(-)
> 
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index ffcd645..80a4d98 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -670,39 +670,6 @@ bool e820_get_entry(int idx, uint32_t type, uint64_t *address, uint64_t *length)
>      return false;
>  }
>  
> -/* Enables contiguous-apic-ID mode, for compatibility */
> -static bool compat_apic_id_mode;
> -
> -void enable_compat_apic_id_mode(void)
> -{
> -    compat_apic_id_mode = true;
> -}
> -
> -/* Calculates initial APIC ID for a specific CPU index
> - *
> - * Currently we need to be able to calculate the APIC ID from the CPU index
> - * alone (without requiring a CPU object), as the QEMU<->Seabios interfaces have
> - * no concept of "CPU index", and the NUMA tables on fw_cfg need the APIC ID of
> - * all CPUs up to max_cpus.
> - */
> -static uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index)
> -{
> -    uint32_t correct_id;
> -    static bool warned;
> -
> -    correct_id = x86_apicid_from_cpu_idx(smp_cores, smp_threads, cpu_index);
> -    if (compat_apic_id_mode) {
> -        if (cpu_index != correct_id && !warned && !qtest_enabled()) {
> -            error_report("APIC IDs set in compatibility mode, "
> -                         "CPU topology won't match the configuration");
> -            warned = true;
> -        }
> -        return cpu_index;
> -    } else {
> -        return correct_id;
> -    }
> -}
> -
>  /* Calculates the limit to CPU APIC ID values
>   *
>   * This function returns the limit for the APIC ID value, so that all
> diff --git a/target-i386/cpu.c b/target-i386/cpu.c
> index 11e5e39..c97a646 100644
> --- a/target-i386/cpu.c
> +++ b/target-i386/cpu.c
> @@ -25,6 +25,7 @@
>  #include "sysemu/kvm.h"
>  #include "sysemu/cpus.h"
>  #include "kvm_i386.h"
> +#include "hw/i386/topology.h"
>  
>  #include "qemu/error-report.h"
>  #include "qemu/option.h"
> @@ -3028,6 +3029,39 @@ static void x86_cpu_register_feature_bit_props(X86CPU *cpu,
>      g_strfreev(names);
>  }
>  
> +/* Enables contiguous-apic-ID mode, for compatibility */
> +static bool compat_apic_id_mode;
> +
> +void enable_compat_apic_id_mode(void)
> +{
> +    compat_apic_id_mode = true;
> +}
> +
> +/* Calculates initial APIC ID for a specific CPU index
> + *
> + * Currently we need to be able to calculate the APIC ID from the CPU index
> + * alone (without requiring a CPU object), as the QEMU<->Seabios interfaces have
> + * no concept of "CPU index", and the NUMA tables on fw_cfg need the APIC ID of
> + * all CPUs up to max_cpus.
> + */
> +uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index)
> +{
> +    uint32_t correct_id;
> +    static bool warned;
> +
> +    correct_id = x86_apicid_from_cpu_idx(smp_cores, smp_threads, cpu_index);
> +    if (compat_apic_id_mode) {
> +        if (cpu_index != correct_id && !warned) {
> +            error_report("APIC IDs set in compatibility mode, "
> +                         "CPU topology won't match the configuration");
> +            warned = true;
> +        }
> +        return cpu_index;
> +    } else {
> +        return correct_id;
> +    }
> +}
> +
>  static void x86_cpu_initfn(Object *obj)
>  {
>      CPUState *cs = CPU(obj);
> @@ -3071,8 +3105,7 @@ static void x86_cpu_initfn(Object *obj)
>      cpu->hyperv_spinlock_attempts = HYPERV_SPINLOCK_NEVER_RETRY;
>  
>  #ifndef CONFIG_USER_ONLY
> -    /* Any code creating new X86CPU objects have to set apic-id explicitly */
> -    cpu->apic_id = -1;
> +    cpu->apic_id = x86_cpu_apic_id_from_index(cs->cpu_index);
>  #endif
>  
>      for (w = 0; w < FEATURE_WORDS; w++) {
> diff --git a/target-i386/cpu.h b/target-i386/cpu.h
> index fc4a605..a5368cf 100644
> --- a/target-i386/cpu.h
> +++ b/target-i386/cpu.h
> @@ -1333,6 +1333,7 @@ void x86_cpu_change_kvm_default(const char *prop, const char *value);
>  /* Return name of 32-bit register, from a R_* constant */
>  const char *get_register_name_32(unsigned int reg);
>  
> +uint32_t x86_cpu_apic_id_from_index(unsigned int cpu_index);
>  void enable_compat_apic_id_mode(void);
>  
>  #define APIC_DEFAULT_ADDRESS 0xfee00000
> -- 
> 2.1.0
> 

-- 
Eduardo


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-11  3:57   ` Bharata B Rao
@ 2015-12-15  5:27     ` Zhu Guihua
  2015-12-16 15:16       ` Andreas Färber
  2015-12-16 15:11     ` Igor Mammedov
  1 sibling, 1 reply; 46+ messages in thread
From: Zhu Guihua @ 2015-12-15  5:27 UTC (permalink / raw)
  To: bharata, Igor Mammedov
  Cc: mjrosato, peter.maydell, ehabkost, agraf, qemu-devel,
	borntraeger, izumi.taku, pbonzini, afaerber, david

<snip>
>> and allow individual targets to use its own way to build CPUs?
>>
>> For initial conversion of x86-cpus to device-add we could do pretty
>> much the same like we do now, where cpu devices will appear under:
>> /machine (pc-i440fx-2.5-machine)
>>    /unattached (container)
>>      /device[x] (qemu64-x86_64-cpu)
>>
>> since we don't have to maintain/model dummy socket/core objects.
>>
>> PowerPC could do the similar only at core level since it has
>> need for modeling core objects.
>>
>> It doesn't change anything wrt current introspection state, since
>> cpus could be still found by mgmt tools that parse QOM tree.
>>
>> We probably should split 2 conflicting goals we are trying to meet here,
>>
>>   1. make device-add/dell work with cpus /
>>       drop support for cpu-add in favor of device_add
>>
>>   2. how to model QOM tree view for CPUs in arch independent manner
>>      to make mgmt layer life easier.
>>
>> and work on them independently instead of arguing for years,
>> that would allow us to make progress in #1 while still thinking about
>> how to do #2 the right way if we really need it.
> Makes sense, s390 developer also recommends the same. Given that we have
> CPU hotplug patchsets from x86, PowerPC and s390 all implementing device_add
> semantics pending on the list, can we hope to get them merged for
> QEMU-2.6 ?
>
> So as seen below, the device is either "cpu_model-cpu_type" or just "cpu_type".
>
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
>
> Is this going to be the final acceptable semantics ? Would libvirt be able
> to work with this different CPU device names for different guests ?

Is operating at the core level not the final decision?

For the sake of progress, I also agree with implementing device_add for
the different archs.

Thanks,
Zhu


* Re: [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn
  2015-12-14 17:44   ` Eduardo Habkost
@ 2015-12-15  8:14     ` Bharata B Rao
  0 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-15  8:14 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, agraf, qemu-devel, borntraeger, imammedo,
	pbonzini, afaerber, david

On Mon, Dec 14, 2015 at 03:44:06PM -0200, Eduardo Habkost wrote:
> On Thu, Dec 10, 2015 at 11:45:43AM +0530, Bharata B Rao wrote:
> > Move back the setting of apic_id to instance_init routine (x86_cpu_initfn)
> > This is needed to initialize X86 CPUs using generic cpu-package device.
> 
> Could you explain where exactly apic_id will be used, to make it
> necessary to initialize it earlier?

There is a check in x86_cpu_realizefn() to see if apic_id has been
initialized properly. Hence I thought the x86 target would require
apic_id to have been initialized before CPU realization, which is what
the existing code does via pc_cpus_init() and pc_new_cpu(), i.e. the
apic_id property is set before setting the realize property to true.
However...

> 
> > 
> > TODO: I am not fully aware of the general direction in which apic_id
> > changes in X86 have evolved and hence not sure if this is indeed aligned with
> > the X86 way of doing things. This is just to help the PoC implementation
> > that I have in this patchset to convert PC CPUs initialization into
> > cpu-package device based initialization.
> 
> You shouldn't initialize apic_id on initfn. APIC ID depends (and
> will depend) on different CPU properties related to topology,
> including (but not limited to) CPU index and CPU topology
> properties we may introduce in the future, so it should be done
> later (at realize time), not on initfn.

... with the current patchset, I just experimented by moving the setting
of apic_id to x86_cpu_realizefn() and things work just fine. I was in fact
pleasantly surprised to see that I could hot-add a CPU core by hot-plugging
the cpu-core device on x86 too.
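The init-vs-realize split being discussed can be illustrated with a small stand-alone stub (DemoCPU and its functions are made up for this sketch, not QEMU code): topology-dependent values such as the APIC ID are derived at realize time, after all properties have been set.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for an X86CPU object; field names are
 * illustrative, not the real QEMU structures. */
typedef struct DemoCPU {
    int cpu_index;   /* property, set between init and realize */
    int apic_id;     /* derived at realize time, not in initfn */
    bool realized;
} DemoCPU;

static void demo_cpu_init(DemoCPU *cpu)
{
    cpu->cpu_index = -1;   /* not yet assigned */
    cpu->apic_id = -1;     /* unknown until realize */
    cpu->realized = false;
}

/* Realize derives the APIC ID from the topology properties, mirroring
 * the suggestion to move the assignment into x86_cpu_realizefn(). */
static bool demo_cpu_realize(DemoCPU *cpu)
{
    if (cpu->cpu_index < 0) {
        return false;      /* property missing: refuse to realize */
    }
    cpu->apic_id = cpu->cpu_index;  /* trivial contiguous mapping */
    cpu->realized = true;
    return true;
}
```

The point of the sketch is only ordering: realize fails until the properties it depends on are set, which is exactly why the initfn is the wrong place to compute derived IDs.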

> 
> Also, cpu_index is initialized by cpu_exec_init(), and
> cpu_exec_init() must not be called by initfn. The cpu_exec_init()
> call should (and will) be moved to realize in x86 and all other
> architectures.

Right, I have already moved cpu_exec_init() call to realizefn for PowerPC.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-14 17:29   ` Eduardo Habkost
@ 2015-12-15  8:38     ` Bharata B Rao
  2015-12-15 15:31       ` Eduardo Habkost
  2015-12-16 16:54       ` Igor Mammedov
  0 siblings, 2 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-15  8:38 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, agraf, qemu-devel, borntraeger, imammedo,
	pbonzini, afaerber, david

On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > Storing the CPU typename in MachineState lets us create CPU threads
> > for all architectures in a uniform manner from arch-neutral code.
> > 
> > TODO: Touching only i386 and spapr targets for now
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> 
> Suggestions:
> 
> * Name the field "cpu_base_type" to indicate it is the base CPU
>   class name, not the actual CPU class name used when creating
>   CPUs.
> * Put it in MachineClass, as it may be useful for code that
>   runs before machine->init(), in the future.

Ok.

> * Maybe make it a CPUClass* field instead of a string?

In the current use case, this base cpu type string is being passed
to cpu_generic_init(const char *typename, const char *cpu_model)
to create boot-time CPUs with the given typename and cpu_model. So for now
the string makes sense for this use case.

Making it a CPUClass* would necessitate more changes to cpu_generic_init().

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU toplogies with partially filled cores
  2015-12-14 17:37       ` Eduardo Habkost
@ 2015-12-15  8:41         ` Bharata B Rao
  0 siblings, 0 replies; 46+ messages in thread
From: Bharata B Rao @ 2015-12-15  8:41 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, pbonzini,
	imammedo, afaerber, david

On Mon, Dec 14, 2015 at 03:37:52PM -0200, Eduardo Habkost wrote:
> On Fri, Dec 11, 2015 at 08:54:31AM +0530, Bharata B Rao wrote:
> > On Thu, Dec 10, 2015 at 10:25:28AM +0000, Daniel P. Berrange wrote:
> > > On Thu, Dec 10, 2015 at 11:45:36AM +0530, Bharata B Rao wrote:
> > > > Prevent guests from booting with CPU topologies that have partially
> > > > filled CPU cores or can result in partially filled CPU cores after CPU
> > > > hotplug like
> > > > 
> > > > -smp 15,sockets=1,cores=4,threads=4,maxcpus=16 or
> > > > -smp 15,sockets=1,cores=4,threads=4,maxcpus=17 or
> > > > 
> > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > ---
> > > >  vl.c | 13 +++++++++++++
> > > >  1 file changed, 13 insertions(+)
> > > > 
> > > > diff --git a/vl.c b/vl.c
> > > > index 525929b..e656f53 100644
> > > > --- a/vl.c
> > > > +++ b/vl.c
> > > > @@ -1252,6 +1252,19 @@ static void smp_parse(QemuOpts *opts)
> > > >          smp_cores = cores > 0 ? cores : 1;
> > > >          smp_threads = threads > 0 ? threads : 1;
> > > >  
> > > > +        if (smp_cpus % smp_threads) {
> > > > +            error_report("cpu topology: "
> > > > +                         "smp_cpus (%u) should be multiple of threads (%u)",
> > > > +                         smp_cpus, smp_threads);
> > > > +            exit(1);
> > > > +        }
> > > > +
> > > > +        if (max_cpus % smp_threads) {
> > > > +            error_report("cpu topology: "
> > > > +                         "maxcpus (%u) should be multiple of threads (%u)",
> > > > +                         max_cpus, smp_threads);
> > > > +            exit(1);
> > > > +        }
> > > >      }
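For reference, the two checks in the hunk above reduce to a pure predicate; a stand-alone sketch (the function name smp_topology_valid is invented here):

```c
#include <stdbool.h>

/* Stand-alone version of the two checks added to smp_parse() above:
 * both the boot-time CPU count and maxcpus must be multiples of the
 * threads-per-core count, so no core is ever partially filled. */
static bool smp_topology_valid(unsigned smp_cpus, unsigned max_cpus,
                               unsigned smp_threads)
{
    if (smp_threads == 0) {
        return false;
    }
    return (smp_cpus % smp_threads) == 0 &&
           (max_cpus % smp_threads) == 0;
}
```

With threads=4, the cover letter's example `-smp 15,...,maxcpus=16` fails the first check (15 % 4 != 0), and `maxcpus=17` would fail the second.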
> > > 
> > > Adding this seems like it has a pretty high chance of causing regressions,
> > > i.e. preventing previously working guests from booting with a new QEMU. I
> > > know adding the check makes sense from a semantic POV, but are we willing
> > > to risk breaking people with such odd configurations?
> > 
> > I wasn't sure how much risk that would be, and hence in my older
> > version of the PowerPC CPU hotplug patchset I indeed supported such topologies:
> > 
> > https://lists.gnu.org/archive/html/qemu-ppc/2015-09/msg00102.html
> > 
> > But the code to support such a special case indeed looked ugly.
> > 
> > There was some discussion about this recently here:
> > 
> > http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html
> > 
> > from where I sensed that it may be ok to disallow such topologies.
> 
> I want to be as strict as possible and disallow such topologies,
> but Daniel has a point. Maybe we should make those checks
> machine-specific, so we can make pc-*-2.5 and older allow those
> broken configs.
> 
> If we make it a MachineClass::validate_smp_config() method, for
> example, we could make TYPE_MACHINE point to a generic function
> containing the checks you implemented above (so all machines have
> those checks enabled by default), but let pc <= 2.5 override the
> method.
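The suggested MachineClass::validate_smp_config() hook could look roughly as follows; this is a stub sketch with made-up names (DemoMachineClass and friends), not the real QOM machinery:

```c
#include <stdbool.h>

/* Sketch of the MachineClass::validate_smp_config() idea: a strict
 * default validator on the base class, which a legacy machine type
 * (pc <= 2.5) can override with a permissive one. */
typedef struct DemoMachineClass {
    bool (*validate_smp_config)(unsigned cpus, unsigned maxcpus,
                                unsigned threads);
} DemoMachineClass;

static bool strict_validate(unsigned cpus, unsigned maxcpus,
                            unsigned threads)
{
    return threads != 0 &&
           cpus % threads == 0 &&
           maxcpus % threads == 0;
}

static bool permissive_validate(unsigned cpus, unsigned maxcpus,
                                unsigned threads)
{
    (void)cpus; (void)maxcpus; (void)threads;
    return true;  /* pc-*-2.5 and older: accept anything, as today */
}

static void demo_machine_class_init(DemoMachineClass *mc, bool legacy)
{
    mc->validate_smp_config = legacy ? permissive_validate
                                     : strict_validate;
}
```

All machines then get the strict checks by default, while compat machine types simply install the permissive function in their class_init.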

Nice suggestion, will give it a try in the next iteration.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-15  8:38     ` Bharata B Rao
@ 2015-12-15 15:31       ` Eduardo Habkost
  2015-12-16 16:54       ` Igor Mammedov
  1 sibling, 0 replies; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-15 15:31 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, agraf, qemu-devel, borntraeger, imammedo,
	pbonzini, afaerber, david

On Tue, Dec 15, 2015 at 02:08:09PM +0530, Bharata B Rao wrote:
> On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
[...]
> > * Maybe make it a CPUClass* field instead of a string?
> 
> In the current use case, this base cpu type string is being passed
> to cpu_generic_init(const char *typename, const char *cpu_model)
> to create boot-time CPUs with the given typename and cpu_model. So for now
> the string makes sense for this use case.
> 
> Making it a CPUClass* would necessitate more changes to cpu_generic_init().

I would consider changing cpu_generic_init() to
cpu_generic_init(CPUClass *base_class, const char *cpu_model)
too. But I admit I didn't look at cpu_generic_init() code yet,
and this can be done later if appropriate.

-- 
Eduardo

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-11  3:57   ` Bharata B Rao
  2015-12-15  5:27     ` Zhu Guihua
@ 2015-12-16 15:11     ` Igor Mammedov
  2015-12-17  9:19       ` Peter Krempa
  1 sibling, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 15:11 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mjrosato, peter.maydell, zhugh.fnst, Peter Krempa, ehabkost,
	agraf, qemu-devel, borntraeger, pbonzini, afaerber, david

On Fri, 11 Dec 2015 09:27:57 +0530
Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:

> On Thu, Dec 10, 2015 at 01:35:05PM +0100, Igor Mammedov wrote:
> > On Thu, 10 Dec 2015 11:45:35 +0530
> > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > 
> > > Hi,
> > > 
> > > This is an attempt to define a generic CPU device that serves as a
> > > containing device to underlying arch-specific CPU devices. The
> > > motivation for this is to have an arch-neutral way to specify CPUs
> > > mainly during hotplug.
> > > 
> > > Instead of individual archs having their own semantics to specify
> > > the CPU like
> > > 
> > > -device POWER8-powerpc64-cpu (pseries)
> > > -device qemu64-x86_64-cpu (pc)
> > > -device s390-cpu (s390)
> > > 
> > > this patch introduces a new device named cpu-core that could be
> > > used for all target archs as
> > > 
> > > -device cpu-core,socket="sid"
> > > 
> > > This adds a CPU core with all its associated threads into the
> > > specified socket with id "sid". The number of target architecture
> > > specific CPU threads that get created during this operation is
> > > based on the CPU topology specified using -smp
> > > sockets=S,cores=C,threads=T option. Also the number of cores that
> > > can be accommodated in the same socket is dictated by the cores=
> > > parameter in the same -smp option.
> > > 
> > > CPU sockets are represented by QOM objects and the number of
> > > sockets required to fit in max_cpus are created at boottime. As
> > > cpu-core devices are created, they are linked to socket object
> > > specified by socket="sid" device property.
> > > 
> > > Thus the model consists of backend socket objects which can be
> > > considered as container of one or more cpu-core devices. Each
> > > cpu-core object is linked to the appropriate backend socket
> > > object. Each CPU thread device appears as child object of
> > > cpu-core device.
> > > 
> > > All the required socket objects are created upfront and they
> > > can't be deleted. Though currently socket objects can be created
> > > using object_add monitor command, I am planning to prevent that
> > > so that a guest boots with the required number of sockets and
> > > only CPU cores can be hotplugged into them.
> > > 
> > > CPU hotplug granularity
> > > -----------------------
> > > CPU hotplug will now be done in cpu-core device granularity.
> > > 
> > > This patchset includes a patch to prevent topologies that result
> > > in partially filled cores. Hence with this patchset, we will
> > > always have fully filled cpu-core devices both for boot time and
> > > during hotplug.
> > > 
> > > For archs like PowerPC, where there is no requirement to be fully
> > > similar to the physical system, hotplugging CPU at core
> > > granularity is common. While core level hotplug will fit in
> > > naturally for such archs, for others which want socket level
> > > hotplug, could higher level tools like libvirt perform multiple
> > > core hotplugs in response to one socket hotplug request ?
> > > 
> > > Are there archs that would need thread level CPU addition ?
> > there are:
> > currently the x86 target allows starting QEMU with 1 thread even if the
> > topology specifies more threads per core. The same applies to
> > hotplug.
> > 
> > On top of that, I think the ACPI spec also treats CPU devices at
> > per-thread level.
> <snip>
> > with this series it would regress following CLI
> >   -smp 1,sockets=4,cores=2,threads=2,maxcpus=16
> 
> Yes, the first patch in this series explicitly prevents such
> topologies.
> 
> Though QEMU currently allows such topologies, as discussed
> at http://lists.gnu.org/archive/html/qemu-devel/2015-12/msg00396.html,
> is there a need to continue supporting them?
>  
> > 
> > wrt CLI can't we do something like this?
> > 
> > -device some-cpu-model,socket=x[,core=y[,thread=z]]
> 
> We can, I just started with a simple homogeneous setup. As David Gibson
> pointed out elsewhere, instead of taking the topology from globals,
> making it part of each -device command line like you show above would
> pave the way for heterogeneous setups, which will probably be needed in
> the future. In such a case, we wouldn't have to debate
> supporting topologies with partially filled cores and sockets. Also,
> supporting the legacy x86 cpu-add via the device_add method would probably be
> easier with the semantics you showed.
> 
> > 
> > for NUMA configs individual sockets IDs could be bound to
> > nodes via -numa ... option
> 
> For PowerPC, a socket is not always a NUMA boundary: it can have
> two CPU packages/chips (DCM) within a socket and hence two NUMA
> levels within a socket.
I guess the syntax could be extended with an optional 'numa' option,
like we do with the pc-dimm device:

-device some-cpu-model,[numa=n,]socket=x[,core=y[,thread=z]]

s/some-cpu-model/some-cpu-type/

but it should be ok if libvirt would feed the 'numa' option when
creating the cpu device.

> 
> > 
> > and allow individual targets to use its own way to build CPUs?
> > 
> > For initial conversion of x86-cpus to device-add we could do pretty
> > much the same like we do now, where cpu devices will appear under:
> > /machine (pc-i440fx-2.5-machine)
> >   /unattached (container)
> >     /device[x] (qemu64-x86_64-cpu)
> > 
> > since we don't have to maintain/model dummy socket/core objects.
> > 
> > PowerPC could do the similar only at core level since it has
> > need for modeling core objects.
> > 
> > It doesn't change anything wrt current introspection state, since
> > cpus could be still found by mgmt tools that parse QOM tree.
> > 
> > We probably should split 2 conflicting goals we are trying to meet
> > here,
> > 
> >  1. make device_add/del work with cpus /
> >      drop support for cpu-add in favor of device_add 
> > 
> >  2. how to model QOM tree view for CPUs in arch independent manner
> >     to make mgmt layer life easier.
> > 
> > and work on them independently instead of arguing for years,
> > that would allow us to make progress in #1 while still thinking
> > about how to do #2 the right way if we really need it.
> 
> Makes sense, the s390 developer also recommends the same. Given that we
> have CPU hotplug patchsets from x86, PowerPC and s390, all
> implementing device_add semantics, pending on the list, can we hope to
> get them merged for QEMU-2.6?
> 
> So as seen below, the device is either "cpu_model-cpu_type" or just
> "cpu_type".
generic device_add isn't able to deal with 'cpu_model' stuff, so
it should be a concrete 'cpu_type'.
The question is whether libvirt can get a list of supported CPU types.

> 
> -device POWER8-powerpc64-cpu (pseries)
> -device qemu64-x86_64-cpu (pc)
> -device s390-cpu (s390)
> 
> Is this going to be the final acceptable semantics? Would libvirt be
> able to work with these different CPU device names for different
> guests?
CCing Peter to check if libvirt could do it and if he is ok with the
proposed device_add semantics, as it's he who will deal with it
on the libvirt side.

> 
> Regards,
> Bharata.
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-15  5:27     ` Zhu Guihua
@ 2015-12-16 15:16       ` Andreas Färber
  0 siblings, 0 replies; 46+ messages in thread
From: Andreas Färber @ 2015-12-16 15:16 UTC (permalink / raw)
  To: Zhu Guihua, bharata, Igor Mammedov
  Cc: mjrosato, peter.maydell, ehabkost, qemu-devel, agraf,
	borntraeger, pbonzini, izumi.taku, david

Am 15.12.2015 um 06:27 schrieb Zhu Guihua:
> <snip>
>>> and allow individual targets to use its own way to build CPUs?
>>>
>>> For initial conversion of x86-cpus to device-add we could do pretty
>>> much the same like we do now, where cpu devices will appear under:
>>> /machine (pc-i440fx-2.5-machine)
>>>    /unattached (container)
>>>      /device[x] (qemu64-x86_64-cpu)
>>>
>>> since we don't have to maintain/model dummy socket/core objects.
>>>
>>> PowerPC could do the similar only at core level since it has
>>> need for modeling core objects.
>>>
>>> It doesn't change anything wrt current introspection state, since
>>> cpus could be still found by mgmt tools that parse QOM tree.
>>>
>>> We probably should split 2 conflicting goals we are trying to meet here,
>>>
>>>   1. make device_add/del work with cpus /
>>>       drop support for cpu-add in favor of device_add
>>>
>>>   2. how to model QOM tree view for CPUs in arch independent manner
>>>      to make mgmt layer life easier.
>>>
>>> and work on them independently instead of arguing for years,
>>> that would allow us to make progress in #1 while still thinking about
>>> how to do #2 the right way if we really need it.
>> Makes sense, the s390 developer also recommends the same. Given that we have
>> CPU hotplug patchsets from x86, PowerPC and s390, all implementing
>> device_add
>> semantics, pending on the list, can we hope to get them merged for
>> QEMU-2.6?
>>
>> So as seen below, the device is either "cpu_model-cpu_type" or just
>> "cpu_type".
>>
>> -device POWER8-powerpc64-cpu (pseries)
>> -device qemu64-x86_64-cpu (pc)
>> -device s390-cpu (s390)
>>
>> Is this going to be the final acceptable semantics? Would libvirt be
>> able to work with these different CPU device names for different guests?
> 
> Is operating at core level not the final decision?

No, it is absolutely _not_ the conclusion from Seattle.

Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton; HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
                   ` (10 preceding siblings ...)
  2015-12-10 20:25 ` Matthew Rosato
@ 2015-12-16 15:19 ` Andreas Färber
  2015-12-16 15:44   ` Igor Mammedov
  11 siblings, 1 reply; 46+ messages in thread
From: Andreas Färber @ 2015-12-16 15:19 UTC (permalink / raw)
  To: Bharata B Rao, qemu-devel
  Cc: peter.maydell, ehabkost, agraf, borntraeger, pbonzini, imammedo, david

Am 10.12.2015 um 07:15 schrieb Bharata B Rao:
> CPU hotplug granularity
> -----------------------
> CPU hotplug will now be done in cpu-core device granularity.

Nack.

> Are there archs that would need thread level CPU addition ?

Yes, s390. And for x86 people called for socket level.

Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton; HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:19 ` Andreas Färber
@ 2015-12-16 15:44   ` Igor Mammedov
  2015-12-16 15:57     ` Andreas Färber
  0 siblings, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 15:44 UTC (permalink / raw)
  To: Andreas Färber
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	Bharata B Rao, pbonzini, david

On Wed, 16 Dec 2015 16:19:06 +0100
Andreas Färber <afaerber@suse.de> wrote:

> Am 10.12.2015 um 07:15 schrieb Bharata B Rao:
> > CPU hotplug granularity
> > -----------------------
> > CPU hotplug will now be done in cpu-core device granularity.
> 
> Nack.
> 
> > Are there archs that would need thread level CPU addition ?
> 
> Yes, s390. And for x86 people called for socket level.
Socket-level hotplug would be the last resort if we can't agree
on a thread-level one, as it would break existing setups where a
user can hotplug 1 core; I'd like to avoid it if possible.

> 
> Andreas
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-10 12:35 ` [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Igor Mammedov
  2015-12-11  3:57   ` Bharata B Rao
@ 2015-12-16 15:46   ` Andreas Färber
  2015-12-16 21:58     ` Igor Mammedov
  2016-01-01  3:47     ` Bharata B Rao
  1 sibling, 2 replies; 46+ messages in thread
From: Andreas Färber @ 2015-12-16 15:46 UTC (permalink / raw)
  To: Igor Mammedov, Bharata B Rao
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger, pbonzini, david

Am 10.12.2015 um 13:35 schrieb Igor Mammedov:
> wrt CLI can't we do something like this?
> 
> -device some-cpu-model,socket=x[,core=y[,thread=z]]

That's problematic and where my x86 remodeling got stuck. It works fine
(more or less) to model sockets, cores and hyperthreads for -smp, but
doing it dynamically did not work well. How do you determine the
instance size a socket with N cores and M threads needs? Allocations in
instance_init are to be avoided with a view to hot-plug. So either we
have a fully determined socket object or we need to wire individual
objects on the command line. The latter has bad implications for
atomicity and thus hot-unplug. That leaves us with dynamic properties
doing allocations and reporting it via Error**, something I never
finished and could use reviewers and contributors.

Anthony's old suggestion had been to use real socket product names like
Xeon-E5-4242 to get a 6-core, dual-thread socket, without parameters -
unfortunately I still don't see an easy way to define such a thing today
with the flexibility users will undoubtedly want.

And since the question came up how to detect this, what you guys seem to
keep forgetting is that somewhere there also needs to be a matching
link<> property that determines what can be plugged, i.e. QMP qom-list.
link<>s are the QOM equivalent to qdev's buses. The object itself needs
to live in /machine/peripheral or /machine/peripheral-anon
(/machine/unattached is supposed to go away after the QOM conversion is
done!) and in a machine-specific place there will be a
/machine/cpu-socket[0] -> /machine/peripheral-anon/device[42]
link<x86_64-cpu-socket> property. It might just as well be
/machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
(Gentle reminder of the s390 ipi modeling discussion that never came to
any conclusion iirc.)

Note that I have not read this patch series yet, just some of the
alarming review comments.

Regards,
Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton; HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:44   ` Igor Mammedov
@ 2015-12-16 15:57     ` Andreas Färber
  2015-12-16 17:22       ` Igor Mammedov
  0 siblings, 1 reply; 46+ messages in thread
From: Andreas Färber @ 2015-12-16 15:57 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	Bharata B Rao, pbonzini, david

Am 16.12.2015 um 16:44 schrieb Igor Mammedov:
> On Wed, 16 Dec 2015 16:19:06 +0100
> Andreas Färber <afaerber@suse.de> wrote:
> 
>> Am 10.12.2015 um 07:15 schrieb Bharata B Rao:
>>> CPU hotplug granularity
>>> -----------------------
>>> CPU hotplug will now be done in cpu-core device granularity.
>>
>> Nack.
>>
>>> Are there archs that would need thread level CPU addition ?
>>
>> Yes, s390. And for x86 people called for socket level.
> Socket-level hotplug would be the last resort if we can't agree
> on a thread-level one, as it would break existing setups where a
> user can hotplug 1 core; I'd like to avoid it if possible.

We still need to keep cpu-add for backwards compatibility, so I am
discussing solely the new device_add interface. My previous x86 series
went to severe hacks trying to keep cpu-add working with sockets&cores.

Attendees in Seattle said that thread-level hot-plug were dangerous for
Linux guests due to assumptions in the (guest's) scheduler breaking for
any incompletely filled cores or sockets. No one present objected to
doing it on socket level. Bharata(?) had a recent patch to catch such
incompletely filled cores on the initial command line and I think we
should seriously consider doing that even if it breaks some fringe use
case - hot-added incomplete cores or sockets remain to be detected.

Regards,
Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton; HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-15  8:38     ` Bharata B Rao
  2015-12-15 15:31       ` Eduardo Habkost
@ 2015-12-16 16:54       ` Igor Mammedov
  2015-12-16 19:39         ` Eduardo Habkost
  1 sibling, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 16:54 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, Eduardo Habkost, qemu-devel, agraf, borntraeger,
	pbonzini, afaerber, david

On Tue, 15 Dec 2015 14:08:09 +0530
Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:

> On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > Storing the CPU typename in MachineState lets us create CPU threads
> > > for all architectures in a uniform manner from arch-neutral code.
> > > 
> > > TODO: Touching only i386 and spapr targets for now
> > > 
> > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > 
> > Suggestions:
> > 
> > * Name the field "cpu_base_type" to indicate it is the base CPU
> >   class name, not the actual CPU class name used when creating
> >   CPUs.
> > * Put it in MachineClass, as it may be useful for code that
> >   runs before machine->init(), in the future.
> 
> Ok.
> 
> > * Maybe make it a CPUClass* field instead of a string?
> 
> In the current use case, this base cpu type string is being passed
> to cpu_generic_init(const char *typename, const char *cpu_model)
> to create boot-time CPUs with the given typename and cpu_model. So for now
> the string makes sense for this use case.
> 
> Making it a CPUClass* would necessitate more changes to
> cpu_generic_init().
how about actually leaving it as "cpu_type" and putting in it the
actual cpu type that could be used with device_add()?

That would get rid of keeping and passing around the intermediate cpu_model.

> 
> Regards,
> Bharata.
> 
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:57     ` Andreas Färber
@ 2015-12-16 17:22       ` Igor Mammedov
  2015-12-16 22:37         ` Igor Mammedov
  2016-01-12  3:54         ` David Gibson
  0 siblings, 2 replies; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 17:22 UTC (permalink / raw)
  To: Andreas Färber
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	Bharata B Rao, pbonzini, david

On Wed, 16 Dec 2015 16:57:54 +0100
Andreas Färber <afaerber@suse.de> wrote:

> Am 16.12.2015 um 16:44 schrieb Igor Mammedov:
> > On Wed, 16 Dec 2015 16:19:06 +0100
> > Andreas Färber <afaerber@suse.de> wrote:
> > 
> >> Am 10.12.2015 um 07:15 schrieb Bharata B Rao:
> >>> CPU hotplug granularity
> >>> -----------------------
> >>> CPU hotplug will now be done in cpu-core device granularity.
> >>
> >> Nack.
> >>
> >>> Are there archs that would need thread level CPU addition ?
> >>
> >> Yes, s390. And for x86 people called for socket level.
> > socket level hotplug would be the last resort if we can't agree
> > on thread level one. As it would break existing setups where
> > user can hotplug 1 core, and  I'd like to avoid it if it is
> > possible.
> 
> We still need to keep cpu-add for backwards compatibility, so I am
> discussing solely the new device_add interface. My previous x86 series
> went to severe hacks trying to keep cpu-add working with
> sockets&cores.
If possible, it would be better to make cpu-add use device_add
internally.

> 
> Attendees in Seattle said that thread-level hot-plug were dangerous
> for Linux guests due to assumptions in the (guest's) scheduler
> breaking for any incompletely filled cores or sockets. No one present
There is no such thing as CPU hotplug at socket level in x86 Linux so far.
CPUs are plugged at the logical (thread) CPU level, one at a time.
The ACPI spec does the same (it describes logical CPUs), and hotplug
notification in the guest is handled one logical CPU at a time.

> objected to doing it on socket level. Bharata(?) had a recent patch
> to catch such incompletely filled cores on the initial command line
> and I think we should seriously consider doing that even if it breaks
> some fringe use case - hot-added incomplete cores or sockets remain
> to be detected.
Interesting question: what would the MADT/SRAT tables look like on a machine
with an Intel CPU when hyperthreading is disabled in the BIOS?
 
> 
> Regards,
> Andreas
> 

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-16 16:54       ` Igor Mammedov
@ 2015-12-16 19:39         ` Eduardo Habkost
  2015-12-16 22:26           ` Igor Mammedov
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-16 19:39 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> On Tue, 15 Dec 2015 14:08:09 +0530
> Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> 
> > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > Storing the CPU typename in MachineState lets us create CPU threads
> > > > for all architectures in a uniform manner from arch-neutral code.
> > > > 
> > > > TODO: Touching only i386 and spapr targets for now
> > > > 
> > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > 
> > > Suggestions:
> > > 
> > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > >   class name, not the actual CPU class name used when creating
> > >   CPUs.
> > > * Put it in MachineClass, as it may be useful for code that
> > >   runs before machine->init(), in the future.
> > 
> > Ok.
> > 
> > > * Maybe make it a CPUClass* field instead of a string?
> > 
> > In the current use case, this base cpu type string is being passed
> > to cpu_generic_init(const char *typename, const char *cpu_model)
> > to create boot-time CPUs with the given typename and cpu_model. So for now
> > the string makes sense for this use case.
> > 
> > Making it a CPUClass* would necessitate more changes to
> > cpu_generic_init().
> how about actually leaving it as "cpu_type" and putting in it the
> actual cpu type that could be used with device_add()?
> 
> That would get rid of keeping and passing around the intermediate cpu_model.

Makes sense. We only need to save both typename and cpu_model
today because cpu_generic_init() currently encapsulates three
steps: CPU class lookup + CPU creation + CPU feature parsing. But
we shouldn't need to redo CPU class lookup every time.

We could just split cpu_model once, and save the resulting
CPUClass* + featurestr, instead of saving the full cpu_model
string and parsing it again every time.

The only problem is that it would require refactoring multiple
machines/architectures that use a cpu_XXX_init(const char *cpu_model)
helper.
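The split-once idea can be sketched stand-alone (split_cpu_model is a hypothetical helper, not an existing QEMU function): the model name and feature string are separated a single time, so the class lookup need not be redone and only the feature string needs parsing later.

```c
#include <string.h>

/* Split a "model,feat1,feat2" cpu_model string once into a model
 * name and a feature string. Buffer handling and naming here are
 * illustrative only. */
static void split_cpu_model(const char *cpu_model,
                            char *model, size_t model_len,
                            const char **featurestr)
{
    const char *comma = strchr(cpu_model, ',');
    size_t n = comma ? (size_t)(comma - cpu_model) : strlen(cpu_model);

    if (n >= model_len) {
        n = model_len - 1;   /* truncate to fit the caller's buffer */
    }
    memcpy(model, cpu_model, n);
    model[n] = '\0';
    *featurestr = comma ? comma + 1 : NULL;
}
```

The resulting model name would be looked up once to get a CPUClass*, and only the feature string kept around for the later parsing step.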

-- 
Eduardo

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:46   ` Andreas Färber
@ 2015-12-16 21:58     ` Igor Mammedov
  2015-12-24  1:59       ` Zhu Guihua
  2016-01-01  3:47     ` Bharata B Rao
  1 sibling, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 21:58 UTC (permalink / raw)
  To: Andreas Färber
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	Bharata B Rao, pbonzini, david

On Wed, 16 Dec 2015 16:46:37 +0100
Andreas Färber <afaerber@suse.de> wrote:

> Am 10.12.2015 um 13:35 schrieb Igor Mammedov:
> > wrt CLI can't we do something like this?
> > 
> > -device some-cpu-model,socket=x[,core=y[,thread=z]]
> 
> That's problematic and where my x86 remodeling got stuck. It works
> fine (more or less) to model sockets, cores and hyperthreads for
> -smp, but doing it dynamically did not work well. How do you
> determine the instance size a socket with N cores and M threads
> needs?
-smp defines the necessary topology; all one needs is to find out the
instance size of a core or thread object. It would probably be x86
specific but it's doable: the only fix needed in the current X86 thread
is to reserve space for the largest APIC type
and replace object_new() with object_initialize_with_type().
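
The "reserve space for the largest APIC type" idea can be sketched with stand-in types. All names and sizes below are invented for illustration (QEMU's real APIC types and init functions differ); the point is the union sized to the largest variant plus in-place, allocation-free initialization:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-ins for the APIC variants; the real QEMU types are larger
 * and selected by accelerator (TCG/KVM/Xen). */
typedef struct APICCommonState { int id; char regs[64]; } APICCommonState;
typedef struct KVMAPICState { APICCommonState parent; char kvm[128]; } KVMAPICState;
typedef struct XenAPICState { APICCommonState parent; char xen[32]; } XenAPICState;

/* Reserve space for the largest APIC flavour inside the CPU thread
 * object itself, so creating the APIC becomes an in-place,
 * object_initialize_with_type()-style init rather than object_new():
 * no allocation that could fail at hotplug time. */
typedef struct X86CPUThreadSketch {
    int apic_id;
    union {
        APICCommonState tcg;
        KVMAPICState    kvm;
        XenAPICState    xen;
    } apic;                       /* embedded, not a pointer */
} X86CPUThreadSketch;

static void apic_init_in_place(X86CPUThreadSketch *cpu, size_t inst_size)
{
    assert(inst_size <= sizeof(cpu->apic));  /* space reserved up front */
    memset(&cpu->apic, 0, inst_size);
}
```

Whichever accelerator is active, its APIC instance fits in the pre-reserved union, so thread creation needs no separate heap allocation.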

However, looking from the x86 point of view, there is no need to model
sockets or cores; threads are sufficient for QEMU's needs.
Dummy sockets/cores just complicate the implementation.

What I'm advocating for is letting archs decide whether they should
create CPUs per socket, core or thread.

And for x86, do this at thread level; that way we keep compatibility
with cpu-add, but also allow specifying which cpu thread to plug with
 'node=n,socket=x,core=y,thread=z'.

Another point in favor of thread granularity for x86 is that competing
hypervisors hotplug at thread level; QEMU would be worse off in feature
parity if the minimal hotplug unit were a socket.

That also has the benefit of being very flexible and would suit QEMU's
engineering audience, allowing them to build CPUs from config instead
of hardcoding them in code, and to play with heterogeneous
configurations.

The options 'node=n,socket=x,core=y,thread=z' are just an SMP-specific
path defining where the CPU should be attached; it could become a QOM
path in the future, once we arrive there and have a stable QOM tree.

> Allocations in instance_init are to be avoided with a view to
> hot-plug.
> So either we have a fully determined socket object or we
> need to wire individual objects on the command line. The latter has
> bad implications for atomicity and thus hot-unplug. That leaves us
what are these bad implications and how do they affect unplug?

If, for example, the x86 CPU thread is fixed to embed a child APIC, then
to avoid allocations as much as possible, or to fail gracefully, there
are 2 options:
 1: like you've said, reserve all needed space at startup, i.e.
    pre-create empty sockets
 2: fail gracefully in qdev_device_add() if allocation is not possible

For #2 it's not enough to avoid allocations in instance_init();
we must also teach qdev_device_add() to get the size of the
to-be-created object and replace
object_new() with malloc() + object_initialize_with_type(),
so that it's possible to fail an allocation gracefully and report an error.
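
A minimal sketch of option #2, using invented stand-in names (TypeInfoSketch, device_try_new) rather than QEMU's real qdev API:

```c
#include <stdlib.h>

typedef struct TypeInfoSketch {
    const char *name;
    size_t instance_size;   /* what qdev_device_add would query */
} TypeInfoSketch;

/* malloc()-style allocation plus in-place init instead of object_new():
 * on allocation failure, report an error to the monitor instead of
 * aborting QEMU. */
static void *device_try_new(const TypeInfoSketch *ti, const char **errp)
{
    void *obj = calloc(1, ti->instance_size);

    if (!obj) {
        *errp = "out of memory";   /* fail gracefully, report error */
        return NULL;
    }
    /* object_initialize_with_type(obj, ti->instance_size, ...) here */
    return obj;
}
```

Any device_add-capable device would go through the same path, so no per-device preallocation is needed.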

Doing that would benefit not only CPUs but every device_add-capable
Device, and is sufficient for hotplug purposes without the overhead of
reserving space for every possible hotplugged device at startup (which
is impossible in general anyway).
So I'd go for #2, a sane device_add implementation, over #1, the
preallocated-objects one.
 
> with dynamic properties doing allocations and reporting it via
> Error**, something I never finished and could use reviewers and
> contributors.
Most dynamic properties are static; it looks like what QOM really needs
is static properties, so we don't misuse the former, and probably
a way to reserve space for a declared number of dynamic ones to avoid
allocations in instance_init().

> 
> Anthony's old suggestion had been to use real socket product names
> like Xeon-E5-4242 to get a 6-core, dual-thread socket, without
> parameters - unfortunately I still don't see an easy way to define
> such a thing today with the flexibility users will undoubtedly want.
I don't see it either, and for me it is much harder to remember
what a Xeon-E5-4242 is, while it's much easier to say:
   I want N [cpu-foo] threads
which in the SMP world could be expressed by adding N thread objects at
specified locations:
   device_add cpu-foo, with optional node=n,socket=x,core=y,thread=z
And for x86 there are lots of these Xeon-foo/whatever-foo codenames,
which would be a nightmare to maintain.

> 
> And since the question came up how to detect this, what you guys seem
> to keep forgetting is that somewhere there also needs to be a matching
> link<> property that determines what can be plugged, i.e. QMP
> qom-list. link<>s are the QOM equivalent to qdev's buses. The object
> itself needs to live in /machine/peripheral
> or /machine/peripheral-anon (/machine/unattached is supposed to go
> away after the QOM conversion is done!) and in a machine-specific
> place there will be a /machine/cpu-socket[0]
> -> /machine/peripheral-anon/device[42] link<x86_64-cpu-socket>
> property. It might just as well
> be /machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
> (Gentle reminder of the s390 ipi modeling discussion that never came
> to any conclusion iirc.)
The QOM view is probably too unstable to become ABI, and as you noted
it might be machine-specific. To be more generic and consumable
by libvirt, it could be a 'virtual' flat list
/machine/cpus/cpu-N-S-C-T<FOOO>[x], where FOOO could be
socket|core|thread depending on the granularity at
which the arch allows creating CPUs, and N,S,C,T specify the 'where'
part that corresponds to the link.

But I think a separate QMP command to list present/missing CPUs with
their properties would be easier to maintain and adapt to different
archs without the need to commit part of the QOM tree as ABI.
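
One entry of such a QMP reply might carry roughly this information. The record shape below is hypothetical, not an actual QEMU schema; it only illustrates what management would need to know per CPU slot:

```c
#include <stdbool.h>

/* One entry of a hypothetical "list CPU slots" QMP reply: enough for
 * a management tool to know what to plug, where, and what is already
 * there, without parsing the QOM tree. */
typedef struct CPUSlotInfoSketch {
    const char *type;                /* concrete type usable with device_add */
    int node, socket, core, thread;  /* the 'where' part */
    bool present;                    /* occupied vs. still hot-pluggable */
} CPUSlotInfoSketch;
```

A tool would iterate the list, pick an entry with present == false, and pass the type plus location properties straight to device_add.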

> 
> Note that I have not read this patch series yet, just some of the
> alarming review comments.
> 
> Regards,
> Andreas
> 


* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-16 19:39         ` Eduardo Habkost
@ 2015-12-16 22:26           ` Igor Mammedov
  2015-12-17 18:09             ` Eduardo Habkost
  0 siblings, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 22:26 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Wed, 16 Dec 2015 17:39:02 -0200
Eduardo Habkost <ehabkost@redhat.com> wrote:

> On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> > On Tue, 15 Dec 2015 14:08:09 +0530
> > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > 
> > > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > > Storing CPU typename in MachineState lets us create CPU
> > > > > threads for all architectures in a uniform manner from
> > > > > arch-neutral code.
> > > > > 
> > > > > TODO: Touching only i386 and spapr targets for now
> > > > > 
> > > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > 
> > > > Suggestions:
> > > > 
> > > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > > >   class name, not the actual CPU class name used when creating
> > > >   CPUs.
> > > > * Put it in MachineClass, as it may be useful for code that
> > > >   runs before machine->init(), in the future.
> > > 
> > > Ok.
> > > 
> > > > * Maybe make it a CPUClass* field instead of a string?
> > > 
> > > In the current use case, this base cpu type string is being passed
> > > to cpu_generic_init(const char *typename, const char *cpu_model)
> > > to create boot-time CPUs with the given typename and cpu_model. So
> > > for now the string makes sense for this use case.
> > > 
> > > Making it CPUClass* would necessitate more changes to
> > > cpu_generic_init().
> > how about actually leaving it as "cpu_type" and putting in it
> > actual cpu type that could be used with device_add().
> > 
> > that would get rid of keeping and passing around intermediate
> > cpu_model.
> 
> Makes sense. We only need to save both typename and cpu_model
> today because cpu_generic_init() currently encapsulates three
> steps: CPU class lookup + CPU creation + CPU feature parsing. But
> we shouldn't need to redo CPU class lookup every time.
BTW: Eduardo do you know if QEMU could somehow provide a list of
supported CPU types (i.e. not cpumodels) to libvirt?

> 
> We could just split cpu_model once, and save the resulting
> CPUClass* + featurestr, instead of saving the full cpu_model
> string and parsing it again every time.
Isn't featurestr x86/sparc-specific?

Could we have a field in x86_cpu_class/sparc_cpu_class for it and set it
when cpu_model is parsed?
That way the generic cpu_model parser would handle only cpu names and
target-specific overrides would handle both.
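
In outline, the proposed split — generic name-only parsing plus a per-target feature hook — could look like this. All names here are illustrative stand-ins, not QEMU's real CPUClass layout:

```c
#include <stddef.h>
#include <string.h>

typedef struct CPUClassSketch CPUClassSketch;

/* Per-target hook: only targets that understand feature strings
 * (x86, sparc) install one; other targets leave it NULL. */
struct CPUClassSketch {
    void (*parse_features)(CPUClassSketch *cc, const char *features);
    const char *cached_features;   /* field set when cpu_model is parsed */
};

static void x86_parse_features(CPUClassSketch *cc, const char *features)
{
    cc->cached_features = features;   /* x86-specific handling */
}

/* Generic parser: handles only the model name part; delegates the
 * feature string to the target override if one exists. */
static void generic_parse_cpu_model(CPUClassSketch *cc, const char *features)
{
    if (features && cc->parse_features) {
        cc->parse_features(cc, features);
    }
}
```

Targets without a hook never see feature strings, which keeps the featurestr convention from leaking into archs that never supported it.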

> 
> The only problem is that it would require refactoring multiple
> machines/architectures that use a cpu_XXX_init(const char *cpu_model)
> helper.
> 


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 17:22       ` Igor Mammedov
@ 2015-12-16 22:37         ` Igor Mammedov
  2016-01-12  3:54         ` David Gibson
  1 sibling, 0 replies; 46+ messages in thread
From: Igor Mammedov @ 2015-12-16 22:37 UTC (permalink / raw)
  To: Andreas Färber
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	Bharata B Rao, pbonzini, david

On Wed, 16 Dec 2015 18:22:20 +0100
Igor Mammedov <imammedo@redhat.com> wrote:

> On Wed, 16 Dec 2015 16:57:54 +0100
> Andreas Färber <afaerber@suse.de> wrote:
[...]
> > 
> > Attendees in Seattle said that thread-level hot-plug were dangerous
> > for Linux guests due to assumptions in the (guest's) scheduler
> > breaking for any incompletely filled cores or sockets. No one
> > present
> There is no such thing as cpu hotplug at socket level in x86 Linux
> so far. CPUs are plugged at the logical (thread) cpu level, one at a
> time. The ACPI spec does the same (it describes logical CPUs), and the
> hotplug notification in the guest is handled per logical cpu, one at a time.
BTW:
from my experience with CPU hotplug, I've had to fix only one
topology+sched-related bug in the kernel, but more bugs were related to
fast CPU online/offline, especially if the host is over-committed.

And plugging a whole socket would lead to a rapid sequence of CPU
hotplugs, which has been causing non-deterministic kernel crashes/lock-ups
much more often than topology issues; there are still bugs to fix in
that area.


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:11     ` Igor Mammedov
@ 2015-12-17  9:19       ` Peter Krempa
  0 siblings, 0 replies; 46+ messages in thread
From: Peter Krempa @ 2015-12-17  9:19 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: mjrosato, peter.maydell, zhugh.fnst, ehabkost, agraf, qemu-devel,
	borntraeger, Bharata B Rao, pbonzini, afaerber, david


On Wed, Dec 16, 2015 at 16:11:08 +0100, Igor Mammedov wrote:
> On Fri, 11 Dec 2015 09:27:57 +0530
> Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> 
> > On Thu, Dec 10, 2015 at 01:35:05PM +0100, Igor Mammedov wrote:
> > > On Thu, 10 Dec 2015 11:45:35 +0530
> > > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:

[...]

> > > For initial conversion of x86-cpus to device-add we could do pretty
> > > much the same like we do now, where cpu devices will appear under:
> > > /machine (pc-i440fx-2.5-machine)
> > >   /unattached (container)
> > >     /device[x] (qemu64-x86_64-cpu)
> > > 
> > > since we don't have to maintain/model dummy socket/core objects.
> > > 
> > > PowerPC could do the similar only at core level since it has
> > > need for modeling core objects.
> > > 
> > > It doesn't change anything wrt current introspection state, since
> > > cpus could be still found by mgmt tools that parse QOM tree.
> > > 
> > > We probably should split 2 conflicting goals we are trying to meet
> > > here,
> > > 
> > >  1. make device-add/del work with cpus /
> > >      drop support for cpu-add in favor of device_add 
> > > 
> > >  2. how to model QOM tree view for CPUs in arch independent manner
> > >     to make mgmt layer life easier.
> > > 
> > > and work on them independently instead of arguing for years,
> > > that would allow us to make progress in #1 while still thinking
> > > about how to do #2 the right way if we really need it.
> > 
> > Makes sense, s390 developer also recommends the same. Given that we
> > have CPU hotplug patchsets from x86, PowerPC and s390 all
> > implementing device_add semantics pending on the list, can we hope to
> > get them merged for QEMU-2.6 ?
> > 
> > So as seen below, the device is either "cpu_model-cpu_type" or just
> > "cpu_type".
> generic device_add isn't able to deal with 'cpu_model' stuff, so
> it should be a concrete 'cpu_type'.
> The question is whether libvirt can get a list of supported CPU types.
> 
> > 
> > -device POWER8-powerpc64-cpu (pseries)
> > -device qemu64-x86_64-cpu (pc)
> > -device s390-cpu (s390)
> > 
> > Is this going to be the final acceptable semantics ? Would libvirt be
> > able to work with this different CPU device names for different
> > guests ?
> CCing Peter to check if libvirt could do it and if he is ok with the
> proposed device_add semantics, as it's he who will deal with it
> on the libvirt side.

Well, this depends entirely on the implementation and the variety of the
cpu device models. Libvirt requires that the cpu model for a given
arch/machine type/whatever can be inferred either completely offline or
via monitor commands that are available when qemu is started with the
'none' machine type. This is required as we query capabilities of a qemu
binary beforehand and then use the cached data to create the command
line. Running a qemu process just to query a cpu model is not acceptable
as it adds significant overhead.

Peter



* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-16 22:26           ` Igor Mammedov
@ 2015-12-17 18:09             ` Eduardo Habkost
  2015-12-18 10:46               ` Igor Mammedov
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-17 18:09 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Wed, Dec 16, 2015 at 11:26:20PM +0100, Igor Mammedov wrote:
> On Wed, 16 Dec 2015 17:39:02 -0200
> Eduardo Habkost <ehabkost@redhat.com> wrote:
> 
> > On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> > > On Tue, 15 Dec 2015 14:08:09 +0530
> > > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > > 
> > > > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > > > Storing CPU typename in MachineState lets us create CPU
> > > > > > threads for all architectures in a uniform manner from
> > > > > > arch-neutral code.
> > > > > > 
> > > > > > TODO: Touching only i386 and spapr targets for now
> > > > > > 
> > > > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > > 
> > > > > Suggestions:
> > > > > 
> > > > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > > > >   class name, not the actual CPU class name used when creating
> > > > >   CPUs.
> > > > > * Put it in MachineClass, as it may be useful for code that
> > > > >   runs before machine->init(), in the future.
> > > > 
> > > > Ok.
> > > > 
> > > > > * Maybe make it a CPUClass* field instead of a string?
> > > > 
> > > > In the current use case, this base cpu type string is being passed
> > > > to cpu_generic_init(const char *typename, const char *cpu_model)
> > > > to create boot-time CPUs with the given typename and cpu_model. So
> > > > for now the string makes sense for this use case.
> > > > 
> > > > Making it CPUClass* would necessitate more changes to
> > > > cpu_generic_init().
> > > how about actually leaving it as "cpu_type" and putting in it
> > > actual cpu type that could be used with device_add().
> > > 
> > > that would get rid of keeping and passing around intermediate
> > > cpu_model.
> > 
> > Makes sense. We only need to save both typename and cpu_model
> > today because cpu_generic_init() currently encapsulates three
> > steps: CPU class lookup + CPU creation + CPU feature parsing. But
> > we shouldn't need to redo CPU class lookup every time.
> BTW: Eduardo do you know if QEMU could somehow provide a list of
> supported CPU types (i.e. not cpumodels) to libvirt?

Not sure I understand the question. Could you clarify what you
mean by "supported CPU types", and what's the problem it would
solve?

> 
> > 
> > We could just split cpu_model once, and save the resulting
> > CPUClass* + featurestr, instead of saving the full cpu_model
> > string and parsing it again every time.
> Isn't featurestr x86/sparc-specific?
> 
> Could we have a field in x86_cpu_class/sparc_cpu_class for it and set it
> when cpu_model is parsed?
> That way the generic cpu_model parser would handle only cpu names and
> target-specific overrides would handle both.

I always assumed we want to have a generic CPU model + featurestr
mechanism that could be reused by multiple architectures.

-- 
Eduardo


* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-17 18:09             ` Eduardo Habkost
@ 2015-12-18 10:46               ` Igor Mammedov
  2015-12-18 15:51                 ` Eduardo Habkost
  0 siblings, 1 reply; 46+ messages in thread
From: Igor Mammedov @ 2015-12-18 10:46 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Thu, 17 Dec 2015 16:09:23 -0200
Eduardo Habkost <ehabkost@redhat.com> wrote:

> On Wed, Dec 16, 2015 at 11:26:20PM +0100, Igor Mammedov wrote:
> > On Wed, 16 Dec 2015 17:39:02 -0200
> > Eduardo Habkost <ehabkost@redhat.com> wrote:
> > 
> > > On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> > > > On Tue, 15 Dec 2015 14:08:09 +0530
> > > > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > > > 
> > > > > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > > > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > > > > Storing CPU typename in MachineState lets us create CPU
> > > > > > > threads for all architectures in a uniform manner from
> > > > > > > arch-neutral code.
> > > > > > > 
> > > > > > > TODO: Touching only i386 and spapr targets for now
> > > > > > > 
> > > > > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > > > 
> > > > > > Suggestions:
> > > > > > 
> > > > > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > > > > >   class name, not the actual CPU class name used when creating
> > > > > >   CPUs.
> > > > > > * Put it in MachineClass, as it may be useful for code that
> > > > > >   runs before machine->init(), in the future.
> > > > > 
> > > > > Ok.
> > > > > 
> > > > > > * Maybe make it a CPUClass* field instead of a string?
> > > > > 
> > > > > In the current use case, this base cpu type string is being passed
> > > > > to cpu_generic_init(const char *typename, const char *cpu_model)
> > > > > to create boot-time CPUs with the given typename and cpu_model. So
> > > > > for now the string makes sense for this use case.
> > > > > 
> > > > > Making it CPUClass* would necessitate more changes to
> > > > > cpu_generic_init().
> > > > how about actually leaving it as "cpu_type" and putting in it
> > > > actual cpu type that could be used with device_add().
> > > > 
> > > > that would get rid of keeping and passing around intermediate
> > > > cpu_model.
> > > 
> > > Makes sense. We only need to save both typename and cpu_model
> > > today because cpu_generic_init() currently encapsulates three
> > > steps: CPU class lookup + CPU creation + CPU feature parsing. But
> > > we shouldn't need to redo CPU class lookup every time.
> > BTW: Eduardo do you know if QEMU could somehow provide a list of
> > supported CPU types (i.e. not cpumodels) to libvirt?
> 
> Not sure I understand the question. Could you clarify what you
> mean by "supported CPU types", and what's the problem it would
> solve?
device_add TYPE takes only a type name, so I'd like to keep it that way
and make sure that libvirt/users can list the cpu types they would
be able to use with device_add/-device.

For PC they are generated from cpu_model with the help of x86_cpu_type_name().

> > 
> > > 
> > > We could just split cpu_model once, and save the resulting
> > > CPUClass* + featurestr, instead of saving the full cpu_model
> > > string and parsing it again every time.
> > Isn't featurestr x86/sparc-specific?
> > 
> > Could we have a field in x86_cpu_class/sparc_cpu_class for it and set it
> > when cpu_model is parsed?
> > That way the generic cpu_model parser would handle only cpu names and
> > target-specific overrides would handle both.
> 
> I always assumed we want to have a generic CPU model + featurestr
> mechanism that could be reused by multiple architectures.

I've thought of it the opposite way: that we wanted to phase out
featurestr in favor of the generic option parsing of a generic device, i.e.
 -device TYPE,option=X,...
but we would have to keep compatibility with the old CLI
that supplies the cpu definition via -cpu cpu_model,featurestr.
So cpu_model translated into a "cpu_type" field makes sense for every
target, but featurestr is x86/sparc-specific and I'd prefer to
keep it that way and not introduce it to other targets.


* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-18 10:46               ` Igor Mammedov
@ 2015-12-18 15:51                 ` Eduardo Habkost
  2015-12-18 16:01                   ` Igor Mammedov
  0 siblings, 1 reply; 46+ messages in thread
From: Eduardo Habkost @ 2015-12-18 15:51 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Fri, Dec 18, 2015 at 11:46:05AM +0100, Igor Mammedov wrote:
> On Thu, 17 Dec 2015 16:09:23 -0200
> Eduardo Habkost <ehabkost@redhat.com> wrote:
> 
> > On Wed, Dec 16, 2015 at 11:26:20PM +0100, Igor Mammedov wrote:
> > > On Wed, 16 Dec 2015 17:39:02 -0200
> > > Eduardo Habkost <ehabkost@redhat.com> wrote:
> > > 
> > > > On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> > > > > On Tue, 15 Dec 2015 14:08:09 +0530
> > > > > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > > > > 
> > > > > > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > > > > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > > > > > Storing CPU typename in MachineState lets us create CPU
> > > > > > > > threads for all architectures in a uniform manner from
> > > > > > > > arch-neutral code.
> > > > > > > > 
> > > > > > > > TODO: Touching only i386 and spapr targets for now
> > > > > > > > 
> > > > > > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > > > > 
> > > > > > > Suggestions:
> > > > > > > 
> > > > > > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > > > > > >   class name, not the actual CPU class name used when creating
> > > > > > >   CPUs.
> > > > > > > * Put it in MachineClass, as it may be useful for code that
> > > > > > >   runs before machine->init(), in the future.
> > > > > > 
> > > > > > Ok.
> > > > > > 
> > > > > > > * Maybe make it a CPUClass* field instead of a string?
> > > > > > 
> > > > > > In the current use case, this base cpu type string is being passed
> > > > > > to cpu_generic_init(const char *typename, const char *cpu_model)
> > > > > > to create boot-time CPUs with the given typename and cpu_model. So
> > > > > > for now the string makes sense for this use case.
> > > > > > 
> > > > > > Making it CPUClass* would necessitate more changes to
> > > > > > cpu_generic_init().
> > > > > how about actually leaving it as "cpu_type" and putting in it
> > > > > actual cpu type that could be used with device_add().
> > > > > 
> > > > > that would get rid of keeping and passing around intermediate
> > > > > cpu_model.
> > > > 
> > > > Makes sense. We only need to save both typename and cpu_model
> > > > today because cpu_generic_init() currently encapsulates three
> > > > steps: CPU class lookup + CPU creation + CPU feature parsing. But
> > > > we shouldn't need to redo CPU class lookup every time.
> > > BTW: Eduardo do you know if QEMU could somehow provide a list of
> > > supported CPU types (i.e. not cpumodels) to libvirt?
> > 
> > Not sure I understand the question. Could you clarify what you
> > mean by "supported CPU types", and what's the problem it would
> > solve?
> device_add TYPE takes only a type name, so I'd like to keep it that way
> and make sure that libvirt/users can list the cpu types they would
> be able to use with device_add/-device.
> 
> For PC they are generated from cpu_model with the help of x86_cpu_type_name().

What about adding a "qom-type" field to query-cpu-definitions?
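
For x86 the concrete type name is derived mechanically from the model name, so the reply entry could carry it directly. A sketch with invented names follows (the real x86_cpu_type_name() helper and any eventual QAPI field may differ):

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical shape of one query-cpu-definitions reply entry,
 * extended with the concrete type usable with device_add. */
typedef struct CpuDefinitionInfoSketch {
    const char *name;   /* cpu model name, e.g. "qemu64" */
    char qom_type[64];  /* proposed new field, e.g. "qemu64-x86_64-cpu" */
} CpuDefinitionInfoSketch;

static void fill_qom_type(CpuDefinitionInfoSketch *info)
{
    /* mirrors how x86 derives concrete type names from model names */
    snprintf(info->qom_type, sizeof(info->qom_type),
             "%s-x86_64-cpu", info->name);
}
```

libvirt could then take qom_type from the cached capability data and pass it straight to device_add, with no running guest needed.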

> 
> > > 
> > > > 
> > > > We could just split cpu_model once, and save the resulting
> > > > CPUClass* + featurestr, instead of saving the full cpu_model
> > > > string and parsing it again every time.
> > > Isn't featurestr x86/sparc-specific?
> > > 
> > > Could we have a field in x86_cpu_class/sparc_cpu_class for it and set it
> > > when cpu_model is parsed?
> > > That way the generic cpu_model parser would handle only cpu names and
> > > target-specific overrides would handle both.
> > 
> > I always assumed we want to have a generic CPU model + featurestr
> > mechanism that could be reused by multiple architectures.
> 
> I've thought of it the opposite way: that we wanted to phase out
> featurestr in favor of the generic option parsing of a generic device, i.e.
>  -device TYPE,option=X,...
> but we would have to keep compatibility with the old CLI
> that supplies the cpu definition via -cpu cpu_model,featurestr.
> So cpu_model translated into a "cpu_type" field makes sense for every
> target, but featurestr is x86/sparc-specific and I'd prefer to
> keep it that way and not introduce it to other targets.

I see, and it may make sense long term. But do you really think
we will be able to deprecate "-cpu" and "-smp" soon?

We already have CPUClass::parse_features, and cpu_generic_init()
already makes the model/featurestr split. Do you propose we
remove that generic code and move it back to x86/sparc?

Also, are you sure no other architectures support options in
"-cpu"? cpu_common_parse_features() already supports setting QOM
properties, and it is already available on every architecture
that calls CPUClass::parse_features or uses cpu_generic_init(). I
see that both ARM and PPC have properties available in their CPU
classes, and call parse_features() too.

-- 
Eduardo


* Re: [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState
  2015-12-18 15:51                 ` Eduardo Habkost
@ 2015-12-18 16:01                   ` Igor Mammedov
  0 siblings, 0 replies; 46+ messages in thread
From: Igor Mammedov @ 2015-12-18 16:01 UTC (permalink / raw)
  To: Eduardo Habkost
  Cc: peter.maydell, qemu-devel, agraf, borntraeger, Bharata B Rao,
	pbonzini, afaerber, david

On Fri, 18 Dec 2015 13:51:49 -0200
Eduardo Habkost <ehabkost@redhat.com> wrote:

> On Fri, Dec 18, 2015 at 11:46:05AM +0100, Igor Mammedov wrote:
> > On Thu, 17 Dec 2015 16:09:23 -0200
> > Eduardo Habkost <ehabkost@redhat.com> wrote:
> > 
> > > On Wed, Dec 16, 2015 at 11:26:20PM +0100, Igor Mammedov wrote:
> > > > On Wed, 16 Dec 2015 17:39:02 -0200
> > > > Eduardo Habkost <ehabkost@redhat.com> wrote:
> > > > 
> > > > > On Wed, Dec 16, 2015 at 05:54:25PM +0100, Igor Mammedov wrote:
> > > > > > On Tue, 15 Dec 2015 14:08:09 +0530
> > > > > > Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:
> > > > > > 
> > > > > > > On Mon, Dec 14, 2015 at 03:29:49PM -0200, Eduardo Habkost wrote:
> > > > > > > > On Thu, Dec 10, 2015 at 11:45:37AM +0530, Bharata B Rao wrote:
> > > > > > > > > Storing CPU typename in MachineState lets us create CPU
> > > > > > > > > threads for all architectures in a uniform manner from
> > > > > > > > > arch-neutral code.
> > > > > > > > > 
> > > > > > > > > TODO: Touching only i386 and spapr targets for now
> > > > > > > > > 
> > > > > > > > > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > > > > > > > 
> > > > > > > > Suggestions:
> > > > > > > > 
> > > > > > > > * Name the field "cpu_base_type" to indicate it is the base CPU
> > > > > > > >   class name, not the actual CPU class name used when creating
> > > > > > > >   CPUs.
> > > > > > > > * Put it in MachineClass, as it may be useful for code that
> > > > > > > >   runs before machine->init(), in the future.
> > > > > > > 
> > > > > > > Ok.
> > > > > > > 
> > > > > > > > * Maybe make it a CPUClass* field instead of a string?
> > > > > > > 
> > > > > > > In the current use case, this base cpu type string is being passed
> > > > > > > to cpu_generic_init(const char *typename, const char *cpu_model)
> > > > > > > to create boot-time CPUs with the given typename and cpu_model. So
> > > > > > > for now the string makes sense for this use case.
> > > > > > > 
> > > > > > > Making it CPUClass* would necessitate more changes to
> > > > > > > cpu_generic_init().
> > > > > > how about actually leaving it as "cpu_type" and putting in it
> > > > > > actual cpu type that could be used with device_add().
> > > > > > 
> > > > > > that would get rid of keeping and passing around intermediate
> > > > > > cpu_model.
> > > > > 
> > > > > Makes sense. We only need to save both typename and cpu_model
> > > > > today because cpu_generic_init() currently encapsulates three
> > > > > steps: CPU class lookup + CPU creation + CPU feature parsing. But
> > > > > we shouldn't need to redo CPU class lookup every time.
> > > > BTW: Eduardo do you know if QEMU could somehow provide a list of
> > > > supported CPU types (i.e. not cpumodels) to libvirt?
> > > 
> > > Not sure I understand the question. Could you clarify what you
> > > mean by "supported CPU types", and what's the problem it would
> > > solve?
> > device_add TYPE takes only a type name, so I'd like to keep it that way
> > and make sure that libvirt/users can list the cpu types they would
> > be able to use with device_add/-device.
> > 
> > For PC they are generated from cpu_model with the help of x86_cpu_type_name().
> 
> What about adding a "qom-type" field to query-cpu-definitions?
Sounds like a good idea to me.

> 
> > 
> > > > 
> > > > > 
> > > > > We could just split cpu_model once, and save the resulting
> > > > > CPUClass* + featurestr, instead of saving the full cpu_model
> > > > > string and parsing it again every time.
> > > > Isn't featurestr x86/sparc-specific?
> > > > 
> > > > Could we have a field in x86_cpu_class/sparc_cpu_class for it and set it
> > > > when cpu_model is parsed?
> > > > That way the generic cpu_model parser would handle only cpu names and
> > > > target-specific overrides would handle both.
> > > 
> > > I always assumed we want to have a generic CPU model + featurestr
> > > mechanism that could be reused by multiple architectures.
> > 
> > I've thought of it the opposite way: that we wanted to phase out
> > featurestr in favor of the generic option parsing of a generic device, i.e.
> >  -device TYPE,option=X,...
> > but we would have to keep compatibility with the old CLI
> > that supplies the cpu definition via -cpu cpu_model,featurestr.
> > So cpu_model translated into a "cpu_type" field makes sense for every
> > target, but featurestr is x86/sparc-specific and I'd prefer to
> > keep it that way and not introduce it to other targets.
> 
> I see, and it may make sense long term. But do you really think
> we will be able to deprecate "-cpu" and "-smp" soon?
> 
> We already have CPUClass::parse_features, and cpu_generic_init()
> already makes the model/featurestr split. Do you propose we
> remove that generic code and move it back to x86/sparc?
> 
> Also, are you sure no other architectures support options in
> "-cpu"? cpu_common_parse_features() already supports setting QOM
> properties, and it is already available on every architecture
> that calls CPUClass::parse_features or uses cpu_generic_init(). I
> see that both ARM and PPC have properties available in their CPU
> classes, and call parse_features() too.
Well, if generic handlers already exist and have spread to other targets,
then there isn't much point in pushing this into arch-specific
code; sorry for the noise.

^ permalink raw reply	[flat|nested] 46+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 21:58     ` Igor Mammedov
@ 2015-12-24  1:59       ` Zhu Guihua
  2015-12-29 13:52         ` Igor Mammedov
  0 siblings, 1 reply; 46+ messages in thread
From: Zhu Guihua @ 2015-12-24  1:59 UTC (permalink / raw)
  To: Igor Mammedov, Andreas Färber
  Cc: peter.maydell, ehabkost, agraf, qemu-devel, borntraeger,
	Bharata B Rao, pbonzini, david


On 12/17/2015 05:58 AM, Igor Mammedov wrote:
> On Wed, 16 Dec 2015 16:46:37 +0100
> Andreas Färber <afaerber@suse.de> wrote:
>
>> Am 10.12.2015 um 13:35 schrieb Igor Mammedov:
>>> wrt CLI can't we do something like this?
>>>
>>> -device some-cpu-model,socket=x[,core=y[,thread=z]]
>> That's problematic and where my x86 remodeling got stuck. It works
>> fine (more or less) to model sockets, cores and hyperthreads for
>> -smp, but doing it dynamically did not work well. How do you
>> determine the instance size a socket with N cores and M threads
>> needs?
> -smp defines the necessary topology; all one needs is to find out the
> instance size of a core or thread object. It would probably be x86
> specific but it's doable; the only fix needed in the current x86 thread
> is to reserve space for the largest APIC type
> and replace object_new with object_initialize_with_type.
>
> However, if I look at it from the x86 point of view, there is no need to
> model sockets or cores. Threads are sufficient for QEMU's needs.
> Dummy sockets/cores just complicate the implementation.
>
> What I'm advocating is to let archs decide whether they should create
> CPUs per socket, core or thread.
>
> And for x86, do this at thread level; that way we keep compatibility
> with cpu-add, but also allow choosing which cpu thread to plug with
>   'node=n,socket=x,core=y,thread=z'.
>
> Another point in favor of thread granularity for x86 is that competing
> hypervisors are doing it at thread level; QEMU would be worse off
> in feature parity if the minimal hotplug unit were a socket.
>
> That also has the benefit of being very flexible and would suit the
> engineering audience of QEMU, allowing them to build CPUs from
> config instead of hardcoding them in code, and to play with heterogeneous
> configurations.
>
> Options 'node=n,socket=x,core=y,thread=z' are just an SMP-specific
> path defining where the CPU should be attached; it could be a QOM path
> in the future once we arrive there and have a stable QOM tree.

Does this mean we will drop the current apic_id from the command line?
We already implemented the 'node=n' option a year ago.

Thanks,
Zhu

>> Allocations in instance_init are to be avoided with a view to
>> hot-plug.
>> So either we have a fully determined socket object or we
>> need to wire individual objects on the command line. The latter has
>> bad implications for atomicity and thus hot-unplug. That leaves us
> what are these bad implications and how do they affect unplug?
>
> If, for example, the x86 CPU thread is fixed to embed a child APIC, then
> to avoid allocations as much as possible, or to fail gracefully, there
> are 2 options:
>   1: like you've said, reserve all needed space at startup, i.e.
>      pre-create empty sockets
>   2: fail gracefully in qdev_device_add() if allocation is not possible
>
> for #2 it's not enough to avoid allocations in instance_init();
> we also must teach qdev_device_add() to get the size of the
> to-be-created object and replace
> object_new() with malloc() + object_initialize_with_type();
> that way it's possible to fail the allocation gracefully and report an error.
>
> Doing that would benefit not only CPUs but every device_add-capable
> Device, and it is sufficient for hotplug purposes without the overhead of
> reserving space for every possible hotplugged device at startup (which
> is impossible in the general case anyway).
> So I'd go for #2, a sane device_add impl., over #1, preallocated objects.
>   
>> with dynamic properties doing allocations and reporting it via
>> Error**, something I never finished and could use reviewers and
>> contributors.
> Most of the dynamic properties are static; it looks like what QOM needs
> is really static properties, so we don't misuse the former, and probably
> a way to reserve space for a declared number of dynamic ones to avoid
> allocations in instance_initialize().
>
>> Anthony's old suggestion had been to use real socket product names
>> like Xeon-E5-4242 to get a 6-core, dual-thread socket, without
>> parameters - unfortunately I still don't see an easy way to define
>> such a thing today with the flexibility users will undoubtedly want.
> I don't see it either and for me it is much harder to remember
> what Xeon-E5-4242 is while it's much easier to say:
>     I want N [cpu-foo] threads
> which in the SMP world could be expressed via adding N thread objects at
> specified locations
>     device_add cpu-foo, with optional node=n,socket=x,core=y,thread=z
> allows to do it.
> And well, for x86 there are lots of these Xeon-foo/whatever-foo codenames,
> which would be a nightmare to maintain.
>
>> And since the question came up how to detect this, what you guys seem
>> to keep forgetting is that somewhere there also needs to be a matching
>> link<> property that determines what can be plugged, i.e. QMP
>> qom-list. link<>s are the QOM equivalent to qdev's buses. The object
>> itself needs to live in /machine/peripheral
>> or /machine/peripheral-anon (/machine/unattached is supposed to go
>> away after the QOM conversion is done!) and in a machine-specific
>> place there will be a /machine/cpu-socket[0]
>> -> /machine/peripheral-anon/device[42] link<x86_64-cpu-socket>
>> property. It might just as well
>> be /machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
>> (Gentle reminder of the s390 ipi modeling discussion that never came
>> to any conclusion iirc.)
> QOM view probably is too unstable for becoming ABI and as you noted
> it might be a machine specific one. To be more generic and consumable
> by libvirt it could be 'virtual' flat list
> /machine/cpus/cpu-N-S-C-T<FOOO>[x] where FOOO could be
> socket|core|thread depending on granularity at
> which arch allows to create CPUs and N,S,C,T specifying 'where' part
> that corresponds to link.
>
> But I think a separate QMP command to list present/missing CPUs with
> their properties would be easier to maintain and adapt to different archs
> without the need to commit part of the QOM tree as ABI.
>
>> Note that I have not read this patch series yet, just some of the
>> alarming review comments.
>>
>> Regards,
>> Andreas
>>
>
>
>
> .
>


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-24  1:59       ` Zhu Guihua
@ 2015-12-29 13:52         ` Igor Mammedov
  0 siblings, 0 replies; 46+ messages in thread
From: Igor Mammedov @ 2015-12-29 13:52 UTC (permalink / raw)
  To: Zhu Guihua
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	Bharata B Rao, pbonzini, Andreas Färber, david

On Thu, 24 Dec 2015 09:59:53 +0800
Zhu Guihua <zhugh.fnst@cn.fujitsu.com> wrote:

> 
> On 12/17/2015 05:58 AM, Igor Mammedov wrote:
> > On Wed, 16 Dec 2015 16:46:37 +0100
> > Andreas Färber <afaerber@suse.de> wrote:
> >
> >> Am 10.12.2015 um 13:35 schrieb Igor Mammedov:
> >>> wrt CLI can't we do something like this?
> >>>
> >>> -device some-cpu-model,socket=x[,core=y[,thread=z]]
> >> That's problematic and where my x86 remodeling got stuck. It works
> >> fine (more or less) to model sockets, cores and hyperthreads for
> >> -smp, but doing it dynamically did not work well. How do you
> >> determine the instance size a socket with N cores and M threads
> >> needs?
> > -smp defines the necessary topology; all one needs is to find out the
> > instance size of a core or thread object. It would probably be x86
> > specific but it's doable; the only fix needed in the current x86 thread
> > is to reserve space for the largest APIC type
> > and replace object_new with object_initialize_with_type.
> >
> > However, if I look at it from the x86 point of view, there is no need to
> > model sockets or cores. Threads are sufficient for QEMU's needs.
> > Dummy sockets/cores just complicate the implementation.
> >
> > What I'm advocating is to let archs decide whether they should create
> > CPUs per socket, core or thread.
> >
> > And for x86, do this at thread level; that way we keep compatibility
> > with cpu-add, but also allow choosing which cpu thread to plug with
> >   'node=n,socket=x,core=y,thread=z'.
> >
> > Another point in favor of thread granularity for x86 is that competing
> > hypervisors are doing it at thread level; QEMU would be worse off
> > in feature parity if the minimal hotplug unit were a socket.
> >
> > That also has the benefit of being very flexible and would suit the
> > engineering audience of QEMU, allowing them to build CPUs from
> > config instead of hardcoding them in code, and to play with heterogeneous
> > configurations.
> >
> > Options 'node=n,socket=x,core=y,thread=z' are just an SMP-specific
> > path defining where the CPU should be attached; it could be a QOM path
> > in the future once we arrive there and have a stable QOM tree.
> 
> Does this mean we will drop the current apic_id from the command line?
> We already implemented the 'node=n' option a year ago.
There is no apic_id option on the command line so far, and the reason
is that it's complex to calculate, depends on the CPU type, and is
x86-specific.

On the contrary,
 -device some-cpu-model,node=n,socket=x,core=y,thread=z
provides all the necessary information for QEMU to calculate the APIC ID
while being relatively arch-neutral.
Each arch can then require a full set of these options, or a subset,
to be provided at -device/device_add, so that it can plug the CPU at board
level when hotplug_handler_plug() is called for it.


> 
> Thanks,
> Zhu
> 
> >> Allocations in instance_init are to be avoided with a view to
> >> hot-plug.
> >> So either we have a fully determined socket object or we
> >> need to wire individual objects on the command line. The latter has
> >> bad implications for atomicity and thus hot-unplug. That leaves us
> > what are these bad implications and how do they affect unplug?
> >
> > If, for example, the x86 CPU thread is fixed to embed a child APIC, then
> > to avoid allocations as much as possible, or to fail gracefully, there
> > are 2 options:
> >   1: like you've said, reserve all needed space at startup, i.e.
> >      pre-create empty sockets
> >   2: fail gracefully in qdev_device_add() if allocation is not possible
> >
> > for #2 it's not enough to avoid allocations in instance_init();
> > we also must teach qdev_device_add() to get the size of the
> > to-be-created object and replace
> > object_new() with malloc() + object_initialize_with_type();
> > that way it's possible to fail the allocation gracefully and report an error.
> >
> > Doing that would benefit not only CPUs but every device_add-capable
> > Device, and it is sufficient for hotplug purposes without the overhead of
> > reserving space for every possible hotplugged device at startup (which
> > is impossible in the general case anyway).
> > So I'd go for #2, a sane device_add impl., over #1, preallocated objects.
> >   
> >> with dynamic properties doing allocations and reporting it via
> >> Error**, something I never finished and could use reviewers and
> >> contributors.
> > Most of the dynamic properties are static; it looks like what QOM needs
> > is really static properties, so we don't misuse the former, and probably
> > a way to reserve space for a declared number of dynamic ones to avoid
> > allocations in instance_initialize().
> >
> >> Anthony's old suggestion had been to use real socket product names
> >> like Xeon-E5-4242 to get a 6-core, dual-thread socket, without
> >> parameters - unfortunately I still don't see an easy way to define
> >> such a thing today with the flexibility users will undoubtedly want.
> > I don't see it either and for me it is much harder to remember
> > what Xeon-E5-4242 is while it's much easier to say:
> >     I want N [cpu-foo] threads
> > which in the SMP world could be expressed via adding N thread objects at
> > specified locations
> >     device_add cpu-foo, with optional node=n,socket=x,core=y,thread=z
> > allows to do it.
> > And well, for x86 there are lots of these Xeon-foo/whatever-foo codenames,
> > which would be a nightmare to maintain.
> >
> >> And since the question came up how to detect this, what you guys seem
> >> to keep forgetting is that somewhere there also needs to be a matching
> >> link<> property that determines what can be plugged, i.e. QMP
> >> qom-list. link<>s are the QOM equivalent to qdev's buses. The object
> >> itself needs to live in /machine/peripheral
> >> or /machine/peripheral-anon (/machine/unattached is supposed to go
> >> away after the QOM conversion is done!) and in a machine-specific
> >> place there will be a /machine/cpu-socket[0]
> >> -> /machine/peripheral-anon/device[42] link<x86_64-cpu-socket>
> >> property. It might just as well
> >> be /machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
> >> (Gentle reminder of the s390 ipi modeling discussion that never came
> >> to any conclusion iirc.)
> > QOM view probably is too unstable for becoming ABI and as you noted
> > it might be a machine specific one. To be more generic and consumable
> > by libvirt it could be 'virtual' flat list
> > /machine/cpus/cpu-N-S-C-T<FOOO>[x] where FOOO could be
> > socket|core|thread depending on granularity at
> > which arch allows to create CPUs and N,S,C,T specifying 'where' part
> > that corresponds to link.
> >
> > But I think a separate QMP command to list present/missing CPUs with
> > their properties would be easier to maintain and adapt to different archs
> > without the need to commit part of the QOM tree as ABI.
> >
> >> Note that I have not read this patch series yet, just some of the
> >> alarming review comments.
> >>
> >> Regards,
> >> Andreas
> >>
> >
> >
> >
> > .
> >
> 
> 
> 
> 


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 15:46   ` Andreas Färber
  2015-12-16 21:58     ` Igor Mammedov
@ 2016-01-01  3:47     ` Bharata B Rao
  2016-01-04 12:52       ` Igor Mammedov
  1 sibling, 1 reply; 46+ messages in thread
From: Bharata B Rao @ 2016-01-01  3:47 UTC (permalink / raw)
  To: Andreas Färber
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	pbonzini, Igor Mammedov, david

On Wed, Dec 16, 2015 at 04:46:37PM +0100, Andreas Färber wrote:
> Am 10.12.2015 um 13:35 schrieb Igor Mammedov:
> > wrt CLI can't we do something like this?
> > 
> > -device some-cpu-model,socket=x[,core=y[,thread=z]]
> 
> That's problematic and where my x86 remodeling got stuck. It works fine
> (more or less) to model sockets, cores and hyperthreads for -smp, but
> doing it dynamically did not work well. How do you determine the
> instance size a socket with N cores and M threads needs? Allocations in
> instance_init are to be avoided with a view to hot-plug. So either we
> have a fully determined socket object or we need to wire individual
> objects on the command line. The latter has bad implications for
> atomicity and thus hot-unplug. That leaves us with dynamic properties
> doing allocations and reporting it via Error**, something I never
> finished and could use reviewers and contributors.
> 
> Anthony's old suggestion had been to use real socket product names like
> Xeon-E5-4242 to get a 6-core, dual-thread socket, without parameters -
> unfortunately I still don't see an easy way to define such a thing today
> with the flexibility users will undoubtedly want.
> 
> And since the question came up how to detect this, what you guys seem to
> keep forgetting is that somewhere there also needs to be a matching
> link<> property that determines what can be plugged, i.e. QMP qom-list.
> link<>s are the QOM equivalent to qdev's buses. The object itself needs
> to live in /machine/peripheral or /machine/peripheral-anon
> (/machine/unattached is supposed to go away after the QOM conversion is
> done!) and in a machine-specific place there will be a
> /machine/cpu-socket[0] -> /machine/peripheral-anon/device[42]
> link<x86_64-cpu-socket> property. It might just as well be
> /machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
> (Gentle reminder of the s390 ipi modeling discussion that never came to
> any conclusion iirc.)
> 
> Note that I have not read this patch series yet, just some of the
> alarming review comments.

It has been more than a year since I posted the initial version of the
PowerPC sPAPR CPU hotplug patchset. I guess the x86 CPU hotplug patchset
existed even before that. Now we have patches for s390 CPU hotplug
on the list as well. Given this situation, would it be agreeable and
feasible to follow Igor's suggestion and de-link the QOM part from the
actual CPU hotplug work? Maybe we can get these patchsets into 2.6 and
work on QOM links subsequently?

Regards,
Bharata.


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2016-01-01  3:47     ` Bharata B Rao
@ 2016-01-04 12:52       ` Igor Mammedov
  0 siblings, 0 replies; 46+ messages in thread
From: Igor Mammedov @ 2016-01-04 12:52 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	pbonzini, Andreas Färber, david

On Fri, 1 Jan 2016 09:17:48 +0530
Bharata B Rao <bharata@linux.vnet.ibm.com> wrote:

> On Wed, Dec 16, 2015 at 04:46:37PM +0100, Andreas Färber wrote:
> > Am 10.12.2015 um 13:35 schrieb Igor Mammedov:  
> > > wrt CLI can't we do something like this?
> > > 
> > > -device some-cpu-model,socket=x[,core=y[,thread=z]]  
> > 
> > That's problematic and where my x86 remodeling got stuck. It works fine
> > (more or less) to model sockets, cores and hyperthreads for -smp, but
> > doing it dynamically did not work well. How do you determine the
> > instance size a socket with N cores and M threads needs? Allocations in
> > instance_init are to be avoided with a view to hot-plug. So either we
> > have a fully determined socket object or we need to wire individual
> > objects on the command line. The latter has bad implications for
> > atomicity and thus hot-unplug. That leaves us with dynamic properties
> > doing allocations and reporting it via Error**, something I never
> > finished and could use reviewers and contributors.
> > 
> > Anthony's old suggestion had been to use real socket product names like
> > Xeon-E5-4242 to get a 6-core, dual-thread socket, without parameters -
> > unfortunately I still don't see an easy way to define such a thing today
> > with the flexibility users will undoubtedly want.
> > 
> > And since the question came up how to detect this, what you guys seem to
> > keep forgetting is that somewhere there also needs to be a matching
> > link<> property that determines what can be plugged, i.e. QMP qom-list.
> > link<>s are the QOM equivalent to qdev's buses. The object itself needs
> > to live in /machine/peripheral or /machine/peripheral-anon
> > (/machine/unattached is supposed to go away after the QOM conversion is
> > done!) and in a machine-specific place there will be a
> > /machine/cpu-socket[0] -> /machine/peripheral-anon/device[42]
> > link<x86_64-cpu-socket> property. It might just as well be
> > /machine/daughterboard-x/cpu-core[2] -> /machine/peripheral/cpu0.
> > (Gentle reminder of the s390 ipi modeling discussion that never came to
> > any conclusion iirc.)
> > 
> > Note that I have not read this patch series yet, just some of the
> > alarming review comments.  
> 
> It has been more than a year since I posted the initial version of the
> PowerPC sPAPR CPU hotplug patchset. I guess the x86 CPU hotplug patchset
> existed even before that. Now we have patches for s390 CPU hotplug
> on the list as well. Given this situation, would it be agreeable and
> feasible to follow Igor's suggestion and de-link the QOM part from the
> actual CPU hotplug work? Maybe we can get these patchsets into 2.6 and
> work on QOM links subsequently?
How about doing it as 2 series in parallel:
 1st: a target-specific cpu hotplug
 2nd: a QMP interface that would suit management's needs to enumerate
      existing/possible CPU objects at the granularity which targets
      support from the -device/device_add aspect
      (i.e. one that focuses only on the hotplug POV)


> Regards,
> Bharata.
> 
> 


* Re: [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device
  2015-12-16 17:22       ` Igor Mammedov
  2015-12-16 22:37         ` Igor Mammedov
@ 2016-01-12  3:54         ` David Gibson
  1 sibling, 0 replies; 46+ messages in thread
From: David Gibson @ 2016-01-12  3:54 UTC (permalink / raw)
  To: Igor Mammedov
  Cc: peter.maydell, ehabkost, qemu-devel, agraf, borntraeger,
	Bharata B Rao, pbonzini, Andreas Färber

On Wed, Dec 16, 2015 at 06:22:20PM +0100, Igor Mammedov wrote:
> On Wed, 16 Dec 2015 16:57:54 +0100
> Andreas Färber <afaerber@suse.de> wrote:
> 
> > Am 16.12.2015 um 16:44 schrieb Igor Mammedov:
> > > On Wed, 16 Dec 2015 16:19:06 +0100
> > > Andreas Färber <afaerber@suse.de> wrote:
> > > 
> > >> Am 10.12.2015 um 07:15 schrieb Bharata B Rao:
> > >>> CPU hotplug granularity
> > >>> -----------------------
> > >>> CPU hotplug will now be done in cpu-core device granularity.
> > >>
> > >> Nack.
> > >>
> > >>> Are there archs that would need thread level CPU addition ?
> > >>
> > >> Yes, s390. And for x86 people called for socket level.
> > > socket level hotplug would be the last resort if we can't agree
> > > on thread level one. As it would break existing setups where
> > > user can hotplug 1 core, and  I'd like to avoid it if it is
> > > possible.
> > 
> > We still need to keep cpu-add for backwards compatibility, so I am
> > discussing solely the new device_add interface. My previous x86 series
> > went to severe hacks trying to keep cpu-add working with
> > sockets&cores.
> if possible, it would be better to make cpu-add use device_add
> internally.
> 
> > 
> > Attendees in Seattle said that thread-level hot-plug were dangerous
> > for Linux guests due to assumptions in the (guest's) scheduler
> > breaking for any incompletely filled cores or sockets. No one present
> There is no such thing as cpu hotplug at socket level in x86 Linux so far.
> CPUs are plugged at the logical (thread) cpu level, one at a time.
> The ACPI spec does the same (it describes logical CPUs), and hotplug
> notifications in the guest are handled one logical cpu at a time.

I don't think that precludes handling hotplug at socket level in
qemu.  The user <-> qemu interaction can work on the socket level,
then the qemu <-> guest interaction on the thread level, iterating
through the threads in the socket.

Problems arise when the qemu <-> guest protocol is at a coarser
granularity than the user <-> qemu protocol, rather than the other way
around, AFAICT.

This is the problem we have with the existing interfaces for Power -
there the qemu <-> guest protocol specified by the platform has no way
of representing a single thread hotplug, it's always a core at a time
(as a paravirt platform, there's not really a useful difference
between cores and sockets).

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson



end of thread, other threads:[~2016-01-12  4:31 UTC | newest]

Thread overview: 46+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-10  6:15 [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 1/9] vl: Don't allow CPU toplogies with partially filled cores Bharata B Rao
2015-12-10 10:25   ` Daniel P. Berrange
2015-12-11  3:24     ` Bharata B Rao
2015-12-14 17:37       ` Eduardo Habkost
2015-12-15  8:41         ` Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 2/9] cpu: Store CPU typename in MachineState Bharata B Rao
2015-12-14 17:29   ` Eduardo Habkost
2015-12-15  8:38     ` Bharata B Rao
2015-12-15 15:31       ` Eduardo Habkost
2015-12-16 16:54       ` Igor Mammedov
2015-12-16 19:39         ` Eduardo Habkost
2015-12-16 22:26           ` Igor Mammedov
2015-12-17 18:09             ` Eduardo Habkost
2015-12-18 10:46               ` Igor Mammedov
2015-12-18 15:51                 ` Eduardo Habkost
2015-12-18 16:01                   ` Igor Mammedov
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 3/9] cpu: Don't realize CPU from cpu_generic_init() Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 4/9] cpu: CPU socket backend Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 5/9] vl: Create CPU socket backend objects Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 6/9] cpu: Introduce CPU core device Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 7/9] spapr: Convert boot CPUs into CPU core device initialization Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 8/9] target-i386: Set apic_id during CPU initfn Bharata B Rao
2015-12-14 17:44   ` Eduardo Habkost
2015-12-15  8:14     ` Bharata B Rao
2015-12-10  6:15 ` [Qemu-devel] [RFC PATCH v0 9/9] pc: Convert boot CPUs into CPU core device initialization Bharata B Rao
2015-12-10 12:35 ` [Qemu-devel] [RFC PATCH v0 0/9] Generic cpu-core device Igor Mammedov
2015-12-11  3:57   ` Bharata B Rao
2015-12-15  5:27     ` Zhu Guihua
2015-12-16 15:16       ` Andreas Färber
2015-12-16 15:11     ` Igor Mammedov
2015-12-17  9:19       ` Peter Krempa
2015-12-16 15:46   ` Andreas Färber
2015-12-16 21:58     ` Igor Mammedov
2015-12-24  1:59       ` Zhu Guihua
2015-12-29 13:52         ` Igor Mammedov
2016-01-01  3:47     ` Bharata B Rao
2016-01-04 12:52       ` Igor Mammedov
2015-12-10 20:25 ` Matthew Rosato
2015-12-14  6:25   ` Bharata B Rao
2015-12-16 15:19 ` Andreas Färber
2015-12-16 15:44   ` Igor Mammedov
2015-12-16 15:57     ` Andreas Färber
2015-12-16 17:22       ` Igor Mammedov
2015-12-16 22:37         ` Igor Mammedov
2016-01-12  3:54         ` David Gibson
