* [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Hi,

This is the next version of the CPU hotplug support patchset for
PowerPC sPAPR guests. It is a split-out from the previous version (v3),
which carried CPU and memory hotplug together. This patchset applies on
the spapr-next branch of David Gibson's tree.

In the previous version, I was doing CPU addition at socket
granularity: one hotplug request would add one complete CPU socket with
all the cores and threads as per the boot time topology specification.
Based on the feedback for v3, I am switching back to the earlier
method, wherein there is no notion of a socket device. In this version
I don't create any additional device abstraction over the CPU device,
but use the existing CPU device and add full cores at once. One hotplug
request will add a complete core with all the underlying threads.

I have enabled device_add based hotplug for the POWER8 family of
processors, and currently the semantics look like this:

(qemu) device_add POWER8-powerpc64-cpu,id=cpu8
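
Hot unplug (added by patch 10 in this series) is then expected to work
with the usual counterpart of the above, reusing the id assigned at
plug time (a sketch; the unplug handler wired up in patch 10 is what
this would exercise):

(qemu) device_del cpu8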

v3: https://lists.nongnu.org/archive/html/qemu-devel/2015-04/msg02910.html

Bharata B Rao (10):
  exec: Remove cpu from cpus list during cpu_exec_exit()
  exec: Do vmstate unregistration from cpu_exec_exit()
  cpus: Add a sync version of cpu_remove()
  xics_kvm: Add cpu_destroy method to XICS
  spapr: Create pseries-2.5 machine
  spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries
  spapr: CPU hotplug support
  spapr: Support topologies with unfilled cores
  spapr: CPU hot unplug support
  target-ppc: Enable CPU hotplug for POWER8 CPU family

Gu Zheng (1):
  cpus: Reclaim vCPU objects

 cpus.c                      |  55 ++++++++
 exec.c                      |  30 +++++
 hw/intc/xics.c              |  12 ++
 hw/intc/xics_kvm.c          |   9 ++
 hw/ppc/spapr.c              | 300 +++++++++++++++++++++++++++++++++++++++++++-
 hw/ppc/spapr_events.c       |   3 +
 hw/ppc/spapr_rtas.c         |  11 ++
 include/hw/ppc/spapr.h      |   1 +
 include/hw/ppc/xics.h       |   2 +
 include/qom/cpu.h           |  19 +++
 include/sysemu/kvm.h        |   1 +
 kvm-all.c                   |  57 ++++++++-
 kvm-stub.c                  |   5 +
 target-ppc/translate_init.c |  10 ++
 14 files changed, 511 insertions(+), 4 deletions(-)

-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
should be removed from the list in cpu_exec_exit().

cpu_exec_exit() is called from the generic CPU::instance_finalize, and
some architectures like PowerPC also call it from the CPU unrealize
function. So ensure that we dequeue the cpu only once.

Instead of introducing a new field CPUState.queued, I could have used
CPUState.cpu_index to check if the cpu is already dequeued from the list.
Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 exec.c            | 11 +++++++++++
 include/qom/cpu.h |  1 +
 2 files changed, 12 insertions(+)

diff --git a/exec.c b/exec.c
index 0a4a0c5..b196d68 100644
--- a/exec.c
+++ b/exec.c
@@ -550,6 +550,10 @@ void cpu_exec_exit(CPUState *cpu)
         return;
     }
 
+    if (cpu->queued) {
+        QTAILQ_REMOVE(&cpus, cpu, node);
+        cpu->queued = false;
+    }
     bitmap_clear(cpu_index_map, cpu->cpu_index, 1);
     cpu->cpu_index = -1;
 }
@@ -568,6 +572,12 @@ static int cpu_get_free_index(Error **errp)
 
 void cpu_exec_exit(CPUState *cpu)
 {
+    cpu_list_lock();
+    if (cpu->queued) {
+        QTAILQ_REMOVE(&cpus, cpu, node);
+        cpu->queued = false;
+    }
+    cpu_list_unlock();
 }
 #endif
 
@@ -595,6 +605,7 @@ void cpu_exec_init(CPUState *cpu, Error **errp)
         return;
     }
     QTAILQ_INSERT_TAIL(&cpus, cpu, node);
+    cpu->queued = true;
 #if defined(CONFIG_USER_ONLY)
     cpu_list_unlock();
 #endif
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 20aabc9..a00e3a8 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -284,6 +284,7 @@ struct CPUState {
     int gdb_num_regs;
     int gdb_num_g_regs;
     QTAILQ_ENTRY(CPUState) node;
+    bool queued;
 
     /* ice debug support */
     QTAILQ_HEAD(breakpoints_head, CPUBreakpoint) breakpoints;
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 02/11] exec: Do vmstate unregistration from cpu_exec_exit()
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

cpu_exec_init() does vmstate_register and register_savevm for the CPU
device. These need to be undone in cpu_exec_exit(). These changes are
needed to support CPU hot removal and also to correctly fail hotplug
attempts beyond max_cpus.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 exec.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/exec.c b/exec.c
index b196d68..3415cd7 100644
--- a/exec.c
+++ b/exec.c
@@ -545,6 +545,8 @@ static int cpu_get_free_index(Error **errp)
 
 void cpu_exec_exit(CPUState *cpu)
 {
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
     if (cpu->cpu_index == -1) {
         /* cpu_index was never allocated by this @cpu or was already freed. */
         return;
@@ -556,6 +558,15 @@ void cpu_exec_exit(CPUState *cpu)
     }
     bitmap_clear(cpu_index_map, cpu->cpu_index, 1);
     cpu->cpu_index = -1;
+    if (cc->vmsd != NULL) {
+        vmstate_unregister(NULL, cc->vmsd, cpu);
+    }
+#if defined(CPU_SAVE_VERSION)
+    unregister_savevm(NULL, "cpu", cpu->env_ptr);
+#endif
+    if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
+        vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
+    }
 }
 #else
 
@@ -572,12 +583,20 @@ static int cpu_get_free_index(Error **errp)
 
 void cpu_exec_exit(CPUState *cpu)
 {
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+
     cpu_list_lock();
     if (cpu->queued) {
         QTAILQ_REMOVE(&cpus, cpu, node);
         cpu->queued = false;
     }
     cpu_list_unlock();
+    if (cc->vmsd != NULL) {
+        vmstate_unregister(NULL, cc->vmsd, cpu);
+    }
+    if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
+        vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
+    }
 }
 #endif
 
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 03/11] cpus: Reclaim vCPU objects
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: Zhu Guihua, aik, Bharata B Rao, mdroth, agraf, Chen Fan,
	qemu-ppc, tyreld, nfont, Gu Zheng, imammedo, afaerber, david

From: Gu Zheng <guz.fnst@cn.fujitsu.com>

In order to deal well with KVM vCPUs (which cannot be removed without
any protection), we do not close the KVM vCPU fd. Instead, we record it
and mark it as stopped in a list, so that it can be reused for a
subsequent CPU hot-add request if possible. This is also the approach
that the KVM developers suggested:
https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
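
As a sketch, the fd lifecycle with the helpers this patch introduces
looks like this (the function names are the ones added below):

  /* on unplug: munmap kvm_run and park cpu->kvm_fd on kvm_parked_vcpus */
  kvm_destroy_vcpu(cpu);

  /* on a later hot-add with the same vcpu_id: kvm_get_vcpu() hands back
   * the parked fd instead of issuing KVM_CREATE_VCPU again */
  kvm_init_vcpu(cpu);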

Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
               [Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
                isn't needed as it is done from cpu_exec_exit()]
---
 cpus.c               | 41 +++++++++++++++++++++++++++++++++++++
 include/qom/cpu.h    | 10 +++++++++
 include/sysemu/kvm.h |  1 +
 kvm-all.c            | 57 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 kvm-stub.c           |  5 +++++
 5 files changed, 113 insertions(+), 1 deletion(-)

diff --git a/cpus.c b/cpus.c
index a822ce3..73ae2e7 100644
--- a/cpus.c
+++ b/cpus.c
@@ -888,6 +888,21 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
     qemu_cpu_kick(cpu);
 }
 
+static void qemu_kvm_destroy_vcpu(CPUState *cpu)
+{
+    if (kvm_destroy_vcpu(cpu) < 0) {
+        error_report("kvm_destroy_vcpu failed");
+        exit(EXIT_FAILURE);
+    }
+
+    object_unparent(OBJECT(cpu));
+}
+
+static void qemu_tcg_destroy_vcpu(CPUState *cpu)
+{
+    object_unparent(OBJECT(cpu));
+}
+
 static void flush_queued_work(CPUState *cpu)
 {
     struct qemu_work_item *wi;
@@ -982,6 +997,11 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
             }
         }
         qemu_kvm_wait_io_event(cpu);
+        if (cpu->exit && !cpu_can_run(cpu)) {
+            qemu_kvm_destroy_vcpu(cpu);
+            qemu_mutex_unlock(&qemu_global_mutex);
+            return NULL;
+        }
     }
 
     return NULL;
@@ -1037,6 +1057,7 @@ static void tcg_exec_all(void);
 static void *qemu_tcg_cpu_thread_fn(void *arg)
 {
     CPUState *cpu = arg;
+    CPUState *remove_cpu = NULL;
 
     rcu_register_thread();
 
@@ -1075,6 +1096,16 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
             }
         }
         qemu_tcg_wait_io_event();
+        CPU_FOREACH(cpu) {
+            if (cpu->exit && !cpu_can_run(cpu)) {
+                remove_cpu = cpu;
+                break;
+            }
+        }
+        if (remove_cpu) {
+            qemu_tcg_destroy_vcpu(remove_cpu);
+            remove_cpu = NULL;
+        }
     }
 
     return NULL;
@@ -1245,6 +1276,13 @@ void resume_all_vcpus(void)
     }
 }
 
+void cpu_remove(CPUState *cpu)
+{
+    cpu->stop = true;
+    cpu->exit = true;
+    qemu_cpu_kick(cpu);
+}
+
 /* For temporary buffers for forming a name */
 #define VCPU_THREAD_NAME_SIZE 16
 
@@ -1437,6 +1475,9 @@ static void tcg_exec_all(void)
                 break;
             }
         } else if (cpu->stop || cpu->stopped) {
+            if (cpu->exit) {
+                next_cpu = CPU_NEXT(cpu);
+            }
             break;
         }
     }
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index a00e3a8..136d9fe 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -224,6 +224,7 @@ struct kvm_run;
  * @halted: Nonzero if the CPU is in suspended state.
  * @stop: Indicates a pending stop request.
  * @stopped: Indicates the CPU has been artificially stopped.
+ * @exit: Indicates the CPU has exited due to an unplug operation.
  * @tcg_exit_req: Set to force TCG to stop executing linked TBs for this
  *           CPU and return to its top level loop.
  * @singlestep_enabled: Flags for single-stepping.
@@ -267,6 +268,7 @@ struct CPUState {
     bool created;
     bool stop;
     bool stopped;
+    bool exit;
     volatile sig_atomic_t exit_request;
     uint32_t interrupt_request;
     int singlestep_enabled;
@@ -645,6 +647,14 @@ void cpu_exit(CPUState *cpu);
  */
 void cpu_resume(CPUState *cpu);
 
+ /**
+ * cpu_remove:
+ * @cpu: The CPU to remove.
+ *
+ * Requests the CPU to be removed.
+ */
+void cpu_remove(CPUState *cpu);
+
 /**
  * qemu_init_vcpu:
  * @cpu: The vCPU to initialize.
diff --git a/include/sysemu/kvm.h b/include/sysemu/kvm.h
index 983e99e..d53819b 100644
--- a/include/sysemu/kvm.h
+++ b/include/sysemu/kvm.h
@@ -191,6 +191,7 @@ int kvm_has_intx_set_mask(void);
 
 int kvm_init_vcpu(CPUState *cpu);
 int kvm_cpu_exec(CPUState *cpu);
+int kvm_destroy_vcpu(CPUState *cpu);
 
 #ifdef NEED_CPU_H
 
diff --git a/kvm-all.c b/kvm-all.c
index 06e06f2..64a160a 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -59,6 +59,12 @@
 
 #define KVM_MSI_HASHTAB_SIZE    256
 
+struct KVMParkedVcpu {
+    unsigned long vcpu_id;
+    int kvm_fd;
+    QLIST_ENTRY(KVMParkedVcpu) node;
+};
+
 struct KVMState
 {
     AccelState parent_obj;
@@ -95,6 +101,7 @@ struct KVMState
     bool direct_msi;
 #endif
     KVMMemoryListener memory_listener;
+    QLIST_HEAD(, KVMParkedVcpu) kvm_parked_vcpus;
 };
 
 KVMState *kvm_state;
@@ -235,6 +242,53 @@ static int kvm_set_user_memory_region(KVMMemoryListener *kml, KVMSlot *slot)
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }
 
+int kvm_destroy_vcpu(CPUState *cpu)
+{
+    KVMState *s = kvm_state;
+    long mmap_size;
+    struct KVMParkedVcpu *vcpu = NULL;
+    int ret = 0;
+
+    DPRINTF("kvm_destroy_vcpu\n");
+
+    mmap_size = kvm_ioctl(s, KVM_GET_VCPU_MMAP_SIZE, 0);
+    if (mmap_size < 0) {
+        ret = mmap_size;
+        DPRINTF("KVM_GET_VCPU_MMAP_SIZE failed\n");
+        goto err;
+    }
+
+    ret = munmap(cpu->kvm_run, mmap_size);
+    if (ret < 0) {
+        goto err;
+    }
+
+    vcpu = g_malloc0(sizeof(*vcpu));
+    vcpu->vcpu_id = kvm_arch_vcpu_id(cpu);
+    vcpu->kvm_fd = cpu->kvm_fd;
+    QLIST_INSERT_HEAD(&kvm_state->kvm_parked_vcpus, vcpu, node);
+err:
+    return ret;
+}
+
+static int kvm_get_vcpu(KVMState *s, unsigned long vcpu_id)
+{
+    struct KVMParkedVcpu *cpu;
+
+    QLIST_FOREACH(cpu, &s->kvm_parked_vcpus, node) {
+        if (cpu->vcpu_id == vcpu_id) {
+            int kvm_fd;
+
+            QLIST_REMOVE(cpu, node);
+            kvm_fd = cpu->kvm_fd;
+            g_free(cpu);
+            return kvm_fd;
+        }
+    }
+
+    return kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)vcpu_id);
+}
+
 int kvm_init_vcpu(CPUState *cpu)
 {
     KVMState *s = kvm_state;
@@ -243,7 +297,7 @@ int kvm_init_vcpu(CPUState *cpu)
 
     DPRINTF("kvm_init_vcpu\n");
 
-    ret = kvm_vm_ioctl(s, KVM_CREATE_VCPU, (void *)kvm_arch_vcpu_id(cpu));
+    ret = kvm_get_vcpu(s, kvm_arch_vcpu_id(cpu));
     if (ret < 0) {
         DPRINTF("kvm_create_vcpu failed\n");
         goto err;
@@ -1469,6 +1523,7 @@ static int kvm_init(MachineState *ms)
 #ifdef KVM_CAP_SET_GUEST_DEBUG
     QTAILQ_INIT(&s->kvm_sw_breakpoints);
 #endif
+    QLIST_INIT(&s->kvm_parked_vcpus);
     s->vmfd = -1;
     s->fd = qemu_open("/dev/kvm", O_RDWR);
     if (s->fd == -1) {
diff --git a/kvm-stub.c b/kvm-stub.c
index d9ad624..4b3ba8c 100644
--- a/kvm-stub.c
+++ b/kvm-stub.c
@@ -31,6 +31,11 @@ bool kvm_gsi_direct_mapping;
 bool kvm_allowed;
 bool kvm_readonly_mem_allowed;
 
+int kvm_destroy_vcpu(CPUState *cpu)
+{
+    return -ENOSYS;
+}
+
 int kvm_init_vcpu(CPUState *cpu)
 {
     return -ENOSYS;
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 04/11] cpus: Add a sync version of cpu_remove()
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

This sync API will be used by the CPU hotplug code to wait for the CPU
to be completely removed before flagging the failure back to the
device_add command.

A sync version of this call is needed to correctly recover from CPU
realization failures when the ->plug() handler fails.
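
A sketch of the intended error-path usage (this mirrors how the sPAPR
plug handler later in this series uses it; the failing check here is
just a placeholder):

  if (check_fails) {
      error_setg(errp, "CPU hotplug not supported for this machine");
      cpu_remove_sync(cs);   /* block until the vCPU thread is gone */
      return;
  }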

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 cpus.c            | 14 ++++++++++++++
 include/qom/cpu.h |  8 ++++++++
 2 files changed, 22 insertions(+)

diff --git a/cpus.c b/cpus.c
index 73ae2e7..9d9644e 100644
--- a/cpus.c
+++ b/cpus.c
@@ -999,6 +999,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
         qemu_kvm_wait_io_event(cpu);
         if (cpu->exit && !cpu_can_run(cpu)) {
             qemu_kvm_destroy_vcpu(cpu);
+            cpu->created = false;
+            qemu_cond_signal(&qemu_cpu_cond);
             qemu_mutex_unlock(&qemu_global_mutex);
             return NULL;
         }
@@ -1104,6 +1106,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
         }
         if (remove_cpu) {
             qemu_tcg_destroy_vcpu(remove_cpu);
+            remove_cpu->created = false;
+            qemu_cond_signal(&qemu_cpu_cond);
             remove_cpu = NULL;
         }
     }
@@ -1283,6 +1287,16 @@ void cpu_remove(CPUState *cpu)
     qemu_cpu_kick(cpu);
 }
 
+void cpu_remove_sync(CPUState *cpu)
+{
+    cpu->stop = true;
+    cpu->exit = true;
+    qemu_cpu_kick(cpu);
+    while (cpu->created) {
+        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
+    }
+}
+
 /* For temporary buffers for forming a name */
 #define VCPU_THREAD_NAME_SIZE 16
 
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 136d9fe..65c6852 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -655,6 +655,14 @@ void cpu_resume(CPUState *cpu);
  */
 void cpu_remove(CPUState *cpu);
 
+ /**
+ * cpu_remove_sync:
+ * @cpu: The CPU to remove.
+ *
+ * Requests the CPU to be removed and waits till it is removed.
+ */
+void cpu_remove_sync(CPUState *cpu);
+
 /**
  * qemu_init_vcpu:
  * @cpu: The vCPU to initialize.
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 05/11] xics_kvm: Add cpu_destroy method to XICS
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

XICS is set up for each CPU during initialization. Provide a routine
to undo the same when a CPU is unplugged.

This allows a reboot of a VM that has undergone CPU hotplug and unplug
to work correctly.
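
The intended caller is the CPU release path added later in this series,
roughly:

  xics_cpu_destroy(spapr->icp, cpu);   /* from spapr_cpu_destroy() */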

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/intc/xics.c        | 12 ++++++++++++
 hw/intc/xics_kvm.c    |  9 +++++++++
 include/hw/ppc/xics.h |  2 ++
 3 files changed, 23 insertions(+)

diff --git a/hw/intc/xics.c b/hw/intc/xics.c
index 924b1ae..cb33438 100644
--- a/hw/intc/xics.c
+++ b/hw/intc/xics.c
@@ -44,6 +44,18 @@ static int get_cpu_index_by_dt_id(int cpu_dt_id)
     return -1;
 }
 
+void xics_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
+{
+    CPUState *cs = CPU(cpu);
+    XICSStateClass *info = XICS_COMMON_GET_CLASS(icp);
+
+    assert(cs->cpu_index < icp->nr_servers);
+
+    if (info->cpu_destroy) {
+        info->cpu_destroy(icp, cpu);
+    }
+}
+
 void xics_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
 {
     CPUState *cs = CPU(cpu);
diff --git a/hw/intc/xics_kvm.c b/hw/intc/xics_kvm.c
index d58729c..e134f65 100644
--- a/hw/intc/xics_kvm.c
+++ b/hw/intc/xics_kvm.c
@@ -356,6 +356,14 @@ static void xics_kvm_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
     }
 }
 
+static void xics_kvm_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
+{
+    CPUState *cs = CPU(cpu);
+    ICPState *ss = &icp->ss[cs->cpu_index];
+
+    ss->cs = NULL;
+}
+
 static void xics_kvm_set_nr_irqs(XICSState *icp, uint32_t nr_irqs, Error **errp)
 {
     icp->nr_irqs = icp->ics->nr_irqs = nr_irqs;
@@ -486,6 +494,7 @@ static void xics_kvm_class_init(ObjectClass *oc, void *data)
 
     dc->realize = xics_kvm_realize;
     xsc->cpu_setup = xics_kvm_cpu_setup;
+    xsc->cpu_destroy = xics_kvm_cpu_destroy;
     xsc->set_nr_irqs = xics_kvm_set_nr_irqs;
     xsc->set_nr_servers = xics_kvm_set_nr_servers;
 }
diff --git a/include/hw/ppc/xics.h b/include/hw/ppc/xics.h
index 355a966..2faad48 100644
--- a/include/hw/ppc/xics.h
+++ b/include/hw/ppc/xics.h
@@ -68,6 +68,7 @@ struct XICSStateClass {
     DeviceClass parent_class;
 
     void (*cpu_setup)(XICSState *icp, PowerPCCPU *cpu);
+    void (*cpu_destroy)(XICSState *icp, PowerPCCPU *cpu);
     void (*set_nr_irqs)(XICSState *icp, uint32_t nr_irqs, Error **errp);
     void (*set_nr_servers)(XICSState *icp, uint32_t nr_servers, Error **errp);
 };
@@ -166,5 +167,6 @@ int xics_alloc_block(XICSState *icp, int src, int num, bool lsi, bool align);
 void xics_free(XICSState *icp, int irq, int num);
 
 void xics_cpu_setup(XICSState *icp, PowerPCCPU *cpu);
+void xics_cpu_destroy(XICSState *icp, PowerPCCPU *cpu);
 
 #endif /* __XICS_H__ */
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 06/11] spapr: Create pseries-2.5 machine
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Add pseries-2.5 machine version.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index dfd808f..1d8a12a 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2356,7 +2356,7 @@ static void spapr_machine_2_4_class_init(ObjectClass *oc, void *data)
     mc->name = "pseries-2.4";
     mc->desc = "pSeries Logical Partition (PAPR compliant) v2.4";
     mc->alias = "pseries";
-    mc->is_default = 1;
+    mc->is_default = 0;
     smc->dr_lmb_enabled = true;
 }
 
@@ -2366,6 +2366,24 @@ static const TypeInfo spapr_machine_2_4_info = {
     .class_init    = spapr_machine_2_4_class_init,
 };
 
+static void spapr_machine_2_5_class_init(ObjectClass *oc, void *data)
+{
+    MachineClass *mc = MACHINE_CLASS(oc);
+    sPAPRMachineClass *smc = SPAPR_MACHINE_CLASS(oc);
+
+    mc->name = "pseries-2.5";
+    mc->desc = "pSeries Logical Partition (PAPR compliant) v2.5";
+    mc->alias = "pseries";
+    mc->is_default = 1;
+    smc->dr_lmb_enabled = true;
+}
+
+static const TypeInfo spapr_machine_2_5_info = {
+    .name          = TYPE_SPAPR_MACHINE "2.5",
+    .parent        = TYPE_SPAPR_MACHINE,
+    .class_init    = spapr_machine_2_5_class_init,
+};
+
 static void spapr_machine_register_types(void)
 {
     type_register_static(&spapr_machine_info);
@@ -2373,6 +2391,7 @@ static void spapr_machine_register_types(void)
     type_register_static(&spapr_machine_2_2_info);
     type_register_static(&spapr_machine_2_3_info);
     type_register_static(&spapr_machine_2_4_info);
+    type_register_static(&spapr_machine_2_5_info);
 }
 
 type_init(spapr_machine_register_types)
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 07/11] spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Start supporting CPU hotplug from pseries-2.5 onwards. Add CPU
DRC (Dynamic Resource Connector) device tree entries.
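
With this, the /cpus node carries the standard PAPR
dynamic-reconfiguration properties that spapr_drc_populate_dt() emits.
Illustratively (the values shown are a sketch; the actual contents
depend on the DRC indexes, i.e. on smt and max_cpus):

  /cpus {
      ibm,drc-indexes = <...>;
      ibm,drc-power-domains = <...>;
      ibm,drc-names = "CPU 8", ...;
      ibm,drc-types = "CPU", ...;
  };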

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c         | 23 +++++++++++++++++++++++
 include/hw/ppc/spapr.h |  1 +
 2 files changed, 24 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 1d8a12a..a106980 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -955,6 +955,16 @@ static void spapr_finalize_fdt(sPAPRMachineState *spapr,
         _FDT(spapr_drc_populate_dt(fdt, 0, NULL, SPAPR_DR_CONNECTOR_TYPE_LMB));
     }
 
+    if (smc->dr_cpu_enabled) {
+        int offset = fdt_path_offset(fdt, "/cpus");
+        ret = spapr_drc_populate_dt(fdt, offset, NULL,
+                                    SPAPR_DR_CONNECTOR_TYPE_CPU);
+        if (ret < 0) {
+            fprintf(stderr, "Couldn't set up CPU DR device tree properties\n");
+            exit(1);
+        }
+    }
+
     _FDT((fdt_pack(fdt)));
 
     if (fdt_totalsize(fdt) > FDT_MAX_SIZE) {
@@ -1674,6 +1684,8 @@ static void ppc_spapr_init(MachineState *machine)
     long load_limit, fw_size;
     bool kernel_le = false;
     char *filename;
+    int smt = kvmppc_smt_threads();
+    int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
 
     msi_supported = true;
 
@@ -1739,6 +1751,15 @@ static void ppc_spapr_init(MachineState *machine)
         spapr_validate_node_memory(machine);
     }
 
+    if (smc->dr_cpu_enabled) {
+        for (i = 0; i < smp_max_cores; i++) {
+            sPAPRDRConnector *drc =
+                spapr_dr_connector_new(OBJECT(spapr),
+                                       SPAPR_DR_CONNECTOR_TYPE_CPU, i * smt);
+            qemu_register_reset(spapr_drc_reset, drc);
+        }
+    }
+
     /* init CPUs */
     if (machine->cpu_model == NULL) {
         machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
@@ -2213,6 +2234,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
     hc->unplug = spapr_machine_device_unplug;
 
     smc->dr_lmb_enabled = false;
+    smc->dr_cpu_enabled = false;
     fwc->get_dev_path = spapr_get_fw_dev_path;
     nc->nmi_monitor_handler = spapr_nmi;
 }
@@ -2376,6 +2398,7 @@ static void spapr_machine_2_5_class_init(ObjectClass *oc, void *data)
     mc->alias = "pseries";
     mc->is_default = 1;
     smc->dr_lmb_enabled = true;
+    smc->dr_cpu_enabled = true;
 }
 
 static const TypeInfo spapr_machine_2_5_info = {
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index b6cb0d0..9e364ba 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -36,6 +36,7 @@ struct sPAPRMachineClass {
 
     /*< public >*/
     bool dr_lmb_enabled; /* enable dynamic-reconfig/hotplug of LMBs */
+    bool dr_cpu_enabled; /* enable dynamic-reconfig/hotplug of CPUs */
 };
 
 /**
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 08/11] spapr: CPU hotplug support
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Support CPU hotplug via the device_add command. Set up device tree
entries for the hotplugged CPU core and use the existing EPOW event
infrastructure to send the CPU hotplug notification to the guest.

Only cores are created explicitly, from the boot path as well as the
hotplug path; the ->plug() handler of the core then creates the threads
of the core.

Also support coldplugged CPUs that are specified by the -device option
on the command line.
Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c              | 166 +++++++++++++++++++++++++++++++++++++++++++-
 hw/ppc/spapr_events.c       |   3 +
 hw/ppc/spapr_rtas.c         |  11 +++
 target-ppc/translate_init.c |   8 +++
 4 files changed, 186 insertions(+), 2 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index a106980..74637b3 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -599,6 +599,18 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
     unsigned sockets = opts ? qemu_opt_get_number(opts, "sockets", 0) : 0;
     uint32_t cpus_per_socket = sockets ? (smp_cpus / sockets) : 1;
     uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
+    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
+    sPAPRDRConnector *drc;
+    sPAPRDRConnectorClass *drck;
+    int drc_index;
+
+    if (smc->dr_cpu_enabled) {
+        drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index);
+        g_assert(drc);
+        drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
+        drc_index = drck->get_index(drc);
+        _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_index)));
+    }
 
     _FDT((fdt_setprop_cell(fdt, offset, "reg", index)));
     _FDT((fdt_setprop_string(fdt, offset, "device_type", "cpu")));
@@ -1686,6 +1698,7 @@ static void ppc_spapr_init(MachineState *machine)
     char *filename;
     int smt = kvmppc_smt_threads();
     int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
+    int smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);
 
     msi_supported = true;
 
@@ -1764,7 +1777,7 @@ static void ppc_spapr_init(MachineState *machine)
     if (machine->cpu_model == NULL) {
         machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
     }
-    for (i = 0; i < smp_cpus; i++) {
+    for (i = 0; i < smp_cores; i++) {
         cpu = cpu_ppc_init(machine->cpu_model);
         if (cpu == NULL) {
             fprintf(stderr, "Unable to find PowerPC CPU definition\n");
@@ -2152,10 +2165,135 @@ out:
     error_propagate(errp, local_err);
 }
 
+static void *spapr_populate_hotplug_cpu_dt(DeviceState *dev, CPUState *cs,
+                                           int *fdt_offset,
+                                           sPAPRMachineState *spapr)
+{
+    PowerPCCPU *cpu = POWERPC_CPU(cs);
+    DeviceClass *dc = DEVICE_GET_CLASS(cs);
+    int id = ppc_get_vcpu_dt_id(cpu);
+    void *fdt;
+    int offset, fdt_size;
+    char *nodename;
+
+    fdt = create_device_tree(&fdt_size);
+    nodename = g_strdup_printf("%s@%x", dc->fw_name, id);
+    offset = fdt_add_subnode(fdt, 0, nodename);
+
+    spapr_populate_cpu_dt(cs, fdt, offset, spapr);
+    g_free(nodename);
+
+    *fdt_offset = offset;
+    return fdt;
+}
+
+static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
+                            Error **errp)
+{
+    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
+    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
+    CPUState *cs = CPU(dev);
+    PowerPCCPU *cpu = POWERPC_CPU(cs);
+    int id = ppc_get_vcpu_dt_id(cpu);
+    sPAPRDRConnector *drc =
+        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, id);
+    sPAPRDRConnectorClass *drck;
+    int smt = kvmppc_smt_threads();
+    Error *local_err = NULL;
+    void *fdt = NULL;
+    int i, fdt_offset = 0;
+
+    /* Set NUMA node for the added CPUs  */
+    for (i = 0; i < nb_numa_nodes; i++) {
+        if (test_bit(cs->cpu_index, numa_info[i].node_cpu)) {
+            cs->numa_node = i;
+            break;
+        }
+    }
+
+    /*
+     * Currently CPU core and threads of a core aren't really different
+     * from QEMU point of view since all of them are just CPU devices. Hence
+     * there is no separate realize routines for cores and threads.
+     * We use the id check below to do things differently for cores and threads.
+     *
+     * SMT threads return from here, only main thread (core) will
+     * continue, create threads and signal hotplug event to the guest.
+     */
+    if ((id % smt) != 0) {
+        return;
+    }
+
+    /* Create SMT threads of the core. */
+    for (i = 1; i < smp_threads; i++) {
+        cpu = cpu_ppc_init(current_machine->cpu_model);
+        if (!cpu) {
+            error_report("Unable to find PowerPC CPU definition: %s",
+                          current_machine->cpu_model);
+            exit(EXIT_FAILURE);
+        }
+    }
+
+    if (!smc->dr_cpu_enabled) {
+        /*
+         * This is a cold plugged CPU but the machine doesn't support
+         * DR. So skip the hotplug path ensuring that the CPU is brought
+         * up online without an associated DR connector.
+         */
+        return;
+    }
+
+    g_assert(drc);
+
+    /*
+     * Setup CPU DT entries only for hotplugged CPUs. For boot time or
+     * coldplugged CPUs DT entries are setup in spapr_finalize_fdt().
+     */
+    if (dev->hotplugged) {
+        fdt = spapr_populate_hotplug_cpu_dt(dev, cs, &fdt_offset, ms);
+    }
+
+    drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
+    drck->attach(drc, dev, fdt, fdt_offset, !dev->hotplugged, &local_err);
+    if (local_err) {
+        g_free(fdt);
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    /*
+     * We send hotplug notification interrupt to the guest only in case
+     * of hotplugged CPUs.
+     */
+    if (dev->hotplugged) {
+        spapr_hotplug_req_add_event(drc);
+    } else {
+        /*
+         * HACK to support removal of hotplugged CPU after VM migration:
+         *
+         * Since we want to be able to hot-remove those coldplugged CPUs
+         * started at boot time using -device option at the target VM, we set
+         * the right allocation_state and isolation_state for them, which for
+         * the hotplugged CPUs would be set via RTAS calls done from the
+         * guest during hotplug.
+         *
+         * This allows the coldplugged CPUs started using -device option to
+         * have the right isolation and allocation states as expected by the
+         * CPU hot removal code.
+         *
+         * This hack will be removed once we have DRC states migrated as part
+         * of VM migration.
+         */
+        drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
+        drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
+    }
+}
+
 static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
                                       DeviceState *dev, Error **errp)
 {
     sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
+    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
 
     if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
         int node;
@@ -2192,6 +2330,29 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
         }
 
         spapr_memory_plug(hotplug_dev, dev, node, errp);
+    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
+        CPUState *cs = CPU(dev);
+        PowerPCCPU *cpu = POWERPC_CPU(cs);
+
+        if (!smc->dr_cpu_enabled && dev->hotplugged) {
+            error_setg(errp, "CPU hotplug not supported for this machine");
+            cpu_remove_sync(cs);
+            return;
+        }
+
+        if (((smp_cpus % smp_threads) || (max_cpus % smp_threads)) &&
+            dev->hotplugged) {
+            error_setg(errp, "CPU hotplug not supported for the topology "
+                       "with %d threads %d cpus and %d maxcpus since "
+                       "CPUs can't be fit fully into cores",
+                       smp_threads, smp_cpus, max_cpus);
+            cpu_remove_sync(cs);
+            return;
+        }
+
+        spapr_cpu_init(ms, cpu);
+        spapr_cpu_reset(cpu);
+        spapr_cpu_plug(hotplug_dev, dev, errp);
     }
 }
 
@@ -2206,7 +2367,8 @@ static void spapr_machine_device_unplug(HotplugHandler *hotplug_dev,
 static HotplugHandler *spapr_get_hotpug_handler(MachineState *machine,
                                              DeviceState *dev)
 {
-    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
+    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) ||
+        object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
         return HOTPLUG_HANDLER(machine);
     }
     return NULL;
diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
index 98bf7ae..1901652 100644
--- a/hw/ppc/spapr_events.c
+++ b/hw/ppc/spapr_events.c
@@ -437,6 +437,9 @@ static void spapr_hotplug_req_event(sPAPRDRConnector *drc, uint8_t hp_action)
     case SPAPR_DR_CONNECTOR_TYPE_LMB:
         hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_MEMORY;
         break;
+    case SPAPR_DR_CONNECTOR_TYPE_CPU:
+        hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_CPU;
+        break;
     default:
         /* we shouldn't be signaling hotplug events for resources
          * that don't support them
diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
index e99e25f..160eb07 100644
--- a/hw/ppc/spapr_rtas.c
+++ b/hw/ppc/spapr_rtas.c
@@ -159,6 +159,16 @@ static void rtas_query_cpu_stopped_state(PowerPCCPU *cpu_,
     rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
 }
 
+static void spapr_cpu_set_endianness(PowerPCCPU *cpu)
+{
+    PowerPCCPU *fcpu = POWERPC_CPU(first_cpu);
+    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(fcpu);
+
+    if (!pcc->interrupts_big_endian(fcpu)) {
+        cpu->env.spr[SPR_LPCR] |= LPCR_ILE;
+    }
+}
+
 static void rtas_start_cpu(PowerPCCPU *cpu_, sPAPRMachineState *spapr,
                            uint32_t token, uint32_t nargs,
                            target_ulong args,
@@ -195,6 +205,7 @@ static void rtas_start_cpu(PowerPCCPU *cpu_, sPAPRMachineState *spapr,
         env->nip = start;
         env->gpr[3] = r3;
         cs->halted = 0;
+        spapr_cpu_set_endianness(cpu);
 
         qemu_cpu_kick(cs);
 
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index 16d7b16..c19d630 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -30,6 +30,9 @@
 #include "qemu/error-report.h"
 #include "qapi/visitor.h"
 #include "hw/qdev-properties.h"
+#if !defined(CONFIG_USER_ONLY)
+#include "sysemu/sysemu.h"
+#endif
 
 //#define PPC_DUMP_CPU
 //#define PPC_DEBUG_SPR
@@ -8936,6 +8939,11 @@ static void ppc_cpu_realizefn(DeviceState *dev, Error **errp)
     }
 
 #if !defined(CONFIG_USER_ONLY)
+    if (cs->cpu_index >= max_cpus) {
+        error_setg(errp, "Cannot have more than %d CPUs", max_cpus);
+        return;
+    }
+
     cpu->cpu_dt_id = (cs->cpu_index / smp_threads) * max_smt
         + (cs->cpu_index % smp_threads);
 #endif
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

QEMU supports CPU topologies where there can be cores which are not
completely filled with all the threads as per the specified SMT mode.

Restore support for such topologies (e.g. -smp 15,cores=4,threads=4).
The last core will always have the deficit, even when -device options
are used to cold-plug the cores.
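
As a worked example, with -smp 15,cores=4,threads=4 smp_remaining_cpus
starts at 15; the first three cores consume 4 threads each
(15 -> 11 -> 7 -> 3) and the fourth core creates only the remaining
3 threads.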

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 74637b3..004a8e1 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -94,6 +94,8 @@
 
 #define HTAB_SIZE(spapr)        (1ULL << ((spapr)->htab_shift))
 
+static int smp_remaining_cpus;
+
 static XICSState *try_create_xics(const char *type, int nr_servers,
                                   int nr_irqs, Error **errp)
 {
@@ -1700,6 +1702,7 @@ static void ppc_spapr_init(MachineState *machine)
     int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
     int smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);
 
+    smp_remaining_cpus = smp_cpus;
     msi_supported = true;
 
     QLIST_INIT(&spapr->phbs);
@@ -2202,6 +2205,7 @@ static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     Error *local_err = NULL;
     void *fdt = NULL;
     int i, fdt_offset = 0;
+    int threads_per_core;
 
     /* Set NUMA node for the added CPUs  */
     for (i = 0; i < nb_numa_nodes; i++) {
@@ -2224,8 +2228,22 @@ static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
         return;
     }
 
+    /*
+     * Support topologies like -smp 15,cores=4,threads=4 where one core
+     * will have less than the specified number of SMT threads. The last
+     * core will always carry the deficit, even when -device options are
+     * used to cold-plug the cores. Compute this core's thread count
+     * before creating its threads below.
+     */
+    if ((smp_remaining_cpus > 0) && (smp_remaining_cpus < smp_threads)) {
+        threads_per_core = smp_remaining_cpus;
+    } else {
+        threads_per_core = smp_threads;
+    }
+    smp_remaining_cpus -= threads_per_core;
+
     /* Create SMT threads of the core. */
-    for (i = 1; i < smp_threads; i++) {
+    for (i = 1; i < threads_per_core; i++) {
         cpu = cpu_ppc_init(current_machine->cpu_model);
         if (!cpu) {
             error_report("Unable to find PowerPC CPU definition: %s",
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 10/11] spapr: CPU hot unplug support
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Support hot removal of CPUs for sPAPR guests by sending the hot-unplug
notification to the guest via an EPOW interrupt. Release the vCPU
object after CPU hot unplug so that the vCPU fd can be parked and
reused.
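
The removal flow, as a rough sketch of the code below (the isolate and
deallocate transitions are driven by the guest via RTAS):

  (qemu) device_del cpu8
    -> spapr_machine_device_unplug()
       -> spapr_cpu_unplug()
          -> drck->detach(drc, dev, spapr_cpu_release, ...)
          -> spapr_hotplug_req_remove_event(drc)  /* EPOW to guest */
  ... guest quiesces and releases the core via RTAS ...
    -> spapr_cpu_release()
       -> spapr_cpu_destroy() + cpu_remove() per thread  /* fd parked */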

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 hw/ppc/spapr.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 72 insertions(+)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 004a8e1..c05d85b 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2374,11 +2374,83 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
     }
 }
 
+static void spapr_cpu_destroy(PowerPCCPU *cpu)
+{
+    sPAPRMachineState *spapr = SPAPR_MACHINE(qdev_get_machine());
+
+    xics_cpu_destroy(spapr->icp, cpu);
+    qemu_unregister_reset(spapr_cpu_reset, cpu);
+}
+
+static void spapr_cpu_release(DeviceState *dev, void *opaque)
+{
+    CPUState *cs;
+    int i;
+    int id = ppc_get_vcpu_dt_id(POWERPC_CPU(CPU(dev)));
+
+    for (i = id; i < id + smp_threads; i++) {
+        CPU_FOREACH(cs) {
+            PowerPCCPU *cpu = POWERPC_CPU(cs);
+
+            if (i == ppc_get_vcpu_dt_id(cpu)) {
+                spapr_cpu_destroy(cpu);
+                cpu_remove(cs);
+            }
+        }
+    }
+}
+
+static int spapr_cpu_unplug(HotplugHandler *hotplug_dev, DeviceState *dev,
+                             Error **errp)
+{
+    CPUState *cs = CPU(dev);
+    PowerPCCPU *cpu = POWERPC_CPU(cs);
+    int id = ppc_get_vcpu_dt_id(cpu);
+    int smt = kvmppc_smt_threads();
+    sPAPRDRConnector *drc =
+        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, id);
+    sPAPRDRConnectorClass *drck;
+    Error *local_err = NULL;
+
+    /*
+     * SMT threads return from here, only main thread (core) will
+     * continue and signal hot unplug event to the guest.
+     */
+    if ((id % smt) != 0) {
+        return 0;
+    }
+    g_assert(drc);
+
+    drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
+    drck->detach(drc, dev, spapr_cpu_release, NULL, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return -1;
+    }
+
+    /*
+     * In addition to hotplugged CPUs, send the hot-unplug notification
+     * interrupt to the guest for coldplugged CPUs started via -device
+     * option too.
+     */
+    spapr_hotplug_req_remove_event(drc);
+
+    return 0;
+}
+
 static void spapr_machine_device_unplug(HotplugHandler *hotplug_dev,
                                       DeviceState *dev, Error **errp)
 {
+    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
+
     if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
         error_setg(errp, "Memory hot unplug not supported by sPAPR");
+    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
+        if (!smc->dr_cpu_enabled) {
+            error_setg(errp, "CPU hot unplug not supported on this machine");
+            return;
+        }
+        spapr_cpu_unplug(hotplug_dev, dev, errp);
     }
 }
 
-- 
2.1.0


* [Qemu-devel] [RFC PATCH v4 11/11] target-ppc: Enable CPU hotplug for POWER8 CPU family
From: Bharata B Rao @ 2015-08-06  5:27 UTC
  To: qemu-devel
  Cc: aik, Bharata B Rao, mdroth, agraf, qemu-ppc, tyreld, nfont,
	imammedo, afaerber, david

Support CPU hotplug on POWER8 by enabling device_add semantics.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
---
 target-ppc/translate_init.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index c19d630..62ef687 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -8206,6 +8206,8 @@ POWERPC_FAMILY(POWER8)(ObjectClass *oc, void *data)
     dc->fw_name = "PowerPC,POWER8";
     dc->desc = "POWER8";
     dc->props = powerpc_servercpu_properties;
+    dc->cannot_instantiate_with_device_add_yet = false;
+
     pcc->pvr_match = ppc_pvr_match_power8;
     pcc->pcr_mask = PCR_COMPAT_2_05 | PCR_COMPAT_2_06;
     pcc->init_proc = init_proc_POWER8;
-- 
2.1.0


* Re: [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug
From: Zhu Guihua @ 2015-08-06  8:42 UTC
  To: Bharata B Rao, qemu-devel
  Cc: aik, mdroth, agraf, qemu-ppc, tyreld, imammedo, nfont, afaerber, david


On 08/06/2015 01:27 PM, Bharata B Rao wrote:
> Hi,
>
> This is the next version of CPU hotplug support patchset for PowerPC
> sPAPR guests. This is a split-out from the previous version (v3) that
> was carrying CPU and memory hotplug together. This patchset applies on
> spapr-next branch of David Gibson's tree.
>
> In the previous version, I was doing CPU addition at socket granularity.
> One hotplug request would add one complete CPU socket with all the cores
> and threads as per the boot time topology specification. Based on the
> feedback for v3, I am switching back to earlier method wherein I don't
> have the notion of socket device. In this version I don't create any
> additional device abstraction over CPU device, but use the existing
> CPU device and add full cores at once. One hotplug request will add
> a complete core with all the underlying threads.

So is the new generic infrastructure a generic socket or a generic core?

Cc: Andreas
What about hot-adding a core device for x86 too?  Hot-plug per core seems to
handle all cases.

thanks,
Zhu

> I have enabled device_add based hotplug for POWER8 family for processors
> and currently the semantics looks like this:
>
> (qemu) device_add POWER8-powerpc64-cpu,id=cpu8
>
> v3: https://lists.nongnu.org/archive/html/qemu-devel/2015-04/msg02910.html
>
> Bharata B Rao (10):
>    exec: Remove cpu from cpus list during cpu_exec_exit()
>    exec: Do vmstate unregistration from cpu_exec_exit()
>    cpus: Add a sync version of cpu_remove()
>    xics_kvm: Add cpu_destroy method to XICS
>    spapr: Create pseries-2.5 machine
>    spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries
>    spapr: CPU hotplug support
>    spapr: Support topologies with unfilled cores
>    spapr: CPU hot unplug support
>    target-ppc: Enable CPU hotplug for POWER8 CPU family
>
> Gu Zheng (1):
>    cpus: Reclaim vCPU objects
>
>   cpus.c                      |  55 ++++++++
>   exec.c                      |  30 +++++
>   hw/intc/xics.c              |  12 ++
>   hw/intc/xics_kvm.c          |   9 ++
>   hw/ppc/spapr.c              | 300 +++++++++++++++++++++++++++++++++++++++++++-
>   hw/ppc/spapr_events.c       |   3 +
>   hw/ppc/spapr_rtas.c         |  11 ++
>   include/hw/ppc/spapr.h      |   1 +
>   include/hw/ppc/xics.h       |   2 +
>   include/qom/cpu.h           |  19 +++
>   include/sysemu/kvm.h        |   1 +
>   kvm-all.c                   |  57 ++++++++-
>   kvm-stub.c                  |   5 +
>   target-ppc/translate_init.c |  10 ++
>   14 files changed, 511 insertions(+), 4 deletions(-)
>


* Re: [Qemu-devel] [RFC PATCH v4 05/11] xics_kvm: Add cpu_destroy method to XICS
From: Bharata B Rao @ 2015-08-07 11:33 UTC
  To: qemu-devel
  Cc: aik, mdroth, agraf, qemu-ppc, tyreld, nfont, imammedo, afaerber, david

On Thu, Aug 06, 2015 at 10:57:11AM +0530, Bharata B Rao wrote:
> XICS is setup for each CPU during initialization. Provide a routine
> to undo the same when CPU is unplugged.
> 
> This allows reboot of a VM that has undergone CPU hotplug and unplug
> to work correctly.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/intc/xics.c        | 12 ++++++++++++
>  hw/intc/xics_kvm.c    |  9 +++++++++
>  include/hw/ppc/xics.h |  2 ++
>  3 files changed, 23 insertions(+)
> 
> diff --git a/hw/intc/xics.c b/hw/intc/xics.c
> index 924b1ae..cb33438 100644
> --- a/hw/intc/xics.c
> +++ b/hw/intc/xics.c
> @@ -44,6 +44,18 @@ static int get_cpu_index_by_dt_id(int cpu_dt_id)
>      return -1;
>  }
> 
> +void xics_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
> +{
> +    CPUState *cs = CPU(cpu);
> +    XICSStateClass *info = XICS_COMMON_GET_CLASS(icp);
> +
> +    assert(cs->cpu_index < icp->nr_servers);
> +
> +    if (info->cpu_destroy) {
> +        info->cpu_destroy(icp, cpu);
> +    }
> +}
> +
>  void xics_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
>  {
>      CPUState *cs = CPU(cpu);
> diff --git a/hw/intc/xics_kvm.c b/hw/intc/xics_kvm.c
> index d58729c..e134f65 100644
> --- a/hw/intc/xics_kvm.c
> +++ b/hw/intc/xics_kvm.c
> @@ -356,6 +356,14 @@ static void xics_kvm_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
>      }
>  }
> 
> +static void xics_kvm_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
> +{
> +    CPUState *cs = CPU(cpu);
> +    ICPState *ss = &icp->ss[cs->cpu_index];
> +
> +    ss->cs = NULL;

After this we need to ensure that when the hot-removed CPU is brought
up again, we populate ss->cs correctly in the setup routine. Otherwise,
removal of such a CPU after migration will not work correctly on the
target side. The below patch replaces the current one after taking
care of the above aspect.
--------------

xics_kvm: Add cpu_destroy method to XICS

From: Bharata B Rao <bharata@linux.vnet.ibm.com>

XICS is setup for each CPU during initialization. Provide a routine
to undo the same when CPU is unplugged.

This allows reboot of a VM that has undergone CPU hotplug and unplug
to work correctly.

Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/intc/xics.c        |   12 ++++++++++++
 hw/intc/xics_kvm.c    |   13 +++++++++++--
 include/hw/ppc/xics.h |    2 ++
 3 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/hw/intc/xics.c b/hw/intc/xics.c
index 924b1ae..cb33438 100644
--- a/hw/intc/xics.c
+++ b/hw/intc/xics.c
@@ -44,6 +44,18 @@ static int get_cpu_index_by_dt_id(int cpu_dt_id)
     return -1;
 }
 
+void xics_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
+{
+    CPUState *cs = CPU(cpu);
+    XICSStateClass *info = XICS_COMMON_GET_CLASS(icp);
+
+    assert(cs->cpu_index < icp->nr_servers);
+
+    if (info->cpu_destroy) {
+        info->cpu_destroy(icp, cpu);
+    }
+}
+
 void xics_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
 {
     CPUState *cs = CPU(cpu);
diff --git a/hw/intc/xics_kvm.c b/hw/intc/xics_kvm.c
index d58729c..cb96f69 100644
--- a/hw/intc/xics_kvm.c
+++ b/hw/intc/xics_kvm.c
@@ -331,6 +331,8 @@ static void xics_kvm_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
         abort();
     }
 
+    ss->cs = cs;
+
     /*
      * If we are reusing a parked vCPU fd corresponding to the CPU
      * which was hot-removed earlier, we don't have to re-enable
@@ -343,8 +345,6 @@ static void xics_kvm_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
     if (icpkvm->kernel_xics_fd != -1) {
         int ret;
 
-        ss->cs = cs;
-
         ret = kvm_vcpu_enable_cap(cs, KVM_CAP_IRQ_XICS, 0,
                                   icpkvm->kernel_xics_fd, kvm_arch_vcpu_id(cs));
         if (ret < 0) {
@@ -356,6 +356,14 @@ static void xics_kvm_cpu_setup(XICSState *icp, PowerPCCPU *cpu)
     }
 }
 
+static void xics_kvm_cpu_destroy(XICSState *icp, PowerPCCPU *cpu)
+{
+    CPUState *cs = CPU(cpu);
+    ICPState *ss = &icp->ss[cs->cpu_index];
+
+    ss->cs = NULL;
+}
+
 static void xics_kvm_set_nr_irqs(XICSState *icp, uint32_t nr_irqs, Error **errp)
 {
     icp->nr_irqs = icp->ics->nr_irqs = nr_irqs;
@@ -486,6 +494,7 @@ static void xics_kvm_class_init(ObjectClass *oc, void *data)
 
     dc->realize = xics_kvm_realize;
     xsc->cpu_setup = xics_kvm_cpu_setup;
+    xsc->cpu_destroy = xics_kvm_cpu_destroy;
     xsc->set_nr_irqs = xics_kvm_set_nr_irqs;
     xsc->set_nr_servers = xics_kvm_set_nr_servers;
 }
diff --git a/include/hw/ppc/xics.h b/include/hw/ppc/xics.h
index 355a966..2faad48 100644
--- a/include/hw/ppc/xics.h
+++ b/include/hw/ppc/xics.h
@@ -68,6 +68,7 @@ struct XICSStateClass {
     DeviceClass parent_class;
 
     void (*cpu_setup)(XICSState *icp, PowerPCCPU *cpu);
+    void (*cpu_destroy)(XICSState *icp, PowerPCCPU *cpu);
     void (*set_nr_irqs)(XICSState *icp, uint32_t nr_irqs, Error **errp);
     void (*set_nr_servers)(XICSState *icp, uint32_t nr_servers, Error **errp);
 };
@@ -166,5 +167,6 @@ int xics_alloc_block(XICSState *icp, int src, int num, bool lsi, bool align);
 void xics_free(XICSState *icp, int irq, int num);
 
 void xics_cpu_setup(XICSState *icp, PowerPCCPU *cpu);
+void xics_cpu_destroy(XICSState *icp, PowerPCCPU *cpu);
 
 #endif /* __XICS_H__ */

^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug
  2015-08-06  8:42 ` [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug Zhu Guihua
@ 2015-08-10  3:31   ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-08-10  3:31 UTC (permalink / raw)
  To: Zhu Guihua
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, imammedo,
	nfont, afaerber, david

On Thu, Aug 06, 2015 at 04:42:05PM +0800, Zhu Guihua wrote:
> 
> On 08/06/2015 01:27 PM, Bharata B Rao wrote:
> >Hi,
> >
> >This is the next version of CPU hotplug support patchset for PowerPC
> >sPAPR guests. This is a split-out from the previous version (v3) that
> >was carrying CPU and memory hotplug together. This patchset applies on
> >spapr-next branch of David Gibson's tree.
> >
> >In the previous version, I was doing CPU addition at socket granularity.
> >One hotplug request would add one complete CPU socket with all the cores
> >and threads as per the boot time topology specification. Based on the
> >feedback for v3, I am switching back to earlier method wherein I don't
> >have the notion of socket device. In this version I don't create any
> >additional device abstraction over CPU device, but use the existing
> >CPU device and add full cores at once. One hotplug request will add
> >a complete core with all the underlying threads.
> 
> So is the new generic infrastructure a generic socket or a generic core?

In this implementation, it is neither, meaning it is not generic as you
can see from the device_add semantics at the end of this mail.

> 
> Cc: Andreas
> What about hot-adding a core device for x86 too?  Hot-plug per core seems to
> handle all cases.
> 
> thanks,
> Zhu
> 
> >I have enabled device_add based hotplug for POWER8 family for processors
> >and currently the semantics looks like this:
> >
> >(qemu) device_add POWER8-powerpc64-cpu,id=cpu8

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug
  2015-08-06  5:27 [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug Bharata B Rao
                   ` (11 preceding siblings ...)
  2015-08-06  8:42 ` [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug Zhu Guihua
@ 2015-08-12  2:56 ` David Gibson
  12 siblings, 0 replies; 36+ messages in thread
From: David Gibson @ 2015-08-12  2:56 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 1649 bytes --]

On Thu, Aug 06, 2015 at 10:57:06AM +0530, Bharata B Rao wrote:
> Hi,
> 
> This is the next version of CPU hotplug support patchset for PowerPC
> sPAPR guests. This is a split-out from the previous version (v3) that
> was carrying CPU and memory hotplug together. This patchset applies on
> spapr-next branch of David Gibson's tree.
> 
> In the previous version, I was doing CPU addition at socket granularity.
> One hotplug request would add one complete CPU socket with all the cores
> and threads as per the boot time topology specification. Based on the
> feedback for v3, I am switching back to earlier method wherein I don't
> have the notion of socket device. In this version I don't create any
> additional device abstraction over CPU device, but use the existing
> CPU device and add full cores at once. One hotplug request will add
> a complete core with all the underlying threads.
> 
> I have enabled device_add based hotplug for POWER8 family for processors
> and currently the semantics looks like this:
> 
> (qemu) device_add POWER8-powerpc64-cpu,id=cpu8

I've cherry-picked 5/11 (add 2.5 machine type) since we'll be wanting
it one way or another.  I have rearranged it, though, to go before the
memory hotplug patches, which are already in spapr-next, but missed
the 2.4 cutoff so shouldn't be enabled for the 2.4 machine type.

Need to wait until I have a little more time to review the rest of the
series.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit() Bharata B Rao
@ 2015-09-04  5:31   ` David Gibson
  2015-09-09  5:52     ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: David Gibson @ 2015-09-04  5:31 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 2415 bytes --]

On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
> CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
> should be removed from cpu_exec_exit().
> 
> cpu_exec_exit() is called from generic CPU::instance_finalize and some
> archs like PowerPC call it from CPU unrealizefn. So ensure that we
> dequeue the cpu only once.
> 
> Instead of introducing a new field CPUState.queued, I could have used
> CPUState.cpu_index to check if the cpu is already dequeued from the list.
> Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>

This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
unplug is working without it.

> ---
>  exec.c            | 11 +++++++++++
>  include/qom/cpu.h |  1 +
>  2 files changed, 12 insertions(+)
> 
> diff --git a/exec.c b/exec.c
> index 0a4a0c5..b196d68 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -550,6 +550,10 @@ void cpu_exec_exit(CPUState *cpu)
>          return;
>      }
>  
> +    if (cpu->queued) {
> +        QTAILQ_REMOVE(&cpus, cpu, node);
> +        cpu->queued = false;
> +    }
>      bitmap_clear(cpu_index_map, cpu->cpu_index, 1);
>      cpu->cpu_index = -1;
>  }
> @@ -568,6 +572,12 @@ static int cpu_get_free_index(Error **errp)
>  
>  void cpu_exec_exit(CPUState *cpu)
>  {
> +    cpu_list_lock();
> +    if (cpu->queued) {
> +        QTAILQ_REMOVE(&cpus, cpu, node);
> +        cpu->queued = false;
> +    }
> +    cpu_list_unlock();
>  }
>  #endif
>  
> @@ -595,6 +605,7 @@ void cpu_exec_init(CPUState *cpu, Error **errp)
>          return;
>      }
>      QTAILQ_INSERT_TAIL(&cpus, cpu, node);
> +    cpu->queued = true;
>  #if defined(CONFIG_USER_ONLY)
>      cpu_list_unlock();
>  #endif
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 20aabc9..a00e3a8 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -284,6 +284,7 @@ struct CPUState {
>      int gdb_num_regs;
>      int gdb_num_g_regs;
>      QTAILQ_ENTRY(CPUState) node;
> +    bool queued;
>  
>      /* ice debug support */
>      QTAILQ_HEAD(breakpoints_head, CPUBreakpoint) breakpoints;

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 02/11] exec: Do vmstate unregistration from cpu_exec_exit()
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 02/11] exec: Do vmstate unregistration from cpu_exec_exit() Bharata B Rao
@ 2015-09-04  6:03   ` David Gibson
  2015-09-09  5:56     ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: David Gibson @ 2015-09-04  6:03 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 2225 bytes --]

On Thu, Aug 06, 2015 at 10:57:08AM +0530, Bharata B Rao wrote:
> cpu_exec_init() does vmstate_register and register_savevm for the CPU device.
> These need to be undone from cpu_exec_exit(). These changes are needed to
> support CPU hot removal and also to correctly fail hotplug attempts
> beyond max_cpus.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>

As with 1/2, looks reasonable, but I'm wondering how x86 hotplug is
getting away without this.

> ---
>  exec.c | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/exec.c b/exec.c
> index b196d68..3415cd7 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -545,6 +545,8 @@ static int cpu_get_free_index(Error **errp)
>  
>  void cpu_exec_exit(CPUState *cpu)
>  {
> +    CPUClass *cc = CPU_GET_CLASS(cpu);
> +
>      if (cpu->cpu_index == -1) {
>          /* cpu_index was never allocated by this @cpu or was already freed. */
>          return;
> @@ -556,6 +558,15 @@ void cpu_exec_exit(CPUState *cpu)
>      }
>      bitmap_clear(cpu_index_map, cpu->cpu_index, 1);
>      cpu->cpu_index = -1;
> +    if (cc->vmsd != NULL) {
> +        vmstate_unregister(NULL, cc->vmsd, cpu);
> +    }
> +#if defined(CPU_SAVE_VERSION)
> +    unregister_savevm(NULL, "cpu", cpu->env_ptr);
> +#endif
> +    if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
> +        vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
> +    }
>  }
>  #else
>  
> @@ -572,12 +583,20 @@ static int cpu_get_free_index(Error **errp)
>  
>  void cpu_exec_exit(CPUState *cpu)
>  {
> +    CPUClass *cc = CPU_GET_CLASS(cpu);
> +
>      cpu_list_lock();
>      if (cpu->queued) {
>          QTAILQ_REMOVE(&cpus, cpu, node);
>          cpu->queued = false;
>      }
>      cpu_list_unlock();
> +    if (cc->vmsd != NULL) {
> +        vmstate_unregister(NULL, cc->vmsd, cpu);
> +    }
> +    if (qdev_get_vmsd(DEVICE(cpu)) == NULL) {
> +        vmstate_unregister(NULL, &vmstate_cpu_common, cpu);
> +    }
>  }
>  #endif
>  

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 03/11] cpus: Reclaim vCPU objects
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 03/11] cpus: Reclaim vCPU objects Bharata B Rao
@ 2015-09-04  6:09   ` David Gibson
  0 siblings, 0 replies; 36+ messages in thread
From: David Gibson @ 2015-09-04  6:09 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: Zhu Guihua, mdroth, aik, agraf, qemu-devel, Chen Fan, qemu-ppc,
	tyreld, nfont, Gu Zheng, imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 1114 bytes --]

On Thu, Aug 06, 2015 at 10:57:09AM +0530, Bharata B Rao wrote:
> From: Gu Zheng <guz.fnst@cn.fujitsu.com>
> 
> In order to deal well with the kvm vcpus (which can not be removed without any
> protection), we do not close KVM vcpu fd, just record and mark it as stopped
> into a list, so that we can reuse it for the appending cpu hot-add request if
> possible. It is also the approach that kvm guys suggested:
> https://www.mail-archive.com/kvm@vger.kernel.org/msg102839.html
> 
> Signed-off-by: Chen Fan <chen.fan.fnst@cn.fujitsu.com>
> Signed-off-by: Gu Zheng <guz.fnst@cn.fujitsu.com>
> Signed-off-by: Zhu Guihua <zhugh.fnst@cn.fujitsu.com>
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
>                [Explicit CPU_REMOVE() from qemu_kvm/tcg_destroy_vcpu()
>                 isn't needed as it is done from cpu_exec_exit()]
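
For reference, the parked-vCPU bookkeeping described above amounts to
roughly the following (a minimal sketch; struct and field names are
assumptions for illustration, not necessarily the exact code in the series):

    /* Keep the KVM vcpu fd of a removed CPU instead of closing it. */
    struct KVMParkedVcpu {
        unsigned long vcpu_id;            /* kvm_arch_vcpu_id() of the CPU */
        int kvm_fd;                       /* vcpu fd kept open across unplug */
        QLIST_ENTRY(KVMParkedVcpu) node;
    };

    /* On unplug the fd goes onto a list; a later hot-add with the same
     * vcpu_id takes it off the list and reuses it instead of issuing
     * KVM_CREATE_VCPU again. */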

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 04/11] cpus: Add a sync version of cpu_remove()
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 04/11] cpus: Add a sync version of cpu_remove() Bharata B Rao
@ 2015-09-04  6:11   ` David Gibson
  2015-09-09  5:57     ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: David Gibson @ 2015-09-04  6:11 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 2580 bytes --]

On Thu, Aug 06, 2015 at 10:57:10AM +0530, Bharata B Rao wrote:
> This sync API will be used by the CPU hotplug code to wait for the CPU to
> completely get removed before flagging the failure to the device_add
> command.
> 
> Sync version of this call is needed to correctly recover from CPU
> realization failures when ->plug() handler fails.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> ---
>  cpus.c            | 14 ++++++++++++++
>  include/qom/cpu.h |  8 ++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/cpus.c b/cpus.c
> index 73ae2e7..9d9644e 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -999,6 +999,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>          qemu_kvm_wait_io_event(cpu);
>          if (cpu->exit && !cpu_can_run(cpu)) {
>              qemu_kvm_destroy_vcpu(cpu);
> +            cpu->created = false;
> +            qemu_cond_signal(&qemu_cpu_cond);
>              qemu_mutex_unlock(&qemu_global_mutex);
>              return NULL;
>          }
> @@ -1104,6 +1106,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
>          }
>          if (remove_cpu) {
>              qemu_tcg_destroy_vcpu(remove_cpu);
> +            cpu->created = false;
> +            qemu_cond_signal(&qemu_cpu_cond);
>              remove_cpu = NULL;
>          }
>      }
> @@ -1283,6 +1287,16 @@ void cpu_remove(CPUState *cpu)
>      qemu_cpu_kick(cpu);
>  }
>  
> +void cpu_remove_sync(CPUState *cpu)
> +{
> +    cpu->stop = true;
> +    cpu->exit = true;
> +    qemu_cpu_kick(cpu);

It would be nicer for this to call the async cpu_remove() above, to
ensure they stay in sync.
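
Something along these lines (a sketch of the suggested refactoring, not
the actual patch) would keep the two paths in sync:

    void cpu_remove_sync(CPUState *cpu)
    {
        cpu_remove(cpu);    /* sets stop/exit and kicks the vCPU */
        while (cpu->created) {
            qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
        }
    }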

> +    while (cpu->created) {
> +        qemu_cond_wait(&qemu_cpu_cond, &qemu_global_mutex);
> +    }
> +}
> +
>  /* For temporary buffers for forming a name */
>  #define VCPU_THREAD_NAME_SIZE 16
>  
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 136d9fe..65c6852 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -655,6 +655,14 @@ void cpu_resume(CPUState *cpu);
>   */
>  void cpu_remove(CPUState *cpu);
>  
> + /**
> + * cpu_remove_sync:
> + * @cpu: The CPU to remove.
> + *
> + * Requests the CPU to be removed and waits till it is removed.
> + */
> +void cpu_remove_sync(CPUState *cpu);
> +
>  /**
>   * qemu_init_vcpu:
>   * @cpu: The vCPU to initialize.

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 07/11] spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 07/11] spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries Bharata B Rao
@ 2015-09-04  6:28   ` David Gibson
  0 siblings, 0 replies; 36+ messages in thread
From: David Gibson @ 2015-09-04  6:28 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 507 bytes --]

On Thu, Aug 06, 2015 at 10:57:13AM +0530, Bharata B Rao wrote:
> Start supporting CPU hotplug from pseries-2.5 onwards. Add CPU
> DRC (Dynamic Resource Connector) device tree entries.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>

Reviewed-by: David Gibson <david@gibson.dropbear.id.au>

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 08/11] spapr: CPU hotplug support
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 08/11] spapr: CPU hotplug support Bharata B Rao
@ 2015-09-04  6:58   ` David Gibson
  2015-09-09  6:52     ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: David Gibson @ 2015-09-04  6:58 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 12445 bytes --]

On Thu, Aug 06, 2015 at 10:57:14AM +0530, Bharata B Rao wrote:
> Support CPU hotplug via device-add command. Set up device tree
> entries for the hotplugged CPU core and use the existing EPOW event
> infrastructure to send CPU hotplug notification to the guest.
> 
> Create only cores explicitly from boot path as well as hotplug path
> and let the ->plug() handler of the core create the threads of the core.
> 
> Also support cold plugged CPUs that are specified by -device option
> on cmdline.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> ---
>  hw/ppc/spapr.c              | 166 +++++++++++++++++++++++++++++++++++++++++++-
>  hw/ppc/spapr_events.c       |   3 +
>  hw/ppc/spapr_rtas.c         |  11 +++
>  target-ppc/translate_init.c |   8 +++
>  4 files changed, 186 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index a106980..74637b3 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -599,6 +599,18 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
>      unsigned sockets = opts ? qemu_opt_get_number(opts, "sockets", 0) : 0;
>      uint32_t cpus_per_socket = sockets ? (smp_cpus / sockets) : 1;
>      uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
> +    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> +    sPAPRDRConnector *drc;
> +    sPAPRDRConnectorClass *drck;
> +    int drc_index;
> +
> +    if (smc->dr_cpu_enabled) {
> +        drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index);
> +        g_assert(drc);
> +        drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> +        drc_index = drck->get_index(drc);
> +        _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_index)));
> +    }
>  
>      _FDT((fdt_setprop_cell(fdt, offset, "reg", index)));
>      _FDT((fdt_setprop_string(fdt, offset, "device_type", "cpu")));
> @@ -1686,6 +1698,7 @@ static void ppc_spapr_init(MachineState *machine)
>      char *filename;
>      int smt = kvmppc_smt_threads();
>      int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
> +    int smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);

This shadows the global variable 'smp_cores' which has a different
meaning, so this is a very bad idea.
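
For example (the replacement name is purely illustrative), a distinct
local name would avoid the shadowing:

    int boot_cores = DIV_ROUND_UP(smp_cpus, smp_threads);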

>  
>      msi_supported = true;
>  
> @@ -1764,7 +1777,7 @@ static void ppc_spapr_init(MachineState *machine)
>      if (machine->cpu_model == NULL) {
>          machine->cpu_model = kvm_enabled() ? "host" : "POWER7";
>      }
> -    for (i = 0; i < smp_cpus; i++) {
> +    for (i = 0; i < smp_cores; i++) {
>          cpu = cpu_ppc_init(machine->cpu_model);
>          if (cpu == NULL) {
>              fprintf(stderr, "Unable to find PowerPC CPU definition\n");
> @@ -2152,10 +2165,135 @@ out:
>      error_propagate(errp, local_err);
>  }
>  
> +static void *spapr_populate_hotplug_cpu_dt(DeviceState *dev, CPUState *cs,
> +                                           int *fdt_offset,
> +                                           sPAPRMachineState *spapr)
> +{
> +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> +    DeviceClass *dc = DEVICE_GET_CLASS(cs);
> +    int id = ppc_get_vcpu_dt_id(cpu);
> +    void *fdt;
> +    int offset, fdt_size;
> +    char *nodename;
> +
> +    fdt = create_device_tree(&fdt_size);
> +    nodename = g_strdup_printf("%s@%x", dc->fw_name, id);
> +    offset = fdt_add_subnode(fdt, 0, nodename);
> +
> +    spapr_populate_cpu_dt(cs, fdt, offset, spapr);
> +    g_free(nodename);
> +
> +    *fdt_offset = offset;
> +    return fdt;
> +}
> +
> +static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
> +                            Error **errp)
> +{
> +    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> +    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
> +    CPUState *cs = CPU(dev);
> +    PowerPCCPU *cpu = POWERPC_CPU(cs);
> +    int id = ppc_get_vcpu_dt_id(cpu);
> +    sPAPRDRConnector *drc =
> +        spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, id);
> +    sPAPRDRConnectorClass *drck;
> +    int smt = kvmppc_smt_threads();
> +    Error *local_err = NULL;
> +    void *fdt = NULL;
> +    int i, fdt_offset = 0;
> +
> +    /* Set NUMA node for the added CPUs  */
> +    for (i = 0; i < nb_numa_nodes; i++) {
> +        if (test_bit(cs->cpu_index, numa_info[i].node_cpu)) {
> +            cs->numa_node = i;
> +            break;
> +        }
> +    }
> +
> +    /*
> +     * Currently CPU core and threads of a core aren't really different
> +     * from QEMU point of view since all of them are just CPU devices. Hence
> +     * there is no separate realize routines for cores and threads.
> +     * We use the id check below to do things differently for cores and threads.
> +     *
> +     * SMT threads return from here, only main thread (core) will
> +     * continue, create threads and signal hotplug event to the guest.
> +     */
> +    if ((id % smt) != 0) {
> +        return;
> +    }
> +
> +    /* Create SMT threads of the core. */
> +    for (i = 1; i < smp_threads; i++) {
> +        cpu = cpu_ppc_init(current_machine->cpu_model);
> +        if (!cpu) {
> +            error_report("Unable to find PowerPC CPU definition: %s",
> +                          current_machine->cpu_model);
> +            exit(EXIT_FAILURE);
> +        }
> +    }
> +
> +    if (!smc->dr_cpu_enabled) {
> +        /*
> +         * This is a cold plugged CPU but the machine doesn't support
> +         * DR. So skip the hotplug path ensuring that the CPU is brought
> +         * up online with out an associated DR connector.
> +         */
> +        return;
> +    }
> +
> +    g_assert(drc);
> +
> +    /*
> +     * Setup CPU DT entries only for hotplugged CPUs. For boot time or
> +     * coldplugged CPUs DT entries are setup in spapr_finalize_fdt().
> +     */
> +    if (dev->hotplugged) {
> +        fdt = spapr_populate_hotplug_cpu_dt(dev, cs, &fdt_offset, ms);
> +    }
> +
> +    drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> +    drck->attach(drc, dev, fdt, fdt_offset, !dev->hotplugged, &local_err);
> +    if (local_err) {
> +        g_free(fdt);
> +        error_propagate(errp, local_err);
> +        return;
> +    }
> +
> +    /*
> +     * We send hotplug notification interrupt to the guest only in case
> +     * of hotplugged CPUs.
> +     */
> +    if (dev->hotplugged) {
> +        spapr_hotplug_req_add_event(drc);
> +    } else {
> +        /*
> +         * HACK to support removal of hotplugged CPU after VM migration:
> +         *
> +         * Since we want to be able to hot-remove those coldplugged CPUs
> +         * started at boot time using -device option at the target VM, we set
> +         * the right allocation_state and isolation_state for them, which for
> +         * the hotplugged CPUs would be set via RTAS calls done from the
> +         * guest during hotplug.
> +         *
> +         * This allows the coldplugged CPUs started using -device option to
> +         * have the right isolation and allocation states as expected by the
> +         * CPU hot removal code.
> +         *
> +         * This hack will be removed once we have DRC states migrated as part
> +         * of VM migration.
> +         */
> +        drck->set_allocation_state(drc, SPAPR_DR_ALLOCATION_STATE_USABLE);
> +        drck->set_isolation_state(drc, SPAPR_DR_ISOLATION_STATE_UNISOLATED);
> +    }
> +}
> +
>  static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
>                                        DeviceState *dev, Error **errp)
>  {
>      sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> +    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
>  
>      if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
>          int node;
> @@ -2192,6 +2330,29 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
>          }
>  
>          spapr_memory_plug(hotplug_dev, dev, node, errp);
> +    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> +        CPUState *cs = CPU(dev);
> +        PowerPCCPU *cpu = POWERPC_CPU(cs);
> +
> +        if (!smc->dr_cpu_enabled && dev->hotplugged) {
> +            error_setg(errp, "CPU hotplug not supported for this machine");
> +            cpu_remove_sync(cs);
> +            return;
> +        }
> +
> +        if (((smp_cpus % smp_threads) || (max_cpus % smp_threads)) &&
> +            dev->hotplugged) {
> +            error_setg(errp, "CPU hotplug not supported for the topology "
> +                       "with %d threads %d cpus and %d maxcpus since "
> +                       "CPUs can't be fit fully into cores",
> +                       smp_threads, smp_cpus, max_cpus);
> +            cpu_remove_sync(cs);

I'd kind of prefer to reject partial cores at initial startup, rather
than only when we actually attempt to hotplug.
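
A startup-time rejection could look roughly like this (a sketch only; the
placement in ppc_spapr_init() and the exact wording are assumptions):

    if ((smp_cpus % smp_threads) || (max_cpus % smp_threads)) {
        error_report("unsupported topology: %d threads, %d cpus, %d maxcpus: "
                     "CPUs can't be fit fully into cores",
                     smp_threads, smp_cpus, max_cpus);
        exit(EXIT_FAILURE);
    }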

> +            return;
> +        }
> +
> +        spapr_cpu_init(ms, cpu);
> +        spapr_cpu_reset(cpu);
> +        spapr_cpu_plug(hotplug_dev, dev, errp);
>      }
>  }
>  
> @@ -2206,7 +2367,8 @@ static void spapr_machine_device_unplug(HotplugHandler *hotplug_dev,
>  static HotplugHandler *spapr_get_hotpug_handler(MachineState *machine,
>                                               DeviceState *dev)
>  {
> -    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
> +    if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM) ||
> +        object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
>          return HOTPLUG_HANDLER(machine);
>      }
>      return NULL;
> diff --git a/hw/ppc/spapr_events.c b/hw/ppc/spapr_events.c
> index 98bf7ae..1901652 100644
> --- a/hw/ppc/spapr_events.c
> +++ b/hw/ppc/spapr_events.c
> @@ -437,6 +437,9 @@ static void spapr_hotplug_req_event(sPAPRDRConnector *drc, uint8_t hp_action)
>      case SPAPR_DR_CONNECTOR_TYPE_LMB:
>          hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_MEMORY;
>          break;
> +    case SPAPR_DR_CONNECTOR_TYPE_CPU:
> +        hp->hotplug_type = RTAS_LOG_V6_HP_TYPE_CPU;
> +        break;
>      default:
>          /* we shouldn't be signaling hotplug events for resources
>           * that don't support them
> diff --git a/hw/ppc/spapr_rtas.c b/hw/ppc/spapr_rtas.c
> index e99e25f..160eb07 100644
> --- a/hw/ppc/spapr_rtas.c
> +++ b/hw/ppc/spapr_rtas.c
> @@ -159,6 +159,16 @@ static void rtas_query_cpu_stopped_state(PowerPCCPU *cpu_,
>      rtas_st(rets, 0, RTAS_OUT_PARAM_ERROR);
>  }
>  
> +static void spapr_cpu_set_endianness(PowerPCCPU *cpu)
> +{
> +    PowerPCCPU *fcpu = POWERPC_CPU(first_cpu);
> +    PowerPCCPUClass *pcc = POWERPC_CPU_GET_CLASS(fcpu);
> +
> +    if (!pcc->interrupts_big_endian(fcpu)) {
> +        cpu->env.spr[SPR_LPCR] |= LPCR_ILE;
> +    }
> +}
> +
>  static void rtas_start_cpu(PowerPCCPU *cpu_, sPAPRMachineState *spapr,
>                             uint32_t token, uint32_t nargs,
>                             target_ulong args,
> @@ -195,6 +205,7 @@ static void rtas_start_cpu(PowerPCCPU *cpu_, sPAPRMachineState *spapr,
>          env->nip = start;
>          env->gpr[3] = r3;
>          cs->halted = 0;
> +        spapr_cpu_set_endianness(cpu);
>  
>          qemu_cpu_kick(cs);
>  
> diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
> index 16d7b16..c19d630 100644
> --- a/target-ppc/translate_init.c
> +++ b/target-ppc/translate_init.c
> @@ -30,6 +30,9 @@
>  #include "qemu/error-report.h"
>  #include "qapi/visitor.h"
>  #include "hw/qdev-properties.h"
> +#if !defined(CONFIG_USER_ONLY)
> +#include "sysemu/sysemu.h"
> +#endif
>  
>  //#define PPC_DUMP_CPU
>  //#define PPC_DEBUG_SPR
> @@ -8936,6 +8939,11 @@ static void ppc_cpu_realizefn(DeviceState *dev, Error **errp)
>      }
>  
>  #if !defined(CONFIG_USER_ONLY)
> +    if (cs->cpu_index >= max_cpus) {
> +        error_setg(errp, "Cannot have more than %d CPUs", max_cpus);
> +        return;
> +    }
> +
>      cpu->cpu_dt_id = (cs->cpu_index / smp_threads) * max_smt
>          + (cs->cpu_index % smp_threads);
>  #endif

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores
  2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores Bharata B Rao
@ 2015-09-04  7:01   ` David Gibson
  2015-09-04  8:44     ` Thomas Huth
  0 siblings, 1 reply; 36+ messages in thread
From: David Gibson @ 2015-09-04  7:01 UTC (permalink / raw)
  To: Bharata B Rao
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

[-- Attachment #1: Type: text/plain, Size: 2933 bytes --]

On Thu, Aug 06, 2015 at 10:57:15AM +0530, Bharata B Rao wrote:
> QEMU currently supports CPU topologies where there can be cores
> which are not completely filled with all the threads as per the
> specified SMT mode.
> 
> Restore support for such topologies (e.g. -smp 15,cores=4,threads=4).
> The last core will always have the deficit even when -device options are
> used to cold-plug the cores.
> 
> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>

Is there a reason to support these silly topologies, or should we just
error out if this is specified?


> ---
>  hw/ppc/spapr.c | 20 +++++++++++++++++++-
>  1 file changed, 19 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> index 74637b3..004a8e1 100644
> --- a/hw/ppc/spapr.c
> +++ b/hw/ppc/spapr.c
> @@ -94,6 +94,8 @@
>  
>  #define HTAB_SIZE(spapr)        (1ULL << ((spapr)->htab_shift))
>  
> +static int smp_remaining_cpus;
> +
>  static XICSState *try_create_xics(const char *type, int nr_servers,
>                                    int nr_irqs, Error **errp)
>  {
> @@ -1700,6 +1702,7 @@ static void ppc_spapr_init(MachineState *machine)
>      int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
>      int smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);
>  
> +    smp_remaining_cpus = smp_cpus;
>      msi_supported = true;
>  
>      QLIST_INIT(&spapr->phbs);
> @@ -2202,6 +2205,7 @@ static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
>      Error *local_err = NULL;
>      void *fdt = NULL;
>      int i, fdt_offset = 0;
> +    int threads_per_core;
>  
>      /* Set NUMA node for the added CPUs  */
>      for (i = 0; i < nb_numa_nodes; i++) {
> @@ -2224,8 +2228,22 @@ static void spapr_cpu_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
>          return;
>      }
>  
> +    /* Create SMT threads of the core.
> +     *
> +     * Support topologies like -smp 15,cores=4,threads=4 where one core
> +     * will have less than the specified SMT threads. The last core will
> +     * always have the deficit even when -device options are used to
> +     * cold-plug the cores.
> +     */
> +    if ((smp_remaining_cpus > 0) && (smp_remaining_cpus < smp_threads)) {
> +        threads_per_core = smp_remaining_cpus;
> +    } else {
> +        threads_per_core = smp_threads;
> +    }
> +    smp_remaining_cpus -= threads_per_core;
> +
>      /* Create SMT threads of the core. */
> -    for (i = 1; i < smp_threads; i++) {
> +    for (i = 1; i < threads_per_core; i++) {
>          cpu = cpu_ppc_init(current_machine->cpu_model);
>          if (!cpu) {
>              error_report("Unable to find PowerPC CPU definition: %s",

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Type: application/pgp-signature, Size: 819 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores
  2015-09-04  7:01   ` David Gibson
@ 2015-09-04  8:44     ` Thomas Huth
  2015-09-09  6:58       ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: Thomas Huth @ 2015-09-04  8:44 UTC (permalink / raw)
  To: David Gibson, Bharata B Rao
  Cc: agraf, aik, qemu-devel, mdroth, qemu-ppc, tyreld, imammedo,
	nfont, afaerber

[-- Attachment #1: Type: text/plain, Size: 975 bytes --]

On 04/09/15 09:01, David Gibson wrote:
> On Thu, Aug 06, 2015 at 10:57:15AM +0530, Bharata B Rao wrote:
>> QEMU currently supports CPU topologies where there can be cores
>> which are not completely filled with all the threads as per the
>> specified SMT mode.
>>
>> Restore support for such topologies (e.g. -smp 15,cores=4,threads=4).
>> The last core will always have the deficit even when -device options are
>> used to cold-plug the cores.
>>
>> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> 
> > Is there a reason to support these silly topologies, or should we just
> error out if this is specified?

FYI, I've recently submitted a patch that tries to catch such illegal
SMP configurations and simply errors out in that case:

http://lists.nongnu.org/archive/html/qemu-devel/2015-07/msg04549.html

It's not upstream yet, but already in Eduardo's x86 branch. I think this
will reject the bad topology from your example, too.
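
The check there is essentially a consistency test on the -smp parameters,
conceptually along these lines (a simplified sketch, not the patch itself):

    if (sockets * cores * threads > max_cpus) {
        error_report("cpu topology: sockets (%u) * cores (%u) * threads (%u) "
                     "> maxcpus (%u)", sockets, cores, threads, max_cpus);
        exit(1);
    }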

 Thomas



[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 836 bytes --]

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-09-04  5:31   ` David Gibson
@ 2015-09-09  5:52     ` Bharata B Rao
  2015-09-09  7:41       ` Zhu Guihua
  0 siblings, 1 reply; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  5:52 UTC (permalink / raw)
  To: David Gibson
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
> On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
> > CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
> > should be removed from cpu_exec_exit().
> > 
> > cpu_exec_exit() is called from generic CPU::instance_finalize and some
> > archs like PowerPC call it from CPU unrealizefn. So ensure that we
> > dequeue the cpu only once.
> > 
> > Instead of introducing a new field CPUState.queued, I could have used
> > CPUState.cpu_index to check if the cpu is already dequeued from the list.
> > Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> 
> This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
> unplug is working without it.

x86 hotplug/unplug code currently resides in Zhu's git tree
(git://github.com/zhugh/qemu). They are removing the CPU from the list
explicitly in x86 CPU's instance_finalize routine.

Since we add CPU to the list in cpu_exec_init(), I thought it makes
sense to remove it in cpu_exec_exit().

Maybe it makes sense to pursue this patch and the next one separately
so that other archs are also taken into account correctly.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 02/11] exec: Do vmstate unregistration from cpu_exec_exit()
  2015-09-04  6:03   ` David Gibson
@ 2015-09-09  5:56     ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  5:56 UTC (permalink / raw)
  To: David Gibson
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

On Fri, Sep 04, 2015 at 04:03:43PM +1000, David Gibson wrote:
> On Thu, Aug 06, 2015 at 10:57:08AM +0530, Bharata B Rao wrote:
> > cpu_exec_init() does vmstate_register and register_savevm for the CPU device.
> > These need to be undone from cpu_exec_exit(). These changes are needed to
> > support CPU hot removal and also to correctly fail hotplug attempts
> > beyond max_cpus.
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> 
> As with 1/2, looks reasonable, but I'm wondering how x86 hotplug is
> getting away without this.

x86 does this explicitly in CPU's unrealizefn. Since registration is
done from cpu_exec_init(), I thought un-registration could be done
in cpu_exec_exit() instead of each arch doing it separately.

As said earlier, I will probably pursue these changes separately from
this series.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 04/11] cpus: Add a sync version of cpu_remove()
  2015-09-04  6:11   ` David Gibson
@ 2015-09-09  5:57     ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  5:57 UTC (permalink / raw)
  To: David Gibson
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

On Fri, Sep 04, 2015 at 04:11:38PM +1000, David Gibson wrote:
> On Thu, Aug 06, 2015 at 10:57:10AM +0530, Bharata B Rao wrote:
> > This sync API will be used by the CPU hotplug code to wait for the CPU to
> > completely get removed before flagging the failure to the device_add
> > command.
> > 
> > Sync version of this call is needed to correctly recover from CPU
> > realization failures when ->plug() handler fails.
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > ---
> >  cpus.c            | 14 ++++++++++++++
> >  include/qom/cpu.h |  8 ++++++++
> >  2 files changed, 22 insertions(+)
> > 
> > diff --git a/cpus.c b/cpus.c
> > index 73ae2e7..9d9644e 100644
> > --- a/cpus.c
> > +++ b/cpus.c
> > @@ -999,6 +999,8 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
> >          qemu_kvm_wait_io_event(cpu);
> >          if (cpu->exit && !cpu_can_run(cpu)) {
> >              qemu_kvm_destroy_vcpu(cpu);
> > +            cpu->created = false;
> > +            qemu_cond_signal(&qemu_cpu_cond);
> >              qemu_mutex_unlock(&qemu_global_mutex);
> >              return NULL;
> >          }
> > @@ -1104,6 +1106,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
> >          }
> >          if (remove_cpu) {
> >              qemu_tcg_destroy_vcpu(remove_cpu);
> > +            cpu->created = false;
> > +            qemu_cond_signal(&qemu_cpu_cond);
> >              remove_cpu = NULL;
> >          }
> >      }
> > @@ -1283,6 +1287,16 @@ void cpu_remove(CPUState *cpu)
> >      qemu_cpu_kick(cpu);
> >  }
> >  
> > +void cpu_remove_sync(CPUState *cpu)
> > +{
> > +    cpu->stop = true;
> > +    cpu->exit = true;
> > +    qemu_cpu_kick(cpu);
> 
> It would be nicer for this to call the async cpu_remove() above, to
> ensure they stay in sync.

Makes sense, will incorporate this in the next iteration.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 08/11] spapr: CPU hotplug support
  2015-09-04  6:58   ` David Gibson
@ 2015-09-09  6:52     ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  6:52 UTC (permalink / raw)
  To: David Gibson
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, nfont,
	imammedo, afaerber

On Fri, Sep 04, 2015 at 04:58:38PM +1000, David Gibson wrote:
> On Thu, Aug 06, 2015 at 10:57:14AM +0530, Bharata B Rao wrote:
> > Support CPU hotplug via device-add command. Set up device tree
> > entries for the hotplugged CPU core and use the existing EPOW event
> > infrastructure to send CPU hotplug notification to the guest.
> > 
> > Create only cores explicitly from boot path as well as hotplug path
> > and let the ->plug() handler of the core create the threads of the core.
> > 
> > Also support cold plugged CPUs that are specified by -device option
> > on cmdline.
> > 
> > Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > ---
> >  hw/ppc/spapr.c              | 166 +++++++++++++++++++++++++++++++++++++++++++-
> >  hw/ppc/spapr_events.c       |   3 +
> >  hw/ppc/spapr_rtas.c         |  11 +++
> >  target-ppc/translate_init.c |   8 +++
> >  4 files changed, 186 insertions(+), 2 deletions(-)
> > 
> > diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
> > index a106980..74637b3 100644
> > --- a/hw/ppc/spapr.c
> > +++ b/hw/ppc/spapr.c
> > @@ -599,6 +599,18 @@ static void spapr_populate_cpu_dt(CPUState *cs, void *fdt, int offset,
> >      unsigned sockets = opts ? qemu_opt_get_number(opts, "sockets", 0) : 0;
> >      uint32_t cpus_per_socket = sockets ? (smp_cpus / sockets) : 1;
> >      uint32_t pft_size_prop[] = {0, cpu_to_be32(spapr->htab_shift)};
> > +    sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> > +    sPAPRDRConnector *drc;
> > +    sPAPRDRConnectorClass *drck;
> > +    int drc_index;
> > +
> > +    if (smc->dr_cpu_enabled) {
> > +        drc = spapr_dr_connector_by_id(SPAPR_DR_CONNECTOR_TYPE_CPU, index);
> > +        g_assert(drc);
> > +        drck = SPAPR_DR_CONNECTOR_GET_CLASS(drc);
> > +        drc_index = drck->get_index(drc);
> > +        _FDT((fdt_setprop_cell(fdt, offset, "ibm,my-drc-index", drc_index)));
> > +    }
> >  
> >      _FDT((fdt_setprop_cell(fdt, offset, "reg", index)));
> >      _FDT((fdt_setprop_string(fdt, offset, "device_type", "cpu")));
> > @@ -1686,6 +1698,7 @@ static void ppc_spapr_init(MachineState *machine)
> >      char *filename;
> >      int smt = kvmppc_smt_threads();
> >      int smp_max_cores = DIV_ROUND_UP(max_cpus, smp_threads);
> > +    int smp_cores = DIV_ROUND_UP(smp_cpus, smp_threads);
> 
> This shadows the global variable 'smp_cores' which has a different
> meaning, so this is a very bad idea.

Oh ok, will fix this.

> >  static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
> >                                        DeviceState *dev, Error **errp)
> >  {
> >      sPAPRMachineClass *smc = SPAPR_MACHINE_GET_CLASS(qdev_get_machine());
> > +    sPAPRMachineState *ms = SPAPR_MACHINE(hotplug_dev);
> >  
> >      if (object_dynamic_cast(OBJECT(dev), TYPE_PC_DIMM)) {
> >          int node;
> > @@ -2192,6 +2330,29 @@ static void spapr_machine_device_plug(HotplugHandler *hotplug_dev,
> >          }
> >  
> >          spapr_memory_plug(hotplug_dev, dev, node, errp);
> > +    } else if (object_dynamic_cast(OBJECT(dev), TYPE_CPU)) {
> > +        CPUState *cs = CPU(dev);
> > +        PowerPCCPU *cpu = POWERPC_CPU(cs);
> > +
> > +        if (!smc->dr_cpu_enabled && dev->hotplugged) {
> > +            error_setg(errp, "CPU hotplug not supported for this machine");
> > +            cpu_remove_sync(cs);
> > +            return;
> > +        }
> > +
> > +        if (((smp_cpus % smp_threads) || (max_cpus % smp_threads)) &&
> > +            dev->hotplugged) {
> > +            error_setg(errp, "CPU hotplug not supported for the topology "
> > +                       "with %d threads %d cpus and %d maxcpus since "
> > +                       "CPUs can't be fit fully into cores",
> > +                       smp_threads, smp_cpus, max_cpus);
> > +            cpu_remove_sync(cs);
> 
> I'd kind of prefer to reject partial cores at initial startup, rather
> than only when we actually attempt to hotplug.

I am enforcing correct topologies only while hotplugging to ensure that
existing guests with such topologies continue to work. If that is not
required then this explicit check for only hotplugged CPUs won't be needed.

If Thomas' patch ensures that we never end up in topologies with partially
filled cores, then this check isn't required.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores
  2015-09-04  8:44     ` Thomas Huth
@ 2015-09-09  6:58       ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  6:58 UTC (permalink / raw)
  To: Thomas Huth
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, imammedo,
	nfont, afaerber, David Gibson

On Fri, Sep 04, 2015 at 10:44:57AM +0200, Thomas Huth wrote:
> On 04/09/15 09:01, David Gibson wrote:
> > On Thu, Aug 06, 2015 at 10:57:15AM +0530, Bharata B Rao wrote:
> >> QEMU currently supports CPU topologies where there can be cores
> >> which are not completely filled with all the threads as per the
> >> specified SMT mode.
> >>
> >> Restore support for such topologies (e.g. -smp 15,cores=4,threads=4).
> >> The last core will always have the deficit even when -device options are
> >> used to cold-plug the cores.
> >>
> >> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> > 
> > Is there a reason to support these silly topologies, or should we just
> > error out if this is specified?

The only reason was to ensure that existing guests with such topologies
continue to boot as before.

> 
> FYI, I've recently submitted a patch that tries to catch such illegal
> SMP configurations and simply errors out in that case:
> 
> http://lists.nongnu.org/archive/html/qemu-devel/2015-07/msg04549.html
> 
> It's not upstream yet, but already in Eduardo's x86 branch. I think this
> will reject the bad topology from your example, too.

It does reject -smp 15,cores=4,threads=4, but with

-smp 15,cores=4,threads=4,maxcpus=16, the guest still boots with a weird
topology.

[root@localhost ~]# lscpu
Architecture:          ppc64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Big Endian
CPU(s):                16
On-line CPU(s) list:   0-14
Off-line CPU(s) list:  15
Thread(s) per core:    3
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Model:                 IBM pSeries (emulated by qemu)
L1d cache:             64K
L1i cache:             32K
NUMA node0 CPU(s):     0-14

[root@localhost ~]# ppc64_cpu --info
Core   0:    0*    1*    2*    3* 
Core   1:    4*    5*    6*    7* 
Core   2:    8*    9*   10*   11* 
Core   3:   12*   13*   14*   15

Should such topologies also be prevented from booting ?

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-09-09  5:52     ` Bharata B Rao
@ 2015-09-09  7:41       ` Zhu Guihua
  2015-09-09  7:56         ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: Zhu Guihua @ 2015-09-09  7:41 UTC (permalink / raw)
  To: bharata, David Gibson
  Cc: agraf, aik, qemu-devel, mdroth, qemu-ppc, tyreld, imammedo,
	nfont, afaerber


On 09/09/2015 01:52 PM, Bharata B Rao wrote:
> On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
>> On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
>>> CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
>>> should be removed from cpu_exec_exit().
>>>
>>> cpu_exec_exit() is called from generic CPU::instance_finalize and some
>>> archs like PowerPC call it from CPU unrealizefn. So ensure that we
>>> dequeue the cpu only once.
>>>
>>> Instead of introducing a new field CPUState.queued, I could have used
>>> CPUState.cpu_index to check if the cpu is already dequeued from the list.
>>> Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
>>>
>>> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
>> This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
>> unplug is working without it.
> x86 hotplug/unplug code currently resides in Zhu's git tree
> (git://github.com/zhugh/qemu). They are removing the CPU from the list
> explicitly in x86 CPU's instance_finalize routine.

Sorry, my git tree is git://github.com/zhuguihua/qemu

There has been no progress on the topology work, so we don't know what will happen
for x86. I am not sure whether we will take this approach in the end.

Thanks,
Zhu

>
> Since we add CPU to the list in cpu_exec_init(), I thought it makes
> sense to remove it in cpu_exec_exit().
>
> May be it makes sense to separately purse this patch and the next one
> so that other archs are also taken into account correctly.
>
> Regards,
> Bharata.
>
>
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-09-09  7:41       ` Zhu Guihua
@ 2015-09-09  7:56         ` Bharata B Rao
  2015-11-12  9:11           ` Zhu Guihua
  0 siblings, 1 reply; 36+ messages in thread
From: Bharata B Rao @ 2015-09-09  7:56 UTC (permalink / raw)
  To: Zhu Guihua
  Cc: mdroth, aik, agraf, qemu-devel, qemu-ppc, tyreld, imammedo,
	nfont, afaerber, David Gibson

On Wed, Sep 09, 2015 at 03:41:30PM +0800, Zhu Guihua wrote:
> 
> On 09/09/2015 01:52 PM, Bharata B Rao wrote:
> >On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
> >>On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
> >>>CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
> >>>should be removed from cpu_exec_exit().
> >>>
> >>>cpu_exec_exit() is called from generic CPU::instance_finalize and some
> >>>archs like PowerPC call it from CPU unrealizefn. So ensure that we
> >>>dequeue the cpu only once.
> >>>
> >>>Instead of introducing a new field CPUState.queued, I could have used
> >>>CPUState.cpu_index to check if the cpu is already dequeued from the list.
> >>>Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
> >>>
> >>>Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> >>This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
> >>unplug is working without it.
> >x86 hotplug/unplug code currently resides in Zhu's git tree
> >(git://github.com/zhugh/qemu). They are removing the CPU from the list
> >explicitly in x86 CPU's instance_finalize routine.
> 
> Sorry, my git tree is git://github.com/zhuguihua/qemu
> 
> There has been no progress on the topology work, so we don't know what will happen
> for x86. I am not sure whether we will take this approach in the end.

Andreas had a presentation on this topic at KVM Forum recently.

Andreas - do you have any updates on the topology and other aspects
of CPU hotplug so that we can align the CPU hotplug work in different
archs accordingly and hope to get it merged in the 2.5 time frame?

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-09-09  7:56         ` Bharata B Rao
@ 2015-11-12  9:11           ` Zhu Guihua
  2015-11-12  9:30             ` Bharata B Rao
  0 siblings, 1 reply; 36+ messages in thread
From: Zhu Guihua @ 2015-11-12  9:11 UTC (permalink / raw)
  To: bharata
  Cc: mdroth, aik, guijianfeng, izumi.taku, agraf, qemu-devel,
	qemu-ppc, tyreld, imammedo, nfont, afaerber, kamezawa.hiroyu,
	David Gibson

Hi Bharata,

On 09/09/2015 03:56 PM, Bharata B Rao wrote:
> On Wed, Sep 09, 2015 at 03:41:30PM +0800, Zhu Guihua wrote:
>> On 09/09/2015 01:52 PM, Bharata B Rao wrote:
>>> On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
>>>> On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
>>>>> CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
>>>>> should be removed from cpu_exec_exit().
>>>>>
>>>>> cpu_exec_exit() is called from generic CPU::instance_finalize and some
>>>>> archs like PowerPC call it from CPU unrealizefn. So ensure that we
>>>>> dequeue the cpu only once.
>>>>>
>>>>> Instead of introducing a new field CPUState.queued, I could have used
>>>>> CPUState.cpu_index to check if the cpu is already dequeued from the list.
>>>>> Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
>>>>>
>>>>> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
>>>> This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
>>>> unplug is working without it.
>>> x86 hotplug/unplug code currently resides in Zhu's git tree
>>> (git://github.com/zhugh/qemu). They are removing the CPU from the list
>>> explicitly in x86 CPU's instance_finalize routine.
>> Sorry, my git tree is git://github.com/zhuguihua/qemu
>>
>> There has been no progress on the topology work, so we don't know what will happen
>> for x86. I am not sure whether we will take this approach in the end.
> Andreas had a presentation on this topic at KVM Forum recently.
>
> Andreas - do you have any updates on the topology and other aspects
> of CPU hotplug so that we can align the CPU hotplug work in different
> archs accordingly and hope to get it merged in the 2.5 time frame?

Did you update the patchset?

My work on x86 has stopped for a while; maybe I can get some ideas from
the work being done on other arches.

Thanks,
Zhu

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-11-12  9:11           ` Zhu Guihua
@ 2015-11-12  9:30             ` Bharata B Rao
  2015-11-12  9:41               ` Zhu Guihua
  2015-11-12  9:56               ` Andreas Färber
  0 siblings, 2 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-11-12  9:30 UTC (permalink / raw)
  To: Zhu Guihua
  Cc: mdroth, aik, guijianfeng, izumi.taku, agraf, qemu-devel,
	qemu-ppc, tyreld, imammedo, nfont, afaerber, kamezawa.hiroyu,
	David Gibson

On Thu, Nov 12, 2015 at 05:11:02PM +0800, Zhu Guihua wrote:
> Hi Bharata,
> 
> On 09/09/2015 03:56 PM, Bharata B Rao wrote:
> >On Wed, Sep 09, 2015 at 03:41:30PM +0800, Zhu Guihua wrote:
> >>On 09/09/2015 01:52 PM, Bharata B Rao wrote:
> >>>On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
> >>>>On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
> >>>>>CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
> >>>>>should be removed from cpu_exec_exit().
> >>>>>
> >>>>>cpu_exec_exit() is called from generic CPU::instance_finalize and some
> >>>>>archs like PowerPC call it from CPU unrealizefn. So ensure that we
> >>>>>dequeue the cpu only once.
> >>>>>
> >>>>>Instead of introducing a new field CPUState.queued, I could have used
> >>>>>CPUState.cpu_index to check if the cpu is already dequeued from the list.
> >>>>>Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
> >>>>>
> >>>>>Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
> >>>>This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
> >>>>unplug is working without it.
> >>>x86 hotplug/unplug code currently resides in Zhu's git tree
> >>>(git://github.com/zhugh/qemu). They are removing the CPU from the list
> >>>explicitly in x86 CPU's instance_finalize routine.
> >>Sorry, my git tree is git://github.com/zhuguihua/qemu
> >>
> >>There has been no progress on the topology work, so we don't know what will happen
> >>for x86. I am not sure whether we will take this approach in the end.
> >Andreas had a presentation on this topic at KVM Forum recently.
> >
> >Andreas - do you have any updates on the topology and other aspects
> >of CPU hotplug so that we can align the CPU hotplug work in different
> >archs accordingly and hope to get it merged in the 2.5 time frame?
> 
> Do you update the patchset?
> 
> My work in x86 has stopped for a while, Maybe I can get some ideas from
> another
> arch's worker.

My last version is here:
https://lists.gnu.org/archive/html/qemu-devel/2015-08/msg00650.html

I initially started with core-level CPU hotplug, moved to socket-level
hotplug based on Andreas' patchset, and then moved back to core-level
hotplug.

I was a bit confused about how the generic semantics would evolve, and
hence the work got delayed. I will be posting the next version of my
patchset, based on core-level semantics, soon.

I am hoping to get CPU hotplug/unplug included in the QEMU 2.6
timeframe.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread
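
The dequeue-once behaviour described in the quoted commit message could
look roughly like the sketch below. CPUState.queued is the field the
message names; the CONFIG_USER_ONLY locking and the QTAILQ macros are
assumptions modelled on how cpu_exec_init() inserts into the cpus list,
not the exact posted diff.

#include "qom/cpu.h"    /* CPUState, the global cpus tail queue */

/* Sketch of the patch's intent: dequeue the CPU exactly once, even
 * though cpu_exec_exit() can be reached both from the generic
 * CPU::instance_finalize and, on archs like PowerPC, from the CPU
 * unrealizefn. */
void cpu_exec_exit(CPUState *cpu)
{
#if defined(CONFIG_USER_ONLY)
    cpu_list_lock();
#endif
    if (cpu->queued) {                    /* set by cpu_exec_init() */
        QTAILQ_REMOVE(&cpus, cpu, node);
        cpu->queued = false;              /* a second call becomes a no-op */
    }
#if defined(CONFIG_USER_ONLY)
    cpu_list_unlock();
#endif
}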

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-11-12  9:30             ` Bharata B Rao
@ 2015-11-12  9:41               ` Zhu Guihua
  2015-11-12  9:56               ` Andreas Färber
  1 sibling, 0 replies; 36+ messages in thread
From: Zhu Guihua @ 2015-11-12  9:41 UTC (permalink / raw)
  To: bharata
  Cc: mdroth, aik, guijianfeng, izumi.taku, agraf, qemu-devel,
	qemu-ppc, tyreld, imammedo, nfont, afaerber, kamezawa.hiroyu,
	David Gibson


On 11/12/2015 05:30 PM, Bharata B Rao wrote:
> On Thu, Nov 12, 2015 at 05:11:02PM +0800, Zhu Guihua wrote:
>> Hi Bharata,
>>
>> On 09/09/2015 03:56 PM, Bharata B Rao wrote:
>>> On Wed, Sep 09, 2015 at 03:41:30PM +0800, Zhu Guihua wrote:
>>>> On 09/09/2015 01:52 PM, Bharata B Rao wrote:
>>>>> On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
>>>>>> On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
>>>>>>> CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
>>>>>>> should be removed from cpu_exec_exit().
>>>>>>>
> >>>>>>> cpu_exec_exit() is called from generic CPU::instance_finalize and some
>>>>>>> archs like PowerPC call it from CPU unrealizefn. So ensure that we
>>>>>>> dequeue the cpu only once.
>>>>>>>
>>>>>>> Instead of introducing a new field CPUState.queued, I could have used
>>>>>>> CPUState.cpu_index to check if the cpu is already dequeued from the list.
>>>>>>> Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
>>>>>>>
>>>>>>> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
>>>>>> This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
>>>>>> unplug is working without it.
>>>>> x86 hotplug/unplug code currently resides in Zhu's git tree
>>>>> (git://github.com/zhugh/qemu). They are removing the CPU from the list
>>>>> explicitly in x86 CPU's instance_finalize routine.
>>>> Sorry, my git tree is git://github.com/zhuguihua/qemu
>>>>
> >>>> There has been no progress on topology, so we don't know what will happen
> >>>> for x86. I am not sure whether we will adopt this method in the end.
> >>> Andreas had a presentation on this topic at the KVM Forum recently.
> >>>
> >>> Andreas - do you have any updates on the topology and other aspects
> >>> of CPU hotplug, so that we can align the CPU hotplug work across
> >>> archs and hope to get it merged in the 2.5 time frame?
>> Have you updated the patchset?
>>
>> My work on x86 has stopped for a while; maybe I can get some ideas from
>> the work on other archs.
> My last version is here:
> https://lists.gnu.org/archive/html/qemu-devel/2015-08/msg00650.html
>
> I initially started with core-level CPU hotplug, moved to socket-level
> hotplug based on Andreas' patchset, and then moved back to core-level
> hotplug.
>
> I was a bit confused about how the generic semantics would evolve, and
> hence the work got delayed. I will be posting the next version of my
> patchset, based on core-level semantics, soon.
>
> I am hoping to get CPU hotplug/unplug included in the QEMU 2.6
> timeframe.
>
Thanks for your reply. I look forward to your next version.

Regards,
Zhu

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-11-12  9:30             ` Bharata B Rao
  2015-11-12  9:41               ` Zhu Guihua
@ 2015-11-12  9:56               ` Andreas Färber
  2015-11-12 11:40                 ` Bharata B Rao
  1 sibling, 1 reply; 36+ messages in thread
From: Andreas Färber @ 2015-11-12  9:56 UTC (permalink / raw)
  To: bharata
  Cc: Zhu Guihua, mdroth, aik, guijianfeng, izumi.taku, agraf,
	qemu-devel, qemu-ppc, tyreld, imammedo, nfont, kamezawa.hiroyu,
	David Gibson

Am 12.11.2015 um 10:30 schrieb Bharata B Rao:
> On Thu, Nov 12, 2015 at 05:11:02PM +0800, Zhu Guihua wrote:
>> Hi Bharata,
>>
>> On 09/09/2015 03:56 PM, Bharata B Rao wrote:
>>> On Wed, Sep 09, 2015 at 03:41:30PM +0800, Zhu Guihua wrote:
>>>> On 09/09/2015 01:52 PM, Bharata B Rao wrote:
>>>>> On Fri, Sep 04, 2015 at 03:31:24PM +1000, David Gibson wrote:
>>>>>> On Thu, Aug 06, 2015 at 10:57:07AM +0530, Bharata B Rao wrote:
>>>>>>> CPUState *cpu gets added to the cpus list during cpu_exec_init(). It
>>>>>>> should be removed from cpu_exec_exit().
>>>>>>>
>>>>>>> cpu_exec_exit() is called from generic CPU::instance_finalize and some
>>>>>>> archs like PowerPC call it from CPU unrealizefn. So ensure that we
>>>>>>> dequeue the cpu only once.
>>>>>>>
>>>>>>> Instead of introducing a new field CPUState.queued, I could have used
>>>>>>> CPUState.cpu_index to check if the cpu is already dequeued from the list.
>>>>>>> Since that doesn't work for CONFIG_USER_ONLY, I had to add a new field.
>>>>>>>
>>>>>>> Signed-off-by: Bharata B Rao <bharata@linux.vnet.ibm.com>
>>>>>> This seems reasonable to me, but I'm wondering how x86 cpu hotplug /
>>>>>> unplug is working without it.
>>>>> x86 hotplug/unplug code currently resides in Zhu's git tree
>>>>> (git://github.com/zhugh/qemu). They are removing the CPU from the list
>>>>> explicitly in x86 CPU's instance_finalize routine.
>>>> Sorry, my git tree is git://github.com/zhuguihua/qemu
>>>>
>>>> There has been no progress on topology, so we don't know what will happen
>>>> for x86. I am not sure whether we will adopt this method in the end.
>>> Andreas had a presentation on this topic at the KVM Forum recently.
>>>
>>> Andreas - do you have any updates on the topology and other aspects
>>> of CPU hotplug, so that we can align the CPU hotplug work across
>>> archs and hope to get it merged in the 2.5 time frame?
>>
>> Have you updated the patchset?
>>
>> My work on x86 has stopped for a while; maybe I can get some ideas from
>> the work on other archs.
> 
> My last version is here:
> https://lists.gnu.org/archive/html/qemu-devel/2015-08/msg00650.html
> 
> I initially started with core-level CPU hotplug, moved to socket-level
> hotplug based on Andreas' patchset, and then moved back to core-level
> hotplug.
>
> I was a bit confused about how the generic semantics would evolve, and
> hence the work got delayed. I will be posting the next version of my
> patchset, based on core-level semantics, soon.

What I recall as the conclusion from the KVM Forum session and previous
discussions was that pseries would operate at core level (i.e., at the
granularity of two SMT threads), whereas your first try was at thread
level and then at socket level.

Regards,
Andreas

> I am hoping to get CPU hotplug/unplug included in the QEMU 2.6
> timeframe.

If there are preparatory patches ready for inclusion today, please point
me to them urgently.

Thanks,
Andreas

-- 
SUSE Linux GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Felix Imendörffer, Jane Smithard, Graham Norton; HRB 21284 (AG Nürnberg)

^ permalink raw reply	[flat|nested] 36+ messages in thread
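
To illustrate the core-level granularity Andreas describes: one hotplug
request covers every SMT thread of a core, so with cpu_index allocated
contiguously per thread, core membership falls out of simple arithmetic.
The helper below is purely illustrative -- its name and layout assumptions
appear in no posted patch.

#include "qom/cpu.h"    /* CPUState */

/* Hypothetical helper: with smp_threads SMT threads per core and
 * contiguous cpu_index allocation, all threads of one core map to the
 * same core index -- the unit a core-granularity hotplug request
 * operates on. */
static inline int cpu_core_index(const CPUState *cs, int smp_threads)
{
    return cs->cpu_index / smp_threads;
}

/* e.g. with smp_threads=8, cpu_index 8..15 all belong to core 1. */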

* Re: [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit()
  2015-11-12  9:56               ` Andreas Färber
@ 2015-11-12 11:40                 ` Bharata B Rao
  0 siblings, 0 replies; 36+ messages in thread
From: Bharata B Rao @ 2015-11-12 11:40 UTC (permalink / raw)
  To: Andreas Färber
  Cc: Zhu Guihua, mdroth, aik, guijianfeng, izumi.taku, agraf,
	qemu-devel, qemu-ppc, tyreld, imammedo, nfont, kamezawa.hiroyu,
	David Gibson

On Thu, Nov 12, 2015 at 10:56:50AM +0100, Andreas Färber wrote:
> <snip> 
> > I am hoping that I should be able to get CPU hotplug/unplug included
> > in QEMU-2.6 timeframe.
> 
> If there are preparatory patches ready for inclusion today, please point
> me to them urgently.

Thanks. I do have some generic changes, but I will push them during the
2.6 development cycle.

Regards,
Bharata.

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2015-11-12 11:40 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-08-06  5:27 [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 01/11] exec: Remove cpu from cpus list during cpu_exec_exit() Bharata B Rao
2015-09-04  5:31   ` David Gibson
2015-09-09  5:52     ` Bharata B Rao
2015-09-09  7:41       ` Zhu Guihua
2015-09-09  7:56         ` Bharata B Rao
2015-11-12  9:11           ` Zhu Guihua
2015-11-12  9:30             ` Bharata B Rao
2015-11-12  9:41               ` Zhu Guihua
2015-11-12  9:56               ` Andreas Färber
2015-11-12 11:40                 ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 02/11] exec: Do vmstate unregistration from cpu_exec_exit() Bharata B Rao
2015-09-04  6:03   ` David Gibson
2015-09-09  5:56     ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 03/11] cpus: Reclaim vCPU objects Bharata B Rao
2015-09-04  6:09   ` David Gibson
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 04/11] cpus: Add a sync version of cpu_remove() Bharata B Rao
2015-09-04  6:11   ` David Gibson
2015-09-09  5:57     ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 05/11] xics_kvm: Add cpu_destroy method to XICS Bharata B Rao
2015-08-07 11:33   ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 06/11] spapr: Create pseries-2.5 machine Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 07/11] spapr: Enable CPU hotplug for pseries-2.5 and add CPU DRC DT entries Bharata B Rao
2015-09-04  6:28   ` David Gibson
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 08/11] spapr: CPU hotplug support Bharata B Rao
2015-09-04  6:58   ` David Gibson
2015-09-09  6:52     ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 09/11] spapr: Support topologies with unfilled cores Bharata B Rao
2015-09-04  7:01   ` David Gibson
2015-09-04  8:44     ` Thomas Huth
2015-09-09  6:58       ` Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 10/11] spapr: CPU hot unplug support Bharata B Rao
2015-08-06  5:27 ` [Qemu-devel] [RFC PATCH v4 11/11] target-ppc: Enable CPU hotplug for POWER8 CPU family Bharata B Rao
2015-08-06  8:42 ` [Qemu-devel] [RFC PATCH v4 00/11] sPAPR CPU hotplug Zhu Guihua
2015-08-10  3:31   ` Bharata B Rao
2015-08-12  2:56 ` David Gibson
