* [Qemu-devel] [0/8] pseries: savevm / migration support
@ 2013-05-03  1:38 David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 1/8] savevm: Implement VMS_DIVIDE flag David Gibson
                   ` (7 more replies)
  0 siblings, 8 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel

So, it's obviously too late to merge these for the 1.5 release.  But
I'm sending them so that they're out there and with any luck we can
merge them early in the 1.6 cycle.

These patches have been baking in my internal tree for a while, and
seem to be pretty solid.


* [Qemu-devel] [PATCH 1/8] savevm: Implement VMS_DIVIDE flag
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription David Gibson
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

The vmstate infrastructure includes a VMS_MULTIPLY flag and an associated
VMSTATE_VBUFFER_MULTIPLY helper macro.  These can be used to save a
variably sized buffer where the size in bytes of the buffer isn't directly
accessible as a structure field, but an element count from which the size
can be derived is.

This patch adds an analogous VMS_DIVIDE option, which handles a variably
sized buffer whose size is a submultiple of a field, rather than a
multiple.  For example, a buffer containing per-page structures, whose
size is derived from a field storing the total address space those
structures describe, could use this construct.
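
For illustration only -- the device and field names below are invented,
not part of this patch -- a device keeping one byte of state per 4 KiB
page of a mem_size-byte region could describe its buffer like this:

    typedef struct HypotheticalDev {
        uint32_t mem_size;    /* total bytes of address space described */
        uint8_t *page_flags;  /* one byte of state per 4 KiB page */
    } HypotheticalDev;

    /* Entry in the device's VMStateField list: the buffer's length in
     * bytes is derived at save/load time as mem_size / 4096. */
    VMSTATE_VBUFFER_DIVIDE(page_flags, HypotheticalDev,
                           0,        /* version_id */
                           NULL,     /* field_exists test */
                           0,        /* start offset */
                           mem_size, /* uint32_t field holding total bytes */
                           4096),    /* address-space bytes per buffer byte */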

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 include/migration/vmstate.h |   13 +++++++++++++
 savevm.c                    |    8 ++++++++
 2 files changed, 21 insertions(+)

diff --git a/include/migration/vmstate.h b/include/migration/vmstate.h
index ebc4d09..787f1cb 100644
--- a/include/migration/vmstate.h
+++ b/include/migration/vmstate.h
@@ -98,6 +98,7 @@ enum VMStateFlags {
     VMS_MULTIPLY         = 0x200,  /* multiply "size" field by field_size */
     VMS_VARRAY_UINT8     = 0x400,  /* Array with size in uint8_t field*/
     VMS_VARRAY_UINT32    = 0x800,  /* Array with size in uint32_t field*/
+    VMS_DIVIDE           = 0x1000, /* divide "size" field by field_size */
 };
 
 typedef struct {
@@ -420,6 +421,18 @@ extern const VMStateInfo vmstate_info_bitmap;
     .start        = (_start),                                        \
 }
 
+#define VMSTATE_VBUFFER_DIVIDE(_field, _state, _version, _test, _start, _field_size, _divide) { \
+    .name         = (stringify(_field)),                             \
+    .version_id   = (_version),                                      \
+    .field_exists = (_test),                                         \
+    .size_offset  = vmstate_offset_value(_state, _field_size, uint32_t),\
+    .size         = (_divide),                                       \
+    .info         = &vmstate_info_buffer,                            \
+    .flags        = VMS_VBUFFER|VMS_POINTER|VMS_DIVIDE,              \
+    .offset       = offsetof(_state, _field),                        \
+    .start        = (_start),                                        \
+}
+
 #define VMSTATE_VBUFFER(_field, _state, _version, _test, _start, _field_size) { \
     .name         = (stringify(_field)),                             \
     .version_id   = (_version),                                      \
diff --git a/savevm.c b/savevm.c
index 31dcce9..750b1cb 100644
--- a/savevm.c
+++ b/savevm.c
@@ -1655,6 +1655,10 @@ int vmstate_load_state(QEMUFile *f, const VMStateDescription *vmsd,
                 if (field->flags & VMS_MULTIPLY) {
                     size *= field->size;
                 }
+                if (field->flags & VMS_DIVIDE) {
+                    assert((size % field->size) == 0);
+                    size /= field->size;
+                }
             }
             if (field->flags & VMS_ARRAY) {
                 n_elems = field->num;
@@ -1719,6 +1723,10 @@ void vmstate_save_state(QEMUFile *f, const VMStateDescription *vmsd,
                 if (field->flags & VMS_MULTIPLY) {
                     size *= field->size;
                 }
+                if (field->flags & VMS_DIVIDE) {
+                    assert((size % field->size) == 0);
+                    size /= field->size;
+                }
             }
             if (field->flags & VMS_ARRAY) {
                 n_elems = field->num;
-- 
1.7.10.4


* [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 1/8] savevm: Implement VMS_DIVIDE flag David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03 11:29   ` Andreas Färber
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 3/8] pseries: savevm support for XICS interrupt controller David Gibson
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

The savevm code for the powerpc cpu emulation is currently based around
the old register_savevm() rather than vmstate_register() method.  It's also
rather broken, missing some important state on some CPU models.

This patch completely rewrites the savevm for target-ppc, using the new
VMStateDescription approach.  Exactly what needs to be saved in what
configurations has been more carefully examined, too.  This introduces a
new version (5) of the cpu save format.  The old load function is retained
to support version 4 images.
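
Roughly, the difference between the two mechanisms looks like this (a
sketch only, not the exact call sites touched by this patch; cpu_index
and env stand for the per-CPU instance id and state):

    /* Old style: a pair of hand-written streaming callbacks that must be
     * kept in sync with each other field by field. */
    register_savevm(NULL, "cpu", cpu_index, CPU_SAVE_VERSION,
                    cpu_save, cpu_load, env);

    /* New style: one declarative VMStateDescription drives both save and
     * load, with versioning and optional subsections built in. */
    vmstate_register(NULL, cpu_index, &vmstate_cpu, env);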

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/cpu.h     |    9 +-
 target-ppc/machine.c |  542 ++++++++++++++++++++++++++++++++++++++++++--------
 2 files changed, 460 insertions(+), 91 deletions(-)

diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
index 7cacb56..d945aa8 100644
--- a/target-ppc/cpu.h
+++ b/target-ppc/cpu.h
@@ -943,7 +943,7 @@ struct CPUPPCState {
 #if defined(TARGET_PPC64)
     /* PowerPC 64 SLB area */
     ppc_slb_t slb[64];
-    int slb_nr;
+    int32_t slb_nr;
 #endif
     /* segment registers */
     hwaddr htab_base;
@@ -952,11 +952,11 @@ struct CPUPPCState {
     /* externally stored hash table */
     uint8_t *external_htab;
     /* BATs */
-    int nb_BATs;
+    uint32_t nb_BATs;
     target_ulong DBAT[2][8];
     target_ulong IBAT[2][8];
     /* PowerPC TLB registers (for 4xx, e500 and 60x software driven TLBs) */
-    int nb_tlb;      /* Total number of TLB                                  */
+    int32_t nb_tlb;      /* Total number of TLB                              */
     int tlb_per_way; /* Speed-up helper: used to avoid divisions at run time */
     int nb_ways;     /* Number of ways in the TLB set                        */
     int last_way;    /* Last used way used to allocate TLB in a LRU way      */
@@ -973,6 +973,7 @@ struct CPUPPCState {
     /* Other registers */
     /* Special purpose registers */
     target_ulong spr[1024];
+    uint32_t cr; /* Full CR value used during vmsave/load */
     ppc_spr_t spr_cb[1024];
     /* Altivec registers */
     ppc_avr_t avr[32];
@@ -1171,7 +1172,7 @@ static inline CPUPPCState *cpu_init(const char *cpu_model)
 #define cpu_signal_handler cpu_ppc_signal_handler
 #define cpu_list ppc_cpu_list
 
-#define CPU_SAVE_VERSION 4
+#define CPU_SAVE_VERSION 5
 
 /* MMU modes definitions */
 #define MMU_MODE0_SUFFIX _user
diff --git a/target-ppc/machine.c b/target-ppc/machine.c
index 2d10adb..594fe6a 100644
--- a/target-ppc/machine.c
+++ b/target-ppc/machine.c
@@ -1,94 +1,9 @@
 #include "hw/hw.h"
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
+#include "helper_regs.h"
 
-void cpu_save(QEMUFile *f, void *opaque)
-{
-    CPUPPCState *env = (CPUPPCState *)opaque;
-    unsigned int i, j;
-    uint32_t fpscr;
-    target_ulong xer;
-
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->gpr[i]);
-#if !defined(TARGET_PPC64)
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->gprh[i]);
-#endif
-    qemu_put_betls(f, &env->lr);
-    qemu_put_betls(f, &env->ctr);
-    for (i = 0; i < 8; i++)
-        qemu_put_be32s(f, &env->crf[i]);
-    xer = cpu_read_xer(env);
-    qemu_put_betls(f, &xer);
-    qemu_put_betls(f, &env->reserve_addr);
-    qemu_put_betls(f, &env->msr);
-    for (i = 0; i < 4; i++)
-        qemu_put_betls(f, &env->tgpr[i]);
-    for (i = 0; i < 32; i++) {
-        union {
-            float64 d;
-            uint64_t l;
-        } u;
-        u.d = env->fpr[i];
-        qemu_put_be64(f, u.l);
-    }
-    fpscr = env->fpscr;
-    qemu_put_be32s(f, &fpscr);
-    qemu_put_sbe32s(f, &env->access_type);
-#if defined(TARGET_PPC64)
-    qemu_put_betls(f, &env->spr[SPR_ASR]);
-    qemu_put_sbe32s(f, &env->slb_nr);
-#endif
-    qemu_put_betls(f, &env->spr[SPR_SDR1]);
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->sr[i]);
-    for (i = 0; i < 2; i++)
-        for (j = 0; j < 8; j++)
-            qemu_put_betls(f, &env->DBAT[i][j]);
-    for (i = 0; i < 2; i++)
-        for (j = 0; j < 8; j++)
-            qemu_put_betls(f, &env->IBAT[i][j]);
-    qemu_put_sbe32s(f, &env->nb_tlb);
-    qemu_put_sbe32s(f, &env->tlb_per_way);
-    qemu_put_sbe32s(f, &env->nb_ways);
-    qemu_put_sbe32s(f, &env->last_way);
-    qemu_put_sbe32s(f, &env->id_tlbs);
-    qemu_put_sbe32s(f, &env->nb_pids);
-    if (env->tlb.tlb6) {
-        // XXX assumes 6xx
-        for (i = 0; i < env->nb_tlb; i++) {
-            qemu_put_betls(f, &env->tlb.tlb6[i].pte0);
-            qemu_put_betls(f, &env->tlb.tlb6[i].pte1);
-            qemu_put_betls(f, &env->tlb.tlb6[i].EPN);
-        }
-    }
-    for (i = 0; i < 4; i++)
-        qemu_put_betls(f, &env->pb[i]);
-    for (i = 0; i < 1024; i++)
-        qemu_put_betls(f, &env->spr[i]);
-    qemu_put_be32s(f, &env->vscr);
-    qemu_put_be64s(f, &env->spe_acc);
-    qemu_put_be32s(f, &env->spe_fscr);
-    qemu_put_betls(f, &env->msr_mask);
-    qemu_put_be32s(f, &env->flags);
-    qemu_put_sbe32s(f, &env->error_code);
-    qemu_put_be32s(f, &env->pending_interrupts);
-    qemu_put_be32s(f, &env->irq_input_state);
-    for (i = 0; i < POWERPC_EXCP_NB; i++)
-        qemu_put_betls(f, &env->excp_vectors[i]);
-    qemu_put_betls(f, &env->excp_prefix);
-    qemu_put_betls(f, &env->ivor_mask);
-    qemu_put_betls(f, &env->ivpr_mask);
-    qemu_put_betls(f, &env->hreset_vector);
-    qemu_put_betls(f, &env->nip);
-    qemu_put_betls(f, &env->hflags);
-    qemu_put_betls(f, &env->hflags_nmsr);
-    qemu_put_sbe32s(f, &env->mmu_idx);
-    qemu_put_sbe32(f, 0);
-}
-
-int cpu_load(QEMUFile *f, void *opaque, int version_id)
+static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
 {
     CPUPPCState *env = (CPUPPCState *)opaque;
     unsigned int i, j;
@@ -177,3 +92,456 @@ int cpu_load(QEMUFile *f, void *opaque, int version_id)
 
     return 0;
 }
+
+static int get_avr(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_avr_t *v = pv;
+
+    v->u64[0] = qemu_get_be64(f);
+    v->u64[1] = qemu_get_be64(f);
+
+    return 0;
+}
+
+static void put_avr(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_avr_t *v = pv;
+
+    qemu_put_be64(f, v->u64[0]);
+    qemu_put_be64(f, v->u64[1]);
+}
+
+const VMStateInfo vmstate_info_avr = {
+    .name = "avr",
+    .get  = get_avr,
+    .put  = put_avr,
+};
+
+#define VMSTATE_AVR_ARRAY_V(_f, _s, _n, _v)                       \
+    VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_avr, ppc_avr_t)
+
+#define VMSTATE_AVR_ARRAY(_f, _s, _n)                             \
+    VMSTATE_AVR_ARRAY_V(_f, _s, _n, 0)
+
+static void cpu_pre_save(void *opaque)
+{
+    CPUPPCState *env = opaque;
+    int i;
+
+    env->spr[SPR_LR] = env->lr;
+    env->spr[SPR_CTR] = env->ctr;
+    env->spr[SPR_XER] = env->xer;
+#if defined(TARGET_PPC64)
+    env->spr[SPR_CFAR] = env->cfar;
+#endif
+    env->spr[SPR_BOOKE_SPEFSCR] = env->spe_fscr;
+
+    env->cr = 0;
+    for (i = 0; i < 8; i++) {
+        env->cr = (env->cr << 4) | (env->crf[i] & 0xf);
+    }
+
+    for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
+        env->spr[SPR_DBAT0U + 2*i] = env->DBAT[0][i];
+        env->spr[SPR_DBAT0U + 2*i + 1] = env->DBAT[1][i];
+        env->spr[SPR_IBAT0U + 2*i] = env->IBAT[0][i];
+        env->spr[SPR_IBAT0U + 2*i + 1] = env->IBAT[1][i];
+    }
+    for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
+        env->spr[SPR_DBAT4U + 2*i] = env->DBAT[0][i+4];
+        env->spr[SPR_DBAT4U + 2*i + 1] = env->DBAT[1][i+4];
+        env->spr[SPR_IBAT4U + 2*i] = env->IBAT[0][i+4];
+        env->spr[SPR_IBAT4U + 2*i + 1] = env->IBAT[1][i+4];
+    }
+}
+
+static int cpu_post_load(void *opaque, int version_id)
+{
+    CPUPPCState *env = opaque;
+    int i;
+
+    env->lr = env->spr[SPR_LR];
+    env->ctr = env->spr[SPR_CTR];
+    env->xer = env->spr[SPR_XER];
+#if defined(TARGET_PPC64)
+    env->cfar = env->spr[SPR_CFAR];
+#endif
+    env->spe_fscr = env->spr[SPR_BOOKE_SPEFSCR];
+
+    for (i = 0; i < 8; i++) {
+        env->crf[i] = env->cr >> (4*(7-i)) & 0xf;
+    }
+
+    for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
+        env->DBAT[0][i] = env->spr[SPR_DBAT0U + 2*i];
+        env->DBAT[1][i] = env->spr[SPR_DBAT0U + 2*i + 1];
+        env->IBAT[0][i] = env->spr[SPR_IBAT0U + 2*i];
+        env->IBAT[1][i] = env->spr[SPR_IBAT0U + 2*i + 1];
+    }
+    for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
+        env->DBAT[0][i+4] = env->spr[SPR_DBAT4U + 2*i];
+        env->DBAT[1][i+4] = env->spr[SPR_DBAT4U + 2*i + 1];
+        env->IBAT[0][i+4] = env->spr[SPR_IBAT4U + 2*i];
+        env->IBAT[1][i+4] = env->spr[SPR_IBAT4U + 2*i + 1];
+    }
+
+    /* Restore htab_base and htab_mask variables */
+    ppc_store_sdr1(env, env->spr[SPR_SDR1]);
+
+    hreg_compute_hflags(env);
+    hreg_compute_mem_idx(env);
+
+    return 0;
+}
+
+static bool fpu_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags & PPC_FLOAT);
+}
+
+static const VMStateDescription vmstate_fpu = {
+    .name = "cpu/fpu",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_FLOAT64_ARRAY(fpr, CPUPPCState, 32),
+        VMSTATE_UINTTL(fpscr, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool altivec_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags & PPC_ALTIVEC);
+}
+
+static const VMStateDescription vmstate_altivec = {
+    .name = "cpu/altivec",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_AVR_ARRAY(avr, CPUPPCState, 32),
+        VMSTATE_UINT32(vscr, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool vsx_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags2 & PPC2_VSX);
+}
+
+static const VMStateDescription vmstate_vsx = {
+    .name = "cpu/vsx",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT64_ARRAY(vsr, CPUPPCState, 32),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool sr_needed(void *opaque)
+{
+#ifdef TARGET_PPC64
+    CPUPPCState *env = opaque;
+
+    return !(env->mmu_model & POWERPC_MMU_64);
+#else
+    return true;
+#endif
+}
+
+static const VMStateDescription vmstate_sr = {
+    .name = "cpu/sr",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL_ARRAY(sr, CPUPPCState, 32),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+#ifdef TARGET_PPC64
+static int get_slbe(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_slb_t *v = pv;
+
+    v->esid = qemu_get_be64(f);
+    v->vsid = qemu_get_be64(f);
+
+    return 0;
+}
+
+static void put_slbe(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_slb_t *v = pv;
+
+    qemu_put_be64(f, v->esid);
+    qemu_put_be64(f, v->vsid);
+}
+
+const VMStateInfo vmstate_info_slbe = {
+    .name = "slbe",
+    .get  = get_slbe,
+    .put  = put_slbe,
+};
+
+#define VMSTATE_SLB_ARRAY_V(_f, _s, _n, _v)                       \
+    VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_slbe, ppc_slb_t)
+
+#define VMSTATE_SLB_ARRAY(_f, _s, _n)                             \
+    VMSTATE_SLB_ARRAY_V(_f, _s, _n, 0)
+
+static bool slb_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    /* We don't support any of the old segment table based 64-bit CPUs */
+    return (env->mmu_model & POWERPC_MMU_64);
+}
+
+static const VMStateDescription vmstate_slb = {
+    .name = "cpu/slb",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(slb_nr, CPUPPCState),
+        VMSTATE_SLB_ARRAY(slb, CPUPPCState, 64),
+        VMSTATE_END_OF_LIST()
+    }
+};
+#endif /* TARGET_PPC64 */
+
+static const VMStateDescription vmstate_tlb6xx_entry = {
+    .name = "cpu/tlb6xx_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL(pte0, ppc6xx_tlb_t),
+        VMSTATE_UINTTL(pte1, ppc6xx_tlb_t),
+        VMSTATE_UINTTL(EPN, ppc6xx_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlb6xx_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_6XX);
+}
+
+static const VMStateDescription vmstate_tlb6xx = {
+    .name = "cpu/tlb6xx",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlb6, CPUPPCState, nb_tlb,
+                                            vmstate_tlb6xx_entry,
+                                            ppc6xx_tlb_t),
+        VMSTATE_UINTTL_ARRAY(tgpr, CPUPPCState, 4),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_tlbemb_entry = {
+    .name = "cpu/tlbemb_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT64(RPN, ppcemb_tlb_t),
+        VMSTATE_UINTTL(EPN, ppcemb_tlb_t),
+        VMSTATE_UINTTL(PID, ppcemb_tlb_t),
+        VMSTATE_UINTTL(size, ppcemb_tlb_t),
+        VMSTATE_UINT32(prot, ppcemb_tlb_t),
+        VMSTATE_UINT32(attr, ppcemb_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlbemb_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_EMB);
+}
+
+static bool pbr403_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+    uint32_t pvr = env->spr[SPR_PVR];
+
+    return (pvr & 0xffff0000) == 0x00200000;
+}
+
+static const VMStateDescription vmstate_pbr403 = {
+    .name = "cpu/pbr403",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL_ARRAY(pb, CPUPPCState, 4),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_tlbemb = {
+    .name = "cpu/tlb6xx",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlbe, CPUPPCState, nb_tlb,
+                                            vmstate_tlbemb_entry,
+                                            ppcemb_tlb_t),
+        /* 403 protection registers */
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (VMStateSubsection []) {
+        {
+            .vmsd = &vmstate_pbr403,
+            .needed = pbr403_needed,
+        } , {
+            /* empty */
+        }
+    }
+};
+
+static const VMStateDescription vmstate_tlbmas_entry = {
+    .name = "cpu/tlbmas_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT32(mas8, ppcmas_tlb_t),
+        VMSTATE_UINT32(mas1, ppcmas_tlb_t),
+        VMSTATE_UINT64(mas2, ppcmas_tlb_t),
+        VMSTATE_UINT64(mas7_3, ppcmas_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlbmas_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_MAS);
+}
+
+static const VMStateDescription vmstate_tlbmas = {
+    .name = "cpu/tlbmas",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlbm, CPUPPCState, nb_tlb,
+                                            vmstate_tlbmas_entry,
+                                            ppcmas_tlb_t),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_cpu = {
+    .name = "cpu",
+    .version_id = CPU_SAVE_VERSION,
+    .minimum_version_id = 5,
+    .minimum_version_id_old = 4,
+    .load_state_old = cpu_load_old,
+    .pre_save = cpu_pre_save,
+    .post_load = cpu_post_load,
+    .fields      = (VMStateField []) {
+        /* Verify we haven't changed the pvr */
+        VMSTATE_UINTTL_EQUAL(spr[SPR_PVR], CPUPPCState),
+
+        /* User mode architected state */
+        VMSTATE_UINTTL_ARRAY(gpr, CPUPPCState, 32),
+#if !defined(TARGET_PPC64)
+        VMSTATE_UINTTL_ARRAY(gprh, CPUPPCState, 32),
+#endif
+        VMSTATE_UINT32(cr, CPUPPCState),
+        VMSTATE_UINTTL(nip, CPUPPCState),
+
+        /* SPRs */
+        VMSTATE_UINTTL_ARRAY(spr, CPUPPCState, 1024),
+        VMSTATE_UINT64(spe_acc, CPUPPCState),
+
+        /* Reservation */
+        VMSTATE_UINTTL(reserve_addr, CPUPPCState),
+
+        /* Supervisor mode architected state */
+        VMSTATE_UINTTL(msr, CPUPPCState),
+
+        /* Internal state */
+        VMSTATE_UINTTL(hflags_nmsr, CPUPPCState),
+        /* FIXME: access_type? */
+
+        /* Sanity checking */
+        VMSTATE_UINTTL_EQUAL(msr_mask, CPUPPCState),
+        VMSTATE_UINT64_EQUAL(insns_flags, CPUPPCState),
+        VMSTATE_UINT64_EQUAL(insns_flags2, CPUPPCState),
+        VMSTATE_UINT32_EQUAL(nb_BATs, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (VMStateSubsection []) {
+        {
+            .vmsd = &vmstate_fpu,
+            .needed = fpu_needed,
+        } , {
+            .vmsd = &vmstate_altivec,
+            .needed = altivec_needed,
+        } , {
+            .vmsd = &vmstate_vsx,
+            .needed = vsx_needed,
+        } , {
+            .vmsd = &vmstate_sr,
+            .needed = sr_needed,
+        } , {
+#ifdef TARGET_PPC64
+            .vmsd = &vmstate_slb,
+            .needed = slb_needed,
+        } , {
+#endif /* TARGET_PPC64 */
+            .vmsd = &vmstate_tlb6xx,
+            .needed = tlb6xx_needed,
+        } , {
+            .vmsd = &vmstate_tlbemb,
+            .needed = tlbemb_needed,
+        } , {
+            .vmsd = &vmstate_tlbmas,
+            .needed = tlbmas_needed,
+        } , {
+            /* FIXME: DCRs? */
+            /* FIXME: timebase? */
+            /* empty */
+        }
+    }
+};
+
+void cpu_save(QEMUFile *f, void *opaque)
+{
+    vmstate_save_state(f, &vmstate_cpu, opaque);
+
+}
+
+int cpu_load(QEMUFile *f, void *opaque, int version_id)
+{
+    return vmstate_load_state(f, &vmstate_cpu, opaque, version_id);
+}
-- 
1.7.10.4


* [Qemu-devel] [PATCH 3/8] pseries: savevm support for XICS interrupt controller
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 1/8] savevm: Implement VMS_DIVIDE flag David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 4/8] pseries: savevm support for VIO devices David Gibson
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This patch adds the necessary VMStateDescription information to support
savevm/loadvm for the XICS interrupt controller used on the pseries
machine.
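
One side effect visible in the first hunks below: vmstate field macros
encode an exact on-the-wire width, so the struct fields they name must
use fixed-width types.  That is why nr_servers, server, nr_irqs and
offset change from int/long to uint32_t in this patch, e.g.:

    /* VMSTATE_UINT32() always migrates exactly four bytes, so the field
     * it refers to must really be a uint32_t. */
    VMSTATE_UINT32(server, struct ics_irq_state),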

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/xics.c |   57 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 53 insertions(+), 4 deletions(-)

diff --git a/hw/ppc/xics.c b/hw/ppc/xics.c
index 1b25075..2f6ca54 100644
--- a/hw/ppc/xics.c
+++ b/hw/ppc/xics.c
@@ -50,7 +50,7 @@ struct icp_server_state {
 struct ics_state;
 
 struct icp_state {
-    long nr_servers;
+    uint32_t nr_servers;
     struct icp_server_state *ss;
     struct ics_state *ics;
 };
@@ -173,7 +173,7 @@ static void icp_irq(struct icp_state *icp, int server, int nr, uint8_t priority)
  */
 
 struct ics_irq_state {
-    int server;
+    uint32_t server;
     uint8_t priority;
     uint8_t saved_priority;
 #define XICS_STATUS_ASSERTED           0x1
@@ -184,8 +184,8 @@ struct ics_irq_state {
 };
 
 struct ics_state {
-    int nr_irqs;
-    int offset;
+    uint32_t nr_irqs;
+    uint32_t offset;
     qemu_irq *qirqs;
     bool *islsi;
     struct ics_irq_state *irqs;
@@ -523,6 +523,48 @@ static void xics_reset(void *opaque)
     }
 }
 
+static const VMStateDescription vmstate_icp_server = {
+    .name = "icp/server",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        /* Sanity check */
+        VMSTATE_UINT32(xirr, struct icp_server_state),
+        VMSTATE_UINT8(pending_priority, struct icp_server_state),
+        VMSTATE_UINT8(mfrr, struct icp_server_state),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_ics_irq = {
+    .name = "ics/irq",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT32(server, struct ics_irq_state),
+        VMSTATE_UINT8(priority, struct ics_irq_state),
+        VMSTATE_UINT8(saved_priority, struct ics_irq_state),
+        VMSTATE_UINT8(status, struct ics_irq_state),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_ics = {
+    .name = "ics",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        /* Sanity check */
+        VMSTATE_UINT32_EQUAL(nr_irqs, struct ics_state),
+
+        VMSTATE_STRUCT_VARRAY_POINTER_UINT32(irqs, struct ics_state, nr_irqs, vmstate_ics_irq, struct ics_irq_state),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 void xics_cpu_setup(struct icp_state *icp, PowerPCCPU *cpu)
 {
     CPUState *cs = CPU(cpu);
@@ -545,6 +587,8 @@ void xics_cpu_setup(struct icp_state *icp, PowerPCCPU *cpu)
                 "bus model\n");
         abort();
     }
+
+    vmstate_register(NULL, cs->cpu_index, &vmstate_icp_server, ss);
 }
 
 struct icp_state *xics_system_init(int nr_servers, int nr_irqs)
@@ -579,5 +623,10 @@ struct icp_state *xics_system_init(int nr_servers, int nr_irqs)
 
     qemu_register_reset(xics_reset, icp);
 
+    /* We use each ICS's offset into the global irq number space
+     * as an instance id.  This means we can extend to multiple ICS
+     * instances without needing to change the savevm format */
+    vmstate_register(NULL, ics->offset, &vmstate_ics, ics);
+
     return icp;
 }
-- 
1.7.10.4


* [Qemu-devel] [PATCH 4/8] pseries: savevm support for VIO devices
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
                   ` (2 preceding siblings ...)
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 3/8] pseries: savevm support for XICS interrupt controller David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 5/8] pseries: savevm support for PAPR VIO logical lan David Gibson
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This patch adds helpers to allow PAPR VIO devices to save state common
to all VIO devices during savevm.
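
The helper macro wraps VMSTATE_STRUCT, so a VIO device pulls the common
state into its own VMStateDescription simply by listing it first in its
field list.  A sketch of the pattern (the device and struct names here
are invented; the next patch does exactly this for the vty and llan
devices):

    static const VMStateDescription vmstate_some_vio_dev = {
        .name = "some_vio_dev",
        .version_id = 1,
        .minimum_version_id = 1,
        .fields = (VMStateField []) {
            /* Common VIO state first, then device-specific fields */
            VMSTATE_SPAPR_VIO(sdev, SomeVIODeviceState),
            VMSTATE_END_OF_LIST()
        },
    };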

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_vio.c         |   20 ++++++++++++++++++++
 include/hw/ppc/spapr_vio.h |    5 +++++
 2 files changed, 25 insertions(+)

diff --git a/hw/ppc/spapr_vio.c b/hw/ppc/spapr_vio.c
index 1405c32..d13a15b 100644
--- a/hw/ppc/spapr_vio.c
+++ b/hw/ppc/spapr_vio.c
@@ -543,6 +543,26 @@ static const TypeInfo spapr_vio_bridge_info = {
     .class_init    = spapr_vio_bridge_class_init,
 };
 
+const VMStateDescription vmstate_spapr_vio = {
+    .name = "spapr_vio",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        /* Sanity check */
+        VMSTATE_UINT32_EQUAL(reg, VIOsPAPRDevice),
+        VMSTATE_UINT32_EQUAL(irq, VIOsPAPRDevice),
+
+        /* General VIO device state */
+        VMSTATE_UINTTL(signal_state, VIOsPAPRDevice),
+        VMSTATE_UINT64(crq.qladdr, VIOsPAPRDevice),
+        VMSTATE_UINT32(crq.qsize, VIOsPAPRDevice),
+        VMSTATE_UINT32(crq.qnext, VIOsPAPRDevice),
+
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static void vio_spapr_device_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *k = DEVICE_CLASS(klass);
diff --git a/include/hw/ppc/spapr_vio.h b/include/hw/ppc/spapr_vio.h
index f98ec0a..01ad26c 100644
--- a/include/hw/ppc/spapr_vio.h
+++ b/include/hw/ppc/spapr_vio.h
@@ -133,4 +133,9 @@ VIOsPAPRDevice *spapr_vty_get_default(VIOsPAPRBus *bus);
 
 void spapr_vio_quiesce(void);
 
+extern const VMStateDescription vmstate_spapr_vio;
+
+#define VMSTATE_SPAPR_VIO(_f, _s) \
+    VMSTATE_STRUCT(_f, _s, 0, vmstate_spapr_vio, VIOsPAPRDevice)
+
 #endif /* _HW_SPAPR_VIO_H */
-- 
1.7.10.4


* [Qemu-devel] [PATCH 5/8] pseries: savevm support for PAPR VIO logical lan
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
                   ` (3 preceding siblings ...)
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 4/8] pseries: savevm support for VIO devices David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 6/8] pseries: savevm support for PAPR TCE tables David Gibson
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This patch adds the necessary VMStateDescription information to support
savevm/loadvm for the spapr_llan (PAPR logical lan) device.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/char/spapr_vty.c |   16 ++++++++++++++++
 hw/net/spapr_llan.c |   24 ++++++++++++++++++++++--
 2 files changed, 38 insertions(+), 2 deletions(-)

diff --git a/hw/char/spapr_vty.c b/hw/char/spapr_vty.c
index 2993848..a799721 100644
--- a/hw/char/spapr_vty.c
+++ b/hw/char/spapr_vty.c
@@ -142,6 +142,21 @@ static Property spapr_vty_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+static const VMStateDescription vmstate_spapr_vty = {
+    .name = "spapr_vty",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_SPAPR_VIO(sdev, VIOsPAPRVTYDevice),
+
+        VMSTATE_UINT32(in, VIOsPAPRVTYDevice),
+        VMSTATE_UINT32(out, VIOsPAPRVTYDevice),
+        VMSTATE_BUFFER(buf, VIOsPAPRVTYDevice),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static void spapr_vty_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -152,6 +167,7 @@ static void spapr_vty_class_init(ObjectClass *klass, void *data)
     k->dt_type = "serial";
     k->dt_compatible = "hvterm1";
     dc->props = spapr_vty_properties;
+    dc->vmsd = &vmstate_spapr_vty;
 }
 
 static const TypeInfo spapr_vty_info = {
diff --git a/hw/net/spapr_llan.c b/hw/net/spapr_llan.c
index 3150add..cca3d1a 100644
--- a/hw/net/spapr_llan.c
+++ b/hw/net/spapr_llan.c
@@ -81,9 +81,9 @@ typedef struct VIOsPAPRVLANDevice {
     VIOsPAPRDevice sdev;
     NICConf nicconf;
     NICState *nic;
-    int isopen;
+    bool isopen;
     target_ulong buf_list;
-    int add_buf_ptr, use_buf_ptr, rx_bufs;
+    uint32_t add_buf_ptr, use_buf_ptr, rx_bufs;
     target_ulong rxq_ptr;
 } VIOsPAPRVLANDevice;
 
@@ -498,6 +498,25 @@ static Property spapr_vlan_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+static const VMStateDescription vmstate_spapr_llan = {
+    .name = "spapr_llan",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_SPAPR_VIO(sdev, VIOsPAPRVLANDevice),
+        /* LLAN state */
+        VMSTATE_BOOL(isopen, VIOsPAPRVLANDevice),
+        VMSTATE_UINTTL(buf_list, VIOsPAPRVLANDevice),
+        VMSTATE_UINT32(add_buf_ptr, VIOsPAPRVLANDevice),
+        VMSTATE_UINT32(use_buf_ptr, VIOsPAPRVLANDevice),
+        VMSTATE_UINT32(rx_bufs, VIOsPAPRVLANDevice),
+        VMSTATE_UINTTL(rxq_ptr, VIOsPAPRVLANDevice),
+
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static void spapr_vlan_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -512,6 +531,7 @@ static void spapr_vlan_class_init(ObjectClass *klass, void *data)
     k->signal_mask = 0x1;
     dc->props = spapr_vlan_properties;
     k->rtce_window_size = 0x10000000;
+    dc->vmsd = &vmstate_spapr_llan;
 }
 
 static const TypeInfo spapr_vlan_info = {
-- 
1.7.10.4


* [Qemu-devel] [PATCH 6/8] pseries: savevm support for PAPR TCE tables
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
                   ` (4 preceding siblings ...)
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 5/8] pseries: savevm support for PAPR VIO logical lan David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI David Gibson
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 8/8] pseries: savevm support for pseries machine David Gibson
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This patch adds the necessary VMStateDescription information to save the
state of PAPR TCE tables (that is, the PAPR specified IOMMU).
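
This is also the first user of the VMS_DIVIDE machinery from patch 1.  A
worked example of the size derivation (assuming the usual 4 KiB TCE page
and an 8-byte sPAPRTCE entry):

    table bytes = window_size / (SPAPR_TCE_PAGE_SIZE / sizeof(sPAPRTCE))
                = window_size / (4096 / 8)
                = window_size / 512

    e.g. a 256 MiB (0x10000000 byte) DMA window migrates a
    0x10000000 / 512 = 512 KiB table, i.e. 65536 entries of 8 bytes each.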

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr_iommu.c |   24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/hw/ppc/spapr_iommu.c b/hw/ppc/spapr_iommu.c
index e1fe941..ccc5839 100644
--- a/hw/ppc/spapr_iommu.c
+++ b/hw/ppc/spapr_iommu.c
@@ -122,6 +122,26 @@ static int spapr_tce_translate(DMAContext *dma,
     return 0;
 }
 
+static const VMStateDescription vmstate_spapr_tce_table = {
+    .name = "spapr_iommu",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        /* Sanity check */
+        VMSTATE_UINT32_EQUAL(liobn, sPAPRTCETable),
+        VMSTATE_UINT32_EQUAL(window_size, sPAPRTCETable),
+
+        /* IOMMU state */
+        VMSTATE_BOOL(bypass, sPAPRTCETable),
+        VMSTATE_VBUFFER_DIVIDE(table, sPAPRTCETable, 0, NULL, 0, window_size,
+                               SPAPR_TCE_PAGE_SIZE / sizeof(sPAPRTCE)),
+
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+
 DMAContext *spapr_tce_new_dma_context(uint32_t liobn, size_t window_size)
 {
     sPAPRTCETable *tcet;
@@ -161,6 +181,8 @@ DMAContext *spapr_tce_new_dma_context(uint32_t liobn, size_t window_size)
 
     QLIST_INSERT_HEAD(&spapr_tce_tables, tcet, list);
 
+    vmstate_register(NULL, tcet->liobn, &vmstate_spapr_tce_table, tcet);
+
     return &tcet->dma;
 }
 
@@ -170,6 +192,8 @@ void spapr_tce_free(DMAContext *dma)
     if (dma) {
         sPAPRTCETable *tcet = DO_UPCAST(sPAPRTCETable, dma, dma);
 
+        vmstate_unregister(NULL, &vmstate_spapr_tce_table, tcet);
+
         QLIST_REMOVE(tcet, list);
 
         if (!kvm_enabled() ||
-- 
1.7.10.4


* [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
                   ` (5 preceding siblings ...)
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 6/8] pseries: savevm support for PAPR TCE tables David Gibson
@ 2013-05-03  1:38 ` David Gibson
  2013-05-06  7:37   ` Paolo Bonzini
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 8/8] pseries: savevm support for pseries machine David Gibson
  7 siblings, 1 reply; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This patch adds the necessary support for saving the state of the PAPR VIO
virtual SCSI device.  This turns out to be trivial, because the generic
SCSI code already quiesces the attached virtual SCSI bus.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/scsi/spapr_vscsi.c |   28 ++++++++++++++++++++++++++++
 1 file changed, 28 insertions(+)

diff --git a/hw/scsi/spapr_vscsi.c b/hw/scsi/spapr_vscsi.c
index 3d322d5..f416871 100644
--- a/hw/scsi/spapr_vscsi.c
+++ b/hw/scsi/spapr_vscsi.c
@@ -954,6 +954,33 @@ static Property spapr_vscsi_properties[] = {
     DEFINE_PROP_END_OF_LIST(),
 };
 
+static void spapr_vscsi_pre_save(void *opaque)
+{
+    VSCSIState *s = opaque;
+    int i;
+
+    /* Can't save active requests, apparently the general SCSI code
+     * quiesces the queue for us on vmsave */
+    for (i = 0; i < VSCSI_REQ_LIMIT; i++) {
+        assert(!s->reqs[i].active);
+    }
+}
+
+static const VMStateDescription vmstate_spapr_vscsi = {
+    .name = "spapr_vscsi",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .pre_save = spapr_vscsi_pre_save,
+    .fields      = (VMStateField []) {
+        VMSTATE_SPAPR_VIO(vdev, VSCSIState),
+        /* VSCSI state */
+        /* ???? */
+
+        VMSTATE_END_OF_LIST()
+    },
+};
+
 static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
 {
     DeviceClass *dc = DEVICE_CLASS(klass);
@@ -968,6 +995,7 @@ static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
     k->signal_mask = 0x00000001;
     dc->props = spapr_vscsi_properties;
     k->rtce_window_size = 0x10000000;
+    dc->vmsd = &vmstate_spapr_vscsi;
 }
 
 static const TypeInfo spapr_vscsi_info = {
-- 
1.7.10.4


* [Qemu-devel] [PATCH 8/8] pseries: savevm support for pseries machine
  2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
                   ` (6 preceding siblings ...)
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI David Gibson
@ 2013-05-03  1:38 ` David Gibson
  7 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03  1:38 UTC (permalink / raw)
  To: agraf, quintela; +Cc: aik, qemu-ppc, qemu-devel, David Gibson

This adds the necessary pieces to implement savevm / migration for the
pseries machine.  The most complex part here is migrating the hash
table - for the paravirtualized pseries machine the guest's hash page
table is not stored within guest memory, but externally and the guest
accesses it via hypercalls.

This patch uses a hypervisor reserved bit of the HPTE as a dirty bit
(tracking changes to the HPTE itself, not the page it references).
This is used to implement a live migration style incremental save and
restore of the hash table contents.

In addition it adds VMStateDescription information to save and restore
the (few) remaining pieces of state information needed by the pseries
machine.
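
Concretely, the hash table is streamed in self-describing chunks.  A
sketch of the stream layout implemented by the handlers below:

    setup:       be32  htab_shift       (non-zero, marks the first section)
    iteration:   be32  0                (header for each later section)
    per chunk:   be32  index            (first HPTE slot in the chunk)
                 be16  n_valid          (valid HPTEs, sent in full...)
                 be16  n_invalid        (...invalid HPTEs, zeroed on load)
                 n_valid * HASH_PTE_SIZE_64 bytes of HPTE data
    end marker:  index == n_valid == n_invalid == 0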

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 hw/ppc/spapr.c         |  270 +++++++++++++++++++++++++++++++++++++++++++++++-
 hw/ppc/spapr_hcall.c   |    8 +-
 include/hw/ppc/spapr.h |   12 ++-
 3 files changed, 282 insertions(+), 8 deletions(-)

diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index c96ac81..f0221f9 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -32,6 +32,7 @@
 #include "sysemu/cpus.h"
 #include "sysemu/kvm.h"
 #include "kvm_ppc.h"
+#include "mmu-hash64.h"
 
 #include "hw/boards.h"
 #include "hw/ppc/ppc.h"
@@ -669,7 +670,7 @@ static void spapr_cpu_reset(void *opaque)
 
     env->spr[SPR_HIOR] = 0;
 
-    env->external_htab = spapr->htab;
+    env->external_htab = (uint8_t *)spapr->htab;
     env->htab_base = -1;
     env->htab_mask = HTAB_SIZE(spapr) - 1;
     env->spr[SPR_SDR1] = (unsigned long)spapr->htab |
@@ -721,6 +722,269 @@ static int spapr_vga_init(PCIBus *pci_bus)
     }
 }
 
+static const VMStateDescription vmstate_spapr = {
+    .name = "spapr",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        /* Sanity check */
+        VMSTATE_UINT32_EQUAL(next_irq, sPAPREnvironment),
+
+        /* RTC offset */
+        VMSTATE_UINT64(rtc_offset, sPAPREnvironment),
+
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+#define HPTE(_table, _i)   (void *)(((uint64_t *)(_table)) + ((_i) * 2))
+#define HPTE_VALID(_hpte)  (tswap64(*((uint64_t *)(_hpte))) & HPTE64_V_VALID)
+#define HPTE_DIRTY(_hpte)  (tswap64(*((uint64_t *)(_hpte))) & HPTE64_V_HPTE_DIRTY)
+#define CLEAN_HPTE(_hpte)  ((*(uint64_t *)(_hpte)) &= tswap64(~HPTE64_V_HPTE_DIRTY))
+
+static int htab_save_setup(QEMUFile *f, void *opaque)
+{
+    sPAPREnvironment *spapr = opaque;
+
+    spapr->htab_save_index = 0;
+    spapr->htab_first_pass = true;
+
+    /* "Iteration" header */
+    qemu_put_be32(f, spapr->htab_shift);
+
+    return 0;
+}
+
+#define MAX_ITERATION_NS    5000000 /* 5 ms */
+
+static void htab_save_first_pass(QEMUFile *f, sPAPREnvironment *spapr,
+                                 int64_t max_ns)
+{
+    int htabslots = HTAB_SIZE(spapr) / HASH_PTE_SIZE_64;
+    int index = spapr->htab_save_index;
+    int64_t starttime = qemu_get_clock_ns(rt_clock);
+
+    assert(spapr->htab_first_pass);
+
+    do {
+        int chunkstart;
+
+        /* Consume invalid HPTEs */
+        while ((index < htabslots)
+               && !HPTE_VALID(HPTE(spapr->htab, index))) {
+            CLEAN_HPTE(HPTE(spapr->htab, index));
+            index++;
+        }
+
+        /* Consume valid HPTEs */
+        chunkstart = index;
+        while ((index < htabslots)
+               && HPTE_VALID(HPTE(spapr->htab, index))) {
+            CLEAN_HPTE(HPTE(spapr->htab, index));
+            index++;
+        }
+
+        if (index > chunkstart) {
+            int n_valid = index - chunkstart;
+
+            qemu_put_be32(f, chunkstart);
+            qemu_put_be16(f, n_valid);
+            qemu_put_be16(f, 0);
+            qemu_put_buffer(f, HPTE(spapr->htab, chunkstart),
+                            HASH_PTE_SIZE_64 * n_valid);
+
+            if ((qemu_get_clock_ns(rt_clock) - starttime) > max_ns) {
+                break;
+            }
+        }
+    } while ((index < htabslots) && !qemu_file_rate_limit(f));
+
+    if (index >= htabslots) {
+        assert(index == htabslots);
+        index = 0;
+        spapr->htab_first_pass = false;
+    }
+    spapr->htab_save_index = index;
+}
+
+static bool htab_save_later_pass(QEMUFile *f, sPAPREnvironment *spapr,
+                                 int64_t max_ns)
+{
+    bool final = max_ns < 0;
+    int htabslots = HTAB_SIZE(spapr) / HASH_PTE_SIZE_64;
+    int examined = 0, sent = 0;
+    int index = spapr->htab_save_index;
+    int64_t starttime = qemu_get_clock_ns(rt_clock);
+
+    assert(!spapr->htab_first_pass);
+
+    do {
+        int chunkstart, invalidstart;
+
+        /* Consume non-dirty HPTEs */
+        while ((index < htabslots)
+               && !HPTE_DIRTY(HPTE(spapr->htab, index))) {
+            index++;
+            examined++;
+        }
+
+        chunkstart = index;
+        /* Consume valid dirty HPTEs */
+        while ((index < htabslots)
+               && HPTE_DIRTY(HPTE(spapr->htab, index))
+               && HPTE_VALID(HPTE(spapr->htab, index))) {
+            CLEAN_HPTE(HPTE(spapr->htab, index));
+            index++;
+            examined++;
+        }
+
+        invalidstart = index;
+        /* Consume invalid dirty HPTEs */
+        while ((index < htabslots)
+               && HPTE_DIRTY(HPTE(spapr->htab, index))
+               && !HPTE_VALID(HPTE(spapr->htab, index))) {
+            CLEAN_HPTE(HPTE(spapr->htab, index));
+            index++;
+            examined++;
+        }
+
+        if (index > chunkstart) {
+            int n_valid = invalidstart - chunkstart;
+            int n_invalid = index - invalidstart;
+
+            qemu_put_be32(f, chunkstart);
+            qemu_put_be16(f, n_valid);
+            qemu_put_be16(f, n_invalid);
+            qemu_put_buffer(f, HPTE(spapr->htab, chunkstart),
+                            HASH_PTE_SIZE_64 * n_valid);
+            sent += index - chunkstart;
+
+            if (!final && (qemu_get_clock_ns(rt_clock) - starttime) > max_ns) {
+                break;
+            }
+        }
+
+        if (examined >= htabslots) {
+            break;
+        }
+
+        if (index >= htabslots) {
+            assert(index == htabslots);
+            index = 0;
+        }
+    } while ((examined < htabslots) && (!qemu_file_rate_limit(f) || final));
+
+    if (index >= htabslots) {
+        assert(index == htabslots);
+        index = 0;
+    }
+
+    spapr->htab_save_index = index;
+
+    return (examined >= htabslots) && (sent == 0);
+}
+
+static int htab_save_iterate(QEMUFile *f, void *opaque)
+{
+    sPAPREnvironment *spapr = opaque;
+    bool nothingleft = false;
+
+    /* Iteration header */
+    qemu_put_be32(f, 0);
+
+    if (spapr->htab_first_pass) {
+        htab_save_first_pass(f, spapr, MAX_ITERATION_NS);
+    } else {
+        nothingleft = htab_save_later_pass(f, spapr, MAX_ITERATION_NS);
+    }
+
+    /* End marker */
+    qemu_put_be32(f, 0);
+    qemu_put_be16(f, 0);
+    qemu_put_be16(f, 0);
+
+    return nothingleft ? 1 : 0;
+}
+
+static int htab_save_complete(QEMUFile *f, void *opaque)
+{
+    sPAPREnvironment *spapr = opaque;
+
+    /* Iteration header */
+    qemu_put_be32(f, 0);
+
+    htab_save_later_pass(f, spapr, -1);
+
+    /* End marker */
+    qemu_put_be32(f, 0);
+    qemu_put_be16(f, 0);
+    qemu_put_be16(f, 0);
+
+    return 0;
+}
+
+static int htab_load(QEMUFile *f, void *opaque, int version_id)
+{
+    sPAPREnvironment *spapr = opaque;
+    uint32_t section_hdr;
+
+    if (version_id < 1 || version_id > 1) {
+        fprintf(stderr, "htab_load() bad version\n");
+        return -EINVAL;
+    }
+
+    section_hdr = qemu_get_be32(f);
+
+    if (section_hdr) {
+        /* First section, just the hash shift */
+        if (spapr->htab_shift != section_hdr) {
+            return -EINVAL;
+        }
+        return 0;
+    }
+
+    while (true) {
+        uint32_t index;
+        uint16_t n_valid, n_invalid;
+
+        index = qemu_get_be32(f);
+        n_valid = qemu_get_be16(f);
+        n_invalid = qemu_get_be16(f);
+
+        if ((index == 0) && (n_valid == 0) && (n_invalid == 0)) {
+            /* End of Stream */
+            break;
+        }
+
+        if ((index + n_valid + n_invalid) >
+            (HTAB_SIZE(spapr) / HASH_PTE_SIZE_64)) {
+            /* Bad index in stream */
+            fprintf(stderr, "htab_load() bad index %d (%hd+%hd entries) "
+                    "in htab stream\n", index, n_valid, n_invalid);
+            return -EINVAL;
+        }
+
+        if (n_valid) {
+            qemu_get_buffer(f, HPTE(spapr->htab, index),
+                            HASH_PTE_SIZE_64 * n_valid);
+        }
+        if (n_invalid) {
+            memset(HPTE(spapr->htab, index + n_valid), 0,
+                   HASH_PTE_SIZE_64 * n_invalid);
+        }
+    }
+
+    return 0;
+}
+
+static SaveVMHandlers savevm_htab_handlers = {
+    .save_live_setup = htab_save_setup,
+    .save_live_iterate = htab_save_iterate,
+    .save_live_complete = htab_save_complete,
+    .load_state = htab_load,
+};
+
 /* pSeries LPAR / sPAPR hardware init */
 static void ppc_spapr_init(QEMUMachineInitArgs *args)
 {
@@ -961,6 +1225,10 @@ static void ppc_spapr_init(QEMUMachineInitArgs *args)
 
     spapr->entry_point = 0x100;
 
+    vmstate_register(NULL, 0, &vmstate_spapr, spapr);
+    register_savevm_live(NULL, "spapr/htab", -1, 1,
+                         &savevm_htab_handlers, spapr);
+
     /* Prepare the device tree */
     spapr->fdt_skel = spapr_create_fdt_skel(cpu_model,
                                             initrd_base, initrd_size,
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index f518aee..b1a4c45 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -116,7 +116,7 @@ static target_ulong h_enter(PowerPCCPU *cpu, sPAPREnvironment *spapr,
     }
     ppc_hash64_store_hpte1(env, hpte, ptel);
     /* eieio();  FIXME: need some sort of barrier for smp? */
-    ppc_hash64_store_hpte0(env, hpte, pteh);
+    ppc_hash64_store_hpte0(env, hpte, pteh | HPTE64_V_HPTE_DIRTY);
 
     args[0] = pte_index + i;
     return H_SUCCESS;
@@ -153,7 +153,7 @@ static target_ulong remove_hpte(CPUPPCState *env, target_ulong ptex,
     }
     *vp = v;
     *rp = r;
-    ppc_hash64_store_hpte0(env, hpte, 0);
+    ppc_hash64_store_hpte0(env, hpte, HPTE64_V_HPTE_DIRTY);
     rb = compute_tlbie_rb(v, r, ptex);
     ppc_tlb_invalidate_one(env, rb);
     return REMOVE_SUCCESS;
@@ -283,11 +283,11 @@ static target_ulong h_protect(PowerPCCPU *cpu, sPAPREnvironment *spapr,
     r |= (flags << 48) & HPTE64_R_KEY_HI;
     r |= flags & (HPTE64_R_PP | HPTE64_R_N | HPTE64_R_KEY_LO);
     rb = compute_tlbie_rb(v, r, pte_index);
-    ppc_hash64_store_hpte0(env, hpte, v & ~HPTE64_V_VALID);
+    ppc_hash64_store_hpte0(env, hpte, (v & ~HPTE64_V_VALID) | HPTE64_V_HPTE_DIRTY);
     ppc_tlb_invalidate_one(env, rb);
     ppc_hash64_store_hpte1(env, hpte, r);
     /* Don't need a memory barrier, due to qemu's global lock */
-    ppc_hash64_store_hpte0(env, hpte, v);
+    ppc_hash64_store_hpte0(env, hpte, v | HPTE64_V_HPTE_DIRTY);
     return H_SUCCESS;
 }
 
diff --git a/include/hw/ppc/spapr.h b/include/hw/ppc/spapr.h
index 864bee9..b441cc3 100644
--- a/include/hw/ppc/spapr.h
+++ b/include/hw/ppc/spapr.h
@@ -9,6 +9,8 @@ struct sPAPRPHBState;
 struct sPAPRNVRAM;
 struct icp_state;
 
+#define HPTE64_V_HPTE_DIRTY     0x0000000000000040ULL
+
 typedef struct sPAPREnvironment {
     struct VIOsPAPRBus *vio_bus;
     QLIST_HEAD(, sPAPRPHBState) phbs;
@@ -17,20 +19,24 @@ typedef struct sPAPREnvironment {
 
     hwaddr ram_limit;
     void *htab;
-    long htab_shift;
+    uint32_t htab_shift;
     hwaddr rma_size;
     int vrma_adjust;
     hwaddr fdt_addr, rtas_addr;
     long rtas_size;
     void *fdt_skel;
     target_ulong entry_point;
-    int next_irq;
-    int rtc_offset;
+    uint32_t next_irq;
+    uint64_t rtc_offset;
     char *cpu_model;
     bool has_graphics;
 
     uint32_t epow_irq;
     Notifier epow_notifier;
+
+    /* Migration state */
+    int htab_save_index;
+    bool htab_first_pass;
 } sPAPREnvironment;
 
 #define H_SUCCESS         0
-- 
1.7.10.4


* Re: [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription David Gibson
@ 2013-05-03 11:29   ` Andreas Färber
  2013-05-03 14:26     ` [Qemu-devel] [Qemu-ppc] " David Gibson
  0 siblings, 1 reply; 31+ messages in thread
From: Andreas Färber @ 2013-05-03 11:29 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, qemu-devel, qemu-ppc, agraf, quintela

On 03.05.2013 03:38, David Gibson wrote:
> The savevm code for the powerpc cpu emulation is currently based around
> the old register_savevm() rather than vmstate_register() method.  It's also
> rather broken, missing some important state on some CPU models.
> 
> This patch completely rewrites the savevm for target-ppc, using the new
> VMStateDescription approach.  Exactly what needs to be saved in what
> configurations has been more carefully examined, too.  This introduces a
> new version (5) of the cpu save format.  The old load function is retained
> to support version 4 images.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  target-ppc/cpu.h     |    9 +-
>  target-ppc/machine.c |  542 ++++++++++++++++++++++++++++++++++++++++++--------
>  2 files changed, 460 insertions(+), 91 deletions(-)
[...]
> diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> index 2d10adb..594fe6a 100644
> --- a/target-ppc/machine.c
> +++ b/target-ppc/machine.c
[...]
> +void cpu_save(QEMUFile *f, void *opaque)
> +{
> +    vmstate_save_state(f, &vmstate_cpu, opaque);
> +
> +}
> +
> +int cpu_load(QEMUFile *f, void *opaque, int version_id)
> +{
> +    return vmstate_load_state(f, &vmstate_cpu, opaque, version_id);
> +}

Please drop cpu_{save,load}() and use the VMStateDescription-based
registration mechanism cpu_class_set_vmsd() from PowerPCCPU's
instance_init in translate_init.c.
I'm pretty certain I CC'ed you on that series...
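
For reference, the suggested registration looks roughly like this (a
sketch from memory of the QEMU 1.5-era API, not code from this thread;
note that cpu_class_set_vmsd() operates on the CPUClass, so the call
belongs in the class init):

    static void ppc_cpu_class_init(ObjectClass *oc, void *data)
    {
        CPUClass *cc = CPU_CLASS(oc);
        /* ... */
    #ifndef CONFIG_USER_ONLY
        cpu_class_set_vmsd(cc, &vmstate_ppc_cpu);
    #endif
    }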

Thanks,
Andreas

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg


* Re: [Qemu-devel] [Qemu-ppc] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription
  2013-05-03 11:29   ` Andreas Färber
@ 2013-05-03 14:26     ` David Gibson
  0 siblings, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-03 14:26 UTC (permalink / raw)
  To: Andreas Färber; +Cc: qemu-ppc, qemu-devel, quintela


On Fri, May 03, 2013 at 01:29:28PM +0200, Andreas Färber wrote:
> On 03.05.2013 03:38, David Gibson wrote:
> > The savevm code for the powerpc cpu emulation is currently based around
> > the old register_savevm() rather than vmstate_register() method.  It's also
> > rather broken, missing some important state on some CPU models.
> > 
> > This patch completely rewrites the savevm for target-ppc, using the new
> > VMStateDescription approach.  Exactly what needs to be saved in what
> > configurations has been more carefully examined, too.  This introduces a
> > new version (5) of the cpu save format.  The old load function is retained
> > to support version 4 images.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  target-ppc/cpu.h     |    9 +-
> >  target-ppc/machine.c |  542 ++++++++++++++++++++++++++++++++++++++++++--------
> >  2 files changed, 460 insertions(+), 91 deletions(-)
> [...]
> > diff --git a/target-ppc/machine.c b/target-ppc/machine.c
> > index 2d10adb..594fe6a 100644
> > --- a/target-ppc/machine.c
> > +++ b/target-ppc/machine.c
> [...]
> > +void cpu_save(QEMUFile *f, void *opaque)
> > +{
> > +    vmstate_save_state(f, &vmstate_cpu, opaque);
> > +
> > +}
> > +
> > +int cpu_load(QEMUFile *f, void *opaque, int version_id)
> > +{
> > +    return vmstate_load_state(f, &vmstate_cpu, opaque, version_id);
> > +}
> 
> Please drop cpu_{save,load}() and use the VMStateDescription-based
> registration mechanism cpu_class_set_vmsd() from PowerPCCPU's
> instance_init in translate_init.c.
> I'm pretty certain I CC'ed you on that series...

Very likely.  But I initially wrote this patch before that, and didn't
notice the relevance of the update.  Revised version below:

From 84125649c8b299e4c56c9c28abd5f0b5aafee40a Mon Sep 17 00:00:00 2001
From: David Gibson <david@gibson.dropbear.id.au>
Date: Fri, 22 Feb 2013 09:47:00 +1100
Subject: [PATCH] target-ppc: Convert ppc cpu savevm to VMStateDescription

The savevm code for the powerpc cpu emulation is currently based around
the old register_savevm() rather than vmstate_register() method.  It's also
rather broken, missing some important state on some CPU models.

This patch completely rewrites the savevm for target-ppc, using the new
VMStateDescription approach.  Exactly what needs to be saved in what
configurations has been more carefully examined, too.  This introduces a
new version (5) of the cpu save format.  The old load function is retained
to support version 4 images.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
---
 target-ppc/cpu-qom.h        |    4 +
 target-ppc/cpu.h            |    9 +-
 target-ppc/machine.c        |  531 ++++++++++++++++++++++++++++++++++++-------
 target-ppc/translate_init.c |    2 +
 4 files changed, 454 insertions(+), 92 deletions(-)

diff --git a/target-ppc/cpu-qom.h b/target-ppc/cpu-qom.h
index eb03a00..2b96b04 100644
--- a/target-ppc/cpu-qom.h
+++ b/target-ppc/cpu-qom.h
@@ -102,4 +102,8 @@ PowerPCCPUClass *ppc_cpu_class_by_pvr(uint32_t pvr);
 
 void ppc_cpu_do_interrupt(CPUState *cpu);
 
+#ifndef CONFIG_USER_ONLY
+extern const struct VMStateDescription vmstate_ppc_cpu;
+#endif
+
 #endif
diff --git a/target-ppc/cpu.h b/target-ppc/cpu.h
index 7cacb56..1809cb3 100644
--- a/target-ppc/cpu.h
+++ b/target-ppc/cpu.h
@@ -943,7 +943,7 @@ struct CPUPPCState {
 #if defined(TARGET_PPC64)
     /* PowerPC 64 SLB area */
     ppc_slb_t slb[64];
-    int slb_nr;
+    int32_t slb_nr;
 #endif
     /* segment registers */
     hwaddr htab_base;
@@ -952,11 +952,11 @@ struct CPUPPCState {
     /* externally stored hash table */
     uint8_t *external_htab;
     /* BATs */
-    int nb_BATs;
+    uint32_t nb_BATs;
     target_ulong DBAT[2][8];
     target_ulong IBAT[2][8];
     /* PowerPC TLB registers (for 4xx, e500 and 60x software driven TLBs) */
-    int nb_tlb;      /* Total number of TLB                                  */
+    int32_t nb_tlb;      /* Total number of TLB                              */
     int tlb_per_way; /* Speed-up helper: used to avoid divisions at run time */
     int nb_ways;     /* Number of ways in the TLB set                        */
     int last_way;    /* Last used way used to allocate TLB in a LRU way      */
@@ -973,6 +973,7 @@ struct CPUPPCState {
     /* Other registers */
     /* Special purpose registers */
     target_ulong spr[1024];
+    uint32_t cr; /* Full CR value used during vmsave/load */
     ppc_spr_t spr_cb[1024];
     /* Altivec registers */
     ppc_avr_t avr[32];
@@ -1171,8 +1172,6 @@ static inline CPUPPCState *cpu_init(const char *cpu_model)
 #define cpu_signal_handler cpu_ppc_signal_handler
 #define cpu_list ppc_cpu_list
 
-#define CPU_SAVE_VERSION 4
-
 /* MMU modes definitions */
 #define MMU_MODE0_SUFFIX _user
 #define MMU_MODE1_SUFFIX _kernel
diff --git a/target-ppc/machine.c b/target-ppc/machine.c
index 2d10adb..b6be2a7 100644
--- a/target-ppc/machine.c
+++ b/target-ppc/machine.c
@@ -1,94 +1,9 @@
 #include "hw/hw.h"
 #include "hw/boards.h"
 #include "sysemu/kvm.h"
+#include "helper_regs.h"
 
-void cpu_save(QEMUFile *f, void *opaque)
-{
-    CPUPPCState *env = (CPUPPCState *)opaque;
-    unsigned int i, j;
-    uint32_t fpscr;
-    target_ulong xer;
-
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->gpr[i]);
-#if !defined(TARGET_PPC64)
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->gprh[i]);
-#endif
-    qemu_put_betls(f, &env->lr);
-    qemu_put_betls(f, &env->ctr);
-    for (i = 0; i < 8; i++)
-        qemu_put_be32s(f, &env->crf[i]);
-    xer = cpu_read_xer(env);
-    qemu_put_betls(f, &xer);
-    qemu_put_betls(f, &env->reserve_addr);
-    qemu_put_betls(f, &env->msr);
-    for (i = 0; i < 4; i++)
-        qemu_put_betls(f, &env->tgpr[i]);
-    for (i = 0; i < 32; i++) {
-        union {
-            float64 d;
-            uint64_t l;
-        } u;
-        u.d = env->fpr[i];
-        qemu_put_be64(f, u.l);
-    }
-    fpscr = env->fpscr;
-    qemu_put_be32s(f, &fpscr);
-    qemu_put_sbe32s(f, &env->access_type);
-#if defined(TARGET_PPC64)
-    qemu_put_betls(f, &env->spr[SPR_ASR]);
-    qemu_put_sbe32s(f, &env->slb_nr);
-#endif
-    qemu_put_betls(f, &env->spr[SPR_SDR1]);
-    for (i = 0; i < 32; i++)
-        qemu_put_betls(f, &env->sr[i]);
-    for (i = 0; i < 2; i++)
-        for (j = 0; j < 8; j++)
-            qemu_put_betls(f, &env->DBAT[i][j]);
-    for (i = 0; i < 2; i++)
-        for (j = 0; j < 8; j++)
-            qemu_put_betls(f, &env->IBAT[i][j]);
-    qemu_put_sbe32s(f, &env->nb_tlb);
-    qemu_put_sbe32s(f, &env->tlb_per_way);
-    qemu_put_sbe32s(f, &env->nb_ways);
-    qemu_put_sbe32s(f, &env->last_way);
-    qemu_put_sbe32s(f, &env->id_tlbs);
-    qemu_put_sbe32s(f, &env->nb_pids);
-    if (env->tlb.tlb6) {
-        // XXX assumes 6xx
-        for (i = 0; i < env->nb_tlb; i++) {
-            qemu_put_betls(f, &env->tlb.tlb6[i].pte0);
-            qemu_put_betls(f, &env->tlb.tlb6[i].pte1);
-            qemu_put_betls(f, &env->tlb.tlb6[i].EPN);
-        }
-    }
-    for (i = 0; i < 4; i++)
-        qemu_put_betls(f, &env->pb[i]);
-    for (i = 0; i < 1024; i++)
-        qemu_put_betls(f, &env->spr[i]);
-    qemu_put_be32s(f, &env->vscr);
-    qemu_put_be64s(f, &env->spe_acc);
-    qemu_put_be32s(f, &env->spe_fscr);
-    qemu_put_betls(f, &env->msr_mask);
-    qemu_put_be32s(f, &env->flags);
-    qemu_put_sbe32s(f, &env->error_code);
-    qemu_put_be32s(f, &env->pending_interrupts);
-    qemu_put_be32s(f, &env->irq_input_state);
-    for (i = 0; i < POWERPC_EXCP_NB; i++)
-        qemu_put_betls(f, &env->excp_vectors[i]);
-    qemu_put_betls(f, &env->excp_prefix);
-    qemu_put_betls(f, &env->ivor_mask);
-    qemu_put_betls(f, &env->ivpr_mask);
-    qemu_put_betls(f, &env->hreset_vector);
-    qemu_put_betls(f, &env->nip);
-    qemu_put_betls(f, &env->hflags);
-    qemu_put_betls(f, &env->hflags_nmsr);
-    qemu_put_sbe32s(f, &env->mmu_idx);
-    qemu_put_sbe32(f, 0);
-}
-
-int cpu_load(QEMUFile *f, void *opaque, int version_id)
+static int cpu_load_old(QEMUFile *f, void *opaque, int version_id)
 {
     CPUPPCState *env = (CPUPPCState *)opaque;
     unsigned int i, j;
@@ -177,3 +92,445 @@ int cpu_load(QEMUFile *f, void *opaque, int version_id)
 
     return 0;
 }
+
+static int get_avr(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_avr_t *v = pv;
+
+    v->u64[0] = qemu_get_be64(f);
+    v->u64[1] = qemu_get_be64(f);
+
+    return 0;
+}
+
+static void put_avr(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_avr_t *v = pv;
+
+    qemu_put_be64(f, v->u64[0]);
+    qemu_put_be64(f, v->u64[1]);
+}
+
+const VMStateInfo vmstate_info_avr = {
+    .name = "avr",
+    .get  = get_avr,
+    .put  = put_avr,
+};
+
+#define VMSTATE_AVR_ARRAY_V(_f, _s, _n, _v)                       \
+    VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_avr, ppc_avr_t)
+
+#define VMSTATE_AVR_ARRAY(_f, _s, _n)                             \
+    VMSTATE_AVR_ARRAY_V(_f, _s, _n, 0)
+
+static void cpu_pre_save(void *opaque)
+{
+    CPUPPCState *env = opaque;
+    int i;
+
+    env->spr[SPR_LR] = env->lr;
+    env->spr[SPR_CTR] = env->ctr;
+    env->spr[SPR_XER] = env->xer;
+#if defined(TARGET_PPC64)
+    env->spr[SPR_CFAR] = env->cfar;
+#endif
+    env->spr[SPR_BOOKE_SPEFSCR] = env->spe_fscr;
+
+    env->cr = 0;
+    for (i = 0; i < 8; i++) {
+        env->cr = (env->cr << 4) | (env->crf[i] & 0xf);
+    }
+
+    for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
+        env->spr[SPR_DBAT0U + 2*i] = env->DBAT[0][i];
+        env->spr[SPR_DBAT0U + 2*i + 1] = env->DBAT[1][i];
+        env->spr[SPR_IBAT0U + 2*i] = env->IBAT[0][i];
+        env->spr[SPR_IBAT0U + 2*i + 1] = env->IBAT[1][i];
+    }
+    for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
+        env->spr[SPR_DBAT4U + 2*i] = env->DBAT[0][i+4];
+        env->spr[SPR_DBAT4U + 2*i + 1] = env->DBAT[1][i+4];
+        env->spr[SPR_IBAT4U + 2*i] = env->IBAT[0][i+4];
+        env->spr[SPR_IBAT4U + 2*i + 1] = env->IBAT[1][i+4];
+    }
+}
+
+static int cpu_post_load(void *opaque, int version_id)
+{
+    CPUPPCState *env = opaque;
+    int i;
+
+    env->lr = env->spr[SPR_LR];
+    env->ctr = env->spr[SPR_CTR];
+    env->xer = env->spr[SPR_XER];
+#if defined(TARGET_PPC64)
+    env->cfar = env->spr[SPR_CFAR];
+#endif
+    env->spe_fscr = env->spr[SPR_BOOKE_SPEFSCR];
+
+    for (i = 0; i < 8; i++) {
+        env->crf[i] = env->cr >> (4*(7-i)) & 0xf;
+    }
+
+    for (i = 0; (i < 4) && (i < env->nb_BATs); i++) {
+        env->DBAT[0][i] = env->spr[SPR_DBAT0U + 2*i];
+        env->DBAT[1][i] = env->spr[SPR_DBAT0U + 2*i + 1];
+        env->IBAT[0][i] = env->spr[SPR_IBAT0U + 2*i];
+        env->IBAT[1][i] = env->spr[SPR_IBAT0U + 2*i + 1];
+    }
+    for (i = 0; (i < 4) && ((i+4) < env->nb_BATs); i++) {
+        env->DBAT[0][i+4] = env->spr[SPR_DBAT4U + 2*i];
+        env->DBAT[1][i+4] = env->spr[SPR_DBAT4U + 2*i + 1];
+        env->IBAT[0][i+4] = env->spr[SPR_IBAT4U + 2*i];
+        env->IBAT[1][i+4] = env->spr[SPR_IBAT4U + 2*i + 1];
+    }
+
+    /* Restore htab_base and htab_mask variables */
+    ppc_store_sdr1(env, env->spr[SPR_SDR1]);
+
+    hreg_compute_hflags(env);
+    hreg_compute_mem_idx(env);
+
+    return 0;
+}
+
+static bool fpu_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags & PPC_FLOAT);
+}
+
+static const VMStateDescription vmstate_fpu = {
+    .name = "cpu/fpu",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_FLOAT64_ARRAY(fpr, CPUPPCState, 32),
+        VMSTATE_UINTTL(fpscr, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool altivec_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags & PPC_ALTIVEC);
+}
+
+static const VMStateDescription vmstate_altivec = {
+    .name = "cpu/altivec",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_AVR_ARRAY(avr, CPUPPCState, 32),
+        VMSTATE_UINT32(vscr, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool vsx_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return (env->insns_flags2 & PPC2_VSX);
+}
+
+static const VMStateDescription vmstate_vsx = {
+    .name = "cpu/vsx",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT64_ARRAY(vsr, CPUPPCState, 32),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool sr_needed(void *opaque)
+{
+#ifdef TARGET_PPC64
+    CPUPPCState *env = opaque;
+
+    return !(env->mmu_model & POWERPC_MMU_64);
+#else
+    return true;
+#endif
+}
+
+static const VMStateDescription vmstate_sr = {
+    .name = "cpu/sr",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL_ARRAY(sr, CPUPPCState, 32),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+#ifdef TARGET_PPC64
+static int get_slbe(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_slb_t *v = pv;
+
+    v->esid = qemu_get_be64(f);
+    v->vsid = qemu_get_be64(f);
+
+    return 0;
+}
+
+static void put_slbe(QEMUFile *f, void *pv, size_t size)
+{
+    ppc_slb_t *v = pv;
+
+    qemu_put_be64(f, v->esid);
+    qemu_put_be64(f, v->vsid);
+}
+
+const VMStateInfo vmstate_info_slbe = {
+    .name = "slbe",
+    .get  = get_slbe,
+    .put  = put_slbe,
+};
+
+#define VMSTATE_SLB_ARRAY_V(_f, _s, _n, _v)                       \
+    VMSTATE_ARRAY(_f, _s, _n, _v, vmstate_info_slbe, ppc_slb_t)
+
+#define VMSTATE_SLB_ARRAY(_f, _s, _n)                             \
+    VMSTATE_SLB_ARRAY_V(_f, _s, _n, 0)
+
+static bool slb_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    /* We don't support any of the old segment table based 64-bit CPUs */
+    return (env->mmu_model & POWERPC_MMU_64);
+}
+
+static const VMStateDescription vmstate_slb = {
+    .name = "cpu/slb",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(slb_nr, CPUPPCState),
+        VMSTATE_SLB_ARRAY(slb, CPUPPCState, 64),
+        VMSTATE_END_OF_LIST()
+    }
+};
+#endif /* TARGET_PPC64 */
+
+static const VMStateDescription vmstate_tlb6xx_entry = {
+    .name = "cpu/tlb6xx_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL(pte0, ppc6xx_tlb_t),
+        VMSTATE_UINTTL(pte1, ppc6xx_tlb_t),
+        VMSTATE_UINTTL(EPN, ppc6xx_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlb6xx_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_6XX);
+}
+
+static const VMStateDescription vmstate_tlb6xx = {
+    .name = "cpu/tlb6xx",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlb6, CPUPPCState, nb_tlb,
+                                            vmstate_tlb6xx_entry,
+                                            ppc6xx_tlb_t),
+        VMSTATE_UINTTL_ARRAY(tgpr, CPUPPCState, 4),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static const VMStateDescription vmstate_tlbemb_entry = {
+    .name = "cpu/tlbemb_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT64(RPN, ppcemb_tlb_t),
+        VMSTATE_UINTTL(EPN, ppcemb_tlb_t),
+        VMSTATE_UINTTL(PID, ppcemb_tlb_t),
+        VMSTATE_UINTTL(size, ppcemb_tlb_t),
+        VMSTATE_UINT32(prot, ppcemb_tlb_t),
+        VMSTATE_UINT32(attr, ppcemb_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlbemb_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_EMB);
+}
+
+static bool pbr403_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+    uint32_t pvr = env->spr[SPR_PVR];
+
+    return (pvr & 0xffff0000) == 0x00200000;
+}
+
+static const VMStateDescription vmstate_pbr403 = {
+    .name = "cpu/pbr403",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINTTL_ARRAY(pb, CPUPPCState, 4),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static const VMStateDescription vmstate_tlbemb = {
+    .name = "cpu/tlbemb",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlbe, CPUPPCState, nb_tlb,
+                                            vmstate_tlbemb_entry,
+                                            ppcemb_tlb_t),
+        /* 403 protection registers */
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (VMStateSubsection []) {
+        {
+            .vmsd = &vmstate_pbr403,
+            .needed = pbr403_needed,
+        } , {
+            /* empty */
+        }
+    }
+};
+
+static const VMStateDescription vmstate_tlbmas_entry = {
+    .name = "cpu/tlbmas_entry",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_UINT32(mas8, ppcmas_tlb_t),
+        VMSTATE_UINT32(mas1, ppcmas_tlb_t),
+        VMSTATE_UINT64(mas2, ppcmas_tlb_t),
+        VMSTATE_UINT64(mas7_3, ppcmas_tlb_t),
+        VMSTATE_END_OF_LIST()
+    },
+};
+
+static bool tlbmas_needed(void *opaque)
+{
+    CPUPPCState *env = opaque;
+
+    return env->nb_tlb && (env->tlb_type == TLB_MAS);
+}
+
+static const VMStateDescription vmstate_tlbmas = {
+    .name = "cpu/tlbmas",
+    .version_id = 1,
+    .minimum_version_id = 1,
+    .minimum_version_id_old = 1,
+    .fields      = (VMStateField []) {
+        VMSTATE_INT32_EQUAL(nb_tlb, CPUPPCState),
+        VMSTATE_STRUCT_VARRAY_POINTER_INT32(tlb.tlbm, CPUPPCState, nb_tlb,
+                                            vmstate_tlbmas_entry,
+                                            ppcmas_tlb_t),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+const VMStateDescription vmstate_ppc_cpu = {
+    .name = "cpu",
+    .version_id = 5,
+    .minimum_version_id = 5,
+    .minimum_version_id_old = 4,
+    .load_state_old = cpu_load_old,
+    .pre_save = cpu_pre_save,
+    .post_load = cpu_post_load,
+    .fields      = (VMStateField []) {
+        /* Verify we haven't changed the pvr */
+        VMSTATE_UINTTL_EQUAL(spr[SPR_PVR], CPUPPCState),
+
+        /* User mode architected state */
+        VMSTATE_UINTTL_ARRAY(gpr, CPUPPCState, 32),
+#if !defined(TARGET_PPC64)
+        VMSTATE_UINTTL_ARRAY(gprh, CPUPPCState, 32),
+#endif
+        VMSTATE_UINT32(cr, CPUPPCState),
+        VMSTATE_UINTTL(nip, CPUPPCState),
+
+        /* SPRs */
+        VMSTATE_UINTTL_ARRAY(spr, CPUPPCState, 1024),
+        VMSTATE_UINT64(spe_acc, CPUPPCState),
+
+        /* Reservation */
+        VMSTATE_UINTTL(reserve_addr, CPUPPCState),
+
+        /* Supervisor mode architected state */
+        VMSTATE_UINTTL(msr, CPUPPCState),
+
+        /* Internal state */
+        VMSTATE_UINTTL(hflags_nmsr, CPUPPCState),
+        /* FIXME: access_type? */
+
+        /* Sanity checking */
+        VMSTATE_UINTTL_EQUAL(msr_mask, CPUPPCState),
+        VMSTATE_UINT64_EQUAL(insns_flags, CPUPPCState),
+        VMSTATE_UINT64_EQUAL(insns_flags2, CPUPPCState),
+        VMSTATE_UINT32_EQUAL(nb_BATs, CPUPPCState),
+        VMSTATE_END_OF_LIST()
+    },
+    .subsections = (VMStateSubsection []) {
+        {
+            .vmsd = &vmstate_fpu,
+            .needed = fpu_needed,
+        } , {
+            .vmsd = &vmstate_altivec,
+            .needed = altivec_needed,
+        } , {
+            .vmsd = &vmstate_vsx,
+            .needed = vsx_needed,
+        } , {
+            .vmsd = &vmstate_sr,
+            .needed = sr_needed,
+        } , {
+#ifdef TARGET_PPC64
+            .vmsd = &vmstate_slb,
+            .needed = slb_needed,
+        } , {
+#endif /* TARGET_PPC64 */
+            .vmsd = &vmstate_tlb6xx,
+            .needed = tlb6xx_needed,
+        } , {
+            .vmsd = &vmstate_tlbemb,
+            .needed = tlbemb_needed,
+        } , {
+            .vmsd = &vmstate_tlbmas,
+            .needed = tlbmas_needed,
+        } , {
+            /* FIXME: DCRs? */
+            /* FIXME: timebase? */
+            /* empty */
+        }
+    }
+};
diff --git a/target-ppc/translate_init.c b/target-ppc/translate_init.c
index 6feb62a..df84f1b 100644
--- a/target-ppc/translate_init.c
+++ b/target-ppc/translate_init.c
@@ -8305,6 +8305,8 @@ static void ppc_cpu_class_init(ObjectClass *oc, void *data)
 
     cc->class_by_name = ppc_cpu_class_by_name;
     cc->do_interrupt = ppc_cpu_do_interrupt;
+
+    cpu_class_set_vmsd(cc, &vmstate_ppc_cpu);
 }
 
 static const TypeInfo ppc_cpu_type_info = {
-- 
1.7.10.4



-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply related	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-03  1:38 ` [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI David Gibson
@ 2013-05-06  7:37   ` Paolo Bonzini
  2013-05-07  3:07     ` [Qemu-devel] [Qemu-ppc] " David Gibson
  2013-05-27  6:48     ` [Qemu-devel] " Alexey Kardashevskiy
  0 siblings, 2 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-06  7:37 UTC (permalink / raw)
  To: David Gibson; +Cc: aik, qemu-devel, qemu-ppc, agraf, quintela

On 03/05/2013 03:38, David Gibson wrote:
> This patch adds the necessary support for saving the state of the PAPR VIO
> virtual SCSI device.  This turns out to be trivial, because the generic
> SCSI code already quiesces the attached virtual SCSI bus.
> 
> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> ---
>  hw/scsi/spapr_vscsi.c |   28 ++++++++++++++++++++++++++++
>  1 file changed, 28 insertions(+)
> 
> diff --git a/hw/scsi/spapr_vscsi.c b/hw/scsi/spapr_vscsi.c
> index 3d322d5..f416871 100644
> --- a/hw/scsi/spapr_vscsi.c
> +++ b/hw/scsi/spapr_vscsi.c
> @@ -954,6 +954,33 @@ static Property spapr_vscsi_properties[] = {
>      DEFINE_PROP_END_OF_LIST(),
>  };
>  
> +static void spapr_vscsi_pre_save(void *opaque)
> +{
> +    VSCSIState *s = opaque;
> +    int i;
> +
> +    /* Can't save active requests, apparently the general SCSI code
> +     * quiesces the queue for us on vmsave */
> +    for (i = 0; i < VSCSI_REQ_LIMIT; i++) {
> +        assert(!s->reqs[i].active);
> +    }
> +}

This is only true when the rerror and werror options have the values
"ignore" or "report".  See virtio-scsi for an example of how to save the
requests using the save_request and load_request callbacks in SCSIBusInfo.
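
The shape would be something like this (rough sketch only; I'm assuming
the request slot can be re-found via the SCSI tag, and dumping the whole
vscsi_req and fixing up the pointers afterwards is crude but works for a
first cut):

static void vscsi_save_request(QEMUFile *f, SCSIRequest *sreq)
{
    vscsi_req *req = sreq->hba_private;

    qemu_put_buffer(f, (uint8_t *)req, sizeof(*req));
}

static void *vscsi_load_request(QEMUFile *f, SCSIRequest *sreq)
{
    VSCSIState *s = container_of(sreq->bus, VSCSIState, bus);
    vscsi_req *req = &s->reqs[sreq->tag];

    qemu_get_buffer(f, (uint8_t *)req, sizeof(*req));
    req->sreq = scsi_req_ref(sreq);    /* fix up the stale pointer */
    return req;
}

plus the corresponding .save_request/.load_request assignments in the
SCSIBusInfo.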

Paolo

> +static const VMStateDescription vmstate_spapr_vscsi = {
> +    .name = "spapr_vscsi",
> +    .version_id = 1,
> +    .minimum_version_id = 1,
> +    .minimum_version_id_old = 1,
> +    .pre_save = spapr_vscsi_pre_save,
> +    .fields      = (VMStateField []) {
> +        VMSTATE_SPAPR_VIO(vdev, VSCSIState),
> +        /* VSCSI state */
> +        /* ???? */
> +
> +        VMSTATE_END_OF_LIST()
> +    },
> +};
> +
>  static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
>  {
>      DeviceClass *dc = DEVICE_CLASS(klass);
> @@ -968,6 +995,7 @@ static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
>      k->signal_mask = 0x00000001;
>      dc->props = spapr_vscsi_properties;
>      k->rtce_window_size = 0x10000000;
> +    dc->vmsd = &vmstate_spapr_vscsi;
>  }
>  
>  static const TypeInfo spapr_vscsi_info = {
> 

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [Qemu-ppc] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-06  7:37   ` Paolo Bonzini
@ 2013-05-07  3:07     ` David Gibson
  2013-05-27  6:48     ` [Qemu-devel] " Alexey Kardashevskiy
  1 sibling, 0 replies; 31+ messages in thread
From: David Gibson @ 2013-05-07  3:07 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-ppc, qemu-devel, quintela

[-- Attachment #1: Type: text/plain, Size: 1940 bytes --]

On Mon, May 06, 2013 at 09:37:11AM +0200, Paolo Bonzini wrote:
> On 03/05/2013 03:38, David Gibson wrote:
> > This patch adds the necessary support for saving the state of the PAPR VIO
> > virtual SCSI device.  This turns out to be trivial, because the generic
> > SCSI code already quiesces the attached virtual SCSI bus.
> > 
> > Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
> > ---
> >  hw/scsi/spapr_vscsi.c |   28 ++++++++++++++++++++++++++++
> >  1 file changed, 28 insertions(+)
> > 
> > diff --git a/hw/scsi/spapr_vscsi.c b/hw/scsi/spapr_vscsi.c
> > index 3d322d5..f416871 100644
> > --- a/hw/scsi/spapr_vscsi.c
> > +++ b/hw/scsi/spapr_vscsi.c
> > @@ -954,6 +954,33 @@ static Property spapr_vscsi_properties[] = {
> >      DEFINE_PROP_END_OF_LIST(),
> >  };
> >  
> > +static void spapr_vscsi_pre_save(void *opaque)
> > +{
> > +    VSCSIState *s = opaque;
> > +    int i;
> > +
> > +    /* Can't save active requests, apparently the general SCSI code
> > +     * quiesces the queue for us on vmsave */
> > +    for (i = 0; i < VSCSI_REQ_LIMIT; i++) {
> > +        assert(!s->reqs[i].active);
> > +    }
> > +}
> 
> This is only true when the rerror and werror options have the values
> "ignore" or "report".  See virtio-scsi for an example of how to save the
> requests using the save_request and load_request callbacks in
> SCSIBusInfo.

Ah, bother.  Unfortunately the save request is quite a lot more
complicated for vscsi, since we have a lot more private data, and I'm
not sure which bits can be reconstructed from other information.  I'll
see what I can come up with.

What guarantees _does_ the scsi layer give about the lifecycle state
of the requests when we savevm?

-- 
David Gibson			| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au	| minimalist, thank you.  NOT _the_ _other_
				| _way_ _around_!
http://www.ozlabs.org/~dgibson

[-- Attachment #2: Digital signature --]
[-- Type: application/pgp-signature, Size: 198 bytes --]

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-06  7:37   ` Paolo Bonzini
  2013-05-07  3:07     ` [Qemu-devel] [Qemu-ppc] " David Gibson
@ 2013-05-27  6:48     ` Alexey Kardashevskiy
  2013-05-27  7:03       ` Paolo Bonzini
  1 sibling, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-05-27  6:48 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, quintela, qemu-ppc, agraf, David Gibson

On 05/06/2013 05:37 PM, Paolo Bonzini wrote:
> On 03/05/2013 03:38, David Gibson wrote:
>> This patch adds the necessary support for saving the state of the PAPR VIO
>> virtual SCSI device.  This turns out to be trivial, because the generic
>> SCSI code already quiesces the attached virtual SCSI bus.
>>
>> Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
>> ---
>>  hw/scsi/spapr_vscsi.c |   28 ++++++++++++++++++++++++++++
>>  1 file changed, 28 insertions(+)
>>
>> diff --git a/hw/scsi/spapr_vscsi.c b/hw/scsi/spapr_vscsi.c
>> index 3d322d5..f416871 100644
>> --- a/hw/scsi/spapr_vscsi.c
>> +++ b/hw/scsi/spapr_vscsi.c
>> @@ -954,6 +954,33 @@ static Property spapr_vscsi_properties[] = {
>>      DEFINE_PROP_END_OF_LIST(),
>>  };
>>  
>> +static void spapr_vscsi_pre_save(void *opaque)
>> +{
>> +    VSCSIState *s = opaque;
>> +    int i;
>> +
>> +    /* Can't save active requests, apparently the general SCSI code
>> +     * quiesces the queue for us on vmsave */
>> +    for (i = 0; i < VSCSI_REQ_LIMIT; i++) {
>> +        assert(!s->reqs[i].active);
>> +    }
>> +}
> 
> This is only true when the rerror and werror options have the values
> "ignore" or "report".  See virtio-scsi for an example of how to save the
> requests using the save_request and load_request callbacks in SCSIBusInfo.


Sigh.
How do you test that requests are saved/restored correctly? What happens
to requests which were already sent to the real hardware (real block
device, etc.) but have not completed by the time migration finishes?


> Paolo
> 
>> +static const VMStateDescription vmstate_spapr_vscsi = {
>> +    .name = "spapr_vscsi",
>> +    .version_id = 1,
>> +    .minimum_version_id = 1,
>> +    .minimum_version_id_old = 1,
>> +    .pre_save = spapr_vscsi_pre_save,
>> +    .fields      = (VMStateField []) {
>> +        VMSTATE_SPAPR_VIO(vdev, VSCSIState),
>> +        /* VSCSI state */
>> +        /* ???? */
>> +
>> +        VMSTATE_END_OF_LIST()
>> +    },
>> +};
>> +
>>  static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
>>  {
>>      DeviceClass *dc = DEVICE_CLASS(klass);
>> @@ -968,6 +995,7 @@ static void spapr_vscsi_class_init(ObjectClass *klass, void *data)
>>      k->signal_mask = 0x00000001;
>>      dc->props = spapr_vscsi_properties;
>>      k->rtce_window_size = 0x10000000;
>> +    dc->vmsd = &vmstate_spapr_vscsi;
>>  }
>>  
>>  static const TypeInfo spapr_vscsi_info = {
>>
> 


-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-27  6:48     ` [Qemu-devel] " Alexey Kardashevskiy
@ 2013-05-27  7:03       ` Paolo Bonzini
  2013-05-31  5:58         ` Alexey Kardashevskiy
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-27  7:03 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: agraf, David Gibson, qemu-ppc, qemu-devel, quintela

On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>> > 
>> > This is only true when the rerror and werror options have the values
>> > "ignore" or "report".  See virtio-scsi for an example of how to save the
>> > requests using the save_request and load_request callbacks in SCSIBusInfo.
> 
> Sigh.

?

> How do you test that requests are saved/restored correctly? What does
> happen to requests which were already sent to the real hardware (real block
> device, etc) but have not completed at the moment of the end of migration?

They aren't saved, there is a bdrv_drain_all() in the migration code.
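(IIRC the exact path is vm_stop_force_state() -> do_vm_stop() ->
bdrv_drain_all(), just before the device state is written out.)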

This is only used for rerror=stop or werror=stop.  To test it you can
use blkdebug (also a bit underdocumented) or hack block/raw-posix.c with
code that makes it fail the 100th write or something like that.  Start
the VM and migrate it while paused to a QEMU that doesn't have the hack.
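
Something as crude as this in a 1.5-ish tree should do (sketch; a NULL
return from the driver's aio hook is reported to the device as an I/O
error, which is what werror=stop reacts to):

static BlockDriverAIOCB *raw_aio_writev(BlockDriverState *bs,
        int64_t sector_num, QEMUIOVector *qiov, int nb_sectors,
        BlockDriverCompletionFunc *cb, void *opaque)
{
    static int writes;

    if (++writes == 100) {
        return NULL;    /* fail the 100th write */
    }
    return raw_aio_submit(bs, sector_num, qiov, nb_sectors,
                          cb, opaque, QEMU_AIO_WRITE);
}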

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-27  7:03       ` Paolo Bonzini
@ 2013-05-31  5:58         ` Alexey Kardashevskiy
  2013-05-31  8:18           ` Paolo Bonzini
  2013-05-31 10:07           ` Benjamin Herrenschmidt
  0 siblings, 2 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-05-31  5:58 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: quintela, agraf, qemu-devel, qemu-ppc, David Gibson

On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>
>>>> This is only true when the rerror and werror options have the values
>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>
>> Sigh.
> 
> ?

I thought the series was ready to go, but I was wrong. Furthermore, when I got
to the point where I could actually test the save/restore for vscsi_req,
migration was totally broken on PPC and it took some time to fix it :-/


>> How do you test that requests are saved/restored correctly? What does
>> happen to requests which were already sent to the real hardware (real block
>> device, etc) but have not completed at the moment of the end of migration?
> 
> They aren't saved, there is a bdrv_drain_all() in the migration code.
> 
> This is only used for rerror=stop or werror=stop.  To test it you can
> use blkdebug (also a bit underdocumented) or hack block/raw-posix.c with
> code that makes it fail the 100th write or something like that.  Start
> the VM and migrate it while paused to a QEMU that doesn't have the hack.

I run QEMU as (this is the destination, the source just does not have
-incoming):
./qemu-system-ppc64 \
 -L "qemu-ppc64-bios/" \
 -device "spapr-vscsi,id=ibmvscsi0" \
 -drive "file=virtimg/fc18guest,if=none,id=dddrive0,readonly=off,format=blkdebug,media=disk,werror=stop,rerror=stop" \
 -device "scsi-disk,id=scsidisk0,bus=ibmvscsi0.0,channel=0,scsi-id=0,lun=0,drive=dddrive0,removable=off" \
 -incoming "tcp:localhost:4000" \
 -m "1024" \
 -machine "pseries" \
 -nographic \
 -vga "none" \
 -enable-kvm

Am I using werror/rerror correctly?

I did not really understand how to use blkdebug or what else to hack in
raw-posix but the point is I cannot get QEMU into a state with at least one
vscsi_req.active==1, they are always inactive no matter what I do - I run
10 instances of "dd if=/dev/sda of=/dev/null bs=4K" (on 8GB image with
FC18) and increase migration speed to 500MB/s, no effect.

How do you trigger the situation when there are inactive requests which
have to be migrated?


And another question (sorry I am not very familiar with terminology but
cc:Ben is :) ) - what happens with indirect requests if migration happens
in the middle of handling such a request? virtio-scsi does not seem to
handle this situation specially, it just reconstructs the whole request and
that's it.


-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31  5:58         ` Alexey Kardashevskiy
@ 2013-05-31  8:18           ` Paolo Bonzini
  2013-05-31 10:12             ` Alexey Kardashevskiy
  2013-05-31 10:07           ` Benjamin Herrenschmidt
  1 sibling, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-31  8:18 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: qemu-devel, David Gibson, qemu-ppc, agraf, quintela

On 31/05/2013 07:58, Alexey Kardashevskiy wrote:
> On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
>> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>>
>>>>> This is only true when the rerror and werror options have the values
>>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>>
>>> Sigh.
>>
>> ?
> 
> I thought the series was ready to go, but I was wrong. Furthermore, when I got
> to the point where I could actually test the save/restore for vscsi_req,
> migration was totally broken on PPC and it took some time to fix it :-/

It is ready.  I was just pointing out that it's not _production_ ready.

(Sorry, I'm unusually terse these days).

> I run QEMU as (this is the destination, the source just does not have
> -incoming):
> ./qemu-system-ppc64 \
>  -L "qemu-ppc64-bios/" \
>  -device "spapr-vscsi,id=ibmvscsi0" \
>  -drive "file=virtimg/fc18guest,if=none,id=dddrive0,readonly=off,format=blkdebug,media=disk,werror=stop,rerror=stop" \
>  -device "scsi-disk,id=scsidisk0,bus=ibmvscsi0.0,channel=0,scsi-id=0,lun=0,drive=dddrive0,removable=off" \
>  -incoming "tcp:localhost:4000" \
>  -m "1024" \
>  -machine "pseries" \
>  -nographic \
>  -vga "none" \
>  -enable-kvm
> 
> Am I using werror/rerror correctly?

Yes.

> I did not really understand how to use blkdebug or what else to hack in
> raw-posix but the point is I cannot get QEMU into a state with at least one
> vscsi_req.active==1, they are always inactive no matter what I do - I run
> 10 instances of "dd if=/dev/sda of=/dev/null bs=4K" (on 8GB image with
> FC18) and increase migration speed to 500MB/s, no effect.

No, that doesn't help.

> How do you trigger the situation when there are inactive requests which
> have to be migrated?

You need to trigger an error.  For example, you could use a sparse image
on an almost-full partition and let "dd" fill your disk.  Then migrate
to another instance of QEMU on the same machine; the destination machine
should complete the migration but fail to start the machine.  When you free
up space on that partition and "cont" on the destination, it should resume.
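(A quick way to get such an image is "qemu-img create -f qcow2 big.qcow2 8G"
on a partition with much less than 8G free; the file starts small and only
grows as the guest writes to it.)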

> And another question (sorry I am not very familiar with terminology but
> cc:Ben is :) ) - what happens with indirect requests if migration happened
> in the middle of handling such a request? virtio-scsi does not seem to
> handle this situation anyhow, it just reconstructs the whole request and
> that's it.

What are indirect requests?

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31  5:58         ` Alexey Kardashevskiy
  2013-05-31  8:18           ` Paolo Bonzini
@ 2013-05-31 10:07           ` Benjamin Herrenschmidt
  2013-05-31 10:25             ` Alexey Kardashevskiy
  1 sibling, 1 reply; 31+ messages in thread
From: Benjamin Herrenschmidt @ 2013-05-31 10:07 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: quintela, Alexey Kardashevskiy, agraf, qemu-devel, qemu-ppc,
	David Gibson

On Fri, 2013-05-31 at 15:58 +1000, Alexey Kardashevskiy wrote:
> 
> And another question (sorry I am not very familiar with terminology but
> cc:Ben is :) ) - what happens with indirect requests if migration happened
> in the middle of handling such a request? virtio-scsi does not seem to
> handle this situation anyhow, it just reconstructs the whole request and
> that's it.

So Paolo, the crux of the question here is really whether we have any
guarantee about the state of the request when this happens (by this I
mean a save happening with requests still "in flight") ?

IE. Can the request be at any stage of processing, with the data
transfer phase being half way through, or do we somewhat know for sure
that the request will *not* have started transferring any data ?

This is key, because in the latter case, all we really need to do is
save the request itself, and re-parse it on restore as if it was
new really (at least from a DMA descriptor perspective).

However, if the data transfer is already half way through, we need to
somewhat save the state of the data transfer machinery, ie. the position
of the "cursor" that follows the guest-provided DMA descriptor list,
etc... (which isn't *that* trivial since we have a concept of indirect
descriptors and we use pointers to follow them, so we'd probably have
to re-walk the whole user descriptors list until we reach the same position).

Cheers,
Ben.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31  8:18           ` Paolo Bonzini
@ 2013-05-31 10:12             ` Alexey Kardashevskiy
  2013-05-31 10:26               ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-05-31 10:12 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: qemu-devel, David Gibson, qemu-ppc, agraf, quintela

On 05/31/2013 06:18 PM, Paolo Bonzini wrote:
> On 31/05/2013 07:58, Alexey Kardashevskiy wrote:
>> On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
>>> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>>>
>>>>>> This is only true when the rerror and werror options have the values
>>>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>>>
>>>> Sigh.
>>>
>>> ?
>>
>> I thought the series is ready to go but I was wrong. Furthermore when I got
>> to the point where I could actually test the save/restore for vscsi_req,
>> migration was totally broken on PPC and it took some time to fix it :-/
> 
> It is ready.  I was just pointing out that it's not _production_ ready.

What is the difference then? :)


> (Sorry, I'm unusually terse these days).
> 
>> I run QEMU as (this is the destination, the source just does not have
>> -incoming):
>> ./qemu-system-ppc64 \
>>  -L "qemu-ppc64-bios/" \
>>  -device "spapr-vscsi,id=ibmvscsi0" \
>>  -drive "file=virtimg/fc18guest,if=none,id=dddrive0,readonly=off,format=blkdebug,media=disk,werror=stop,rerror=stop" \
>>  -device "scsi-disk,id=scsidisk0,bus=ibmvscsi0.0,channel=0,scsi-id=0,lun=0,drive=dddrive0,removable=off" \
>>  -incoming "tcp:localhost:4000" \
>>  -m "1024" \
>>  -machine "pseries" \
>>  -nographic \
>>  -vga "none" \
>>  -enable-kvm
>>
>> Am I using werror/rerror correctly?
> 
> Yes.
> 
>> I did not really understand how to use blkdebug or what else to hack in
>> raw-posix but the point is I cannot get QEMU into a state with at least one
>> vscsi_req.active==1, they are always inactive no matter what I do - I run
>> 10 instances of "dd if=/dev/sda of=/dev/null bs=4K" (on 8GB image with
>> FC18) and increase migration speed to 500MB/s, no effect.
> 
> No, that doesn't help.
> 
>> How do you trigger the situation when there are inactive requests which
>> have to be migrated?
> 
> You need to trigger an error.  For example, you could use a sparse image
> on an almost-full partition and let "dd" fill your disk.  Then migrate
> to another instance of QEMU on the same machine, the destination machine
> should succeed migration but fail starting the machine.

Why would it fail? I run "dd", it fills the disk and stops. Then I migrate.
How can I get a pending SCSI request in the queue? Sorry, I am definitely
missing something.


> When free space
> on that partition, and "cont" on the destination, it should resume.
> 
>> And another question (sorry I am not very familiar with terminology but
>> cc:Ben is :) ) - what happens with indirect requests if migration happened
>> in the middle of handling such a request? virtio-scsi does not seem to
>> handle this situation anyhow, it just reconstructs the whole request and
>> that's it.
> 
> What are indirect requests?

I'll leave it to Ben to avoid confusing :)




-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:07           ` Benjamin Herrenschmidt
@ 2013-05-31 10:25             ` Alexey Kardashevskiy
  2013-05-31 10:41               ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-05-31 10:25 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: quintela, agraf, qemu-devel, qemu-ppc, Paolo Bonzini, David Gibson

On 05/31/2013 08:07 PM, Benjamin Herrenschmidt wrote:
> On Fri, 2013-05-31 at 15:58 +1000, Alexey Kardashevskiy wrote:
>>
>> And another question (sorry I am not very familiar with terminology but
>> cc:Ben is :) ) - what happens with indirect requests if migration happened
>> in the middle of handling such a request? virtio-scsi does not seem to
>> handle this situation anyhow, it just reconstructs the whole request and
>> that's it.
> 
> So Paolo, the crux of the question here is really whether we have any
> guarantee about the state of the request when this happens (by this I
> mean a save happening with requests still "in flight") ?
> 
> IE. Can the request be at any stage of processing, with the data
> transfer phase being half way through, or do we somewhat know for sure
> that the request will *not* have started transferring any data ?
> 
> This is key, because in the latter case, all we really need to do is
> save the request itself, and re-parse it on restore as if it was
> new really (at least from a DMA descriptor perspective).
> 
> However, if the data transfer is already half way through, we need to
> somewhat save the state of the data transfer machinery, ie. the position
> of the "cursor" that follows the guest-provided DMA descriptor list,
> etc... (which isn't *that* trivial since we have a concept of indirect
> descriptors and we use pointers to follow them, so we'd probably have
> to re-walk the whole user descriptors list until we reach the same position).


Isn't it the same QEMU thread which handles hcalls and QEMU console
commands, so the migration cannot stop parsing/handling a vscsi_req?



-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:12             ` Alexey Kardashevskiy
@ 2013-05-31 10:26               ` Paolo Bonzini
  2013-05-31 10:33                 ` Alexey Kardashevskiy
  0 siblings, 1 reply; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-31 10:26 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: agraf, quintela, qemu-ppc, qemu-devel, David Gibson

On 31/05/2013 12:12, Alexey Kardashevskiy wrote:
> On 05/31/2013 06:18 PM, Paolo Bonzini wrote:
>> On 31/05/2013 07:58, Alexey Kardashevskiy wrote:
>>> On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
>>>> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>>>>
>>>>>>> This is only true when the rerror and werror options have the values
>>>>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>>>>
>>>>> Sigh.
>>>>
>>>> ?
>>>
>>> I thought the series was ready to go, but I was wrong. Furthermore, when I got
>>> to the point where I could actually test the save/restore for vscsi_req,
>>> migration was totally broken on PPC and it took some time to fix it :-/
>>
>> It is ready.  I was just pointing out that it's not _production_ ready.
> 
> What is the difference then? :)

It is mergeable, but it needs further work and you should be aware of that.

>>> How do you trigger the situation when there are inactive requests which
>>> have to be migrated?
>>
>> You need to trigger an error.  For example, you could use a sparse image
>> on an almost-full partition and let "dd" fill your disk.  Then migrate
>> to another instance of QEMU on the same machine, the destination machine
>> should succeed migration but fail starting the machine.
> 
> Why would it fail? I run "dd", it fills the disk and stops.

You have to make it fill the _host_ disk before it fills the guest disk.
 That's why I mentioned a sparse image.

Then the machine pauses with the failing request in its queue.  When you
migrate, the request is migrated as well.

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:26               ` Paolo Bonzini
@ 2013-05-31 10:33                 ` Alexey Kardashevskiy
  2013-05-31 10:34                   ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-05-31 10:33 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: agraf, quintela, qemu-ppc, qemu-devel, David Gibson

On 05/31/2013 08:26 PM, Paolo Bonzini wrote:
> On 31/05/2013 12:12, Alexey Kardashevskiy wrote:
>> On 05/31/2013 06:18 PM, Paolo Bonzini wrote:
>>> On 31/05/2013 07:58, Alexey Kardashevskiy wrote:
>>>> On 05/27/2013 05:03 PM, Paolo Bonzini wrote:
>>>>> On 27/05/2013 08:48, Alexey Kardashevskiy wrote:
>>>>>>>>
>>>>>>>> This is only true when the rerror and werror options have the values
>>>>>>>> "ignore" or "report".  See virtio-scsi for an example of how to save the
>>>>>>>> requests using the save_request and load_request callbacks in SCSIBusInfo.
>>>>>>
>>>>>> Sigh.
>>>>>
>>>>> ?
>>>>
>>>> I thought the series was ready to go, but I was wrong. Furthermore, when I got
>>>> to the point where I could actually test the save/restore for vscsi_req,
>>>> migration was totally broken on PPC and it took some time to fix it :-/
>>>
>>> It is ready.  I was just pointing out that it's not _production_ ready.
>>
>> What is the difference then? :)
> 
> It is mergeable, but it needs further work and you should be aware of that.
> 
>>>> How do you trigger the situation when there are inactive requests which
>>>> have to be migrated?
>>>
>>> You need to trigger an error.  For example, you could use a sparse image
>>> on an almost-full partition and let "dd" fill your disk.  Then migrate
>>> to another instance of QEMU on the same machine, the destination machine
>>> should succeed migration but fail starting the machine.
>>
>> Why would it fail? I run "dd", it fills the disk and stops.
> 
> You have to make it fill the _host_ disk before it fills the guest disk.
>  That's why I mentioned a sparse image.
> 
> Then the machine pauses with the failing request in its queue.


Does the machine pause automatically in such case? Did not know that, now
it makes sense. Thanks.


> When you migrate, the request is migrated as well.


-- 
Alexey

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:33                 ` Alexey Kardashevskiy
@ 2013-05-31 10:34                   ` Paolo Bonzini
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-31 10:34 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: agraf, quintela, qemu-ppc, qemu-devel, David Gibson

On 31/05/2013 12:33, Alexey Kardashevskiy wrote:
>>>>> How do you trigger the situation when there are inactive requests which
>>>>> have to be migrated?
>>>>
>>>> You need to trigger an error.  For example, you could use a sparse image
>>>> on an almost-full partition and let "dd" fill your disk.  Then migrate
>>>> to another instance of QEMU on the same machine, the destination machine
>>>> should succeed migration but fail starting the machine.
>>>
>>> Why would it fail? I run "dd", it fills the disk and stops.
>>
>> You have to make it fill the _host_ disk before it fills the guest disk.
>>  That's why I mentioned a sparse image.
>>
>> Then the machine pauses with the failing request in its queue.
> 
> Does the machine pause automatically in such case? Did not know that, now
> it makes sense. Thanks.

That's the point of rerror=stop/werror=stop. :)

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:25             ` Alexey Kardashevskiy
@ 2013-05-31 10:41               ` Paolo Bonzini
  2013-06-01  0:01                 ` Benjamin Herrenschmidt
  2013-06-03  5:46                 ` Alexey Kardashevskiy
  0 siblings, 2 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-05-31 10:41 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: quintela, agraf, qemu-devel, qemu-ppc, David Gibson

On 31/05/2013 12:25, Alexey Kardashevskiy wrote:
> On 05/31/2013 08:07 PM, Benjamin Herrenschmidt wrote:
>> On Fri, 2013-05-31 at 15:58 +1000, Alexey Kardashevskiy wrote:
>>>
>>> And another question (sorry I am not very familiar with terminology but
>>> cc:Ben is :) ) - what happens with indirect requests if migration happened
>>> in the middle of handling such a request? virtio-scsi does not seem to
>>> handle this situation anyhow, it just reconstructs the whole request and
>>> that's it.
>>
>> So Paolo, the crux of the question here is really whether we have any
>> guarantee about the state of the request when this happens (by this I
>> mean a save happening with requests still "in flight") ?
>>
>> IE. Can the request be at any stage of processing, with the data
>> transfer phase being half way through, or do we somewhat know for sure
>> that the request will *not* have started transferring any data ?
>>
>> This is key, because in the latter case, all we really need to do is
>> save the request itself, and re-parse it on restore as if it was
>> new really (at least from a DMA descriptor perspective).
>>
>> However, if the data transfer is already half way through, we need to
>> somewhat save the state of the data transfer machinery, ie. the position
>> of the "cursor" that follows the guest-provided DMA descriptor list,
>> etc... (which isn't *that* trivial since we have a concept of indirect
>> descriptors and we use pointers to follow them, so we'd probably have
>> to re-walk the whole user descriptors list until we reach the same position).

It may be halfway through, but it is always restarted on the destination.

virtio-scsi parses the whole descriptor chain upfront and sends the
guest addresses in the migration stream.
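
From memory, the save side there is literally just (the VirtQueueElement
already records the guest addresses of the whole chain):

static void virtio_scsi_save_request(QEMUFile *f, SCSIRequest *sreq)
{
    VirtIOSCSIReq *req = sreq->hba_private;

    qemu_put_buffer(f, (unsigned char *)&req->elem, sizeof(req->elem));
}

and load_request reads the element back and remaps the guest pages before
the request is restarted.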

> Isn't it the same QEMU thread which handles hcalls and QEMU console
> commands so the migration cannot stop parsing/handling a vscsi_req?

The VM is paused and I/O is flushed at the point when the reqs are sent.
 That's why you couldn't get a pending request.  Only failed requests
remain in queue.

Paolo

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:41               ` Paolo Bonzini
@ 2013-06-01  0:01                 ` Benjamin Herrenschmidt
  2013-06-03  6:21                   ` Paolo Bonzini
  2013-06-03  5:46                 ` Alexey Kardashevskiy
  1 sibling, 1 reply; 31+ messages in thread
From: Benjamin Herrenschmidt @ 2013-06-01  0:01 UTC (permalink / raw)
  To: Paolo Bonzini
  Cc: quintela, Alexey Kardashevskiy, agraf, qemu-devel, qemu-ppc,
	David Gibson

On Fri, 2013-05-31 at 12:41 +0200, Paolo Bonzini wrote:

> It may be halfway through, but it is always restarted on the destination.

"restarted" as in the whole transfer is restarted if any right ? So we
can essentially consider as a new request for which we just did
scsi_req_enqueue() ?

IE. We don't do direct DMA to guest pages just yet (we still do copies)
so basically our process is:

 1- Obtain request from guest
 2- Queue it (scsi_req_enqueue)
 3- No transfer -> go away (completion is called)
 4- Pre-process user descriptors (check desc type direct vs indirect,
    position our "cursor" walking them etc....)
 5- scsi_req_continue()
    .../... loop of callbacks & transfer

Now from what you say, I assume that regardless of the point where
the request was, when we "resume" it will always be at step 4 ?

IE. I can just pre-process the descriptors again ? (I actually need
to transfer them again from the guest since I suspect I clobber them
at the very least due to byteswap) and call scsi_req_continue() and
assume the transfer (if any) started from the beginning ?

> virtio-scsi parses the whole descriptor chain upfront and sends the
> guest addresses in the migration stream.
> 
> > Isn't it the same QEMU thread which handles hcalls and QEMU console
> > commands so the migration cannot stop parsing/handling a vscsi_req?
> 
> The VM is paused and I/O is flushed at the point when the reqs are sent.
>  That's why you couldn't get a pending request.  Only failed requests
> remain in queue.

Ben.

^ permalink raw reply	[flat|nested] 31+ messages in thread

* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-05-31 10:41               ` Paolo Bonzini
  2013-06-01  0:01                 ` Benjamin Herrenschmidt
@ 2013-06-03  5:46                 ` Alexey Kardashevskiy
  2013-06-03  6:23                   ` Paolo Bonzini
  2013-06-03  8:07                   ` Benjamin Herrenschmidt
  1 sibling, 2 replies; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-06-03  5:46 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: quintela, agraf, qemu-devel, qemu-ppc, David Gibson

On 05/31/2013 08:41 PM, Paolo Bonzini wrote:
> On 31/05/2013 12:25, Alexey Kardashevskiy wrote:
>> On 05/31/2013 08:07 PM, Benjamin Herrenschmidt wrote:
>>> On Fri, 2013-05-31 at 15:58 +1000, Alexey Kardashevskiy wrote:
>>>>
>>>> And another question (sorry I am not very familiar with terminology but
>>>> cc:Ben is :) ) - what happens with indirect requests if migration happened
>>>> in the middle of handling such a request? virtio-scsi does not seem to
>>>> handle this situation anyhow, it just reconstructs the whole request and
>>>> that's it.
>>>
>>> So Paolo, the crux of the question here is really whether we have any
>>> guarantee about the state of the request when this happens (by this I
>>> mean a save happening with requests still "in flight") ?
>>>
>>> IE. Can the request be at any stage of processing, with the data
>>> transfer phase being half way through, or do we somewhat know for sure
>>> that the request will *not* have started transferring any data ?
>>>
>>> This is key, because in the latter case, all we really need to do is
>>> save the request itself, and re-parse it on restore as if it was
>>> new really (at least from a DMA descriptor perspective).
>>>
>>> However, if the data transfer is already half way through, we need to
>>> somewhat save the state of the data transfer machinery, ie. the position
>>> of the "cursor" that follows the guest-provided DMA descriptor list,
>>> etc... (which isn't *that* trivial since we have a concept of indirect
>>> descriptors and we use pointers to follow them, so we'd probably have
>>> to re-walk the whole user descriptors list until we reach the same position).
> 
> It may be halfway through, but it is always restarted on the destination.
> 
> virtio-scsi parses the whole descriptor chain upfront and sends the
> guest addresses in the migration stream.
> 
>> Isn't it the same QEMU thread which handles hcalls and QEMU console
>> commands so the migration cannot stop parsing/handling a vscsi_req?
> 
> The VM is paused and I/O is flushed at the point when the reqs are sent.
>  That's why you couldn't get a pending request.  Only failed requests
> remain in queue.


Ok. I implemented {save|load}_request for IBMVSCSI and started testing - the
destination system behaves very unstably: sometimes there is a fault in
_raw_spin_lock, sometimes it looks okay but any attempt to read the filesystem
leads to 100% cpu load in the qemu process and no response from the guest.

I tried virtio-scsi as well (as it was referred to as a good example); it
fails in exactly the same way. So I started wondering - when did you last
try it? :)

My test is:
1. create an 8GB qcow2 image, put it on a 2GB USB disk.
2. put a 1.8GB "dummy" image onto the same USB disk.
3. run qemu with the qcow2 image.
4. do "mkfs.ext4 /dev/sda" in the guest. It creates a 300MB file when there
is enough space.
5. wait till the source qemu gets stopped due to an io error ("info status"
confirms this).
6. migrate.
7. remove the "dummy" image.
8. "c"ontinue in the destination guest.

Is it good/bad/ugly? What am I missing? Thanks!


-- 
Alexey


* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-06-01  0:01                 ` Benjamin Herrenschmidt
@ 2013-06-03  6:21                   ` Paolo Bonzini
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-06-03  6:21 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: quintela, Alexey Kardashevskiy, qemu-devel, agraf, qemu-ppc,
	David Gibson

On 01/06/2013 02:01, Benjamin Herrenschmidt wrote:
> On Fri, 2013-05-31 at 12:41 +0200, Paolo Bonzini wrote:
> 
>> It may be halfway through, but it is always restarted on the destination.
> 
> "restarted" as in the whole transfer is restarted if any right ? So we
> can essentially consider as a new request for which we just did
> scsi_req_enqueue() ?
> 
> IE. We don't do direct DMA to guest pages just yet (we still do copies)
> so basically our process is:
> 
>  1- Obtain request from guest
>  2- Queue it (scsi_req_enqueue)
>  3- No transfer -> go away (completion is called)
>  4- Pre-process user descriptors (check desc type direct vs indirect,
>     position our "cursor" walking them etc....)
>  5- scsi_req_continue()
>     .../... loop of callbacks & transfer
> 
> Now from what you say, I assume that regardless of the point where
> the request was, when we "resume" it will always be at step 4 ?
> 
> IE. I can just pre-process the descriptors again ? (I actually need to
> transfer them again from the guest since I suspect I clobber them, at
> the very least due to byteswap) and call scsi_req_continue(), and
> assume the transfer (if any) starts from the beginning ?

Yes.  Unless the spec somehow lets the guest figure out the point at
which the whole chain has been pre-processed, and lets the guest modify
the chain at this point.  But if that's not the case, you can do that.
Memory has already been loaded when load_request runs.
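
To make that concrete, a minimal sketch of what such a {save,load}_request
pair could look like, assuming vscsi_req keeps the original request as
received from the guest (vscsi_req, VSCSIState, VSCSI(),
vscsi_find_free_req() and vscsi_preprocess_descs() are assumed names,
not the actual patch):

static void vscsi_save_request(QEMUFile *f, SCSIRequest *sreq)
{
    vscsi_req *req = sreq->hba_private;

    /* Save the request as received from the guest.  The data-transfer
     * "cursor" is deliberately not saved: the transfer restarts from
     * the beginning on the destination. */
    qemu_put_buffer(f, (uint8_t *)req, sizeof(*req));
}

static void *vscsi_load_request(QEMUFile *f, SCSIRequest *sreq)
{
    VSCSIState *s = VSCSI(sreq->bus->qbus.parent);   /* assumed cast */
    vscsi_req *req = vscsi_find_free_req(s);         /* assumed helper */

    qemu_get_buffer(f, (uint8_t *)req, sizeof(*req));
    req->sreq = scsi_req_ref(sreq);

    /* Guest memory is already loaded at this point, so the descriptor
     * chain can be re-fetched and re-walked as if the request were new. */
    vscsi_preprocess_descs(s, req);                  /* assumed helper */
    return req;
}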

Paolo


* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-06-03  5:46                 ` Alexey Kardashevskiy
@ 2013-06-03  6:23                   ` Paolo Bonzini
  2013-06-03  8:07                   ` Benjamin Herrenschmidt
  1 sibling, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-06-03  6:23 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: qemu-devel, David Gibson, qemu-ppc, agraf, quintela

On 03/06/2013 07:46, Alexey Kardashevskiy wrote:
> Ok. I implemented {save|load}_request for IBMVSCSI and started testing - the
> destination system is very unstable: sometimes it faults in _raw_spin_lock,
> or it looks okay but any attempt to read the filesystem leads to 100% CPU
> load in the qemu process and no response from the guest.
> 
> I tried virtio-scsi as well (as it was referred to as a good example); it
> fails in exactly the same way. So I started wondering - when did you last
> try it? :)

Perhaps a year ago.  Gerd must have tested usb-storage more recently than
that, though.

> My test is:
> 1. Create an 8GB qcow2 image and put it on a 2GB USB disk.
> 2. Put a 1.8GB "dummy" image onto the same USB disk.
> 3. Run qemu with the qcow2 image.
> 4. Do "mkfs.ext4 /dev/sda" in the guest. It creates a 300MB file when there
> is enough space.
> 5. Wait till the source qemu gets stopped due to an I/O error ("info status"
> confirms this).
> 6. Migrate.
> 7. Remove the "dummy" image.
> 8. "c"ontinue in the destination guest.

Sounds good.  We really need testcases for this.  I'll take a look when
I come back from vacation.

Paolo


* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-06-03  5:46                 ` Alexey Kardashevskiy
  2013-06-03  6:23                   ` Paolo Bonzini
@ 2013-06-03  8:07                   ` Benjamin Herrenschmidt
  2013-06-03  9:37                     ` Alexey Kardashevskiy
  1 sibling, 1 reply; 31+ messages in thread
From: Benjamin Herrenschmidt @ 2013-06-03  8:07 UTC (permalink / raw)
  To: Alexey Kardashevskiy
  Cc: quintela, agraf, qemu-devel, qemu-ppc, Paolo Bonzini, David Gibson

On Mon, 2013-06-03 at 15:46 +1000, Alexey Kardashevskiy wrote:
> Ok. I implemented {save|load}_request for IBMVSCSI and started testing -
> the destination system is very unstable: sometimes it faults in
> _raw_spin_lock, or it looks okay but any attempt to read the filesystem
> leads to 100% CPU load in the qemu process and no response from the guest.
> 
> I tried virtio-scsi as well (as it was referred to as a good example); it
> fails in exactly the same way. So I started wondering - when did you last
> try it? :)

Did you try virtio-blk or even a ramdisk ? IE, make sure the problem
isn't some kind of generic migration issue unrelated to storage.
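
Concretely, that control test is just a matter of swapping the disk
frontend - illustrative command lines only, with made-up file names:

  # same guest, virtio-blk instead of the vscsi controller:
  qemu-system-ppc64 -M pseries -drive file=test.qcow2,if=virtio ...

  # or no block device at all, root filesystem on an initramfs:
  qemu-system-ppc64 -M pseries -kernel vmlinux -initrd rootfs.cpio.gz ...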

Cheers,
Ben.


* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-06-03  8:07                   ` Benjamin Herrenschmidt
@ 2013-06-03  9:37                     ` Alexey Kardashevskiy
  2013-06-03  9:41                       ` Paolo Bonzini
  0 siblings, 1 reply; 31+ messages in thread
From: Alexey Kardashevskiy @ 2013-06-03  9:37 UTC (permalink / raw)
  To: Benjamin Herrenschmidt
  Cc: quintela, agraf, qemu-devel, qemu-ppc, Paolo Bonzini, David Gibson

On 06/03/2013 06:07 PM, Benjamin Herrenschmidt wrote:
> On Mon, 2013-06-03 at 15:46 +1000, Alexey Kardashevskiy wrote:
>> Ok. I implemented {save|load}_request for IBMVSCSI and started testing -
>> the destination system is very unstable: sometimes it faults in
>> _raw_spin_lock, or it looks okay but any attempt to read the filesystem
>> leads to 100% CPU load in the qemu process and no response from the guest.
>>
>> I tried virtio-scsi as well (as it was referred to as a good example); it
>> fails in exactly the same way. So I started wondering - when did you last
>> try it? :)
> 
> Did you try virtio-blk or even a ramdisk ? IE, make sure the problem
> isn't some kind of generic migration issue unrelated to storage.


False alarm. During multiple switches between different git branches, I
lost my own patch which disables "bulk" migration (which we want to
revert anyway; we are just waiting for the author to do that himself) :)

At least my test does not fail any more. Sorry for the confusion.



-- 
Alexey


* Re: [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI
  2013-06-03  9:37                     ` Alexey Kardashevskiy
@ 2013-06-03  9:41                       ` Paolo Bonzini
  0 siblings, 0 replies; 31+ messages in thread
From: Paolo Bonzini @ 2013-06-03  9:41 UTC (permalink / raw)
  To: Alexey Kardashevskiy; +Cc: quintela, agraf, qemu-devel, qemu-ppc, David Gibson

On 03/06/2013 11:37, Alexey Kardashevskiy wrote:
> On 06/03/2013 06:07 PM, Benjamin Herrenschmidt wrote:
>> On Mon, 2013-06-03 at 15:46 +1000, Alexey Kardashevskiy wrote:
>>> Ok. I implemented {save|load}_request for IBMVSCSI and started testing -
>>> the destination system is very unstable: sometimes it faults in
>>> _raw_spin_lock, or it looks okay but any attempt to read the filesystem
>>> leads to 100% CPU load in the qemu process and no response from the guest.
>>>
>>> I tried virtio-scsi as well (as it was referred to as a good example); it
>>> fails in exactly the same way. So I started wondering - when did you last
>>> try it? :)
>>
>> Did you try virtio-blk or even a ramdisk ? IE, make sure the problem
>> isn't some kind of generic migration issue unrelated to storage.
> 
> 
> False alarm. During multiple switches between different git branches, I
> lost my own patch which disables "bulk" migration (which we want to
> revert anyway; we are just waiting for the author to do that himself) :)
> 
> At least my test does not fail any more. Sorry for the confusion.

Good, I was surprised.  That area doesn't see much testing, but it
hasn't seen much change either.

Paolo


end of thread

Thread overview: 31+ messages
2013-05-03  1:38 [Qemu-devel] [0/8] pseries: savevm / migration support David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 1/8] savevm: Implement VMS_DIVIDE flag David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 2/8] target-ppc: Convert ppc cpu savevm to VMStateDescription David Gibson
2013-05-03 11:29   ` Andreas Färber
2013-05-03 14:26     ` [Qemu-devel] [Qemu-ppc] " David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 3/8] pseries: savevm support for XICS interrupt controller David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 4/8] pseries: savevm support for VIO devices David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 5/8] pseries: savevm support for PAPR VIO logical lan David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 6/8] pseries: savevm support for PAPR TCE tables David Gibson
2013-05-03  1:38 ` [Qemu-devel] [PATCH 7/8] pseries: savevm support for PAPR virtual SCSI David Gibson
2013-05-06  7:37   ` Paolo Bonzini
2013-05-07  3:07     ` [Qemu-devel] [Qemu-ppc] " David Gibson
2013-05-27  6:48     ` [Qemu-devel] " Alexey Kardashevskiy
2013-05-27  7:03       ` Paolo Bonzini
2013-05-31  5:58         ` Alexey Kardashevskiy
2013-05-31  8:18           ` Paolo Bonzini
2013-05-31 10:12             ` Alexey Kardashevskiy
2013-05-31 10:26               ` Paolo Bonzini
2013-05-31 10:33                 ` Alexey Kardashevskiy
2013-05-31 10:34                   ` Paolo Bonzini
2013-05-31 10:07           ` Benjamin Herrenschmidt
2013-05-31 10:25             ` Alexey Kardashevskiy
2013-05-31 10:41               ` Paolo Bonzini
2013-06-01  0:01                 ` Benjamin Herrenschmidt
2013-06-03  6:21                   ` Paolo Bonzini
2013-06-03  5:46                 ` Alexey Kardashevskiy
2013-06-03  6:23                   ` Paolo Bonzini
2013-06-03  8:07                   ` Benjamin Herrenschmidt
2013-06-03  9:37                     ` Alexey Kardashevskiy
2013-06-03  9:41                       ` Paolo Bonzini
2013-05-03  1:38 ` [Qemu-devel] [PATCH 8/8] pseries: savevm support for pseries machine David Gibson
