* [PATCH 0/6] xen/arm: Xen save/restore/live migration support
@ 2014-04-10 16:48 Wei Huang
  2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
                   ` (6 more replies)
  0 siblings, 7 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

The following are the save/restore/live-migration patches I forward-ported to
the latest Xen tree. Note that I kept the order of the original patches, and
their Signed-off-by lines as well. For the patches I modified (see summary
below), I added my own Signed-off-by.

These patches aren't intended as the final version. Since Junghyun Yoo is
working on his own patches, I think it is better to send out my version now
for discussion and a proper merge. Let's aim to support these features in Xen 4.5.

=== Modification Summary ===

Patch 1: 
* Adapt to latest Xen source code
* Set IRQ status to enabled by checking ienable bits after loading VM info
* Minor fixes to support 64-bit

Patch 5:
* Add new p2m ARM type (p2m_ram_logdirty) to support dirty page tracking

Patch 6:
* Enable save/restore/live migration support based on latest Xen code
* Adapt to latest Xen source code
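For reference, the ienable-driven IRQ fixup mentioned for Patch 1 can be sketched in plain C as follows. The collect_enabled_irqs() helper is hypothetical; it stands in for the irq_to_pending()/set_bit() walk the hypervisor performs after loading the VGIC rank state:

```c
#include <assert.h>
#include <stdint.h>

/* One VGIC rank covers 32 interrupts; 'ienable' holds one bit per IRQ.
 * This helper records which IRQs the loaded state marks enabled -- the
 * hypervisor does the same scan and then sets GIC_IRQ_GUEST_ENABLED on
 * each corresponding pending_irq. */
static int collect_enabled_irqs(uint32_t ienable, int irqs[32])
{
    int n = 0;
    for ( int irq = 0; irq < 32; irq++ )
        if ( ienable & (1u << irq) )
            irqs[n++] = irq;
    return n;
}
```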


-Wei


Alexey Sokolov (1):
  xen/arm: Implement toolstack for xl restore/save and migrate

Jaeyong Yoo (4):
  xen/arm: implement get_maximum_gpfn hypercall
  xen/arm: Implement do_suspend function
  xen/arm: Implement VLPT for guest p2m mapping in live migration
  xen/arm: Implement hypercall for dirty page tracing

Jaeyong Yoon (1):
  xen/arm: Save and restore support for hvm context hypercall

 config/arm32.mk                        |   1 +
 config/arm64.mk                        |   1 +
 tools/libxc/Makefile                   |   6 +-
 tools/libxc/xc_arm_migrate.c           | 712 +++++++++++++++++++++++++++++++++
 tools/libxc/xc_dom_arm.c               |   4 +-
 tools/libxc/xc_resume.c                |  25 ++
 tools/libxl/libxl.h                    |   3 -
 tools/misc/Makefile                    |   4 +-
 xen/arch/arm/Makefile                  |   1 +
 xen/arch/arm/domain.c                  |  19 +
 xen/arch/arm/domctl.c                  | 101 ++++-
 xen/arch/arm/hvm.c                     | 505 ++++++++++++++++++++++-
 xen/arch/arm/mm.c                      | 240 ++++++++++-
 xen/arch/arm/p2m.c                     | 208 ++++++++++
 xen/arch/arm/save.c                    |  66 +++
 xen/arch/arm/traps.c                   |   9 +
 xen/common/Makefile                    |   2 +
 xen/include/asm-arm/arm32/page.h       |  23 +-
 xen/include/asm-arm/config.h           |   9 +
 xen/include/asm-arm/domain.h           |  14 +
 xen/include/asm-arm/hvm/support.h      |  29 ++
 xen/include/asm-arm/mm.h               |  26 ++
 xen/include/asm-arm/p2m.h              |   8 +-
 xen/include/asm-arm/processor.h        |   2 +
 xen/include/public/arch-arm/hvm/save.h | 136 +++++++
 25 files changed, 2129 insertions(+), 25 deletions(-)
 create mode 100644 tools/libxc/xc_arm_migrate.c
 create mode 100644 xen/arch/arm/save.c
 create mode 100644 xen/include/asm-arm/hvm/support.h

-- 
1.8.3.2


* [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-10 17:26   ` Andrew Cooper
  2014-04-11 13:57   ` Julien Grall
  2014-04-10 16:48 ` [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall Wei Huang
                   ` (5 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Jaeyong Yoon <jaeyong.yoo@samsung.com>

Implement the HVM context save/restore hypercall. As part of HVM context
save/restore, this patch saves the GIC, timer, and VFP registers.

Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
Signed-off-by: Wei Huang <w1.huang@samsung.com>
---
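(Editor's note, not part of the patch.) As a rough illustration of the record stream that hvm_save()/hvm_load() and the sethvmcontext/gethvmcontext domctls below shuttle between Xen and the toolstack, here is a toy marshaller. The descriptor layout is illustrative only, not the exact struct hvm_save_descriptor:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Illustrative record header: a typecode (GIC, A15_TIMER, VCPU, ...),
 * an instance (the vcpu_id for HVMSR_PER_VCPU records) and the payload
 * length in bytes. */
struct toy_desc {
    uint16_t typecode;
    uint16_t instance;
    uint32_t length;
};

/* Append one record to 'buf'; returns the number of bytes written.
 * A real hvm_save_entry() additionally bounds-checks against the
 * remaining space in the context buffer. */
static size_t toy_put_record(uint8_t *buf, uint16_t type, uint16_t inst,
                             const void *payload, uint32_t len)
{
    struct toy_desc d = { type, inst, len };
    memcpy(buf, &d, sizeof(d));
    memcpy(buf + sizeof(d), payload, len);
    return sizeof(d) + len;
}
```

The toolstack side mirrors this: gethvmcontext is first called with a NULL buffer to learn the total size, then again with an allocated buffer of at least that size.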
 xen/arch/arm/Makefile                  |   1 +
 xen/arch/arm/domctl.c                  |  92 +++++-
 xen/arch/arm/hvm.c                     | 505 ++++++++++++++++++++++++++++++++-
 xen/arch/arm/save.c                    |  66 +++++
 xen/common/Makefile                    |   2 +
 xen/include/asm-arm/hvm/support.h      |  29 ++
 xen/include/public/arch-arm/hvm/save.h | 136 +++++++++
 7 files changed, 826 insertions(+), 5 deletions(-)
 create mode 100644 xen/arch/arm/save.c
 create mode 100644 xen/include/asm-arm/hvm/support.h

diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
index 63e0460..d9a328c 100644
--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -33,6 +33,7 @@ obj-y += hvm.o
 obj-y += device.o
 obj-y += decode.o
 obj-y += processor.o
+obj-y += save.o
 
 #obj-bin-y += ....o
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 45974e7..914de29 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -9,31 +9,115 @@
 #include <xen/lib.h>
 #include <xen/errno.h>
 #include <xen/sched.h>
+#include <xen/hvm/save.h>
+#include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <public/domctl.h>
 
 long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
                     XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
+    long ret = 0;
+    bool_t copyback = 0;
+
     switch ( domctl->cmd )
     {
+    case XEN_DOMCTL_sethvmcontext:
+    {
+        struct hvm_domain_context c = { .size = domctl->u.hvmcontext.size };
+
+        ret = -ENOMEM;
+        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
+            goto sethvmcontext_out;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(c.data, domctl->u.hvmcontext.buffer, c.size) != 0)
+            goto sethvmcontext_out;
+
+        domain_pause(d);
+        ret = hvm_load(d, &c);
+        domain_unpause(d);
+
+    sethvmcontext_out:
+        if ( c.data != NULL )
+            xfree(c.data);
+    }
+    break;
+
+    case XEN_DOMCTL_gethvmcontext:
+    {
+        struct hvm_domain_context c = { 0 };
+
+        ret = -EINVAL;
+
+        c.size = hvm_save_size(d);
+
+        if ( guest_handle_is_null(domctl->u.hvmcontext.buffer) )
+        {
+            /* Client is querying for the correct buffer size */
+            domctl->u.hvmcontext.size = c.size;
+            ret = 0;
+            goto gethvmcontext_out;
+        }
+
+        /* Check that the client has a big enough buffer */
+        ret = -ENOSPC;
+        if ( domctl->u.hvmcontext.size < c.size )
+        {
+            printk("(gethvmcontext) size error: %d and %d\n",
+                   domctl->u.hvmcontext.size, c.size );
+            goto gethvmcontext_out;
+        }
+
+        /* Allocate our own marshalling buffer */
+        ret = -ENOMEM;
+        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
+        {
+            printk("(gethvmcontext) xmalloc_bytes failed: %d\n", c.size );
+            goto gethvmcontext_out;
+        }
+
+        domain_pause(d);
+        ret = hvm_save(d, &c);
+        domain_unpause(d);
+
+        domctl->u.hvmcontext.size = c.cur;
+        if ( copy_to_guest(domctl->u.hvmcontext.buffer, c.data, c.size) != 0 )
+        {
+            printk("(gethvmcontext) copy to guest failed\n");
+            ret = -EFAULT;
+        }
+
+    gethvmcontext_out:
+        copyback = 1;
+
+        if ( c.data != NULL )
+            xfree(c.data);
+    }
+    break;
+
     case XEN_DOMCTL_cacheflush:
     {
         unsigned long s = domctl->u.cacheflush.start_pfn;
         unsigned long e = s + domctl->u.cacheflush.nr_pfns;
 
         if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
-            return -EINVAL;
+            ret = -EINVAL;
 
         if ( e < s )
-            return -EINVAL;
+            ret = -EINVAL;
 
-        return p2m_cache_flush(d, s, e);
+        ret = p2m_cache_flush(d, s, e);
     }
 
     default:
-        return subarch_do_domctl(domctl, d, u_domctl);
+        ret = subarch_do_domctl(domctl, d, u_domctl);
     }
+
+    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
+        ret = -EFAULT;
+
+    return ret;
 }
 
 void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 471c4cd..bfe38f4 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -7,14 +7,15 @@
 
 #include <xsm/xsm.h>
 
+#include <xen/hvm/save.h>
 #include <public/xen.h>
 #include <public/hvm/params.h>
 #include <public/hvm/hvm_op.h>
 
 #include <asm/hypercall.h>
+#include <asm/gic.h>
 
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
-
 {
     long rc = 0;
 
@@ -65,3 +66,505 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     return rc;
 }
+
+static int vgic_irq_rank_save(struct vcpu *v,
+                              struct vgic_rank *ext,
+                              struct vgic_irq_rank *rank)
+{
+    spin_lock(&rank->lock);
+
+    /* Some VGIC registers are not used yet; they are saved for future use. */
+    /* IENABLE, IACTIVE, IPEND,  PENDSGI registers */
+    ext->ienable = rank->ienable;
+    ext->iactive = rank->iactive;
+    ext->ipend = rank->ipend;
+    ext->pendsgi = rank->pendsgi;
+
+    /* ICFG */
+    ext->icfg[0] = rank->icfg[0];
+    ext->icfg[1] = rank->icfg[1];
+
+    /* IPRIORITY */
+    if ( sizeof(rank->ipriority) != sizeof (ext->ipriority) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check ipriority dumping space\n");
+        return -EINVAL;
+    }
+    memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));
+
+    /* ITARGETS */
+    if ( sizeof(rank->itargets) != sizeof (ext->itargets) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
+        return -EINVAL;
+    }
+    memcpy(ext->itargets, rank->itargets, sizeof(rank->itargets));
+
+    spin_unlock(&rank->lock);
+    return 0;
+}
+
+static int vgic_irq_rank_restore(struct vcpu *v,
+                                 struct vgic_irq_rank *rank,
+                                 struct vgic_rank *ext)
+{
+    struct pending_irq *p;
+    unsigned int irq = 0;
+    const unsigned long enable_bits = ext->ienable;
+
+    spin_lock(&rank->lock);
+
+    /* IENABLE, IACTIVE, IPEND,  PENDSGI registers */
+    rank->ienable = ext->ienable;
+    rank->iactive = ext->iactive;
+    rank->ipend = ext->ipend;
+    rank->pendsgi = ext->pendsgi;
+
+    /* ICFG */
+    rank->icfg[0] = ext->icfg[0];
+    rank->icfg[1] = ext->icfg[1];
+
+    /* IPRIORITY */
+    if ( sizeof(rank->ipriority) != sizeof (ext->ipriority) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check ipriority dumping space\n");
+        return -EINVAL;
+    }
+    memcpy(rank->ipriority, ext->ipriority, sizeof(rank->ipriority));
+
+    /* ITARGETS */
+    if ( sizeof(rank->itargets) != sizeof (ext->itargets) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
+        return -EINVAL;
+    }
+    memcpy(rank->itargets, ext->itargets, sizeof(rank->itargets));
+
+    /* Set IRQ status as enabled by iterating through rank->ienable register */
+    while ( (irq = find_next_bit(&enable_bits, 32, irq)) < 32 ) {
+        p = irq_to_pending(v, irq);
+        set_bit(GIC_IRQ_GUEST_ENABLED, &p->status);
+        irq++;
+    }
+
+    spin_unlock(&rank->lock);
+    return 0;
+}
+
+
+static int gic_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_hw_gic ctxt;
+    struct vcpu *v;
+
+    /* Save the state of GICs */
+    for_each_vcpu( d, v )
+    {
+        ctxt.gic_hcr = v->arch.gic_hcr;
+        ctxt.gic_vmcr = v->arch.gic_vmcr;
+        ctxt.gic_apr = v->arch.gic_apr;
+
+        /* Save list registers and masks
+         * NOTE: It is not necessary to save/restore them, but LR state can
+         * have influence on downtime after Live Migration (to be tested)
+         */
+        if ( sizeof(v->arch.gic_lr) > sizeof (ctxt.gic_lr) )
+        {
+             dprintk(XENLOG_G_ERR, "hvm_hw_gic: increase LR dumping space\n");
+             return -EINVAL;
+        }
+        memcpy(ctxt.gic_lr, v->arch.gic_lr, sizeof(v->arch.gic_lr));
+        ctxt.lr_mask = v->arch.lr_mask;
+        ctxt.event_mask = v->arch.event_mask;
+
+        /* Save PPI states (per-CPU), necessary for SMP-enabled guests */
+        if ( vgic_irq_rank_save(v, &ctxt.ppi_state,
+                                &v->arch.vgic.private_irqs) )
+            return 1;
+
+        if ( hvm_save_entry(GIC, v->vcpu_id, h, &ctxt) != 0 )
+            return 1;
+    }
+
+    return 0;
+}
+
+static int gic_load(struct domain *d, hvm_domain_context_t *h)
+{
+    int vcpuid;
+    struct hvm_hw_gic ctxt;
+    struct vcpu *v;
+
+    /* Which vcpu is this? */
+    vcpuid = hvm_load_instance(h);
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%u has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    if ( hvm_load_entry(GIC, h, &ctxt) != 0 )
+        return -EINVAL;
+
+    v->arch.gic_hcr = ctxt.gic_hcr;
+    v->arch.gic_vmcr = ctxt.gic_vmcr;
+    v->arch.gic_apr = ctxt.gic_apr;
+
+    /* Restore list registers and masks */
+    if ( sizeof(v->arch.gic_lr) > sizeof (ctxt.gic_lr) )
+    {
+         dprintk(XENLOG_G_ERR, "hvm_hw_gic: increase LR dumping space\n");
+         return -EINVAL;
+    }
+    memcpy(v->arch.gic_lr, ctxt.gic_lr, sizeof(v->arch.gic_lr));
+    v->arch.lr_mask = ctxt.lr_mask;
+    v->arch.event_mask = ctxt.event_mask;
+
+    /* Restore PPI states */
+    if ( vgic_irq_rank_restore(v, &v->arch.vgic.private_irqs,
+                               &ctxt.ppi_state) )
+        return 1;
+
+    return 0;
+}
+
+HVM_REGISTER_SAVE_RESTORE(GIC, gic_save, gic_load, 1, HVMSR_PER_VCPU);
+
+static int timer_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_hw_timer ctxt;
+    struct vcpu *v;
+    struct vtimer *t;
+    int i;
+
+    /* Save the state of vtimer and ptimer */
+    for_each_vcpu( d, v )
+    {
+        t = &v->arch.virt_timer;
+        for ( i = 0; i < 2; i++ )
+        {
+            ctxt.cval = t->cval;
+            ctxt.ctl = t->ctl;
+            ctxt.vtb_offset = i ? d->arch.phys_timer_base.offset :
+                d->arch.virt_timer_base.offset;
+            ctxt.type = i ? TIMER_TYPE_PHYS : TIMER_TYPE_VIRT;
+
+            if ( hvm_save_entry(A15_TIMER, v->vcpu_id, h, &ctxt) != 0 )
+                return 1;
+
+            t = &v->arch.phys_timer;
+        }
+    }
+
+    return 0;
+}
+
+static int timer_load(struct domain *d, hvm_domain_context_t *h)
+{
+    int vcpuid;
+    struct hvm_hw_timer ctxt;
+    struct vcpu *v;
+    struct vtimer *t = NULL;
+
+    /* Which vcpu is this? */
+    vcpuid = hvm_load_instance(h);
+
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%u has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    if ( hvm_load_entry(A15_TIMER, h, &ctxt) != 0 )
+        return -EINVAL;
+
+    if ( ctxt.type == TIMER_TYPE_VIRT )
+    {
+        t = &v->arch.virt_timer;
+        d->arch.virt_timer_base.offset = ctxt.vtb_offset;
+
+    }
+    else
+    {
+        t = &v->arch.phys_timer;
+        d->arch.phys_timer_base.offset = ctxt.vtb_offset;
+    }
+
+    t->cval = ctxt.cval;
+    t->ctl = ctxt.ctl;
+    t->v = v;
+
+    return 0;
+}
+
+HVM_REGISTER_SAVE_RESTORE(A15_TIMER, timer_save, timer_load, 2, HVMSR_PER_VCPU);
+
+static int cpu_save(struct domain *d, hvm_domain_context_t *h)
+{
+    struct hvm_hw_cpu ctxt;
+    struct vcpu_guest_core_regs c;
+    struct vcpu *v;
+
+    /* Save the state of CPU */
+    for_each_vcpu( d, v )
+    {
+        memset(&ctxt, 0, sizeof(ctxt));
+
+        ctxt.sctlr = v->arch.sctlr;
+        ctxt.ttbr0 = v->arch.ttbr0;
+        ctxt.ttbr1 = v->arch.ttbr1;
+        ctxt.ttbcr = v->arch.ttbcr;
+
+        ctxt.dacr = v->arch.dacr;
+        ctxt.ifsr = v->arch.ifsr;
+#ifdef CONFIG_ARM_32
+        ctxt.ifar = v->arch.ifar;
+        ctxt.dfar = v->arch.dfar;
+        ctxt.dfsr = v->arch.dfsr;
+#else
+        ctxt.far = v->arch.far;
+        ctxt.esr = v->arch.esr;
+#endif
+
+#ifdef CONFIG_ARM_32
+        ctxt.mair0 = v->arch.mair0;
+        ctxt.mair1 = v->arch.mair1;
+#else
+        ctxt.mair0 = v->arch.mair;
+#endif
+        /* Control Registers */
+        ctxt.actlr = v->arch.actlr;
+        ctxt.sctlr = v->arch.sctlr;
+        ctxt.cpacr = v->arch.cpacr;
+
+        ctxt.contextidr = v->arch.contextidr;
+        ctxt.tpidr_el0 = v->arch.tpidr_el0;
+        ctxt.tpidr_el1 = v->arch.tpidr_el1;
+        ctxt.tpidrro_el0 = v->arch.tpidrro_el0;
+
+        /* CP 15 */
+        ctxt.csselr = v->arch.csselr;
+
+        ctxt.afsr0 = v->arch.afsr0;
+        ctxt.afsr1 = v->arch.afsr1;
+        ctxt.vbar = v->arch.vbar;
+        ctxt.par = v->arch.par;
+        ctxt.teecr = v->arch.teecr;
+        ctxt.teehbr = v->arch.teehbr;
+
+#ifdef CONFIG_ARM_32
+        ctxt.joscr = v->arch.joscr;
+        ctxt.jmcr = v->arch.jmcr;
+#endif
+
+        memset(&c, 0, sizeof(c));
+
+        /* get guest core registers */
+        vcpu_regs_hyp_to_user(v, &c);
+
+        ctxt.x0 = c.x0;
+        ctxt.x1 = c.x1;
+        ctxt.x2 = c.x2;
+        ctxt.x3 = c.x3;
+        ctxt.x4 = c.x4;
+        ctxt.x5 = c.x5;
+        ctxt.x6 = c.x6;
+        ctxt.x7 = c.x7;
+        ctxt.x8 = c.x8;
+        ctxt.x9 = c.x9;
+        ctxt.x10 = c.x10;
+        ctxt.x11 = c.x11;
+        ctxt.x12 = c.x12;
+        ctxt.x13 = c.x13;
+        ctxt.x14 = c.x14;
+        ctxt.x15 = c.x15;
+        ctxt.x16 = c.x16;
+        ctxt.x17 = c.x17;
+        ctxt.x18 = c.x18;
+        ctxt.x19 = c.x19;
+        ctxt.x20 = c.x20;
+        ctxt.x21 = c.x21;
+        ctxt.x22 = c.x22;
+        ctxt.x23 = c.x23;
+        ctxt.x24 = c.x24;
+        ctxt.x25 = c.x25;
+        ctxt.x26 = c.x26;
+        ctxt.x27 = c.x27;
+        ctxt.x28 = c.x28;
+        ctxt.x29 = c.x29;
+        ctxt.x30 = c.x30;
+        ctxt.pc64 = c.pc64;
+        ctxt.cpsr = c.cpsr;
+        ctxt.spsr_el1 = c.spsr_el1; /* spsr_svc */
+
+#ifdef CONFIG_ARM_32
+        ctxt.spsr_fiq = c.spsr_fiq;
+        ctxt.spsr_irq = c.spsr_irq;
+        ctxt.spsr_und = c.spsr_und;
+        ctxt.spsr_abt = c.spsr_abt;
+#endif
+#ifdef CONFIG_ARM_64
+        ctxt.sp_el0 = c.sp_el0;
+        ctxt.sp_el1 = c.sp_el1;
+        ctxt.elr_el1 = c.elr_el1;
+#endif
+
+        /* check VFP state size before dumping */
+        if ( sizeof(v->arch.vfp) > sizeof (ctxt.vfp) )
+        {
+            dprintk(XENLOG_G_ERR, "hvm_hw_cpu: increase VFP dumping space\n");
+            return -EINVAL;
+        }
+        memcpy((void*) &ctxt.vfp, (void*) &v->arch.vfp, sizeof(v->arch.vfp));
+
+        ctxt.pause_flags = v->pause_flags;
+
+        if ( hvm_save_entry(VCPU, v->vcpu_id, h, &ctxt) != 0 )
+            return 1;
+    }
+    return 0;
+}
+
+static int cpu_load(struct domain *d, hvm_domain_context_t *h)
+{
+    int vcpuid;
+    struct hvm_hw_cpu ctxt;
+    struct vcpu *v;
+    struct vcpu_guest_core_regs c;
+
+    /* Which vcpu is this? */
+    vcpuid = hvm_load_instance(h);
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    {
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%u has no vcpu%u\n",
+                d->domain_id, vcpuid);
+        return -EINVAL;
+    }
+
+    if ( hvm_load_entry(VCPU, h, &ctxt) != 0 )
+        return -EINVAL;
+
+    v->arch.sctlr = ctxt.sctlr;
+    v->arch.ttbr0 = ctxt.ttbr0;
+    v->arch.ttbr1 = ctxt.ttbr1;
+    v->arch.ttbcr = ctxt.ttbcr;
+
+    v->arch.dacr = ctxt.dacr;
+    v->arch.ifsr = ctxt.ifsr;
+#ifdef CONFIG_ARM_32
+    v->arch.ifar = ctxt.ifar;
+    v->arch.dfar = ctxt.dfar;
+    v->arch.dfsr = ctxt.dfsr;
+#else
+    v->arch.far = ctxt.far;
+    v->arch.esr = ctxt.esr;
+#endif
+
+#ifdef CONFIG_ARM_32
+    v->arch.mair0 = ctxt.mair0;
+    v->arch.mair1 = ctxt.mair1;
+#else
+    v->arch.mair = ctxt.mair0;
+#endif
+
+    /* Control Registers */
+    v->arch.actlr = ctxt.actlr;
+    v->arch.cpacr = ctxt.cpacr;
+    v->arch.contextidr = ctxt.contextidr;
+    v->arch.tpidr_el0 = ctxt.tpidr_el0;
+    v->arch.tpidr_el1 = ctxt.tpidr_el1;
+    v->arch.tpidrro_el0 = ctxt.tpidrro_el0;
+
+    /* CP 15 */
+    v->arch.csselr = ctxt.csselr;
+
+    v->arch.afsr0 = ctxt.afsr0;
+    v->arch.afsr1 = ctxt.afsr1;
+    v->arch.vbar = ctxt.vbar;
+    v->arch.par = ctxt.par;
+    v->arch.teecr = ctxt.teecr;
+    v->arch.teehbr = ctxt.teehbr;
+#ifdef CONFIG_ARM_32
+    v->arch.joscr = ctxt.joscr;
+    v->arch.jmcr = ctxt.jmcr;
+#endif
+
+    /* fill guest core registers */
+    memset(&c, 0, sizeof(c));
+    c.x0 = ctxt.x0;
+    c.x1 = ctxt.x1;
+    c.x2 = ctxt.x2;
+    c.x3 = ctxt.x3;
+    c.x4 = ctxt.x4;
+    c.x5 = ctxt.x5;
+    c.x6 = ctxt.x6;
+    c.x7 = ctxt.x7;
+    c.x8 = ctxt.x8;
+    c.x9 = ctxt.x9;
+    c.x10 = ctxt.x10;
+    c.x11 = ctxt.x11;
+    c.x12 = ctxt.x12;
+    c.x13 = ctxt.x13;
+    c.x14 = ctxt.x14;
+    c.x15 = ctxt.x15;
+    c.x16 = ctxt.x16;
+    c.x17 = ctxt.x17;
+    c.x18 = ctxt.x18;
+    c.x19 = ctxt.x19;
+    c.x20 = ctxt.x20;
+    c.x21 = ctxt.x21;
+    c.x22 = ctxt.x22;
+    c.x23 = ctxt.x23;
+    c.x24 = ctxt.x24;
+    c.x25 = ctxt.x25;
+    c.x26 = ctxt.x26;
+    c.x27 = ctxt.x27;
+    c.x28 = ctxt.x28;
+    c.x29 = ctxt.x29;
+    c.x30 = ctxt.x30;
+    c.pc64 = ctxt.pc64;
+    c.cpsr = ctxt.cpsr;
+    c.spsr_el1 = ctxt.spsr_el1; /* spsr_svc */
+
+#ifdef CONFIG_ARM_32
+    c.spsr_fiq = ctxt.spsr_fiq;
+    c.spsr_irq = ctxt.spsr_irq;
+    c.spsr_und = ctxt.spsr_und;
+    c.spsr_abt = ctxt.spsr_abt;
+#endif
+#ifdef CONFIG_ARM_64
+    c.sp_el0 = ctxt.sp_el0;
+    c.sp_el1 = ctxt.sp_el1;
+    c.elr_el1 = ctxt.elr_el1;
+#endif
+
+    /* set guest core registers */
+    vcpu_regs_user_to_hyp(v, &c);
+
+    if ( sizeof(v->arch.vfp) > sizeof (ctxt.vfp) )
+    {
+        dprintk(XENLOG_G_ERR, "hvm_hw_cpu: increase VFP dumping space\n");
+        return -EINVAL;
+    }
+
+    memcpy(&v->arch.vfp, &ctxt.vfp, sizeof(v->arch.vfp));
+
+    v->is_initialised = 1;
+    v->pause_flags = ctxt.pause_flags;
+
+    return 0;
+}
+
+HVM_REGISTER_SAVE_RESTORE(VCPU, cpu_save, cpu_load, 1, HVMSR_PER_VCPU);
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/arm/save.c b/xen/arch/arm/save.c
new file mode 100644
index 0000000..c923910
--- /dev/null
+++ b/xen/arch/arm/save.c
@@ -0,0 +1,66 @@
+/*
+ * hvm/save.c: Save and restore HVM guest's emulated hardware state for ARM.
+ *
+ * Copyright (c) 2013, Samsung Electronics.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#include <asm/hvm/support.h>
+#include <public/hvm/save.h>
+
+void arch_hvm_save(struct domain *d, struct hvm_save_header *hdr)
+{
+    hdr->cpuid = READ_SYSREG32(MIDR_EL1);
+}
+
+int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
+{
+    uint32_t cpuid;
+
+    if ( hdr->magic != HVM_FILE_MAGIC )
+    {
+        printk(XENLOG_G_ERR "HVM%d restore: bad magic number %#"PRIx32"\n",
+               d->domain_id, hdr->magic);
+        return -1;
+    }
+
+    if ( hdr->version != HVM_FILE_VERSION )
+    {
+        printk(XENLOG_G_ERR "HVM%d restore: unsupported version %u\n",
+               d->domain_id, hdr->version);
+        return -1;
+    }
+
+    cpuid = READ_SYSREG32(MIDR_EL1);
+    if ( hdr->cpuid != cpuid )
+    {
+        printk(XENLOG_G_INFO "HVM%d restore: VM saved on one CPU "
+               "(%#"PRIx32") and restored on another (%#"PRIx32").\n",
+               d->domain_id, hdr->cpuid, cpuid);
+        return -1;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 3683ae3..714a3c4 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -64,6 +64,8 @@ subdir-$(CONFIG_COMPAT) += compat
 
 subdir-$(x86_64) += hvm
 
+subdir-$(CONFIG_ARM) += hvm
+
 subdir-$(coverage) += gcov
 
 subdir-y += libelf
diff --git a/xen/include/asm-arm/hvm/support.h b/xen/include/asm-arm/hvm/support.h
new file mode 100644
index 0000000..8311f2f
--- /dev/null
+++ b/xen/include/asm-arm/hvm/support.h
@@ -0,0 +1,29 @@
+/*
+ * support.h: HVM support routines used by ARMv7 VE.
+ *
+ * Copyright (c) 2012, Citrix Systems
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc., 59 Temple
+ * Place - Suite 330, Boston, MA 02111-1307 USA.
+ */
+
+#ifndef __ASM_ARM_HVM_SUPPORT_H__
+#define __ASM_ARM_HVM_SUPPORT_H__
+
+#include <xen/types.h>
+#include <public/hvm/ioreq.h>
+#include <xen/sched.h>
+#include <xen/hvm/save.h>
+#include <asm/processor.h>
+
+#endif /* __ASM_ARM_HVM_SUPPORT_H__ */
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65..0a98807 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,142 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+#define HVM_FILE_MAGIC   0x92385520
+#define HVM_FILE_VERSION 0x00000001
+
+
+struct hvm_save_header
+{
+    uint32_t magic;             /* Must be HVM_FILE_MAGIC */
+    uint32_t version;           /* File format version */
+    uint64_t changeset;         /* Version of Xen that saved this file */
+    uint32_t cpuid;             /* MIDR_EL1 on the saving machine */
+};
+
+DECLARE_HVM_SAVE_TYPE(HEADER, 1, struct hvm_save_header);
+
+struct vgic_rank
+{
+    uint32_t ienable, iactive, ipend, pendsgi;
+    uint32_t icfg[2];
+    uint32_t ipriority[8];
+    uint32_t itargets[8];
+};
+
+struct hvm_hw_gic
+{
+    uint32_t gic_hcr;
+    uint32_t gic_vmcr;
+    uint32_t gic_apr;
+    uint32_t gic_lr[64];
+    uint64_t event_mask;
+    uint64_t lr_mask;
+    struct vgic_rank ppi_state;
+};
+
+DECLARE_HVM_SAVE_TYPE(GIC, 2, struct hvm_hw_gic);
+
+#define TIMER_TYPE_VIRT 0
+#define TIMER_TYPE_PHYS 1
+
+struct hvm_hw_timer
+{
+    uint64_t vtb_offset;
+    uint32_t ctl;
+    uint64_t cval;
+    uint32_t type;
+};
+
+DECLARE_HVM_SAVE_TYPE(A15_TIMER, 3, struct hvm_hw_timer);
+
+
+struct hvm_hw_cpu
+{
+#ifdef CONFIG_ARM_32
+    uint64_t vfp[34];  /* 32-bit VFP registers */
+#else
+    uint64_t vfp[66];  /* 64-bit VFP registers */
+#endif
+
+    /* Guest core registers */
+    uint64_t x0;     /* r0_usr */
+    uint64_t x1;     /* r1_usr */
+    uint64_t x2;     /* r2_usr */
+    uint64_t x3;     /* r3_usr */
+    uint64_t x4;     /* r4_usr */
+    uint64_t x5;     /* r5_usr */
+    uint64_t x6;     /* r6_usr */
+    uint64_t x7;     /* r7_usr */
+    uint64_t x8;     /* r8_usr */
+    uint64_t x9;     /* r9_usr */
+    uint64_t x10;    /* r10_usr */
+    uint64_t x11;    /* r11_usr */
+    uint64_t x12;    /* r12_usr */
+    uint64_t x13;    /* sp_usr */
+    uint64_t x14;    /* lr_usr; */
+    uint64_t x15;    /* __unused_sp_hyp */
+    uint64_t x16;    /* lr_irq */
+    uint64_t x17;    /* sp_irq */
+    uint64_t x18;    /* lr_svc */
+    uint64_t x19;    /* sp_svc */
+    uint64_t x20;    /* lr_abt */
+    uint64_t x21;    /* sp_abt */
+    uint64_t x22;    /* lr_und */
+    uint64_t x23;    /* sp_und */
+    uint64_t x24;    /* r8_fiq */
+    uint64_t x25;    /* r9_fiq */
+    uint64_t x26;    /* r10_fiq */
+    uint64_t x27;    /* r11_fiq */
+    uint64_t x28;    /* r12_fiq */
+    uint64_t x29;    /* fp,sp_fiq */
+    uint64_t x30;    /* lr_fiq */
+    uint64_t pc64;   /* ELR_EL2 */
+    uint32_t cpsr;   /* SPSR_EL2 */
+    uint32_t spsr_el1;  /*spsr_svc */
+    /* AArch32 guests only */
+    uint32_t spsr_fiq, spsr_irq, spsr_und, spsr_abt;
+    /* AArch64 guests only */
+    uint64_t sp_el0;
+    uint64_t sp_el1, elr_el1;
+
+    uint32_t sctlr, ttbcr;
+    uint64_t ttbr0, ttbr1;
+
+    uint32_t ifar, dfar;
+    uint32_t ifsr, dfsr;
+    uint32_t dacr;
+    uint64_t par;
+
+    uint64_t far;
+    uint64_t esr;
+
+    uint64_t mair0, mair1;
+    uint64_t tpidr_el0;
+    uint64_t tpidr_el1;
+    uint64_t tpidrro_el0;
+    uint64_t vbar;
+
+    /* Control Registers */
+    uint32_t actlr;
+    uint32_t cpacr;
+    uint32_t afsr0, afsr1;
+    uint32_t contextidr;
+    uint32_t teecr, teehbr; /* ThumbEE, 32-bit guests only */
+    uint32_t joscr, jmcr;
+    /* CP 15 */
+    uint32_t csselr;
+
+    unsigned long pause_flags;
+
+};
+
+DECLARE_HVM_SAVE_TYPE(VCPU, 4, struct hvm_hw_cpu);
+
+/*
+ * Largest type-code in use
+ */
+#define HVM_SAVE_CODE_MAX 4
+
 #endif
 
 /*
-- 
1.8.3.2


* [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
  2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-10 17:28   ` Andrew Cooper
  2014-04-11 13:15   ` Julien Grall
  2014-04-10 16:48 ` [PATCH 3/6] xen/arm: Implement do_suspend function Wei Huang
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Jaeyong Yoo <jaeyong.yoo@samsung.com>

This patch implements get_maximum_gpfn using the memory map
information in arch_domain (from the set_memory_map hypercall).

Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
---
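(Editor's note, not part of the patch.) The arithmetic in this patch is simple: guest RAM is assumed to start at GUEST_RAM_BASE and extend for max_pages pages, so the maximum GPFN is the end address shifted down by PAGE_SHIFT. A standalone sketch, with made-up values for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define TOY_PAGE_SHIFT 12  /* 4 KiB pages, as on ARM Xen guests */

/* Highest guest page frame number, mirroring the patch:
 * end  = ram_base + (max_pages << PAGE_SHIFT)
 * gpfn = end >> PAGE_SHIFT */
static uint64_t toy_max_gpfn(uint64_t ram_base, uint64_t max_pages)
{
    uint64_t end = ram_base + (max_pages << TOY_PAGE_SHIFT);
    return end >> TOY_PAGE_SHIFT;
}
```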
 xen/arch/arm/mm.c        | 19 ++++++++++++++++++-
 xen/include/asm-arm/mm.h |  2 ++
 2 files changed, 20 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 362bc8d..14b4686 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -947,7 +947,11 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
 
 unsigned long domain_get_maximum_gpfn(struct domain *d)
 {
-    return -ENOSYS;
+    paddr_t end;
+
+    get_gma_start_end(d, NULL, &end);
+
+    return (unsigned long) (end >> PAGE_SHIFT);
 }
 
 void share_xen_page_with_guest(struct page_info *page,
@@ -1235,6 +1239,19 @@ int is_iomem_page(unsigned long mfn)
         return 1;
     return 0;
 }
+
+/*
+ * Return the start and end addresses of guest RAM
+ */
+void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
+{
+    if ( start )
+        *start = GUEST_RAM_BASE;
+
+    if ( end )
+        *end = GUEST_RAM_BASE + ((paddr_t) d->max_pages << PAGE_SHIFT);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index b8d4e7d..341493a 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -341,6 +341,8 @@ static inline void put_page_and_type(struct page_info *page)
     put_page(page);
 }
 
+void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end);
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
-- 
1.8.3.2

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCH 3/6] xen/arm: Implement do_suspend function
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
  2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
  2014-04-10 16:48 ` [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-11 14:10   ` Julien Grall
  2014-04-10 16:48 ` [PATCH 4/6] xen/arm: Implement VLPT for guest p2m mapping in live migration Wei Huang
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Jaeyong Yoo <jaeyong.yoo@samsung.com>

Make sched_op in do_suspend (drivers/xen/manage.c) return 0 on
successful suspend.
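For context, this is what the patched return value means to the guest. The helper below is purely illustrative (not part of the patch) and just encodes the usual Xen suspend convention, under which a non-zero return from the suspend hypercall means the suspend was cancelled:

```c
#include <assert.h>
#include <string.h>

/* Illustrative only: after the toolstack patches r0 in the saved vcpu
 * context, the guest sees that value as the return code of its suspend
 * hypercall (0 = resumed from save/restore, 1 = suspend cancelled). */
static const char *interpret_suspend_rc(int r0)
{
    return r0 == 0 ? "resumed from save/restore" : "suspend cancelled";
}
```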

Signed-off-by: Alexey Sokolov <sokolov.a@samsung.com>
---
 tools/libxc/xc_resume.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
index 18b4818..9315eb8 100644
--- a/tools/libxc/xc_resume.c
+++ b/tools/libxc/xc_resume.c
@@ -73,6 +73,31 @@ static int modify_returncode(xc_interface *xch, uint32_t domid)
     return 0;
 }
 
+#elif defined(__arm__)
+
+static int modify_returncode(xc_interface *xch, uint32_t domid)
+{
+    vcpu_guest_context_any_t ctxt;
+    xc_dominfo_t info;
+    int rc;
+
+    if ( xc_domain_getinfo(xch, domid, 1, &info) != 1 )
+    {
+        PERROR("Could not get domain info");
+        return -1;
+    }
+
+    if ( (rc = xc_vcpu_getcontext(xch, domid, 0, &ctxt)) != 0 )
+        return rc;
+
+    ctxt.c.user_regs.r0_usr = 1;
+
+    if ( (rc = xc_vcpu_setcontext(xch, domid, 0, &ctxt)) != 0 )
+        return rc;
+
+    return 0;
+}
+
 #else
 
 static int modify_returncode(xc_interface *xch, uint32_t domid)
-- 
1.8.3.2


* [PATCH 4/6] xen/arm: Implement VLPT for guest p2m mapping in live migration
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
                   ` (2 preceding siblings ...)
  2014-04-10 16:48 ` [PATCH 3/6] xen/arm: Implement do_suspend function Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-10 16:48 ` [PATCH 5/6] xen/arm: Implement hypercall for dirty page tracing Wei Huang
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Jaeyong Yoo <jaeyong.yoo@samsung.com>

This patch implements VLPT (virtual-linear page table) for fast access
to the 3rd-level PTEs of the guest p2m. For more information about VLPT,
see http://www.technovelty.org/linux/virtual-linear-page-table.html.

When creating a VLPT mapping, we simply copy the guest p2m's 1st-level
PTEs into Xen's 2nd-level PTEs. The mapping then becomes:
      xen's 1st PTE -->
      xen's 2nd PTE (which is the same as 1st PTE of guest p2m) -->
      guest p2m's 2nd PTE -->
      guest p2m's 3rd PTE (the memory contents where the vlpt points)

This mapping is used in dirty-page tracing. When a domU write fault is
trapped by Xen, Xen can immediately locate the 3rd-level PTE of the
guest p2m.
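A single array index is enough because the VLPT flattens the guest p2m's 3rd level into one linear run of 64-bit descriptors. The sketch below mirrors the arithmetic of the patch's get_vlpt_3lvl_pte(); the VIRT_LIN_P2M_START value is taken from the patch's config.h, and the 8-byte LPAE descriptor size is assumed:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT         12
#define PTE_SIZE           8              /* LPAE descriptors are 64-bit */
#define VIRT_LIN_P2M_START 0x08000000UL   /* from the patch's config.h */

/* Virtual address of the 3rd-level PTE that maps a given GPA: one
 * shift by PAGE_SHIFT turns the GPA into an index into the flat
 * linear table. */
static uintptr_t vlpt_pte_va(uint64_t gpa)
{
    return VIRT_LIN_P2M_START + (uintptr_t)(gpa >> PAGE_SHIFT) * PTE_SIZE;
}
```

Two adjacent guest pages thus resolve to PTEs exactly one descriptor apart, which is what makes the fault-time lookup a constant-time array access.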

The following link shows a performance comparison between VLPT and
typical page-table walking when handling a dirty page:
http://lists.xen.org/archives/html/xen-devel/2013-08/msg01503.html

Signed-off-by: Jaeyong Yoo <jaeyong.yoo@samsung.com>
---
 xen/arch/arm/domain.c            |   5 ++
 xen/arch/arm/mm.c                | 116 +++++++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/arm32/page.h |  23 ++++----
 xen/include/asm-arm/config.h     |   9 +++
 xen/include/asm-arm/domain.h     |   7 +++
 xen/include/asm-arm/mm.h         |  16 ++++++
 6 files changed, 166 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index b125857..3f04a77 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -502,6 +502,11 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     /* Default the virtual ID to match the physical */
     d->arch.vpidr = boot_cpu_data.midr.bits;
 
+    d->arch.dirty.second_lvl_start = 0;
+    d->arch.dirty.second_lvl_end = 0;
+    d->arch.dirty.second_lvl[0] = NULL;
+    d->arch.dirty.second_lvl[1] = NULL;
+
     clear_page(d->shared_info);
     share_xen_page_with_guest(
         virt_to_page(d->shared_info), d, XENSHARE_writable);
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 14b4686..df9d428 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1252,6 +1252,122 @@ void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
         *end = GUEST_RAM_BASE + ((paddr_t) d->max_pages << PAGE_SHIFT);
 }
 
+/* flush the vlpt area */
+void flush_vlpt(struct domain *d)
+{
+    int flush_size;
+    flush_size = (d->arch.dirty.second_lvl_end -
+                  d->arch.dirty.second_lvl_start) << SECOND_SHIFT;
+
+    /* flushing the 3rd level mapping */
+    flush_xen_data_tlb_range_va(d->arch.dirty.second_lvl_start << SECOND_SHIFT,
+                                flush_size);
+}
+
+/* restore the xen page table for vlpt mapping for domain d */
+void restore_vlpt(struct domain *d)
+{
+    int i;
+
+    dsb(sy);
+
+    for ( i = d->arch.dirty.second_lvl_start;
+          i < d->arch.dirty.second_lvl_end;
+          ++i )
+    {
+        int k = i % LPAE_ENTRIES;
+        int l = i / LPAE_ENTRIES;
+
+        if ( xen_second[i].bits != d->arch.dirty.second_lvl[l][k].bits )
+        {
+            write_pte(&xen_second[i], d->arch.dirty.second_lvl[l][k]);
+            flush_xen_data_tlb_range_va(i << SECOND_SHIFT, 1 << SECOND_SHIFT);
+        }
+    }
+
+    dsb(sy);
+    isb();
+}
+
+/* setting up the xen page table for vlpt mapping for domain d */
+int prepare_vlpt(struct domain *d)
+{
+    int xen_second_linear_base;
+    int gp2m_start_index, gp2m_end_index;
+    struct p2m_domain *p2m = &d->arch.p2m;
+    struct page_info *second_lvl_page;
+    paddr_t gma_start = 0;
+    paddr_t gma_end = 0;
+    lpae_t *first[2];
+    int i;
+    uint64_t required, avail = VIRT_LIN_P2M_END - VIRT_LIN_P2M_START;
+
+    get_gma_start_end(d, &gma_start, &gma_end);
+    required = (gma_end - gma_start) >> LPAE_SHIFT;
+
+    if ( required > avail )
+    {
+        dprintk(XENLOG_ERR, "Available VLPT is too small for domU guest "
+                "(avail: %llx, required: %llx)\n", (unsigned long long)avail,
+                (unsigned long long)required);
+        return -ENOMEM;
+    }
+
+    xen_second_linear_base = second_linear_offset(VIRT_LIN_P2M_START);
+
+    gp2m_start_index = gma_start >> FIRST_SHIFT;
+    gp2m_end_index = (gma_end >> FIRST_SHIFT) + 1;
+
+    if ( xen_second_linear_base + gp2m_end_index >= LPAE_ENTRIES * 2 )
+    {
+        dprintk(XENLOG_ERR, "Xen second page is too small for domU VLPT\n");
+        return -ENOMEM;
+    }
+
+    second_lvl_page = alloc_domheap_pages(NULL, 1, 0);
+    if ( second_lvl_page == NULL )
+        return -ENOMEM;
+
+    /* First level p2m is 2 consecutive pages */
+    d->arch.dirty.second_lvl[0] = map_domain_page_global(
+        page_to_mfn(second_lvl_page) );
+    d->arch.dirty.second_lvl[1] = map_domain_page_global(
+        page_to_mfn(second_lvl_page+1) );
+
+    first[0] = __map_domain_page(p2m->first_level);
+    first[1] = __map_domain_page(p2m->first_level+1);
+
+    for ( i = gp2m_start_index; i < gp2m_end_index; ++i )
+    {
+        int k = i % LPAE_ENTRIES;
+        int l = i / LPAE_ENTRIES;
+        int k2 = (xen_second_linear_base + i) % LPAE_ENTRIES;
+        int l2 = (xen_second_linear_base + i) / LPAE_ENTRIES;
+
+        write_pte(&xen_second[xen_second_linear_base+i], first[l][k]);
+
+        /* we copy the mapping into the domain's structure as a reference
+         * in case of a context switch (used in restore_vlpt) */
+        d->arch.dirty.second_lvl[l2][k2] = first[l][k];
+    }
+    unmap_domain_page(first[0]);
+    unmap_domain_page(first[1]);
+
+    /* storing the start and end index */
+    d->arch.dirty.second_lvl_start = xen_second_linear_base + gp2m_start_index;
+    d->arch.dirty.second_lvl_end = xen_second_linear_base + gp2m_end_index;
+
+    flush_vlpt(d);
+
+    return 0;
+}
+
+void cleanup_vlpt(struct domain *d)
+{
+    /* First level p2m is 2 consecutive pages */
+    unmap_domain_page_global(d->arch.dirty.second_lvl[0]);
+    unmap_domain_page_global(d->arch.dirty.second_lvl[1]);
+}
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/arm32/page.h b/xen/include/asm-arm/arm32/page.h
index 4abb281..feca06c 100644
--- a/xen/include/asm-arm/arm32/page.h
+++ b/xen/include/asm-arm/arm32/page.h
@@ -3,22 +3,25 @@
 
 #ifndef __ASSEMBLY__
 
-/* Write a pagetable entry.
- *
- * If the table entry is changing a text mapping, it is responsibility
- * of the caller to issue an ISB after write_pte.
- */
-static inline void write_pte(lpae_t *p, lpae_t pte)
+/* Write a pagetable entry. All necessary barriers are responsibility of
+ * the caller */
+static inline void __write_pte(lpae_t *p, lpae_t pte)
 {
     asm volatile (
-        /* Ensure any writes have completed with the old mappings. */
-        "dsb;"
-        /* Safely write the entry (STRD is atomic on CPUs that support LPAE) */
+        /* safely write the entry (STRD is atomic on CPUs that support LPAE) */
         "strd %0, %H0, [%1];"
-        "dsb;"
         : : "r" (pte.bits), "r" (p) : "memory");
 }
 
+/* Write a pagetable entry with dsb barriers before and after the
+ * write. */
+static inline void write_pte(lpae_t *p, lpae_t pte)
+{
+    dsb();
+    __write_pte(p, pte);
+    dsb();
+}
+
 /* Inline ASM to flush dcache on register R (may be an inline asm operand) */
 #define __clean_xen_dcache_one(R) STORE_CP32(R, DCCMVAC)
 
diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
index 5b7b1a8..95e84bd 100644
--- a/xen/include/asm-arm/config.h
+++ b/xen/include/asm-arm/config.h
@@ -87,6 +87,7 @@
  *   0  -   8M   <COMMON>
  *
  *  32M - 128M   Frametable: 24 bytes per page for 16GB of RAM
+ * 128M - 256M   Virtual-linear mapping to P2M table
  * 256M -   1G   VMAP: ioremap and early_ioremap use this virtual address
  *                    space
  *
@@ -124,7 +125,9 @@
 #define CONFIG_SEPARATE_XENHEAP 1
 
 #define FRAMETABLE_VIRT_START  _AT(vaddr_t,0x02000000)
+#define VIRT_LIN_P2M_START     _AT(vaddr_t,0x08000000)
 #define VMAP_VIRT_START  _AT(vaddr_t,0x10000000)
+#define VIRT_LIN_P2M_END       VMAP_VIRT_START
 #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
 #define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
 #define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
@@ -157,6 +160,12 @@
 
 #define HYPERVISOR_VIRT_END    DIRECTMAP_VIRT_END
 
+/* Temporary definition for VIRT_LIN_P2M_START and VIRT_LIN_P2M_END
+ * TODO: Needs evaluation!!!!
+ */
+#define VIRT_LIN_P2M_START     _AT(vaddr_t, 0x08000000)
+#define VIRT_LIN_P2M_END       VMAP_VIRT_START
+
 #endif
 
 /* Fixmap slots */
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 28c359a..5321bd6 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -161,6 +161,13 @@ struct arch_domain
         spinlock_t                  lock;
     } vuart;
 
+    /* dirty-page tracing */
+    struct {
+        volatile int second_lvl_start;   /* for context switch */
+        volatile int second_lvl_end;
+        lpae_t *second_lvl[2];           /* copy of guest p2m's first */
+    } dirty;
+
     unsigned int evtchn_irq;
 }  __cacheline_aligned;
 
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 341493a..75c27fb 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -4,6 +4,7 @@
 #include <xen/config.h>
 #include <xen/kernel.h>
 #include <asm/page.h>
+#include <asm/config.h>
 #include <public/xen.h>
 
 /* Align Xen to a 2 MiB boundary. */
@@ -342,6 +343,21 @@ static inline void put_page_and_type(struct page_info *page)
 }
 
 void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end);
+int prepare_vlpt(struct domain *d);
+void cleanup_vlpt(struct domain *d);
+void restore_vlpt(struct domain *d);
+
+/* Calculate Xen's virtual address for accessing the leaf PTE of
+ * a given address (GPA) */
+static inline lpae_t * get_vlpt_3lvl_pte(paddr_t addr)
+{
+    lpae_t *table = (lpae_t *)VIRT_LIN_P2M_START;
+
+    /* Since we slotted the guest's first p2m page table into Xen's
+     * second page table, one shift is enough for calculating the
+     * index of the guest p2m table entry */
+    return &table[addr >> PAGE_SHIFT];
+}
 
 #endif /*  __ARCH_ARM_MM__ */
 /*
-- 
1.8.3.2


* [PATCH 5/6] xen/arm: Implement hypercall for dirty page tracing
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
                   ` (3 preceding siblings ...)
  2014-04-10 16:48 ` [PATCH 4/6] xen/arm: Implement VLPT for guest p2m mapping in live migration Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-10 16:48 ` [PATCH 6/6] xen/arm: Implement toolstack for xl restore/save and migrate Wei Huang
  2014-04-11 14:15 ` [PATCH 0/6] xen/arm: Xen save/restore/live migration support Julien Grall
  6 siblings, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Jaeyong Yoo <jaeyong.yoo@samsung.com>

This patch adds hypercall for shadow operations, including enable/disable
and clean/peek dirty page bitmap.

The design consists of two parts: dirty-page detection and saving. For
detection, we set up the guest p2m's leaf PTEs as read-only; whenever
the guest tries to write, a permission fault is raised and traps into
Xen. The permission-faulted GPA must be recorded for the toolstack,
which checks which pages are dirty; for this purpose, Xen temporarily
saves the GPAs into a bitmap.
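The bitmap bookkeeping splits the faulting GPA into a bitmap page and a bit within that page, as mark_dirty_bitmap() in the patch does. A standalone sketch of that index arithmetic (4K pages assumed; ram_base is the guest RAM start from get_gma_start_end):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT          12
#define BITS_PER_PAGE_SHIFT (PAGE_SHIFT + 3)  /* each 4K page holds 2^15 bits */

/* Which bitmap page, and which bit inside it, records a dirty GPA. */
static void dirty_index(uint64_t gpa, uint64_t ram_base,
                        unsigned int *page_index, unsigned int *bit_index)
{
    uint64_t pfn = (gpa - ram_base) >> PAGE_SHIFT;

    *page_index = (unsigned int)(pfn >> BITS_PER_PAGE_SHIFT);
    *bit_index  = (unsigned int)(pfn & ((1UL << BITS_PER_PAGE_SHIFT) - 1));
}
```

With 64 such pages (MAX_DIRTY_BITMAP_PAGES), 64 * 2^15 bits cover 2^21 pages of 4K each, i.e. 8GB of guest RAM, matching the limit declared in domain.h.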

Signed-off-by: Jaeyong Yoo <jaeyong.yoo@samsung.com>
Signed-off-by: Wei Huang <w1.huang@samsung.com>
---
 xen/arch/arm/domain.c           |  14 +++
 xen/arch/arm/domctl.c           |   9 ++
 xen/arch/arm/mm.c               | 105 +++++++++++++++++++-
 xen/arch/arm/p2m.c              | 208 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/arm/traps.c            |   9 ++
 xen/include/asm-arm/domain.h    |   7 ++
 xen/include/asm-arm/mm.h        |   8 ++
 xen/include/asm-arm/p2m.h       |   8 +-
 xen/include/asm-arm/processor.h |   2 +
 9 files changed, 368 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 3f04a77..d2531ed 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -207,6 +207,12 @@ static void ctxt_switch_to(struct vcpu *n)
 
     isb();
 
+    /* Dirty-page tracing
+     * NB: How do we consider SMP case?
+     */
+    if ( n->domain->arch.dirty.mode )
+        restore_vlpt(n->domain);
+
     /* This could trigger a hardware interrupt from the virtual
      * timer. The interrupt needs to be injected into the guest. */
     virt_timer_restore(n);
@@ -502,11 +508,19 @@ int arch_domain_create(struct domain *d, unsigned int domcr_flags)
     /* Default the virtual ID to match the physical */
     d->arch.vpidr = boot_cpu_data.midr.bits;
 
+    /* init for dirty-page tracing */
+    d->arch.dirty.count = 0;
+    d->arch.dirty.mode = 0;
+    spin_lock_init(&d->arch.dirty.lock);
+
     d->arch.dirty.second_lvl_start = 0;
     d->arch.dirty.second_lvl_end = 0;
     d->arch.dirty.second_lvl[0] = NULL;
     d->arch.dirty.second_lvl[1] = NULL;
 
+    memset(d->arch.dirty.bitmap, 0, sizeof(d->arch.dirty.bitmap));
+    d->arch.dirty.bitmap_pages = 0;
+
     clear_page(d->shared_info);
     share_xen_page_with_guest(
         virt_to_page(d->shared_info), d, XENSHARE_writable);
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index 914de29..e67381b 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -95,6 +95,15 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
             xfree(c.data);
     }
     break;
+    case XEN_DOMCTL_shadow_op:
+    {
+        domain_pause(d);
+        ret = dirty_mode_op(d, &domctl->u.shadow_op);
+        domain_unpause(d);
+
+        copyback = 1;
+    }
+    break;
 
     case XEN_DOMCTL_cacheflush:
     {
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index df9d428..ea0b051 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -865,7 +865,6 @@ void destroy_xen_mappings(unsigned long v, unsigned long e)
     create_xen_entries(REMOVE, v, 0, (e - v) >> PAGE_SHIFT, 0);
 }
 
-enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
 static void set_pte_flags_on_range(const char *p, unsigned long l, enum mg mg)
 {
     lpae_t pte;
@@ -1240,9 +1239,64 @@ int is_iomem_page(unsigned long mfn)
     return 0;
 }
 
+
 /*
  * Return start and end addresses of guest
  */
+static inline void mark_dirty_bitmap(struct domain *d, paddr_t addr)
+{
+    paddr_t ram_base = (paddr_t) GUEST_RAM_BASE;
+    int bit_index = PFN_DOWN(addr - ram_base);
+    int page_index = bit_index >> (PAGE_SHIFT + 3);
+    int bit_index_residual = bit_index & ((1ul << (PAGE_SHIFT + 3)) - 1);
+
+    set_bit(bit_index_residual, d->arch.dirty.bitmap[page_index]);
+}
+
+/* Routine for dirty-page tracing
+ *
+ * On first write, it page faults, its entry is changed to read-write,
+ * and on retry the write succeeds. For locating p2m of the faulting entry,
+ * we use virtual-linear page table.
+ *
+ * Returns zero if addr is not valid or dirty mode is not set
+ */
+int handle_page_fault(struct domain *d, paddr_t addr)
+{
+
+    lpae_t *vlp2m_pte = 0;
+    paddr_t gma_start = 0;
+    paddr_t gma_end = 0;
+
+    if ( !d->arch.dirty.mode )
+        return 0;
+    get_gma_start_end(d, &gma_start, &gma_end);
+
+    /* Ensure that addr is inside guest's RAM */
+    if ( addr < gma_start || addr > gma_end )
+        return 0;
+
+    vlp2m_pte = get_vlpt_3lvl_pte(addr);
+    if ( vlp2m_pte->p2m.valid && vlp2m_pte->p2m.write == 0 &&
+         vlp2m_pte->p2m.type == p2m_ram_logdirty )
+    {
+        lpae_t pte = *vlp2m_pte;
+        pte.p2m.write = 1;
+        write_pte(vlp2m_pte, pte);
+        flush_tlb_local();
+
+        /* only necessary to lock between get-dirty bitmap and mark dirty
+         * bitmap. If get-dirty bitmap happens immediately before this
+         * lock, the corresponding dirty-page would be marked at the next
+         * round of get-dirty bitmap */
+        spin_lock(&d->arch.dirty.lock);
+        mark_dirty_bitmap(d, addr);
+        spin_unlock(&d->arch.dirty.lock);
+    }
+
+    return 1;
+}
+
 void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
 {
     if ( start )
@@ -1368,6 +1422,55 @@ void cleanup_vlpt(struct domain *d)
     unmap_domain_page_global(d->arch.dirty.second_lvl[0]);
     unmap_domain_page_global(d->arch.dirty.second_lvl[1]);
 }
+
+int prepare_bitmap(struct domain *d)
+{
+    paddr_t gma_start = 0;
+    paddr_t gma_end = 0;
+    int nr_bytes;
+    int nr_pages;
+    int i;
+
+    get_gma_start_end(d, &gma_start, &gma_end);
+
+    nr_bytes = (PFN_DOWN(gma_end - gma_start) + 7) / 8;
+    nr_pages = (nr_bytes + PAGE_SIZE - 1) / PAGE_SIZE;
+
+    BUG_ON( nr_pages > MAX_DIRTY_BITMAP_PAGES );
+
+    for ( i = 0; i < nr_pages; ++i )
+    {
+        struct page_info *page;
+        page = alloc_domheap_page(NULL, 0);
+        if ( page == NULL )
+            goto cleanup_on_failure;
+
+        d->arch.dirty.bitmap[i] = map_domain_page_global(__page_to_mfn(page));
+        clear_page(d->arch.dirty.bitmap[i]);
+    }
+
+    d->arch.dirty.bitmap_pages = nr_pages;
+    return 0;
+
+cleanup_on_failure:
+    nr_pages = i;
+    for ( i = 0; i < nr_pages; ++i )
+    {
+        unmap_domain_page_global(d->arch.dirty.bitmap[i]);
+    }
+
+    return -ENOMEM;
+}
+
+void cleanup_bitmap(struct domain *d)
+{
+    int i;
+    for ( i = 0; i < d->arch.dirty.bitmap_pages; ++i )
+    {
+        unmap_domain_page_global(d->arch.dirty.bitmap[i]);
+    }
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 403fd89..2925f80 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -6,6 +6,8 @@
 #include <xen/bitops.h>
 #include <asm/flushtlb.h>
 #include <asm/gic.h>
+#include <xen/guest_access.h>
+#include <xen/pfn.h>
 #include <asm/event.h>
 #include <asm/hardirq.h>
 #include <asm/page.h>
@@ -208,6 +210,7 @@ static lpae_t mfn_to_p2m_entry(unsigned long mfn, unsigned int mattr,
         break;
 
     case p2m_ram_ro:
+    case p2m_ram_logdirty:
         e.p2m.xn = 0;
         e.p2m.write = 0;
         break;
@@ -261,6 +264,10 @@ static int p2m_create_table(struct domain *d,
 
     pte = mfn_to_p2m_entry(page_to_mfn(page), MATTR_MEM, p2m_invalid);
 
+    /* mark the write bit (page table's case, ro bit) as 0
+     * so, it is writable in case of vlpt access */
+    pte.pt.ro = 0;
+
     write_pte(entry, pte);
 
     return 0;
@@ -697,6 +704,207 @@ unsigned long gmfn_to_mfn(struct domain *d, unsigned long gpfn)
     return p >> PAGE_SHIFT;
 }
 
+/* Change types across all p2m entries in a domain */
+void p2m_change_entry_type_global(struct domain *d, enum mg nt)
+{
+    struct p2m_domain *p2m = &d->arch.p2m;
+    paddr_t ram_base;
+    int i1, i2, i3;
+    int first_index, second_index, third_index;
+    lpae_t *first = __map_domain_page(p2m->first_level);
+    lpae_t pte, *second = NULL, *third = NULL;
+
+    get_gma_start_end(d, &ram_base, NULL);
+
+    first_index = first_table_offset((uint64_t)ram_base);
+    second_index = second_table_offset((uint64_t)ram_base);
+    third_index = third_table_offset((uint64_t)ram_base);
+
+    BUG_ON( !first && "Can't map first level p2m." );
+
+    spin_lock(&p2m->lock);
+
+    for ( i1 = first_index; i1 < LPAE_ENTRIES*2; ++i1 )
+    {
+        lpae_walk_t first_pte = first[i1].walk;
+        if ( !first_pte.valid || !first_pte.table )
+            goto out;
+
+        second = map_domain_page(first_pte.base);
+        BUG_ON( !second && "Can't map second level p2m.");
+        for ( i2 = second_index; i2 < LPAE_ENTRIES; ++i2 )
+        {
+            lpae_walk_t second_pte = second[i2].walk;
+
+            if ( !second_pte.valid || !second_pte.table )
+                goto out;
+
+            third = map_domain_page(second_pte.base);
+            BUG_ON( !third && "Can't map third level p2m.");
+
+            for ( i3 = third_index; i3 < LPAE_ENTRIES; ++i3 )
+            {
+                lpae_walk_t third_pte = third[i3].walk;
+                if ( !third_pte.valid )
+                    goto out;
+
+                pte = third[i3];
+                if ( nt == mg_ro )
+                {
+                    if ( pte.p2m.write == 1 )
+                    {
+                        pte.p2m.write = 0;
+                        pte.p2m.type = p2m_ram_logdirty;
+                    }
+                    else
+                    {
+                        /* reuse avail bit as an indicator of 'actual'
+                         * read-only */
+                        pte.p2m.type = p2m_ram_rw;
+                    }
+                }
+                else if ( nt == mg_rw )
+                {
+                    if ( pte.p2m.write == 0 &&
+                         pte.p2m.type == p2m_ram_logdirty )
+                    {
+                        pte.p2m.write = 1;
+                    }
+                }
+                write_pte(&third[i3], pte);
+            }
+            unmap_domain_page(third);
+
+            third = NULL;
+            third_index = 0;
+        }
+        unmap_domain_page(second);
+
+        second = NULL;
+        second_index = 0;
+        third_index = 0;
+    }
+
+out:
+    flush_tlb_all_local();
+    if ( third ) unmap_domain_page(third);
+    if ( second ) unmap_domain_page(second);
+    if ( first ) unmap_domain_page(first);
+
+    spin_unlock(&p2m->lock);
+}
+
+/* Read a domain's log-dirty bitmap and stats.
+ * If the operation is a CLEAN, clear the bitmap and stats. */
+int log_dirty_op(struct domain *d, xen_domctl_shadow_op_t *sc)
+{
+    int peek = 1;
+    int i;
+    int bitmap_size;
+    paddr_t gma_start, gma_end;
+
+    /* this hypercall is called from domain 0, and we don't know which guest's
+     * vlpt is mapped in xen_second, so, to be sure, we restore vlpt here */
+    restore_vlpt(d);
+
+    get_gma_start_end(d, &gma_start, &gma_end);
+    bitmap_size = (gma_end - gma_start) / 8;
+
+    if ( guest_handle_is_null(sc->dirty_bitmap) )
+    {
+        peek = 0;
+    }
+    else
+    {
+        spin_lock(&d->arch.dirty.lock);
+        for ( i = 0; i < d->arch.dirty.bitmap_pages; ++i )
+        {
+            int j = 0;
+            uint8_t *bitmap;
+            copy_to_guest_offset(sc->dirty_bitmap, i * PAGE_SIZE,
+                                 d->arch.dirty.bitmap[i],
+                                 bitmap_size < PAGE_SIZE ? bitmap_size :
+                                                           PAGE_SIZE);
+            bitmap_size -= PAGE_SIZE;
+
+            /* set p2m page table read-only */
+            bitmap = d->arch.dirty.bitmap[i];
+            while ((j = find_next_bit((const long unsigned int *)bitmap,
+                                      PAGE_SIZE*8, j)) < PAGE_SIZE*8)
+            {
+                lpae_t *vlpt;
+                paddr_t addr = gma_start +
+                               (i << (2*PAGE_SHIFT+3)) +
+                               (j << PAGE_SHIFT);
+                vlpt = get_vlpt_3lvl_pte(addr);
+                vlpt->p2m.write = 0;
+                j++;
+            }
+        }
+
+        if ( sc->op == XEN_DOMCTL_SHADOW_OP_CLEAN )
+        {
+            for ( i = 0; i < d->arch.dirty.bitmap_pages; ++i )
+            {
+                clear_page(d->arch.dirty.bitmap[i]);
+            }
+        }
+
+        spin_unlock(&d->arch.dirty.lock);
+        flush_tlb_local();
+    }
+
+    sc->stats.dirty_count = d->arch.dirty.count;
+
+    return 0;
+}
+
+long dirty_mode_op(struct domain *d, xen_domctl_shadow_op_t *sc)
+{
+    long ret = 0;
+    switch (sc->op)
+    {
+        case XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY:
+        case XEN_DOMCTL_SHADOW_OP_OFF:
+        {
+            enum mg nt = sc->op == XEN_DOMCTL_SHADOW_OP_OFF ? mg_rw : mg_ro;
+
+            d->arch.dirty.mode = sc->op == XEN_DOMCTL_SHADOW_OP_OFF ? 0 : 1;
+            p2m_change_entry_type_global(d, nt);
+
+            if ( sc->op == XEN_DOMCTL_SHADOW_OP_OFF )
+            {
+                cleanup_vlpt(d);
+                cleanup_bitmap(d);
+            }
+            else
+            {
+                if ( (ret = prepare_vlpt(d)) )
+                   return ret;
+
+                if ( (ret = prepare_bitmap(d)) )
+                {
+                   /* in case of failure, we have to cleanup vlpt */
+                   cleanup_vlpt(d);
+                   return ret;
+                }
+            }
+        }
+        break;
+
+        case XEN_DOMCTL_SHADOW_OP_CLEAN:
+        case XEN_DOMCTL_SHADOW_OP_PEEK:
+        {
+            ret = log_dirty_op(d, sc);
+        }
+        break;
+
+        default:
+            return -ENOSYS;
+    }
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index a7edc4e..42c8b75 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1491,6 +1491,8 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     struct hsr_dabt dabt = hsr.dabt;
     int rc;
     mmio_info_t info;
+    int page_fault = ( (dabt.dfsc & FSC_MASK) ==
+                          (FSC_FLT_PERM | FSC_3D_LEVEL) && dabt.write );
 
     if ( !check_conditional_instr(regs, hsr) )
     {
@@ -1512,6 +1514,13 @@ static void do_trap_data_abort_guest(struct cpu_user_regs *regs,
     if ( rc == -EFAULT )
         goto bad_data_abort;
 
+    /* domU page fault handling for guest live migration */
+    /* dabt.valid can be 0 here */
+    if ( page_fault && handle_page_fault(current->domain, info.gpa) )
+    {
+        /* Do not modify pc after page fault to repeat memory operation */
+        return;
+    }
     /* XXX: Decode the instruction if ISS is not valid */
     if ( !dabt.valid )
         goto bad_data_abort;
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 5321bd6..99f9f51 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -163,9 +163,16 @@ struct arch_domain
 
     /* dirty-page tracing */
     struct {
+#define MAX_DIRTY_BITMAP_PAGES 64        /* support up to 8GB of guest memory */
+        spinlock_t lock;                 /* protect the dirty bitmap */
+        volatile int mode;               /* 1 if dirty pages tracing enabled */
+        volatile unsigned int count;     /* dirty pages counter */
         volatile int second_lvl_start;   /* for context switch */
         volatile int second_lvl_end;
         lpae_t *second_lvl[2];           /* copy of guest p2m's first */
+        /* dirty bitmap */
+        uint8_t *bitmap[MAX_DIRTY_BITMAP_PAGES];
+        int bitmap_pages;                /* number of dirty bitmap pages */
     } dirty;
 
     unsigned int evtchn_irq;
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 75c27fb..6733021 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -342,11 +342,19 @@ static inline void put_page_and_type(struct page_info *page)
     put_page(page);
 }
 
+enum mg { mg_clear, mg_ro, mg_rw, mg_rx };
+
+/* routine for dirty-page tracing */
+int handle_page_fault(struct domain *d, paddr_t addr);
+
 void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end);
 int prepare_vlpt(struct domain *d);
 void cleanup_vlpt(struct domain *d);
 void restore_vlpt(struct domain *d);
 
+int prepare_bitmap(struct domain *d);
+void cleanup_bitmap(struct domain *d);
+
 /* Calculate Xen's virtual address for accessing the leaf PTE of
  * a given address (GPA) */
 static inline lpae_t * get_vlpt_3lvl_pte(paddr_t addr)
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index bd71abe..0cecbe7 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -2,6 +2,7 @@
 #define _XEN_P2M_H
 
 #include <xen/mm.h>
+#include <public/domctl.h>
 
 struct domain;
 
@@ -41,6 +42,7 @@ typedef enum {
     p2m_invalid = 0,    /* Nothing mapped here */
     p2m_ram_rw,         /* Normal read/write guest RAM */
     p2m_ram_ro,         /* Read-only; writes are silently dropped */
+    p2m_ram_logdirty,   /* Read-only: special mode for log dirty */
     p2m_mmio_direct,    /* Read/write mapping of genuine MMIO area */
     p2m_map_foreign,    /* Ram pages from foreign domain */
     p2m_grant_map_rw,   /* Read/write grant mapping */
@@ -49,7 +51,8 @@ typedef enum {
 } p2m_type_t;
 
 #define p2m_is_foreign(_t)  ((_t) == p2m_map_foreign)
-#define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro)
+#define p2m_is_ram(_t)      ((_t) == p2m_ram_rw || (_t) == p2m_ram_ro ||  \
+                             (_t) == p2m_ram_logdirty)
 
 /* Initialise vmid allocator */
 void p2m_vmid_allocator_init(void);
@@ -178,6 +181,9 @@ static inline int get_page_and_type(struct page_info *page,
     return rc;
 }
 
+void p2m_change_entry_type_global(struct domain *d, enum mg nt);
+long dirty_mode_op(struct domain *d, xen_domctl_shadow_op_t *sc);
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 06e638f..9dc49c3 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -399,6 +399,8 @@ union hsr {
 #define FSC_CPR        (0x3a) /* Coprocessor Abort */
 
 #define FSC_LL_MASK    (_AC(0x03,U)<<0)
+#define FSC_MASK       (0x3f) /* Fault status mask */
+#define FSC_3D_LEVEL   (0x03) /* Third level fault */
 
 /* Time counter hypervisor control register */
 #define CNTHCTL_PA      (1u<<0)  /* Kernel/user access to physical counter */
-- 
1.8.3.2


* [PATCH 6/6] xen/arm: Implement toolstack for xl restore/save and migrate
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
                   ` (4 preceding siblings ...)
  2014-04-10 16:48 ` [PATCH 5/6] xen/arm: Implement hypercall for dirty page tracing Wei Huang
@ 2014-04-10 16:48 ` Wei Huang
  2014-04-11 14:15 ` [PATCH 0/6] xen/arm: Xen save/restore/live migration support Julien Grall
  6 siblings, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 16:48 UTC (permalink / raw)
  To: xen-devel
  Cc: w1.huang, ian.campbell, stefano.stabellini, julien.grall,
	jaeyong.yoo, yjhyun.yoo

From: Alexey Sokolov <sokolov.a@samsung.com>

This patch implements the xl save/restore operation in xc_arm_migrate.c and
makes it compilable with the existing design. The same code path is also used
for migration.

The overall save process is as follows:
1) save guest parameters (i.e., memory map, console and store pfn, etc)
2) save memory (if it is live migration, perform dirty-page tracing)
3) save hvm states (i.e., gic, timer, vcpu etc)
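
For reference, the stream layout this produces is roughly the following
(a sketch inferred from the code below, not a formal format definition):

    guest_params_t  params              /* console/store pfns, flags, gpfn range */
    repeated, one record per sent page:
        xen_pfn_t   gpfn
        uint8_t     page[PAGE_SIZE]
    xen_pfn_t       end marker          /* (xen_pfn_t)-1 terminates memory */
    uint32_t        rec_size            /* size of the HVM context blob */
    uint8_t         hvm_ctxt[rec_size]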

Signed-off-by: Alexey Sokolov <sokolov.a@samsung.com>
Signed-off-by: Wei Huang <w1.huang@samsung.com>
---
 config/arm32.mk              |   1 +
 config/arm64.mk              |   1 +
 tools/libxc/Makefile         |   6 +-
 tools/libxc/xc_arm_migrate.c | 712 +++++++++++++++++++++++++++++++++++++++++++
 tools/libxc/xc_dom_arm.c     |   4 +-
 tools/libxl/libxl.h          |   3 -
 tools/misc/Makefile          |   4 +-
 7 files changed, 724 insertions(+), 7 deletions(-)
 create mode 100644 tools/libxc/xc_arm_migrate.c

diff --git a/config/arm32.mk b/config/arm32.mk
index aa79d22..01374c9 100644
--- a/config/arm32.mk
+++ b/config/arm32.mk
@@ -1,6 +1,7 @@
 CONFIG_ARM := y
 CONFIG_ARM_32 := y
 CONFIG_ARM_$(XEN_OS) := y
+CONFIG_MIGRATE := y
 
 CONFIG_XEN_INSTALL_SUFFIX :=
 
diff --git a/config/arm64.mk b/config/arm64.mk
index 15b57a4..7ac3b65 100644
--- a/config/arm64.mk
+++ b/config/arm64.mk
@@ -1,6 +1,7 @@
 CONFIG_ARM := y
 CONFIG_ARM_64 := y
 CONFIG_ARM_$(XEN_OS) := y
+CONFIG_MIGRATE := y
 
 CONFIG_XEN_INSTALL_SUFFIX :=
 
diff --git a/tools/libxc/Makefile b/tools/libxc/Makefile
index 2cca2b2..6b90b1c 100644
--- a/tools/libxc/Makefile
+++ b/tools/libxc/Makefile
@@ -43,8 +43,13 @@ CTRL_SRCS-$(CONFIG_MiniOS) += xc_minios.c
 GUEST_SRCS-y :=
 GUEST_SRCS-y += xg_private.c xc_suspend.c
 ifeq ($(CONFIG_MIGRATE),y)
+ifeq ($(CONFIG_X86),y)
 GUEST_SRCS-y += xc_domain_restore.c xc_domain_save.c
 GUEST_SRCS-y += xc_offline_page.c xc_compression.c
+endif
+ifeq ($(CONFIG_ARM),y)
+GUEST_SRCS-y += xc_arm_migrate.c
+endif
 else
 GUEST_SRCS-y += xc_nomigrate.c
 endif
@@ -64,7 +69,6 @@ $(patsubst %.c,%.opic,$(ELF_SRCS-y)): CFLAGS += -Wno-pointer-sign
 GUEST_SRCS-y                 += xc_dom_core.c xc_dom_boot.c
 GUEST_SRCS-y                 += xc_dom_elfloader.c
 GUEST_SRCS-$(CONFIG_X86)     += xc_dom_bzimageloader.c
-GUEST_SRCS-$(CONFIG_X86)     += xc_dom_decompress_lz4.c
 GUEST_SRCS-$(CONFIG_ARM)     += xc_dom_armzimageloader.c
 GUEST_SRCS-y                 += xc_dom_binloader.c
 GUEST_SRCS-y                 += xc_dom_compat_linux.c
diff --git a/tools/libxc/xc_arm_migrate.c b/tools/libxc/xc_arm_migrate.c
new file mode 100644
index 0000000..48c77cb
--- /dev/null
+++ b/tools/libxc/xc_arm_migrate.c
@@ -0,0 +1,712 @@
+/******************************************************************************
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301 USA
+ *
+ * Copyright (c) 2013, Samsung Electronics
+ */
+
+#include <inttypes.h>
+#include <errno.h>
+#include <xenctrl.h>
+#include <xenguest.h>
+
+#include <unistd.h>
+#include <xc_private.h>
+#include <xc_dom.h>
+#include "xc_bitops.h"
+#include "xg_private.h"
+
+/*
+ *  XXX: Use correct definition for RAM base when the following patch
+ *  xen: arm: 64-bit guest support and domU FDT autogeneration
+ *  is upstreamed.
+ */
+
+#define DEF_MAX_ITERS          29 /* limit us to 30 times round loop   */
+#define DEF_MAX_FACTOR         3  /* never send more than 3x p2m_size  */
+#define DEF_MIN_DIRTY_PER_ITER 50 /* dirty page count to define last iter */
+#define DEF_PROGRESS_RATE      50 /* progress bar update rate */
+
+/* Enable this macro for debug only: "static" migration instead of live */
+/*
+#define DISABLE_LIVE_MIGRATION
+*/
+
+/* Enable this macro for debug only: additional debug info */
+/*
+#define ARM_MIGRATE_VERBOSE
+*/
+
+/*
+ * Guest params to save: used HVM params, save flags, memory map
+ */
+typedef struct guest_params
+{
+    unsigned long console_pfn;
+    unsigned long store_pfn;
+    uint32_t flags;
+    xen_pfn_t start_gpfn;
+    xen_pfn_t max_gpfn;
+    uint32_t max_vcpu_id;
+} guest_params_t;
+
+static int suspend_and_state(int (*suspend)(void*), void *data,
+                             xc_interface *xch, int dom)
+{
+    xc_dominfo_t info;
+    if ( !(*suspend)(data) )
+    {
+        ERROR("Suspend request failed");
+        return -1;
+    }
+
+    if ( (xc_domain_getinfo(xch, dom, 1, &info) != 1) ||
+         !info.shutdown || (info.shutdown_reason != SHUTDOWN_suspend) )
+    {
+        ERROR("Domain is not in suspended state after suspend attempt");
+        return -1;
+    }
+
+    return 0;
+}
+
+static int write_exact_handled(xc_interface *xch, int fd, const void *data,
+                               size_t size)
+{
+    if ( write_exact(fd, data, size) )
+    {
+        ERROR("Write failed, check space");
+        return -1;
+    }
+    return 0;
+}
+
+/* ============ Memory ============= */
+static int save_memory(xc_interface *xch, int io_fd, uint32_t dom,
+                       struct save_callbacks *callbacks,
+                       uint32_t max_iters, uint32_t max_factor,
+                       guest_params_t *params)
+{
+    int live =  !!(params->flags & XCFLAGS_LIVE);
+    int debug =  !!(params->flags & XCFLAGS_DEBUG);
+    xen_pfn_t i;
+    char reportbuf[80];
+    int iter = 0;
+    int last_iter = !live;
+    int total_dirty_pages_num = 0;
+    int dirty_pages_on_prev_iter_num = 0;
+    int count = 0;
+    char *page = 0;
+    xen_pfn_t *busy_pages = 0;
+    int busy_pages_count = 0;
+    int busy_pages_max = 256;
+
+    DECLARE_HYPERCALL_BUFFER(unsigned long, to_send);
+
+    xen_pfn_t start = params->start_gpfn;
+    const xen_pfn_t end = params->max_gpfn;
+    const xen_pfn_t mem_size = end - start;
+
+    if ( debug )
+    {
+        IPRINTF("(save mem) start=%llx end=%llx!\n", (unsigned long long)start,
+                (unsigned long long)end);
+    }
+
+    if ( live )
+    {
+        if ( xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_ENABLE_LOGDIRTY,
+                    NULL, 0, NULL, 0, NULL) < 0 )
+        {
+            ERROR("Couldn't enable log-dirty mode!\n");
+            return -1;
+        }
+
+        max_iters  = max_iters  ? : DEF_MAX_ITERS;
+        max_factor = max_factor ? : DEF_MAX_FACTOR;
+
+        if ( debug )
+            IPRINTF("Log-dirty mode enabled, max_iters=%d, max_factor=%d!\n",
+                    max_iters, max_factor);
+    }
+
+    to_send = xc_hypercall_buffer_alloc_pages(xch, to_send,
+                                              NRPAGES(bitmap_size(mem_size)));
+    if ( !to_send )
+    {
+        ERROR("Couldn't allocate to_send array!\n");
+        return -1;
+    }
+
+    /* send all pages on first iter */
+    memset(to_send, 0xff, bitmap_size(mem_size));
+
+    for ( ; ; )
+    {
+        int dirty_pages_on_current_iter_num = 0;
+        int frc;
+        iter++;
+
+        snprintf(reportbuf, sizeof(reportbuf),
+                 "Saving memory: iter %d (last sent %u)",
+                 iter, dirty_pages_on_prev_iter_num);
+
+        xc_report_progress_start(xch, reportbuf, mem_size);
+
+        if ( (iter > 1 &&
+              dirty_pages_on_prev_iter_num < DEF_MIN_DIRTY_PER_ITER) ||
+             (iter == max_iters) ||
+             (total_dirty_pages_num >= mem_size*max_factor) )
+        {
+            if ( debug )
+                IPRINTF("Last iteration");
+            last_iter = 1;
+        }
+
+        if ( last_iter )
+        {
+            if ( suspend_and_state(callbacks->suspend, callbacks->data,
+                                   xch, dom) )
+            {
+                ERROR("Domain appears not to have suspended");
+                return -1;
+            }
+        }
+        if ( live && iter > 1 )
+        {
+            frc = xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_CLEAN,
+                                    HYPERCALL_BUFFER(to_send), mem_size,
+                                                     NULL, 0, NULL);
+            if ( frc != mem_size )
+            {
+                ERROR("Error peeking shadow bitmap");
+                xc_hypercall_buffer_free_pages(xch, to_send,
+                                               NRPAGES(bitmap_size(mem_size)));
+                return -1;
+            }
+        }
+
+        busy_pages = malloc(sizeof(xen_pfn_t) * busy_pages_max);
+
+        for ( i = start; i < end; ++i )
+        {
+            if ( test_bit(i - start, to_send) )
+            {
+                page = xc_map_foreign_range(xch, dom, PAGE_SIZE, PROT_READ, i);
+                if ( !page )
+                {
+                    /* This page is mapped elsewhere, should be resent later */
+                    busy_pages[busy_pages_count] = i;
+                    busy_pages_count++;
+                    if ( busy_pages_count >= busy_pages_max )
+                    {
+                        busy_pages_max += 256;
+                        busy_pages = realloc(busy_pages, sizeof(xen_pfn_t) *
+                                                         busy_pages_max);
+                    }
+                    continue;
+                }
+
+                if ( write_exact_handled(xch, io_fd, &i, sizeof(i)) ||
+                     write_exact_handled(xch, io_fd, page, PAGE_SIZE) )
+                {
+                    munmap(page, PAGE_SIZE);
+                    free(busy_pages);
+                    return -1;
+                }
+                count++;
+                munmap(page, PAGE_SIZE);
+
+                if ( (i % DEF_PROGRESS_RATE) == 0 )
+                    xc_report_progress_step(xch, i - start, mem_size);
+                dirty_pages_on_current_iter_num++;
+            }
+        }
+
+        while ( busy_pages_count )
+        {
+            /* Send busy pages */
+            busy_pages_count--;
+            i = busy_pages[busy_pages_count];
+            if ( test_bit(i - start, to_send) )
+            {
+                page = xc_map_foreign_range(xch, dom, PAGE_SIZE,PROT_READ, i);
+                if ( !page )
+                {
+                    IPRINTF("WARNING: 2nd attempt to save busy page "
+                            "failed, pfn=%llx", (unsigned long long)i);
+                    continue;
+                }
+
+                if ( debug )
+                {
+                    IPRINTF("save mem: resend busy page %llx\n",
+                            (unsigned long long)i);
+                }
+
+                if ( write_exact_handled(xch, io_fd, &i, sizeof(i)) ||
+                     write_exact_handled(xch, io_fd, page, PAGE_SIZE) )
+                {
+                    munmap(page, PAGE_SIZE);
+                    free(busy_pages);
+                    return -1;
+                }
+                count++;
+                munmap(page, PAGE_SIZE);
+                dirty_pages_on_current_iter_num++;
+            }
+        }
+        free(busy_pages);
+
+        if ( debug )
+            IPRINTF("Dirty pages=%d", dirty_pages_on_current_iter_num);
+
+        xc_report_progress_step(xch, mem_size, mem_size);
+
+        dirty_pages_on_prev_iter_num = dirty_pages_on_current_iter_num;
+        total_dirty_pages_num += dirty_pages_on_current_iter_num;
+
+        if ( last_iter )
+        {
+            xc_hypercall_buffer_free_pages(xch, to_send,
+                                           NRPAGES(bitmap_size(mem_size)));
+            if ( live )
+            {
+                if ( xc_shadow_control(xch, dom, XEN_DOMCTL_SHADOW_OP_OFF,
+                                       NULL, 0, NULL, 0, NULL) < 0 )
+                    ERROR("Couldn't disable log-dirty mode");
+            }
+            break;
+        }
+    }
+    if ( debug )
+    {
+        IPRINTF("save mem: pages count = %d\n", count);
+    }
+
+    i = (xen_pfn_t) -1; /* end page marker */
+    return write_exact_handled(xch, io_fd, &i, sizeof(i));
+}
+
+static int restore_memory(xc_interface *xch, int io_fd, uint32_t dom,
+                          guest_params_t *params)
+{
+    xen_pfn_t end = params->max_gpfn;
+    xen_pfn_t gpfn;
+    int debug =  !!(params->flags & XCFLAGS_DEBUG);
+    int count = 0;
+    char *page;
+    xen_pfn_t start = params->start_gpfn;
+
+    /* TODO allocate several pages per call */
+    for ( gpfn = start; gpfn < end; ++gpfn )
+    {
+        if ( xc_domain_populate_physmap_exact(xch, dom, 1, 0, 0, &gpfn) )
+        {
+            PERROR("Memory allocation for a new domain failed");
+            return -1;
+        }
+    }
+    while ( 1 )
+    {
+
+        if ( read_exact(io_fd, &gpfn, sizeof(gpfn)) )
+        {
+            PERROR("GPFN read failed during memory transfer, count=%d", count);
+            return -1;
+        }
+        if ( gpfn == (xen_pfn_t) -1 ) break; /* end page marker */
+
+        if ( gpfn < start || gpfn >= end )
+        {
+            ERROR("GPFN %llx doesn't belong to RAM address space, count=%d",
+                  (unsigned long long)gpfn, count);
+            return -1;
+        }
+        page = xc_map_foreign_range(xch, dom, PAGE_SIZE,
+                                    PROT_READ | PROT_WRITE, gpfn);
+        if ( !page )
+        {
+            PERROR("xc_map_foreign_range failed, pfn=%llx", (unsigned long long)gpfn);
+            return -1;
+        }
+        if ( read_exact(io_fd, page, PAGE_SIZE) )
+        {
+            PERROR("Page data read failed during memory transfer, pfn=%llx",
+                    (unsigned long long)gpfn);
+            return -1;
+        }
+        munmap(page, PAGE_SIZE);
+        count++;
+    }
+
+    if ( debug )
+    {
+        IPRINTF("Memory restored, pages count=%d", count);
+    }
+    return 0;
+}
+
+/* ============ HVM context =========== */
+static int save_armhvm(xc_interface *xch, int io_fd, uint32_t dom, int debug)
+{
+    /* HVM: a buffer for holding HVM context */
+    uint32_t hvm_buf_size = 0;
+    uint8_t *hvm_buf = NULL;
+    uint32_t rec_size;
+    int retval = -1;
+
+    /* Need another buffer for HVM context */
+    hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
+    if ( hvm_buf_size == -1 )
+    {
+        ERROR("Couldn't get HVM context size from Xen");
+        goto out;
+    }
+    hvm_buf = malloc(hvm_buf_size);
+
+    if ( !hvm_buf )
+    {
+        ERROR("Couldn't allocate memory for hvm buffer");
+        goto out;
+    }
+
+    /* Get HVM context from Xen and save it too */
+    if ( (rec_size = xc_domain_hvm_getcontext(xch, dom, hvm_buf,
+                    hvm_buf_size)) == -1 )
+    {
+        ERROR("HVM: Could not get hvm buffer");
+        goto out;
+    }
+
+    if ( debug )
+        IPRINTF("HVM save size %d %d", hvm_buf_size, rec_size);
+
+    if ( write_exact_handled(xch, io_fd, &rec_size, sizeof(uint32_t)) )
+        goto out;
+
+    if ( write_exact_handled(xch, io_fd, hvm_buf, rec_size) )
+    {
+        goto out;
+    }
+    retval = 0;
+
+out:
+    if ( hvm_buf )
+        free(hvm_buf);
+    return retval;
+}
+
+static int restore_armhvm(xc_interface *xch, int io_fd,
+                          uint32_t dom, int debug)
+{
+    uint32_t rec_size;
+    uint32_t hvm_buf_size = 0;
+    uint8_t *hvm_buf = NULL;
+    int frc = 0;
+    int retval = -1;
+
+    if ( read_exact(io_fd, &rec_size, sizeof(uint32_t)) )
+    {
+        PERROR("Could not read HVM size");
+        goto out;
+    }
+
+    if ( !rec_size )
+    {
+        ERROR("Zero HVM size");
+        goto out;
+    }
+
+    hvm_buf_size = xc_domain_hvm_getcontext(xch, dom, 0, 0);
+    if ( hvm_buf_size != rec_size )
+    {
+        ERROR("HVM size for this domain is not the same as stored");
+    }
+
+    hvm_buf = malloc(hvm_buf_size);
+    if ( !hvm_buf )
+    {
+        ERROR("Couldn't allocate memory");
+        goto out;
+    }
+
+    if ( read_exact(io_fd, hvm_buf, hvm_buf_size) )
+    {
+        PERROR("Could not read HVM context");
+        goto out;
+    }
+
+    frc = xc_domain_hvm_setcontext(xch, dom, hvm_buf, hvm_buf_size);
+    if ( frc )
+    {
+        ERROR("error setting the HVM context");
+        goto out;
+    }
+    retval = 0;
+
+    if ( debug )
+    {
+        IPRINTF("HVM restore size %d %d", hvm_buf_size, rec_size);
+    }
+out:
+    if ( hvm_buf )
+        free(hvm_buf);
+    return retval;
+}
+
+/* ================= Console & Xenstore & Memory map =========== */
+static int save_guest_params(xc_interface *xch, int io_fd,
+                             uint32_t dom, uint32_t flags,
+                             guest_params_t *params)
+{
+    size_t sz = sizeof(guest_params_t);
+    xc_dominfo_t dom_info;
+
+    params->max_gpfn = xc_domain_maximum_gpfn(xch, dom);
+    params->start_gpfn = (GUEST_RAM_BASE >> PAGE_SHIFT);
+
+    if ( flags & XCFLAGS_DEBUG )
+    {
+        IPRINTF("Guest param save size: %d ", (int)sz);
+    }
+
+    if ( xc_get_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
+            &params->console_pfn) )
+    {
+        ERROR("Can't get console gpfn");
+        return -1;
+    }
+
+    if ( xc_get_hvm_param(xch, dom, HVM_PARAM_STORE_PFN, &params->store_pfn) )
+    {
+        ERROR("Can't get store gpfn");
+        return -1;
+    }
+
+    if ( xc_domain_getinfo(xch, dom, 1, &dom_info) != 1 )
+    {
+        ERROR("Can't get domain info for dom %d", dom);
+        return -1;
+    }
+    params->max_vcpu_id = dom_info.max_vcpu_id;
+
+    params->flags = flags;
+
+    if ( write_exact_handled(xch, io_fd, params, sz) )
+    {
+        return -1;
+    }
+
+    return 0;
+}
+
+static int restore_guest_params(xc_interface *xch, int io_fd,
+                                uint32_t dom, guest_params_t *params)
+{
+    size_t sz = sizeof(guest_params_t);
+    xen_pfn_t nr_pfns;
+    unsigned int maxmemkb;
+
+    if ( read_exact(io_fd, params, sizeof(guest_params_t)) )
+    {
+        PERROR("Can't read guest params");
+        return -1;
+    }
+
+    nr_pfns = params->max_gpfn - params->start_gpfn;
+    maxmemkb = (unsigned int) nr_pfns << (PAGE_SHIFT - 10);
+
+    if ( params->flags & XCFLAGS_DEBUG )
+    {
+        IPRINTF("Guest param restore size: %d ", (int)sz);
+        IPRINTF("Guest memory size: %d MB", maxmemkb >> 10);
+    }
+
+    if ( xc_domain_setmaxmem(xch, dom, maxmemkb) )
+    {
+        ERROR("Can't set memory map");
+        return -1;
+    }
+
+    /* Set max. number of vcpus as max_vcpu_id + 1 */
+    if ( xc_domain_max_vcpus(xch, dom, params->max_vcpu_id + 1) )
+    {
+        ERROR("Can't set max vcpu number for domain");
+        return -1;
+    }
+
+    return 0;
+}
+
+static int set_guest_params(xc_interface *xch, int io_fd, uint32_t dom,
+                            guest_params_t *params, unsigned int console_evtchn,
+                            domid_t console_domid, unsigned int store_evtchn,
+                            domid_t store_domid)
+{
+    int rc = 0;
+
+    if ( (rc = xc_clear_domain_page(xch, dom, params->console_pfn)) )
+    {
+        ERROR("Can't clear console page");
+        return rc;
+    }
+
+    if ( (rc = xc_clear_domain_page(xch, dom, params->store_pfn)) )
+    {
+        ERROR("Can't clear xenstore page");
+        return rc;
+    }
+
+    if ( (rc = xc_dom_gnttab_hvm_seed(xch, dom, params->console_pfn,
+                                      params->store_pfn, console_domid,
+                                      store_domid)) )
+    {
+        ERROR("Can't grant console and xenstore pages");
+        return rc;
+    }
+
+    if ( (rc = xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_PFN,
+                                params->console_pfn)) )
+    {
+        ERROR("Can't set console gpfn");
+        return rc;
+    }
+
+    if ( (rc = xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_PFN,
+                                params->store_pfn)) )
+    {
+        ERROR("Can't set xenstore gpfn");
+        return rc;
+    }
+
+    if ( (rc = xc_set_hvm_param(xch, dom, HVM_PARAM_CONSOLE_EVTCHN,
+                                console_evtchn)) )
+    {
+        ERROR("Can't set console event channel");
+        return rc;
+    }
+
+    if ( (rc = xc_set_hvm_param(xch, dom, HVM_PARAM_STORE_EVTCHN,
+                                store_evtchn)) )
+    {
+        ERROR("Can't set xenstore event channel");
+        return rc;
+    }
+    return 0;
+}
+
+/* ================== Main ============== */
+int xc_domain_save(xc_interface *xch, int io_fd, uint32_t dom,
+                   uint32_t max_iters, uint32_t max_factor, uint32_t flags,
+                   struct save_callbacks *callbacks, int hvm,
+                   unsigned long vm_generationid_addr)
+{
+    int debug;
+    guest_params_t params;
+
+#ifdef ARM_MIGRATE_VERBOSE
+    flags |= XCFLAGS_DEBUG;
+#endif
+
+#ifdef DISABLE_LIVE_MIGRATION
+    flags &= ~(XCFLAGS_LIVE);
+#endif
+
+    debug = !!(flags & XCFLAGS_DEBUG);
+    if ( save_guest_params(xch, io_fd, dom, flags, &params) )
+    {
+        ERROR("Can't save guest params");
+        return -1;
+    }
+
+    if ( save_memory(xch, io_fd, dom, callbacks, max_iters,
+            max_factor, &params) )
+    {
+        ERROR("Memory not saved");
+        return -1;
+    }
+
+    if ( save_armhvm(xch, io_fd, dom, debug) )
+    {
+        ERROR("HVM not saved");
+        return -1;
+    }
+
+    if ( debug )
+    {
+        IPRINTF("Domain %d saved", dom);
+    }
+    return 0;
+}
+
+int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
+                      unsigned int store_evtchn, unsigned long *store_gpfn,
+                      domid_t store_domid, unsigned int console_evtchn,
+                      unsigned long *console_gpfn, domid_t console_domid,
+                      unsigned int hvm, unsigned int pae, int superpages,
+                      int no_incr_generationid, int checkpointed_stream,
+                      unsigned long *vm_generationid_addr,
+                      struct restore_callbacks *callbacks)
+{
+    guest_params_t params;
+    int debug = 1;
+
+    if ( restore_guest_params(xch, io_fd, dom, &params) )
+    {
+        ERROR("Can't restore guest params");
+        return -1;
+    }
+    debug = !!(params.flags & XCFLAGS_DEBUG);
+
+    if ( restore_memory(xch, io_fd, dom, &params) )
+    {
+        ERROR("Can't restore memory");
+        return -1;
+    }
+    if ( set_guest_params(xch, io_fd, dom, &params,
+                console_evtchn, console_domid,
+                store_evtchn, store_domid) )
+    {
+        ERROR("Can't setup guest params");
+        return -1;
+    }
+
+    /* Setup console and store PFNs to caller */
+    *console_gpfn = params.console_pfn;
+    *store_gpfn = params.store_pfn;
+
+    if ( restore_armhvm(xch, io_fd, dom, debug) )
+    {
+        ERROR("HVM not restored");
+        return -1;
+    }
+
+    if ( debug )
+    {
+        IPRINTF("Domain %d restored", dom);
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-set-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libxc/xc_dom_arm.c b/tools/libxc/xc_dom_arm.c
index a40e04d..f4d3af4 100644
--- a/tools/libxc/xc_dom_arm.c
+++ b/tools/libxc/xc_dom_arm.c
@@ -297,7 +297,9 @@ int arch_setup_meminit(struct xc_dom_image *dom)
                   dom->devicetree_seg.vstart, dom->devicetree_seg.vend);
     }
 
-    return 0;
+    return xc_domain_setmaxmem(dom->xch, dom->guest_domid,
+                               (dom->total_pages + NR_MAGIC_PAGES)
+                                << (PAGE_SHIFT - 10));
 }
 
 int arch_setup_bootearly(struct xc_dom_image *dom)
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index b2c3015..e10f4fb 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -441,9 +441,6 @@
  *  - libxl_domain_resume
  *  - libxl_domain_remus_start
  */
-#if defined(__arm__) || defined(__aarch64__)
-#define LIBXL_HAVE_NO_SUSPEND_RESUME 1
-#endif
 
 /*
  * LIBXL_HAVE_DEVICE_PCI_SEIZE
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 17aeda5..0824100 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -11,7 +11,7 @@ HDRS     = $(wildcard *.h)
 
 TARGETS-y := xenperf xenpm xen-tmem-list-parse gtraceview gtracestat xenlockprof xenwatchdogd xencov
 TARGETS-$(CONFIG_X86) += xen-detect xen-hvmctx xen-hvmcrash xen-lowmemd xen-mfndump
-TARGETS-$(CONFIG_MIGRATE) += xen-hptool
+TARGETS-$(CONFIG_X86) += xen-hptool
 TARGETS := $(TARGETS-y)
 
 SUBDIRS := $(SUBDIRS-y)
@@ -23,7 +23,7 @@ INSTALL_BIN := $(INSTALL_BIN-y)
 INSTALL_SBIN-y := xen-bugtool xen-python-path xenperf xenpm xen-tmem-list-parse gtraceview \
 	gtracestat xenlockprof xenwatchdogd xen-ringwatch xencov
 INSTALL_SBIN-$(CONFIG_X86) += xen-hvmctx xen-hvmcrash xen-lowmemd xen-mfndump
-INSTALL_SBIN-$(CONFIG_MIGRATE) += xen-hptool
+INSTALL_SBIN-$(CONFIG_X86) += xen-hptool
 INSTALL_SBIN := $(INSTALL_SBIN-y)
 
 INSTALL_PRIVBIN-y := xenpvnetboot
-- 
1.8.3.2


* Re: [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall
  2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
@ 2014-04-10 17:26   ` Andrew Cooper
  2014-04-10 21:53     ` Wei Huang
  2014-04-11 13:57   ` Julien Grall
  1 sibling, 1 reply; 21+ messages in thread
From: Andrew Cooper @ 2014-04-10 17:26 UTC (permalink / raw)
  To: Wei Huang
  Cc: ian.campbell, stefano.stabellini, julien.grall, jaeyong.yoo,
	xen-devel, yjhyun.yoo

On 10/04/14 17:48, Wei Huang wrote:
> From: Jaeyong Yoon <jaeyong.yoo@samsung.com>
>
> Implement save/restore of hvm context hypercall. In hvm context
> save/restore, this patch saves gic, timer and vfp registers.
>
> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
> Signed-off-by: Wei Huang <w1.huang@samsung.com>
> ---
>  xen/arch/arm/Makefile                  |   1 +
>  xen/arch/arm/domctl.c                  |  92 +++++-
>  xen/arch/arm/hvm.c                     | 505 ++++++++++++++++++++++++++++++++-
>  xen/arch/arm/save.c                    |  66 +++++
>  xen/common/Makefile                    |   2 +
>  xen/include/asm-arm/hvm/support.h      |  29 ++
>  xen/include/public/arch-arm/hvm/save.h | 136 +++++++++
>  7 files changed, 826 insertions(+), 5 deletions(-)
>  create mode 100644 xen/arch/arm/save.c
>  create mode 100644 xen/include/asm-arm/hvm/support.h
>
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index 63e0460..d9a328c 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -33,6 +33,7 @@ obj-y += hvm.o
>  obj-y += device.o
>  obj-y += decode.o
>  obj-y += processor.o
> +obj-y += save.o
>  
>  #obj-bin-y += ....o
>  
> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
> index 45974e7..914de29 100644
> --- a/xen/arch/arm/domctl.c
> +++ b/xen/arch/arm/domctl.c
> @@ -9,31 +9,115 @@
>  #include <xen/lib.h>
>  #include <xen/errno.h>
>  #include <xen/sched.h>
> +#include <xen/hvm/save.h>
> +#include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <public/domctl.h>
>  
>  long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>                      XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>  {
> +    long ret = 0;
> +    bool_t copyback = 0;
> +
>      switch ( domctl->cmd )
>      {
> +    case XEN_DOMCTL_sethvmcontext:
> +    {
> +        struct hvm_domain_context c = { .size = domctl->u.hvmcontext.size };
> +
> +        ret = -ENOMEM;
> +        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
> +            goto sethvmcontext_out;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(c.data, domctl->u.hvmcontext.buffer, c.size) != 0)
> +            goto sethvmcontext_out;
> +

You need to ensure that d != current->domain, or domain_pause() will
ASSERT().
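
Something along these lines (untested sketch) before the domain_pause()
would cover it:

    ret = -EINVAL;
    if ( d == current->domain )
        goto sethvmcontext_out;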

> +        domain_pause(d);
> +        ret = hvm_load(d, &c);
> +        domain_unpause(d);
> +
> +    sethvmcontext_out:
> +        if ( c.data != NULL )
> +            xfree(c.data);
> +    }
> +    break;
> +
> +    case XEN_DOMCTL_gethvmcontext:
> +    {
> +        struct hvm_domain_context c = { 0 };
> +
> +        ret = -EINVAL;
> +
> +        c.size = hvm_save_size(d);
> +
> +        if ( guest_handle_is_null(domctl->u.hvmcontext.buffer) )
> +        {
> +            /* Client is querying for the correct buffer size */
> +            domctl->u.hvmcontext.size = c.size;
> +            ret = 0;
> +            goto gethvmcontext_out;
> +        }
> +
> +        /* Check that the client has a big enough buffer */
> +        ret = -ENOSPC;
> +        if ( domctl->u.hvmcontext.size < c.size )
> +        {
> +            printk("(gethvmcontext) size error: %d and %d\n",
> +                   domctl->u.hvmcontext.size, c.size );
> +            goto gethvmcontext_out;
> +        }
> +
> +        /* Allocate our own marshalling buffer */
> +        ret = -ENOMEM;
> +        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
> +        {
> +            printk("(gethvmcontext) xmalloc_bytes failed: %d\n", c.size );
> +            goto gethvmcontext_out;
> +        }
> +

Same here.

> +        domain_pause(d);
> +        ret = hvm_save(d, &c);
> +        domain_unpause(d);
> +
> +        domctl->u.hvmcontext.size = c.cur;
> +        if ( copy_to_guest(domctl->u.hvmcontext.buffer, c.data, c.size) != 0 )
> +        {
> +            printk("(gethvmcontext) copy to guest failed\n");
> +            ret = -EFAULT;
> +        }
> +
> +    gethvmcontext_out:
> +        copyback = 1;
> +
> +        if ( c.data != NULL )
> +            xfree(c.data);
> +    }
> +    break;
> +
>      case XEN_DOMCTL_cacheflush:
>      {
>          unsigned long s = domctl->u.cacheflush.start_pfn;
>          unsigned long e = s + domctl->u.cacheflush.nr_pfns;
>  
>          if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> -            return -EINVAL;
> +            ret = -EINVAL;
>  
>          if ( e < s )
> -            return -EINVAL;
> +            ret = -EINVAL;
>  
> -        return p2m_cache_flush(d, s, e);
> +        ret = p2m_cache_flush(d, s, e);
>      }
>  
>      default:
> -        return subarch_do_domctl(domctl, d, u_domctl);
> +        ret = subarch_do_domctl(domctl, d, u_domctl);
>      }
> +
> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
> +        ret = -EFAULT;
> +
> +    return ret;
>  }

Both these hypercalls look suspiciously similar to the x86 variants, and
look to be good candidates to live in common code.

~Andrew

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall
  2014-04-10 16:48 ` [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall Wei Huang
@ 2014-04-10 17:28   ` Andrew Cooper
  2014-04-10 21:54     ` Wei Huang
  2014-04-11 13:17     ` Julien Grall
  2014-04-11 13:15   ` Julien Grall
  1 sibling, 2 replies; 21+ messages in thread
From: Andrew Cooper @ 2014-04-10 17:28 UTC (permalink / raw)
  To: Wei Huang
  Cc: ian.campbell, stefano.stabellini, julien.grall, jaeyong.yoo,
	xen-devel, yjhyun.yoo

On 10/04/14 17:48, Wei Huang wrote:
> From: Jaeyong Yoo <jaeyong.yoo@samsung.com>
>
> This patch implements get_maximum_gpfn by using the memory map
> info in arch_domain (from set_memory_map hypercall).
>
> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>

Common implementation and a specific arch_get_maximum_gpfn() ?

~Andrew

> ---
>  xen/arch/arm/mm.c        | 19 ++++++++++++++++++-
>  xen/include/asm-arm/mm.h |  2 ++
>  2 files changed, 20 insertions(+), 1 deletion(-)
>
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 362bc8d..14b4686 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,11 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    paddr_t end;
> +
> +    get_gma_start_end(d, NULL, &end);
> +
> +    return (unsigned long) (end >> PAGE_SHIFT);
>  }
>  
>  void share_xen_page_with_guest(struct page_info *page,
> @@ -1235,6 +1239,19 @@ int is_iomem_page(unsigned long mfn)
>          return 1;
>      return 0;
>  }
> +
> +/*
> + * Return start and end addresses of guest
> + */
> +void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
> +{
> +    if ( start )
> +        *start = GUEST_RAM_BASE;
> +
> +    if ( end )
> +        *end = GUEST_RAM_BASE + ((paddr_t) d->max_pages << PAGE_SHIFT);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index b8d4e7d..341493a 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -341,6 +341,8 @@ static inline void put_page_and_type(struct page_info *page)
>      put_page(page);
>  }
>  
> +void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end);
> +
>  #endif /*  __ARCH_ARM_MM__ */
>  /*
>   * Local variables:


* Re: [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall
  2014-04-10 17:26   ` Andrew Cooper
@ 2014-04-10 21:53     ` Wei Huang
  0 siblings, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 21:53 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: ian.campbell, stefano.stabellini, julien.grall, jaeyong.yoo,
	xen-devel, yjhyun.yoo

On 04/10/2014 12:26 PM, Andrew Cooper wrote:
> On 10/04/14 17:48, Wei Huang wrote:
>> From: Jaeyong Yoon <jaeyong.yoo@samsung.com>
>>
>> Implement save/restore of hvm context hypercall. In hvm context
>> save/restore, this patch saves gic, timer and vfp registers.
>>
>> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
>> Signed-off-by: Wei Huang <w1.huang@samsung.com>
>> ---
>>   xen/arch/arm/Makefile                  |   1 +
>>   xen/arch/arm/domctl.c                  |  92 +++++-
>>   xen/arch/arm/hvm.c                     | 505 ++++++++++++++++++++++++++++++++-
>>   xen/arch/arm/save.c                    |  66 +++++
>>   xen/common/Makefile                    |   2 +
>>   xen/include/asm-arm/hvm/support.h      |  29 ++
>>   xen/include/public/arch-arm/hvm/save.h | 136 +++++++++
>>   7 files changed, 826 insertions(+), 5 deletions(-)
>>   create mode 100644 xen/arch/arm/save.c
>>   create mode 100644 xen/include/asm-arm/hvm/support.h
>>
>> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
>> index 63e0460..d9a328c 100644
>> --- a/xen/arch/arm/Makefile
>> +++ b/xen/arch/arm/Makefile
>> @@ -33,6 +33,7 @@ obj-y += hvm.o
>>   obj-y += device.o
>>   obj-y += decode.o
>>   obj-y += processor.o
>> +obj-y += save.o
>>
>>   #obj-bin-y += ....o
>>
>> diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
>> index 45974e7..914de29 100644
>> --- a/xen/arch/arm/domctl.c
>> +++ b/xen/arch/arm/domctl.c
>> @@ -9,31 +9,115 @@
>>   #include <xen/lib.h>
>>   #include <xen/errno.h>
>>   #include <xen/sched.h>
>> +#include <xen/hvm/save.h>
>> +#include <xen/guest_access.h>
>>   #include <xen/hypercall.h>
>>   #include <public/domctl.h>
>>
>>   long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
>>                       XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
>>   {
>> +    long ret = 0;
>> +    bool_t copyback = 0;
>> +
>>       switch ( domctl->cmd )
>>       {
>> +    case XEN_DOMCTL_sethvmcontext:
>> +    {
>> +        struct hvm_domain_context c = { .size = domctl->u.hvmcontext.size };
>> +
>> +        ret = -ENOMEM;
>> +        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
>> +            goto sethvmcontext_out;
>> +
>> +        ret = -EFAULT;
>> +        if ( copy_from_guest(c.data, domctl->u.hvmcontext.buffer, c.size) != 0)
>> +            goto sethvmcontext_out;
>> +
>
> You need to ensure that d != current->domain, or domain_pause() will
> ASSERT().
>
>> +        domain_pause(d);
>> +        ret = hvm_load(d, &c);
>> +        domain_unpause(d);
>> +
>> +    sethvmcontext_out:
>> +        if ( c.data != NULL )
>> +            xfree(c.data);
>> +    }
>> +    break;
>> +
>> +    case XEN_DOMCTL_gethvmcontext:
>> +    {
>> +        struct hvm_domain_context c = { 0 };
>> +
>> +        ret = -EINVAL;
>> +
>> +        c.size = hvm_save_size(d);
>> +
>> +        if ( guest_handle_is_null(domctl->u.hvmcontext.buffer) )
>> +        {
>> +            /* Client is querying for the correct buffer size */
>> +            domctl->u.hvmcontext.size = c.size;
>> +            ret = 0;
>> +            goto gethvmcontext_out;
>> +        }
>> +
>> +        /* Check that the client has a big enough buffer */
>> +        ret = -ENOSPC;
>> +        if ( domctl->u.hvmcontext.size < c.size )
>> +        {
>> +            printk("(gethvmcontext) size error: %d and %d\n",
>> +                   domctl->u.hvmcontext.size, c.size );
>> +            goto gethvmcontext_out;
>> +        }
>> +
>> +        /* Allocate our own marshalling buffer */
>> +        ret = -ENOMEM;
>> +        if ( (c.data = xmalloc_bytes(c.size)) == NULL )
>> +        {
>> +            printk("(gethvmcontext) xmalloc_bytes failed: %d\n", c.size );
>> +            goto gethvmcontext_out;
>> +        }
>> +
>
> Same here.
>
>> +        domain_pause(d);
>> +        ret = hvm_save(d, &c);
>> +        domain_unpause(d);
>> +
>> +        domctl->u.hvmcontext.size = c.cur;
>> +        if ( copy_to_guest(domctl->u.hvmcontext.buffer, c.data, c.size) != 0 )
>> +        {
>> +            printk("(gethvmcontext) copy to guest failed\n");
>> +            ret = -EFAULT;
>> +        }
>> +
>> +    gethvmcontext_out:
>> +        copyback = 1;
>> +
>> +        if ( c.data != NULL )
>> +            xfree(c.data);
>> +    }
>> +    break;
>> +
>>       case XEN_DOMCTL_cacheflush:
>>       {
>>           unsigned long s = domctl->u.cacheflush.start_pfn;
>>           unsigned long e = s + domctl->u.cacheflush.nr_pfns;
>>
>>           if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
>> -            return -EINVAL;
>> +            ret = -EINVAL;
>>
>>           if ( e < s )
>> -            return -EINVAL;
>> +            ret = -EINVAL;
>>
>> -        return p2m_cache_flush(d, s, e);
>> +        ret = p2m_cache_flush(d, s, e);
>>       }
>>
>>       default:
>> -        return subarch_do_domctl(domctl, d, u_domctl);
>> +        ret = subarch_do_domctl(domctl, d, u_domctl);
>>       }
>> +
>> +    if ( copyback && __copy_to_guest(u_domctl, domctl, 1) )
>> +        ret = -EFAULT;
>> +
>> +    return ret;
>>   }
>
> Both these hypercalls look suspiciously similar to the x86 variants, and
> look to be good candidates to live in common code.
Got it. I will try to merge them with x86, if possible.
>
> ~Andrew
>


* Re: [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall
  2014-04-10 17:28   ` Andrew Cooper
@ 2014-04-10 21:54     ` Wei Huang
  2014-04-11 13:17     ` Julien Grall
  1 sibling, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-10 21:54 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: ian.campbell, stefano.stabellini, julien.grall, jaeyong.yoo,
	xen-devel, yjhyun.yoo

On 04/10/2014 12:28 PM, Andrew Cooper wrote:
> On 10/04/14 17:48, Wei Huang wrote:
>> From: Jaeyong Yoo <jaeyong.yoo@samsung.com>
>>
>> This patch implements get_maximum_gpfn by using the memory map
>> info in arch_domain (from set_memory_map hypercall).
>>
>> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
>
> Common implementation and a specific arch_get_maximum_gpfn() ?
>
> ~Andrew
Yes, will do.
>
>> ---
>>   xen/arch/arm/mm.c        | 19 ++++++++++++++++++-
>>   xen/include/asm-arm/mm.h |  2 ++
>>   2 files changed, 20 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 362bc8d..14b4686 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -947,7 +947,11 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>>
>>   unsigned long domain_get_maximum_gpfn(struct domain *d)
>>   {
>> -    return -ENOSYS;
>> +    paddr_t end;
>> +
>> +    get_gma_start_end(d, NULL, &end);
>> +
>> +    return (unsigned long) (end >> PAGE_SHIFT);
>>   }
>>
>>   void share_xen_page_with_guest(struct page_info *page,
>> @@ -1235,6 +1239,19 @@ int is_iomem_page(unsigned long mfn)
>>           return 1;
>>       return 0;
>>   }
>> +
>> +/*
>> + * Return start and end addresses of guest
>> + */
>> +void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
>> +{
>> +    if ( start )
>> +        *start = GUEST_RAM_BASE;
>> +
>> +    if ( end )
>> +        *end = GUEST_RAM_BASE + ((paddr_t) d->max_pages << PAGE_SHIFT);
>> +}
>> +
>>   /*
>>    * Local variables:
>>    * mode: C
>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>> index b8d4e7d..341493a 100644
>> --- a/xen/include/asm-arm/mm.h
>> +++ b/xen/include/asm-arm/mm.h
>> @@ -341,6 +341,8 @@ static inline void put_page_and_type(struct page_info *page)
>>       put_page(page);
>>   }
>>
>> +void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end);
>> +
>>   #endif /*  __ARCH_ARM_MM__ */
>>   /*
>>    * Local variables:
>
>


* Re: [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall
  2014-04-10 16:48 ` [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall Wei Huang
  2014-04-10 17:28   ` Andrew Cooper
@ 2014-04-11 13:15   ` Julien Grall
  1 sibling, 0 replies; 21+ messages in thread
From: Julien Grall @ 2014-04-11 13:15 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

Hello Wei,

On 04/10/2014 05:48 PM, Wei Huang wrote:
> From: Jaeyong Yoo <jaeyong.yoo@samsung.com>
> 
> This patch implements get_maximum_gpfn by using the memory map
> info in arch_domain (from set_memory_map hypercall).
> 
> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
> ---
>  xen/arch/arm/mm.c        | 19 ++++++++++++++++++-
>  xen/include/asm-arm/mm.h |  2 ++
>  2 files changed, 20 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 362bc8d..14b4686 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -947,7 +947,11 @@ int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
>  
>  unsigned long domain_get_maximum_gpfn(struct domain *d)
>  {
> -    return -ENOSYS;
> +    paddr_t end;
> +
> +    get_gma_start_end(d, NULL, &end);
> +
> +    return (unsigned long) (end >> PAGE_SHIFT);
>  }
>  void share_xen_page_with_guest(struct page_info *page,
> @@ -1235,6 +1239,19 @@ int is_iomem_page(unsigned long mfn)
>          return 1;
>      return 0;
>  }
> +
> +/*
> + * Return start and end addresses of guest
> + */
> +void get_gma_start_end(struct domain *d, paddr_t *start, paddr_t *end)
> +{
> +    if ( start )
> +        *start = GUEST_RAM_BASE;
> +
> +    if ( end )
> +        *end = GUEST_RAM_BASE + ((paddr_t) d->max_pages << PAGE_SHIFT);
> +}
> +

Ian plans to add multiple-bank memory support for guests very soon, and
this solution will then stop working.

Late last December we introduced max_mapped_pfn for ARM, which gives the
maximum pfn mapped in the guest.

Would it suit your purpose? FYI, x86 uses a similar solution.

Regards,

-- 
Julien Grall


* Re: [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall
  2014-04-10 17:28   ` Andrew Cooper
  2014-04-10 21:54     ` Wei Huang
@ 2014-04-11 13:17     ` Julien Grall
  1 sibling, 0 replies; 21+ messages in thread
From: Julien Grall @ 2014-04-11 13:17 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Wei Huang, ian.campbell, stefano.stabellini, jaeyong.yoo,
	xen-devel, yjhyun.yoo

Hi Andrew,

On 04/10/2014 06:28 PM, Andrew Cooper wrote:
> On 10/04/14 17:48, Wei Huang wrote:
>> From: Jaeyong Yoo <jaeyong.yoo@samsung.com>
>>
>> This patch implements get_maximum_gpfn by using the memory map
>> info in arch_domain (from set_memory_map hypercall).
>>
>> Signed-off-by: Evgeny Fedotov <e.fedotov@samsung.com>
> 
> Common implementation and a specific arch_get_maximum_gpfn() ?

domain_get_maximum_gfn is already a common implementation.

Except prefixing this function by arch_. I don't think it's possible to
be more common...

Regards,

-- 
Julien Grall


* Re: [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall
  2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
  2014-04-10 17:26   ` Andrew Cooper
@ 2014-04-11 13:57   ` Julien Grall
  1 sibling, 0 replies; 21+ messages in thread
From: Julien Grall @ 2014-04-11 13:57 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

Hi Wei,

Thank you for the patch.

On 04/10/2014 05:48 PM, Wei Huang wrote:

[..]

>      case XEN_DOMCTL_cacheflush:
>      {
>          unsigned long s = domctl->u.cacheflush.start_pfn;
>          unsigned long e = s + domctl->u.cacheflush.nr_pfns;
>  
>          if ( domctl->u.cacheflush.nr_pfns > (1U<<MAX_ORDER) )
> -            return -EINVAL;
> +            ret = -EINVAL;
>  
>          if ( e < s )
> -            return -EINVAL;
> +            ret = -EINVAL;
>  
> -        return p2m_cache_flush(d, s, e);
> +        ret = p2m_cache_flush(d, s, e);
>      }

Your change in XEN_DOMCTL_cacheflush is wrong. The code should not
continue if the sanity check has failed.

>  void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
> diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
> index 471c4cd..bfe38f4 100644
> --- a/xen/arch/arm/hvm.c
> +++ b/xen/arch/arm/hvm.c
> @@ -7,14 +7,15 @@
>  
>  #include <xsm/xsm.h>
>  
> +#include <xen/hvm/save.h>
>  #include <public/xen.h>
>  #include <public/hvm/params.h>
>  #include <public/hvm/hvm_op.h>
>  
>  #include <asm/hypercall.h>
> +#include <asm/gic.h>
>  
>  long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
> -
>  {
>      long rc = 0;
>  
> @@ -65,3 +66,505 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      return rc;
>  }
> +
> +static int vgic_irq_rank_save(struct vcpu *v,
> +                              struct vgic_rank *ext,
> +                              struct vgic_irq_rank *rank)

I would prefer if you moved all these functions out of this file and
into each subsystem's own file (timer code in time.c, vgic code in
vgic.c, ...).

That would make things easier when GICv3 and other new features are
supported.

> +{
> +    spin_lock(&rank->lock);
> +
> +    /* Some of VGIC registers are not used yet, it is for a future usage */
> +    /* IENABLE, IACTIVE, IPEND,  PENDSGI registers */
> +    ext->ienable = rank->ienable;
> +    ext->iactive = rank->iactive;
> +    ext->ipend = rank->ipend;
> +    ext->pendsgi = rank->pendsgi;
> +
> +    /* ICFG */
> +    ext->icfg[0] = rank->icfg[0];
> +    ext->icfg[1] = rank->icfg[1];
> +
> +    /* IPRIORITY */
> +    if ( sizeof(rank->ipriority) != sizeof (ext->ipriority) )
> +    {
> +        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check ipriority dumping space\n");
> +        return -EINVAL;
> +    }

You don't need to check it at runtime. BUILD_BUG_ON is enough here.


> +    memcpy(ext->ipriority, rank->ipriority, sizeof(rank->ipriority));
> +
> +    /* ITARGETS */
> +    if ( sizeof(rank->itargets) != sizeof (ext->itargets) )
> +    {
> +        dprintk(XENLOG_G_ERR, "hvm_hw_gic: check itargets dumping space\n");
> +        return -EINVAL;
> +    }

Same here, and in fact everywhere this file uses the pattern
"if ( sizeof(a) op sizeof(b) )".

[..]

> +HVM_REGISTER_SAVE_RESTORE(A15_TIMER, timer_save, timer_load, 2, HVMSR_PER_VCPU);

The timer structure is not A15-specific. Can you rename it to just "timer"?

[..]

> +void arch_hvm_save(struct domain *d, struct hvm_save_header *hdr)
> +{
> +    hdr->cpuid = READ_SYSREG32(MIDR_EL1);

You can directly use current_cpu_data.midr.bits;

> +}
> +
> +int arch_hvm_load(struct domain *d, struct hvm_save_header *hdr)
> +{
> +    uint32_t cpuid;
> +
> +    if ( hdr->magic != HVM_FILE_MAGIC )
> +    {
> +        printk(XENLOG_G_ERR "HVM%d restore: bad magic number %#"PRIx32"\n",
> +               d->domain_id, hdr->magic);
> +        return -1;
> +    }
> +
> +    if ( hdr->version != HVM_FILE_VERSION )
> +    {
> +        printk(XENLOG_G_ERR "HVM%d restore: unsupported version %u\n",
> +               d->domain_id, hdr->version);
> +        return -1;
> +    }
> +
> +    cpuid = READ_SYSREG32(MIDR_EL1);

Same here.

> +    if ( hdr->cpuid != cpuid )
> +    {
> +        printk(XENLOG_G_INFO "HVM%d restore: VM saved on one CPU "
> +               "(%#"PRIx32") and restored on another (%#"PRIx32").\n",
> +               d->domain_id, hdr->cpuid, cpuid);
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index 3683ae3..714a3c4 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -64,6 +64,8 @@ subdir-$(CONFIG_COMPAT) += compat
>  
>  subdir-$(x86_64) += hvm
>  
> +subdir-$(CONFIG_ARM) += hvm
> +

I think you can directly merge with the line above and use:

subdir-y += hvm

Regards,

-- 
Julien Grall


* Re: [PATCH 3/6] xen/arm: Implement do_suspend function
  2014-04-10 16:48 ` [PATCH 3/6] xen/arm: Implement do_suspend function Wei Huang
@ 2014-04-11 14:10   ` Julien Grall
  0 siblings, 0 replies; 21+ messages in thread
From: Julien Grall @ 2014-04-11 14:10 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

Hello,

Thank you for the patch.

On 04/10/2014 05:48 PM, Wei Huang wrote:
> From: Jaeyong Yoo <jaeyong.yoo@samsung.com>
> 
> Make sched_op in do_suspend (drivers/xen/manage.c) return 0 on
> successful suspend.
> 
> Signed-off-by: Alexey Sokolov <sokolov.a@samsung.com>
> ---
>  tools/libxc/xc_resume.c | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/tools/libxc/xc_resume.c b/tools/libxc/xc_resume.c
> index 18b4818..9315eb8 100644
> --- a/tools/libxc/xc_resume.c
> +++ b/tools/libxc/xc_resume.c
> @@ -73,6 +73,31 @@ static int modify_returncode(xc_interface *xch, uint32_t domid)
>      return 0;
>  }
>  
> +#elif defined(__arm__)
> +

Did you forget to implement arm64?

Regards,

-- 
Julien Grall


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
                   ` (5 preceding siblings ...)
  2014-04-10 16:48 ` [PATCH 6/6] xen/arm: Implement toolstack for xl restore/save and migrate Wei Huang
@ 2014-04-11 14:15 ` Julien Grall
  2014-04-11 14:22   ` Wei Huang
  6 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2014-04-11 14:15 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

Hi Wei,

Thank you for the series; I plan to give it a try on Midway. On which
board did you try the migration?

On 04/10/2014 05:48 PM, Wei Huang wrote:
> The following are the save/restore/live-migration patches I forward-ported to 
> the latest Xen tree. Note that I kept the order of original patches, and 
> Signed-off-by as well. For the patches I modified (see summary below), I
> added my name as Signed-off-by.
> 
> These patches aren't intended as the final version. Since Junghyun Yoo is 
> working on his patches, I think it is better to send out my version now 
> for discussion and proper merge. Let us make Xen 4.5 support these features.
> 
> === Modification Summary ===
> 
> Patch 1: 
> * Adapt to latest Xen source code
> * Setting IRQ status as enabled by checking ienable bits after loading VM info
> * Minor fixes to support 64-bit
> 
> Patch 5:
> * Add new p2m ARM type (p2m_ram_logdirty) to support dirty page tracking
> 
> Patch 6:
> * Enable save/restore/live migration support based on latest Xen code
> * Adapt to latest Xen source code

This might be a stupid question, but did you go through the various
comments on the v5 sent by Jaeyong? Sorry if you already have.

Regards,

-- 
Julien Grall


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-11 14:15 ` [PATCH 0/6] xen/arm: Xen save/restore/live migration support Julien Grall
@ 2014-04-11 14:22   ` Wei Huang
  2014-04-11 14:33     ` Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Wei Huang @ 2014-04-11 14:22 UTC (permalink / raw)
  To: Julien Grall
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

On 04/11/2014 09:15 AM, Julien Grall wrote:
> Hi Wei,
>
> Thank you for the series, I plan to give a try on midway. On which board
> did you try the migration?
I tested the patches on a system with A15 CPUs. I don't see any problem
running them on Midway. Note that you need patches for DomU to support
migration. Most of them were from Jaeyong's patches posted before (I
made some modifications). IMO they are hack-ish, so I didn't post them
here. I can send them to you in a private email; let me know.
>
> On 04/10/2014 05:48 PM, Wei Huang wrote:
>> The following are the save/restore/live-migration patches I forward-ported to
>> the latest Xen tree. Note that I kept the order of original patches, and
>> Signed-off-by as well. For the patches I modified (see summary below), I
>> added my name as Signed-off-by.
>>
>> These patches aren't intended as the final version. Since Junghyun Yoo is
>> working on his patches, I think it is better to send out my version now
>> for discussion and proper merge. Let us make Xen 4.5 support these features.
>>
>> === Modification Summary ===
>>
>> Patch 1:
>> * Adapt to latest Xen source code
>> * Setting IRQ status as enabled by checking ienable bits after loading VM info
>> * Minor fixes to support 64-bit
>>
>> Patch 5:
>> * Add new p2m ARM type (p2m_ram_logdirty) to support dirty page tracking
>>
>> Patch 6:
>> * Enable save/restore/live migration support based on latest Xen code
>> * Adapt to latest Xen source code
>
> Might be a stupid, did you go through the different comments on the v5
> sent by Jaeyong? Sorry if it was the case.
Not completely. I will work on them in the next revision. If Junghyun
has fixes, please post them too.
>
> Regards,
>


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-11 14:22   ` Wei Huang
@ 2014-04-11 14:33     ` Julien Grall
  2014-04-11 15:36       ` Wei Huang
  0 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2014-04-11 14:33 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

On 04/11/2014 03:22 PM, Wei Huang wrote:
> On 04/11/2014 09:15 AM, Julien Grall wrote:
>> Hi Wei,
>>
>> Thank you for the series, I plan to give a try on midway. On which board
>> did you try the migration?
> I tested the patches on a system with A15 CPUs. I don't see any problem
> to run them on midway. You need patches for DomU to support migration.
> Most of them were from Jaeyong's patch posted before (I made some
> modification). IMO they are hack-ish, so I didn't post them here. I can
> set them to you in private email. Let me know.

Is it the one posted in July 2013? If you have an updated version, can
you send me the series?

>>
>> On 04/10/2014 05:48 PM, Wei Huang wrote:
>>> The following are the save/restore/live-migration patches I
>>> forward-ported to
>>> the latest Xen tree. Note that I kept the order of original patches, and
>>> Signed-off-by as well. For the patches I modified (see summary below), I
>>> added my name as Signed-off-by.
>>>
>>> These patches aren't intended as the final version. Since Junghyun
>>> Yoo is
>>> working on his patches, I think it is better to send out my version now
>>> for discussion and proper merge. Let us make Xen 4.5 support these
>>> features.
>>>
>>> === Modification Summary ===
>>>
>>> Patch 1:
>>> * Adapt to latest Xen source code
>>> * Setting IRQ status as enabled by checking ienable bits after
>>> loading VM info
>>> * Minor fixes to support 64-bit
>>>
>>> Patch 5:
>>> * Add new p2m ARM type (p2m_ram_logdirty) to support dirty page tracking
>>>
>>> Patch 6:
>>> * Enable save/restore/live migration support based on latest Xen code
>>> * Adapt to latest Xen source code
>>
>> Might be a stupid, did you go through the different comments on the v5
>> sent by Jaeyong? Sorry if it was the case.
> Not completely. I will work on them in next revision. If Junghyun has
> fixes, please post them too.

Thanks, I will wait for the next version before reviewing. I don't want
to bother you with the same comments :).

Regards,

-- 
Julien Grall


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-11 14:33     ` Julien Grall
@ 2014-04-11 15:36       ` Wei Huang
  2014-04-11 15:53         ` Julien Grall
  0 siblings, 1 reply; 21+ messages in thread
From: Wei Huang @ 2014-04-11 15:36 UTC (permalink / raw)
  To: Julien Grall
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

On 04/11/2014 09:33 AM, Julien Grall wrote:
> On 04/11/2014 03:22 PM, Wei Huang wrote:
>> On 04/11/2014 09:15 AM, Julien Grall wrote:
>>> Hi Wei,
>>>
>>> Thank you for the series, I plan to give a try on midway. On which board
>>> did you try the migration?
>> I tested the patches on a system with A15 CPUs. I don't see any problem
>> to run them on midway. You need patches for DomU to support migration.
>> Most of them were from Jaeyong's patch posted before (I made some
>> modification). IMO they are hack-ish, so I didn't post them here. I can
>> set them to you in private email. Let me know.
>
> Is it the one posted in July 2013? If you have an updated version, can
> you send me the series?
Yes, it was from July 2013.
>
>>>
>>> On 04/10/2014 05:48 PM, Wei Huang wrote:
>>>> The following are the save/restore/live-migration patches I
>>>> forward-ported to
>>>> the latest Xen tree. Note that I kept the order of original patches, and
>>>> Signed-off-by as well. For the patches I modified (see summary below), I
>>>> added my name as Signed-off-by.
>>>>
>>>> These patches aren't intended as the final version. Since Junghyun
>>>> Yoo is
>>>> working on his patches, I think it is better to send out my version now
>>>> for discussion and proper merge. Let us make Xen 4.5 support these
>>>> features.
>>>>
>>>> === Modification Summary ===
>>>>
>>>> Patch 1:
>>>> * Adapt to latest Xen source code
>>>> * Setting IRQ status as enabled by checking ienable bits after
>>>> loading VM info
>>>> * Minor fixes to support 64-bit
>>>>
>>>> Patch 5:
>>>> * Add new p2m ARM type (p2m_ram_logdirty) to support dirty page tracking
>>>>
>>>> Patch 6:
>>>> * Enable save/restore/live migration support based on latest Xen code
>>>> * Adapt to latest Xen source code
>>>
>>> Might be a stupid, did you go through the different comments on the v5
>>> sent by Jaeyong? Sorry if it was the case.
>> Not completely. I will work on them in next revision. If Junghyun has
>> fixes, please post them too.
>
> Thanks, I will wait the next version before reviewing. I don't want to
> bother you with the same comments :).
>
> Regards,
>


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-11 15:36       ` Wei Huang
@ 2014-04-11 15:53         ` Julien Grall
  2014-04-11 16:15           ` Wei Huang
  0 siblings, 1 reply; 21+ messages in thread
From: Julien Grall @ 2014-04-11 15:53 UTC (permalink / raw)
  To: Wei Huang
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

On 04/11/2014 04:36 PM, Wei Huang wrote:
> On 04/11/2014 09:33 AM, Julien Grall wrote:
>> On 04/11/2014 03:22 PM, Wei Huang wrote:
>>> On 04/11/2014 09:15 AM, Julien Grall wrote:
>>>> Hi Wei,
>>>>
>>>> Thank you for the series, I plan to give a try on midway. On which
>>>> board
>>>> did you try the migration?
>>> I tested the patches on a system with A15 CPUs. I don't see any problem
>>> to run them on midway. You need patches for DomU to support migration.
>>> Most of them were from Jaeyong's patch posted before (I made some
>>> modification). IMO they are hack-ish, so I didn't post them here. I can
>>> set them to you in private email. Let me know.
>>
>> Is it the one posted in July 2013? If you have an updated version, can
>> you send me the series?
> Yes, it was from July 2013.

What happens if the guest doesn't have this patch? Does the toolstack
notify the user?

Regards,

-- 
Julien Grall


* Re: [PATCH 0/6] xen/arm: Xen save/restore/live migration support
  2014-04-11 15:53         ` Julien Grall
@ 2014-04-11 16:15           ` Wei Huang
  0 siblings, 0 replies; 21+ messages in thread
From: Wei Huang @ 2014-04-11 16:15 UTC (permalink / raw)
  To: Julien Grall
  Cc: stefano.stabellini, yjhyun.yoo, ian.campbell, jaeyong.yoo, xen-devel

On 04/11/2014 10:53 AM, Julien Grall wrote:
> On 04/11/2014 04:36 PM, Wei Huang wrote:
>> On 04/11/2014 09:33 AM, Julien Grall wrote:
>>> On 04/11/2014 03:22 PM, Wei Huang wrote:
>>>> On 04/11/2014 09:15 AM, Julien Grall wrote:
>>>>> Hi Wei,
>>>>>
>>>>> Thank you for the series, I plan to give a try on midway. On which
>>>>> board
>>>>> did you try the migration?
>>>> I tested the patches on a system with A15 CPUs. I don't see any problem
>>>> to run them on midway. You need patches for DomU to support migration.
>>>> Most of them were from Jaeyong's patch posted before (I made some
>>>> modification). IMO they are hack-ish, so I didn't post them here. I can
>>>> set them to you in private email. Let me know.
>>>
>>> Is it the one posted in July 2013? If you have an updated version, can
>>> you send me the series?
>> Yes, it was from July 2013.
>
> What happen if the guest doesn't have this patch? Does the toolstack
> notify the user?
During migration, the toolstack will probe and check the guest's
status. If the guest doesn't support suspend, the migration (and
save/restore as well) will abort.
>
> Regards,
>


end of thread, other threads:[~2014-04-11 16:15 UTC | newest]

Thread overview: 21+ messages
2014-04-10 16:48 [PATCH 0/6] xen/arm: Xen save/restore/live migration support Wei Huang
2014-04-10 16:48 ` [PATCH 1/6] xen/arm: Save and restore support for hvm context hypercall Wei Huang
2014-04-10 17:26   ` Andrew Cooper
2014-04-10 21:53     ` Wei Huang
2014-04-11 13:57   ` Julien Grall
2014-04-10 16:48 ` [PATCH 2/6] xen/arm: implement get_maximum_gpfn hypercall Wei Huang
2014-04-10 17:28   ` Andrew Cooper
2014-04-10 21:54     ` Wei Huang
2014-04-11 13:17     ` Julien Grall
2014-04-11 13:15   ` Julien Grall
2014-04-10 16:48 ` [PATCH 3/6] xen/arm: Implement do_suspend function Wei Huang
2014-04-11 14:10   ` Julien Grall
2014-04-10 16:48 ` [PATCH 4/6] xen/arm: Implement VLPT for guest p2m mapping in live migration Wei Huang
2014-04-10 16:48 ` [PATCH 5/6] xen/arm: Implement hypercall for dirty page tracing Wei Huang
2014-04-10 16:48 ` [PATCH 6/6] xen/arm: Implement toolstack for xl restore/save and migrate Wei Huang
2014-04-11 14:15 ` [PATCH 0/6] xen/arm: Xen save/restore/live migration support Julien Grall
2014-04-11 14:22   ` Wei Huang
2014-04-11 14:33     ` Julien Grall
2014-04-11 15:36       ` Wei Huang
2014-04-11 15:53         ` Julien Grall
2014-04-11 16:15           ` Wei Huang
