* [PATCH 0/4] support linear p2m list in migrate stream v2
From: Juergen Gross @ 2015-12-11 11:31 UTC (permalink / raw)
  To: xen-devel, Ian.Campbell, ian.jackson, stefano.stabellini,
	wei.liu2, andrew.cooper3
  Cc: Juergen Gross

Add support for the virtual mapped linear p2m list of pv-domains in the
v2 migrate stream. This will allow migration of domains larger than
512 GB.

Tested with 32- and 64-bit pv-domains, both with and without a linear
p2m list, and with an HVM domain.
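
As a back-of-the-envelope illustration (editor's sketch, not part of the
series) of where the old limit comes from: with 4 KiB pages and 8-byte
entries, each p2m page holds 512 entries, so a 3-level tree tops out at
512^3 pfns:

    #include <stdio.h>

    int main(void)
    {
        unsigned long page_size = 4096, width = 8;      /* 64-bit guest */
        unsigned long fpp = page_size / width;          /* 512 entries/page */
        unsigned long max_pfns = fpp * fpp * fpp;       /* 3 tree levels */

        printf("tree limit: %lu pfns = %lu GiB\n",
               max_pfns, (max_pfns * page_size) >> 30); /* 512 GiB */
        return 0;
    }

The linear list is instead bounded only by the guest's virtual address
space, so this series removes that limit for guests providing one.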

Juergen Gross (4):
  libxc: split mapping p2m leaves into a separate function
  libxc: support of linear p2m list for migration of pv-domains
  libxc: stop migration in case of p2m list structural changes
  libxc: set flag for support of linear p2m list in domain builder

 tools/libxc/xc_dom_compat_linux.c |   2 +-
 tools/libxc/xc_dom_core.c         |   2 +
 tools/libxc/xc_sr_common.h        |  11 ++
 tools/libxc/xc_sr_save.c          |   4 +
 tools/libxc/xc_sr_save_x86_hvm.c  |   7 ++
 tools/libxc/xc_sr_save_x86_pv.c   | 248 +++++++++++++++++++++++++++++++++-----
 6 files changed, 244 insertions(+), 30 deletions(-)

-- 
2.6.2


* [PATCH 1/4] libxc: split mapping p2m leaves into a separate function
From: Juergen Gross @ 2015-12-11 11:31 UTC (permalink / raw)
  To: xen-devel, Ian.Campbell, ian.jackson, stefano.stabellini,
	wei.liu2, andrew.cooper3
  Cc: Juergen Gross

In order to prepare for using the virtual mapped linear p2m list for
migration, split the mapping of the p2m leaf pages into a separate
function.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libxc/xc_sr_save_x86_pv.c | 77 ++++++++++++++++++++++++-----------------
 1 file changed, 45 insertions(+), 32 deletions(-)

diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index c8d6f0b..d7acd37 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -68,6 +68,50 @@ static int copy_mfns_from_guest(const struct xc_sr_context *ctx,
 }
 
 /*
+ * Map the p2m leaf pages and build an array of their pfns.
+ */
+static int map_p2m_leaves(struct xc_sr_context *ctx, xen_pfn_t *mfns,
+                          size_t n_mfns)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned x;
+
+    ctx->x86_pv.p2m = xc_map_foreign_pages(xch, ctx->domid, PROT_READ,
+                                           mfns, n_mfns);
+    if ( !ctx->x86_pv.p2m )
+    {
+        PERROR("Failed to map p2m frames");
+        return -1;
+    }
+
+    ctx->save.p2m_size = ctx->x86_pv.max_pfn + 1;
+    ctx->x86_pv.p2m_frames = n_mfns;
+    ctx->x86_pv.p2m_pfns = malloc(n_mfns * sizeof(*mfns));
+    if ( !ctx->x86_pv.p2m_pfns )
+    {
+        ERROR("Cannot allocate %zu bytes for p2m pfns list",
+              n_mfns * sizeof(*mfns));
+        return -1;
+    }
+
+    /* Convert leaf frames from mfns to pfns. */
+    for ( x = 0; x < n_mfns; ++x )
+    {
+        if ( !mfn_in_pseudophysmap(ctx, mfns[x]) )
+        {
+            ERROR("Bad mfn in p2m_frame_list[%u]", x);
+            dump_bad_pseudophysmap_entry(ctx, mfns[x]);
+            errno = ERANGE;
+            return -1;
+        }
+
+        ctx->x86_pv.p2m_pfns[x] = mfn_to_pfn(ctx, mfns[x]);
+    }
+
+    return 0;
+}
+
+/*
  * Walk the guests frame list list and frame list to identify and map the
  * frames making up the guests p2m table.  Construct a list of pfns making up
  * the table.
@@ -173,7 +217,6 @@ static int map_p2m(struct xc_sr_context *ctx)
     ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
     DPRINTF("max_pfn %#lx, p2m_frames %d", ctx->x86_pv.max_pfn,
             ctx->x86_pv.p2m_frames);
-    ctx->save.p2m_size = ctx->x86_pv.max_pfn + 1;
     fl_entries  = (ctx->x86_pv.max_pfn / fpp) + 1;
 
     /* Map the guest mid p2m frames. */
@@ -211,38 +254,8 @@ static int map_p2m(struct xc_sr_context *ctx)
     }
 
     /* Map the p2m leaves themselves. */
-    ctx->x86_pv.p2m = xc_map_foreign_pages(xch, ctx->domid, PROT_READ,
-                                           local_fl, fl_entries);
-    if ( !ctx->x86_pv.p2m )
-    {
-        PERROR("Failed to map p2m frames");
-        goto err;
-    }
+    rc = map_p2m_leaves(ctx, local_fl, fl_entries);
 
-    ctx->x86_pv.p2m_frames = fl_entries;
-    ctx->x86_pv.p2m_pfns = malloc(local_fl_size);
-    if ( !ctx->x86_pv.p2m_pfns )
-    {
-        ERROR("Cannot allocate %zu bytes for p2m pfns list",
-              local_fl_size);
-        goto err;
-    }
-
-    /* Convert leaf frames from mfns to pfns. */
-    for ( x = 0; x < fl_entries; ++x )
-    {
-        if ( !mfn_in_pseudophysmap(ctx, local_fl[x]) )
-        {
-            ERROR("Bad mfn in p2m_frame_list[%u]", x);
-            dump_bad_pseudophysmap_entry(ctx, local_fl[x]);
-            errno = ERANGE;
-            goto err;
-        }
-
-        ctx->x86_pv.p2m_pfns[x] = mfn_to_pfn(ctx, local_fl[x]);
-    }
-
-    rc = 0;
 err:
 
     free(local_fl);
-- 
2.6.2


* [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Juergen Gross @ 2015-12-11 11:31 UTC (permalink / raw)
  To: xen-devel, Ian.Campbell, ian.jackson, stefano.stabellini,
	wei.liu2, andrew.cooper3
  Cc: Juergen Gross

In order to be able to migrate pv-domains with more than 512 GB of RAM,
the p2m information can be specified by the guest kernel via a virtual
mapped linear p2m list instead of a 3-level tree.

Add support for this new p2m format in libxc.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libxc/xc_sr_save_x86_pv.c | 139 +++++++++++++++++++++++++++++++++++++++-
 1 file changed, 136 insertions(+), 3 deletions(-)

diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index d7acd37..0237378 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -116,7 +116,7 @@ static int map_p2m_leaves(struct xc_sr_context *ctx, xen_pfn_t *mfns,
  * frames making up the guests p2m table.  Construct a list of pfns making up
  * the table.
  */
-static int map_p2m(struct xc_sr_context *ctx)
+static int map_p2m_tree(struct xc_sr_context *ctx)
 {
     /* Terminology:
      *
@@ -138,8 +138,6 @@ static int map_p2m(struct xc_sr_context *ctx)
     void *guest_fl = NULL;
     size_t local_fl_size;
 
-    ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
-                                    ctx->x86_pv.width) - 1;
     fpp = PAGE_SIZE / ctx->x86_pv.width;
     fll_entries = (ctx->x86_pv.max_pfn / (fpp * fpp)) + 1;
     if ( fll_entries > fpp )
@@ -270,6 +268,141 @@ err:
 }
 
 /*
+ * Map the guest p2m frames specified via a cr3 value, a virtual address, and
+ * the maximum pfn.
+ */
+static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
+{
+    xc_interface *xch = ctx->xch;
+    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
+    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
+    uint64_t *ptes;
+    xen_pfn_t *mfns;
+    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
+    int rc = -1;
+
+    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
+    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
+    {
+        ERROR("Bad p2m_cr3 value %#lx", p2m_cr3);
+        errno = ERANGE;
+        return -1;
+    }
+
+    p2m_vaddr = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_vaddr,
+                          ctx->x86_pv.width);
+    fpp = PAGE_SIZE / ctx->x86_pv.width;
+    ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
+    p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;
+    DPRINTF("p2m list from %#lx to %#lx, root at %#lx", p2m_vaddr, p2m_end,
+            p2m_mfn);
+    DPRINTF("max_pfn %#lx, p2m_frames %d", ctx->x86_pv.max_pfn,
+            ctx->x86_pv.p2m_frames);
+
+    mask = (ctx->x86_pv.width == 8) ?
+           0x0000ffffffffffffULL : 0x00000000ffffffffULL;
+
+    mfns = malloc(sizeof(*mfns));
+    if ( !mfns )
+    {
+        ERROR("Cannot allocate memory for array of %u mfns", 1);
+        goto err;
+    }
+    mfns[0] = p2m_mfn;
+    off = 0;
+    saved_mfn = 0;
+    idx_start = idx_end = saved_idx = 0;
+
+    for ( level = ctx->x86_pv.levels; level > 0; level-- )
+    {
+        n_pages = idx_end - idx_start + 1;
+        ptes = xc_map_foreign_pages(xch, ctx->domid, PROT_READ, mfns, n_pages);
+        if ( !ptes )
+        {
+            PERROR("Failed to map %u page table pages for p2m list", n_pages);
+            goto err;
+        }
+        free(mfns);
+
+        shift = level * 9 + 3;
+        idx_start = ((p2m_vaddr - off) & mask) >> shift;
+        idx_end = ((p2m_end - off) & mask) >> shift;
+        idx = idx_end - idx_start + 1;
+        mfns = malloc(sizeof(*mfns) * idx);
+        if ( !mfns )
+        {
+            ERROR("Cannot allocate memory for array of %u mfns", idx);
+            goto err;
+        }
+
+        for ( idx = idx_start; idx <= idx_end; idx++ )
+        {
+            mfn = pte_to_frame(ptes[idx]);
+            if ( mfn == 0 || mfn > ctx->x86_pv.max_mfn )
+            {
+                ERROR("Bad mfn %#lx during page table walk for vaddr %#lx at level %d of p2m list",
+                      mfn, off + ((xen_vaddr_t)idx << shift), level);
+                errno = ERANGE;
+                goto err;
+            }
+            mfns[idx - idx_start] = mfn;
+
+            /* Maximum pfn check at level 2. Same reasoning as for p2m tree. */
+            if ( level == 2 )
+            {
+                if ( mfn != saved_mfn )
+                {
+                    saved_mfn = mfn;
+                    saved_idx = idx - idx_start;
+                }
+            }
+        }
+
+        if ( level == 2 )
+        {
+            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp - 1;
+            if ( max_pfn < ctx->x86_pv.max_pfn )
+            {
+                ctx->x86_pv.max_pfn = max_pfn;
+                ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
+                p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;
+                idx_end = idx_start + saved_idx;
+            }
+        }
+
+        munmap(ptes, n_pages * PAGE_SIZE);
+        ptes = NULL;
+        off = p2m_vaddr & ((mask >> shift) << shift);
+    }
+
+    /* Map the p2m leaves themselves. */
+    rc = map_p2m_leaves(ctx, mfns, idx_end - idx_start + 1);
+
+err:
+    free(mfns);
+    if ( ptes )
+        munmap(ptes, n_pages * PAGE_SIZE);
+
+    return rc;
+}
+
+/*
+ * Map the guest p2m frames.
+ * Depending on guest support this might either be a virtual mapped linear
+ * list (preferred format) or a 3 level tree linked via mfns.
+ */
+static int map_p2m(struct xc_sr_context *ctx)
+{
+    uint64_t p2m_cr3;
+
+    ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
+                                    ctx->x86_pv.width) - 1;
+    p2m_cr3 = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_cr3, ctx->x86_pv.width);
+
+    return p2m_cr3 ? map_p2m_list(ctx, p2m_cr3) : map_p2m_tree(ctx);
+}
+
+/*
  * Obtain a specific vcpus basic state and write an X86_PV_VCPU_BASIC record
  * into the stream.  Performs mfn->pfn conversion on architectural state.
  */
-- 
2.6.2

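[ Editor's note: to make the page-table walk in map_p2m_list() above
easier to follow, here is a minimal stand-alone sketch of its index
arithmetic for a 64-bit guest with 4-level paging. The virtual address
is made up; the mask strips the 16 sign-extension bits of a canonical
kernel address before each shift, exactly as in the patch: ]

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t vaddr = 0xffffc90000000000ULL;    /* made-up p2m_vaddr */
        uint64_t vend  = vaddr + (1ULL << 30) - 1; /* 1 GiB of p2m pages */
        uint64_t mask  = 0x0000ffffffffffffULL;    /* drop bits 63..48 */
        uint64_t off   = 0;
        unsigned level;

        for ( level = 4; level > 0; level-- )
        {
            unsigned shift = level * 9 + 3;        /* 39, 30, 21, 12 */
            uint64_t idx_start = ((vaddr - off) & mask) >> shift;
            uint64_t idx_end   = ((vend  - off) & mask) >> shift;

            printf("level %u: ptes %"PRIu64"..%"PRIu64" (%"PRIu64" pages)\n",
                   level, idx_start, idx_end, idx_end - idx_start + 1);

            off = vaddr & ((mask >> shift) << shift);
        }
        return 0;
    }

[ 1 GiB of p2m list pages holds 2^27 8-byte entries, i.e. describes
512 GiB of guest memory; the 262144 level-1 frames found above are what
map_p2m_leaves() would then map. ]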

* [PATCH 3/4] libxc: stop migration in case of p2m list structural changes
From: Juergen Gross @ 2015-12-11 11:31 UTC (permalink / raw)
  To: xen-devel, Ian.Campbell, ian.jackson, stefano.stabellini,
	wei.liu2, andrew.cooper3
  Cc: Juergen Gross

With support for the virtual mapped linear p2m list for migration it is
now possible to detect structural changes of the p2m list, which before
would lead to a crashing or otherwise misbehaving domU.

A guest supporting the linear p2m list will increment the
p2m_generation counter located in the shared info page before and after
each modification of a mapping related to the p2m list. A change of
that counter can be detected by the tools and reacted upon.

As such a change should occur only very rarely once the domU is up, the
simplest reaction is to cancel the migration in such an event.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libxc/xc_sr_common.h       | 11 ++++++++++
 tools/libxc/xc_sr_save.c         |  4 ++++
 tools/libxc/xc_sr_save_x86_hvm.c |  7 +++++++
 tools/libxc/xc_sr_save_x86_pv.c  | 44 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 66 insertions(+)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 9aecde2..bfb9602 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -83,6 +83,14 @@ struct xc_sr_save_ops
     int (*end_of_checkpoint)(struct xc_sr_context *ctx);
 
     /**
+     * Check whether a new iteration can be started.  This is called
+     * before each iteration to verify that all criteria for the migration
+     * are still met.  If not, the migration is either cancelled via a bad
+     * rc or the situation is handled, e.g. by sending appropriate records.
+     */
+    int (*check_iteration)(struct xc_sr_context *ctx);
+
+    /**
      * Clean up the local environment.  Will be called exactly once, either
      * after a successful save, or upon encountering an error.
      */
@@ -280,6 +288,9 @@ struct xc_sr_context
             /* Read-only mapping of guests shared info page */
             shared_info_any_t *shinfo;
 
+            /* p2m generation count for verifying validity of local p2m. */
+            uint64_t p2m_generation;
+
             union
             {
                 struct
diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
index cefcef5..c235706 100644
--- a/tools/libxc/xc_sr_save.c
+++ b/tools/libxc/xc_sr_save.c
@@ -370,6 +370,10 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
     DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
                                     &ctx->save.dirty_bitmap_hbuf);
 
+    rc = ctx->save.ops.check_iteration(ctx);
+    if ( rc )
+        return rc;
+
     for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
     {
         if ( !test_bit(p, dirty_bitmap) )
diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xc_sr_save_x86_hvm.c
index f3d6cee..aa24f90 100644
--- a/tools/libxc/xc_sr_save_x86_hvm.c
+++ b/tools/libxc/xc_sr_save_x86_hvm.c
@@ -175,6 +175,12 @@ static int x86_hvm_start_of_checkpoint(struct xc_sr_context *ctx)
     return 0;
 }
 
+static int x86_hvm_check_iteration(struct xc_sr_context *ctx)
+{
+    /* no-op */
+    return 0;
+}
+
 static int x86_hvm_end_of_checkpoint(struct xc_sr_context *ctx)
 {
     int rc;
@@ -221,6 +227,7 @@ struct xc_sr_save_ops save_ops_x86_hvm =
     .start_of_stream     = x86_hvm_start_of_stream,
     .start_of_checkpoint = x86_hvm_start_of_checkpoint,
     .end_of_checkpoint   = x86_hvm_end_of_checkpoint,
+    .check_iteration     = x86_hvm_check_iteration,
     .cleanup             = x86_hvm_cleanup,
 };
 
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index 0237378..3a58d0d 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -268,6 +268,39 @@ err:
 }
 
 /*
+ * Get p2m_generation count.
+ * Returns an error if the generation count has changed since the last call.
+ */
+static int get_p2m_generation(struct xc_sr_context *ctx)
+{
+    uint64_t p2m_generation;
+    int rc;
+
+    p2m_generation = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_generation,
+                               ctx->x86_pv.width);
+
+    rc = (p2m_generation == ctx->x86_pv.p2m_generation) ? 0 : -1;
+    ctx->x86_pv.p2m_generation = p2m_generation;
+
+    return rc;
+}
+
+static int x86_pv_check_iteration_p2m_list(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc;
+
+    if ( !ctx->save.live )
+        return 0;
+
+    rc = get_p2m_generation(ctx);
+    if ( rc )
+        ERROR("p2m generation count changed. Migration aborted.");
+
+    return rc;
+}
+
+/*
  * Map the guest p2m frames specified via a cr3 value, a virtual address, and
  * the maximum pfn.
  */
@@ -281,6 +314,9 @@ static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
     unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
     int rc = -1;
 
+    /* Before each iteration check for local p2m list still valid. */
+    ctx->save.ops.check_iteration = x86_pv_check_iteration_p2m_list;
+
     p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
     if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
     {
@@ -289,6 +325,8 @@ static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
         return -1;
     }
 
+    get_p2m_generation(ctx);
+
     p2m_vaddr = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_vaddr,
                           ctx->x86_pv.width);
     fpp = PAGE_SIZE / ctx->x86_pv.width;
@@ -1034,6 +1072,11 @@ static int x86_pv_end_of_checkpoint(struct xc_sr_context *ctx)
     return 0;
 }
 
+static int x86_pv_check_iteration(struct xc_sr_context *ctx)
+{
+    return 0;
+}
+
 /*
  * save_ops function.  Cleanup.
  */
@@ -1061,6 +1104,7 @@ struct xc_sr_save_ops save_ops_x86_pv =
     .start_of_stream     = x86_pv_start_of_stream,
     .start_of_checkpoint = x86_pv_start_of_checkpoint,
     .end_of_checkpoint   = x86_pv_end_of_checkpoint,
+    .check_iteration     = x86_pv_check_iteration,
     .cleanup             = x86_pv_cleanup,
 };
 
-- 
2.6.2

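[ Editor's note: a hypothetical guest-side sketch of the protocol the
commit message describes - not code from this series or from Linux. The
guest brackets every structural p2m change with two increments of the
shared-info counter, seqlock-style, so a saver sampling the counter
around its copy phase detects a racing update: ]

    #include <stdint.h>

    /* 'gen' stands in for shared_info->arch.p2m_generation. */
    static void p2m_change_begin(volatile uint64_t *gen)
    {
        ++*gen;            /* counter odd: change in flight */
        /* write barrier here in real code */
    }

    static void p2m_change_end(volatile uint64_t *gen)
    {
        /* write barrier here in real code */
        ++*gen;            /* counter even: p2m consistent again */
    }

[ get_p2m_generation() above implements the reader side: any difference
between two samples aborts the live migration. ]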

* [PATCH 4/4] libxc: set flag for support of linear p2m list in domain builder
From: Juergen Gross @ 2015-12-11 11:31 UTC (permalink / raw)
  To: xen-devel, Ian.Campbell, ian.jackson, stefano.stabellini,
	wei.liu2, andrew.cooper3
  Cc: Juergen Gross

Set the SIF_VIRT_P2M_4TOOLS flag for pv-domUs in the domain builder
to indicate that the Xen tools have full support for the virtual mapped
linear p2m list.

This will enable pv-domUs to drop support for the 3-level p2m tree
and use the linear list only. Without this flag set, some kernels
might limit themselves to 512 GB of memory in order not to break
migration.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libxc/xc_dom_compat_linux.c | 2 +-
 tools/libxc/xc_dom_core.c         | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/libxc/xc_dom_compat_linux.c b/tools/libxc/xc_dom_compat_linux.c
index abbc09f..c922c61 100644
--- a/tools/libxc/xc_dom_compat_linux.c
+++ b/tools/libxc/xc_dom_compat_linux.c
@@ -59,7 +59,7 @@ int xc_linux_build(xc_interface *xch, uint32_t domid,
          ((rc = xc_dom_ramdisk_file(dom, initrd_name)) != 0) )
         goto out;
 
-    dom->flags = flags;
+    dom->flags |= flags;
     dom->console_evtchn = console_evtchn;
     dom->xenstore_evtchn = store_evtchn;
 
diff --git a/tools/libxc/xc_dom_core.c b/tools/libxc/xc_dom_core.c
index 2061ba6..55c779d 100644
--- a/tools/libxc/xc_dom_core.c
+++ b/tools/libxc/xc_dom_core.c
@@ -777,6 +777,8 @@ struct xc_dom_image *xc_dom_allocate(xc_interface *xch,
     dom->parms.elf_paddr_offset = UNSET_ADDR;
     dom->parms.p2m_base = UNSET_ADDR;
 
+    dom->flags = SIF_VIRT_P2M_4TOOLS;
+
     dom->alloc_malloc += sizeof(*dom);
     return dom;
 
-- 
2.6.2

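[ Editor's note: a hypothetical sketch of the guest side of this
contract. SIF_VIRT_P2M_4TOOLS comes from xen/include/public/xen.h;
xen_start_info is the Linux name for the start_info page, and the exact
call site is kernel-specific: ]

    if ( xen_start_info->flags & SIF_VIRT_P2M_4TOOLS )
    {
        /* The toolstack fully supports the linear p2m list, so the
         * kernel may skip building the 3-level mfn tree entirely. */
    }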

* Re: [PATCH 0/4] support linear p2m list in migrate stream v2
From: Andrew Cooper @ 2015-12-11 14:18 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 11:31, Juergen Gross wrote:
> Add support for the virtual mapped linear p2m list of pv-domains in the
> v2 migrate stream. This will allow migration of domains larger than
> 512 GB.
>
> Tested with 32- and 64-bit pv-domains, both with and without a linear
> p2m list, and with an HVM domain.
>
> Juergen Gross (4):
>   libxc: split mapping p2m leaves into a separate function
>   libxc: support of linear p2m list for migration of pv-domains
>   libxc: stop migration in case of p2m list structural changes
>   libxc: set flag for support of linear p2m list in domain builder
>
>  tools/libxc/xc_dom_compat_linux.c |   2 +-
>  tools/libxc/xc_dom_core.c         |   2 +
>  tools/libxc/xc_sr_common.h        |  11 ++
>  tools/libxc/xc_sr_save.c          |   4 +
>  tools/libxc/xc_sr_save_x86_hvm.c  |   7 ++
>  tools/libxc/xc_sr_save_x86_pv.c   | 248 +++++++++++++++++++++++++++++++++-----
>  6 files changed, 244 insertions(+), 30 deletions(-)
>

Wow - surprisingly little change for what seems like a large new feature.

Please can you see about patching docs/features/migration.pandoc to
indicate that linear p2m is now supported for migration.  This looks
like it would neatly fit into patch 4.

~Andrew


* Re: [PATCH 0/4] support linear p2m list in migrate stream v2
From: Juergen Gross @ 2015-12-11 14:20 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 15:18, Andrew Cooper wrote:
> On 11/12/15 11:31, Juergen Gross wrote:
>> Add support for the virtual mapped linear p2m list of pv-domains in the
>> v2 migrate stream. This will allow migration of domains larger than
>> 512 GB.
>>
>> Tested with 32- and 64-bit pv-domains, both with and without a linear
>> p2m list, and with an HVM domain.
>>
>> Juergen Gross (4):
>>   libxc: split mapping p2m leaves into a separate function
>>   libxc: support of linear p2m list for migration of pv-domains
>>   libxc: stop migration in case of p2m list structural changes
>>   libxc: set flag for support of linear p2m list in domain builder
>>
>>  tools/libxc/xc_dom_compat_linux.c |   2 +-
>>  tools/libxc/xc_dom_core.c         |   2 +
>>  tools/libxc/xc_sr_common.h        |  11 ++
>>  tools/libxc/xc_sr_save.c          |   4 +
>>  tools/libxc/xc_sr_save_x86_hvm.c  |   7 ++
>>  tools/libxc/xc_sr_save_x86_pv.c   | 248 +++++++++++++++++++++++++++++++++-----
>>  6 files changed, 244 insertions(+), 30 deletions(-)
>>
> 
> Wow - surprisingly little change for what seems like a large new feature.

Yeah, I was pleased, too.

> Please can you see about patching docs/features/migration.pandoc to
> indicate that linear p2m is now supported for migration.  This looks
> like it would neatly fit into patch 4.

Okay, will do.


Juergen


* Re: [PATCH 1/4] libxc: split mapping p2m leaves into a separate function
From: Andrew Cooper @ 2015-12-11 14:21 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 11:31, Juergen Gross wrote:
> In order to prepare for using the virtual mapped linear p2m list for
> migration, split the mapping of the p2m leaf pages into a separate
> function.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Andrew Cooper @ 2015-12-11 14:51 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 11:31, Juergen Gross wrote:
> In order to be able to migrate pv-domains with more than 512 GB of RAM,
> the p2m information can be specified by the guest kernel via a virtual
> mapped linear p2m list instead of a 3-level tree.
>
> Add support for this new p2m format in libxc.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  tools/libxc/xc_sr_save_x86_pv.c | 139 +++++++++++++++++++++++++++++++++++++++-
>  1 file changed, 136 insertions(+), 3 deletions(-)
>
> diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
> index d7acd37..0237378 100644
> --- a/tools/libxc/xc_sr_save_x86_pv.c
> +++ b/tools/libxc/xc_sr_save_x86_pv.c
> @@ -116,7 +116,7 @@ static int map_p2m_leaves(struct xc_sr_context *ctx, xen_pfn_t *mfns,
>   * frames making up the guests p2m table.  Construct a list of pfns making up
>   * the table.
>   */
> -static int map_p2m(struct xc_sr_context *ctx)
> +static int map_p2m_tree(struct xc_sr_context *ctx)
>  {
>      /* Terminology:
>       *
> @@ -138,8 +138,6 @@ static int map_p2m(struct xc_sr_context *ctx)
>      void *guest_fl = NULL;
>      size_t local_fl_size;
>  
> -    ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
> -                                    ctx->x86_pv.width) - 1;
>      fpp = PAGE_SIZE / ctx->x86_pv.width;
>      fll_entries = (ctx->x86_pv.max_pfn / (fpp * fpp)) + 1;
>      if ( fll_entries > fpp )
> @@ -270,6 +268,141 @@ err:
>  }
>  
>  /*
> + * Map the guest p2m frames specified via a cr3 value, a virtual address, and
> + * the maximum pfn.

Probably worth stating that this function assumes PAE paging is in use.

> + */
> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
> +{
> +    xc_interface *xch = ctx->xch;
> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
> +    uint64_t *ptes;
> +    xen_pfn_t *mfns;
> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
> +    int rc = -1;
> +
> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )

mfn 0 isn't invalid to use here.  It could, in principle, be available
for PV guest use.

> +    {
> +        ERROR("Bad p2m_cr3 value %#lx", p2m_cr3);
> +        errno = ERANGE;
> +        return -1;
> +    }
> +
> +    p2m_vaddr = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_vaddr,
> +                          ctx->x86_pv.width);
> +    fpp = PAGE_SIZE / ctx->x86_pv.width;
> +    ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;

ctx->x86_pv.max_pfn / fpp + 1

It is mathematically identical, but resilient to overflow.
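
[ Editor's illustration of the overflow concern, with hypothetical
values: for a pathological max_pfn the original expression wraps to 0,
the suggested one does not: ]

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned long max_pfn = ULONG_MAX, fpp = 512;

        printf("%lu vs %lu\n",
               (max_pfn + fpp) / fpp,   /* wraps: (ULONG_MAX+512)/512 == 0 */
               max_pfn / fpp + 1);      /* stays a sane frame count */
        return 0;
    }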

> +    p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;

You probably want to sanity check both p2m_vaddr and p2m_end for being
either <4G or canonical, depending on the guest, and out of the Xen
mappings.

I believe this allows you to drop 'mask' in its entirety.
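
[ Editor's sketch of one possible shape of that check; is_canonical()
and the HYPERVISOR_VIRT_START_64/END_64 bounds are hypothetical names
standing in for whatever the tree provides: ]

    static int check_p2m_vaddr(struct xc_sr_context *ctx,
                               xen_vaddr_t start, xen_vaddr_t end)
    {
        if ( ctx->x86_pv.width == 4 )
            /* 32-bit guest: the whole list must live below 4 GiB. */
            return end < (1ULL << 32) ? 0 : -1;

        /* 64-bit guest: both ends canonical and clear of the Xen hole. */
        if ( !is_canonical(start) || !is_canonical(end) )
            return -1;

        return (end < HYPERVISOR_VIRT_START_64 ||
                start > HYPERVISOR_VIRT_END_64) ? 0 : -1;
    }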

> +    DPRINTF("p2m list from %#lx to %#lx, root at %#lx", p2m_vaddr, p2m_end,
> +            p2m_mfn);
> +    DPRINTF("max_pfn %#lx, p2m_frames %d", ctx->x86_pv.max_pfn,
> +            ctx->x86_pv.p2m_frames);
> +
> +    mask = (ctx->x86_pv.width == 8) ?
> +           0x0000ffffffffffffULL : 0x00000000ffffffffULL;
> +
> +    mfns = malloc(sizeof(*mfns));
> +    if ( !mfns )
> +    {
> +        ERROR("Cannot allocate memory for array of %u mfns", 1);
> +        goto err;
> +    }
> +    mfns[0] = p2m_mfn;
> +    off = 0;
> +    saved_mfn = 0;
> +    idx_start = idx_end = saved_idx = 0;
> +
> +    for ( level = ctx->x86_pv.levels; level > 0; level-- )
> +    {
> +        n_pages = idx_end - idx_start + 1;
> +        ptes = xc_map_foreign_pages(xch, ctx->domid, PROT_READ, mfns, n_pages);
> +        if ( !ptes )
> +        {
> +            PERROR("Failed to map %u page table pages for p2m list", n_pages);
> +            goto err;
> +        }
> +        free(mfns);
> +
> +        shift = level * 9 + 3;
> +        idx_start = ((p2m_vaddr - off) & mask) >> shift;
> +        idx_end = ((p2m_end - off) & mask) >> shift;
> +        idx = idx_end - idx_start + 1;
> +        mfns = malloc(sizeof(*mfns) * idx);
> +        if ( !mfns )
> +        {
> +            ERROR("Cannot allocate memory for array of %u mfns", idx);
> +            goto err;
> +        }
> +
> +        for ( idx = idx_start; idx <= idx_end; idx++ )
> +        {
> +            mfn = pte_to_frame(ptes[idx]);
> +            if ( mfn == 0 || mfn > ctx->x86_pv.max_mfn )
> +            {
> +                ERROR("Bad mfn %#lx during page table walk for vaddr %#lx at level %d of p2m list",
> +                      mfn, off + ((xen_vaddr_t)idx << shift), level);
> +                errno = ERANGE;
> +                goto err;
> +            }
> +            mfns[idx - idx_start] = mfn;
> +
> +            /* Maximum pfn check at level 2. Same reasoning as for p2m tree. */
> +            if ( level == 2 )
> +            {
> +                if ( mfn != saved_mfn )
> +                {
> +                    saved_mfn = mfn;
> +                    saved_idx = idx - idx_start;
> +                }
> +            }
> +        }
> +
> +        if ( level == 2 )
> +        {
> +            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp - 1;
> +            if ( max_pfn < ctx->x86_pv.max_pfn )
> +            {
> +                ctx->x86_pv.max_pfn = max_pfn;
> +                ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
> +                p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;
> +                idx_end = idx_start + saved_idx;
> +            }
> +        }
> +
> +        munmap(ptes, n_pages * PAGE_SIZE);
> +        ptes = NULL;
> +        off = p2m_vaddr & ((mask >> shift) << shift);
> +    }
> +
> +    /* Map the p2m leaves themselves. */
> +    rc = map_p2m_leaves(ctx, mfns, idx_end - idx_start + 1);
> +
> +err:
> +    free(mfns);
> +    if ( ptes )
> +        munmap(ptes, n_pages * PAGE_SIZE);

Well - I think I have understood what is going on here, and it looks
plausible.

~Andrew


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Juergen Gross @ 2015-12-11 15:12 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 15:51, Andrew Cooper wrote:
> On 11/12/15 11:31, Juergen Gross wrote:
>> In order to be able to migrate pv-domains with more than 512 GB of RAM,
>> the p2m information can be specified by the guest kernel via a virtual
>> mapped linear p2m list instead of a 3-level tree.
>>
>> Add support for this new p2m format in libxc.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>  tools/libxc/xc_sr_save_x86_pv.c | 139 +++++++++++++++++++++++++++++++++++++++-
>>  1 file changed, 136 insertions(+), 3 deletions(-)
>>
>> diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
>> index d7acd37..0237378 100644
>> --- a/tools/libxc/xc_sr_save_x86_pv.c
>> +++ b/tools/libxc/xc_sr_save_x86_pv.c
>> @@ -116,7 +116,7 @@ static int map_p2m_leaves(struct xc_sr_context *ctx, xen_pfn_t *mfns,
>>   * frames making up the guests p2m table.  Construct a list of pfns making up
>>   * the table.
>>   */
>> -static int map_p2m(struct xc_sr_context *ctx)
>> +static int map_p2m_tree(struct xc_sr_context *ctx)
>>  {
>>      /* Terminology:
>>       *
>> @@ -138,8 +138,6 @@ static int map_p2m(struct xc_sr_context *ctx)
>>      void *guest_fl = NULL;
>>      size_t local_fl_size;
>>  
>> -    ctx->x86_pv.max_pfn = GET_FIELD(ctx->x86_pv.shinfo, arch.max_pfn,
>> -                                    ctx->x86_pv.width) - 1;
>>      fpp = PAGE_SIZE / ctx->x86_pv.width;
>>      fll_entries = (ctx->x86_pv.max_pfn / (fpp * fpp)) + 1;
>>      if ( fll_entries > fpp )
>> @@ -270,6 +268,141 @@ err:
>>  }
>>  
>>  /*
>> + * Map the guest p2m frames specified via a cr3 value, a virtual address, and
>> + * the maximum pfn.
> 
> Probably worth stating that this function assumes PAE paging is in use.

Okay. I don't mind.

> 
>> + */
>> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>> +{
>> +    xc_interface *xch = ctx->xch;
>> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>> +    uint64_t *ptes;
>> +    xen_pfn_t *mfns;
>> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>> +    int rc = -1;
>> +
>> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
>> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
> 
> mfn 0 isn't invalid to use here.  It could, in principle, be available
> for PV guest use.

No, the value 0 indicates that the linear p2m info isn't valid. See
comments in xen/include/public/arch-x86/xen.h

> 
>> +    {
>> +        ERROR("Bad p2m_cr3 value %#lx", p2m_cr3);
>> +        errno = ERANGE;
>> +        return -1;
>> +    }
>> +
>> +    p2m_vaddr = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_vaddr,
>> +                          ctx->x86_pv.width);
>> +    fpp = PAGE_SIZE / ctx->x86_pv.width;
>> +    ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
> 
> ctx->x86_pv.max_pfn / fpp + 1
> 
> It is mathematically identical, but resilient to overflow.

Okay.

> 
>> +    p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;
> 
> You probably want to sanity check both p2m_vaddr and p2m_end for being
> either <4G or canonical, depending on the guest, and out of the Xen
> mappings.

Yes, you are right.

> 
> I believe this allows you to drop 'mask' in its entirety.

Hmm, no. I'd still have to mask possible top 16 '1' bits away.
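
[ Editor's note: for a 64-bit pv guest the p2m list lives at a kernel
address, whose bits 63..48 are sign-extended copies of bit 47; the mask
strips those 16 leading 1s before the index shift: ]

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t va   = 0xffffc90000000000ULL;   /* made-up kernel vaddr */
        uint64_t mask = 0x0000ffffffffffffULL;

        assert((va & mask) == 0x0000c90000000000ULL);
        return 0;
    }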

> 
>> +    DPRINTF("p2m list from %#lx to %#lx, root at %#lx", p2m_vaddr, p2m_end,
>> +            p2m_mfn);
>> +    DPRINTF("max_pfn %#lx, p2m_frames %d", ctx->x86_pv.max_pfn,
>> +            ctx->x86_pv.p2m_frames);
>> +
>> +    mask = (ctx->x86_pv.width == 8) ?
>> +           0x0000ffffffffffffULL : 0x00000000ffffffffULL;
>> +
>> +    mfns = malloc(sizeof(*mfns));
>> +    if ( !mfns )
>> +    {
>> +        ERROR("Cannot allocate memory for array of %u mfns", 1);
>> +        goto err;
>> +    }
>> +    mfns[0] = p2m_mfn;
>> +    off = 0;
>> +    saved_mfn = 0;
>> +    idx_start = idx_end = saved_idx = 0;
>> +
>> +    for ( level = ctx->x86_pv.levels; level > 0; level-- )
>> +    {
>> +        n_pages = idx_end - idx_start + 1;
>> +        ptes = xc_map_foreign_pages(xch, ctx->domid, PROT_READ, mfns, n_pages);
>> +        if ( !ptes )
>> +        {
>> +            PERROR("Failed to map %u page table pages for p2m list", n_pages);
>> +            goto err;
>> +        }
>> +        free(mfns);
>> +
>> +        shift = level * 9 + 3;
>> +        idx_start = ((p2m_vaddr - off) & mask) >> shift;
>> +        idx_end = ((p2m_end - off) & mask) >> shift;
>> +        idx = idx_end - idx_start + 1;
>> +        mfns = malloc(sizeof(*mfns) * idx);
>> +        if ( !mfns )
>> +        {
>> +            ERROR("Cannot allocate memory for array of %u mfns", idx);
>> +            goto err;
>> +        }
>> +
>> +        for ( idx = idx_start; idx <= idx_end; idx++ )
>> +        {
>> +            mfn = pte_to_frame(ptes[idx]);
>> +            if ( mfn == 0 || mfn > ctx->x86_pv.max_mfn )
>> +            {
>> +                ERROR("Bad mfn %#lx during page table walk for vaddr %#lx at level %d of p2m list",
>> +                      mfn, off + ((xen_vaddr_t)idx << shift), level);
>> +                errno = ERANGE;
>> +                goto err;
>> +            }
>> +            mfns[idx - idx_start] = mfn;
>> +
>> +            /* Maximum pfn check at level 2. Same reasoning as for p2m tree. */
>> +            if ( level == 2 )
>> +            {
>> +                if ( mfn != saved_mfn )
>> +                {
>> +                    saved_mfn = mfn;
>> +                    saved_idx = idx - idx_start;
>> +                }
>> +            }
>> +        }
>> +
>> +        if ( level == 2 )
>> +        {
>> +            max_pfn = ((xen_pfn_t)saved_idx << 9) * fpp - 1;
>> +            if ( max_pfn < ctx->x86_pv.max_pfn )
>> +            {
>> +                ctx->x86_pv.max_pfn = max_pfn;
>> +                ctx->x86_pv.p2m_frames = (ctx->x86_pv.max_pfn + fpp) / fpp;
>> +                p2m_end = p2m_vaddr + ctx->x86_pv.p2m_frames * PAGE_SIZE - 1;
>> +                idx_end = idx_start + saved_idx;
>> +            }
>> +        }
>> +
>> +        munmap(ptes, n_pages * PAGE_SIZE);
>> +        ptes = NULL;
>> +        off = p2m_vaddr & ((mask >> shift) << shift);
>> +    }
>> +
>> +    /* Map the p2m leaves themselves. */
>> +    rc = map_p2m_leaves(ctx, mfns, idx_end - idx_start + 1);
>> +
>> +err:
>> +    free(mfns);
>> +    if ( ptes )
>> +        munmap(ptes, n_pages * PAGE_SIZE);
> 
> Well - I think I have understood what is going on here, and it looks
> plausible.

I hope so. :-)


Juergen


* Re: [PATCH 3/4] libxc: stop migration in case of p2m list structural changes
From: Andrew Cooper @ 2015-12-11 15:20 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 11:31, Juergen Gross wrote:
> With support for the virtual mapped linear p2m list for migration it is
> now possible to detect structural changes of the p2m list, which before
> would lead to a crashing or otherwise misbehaving domU.
>
> A guest supporting the linear p2m list will increment the
> p2m_generation counter located in the shared info page before and after
> each modification of a mapping related to the p2m list. A change of
> that counter can be detected by the tools and reacted upon.
>
> As such a change should occur only very rarely once the domU is up, the
> simplest reaction is to cancel the migration in such an event.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  tools/libxc/xc_sr_common.h       | 11 ++++++++++
>  tools/libxc/xc_sr_save.c         |  4 ++++
>  tools/libxc/xc_sr_save_x86_hvm.c |  7 +++++++
>  tools/libxc/xc_sr_save_x86_pv.c  | 44 ++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 66 insertions(+)
>
> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
> index 9aecde2..bfb9602 100644
> --- a/tools/libxc/xc_sr_common.h
> +++ b/tools/libxc/xc_sr_common.h
> @@ -83,6 +83,14 @@ struct xc_sr_save_ops
>      int (*end_of_checkpoint)(struct xc_sr_context *ctx);
>  
>      /**
> +     * Check whether a new iteration can be started.  This is called
> +     * before each iteration to verify that all criteria for the migration
> +     * are still met.  If not, the migration is either cancelled via a bad
> +     * rc or the situation is handled, e.g. by sending appropriate records.
> +     */
> +    int (*check_iteration)(struct xc_sr_context *ctx);
> +

This is slightly ambiguous, especially given the subtle differences
between live migration and Remus checkpoints.

I would be tempted to name it check_vm_state() and document simply that
it is called periodically, to allow for fixup (or abort) for guest state
which may have changed while the VM was running.

On the remus side, it needs to be called between start_of_checkpoint()
and send_memory_***() in save(), as the guest gets to run between the
checkpoints.

> +    /**
>       * Clean up the local environment.  Will be called exactly once, either
>       * after a successful save, or upon encountering an error.
>       */
> @@ -280,6 +288,9 @@ struct xc_sr_context
>              /* Read-only mapping of guests shared info page */
>              shared_info_any_t *shinfo;
>  
> +            /* p2m generation count for verifying validity of local p2m. */
> +            uint64_t p2m_generation;
> +
>              union
>              {
>                  struct
> diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
> index cefcef5..c235706 100644
> --- a/tools/libxc/xc_sr_save.c
> +++ b/tools/libxc/xc_sr_save.c
> @@ -370,6 +370,10 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
>      DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>                                      &ctx->save.dirty_bitmap_hbuf);
>  
> +    rc = ctx->save.ops.check_iteration(ctx);
> +    if ( rc )
> +        return rc;
> +

As there is now a call at each start of checkpoint, this call
essentially becomes back-to-back.  I would suggest having it after the
batch, rather than ahead.

>      for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
>      {
>          if ( !test_bit(p, dirty_bitmap) )
> diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xc_sr_save_x86_hvm.c
> index f3d6cee..aa24f90 100644
> --- a/tools/libxc/xc_sr_save_x86_hvm.c
> +++ b/tools/libxc/xc_sr_save_x86_hvm.c
> @@ -175,6 +175,12 @@ static int x86_hvm_start_of_checkpoint(struct xc_sr_context *ctx)
>      return 0;
>  }
>  
> +static int x86_hvm_check_iteration(struct xc_sr_context *ctx)
> +{
> +    /* no-op */
> +    return 0;
> +}
> +
>  static int x86_hvm_end_of_checkpoint(struct xc_sr_context *ctx)
>  {
>      int rc;
> @@ -221,6 +227,7 @@ struct xc_sr_save_ops save_ops_x86_hvm =
>      .start_of_stream     = x86_hvm_start_of_stream,
>      .start_of_checkpoint = x86_hvm_start_of_checkpoint,
>      .end_of_checkpoint   = x86_hvm_end_of_checkpoint,
> +    .check_iteration     = x86_hvm_check_iteration,
>      .cleanup             = x86_hvm_cleanup,
>  };
>  
> diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
> index 0237378..3a58d0d 100644
> --- a/tools/libxc/xc_sr_save_x86_pv.c
> +++ b/tools/libxc/xc_sr_save_x86_pv.c
> @@ -268,6 +268,39 @@ err:
>  }
>  
>  /*
> + * Get p2m_generation count.
> + * Returns an error if the generation count has changed since the last call.
> + */
> +static int get_p2m_generation(struct xc_sr_context *ctx)
> +{
> +    uint64_t p2m_generation;
> +    int rc;
> +
> +    p2m_generation = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_generation,
> +                               ctx->x86_pv.width);
> +
> +    rc = (p2m_generation == ctx->x86_pv.p2m_generation) ? 0 : -1;
> +    ctx->x86_pv.p2m_generation = p2m_generation;
> +
> +    return rc;
> +}
> +
> +static int x86_pv_check_iteration_p2m_list(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    int rc;
> +
> +    if ( !ctx->save.live )
> +        return 0;
> +
> +    rc = get_p2m_generation(ctx);
> +    if ( rc )
> +        ERROR("p2m generation count changed. Migration aborted.");
> +
> +    return rc;
> +}
> +
> +/*
>   * Map the guest p2m frames specified via a cr3 value, a virtual address, and
>   * the maximum pfn.
>   */
> @@ -281,6 +314,9 @@ static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>      unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>      int rc = -1;
>  
> +    /* Before each iteration check for local p2m list still valid. */
> +    ctx->save.ops.check_iteration = x86_pv_check_iteration_p2m_list;
> +

This is admittedly the first, but definitely not the only eventual thing
needed for check iteration.  To avoid clobbering one check with another
in the future, it would be cleaner to have a single
x86_pv_check_iteration() which performs the get_p2m_generation() check
iff linear p2m is in use.
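
[ Editor's sketch of that suggestion, using the check_vm_state() name
proposed earlier in this review; the list/tree discriminator flag is
hypothetical, and the helper is the renamed _p2m_list hook from the
patch: ]

    static int x86_pv_check_vm_state(struct xc_sr_context *ctx)
    {
        /* Tree-based p2m in use: nothing to validate per iteration. */
        if ( !ctx->x86_pv.p2m_list_in_use )     /* hypothetical flag */
            return 0;

        return x86_pv_check_vm_state_p2m_list(ctx);
    }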

~Andrew


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Andrew Cooper @ 2015-12-11 15:24 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2


>>> + */
>>> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>>> +{
>>> +    xc_interface *xch = ctx->xch;
>>> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
>>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>>> +    uint64_t *ptes;
>>> +    xen_pfn_t *mfns;
>>> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>>> +    int rc = -1;
>>> +
>>> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
>>> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
>> mfn 0 isn't invalid to use here.  It could, in principle, be available
>> for PV guest use.
> No, the value 0 indicates that the linear p2m info isn't valid. See
> comments in xen/include/public/arch-x86/xen.h

Technically speaking, that is p2m_cr3, rather than p2m_mfn but I suppose
there is a linear mapping between the two.

As this function only gets called with a non-zero p2m_cr3, an
alternative would be assert(p2m_cr3 > 0).

The mfn == 0 comment also applies for reading the ptes in the loop below.

>
>> I believe this allows you to drop 'mask' in its entirety.
> Hmm, no. I'd still have to mask possible top 16 '1' bits away.

So you would.  My mistake.

~Andrew


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Juergen Gross @ 2015-12-11 16:00 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 16:24, Andrew Cooper wrote:
> 
>>>> + */
>>>> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>>>> +{
>>>> +    xc_interface *xch = ctx->xch;
>>>> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
>>>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>>>> +    uint64_t *ptes;
>>>> +    xen_pfn_t *mfns;
>>>> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>>>> +    int rc = -1;
>>>> +
>>>> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
>>>> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
>>> mfn 0 isn't invalid to use here.  It could, in principle, be available
>>> for PV guest use.
>> No, the value 0 indicates that the linear p2m info isn't valid. See
>> comments in xen/include/public/arch-x86/xen.h
> 
> Technically speaking, that is p2m_cr3, rather than p2m_mfn but I suppose
> there is a linear mapping between the two.
> 
> As this function only gets called with a non-zero p2m_cr3, an
> alternative would be assert(p2m_cr3 > 0).

Hmm, yes.

> The mfn == 0 comment also applies for reading the ptes in the loop below.

Sure? Is the hypervisor really giving mfn 0 to a guest? I don't mind
dropping the test, but I'd be surprised if mfn 0 were valid.

> 
>>
>>> I believe this allows you to drop 'mask' in its entirety.
>> Hmm, no. I'd still have to mask possible top 16 '1' bits away.
> 
> So you would.  My mistake.


Juergen


* Re: [PATCH 3/4] libxc: stop migration in case of p2m list structural changes
From: Juergen Gross @ 2015-12-11 16:02 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 16:20, Andrew Cooper wrote:
> On 11/12/15 11:31, Juergen Gross wrote:
>> With support for the virtual mapped linear p2m list for migration it is
>> now possible to detect structural changes of the p2m list, which before
>> would lead to a crashing or otherwise misbehaving domU.
>>
>> A guest supporting the linear p2m list will increment the
>> p2m_generation counter located in the shared info page before and after
>> each modification of a mapping related to the p2m list. A change of
>> that counter can be detected by the tools and reacted upon.
>>
>> As such a change should occur only very rarely once the domU is up, the
>> simplest reaction is to cancel the migration in such an event.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>  tools/libxc/xc_sr_common.h       | 11 ++++++++++
>>  tools/libxc/xc_sr_save.c         |  4 ++++
>>  tools/libxc/xc_sr_save_x86_hvm.c |  7 +++++++
>>  tools/libxc/xc_sr_save_x86_pv.c  | 44 ++++++++++++++++++++++++++++++++++++++++
>>  4 files changed, 66 insertions(+)
>>
>> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
>> index 9aecde2..bfb9602 100644
>> --- a/tools/libxc/xc_sr_common.h
>> +++ b/tools/libxc/xc_sr_common.h
>> @@ -83,6 +83,14 @@ struct xc_sr_save_ops
>>      int (*end_of_checkpoint)(struct xc_sr_context *ctx);
>>  
>>      /**
>> +     * Check whether a new iteration can be started.  This is called
>> +     * before each iteration to verify that all criteria for the migration
>> +     * are still met.  If not, the migration is either cancelled via a bad
>> +     * rc or the situation is handled, e.g. by sending appropriate records.
>> +     */
>> +    int (*check_iteration)(struct xc_sr_context *ctx);
>> +
> 
> This is slightly ambiguous, especially given the subtle differences
> between live migration and Remus checkpoints.
> 
> I would be tempted to name it check_vm_state() and document simply that
> it is called periodically, to allow for fixup (or abort) for guest state
> which may have changed while the VM was running.

Yes, this is better.

> On the remus side, it needs to be called between start_of_checkpoint()
> and send_memory_***() in save(), as the guest gets to run between the
> checkpoints.

Okay.

> 
>> +    /**
>>       * Clean up the local environment.  Will be called exactly once, either
>>       * after a successful save, or upon encountering an error.
>>       */
>> @@ -280,6 +288,9 @@ struct xc_sr_context
>>              /* Read-only mapping of guests shared info page */
>>              shared_info_any_t *shinfo;
>>  
>> +            /* p2m generation count for verifying validity of local p2m. */
>> +            uint64_t p2m_generation;
>> +
>>              union
>>              {
>>                  struct
>> diff --git a/tools/libxc/xc_sr_save.c b/tools/libxc/xc_sr_save.c
>> index cefcef5..c235706 100644
>> --- a/tools/libxc/xc_sr_save.c
>> +++ b/tools/libxc/xc_sr_save.c
>> @@ -370,6 +370,10 @@ static int send_dirty_pages(struct xc_sr_context *ctx,
>>      DECLARE_HYPERCALL_BUFFER_SHADOW(unsigned long, dirty_bitmap,
>>                                      &ctx->save.dirty_bitmap_hbuf);
>>  
>> +    rc = ctx->save.ops.check_iteration(ctx);
>> +    if ( rc )
>> +        return rc;
>> +
> 
> As there is now a call at each start of checkpoint, this call
> essentially becomes back-to-back.  I would suggest having it after the
> batch, rather than ahead.

Okay.

> 
>>      for ( p = 0, written = 0; p < ctx->save.p2m_size; ++p )
>>      {
>>          if ( !test_bit(p, dirty_bitmap) )
>> diff --git a/tools/libxc/xc_sr_save_x86_hvm.c b/tools/libxc/xc_sr_save_x86_hvm.c
>> index f3d6cee..aa24f90 100644
>> --- a/tools/libxc/xc_sr_save_x86_hvm.c
>> +++ b/tools/libxc/xc_sr_save_x86_hvm.c
>> @@ -175,6 +175,12 @@ static int x86_hvm_start_of_checkpoint(struct xc_sr_context *ctx)
>>      return 0;
>>  }
>>  
>> +static int x86_hvm_check_iteration(struct xc_sr_context *ctx)
>> +{
>> +    /* no-op */
>> +    return 0;
>> +}
>> +
>>  static int x86_hvm_end_of_checkpoint(struct xc_sr_context *ctx)
>>  {
>>      int rc;
>> @@ -221,6 +227,7 @@ struct xc_sr_save_ops save_ops_x86_hvm =
>>      .start_of_stream     = x86_hvm_start_of_stream,
>>      .start_of_checkpoint = x86_hvm_start_of_checkpoint,
>>      .end_of_checkpoint   = x86_hvm_end_of_checkpoint,
>> +    .check_iteration     = x86_hvm_check_iteration,
>>      .cleanup             = x86_hvm_cleanup,
>>  };
>>  
>> diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
>> index 0237378..3a58d0d 100644
>> --- a/tools/libxc/xc_sr_save_x86_pv.c
>> +++ b/tools/libxc/xc_sr_save_x86_pv.c
>> @@ -268,6 +268,39 @@ err:
>>  }
>>  
>>  /*
>> + * Get p2m_generation count.
>> + * Returns an error if the generation count has changed since the last call.
>> + */
>> +static int get_p2m_generation(struct xc_sr_context *ctx)
>> +{
>> +    uint64_t p2m_generation;
>> +    int rc;
>> +
>> +    p2m_generation = GET_FIELD(ctx->x86_pv.shinfo, arch.p2m_generation,
>> +                               ctx->x86_pv.width);
>> +
>> +    rc = (p2m_generation == ctx->x86_pv.p2m_generation) ? 0 : -1;
>> +    ctx->x86_pv.p2m_generation = p2m_generation;
>> +
>> +    return rc;
>> +}
>> +
>> +static int x86_pv_check_iteration_p2m_list(struct xc_sr_context *ctx)
>> +{
>> +    xc_interface *xch = ctx->xch;
>> +    int rc;
>> +
>> +    if ( !ctx->save.live )
>> +        return 0;
>> +
>> +    rc = get_p2m_generation(ctx);
>> +    if ( rc )
>> +        ERROR("p2m generation count changed. Migration aborted.");
>> +
>> +    return rc;
>> +}
>> +
>> +/*
>>   * Map the guest p2m frames specified via a cr3 value, a virtual address, and
>>   * the maximum pfn.
>>   */
>> @@ -281,6 +314,9 @@ static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>>      unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>>      int rc = -1;
>>  
>> +    /* Before each iteration check for local p2m list still valid. */
>> +    ctx->save.ops.check_iteration = x86_pv_check_iteration_p2m_list;
>> +
> 
> This is admittedly the first, but definitely not the only eventual thing
> needed for check iteration.  To avoid clobbering one check with another
> in the future, it would be cleaner to have a single
> x86_pv_check_iteration() which performs the get_p2m_generation() check
> iff linear p2m is in use.

Agreed.


Juergen


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Andrew Cooper @ 2015-12-11 16:09 UTC (permalink / raw)
  To: Juergen Gross, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 16:00, Juergen Gross wrote:
> On 11/12/15 16:24, Andrew Cooper wrote:
>>>>> + */
>>>>> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>>>>> +{
>>>>> +    xc_interface *xch = ctx->xch;
>>>>> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
>>>>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>>>>> +    uint64_t *ptes;
>>>>> +    xen_pfn_t *mfns;
>>>>> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>>>>> +    int rc = -1;
>>>>> +
>>>>> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
>>>>> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
>>>> mfn 0 isn't invalid to use here.  It could, in principle, be available
>>>> for PV guest use.
>>> No, the value 0 indicates that the linear p2m info isn't valid. See
>>> comments in xen/include/public/arch-x86/xen.h
>> Technically speaking, that is p2m_cr3, rather than p2m_mfn but I suppose
>> there is a linear mapping between the two.
>>
>> As this function only gets called with a non-zero p2m_cr3, an
>> alternative would be assert(p2m_cr3 > 0).
> Hmm, yes.
>
>> The mfn == 0 comment also applies for reading the ptes in the loop below.
> Sure? Is the hypervisor really giving mfn 0 to a guest? I don't mind
> dropping the test, but I'd be surprised if mfn 0 were valid.

Currently no.

I am thinking longer term for things like a DMLite nested hypervisor,
where none of the RAM below 1MB is special any more.

I don't expect handing mfn 0 to guests to actually function very well,
but I would prefer to avoid false assumptions about it.

~Andrew


* Re: [PATCH 2/4] libxc: support of linear p2m list for migration of pv-domains
From: Juergen Gross @ 2015-12-11 16:17 UTC (permalink / raw)
  To: Andrew Cooper, xen-devel, Ian.Campbell, ian.jackson,
	stefano.stabellini, wei.liu2

On 11/12/15 17:09, Andrew Cooper wrote:
> On 11/12/15 16:00, Juergen Gross wrote:
>> On 11/12/15 16:24, Andrew Cooper wrote:
>>>>>> + */
>>>>>> +static int map_p2m_list(struct xc_sr_context *ctx, uint64_t p2m_cr3)
>>>>>> +{
>>>>>> +    xc_interface *xch = ctx->xch;
>>>>>> +    xen_vaddr_t p2m_vaddr, p2m_end, mask, off;
>>>>>> +    xen_pfn_t p2m_mfn, mfn, saved_mfn, max_pfn;
>>>>>> +    uint64_t *ptes;
>>>>>> +    xen_pfn_t *mfns;
>>>>>> +    unsigned fpp, n_pages, level, shift, idx_start, idx_end, idx, saved_idx;
>>>>>> +    int rc = -1;
>>>>>> +
>>>>>> +    p2m_mfn = cr3_to_mfn(ctx, p2m_cr3);
>>>>>> +    if ( p2m_mfn == 0 || p2m_mfn > ctx->x86_pv.max_mfn )
>>>>> mfn 0 isn't invalid to use here.  It could, in principle, be available
>>>>> for PV guest use.
>>>> No, the value 0 indicates that the linear p2m info isn't valid. See
>>>> comments in xen/include/public/arch-x86/xen.h
>>> Technically speaking, that is p2m_cr3, rather than p2m_mfn but I suppose
>>> there is a linear mapping between the two.
>>>
>>> As this function only gets called with a non-zero p2m_cr3, an
>>> alternative would be assert(p2m_cr3 > 0).
>> Hmm, yes.
>>
>>> The mfn == 0 comment also applies for reading the ptes in the loop below.
>> Sure? Is the hypervisor really giving mfn 0 to a guest? I don't mind
>> dropping the test, but I'd be surprised if mfn 0 were valid.
> 
> Currently no.
> 
> I am thinking longer term for things like a DMLite nested hypervisor,
> where none of the RAM below 1MB is special any more.
> 
> I don't expect handing mfn 0 to guests to actually function very well,
> but I would prefer to avoid false assumptions about it.

Uuh, I really see problems with that approach. A pv guest would have to
check the mfn after allocating the top-level page table used to map the
p2m list. Letting mfn 0 be valid is asking for problems, I guess.

I'd rather ban mfn 0 and even gfn 0 from being used as a page table, p2m
page or gdt/ldt/idt by pv guests.


Juergen

