* [PATCH v3 0/3] tools/libxc: use superpages
@ 2017-08-24 10:14 Olaf Hering
  2017-08-24 10:14 ` [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header Olaf Hering
                   ` (2 more replies)
  0 siblings, 3 replies; 11+ messages in thread
From: Olaf Hering @ 2017-08-24 10:14 UTC (permalink / raw)
  To: xen-devel, Andrew Cooper; +Cc: Olaf Hering

Using superpages when populating the domU's memory on the receiving dom0 avoids
the performance regression otherwise seen after save/restore or migration.

Olaf

v3:
 clear pointer in xc_sr_bitmap_free
 some coding style changes
 use getdomaininfo.max_pages to avoid Over-allocation check
 trim bitmap function names, drop trailing "_bit"
 add some comments
v2:
 split into individual commits

Olaf Hering (3):
  tools/libxc: move SUPERPAGE macros to common header
  tools/libxc: add API for bitmap access for restore
  tools/libxc: use superpages during restore of HVM guest

 tools/libxc/xc_dom_x86.c            |   5 -
 tools/libxc/xc_private.h            |   5 +
 tools/libxc/xc_sr_common.c          |  41 ++++++++
 tools/libxc/xc_sr_common.h          |  93 +++++++++++++++--
 tools/libxc/xc_sr_restore.c         | 141 ++-----------------------
 tools/libxc/xc_sr_restore_x86_hvm.c | 202 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_restore_x86_pv.c  |  72 ++++++++++++-
 7 files changed, 411 insertions(+), 148 deletions(-)



* [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header
  2017-08-24 10:14 [PATCH v3 0/3] tools/libxc: use superpages Olaf Hering
@ 2017-08-24 10:14 ` Olaf Hering
  2017-08-25 12:06   ` Wei Liu
  2017-08-24 10:14 ` [PATCH v3 2/3] tools/libxc: add API for bitmap access for restore Olaf Hering
  2017-08-24 10:14 ` [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest Olaf Hering
  2 siblings, 1 reply; 11+ messages in thread
From: Olaf Hering @ 2017-08-24 10:14 UTC (permalink / raw)
  To: xen-devel, Andrew Cooper, Ian Jackson, Wei Liu; +Cc: Olaf Hering

The macros SUPERPAGE_2MB_SHIFT and SUPERPAGE_1GB_SHIFT will be used by
other code in libxc. Move the macros to a header file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxc/xc_dom_x86.c | 5 -----
 tools/libxc/xc_private.h | 5 +++++
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/tools/libxc/xc_dom_x86.c b/tools/libxc/xc_dom_x86.c
index cb68efcbd3..5aff5cad58 100644
--- a/tools/libxc/xc_dom_x86.c
+++ b/tools/libxc/xc_dom_x86.c
@@ -43,11 +43,6 @@
 
 #define SUPERPAGE_BATCH_SIZE 512
 
-#define SUPERPAGE_2MB_SHIFT   9
-#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
-#define SUPERPAGE_1GB_SHIFT   18
-#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
-
 #define X86_CR0_PE 0x01
 #define X86_CR0_ET 0x10
 
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 1c27b0fded..d581f850b0 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -66,6 +66,11 @@ struct iovec {
 #define DECLARE_FLASK_OP struct xen_flask_op op
 #define DECLARE_PLATFORM_OP struct xen_platform_op platform_op
 
+#define SUPERPAGE_2MB_SHIFT   9
+#define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
+#define SUPERPAGE_1GB_SHIFT   18
+#define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)
+
 #undef PAGE_SHIFT
 #undef PAGE_SIZE
 #undef PAGE_MASK
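
For reference, a minimal sketch of how these shift constants relate a pfn to
its enclosing superpage (illustrative only; the helper names and the
standalone program below are not part of the patch):

    #include <stdio.h>

    #define SUPERPAGE_2MB_SHIFT   9    /* a 2MB superpage covers 1UL << 9 = 512 4k pages */
    #define SUPERPAGE_2MB_NR_PFNS (1UL << SUPERPAGE_2MB_SHIFT)
    #define SUPERPAGE_1GB_SHIFT   18   /* a 1GB superpage covers 1UL << 18 = 262144 4k pages */
    #define SUPERPAGE_1GB_NR_PFNS (1UL << SUPERPAGE_1GB_SHIFT)

    /* Index of the superpage of the given order that contains pfn. */
    static unsigned long superpage_idx(unsigned long pfn, unsigned int shift)
    {
        return pfn >> shift;
    }

    /* First pfn of the superpage that contains pfn. */
    static unsigned long superpage_base_pfn(unsigned long pfn, unsigned int shift)
    {
        return (pfn >> shift) << shift;
    }

    int main(void)
    {
        unsigned long pfn = 0x12345;

        printf("pfn %#lx: 2MB idx %lu base %#lx, 1GB idx %lu base %#lx\n",
               pfn,
               superpage_idx(pfn, SUPERPAGE_2MB_SHIFT),
               superpage_base_pfn(pfn, SUPERPAGE_2MB_SHIFT),
               superpage_idx(pfn, SUPERPAGE_1GB_SHIFT),
               superpage_base_pfn(pfn, SUPERPAGE_1GB_SHIFT));
        return 0;
    }

Patch 3/3 below uses exactly these expressions ((pfn >> order) << order and
pfn >> SUPERPAGE_*_SHIFT) when deciding which superpage to attempt to allocate.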


* [PATCH v3 2/3] tools/libxc: add API for bitmap access for restore
  2017-08-24 10:14 [PATCH v3 0/3] tools/libxc: use superpages Olaf Hering
  2017-08-24 10:14 ` [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header Olaf Hering
@ 2017-08-24 10:14 ` Olaf Hering
  2017-08-25 12:06   ` Wei Liu
  2017-08-24 10:14 ` [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest Olaf Hering
  2 siblings, 1 reply; 11+ messages in thread
From: Olaf Hering @ 2017-08-24 10:14 UTC (permalink / raw)
  To: xen-devel, Andrew Cooper, Ian Jackson, Wei Liu; +Cc: Olaf Hering

Extend API for managing bitmaps. Each bitmap is now represented by a
generic struct xc_sr_bitmap.
Switch the existing populated_pfns to this API.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxc/xc_sr_common.c  | 41 +++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.h  | 68 +++++++++++++++++++++++++++++++++++++++++++--
 tools/libxc/xc_sr_restore.c | 66 ++-----------------------------------------
 3 files changed, 110 insertions(+), 65 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index 79b9c3e940..4d221ca90c 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -155,6 +155,47 @@ static void __attribute__((unused)) build_assertions(void)
     BUILD_BUG_ON(sizeof(struct xc_sr_rec_hvm_params)        != 8);
 }
 
+/*
+ * Expand the tracking structures as needed.
+ * To avoid realloc()ing too excessively, the size is increased to the nearest
+ * power of two large enough to contain the required number of bits.
+ */
+bool _xc_sr_bitmap_resize(struct xc_sr_bitmap *bm, unsigned long bits)
+{
+    if (bits > bm->bits)
+    {
+        size_t new_max;
+        size_t old_sz, new_sz;
+        void *p;
+
+        /* Round up to the nearest power of two larger than bits, less 1. */
+        new_max = bits;
+        new_max |= new_max >> 1;
+        new_max |= new_max >> 2;
+        new_max |= new_max >> 4;
+        new_max |= new_max >> 8;
+        new_max |= new_max >> 16;
+#ifdef __x86_64__
+        new_max |= new_max >> 32;
+#endif
+
+        old_sz = bitmap_size(bm->bits + 1);
+        new_sz = bitmap_size(new_max + 1);
+        p = realloc(bm->p, new_sz);
+        if (!p)
+            return false;
+
+        if (bm->p)
+            memset(p + old_sz, 0, new_sz - old_sz);
+        else
+            memset(p, 0, new_sz);
+
+        bm->p = p;
+        bm->bits = new_max;
+    }
+    return true;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index a83f22af4e..8901af112a 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -172,6 +172,12 @@ struct xc_sr_x86_pv_restore_vcpu
     size_t basicsz, extdsz, xsavesz, msrsz;
 };
 
+struct xc_sr_bitmap
+{
+    void *p;
+    unsigned long bits;
+};
+
 struct xc_sr_context
 {
     xc_interface *xch;
@@ -255,8 +261,7 @@ struct xc_sr_context
             domid_t      xenstore_domid,  console_domid;
 
             /* Bitmap of currently populated PFNs during restore. */
-            unsigned long *populated_pfns;
-            xen_pfn_t max_populated_pfn;
+            struct xc_sr_bitmap populated_pfns;
 
             /* Sender has invoked verify mode on the stream. */
             bool verify;
@@ -343,6 +348,65 @@ extern struct xc_sr_save_ops save_ops_x86_hvm;
 extern struct xc_sr_restore_ops restore_ops_x86_pv;
 extern struct xc_sr_restore_ops restore_ops_x86_hvm;
 
+extern bool _xc_sr_bitmap_resize(struct xc_sr_bitmap *bm, unsigned long bits);
+
+static inline bool xc_sr_bitmap_resize(struct xc_sr_bitmap *bm, unsigned long bits)
+{
+    if (bits > bm->bits)
+        return _xc_sr_bitmap_resize(bm, bits);
+    return true;
+}
+
+static inline void xc_sr_bitmap_free(struct xc_sr_bitmap *bm)
+{
+    free(bm->p);
+    bm->p = NULL;
+}
+
+static inline bool xc_sr_set(unsigned long bit, struct xc_sr_bitmap *bm)
+{
+    if (!xc_sr_bitmap_resize(bm, bit))
+        return false;
+
+    set_bit(bit, bm->p);
+    return true;
+}
+
+static inline bool xc_sr_test(unsigned long bit, struct xc_sr_bitmap *bm)
+{
+    if (bit > bm->bits)
+        return false;
+    return !!test_bit(bit, bm->p);
+}
+
+static inline int xc_sr_test_and_clear(unsigned long bit, struct xc_sr_bitmap *bm)
+{
+    return test_and_clear_bit(bit, bm->p);
+}
+
+static inline int xc_sr_test_and_set(unsigned long bit, struct xc_sr_bitmap *bm)
+{
+    return test_and_set_bit(bit, bm->p);
+}
+
+static inline bool pfn_is_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    return xc_sr_test(pfn, &ctx->restore.populated_pfns);
+}
+
+static inline int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !xc_sr_set(pfn, &ctx->restore.populated_pfns) )
+    {
+        ERROR("Failed to realloc populated_pfns bitmap");
+        errno = ENOMEM;
+        return -1;
+    }
+    return 0;
+}
+
 struct xc_sr_record
 {
     uint32_t type;
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index a016678332..d53948e1a6 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -68,64 +68,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Is a pfn populated?
- */
-static bool pfn_is_populated(const struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    if ( pfn > ctx->restore.max_populated_pfn )
-        return false;
-    return test_bit(pfn, ctx->restore.populated_pfns);
-}
-
-/*
- * Set a pfn as populated, expanding the tracking structures if needed. To
- * avoid realloc()ing too excessively, the size increased to the nearest power
- * of two large enough to contain the required pfn.
- */
-static int pfn_set_populated(struct xc_sr_context *ctx, xen_pfn_t pfn)
-{
-    xc_interface *xch = ctx->xch;
-
-    if ( pfn > ctx->restore.max_populated_pfn )
-    {
-        xen_pfn_t new_max;
-        size_t old_sz, new_sz;
-        unsigned long *p;
-
-        /* Round up to the nearest power of two larger than pfn, less 1. */
-        new_max = pfn;
-        new_max |= new_max >> 1;
-        new_max |= new_max >> 2;
-        new_max |= new_max >> 4;
-        new_max |= new_max >> 8;
-        new_max |= new_max >> 16;
-#ifdef __x86_64__
-        new_max |= new_max >> 32;
-#endif
-
-        old_sz = bitmap_size(ctx->restore.max_populated_pfn + 1);
-        new_sz = bitmap_size(new_max + 1);
-        p = realloc(ctx->restore.populated_pfns, new_sz);
-        if ( !p )
-        {
-            ERROR("Failed to realloc populated bitmap");
-            errno = ENOMEM;
-            return -1;
-        }
-
-        memset((uint8_t *)p + old_sz, 0x00, new_sz - old_sz);
-
-        ctx->restore.populated_pfns    = p;
-        ctx->restore.max_populated_pfn = new_max;
-    }
-
-    assert(!test_bit(pfn, ctx->restore.populated_pfns));
-    set_bit(pfn, ctx->restore.populated_pfns);
-
-    return 0;
-}
-
 /*
  * Given a set of pfns, obtain memory from Xen to fill the physmap for the
  * unpopulated subset.  If types is NULL, no page type checking is performed
@@ -684,10 +626,8 @@ static int setup(struct xc_sr_context *ctx)
     if ( rc )
         goto err;
 
-    ctx->restore.max_populated_pfn = (32 * 1024 / 4) - 1;
-    ctx->restore.populated_pfns = bitmap_alloc(
-        ctx->restore.max_populated_pfn + 1);
-    if ( !ctx->restore.populated_pfns )
+    rc = !xc_sr_bitmap_resize(&ctx->restore.populated_pfns, 32 * 1024 / 4);
+    if ( rc )
     {
         ERROR("Unable to allocate memory for populated_pfns bitmap");
         rc = -1;
@@ -722,7 +662,7 @@ static void cleanup(struct xc_sr_context *ctx)
         xc_hypercall_buffer_free_pages(xch, dirty_bitmap,
                                    NRPAGES(bitmap_size(ctx->restore.p2m_size)));
     free(ctx->restore.buffered_records);
-    free(ctx->restore.populated_pfns);
+    xc_sr_bitmap_free(&ctx->restore.populated_pfns);
     if ( ctx->restore.ops.cleanup(ctx) )
         PERROR("Failed to clean up");
 }
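
As a usage sketch, the intended calling pattern for the new bitmap API is
roughly the following (assumes xc_sr_common.h; example_track() is an
illustrative name, not part of the patch, and error reporting is trimmed):

    /* xc_sr_set() grows the bitmap on demand via _xc_sr_bitmap_resize(),
     * which rounds the allocation up to a power-of-two number of bits so
     * repeated inserts do not realloc() for every new pfn. */
    static int example_track(struct xc_sr_context *ctx, xen_pfn_t pfn)
    {
        struct xc_sr_bitmap *bm = &ctx->restore.populated_pfns;

        if ( xc_sr_test(pfn, bm) )   /* bits beyond the current size read as clear */
            return 0;

        if ( !xc_sr_set(pfn, bm) )   /* false means the realloc() failed */
            return -1;

        return 0;
    }

    /* Teardown: xc_sr_bitmap_free() releases the storage and clears the
     * pointer (new in v3), so calling it twice is harmless:
     *     xc_sr_bitmap_free(&ctx->restore.populated_pfns);
     */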


* [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-24 10:14 [PATCH v3 0/3] tools/libxc: use superpages Olaf Hering
  2017-08-24 10:14 ` [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header Olaf Hering
  2017-08-24 10:14 ` [PATCH v3 2/3] tools/libxc: add API for bitmap access for restore Olaf Hering
@ 2017-08-24 10:14 ` Olaf Hering
  2017-08-25 12:06   ` Wei Liu
  2 siblings, 1 reply; 11+ messages in thread
From: Olaf Hering @ 2017-08-24 10:14 UTC (permalink / raw)
  To: xen-devel, Andrew Cooper, Ian Jackson, Wei Liu; +Cc: Olaf Hering

During creation of a HVM domU, meminit_hvm() tries to map superpages.
After save/restore or migration this mapping is lost; everything is
allocated in single pages. This causes a performance degradation after
migration.

Add the necessary code to preallocate a superpage for each chunk of pfns
that is received. In case a pfn was not populated on the sending side, it
must be freed on the receiving side to avoid over-allocation.

The existing code for x86_pv is moved unmodified into its own file.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/libxc/xc_sr_common.h          |  25 +++--
 tools/libxc/xc_sr_restore.c         |  75 ++-----------
 tools/libxc/xc_sr_restore_x86_hvm.c | 202 ++++++++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_restore_x86_pv.c  |  72 ++++++++++++-
 4 files changed, 296 insertions(+), 78 deletions(-)

diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 8901af112a..bf2758e91a 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -139,6 +139,16 @@ struct xc_sr_restore_ops
      */
     int (*setup)(struct xc_sr_context *ctx);
 
+    /**
+     * Populate PFNs
+     *
+     * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+     * unpopulated subset.
+     */
+    int (*populate_pfns)(struct xc_sr_context *ctx, unsigned count,
+                         const xen_pfn_t *original_pfns, const uint32_t *types);
+
+
     /**
      * Process an individual record from the stream.  The caller shall take
      * care of processing common records (e.g. END, PAGE_DATA).
@@ -224,6 +234,8 @@ struct xc_sr_context
 
             int send_back_fd;
             unsigned long p2m_size;
+            unsigned long max_pages;
+            unsigned long tot_pages;
             xc_hypercall_buffer_t dirty_bitmap_hbuf;
 
             /* From Image Header. */
@@ -336,6 +348,11 @@ struct xc_sr_context
                     /* HVM context blob. */
                     void *context;
                     size_t contextsz;
+
+                    /* Bitmap of currently allocated PFNs during restore. */
+                    struct xc_sr_bitmap attempted_1g;
+                    struct xc_sr_bitmap attempted_2m;
+                    struct xc_sr_bitmap allocated_pfns;
                 } restore;
             };
         } x86_hvm;
@@ -455,14 +472,6 @@ static inline int write_record(struct xc_sr_context *ctx,
  */
 int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 
-/*
- * This would ideally be private in restore.c, but is needed by
- * x86_pv_localise_page() if we receive pagetables frames ahead of the
- * contents of the frames they point at.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types);
-
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore.c b/tools/libxc/xc_sr_restore.c
index d53948e1a6..8cd9289d1a 100644
--- a/tools/libxc/xc_sr_restore.c
+++ b/tools/libxc/xc_sr_restore.c
@@ -68,74 +68,6 @@ static int read_headers(struct xc_sr_context *ctx)
     return 0;
 }
 
-/*
- * Given a set of pfns, obtain memory from Xen to fill the physmap for the
- * unpopulated subset.  If types is NULL, no page type checking is performed
- * and all unpopulated pfns are populated.
- */
-int populate_pfns(struct xc_sr_context *ctx, unsigned count,
-                  const xen_pfn_t *original_pfns, const uint32_t *types)
-{
-    xc_interface *xch = ctx->xch;
-    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
-        *pfns = malloc(count * sizeof(*pfns));
-    unsigned i, nr_pfns = 0;
-    int rc = -1;
-
-    if ( !mfns || !pfns )
-    {
-        ERROR("Failed to allocate %zu bytes for populating the physmap",
-              2 * count * sizeof(*mfns));
-        goto err;
-    }
-
-    for ( i = 0; i < count; ++i )
-    {
-        if ( (!types || (types &&
-                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
-                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
-             !pfn_is_populated(ctx, original_pfns[i]) )
-        {
-            rc = pfn_set_populated(ctx, original_pfns[i]);
-            if ( rc )
-                goto err;
-            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
-            ++nr_pfns;
-        }
-    }
-
-    if ( nr_pfns )
-    {
-        rc = xc_domain_populate_physmap_exact(
-            xch, ctx->domid, nr_pfns, 0, 0, mfns);
-        if ( rc )
-        {
-            PERROR("Failed to populate physmap");
-            goto err;
-        }
-
-        for ( i = 0; i < nr_pfns; ++i )
-        {
-            if ( mfns[i] == INVALID_MFN )
-            {
-                ERROR("Populate physmap failed for pfn %u", i);
-                rc = -1;
-                goto err;
-            }
-
-            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
-        }
-    }
-
-    rc = 0;
-
- err:
-    free(pfns);
-    free(mfns);
-
-    return rc;
-}
-
 /*
  * Given a list of pfns, their types, and a block of page data from the
  * stream, populate and record their types, map the relevant subset and copy
@@ -161,7 +93,7 @@ static int process_page_data(struct xc_sr_context *ctx, unsigned count,
         goto err;
     }
 
-    rc = populate_pfns(ctx, count, pfns, types);
+    rc = ctx->restore.ops.populate_pfns(ctx, count, pfns, types);
     if ( rc )
     {
         ERROR("Failed to populate pfns for batch of %u pages", count);
@@ -826,7 +758,12 @@ int xc_domain_restore(xc_interface *xch, int io_fd, uint32_t dom,
         return -1;
     }
 
+    /* See xc_domain_getinfo */
+    ctx.restore.max_pages = ctx.dominfo.max_memkb >> (PAGE_SHIFT-10);
+    ctx.restore.tot_pages = ctx.dominfo.nr_pages;
     ctx.restore.p2m_size = nr_pfns;
+    DPRINTF("dom %u p2m_size %lx max_pages %lx",
+            ctx.domid, ctx.restore.p2m_size, ctx.restore.max_pages);
 
     if ( ctx.dominfo.hvm )
     {
diff --git a/tools/libxc/xc_sr_restore_x86_hvm.c b/tools/libxc/xc_sr_restore_x86_hvm.c
index 1dca85354a..135872ca53 100644
--- a/tools/libxc/xc_sr_restore_x86_hvm.c
+++ b/tools/libxc/xc_sr_restore_x86_hvm.c
@@ -135,6 +135,8 @@ static int x86_hvm_localise_page(struct xc_sr_context *ctx,
 static int x86_hvm_setup(struct xc_sr_context *ctx)
 {
     xc_interface *xch = ctx->xch;
+    struct xc_sr_bitmap *bm;
+    unsigned long bits;
 
     if ( ctx->restore.guest_type != DHDR_TYPE_X86_HVM )
     {
@@ -149,7 +151,30 @@ static int x86_hvm_setup(struct xc_sr_context *ctx)
         return -1;
     }
 
+    bm = &ctx->x86_hvm.restore.attempted_1g;
+    bits = (ctx->restore.p2m_size >> SUPERPAGE_1GB_SHIFT) + 1;
+    if ( xc_sr_bitmap_resize(bm, bits) == false )
+        goto out;
+
+    bm = &ctx->x86_hvm.restore.attempted_2m;
+    bits = (ctx->restore.p2m_size >> SUPERPAGE_2MB_SHIFT) + 1;
+    if ( xc_sr_bitmap_resize(bm, bits) == false )
+        goto out;
+
+    bm = &ctx->x86_hvm.restore.allocated_pfns;
+    bits = ctx->restore.p2m_size + 1;
+    if ( xc_sr_bitmap_resize(bm, bits) == false )
+        goto out;
+
+    /* No superpage in 1st 2MB due to VGA hole */
+    xc_sr_set(0, &ctx->x86_hvm.restore.attempted_1g);
+    xc_sr_set(0, &ctx->x86_hvm.restore.attempted_2m);
+
     return 0;
+
+out:
+    ERROR("Unable to allocate memory for pfn bitmaps");
+    return -1;
 }
 
 /*
@@ -224,10 +249,186 @@ static int x86_hvm_stream_complete(struct xc_sr_context *ctx)
 static int x86_hvm_cleanup(struct xc_sr_context *ctx)
 {
     free(ctx->x86_hvm.restore.context);
+    xc_sr_bitmap_free(&ctx->x86_hvm.restore.attempted_1g);
+    xc_sr_bitmap_free(&ctx->x86_hvm.restore.attempted_2m);
+    xc_sr_bitmap_free(&ctx->x86_hvm.restore.allocated_pfns);
+
+    return 0;
+}
 
+/*
+ * Set a pfn as allocated, expanding the tracking structures if needed.
+ */
+static int pfn_set_allocated(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !xc_sr_set(pfn, &ctx->x86_hvm.restore.allocated_pfns) )
+    {
+        ERROR("Failed to realloc allocated_pfns bitmap");
+        errno = ENOMEM;
+        return -1;
+    }
     return 0;
 }
 
+static bool x86_hvm_do_superpage(struct xc_sr_context *ctx, unsigned int order)
+{
+    unsigned long count = 1ULL << order;
+    return ctx->restore.tot_pages + count <= ctx->restore.max_pages;
+}
+
+/*
+ * Attempt to allocate a superpage where the pfn resides.
+ */
+static int x86_hvm_allocate_pfn(struct xc_sr_context *ctx, xen_pfn_t pfn)
+{
+    xc_interface *xch = ctx->xch;
+    struct xc_sr_bitmap *bm;
+    bool success = false;
+    int rc = -1, done;
+    unsigned int order, shift;
+    unsigned long i;
+    unsigned long idx_1g, idx_2m;
+    unsigned long count;
+    xen_pfn_t base_pfn = 0, extnt;
+
+    if ( xc_sr_test(pfn, &ctx->x86_hvm.restore.allocated_pfns) )
+        return 0;
+
+    idx_1g = pfn >> SUPERPAGE_1GB_SHIFT;
+    idx_2m = pfn >> SUPERPAGE_2MB_SHIFT;
+    if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.attempted_1g, idx_1g) )
+    {
+        PERROR("Failed to realloc attempted_1g");
+        return -1;
+    }
+    if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.attempted_2m, idx_2m) )
+    {
+        PERROR("Failed to realloc attempted_2m");
+        return -1;
+    }
+
+    /*
+     * Try to allocate a 1GB page for this pfn, but avoid Over-allocation.
+     * If this succeeds, mark the range of 2MB pages as busy.
+     */
+    bm = &ctx->x86_hvm.restore.attempted_1g;
+    order = SUPERPAGE_1GB_SHIFT;
+    if ( !xc_sr_test_and_set(idx_1g, bm) && x86_hvm_do_superpage(ctx, order) ) {
+        count = 1UL << order;
+        base_pfn = (pfn >> order) << order;
+        extnt = base_pfn;
+        done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extnt);
+        if ( done > 0 ) {
+            DPRINTF("1G base_pfn %" PRI_xen_pfn "\n", base_pfn);
+            success = true;
+            bm = &ctx->x86_hvm.restore.attempted_2m;
+            shift = SUPERPAGE_1GB_SHIFT - SUPERPAGE_2MB_SHIFT;
+            for ( i = 0; i < (count >> shift); i++ )
+                xc_sr_set((base_pfn >> SUPERPAGE_2MB_SHIFT) + i, bm);
+        } else if ( done < 0 ) {
+            PERROR("populate_physmap failed.");
+            return -1;
+        }
+    }
+
+    /* Allocate a 2MB page if the above failed, avoid Over-allocation. */
+    bm = &ctx->x86_hvm.restore.attempted_2m;
+    order = SUPERPAGE_2MB_SHIFT;
+    if ( !xc_sr_test_and_set(idx_2m, bm) && x86_hvm_do_superpage(ctx, order) ) {
+        count = 1UL << order;
+        base_pfn = (pfn >> order) << order;
+        extnt = base_pfn;
+        done = xc_domain_populate_physmap(xch, ctx->domid, 1, order, 0, &extnt);
+        if ( done > 0 ) {
+            DPRINTF("2M base_pfn %" PRI_xen_pfn "\n", base_pfn);
+            success = true;
+        } else if ( done < 0 ) {
+            PERROR("populate_physmap failed.");
+            return -1;
+        }
+    }
+    if ( success == false ) {
+        count = 1;
+        extnt = base_pfn = pfn;
+        done = xc_domain_populate_physmap(xch, ctx->domid, count, 0, 0, &extnt);
+        if ( done > 0 ) {
+            DPRINTF("4K pfn %" PRI_xen_pfn "\n", pfn);
+            success = true;
+        } else if ( done < 0 ) {
+            PERROR("populate_physmap failed.");
+            return -1;
+        }
+    }
+    if ( success == true ) {
+        do {
+            count--;
+            ctx->restore.tot_pages++;
+            rc = pfn_set_allocated(ctx, base_pfn + count);
+            if ( rc )
+                break;
+        } while ( count );
+    }
+    return rc;
+}
+
+static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                 const xen_pfn_t *original_pfns,
+                                 const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t min_pfn = original_pfns[0], max_pfn = original_pfns[0];
+    unsigned i;
+    int rc = -1;
+
+    for ( i = 0; i < count; ++i )
+    {
+        if ( original_pfns[i] < min_pfn )
+            min_pfn = original_pfns[i];
+        if ( original_pfns[i] > max_pfn )
+            max_pfn = original_pfns[i];
+        if ( (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
+              types[i] != XEN_DOMCTL_PFINFO_BROKEN) &&
+             !pfn_is_populated(ctx, original_pfns[i]) )
+        {
+            rc = x86_hvm_allocate_pfn(ctx, original_pfns[i]);
+            if ( rc )
+                goto err;
+            rc = pfn_set_populated(ctx, original_pfns[i]);
+            if ( rc )
+                goto err;
+        }
+    }
+
+    while ( min_pfn < max_pfn )
+    {
+        if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.allocated_pfns, min_pfn) )
+        {
+            PERROR("Failed to realloc allocated_pfns %" PRI_xen_pfn, min_pfn);
+            goto err;
+        }
+        if ( !pfn_is_populated(ctx, min_pfn) &&
+            xc_sr_test_and_clear(min_pfn, &ctx->x86_hvm.restore.allocated_pfns) ) {
+            xen_pfn_t pfn = min_pfn;
+            rc = xc_domain_decrease_reservation_exact(xch, ctx->domid, 1, 0, &pfn);
+            if ( rc )
+            {
+                PERROR("Failed to release pfn %" PRI_xen_pfn, min_pfn);
+                goto err;
+            }
+            ctx->restore.tot_pages--;
+        }
+        min_pfn++;
+    }
+
+    rc = 0;
+
+ err:
+    return rc;
+}
+
+
 struct xc_sr_restore_ops restore_ops_x86_hvm =
 {
     .pfn_is_valid    = x86_hvm_pfn_is_valid,
@@ -236,6 +437,7 @@ struct xc_sr_restore_ops restore_ops_x86_hvm =
     .set_page_type   = x86_hvm_set_page_type,
     .localise_page   = x86_hvm_localise_page,
     .setup           = x86_hvm_setup,
+    .populate_pfns   = x86_hvm_populate_pfns,
     .process_record  = x86_hvm_process_record,
     .stream_complete = x86_hvm_stream_complete,
     .cleanup         = x86_hvm_cleanup,
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 50e25c162c..87957559bc 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -936,6 +936,75 @@ static void x86_pv_set_gfn(struct xc_sr_context *ctx, xen_pfn_t pfn,
         ((uint32_t *)ctx->x86_pv.p2m)[pfn] = mfn;
 }
 
+/*
+ * Given a set of pfns, obtain memory from Xen to fill the physmap for the
+ * unpopulated subset.  If types is NULL, no page type checking is performed
+ * and all unpopulated pfns are populated.
+ */
+static int x86_pv_populate_pfns(struct xc_sr_context *ctx, unsigned count,
+                                const xen_pfn_t *original_pfns,
+                                const uint32_t *types)
+{
+    xc_interface *xch = ctx->xch;
+    xen_pfn_t *mfns = malloc(count * sizeof(*mfns)),
+        *pfns = malloc(count * sizeof(*pfns));
+    unsigned i, nr_pfns = 0;
+    int rc = -1;
+
+    if ( !mfns || !pfns )
+    {
+        ERROR("Failed to allocate %zu bytes for populating the physmap",
+              2 * count * sizeof(*mfns));
+        goto err;
+    }
+
+    for ( i = 0; i < count; ++i )
+    {
+        if ( (!types || (types &&
+                         (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
+                          types[i] != XEN_DOMCTL_PFINFO_BROKEN))) &&
+             !pfn_is_populated(ctx, original_pfns[i]) )
+        {
+            rc = pfn_set_populated(ctx, original_pfns[i]);
+            if ( rc )
+                goto err;
+            pfns[nr_pfns] = mfns[nr_pfns] = original_pfns[i];
+            ++nr_pfns;
+        }
+    }
+
+    if ( nr_pfns )
+    {
+        rc = xc_domain_populate_physmap_exact(
+            xch, ctx->domid, nr_pfns, 0, 0, mfns);
+        if ( rc )
+        {
+            PERROR("Failed to populate physmap");
+            goto err;
+        }
+
+        for ( i = 0; i < nr_pfns; ++i )
+        {
+            if ( mfns[i] == INVALID_MFN )
+            {
+                ERROR("Populate physmap failed for pfn %u", i);
+                rc = -1;
+                goto err;
+            }
+
+            ctx->restore.ops.set_gfn(ctx, pfns[i], mfns[i]);
+        }
+    }
+
+    rc = 0;
+
+ err:
+    free(pfns);
+    free(mfns);
+
+    return rc;
+}
+
 /*
  * restore_ops function.  Convert pfns back to mfns in pagetables.  Possibly
  * needs to populate new frames if a PTE is found referring to a frame which
@@ -980,7 +1049,7 @@ static int x86_pv_localise_page(struct xc_sr_context *ctx,
         }
     }
 
-    if ( to_populate && populate_pfns(ctx, to_populate, pfns, NULL) )
+    if ( to_populate && x86_pv_populate_pfns(ctx, to_populate, pfns, NULL) )
         return -1;
 
     for ( i = 0; i < (PAGE_SIZE / sizeof(uint64_t)); ++i )
@@ -1160,6 +1229,7 @@ struct xc_sr_restore_ops restore_ops_x86_pv =
     .set_gfn         = x86_pv_set_gfn,
     .localise_page   = x86_pv_localise_page,
     .setup           = x86_pv_setup,
+    .populate_pfns   = x86_pv_populate_pfns,
     .process_record  = x86_pv_process_record,
     .stream_complete = x86_pv_stream_complete,
     .cleanup         = x86_pv_cleanup,
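
In outline, the per-batch logic added for HVM restore works as sketched below
(a condensed, illustrative rendering of x86_hvm_populate_pfns() and
x86_hvm_allocate_pfn() above; page_wanted() stands in for the XTAB/BROKEN
type checks, and most error handling is trimmed):

    static int sketch_populate_batch(struct xc_sr_context *ctx, unsigned count,
                                     const xen_pfn_t *pfns, const uint32_t *types)
    {
        xen_pfn_t min_pfn = pfns[0], max_pfn = pfns[0], pfn;
        unsigned i;

        /* Pass 1: give every incoming pfn backing memory, preferring
         * superpages.  x86_hvm_allocate_pfn() tries a 1GB extent first,
         * then 2MB, then a single 4k page; each superpage attempt is
         * recorded in attempted_1g/attempted_2m so it is made only once per
         * superpage, and it is skipped whenever tot_pages + (1UL << order)
         * would exceed max_pages. */
        for ( i = 0; i < count; ++i )
        {
            if ( pfns[i] < min_pfn ) min_pfn = pfns[i];
            if ( pfns[i] > max_pfn ) max_pfn = pfns[i];

            if ( page_wanted(types[i]) && !pfn_is_populated(ctx, pfns[i]) )
            {
                if ( x86_hvm_allocate_pfn(ctx, pfns[i]) ||
                     pfn_set_populated(ctx, pfns[i]) )
                    return -1;
            }
        }

        /* Pass 2: punch holes.  Any pfn in [min_pfn, max_pfn) that was
         * allocated as part of a superpage but never populated by the
         * stream is handed back, keeping tot_pages within max_pages. */
        for ( pfn = min_pfn; pfn < max_pfn; ++pfn )
        {
            if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.allocated_pfns, pfn) )
                return -1;

            if ( !pfn_is_populated(ctx, pfn) &&
                 xc_sr_test_and_clear(pfn, &ctx->x86_hvm.restore.allocated_pfns) )
            {
                xen_pfn_t victim = pfn;

                xc_domain_decrease_reservation_exact(ctx->xch, ctx->domid,
                                                     1, 0, &victim);
                ctx->restore.tot_pages--;
            }
        }

        return 0;
    }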


* Re: [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-24 10:14 ` [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest Olaf Hering
@ 2017-08-25 12:06   ` Wei Liu
  2017-08-25 12:51     ` Olaf Hering
  0 siblings, 1 reply; 11+ messages in thread
From: Wei Liu @ 2017-08-25 12:06 UTC (permalink / raw)
  To: Olaf Hering; +Cc: Andrew Cooper, Ian Jackson, Wei Liu, xen-devel

On Thu, Aug 24, 2017 at 12:14:43PM +0200, Olaf Hering wrote:
> During creation of a HVM domU, meminit_hvm() tries to map superpages.
> After save/restore or migration this mapping is lost; everything is
> allocated in single pages. This causes a performance degradation after
> migration.
> 
> Add the necessary code to preallocate a superpage for each chunk of pfns
> that is received. In case a pfn was not populated on the sending side, it
> must be freed on the receiving side to avoid over-allocation.
> 
> The existing code for x86_pv is moved unmodified into its own file.
> 
[...]
> +static int x86_hvm_populate_pfns(struct xc_sr_context *ctx, unsigned count,
> +                                 const xen_pfn_t *original_pfns,
> +                                 const uint32_t *types)
> +{
> +    xc_interface *xch = ctx->xch;
> +    xen_pfn_t min_pfn = original_pfns[0], max_pfn = original_pfns[0];
> +    unsigned i;
> +    int rc = -1;
> +
> +    for ( i = 0; i < count; ++i )
> +    {
> +        if ( original_pfns[i] < min_pfn )
> +            min_pfn = original_pfns[i];
> +        if ( original_pfns[i] > max_pfn )
> +            max_pfn = original_pfns[i];
> +        if ( (types[i] != XEN_DOMCTL_PFINFO_XTAB &&
> +              types[i] != XEN_DOMCTL_PFINFO_BROKEN) &&
> +             !pfn_is_populated(ctx, original_pfns[i]) )
> +        {
> +            rc = x86_hvm_allocate_pfn(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +            rc = pfn_set_populated(ctx, original_pfns[i]);
> +            if ( rc )
> +                goto err;
> +        }
> +    }
> +
> +    while ( min_pfn < max_pfn )
> +    {
> +        if ( !xc_sr_bitmap_resize(&ctx->x86_hvm.restore.allocated_pfns, min_pfn) )
> +        {
> +            PERROR("Failed to realloc allocated_pfns %" PRI_xen_pfn, min_pfn);
> +            goto err;
> +        }
> +        if ( !pfn_is_populated(ctx, min_pfn) &&
> +            xc_sr_test_and_clear(min_pfn, &ctx->x86_hvm.restore.allocated_pfns) ) {
> +            xen_pfn_t pfn = min_pfn;
> +            rc = xc_domain_decrease_reservation_exact(xch, ctx->domid, 1, 0, &pfn);
> +            if ( rc )
> +            {
> +                PERROR("Failed to release pfn %" PRI_xen_pfn, min_pfn);
> +                goto err;
> +            }
> +            ctx->restore.tot_pages--;
> +        }
> +        min_pfn++;
> +    }
> +

I'm still unconvinced this works all the time because it still needs the
assumption that the stream contains contiguous pfns.

It probably works now because the mmio hole is placed right under 1G and
you preemptively avoid allocating a 1G page for the first GB.

Suppose we have the following memory layout:

   [0...A) [mmio_start.......B) [B...C]
                         ^1G here

A-0 < 1G
C-B < 1G
The guest can use up to 1G of RAM.
 
And then you receive the following batch of pfns from remote

 B,A-1

When B arrives, a 1G page is thus allocated, using up all guest's
allowance. Decreasing reservation happens only after the batch has been
processed, which means A will trigger over-allocation even on a 4K page.
Then there is no fallback in x86_hvm_allocate_pfn, which will cause
migration to fail.

Another less contrived example, without expanding the mmio hole beyond
1G:

   [0...A) [mmio_start...1G) [1G...C...2G...D...E]

E < 2G
The guest can use up to 2G of RAM.

And then the batch of pfns from remote:

  C,D,A-1

So maybe you need to adjust the location of decrease reservation? Or do
I misread your code?
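
To put rough numbers on the first scenario (illustrative only, assuming a
guest allowance of 1G, i.e. max_pages of roughly 262144 4k pages): when B
arrives, attempted_1g is still clear and tot_pages is still close to zero, so
tot_pages + 262144 <= max_pages holds, a full 1GB extent is allocated and
tot_pages jumps to about 262144. When A-1 is reached later in the same batch,
the single-page populate_physmap call would push the domain past its
allowance and is refused by the hypervisor's Over-allocation check; the
decrease-reservation pass that would return the unpopulated part of the 1GB
extent only runs after the whole batch, so the restore fails first.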


* Re: [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header
  2017-08-24 10:14 ` [PATCH v3 1/3] tools/libxc: move SUPERPAGE macros to common header Olaf Hering
@ 2017-08-25 12:06   ` Wei Liu
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Liu @ 2017-08-25 12:06 UTC (permalink / raw)
  To: Olaf Hering; +Cc: Andrew Cooper, Ian Jackson, Wei Liu, xen-devel

On Thu, Aug 24, 2017 at 12:14:41PM +0200, Olaf Hering wrote:
> The macros SUPERPAGE_2MB_SHIFT and SUPERPAGE_1GB_SHIFT will be used by
> other code in libxc. Move the macros to a header file.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wei.liu2@citrix.com>

Please fold in the tag if you end up reposting.


* Re: [PATCH v3 2/3] tools/libxc: add API for bitmap access for restore
  2017-08-24 10:14 ` [PATCH v3 2/3] tools/libxc: add API for bitmap access for restore Olaf Hering
@ 2017-08-25 12:06   ` Wei Liu
  0 siblings, 0 replies; 11+ messages in thread
From: Wei Liu @ 2017-08-25 12:06 UTC (permalink / raw)
  To: Olaf Hering; +Cc: Andrew Cooper, Ian Jackson, Wei Liu, xen-devel

On Thu, Aug 24, 2017 at 12:14:42PM +0200, Olaf Hering wrote:
> Extend API for managing bitmaps. Each bitmap is now represented by a
> generic struct xc_sr_bitmap.
> Switch the existing populated_pfns to this API.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wei.liu2@citrix.com>


* Re: [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-25 12:06   ` Wei Liu
@ 2017-08-25 12:51     ` Olaf Hering
  2017-08-25 13:36       ` Wei Liu
  0 siblings, 1 reply; 11+ messages in thread
From: Olaf Hering @ 2017-08-25 12:51 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, Ian Jackson, xen-devel



On Fri, Aug 25, Wei Liu wrote:

> I'm still unconvinced this works all the time because it still needs the
> assumption that the stream contains contiguous pfns.

This is how it is done today. If the pfns start to arrive in another
order the format has to be changed to send a memory layout in advance.

I will check if some sort of retry logic can be added.


Olaf


* Re: [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-25 12:51     ` Olaf Hering
@ 2017-08-25 13:36       ` Wei Liu
  2017-08-25 14:00         ` Olaf Hering
  0 siblings, 1 reply; 11+ messages in thread
From: Wei Liu @ 2017-08-25 13:36 UTC (permalink / raw)
  To: Olaf Hering; +Cc: Andrew Cooper, Wei Liu, Ian Jackson, xen-devel

On Fri, Aug 25, 2017 at 02:51:01PM +0200, Olaf Hering wrote:
> On Fri, Aug 25, Wei Liu wrote:
> 
> > I'm still unconvinced this works all the time because it still needs the
> > assumption that the stream contains contiguous pfns.
> 
> This is how it is done today. If the pfns start to arrive in another
> order the format has to be changed to send a memory layout in advance.
> 

Maybe a middle ground is to scan the batch to see if the pfns can fit
into a whole superpage? I don't think you can get a batch as big as 1G,
but there should be a lot of 2M batches?

> I will check if some sort of retry logic can be added.
> 

This would be useful too.

> 
> Olaf




* Re: [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-25 13:36       ` Wei Liu
@ 2017-08-25 14:00         ` Olaf Hering
  2017-08-25 14:51           ` Olaf Hering
  0 siblings, 1 reply; 11+ messages in thread
From: Olaf Hering @ 2017-08-25 14:00 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, Ian Jackson, xen-devel



On Fri, Aug 25, Wei Liu wrote:

> Maybe a middle ground is to scan the batch to see if pfns can be fit
> into a whole super page? I don't think you can get a batch as big as 1G
> but there should be a lot of 2M batches?

I think with the new check of max_pages an over-allocation cannot happen
anymore. If at some point the domU still has room for a superpage, it
will be allocated. In case the batch does not fully fill the superpage,
the holes will be freed. In the next batch no superpage can be allocated
anymore, but single pages will be used.

This punching of holes might be inefficient, but the win is the use of
superpages in case of contiguous pfns.

Olaf


* Re: [PATCH v3 3/3] tools/libxc: use superpages during restore of HVM guest
  2017-08-25 14:00         ` Olaf Hering
@ 2017-08-25 14:51           ` Olaf Hering
  0 siblings, 0 replies; 11+ messages in thread
From: Olaf Hering @ 2017-08-25 14:51 UTC (permalink / raw)
  To: Wei Liu; +Cc: Andrew Cooper, Ian Jackson, xen-devel



On Fri, Aug 25, Olaf Hering wrote:

> I think with the new check of max_pages an overallocation can not happen
> anymore. If at some point the domU still has room for a superpage, it
> will be allocated. In case the batch does not fully fill the superpage,
> the holes will be freed. In the next batch no superpage can be allocated
> anymore, but single pages will be used.

There is one case where Over-allocation will happen: assume
x86_hvm_populate_pfns gets a batch of pfns that triggers the
allocation of a 1G page. All pfns will fit into that partly populated
superpage. Then the guest has a hole right after the max_pfn of that
batch. The next batch will start in a new superpage. As a result the
freeing part of x86_hvm_populate_pfns will not consider the previous
superpage anymore. Now 512MB are allocated, but unpopulated.

To handle this case, min_pfn/max_pfn have to be global so that the
current batch can free allocated pfns from previous batches.

Olaf

