* [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Jan Beulich @ 2014-09-05 10:47 UTC
  To: xen-devel; +Cc: Tim Deegan, Keir Fraser

Both the freeing and the inspection of the bitmap get done in (nested)
loops which - besides having a rather high iteration count in general,
albeit that would be covered by XSA-77 - have the number of non-trivial
iterations they need to perform (indirectly) controllable by both the
guest they are for and any domain controlling the guest (including the
one running qemu for it).

Note that the tying of the continuations to the invoking domain (which
previously [wrongly] used the invoking vCPU instead) implies that the
tools requesting such operations have to make sure they don't issue
multiple similar operations in parallel.

Note further that this breaks supervisor-mode kernel assumptions in
hypercall_create_continuation() (where regs->eip gets rewound to the
current hypercall stub beginning), but otoh
hypercall_cancel_continuation() doesn't work in that mode either.
Perhaps time to rip out all the remains of that feature?

This is part of CVE-2014-5146 / XSA-97.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Tim Deegan <tim@xen.org>
---
v5: Also wire up helper hypercall in HVM/PVH hypercall tables (pointed
    out by Tim).
v4: Fold in Andrew's paging_mode_log_dirty() guarding adjustments to
    paging_log_dirty_disable() and its caller. Convert tying
    continuations to domain (was vCPU). Introduce a separate internal
    hypercall (__HYPERCALL_arch_1) to deal with the continuation (both
    suggested by Tim and Andrew). As a result hap_domctl() and
    shadow_domctl() now don't propagate -ERESTART into paging_domctl()
    anymore, as that would imply setting d->arch.paging.preempt fields
    without holding the paging lock. Sadly this results in mixed
    methods used for continuations here.
v3: Convert if(!resuming) to ASSERT() in paging_log_dirty_op().
v2: Re-order L4 loop continuation/termination handling in
    paging_free_log_dirty_bitmap(). Add an ASSERT() in a special case
    exit path of paging_log_dirty_op().

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1924,7 +1924,9 @@ int domain_relinquish_resources(struct d
         pci_release_devices(d);
 
         /* Tear down paging-assistance stuff. */
-        paging_teardown(d);
+        ret = paging_teardown(d);
+        if ( ret )
+            return ret;
 
         /* Drop the in-use references to page-table bases. */
         for_each_vcpu ( d, v )
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -61,9 +61,11 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_shadow_op:
     {
-        ret = paging_domctl(d,
-                            &domctl->u.shadow_op,
-                            guest_handle_cast(u_domctl, void));
+        ret = paging_domctl(d, &domctl->u.shadow_op,
+                            guest_handle_cast(u_domctl, void), 0);
+        if ( ret == -ERESTART )
+            return hypercall_create_continuation(__HYPERVISOR_arch_1,
+                                                 "h", u_domctl);
         copyback = 1;
     }
     break;
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4794,7 +4794,8 @@ static hvm_hypercall_t *const hvm_hyperc
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
     HYPERCALL(domctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    [ __HYPERVISOR_arch_1 ] = (hvm_hypercall_t *)paging_domctl_continuation
 };
 
 #define COMPAT_CALL(x)                                        \
@@ -4814,7 +4815,8 @@ static hvm_hypercall_t *const hvm_hyperc
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
     HYPERCALL(domctl),
-    HYPERCALL(tmem_op)
+    HYPERCALL(tmem_op),
+    [ __HYPERVISOR_arch_1 ] = (hvm_hypercall_t *)paging_domctl_continuation
 };
 
 /* PVH 32bitfixme. */
@@ -4832,7 +4834,8 @@ static hvm_hypercall_t *const pvh_hyperc
     [ __HYPERVISOR_physdev_op ]      = (hvm_hypercall_t *)hvm_physdev_op,
     HYPERCALL(hvm_op),
     HYPERCALL(sysctl),
-    HYPERCALL(domctl)
+    HYPERCALL(domctl),
+    [ __HYPERVISOR_arch_1 ] = (hvm_hypercall_t *)paging_domctl_continuation
 };
 
 int hvm_do_hypercall(struct cpu_user_regs *regs)
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -26,6 +26,7 @@
 #include <asm/shadow.h>
 #include <asm/p2m.h>
 #include <asm/hap.h>
+#include <asm/event.h>
 #include <asm/hvm/nestedhvm.h>
 #include <xen/numa.h>
 #include <xsm/xsm.h>
@@ -116,26 +117,46 @@ static void paging_free_log_dirty_page(s
     d->arch.paging.free_page(d, mfn_to_page(mfn));
 }
 
-void paging_free_log_dirty_bitmap(struct domain *d)
+static int paging_free_log_dirty_bitmap(struct domain *d, int rc)
 {
     mfn_t *l4, *l3, *l2;
     int i4, i3, i2;
 
+    paging_lock(d);
+
     if ( !mfn_valid(d->arch.paging.log_dirty.top) )
-        return;
+    {
+        paging_unlock(d);
+        return 0;
+    }
 
-    paging_lock(d);
+    if ( !d->arch.paging.preempt.dom )
+    {
+        memset(&d->arch.paging.preempt.log_dirty, 0,
+               sizeof(d->arch.paging.preempt.log_dirty));
+        ASSERT(rc <= 0);
+        d->arch.paging.preempt.log_dirty.done = -rc;
+    }
+    else if ( d->arch.paging.preempt.dom != current->domain ||
+              d->arch.paging.preempt.op != XEN_DOMCTL_SHADOW_OP_OFF )
+    {
+        paging_unlock(d);
+        return -EBUSY;
+    }
 
     l4 = map_domain_page(mfn_x(d->arch.paging.log_dirty.top));
+    i4 = d->arch.paging.preempt.log_dirty.i4;
+    i3 = d->arch.paging.preempt.log_dirty.i3;
+    rc = 0;
 
-    for ( i4 = 0; i4 < LOGDIRTY_NODE_ENTRIES; i4++ )
+    for ( ; i4 < LOGDIRTY_NODE_ENTRIES; i4++, i3 = 0 )
     {
         if ( !mfn_valid(l4[i4]) )
             continue;
 
         l3 = map_domain_page(mfn_x(l4[i4]));
 
-        for ( i3 = 0; i3 < LOGDIRTY_NODE_ENTRIES; i3++ )
+        for ( ; i3 < LOGDIRTY_NODE_ENTRIES; i3++ )
         {
             if ( !mfn_valid(l3[i3]) )
                 continue;
@@ -148,20 +169,54 @@ void paging_free_log_dirty_bitmap(struct
 
             unmap_domain_page(l2);
             paging_free_log_dirty_page(d, l3[i3]);
+            l3[i3] = _mfn(INVALID_MFN);
+
+            if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
+            {
+                d->arch.paging.preempt.log_dirty.i3 = i3 + 1;
+                d->arch.paging.preempt.log_dirty.i4 = i4;
+                rc = -ERESTART;
+                break;
+            }
         }
 
         unmap_domain_page(l3);
+        if ( rc )
+            break;
         paging_free_log_dirty_page(d, l4[i4]);
+        l4[i4] = _mfn(INVALID_MFN);
+
+        if ( i4 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
+        {
+            d->arch.paging.preempt.log_dirty.i3 = 0;
+            d->arch.paging.preempt.log_dirty.i4 = i4 + 1;
+            rc = -ERESTART;
+            break;
+        }
     }
 
     unmap_domain_page(l4);
-    paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
-    d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
 
-    ASSERT(d->arch.paging.log_dirty.allocs == 0);
-    d->arch.paging.log_dirty.failed_allocs = 0;
+    if ( !rc )
+    {
+        paging_free_log_dirty_page(d, d->arch.paging.log_dirty.top);
+        d->arch.paging.log_dirty.top = _mfn(INVALID_MFN);
+
+        ASSERT(d->arch.paging.log_dirty.allocs == 0);
+        d->arch.paging.log_dirty.failed_allocs = 0;
+
+        rc = -d->arch.paging.preempt.log_dirty.done;
+        d->arch.paging.preempt.dom = NULL;
+    }
+    else
+    {
+        d->arch.paging.preempt.dom = current->domain;
+        d->arch.paging.preempt.op = XEN_DOMCTL_SHADOW_OP_OFF;
+    }
 
     paging_unlock(d);
+
+    return rc;
 }
 
 int paging_log_dirty_enable(struct domain *d, bool_t log_global)
@@ -187,15 +242,25 @@ int paging_log_dirty_enable(struct domai
     return ret;
 }
 
-int paging_log_dirty_disable(struct domain *d)
+static int paging_log_dirty_disable(struct domain *d, bool_t resuming)
 {
-    int ret;
+    int ret = 1;
+
+    if ( !resuming )
+    {
+        domain_pause(d);
+        /* Safe because the domain is paused. */
+        if ( paging_mode_log_dirty(d) )
+        {
+            ret = d->arch.paging.log_dirty.disable_log_dirty(d);
+            ASSERT(ret <= 0);
+        }
+    }
+
+    ret = paging_free_log_dirty_bitmap(d, ret);
+    if ( ret == -ERESTART )
+        return ret;
 
-    domain_pause(d);
-    /* Safe because the domain is paused. */
-    ret = d->arch.paging.log_dirty.disable_log_dirty(d);
-    if ( !paging_mode_log_dirty(d) )
-        paging_free_log_dirty_bitmap(d);
     domain_unpause(d);
 
     return ret;
@@ -335,7 +400,9 @@ int paging_mfn_is_dirty(struct domain *d
 
 /* Read a domain's log-dirty bitmap and stats.  If the operation is a CLEAN,
  * clear the bitmap and stats as well. */
-int paging_log_dirty_op(struct domain *d, struct xen_domctl_shadow_op *sc)
+static int paging_log_dirty_op(struct domain *d,
+                               struct xen_domctl_shadow_op *sc,
+                               bool_t resuming)
 {
     int rv = 0, clean = 0, peek = 1;
     unsigned long pages = 0;
@@ -343,9 +410,22 @@ int paging_log_dirty_op(struct domain *d
     unsigned long *l1 = NULL;
     int i4, i3, i2;
 
-    domain_pause(d);
+    if ( !resuming )
+        domain_pause(d);
     paging_lock(d);
 
+    if ( !d->arch.paging.preempt.dom )
+        memset(&d->arch.paging.preempt.log_dirty, 0,
+               sizeof(d->arch.paging.preempt.log_dirty));
+    else if ( d->arch.paging.preempt.dom != current->domain ||
+              d->arch.paging.preempt.op != sc->op )
+    {
+        paging_unlock(d);
+        ASSERT(!resuming);
+        domain_unpause(d);
+        return -EBUSY;
+    }
+
     clean = (sc->op == XEN_DOMCTL_SHADOW_OP_CLEAN);
 
     PAGING_DEBUG(LOGDIRTY, "log-dirty %s: dom %u faults=%u dirty=%u\n",
@@ -374,17 +454,15 @@ int paging_log_dirty_op(struct domain *d
         goto out;
     }
 
-    pages = 0;
     l4 = paging_map_log_dirty_bitmap(d);
+    i4 = d->arch.paging.preempt.log_dirty.i4;
+    i3 = d->arch.paging.preempt.log_dirty.i3;
+    pages = d->arch.paging.preempt.log_dirty.done;
 
-    for ( i4 = 0;
-          (pages < sc->pages) && (i4 < LOGDIRTY_NODE_ENTRIES);
-          i4++ )
+    for ( ; (pages < sc->pages) && (i4 < LOGDIRTY_NODE_ENTRIES); i4++, i3 = 0 )
     {
         l3 = (l4 && mfn_valid(l4[i4])) ? map_domain_page(mfn_x(l4[i4])) : NULL;
-        for ( i3 = 0;
-              (pages < sc->pages) && (i3 < LOGDIRTY_NODE_ENTRIES);
-              i3++ )
+        for ( ; (pages < sc->pages) && (i3 < LOGDIRTY_NODE_ENTRIES); i3++ )
         {
             l2 = ((l3 && mfn_valid(l3[i3])) ?
                   map_domain_page(mfn_x(l3[i3])) : NULL);
@@ -419,18 +497,51 @@ int paging_log_dirty_op(struct domain *d
             }
             if ( l2 )
                 unmap_domain_page(l2);
+
+            if ( i3 < LOGDIRTY_NODE_ENTRIES - 1 && hypercall_preempt_check() )
+            {
+                d->arch.paging.preempt.log_dirty.i4 = i4;
+                d->arch.paging.preempt.log_dirty.i3 = i3 + 1;
+                rv = -ERESTART;
+                break;
+            }
         }
         if ( l3 )
             unmap_domain_page(l3);
+
+        if ( !rv && i4 < LOGDIRTY_NODE_ENTRIES - 1 &&
+             hypercall_preempt_check() )
+        {
+            d->arch.paging.preempt.log_dirty.i4 = i4 + 1;
+            d->arch.paging.preempt.log_dirty.i3 = 0;
+            rv = -ERESTART;
+        }
+        if ( rv )
+            break;
     }
     if ( l4 )
         unmap_domain_page(l4);
 
-    if ( pages < sc->pages )
-        sc->pages = pages;
+    if ( !rv )
+        d->arch.paging.preempt.dom = NULL;
+    else
+    {
+        d->arch.paging.preempt.dom = current->domain;
+        d->arch.paging.preempt.op = sc->op;
+        d->arch.paging.preempt.log_dirty.done = pages;
+    }
 
     paging_unlock(d);
 
+    if ( rv )
+    {
+        /* Never leave the domain paused for other errors. */
+        ASSERT(rv == -ERESTART);
+        return rv;
+    }
+
+    if ( pages < sc->pages )
+        sc->pages = pages;
     if ( clean )
     {
         /* We need to further call clean_dirty_bitmap() functions of specific
@@ -441,6 +552,7 @@ int paging_log_dirty_op(struct domain *d
     return rv;
 
  out:
+    d->arch.paging.preempt.dom = NULL;
     paging_unlock(d);
     domain_unpause(d);
 
@@ -504,12 +616,6 @@ void paging_log_dirty_init(struct domain
     d->arch.paging.log_dirty.clean_dirty_bitmap = clean_dirty_bitmap;
 }
 
-/* This function fress log dirty bitmap resources. */
-static void paging_log_dirty_teardown(struct domain*d)
-{
-    paging_free_log_dirty_bitmap(d);
-}
-
 /************************************************/
 /*           CODE FOR PAGING SUPPORT            */
 /************************************************/
@@ -551,7 +657,7 @@ void paging_vcpu_init(struct vcpu *v)
 
 
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl)
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl, bool_t resuming)
 {
     int rc;
 
@@ -575,6 +681,20 @@ int paging_domctl(struct domain *d, xen_
         return -EINVAL;
     }
 
+    if ( resuming
+         ? (d->arch.paging.preempt.dom != current->domain ||
+            d->arch.paging.preempt.op != sc->op)
+         : (d->arch.paging.preempt.dom &&
+            sc->op != XEN_DOMCTL_SHADOW_OP_GET_ALLOCATION) )
+    {
+        printk(XENLOG_G_DEBUG
+               "%pv: Paging op %#x on Dom%u with unfinished prior op %#x by Dom%u\n",
+               current, sc->op, d->domain_id, d->arch.paging.preempt.op,
+               d->arch.paging.preempt.dom
+               ? d->arch.paging.preempt.dom->domain_id : DOMID_INVALID);
+        return -EBUSY;
+    }
+
     rc = xsm_shadow_control(XSM_HOOK, d, sc->op);
     if ( rc )
         return rc;
@@ -597,14 +717,13 @@ int paging_domctl(struct domain *d, xen_
         return paging_log_dirty_enable(d, 1);
 
     case XEN_DOMCTL_SHADOW_OP_OFF:
-        if ( paging_mode_log_dirty(d) )
-            if ( (rc = paging_log_dirty_disable(d)) != 0 )
-                return rc;
+        if ( (rc = paging_log_dirty_disable(d, resuming)) != 0 )
+            return rc;
         break;
 
     case XEN_DOMCTL_SHADOW_OP_CLEAN:
     case XEN_DOMCTL_SHADOW_OP_PEEK:
-        return paging_log_dirty_op(d, sc);
+        return paging_log_dirty_op(d, sc, resuming);
     }
 
     /* Here, dispatch domctl to the appropriate paging code */
@@ -614,19 +733,67 @@ int paging_domctl(struct domain *d, xen_
         return shadow_domctl(d, sc, u_domctl);
 }
 
+long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    struct xen_domctl op;
+    struct domain *d;
+    int ret;
+
+    if ( copy_from_guest(&op, u_domctl, 1) )
+        return -EFAULT;
+
+    if ( op.interface_version != XEN_DOMCTL_INTERFACE_VERSION ||
+         op.cmd != XEN_DOMCTL_shadow_op )
+        return -EBADRQC;
+
+    d = rcu_lock_domain_by_id(op.domain);
+    if ( d == NULL )
+        return -ESRCH;
+
+    ret = xsm_domctl(XSM_OTHER, d, op.cmd);
+    if ( !ret )
+    {
+        if ( domctl_lock_acquire() )
+        {
+            ret = paging_domctl(d, &op.u.shadow_op,
+                                guest_handle_cast(u_domctl, void), 1);
+
+            domctl_lock_release();
+        }
+        else
+            ret = -ERESTART;
+    }
+
+    rcu_unlock_domain(d);
+
+    if ( ret == -ERESTART )
+        ret = hypercall_create_continuation(__HYPERVISOR_arch_1,
+                                            "h", u_domctl);
+    else if ( __copy_field_to_guest(u_domctl, &op, u.shadow_op) )
+        ret = -EFAULT;
+
+    return ret;
+}
+
 /* Call when destroying a domain */
-void paging_teardown(struct domain *d)
+int paging_teardown(struct domain *d)
 {
+    int rc;
+
     if ( hap_enabled(d) )
         hap_teardown(d);
     else
         shadow_teardown(d);
 
     /* clean up log dirty resources. */
-    paging_log_dirty_teardown(d);
+    rc = paging_free_log_dirty_bitmap(d, 0);
+    if ( rc == -ERESTART )
+        return rc;
 
     /* Move populate-on-demand cache back to domain_list for destruction */
     p2m_pod_empty_cache(d);
+
+    return rc;
 }
 
 /* Call once all of the references to the domain have gone away */
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -421,6 +421,7 @@ ENTRY(compat_hypercall_table)
         .quad compat_ni_hypercall
         .endr
         .quad do_mca                    /* 48 */
+        .quad paging_domctl_continuation
         .rept NR_hypercalls-((.-compat_hypercall_table)/8)
         .quad compat_ni_hypercall
         .endr
@@ -469,6 +470,7 @@ ENTRY(compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
         .byte 1 /* do_mca                   */
+        .byte 1 /* paging_domctl_continuation      */
         .rept NR_hypercalls-(.-compat_hypercall_args_table)
         .byte 0 /* compat_ni_hypercall      */
         .endr
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -765,6 +765,7 @@ ENTRY(hypercall_table)
         .quad do_ni_hypercall
         .endr
         .quad do_mca                /* 48 */
+        .quad paging_domctl_continuation
         .rept NR_hypercalls-((.-hypercall_table)/8)
         .quad do_ni_hypercall
         .endr
@@ -813,6 +814,7 @@ ENTRY(hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
         .byte 1 /* do_mca               */  /* 48 */
+        .byte 1 /* paging_domctl_continuation */
         .rept NR_hypercalls-(.-hypercall_args_table)
         .byte 0 /* do_ni_hypercall      */
         .endr
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -615,7 +615,6 @@ int domain_kill(struct domain *d)
         {
             if ( rc == -ERESTART )
                 rc = -EAGAIN;
-            BUG_ON(rc != -EAGAIN);
             break;
         }
         if ( sched_move_domain(d, cpupool0) )
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -186,6 +186,20 @@ struct paging_domain {
     struct hap_domain       hap;
     /* log dirty support */
     struct log_dirty_domain log_dirty;
+
+    /* preemption handling */
+    struct {
+        const struct domain *dom;
+        unsigned int op;
+        union {
+            struct {
+                unsigned long done:PADDR_BITS - PAGE_SHIFT;
+                unsigned long i4:PAGETABLE_ORDER;
+                unsigned long i3:PAGETABLE_ORDER;
+            } log_dirty;
+        };
+    } preempt;
+
     /* alloc/free pages from the pool for paging-assistance structures
      * (used by p2m and log-dirty code for their tries) */
     struct page_info * (*alloc_page)(struct domain *d);
--- a/xen/include/asm-x86/paging.h
+++ b/xen/include/asm-x86/paging.h
@@ -132,9 +132,6 @@ struct paging_mode {
 /*****************************************************************************
  * Log dirty code */
 
-/* free log dirty bitmap resource */
-void paging_free_log_dirty_bitmap(struct domain *d);
-
 /* get the dirty bitmap for a specific range of pfns */
 void paging_log_dirty_range(struct domain *d,
                             unsigned long begin_pfn,
@@ -144,9 +141,6 @@ void paging_log_dirty_range(struct domai
 /* enable log dirty */
 int paging_log_dirty_enable(struct domain *d, bool_t log_global);
 
-/* disable log dirty */
-int paging_log_dirty_disable(struct domain *d);
-
 /* log dirty initialization */
 void paging_log_dirty_init(struct domain *d,
                            int  (*enable_log_dirty)(struct domain *d,
@@ -203,10 +197,13 @@ int paging_domain_init(struct domain *d,
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
  * manipulate the log-dirty bitmap. */
 int paging_domctl(struct domain *d, xen_domctl_shadow_op_t *sc,
-                  XEN_GUEST_HANDLE_PARAM(void) u_domctl);
+                  XEN_GUEST_HANDLE_PARAM(void) u_domctl, bool_t resuming);
+
+/* Helper hypercall for dealing with continuations. */
+long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t));
 
 /* Call when destroying a domain */
-void paging_teardown(struct domain *d);
+int paging_teardown(struct domain *d);
 
 /* Call once all of the references to the domain have gone away */
 void paging_final_teardown(struct domain *d);




* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Andrew Cooper @ 2014-09-11 18:27 UTC
  To: Jan Beulich, xen-devel; +Cc: Keir Fraser, Tim Deegan

On 05/09/14 11:47, Jan Beulich wrote:
> Both the freeing and the inspection of the bitmap get done in (nested)
> loops which - besides having a rather high iteration count in general,
> albeit that would be covered by XSA-77 - have the number of non-trivial
> iterations they need to perform (indirectly) controllable by both the
> guest they are for and any domain controlling the guest (including the
> one running qemu for it).
>
> Note that the tying of the continuations to the invoking domain (which
> previously [wrongly] used the invoking vCPU instead) implies that the
> tools requesting such operations have to make sure they don't issue
> multiple similar operations in parallel.
>
> Note further that this breaks supervisor-mode kernel assumptions in
> hypercall_create_continuation() (where regs->eip gets rewound to the
> current hypercall stub beginning), but otoh
> hypercall_cancel_continuation() doesn't work in that mode either.
> Perhaps time to rip out all the remains of that feature?
>
> This is part of CVE-2014-5146 / XSA-97.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Tim Deegan <tim@xen.org>

Unfortunately XenRT is reliably finding issues with this version of
the patch.

Taking two builds of XenServer, identical other than this patch
(Xen-4.4.1 based, adjusted for -EAGAIN/-ERESTART), the build without it
is fine, but the build with it appears to show page accounting issues.

The logs below are from a standard vmlifecycle ops test of RHEL6.2 with
a 32bit and 64bit PV guest undergoing tests in tandem.

E.g:

(XEN) [ 4141.838508] mm.c:2352:d0v1 Bad type (saw 7400000000000001 !=
exp 1000000000000000) for mfn 2317f0 (pfn 14436)
(XEN) [ 4141.838512] mm.c:2995:d0v1 Error while pinning mfn 2317f0

Failure to pin a batch of domain 78's pagetables on restore. 

(XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but
real_pg_owner 99
(XEN) [ 7832.953072] mm.c:898:d0v0 Error getting mfn 854c3 (pfn 2c820)
from L1 entry 00000000854c3025 for l1e_owner=100, pg_owner=100
(XEN) [ 7832.953076] mm.c:1221:d0v0 Failure in alloc_l1_table: entry 488
(XEN) [ 7832.953083] mm.c:2099:d0v0 Error while validating mfn 12406d
(pfn 18fbe) for type 1000000000000000: caf=8000000000000003
taf=1000000000000001
(XEN) [ 7832.953086] mm.c:906:d0v0 Attempt to create linear p.t. with
write perms
(XEN) [ 7832.953089] mm.c:1297:d0v0 Failure in alloc_l2_table: entry 4
(XEN) [ 7832.953100] mm.c:2099:d0v0 Error while validating mfn 23ebe4
(pfn 1db65) for type 2000000000000000: caf=8000000000000003
taf=2000000000000001
(XEN) [ 7832.953104] mm.c:948:d0v0 Attempt to create linear p.t. with
write perms
(XEN) [ 7832.953106] mm.c:1379:d0v0 Failure in alloc_l3_table: entry 0
(XEN) [ 7832.953110] mm.c:2099:d0v0 Error while validating mfn 2019db
(pfn 18eaf) for type 3000000000000000: caf=8000000000000003
taf=3000000000000001
(XEN) [ 7832.953113] mm.c:2995:d0v0 Error while pinning mfn 2019db

Failure to pin a batch of domain 100's pagetables on restore.

In both of these cases, the save side succeeds, which means the
pagetable normalisation found fully complete and correct pagetables
(i.e. the p2m and m2p agreed), and
xc_get_pfn_type_batch()/xc_map_foreign_bulk() didn't fail any domain
ownership tests.

On inspection of the libxc logs, I am feeling quite glad I left this
debugging message in:

xenguest-75-save[11876]: xc: detail: Bitmap contained more entries than
expected...
xenguest-83-save[32123]: xc: detail: Bitmap contained more entries than
expected...
xenguest-84-save[471]: xc: detail: Bitmap contained more entries than
expected...
xenguest-88-save[3823]: xc: detail: Bitmap contained more entries than
expected...
xenguest-89-save[4656]: xc: detail: Bitmap contained more entries than
expected...
xenguest-95-save[9379]: xc: detail: Bitmap contained more entries than
expected...
xenguest-98-save[11784]: xc: detail: Bitmap contained more entries than
expected...

This means that periodically, a XEN_DOMCTL_SHADOW_OP_{CLEAN,PEEK}
hypercall gives us back a bitmap with more set bits than
stats.dirty_count which it hands back at the same time.
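
For reference, the check behind that message is essentially a
population count of the returned bitmap compared against the stats
handed back alongside it - a minimal sketch with illustrative names,
not the actual migration v2 code:

    #include <stddef.h>
    #include <stdio.h>

    /* Sketch only: count the bits set in the bitmap returned by
     * XEN_DOMCTL_SHADOW_OP_{CLEAN,PEEK} and compare against the
     * dirty_count handed back by the same hypercall. */
    static void check_dirty_stats(const unsigned long *bitmap,
                                  size_t words,
                                  unsigned int dirty_count)
    {
        size_t i;
        unsigned long set_bits = 0;

        for ( i = 0; i < words; ++i )
            set_bits += __builtin_popcountl(bitmap[i]);

        if ( set_bits > dirty_count )
            fprintf(stderr,
                    "Bitmap contained more entries than expected...\n");
    }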

Domain 75 (the 64bit) was the first with the bitmap error; it migrated
to domain 76, then to 78, which suffered a pinning failure.  Beyond
this point, the 32bit domain continues testing, and suffers a similar
problem later.

I have found a bug in my accounting code (need to change two set_bit()s
to test_and_set_bit()s before blindly incrementing the stat), but the
precondition which tickles this bug indicates something is going awry
with the final logdirty bitmap as used by the migration code.
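
In other words, the accounting needs to be of this shape (a sketch
with illustrative names, not the exact code):

    /* Buggy form: increments even when the bit was already set, so a
     * page dirtied twice in one pass gets counted twice. */
    set_bit(pfn, dirty_bitmap);
    stats.dirty_count++;

    /* Fixed form: only count bits actually transitioning 0 -> 1. */
    if ( !test_and_set_bit(pfn, dirty_bitmap) )
        stats.dirty_count++;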

Unfortunately, I am now out of the office for 6 working days (back on
Monday 22nd), but will be sporadically on email during that time.

~Andrew


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Jan Beulich @ 2014-09-12 12:18 UTC
  To: Andrew Cooper, Tim Deegan; +Cc: xen-devel, Keir Fraser

>>> On 11.09.14 at 20:27, <andrew.cooper3@citrix.com> wrote:
> On inspection of the libxc logs, I am feeling quite glad I left this
> debugging message in:
> 
> xenguest-75-save[11876]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-83-save[32123]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-84-save[471]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-88-save[3823]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-89-save[4656]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-95-save[9379]: xc: detail: Bitmap contained more entries than
> expected...
> xenguest-98-save[11784]: xc: detail: Bitmap contained more entries than
> expected...
> 
> This means that periodically, a XEN_DOMCTL_SHADOW_OP_{CLEAN,PEEK}
> hypercall gives us back a bitmap with more set bits than
> stats.dirty_count which it hands back at the same time.

Which has quite a simple explanation, at least when (as I assume is
the case) XEN_DOMCTL_SHADOW_OP_CLEAN is being used: We clear
d->arch.paging.log_dirty.{dirty,fault}_count on each step of the
continuation instead of just on the final one. That code needs moving
down.
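
I.e. the clearing wants to become conditional on the operation having
actually completed, along these lines (sketch only, against
paging_log_dirty_op() as in the patch):

    /* Reset the counts only once the final continuation step has
     * succeeded, i.e. once rv is known not to be -ERESTART. */
    if ( !rv && clean )
    {
        d->arch.paging.log_dirty.fault_count = 0;
        d->arch.paging.log_dirty.dirty_count = 0;
    }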

This, however, points at a broader problem: The necessary dropping
of the paging lock between continuation steps implies that subsequent
setting of bits in the map would need to distinguish between ranges
where the bitmap already got copied and those still outstanding when
it comes to updating dirty_count. I.e. no matter whether we leave
the current copying/clearing where it is or move it down, the counts
wouldn't be precise.

Now both from your description and from looking at current code
I conclude that this is with your new migration code, since current
code uses both counts only for printing messages. Which in turn
raises the question whether use of these counts, getting exported
just as statistics, for anything other than statistics is actually
appropriate. Otoh, if your new migration code doesn't use the
counts for non-statistical, non-logging purposes I still can't see why
things go wrong for you but not for me or osstest.

Jan


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Andrew Cooper @ 2014-09-15  7:50 UTC
  To: Jan Beulich, Tim Deegan; +Cc: xen-devel, Keir Fraser

On 12/09/2014 13:18, Jan Beulich wrote:
>>>> On 11.09.14 at 20:27, <andrew.cooper3@citrix.com> wrote:
>> On inspection of the libxc logs, I am feeling quite glad I left this
>> debugging message in:
>>
>> xenguest-75-save[11876]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-83-save[32123]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-84-save[471]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-88-save[3823]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-89-save[4656]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-95-save[9379]: xc: detail: Bitmap contained more entries than
>> expected...
>> xenguest-98-save[11784]: xc: detail: Bitmap contained more entries than
>> expected...
>>
>> This means that periodically, a XEN_DOMCTL_SHADOW_OP_{CLEAN,PEEK}
>> hypercall gives us back a bitmap with more set bits than
>> stats.dirty_count which it hands back at the same time.
> Which has quite a simple explanation, at least when (as I assume is
> the case) XEN_DOMCTL_SHADOW_OP_CLEAN is being used: We clear
> d->arch.paging.log_dirty.{dirty,fault}_count on each step of the
> continuation instead of just on the final one. That code needs moving
> down.

Agreed.

>
> This, however, points at a broader problem: The necessary dropping
> of the paging lock between continuation steps implies that subsequent
> setting of bits in the map would need to distinguish between ranges
> where the bitmap already got copied and those still outstanding when
> it comes to updating dirty_count. I.e. no matter whether we leave
> the current copying/clearing where it is or move it down, the counts
> wouldn't be precise.

Having  a "logdirty dying" or "logdirty shutdown" state could compensate 
for this, by no longer having paging_log_dirty() true while the 
continuations are in progress.

>
> Now both from your description and from looking at current code
> I conclude that this is with your new migration code, since current
> code uses both counts only for printing messages. Which in turn
> raises the question whether use of these counts, getting exported
> just as statistics, for anything other than statistics is actually
> appropriate. Otoh, if your new migration code doesn't use the
> counts for non-statistical, non-logging purposes I still can't see why
> things go wrong for you but not for me or osstest.

It is indeed migration v2, which is necessary in XenServer given our 
recent switch from 32bit dom0 to 64bit.  The counts are only used for 
logging, and debugging purposes; all movement of pages is based off the 
bits in the bitmap alone.  In particular, the dirty count is used as a 
basis of the statistics for the present iteration of migration.  While 
getting it wrong is not the end of the world, it would certainly be 
preferable for the count to be accurate.

As for the memory corruption, XenRT usually tests pairs of VMs at a time 
(32 and 64bit variants) and all operations as back-to-back as possible.  
Therefore, it is highly likely that a continued operation on one domain 
intersects with other paging operations on another.

The results (now they have run fully) are 10 tests each.  10 passes 
without this patch, and 10 failures in similar ways with the patch, 
spread across a randomly selected set of hardware.

We also see failures with HVM VMs, although there are no errors at all
from Xen or the toolstack components.  Symptoms range from BSODs to
simply wedging with no apparent cause (I have not had a live repro to
investigate).

~Andrew


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Jan Beulich @ 2014-09-15 12:54 UTC
  To: Andrew Cooper, Tim Deegan; +Cc: xen-devel, Keir Fraser

>>> On 15.09.14 at 09:50, <andrew.cooper3@citrix.com> wrote:
> It is indeed migration v2, which is necessary in XenServer given our 
> recent switch from 32bit dom0 to 64bit.  The counts are only used for 
> logging, and debugging purposes; all movement of pages is based off the 
> bits in the bitmap alone.  In particular, the dirty count is used as a 
> basis of the statistics for the present iteration of migration.  While 
> getting it wrong is not the end of the world, it would certainly be 
> preferable for the count to be accurate.
> 
> As for the memory corruption, XenRT usually tests pairs of VMs at a time 
> (32 and 64bit variants) and all operations as back-to-back as possible.  
> Therefore, it is highly likely that a continued operation on one domain 
> intersects with other paging operations on another.

But there's nothing I can see where domains would have a way
of getting mismatched. It is in particular this one

(XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but real_pg_owner 99

which puzzles me: Assuming Dom99 was the original one, how
would Dom100 get hold of any of Dom99's pages (IOW why would
Dom0 map one of Dom99's pages into Dom100)? The patch doesn't
alter any of the page refcounting after all. Nor does your v2
migration series I would think.

In general I understand that you - as much as I - suspect that we're
losing one or more bits from the dirty bitmap (too many being set
wouldn't do any harm other than affecting performance, afaict),
but that scenario doesn't seem to fit with your observations.

> The results (now they have run fully) are 10 tests each.  10 passes 
> without this patch, and 10 failures in similar ways with the patch, 
> spread across a randomly selected set of hardware.

I was meanwhile considering the call to
d->arch.paging.log_dirty.clean_dirty_bitmap() getting made only
in the final success exit case to be a problem (with the paging lock
dropped perhaps multiple times in between), but I'm pretty certain
it isn't: Newly dirtied pages would get accounted correctly in the
bitmap no matter whether they're in the range already processed
or the remainder, and ones already having been p2m_ram_rw
would have no problem if further writes to them happen while we
do continuations. The only thing potentially suffering here seems
efficiency: We might return a few pages to p2m_ram_logdirty
without strict need (but that issue existed before already, we're
just widening the window).

Jan


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
From: Andrew Cooper @ 2014-09-15 13:56 UTC
  To: Jan Beulich, Tim Deegan; +Cc: xen-devel, Keir Fraser

On 15/09/2014 13:54, Jan Beulich wrote:
>>>> On 15.09.14 at 09:50, <andrew.cooper3@citrix.com> wrote:
>> It is indeed migration v2, which is necessary in XenServer given our
>> recent switch from 32bit dom0 to 64bit.  The counts are only used for
>> logging, and debugging purposes; all movement of pages is based off the
>> bits in the bitmap alone.  In particular, the dirty count is used as a
>> basis of the statistics for the present iteration of migration.  While
>> getting it wrong is not the end of the world, it would certainly be
>> preferable for the count to be accurate.
>>
>> As for the memory corruption, XenRT usually tests pairs of VMs at a time
>> (32 and 64bit variants) and all operations as back-to-back as possible.
>> Therefore, it is highly likely that a continued operation on one domain
>> intersects with other paging operations on another.
> But there's nothing I can see by which domains could get
> mismatched. It is in particular this one
>
> (XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but real_pg_owner 99
>
> which puzzles me: Assuming Dom99 was the original one, how
> would Dom100 get hold of any of Dom99's pages (IOW why would
> Dom0 map one of Dom99's pages into Dom100)? The patch doesn't
> alter any of the page refcounting after all. Nor does your v2
> migration series I would think.

In this case, dom99 was migrating to dom100.  The failure was part of 
verifying dom100v0's cr3 at the point of loading vcpu state, so Xen was 
in the process of pinning pagetables.

There were no errors on pagetable normalisation, so dom99's PTEs were 
all correct, and there were no errors restoring any of dom100's memory, 
so Xen fully allocated frames for dom100's memory during 
populate_physmap() hypercalls.

During pagetable normalisation, dom99's pfns in the stream are converted 
to dom100's mfns as per the newly created p2m from the 
populate_physmap() allocations.  Then during dom100's cr3 validation, it 
finds a dom99 PTE and complains.

Therefore, a frame Xen handed back to the toolstack as part of 
allocating dom100's memory still belonged to dom99.
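
The two conversions, and what a missed frame would mean, can be
modelled like so (hypothetical names, with flat arrays standing in
for the real m2p/p2m lookups; not the migration v2 code):

#include <stdint.h>
#include <stdio.h>

#define ADDR_MASK 0x000ffffffffff000ULL

/* Rewrite the frame number in a PTE via the given lookup table:
 * save-side normalisation uses the source's m2p (mfn -> pfn),
 * the restore side uses the newly built p2m (pfn -> mfn). */
static uint64_t remap_pte(uint64_t pte, const uint64_t *map)
{
    return (map[(pte & ADDR_MASK) >> 12] << 12) | (pte & ~ADDR_MASK);
}

int main(void)
{
    uint64_t m2p[256] = { 0 }, p2m[256] = { 0 };
    m2p[0x42] = 7;   /* source side: mfn 0x42 backs guest pfn 7 */
    p2m[7] = 0x99;   /* restore side: pfn 7 got a fresh frame   */

    uint64_t pte = (0x42ULL << 12) | 0x63;  /* PRESENT|RW|A|D */

    /* A frame correctly marked as a page table goes through both
     * conversions and references the new domain's frame (0x99): */
    printf("converted: frame %#llx\n", (unsigned long long)
           ((remap_pte(remap_pte(pte, m2p), p2m) & ADDR_MASK) >> 12));

    /* One the saver failed to mark is copied verbatim and still
     * references the source domain's frame (0x42) after restore: */
    printf("skipped:   frame %#llx\n",
           (unsigned long long)((pte & ADDR_MASK) >> 12));
    return 0;
}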

>
> In general I understand that you, as much as I, suspect that we're
> losing one or more bits from the dirty bitmap (too many being set
> wouldn't do any harm other than affecting performance, afaict),
> but that scenario doesn't seem to fit with your observations.

I would agree - it is not obvious how changes confined to the
logdirty handling could be causing corruption of this sort.

I think I will need to debug this issue properly, but I won't be in a 
position to do that until next week.

>
>> The results (now they have run fully) are 10 tests each.  10 passes
>> without this patch, and 10 failures in similar ways with the patch,
>> spread across a randomly selected set of hardware.
> I was meanwhile considering whether the call to
> d->arch.paging.log_dirty.clean_dirty_bitmap() being made only
> in the final success exit case is a problem (with the paging lock
> dropped perhaps multiple times in between), but I'm pretty certain
> it isn't: Newly dirtied pages would get accounted correctly in the
> bitmap no matter whether they're in the range already processed
> or the remainder, and ones already having been p2m_ram_rw
> would have no problem if further writes to them happen while we
> do continuations. The only thing potentially suffering here seems
> to be efficiency: We might return a few pages to p2m_ram_logdirty
> without strict need (but that issue existed before already; we're
> just widening the window).

It will defer the notification of a page being dirtied until the
subsequent CLEAN/PEEK operation, but I believe it's all fine.  The final
CLEAN operation happens after pausing the domain, so there will be no
activity (other than from the backends, which are compensated for).
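
A toy model of that ordering (hypothetical names; not the actual
migration loop or the libxc API):

#include <stdio.h>

static int pending = 500;  /* pages the running guest keeps dirtying */
static int paused;

static int clean_dirty_bitmap(void)      /* CLEAN: report and clear */
{
    int reported = pending;
    pending = paused ? 0 : pending / 4;  /* guest redirties pages
                                          * only while running */
    return reported;
}

int main(void)
{
    int dirty = clean_dirty_bitmap();
    while (dirty > 50) {                 /* live iterations */
        printf("live pass: %d pages\n", dirty);
        dirty = clean_dirty_bitmap();
    }
    paused = 1;                          /* pause_domain() stand-in  */
    dirty += clean_dirty_bitmap();       /* authoritative final pass */
    printf("final pass: %d pages\n", dirty);
    return 0;
}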

~Andrew


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
  2014-09-15 13:56         ` Andrew Cooper
@ 2014-09-15 14:20           ` Jan Beulich
  2014-09-15 14:37             ` Andrew Cooper
  0 siblings, 1 reply; 8+ messages in thread
From: Jan Beulich @ 2014-09-15 14:20 UTC (permalink / raw)
  To: Andrew Cooper, Tim Deegan; +Cc: xen-devel, Keir Fraser

>>> On 15.09.14 at 15:56, <andrew.cooper3@citrix.com> wrote:
> On 15/09/2014 13:54, Jan Beulich wrote:
>>>>> On 15.09.14 at 09:50, <andrew.cooper3@citrix.com> wrote:
>>> It is indeed migration v2, which is necessary in XenServer given our
>>> recent switch from 32bit dom0 to 64bit.  The counts are only used for
>>> logging, and debugging purposes; all movement of pages is based off the
>>> bits in the bitmap alone.  In particular, the dirty count is used as a
>>> basis of the statistics for the present iteration of migration.  While
>>> getting it wrong is not the end of the world, it would certainly be
>>> preferable for the count to be accurate.
>>>
>>> As for the memory corruption, XenRT usually tests pairs of VMs at a time
>>> (32 and 64bit variants) and all operations as back-to-back as possible.
>>> Therefore, it is highly likely that a continued operation on one domain
>>> intersects with other paging operations on another.
>> But there's nothing I can see where domains would have a way
>> of getting mismatched. It is in particular this one
>>
>> (XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but real_pg_owner 99
>>
>> which puzzles me: Assuming Dom99 was the original one, how
>> would Dom100 get hold of any of Dom99's pages (IOW why would
>> Dom0 map one of Dom99's pages into Dom100)? The patch doesn't
>> alter any of the page refcounting after all. Nor does your v2
>> migration series I would think.
> 
> In this case, dom99 was migrating to dom100.  The failure was part of 
> verifying dom100v0's cr3 at the point of loading vcpu state, so Xen was 
> in the process of pinning pagetables.
> 
> There were no errors on pagetable normalisation, so dom99's PTEs were 
> all correct, and there were no errors restoring any of dom100's memory, 
> so Xen fully allocated frames for dom100's memory during 
> populate_physmap() hypercalls.
> 
> During pagetable normalisation, dom99's pfns in the stream are converted 
> to dom100's mfns as per the newly created p2m from the 
> populate_physmap() allocations.  Then during dom100's cr3 validation, it 
> finds a dom99 PTE and complains.
> 
> Therefore, a frame Xen handed back to the toolstack as part of 
> allocating dom100's memory still belonged to dom99.

Or on the saving side some page table(s) didn't get normalized at
all (in which case there necessarily also were no errors detected
for them). Frames not marked as page tables would then also not
get converted back to machine representation on restore, resulting
in references to pages still belonging to the old domain.

But together with the memory corruption you mentioned seen in
HVM guests all of the above may just be secondary effects.

Jan


* Re: [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible
  2014-09-15 14:20           ` Jan Beulich
@ 2014-09-15 14:37             ` Andrew Cooper
  0 siblings, 0 replies; 8+ messages in thread
From: Andrew Cooper @ 2014-09-15 14:37 UTC (permalink / raw)
  To: Jan Beulich, Tim Deegan; +Cc: xen-devel, Keir Fraser


On 15/09/2014 15:20, Jan Beulich wrote:
>>>> On 15.09.14 at 15:56, <andrew.cooper3@citrix.com> wrote:
>> On 15/09/2014 13:54, Jan Beulich wrote:
>>>>>> On 15.09.14 at 09:50, <andrew.cooper3@citrix.com> wrote:
>>>> It is indeed migration v2, which is necessary in XenServer given our
>>>> recent switch from 32bit dom0 to 64bit.  The counts are only used for
>>>> logging, and debugging purposes; all movement of pages is based off the
>>>> bits in the bitmap alone.  In particular, the dirty count is used as a
>>>> basis of the statistics for the present iteration of migration.  While
>>>> getting it wrong is not the end of the world, it would certainly be
>>>> preferable for the count to be accurate.
>>>>
>>>> As for the memory corruption, XenRT usually tests pairs of VMs at a time
>>>> (32 and 64bit variants) and all operations as back-to-back as possible.
>>>> Therefore, it is highly likely that a continued operation on one domain
>>>> intersects with other paging operations on another.
>>> But there's nothing I can see by which domains could get
>>> mismatched. It is in particular this one
>>>
>>> (XEN) [ 7832.953068] mm.c:827:d0v0 pg_owner 100 l1e_owner 100, but real_pg_owner 99
>>>
>>> which puzzles me: Assuming Dom99 was the original one, how
>>> would Dom100 get hold of any of Dom99's pages (IOW why would
>>> Dom0 map one of Dom99's pages into Dom100)? The patch doesn't
>>> alter any of the page refcounting after all. Nor does your v2
>>> migration series I would think.
>> In this case, dom99 was migrating to dom100.  The failure was part of
>> verifying dom100v0's cr3 at the point of loading vcpu state, so Xen was
>> in the process of pinning pagetables.
>>
>> There were no errors on pagetable normalisation, so dom99's PTEs were
>> all correct, and there were no errors restoring any of dom100's memory,
>> so Xen fully allocated frames for dom100's memory during
>> populate_physmap() hypercalls.
>>
>> During pagetable normalisation, dom99's pfns in the stream are converted
>> to dom100's mfns as per the newly created p2m from the
>> populate_physmap() allocations.  Then during dom100's cr3 validation, it
>> finds a dom99 PTE and complains.
>>
>> Therefore, a frame Xen handed back to the toolstack as part of
>> allocating dom100's memory still belonged to dom99.
> Or on the saving side some page table(s) didn't get normalized at
> all (in which case there necessarily also were no errors detected
> with that). Not being marked as page table(s) would then also lead
> to not getting converted back to machine representation on restore,
> resulting in a reference to a page belonging to the old domain.
>
> But together with the memory corruption you mentioned seen in
> HVM guests all of the above may just be secondary effects.

Yes - that is my suspicion as well, although I was hoping that the 
failures would give some hints as to the root cause.

~Andrew



Thread overview: 8+ messages
2014-09-05 10:47 [PATCH v5][XSA-97] x86/paging: make log-dirty operations preemptible Jan Beulich
2014-09-11 18:27 ` Andrew Cooper
2014-09-12 12:18   ` Jan Beulich
2014-09-15  7:50     ` Andrew Cooper
2014-09-15 12:54       ` Jan Beulich
2014-09-15 13:56         ` Andrew Cooper
2014-09-15 14:20           ` Jan Beulich
2014-09-15 14:37             ` Andrew Cooper
