* [Xen-devel] [PATCH v5 00/18] VM forking
@ 2020-01-21 17:49 Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params Tamas K Lengyel
                   ` (17 more replies)
  0 siblings, 18 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Petre Pircalabu, Tamas K Lengyel, Tamas K Lengyel, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Anthony PERARD, Stefano Stabellini, Alexandru Isaila,
	Julien Grall, Roger Pau Monné

The following series implements VM forking for Intel HVM guests to allow for
the fast creation of identical VMs without the associated high startup costs
of booting or restoring the VM from a savefile.

JIRA issue: https://xenproject.atlassian.net/browse/XEN-89

The fork operation is implemented as part of the "xl fork-vm" command:
    xl fork-vm -C <config_file_for_fork> -Q <qemu_save_file> <parent_domid>
    
By default a fully functional fork is created. The user is however responsible
for creating the appropriate config file for the fork and for generating the
QEMU save file before the fork-vm call is made. At minimum the config file
needs to give the fork a new name, but other settings may also require changes.
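As an illustration (not part of the series), the fork's config can be derived from the parent's by rewriting the name entry. The helper below is hypothetical and assumes a simple `key = "value"` xl config syntax; real configs may need further edits (disks, vifs, etc.) to avoid clashing with the parent:

```python
import re

def derive_fork_config(parent_cfg: str, fork_name: str) -> str:
    """Return a copy of an xl config with the 'name' entry replaced.

    Hypothetical helper for illustration only: other settings may also
    need changing before the config is usable for a fork.
    """
    out, replaced = [], False
    for line in parent_cfg.splitlines():
        if re.match(r'\s*name\s*=', line):
            out.append(f'name = "{fork_name}"')
            replaced = True
        else:
            out.append(line)
    if not replaced:  # parent config had no name entry at all
        out.append(f'name = "{fork_name}"')
    return "\n".join(out)
```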

The interface also allows splitting the forking into two steps:
    xl fork-vm --launch-dm no \
               -p <parent_domid>
    xl fork-vm --launch-dm late \
               -C <config_file_for_fork> \
               -Q <qemu_save_file> \
               <fork_domid>

The split creation model is useful when the VM needs to be created as fast as
possible. The forked VM can be unpaused without the device model being
launched, and can then be monitored and accessed via VMI. Note however that
without its device model running the fork is bound to misbehave or even crash
(depending on what is executing in the VM) when it tries to access devices
that would be emulated by QEMU. We anticipate that for certain use-cases this
is acceptable, for example when fuzzing is performed on code segments that
don't access such devices.

Launching the device model requires the QEMU Xen savefile to be generated
manually from the parent VM. This can be accomplished simply by connecting to
its QMP socket and issuing the "xen-save-devices-state" command. For example
using the standard tool socat these commands can be used to generate the file:
    socat - UNIX-CONNECT:/var/run/xen/qmp-libxl-<parent_domid>
    { "execute": "qmp_capabilities" }
    { "execute": "xen-save-devices-state", \
        "arguments": { "filename": "/path/to/save/qemu_state", \
                        "live": false} }
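The same exchange can be scripted instead of typed into socat. The sketch below only constructs the JSON payloads sent over the QMP socket (the socket wiring itself is omitted); it mirrors the commands above and is an illustrative helper, not part of the series:

```python
import json

def qmp_save_devices_cmds(filename: str) -> list[str]:
    """Build the QMP command lines used to generate the QEMU save file.

    Mirrors the socat exchange: capabilities negotiation first, then
    xen-save-devices-state with live migration disabled.
    """
    cmds = [
        {"execute": "qmp_capabilities"},
        {"execute": "xen-save-devices-state",
         "arguments": {"filename": filename, "live": False}},
    ]
    # QMP expects one JSON object per line on the socket
    return [json.dumps(c) for c in cmds]
```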

At runtime the forked VM starts running with an empty p2m which gets lazily
populated when the VM generates EPT faults, similar to how altp2m views are
populated. If the memory access is a read-only access, the p2m entry is
populated with a memory shared entry with its parent. For write memory accesses
or in case memory sharing wasn't possible (for example in case a reference is
held by a third party), a new page is allocated and the page contents are
copied over from the parent VM. Forks can be further forked if needed, thus
allowing for further memory savings.
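The lazy-population behavior described above can be modeled in miniature (a toy sketch, not the hypervisor code): a read fault populates a shared entry aliasing the parent's page, while a write fault allocates a private copy for the fork; a reset simply drops all populated entries:

```python
class ForkMemory:
    """Toy model of a fork's lazily populated p2m.

    Shared entries alias the parent's pages; a write populates a
    private copy instead, roughly as described in the cover letter.
    """
    def __init__(self, parent_pages):
        self.parent = parent_pages  # gfn -> page contents
        self.p2m = {}               # gfn -> ("shared", None) | ("private", data)

    def read(self, gfn):
        if gfn not in self.p2m:     # fault: populate with a shared entry
            self.p2m[gfn] = ("shared", None)
        kind, data = self.p2m[gfn]
        return self.parent[gfn] if kind == "shared" else data

    def write(self, gfn, data):
        # write fault: allocate a private entry holding the new contents
        self.p2m[gfn] = ("private", data)

    def reset(self):
        """Fork reset: drop all entries, back to the post-fork state."""
        self.p2m.clear()
```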

A VM fork reset hypercall is also added that allows the fork to be reset to the
state it was just after a fork, also accessible via xl:
    xl fork-vm --fork-reset -p <fork_domid>

This is an optimization for cases where the forks are very short-lived and run
without a device model, so resetting saves some time compared to creating a
brand new fork, provided the fork has not acquired a lot of memory. If the
fork has a lot of deduplicated memory it is likely going to be faster to
create a new fork from scratch and asynchronously destroy the old one.

The series has been tested with both Linux and Windows VMs and functions as
expected. VM forking time has been measured to be 0.0007s, device model launch
to be around 1s depending largely on the number of devices being emulated. Fork
resets have been measured to be 0.0001s under the optimal circumstances.

Patches 1-2 implement changes to existing internal Xen APIs to make VM forking
possible.

Patches 3-14 are code cleanups and adjustments to the Xen memory sharing
subsystem with no functional changes.

Patch 15 adds the hypervisor-side code implementing VM forking.

Patch 16 is integration of mem_access with forked VMs.

Patch 17 implements the VM fork reset operation hypervisor side bits.

Patch 18 adds the toolstack-side code implementing VM forking and reset.

New in v5: Patch 3 and 14.

Patches without Acks or Reviews:
    1, 2, 3, 11, 12, 14, 15, 16, 17, 18

Tamas K Lengyel (18):
  x86/hvm: introduce hvm_copy_context_and_params
  xen/x86: Make hap_get_allocation accessible
  x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  x86/mem_sharing: make get_two_gfns take locks conditionally
  x86/mem_sharing: drop flags from mem_sharing_unshare_page
  x86/mem_sharing: don't try to unshare twice during page fault
  x86/mem_sharing: define mem_sharing_domain to hold some scattered
    variables
  x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in
    relinquish_shared_pages
  x86/mem_sharing: Make add_to_physmap static and shorten name
  x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  x86/mem_sharing: Enable mem_sharing on first memop
  x86/mem_sharing: Skip xen heap pages in memshr nominate
  x86/mem_sharing: use default_access in add_to_physmap
  xen/mem_sharing: VM forking
  xen/mem_access: Use __get_gfn_type_access in set_mem_access
  x86/mem_sharing: reset a fork
  xen/tools: VM forking toolstack side

 docs/man/xl.1.pod.in              |  36 +++
 tools/libxc/include/xenctrl.h     |  13 +
 tools/libxc/xc_memshr.c           |  22 ++
 tools/libxl/libxl.h               |   7 +
 tools/libxl/libxl_create.c        | 237 ++++++++++-----
 tools/libxl/libxl_dm.c            |   2 +-
 tools/libxl/libxl_dom.c           |  83 ++++--
 tools/libxl/libxl_internal.h      |   1 +
 tools/libxl/libxl_types.idl       |   1 +
 tools/xl/xl.h                     |   5 +
 tools/xl/xl_cmdtable.c            |  12 +
 tools/xl/xl_saverestore.c         |  96 ++++++
 tools/xl/xl_vmcontrol.c           |   8 +
 xen/arch/x86/domain.c             |   9 +
 xen/arch/x86/hvm/hvm.c            | 267 ++++++++++-------
 xen/arch/x86/mm/hap/hap.c         |   3 +-
 xen/arch/x86/mm/mem_access.c      |   5 +-
 xen/arch/x86/mm/mem_sharing.c     | 474 ++++++++++++++++++++++++------
 xen/arch/x86/mm/p2m.c             |  21 +-
 xen/common/memory.c               |   2 +-
 xen/drivers/passthrough/pci.c     |   3 +-
 xen/include/asm-x86/hap.h         |   1 +
 xen/include/asm-x86/hvm/domain.h  |   6 +-
 xen/include/asm-x86/hvm/hvm.h     |   2 +
 xen/include/asm-x86/mem_sharing.h |  42 ++-
 xen/include/asm-x86/p2m.h         |  14 +-
 xen/include/public/memory.h       |   6 +
 xen/include/xen/sched.h           |   2 +
 28 files changed, 1051 insertions(+), 329 deletions(-)

-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-22 15:00   ` Jan Beulich
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 02/18] xen/x86: Make hap_get_allocation accessible Tamas K Lengyel
                   ` (16 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Roger Pau Monné

Currently the hvm parameters are only accessible via the HVMOP hypercalls. In
this patch we introduce a new function that can copy both the hvm context and
parameters directly into a target domain. No functional changes in existing
code.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/hvm/hvm.c        | 240 +++++++++++++++++++++-------------
 xen/include/asm-x86/hvm/hvm.h |   2 +
 2 files changed, 151 insertions(+), 91 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4723f5d09c..651998e783 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4067,16 +4067,17 @@ static int hvmop_set_evtchn_upcall_vector(
 }
 
 static int hvm_allow_set_param(struct domain *d,
-                               const struct xen_hvm_param *a)
+                               uint32_t index,
+                               uint64_t new_value)
 {
-    uint64_t value = d->arch.hvm.params[a->index];
+    uint64_t value = d->arch.hvm.params[index];
     int rc;
 
     rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
     if ( rc )
         return rc;
 
-    switch ( a->index )
+    switch ( index )
     {
     /* The following parameters can be set by the guest. */
     case HVM_PARAM_CALLBACK_IRQ:
@@ -4109,7 +4110,7 @@ static int hvm_allow_set_param(struct domain *d,
     if ( rc )
         return rc;
 
-    switch ( a->index )
+    switch ( index )
     {
     /* The following parameters should only be changed once. */
     case HVM_PARAM_VIRIDIAN:
@@ -4119,7 +4120,7 @@ static int hvm_allow_set_param(struct domain *d,
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     case HVM_PARAM_ALTP2M:
     case HVM_PARAM_MCA_CAP:
-        if ( value != 0 && a->value != value )
+        if ( value != 0 && new_value != value )
             rc = -EEXIST;
         break;
     default:
@@ -4129,49 +4130,32 @@ static int hvm_allow_set_param(struct domain *d,
     return rc;
 }
 
-static int hvmop_set_param(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_param_t) arg)
+static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
 {
     struct domain *curr_d = current->domain;
-    struct xen_hvm_param a;
-    struct domain *d;
-    struct vcpu *v;
     int rc;
+    struct vcpu *v;
 
-    if ( copy_from_guest(&a, arg, 1) )
-        return -EFAULT;
-
-    if ( a.index >= HVM_NR_PARAMS )
+    if ( index >= HVM_NR_PARAMS )
         return -EINVAL;
 
-    /* Make sure the above bound check is not bypassed during speculation. */
-    block_speculation();
-
-    d = rcu_lock_domain_by_any_id(a.domid);
-    if ( d == NULL )
-        return -ESRCH;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = hvm_allow_set_param(d, &a);
+    rc = hvm_allow_set_param(d, index, value);
     if ( rc )
         goto out;
 
-    switch ( a.index )
+    switch ( index )
     {
     case HVM_PARAM_CALLBACK_IRQ:
-        hvm_set_callback_via(d, a.value);
+        hvm_set_callback_via(d, value);
         hvm_latch_shinfo_size(d);
         break;
     case HVM_PARAM_TIMER_MODE:
-        if ( a.value > HVMPTM_one_missed_tick_pending )
+        if ( value > HVMPTM_one_missed_tick_pending )
             rc = -EINVAL;
         break;
     case HVM_PARAM_VIRIDIAN:
-        if ( (a.value & ~HVMPV_feature_mask) ||
-             !(a.value & HVMPV_base_freq) )
+        if ( (value & ~HVMPV_feature_mask) ||
+             !(value & HVMPV_base_freq) )
             rc = -EINVAL;
         break;
     case HVM_PARAM_IDENT_PT:
@@ -4181,7 +4165,7 @@ static int hvmop_set_param(
          */
         if ( !paging_mode_hap(d) || !cpu_has_vmx )
         {
-            d->arch.hvm.params[a.index] = a.value;
+            d->arch.hvm.params[index] = value;
             break;
         }
 
@@ -4196,7 +4180,7 @@ static int hvmop_set_param(
 
         rc = 0;
         domain_pause(d);
-        d->arch.hvm.params[a.index] = a.value;
+        d->arch.hvm.params[index] = value;
         for_each_vcpu ( d, v )
             paging_update_cr3(v, false);
         domain_unpause(d);
@@ -4205,23 +4189,23 @@ static int hvmop_set_param(
         break;
     case HVM_PARAM_DM_DOMAIN:
         /* The only value this should ever be set to is DOMID_SELF */
-        if ( a.value != DOMID_SELF )
+        if ( value != DOMID_SELF )
             rc = -EINVAL;
 
-        a.value = curr_d->domain_id;
+        value = curr_d->domain_id;
         break;
     case HVM_PARAM_ACPI_S_STATE:
         rc = 0;
-        if ( a.value == 3 )
+        if ( value == 3 )
             hvm_s3_suspend(d);
-        else if ( a.value == 0 )
+        else if ( value == 0 )
             hvm_s3_resume(d);
         else
             rc = -EINVAL;
 
         break;
     case HVM_PARAM_ACPI_IOPORTS_LOCATION:
-        rc = pmtimer_change_ioport(d, a.value);
+        rc = pmtimer_change_ioport(d, value);
         break;
     case HVM_PARAM_MEMORY_EVENT_CR0:
     case HVM_PARAM_MEMORY_EVENT_CR3:
@@ -4236,24 +4220,24 @@ static int hvmop_set_param(
         rc = xsm_hvm_param_nested(XSM_PRIV, d);
         if ( rc )
             break;
-        if ( a.value > 1 )
+        if ( value > 1 )
             rc = -EINVAL;
         /*
          * Remove the check below once we have
          * shadow-on-shadow.
          */
-        if ( !paging_mode_hap(d) && a.value )
+        if ( !paging_mode_hap(d) && value )
             rc = -EINVAL;
-        if ( a.value &&
+        if ( value &&
              d->arch.hvm.params[HVM_PARAM_ALTP2M] )
             rc = -EINVAL;
         /* Set up NHVM state for any vcpus that are already up. */
-        if ( a.value &&
+        if ( value &&
              !d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
             for_each_vcpu(d, v)
                 if ( rc == 0 )
                     rc = nestedhvm_vcpu_initialise(v);
-        if ( !a.value || rc )
+        if ( !value || rc )
             for_each_vcpu(d, v)
                 nestedhvm_vcpu_destroy(v);
         break;
@@ -4261,30 +4245,30 @@ static int hvmop_set_param(
         rc = xsm_hvm_param_altp2mhvm(XSM_PRIV, d);
         if ( rc )
             break;
-        if ( a.value > XEN_ALTP2M_limited )
+        if ( value > XEN_ALTP2M_limited )
             rc = -EINVAL;
-        if ( a.value &&
+        if ( value &&
              d->arch.hvm.params[HVM_PARAM_NESTEDHVM] )
             rc = -EINVAL;
         break;
     case HVM_PARAM_TRIPLE_FAULT_REASON:
-        if ( a.value > SHUTDOWN_MAX )
+        if ( value > SHUTDOWN_MAX )
             rc = -EINVAL;
         break;
     case HVM_PARAM_IOREQ_SERVER_PFN:
-        d->arch.hvm.ioreq_gfn.base = a.value;
+        d->arch.hvm.ioreq_gfn.base = value;
         break;
     case HVM_PARAM_NR_IOREQ_SERVER_PAGES:
     {
         unsigned int i;
 
-        if ( a.value == 0 ||
-             a.value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 )
+        if ( value == 0 ||
+             value > sizeof(d->arch.hvm.ioreq_gfn.mask) * 8 )
         {
             rc = -EINVAL;
             break;
         }
-        for ( i = 0; i < a.value; i++ )
+        for ( i = 0; i < value; i++ )
             set_bit(i, &d->arch.hvm.ioreq_gfn.mask);
 
         break;
@@ -4296,35 +4280,35 @@ static int hvmop_set_param(
                      sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
         BUILD_BUG_ON(HVM_PARAM_BUFIOREQ_PFN >
                      sizeof(d->arch.hvm.ioreq_gfn.legacy_mask) * 8);
-        if ( a.value )
-            set_bit(a.index, &d->arch.hvm.ioreq_gfn.legacy_mask);
+        if ( value )
+            set_bit(index, &d->arch.hvm.ioreq_gfn.legacy_mask);
         break;
 
     case HVM_PARAM_X87_FIP_WIDTH:
-        if ( a.value != 0 && a.value != 4 && a.value != 8 )
+        if ( value != 0 && value != 4 && value != 8 )
         {
             rc = -EINVAL;
             break;
         }
-        d->arch.x87_fip_width = a.value;
+        d->arch.x87_fip_width = value;
         break;
 
     case HVM_PARAM_VM86_TSS:
         /* Hardware would silently truncate high bits. */
-        if ( a.value != (uint32_t)a.value )
+        if ( value != (uint32_t)value )
         {
             if ( d == curr_d )
                 domain_crash(d);
             rc = -EINVAL;
         }
         /* Old hvmloader binaries hardcode the size to 128 bytes. */
-        if ( a.value )
-            a.value |= (128ULL << 32) | VM86_TSS_UPDATED;
-        a.index = HVM_PARAM_VM86_TSS_SIZED;
+        if ( value )
+            value |= (128ULL << 32) | VM86_TSS_UPDATED;
+        index = HVM_PARAM_VM86_TSS_SIZED;
         break;
 
     case HVM_PARAM_VM86_TSS_SIZED:
-        if ( (a.value >> 32) < sizeof(struct tss32) )
+        if ( (value >> 32) < sizeof(struct tss32) )
         {
             if ( d == curr_d )
                 domain_crash(d);
@@ -4335,26 +4319,56 @@ static int hvmop_set_param(
          * 256 bits interrupt redirection bitmap + 64k bits I/O bitmap
          * plus one padding byte).
          */
-        if ( (a.value >> 32) > sizeof(struct tss32) +
+        if ( (value >> 32) > sizeof(struct tss32) +
                                (0x100 / 8) + (0x10000 / 8) + 1 )
-            a.value = (uint32_t)a.value |
+            value = (uint32_t)value |
                       ((sizeof(struct tss32) + (0x100 / 8) +
                                                (0x10000 / 8) + 1) << 32);
-        a.value |= VM86_TSS_UPDATED;
+        value |= VM86_TSS_UPDATED;
         break;
 
     case HVM_PARAM_MCA_CAP:
-        rc = vmce_enable_mca_cap(d, a.value);
+        rc = vmce_enable_mca_cap(d, value);
         break;
     }
 
     if ( rc != 0 )
         goto out;
 
-    d->arch.hvm.params[a.index] = a.value;
+    d->arch.hvm.params[index] = value;
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "set param %u = %"PRIx64,
-                a.index, a.value);
+                index, value);
+
+ out:
+    return rc;
+}
+
+int hvmop_set_param(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_param_t) arg)
+{
+    struct xen_hvm_param a;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    if ( a.index >= HVM_NR_PARAMS )
+        return -EINVAL;
+
+    /* Make sure the above bound check is not bypassed during speculation. */
+    block_speculation();
+
+    d = rcu_lock_domain_by_any_id(a.domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = hvm_set_param(d, a.index, a.value);
 
  out:
     rcu_unlock_domain(d);
@@ -4362,7 +4376,7 @@ static int hvmop_set_param(
 }
 
 static int hvm_allow_get_param(struct domain *d,
-                               const struct xen_hvm_param *a)
+                               uint32_t index)
 {
     int rc;
 
@@ -4370,7 +4384,7 @@ static int hvm_allow_get_param(struct domain *d,
     if ( rc )
         return rc;
 
-    switch ( a->index )
+    switch ( index )
     {
     /* The following parameters can be read by the guest. */
     case HVM_PARAM_CALLBACK_IRQ:
@@ -4400,6 +4414,43 @@ static int hvm_allow_get_param(struct domain *d,
     return rc;
 }
 
+static int hvm_get_param(struct domain *d, uint32_t index, uint64_t *value)
+{
+    int rc;
+
+    if ( index >= HVM_NR_PARAMS || !value )
+        return -EINVAL;
+
+    rc = hvm_allow_get_param(d, index);
+    if ( rc )
+        return rc;
+
+    switch ( index )
+    {
+    case HVM_PARAM_ACPI_S_STATE:
+        *value = d->arch.hvm.is_s3_suspended ? 3 : 0;
+        break;
+
+    case HVM_PARAM_VM86_TSS:
+        *value = (uint32_t)d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
+        break;
+
+    case HVM_PARAM_VM86_TSS_SIZED:
+        *value = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] &
+                   ~VM86_TSS_UPDATED;
+        break;
+
+    case HVM_PARAM_X87_FIP_WIDTH:
+        *value = d->arch.x87_fip_width;
+        break;
+    default:
+        *value = d->arch.hvm.params[index];
+        break;
+    }
+
+    return 0;
+}
+
 static int hvmop_get_param(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_param_t) arg)
 {
@@ -4424,33 +4475,10 @@ static int hvmop_get_param(
     if ( !is_hvm_domain(d) )
         goto out;
 
-    rc = hvm_allow_get_param(d, &a);
+    rc = hvm_get_param(d, a.index, &a.value);
     if ( rc )
         goto out;
 
-    switch ( a.index )
-    {
-    case HVM_PARAM_ACPI_S_STATE:
-        a.value = d->arch.hvm.is_s3_suspended ? 3 : 0;
-        break;
-
-    case HVM_PARAM_VM86_TSS:
-        a.value = (uint32_t)d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED];
-        break;
-
-    case HVM_PARAM_VM86_TSS_SIZED:
-        a.value = d->arch.hvm.params[HVM_PARAM_VM86_TSS_SIZED] &
-                  ~VM86_TSS_UPDATED;
-        break;
-
-    case HVM_PARAM_X87_FIP_WIDTH:
-        a.value = d->arch.x87_fip_width;
-        break;
-    default:
-        a.value = d->arch.hvm.params[a.index];
-        break;
-    }
-
     rc = __copy_to_guest(arg, &a, 1) ? -EFAULT : 0;
 
     HVM_DBG_LOG(DBG_LEVEL_HCALL, "get param %u = %"PRIx64,
@@ -5266,6 +5294,36 @@ void hvm_set_segment_register(struct vcpu *v, enum x86_segment seg,
     alternative_vcall(hvm_funcs.set_segment_register, v, seg, reg);
 }
 
+int hvm_copy_context_and_params(struct domain *src, struct domain *dst)
+{
+    int rc, i;
+    struct hvm_domain_context c = { };
+
+    for ( i = 0; i < HVM_NR_PARAMS; i++ )
+    {
+        uint64_t value = 0;
+
+        if ( hvm_get_param(src, i, &value) || !value )
+            continue;
+
+        if ( (rc = hvm_set_param(dst, i, value)) )
+            return rc;
+    }
+
+    c.size = hvm_save_size(src);
+    if ( (c.data = vmalloc(c.size)) == NULL )
+        return -ENOMEM;
+
+    if ( !(rc = hvm_save(src, &c)) )
+    {
+        c.cur = 0;
+        rc = hvm_load(dst, &c);
+    }
+
+    vfree(c.data);
+    return rc;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 09793c12e9..6106b82c95 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -336,6 +336,8 @@ unsigned long hvm_cr4_guest_valid_bits(const struct domain *d, bool restore);
 bool hvm_flush_vcpu_tlb(bool (*flush_vcpu)(void *ctxt, struct vcpu *v),
                         void *ctxt);
 
+int hvm_copy_context_and_params(struct domain *src, struct domain *dst);
+
 #ifdef CONFIG_HVM
 
 #define hvm_get_guest_tsc(v) hvm_get_guest_tsc_fixed(v, 0)
-- 
2.20.1




* [Xen-devel] [PATCH v5 02/18] xen/x86: Make hap_get_allocation accessible
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries Tamas K Lengyel
                   ` (15 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Roger Pau Monné

During VM forking we'll copy the parent domain's parameters to the client,
including the HAP shadow memory setting that is used for storing the domain's
EPT. We'll copy this in the hypervisor instead of doing it during toolstack
launch, to allow the domain to start executing and unsharing memory before
(or even completely without) the toolstack.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/hap/hap.c | 3 +--
 xen/include/asm-x86/hap.h | 1 +
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3d93f3451c..c7c7ff6e99 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -321,8 +321,7 @@ static void hap_free_p2m_page(struct domain *d, struct page_info *pg)
 }
 
 /* Return the size of the pool, rounded up to the nearest MB */
-static unsigned int
-hap_get_allocation(struct domain *d)
+unsigned int hap_get_allocation(struct domain *d)
 {
     unsigned int pg = d->arch.paging.hap.total_pages
         + d->arch.paging.hap.p2m_pages;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index b94bfb4ed0..1bf07e49fe 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -45,6 +45,7 @@ int   hap_track_dirty_vram(struct domain *d,
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
+unsigned int hap_get_allocation(struct domain *d);
 
 #endif /* XEN_HAP_H */
 
-- 
2.20.1




* [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 02/18] xen/x86: Make hap_get_allocation accessible Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-22 15:23   ` Jan Beulich
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally Tamas K Lengyel
                   ` (14 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Roger Pau Monné

The owner domain of shared pages is dom_cow; use that for get_page, otherwise
the function fails to return the correct page.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/p2m.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3119269073..fdeb742707 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
     if ( p2m_is_ram(*t) && mfn_valid(mfn) )
     {
         page = mfn_to_page(mfn);
-        if ( !get_page(page, p2m->domain) )
+        if ( !get_page(page, p2m->domain) &&
+             /* Page could be shared */
+             (!dom_cow || !p2m_is_shared(*t) ||
+              !get_page(page, dom_cow)) )
             page = NULL;
     }
     put_gfn(p2m->domain, gfn_x(gfn));
-- 
2.20.1




* [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (2 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-23 15:50   ` George Dunlap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page Tamas K Lengyel
                   ` (13 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

During VM forking the client lock will already be taken.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/mm/mem_sharing.c | 11 ++++++-----
 xen/include/asm-x86/p2m.h     | 10 +++++-----
 2 files changed, 11 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 64dd3689df..2a20e4958b 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -943,7 +943,7 @@ static int share_pages(struct domain *sd, gfn_t sgfn, shr_handle_t sh,
     unsigned long put_count = 0;
 
     get_two_gfns(sd, sgfn, &smfn_type, NULL, &smfn,
-                 cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg);
+                 cd, cgfn, &cmfn_type, NULL, &cmfn, 0, &tg, true);
 
     /*
      * This tricky business is to avoid two callers deadlocking if
@@ -1061,7 +1061,7 @@ err_out:
 }
 
 int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
-                               struct domain *cd, unsigned long cgfn)
+                               struct domain *cd, unsigned long cgfn, bool lock)
 {
     struct page_info *spage;
     int ret = -EINVAL;
@@ -1073,7 +1073,7 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
     struct two_gfns tg;
 
     get_two_gfns(sd, _gfn(sgfn), &smfn_type, NULL, &smfn,
-                 cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg);
+                 cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg, lock);
 
     /* Get the source shared page, check and lock */
     ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
@@ -1150,7 +1150,8 @@ int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle
 err_unlock:
     mem_sharing_page_unlock(spage);
 err_out:
-    put_two_gfns(&tg);
+    if ( lock )
+        put_two_gfns(&tg);
     return ret;
 }
 
@@ -1571,7 +1572,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         sh      = mso.u.share.source_handle;
         cgfn    = mso.u.share.client_gfn;
 
-        rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn);
+        rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn, true);
 
         rcu_unlock_domain(cd);
     }
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 94285db1b4..7399c4a897 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -539,7 +539,7 @@ struct two_gfns {
 static inline void get_two_gfns(struct domain *rd, gfn_t rgfn,
         p2m_type_t *rt, p2m_access_t *ra, mfn_t *rmfn, struct domain *ld,
         gfn_t lgfn, p2m_type_t *lt, p2m_access_t *la, mfn_t *lmfn,
-        p2m_query_t q, struct two_gfns *rval)
+        p2m_query_t q, struct two_gfns *rval, bool lock)
 {
     mfn_t           *first_mfn, *second_mfn, scratch_mfn;
     p2m_access_t    *first_a, *second_a, scratch_a;
@@ -569,10 +569,10 @@ do {                                                    \
 #undef assign_pointers
 
     /* Now do the gets */
-    *first_mfn  = get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
-                                      gfn_x(rval->first_gfn), first_t, first_a, q, NULL);
-    *second_mfn = get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
-                                      gfn_x(rval->second_gfn), second_t, second_a, q, NULL);
+    *first_mfn  = __get_gfn_type_access(p2m_get_hostp2m(rval->first_domain),
+                                        gfn_x(rval->first_gfn), first_t, first_a, q, NULL, lock);
+    *second_mfn = __get_gfn_type_access(p2m_get_hostp2m(rval->second_domain),
+                                        gfn_x(rval->second_gfn), second_t, second_a, q, NULL, lock);
 }
 
 static inline void put_two_gfns(struct two_gfns *arg)
-- 
2.20.1




* [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (3 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-23 15:53   ` George Dunlap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 06/18] x86/mem_sharing: don't try to unshare twice during page fault Tamas K Lengyel
                   ` (12 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Tamas K Lengyel, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Julien Grall, Roger Pau Monné

All callers pass 0 in for the flags argument, so drop the argument from the
inline wrapper and have the wrapper supply the constant itself.
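As an illustrative sketch of the refactoring pattern (standalone simplified names, not the actual Xen code): the full-featured internal helper keeps the parameter for the one caller that needs it, while the common-case wrapper hard-codes the value every other caller was passing.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for __mem_sharing_unshare_page(): the internal helper keeps
 * the extra parameter. Placeholder body; returns whether destroy-mode
 * was requested. */
static int __unshare_page(unsigned long gfn, bool destroy)
{
    (void)gfn;
    return destroy ? 1 : 0;
}

/* Stand-in for mem_sharing_unshare_page(): every ordinary caller passed
 * a constant 0 for flags, so the parameter is dropped and the wrapper
 * supplies the default itself. */
static inline int unshare_page(unsigned long gfn)
{
    return __unshare_page(gfn, false);
}
```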

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Wei Liu <wl@xen.org>
---
 xen/arch/x86/hvm/hvm.c            | 2 +-
 xen/arch/x86/mm/p2m.c             | 5 ++---
 xen/common/memory.c               | 2 +-
 xen/include/asm-x86/mem_sharing.h | 8 +++-----
 4 files changed, 7 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 651998e783..55bf7353c9 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1898,7 +1898,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     if ( npfec.write_access && (p2mt == p2m_ram_shared) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
-        sharing_enomem = mem_sharing_unshare_page(currd, gfn, 0);
+        sharing_enomem = mem_sharing_unshare_page(currd, gfn);
         rc = 1;
         goto out_put_gfn;
     }
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fdeb742707..43dd46a32f 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -515,7 +515,7 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
          * Try to unshare. If we fail, communicate ENOMEM without
          * sleeping.
          */
-        if ( mem_sharing_unshare_page(p2m->domain, gfn_l, 0) < 0 )
+        if ( mem_sharing_unshare_page(p2m->domain, gfn_l) < 0 )
             mem_sharing_notify_enomem(p2m->domain, gfn_l, false);
         mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
     }
@@ -899,8 +899,7 @@ guest_physmap_add_entry(struct domain *d, gfn_t gfn, mfn_t mfn,
         {
             /* Do an unshare to cleanly take care of all corner cases. */
             int rc;
-            rc = mem_sharing_unshare_page(p2m->domain,
-                                          gfn_x(gfn_add(gfn, i)), 0);
+            rc = mem_sharing_unshare_page(p2m->domain, gfn_x(gfn_add(gfn, i)));
             if ( rc )
             {
                 p2m_unlock(p2m);
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 309e872edf..c7d2bac452 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -352,7 +352,7 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
          * might be the only one using this shared page, and we need to
          * trigger proper cleanup. Once done, this is like any other page.
          */
-        rc = mem_sharing_unshare_page(d, gmfn, 0);
+        rc = mem_sharing_unshare_page(d, gmfn);
         if ( rc )
         {
             mem_sharing_notify_enomem(d, gmfn, false);
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index af2a1038b5..cf7848709f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -69,10 +69,9 @@ int __mem_sharing_unshare_page(struct domain *d,
                                uint16_t flags);
 
 static inline int mem_sharing_unshare_page(struct domain *d,
-                                           unsigned long gfn,
-                                           uint16_t flags)
+                                           unsigned long gfn)
 {
-    int rc = __mem_sharing_unshare_page(d, gfn, flags);
+    int rc = __mem_sharing_unshare_page(d, gfn, 0);
     BUG_ON(rc && (rc != -ENOMEM));
     return rc;
 }
@@ -115,8 +114,7 @@ static inline unsigned int mem_sharing_get_nr_shared_mfns(void)
     return 0;
 }
 
-static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn,
-                                           uint16_t flags)
+static inline int mem_sharing_unshare_page(struct domain *d, unsigned long gfn)
 {
     ASSERT_UNREACHABLE();
     return -EOPNOTSUPP;
-- 
2.20.1



* [Xen-devel] [PATCH v5 06/18] x86/mem_sharing: don't try to unshare twice during page fault
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (4 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables Tamas K Lengyel
                   ` (11 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Tamas K Lengyel, Wei Liu, Roger Pau Monné

An attempt to unshare the page was already made in get_gfn_type_access. If
that attempt failed, trying again here is pointless. Don't try to send a
vm_event again either; simply check whether a sharing ring is connected.
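A simplified sketch of the resulting control flow (hypothetical standalone types; the real logic lives in hvm_hap_nested_page_fault):

```c
#include <assert.h>
#include <stdbool.h>

struct fault_ctx {
    bool write_access;     /* the fault was a write */
    bool still_shared;     /* p2m type is still shared, i.e. the earlier
                              unshare attempt in __get_gfn_type_access
                              already failed with ENOMEM */
    bool has_share_ring;   /* a vm_event sharing ring is connected */
};

/* Mirrors the patch's rc convention: return 0 to crash the domain,
 * 1 otherwise. No second unshare attempt is made. */
static int handle_shared_write_fault(const struct fault_ctx *c)
{
    if ( c->write_access && c->still_shared && !c->has_share_ring )
        return 0; /* ENOMEM and no helper listening */
    return 1;
}

static int demo_no_ring(void)
{
    struct fault_ctx c = { true, true, false };
    return handle_shared_write_fault(&c);
}

static int demo_with_ring(void)
{
    struct fault_ctx c = { true, true, true };
    return handle_shared_write_fault(&c);
}
```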

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/hvm.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 55bf7353c9..e60b4931bf 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -38,6 +38,7 @@
 #include <xen/warning.h>
 #include <xen/vpci.h>
 #include <xen/nospec.h>
+#include <xen/vm_event.h>
 #include <asm/shadow.h>
 #include <asm/hap.h>
 #include <asm/current.h>
@@ -1702,7 +1703,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct domain *currd = curr->domain;
     struct p2m_domain *p2m, *hostp2m;
     int rc, fall_through = 0, paged = 0;
-    int sharing_enomem = 0;
+    bool sharing_enomem = false;
     vm_event_request_t *req_ptr = NULL;
     bool sync = false;
     unsigned int page_order;
@@ -1894,14 +1895,16 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     if ( p2m_is_paged(p2mt) || (p2mt == p2m_ram_paging_out) )
         paged = 1;
 
-    /* Mem sharing: unshare the page and try again */
-    if ( npfec.write_access && (p2mt == p2m_ram_shared) )
+#ifdef CONFIG_MEM_SHARING
+    /* Mem sharing: if still shared on write access then its enomem */
+    if ( npfec.write_access && p2m_is_shared(p2mt) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
-        sharing_enomem = mem_sharing_unshare_page(currd, gfn);
+        sharing_enomem = true;
         rc = 1;
         goto out_put_gfn;
     }
+#endif
 
     /* Spurious fault? PoD and log-dirty also take this path. */
     if ( p2m_is_ram(p2mt) )
@@ -1955,19 +1958,21 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( paged )
         p2m_mem_paging_populate(currd, gfn);
+
     if ( sharing_enomem )
     {
-        int rv;
-
-        if ( (rv = mem_sharing_notify_enomem(currd, gfn, true)) < 0 )
+#ifdef CONFIG_MEM_SHARING
+        if ( !vm_event_check_ring(currd->vm_event_share) )
         {
-            gdprintk(XENLOG_ERR, "Domain %hu attempt to unshare "
-                     "gfn %lx, ENOMEM and no helper (rc %d)\n",
-                     currd->domain_id, gfn, rv);
+            gprintk(XENLOG_ERR, "Domain %pd attempt to unshare "
+                    "gfn %lx, ENOMEM and no helper\n",
+                    currd, gfn);
             /* Crash the domain */
             rc = 0;
         }
+#endif
     }
+
     if ( req_ptr )
     {
         if ( monitor_traps(curr, sync, req_ptr) < 0 )
-- 
2.20.1



* [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (5 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 06/18] x86/mem_sharing: don't try to unshare twice during page fault Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-23 15:59   ` George Dunlap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 08/18] x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in relinquish_shared_pages Tamas K Lengyel
                   ` (10 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

Create struct mem_sharing_domain under hvm_domain and move mem sharing
variables into it from p2m_domain and hvm_domain.

Expose the mem_sharing_enabled macro to be used consistently across Xen.

Remove some duplicate checks of mem_sharing_enabled in mem_sharing.c.
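The consolidation pattern in isolation (a sketch with simplified surrounding structs; CONFIG_MEM_SHARING is assumed defined here):

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_MEM_SHARING 1  /* assumed for this sketch */

/* Feature-specific state gathered into one struct rather than loose
 * fields scattered across p2m_domain and hvm_domain. */
struct mem_sharing_domain {
    bool enabled;
    unsigned long next_shared_gfn_to_relinquish;
};

struct hvm_domain {
#ifdef CONFIG_MEM_SHARING
    struct mem_sharing_domain mem_sharing;
#endif
};

struct domain {
    struct { struct hvm_domain hvm; } arch;
};

/* One canonical predicate instead of open-coded checks; the stub in
 * the #else branch keeps callers compiling when the feature is off. */
#ifdef CONFIG_MEM_SHARING
#define mem_sharing_enabled(d) ((d)->arch.hvm.mem_sharing.enabled)
#else
#define mem_sharing_enabled(d) false
#endif

static bool demo_toggle(void)
{
    struct domain d = { 0 };
    bool before = mem_sharing_enabled(&d);

    d.arch.hvm.mem_sharing.enabled = true;
    return !before && mem_sharing_enabled(&d);
}
```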

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
v5: remove hap_enabled check() from mem_sharing_enabled as suggested by Jan
---
 xen/arch/x86/mm/mem_sharing.c     | 10 ++++------
 xen/drivers/passthrough/pci.c     |  3 +--
 xen/include/asm-x86/hvm/domain.h  |  6 +++++-
 xen/include/asm-x86/mem_sharing.h | 15 +++++++++++++++
 xen/include/asm-x86/p2m.h         |  4 ----
 5 files changed, 25 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 2a20e4958b..1846f97acc 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -197,9 +197,6 @@ static shr_handle_t get_next_handle(void)
     return x + 1;
 }
 
-#define mem_sharing_enabled(d) \
-    (is_hvm_domain(d) && (d)->arch.hvm.mem_sharing_enabled)
-
 static atomic_t nr_saved_mfns   = ATOMIC_INIT(0);
 static atomic_t nr_shared_mfns  = ATOMIC_INIT(0);
 
@@ -1297,6 +1294,7 @@ int __mem_sharing_unshare_page(struct domain *d,
 int relinquish_shared_pages(struct domain *d)
 {
     int rc = 0;
+    struct mem_sharing_domain *msd = &d->arch.hvm.mem_sharing;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     unsigned long gfn, count = 0;
 
@@ -1304,7 +1302,7 @@ int relinquish_shared_pages(struct domain *d)
         return 0;
 
     p2m_lock(p2m);
-    for ( gfn = p2m->next_shared_gfn_to_relinquish;
+    for ( gfn = msd->next_shared_gfn_to_relinquish;
           gfn <= p2m->max_mapped_pfn; gfn++ )
     {
         p2m_access_t a;
@@ -1339,7 +1337,7 @@ int relinquish_shared_pages(struct domain *d)
         {
             if ( hypercall_preempt_check() )
             {
-                p2m->next_shared_gfn_to_relinquish = gfn + 1;
+                msd->next_shared_gfn_to_relinquish = gfn + 1;
                 rc = -ERESTART;
                 break;
             }
@@ -1425,7 +1423,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 
     /* Only HAP is supported */
     rc = -ENODEV;
-    if ( !hap_enabled(d) || !d->arch.hvm.mem_sharing_enabled )
+    if ( !mem_sharing_enabled(d) )
         goto out;
 
     switch ( mso.op )
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index c07a63981a..65d1d457ff 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1498,8 +1498,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     /* Prevent device assign if mem paging or mem sharing have been 
      * enabled for this domain */
     if ( d != dom_io &&
-         unlikely((is_hvm_domain(d) &&
-                   d->arch.hvm.mem_sharing_enabled) ||
+         unlikely(mem_sharing_enabled(d) ||
                   vm_event_check_ring(d->vm_event_paging) ||
                   p2m_get_hostp2m(d)->global_logdirty) )
         return -EXDEV;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index bcc5621797..8f70ba2b1a 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -29,6 +29,7 @@
 #include <asm/hvm/viridian.h>
 #include <asm/hvm/vmx/vmcs.h>
 #include <asm/hvm/svm/vmcb.h>
+#include <asm/mem_sharing.h>
 #include <public/grant_table.h>
 #include <public/hvm/params.h>
 #include <public/hvm/save.h>
@@ -156,7 +157,6 @@ struct hvm_domain {
 
     struct viridian_domain *viridian;
 
-    bool_t                 mem_sharing_enabled;
     bool_t                 qemu_mapcache_invalidate;
     bool_t                 is_s3_suspended;
 
@@ -192,6 +192,10 @@ struct hvm_domain {
         struct vmx_domain vmx;
         struct svm_domain svm;
     };
+
+#ifdef CONFIG_MEM_SHARING
+    struct mem_sharing_domain mem_sharing;
+#endif
 };
 
 #endif /* __ASM_X86_HVM_DOMAIN_H__ */
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index cf7848709f..a10af9d570 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -26,6 +26,19 @@
 
 #ifdef CONFIG_MEM_SHARING
 
+struct mem_sharing_domain
+{
+    bool enabled;
+
+    /*
+     * When releasing shared gfn's in a preemptible manner, recall where
+     * to resume the search.
+     */
+    unsigned long next_shared_gfn_to_relinquish;
+};
+
+#define mem_sharing_enabled(d) ((d)->arch.hvm.mem_sharing.enabled)
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -104,6 +117,8 @@ int relinquish_shared_pages(struct domain *d);
 
 #else
 
+#define mem_sharing_enabled(d) false
+
 static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
 {
     return 0;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 7399c4a897..8defa90306 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -305,10 +305,6 @@ struct p2m_domain {
     unsigned long min_remapped_gfn;
     unsigned long max_remapped_gfn;
 
-    /* When releasing shared gfn's in a preemptible manner, recall where
-     * to resume the search */
-    unsigned long next_shared_gfn_to_relinquish;
-
 #ifdef CONFIG_HVM
     /* Populate-on-demand variables
      * All variables are protected with the pod lock. We cannot rely on
-- 
2.20.1



* [Xen-devel] [PATCH v5 08/18] x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in relinquish_shared_pages
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (6 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 09/18] x86/mem_sharing: Make add_to_physmap static and shorten name Tamas K Lengyel
                   ` (9 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

While using _mfn(0) is of no consequence during teardown, INVALID_MFN is the
correct sentinel to store when clearing a p2m entry.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm/mem_sharing.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 1846f97acc..cc3fc97618 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1314,7 +1314,7 @@ int relinquish_shared_pages(struct domain *d)
             break;
 
         mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, NULL, NULL);
-        if ( mfn_valid(mfn) && t == p2m_ram_shared )
+        if ( mfn_valid(mfn) && p2m_is_shared(t) )
         {
             /* Does not fail with ENOMEM given the DESTROY flag */
             BUG_ON(__mem_sharing_unshare_page(
@@ -1324,7 +1324,7 @@ int relinquish_shared_pages(struct domain *d)
              * unshare.  Must succeed: we just read the old entry and
              * we hold the p2m lock.
              */
-            set_rc = p2m->set_entry(p2m, _gfn(gfn), _mfn(0), PAGE_ORDER_4K,
+            set_rc = p2m->set_entry(p2m, _gfn(gfn), INVALID_MFN, PAGE_ORDER_4K,
                                     p2m_invalid, p2m_access_rwx, -1);
             ASSERT(!set_rc);
             count += 0x10;
-- 
2.20.1



* [Xen-devel] [PATCH v5 09/18] x86/mem_sharing: Make add_to_physmap static and shorten name
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (7 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 08/18] x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in relinquish_shared_pages Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool Tamas K Lengyel
                   ` (8 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

It's not called from outside mem_sharing.c, so make it static and drop the
mem_sharing_ prefix.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm/mem_sharing.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index cc3fc97618..5d840550f4 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1057,8 +1057,9 @@ err_out:
     return ret;
 }
 
-int mem_sharing_add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
-                               struct domain *cd, unsigned long cgfn, bool lock)
+static
+int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
+                   struct domain *cd, unsigned long cgfn, bool lock)
 {
     struct page_info *spage;
     int ret = -EINVAL;
@@ -1570,7 +1571,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         sh      = mso.u.share.source_handle;
         cgfn    = mso.u.share.client_gfn;
 
-        rc = mem_sharing_add_to_physmap(d, sgfn, sh, cd, cgfn, true);
+        rc = add_to_physmap(d, sgfn, sh, cd, cgfn, true);
 
         rcu_unlock_domain(cd);
     }
-- 
2.20.1



* [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (8 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 09/18] x86/mem_sharing: Make add_to_physmap static and shorten name Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-23 16:14   ` George Dunlap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk Tamas K Lengyel
                   ` (7 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

MEM_SHARING_DESTROY_GFN is the only flag ever set in the 'flags' bitfield
during unsharing. Since the bitfield carries no other information, just
convert it to a bool instead.
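The equivalence, sketched with hypothetical standalone names: a flags word whose only meaningful bit is one flag collapses to a bool, which is also self-documenting at call sites.

```c
#include <assert.h>
#include <stdbool.h>

#define DESTROY_GFN (1 << 1)  /* the only bit the old bitfield carried */

/* Old style: a flags word with a single meaningful bit. */
static bool old_wants_destroy(unsigned int flags)
{
    return flags & DESTROY_GFN;
}

/* New style: the single bit of information becomes a plain bool. */
static bool new_wants_destroy(bool destroy)
{
    return destroy;
}
```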

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm/mem_sharing.c     | 9 ++++-----
 xen/include/asm-x86/mem_sharing.h | 5 ++---
 2 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 5d840550f4..da7d142ad8 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1170,7 +1170,7 @@ err_out:
  */
 int __mem_sharing_unshare_page(struct domain *d,
                                unsigned long gfn,
-                               uint16_t flags)
+                               bool destroy)
 {
     p2m_type_t p2mt;
     mfn_t mfn;
@@ -1226,7 +1226,7 @@ int __mem_sharing_unshare_page(struct domain *d,
      * If the GFN is getting destroyed drop the references to MFN
      * (possibly freeing the page), and exit early.
      */
-    if ( flags & MEM_SHARING_DESTROY_GFN )
+    if ( destroy )
     {
         if ( !last_gfn )
             mem_sharing_gfn_destroy(page, d, gfn_info);
@@ -1317,9 +1317,8 @@ int relinquish_shared_pages(struct domain *d)
         mfn = p2m->get_entry(p2m, _gfn(gfn), &t, &a, 0, NULL, NULL);
         if ( mfn_valid(mfn) && p2m_is_shared(t) )
         {
-            /* Does not fail with ENOMEM given the DESTROY flag */
-            BUG_ON(__mem_sharing_unshare_page(
-                       d, gfn, MEM_SHARING_DESTROY_GFN));
+            /* Does not fail with ENOMEM given "destroy" is set to true */
+            BUG_ON(__mem_sharing_unshare_page(d, gfn, true));
             /*
              * Clear out the p2m entry so no one else may try to
              * unshare.  Must succeed: we just read the old entry and
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index a10af9d570..53760a2896 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -75,16 +75,15 @@ struct page_sharing_info
 unsigned int mem_sharing_get_nr_saved_mfns(void);
 unsigned int mem_sharing_get_nr_shared_mfns(void);
 
-#define MEM_SHARING_DESTROY_GFN       (1<<1)
 /* Only fails with -ENOMEM. Enforce it with a BUG_ON wrapper. */
 int __mem_sharing_unshare_page(struct domain *d,
                                unsigned long gfn,
-                               uint16_t flags);
+                               bool destroy);
 
 static inline int mem_sharing_unshare_page(struct domain *d,
                                            unsigned long gfn)
 {
-    int rc = __mem_sharing_unshare_page(d, gfn, 0);
+    int rc = __mem_sharing_unshare_page(d, gfn, false);
     BUG_ON(rc && (rc != -ENOMEM));
     return rc;
 }
-- 
2.20.1



* [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (9 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-22 15:30   ` Jan Beulich
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop Tamas K Lengyel
                   ` (6 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

Use the XENLOG_ERR level, since these messages only appear on debug paths
(i.e. the user is expected to already have loglvl=all set). Also use %pd to
print domain ids.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 82 +++++++++++++++++------------------
 1 file changed, 41 insertions(+), 41 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index da7d142ad8..21ce8d32f3 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -49,9 +49,6 @@ typedef struct pg_lock_data {
 
 static DEFINE_PER_CPU(pg_lock_data_t, __pld);
 
-#define MEM_SHARING_DEBUG(_f, _a...)                                  \
-    debugtrace_printk("mem_sharing_debug: %s(): " _f, __func__, ##_a)
-
 /* Reverse map defines */
 #define RMAP_HASHTAB_ORDER  0
 #define RMAP_HASHTAB_SIZE   \
@@ -482,9 +479,9 @@ static int audit(void)
         /* If we can't lock it, it's definitely not a shared page */
         if ( !mem_sharing_page_lock(pg) )
         {
-            MEM_SHARING_DEBUG(
-                "mfn %lx in audit list, but cannot be locked (%lx)!\n",
-                mfn_x(mfn), pg->u.inuse.type_info);
+            gdprintk(XENLOG_ERR,
+                     "mfn %lx in audit list, but cannot be locked (%lx)!\n",
+                     mfn_x(mfn), pg->u.inuse.type_info);
             errors++;
             continue;
         }
@@ -492,9 +489,9 @@ static int audit(void)
         /* Check if the MFN has correct type, owner and handle. */
         if ( (pg->u.inuse.type_info & PGT_type_mask) != PGT_shared_page )
         {
-            MEM_SHARING_DEBUG(
-                "mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
-                mfn_x(mfn), pg->u.inuse.type_info & PGT_type_mask);
+            gdprintk(XENLOG_ERR,
+                     "mfn %lx in audit list, but not PGT_shared_page (%lx)!\n",
+                     mfn_x(mfn), pg->u.inuse.type_info & PGT_type_mask);
             errors++;
             continue;
         }
@@ -502,24 +499,24 @@ static int audit(void)
         /* Check the page owner. */
         if ( page_get_owner(pg) != dom_cow )
         {
-            MEM_SHARING_DEBUG("mfn %lx shared, but wrong owner %pd!\n",
-                              mfn_x(mfn), page_get_owner(pg));
+            gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong owner (%pd)!\n",
+                     mfn_x(mfn), page_get_owner(pg));
             errors++;
         }
 
         /* Check the m2p entry */
         if ( !SHARED_M2P(get_gpfn_from_mfn(mfn_x(mfn))) )
         {
-            MEM_SHARING_DEBUG("mfn %lx shared, but wrong m2p entry (%lx)!\n",
-                              mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
+            gdprintk(XENLOG_ERR, "mfn %lx shared, but wrong m2p entry (%lx)!\n",
+                     mfn_x(mfn), get_gpfn_from_mfn(mfn_x(mfn)));
             errors++;
         }
 
         /* Check we have a list */
         if ( (!pg->sharing) || rmap_count(pg) == 0 )
         {
-            MEM_SHARING_DEBUG("mfn %lx shared, but empty gfn list!\n",
-                              mfn_x(mfn));
+            gdprintk(XENLOG_ERR, "mfn %lx shared, but empty gfn list!\n",
+                     mfn_x(mfn));
             errors++;
             continue;
         }
@@ -538,24 +535,26 @@ static int audit(void)
             d = get_domain_by_id(g->domain);
             if ( d == NULL )
             {
-                MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
-                                  g->domain, g->gfn, mfn_x(mfn));
+                gdprintk(XENLOG_ERR,
+                         "Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
+                         g->domain, g->gfn, mfn_x(mfn));
                 errors++;
                 continue;
             }
             o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
             if ( !mfn_eq(o_mfn, mfn) )
             {
-                MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
-                                  "Expecting MFN=%lx, got %lx\n",
-                                  g->domain, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
+                gdprintk(XENLOG_ERR, "Incorrect P2M for d=%pd, PFN=%lx."
+                         "Expecting MFN=%lx, got %lx\n",
+                         d, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
                 errors++;
             }
             if ( t != p2m_ram_shared )
             {
-                MEM_SHARING_DEBUG("Incorrect P2M type for d=%hu, PFN=%lx MFN=%lx."
-                                  "Expecting t=%d, got %d\n",
-                                  g->domain, g->gfn, mfn_x(mfn), p2m_ram_shared, t);
+                gdprintk(XENLOG_ERR,
+                         "Incorrect P2M type for d=%pd, PFN=%lx MFN=%lx."
+                         "Expecting t=%d, got %d\n",
+                         d, g->gfn, mfn_x(mfn), p2m_ram_shared, t);
                 errors++;
             }
             put_domain(d);
@@ -564,10 +563,10 @@ static int audit(void)
         /* The type count has an extra ref because we have locked the page */
         if ( (nr_gfns + 1) != (pg->u.inuse.type_info & PGT_count_mask) )
         {
-            MEM_SHARING_DEBUG("Mismatched counts for MFN=%lx."
-                              "nr_gfns in list %lu, in type_info %lx\n",
-                              mfn_x(mfn), nr_gfns,
-                              (pg->u.inuse.type_info & PGT_count_mask));
+            gdprintk(XENLOG_ERR, "Mismatched counts for MFN=%lx."
+                     "nr_gfns in list %lu, in type_info %lx\n",
+                     mfn_x(mfn), nr_gfns,
+                     (pg->u.inuse.type_info & PGT_count_mask));
             errors++;
         }
 
@@ -578,8 +577,8 @@ static int audit(void)
 
     if ( count_found != count_expected )
     {
-        MEM_SHARING_DEBUG("Expected %ld shared mfns, found %ld.",
-                          count_expected, count_found);
+        gdprintk(XENLOG_ERR, "Expected %ld shared mfns, found %ld.",
+                 count_expected, count_found);
         errors++;
     }
 
@@ -757,10 +756,10 @@ static int debug_mfn(mfn_t mfn)
         return -EINVAL;
     }
 
-    MEM_SHARING_DEBUG(
-        "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
-        mfn_x(page_to_mfn(page)), page->count_info,
-        page->u.inuse.type_info, page_get_owner(page));
+    gdprintk(XENLOG_ERR,
+             "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
+             mfn_x(page_to_mfn(page)), page->count_info,
+             page->u.inuse.type_info, page_get_owner(page));
 
     /* -1 because the page is locked and that's an additional type ref */
     num_refs = ((int) (page->u.inuse.type_info & PGT_count_mask)) - 1;
@@ -776,8 +775,9 @@ static int debug_gfn(struct domain *d, gfn_t gfn)
 
     mfn = get_gfn_query(d, gfn_x(gfn), &p2mt);
 
-    MEM_SHARING_DEBUG("Debug for dom%d, gfn=%" PRI_gfn "\n",
-                      d->domain_id, gfn_x(gfn));
+    gdprintk(XENLOG_ERR, "Debug for dom%d, gfn=%" PRI_gfn "\n",
+             d->domain_id, gfn_x(gfn));
+
     num_refs = debug_mfn(mfn);
     put_gfn(d, gfn_x(gfn));
 
@@ -793,13 +793,13 @@ static int debug_gref(struct domain *d, grant_ref_t ref)
     rc = mem_sharing_gref_to_gfn(d->grant_table, ref, &gfn, &status);
     if ( rc )
     {
-        MEM_SHARING_DEBUG("Asked to debug [dom=%d,gref=%u]: error %d.\n",
-                          d->domain_id, ref, rc);
+        gdprintk(XENLOG_ERR, "Asked to debug [dom=%d,gref=%u]: error %d.\n",
+                 d->domain_id, ref, rc);
         return rc;
     }
 
-    MEM_SHARING_DEBUG("==> Grant [dom=%d,ref=%d], status=%x. ",
-                      d->domain_id, ref, status);
+    gdprintk(XENLOG_ERR, "==> Grant [dom=%d,ref=%d], status=%x. ",
+             d->domain_id, ref, status);
 
     return debug_gfn(d, gfn);
 }
@@ -1274,8 +1274,8 @@ int __mem_sharing_unshare_page(struct domain *d,
  private_page_found:
     if ( p2m_change_type_one(d, gfn, p2m_ram_shared, p2m_ram_rw) )
     {
-        gdprintk(XENLOG_ERR, "Could not change p2m type d %hu gfn %lx.\n",
-                 d->domain_id, gfn);
+        gdprintk(XENLOG_ERR, "Could not change p2m type d %pd gfn %lx.\n",
+                 d, gfn);
         BUG();
     }
 
-- 
2.20.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 44+ messages in thread

* [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (10 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-22 15:32   ` Jan Beulich
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 13/18] x86/mem_sharing: Skip xen heap pages in memshr nominate Tamas K Lengyel
                   ` (5 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

It is wasteful to require separate hypercalls to enable sharing on both the
parent and the client domain during VM forking. To speed things up, we enable
sharing on the first memop in case it wasn't already enabled.
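The checks being consolidated into mem_sharing_control() can be sketched as a standalone function. This is an illustrative model only: domain state is reduced to plain flags here, whereas the real code tests `is_hvm_domain()`, `hap_enabled()` and `is_iommu_enabled()` on `struct domain`.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Illustrative stand-in for the relevant bits of struct domain. */
struct dom_state {
    bool is_hvm;
    bool hap_enabled;
    bool iommu_enabled;
    bool mem_sharing_enabled;
};

/*
 * Mirrors the consolidated mem_sharing_control() gating: sharing
 * requires an HVM domain with HAP, and an enabled IOMMU excludes it.
 * Disabling always succeeds.
 */
static int mem_sharing_control_sketch(struct dom_state *d, bool enable)
{
    if ( enable )
    {
        if ( !d->is_hvm )
            return -EOPNOTSUPP;
        if ( !d->hap_enabled )
            return -ENODEV;
        if ( d->iommu_enabled )
            return -EXDEV;
    }

    d->mem_sharing_enabled = enable;
    return 0;
}
```

With this in place the memop path can lazily enable sharing on first use instead of requiring a prior XEN_DOMCTL_MEM_SHARING_CONTROL call.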

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 52 ++++++++++++++---------------------
 1 file changed, 21 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 21ce8d32f3..172f02e780 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1400,6 +1400,24 @@ static int range_share(struct domain *d, struct domain *cd,
     return rc;
 }
 
+static inline int mem_sharing_control(struct domain *d, bool enable)
+{
+    if ( enable )
+    {
+        if ( unlikely(!is_hvm_domain(d)) )
+            return -EOPNOTSUPP;
+
+        if ( unlikely(!hap_enabled(d)) )
+            return -ENODEV;
+
+        if ( unlikely(is_iommu_enabled(d)) )
+            return -EXDEV;
+    }
+
+    d->arch.hvm.mem_sharing.enabled = enable;
+    return 0;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
     int rc;
@@ -1421,10 +1439,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     if ( rc )
         goto out;
 
-    /* Only HAP is supported */
-    rc = -ENODEV;
-    if ( !mem_sharing_enabled(d) )
-        goto out;
+    if ( !mem_sharing_enabled(d) && (rc = mem_sharing_control(d, true)) )
+        return rc;
 
     switch ( mso.op )
     {
@@ -1432,10 +1448,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     {
         shr_handle_t handle;
 
-        rc = -EINVAL;
-        if ( !mem_sharing_enabled(d) )
-            goto out;
-
         rc = nominate_page(d, _gfn(mso.u.nominate.u.gfn), 0, &handle);
         mso.u.nominate.handle = handle;
     }
@@ -1447,9 +1459,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         gfn_t gfn;
         shr_handle_t handle;
 
-        rc = -EINVAL;
-        if ( !mem_sharing_enabled(d) )
-            goto out;
         rc = mem_sharing_gref_to_gfn(d->grant_table, gref, &gfn, NULL);
         if ( rc < 0 )
             goto out;
@@ -1465,10 +1474,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         struct domain *cd;
         shr_handle_t sh, ch;
 
-        rc = -EINVAL;
-        if ( !mem_sharing_enabled(d) )
-            goto out;
-
         rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                &cd);
         if ( rc )
@@ -1535,10 +1540,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         struct domain *cd;
         shr_handle_t sh;
 
-        rc = -EINVAL;
-        if ( !mem_sharing_enabled(d) )
-            goto out;
-
         rc = rcu_lock_live_remote_domain_by_id(mso.u.share.client_domain,
                                                &cd);
         if ( rc )
@@ -1597,9 +1598,6 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
               mso.u.range.opaque > mso.u.range.last_gfn) )
             goto out;
 
-        if ( !mem_sharing_enabled(d) )
-            goto out;
-
         rc = rcu_lock_live_remote_domain_by_id(mso.u.range.client_domain,
                                                &cd);
         if ( rc )
@@ -1691,18 +1689,10 @@ int mem_sharing_domctl(struct domain *d, struct xen_domctl_mem_sharing_op *mec)
 {
     int rc;
 
-    /* Only HAP is supported */
-    if ( !hap_enabled(d) )
-        return -ENODEV;
-
     switch ( mec->op )
     {
     case XEN_DOMCTL_MEM_SHARING_CONTROL:
-        rc = 0;
-        if ( unlikely(is_iommu_enabled(d) && mec->u.enable) )
-            rc = -EXDEV;
-        else
-            d->arch.hvm.mem_sharing_enabled = mec->u.enable;
+        rc = mem_sharing_control(d, mec->u.enable);
         break;
 
     default:
-- 
2.20.1



* [Xen-devel] [PATCH v5 13/18] x86/mem_sharing: Skip xen heap pages in memshr nominate
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (11 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap Tamas K Lengyel
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

Trying to share these would fail anyway, better to skip them early.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm/mem_sharing.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 172f02e780..eac8047c07 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -840,6 +840,11 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     if ( !p2m_is_sharable(p2mt) )
         goto out;
 
+    /* Skip xen heap pages */
+    page = mfn_to_page(mfn);
+    if ( !page || is_xen_heap_page(page) )
+        goto out;
+
     /* Check if there are mem_access/remapped altp2m entries for this page */
     if ( altp2m_active(d) )
     {
@@ -870,7 +875,6 @@ static int nominate_page(struct domain *d, gfn_t gfn,
     }
 
     /* Try to convert the mfn to the sharable type */
-    page = mfn_to_page(mfn);
     ret = page_make_sharable(d, page, expected_refcnt);
     if ( ret )
         goto out;
-- 
2.20.1



* [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (12 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 13/18] x86/mem_sharing: Skip xen heap pages in memshr nominate Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-22 15:35   ` Jan Beulich
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking Tamas K Lengyel
                   ` (3 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Roger Pau Monné

When plugging a hole in the target physmap, don't use the access permission
returned by __get_gfn_type_access as it can be nonsensical, leading to
spurious vm_events being sent out for access violations at unexpected
locations. Make use of p2m->default_access instead.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index eac8047c07..e3ddb63b4f 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1071,11 +1071,10 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
     p2m_type_t smfn_type, cmfn_type;
     struct gfn_info *gfn_info;
     struct p2m_domain *p2m = p2m_get_hostp2m(cd);
-    p2m_access_t a;
     struct two_gfns tg;
 
     get_two_gfns(sd, _gfn(sgfn), &smfn_type, NULL, &smfn,
-                 cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg, lock);
+                 cd, _gfn(cgfn), &cmfn_type, NULL, &cmfn, 0, &tg, lock);
 
     /* Get the source shared page, check and lock */
     ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
@@ -1110,7 +1109,7 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
     }
 
     ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
-                        p2m_ram_shared, a);
+                        p2m_ram_shared, p2m->default_access);
 
     /* Tempted to turn this into an assert */
     if ( ret )
-- 
2.20.1



* [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (13 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-23 17:21   ` George Dunlap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 16/18] xen/mem_access: Use __get_gfn_type_access in set_mem_access Tamas K Lengyel
                   ` (2 subsequent siblings)
  17 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Stefano Stabellini, Tamas K Lengyel, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Tamas K Lengyel, Julien Grall, Roger Pau Monné

VM forking is the process of creating a domain with an empty memory space and a
specified parent domain from which to populate the memory when necessary. For
the new domain to be functional, the VM state is copied over as part of the fork
operation (HVM params, HAP allocation, etc.).
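The fault-driven population described above amounts to copy-on-write against the parent. A toy model (all names illustrative, not the Xen API): the fork starts with an empty "physmap"; a read fault installs a shared entry pointing at the parent's page, while a write fault copies the page into a private one, as mem_sharing_fork_page() does in this patch.

```c
#include <assert.h>
#include <stdbool.h>

#define NPAGES 4

/* Toy fork: parent memory is one int per "page". */
struct toy_fork {
    const int *parent;    /* parent's memory, never modified */
    int priv[NPAGES];     /* fork's private copies */
    bool mapped[NPAGES];  /* entry present in the fork's physmap */
    bool shared[NPAGES];  /* entry points at the parent's page */
};

static int toy_read(struct toy_fork *f, int gfn)
{
    if ( !f->mapped[gfn] )  /* read fault: add a shared entry */
    {
        f->mapped[gfn] = true;
        f->shared[gfn] = true;
    }
    return f->shared[gfn] ? f->parent[gfn] : f->priv[gfn];
}

static void toy_write(struct toy_fork *f, int gfn, int val)
{
    if ( !f->mapped[gfn] || f->shared[gfn] )  /* write fault: copy the page */
    {
        f->priv[gfn] = f->parent[gfn];
        f->mapped[gfn] = true;
        f->shared[gfn] = false;
    }
    f->priv[gfn] = val;
}
```

Reads through shared entries cost nothing in memory; only written pages get private copies, which is what makes fork creation fast.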

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/domain.c             |   9 ++
 xen/arch/x86/hvm/hvm.c            |   2 +-
 xen/arch/x86/mm/mem_sharing.c     | 220 ++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c             |  11 +-
 xen/include/asm-x86/mem_sharing.h |  20 ++-
 xen/include/public/memory.h       |   5 +
 xen/include/xen/sched.h           |   2 +
 7 files changed, 265 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 28fefa1f81..953abcc1fc 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2198,6 +2198,15 @@ int domain_relinquish_resources(struct domain *d)
             ret = relinquish_shared_pages(d);
             if ( ret )
                 return ret;
+
+            /*
+             * If the domain is forked, decrement the parent's pause count.
+             */
+            if ( d->parent )
+            {
+                domain_unpause(d->parent);
+                d->parent = NULL;
+            }
         }
 #endif
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e60b4931bf..6012b88845 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1906,7 +1906,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     }
 #endif
 
-    /* Spurious fault? PoD and log-dirty also take this path. */
+    /* Spurious fault? PoD, log-dirty and VM forking also take this path. */
     if ( p2m_is_ram(p2mt) )
     {
         rc = 1;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e3ddb63b4f..749305417c 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -22,11 +22,13 @@
 
 #include <xen/types.h>
 #include <xen/domain_page.h>
+#include <xen/event.h>
 #include <xen/spinlock.h>
 #include <xen/rwlock.h>
 #include <xen/mm.h>
 #include <xen/grant_table.h>
 #include <xen/sched.h>
+#include <xen/sched-if.h>
 #include <xen/rcupdate.h>
 #include <xen/guest_access.h>
 #include <xen/vm_event.h>
@@ -36,6 +38,9 @@
 #include <asm/altp2m.h>
 #include <asm/atomic.h>
 #include <asm/event.h>
+#include <asm/hap.h>
+#include <asm/hvm/hvm.h>
+#include <asm/hvm/save.h>
 #include <xsm/xsm.h>
 
 #include "mm-locks.h"
@@ -1421,6 +1426,191 @@ static inline int mem_sharing_control(struct domain *d, bool enable)
     return 0;
 }
 
+/*
+ * Forking a page only gets called when the VM faults due to no entry being
+ * in the EPT for the access. Depending on the type of access we either
+ * populate the physmap with a shared entry for read-only access or
+ * fork the page if it's a write access.
+ *
+ * The client p2m is already locked so we only need to lock
+ * the parent's here.
+ */
+int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool unsharing)
+{
+    int rc = -ENOENT;
+    shr_handle_t handle;
+    struct domain *parent;
+    struct p2m_domain *p2m;
+    unsigned long gfn_l = gfn_x(gfn);
+    mfn_t mfn, new_mfn;
+    p2m_type_t p2mt;
+    struct page_info *page;
+
+    if ( !mem_sharing_is_fork(d) )
+        return -ENOENT;
+
+    parent = d->parent;
+
+    if ( !unsharing )
+    {
+        /* For read-only accesses we just add a shared entry to the physmap */
+        while ( parent )
+        {
+            if ( !(rc = nominate_page(parent, gfn, 0, &handle)) )
+                break;
+
+            parent = parent->parent;
+        }
+
+        if ( !rc )
+        {
+            /* The client's p2m is already locked */
+            struct p2m_domain *pp2m = p2m_get_hostp2m(parent);
+
+            p2m_lock(pp2m);
+            rc = add_to_physmap(parent, gfn_l, handle, d, gfn_l, false);
+            p2m_unlock(pp2m);
+
+            if ( !rc )
+                return 0;
+        }
+    }
+
+    /*
+     * If it's a write access (ie. unsharing) or if adding a shared entry to
+     * the physmap failed we'll fork the page directly.
+     */
+    p2m = p2m_get_hostp2m(d);
+    parent = d->parent;
+
+    while ( parent )
+    {
+        mfn = get_gfn_query(parent, gfn_l, &p2mt);
+
+        if ( mfn_valid(mfn) && p2m_is_any_ram(p2mt) )
+            break;
+
+        put_gfn(parent, gfn_l);
+        parent = parent->parent;
+    }
+
+    if ( !parent )
+        return -ENOENT;
+
+    if ( !(page = alloc_domheap_page(d, 0)) )
+    {
+        put_gfn(parent, gfn_l);
+        return -ENOMEM;
+    }
+
+    new_mfn = page_to_mfn(page);
+    copy_domain_page(new_mfn, mfn);
+    set_gpfn_from_mfn(mfn_x(new_mfn), gfn_l);
+
+    put_gfn(parent, gfn_l);
+
+    return p2m->set_entry(p2m, gfn, new_mfn, PAGE_ORDER_4K, p2m_ram_rw,
+                          p2m->default_access, -1);
+}
+
+static int bring_up_vcpus(struct domain *cd, struct cpupool *cpupool)
+{
+    int ret;
+    unsigned int i;
+
+    if ( (ret = cpupool_move_domain(cd, cpupool)) )
+        return ret;
+
+    for ( i = 0; i < cd->max_vcpus; i++ )
+    {
+        if ( cd->vcpu[i] )
+            continue;
+
+        if ( !vcpu_create(cd, i) )
+            return -EINVAL;
+    }
+
+    domain_update_node_affinity(cd);
+    return 0;
+}
+
+static int fork_hap_allocation(struct domain *d, struct domain *cd)
+{
+    int rc;
+    bool preempted;
+    unsigned long mb = hap_get_allocation(d);
+
+    if ( mb == hap_get_allocation(cd) )
+        return 0;
+
+    paging_lock(cd);
+    rc = hap_set_allocation(cd, mb << (20 - PAGE_SHIFT), &preempted);
+    paging_unlock(cd);
+
+    if ( rc )
+        return rc;
+
+    if ( preempted )
+        return -ERESTART;
+
+    return 0;
+}
+
+static void fork_tsc(struct domain *d, struct domain *cd)
+{
+    uint32_t tsc_mode;
+    uint32_t gtsc_khz;
+    uint32_t incarnation;
+    uint64_t elapsed_nsec;
+
+    tsc_get_info(d, &tsc_mode, &elapsed_nsec, &gtsc_khz, &incarnation);
+    tsc_set_info(cd, tsc_mode, elapsed_nsec, gtsc_khz, incarnation);
+}
+
+static int mem_sharing_fork(struct domain *d, struct domain *cd)
+{
+    int rc = -EINVAL;
+
+    if ( !cd->controller_pause_count )
+        return rc;
+
+    /*
+     * We only want to pause the parent once, not each time this
+     * operation is restarted due to preemption.
+     */
+    if ( !cd->parent_paused )
+    {
+        domain_pause(d);
+        cd->parent_paused = true;
+    }
+
+    cd->max_pages = d->max_pages;
+    cd->max_vcpus = d->max_vcpus;
+
+    /* this is preemptible so it's the first to get done */
+    if ( (rc = fork_hap_allocation(d, cd)) )
+        goto done;
+
+    if ( (rc = bring_up_vcpus(cd, d->cpupool)) )
+        goto done;
+
+    if ( (rc = hvm_copy_context_and_params(d, cd)) )
+        goto done;
+
+    fork_tsc(d, cd);
+
+    cd->parent = d;
+
+ done:
+    if ( rc && rc != -ERESTART )
+    {
+        domain_unpause(d);
+        cd->parent_paused = false;
+    }
+
+    return rc;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
     int rc;
@@ -1675,6 +1865,36 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         rc = debug_gref(d, mso.u.debug.u.gref);
         break;
 
+    case XENMEM_sharing_op_fork:
+    {
+        struct domain *pd;
+
+        rc = -EINVAL;
+        if ( mso.u.fork._pad[0] || mso.u.fork._pad[1] ||
+             mso.u.fork._pad[2] )
+            goto out;
+
+        rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
+                                               &pd);
+        if ( rc )
+            goto out;
+
+        if ( !mem_sharing_enabled(pd) )
+        {
+            if ( (rc = mem_sharing_control(pd, true)) )
+                goto out;
+        }
+
+        rc = mem_sharing_fork(pd, d);
+
+        if ( rc == -ERESTART )
+            rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
+                                               "lh", XENMEM_sharing_op,
+                                               arg);
+        rcu_unlock_domain(pd);
+        break;
+    }
+
     default:
         rc = -ENOSYS;
         break;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 43dd46a32f..c73f5aefbb 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -508,6 +508,14 @@ mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn_l,
 
     mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
 
+    /* Check if we need to fork the page */
+    if ( (q & P2M_ALLOC) && p2m_is_hole(*t) &&
+         !mem_sharing_fork_page(p2m->domain, gfn, !!(q & P2M_UNSHARE)) )
+    {
+        mfn = p2m->get_entry(p2m, gfn, t, a, q, page_order, NULL);
+    }
+
+    /* Check if we need to unshare the page */
     if ( (q & P2M_UNSHARE) && p2m_is_shared(*t) )
     {
         ASSERT(p2m_is_hostp2m(p2m));
@@ -585,7 +593,8 @@ struct page_info *p2m_get_page_from_gfn(
             return page;
 
         /* Error path: not a suitable GFN at all */
-        if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) )
+        if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) &&
+             !mem_sharing_is_fork(p2m->domain) )
             return NULL;
     }
 
diff --git a/xen/include/asm-x86/mem_sharing.h b/xen/include/asm-x86/mem_sharing.h
index 53760a2896..812171284f 100644
--- a/xen/include/asm-x86/mem_sharing.h
+++ b/xen/include/asm-x86/mem_sharing.h
@@ -26,8 +26,7 @@
 
 #ifdef CONFIG_MEM_SHARING
 
-struct mem_sharing_domain
-{
+struct mem_sharing_domain {
     bool enabled;
 
     /*
@@ -39,6 +38,9 @@ struct mem_sharing_domain
 
 #define mem_sharing_enabled(d) ((d)->arch.hvm.mem_sharing.enabled)
 
+#define mem_sharing_is_fork(d) \
+    (mem_sharing_enabled(d) && !!((d)->parent))
+
 /* Auditing of memory sharing code? */
 #ifndef NDEBUG
 #define MEM_SHARING_AUDIT 1
@@ -88,6 +90,9 @@ static inline int mem_sharing_unshare_page(struct domain *d,
     return rc;
 }
 
+int mem_sharing_fork_page(struct domain *d, gfn_t gfn,
+                          bool unsharing);
+
 /*
  * If called by a foreign domain, possible errors are
  *   -EBUSY -> ring full
@@ -117,6 +122,7 @@ int relinquish_shared_pages(struct domain *d);
 #else
 
 #define mem_sharing_enabled(d) false
+#define mem_sharing_is_fork(p2m) false
 
 static inline unsigned int mem_sharing_get_nr_saved_mfns(void)
 {
@@ -141,6 +147,16 @@ static inline int mem_sharing_notify_enomem(struct domain *d, unsigned long gfn,
     return -EOPNOTSUPP;
 }
 
+static inline int mem_sharing_fork(struct domain *d, struct domain *cd, bool vcpu)
+{
+    return -EOPNOTSUPP;
+}
+
+static inline int mem_sharing_fork_page(struct domain *d, gfn_t gfn, bool lock)
+{
+    return -EOPNOTSUPP;
+}
+
 #endif
 
 #endif /* __MEM_SHARING_H__ */
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index cfdda6e2a8..90a3f4498e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -482,6 +482,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_add_physmap       6
 #define XENMEM_sharing_op_audit             7
 #define XENMEM_sharing_op_range_share       8
+#define XENMEM_sharing_op_fork              9
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
@@ -532,6 +533,10 @@ struct xen_mem_sharing_op {
                 uint32_t gref;     /* IN: gref to debug         */
             } u;
         } debug;
+        struct mem_sharing_op_fork {
+            domid_t parent_domain;
+            uint16_t _pad[3];                /* Must be set to 0 */
+        } fork;
     } u;
 };
 typedef struct xen_mem_sharing_op xen_mem_sharing_op_t;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cc942a3621..67d44e219c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -504,6 +504,8 @@ struct domain
     /* Memory sharing support */
 #ifdef CONFIG_MEM_SHARING
     struct vm_event_domain *vm_event_share;
+    struct domain *parent; /* VM fork parent */
+    bool parent_paused;
 #endif
     /* Memory paging support */
 #ifdef CONFIG_HAS_MEM_PAGING
-- 
2.20.1



* [Xen-devel] [PATCH v5 16/18] xen/mem_access: Use __get_gfn_type_access in set_mem_access
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (14 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 17/18] x86/mem_sharing: reset a fork Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 18/18] xen/tools: VM forking toolstack side Tamas K Lengyel
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Petre Pircalabu, Tamas K Lengyel, Tamas K Lengyel, Wei Liu,
	George Dunlap, Andrew Cooper, Alexandru Isaila,
	Roger Pau Monné

Use __get_gfn_type_access instead of p2m->get_entry to trigger page-forking
when the mem_access permission is being set on a page that has not yet been
copied over from the parent.
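The distinction being exploited can be sketched with a toy lookup (illustrative names only): a raw p2m->get_entry leaves a hole as a hole, while a __get_gfn_type_access query with P2M_ALLOC first populates the hole from the parent, so the subsequent set_entry operates on a real page.

```c
#include <assert.h>

enum toy_type { TOY_HOLE, TOY_RAM };

struct toy_p2m { enum toy_type t[8]; };

/* Raw lookup: reports whatever is there, a hole stays a hole. */
static enum toy_type toy_get_entry(struct toy_p2m *p, int gfn)
{
    return p->t[gfn];
}

/* P2M_ALLOC-style query: populate a hole (fork the page) first. */
static enum toy_type toy_query_alloc(struct toy_p2m *p, int gfn)
{
    if ( p->t[gfn] == TOY_HOLE )
        p->t[gfn] = TOY_RAM;
    return p->t[gfn];
}
```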

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/mem_access.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 320b9fe621..9caf08a5b2 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -303,11 +303,10 @@ static int set_mem_access(struct domain *d, struct p2m_domain *p2m,
     ASSERT(!ap2m);
 #endif
     {
-        mfn_t mfn;
         p2m_access_t _a;
         p2m_type_t t;
-
-        mfn = p2m->get_entry(p2m, gfn, &t, &_a, 0, NULL, NULL);
+        mfn_t mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &t, &_a,
+                                          P2M_ALLOC, NULL, false);
         rc = p2m->set_entry(p2m, gfn, mfn, PAGE_ORDER_4K, t, a, -1);
     }
 
-- 
2.20.1



* [Xen-devel] [PATCH v5 17/18] x86/mem_sharing: reset a fork
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (15 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 16/18] xen/mem_access: Use __get_gfn_type_access in set_mem_access Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 18/18] xen/tools: VM forking toolstack side Tamas K Lengyel
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Stefano Stabellini,
	Julien Grall, Roger Pau Monné

Implement a hypercall that allows a fork to shed all memory that got allocated
for it during its execution and re-load its vCPU context from the parent VM.
This allows the forked VM to be reset into the same state the parent VM is in
faster than creating a new fork would be. Measurements show about a 2x
speedup during normal fuzzing operations. Performance may vary depending on how
much memory got allocated for the forked VM. If it has been completely
deduplicated from the parent VM then creating a new fork would likely be more
performant.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 76 +++++++++++++++++++++++++++++++++++
 xen/include/public/memory.h   |  1 +
 2 files changed, 77 insertions(+)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 749305417c..c185b0e7c4 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1611,6 +1611,59 @@ static int mem_sharing_fork(struct domain *d, struct domain *cd)
     return rc;
 }
 
+/*
+ * The fork reset operation is intended to be used on short-lived forks only.
+ * There is no hypercall continuation operation implemented for this reason.
+ * For forks that obtain a larger memory footprint it is likely going to be
+ * more performant to create a new fork instead of resetting an existing one.
+ *
+ * TODO: In case this hypercall would become useful on forks with larger memory
+ * footprints the hypercall continuation should be implemented.
+ */
+static int mem_sharing_fork_reset(struct domain *d, struct domain *cd)
+{
+    int rc;
+    struct p2m_domain* p2m = p2m_get_hostp2m(cd);
+    struct page_info *page, *tmp;
+
+    domain_pause(cd);
+
+    page_list_for_each_safe(page, tmp, &cd->page_list)
+    {
+        p2m_type_t p2mt;
+        p2m_access_t p2ma;
+        gfn_t gfn;
+        mfn_t mfn = page_to_mfn(page);
+
+        if ( !mfn_valid(mfn) )
+            continue;
+
+        gfn = mfn_to_gfn(cd, mfn);
+        mfn = __get_gfn_type_access(p2m, gfn_x(gfn), &p2mt, &p2ma,
+                                    0, NULL, false);
+
+        if ( !p2m_is_ram(p2mt) || p2m_is_shared(p2mt) )
+            continue;
+
+        /* take an extra reference */
+        if ( !get_page(page, cd) )
+            continue;
+
+        rc = p2m->set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
+                            p2m_invalid, p2m_access_rwx, -1);
+        ASSERT(!rc);
+
+        put_page_alloc_ref(page);
+        put_page(page);
+    }
+
+    if ( !(rc = hvm_copy_context_and_params(d, cd)) )
+        fork_tsc(d, cd);
+
+    domain_unpause(cd);
+    return rc;
+}
+
 int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
 {
     int rc;
@@ -1895,6 +1948,29 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         break;
     }
 
+    case XENMEM_sharing_op_fork_reset:
+    {
+        struct domain *pd;
+
+        rc = -EINVAL;
+        if ( mso.u.fork._pad[0] || mso.u.fork._pad[1] ||
+             mso.u.fork._pad[2] )
+            goto out;
+
+        rc = -ENOSYS;
+        if ( !d->parent )
+            goto out;
+
+        rc = rcu_lock_live_remote_domain_by_id(d->parent->domain_id, &pd);
+        if ( rc )
+            goto out;
+
+        rc = mem_sharing_fork_reset(pd, d);
+
+        rcu_unlock_domain(pd);
+        break;
+    }
+
     default:
         rc = -ENOSYS;
         break;
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 90a3f4498e..e3d063e22e 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -483,6 +483,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_mem_access_op_t);
 #define XENMEM_sharing_op_audit             7
 #define XENMEM_sharing_op_range_share       8
 #define XENMEM_sharing_op_fork              9
+#define XENMEM_sharing_op_fork_reset        10
 
 #define XENMEM_SHARING_OP_S_HANDLE_INVALID  (-10)
 #define XENMEM_SHARING_OP_C_HANDLE_INVALID  (-9)
-- 
2.20.1



* [Xen-devel] [PATCH v5 18/18] xen/tools: VM forking toolstack side
  2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
                   ` (16 preceding siblings ...)
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 17/18] x86/mem_sharing: reset a fork Tamas K Lengyel
@ 2020-01-21 17:49 ` Tamas K Lengyel
  17 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-21 17:49 UTC (permalink / raw)
  To: xen-devel; +Cc: Anthony PERARD, Ian Jackson, Tamas K Lengyel, Wei Liu

Add the necessary bits to implement the "xl fork-vm" command. The command
allows the user to specify how to launch the device model, allowing for a
late-launch model in which the user can run the fork without the device model
and decide to launch it only later.
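A possible late-launch session, using only the options this patch documents; domain IDs and file names are placeholders:

```shell
# Fast fork of domain 1 without a device model, left paused.
xl fork-vm --launch-dm no -p 1

# Later, attach QEMU to the running fork (domid 2) from a prepared
# config file and QEMU save file.
xl fork-vm --launch-dm late -C fork.cfg -Q fork.qemu 2

# Reset the fork in place instead of creating a fresh one.
xl fork-vm --fork-reset 2
```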

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 docs/man/xl.1.pod.in          |  36 ++++++
 tools/libxc/include/xenctrl.h |  13 ++
 tools/libxc/xc_memshr.c       |  22 ++++
 tools/libxl/libxl.h           |   7 +
 tools/libxl/libxl_create.c    | 237 +++++++++++++++++++++++-----------
 tools/libxl/libxl_dm.c        |   2 +-
 tools/libxl/libxl_dom.c       |  83 ++++++++----
 tools/libxl/libxl_internal.h  |   1 +
 tools/libxl/libxl_types.idl   |   1 +
 tools/xl/xl.h                 |   5 +
 tools/xl/xl_cmdtable.c        |  12 ++
 tools/xl/xl_saverestore.c     |  96 ++++++++++++++
 tools/xl/xl_vmcontrol.c       |   8 ++
 13 files changed, 419 insertions(+), 104 deletions(-)

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index d4b5e8e362..22cc4149b0 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -694,6 +694,42 @@ Leave the domain paused after creating the snapshot.
 
 =back
 
+=item B<fork-vm> [I<OPTIONS>] I<domain-id>
+
+Create a fork of a running VM. The domain will be paused after the operation
+and needs to remain paused while forks of it exist.
+
+B<OPTIONS>
+
+=over 4
+
+=item B<-p>
+
+Leave the fork paused after creating it.
+
+=item B<--launch-dm>
+
+Specify whether the device model (QEMU) should be launched for the fork. Late
+launch allows starting the device model for an already running fork.
+
+=item B<-C>
+
+The config file to use when launching the device model. Currently required when
+launching the device model.
+
+=item B<-Q>
+
+The qemu save file to use when launching the device model.  Currently required
+when launching the device model.
+
+=item B<--fork-reset>
+
+Perform a reset operation on an already running fork. Note that resetting may
+be less performant than creating a new fork, depending on how much memory the
+fork has deduplicated during its runtime.
+
+=back
+
 =item B<sharing> [I<domain-id>]
 
 Display the number of shared pages for a specified domain. If no domain is
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 75f191ae3a..ffb0bb9a42 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2221,6 +2221,19 @@ int xc_memshr_range_share(xc_interface *xch,
                           uint64_t first_gfn,
                           uint64_t last_gfn);
 
+int xc_memshr_fork(xc_interface *xch,
+                   uint32_t source_domain,
+                   uint32_t client_domain);
+
+/*
+ * Note: this function is only intended to be used on short-lived forks that
+ * haven't yet acquired a lot of memory. If the fork has a lot of memory
+ * it is likely more performant to create a new fork with xc_memshr_fork.
+ *
+ * With VMs that have a lot of memory this call may block for a long time.
+ */
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t forked_domain);
+
 /* Debug calls: return the number of pages referencing the shared frame backing
  * the input argument. Should be one or greater.
  *
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 97e2e6a8d9..d0e4ee225b 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -239,6 +239,28 @@ int xc_memshr_debug_gref(xc_interface *xch,
     return xc_memshr_memop(xch, domid, &mso);
 }
 
+int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid)
+{
+    xen_mem_sharing_op_t mso;
+
+    memset(&mso, 0, sizeof(mso));
+
+    mso.op = XENMEM_sharing_op_fork;
+    mso.u.fork.parent_domain = pdomid;
+
+    return xc_memshr_memop(xch, domid, &mso);
+}
+
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t domid)
+{
+    xen_mem_sharing_op_t mso;
+
+    memset(&mso, 0, sizeof(mso));
+    mso.op = XENMEM_sharing_op_fork_reset;
+
+    return xc_memshr_memop(xch, domid, &mso);
+}
+
 int xc_memshr_audit(xc_interface *xch)
 {
     xen_mem_sharing_op_t mso;
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 54abb9db1f..75cb070587 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -1536,6 +1536,13 @@ int libxl_domain_create_new(libxl_ctx *ctx, libxl_domain_config *d_config,
                             const libxl_asyncop_how *ao_how,
                             const libxl_asyncprogress_how *aop_console_how)
                             LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t *domid)
+                             LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+                                uint32_t domid,
+                                const libxl_asyncprogress_how *aop_console_how)
+                                LIBXL_EXTERNAL_CALLERS_ONLY;
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid);
 int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
                                 uint32_t *domid, int restore_fd,
                                 int send_back_fd,
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 69fceff061..354c278161 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -536,12 +536,12 @@ out:
     return ret;
 }
 
-int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
-                       libxl__domain_build_state *state,
-                       uint32_t *domid)
+static int libxl__domain_make_xs_entries(libxl__gc *gc, libxl_domain_config *d_config,
+                                         libxl__domain_build_state *state,
+                                         uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    int ret, rc, nb_vm;
+    int rc, nb_vm;
     const char *dom_type;
     char *uuid_string;
     char *dom_path, *vm_path, *libxl_path;
@@ -553,7 +553,6 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
-    libxl_domain_build_info *b_info = &d_config->b_info;
 
     uuid_string = libxl__uuid2string(gc, info->uuid);
     if (!uuid_string) {
@@ -561,64 +560,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         goto out;
     }
 
-    /* Valid domid here means we're soft resetting. */
-    if (!libxl_domid_valid_guest(*domid)) {
-        struct xen_domctl_createdomain create = {
-            .ssidref = info->ssidref,
-            .max_vcpus = b_info->max_vcpus,
-            .max_evtchn_port = b_info->event_channels,
-            .max_grant_frames = b_info->max_grant_frames,
-            .max_maptrack_frames = b_info->max_maptrack_frames,
-        };
-
-        if (info->type != LIBXL_DOMAIN_TYPE_PV) {
-            create.flags |= XEN_DOMCTL_CDF_hvm;
-            create.flags |=
-                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
-            create.flags |=
-                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
-        }
-
-        assert(info->passthrough != LIBXL_PASSTHROUGH_DEFAULT);
-        LOG(DETAIL, "passthrough: %s",
-            libxl_passthrough_to_string(info->passthrough));
-
-        if (info->passthrough != LIBXL_PASSTHROUGH_DISABLED)
-            create.flags |= XEN_DOMCTL_CDF_iommu;
-
-        if (info->passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
-            create.iommu_opts |= XEN_DOMCTL_IOMMU_no_sharept;
-
-        /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
-        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
-
-        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
-        if (ret < 0) {
-            LOGED(ERROR, *domid, "fail to get domain config");
-            rc = ERROR_FAIL;
-            goto out;
-        }
-
-        ret = xc_domain_create(ctx->xch, domid, &create);
-        if (ret < 0) {
-            LOGED(ERROR, *domid, "domain creation fail");
-            rc = ERROR_FAIL;
-            goto out;
-        }
-
-        rc = libxl__arch_domain_save_config(gc, d_config, state, &create);
-        if (rc < 0)
-            goto out;
-    }
-
-    ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
-    if (ret < 0) {
-        LOGED(ERROR, *domid, "domain move fail");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
-    dom_path = libxl__xs_get_dompath(gc, *domid);
+    dom_path = libxl__xs_get_dompath(gc, domid);
     if (!dom_path) {
         rc = ERROR_FAIL;
         goto out;
@@ -626,12 +568,12 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     vm_path = GCSPRINTF("/vm/%s", uuid_string);
     if (!vm_path) {
-        LOGD(ERROR, *domid, "cannot allocate create paths");
+        LOGD(ERROR, domid, "cannot allocate create paths");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl_path = libxl__xs_libxl_path(gc, *domid);
+    libxl_path = libxl__xs_libxl_path(gc, domid);
     if (!libxl_path) {
         rc = ERROR_FAIL;
         goto out;
@@ -642,10 +584,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     roperm[0].id = 0;
     roperm[0].perms = XS_PERM_NONE;
-    roperm[1].id = *domid;
+    roperm[1].id = domid;
     roperm[1].perms = XS_PERM_READ;
 
-    rwperm[0].id = *domid;
+    rwperm[0].id = domid;
     rwperm[0].perms = XS_PERM_NONE;
 
 retry_transaction:
@@ -663,7 +605,7 @@ retry_transaction:
                     noperm, ARRAY_SIZE(noperm));
 
     xs_write(ctx->xsh, t, GCSPRINTF("%s/vm", dom_path), vm_path, strlen(vm_path));
-    rc = libxl__domain_rename(gc, *domid, 0, info->name, t);
+    rc = libxl__domain_rename(gc, domid, 0, info->name, t);
     if (rc)
         goto out;
 
@@ -740,7 +682,7 @@ retry_transaction:
 
     vm_list = libxl_list_vm(ctx, &nb_vm);
     if (!vm_list) {
-        LOGD(ERROR, *domid, "cannot get number of running guests");
+        LOGD(ERROR, domid, "cannot get number of running guests");
         rc = ERROR_FAIL;
         goto out;
     }
@@ -764,7 +706,7 @@ retry_transaction:
             t = 0;
             goto retry_transaction;
         }
-        LOGED(ERROR, *domid, "domain creation ""xenstore transaction commit failed");
+        LOGED(ERROR, domid, "domain creation ""xenstore transaction commit failed");
         rc = ERROR_FAIL;
         goto out;
     }
@@ -776,6 +718,80 @@ retry_transaction:
     return rc;
 }
 
+int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
+                       libxl__domain_build_state *state,
+                       uint32_t *domid)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    int ret, rc;
+
+    /* convenience aliases */
+    libxl_domain_create_info *info = &d_config->c_info;
+    libxl_domain_build_info *b_info = &d_config->b_info;
+
+    /* Valid domid here means we're soft resetting. */
+    if (!libxl_domid_valid_guest(*domid)) {
+        struct xen_domctl_createdomain create = {
+            .ssidref = info->ssidref,
+            .max_vcpus = b_info->max_vcpus,
+            .max_evtchn_port = b_info->event_channels,
+            .max_grant_frames = b_info->max_grant_frames,
+            .max_maptrack_frames = b_info->max_maptrack_frames,
+        };
+
+        if (info->type != LIBXL_DOMAIN_TYPE_PV) {
+            create.flags |= XEN_DOMCTL_CDF_hvm;
+            create.flags |=
+                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
+            create.flags |=
+                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
+        }
+
+        assert(info->passthrough != LIBXL_PASSTHROUGH_DEFAULT);
+        LOG(DETAIL, "passthrough: %s",
+            libxl_passthrough_to_string(info->passthrough));
+
+        if (info->passthrough != LIBXL_PASSTHROUGH_DISABLED)
+            create.flags |= XEN_DOMCTL_CDF_iommu;
+
+        if (info->passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
+            create.iommu_opts |= XEN_DOMCTL_IOMMU_no_sharept;
+
+        /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
+        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
+
+        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
+        if (ret < 0) {
+            LOGED(ERROR, *domid, "fail to get domain config");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        ret = xc_domain_create(ctx->xch, domid, &create);
+        if (ret < 0) {
+            LOGED(ERROR, *domid, "domain creation fail");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        rc = libxl__arch_domain_save_config(gc, d_config, state, &create);
+        if (rc < 0)
+            goto out;
+    }
+
+    ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
+    if (ret < 0) {
+        LOGED(ERROR, *domid, "domain move fail");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__domain_make_xs_entries(gc, d_config, state, *domid);
+
+out:
+    return rc;
+}
+
 static int store_libxl_entry(libxl__gc *gc, uint32_t domid,
                              libxl_domain_build_info *b_info)
 {
@@ -1097,15 +1113,31 @@ static void initiate_domain_create(libxl__egc *egc,
     ret = libxl__domain_config_setdefault(gc,d_config,domid);
     if (ret) goto error_out;
 
-    ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
-    if (ret) {
-        LOGD(ERROR, domid, "cannot make domain: %d", ret);
+    if ( !d_config->dm_restore_file )
+    {
+        ret = libxl__domain_make(gc, d_config, &dcs->build_state, &domid);
         dcs->guest_domid = domid;
+
+        if (ret) {
+            LOGD(ERROR, domid, "cannot make domain: %d", ret);
+            ret = ERROR_FAIL;
+            goto error_out;
+        }
+    } else if ( dcs->guest_domid != INVALID_DOMID ) {
+        domid = dcs->guest_domid;
+
+        ret = libxl__domain_make_xs_entries(gc, d_config, &dcs->build_state, domid);
+        if (ret) {
+            LOGD(ERROR, domid, "cannot make domain: %d", ret);
+            ret = ERROR_FAIL;
+            goto error_out;
+        }
+    } else {
+        LOGD(ERROR, domid, "cannot make domain");
         ret = ERROR_FAIL;
         goto error_out;
     }
 
-    dcs->guest_domid = domid;
     dcs->sdss.dm.guest_domid = 0; /* means we haven't spawned */
 
     /* post-4.13 todo: move these next bits of defaulting to
@@ -1141,7 +1173,7 @@ static void initiate_domain_create(libxl__egc *egc,
     if (ret)
         goto error_out;
 
-    if (restore_fd >= 0 || dcs->domid_soft_reset != INVALID_DOMID) {
+    if (restore_fd >= 0 || dcs->domid_soft_reset != INVALID_DOMID || d_config->dm_restore_file) {
         LOGD(DEBUG, domid, "restoring, not running bootloader");
         domcreate_bootloader_done(egc, &dcs->bl, 0);
     } else  {
@@ -1217,7 +1249,16 @@ static void domcreate_bootloader_done(libxl__egc *egc,
     dcs->sdss.dm.callback = domcreate_devmodel_started;
     dcs->sdss.callback = domcreate_devmodel_started;
 
-    if (restore_fd < 0 && dcs->domid_soft_reset == INVALID_DOMID) {
+    if (restore_fd < 0 && dcs->domid_soft_reset == INVALID_DOMID && !d_config->dm_restore_file) {
+        rc = libxl__domain_build(gc, domid, dcs);
+        domcreate_rebuild_done(egc, dcs, rc);
+        return;
+    }
+
+    if ( d_config->dm_restore_file ) {
+        dcs->srs.dcs = dcs;
+        dcs->srs.ao = ao;
+        state->forked_vm = true;
         rc = libxl__domain_build(gc, domid, dcs);
         domcreate_rebuild_done(egc, dcs, rc);
         return;
@@ -1415,6 +1456,7 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     /* convenience aliases */
     const uint32_t domid = dcs->guest_domid;
     libxl_domain_config *const d_config = dcs->guest_config;
+    libxl__domain_build_state *const state = &dcs->build_state;
 
     if (ret) {
         LOGD(ERROR, domid, "cannot (re-)build domain: %d", ret);
@@ -1422,6 +1464,9 @@ static void domcreate_rebuild_done(libxl__egc *egc,
         goto error_out;
     }
 
+    if ( d_config->dm_restore_file )
+        state->saved_state = GCSPRINTF("%s", d_config->dm_restore_file);
+
     store_libxl_entry(gc, domid, &d_config->b_info);
 
     libxl__multidev_begin(ao, &dcs->multidev);
@@ -1823,10 +1868,13 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
     GCNEW(cdcs);
     cdcs->dcs.ao = ao;
     cdcs->dcs.guest_config = d_config;
+    cdcs->dcs.guest_domid = *domid;
+
     libxl_domain_config_init(&cdcs->dcs.guest_config_saved);
     libxl_domain_config_copy(ctx, &cdcs->dcs.guest_config_saved, d_config);
     cdcs->dcs.restore_fd = cdcs->dcs.libxc_fd = restore_fd;
     cdcs->dcs.send_back_fd = send_back_fd;
+
     if (restore_fd > -1) {
         cdcs->dcs.restore_params = *params;
         rc = libxl__fd_flags_modify_save(gc, cdcs->dcs.restore_fd,
@@ -2069,6 +2117,43 @@ int libxl_domain_create_new(libxl_ctx *ctx, libxl_domain_config *d_config,
                             ao_how, aop_console_how);
 }
 
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t *domid)
+{
+    int rc;
+    struct xen_domctl_createdomain create = {0};
+    create.flags |= XEN_DOMCTL_CDF_hvm;
+    create.flags |= XEN_DOMCTL_CDF_hap;
+    create.flags |= XEN_DOMCTL_CDF_oos_off;
+    create.arch.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
+
+    create.ssidref = SECINITSID_DOMU;
+    create.max_vcpus = 1; // placeholder, will be cloned from pdomid
+    create.max_evtchn_port = 1023;
+    create.max_grant_frames = LIBXL_MAX_GRANT_FRAMES_DEFAULT;
+    create.max_maptrack_frames = LIBXL_MAX_MAPTRACK_FRAMES_DEFAULT;
+
+    if ( (rc = xc_domain_create(ctx->xch, domid, &create)) )
+        return rc;
+
+    if ( (rc = xc_memshr_fork(ctx->xch, pdomid, *domid)) )
+        xc_domain_destroy(ctx->xch, *domid);
+
+    return rc;
+}
+
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+                                uint32_t domid,
+                                const libxl_asyncprogress_how *aop_console_how)
+{
+    unset_disk_colo_restore(d_config);
+    return do_domain_create(ctx, d_config, &domid, -1, -1, 0, 0, aop_console_how);
+}
+
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
+{
+    return xc_memshr_fork_reset(ctx->xch, domid);
+}
+
 int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
                                 uint32_t *domid, int restore_fd,
                                 int send_back_fd,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 3f08ccad1b..37914e3d86 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2788,7 +2788,7 @@ static void device_model_spawn_outcome(libxl__egc *egc,
 
     libxl__domain_build_state *state = dmss->build_state;
 
-    if (state->saved_state) {
+    if (state->saved_state && !state->forked_vm) {
         ret2 = unlink(state->saved_state);
         if (ret2) {
             LOGED(ERROR, dmss->guest_domid, "%s: failed to remove device-model state %s",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index e0b6d4a8d3..871b3712e3 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -394,9 +394,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl__domain_build_state *state = &dcs->build_state;
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *xs_domid, *con_domid;
-    int rc;
+    int rc = 0;
     uint64_t size;
 
+    if ( state->forked_vm )
+        goto skip_fork;
+
     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
         LOG(ERROR, "Couldn't set max vcpu count");
         return ERROR_FAIL;
@@ -501,29 +504,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         }
     }
 
-
-    rc = libxl__arch_extra_memory(gc, info, &size);
-    if (rc < 0) {
-        LOGE(ERROR, "Couldn't get arch extra constant memory size");
-        return ERROR_FAIL;
-    }
-
-    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + size) < 0) {
-        LOGE(ERROR, "Couldn't set max memory");
-        return ERROR_FAIL;
-    }
-
-    xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
-    state->store_domid = xs_domid ? atoi(xs_domid) : 0;
-    free(xs_domid);
-
-    con_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenconsoled/domid", NULL);
-    state->console_domid = con_domid ? atoi(con_domid) : 0;
-    free(con_domid);
-
-    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->store_domid);
-    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->console_domid);
-
     if (info->type != LIBXL_DOMAIN_TYPE_PV)
         hvm_set_conf_params(ctx->xch, domid, info);
 
@@ -558,8 +538,34 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
                          info->altp2m);
     }
 
+    rc = libxl__arch_extra_memory(gc, info, &size);
+    if (rc < 0) {
+        LOGE(ERROR, "Couldn't get arch extra constant memory size");
+        return ERROR_FAIL;
+    }
+
+    if (xc_domain_setmaxmem(ctx->xch, domid, info->target_memkb + size) < 0) {
+        LOGE(ERROR, "Couldn't set max memory");
+        return ERROR_FAIL;
+    }
+
     rc = libxl__arch_domain_create(gc, d_config, domid);
+    if ( rc )
+        goto out;
 
+skip_fork:
+    xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
+    state->store_domid = xs_domid ? atoi(xs_domid) : 0;
+    free(xs_domid);
+
+    con_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenconsoled/domid", NULL);
+    state->console_domid = con_domid ? atoi(con_domid) : 0;
+    free(con_domid);
+
+    state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->store_domid);
+    state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->console_domid);
+
+out:
     return rc;
 }
 
@@ -617,6 +623,9 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
     char **ents;
     int i, rc;
 
+    if ( state->forked_vm )
+        goto skip_fork;
+
     if (info->num_vnuma_nodes && !info->num_vcpu_soft_affinity) {
         rc = set_vnuma_affinity(gc, domid, info);
         if (rc)
@@ -641,6 +650,7 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
         }
     }
 
+skip_fork:
     ents = libxl__calloc(gc, 12 + (info->max_vcpus * 2) + 2, sizeof(char *));
     ents[0] = "memory/static-max";
     ents[1] = GCSPRINTF("%"PRId64, info->max_memkb);
@@ -903,14 +913,16 @@ static int hvm_build_set_params(xc_interface *handle, uint32_t domid,
                                 libxl_domain_build_info *info,
                                 int store_evtchn, unsigned long *store_mfn,
                                 int console_evtchn, unsigned long *console_mfn,
-                                domid_t store_domid, domid_t console_domid)
+                                domid_t store_domid, domid_t console_domid,
+                                bool forked_vm)
 {
     struct hvm_info_table *va_hvm;
     uint8_t *va_map, sum;
     uint64_t str_mfn, cons_mfn;
     int i;
 
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM) {
+    if ( info->type == LIBXL_DOMAIN_TYPE_HVM && !forked_vm )
+    {
         va_map = xc_map_foreign_range(handle, domid,
                                       XC_PAGE_SIZE, PROT_READ | PROT_WRITE,
                                       HVM_INFO_PFN);
@@ -1226,6 +1238,23 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     struct xc_dom_image *dom = NULL;
     bool device_model = info->type == LIBXL_DOMAIN_TYPE_HVM ? true : false;
 
+    if ( state->forked_vm )
+    {
+        rc = hvm_build_set_params(ctx->xch, domid, info, state->store_port,
+                                  &state->store_mfn, state->console_port,
+                                  &state->console_mfn, state->store_domid,
+                                  state->console_domid, state->forked_vm);
+
+        if ( rc )
+            return rc;
+
+        return xc_dom_gnttab_seed(ctx->xch, domid, true,
+                                  state->console_mfn,
+                                  state->store_mfn,
+                                  state->console_domid,
+                                  state->store_domid);
+    }
+
     xc_dom_loginit(ctx->xch);
 
     /*
@@ -1350,7 +1379,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     rc = hvm_build_set_params(ctx->xch, domid, info, state->store_port,
                                &state->store_mfn, state->console_port,
                                &state->console_mfn, state->store_domid,
-                               state->console_domid);
+                               state->console_domid, false);
     if (rc != 0) {
         LOG(ERROR, "hvm build set params failed");
         goto out;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index d919f91882..4410270508 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1360,6 +1360,7 @@ typedef struct {
 
     char *saved_state;
     int dm_monitor_fd;
+    bool forked_vm;
 
     libxl__file_reference pv_kernel;
     libxl__file_reference pv_ramdisk;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index 7921950f6a..7c4c4057a9 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -956,6 +956,7 @@ libxl_domain_config = Struct("domain_config", [
     ("on_watchdog", libxl_action_on_shutdown),
     ("on_crash", libxl_action_on_shutdown),
     ("on_soft_reset", libxl_action_on_shutdown),
+    ("dm_restore_file", string, {'const': True}),
     ], dir=DIR_IN)
 
 libxl_diskinfo = Struct("diskinfo", [
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 60bdad8ffb..9bdad6526e 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -31,6 +31,7 @@ struct cmd_spec {
 };
 
 struct domain_create {
+    uint32_t ddomid; /* fork launch dm for this domid */
     int debug;
     int daemonize;
     int monitor; /* handle guest reboots etc */
@@ -45,6 +46,7 @@ struct domain_create {
     const char *config_file;
     char *extra_config; /* extra config string */
     const char *restore_file;
+    const char *dm_restore_file;
     char *colo_proxy_script;
     bool userspace_colo_proxy;
     int migrate_fd; /* -1 means none */
@@ -127,6 +129,9 @@ int main_pciassignable_remove(int argc, char **argv);
 int main_pciassignable_list(int argc, char **argv);
 #ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_restore(int argc, char **argv);
+int main_fork_vm(int argc, char **argv);
+int main_fork_launch_dm(int argc, char **argv);
+int main_fork_reset(int argc, char **argv);
 int main_migrate_receive(int argc, char **argv);
 int main_save(int argc, char **argv);
 int main_migrate(int argc, char **argv);
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 3b302b2f20..3a5d371057 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -185,6 +185,18 @@ struct cmd_spec cmd_table[] = {
       "Restore a domain from a saved state",
       "- for internal use only",
     },
+    { "fork-vm",
+      &main_fork_vm, 0, 1,
+      "Fork a domain from the running parent domid",
+      "[options] <Domid>",
+      "-h                           Print this help.\n"
+      "-C <config>                  Use config file for VM fork.\n"
+      "-Q <qemu-save-file>          Use qemu save file for VM fork.\n"
+      "--launch-dm <yes|no|late>    Launch device model (QEMU) for VM fork.\n"
+      "--fork-reset                 Reset VM fork.\n"
+      "-p                           Do not unpause fork VM after operation.\n"
+      "-d                           Enable debug messages.\n"
+    },
 #endif
     { "dump-core",
       &main_dump_core, 0, 1,
diff --git a/tools/xl/xl_saverestore.c b/tools/xl/xl_saverestore.c
index 9be033fe65..72c6209558 100644
--- a/tools/xl/xl_saverestore.c
+++ b/tools/xl/xl_saverestore.c
@@ -229,6 +229,102 @@ int main_restore(int argc, char **argv)
     return EXIT_SUCCESS;
 }
 
+int main_fork_vm(int argc, char **argv)
+{
+    int rc, debug = 0;
+    uint32_t domid_in = INVALID_DOMID, domid_out = INVALID_DOMID;
+    int launch_dm = 1;
+    bool reset = 0;
+    bool pause = 0;
+    const char *config_file = NULL;
+    const char *dm_restore_file = NULL;
+
+    int opt;
+    static struct option opts[] = {
+        {"launch-dm", 1, 0, 'l'},
+        {"fork-reset", 0, 0, 'r'},
+        COMMON_LONG_OPTS
+    };
+
+    SWITCH_FOREACH_OPT(opt, "phdC:Q:l:rN:D:B:V:", opts, "fork-vm", 1) {
+    case 'd':
+        debug = 1;
+        break;
+    case 'p':
+        pause = 1;
+        break;
+    case 'C':
+        config_file = optarg;
+        break;
+    case 'Q':
+        dm_restore_file = optarg;
+        break;
+    case 'l':
+        printf("optarg: %s\n", optarg);
+        if ( !strcmp(optarg, "no") )
+            launch_dm = 0;
+        if ( !strcmp(optarg, "yes") )
+            launch_dm = 1;
+        if ( !strcmp(optarg, "late") )
+            launch_dm = 2;
+        break;
+    case 'r':
+        reset = 1;
+        break;
+    case 'N': /* fall-through */
+    case 'D': /* fall-through */
+    case 'B': /* fall-through */
+    case 'V':
+        fprintf(stderr, "Unimplemented option(s)\n");
+        return EXIT_FAILURE;
+    }
+
+    if (argc-optind == 1) {
+        domid_in = atoi(argv[optind]);
+    } else {
+        help("fork-vm");
+        return EXIT_FAILURE;
+    }
+
+    if (launch_dm && (!config_file || !dm_restore_file)) {
+        fprintf(stderr, "Currently you must provide both -C and -Q options\n");
+        return EXIT_FAILURE;
+    }
+
+    if (reset) {
+        domid_out = domid_in;
+        if (libxl_domain_fork_reset(ctx, domid_in) == EXIT_FAILURE)
+            return EXIT_FAILURE;
+    }
+
+    if (launch_dm == 2 || reset) {
+        domid_out = domid_in;
+        rc = EXIT_SUCCESS;
+    } else
+        rc = libxl_domain_fork_vm(ctx, domid_in, &domid_out);
+
+    if (rc == EXIT_SUCCESS && launch_dm) {
+        struct domain_create dom_info;
+        memset(&dom_info, 0, sizeof(dom_info));
+        dom_info.ddomid = domid_out;
+        dom_info.dm_restore_file = dm_restore_file;
+        dom_info.debug = debug;
+        dom_info.paused = pause;
+        dom_info.config_file = config_file;
+        dom_info.migrate_fd = -1;
+        dom_info.send_back_fd = -1;
+        rc = create_domain(&dom_info) < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
+    }
+
+    if (rc == EXIT_SUCCESS && !pause)
+        rc = libxl_domain_unpause(ctx, domid_out, NULL);
+
+    if (rc == EXIT_SUCCESS)
+        fprintf(stderr, "fork-vm command successfully returned domid: %u\n", domid_out);
+
+    return rc;
+}
+
 int main_save(int argc, char **argv)
 {
     uint32_t domid;
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index e520b1da79..d9cb19c599 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -645,6 +645,7 @@ int create_domain(struct domain_create *dom_info)
 
     libxl_domain_config d_config;
 
+    uint32_t ddomid = dom_info->ddomid; // launch dm for this domain iff set
     int debug = dom_info->debug;
     int daemonize = dom_info->daemonize;
     int monitor = dom_info->monitor;
@@ -655,6 +656,7 @@ int create_domain(struct domain_create *dom_info)
     const char *restore_file = dom_info->restore_file;
     const char *config_source = NULL;
     const char *restore_source = NULL;
+    const char *dm_restore_file = dom_info->dm_restore_file;
     int migrate_fd = dom_info->migrate_fd;
     bool config_in_json;
 
@@ -923,6 +925,12 @@ start:
          * restore/migrate-receive it again.
          */
         restoring = 0;
+    } else if ( ddomid ) {
+        d_config.dm_restore_file = dm_restore_file;
+        ret = libxl_domain_fork_launch_dm(ctx, &d_config, ddomid,
+                                          autoconnect_console_how);
+        domid = ddomid;
+        ddomid = INVALID_DOMID;
     } else if (domid_soft_reset != INVALID_DOMID) {
         /* Do soft reset. */
         ret = libxl_domain_soft_reset(ctx, &d_config, domid_soft_reset,
-- 
2.20.1



* Re: [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params Tamas K Lengyel
@ 2020-01-22 15:00   ` Jan Beulich
  2020-01-22 16:42     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 15:00 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.01.2020 18:49, Tamas K Lengyel wrote:
> Currently the hvm parameters are only accessible via the HVMOP hypercalls. In
> this patch we introduce a new function that can copy both the hvm context and
> parameters directly into a target domain. No functional changes in existing
> code.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

In reply to my v4 comments you said "I don't have any objections to the
things you pointed out." Yet only one aspect was actually changed here.
It also doesn't help that there's no brief summary of the changes done
for v5. I guess I'm confused.

Jan

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries Tamas K Lengyel
@ 2020-01-22 15:23   ` Jan Beulich
  2020-01-22 16:51     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 15:23 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: George Dunlap, xen-devel, Roger Pau Monné, Wei Liu, Andrew Cooper

On 21.01.2020 18:49, Tamas K Lengyel wrote:
> The owner domain of shared pages is dom_cow, use that for get_page
> otherwise the function fails to return the correct page.

I think this description needs improvement: The function does the
special shared page dance in one place (on the "fast path")
already. This wants mentioning, either because it was a mistake
to have it just there, or because a new need has appeared to also
have it on the "slow path".

> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
>      if ( p2m_is_ram(*t) && mfn_valid(mfn) )
>      {
>          page = mfn_to_page(mfn);
> -        if ( !get_page(page, p2m->domain) )
> +        if ( !get_page(page, p2m->domain) &&
> +             /* Page could be shared */
> +             (!dom_cow || !p2m_is_shared(*t) ||
> +              !get_page(page, dom_cow)) )

While there may be a reason why on the fast path two get_page()
invocations are necessary, couldn't you get away with just
one

        if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
                                                            : dom_cow) )

at least here? It's also not really clear to me why here and
there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
p2m_is_shared() return consistently "false" when !dom_cow ?

Jan
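
For illustration, the consolidated owner selection suggested here can be sketched with stand-in types (the `struct domain` stub and the `page_owner()` helper are hypothetical scaffolding for this thread, not Xen's actual API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for Xen's struct domain. */
struct domain { int domain_id; };

static struct domain cow = { .domain_id = -2 };
static struct domain *dom_cow = &cow;  /* NULL when mem-sharing is unavailable */

/*
 * Select the domain to pass to get_page(): shared entries are owned by
 * dom_cow, everything else by the p2m's own domain.  An entry can only
 * have a shared type when dom_cow exists, which is why the extra
 * !dom_cow guard discussed above is redundant.
 */
static struct domain *page_owner(struct domain *p2m_domain, bool is_shared)
{
    return is_shared ? dom_cow : p2m_domain;
}
```

Under these assumptions a single call along the lines of `get_page(page, page_owner(p2m->domain, p2m_is_shared(*t)))` could replace the two chained `get_page()` attempts on the slow path.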

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk Tamas K Lengyel
@ 2020-01-22 15:30   ` Jan Beulich
  2020-01-22 16:55     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 15:30 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Roger Pau Monné

On 21.01.2020 18:49, Tamas K Lengyel wrote:
> @@ -538,24 +535,26 @@ static int audit(void)
>              d = get_domain_by_id(g->domain);
>              if ( d == NULL )
>              {
> -                MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
> -                                  g->domain, g->gfn, mfn_x(mfn));
> +                gdprintk(XENLOG_ERR,
> +                         "Unknown dom: %pd, for PFN=%lx, MFN=%lx\n",
> +                         d, g->gfn, mfn_x(mfn));

With "if ( d == NULL )" around this you hardly mean to pass d to
the function here. This is a case where you really need to stick
to logging a raw number.

>                  errors++;
>                  continue;
>              }
>              o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
>              if ( !mfn_eq(o_mfn, mfn) )
>              {
> -                MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
> -                                  "Expecting MFN=%lx, got %lx\n",
> -                                  g->domain, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
> +                gdprintk(XENLOG_ERR, "Incorrect P2M for d=%pd, PFN=%lx."

Here and elsewhere may I recommend dropping d= (or dom= further
down)?

> @@ -757,10 +756,10 @@ static int debug_mfn(mfn_t mfn)
>          return -EINVAL;
>      }
>  
> -    MEM_SHARING_DEBUG(
> -        "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
> -        mfn_x(page_to_mfn(page)), page->count_info,
> -        page->u.inuse.type_info, page_get_owner(page));
> +    gdprintk(XENLOG_ERR,
> +             "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
> +             mfn_x(page_to_mfn(page)), page->count_info,
> +             page->u.inuse.type_info, page_get_owner(page)->domain_id);

As indicated before (I think), please prefer %pd and a struct domain
pointer over passing ->domain_id (at least one more instance further
down).

Jan

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop Tamas K Lengyel
@ 2020-01-22 15:32   ` Jan Beulich
  0 siblings, 0 replies; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 15:32 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Roger Pau Monné

On 21.01.2020 18:49, Tamas K Lengyel wrote:
> It is wasteful to require separate hypercalls to enable sharing on both the
> parent and the client domain during VM forking. To speed things up we enable
> sharing on the first memop in case it wasn't already enabled.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap Tamas K Lengyel
@ 2020-01-22 15:35   ` Jan Beulich
  2020-01-22 17:08     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 15:35 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	xen-devel, Roger Pau Monné

On 21.01.2020 18:49, Tamas K Lengyel wrote:
> When plugging a hole in the target physmap don't use the access permission
> returned by __get_gfn_type_access as it can be non-sensical,

"can be" is too vague for my taste - it suggests there may also be cases
where a sensible value is returned, and hence it should be used. Could
you clarify this please? (The code change itself of course is simple and
mechanical enough to look okay.)

Jan

> leading to
> spurious vm_events being sent out for access violations at unexpected
> locations. Make use of p2m->default_access instead.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> ---
>  xen/arch/x86/mm/mem_sharing.c | 5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index eac8047c07..e3ddb63b4f 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1071,11 +1071,10 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
>      p2m_type_t smfn_type, cmfn_type;
>      struct gfn_info *gfn_info;
>      struct p2m_domain *p2m = p2m_get_hostp2m(cd);
> -    p2m_access_t a;
>      struct two_gfns tg;
>  
>      get_two_gfns(sd, _gfn(sgfn), &smfn_type, NULL, &smfn,
> -                 cd, _gfn(cgfn), &cmfn_type, &a, &cmfn, 0, &tg, lock);
> +                 cd, _gfn(cgfn), &cmfn_type, NULL, &cmfn, 0, &tg, lock);
>  
>      /* Get the source shared page, check and lock */
>      ret = XENMEM_SHARING_OP_S_HANDLE_INVALID;
> @@ -1110,7 +1109,7 @@ int add_to_physmap(struct domain *sd, unsigned long sgfn, shr_handle_t sh,
>      }
>  
>      ret = p2m_set_entry(p2m, _gfn(cgfn), smfn, PAGE_ORDER_4K,
> -                        p2m_ram_shared, a);
> +                        p2m_ram_shared, p2m->default_access);
>  
>      /* Tempted to turn this into an assert */
>      if ( ret )
> 


^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params
  2020-01-22 15:00   ` Jan Beulich
@ 2020-01-22 16:42     ` Tamas K Lengyel
  0 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-22 16:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Xen-devel, Andrew Cooper, Tamas K Lengyel, Wei Liu, Roger Pau Monné

On Wed, Jan 22, 2020 at 8:01 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > Currently the hvm parameters are only accessible via the HVMOP hypercalls. In
> > this patch we introduce a new function that can copy both the hvm context and
> > parameters directly into a target domain. No functional changes in existing
> > code.
> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>
> In reply to my v4 comments you said "I don't have any objections to the
> things you pointed out." Yet only one aspect was actually changed here.
> It also doesn't help that there's no brief summary of the changes done
> for v5. I guess I'm confused.

Indeed it seems I missed some of your previous requests. I was halfway
through making the modifications but simply forgot to do the rest
after I stepped away. I still don't have any objections to them
though, so will catch up on it in v6.

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-22 15:23   ` Jan Beulich
@ 2020-01-22 16:51     ` Tamas K Lengyel
  2020-01-22 16:55       ` Jan Beulich
  0 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-22 16:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > The owner domain of shared pages is dom_cow, use that for get_page
> > otherwise the function fails to return the correct page.
>
> I think this description needs improvement: The function does the
> special shared page dance in one place (on the "fast path")
> already. This wants mentioning, either because it was a mistake
> to have it just there, or because a new need has appeared to also
> have it on the "slow path".

It was a pre-existing error not to get the page from dom_cow for a
shared entry in the slow path. I only ran into it now because the
erroneous type_count check move in the previous version of the series
was resulting in all pages being fully deduplicated instead of mostly
being shared. Now that the pages are properly shared, running LibVMI on
the fork resulted in failures due to this bug.

> > --- a/xen/arch/x86/mm/p2m.c
> > +++ b/xen/arch/x86/mm/p2m.c
> > @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
> >      if ( p2m_is_ram(*t) && mfn_valid(mfn) )
> >      {
> >          page = mfn_to_page(mfn);
> > -        if ( !get_page(page, p2m->domain) )
> > +        if ( !get_page(page, p2m->domain) &&
> > +             /* Page could be shared */
> > +             (!dom_cow || !p2m_is_shared(*t) ||
> > +              !get_page(page, dom_cow)) )
>
> While there may be a reason why on the fast path two get_page()
> invocations are necessary, couldn't you get away with just
> one
>
>         if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
>                                                             : dom_cow) )
>
> at least here? It's also not really clear to me why here and
> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
> p2m_is_shared() return consistently "false" when !dom_cow ?

I simply copied the existing code from the slow_path as-is. IMHO it
would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
p2m->domain : dom_cow);  since we can't have any entries that are
shared when dom_cow is NULL so this is safe, no need for the extra
!dom_cow check. If you prefer I can make the change for both
locations.

Tamas

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-22 16:51     ` Tamas K Lengyel
@ 2020-01-22 16:55       ` Jan Beulich
  2020-01-23 15:32         ` George Dunlap
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-22 16:55 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On 22.01.2020 17:51, Tamas K Lengyel wrote:
> On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 21.01.2020 18:49, Tamas K Lengyel wrote:
>>> The owner domain of shared pages is dom_cow, use that for get_page
>>> otherwise the function fails to return the correct page.
>>
>> I think this description needs improvement: The function does the
>> special shared page dance in one place (on the "fast path")
>> already. This wants mentioning, either because it was a mistake
>> to have it just there, or because a new need has appeared to also
>> have it on the "slow path".
> 
> It was a pre-existing error not to get the page from dom_cow for a
> shared entry in the slow path. I only ran into it now because the
> erroneous type_count check move in the previous version of the series
> was resulting in all pages being fully deduplicated instead of mostly
> being shared. Now that the pages are properly shared, running LibVMI on
> the fork resulted in failures due to this bug.
> 
>>> --- a/xen/arch/x86/mm/p2m.c
>>> +++ b/xen/arch/x86/mm/p2m.c
>>> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
>>>      if ( p2m_is_ram(*t) && mfn_valid(mfn) )
>>>      {
>>>          page = mfn_to_page(mfn);
>>> -        if ( !get_page(page, p2m->domain) )
>>> +        if ( !get_page(page, p2m->domain) &&
>>> +             /* Page could be shared */
>>> +             (!dom_cow || !p2m_is_shared(*t) ||
>>> +              !get_page(page, dom_cow)) )
>>
>> While there may be a reason why on the fast path two get_page()
>> invocations are necessary, couldn't you get away with just
>> one
>>
>>         if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
>>                                                             : dom_cow) )
>>
>> at least here? It's also not really clear to me why here and
>> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
>> p2m_is_shared() return consistently "false" when !dom_cow ?
> 
> I simply copied the existing code from the slow_path as-is. IMHO it
> would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
> p2m->domain : dom_cow);  since we can't have any entries that are
> shared when dom_cow is NULL so this is safe, no need for the extra
> !dom_cow check. If you prefer I can make the change for both
> locations.

If the change is correct to make also in the other place, I'd
definitely prefer you doing so.

Jan

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk
  2020-01-22 15:30   ` Jan Beulich
@ 2020-01-22 16:55     ` Tamas K Lengyel
  0 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-22 16:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Wed, Jan 22, 2020 at 8:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > @@ -538,24 +535,26 @@ static int audit(void)
> >              d = get_domain_by_id(g->domain);
> >              if ( d == NULL )
> >              {
> > -                MEM_SHARING_DEBUG("Unknown dom: %hu, for PFN=%lx, MFN=%lx\n",
> > -                                  g->domain, g->gfn, mfn_x(mfn));
> > +                gdprintk(XENLOG_ERR,
> > +                         "Unknown dom: %pd, for PFN=%lx, MFN=%lx\n",
> > +                         d, g->gfn, mfn_x(mfn));
>
> With "if ( d == NULL )" around this you hardly mean to pass d to
> the function here. This is a case where you really need to stick
> to logging a raw number.

Indeed..

>
> >                  errors++;
> >                  continue;
> >              }
> >              o_mfn = get_gfn_query_unlocked(d, g->gfn, &t);
> >              if ( !mfn_eq(o_mfn, mfn) )
> >              {
> > -                MEM_SHARING_DEBUG("Incorrect P2M for d=%hu, PFN=%lx."
> > -                                  "Expecting MFN=%lx, got %lx\n",
> > -                                  g->domain, g->gfn, mfn_x(mfn), mfn_x(o_mfn));
> > +                gdprintk(XENLOG_ERR, "Incorrect P2M for d=%pd, PFN=%lx."
>
> Here and elsewhere may I recommend dropping d= (or dom= further
> down)?

SGTM

>
> > @@ -757,10 +756,10 @@ static int debug_mfn(mfn_t mfn)
> >          return -EINVAL;
> >      }
> >
> > -    MEM_SHARING_DEBUG(
> > -        "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner=%pd\n",
> > -        mfn_x(page_to_mfn(page)), page->count_info,
> > -        page->u.inuse.type_info, page_get_owner(page));
> > +    gdprintk(XENLOG_ERR,
> > +             "Debug page: MFN=%lx is ci=%lx, ti=%lx, owner_id=%d\n",
> > +             mfn_x(page_to_mfn(page)), page->count_info,
> > +             page->u.inuse.type_info, page_get_owner(page)->domain_id);
>
> As indicated before (I think), please prefer %pd and a struct domain
> pointer over passing ->domain_id (at least one more instance further
> down).

I thought I fixed them all but evidently some remained.

Thanks,
Tamas

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap
  2020-01-22 15:35   ` Jan Beulich
@ 2020-01-22 17:08     ` Tamas K Lengyel
  2020-01-23 16:44       ` George Dunlap
  0 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-22 17:08 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Wed, Jan 22, 2020 at 8:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> > When plugging a hole in the target physmap don't use the access permission
> > returned by __get_gfn_type_access as it can be non-sensical,
>
> "can be" is too vague for my taste - it suggests there may also be cases
> where a sensible value is returned, and hence it should be used. Could
> you clarify this please? (The code change itself of course is simple and
> mechanical enough to look okay.)

Well, I can only speak of what I observed. The case seems to be that
most of the time the function actually returns p2m_access_rwx (which
is sensible), but occasionally something else. I didn't investigate
where that value actually comes from, but when populating a physmap
like this only the default_access is sane.

Tamas

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-22 16:55       ` Jan Beulich
@ 2020-01-23 15:32         ` George Dunlap
  2020-01-23 15:48           ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 15:32 UTC (permalink / raw)
  To: Jan Beulich, Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On 1/22/20 4:55 PM, Jan Beulich wrote:
> On 22.01.2020 17:51, Tamas K Lengyel wrote:
>> On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 21.01.2020 18:49, Tamas K Lengyel wrote:
>>>> The owner domain of shared pages is dom_cow, use that for get_page
>>>> otherwise the function fails to return the correct page.
>>>
>>> I think this description needs improvement: The function does the
>>> special shared page dance in one place (on the "fast path")
>>> already. This wants mentioning, either because it was a mistake
>>> to have it just there, or because a new need has appeared to also
>>> have it on the "slow path".
>>
>> It was a pre-existing error not to get the page from dom_cow for a
>> shared entry in the slow path. I only ran into it now because the
>> erroneous type_count check move in the previous version of the series
>> was resulting in all pages being fully deduplicated instead of mostly
>> being shared. Now that the pages are properly shared, running LibVMI on
>> the fork resulted in failures due to this bug.
>>
>>>> --- a/xen/arch/x86/mm/p2m.c
>>>> +++ b/xen/arch/x86/mm/p2m.c
>>>> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
>>>>      if ( p2m_is_ram(*t) && mfn_valid(mfn) )
>>>>      {
>>>>          page = mfn_to_page(mfn);
>>>> -        if ( !get_page(page, p2m->domain) )
>>>> +        if ( !get_page(page, p2m->domain) &&
>>>> +             /* Page could be shared */
>>>> +             (!dom_cow || !p2m_is_shared(*t) ||
>>>> +              !get_page(page, dom_cow)) )
>>>
>>> While there may be a reason why on the fast path two get_page()
>>> invocations are necessary, couldn't you get away with just
>>> one
>>>
>>>         if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
>>>                                                             : dom_cow) )
>>>
>>> at least here? It's also not really clear to me why here and
>>> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
>>> p2m_is_shared() return consistently "false" when !dom_cow ?
>>
>> I simply copied the existing code from the slow_path as-is. IMHO it
>> would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
>> p2m->domain : dom_cow);  since we can't have any entries that are
>> shared when dom_cow is NULL so this is safe, no need for the extra
>> !dom_cow check. If you prefer I can make the change for both
>> locations.
> 
> If the change is correct to make also in the other place, I'd
> definitely prefer you doing so.

I agree with everything Jan said. :-)

Also, since this is a general bugfix, you might consider moving it to
the top of your series, so it can be checked in as soon as it's ready.

 -George

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries
  2020-01-23 15:32         ` George Dunlap
@ 2020-01-23 15:48           ` Tamas K Lengyel
  0 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-23 15:48 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Thu, Jan 23, 2020 at 8:32 AM George Dunlap <george.dunlap@citrix.com> wrote:
>
> On 1/22/20 4:55 PM, Jan Beulich wrote:
> > On 22.01.2020 17:51, Tamas K Lengyel wrote:
> >> On Wed, Jan 22, 2020 at 8:23 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>
> >>> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> >>>> The owner domain of shared pages is dom_cow, use that for get_page
> >>>> otherwise the function fails to return the correct page.
> >>>
> >>> I think this description needs improvement: The function does the
> >>> special shared page dance in one place (on the "fast path")
> >>> already. This wants mentioning, either because it was a mistake
> >>> to have it just there, or because a new need has appeared to also
> >>> have it on the "slow path".
> >>
> >> It was a pre-existing error not to get the page from dom_cow for a
> >> shared entry in the slow path. I only ran into it now because the
> >> erroneous type_count check move in the previous version of the series
> >> was resulting in all pages being fully deduplicated instead of mostly
> >> being shared. Now that the pages are properly shared, running LibVMI on
> >> the fork resulted in failures due to this bug.
> >>
> >>>> --- a/xen/arch/x86/mm/p2m.c
> >>>> +++ b/xen/arch/x86/mm/p2m.c
> >>>> @@ -594,7 +594,10 @@ struct page_info *p2m_get_page_from_gfn(
> >>>>      if ( p2m_is_ram(*t) && mfn_valid(mfn) )
> >>>>      {
> >>>>          page = mfn_to_page(mfn);
> >>>> -        if ( !get_page(page, p2m->domain) )
> >>>> +        if ( !get_page(page, p2m->domain) &&
> >>>> +             /* Page could be shared */
> >>>> +             (!dom_cow || !p2m_is_shared(*t) ||
> >>>> +              !get_page(page, dom_cow)) )
> >>>
> >>> While there may be a reason why on the fast path two get_page()
> >>> invocations are necessary, couldn't you get away with just
> >>> one
> >>>
> >>>         if ( !get_page(page, !dom_cow || !p2m_is_shared(*t) ? p2m->domain
> >>>                                                             : dom_cow) )
> >>>
> >>> at least here? It's also not really clear to me why here and
> >>> there we need "!dom_cow || !p2m_is_shared(*t)" - wouldn't
> >>> p2m_is_shared() return consistently "false" when !dom_cow ?
> >>
> >> I simply copied the existing code from the slow_path as-is. IMHO it
> >> would suffice to do a single get_page(page, !p2m_is_shared(*t) ?
> >> p2m->domain : dom_cow);  since we can't have any entries that are
> >> shared when dom_cow is NULL so this is safe, no need for the extra
> >> !dom_cow check. If you prefer I can make the change for both
> >> locations.
> >
> > If the change is correct to make also in the other place, I'd
> > definitely prefer you doing so.
>
> I agree with everything Jan said. :-)
>
> Also, since this is a general bugfix, you might consider moving it to
> the top of your series, so it can be checked in as soon as it's ready.

Sure - although it can be checked in before patch 1 & 2, it's not
dependent on them. In fact, all the patches in this series up to and
including Patch 14 could be checked in out-of-order.

Tamas

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally Tamas K Lengyel
@ 2020-01-23 15:50   ` George Dunlap
  0 siblings, 0 replies; 44+ messages in thread
From: George Dunlap @ 2020-01-23 15:50 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Roger Pau Monné

On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> During VM forking the client lock will already be taken.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page Tamas K Lengyel
@ 2020-01-23 15:53   ` George Dunlap
  0 siblings, 0 replies; 44+ messages in thread
From: George Dunlap @ 2020-01-23 15:53 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Roger Pau Monné

On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> All callers pass 0 in.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> Reviewed-by: Wei Liu <wl@xen.org>

Acked-by: George Dunlap <george.dunlap@citrix.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables Tamas K Lengyel
@ 2020-01-23 15:59   ` George Dunlap
  0 siblings, 0 replies; 44+ messages in thread
From: George Dunlap @ 2020-01-23 15:59 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Roger Pau Monné

On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> Create struct mem_sharing_domain under hvm_domain and move mem sharing
> variables into it from p2m_domain and hvm_domain.
> 
> Expose the mem_sharing_enabled macro to be used consistently across Xen.
> 
> Remove some duplicate calls to mem_sharing_enabled in mem_sharing.c
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Acked-by: George Dunlap <george.dunlap@citrix.com>

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool Tamas K Lengyel
@ 2020-01-23 16:14   ` George Dunlap
  2020-01-23 16:23     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 16:14 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Roger Pau Monné

On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
> However, the bitfield is not used for anything else, so just convert it to a
> bool instead.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

But is there a particular advantage to getting rid of the bitfield?

You're the maintainer, so it's your decision of course.  But if it were
me I'd leave it as a bitfield so that all the bitfield code is there if
it's needed in the future.  Flipping it to a bool, with the risk of
flipping it back to a bitfield later, seems like pointless churn to me.

(Although perhaps the reason will become evident by the time I get to
the end of the series.)

 -George


* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-23 16:14   ` George Dunlap
@ 2020-01-23 16:23     ` Tamas K Lengyel
  2020-01-23 16:32       ` George Dunlap
  0 siblings, 1 reply; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-23 16:23 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Thu, Jan 23, 2020 at 9:14 AM George Dunlap <george.dunlap@citrix.com> wrote:
>
> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> > MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
> > However, the bitfield is not used for anything else, so just convert it to a
> > bool instead.
> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> But is there a particular advantage to getting rid of the bitfield?
>
> You're the maintainer, so it's your decision of course.  But if it were
> me I'd leave it as a bitfield so that all the bitfield code is there if
> it's needed in the future.  Flipping it to a bool, with the risk of
> flipping it back to a bitfield later, seems like pointless churn to me.
>
> (Although perhaps the reason will become evident by the time I get to
> the end of the series.)

Given that it's been many years since this code was added with no
need for any extra bits, and with no expectation that this will
change any time soon, I wouldn't worry about that. True, there is very
little difference between keeping the code as-is vs converting it to
bool, but IMHO the bool makes the code easier to follow, without
leaving you wondering what non-existent situations warranted a
bitfield to begin with.
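
As a rough sketch - with hypothetical function names, not the actual
Xen signatures - the conversion being discussed looks like this:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins; the real code lives in mem_sharing.c. */
#define MEM_SHARING_DESTROY_GFN (1u << 0)

/* Before: 'flags' is a bitfield with only one bit ever defined. */
static int unshare_with_flags(unsigned long gfn, unsigned int flags)
{
    bool destroy = flags & MEM_SHARING_DESTROY_GFN;

    (void)gfn;
    return destroy ? 1 : 0;
}

/* After: the lone bit becomes an explicit bool parameter. */
static int unshare_with_bool(unsigned long gfn, bool destroy)
{
    (void)gfn;
    return destroy ? 1 : 0;
}
```

Callers then pass true/false directly, which makes the intent
visible at the call site instead of behind a flag constant.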

Tamas


* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-23 16:23     ` Tamas K Lengyel
@ 2020-01-23 16:32       ` George Dunlap
  2020-01-23 16:37         ` Jan Beulich
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 16:32 UTC (permalink / raw)
  To: Tamas K Lengyel
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On 1/23/20 4:23 PM, Tamas K Lengyel wrote:
> On Thu, Jan 23, 2020 at 9:14 AM George Dunlap <george.dunlap@citrix.com> wrote:
>>
>> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
>>> MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
>>> However, the bitfield is not used for anything else, so just convert it to a
>>> bool instead.
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
>> But is there a particular advantage to getting rid of the bitfield?
>>
>> You're the maintainer, so it's your decision of course.  But if it were
>> me I'd leave it as a bitfield so that all the bitfield code is there if
>> it's needed in the future.  Flipping it to a bool, with the risk of
>> flipping it back to a bitfield later, seems like pointless churn to me.
>>
>> (Although perhaps the reason will become evident by the time I get to
>> the end of the series.)
> 
> Provided its been many years since this code has been added with no
> need for any extra bits, and with no expectations that this would
> change any time soon, I wouldn't worry about that. True, there is very
> little difference between keeping the code as-is vs converting it to
> bool, but IMHO it makes the code easier to follow without you
> wondering what might be those non-existent situations that warranted
> it to be a bitmap to begin with.

It's definitely a judgement call, and I can see where you're coming
from.  Like I said, if it were me I'd leave it, but it's not. :-)   Just
wanted to raise the issue as I was going through.  I'd Ack it but you've
already got an R-b.

 -George


* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-23 16:32       ` George Dunlap
@ 2020-01-23 16:37         ` Jan Beulich
  2020-01-23 16:42           ` George Dunlap
  0 siblings, 1 reply; 44+ messages in thread
From: Jan Beulich @ 2020-01-23 16:37 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Xen-devel, Roger Pau Monné

On 23.01.2020 17:32, George Dunlap wrote:
> On 1/23/20 4:23 PM, Tamas K Lengyel wrote:
>> On Thu, Jan 23, 2020 at 9:14 AM George Dunlap <george.dunlap@citrix.com> wrote:
>>>
>>> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
>>>> MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
>>>> However, the bitfield is not used for anything else, so just convert it to a
>>>> bool instead.
>>>>
>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> But is there a particular advantage to getting rid of the bitfield?
>>>
>>> You're the maintainer, so it's your decision of course.  But if it were
>>> me I'd leave it as a bitfield so that all the bitfield code is there if
>>> it's needed in the future.  Flipping it to a bool, with the risk of
>>> flipping it back to a bitfield later, seems like pointless churn to me.
>>>
>>> (Although perhaps the reason will become evident by the time I get to
>>> the end of the series.)
>>
>> Provided its been many years since this code has been added with no
>> need for any extra bits, and with no expectations that this would
>> change any time soon, I wouldn't worry about that. True, there is very
>> little difference between keeping the code as-is vs converting it to
>> bool, but IMHO it makes the code easier to follow without you
>> wondering what might be those non-existent situations that warranted
>> it to be a bitmap to begin with.
> 
> It's definitely a judgement call, and I can see where you're coming
> from.  Like I said, if it were me I'd leave it, but it's not. :-)   Just
> wanted to raise the issue as I was going through.  I'd Ack it but you've
> already got an R-b.

Until your proposal gets accepted, isn't it that your ack is needed
despite the R-b? This is also why e.g. for patch 2 I didn't see a
point in sending any R-b, as the patch will need your ack anyway,
and it's so simple that "reviewed" would be an overstatement.

Jan


* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-23 16:37         ` Jan Beulich
@ 2020-01-23 16:42           ` George Dunlap
  2020-01-23 16:55             ` Jan Beulich
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 16:42 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Xen-devel, Roger Pau Monné

On 1/23/20 4:37 PM, Jan Beulich wrote:
> On 23.01.2020 17:32, George Dunlap wrote:
>> On 1/23/20 4:23 PM, Tamas K Lengyel wrote:
>>> On Thu, Jan 23, 2020 at 9:14 AM George Dunlap <george.dunlap@citrix.com> wrote:
>>>>
>>>> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
>>>>> MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
>>>>> However, the bitfield is not used for anything else, so just convert it to a
>>>>> bool instead.
>>>>>
>>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> But is there a particular advantage to getting rid of the bitfield?
>>>>
>>>> You're the maintainer, so it's your decision of course.  But if it were
>>>> me I'd leave it as a bitfield so that all the bitfield code is there if
>>>> it's needed in the future.  Flipping it to a bool, with the risk of
>>>> flipping it back to a bitfield later, seems like pointless churn to me.
>>>>
>>>> (Although perhaps the reason will become evident by the time I get to
>>>> the end of the series.)
>>>
>>> Provided its been many years since this code has been added with no
>>> need for any extra bits, and with no expectations that this would
>>> change any time soon, I wouldn't worry about that. True, there is very
>>> little difference between keeping the code as-is vs converting it to
>>> bool, but IMHO it makes the code easier to follow without you
>>> wondering what might be those non-existent situations that warranted
>>> it to be a bitmap to begin with.
>>
>> It's definitely a judgement call, and I can see where you're coming
>> from.  Like I said, if it were me I'd leave it, but it's not. :-)   Just
>> wanted to raise the issue as I was going through.  I'd Ack it but you've
>> already got an R-b.
> 
> Until your proposal gets accepted, isn't it that your ack is needed
> despite the R-b? This is also why e.g. for patch 2 I didn't see a
> point in sending any R-b, as the patch will need your ack anyway,
> and it's so simple that "reviewed" would be an overstatement.

I don't think I'm co-maintainer of this; and you're a less-specific
maintainer than Tamas, right?  Do you need my Ack?

I'm happy to go back and Ack all the mm/ ones you've given an R-b to up
until this point in the series; I just didn't think it was necessary.

I'll probably Ack patch 2 once I become convinced the change in that
patch is necessary (which I'm guessing might be when I get to 15/18).

 -George


* Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap
  2020-01-22 17:08     ` Tamas K Lengyel
@ 2020-01-23 16:44       ` George Dunlap
  2020-01-23 17:19         ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 16:44 UTC (permalink / raw)
  To: Tamas K Lengyel, Jan Beulich
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On 1/22/20 5:08 PM, Tamas K Lengyel wrote:
> On Wed, Jan 22, 2020 at 8:35 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 21.01.2020 18:49, Tamas K Lengyel wrote:
>>> When plugging a hole in the target physmap don't use the access permission
>>> returned by __get_gfn_type_access as it can be non-sensical,
>>
>> "can be" is too vague for my taste - it suggests there may also be cases
>> where a sensible value is returned, and hence it should be used. Could
>> you clarify this please? (The code change itself of course is simple and
>> mechanical enough to look okay.)
> 
> Well, I can only speak of what I observed. The case seems to be that
> most of the time the function actually returns p2m_access_rwx (which
> is sensible), but occasionally something else. I didn't investigate
> where that value actually comes from, but when populating a physmap
> like this only the default_access is sane.

It would be good to get to the bottom of this.  Is it possible that your
dom0 agent (or whatever it's called) is calling add_to_physmap() on gfns
that have already been populated?  Is that something you want to catch?

 -George


* Re: [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool
  2020-01-23 16:42           ` George Dunlap
@ 2020-01-23 16:55             ` Jan Beulich
  0 siblings, 0 replies; 44+ messages in thread
From: Jan Beulich @ 2020-01-23 16:55 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Tamas K Lengyel, Wei Liu, George Dunlap,
	Andrew Cooper, Xen-devel, Roger Pau Monné

On 23.01.2020 17:42, George Dunlap wrote:
> On 1/23/20 4:37 PM, Jan Beulich wrote:
>> On 23.01.2020 17:32, George Dunlap wrote:
>>> On 1/23/20 4:23 PM, Tamas K Lengyel wrote:
>>>> On Thu, Jan 23, 2020 at 9:14 AM George Dunlap <george.dunlap@citrix.com> wrote:
>>>>>
>>>>> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
>>>>>> MEM_SHARING_DESTROY_GFN is used on the 'flags' bitfield during unsharing.
>>>>>> However, the bitfield is not used for anything else, so just convert it to a
>>>>>> bool instead.
>>>>>>
>>>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> But is there a particular advantage to getting rid of the bitfield?
>>>>>
>>>>> You're the maintainer, so it's your decision of course.  But if it were
>>>>> me I'd leave it as a bitfield so that all the bitfield code is there if
>>>>> it's needed in the future.  Flipping it to a bool, with the risk of
>>>>> flipping it back to a bitfield later, seems like pointless churn to me.
>>>>>
>>>>> (Although perhaps the reason will become evident by the time I get to
>>>>> the end of the series.)
>>>>
>>>> Provided its been many years since this code has been added with no
>>>> need for any extra bits, and with no expectations that this would
>>>> change any time soon, I wouldn't worry about that. True, there is very
>>>> little difference between keeping the code as-is vs converting it to
>>>> bool, but IMHO it makes the code easier to follow without you
>>>> wondering what might be those non-existent situations that warranted
>>>> it to be a bitmap to begin with.
>>>
>>> It's definitely a judgement call, and I can see where you're coming
>>> from.  Like I said, if it were me I'd leave it, but it's not. :-)   Just
>>> wanted to raise the issue as I was going through.  I'd Ack it but you've
>>> already got an R-b.
>>
>> Until your proposal gets accepted, isn't it that your ack is needed
>> despite the R-b? This is also why e.g. for patch 2 I didn't see a
>> point in sending any R-b, as the patch will need your ack anyway,
>> and it's so simple that "reviewed" would be an overstatement.
> 
> I don't think I'm co-maintainer of this; and you're a less-specific
> maintainer than Tamas, right?  Do you need my Ack?

Well, I was under the impression that the maintainership nesting
was hierarchical, i.e. the next level up would become the relevant
one in cases like this one. But I'm of course happy to commit the
parts of this series which are ready, if just any less specific
maintainer's ack is sufficient. Probably tomorrow morning then.

Jan


* Re: [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap
  2020-01-23 16:44       ` George Dunlap
@ 2020-01-23 17:19         ` Tamas K Lengyel
  0 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-23 17:19 UTC (permalink / raw)
  To: George Dunlap
  Cc: Tamas K Lengyel, Wei Liu, George Dunlap, Andrew Cooper,
	Xen-devel, Roger Pau Monné

On Thu, Jan 23, 2020 at 9:44 AM George Dunlap <george.dunlap@citrix.com> wrote:
>
> On 1/22/20 5:08 PM, Tamas K Lengyel wrote:
> > On Wed, Jan 22, 2020 at 8:35 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 21.01.2020 18:49, Tamas K Lengyel wrote:
> >>> When plugging a hole in the target physmap don't use the access permission
> >>> returned by __get_gfn_type_access as it can be non-sensical,
> >>
> >> "can be" is too vague for my taste - it suggests there may also be cases
> >> where a sensible value is returned, and hence it should be used. Could
> >> you clarify this please? (The code change itself of course is simple and
> >> mechanical enough to look okay.)
> >
> > Well, I can only speak of what I observed. The case seems to be that
> > most of the time the function actually returns p2m_access_rwx (which
> > is sensible), but occasionally something else. I didn't investigate
> > where that value actually comes from, but when populating a physmap
> > like this only the default_access is sane.
>
> It would be good to get to the bottom of this.  Is it possible that your
> dom0 agent (or whatever it's called) is calling add_to_physmap() on gfns
> that have already been populated?  Is that something you want to catch?
>

OK, I went back and deciphered why sometimes I saw different permissions here.

In the context I ran into this issue there is no dom0 agent calling
add_to_physmap. We wind up in this path with a VM fork where the MMU
faulted. The fault handler is trying to determine whether the page
needs to be forked from its parent: populated with a shared entry if
it's an R/X access, or deduplicated if it's a write. This forking only
actually happens if the page type returned by ept_get_entry is a hole
type. When it's a hole type, ept_get_entry always returns p2m_access_n
as the access permission. Copying that access permission to the newly
populated entry is bad - that's what this patch fixes.

But this path also gets hit when the MMU faults for other reasons. In
those cases we will get permissions other than p2m_access_n since the
type is not a hole. But when it's not a hole, this function bails, as
that's a clear signal that the page doesn't need forking. So I was
seeing p2m_access_rwx permission for page accesses that triggered the
MMU fault for reasons other than mem_access - for example, when a
previously shared entry needs unsharing.

So there is no mystery after all, I was just printing my debug lines
with the mem_access permissions irrespective of the page type before
the path bails due to the type check.
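
The logic described above can be condensed into a small sketch
(hypothetical types and names, heavily simplified from the real p2m
code):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical enums standing in for Xen's p2m_access_t/p2m_type_t. */
typedef enum { p2m_access_n, p2m_access_rwx } p2m_access_t;
typedef enum { p2m_hole, p2m_ram_rw } p2m_type_t;

/* Simplified lookup: for a hole, the returned access is always
 * p2m_access_n, mirroring the behaviour described above. */
static void get_gfn_type_access(bool is_hole, p2m_type_t *t,
                                p2m_access_t *a)
{
    *t = is_hole ? p2m_hole : p2m_ram_rw;
    *a = is_hole ? p2m_access_n : p2m_access_rwx;
}

/* When plugging a hole, ignore the looked-up access and use the
 * domain's default_access instead - the fix in patch 14. Otherwise
 * the looked-up access is meaningful and can be kept. */
static p2m_access_t access_for_new_entry(bool is_hole,
                                         p2m_access_t default_access)
{
    p2m_type_t t;
    p2m_access_t a;

    get_gfn_type_access(is_hole, &t, &a);
    return (t == p2m_hole) ? default_access : a;
}
```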

Tamas


* Re: [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking
  2020-01-21 17:49 ` [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking Tamas K Lengyel
@ 2020-01-23 17:21   ` George Dunlap
  2020-01-23 17:30     ` Tamas K Lengyel
  0 siblings, 1 reply; 44+ messages in thread
From: George Dunlap @ 2020-01-23 17:21 UTC (permalink / raw)
  To: Tamas K Lengyel, xen-devel
  Cc: Stefano Stabellini, Julien Grall, Wei Liu, Konrad Rzeszutek Wilk,
	George Dunlap, Andrew Cooper, Ian Jackson, Tamas K Lengyel,
	Roger Pau Monné

On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> VM forking is the process of creating a domain with an empty memory space and a
> parent domain specified from which to populate the memory when necessary. For
> the new domain to be functional the VM state is copied over as part of the fork
> operation (HVM params, hap allocation, etc).
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Overall this looks really good.  Just a few questions.

> ---
>  xen/arch/x86/domain.c             |   9 ++
>  xen/arch/x86/hvm/hvm.c            |   2 +-
>  xen/arch/x86/mm/mem_sharing.c     | 220 ++++++++++++++++++++++++++++++
>  xen/arch/x86/mm/p2m.c             |  11 +-
>  xen/include/asm-x86/mem_sharing.h |  20 ++-
>  xen/include/public/memory.h       |   5 +
>  xen/include/xen/sched.h           |   2 +
>  7 files changed, 265 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 28fefa1f81..953abcc1fc 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -2198,6 +2198,15 @@ int domain_relinquish_resources(struct domain *d)
>              ret = relinquish_shared_pages(d);
>              if ( ret )
>                  return ret;
> +
> +            /*
> +             * If the domain is forked, decrement the parent's pause count.
> +             */
> +            if ( d->parent )
> +            {
> +                domain_unpause(d->parent);
> +                d->parent = NULL;

Did this want to be `if ( d->parent_paused )`?

> +static int bring_up_vcpus(struct domain *cd, struct cpupool *cpupool)
> +{
> +    int ret;
> +    unsigned int i;
> +
> +    if ( (ret = cpupool_move_domain(cd, cpupool)) )
> +        return ret;
> +
> +    for ( i = 0; i < cd->max_vcpus; i++ )
> +    {
> +        if ( cd->vcpu[i] )
> +            continue;
> +
> +        if ( !vcpu_create(cd, i) )
> +            return -EINVAL;

You're not copying the contents of the vcpu registers or anything here
-- is that something you're leaving to your dom0 agent?

> +static int mem_sharing_fork(struct domain *d, struct domain *cd)
> +{
> +    int rc = -EINVAL;
> +
> +    if ( !cd->controller_pause_count )
> +        return rc;
> +
> +    /*
> +     * We only want to pause the parent once, not each time this
> +     * operation is restarted due to preemption.
> +     */
> +    if ( !cd->parent_paused )
> +    {
> +        domain_pause(d);
> +        cd->parent_paused = true;
> +    }
> +
> +    cd->max_pages = d->max_pages;
> +    cd->max_vcpus = d->max_vcpus;
> +
> +    /* this is preemptible so it's the first to get done */
> +    if ( (rc = fork_hap_allocation(d, cd)) )
> +        goto done;
> +
> +    if ( (rc = bring_up_vcpus(cd, d->cpupool)) )
> +        goto done;
> +
> +    if ( (rc = hvm_copy_context_and_params(d, cd)) )
> +        goto done;
> +
> +    fork_tsc(d, cd);
> +
> +    cd->parent = d;

What happens if the parent dies?

It seems like we might want to do get_domain(parent) here, and
put_domain(parent) in domain_destroy.

I'll probably need to come back to this, at which point I may have more
questions.

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 44+ messages in thread

* Re: [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking
  2020-01-23 17:21   ` George Dunlap
@ 2020-01-23 17:30     ` Tamas K Lengyel
  0 siblings, 0 replies; 44+ messages in thread
From: Tamas K Lengyel @ 2020-01-23 17:30 UTC (permalink / raw)
  To: George Dunlap
  Cc: Julien Grall, Stefano Stabellini, Tamas K Lengyel, Wei Liu,
	Konrad Rzeszutek Wilk, George Dunlap, Andrew Cooper, Ian Jackson,
	Xen-devel, Roger Pau Monné

On Thu, Jan 23, 2020 at 10:21 AM George Dunlap <george.dunlap@citrix.com> wrote:
>
> On 1/21/20 5:49 PM, Tamas K Lengyel wrote:
> > VM forking is the process of creating a domain with an empty memory space and a
> > parent domain specified from which to populate the memory when necessary. For
> > the new domain to be functional the VM state is copied over as part of the fork
> > operation (HVM params, hap allocation, etc).
> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
>
> Overall this looks really good.  Just a few questions.
>
> > ---
> >  xen/arch/x86/domain.c             |   9 ++
> >  xen/arch/x86/hvm/hvm.c            |   2 +-
> >  xen/arch/x86/mm/mem_sharing.c     | 220 ++++++++++++++++++++++++++++++
> >  xen/arch/x86/mm/p2m.c             |  11 +-
> >  xen/include/asm-x86/mem_sharing.h |  20 ++-
> >  xen/include/public/memory.h       |   5 +
> >  xen/include/xen/sched.h           |   2 +
> >  7 files changed, 265 insertions(+), 4 deletions(-)
> >
> > diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> > index 28fefa1f81..953abcc1fc 100644
> > --- a/xen/arch/x86/domain.c
> > +++ b/xen/arch/x86/domain.c
> > @@ -2198,6 +2198,15 @@ int domain_relinquish_resources(struct domain *d)
> >              ret = relinquish_shared_pages(d);
> >              if ( ret )
> >                  return ret;
> > +
> > +            /*
> > +             * If the domain is forked, decrement the parent's pause count.
> > +             */
> > +            if ( d->parent )
> > +            {
> > +                domain_unpause(d->parent);
> > +                d->parent = NULL;
>
> Did this want to be `if ( d->parent_paused )`?

If the domain has the parent pointer set, it's guaranteed that the
parent is paused. It's paused before the parent pointer is set during
the fork hypercall.

>
> > +static int bring_up_vcpus(struct domain *cd, struct cpupool *cpupool)
> > +{
> > +    int ret;
> > +    unsigned int i;
> > +
> > +    if ( (ret = cpupool_move_domain(cd, cpupool)) )
> > +        return ret;
> > +
> > +    for ( i = 0; i < cd->max_vcpus; i++ )
> > +    {
> > +        if ( cd->vcpu[i] )
> > +            continue;
> > +
> > +        if ( !vcpu_create(cd, i) )
> > +            return -EINVAL;
>
> You're not copying the contents of the vcpu registers or anything here
> -- is that something you're leaving to your dom0 agent?

The registers are being copied as part of the HVM contexts.

>
> > +static int mem_sharing_fork(struct domain *d, struct domain *cd)
> > +{
> > +    int rc = -EINVAL;
> > +
> > +    if ( !cd->controller_pause_count )
> > +        return rc;
> > +
> > +    /*
> > +     * We only want to pause the parent once, not each time this
> > +     * operation is restarted due to preemption.
> > +     */
> > +    if ( !cd->parent_paused )
> > +    {
> > +        domain_pause(d);
> > +        cd->parent_paused = true;
> > +    }
> > +
> > +    cd->max_pages = d->max_pages;
> > +    cd->max_vcpus = d->max_vcpus;
> > +
> > +    /* this is preemptible so it's the first to get done */
> > +    if ( (rc = fork_hap_allocation(d, cd)) )
> > +        goto done;
> > +
> > +    if ( (rc = bring_up_vcpus(cd, d->cpupool)) )
> > +        goto done;
> > +
> > +    if ( (rc = hvm_copy_context_and_params(d, cd)) )
> > +        goto done;
> > +
> > +    fork_tsc(d, cd);
> > +
> > +    cd->parent = d;
>
> What happens if the parent dies?
>
> It seems like we might want to do get_domain(parent) here, and
> put_domain(parent) in domain_destroy.

If forks are still active when someone destroys the parent, then I
assume Xen will crash. Right now it's a requirement that the parent
remains in existence - and paused - while there are forks active.
We enforce the pause state, but making the parent undestroyable is
not implemented right now. We just trust that the user of this
experimental system won't do that.

But yes, doing the get_domain()/put_domain() dance would be an easy
way to do that. Will add that, and then we don't have to worry about
the parent getting pulled out from under our feet.
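
A minimal sketch of that refcounting dance, again with hypothetical
simplified structures rather than Xen's real domain refcounting
(which also has to handle the dying state):

```c
#include <assert.h>
#include <stddef.h>

struct domain {
    int refcnt;
    struct domain *parent;
};

static void get_domain(struct domain *d) { d->refcnt++; }

/* Returns nonzero when the last reference is dropped. */
static int put_domain(struct domain *d) { return --d->refcnt == 0; }

/* The fork takes a reference on the parent, so the parent cannot
 * be freed while any fork is alive. */
static void fork_set_parent(struct domain *cd, struct domain *parent)
{
    get_domain(parent);
    cd->parent = parent;
}

/* domain_destroy() of the fork drops the reference. */
static void fork_destroy(struct domain *cd)
{
    if ( cd->parent )
    {
        put_domain(cd->parent);
        cd->parent = NULL;
    }
}
```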

>
> I'll probably need to come back to this, at which point I may have more
> questions.

Thanks!
Tamas


end of thread, other threads:[~2020-01-23 17:32 UTC | newest]

Thread overview: 44+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-21 17:49 [Xen-devel] [PATCH v5 00/18] VM forking Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 01/18] x86/hvm: introduce hvm_copy_context_and_params Tamas K Lengyel
2020-01-22 15:00   ` Jan Beulich
2020-01-22 16:42     ` Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 02/18] xen/x86: Make hap_get_allocation accessible Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 03/18] x86/p2m: Allow p2m_get_page_from_gfn to return shared entries Tamas K Lengyel
2020-01-22 15:23   ` Jan Beulich
2020-01-22 16:51     ` Tamas K Lengyel
2020-01-22 16:55       ` Jan Beulich
2020-01-23 15:32         ` George Dunlap
2020-01-23 15:48           ` Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 04/18] x86/mem_sharing: make get_two_gfns take locks conditionally Tamas K Lengyel
2020-01-23 15:50   ` George Dunlap
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 05/18] x86/mem_sharing: drop flags from mem_sharing_unshare_page Tamas K Lengyel
2020-01-23 15:53   ` George Dunlap
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 06/18] x86/mem_sharing: don't try to unshare twice during page fault Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 07/18] x86/mem_sharing: define mem_sharing_domain to hold some scattered variables Tamas K Lengyel
2020-01-23 15:59   ` George Dunlap
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 08/18] x86/mem_sharing: Use INVALID_MFN and p2m_is_shared in relinquish_shared_pages Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 09/18] x86/mem_sharing: Make add_to_physmap static and shorten name Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 10/18] x86/mem_sharing: Convert MEM_SHARING_DESTROY_GFN to a bool Tamas K Lengyel
2020-01-23 16:14   ` George Dunlap
2020-01-23 16:23     ` Tamas K Lengyel
2020-01-23 16:32       ` George Dunlap
2020-01-23 16:37         ` Jan Beulich
2020-01-23 16:42           ` George Dunlap
2020-01-23 16:55             ` Jan Beulich
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 11/18] x86/mem_sharing: Replace MEM_SHARING_DEBUG with gdprintk Tamas K Lengyel
2020-01-22 15:30   ` Jan Beulich
2020-01-22 16:55     ` Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 12/18] x86/mem_sharing: Enable mem_sharing on first memop Tamas K Lengyel
2020-01-22 15:32   ` Jan Beulich
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 13/18] x86/mem_sharing: Skip xen heap pages in memshr nominate Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 14/18] x86/mem_sharing: use default_access in add_to_physmap Tamas K Lengyel
2020-01-22 15:35   ` Jan Beulich
2020-01-22 17:08     ` Tamas K Lengyel
2020-01-23 16:44       ` George Dunlap
2020-01-23 17:19         ` Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 15/18] xen/mem_sharing: VM forking Tamas K Lengyel
2020-01-23 17:21   ` George Dunlap
2020-01-23 17:30     ` Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 16/18] xen/mem_access: Use __get_gfn_type_access in set_mem_access Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 17/18] x86/mem_sharing: reset a fork Tamas K Lengyel
2020-01-21 17:49 ` [Xen-devel] [PATCH v5 18/18] xen/tools: VM forking toolstack side Tamas K Lengyel
