From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Julien Grall <julien.grall@arm.com>,
	Paul Durrant <paul.durrant@citrix.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v15 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources
Date: Thu, 14 Dec 2017 17:41:38 +0000
Message-ID: <20171214174144.27852-6-paul.durrant@citrix.com>
In-Reply-To: <20171214174144.27852-1-paul.durrant@citrix.com>

Certain memory resources associated with a guest are not necessarily
present in the guest P2M.

This patch adds the boilerplate for a new memory op to allow such a resource
to be priv-mapped directly by either a PV or HVM tools domain.

NOTE: Whilst the new op is not intrinsically specific to the x86 architecture,
      I have no means to test it on an ARM platform and so cannot verify
      that it functions correctly.
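
For illustration only (not part of the patch itself): a minimal sketch of
how a tools domain might drive the new op, assuming the usual
HYPERVISOR_memory_op(), set_xen_guest_handle() and ARRAY_SIZE() helpers
are available to the caller. No resource types are defined by this patch
(they arrive later in the series), so the second call below would
currently return -EOPNOTSUPP; the .type value is a placeholder.

    static int acquire_resource_example(domid_t domid)
    {
        xen_pfn_t frames[2];
        xen_mem_acquire_resource_t xmar = {
            .domid = domid,     /* remote domain owning the resource */
            .type = 0,          /* placeholder: types come in later patches */
            .id = 0,
            .frame = 0,         /* index of the first frame wanted */
            .nr_frames = 0,     /* 0 + NULL frame_list queries the limit */
        };
        int rc;

        /* Step 1: query the implementation limit on nr_frames. */
        set_xen_guest_handle(xmar.frame_list, NULL);
        rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xmar);
        if ( rc )
            return rc;

        /*
         * Step 2: acquire up to that many frames. For an HVM (translated)
         * caller, frames[] must already hold the GFNs at which the
         * resource should appear; for a PV caller it is purely an output.
         */
        if ( xmar.nr_frames > ARRAY_SIZE(frames) )
            xmar.nr_frames = ARRAY_SIZE(frames);
        set_xen_guest_handle(xmar.frame_list, frames);
        rc = HYPERVISOR_memory_op(XENMEM_acquire_resource, &xmar);

        /* On success a PV caller finds the resource MFNs in frames[]. */
        return rc;
    }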

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Julien Grall <julien.grall@arm.com>

v14:
 - Addressed more comments from Jan.

v13:
 - Use xen_pfn_t for mfn_list.
 - Addressed further comments from Jan and Julien.

v12:
 - Addressed more comments from Jan.
 - Removed #ifdef CONFIG_X86 from common code and instead introduced a
   stub set_foreign_p2m_entry() in asm-arm/p2m.h returning -EOPNOTSUPP.
 - Restricted mechanism for querying implementation limit on nr_frames
   and simplified compat code.

v11:
 - Addressed more comments from Jan.

v9:
 - Addressed more comments from Jan.

v8:
 - Move the code into common as requested by Jan.
 - Make the gmfn_list handle a 64-bit type to avoid limiting the MFN
   range for a 32-bit tools domain.
 - Add missing pad.
 - Add compat code.
 - Make this patch deal purely with boilerplate.
 - Drop George's A-b and Wei's R-b because the changes are non-trivial,
   and update Cc list now the boilerplate is common.

v5:
 - Switched __copy_to/from_guest_offset() to copy_to/from_guest_offset().
---
 tools/flask/policy/modules/xen.if   |  4 +-
 xen/arch/x86/mm/p2m.c               |  3 +-
 xen/common/compat/memory.c          | 95 +++++++++++++++++++++++++++++++++++++
 xen/common/memory.c                 | 89 ++++++++++++++++++++++++++++++++++
 xen/include/asm-arm/p2m.h           | 10 ++++
 xen/include/asm-x86/p2m.h           |  3 ++
 xen/include/public/memory.h         | 43 ++++++++++++++++-
 xen/include/xlat.lst                |  1 +
 xen/include/xsm/dummy.h             |  6 +++
 xen/include/xsm/xsm.h               |  6 +++
 xen/xsm/dummy.c                     |  1 +
 xen/xsm/flask/hooks.c               |  6 +++
 xen/xsm/flask/policy/access_vectors |  2 +
 13 files changed, 265 insertions(+), 4 deletions(-)
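
A further note for reviewers, since this patch is deliberately boilerplate
only: the (currently empty) type switch in acquire_resource() in
xen/common/memory.c below is the intended extension point. A follow-up
patch in the series is expected to add per-type cases roughly along the
lines of the sketch below; the case label and helper name here are
placeholders, not code from this series.

    switch ( xmar.type )
    {
    case XENMEM_resource_foo:   /* placeholder resource type */
        /*
         * A type-specific helper fills mfn_list[] with up to
         * xmar.nr_frames MFNs of the resource, starting at index
         * xmar.frame.
         */
        rc = acquire_foo_frames(d, xmar.id, xmar.frame, xmar.nr_frames,
                                mfn_list);
        break;

    default:
        rc = -EOPNOTSUPP;
        break;
    }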

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 55437496f6..07cba8a15d 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -52,7 +52,8 @@ define(`create_domain_common', `
 			settime setdomainhandle getvcpucontext set_misc_info };
 	allow $1 $2:domain2 { set_cpuid settsc setscheduler setclaim
 			set_max_evtchn set_vnumainfo get_vnumainfo cacheflush
-			psr_cmt_op psr_cat_op soft_reset set_gnttab_limits };
+			psr_cmt_op psr_cat_op soft_reset set_gnttab_limits
+			resource_map };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -152,6 +153,7 @@ define(`device_model', `
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
 	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr dm };
+	allow $1 $2_target:domain2 resource_map;
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index c72a3cdebb..71bb9b4f93 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -1132,8 +1132,7 @@ static int set_typed_p2m_entry(struct domain *d, unsigned long gfn_l,
 }
 
 /* Set foreign mfn in the given guest's p2m table. */
-static int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
-                                 mfn_t mfn)
+int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn)
 {
     return set_typed_p2m_entry(d, gfn, mfn, PAGE_ORDER_4K, p2m_map_foreign,
                                p2m_get_hostp2m(d)->default_access);
diff --git a/xen/common/compat/memory.c b/xen/common/compat/memory.c
index 35bb259808..9a7cb1a71b 100644
--- a/xen/common/compat/memory.c
+++ b/xen/common/compat/memory.c
@@ -71,6 +71,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
             struct xen_remove_from_physmap *xrfp;
             struct xen_vnuma_topology_info *vnuma;
             struct xen_mem_access_op *mao;
+            struct xen_mem_acquire_resource *mar;
         } nat;
         union {
             struct compat_memory_reservation rsrv;
@@ -79,6 +80,7 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
             struct compat_add_to_physmap_batch atpb;
             struct compat_vnuma_topology_info vnuma;
             struct compat_mem_access_op mao;
+            struct compat_mem_acquire_resource mar;
         } cmp;
 
         set_xen_guest_handle(nat.hnd, COMPAT_ARG_XLAT_VIRT_BASE);
@@ -395,6 +397,57 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
         }
 #endif
 
+        case XENMEM_acquire_resource:
+        {
+            xen_pfn_t *xen_frame_list;
+            unsigned int max_nr_frames;
+
+            if ( copy_from_guest(&cmp.mar, compat, 1) )
+                return -EFAULT;
+
+            /*
+             * The number of frames handled is currently limited to a
+             * small number by the underlying implementation, so the
+             * scratch space should be sufficient for bouncing the
+             * frame addresses.
+             */
+            max_nr_frames = (COMPAT_ARG_XLAT_SIZE - sizeof(*nat.mar)) /
+                sizeof(*xen_frame_list);
+
+            if ( cmp.mar.nr_frames > max_nr_frames )
+                return -E2BIG;
+
+            if ( compat_handle_is_null(cmp.mar.frame_list) )
+                xen_frame_list = NULL;
+            else
+            {
+                xen_frame_list = (xen_pfn_t *)(nat.mar + 1);
+
+                if ( !compat_handle_okay(cmp.mar.frame_list,
+                                         cmp.mar.nr_frames) )
+                    return -EFAULT;
+
+                for ( i = 0; i < cmp.mar.nr_frames; i++ )
+                {
+                    compat_pfn_t frame;
+
+                    if ( __copy_from_compat_offset(
+                             &frame, cmp.mar.frame_list, i, 1) )
+                        return -EFAULT;
+
+                    xen_frame_list[i] = frame;
+                }
+            }
+
+#define XLAT_mem_acquire_resource_HNDL_frame_list(_d_, _s_) \
+            set_xen_guest_handle((_d_)->frame_list, xen_frame_list)
+
+            XLAT_mem_acquire_resource(nat.mar, &cmp.mar);
+
+#undef XLAT_mem_acquire_resource_HNDL_frame_list
+
+            break;
+        }
         default:
             return compat_arch_memory_op(cmd, compat);
         }
@@ -535,6 +588,48 @@ int compat_memory_op(unsigned int cmd, XEN_GUEST_HANDLE_PARAM(void) compat)
                 rc = -EFAULT;
             break;
 
+        case XENMEM_acquire_resource:
+        {
+            const xen_pfn_t *xen_frame_list = (xen_pfn_t *)(nat.mar + 1);
+            compat_pfn_t *compat_frame_list = (compat_pfn_t *)(nat.mar + 1);
+
+            if ( compat_handle_is_null(cmp.mar.frame_list) )
+            {
+                DEFINE_XEN_GUEST_HANDLE(compat_mem_acquire_resource_t);
+
+                if ( __copy_field_to_guest(
+                         guest_handle_cast(compat,
+                                           compat_mem_acquire_resource_t),
+                         &cmp.mar, nr_frames) )
+                    return -EFAULT;
+            }
+            else
+            {
+                /*
+                 * NOTE: the smaller compat array overwrites the native
+                 *       array.
+                 */
+                BUILD_BUG_ON(sizeof(compat_pfn_t) > sizeof(xen_pfn_t));
+
+                for ( i = 0; i < cmp.mar.nr_frames; i++ )
+                {
+                    compat_pfn_t frame = xen_frame_list[i];
+
+                    if ( frame != xen_frame_list[i] )
+                        return -ERANGE;
+
+                    compat_frame_list[i] = frame;
+                }
+
+                if ( __copy_to_compat_offset(cmp.mar.frame_list, 0,
+                                             compat_frame_list,
+                                             cmp.mar.nr_frames) )
+                    return -EFAULT;
+            }
+
+            break;
+        }
+
         default:
             domain_crash(current->domain);
             split = 0;
diff --git a/xen/common/memory.c b/xen/common/memory.c
index a6ba33fdcb..6c385a2328 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -970,6 +970,90 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
     return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
 }
 
+static int acquire_resource(
+    XEN_GUEST_HANDLE_PARAM(xen_mem_acquire_resource_t) arg)
+{
+    struct domain *d, *currd = current->domain;
+    xen_mem_acquire_resource_t xmar;
+    /*
+     * The mfn_list and gfn_list (below) arrays are ok on stack for the
+     * moment since they are small, but if they need to grow in future
+     * use-cases then per-CPU arrays or heap allocations may be required.
+     */
+    xen_pfn_t mfn_list[2];
+    int rc;
+
+    if ( copy_from_guest(&xmar, arg, 1) )
+        return -EFAULT;
+
+    if ( xmar.pad != 0 )
+        return -EINVAL;
+
+    if ( guest_handle_is_null(xmar.frame_list) )
+    {
+        if ( xmar.nr_frames )
+            return -EINVAL;
+
+        xmar.nr_frames = ARRAY_SIZE(mfn_list);
+
+        if ( __copy_field_to_guest(arg, &xmar, nr_frames) )
+            return -EFAULT;
+
+        return 0;
+    }
+
+    if ( xmar.nr_frames > ARRAY_SIZE(mfn_list) )
+        return -E2BIG;
+
+    rc = rcu_lock_remote_domain_by_id(xmar.domid, &d);
+    if ( rc )
+        return rc;
+
+    rc = xsm_domain_resource_map(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    switch ( xmar.type )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( rc )
+        goto out;
+
+    if ( !paging_mode_translate(currd) )
+    {
+        if ( copy_to_guest(xmar.frame_list, mfn_list, xmar.nr_frames) )
+            rc = -EFAULT;
+    }
+    else
+    {
+        xen_pfn_t gfn_list[ARRAY_SIZE(mfn_list)];
+        unsigned int i;
+
+        if ( copy_from_guest(gfn_list, xmar.frame_list, xmar.nr_frames) )
+            rc = -EFAULT;
+
+        for ( i = 0; !rc && i < xmar.nr_frames; i++ )
+        {
+            rc = set_foreign_p2m_entry(currd, gfn_list[i],
+                                       _mfn(mfn_list[i]));
+            if ( rc )
+                /*
+                 * Make sure rc is -EIO for any iteration other than
+                 * the first.
+                 */
+                rc = i ? -EIO : rc;
+        }
+    }
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     struct domain *d, *curr_d = current->domain;
@@ -1408,6 +1492,11 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
 #endif
 
+    case XENMEM_acquire_resource:
+        rc = acquire_resource(
+            guest_handle_cast(arg, xen_mem_acquire_resource_t));
+        break;
+
     default:
         rc = arch_memory_op(cmd, arg);
         break;
diff --git a/xen/include/asm-arm/p2m.h b/xen/include/asm-arm/p2m.h
index a0abc84ed8..0fee0f7738 100644
--- a/xen/include/asm-arm/p2m.h
+++ b/xen/include/asm-arm/p2m.h
@@ -348,6 +348,16 @@ static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
     return gfn_add(gfn, 1UL << order);
 }
 
+static inline int set_foreign_p2m_entry(struct domain *d, unsigned long gfn,
+                                        mfn_t mfn)
+{
+    /*
+     * NOTE: If this is implemented then proper reference counting of
+     *       foreign entries will need to be implemented.
+     */
+    return -EOPNOTSUPP;
+}
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 17b1d0c8d3..44f7ec088c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -620,6 +620,9 @@ void p2m_memory_type_changed(struct domain *d);
 int p2m_is_logdirty_range(struct p2m_domain *, unsigned long start,
                           unsigned long end);
 
+/* Set foreign entry in the p2m table (for priv-mapping) */
+int set_foreign_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn);
+
 /* Set mmio addresses in the p2m table (for pass-through) */
 int set_mmio_p2m_entry(struct domain *d, unsigned long gfn, mfn_t mfn,
                        unsigned int order, p2m_access_t access);
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index 29386df98b..83e60b6603 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -599,6 +599,47 @@ struct xen_reserved_device_memory_map {
 typedef struct xen_reserved_device_memory_map xen_reserved_device_memory_map_t;
 DEFINE_XEN_GUEST_HANDLE(xen_reserved_device_memory_map_t);
 
+/*
+ * Get the pages for a particular guest resource, so that they can be
+ * mapped directly by a tools domain.
+ */
+#define XENMEM_acquire_resource 28
+struct xen_mem_acquire_resource {
+    /* IN - the domain whose resource is to be mapped */
+    domid_t domid;
+    /* IN - the type of resource */
+    uint16_t type;
+    /*
+     * IN - a type-specific resource identifier, which must be zero
+     *      unless stated otherwise.
+     */
+    uint32_t id;
+    /* IN/OUT - As an IN parameter, the number of frames of the resource
+     *          to be mapped. However, if the specified value is 0 and
+     *          frame_list is NULL then this field will be set to the
+     *          maximum value supported by the implementation on return.
+     */
+    uint32_t nr_frames;
+    uint32_t pad;
+    /* IN - the index of the initial frame to be mapped. This parameter
+     *      is ignored if nr_frames is 0.
+     */
+    uint64_aligned_t frame;
+    /* IN/OUT - If the tools domain is PV then, upon return, frame_list
+     *          will be populated with the MFNs of the resource.
+     *          If the tools domain is HVM then it is expected that, on
+     *          entry, frame_list will be populated with a list of GFNs
+     *          that will be mapped to the MFNs of the resource.
+     *          If -EIO is returned then the frame_list has only been
+     *          partially mapped and it is up to the caller to unmap all
+     *          the GFNs.
+     *          This parameter may be NULL if nr_frames is 0.
+     */
+    XEN_GUEST_HANDLE(xen_pfn_t) frame_list;
+};
+typedef struct xen_mem_acquire_resource xen_mem_acquire_resource_t;
+DEFINE_XEN_GUEST_HANDLE(xen_mem_acquire_resource_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 /*
@@ -650,7 +691,7 @@ struct xen_vnuma_topology_info {
 typedef struct xen_vnuma_topology_info xen_vnuma_topology_info_t;
 DEFINE_XEN_GUEST_HANDLE(xen_vnuma_topology_info_t);
 
-/* Next available subop number is 28 */
+/* Next available subop number is 29 */
 
 #endif /* __XEN_PUBLIC_MEMORY_H__ */
 
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 4346cbedcf..5806ef0ad8 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -84,6 +84,7 @@
 !	memory_map			memory.h
 !	memory_reservation		memory.h
 !	mem_access_op			memory.h
+!	mem_acquire_resource		memory.h
 !	pod_target			memory.h
 !	remove_from_physmap		memory.h
 !	reserved_device_memory_map	memory.h
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index d6ddadcafd..d28b8eac09 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -726,3 +726,9 @@ static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
         return xsm_default_action(XSM_PRIV, current->domain, NULL);
     }
 }
+
+static XSM_INLINE int xsm_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 1406f752b6..6701524150 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -180,6 +180,7 @@ struct xsm_operations {
     int (*dm_op) (struct domain *d);
 #endif
     int (*xen_version) (uint32_t cmd);
+    int (*domain_resource_map) (struct domain *d);
 };
 
 #ifdef CONFIG_XSM
@@ -692,6 +693,11 @@ static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
     return xsm_ops->xen_version(op);
 }
 
+static inline int xsm_domain_resource_map(xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->domain_resource_map(d);
+}
+
 #endif /* XSM_NO_WRAPPERS */
 
 #ifdef CONFIG_MULTIBOOT
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 479b103614..6e751199ee 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -157,4 +157,5 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, dm_op);
 #endif
     set_to_dummy_if_null(ops, xen_version);
+    set_to_dummy_if_null(ops, domain_resource_map);
 }
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 04f453bfc5..e560d4c611 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1718,6 +1718,11 @@ static int flask_xen_version (uint32_t op)
     }
 }
 
+static int flask_domain_resource_map(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__RESOURCE_MAP);
+}
+
 long do_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 int compat_flask_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) u_flask_op);
 
@@ -1851,6 +1856,7 @@ static struct xsm_operations flask_ops = {
     .dm_op = flask_dm_op,
 #endif
     .xen_version = flask_xen_version,
+    .domain_resource_map = flask_domain_resource_map,
 };
 
 void __init flask_init(const void *policy_buffer, size_t policy_size)
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 3a2d863b8f..341ade1f7d 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -250,6 +250,8 @@ class domain2
     psr_cat_op
 # XEN_DOMCTL_set_gnttab_limits
     set_gnttab_limits
+# XENMEM_acquire_resource
+    resource_map
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.11.0


Thread overview: 15+ messages
2017-12-14 17:41 [PATCH v15 00/11] x86: guest resource mapping Paul Durrant
2017-12-14 17:41 ` [PATCH v15 01/11] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list Paul Durrant
2017-12-14 17:41 ` [PATCH v15 02/11] x86/hvm/ioreq: simplify code and use consistent naming Paul Durrant
2017-12-14 17:41 ` [PATCH v15 03/11] x86/hvm/ioreq: use gfn_t in struct hvm_ioreq_page Paul Durrant
2017-12-14 17:41 ` [PATCH v15 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requsted Paul Durrant
2017-12-15  0:50   ` Chao Gao
2017-12-15  9:03     ` Paul Durrant
2017-12-14 17:41 ` [PATCH v15 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources Paul Durrant [this message]
2017-12-15  8:06   ` [PATCH v15 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources Jan Beulich
2017-12-14 17:41 ` [PATCH v15 06/11] x86/hvm/ioreq: add a new mappable resource type Paul Durrant
2017-12-14 17:41 ` [PATCH v15 07/11] x86/mm: add an extra command to HYPERVISOR_mmu_update Paul Durrant
2017-12-14 17:41 ` [PATCH v15 08/11] tools/libxenforeignmemory: add support for resource mapping Paul Durrant
2017-12-14 17:41 ` [PATCH v15 09/11] tools/libxenforeignmemory: reduce xenforeignmemory_restrict code footprint Paul Durrant
2017-12-14 17:41 ` [PATCH v15 10/11] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
2017-12-14 17:41 ` [PATCH v15 11/11] tools/libxenctrl: use new xenforeignmemory API to seed grant table Paul Durrant
