From: Paul Durrant <paul.durrant@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>, Tim Deegan <tim@xen.org>,
	Paul Durrant <paul.durrant@citrix.com>
Subject: [PATCH v12 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requested
Date: Tue, 17 Oct 2017 14:24:25 +0100
Message-ID: <20171017132432.24093-5-paul.durrant@citrix.com>
In-Reply-To: <20171017132432.24093-1-paul.durrant@citrix.com>

A subsequent patch will introduce a new scheme to allow an emulator to
map ioreq server pages directly from Xen rather than from the guest P2M.

This patch lays the groundwork for that change by deferring the mapping
of gfns until their values are requested by an emulator. To that end, the
pad field of the xen_dm_op_get_ioreq_server_info structure is re-purposed
into a flags field, and a new flag, XEN_DMOP_no_gfns, is defined. This
flag modifies the behaviour of XEN_DMOP_get_ioreq_server_info, allowing
the caller to avoid requesting the gfn values.
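
For example, an emulator that will map the pages by other means needs
only the buffered ioreq event channel. A minimal sketch (assuming an
open xendevicemodel handle 'dmod' and a valid 'domid' and server 'id'):
passing NULL for both gfn pointers causes libxendevicemodel to set
XEN_DMOP_no_gfns on the caller's behalf, and Xen then defers the mapping:

    evtchn_port_t bufioreq_port;
    int rc;

    /* NULL gfn pointers -> XEN_DMOP_no_gfns -> gfns are not mapped */
    rc = xendevicemodel_get_ioreq_server_info(dmod, domid, id,
                                              NULL, /* ioreq_gfn */
                                              NULL, /* bufioreq_gfn */
                                              &bufioreq_port);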

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Tim Deegan <tim@xen.org>

v8:
 - For safety make all of the pointers passed to
   hvm_get_ioreq_server_info() optional.
 - Shrink bufioreq_handling down to a uint8_t.

v3:
 - Updated in response to review comments from Wei and Roger.
 - Added a HANDLE_BUFIOREQ macro to make the code neater.
 - This patch no longer introduces a security vulnerability since there
   is now an explicit limit on the number of ioreq servers that may be
   created for any one domain.
---
 tools/libs/devicemodel/core.c                   |  8 +++++
 tools/libs/devicemodel/include/xendevicemodel.h |  6 ++--
 xen/arch/x86/hvm/dm.c                           |  9 +++--
 xen/arch/x86/hvm/ioreq.c                        | 47 ++++++++++++++-----------
 xen/include/asm-x86/hvm/domain.h                |  2 +-
 xen/include/public/hvm/dm_op.h                  | 32 ++++++++++-------
 6 files changed, 63 insertions(+), 41 deletions(-)

diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
index 0f2c1a791f..91c69d103b 100644
--- a/tools/libs/devicemodel/core.c
+++ b/tools/libs/devicemodel/core.c
@@ -188,6 +188,14 @@ int xendevicemodel_get_ioreq_server_info(
 
     data->id = id;
 
+    /*
+     * If the caller is not requesting gfn values then instruct the
+     * hypercall not to retrieve them as this may cause them to be
+     * mapped.
+     */
+    if (!ioreq_gfn && !bufioreq_gfn)
+        data->flags |= XEN_DMOP_no_gfns;
+
     rc = xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
     if (rc)
         return rc;
diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
index 13216db04a..d73a76da35 100644
--- a/tools/libs/devicemodel/include/xendevicemodel.h
+++ b/tools/libs/devicemodel/include/xendevicemodel.h
@@ -61,11 +61,11 @@ int xendevicemodel_create_ioreq_server(
  * @parm domid the domain id to be serviced
  * @parm id the IOREQ Server id.
  * @parm ioreq_gfn pointer to a xen_pfn_t to receive the synchronous ioreq
- *                  gfn
+ *                  gfn. (May be NULL if not required)
  * @parm bufioreq_gfn pointer to a xen_pfn_t to receive the buffered ioreq
- *                    gfn
+ *                    gfn. (May be NULL if not required)
  * @parm bufioreq_port pointer to a evtchn_port_t to receive the buffered
- *                     ioreq event channel
+ *                     ioreq event channel. (May be NULL if not required)
  * @return 0 on success, -1 on failure.
  */
 int xendevicemodel_get_ioreq_server_info(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 9cf53b551c..22fa5b51e3 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -416,16 +416,19 @@ static int dm_op(const struct dmop_args *op_args)
     {
         struct xen_dm_op_get_ioreq_server_info *data =
             &op.u.get_ioreq_server_info;
+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
 
         const_op = false;
 
         rc = -EINVAL;
-        if ( data->pad )
+        if ( data->flags & ~valid_flags )
             break;
 
         rc = hvm_get_ioreq_server_info(d, data->id,
-                                       &data->ioreq_gfn,
-                                       &data->bufioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : &data->ioreq_gfn,
+                                       (data->flags & XEN_DMOP_no_gfns) ?
+                                       NULL : &data->bufioreq_gfn,
                                        &data->bufioreq_port);
         break;
     }
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 64bb13cec9..f654e7796c 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -350,6 +350,9 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
     }
 }
 
+#define HANDLE_BUFIOREQ(s) \
+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
+
 static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
                                      struct vcpu *v)
 {
@@ -371,7 +374,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
 
     sv->ioreq_evtchn = rc;
 
-    if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
     {
         struct domain *d = s->domain;
 
@@ -422,7 +425,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
 
         list_del(&sv->list_entry);
 
-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
             free_xen_event_channel(v->domain, s->bufioreq_evtchn);
 
         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
@@ -449,7 +452,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
 
         list_del(&sv->list_entry);
 
-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
             free_xen_event_channel(v->domain, s->bufioreq_evtchn);
 
         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
@@ -460,14 +463,13 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
     spin_unlock(&s->lock);
 }
 
-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
-                                      bool handle_bufioreq)
+static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
 {
     int rc;
 
     rc = hvm_map_ioreq_gfn(s, false);
 
-    if ( !rc && handle_bufioreq )
+    if ( !rc && HANDLE_BUFIOREQ(s) )
         rc = hvm_map_ioreq_gfn(s, true);
 
     if ( rc )
@@ -597,13 +599,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( rc )
         return rc;
 
-    if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
-        s->bufioreq_atomic = true;
-
-    rc = hvm_ioreq_server_map_pages(
-             s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
-    if ( rc )
-        goto fail_map;
+    s->bufioreq_handling = bufioreq_handling;
 
     for_each_vcpu ( d, v )
     {
@@ -618,9 +614,6 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     hvm_ioreq_server_remove_all_vcpus(s);
     hvm_ioreq_server_unmap_pages(s);
 
- fail_map:
-    hvm_ioreq_server_free_rangesets(s);
-
     return rc;
 }
 
@@ -757,12 +750,23 @@ int hvm_get_ioreq_server_info(struct domain *d, ioservid_t id,
 
     ASSERT(!IS_DEFAULT(s));
 
-    *ioreq_gfn = gfn_x(s->ioreq.gfn);
+    if ( ioreq_gfn || bufioreq_gfn )
+    {
+        rc = hvm_ioreq_server_map_pages(s);
+        if ( rc )
+            goto out;
+    }
 
-    if ( s->bufioreq.va != NULL )
+    if ( ioreq_gfn )
+        *ioreq_gfn = gfn_x(s->ioreq.gfn);
+
+    if ( HANDLE_BUFIOREQ(s) )
     {
-        *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
-        *bufioreq_port = s->bufioreq_evtchn;
+        if ( bufioreq_gfn )
+            *bufioreq_gfn = gfn_x(s->bufioreq.gfn);
+
+        if ( bufioreq_port )
+            *bufioreq_port = s->bufioreq_evtchn;
     }
 
     rc = 0;
@@ -1264,7 +1268,8 @@ static int hvm_send_buffered_ioreq(struct hvm_ioreq_server *s, ioreq_t *p)
     pg->ptrs.write_pointer += qw ? 2 : 1;
 
     /* Canonicalize read/write pointers to prevent their overflow. */
-    while ( s->bufioreq_atomic && qw++ < IOREQ_BUFFER_SLOT_NUM &&
+    while ( (s->bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC) &&
+            qw++ < IOREQ_BUFFER_SLOT_NUM &&
             pg->ptrs.read_pointer >= IOREQ_BUFFER_SLOT_NUM )
     {
         union bufioreq_pointers old = pg->ptrs, new;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 3bd9c5d7c0..8b798ee4e9 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -69,7 +69,7 @@ struct hvm_ioreq_server {
     evtchn_port_t          bufioreq_evtchn;
     struct rangeset        *range[NR_IO_RANGE_TYPES];
     bool                   enabled;
-    bool                   bufioreq_atomic;
+    uint8_t                bufioreq_handling;
 };
 
 /*
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 6bbab5fca3..9677bd74e7 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -79,28 +79,34 @@ struct xen_dm_op_create_ioreq_server {
  * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
  *                                 access IOREQ Server <id>.
  *
- * The emulator needs to map the synchronous ioreq structures and buffered
- * ioreq ring (if it exists) that Xen uses to request emulation. These are
- * hosted in the target domain's gmfns <ioreq_gfn> and <bufioreq_gfn>
- * respectively. In addition, if the IOREQ Server is handling buffered
- * emulation requests, the emulator needs to bind to event channel
- * <bufioreq_port> to listen for them. (The event channels used for
- * synchronous emulation requests are specified in the per-CPU ioreq
- * structures in <ioreq_gfn>).
- * If the IOREQ Server is not handling buffered emulation requests then the
- * values handed back in <bufioreq_gfn> and <bufioreq_port> will both be 0.
+ * If the IOREQ Server is handling buffered emulation requests, the
+ * emulator needs to bind to event channel <bufioreq_port> to listen for
+ * them. (The event channels used for synchronous emulation requests are
+ * specified in the per-CPU ioreq structures).
+ * In addition, if the XENMEM_acquire_resource memory op cannot be used,
+ * the emulator will need to map the synchronous ioreq structures and
+ * buffered ioreq ring (if it exists) from guest memory. If <flags> does
+ * not contain XEN_DMOP_no_gfns then these pages will be made available and
+ * the frame numbers passed back in gfns <ioreq_gfn> and <bufioreq_gfn>
+ * respectively. (If the IOREQ Server is not handling buffered emulation
+ * requests, only <ioreq_gfn> will be valid.)
  */
 #define XEN_DMOP_get_ioreq_server_info 2
 
 struct xen_dm_op_get_ioreq_server_info {
     /* IN - server id */
     ioservid_t id;
-    uint16_t pad;
+    /* IN - flags */
+    uint16_t flags;
+
+#define _XEN_DMOP_no_gfns 0
+#define XEN_DMOP_no_gfns (1u << _XEN_DMOP_no_gfns)
+
     /* OUT - buffered ioreq port */
     evtchn_port_t bufioreq_port;
-    /* OUT - sync ioreq gfn */
+    /* OUT - sync ioreq gfn (see block comment above) */
     uint64_aligned_t ioreq_gfn;
-    /* OUT - buffered ioreq gfn */
+    /* OUT - buffered ioreq gfn (see block comment above) */
     uint64_aligned_t bufioreq_gfn;
 };
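
A caller issuing the dm_op directly sets the new flag in <flags>; per the
dm.c hunk above, any other flag bit is rejected with -EINVAL. A minimal
sketch (assuming the standard struct xen_dm_op wrapper around the union
member used in dm.c, and a valid server 'id'):

    struct xen_dm_op op = {
        .op = XEN_DMOP_get_ioreq_server_info,
        .u.get_ioreq_server_info = {
            .id = id,
            .flags = XEN_DMOP_no_gfns, /* do not map or return the gfns */
        },
    };
    /*
     * After a successful dm_op, only the event channel is meaningful:
     * op.u.get_ioreq_server_info.bufioreq_port
     */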
 
-- 
2.11.0

