From: Paul Durrant <Paul.Durrant@citrix.com>
To: 'Chao Gao' <chao.gao@intel.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	George Dunlap <George.Dunlap@citrix.com>,
	Ian Jackson <Ian.Jackson@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v14 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requested
Date: Thu, 7 Dec 2017 08:38:49 +0000	[thread overview]
Message-ID: <97fe0dfc0a83407bbc56306c5d410892@AMSPEX02CL03.citrite.net> (raw)
In-Reply-To: <20171206214948.GA47014@op-computing>

> -----Original Message-----
> From: Chao Gao [mailto:chao.gao@intel.com]
> Sent: 06 December 2017 21:50
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini
> <sstabellini@kernel.org>; George Dunlap <George.Dunlap@citrix.com>;
> Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [Xen-devel] [PATCH v14 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requested
> 
> On Tue, Nov 28, 2017 at 03:08:46PM +0000, Paul Durrant wrote:
> >A subsequent patch will introduce a new scheme to allow an emulator to
> >map ioreq server pages directly from Xen rather than the guest P2M.
> >
> >This patch lays the groundwork for that change by deferring mapping of
> >gfns until their values are requested by an emulator. To that end, the
> >pad field of the xen_dm_op_get_ioreq_server_info structure is
> >re-purposed to a flags field, and a new flag, XEN_DMOP_no_gfns, is
> >defined which modifies the behaviour of XEN_DMOP_get_ioreq_server_info
> >to allow the caller to avoid requesting the gfn values.
> >
> >Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> >Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> >Acked-by: Wei Liu <wei.liu2@citrix.com>
> >Reviewed-by: Jan Beulich <jbeulich@suse.com>
> >---
> >Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> >Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> >Cc: George Dunlap <George.Dunlap@eu.citrix.com>
> >Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
> >Cc: Stefano Stabellini <sstabellini@kernel.org>
> >Cc: Tim Deegan <tim@xen.org>
> >
> >v8:
> > - For safety make all of the pointers passed to
> >   hvm_get_ioreq_server_info() optional.
> > - Shrink bufioreq_handling down to a uint8_t.
> >
> >v3:
> > - Updated in response to review comments from Wei and Roger.
> > - Added a HANDLE_BUFIOREQ macro to make the code neater.
> > - This patch no longer introduces a security vulnerability since there
> >   is now an explicit limit on the number of ioreq servers that may be
> >   created for any one domain.
> >---
> > tools/libs/devicemodel/core.c                   |  8 +++++
> > tools/libs/devicemodel/include/xendevicemodel.h |  6 ++--
> > xen/arch/x86/hvm/dm.c                           |  9 +++--
> > xen/arch/x86/hvm/ioreq.c                        | 47 ++++++++++++++-----------
> > xen/include/asm-x86/hvm/domain.h                |  2 +-
> > xen/include/public/hvm/dm_op.h                  | 32 ++++++++++-------
> > 6 files changed, 63 insertions(+), 41 deletions(-)
> >
> >diff --git a/tools/libs/devicemodel/core.c b/tools/libs/devicemodel/core.c
> >index b66d4f9294..e684e657b6 100644
> >--- a/tools/libs/devicemodel/core.c
> >+++ b/tools/libs/devicemodel/core.c
> >@@ -204,6 +204,14 @@ int xendevicemodel_get_ioreq_server_info(
> >
> >     data->id = id;
> >
> >+    /*
> >+     * If the caller is not requesting gfn values then instruct the
> >+     * hypercall not to retrieve them as this may cause them to be
> >+     * mapped.
> >+     */
> >+    if (!ioreq_gfn && !bufioreq_gfn)
> >+        data->flags |= XEN_DMOP_no_gfns;
> >+
> >     rc = xendevicemodel_op(dmod, domid, 1, &op, sizeof(op));
> >     if (rc)
> >         return rc;
> >diff --git a/tools/libs/devicemodel/include/xendevicemodel.h b/tools/libs/devicemodel/include/xendevicemodel.h
> >index dda0bc7695..fffee3a4a0 100644
> >--- a/tools/libs/devicemodel/include/xendevicemodel.h
> >+++ b/tools/libs/devicemodel/include/xendevicemodel.h
> >@@ -61,11 +61,11 @@ int xendevicemodel_create_ioreq_server(
> >  * @parm domid the domain id to be serviced
> >  * @parm id the IOREQ Server id.
> >  * @parm ioreq_gfn pointer to a xen_pfn_t to receive the synchronous ioreq
> >- *                  gfn
> >+ *                  gfn. (May be NULL if not required)
> >  * @parm bufioreq_gfn pointer to a xen_pfn_t to receive the buffered ioreq
> >- *                    gfn
> >+ *                    gfn. (May be NULL if not required)
> >  * @parm bufioreq_port pointer to a evtchn_port_t to receive the buffered
> >- *                     ioreq event channel
> >+ *                     ioreq event channel. (May be NULL if not required)
> >  * @return 0 on success, -1 on failure.
> >  */
> > int xendevicemodel_get_ioreq_server_info(
> >diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> >index a787f43737..3c617bd754 100644
> >--- a/xen/arch/x86/hvm/dm.c
> >+++ b/xen/arch/x86/hvm/dm.c
> >@@ -416,16 +416,19 @@ static int dm_op(const struct dmop_args *op_args)
> >     {
> >         struct xen_dm_op_get_ioreq_server_info *data =
> >             &op.u.get_ioreq_server_info;
> >+        const uint16_t valid_flags = XEN_DMOP_no_gfns;
> >
> >         const_op = false;
> >
> >         rc = -EINVAL;
> >-        if ( data->pad )
> >+        if ( data->flags & ~valid_flags )
> >             break;
> >
> >         rc = hvm_get_ioreq_server_info(d, data->id,
> >-                                       &data->ioreq_gfn,
> >-                                       &data->bufioreq_gfn,
> >+                                       (data->flags & XEN_DMOP_no_gfns) ?
> >+                                       NULL : &data->ioreq_gfn,
> >+                                       (data->flags & XEN_DMOP_no_gfns) ?
> >+                                       NULL : &data->bufioreq_gfn,
> >                                        &data->bufioreq_port);
> >         break;
> >     }
> >diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> >index eec4e4771e..39de659ddf 100644
> >--- a/xen/arch/x86/hvm/ioreq.c
> >+++ b/xen/arch/x86/hvm/ioreq.c
> >@@ -350,6 +350,9 @@ static void hvm_update_ioreq_evtchn(struct hvm_ioreq_server *s,
> >     }
> > }
> >
> >+#define HANDLE_BUFIOREQ(s) \
> >+    ((s)->bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF)
> >+
> > static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> >                                      struct vcpu *v)
> > {
> >@@ -371,7 +374,7 @@ static int hvm_ioreq_server_add_vcpu(struct hvm_ioreq_server *s,
> >
> >     sv->ioreq_evtchn = rc;
> >
> >-    if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
> >+    if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
> >     {
> >         struct domain *d = s->domain;
> >
> >@@ -422,7 +425,7 @@ static void hvm_ioreq_server_remove_vcpu(struct hvm_ioreq_server *s,
> >
> >         list_del(&sv->list_entry);
> >
> >-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
> >+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
> >             free_xen_event_channel(v->domain, s->bufioreq_evtchn);
> >
> >         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
> >@@ -449,7 +452,7 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
> >
> >         list_del(&sv->list_entry);
> >
> >-        if ( v->vcpu_id == 0 && s->bufioreq.va != NULL )
> >+        if ( v->vcpu_id == 0 && HANDLE_BUFIOREQ(s) )
> >             free_xen_event_channel(v->domain, s->bufioreq_evtchn);
> >
> >         free_xen_event_channel(v->domain, sv->ioreq_evtchn);
> >@@ -460,14 +463,13 @@ static void hvm_ioreq_server_remove_all_vcpus(struct hvm_ioreq_server *s)
> >     spin_unlock(&s->lock);
> > }
> >
> >-static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s,
> >-                                      bool handle_bufioreq)
> >+static int hvm_ioreq_server_map_pages(struct hvm_ioreq_server *s)
> > {
> >     int rc;
> >
> >     rc = hvm_map_ioreq_gfn(s, false);
> >
> >-    if ( !rc && handle_bufioreq )
> >+    if ( !rc && HANDLE_BUFIOREQ(s) )
> >         rc = hvm_map_ioreq_gfn(s, true);
> >
> >     if ( rc )
> >@@ -597,13 +599,7 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
> >     if ( rc )
> >         return rc;
> >
> >-    if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
> >-        s->bufioreq_atomic = true;
> >-
> >-    rc = hvm_ioreq_server_map_pages(
> >-             s, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
> 
> It seems that for the default ioreq server, mapping gfns here is still
> required. Old qemu won't call hvm_get_ioreq_server_info() (and cannot,
> because of the assertion 'ASSERT(!IS_DEFAULT(s))') to set up the mapping.

Yes. Old qemu has no knowledge of the ioreq server API, so the 'default' ioreq server is there to accommodate it. Again, this is all legacy code, so there is no need to extend it.

  Paul

> 
> Thanks
> Chao


Thread overview: 30+ messages
2017-11-28 15:08 [PATCH v14 00/11] x86: guest resource mapping Paul Durrant
2017-11-28 15:08 ` [PATCH v14 01/11] x86/hvm/ioreq: maintain an array of ioreq servers rather than a list Paul Durrant
2017-11-28 15:08 ` [PATCH v14 02/11] x86/hvm/ioreq: simplify code and use consistent naming Paul Durrant
2017-11-28 15:08 ` [PATCH v14 03/11] x86/hvm/ioreq: use gfn_t in struct hvm_ioreq_page Paul Durrant
2017-11-28 15:08 ` [PATCH v14 04/11] x86/hvm/ioreq: defer mapping gfns until they are actually requested Paul Durrant
2017-12-06 21:49   ` Chao Gao
2017-12-07  8:38     ` Paul Durrant [this message]
2017-11-28 15:08 ` [PATCH v14 05/11] x86/mm: add HYPERVISOR_memory_op to acquire guest resources Paul Durrant
2017-11-28 18:50   ` Daniel De Graaf
2017-11-28 15:08 ` [PATCH v14 06/11] x86/hvm/ioreq: add a new mappable resource type Paul Durrant
2017-12-14  9:51   ` Paul Durrant
2017-12-14  9:52     ` Paul Durrant
2017-12-14 10:31     ` Paul Durrant
2017-12-14 13:46     ` Jan Beulich
2017-12-14 13:46       ` Paul Durrant
2017-11-28 15:08 ` [PATCH v14 07/11] x86/mm: add an extra command to HYPERVISOR_mmu_update Paul Durrant
2017-12-12 13:25   ` Jan Beulich
2017-12-12 13:52     ` Andrew Cooper
2017-12-12 14:38       ` Jan Beulich
2017-12-13 12:06         ` Paul Durrant
2017-12-13 14:35           ` Jan Beulich
2017-12-13 14:49             ` Paul Durrant
2017-12-13 15:24               ` Jan Beulich
2017-12-13 17:03                 ` Paul Durrant
2017-12-14 13:50                   ` Jan Beulich
2017-12-12 14:54     ` Paul Durrant
2017-11-28 15:08 ` [PATCH v14 08/11] tools/libxenforeignmemory: add support for resource mapping Paul Durrant
2017-11-28 15:08 ` [PATCH v14 09/11] tools/libxenforeignmemory: reduce xenforeignmemory_restrict code footprint Paul Durrant
2017-11-28 15:08 ` [PATCH v14 10/11] common: add a new mappable resource type: XENMEM_resource_grant_table Paul Durrant
2017-11-28 15:08 ` [PATCH v14 11/11] tools/libxenctrl: use new xenforeignmemory API to seed grant table Paul Durrant
