* [PATCH-for-4.9 v1 0/8] New hypercall for device models
@ 2016-11-18 17:13 Paul Durrant
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
                   ` (7 more replies)
  0 siblings, 8 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:13 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant

Following on from the design submitted by Jennifer Herbert to the list [1],
this series provides an implementation of __HYPERCALL_dm_op, followed by
patches (based on Jan Beulich's previous HVMCTL series [2]) that convert
tools-only HVMOPs used by device models to DMOPs.

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-09/msg01052.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02433.html
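
For orientation, the hypercall shape from [1], as implemented by patch #1,
is recapped below; the public header and docs/designs/dmop.markdown added
by that patch are authoritative:

struct xen_dm_op_buf {
    XEN_GUEST_HANDLE(void) h;
    uint64_t size;
};

enum neg_errnoval
HYPERVISOR_dm_op(domid_t domid,
                 xen_dm_op_buf_t *bufs,
                 unsigned int nr_bufs)

@bufs[0] carries a struct xen_dm_op naming the operation; any further
buffers carry auxiliary data referenced by that operation's parameters,
so privcmd can audit every user buffer without knowing the sub-op.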

Paul Durrant (8):
  public / x86: Introduce __HYPERCALL_dm_op...
  dm_op: convert HVMOP_*ioreq_server*
  dm_op: convert HVMOP_track_dirty_vram
  dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level,
    and...
  dm_op: convert HVMOP_modified_memory
  dm_op: convert HVMOP_set_mem_type
  dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  x86/hvm: serialize trap injecting producer and consumer

 docs/designs/dmop.markdown          | 193 ++++++++++
 tools/flask/policy/modules/xen.if   |   8 +-
 tools/libxc/include/xenctrl.h       |   1 +
 tools/libxc/xc_domain.c             | 204 +++++------
 tools/libxc/xc_misc.c               | 226 ++++--------
 tools/libxc/xc_private.c            |  70 ++++
 tools/libxc/xc_private.h            |   2 +
 xen/arch/x86/hvm/Makefile           |   1 +
 xen/arch/x86/hvm/dm.c               | 501 ++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 678 +-----------------------------------
 xen/arch/x86/hvm/ioreq.c            |  42 +--
 xen/arch/x86/hypercall.c            |   2 +
 xen/arch/x86/mm/hap/hap.c           |   2 +-
 xen/arch/x86/mm/shadow/common.c     |   2 +-
 xen/include/asm-x86/hap.h           |   2 +-
 xen/include/asm-x86/hvm/domain.h    |   3 +-
 xen/include/asm-x86/hvm/hvm.h       |   2 +
 xen/include/asm-x86/shadow.h        |   2 +-
 xen/include/public/hvm/dm_op.h      | 360 +++++++++++++++++++
 xen/include/public/hvm/hvm_op.h     |  70 ++--
 xen/include/public/xen-compat.h     |   2 +-
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |   7 +
 xen/include/xsm/dummy.h             |  36 +-
 xen/include/xsm/xsm.h               |  36 +-
 xen/xsm/dummy.c                     |   5 -
 xen/xsm/flask/hooks.c               |  37 +-
 xen/xsm/flask/policy/access_vectors |  15 +-
 28 files changed, 1396 insertions(+), 1114 deletions(-)
 create mode 100644 docs/designs/dmop.markdown
 create mode 100644 xen/arch/x86/hvm/dm.c
 create mode 100644 xen/include/public/hvm/dm_op.h

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
@ 2016-11-18 17:13 ` Paul Durrant
  2016-11-22 15:57   ` Jan Beulich
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Daniel De Graaf, Paul Durrant, Wei Liu, Jan Beulich

...as a set of hypercalls to be used by a device model.

As stated in the new docs/designs/dmop.markdown:

"The aim of DMOP is to prevent a compromised device model from
compromising domains other than the one it is associated with (which is
therefore likely already compromised)."

See that file for further information.

This patch simply adds the boilerplate for the hypercall and bumps
__XEN_LATEST_INTERFACE_VERSION__ to 0x00040900. Subsequent patches will
add the individual (sets of) operations.
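
For illustration, the calling convention that the conversions in subsequent
patches follow looks roughly like this (DMOP_example_op is a placeholder,
not a real sub-op):

struct xen_dm_op op;

op.op = DMOP_example_op;
/* ... fill in the sub-op's parameters ... */

/*
 * do_dm_op() bounces each (pointer, size) pair into a hypercall-safe
 * buffer and issues __HYPERVISOR_dm_op with a single-entry buffer array.
 */
rc = do_dm_op(xch, domid, 1, &op, sizeof(op));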

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Suggested-by: Ian Jackson <ian.jackson@citrix.com>
Suggested-by: Jennifer Herbert <jennifer.herbert@citrix.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 docs/designs/dmop.markdown        | 193 ++++++++++++++++++++++++++++++++++++++
 tools/flask/policy/modules/xen.if |   2 +-
 tools/libxc/include/xenctrl.h     |   1 +
 tools/libxc/xc_private.c          |  70 ++++++++++++++
 tools/libxc/xc_private.h          |   2 +
 xen/arch/x86/hvm/Makefile         |   1 +
 xen/arch/x86/hvm/dm.c             | 136 +++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c            |   1 +
 xen/arch/x86/hypercall.c          |   2 +
 xen/include/public/hvm/dm_op.h    |  70 ++++++++++++++
 xen/include/public/xen-compat.h   |   2 +-
 xen/include/public/xen.h          |   1 +
 xen/include/xen/hypercall.h       |   7 ++
 xen/include/xsm/dummy.h           |   6 ++
 xen/include/xsm/xsm.h             |   6 ++
 xen/xsm/flask/hooks.c             |   7 ++
 16 files changed, 505 insertions(+), 2 deletions(-)
 create mode 100644 docs/designs/dmop.markdown
 create mode 100644 xen/arch/x86/hvm/dm.c
 create mode 100644 xen/include/public/hvm/dm_op.h

diff --git a/docs/designs/dmop.markdown b/docs/designs/dmop.markdown
new file mode 100644
index 0000000..56878a2
--- /dev/null
+++ b/docs/designs/dmop.markdown
@@ -0,0 +1,193 @@
+DMOP  (multi-buffer variant)
+============================
+
+Introduction
+------------
+
+A previous proposal for a 'DMOP' was put forward by Ian Jackson on the 1st
+of August. That proposal seemed very promising; however, a problem was
+identified with it, which this proposal addresses.
+
+The aim of DMOP, as before, is to prevent a compromised device model from
+compromising domains other than the one it is associated with (which is
+therefore likely already compromised).
+
+The previous proposal adds a DMOP hypercall, for use by device models,
+which places the domain ID in a fixed place within the calling args,
+such that the privcmd driver can always find it, and need not know any
+further details of the particular DMOP in order to validate it against
+the domain previously set (via ioctl).
+
+The problem occurs when a DMOP carries references to other user memory
+objects, such as with track-dirty-VRAM (as used for the VGA buffer).
+In this case, the address of this other user object needs to be vetted,
+to ensure it is not within a restricted address range, such as kernel
+memory. The real problem comes down to how to vet this address -
+the ideal place to do this is within the privcmd driver, since it has
+knowledge of the address space involved. However, since a principal goal
+of DMOP is to keep privcmd free from any knowledge of DMOP's sub-ops,
+it would have no way to identify any user buffer addresses that need
+checking.  The alternative of having the hypervisor vet the address
+is also problematic, since it has no knowledge of the guest memory layout.
+
+
+The Design
+----------
+
+As with the previous design, we provide a new restriction ioctl, which
+takes a domid parameter.  After that restriction ioctl is called, the
+privcmd driver will permit only DMOP hypercalls, and only with the
+specified target domid.
+
+In the previous design, a DMOP consisted of one buffer, containing all of
+the DMOP parameters, which may include further explicit references to
+more buffers.  In this design, an array of buffers is presented, together
+with its length; the first buffer contains the DMOP parameters, which may
+implicitly reference further buffers from within the array. Here, the only
+user buffers passed are those found within the array, and so all of them
+can be audited by privcmd.  Since the length of the buffer array is also
+passed, privcmd does not need to know which DMOP it is in order to audit them.
+
+If the hypervisor ends up with the wrong number of buffers, it can reject
+the DMOP at that point.
+
+The following code illustrates this idea:
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    uint64_t size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+
+enum neg_errnoval
+HYPERVISOR_dm_op(domid_t domid,
+                 xen_dm_op_buf_t *bufs,
+                 unsigned int nr_bufs)
+
+@domid is the domain the hypercall operates on.
+@bufs points to an array of buffers where @bufs[0] contains a struct
+dm_op, describing the specific device model operation and its parameters.
+@bufs[1..] may be referenced in the parameters for the purposes of
+passing extra information to or from the domain.
+@nr_bufs is the number of buffers in the @bufs array.
+
+
+It is forbidden for the above struct (xen_dm_op) to contain any
+guest handles; if they are needed, they should instead be passed
+via further entries in @bufs.
+
+Validation by privcmd driver
+----------------------------
+
+If the privcmd driver has been restricted to a specific domain (using the
+new ioctl), when it receives an op it will:
+
+1. Check hypercall is DMOP.
+
+2. Check domid == restricted domid.
+
+3. For each of the @nr_bufs entries in @bufs: check that @h and @size give
+   a buffer wholly in the user space part of the virtual address space
+   (e.g., on Linux use access_ok()).
+
+
+Xen Implementation
+------------------
+
+Since a DMOP sub-op may need to copy a buffer from, or return a buffer to,
+the guest, as well as the DMOP code itself needing to get the initial
+buffer, functions for doing this would be written as below.  Note that care
+is taken to prevent damage from buffer under- or over-runs.  If the DMOP is
+called with too few buffers, zeroes will be read, while extras are ignored.
+
+static int dm_op_get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                         unsigned int nr_bufs, unsigned int idx,
+                         struct xen_dm_op_buf *buf)
+{
+    if ( idx >= nr_bufs )
+        return -EFAULT;
+
+    return copy_from_guest_offset(buf, bufs, idx, 1);
+}
+
+static int dm_op_copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                                     unsigned int nr_bufs, void *dst,
+                                     unsigned int idx, size_t dst_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    memset(dst, 0, dst_size);
+
+    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return -EFAULT;
+
+    size = min(dst_size, buf.size);
+
+    rc = copy_from_guest(dst, buf.h, size);
+    if ( rc )
+        return -EFAULT;
+
+    return 0;
+}
+
+int dm_op_copy_buf_to_guest(...)
+{
+    /* Similar to the above, except copying in the other
+       direction. */
+}
+
+This leaves do_dm_op easy to implement as below:
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    bool restart = false;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = dm_op_copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
+    if ( rc )
+        goto out;
+
+    switch ( op.op ) {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    restart = (rc == -ERESTART);
+
+    if ( !restart && rc )
+        goto out;
+
+    rc = dm_op_copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
+
+out:
+    rcu_unlock_domain(d);
+
+    if ( restart && !rc )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index eb646f5..779232e 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq };
+	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2c83544..cc37752 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -41,6 +41,7 @@
 #include <xen/sched.h>
 #include <xen/memory.h>
 #include <xen/grant_table.h>
+#include <xen/hvm/dm_op.h>
 #include <xen/hvm/params.h>
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index d57c39a..83f224d 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
     return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
 }
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
+{
+    int ret = -1;
+    struct  {
+        void *u;
+        void *h;
+    } *bounce;
+    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
+    va_list args;
+    unsigned int idx;
+
+    bounce = calloc(nr_bufs, sizeof(*bounce));
+    if ( bounce == NULL )
+        goto fail1;
+
+    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
+    if ( bufs == NULL )
+        goto fail2;
+
+    va_start(args, nr_bufs);
+    for (idx = 0; idx < nr_bufs; idx++)
+    {
+        void *u = va_arg(args, void *);
+        size_t size = va_arg(args, size_t);
+
+        bounce[idx].h = xencall_alloc_buffer(xch->xcall, size);
+        if ( bounce[idx].h == NULL )
+            goto fail3;
+
+        memcpy(bounce[idx].h, u, size);
+        bounce[idx].u = u;
+
+        set_xen_guest_handle_raw(bufs[idx].h, bounce[idx].h);
+        bufs[idx].size = size;
+    }
+    va_end(args);
+
+    ret = xencall3(xch->xcall, __HYPERVISOR_dm_op,
+                   domid, nr_bufs, HYPERCALL_BUFFER_AS_ARG(bufs));
+    if ( ret < 0 )
+        goto fail4;
+
+    while ( idx-- != 0 )
+    {
+        memcpy(bounce[idx].u, bounce[idx].h, bufs[idx].size);
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+    }
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+    free(bounce);
+
+    return 0;
+
+fail4:
+    idx = nr_bufs;
+
+fail3:
+    while ( idx-- != 0 )
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+fail2:
+    free(bounce);
+
+fail1:
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 97445ae..f191320 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -422,6 +422,8 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
                          uint32_t *port);
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...);
+
 #endif /* __XC_PRIVATE_H__ */
 
 /*
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index f750d13..5869d1b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -2,6 +2,7 @@ subdir-y += svm
 subdir-y += vmx
 
 obj-y += asid.o
+obj-y += dm.o
 obj-y += emulate.o
 obj-y += hpet.o
 obj-y += hvm.o
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
new file mode 100644
index 0000000..ba7b8f6
--- /dev/null
+++ b/xen/arch/x86/hvm/dm.c
@@ -0,0 +1,136 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/hypercall.h>
+#include <xen/guest_access.h>
+#include <xen/sched.h>
+#include <asm/hvm/ioreq.h>
+#include <xsm/xsm.h>
+
+static int dm_op_get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                         unsigned int nr_bufs, unsigned int idx,
+                         struct xen_dm_op_buf *buf)
+{
+    if ( idx >= nr_bufs )
+        return -EFAULT;
+
+    return copy_from_guest_offset(buf, bufs, idx, 1);
+}
+
+static int dm_op_copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                                     unsigned int nr_bufs, void *dst,
+                                     unsigned int idx, size_t dst_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    memset(dst, 0, dst_size);
+
+    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return -EFAULT;
+
+    size = min(dst_size, buf.size);
+
+    rc = copy_from_guest(dst, buf.h, size);
+    if ( rc )
+        return -EFAULT;
+
+    return 0;
+}
+
+static int dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                                   unsigned int nr_bufs, unsigned int idx,
+                                   void *src, size_t src_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return -EFAULT;
+
+    size = min(buf.size, src_size);
+
+    rc = copy_to_guest(buf.h, src, size);
+    if ( rc )
+        return -EFAULT;
+
+    return 0;
+}
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    bool restart;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    restart = false;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = dm_op_copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
+    if ( rc )
+        goto out;
+
+    switch ( op.op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( rc == -ERESTART )
+        restart = true;
+
+    if ( !restart && rc )
+        goto out;
+
+    rc = dm_op_copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
+
+out:
+    rcu_unlock_domain(d);
+
+    if ( restart && !rc )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 704fd64..25c32e6 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4259,6 +4259,7 @@ static const hypercall_table_t hvm_hypercall_table[] = {
     COMPAT_CALL(platform_op),
     COMPAT_CALL(mmuext_op),
     HYPERCALL(xenpmu_op),
+    HYPERCALL(dm_op),
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index d2b5331..0a163ac 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -66,6 +66,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(kexec_op, 2),
     ARGS(tmem_op, 1),
     ARGS(xenpmu_op, 2),
+    ARGS(dm_op, 3),
     ARGS(mca, 1),
     ARGS(arch_1, 1),
 };
@@ -128,6 +129,7 @@ static const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(tmem_op),
 #endif
     HYPERCALL(xenpmu_op),
+    HYPERCALL(dm_op),
     HYPERCALL(mca),
     HYPERCALL(arch_1),
 };
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
new file mode 100644
index 0000000..3eb37d6
--- /dev/null
+++ b/xen/include/public/hvm/dm_op.h
@@ -0,0 +1,70 @@
+/*
+ * Copyright (c) 2016, Citrix Systems Inc
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
+#define __XEN_PUBLIC_HVM_DM_OP_H__
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "../xen.h"
+
+#define DMOP_invalid 0
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    uint64_t size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+DEFINE_XEN_GUEST_HANDLE(xen_dm_op_buf_t);
+
+/* ` enum neg_errnoval
+ * ` HYPERVISOR_dm_op(domid_t domid,
+ * `                  xen_dm_op_buf_t *bufs,
+ * `                  unsigned int nr_bufs)
+ * `
+ *
+ * @domid is the domain the hypercall operates on.
+ * @bufs points to an array of buffers where @bufs[0] contains a struct
+ * dm_op, describing the specific device model operation and its parameters.
+ * @bufs[1..] may be referenced in the parameters for the purposes of
+ * passing extra information to or from the domain.
+ * @nr_bufs is the number of buffers in the @bufs array.
+ */
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* __XEN_PUBLIC_HVM_DM_OP_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/xen-compat.h b/xen/include/public/xen-compat.h
index dd8a5c0..b673653 100644
--- a/xen/include/public/xen-compat.h
+++ b/xen/include/public/xen-compat.h
@@ -27,7 +27,7 @@
 #ifndef __XEN_PUBLIC_XEN_COMPAT_H__
 #define __XEN_PUBLIC_XEN_COMPAT_H__
 
-#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040800
+#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040900
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 /* Xen is built with matching headers and implements the latest interface. */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 336aa3f..213b94d 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -120,6 +120,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
 #define __HYPERVISOR_xenpmu_op            40
+#define __HYPERVISOR_dm_op                41
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 207a0e8..fee78f7 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -15,6 +15,7 @@
 #include <public/tmem.h>
 #include <public/version.h>
 #include <public/pmu.h>
+#include <public/hvm/dm_op.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -141,6 +142,12 @@ do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 extern long
 do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
 
+extern long
+do_dm_op(
+    domid_t domid,
+    unsigned int nr_bufs,
+    XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
+
 #ifdef CONFIG_COMPAT
 
 extern int
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 95460af..711318e 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -727,6 +727,12 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }
 
+static XSM_INLINE int xsm_dm_op (XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
 #endif /* CONFIG_X86 */
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 5dc59dd..c94c1a2 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -184,6 +184,7 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
+    int (*dm_op) (struct domain *d);
 #endif
     int (*xen_version) (uint32_t cmd);
 };
@@ -722,6 +723,11 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }
 
+static inline int xsm_dm_op (xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->dm_op(d);
+}
+
 #endif /* CONFIG_X86 */
 
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 177c11f..d24bc01 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1632,6 +1632,12 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+
+static int flask_dm_op (struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__DM);
+}
+
 #endif /* CONFIG_X86 */
 
 static int flask_xen_version (uint32_t op)
@@ -1811,6 +1817,7 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
+    .dm_op = flask_dm_op,
 #endif
     .xen_version = flask_xen_version,
 };
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
@ 2016-11-18 17:13 ` Paul Durrant
  2016-11-24 17:02   ` Jan Beulich
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Daniel De Graaf, Paul Durrant, Ian Jackson, Andrew Cooper

NOTE: The definitions of HVM_IOREQSRV_BUFIOREQ_*, HVMOP_IO_RANGE_* and
      HVMOP_PCI_SBDF have to persist for new interface versions as
      they are already in use by callers of the libxc interface.
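
After conversion, each libxc wrapper simply builds a struct xen_dm_op on
the stack and hands it to the do_dm_op() helper introduced in the previous
patch; e.g. xc_hvm_set_ioreq_server_state() (abridged from the diff below)
becomes:

op.op = DMOP_set_ioreq_server_state;
data = &op.u.set_ioreq_server_state;

data->id = id;
data->enabled = !!enabled;

return do_dm_op(xch, domid, 1, &op, sizeof(op));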

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
---
 tools/libxc/xc_domain.c          | 204 ++++++++++++++----------------------
 xen/arch/x86/hvm/dm.c            |  55 ++++++++++
 xen/arch/x86/hvm/hvm.c           | 221 +--------------------------------------
 xen/arch/x86/hvm/ioreq.c         |  42 ++++----
 xen/include/asm-x86/hvm/domain.h |   3 +-
 xen/include/public/hvm/dm_op.h   | 157 +++++++++++++++++++++++++++
 xen/include/public/hvm/hvm_op.h  |  40 ++++---
 xen/include/xsm/dummy.h          |   6 --
 xen/include/xsm/xsm.h            |   6 --
 xen/xsm/dummy.c                  |   1 -
 xen/xsm/flask/hooks.c            |   6 --
 11 files changed, 340 insertions(+), 401 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 296b852..1cbe49d 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1417,24 +1417,22 @@ int xc_hvm_create_ioreq_server(xc_interface *xch,
                                int handle_bufioreq,
                                ioservid_t *id)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
+    struct xen_dm_op op;
+    struct xen_dm_op_create_ioreq_server *data;
     int rc;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_create_ioreq_server;
+    data = &op.u.create_ioreq_server;
 
-    arg->domid = domid;
-    arg->handle_bufioreq = handle_bufioreq;
+    data->handle_bufioreq = handle_bufioreq;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_create_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    rc = do_dm_op(xch, domid, 1, &op, sizeof(op));
+    if ( rc )
+        return rc;
 
-    *id = arg->id;
+    *id = data->id;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return 0;
 }
 
 int xc_hvm_get_ioreq_server_info(xc_interface *xch,
@@ -1444,84 +1442,65 @@ int xc_hvm_get_ioreq_server_info(xc_interface *xch,
                                  xen_pfn_t *bufioreq_pfn,
                                  evtchn_port_t *bufioreq_port)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
+    struct xen_dm_op op;
+    struct xen_dm_op_get_ioreq_server_info *data;
     int rc;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_get_ioreq_server_info;
+    data = &op.u.get_ioreq_server_info;
 
-    arg->domid = domid;
-    arg->id = id;
+    data->id = id;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_get_ioreq_server_info,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-    if ( rc != 0 )
-        goto done;
+    rc = do_dm_op(xch, domid, 1, &op, sizeof(op));
+    if ( rc )
+        return rc;
 
     if ( ioreq_pfn )
-        *ioreq_pfn = arg->ioreq_pfn;
+        *ioreq_pfn = data->ioreq_pfn;
 
     if ( bufioreq_pfn )
-        *bufioreq_pfn = arg->bufioreq_pfn;
+        *bufioreq_pfn = data->bufioreq_pfn;
 
     if ( bufioreq_port )
-        *bufioreq_port = arg->bufioreq_port;
+        *bufioreq_port = data->bufioreq_port;
 
-done:
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return 0;
 }
 
 int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
                                         ioservid_t id, int is_mmio,
                                         uint64_t start, uint64_t end)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
-
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = is_mmio ? HVMOP_IO_RANGE_MEMORY : HVMOP_IO_RANGE_PORT;
-    arg->start = start;
-    arg->end = end;
+    op.op = DMOP_map_io_range_to_ioreq_server;
+    data = &op.u.map_io_range_to_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_map_io_range_to_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->type = is_mmio ? DMOP_IO_RANGE_MEMORY : DMOP_IO_RANGE_PORT;
+    data->start = start;
+    data->end = end;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
                                             ioservid_t id, int is_mmio,
                                             uint64_t start, uint64_t end)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_unmap_io_range_from_ioreq_server;
+    data = &op.u.unmap_io_range_from_ioreq_server;
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = is_mmio ? HVMOP_IO_RANGE_MEMORY : HVMOP_IO_RANGE_PORT;
-    arg->start = start;
-    arg->end = end;
-
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_unmap_io_range_from_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->type = is_mmio ? DMOP_IO_RANGE_MEMORY : DMOP_IO_RANGE_PORT;
+    data->start = start;
+    data->end = end;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
@@ -1529,37 +1508,30 @@ int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
                                       uint8_t bus, uint8_t device,
                                       uint8_t function)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
     if (device > 0x1f || function > 0x7) {
         errno = EINVAL;
         return -1;
     }
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_map_io_range_to_ioreq_server;
+    data = &op.u.map_io_range_to_ioreq_server;
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = HVMOP_IO_RANGE_PCI;
+    data->id = id;
+    data->type = DMOP_IO_RANGE_PCI;
 
     /*
      * The underlying hypercall will deal with ranges of PCI SBDF
      * but, for simplicity, the API only uses singletons.
      */
-    arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
-                                           (uint64_t)bus,
-                                           (uint64_t)device,
-                                           (uint64_t)function);
-
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_map_io_range_to_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->start = data->end = DMOP_PCI_SBDF((uint64_t)segment,
+                                            (uint64_t)bus,
+                                            (uint64_t)device,
+                                            (uint64_t)function);
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
@@ -1567,54 +1539,45 @@ int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
                                           uint8_t bus, uint8_t device,
                                           uint8_t function)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
     if (device > 0x1f || function > 0x7) {
         errno = EINVAL;
         return -1;
     }
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_unmap_io_range_from_ioreq_server;
+    data = &op.u.unmap_io_range_from_ioreq_server;
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = HVMOP_IO_RANGE_PCI;
-    arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
-                                           (uint64_t)bus,
-                                           (uint64_t)device,
-                                           (uint64_t)function);
-
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_unmap_io_range_from_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->type = DMOP_IO_RANGE_PCI;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    /*
+     * The underlying hypercall will deal with ranges of PCI SBDF
+     * but, for simplicity, the API only uses singletons.
+     */
+    data->start = data->end = DMOP_PCI_SBDF((uint64_t)segment,
+                                            (uint64_t)bus,
+                                            (uint64_t)device,
+                                            (uint64_t)function);
+
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_destroy_ioreq_server(xc_interface *xch,
                                 domid_t domid,
                                 ioservid_t id)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    struct xen_dm_op op;
+    struct xen_dm_op_destroy_ioreq_server *data;
 
-    arg->domid = domid;
-    arg->id = id;
+    op.op = DMOP_destroy_ioreq_server;
+    data = &op.u.destroy_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_destroy_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_ioreq_server_state(xc_interface *xch,
@@ -1622,23 +1585,16 @@ int xc_hvm_set_ioreq_server_state(xc_interface *xch,
                                   ioservid_t id,
                                   int enabled)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_set_ioreq_server_state_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_set_ioreq_server_state *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    op.op = DMOP_set_ioreq_server_state;
+    data = &op.u.set_ioreq_server_state;
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->enabled = !!enabled;
+    data->id = id;
+    data->enabled = !!enabled;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_ioreq_server_state,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_domain_setdebugging(xc_interface *xch,
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index ba7b8f6..c718a76 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -102,6 +102,61 @@ long do_dm_op(domid_t domid,
 
     switch ( op.op )
     {
+    case DMOP_create_ioreq_server:
+    {
+        struct domain *curr_d = current->domain;
+        struct xen_dm_op_create_ioreq_server *data =
+            &op.u.create_ioreq_server;
+
+        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
+                                     data->handle_bufioreq, &data->id);
+        break;
+    }
+    case DMOP_get_ioreq_server_info:
+    {
+        struct xen_dm_op_get_ioreq_server_info *data =
+            &op.u.get_ioreq_server_info;
+
+        rc = hvm_get_ioreq_server_info(d, data->id,
+                                       &data->ioreq_pfn,
+                                       &data->bufioreq_pfn,
+                                       &data->bufioreq_port);
+        break;
+    }
+    case DMOP_map_io_range_to_ioreq_server:
+    {
+        struct xen_dm_op_ioreq_server_range *data =
+            &op.u.map_io_range_to_ioreq_server;
+
+        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
+        break;
+    }
+    case DMOP_unmap_io_range_from_ioreq_server:
+    {
+        struct xen_dm_op_ioreq_server_range *data =
+            &op.u.unmap_io_range_from_ioreq_server;
+
+        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                                  data->start, data->end);
+        break;
+    }
+    case DMOP_set_ioreq_server_state:
+    {
+        struct xen_dm_op_set_ioreq_server_state *data =
+            &op.u.set_ioreq_server_state;
+
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        break;
+    }
+    case DMOP_destroy_ioreq_server:
+    {
+        struct xen_dm_op_destroy_ioreq_server *data =
+            &op.u.destroy_ioreq_server;
+
+        rc = hvm_destroy_ioreq_server(d, data->id);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 25c32e6..b2a7772 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4789,195 +4789,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvmop_create_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
-{
-    struct domain *curr_d = current->domain;
-    xen_hvm_create_ioreq_server_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_create_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
-                                 op.handle_bufioreq, &op.id);
-    if ( rc != 0 )
-        goto out;
-
-    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_get_ioreq_server_info(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
-{
-    xen_hvm_get_ioreq_server_info_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_get_ioreq_server_info);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_get_ioreq_server_info(d, op.id,
-                                   &op.ioreq_pfn,
-                                   &op.bufioreq_pfn, 
-                                   &op.bufioreq_port);
-    if ( rc != 0 )
-        goto out;
-
-    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_map_io_range_to_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_io_range_t) uop)
-{
-    xen_hvm_io_range_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_map_io_range_to_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.type,
-                                          op.start, op.end);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_unmap_io_range_from_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_io_range_t) uop)
-{
-    xen_hvm_io_range_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_unmap_io_range_from_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.type,
-                                              op.start, op.end);
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_set_ioreq_server_state(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_ioreq_server_state_t) uop)
-{
-    xen_hvm_set_ioreq_server_state_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_set_ioreq_server_state);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_set_ioreq_server_state(d, op.id, !!op.enabled);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_destroy_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
-{
-    xen_hvm_destroy_ioreq_server_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_destroy_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_destroy_ioreq_server(d, op.id);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_set_evtchn_upcall_vector(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_evtchn_upcall_vector_t) uop)
 {
@@ -5324,7 +5135,7 @@ static int hvmop_get_param(
         /* May need to create server. */
         domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
         rc = hvm_create_ioreq_server(d, domid, 1,
-                                     HVM_IOREQSRV_BUFIOREQ_LEGACY, NULL);
+                                     DMOP_BUFIOREQ_LEGACY, NULL);
         if ( rc != 0 && rc != -EEXIST )
             goto out;
     }
@@ -5687,36 +5498,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     start_iter = op & ~mask;
     switch ( op &= mask )
     {
-    case HVMOP_create_ioreq_server:
-        rc = hvmop_create_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
-        break;
-    
-    case HVMOP_get_ioreq_server_info:
-        rc = hvmop_get_ioreq_server_info(
-            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
-        break;
-    
-    case HVMOP_map_io_range_to_ioreq_server:
-        rc = hvmop_map_io_range_to_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_io_range_t));
-        break;
-    
-    case HVMOP_unmap_io_range_from_ioreq_server:
-        rc = hvmop_unmap_io_range_from_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_io_range_t));
-        break;
-
-    case HVMOP_set_ioreq_server_state:
-        rc = hvmop_set_ioreq_server_state(
-            guest_handle_cast(arg, xen_hvm_set_ioreq_server_state_t));
-        break;
-    
-    case HVMOP_destroy_ioreq_server:
-        rc = hvmop_destroy_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
-        break;
-    
     case HVMOP_set_evtchn_upcall_vector:
         rc = hvmop_set_evtchn_upcall_vector(
             guest_handle_cast(arg, xen_hvm_evtchn_upcall_vector_t));
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index d2245e2..c11e7a0 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -513,9 +513,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         char *name;
 
         rc = asprintf(&name, "ioreq_server %d %s", s->id,
-                      (i == HVMOP_IO_RANGE_PORT) ? "port" :
-                      (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == HVMOP_IO_RANGE_PCI) ? "pci" :
+                      (i == DMOP_IO_RANGE_PORT) ? "port" :
+                      (i == DMOP_IO_RANGE_MEMORY) ? "memory" :
+                      (i == DMOP_IO_RANGE_PCI) ? "pci" :
                       "");
         if ( rc )
             goto fail;
@@ -617,11 +617,11 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
     if ( rc )
         return rc;
 
-    if ( bufioreq_handling == HVM_IOREQSRV_BUFIOREQ_ATOMIC )
+    if ( bufioreq_handling == DMOP_BUFIOREQ_ATOMIC )
         s->bufioreq_atomic = 1;
 
     rc = hvm_ioreq_server_setup_pages(
-             s, is_default, bufioreq_handling != HVM_IOREQSRV_BUFIOREQ_OFF);
+             s, is_default, bufioreq_handling != DMOP_BUFIOREQ_OFF);
     if ( rc )
         goto fail_map;
 
@@ -686,7 +686,7 @@ int hvm_create_ioreq_server(struct domain *d, domid_t domid,
     struct hvm_ioreq_server *s;
     int rc;
 
-    if ( bufioreq_handling > HVM_IOREQSRV_BUFIOREQ_ATOMIC )
+    if ( bufioreq_handling > DMOP_BUFIOREQ_ATOMIC )
         return -EINVAL;
 
     rc = -ENOMEM;
@@ -833,9 +833,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 
             switch ( type )
             {
-            case HVMOP_IO_RANGE_PORT:
-            case HVMOP_IO_RANGE_MEMORY:
-            case HVMOP_IO_RANGE_PCI:
+            case DMOP_IO_RANGE_PORT:
+            case DMOP_IO_RANGE_MEMORY:
+            case DMOP_IO_RANGE_PCI:
                 r = s->range[type];
                 break;
 
@@ -885,9 +885,9 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 
             switch ( type )
             {
-            case HVMOP_IO_RANGE_PORT:
-            case HVMOP_IO_RANGE_MEMORY:
-            case HVMOP_IO_RANGE_PCI:
+            case DMOP_IO_RANGE_PORT:
+            case DMOP_IO_RANGE_MEMORY:
+            case DMOP_IO_RANGE_PCI:
                 r = s->range[type];
                 break;
 
@@ -1128,12 +1128,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
 
         /* PCI config data cycle */
 
-        sbdf = HVMOP_PCI_SBDF(0,
-                              PCI_BUS(CF8_BDF(cf8)),
-                              PCI_SLOT(CF8_BDF(cf8)),
-                              PCI_FUNC(CF8_BDF(cf8)));
+        sbdf = DMOP_PCI_SBDF(0,
+                             PCI_BUS(CF8_BDF(cf8)),
+                             PCI_SLOT(CF8_BDF(cf8)),
+                             PCI_FUNC(CF8_BDF(cf8)));
 
-        type = HVMOP_IO_RANGE_PCI;
+        type = DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf << 32) |
                CF8_ADDR_LO(cf8) |
                (p->addr & 3);
@@ -1152,7 +1152,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     else
     {
         type = (p->type == IOREQ_TYPE_PIO) ?
-                HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
+                DMOP_IO_RANGE_PORT : DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
 
@@ -1174,19 +1174,19 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         {
             unsigned long end;
 
-        case HVMOP_IO_RANGE_PORT:
+        case DMOP_IO_RANGE_PORT:
             end = addr + p->size - 1;
             if ( rangeset_contains_range(r, addr, end) )
                 return s;
 
             break;
-        case HVMOP_IO_RANGE_MEMORY:
+        case DMOP_IO_RANGE_MEMORY:
             end = addr + (p->size * p->count) - 1;
             if ( rangeset_contains_range(r, addr, end) )
                 return s;
 
             break;
-        case HVMOP_IO_RANGE_PCI:
+        case DMOP_IO_RANGE_PCI:
             if ( rangeset_contains_singleton(r, addr >> 32) )
             {
                 p->type = IOREQ_TYPE_PCI_CONFIG;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f34d784..894c01d 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -33,6 +33,7 @@
 #include <public/hvm/params.h>
 #include <public/hvm/save.h>
 #include <public/hvm/hvm_op.h>
+#include <public/hvm/dm_op.h>
 
 struct hvm_ioreq_page {
     unsigned long gmfn;
@@ -47,7 +48,7 @@ struct hvm_ioreq_vcpu {
     bool_t           pending;
 };
 
-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
+#define NR_IO_RANGE_TYPES (DMOP_IO_RANGE_PCI + 1)
 #define MAX_NR_IO_RANGES  256
 
 struct hvm_ioreq_server {
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 3eb37d6..dc1d2ad 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -27,11 +27,168 @@
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
 #include "../xen.h"
+#include "../event_channel.h"
 
 #define DMOP_invalid 0
 
+/*
+ * IOREQ Servers
+ *
+ * The interface between an I/O emulator and Xen is called an IOREQ Server.
+ * A domain supports a single 'legacy' IOREQ Server which is instantiated if
+ * parameter...
+ *
+ * HVM_PARAM_IOREQ_PFN is read (to get the gmfn containing the synchronous
+ * ioreq structures), or...
+ * HVM_PARAM_BUFIOREQ_PFN is read (to get the gmfn containing the buffered
+ * ioreq ring), or...
+ * HVM_PARAM_BUFIOREQ_EVTCHN is read (to get the event channel that Xen uses
+ * to request buffered I/O emulation).
+ *
+ * The following hypercalls facilitate the creation of IOREQ Servers for
+ * 'secondary' emulators which are invoked to implement port I/O, memory, or
+ * PCI config space ranges which they explicitly register.
+ */
+
+typedef uint16_t ioservid_t;
+
+/*
+ * DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a secondary
+ *                           emulator servicing domain <domid>.
+ *
+ * The <id> handed back is unique for <domid>. If <handle_bufioreq> is zero
+ * the buffered ioreq ring will not be allocated and hence all emulation
+ * requests to this server will be synchronous.
+ */
+#define DMOP_create_ioreq_server 1
+
+struct xen_dm_op_create_ioreq_server {
+    /* IN - should server handle buffered ioreqs */
+    uint8_t handle_bufioreq;
+#define DMOP_BUFIOREQ_OFF    0
+#define DMOP_BUFIOREQ_LEGACY 1
+/*
+ * Use this when read_pointer gets updated atomically and
+ * the pointer pair gets read atomically:
+ */
+#define DMOP_BUFIOREQ_ATOMIC 2
+    uint8_t __pad[3];
+    /* OUT - server id */
+    ioservid_t id;
+};
+
+/*
+ * DMOP_get_ioreq_server_info: Get all the information necessary to access
+ *                             IOREQ Server <id>.
+ *
+ * The emulator needs to map the synchronous ioreq structures and buffered
+ * ioreq ring (if it exists) that Xen uses to request emulation. These are
+ * hosted in domain <domid>'s gmfns <ioreq_pfn> and <bufioreq_pfn>
+ * respectively. In addition, if the IOREQ Server is handling buffered
+ * emulation requests, the emulator needs to bind to event channel
+ * <bufioreq_port> to listen for them. (The event channels used for
+ * synchronous emulation requests are specified in the per-CPU ioreq
+ * structures in <ioreq_pfn>).
+ * If the IOREQ Server is not handling buffered emulation requests then the
+ * values handed back in <bufioreq_pfn> and <bufioreq_port> will both be 0.
+ */
+#define DMOP_get_ioreq_server_info 2
+
+struct xen_dm_op_get_ioreq_server_info {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t __pad;
+    /* OUT - buffered ioreq port */
+    evtchn_port_t bufioreq_port;
+    /* OUT - sync ioreq pfn */
+    uint64_aligned_t ioreq_pfn;
+    /* OUT - buffered ioreq pfn */
+    uint64_aligned_t bufioreq_pfn;
+};
+
+/*
+ * DMOP_map_io_range_to_ioreq_server: Register an I/O range of domain
+ *                                    <domid> for emulation by the client
+ *                                    of IOREQ Server <id>
+ * DMOP_unmap_io_range_from_ioreq_server: Deregister an I/O range of <domid>
+ *                                        for emulation by the client of
+ *                                        IOREQ Server <id>
+ *
+ * There are three types of I/O that can be emulated: port I/O, memory
+ * accesses and PCI config space accesses. The <type> field denotes which
+ * type of range the <start> and <end> (inclusive) fields are specifying.
+ * PCI config space ranges are specified by segment/bus/device/function
+ * values which should be encoded using the DMOP_PCI_SBDF helper macro
+ * below.
+ *
+ * NOTE: unless an emulation request falls entirely within a range mapped
+ * by a secondary emulator, it will not be passed to that emulator.
+ */
+#define DMOP_map_io_range_to_ioreq_server 3
+#define DMOP_unmap_io_range_from_ioreq_server 4
+
+struct xen_dm_op_ioreq_server_range {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t __pad;
+    /* IN - type of range */
+    uint32_t type;
+# define DMOP_IO_RANGE_PORT   0 /* I/O port range */
+# define DMOP_IO_RANGE_MEMORY 1 /* MMIO range */
+# define DMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
+    /* IN - inclusive start and end of range */
+    uint64_aligned_t start, end;
+};
+
+#define DMOP_PCI_SBDF(s,b,d,f) \
+	((((s) & 0xffff) << 16) |  \
+	 (((b) & 0xff) << 8) |     \
+	 (((d) & 0x1f) << 3) |     \
+	 ((f) & 0x07))
+
+/*
+ * DMOP_set_ioreq_server_state: Enable or disable the IOREQ Server <id>
+ *                              servicing domain <domid>.
+ *
+ * The IOREQ Server will not be passed any emulation requests until it is
+ * in the enabled state.
+ * Note that the contents of the ioreq_pfn and bufioreq_pfn (see
+ * DMOP_get_ioreq_server_info) are not meaningful until the IOREQ Server
+ * is in the enabled state.
+ */
+#define DMOP_set_ioreq_server_state 5
+
+struct xen_dm_op_set_ioreq_server_state {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t __pad;
+    /* IN - enabled? */
+    uint8_t enabled;
+};
+
+/*
+ * DMOP_destroy_ioreq_server: Destroy the IOREQ Server <id> servicing domain
+ *                            <domid>.
+ *
+ * Any registered I/O ranges will be automatically deregistered.
+ */
+#define DMOP_destroy_ioreq_server 6
+
+struct xen_dm_op_destroy_ioreq_server {
+    /* IN - server id */
+    ioservid_t id;
+};
+
 struct xen_dm_op {
     uint32_t op;
+    union {
+        struct xen_dm_op_create_ioreq_server create_ioreq_server;
+        struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
+        struct xen_dm_op_ioreq_server_range map_io_range_to_ioreq_server;
+        struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
+        struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
+        struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
+    } u;
 };
 
 struct xen_dm_op_buf {
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index b3e45cf..cf5e59a 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -26,6 +26,7 @@
 #include "../xen.h"
 #include "../trace.h"
 #include "../event_channel.h"
+#include "dm_op.h"
 
 /* Get/set subcommands: extra argument == pointer to xen_hvm_param struct. */
 #define HVMOP_set_param           0
@@ -242,6 +243,8 @@ struct xen_hvm_inject_msi {
 typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /*
  * IOREQ Servers
  *
@@ -274,13 +277,6 @@ typedef uint16_t ioservid_t;
 #define HVMOP_create_ioreq_server 17
 struct xen_hvm_create_ioreq_server {
     domid_t domid;           /* IN - domain to be serviced */
-#define HVM_IOREQSRV_BUFIOREQ_OFF    0
-#define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
-/*
- * Use this when read_pointer gets updated atomically and
- * the pointer pair gets read atomically:
- */
-#define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
     uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */
     ioservid_t id;           /* OUT - server id */
 };
@@ -336,20 +332,11 @@ struct xen_hvm_io_range {
     domid_t domid;               /* IN - domain to be serviced */
     ioservid_t id;               /* IN - server id */
     uint32_t type;               /* IN - type of range */
-# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
-# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
-# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
     uint64_aligned_t start, end; /* IN - inclusive start and end of range */
 };
 typedef struct xen_hvm_io_range xen_hvm_io_range_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_io_range_t);
 
-#define HVMOP_PCI_SBDF(s,b,d,f)                 \
-	((((s) & 0xffff) << 16) |                   \
-	 (((b) & 0xff) << 8) |                      \
-	 (((d) & 0x1f) << 3) |                      \
-	 ((f) & 0x07))
-
 /*
  * HVMOP_destroy_ioreq_server: Destroy the IOREQ Server <id> servicing domain
  *                             <domid>.
@@ -383,6 +370,27 @@ struct xen_hvm_set_ioreq_server_state {
 typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
+/*
+ * Definitions relating to HVMOP/DMOP_create_ioreq_server.
+ */
+
+#define HVM_IOREQSRV_BUFIOREQ_OFF    DMOP_BUFIOREQ_OFF
+#define HVM_IOREQSRV_BUFIOREQ_LEGACY DMOP_BUFIOREQ_LEGACY
+#define HVM_IOREQSRV_BUFIOREQ_ATOMIC DMOP_BUFIOREQ_ATOMIC
+
+/*
+ * Definitions relating to HVMOP/DMOP_map_io_range_to_ioreq_server and
+ * HVMOP/DMOP_unmap_io_range_from_ioreq_server
+ */
+
+#define HVMOP_IO_RANGE_PORT   DMOP_IO_RANGE_PORT
+#define HVMOP_IO_RANGE_MEMORY DMOP_IO_RANGE_MEMORY
+#define HVMOP_IO_RANGE_PCI    DMOP_IO_RANGE_PCI
+
+#define HVMOP_PCI_SBDF        DMOP_PCI_SBDF
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #if defined(__i386__) || defined(__x86_64__)
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 711318e..b7d3173 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -634,12 +634,6 @@ static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index c94c1a2..0bcde39 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -166,7 +166,6 @@ struct xsm_operations {
     int (*hvm_set_isa_irq_level) (struct domain *d);
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
-    int (*hvm_ioreq_server) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -656,11 +655,6 @@ static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
     return xsm_ops->hvm_inject_msi(d);
 }
 
-static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->hvm_ioreq_server(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index a082b28..d544ec1 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -149,7 +149,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
-    set_to_dummy_if_null(ops, hvm_ioreq_server);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, machine_memory_map);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index d24bc01..d60c96d 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1522,11 +1522,6 @@ static int flask_hvm_inject_msi(struct domain *d)
     return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
 }
 
-static int flask_hvm_ioreq_server(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1805,7 +1800,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
-    .hvm_ioreq_server = flask_hvm_ioreq_server,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .machine_memory_map = flask_machine_memory_map,
-- 
2.1.4
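
For readers new to the interface, the IOREQ Server sub-ops introduced
above are intended to be used together roughly as follows. The sketch
below is purely illustrative and not part of the patch:
demo_setup_ioreq_server() is a made-up name, the code is assumed to live
inside libxc so that it can use the internal variadic do_dm_op() helper
visible in the xc_misc.c conversions later in the series, and do_dm_op()
is assumed to bounce the op buffer both ways so that OUT fields such as
the server id reach the caller.

  /* Sketch only: assumes <string.h>, xenctrl.h and the libxc-internal
   * do_dm_op() helper are available. */
  static int demo_setup_ioreq_server(xc_interface *xch, domid_t dom,
                                     ioservid_t *id)
  {
      struct xen_dm_op op;
      int rc;

      /* Create the server, asking for a buffered ioreq ring. */
      memset(&op, 0, sizeof(op));
      op.op = DMOP_create_ioreq_server;
      op.u.create_ioreq_server.handle_bufioreq = DMOP_BUFIOREQ_ATOMIC;
      rc = do_dm_op(xch, dom, 1, &op, sizeof(op));
      if ( rc )
          return rc;
      *id = op.u.create_ioreq_server.id;

      /* Claim PCI 0000:00:02.0's config space for this emulator. */
      memset(&op, 0, sizeof(op));
      op.op = DMOP_map_io_range_to_ioreq_server;
      op.u.map_io_range_to_ioreq_server.id = *id;
      op.u.map_io_range_to_ioreq_server.type = DMOP_IO_RANGE_PCI;
      op.u.map_io_range_to_ioreq_server.start = DMOP_PCI_SBDF(0, 0, 2, 0);
      op.u.map_io_range_to_ioreq_server.end = DMOP_PCI_SBDF(0, 0, 2, 0);
      rc = do_dm_op(xch, dom, 1, &op, sizeof(op));
      if ( rc )
          return rc;

      /* No emulation requests are delivered until the server is enabled. */
      memset(&op, 0, sizeof(op));
      op.op = DMOP_set_ioreq_server_state;
      op.u.set_ioreq_server_state.id = *id;
      op.u.set_ioreq_server_state.enabled = 1;
      return do_dm_op(xch, dom, 1, &op, sizeof(op));
  }

In a real emulator the ioreq pages advertised by
DMOP_get_ioreq_server_info would also be mapped, and the buffered ioreq
event channel bound, before the server is enabled; on teardown a single
DMOP_destroy_ioreq_server deregisters any remaining ranges.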


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
@ 2016-11-18 17:13 ` Paul Durrant
  2016-11-25 11:25   ` Jan Beulich
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:13 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Paul Durrant, Daniel De Graaf

This patch converts HVMOP_track_dirty_vram to DMOP_track_dirty_vram. The
dirty bitmap is now passed back to the caller in a secondary hypercall
buffer, so the handle type passed to the underlying shadow and HAP
functions is changed for compatibility with the new buffer scheme.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
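
Purely as an illustration of the caller-visible behaviour (which this
patch does not change, other than the bitmap now travelling in a
secondary DMOP buffer), a hypothetical consumer of
xc_hvm_track_dirty_vram() might look like the sketch below; the helper
name, the VRAM location and the redraw callback are all made up.

  /* Sketch only. */
  static void demo_refresh_vram(xc_interface *xch, domid_t dom,
                                uint64_t vram_first_pfn,
                                void (*redraw_page)(uint64_t pfn))
  {
      enum { VRAM_PAGES = 1024 }; /* e.g. 4MB of VRAM */
      unsigned long bitmap[(VRAM_PAGES + 8 * sizeof(unsigned long) - 1) /
                           (8 * sizeof(unsigned long))];
      unsigned int bits = 8 * sizeof(unsigned long);
      uint64_t i;

      if ( xc_hvm_track_dirty_vram(xch, dom, vram_first_pfn,
                                   VRAM_PAGES, bitmap) < 0 )
          return;

      for ( i = 0; i < VRAM_PAGES; i++ )
          if ( bitmap[i / bits] & (1UL << (i % bits)) )
              redraw_page(vram_first_pfn + i);
  }
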
---
 tools/flask/policy/modules/xen.if   |  4 ++--
 tools/libxc/xc_misc.c               | 31 ++++++++------------------
 xen/arch/x86/hvm/dm.c               | 43 +++++++++++++++++++++++++++++++++++--
 xen/arch/x86/hvm/hvm.c              | 41 -----------------------------------
 xen/arch/x86/mm/hap/hap.c           |  2 +-
 xen/arch/x86/mm/shadow/common.c     |  2 +-
 xen/include/asm-x86/hap.h           |  2 +-
 xen/include/asm-x86/shadow.h        |  2 +-
 xen/include/public/hvm/dm_op.h      | 16 ++++++++++++++
 xen/include/public/hvm/hvm_op.h     |  4 ++++
 xen/xsm/flask/hooks.c               |  3 ---
 xen/xsm/flask/policy/access_vectors |  2 --
 12 files changed, 76 insertions(+), 76 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 779232e..366273e 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -58,7 +58,7 @@ define(`create_domain_common', `
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
 	allow $1 $2:grant setup;
 	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc
-			setparam pcilevel trackdirtyvram nested altp2mhvm altp2mhvm_op };
+			setparam pcilevel nested altp2mhvm altp2mhvm_op };
 ')
 
 # create_domain(priv, target)
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 06e90de..3651cab 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -584,31 +584,18 @@ int xc_hvm_track_dirty_vram(
     uint64_t first_pfn, uint64_t nr,
     unsigned long *dirty_bitmap)
 {
-    DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, (nr+7) / 8, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_track_dirty_vram *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) )
-    {
-        PERROR("Could not bounce memory for xc_hvm_track_dirty_vram hypercall");
-        rc = -1;
-        goto out;
-    }
-
-    arg->domid     = dom;
-    arg->first_pfn = first_pfn;
-    arg->nr        = nr;
-    set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap);
+    op.op = DMOP_track_dirty_vram;
+    data = &op.u.track_dirty_vram;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_track_dirty_vram,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->first_pfn = first_pfn;
+    /* NOTE: The following assignment truncates nr to 32 bits */
+    data->nr = nr;
 
-out:
-    xc_hypercall_buffer_free(xch, arg);
-    xc_hypercall_bounce_post(xch, dirty_bitmap);
-    return rc;
+    return do_dm_op(xch, dom, 2, &op, sizeof(op),
+                    dirty_bitmap, (nr + 7) / 8);
 }
 
 int xc_hvm_modified_memory(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index c718a76..78dd6e7 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -18,6 +18,8 @@
 #include <xen/guest_access.h>
 #include <xen/sched.h>
 #include <asm/hvm/ioreq.h>
+#include <asm/hap.h>
+#include <asm/shadow.h>
 #include <xsm/xsm.h>
 
 static int dm_op_get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
@@ -74,6 +76,35 @@ static int dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
     return 0;
 }
 
+static int dm_op_track_dirty_vram(struct domain *d,
+                                  unsigned int nr_bufs,
+                                  XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                                  xen_pfn_t first_pfn, unsigned int nr)
+{
+    struct xen_dm_op_buf buf;
+    int rc;
+
+    if ( nr > GB(1) >> PAGE_SHIFT )
+        return -EINVAL;
+
+    if ( d->is_dying )
+        return -ESRCH;
+
+    if ( d->vcpu == NULL || d->vcpu[0] == NULL )
+        return -EINVAL;
+
+    rc = dm_op_get_buf(bufs, nr_bufs, 1, &buf);
+    if ( rc )
+        return rc;
+
+    if ( ((nr + 7) / 8) > buf.size )
+        return -EINVAL;
+
+    return shadow_mode_enabled(d) ?
+        shadow_track_dirty_vram(d, first_pfn, nr, buf.h) :
+        hap_track_dirty_vram(d, first_pfn, nr, buf.h);
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -157,11 +188,19 @@ long do_dm_op(domid_t domid,
         rc = hvm_destroy_ioreq_server(d, data->id);
         break;
     }
+    case DMOP_track_dirty_vram:
+    {
+        struct xen_dm_op_track_dirty_vram *data =
+            &op.u.track_dirty_vram;
+
+        rc = dm_op_track_dirty_vram(d, nr_bufs, bufs, data->first_pfn,
+                                    data->nr);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
     }
-
     if ( rc == -ERESTART )
         restart = true;
 
@@ -178,7 +217,7 @@ out:
                                            domid, nr_bufs, bufs);
 
     return rc;
-}
+    }
 
 /*
  * Local variables:
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b2a7772..0ca9ca0 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5537,47 +5537,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
 
-    case HVMOP_track_dirty_vram:
-    {
-        struct xen_hvm_track_dirty_vram a;
-        struct domain *d;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto tdv_fail;
-
-        if ( a.nr > GB(1) >> PAGE_SHIFT )
-            goto tdv_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto tdv_fail;
-
-        rc = -ESRCH;
-        if ( d->is_dying )
-            goto tdv_fail;
-
-        rc = -EINVAL;
-        if ( d->vcpu == NULL || d->vcpu[0] == NULL )
-            goto tdv_fail;
-
-        if ( shadow_mode_enabled(d) )
-            rc = shadow_track_dirty_vram(d, a.first_pfn, a.nr, a.dirty_bitmap);
-        else
-            rc = hap_track_dirty_vram(d, a.first_pfn, a.nr, a.dirty_bitmap);
-
-    tdv_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_modified_memory:
     {
         struct xen_hvm_modified_memory a;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3218fa2..4788e03 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -68,7 +68,7 @@
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
                          unsigned long nr,
-                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
 {
     long rc = 0;
     struct sh_dirty_vram *dirty_vram;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index ced2313..b99aa59 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3677,7 +3677,7 @@ static void sh_clean_dirty_bitmap(struct domain *d)
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long begin_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
 {
     int rc = 0;
     unsigned long end_pfn = begin_pfn + nr;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index c613836..dbf1da2 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -43,7 +43,7 @@ void  hap_vcpu_init(struct vcpu *v);
 int   hap_track_dirty_vram(struct domain *d,
                            unsigned long begin_pfn,
                            unsigned long nr,
-                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 void hap_set_alloc_for_pvh_dom0(struct domain *d, unsigned long num_pages);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 6d0aefb..c305171 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -63,7 +63,7 @@ int shadow_enable(struct domain *d, u32 mode);
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long first_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
 
 /* Handler for shadow control ops: operations from user-space to enable
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index dc1d2ad..c1557eb 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -179,6 +179,21 @@ struct xen_dm_op_destroy_ioreq_server {
     ioservid_t id;
 };
 
+/*
+ * DMOP_track_dirty_vram: Track modifications to the specified pfn range.
+ *
+ * NOTE: The bitmap passed back to the caller is passed in a
+ *       secondary buffer.
+ */
+#define DMOP_track_dirty_vram 7
+
+struct xen_dm_op_track_dirty_vram {
+    /* IN - number of pages to be tracked */
+    uint32_t nr;
+    /* IN - first pfn to track */
+    uint64_aligned_t first_pfn;
+};
+
 struct xen_dm_op {
     uint32_t op;
     union {
@@ -188,6 +203,7 @@ struct xen_dm_op {
         struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
         struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
         struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
+        struct xen_dm_op_track_dirty_vram track_dirty_vram;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index cf5e59a..1bb5221 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -96,6 +96,8 @@ typedef enum {
 /* Following tools-only interfaces may change in future. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /* Track dirty VRAM. */
 #define HVMOP_track_dirty_vram    6
 struct xen_hvm_track_dirty_vram {
@@ -112,6 +114,8 @@ struct xen_hvm_track_dirty_vram {
 typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 /* Notify that some pages got modified by the Device Model. */
 #define HVMOP_modified_memory    7
 struct xen_hvm_modified_memory {
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index d60c96d..7972546 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1177,9 +1177,6 @@ static int flask_hvm_param(struct domain *d, unsigned long op)
     case HVMOP_get_param:
         perm = HVM__GETPARAM;
         break;
-    case HVMOP_track_dirty_vram:
-        perm = HVM__TRACKDIRTYVRAM;
-        break;
     default:
         perm = HVM__HVMCTL;
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 49c9a9e..5af427f 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -266,8 +266,6 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_track_dirty_vram
-    trackdirtyvram
 # HVMOP_modified_memory, HVMOP_get_mem_type, HVMOP_set_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
                   ` (2 preceding siblings ...)
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
@ 2016-11-18 17:14 ` Paul Durrant
  2016-11-25 11:49   ` Jan Beulich
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Daniel De Graaf, Paul Durrant, Ian Jackson, Andrew Cooper

... HVMOP_set_pci_link_route

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
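
As a usage note, nothing changes for callers: a device model pulsing the
interrupt of a hypothetical emulated ISA device keeps using the same
libxc wrapper, which now marshals a DMOP underneath (the PCI INTx
equivalent is xc_hvm_set_pci_intx_level()). The helper below is made up.

  /*
   * Sketch only. Level semantics are unchanged:
   * 0 -> deassert, 1 -> assert.
   */
  static int demo_pulse_isa_irq(xc_interface *xch, domid_t dom,
                                uint8_t isa_irq)
  {
      int rc = xc_hvm_set_isa_irq_level(xch, dom, isa_irq, 1);

      if ( rc < 0 )
          return rc;

      /* ... give the guest a chance to service the interrupt ... */

      return xc_hvm_set_isa_irq_level(xch, dom, isa_irq, 0);
  }
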
---
 tools/flask/policy/modules/xen.if   |   8 +--
 tools/libxc/xc_misc.c               |  81 +++++++--------------
 xen/arch/x86/hvm/dm.c               |  80 +++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 ------------------------------------
 xen/include/public/hvm/dm_op.h      |  42 +++++++++++
 xen/include/public/hvm/hvm_op.h     |   4 ++
 xen/include/xsm/dummy.h             |  18 -----
 xen/include/xsm/xsm.h               |  18 -----
 xen/xsm/dummy.c                     |   3 -
 xen/xsm/flask/hooks.c               |  15 ----
 xen/xsm/flask/policy/access_vectors |   6 --
 11 files changed, 154 insertions(+), 257 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 366273e..e6dfaf0 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -57,8 +57,8 @@ define(`create_domain_common', `
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc
-			setparam pcilevel nested altp2mhvm altp2mhvm_op };
+	allow $1 $2:hvm { cacheattr getparam hvmctl sethvmc
+			setparam nested altp2mhvm altp2mhvm_op };
 ')
 
 # create_domain(priv, target)
@@ -93,7 +93,7 @@ define(`manage_domain', `
 #   (inbound migration is the same as domain creation)
 define(`migrate_domain_out', `
 	allow $1 domxen_t:mmu map_read;
-	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
 	allow $1 $2:domain2 gettsc;
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 3651cab..842b699 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -473,30 +473,19 @@ int xc_hvm_set_pci_intx_level(
     uint8_t domain, uint8_t bus, uint8_t device, uint8_t intx,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_intx_level hypercall");
-        return -1;
-    }
-
-    arg->domid  = dom;
-    arg->domain = domain;
-    arg->bus    = bus;
-    arg->device = device;
-    arg->intx   = intx;
-    arg->level  = level;
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_intx_level *data;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_intx_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = DMOP_set_pci_intx_level;
+    data = &op.u.set_pci_intx_level;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->domain = domain;
+    data->bus = bus;
+    data->device = device;
+    data->intx = intx;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_isa_irq_level(
@@ -504,53 +493,31 @@ int xc_hvm_set_isa_irq_level(
     uint8_t isa_irq,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_isa_irq_level hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_isa_irq_level *data;
 
-    arg->domid   = dom;
-    arg->isa_irq = isa_irq;
-    arg->level   = level;
+    op.op = DMOP_set_isa_irq_level;
+    data = &op.u.set_isa_irq_level;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_isa_irq_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(xch, arg);
+    data->isa_irq = isa_irq;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_pci_link_route(
     xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_link_route hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_link_route *data;
 
-    arg->domid   = dom;
-    arg->link    = link;
-    arg->isa_irq = isa_irq;
+    op.op = DMOP_set_pci_link_route;
+    data = &op.u.set_pci_link_route;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_link_route,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(xch, arg);
+    data->link = link;
+    data->isa_irq = isa_irq;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_msi(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 78dd6e7..b8edf2c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -105,6 +105,60 @@ static int dm_op_track_dirty_vram(struct domain *d,
         hap_track_dirty_vram(d, first_pfn, nr, buf.h);
 }
 
+static int dm_op_set_pci_intx_level(struct domain *d, uint8_t domain,
+                                    uint8_t bus, uint8_t device,
+                                    uint8_t intx, uint8_t level)
+{
+    if ( domain != 0 || bus != 0 || device > 0x1f || intx > 3 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_pci_intx_deassert(d, device, intx);
+        break;
+    case 1:
+        hvm_pci_intx_assert(d, device, intx);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int dm_op_set_isa_irq_level(struct domain *d, uint8_t isa_irq,
+                                   uint8_t level)
+{
+    if ( isa_irq > 15 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_isa_irq_deassert(d, isa_irq);
+        break;
+    case 1:
+        hvm_isa_irq_assert(d, isa_irq);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
+                                    uint8_t isa_irq)
+{
+    if ( link > 3 || isa_irq > 15 )
+        return -EINVAL;
+
+    hvm_set_pci_link_route(d, link, isa_irq);
+
+    return 0;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -197,6 +251,32 @@ long do_dm_op(domid_t domid,
                                     data->nr);
         break;
     }
+    case DMOP_set_pci_intx_level:
+    {
+        struct xen_dm_op_set_pci_intx_level *data =
+            &op.u.set_pci_intx_level;
+
+        rc = dm_op_set_pci_intx_level(d, data->domain, data->bus,
+                                      data->device, data->intx,
+                                      data->level);
+        break;
+    }
+    case DMOP_set_isa_irq_level:
+    {
+        struct xen_dm_op_set_isa_irq_level *data =
+            &op.u.set_isa_irq_level;
+
+        rc = dm_op_set_isa_irq_level(d, data->isa_irq, data->level);
+        break;
+    }
+    case DMOP_set_pci_link_route:
+    {
+        struct xen_dm_op_set_pci_link_route *data =
+            &op.u.set_pci_link_route;
+
+        rc = dm_op_set_pci_link_route(d, data->link, data->isa_irq);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0ca9ca0..14d3b87 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4455,50 +4455,6 @@ void hvm_hypercall_page_initialise(struct domain *d,
     hvm_funcs.init_hypercall_page(d, hypercall_page);
 }
 
-static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
-{
-    struct xen_hvm_set_pci_intx_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_intx_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_pci_intx_deassert(d, op.device, op.intx);
-        break;
-    case 1:
-        hvm_pci_intx_assert(d, op.device, op.intx);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
 {
     struct domain *d = v->domain;
@@ -4642,83 +4598,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
-{
-    struct xen_hvm_set_isa_irq_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( op.isa_irq > 15 )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_isa_irq_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_isa_irq_deassert(d, op.isa_irq);
-        break;
-    case 1:
-        hvm_isa_irq_assert(d, op.isa_irq);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
-{
-    struct xen_hvm_set_pci_link_route op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.link > 3) || (op.isa_irq > 15) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_link_route(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    hvm_set_pci_link_route(d, op.link, op.isa_irq);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_inject_msi(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
@@ -5513,26 +5392,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_param_t));
         break;
 
-    case HVMOP_set_pci_intx_level:
-        rc = hvmop_set_pci_intx_level(
-            guest_handle_cast(arg, xen_hvm_set_pci_intx_level_t));
-        break;
-
-    case HVMOP_set_isa_irq_level:
-        rc = hvmop_set_isa_irq_level(
-            guest_handle_cast(arg, xen_hvm_set_isa_irq_level_t));
-        break;
-
     case HVMOP_inject_msi:
         rc = hvmop_inject_msi(
             guest_handle_cast(arg, xen_hvm_inject_msi_t));
         break;
 
-    case HVMOP_set_pci_link_route:
-        rc = hvmop_set_pci_link_route(
-            guest_handle_cast(arg, xen_hvm_set_pci_link_route_t));
-        break;
-
     case HVMOP_flush_tlbs:
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index c1557eb..b47014b 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -194,6 +194,45 @@ struct xen_dm_op_track_dirty_vram {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * DMOP_set_pci_intx_level: Set the logical level of one of a domain's
+ *                          PCI INTx pins.
+ */
+#define DMOP_set_pci_intx_level 8
+
+struct xen_dm_op_set_pci_intx_level {
+    /* IN - PCI INTx identification (domain:bus:device:intx) */
+    uint8_t  domain, bus, device, intx;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t  level;
+};
+
+/*
+ * DMOP_set_isa_irq_level: Set the logical level of one of a domain's
+ *                         ISA IRQ lines.
+ */
+#define DMOP_set_isa_irq_level 9
+
+struct xen_dm_op_set_isa_irq_level {
+    /* IN - ISA IRQ (0-15) */
+    uint8_t  isa_irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t  level;
+};
+
+/*
+ * DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
+ */
+#define DMOP_set_pci_link_route 10
+
+struct xen_dm_op_set_pci_link_route {
+    /* IN - PCI INTx line (0-3) */
+    uint8_t  link;
+    /* IN - ISA IRQ (1-15) or 0 -> disable link */
+    uint8_t  isa_irq;
+};
+
+
 struct xen_dm_op {
     uint32_t op;
     union {
@@ -204,6 +243,9 @@ struct xen_dm_op {
         struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
         struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
         struct xen_dm_op_track_dirty_vram track_dirty_vram;
+        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
+        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
+        struct xen_dm_op_set_pci_link_route set_pci_link_route;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1bb5221..1b9e3e0 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -39,6 +39,8 @@ struct xen_hvm_param {
 typedef struct xen_hvm_param xen_hvm_param_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_param_t);
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /* Set the logical level of one of a domain's PCI INTx wires. */
 #define HVMOP_set_pci_intx_level  2
 struct xen_hvm_set_pci_intx_level {
@@ -77,6 +79,8 @@ struct xen_hvm_set_pci_link_route {
 typedef struct xen_hvm_set_pci_link_route xen_hvm_set_pci_link_route_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 /* Flushes all VCPU TLBs: @arg must be NULL. */
 #define HVMOP_flush_tlbs          5
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index b7d3173..47c6072 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -610,24 +610,6 @@ static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint3
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_set_pci_intx_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_isa_irq_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_pci_link_route(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 0bcde39..cb32644 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -162,9 +162,6 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*hvm_set_pci_intx_level) (struct domain *d);
-    int (*hvm_set_isa_irq_level) (struct domain *d);
-    int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
@@ -635,21 +632,6 @@ static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint3
     return xsm_ops->shadow_control(d, op);
 }
 
-static inline int xsm_hvm_set_pci_intx_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_intx_level(d);
-}
-
-static inline int xsm_hvm_set_isa_irq_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_isa_irq_level(d);
-}
-
-static inline int xsm_hvm_set_pci_link_route (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_link_route(d);
-}
-
 static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->hvm_inject_msi(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index d544ec1..f1568dd 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -145,9 +145,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, hvm_set_pci_intx_level);
-    set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
-    set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 7972546..088aa87 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1499,21 +1499,6 @@ static int flask_ioport_mapping(struct domain *d, uint32_t start, uint32_t end,
     return flask_ioport_permission(d, start, end, access);
 }
 
-static int flask_hvm_set_pci_intx_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);
-}
-
-static int flask_hvm_set_isa_irq_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__IRQLEVEL);
-}
-
-static int flask_hvm_set_pci_link_route(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCIROUTE);
-}
-
 static int flask_hvm_inject_msi(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 5af427f..708cfe6 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -257,12 +257,6 @@ class hvm
     setparam
 # HVMOP_get_param
     getparam
-# HVMOP_set_pci_intx_level (also needs hvmctl)
-    pcilevel
-# HVMOP_set_isa_irq_level
-    irqlevel
-# HVMOP_set_pci_link_route
-    pciroute
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
                   ` (3 preceding siblings ...)
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
@ 2016-11-18 17:14 ` Paul Durrant
  2016-11-25 13:25   ` Jan Beulich
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Daniel De Graaf, Paul Durrant, Ian Jackson, Andrew Cooper

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
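
For illustration (the caller-visible behaviour is unchanged): an emulator
that has written into guest memory, e.g. by DMA during live migration,
still reports the dirtied ranges through the same wrapper, now backed by
DMOP_modified_memory. The extent list type and helper below are made up.

  /* Sketch only. */
  struct demo_dma_extent {
      uint64_t first_pfn;
      uint64_t nr;
  };

  static int demo_mark_dma_dirty(xc_interface *xch, domid_t dom,
                                 const struct demo_dma_extent *extents,
                                 unsigned int count)
  {
      unsigned int i;

      for ( i = 0; i < count; i++ )
      {
          int rc = xc_hvm_modified_memory(xch, dom, extents[i].first_pfn,
                                          extents[i].nr);

          if ( rc < 0 )
              return rc;
      }

      return 0;
  }
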
---
 tools/libxc/xc_misc.c               | 26 +++++-----------
 xen/arch/x86/hvm/dm.c               | 54 +++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 60 -------------------------------------
 xen/include/public/hvm/dm_op.h      | 14 +++++++++
 xen/include/public/hvm/hvm_op.h     |  4 +--
 xen/xsm/flask/policy/access_vectors |  2 +-
 6 files changed, 79 insertions(+), 81 deletions(-)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 842b699..a97864e 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -568,27 +568,17 @@ int xc_hvm_track_dirty_vram(
 int xc_hvm_modified_memory(
     xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_modified_memory, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_modified_memory hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_modified_memory *data;
 
-    arg->domid     = dom;
-    arg->first_pfn = first_pfn;
-    arg->nr        = nr;
+    op.op = DMOP_modified_memory;
+    data = &op.u.modified_memory;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_modified_memory,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(xch, arg);
+    data->first_pfn = first_pfn;
+    /* NOTE: The following assignment truncates nr to 32 bits */
+    data->nr = nr;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_mem_type(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index b8edf2c..0dcd454 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -17,6 +17,7 @@
 #include <xen/hypercall.h>
 #include <xen/guest_access.h>
 #include <xen/sched.h>
+#include <xen/event.h>
 #include <asm/hvm/ioreq.h>
 #include <asm/hap.h>
 #include <asm/shadow.h>
@@ -159,6 +160,51 @@ static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
     return 0;
 }
 
+static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
+                                 unsigned int *nr)
+{
+    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
+    unsigned int iter;
+    int rc;
+
+    if ( (*first_pfn > last_pfn) ||
+         (last_pfn > domain_get_maximum_gpfn(d)) )
+        return -EINVAL;
+
+    if ( !paging_mode_log_dirty(d) )
+        return 0;
+
+    iter = 0;
+    rc = 0;
+    while ( iter < *nr )
+    {
+        unsigned long pfn = *first_pfn + iter;
+        struct page_info *page;
+
+        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
+        if ( page )
+        {
+            paging_mark_dirty(d, page_to_mfn(page));
+            /* These are most probably not page tables any more */
+            /* don't take a long time and don't die either */
+            sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
+            put_page(page);
+        }
+
+        /* Check for continuation if it's not the last iteration */
+        if ( (++iter < *nr) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    *first_pfn += iter;
+    *nr -= iter;
+
+    return rc;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -277,6 +323,14 @@ long do_dm_op(domid_t domid,
         rc = dm_op_set_pci_link_route(d, data->link, data->isa_irq);
         break;
     }
+    case DMOP_modified_memory:
+    {
+        struct xen_dm_op_modified_memory *data =
+            &op.u.modified_memory;
+
+        rc = dm_op_modified_memory(d, &data->first_pfn, &data->nr);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 14d3b87..3b2e9d5 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5368,7 +5368,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     default:
         mask = ~0UL;
         break;
-    case HVMOP_modified_memory:
     case HVMOP_set_mem_type:
         mask = HVMOP_op_mask;
         break;
@@ -5401,65 +5400,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
 
-    case HVMOP_modified_memory:
-    {
-        struct xen_hvm_modified_memory a;
-        struct domain *d;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto modmem_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto modmem_fail;
-
-        rc = -EINVAL;
-        if ( a.nr < start_iter ||
-             ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-             ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-            goto modmem_fail;
-
-        rc = 0;
-        if ( !paging_mode_log_dirty(d) )
-            goto modmem_fail;
-
-        while ( a.nr > start_iter )
-        {
-            unsigned long pfn = a.first_pfn + start_iter;
-            struct page_info *page;
-
-            page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
-            if ( page )
-            {
-                paging_mark_dirty(d, page_to_mfn(page));
-                /* These are most probably not page tables any more */
-                /* don't take a long time and don't die either */
-                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
-                put_page(page);
-            }
-
-            /* Check for continuation if it's not the last interation */
-            if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-                 hypercall_preempt_check() )
-            {
-                rc = -ERESTART;
-                break;
-            }
-        }
-
-    modmem_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_get_mem_type:
         rc = hvmop_get_mem_type(
             guest_handle_cast(arg, xen_hvm_get_mem_type_t));
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index b47014b..d2065f2 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -232,6 +232,19 @@ struct xen_dm_op_set_pci_link_route {
     uint8_t  isa_irq;
 };
 
+/*
+ * DMOP_modified_memory: Notify that some pages were modified by an
+ *                       emulator.
+ */
+#define DMOP_modified_memory 11
+
+struct xen_dm_op_modified_memory {
+    /* IN - number of contiguous pages modified */
+    uint32_t nr;
+    /* IN - first pfn modified */
+    uint64_aligned_t first_pfn;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -246,6 +259,7 @@ struct xen_dm_op {
         struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
+        struct xen_dm_op_modified_memory modified_memory;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1b9e3e0..45879cf 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -118,8 +118,6 @@ struct xen_hvm_track_dirty_vram {
 typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
 
-#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
-
 /* Notify that some pages got modified by the Device Model. */
 #define HVMOP_modified_memory    7
 struct xen_hvm_modified_memory {
@@ -133,6 +131,8 @@ struct xen_hvm_modified_memory {
 typedef struct xen_hvm_modified_memory xen_hvm_modified_memory_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_modified_memory_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 #define HVMOP_set_mem_type    8
 /* Notify that a region of memory is to be treated in a specific way. */
 struct xen_hvm_set_mem_type {
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 708cfe6..2041ca5 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -260,7 +260,7 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_modified_memory, HVMOP_get_mem_type, HVMOP_set_mem_type,
+# HVMOP_get_mem_type, HVMOP_set_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread

* [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
                   ` (4 preceding siblings ...)
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
@ 2016-11-18 17:14 ` Paul Durrant
  2016-11-25 13:50   ` Jan Beulich
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Daniel De Graaf, Paul Durrant, Ian Jackson, Andrew Cooper

This conversion also removes the need for handling HVMOP restarts
(continuations), so that infrastructure is removed as well.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
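
As a usage note (illustrative only; the helper names are made up):
callers keep using the unchanged xc_hvm_set_mem_type() wrapper, which now
issues DMOP_set_mem_type underneath, for instance to flip a pfn range to
HVMMEM_ioreq_server and later back to ordinary RAM; those two transitions
are among the ones explicitly permitted by dm_op_allow_p2m_type_change()
below.

  /* Sketch only. */
  static int demo_claim_range(xc_interface *xch, domid_t dom,
                              uint64_t first_pfn, uint64_t nr)
  {
      return xc_hvm_set_mem_type(xch, dom, HVMMEM_ioreq_server,
                                 first_pfn, nr);
  }

  /* Revert the range to ordinary RAM once emulation is finished. */
  static int demo_release_range(xc_interface *xch, domid_t dom,
                                uint64_t first_pfn, uint64_t nr)
  {
      return xc_hvm_set_mem_type(xch, dom, HVMMEM_ram_rw, first_pfn, nr);
  }
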
---
 tools/libxc/xc_misc.c               |  28 +++-----
 xen/arch/x86/hvm/dm.c               |  92 ++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 +-----------------------------------
 xen/include/public/hvm/dm_op.h      |  16 +++++
 xen/include/public/hvm/hvm_op.h     |   4 +-
 xen/xsm/flask/policy/access_vectors |   2 +-
 6 files changed, 121 insertions(+), 157 deletions(-)

diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index a97864e..607cf80 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -584,28 +584,18 @@ int xc_hvm_modified_memory(
 int xc_hvm_set_mem_type(
     xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_mem_type *data;
 
-    arg->domid        = dom;
-    arg->hvmmem_type  = mem_type;
-    arg->first_pfn    = first_pfn;
-    arg->nr           = nr;
+    op.op = DMOP_set_mem_type;
+    data = &op.u.set_mem_type;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_mem_type,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-
-    xc_hypercall_buffer_free(xch, arg);
+    data->mem_type = mem_type;
+    data->first_pfn = first_pfn;
+    /* NOTE: The following assignment truncates nr to 32 bits */
+    data->nr = nr;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_trap(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 0dcd454..969b68c 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -160,6 +160,16 @@ static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
     return 0;
 }
 
+static bool_t dm_op_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
+{
+    if ( p2m_is_ram(old) ||
+         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
+         (old == p2m_ioreq_server && new == p2m_ram_rw) )
+        return 1;
+
+    return 0;
+}
+
 static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
                                  unsigned int *nr)
 {
@@ -205,6 +215,79 @@ static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
     return rc;
 }
 
+
+static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t mem_type,
+                              xen_pfn_t *first_pfn, unsigned int *nr)
+{
+    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
+    unsigned int iter;
+    int rc;
+
+    /* Interface types to internal p2m types */
+    static const p2m_type_t memtype[] = {
+        [HVMMEM_ram_rw]  = p2m_ram_rw,
+        [HVMMEM_ram_ro]  = p2m_ram_ro,
+        [HVMMEM_mmio_dm] = p2m_mmio_dm,
+        [HVMMEM_unused] = p2m_invalid,
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
+    };
+
+    if ( (*first_pfn > last_pfn) ||
+         (last_pfn > domain_get_maximum_gpfn(d)) )
+        return -EINVAL;
+
+    if ( mem_type >= ARRAY_SIZE(memtype) ||
+         unlikely(mem_type == HVMMEM_unused) )
+        return -EINVAL;
+
+    iter = 0;
+    rc = 0;
+    while ( iter < *nr )
+    {
+        unsigned long pfn = *first_pfn + iter;
+        p2m_type_t t;
+
+        get_gfn_unshare(d, pfn, &t);
+        if ( p2m_is_paging(t) )
+        {
+            put_gfn(d, pfn);
+            p2m_mem_paging_populate(d, pfn);
+            rc = -EAGAIN;
+            break;
+        }
+        if ( p2m_is_shared(t) )
+        {
+            put_gfn(d, pfn);
+            rc = -EAGAIN;
+            break;
+        }
+        if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
+        {
+            put_gfn(d, pfn);
+            rc = -EINVAL;
+            break;
+        }
+
+        rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
+        put_gfn(d, pfn);
+
+        if ( rc )
+            break;
+
+        /* Check for continuation if it's not the last iteration */
+        if ( (++iter < *nr) && hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    *first_pfn += iter;
+    *nr -= iter;
+
+    return rc;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -331,6 +414,15 @@ long do_dm_op(domid_t domid,
         rc = dm_op_modified_memory(d, &data->first_pfn, &data->nr);
         break;
     }
+    case DMOP_set_mem_type:
+    {
+        struct xen_dm_op_set_mem_type *data =
+            &op.u.set_mem_type;
+
+        rc = dm_op_set_mem_type(d, data->mem_type, &data->first_pfn,
+                                &data->nr);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3b2e9d5..83c4063 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5249,132 +5249,11 @@ static int hvmop_get_mem_type(
     return rc;
 }
 
-/*
- * Note that this value is effectively part of the ABI, even if we don't need
- * to make it a formal part of it: A guest suspended for migration in the
- * middle of a continuation would fail to work if resumed on a hypervisor
- * using a different value.
- */
-#define HVMOP_op_mask 0xff
-
-static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
-{
-    if ( p2m_is_ram(old) ||
-         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-         (old == p2m_ioreq_server && new == p2m_ram_rw) )
-        return 1;
-
-    return 0;
-}
-
-static int hvmop_set_mem_type(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_mem_type_t) arg,
-    unsigned long *iter)
-{
-    unsigned long start_iter = *iter;
-    struct xen_hvm_set_mem_type a;
-    struct domain *d;
-    int rc;
-
-    /* Interface types to internal p2m types */
-    static const p2m_type_t memtype[] = {
-        [HVMMEM_ram_rw]  = p2m_ram_rw,
-        [HVMMEM_ram_ro]  = p2m_ram_ro,
-        [HVMMEM_mmio_dm] = p2m_mmio_dm,
-        [HVMMEM_unused] = p2m_invalid,
-        [HVMMEM_ioreq_server] = p2m_ioreq_server
-    };
-
-    if ( copy_from_guest(&a, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
-    if ( rc )
-        goto out;
-
-    rc = -EINVAL;
-    if ( a.nr < start_iter ||
-         ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-         ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-        goto out;
-
-    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
-         unlikely(a.hvmmem_type == HVMMEM_unused) )
-        goto out;
-
-    while ( a.nr > start_iter )
-    {
-        unsigned long pfn = a.first_pfn + start_iter;
-        p2m_type_t t;
-
-        get_gfn_unshare(d, pfn, &t);
-        if ( p2m_is_paging(t) )
-        {
-            put_gfn(d, pfn);
-            p2m_mem_paging_populate(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( p2m_is_shared(t) )
-        {
-            put_gfn(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( !hvm_allow_p2m_type_change(t, memtype[a.hvmmem_type]) )
-        {
-            put_gfn(d, pfn);
-            goto out;
-        }
-
-        rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
-        put_gfn(d, pfn);
-
-        if ( rc )
-            goto out;
-
-        /* Check for continuation if it's not the last interation */
-        if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-             hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            goto out;
-        }
-    }
-    rc = 0;
-
- out:
-    rcu_unlock_domain(d);
-    *iter = start_iter;
-
-    return rc;
-}
-
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    unsigned long start_iter, mask;
     long rc = 0;
 
-    switch ( op & HVMOP_op_mask )
-    {
-    default:
-        mask = ~0UL;
-        break;
-    case HVMOP_set_mem_type:
-        mask = HVMOP_op_mask;
-        break;
-    }
-
-    start_iter = op & ~mask;
-    switch ( op &= mask )
+    switch ( op )
     {
     case HVMOP_set_evtchn_upcall_vector:
         rc = hvmop_set_evtchn_upcall_vector(
@@ -5405,12 +5284,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_get_mem_type_t));
         break;
 
-    case HVMOP_set_mem_type:
-        rc = hvmop_set_mem_type(
-            guest_handle_cast(arg, xen_hvm_set_mem_type_t),
-            &start_iter);
-        break;
-
     case HVMOP_pagetable_dying:
     {
         struct xen_hvm_pagetable_dying a;
@@ -5519,13 +5392,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
     }
 
-    if ( rc == -ERESTART )
-    {
-        ASSERT(!(start_iter & mask));
-        rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
-                                           op | start_iter, arg);
-    }
-
     return rc;
 }
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index d2065f2..247cac6 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -245,6 +245,21 @@ struct xen_dm_op_modified_memory {
     uint64_t first_pfn;
 };
 
+/*
+ * DMOP_set_mem_type: Notify that a region of memory is to be treated in a
+ *                    specific way. (See definition of hvmmem_type_t).
+ */
+#define DMOP_set_mem_type 12
+
+struct xen_dm_op_set_mem_type {
+    /* IN - number of contiguous pages */
+    uint32_t nr;
+    /* IN - first pfn in region */
+    uint64_t first_pfn;
+    /* IN - new hvmmem_type_t of region */
+    uint16_t mem_type;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -260,6 +275,7 @@ struct xen_dm_op {
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
+        struct xen_dm_op_set_mem_type set_mem_type;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 45879cf..2e9a1f6 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -131,8 +131,6 @@ struct xen_hvm_modified_memory {
 typedef struct xen_hvm_modified_memory xen_hvm_modified_memory_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_modified_memory_t);
 
-#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
-
 #define HVMOP_set_mem_type    8
 /* Notify that a region of memory is to be treated in a specific way. */
 struct xen_hvm_set_mem_type {
@@ -148,6 +146,8 @@ struct xen_hvm_set_mem_type {
 typedef struct xen_hvm_set_mem_type xen_hvm_set_mem_type_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_mem_type_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 /* Hint from PV drivers for pagetable destruction. */
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 2041ca5..125210b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -260,7 +260,7 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_get_mem_type, HVMOP_set_mem_type,
+# HVMOP_get_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread
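
A note for readers following the conversion: the xc_misc.c hunk above keeps
the libxc signature of xc_hvm_set_mem_type() unchanged, so existing device
models keep compiling while the transport underneath switches from HVMOP to
DMOP. A minimal caller sketch follows; it assumes a normally opened
xc_interface handle and an existing HVM guest, and is illustrative only
rather than part of the patch:

    #include <xenctrl.h>
    #include <stdio.h>

    /* Mark one guest page read-only via the unchanged libxc wrapper. The
     * wrapper now marshals a struct xen_dm_op and issues DMOP_set_mem_type. */
    static int make_page_ro(domid_t domid, uint64_t pfn)
    {
        xc_interface *xch = xc_interface_open(NULL, NULL, 0);
        int rc;

        if ( !xch )
            return -1;

        rc = xc_hvm_set_mem_type(xch, domid, HVMMEM_ram_ro, pfn, 1);
        if ( rc )
            fprintf(stderr, "xc_hvm_set_mem_type failed: rc=%d\n", rc);

        xc_interface_close(xch);
        return rc;
    }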

* [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
                   ` (5 preceding siblings ...)
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
@ 2016-11-18 17:14 ` Paul Durrant
  2016-11-25 14:07   ` Jan Beulich
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
  7 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:14 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Daniel De Graaf, Paul Durrant, Ian Jackson, Andrew Cooper

NOTE: The definitions of HVMOP_TRAP_* have to persist for new interface
      versions as they are already in use by callers of the libxc
      interface.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/flask/policy/modules/xen.if   |  2 +-
 tools/libxc/xc_misc.c               | 60 ++++++++++-------------------
 xen/arch/x86/hvm/dm.c               | 44 ++++++++++++++++++++-
 xen/arch/x86/hvm/hvm.c              | 76 -------------------------------------
 xen/include/public/hvm/dm_op.h      | 45 ++++++++++++++++++++++
 xen/include/public/hvm/hvm_op.h     | 26 ++++++++-----
 xen/include/xsm/dummy.h             |  6 ---
 xen/include/xsm/xsm.h               |  6 ---
 xen/xsm/dummy.c                     |  1 -
 xen/xsm/flask/hooks.c               |  6 ---
 xen/xsm/flask/policy/access_vectors |  5 +--
 11 files changed, 126 insertions(+), 151 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index e6dfaf0..70ce47b 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 607cf80..67fb779 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -521,29 +521,18 @@ int xc_hvm_set_pci_link_route(
 }
 
 int xc_hvm_inject_msi(
-    xc_interface *xch, domid_t dom, uint64_t addr, uint32_t data)
+    xc_interface *xch, domid_t dom, uint64_t msi_addr, uint32_t msi_data)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_inject_msi, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_inject_msi hypercall");
-        return -1;
-    }
-
-    arg->domid = dom;
-    arg->addr  = addr;
-    arg->data  = data;
+    struct xen_dm_op op;
+    struct xen_dm_op_inject_msi *data;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_inject_msi,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = DMOP_inject_msi;
+    data = &op.u.inject_msi;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->addr = msi_addr;
+    data->data = msi_data;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_track_dirty_vram(
@@ -603,31 +592,20 @@ int xc_hvm_inject_trap(
     uint32_t type, uint32_t error_code, uint32_t insn_len,
     uint64_t cr2)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_inject_trap, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_inject_trap hypercall");
-        return -1;
-    }
-
-    arg->domid       = dom;
-    arg->vcpuid      = vcpu;
-    arg->vector      = vector;
-    arg->type        = type;
-    arg->error_code  = error_code;
-    arg->insn_len    = insn_len;
-    arg->cr2         = cr2;
+    struct xen_dm_op op;
+    struct xen_dm_op_inject_trap *data;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_inject_trap,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = DMOP_inject_trap;
+    data = &op.u.inject_trap;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->vcpuid = vcpu;
+    data->vector = vector;
+    data->type = type;
+    data->error_code = error_code;
+    data->insn_len = insn_len;
+    data->cr2 = cr2;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_livepatch_upload(xc_interface *xch,
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 969b68c..ee0aeed 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -215,7 +215,6 @@ static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
     return rc;
 }
 
-
 static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t mem_type,
                               xen_pfn_t *first_pfn, unsigned int *nr)
 {
@@ -288,6 +287,31 @@ static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t mem_type,
     return rc;
 }
 
+static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
+                             uint16_t vector, uint8_t type,
+                             uint8_t insn_len, uint32_t error_code,
+                             unsigned long cr2)
+{
+    struct vcpu *v;
+
+    if ( vector > INT16_MAX )
+        return -EINVAL;
+
+    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+        return -EINVAL;
+
+    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+        return -EBUSY;
+
+    v->arch.hvm_vcpu.inject_trap.type = type;
+    v->arch.hvm_vcpu.inject_trap.insn_len = insn_len;
+    v->arch.hvm_vcpu.inject_trap.error_code = error_code;
+    v->arch.hvm_vcpu.inject_trap.cr2 = cr2;
+    v->arch.hvm_vcpu.inject_trap.vector = vector;
+
+    return 0;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -423,6 +447,24 @@ long do_dm_op(domid_t domid,
                                 &data->nr);
         break;
     }
+    case DMOP_inject_trap:
+    {
+        struct xen_dm_op_inject_trap *data =
+            &op.u.inject_trap;
+
+        rc = dm_op_inject_trap(d, data->vcpuid, data->vector,
+                               data->type, data->insn_len,
+                               data->error_code, data->cr2);
+        break;
+    }
+    case DMOP_inject_msi:
+    {
+        struct xen_dm_op_inject_msi *data =
+            &op.u.inject_msi;
+
+        rc = hvm_inject_msi(d, data->addr, data->data);
+        break;
+    }
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 83c4063..90c4b43 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4598,35 +4598,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
-{
-    struct xen_hvm_inject_msi op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_inject_msi(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = hvm_inject_msi(d, op.addr, op.data);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_flush_tlb_all(void)
 {
     struct domain *d = current->domain;
@@ -5270,11 +5241,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_param_t));
         break;
 
-    case HVMOP_inject_msi:
-        rc = hvmop_inject_msi(
-            guest_handle_cast(arg, xen_hvm_inject_msi_t));
-        break;
-
     case HVMOP_flush_tlbs:
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
@@ -5331,48 +5297,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case HVMOP_inject_trap: 
-    {
-        xen_hvm_inject_trap_t tr;
-        struct domain *d;
-        struct vcpu *v;
-
-        if ( copy_from_guest(&tr, arg, 1 ) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(tr.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto injtrap_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto injtrap_fail;
-
-        rc = -ENOENT;
-        if ( tr.vcpuid >= d->max_vcpus || (v = d->vcpu[tr.vcpuid]) == NULL )
-            goto injtrap_fail;
-        
-        if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
-            rc = -EBUSY;
-        else 
-        {
-            v->arch.hvm_vcpu.inject_trap.vector = tr.vector;
-            v->arch.hvm_vcpu.inject_trap.type = tr.type;
-            v->arch.hvm_vcpu.inject_trap.error_code = tr.error_code;
-            v->arch.hvm_vcpu.inject_trap.insn_len = tr.insn_len;
-            v->arch.hvm_vcpu.inject_trap.cr2 = tr.cr2;
-            rc = 0;
-        }
-
-    injtrap_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_guest_request_vm_event:
         if ( guest_handle_is_null(arg) )
             monitor_guest_request();
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 247cac6..bd0c7e0 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -260,6 +260,49 @@ struct xen_dm_op_set_mem_type {
     uint16_t mem_type;
 };
 
+/*
+ * DMOP_inject_trap: Inject a trap into a VCPU, which will get taken up
+ *                   when it is next scheduled.
+ *
+ * Note that the caller should know enough of the state of the CPU before
+ * injecting, to know what the effect of injecting the trap will be.
+ */
+#define DMOP_inject_trap 13
+
+struct xen_dm_op_inject_trap {
+    /* IN - index of vCPU */
+    uint32_t vcpuid;
+    /* IN - interrupt vector */
+    uint16_t vector;
+    /* IN - trap type (DMOP_TRAP_* ) */
+    uint8_t type;
+/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
+# define DMOP_TRAP_ext_int    0 /* external interrupt */
+# define DMOP_TRAP_nmi        2 /* nmi */
+# define DMOP_TRAP_hw_exc     3 /* hardware exception */
+# define DMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
+# define DMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
+# define DMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
+    /* IN - instruction length */
+    uint8_t insn_len;
+    /* IN - error code (or ~0 to skip) */
+    uint32_t error_code;
+    /* IN - CR2 for page faults */
+    uint64_aligned_t cr2;
+};
+
+/*
+ * DMOP_inject_msi: Inject an MSI for an emulated device.
+ */
+#define DMOP_inject_msi 14
+
+struct xen_dm_op_inject_msi {
+    /* IN - MSI data (lower 32 bits) */
+    uint32_t data;
+    /* IN - MSI address (0xfeexxxxx) */
+    uint64_t addr;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -276,6 +319,8 @@ struct xen_dm_op {
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
         struct xen_dm_op_set_mem_type set_mem_type;
+        struct xen_dm_op_inject_trap inject_trap;
+        struct xen_dm_op_inject_msi inject_msi;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 2e9a1f6..123c322 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -187,6 +187,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_xentrace_t);
 /* Deprecated by XENMEM_access_op_get_access */
 #define HVMOP_get_mem_access        13
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 #define HVMOP_inject_trap            14
 /* Inject a trap into a VCPU, which will get taken up on the next
  * scheduling of it. Note that the caller should know enough of the
@@ -202,13 +204,6 @@ struct xen_hvm_inject_trap {
     uint32_t vector;
     /* Trap type (HVMOP_TRAP_*) */
     uint32_t type;
-/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
-# define HVMOP_TRAP_ext_int    0 /* external interrupt */
-# define HVMOP_TRAP_nmi        2 /* nmi */
-# define HVMOP_TRAP_hw_exc     3 /* hardware exception */
-# define HVMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
-# define HVMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
-# define HVMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
     /* Error code, or ~0u to skip */
     uint32_t error_code;
     /* Intruction length */
@@ -219,6 +214,19 @@ struct xen_hvm_inject_trap {
 typedef struct xen_hvm_inject_trap xen_hvm_inject_trap_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_trap_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
+/*
+ * Definitions relating to HVMOP/DMOP_inject_trap.
+ */
+
+# define HVMOP_TRAP_ext_int    DMOP_TRAP_ext_int
+# define HVMOP_TRAP_nmi        DMOP_TRAP_nmi
+# define HVMOP_TRAP_hw_exc     DMOP_TRAP_hw_exc
+# define HVMOP_TRAP_sw_int     DMOP_TRAP_sw_int
+# define HVMOP_TRAP_pri_sw_exc DMOP_TRAP_pri_sw_exc
+# define HVMOP_TRAP_sw_exc     DMOP_TRAP_sw_exc
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #define HVMOP_get_mem_type    15
@@ -238,6 +246,8 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_mem_type_t);
 /* Following tools-only interfaces may change in future. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /* MSI injection for emulated devices */
 #define HVMOP_inject_msi         16
 struct xen_hvm_inject_msi {
@@ -251,8 +261,6 @@ struct xen_hvm_inject_msi {
 typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
 
-#if __XEN_INTERFACE_VERSION__ < 0x00040900
-
 /*
  * IOREQ Servers
  *
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 47c6072..fea4317 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -610,12 +610,6 @@ static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint3
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index cb32644..eec84e4 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -162,7 +162,6 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*hvm_inject_msi) (struct domain *d);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -632,11 +631,6 @@ static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint3
     return xsm_ops->shadow_control(d, op);
 }
 
-static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_inject_msi(d);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index f1568dd..1f659c7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -145,7 +145,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, machine_memory_map);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 088aa87..9a3b90c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1499,11 +1499,6 @@ static int flask_ioport_mapping(struct domain *d, uint32_t start, uint32_t end,
     return flask_ioport_permission(d, start, end, access);
 }
 
-static int flask_hvm_inject_msi(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1781,7 +1776,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
-    .hvm_inject_msi = flask_hvm_inject_msi,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .machine_memory_map = flask_machine_memory_map,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 125210b..a826264 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -261,8 +261,7 @@ class hvm
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
 # HVMOP_get_mem_type,
-# HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
-# HVMOP_inject_trap
+# HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying
     hvmctl
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
@@ -270,8 +269,6 @@ class hvm
     mem_sharing
 # XEN_DOMCTL_audit_p2m
     audit_p2m
-# HVMOP_inject_msi
-    send_irq
 # checked in XENMEM_sharing_op_{share,add_physmap} with:
 #  source = domain whose memory is being shared
 #  target = client domain
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread
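
As with the previous patch, the tools-side entry points xc_hvm_inject_trap()
and xc_hvm_inject_msi() keep their existing prototypes while switching to
DMOP underneath, and the HVMOP_TRAP_* constants survive as aliases of the
new DMOP_TRAP_* values. A hedged usage sketch (argument order as in the
xc_misc.c hunk above; the vCPU id, vector, error code and addresses are
illustrative values, not anything mandated by the series):

    #include <xenctrl.h>

    /* Queue a page fault (#PF, vector 14) for vCPU 0 of an HVM guest and
     * raise an MSI for an emulated device. Both calls now travel over
     * DMOP; the HVMOP_TRAP_* names remain usable, as the commit message
     * notes. */
    static int inject_examples(xc_interface *xch, domid_t domid)
    {
        int rc;

        rc = xc_hvm_inject_trap(xch, domid, 0 /* vcpu */, 14 /* #PF */,
                                HVMOP_TRAP_hw_exc, 0x02 /* error code */,
                                0 /* insn_len */, 0xdeadb000 /* cr2 */);
        if ( rc )
            return rc;

        /* MSI address uses the usual 0xfeexxxxx format; data selects the
         * vector and delivery mode. */
        return xc_hvm_inject_msi(xch, domid, 0xfee00000, 0x41);
    }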

* [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer
  2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
                   ` (6 preceding siblings ...)
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
@ 2016-11-18 17:14 ` Paul Durrant
  2016-11-18 17:52   ` Razvan Cojocaru
  2016-11-21  7:53   ` Jan Beulich
  7 siblings, 2 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-18 17:14 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

Since injection works on a remote vCPU, and since there's no
enforcement of the subject vCPU being paused, there's a potential race
between the producing and consuming sides. Fix this by leveraging the
vector field as synchronization variable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/dm.c         | 5 ++++-
 xen/arch/x86/hvm/hvm.c        | 7 ++++---
 xen/include/asm-x86/hvm/hvm.h | 2 ++
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index ee0aeed..45e164e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -300,13 +300,16 @@ static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
     if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
         return -EINVAL;
 
-    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+    if ( cmpxchg(&v->arch.hvm_vcpu.inject_trap.vector,
+                 HVM_TRAP_VECTOR_UNSET, HVM_TRAP_VECTOR_UPDATING) !=
+         HVM_TRAP_VECTOR_UNSET )
         return -EBUSY;
 
     v->arch.hvm_vcpu.inject_trap.type = type;
     v->arch.hvm_vcpu.inject_trap.insn_len = insn_len;
     v->arch.hvm_vcpu.inject_trap.error_code = error_code;
     v->arch.hvm_vcpu.inject_trap.cr2 = cr2;
+    smp_wmb();
     v->arch.hvm_vcpu.inject_trap.vector = vector;
 
     return 0;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 90c4b43..f817c32 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -533,10 +533,11 @@ void hvm_do_resume(struct vcpu *v)
     }
 
     /* Inject pending hw/sw trap */
-    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+    if ( v->arch.hvm_vcpu.inject_trap.vector >= 0 )
     {
+        smp_rmb();
         hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
-        v->arch.hvm_vcpu.inject_trap.vector = -1;
+        v->arch.hvm_vcpu.inject_trap.vector = HVM_TRAP_VECTOR_UNSET;
     }
 }
 
@@ -1548,7 +1549,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
         (void(*)(unsigned long))hvm_assert_evtchn_irq,
         (unsigned long)v);
 
-    v->arch.hvm_vcpu.inject_trap.vector = -1;
+    v->arch.hvm_vcpu.inject_trap.vector = HVM_TRAP_VECTOR_UNSET;
 
     if ( is_pvh_domain(d) )
     {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 7e7462e..e6c951f 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -78,6 +78,8 @@ enum hvm_intblk {
 #define HVM_HAP_SUPERPAGE_1GB   0x00000002
 
 struct hvm_trap {
+#define HVM_TRAP_VECTOR_UNSET    (-1)
+#define HVM_TRAP_VECTOR_UPDATING (-2)
     int16_t       vector;
     uint8_t       type;         /* X86_EVENTTYPE_* */
     uint8_t       insn_len;     /* Instruction length */
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 39+ messages in thread
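
The synchronization idea in this patch is compact enough to restate in
isolation: the vector field is claimed with a compare-and-swap, the payload
is filled, and only then is the real vector published, with matching
barriers on the consuming side. The sketch below is a self-contained
analogue using C11 atomics; it is not Xen code (the hypervisor uses its own
cmpxchg()/smp_wmb()/smp_rmb() primitives and the real consumer runs in
hvm_do_resume()), but it shows the same ordering:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define VECTOR_UNSET    (-1)
    #define VECTOR_UPDATING (-2)

    struct pending_trap {
        _Atomic int vector;      /* doubles as the synchronization variable */
        uint8_t  type;
        uint8_t  insn_len;
        uint32_t error_code;
        uint64_t cr2;
    };

    /* Producer (the device model's hypercall path): claim the slot, fill
     * the payload, then publish by storing the real vector with release
     * ordering. */
    static bool produce(struct pending_trap *t, int vector, uint8_t type,
                        uint8_t insn_len, uint32_t error_code, uint64_t cr2)
    {
        int expected = VECTOR_UNSET;

        if ( !atomic_compare_exchange_strong(&t->vector, &expected,
                                             VECTOR_UPDATING) )
            return false;                 /* maps to -EBUSY in the patch */

        t->type = type;
        t->insn_len = insn_len;
        t->error_code = error_code;
        t->cr2 = cr2;
        atomic_store_explicit(&t->vector, vector, memory_order_release);
        return true;
    }

    /* Consumer (the vCPU resuming): only a non-negative vector means the
     * payload is complete and safe to read. */
    static bool consume(struct pending_trap *t, struct pending_trap *out)
    {
        int v = atomic_load_explicit(&t->vector, memory_order_acquire);

        if ( v < 0 )
            return false;                 /* unset, or still being filled */

        out->type = t->type;
        out->insn_len = t->insn_len;
        out->error_code = t->error_code;
        out->cr2 = t->cr2;
        out->vector = v;
        atomic_store_explicit(&t->vector, VECTOR_UNSET, memory_order_relaxed);
        return true;
    }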

* Re: [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
@ 2016-11-18 17:52   ` Razvan Cojocaru
  2016-11-21  7:53   ` Jan Beulich
  1 sibling, 0 replies; 39+ messages in thread
From: Razvan Cojocaru @ 2016-11-18 17:52 UTC (permalink / raw)
  To: Paul Durrant, xen-devel; +Cc: Andrew Cooper, Jan Beulich

On 11/18/2016 07:14 PM, Paul Durrant wrote:
> Since injection works on a remote vCPU, and since there's no
> enforcement of the subject vCPU being paused, there's a potential race
> between the producing and consuming sides. Fix this by leveraging the
> vector field as synchronization variable.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> [re-based]
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  xen/arch/x86/hvm/dm.c         | 5 ++++-
>  xen/arch/x86/hvm/hvm.c        | 7 ++++---
>  xen/include/asm-x86/hvm/hvm.h | 2 ++
>  3 files changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
> index ee0aeed..45e164e 100644
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -300,13 +300,16 @@ static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
>      if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
>          return -EINVAL;
>  
> -    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> +    if ( cmpxchg(&v->arch.hvm_vcpu.inject_trap.vector,
> +                 HVM_TRAP_VECTOR_UNSET, HVM_TRAP_VECTOR_UPDATING) !=
> +         HVM_TRAP_VECTOR_UNSET )
>          return -EBUSY;
>  
>      v->arch.hvm_vcpu.inject_trap.type = type;
>      v->arch.hvm_vcpu.inject_trap.insn_len = insn_len;
>      v->arch.hvm_vcpu.inject_trap.error_code = error_code;
>      v->arch.hvm_vcpu.inject_trap.cr2 = cr2;
> +    smp_wmb();
>      v->arch.hvm_vcpu.inject_trap.vector = vector;
>  
>      return 0;
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 90c4b43..f817c32 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -533,10 +533,11 @@ void hvm_do_resume(struct vcpu *v)
>      }
>  
>      /* Inject pending hw/sw trap */
> -    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
> +    if ( v->arch.hvm_vcpu.inject_trap.vector >= 0 )
>      {
> +        smp_rmb();
>          hvm_inject_trap(&v->arch.hvm_vcpu.inject_trap);
> -        v->arch.hvm_vcpu.inject_trap.vector = -1;
> +        v->arch.hvm_vcpu.inject_trap.vector = HVM_TRAP_VECTOR_UNSET;
>      }
>  }

Does this mean I should rebase my vm_event patch?


Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
  2016-11-18 17:52   ` Razvan Cojocaru
@ 2016-11-21  7:53   ` Jan Beulich
  2016-11-21  8:26     ` Paul Durrant
  1 sibling, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-21  7:53 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, xen-devel

>>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> Since injection works on a remote vCPU, and since there's no
> enforcement of the subject vCPU being paused, there's a potential race
> between the producing and consuming sides. Fix this by leveraging the
> vector field as synchronization variable.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> [re-based]
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>

At least this patch of the series should remain "From:" me; I haven't
looked at the others yet, but I have to admit I'm puzzled that I
haven't been Cc-ed on anything other than this one and patch 1.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer
  2016-11-21  7:53   ` Jan Beulich
@ 2016-11-21  8:26     ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-21  8:26 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 21 November 2016 07:54
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; xen-
> devel@lists.xenproject.org
> Subject: Re: [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer
> and consumer
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> > Since injection works on a remote vCPU, and since there's no
> > enforcement of the subject vCPU being paused, there's a potential race
> > between the producing and consuming sides. Fix this by leveraging the
> > vector field as synchronization variable.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > [re-based]
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> 
> At least this patch of the series should remain "From:" me; I haven't
> looked at the others yet, but I have to admit I'm puzzled that I
> haven't been Cc-ed on other than the one here and patch 1.
> 

I'm puzzled as well because most patches are 'suggested-by' you. I guess it means that git-send-email doesn't consider that worthy of a cc... I'll manually add you to the cc list for future versions.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
@ 2016-11-22 15:57   ` Jan Beulich
  2016-11-22 16:32     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-22 15:57 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel

>>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> This patch simply adds the boilerplate for the hypercall and bumps
> __XEN_LATEST_INTERFACE_VERSION__ to 0x0000040900.

Why the latter?

> +static int dm_op_get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> +                         unsigned int nr_bufs, unsigned int idx,
> +                         struct xen_dm_op_buf *buf)
> +{
> +    if ( idx >= nr_bufs )
> +        return -EFAULT;

There's no fault here. ENOENT, EIO, ENXIO, ...?

> +    return copy_from_guest_offset(buf, bufs, idx, 1);
> +}
> +
> +static int 
> dm_op_copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> +                                     unsigned int nr_bufs, void *dst,
> +                                     unsigned int idx, size_t dst_size)
> +{
> +    struct xen_dm_op_buf buf;
> +    size_t size;
> +    int rc;
> +
> +    memset(dst, 0, dst_size);
> +
> +    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
> +    if ( rc )
> +        return -EFAULT;
> +
> +    size = min(dst_size, buf.size);

Hmm, the file is x86-specific, so this may indeed build. But formally
the two expressions are of different types, which min() doesn't like.

> +static int dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> +                                   unsigned int nr_bufs, unsigned int idx,
> +                                   void *src, size_t src_size)
> +{
> +    struct xen_dm_op_buf buf;
> +    size_t size;
> +    int rc;
> +
> +    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
> +    if ( rc )
> +        return -EFAULT;
> +
> +    size = min(buf.size, src_size);
> +
> +    rc = copy_to_guest(buf.h, src, size);
> +    if ( rc )
> +        return -EFAULT;
> +
> +    return 0;
> +}

For copying from guest doing all-or-nothing is probably sufficient,
but is that really the case also for copying data back?

> +long do_dm_op(domid_t domid,
> +              unsigned int nr_bufs,
> +              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
> +{
> +    struct domain *d;
> +    struct xen_dm_op op;
> +    bool restart;
> +    long rc;
> +
> +    rc = rcu_lock_remote_domain_by_id(domid, &d);
> +    if ( rc )
> +        return rc;
> +
> +    restart = false;
> +
> +    if ( !has_hvm_container_domain(d) )
> +        goto out;
> +
> +    rc = xsm_dm_op(XSM_DM_PRIV, d);
> +    if ( rc )
> +        goto out;
> +
> +    rc = dm_op_copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
> +    if ( rc )
> +        goto out;
> +
> +    switch ( op.op )
> +    {
> +    default:
> +        rc = -EOPNOTSUPP;
> +        break;
> +    }
> +
> +    if ( rc == -ERESTART )
> +        restart = true;
> +
> +    if ( !restart && rc )

Is the "restart" variable really necessary?

> +        goto out;
> +
> +    rc = dm_op_copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
> +
> +out:

A goto over a single statement is certainly too much goto-ery for
my taste. In any event - labels indented by at least one space
please.

> +#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
> +#define __XEN_PUBLIC_HVM_DM_OP_H__
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> +
> +#include "../xen.h"
> +
> +#define DMOP_invalid 0

XEN_DMOP_invalid

> +struct xen_dm_op {
> +    uint32_t op;
> +};
> +
> +struct xen_dm_op_buf {
> +    XEN_GUEST_HANDLE(void) h;
> +    uint64_t size;
> +};
> +typedef struct xen_dm_op_buf xen_dm_op_buf_t;
> +DEFINE_XEN_GUEST_HANDLE(xen_dm_op_buf_t);
> +
> +/* ` enum neg_errnoval
> + * ` HYPERVISOR_dm_op(domid_t domid,
> + * `                  xen_dm_op_buf_t *bufs,

I'd prefer you to use the bufs[] notation here, to emphasize the
array nature.

> + * `                  unsigned int nr_bufs)
> + * `
> + *
> + * @domid is the domain the hypercall operates on.
> + * @bufs points to an array of buffers where @bufs[0] contains a struct
> + * dm_op, describing the specific device model operation and its parameters.

xen_dm_op

> + * @bufs[1..] may be referenced in the parameters for the purposes of
> + * passing extra information to or from the domain.
> + * @nr_bufs is the number of buffers in the @bufs array.
> + */
> +
> +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */

Please omit the two defined() (but retain what's inside the
parentheses).


> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -727,6 +727,12 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
>      }
>  }
>  
> +static XSM_INLINE int xsm_dm_op (XSM_DEFAULT_ARG struct domain *d)

Stray blank (many of the XSM routines have this wrong, and this
really should be cleaned up eventually).

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread
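
To make the min() and error-code remarks above concrete: Xen's min() insists
on identically typed operands, and dst_size (a size_t) is compared against
buf.size (a uint64_t). One possible shape for the copy-in helper once those
comments are folded in is sketched below; it assumes Xen's min_t() macro and
an -ENOENT-style return for an out-of-range buffer index, and is not the
series' actual code:

    static int dm_op_copy_buf_from_guest(
        XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
        unsigned int nr_bufs, void *dst,
        unsigned int idx, size_t dst_size)
    {
        struct xen_dm_op_buf buf;
        size_t size;

        memset(dst, 0, dst_size);

        if ( idx >= nr_bufs )
            return -ENOENT;              /* out of range, not a fault */

        if ( copy_from_guest_offset(&buf, bufs, idx, 1) )
            return -EFAULT;

        size = min_t(size_t, dst_size, buf.size);

        return copy_from_guest(dst, buf.h, size) ? -EFAULT : 0;
    }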

* Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-11-22 15:57   ` Jan Beulich
@ 2016-11-22 16:32     ` Paul Durrant
  2016-11-22 17:24       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-22 16:32 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 22 November 2016 15:57
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>
> Subject: Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce
> __HYPERCALL_dm_op...
> 
> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> > This patch simply adds the boilerplate for the hypercall and bumps
> > __XEN_LATEST_INTERFACE_VERSION__ to 0x0000040900.
> 
> Why the latter?

Do I not need to bump the interface version?

> 
> > +static int
> dm_op_get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> > +                         unsigned int nr_bufs, unsigned int idx,
> > +                         struct xen_dm_op_buf *buf)
> > +{
> > +    if ( idx >= nr_bufs )
> > +        return -EFAULT;
> 
> There's no fault here. ENOENT, EIO, ENXIO, ...?
> 

True, ENOENT I think.

> > +    return copy_from_guest_offset(buf, bufs, idx, 1);
> > +}
> > +
> > +static int
> >
> dm_op_copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_
> buf_t) bufs,
> > +                                     unsigned int nr_bufs, void *dst,
> > +                                     unsigned int idx, size_t dst_size)
> > +{
> > +    struct xen_dm_op_buf buf;
> > +    size_t size;
> > +    int rc;
> > +
> > +    memset(dst, 0, dst_size);
> > +
> > +    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
> > +    if ( rc )
> > +        return -EFAULT;
> > +
> > +    size = min(dst_size, buf.size);
> 
> Hmm, the file is x86-specific, so this may indeed build. But formally
> the two expressions are of different types, which min() doesn't like.
> 

Ok.

> > +static int
> dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_bu
> f_t) bufs,
> > +                                   unsigned int nr_bufs, unsigned int idx,
> > +                                   void *src, size_t src_size)
> > +{
> > +    struct xen_dm_op_buf buf;
> > +    size_t size;
> > +    int rc;
> > +
> > +    rc = dm_op_get_buf(bufs, nr_bufs, idx, &buf);
> > +    if ( rc )
> > +        return -EFAULT;
> > +
> > +    size = min(buf.size, src_size);
> > +
> > +    rc = copy_to_guest(buf.h, src, size);
> > +    if ( rc )
> > +        return -EFAULT;
> > +
> > +    return 0;
> > +}
> 
> For copying from guest doing all-or-nothing is probably sufficient,
> but is that really the case also for copying data back?
> 

It is ok for now. It can always be changed later if we want to optimise.

> > +long do_dm_op(domid_t domid,
> > +              unsigned int nr_bufs,
> > +              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
> > +{
> > +    struct domain *d;
> > +    struct xen_dm_op op;
> > +    bool restart;
> > +    long rc;
> > +
> > +    rc = rcu_lock_remote_domain_by_id(domid, &d);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    restart = false;
> > +
> > +    if ( !has_hvm_container_domain(d) )
> > +        goto out;
> > +
> > +    rc = xsm_dm_op(XSM_DM_PRIV, d);
> > +    if ( rc )
> > +        goto out;
> > +
> > +    rc = dm_op_copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
> > +    if ( rc )
> > +        goto out;
> > +
> > +    switch ( op.op )
> > +    {
> > +    default:
> > +        rc = -EOPNOTSUPP;
> > +        break;
> > +    }
> > +
> > +    if ( rc == -ERESTART )
> > +        restart = true;
> > +
> > +    if ( !restart && rc )
> 
> Is the "restart" variable really necessary?
> 
> > +        goto out;
> > +
> > +    rc = dm_op_copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
> > +
> > +out:
> 
> A goto over a single statement is certainly too much goto-ery for
> my taste. In any event - labels indented by at least one space
> please.
> 

Ok, I'll get rid of the goto despite it making the code look more cumbersome to me.

> > +#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
> > +#define __XEN_PUBLIC_HVM_DM_OP_H__
> > +
> > +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> > +
> > +#include "../xen.h"
> > +
> > +#define DMOP_invalid 0
> 
> XEN_DMOP_invalid
> 

Yes, indeed.

> > +struct xen_dm_op {
> > +    uint32_t op;
> > +};
> > +
> > +struct xen_dm_op_buf {
> > +    XEN_GUEST_HANDLE(void) h;
> > +    uint64_t size;
> > +};
> > +typedef struct xen_dm_op_buf xen_dm_op_buf_t;
> > +DEFINE_XEN_GUEST_HANDLE(xen_dm_op_buf_t);
> > +
> > +/* ` enum neg_errnoval
> > + * ` HYPERVISOR_dm_op(domid_t domid,
> > + * `                  xen_dm_op_buf_t *bufs,
> 
> I'd prefer you to use the bufs[] notation here, to emphasize the
> array nature.
> 

Ok.

> > + * `                  unsigned int nr_bufs)
> > + * `
> > + *
> > + * @domid is the domain the hypercall operates on.
> > + * @bufs points to an array of buffers where @bufs[0] contains a struct
> > + * dm_op, describing the specific device model operation and its
> parameters.
> 
> xen_dm_op
> 

Yep.

> > + * @bufs[1..] may be referenced in the parameters for the purposes of
> > + * passing extra information to or from the domain.
> > + * @nr_bufs is the number of buffers in the @bufs array.
> > + */
> > +
> > +#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
> 
> Please omit the two defined() (but retain what's inside the
> parentheses).
> 

Ok.

> 
> > --- a/xen/include/xsm/dummy.h
> > +++ b/xen/include/xsm/dummy.h
> > @@ -727,6 +727,12 @@ static XSM_INLINE int xsm_pmu_op
> (XSM_DEFAULT_ARG struct domain *d, unsigned int
> >      }
> >  }
> >
> > +static XSM_INLINE int xsm_dm_op (XSM_DEFAULT_ARG struct domain
> *d)
> 
> Stray blank (many of the XSM routines have this wrong, and this
> really should be cleaned up eventually).
>

Ok. I was just going for consistency.

  Paul
 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-11-22 16:32     ` Paul Durrant
@ 2016-11-22 17:24       ` Jan Beulich
  2016-11-22 17:29         ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-22 17:24 UTC (permalink / raw)
  To: Paul Durrant; +Cc: Andrew Cooper, Daniel DeGraaf, Wei Liu, xen-devel

>>> On 22.11.16 at 17:32, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 22 November 2016 15:57
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
>> <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org; Daniel De Graaf
>> <dgdegra@tycho.nsa.gov>
>> Subject: Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce
>> __HYPERCALL_dm_op...
>> 
>> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
>> > This patch simply adds the boilerplate for the hypercall and bumps
>> > __XEN_LATEST_INTERFACE_VERSION__ to 0x0000040900.
>> 
>> Why the latter?
> 
> Do I not need to bump the interface version?

Not for plain additions.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-11-22 17:24       ` Jan Beulich
@ 2016-11-22 17:29         ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-22 17:29 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Andrew Cooper, Daniel DeGraaf, Wei Liu, xen-devel

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 22 November 2016 17:25
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org; Daniel DeGraaf
> <dgdegra@tycho.nsa.gov>
> Subject: RE: [PATCH-for-4.9 v1 1/8] public / x86: Introduce
> __HYPERCALL_dm_op...
> 
> >>> On 22.11.16 at 17:32, <Paul.Durrant@citrix.com> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 22 November 2016 15:57
> >> To: Paul Durrant <Paul.Durrant@citrix.com>
> >> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> >> <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org; Daniel De Graaf
> >> <dgdegra@tycho.nsa.gov>
> >> Subject: Re: [PATCH-for-4.9 v1 1/8] public / x86: Introduce
> >> __HYPERCALL_dm_op...
> >>
> >> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> >> > This patch simply adds the boilerplate for the hypercall and bumps
> >> > __XEN_LATEST_INTERFACE_VERSION__ to 0x0000040900.
> >>
> >> Why the latter?
> >
> > Do I not need to bump the interface version?
> 
> Not for plain additions.
> 

Oh, I see. The change is in preparation for later patches which remove HVMOP definitions for newer versions, but I can defer making the change till the first of those if that makes more sense.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
@ 2016-11-24 17:02   ` Jan Beulich
  2016-11-25  7:06     ` Jan Beulich
  2016-11-25  9:01     ` Paul Durrant
  0 siblings, 2 replies; 39+ messages in thread
From: Jan Beulich @ 2016-11-24 17:02 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> NOTE: The definitions of HVM_IOREQSRV_BUFIOREQ_*, HVMOP_IO_RANGE_* and
>       HVMOP_PCI_SBDF have to persist for new interface versions as
>       they are already in use by callers of the libxc interface.

Looking back at my original hvmctl patch, I agree with the first, but
where did you find uses of the latter two outside of libxc (IOW what
did I overlook back then)? The libxc interfaces, after all, are meant
to abstract those away.

> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -102,6 +102,61 @@ long do_dm_op(domid_t domid,
>  
>      switch ( op.op )
>      {
> +    case DMOP_create_ioreq_server:
> +    {
> +        struct domain *curr_d = current->domain;
> +        struct xen_dm_op_create_ioreq_server *data =
> +            &op.u.create_ioreq_server;
> +
> +        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
> +                                     data->handle_bufioreq, &data->id);
> +        break;
> +    }
> +    case DMOP_get_ioreq_server_info:

Blank lines between non-fall-through case statements please.

> +    {
> +        struct xen_dm_op_get_ioreq_server_info *data =
> +            &op.u.get_ioreq_server_info;
> +
> +        rc = hvm_get_ioreq_server_info(d, data->id,
> +                                       &data->ioreq_pfn,
> +                                       &data->bufioreq_pfn,
> +                                       &data->bufioreq_port);

Before the call you should check the __pad field to be zero
(presumably also elsewhere).

> +    case DMOP_destroy_ioreq_server:
> +    {
> +        struct xen_dm_op_destroy_ioreq_server *data =
> +            &op.u.destroy_ioreq_server;
> +
> +        rc = hvm_destroy_ioreq_server(d, data->id);
> +        break;

When there are multiple uses of "data" I can see the point of
using it to help readability, but here I'm unconvinced.

> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -26,6 +26,7 @@
>  #include "../xen.h"
>  #include "../trace.h"
>  #include "../event_channel.h"
> +#include "dm_op.h"

I really wish we could avoid that additional dependency; I seem to
have got away without it in my hvmctl series.

> +/*
> + * DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a secondary
> + *                           emulator servicing domain <domid>.
> + *
> + * The <id> handed back is unique for <domid>. If <handle_bufioreq> is zero
> + * the buffered ioreq ring will not be allocated and hence all emulation
> + * requestes to this server will be synchronous.
> + */
> +#define DMOP_create_ioreq_server 1

Missing XEN_ prefix.

> +struct xen_dm_op_create_ioreq_server {
> +    /* IN - should server handle buffered ioreqs */
> +    uint8_t handle_bufioreq;
> +#define DMOP_BUFIOREQ_OFF    0
> +#define DMOP_BUFIOREQ_LEGACY 1

Again (and of course more below).

> +struct xen_dm_op_ioreq_server_range {
> +    /* IN - server id */
> +    ioservid_t id;
> +    uint16_t __pad;
> +    /* IN - type of range */
> +    uint32_t type;

Any reason for making this 32 bits wide, instead of 16 (and leaving
32 for future use)?

> +struct xen_dm_op_set_ioreq_server_state {
> +    /* IN - server id */
> +    ioservid_t id;
> +    uint16_t __pad;
> +    /* IN - enabled? */
> +    uint8_t enabled;
> +};

Why 16 bits of padding ahead of an 8-bit field, especially since
ioservid_t is also just 16 bits?

>  struct xen_dm_op {
>      uint32_t op;
> +    union {

Even if no current structure needs it, I think we should have a 32-bit
padding field ahead of the union right away, to cover (current or
future) uint64_aligned_t uses inside the union members.

> @@ -242,6 +243,8 @@ struct xen_hvm_inject_msi {
>  typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
>  
> +#if __XEN_INTERFACE_VERSION__ < 0x00040900
> +
>  /*
>   * IOREQ Servers
>   *

This lives inside a __XEN__ / __XEN_TOOLS__ only region, so does
not need to be guarded (or the contents preserved).

> @@ -383,6 +370,27 @@ struct xen_hvm_set_ioreq_server_state {
>  typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
>  
> +#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
> +
> +/*
> + * Definitions relating to HVMOP/DMOP_create_ioreq_server.
> + */
> +
> +#define HVM_IOREQSRV_BUFIOREQ_OFF    DMOP_BUFIOREQ_OFF
> +#define HVM_IOREQSRV_BUFIOREQ_LEGACY DMOP_BUFIOREQ_LEGACY
> +#define HVM_IOREQSRV_BUFIOREQ_ATOMIC DMOP_BUFIOREQ_ATOMIC
> +
> +/*
> + * Definitions relating to HVMOP/DMOP_map_io_range_to_ioreq_server and
> + * HVMOP/DMOP_unmap_io_range_from_ioreq_server
> + */
> +
> +#define HVMOP_IO_RANGE_PORT   DMOP_IO_RANGE_PORT
> +#define HVMOP_IO_RANGE_MEMORY DMOP_IO_RANGE_MEMORY
> +#define HVMOP_IO_RANGE_PCI    DMOP_IO_RANGE_PCI
> +
> +#define HVMOP_PCI_SBDF        DMOP_PCI_SBDF

Instead these additions (or, as said above, any parts thereof
which really need retaining) should then go into an interface
version guarded block, as we don't want new code to use them.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread
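
On the point about checking padding: rejecting non-zero reserved bits up
front keeps them usable for future extensions. A sketch of how the
dispatcher case might look with that check added (assuming the reserved
field is simply named "pad"; this is illustrative, not the final code):

    case DMOP_get_ioreq_server_info:
    {
        struct xen_dm_op_get_ioreq_server_info *data =
            &op.u.get_ioreq_server_info;

        rc = -EINVAL;
        if ( data->pad )                 /* reserved, must be zero */
            break;

        rc = hvm_get_ioreq_server_info(d, data->id,
                                       &data->ioreq_pfn,
                                       &data->bufioreq_pfn,
                                       &data->bufioreq_port);
        break;
    }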

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-24 17:02   ` Jan Beulich
@ 2016-11-25  7:06     ` Jan Beulich
  2016-11-25  8:47       ` Paul Durrant
  2016-11-25  9:01     ` Paul Durrant
  1 sibling, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25  7:06 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 24.11.16 at 18:02, <JBeulich@suse.com> wrote:
>>>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
>> +    {
>> +        struct xen_dm_op_get_ioreq_server_info *data =
>> +            &op.u.get_ioreq_server_info;
>> +
>> +        rc = hvm_get_ioreq_server_info(d, data->id,
>> +                                       &data->ioreq_pfn,
>> +                                       &data->bufioreq_pfn,
>> +                                       &data->bufioreq_port);
> 
> Before the call you should check the __pad field to be zero
> (presumably also elsewhere).

And please no double underscores at the beginning of those field
names; preferably none at all (as field names may collide with file
scope object-like macros).
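
As an illustrative sketch only (assuming the field is renamed to a plain
"pad", and keeping the existing do_dm_op() switch structure), the check
could look like:

    struct xen_dm_op_get_ioreq_server_info *data =
        &op.u.get_ioreq_server_info;

    rc = -EINVAL;
    if ( data->pad )    /* non-zero padding is rejected for forward compatibility */
        break;

    rc = hvm_get_ioreq_server_info(d, data->id,
                                   &data->ioreq_pfn,
                                   &data->bufioreq_pfn,
                                   &data->bufioreq_port);
    break;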

>>  struct xen_dm_op {
>>      uint32_t op;
>> +    union {
> 
> Even if no current structure needs it, I think we should have a 32-bit
> padding field ahead of the union right away, to cover (current or
> future) uint64_aligned_t uses inside the union members.

Actually I did overlook that the few instances of uint64_aligned_t
are in direct union members, not in fields referenced, so this isn't just
a "should" really.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-25  7:06     ` Jan Beulich
@ 2016-11-25  8:47       ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25  8:47 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 07:06
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 2/8] dm_op: convert
> HVMOP_*ioreq_server*
> 
> >>> On 24.11.16 at 18:02, <JBeulich@suse.com> wrote:
> >>>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> >> +    {
> >> +        struct xen_dm_op_get_ioreq_server_info *data =
> >> +            &op.u.get_ioreq_server_info;
> >> +
> >> +        rc = hvm_get_ioreq_server_info(d, data->id,
> >> +                                       &data->ioreq_pfn,
> >> +                                       &data->bufioreq_pfn,
> >> +                                       &data->bufioreq_port);
> >
> > Before the call you should check the __pad field to be zero
> > (presumably also elsewhere).
> 
> And please no double underscores at the beginning of those field
> names; preferably none at all (as field names may collide with file
> scope object-like macros).

Ok.

> 
> >>  struct xen_dm_op {
> >>      uint32_t op;
> >> +    union {
> >
> > Even if no current structure needs it, I think we should have a 32-bit
> > padding field ahead of the union right away, to cover (current or
> > future) uint64_aligned_t uses inside the union members.
> 
> Actually I did overlook that the few instances of uint64_aligned_t
> are in direct union members, not in fields referenced, so this isn't just
> a "should" really.
> 

Ok, I'll go through and check field alignments again.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-24 17:02   ` Jan Beulich
  2016-11-25  7:06     ` Jan Beulich
@ 2016-11-25  9:01     ` Paul Durrant
  2016-11-25  9:28       ` Jan Beulich
  1 sibling, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-25  9:01 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 24 November 2016 17:02
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 2/8] dm_op: convert
> HVMOP_*ioreq_server*
> 
> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> > NOTE: The definitions of HVM_IOREQSRV_BUFIOREQ_*,
> HVMOP_IO_RANGE_* and
> >       HVMOP_PCI_SBDF have to persist for new interface versions as
> >       they are already in use by callers of the libxc interface.
> 
> Looking back at my original hvmctl patch, I agree with the first, but
> where did you find uses of the latter two outside of libxc (IOW what
> did I overlook back then)? The libxc interfaces, after all, are meant
> to abstract those away.
> 

You are correct. For some reason I thought the encoding was exposed at the libxc level, but it isn't, so I can keep the definition inside the #ifdef.

> > --- a/xen/arch/x86/hvm/dm.c
> > +++ b/xen/arch/x86/hvm/dm.c
> > @@ -102,6 +102,61 @@ long do_dm_op(domid_t domid,
> >
> >      switch ( op.op )
> >      {
> > +    case DMOP_create_ioreq_server:
> > +    {
> > +        struct domain *curr_d = current->domain;
> > +        struct xen_dm_op_create_ioreq_server *data =
> > +            &op.u.create_ioreq_server;
> > +
> > +        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
> > +                                     data->handle_bufioreq, &data->id);
> > +        break;
> > +    }
> > +    case DMOP_get_ioreq_server_info:
> 
> Blank lines between non-fall-through case statements please.
> 

Even where there are braces?

> > +    {
> > +        struct xen_dm_op_get_ioreq_server_info *data =
> > +            &op.u.get_ioreq_server_info;
> > +
> > +        rc = hvm_get_ioreq_server_info(d, data->id,
> > +                                       &data->ioreq_pfn,
> > +                                       &data->bufioreq_pfn,
> > +                                       &data->bufioreq_port);
> 
> Before the call you should check the __pad field to be zero
> (presumably also elsewhere).
> 

Yes, I'll go through and check.

> > +    case DMOP_destroy_ioreq_server:
> > +    {
> > +        struct xen_dm_op_destroy_ioreq_server *data =
> > +            &op.u.destroy_ioreq_server;
> > +
> > +        rc = hvm_destroy_ioreq_server(d, data->id);
> > +        break;
> 
> When there are multiple uses of "data" I can see the point of
> using it to help readability, but here I'm unconvinced.

Without it the call to hvm_destroy_ioreq_server() looks unwieldy because the union field specifier makes the line longer than 80 chars. It looked neater this way.

> 
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -26,6 +26,7 @@
> >  #include "../xen.h"
> >  #include "../trace.h"
> >  #include "../event_channel.h"
> > +#include "dm_op.h"
> 
> I'd really wish we could avoid that additional dependency, and I seem
> to have got away without in my hvmctl series.
> 

I can do that but it means I need to typedef ioservid_t in both headers, which I thought was less preferable.

> > +/*
> > + * DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a
> secondary
> > + *                           emulator servicing domain <domid>.
> > + *
> > + * The <id> handed back is unique for <domid>. If <handle_bufioreq> is
> zero
> > + * the buffered ioreq ring will not be allocated and hence all emulation
> > + * requestes to this server will be synchronous.
> > + */
> > +#define DMOP_create_ioreq_server 1
> 
> Missing XEN_ prefix.
> 

Yep.

> > +struct xen_dm_op_create_ioreq_server {
> > +    /* IN - should server handle buffered ioreqs */
> > +    uint8_t handle_bufioreq;
> > +#define DMOP_BUFIOREQ_OFF    0
> > +#define DMOP_BUFIOREQ_LEGACY 1
> 
> Again (and of course more below).
> 
> > +struct xen_dm_op_ioreq_server_range {
> > +    /* IN - server id */
> > +    ioservid_t id;
> > +    uint16_t __pad;
> > +    /* IN - type of range */
> > +    uint32_t type;
> 
> Any reason for making this 32 bits wide, instead of 16 (and leaving
> 32 for future use)?
> 

Not really. I could probably shrink it to 8.

> > +struct xen_dm_op_set_ioreq_server_state {
> > +    /* IN - server id */
> > +    ioservid_t id;
> > +    uint16_t __pad;
> > +    /* IN - enabled? */
> > +    uint8_t enabled;
> > +};
> 
> Why 16 bits of padding ahead of an 8-bit field, the more that
> ioservid_t is also just 16 bits?
> 

That's a mistake. I'll drop it.

> >  struct xen_dm_op {
> >      uint32_t op;
> > +    union {
> 
> Even if no current structure needs it, I think we should have a 32-bit
> padding field ahead of the union right away, to cover (current or
> future) uint64_aligned_t uses inside the union members.
> 
> > @@ -242,6 +243,8 @@ struct xen_hvm_inject_msi {
> >  typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
> >
> > +#if __XEN_INTERFACE_VERSION__ < 0x00040900
> > +
> >  /*
> >   * IOREQ Servers
> >   *
> 
> This lives inside a __XEN__ / __XEN_TOOLS__ only region, so does
> not need to be guarded (or the contents preserved).
> 

Ok.

> > @@ -383,6 +370,27 @@ struct xen_hvm_set_ioreq_server_state {
> >  typedef struct xen_hvm_set_ioreq_server_state
> xen_hvm_set_ioreq_server_state_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
> >
> > +#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
> > +
> > +/*
> > + * Definitions relating to HVMOP/DMOP_create_ioreq_server.
> > + */
> > +
> > +#define HVM_IOREQSRV_BUFIOREQ_OFF    DMOP_BUFIOREQ_OFF
> > +#define HVM_IOREQSRV_BUFIOREQ_LEGACY DMOP_BUFIOREQ_LEGACY
> > +#define HVM_IOREQSRV_BUFIOREQ_ATOMIC DMOP_BUFIOREQ_ATOMIC
> > +
> > +/*
> > + * Definitions relating to HVMOP/DMOP_map_io_range_to_ioreq_server and
> > + * HVMOP/DMOP_unmap_io_range_from_ioreq_server
> > + */
> > +
> > +#define HVMOP_IO_RANGE_PORT   DMOP_IO_RANGE_PORT
> > +#define HVMOP_IO_RANGE_MEMORY DMOP_IO_RANGE_MEMORY
> > +#define HVMOP_IO_RANGE_PCI    DMOP_IO_RANGE_PCI
> > +
> > +#define HVMOP_PCI_SBDF        DMOP_PCI_SBDF
> 
> Instead these additions (or, as said above, any parts thereof
> which really need retaining) should then go into an interface
> version guarded block, as we don't want new code to use them.
> 

Ok. Like with the SBDF definition, I mistakenly thought the range definitions were being used outside of libxc. Since they were tools-only, I should be able to drop them.

> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-25  9:01     ` Paul Durrant
@ 2016-11-25  9:28       ` Jan Beulich
  2016-11-25  9:33         ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25  9:28 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

>>> On 25.11.16 at 10:01, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 24 November 2016 17:02
>> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
>> > --- a/xen/arch/x86/hvm/dm.c
>> > +++ b/xen/arch/x86/hvm/dm.c
>> > @@ -102,6 +102,61 @@ long do_dm_op(domid_t domid,
>> >
>> >      switch ( op.op )
>> >      {
>> > +    case DMOP_create_ioreq_server:
>> > +    {
>> > +        struct domain *curr_d = current->domain;
>> > +        struct xen_dm_op_create_ioreq_server *data =
>> > +            &op.u.create_ioreq_server;
>> > +
>> > +        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
>> > +                                     data->handle_bufioreq, &data->id);
>> > +        break;
>> > +    }
>> > +    case DMOP_get_ioreq_server_info:
>> 
>> Blank lines between non-fall-through case statements please.
> 
> Even where there are braces?

Yes please, because the closing brace alone is no indication of
whether there is fall-through involved here.

>> > --- a/xen/include/public/hvm/hvm_op.h
>> > +++ b/xen/include/public/hvm/hvm_op.h
>> > @@ -26,6 +26,7 @@
>> >  #include "../xen.h"
>> >  #include "../trace.h"
>> >  #include "../event_channel.h"
>> > +#include "dm_op.h"
>> 
>> I'd really wish we could avoid that additional dependency, and I seem
>> to have got away without in my hvmctl series.
> 
> I can do that but it means I need to typedef ioservid_t in both headers, 
> which I thought was less preferable.

Hmm, are there any uses of that type left in this header after you
actually removed everything that doesn't need to be here anymore?

>> > +struct xen_dm_op_ioreq_server_range {
>> > +    /* IN - server id */
>> > +    ioservid_t id;
>> > +    uint16_t __pad;
>> > +    /* IN - type of range */
>> > +    uint32_t type;
>> 
>> Any reason for making this 32 bits wide, instead of 16 (and leaving
>> 32 for future use)?
> 
> Not really. I could probably shrink it to 8.

I wouldn't go that far, as then you'd need two padding fields.
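
For illustration, the 16-bit variant would end up looking something like
this (the 64-bit range bounds are assumed here, and "pad" is a
placeholder name):

    struct xen_dm_op_ioreq_server_range {
        ioservid_t id;                /* IN - server id */
        uint16_t type;                /* IN - type of range */
        uint32_t pad;                 /* must be zero */
        uint64_aligned_t start, end;  /* IN - range boundaries */
    };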

>> > +struct xen_dm_op_set_ioreq_server_state {
>> > +    /* IN - server id */
>> > +    ioservid_t id;
>> > +    uint16_t __pad;
>> > +    /* IN - enabled? */
>> > +    uint8_t enabled;
>> > +};
>> 
>> Why 16 bits of padding ahead of an 8-bit field, the more that
>> ioservid_t is also just 16 bits?
>> 
> 
> That's a mistake. I'll drop it.

s/drop/change/ I suppose, as you'll need to add tail padding?
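
Sketched out (illustrative only, with "pad" as a placeholder name):

    struct xen_dm_op_set_ioreq_server_state {
        ioservid_t id;    /* IN - server id */
        uint8_t enabled;  /* IN - enabled? */
        uint8_t pad;      /* explicit tail padding, must be zero */
    };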

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-11-25  9:28       ` Jan Beulich
@ 2016-11-25  9:33         ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25  9:33 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 09:28
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: RE: [Xen-devel] [PATCH-for-4.9 v1 2/8] dm_op: convert
> HVMOP_*ioreq_server*
> 
> >>> On 25.11.16 at 10:01, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 24 November 2016 17:02
> >> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> >> > --- a/xen/arch/x86/hvm/dm.c
> >> > +++ b/xen/arch/x86/hvm/dm.c
> >> > @@ -102,6 +102,61 @@ long do_dm_op(domid_t domid,
> >> >
> >> >      switch ( op.op )
> >> >      {
> >> > +    case DMOP_create_ioreq_server:
> >> > +    {
> >> > +        struct domain *curr_d = current->domain;
> >> > +        struct xen_dm_op_create_ioreq_server *data =
> >> > +            &op.u.create_ioreq_server;
> >> > +
> >> > +        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
> >> > +                                     data->handle_bufioreq, &data->id);
> >> > +        break;
> >> > +    }
> >> > +    case DMOP_get_ioreq_server_info:
> >>
> >> Blank lines between non-fall-through case statements please.
> >
> > Even where there are braces?
> 
> Yes please, because the closing brace alone is no indication of
> whether there is fall-through involved here.
> 
> >> > --- a/xen/include/public/hvm/hvm_op.h
> >> > +++ b/xen/include/public/hvm/hvm_op.h
> >> > @@ -26,6 +26,7 @@
> >> >  #include "../xen.h"
> >> >  #include "../trace.h"
> >> >  #include "../event_channel.h"
> >> > +#include "dm_op.h"
> >>
> >> I'd really wish we could avoid that additional dependency, and I seem
> >> to have got away without in my hvmctl series.
> >
> > I can do that but it means I need to typedef ioservid_t in both headers,
> > which I thought was less preferable.
> 
> Hmm, are there any uses of that type left in this header after you
> actually removed everything that doesn't need to be here anymore?
> 

Ah, true. I was still thinking that I needed to keep the old HVMOPs for compatibility, but of course I don't, so yes, I can get rid of the inclusion.

  Paul

> >> > +struct xen_dm_op_ioreq_server_range {
> >> > +    /* IN - server id */
> >> > +    ioservid_t id;
> >> > +    uint16_t __pad;
> >> > +    /* IN - type of range */
> >> > +    uint32_t type;
> >>
> >> Any reason for making this 32 bits wide, instead of 16 (and leaving
> >> 32 for future use)?
> >
> > Not really. I could probably shrink it to 8.
> 
> I wouldn't go that far, as then you'd need two padding fields.
> 
> >> > +struct xen_dm_op_set_ioreq_server_state {
> >> > +    /* IN - server id */
> >> > +    ioservid_t id;
> >> > +    uint16_t __pad;
> >> > +    /* IN - enabled? */
> >> > +    uint8_t enabled;
> >> > +};
> >>
> >> Why 16 bits of padding ahead of an 8-bit field, the more that
> >> ioservid_t is also just 16 bits?
> >>
> >
> > That's a mistake. I'll drop it.
> 
> s/drop/change/ I suppose, as you'll need to add tail padding?
> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-11-18 17:13 ` [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
@ 2016-11-25 11:25   ` Jan Beulich
  2016-11-25 11:32     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 11:25 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	xen-devel, Daniel De Graaf

>>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> @@ -74,6 +76,35 @@ static int dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
>      return 0;
>  }
>  
> +static int dm_op_track_dirty_vram(struct domain *d,
> +                                  unsigned int nr_bufs,
> +                                  XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,

Wouldn't it be more natural for the caller to pass in a pointer to the
already retrieved struct xen_dm_op_buf? The function here has in
particular no other use for nr_bufs.
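
I.e. a signature more along these lines (just a sketch; the parameter
order is a guess):

    static int dm_op_track_dirty_vram(struct domain *d,
                                      xen_pfn_t first_pfn, unsigned int nr,
                                      const struct xen_dm_op_buf *buf);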

> +                                  xen_pfn_t first_pfn, unsigned int nr)
> +{
> +    struct xen_dm_op_buf buf;
> +    int rc;
> +
> +    if ( nr > GB(1) >> PAGE_SHIFT )

Please parenthesize the operands of >>.

> +        return -EINVAL;
> +
> +    if ( d->is_dying )
> +        return -ESRCH;
> +
> +    if ( d->vcpu == NULL || d->vcpu[0] == NULL )

I'd appreciate it if you used ! in cases like these. Also the left side
should check d->max_vcpus, to be more in line with the checking
done elsewhere (albeit I agree we're not consistent with this yet).
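
Something like the following (a sketch, assuming d->vcpu is always
allocated once d->max_vcpus is non-zero):

    if ( !d->max_vcpus || !d->vcpu[0] )
        return -EINVAL;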

> @@ -157,11 +188,19 @@ long do_dm_op(domid_t domid,
>          rc = hvm_destroy_ioreq_server(d, data->id);
>          break;
>      }
> +    case DMOP_track_dirty_vram:
> +    {
> +        struct xen_dm_op_track_dirty_vram *data =
> +            &op.u.track_dirty_vram;
> +
> +        rc = dm_op_track_dirty_vram(d, nr_bufs, bufs, data->first_pfn,
> +                                    data->nr);
> +        break;
> +    }
>      default:
>          rc = -EOPNOTSUPP;
>          break;
>      }
> -
>      if ( rc == -ERESTART )
>          restart = true;

Stray removal of a (imo useful) blank line.

> @@ -178,7 +217,7 @@ out:
>                                             domid, nr_bufs, bufs);
>  
>      return rc;
> -}
> +    }

???

> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -179,6 +179,21 @@ struct xen_dm_op_destroy_ioreq_server {
>      ioservid_t id;
>  };
>  
> +/*
> + * DMOP_track_dirty_vram: Track modifications to the specified pfn range.
> + *
> + * NOTE: The bitmap passed back to the caller is passed in a
> + *       secondary buffer.
> + */
> +#define DMOP_track_dirty_vram 7
> +
> +struct xen_dm_op_track_dirty_vram {
> +    /* IN - number of pages to be tracked */
> +    uint32_t nr;
> +    /* IN - first pfn to track */
> +    uint64_aligned_t first_pfn;
> +};

Missing explicit padding (as well as the check for it to be zero).
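
A possible layout (illustrative only) with the padding made explicit:

    struct xen_dm_op_track_dirty_vram {
        uint32_t nr;                 /* IN - number of pages to be tracked */
        uint32_t pad;                /* must be zero */
        uint64_aligned_t first_pfn;  /* IN - first pfn to track */
    };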

> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -96,6 +96,8 @@ typedef enum {
>  /* Following tools-only interfaces may change in future. */
>  #if defined(__XEN__) || defined(__XEN_TOOLS__)
>  
> +#if __XEN_INTERFACE_VERSION__ < 0x00040900
> +
>  /* Track dirty VRAM. */
>  #define HVMOP_track_dirty_vram    6
>  struct xen_hvm_track_dirty_vram {
> @@ -112,6 +114,8 @@ struct xen_hvm_track_dirty_vram {
>  typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
>  
> +#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */

Same as in the earlier patch - these don't need to be retained. I
guess I'll refrain from mentioning this and the padding thing again,
should they re-occur in subsequent patches.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-11-25 11:25   ` Jan Beulich
@ 2016-11-25 11:32     ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 11:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Ian Jackson, xen-devel, Daniel De Graaf

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 11:26
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; xen-devel@lists.xenproject.org; Daniel
> De Graaf <dgdegra@tycho.nsa.gov>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 3/8] dm_op: convert
> HVMOP_track_dirty_vram
> 
> >>> On 18.11.16 at 18:13, <paul.durrant@citrix.com> wrote:
> > @@ -74,6 +76,35 @@ static int dm_op_copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> >      return 0;
> >  }
> >
> > +static int dm_op_track_dirty_vram(struct domain *d,
> > +                                  unsigned int nr_bufs,
> > +                                  XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> 
> Wouldn't it be more natural for the caller to pass in a pointer to the
> already retrieved struct xen_dm_op_buf? The function here has in
> particular no other use for nr_bufs.

I could do it that way but I think it makes the switch statement in do_dm_op() more cluttered.

> 
> > +                                  xen_pfn_t first_pfn, unsigned int nr)
> > +{
> > +    struct xen_dm_op_buf buf;
> > +    int rc;
> > +
> > +    if ( nr > GB(1) >> PAGE_SHIFT )
> 
> Please parenthesize the operands of >>.
> 

Ok.

> > +        return -EINVAL;
> > +
> > +    if ( d->is_dying )
> > +        return -ESRCH;
> > +
> > +    if ( d->vcpu == NULL || d->vcpu[0] == NULL )
> 
> I'd appreciate it if you used ! in cases like these. Also the left side
> should check d->max_vcpus, to be more in line with the checking
> done elsewhere (albeit I agree we're not consistent with this yet).

Sure.

> 
> > @@ -157,11 +188,19 @@ long do_dm_op(domid_t domid,
> >          rc = hvm_destroy_ioreq_server(d, data->id);
> >          break;
> >      }
> > +    case DMOP_track_dirty_vram:
> > +    {
> > +        struct xen_dm_op_track_dirty_vram *data =
> > +            &op.u.track_dirty_vram;
> > +
> > +        rc = dm_op_track_dirty_vram(d, nr_bufs, bufs, data->first_pfn,
> > +                                    data->nr);
> > +        break;
> > +    }
> >      default:
> >          rc = -EOPNOTSUPP;
> >          break;
> >      }
> > -
> >      if ( rc == -ERESTART )
> >          restart = true;
> 
> Stray removal of a (imo useful) blank line.
> 

Yep. That should not have happened.

> > @@ -178,7 +217,7 @@ out:
> >                                             domid, nr_bufs, bufs);
> >
> >      return rc;
> > -}
> > +    }
> 
> ???

Yes, weird. Emacs must have decided to indent it for some reason.

> 
> > --- a/xen/include/public/hvm/dm_op.h
> > +++ b/xen/include/public/hvm/dm_op.h
> > @@ -179,6 +179,21 @@ struct xen_dm_op_destroy_ioreq_server {
> >      ioservid_t id;
> >  };
> >
> > +/*
> > + * DMOP_track_dirty_vram: Track modifications to the specified pfn
> range.
> > + *
> > + * NOTE: The bitmap passed back to the caller is passed in a
> > + *       secondary buffer.
> > + */
> > +#define DMOP_track_dirty_vram 7
> > +
> > +struct xen_dm_op_track_dirty_vram {
> > +    /* IN - number of pages to be tracked */
> > +    uint32_t nr;
> > +    /* IN - first pfn to track */
> > +    uint64_aligned_t first_pfn;
> > +};
> 
> Missing explicit padding (as well as the check for it to be zero).

Ok.

> 
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -96,6 +96,8 @@ typedef enum {
> >  /* Following tools-only interfaces may change in future. */
> >  #if defined(__XEN__) || defined(__XEN_TOOLS__)
> >
> > +#if __XEN_INTERFACE_VERSION__ < 0x00040900
> > +
> >  /* Track dirty VRAM. */
> >  #define HVMOP_track_dirty_vram    6
> >  struct xen_hvm_track_dirty_vram {
> > @@ -112,6 +114,8 @@ struct xen_hvm_track_dirty_vram {
> >  typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
> >  DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
> >
> > +#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
> 
> Same as in the earlier patch - these don't need to be retained. I
> guess I'll refrain from mentioning this and the padding thing again,
> should they re-occur in subsequent patches.

Indeed. I'll check what was and wasn't tools-only and bin anything that wasn't exposed to a guest.

  Paul

> 
> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
@ 2016-11-25 11:49   ` Jan Beulich
  2016-11-25 11:55     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 11:49 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -105,6 +105,60 @@ static int dm_op_track_dirty_vram(struct domain *d,
>          hap_track_dirty_vram(d, first_pfn, nr, buf.h);
>  }
>  
> +static int dm_op_set_pci_intx_level(struct domain *d, uint8_t domain,
> +                                    uint8_t bus, uint8_t device,
> +                                    uint8_t intx, uint8_t level)

Btw., none of these static helper functions have any need to have
a dm_op_ prefix. Such disambiguation is needed only for non-
static ones.

> +static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
> +                                    uint8_t isa_irq)
> +{
> +    if ( link > 3 || isa_irq > 15 )
> +        return -EINVAL;
> +
> +    hvm_set_pci_link_route(d, link, isa_irq);
> +
> +    return 0;
> +}

In the hvmctl series I did specifically avoid creating this trivial a
wrapper for a function with no other callers. Simply move the
argument range checks there.

> +struct xen_dm_op_set_pci_intx_level {
> +    /* IN - PCI INTx identification (domain:bus:device:intx) */
> +    uint8_t  domain, bus, device, intx;

I've just now noticed that I did overlook this in the hvmctl series
too: domain should be uint16_t.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-11-25 11:49   ` Jan Beulich
@ 2016-11-25 11:55     ` Paul Durrant
  2016-11-25 12:26       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 11:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 11:50
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 4/8] dm_op: convert
> HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/dm.c
> > +++ b/xen/arch/x86/hvm/dm.c
> > @@ -105,6 +105,60 @@ static int dm_op_track_dirty_vram(struct domain
> *d,
> >          hap_track_dirty_vram(d, first_pfn, nr, buf.h);
> >  }
> >
> > +static int dm_op_set_pci_intx_level(struct domain *d, uint8_t domain,
> > +                                    uint8_t bus, uint8_t device,
> > +                                    uint8_t intx, uint8_t level)
> 
> Btw., none of these static helper functions have any need to have
> a dm_op_ prefix. Such disambiguation is needed only for non-
> static ones.

It makes it easier to find them quickly with cscope, but I can drop the prefix if you prefer.

> 
> > +static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
> > +                                    uint8_t isa_irq)
> > +{
> > +    if ( link > 3 || isa_irq > 15 )
> > +        return -EINVAL;
> > +
> > +    hvm_set_pci_link_route(d, link, isa_irq);
> > +
> > +    return 0;
> > +}
> 
> In the hvmctl series I did specifically avoid creating this trivial a
> wrapper for a function with no other callers. Simply move the
> argument range checks there.
> 

Ok, I'll inline it in the switch if you prefer that style.

> > +struct xen_dm_op_set_pci_intx_level {
> > +    /* IN - PCI INTx identification (domain:bus:device:intx) */
> > +    uint8_t  domain, bus, device, intx;
> 
> I've just now noticed that I did overlook this in the hvmctl series
> too: domain should be uint16_t.
> 

Indeed it should.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-11-25 11:55     ` Paul Durrant
@ 2016-11-25 12:26       ` Jan Beulich
  2016-11-25 13:07         ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 12:26 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

>>> On 25.11.16 at 12:55, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 25 November 2016 11:50
>> To: Paul Durrant <Paul.Durrant@citrix.com>
>> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
>> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
>> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
>> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 4/8] dm_op: convert
>> HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
>> 
>> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
>> > --- a/xen/arch/x86/hvm/dm.c
>> > +++ b/xen/arch/x86/hvm/dm.c
>> > @@ -105,6 +105,60 @@ static int dm_op_track_dirty_vram(struct domain
>> *d,
>> >          hap_track_dirty_vram(d, first_pfn, nr, buf.h);
>> >  }
>> >
>> > +static int dm_op_set_pci_intx_level(struct domain *d, uint8_t domain,
>> > +                                    uint8_t bus, uint8_t device,
>> > +                                    uint8_t intx, uint8_t level)
>> 
>> Btw., none of these static helper functions have any need to have
>> a dm_op_ prefix. Such disambiguation is needed only for non-
>> static ones.
> 
> It makes it easier to find them quickly with cscope, but I can drop the
> prefix if you prefer.

Thanks.

>> > +static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
>> > +                                    uint8_t isa_irq)
>> > +{
>> > +    if ( link > 3 || isa_irq > 15 )
>> > +        return -EINVAL;
>> > +
>> > +    hvm_set_pci_link_route(d, link, isa_irq);
>> > +
>> > +    return 0;
>> > +}
>> 
>> In the hvmctl series I did specifically avoid creating this trivial a
>> wrapper for a function with no other callers. Simply move the
>> argument range checks there.
> 
> Ok, I'll inline it in the switch if you prefer that style.

I didn't mean to inline the above in the case statement, but to
move the argument checking to hvm_set_pci_link_route() (and
call it directly from the case statement).
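
I.e. roughly the following (a sketch which assumes
hvm_set_pci_link_route() is changed to return int, and that the DMOP
structure carries "link" and "isa_irq" fields):

    int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
    {
        if ( (link > 3) || (isa_irq > 15) )
            return -EINVAL;

        /* ... existing routing update, unchanged ... */

        return 0;
    }

and in the do_dm_op() switch:

    case DMOP_set_pci_link_route:
    {
        const struct xen_dm_op_set_pci_link_route *data =
            &op.u.set_pci_link_route;

        rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
        break;
    }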

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-11-25 12:26       ` Jan Beulich
@ 2016-11-25 13:07         ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 13:07 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
[snip] 
> >> > +static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
> >> > +                                    uint8_t isa_irq)
> >> > +{
> >> > +    if ( link > 3 || isa_irq > 15 )
> >> > +        return -EINVAL;
> >> > +
> >> > +    hvm_set_pci_link_route(d, link, isa_irq);
> >> > +
> >> > +    return 0;
> >> > +}
> >>
> >> In the hvmctl series I did specifically avoid creating this trivial a
> >> wrapper for a function with no other callers. Simply move the
> >> argument range checks there.
> >
> > Ok, I'll inline in the switch if you prefer that style.
> 
> I didn't mean to inline the above in the case statement, but to
> move the argument checking to hvm_set_pci_link_route() (and
> call it directly from the case statement).
> 

Oh, I see. Yes, that would be much better.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
@ 2016-11-25 13:25   ` Jan Beulich
  2016-11-25 13:31     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 13:25 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -17,6 +17,7 @@
>  #include <xen/hypercall.h>
>  #include <xen/guest_access.h>
>  #include <xen/sched.h>
> +#include <xen/event.h>

I should have noticed before, but it's more evident here: May I ask
that you sort the xen/ and asm/ subgroups, rather than always
appending at the respective one's end? With sorted #include
directives the risk of merge conflicts reduces statistically.

> +static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
> +                                 unsigned int *nr)
> +{
> +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> +    unsigned int iter;
> +    int rc;
> +
> +    if ( (*first_pfn > last_pfn) ||
> +         (last_pfn > domain_get_maximum_gpfn(d)) )
> +        return -EINVAL;
> +
> +    if ( !paging_mode_log_dirty(d) )
> +        return 0;
> +
> +    iter = 0;
> +    rc = 0;
> +    while ( iter < *nr )
> +    {
> +        unsigned long pfn = *first_pfn + iter;
> +        struct page_info *page;
> +
> +        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
> +        if ( page )
> +        {
> +            paging_mark_dirty(d, page_to_mfn(page));
> +            /* These are most probably not page tables any more */
> +            /* don't take a long time and don't die either */

Please fix the comment style.

> +            sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
> +            put_page(page);
> +        }
> +
> +        /* Check for continuation if it's not the last interation */
> +        if ( (++iter < *nr) && hypercall_preempt_check() )

Please avoid checking on every iteration. In the hvmctl series I
did so every 256th one.
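
For example (a sketch only), gating the check so it is done on every
256th page rather than on each one:

    if ( (++iter < *nr) && !(iter & 0xff) && hypercall_preempt_check() )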

> +        {
> +            rc = -ERESTART;
> +            break;
> +        }
> +    }
> +
> +    *first_pfn += iter;
> +    *nr -= iter;

So this is not the standard way we handle continuations: We try
hard to avoid modifying interface structures. This being a new
interface, I don't mind deviation (as it simplifies the implementation),
but this needs to be spelled out prominently in the header, to
avoid people assuming IN fields won't get modified.
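
Something along these lines in the public header would do (suggested
wording only):

    /*
     * NOTE: In the event of a continuation (i.e. hypercall preemption),
     *       the hypervisor updates 'first_pfn' and 'nr' in place to
     *       record progress, so callers must not rely on IN fields
     *       being preserved across the call.
     */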

> +struct xen_dm_op_modified_memory {
> +    /* IN - number of contiguous pages modified */
> +    uint32_t nr;
> +    /* IN - first pfn modified */
> +    uint64_t first_pfn;

Alignment missing.  (At this point I can't resist stating that it
probably wouldn't have hurt if you had taken a little more of that
original series, as a number of comments I find myself making
are a result of comparing your code with my original.)

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory
  2016-11-25 13:25   ` Jan Beulich
@ 2016-11-25 13:31     ` Paul Durrant
  2016-11-25 13:56       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 13:31 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 13:25
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 5/8] dm_op: convert
> HVMOP_modified_memory
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> > --- a/xen/arch/x86/hvm/dm.c
> > +++ b/xen/arch/x86/hvm/dm.c
> > @@ -17,6 +17,7 @@
> >  #include <xen/hypercall.h>
> >  #include <xen/guest_access.h>
> >  #include <xen/sched.h>
> > +#include <xen/event.h>
> 
> I should have noticed before, but it's more evident here: May I ask
> that you sort the xen/ and asm/ subgroups, rather than always
> appending at the respective one's end? With sorted #include
> directives the risk of merge conflicts reduces statistically.
> 

Sure.

> > +static int dm_op_modified_memory(struct domain *d, xen_pfn_t
> *first_pfn,
> > +                                 unsigned int *nr)
> > +{
> > +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> > +    unsigned int iter;
> > +    int rc;
> > +
> > +    if ( (*first_pfn > last_pfn) ||
> > +         (last_pfn > domain_get_maximum_gpfn(d)) )
> > +        return -EINVAL;
> > +
> > +    if ( !paging_mode_log_dirty(d) )
> > +        return 0;
> > +
> > +    iter = 0;
> > +    rc = 0;
> > +    while ( iter < *nr )
> > +    {
> > +        unsigned long pfn = *first_pfn + iter;
> > +        struct page_info *page;
> > +
> > +        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
> > +        if ( page )
> > +        {
> > +            paging_mark_dirty(d, page_to_mfn(page));
> > +            /* These are most probably not page tables any more */
> > +            /* don't take a long time and don't die either */
> 
> Please fix the comment style.
> 

Yes, I'll do that.

> > +            sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
> > +            put_page(page);
> > +        }
> > +
> > +        /* Check for continuation if it's not the last interation */
> > +        if ( (++iter < *nr) && hypercall_preempt_check() )
> 
> Please avoid checking on every iteration. In the hvmctl series I
> did so every 256th one.
> 

Ok.

> > +        {
> > +            rc = -ERESTART;
> > +            break;
> > +        }
> > +    }
> > +
> > +    *first_pfn += iter;
> > +    *nr -= iter;
> 
> So this is not the standard way we handle continuations: We try
> hard to avoid modifying interface structures. This being a new
> interface, I don't mind deviation (as it simplifies the implementation),
> but this needs to be spelled out prominently in the header, to
> avoid people assuming IN fields won't get modified.
> 

OK, I'll add an explanatory note somewhere about how to deal with -ERESTART for this and for mem type setting.

> > +struct xen_dm_op_modified_memory {
> > +    /* IN - number of contiguous pages modified */
> > +    uint32_t nr;
> > +    /* IN - first pfn modified */
> > +    uint64_t first_pfn;
> 
> Alignment missing.  (At this point I can't resist stating that it
> probably wouldn't have hurt if you had taken a little more of that
> original series, as a number of comments I find myself making
> are a result of comparing your code with my original.)
> 

OK. I was pulling across from hvm_op in the same tree rather than from your patches (as it happened, I didn't have them on the same machine). I'll cross-check the op definitions.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
@ 2016-11-25 13:50   ` Jan Beulich
  2016-11-25 14:00     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 13:50 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> --- a/tools/libxc/xc_misc.c
> +++ b/tools/libxc/xc_misc.c
> @@ -584,28 +584,18 @@ int xc_hvm_modified_memory(
>  int xc_hvm_set_mem_type(
>      xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr)
>  {
> -    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
> -    int rc;
> -
> -    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> -    if ( arg == NULL )
> -    {
> -        PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall");
> -        return -1;
> -    }
> +    struct xen_dm_op op;
> +    struct xen_dm_op_set_mem_type *data;
>  
> -    arg->domid        = dom;
> -    arg->hvmmem_type  = mem_type;
> -    arg->first_pfn    = first_pfn;
> -    arg->nr           = nr;
> +    op.op = DMOP_set_mem_type;
> +    data = &op.u.set_mem_type;
>  
> -    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
> -                  HVMOP_set_mem_type,
> -                  HYPERCALL_BUFFER_AS_ARG(arg));
> -
> -    xc_hypercall_buffer_free(xch, arg);
> +    data->mem_type = mem_type;
> +    data->first_pfn = first_pfn;
> +    /* NOTE: The following assignment truncates nr to 32-bits */
> +    data->nr = nr;

What a strange comment. Why don't you - again, as done in the
hvmctl series - simply correct the function's parameter type?
(Same for xc_hvm_track_dirty_vram() and
xc_hvm_modified_memory() then.)
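
I.e. something like (a sketch of the prototype only; uint32_t is assumed
as the width matching the DMOP field):

    int xc_hvm_set_mem_type(xc_interface *xch, domid_t dom,
                            hvmmem_type_t mem_type, uint64_t first_pfn,
                            uint32_t nr);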

> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -160,6 +160,16 @@ static int dm_op_set_pci_link_route(struct domain *d, uint8_t link,
>      return 0;
>  }
>  
> +static bool_t dm_op_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)

bool

> +{
> +    if ( p2m_is_ram(old) ||
> +         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
> +         (old == p2m_ioreq_server && new == p2m_ram_rw) )
> +        return 1;

true

> +
> +    return 0;

false (or perhaps even better a simple return statement, and perhaps
you can by now guess where I could refer you)
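
I.e. the helper could simply be (sketch):

    static bool dm_op_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
    {
        return p2m_is_ram(old) ||
               (p2m_is_hole(old) && new == p2m_mmio_dm) ||
               (old == p2m_ioreq_server && new == p2m_ram_rw);
    }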

> +}
> +
>  static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,

Any reason for putting it here rather than ...

> @@ -205,6 +215,79 @@ static int dm_op_modified_memory(struct domain *d, xen_pfn_t *first_pfn,
>      return rc;
>  }
>  
> +
> +static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t mem_type,
> +                              xen_pfn_t *first_pfn, unsigned int *nr)

... right before this one (the helper of which it is)?

Also please don't add a second blank line above the function.

> +{
> +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> +    unsigned int iter;
> +    int rc;
> +
> +    /* Interface types to internal p2m types */
> +    static const p2m_type_t memtype[] = {
> +        [HVMMEM_ram_rw]  = p2m_ram_rw,
> +        [HVMMEM_ram_ro]  = p2m_ram_ro,
> +        [HVMMEM_mmio_dm] = p2m_mmio_dm,
> +        [HVMMEM_unused] = p2m_invalid,
> +        [HVMMEM_ioreq_server] = p2m_ioreq_server
> +    };
> +
> +    if ( (*first_pfn > last_pfn) ||
> +         (last_pfn > domain_get_maximum_gpfn(d)) )
> +        return -EINVAL;
> +
> +    if ( mem_type >= ARRAY_SIZE(memtype) ||
> +         unlikely(mem_type == HVMMEM_unused) )
> +        return -EINVAL;
> +
> +    iter = 0;
> +    rc = 0;
> +    while ( iter < *nr )
> +    {
> +        unsigned long pfn = *first_pfn + iter;
> +        p2m_type_t t;
> +
> +        get_gfn_unshare(d, pfn, &t);

Note the disagreement between function and parameter names.

> +        if ( p2m_is_paging(t) )
> +        {
> +            put_gfn(d, pfn);
> +            p2m_mem_paging_populate(d, pfn);
> +            rc = -EAGAIN;
> +            break;
> +        }
> +        if ( p2m_is_shared(t) )
> +        {
> +            put_gfn(d, pfn);
> +            rc = -EAGAIN;
> +            break;
> +        }
> +        if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
> +        {
> +            put_gfn(d, pfn);
> +            rc = -EINVAL;
> +            break;
> +        }

Why can't all of these simply be return statements?

> +        rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
> +        put_gfn(d, pfn);
> +
> +        if ( rc )
> +            break;

Or, again as done you know where, fold some of those redundant
put_gfn()s as well, by using if/else-if.
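
Roughly like this (an illustrative sketch of the loop body after the
get_gfn_unshare() call; the paging case keeps its own put_gfn() so the
existing drop-reference-before-populate ordering is preserved):

    if ( p2m_is_paging(t) )
    {
        put_gfn(d, pfn);
        p2m_mem_paging_populate(d, pfn);
        rc = -EAGAIN;
        break;
    }

    if ( p2m_is_shared(t) )
        rc = -EAGAIN;
    else if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
        rc = -EINVAL;
    else
        rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);

    put_gfn(d, pfn);    /* single put for all remaining paths */

    if ( rc )
        break;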

> +struct xen_dm_op_set_mem_type {
> +    /* IN - number of contiguous pages */
> +    uint32_t nr;
> +    /* IN - first pfn in region */
> +    uint64_t first_pfn;
> +    /* IN - new hvmmem_type_t of region */
> +    uint16_t mem_type;
> +};

mem_type should be moved up, first_pfn be aligned, and explicit
padding be inserted.
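
I.e. something along the lines of (sketch only, "pad" being a
placeholder name):

    struct xen_dm_op_set_mem_type {
        uint32_t nr;                 /* IN - number of contiguous pages */
        uint16_t mem_type;           /* IN - new hvmmem_type_t of region */
        uint16_t pad;                /* must be zero */
        uint64_aligned_t first_pfn;  /* IN - first pfn in region */
    };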

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory
  2016-11-25 13:31     ` Paul Durrant
@ 2016-11-25 13:56       ` Jan Beulich
  0 siblings, 0 replies; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 13:56 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

>>> On 25.11.16 at 14:31, <Paul.Durrant@citrix.com> wrote:
> OK. I was pulling across from hvm_op in the same tree rather than your 
> patches (as I didn't have them in on the same machine as it happened). I'll 
> cross-check the op definitions.

Oh, so you've re-done everything instead of evolving it. Interesting.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-25 13:50   ` Jan Beulich
@ 2016-11-25 14:00     ` Paul Durrant
  2016-11-25 14:16       ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 14:00 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 13:51
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 6/8] dm_op: convert
> HVMOP_set_mem_type
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> > --- a/tools/libxc/xc_misc.c
> > +++ b/tools/libxc/xc_misc.c
> > @@ -584,28 +584,18 @@ int xc_hvm_modified_memory( int
> xc_hvm_set_mem_type(
> >      xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t
> first_pfn, uint64_t nr)
> >  {
> > -    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
> > -    int rc;
> > -
> > -    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> > -    if ( arg == NULL )
> > -    {
> > -        PERROR("Could not allocate memory for xc_hvm_set_mem_type
> hypercall");
> > -        return -1;
> > -    }
> > +    struct xen_dm_op op;
> > +    struct xen_dm_op_set_mem_type *data;
> >
> > -    arg->domid        = dom;
> > -    arg->hvmmem_type  = mem_type;
> > -    arg->first_pfn    = first_pfn;
> > -    arg->nr           = nr;
> > +    op.op = DMOP_set_mem_type;
> > +    data = &op.u.set_mem_type;
> >
> > -    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
> > -                  HVMOP_set_mem_type,
> > -                  HYPERCALL_BUFFER_AS_ARG(arg));
> > -
> > -    xc_hypercall_buffer_free(xch, arg);
> > +    data->mem_type = mem_type;
> > +    data->first_pfn = first_pfn;
> > +    /* NOTE: The following assignment truncates nr to 32-bits */
> > +    data->nr = nr;
> 
> What strange a comment. Why don't you - again as done in the
> hvmctl series - simply correct the function's parameter type?
> (Same for xc_hvm_track_dirty_vram() and
> xc_hvm_modified_memory() then.)

Because that may cause compiler warnings in clients when they grab the new version of the header. I didn't want to have any adverse effect, so just commenting that the value was being truncated (as it always has been) seemed like the best thing to do.

> 
> > --- a/xen/arch/x86/hvm/dm.c
> > +++ b/xen/arch/x86/hvm/dm.c
> > @@ -160,6 +160,16 @@ static int dm_op_set_pci_link_route(struct domain
> *d, uint8_t link,
> >      return 0;
> >  }
> >
> > +static bool_t dm_op_allow_p2m_type_change(p2m_type_t old,
> p2m_type_t new)
> 
> bool
> 

Ok.

> > +{
> > +    if ( p2m_is_ram(old) ||
> > +         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
> > +         (old == p2m_ioreq_server && new == p2m_ram_rw) )
> > +        return 1;
> 
> true
> 
> > +
> > +    return 0;
> 
> false (or perhaps even better a simple return statement, and perhaps
> you can by now guess where I could refer you)
> 
> > +}
> > +
> >  static int dm_op_modified_memory(struct domain *d, xen_pfn_t
> *first_pfn,
> 
> Any reason for putting it here rather than ...
> 
> > @@ -205,6 +215,79 @@ static int dm_op_modified_memory(struct domain
> *d, xen_pfn_t *first_pfn,
> >      return rc;
> >  }
> >
> > +
> > +static int dm_op_set_mem_type(struct domain *d, hvmmem_type_t
> mem_type,
> > +                              xen_pfn_t *first_pfn, unsigned int *nr)
> 
> ... right before this one (the helper of which it is)?
> 

No, I can move it.

> Also please don't add a second blank line above the function.
> 

Yes, that's a mistake.

> > +{
> > +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> > +    unsigned int iter;
> > +    int rc;
> > +
> > +    /* Interface types to internal p2m types */
> > +    static const p2m_type_t memtype[] = {
> > +        [HVMMEM_ram_rw]  = p2m_ram_rw,
> > +        [HVMMEM_ram_ro]  = p2m_ram_ro,
> > +        [HVMMEM_mmio_dm] = p2m_mmio_dm,
> > +        [HVMMEM_unused] = p2m_invalid,
> > +        [HVMMEM_ioreq_server] = p2m_ioreq_server
> > +    };
> > +
> > +    if ( (*first_pfn > last_pfn) ||
> > +         (last_pfn > domain_get_maximum_gpfn(d)) )
> > +        return -EINVAL;
> > +
> > +    if ( mem_type >= ARRAY_SIZE(memtype) ||
> > +         unlikely(mem_type == HVMMEM_unused) )
> > +        return -EINVAL;
> > +
> > +    iter = 0;
> > +    rc = 0;
> > +    while ( iter < *nr )
> > +    {
> > +        unsigned long pfn = *first_pfn + iter;
> > +        p2m_type_t t;
> > +
> > +        get_gfn_unshare(d, pfn, &t);
> 
> Note the disagreement between function and parameter names.
> 

It was inherited but I'll correct it.

> > +        if ( p2m_is_paging(t) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            p2m_mem_paging_populate(d, pfn);
> > +            rc = -EAGAIN;
> > +            break;
> > +        }
> > +        if ( p2m_is_shared(t) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            rc = -EAGAIN;
> > +            break;
> > +        }
> > +        if ( !dm_op_allow_p2m_type_change(t, memtype[mem_type]) )
> > +        {
> > +            put_gfn(d, pfn);
> > +            rc = -EINVAL;
> > +            break;
> > +        }
> 
> Why can't all of these simply be return statements?
> 
> > +        rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
> > +        put_gfn(d, pfn);
> > +
> > +        if ( rc )
> > +            break;
> 
> Or, again as done you know where, fold some of those redundant
> put_gfn()s as well, by using if/else-if.

That would be neater.

> 
> > +struct xen_dm_op_set_mem_type {
> > +    /* IN - number of contiguous pages */
> > +    uint32_t nr;
> > +    /* IN - first pfn in region */
> > +    uint64_t first_pfn;
> > +    /* IN - new hvmmem_type_t of region */
> > +    uint16_t mem_type;
> > +};
> 
> mem_type should be moved up, first_pfn be aligned, and explicit
> padding be inserted.

OK.

  Paul

> 
> Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-11-18 17:14 ` [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
@ 2016-11-25 14:07   ` Jan Beulich
  2016-11-25 14:13     ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 14:07 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> +static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
> +                             uint16_t vector, uint8_t type,
> +                             uint8_t insn_len, uint32_t error_code,
> +                             unsigned long cr2)
> +{
> +    struct vcpu *v;
> +
> +    if ( vector > INT16_MAX )
> +        return -EINVAL;

Please limit vector to uint8_t and delete this strange (architecturally
wrong) check.

> +    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
> +        return -EINVAL;

ENOENT (to make error reasons distinguishable for the caller)?

> +    case DMOP_inject_msi:
> +    {
> +        struct xen_dm_op_inject_msi *data =
> +            &op.u.inject_msi;
> +
> +        rc = hvm_inject_msi(d, data->addr, data->data);

Line length clearly is not an issue here, but if you want to keep
the helper variable, then please constify it (which I guess would
apply to some of the earlier patches too).

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 39+ messages in thread

* Re: [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-11-25 14:07   ` Jan Beulich
@ 2016-11-25 14:13     ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 14:13 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 14:08
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; Ian Jackson <Ian.Jackson@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [Xen-devel] [PATCH-for-4.9 v1 7/8] dm_op: convert
> HVMOP_inject_trap and HVMOP_inject_msi
> 
> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> > +static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
> > +                             uint16_t vector, uint8_t type,
> > +                             uint8_t insn_len, uint32_t error_code,
> > +                             unsigned long cr2)
> > +{
> > +    struct vcpu *v;
> > +
> > +    if ( vector > INT16_MAX )
> > +        return -EINVAL;
> 
> Please limit vector to uint8_t and delete this strange (architecturally
> wrong) check.

This check is only meant to ensure that the field assignment below it doesn't overflow. IIRC the old code didn't even do that. I'll limit the parameter to uint8_t as you suggest, though.

> 
> > +    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
> > +        return -EINVAL;
> 
> ENOENT (to make error reasons distinguishable for the caller)?
> 

Ok.
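
So the helper would presumably end up along these lines (rough sketch
with both of the above changes; everything below the checks is as in the
patch and elided here):

    static int dm_op_inject_trap(struct domain *d, unsigned int vcpuid,
                                 uint8_t vector, uint8_t type,
                                 uint8_t insn_len, uint32_t error_code,
                                 unsigned long cr2)
    {
        struct vcpu *v;

        if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
            return -ENOENT; /* distinguishable from the -EINVAL cases */

        /* ... set up the trap injection exactly as in the patch ... */

        return 0;
    }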

> > +    case DMOP_inject_msi:
> > +    {
> > +        struct xen_dm_op_inject_msi *data =
> > +            &op.u.inject_msi;
> > +
> > +        rc = hvm_inject_msi(d, data->addr, data->data);
> 
> Line length clearly is not an issue here, but if you want to keep
> the helper variable, then please constify it (which I guess would
> apply to some of the earlier patches too).

Ok. I'll try to const everything that can't be subject to a continuation.
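
I.e. for this case simply:

    const struct xen_dm_op_inject_msi *data =
        &op.u.inject_msi;

    rc = hvm_inject_msi(d, data->addr, data->data);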

  Paul

> 
> Jan



* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-25 14:00     ` Paul Durrant
@ 2016-11-25 14:16       ` Jan Beulich
  2016-11-25 14:20         ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 14:16 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

>>> On 25.11.16 at 15:00, <Paul.Durrant@citrix.com> wrote:
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 25 November 2016 13:51
>> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
>> > --- a/tools/libxc/xc_misc.c
>> > +++ b/tools/libxc/xc_misc.c
>> > @@ -584,28 +584,18 @@ int xc_hvm_modified_memory( int
>> xc_hvm_set_mem_type(
>> >      xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t
>> first_pfn, uint64_t nr)
>> >  {
>> > -    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
>> > -    int rc;
>> > -
>> > -    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
>> > -    if ( arg == NULL )
>> > -    {
>> > -        PERROR("Could not allocate memory for xc_hvm_set_mem_type
>> hypercall");
>> > -        return -1;
>> > -    }
>> > +    struct xen_dm_op op;
>> > +    struct xen_dm_op_set_mem_type *data;
>> >
>> > -    arg->domid        = dom;
>> > -    arg->hvmmem_type  = mem_type;
>> > -    arg->first_pfn    = first_pfn;
>> > -    arg->nr           = nr;
>> > +    op.op = DMOP_set_mem_type;
>> > +    data = &op.u.set_mem_type;
>> >
>> > -    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
>> > -                  HVMOP_set_mem_type,
>> > -                  HYPERCALL_BUFFER_AS_ARG(arg));
>> > -
>> > -    xc_hypercall_buffer_free(xch, arg);
>> > +    data->mem_type = mem_type;
>> > +    data->first_pfn = first_pfn;
>> > +    /* NOTE: The following assignment truncates nr to 32-bits */
>> > +    data->nr = nr;
>> 
>> What strange a comment. Why don't you - again as done in the
>> hvmctl series - simply correct the function's parameter type?
>> (Same for xc_hvm_track_dirty_vram() and
>> xc_hvm_modified_memory() then.)
> 
> Because that may cause compiler warnings in clients when they grab the new 
> version of the header. I didn't want to have any adverse effect so just 
> commenting that the value was being truncated (as it always has been) seemed 
> like the best thing to do.

Well, maybe the tool stack maintainers think differently now, but
for those libxc interface changes I had Wei's R-b already back then.
In any case the present choice of types is plain wrong, and I
think it's better for consumers of the API to be warned about the
possible truncation by their compilers than for the library to
truncate silently.
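
I.e. presumably something like this (the final form is of course up to
the tool stack maintainers):

    int xc_hvm_set_mem_type(xc_interface *xch, domid_t dom,
                            hvmmem_type_t mem_type,
                            uint64_t first_pfn, uint32_t nr);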

Jan



* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-25 14:16       ` Jan Beulich
@ 2016-11-25 14:20         ` Paul Durrant
  2016-11-25 14:46           ` Jan Beulich
  0 siblings, 1 reply; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 14:20 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 14:16
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel De Graaf <dgdegra@tycho.nsa.gov>
> Subject: RE: [Xen-devel] [PATCH-for-4.9 v1 6/8] dm_op: convert
> HVMOP_set_mem_type
> 
> >>> On 25.11.16 at 15:00, <Paul.Durrant@citrix.com> wrote:
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 25 November 2016 13:51
> >> >>> On 18.11.16 at 18:14, <paul.durrant@citrix.com> wrote:
> >> > --- a/tools/libxc/xc_misc.c
> >> > +++ b/tools/libxc/xc_misc.c
> >> > @@ -584,28 +584,18 @@ int xc_hvm_modified_memory( int
> >> xc_hvm_set_mem_type(
> >> >      xc_interface *xch, domid_t dom, hvmmem_type_t mem_type,
> uint64_t
> >> first_pfn, uint64_t nr)
> >> >  {
> >> > -    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type,
> arg);
> >> > -    int rc;
> >> > -
> >> > -    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
> >> > -    if ( arg == NULL )
> >> > -    {
> >> > -        PERROR("Could not allocate memory for xc_hvm_set_mem_type
> >> hypercall");
> >> > -        return -1;
> >> > -    }
> >> > +    struct xen_dm_op op;
> >> > +    struct xen_dm_op_set_mem_type *data;
> >> >
> >> > -    arg->domid        = dom;
> >> > -    arg->hvmmem_type  = mem_type;
> >> > -    arg->first_pfn    = first_pfn;
> >> > -    arg->nr           = nr;
> >> > +    op.op = DMOP_set_mem_type;
> >> > +    data = &op.u.set_mem_type;
> >> >
> >> > -    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
> >> > -                  HVMOP_set_mem_type,
> >> > -                  HYPERCALL_BUFFER_AS_ARG(arg));
> >> > -
> >> > -    xc_hypercall_buffer_free(xch, arg);
> >> > +    data->mem_type = mem_type;
> >> > +    data->first_pfn = first_pfn;
> >> > +    /* NOTE: The following assignment truncates nr to 32-bits */
> >> > +    data->nr = nr;
> >>
> >> What strange a comment. Why don't you - again as done in the
> >> hvmctl series - simply correct the function's parameter type?
> >> (Same for xc_hvm_track_dirty_vram() and
> >> xc_hvm_modified_memory() then.)
> >
> > Because that may cause compiler warnings in clients when they grab the
> new
> > version of the header. I didn't want to have any adverse effect so just
> > commenting that the value was being truncated (as it always has been)
> seemed
> > like the best thing to do.
> 
> Well, maybe the tool stack maintainers think differently now, but
> for those libxc interface changes I had Wei's R-b already back then.
> In any case the present choice of types is plain wrong, and I
> think it's better for consumers of the API to be warned about the
> possible truncation by their compilers than for the library to
> truncate silently.
> 

Ok, if you already had agreement from a toolstack maintainer then I'll change the header.

  Paul

> Jan



* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-25 14:20         ` Paul Durrant
@ 2016-11-25 14:46           ` Jan Beulich
  2016-11-25 14:56             ` Paul Durrant
  0 siblings, 1 reply; 39+ messages in thread
From: Jan Beulich @ 2016-11-25 14:46 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel DeGraaf, Wei Liu, xen-devel, Ian Jackson

>>> On 25.11.16 at 15:20, <Paul.Durrant@citrix.com> wrote:
> Ok, if you already had agreement from a toolstack maintainer then I'll 
> change the header.

Well, I had handed you the hvmctl patches, which I'm pretty sure had
all the relevant tags accumulated.

Jan



* Re: [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type
  2016-11-25 14:46           ` Jan Beulich
@ 2016-11-25 14:56             ` Paul Durrant
  0 siblings, 0 replies; 39+ messages in thread
From: Paul Durrant @ 2016-11-25 14:56 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel DeGraaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 November 2016 14:47
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Wei Liu <wei.liu2@citrix.com>; xen-
> devel@lists.xenproject.org; Daniel DeGraaf <dgdegra@tycho.nsa.gov>
> Subject: RE: [Xen-devel] [PATCH-for-4.9 v1 6/8] dm_op: convert
> HVMOP_set_mem_type
> 
> >>> On 25.11.16 at 15:20, <Paul.Durrant@citrix.com> wrote:
> > Ok, if you already had agreement from a toolstack maintainer then I'll
> > change the header.
> 
> Well, I had handed you the hvmctl patches, which I'm pretty sure had
> all the relevant tags accumulated.
> 

Oh. They must have been stripped when I attempted to git am the patches onto my branch and had to fix things up manually.

  Paul

> Jan



Thread overview: 39+ messages
2016-11-18 17:13 [PATCH-for-4.9 v1 0/8] New hypercall for device models Paul Durrant
2016-11-18 17:13 ` [PATCH-for-4.9 v1 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
2016-11-22 15:57   ` Jan Beulich
2016-11-22 16:32     ` Paul Durrant
2016-11-22 17:24       ` Jan Beulich
2016-11-22 17:29         ` Paul Durrant
2016-11-18 17:13 ` [PATCH-for-4.9 v1 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
2016-11-24 17:02   ` Jan Beulich
2016-11-25  7:06     ` Jan Beulich
2016-11-25  8:47       ` Paul Durrant
2016-11-25  9:01     ` Paul Durrant
2016-11-25  9:28       ` Jan Beulich
2016-11-25  9:33         ` Paul Durrant
2016-11-18 17:13 ` [PATCH-for-4.9 v1 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
2016-11-25 11:25   ` Jan Beulich
2016-11-25 11:32     ` Paul Durrant
2016-11-18 17:14 ` [PATCH-for-4.9 v1 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
2016-11-25 11:49   ` Jan Beulich
2016-11-25 11:55     ` Paul Durrant
2016-11-25 12:26       ` Jan Beulich
2016-11-25 13:07         ` Paul Durrant
2016-11-18 17:14 ` [PATCH-for-4.9 v1 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
2016-11-25 13:25   ` Jan Beulich
2016-11-25 13:31     ` Paul Durrant
2016-11-25 13:56       ` Jan Beulich
2016-11-18 17:14 ` [PATCH-for-4.9 v1 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
2016-11-25 13:50   ` Jan Beulich
2016-11-25 14:00     ` Paul Durrant
2016-11-25 14:16       ` Jan Beulich
2016-11-25 14:20         ` Paul Durrant
2016-11-25 14:46           ` Jan Beulich
2016-11-25 14:56             ` Paul Durrant
2016-11-18 17:14 ` [PATCH-for-4.9 v1 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
2016-11-25 14:07   ` Jan Beulich
2016-11-25 14:13     ` Paul Durrant
2016-11-18 17:14 ` [PATCH-for-4.9 v1 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
2016-11-18 17:52   ` Razvan Cojocaru
2016-11-21  7:53   ` Jan Beulich
2016-11-21  8:26     ` Paul Durrant
