* [PATCH v2 0/8] New hypercall for device models
@ 2016-12-06 13:46 Paul Durrant
  2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
                   ` (7 more replies)
  0 siblings, 8 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul Durrant

Following on from the design submitted by Jennifer Herbert to the list [1]
this series provides an implementation of __HYPERCALL_dm_op followed by
patches based on Jan Beulich's previous HVMCTL series [2] to convert
tools-only HVMOPs used by device models to DMOPs.

[1] https://lists.xenproject.org/archives/html/xen-devel/2016-09/msg01052.html
[2] https://lists.xenproject.org/archives/html/xen-devel/2016-06/msg02433.html

v2:
- Specifically 'Cc' Jan since 'Suggested-by' does not result in an
  automatic cc.

Paul Durrant (8):
  public / x86: Introduce __HYPERCALL_dm_op...
  dm_op: convert HVMOP_*ioreq_server*
  dm_op: convert HVMOP_track_dirty_vram
  dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level,
    and...
  dm_op: convert HVMOP_modified_memory
  dm_op: convert HVMOP_set_mem_type
  dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  x86/hvm: serialize trap injecting producer and consumer

 docs/designs/dmop.markdown          | 193 ++++++++++
 tools/flask/policy/modules/xen.if   |   8 +-
 tools/libxc/include/xenctrl.h       |  13 +-
 tools/libxc/xc_domain.c             | 212 +++++------
 tools/libxc/xc_misc.c               | 235 +++++--------
 tools/libxc/xc_private.c            |  70 ++++
 tools/libxc/xc_private.h            |   2 +
 xen/arch/x86/hvm/Makefile           |   1 +
 xen/arch/x86/hvm/dm.c               | 554 +++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 676 +-----------------------------------
 xen/arch/x86/hvm/ioreq.c            |  36 +-
 xen/arch/x86/hvm/irq.c              |   7 +-
 xen/arch/x86/hypercall.c            |   2 +
 xen/arch/x86/mm/hap/hap.c           |   2 +-
 xen/arch/x86/mm/shadow/common.c     |   2 +-
 xen/include/asm-x86/hap.h           |   2 +-
 xen/include/asm-x86/hvm/domain.h    |   3 +-
 xen/include/asm-x86/hvm/hvm.h       |   3 +
 xen/include/asm-x86/shadow.h        |   2 +-
 xen/include/public/hvm/dm_op.h      | 376 ++++++++++++++++++++
 xen/include/public/hvm/hvm_op.h     | 230 +-----------
 xen/include/public/xen-compat.h     |   2 +-
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hvm/irq.h           |   2 +-
 xen/include/xen/hypercall.h         |   7 +
 xen/include/xsm/dummy.h             |  36 +-
 xen/include/xsm/xsm.h               |  36 +-
 xen/xsm/dummy.c                     |   5 -
 xen/xsm/flask/hooks.c               |  37 +-
 xen/xsm/flask/policy/access_vectors |  15 +-
 30 files changed, 1456 insertions(+), 1314 deletions(-)
 create mode 100644 docs/designs/dmop.markdown
 create mode 100644 xen/arch/x86/hvm/dm.c
 create mode 100644 xen/include/public/hvm/dm_op.h

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-12 13:27   ` Wei Liu
  2016-12-15 15:22   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Jennifer Herbert, Paul Durrant,
	Jan Beulich, Ian Jackson, Daniel De Graaf

...as a set of hypercalls to be used by a device model.

As stated in the new docs/designs/dmop.markdown:

"The aim of DMOP is to prevent a compromised device model from
compromising domains other than the one it is associated with (and
which is therefore likely already compromised)."

See that file for further information.

This patch simply adds the boilerplate for the hypercall.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Suggested-by: Ian Jackson <ian.jackson@citrix.com>
Suggested-by: Jennifer Herbert <jennifer.herbert@citrix.com>
---
Cc: Ian Jackson <ian.jackson@citrix.com>
Cc: Jennifer Herbert <jennifer.herbert@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
- Addressed several comments from Jan.
- Removed modification of __XEN_LATEST_INTERFACE_VERSION__ as it is not
  needed in this patch.
---
 docs/designs/dmop.markdown        | 181 ++++++++++++++++++++++++++++++++++++++
 tools/flask/policy/modules/xen.if |   2 +-
 tools/libxc/include/xenctrl.h     |   1 +
 tools/libxc/xc_private.c          |  70 +++++++++++++++
 tools/libxc/xc_private.h          |   2 +
 xen/arch/x86/hvm/Makefile         |   1 +
 xen/arch/x86/hvm/dm.c             | 118 +++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c            |   1 +
 xen/arch/x86/hypercall.c          |   2 +
 xen/include/public/hvm/dm_op.h    |  71 +++++++++++++++
 xen/include/public/xen.h          |   1 +
 xen/include/xen/hypercall.h       |   7 ++
 xen/include/xsm/dummy.h           |   6 ++
 xen/include/xsm/xsm.h             |   6 ++
 xen/xsm/flask/hooks.c             |   7 ++
 15 files changed, 475 insertions(+), 1 deletion(-)
 create mode 100644 docs/designs/dmop.markdown
 create mode 100644 xen/arch/x86/hvm/dm.c
 create mode 100644 xen/include/public/hvm/dm_op.h

diff --git a/docs/designs/dmop.markdown b/docs/designs/dmop.markdown
new file mode 100644
index 0000000..3ccb7e6
--- /dev/null
+++ b/docs/designs/dmop.markdown
@@ -0,0 +1,181 @@
+DMOP  (multi-buffer variant)
+============================
+
+Introduction
+------------
+
+A previous proposal for a 'DMOP' was put forward by Ian Jackson on the 1st
+of August. That proposal seemed very promising; however, a problem was
+identified with it, which this proposal addresses.
+
+The aim of DMOP, as before, is to prevent a compromised device model from
+compromising domains other than the one it is associated with (and which
+is therefore likely already compromised).
+
+The previous proposal adds a DMOP hypercall, for use by device models,
+which places the domain ID in a fixed place within the calling args,
+such that the privcmd driver can always find it, and need not know any
+further details about the particular DMOP in order to validate it against
+the domain previously set (via ioctl).
+
+The problem occurs when a DMOP contains references to other user memory
+objects, such as with Track dirty VRAM (as used with the VGA buffer).
+In this case, the address of this other user object needs to be vetted,
+to ensure it is not within a restricted address range, such as kernel
+memory. The real problem comes down to how to vet this address:
+the ideal place to do so is within the privcmd driver, since it has
+knowledge of the address space involved. However, since a principal goal
+of DMOP is to keep privcmd free from any knowledge of DMOP's sub-ops,
+it would have no way to identify any user buffer addresses that need
+checking.  The alternative of having the hypervisor vet the addresses
+is also problematic, since it has no knowledge of the guest memory layout.
+
+
+The Design
+----------
+
+As with the previous design, we provide a new restriction ioctl, which
+takes a domid parameter.  After that restriction ioctl is called, the
+privcmd driver will permit only DMOP hypercalls, and only with the
+specified target domid.
+
+In the previous design, a DMOP consisted of one buffer, containing all of
+the DMOP parameters, which could include further explicit references to
+more buffers.  In this design, an array of buffers (with its length) is
+presented, with the first one containing the DMOP parameters, which may
+implicitly reference further buffers within the array. Here, the only
+user buffers passed are those found within the array, and so all can be
+audited by privcmd.  Since the length of the buffer array is passed,
+privcmd does not need to know which DMOP it is auditing.
+
+If the hypervisor ends up with the wrong number of buffers, it can reject
+the DMOP at that point.
+
+The following code illustrates this idea:
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    uint64_t size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+
+enum neg_errnoval
+HYPERVISOR_dm_op(domid_t domid,
+                 xen_dm_op_buf_t bufs[],
+                 unsigned int nr_bufs)
+
+@domid is the domain the hypercall operates on.
+@bufs points to an array of buffers where @bufs[0] contains a struct
+dm_op, describing the specific device model operation and its parameters.
+@bufs[1..] may be referenced in the parameters for the purposes of
+passing extra information to or from the domain.
+@nr_bufs is the number of buffers in the @bufs array.
+
+
+It is forbidden for the above struct (dm_op) to contain any
+guest handles - if they are needed, they should instead be in
+HYPERVISOR_dm_op->bufs.
+
+Validation by privcmd driver
+----------------------------
+
+If the privcmd driver has been restricted to a specific domain (using a
+new ioctl), then when it receives an op it will:
+
+1. Check that the hypercall is a DMOP.
+
+2. Check domid == restricted domid.
+
+3. For each of the @nr_bufs buffers in @bufs: check that @h and @size
+   give a buffer wholly in the user-space part of the virtual address
+   space (e.g., on Linux, use access_ok()).
+
+
+Xen Implementation
+------------------
+
+Since a DMOP sub-op may need to copy a buffer from, or return one to, the
+guest, and since the hypervisor needs to fetch the initial buffer containing
+the DMOP itself, functions for doing this would be written as below.  Note
+that care is taken to prevent buffer under- or over-runs: if the DMOP is
+called with insufficient buffers, zeroes are read; extra buffers are ignored.
+
+static int get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                   unsigned int nr_bufs, unsigned int idx,
+                   struct xen_dm_op_buf *buf)
+{
+    if ( idx >= nr_bufs )
+        return -ENOENT;
+
+    return copy_from_guest_offset(buf, bufs, idx, 1) ? -EFAULT : 0;
+}
+
+static int copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                               unsigned int nr_bufs, void *dst,
+                               unsigned int idx, size_t dst_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    memset(dst, 0, dst_size);
+
+    rc = get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return rc;
+
+    size = min_t(size_t, dst_size, buf.size);
+
+    return copy_from_guest(dst, buf.h, size) ? -EFAULT : 0;
+}
+
+static int copy_buf_to_guest(...)
+{
+    /* Similar to the above, except copying in the other
+       direction. */
+}
+
+This leaves do_dm_op easy to implement as below:
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
+    if ( rc )
+        goto out;
+
+    switch ( op.op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( !rc )
+        rc = copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index eb646f5..779232e 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq };
+	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 2c83544..cc37752 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -41,6 +41,7 @@
 #include <xen/sched.h>
 #include <xen/memory.h>
 #include <xen/grant_table.h>
+#include <xen/hvm/dm_op.h>
 #include <xen/hvm/params.h>
 #include <xen/xsm/flask_op.h>
 #include <xen/tmem.h>
diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
index d57c39a..8e49635 100644
--- a/tools/libxc/xc_private.c
+++ b/tools/libxc/xc_private.c
@@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
     return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
 }
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
+{
+    int ret = -1;
+    struct  {
+        void *u;
+        void *h;
+    } *bounce;
+    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
+    va_list args;
+    unsigned int idx;
+
+    bounce = calloc(nr_bufs, sizeof(*bounce));
+    if ( bounce == NULL )
+        goto fail1;
+
+    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
+    if ( bufs == NULL )
+        goto fail2;
+
+    va_start(args, nr_bufs);
+    for (idx = 0; idx < nr_bufs; idx++)
+    {
+        void *u = va_arg(args, void *);
+        size_t size = va_arg(args, size_t);
+
+        bounce[idx].h = xencall_alloc_buffer(xch->xcall, size);
+        if ( bounce[idx].h == NULL )
+            goto fail3;
+
+        memcpy(bounce[idx].h, u, size);
+        bounce[idx].u = u;
+
+        set_xen_guest_handle_raw(bufs[idx].h, bounce[idx].h);
+        bufs[idx].size = size;
+    }
+    va_end(args);
+
+    ret = xencall3(xch->xcall, __HYPERVISOR_dm_op,
+                   domid, nr_bufs, HYPERCALL_BUFFER_AS_ARG(bufs));
+    if ( ret < 0 )
+        goto fail4;
+
+    while ( idx-- != 0 )
+    {
+        memcpy(bounce[idx].u, bounce[idx].h, bufs[idx].size);
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+    }
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+    free(bounce);
+
+    return 0;
+
+ fail4:
+    idx = nr_bufs;
+
+ fail3:
+    while ( idx-- != 0 )
+        xencall_free_buffer(xch->xcall, bounce[idx].h);
+
+    xc_hypercall_buffer_free(xch, bufs);
+
+ fail2:
+    free(bounce);
+
+ fail1:
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_private.h b/tools/libxc/xc_private.h
index 97445ae..f191320 100644
--- a/tools/libxc/xc_private.h
+++ b/tools/libxc/xc_private.h
@@ -422,6 +422,8 @@ int xc_vm_event_control(xc_interface *xch, domid_t domain_id, unsigned int op,
 void *xc_vm_event_enable(xc_interface *xch, domid_t domain_id, int param,
                          uint32_t *port);
 
+int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...);
+
 #endif /* __XC_PRIVATE_H__ */
 
 /*
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index f750d13..5869d1b 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -2,6 +2,7 @@ subdir-y += svm
 subdir-y += vmx
 
 obj-y += asid.o
+obj-y += dm.o
 obj-y += emulate.o
 obj-y += hpet.o
 obj-y += hvm.o
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
new file mode 100644
index 0000000..90510b8
--- /dev/null
+++ b/xen/arch/x86/hvm/dm.c
@@ -0,0 +1,118 @@
+/*
+ * Copyright (c) 2016 Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/sched.h>
+
+#include <asm/hvm/ioreq.h>
+
+#include <xsm/xsm.h>
+
+static int get_buf(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                   unsigned int nr_bufs, unsigned int idx,
+                   struct xen_dm_op_buf *buf)
+{
+    if ( idx >= nr_bufs )
+        return -ENOENT;
+
+    return copy_from_guest_offset(buf, bufs, idx, 1) ? -EFAULT : 0;
+}
+
+static int copy_buf_from_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                               unsigned int nr_bufs, void *dst,
+                               unsigned int idx, size_t dst_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    memset(dst, 0, dst_size);
+
+    rc = get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return rc;
+
+    size = min_t(size_t, dst_size, buf.size);
+
+    return copy_from_guest(dst, buf.h, size) ? -EFAULT : 0;
+}
+
+static int copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                             unsigned int nr_bufs, unsigned int idx,
+                             void *src, size_t src_size)
+{
+    struct xen_dm_op_buf buf;
+    size_t size;
+    int rc;
+
+    rc = get_buf(bufs, nr_bufs, idx, &buf);
+    if ( rc )
+        return rc;
+
+    size = min_t(size_t, buf.size, src_size);
+
+    return copy_to_guest(buf.h, src, size) ? -EFAULT : 0;
+}
+
+long do_dm_op(domid_t domid,
+              unsigned int nr_bufs,
+              XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
+{
+    struct domain *d;
+    struct xen_dm_op op;
+    long rc;
+
+    rc = rcu_lock_remote_domain_by_id(domid, &d);
+    if ( rc )
+        return rc;
+
+    if ( !has_hvm_container_domain(d) )
+        goto out;
+
+    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    if ( rc )
+        goto out;
+
+    rc = copy_buf_from_guest(bufs, nr_bufs, &op, 0, sizeof(op));
+    if ( rc )
+        goto out;
+
+    switch ( op.op )
+    {
+    default:
+        rc = -EOPNOTSUPP;
+        break;
+    }
+
+    if ( !rc )
+        rc = copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e0f936b..6f002ba 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4276,6 +4276,7 @@ static const hypercall_table_t hvm_hypercall_table[] = {
     COMPAT_CALL(platform_op),
     COMPAT_CALL(mmuext_op),
     HYPERCALL(xenpmu_op),
+    HYPERCALL(dm_op),
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index d2b5331..0a163ac 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -66,6 +66,7 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
     ARGS(kexec_op, 2),
     ARGS(tmem_op, 1),
     ARGS(xenpmu_op, 2),
+    ARGS(dm_op, 3),
     ARGS(mca, 1),
     ARGS(arch_1, 1),
 };
@@ -128,6 +129,7 @@ static const hypercall_table_t pv_hypercall_table[] = {
     HYPERCALL(tmem_op),
 #endif
     HYPERCALL(xenpmu_op),
+    HYPERCALL(dm_op),
     HYPERCALL(mca),
     HYPERCALL(arch_1),
 };
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
new file mode 100644
index 0000000..862c914
--- /dev/null
+++ b/xen/include/public/hvm/dm_op.h
@@ -0,0 +1,71 @@
+/*
+ * Copyright (c) 2016, Citrix Systems Inc
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HVM_DM_OP_H__
+#define __XEN_PUBLIC_HVM_DM_OP_H__
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "../xen.h"
+
+#define XEN_DMOP_invalid 0
+
+struct xen_dm_op {
+    uint32_t op;
+};
+
+struct xen_dm_op_buf {
+    XEN_GUEST_HANDLE(void) h;
+    uint64_aligned_t size;
+};
+typedef struct xen_dm_op_buf xen_dm_op_buf_t;
+DEFINE_XEN_GUEST_HANDLE(xen_dm_op_buf_t);
+
+/* ` enum neg_errnoval
+ * ` HYPERVISOR_dm_op(domid_t domid,
+ * `                  xen_dm_op_buf_t bufs[],
+ * `                  unsigned int nr_bufs)
+ * `
+ *
+ * @domid is the domain the hypercall operates on.
+ * @bufs points to an array of buffers where @bufs[0] contains a struct
+ * xen_dm_op, describing the specific device model operation and its
+ * parameters.
+ * @bufs[1..] may be referenced in the parameters for the purposes of
+ * passing extra information to or from the domain.
+ * @nr_bufs is the number of buffers in the @bufs array.
+ */
+
+#endif /* __XEN__ || __XEN_TOOLS__ */
+
+#endif /* __XEN_PUBLIC_HVM_DM_OP_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 336aa3f..213b94d 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -120,6 +120,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_tmem_op              38
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
 #define __HYPERVISOR_xenpmu_op            40
+#define __HYPERVISOR_dm_op                41
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index 207a0e8..fee78f7 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -15,6 +15,7 @@
 #include <public/tmem.h>
 #include <public/version.h>
 #include <public/pmu.h>
+#include <public/hvm/dm_op.h>
 #include <asm/hypercall.h>
 #include <xsm/xsm.h>
 
@@ -141,6 +142,12 @@ do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
 extern long
 do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg);
 
+extern long
+do_dm_op(
+    domid_t domid,
+    unsigned int nr_bufs,
+    XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
+
 #ifdef CONFIG_COMPAT
 
 extern int
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 95460af..b206f5a 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -727,6 +727,12 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
     }
 }
 
+static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
+{
+    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    return xsm_default_action(action, current->domain, d);
+}
+
 #endif /* CONFIG_X86 */
 
 #include <public/version.h>
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 5dc59dd..e2d336f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -184,6 +184,7 @@ struct xsm_operations {
     int (*ioport_permission) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*ioport_mapping) (struct domain *d, uint32_t s, uint32_t e, uint8_t allow);
     int (*pmu_op) (struct domain *d, unsigned int op);
+    int (*dm_op) (struct domain *d);
 #endif
     int (*xen_version) (uint32_t cmd);
 };
@@ -722,6 +723,11 @@ static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int
     return xsm_ops->pmu_op(d, op);
 }
 
+static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
+{
+    return xsm_ops->dm_op(d);
+}
+
 #endif /* CONFIG_X86 */
 
 static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 177c11f..2ed5888 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1632,6 +1632,12 @@ static int flask_pmu_op (struct domain *d, unsigned int op)
         return -EPERM;
     }
 }
+
+static int flask_dm_op(struct domain *d)
+{
+    return current_has_perm(d, SECCLASS_HVM, HVM__DM);
+}
+
 #endif /* CONFIG_X86 */
 
 static int flask_xen_version (uint32_t op)
@@ -1811,6 +1817,7 @@ static struct xsm_operations flask_ops = {
     .ioport_permission = flask_ioport_permission,
     .ioport_mapping = flask_ioport_mapping,
     .pmu_op = flask_pmu_op,
+    .dm_op = flask_dm_op,
 #endif
     .xen_version = flask_xen_version,
 };
-- 
2.1.4




* [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
  2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-12 13:30   ` Wei Liu
  2016-12-15 15:37   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
                   ` (5 subsequent siblings)
  7 siblings, 2 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
	Daniel De Graaf

The definitions of HVM_IOREQSRV_BUFIOREQ_* have to persist as they are
already in use by callers of the libxc interface.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

v2:
- Addressed several comments from Jan.
---
 tools/libxc/xc_domain.c          | 212 ++++++++++++++++---------------------
 xen/arch/x86/hvm/dm.c            |  89 ++++++++++++++++
 xen/arch/x86/hvm/hvm.c           | 219 ---------------------------------------
 xen/arch/x86/hvm/ioreq.c         |  36 +++----
 xen/include/asm-x86/hvm/domain.h |   3 +-
 xen/include/public/hvm/dm_op.h   | 153 +++++++++++++++++++++++++++
 xen/include/public/hvm/hvm_op.h  | 132 +----------------------
 xen/include/xsm/dummy.h          |   6 --
 xen/include/xsm/xsm.h            |   6 --
 xen/xsm/dummy.c                  |   1 -
 xen/xsm/flask/hooks.c            |   6 --
 11 files changed, 356 insertions(+), 507 deletions(-)

diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 296b852..419a897 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1417,24 +1417,24 @@ int xc_hvm_create_ioreq_server(xc_interface *xch,
                                int handle_bufioreq,
                                ioservid_t *id)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_create_ioreq_server_t, arg);
+    struct xen_dm_op op;
+    struct xen_dm_op_create_ioreq_server *data;
     int rc;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->handle_bufioreq = handle_bufioreq;
+    op.op = XEN_DMOP_create_ioreq_server;
+    data = &op.u.create_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_create_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->handle_bufioreq = handle_bufioreq;
+
+    rc = do_dm_op(xch, domid, 1, &op, sizeof(op));
+    if ( rc )
+        return rc;
 
-    *id = arg->id;
+    *id = data->id;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return 0;
 }
 
 int xc_hvm_get_ioreq_server_info(xc_interface *xch,
@@ -1444,84 +1444,71 @@ int xc_hvm_get_ioreq_server_info(xc_interface *xch,
                                  xen_pfn_t *bufioreq_pfn,
                                  evtchn_port_t *bufioreq_port)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_get_ioreq_server_info_t, arg);
+    struct xen_dm_op op;
+    struct xen_dm_op_get_ioreq_server_info *data;
     int rc;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
+    op.op = XEN_DMOP_get_ioreq_server_info;
+    data = &op.u.get_ioreq_server_info;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_get_ioreq_server_info,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
-    if ( rc != 0 )
-        goto done;
+    data->id = id;
+
+    rc = do_dm_op(xch, domid, 1, &op, sizeof(op));
+    if ( rc )
+        return rc;
 
     if ( ioreq_pfn )
-        *ioreq_pfn = arg->ioreq_pfn;
+        *ioreq_pfn = data->ioreq_pfn;
 
     if ( bufioreq_pfn )
-        *bufioreq_pfn = arg->bufioreq_pfn;
+        *bufioreq_pfn = data->bufioreq_pfn;
 
     if ( bufioreq_port )
-        *bufioreq_port = arg->bufioreq_port;
+        *bufioreq_port = data->bufioreq_port;
 
-done:
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return 0;
 }
 
 int xc_hvm_map_io_range_to_ioreq_server(xc_interface *xch, domid_t domid,
                                         ioservid_t id, int is_mmio,
                                         uint64_t start, uint64_t end)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = is_mmio ? HVMOP_IO_RANGE_MEMORY : HVMOP_IO_RANGE_PORT;
-    arg->start = start;
-    arg->end = end;
+    op.op = XEN_DMOP_map_io_range_to_ioreq_server;
+    data = &op.u.map_io_range_to_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_map_io_range_to_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->type = is_mmio ? XEN_DMOP_IO_RANGE_MEMORY : XEN_DMOP_IO_RANGE_PORT;
+    data->start = start;
+    data->end = end;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface *xch, domid_t domid,
                                             ioservid_t id, int is_mmio,
                                             uint64_t start, uint64_t end)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = is_mmio ? HVMOP_IO_RANGE_MEMORY : HVMOP_IO_RANGE_PORT;
-    arg->start = start;
-    arg->end = end;
+    op.op = XEN_DMOP_unmap_io_range_from_ioreq_server;
+    data = &op.u.unmap_io_range_from_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_unmap_io_range_from_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->type = is_mmio ? XEN_DMOP_IO_RANGE_MEMORY : XEN_DMOP_IO_RANGE_PORT;
+    data->start = start;
+    data->end = end;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
@@ -1529,37 +1516,32 @@ int xc_hvm_map_pcidev_to_ioreq_server(xc_interface *xch, domid_t domid,
                                       uint8_t bus, uint8_t device,
                                       uint8_t function)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
     if (device > 0x1f || function > 0x7) {
         errno = EINVAL;
         return -1;
     }
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = HVMOP_IO_RANGE_PCI;
+    op.op = XEN_DMOP_map_io_range_to_ioreq_server;
+    data = &op.u.map_io_range_to_ioreq_server;
+
+    data->id = id;
+    data->type = XEN_DMOP_IO_RANGE_PCI;
 
     /*
      * The underlying hypercall will deal with ranges of PCI SBDF
      * but, for simplicity, the API only uses singletons.
      */
-    arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
-                                           (uint64_t)bus,
-                                           (uint64_t)device,
-                                           (uint64_t)function);
-
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_map_io_range_to_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->start = data->end = XEN_DMOP_PCI_SBDF((uint64_t)segment,
+                                                (uint64_t)bus,
+                                                (uint64_t)device,
+                                                (uint64_t)function);
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
@@ -1567,54 +1549,49 @@ int xc_hvm_unmap_pcidev_from_ioreq_server(xc_interface *xch, domid_t domid,
                                           uint8_t bus, uint8_t device,
                                           uint8_t function)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_ioreq_server_range *data;
 
     if (device > 0x1f || function > 0x7) {
         errno = EINVAL;
         return -1;
     }
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->type = HVMOP_IO_RANGE_PCI;
-    arg->start = arg->end = HVMOP_PCI_SBDF((uint64_t)segment,
-                                           (uint64_t)bus,
-                                           (uint64_t)device,
-                                           (uint64_t)function);
-
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_unmap_io_range_from_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_unmap_io_range_from_ioreq_server;
+    data = &op.u.unmap_io_range_from_ioreq_server;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    data->id = id;
+    data->type = XEN_DMOP_IO_RANGE_PCI;
+
+    /*
+     * The underlying hypercall will deal with ranges of PCI SBDF
+     * but, for simplicity, the API only uses singletons.
+     */
+    data->start = data->end = XEN_DMOP_PCI_SBDF((uint64_t)segment,
+                                                (uint64_t)bus,
+                                                (uint64_t)device,
+                                                (uint64_t)function);
+
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_destroy_ioreq_server(xc_interface *xch,
                                 domid_t domid,
                                 ioservid_t id)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_destroy_ioreq_server_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_destroy_ioreq_server *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
+    op.op = XEN_DMOP_destroy_ioreq_server;
+    data = &op.u.destroy_ioreq_server;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_destroy_ioreq_server,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_ioreq_server_state(xc_interface *xch,
@@ -1622,23 +1599,18 @@ int xc_hvm_set_ioreq_server_state(xc_interface *xch,
                                   ioservid_t id,
                                   int enabled)
 {
-    DECLARE_HYPERCALL_BUFFER(xen_hvm_set_ioreq_server_state_t, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_set_ioreq_server_state *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-        return -1;
+    memset(&op, 0, sizeof(op));
 
-    arg->domid = domid;
-    arg->id = id;
-    arg->enabled = !!enabled;
+    op.op = XEN_DMOP_set_ioreq_server_state;
+    data = &op.u.set_ioreq_server_state;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_ioreq_server_state,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->id = id;
+    data->enabled = !!enabled;
 
-    xc_hypercall_buffer_free(xch, arg);
-    return rc;
+    return do_dm_op(xch, domid, 1, &op, sizeof(op));
 }
 
 int xc_domain_setdebugging(xc_interface *xch,
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 90510b8..b9d70d1 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -91,8 +91,97 @@ long do_dm_op(domid_t domid,
     if ( rc )
         goto out;
 
+    rc = -EINVAL;
+    if ( op.pad )
+        goto out;
+
     switch ( op.op )
     {
+    case XEN_DMOP_create_ioreq_server:
+    {
+        struct domain *curr_d = current->domain;
+        struct xen_dm_op_create_ioreq_server *data =
+            &op.u.create_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
+                                     data->handle_bufioreq, &data->id);
+        break;
+    }
+
+    case XEN_DMOP_get_ioreq_server_info:
+    {
+        struct xen_dm_op_get_ioreq_server_info *data =
+            &op.u.get_ioreq_server_info;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_get_ioreq_server_info(d, data->id,
+                                       &data->ioreq_pfn,
+                                       &data->bufioreq_pfn,
+                                       &data->bufioreq_port);
+        break;
+    }
+
+    case XEN_DMOP_map_io_range_to_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.map_io_range_to_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_map_io_range_to_ioreq_server(d, data->id, data->type,
+                                              data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_unmap_io_range_from_ioreq_server:
+    {
+        const struct xen_dm_op_ioreq_server_range *data =
+            &op.u.unmap_io_range_from_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_unmap_io_range_from_ioreq_server(d, data->id, data->type,
+                                                  data->start, data->end);
+        break;
+    }
+
+    case XEN_DMOP_set_ioreq_server_state:
+    {
+        const struct xen_dm_op_set_ioreq_server_state *data =
+            &op.u.set_ioreq_server_state;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_set_ioreq_server_state(d, data->id, !!data->enabled);
+        break;
+    }
+
+    case XEN_DMOP_destroy_ioreq_server:
+    {
+        const struct xen_dm_op_destroy_ioreq_server *data =
+            &op.u.destroy_ioreq_server;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_destroy_ioreq_server(d, data->id);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 6f002ba..3009449 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4806,195 +4806,6 @@ static int hvmop_flush_tlb_all(void)
     return 0;
 }
 
-static int hvmop_create_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_create_ioreq_server_t) uop)
-{
-    struct domain *curr_d = current->domain;
-    xen_hvm_create_ioreq_server_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_create_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_create_ioreq_server(d, curr_d->domain_id, 0,
-                                 op.handle_bufioreq, &op.id);
-    if ( rc != 0 )
-        goto out;
-
-    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_get_ioreq_server_info(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_ioreq_server_info_t) uop)
-{
-    xen_hvm_get_ioreq_server_info_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_get_ioreq_server_info);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_get_ioreq_server_info(d, op.id,
-                                   &op.ioreq_pfn,
-                                   &op.bufioreq_pfn, 
-                                   &op.bufioreq_port);
-    if ( rc != 0 )
-        goto out;
-
-    rc = copy_to_guest(uop, &op, 1) ? -EFAULT : 0;
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_map_io_range_to_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_io_range_t) uop)
-{
-    xen_hvm_io_range_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_map_io_range_to_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_map_io_range_to_ioreq_server(d, op.id, op.type,
-                                          op.start, op.end);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_unmap_io_range_from_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_io_range_t) uop)
-{
-    xen_hvm_io_range_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_unmap_io_range_from_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_unmap_io_range_from_ioreq_server(d, op.id, op.type,
-                                              op.start, op.end);
-    
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_set_ioreq_server_state(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_ioreq_server_state_t) uop)
-{
-    xen_hvm_set_ioreq_server_state_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_set_ioreq_server_state);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_set_ioreq_server_state(d, op.id, !!op.enabled);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_destroy_ioreq_server(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_destroy_ioreq_server_t) uop)
-{
-    xen_hvm_destroy_ioreq_server_t op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d, HVMOP_destroy_ioreq_server);
-    if ( rc != 0 )
-        goto out;
-
-    rc = hvm_destroy_ioreq_server(d, op.id);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_set_evtchn_upcall_vector(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_evtchn_upcall_vector_t) uop)
 {
@@ -5704,36 +5515,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     start_iter = op & ~mask;
     switch ( op &= mask )
     {
-    case HVMOP_create_ioreq_server:
-        rc = hvmop_create_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_create_ioreq_server_t));
-        break;
-    
-    case HVMOP_get_ioreq_server_info:
-        rc = hvmop_get_ioreq_server_info(
-            guest_handle_cast(arg, xen_hvm_get_ioreq_server_info_t));
-        break;
-    
-    case HVMOP_map_io_range_to_ioreq_server:
-        rc = hvmop_map_io_range_to_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_io_range_t));
-        break;
-    
-    case HVMOP_unmap_io_range_from_ioreq_server:
-        rc = hvmop_unmap_io_range_from_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_io_range_t));
-        break;
-
-    case HVMOP_set_ioreq_server_state:
-        rc = hvmop_set_ioreq_server_state(
-            guest_handle_cast(arg, xen_hvm_set_ioreq_server_state_t));
-        break;
-    
-    case HVMOP_destroy_ioreq_server:
-        rc = hvmop_destroy_ioreq_server(
-            guest_handle_cast(arg, xen_hvm_destroy_ioreq_server_t));
-        break;
-    
     case HVMOP_set_evtchn_upcall_vector:
         rc = hvmop_set_evtchn_upcall_vector(
             guest_handle_cast(arg, xen_hvm_evtchn_upcall_vector_t));
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 88071ab..51a04f7 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -513,9 +513,9 @@ static int hvm_ioreq_server_alloc_rangesets(struct hvm_ioreq_server *s,
         char *name;
 
         rc = asprintf(&name, "ioreq_server %d %s", s->id,
-                      (i == HVMOP_IO_RANGE_PORT) ? "port" :
-                      (i == HVMOP_IO_RANGE_MEMORY) ? "memory" :
-                      (i == HVMOP_IO_RANGE_PCI) ? "pci" :
+                      (i == XEN_DMOP_IO_RANGE_PORT) ? "port" :
+                      (i == XEN_DMOP_IO_RANGE_MEMORY) ? "memory" :
+                      (i == XEN_DMOP_IO_RANGE_PCI) ? "pci" :
                       "");
         if ( rc )
             goto fail;
@@ -833,9 +833,9 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 
             switch ( type )
             {
-            case HVMOP_IO_RANGE_PORT:
-            case HVMOP_IO_RANGE_MEMORY:
-            case HVMOP_IO_RANGE_PCI:
+            case XEN_DMOP_IO_RANGE_PORT:
+            case XEN_DMOP_IO_RANGE_MEMORY:
+            case XEN_DMOP_IO_RANGE_PCI:
                 r = s->range[type];
                 break;
 
@@ -885,9 +885,9 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
 
             switch ( type )
             {
-            case HVMOP_IO_RANGE_PORT:
-            case HVMOP_IO_RANGE_MEMORY:
-            case HVMOP_IO_RANGE_PCI:
+            case XEN_DMOP_IO_RANGE_PORT:
+            case XEN_DMOP_IO_RANGE_MEMORY:
+            case XEN_DMOP_IO_RANGE_PCI:
                 r = s->range[type];
                 break;
 
@@ -1128,12 +1128,12 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
 
         /* PCI config data cycle */
 
-        sbdf = HVMOP_PCI_SBDF(0,
-                              PCI_BUS(CF8_BDF(cf8)),
-                              PCI_SLOT(CF8_BDF(cf8)),
-                              PCI_FUNC(CF8_BDF(cf8)));
+        sbdf = XEN_DMOP_PCI_SBDF(0,
+                                 PCI_BUS(CF8_BDF(cf8)),
+                                 PCI_SLOT(CF8_BDF(cf8)),
+                                 PCI_FUNC(CF8_BDF(cf8)));
 
-        type = HVMOP_IO_RANGE_PCI;
+        type = XEN_DMOP_IO_RANGE_PCI;
         addr = ((uint64_t)sbdf << 32) |
                CF8_ADDR_LO(cf8) |
                (p->addr & 3);
@@ -1152,7 +1152,7 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
     else
     {
         type = (p->type == IOREQ_TYPE_PIO) ?
-                HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
+                XEN_DMOP_IO_RANGE_PORT : XEN_DMOP_IO_RANGE_MEMORY;
         addr = p->addr;
     }
 
@@ -1174,19 +1174,19 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
         {
             unsigned long end;
 
-        case HVMOP_IO_RANGE_PORT:
+        case XEN_DMOP_IO_RANGE_PORT:
             end = addr + p->size - 1;
             if ( rangeset_contains_range(r, addr, end) )
                 return s;
 
             break;
-        case HVMOP_IO_RANGE_MEMORY:
+        case XEN_DMOP_IO_RANGE_MEMORY:
             end = addr + (p->size * p->count) - 1;
             if ( rangeset_contains_range(r, addr, end) )
                 return s;
 
             break;
-        case HVMOP_IO_RANGE_PCI:
+        case XEN_DMOP_IO_RANGE_PCI:
             if ( rangeset_contains_singleton(r, addr >> 32) )
             {
                 p->type = IOREQ_TYPE_PCI_CONFIG;
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index f34d784..e869285 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -33,6 +33,7 @@
 #include <public/hvm/params.h>
 #include <public/hvm/save.h>
 #include <public/hvm/hvm_op.h>
+#include <public/hvm/dm_op.h>
 
 struct hvm_ioreq_page {
     unsigned long gmfn;
@@ -47,7 +48,7 @@ struct hvm_ioreq_vcpu {
     bool_t           pending;
 };
 
-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
+#define NR_IO_RANGE_TYPES (XEN_DMOP_IO_RANGE_PCI + 1)
 #define MAX_NR_IO_RANGES  256
 
 struct hvm_ioreq_server {
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 862c914..2b22077 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -27,11 +27,164 @@
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
 #include "../xen.h"
+#include "../event_channel.h"
 
 #define XEN_DMOP_invalid 0
 
+/*
+ * IOREQ Servers
+ *
+ * The interface between an I/O emulator and Xen is called an IOREQ Server.
+ * A domain supports a single 'legacy' IOREQ Server which is instantiated if
+ * parameter...
+ *
+ * HVM_PARAM_IOREQ_PFN is read (to get the gmfn containing the synchronous
+ * ioreq structures), or...
+ * HVM_PARAM_BUFIOREQ_PFN is read (to get the gmfn containing the buffered
+ * ioreq ring), or...
+ * HVM_PARAM_BUFIOREQ_EVTCHN is read (to get the event channel that Xen uses
+ * to request buffered I/O emulation).
+ *
+ * The following hypercalls facilitate the creation of IOREQ Servers for
+ * 'secondary' emulators which are invoked to implement port I/O, memory, or
+ * PCI config space ranges which they explicitly register.
+ */
+
+typedef uint16_t ioservid_t;
+
+/*
+ * XEN_DMOP_create_ioreq_server: Instantiate a new IOREQ Server for a
+ *                               secondary emulator.
+ *
+ * The <id> handed back is unique for the target domain. The value of
+ * <handle_bufioreq> should be one of HVM_IOREQSRV_BUFIOREQ_* defined in
+ * hvm_op.h. If the value is HVM_IOREQSRV_BUFIOREQ_OFF then the buffered
+ * ioreq ring will not be allocated and hence all emulation requests to
+ * this server will be synchronous.
+ */
+#define XEN_DMOP_create_ioreq_server 1
+
+struct xen_dm_op_create_ioreq_server {
+    /* IN - should server handle buffered ioreqs */
+    uint8_t handle_bufioreq;
+    uint8_t pad[3];
+    /* OUT - server id */
+    ioservid_t id;
+};
+
+/*
+ * XEN_DMOP_get_ioreq_server_info: Get all the information necessary to
+ *                                 access IOREQ Server <id>.
+ *
+ * The emulator needs to map the synchronous ioreq structures and buffered
+ * ioreq ring (if it exists) that Xen uses to request emulation. These are
+ * hosted in the target domain's gmfns <ioreq_pfn> and <bufioreq_pfn>
+ * respectively. In addition, if the IOREQ Server is handling buffered
+ * emulation requests, the emulator needs to bind to event channel
+ * <bufioreq_port> to listen for them. (The event channels used for
+ * synchronous emulation requests are specified in the per-CPU ioreq
+ * structures in <ioreq_pfn>).
+ * If the IOREQ Server is not handling buffered emulation requests then the
+ * values handed back in <bufioreq_pfn> and <bufioreq_port> will both be 0.
+ */
+#define XEN_DMOP_get_ioreq_server_info 2
+
+struct xen_dm_op_get_ioreq_server_info {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t pad;
+    /* OUT - buffered ioreq port */
+    evtchn_port_t bufioreq_port;
+    /* OUT - sync ioreq pfn */
+    uint64_aligned_t ioreq_pfn;
+    /* OUT - buffered ioreq pfn */
+    uint64_aligned_t bufioreq_pfn;
+};
+
+/*
+ * XEN_DMOP_map_io_range_to_ioreq_server: Register an I/O range for
+ *                                        emulation by the client of
+ *                                        IOREQ Server <id>.
+ * XEN_DMOP_unmap_io_range_from_ioreq_server: Deregister an I/O range
+ *                                            previously registered for
+ *                                            emulation by the client of
+ *                                            IOREQ Server <id>.
+ *
+ * There are three types of I/O that can be emulated: port I/O, memory
+ * accesses and PCI config space accesses. The <type> field denotes which
+ * type of range the <start> and <end> (inclusive) fields are specifying.
+ * PCI config space ranges are specified by segment/bus/device/function
+ * values which should be encoded using the XEN_DMOP_PCI_SBDF helper macro
+ * below.
+ *
+ * NOTE: unless an emulation request falls entirely within a range mapped
+ * by a secondary emulator, it will not be passed to that emulator.
+ */
+#define XEN_DMOP_map_io_range_to_ioreq_server 3
+#define XEN_DMOP_unmap_io_range_from_ioreq_server 4
+
+struct xen_dm_op_ioreq_server_range {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t pad;
+    /* IN - type of range */
+    uint32_t type;
+# define XEN_DMOP_IO_RANGE_PORT   0 /* I/O port range */
+# define XEN_DMOP_IO_RANGE_MEMORY 1 /* MMIO range */
+# define XEN_DMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
+    /* IN - inclusive start and end of range */
+    uint64_aligned_t start, end;
+};
+
+#define XEN_DMOP_PCI_SBDF(s,b,d,f) \
+	((((s) & 0xffff) << 16) |  \
+	 (((b) & 0xff) << 8) |     \
+	 (((d) & 0x1f) << 3) |     \
+	 ((f) & 0x07))
+
+/*
+ * XEN_DMOP_set_ioreq_server_state: Enable or disable the IOREQ Server <id>
+ *
+ * The IOREQ Server will not be passed any emulation requests until it is
+ * in the enabled state.
+ * Note that the contents of the ioreq_pfn and bufioreq_pfn (see
+ * XEN_DMOP_get_ioreq_server_info) are not meaningful until the IOREQ Server
+ * is in the enabled state.
+ */
+#define XEN_DMOP_set_ioreq_server_state 5
+
+struct xen_dm_op_set_ioreq_server_state {
+    /* IN - server id */
+    ioservid_t id;
+    /* IN - enabled? */
+    uint8_t enabled;
+    uint8_t pad;
+};
+
+/*
+ * XEN_DMOP_destroy_ioreq_server: Destroy the IOREQ Server <id>.
+ *
+ * Any registered I/O ranges will be automatically deregistered.
+ */
+#define XEN_DMOP_destroy_ioreq_server 6
+
+struct xen_dm_op_destroy_ioreq_server {
+    /* IN - server id */
+    ioservid_t id;
+    uint16_t pad;
+};
+
 struct xen_dm_op {
     uint32_t op;
+    uint32_t pad;
+    union {
+        struct xen_dm_op_create_ioreq_server create_ioreq_server;
+        struct xen_dm_op_get_ioreq_server_info get_ioreq_server_info;
+        struct xen_dm_op_ioreq_server_range map_io_range_to_ioreq_server;
+        struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
+        struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
+        struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
+    } u;
 };
 
 struct xen_dm_op_buf {
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index b3e45cf..6fcd86d 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -243,37 +243,10 @@ typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
 
 /*
- * IOREQ Servers
- *
- * The interface between an I/O emulator an Xen is called an IOREQ Server.
- * A domain supports a single 'legacy' IOREQ Server which is instantiated if
- * parameter...
- *
- * HVM_PARAM_IOREQ_PFN is read (to get the gmfn containing the synchronous
- * ioreq structures), or...
- * HVM_PARAM_BUFIOREQ_PFN is read (to get the gmfn containing the buffered
- * ioreq ring), or...
- * HVM_PARAM_BUFIOREQ_EVTCHN is read (to get the event channel that Xen uses
- * to request buffered I/O emulation).
- * 
- * The following hypercalls facilitate the creation of IOREQ Servers for
- * 'secondary' emulators which are invoked to implement port I/O, memory, or
- * PCI config space ranges which they explicitly register.
+ * Definitions relating to XEN_DMOP_create_ioreq_server. (Defined here for
+ * backwards compatibility.)
  */
 
-typedef uint16_t ioservid_t;
-
-/*
- * HVMOP_create_ioreq_server: Instantiate a new IOREQ Server for a secondary
- *                            emulator servicing domain <domid>.
- *
- * The <id> handed back is unique for <domid>. If <handle_bufioreq> is zero
- * the buffered ioreq ring will not be allocated and hence all emulation
- * requestes to this server will be synchronous.
- */
-#define HVMOP_create_ioreq_server 17
-struct xen_hvm_create_ioreq_server {
-    domid_t domid;           /* IN - domain to be serviced */
 #define HVM_IOREQSRV_BUFIOREQ_OFF    0
 #define HVM_IOREQSRV_BUFIOREQ_LEGACY 1
 /*
@@ -281,107 +254,6 @@ struct xen_hvm_create_ioreq_server {
  * the pointer pair gets read atomically:
  */
 #define HVM_IOREQSRV_BUFIOREQ_ATOMIC 2
-    uint8_t handle_bufioreq; /* IN - should server handle buffered ioreqs */
-    ioservid_t id;           /* OUT - server id */
-};
-typedef struct xen_hvm_create_ioreq_server xen_hvm_create_ioreq_server_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_create_ioreq_server_t);
-
-/*
- * HVMOP_get_ioreq_server_info: Get all the information necessary to access
- *                              IOREQ Server <id>. 
- *
- * The emulator needs to map the synchronous ioreq structures and buffered
- * ioreq ring (if it exists) that Xen uses to request emulation. These are
- * hosted in domain <domid>'s gmfns <ioreq_pfn> and <bufioreq_pfn>
- * respectively. In addition, if the IOREQ Server is handling buffered
- * emulation requests, the emulator needs to bind to event channel
- * <bufioreq_port> to listen for them. (The event channels used for
- * synchronous emulation requests are specified in the per-CPU ioreq
- * structures in <ioreq_pfn>).
- * If the IOREQ Server is not handling buffered emulation requests then the
- * values handed back in <bufioreq_pfn> and <bufioreq_port> will both be 0.
- */
-#define HVMOP_get_ioreq_server_info 18
-struct xen_hvm_get_ioreq_server_info {
-    domid_t domid;                 /* IN - domain to be serviced */
-    ioservid_t id;                 /* IN - server id */
-    evtchn_port_t bufioreq_port;   /* OUT - buffered ioreq port */
-    uint64_aligned_t ioreq_pfn;    /* OUT - sync ioreq pfn */
-    uint64_aligned_t bufioreq_pfn; /* OUT - buffered ioreq pfn */
-};
-typedef struct xen_hvm_get_ioreq_server_info xen_hvm_get_ioreq_server_info_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_ioreq_server_info_t);
-
-/*
- * HVM_map_io_range_to_ioreq_server: Register an I/O range of domain <domid>
- *                                   for emulation by the client of IOREQ
- *                                   Server <id>
- * HVM_unmap_io_range_from_ioreq_server: Deregister an I/O range of <domid>
- *                                       for emulation by the client of IOREQ
- *                                       Server <id>
- *
- * There are three types of I/O that can be emulated: port I/O, memory accesses
- * and PCI config space accesses. The <type> field denotes which type of range
- * the <start> and <end> (inclusive) fields are specifying.
- * PCI config space ranges are specified by segment/bus/device/function values
- * which should be encoded using the HVMOP_PCI_SBDF helper macro below.
- *
- * NOTE: unless an emulation request falls entirely within a range mapped
- * by a secondary emulator, it will not be passed to that emulator.
- */
-#define HVMOP_map_io_range_to_ioreq_server 19
-#define HVMOP_unmap_io_range_from_ioreq_server 20
-struct xen_hvm_io_range {
-    domid_t domid;               /* IN - domain to be serviced */
-    ioservid_t id;               /* IN - server id */
-    uint32_t type;               /* IN - type of range */
-# define HVMOP_IO_RANGE_PORT   0 /* I/O port range */
-# define HVMOP_IO_RANGE_MEMORY 1 /* MMIO range */
-# define HVMOP_IO_RANGE_PCI    2 /* PCI segment/bus/dev/func range */
-    uint64_aligned_t start, end; /* IN - inclusive start and end of range */
-};
-typedef struct xen_hvm_io_range xen_hvm_io_range_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_io_range_t);
-
-#define HVMOP_PCI_SBDF(s,b,d,f)                 \
-	((((s) & 0xffff) << 16) |                   \
-	 (((b) & 0xff) << 8) |                      \
-	 (((d) & 0x1f) << 3) |                      \
-	 ((f) & 0x07))
-
-/*
- * HVMOP_destroy_ioreq_server: Destroy the IOREQ Server <id> servicing domain
- *                             <domid>.
- *
- * Any registered I/O ranges will be automatically deregistered.
- */
-#define HVMOP_destroy_ioreq_server 21
-struct xen_hvm_destroy_ioreq_server {
-    domid_t domid; /* IN - domain to be serviced */
-    ioservid_t id; /* IN - server id */
-};
-typedef struct xen_hvm_destroy_ioreq_server xen_hvm_destroy_ioreq_server_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_destroy_ioreq_server_t);
-
-/*
- * HVMOP_set_ioreq_server_state: Enable or disable the IOREQ Server <id> servicing
- *                               domain <domid>.
- *
- * The IOREQ Server will not be passed any emulation requests until it is in the
- * enabled state.
- * Note that the contents of the ioreq_pfn and bufioreq_fn (see
- * HVMOP_get_ioreq_server_info) are not meaningful until the IOREQ Server is in
- * the enabled state.
- */
-#define HVMOP_set_ioreq_server_state 22
-struct xen_hvm_set_ioreq_server_state {
-    domid_t domid;   /* IN - domain to be serviced */
-    ioservid_t id;   /* IN - server id */
-    uint8_t enabled; /* IN - enabled? */    
-};
-typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index b206f5a..9f7c174 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -634,12 +634,6 @@ static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_ioreq_server(XSM_DEFAULT_ARG struct domain *d, int op)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e2d336f..b5845a2 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -166,7 +166,6 @@ struct xsm_operations {
     int (*hvm_set_isa_irq_level) (struct domain *d);
     int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
-    int (*hvm_ioreq_server) (struct domain *d, int op);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -656,11 +655,6 @@ static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
     return xsm_ops->hvm_inject_msi(d);
 }
 
-static inline int xsm_hvm_ioreq_server (xsm_default_t def, struct domain *d, int op)
-{
-    return xsm_ops->hvm_ioreq_server(d, op);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index a082b28..d544ec1 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -149,7 +149,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
     set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
-    set_to_dummy_if_null(ops, hvm_ioreq_server);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, machine_memory_map);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 2ed5888..147bed7 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1522,11 +1522,6 @@ static int flask_hvm_inject_msi(struct domain *d)
     return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
 }
 
-static int flask_hvm_ioreq_server(struct domain *d, int op)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__HVMCTL);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1805,7 +1800,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
     .hvm_inject_msi = flask_hvm_inject_msi,
-    .hvm_ioreq_server = flask_hvm_ioreq_server,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .machine_memory_map = flask_machine_memory_map,
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
  2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
  2016-12-06 13:46 ` [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-15 15:44   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	Paul Durrant, Jan Beulich, Daniel De Graaf

The handle type passed to the underlying shadow and hap functions is
changed for compatibility with the new hypercall buffer.

NOTE: This patch also modifies the type of the 'nr' parameter of
      xc_hvm_track_dirty_vram() from uint64_t to uint32_t. In practice
      the value passed was always truncated to 32 bits.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>

v2:
- Addressed several comments from Jan.
---
 tools/flask/policy/modules/xen.if   |  4 ++--
 tools/libxc/include/xenctrl.h       |  2 +-
 tools/libxc/xc_misc.c               | 32 +++++++++-----------------
 xen/arch/x86/hvm/dm.c               | 45 +++++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 41 ---------------------------------
 xen/arch/x86/mm/hap/hap.c           |  2 +-
 xen/arch/x86/mm/shadow/common.c     |  2 +-
 xen/include/asm-x86/hap.h           |  2 +-
 xen/include/asm-x86/shadow.h        |  2 +-
 xen/include/public/hvm/dm_op.h      | 18 +++++++++++++++
 xen/include/public/hvm/hvm_op.h     | 16 -------------
 xen/xsm/flask/hooks.c               |  3 ---
 xen/xsm/flask/policy/access_vectors |  2 --
 13 files changed, 80 insertions(+), 91 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 779232e..366273e 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -58,7 +58,7 @@ define(`create_domain_common', `
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
 	allow $1 $2:grant setup;
 	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc
-			setparam pcilevel trackdirtyvram nested altp2mhvm altp2mhvm_op };
+			setparam pcilevel nested altp2mhvm altp2mhvm_op };
 ')
 
 # create_domain(priv, target)
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam trackdirtyvram hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index cc37752..a6e26c2 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1620,7 +1620,7 @@ int xc_hvm_inject_msi(
  */
 int xc_hvm_track_dirty_vram(
     xc_interface *xch, domid_t dom,
-    uint64_t first_pfn, uint64_t nr,
+    uint64_t first_pfn, uint32_t nr,
     unsigned long *bitmap);
 
 /*
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 06e90de..4c41d41 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -581,34 +581,22 @@ int xc_hvm_inject_msi(
 
 int xc_hvm_track_dirty_vram(
     xc_interface *xch, domid_t dom,
-    uint64_t first_pfn, uint64_t nr,
+    uint64_t first_pfn, uint32_t nr,
     unsigned long *dirty_bitmap)
 {
-    DECLARE_HYPERCALL_BOUNCE(dirty_bitmap, (nr+7) / 8, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_track_dirty_vram, arg);
-    int rc;
+    struct xen_dm_op op;
+    struct xen_dm_op_track_dirty_vram *data;
 
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL || xc_hypercall_bounce_pre(xch, dirty_bitmap) )
-    {
-        PERROR("Could not bounce memory for xc_hvm_track_dirty_vram hypercall");
-        rc = -1;
-        goto out;
-    }
+    memset(&op, 0, sizeof(op));
 
-    arg->domid     = dom;
-    arg->first_pfn = first_pfn;
-    arg->nr        = nr;
-    set_xen_guest_handle(arg->dirty_bitmap, dirty_bitmap);
+    op.op = XEN_DMOP_track_dirty_vram;
+    data = &op.u.track_dirty_vram;
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_track_dirty_vram,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    data->first_pfn = first_pfn;
+    data->nr = nr;
 
-out:
-    xc_hypercall_buffer_free(xch, arg);
-    xc_hypercall_bounce_post(xch, dirty_bitmap);
-    return rc;
+    return do_dm_op(xch, dom, 2, &op, sizeof(op),
+                    dirty_bitmap, (nr + 7) / 8);
 }
 
 int xc_hvm_modified_memory(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index b9d70d1..1abe643 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -18,7 +18,9 @@
 #include <xen/hypercall.h>
 #include <xen/sched.h>
 
+#include <asm/hap.h>
 #include <asm/hvm/ioreq.h>
+#include <asm/shadow.h>
 
 #include <xsm/xsm.h>
 
@@ -68,6 +70,35 @@ static int copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
     return copy_to_guest(buf.h, src, size) ? -EFAULT : 0;
 }
 
+static int track_dirty_vram(struct domain *d,
+                            unsigned int nr_bufs,
+                            XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
+                            xen_pfn_t first_pfn, unsigned int nr)
+{
+    struct xen_dm_op_buf buf;
+    int rc;
+
+    if ( nr > (GB(1) >> PAGE_SHIFT) )
+        return -EINVAL;
+
+    if ( d->is_dying )
+        return -ESRCH;
+
+    if ( !d->vcpu || !d->vcpu[0] )
+        return -EINVAL;
+
+    rc = get_buf(bufs, nr_bufs, 1, &buf);
+    if ( rc )
+        return rc;
+
+    if ( ((nr + 7) / 8) > buf.size )
+        return -EINVAL;
+
+    return shadow_mode_enabled(d) ?
+        shadow_track_dirty_vram(d, first_pfn, nr, buf.h) :
+        hap_track_dirty_vram(d, first_pfn, nr, buf.h);
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -182,6 +213,20 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_track_dirty_vram:
+    {
+        const struct xen_dm_op_track_dirty_vram *data =
+            &op.u.track_dirty_vram;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = track_dirty_vram(d, nr_bufs, bufs, data->first_pfn,
+                              data->nr);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3009449..e0e9940 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5554,47 +5554,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
 
-    case HVMOP_track_dirty_vram:
-    {
-        struct xen_hvm_track_dirty_vram a;
-        struct domain *d;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto tdv_fail;
-
-        if ( a.nr > GB(1) >> PAGE_SHIFT )
-            goto tdv_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto tdv_fail;
-
-        rc = -ESRCH;
-        if ( d->is_dying )
-            goto tdv_fail;
-
-        rc = -EINVAL;
-        if ( d->vcpu == NULL || d->vcpu[0] == NULL )
-            goto tdv_fail;
-
-        if ( shadow_mode_enabled(d) )
-            rc = shadow_track_dirty_vram(d, a.first_pfn, a.nr, a.dirty_bitmap);
-        else
-            rc = hap_track_dirty_vram(d, a.first_pfn, a.nr, a.dirty_bitmap);
-
-    tdv_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_modified_memory:
     {
         struct xen_hvm_modified_memory a;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index e6dc088..6dbb3cc 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -68,7 +68,7 @@
 int hap_track_dirty_vram(struct domain *d,
                          unsigned long begin_pfn,
                          unsigned long nr,
-                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
 {
     long rc = 0;
     struct sh_dirty_vram *dirty_vram;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 39be564..570fb25 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -3677,7 +3677,7 @@ static void sh_clean_dirty_bitmap(struct domain *d)
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long begin_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
+                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
 {
     int rc = 0;
     unsigned long end_pfn = begin_pfn + nr;
diff --git a/xen/include/asm-x86/hap.h b/xen/include/asm-x86/hap.h
index dedb4b1..88587c4 100644
--- a/xen/include/asm-x86/hap.h
+++ b/xen/include/asm-x86/hap.h
@@ -43,7 +43,7 @@ void  hap_vcpu_init(struct vcpu *v);
 int   hap_track_dirty_vram(struct domain *d,
                            unsigned long begin_pfn,
                            unsigned long nr,
-                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
 
 extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
 int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index bac952f..7e1ed3b 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -63,7 +63,7 @@ int shadow_enable(struct domain *d, u32 mode);
 int shadow_track_dirty_vram(struct domain *d,
                             unsigned long first_pfn,
                             unsigned long nr,
-                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
+                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
 
 /* Handler for shadow control ops: operations from user-space to enable
  * and disable ephemeral shadow modes (test mode and log-dirty mode) and
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 2b22077..249fd1f 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -174,6 +174,23 @@ struct xen_dm_op_destroy_ioreq_server {
     uint16_t pad;
 };
 
+/*
+ * XEN_DMOP_track_dirty_vram: Track modifications to the specified pfn
+ *                            range.
+ *
+ * NOTE: The bitmap passed back to the caller is passed in a
+ *       secondary buffer.
+ */
+#define XEN_DMOP_track_dirty_vram 7
+
+struct xen_dm_op_track_dirty_vram {
+    /* IN - number of pages to be tracked */
+    uint32_t nr;
+    uint32_t pad;
+    /* IN - first pfn to track */
+    uint64_aligned_t first_pfn;
+};
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -184,6 +201,7 @@ struct xen_dm_op {
         struct xen_dm_op_ioreq_server_range unmap_io_range_from_ioreq_server;
         struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
         struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
+        struct xen_dm_op_track_dirty_vram track_dirty_vram;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 6fcd86d..47e836c 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -95,22 +95,6 @@ typedef enum {
 /* Following tools-only interfaces may change in future. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
-/* Track dirty VRAM. */
-#define HVMOP_track_dirty_vram    6
-struct xen_hvm_track_dirty_vram {
-    /* Domain to be tracked. */
-    domid_t  domid;
-    /* Number of pages to track. */
-    uint32_t nr;
-    /* First pfn to track. */
-    uint64_aligned_t first_pfn;
-    /* OUT variable. */
-    /* Dirty bitmap buffer. */
-    XEN_GUEST_HANDLE_64(uint8) dirty_bitmap;
-};
-typedef struct xen_hvm_track_dirty_vram xen_hvm_track_dirty_vram_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_track_dirty_vram_t);
-
 /* Notify that some pages got modified by the Device Model. */
 #define HVMOP_modified_memory    7
 struct xen_hvm_modified_memory {
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 147bed7..effc80e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1177,9 +1177,6 @@ static int flask_hvm_param(struct domain *d, unsigned long op)
     case HVMOP_get_param:
         perm = HVM__GETPARAM;
         break;
-    case HVMOP_track_dirty_vram:
-        perm = HVM__TRACKDIRTYVRAM;
-        break;
     default:
         perm = HVM__HVMCTL;
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 49c9a9e..5af427f 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -266,8 +266,6 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_track_dirty_vram
-    trackdirtyvram
 # HVMOP_modified_memory, HVMOP_get_mem_type, HVMOP_set_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
-- 
2.1.4




* [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
                   ` (2 preceding siblings ...)
  2016-12-06 13:46 ` [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-15 15:51   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
	Daniel De Graaf

... HVMOP_set_pci_link_route

These HVMOPs were exposed to guests, so their definitions need to be
preserved for compatibility. This patch therefore updates
__XEN_LATEST_INTERFACE_VERSION__ to 0x00040900 and makes the HVMOP
definitions conditional on __XEN_INTERFACE_VERSION__ being less than that
value.

NOTE: This patch also widens the 'domain' parameter of
      xc_hvm_set_pci_intx_level() from a uint8_t to a uint16_t.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
- Interface version modification moved to this patch, where it is needed.
- Addressed several comments from Jan.
---
 tools/flask/policy/modules/xen.if   |   8 +--
 tools/libxc/include/xenctrl.h       |   2 +-
 tools/libxc/xc_misc.c               |  83 ++++++++--------------
 xen/arch/x86/hvm/dm.c               |  84 ++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 ------------------------------------
 xen/arch/x86/hvm/irq.c              |   7 +-
 xen/include/public/hvm/dm_op.h      |  46 ++++++++++++
 xen/include/public/hvm/hvm_op.h     |   4 ++
 xen/include/public/xen-compat.h     |   2 +-
 xen/include/xen/hvm/irq.h           |   2 +-
 xen/include/xsm/dummy.h             |  18 -----
 xen/include/xsm/xsm.h               |  18 -----
 xen/xsm/dummy.c                     |   3 -
 xen/xsm/flask/hooks.c               |  15 ----
 xen/xsm/flask/policy/access_vectors |   6 --
 15 files changed, 174 insertions(+), 260 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 366273e..e6dfaf0 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -57,8 +57,8 @@ define(`create_domain_common', `
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
 	allow $1 $2:grant setup;
-	allow $1 $2:hvm { cacheattr getparam hvmctl irqlevel pciroute sethvmc
-			setparam pcilevel nested altp2mhvm altp2mhvm_op };
+	allow $1 $2:hvm { cacheattr getparam hvmctl sethvmc
+			setparam nested altp2mhvm altp2mhvm_op };
 ')
 
 # create_domain(priv, target)
@@ -93,7 +93,7 @@ define(`manage_domain', `
 #   (inbound migration is the same as domain creation)
 define(`migrate_domain_out', `
 	allow $1 domxen_t:mmu map_read;
-	allow $1 $2:hvm { gethvmc getparam irqlevel };
+	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
 	allow $1 $2:domain2 gettsc;
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam hvmctl irqlevel pciroute pcilevel cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr send_irq dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index a6e26c2..db04848 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1594,7 +1594,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
 
 int xc_hvm_set_pci_intx_level(
     xc_interface *xch, domid_t dom,
-    uint8_t domain, uint8_t bus, uint8_t device, uint8_t intx,
+    uint16_t domain, uint8_t bus, uint8_t device, uint8_t intx,
     unsigned int level);
 int xc_hvm_set_isa_irq_level(
     xc_interface *xch, domid_t dom,
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 4c41d41..ddea2bb 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -470,33 +470,24 @@ int xc_getcpuinfo(xc_interface *xch, int max_cpus,
 
 int xc_hvm_set_pci_intx_level(
     xc_interface *xch, domid_t dom,
-    uint8_t domain, uint8_t bus, uint8_t device, uint8_t intx,
+    uint16_t domain, uint8_t bus, uint8_t device, uint8_t intx,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_intx_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_intx_level hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_intx_level *data;
 
-    arg->domid  = dom;
-    arg->domain = domain;
-    arg->bus    = bus;
-    arg->device = device;
-    arg->intx   = intx;
-    arg->level  = level;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_intx_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_pci_intx_level;
+    data = &op.u.set_pci_intx_level;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->domain = domain;
+    data->bus = bus;
+    data->device = device;
+    data->intx = intx;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_isa_irq_level(
@@ -504,53 +495,35 @@ int xc_hvm_set_isa_irq_level(
     uint8_t isa_irq,
     unsigned int level)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_isa_irq_level, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_isa_irq_level hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_isa_irq_level *data;
 
-    arg->domid   = dom;
-    arg->isa_irq = isa_irq;
-    arg->level   = level;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_isa_irq_level,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_isa_irq_level;
+    data = &op.u.set_isa_irq_level;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->isa_irq = isa_irq;
+    data->level = level;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_pci_link_route(
     xc_interface *xch, domid_t dom, uint8_t link, uint8_t isa_irq)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_pci_link_route, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_pci_link_route hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_pci_link_route *data;
 
-    arg->domid   = dom;
-    arg->link    = link;
-    arg->isa_irq = isa_irq;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_pci_link_route,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_pci_link_route;
+    data = &op.u.set_pci_link_route;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->link = link;
+    data->isa_irq = isa_irq;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_msi(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 1abe643..06bace0 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -99,6 +99,49 @@ static int track_dirty_vram(struct domain *d,
         hap_track_dirty_vram(d, first_pfn, nr, buf.h);
 }
 
+static int set_pci_intx_level(struct domain *d, uint16_t domain,
+                              uint8_t bus, uint8_t device,
+                              uint8_t intx, uint8_t level)
+{
+    if ( domain != 0 || bus != 0 || device > 0x1f || intx > 3 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_pci_intx_deassert(d, device, intx);
+        break;
+    case 1:
+        hvm_pci_intx_assert(d, device, intx);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+static int set_isa_irq_level(struct domain *d, uint8_t isa_irq,
+                             uint8_t level)
+{
+    if ( isa_irq > 15 )
+        return -EINVAL;
+
+    switch ( level )
+    {
+    case 0:
+        hvm_isa_irq_deassert(d, isa_irq);
+        break;
+    case 1:
+        hvm_isa_irq_assert(d, isa_irq);
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -227,6 +270,47 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_set_pci_intx_level:
+    {
+        const struct xen_dm_op_set_pci_intx_level *data =
+            &op.u.set_pci_intx_level;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = set_pci_intx_level(d, data->domain, data->bus,
+                                data->device, data->intx,
+                                data->level);
+        break;
+    }
+
+    case XEN_DMOP_set_isa_irq_level:
+    {
+        const struct xen_dm_op_set_isa_irq_level *data =
+            &op.u.set_isa_irq_level;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = set_isa_irq_level(d, data->isa_irq, data->level);
+        break;
+    }
+
+    case XEN_DMOP_set_pci_link_route:
+    {
+        const struct xen_dm_op_set_pci_link_route *data =
+            &op.u.set_pci_link_route;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = hvm_set_pci_link_route(d, data->link, data->isa_irq);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e0e9940..b336840 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4472,50 +4472,6 @@ void hvm_hypercall_page_initialise(struct domain *d,
     hvm_funcs.init_hypercall_page(d, hypercall_page);
 }
 
-static int hvmop_set_pci_intx_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_intx_level_t) uop)
-{
-    struct xen_hvm_set_pci_intx_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.domain > 0) || (op.bus > 0) || (op.device > 31) || (op.intx > 3) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_intx_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_pci_intx_deassert(d, op.device, op.intx);
-        break;
-    case 1:
-        hvm_pci_intx_assert(d, op.device, op.intx);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
 {
     struct domain *d = v->domain;
@@ -4659,83 +4615,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-static int hvmop_set_isa_irq_level(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_isa_irq_level_t) uop)
-{
-    struct xen_hvm_set_isa_irq_level op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( op.isa_irq > 15 )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_isa_irq_level(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    switch ( op.level )
-    {
-    case 0:
-        hvm_isa_irq_deassert(d, op.isa_irq);
-        break;
-    case 1:
-        hvm_isa_irq_assert(d, op.isa_irq);
-        break;
-    default:
-        rc = -EINVAL;
-        break;
-    }
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
-static int hvmop_set_pci_link_route(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_pci_link_route_t) uop)
-{
-    struct xen_hvm_set_pci_link_route op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    if ( (op.link > 3) || (op.isa_irq > 15) )
-        return -EINVAL;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_set_pci_link_route(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = 0;
-    hvm_set_pci_link_route(d, op.link, op.isa_irq);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_inject_msi(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
 {
@@ -5530,26 +5409,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_param_t));
         break;
 
-    case HVMOP_set_pci_intx_level:
-        rc = hvmop_set_pci_intx_level(
-            guest_handle_cast(arg, xen_hvm_set_pci_intx_level_t));
-        break;
-
-    case HVMOP_set_isa_irq_level:
-        rc = hvmop_set_isa_irq_level(
-            guest_handle_cast(arg, xen_hvm_set_isa_irq_level_t));
-        break;
-
     case HVMOP_inject_msi:
         rc = hvmop_inject_msi(
             guest_handle_cast(arg, xen_hvm_inject_msi_t));
         break;
 
-    case HVMOP_set_pci_link_route:
-        rc = hvmop_set_pci_link_route(
-            guest_handle_cast(arg, xen_hvm_set_pci_link_route_t));
-        break;
-
     case HVMOP_flush_tlbs:
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index e597114..265a620 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -229,13 +229,14 @@ void hvm_assert_evtchn_irq(struct vcpu *v)
         hvm_set_callback_irq_level(v);
 }
 
-void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
+int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
 {
     struct hvm_irq *hvm_irq = &d->arch.hvm_domain.irq;
     u8 old_isa_irq;
     int i;
 
-    ASSERT((link <= 3) && (isa_irq <= 15));
+    if ( (link > 3) || (isa_irq > 15) )
+        return -EINVAL;
 
     spin_lock(&d->arch.hvm_domain.irq_lock);
 
@@ -273,6 +274,8 @@ void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq)
 
     dprintk(XENLOG_G_INFO, "Dom%u PCI link %u changed %u -> %u\n",
             d->domain_id, link, old_isa_irq, isa_irq);
+
+    return 0;
 }
 
 int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data)
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 249fd1f..5751411 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -191,6 +191,49 @@ struct xen_dm_op_track_dirty_vram {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
+ *                              PCI INTx pins.
+ */
+#define XEN_DMOP_set_pci_intx_level 8
+
+struct xen_dm_op_set_pci_intx_level {
+    /* IN - PCI INTx identification (domain:bus:device:intx) */
+    uint16_t domain;
+    uint8_t bus, device, intx;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t  level;
+    uint16_t pad;
+};
+
+/*
+ * XEN_DMOP_set_isa_irq_level: Set the logical level of one of a domain's
+ *                             ISA IRQ lines.
+ */
+#define XEN_DMOP_set_isa_irq_level 9
+
+struct xen_dm_op_set_isa_irq_level {
+    /* IN - ISA IRQ (0-15) */
+    uint8_t  isa_irq;
+    /* IN - Level: 0 -> deasserted, 1 -> asserted */
+    uint8_t  level;
+    uint16_t pad;
+};
+
+/*
+ * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
+ */
+#define XEN_DMOP_set_pci_link_route 10
+
+struct xen_dm_op_set_pci_link_route {
+    /* IN - PCI INTx line (0-3) */
+    uint8_t  link;
+    /* IN - ISA IRQ (1-15) or 0 -> disable link */
+    uint8_t  isa_irq;
+    uint16_t pad;
+};
+
+
 struct xen_dm_op {
     uint32_t op;
     uint32_t pad;
@@ -202,6 +245,9 @@ struct xen_dm_op {
         struct xen_dm_op_set_ioreq_server_state set_ioreq_server_state;
         struct xen_dm_op_destroy_ioreq_server destroy_ioreq_server;
         struct xen_dm_op_track_dirty_vram track_dirty_vram;
+        struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
+        struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
+        struct xen_dm_op_set_pci_link_route set_pci_link_route;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 47e836c..7cf8d4d 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -38,6 +38,8 @@ struct xen_hvm_param {
 typedef struct xen_hvm_param xen_hvm_param_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_param_t);
 
+#if __XEN_INTERFACE_VERSION__ < 0x00040900
+
 /* Set the logical level of one of a domain's PCI INTx wires. */
 #define HVMOP_set_pci_intx_level  2
 struct xen_hvm_set_pci_intx_level {
@@ -76,6 +78,8 @@ struct xen_hvm_set_pci_link_route {
 typedef struct xen_hvm_set_pci_link_route xen_hvm_set_pci_link_route_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t);
 
+#endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
+
 /* Flushes all VCPU TLBs: @arg must be NULL. */
 #define HVMOP_flush_tlbs          5
 
diff --git a/xen/include/public/xen-compat.h b/xen/include/public/xen-compat.h
index dd8a5c0..b673653 100644
--- a/xen/include/public/xen-compat.h
+++ b/xen/include/public/xen-compat.h
@@ -27,7 +27,7 @@
 #ifndef __XEN_PUBLIC_XEN_COMPAT_H__
 #define __XEN_PUBLIC_XEN_COMPAT_H__
 
-#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040800
+#define __XEN_LATEST_INTERFACE_VERSION__ 0x00040900
 
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 /* Xen is built with matching headers and implements the latest interface. */
diff --git a/xen/include/xen/hvm/irq.h b/xen/include/xen/hvm/irq.h
index 4c9cb20..d3f8623 100644
--- a/xen/include/xen/hvm/irq.h
+++ b/xen/include/xen/hvm/irq.h
@@ -122,7 +122,7 @@ void hvm_isa_irq_assert(
 void hvm_isa_irq_deassert(
     struct domain *d, unsigned int isa_irq);
 
-void hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq);
+int hvm_set_pci_link_route(struct domain *d, u8 link, u8 isa_irq);
 
 int hvm_inject_msi(struct domain *d, uint64_t addr, uint32_t data);
 
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 9f7c174..d3de4b7 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -610,24 +610,6 @@ static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint3
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_set_pci_intx_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_isa_irq_level(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
-static XSM_INLINE int xsm_hvm_set_pci_link_route(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b5845a2..2e4a3ce 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -162,9 +162,6 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*hvm_set_pci_intx_level) (struct domain *d);
-    int (*hvm_set_isa_irq_level) (struct domain *d);
-    int (*hvm_set_pci_link_route) (struct domain *d);
     int (*hvm_inject_msi) (struct domain *d);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
@@ -635,21 +632,6 @@ static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint3
     return xsm_ops->shadow_control(d, op);
 }
 
-static inline int xsm_hvm_set_pci_intx_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_intx_level(d);
-}
-
-static inline int xsm_hvm_set_isa_irq_level (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_isa_irq_level(d);
-}
-
-static inline int xsm_hvm_set_pci_link_route (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_set_pci_link_route(d);
-}
-
 static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
 {
     return xsm_ops->hvm_inject_msi(d);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index d544ec1..f1568dd 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -145,9 +145,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, hvm_set_pci_intx_level);
-    set_to_dummy_if_null(ops, hvm_set_isa_irq_level);
-    set_to_dummy_if_null(ops, hvm_set_pci_link_route);
     set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index effc80e..b39692e 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1499,21 +1499,6 @@ static int flask_ioport_mapping(struct domain *d, uint32_t start, uint32_t end,
     return flask_ioport_permission(d, start, end, access);
 }
 
-static int flask_hvm_set_pci_intx_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCILEVEL);
-}
-
-static int flask_hvm_set_isa_irq_level(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__IRQLEVEL);
-}
-
-static int flask_hvm_set_pci_link_route(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__PCIROUTE);
-}
-
 static int flask_hvm_inject_msi(struct domain *d)
 {
     return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 5af427f..708cfe6 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -257,12 +257,6 @@ class hvm
     setparam
 # HVMOP_get_param
     getparam
-# HVMOP_set_pci_intx_level (also needs hvmctl)
-    pcilevel
-# HVMOP_set_isa_irq_level
-    irqlevel
-# HVMOP_set_pci_link_route
-    pciroute
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
                   ` (3 preceding siblings ...)
  2016-12-06 13:46 ` [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-15 16:05   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
	Daniel De Graaf

This patch introduces code to handle DMOP continuations.

NOTE: This patch also modifies the type of the 'nr' argument of
      xc_hvm_modified_memory() from uint64_t to uint32_t. In practice the
      value passed was always truncated to 32 bits.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

v2:
- Addressed several comments from Jan, including...
- Added explanatory note on continuation handling
---
 tools/libxc/include/xenctrl.h       |  2 +-
 tools/libxc/xc_misc.c               | 27 +++++--------
 xen/arch/x86/hvm/dm.c               | 81 ++++++++++++++++++++++++++++++++++++-
 xen/arch/x86/hvm/hvm.c              | 60 ---------------------------
 xen/include/public/hvm/dm_op.h      | 19 +++++++++
 xen/include/public/hvm/hvm_op.h     | 13 ------
 xen/xsm/flask/policy/access_vectors |  2 +-
 7 files changed, 110 insertions(+), 94 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index db04848..9950690 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1627,7 +1627,7 @@ int xc_hvm_track_dirty_vram(
  * Notify that some pages got modified by the Device Model
  */
 int xc_hvm_modified_memory(
-    xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr);
+    xc_interface *xch, domid_t dom, uint64_t first_pfn, uint32_t nr);
 
 /*
  * Set a range of memory to a specific type.
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index ddea2bb..597df99 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -573,29 +573,20 @@ int xc_hvm_track_dirty_vram(
 }
 
 int xc_hvm_modified_memory(
-    xc_interface *xch, domid_t dom, uint64_t first_pfn, uint64_t nr)
+    xc_interface *xch, domid_t dom, uint64_t first_pfn, uint32_t nr)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_modified_memory, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_modified_memory hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_modified_memory *data;
 
-    arg->domid     = dom;
-    arg->first_pfn = first_pfn;
-    arg->nr        = nr;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_modified_memory,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_modified_memory;
+    data = &op.u.modified_memory;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->first_pfn = first_pfn;
+    data->nr = nr;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_set_mem_type(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 06bace0..4e7d8f9 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -14,6 +14,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/event.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/sched.h>
@@ -142,18 +143,77 @@ static int set_isa_irq_level(struct domain *d, uint8_t isa_irq,
     return 0;
 }
 
+static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
+                           unsigned int *nr)
+{
+    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
+    unsigned int iter;
+    int rc;
+
+    if ( (*first_pfn > last_pfn) ||
+         (last_pfn > domain_get_maximum_gpfn(d)) )
+        return -EINVAL;
+
+    if ( !paging_mode_log_dirty(d) )
+        return 0;
+
+    iter = 0;
+    rc = 0;
+    while ( iter < *nr )
+    {
+        unsigned long pfn = *first_pfn + iter;
+        struct page_info *page;
+
+        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
+        if ( page )
+        {
+            paging_mark_dirty(d, page_to_mfn(page));
+            /*
+             * These are most probably not page tables any more;
+             * don't take a long time and don't die either.
+             */
+            sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
+            put_page(page);
+        }
+
+        iter++;
+
+        /*
+         * Check for continuation every 256th iteration and if the
+         * iteration is not the last.
+         */
+        if ( (iter < *nr) && ((iter & 0xff) == 0) &&
+             hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    if ( rc == -ERESTART )
+    {
+        *first_pfn += iter;
+        *nr -= iter;
+    }
+
+    return rc;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
 {
     struct domain *d;
     struct xen_dm_op op;
+    bool restart;
     long rc;
 
     rc = rcu_lock_remote_domain_by_id(domid, &d);
     if ( rc )
         return rc;
 
+    restart = false;
+
     if ( !has_hvm_container_domain(d) )
         goto out;
 
@@ -311,17 +371,36 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_modified_memory:
+    {
+        struct xen_dm_op_modified_memory *data =
+            &op.u.modified_memory;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = modified_memory(d, &data->first_pfn, &data->nr);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
     }
 
-    if ( !rc )
+    if ( rc == -ERESTART )
+        restart = true;
+
+    if ( !rc || restart )
         rc = copy_buf_to_guest(bufs, nr_bufs, 0, &op, sizeof(op));
 
  out:
     rcu_unlock_domain(d);
 
+    if ( !rc && restart )
+        rc = hypercall_create_continuation(__HYPERVISOR_dm_op, "iih",
+                                           domid, nr_bufs, bufs);
     return rc;
 }
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index b336840..3760e0b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5385,7 +5385,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     default:
         mask = ~0UL;
         break;
-    case HVMOP_modified_memory:
     case HVMOP_set_mem_type:
         mask = HVMOP_op_mask;
         break;
@@ -5418,65 +5417,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
 
-    case HVMOP_modified_memory:
-    {
-        struct xen_hvm_modified_memory a;
-        struct domain *d;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto modmem_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto modmem_fail;
-
-        rc = -EINVAL;
-        if ( a.nr < start_iter ||
-             ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-             ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-            goto modmem_fail;
-
-        rc = 0;
-        if ( !paging_mode_log_dirty(d) )
-            goto modmem_fail;
-
-        while ( a.nr > start_iter )
-        {
-            unsigned long pfn = a.first_pfn + start_iter;
-            struct page_info *page;
-
-            page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
-            if ( page )
-            {
-                paging_mark_dirty(d, page_to_mfn(page));
-                /* These are most probably not page tables any more */
-                /* don't take a long time and don't die either */
-                sh_remove_shadows(d, _mfn(page_to_mfn(page)), 1, 0);
-                put_page(page);
-            }
-
-            /* Check for continuation if it's not the last interation */
-            if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-                 hypercall_preempt_check() )
-            {
-                rc = -ERESTART;
-                break;
-            }
-        }
-
-    modmem_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_get_mem_type:
         rc = hvmop_get_mem_type(
             guest_handle_cast(arg, xen_hvm_get_mem_type_t));
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 5751411..1a1b784 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -233,6 +233,24 @@ struct xen_dm_op_set_pci_link_route {
     uint16_t pad;
 };
 
+/*
+ * XEN_DMOP_modified_memory: Notify that a set of pages were modified by
+ *                           an emulator.
+ *
+ * NOTE: In the event of a continuation (return code -ERESTART),
+ *       @first_pfn is set to the first pfn of the remaining set of
+ *       pages and @nr is reduced to the size of the remaining set.
+ */
+#define XEN_DMOP_modified_memory 11
+
+struct xen_dm_op_modified_memory {
+    /* IN - number of contiguous pages modified */
+    uint32_t nr;
+    uint32_t pad;
+    /* IN - first pfn modified */
+    uint64_aligned_t first_pfn;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -248,6 +266,7 @@ struct xen_dm_op {
         struct xen_dm_op_set_pci_intx_level set_pci_intx_level;
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
+        struct xen_dm_op_modified_memory modified_memory;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 7cf8d4d..76e1b78 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -99,19 +99,6 @@ typedef enum {
 /* Following tools-only interfaces may change in future. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
-/* Notify that some pages got modified by the Device Model. */
-#define HVMOP_modified_memory    7
-struct xen_hvm_modified_memory {
-    /* Domain to be updated. */
-    domid_t  domid;
-    /* Number of pages. */
-    uint32_t nr;
-    /* First pfn. */
-    uint64_aligned_t first_pfn;
-};
-typedef struct xen_hvm_modified_memory xen_hvm_modified_memory_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_modified_memory_t);
-
 #define HVMOP_set_mem_type    8
 /* Notify that a region of memory is to be treated in a specific way. */
 struct xen_hvm_set_mem_type {
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 708cfe6..2041ca5 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -260,7 +260,7 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_modified_memory, HVMOP_get_mem_type, HVMOP_set_mem_type,
+# HVMOP_get_mem_type, HVMOP_set_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-- 
2.1.4




* [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
                   ` (4 preceding siblings ...)
  2016-12-06 13:46 ` [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-15 16:11   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
  2016-12-06 13:46 ` [PATCH v2 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
  7 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
	Daniel De Graaf

This patch removes the need for handling HVMOP restarts, so that
infrastructure is removed.

NOTE: This patch also modifies the type of the 'nr' argument of
      xc_hvm_set_mem_type() from uint64_t to uint32_t. In practice the
      value passed was always truncated to 32 bits.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>

v2:
- Addressed several comments from Jan.
---
 tools/libxc/include/xenctrl.h       |   2 +-
 tools/libxc/xc_misc.c               |  29 +++-----
 xen/arch/x86/hvm/dm.c               |  95 +++++++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 136 +-----------------------------------
 xen/include/public/hvm/dm_op.h      |  22 ++++++
 xen/include/public/hvm/hvm_op.h     |  20 ------
 xen/xsm/flask/policy/access_vectors |   2 +-
 7 files changed, 130 insertions(+), 176 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 9950690..32a1e9e 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1634,7 +1634,7 @@ int xc_hvm_modified_memory(
  * Allowed types are HVMMEM_ram_rw, HVMMEM_ram_ro, HVMMEM_mmio_dm
  */
 int xc_hvm_set_mem_type(
-    xc_interface *xch, domid_t dom, hvmmem_type_t memtype, uint64_t first_pfn, uint64_t nr);
+    xc_interface *xch, domid_t dom, hvmmem_type_t memtype, uint64_t first_pfn, uint32_t nr);
 
 /*
  * Injects a hardware/software CPU trap, to take effect the next time the HVM 
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 597df99..5b06d6b 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -590,30 +590,21 @@ int xc_hvm_modified_memory(
 }
 
 int xc_hvm_set_mem_type(
-    xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint64_t nr)
+    xc_interface *xch, domid_t dom, hvmmem_type_t mem_type, uint64_t first_pfn, uint32_t nr)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_set_mem_type, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_set_mem_type hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_set_mem_type *data;
 
-    arg->domid        = dom;
-    arg->hvmmem_type  = mem_type;
-    arg->first_pfn    = first_pfn;
-    arg->nr           = nr;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_set_mem_type,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_set_mem_type;
+    data = &op.u.set_mem_type;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->mem_type = mem_type;
+    data->first_pfn = first_pfn;
+    data->nr = nr;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_inject_trap(
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 4e7d8f9..3737372 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -199,6 +199,87 @@ static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
     return rc;
 }
 
+static bool allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
+{
+    return ( p2m_is_ram(old) ||
+             (p2m_is_hole(old) && new == p2m_mmio_dm) ||
+             (old == p2m_ioreq_server && new == p2m_ram_rw) );
+}
+
+static int set_mem_type(struct domain *d, hvmmem_type_t mem_type,
+                        xen_pfn_t *first_pfn, unsigned int *nr)
+{
+    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
+    unsigned int iter;
+    int rc;
+
+    /* Interface types to internal p2m types */
+    static const p2m_type_t memtype[] = {
+        [HVMMEM_ram_rw]  = p2m_ram_rw,
+        [HVMMEM_ram_ro]  = p2m_ram_ro,
+        [HVMMEM_mmio_dm] = p2m_mmio_dm,
+        [HVMMEM_unused] = p2m_invalid,
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
+    };
+
+    if ( (*first_pfn > last_pfn) ||
+         (last_pfn > domain_get_maximum_gpfn(d)) )
+        return -EINVAL;
+
+    if ( mem_type >= ARRAY_SIZE(memtype) ||
+         unlikely(mem_type == HVMMEM_unused) )
+        return -EINVAL;
+
+    iter = 0;
+    rc = 0;
+    while ( iter < *nr )
+    {
+        unsigned long pfn = *first_pfn + iter;
+        p2m_type_t t;
+
+        get_gfn_unshare(d, pfn, &t);
+        if ( p2m_is_paging(t) )
+        {
+            put_gfn(d, pfn);
+            p2m_mem_paging_populate(d, pfn);
+            return -EAGAIN;
+        }
+
+        if ( p2m_is_shared(t) )
+            rc = -EAGAIN;
+        else if ( !allow_p2m_type_change(t, memtype[mem_type]) )
+            rc = -EINVAL;
+        else
+            rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
+
+        put_gfn(d, pfn);
+
+        if ( rc )
+            break;
+
+        iter++;
+
+        /*
+         * Check for continuation every 256th iteration and if the
+         * iteration is not the last.
+         */
+        if ( (iter < *nr) && ((iter & 0xff) == 0) &&
+             hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            break;
+        }
+    }
+
+    if ( rc == -ERESTART )
+    {
+        *first_pfn += iter;
+        *nr -= iter;
+    }
+
+    return rc;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -384,6 +465,20 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_set_mem_type:
+    {
+        struct xen_dm_op_set_mem_type *data =
+            &op.u.set_mem_type;
+
+        rc = -EINVAL;
+        if ( data->pad )
+            break;
+
+        rc = set_mem_type(d, data->mem_type, &data->first_pfn,
+                          &data->nr);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 3760e0b..4a24e4e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5266,132 +5266,11 @@ static int hvmop_get_mem_type(
     return rc;
 }
 
-/*
- * Note that this value is effectively part of the ABI, even if we don't need
- * to make it a formal part of it: A guest suspended for migration in the
- * middle of a continuation would fail to work if resumed on a hypervisor
- * using a different value.
- */
-#define HVMOP_op_mask 0xff
-
-static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
-{
-    if ( p2m_is_ram(old) ||
-         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-         (old == p2m_ioreq_server && new == p2m_ram_rw) )
-        return 1;
-
-    return 0;
-}
-
-static int hvmop_set_mem_type(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_mem_type_t) arg,
-    unsigned long *iter)
-{
-    unsigned long start_iter = *iter;
-    struct xen_hvm_set_mem_type a;
-    struct domain *d;
-    int rc;
-
-    /* Interface types to internal p2m types */
-    static const p2m_type_t memtype[] = {
-        [HVMMEM_ram_rw]  = p2m_ram_rw,
-        [HVMMEM_ram_ro]  = p2m_ram_ro,
-        [HVMMEM_mmio_dm] = p2m_mmio_dm,
-        [HVMMEM_unused] = p2m_invalid,
-        [HVMMEM_ioreq_server] = p2m_ioreq_server
-    };
-
-    if ( copy_from_guest(&a, arg, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
-    if ( rc )
-        goto out;
-
-    rc = -EINVAL;
-    if ( a.nr < start_iter ||
-         ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-         ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-        goto out;
-
-    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
-         unlikely(a.hvmmem_type == HVMMEM_unused) )
-        goto out;
-
-    while ( a.nr > start_iter )
-    {
-        unsigned long pfn = a.first_pfn + start_iter;
-        p2m_type_t t;
-
-        get_gfn_unshare(d, pfn, &t);
-        if ( p2m_is_paging(t) )
-        {
-            put_gfn(d, pfn);
-            p2m_mem_paging_populate(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( p2m_is_shared(t) )
-        {
-            put_gfn(d, pfn);
-            rc = -EAGAIN;
-            goto out;
-        }
-        if ( !hvm_allow_p2m_type_change(t, memtype[a.hvmmem_type]) )
-        {
-            put_gfn(d, pfn);
-            goto out;
-        }
-
-        rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
-        put_gfn(d, pfn);
-
-        if ( rc )
-            goto out;
-
-        /* Check for continuation if it's not the last interation */
-        if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-             hypercall_preempt_check() )
-        {
-            rc = -ERESTART;
-            goto out;
-        }
-    }
-    rc = 0;
-
- out:
-    rcu_unlock_domain(d);
-    *iter = start_iter;
-
-    return rc;
-}
-
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    unsigned long start_iter, mask;
     long rc = 0;
 
-    switch ( op & HVMOP_op_mask )
-    {
-    default:
-        mask = ~0UL;
-        break;
-    case HVMOP_set_mem_type:
-        mask = HVMOP_op_mask;
-        break;
-    }
-
-    start_iter = op & ~mask;
-    switch ( op &= mask )
+    switch ( op )
     {
     case HVMOP_set_evtchn_upcall_vector:
         rc = hvmop_set_evtchn_upcall_vector(
@@ -5422,12 +5301,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_get_mem_type_t));
         break;
 
-    case HVMOP_set_mem_type:
-        rc = hvmop_set_mem_type(
-            guest_handle_cast(arg, xen_hvm_set_mem_type_t),
-            &start_iter);
-        break;
-
     case HVMOP_pagetable_dying:
     {
         struct xen_hvm_pagetable_dying a;
@@ -5536,13 +5409,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
     }
 
-    if ( rc == -ERESTART )
-    {
-        ASSERT(!(start_iter & mask));
-        rc = hypercall_create_continuation(__HYPERVISOR_hvm_op, "lh",
-                                           op | start_iter, arg);
-    }
-
     return rc;
 }
 
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 1a1b784..4b74c91 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -251,6 +251,27 @@ struct xen_dm_op_modified_memory {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * XEN_DMOP_set_mem_type: Notify that a region of memory is to be treated
+ *                        in a specific way. (See definition of
+ *                        hvmmem_type_t).
+ *
+ * NOTE: In the event of a continuation (return code -ERESTART), the
+ *       @first_pfn is set to the value of the pfn of the remaining
+ *       region and @nr reduced to the size of the remaining region.
+ */
+#define XEN_DMOP_set_mem_type 12
+
+struct xen_dm_op_set_mem_type {
+    /* IN - number of contiguous pages */
+    uint32_t nr;
+    /* IN - new hvmmem_type_t of region */
+    uint16_t mem_type;
+    uint16_t pad;
+    /* IN - first pfn in region */
+    uint64_aligned_t first_pfn;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -267,6 +288,7 @@ struct xen_dm_op {
         struct xen_dm_op_set_isa_irq_level set_isa_irq_level;
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
+        struct xen_dm_op_set_mem_type set_mem_type;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 76e1b78..d7e2f12 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -96,26 +96,6 @@ typedef enum {
     HVMMEM_ioreq_server
 } hvmmem_type_t;
 
-/* Following tools-only interfaces may change in future. */
-#if defined(__XEN__) || defined(__XEN_TOOLS__)
-
-#define HVMOP_set_mem_type    8
-/* Notify that a region of memory is to be treated in a specific way. */
-struct xen_hvm_set_mem_type {
-    /* Domain to be updated. */
-    domid_t domid;
-    /* Memory type */
-    uint16_t hvmmem_type;
-    /* Number of pages. */
-    uint32_t nr;
-    /* First pfn. */
-    uint64_aligned_t first_pfn;
-};
-typedef struct xen_hvm_set_mem_type xen_hvm_set_mem_type_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_mem_type_t);
-
-#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
-
 /* Hint from PV drivers for pagetable destruction. */
 #define HVMOP_pagetable_dying        9
 struct xen_hvm_pagetable_dying {
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 2041ca5..125210b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -260,7 +260,7 @@ class hvm
     bind_irq
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
-# HVMOP_get_mem_type, HVMOP_set_mem_type,
+# HVMOP_get_mem_type,
 # HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
 # HVMOP_inject_trap
     hvmctl
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
                   ` (5 preceding siblings ...)
  2016-12-06 13:46 ` [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  2016-12-15 16:23   ` Jan Beulich
  2016-12-06 13:46 ` [PATCH v2 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
  7 siblings, 1 reply; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Paul Durrant, Jan Beulich,
	Daniel De Graaf

NOTE: This patch also modifies the types of the 'vector', 'type' and
      'insn_len' arguments of xc_hvm_inject_trap() from uint32_t to
      uint8_t. In practice the values passed were always truncated to
      8 bits.

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wei.liu2@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
- Addressed several comments from Jan.
---
 tools/flask/policy/modules/xen.if   |  2 +-
 tools/libxc/include/xenctrl.h       |  4 +-
 tools/libxc/xc_misc.c               | 64 +++++++++++--------------------
 xen/arch/x86/hvm/dm.c               | 41 ++++++++++++++++++++
 xen/arch/x86/hvm/hvm.c              | 76 -------------------------------------
 xen/include/public/hvm/dm_op.h      | 47 +++++++++++++++++++++++
 xen/include/public/hvm/hvm_op.h     | 45 ----------------------
 xen/include/xsm/dummy.h             |  6 ---
 xen/include/xsm/xsm.h               |  6 ---
 xen/xsm/dummy.c                     |  1 -
 xen/xsm/flask/hooks.c               |  6 ---
 xen/xsm/flask/policy/access_vectors |  5 +--
 12 files changed, 115 insertions(+), 188 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index e6dfaf0..70ce47b 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -151,7 +151,7 @@ define(`device_model', `
 
 	allow $1 $2_target:domain { getdomaininfo shutdown };
 	allow $1 $2_target:mmu { map_read map_write adjust physmap target_hack };
-	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr send_irq dm };
+	allow $1 $2_target:hvm { getparam setparam hvmctl cacheattr dm };
 ')
 
 # make_device_model(priv, dm_dom, hvm_dom)
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 32a1e9e..dc225cf 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1641,8 +1641,8 @@ int xc_hvm_set_mem_type(
  * resumes. 
  */
 int xc_hvm_inject_trap(
-    xc_interface *xch, domid_t dom, int vcpu, uint32_t vector,
-    uint32_t type, uint32_t error_code, uint32_t insn_len,
+    xc_interface *xch, domid_t dom, int vcpu, uint8_t vector,
+    uint8_t type, uint32_t error_code, uint8_t insn_len,
     uint64_t cr2);
 
 /*
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index 5b06d6b..98ab826 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -527,29 +527,20 @@ int xc_hvm_set_pci_link_route(
 }
 
 int xc_hvm_inject_msi(
-    xc_interface *xch, domid_t dom, uint64_t addr, uint32_t data)
+    xc_interface *xch, domid_t dom, uint64_t msi_addr, uint32_t msi_data)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_inject_msi, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_inject_msi hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_inject_msi *data;
 
-    arg->domid = dom;
-    arg->addr  = addr;
-    arg->data  = data;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_inject_msi,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_inject_msi;
+    data = &op.u.inject_msi;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->addr = msi_addr;
+    data->data = msi_data;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_hvm_track_dirty_vram(
@@ -608,35 +599,26 @@ int xc_hvm_set_mem_type(
 }
 
 int xc_hvm_inject_trap(
-    xc_interface *xch, domid_t dom, int vcpu, uint32_t vector,
-    uint32_t type, uint32_t error_code, uint32_t insn_len,
+    xc_interface *xch, domid_t dom, int vcpu, uint8_t vector,
+    uint8_t type, uint32_t error_code, uint8_t insn_len,
     uint64_t cr2)
 {
-    DECLARE_HYPERCALL_BUFFER(struct xen_hvm_inject_trap, arg);
-    int rc;
-
-    arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
-    if ( arg == NULL )
-    {
-        PERROR("Could not allocate memory for xc_hvm_inject_trap hypercall");
-        return -1;
-    }
+    struct xen_dm_op op;
+    struct xen_dm_op_inject_trap *data;
 
-    arg->domid       = dom;
-    arg->vcpuid      = vcpu;
-    arg->vector      = vector;
-    arg->type        = type;
-    arg->error_code  = error_code;
-    arg->insn_len    = insn_len;
-    arg->cr2         = cr2;
+    memset(&op, 0, sizeof(op));
 
-    rc = xencall2(xch->xcall, __HYPERVISOR_hvm_op,
-                  HVMOP_inject_trap,
-                  HYPERCALL_BUFFER_AS_ARG(arg));
+    op.op = XEN_DMOP_inject_trap;
+    data = &op.u.inject_trap;
 
-    xc_hypercall_buffer_free(xch, arg);
+    data->vcpuid = vcpu;
+    data->vector = vector;
+    data->type = type;
+    data->error_code = error_code;
+    data->insn_len = insn_len;
+    data->cr2 = cr2;
 
-    return rc;
+    return do_dm_op(xch, dom, 1, &op, sizeof(op));
 }
 
 int xc_livepatch_upload(xc_interface *xch,
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index 3737372..fa67598 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -280,6 +280,28 @@ static int set_mem_type(struct domain *d, hvmmem_type_t mem_type,
     return rc;
 }
 
+static int inject_trap(struct domain *d, unsigned int vcpuid,
+                       uint8_t vector, uint8_t type,
+                       uint8_t insn_len, uint32_t error_code,
+                       unsigned long cr2)
+{
+    struct vcpu *v;
+
+    if ( vcpuid >= d->max_vcpus || !(v = d->vcpu[vcpuid]) )
+        return -EINVAL;
+
+    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+        return -EBUSY;
+
+    v->arch.hvm_vcpu.inject_trap.type = type;
+    v->arch.hvm_vcpu.inject_trap.insn_len = insn_len;
+    v->arch.hvm_vcpu.inject_trap.error_code = error_code;
+    v->arch.hvm_vcpu.inject_trap.cr2 = cr2;
+    v->arch.hvm_vcpu.inject_trap.vector = vector;
+
+    return 0;
+}
+
 long do_dm_op(domid_t domid,
               unsigned int nr_bufs,
               XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
@@ -479,6 +501,25 @@ long do_dm_op(domid_t domid,
         break;
     }
 
+    case XEN_DMOP_inject_trap:
+    {
+        const struct xen_dm_op_inject_trap *data =
+            &op.u.inject_trap;
+
+        rc = inject_trap(d, data->vcpuid, data->vector,
+                         data->type, data->insn_len,
+                         data->error_code, data->cr2);
+        break;
+    }
+    case XEN_DMOP_inject_msi:
+    {
+        const struct xen_dm_op_inject_msi *data =
+            &op.u.inject_msi;
+
+        rc = hvm_inject_msi(d, data->addr, data->data);
+        break;
+    }
+
     default:
         rc = -EOPNOTSUPP;
         break;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 4a24e4e..be3c133 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4615,35 +4615,6 @@ static void hvm_s3_resume(struct domain *d)
     }
 }
 
-static int hvmop_inject_msi(
-    XEN_GUEST_HANDLE_PARAM(xen_hvm_inject_msi_t) uop)
-{
-    struct xen_hvm_inject_msi op;
-    struct domain *d;
-    int rc;
-
-    if ( copy_from_guest(&op, uop, 1) )
-        return -EFAULT;
-
-    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
-    if ( rc != 0 )
-        return rc;
-
-    rc = -EINVAL;
-    if ( !is_hvm_domain(d) )
-        goto out;
-
-    rc = xsm_hvm_inject_msi(XSM_DM_PRIV, d);
-    if ( rc )
-        goto out;
-
-    rc = hvm_inject_msi(d, op.addr, op.data);
-
- out:
-    rcu_unlock_domain(d);
-    return rc;
-}
-
 static int hvmop_flush_tlb_all(void)
 {
     struct domain *d = current->domain;
@@ -5287,11 +5258,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_param_t));
         break;
 
-    case HVMOP_inject_msi:
-        rc = hvmop_inject_msi(
-            guest_handle_cast(arg, xen_hvm_inject_msi_t));
-        break;
-
     case HVMOP_flush_tlbs:
         rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
         break;
@@ -5348,48 +5314,6 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
-    case HVMOP_inject_trap: 
-    {
-        xen_hvm_inject_trap_t tr;
-        struct domain *d;
-        struct vcpu *v;
-
-        if ( copy_from_guest(&tr, arg, 1 ) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(tr.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto injtrap_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto injtrap_fail;
-
-        rc = -ENOENT;
-        if ( tr.vcpuid >= d->max_vcpus || (v = d->vcpu[tr.vcpuid]) == NULL )
-            goto injtrap_fail;
-        
-        if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
-            rc = -EBUSY;
-        else 
-        {
-            v->arch.hvm_vcpu.inject_trap.vector = tr.vector;
-            v->arch.hvm_vcpu.inject_trap.type = tr.type;
-            v->arch.hvm_vcpu.inject_trap.error_code = tr.error_code;
-            v->arch.hvm_vcpu.inject_trap.insn_len = tr.insn_len;
-            v->arch.hvm_vcpu.inject_trap.cr2 = tr.cr2;
-            rc = 0;
-        }
-
-    injtrap_fail:
-        rcu_unlock_domain(d);
-        break;
-    }
-
     case HVMOP_guest_request_vm_event:
         if ( guest_handle_is_null(arg) )
             monitor_guest_request();
diff --git a/xen/include/public/hvm/dm_op.h b/xen/include/public/hvm/dm_op.h
index 4b74c91..94f3f62 100644
--- a/xen/include/public/hvm/dm_op.h
+++ b/xen/include/public/hvm/dm_op.h
@@ -272,6 +272,51 @@ struct xen_dm_op_set_mem_type {
     uint64_aligned_t first_pfn;
 };
 
+/*
+ * XEN_DMOP_inject_trap: Inject a trap into a VCPU, which will get taken up
+ *                       when it is next scheduled.
+ *
+ * Note that the caller should know enough of the state of the CPU before
+ * injecting, to know what the effect of injecting the trap will be.
+ */
+#define XEN_DMOP_inject_trap 13
+
+struct xen_dm_op_inject_trap {
+    /* IN - index of vCPU */
+    uint32_t vcpuid;
+    /* IN - interrupt vector */
+    uint8_t vector;
+    /* IN - trap type (DMOP_TRAP_*) */
+    uint8_t type;
+/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
+# define DMOP_TRAP_ext_int    0 /* external interrupt */
+# define DMOP_TRAP_nmi        2 /* nmi */
+# define DMOP_TRAP_hw_exc     3 /* hardware exception */
+# define DMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
+# define DMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
+# define DMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
+    /* IN - instruction length */
+    uint8_t insn_len;
+    uint8_t pad;
+    /* IN - error code (or ~0 to skip) */
+    uint32_t error_code;
+    /* IN - CR2 for page faults */
+    uint64_aligned_t cr2;
+};
+
+/*
+ * XEN_DMOP_inject_msi: Inject an MSI for an emulated device.
+ */
+#define XEN_DMOP_inject_msi 14
+
+struct xen_dm_op_inject_msi {
+    /* IN - MSI data (lower 32 bits) */
+    uint32_t data;
+    uint32_t pad;
+    /* IN - MSI address (0xfeexxxxx) */
+    uint64_aligned_t addr;
+};
+
 
 struct xen_dm_op {
     uint32_t op;
@@ -289,6 +334,8 @@ struct xen_dm_op {
         struct xen_dm_op_set_pci_link_route set_pci_link_route;
         struct xen_dm_op_modified_memory modified_memory;
         struct xen_dm_op_set_mem_type set_mem_type;
+        struct xen_dm_op_inject_trap inject_trap;
+        struct xen_dm_op_inject_msi inject_msi;
     } u;
 };
 
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index d7e2f12..bc00ef0 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -133,38 +133,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_xentrace_t);
 /* Deprecated by XENMEM_access_op_get_access */
 #define HVMOP_get_mem_access        13
 
-#define HVMOP_inject_trap            14
-/* Inject a trap into a VCPU, which will get taken up on the next
- * scheduling of it. Note that the caller should know enough of the
- * state of the CPU before injecting, to know what the effect of
- * injecting the trap will be.
- */
-struct xen_hvm_inject_trap {
-    /* Domain to be queried. */
-    domid_t domid;
-    /* VCPU */
-    uint32_t vcpuid;
-    /* Vector number */
-    uint32_t vector;
-    /* Trap type (HVMOP_TRAP_*) */
-    uint32_t type;
-/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
-# define HVMOP_TRAP_ext_int    0 /* external interrupt */
-# define HVMOP_TRAP_nmi        2 /* nmi */
-# define HVMOP_TRAP_hw_exc     3 /* hardware exception */
-# define HVMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
-# define HVMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
-# define HVMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
-    /* Error code, or ~0u to skip */
-    uint32_t error_code;
-    /* Intruction length */
-    uint32_t insn_len;
-    /* CR2 for page faults */
-    uint64_aligned_t cr2;
-};
-typedef struct xen_hvm_inject_trap xen_hvm_inject_trap_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_trap_t);
-
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #define HVMOP_get_mem_type    15
@@ -184,19 +152,6 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_get_mem_type_t);
 /* Following tools-only interfaces may change in future. */
 #if defined(__XEN__) || defined(__XEN_TOOLS__)
 
-/* MSI injection for emulated devices */
-#define HVMOP_inject_msi         16
-struct xen_hvm_inject_msi {
-    /* Domain to be injected */
-    domid_t   domid;
-    /* Data -- lower 32 bits */
-    uint32_t  data;
-    /* Address (0xfeexxxxx) */
-    uint64_t  addr;
-};
-typedef struct xen_hvm_inject_msi xen_hvm_inject_msi_t;
-DEFINE_XEN_GUEST_HANDLE(xen_hvm_inject_msi_t);
-
 /*
  * Definitions relating to DMOP_create_ioreq_server. (Defined here for
  * backwards compatibility).
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index d3de4b7..4b27ae7 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -610,12 +610,6 @@ static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint3
     return xsm_default_action(action, current->domain, d);
 }
 
-static XSM_INLINE int xsm_hvm_inject_msi(XSM_DEFAULT_ARG struct domain *d)
-{
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
-    return xsm_default_action(action, current->domain, d);
-}
-
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
     XSM_ASSERT_ACTION(XSM_DM_PRIV);
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 2e4a3ce..2cf7ac1 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -162,7 +162,6 @@ struct xsm_operations {
 #ifdef CONFIG_X86
     int (*do_mca) (void);
     int (*shadow_control) (struct domain *d, uint32_t op);
-    int (*hvm_inject_msi) (struct domain *d);
     int (*mem_sharing_op) (struct domain *d, struct domain *cd, int op);
     int (*apic) (struct domain *d, int cmd);
     int (*memtype) (uint32_t access);
@@ -632,11 +631,6 @@ static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint3
     return xsm_ops->shadow_control(d, op);
 }
 
-static inline int xsm_hvm_inject_msi (xsm_default_t def, struct domain *d)
-{
-    return xsm_ops->hvm_inject_msi(d);
-}
-
 static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
 {
     return xsm_ops->mem_sharing_op(d, cd, op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index f1568dd..1f659c7 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -145,7 +145,6 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
 #ifdef CONFIG_X86
     set_to_dummy_if_null(ops, do_mca);
     set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, hvm_inject_msi);
     set_to_dummy_if_null(ops, mem_sharing_op);
     set_to_dummy_if_null(ops, apic);
     set_to_dummy_if_null(ops, machine_memory_map);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index b39692e..46f0108 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1499,11 +1499,6 @@ static int flask_ioport_mapping(struct domain *d, uint32_t start, uint32_t end,
     return flask_ioport_permission(d, start, end, access);
 }
 
-static int flask_hvm_inject_msi(struct domain *d)
-{
-    return current_has_perm(d, SECCLASS_HVM, HVM__SEND_IRQ);
-}
-
 static int flask_mem_sharing_op(struct domain *d, struct domain *cd, int op)
 {
     int rc = current_has_perm(cd, SECCLASS_HVM, HVM__MEM_SHARING);
@@ -1781,7 +1776,6 @@ static struct xsm_operations flask_ops = {
     .hvm_set_pci_intx_level = flask_hvm_set_pci_intx_level,
     .hvm_set_isa_irq_level = flask_hvm_set_isa_irq_level,
     .hvm_set_pci_link_route = flask_hvm_set_pci_link_route,
-    .hvm_inject_msi = flask_hvm_inject_msi,
     .mem_sharing_op = flask_mem_sharing_op,
     .apic = flask_apic,
     .machine_memory_map = flask_machine_memory_map,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index 125210b..a826264 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -261,8 +261,7 @@ class hvm
 # XEN_DOMCTL_pin_mem_cacheattr
     cacheattr
 # HVMOP_get_mem_type,
-# HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying,
-# HVMOP_inject_trap
+# HVMOP_set_mem_access, HVMOP_get_mem_access, HVMOP_pagetable_dying
     hvmctl
 # XEN_DOMCTL_mem_sharing_op and XENMEM_sharing_op_{share,add_physmap} with:
 #  source = the domain making the hypercall
@@ -270,8 +269,6 @@ class hvm
     mem_sharing
 # XEN_DOMCTL_audit_p2m
     audit_p2m
-# HVMOP_inject_msi
-    send_irq
 # checked in XENMEM_sharing_op_{share,add_physmap} with:
 #  source = domain whose memory is being shared
 #  target = client domain
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v2 8/8] x86/hvm: serialize trap injecting producer and consumer
  2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
                   ` (6 preceding siblings ...)
  2016-12-06 13:46 ` [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
@ 2016-12-06 13:46 ` Paul Durrant
  7 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-06 13:46 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Paul Durrant, Jan Beulich

Since injection works on a remote vCPU, and since there's no
enforcement of the subject vCPU being paused, there's a potential race
between the producing and consuming sides. Fix this by leveraging the
vector field as a synchronization variable.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
[re-based]
Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>

v2:
- Re-re-based after Andrew's recent changes.
---
 xen/arch/x86/hvm/dm.c         | 5 ++++-
 xen/arch/x86/hvm/hvm.c        | 7 ++++---
 xen/include/asm-x86/hvm/hvm.h | 3 +++
 3 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index fa67598..bdb45c3 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -290,13 +290,16 @@ static int inject_trap(struct domain *d, unsigned int vcpuid,
     if ( vcpuid >= d->max_vcpus || !(v = d->vcpu[vcpuid]) )
         return -EINVAL;
 
-    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+    if ( cmpxchg(&v->arch.hvm_vcpu.inject_trap.vector,
+                 HVM_TRAP_VECTOR_UNSET, HVM_TRAP_VECTOR_UPDATING) !=
+         HVM_TRAP_VECTOR_UNSET )
         return -EBUSY;
 
     v->arch.hvm_vcpu.inject_trap.type = type;
     v->arch.hvm_vcpu.inject_trap.insn_len = insn_len;
     v->arch.hvm_vcpu.inject_trap.error_code = error_code;
     v->arch.hvm_vcpu.inject_trap.cr2 = cr2;
+    smp_wmb();
     v->arch.hvm_vcpu.inject_trap.vector = vector;
 
     return 0;
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index be3c133..bb1218a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -533,10 +533,11 @@ void hvm_do_resume(struct vcpu *v)
     }
 
     /* Inject pending hw/sw trap */
-    if ( v->arch.hvm_vcpu.inject_trap.vector != -1 )
+    if ( v->arch.hvm_vcpu.inject_trap.vector >= 0 )
     {
+        smp_rmb();
         hvm_inject_event(&v->arch.hvm_vcpu.inject_trap);
-        v->arch.hvm_vcpu.inject_trap.vector = -1;
+        v->arch.hvm_vcpu.inject_trap.vector = HVM_TRAP_VECTOR_UNSET;
     }
 }
 
@@ -1548,7 +1549,7 @@ int hvm_vcpu_initialise(struct vcpu *v)
         (void(*)(unsigned long))hvm_assert_evtchn_irq,
         (unsigned long)v);
 
-    v->arch.hvm_vcpu.inject_trap.vector = -1;
+    v->arch.hvm_vcpu.inject_trap.vector = HVM_TRAP_VECTOR_UNSET;
 
     if ( is_pvh_domain(d) )
     {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index b37b335..240300b 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -77,6 +77,9 @@ enum hvm_intblk {
 #define HVM_HAP_SUPERPAGE_2MB   0x00000001
 #define HVM_HAP_SUPERPAGE_1GB   0x00000002
 
+#define HVM_TRAP_VECTOR_UNSET    (-1)
+#define HVM_TRAP_VECTOR_UPDATING (-2)
+
 /*
  * The hardware virtual machine (HVM) interface abstracts away from the
  * x86/x86_64 CPU virtualization assist specifics. Currently this interface
-- 
2.1.4



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
@ 2016-12-12 13:27   ` Wei Liu
  2016-12-15 15:22   ` Jan Beulich
  1 sibling, 0 replies; 22+ messages in thread
From: Wei Liu @ 2016-12-12 13:27 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu, Andrew Cooper, Jennifer Herbert, Jan Beulich,
	Ian Jackson, xen-devel, Daniel De Graaf

On Tue, Dec 06, 2016 at 01:46:12PM +0000, Paul Durrant wrote:
> ...as a set of hypercalls to be used by a device model.
[...]
> +Introduction
> +------------
> +
> +A previous proposal for a 'DMOP' was put forward by Ian Jackson on the 1st
> +of August. This proposal seem very promising, however a problem was
> +identified with it, that this proposal addresses.
> +
> +The aim of DMOP, as before, is to prevent a compromised device model from
> +compromising domains other then the one it is associated with. (And is
> +therefore likely already compromised).
> +
> +The previous proposal adds a DMOP hypercall, for use by the device models,
> +which places the domain ID in a fixed place, within the calling args,
> +such that the priv driver can always find it, and not need to know any
> +further details about the particular DMOP in order to validate it against
> +previously set (vie ioctl) domain.
> +
> +The problem occurs when you have a DMOP with references to other user memory
> +objects, such as with Track dirty VRAM (As used with the VGA buffer).
> +Is this case, the address of this other user object needs to be vetted,
> +to ensure it is not within a restricted address ranges, such as kernel
> +memory. The real problem comes down to how you would vet this address -
> +the idea place to do this is within the privcmd driver, since it would have
> +knowledge of the address space involved. However, with a principal goal
> +of DMOP to allow privcmd to be free from any knowledge of DMOP's sub ops,
> +it would have no way to identify any user buffer  addresses that need
> +checking.  The alternative of allowing the hypervisor to vet the address
> +is also problematic, since it has no knowledge of the guest memory layout.
> +

Could you perhaps rewrite this section? The reference to "Ian Jackson on
the ..." and some other bits aren't really useful in the context of a
design doc.

The comparison between the old and new proposal isn't needed IMHO. There
has never been an old implementation / design doc in xen.git.

> +Validation by privcmd driver
> +------------------------------
> +
> +If the privcmd driver has been restricted to specific domain (using a
> + new ioctl), when it received an op, it will:
> +
> +1. Check hypercall is DMOP.
> +
> +2. Check domid == restricted domid.
> +
> +3. For each @nr_buffers in @buffers: Check @h and @len give a buffer
> +   wholey in the user space part of the virtual address space. (e.g.,
> +   on Linux use access_ok()).
> +
> +
> +Xen Implementation
> +------------------
> +
> +Since a DMOP sub of may need to copy or return a buffer from the guest,

"sub of" -> "sub op" ?

> +as well as the DMOP itself to git the initial buffer, functions for doing
> +this would be written as below.  Note that care is taken to prevent
> +damage from buffer under or over run situations.  If the DMOP is called
> +with insufficient buffers, zeros will be read, while extra is ignored.
> +
[...]
>  # make_device_model(priv, dm_dom, hvm_dom)
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index 2c83544..cc37752 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h

Is DMOP considered stable in this patch?  I guess not, given the header
is enclosed by __XEN__ and __XEN_TOOLS__.

Let's keep it in libxc for now. Eventually it will need to be moved to
tools/libs.

> @@ -41,6 +41,7 @@
>  #include <xen/sched.h>
>  #include <xen/memory.h>
>  #include <xen/grant_table.h>
> +#include <xen/hvm/dm_op.h>
>  #include <xen/hvm/params.h>
>  #include <xen/xsm/flask_op.h>
>  #include <xen/tmem.h>
> diff --git a/tools/libxc/xc_private.c b/tools/libxc/xc_private.c
> index d57c39a..8e49635 100644
> --- a/tools/libxc/xc_private.c
> +++ b/tools/libxc/xc_private.c
> @@ -776,6 +776,76 @@ int xc_ffs64(uint64_t x)
>      return l ? xc_ffs32(l) : h ? xc_ffs32(h) + 32 : 0;
>  }
>  
> +int do_dm_op(xc_interface *xch, domid_t domid, unsigned int nr_bufs, ...)
> +{
> +    int ret = -1;
> +    struct  {
> +        void *u;
> +        void *h;
> +    } *bounce;
> +    DECLARE_HYPERCALL_BUFFER(xen_dm_op_buf_t, bufs);
> +    va_list args;
> +    unsigned int idx;
> +
> +    bounce = calloc(nr_bufs, sizeof(*bounce));
> +    if ( bounce == NULL )
> +        goto fail1;
> +
> +    bufs = xc_hypercall_buffer_alloc(xch, bufs, sizeof(*bufs) * nr_bufs);
> +    if ( bufs == NULL )
> +        goto fail2;
> +
> +    va_start(args, nr_bufs);
> +    for (idx = 0; idx < nr_bufs; idx++)

for ( idx .. idx++ )

Wei.


^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-12-06 13:46 ` [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
@ 2016-12-12 13:30   ` Wei Liu
  2016-12-15 15:37   ` Jan Beulich
  1 sibling, 0 replies; 22+ messages in thread
From: Wei Liu @ 2016-12-12 13:30 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu, Andrew Cooper, Ian Jackson, Jan Beulich, xen-devel,
	Daniel De Graaf

On Tue, Dec 06, 2016 at 01:46:13PM +0000, Paul Durrant wrote:
> The definitions of HVM_IOREQSRV_BUFIOREQ_* have to persist as they are
> already in use by callers of the libxc interface.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> --
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wei.liu2@citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
> 
> v2:
> - Addressed several comments from Jan.
> ---
>  tools/libxc/xc_domain.c          | 212 ++++++++++++++++---------------------

Toolstack changes in patch 2-8 look rather mechanical to me, so for all
7:

Acked-by: Wei Liu <wei.liu2@citrix.com>

Provided HV maintainers are happy with the dmop interface.

Wei.


* Re: [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
  2016-12-12 13:27   ` Wei Liu
@ 2016-12-15 15:22   ` Jan Beulich
  2016-12-15 15:55     ` Paul Durrant
  1 sibling, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 15:22 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu, Andrew Cooper, Jennifer Herbert, Ian Jackson, xen-devel,
	Daniel De Graaf

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> ...as a set of hypercalls to be used by a device model.
> 
> As stated in the new docs/designs/dm_op.markdown:
> 
> "The aim of DMOP is to prevent a compromised device model from
> compromising domains other than the one it is associated with. (And is
> therefore likely already compromised)."
> 
> See that file for further information.
> 
> This patch simply adds the boilerplate for the hypercall.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Suggested-by: Ian Jackson <ian.jackson@citrix.com>
> Suggested-by: Jennifer Herbert <jennifer.herbert@citrix.com>

Hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit with one more question/adjustment request:

> +struct xen_dm_op_buf {
> +    XEN_GUEST_HANDLE(void) h;
> +    uint64_aligned_t size;

Does size need to be 64 bits wide? I think 32 should suffice, even if
that won't shrink structure size (because the handle really wants to
be XEN_GUEST_HANDLE_64() for a tool stack-only interface which
we don't want to have a compat wrapper for).

Jan



* Re: [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server*
  2016-12-06 13:46 ` [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
  2016-12-12 13:30   ` Wei Liu
@ 2016-12-15 15:37   ` Jan Beulich
  1 sibling, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 15:37 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
>      switch ( op.op )
>      {
> +    case XEN_DMOP_create_ioreq_server:
> +    {
> +        struct domain *curr_d = current->domain;
> +        struct xen_dm_op_create_ioreq_server *data =
> +            &op.u.create_ioreq_server;
> +
> +        rc = -EINVAL;
> +        if ( data->pad )

You only check the first of three bytes. With that taken care of,
hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-12-06 13:46 ` [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
@ 2016-12-15 15:44   ` Jan Beulich
  2016-12-15 16:23     ` Paul Durrant
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 15:44 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Wei Liu, George Dunlap, Andrew Cooper, Ian Jackson, Tim Deegan,
	xen-devel, Daniel De Graaf

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> @@ -68,6 +70,35 @@ static int copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
>      return copy_to_guest(buf.h, src, size) ? -EFAULT : 0;
>  }
>  
> +static int track_dirty_vram(struct domain *d,
> +                            unsigned int nr_bufs,
> +                            XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> +                            xen_pfn_t first_pfn, unsigned int nr)
> +{
> +    struct xen_dm_op_buf buf;
> +    int rc;
> +
> +    if ( nr > (GB(1) >> PAGE_SHIFT) )
> +        return -EINVAL;
> +
> +    if ( d->is_dying )
> +        return -ESRCH;
> +
> +    if ( !d->vcpu || !d->vcpu[0] )

It may not have been this patch, but I think we had settled on the left
side here wanting to be !d->max_vcpus for consistency with other
(modern) code.

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -68,7 +68,7 @@
>  int hap_track_dirty_vram(struct domain *d,
>                           unsigned long begin_pfn,
>                           unsigned long nr,
> -                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> +                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
>  {
>      long rc = 0;
>      struct sh_dirty_vram *dirty_vram;
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -3677,7 +3677,7 @@ static void sh_clean_dirty_bitmap(struct domain *d)
>  int shadow_track_dirty_vram(struct domain *d,
>                              unsigned long begin_pfn,
>                              unsigned long nr,
> -                            XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> +                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
>  {
>      int rc = 0;
>      unsigned long end_pfn = begin_pfn + nr;
> --- a/xen/include/asm-x86/hap.h
> +++ b/xen/include/asm-x86/hap.h
> @@ -43,7 +43,7 @@ void  hap_vcpu_init(struct vcpu *v);
>  int   hap_track_dirty_vram(struct domain *d,
>                             unsigned long begin_pfn,
>                             unsigned long nr,
> -                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> +                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
>  
>  extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
>  int hap_set_allocation(struct domain *d, unsigned int pages, bool *preempted);
> --- a/xen/include/asm-x86/shadow.h
> +++ b/xen/include/asm-x86/shadow.h
> @@ -63,7 +63,7 @@ int shadow_enable(struct domain *d, u32 mode);
>  int shadow_track_dirty_vram(struct domain *d,
>                              unsigned long first_pfn,
>                              unsigned long nr,
> -                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> +                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);

I think all of these will go away with the type change required in the
earlier patch.

With all of this taken care of, hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and...
  2016-12-06 13:46 ` [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
@ 2016-12-15 15:51   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 15:51 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -191,6 +191,49 @@ struct xen_dm_op_track_dirty_vram {
>      uint64_aligned_t first_pfn;
>  };
>  
> +/*
> + * XEN_DMOP_set_pci_intx_level: Set the logical level of one of a domain's
> + *                              PCI INTx pins.
> + */
> +#define XEN_DMOP_set_pci_intx_level 8
> +
> +struct xen_dm_op_set_pci_intx_level {
> +    /* IN - PCI INTx identification (domain:bus:device:intx) */
> +    uint16_t domain;
> +    uint8_t bus, device, intx;
> +    /* IN - Level: 0 -> deasserted, 1 -> asserted */
> +    uint8_t  level;
> +    uint16_t pad;
> +};

You don't need any padding here afaict.

> +/*
> + * XEN_DMOP_set_isa_irq_level: Set the logical level of one of a domain's
> + *                             ISA IRQ lines.
> + */
> +#define XEN_DMOP_set_isa_irq_level 9
> +
> +struct xen_dm_op_set_isa_irq_level {
> +    /* IN - ISA IRQ (0-15) */
> +    uint8_t  isa_irq;
> +    /* IN - Level: 0 -> deasserted, 1 -> asserted */
> +    uint8_t  level;
> +    uint16_t pad;
> +};

Same here.

> +/*
> + * XEN_DMOP_set_pci_link_route: Map a PCI INTx line to an IRQ line.
> + */
> +#define XEN_DMOP_set_pci_link_route 10
> +
> +struct xen_dm_op_set_pci_link_route {
> +    /* PCI INTx line (0-3) */
> +    uint8_t  link;
> +    /* ISA IRQ (1-15) or 0 -> disable link */
> +    uint8_t  isa_irq;
> +    uint16_t pad;
> +};

And here.

> +
> +
>  struct xen_dm_op {

No double blank lines please.

With all of these taken care of, hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
  2016-12-15 15:22   ` Jan Beulich
@ 2016-12-15 15:55     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-15 15:55 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Andrew Cooper, Jennifer Herbert, Ian Jackson, xen-devel,
	Daniel De Graaf

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 15 December 2016 15:23
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Jennifer Herbert <jennifer.herbert@citrix.com>;
> Wei Liu <wei.liu2@citrix.com>; xen-devel@lists.xenproject.org; Daniel De
> Graaf <dgdegra@tycho.nsa.gov>
> Subject: Re: [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op...
> 
> >>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> > ...as a set of hypercalls to be used by a device model.
> >
> > As stated in the new docs/designs/dm_op.markdown:
> >
> > "The aim of DMOP is to prevent a compromised device model from
> > compromising domains other than the one it is associated with. (And is
> > therefore likely already compromised)."
> >
> > See that file for further information.
> >
> > This patch simply adds the boilerplate for the hypercall.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Suggested-by: Ian Jackson <ian.jackson@citrix.com>
> > Suggested-by: Jennifer Herbert <jennifer.herbert@citrix.com>
> 
> Hypervisor parts
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit with one more question/adjustment request:
> 
> > +struct xen_dm_op_buf {
> > +    XEN_GUEST_HANDLE(void) h;
> > +    uint64_aligned_t size;
> 
> Does size need to be 64 bits wide? I think 32 should suffice, even if
> that won't shrink structure size (because the handle really wants to
> be XEN_GUEST_HANDLE_64() for a tool stack-only interface which
> we don't want to have a compat wrapper for).

Sure. I don't think it does need to be that wide in reality.

  Paul

> 
> Jan



* Re: [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory
  2016-12-06 13:46 ` [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
@ 2016-12-15 16:05   ` Jan Beulich
  2016-12-15 16:25     ` Paul Durrant
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 16:05 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> @@ -142,18 +143,77 @@ static int set_isa_irq_level(struct domain *d, uint8_t isa_irq,
>      return 0;
>  }
>  
> +static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
> +                           unsigned int *nr)
> +{
> +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> +    unsigned int iter;
> +    int rc;
> +
> +    if ( (*first_pfn > last_pfn) ||
> +         (last_pfn > domain_get_maximum_gpfn(d)) )
> +        return -EINVAL;
> +
> +    if ( !paging_mode_log_dirty(d) )
> +        return 0;
> +
> +    iter = 0;
> +    rc = 0;
> +    while ( iter < *nr )
> +    {
> +        unsigned long pfn = *first_pfn + iter;
> +        struct page_info *page;
> +
> +        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
> +        if ( page )
> +        {
> +            paging_mark_dirty(d, page_to_mfn(page));

I guess this will need re-basing over Andrew's about to be committed
(I think) parameter type change of this function. And you probably
want to latch the MFN into a local variable instead of doing the
translation (which is not as cheap as we'd like it to be) twice.

>  long do_dm_op(domid_t domid,
>                unsigned int nr_bufs,
>                XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
>  {
>      struct domain *d;
>      struct xen_dm_op op;
> +    bool restart;
>      long rc;
>  
>      rc = rcu_lock_remote_domain_by_id(domid, &d);
>      if ( rc )
>          return rc;
>  
> +    restart = false;

Please make this the initializer of the variable.

> --- a/xen/include/public/hvm/dm_op.h
> +++ b/xen/include/public/hvm/dm_op.h
> @@ -233,6 +233,24 @@ struct xen_dm_op_set_pci_link_route {
>      uint16_t pad;
>  };
>  
> +/*
> + * XEN_DMOP_modified_memory: Notify that a set of pages were modified by
> + *                           an emulator.
> + *
> + * NOTE: In the event of a continuation (return code -ERESTART), the
> + *       @first_pfn is set to the value of the pfn of the remaining
> + *       set of pages and @nr reduced to the size of the remaining set.
> + */

There's no point in mentioning -ERESTART here, as that will never be
seen by the caller.

Jan



* Re: [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type
  2016-12-06 13:46 ` [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
@ 2016-12-15 16:11   ` Jan Beulich
  0 siblings, 0 replies; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 16:11 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/dm.c
> +++ b/xen/arch/x86/hvm/dm.c
> @@ -199,6 +199,87 @@ static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
>      return rc;
>  }
>  
> +static bool allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
> +{
> +    return ( p2m_is_ram(old) ||
> +             (p2m_is_hole(old) && new == p2m_mmio_dm) ||
> +             (old == p2m_ioreq_server && new == p2m_ram_rw) );

Pointless outermost parentheses; at the very least the blanks
immediately inside them need to go away.

> +static int set_mem_type(struct domain *d, hvmmem_type_t mem_type,
> +                        xen_pfn_t *first_pfn, unsigned int *nr)
> +{
> +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> +    unsigned int iter;
> +    int rc;
> +
> +    /* Interface types to internal p2m types */
> +    static const p2m_type_t memtype[] = {
> +        [HVMMEM_ram_rw]  = p2m_ram_rw,
> +        [HVMMEM_ram_ro]  = p2m_ram_ro,
> +        [HVMMEM_mmio_dm] = p2m_mmio_dm,
> +        [HVMMEM_unused] = p2m_invalid,
> +        [HVMMEM_ioreq_server] = p2m_ioreq_server
> +    };
> +
> +    if ( (*first_pfn > last_pfn) ||
> +         (last_pfn > domain_get_maximum_gpfn(d)) )
> +        return -EINVAL;
> +
> +    if ( mem_type >= ARRAY_SIZE(memtype) ||
> +         unlikely(mem_type == HVMMEM_unused) )
> +        return -EINVAL;
> +
> +    iter = 0;
> +    rc = 0;
> +    while ( iter < *nr )
> +    {
> +        unsigned long pfn = *first_pfn + iter;
> +        p2m_type_t t;
> +
> +        get_gfn_unshare(d, pfn, &t);
> +        if ( p2m_is_paging(t) )
> +        {
> +            put_gfn(d, pfn);
> +            p2m_mem_paging_populate(d, pfn);
> +            return -EAGAIN;
> +        }
> +
> +        if ( p2m_is_shared(t) )
> +            rc = -EAGAIN;
> +        else if ( !allow_p2m_type_change(t, memtype[mem_type]) )
> +            rc = -EINVAL;
> +        else
> +            rc = p2m_change_type_one(d, pfn, t, memtype[mem_type]);
> +
> +        put_gfn(d, pfn);
> +
> +        if ( rc )
> +            break;
> +
> +        iter++;
> +
> +        /*
> +         * Check for continuation every 256th iteration and if the
> +         * iteration is not the last.
> +         */
> +        if ( (iter < *nr) && ((iter & 0xff) == 0) &&
> +             hypercall_preempt_check() )
> +        {
> +            rc = -ERESTART;
> +            break;
> +        }
> +    }
> +
> +    if ( rc == -ERESTART )
> +    {
> +        *first_pfn += iter;
> +        *nr -= iter;
> +    }

I may have overlooked this in the previous patch - the body of this
if() could easily move into the one setting rc to -ERESTART,
eliminating the extra conditional.

Jan



* Re: [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-12-06 13:46 ` [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
@ 2016-12-15 16:23   ` Jan Beulich
  2016-12-15 16:32     ` Paul Durrant
  0 siblings, 1 reply; 22+ messages in thread
From: Jan Beulich @ 2016-12-15 16:23 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, Ian Jackson, xen-devel

>>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> +struct xen_dm_op_inject_trap {
> +    /* IN - index of vCPU */
> +    uint32_t vcpuid;
> +    /* IN - interrupt vector */
> +    uint8_t vector;
> +    /* IN - trap type (DMOP_TRAP_* ) */
> +    uint8_t type;
> +/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
> +# define DMOP_TRAP_ext_int    0 /* external interrupt */
> +# define DMOP_TRAP_nmi        2 /* nmi */
> +# define DMOP_TRAP_hw_exc     3 /* hardware exception */
> +# define DMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
> +# define DMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
> +# define DMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */

XEN_ prefixes missing. (Did I overlook any in earlier patches? Please
double check.)

> +    /* IN - instruction length */
> +    uint8_t insn_len;
> +    uint8_t pad;
> +    /* IN - error code (or ~0 to skip) */
> +    uint32_t error_code;
> +    /* IN - CR2 for page faults */
> +    uint64_aligned_t cr2;

Another 32-bit padding field is needed ahead of this one.

With both taken care of, hypervisor parts
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram
  2016-12-15 15:44   ` Jan Beulich
@ 2016-12-15 16:23     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-15 16:23 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, Ian Jackson, xen-devel, Daniel De Graaf

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 15 December 2016 15:45
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Wei Liu
> <wei.liu2@citrix.com>; George Dunlap <George.Dunlap@citrix.com>; Ian
> Jackson <Ian.Jackson@citrix.com>; xen-devel@lists.xenproject.org; Daniel
> De Graaf <dgdegra@tycho.nsa.gov>; Tim (Xen.org) <tim@xen.org>
> Subject: Re: [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram
> 
> >>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> > @@ -68,6 +70,35 @@ static int
> copy_buf_to_guest(XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t)
> bufs,
> >      return copy_to_guest(buf.h, src, size) ? -EFAULT : 0;
> >  }
> >
> > +static int track_dirty_vram(struct domain *d,
> > +                            unsigned int nr_bufs,
> > +                            XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs,
> > +                            xen_pfn_t first_pfn, unsigned int nr)
> > +{
> > +    struct xen_dm_op_buf buf;
> > +    int rc;
> > +
> > +    if ( nr > (GB(1) >> PAGE_SHIFT) )
> > +        return -EINVAL;
> > +
> > +    if ( d->is_dying )
> > +        return -ESRCH;
> > +
> > +    if ( !d->vcpu || !d->vcpu[0] )
> 
> It may not have been this patch, but I think we had settled on the left
> side here wanting to be !d->max_vcpus for consistency with other
> (modern) code.

Oh sorry, I must have missed that.

> 
> > --- a/xen/arch/x86/mm/hap/hap.c
> > +++ b/xen/arch/x86/mm/hap/hap.c
> > @@ -68,7 +68,7 @@
> >  int hap_track_dirty_vram(struct domain *d,
> >                           unsigned long begin_pfn,
> >                           unsigned long nr,
> > -                         XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> > +                         XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
> >  {
> >      long rc = 0;
> >      struct sh_dirty_vram *dirty_vram;
> > --- a/xen/arch/x86/mm/shadow/common.c
> > +++ b/xen/arch/x86/mm/shadow/common.c
> > @@ -3677,7 +3677,7 @@ static void sh_clean_dirty_bitmap(struct domain
> *d)
> >  int shadow_track_dirty_vram(struct domain *d,
> >                              unsigned long begin_pfn,
> >                              unsigned long nr,
> > -                            XEN_GUEST_HANDLE_64(uint8) guest_dirty_bitmap)
> > +                            XEN_GUEST_HANDLE_PARAM(void) guest_dirty_bitmap)
> >  {
> >      int rc = 0;
> >      unsigned long end_pfn = begin_pfn + nr;
> > --- a/xen/include/asm-x86/hap.h
> > +++ b/xen/include/asm-x86/hap.h
> > @@ -43,7 +43,7 @@ void  hap_vcpu_init(struct vcpu *v);
> >  int   hap_track_dirty_vram(struct domain *d,
> >                             unsigned long begin_pfn,
> >                             unsigned long nr,
> > -                           XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> > +                           XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
> >
> >  extern const struct paging_mode *hap_paging_get_mode(struct vcpu *);
> >  int hap_set_allocation(struct domain *d, unsigned int pages, bool
> *preempted);
> > --- a/xen/include/asm-x86/shadow.h
> > +++ b/xen/include/asm-x86/shadow.h
> > @@ -63,7 +63,7 @@ int shadow_enable(struct domain *d, u32 mode);
> >  int shadow_track_dirty_vram(struct domain *d,
> >                              unsigned long first_pfn,
> >                              unsigned long nr,
> > -                            XEN_GUEST_HANDLE_64(uint8) dirty_bitmap);
> > +                            XEN_GUEST_HANDLE_PARAM(void) dirty_bitmap);
> 
> I think all of these will go away with the type change required in the
> earlier patch.
> 

Yes, they probably will.

> With all of this taken care of, hypervisor parts
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Thanks,

  Paul

> Jan



* Re: [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory
  2016-12-15 16:05   ` Jan Beulich
@ 2016-12-15 16:25     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-15 16:25 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan
> Beulich
> Sent: 15 December 2016 16:06
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH v2 5/8] dm_op: convert
> HVMOP_modified_memory
> 
> >>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> > @@ -142,18 +143,77 @@ static int set_isa_irq_level(struct domain *d,
> uint8_t isa_irq,
> >      return 0;
> >  }
> >
> > +static int modified_memory(struct domain *d, xen_pfn_t *first_pfn,
> > +                           unsigned int *nr)
> > +{
> > +    xen_pfn_t last_pfn = *first_pfn + *nr - 1;
> > +    unsigned int iter;
> > +    int rc;
> > +
> > +    if ( (*first_pfn > last_pfn) ||
> > +         (last_pfn > domain_get_maximum_gpfn(d)) )
> > +        return -EINVAL;
> > +
> > +    if ( !paging_mode_log_dirty(d) )
> > +        return 0;
> > +
> > +    iter = 0;
> > +    rc = 0;
> > +    while ( iter < *nr )
> > +    {
> > +        unsigned long pfn = *first_pfn + iter;
> > +        struct page_info *page;
> > +
> > +        page = get_page_from_gfn(d, pfn, NULL, P2M_UNSHARE);
> > +        if ( page )
> > +        {
> > +            paging_mark_dirty(d, page_to_mfn(page));
> 
> I guess this will need re-basing over Andrew's about to be committed
> (I think) parameter type change of this function. And you probably
> want to latch the MFN into a local variable instead of doing the
> translation (which is not as cheap as we'd like it to be) twice.
> 

Sure. I probably won't get time to post v3 until the new year, so hopefully Andrew's changes will be there by then :-)

> >  long do_dm_op(domid_t domid,
> >                unsigned int nr_bufs,
> >                XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs)
> >  {
> >      struct domain *d;
> >      struct xen_dm_op op;
> > +    bool restart;
> >      long rc;
> >
> >      rc = rcu_lock_remote_domain_by_id(domid, &d);
> >      if ( rc )
> >          return rc;
> >
> > +    restart = false;
> 
> Please make this the initializer of the variable.
> 

Ok.

> > --- a/xen/include/public/hvm/dm_op.h
> > +++ b/xen/include/public/hvm/dm_op.h
> > @@ -233,6 +233,24 @@ struct xen_dm_op_set_pci_link_route {
> >      uint16_t pad;
> >  };
> >
> > +/*
> > + * XEN_DMOP_modified_memory: Notify that a set of pages were
> modified by
> > + *                           an emulator.
> > + *
> > + * NOTE: In the event of a continuation (return code -ERESTART), the
> > + *       @first_pfn is set to the value of the pfn of the remaining
> > + *       set of pages and @nr reduced to the size of the remaining set.
> > + */
> 
> There's no point in mentioning -ERESTART here, as that will never be
> seen by the caller.
> 

Ok.

  Paul

> Jan
> 
> 

* Re: [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi
  2016-12-15 16:23   ` Jan Beulich
@ 2016-12-15 16:32     ` Paul Durrant
  0 siblings, 0 replies; 22+ messages in thread
From: Paul Durrant @ 2016-12-15 16:32 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Andrew Cooper, Daniel De Graaf, Wei Liu, xen-devel, Ian Jackson

> -----Original Message-----
> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of Jan
> Beulich
> Sent: 15 December 2016 16:23
> To: Paul Durrant <Paul.Durrant@citrix.com>
> Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; Wei Liu <wei.liu2@citrix.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; xen-devel@lists.xenproject.org
> Subject: Re: [Xen-devel] [PATCH v2 7/8] dm_op: convert
> HVMOP_inject_trap and HVMOP_inject_msi
> 
> >>> On 06.12.16 at 14:46, <paul.durrant@citrix.com> wrote:
> > +struct xen_dm_op_inject_trap {
> > +    /* IN - index of vCPU */
> > +    uint32_t vcpuid;
> > +    /* IN - interrupt vector */
> > +    uint8_t vector;
> > +    /* IN - trap type (DMOP_TRAP_* ) */
> > +    uint8_t type;
> > +/* NB. This enumeration precisely matches hvm.h:X86_EVENTTYPE_* */
> > +# define DMOP_TRAP_ext_int    0 /* external interrupt */
> > +# define DMOP_TRAP_nmi        2 /* nmi */
> > +# define DMOP_TRAP_hw_exc     3 /* hardware exception */
> > +# define DMOP_TRAP_sw_int     4 /* software interrupt (CD nn) */
> > +# define DMOP_TRAP_pri_sw_exc 5 /* ICEBP (F1) */
> > +# define DMOP_TRAP_sw_exc     6 /* INT3 (CC), INTO (CE) */
> 
> XEN_ prefixes missing. (Did I overlook any in earlier patches? Please
> double check.)
> 

I should have noticed anyway... I'll fix them up.

> > +    /* IN - instruction length */
> > +    uint8_t insn_len;
> > +    uint8_t pad;
> > +    /* IN - error code (or ~0 to skip) */
> > +    uint32_t error_code;
> > +    /* IN - CR2 for page faults */
> > +    uint64_aligned_t cr2;
> 
> Another 32-bit padding field is needed ahead of this one.

So, it is... I obviously can't count.

> 
> With both taken care of, hypervisor parts
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 

Thanks,

  Paul

> Jan
> 
> 

end of thread, other threads:[~2016-12-15 16:32 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-06 13:46 [PATCH v2 0/8] New hypercall for device models Paul Durrant
2016-12-06 13:46 ` [PATCH v2 1/8] public / x86: Introduce __HYPERCALL_dm_op Paul Durrant
2016-12-12 13:27   ` Wei Liu
2016-12-15 15:22   ` Jan Beulich
2016-12-15 15:55     ` Paul Durrant
2016-12-06 13:46 ` [PATCH v2 2/8] dm_op: convert HVMOP_*ioreq_server* Paul Durrant
2016-12-12 13:30   ` Wei Liu
2016-12-15 15:37   ` Jan Beulich
2016-12-06 13:46 ` [PATCH v2 3/8] dm_op: convert HVMOP_track_dirty_vram Paul Durrant
2016-12-15 15:44   ` Jan Beulich
2016-12-15 16:23     ` Paul Durrant
2016-12-06 13:46 ` [PATCH v2 4/8] dm_op: convert HVMOP_set_pci_intx_level, HVMOP_set_isa_irq_level, and Paul Durrant
2016-12-15 15:51   ` Jan Beulich
2016-12-06 13:46 ` [PATCH v2 5/8] dm_op: convert HVMOP_modified_memory Paul Durrant
2016-12-15 16:05   ` Jan Beulich
2016-12-15 16:25     ` Paul Durrant
2016-12-06 13:46 ` [PATCH v2 6/8] dm_op: convert HVMOP_set_mem_type Paul Durrant
2016-12-15 16:11   ` Jan Beulich
2016-12-06 13:46 ` [PATCH v2 7/8] dm_op: convert HVMOP_inject_trap and HVMOP_inject_msi Paul Durrant
2016-12-15 16:23   ` Jan Beulich
2016-12-15 16:32     ` Paul Durrant
2016-12-06 13:46 ` [PATCH v2 8/8] x86/hvm: serialize trap injecting producer and consumer Paul Durrant
