xen-devel.lists.xenproject.org archive mirror
* [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type.
@ 2016-04-25 10:35 Yu Zhang
  2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
                   ` (2 more replies)
  0 siblings, 3 replies; 35+ messages in thread
From: Yu Zhang @ 2016-04-25 10:35 UTC (permalink / raw)
  To: xen-devel; +Cc: Paul.Durrant, wei.liu2, zhiyuan.lv

XenGT leverages the ioreq server to track and forward accesses to GPU
I/O resources, e.g. the PPGTT (per-process graphic translation tables).
Currently, the ioreq server uses a rangeset to track the BDF/PIO/MMIO ranges
to be emulated. To select an ioreq server, the rangeset is searched to
see if the I/O range is recorded. However, the number of ram pages to be
tracked may exceed the upper limit of the rangeset.

Previously, one solution was proposed to refactor the rangeset and
extend its upper limit. However, after 12 rounds of discussion, we have
decided to drop this approach due to security concerns. This new
patch series instead introduces a new mem type, HVMMEM_ioreq_server, and
adds hvm operations to let one ioreq server claim ownership of ram
pages with this type. Accesses to a page of this type will be handled
by the specified ioreq server directly.
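
As a rough illustration of the intended usage, the sketch below shows how
a device model might first claim the new memory type for its ioreq server
and then tag guest ram pages with it. issue_hvmop() is a hypothetical
stand-in for whatever mechanism the device model uses to issue HVMOPs
(e.g. via libxenctrl); only the structures, constants and field names come
from the changes this series makes to xen/include/public/hvm/hvm_op.h.

/* Hedged sketch only: issue_hvmop() is hypothetical; the structures and
 * constants are from hvm_op.h as modified by this series. */
static int claim_and_tag(domid_t domid, ioservid_t id,
                         uint64_t first_pfn, uint32_t nr)
{
    struct xen_hvm_map_mem_type_to_ioreq_server map = {
        .domid = domid,
        .id    = id,
        .type  = HVMMEM_ioreq_server,
        .flags = HVMOP_IOREQ_MEM_ACCESS_WRITE,  /* forward writes to us */
    };
    struct xen_hvm_set_mem_type set = {
        .domid       = domid,
        .hvmmem_type = HVMMEM_ioreq_server,
        .first_pfn   = first_pfn,
        .nr          = nr,
    };
    int rc;

    /* Claim ownership of the p2m_ioreq_server type first ... */
    rc = issue_hvmop(HVMOP_map_mem_type_to_ioreq_server, &map);
    if ( rc )
        return rc;

    /* ... then write-protect the pages by switching their type. */
    return issue_hvmop(HVMOP_set_mem_type, &set);
}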

Yu Zhang (3):
  x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to
    p2m_ioreq_server.
  x86/ioreq server: Add new functions to get/set memory types.
  x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to
    an ioreq server.

 xen/arch/x86/hvm/emulate.c       |  32 +++-
 xen/arch/x86/hvm/hvm.c           | 347 +++++++++++++++++++++++++--------------
 xen/arch/x86/hvm/ioreq.c         |  41 +++++
 xen/arch/x86/mm/hap/nested_hap.c |   2 +-
 xen/arch/x86/mm/p2m-ept.c        |   7 +-
 xen/arch/x86/mm/p2m-pt.c         |  23 ++-
 xen/arch/x86/mm/p2m.c            |  70 ++++++++
 xen/arch/x86/mm/shadow/multi.c   |   3 +-
 xen/include/asm-x86/hvm/ioreq.h  |   2 +
 xen/include/asm-x86/p2m.h        |  32 +++-
 xen/include/public/hvm/hvm_op.h  |  39 ++++-
 11 files changed, 454 insertions(+), 144 deletions(-)

-- 
1.9.1



* [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 10:35 [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
@ 2016-04-25 10:35 ` Yu Zhang
  2016-04-25 12:12   ` Jan Beulich
  2016-04-25 13:39   ` George Dunlap
  2016-04-25 10:35 ` [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
  2016-04-25 10:35 ` [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
  2 siblings, 2 replies; 35+ messages in thread
From: Yu Zhang @ 2016-04-25 10:35 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Paul Durrant, zhiyuan.lv, Jan Beulich,
	wei.liu2

Previously p2m type p2m_mmio_write_dm was introduced for write-
protected memory pages whose write operations are supposed to be
forwarded to and emulated by an ioreq server. Yet limitations of
rangeset restrict the number of guest pages to be write-protected.

This patch replaces the p2m type p2m_mmio_write_dm with a new name:
p2m_ioreq_server, which means this p2m type can be claimed by one
ioreq server, instead of being tracked inside the rangeset of ioreq
server. Patches following up will add the related hvmop handling
code which map/unmap type p2m_ioreq_server to/from an ioreq server.

changes in v3:
  - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
    for old xen interface versions, and replace it with HVMMEM_unused
    for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
    enum - HVMMEM_ioreq_server is introduced for the get/set mem type
    interfaces;
  - Add George's Reviewed-by and Acked-by from Tim & Andrew.

changes in v2:
  - According to George Dunlap's comments, only rename the p2m type,
    with no behavior changes.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Acked-by: Tim Deegan <tim@xen.org>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
 xen/arch/x86/mm/p2m-ept.c       |  2 +-
 xen/arch/x86/mm/p2m-pt.c        |  2 +-
 xen/arch/x86/mm/shadow/multi.c  |  2 +-
 xen/include/asm-x86/p2m.h       |  4 ++--
 xen/include/public/hvm/hvm_op.h |  8 +++++++-
 6 files changed, 20 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index f24126d..874cb0f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      */
     if ( (p2mt == p2m_mmio_dm) || 
          (npfec.write_access &&
-          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
+          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
     {
         __put_gfn(p2m, gfn);
         if ( ap2m_active )
@@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             get_gfn_query_unlocked(d, a.pfn, &t);
             if ( p2m_is_mmio(t) )
                 a.mem_type =  HVMMEM_mmio_dm;
-            else if ( t == p2m_mmio_write_dm )
-                a.mem_type = HVMMEM_mmio_write_dm;
+            else if ( t == p2m_ioreq_server )
+                a.mem_type = HVMMEM_ioreq_server;
             else if ( p2m_is_readonly(t) )
                 a.mem_type =  HVMMEM_ram_ro;
             else if ( p2m_is_ram(t) )
@@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             [HVMMEM_ram_rw]  = p2m_ram_rw,
             [HVMMEM_ram_ro]  = p2m_ram_ro,
             [HVMMEM_mmio_dm] = p2m_mmio_dm,
-            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
+            [HVMMEM_unused] = p2m_invalid,
+            [HVMMEM_ioreq_server] = p2m_ioreq_server
         };
 
         if ( copy_from_guest(&a, arg, 1) )
@@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
              ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
             goto setmemtype_fail;
             
-        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
+        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
+             unlikely(a.hvmmem_type == HVMMEM_unused) )
             goto setmemtype_fail;
 
         while ( a.nr > start_iter )
@@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             }
             if ( !p2m_is_ram(t) &&
                  (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
-                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
+                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
             {
                 put_gfn(d, pfn);
                 goto setmemtype_fail;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 3cb6868..380ec25 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_mmio_write_dm:
+        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 3d80612..eabd2e3 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_mmio_write_dm:
+    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
     case p2m_ram_ro:
     case p2m_ram_logdirty:
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index e5c8499..c81302a 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
 
     /* Need to hand off device-model MMIO to the device model */
     if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
+         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 5392eb0..ee2ea9c 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -71,7 +71,7 @@ typedef enum {
     p2m_ram_shared = 12,          /* Shared or sharable memory */
     p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
     p2m_map_foreign  = 14,        /* ram pages from foreign domain */
-    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
+    p2m_ioreq_server = 15,
 } p2m_type_t;
 
 /* Modifiers to the query */
@@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
                       | p2m_to_mask(p2m_ram_ro)         \
                       | p2m_to_mask(p2m_grant_map_ro)   \
                       | p2m_to_mask(p2m_ram_shared)     \
-                      | p2m_to_mask(p2m_mmio_write_dm))
+                      | p2m_to_mask(p2m_ioreq_server))
 
 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index 1606185..b3e45cf 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -83,7 +83,13 @@ typedef enum {
     HVMMEM_ram_rw,             /* Normal read/write guest RAM */
     HVMMEM_ram_ro,             /* Read-only; writes are discarded */
     HVMMEM_mmio_dm,            /* Reads and write go to the device model */
-    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
+#if __XEN_INTERFACE_VERSION__ < 0x00040700
+    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
+#else
+    HVMMEM_unused,             /* Placeholder; setting memory to this type
+                                  will fail for code after 4.7.0 */
+#endif
+    HVMMEM_ioreq_server
 } hvmmem_type_t;
 
 /* Following tools-only interfaces may change in future. */
-- 
1.9.1



* [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types.
  2016-04-25 10:35 [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
  2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
@ 2016-04-25 10:35 ` Yu Zhang
  2016-04-26 10:53   ` Wei Liu
  2016-04-25 10:35 ` [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
  2 siblings, 1 reply; 35+ messages in thread
From: Yu Zhang @ 2016-04-25 10:35 UTC (permalink / raw)
  To: xen-devel
  Cc: Keir Fraser, Andrew Cooper, Paul Durrant, zhiyuan.lv,
	Jan Beulich, wei.liu2

For clarity, this patch moves the code that sets/gets memory types out
of do_hvm_op() into dedicated functions: hvmop_set/get_mem_type().
The checks for whether a memory type change is allowed are likewise
broken out into a separate function called by hvmop_set_mem_type().

There is no intentional functional change in this patch.
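
As an aside, the "set rc / do something / goto out" error-handling pattern
that the new helpers follow (see also the v2 notes below) looks roughly
like this; example_op() and do_the_work() are illustrative names only, not
functions from this patch:

static int example_op(struct domain *d)
{
    int rc;

    rc = -EINVAL;
    if ( !is_hvm_domain(d) )     /* set rc, then bail out on failure */
        goto out;

    rc = do_the_work(d);         /* hypothetical operation */
    if ( rc )
        goto out;

    rc = 0;                      /* success */

 out:
    /* single exit point; cleanup such as rcu_unlock_domain() goes here */
    return rc;
}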

changes in v3:
  - Add Andrew's Acked-by and George's Reviewed-by.

changes in v2:
  - According to George Dunlap's comments, follow the "set rc /
    do something / goto out" pattern in hvmop_get_mem_type().

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/hvm.c | 288 +++++++++++++++++++++++++++----------------------
 1 file changed, 161 insertions(+), 127 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 874cb0f..607546c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5279,6 +5279,61 @@ static int do_altp2m_op(
     return rc;
 }
 
+static int hvmop_get_mem_type(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_get_mem_type_t) arg)
+{
+    struct xen_hvm_get_mem_type a;
+    struct domain *d;
+    p2m_type_t t;
+    int rc;
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    d = rcu_lock_domain_by_any_id(a.domid);
+    if ( d == NULL )
+        return -ESRCH;
+
+    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_mem_type);
+    if ( rc )
+        goto out;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    /*
+     * Use get_gfn query as we are interested in the current
+     * type, not in allocating or unsharing. That'll happen
+     * on access.
+     */
+    get_gfn_query_unlocked(d, a.pfn, &t);
+    if ( p2m_is_mmio(t) )
+        a.mem_type =  HVMMEM_mmio_dm;
+    else if ( t == p2m_ioreq_server )
+        a.mem_type = HVMMEM_ioreq_server;
+    else if ( p2m_is_readonly(t) )
+        a.mem_type =  HVMMEM_ram_ro;
+    else if ( p2m_is_ram(t) )
+        a.mem_type =  HVMMEM_ram_rw;
+    else if ( p2m_is_pod(t) )
+        a.mem_type =  HVMMEM_ram_rw;
+    else if ( p2m_is_grant(t) )
+        a.mem_type =  HVMMEM_ram_rw;
+    else
+        a.mem_type =  HVMMEM_mmio_dm;
+
+    rc = -EFAULT;
+    if ( __copy_to_guest(arg, &a, 1) )
+        goto out;
+    rc = 0;
+
+ out:
+    rcu_unlock_domain(d);
+
+    return rc;
+}
+
 /*
  * Note that this value is effectively part of the ABI, even if we don't need
  * to make it a formal part of it: A guest suspended for migration in the
@@ -5287,6 +5342,107 @@ static int do_altp2m_op(
  */
 #define HVMOP_op_mask 0xff
 
+static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
+{
+    if ( p2m_is_ram(old) ||
+         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
+         (old == p2m_ioreq_server && new == p2m_ram_rw) )
+        return 1;
+
+    return 0;
+}
+
+static int hvmop_set_mem_type(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_set_mem_type_t) arg,
+    unsigned long *iter)
+{
+    unsigned long start_iter = *iter;
+    struct xen_hvm_set_mem_type a;
+    struct domain *d;
+    int rc;
+
+    /* Interface types to internal p2m types */
+    static const p2m_type_t memtype[] = {
+        [HVMMEM_ram_rw]  = p2m_ram_rw,
+        [HVMMEM_ram_ro]  = p2m_ram_ro,
+        [HVMMEM_mmio_dm] = p2m_mmio_dm,
+        [HVMMEM_unused] = p2m_invalid,
+        [HVMMEM_ioreq_server] = p2m_ioreq_server
+    };
+
+    if ( copy_from_guest(&a, arg, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(a.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
+    if ( rc )
+        goto out;
+
+    rc = -EINVAL;
+    if ( a.nr < start_iter ||
+         ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
+         ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
+        goto out;
+
+    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
+         unlikely(a.hvmmem_type == HVMMEM_unused) )
+        goto out;
+
+    while ( a.nr > start_iter )
+    {
+        unsigned long pfn = a.first_pfn + start_iter;
+        p2m_type_t t;
+
+        get_gfn_unshare(d, pfn, &t);
+        if ( p2m_is_paging(t) )
+        {
+            put_gfn(d, pfn);
+            p2m_mem_paging_populate(d, pfn);
+            rc = -EAGAIN;
+            goto out;
+        }
+        if ( p2m_is_shared(t) )
+        {
+            put_gfn(d, pfn);
+            rc = -EAGAIN;
+            goto out;
+        }
+        if ( !hvm_allow_p2m_type_change(t, memtype[a.hvmmem_type]) )
+        {
+            put_gfn(d, pfn);
+            goto out;
+        }
+
+        rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
+        put_gfn(d, pfn);
+
+        if ( rc )
+            goto out;
+
+        /* Check for continuation if it's not the last iteration */
+        if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
+             hypercall_preempt_check() )
+        {
+            rc = -ERESTART;
+            goto out;
+        }
+    }
+    rc = 0;
+
+ out:
+    rcu_unlock_domain(d);
+    *iter = start_iter;
+
+    return rc;
+}
+
 long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     unsigned long start_iter, mask;
@@ -5476,137 +5632,15 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
     }
 
     case HVMOP_get_mem_type:
-    {
-        struct xen_hvm_get_mem_type a;
-        struct domain *d;
-        p2m_type_t t;
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        d = rcu_lock_domain_by_any_id(a.domid);
-        if ( d == NULL )
-            return -ESRCH;
-
-        rc = xsm_hvm_param(XSM_TARGET, d, op);
-        if ( unlikely(rc) )
-            /* nothing */;
-        else if ( likely(is_hvm_domain(d)) )
-        {
-            /* Use get_gfn query as we are interested in the current 
-             * type, not in allocating or unsharing. That'll happen 
-             * on access. */
-            get_gfn_query_unlocked(d, a.pfn, &t);
-            if ( p2m_is_mmio(t) )
-                a.mem_type =  HVMMEM_mmio_dm;
-            else if ( t == p2m_ioreq_server )
-                a.mem_type = HVMMEM_ioreq_server;
-            else if ( p2m_is_readonly(t) )
-                a.mem_type =  HVMMEM_ram_ro;
-            else if ( p2m_is_ram(t) )
-                a.mem_type =  HVMMEM_ram_rw;
-            else if ( p2m_is_pod(t) )
-                a.mem_type =  HVMMEM_ram_rw;
-            else if ( p2m_is_grant(t) )
-                a.mem_type =  HVMMEM_ram_rw;
-            else
-                a.mem_type =  HVMMEM_mmio_dm;
-            if ( __copy_to_guest(arg, &a, 1) )
-                rc = -EFAULT;
-        }
-        else
-            rc = -EINVAL;
-
-        rcu_unlock_domain(d);
+        rc = hvmop_get_mem_type(
+            guest_handle_cast(arg, xen_hvm_get_mem_type_t));
         break;
-    }
 
     case HVMOP_set_mem_type:
-    {
-        struct xen_hvm_set_mem_type a;
-        struct domain *d;
-        
-        /* Interface types to internal p2m types */
-        static const p2m_type_t memtype[] = {
-            [HVMMEM_ram_rw]  = p2m_ram_rw,
-            [HVMMEM_ram_ro]  = p2m_ram_ro,
-            [HVMMEM_mmio_dm] = p2m_mmio_dm,
-            [HVMMEM_unused] = p2m_invalid,
-            [HVMMEM_ioreq_server] = p2m_ioreq_server
-        };
-
-        if ( copy_from_guest(&a, arg, 1) )
-            return -EFAULT;
-
-        rc = rcu_lock_remote_domain_by_id(a.domid, &d);
-        if ( rc != 0 )
-            return rc;
-
-        rc = -EINVAL;
-        if ( !is_hvm_domain(d) )
-            goto setmemtype_fail;
-
-        rc = xsm_hvm_control(XSM_DM_PRIV, d, op);
-        if ( rc )
-            goto setmemtype_fail;
-
-        rc = -EINVAL;
-        if ( a.nr < start_iter ||
-             ((a.first_pfn + a.nr - 1) < a.first_pfn) ||
-             ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
-            goto setmemtype_fail;
-            
-        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
-             unlikely(a.hvmmem_type == HVMMEM_unused) )
-            goto setmemtype_fail;
-
-        while ( a.nr > start_iter )
-        {
-            unsigned long pfn = a.first_pfn + start_iter;
-            p2m_type_t t;
-
-            get_gfn_unshare(d, pfn, &t);
-            if ( p2m_is_paging(t) )
-            {
-                put_gfn(d, pfn);
-                p2m_mem_paging_populate(d, pfn);
-                rc = -EAGAIN;
-                goto setmemtype_fail;
-            }
-            if ( p2m_is_shared(t) )
-            {
-                put_gfn(d, pfn);
-                rc = -EAGAIN;
-                goto setmemtype_fail;
-            }
-            if ( !p2m_is_ram(t) &&
-                 (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
-                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
-            {
-                put_gfn(d, pfn);
-                goto setmemtype_fail;
-            }
-
-            rc = p2m_change_type_one(d, pfn, t, memtype[a.hvmmem_type]);
-            put_gfn(d, pfn);
-            if ( rc )
-                goto setmemtype_fail;
-
-            /* Check for continuation if it's not the last interation */
-            if ( a.nr > ++start_iter && !(start_iter & HVMOP_op_mask) &&
-                 hypercall_preempt_check() )
-            {
-                rc = -ERESTART;
-                goto setmemtype_fail;
-            }
-        }
-
-        rc = 0;
-
-    setmemtype_fail:
-        rcu_unlock_domain(d);
+        rc = hvmop_set_mem_type(
+            guest_handle_cast(arg, xen_hvm_set_mem_type_t),
+            &start_iter);
         break;
-    }
 
     case HVMOP_pagetable_dying:
     {
-- 
1.9.1



* [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
  2016-04-25 10:35 [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
  2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
  2016-04-25 10:35 ` [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
@ 2016-04-25 10:35 ` Yu Zhang
  2016-04-25 12:36   ` Paul Durrant
  2 siblings, 1 reply; 35+ messages in thread
From: Yu Zhang @ 2016-04-25 10:35 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Keir Fraser, Jun Nakajima, George Dunlap,
	Andrew Cooper, Tim Deegan, Paul Durrant, zhiyuan.lv, Jan Beulich,
	wei.liu2

A new HVMOP, HVMOP_map_mem_type_to_ioreq_server, is added to
let one ioreq server claim/disclaim its responsibility for the
handling of guest pages with p2m type p2m_ioreq_server. Users
of this HVMOP can specify which kind of operation is supposed
to be emulated in a parameter named flags. Currently, this HVMOP
only supports the emulation of write operations; it can be
easily extended to support the emulation of reads if an
ioreq server has such a requirement in the future.

For now, we only support one ioreq server for this p2m type, so
once an ioreq server has claimed its ownership, subsequent calls
of HVMOP_map_mem_type_to_ioreq_server will fail. Users can also
disclaim the ownership of guest ram pages with p2m_ioreq_server by
triggering this new HVMOP, with the ioreq server id set to the current
owner's and the flags parameter set to 0.

Note both HVMOP_map_mem_type_to_ioreq_server and p2m_ioreq_server
are only supported for HVMs with HAP enabled.

Also note that only after one ioreq server claims its ownership
of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
be allowed.
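
As a concrete sketch of the disclaim path described above (issue_hvmop()
is a hypothetical wrapper around the device model's HVMOP hypercall path;
the structure and constants are the ones added to hvm_op.h by this patch):

static int disclaim_ioreq_mem_type(domid_t domid, ioservid_t id)
{
    struct xen_hvm_map_mem_type_to_ioreq_server op = {
        .domid = domid,              /* domain being serviced */
        .id    = id,                 /* current owner's ioreq server id */
        .type  = HVMMEM_ioreq_server,
        .flags = 0,                  /* flags == 0 means unmap/disclaim */
    };

    return issue_hvmop(HVMOP_map_mem_type_to_ioreq_server, &op);
}

Claiming works the same way with .flags = HVMOP_IOREQ_MEM_ACCESS_WRITE,
currently the only accepted flag.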

changes in v3:
  - Only support write emulation in this patch;
  - Remove the code to handle race condition in hvmemul_do_io(),
  - No need to reset the p2m type after an ioreq server has disclaimed
    its ownership of p2m_ioreq_server;
  - Only allow p2m type change to p2m_ioreq_server after an ioreq
    server has claimed its ownership of p2m_ioreq_server;
  - Only allow p2m type change to p2m_ioreq_server from pages with type
    p2m_ram_rw, and vice versa;
  - HVMOP_map_mem_type_to_ioreq_server interface change - use uint16,
    instead of enum to specify the memory type;
  - Function prototype change to p2m_get_ioreq_server();
  - Coding style changes;
  - Commit message changes;
  - Add Tim's Acked-by.

changes in v2:
  - Only support HAP enabled HVMs;
  - Replace p2m_mem_type_changed() with p2m_change_entry_type_global()
    to reset the p2m type, when an ioreq server tries to claim/disclaim
    its ownership of p2m_ioreq_server;
  - Comments changes.

Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
Acked-by: Tim Deegan <tim@xen.org>
Cc: Keir Fraser <keir@xen.org>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@eu.citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>
Cc: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/hvm/emulate.c       | 32 ++++++++++++++++--
 xen/arch/x86/hvm/hvm.c           | 63 ++++++++++++++++++++++++++++++++++--
 xen/arch/x86/hvm/ioreq.c         | 41 +++++++++++++++++++++++
 xen/arch/x86/mm/hap/nested_hap.c |  2 +-
 xen/arch/x86/mm/p2m-ept.c        |  7 +++-
 xen/arch/x86/mm/p2m-pt.c         | 23 +++++++++----
 xen/arch/x86/mm/p2m.c            | 70 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/shadow/multi.c   |  3 +-
 xen/include/asm-x86/hvm/ioreq.h  |  2 ++
 xen/include/asm-x86/p2m.h        | 30 +++++++++++++++--
 xen/include/public/hvm/hvm_op.h  | 31 ++++++++++++++++++
 11 files changed, 286 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index cc0b841..fd350bd 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -100,6 +100,7 @@ static int hvmemul_do_io(
     uint8_t dir, bool_t df, bool_t data_is_addr, uintptr_t data)
 {
     struct vcpu *curr = current;
+    struct domain *currd = curr->domain;
     struct hvm_vcpu_io *vio = &curr->arch.hvm_vcpu.hvm_io;
     ioreq_t p = {
         .type = is_mmio ? IOREQ_TYPE_COPY : IOREQ_TYPE_PIO,
@@ -141,7 +142,7 @@ static int hvmemul_do_io(
              (p.dir != dir) ||
              (p.df != df) ||
              (p.data_is_ptr != data_is_addr) )
-            domain_crash(curr->domain);
+            domain_crash(currd);
 
         if ( data_is_addr )
             return X86EMUL_UNHANDLEABLE;
@@ -169,8 +170,33 @@ static int hvmemul_do_io(
         break;
     case X86EMUL_UNHANDLEABLE:
     {
-        struct hvm_ioreq_server *s =
-            hvm_select_ioreq_server(curr->domain, &p);
+        struct hvm_ioreq_server *s;
+        p2m_type_t p2mt;
+
+        if ( is_mmio )
+        {
+            unsigned long gmfn = paddr_to_pfn(addr);
+
+            (void) get_gfn_query_unlocked(currd, gmfn, &p2mt);
+
+            if ( p2mt == p2m_ioreq_server )
+            {
+                unsigned long flags;
+
+                s = p2m_get_ioreq_server(currd, &flags);
+
+                if ( dir == IOREQ_WRITE &&
+                     !(flags & P2M_IOREQ_HANDLE_WRITE_ACCESS) )
+                    s = NULL;
+            }
+            else
+                s = hvm_select_ioreq_server(currd, &p);
+        }
+        else
+        {
+            p2mt = p2m_invalid;
+            s = hvm_select_ioreq_server(currd, &p);
+        }
 
         /* If there is no suitable backing DM, just ignore accesses */
         if ( !s )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 607546c..0ba66a2 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4711,6 +4711,40 @@ static int hvmop_unmap_io_range_from_ioreq_server(
     return rc;
 }
 
+static int hvmop_map_mem_type_to_ioreq_server(
+    XEN_GUEST_HANDLE_PARAM(xen_hvm_map_mem_type_to_ioreq_server_t) uop)
+{
+    xen_hvm_map_mem_type_to_ioreq_server_t op;
+    struct domain *d;
+    int rc;
+
+    if ( copy_from_guest(&op, uop, 1) )
+        return -EFAULT;
+
+    rc = rcu_lock_remote_domain_by_id(op.domid, &d);
+    if ( rc != 0 )
+        return rc;
+
+    rc = -EINVAL;
+    if ( !is_hvm_domain(d) )
+        goto out;
+
+    /* Only supported for HAP-enabled HVM. */
+    if ( !hap_enabled(d) )
+        goto out;
+
+    rc = xsm_hvm_ioreq_server(XSM_DM_PRIV, d,
+                              HVMOP_map_mem_type_to_ioreq_server);
+    if ( rc != 0 )
+        goto out;
+
+    rc = hvm_map_mem_type_to_ioreq_server(d, op.id, op.type, op.flags);
+
+ out:
+    rcu_unlock_domain(d);
+    return rc;
+}
+
 static int hvmop_set_ioreq_server_state(
     XEN_GUEST_HANDLE_PARAM(xen_hvm_set_ioreq_server_state_t) uop)
 {
@@ -5344,9 +5378,14 @@ static int hvmop_get_mem_type(
 
 static bool_t hvm_allow_p2m_type_change(p2m_type_t old, p2m_type_t new)
 {
+    if ( new == p2m_ioreq_server )
+        return old == p2m_ram_rw;
+
+    if ( old == p2m_ioreq_server )
+        return new == p2m_ram_rw;
+
     if ( p2m_is_ram(old) ||
-         (p2m_is_hole(old) && new == p2m_mmio_dm) ||
-         (old == p2m_ioreq_server && new == p2m_ram_rw) )
+         (p2m_is_hole(old) && new == p2m_mmio_dm) )
         return 1;
 
     return 0;
@@ -5381,6 +5420,21 @@ static int hvmop_set_mem_type(
     if ( !is_hvm_domain(d) )
         goto out;
 
+    if ( a.hvmmem_type == HVMMEM_ioreq_server )
+    {
+        unsigned long flags;
+        struct hvm_ioreq_server *s;
+
+        /* HVMMEM_ioreq_server is only supported for HAP enabled hvm. */
+        if ( !hap_enabled(d) )
+            goto out;
+
+        /* Do not change to HVMMEM_ioreq_server if no ioreq server mapped. */
+        s = p2m_get_ioreq_server(d, &flags);
+        if ( s == NULL )
+            goto out;
+    }
+
     rc = xsm_hvm_control(XSM_DM_PRIV, d, HVMOP_set_mem_type);
     if ( rc )
         goto out;
@@ -5482,6 +5536,11 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
             guest_handle_cast(arg, xen_hvm_io_range_t));
         break;
 
+    case HVMOP_map_mem_type_to_ioreq_server:
+        rc = hvmop_map_mem_type_to_ioreq_server(
+            guest_handle_cast(arg, xen_hvm_map_mem_type_to_ioreq_server_t));
+        break;
+
     case HVMOP_set_ioreq_server_state:
         rc = hvmop_set_ioreq_server_state(
             guest_handle_cast(arg, xen_hvm_set_ioreq_server_state_t));
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 690e478..2e4dea5 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -729,6 +729,8 @@ int hvm_destroy_ioreq_server(struct domain *d, ioservid_t id)
 
         domain_pause(d);
 
+        p2m_destroy_ioreq_server(d, s);
+
         hvm_ioreq_server_disable(s, 0);
 
         list_del(&s->list_entry);
@@ -890,6 +892,45 @@ int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
     return rc;
 }
 
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint16_t type, uint32_t flags)
+{
+    struct hvm_ioreq_server *s;
+    int rc;
+
+    /* For now, only HVMMEM_ioreq_server is supported. */
+    if ( type != HVMMEM_ioreq_server )
+        return -EINVAL;
+
+    /* For now, only write emulation is supported. */
+    if ( flags & ~(HVMOP_IOREQ_MEM_ACCESS_WRITE) )
+        return -EINVAL;
+
+    spin_lock(&d->arch.hvm_domain.ioreq_server.lock);
+
+    rc = -ENOENT;
+    list_for_each_entry ( s,
+                          &d->arch.hvm_domain.ioreq_server.list,
+                          list_entry )
+    {
+        if ( s == d->arch.hvm_domain.default_ioreq_server )
+            continue;
+
+        if ( s->id == id )
+        {
+            rc = p2m_set_ioreq_server(d, flags, s);
+            if ( rc == 0 )
+                dprintk(XENLOG_DEBUG, "%u %s type HVMMEM_ioreq_server.\n",
+                         s->id, (flags != 0) ? "mapped to" : "unmapped from");
+
+            break;
+        }
+    }
+
+    spin_unlock(&d->arch.hvm_domain.ioreq_server.lock);
+    return rc;
+}
+
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool_t enabled)
 {
diff --git a/xen/arch/x86/mm/hap/nested_hap.c b/xen/arch/x86/mm/hap/nested_hap.c
index 9cee5a0..bbb6d85 100644
--- a/xen/arch/x86/mm/hap/nested_hap.c
+++ b/xen/arch/x86/mm/hap/nested_hap.c
@@ -174,7 +174,7 @@ nestedhap_walk_L0_p2m(struct p2m_domain *p2m, paddr_t L1_gpa, paddr_t *L0_gpa,
     if ( *p2mt == p2m_mmio_direct )
         goto direct_mmio_out;
     rc = NESTEDHVM_PAGEFAULT_MMIO;
-    if ( *p2mt == p2m_mmio_dm )
+    if ( *p2mt == p2m_mmio_dm || *p2mt == p2m_ioreq_server )
         goto out;
 
     rc = NESTEDHVM_PAGEFAULT_L0_ERROR;
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 380ec25..b803694 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -132,6 +132,12 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->r = entry->w = entry->x = 1;
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
+        case p2m_ioreq_server:
+            entry->r = entry->x = 1;
+            entry->w = !(p2m->ioreq.flags & P2M_IOREQ_HANDLE_WRITE_ACCESS);
+            entry->a = !!cpu_has_vmx_ept_ad;
+            entry->d = entry->w && cpu_has_vmx_ept_ad;
+            break;
         case p2m_mmio_direct:
             entry->r = entry->x = 1;
             entry->w = !rangeset_contains_singleton(mmio_ro_ranges,
@@ -171,7 +177,6 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
             entry->a = entry->d = !!cpu_has_vmx_ept_ad;
             break;
         case p2m_grant_map_ro:
-        case p2m_ioreq_server:
             entry->r = 1;
             entry->w = entry->x = 0;
             entry->a = !!cpu_has_vmx_ept_ad;
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index eabd2e3..bf75afa 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -72,7 +72,9 @@ static const unsigned long pgt[] = {
     PGT_l3_page_table
 };
 
-static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
+static unsigned long p2m_type_to_flags(const struct p2m_domain *p2m,
+                                       p2m_type_t t,
+                                       mfn_t mfn,
                                        unsigned int level)
 {
     unsigned long flags;
@@ -94,8 +96,16 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
     default:
         return flags | _PAGE_NX_BIT;
     case p2m_grant_map_ro:
-    case p2m_ioreq_server:
         return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
+    case p2m_ioreq_server:
+    {
+        flags |= P2M_BASE_FLAGS | _PAGE_RW;
+
+        if ( p2m->ioreq.flags & P2M_IOREQ_HANDLE_WRITE_ACCESS )
+            return flags & ~_PAGE_RW;
+        else
+            return flags;
+    }
     case p2m_ram_ro:
     case p2m_ram_logdirty:
     case p2m_ram_shared:
@@ -442,7 +452,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
             p2m_type_t p2mt = p2m_is_logdirty_range(p2m, gfn & mask, gfn | ~mask)
                               ? p2m_ram_logdirty : p2m_ram_rw;
             unsigned long mfn = l1e_get_pfn(e);
-            unsigned long flags = p2m_type_to_flags(p2mt, _mfn(mfn), level);
+            unsigned long flags = p2m_type_to_flags(p2m, p2mt,
+                                                    _mfn(mfn), level);
 
             if ( level )
             {
@@ -579,7 +590,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
         l3e_content = mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt)
             ? l3e_from_pfn(mfn_x(mfn),
-                           p2m_type_to_flags(p2mt, mfn, 2) | _PAGE_PSE)
+                           p2m_type_to_flags(p2m, p2mt, mfn, 2) | _PAGE_PSE)
             : l3e_empty();
         entry_content.l1 = l3e_content.l3;
 
@@ -615,7 +626,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
 
         if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
             entry_content = p2m_l1e_from_pfn(mfn_x(mfn),
-                                             p2m_type_to_flags(p2mt, mfn, 0));
+                                         p2m_type_to_flags(p2m, p2mt, mfn, 0));
         else
             entry_content = l1e_empty();
 
@@ -651,7 +662,7 @@ p2m_pt_set_entry(struct p2m_domain *p2m, unsigned long gfn, mfn_t mfn,
         ASSERT(!mfn_valid(mfn) || p2mt != p2m_mmio_direct);
         if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
             l2e_content = l2e_from_pfn(mfn_x(mfn),
-                                       p2m_type_to_flags(p2mt, mfn, 1) |
+                                       p2m_type_to_flags(p2m, p2mt, mfn, 1) |
                                        _PAGE_PSE);
         else
             l2e_content = l2e_empty();
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index b3fce1b..231d41d 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -83,6 +83,8 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     else
         p2m_pt_init(p2m);
 
+    spin_lock_init(&p2m->ioreq.lock);
+
     return ret;
 }
 
@@ -289,6 +291,74 @@ void p2m_memory_type_changed(struct domain *d)
     }
 }
 
+int p2m_set_ioreq_server(struct domain *d,
+                         unsigned long flags,
+                         struct hvm_ioreq_server *s)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    int rc;
+
+    spin_lock(&p2m->ioreq.lock);
+
+    if ( flags == 0 )
+    {
+        rc = -EINVAL;
+        if ( p2m->ioreq.server != s )
+            goto out;
+
+        /* Unmap ioreq server from p2m type by passing flags with 0. */
+        p2m->ioreq.server = NULL;
+        p2m->ioreq.flags = 0;
+    }
+    else
+    {
+        rc = -EBUSY;
+        if ( p2m->ioreq.server != NULL )
+            goto out;
+
+        p2m->ioreq.server = s;
+        p2m->ioreq.flags = flags;
+    }
+
+    rc = 0;
+
+ out:
+    spin_unlock(&p2m->ioreq.lock);
+
+    return rc;
+}
+
+struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                              unsigned long *flags)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+    struct hvm_ioreq_server *s;
+
+    spin_lock(&p2m->ioreq.lock);
+
+    s = p2m->ioreq.server;
+    *flags = p2m->ioreq.flags;
+
+    spin_unlock(&p2m->ioreq.lock);
+    return s;
+}
+
+void p2m_destroy_ioreq_server(struct domain *d,
+                              struct hvm_ioreq_server *s)
+{
+    struct p2m_domain *p2m = p2m_get_hostp2m(d);
+
+    spin_lock(&p2m->ioreq.lock);
+
+    if ( p2m->ioreq.server == s )
+    {
+        p2m->ioreq.server = NULL;
+        p2m->ioreq.flags = 0;
+    }
+
+    spin_unlock(&p2m->ioreq.lock);
+}
+
 void p2m_enable_hardware_log_dirty(struct domain *d)
 {
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index c81302a..2e0d258 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -3224,8 +3224,7 @@ static int sh_page_fault(struct vcpu *v,
     }
 
     /* Need to hand off device-model MMIO to the device model */
-    if ( p2mt == p2m_mmio_dm
-         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
+    if ( p2mt == p2m_mmio_dm )
     {
         gpa = guest_walk_to_gpa(&gw);
         goto mmio;
diff --git a/xen/include/asm-x86/hvm/ioreq.h b/xen/include/asm-x86/hvm/ioreq.h
index a1778ee..b03c64d 100644
--- a/xen/include/asm-x86/hvm/ioreq.h
+++ b/xen/include/asm-x86/hvm/ioreq.h
@@ -36,6 +36,8 @@ int hvm_map_io_range_to_ioreq_server(struct domain *d, ioservid_t id,
 int hvm_unmap_io_range_from_ioreq_server(struct domain *d, ioservid_t id,
                                          uint32_t type, uint64_t start,
                                          uint64_t end);
+int hvm_map_mem_type_to_ioreq_server(struct domain *d, ioservid_t id,
+                                     uint16_t type, uint32_t flags);
 int hvm_set_ioreq_server_state(struct domain *d, ioservid_t id,
                                bool_t enabled);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index ee2ea9c..358605b 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -89,7 +89,8 @@ typedef unsigned int p2m_query_t;
                        | p2m_to_mask(p2m_ram_paging_out)      \
                        | p2m_to_mask(p2m_ram_paged)           \
                        | p2m_to_mask(p2m_ram_paging_in)       \
-                       | p2m_to_mask(p2m_ram_shared))
+                       | p2m_to_mask(p2m_ram_shared)          \
+                       | p2m_to_mask(p2m_ioreq_server))
 
 /* Types that represent a physmap hole that is ok to replace with a shared
  * entry */
@@ -111,8 +112,7 @@ typedef unsigned int p2m_query_t;
 #define P2M_RO_TYPES (p2m_to_mask(p2m_ram_logdirty)     \
                       | p2m_to_mask(p2m_ram_ro)         \
                       | p2m_to_mask(p2m_grant_map_ro)   \
-                      | p2m_to_mask(p2m_ram_shared)     \
-                      | p2m_to_mask(p2m_ioreq_server))
+                      | p2m_to_mask(p2m_ram_shared))
 
 /* Write-discard types, which should discard the write operations */
 #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
@@ -320,6 +320,24 @@ struct p2m_domain {
         struct ept_data ept;
         /* NPT-equivalent structure could be added here. */
     };
+
+    struct {
+        spinlock_t lock;
+        /*
+         * ioreq server that is responsible for the emulation of
+         * gfns with a specific p2m type (for now, p2m_ioreq_server).
+         */
+        struct hvm_ioreq_server *server;
+        /*
+         * flags specifies whether read, write or both operations
+         * are to be emulated by an ioreq server.
+         */
+        unsigned int flags;
+
+#define P2M_IOREQ_HANDLE_WRITE_ACCESS HVMOP_IOREQ_MEM_ACCESS_WRITE
+#define P2M_IOREQ_HANDLE_READ_ACCESS  HVMOP_IOREQ_MEM_ACCESS_READ
+
+    } ioreq;
 };
 
 /* get host p2m table */
@@ -821,6 +839,12 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt)
     return flags;
 }
 
+int p2m_set_ioreq_server(struct domain *d, unsigned long flags,
+                         struct hvm_ioreq_server *s);
+struct hvm_ioreq_server *p2m_get_ioreq_server(struct domain *d,
+                                              unsigned long *flags);
+void p2m_destroy_ioreq_server(struct domain *d, struct hvm_ioreq_server *s);
+
 #endif /* _XEN_P2M_H */
 
 /*
diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
index b3e45cf..6a6de43 100644
--- a/xen/include/public/hvm/hvm_op.h
+++ b/xen/include/public/hvm/hvm_op.h
@@ -383,6 +383,37 @@ struct xen_hvm_set_ioreq_server_state {
 typedef struct xen_hvm_set_ioreq_server_state xen_hvm_set_ioreq_server_state_t;
 DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_ioreq_server_state_t);
 
+/*
+ * HVMOP_map_mem_type_to_ioreq_server : map or unmap the IOREQ Server <id>
+ *                                      to a specific memory type <type>
+ *                                      for specific accesses <flags>
+ *
+ * For now, flags only accept the value of HVMOP_IOREQ_MEM_ACCESS_WRITE,
+ * which means only write operations are to be forwarded to an ioreq server.
+ * Support for the emulation of read operations can be added when an ioreq
+ * server has such a requirement in the future.
+ */
+#define HVMOP_map_mem_type_to_ioreq_server 26
+struct xen_hvm_map_mem_type_to_ioreq_server {
+    domid_t domid;      /* IN - domain to be serviced */
+    ioservid_t id;      /* IN - ioreq server id */
+    uint16_t type;      /* IN - memory type */
+    uint16_t pad;
+    uint32_t flags;     /* IN - types of accesses to be forwarded to the
+                           ioreq server. flags with 0 means to unmap the
+                           ioreq server */
+#define _HVMOP_IOREQ_MEM_ACCESS_READ 0
+#define HVMOP_IOREQ_MEM_ACCESS_READ \
+    (1u << _HVMOP_IOREQ_MEM_ACCESS_READ)
+
+#define _HVMOP_IOREQ_MEM_ACCESS_WRITE 1
+#define HVMOP_IOREQ_MEM_ACCESS_WRITE \
+    (1u << _HVMOP_IOREQ_MEM_ACCESS_WRITE)
+};
+typedef struct xen_hvm_map_mem_type_to_ioreq_server
+    xen_hvm_map_mem_type_to_ioreq_server_t;
+DEFINE_XEN_GUEST_HANDLE(xen_hvm_map_mem_type_to_ioreq_server_t);
+
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
 #if defined(__i386__) || defined(__x86_64__)
-- 
1.9.1



* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
@ 2016-04-25 12:12   ` Jan Beulich
  2016-04-25 13:30     ` Wei Liu
  2016-04-25 13:39   ` George Dunlap
  1 sibling, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2016-04-25 12:12 UTC (permalink / raw)
  To: Yu Zhang
  Cc: Kevin Tian, wei.liu2, George Dunlap, Andrew Cooper, Tim Deegan,
	xen-devel, Paul Durrant, zhiyuan.lv, Jun Nakajima, Keir Fraser

>>> On 25.04.16 at 12:35, <yu.c.zhang@linux.intel.com> wrote:
> Previously p2m type p2m_mmio_write_dm was introduced for write-
> protected memory pages whose write operations are supposed to be
> forwarded to and emulated by an ioreq server. Yet limitations of
> rangeset restrict the number of guest pages to be write-protected.
> 
> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
> p2m_ioreq_server, which means this p2m type can be claimed by one
> ioreq server, instead of being tracked inside the rangeset of ioreq
> server. Patches following up will add the related hvmop handling
> code which map/unmap type p2m_ioreq_server to/from an ioreq server.
> 
> changes in v3:
>   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
>     for old xen interface versions, and replace it with HVMMEM_unused
>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>     interfaces;
>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
> 
> changes in v2:
>   - According to George Dunlap's comments, only rename the p2m type,
>     with no behavior changes.
> 
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Acked-by: Tim Deegan <tim@xen.org>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I hope the other three tags above are still being considered
applicable by their originators.

Jan



* Re: [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server.
  2016-04-25 10:35 ` [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
@ 2016-04-25 12:36   ` Paul Durrant
  0 siblings, 0 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 12:36 UTC (permalink / raw)
  To: Yu Zhang, xen-devel
  Cc: Kevin Tian, Keir (Xen.org),
	Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	George Dunlap, zhiyuan.lv, Jan Beulich, Wei Liu

> -----Original Message-----
> From: Yu Zhang [mailto:yu.c.zhang@linux.intel.com]
> Sent: 25 April 2016 11:36
> To: xen-devel@lists.xen.org
> Cc: zhiyuan.lv@intel.com; Wei Liu; Paul Durrant; Paul Durrant; Keir (Xen.org);
> Jan Beulich; Andrew Cooper; George Dunlap; Jun Nakajima; Kevin Tian; Tim
> (Xen.org)
> Subject: [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram
> with p2m_ioreq_server to an ioreq server.
> 
> A new HVMOP - HVMOP_map_mem_type_to_ioreq_server, is added to
> let one ioreq server claim/disclaim its responsibility for the
> handling of guest pages with p2m type p2m_ioreq_server. Users
> of this HVMOP can specify which kind of operation is supposed
> to be emulated in a parameter named flags. Currently, this HVMOP
> only support the emulation of write operations. And it can be
> easily extended to support the emulation of read ones if an
> ioreq server has such requirement in the future.
> 
> For now, we only support one ioreq server for this p2m type, so
> once an ioreq server has claimed its ownership, subsequent calls
> of the HVMOP_map_mem_type_to_ioreq_server will fail. Users can also
> disclaim the ownership of guest ram pages with p2m_ioreq_server, by
> triggering this new HVMOP, with ioreq server id set to the current
> owner's and flags parameter set to 0.
> 
> Note both HVMOP_map_mem_type_to_ioreq_server and
> p2m_ioreq_server
> are only supported for HVMs with HAP enabled.
> 
> Also note that only after one ioreq server claims its ownership
> of p2m_ioreq_server, will the p2m type change to p2m_ioreq_server
> be allowed.
> 
> changes in v3:
>   - Only support write emulation in this patch;
>   - Remove the code to handle race condition in hvmemul_do_io(),
>   - No need to reset the p2m type after an ioreq server has disclaimed
>     its ownership of p2m_ioreq_server;
>   - Only allow p2m type change to p2m_ioreq_server after an ioreq
>     server has claimed its ownership of p2m_ioreq_server;

I think this restriction should be mentioned in hvm_op.h somewhere close to where HVMMEM_ioreq_server is defined.

  Paul
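
For illustration, a note along these lines could sit next to the
HVMMEM_ioreq_server definition; the wording below is just a sketch of what
is being asked for here, not part of the posted series:

/*
 * HVMMEM_ioreq_server: memory claimed by a single ioreq server.  A page
 * may only be changed to this type after an ioreq server has claimed the
 * type via HVMOP_map_mem_type_to_ioreq_server, and only for HAP-enabled
 * guests.
 */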



* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 12:12   ` Jan Beulich
@ 2016-04-25 13:30     ` Wei Liu
  0 siblings, 0 replies; 35+ messages in thread
From: Wei Liu @ 2016-04-25 13:30 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, wei.liu2, George Dunlap, Andrew Cooper, Tim Deegan,
	xen-devel, Paul Durrant, Yu Zhang, zhiyuan.lv, Jun Nakajima,
	Keir Fraser

On Mon, Apr 25, 2016 at 06:12:34AM -0600, Jan Beulich wrote:
> >>> On 25.04.16 at 12:35, <yu.c.zhang@linux.intel.com> wrote:
> > Previously p2m type p2m_mmio_write_dm was introduced for write-
> > protected memory pages whose write operations are supposed to be
> > forwarded to and emulated by an ioreq server. Yet limitations of
> > rangeset restrict the number of guest pages to be write-protected.
> > 
> > This patch replaces the p2m type p2m_mmio_write_dm with a new name:
> > p2m_ioreq_server, which means this p2m type can be claimed by one
> > ioreq server, instead of being tracked inside the rangeset of ioreq
> > server. Patches following up will add the related hvmop handling
> > code which map/unmap type p2m_ioreq_server to/from an ioreq server.
> > 
> > changes in v3:
> >   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
> >     for old xen interface versions, and replace it with HVMMEM_unused
> >     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
> >     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
> >     interfaces;
> >   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
> > 
> > changes in v2:
> >   - According to George Dunlap's comments, only rename the p2m type,
> >     with no behavior changes.
> > 
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Acked-by: Tim Deegan <tim@xen.org>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> I hope the other three tags above are still being considered
> applicable by their originators.
> 

Subject to confirmation from the originators:

Release-acked-by: Wei Liu <wei.liu2@citrix.com>

> Jan
> 


* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
  2016-04-25 12:12   ` Jan Beulich
@ 2016-04-25 13:39   ` George Dunlap
  2016-04-25 14:01     ` Paul Durrant
  2016-04-25 14:14     ` Jan Beulich
  1 sibling, 2 replies; 35+ messages in thread
From: George Dunlap @ 2016-04-25 13:39 UTC (permalink / raw)
  To: Yu Zhang
  Cc: Kevin Tian, Keir Fraser, Jan Beulich, Andrew Cooper, Tim Deegan,
	xen-devel, Paul Durrant, Lv, Zhiyuan, Jun Nakajima, Wei Liu

On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
> Previously p2m type p2m_mmio_write_dm was introduced for write-
> protected memory pages whose write operations are supposed to be
> forwarded to and emulated by an ioreq server. Yet limitations of
> rangeset restrict the number of guest pages to be write-protected.
>
> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
> p2m_ioreq_server, which means this p2m type can be claimed by one
> ioreq server, instead of being tracked inside the rangeset of ioreq
> server. Patches following up will add the related hvmop handling
> code which map/unmap type p2m_ioreq_server to/from an ioreq server.
>
> changes in v3:
>   - According to Jan & George's comments, keep HVMMEM_mmio_write_dm
>     for old xen interface versions, and replace it with HVMMEM_unused
>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>     interfaces;
>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.

Unfortunately these rather contradict each other -- I consider
Reviewed-by to only stick if the code I've specified hasn't changed
(or has only changed trivially).

Also...

>
> changes in v2:
>   - According to George Dunlap's comments, only rename the p2m type,
>     with no behavior changes.
>
> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> Acked-by: Tim Deegan <tim@xen.org>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> Cc: Keir Fraser <keir@xen.org>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Jun Nakajima <jun.nakajima@intel.com>
> Cc: Kevin Tian <kevin.tian@intel.com>
> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> Cc: Tim Deegan <tim@xen.org>
> ---
>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>  xen/include/asm-x86/p2m.h       |  4 ++--
>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>  6 files changed, 20 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index f24126d..874cb0f 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>       */
>      if ( (p2mt == p2m_mmio_dm) ||
>           (npfec.write_access &&
> -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>      {
>          __put_gfn(p2m, gfn);
>          if ( ap2m_active )
> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              get_gfn_query_unlocked(d, a.pfn, &t);
>              if ( p2m_is_mmio(t) )
>                  a.mem_type =  HVMMEM_mmio_dm;
> -            else if ( t == p2m_mmio_write_dm )
> -                a.mem_type = HVMMEM_mmio_write_dm;
> +            else if ( t == p2m_ioreq_server )
> +                a.mem_type = HVMMEM_ioreq_server;
>              else if ( p2m_is_readonly(t) )
>                  a.mem_type =  HVMMEM_ram_ro;
>              else if ( p2m_is_ram(t) )
> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
> +            [HVMMEM_unused] = p2m_invalid,
> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>          };
>
>          if ( copy_from_guest(&a, arg, 1) )
> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>              goto setmemtype_fail;
>
> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>              goto setmemtype_fail;
>
>          while ( a.nr > start_iter )
> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>              }
>              if ( !p2m_is_ram(t) &&
>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm) &&
> -                 (t != p2m_mmio_write_dm || a.hvmmem_type != HVMMEM_ram_rw) )
> +                 (t != p2m_ioreq_server || a.hvmmem_type != HVMMEM_ram_rw) )
>              {
>                  put_gfn(d, pfn);
>                  goto setmemtype_fail;
> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> index 3cb6868..380ec25 100644
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct p2m_domain *p2m, ept_entry_t *entry,
>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>              break;
>          case p2m_grant_map_ro:
> -        case p2m_mmio_write_dm:
> +        case p2m_ioreq_server:
>              entry->r = 1;
>              entry->w = entry->x = 0;
>              entry->a = !!cpu_has_vmx_ept_ad;
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 3d80612..eabd2e3 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t t, mfn_t mfn,
>      default:
>          return flags | _PAGE_NX_BIT;
>      case p2m_grant_map_ro:
> -    case p2m_mmio_write_dm:
> +    case p2m_ioreq_server:
>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>      case p2m_ram_ro:
>      case p2m_ram_logdirty:
> diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
> index e5c8499..c81302a 100644
> --- a/xen/arch/x86/mm/shadow/multi.c
> +++ b/xen/arch/x86/mm/shadow/multi.c
> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>
>      /* Need to hand off device-model MMIO to the device model */
>      if ( p2mt == p2m_mmio_dm
> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>      {
>          gpa = guest_walk_to_gpa(&gw);
>          goto mmio;
> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> index 5392eb0..ee2ea9c 100644
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -71,7 +71,7 @@ typedef enum {
>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>      p2m_ram_broken = 13,          /* Broken page, access cause domain crash */
>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
> +    p2m_ioreq_server = 15,
>  } p2m_type_t;
>
>  /* Modifiers to the query */
> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>                        | p2m_to_mask(p2m_ram_ro)         \
>                        | p2m_to_mask(p2m_grant_map_ro)   \
>                        | p2m_to_mask(p2m_ram_shared)     \
> -                      | p2m_to_mask(p2m_mmio_write_dm))
> +                      | p2m_to_mask(p2m_ioreq_server))
>
>  /* Write-discard types, which should discard the write operations */
>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
> diff --git a/xen/include/public/hvm/hvm_op.h b/xen/include/public/hvm/hvm_op.h
> index 1606185..b3e45cf 100644
> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -83,7 +83,13 @@ typedef enum {
>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>      HVMMEM_mmio_dm,            /* Reads and write go to the device model */
> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
> +#else
> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
> +                                  will fail for code after 4.7.0 */
> +#endif
> +    HVMMEM_ioreq_server

Also, I don't think we've had a convincing argument for why this patch
needs to be in 4.7.  The p2m name changes are internal only, and so
don't need to be made at all; and the old functionality will work as
well as it ever did.  Furthermore, the whole reason we're in this
situation is that we checked in a publicly-visible change to the
interface before it was properly ready; I think we should avoid making
the same mistake this time.

So personally I'd just leave this patch entirely for 4.8; but if Paul
and/or Jan have strong opinions, then I would say check in only a
patch to do the #if/#else/#endif, and leave both the p2m type changes
and the new HVMMEM_ioreq_server enum for when the 4.8 development
window opens.
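
For reference, the header-only piece being suggested would look roughly as
follows (reconstructed from the hvm_op.h hunk quoted above, minus the new
HVMMEM_ioreq_server entry; the enclosing typedef name is an assumption, and
as Jan notes further down the thread it could not go in entirely on its own):

    /* xen/include/public/hvm/hvm_op.h -- sketch of the version guard alone */
    typedef enum {
        HVMMEM_ram_rw,             /* Normal read/write guest RAM */
        HVMMEM_ram_ro,             /* Read-only; writes are discarded */
        HVMMEM_mmio_dm,            /* Reads and writes go to the device model */
    #if __XEN_INTERFACE_VERSION__ < 0x00040700
        HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
    #else
        HVMMEM_unused              /* Placeholder; no longer accepted */
    #endif
    } hvmmem_type_t;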

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 13:39   ` George Dunlap
@ 2016-04-25 14:01     ` Paul Durrant
  2016-04-25 14:15       ` George Dunlap
                         ` (2 more replies)
  2016-04-25 14:14     ` Jan Beulich
  1 sibling, 3 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 14:01 UTC (permalink / raw)
  To: George Dunlap, Yu Zhang
  Cc: Kevin Tian, Keir (Xen.org),
	Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Lv, Zhiyuan, Jun Nakajima, Wei Liu

> -----Original Message-----
> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
> George Dunlap
> Sent: 25 April 2016 14:39
> To: Yu Zhang
> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei Liu
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com>
> wrote:
> > Previously p2m type p2m_mmio_write_dm was introduced for write-
> > protected memory pages whose write operations are supposed to be
> > forwarded to and emulated by an ioreq server. Yet limitations of
> > rangeset restrict the number of guest pages to be write-protected.
> >
> > This patch replaces the p2m type p2m_mmio_write_dm with a new name:
> > p2m_ioreq_server, which means this p2m type can be claimed by one
> > ioreq server, instead of being tracked inside the rangeset of ioreq
> > server. Patches following up will add the related hvmop handling
> > code which map/unmap type p2m_ioreq_server to/from an ioreq server.
> >
> > changes in v3:
> >   - According to Jan & George's comments, keep
> HVMMEM_mmio_write_dm
> >     for old xen interface versions, and replace it with HVMMEM_unused
> >     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
> >     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
> >     interfaces;
> >   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
> 
> Unfortunately these rather contradict each other -- I consider
> Reviewed-by to only stick if the code I've specified hasn't changed
> (or has only changed trivially).
> 
> Also...
> 
> >
> > changes in v2:
> >   - According to George Dunlap's comments, only rename the p2m type,
> >     with no behavior changes.
> >
> > Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> > Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> > Acked-by: Tim Deegan <tim@xen.org>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> > Cc: Keir Fraser <keir@xen.org>
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: Jun Nakajima <jun.nakajima@intel.com>
> > Cc: Kevin Tian <kevin.tian@intel.com>
> > Cc: George Dunlap <george.dunlap@eu.citrix.com>
> > Cc: Tim Deegan <tim@xen.org>
> > ---
> >  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
> >  xen/arch/x86/mm/p2m-ept.c       |  2 +-
> >  xen/arch/x86/mm/p2m-pt.c        |  2 +-
> >  xen/arch/x86/mm/shadow/multi.c  |  2 +-
> >  xen/include/asm-x86/p2m.h       |  4 ++--
> >  xen/include/public/hvm/hvm_op.h |  8 +++++++-
> >  6 files changed, 20 insertions(+), 12 deletions(-)
> >
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index f24126d..874cb0f 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
> unsigned long gla,
> >       */
> >      if ( (p2mt == p2m_mmio_dm) ||
> >           (npfec.write_access &&
> > -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
> > +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
> >      {
> >          __put_gfn(p2m, gfn);
> >          if ( ap2m_active )
> > @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >              get_gfn_query_unlocked(d, a.pfn, &t);
> >              if ( p2m_is_mmio(t) )
> >                  a.mem_type =  HVMMEM_mmio_dm;
> > -            else if ( t == p2m_mmio_write_dm )
> > -                a.mem_type = HVMMEM_mmio_write_dm;
> > +            else if ( t == p2m_ioreq_server )
> > +                a.mem_type = HVMMEM_ioreq_server;
> >              else if ( p2m_is_readonly(t) )
> >                  a.mem_type =  HVMMEM_ram_ro;
> >              else if ( p2m_is_ram(t) )
> > @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >              [HVMMEM_ram_rw]  = p2m_ram_rw,
> >              [HVMMEM_ram_ro]  = p2m_ram_ro,
> >              [HVMMEM_mmio_dm] = p2m_mmio_dm,
> > -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
> > +            [HVMMEM_unused] = p2m_invalid,
> > +            [HVMMEM_ioreq_server] = p2m_ioreq_server
> >          };
> >
> >          if ( copy_from_guest(&a, arg, 1) )
> > @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
> >              goto setmemtype_fail;
> >
> > -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
> > +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
> > +             unlikely(a.hvmmem_type == HVMMEM_unused) )
> >              goto setmemtype_fail;
> >
> >          while ( a.nr > start_iter )
> > @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
> XEN_GUEST_HANDLE_PARAM(void) arg)
> >              }
> >              if ( !p2m_is_ram(t) &&
> >                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
> &&
> > -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
> HVMMEM_ram_rw) )
> > +                 (t != p2m_ioreq_server || a.hvmmem_type !=
> HVMMEM_ram_rw) )
> >              {
> >                  put_gfn(d, pfn);
> >                  goto setmemtype_fail;
> > diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
> > index 3cb6868..380ec25 100644
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
> p2m_domain *p2m, ept_entry_t *entry,
> >              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
> >              break;
> >          case p2m_grant_map_ro:
> > -        case p2m_mmio_write_dm:
> > +        case p2m_ioreq_server:
> >              entry->r = 1;
> >              entry->w = entry->x = 0;
> >              entry->a = !!cpu_has_vmx_ept_ad;
> > diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> > index 3d80612..eabd2e3 100644
> > --- a/xen/arch/x86/mm/p2m-pt.c
> > +++ b/xen/arch/x86/mm/p2m-pt.c
> > @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t
> t, mfn_t mfn,
> >      default:
> >          return flags | _PAGE_NX_BIT;
> >      case p2m_grant_map_ro:
> > -    case p2m_mmio_write_dm:
> > +    case p2m_ioreq_server:
> >          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
> >      case p2m_ram_ro:
> >      case p2m_ram_logdirty:
> > diff --git a/xen/arch/x86/mm/shadow/multi.c
> b/xen/arch/x86/mm/shadow/multi.c
> > index e5c8499..c81302a 100644
> > --- a/xen/arch/x86/mm/shadow/multi.c
> > +++ b/xen/arch/x86/mm/shadow/multi.c
> > @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
> >
> >      /* Need to hand off device-model MMIO to the device model */
> >      if ( p2mt == p2m_mmio_dm
> > -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
> > +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
> >      {
> >          gpa = guest_walk_to_gpa(&gw);
> >          goto mmio;
> > diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> > index 5392eb0..ee2ea9c 100644
> > --- a/xen/include/asm-x86/p2m.h
> > +++ b/xen/include/asm-x86/p2m.h
> > @@ -71,7 +71,7 @@ typedef enum {
> >      p2m_ram_shared = 12,          /* Shared or sharable memory */
> >      p2m_ram_broken = 13,          /* Broken page, access cause domain crash
> */
> >      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
> > -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
> model */
> > +    p2m_ioreq_server = 15,
> >  } p2m_type_t;
> >
> >  /* Modifiers to the query */
> > @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
> >                        | p2m_to_mask(p2m_ram_ro)         \
> >                        | p2m_to_mask(p2m_grant_map_ro)   \
> >                        | p2m_to_mask(p2m_ram_shared)     \
> > -                      | p2m_to_mask(p2m_mmio_write_dm))
> > +                      | p2m_to_mask(p2m_ioreq_server))
> >
> >  /* Write-discard types, which should discard the write operations */
> >  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
> > diff --git a/xen/include/public/hvm/hvm_op.h
> b/xen/include/public/hvm/hvm_op.h
> > index 1606185..b3e45cf 100644
> > --- a/xen/include/public/hvm/hvm_op.h
> > +++ b/xen/include/public/hvm/hvm_op.h
> > @@ -83,7 +83,13 @@ typedef enum {
> >      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
> >      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
> >      HVMMEM_mmio_dm,            /* Reads and write go to the device model
> */
> > -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
> model */
> > +#if __XEN_INTERFACE_VERSION__ < 0x00040700
> > +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
> model */
> > +#else
> > +    HVMMEM_unused,             /* Placeholder; setting memory to this type
> > +                                  will fail for code after 4.7.0 */
> > +#endif
> > +    HVMMEM_ioreq_server
> 
> Also, I don't think we've had a convincing argument for why this patch
> needs to be in 4.7.  The p2m name changes are internal only, and so
> don't need to be made at all; and the old functionality will work as
> well as it ever did.  Furthermore, the whole reason we're in this
> situation is that we checked in a publicly-visible change to the
> interface before it was properly ready; I think we should avoid making
> the same mistake this time.
> 
> So personally I'd just leave this patch entirely for 4.8; but if Paul
> and/or Jan have strong opinions, then I would say check in only a
> patch to do the #if/#else/#endif, and leave both the p2m type changes
> and the new HVMMEM_ioreq_server enum for when the 4.8 development
> window opens.
> 

If the whole series is going in then I think this patch is ok. If this is the only patch that is going in for 4.7 then I think we need the patch to hvm_op.h to deprecate the old type and that's it. We definitely should not introduce an implementation of the type HVMMEM_ioreq_server that has the same hardcoded semantics as the old type and then change it.
The p2m type changes are also wrong. That type needs to be left alone, presumably, so that anything using HVMMEM_mmio_write_dm and compiled against the old interface version continues to function. I think HVMMEM_ioreq_server needs to map to a new p2m type, which should be introduced in patch #3.
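
For concreteness, one possible shape of that suggestion (a sketch only; the
numbering, comments and placement are assumptions) would be to leave
p2m_mmio_write_dm untouched and give the new HVMMEM value a p2m type of its
own:

    /* xen/include/asm-x86/p2m.h */
    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device model */
    p2m_ioreq_server  = 16,       /* Pages claimed by a single ioreq server */

    /* xen/arch/x86/hvm/hvm.c, HVMOP_set_mem_type mapping table */
    [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm,
    [HVMMEM_ioreq_server]  = p2m_ioreq_server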

  Paul


>  -George
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 13:39   ` George Dunlap
  2016-04-25 14:01     ` Paul Durrant
@ 2016-04-25 14:14     ` Jan Beulich
  1 sibling, 0 replies; 35+ messages in thread
From: Jan Beulich @ 2016-04-25 14:14 UTC (permalink / raw)
  To: George Dunlap, Yu Zhang
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim Deegan, xen-devel,
	Paul Durrant, Zhiyuan Lv, Jun Nakajima, Keir Fraser

>>> On 25.04.16 at 15:39, <George.Dunlap@eu.citrix.com> wrote:
> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com> wrote:
>> --- a/xen/include/public/hvm/hvm_op.h
>> +++ b/xen/include/public/hvm/hvm_op.h
>> @@ -83,7 +83,13 @@ typedef enum {
>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>      HVMMEM_mmio_dm,            /* Reads and write go to the device model */
>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
>> +#else
>> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
>> +                                  will fail for code after 4.7.0 */
>> +#endif
>> +    HVMMEM_ioreq_server
> 
> Also, I don't think we've had a convincing argument for why this patch
> needs to be in 4.7.  The p2m name changes are internal only, and so
> don't need to be made at all; and the old functionality will work as
> well as it ever did.  Furthermore, the whole reason we're in this
> situation is that we checked in a publicly-visible change to the
> interface before it was properly ready; I think we should avoid making
> the same mistake this time.

Good point.

> So personally I'd just leave this patch entirely for 4.8; but if Paul
> and/or Jan have strong opinions, then I would say check in only a
> patch to do the #if/#else/#endif, and leave both the p2m type changes
> and the new HVMMEM_ioreq_server enum for when the 4.8 development
> window opens.

Doing that alone would break the build - it would need to be a
little more than that.
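
The breakage being pointed at is presumably the hypervisor's own uses of
HVMMEM_mmio_write_dm visible in the hunks quoted above: Xen itself is built
against the latest interface version, so once the header stops defining that
name the HVMOP handlers in hvm.c no longer compile. Abridged sketch (the
declaration line is reconstructed from context):

    /* xen/arch/x86/hvm/hvm.c, HVMOP_set_mem_type (pre-patch) */
    static const p2m_type_t memtype[] = {
        [HVMMEM_ram_rw]  = p2m_ram_rw,
        [HVMMEM_ram_ro]  = p2m_ram_ro,
        [HVMMEM_mmio_dm] = p2m_mmio_dm,
        [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm  /* undefined identifier if
                                                       only the header changes */
    };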

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:01     ` Paul Durrant
@ 2016-04-25 14:15       ` George Dunlap
  2016-04-25 14:16       ` Jan Beulich
  2016-04-25 15:21       ` Yu, Zhang
  2 siblings, 0 replies; 35+ messages in thread
From: George Dunlap @ 2016-04-25 14:15 UTC (permalink / raw)
  To: Paul Durrant, Yu Zhang
  Cc: Kevin Tian, Keir (Xen.org),
	Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Lv, Zhiyuan, Jun Nakajima, Wei Liu

On 25/04/16 15:01, Paul Durrant wrote:
>> -----Original Message-----
>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>> George Dunlap
>> Sent: 25 April 2016 14:39
>> To: Yu Zhang
>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei Liu
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com>
>> wrote:
>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>> protected memory pages whose write operations are supposed to be
>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>> rangeset restrict the number of guest pages to be write-protected.
>>>
>>> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>> server. Patches following up will add the related hvmop handling
>>> code which map/unmap type p2m_ioreq_server to/from an ioreq server.
>>>
>>> changes in v3:
>>>   - According to Jan & George's comments, keep
>> HVMMEM_mmio_write_dm
>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>>>     interfaces;
>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>
>> Unfortunately these rather contradict each other -- I consider
>> Reviewed-by to only stick if the code I've specified hasn't changed
>> (or has only changed trivially).
>>
>> Also...
>>
>>>
>>> changes in v2:
>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>     with no behavior changes.
>>>
>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>> Acked-by: Tim Deegan <tim@xen.org>
>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>> Cc: Keir Fraser <keir@xen.org>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>> Cc: Tim Deegan <tim@xen.org>
>>> ---
>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index f24126d..874cb0f 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>> unsigned long gla,
>>>       */
>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>           (npfec.write_access &&
>>> -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>      {
>>>          __put_gfn(p2m, gfn);
>>>          if ( ap2m_active )
>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>              if ( p2m_is_mmio(t) )
>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>> -            else if ( t == p2m_mmio_write_dm )
>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>> +            else if ( t == p2m_ioreq_server )
>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>              else if ( p2m_is_readonly(t) )
>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>              else if ( p2m_is_ram(t) )
>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>> +            [HVMMEM_unused] = p2m_invalid,
>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>          };
>>>
>>>          if ( copy_from_guest(&a, arg, 1) )
>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>              goto setmemtype_fail;
>>>
>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>              goto setmemtype_fail;
>>>
>>>          while ( a.nr > start_iter )
>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              }
>>>              if ( !p2m_is_ram(t) &&
>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
>> &&
>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>> HVMMEM_ram_rw) )
>>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>> HVMMEM_ram_rw) )
>>>              {
>>>                  put_gfn(d, pfn);
>>>                  goto setmemtype_fail;
>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>>> index 3cb6868..380ec25 100644
>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>> p2m_domain *p2m, ept_entry_t *entry,
>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>              break;
>>>          case p2m_grant_map_ro:
>>> -        case p2m_mmio_write_dm:
>>> +        case p2m_ioreq_server:
>>>              entry->r = 1;
>>>              entry->w = entry->x = 0;
>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>> index 3d80612..eabd2e3 100644
>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t
>> t, mfn_t mfn,
>>>      default:
>>>          return flags | _PAGE_NX_BIT;
>>>      case p2m_grant_map_ro:
>>> -    case p2m_mmio_write_dm:
>>> +    case p2m_ioreq_server:
>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>      case p2m_ram_ro:
>>>      case p2m_ram_logdirty:
>>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>> b/xen/arch/x86/mm/shadow/multi.c
>>> index e5c8499..c81302a 100644
>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>
>>>      /* Need to hand off device-model MMIO to the device model */
>>>      if ( p2mt == p2m_mmio_dm
>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>      {
>>>          gpa = guest_walk_to_gpa(&gw);
>>>          goto mmio;
>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>> index 5392eb0..ee2ea9c 100644
>>> --- a/xen/include/asm-x86/p2m.h
>>> +++ b/xen/include/asm-x86/p2m.h
>>> @@ -71,7 +71,7 @@ typedef enum {
>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>      p2m_ram_broken = 13,          /* Broken page, access cause domain crash
>> */
>>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
>> model */
>>> +    p2m_ioreq_server = 15,
>>>  } p2m_type_t;
>>>
>>>  /* Modifiers to the query */
>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>
>>>  /* Write-discard types, which should discard the write operations */
>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>> diff --git a/xen/include/public/hvm/hvm_op.h
>> b/xen/include/public/hvm/hvm_op.h
>>> index 1606185..b3e45cf 100644
>>> --- a/xen/include/public/hvm/hvm_op.h
>>> +++ b/xen/include/public/hvm/hvm_op.h
>>> @@ -83,7 +83,13 @@ typedef enum {
>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device model
>> */
>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
>> model */
>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
>> model */
>>> +#else
>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
>>> +                                  will fail for code after 4.7.0 */
>>> +#endif
>>> +    HVMMEM_ioreq_server
>>
>> Also, I don't think we've had a convincing argument for why this patch
>> needs to be in 4.7.  The p2m name changes are internal only, and so
>> don't need to be made at all; and the old functionality will work as
>> well as it ever did.  Furthermore, the whole reason we're in this
>> situation is that we checked in a publicly-visible change to the
>> interface before it was properly ready; I think we should avoid making
>> the same mistake this time.
>>
>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>> and/or Jan have strong opinions, then I would say check in only a
>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>> and the new HVMMEM_ioreq_server enum for when the 4.8 development
>> window opens.
>>
> 
> If the whole series is going in then I think this patch is ok. If this is the only patch that is going in for 4.7 then I think we need the patch to hvm_op.h to deprecate the old type and that's it. We definitely should not introduce an implementation of the type HVMMEM_ioreq_server that has the same hardcoded semantics as the old type and then change it.
> The p2m type changes are also wrong. That type needs to be left alone, presumably, so that anything using HVMMEM_mmio_write_dm and compiled against the old interface version continues to function. I think HVMMEM_ioreq_server needs to map to a new p2m type, which should be introduced in patch #3.

Well yes, if the whole series is going in, the patch is OK; but I assumed
that, since it's a new feature that missed the hard deadline, we were at
this point only talking about how to fix up the interface for the 4.7
release.

I think for 4.8 it should return -EINVAL until someone complains that
it's not working, but that's something we can discuss when the
development window opens.
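
In terms of the hunks quoted earlier, returning -EINVAL for the deprecated
value is essentially the guard this patch already adds to HVMOP_set_mem_type;
a sketch, with the surrounding rc plumbing abridged and assumed:

    rc = -EINVAL;
    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
         unlikely(a.hvmmem_type == HVMMEM_unused) )
        goto setmemtype_fail;   /* unknown or deprecated type: fail the op */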

Thanks
 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:01     ` Paul Durrant
  2016-04-25 14:15       ` George Dunlap
@ 2016-04-25 14:16       ` Jan Beulich
  2016-04-25 14:19         ` Paul Durrant
  2016-04-25 15:21       ` Yu, Zhang
  2 siblings, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2016-04-25 14:16 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim(Xen.org),
	George Dunlap, xen-devel, Yu Zhang, Zhiyuan Lv, Jun Nakajima,
	Keir (Xen.org)

>>> On 25.04.16 at 16:01, <Paul.Durrant@citrix.com> wrote:
> The p2m type changes are also wrong. That type needs to be left alone, 
> presumably, so that anything using HVMMEM_mmio_write_dm and compiled to the 
> old interface version continues to function. I think HVMMEM_ioreq_server 
> needs to map to a new p2m type which should be introduced in patch #3.

I don't understand this part: I thought it was agreed that the old
p2m type needs to go away (not the least because we're tight on
these), and use of the old HVMMEM_* type needs to result in
errors.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:16       ` Jan Beulich
@ 2016-04-25 14:19         ` Paul Durrant
  2016-04-25 14:28           ` George Dunlap
  0 siblings, 1 reply; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 14:19 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, xen-devel, Yu Zhang, Zhiyuan Lv, Jun Nakajima,
	Keir (Xen.org)

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 25 April 2016 15:16
> To: Paul Durrant
> Cc: Andrew Cooper; George Dunlap; Wei Liu; Jun Nakajima; Kevin Tian;
> Zhiyuan Lv; Yu Zhang; xen-devel@lists.xen.org; Keir (Xen.org); Tim (Xen.org)
> Subject: RE: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> >>> On 25.04.16 at 16:01, <Paul.Durrant@citrix.com> wrote:
> > The p2m type changes are also wrong. That type needs to be left alone,
> > presumably, so that anything using HVMMEM_mmio_write_dm and
> compiled to the
> > old interface version continues to function. I think HVMMEM_ioreq_server
> > needs to map to a new p2m type which should be introduced in patch #3.
> 
> I don't understand this part: I thought it was agreed that the old
> p2m type needs to go away (not the least because we're tight on
> these), and use of the old HVMMEM_* type needs to result in
> errors.
> 

I may have misunderstood. I thought we'd back-tracked on that because there's a concern that we also need to keep anything compiled against the old header working. If not then this patch should also remove that p2m type, not rename it.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:19         ` Paul Durrant
@ 2016-04-25 14:28           ` George Dunlap
  2016-04-25 14:34             ` Paul Durrant
  0 siblings, 1 reply; 35+ messages in thread
From: George Dunlap @ 2016-04-25 14:28 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	xen-devel, Yu Zhang, Zhiyuan Lv, Jan Beulich, Keir (Xen.org)

On Mon, Apr 25, 2016 at 3:19 PM, Paul Durrant <Paul.Durrant@citrix.com> wrote:
>> -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@suse.com]
>> Sent: 25 April 2016 15:16
>> To: Paul Durrant
>> Cc: Andrew Cooper; George Dunlap; Wei Liu; Jun Nakajima; Kevin Tian;
>> Zhiyuan Lv; Yu Zhang; xen-devel@lists.xen.org; Keir (Xen.org); Tim (Xen.org)
>> Subject: RE: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>> >>> On 25.04.16 at 16:01, <Paul.Durrant@citrix.com> wrote:
>> > The p2m type changes are also wrong. That type needs to be left alone,
>> > presumably, so that anything using HVMMEM_mmio_write_dm and
>> compiled to the
>> > old interface version continues to function. I think HVMMEM_ioreq_server
>> > needs to map to a new p2m type which should be introduced in patch #3.
>>
>> I don't understand this part: I thought it was agreed that the old
>> p2m type needs to go away (not the least because we're tight on
>> these), and use of the old HVMMEM_* type needs to result in
>> errors.
>>
>
> I may have misunderstood. I thought we'd back-tracked on that because there's a concern that we also need to keep anything compiled against the old header working. If not then this patch should also remove that p2m type, not rename it.

You mean remove the old HVMMEM type?

There are two issues:
1. Whether old code should continue to compile
2. How old code should act on new hypervisors

I think we've determined that we definitely cannot allow code compiled
against old hypervisors to accidentally use a different p2m type; so
we certainly need to "burn" an enum here.
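
The reason a slot has to be burned rather than simply deleted is that the
enumerator values are part of the ABI; a sketch of what would go wrong
otherwise (values follow the ordering in the header quoted above):

    /* Values actually passed across the hypercall boundary: */
    HVMMEM_ram_rw        = 0,
    HVMMEM_ram_ro        = 1,
    HVMMEM_mmio_dm       = 2,
    HVMMEM_mmio_write_dm = 3,   /* what existing binaries were built with */
    /* Deleting value 3 outright would make HVMMEM_ioreq_server take value 3,
     * so an old binary asking for write_dm semantics would silently get the
     * new ioreq_server semantics; keeping a placeholder (HVMMEM_unused = 3)
     * and adding HVMMEM_ioreq_server = 4 avoids that. */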

I'd personally prefer we just straight-up rename it to HVMMEM_unused,
so nobody continues to think that the HVMMEM_mmio_write_dm
functionality might still actually work; I think Jan thinks that's not
allowed.

Honestly I don't see the point of letting it compile and then return
-EINVAL when we run it.  If people complain that it doesn't work
anymore we should either make it compile *and* maintain the
functionality, or say "Sorry, just use an older version" and make it
neither compile nor maintain the functionality.

But I sort of assumed this discussion on what support looked like had
already been had and Jan was just enforcing it.

(Maybe we should have had a talk about this in person at the Hackathon...)

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:28           ` George Dunlap
@ 2016-04-25 14:34             ` Paul Durrant
  0 siblings, 0 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 14:34 UTC (permalink / raw)
  To: George Dunlap
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	xen-devel, Yu Zhang, Zhiyuan Lv, Jan Beulich, Keir (Xen.org)

> -----Original Message-----
> From: George Dunlap
> Sent: 25 April 2016 15:28
> To: Paul Durrant
> Cc: Jan Beulich; Kevin Tian; Wei Liu; Andrew Cooper; Tim (Xen.org); xen-
> devel@lists.xen.org; Yu Zhang; Zhiyuan Lv; Jun Nakajima; Keir (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> On Mon, Apr 25, 2016 at 3:19 PM, Paul Durrant <Paul.Durrant@citrix.com>
> wrote:
> >> -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@suse.com]
> >> Sent: 25 April 2016 15:16
> >> To: Paul Durrant
> >> Cc: Andrew Cooper; George Dunlap; Wei Liu; Jun Nakajima; Kevin Tian;
> >> Zhiyuan Lv; Yu Zhang; xen-devel@lists.xen.org; Keir (Xen.org); Tim
> (Xen.org)
> >> Subject: RE: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> >> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> >>
> >> >>> On 25.04.16 at 16:01, <Paul.Durrant@citrix.com> wrote:
> >> > The p2m type changes are also wrong. That type needs to be left alone,
> >> > presumably, so that anything using HVMMEM_mmio_write_dm and
> >> compiled to the
> >> > old interface version continues to function. I think
> HVMMEM_ioreq_server
> >> > needs to map to a new p2m type which should be introduced in patch
> #3.
> >>
> >> I don't understand this part: I thought it was agreed that the old
> >> p2m type needs to go away (not the least because we're tight on
> >> these), and use of the old HVMMEM_* type needs to result in
> >> errors.
> >>
> >
> > I may have misunderstood. I thought we'd back-tracked on that because
> there's a concern that we also need to keep anything compiled against the
> old header working. If not then this patch should also remove that p2m type,
> not rename it.
> 
> You mean remove the old HVMMEM type?
> 
> There are two issues:
> 1. Whether old code should continue to compile
> 2. How old code should act on new hypervisors
> 
> I think we've determined that we definitely cannot allow code compiled
> against old hypervisors to accidentally use a different p2m type; so
> we certainly need to "burn" an enum here.
> 
> I'd personally prefer we just straight-up rename it to HVMMEM_unused,
> so nobody continues to think that the HVMMEM_mmio_write_dm
> functionality might still actually work; I think Jan thinks that's not
> allowed.
> 
> Honestly I don't see the point of letting it compile and then return
> -EINVAL when we run it.  If people complain that it doesn't work
> anymore we should either make it compile *and* maintain the
> functionality, or say "Sorry, just use an older version" and make it
> neither compile nor maintain the functionality.
> 
> But I sort of assumed this discussion on what support looked like had
> already been had and Jan was just enforcing it.
> 
> (Maybe we should have had a talk about this in person at the Hackathon...)
> 

I'm now confused as to what was agreed, if anything.

The fact of the matter, though, is that the old type escaped into the wild. I wanted it to go away, but since it has escaped I guess that may just not be possible.

  Paul

>  -George
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 14:01     ` Paul Durrant
  2016-04-25 14:15       ` George Dunlap
  2016-04-25 14:16       ` Jan Beulich
@ 2016-04-25 15:21       ` Yu, Zhang
  2016-04-25 15:29         ` Paul Durrant
  2 siblings, 1 reply; 35+ messages in thread
From: Yu, Zhang @ 2016-04-25 15:21 UTC (permalink / raw)
  To: Paul Durrant, George Dunlap
  Cc: Kevin Tian, Keir (Xen.org),
	Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Lv, Zhiyuan, Jun Nakajima, Wei Liu



On 4/25/2016 10:01 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>> George Dunlap
>> Sent: 25 April 2016 14:39
>> To: Yu Zhang
>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei Liu
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang <yu.c.zhang@linux.intel.com>
>> wrote:
>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>> protected memory pages whose write operations are supposed to be
>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>> rangeset restrict the number of guest pages to be write-protected.
>>>
>>> This patch replaces the p2m type p2m_mmio_write_dm with a new name:
>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>> server. Patches following up will add the related hvmop handling
>>> code which map/unmap type p2m_ioreq_server to/from an ioreq server.
>>>
>>> changes in v3:
>>>   - According to Jan & George's comments, keep
>> HVMMEM_mmio_write_dm
>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem type
>>>     interfaces;
>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>
>> Unfortunately these rather contradict each other -- I consider
>> Reviewed-by to only stick if the code I've specified hasn't changed
>> (or has only changed trivially).
>>
>> Also...
>>
>>>
>>> changes in v2:
>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>     with no behavior changes.
>>>
>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>> Acked-by: Tim Deegan <tim@xen.org>
>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>> Cc: Keir Fraser <keir@xen.org>
>>> Cc: Jan Beulich <jbeulich@suse.com>
>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>> Cc: Tim Deegan <tim@xen.org>
>>> ---
>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>> index f24126d..874cb0f 100644
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>> unsigned long gla,
>>>       */
>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>           (npfec.write_access &&
>>> -          (p2m_is_discard_write(p2mt) || (p2mt == p2m_mmio_write_dm))) )
>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>      {
>>>          __put_gfn(p2m, gfn);
>>>          if ( ap2m_active )
>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>              if ( p2m_is_mmio(t) )
>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>> -            else if ( t == p2m_mmio_write_dm )
>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>> +            else if ( t == p2m_ioreq_server )
>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>              else if ( p2m_is_readonly(t) )
>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>              else if ( p2m_is_ram(t) )
>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>> +            [HVMMEM_unused] = p2m_invalid,
>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>          };
>>>
>>>          if ( copy_from_guest(&a, arg, 1) )
>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>              goto setmemtype_fail;
>>>
>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>              goto setmemtype_fail;
>>>
>>>          while ( a.nr > start_iter )
>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>              }
>>>              if ( !p2m_is_ram(t) &&
>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
>> &&
>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>> HVMMEM_ram_rw) )
>>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>> HVMMEM_ram_rw) )
>>>              {
>>>                  put_gfn(d, pfn);
>>>                  goto setmemtype_fail;
>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
>>> index 3cb6868..380ec25 100644
>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>> p2m_domain *p2m, ept_entry_t *entry,
>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>              break;
>>>          case p2m_grant_map_ro:
>>> -        case p2m_mmio_write_dm:
>>> +        case p2m_ioreq_server:
>>>              entry->r = 1;
>>>              entry->w = entry->x = 0;
>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>> index 3d80612..eabd2e3 100644
>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>> @@ -94,7 +94,7 @@ static unsigned long p2m_type_to_flags(p2m_type_t
>> t, mfn_t mfn,
>>>      default:
>>>          return flags | _PAGE_NX_BIT;
>>>      case p2m_grant_map_ro:
>>> -    case p2m_mmio_write_dm:
>>> +    case p2m_ioreq_server:
>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>      case p2m_ram_ro:
>>>      case p2m_ram_logdirty:
>>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>> b/xen/arch/x86/mm/shadow/multi.c
>>> index e5c8499..c81302a 100644
>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>
>>>      /* Need to hand off device-model MMIO to the device model */
>>>      if ( p2mt == p2m_mmio_dm
>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>      {
>>>          gpa = guest_walk_to_gpa(&gw);
>>>          goto mmio;
>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>> index 5392eb0..ee2ea9c 100644
>>> --- a/xen/include/asm-x86/p2m.h
>>> +++ b/xen/include/asm-x86/p2m.h
>>> @@ -71,7 +71,7 @@ typedef enum {
>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>      p2m_ram_broken = 13,          /* Broken page, access cause domain crash
>> */
>>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
>> model */
>>> +    p2m_ioreq_server = 15,
>>>  } p2m_type_t;
>>>
>>>  /* Modifiers to the query */
>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>
>>>  /* Write-discard types, which should discard the write operations */
>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>> diff --git a/xen/include/public/hvm/hvm_op.h
>> b/xen/include/public/hvm/hvm_op.h
>>> index 1606185..b3e45cf 100644
>>> --- a/xen/include/public/hvm/hvm_op.h
>>> +++ b/xen/include/public/hvm/hvm_op.h
>>> @@ -83,7 +83,13 @@ typedef enum {
>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device model
>> */
>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
>> model */
>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
>> model */
>>> +#else
>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this type
>>> +                                  will fail for code after 4.7.0 */
>>> +#endif
>>> +    HVMMEM_ioreq_server
>>
>> Also, I don't think we've had a convincing argument for why this patch
>> needs to be in 4.7.  The p2m name changes are internal only, and so
>> don't need to be made at all; and the old functionality will work as
>> well as it ever did.  Furthermore, the whole reason we're in this
>> situation is that we checked in a publicly-visible change to the
>> interface before it was properly ready; I think we should avoid making
>> the same mistake this time.
>>
>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>> and/or Jan have strong opinions, then I would say check in only a
>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>> and the new HVMMEM_ioreq_server enum for when the 4.8 development
>> window opens.
>>
>
> If the whole series is going in then I think this patch is ok. If this is the only patch that is going in for 4.7 then I think we need the patch to hvm_op.h to deprecate the old type and that's it. We definitely should not introduce an implementation of the type HVMMEM_ioreq_server that has the same hardcoded semantics as the old type and then change it.
> The p2m type changes are also wrong. That type needs to be left alone, presumably, so that anything using HVMMEM_mmio_write_dm and compiled against the old interface version continues to function. I think HVMMEM_ioreq_server needs to map to a new p2m type, which should be introduced in patch #3.
>

Sorry, I'm also confused now. :(

Do we really want to introduce a new p2m type? Why?
My understanding of the previous agreement is that:
1> We need the interface to keep working on old hypervisors for
HVMMEM_mmio_write_dm;
2> We need the interface to return -EINVAL on new hypervisors
for HVMMEM_mmio_write_dm;
3> We need the new type HVMMEM_ioreq_server to work on new
hypervisors;
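
To make 1>-3> concrete from a caller's side, a sketch (field names follow the
a.hvmmem_type/a.first_pfn/a.nr uses in the hunks quoted above; the struct name
and initialisation style are assumptions):

    struct xen_hvm_set_mem_type a = {
        .domid       = domid,
        .hvmmem_type = HVMMEM_ioreq_server, /* 3>: new callers use the new type */
        .first_pfn   = gfn,
        .nr          = 1,
    };
    /* A binary built before 4.7 passes HVMMEM_mmio_write_dm (value 3) instead:
     * an old hypervisor still accepts it (1>), while a new hypervisor treats
     * value 3 as HVMMEM_unused and fails the op with -EINVAL (2>). */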

Did I miss something? Or did I totally misunderstand?

B.R.
Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 15:21       ` Yu, Zhang
@ 2016-04-25 15:29         ` Paul Durrant
  2016-04-25 15:38           ` Jan Beulich
  2016-04-25 15:49           ` Yu, Zhang
  0 siblings, 2 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 15:29 UTC (permalink / raw)
  To: Yu, Zhang, George Dunlap
  Cc: Kevin Tian, Keir (Xen.org),
	Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Lv, Zhiyuan, Jun Nakajima, Wei Liu

> -----Original Message-----
> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
> Sent: 25 April 2016 16:22
> To: Paul Durrant; George Dunlap
> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> 
> 
> On 4/25/2016 10:01 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
> >> George Dunlap
> >> Sent: 25 April 2016 14:39
> >> To: Yu Zhang
> >> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
> >> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei
> Liu
> >> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> >> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> >>
> >> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang
> <yu.c.zhang@linux.intel.com>
> >> wrote:
> >>> Previously p2m type p2m_mmio_write_dm was introduced for write-
> >>> protected memory pages whose write operations are supposed to be
> >>> forwarded to and emulated by an ioreq server. Yet limitations of
> >>> rangeset restrict the number of guest pages to be write-protected.
> >>>
> >>> This patch replaces the p2m type p2m_mmio_write_dm with a new
> name:
> >>> p2m_ioreq_server, which means this p2m type can be claimed by one
> >>> ioreq server, instead of being tracked inside the rangeset of ioreq
> >>> server. Patches following up will add the related hvmop handling
> >>> code which map/unmap type p2m_ioreq_server to/from an ioreq
> server.
> >>>
> >>> changes in v3:
> >>>   - According to Jan & George's comments, keep
> >> HVMMEM_mmio_write_dm
> >>>     for old xen interface versions, and replace it with HVMMEM_unused
> >>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
> >>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem
> type
> >>>     interfaces;
> >>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
> >>
> >> Unfortunately these rather contradict each other -- I consider
> >> Reviewed-by to only stick if the code I've specified hasn't changed
> >> (or has only changed trivially).
> >>
> >> Also...
> >>
> >>>
> >>> changes in v2:
> >>>   - According to George Dunlap's comments, only rename the p2m type,
> >>>     with no behavior changes.
> >>>
> >>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
> >>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
> >>> Acked-by: Tim Deegan <tim@xen.org>
> >>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> >>> Cc: Keir Fraser <keir@xen.org>
> >>> Cc: Jan Beulich <jbeulich@suse.com>
> >>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> >>> Cc: Jun Nakajima <jun.nakajima@intel.com>
> >>> Cc: Kevin Tian <kevin.tian@intel.com>
> >>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
> >>> Cc: Tim Deegan <tim@xen.org>
> >>> ---
> >>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
> >>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
> >>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
> >>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
> >>>  xen/include/asm-x86/p2m.h       |  4 ++--
> >>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
> >>>  6 files changed, 20 insertions(+), 12 deletions(-)
> >>>
> >>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> >>> index f24126d..874cb0f 100644
> >>> --- a/xen/arch/x86/hvm/hvm.c
> >>> +++ b/xen/arch/x86/hvm/hvm.c
> >>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
> >> unsigned long gla,
> >>>       */
> >>>      if ( (p2mt == p2m_mmio_dm) ||
> >>>           (npfec.write_access &&
> >>> -          (p2m_is_discard_write(p2mt) || (p2mt ==
> p2m_mmio_write_dm))) )
> >>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
> >>>      {
> >>>          __put_gfn(p2m, gfn);
> >>>          if ( ap2m_active )
> >>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
> >> XEN_GUEST_HANDLE_PARAM(void) arg)
> >>>              get_gfn_query_unlocked(d, a.pfn, &t);
> >>>              if ( p2m_is_mmio(t) )
> >>>                  a.mem_type =  HVMMEM_mmio_dm;
> >>> -            else if ( t == p2m_mmio_write_dm )
> >>> -                a.mem_type = HVMMEM_mmio_write_dm;
> >>> +            else if ( t == p2m_ioreq_server )
> >>> +                a.mem_type = HVMMEM_ioreq_server;
> >>>              else if ( p2m_is_readonly(t) )
> >>>                  a.mem_type =  HVMMEM_ram_ro;
> >>>              else if ( p2m_is_ram(t) )
> >>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
> >> XEN_GUEST_HANDLE_PARAM(void) arg)
> >>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
> >>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
> >>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
> >>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
> >>> +            [HVMMEM_unused] = p2m_invalid,
> >>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
> >>>          };
> >>>
> >>>          if ( copy_from_guest(&a, arg, 1) )
> >>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
> >> XEN_GUEST_HANDLE_PARAM(void) arg)
> >>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
> >>>              goto setmemtype_fail;
> >>>
> >>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
> >>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
> >>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
> >>>              goto setmemtype_fail;
> >>>
> >>>          while ( a.nr > start_iter )
> >>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
> >> XEN_GUEST_HANDLE_PARAM(void) arg)
> >>>              }
> >>>              if ( !p2m_is_ram(t) &&
> >>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
> >> &&
> >>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
> >> HVMMEM_ram_rw) )
> >>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
> >> HVMMEM_ram_rw) )
> >>>              {
> >>>                  put_gfn(d, pfn);
> >>>                  goto setmemtype_fail;
> >>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
> ept.c
> >>> index 3cb6868..380ec25 100644
> >>> --- a/xen/arch/x86/mm/p2m-ept.c
> >>> +++ b/xen/arch/x86/mm/p2m-ept.c
> >>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
> >> p2m_domain *p2m, ept_entry_t *entry,
> >>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
> >>>              break;
> >>>          case p2m_grant_map_ro:
> >>> -        case p2m_mmio_write_dm:
> >>> +        case p2m_ioreq_server:
> >>>              entry->r = 1;
> >>>              entry->w = entry->x = 0;
> >>>              entry->a = !!cpu_has_vmx_ept_ad;
> >>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> >>> index 3d80612..eabd2e3 100644
> >>> --- a/xen/arch/x86/mm/p2m-pt.c
> >>> +++ b/xen/arch/x86/mm/p2m-pt.c
> >>> @@ -94,7 +94,7 @@ static unsigned long
> p2m_type_to_flags(p2m_type_t
> >> t, mfn_t mfn,
> >>>      default:
> >>>          return flags | _PAGE_NX_BIT;
> >>>      case p2m_grant_map_ro:
> >>> -    case p2m_mmio_write_dm:
> >>> +    case p2m_ioreq_server:
> >>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
> >>>      case p2m_ram_ro:
> >>>      case p2m_ram_logdirty:
> >>> diff --git a/xen/arch/x86/mm/shadow/multi.c
> >> b/xen/arch/x86/mm/shadow/multi.c
> >>> index e5c8499..c81302a 100644
> >>> --- a/xen/arch/x86/mm/shadow/multi.c
> >>> +++ b/xen/arch/x86/mm/shadow/multi.c
> >>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
> >>>
> >>>      /* Need to hand off device-model MMIO to the device model */
> >>>      if ( p2mt == p2m_mmio_dm
> >>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
> >>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
> >>>      {
> >>>          gpa = guest_walk_to_gpa(&gw);
> >>>          goto mmio;
> >>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
> >>> index 5392eb0..ee2ea9c 100644
> >>> --- a/xen/include/asm-x86/p2m.h
> >>> +++ b/xen/include/asm-x86/p2m.h
> >>> @@ -71,7 +71,7 @@ typedef enum {
> >>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
> >>>      p2m_ram_broken = 13,          /* Broken page, access cause domain
> crash
> >> */
> >>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
> >>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
> >> model */
> >>> +    p2m_ioreq_server = 15,
> >>>  } p2m_type_t;
> >>>
> >>>  /* Modifiers to the query */
> >>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
> >>>                        | p2m_to_mask(p2m_ram_ro)         \
> >>>                        | p2m_to_mask(p2m_grant_map_ro)   \
> >>>                        | p2m_to_mask(p2m_ram_shared)     \
> >>> -                      | p2m_to_mask(p2m_mmio_write_dm))
> >>> +                      | p2m_to_mask(p2m_ioreq_server))
> >>>
> >>>  /* Write-discard types, which should discard the write operations */
> >>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
> >>> diff --git a/xen/include/public/hvm/hvm_op.h
> >> b/xen/include/public/hvm/hvm_op.h
> >>> index 1606185..b3e45cf 100644
> >>> --- a/xen/include/public/hvm/hvm_op.h
> >>> +++ b/xen/include/public/hvm/hvm_op.h
> >>> @@ -83,7 +83,13 @@ typedef enum {
> >>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
> >>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
> >>>      HVMMEM_mmio_dm,            /* Reads and write go to the device
> model
> >> */
> >>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
> >> model */
> >>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
> >>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
> >> model */
> >>> +#else
> >>> +    HVMMEM_unused,             /* Placeholder; setting memory to this
> type
> >>> +                                  will fail for code after 4.7.0 */
> >>> +#endif
> >>> +    HVMMEM_ioreq_server
> >>
> >> Also, I don't think we've had a convincing argument for why this patch
> >> needs to be in 4.7.  The p2m name changes are internal only, and so
> >> don't need to be made at all; and the old functionality will work as
> >> well as it ever did.  Furthermore, the whole reason we're in this
> >> situation is that we checked in a publicly-visible change to the
> >> interface before it was properly ready; I think we should avoid making
> >> the same mistake this time.
> >>
> >> So personally I'd just leave this patch entirely for 4.8; but if Paul
> >> and/or Jan have strong opinions, then I would say check in only a
> >> patch to do the #if/#else/#endif, and leave both the p2m type changes
> >> and the new HVMMEM_ioreq_server enum for when the 4.8
> development
> >> window opens.
> >>
> >
> > If the whole series is going in then I think this patch is ok. If this is the only
> patch that is going in for 4.7 then I think we need the patch to hvm_op.h to
> deprecate the old type and that's it. We definitely should not introduce an
> implementation of the type HVMMEM_ioreq_server that has the same
> hardcoded semantics as the old type and then change it.
> > The p2m type changes are also wrong. That type needs to be left alone,
> presumably, so that anything using HVMMEM_mmio_write_dm and
> compiled to the old interface version continues to function. I think
> HVMMEM_ioreq_server needs to map to a new p2m type which should be
> introduced in patch #3.
> >
> 
> Sorry, I'm also confused now. :(
> 
> Do we really want to introduce a new p2m type? Why?
> My understanding of the previous agreement is that:
> 1> We need the interface to work on old hypervisor for
> HVMMEM_mmio_write_dm;
> 2> We need the interface to return -EINVAL for new hypervisor
> for HVMMEM_mmio_write_dm;
> 3> We need the new type HVMMEM_ioreq_server to work on new
> hypervisor;
> 
> Did I miss something? Or I totally misunderstood?
> 

I don't know. I'm confused too. What we definitely don't want to do is add a new HVMMEM type and have it map to the old behaviour, otherwise we're no better off.

The question I'm not clear on the answer to is what happens to old code:

Should it continue to compile? If so, should it continue to run?
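
To make that concrete, by "old code" I mean callers roughly like the sketch
below. The wrapper name is made up (it stands in for whatever
HVMOP_set_mem_type path the caller really uses, not a specific libxc
function), and xenctrl.h is only assumed here for the domid_t and
hvmmem_type_t definitions:

    /*
     * Sketch of a pre-4.7 caller write-protecting one page for emulation.
     * set_mem_type() is a hypothetical wrapper around HVMOP_set_mem_type,
     * not a libxc function; xenctrl.h is assumed to supply domid_t and the
     * hvmmem_type_t values shown in the hunk above.
     */
    #include <stdint.h>
    #include <xenctrl.h>

    int set_mem_type(domid_t domid, hvmmem_type_t type,
                     uint64_t first_pfn, uint64_t nr);      /* hypothetical */

    static int write_protect_page(domid_t domid, uint64_t pfn)
    {
        /*
         * Built with __XEN_INTERFACE_VERSION__ < 0x00040700 this still
         * compiles; the open question is whether a 4.7 hypervisor should
         * keep accepting it at run time.
         */
        return set_mem_type(domid, HVMMEM_mmio_write_dm, pfn, 1);
    }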

  Paul

> B.R.
> Yu
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 15:29         ` Paul Durrant
@ 2016-04-25 15:38           ` Jan Beulich
  2016-04-25 15:53             ` Yu, Zhang
  2016-04-25 15:49           ` Yu, Zhang
  1 sibling, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2016-04-25 15:38 UTC (permalink / raw)
  To: George Dunlap, Paul Durrant, Zhang Yu
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim(Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)

>>> On 25.04.16 at 17:29, <Paul.Durrant@citrix.com> wrote:
>>  -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 25 April 2016 16:22
>> To: Paul Durrant; George Dunlap
>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>> 
>> 
>> 
>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>> >> -----Original Message-----
>> >> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>> >> George Dunlap
>> >> Sent: 25 April 2016 14:39
>> >> To: Yu Zhang
>> >> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> >> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei
>> Liu
>> >> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> >> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>> >>
>> >> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang
>> <yu.c.zhang@linux.intel.com>
>> >> wrote:
>> >>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>> >>> protected memory pages whose write operations are supposed to be
>> >>> forwarded to and emulated by an ioreq server. Yet limitations of
>> >>> rangeset restrict the number of guest pages to be write-protected.
>> >>>
>> >>> This patch replaces the p2m type p2m_mmio_write_dm with a new
>> name:
>> >>> p2m_ioreq_server, which means this p2m type can be claimed by one
>> >>> ioreq server, instead of being tracked inside the rangeset of ioreq
>> >>> server. Patches following up will add the related hvmop handling
>> >>> code which map/unmap type p2m_ioreq_server to/from an ioreq
>> server.
>> >>>
>> >>> changes in v3:
>> >>>   - According to Jan & George's comments, keep
>> >> HVMMEM_mmio_write_dm
>> >>>     for old xen interface versions, and replace it with HVMMEM_unused
>> >>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>> >>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem
>> type
>> >>>     interfaces;
>> >>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>> >>
>> >> Unfortunately these rather contradict each other -- I consider
>> >> Reviewed-by to only stick if the code I've specified hasn't changed
>> >> (or has only changed trivially).
>> >>
>> >> Also...
>> >>
>> >>>
>> >>> changes in v2:
>> >>>   - According to George Dunlap's comments, only rename the p2m type,
>> >>>     with no behavior changes.
>> >>>
>> >>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>> >>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>> >>> Acked-by: Tim Deegan <tim@xen.org>
>> >>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> >>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>> >>> Cc: Keir Fraser <keir@xen.org>
>> >>> Cc: Jan Beulich <jbeulich@suse.com>
>> >>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>> >>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>> >>> Cc: Kevin Tian <kevin.tian@intel.com>
>> >>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>> >>> Cc: Tim Deegan <tim@xen.org>
>> >>> ---
>> >>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>> >>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>> >>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>> >>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>> >>>  xen/include/asm-x86/p2m.h       |  4 ++--
>> >>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>> >>>  6 files changed, 20 insertions(+), 12 deletions(-)
>> >>>
>> >>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>> >>> index f24126d..874cb0f 100644
>> >>> --- a/xen/arch/x86/hvm/hvm.c
>> >>> +++ b/xen/arch/x86/hvm/hvm.c
>> >>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>> >> unsigned long gla,
>> >>>       */
>> >>>      if ( (p2mt == p2m_mmio_dm) ||
>> >>>           (npfec.write_access &&
>> >>> -          (p2m_is_discard_write(p2mt) || (p2mt ==
>> p2m_mmio_write_dm))) )
>> >>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>> >>>      {
>> >>>          __put_gfn(p2m, gfn);
>> >>>          if ( ap2m_active )
>> >>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>> >> XEN_GUEST_HANDLE_PARAM(void) arg)
>> >>>              get_gfn_query_unlocked(d, a.pfn, &t);
>> >>>              if ( p2m_is_mmio(t) )
>> >>>                  a.mem_type =  HVMMEM_mmio_dm;
>> >>> -            else if ( t == p2m_mmio_write_dm )
>> >>> -                a.mem_type = HVMMEM_mmio_write_dm;
>> >>> +            else if ( t == p2m_ioreq_server )
>> >>> +                a.mem_type = HVMMEM_ioreq_server;
>> >>>              else if ( p2m_is_readonly(t) )
>> >>>                  a.mem_type =  HVMMEM_ram_ro;
>> >>>              else if ( p2m_is_ram(t) )
>> >>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>> >> XEN_GUEST_HANDLE_PARAM(void) arg)
>> >>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>> >>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>> >>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>> >>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>> >>> +            [HVMMEM_unused] = p2m_invalid,
>> >>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>> >>>          };
>> >>>
>> >>>          if ( copy_from_guest(&a, arg, 1) )
>> >>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>> >> XEN_GUEST_HANDLE_PARAM(void) arg)
>> >>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>> >>>              goto setmemtype_fail;
>> >>>
>> >>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>> >>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>> >>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>> >>>              goto setmemtype_fail;
>> >>>
>> >>>          while ( a.nr > start_iter )
>> >>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>> >> XEN_GUEST_HANDLE_PARAM(void) arg)
>> >>>              }
>> >>>              if ( !p2m_is_ram(t) &&
>> >>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
>> >> &&
>> >>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>> >> HVMMEM_ram_rw) )
>> >>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>> >> HVMMEM_ram_rw) )
>> >>>              {
>> >>>                  put_gfn(d, pfn);
>> >>>                  goto setmemtype_fail;
>> >>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
>> ept.c
>> >>> index 3cb6868..380ec25 100644
>> >>> --- a/xen/arch/x86/mm/p2m-ept.c
>> >>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> >>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>> >> p2m_domain *p2m, ept_entry_t *entry,
>> >>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>> >>>              break;
>> >>>          case p2m_grant_map_ro:
>> >>> -        case p2m_mmio_write_dm:
>> >>> +        case p2m_ioreq_server:
>> >>>              entry->r = 1;
>> >>>              entry->w = entry->x = 0;
>> >>>              entry->a = !!cpu_has_vmx_ept_ad;
>> >>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>> >>> index 3d80612..eabd2e3 100644
>> >>> --- a/xen/arch/x86/mm/p2m-pt.c
>> >>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> >>> @@ -94,7 +94,7 @@ static unsigned long
>> p2m_type_to_flags(p2m_type_t
>> >> t, mfn_t mfn,
>> >>>      default:
>> >>>          return flags | _PAGE_NX_BIT;
>> >>>      case p2m_grant_map_ro:
>> >>> -    case p2m_mmio_write_dm:
>> >>> +    case p2m_ioreq_server:
>> >>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>> >>>      case p2m_ram_ro:
>> >>>      case p2m_ram_logdirty:
>> >>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>> >> b/xen/arch/x86/mm/shadow/multi.c
>> >>> index e5c8499..c81302a 100644
>> >>> --- a/xen/arch/x86/mm/shadow/multi.c
>> >>> +++ b/xen/arch/x86/mm/shadow/multi.c
>> >>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>> >>>
>> >>>      /* Need to hand off device-model MMIO to the device model */
>> >>>      if ( p2mt == p2m_mmio_dm
>> >>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>> >>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>> >>>      {
>> >>>          gpa = guest_walk_to_gpa(&gw);
>> >>>          goto mmio;
>> >>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>> >>> index 5392eb0..ee2ea9c 100644
>> >>> --- a/xen/include/asm-x86/p2m.h
>> >>> +++ b/xen/include/asm-x86/p2m.h
>> >>> @@ -71,7 +71,7 @@ typedef enum {
>> >>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>> >>>      p2m_ram_broken = 13,          /* Broken page, access cause domain
>> crash
>> >> */
>> >>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>> >>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
>> >> model */
>> >>> +    p2m_ioreq_server = 15,
>> >>>  } p2m_type_t;
>> >>>
>> >>>  /* Modifiers to the query */
>> >>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>> >>>                        | p2m_to_mask(p2m_ram_ro)         \
>> >>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>> >>>                        | p2m_to_mask(p2m_ram_shared)     \
>> >>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>> >>> +                      | p2m_to_mask(p2m_ioreq_server))
>> >>>
>> >>>  /* Write-discard types, which should discard the write operations */
>> >>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>> >>> diff --git a/xen/include/public/hvm/hvm_op.h
>> >> b/xen/include/public/hvm/hvm_op.h
>> >>> index 1606185..b3e45cf 100644
>> >>> --- a/xen/include/public/hvm/hvm_op.h
>> >>> +++ b/xen/include/public/hvm/hvm_op.h
>> >>> @@ -83,7 +83,13 @@ typedef enum {
>> >>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>> >>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>> >>>      HVMMEM_mmio_dm,            /* Reads and write go to the device
>> model
>> >> */
>> >>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
>> >> model */
>> >>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>> >>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
>> >> model */
>> >>> +#else
>> >>> +    HVMMEM_unused,             /* Placeholder; setting memory to this
>> type
>> >>> +                                  will fail for code after 4.7.0 */
>> >>> +#endif
>> >>> +    HVMMEM_ioreq_server
>> >>
>> >> Also, I don't think we've had a convincing argument for why this patch
>> >> needs to be in 4.7.  The p2m name changes are internal only, and so
>> >> don't need to be made at all; and the old functionality will work as
>> >> well as it ever did.  Furthermore, the whole reason we're in this
>> >> situation is that we checked in a publicly-visible change to the
>> >> interface before it was properly ready; I think we should avoid making
>> >> the same mistake this time.
>> >>
>> >> So personally I'd just leave this patch entirely for 4.8; but if Paul
>> >> and/or Jan have strong opinions, then I would say check in only a
>> >> patch to do the #if/#else/#endif, and leave both the p2m type changes
>> >> and the new HVMMEM_ioreq_server enum for when the 4.8
>> development
>> >> window opens.
>> >>
>> >
>> > If the whole series is going in then I think this patch is ok. If this is the 
> only
>> patch that is going in for 4.7 then I think we need the patch to hvm_op.h to
>> deprecate the old type and that's it. We definitely should not introduce an
>> implementation of the type HVMMEM_ioreq_server that has the same
>> hardcoded semantics as the old type and then change it.
>> > The p2m type changes are also wrong. That type needs to be left alone,
>> presumably, so that anything using HVMMEM_mmio_write_dm and
>> compiled to the old interface version continues to function. I think
>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>> introduced in patch #3.
>> >
>> 
>> Sorry, I'm also confused now. :(
>> 
>> Do we really want to introduce a new p2m type? Why?
>> My understanding of the previous agreement is that:
>> 1> We need the interface to work on old hypervisor for
>> HVMMEM_mmio_write_dm;
>> 2> We need the interface to return -EINVAL for new hypervisor
>> for HVMMEM_mmio_write_dm;
>> 3> We need the new type HVMMEM_ioreq_server to work on new
>> hypervisor;
>> 
>> Did I miss something? Or I totally misunderstood?
>> 
> 
> I don't know. I'm confused too. What we definitely don't want to do is add a 
> new HVMMEM type and have it map to the old behaviour, otherwise we're no 
> better off.
> 
> The question I'm not clear on the answer to is what happens to old code:
> 
> Should it continue to compile? If so, should it continue to run?

We only need to be concerned about the "get type" functionality,
as that's the only thing an arbitrary guest can use. If the
hypercall simply never returns the old type, then old code will
still work (it'll just have some dead code on new Xen), and hence
it compiling against the older interface is fine (and, from general
considerations, a requirement).
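
To illustrate, the only guest-side pattern that needs to keep working is
roughly the sketch below; query_mem_type() is a made-up placeholder for
whatever HVMOP_get_mem_type wrapper the code already has, and xenctrl.h is
assumed only for the type definitions:

    /*
     * Sketch of the "get type" pattern: query_mem_type() is a hypothetical
     * wrapper around HVMOP_get_mem_type, not a libxc call; xenctrl.h is
     * assumed only for domid_t and the hvmmem_type_t values.
     */
    #include <stdint.h>
    #include <xenctrl.h>

    int query_mem_type(domid_t domid, uint64_t pfn,
                       hvmmem_type_t *type);                /* hypothetical */

    static int classify_pfn(domid_t domid, uint64_t pfn)
    {
        hvmmem_type_t t;

        if ( query_mem_type(domid, pfn, &t) )
            return -1;

        if ( t == HVMMEM_mmio_write_dm )
            return 1;   /* old Xen can return this; a 4.7+ hypervisor never
                           will, so on new Xen this branch is simply dead */

        return 0;       /* everything else handled as before */
    }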

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 15:29         ` Paul Durrant
  2016-04-25 15:38           ` Jan Beulich
@ 2016-04-25 15:49           ` Yu, Zhang
  1 sibling, 0 replies; 35+ messages in thread
From: Yu, Zhang @ 2016-04-25 15:49 UTC (permalink / raw)
  To: Paul Durrant, George Dunlap
  Cc: Kevin Tian, Keir (Xen.org),
	Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Lv, Zhiyuan, Jun Nakajima, Wei Liu



On 4/25/2016 11:29 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 25 April 2016 16:22
>> To: Paul Durrant; George Dunlap
>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>>
>>
>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>>>> George Dunlap
>>>> Sent: 25 April 2016 14:39
>>>> To: Yu Zhang
>>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei
>> Liu
>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>
>>>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang
>> <yu.c.zhang@linux.intel.com>
>>>> wrote:
>>>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>>>> protected memory pages whose write operations are supposed to be
>>>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>>>> rangeset restrict the number of guest pages to be write-protected.
>>>>>
>>>>> This patch replaces the p2m type p2m_mmio_write_dm with a new
>> name:
>>>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>>>> server. Patches following up will add the related hvmop handling
>>>>> code which map/unmap type p2m_ioreq_server to/from an ioreq
>> server.
>>>>>
>>>>> changes in v3:
>>>>>   - According to Jan & George's comments, keep
>>>> HVMMEM_mmio_write_dm
>>>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem
>> type
>>>>>     interfaces;
>>>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>>>
>>>> Unfortunately these rather contradict each other -- I consider
>>>> Reviewed-by to only stick if the code I've specified hasn't changed
>>>> (or has only changed trivially).
>>>>
>>>> Also...
>>>>
>>>>>
>>>>> changes in v2:
>>>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>>>     with no behavior changes.
>>>>>
>>>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>>> Acked-by: Tim Deegan <tim@xen.org>
>>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>>>> Cc: Keir Fraser <keir@xen.org>
>>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>>> Cc: Tim Deegan <tim@xen.org>
>>>>> ---
>>>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>>>
>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>> index f24126d..874cb0f 100644
>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>>>> unsigned long gla,
>>>>>       */
>>>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>>>           (npfec.write_access &&
>>>>> -          (p2m_is_discard_write(p2mt) || (p2mt ==
>> p2m_mmio_write_dm))) )
>>>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>>>      {
>>>>>          __put_gfn(p2m, gfn);
>>>>>          if ( ap2m_active )
>>>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>>>              if ( p2m_is_mmio(t) )
>>>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>>>> -            else if ( t == p2m_mmio_write_dm )
>>>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>>>> +            else if ( t == p2m_ioreq_server )
>>>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>>>              else if ( p2m_is_readonly(t) )
>>>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>>>              else if ( p2m_is_ram(t) )
>>>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>>>> +            [HVMMEM_unused] = p2m_invalid,
>>>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>>>          };
>>>>>
>>>>>          if ( copy_from_guest(&a, arg, 1) )
>>>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>>>              goto setmemtype_fail;
>>>>>
>>>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>>>              goto setmemtype_fail;
>>>>>
>>>>>          while ( a.nr > start_iter )
>>>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>              }
>>>>>              if ( !p2m_is_ram(t) &&
>>>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
>>>> &&
>>>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>>>> HVMMEM_ram_rw) )
>>>>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>>>> HVMMEM_ram_rw) )
>>>>>              {
>>>>>                  put_gfn(d, pfn);
>>>>>                  goto setmemtype_fail;
>>>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
>> ept.c
>>>>> index 3cb6868..380ec25 100644
>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>>>> p2m_domain *p2m, ept_entry_t *entry,
>>>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>>>              break;
>>>>>          case p2m_grant_map_ro:
>>>>> -        case p2m_mmio_write_dm:
>>>>> +        case p2m_ioreq_server:
>>>>>              entry->r = 1;
>>>>>              entry->w = entry->x = 0;
>>>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>>>> index 3d80612..eabd2e3 100644
>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>> @@ -94,7 +94,7 @@ static unsigned long
>> p2m_type_to_flags(p2m_type_t
>>>> t, mfn_t mfn,
>>>>>      default:
>>>>>          return flags | _PAGE_NX_BIT;
>>>>>      case p2m_grant_map_ro:
>>>>> -    case p2m_mmio_write_dm:
>>>>> +    case p2m_ioreq_server:
>>>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>>>      case p2m_ram_ro:
>>>>>      case p2m_ram_logdirty:
>>>>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>>>> b/xen/arch/x86/mm/shadow/multi.c
>>>>> index e5c8499..c81302a 100644
>>>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>>>
>>>>>      /* Need to hand off device-model MMIO to the device model */
>>>>>      if ( p2mt == p2m_mmio_dm
>>>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>>>      {
>>>>>          gpa = guest_walk_to_gpa(&gw);
>>>>>          goto mmio;
>>>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>>>> index 5392eb0..ee2ea9c 100644
>>>>> --- a/xen/include/asm-x86/p2m.h
>>>>> +++ b/xen/include/asm-x86/p2m.h
>>>>> @@ -71,7 +71,7 @@ typedef enum {
>>>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>>>      p2m_ram_broken = 13,          /* Broken page, access cause domain
>> crash
>>>> */
>>>>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>>>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
>>>> model */
>>>>> +    p2m_ioreq_server = 15,
>>>>>  } p2m_type_t;
>>>>>
>>>>>  /* Modifiers to the query */
>>>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>>>
>>>>>  /* Write-discard types, which should discard the write operations */
>>>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>>>> diff --git a/xen/include/public/hvm/hvm_op.h
>>>> b/xen/include/public/hvm/hvm_op.h
>>>>> index 1606185..b3e45cf 100644
>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>> @@ -83,7 +83,13 @@ typedef enum {
>>>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device
>> model
>>>> */
>>>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
>>>> model */
>>>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
>>>> model */
>>>>> +#else
>>>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this
>> type
>>>>> +                                  will fail for code after 4.7.0 */
>>>>> +#endif
>>>>> +    HVMMEM_ioreq_server
>>>>
>>>> Also, I don't think we've had a convincing argument for why this patch
>>>> needs to be in 4.7.  The p2m name changes are internal only, and so
>>>> don't need to be made at all; and the old functionality will work as
>>>> well as it ever did.  Furthermore, the whole reason we're in this
>>>> situation is that we checked in a publicly-visible change to the
>>>> interface before it was properly ready; I think we should avoid making
>>>> the same mistake this time.
>>>>
>>>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>>>> and/or Jan have strong opinions, then I would say check in only a
>>>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>>>> and the new HVMMEM_ioreq_server enum for when the 4.8
>> development
>>>> window opens.
>>>>
>>>
>>> If the whole series is going in then I think this patch is ok. If this is the only
>> patch that is going in for 4.7 then I think we need the patch to hvm_op.h to
>> deprecate the old type and that's it. We definitely should not introduce an
>> implementation of the type HVMMEM_ioreq_server that has the same
>> hardcoded semantics as the old type and then change it.
>>> The p2m type changes are also wrong. That type needs to be left alone,
>> presumably, so that anything using HVMMEM_mmio_write_dm and
>> compiled to the old interface version continues to function. I think
>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>> introduced in patch #3.
>>>
>>
>> Sorry, I'm also confused now. :(
>>
>> Do we really want to introduce a new p2m type? Why?
>> My understanding of the previous agreement is that:
>> 1> We need the interface to work on old hypervisor for
>> HVMMEM_mmio_write_dm;
>> 2> We need the interface to return -EINVAL for new hypervisor
>> for HVMMEM_mmio_write_dm;
>> 3> We need the new type HVMMEM_ioreq_server to work on new
>> hypervisor;
>>
>> Did I miss something? Or I totally misunderstood?
>>
>
> I don't know. I'm confused too. What we definitely don't want to do is add a new HVMMEM type and have it map to the old behaviour, otherwise we're no better off.
>

Thanks for your reply, Paul.

Well, but the old HVMMEM type does not exist anymore, and the behaviour is
no longer the old one once patch 3 is accepted. Would it be more reasonable
if all three of these patches were accepted in the next version, 4.8 (or
4.7.1, if there is one)?

One reason I hesitate to remove the old p2m_mmio_write_dm (which is 0xf,
IIRC) and define p2m_ioreq_server with another value (say, 0x10) is:
what if another future p2m type is later introduced with this value (0xf)?
That would also be weird…
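
To spell out what I'm hesitant about, the resulting layout would presumably
be something like this sketch (not part of the posted patches; the comments
are mine):

    /*
     * Sketch (not part of the posted patches): the old type removed,
     * p2m_ioreq_server given a fresh value, and 15 left as a hole that a
     * future type might silently reuse.
     */
    typedef enum {
        /* ... p2m types 0 - 14 exactly as in xen/include/asm-x86/p2m.h ... */
        /* 15 now unused -- was p2m_mmio_write_dm */
        p2m_ioreq_server = 16,     /* pages claimed by a single ioreq server */
    } p2m_type_t;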

> The question I'm not clear on the answer to is what happens to old code:
>
> Should it continue to compile? If so, should it continue to run?
>

By "old code", do you mean code using the old interface on a new hypervisor, or both?
>   Paul
>
>> B.R.
>> Yu

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 15:38           ` Jan Beulich
@ 2016-04-25 15:53             ` Yu, Zhang
  2016-04-25 16:15               ` George Dunlap
  0 siblings, 1 reply; 35+ messages in thread
From: Yu, Zhang @ 2016-04-25 15:53 UTC (permalink / raw)
  To: Jan Beulich, George Dunlap, Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim(Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)



On 4/25/2016 11:38 PM, Jan Beulich wrote:
>>>> On 25.04.16 at 17:29, <Paul.Durrant@citrix.com> wrote:
>>>  -----Original Message-----
>>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>>> Sent: 25 April 2016 16:22
>>> To: Paul Durrant; George Dunlap
>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>
>>>
>>>
>>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>>>>> -----Original Message-----
>>>>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>>>>> George Dunlap
>>>>> Sent: 25 April 2016 14:39
>>>>> To: Yu Zhang
>>>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>>>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan Beulich; Wei
>>> Liu
>>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>>
>>>>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang
>>> <yu.c.zhang@linux.intel.com>
>>>>> wrote:
>>>>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>>>>> protected memory pages whose write operations are supposed to be
>>>>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>>>>> rangeset restrict the number of guest pages to be write-protected.
>>>>>>
>>>>>> This patch replaces the p2m type p2m_mmio_write_dm with a new
>>> name:
>>>>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>>>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>>>>> server. Patches following up will add the related hvmop handling
>>>>>> code which map/unmap type p2m_ioreq_server to/from an ioreq
>>> server.
>>>>>>
>>>>>> changes in v3:
>>>>>>   - According to Jan & George's comments, keep
>>>>> HVMMEM_mmio_write_dm
>>>>>>     for old xen interface versions, and replace it with HVMMEM_unused
>>>>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem
>>> type
>>>>>>     interfaces;
>>>>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>>>>
>>>>> Unfortunately these rather contradict each other -- I consider
>>>>> Reviewed-by to only stick if the code I've specified hasn't changed
>>>>> (or has only changed trivially).
>>>>>
>>>>> Also...
>>>>>
>>>>>>
>>>>>> changes in v2:
>>>>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>>>>     with no behavior changes.
>>>>>>
>>>>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>>>> Acked-by: Tim Deegan <tim@xen.org>
>>>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>>>>> Cc: Keir Fraser <keir@xen.org>
>>>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>> Cc: Tim Deegan <tim@xen.org>
>>>>>> ---
>>>>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>>>>
>>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>>> index f24126d..874cb0f 100644
>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>>>>> unsigned long gla,
>>>>>>       */
>>>>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>>>>           (npfec.write_access &&
>>>>>> -          (p2m_is_discard_write(p2mt) || (p2mt ==
>>> p2m_mmio_write_dm))) )
>>>>>> +          (p2m_is_discard_write(p2mt) || (p2mt == p2m_ioreq_server))) )
>>>>>>      {
>>>>>>          __put_gfn(p2m, gfn);
>>>>>>          if ( ap2m_active )
>>>>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>>>>              if ( p2m_is_mmio(t) )
>>>>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>>>>> -            else if ( t == p2m_mmio_write_dm )
>>>>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>>>>> +            else if ( t == p2m_ioreq_server )
>>>>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>>>>              else if ( p2m_is_readonly(t) )
>>>>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>>>>              else if ( p2m_is_ram(t) )
>>>>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>>>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>>>>> +            [HVMMEM_unused] = p2m_invalid,
>>>>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>>>>          };
>>>>>>
>>>>>>          if ( copy_from_guest(&a, arg, 1) )
>>>>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>               ((a.first_pfn + a.nr - 1) > domain_get_maximum_gpfn(d)) )
>>>>>>              goto setmemtype_fail;
>>>>>>
>>>>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>>>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>>>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>>>>              goto setmemtype_fail;
>>>>>>
>>>>>>          while ( a.nr > start_iter )
>>>>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>              }
>>>>>>              if ( !p2m_is_ram(t) &&
>>>>>>                   (!p2m_is_hole(t) || a.hvmmem_type != HVMMEM_mmio_dm)
>>>>> &&
>>>>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>>>>> HVMMEM_ram_rw) )
>>>>>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>>>>> HVMMEM_ram_rw) )
>>>>>>              {
>>>>>>                  put_gfn(d, pfn);
>>>>>>                  goto setmemtype_fail;
>>>>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
>>> ept.c
>>>>>> index 3cb6868..380ec25 100644
>>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>>>>> p2m_domain *p2m, ept_entry_t *entry,
>>>>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>>>>              break;
>>>>>>          case p2m_grant_map_ro:
>>>>>> -        case p2m_mmio_write_dm:
>>>>>> +        case p2m_ioreq_server:
>>>>>>              entry->r = 1;
>>>>>>              entry->w = entry->x = 0;
>>>>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>>>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>>>>> index 3d80612..eabd2e3 100644
>>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>>> @@ -94,7 +94,7 @@ static unsigned long
>>> p2m_type_to_flags(p2m_type_t
>>>>> t, mfn_t mfn,
>>>>>>      default:
>>>>>>          return flags | _PAGE_NX_BIT;
>>>>>>      case p2m_grant_map_ro:
>>>>>> -    case p2m_mmio_write_dm:
>>>>>> +    case p2m_ioreq_server:
>>>>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>>>>      case p2m_ram_ro:
>>>>>>      case p2m_ram_logdirty:
>>>>>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>>>>> b/xen/arch/x86/mm/shadow/multi.c
>>>>>> index e5c8499..c81302a 100644
>>>>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>>>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>>>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>>>>
>>>>>>      /* Need to hand off device-model MMIO to the device model */
>>>>>>      if ( p2mt == p2m_mmio_dm
>>>>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>>>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>>>>      {
>>>>>>          gpa = guest_walk_to_gpa(&gw);
>>>>>>          goto mmio;
>>>>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>>>>> index 5392eb0..ee2ea9c 100644
>>>>>> --- a/xen/include/asm-x86/p2m.h
>>>>>> +++ b/xen/include/asm-x86/p2m.h
>>>>>> @@ -71,7 +71,7 @@ typedef enum {
>>>>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>>>>      p2m_ram_broken = 13,          /* Broken page, access cause domain
>>> crash
>>>>> */
>>>>>>      p2m_map_foreign  = 14,        /* ram pages from foreign domain */
>>>>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the device
>>>>> model */
>>>>>> +    p2m_ioreq_server = 15,
>>>>>>  } p2m_type_t;
>>>>>>
>>>>>>  /* Modifiers to the query */
>>>>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>>>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>>>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>>>>
>>>>>>  /* Write-discard types, which should discard the write operations */
>>>>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>>>>> diff --git a/xen/include/public/hvm/hvm_op.h
>>>>> b/xen/include/public/hvm/hvm_op.h
>>>>>> index 1606185..b3e45cf 100644
>>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>>> @@ -83,7 +83,13 @@ typedef enum {
>>>>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>>>>      HVMMEM_ram_ro,             /* Read-only; writes are discarded */
>>>>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device
>>> model
>>>>> */
>>>>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device
>>>>> model */
>>>>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device
>>>>> model */
>>>>>> +#else
>>>>>> +    HVMMEM_unused,             /* Placeholder; setting memory to this
>>> type
>>>>>> +                                  will fail for code after 4.7.0 */
>>>>>> +#endif
>>>>>> +    HVMMEM_ioreq_server
>>>>>
>>>>> Also, I don't think we've had a convincing argument for why this patch
>>>>> needs to be in 4.7.  The p2m name changes are internal only, and so
>>>>> don't need to be made at all; and the old functionality will work as
>>>>> well as it ever did.  Furthermore, the whole reason we're in this
>>>>> situation is that we checked in a publicly-visible change to the
>>>>> interface before it was properly ready; I think we should avoid making
>>>>> the same mistake this time.
>>>>>
>>>>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>>>>> and/or Jan have strong opinions, then I would say check in only a
>>>>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>>>>> and the new HVMMEM_ioreq_server enum for when the 4.8
>>> development
>>>>> window opens.
>>>>>
>>>>
>>>> If the whole series is going in then I think this patch is ok. If this is the
>> only
>>> patch that is going in for 4.7 then I think we need the patch to hvm_op.h to
>>> deprecate the old type and that's it. We definitely should not introduce an
>>> implementation of the type HVMMEM_ioreq_server that has the same
>>> hardcoded semantics as the old type and then change it.
>>>> The p2m type changes are also wrong. That type needs to be left alone,
>>> presumably, so that anything using HVMMEM_mmio_write_dm and
>>> compiled to the old interface version continues to function. I think
>>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>>> introduced in patch #3.
>>>>
>>>
>>> Sorry, I'm also confused now. :(
>>>
>>> Do we really want to introduce a new p2m type? Why?
>>> My understanding of the previous agreement is that:
>>> 1> We need the interface to work on old hypervisor for
>>> HVMMEM_mmio_write_dm;
>>> 2> We need the interface to return -EINVAL for new hypervisor
>>> for HVMMEM_mmio_write_dm;
>>> 3> We need the new type HVMMEM_ioreq_server to work on new
>>> hypervisor;
>>>
>>> Did I miss something? Or I totally misunderstood?
>>>
>>
>> I don't know. I'm confused too. What we definitely don't want to do is add a
>> new HVMMEM type and have it map to the old behaviour, otherwise we're no
>> better off.
>>
>> The question I'm not clear on the answer to is what happens to old code:
>>
>> Should it continue to compile? If so, should it continue to run?
>
> We only need to be concerned about the "get type" functionality,
> as that's the only thing an arbitrary guest can use. If the
> hypercall simply never returns the old type, then old code will
> still work (it'll just have some dead code on new Xen), and hence
> it compiling against the older interface is fine (and, from general
> considerations, a requirement).
>

Thanks, Jan. And I think the answer is yes: the new hypervisor will
only return HVMMEM_ioreq_server, which is a different value now.

> Jan
>

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 15:53             ` Yu, Zhang
@ 2016-04-25 16:15               ` George Dunlap
  2016-04-25 16:20                 ` Yu, Zhang
  2016-04-25 17:01                 ` Paul Durrant
  0 siblings, 2 replies; 35+ messages in thread
From: George Dunlap @ 2016-04-25 16:15 UTC (permalink / raw)
  To: Yu, Zhang, Jan Beulich, Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim(Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)

On 25/04/16 16:53, Yu, Zhang wrote:
> 
> 
> On 4/25/2016 11:38 PM, Jan Beulich wrote:
>>>>> On 25.04.16 at 17:29, <Paul.Durrant@citrix.com> wrote:
>>>>  -----Original Message-----
>>>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>>>> Sent: 25 April 2016 16:22
>>>> To: Paul Durrant; George Dunlap
>>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun Nakajima;
>>>> Andrew Cooper; Tim (Xen.org); Lv, Zhiyuan; Jan Beulich; Wei Liu
>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for
>>>> 4.7):
>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>
>>>>
>>>>
>>>> On 4/25/2016 10:01 PM, Paul Durrant wrote:
>>>>>> -----Original Message-----
>>>>>> From: dunlapg@gmail.com [mailto:dunlapg@gmail.com] On Behalf Of
>>>>>> George Dunlap
>>>>>> Sent: 25 April 2016 14:39
>>>>>> To: Yu Zhang
>>>>>> Cc: xen-devel@lists.xen.org; Kevin Tian; Keir (Xen.org); Jun
>>>>>> Nakajima;
>>>>>> Andrew Cooper; Tim (Xen.org); Paul Durrant; Lv, Zhiyuan; Jan
>>>>>> Beulich; Wei
>>>> Liu
>>>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for
>>>>>> 4.7):
>>>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>>>
>>>>>> On Mon, Apr 25, 2016 at 11:35 AM, Yu Zhang
>>>> <yu.c.zhang@linux.intel.com>
>>>>>> wrote:
>>>>>>> Previously p2m type p2m_mmio_write_dm was introduced for write-
>>>>>>> protected memory pages whose write operations are supposed to be
>>>>>>> forwarded to and emulated by an ioreq server. Yet limitations of
>>>>>>> rangeset restrict the number of guest pages to be write-protected.
>>>>>>>
>>>>>>> This patch replaces the p2m type p2m_mmio_write_dm with a new
>>>> name:
>>>>>>> p2m_ioreq_server, which means this p2m type can be claimed by one
>>>>>>> ioreq server, instead of being tracked inside the rangeset of ioreq
>>>>>>> server. Patches following up will add the related hvmop handling
>>>>>>> code which map/unmap type p2m_ioreq_server to/from an ioreq
>>>> server.
>>>>>>>
>>>>>>> changes in v3:
>>>>>>>   - According to Jan & George's comments, keep
>>>>>> HVMMEM_mmio_write_dm
>>>>>>>     for old xen interface versions, and replace it with
>>>>>>> HVMMEM_unused
>>>>>>>     for xen interfaces newer than 4.7.0; For p2m_ioreq_server, a new
>>>>>>>     enum - HVMMEM_ioreq_server is introduced for the get/set mem
>>>> type
>>>>>>>     interfaces;
>>>>>>>   - Add George's Reviewed-by and Acked-by from Tim & Andrew.
>>>>>>
>>>>>> Unfortunately these rather contradict each other -- I consider
>>>>>> Reviewed-by to only stick if the code I've specified hasn't changed
>>>>>> (or has only changed trivially).
>>>>>>
>>>>>> Also...
>>>>>>
>>>>>>>
>>>>>>> changes in v2:
>>>>>>>   - According to George Dunlap's comments, only rename the p2m type,
>>>>>>>     with no behavior changes.
>>>>>>>
>>>>>>> Signed-off-by: Paul Durrant <paul.durrant@citrix.com>
>>>>>>> Signed-off-by: Yu Zhang <yu.c.zhang@linux.intel.com>
>>>>>>> Acked-by: Tim Deegan <tim@xen.org>
>>>>>>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>>> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
>>>>>>> Cc: Keir Fraser <keir@xen.org>
>>>>>>> Cc: Jan Beulich <jbeulich@suse.com>
>>>>>>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>>>> Cc: Jun Nakajima <jun.nakajima@intel.com>
>>>>>>> Cc: Kevin Tian <kevin.tian@intel.com>
>>>>>>> Cc: George Dunlap <george.dunlap@eu.citrix.com>
>>>>>>> Cc: Tim Deegan <tim@xen.org>
>>>>>>> ---
>>>>>>>  xen/arch/x86/hvm/hvm.c          | 14 ++++++++------
>>>>>>>  xen/arch/x86/mm/p2m-ept.c       |  2 +-
>>>>>>>  xen/arch/x86/mm/p2m-pt.c        |  2 +-
>>>>>>>  xen/arch/x86/mm/shadow/multi.c  |  2 +-
>>>>>>>  xen/include/asm-x86/p2m.h       |  4 ++--
>>>>>>>  xen/include/public/hvm/hvm_op.h |  8 +++++++-
>>>>>>>  6 files changed, 20 insertions(+), 12 deletions(-)
>>>>>>>
>>>>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>>>>> index f24126d..874cb0f 100644
>>>>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>>>>> @@ -1857,7 +1857,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa,
>>>>>> unsigned long gla,
>>>>>>>       */
>>>>>>>      if ( (p2mt == p2m_mmio_dm) ||
>>>>>>>           (npfec.write_access &&
>>>>>>> -          (p2m_is_discard_write(p2mt) || (p2mt ==
>>>> p2m_mmio_write_dm))) )
>>>>>>> +          (p2m_is_discard_write(p2mt) || (p2mt ==
>>>>>>> p2m_ioreq_server))) )
>>>>>>>      {
>>>>>>>          __put_gfn(p2m, gfn);
>>>>>>>          if ( ap2m_active )
>>>>>>> @@ -5499,8 +5499,8 @@ long do_hvm_op(unsigned long op,
>>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              get_gfn_query_unlocked(d, a.pfn, &t);
>>>>>>>              if ( p2m_is_mmio(t) )
>>>>>>>                  a.mem_type =  HVMMEM_mmio_dm;
>>>>>>> -            else if ( t == p2m_mmio_write_dm )
>>>>>>> -                a.mem_type = HVMMEM_mmio_write_dm;
>>>>>>> +            else if ( t == p2m_ioreq_server )
>>>>>>> +                a.mem_type = HVMMEM_ioreq_server;
>>>>>>>              else if ( p2m_is_readonly(t) )
>>>>>>>                  a.mem_type =  HVMMEM_ram_ro;
>>>>>>>              else if ( p2m_is_ram(t) )
>>>>>>> @@ -5531,7 +5531,8 @@ long do_hvm_op(unsigned long op,
>>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              [HVMMEM_ram_rw]  = p2m_ram_rw,
>>>>>>>              [HVMMEM_ram_ro]  = p2m_ram_ro,
>>>>>>>              [HVMMEM_mmio_dm] = p2m_mmio_dm,
>>>>>>> -            [HVMMEM_mmio_write_dm] = p2m_mmio_write_dm
>>>>>>> +            [HVMMEM_unused] = p2m_invalid,
>>>>>>> +            [HVMMEM_ioreq_server] = p2m_ioreq_server
>>>>>>>          };
>>>>>>>
>>>>>>>          if ( copy_from_guest(&a, arg, 1) )
>>>>>>> @@ -5555,7 +5556,8 @@ long do_hvm_op(unsigned long op,
>>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>               ((a.first_pfn + a.nr - 1) >
>>>>>>> domain_get_maximum_gpfn(d)) )
>>>>>>>              goto setmemtype_fail;
>>>>>>>
>>>>>>> -        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) )
>>>>>>> +        if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
>>>>>>> +             unlikely(a.hvmmem_type == HVMMEM_unused) )
>>>>>>>              goto setmemtype_fail;
>>>>>>>
>>>>>>>          while ( a.nr > start_iter )
>>>>>>> @@ -5579,7 +5581,7 @@ long do_hvm_op(unsigned long op,
>>>>>> XEN_GUEST_HANDLE_PARAM(void) arg)
>>>>>>>              }
>>>>>>>              if ( !p2m_is_ram(t) &&
>>>>>>>                   (!p2m_is_hole(t) || a.hvmmem_type !=
>>>>>>> HVMMEM_mmio_dm)
>>>>>> &&
>>>>>>> -                 (t != p2m_mmio_write_dm || a.hvmmem_type !=
>>>>>> HVMMEM_ram_rw) )
>>>>>>> +                 (t != p2m_ioreq_server || a.hvmmem_type !=
>>>>>> HVMMEM_ram_rw) )
>>>>>>>              {
>>>>>>>                  put_gfn(d, pfn);
>>>>>>>                  goto setmemtype_fail;
>>>>>>> diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-
>>>> ept.c
>>>>>>> index 3cb6868..380ec25 100644
>>>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>>>> @@ -171,7 +171,7 @@ static void ept_p2m_type_to_flags(struct
>>>>>> p2m_domain *p2m, ept_entry_t *entry,
>>>>>>>              entry->a = entry->d = !!cpu_has_vmx_ept_ad;
>>>>>>>              break;
>>>>>>>          case p2m_grant_map_ro:
>>>>>>> -        case p2m_mmio_write_dm:
>>>>>>> +        case p2m_ioreq_server:
>>>>>>>              entry->r = 1;
>>>>>>>              entry->w = entry->x = 0;
>>>>>>>              entry->a = !!cpu_has_vmx_ept_ad;
>>>>>>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>>>>>>> index 3d80612..eabd2e3 100644
>>>>>>> --- a/xen/arch/x86/mm/p2m-pt.c
>>>>>>> +++ b/xen/arch/x86/mm/p2m-pt.c
>>>>>>> @@ -94,7 +94,7 @@ static unsigned long
>>>> p2m_type_to_flags(p2m_type_t
>>>>>> t, mfn_t mfn,
>>>>>>>      default:
>>>>>>>          return flags | _PAGE_NX_BIT;
>>>>>>>      case p2m_grant_map_ro:
>>>>>>> -    case p2m_mmio_write_dm:
>>>>>>> +    case p2m_ioreq_server:
>>>>>>>          return flags | P2M_BASE_FLAGS | _PAGE_NX_BIT;
>>>>>>>      case p2m_ram_ro:
>>>>>>>      case p2m_ram_logdirty:
>>>>>>> diff --git a/xen/arch/x86/mm/shadow/multi.c
>>>>>> b/xen/arch/x86/mm/shadow/multi.c
>>>>>>> index e5c8499..c81302a 100644
>>>>>>> --- a/xen/arch/x86/mm/shadow/multi.c
>>>>>>> +++ b/xen/arch/x86/mm/shadow/multi.c
>>>>>>> @@ -3225,7 +3225,7 @@ static int sh_page_fault(struct vcpu *v,
>>>>>>>
>>>>>>>      /* Need to hand off device-model MMIO to the device model */
>>>>>>>      if ( p2mt == p2m_mmio_dm
>>>>>>> -         || (p2mt == p2m_mmio_write_dm && ft == ft_demand_write) )
>>>>>>> +         || (p2mt == p2m_ioreq_server && ft == ft_demand_write) )
>>>>>>>      {
>>>>>>>          gpa = guest_walk_to_gpa(&gw);
>>>>>>>          goto mmio;
>>>>>>> diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
>>>>>>> index 5392eb0..ee2ea9c 100644
>>>>>>> --- a/xen/include/asm-x86/p2m.h
>>>>>>> +++ b/xen/include/asm-x86/p2m.h
>>>>>>> @@ -71,7 +71,7 @@ typedef enum {
>>>>>>>      p2m_ram_shared = 12,          /* Shared or sharable memory */
>>>>>>>      p2m_ram_broken = 13,          /* Broken page, access cause
>>>>>>> domain
>>>> crash
>>>>>> */
>>>>>>>      p2m_map_foreign  = 14,        /* ram pages from foreign
>>>>>>> domain */
>>>>>>> -    p2m_mmio_write_dm = 15,       /* Read-only; writes go to the
>>>>>>> device
>>>>>> model */
>>>>>>> +    p2m_ioreq_server = 15,
>>>>>>>  } p2m_type_t;
>>>>>>>
>>>>>>>  /* Modifiers to the query */
>>>>>>> @@ -112,7 +112,7 @@ typedef unsigned int p2m_query_t;
>>>>>>>                        | p2m_to_mask(p2m_ram_ro)         \
>>>>>>>                        | p2m_to_mask(p2m_grant_map_ro)   \
>>>>>>>                        | p2m_to_mask(p2m_ram_shared)     \
>>>>>>> -                      | p2m_to_mask(p2m_mmio_write_dm))
>>>>>>> +                      | p2m_to_mask(p2m_ioreq_server))
>>>>>>>
>>>>>>>  /* Write-discard types, which should discard the write
>>>>>>> operations */
>>>>>>>  #define P2M_DISCARD_WRITE_TYPES (p2m_to_mask(p2m_ram_ro)     \
>>>>>>> diff --git a/xen/include/public/hvm/hvm_op.h
>>>>>> b/xen/include/public/hvm/hvm_op.h
>>>>>>> index 1606185..b3e45cf 100644
>>>>>>> --- a/xen/include/public/hvm/hvm_op.h
>>>>>>> +++ b/xen/include/public/hvm/hvm_op.h
>>>>>>> @@ -83,7 +83,13 @@ typedef enum {
>>>>>>>      HVMMEM_ram_rw,             /* Normal read/write guest RAM */
>>>>>>>      HVMMEM_ram_ro,             /* Read-only; writes are
>>>>>>> discarded */
>>>>>>>      HVMMEM_mmio_dm,            /* Reads and write go to the device
>>>> model
>>>>>> */
>>>>>>> -    HVMMEM_mmio_write_dm       /* Read-only; writes go to the
>>>>>>> device
>>>>>> model */
>>>>>>> +#if __XEN_INTERFACE_VERSION__ < 0x00040700
>>>>>>> +    HVMMEM_mmio_write_dm,      /* Read-only; writes go to the
>>>>>>> device
>>>>>> model */
>>>>>>> +#else
>>>>>>> +    HVMMEM_unused,             /* Placeholder; setting memory to
>>>>>>> this
>>>> type
>>>>>>> +                                  will fail for code after 4.7.0 */
>>>>>>> +#endif
>>>>>>> +    HVMMEM_ioreq_server
>>>>>>
>>>>>> Also, I don't think we've had a convincing argument for why this
>>>>>> patch
>>>>>> needs to be in 4.7.  The p2m name changes are internal only, and so
>>>>>> don't need to be made at all; and the old functionality will work as
>>>>>> well as it ever did.  Furthermore, the whole reason we're in this
>>>>>> situation is that we checked in a publicly-visible change to the
>>>>>> interface before it was properly ready; I think we should avoid
>>>>>> making
>>>>>> the same mistake this time.
>>>>>>
>>>>>> So personally I'd just leave this patch entirely for 4.8; but if Paul
>>>>>> and/or Jan have strong opinions, then I would say check in only a
>>>>>> patch to do the #if/#else/#endif, and leave both the p2m type changes
>>>>>> and the new HVMMEM_ioreq_server enum for when the 4.8
>>>> development
>>>>>> window opens.
>>>>>>
>>>>>
>>>>> If the whole series is going in then I think this patch is ok. If this
>>>>> is the only patch that is going in for 4.7 then I think we need the patch
>>>>> to hvm_op.h to deprecate the old type and that's it. We definitely should
>>>>> not introduce an implementation of the type HVMMEM_ioreq_server that has
>>>>> the same hardcoded semantics as the old type and then change it.
>>>>> The p2m type changes are also wrong. That type needs to be left alone,
>>>>> presumably, so that anything using HVMMEM_mmio_write_dm and compiled to
>>>>> the old interface version continues to function. I think
>>>>> HVMMEM_ioreq_server needs to map to a new p2m type which should be
>>>>> introduced in patch #3.
>>>>>
>>>>
>>>> Sorry, I'm also confused now. :(
>>>>
>>>> Do we really want to introduce a new p2m type? Why?
>>>> My understanding of the previous agreement is that:
>>>> 1> We need the interface to work on old hypervisor for
>>>> HVMMEM_mmio_write_dm;
>>>> 2> We need the interface to return -EINVAL for new hypervisor
>>>> for HVMMEM_mmio_write_dm;
>>>> 3> We need the new type HVMMEM_ioreq_server to work on new
>>>> hypervisor;
>>>>
>>>> Did I miss something? Or I totally misunderstood?
>>>>
>>>
>>> I don't know. I'm confused too. What we definitely don't want to do
>>> is add a
>>> new HVMMEM type and have it map to the old behaviour, otherwise we're no
>>> better off.
>>>
>>> The question I'm not clear on the answer to is what happens to old code:
>>>
>>> Should it continue to compile? If so, should it continue to run?
>>
>> We only need to be concerned about the "get type" functionality,
>> as that's the only thing an arbitrary guest can use. If the
>> hypercall simply never returns the old type, then old code will
>> still work (it'll just have some dead code on new Xen), and hence
>> it compiling against the older interface is fine (and, from general
>> considerations, a requirement).
>>
> 
> Thanks Jan. And I think the answer is yes. The new hypervisor will
> only return HVMMEM_ioreq_server, which is a different value now.

Right -- but we can't check in this entire series for 4.7; however, Paul
would like to make it clear that HVMMEM_mmio_write_dm is deprecated; so
we need a patch which does the #ifdef/#else/#endif clause, and then
everything else necessary to make sure that things compile properly
either way, but no p2m changes or additional functionality.
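
To be concrete, the 4.7-only change I have in mind is roughly the sketch
below of the public enum in hvm_op.h, with no new HVMMEM_ioreq_server value
yet (the typedef name and exact comment wording here are assumptions):

    typedef enum {
        HVMMEM_ram_rw,             /* Normal read/write guest RAM */
        HVMMEM_ram_ro,             /* Read-only; writes are discarded */
        HVMMEM_mmio_dm,            /* Reads and write go to the device model */
    #if __XEN_INTERFACE_VERSION__ < 0x00040700
        HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
    #else
        HVMMEM_unused              /* Placeholder; setting this type fails */
    #endif
    } hvmmem_type_t;

plus whatever minimal checking do_hvm_op() needs so that callers built
against either interface version keep compiling.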

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 16:15               ` George Dunlap
@ 2016-04-25 16:20                 ` Yu, Zhang
  2016-04-25 17:01                 ` Paul Durrant
  1 sibling, 0 replies; 35+ messages in thread
From: Yu, Zhang @ 2016-04-25 16:20 UTC (permalink / raw)
  To: George Dunlap, Jan Beulich, Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)



On 4/26/2016 12:15 AM, George Dunlap wrote:
> On 25/04/16 16:53, Yu, Zhang wrote:
[snip]
>>
>> Thanks Jan. And I think the answer is yes. The new hypervisor will
>> only return HVMMEM_ioreq_server, which is a different value now.
>
> Right -- but we can't check in this entire series for 4.7; however, Paul
> would like to make it clear that HVMMEM_mmio_write_dm is deprecated; so
> we need a patch which does the #ifdef/#else/#endif clause, and then
> everything else necessary to make sure that things compile properly
> either way, but no p2m changes or additional functionality.
>

Thanks, George.
So what good would it be with only the #ifdef/#else/#endif clause? It
seems more reasonable to me to hold all 3 patches until the next
version, and at that time the __XEN_INTERFACE_VERSION__ should be
compared against 0x00040800 in the #ifdef, I guess?
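
Concretely, I would expect the guard to end up looking something like the
sketch below, assuming 4.8's interface version constant is 0x00040800:

    #if __XEN_INTERFACE_VERSION__ < 0x00040800
        HVMMEM_mmio_write_dm,      /* Read-only; writes go to the device model */
    #else
        HVMMEM_unused,             /* Placeholder; setting this type fails */
    #endif
        HVMMEM_ioreq_server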

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 16:15               ` George Dunlap
  2016-04-25 16:20                 ` Yu, Zhang
@ 2016-04-25 17:01                 ` Paul Durrant
  2016-04-26  8:23                   ` Yu, Zhang
  2016-04-27 14:12                   ` George Dunlap
  1 sibling, 2 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-25 17:01 UTC (permalink / raw)
  To: George Dunlap, Yu, Zhang, Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)

> -----Original Message-----
> From: George Dunlap [mailto:george.dunlap@citrix.com]
> Sent: 25 April 2016 17:16
> To: Yu, Zhang; Jan Beulich; Paul Durrant
> Cc: Andrew Cooper; Wei Liu; Jun Nakajima; Kevin Tian; Zhiyuan Lv; xen-
> devel@lists.xen.org; Keir (Xen.org); Tim (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> On 25/04/16 16:53, Yu, Zhang wrote:
[snip]
> >
> > Thanks Jan. And I think the answer is yes. The new hypervisor will
> > only return HVMMEM_ioreq_server, which is a different value now.
> 
> Right -- but we can't check in this entire series for 4.7; however, Paul
> would like to make it clear that HVMMEM_mmio_write_dm is deprecated;
> so
> we need a patch which does the #ifdef/#else/#endif clause, and then
> everything else necessary to make sure that things compile properly
> either way, but no p2m changes or additional functionality.
> 

For clarity, do you expect any existing use of HVMMEM_mmio_write_dm to
continue to *function*? I agree that things should continue to build, but
if they don't need to function then the now-redundant p2m type should be
removed IMO, and any attempt to set a page to HVMMEM_mmio_write_dm (or the
new HVMMEM_unused name) should result in -EINVAL. What is your position on
this?
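
To make the intent concrete, I would expect the HVMOP_set_mem_type path to
carry a check along the lines of the sketch below (it mirrors the hunk quoted
earlier in the thread; the explicit rc assignment is a simplification):

    /* Reject the deprecated/placeholder name outright. */
    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
         unlikely(a.hvmmem_type == HVMMEM_unused) )
    {
        rc = -EINVAL;
        goto setmemtype_fail;
    }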

  Paul

>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 17:01                 ` Paul Durrant
@ 2016-04-26  8:23                   ` Yu, Zhang
  2016-04-26  8:33                     ` Paul Durrant
  2016-04-27 14:12                   ` George Dunlap
  1 sibling, 1 reply; 35+ messages in thread
From: Yu, Zhang @ 2016-04-26  8:23 UTC (permalink / raw)
  To: Paul Durrant, George Dunlap, Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)



On 4/26/2016 1:01 AM, Paul Durrant wrote:
[snip]
>> Right -- but we can't check in this entire series for 4.7; however, Paul
>> would like to make it clear that HVMMEM_mmio_write_dm is deprecated;
>> so
>> we need a patch which does the #ifdef/#else/#endif clause, and then
>> everything else necessary to make sure that things compile properly
>> either way, but no p2m changes or additional functionality.
>>
>
> For clarity, do you expect any existing use of HVMMEM_mmio_write_dm to continue to *function*? I agree that things should continue to build, but if they don't need to function then the now redundant p2m type should be removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm (or the new HVMMEM_unused) name should result in -EINVAL. What is your position on this?
>

Thanks, Paul.
My expectation is that HVMMEM_mmio_write_dm shall fail in the new Xen
version, but I do not think we need to remove the p2m type; just
renaming it could be OK.

>   Paul
>
>>  -George

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-26  8:23                   ` Yu, Zhang
@ 2016-04-26  8:33                     ` Paul Durrant
  0 siblings, 0 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-26  8:33 UTC (permalink / raw)
  To: Yu, Zhang, George Dunlap, Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)

> -----Original Message-----
[snip]
> >
> > For clarity, do you expect any existing use of HVMMEM_mmio_write_dm
> to continue to *function*? I agree that things should continue to build, but if
> they don't need to function then the now redundant p2m type should be
> removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm
> (or the new HVMMEM_unused) name should result in -EINVAL. What is your
> position on this?
> >
> 
> Thanks, Paul.
> My expectation is that HVMMEM_mmio_write_dm shall fail in the new Xen
> version, but I do not think we need to remove the p2m type; just
> renaming it could be OK.
> 

I think we need George's response before we can proceed.

  Paul
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types.
  2016-04-25 10:35 ` [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
@ 2016-04-26 10:53   ` Wei Liu
  2016-04-27  9:11     ` Yu, Zhang
  0 siblings, 1 reply; 35+ messages in thread
From: Wei Liu @ 2016-04-26 10:53 UTC (permalink / raw)
  To: Yu Zhang
  Cc: Keir Fraser, Andrew Cooper, xen-devel, Paul.Durrant, zhiyuan.lv,
	Jan Beulich, wei.liu2

Hi Yu

On Mon, Apr 25, 2016 at 06:35:39PM +0800, Yu Zhang wrote:
> For clarity this patch breaks the code to set/get memory types out
> of do_hvm_op() into dedicated functions: hvmop_set/get_mem_type().
> Also, for clarity, checks for whether a memory type change is allowed
> are broken out into a separate function called by hvmop_set_mem_type().
> 
> There is no intentional functional change in this patch.
> 
> changes in v3:
>   - Add Andrew's Acked-by and George's Reviewed-by.
> 
> changes in v2:
>   - According to George Dunlap's comments, follow the "set rc /
>     do something / goto out" pattern in hvmop_get_mem_type().
> 

Normally we put these changelogs (or other information that is not
intended to be committed) between "---" so that they are ignored
when committing. Here is one example:

https://marc.info/?l=xen-devel&m=146056699332101

Note the Cc and some extra words inside, surrounded by two "---". They
will be ignored when committing.
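
Roughly, the layout looks like this (a made-up sketch just to show the
shape; the subject, names and file below are illustrative and not the
content of that mail):

  x86/foo: one-line subject

  Commit message body that is kept in the git history.

  Signed-off-by: Some One <someone@example.com>
  ---
  Cc: maintainer@example.com

  v3:
    - changelog entries that are dropped when the patch is applied
  ---
   xen/arch/x86/foo.c | 4 ++--
   1 file changed, 2 insertions(+), 2 deletions(-)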

The code itself looks good to me.

Wei.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types.
  2016-04-26 10:53   ` Wei Liu
@ 2016-04-27  9:11     ` Yu, Zhang
  0 siblings, 0 replies; 35+ messages in thread
From: Yu, Zhang @ 2016-04-27  9:11 UTC (permalink / raw)
  To: Wei Liu
  Cc: Keir Fraser, Andrew Cooper, xen-devel, Paul.Durrant, zhiyuan.lv,
	Jan Beulich



On 4/26/2016 6:53 PM, Wei Liu wrote:
> Hi Yu
>
> On Mon, Apr 25, 2016 at 06:35:39PM +0800, Yu Zhang wrote:
>> For clarity this patch breaks the code to set/get memory types out
>> of do_hvm_op() into dedicated functions: hvmop_set/get_mem_type().
>> Also, for clarity, checks for whether a memory type change is allowed
>> are broken out into a separate function called by hvmop_set_mem_type().
>>
>> There is no intentional functional change in this patch.
>>
>> changes in v3:
>>   - Add Andrew's Acked-by and George's Reviewed-by.
>>
>> changes in v2:
>>   - According to George Dunlap's comments, follow the "set rc /
>>     do something / goto out" pattern in hvmop_get_mem_type().
>>
>
> Normally we put these changelogs (or other information that is not
> intended to be committed) between "---" so that they are ignored
> when committing. Here is one example:
>
> https://marc.info/?l=xen-devel&m=146056699332101
>
> Note the Cc and some extra words inside surrounded by two "---". They
> will be ignored when committing.
>

Oh. Thanks for your information, Wei. :)

B.R.
Yu

> The code itself looks good to me.
>
> Wei.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-25 17:01                 ` Paul Durrant
  2016-04-26  8:23                   ` Yu, Zhang
@ 2016-04-27 14:12                   ` George Dunlap
  2016-04-27 14:42                     ` Paul Durrant
  2016-04-27 14:47                     ` Wei Liu
  1 sibling, 2 replies; 35+ messages in thread
From: George Dunlap @ 2016-04-27 14:12 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	xen-devel, Yu, Zhang, Zhiyuan Lv, Jan Beulich, Keir (Xen.org)

On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com> wrote:
> For clarity, do you expect any existing use of HVMMEM_mmio_write_dm to continue to *function*? I agree that things should continue to build, but if they don't need to function then the now redundant p2m type should be removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm (or the new HVMMEM_unused) name should result in -EINVAL. What is your position on this?

I sort of feel like we're playing some strange guessing game with the
color of this bike shed, where all 4 of us give a random combination
of constraints and then we have to figure out what the solution is. :-)

There are two issues: the interface (HVMMEM_*) and the internals (p2m_*).

Jan says that code that calls HVMOP_get_mem_type has to continue to
compile and function.  "Functioning" is easy, as you just don't return
that value and you're done.  Compiling just means having the #ifdef.
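
Concretely, something along these lines in the public header would do it
(just a sketch: the guard follows Xen's usual __XEN_INTERFACE_VERSION__
convention, and the comment wording and exact layout are illustrative
rather than agreed text):

typedef enum {
    HVMMEM_ram_rw,             /* Normal read/write guest RAM */
    HVMMEM_ram_ro,             /* Read-only; writes are discarded */
    HVMMEM_mmio_dm,            /* Reads and writes go to the device model */
#if __XEN_INTERFACE_VERSION__ < 0x00040700
    HVMMEM_mmio_write_dm       /* Read-only; writes go to the device model */
#else
    HVMMEM_unused              /* Placeholder; setting a page to this type
                                  fails with -EINVAL from 4.7 onwards */
#endif
} hvmmem_type_t;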

It sounds like we all agree that HVMOP_set_mem_type with the current
HVMMEM_mmio_write_dm value should return -EINVAL.

Regarding the p2m type which now should be impossible to set -- I
don't think it's critical to remove from the release, since it's just
internal.  I'd normally say just leave it for now to reduce code
churn.

But mostly I think we just want to get this bike shed painted, so if
anyone thinks we should really remove the p2m type from this release,
then that's fine with me too (assuming it's OK with Wei).

Does this cover everything?

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-27 14:12                   ` George Dunlap
@ 2016-04-27 14:42                     ` Paul Durrant
  2016-04-28  2:47                       ` Yu, Zhang
  2016-04-27 14:47                     ` Wei Liu
  1 sibling, 1 reply; 35+ messages in thread
From: Paul Durrant @ 2016-04-27 14:42 UTC (permalink / raw)
  To: George Dunlap, Yu, Zhang
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jan Beulich, Keir (Xen.org)

> -----Original Message-----
> From: George Dunlap
> Sent: 27 April 2016 15:13
> To: Paul Durrant
> Cc: Yu, Zhang; Jan Beulich; Kevin Tian; Wei Liu; Andrew Cooper; Tim
> (Xen.org); xen-devel@lists.xen.org; Zhiyuan Lv; Jun Nakajima; Keir (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com>
> wrote:
> > For clarity, do you expect any existing use of HVMMEM_mmio_write_dm
> to continue to *function*? I agree that things should continue to build, but if
> they don't need to function then the now redundant p2m type should be
> removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm
> (or the new HVMMEM_unused) name should result in -EINVAL. What is your
> position on this?
> 
> I sort of feel like we're playing some strange guessing game with the
> color of this bike shed, where all 4 of us give a random combination
> of constraints and then we have to figure out what the solution is. :-)
> 
> There are two issues: the interface (HVMMEM_*) and the
> internals (p2m_*).
> 
> Jan says that code that calls HVMOP_get_mem_type has to continue to
> compile and function.  "Functioning" is easy, as you just don't return
> that value and you're done.  Compiling just means having the #ifdef.
> 
> It sounds like we all agree that HVMOP_set_mem_type with the current
> HVMMEM_mmio_write_dm value should return -EINVAL.
> 
> Regarding the p2m type which now should be impossible to set -- I
> don't think it's critical to remove from the release, since it's just
> internal.  I'd normally say just leave it for now to reduce code
> churn.
> 
> But mostly I think we just want to get this bike shed painted, so if
> anyone thinks we should really remove the p2m type from this release,
> then that's fine with me too (assuming it's OK with Wei).
> 
> Does this cover everything?
> 

I think so. Thanks for the clarification. 

Yu, are you happy to submit a patch with the #ifdef in the header, and that removes any ability to set the old type?

I guess leaving the p2m type in place to avoid code churn is reasonable at this stage, but anyone looking at the p2m code is probably going to question why it's there in 4.7.

  Paul

>  -George
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-27 14:12                   ` George Dunlap
  2016-04-27 14:42                     ` Paul Durrant
@ 2016-04-27 14:47                     ` Wei Liu
  1 sibling, 0 replies; 35+ messages in thread
From: Wei Liu @ 2016-04-27 14:47 UTC (permalink / raw)
  To: George Dunlap
  Cc: Kevin Tian, Wei Liu, Jun Nakajima, Andrew Cooper, Tim (Xen.org),
	xen-devel, Paul Durrant, Yu, Zhang, Zhiyuan Lv, Jan Beulich,
	Keir (Xen.org)

On Wed, Apr 27, 2016 at 03:12:46PM +0100, George Dunlap wrote:
> On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com> wrote:
> > For clarity, do you expect any existing use of HVMMEM_mmio_write_dm to continue to *function*? I agree that things should continue to build, but if they don't need to function then the now redundant p2m type should be removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm (or the new HVMMEM_unused) name should result in -EINVAL. What is your position on this?
> 
> I sort of feel like we're playing some strange guessing game with the
> color of this bike shed, where all 4 of us give a random combination
> of constraints and then we have to figure out what the solution is. :-)
> 
> There are two issues: the interface (HVMMEM_*) and the internals (p2m_*).
> 
> Jan says that code that calls HVMOP_get_mem_type has to continue to
> compile and function.  "Functioning" is easy, as you just don't return
> that value and you're done.  Compiling just means having the #ifdef.
> 
> It sounds like we all agree that HVMOP_set_mem_type with the current
> HVMMEM_mmio_write_dm value should return -EINVAL.
> 

This is the most urgent issue at the moment, AIUI.

> Regarding the p2m type which now should be impossible to set -- I
> don't think it's critical to remove from the release, since it's just
> internal.  I'd normally say just leave it for now to reduce code
> churn.
> 
> But mostly I think we just want to get this bike shed painted, so if
> anyone thinks we should really remove the p2m type from this release,
> then that's fine with me too (assuming it's OK with Wei).
> 

I would prefer as little churn as possible.

Wei.

> Does this cover everything?
> 
>  -George

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-27 14:42                     ` Paul Durrant
@ 2016-04-28  2:47                       ` Yu, Zhang
  2016-04-28  7:14                         ` Paul Durrant
  0 siblings, 1 reply; 35+ messages in thread
From: Yu, Zhang @ 2016-04-28  2:47 UTC (permalink / raw)
  To: Paul Durrant, George Dunlap
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)



On 4/27/2016 10:42 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: George Dunlap
>> Sent: 27 April 2016 15:13
>> To: Paul Durrant
>> Cc: Yu, Zhang; Jan Beulich; Kevin Tian; Wei Liu; Andrew Cooper; Tim
>> (Xen.org); xen-devel@lists.xen.org; Zhiyuan Lv; Jun Nakajima; Keir (Xen.org)
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>> On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com>
>> wrote:
>>> For clarity, do you expect any existing use of HVMMEM_mmio_write_dm
>> to continue to *function*? I agree that things should continue to build, but if
>> they don't need to function then the now redundant p2m type should be
>> removed IMO and any attempt to set a page to HVMMEM_mmio_write_dm
>> (or the new HVMMEM_unused) name should result in -EINVAL. What is your
>> position on this?
>>
>> I sort of feel like we're playing some strange guessing game with the
>> color of this bike shed, where all 4 of us give a random combination
>> of constraints and then we have to figure out what the solution is. :-)
>>
>> There are two issues: the interface (HVMMEM_*) and the
>> internals (p2m_*).
>>
>> Jan says that code that calls HVMOP_get_mem_type has to continue to
>> compile and function.  "Functioning" is easy, as you just don't return
>> that value and you're done.  Compiling just means having the #ifdef.
>>
>> It sounds like we all agree that HVMOP_set_mem_type with the current
>> HVMMEM_mmio_write_dm value should return -EINVAL.
>>
>> Regarding the p2m type which now should be impossible to set -- I
>> don't think it's critical to remove from the release, since it's just
>> internal.  I'd normally say just leave it for now to reduce code
>> churn.
>>
>> But mostly I think we just want to get this bike shed painted, so if
>> anyone thinks we should really remove the p2m type from this release,
>> then that's fine with me too (assuming it's OK with Wei).
>>
>> Does this cover everything?
>>
>
> I think so. Thanks for the clarification.
>
> Yu, are you happy to submit a patch with the #ifdef in the header, and that removes any ability to set the old type?
>

I'm fine with this, and thanks. :)
So my understanding is that the only difference between the new
patch and this current one is that we do not replace p2m_mmio_write_dm
with p2m_ioreq_server, hence no need to introduce HVMMEM_ioreq_server.
Is this understanding correct?

Besides, do you think it acceptable that we just replace p2m_mmio_write_dm
with p2m_ioreq_server in the next version, or should we remove this p2m
type and define p2m_ioreq_server with a different value? :)


> I guess leaving the p2m type in place to avoid code churn is reasonable at this stage, but anyone looking at the p2m code is probably going to question why it's there in 4.7.
>
>   Paul
>
>>  -George

B.R.
Yu


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-28  7:14                         ` Paul Durrant
@ 2016-04-28  7:07                           ` Yu, Zhang
  2016-04-28 10:02                           ` Jan Beulich
  1 sibling, 0 replies; 35+ messages in thread
From: Yu, Zhang @ 2016-04-28  7:07 UTC (permalink / raw)
  To: Paul Durrant, George Dunlap
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)



On 4/28/2016 3:14 PM, Paul Durrant wrote:
>> -----Original Message-----
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 28 April 2016 03:47
>> To: Paul Durrant; George Dunlap
>> Cc: Kevin Tian; Wei Liu; Jun Nakajima; Andrew Cooper; Tim (Xen.org); xen-
>> devel@lists.xen.org; Zhiyuan Lv; Jan Beulich; Keir (Xen.org)
>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>
>>
>>
>> On 4/27/2016 10:42 PM, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: George Dunlap
>>>> Sent: 27 April 2016 15:13
>>>> To: Paul Durrant
>>>> Cc: Yu, Zhang; Jan Beulich; Kevin Tian; Wei Liu; Andrew Cooper; Tim
>>>> (Xen.org); xen-devel@lists.xen.org; Zhiyuan Lv; Jun Nakajima; Keir
>> (Xen.org)
>>>> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
>>>> Rename p2m_mmio_write_dm to p2m_ioreq_server.
>>>>
>>>> On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com>
>>>> wrote:
>>>>> For clarity, do you expect any existing use of
>> HVMMEM_mmio_write_dm
>>>> to continue to *function*? I agree that things should continue to build,
>> but if
>>>> they don't need to function then the now redundant p2m type should be
>>>> removed IMO and any attempt to set a page to
>> HVMMEM_mmio_write_dm
>>>> (or the new HVMMEM_unused) name should result in -EINVAL. What is
>> your
>>>> position on this?
>>>>
>>>> I sort of feel like we're playing some strange guessing game with the
>>>> color of this bike shed, where all 4 of us give a random combination
>>>> of constraints and then we have to figure out what the solution is. :-)
>>>>
>>>> There are two issues: the interface (HVMMEM_*) and the
>>>> internals (p2m_*).
>>>>
>>>> Jan says that code that calls HVMOP_get_mem_type has to continue to
>>>> compile and function.  "Functioning" is easy, as you just don't return
>>>> that value and you're done.  Compiling just means having the #ifdef.
>>>>
>>>> It sounds like we all agree that HVMOP_set_mem_type with the current
>>>> HVMMEM_mmio_write_dm value should return -EINVAL.
>>>>
>>>> Regarding the p2m type which now should be impossible to set -- I
>>>> don't think it's critical to remove from the release, since it's just
>>>> internal.  I'd normally say just leave it for now to reduce code
>>>> churn.
>>>>
>>>> But mostly I think we just want to get this bike shed painted, so if
>>>> anyone thinks we should really remove the p2m type from this release,
>>>> then that's fine with me too (assuming it's OK with Wei).
>>>>
>>>> Does this cover everything?
>>>>
>>>
>>> I think so. Thanks for the clarification.
>>>
>>> Yu, are you happy to submit a patch with the #ifdef in the header, and that
>> removes any ability to set the old type?
>>>
>>
>> I'm fine with this, and thanks. :)
>> So my understanding is that the only difference between the new
>> patch and this current one is we do not replace p2m_mmio_write_dm
>> with p2m_ioreq_server, hence no need to introduce
>> HVMMEM_ioreq_server.
>> Is this understanding correct?
>>
>
> I believe so. The main points are that:
>
> a) HVMMEM_mmio_write_dm no longer exists with the new interface version and is replaced by HVMMEM_unused
> b) Any attempt to set a page to type HVMMEM_unused results in -EINVAL
>

Yep.

>> Besides, do you think it acceptable we just replace p2m_mmio_write_dm
>> with p2m_ioreq_server in next version, or should we remove this p2m
>> type and define p2m_ioreq_server with a different value? :)
>>
>
> p2m_ioreq_server becomes defunct after the aforementioned patch. With that patch applied there will be no way to set that type on a p2m entry so it should be ok to re-use the type.
>

Got it. Thank you, Paul.
Will send the patch out later. :)

>   Paul
>
>>
>>> I guess leaving the p2m type in place to avoid code churn is reasonable at
>> this stage, but anyone looking at the p2m code is probably going to question
>> why it's there in 4.7.
>>>
>>>   Paul
>>>
>>>>  -George
>>
>> B.R.
>> Yu
>

Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-28  2:47                       ` Yu, Zhang
@ 2016-04-28  7:14                         ` Paul Durrant
  2016-04-28  7:07                           ` Yu, Zhang
  2016-04-28 10:02                           ` Jan Beulich
  0 siblings, 2 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-28  7:14 UTC (permalink / raw)
  To: Yu, Zhang, George Dunlap
  Cc: Kevin Tian, Wei Liu, Jan Beulich, Andrew Cooper, Tim (Xen.org),
	xen-devel, Zhiyuan Lv, Jun Nakajima, Keir (Xen.org)

> -----Original Message-----
> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
> Sent: 28 April 2016 03:47
> To: Paul Durrant; George Dunlap
> Cc: Kevin Tian; Wei Liu; Jun Nakajima; Andrew Cooper; Tim (Xen.org); xen-
> devel@lists.xen.org; Zhiyuan Lv; Jan Beulich; Keir (Xen.org)
> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> 
> 
> On 4/27/2016 10:42 PM, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: George Dunlap
> >> Sent: 27 April 2016 15:13
> >> To: Paul Durrant
> >> Cc: Yu, Zhang; Jan Beulich; Kevin Tian; Wei Liu; Andrew Cooper; Tim
> >> (Xen.org); xen-devel@lists.xen.org; Zhiyuan Lv; Jun Nakajima; Keir
> (Xen.org)
> >> Subject: Re: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> >> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> >>
> >> On Mon, Apr 25, 2016 at 6:01 PM, Paul Durrant <Paul.Durrant@citrix.com>
> >> wrote:
> >>> For clarity, do you expect any existing use of
> HVMMEM_mmio_write_dm
> >> to continue to *function*? I agree that things should continue to build,
> but if
> >> they don't need to function then the now redundant p2m type should be
> >> removed IMO and any attempt to set a page to
> HVMMEM_mmio_write_dm
> >> (or the new HVMMEM_unused) name should result in -EINVAL. What is
> your
> >> position on this?
> >>
> >> I sort of feel like we're playing some strange guessing game with the
> >> color of this bike shed, where all 4 of us give a random combination
> >> of constraints and then we have to figure out what the solution is. :-)
> >>
> >> There are two issues: the interface (HVMMEM_*) and the
> >> internals (p2m_*).
> >>
> >> Jan says that code that calls HVMOP_get_mem_type has to continue to
> >> compile and function.  "Functioning" is easy, as you just don't return
> >> that value and you're done.  Compiling just means having the #ifdef.
> >>
> >> It sounds like we all agree that HVMOP_set_mem_type with the current
> >> HVMMEM_mmio_write_dm value should return -EINVAL.
> >>
> >> Regarding the p2m type which now should be impossible to set -- I
> >> don't think it's critical to remove from the release, since it's just
> >> internal.  I'd normally say just leave it for now to reduce code
> >> churn.
> >>
> >> But mostly I think we just want to get this bike shed painted, so if
> >> anyone thinks we should really remove the p2m type from this release,
> >> then that's fine with me too (assuming it's OK with Wei).
> >>
> >> Does this cover everything?
> >>
> >
> > I think so. Thanks for the clarification.
> >
> > Yu, are you happy to submit a patch with the #ifdef in the header, and that
> removes any ability to set the old type?
> >
> 
> I'm fine with this, and thanks. :)
> So my understanding is that the only difference between the new
> patch and this current one is we do not replace p2m_mmio_write_dm
> with p2m_ioreq_server, hence no need to introduce
> HVMMEM_ioreq_server.
> Is this understanding correct?
> 

I believe so. The main points are that:

a) HVMMEM_mmio_write_dm no longer exists with the new interface version and is replaced by HVMMEM_unused
b) Any attempt to set a page to type HVMMEM_unused results in -EINVAL
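
As a sketch of (b) -- assuming the existing memtype[] lookup table and
argument struct in hvmop_set_mem_type(), so the names are illustrative
rather than final code -- the handler would simply refuse the placeholder
value up front:

    rc = -EINVAL;
    if ( a.hvmmem_type >= ARRAY_SIZE(memtype) ||
         unlikely(a.hvmmem_type == HVMMEM_unused) )
        goto out;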

> Besides, do you think it acceptable we just replace p2m_mmio_write_dm
> with p2m_ioreq_server in next version, or should we remove this p2m
> type and define p2m_ioreq_server with a different value? :)
> 

p2m_ioreq_server becomes defunct after the aforementioned patch. With that patch applied there will be no way to set that type on a p2m entry so it should be ok to re-use the type.
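
I.e. a later series can simply rename the enumerator in place, keeping its
slot in p2m_type_t; a hunk of roughly this shape (illustrative, not copied
from the actual series):

    -    p2m_mmio_write_dm,      /* Read-only; writes go to the device model */
    +    p2m_ioreq_server,       /* Read-only; writes forwarded to an ioreq server */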

  Paul

> 
> > I guess leaving the p2m type in place to avoid code churn is reasonable at
> this stage, but anyone looking at the p2m code is probably going to question
> why it's there in 4.7.
> >
> >   Paul
> >
> >>  -George
> 
> B.R.
> Yu

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-28  7:14                         ` Paul Durrant
  2016-04-28  7:07                           ` Yu, Zhang
@ 2016-04-28 10:02                           ` Jan Beulich
  2016-04-28 10:43                             ` Paul Durrant
  1 sibling, 1 reply; 35+ messages in thread
From: Jan Beulich @ 2016-04-28 10:02 UTC (permalink / raw)
  To: Paul Durrant
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, xen-devel, Zhang Yu, Zhiyuan Lv, Jun Nakajima,
	Keir (Xen.org)

>>> On 28.04.16 at 09:14, <Paul.Durrant@citrix.com> wrote:
>> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
>> Sent: 28 April 2016 03:47
>> Besides, do you think it acceptable we just replace p2m_mmio_write_dm
>> with p2m_ioreq_server in next version, or should we remove this p2m
>> type and define p2m_ioreq_server with a different value? :)
>> 
> 
> p2m_ioreq_server becomes defunct after the aforementioned patch. With that 
> patch applied there will be no way to set that type on a p2m entry so it 
> should be ok to re-use the type.

s/p2m_ioreq_server/p2m_mmio_write_dm/ ?

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

* Re: [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server.
  2016-04-28 10:02                           ` Jan Beulich
@ 2016-04-28 10:43                             ` Paul Durrant
  0 siblings, 0 replies; 35+ messages in thread
From: Paul Durrant @ 2016-04-28 10:43 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Kevin Tian, Wei Liu, Andrew Cooper, Tim (Xen.org),
	George Dunlap, xen-devel, Zhang Yu, Zhiyuan Lv, Jun Nakajima,
	Keir (Xen.org)

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: 28 April 2016 11:03
> To: Paul Durrant
> Cc: Andrew Cooper; George Dunlap; Wei Liu; JunNakajima; Kevin Tian;
> Zhiyuan Lv; Zhang Yu; xen-devel@lists.xen.org; Keir (Xen.org); Tim (Xen.org)
> Subject: RE: [Xen-devel] [PATCH v3 1/3] x86/ioreq server(patch for 4.7):
> Rename p2m_mmio_write_dm to p2m_ioreq_server.
> 
> >>> On 28.04.16 at 09:14, <Paul.Durrant@citrix.com> wrote:
> >> From: Yu, Zhang [mailto:yu.c.zhang@linux.intel.com]
> >> Sent: 28 April 2016 03:47
> >> Besides, do you think it acceptable we just replace p2m_mmio_write_dm
> >> with p2m_ioreq_server in next version, or should we remove this p2m
> >> type and define p2m_ioreq_server with a different value? :)
> >>
> >
> > p2m_ioreq_server becomes defunct after the aforementioned patch. With
> that
> > patch applied there will be no way to set that type on a p2m entry so it
> > should be ok to re-use the type.
> 
> s/p2m_ioreq_server/p2m_mmio_write_dm/ ?
> 

Yes, that's what I meant.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 35+ messages in thread

end of thread, other threads:[~2016-04-28 10:43 UTC | newest]

Thread overview: 35+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-04-25 10:35 [PATCH v3 0/3] x86/ioreq server: Introduce HVMMEM_ioreq_server mem type Yu Zhang
2016-04-25 10:35 ` [PATCH v3 1/3] x86/ioreq server(patch for 4.7): Rename p2m_mmio_write_dm to p2m_ioreq_server Yu Zhang
2016-04-25 12:12   ` Jan Beulich
2016-04-25 13:30     ` Wei Liu
2016-04-25 13:39   ` George Dunlap
2016-04-25 14:01     ` Paul Durrant
2016-04-25 14:15       ` George Dunlap
2016-04-25 14:16       ` Jan Beulich
2016-04-25 14:19         ` Paul Durrant
2016-04-25 14:28           ` George Dunlap
2016-04-25 14:34             ` Paul Durrant
2016-04-25 15:21       ` Yu, Zhang
2016-04-25 15:29         ` Paul Durrant
2016-04-25 15:38           ` Jan Beulich
2016-04-25 15:53             ` Yu, Zhang
2016-04-25 16:15               ` George Dunlap
2016-04-25 16:20                 ` Yu, Zhang
2016-04-25 17:01                 ` Paul Durrant
2016-04-26  8:23                   ` Yu, Zhang
2016-04-26  8:33                     ` Paul Durrant
2016-04-27 14:12                   ` George Dunlap
2016-04-27 14:42                     ` Paul Durrant
2016-04-28  2:47                       ` Yu, Zhang
2016-04-28  7:14                         ` Paul Durrant
2016-04-28  7:07                           ` Yu, Zhang
2016-04-28 10:02                           ` Jan Beulich
2016-04-28 10:43                             ` Paul Durrant
2016-04-27 14:47                     ` Wei Liu
2016-04-25 15:49           ` Yu, Zhang
2016-04-25 14:14     ` Jan Beulich
2016-04-25 10:35 ` [PATCH v3 2/3] x86/ioreq server: Add new functions to get/set memory types Yu Zhang
2016-04-26 10:53   ` Wei Liu
2016-04-27  9:11     ` Yu, Zhang
2016-04-25 10:35 ` [PATCH v3 3/3] x86/ioreq server: Add HVMOP to map guest ram with p2m_ioreq_server to an ioreq server Yu Zhang
2016-04-25 12:36   ` Paul Durrant
