xen-devel.lists.xenproject.org archive mirror
* [PATCH Altp2m cleanup v5 0/3] Cleaning up altp2m code
@ 2016-09-13 16:46 Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work Paul Lai
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Paul Lai @ 2016-09-13 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Altp2m cleanup work

The altp2m cleanup work is motivated by the following threads:
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04323.html
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04454.html
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04530.html

Most of the work has been:
Lots of white space, indentation, and other coding style corrections.
Moving altp2m functions to the altp2m file.
Moving ept functions to the ept file.
Correcting (and checking) function return types.
Better sanity checking of values before processing in do_altp2m_op().
Using a plain 'return' after an if() clause, instead of a goto, when
the block can be a one-liner.

Most importantly, this series carries Ravi Sahita's change making the
altp2m domain state dynamically allocated.  Ravi also introduces the
set_altp2m_active() and altp2m_active() APIs for better readability
and maintainability.

Along the way, some dependencies broke and we waited for them to be
fixed.  Most recently, a gfn() issue caused a hang of the whole
system.  The gfn fixes are currently in staging, and we have verified
against the staging branch that this series of patches functions as
expected.

Paul Lai (3):
  altp2m cleanup work.
  Move altp2m specific functions to altp2m files.
  Making altp2m domain dynamically allocated.

 xen/arch/x86/hvm/hvm.c            |  54 +++++++++----------
 xen/arch/x86/hvm/vmx/vmx.c        |   2 +-
 xen/arch/x86/mm/altp2m.c          |  57 ++++++++++++++++++++
 xen/arch/x86/mm/hap/hap.c         |  38 ++++----------
 xen/arch/x86/mm/mem_sharing.c     |   2 +-
 xen/arch/x86/mm/mm-locks.h        |   4 +-
 xen/arch/x86/mm/p2m-ept.c         |  39 ++++++++++++++
 xen/arch/x86/mm/p2m.c             | 106 +++++++++++++-------------------------
 xen/common/monitor.c              |   1 +
 xen/include/asm-x86/altp2m.h      |  11 +++-
 xen/include/asm-x86/domain.h      |   6 +--
 xen/include/asm-x86/hvm/hvm.h     |  22 ++++++--
 xen/include/asm-x86/hvm/vmx/vmx.h |   3 ++
 xen/include/asm-x86/p2m.h         |  18 ++++---
 14 files changed, 216 insertions(+), 147 deletions(-)

-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work.
  2016-09-13 16:46 [PATCH Altp2m cleanup v5 0/3] Cleaning up altp2m code Paul Lai
@ 2016-09-13 16:46 ` Paul Lai
  2016-09-26 16:01   ` Jan Beulich
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 2/3] Move altp2m specific functions to altp2m files Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 3/3] Making altp2m domain dynamically allocated Paul Lai
  2 siblings, 1 reply; 6+ messages in thread
From: Paul Lai @ 2016-09-13 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Indent goto labels by one space.
Move the altp2m wrapper functions into the header and make them inline.
In do_altp2m_op(), during the sanity check of the passed command,
return -ENOSYS if the command is not valid.
In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
if the command is not recognized; the sanity check above should
already have triggered the return of -ENOSYS.
Make altp2m_vcpu_emulate_ve() return an actual bool_t (rather than
void).

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/hvm/hvm.c        | 46 ++++++++++++++++++++-----------------------
 xen/include/asm-x86/hvm/hvm.h | 22 ++++++++++++++++++---
 2 files changed, 40 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 2c89984..1bf2d01 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1932,11 +1932,11 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      * Otherwise, this is an error condition. */
     rc = fall_through;
 
-out_put_gfn:
+ out_put_gfn:
     __put_gfn(p2m, gfn);
     if ( ap2m_active )
         __put_gfn(hostp2m, gfn);
-out:
+ out:
     /* All of these are delayed until we exit, since we might 
      * sleep on event ring wait queues, and we must not hold
      * locks in such circumstance */
@@ -5227,12 +5227,25 @@ static int do_altp2m_op(
         return -EFAULT;
 
     if ( a.pad1 || a.pad2 ||
-         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
-         (a.cmd < HVMOP_altp2m_get_domain_state) ||
-         (a.cmd > HVMOP_altp2m_change_gfn) )
+        (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) )
         return -EINVAL;
 
-    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+    case HVMOP_altp2m_set_domain_state:
+    case HVMOP_altp2m_vcpu_enable_notify:
+    case HVMOP_altp2m_create_p2m:
+    case HVMOP_altp2m_destroy_p2m:
+    case HVMOP_altp2m_switch_p2m:
+    case HVMOP_altp2m_set_mem_access:
+    case HVMOP_altp2m_change_gfn:
+        break;
+    default:
+        return -ENOSYS;
+    }
+
+    d = ( a.cmd != HVMOP_altp2m_vcpu_enable_notify ) ?
         rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
 
     if ( d == NULL )
@@ -5349,6 +5362,8 @@ static int do_altp2m_op(
             rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                     _gfn(a.u.change_gfn.old_gfn),
                     _gfn(a.u.change_gfn.new_gfn));
+    default:
+        ASSERT_UNREACHABLE();
     }
 
  out:
@@ -5873,25 +5888,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm_vcpu.single_step = !v->arch.hvm_vcpu.single_step;
 }
 
-void altp2m_vcpu_update_p2m(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_p2m )
-        hvm_funcs.altp2m_vcpu_update_p2m(v);
-}
-
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
-        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
-}
-
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
-        return hvm_funcs.altp2m_vcpu_emulate_ve(v);
-    return 0;
-}
-
 int hvm_set_mode(struct vcpu *v, int mode)
 {
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5d463e0..f3c996d 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -589,13 +589,29 @@ static inline bool_t hvm_altp2m_supported(void)
 }
 
 /* updates the current hardware p2m */
-void altp2m_vcpu_update_p2m(struct vcpu *v);
+static inline void altp2m_vcpu_update_p2m(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_p2m )
+        hvm_funcs.altp2m_vcpu_update_p2m(v);
+}
 
 /* updates VMCS fields related to VMFUNC and #VE */
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
+static inline void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
+        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
+}
 
 /* emulates #VE */
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
+static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
+    {
+        hvm_funcs.altp2m_vcpu_emulate_ve(v);
+        return true;
+    }
+    return false;
+}
 
 /* Check CR4/EFER values */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
-- 
2.7.4




* [PATCH Altp2m cleanup v5 2/3] Move altp2m specific functions to altp2m files.
  2016-09-13 16:46 [PATCH Altp2m cleanup v5 0/3] Cleaning up altp2m code Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work Paul Lai
@ 2016-09-13 16:46 ` Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 3/3] Making altp2m domain dynamically allocated Paul Lai
  2 siblings, 0 replies; 6+ messages in thread
From: Paul Lai @ 2016-09-13 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

This makes the code a little easier to read.
Move the hvm_altp2m_supported() check into the functions that use it,
for better readability.
Move ept code to ept-specific files, as requested in:
  https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04323.html
Rename p2m_init_altp2m_helper() to p2m_init_altp2m_ept().
Get rid of stray blanks after an open paren following function names.
Define _XEN_ASM_X86_P2M_H instead of _XEN_P2M_H as the header guard
for xen/include/asm-x86/p2m.h.

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/mm/altp2m.c          | 57 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/hap.c         | 38 +++++++-------------------
 xen/arch/x86/mm/p2m-ept.c         | 39 +++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c             | 45 +++----------------------------
 xen/include/asm-x86/altp2m.h      |  4 ++-
 xen/include/asm-x86/hvm/vmx/vmx.h |  3 +++
 xen/include/asm-x86/p2m.h         |  9 +++----
 7 files changed, 117 insertions(+), 78 deletions(-)

diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 930bdc2..62801ae 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -17,6 +17,7 @@
 
 #include <asm/hvm/support.h>
 #include <asm/hvm/hvm.h>
+#include <asm/domain.h>
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
 
@@ -66,6 +67,62 @@ altp2m_vcpu_destroy(struct vcpu *v)
 }
 
 /*
+ * altp2m_domain_init(struct domain *d))
+ *  allocate and initialize memory for altp2m portion of domain
+ *
+ *  returns < 0 on error
+ *  returns 0 on no operation (success)
+ *  returns > 0 on success
+ */
+int
+altp2m_domain_init(struct domain *d)
+{
+    int rc;
+    unsigned int i;
+
+    if ( d == NULL)
+        return 0;
+
+    if ( !hvm_altp2m_supported() )
+        return 0;
+
+    /* Init alternate p2m data. */
+    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
+        return -ENOMEM;
+
+    for ( i = 0; i < MAX_EPTP; i++ )
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+        if ( rc != 0 )
+           return rc;
+    }
+
+    d->arch.altp2m_active = 0;
+
+    return rc;
+}
+
+void
+altp2m_domain_teardown(struct domain *d)
+{
+    unsigned int i;
+
+    if ( !hvm_altp2m_supported() )
+        return;
+
+    d->arch.altp2m_active = 0;
+
+    free_xenheap_page(d->arch.altp2m_eptp);
+    d->arch.altp2m_eptp = NULL;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+        p2m_teardown(d->arch.altp2m_p2m[i]);
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3218fa2..61c0c42 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -37,6 +37,7 @@
 #include <asm/hap.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
+#include <asm/altp2m.h>
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
@@ -499,26 +500,17 @@ int hap_enable(struct domain *d, u32 mode)
            goto out;
     }
 
-    if ( hvm_altp2m_supported() )
+    if ( (rv = altp2m_domain_init(d)) < 0 )
     {
-        /* Init alternate p2m data */
-        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
-        {
-            rv = -ENOMEM;
-            goto out;
+        for (i = 0; i < MAX_NESTEDP2M; i++) {
+            p2m_teardown(d->arch.nested_p2m[i]);
         }
 
-        for ( i = 0; i < MAX_EPTP; i++ )
-            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
-
-        for ( i = 0; i < MAX_ALTP2M; i++ )
-        {
-            rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
-            if ( rv != 0 )
-               goto out;
-        }
+        if ( d->arch.paging.hap.total_pages != 0 )
+            hap_teardown(d, NULL);
 
-        d->arch.altp2m_active = 0;
+        p2m_teardown(p2m_get_hostp2m(d));
+        goto out;
     }
 
     /* Now let other users see the new mode */
@@ -533,19 +525,7 @@ void hap_final_teardown(struct domain *d)
 {
     unsigned int i;
 
-    if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
-    }
+    altp2m_domain_teardown(d);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 13cab24..04878f5 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1329,6 +1329,45 @@ void setup_ept_dump(void)
     register_keyhandler('D', ept_dump_p2m_table, "dump VT-x EPT tables", 0);
 }
 
+void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
+{
+    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct ept_data *ept;
+
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_remapped_gfn = 0;
+    ept = &p2m->ept;
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
+}
+
+unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
+{
+    struct p2m_domain *p2m;
+    struct ept_data *ept;
+    unsigned int i;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+        ept = &p2m->ept;
+
+        if ( eptp == ept_get_eptp(ept) )
+            goto out;
+    }
+
+    i = INVALID_ALTP2M;
+
+ out:
+    altp2m_list_unlock(d);
+    return i;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 812dbf6..fa1ad4a 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -196,8 +196,8 @@ static void p2m_teardown_altp2m(struct domain *d)
         if ( !d->arch.altp2m_p2m[i] )
             continue;
         p2m = d->arch.altp2m_p2m[i];
-        d->arch.altp2m_p2m[i] = NULL;
         p2m_free_one(p2m);
+        d->arch.altp2m_p2m[i] = NULL;
     }
 }
 
@@ -2279,33 +2279,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
-{
-    struct p2m_domain *p2m;
-    struct ept_data *ept;
-    unsigned int i;
-
-    altp2m_list_lock(d);
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
-            continue;
-
-        p2m = d->arch.altp2m_p2m[i];
-        ept = &p2m->ept;
-
-        if ( eptp == ept_get_eptp(ept) )
-            goto out;
-    }
-
-    i = INVALID_ALTP2M;
-
- out:
-    altp2m_list_unlock(d);
-    return i;
-}
-
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 {
     struct domain *d = v->domain;
@@ -2411,18 +2384,6 @@ void p2m_flush_altp2m(struct domain *d)
     altp2m_list_unlock(d);
 }
 
-static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
-{
-    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
-    struct ept_data *ept;
-
-    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_remapped_gfn = 0;
-    ept = &p2m->ept;
-    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
-}
-
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 {
     int rc = -EINVAL;
@@ -2434,7 +2395,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
     {
-        p2m_init_altp2m_helper(d, idx);
+        p2m_init_altp2m_ept(d, idx);
         rc = 0;
     }
 
@@ -2454,7 +2415,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
         if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
-        p2m_init_altp2m_helper(d, i);
+        p2m_init_altp2m_ept(d, i);
         *idx = i;
         rc = 0;
 
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 64c7618..0090c89 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -18,7 +18,6 @@
 #ifndef __ASM_X86_ALTP2M_H
 #define __ASM_X86_ALTP2M_H
 
-#include <xen/types.h>
 #include <xen/sched.h>         /* for struct vcpu, struct domain */
 #include <asm/hvm/vcpu.h>      /* for vcpu_altp2m */
 
@@ -38,4 +37,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return vcpu_altp2m(v).p2midx;
 }
 
+int altp2m_domain_init(struct domain *d);
+void altp2m_domain_teardown(struct domain *d);
+
 #endif /* __ASM_X86_ALTP2M_H */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 4cdd9b1..9f4c7de 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -561,6 +561,9 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
 void setup_ept_dump(void);
+void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
+/* Locate an alternate p2m by its EPTP */
+unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
 
 void update_guest_eip(void);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index d5fd546..0c5b391 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -23,8 +23,8 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef _XEN_P2M_H
-#define _XEN_P2M_H
+#ifndef _XEN_ASM_X86_P2M_H
+#define _XEN_ASM_X86_P2M_H
 
 #include <xen/config.h>
 #include <xen/paging.h>
@@ -781,9 +781,6 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
-/* Locate an alternate p2m by its EPTP */
-unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
-
 /* Switch alternate p2m for a single vcpu */
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
 
@@ -845,7 +842,7 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt)
     return flags;
 }
 
-#endif /* _XEN_P2M_H */
+#endif /* _XEN_ASM_X86_P2M_H */
 
 /*
  * Local variables:
-- 
2.7.4




* [PATCH Altp2m cleanup v5 3/3] Making altp2m domain dynamically allocated.
  2016-09-13 16:46 [PATCH Altp2m cleanup v5 0/3] Cleaning up altp2m code Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work Paul Lai
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 2/3] Move altp2m specific functions to altp2m files Paul Lai
@ 2016-09-13 16:46 ` Paul Lai
  2 siblings, 0 replies; 6+ messages in thread
From: Paul Lai @ 2016-09-13 16:46 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Make the altp2m domain state dynamically allocated (Ravi Sahita's
work).
Introduce the set_altp2m_active() and altp2m_active() APIs.

Signed-off-by: Ravi Sahita <ravi.sahita@intel.com>
Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/hvm/hvm.c        |  8 +++---
 xen/arch/x86/hvm/vmx/vmx.c    |  2 +-
 xen/arch/x86/mm/altp2m.c      | 16 +++++------
 xen/arch/x86/mm/mem_sharing.c |  2 +-
 xen/arch/x86/mm/mm-locks.h    |  4 +--
 xen/arch/x86/mm/p2m-ept.c     | 10 +++----
 xen/arch/x86/mm/p2m.c         | 63 ++++++++++++++++++++++++-------------------
 xen/common/monitor.c          |  1 +
 xen/include/asm-x86/altp2m.h  |  7 ++++-
 xen/include/asm-x86/domain.h  |  6 ++---
 xen/include/asm-x86/p2m.h     |  9 ++++++-
 11 files changed, 73 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 1bf2d01..ac692ab 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5259,7 +5259,7 @@ static int do_altp2m_op(
 
     if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
          (a.cmd != HVMOP_altp2m_set_domain_state) &&
-         !d->arch.altp2m_active )
+         !altp2m_active(d) )
     {
         rc = -EOPNOTSUPP;
         goto out;
@@ -5293,11 +5293,11 @@ static int do_altp2m_op(
             break;
         }
 
-        ostate = d->arch.altp2m_active;
-        d->arch.altp2m_active = !!a.u.domain_state.state;
+        ostate = altp2m_active(d);
+        set_altp2m_active(d, a.u.domain_state.state);
 
         /* If the alternate p2m state has changed, handle appropriately */
-        if ( d->arch.altp2m_active != ostate &&
+        if ( altp2m_active(d) != ostate &&
              (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
         {
             for_each_vcpu( d, v )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index bb7a329..699e9b1 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2018,7 +2018,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
     {
         v->arch.hvm_vmx.secondary_exec_control |= mask;
         __vmwrite(VM_FUNCTION_CONTROL, VMX_VMFUNC_EPTP_SWITCHING);
-        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m_eptp));
+        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m->eptp));
 
         if ( cpu_has_vmx_virt_exceptions )
         {
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 62801ae..7917a1e 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -87,20 +87,20 @@ altp2m_domain_init(struct domain *d)
         return 0;
 
     /* Init alternate p2m data. */
-    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
+    if ( (d->arch.altp2m->eptp = alloc_xenheap_page()) == NULL )
         return -ENOMEM;
 
     for ( i = 0; i < MAX_EPTP; i++ )
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        d->arch.altp2m->eptp[i] = mfn_x(INVALID_MFN);
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+        rc = p2m_alloc_table(d->arch.altp2m->p2m[i]);
         if ( rc != 0 )
            return rc;
     }
 
-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, false);
 
     return rc;
 }
@@ -113,13 +113,13 @@ altp2m_domain_teardown(struct domain *d)
     if ( !hvm_altp2m_supported() )
         return;
 
-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, false);
 
-    free_xenheap_page(d->arch.altp2m_eptp);
-    d->arch.altp2m_eptp = NULL;
+    free_xenheap_page(d->arch.altp2m->eptp);
+    d->arch.altp2m->eptp = NULL;
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
-        p2m_teardown(d->arch.altp2m_p2m[i]);
+        p2m_teardown(d->arch.altp2m->p2m[i]);
 }
 
 /*
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e7c6b74..3fd8380 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -877,7 +877,7 @@ int mem_sharing_nominate_page(struct domain *d,
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            ap2m = d->arch.altp2m_p2m[i];
+            ap2m = d->arch.altp2m->p2m[i];
             if ( !ap2m )
                 continue;
 
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 74fdfc1..71d3891 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -266,8 +266,8 @@ declare_mm_order_constraint(per_page_sharing)
  */
 
 declare_mm_lock(altp2mlist)
-#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_list_lock)
-#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m_list_lock)
+#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m->list_lock)
+#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m->list_lock)
 
 /* P2M lock (per-altp2m-table)
  *
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 04878f5..9459d87 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1331,14 +1331,14 @@ void setup_ept_dump(void)
 
 void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
 {
-    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct p2m_domain *p2m = d->arch.altp2m->p2m[i];
     struct ept_data *ept;
 
     p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_remapped_gfn = 0;
+    p2m->max_remapped_gfn = gfn_x(_gfn(0UL));
     ept = &p2m->ept;
     ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
+    d->arch.altp2m->eptp[i] = ept_get_eptp(ept);
 }
 
 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
@@ -1351,10 +1351,10 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         ept = &p2m->ept;
 
         if ( eptp == ept_get_eptp(ept) )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index fa1ad4a..970a56c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -191,14 +191,17 @@ static void p2m_teardown_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;
 
+    if ( d->arch.altp2m == NULL )
+	return;
+
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( !d->arch.altp2m_p2m[i] )
-            continue;
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         p2m_free_one(p2m);
-        d->arch.altp2m_p2m[i] = NULL;
+        d->arch.altp2m->p2m[i] = NULL;
     }
+
+    xfree(d->arch.altp2m);
 }
 
 static int p2m_init_altp2m(struct domain *d)
@@ -206,10 +209,14 @@ static int p2m_init_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;
 
-    mm_lock_init(&d->arch.altp2m_list_lock);
+    d->arch.altp2m = xzalloc(struct altp2m_domain);
+    if ( d->arch.altp2m == NULL )
+         return -ENOMEM;
+
+    mm_lock_init(&d->arch.altp2m->list_lock);
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        d->arch.altp2m->p2m[i] = p2m = p2m_init_one(d);
         if ( p2m == NULL )
         {
             p2m_teardown_altp2m(d);
@@ -1845,10 +1852,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
+             d->arch.altp2m->eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
             return -EINVAL;
 
-        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+        ap2m = d->arch.altp2m->p2m[altp2m_idx];
     }
 
     switch ( access )
@@ -2289,7 +2296,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
         if ( idx != vcpu_altp2m(v).p2midx )
         {
@@ -2374,11 +2381,11 @@ void p2m_flush_altp2m(struct domain *d)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        p2m_flush_table(d->arch.altp2m->p2m[i]);
         /* Uninit and reinit ept to force TLB shootdown */
-        ept_p2m_uninit(d->arch.altp2m_p2m[i]);
-        ept_p2m_init(d->arch.altp2m_p2m[i]);
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        ept_p2m_uninit(d->arch.altp2m->p2m[i]);
+        ept_p2m_init(d->arch.altp2m->p2m[i]);
+        d->arch.altp2m->eptp[i] = mfn_x(INVALID_MFN);
     }
 
     altp2m_list_unlock(d);
@@ -2393,7 +2400,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] == mfn_x(INVALID_MFN) )
     {
         p2m_init_altp2m_ept(d, idx);
         rc = 0;
@@ -2412,7 +2419,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
         p2m_init_altp2m_ept(d, i);
@@ -2438,17 +2445,17 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
-        p2m = d->arch.altp2m_p2m[idx];
+        p2m = d->arch.altp2m->p2m[idx];
 
         if ( !_atomic_read(p2m->active_vcpus) )
         {
-            p2m_flush_table(d->arch.altp2m_p2m[idx]);
+            p2m_flush_table(d->arch.altp2m->p2m[idx]);
             /* Uninit and reinit ept to force TLB shootdown */
-            ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
-            ept_p2m_init(d->arch.altp2m_p2m[idx]);
-            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
+            ept_p2m_uninit(d->arch.altp2m->p2m[idx]);
+            ept_p2m_init(d->arch.altp2m->p2m[idx]);
+            d->arch.altp2m->eptp[idx] = mfn_x(INVALID_MFN);
             rc = 0;
         }
     }
@@ -2472,7 +2479,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
         for_each_vcpu( d, v )
             if ( idx != vcpu_altp2m(v).p2midx )
@@ -2503,11 +2510,11 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     unsigned int page_order;
     int rc = -EINVAL;
 
-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m->eptp[idx] == mfn_x(INVALID_MFN) )
         return rc;
 
     hp2m = p2m_get_hostp2m(d);
-    ap2m = d->arch.altp2m_p2m[idx];
+    ap2m = d->arch.altp2m->p2m[idx];
 
     p2m_lock(ap2m);
 
@@ -2599,10 +2606,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
         /* Check for a dropped page that may impact this altp2m */
@@ -2623,10 +2630,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                         d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
                         continue;
 
-                    p2m = d->arch.altp2m_p2m[i];
+                    p2m = d->arch.altp2m->p2m[i];
                     p2m_lock(p2m);
                     p2m_reset_altp2m(p2m);
                     p2m_unlock(p2m);
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index c73d1d5..d6b3f90 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -24,6 +24,7 @@
 #include <xen/sched.h>
 #include <xen/vm_event.h>
 #include <xsm/xsm.h>
+#include <asm/p2m.h>
 #include <asm/altp2m.h>
 #include <asm/monitor.h>
 #include <asm/vm_event.h>
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 0090c89..65a7ead 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -24,7 +24,12 @@
 /* Alternate p2m HVM on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    return d->arch.altp2m_active;
+    return d->arch.altp2m->active;
+}
+
+static inline void set_altp2m_active(const struct domain *d, bool_t v)
+{
+    d->arch.altp2m->active = v;
 }
 
 /* Alternate p2m VCPU */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 5807a1f..3b9fcc1 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -242,6 +242,7 @@ typedef xen_domctl_cpuid_t cpuid_input_t;
 #define INVALID_ALTP2M  0xffff
 #define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
 struct p2m_domain;
+struct altp2m_domain;
 struct time_scale {
     int shift;
     u32 mul_frac;
@@ -320,10 +321,7 @@ struct arch_domain
     mm_lock_t nested_p2m_lock;
 
     /* altp2m: allow multiple copies of host p2m */
-    bool_t altp2m_active;
-    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
-    mm_lock_t altp2m_list_lock;
-    uint64_t *altp2m_eptp;
+    struct altp2m_domain *altp2m;
 
     /* NB. protected by d->event_lock and by irq_desc[irq].lock */
     struct radix_tree_root irq_pirq;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 0c5b391..cf30924 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -338,6 +338,13 @@ struct p2m_domain {
     };
 };
 
+struct altp2m_domain {
+    bool_t active;
+    struct p2m_domain *p2m[MAX_ALTP2M];
+    mm_lock_t list_lock;
+    uint64_t *eptp;
+};
+
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
@@ -778,7 +785,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 
     BUG_ON(index >= MAX_ALTP2M);
 
-    return v->domain->arch.altp2m_p2m[index];
+    return v->domain->arch.altp2m->p2m[index];
 }
 
 /* Switch alternate p2m for a single vcpu */
-- 
2.7.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* Re: [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work.
  2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work Paul Lai
@ 2016-09-26 16:01   ` Jan Beulich
  2016-09-26 17:25     ` Lai, Paul
  0 siblings, 1 reply; 6+ messages in thread
From: Jan Beulich @ 2016-09-26 16:01 UTC (permalink / raw)
  To: Paul Lai; +Cc: xen-devel, ravi.sahita, george.dunlap

>>> On 13.09.16 at 18:46, <paul.c.lai@intel.com> wrote:
> Indent goto labels by one space.
> Inline (header) altp2m functions.
> In do_altp2m_op(), during the sanity check of the passed command,
> return -ENOSYS if not a valid command.
> In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
> if the command is not recognizable.  The sanity check above should
> have triggered the return of -ENOSYS.
> Make altp2m_vcpu_emulate_ve() return an actual bool_t (rather than
> returning void).
> 
> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> ---

It is very disappointing to find that there is still no information here
on what changed from the previous version.

> @@ -5349,6 +5362,8 @@ static int do_altp2m_op(
>              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>                      _gfn(a.u.change_gfn.old_gfn),
>                      _gfn(a.u.change_gfn.new_gfn));
> +    default:
> +        ASSERT_UNREACHABLE();
>      }

And it is even worse that the bug pointed out here is still present.

>  /* emulates #VE */
> -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)

Nor did you switch to plain bool here, as was requested during both
v3 and v4 review.

Please do not resubmit this series until you have taken care of
_all_ review comments you've received so far. As with the
previous version, I'm not going to spend time looking at the other
two patches, because this fundamental requirement, despite having
been pointed out before, is still not met.

Jan




* Re: [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work.
  2016-09-26 16:01   ` Jan Beulich
@ 2016-09-26 17:25     ` Lai, Paul
  0 siblings, 0 replies; 6+ messages in thread
From: Lai, Paul @ 2016-09-26 17:25 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, ravi.sahita, george.dunlap

On Mon, Sep 26, 2016 at 10:01:50AM -0600, Jan Beulich wrote:
> >>> On 13.09.16 at 18:46, <paul.c.lai@intel.com> wrote:
> > Indent goto labels by one space.
> > Inline (header) altp2m functions.
> > In do_altp2m_op(), during the sanity check of the passed command,
> > return -ENOSYS if not a valid command.
> > In do_altp2m_op(), when evaluating a command, ASSERT_UNREACHABLE()
> > if the command is not recognizable.  The sanity check above should
> > have triggered the return of -ENOSYS.
> > Make altp2m_vcpu_emulate_ve() return an actual bool_t (rather than
> > returning void).
> > 
> > Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> > ---
> 
> It is very disappointing to find that there is still no information here
> on what changed from the previous version.
> 
> > @@ -5349,6 +5362,8 @@ static int do_altp2m_op(
> >              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
> >                      _gfn(a.u.change_gfn.old_gfn),
> >                      _gfn(a.u.change_gfn.new_gfn));
> > +    default:
> > +        ASSERT_UNREACHABLE();
> >      }
> 
> And it is even worse that the bug pointed out here is still present.

I reread the comment in 
   https://lists.xenproject.org/archives/html/xen-devel/2016-09/msg00937.html
and missed the "unintended fallthrough".
Will correct.
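
The "unintended fallthrough" in the quoted hunk is the missing `break` before
the newly added `default:` label: after `p2m_change_altp2m_gfn()` runs, control
falls straight into `ASSERT_UNREACHABLE()`. A minimal standalone sketch of the
corrected shape, using a hypothetical command value and plain `assert()` rather
than the real HVMOP_altp2m_* constants and Xen's macro:

```c
#include <assert.h>

/* Hypothetical stand-in for a real HVMOP_altp2m_* command value. */
enum { CMD_CHANGE_GFN = 1 };

static int do_op(int cmd)
{
    int rc = 0;

    switch ( cmd )
    {
    case CMD_CHANGE_GFN:
        rc = 42;   /* stands in for the p2m_change_altp2m_gfn() call */
        break;     /* the missing break: without it we fall into default: */
    default:
        /* The sanity check earlier in do_altp2m_op() should already have
         * rejected unknown commands with -ENOSYS, so this is unreachable. */
        assert(0);
    }

    return rc;
}
```

With the `break` in place, a valid command returns its result instead of
tripping the unreachable assertion.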

> 
> >  /* emulates #VE */
> > -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> > +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
> 
> Nor did you switch to plain bool here, as was requested during both
> v3 and v4 review.

I did not know "bool" existed and thought "plain bool" meant "bool_t".
Having looked around, I see that "bool" is better defined ({0,1} instead
of {0, !0}), and I now understand your motivation for requiring it.
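
For context on the distinction being discussed: a character-typedef bool_t
keeps whatever raw value is stored in it, so "true" is merely any non-zero
value, whereas C99 bool normalizes every converted value to exactly 0 or 1.
A small illustrative sketch (the bool_t typedef here is a stand-in for the
historical Xen typedef, not its actual definition):

```c
#include <stdbool.h>

/* Stand-in for the historical Xen typedef of bool_t. */
typedef unsigned char bool_t;

/* bool_t preserves the stored value: {0, !0} semantics. */
static bool_t as_bool_t(int v) { return (bool_t)v; }

/* C99 bool normalizes on conversion: any non-zero value becomes exactly 1. */
static bool as_bool(int v) { return (bool)v; }
```

This is why comparisons like `flag == 1` are only safe with plain bool.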
> 
> Please do not resubmit this series until you have taken care of
> _all_ review comments you've received so far. As with the
> previous version I'm not going to spend time looking at the other
> two patches due to this fundamental requirement, despite
> having been pointed out before, not being met.
> 
> Jan
> 



end of thread, other threads:[~2016-09-26 17:23 UTC | newest]

Thread overview: 6+ messages
2016-09-13 16:46 [PATCH Altp2m cleanup v5 0/3] Cleaning up altp2m code Paul Lai
2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 1/3] altp2m cleanup work Paul Lai
2016-09-26 16:01   ` Jan Beulich
2016-09-26 17:25     ` Lai, Paul
2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 2/3] Move altp2m specific functions to altp2m files Paul Lai
2016-09-13 16:46 ` [PATCH Altp2m cleanup v5 3/3] Making altp2m domain dynamically allocated Paul Lai
