* [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code
@ 2016-09-07 22:04 Paul Lai
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling Paul Lai
                   ` (3 more replies)
  0 siblings, 4 replies; 16+ messages in thread
From: Paul Lai @ 2016-09-07 22:04 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

This series incorporates review comments generated by the v3 patch series.

Older comments, which motivated this code cleanup effort, can be found at the following URLs:
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04323.html
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04454.html
   https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04530.html


Jan Beulich (1):
  x86/HVM: adjust feature checking in MSR intercept handling

Paul Lai (3):
  altp2m cleanup work
  Move altp2m specific functions to altp2m files.
  Making altp2m struct dynamically allocated.

 xen/arch/x86/hvm/hvm.c            | 142 ++++++++++++++++++++++++--------------
 xen/arch/x86/hvm/vmx/vmx.c        |  42 +++++++----
 xen/arch/x86/mm/altp2m.c          |  51 ++++++++++++++
 xen/arch/x86/mm/hap/hap.c         |  37 +---------
 xen/arch/x86/mm/mem_sharing.c     |   2 +-
 xen/arch/x86/mm/mm-locks.h        |   4 +-
 xen/arch/x86/mm/p2m-ept.c         |  39 +++++++++++
 xen/arch/x86/mm/p2m.c             | 106 ++++++++++------------------
 xen/common/monitor.c              |   1 +
 xen/include/asm-x86/altp2m.h      |  11 ++-
 xen/include/asm-x86/domain.h      |   6 +-
 xen/include/asm-x86/hvm/hvm.h     |  29 +++++++-
 xen/include/asm-x86/hvm/vmx/vmx.h |   3 +
 xen/include/asm-x86/p2m.h         |  18 +++--
 14 files changed, 304 insertions(+), 187 deletions(-)

-- 
2.7.4



* [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling
  2016-09-07 22:04 [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code Paul Lai
@ 2016-09-07 22:04 ` Paul Lai
  2016-09-08 10:32   ` Jan Beulich
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work Paul Lai
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 16+ messages in thread
From: Paul Lai @ 2016-09-07 22:04 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

From: Jan Beulich <jbeulich@suse.com>

Consistently consult hvm_cpuid(). With that, BNDCFGS gets better
handled outside of VMX specific code, just like XSS. Don't needlessly
check for MTRR support when the MSR being accessed clearly is not an
MTRR one.
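
[Editor's note: a condensed illustration of the pattern being applied,
lifted from the MSR_IA32_XSS hunk below (not new code) -- the
availability check is derived from the guest's CPUID view via
hvm_cpuid() rather than from a raw host capability such as
cpu_has_xsaves, so MSR behaviour stays consistent with what the guest
sees in CPUID:

    case MSR_IA32_XSS:
        ecx = 1;
        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL); /* guest's view */
        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) ) /* hidden? */
            goto gp_fault;              /* then the MSR #GPs, as on hardware */
        *msr_content = v->arch.hvm_vcpu.msr_xss;
        break;
]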

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 xen/arch/x86/hvm/hvm.c        | 88 ++++++++++++++++++++++++++++++++-----------
 xen/arch/x86/hvm/vmx/vmx.c    | 40 ++++++++++++++------
 xen/include/asm-x86/hvm/hvm.h | 10 +++++
 3 files changed, 103 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 98f0740..787f055 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -309,6 +309,14 @@ int hvm_set_guest_pat(struct vcpu *v, u64 guest_pat)
     return 1;
 }
 
+bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+    return hvm_funcs.set_guest_bndcfgs &&
+           is_canonical_address(val) &&
+           !(val & IA32_BNDCFGS_RESERVED) &&
+           hvm_funcs.set_guest_bndcfgs(v, val);
+}
+
 /*
  * Get the ratio to scale host TSC frequency to gtsc_khz. zero will be
  * returned if TSC scaling is unavailable or ratio cannot be handled
@@ -3667,28 +3675,30 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
 {
     struct vcpu *v = current;
     uint64_t *var_range_base, *fixed_range_base;
-    bool_t mtrr;
-    unsigned int edx, index;
+    bool mtrr = false;
     int ret = X86EMUL_OKAY;
 
     var_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.var_ranges;
     fixed_range_base = (uint64_t *)v->arch.hvm_vcpu.mtrr.fixed_ranges;
 
-    hvm_cpuid(1, NULL, NULL, NULL, &edx);
-    mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
+    if ( msr == MSR_MTRRcap ||
+         (msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType) )
+    {
+        unsigned int edx;
+
+        hvm_cpuid(1, NULL, NULL, NULL, &edx);
+        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
+            mtrr = true;
+    }
 
     switch ( msr )
     {
+        unsigned int eax, ebx, ecx, index;
+
     case MSR_EFER:
         *msr_content = v->arch.hvm_vcpu.guest_efer;
         break;
 
-    case MSR_IA32_XSS:
-        if ( !cpu_has_xsaves )
-            goto gp_fault;
-        *msr_content = v->arch.hvm_vcpu.msr_xss;
-        break;
-
     case MSR_IA32_TSC:
         *msr_content = _hvm_rdtsc_intercept();
         break;
@@ -3754,6 +3764,22 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
         *msr_content = var_range_base[index];
         break;
 
+    case MSR_IA32_XSS:
+        ecx = 1;
+        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
+        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) )
+            goto gp_fault;
+        *msr_content = v->arch.hvm_vcpu.msr_xss;
+        break;
+
+    case MSR_IA32_BNDCFGS:
+        ecx = 0;
+        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
+        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+             !hvm_get_guest_bndcfgs(v, msr_content) )
+            goto gp_fault;
+        break;
+
     case MSR_K8_ENABLE_C1E:
     case MSR_AMD64_NB_CFG:
          /*
@@ -3790,15 +3816,20 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
                             bool_t may_defer)
 {
     struct vcpu *v = current;
-    bool_t mtrr;
-    unsigned int edx, index;
+    bool mtrr = false;
     int ret = X86EMUL_OKAY;
 
     HVMTRACE_3D(MSR_WRITE, msr,
                (uint32_t)msr_content, (uint32_t)(msr_content >> 32));
 
-    hvm_cpuid(1, NULL, NULL, NULL, &edx);
-    mtrr = !!(edx & cpufeat_mask(X86_FEATURE_MTRR));
+    if ( msr >= MSR_IA32_MTRR_PHYSBASE(0) && msr <= MSR_MTRRdefType )
+    {
+        unsigned int edx;
+
+        hvm_cpuid(1, NULL, NULL, NULL, &edx);
+        if ( edx & cpufeat_mask(X86_FEATURE_MTRR) )
+            mtrr = true;
+    }
 
     if ( may_defer && unlikely(monitored_msr(v->domain, msr)) )
     {
@@ -3815,18 +3846,13 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 
     switch ( msr )
     {
+        unsigned int eax, ebx, ecx, index;
+
     case MSR_EFER:
         if ( hvm_set_efer(msr_content) )
            return X86EMUL_EXCEPTION;
         break;
 
-    case MSR_IA32_XSS:
-        /* No XSS features currently supported for guests. */
-        if ( !cpu_has_xsaves || msr_content != 0 )
-            goto gp_fault;
-        v->arch.hvm_vcpu.msr_xss = msr_content;
-        break;
-
     case MSR_IA32_TSC:
         hvm_set_guest_tsc(v, msr_content);
         break;
@@ -3863,9 +3889,8 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
         break;
 
     case MSR_MTRRcap:
-        if ( !mtrr )
-            goto gp_fault;
         goto gp_fault;
+
     case MSR_MTRRdefType:
         if ( !mtrr )
             goto gp_fault;
@@ -3905,6 +3930,23 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
             goto gp_fault;
         break;
 
+    case MSR_IA32_XSS:
+        ecx = 1;
+        hvm_cpuid(XSTATE_CPUID, &eax, NULL, &ecx, NULL);
+        /* No XSS features currently supported for guests. */
+        if ( !(eax & cpufeat_mask(X86_FEATURE_XSAVES)) || msr_content != 0 )
+            goto gp_fault;
+        v->arch.hvm_vcpu.msr_xss = msr_content;
+        break;
+
+    case MSR_IA32_BNDCFGS:
+        ecx = 0;
+        hvm_cpuid(7, NULL, &ebx, &ecx, NULL);
+        if ( !(ebx & cpufeat_mask(X86_FEATURE_MPX)) ||
+             !hvm_set_guest_bndcfgs(v, msr_content) )
+            goto gp_fault;
+        break;
+
     case MSR_AMD64_NB_CFG:
         /* ignore the write */
         break;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 306f482..2759e6f 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -1224,6 +1224,28 @@ static int vmx_get_guest_pat(struct vcpu *v, u64 *gpat)
     return 1;
 }
 
+static bool vmx_set_guest_bndcfgs(struct vcpu *v, u64 val)
+{
+    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
+
+    vmx_vmcs_enter(v);
+    __vmwrite(GUEST_BNDCFGS, val);
+    vmx_vmcs_exit(v);
+
+    return true;
+}
+
+static bool vmx_get_guest_bndcfgs(struct vcpu *v, u64 *val)
+{
+    ASSERT(cpu_has_mpx && cpu_has_vmx_mpx);
+
+    vmx_vmcs_enter(v);
+    __vmread(GUEST_BNDCFGS, val);
+    vmx_vmcs_exit(v);
+
+    return true;
+}
+
 static void vmx_handle_cd(struct vcpu *v, unsigned long value)
 {
     if ( !paging_mode_hap(v->domain) )
@@ -2323,6 +2345,12 @@ const struct hvm_function_table * __init start_vmx(void)
     if ( cpu_has_vmx_tsc_scaling )
         vmx_function_table.tsc_scaling.ratio_frac_bits = 48;
 
+    if ( cpu_has_mpx && cpu_has_vmx_mpx )
+    {
+        vmx_function_table.set_guest_bndcfgs = vmx_set_guest_bndcfgs;
+        vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
+    }
+
     setup_vmcs_dump();
 
     return &vmx_function_table;
@@ -2650,11 +2678,6 @@ static int vmx_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     case MSR_IA32_DEBUGCTLMSR:
         __vmread(GUEST_IA32_DEBUGCTL, msr_content);
         break;
-    case MSR_IA32_BNDCFGS:
-        if ( !cpu_has_mpx || !cpu_has_vmx_mpx )
-            goto gp_fault;
-        __vmread(GUEST_BNDCFGS, msr_content);
-        break;
     case MSR_IA32_FEATURE_CONTROL:
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_VMFUNC:
         if ( !nvmx_msr_read_intercept(msr, msr_content) )
@@ -2881,13 +2904,6 @@ static int vmx_msr_write_intercept(unsigned int msr, uint64_t msr_content)
 
         break;
     }
-    case MSR_IA32_BNDCFGS:
-        if ( !cpu_has_mpx || !cpu_has_vmx_mpx ||
-             !is_canonical_address(msr_content) ||
-             (msr_content & IA32_BNDCFGS_RESERVED) )
-            goto gp_fault;
-        __vmwrite(GUEST_BNDCFGS, msr_content);
-        break;
     case MSR_IA32_FEATURE_CONTROL:
     case MSR_IA32_VMX_BASIC...MSR_IA32_VMX_TRUE_ENTRY_CTLS:
         if ( !nvmx_msr_write_intercept(msr, msr_content) )
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 5d463e0..81b60d5 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -147,6 +147,9 @@ struct hvm_function_table {
     int  (*get_guest_pat)(struct vcpu *v, u64 *);
     int  (*set_guest_pat)(struct vcpu *v, u64);
 
+    bool (*get_guest_bndcfgs)(struct vcpu *v, u64 *);
+    bool (*set_guest_bndcfgs)(struct vcpu *v, u64);
+
     void (*set_tsc_offset)(struct vcpu *v, u64 offset, u64 at_tsc);
 
     void (*inject_trap)(const struct hvm_trap *trap);
@@ -383,6 +386,13 @@ static inline unsigned long hvm_get_shadow_gs_base(struct vcpu *v)
     return hvm_funcs.get_shadow_gs_base(v);
 }
 
+static inline bool hvm_get_guest_bndcfgs(struct vcpu *v, u64 *val)
+{
+    return hvm_funcs.get_guest_bndcfgs &&
+           hvm_funcs.get_guest_bndcfgs(v, val);
+}
+
+bool hvm_set_guest_bndcfgs(struct vcpu *v, u64 val);
 
 #define has_hvm_params(d) \
     ((d)->arch.hvm_domain.params != NULL)
-- 
2.7.4



* [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-07 22:04 [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code Paul Lai
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling Paul Lai
@ 2016-09-07 22:04 ` Paul Lai
  2016-09-08 14:46   ` Jan Beulich
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files Paul Lai
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 4/4] Making altp2m struct dynamically allocated Paul Lai
  3 siblings, 1 reply; 16+ messages in thread
From: Paul Lai @ 2016-09-07 22:04 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Indent goto labels by one space
Inline (header) altp2m functions
Define default behavior in switch
Define max and min for the range of altp2m macro-defined values
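
[Editor's note: a condensed sketch of the "default behavior in switch"
item as it appears in the do_altp2m_op() hunk below -- unknown sub-ops
are rejected explicitly rather than via a min/max range check:

    switch ( a.cmd )
    {
    case HVMOP_altp2m_get_domain_state:
    /* ... the other supported sub-ops ... */
    case HVMOP_altp2m_change_gfn:
        break;              /* known sub-op; handled further down */
    default:
        return -ENOSYS;     /* anything else is not implemented */
    }
]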

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/hvm/hvm.c        | 46 ++++++++++++++++++++-----------------------
 xen/include/asm-x86/hvm/hvm.h | 19 +++++++++++++++---
 2 files changed, 37 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 787f055..e63ef0b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1942,11 +1942,11 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
      * Otherwise, this is an error condition. */
     rc = fall_through;
 
-out_put_gfn:
+ out_put_gfn:
     __put_gfn(p2m, gfn);
     if ( ap2m_active )
         __put_gfn(hostp2m, gfn);
-out:
+ out:
     /* All of these are delayed until we exit, since we might 
      * sleep on event ring wait queues, and we must not hold
      * locks in such circumstance */
@@ -5291,12 +5291,25 @@ static int do_altp2m_op(
         return -EFAULT;
 
     if ( a.pad1 || a.pad2 ||
-         (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) ||
-         (a.cmd < HVMOP_altp2m_get_domain_state) ||
-         (a.cmd > HVMOP_altp2m_change_gfn) )
+        (a.version != HVMOP_ALTP2M_INTERFACE_VERSION) )
         return -EINVAL;
 
-    d = (a.cmd != HVMOP_altp2m_vcpu_enable_notify) ?
+    switch ( a.cmd )
+    {
+    case HVMOP_altp2m_get_domain_state:
+    case HVMOP_altp2m_set_domain_state:
+    case HVMOP_altp2m_vcpu_enable_notify:
+    case HVMOP_altp2m_create_p2m:
+    case HVMOP_altp2m_destroy_p2m:
+    case HVMOP_altp2m_switch_p2m:
+    case HVMOP_altp2m_set_mem_access:
+    case HVMOP_altp2m_change_gfn:
+        break;
+    default:
+        return -ENOSYS;
+    }
+
+    d = ( a.cmd != HVMOP_altp2m_vcpu_enable_notify ) ?
         rcu_lock_domain_by_any_id(a.domain) : rcu_lock_current_domain();
 
     if ( d == NULL )
@@ -5413,6 +5426,8 @@ static int do_altp2m_op(
             rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                     _gfn(a.u.change_gfn.old_gfn),
                     _gfn(a.u.change_gfn.new_gfn));
+    default:
+        ASSERT_UNREACHABLE();
     }
 
  out:
@@ -5937,25 +5952,6 @@ void hvm_toggle_singlestep(struct vcpu *v)
     v->arch.hvm_vcpu.single_step = !v->arch.hvm_vcpu.single_step;
 }
 
-void altp2m_vcpu_update_p2m(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_p2m )
-        hvm_funcs.altp2m_vcpu_update_p2m(v);
-}
-
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
-        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
-}
-
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
-{
-    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
-        return hvm_funcs.altp2m_vcpu_emulate_ve(v);
-    return 0;
-}
-
 int hvm_set_mode(struct vcpu *v, int mode)
 {
 
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 81b60d5..ed043b2 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -599,13 +599,26 @@ static inline bool_t hvm_altp2m_supported(void)
 }
 
 /* updates the current hardware p2m */
-void altp2m_vcpu_update_p2m(struct vcpu *v);
+static inline void altp2m_vcpu_update_p2m(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_p2m )
+        hvm_funcs.altp2m_vcpu_update_p2m(v);
+}
 
 /* updates VMCS fields related to VMFUNC and #VE */
-void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v);
+static inline void altp2m_vcpu_update_vmfunc_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_update_vmfunc_ve )
+        hvm_funcs.altp2m_vcpu_update_vmfunc_ve(v);
+}
 
 /* emulates #VE */
-bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
+static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)
+{
+    if ( hvm_funcs.altp2m_vcpu_emulate_ve )
+        return hvm_funcs.altp2m_vcpu_emulate_ve(v);
+    return false;
+}
 
 /* Check CR4/EFER values */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
-- 
2.7.4



* [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files.
  2016-09-07 22:04 [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code Paul Lai
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling Paul Lai
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work Paul Lai
@ 2016-09-07 22:04 ` Paul Lai
  2016-09-08 15:02   ` Jan Beulich
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 4/4] Making altp2m struct dynamically allocated Paul Lai
  3 siblings, 1 reply; 16+ messages in thread
From: Paul Lai @ 2016-09-07 22:04 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Move altp2m specific functions to altp2m files.  This makes the code
a little easier to read.

Also moving ept code to ept specific files as requested in:
  https://lists.xenproject.org/archives/html/xen-devel/2015-07/msg04323.html

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
---
 xen/arch/x86/mm/altp2m.c          | 51 +++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/mm/hap/hap.c         | 37 +++-------------------------
 xen/arch/x86/mm/p2m-ept.c         | 39 ++++++++++++++++++++++++++++++
 xen/arch/x86/mm/p2m.c             | 45 +++-------------------------------
 xen/include/asm-x86/altp2m.h      |  4 ++-
 xen/include/asm-x86/hvm/vmx/vmx.h |  3 +++
 xen/include/asm-x86/p2m.h         |  9 +++----
 7 files changed, 105 insertions(+), 83 deletions(-)

diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 930bdc2..bf3f46e 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -17,6 +17,7 @@
 
 #include <asm/hvm/support.h>
 #include <asm/hvm/hvm.h>
+#include <asm/domain.h>
 #include <asm/p2m.h>
 #include <asm/altp2m.h>
 
@@ -65,6 +66,56 @@ altp2m_vcpu_destroy(struct vcpu *v)
         vcpu_unpause(v);
 }
 
+int
+altp2m_domain_init(struct domain *d)
+{
+    int rc;
+    unsigned int i;
+
+    if ( !hvm_altp2m_supported() )
+    {
+        rc = 0;
+        goto out;
+    }
+    /* Init alternate p2m data. */
+    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
+    {
+        rc = -ENOMEM;
+        goto out;
+    }
+
+    for ( i = 0; i < MAX_EPTP; i++ )
+        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+        if ( rc != 0 )
+           goto out;
+    }
+
+    d->arch.altp2m_active = 0;
+ out:
+    return rc;
+}
+
+void
+altp2m_domain_teardown(struct domain *d)
+{
+    unsigned int i;
+
+    if ( !hvm_altp2m_supported() )
+	return;
+
+    d->arch.altp2m_active = 0;
+
+    free_xenheap_page(d->arch.altp2m_eptp);
+    d->arch.altp2m_eptp = NULL;
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+        p2m_teardown(d->arch.altp2m_p2m[i]);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 3218fa2..283ec38 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -37,6 +37,7 @@
 #include <asm/hap.h>
 #include <asm/paging.h>
 #include <asm/p2m.h>
+#include <asm/altp2m.h>
 #include <asm/domain.h>
 #include <xen/numa.h>
 #include <asm/hvm/nestedhvm.h>
@@ -499,27 +500,7 @@ int hap_enable(struct domain *d, u32 mode)
            goto out;
     }
 
-    if ( hvm_altp2m_supported() )
-    {
-        /* Init alternate p2m data */
-        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
-        {
-            rv = -ENOMEM;
-            goto out;
-        }
-
-        for ( i = 0; i < MAX_EPTP; i++ )
-            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
-
-        for ( i = 0; i < MAX_ALTP2M; i++ )
-        {
-            rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
-            if ( rv != 0 )
-               goto out;
-        }
-
-        d->arch.altp2m_active = 0;
-    }
+    rv = altp2m_domain_init(d);
 
     /* Now let other users see the new mode */
     d->arch.paging.mode = mode | PG_HAP_enable;
@@ -533,19 +514,7 @@ void hap_final_teardown(struct domain *d)
 {
     unsigned int i;
 
-    if ( hvm_altp2m_supported() )
-    {
-        d->arch.altp2m_active = 0;
-
-        if ( d->arch.altp2m_eptp )
-        {
-            free_xenheap_page(d->arch.altp2m_eptp);
-            d->arch.altp2m_eptp = NULL;
-        }
-
-        for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i]);
-    }
+    altp2m_domain_teardown(d);
 
     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 13cab24..7ae3168 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1329,6 +1329,45 @@ void setup_ept_dump(void)
     register_keyhandler('D', ept_dump_p2m_table, "dump VT-x EPT tables", 0);
 }
 
+void p2m_init_altp2m_ept( struct domain *d, unsigned int i)
+{
+    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct ept_data *ept;
+
+    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
+    p2m->max_remapped_gfn = 0;
+    ept = &p2m->ept;
+    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
+    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
+}
+
+unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
+{
+    struct p2m_domain *p2m;
+    struct ept_data *ept;
+    unsigned int i;
+
+    altp2m_list_lock(d);
+
+    for ( i = 0; i < MAX_ALTP2M; i++ )
+    {
+        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+            continue;
+
+        p2m = d->arch.altp2m_p2m[i];
+        ept = &p2m->ept;
+
+        if ( eptp == ept_get_eptp(ept) )
+            goto out;
+    }
+
+    i = INVALID_ALTP2M;
+
+ out:
+    altp2m_list_unlock(d);
+    return i;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 7d14c3b..9a97962 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -196,8 +196,8 @@ static void p2m_teardown_altp2m(struct domain *d)
         if ( !d->arch.altp2m_p2m[i] )
             continue;
         p2m = d->arch.altp2m_p2m[i];
-        d->arch.altp2m_p2m[i] = NULL;
         p2m_free_one(p2m);
+        d->arch.altp2m_p2m[i] = NULL;
     }
 }
 
@@ -2261,33 +2261,6 @@ int unmap_mmio_regions(struct domain *d,
     return i == nr ? 0 : i ?: ret;
 }
 
-unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
-{
-    struct p2m_domain *p2m;
-    struct ept_data *ept;
-    unsigned int i;
-
-    altp2m_list_lock(d);
-
-    for ( i = 0; i < MAX_ALTP2M; i++ )
-    {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
-            continue;
-
-        p2m = d->arch.altp2m_p2m[i];
-        ept = &p2m->ept;
-
-        if ( eptp == ept_get_eptp(ept) )
-            goto out;
-    }
-
-    i = INVALID_ALTP2M;
-
- out:
-    altp2m_list_unlock(d);
-    return i;
-}
-
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 {
     struct domain *d = v->domain;
@@ -2393,18 +2366,6 @@ void p2m_flush_altp2m(struct domain *d)
     altp2m_list_unlock(d);
 }
 
-static void p2m_init_altp2m_helper(struct domain *d, unsigned int i)
-{
-    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
-    struct ept_data *ept;
-
-    p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_remapped_gfn = 0;
-    ept = &p2m->ept;
-    ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
-}
-
 int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 {
     int rc = -EINVAL;
@@ -2416,7 +2377,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
     {
-        p2m_init_altp2m_helper(d, idx);
+        p2m_init_altp2m_ept(d, idx);
         rc = 0;
     }
 
@@ -2436,7 +2397,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
         if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
-        p2m_init_altp2m_helper(d, i);
+        p2m_init_altp2m_ept(d, i);
         *idx = i;
         rc = 0;
 
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 64c7618..0090c89 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -18,7 +18,6 @@
 #ifndef __ASM_X86_ALTP2M_H
 #define __ASM_X86_ALTP2M_H
 
-#include <xen/types.h>
 #include <xen/sched.h>         /* for struct vcpu, struct domain */
 #include <asm/hvm/vcpu.h>      /* for vcpu_altp2m */
 
@@ -38,4 +37,7 @@ static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
     return vcpu_altp2m(v).p2midx;
 }
 
+int altp2m_domain_init(struct domain *d);
+void altp2m_domain_teardown(struct domain *d);
+
 #endif /* __ASM_X86_ALTP2M_H */
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 4cdd9b1..4d247b5 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -561,6 +561,9 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
 void setup_ept_dump(void);
+void p2m_init_altp2m_ept( struct domain *d, unsigned int i);
+/* Locate an alternate p2m by its EPTP */
+unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
 
 void update_guest_eip(void);
 
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index 9fc9ead..c138522 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -23,8 +23,8 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef _XEN_P2M_H
-#define _XEN_P2M_H
+#ifndef _XEN_ASM_X86_P2M_H
+#define _XEN_ASM_X86_P2M_H
 
 #include <xen/config.h>
 #include <xen/paging.h>
@@ -784,9 +784,6 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     return v->domain->arch.altp2m_p2m[index];
 }
 
-/* Locate an alternate p2m by its EPTP */
-unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp);
-
 /* Switch alternate p2m for a single vcpu */
 bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
 
@@ -848,7 +845,7 @@ static inline unsigned int p2m_get_iommu_flags(p2m_type_t p2mt)
     return flags;
 }
 
-#endif /* _XEN_P2M_H */
+#endif /* _XEN_ASM_X86_P2M_H */
 
 /*
  * Local variables:
-- 
2.7.4



* [PATCH Altp2m cleanup v4 4/4] Making altp2m struct dynamically allocated.
  2016-09-07 22:04 [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code Paul Lai
                   ` (2 preceding siblings ...)
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files Paul Lai
@ 2016-09-07 22:04 ` Paul Lai
  3 siblings, 0 replies; 16+ messages in thread
From: Paul Lai @ 2016-09-07 22:04 UTC (permalink / raw)
  To: xen-devel; +Cc: ravi.sahita, george.dunlap, jbeulich

Adopt Ravi Sahita's dynamically allocated altp2m structs.
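
[Editor's note: the shape of the change, condensed from the hunks below
-- the four per-domain altp2m fields move behind one dynamically
allocated struct:

    struct altp2m_domain {
        bool_t active;
        struct p2m_domain *p2m[MAX_ALTP2M];
        mm_lock_t list_lock;
        uint64_t *eptp;
    };

    /* allocated in p2m_init_altp2m(), freed in p2m_teardown_altp2m() */
    d->arch.altp2m = xzalloc(struct altp2m_domain);
    if ( d->arch.altp2m == NULL )
         return -ENOMEM;
]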

Signed-off-by: Paul Lai <paul.c.lai@intel.com>
Reviewed-by: Ravi Sahita <ravi.sahita@intel.com>
---
 xen/arch/x86/hvm/hvm.c        |  8 +++---
 xen/arch/x86/hvm/vmx/vmx.c    |  2 +-
 xen/arch/x86/mm/altp2m.c      | 16 +++++------
 xen/arch/x86/mm/mem_sharing.c |  2 +-
 xen/arch/x86/mm/mm-locks.h    |  4 +--
 xen/arch/x86/mm/p2m-ept.c     | 10 +++----
 xen/arch/x86/mm/p2m.c         | 63 ++++++++++++++++++++++++-------------------
 xen/common/monitor.c          |  1 +
 xen/include/asm-x86/altp2m.h  |  7 ++++-
 xen/include/asm-x86/domain.h  |  6 ++---
 xen/include/asm-x86/p2m.h     |  9 ++++++-
 11 files changed, 73 insertions(+), 55 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e63ef0b..e378b9c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5323,7 +5323,7 @@ static int do_altp2m_op(
 
     if ( (a.cmd != HVMOP_altp2m_get_domain_state) &&
          (a.cmd != HVMOP_altp2m_set_domain_state) &&
-         !d->arch.altp2m_active )
+         !altp2m_active(d) )
     {
         rc = -EOPNOTSUPP;
         goto out;
@@ -5357,11 +5357,11 @@ static int do_altp2m_op(
             break;
         }
 
-        ostate = d->arch.altp2m_active;
-        d->arch.altp2m_active = !!a.u.domain_state.state;
+        ostate = altp2m_active(d);
+        set_altp2m_active(d, a.u.domain_state.state);
 
         /* If the alternate p2m state has changed, handle appropriately */
-        if ( d->arch.altp2m_active != ostate &&
+        if ( altp2m_active(d) != ostate &&
              (ostate || !(rc = p2m_init_altp2m_by_id(d, 0))) )
         {
             for_each_vcpu( d, v )
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 2759e6f..45ec308 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2043,7 +2043,7 @@ static void vmx_vcpu_update_vmfunc_ve(struct vcpu *v)
     {
         v->arch.hvm_vmx.secondary_exec_control |= mask;
         __vmwrite(VM_FUNCTION_CONTROL, VMX_VMFUNC_EPTP_SWITCHING);
-        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m_eptp));
+        __vmwrite(EPTP_LIST_ADDR, virt_to_maddr(d->arch.altp2m->eptp));
 
         if ( cpu_has_vmx_virt_exceptions )
         {
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index bf3f46e..6803b36 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -78,23 +78,23 @@ altp2m_domain_init(struct domain *d)
         goto out;
     }
     /* Init alternate p2m data. */
-    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
+    if ( (d->arch.altp2m->eptp = alloc_xenheap_page()) == NULL )
     {
         rc = -ENOMEM;
         goto out;
     }
 
     for ( i = 0; i < MAX_EPTP; i++ )
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        d->arch.altp2m->eptp[i] = mfn_x(INVALID_MFN);
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+        rc = p2m_alloc_table(d->arch.altp2m->p2m[i]);
         if ( rc != 0 )
            goto out;
     }
 
-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, false);
  out:
     return rc;
 }
@@ -107,13 +107,13 @@ altp2m_domain_teardown(struct domain *d)
     if ( !hvm_altp2m_supported() )
 	return;
 
-    d->arch.altp2m_active = 0;
+    set_altp2m_active(d, false);
 
-    free_xenheap_page(d->arch.altp2m_eptp);
-    d->arch.altp2m_eptp = NULL;
+    free_xenheap_page(d->arch.altp2m->eptp);
+    d->arch.altp2m->eptp = NULL;
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
-        p2m_teardown(d->arch.altp2m_p2m[i]);
+        p2m_teardown(d->arch.altp2m->p2m[i]);
 }
 
 /*
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index e7c6b74..3fd8380 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -877,7 +877,7 @@ int mem_sharing_nominate_page(struct domain *d,
 
         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            ap2m = d->arch.altp2m_p2m[i];
+            ap2m = d->arch.altp2m->p2m[i];
             if ( !ap2m )
                 continue;
 
diff --git a/xen/arch/x86/mm/mm-locks.h b/xen/arch/x86/mm/mm-locks.h
index 74fdfc1..71d3891 100644
--- a/xen/arch/x86/mm/mm-locks.h
+++ b/xen/arch/x86/mm/mm-locks.h
@@ -266,8 +266,8 @@ declare_mm_order_constraint(per_page_sharing)
  */
 
 declare_mm_lock(altp2mlist)
-#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m_list_lock)
-#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m_list_lock)
+#define altp2m_list_lock(d)   mm_lock(altp2mlist, &(d)->arch.altp2m->list_lock)
+#define altp2m_list_unlock(d) mm_unlock(&(d)->arch.altp2m->list_lock)
 
 /* P2M lock (per-altp2m-table)
  *
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 7ae3168..a54c1cf 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1331,14 +1331,14 @@ void setup_ept_dump(void)
 
 void p2m_init_altp2m_ept( struct domain *d, unsigned int i)
 {
-    struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+    struct p2m_domain *p2m = d->arch.altp2m->p2m[i];
     struct ept_data *ept;
 
     p2m->min_remapped_gfn = gfn_x(INVALID_GFN);
-    p2m->max_remapped_gfn = 0;
+    p2m->max_remapped_gfn = gfn_x(_gfn(0UL));
     ept = &p2m->ept;
     ept->asr = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[i] = ept_get_eptp(ept);
+    d->arch.altp2m->eptp[i] = ept_get_eptp(ept);
 }
 
 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
@@ -1351,10 +1351,10 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         ept = &p2m->ept;
 
         if ( eptp == ept_get_eptp(ept) )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 9a97962..b6ede25 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -191,14 +191,17 @@ static void p2m_teardown_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;
 
+    if ( d->arch.altp2m == NULL )
+	return;
+
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( !d->arch.altp2m_p2m[i] )
-            continue;
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         p2m_free_one(p2m);
-        d->arch.altp2m_p2m[i] = NULL;
+        d->arch.altp2m->p2m[i] = NULL;
     }
+
+    xfree(d->arch.altp2m);
 }
 
 static int p2m_init_altp2m(struct domain *d)
@@ -206,10 +209,14 @@ static int p2m_init_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;
 
-    mm_lock_init(&d->arch.altp2m_list_lock);
+    d->arch.altp2m = xzalloc(struct altp2m_domain);
+    if ( d->arch.altp2m == NULL )
+         return -ENOMEM;
+
+    mm_lock_init(&d->arch.altp2m->list_lock);
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
+        d->arch.altp2m->p2m[i] = p2m = p2m_init_one(d);
         if ( p2m == NULL )
         {
             p2m_teardown_altp2m(d);
@@ -1827,10 +1834,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= MAX_ALTP2M ||
-             d->arch.altp2m_eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
+             d->arch.altp2m->eptp[altp2m_idx] == mfn_x(INVALID_MFN) )
             return -EINVAL;
 
-        ap2m = d->arch.altp2m_p2m[altp2m_idx];
+        ap2m = d->arch.altp2m->p2m[altp2m_idx];
     }
 
     switch ( access )
@@ -2271,7 +2278,7 @@ bool_t p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
         if ( idx != vcpu_altp2m(v).p2midx )
         {
@@ -2356,11 +2363,11 @@ void p2m_flush_altp2m(struct domain *d)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        p2m_flush_table(d->arch.altp2m_p2m[i]);
+        p2m_flush_table(d->arch.altp2m->p2m[i]);
         /* Uninit and reinit ept to force TLB shootdown */
-        ept_p2m_uninit(d->arch.altp2m_p2m[i]);
-        ept_p2m_init(d->arch.altp2m_p2m[i]);
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
+        ept_p2m_uninit(d->arch.altp2m->p2m[i]);
+        ept_p2m_init(d->arch.altp2m->p2m[i]);
+        d->arch.altp2m->eptp[i] = mfn_x(INVALID_MFN);
     }
 
     altp2m_list_unlock(d);
@@ -2375,7 +2382,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] == mfn_x(INVALID_MFN) )
     {
         p2m_init_altp2m_ept(d, idx);
         rc = 0;
@@ -2394,7 +2401,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx)
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] != mfn_x(INVALID_MFN) )
             continue;
 
         p2m_init_altp2m_ept(d, i);
@@ -2420,17 +2427,17 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
-        p2m = d->arch.altp2m_p2m[idx];
+        p2m = d->arch.altp2m->p2m[idx];
 
         if ( !_atomic_read(p2m->active_vcpus) )
         {
-            p2m_flush_table(d->arch.altp2m_p2m[idx]);
+            p2m_flush_table(d->arch.altp2m->p2m[idx]);
             /* Uninit and reinit ept to force TLB shootdown */
-            ept_p2m_uninit(d->arch.altp2m_p2m[idx]);
-            ept_p2m_init(d->arch.altp2m_p2m[idx]);
-            d->arch.altp2m_eptp[idx] = mfn_x(INVALID_MFN);
+            ept_p2m_uninit(d->arch.altp2m->p2m[idx]);
+            ept_p2m_init(d->arch.altp2m->p2m[idx]);
+            d->arch.altp2m->eptp[idx] = mfn_x(INVALID_MFN);
             rc = 0;
         }
     }
@@ -2454,7 +2461,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
 
     altp2m_list_lock(d);
 
-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( d->arch.altp2m->eptp[idx] != mfn_x(INVALID_MFN) )
     {
         for_each_vcpu( d, v )
             if ( idx != vcpu_altp2m(v).p2midx )
@@ -2485,11 +2492,11 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     unsigned int page_order;
     int rc = -EINVAL;
 
-    if ( idx >= MAX_ALTP2M || d->arch.altp2m_eptp[idx] == mfn_x(INVALID_MFN) )
+    if ( idx >= MAX_ALTP2M || d->arch.altp2m->eptp[idx] == mfn_x(INVALID_MFN) )
         return rc;
 
     hp2m = p2m_get_hostp2m(d);
-    ap2m = d->arch.altp2m_p2m[idx];
+    ap2m = d->arch.altp2m->p2m[idx];
 
     p2m_lock(hp2m);
     p2m_lock(ap2m);
@@ -2583,10 +2590,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
 
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
             continue;
 
-        p2m = d->arch.altp2m_p2m[i];
+        p2m = d->arch.altp2m->p2m[i];
         m = get_gfn_type_access(p2m, gfn_x(gfn), &t, &a, 0, NULL);
 
         /* Check for a dropped page that may impact this altp2m */
@@ -2607,10 +2614,10 @@ void p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                         d->arch.altp2m->eptp[i] == mfn_x(INVALID_MFN) )
                         continue;
 
-                    p2m = d->arch.altp2m_p2m[i];
+                    p2m = d->arch.altp2m->p2m[i];
                     p2m_lock(p2m);
                     p2m_reset_altp2m(p2m);
                     p2m_unlock(p2m);
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index 451f42f..28b0458 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -24,6 +24,7 @@
 #include <xen/sched.h>
 #include <xen/vm_event.h>
 #include <xsm/xsm.h>
+#include <asm/p2m.h>
 #include <asm/altp2m.h>
 #include <asm/monitor.h>
 #include <asm/vm_event.h>
diff --git a/xen/include/asm-x86/altp2m.h b/xen/include/asm-x86/altp2m.h
index 0090c89..65a7ead 100644
--- a/xen/include/asm-x86/altp2m.h
+++ b/xen/include/asm-x86/altp2m.h
@@ -24,7 +24,12 @@
 /* Alternate p2m HVM on/off per domain */
 static inline bool_t altp2m_active(const struct domain *d)
 {
-    return d->arch.altp2m_active;
+    return d->arch.altp2m->active;
+}
+
+static inline void set_altp2m_active(const struct domain *d, bool_t v)
+{
+    d->arch.altp2m->active = v;
 }
 
 /* Alternate p2m VCPU */
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 5807a1f..3b9fcc1 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -242,6 +242,7 @@ typedef xen_domctl_cpuid_t cpuid_input_t;
 #define INVALID_ALTP2M  0xffff
 #define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
 struct p2m_domain;
+struct altp2m_domain;
 struct time_scale {
     int shift;
     u32 mul_frac;
@@ -320,10 +321,7 @@ struct arch_domain
     mm_lock_t nested_p2m_lock;
 
     /* altp2m: allow multiple copies of host p2m */
-    bool_t altp2m_active;
-    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
-    mm_lock_t altp2m_list_lock;
-    uint64_t *altp2m_eptp;
+    struct altp2m_domain *altp2m;
 
     /* NB. protected by d->event_lock and by irq_desc[irq].lock */
     struct radix_tree_root irq_pirq;
diff --git a/xen/include/asm-x86/p2m.h b/xen/include/asm-x86/p2m.h
index c138522..13b3eb7 100644
--- a/xen/include/asm-x86/p2m.h
+++ b/xen/include/asm-x86/p2m.h
@@ -338,6 +338,13 @@ struct p2m_domain {
     };
 };
 
+struct altp2m_domain {
+    bool_t active;
+    struct p2m_domain *p2m[MAX_ALTP2M];
+    mm_lock_t list_lock;
+    uint64_t *eptp;
+};
+
 /* get host p2m table */
 #define p2m_get_hostp2m(d)      ((d)->arch.p2m)
 
@@ -781,7 +788,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
 
     BUG_ON(index >= MAX_ALTP2M);
 
-    return v->domain->arch.altp2m_p2m[index];
+    return v->domain->arch.altp2m->p2m[index];
 }
 
 /* Switch alternate p2m for a single vcpu */
-- 
2.7.4



* Re: [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling Paul Lai
@ 2016-09-08 10:32   ` Jan Beulich
  2016-09-08 15:38     ` Lai, Paul C
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2016-09-08 10:32 UTC (permalink / raw)
  To: Paul Lai; +Cc: xen-devel, ravi.sahita, george.dunlap

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> From: Jan Beulich <jbeulich@suse.com>
> 
> Consistently consult hvm_cpuid(). With that, BNDCFGS gets better
> handled outside of VMX specific code, just like XSS. Don't needlessly
> check for MTRR support when the MSR being accessed clearly is not an
> MTRR one.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Why did you (re)send this? It went in yesterday together with its
VMX prereq. Without that prereq it's useless (as it won't apply),
and if you worked on a tree where the prereq was already present,
then this one would have been present too (as they got pushed at
the same time).

Jan



* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work Paul Lai
@ 2016-09-08 14:46   ` Jan Beulich
  2016-09-08 15:50     ` Lai, Paul C
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2016-09-08 14:46 UTC (permalink / raw)
  To: Paul Lai; +Cc: xen-devel, ravi.sahita, george.dunlap

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> Indent goto labels by one space
> Inline (header) altp2m functions
> Define default behavior in switch
> Define max and min for range of altp2m macroed values
> 
> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> ---

Missing a brief summary of changes from previous version here.

> @@ -5413,6 +5426,8 @@ static int do_altp2m_op(
>              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>                      _gfn(a.u.change_gfn.old_gfn),
>                      _gfn(a.u.change_gfn.new_gfn));
> +    default:
> +        ASSERT_UNREACHABLE();
>      }

Did you test anything using HVMOP_altp2m_change_gfn with this
change, on a debug build? There's obviously an unintended
fall-through right now.
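
[Editor's note: a minimal sketch of the fix being asked about, assuming
the fall-through into the new default arm is indeed unintended -- the
change_gfn case needs its own break before the default label:

    case HVMOP_altp2m_change_gfn:
        rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                                   _gfn(a.u.change_gfn.old_gfn),
                                   _gfn(a.u.change_gfn.new_gfn));
        break;                          /* missing in the patch as posted */
    default:
        ASSERT_UNREACHABLE();
    }
]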

>  /* emulates #VE */
> -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)

Already on v3 I had asked to switch to plain bool (and true/false as
appropriate).
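
[Editor's note: a minimal sketch of the requested switch to plain bool,
assuming the wrapper's semantics stay unchanged; this mirrors the
hvm_get_guest_bndcfgs() wrapper added in patch 1/4:

    static inline bool altp2m_vcpu_emulate_ve(struct vcpu *v)
    {
        return hvm_funcs.altp2m_vcpu_emulate_ve &&
               hvm_funcs.altp2m_vcpu_emulate_ve(v);
    }
]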

Jan



* Re: [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files.
  2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files Paul Lai
@ 2016-09-08 15:02   ` Jan Beulich
  2016-09-08 18:04     ` Lai, Paul C
  0 siblings, 1 reply; 16+ messages in thread
From: Jan Beulich @ 2016-09-08 15:02 UTC (permalink / raw)
  To: Paul Lai; +Cc: xen-devel, ravi.sahita, george.dunlap

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> @@ -65,6 +66,56 @@ altp2m_vcpu_destroy(struct vcpu *v)
>          vcpu_unpause(v);
>  }
>  
> +int
> +altp2m_domain_init(struct domain *d)
> +{
> +    int rc;
> +    unsigned int i;
> +
> +    if ( !hvm_altp2m_supported() )
> +    {
> +        rc = 0;
> +        goto out;
> +    }
> +    /* Init alternate p2m data. */
> +    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
> +    {
> +        rc = -ENOMEM;
> +        goto out;
> +    }
> +
> +    for ( i = 0; i < MAX_EPTP; i++ )
> +        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
> +        if ( rc != 0 )
> +           goto out;
> +    }
> +
> +    d->arch.altp2m_active = 0;
> + out:
> +    return rc;
> +}

I don't follow: I did specifically ask to avoid goto when all that's
at the label is a single statement (a return in this case).
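
[Editor's note: a sketch of the shape being asked for, assuming the
behaviour should otherwise be unchanged -- with only a return after the
label, the gotos can become direct returns:

    int
    altp2m_domain_init(struct domain *d)
    {
        int rc;
        unsigned int i;

        if ( !hvm_altp2m_supported() )
            return 0;

        /* Init alternate p2m data. */
        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
            return -ENOMEM;

        for ( i = 0; i < MAX_EPTP; i++ )
            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);

        for ( i = 0; i < MAX_ALTP2M; i++ )
        {
            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
            if ( rc != 0 )
                return rc;
        }

        d->arch.altp2m_active = 0;
        return 0;
    }
]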

> +void
> +altp2m_domain_teardown(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    if ( !hvm_altp2m_supported() )
> +	return;

Bad tab indentation.

> @@ -499,27 +500,7 @@ int hap_enable(struct domain *d, u32 mode)
>             goto out;
>      }
>  
> -    if ( hvm_altp2m_supported() )
> -    {
> -        /* Init alternate p2m data */
> -        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
> -        {
> -            rv = -ENOMEM;
> -            goto out;
> -        }
> -
> -        for ( i = 0; i < MAX_EPTP; i++ )
> -            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> -
> -        for ( i = 0; i < MAX_ALTP2M; i++ )
> -        {
> -            rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
> -            if ( rv != 0 )
> -               goto out;
> -        }
> -
> -        d->arch.altp2m_active = 0;
> -    }
> +    rv = altp2m_domain_init(d);
>  
>      /* Now let other users see the new mode */
>      d->arch.paging.mode = mode | PG_HAP_enable;

I don't think you should reach the following statement(s) when your
function returned an error. Even if this might be benign now, this is
easy to overlook if later more code gets added near the end of the
function.
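
[Editor's note: one possible shape for the check Jan is asking for
(hypothetical placement inside hap_enable(); names taken from the hunk
above):

    rv = altp2m_domain_init(d);
    if ( rv != 0 )
        goto out;

    /* Now let other users see the new mode */
    d->arch.paging.mode = mode | PG_HAP_enable;
]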

Also I wonder whether in an error case there's not a possibility for
memory to be leaked. That wouldn't be a problem your patch
introduces, but I think you could have noticed and fixed this as
you touch the code anyway.

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -1329,6 +1329,45 @@ void setup_ept_dump(void)
>      register_keyhandler('D', ept_dump_p2m_table, "dump VT-x EPT tables", 0);
>  }
>  
> +void p2m_init_altp2m_ept( struct domain *d, unsigned int i)

Another instance of you not having corrected what was previously
pointed out: There's a stray blank here. And even if I hadn't said
there are multiple instances of this, you should still apply such
comments to all of your series. And I only now notice that this e.g.
also applies to the suggested re-ordering of operations in
altp2m_domain_teardown(). I think I'm going to stop reviewing
this series here: Please make sure you address all review comments
(either verbally or by code adjustment) before submitting a new
version (or in extreme cases, like due to lack of feedback, point out
open issues right after the first --- separator).

Jan



* Re: [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling
  2016-09-08 10:32   ` Jan Beulich
@ 2016-09-08 15:38     ` Lai, Paul C
  0 siblings, 0 replies; 16+ messages in thread
From: Lai, Paul C @ 2016-09-08 15:38 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Sahita, Ravi, george.dunlap

[Paul] in-line
-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Thursday, September 8, 2016 3:33 AM
To: Lai, Paul C <paul.c.lai@intel.com>
Cc: george.dunlap@citrix.com; Sahita, Ravi <ravi.sahita@intel.com>; xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> From: Jan Beulich <jbeulich@suse.com>
> 
> Consistently consult hvm_cpuid(). With that, BNDCFGS gets better 
> handled outside of VMX specific code, just like XSS. Don't needlessly 
> check for MTRR support when the MSR being accessed clearly is not an 
> MTRR one.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Why did you (re)send this? It went in yesterday together with its VMX prereq. Without that prereq it's useless (as it won't apply), and if you worked on a tree where the prereq was already present, then this one would have been present too (as they got pushed at the same time).

Jan

[Paul] Argh -- looks like I used the wrong commit# during the git send-email command .... grrr...
   Apologies....



* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-08 14:46   ` Jan Beulich
@ 2016-09-08 15:50     ` Lai, Paul C
  2016-09-08 16:06       ` Tamas K Lengyel
  0 siblings, 1 reply; 16+ messages in thread
From: Lai, Paul C @ 2016-09-08 15:50 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Sahita, Ravi, george.dunlap

[Paul] in-line

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Thursday, September 8, 2016 7:47 AM
To: Lai, Paul C <paul.c.lai@intel.com>
Cc: george.dunlap@citrix.com; Sahita, Ravi <ravi.sahita@intel.com>; xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> Indent goto labels by one space
> Inline (header) altp2m functions
> Define default behavior in switch
> Define max and min for range of altp2m macroed values
> 
> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
> ---

Missing a brief summary of changes from previous version here.

> @@ -5413,6 +5426,8 @@ static int do_altp2m_op(
>              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>                      _gfn(a.u.change_gfn.old_gfn),
>                      _gfn(a.u.change_gfn.new_gfn));
> +    default:
> +        ASSERT_UNREACHABLE();
>      }

Did you test anything using HVMOP_altp2m_change_gfn with this change, on a debug build? There's obviously an unintended fall-through right now.

[Paul] - Yes, this patch series was tested with Tamas's HVMOP_altp2m_change_gfn in a debug build.


>  /* emulates #VE */
> -bool_t altp2m_vcpu_emulate_ve(struct vcpu *v);
> +static inline bool_t altp2m_vcpu_emulate_ve(struct vcpu *v)

Already on v3 I had asked to switch to plain bool (and true/false as appropriate).

[Paul] - Maybe I misunderstood something here.  I fixed the return value to false in the patch.   ... Ah, it's the non-false case that's returning void.  Will fix.

Jan



* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-08 15:50     ` Lai, Paul C
@ 2016-09-08 16:06       ` Tamas K Lengyel
  2016-09-08 16:45         ` Lai, Paul C
  0 siblings, 1 reply; 16+ messages in thread
From: Tamas K Lengyel @ 2016-09-08 16:06 UTC (permalink / raw)
  To: Lai, Paul C; +Cc: xen-devel, Sahita, Ravi, george.dunlap, Jan Beulich

On Thu, Sep 8, 2016 at 9:50 AM, Lai, Paul C <paul.c.lai@intel.com> wrote:
> [Paul] in-line
>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, September 8, 2016 7:47 AM
> To: Lai, Paul C <paul.c.lai@intel.com>
> Cc: george.dunlap@citrix.com; Sahita, Ravi <ravi.sahita@intel.com>; xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
>
>>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
>> Indent goto labels by one space
>> Inline (header) altp2m functions
>> Define default behavior in switch
>> Define max and min for range of altp2m macroed values
>>
>> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
>> ---
>
> Missing a brief summary of changes from previous version here.
>
>> @@ -5413,6 +5426,8 @@ static int do_altp2m_op(
>>              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>>                      _gfn(a.u.change_gfn.old_gfn),
>>                      _gfn(a.u.change_gfn.new_gfn));
>> +    default:
>> +        ASSERT_UNREACHABLE();
>>      }
>
> Did you test anything using HVMOP_altp2m_change_gfn with this change, on a debug build? There's obviously an unintended fall-through right now.
>
> [Paul] - Yes, this patch series was tested with Tamas's HVMOP_altp2m_change_gfn in a debug build.

Not sure what you used here, the xen-access tool right now in Xen
doesn't (yet) exercise the gfn remapping, only the mem-access parts.
Sergej has a patch for this in the arm-altp2m series though.

Tamas


* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-08 16:06       ` Tamas K Lengyel
@ 2016-09-08 16:45         ` Lai, Paul C
  2016-09-12 10:47           ` George Dunlap
  0 siblings, 1 reply; 16+ messages in thread
From: Lai, Paul C @ 2016-09-08 16:45 UTC (permalink / raw)
  To: Tamas K Lengyel; +Cc: xen-devel, Sahita, Ravi, george.dunlap, Jan Beulich

[Paul2] in-line

-----Original Message-----
From: Tamas K Lengyel [mailto:tamas.k.lengyel@gmail.com] 
Sent: Thursday, September 8, 2016 9:07 AM
To: Lai, Paul C <paul.c.lai@intel.com>
Cc: Jan Beulich <JBeulich@suse.com>; xen-devel <xen-devel@lists.xenproject.org>; Sahita, Ravi <ravi.sahita@intel.com>; george.dunlap@citrix.com
Subject: Re: [Xen-devel] [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work

On Thu, Sep 8, 2016 at 9:50 AM, Lai, Paul C <paul.c.lai@intel.com> wrote:
> [Paul] in-line
>
> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@suse.com]
> Sent: Thursday, September 8, 2016 7:47 AM
> To: Lai, Paul C <paul.c.lai@intel.com>
> Cc: george.dunlap@citrix.com; Sahita, Ravi <ravi.sahita@intel.com>; 
> xen-devel <xen-devel@lists.xenproject.org>
> Subject: Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
>
>>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
>> Indent goto labels by one space
>> Inline (header) altp2m functions
>> Define default behavior in switch
>> Define max and min for range of altp2m macroed values
>>
>> Signed-off-by: Paul Lai <paul.c.lai@intel.com>
>> ---
>
> Missing a brief summary of changes from previous version here.
>
>> @@ -5413,6 +5426,8 @@ static int do_altp2m_op(
>>              rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
>>                      _gfn(a.u.change_gfn.old_gfn),
>>                      _gfn(a.u.change_gfn.new_gfn));
>> +    default:
>> +        ASSERT_UNREACHABLE();
>>      }
>
> Did you test anything using HVMOP_altp2m_change_gfn with this change, on a debug build? There's obviously an unintended fall-through right now.
>
> [Paul] - Yes, this patch series was tested with Tamas's HVMOP_altp2m_change_gfn in a debug build.

Not sure what you used here, the xen-access tool right now in Xen doesn't (yet) exercise the gfn remapping, only the mem-access parts.
Sergej has a patch for this in the arm-altp2m series though.

Tamas

[Paul2] All I was testing was whether xen-access behaved as it did before the gfn remapping patch was introduced.  After that patch went in, 'xen-access -m 1 altp2m_write' hung the system.  Now, with your fix, it doesn't, and the output of xen-access looks as it did before.   Thanks, -Paul
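
P.S. For reference, my understanding of the fall-through fix being
discussed is simply the missing break ahead of the new default label --
roughly (a sketch, not necessarily the exact committed hunk):

            rc = p2m_change_altp2m_gfn(d, a.u.change_gfn.view,
                    _gfn(a.u.change_gfn.old_gfn),
                    _gfn(a.u.change_gfn.new_gfn));
            break;
        default:
            ASSERT_UNREACHABLE();
        }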

* Re: [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files.
  2016-09-08 15:02   ` Jan Beulich
@ 2016-09-08 18:04     ` Lai, Paul C
  0 siblings, 0 replies; 16+ messages in thread
From: Lai, Paul C @ 2016-09-08 18:04 UTC (permalink / raw)
  To: Jan Beulich; +Cc: xen-devel, Sahita, Ravi, george.dunlap

[Paul2] in-line

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Thursday, September 8, 2016 8:02 AM
To: Lai, Paul C <paul.c.lai@intel.com>
Cc: george.dunlap@citrix.com; Sahita, Ravi <ravi.sahita@intel.com>; xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files.

>>> On 08.09.16 at 00:04, <paul.c.lai@intel.com> wrote:
> @@ -65,6 +66,56 @@ altp2m_vcpu_destroy(struct vcpu *v)
>          vcpu_unpause(v);
>  }
>  
> +int
> +altp2m_domain_init(struct domain *d)
> +{
> +    int rc;
> +    unsigned int i;
> +
> +    if ( !hvm_altp2m_supported() )
> +    {
> +        rc = 0;
> +        goto out;
> +    }
> +    /* Init alternate p2m data. */
> +    if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
> +    {
> +        rc = -ENOMEM;
> +        goto out;
> +    }
> +
> +    for ( i = 0; i < MAX_EPTP; i++ )
> +        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> +
> +    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    {
> +        rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
> +        if ( rc != 0 )
> +           goto out;
> +    }
> +
> +    d->arch.altp2m_active = 0;
> + out:
> +    return rc;
> +}

I don't follow: I did specifically ask to avoid goto where all that follows the label is a single statement (a return in this case).

[Paul2]  In the v3 2/3 patch series you said:
   
  [Jan] When the epilogue (after the target label) is just a return statement, please avoid using goto.

    [PAUL] I don't see this code in an epilogue (after the target label).  

[Paul2] I didn't get an answer to the question.  After scratching my head, I just realized you want the 'goto' to become a 'return'.  I will make this change in order to pass your review (and to match the Xen coding style).  That said, I'm a believer in one exit per function -- to me it's a much cleaner coding style.  I'm not trying to start a debate, just explaining how I was confused.
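
Concretely, I take it you want something along these lines (an untested
sketch of the same function with early returns):

    int
    altp2m_domain_init(struct domain *d)
    {
        int rc;
        unsigned int i;

        if ( !hvm_altp2m_supported() )
            return 0;

        /* Init alternate p2m data. */
        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
            return -ENOMEM;

        for ( i = 0; i < MAX_EPTP; i++ )
            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);

        for ( i = 0; i < MAX_ALTP2M; i++ )
        {
            rc = p2m_alloc_table(d->arch.altp2m_p2m[i]);
            if ( rc != 0 )
                return rc;
        }

        d->arch.altp2m_active = 0;

        return 0;
    }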

> +void
> +altp2m_domain_teardown(struct domain *d)
> +{
> +    unsigned int i;
> +
> +    if ( !hvm_altp2m_supported() )
> +	return;

Bad tab indentation.

> @@ -499,27 +500,7 @@ int hap_enable(struct domain *d, u32 mode)
>             goto out;
>      }
>  
> -    if ( hvm_altp2m_supported() )
> -    {
> -        /* Init alternate p2m data */
> -        if ( (d->arch.altp2m_eptp = alloc_xenheap_page()) == NULL )
> -        {
> -            rv = -ENOMEM;
> -            goto out;
> -        }
> -
> -        for ( i = 0; i < MAX_EPTP; i++ )
> -            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
> -
> -        for ( i = 0; i < MAX_ALTP2M; i++ )
> -        {
> -            rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
> -            if ( rv != 0 )
> -               goto out;
> -        }
> -
> -        d->arch.altp2m_active = 0;
> -    }
> +    rv = altp2m_domain_init(d);
>  
>      /* Now let other users see the new mode */
>      d->arch.paging.mode = mode | PG_HAP_enable;

I don't think you should reach the following statement(s) when your function returned an error. Even if this might be benign now, this is easy to overlook if later more code gets added near the end of the function.

Also I wonder whether in the error case memory might be leaked. That wouldn't be a problem your patch introduces, but I think you could have noticed and fixed it while touching this code anyway.

[Paul2] Good catch
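
I take it the call site should then bail out on error, roughly (an
untested sketch against the quoted hunk):

    rv = altp2m_domain_init(d);
    if ( rv != 0 )
        goto out;

and the error path (or altp2m_domain_init() itself) should free whatever
was already allocated -- the altp2m_eptp page and any p2m tables -- so
nothing is leaked.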

> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -1329,6 +1329,45 @@ void setup_ept_dump(void)
>      register_keyhandler('D', ept_dump_p2m_table, "dump VT-x EPT tables", 0);
>  }
>  
> +void p2m_init_altp2m_ept( struct domain *d, unsigned int i)

Another instance of you not having corrected what was previously pointed out: There's a stray blank here. And even if I hadn't said there are multiple instances of this, you should still apply such comments to all of your series. And I only now notice that this e.g.
also applies to the suggested re-ordering of operations in altp2m_domain_teardown(). I think I'm going to stop reviewing this series here: Please make sure you address all review comments (either verbally or by code adjustment) before submitting a new version (or in extreme cases, like due to lack of feedback, point out open issues right after the first --- separator).

[Paul2] I did do a sweep for stray blanks (and similar typo/style issues), but your eyes are much better trained.  I will sweep again.

[Paul2] Thanks for the quick feedback.

Jan



* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-08 16:45         ` Lai, Paul C
@ 2016-09-12 10:47           ` George Dunlap
  2016-09-13 17:38             ` Lai, Paul
  0 siblings, 1 reply; 16+ messages in thread
From: George Dunlap @ 2016-09-12 10:47 UTC (permalink / raw)
  To: Lai, Paul C, Tamas K Lengyel; +Cc: xen-devel, Sahita, Ravi, Jan Beulich

On 08/09/16 17:45, Lai, Paul C wrote:
> [Paul2] in-line

If you're going to engage in discussions on xen-devel it would really be
worth your time to find a mail setup that allows you to actually quote
properly such that you can reply in-line without these manual mark-ups.

If you can't configure your mail reader to do proper nested quoting, and
you can't / don't want to change your mail reader, then you could
consider doing what I do and signing up for a gmail account to get all
the xen-devel mail and replying from there.

Thanks,
 -George



* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-12 10:47           ` George Dunlap
@ 2016-09-13 17:38             ` Lai, Paul
  2016-09-14 13:39               ` George Dunlap
  0 siblings, 1 reply; 16+ messages in thread
From: Lai, Paul @ 2016-09-13 17:38 UTC (permalink / raw)
  To: George Dunlap; +Cc: Tamas K Lengyel, Sahita, Ravi, Jan Beulich, xen-devel

On Mon, Sep 12, 2016 at 11:47:35AM +0100, George Dunlap wrote:
> On 08/09/16 17:45, Lai, Paul C wrote:
> > [Paul2] in-line
> 
> If you're going to engage in discussions on xen-devel it would really be
> worth your time to find a mail setup that allows you to actually quote
> properly such that you can reply in-line without these manual mark-ups.
> 
> If you can't configure your mail reader to do proper nested quoting, and
> you can't / don't want to change your mail reader, then you could
> consider doing what I do and signing up for a gmail account to get all
> the xen-devel mail and replying from there.
> 
> Thanks,
>  -George
> 

George,
Thanks for the suggestions.
Hopefully this reply illustrates that I can learn.

-Paul


* Re: [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work
  2016-09-13 17:38             ` Lai, Paul
@ 2016-09-14 13:39               ` George Dunlap
  0 siblings, 0 replies; 16+ messages in thread
From: George Dunlap @ 2016-09-14 13:39 UTC (permalink / raw)
  To: Lai, Paul; +Cc: Tamas K Lengyel, Sahita, Ravi, Jan Beulich, xen-devel

On 13/09/16 18:38, Lai, Paul wrote:
> On Mon, Sep 12, 2016 at 11:47:35AM +0100, George Dunlap wrote:
>> On 08/09/16 17:45, Lai, Paul C wrote:
>>> [Paul2] in-line
>>
>> If you're going to engage in discussions on xen-devel it would really be
>> worth your time to find a mail setup that allows you to actually quote
>> properly such that you can reply in-line without these manual mark-ups.
>>
>> If you can't configure your mail reader to do proper nested quoting, and
>> you can't / don't want to change your mail reader, then you could
>> consider doing what I do and signing up for a gmail account to get all
>> the xen-devel mail and replying from there.
>>
>> Thanks,
>>  -George
>>
> 
> George,
> Thanks for the suggestions.

No problem -- and I hope the tone of my message didn't come across the
wrong way.  I was trying to find solutions that work for everyone, not
to criticize. :-)

 -George



Thread overview: 16+ messages
2016-09-07 22:04 [PATCH Altp2m cleanup v4 0/4] Cleaning up altp2m code Paul Lai
2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 1/4] x86/HVM: adjust feature checking in MSR intercept handling Paul Lai
2016-09-08 10:32   ` Jan Beulich
2016-09-08 15:38     ` Lai, Paul C
2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 2/4] altp2m cleanup work Paul Lai
2016-09-08 14:46   ` Jan Beulich
2016-09-08 15:50     ` Lai, Paul C
2016-09-08 16:06       ` Tamas K Lengyel
2016-09-08 16:45         ` Lai, Paul C
2016-09-12 10:47           ` George Dunlap
2016-09-13 17:38             ` Lai, Paul
2016-09-14 13:39               ` George Dunlap
2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 3/4] Move altp2m specific functions to altp2m files Paul Lai
2016-09-08 15:02   ` Jan Beulich
2016-09-08 18:04     ` Lai, Paul C
2016-09-07 22:04 ` [PATCH Altp2m cleanup v4 4/4] Making altp2m struct dynamically allocated Paul Lai
