All of lore.kernel.org
* [PATCH v2 0/4] vvmx: fix L1 vmxon
@ 2016-12-14 10:11 Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 1/4] vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address Haozhong Zhang
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Haozhong Zhang, Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich

This patchset fixes bugs and adds missing checks in nvmx_handle_vmxon(),
in order to make it more consistent with the Intel SDM (section "VMXON -
Enter VMX Operation" in Vol. 3).

Changes in v2:
 * Add necessary 'const' in patch 1. (Andrew Cooper)
 * Use the existing INVALID_PADDR rather than introducing a new
   one in patch 1. (Jan Beulich)
 * Add patch 4 to replace VMCX_EADDR by INVALID_PADDR. (Jan Beulich)

Haozhong Zhang (4):
  vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address
  vvmx: return VMfail to L1 if L1 vmxon is executed in VMX operation
  vvmx: check the operand of L1 vmxon
  nestedhvm: replace VMCX_EADDR by INVALID_PADDR

 xen/arch/x86/hvm/nestedhvm.c       |  2 +-
 xen/arch/x86/hvm/svm/nestedsvm.c   | 18 ++++++-------
 xen/arch/x86/hvm/svm/vmcb.c        |  2 +-
 xen/arch/x86/hvm/vmx/vvmx.c        | 52 ++++++++++++++++++++++++++++----------
 xen/include/asm-x86/hvm/vcpu.h     |  2 --
 xen/include/asm-x86/hvm/vmx/vvmx.h |  7 +++++
 6 files changed, 56 insertions(+), 27 deletions(-)

-- 
2.10.1


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

* [PATCH v2 1/4] vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address
  2016-12-14 10:11 [PATCH v2 0/4] vvmx: fix L1 vmxon Haozhong Zhang
@ 2016-12-14 10:11 ` Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 2/4] vvmx: return VMfail to L1 if L1 vmxon is executed in VMX operation Haozhong Zhang
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Haozhong Zhang, Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich

nvmx_handle_vmxon() checks whether a vcpu is already in VMX operation
by comparing its vmxon_region_pa with GPA 0. However, 0 is also a valid
VMXON region address: if the L1 hypervisor sets the VMXON region address
to 0, the check in nvmx_handle_vmxon() would be skipped. Fix this
problem by using an invalid VMXON region address for a vcpu that is out
of VMX operation.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        | 13 +++++++++----
 xen/include/asm-x86/hvm/vmx/vvmx.h |  7 +++++++
 2 files changed, 16 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e6e9ebd..eae8150 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -32,6 +32,11 @@ static DEFINE_PER_CPU(u64 *, vvmcs_buf);
 
 static void nvmx_purge_vvmcs(struct vcpu *v);
 
+static bool nvmx_vcpu_in_vmx(const struct vcpu *v)
+{
+    return vcpu_2_nvmx(v).vmxon_region_pa != INVALID_PADDR;
+}
+
 #define VMCS_BUF_SIZE 100
 
 int nvmx_cpu_up_prepare(unsigned int cpu)
@@ -107,7 +112,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
 
     nvmx->ept.enabled = 0;
     nvmx->guest_vpid = 0;
-    nvmx->vmxon_region_pa = 0;
+    nvmx->vmxon_region_pa = INVALID_PADDR;
     nvcpu->nv_vvmcx = NULL;
     nvcpu->nv_vvmcxaddr = VMCX_EADDR;
     nvmx->intr.intr_info = 0;
@@ -357,7 +362,7 @@ static int vmx_inst_check_privilege(struct cpu_user_regs *regs, int vmxop_check)
              !(v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) )
             goto invalid_op;
     }
-    else if ( !vcpu_2_nvmx(v).vmxon_region_pa )
+    else if ( !nvmx_vcpu_in_vmx(v) )
         goto invalid_op;
 
     hvm_get_segment_register(v, x86_seg_cs, &cs);
@@ -1384,7 +1389,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     if ( rc != X86EMUL_OKAY )
         return rc;
 
-    if ( nvmx->vmxon_region_pa )
+    if ( nvmx_vcpu_in_vmx(v) )
         gdprintk(XENLOG_WARNING, 
                  "vmxon again: orig %"PRIpaddr" new %lx\n",
                  nvmx->vmxon_region_pa, gpa);
@@ -1417,7 +1422,7 @@ int nvmx_handle_vmxoff(struct cpu_user_regs *regs)
         return rc;
 
     nvmx_purge_vvmcs(v);
-    nvmx->vmxon_region_pa = 0;
+    nvmx->vmxon_region_pa = INVALID_PADDR;
 
     vmreturn(regs, VMSUCCEED);
     return X86EMUL_OKAY;
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index ead586e..af7702b 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -28,6 +28,13 @@ struct vvmcs_list {
 };
 
 struct nestedvmx {
+    /*
+     * vmxon_region_pa is also used to indicate whether a vcpu is in
+     * VMX operation. When a vcpu is out of VMX operation, its
+     * vmxon_region_pa is set to an invalid address INVALID_PADDR. We
+     * cannot use 0 for this purpose, because it's a valid VMXON region
+     * address.
+     */
     paddr_t    vmxon_region_pa;
     void       *iobitmap[2];		/* map (va) of L1 guest I/O bitmap */
     void       *msrbitmap;		/* map (va) of L1 guest MSR bitmap */
-- 
2.10.1


* [PATCH v2 2/4] vvmx: return VMfail to L1 if L1 vmxon is executed in VMX operation
  2016-12-14 10:11 [PATCH v2 0/4] vvmx: fix L1 vmxon Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 1/4] vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address Haozhong Zhang
@ 2016-12-14 10:11 ` Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 3/4] vvmx: check the operand of L1 vmxon Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
  3 siblings, 0 replies; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Haozhong Zhang, Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich

According to the Intel SDM, section "VMXON - Enter VMX Operation", a
VMfail should be returned to the L1 hypervisor if L1 vmxon is executed
in VMX operation, rather than just printing a warning message.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index eae8150..e765b60 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1390,9 +1390,12 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
         return rc;
 
     if ( nvmx_vcpu_in_vmx(v) )
-        gdprintk(XENLOG_WARNING, 
-                 "vmxon again: orig %"PRIpaddr" new %lx\n",
-                 nvmx->vmxon_region_pa, gpa);
+    {
+        vmreturn(regs,
+                 nvcpu->nv_vvmcxaddr != VMCX_EADDR ?
+                 VMFAIL_VALID : VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
 
     nvmx->vmxon_region_pa = gpa;
 
-- 
2.10.1


* [PATCH v2 3/4] vvmx: check the operand of L1 vmxon
  2016-12-14 10:11 [PATCH v2 0/4] vvmx: fix L1 vmxon Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 1/4] vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 2/4] vvmx: return VMfail to L1 if L1 vmxon is executed in VMX operation Haozhong Zhang
@ 2016-12-14 10:11 ` Haozhong Zhang
  2016-12-18 14:02   ` Haozhong Zhang
  2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
  3 siblings, 1 reply; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Haozhong Zhang, Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich

Check whether the operand of L1 vmxon is a valid VMXON region address
and whether the VMXON region at that address contains a valid revision
ID.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e765b60..5523146 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1383,6 +1383,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct vmx_inst_decoded decode;
     unsigned long gpa = 0;
+    uint32_t nvmcs_revid;
     int rc;
 
     rc = decode_vmx_inst(regs, &decode, &gpa, 1);
@@ -1397,6 +1398,21 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
         return X86EMUL_OKAY;
     }
 
+    if ( (gpa & ~PAGE_MASK) || (gpa >> v->domain->arch.paging.gfn_bits) )
+    {
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
+    rc = hvm_copy_from_guest_phys(&nvmcs_revid, gpa, sizeof(nvmcs_revid));
+    if ( rc != HVMCOPY_okay ||
+         (nvmcs_revid & ~VMX_BASIC_REVISION_MASK) ||
+         ((nvmcs_revid ^ vmx_basic_msr) & VMX_BASIC_REVISION_MASK) )
+    {
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
     nvmx->vmxon_region_pa = gpa;
 
     /*
-- 
2.10.1


* [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:11 [PATCH v2 0/4] vvmx: fix L1 vmxon Haozhong Zhang
                   ` (2 preceding siblings ...)
  2016-12-14 10:11 ` [PATCH v2 3/4] vvmx: check the operand of L1 vmxon Haozhong Zhang
@ 2016-12-14 10:11 ` Haozhong Zhang
  2016-12-14 10:16   ` Jan Beulich
                     ` (2 more replies)
  3 siblings, 3 replies; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:11 UTC (permalink / raw)
  To: xen-devel
  Cc: Haozhong Zhang, Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich

... because INVALID_PADDR is a more general one.

Suggested-by: Jan Beulich <JBeulich@suse.com>
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
 xen/arch/x86/hvm/nestedhvm.c     |  2 +-
 xen/arch/x86/hvm/svm/nestedsvm.c | 18 +++++++++---------
 xen/arch/x86/hvm/svm/vmcb.c      |  2 +-
 xen/arch/x86/hvm/vmx/vvmx.c      | 16 ++++++++--------
 xen/include/asm-x86/hvm/vcpu.h   |  2 --
 5 files changed, 19 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c
index c4671d8..c09c5b2 100644
--- a/xen/arch/x86/hvm/nestedhvm.c
+++ b/xen/arch/x86/hvm/nestedhvm.c
@@ -54,7 +54,7 @@ nestedhvm_vcpu_reset(struct vcpu *v)
 
     hvm_unmap_guest_frame(nv->nv_vvmcx, 1);
     nv->nv_vvmcx = NULL;
-    nv->nv_vvmcxaddr = VMCX_EADDR;
+    nv->nv_vvmcxaddr = INVALID_PADDR;
     nv->nv_flushp2m = 0;
     nv->nv_p2m = NULL;
 
diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 8c9b073..4d9de86 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -68,10 +68,10 @@ int nestedsvm_vmcb_map(struct vcpu *v, uint64_t vmcbaddr)
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
 
     if (nv->nv_vvmcx != NULL && nv->nv_vvmcxaddr != vmcbaddr) {
-        ASSERT(nv->nv_vvmcxaddr != VMCX_EADDR);
+        ASSERT(nv->nv_vvmcxaddr != INVALID_PADDR);
         hvm_unmap_guest_frame(nv->nv_vvmcx, 1);
         nv->nv_vvmcx = NULL;
-        nv->nv_vvmcxaddr = VMCX_EADDR;
+        nv->nv_vvmcxaddr = INVALID_PADDR;
     }
 
     if ( !nv->nv_vvmcx )
@@ -154,7 +154,7 @@ void nsvm_vcpu_destroy(struct vcpu *v)
     if (nv->nv_n2vmcx) {
         free_vmcb(nv->nv_n2vmcx);
         nv->nv_n2vmcx = NULL;
-        nv->nv_n2vmcx_pa = VMCX_EADDR;
+        nv->nv_n2vmcx_pa = INVALID_PADDR;
     }
     if (svm->ns_iomap)
         svm->ns_iomap = NULL;
@@ -164,8 +164,8 @@ int nsvm_vcpu_reset(struct vcpu *v)
 {
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
 
-    svm->ns_msr_hsavepa = VMCX_EADDR;
-    svm->ns_ovvmcb_pa = VMCX_EADDR;
+    svm->ns_msr_hsavepa = INVALID_PADDR;
+    svm->ns_ovvmcb_pa = INVALID_PADDR;
 
     svm->ns_tscratio = DEFAULT_TSC_RATIO;
 
@@ -425,7 +425,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
 
     /* Check if virtual VMCB cleanbits are valid */
     vcleanbits_valid = 1;
-    if (svm->ns_ovvmcb_pa == VMCX_EADDR)
+    if ( svm->ns_ovvmcb_pa == INVALID_PADDR )
         vcleanbits_valid = 0;
     if (svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr)
         vcleanbits_valid = 0;
@@ -674,7 +674,7 @@ nsvm_vcpu_vmentry(struct vcpu *v, struct cpu_user_regs *regs,
     ns_vmcb = nv->nv_vvmcx;
     ASSERT(ns_vmcb != NULL);
     ASSERT(nv->nv_n2vmcx != NULL);
-    ASSERT(nv->nv_n2vmcx_pa != VMCX_EADDR);
+    ASSERT(nv->nv_n2vmcx_pa != INVALID_PADDR);
 
     /* Save values for later use. Needed for Nested-on-Nested and
      * Shadow-on-Shadow paging.
@@ -1490,8 +1490,8 @@ void nsvm_vcpu_switch(struct cpu_user_regs *regs)
     ASSERT(v->arch.hvm_svm.vmcb != NULL);
     ASSERT(nv->nv_n1vmcx != NULL);
     ASSERT(nv->nv_n2vmcx != NULL);
-    ASSERT(nv->nv_n1vmcx_pa != VMCX_EADDR);
-    ASSERT(nv->nv_n2vmcx_pa != VMCX_EADDR);
+    ASSERT(nv->nv_n1vmcx_pa != INVALID_PADDR);
+    ASSERT(nv->nv_n2vmcx_pa != INVALID_PADDR);
 
     if (nv->nv_vmexit_pending) {
  vmexit:
diff --git a/xen/arch/x86/hvm/svm/vmcb.c b/xen/arch/x86/hvm/svm/vmcb.c
index 9ea014f..70d75e7 100644
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -273,7 +273,7 @@ void svm_destroy_vmcb(struct vcpu *v)
     }
 
     nv->nv_n1vmcx = NULL;
-    nv->nv_n1vmcx_pa = VMCX_EADDR;
+    nv->nv_n1vmcx_pa = INVALID_PADDR;
     arch_svm->vmcb = NULL;
 }
 
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 5523146..c4f19a0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -114,7 +114,7 @@ int nvmx_vcpu_initialise(struct vcpu *v)
     nvmx->guest_vpid = 0;
     nvmx->vmxon_region_pa = INVALID_PADDR;
     nvcpu->nv_vvmcx = NULL;
-    nvcpu->nv_vvmcxaddr = VMCX_EADDR;
+    nvcpu->nv_vvmcxaddr = INVALID_PADDR;
     nvmx->intr.intr_info = 0;
     nvmx->intr.error_code = 0;
     nvmx->iobitmap[0] = NULL;
@@ -766,10 +766,10 @@ static void nvmx_purge_vvmcs(struct vcpu *v)
     int i;
 
     __clear_current_vvmcs(v);
-    if ( nvcpu->nv_vvmcxaddr != VMCX_EADDR )
+    if ( nvcpu->nv_vvmcxaddr != INVALID_PADDR )
         hvm_unmap_guest_frame(nvcpu->nv_vvmcx, 1);
     nvcpu->nv_vvmcx = NULL;
-    nvcpu->nv_vvmcxaddr = VMCX_EADDR;
+    nvcpu->nv_vvmcxaddr = INVALID_PADDR;
     v->arch.hvm_vmx.vmcs_shadow_maddr = 0;
     for (i=0; i<2; i++) {
         if ( nvmx->iobitmap[i] ) {
@@ -1393,7 +1393,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     if ( nvmx_vcpu_in_vmx(v) )
     {
         vmreturn(regs,
-                 nvcpu->nv_vvmcxaddr != VMCX_EADDR ?
+                 nvcpu->nv_vvmcxaddr != INVALID_PADDR ?
                  VMFAIL_VALID : VMFAIL_INVALID);
         return X86EMUL_OKAY;
     }
@@ -1509,7 +1509,7 @@ static int nvmx_vmresume(struct vcpu *v, struct cpu_user_regs *regs)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
 
     /* check VMCS is valid and IO BITMAP is set */
-    if ( (nvcpu->nv_vvmcxaddr != VMCX_EADDR) &&
+    if ( (nvcpu->nv_vvmcxaddr != INVALID_PADDR) &&
             ((nvmx->iobitmap[0] && nvmx->iobitmap[1]) ||
             !(__n2_exec_control(v) & CPU_BASED_ACTIVATE_IO_BITMAP) ) )
         nvcpu->nv_vmentry_pending = 1;
@@ -1529,7 +1529,7 @@ int nvmx_handle_vmresume(struct cpu_user_regs *regs)
     if ( rc != X86EMUL_OKAY )
         return rc;
 
-    if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
+    if ( vcpu_nestedhvm(v).nv_vvmcxaddr == INVALID_PADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
         return X86EMUL_OKAY;        
@@ -1554,7 +1554,7 @@ int nvmx_handle_vmlaunch(struct cpu_user_regs *regs)
     if ( rc != X86EMUL_OKAY )
         return rc;
 
-    if ( vcpu_nestedhvm(v).nv_vvmcxaddr == VMCX_EADDR )
+    if ( vcpu_nestedhvm(v).nv_vvmcxaddr == INVALID_PADDR )
     {
         vmreturn (regs, VMFAIL_INVALID);
         return X86EMUL_OKAY;
@@ -1599,7 +1599,7 @@ int nvmx_handle_vmptrld(struct cpu_user_regs *regs)
     if ( nvcpu->nv_vvmcxaddr != gpa )
         nvmx_purge_vvmcs(v);
 
-    if ( nvcpu->nv_vvmcxaddr == VMCX_EADDR )
+    if ( nvcpu->nv_vvmcxaddr == INVALID_PADDR )
     {
         bool_t writable;
         void *vvmcx = hvm_map_guest_frame_rw(paddr_to_pfn(gpa), 1, &writable);
diff --git a/xen/include/asm-x86/hvm/vcpu.h b/xen/include/asm-x86/hvm/vcpu.h
index d485536..7b411a8 100644
--- a/xen/include/asm-x86/hvm/vcpu.h
+++ b/xen/include/asm-x86/hvm/vcpu.h
@@ -97,8 +97,6 @@ static inline bool_t hvm_vcpu_io_need_completion(const struct hvm_vcpu_io *vio)
            !vio->io_req.data_is_ptr;
 }
 
-#define VMCX_EADDR    (~0ULL)
-
 struct nestedvcpu {
     bool_t nv_guestmode; /* vcpu in guestmode? */
     void *nv_vvmcx; /* l1 guest virtual VMCB/VMCS */
-- 
2.10.1


* Re: [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
@ 2016-12-14 10:16   ` Jan Beulich
  2016-12-14 15:54     ` Konrad Rzeszutek Wilk
  2016-12-14 10:18   ` Haozhong Zhang
  2016-12-14 11:57   ` Tian, Kevin
  2 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2016-12-14 10:16 UTC (permalink / raw)
  To: Haozhong Zhang; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, xen-devel

>>> On 14.12.16 at 11:11, <haozhong.zhang@intel.com> wrote:
> ... because INVALID_PADDR is a more general one.
> 
> Suggested-by: Jan Beulich <JBeulich@suse.com>
> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks for doing this!

Jan


* Re: [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
  2016-12-14 10:16   ` Jan Beulich
@ 2016-12-14 10:18   ` Haozhong Zhang
  2016-12-14 16:24     ` Boris Ostrovsky
  2016-12-14 11:57   ` Tian, Kevin
  2 siblings, 1 reply; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-14 10:18 UTC (permalink / raw)
  To: xen-devel
  Cc: Kevin Tian, Jun Nakajima, Andrew Cooper, Jan Beulich,
	Boris Ostrovsky, Suravee Suthikulpanit

On 12/14/16 18:11 +0800, Haozhong Zhang wrote:
>... because INVALID_PADDR is a more general one.
>
>Suggested-by: Jan Beulich <JBeulich@suse.com>
>Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>---
> xen/arch/x86/hvm/nestedhvm.c     |  2 +-
> xen/arch/x86/hvm/svm/nestedsvm.c | 18 +++++++++---------
> xen/arch/x86/hvm/svm/vmcb.c      |  2 +-
> xen/arch/x86/hvm/vmx/vvmx.c      | 16 ++++++++--------
> xen/include/asm-x86/hvm/vcpu.h   |  2 --
> 5 files changed, 19 insertions(+), 21 deletions(-)
>

Forgot to cc AMD maintainers.


* Re: [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
  2016-12-14 10:16   ` Jan Beulich
  2016-12-14 10:18   ` Haozhong Zhang
@ 2016-12-14 11:57   ` Tian, Kevin
  2 siblings, 0 replies; 12+ messages in thread
From: Tian, Kevin @ 2016-12-14 11:57 UTC (permalink / raw)
  To: Zhang, Haozhong, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Zhang, Haozhong
> Sent: Wednesday, December 14, 2016 6:12 PM
> 
> ... because INVALID_PADDR is a more general one.
> 
> Suggested-by: Jan Beulich <JBeulich@suse.com>
> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

* Re: [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:16   ` Jan Beulich
@ 2016-12-14 15:54     ` Konrad Rzeszutek Wilk
  0 siblings, 0 replies; 12+ messages in thread
From: Konrad Rzeszutek Wilk @ 2016-12-14 15:54 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Haozhong Zhang, Kevin Tian, xen-devel, Jun Nakajima, Andrew Cooper

On Wed, Dec 14, 2016 at 03:16:33AM -0700, Jan Beulich wrote:
> >>> On 14.12.16 at 11:11, <haozhong.zhang@intel.com> wrote:
> > ... because INVALID_PADDR is a more general one.
> > 
> > Suggested-by: Jan Beulich <JBeulich@suse.com>
> > Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Thanks for doing this!

Indeed! Thank you.


And Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

as well.
> 
> Jan
> 

* Re: [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR
  2016-12-14 10:18   ` Haozhong Zhang
@ 2016-12-14 16:24     ` Boris Ostrovsky
  0 siblings, 0 replies; 12+ messages in thread
From: Boris Ostrovsky @ 2016-12-14 16:24 UTC (permalink / raw)
  To: xen-devel, Jun Nakajima, Kevin Tian, Jan Beulich, Andrew Cooper,
	Konrad Rzeszutek Wilk, Suravee Suthikulpanit

On 12/14/2016 05:18 AM, Haozhong Zhang wrote:
> On 12/14/16 18:11 +0800, Haozhong Zhang wrote:
>> ... because INVALID_PADDR is a more general one.
>>
>> Suggested-by: Jan Beulich <JBeulich@suse.com>
>> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>> ---
>> xen/arch/x86/hvm/nestedhvm.c     |  2 +-
>> xen/arch/x86/hvm/svm/nestedsvm.c | 18 +++++++++---------
>> xen/arch/x86/hvm/svm/vmcb.c      |  2 +-
>> xen/arch/x86/hvm/vmx/vvmx.c      | 16 ++++++++--------
>> xen/include/asm-x86/hvm/vcpu.h   |  2 --
>> 5 files changed, 19 insertions(+), 21 deletions(-)
>>
>
> Forgot to cc AMD maintainers.


Reviewed-by:  Boris Ostrovsky <boris.ostrovsky@oracle.com>

* Re: [PATCH v2 3/4] vvmx: check the operand of L1 vmxon
  2016-12-14 10:11 ` [PATCH v2 3/4] vvmx: check the operand of L1 vmxon Haozhong Zhang
@ 2016-12-18 14:02   ` Haozhong Zhang
  2016-12-18 14:06     ` Andrew Cooper
  0 siblings, 1 reply; 12+ messages in thread
From: Haozhong Zhang @ 2016-12-18 14:02 UTC (permalink / raw)
  To: Jan Beulich, Andrew Cooper, xen-devel; +Cc: Kevin Tian, Jun Nakajima

[-- Attachment #1: Type: text/plain, Size: 1540 bytes --]

On 12/14/16 18:11 +0800, Haozhong Zhang wrote:
>Check whether the operand of L1 vmxon is a valid VMXON region address
>and whether the VMXON region at that address contains a valid revision
>ID.
>
>Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>Acked-by: Kevin Tian <kevin.tian@intel.com>
>---
> xen/arch/x86/hvm/vmx/vvmx.c | 16 ++++++++++++++++
> 1 file changed, 16 insertions(+)
>
>diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
>index e765b60..5523146 100644
>--- a/xen/arch/x86/hvm/vmx/vvmx.c
>+++ b/xen/arch/x86/hvm/vmx/vvmx.c
>@@ -1383,6 +1383,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>     struct vmx_inst_decoded decode;
>     unsigned long gpa = 0;
>+    uint32_t nvmcs_revid;
>     int rc;
>
>     rc = decode_vmx_inst(regs, &decode, &gpa, 1);
>@@ -1397,6 +1398,21 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>         return X86EMUL_OKAY;
>     }
>
>+    if ( (gpa & ~PAGE_MASK) || (gpa >> v->domain->arch.paging.gfn_bits) )
                                                                 ^^^^^^^^

I mistook it for the number of valid bits of the physical address and
therefore missed adding PAGE_SHIFT here. The correct patch should be
the one attached. I notice the wrong patch is already in the staging
branch, so should I send a patch(set) to fix my mistake there?

Thanks,
Haozhong



[-- Attachment #2: v2-0003-vvmx-check-the-operand-of-L1-vmxon.patch --]
[-- Type: text/x-diff, Size: 1645 bytes --]

From 809cf1ee317527d2eb8c2d8bac3be46b4d446b63 Mon Sep 17 00:00:00 2001
From: Haozhong Zhang <haozhong.zhang@intel.com>
Date: Tue, 13 Dec 2016 19:49:48 +0800
Subject: [RESEND PATCH v2 3/5] vvmx: check the operand of L1 vmxon

Check whether the operand of L1 vmxon is a valid VMXON region address
and whether the VMXON region at that address contains a valid revision
ID.

Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index e765b60..a1f8e16 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -1383,6 +1383,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
     struct vmx_inst_decoded decode;
     unsigned long gpa = 0;
+    uint32_t nvmcs_revid;
     int rc;
 
     rc = decode_vmx_inst(regs, &decode, &gpa, 1);
@@ -1397,6 +1398,22 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
         return X86EMUL_OKAY;
     }
 
+    if ( (gpa & ~PAGE_MASK) ||
+         (gpa >> (v->domain->arch.paging.gfn_bits + PAGE_SHIFT)) )
+    {
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
+    rc = hvm_copy_from_guest_phys(&nvmcs_revid, gpa, sizeof(nvmcs_revid));
+    if ( rc != HVMCOPY_okay ||
+         (nvmcs_revid & ~VMX_BASIC_REVISION_MASK) ||
+         ((nvmcs_revid ^ vmx_basic_msr) & VMX_BASIC_REVISION_MASK) )
+    {
+        vmreturn(regs, VMFAIL_INVALID);
+        return X86EMUL_OKAY;
+    }
+
     nvmx->vmxon_region_pa = gpa;
 
     /*
-- 
2.10.1



^ permalink raw reply related	[flat|nested] 12+ messages in thread

* Re: [PATCH v2 3/4] vvmx: check the operand of L1 vmxon
  2016-12-18 14:02   ` Haozhong Zhang
@ 2016-12-18 14:06     ` Andrew Cooper
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2016-12-18 14:06 UTC (permalink / raw)
  To: Jan Beulich, xen-devel, Jun Nakajima, Kevin Tian, Konrad Rzeszutek Wilk

On 18/12/16 14:02, Haozhong Zhang wrote:
> On 12/14/16 18:11 +0800, Haozhong Zhang wrote:
>> Check whether the operand of L1 vmxon is a valid VMXON region address
>> and whether the VMXON region at that address contains a valid revision
>> ID.
>>
>> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
>> Acked-by: Kevin Tian <kevin.tian@intel.com>
>> ---
>> xen/arch/x86/hvm/vmx/vvmx.c | 16 ++++++++++++++++
>> 1 file changed, 16 insertions(+)
>>
>> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
>> index e765b60..5523146 100644
>> --- a/xen/arch/x86/hvm/vmx/vvmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
>> @@ -1383,6 +1383,7 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>>     struct nestedvcpu *nvcpu = &vcpu_nestedhvm(v);
>>     struct vmx_inst_decoded decode;
>>     unsigned long gpa = 0;
>> +    uint32_t nvmcs_revid;
>>     int rc;
>>
>>     rc = decode_vmx_inst(regs, &decode, &gpa, 1);
>> @@ -1397,6 +1398,21 @@ int nvmx_handle_vmxon(struct cpu_user_regs *regs)
>>         return X86EMUL_OKAY;
>>     }
>>
>> +    if ( (gpa & ~PAGE_MASK) || (gpa >> v->domain->arch.paging.gfn_bits) )
>                                                                  ^^^^^^^^
>
> I mistook it for the number of valid bits of a physical address and
> therefore missed adding PAGE_SHIFT here. The correct patch is the one
> attached. I notice the wrong patch is already in the staging branch,
> so should I send a patch(set) to fix my mistake on the staging
> branch?

Yes please.  All new code gets committed into staging.

master is fast-forwarded to staging once OSSTest is happy that no
regressions have occurred.

~Andrew


^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2016-12-18 14:06 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-12-14 10:11 [PATCH v2 0/4] vvmx: fix L1 vmxon Haozhong Zhang
2016-12-14 10:11 ` [PATCH v2 1/4] vvmx: set vmxon_region_pa of vcpu out of VMX operation to an invalid address Haozhong Zhang
2016-12-14 10:11 ` [PATCH v2 2/4] vvmx: return VMfail to L1 if L1 vmxon is executed in VMX operation Haozhong Zhang
2016-12-14 10:11 ` [PATCH v2 3/4] vvmx: check the operand of L1 vmxon Haozhong Zhang
2016-12-18 14:02   ` Haozhong Zhang
2016-12-18 14:06     ` Andrew Cooper
2016-12-14 10:11 ` [PATCH v2 4/4] nestedhvm: replace VMCX_EADDR by INVALID_PADDR Haozhong Zhang
2016-12-14 10:16   ` Jan Beulich
2016-12-14 15:54     ` Konrad Rzeszutek Wilk
2016-12-14 10:18   ` Haozhong Zhang
2016-12-14 16:24     ` Boris Ostrovsky
2016-12-14 11:57   ` Tian, Kevin
