* [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE
@ 2017-02-06 14:57 Sergey Dyasli
  2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
                   ` (4 more replies)
  0 siblings, 5 replies; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-06 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

Currently, emulation of vmread and vmwrite for a guest leads to BUG()
in cases when the actual VMREAD or VMWRITE ends up in VMfail due to
invalid arguments.  The goal of this patch series is to prevent the
BUG() from happening and to report any kind of VMfail back to the
guest, just as it would be done by hardware.

v1 --> v2:
* Removed "ASM_FLAG_OUT" from tools/tests/x86_emulator/x86_emulate.h
* Replaced "~0UL" with "~0" for VMX_INSN_FAIL_INVALID
* Removed double underscore prefix from vmwrite_safe() and vmread_safe()
* Replaced "setb --> setc" and "sete --> setz"
* Removed "fail_" prefix from "invalid" and "valid" asm constraints
* Added "\t" to asm
* Added unlikely() for checking fail conditions
* Moved "ASM_FLAG_OUT" from lib.h to asm_defns.h

Sergey Dyasli (4):
  x86/vmx: introduce vmwrite_safe()
  x86/vmx: improve vmread_safe()
  x86/vvmx: correctly emulate VMWRITE
  x86/vvmx: correctly emulate VMREAD

 xen/arch/x86/hvm/vmx/vmcs.c        | 20 +++++++----
 xen/arch/x86/hvm/vmx/vvmx.c        | 46 +++++++++++++++++++-----
 xen/include/asm-x86/asm_defns.h    |  6 ++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  6 ++--
 xen/include/asm-x86/hvm/vmx/vmx.h  | 72 +++++++++++++++++++++++++-------------
 xen/include/asm-x86/hvm/vmx/vvmx.h | 11 ++++--
 6 files changed, 117 insertions(+), 44 deletions(-)

-- 
2.9.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


* [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
@ 2017-02-06 14:57 ` Sergey Dyasli
  2017-02-07  6:31   ` Tian, Kevin
  2017-02-07 11:09   ` Jan Beulich
  2017-02-06 14:57 ` [PATCH v2 2/4] x86/vmx: improve vmread_safe() Sergey Dyasli
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-06 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

Any failure during the original __vmwrite() leads to BUG(), which can
be easily exploited from a guest in nested vmx mode.

The new function returns an error code depending on the outcome:

          VMsucceed: 0
        VMfailValid: VM Instruction Error Number
      VMfailInvalid: a new VMX_INSN_FAIL_INVALID

A new macro, GAS_VMX_OP, is introduced to improve the readability of
the asm.  The existing ASM_FLAG_OUT macro is reused and copied into
asm_defns.h.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vvmx.c        |  3 ++-
 xen/include/asm-x86/asm_defns.h    |  6 ++++++
 xen/include/asm-x86/hvm/vmx/vmcs.h |  1 +
 xen/include/asm-x86/hvm/vmx/vmx.h  | 29 +++++++++++++++++++++++++++++
 4 files changed, 38 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 9caebe5..105a3c0 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -483,7 +483,8 @@ static void vmfail_invalid(struct cpu_user_regs *regs)
 
 static void vmfail(struct cpu_user_regs *regs, enum vmx_insn_errno errno)
 {
-    if ( vcpu_nestedhvm(current).nv_vvmcxaddr != INVALID_PADDR )
+    if ( vcpu_nestedhvm(current).nv_vvmcxaddr != INVALID_PADDR &&
+         errno != VMX_INSN_FAIL_INVALID )
         vmfail_valid(regs, errno);
     else
         vmfail_invalid(regs);
diff --git a/xen/include/asm-x86/asm_defns.h b/xen/include/asm-x86/asm_defns.h
index f1c6fa1..220ae2e 100644
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -413,4 +413,10 @@ static always_inline void stac(void)
 #define REX64_PREFIX "rex64/"
 #endif
 
+#ifdef __GCC_ASM_FLAG_OUTPUTS__
+# define ASM_FLAG_OUT(yes, no) yes
+#else
+# define ASM_FLAG_OUT(yes, no) no
+#endif
+
 #endif /* __X86_ASM_DEFNS_H__ */
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index d71de04..8d43efd 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -526,6 +526,7 @@ enum vmx_insn_errno
     VMX_INSN_VMPTRLD_INVALID_PHYADDR       = 9,
     VMX_INSN_UNSUPPORTED_VMCS_COMPONENT    = 12,
     VMX_INSN_VMXON_IN_VMX_ROOT             = 15,
+    VMX_INSN_FAIL_INVALID                  = ~0,
 };
 
 void vmx_disable_intercept_for_msr(struct vcpu *v, u32 msr, int type);
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 4bf4d50..d40d6a5 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -306,6 +306,12 @@ extern uint8_t posted_intr_vector;
 #define INVVPID_ALL_CONTEXT                     2
 #define INVVPID_SINGLE_CONTEXT_RETAINING_GLOBAL 3
 
+#ifdef HAVE_GAS_VMX
+# define GAS_VMX_OP(yes, no) yes
+#else
+# define GAS_VMX_OP(yes, no) no
+#endif
+
 static always_inline void __vmptrld(u64 addr)
 {
     asm volatile (
@@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
     return okay;
 }
 
+static always_inline unsigned long vmwrite_safe(unsigned long field,
+                                                unsigned long value)
+{
+    unsigned long ret = 0;
+    bool fail_invalid, fail_valid;
+
+    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
+                              VMWRITE_OPCODE MODRM_EAX_ECX)
+                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
+                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
+                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
+                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
+                   : [field] GAS_VMX_OP("r", "a") (field),
+                     [value] GAS_VMX_OP("rm", "c") (value));
+
+    if ( unlikely(fail_invalid) )
+        ret = VMX_INSN_FAIL_INVALID;
+    else if ( unlikely(fail_valid) )
+        __vmread(VM_INSTRUCTION_ERROR, &ret);
+
+    return ret;
+}
+
 static always_inline void __invept(unsigned long type, u64 eptp, u64 gpa)
 {
     struct {
-- 
2.9.3




* [PATCH v2 2/4] x86/vmx: improve vmread_safe()
  2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
  2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
@ 2017-02-06 14:57 ` Sergey Dyasli
  2017-02-07  6:32   ` Tian, Kevin
  2017-02-06 14:57 ` [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE Sergey Dyasli
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-06 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

The original function doesn't distinguish between a Valid and an
Invalid VMfail.  The improved function returns an error code depending
on the outcome:

        VMsucceed: 0
      VMfailValid: VM Instruction Error Number
    VMfailInvalid: VMX_INSN_FAIL_INVALID (~0)

Existing users of __vmread_safe() are updated, and the double
underscore prefix is removed from the function's name because such
prefixes are reserved for the compiler.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c       |  2 +-
 xen/arch/x86/hvm/vmx/vvmx.c       |  2 +-
 xen/include/asm-x86/hvm/vmx/vmx.h | 41 +++++++++++++++++----------------------
 3 files changed, 20 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 59ef199..24f7570 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1711,7 +1711,7 @@ static inline unsigned long vmr(unsigned long field)
 {
     unsigned long val;
 
-    return __vmread_safe(field, &val) ? val : 0;
+    return vmread_safe(field, &val) ? 0 : val;
 }
 
 #define vmr16(fld) ({             \
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 105a3c0..31ac360 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -932,7 +932,7 @@ static inline void shadow_to_vvmcs(const struct vcpu *v, unsigned int field)
 {
     unsigned long value;
 
-    if ( __vmread_safe(field, &value) )
+    if ( vmread_safe(field, &value) == 0 )
         set_vvmcs(v, field, value);
 }
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index d40d6a5..24fbbd4 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -401,32 +401,27 @@ static always_inline void __vmwrite(unsigned long field, unsigned long value)
         );
 }
 
-static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
+static inline unsigned long vmread_safe(unsigned long field,
+                                        unsigned long *value)
 {
-    bool_t okay;
+    unsigned long ret = 0;
+    bool fail_invalid, fail_valid;
 
-    asm volatile (
-#ifdef HAVE_GAS_VMX
-                   "vmread %2, %1\n\t"
-#else
-                   VMREAD_OPCODE MODRM_EAX_ECX
-#endif
-                   /* CF==1 or ZF==1 --> rc = 0 */
-#ifdef __GCC_ASM_FLAG_OUTPUTS__
-                   : "=@ccnbe" (okay),
-#else
-                   "setnbe %0"
-                   : "=qm" (okay),
-#endif
-#ifdef HAVE_GAS_VMX
-                     "=rm" (*value)
-                   : "r" (field));
-#else
-                     "=c" (*value)
-                   : "a" (field));
-#endif
+    asm volatile ( GAS_VMX_OP("vmread %[field], %[value]\n\t",
+                              VMREAD_OPCODE MODRM_EAX_ECX)
+                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
+                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
+                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
+                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid),
+                     [value] GAS_VMX_OP("=rm", "=c") (*value)
+                   : [field] GAS_VMX_OP("r", "a") (field));
+
+    if ( unlikely(fail_invalid) )
+        ret = VMX_INSN_FAIL_INVALID;
+    else if ( unlikely(fail_valid) )
+        __vmread(VM_INSTRUCTION_ERROR, &ret);
 
-    return okay;
+    return ret;
 }
 
 static always_inline unsigned long vmwrite_safe(unsigned long field,
-- 
2.9.3




* [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE
  2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
  2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
  2017-02-06 14:57 ` [PATCH v2 2/4] x86/vmx: improve vmread_safe() Sergey Dyasli
@ 2017-02-06 14:57 ` Sergey Dyasli
  2017-02-07  6:37   ` Tian, Kevin
  2017-02-06 14:57 ` [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD Sergey Dyasli
  2017-02-06 15:19 ` [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Andrew Cooper
  4 siblings, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-06 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

There is an issue with the original __vmwrite() in nested vmx mode:
emulation of a guest's VMWRITE with invalid arguments leads to BUG().

Fix this by using vmwrite_safe() and reporting any kind of VMfail back
to the guest.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |  9 +++++++--
 xen/arch/x86/hvm/vmx/vvmx.c        | 15 +++++++++++----
 xen/include/asm-x86/hvm/vmx/vmcs.h |  2 +-
 xen/include/asm-x86/hvm/vmx/vvmx.h |  4 ++--
 4 files changed, 21 insertions(+), 9 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 24f7570..1d83a69 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -943,11 +943,16 @@ u64 virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding)
     return res;
 }
 
-void virtual_vmcs_vmwrite(const struct vcpu *v, u32 vmcs_encoding, u64 val)
+unsigned long virtual_vmcs_vmwrite(const struct vcpu *v, u32 vmcs_encoding,
+                                   u64 val)
 {
+    unsigned long ret;
+
     virtual_vmcs_enter(v);
-    __vmwrite(vmcs_encoding, val);
+    ret = vmwrite_safe(vmcs_encoding, val);
     virtual_vmcs_exit(v);
+
+    return ret;
 }
 
 /*
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 31ac360..1a3d1d2 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -264,7 +264,7 @@ u64 get_vvmcs_real(const struct vcpu *v, u32 encoding)
     return virtual_vmcs_vmread(v, encoding);
 }
 
-void set_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding, u64 val)
+unsigned long set_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding, u64 val)
 {
     union vmcs_encoding enc;
     u64 *content = (u64 *) vvmcs;
@@ -298,11 +298,13 @@ void set_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding, u64 val)
     }
 
     content[offset] = res;
+
+    return 0;
 }
 
-void set_vvmcs_real(const struct vcpu *v, u32 encoding, u64 val)
+unsigned long set_vvmcs_real(const struct vcpu *v, u32 encoding, u64 val)
 {
-    virtual_vmcs_vmwrite(v, encoding, val);
+    return virtual_vmcs_vmwrite(v, encoding, val);
 }
 
 static unsigned long reg_read(struct cpu_user_regs *regs,
@@ -1740,13 +1742,18 @@ int nvmx_handle_vmwrite(struct cpu_user_regs *regs)
     unsigned long operand; 
     u64 vmcs_encoding;
     bool_t okay = 1;
+    unsigned long err;
 
     if ( decode_vmx_inst(regs, &decode, &operand, 0)
              != X86EMUL_OKAY )
         return X86EMUL_EXCEPTION;
 
     vmcs_encoding = reg_read(regs, decode.reg2);
-    set_vvmcs(v, vmcs_encoding, operand);
+    if ( (err = set_vvmcs(v, vmcs_encoding, operand)) )
+    {
+        vmfail(regs, err);
+        return X86EMUL_OKAY;
+    }
 
     switch ( vmcs_encoding & ~VMCS_HIGH(0) )
     {
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 8d43efd..72dc5fc 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -541,7 +541,7 @@ int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 void virtual_vmcs_enter(const struct vcpu *);
 void virtual_vmcs_exit(const struct vcpu *);
 u64 virtual_vmcs_vmread(const struct vcpu *, u32 encoding);
-void virtual_vmcs_vmwrite(const struct vcpu *, u32 encoding, u64 val);
+unsigned long virtual_vmcs_vmwrite(const struct vcpu *, u32 encoding, u64 val);
 
 static inline int vmx_add_guest_msr(u32 msr)
 {
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index 242e524..d60e0bb 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -181,8 +181,8 @@ enum vvmcs_encoding_type {
 
 u64 get_vvmcs_virtual(void *vvmcs, u32 encoding);
 u64 get_vvmcs_real(const struct vcpu *, u32 encoding);
-void set_vvmcs_virtual(void *vvmcs, u32 encoding, u64 val);
-void set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
+unsigned long set_vvmcs_virtual(void *vvmcs, u32 encoding, u64 val);
+unsigned long set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
 
 #define get_vvmcs(vcpu, encoding) \
   (cpu_has_vmx_vmcs_shadowing ? \
-- 
2.9.3




* [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD
  2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
                   ` (2 preceding siblings ...)
  2017-02-06 14:57 ` [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE Sergey Dyasli
@ 2017-02-06 14:57 ` Sergey Dyasli
  2017-02-07  6:52   ` Tian, Kevin
  2017-02-06 15:19 ` [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Andrew Cooper
  4 siblings, 1 reply; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-06 14:57 UTC (permalink / raw)
  To: xen-devel
  Cc: Andrew Cooper, Kevin Tian, Jan Beulich, Jun Nakajima, Sergey Dyasli

There is an issue with the original __vmread() in nested vmx mode:
emulation of a guest's VMREAD with invalid arguments leads to BUG().

Fix this by using vmread_safe() and reporting any kind of VMfail back
to the guest.

New safe versions of the get_vvmcs() macro and its related functions
are introduced because of the new function signatures and the large
number of existing users.

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        |  9 +++++----
 xen/arch/x86/hvm/vmx/vvmx.c        | 26 +++++++++++++++++++++++---
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 ++-
 xen/include/asm-x86/hvm/vmx/vvmx.h |  7 +++++++
 4 files changed, 37 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 1d83a69..e04b002 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -932,15 +932,16 @@ void virtual_vmcs_exit(const struct vcpu *v)
         __vmptrld(cur);
 }
 
-u64 virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding)
+unsigned long virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding,
+                                  u64 *val)
 {
-    u64 res;
+    unsigned long ret;
 
     virtual_vmcs_enter(v);
-    __vmread(vmcs_encoding, &res);
+    ret = vmread_safe(vmcs_encoding, val);
     virtual_vmcs_exit(v);
 
-    return res;
+    return ret;
 }
 
 unsigned long virtual_vmcs_vmwrite(const struct vcpu *v, u32 vmcs_encoding,
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 1a3d1d2..6fda44a 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -261,7 +261,23 @@ u64 get_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding)
 
 u64 get_vvmcs_real(const struct vcpu *v, u32 encoding)
 {
-    return virtual_vmcs_vmread(v, encoding);
+    u64 val;
+
+    virtual_vmcs_vmread(v, encoding, &val);
+
+    return val;
+}
+
+unsigned long get_vvmcs_virtual_safe(void *vvmcs, u32 encoding, u64 *val)
+{
+    *val = get_vvmcs_virtual(vvmcs, encoding);
+
+    return 0;
+}
+
+unsigned long get_vvmcs_real_safe(const struct vcpu *v, u32 encoding, u64 *val)
+{
+    return virtual_vmcs_vmread(v, encoding, val);
 }
 
 unsigned long set_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding, u64 val)
@@ -1710,13 +1726,17 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
     struct vmx_inst_decoded decode;
     pagefault_info_t pfinfo;
     u64 value = 0;
-    int rc;
+    unsigned long rc;
 
     rc = decode_vmx_inst(regs, &decode, NULL, 0);
     if ( rc != X86EMUL_OKAY )
         return rc;
 
-    value = get_vvmcs(v, reg_read(regs, decode.reg2));
+    if ( (rc = get_vvmcs_safe(v, reg_read(regs, decode.reg2), &value)) )
+    {
+        vmfail(regs, rc);
+        return X86EMUL_OKAY;
+    }
 
     switch ( decode.type ) {
     case VMX_INST_MEMREG_TYPE_MEMORY:
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 72dc5fc..4f16250 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -540,7 +540,8 @@ void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
 int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
 void virtual_vmcs_enter(const struct vcpu *);
 void virtual_vmcs_exit(const struct vcpu *);
-u64 virtual_vmcs_vmread(const struct vcpu *, u32 encoding);
+unsigned long virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding,
+                                  u64 *val);
 unsigned long virtual_vmcs_vmwrite(const struct vcpu *, u32 encoding, u64 val);
 
 static inline int vmx_add_guest_msr(u32 msr)
diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h b/xen/include/asm-x86/hvm/vmx/vvmx.h
index d60e0bb..28e2503 100644
--- a/xen/include/asm-x86/hvm/vmx/vvmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
@@ -181,6 +181,8 @@ enum vvmcs_encoding_type {
 
 u64 get_vvmcs_virtual(void *vvmcs, u32 encoding);
 u64 get_vvmcs_real(const struct vcpu *, u32 encoding);
+unsigned long get_vvmcs_virtual_safe(void *vvmcs, u32 encoding, u64 *val);
+unsigned long get_vvmcs_real_safe(const struct vcpu *, u32 encoding, u64 *val);
 unsigned long set_vvmcs_virtual(void *vvmcs, u32 encoding, u64 val);
 unsigned long set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
 
@@ -194,6 +196,11 @@ unsigned long set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
    set_vvmcs_real(vcpu, encoding, val) : \
    set_vvmcs_virtual(vcpu_nestedhvm(vcpu).nv_vvmcx, encoding, val))
 
+#define get_vvmcs_safe(vcpu, encoding, val) \
+  (cpu_has_vmx_vmcs_shadowing ? \
+   get_vvmcs_real_safe(vcpu, encoding, val) : \
+   get_vvmcs_virtual_safe(vcpu_nestedhvm(vcpu).nv_vvmcx, encoding, val))
+
 uint64_t get_shadow_eptp(struct vcpu *v);
 
 void nvmx_destroy_vmcs(struct vcpu *v);
-- 
2.9.3




* Re: [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE
  2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
                   ` (3 preceding siblings ...)
  2017-02-06 14:57 ` [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD Sergey Dyasli
@ 2017-02-06 15:19 ` Andrew Cooper
  4 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2017-02-06 15:19 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Kevin Tian, Jan Beulich, Jun Nakajima

On 06/02/17 14:57, Sergey Dyasli wrote:
> Currently, emulation of vmread and vmwrite for a guest leads to BUG()
> in cases when actual VMREAD or VMWRITE ends up in VMfail due to invalid
> arguments.  The goal of this patch series is to prevent the BUG() from
> happening and report any kind of VMfail back to the guest, just like
> it would be done by H/W.
>
> v1 --> v2:
> * Removed "ASM_FLAG_OUT" from tools/tests/x86_emulator/x86_emulate.h
> * Replaced "~0UL" with "~0" for VMX_INSN_FAIL_INVALID
> * Removed double underscore prefix from vmwrite_safe() and vmread_safe()
> * Replaced "setb --> setc" and "sete --> setz"
> * Removed "fail_" prefix from "invalid" and "valid" asm constraints
> * Added "\t" to asm
> * Added unlikely() for checking fail conditions
> * Moved "ASM_FLAG_OUT" from lib.h to asm_defns.h
>
> Sergey Dyasli (4):
>   x86/vmx: introduce vmwrite_safe()
>   x86/vmx: improve vmread_safe()
>   x86/vvmx: correctly emulate VMWRITE
>   x86/vvmx: correctly emulate VMREAD

All 4 Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

>
>  xen/arch/x86/hvm/vmx/vmcs.c        | 20 +++++++----
>  xen/arch/x86/hvm/vmx/vvmx.c        | 46 +++++++++++++++++++-----
>  xen/include/asm-x86/asm_defns.h    |  6 ++++
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  6 ++--
>  xen/include/asm-x86/hvm/vmx/vmx.h  | 72 +++++++++++++++++++++++++-------------
>  xen/include/asm-x86/hvm/vmx/vvmx.h | 11 ++++--
>  6 files changed, 117 insertions(+), 44 deletions(-)
>




* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
@ 2017-02-07  6:31   ` Tian, Kevin
  2017-02-07 11:09   ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Tian, Kevin @ 2017-02-07  6:31 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Monday, February 06, 2017 10:58 PM
> 
> Any fail during the original __vmwrite() leads to BUG() which can be
> easily exploited from a guest in the nested vmx mode.
> 
> The new function returns error code depending on the outcome:
> 
>           VMsucceed: 0
>         VMfailValid: VM Instruction Error Number
>       VMfailInvalid: a new VMX_INSN_FAIL_INVALID
> 
> A new macro GAS_VMX_OP is introduced in order to improve the
> readability of asm.  Existing ASM_FLAG_OUT macro is reused and copied
> into asm_defns.h
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>



* Re: [PATCH v2 2/4] x86/vmx: improve vmread_safe()
  2017-02-06 14:57 ` [PATCH v2 2/4] x86/vmx: improve vmread_safe() Sergey Dyasli
@ 2017-02-07  6:32   ` Tian, Kevin
  0 siblings, 0 replies; 19+ messages in thread
From: Tian, Kevin @ 2017-02-07  6:32 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Monday, February 06, 2017 10:58 PM
> 
> The original function doesn't distinguish between Valid and Invalid
> VMfails.  Improved function returns error code depending on the outcome:
> 
>         VMsucceed: 0
>       VMfailValid: VM Instruction Error Number
>     VMfailInvalid: VMX_INSN_FAIL_INVALID (~0)
> 
> Existing users of __vmread_safe() are updated and double underscore
> prefix is removed from the function's name because such prefixes are
> reserved to a compiler.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>



* Re: [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE
  2017-02-06 14:57 ` [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE Sergey Dyasli
@ 2017-02-07  6:37   ` Tian, Kevin
  0 siblings, 0 replies; 19+ messages in thread
From: Tian, Kevin @ 2017-02-07  6:37 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Monday, February 06, 2017 10:58 PM
> 
> There is an issue with the original __vmwrite() in nested vmx mode:
> emulation of a guest's VMWRITE with invalid arguments leads to BUG().
> 
> Fix this by using vmwrite_safe() and reporting any kind of VMfail back
> to the guest.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

Acked-by: Kevin Tian <kevin.tian@intel.com>



* Re: [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD
  2017-02-06 14:57 ` [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD Sergey Dyasli
@ 2017-02-07  6:52   ` Tian, Kevin
  2017-02-07 15:56     ` Sergey Dyasli
  0 siblings, 1 reply; 19+ messages in thread
From: Tian, Kevin @ 2017-02-07  6:52 UTC (permalink / raw)
  To: Sergey Dyasli, xen-devel; +Cc: Andrew Cooper, Jan Beulich, Nakajima, Jun

> From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> Sent: Monday, February 06, 2017 10:58 PM
> 
> There is an issue with the original __vmread() in nested vmx mode:
> emulation of a guest's VMREAD with invalid arguments leads to BUG().
> 
> Fix this by using vmread_safe() and reporting any kind of VMfail back
> to the guest.
> 
> A new safe versions of get_vvmcs() macro and related functions are
> introduced because of new function signatures and lots of existing
> users.
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>

after reading this patch I realized my earlier ack to 3/4 might not hold,
since now you have mismatched get/set pairs:

get_vvmcs
get_vvmcs_safe
set_vvmcs

I suggest also introducing a set_vvmcs_safe counterpart in 3/4, as
there are still many existing callers of set_vvmcs which don't
expect an error.

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        |  9 +++++----
>  xen/arch/x86/hvm/vmx/vvmx.c        | 26 +++++++++++++++++++++++---
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  3 ++-
>  xen/include/asm-x86/hvm/vmx/vvmx.h |  7 +++++++
>  4 files changed, 37 insertions(+), 8 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 1d83a69..e04b002 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -932,15 +932,16 @@ void virtual_vmcs_exit(const struct vcpu *v)
>          __vmptrld(cur);
>  }
> 
> -u64 virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding)
> +unsigned long virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding,
> +                                  u64 *val)
>  {
> -    u64 res;
> +    unsigned long ret;
> 
>      virtual_vmcs_enter(v);
> -    __vmread(vmcs_encoding, &res);
> +    ret = vmread_safe(vmcs_encoding, val);
>      virtual_vmcs_exit(v);
> 
> -    return res;
> +    return ret;
>  }
> 
>  unsigned long virtual_vmcs_vmwrite(const struct vcpu *v, u32 vmcs_encoding,
> diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
> index 1a3d1d2..6fda44a 100644
> --- a/xen/arch/x86/hvm/vmx/vvmx.c
> +++ b/xen/arch/x86/hvm/vmx/vvmx.c
> @@ -261,7 +261,23 @@ u64 get_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding)
> 
>  u64 get_vvmcs_real(const struct vcpu *v, u32 encoding)
>  {
> -    return virtual_vmcs_vmread(v, encoding);
> +    u64 val;
> +
> +    virtual_vmcs_vmread(v, encoding, &val);
> +
> +    return val;
> +}
> +
> +unsigned long get_vvmcs_virtual_safe(void *vvmcs, u32 encoding, u64 *val)
> +{
> +    *val = get_vvmcs_virtual(vvmcs, encoding);
> +
> +    return 0;
> +}
> +
> +unsigned long get_vvmcs_real_safe(const struct vcpu *v, u32 encoding, u64 *val)
> +{
> +    return virtual_vmcs_vmread(v, encoding, val);
>  }
> 
>  unsigned long set_vvmcs_virtual(void *vvmcs, u32 vmcs_encoding, u64 val)
> @@ -1710,13 +1726,17 @@ int nvmx_handle_vmread(struct cpu_user_regs *regs)
>      struct vmx_inst_decoded decode;
>      pagefault_info_t pfinfo;
>      u64 value = 0;
> -    int rc;
> +    unsigned long rc;
> 
>      rc = decode_vmx_inst(regs, &decode, NULL, 0);
>      if ( rc != X86EMUL_OKAY )
>          return rc;
> 
> -    value = get_vvmcs(v, reg_read(regs, decode.reg2));
> +    if ( (rc = get_vvmcs_safe(v, reg_read(regs, decode.reg2), &value)) )
> +    {
> +        vmfail(regs, rc);
> +        return X86EMUL_OKAY;
> +    }
> 
>      switch ( decode.type ) {
>      case VMX_INST_MEMREG_TYPE_MEMORY:
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h
> b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 72dc5fc..4f16250 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -540,7 +540,8 @@ void vmx_clear_eoi_exit_bitmap(struct vcpu *v, u8 vector);
>  int vmx_check_msr_bitmap(unsigned long *msr_bitmap, u32 msr, int access_type);
>  void virtual_vmcs_enter(const struct vcpu *);
>  void virtual_vmcs_exit(const struct vcpu *);
> -u64 virtual_vmcs_vmread(const struct vcpu *, u32 encoding);
> +unsigned long virtual_vmcs_vmread(const struct vcpu *v, u32 vmcs_encoding,
> +                                  u64 *val);
>  unsigned long virtual_vmcs_vmwrite(const struct vcpu *, u32 encoding, u64 val);
> 
>  static inline int vmx_add_guest_msr(u32 msr)
> diff --git a/xen/include/asm-x86/hvm/vmx/vvmx.h
> b/xen/include/asm-x86/hvm/vmx/vvmx.h
> index d60e0bb..28e2503 100644
> --- a/xen/include/asm-x86/hvm/vmx/vvmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vvmx.h
> @@ -181,6 +181,8 @@ enum vvmcs_encoding_type {
> 
>  u64 get_vvmcs_virtual(void *vvmcs, u32 encoding);
>  u64 get_vvmcs_real(const struct vcpu *, u32 encoding);
> +unsigned long get_vvmcs_virtual_safe(void *vvmcs, u32 encoding, u64 *val);
> +unsigned long get_vvmcs_real_safe(const struct vcpu *, u32 encoding, u64 *val);
>  unsigned long set_vvmcs_virtual(void *vvmcs, u32 encoding, u64 val);
>  unsigned long set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
> 
> @@ -194,6 +196,11 @@ unsigned long set_vvmcs_real(const struct vcpu *, u32 encoding, u64 val);
>     set_vvmcs_real(vcpu, encoding, val) : \
>     set_vvmcs_virtual(vcpu_nestedhvm(vcpu).nv_vvmcx, encoding, val))
> 
> +#define get_vvmcs_safe(vcpu, encoding, val) \
> +  (cpu_has_vmx_vmcs_shadowing ? \
> +   get_vvmcs_real_safe(vcpu, encoding, val) : \
> +   get_vvmcs_virtual_safe(vcpu_nestedhvm(vcpu).nv_vvmcx, encoding, val))
> +
>  uint64_t get_shadow_eptp(struct vcpu *v);
> 
>  void nvmx_destroy_vmcs(struct vcpu *v);
> --
> 2.9.3


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
  2017-02-07  6:31   ` Tian, Kevin
@ 2017-02-07 11:09   ` Jan Beulich
  2017-02-07 11:59     ` Andrew Cooper
  2017-02-07 15:06     ` Sergey Dyasli
  1 sibling, 2 replies; 19+ messages in thread
From: Jan Beulich @ 2017-02-07 11:09 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: Andrew Cooper, Kevin Tian, Jun Nakajima, xen-devel

>>> On 06.02.17 at 15:57, <sergey.dyasli@citrix.com> wrote:
> Any fail during the original __vmwrite() leads to BUG() which can be
> easily exploited from a guest in the nested vmx mode.
> 
> The new function returns error code depending on the outcome:
> 
>           VMsucceed: 0
>         VMfailValid: VM Instruction Error Number
>       VMfailInvalid: a new VMX_INSN_FAIL_INVALID
> 
> A new macro GAS_VMX_OP is introduced in order to improve the
> readability of asm.  Existing ASM_FLAG_OUT macro is reused and copied
> into asm_defns.h
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> ---

Please can you have the revision info for the individual patches
here. I know you've put it in the overview mail, but for reviewers
it's far more useful to (also) be here.

> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -526,6 +526,7 @@ enum vmx_insn_errno
>      VMX_INSN_VMPTRLD_INVALID_PHYADDR       = 9,
>      VMX_INSN_UNSUPPORTED_VMCS_COMPONENT    = 12,
>      VMX_INSN_VMXON_IN_VMX_ROOT             = 15,
> +    VMX_INSN_FAIL_INVALID                  = ~0,
>  };

The main reason for me to ask for the type change here was to ...

> @@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
>      return okay;
>  }
>  
> +static always_inline unsigned long vmwrite_safe(unsigned long field,
> +                                                unsigned long value)
> +{
> +    unsigned long ret = 0;
> +    bool fail_invalid, fail_valid;
> +
> +    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
> +                              VMWRITE_OPCODE MODRM_EAX_ECX)
> +                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
> +                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
> +                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
> +                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
> +                   : [field] GAS_VMX_OP("r", "a") (field),
> +                     [value] GAS_VMX_OP("rm", "c") (value));
> +
> +    if ( unlikely(fail_invalid) )
> +        ret = VMX_INSN_FAIL_INVALID;
> +    else if ( unlikely(fail_valid) )
> +        __vmread(VM_INSTRUCTION_ERROR, &ret);
> +
> +    return ret;
> +}

... allow the function to return enum vmx_insn_errno, and that
to not be a 64-bit quantity. As you're presumably aware, dealing
with 32-bit quantities is on the average slightly more efficient than
dealing with 64-bit ones. The code above should imo still BUG() if
the value read from VM_INSTRUCTION_ERROR doesn't fit in 32
bits (as it's a 32-bit field only anyway).

Also, looking at the entire asm() above another time, wouldn't it
end up a bit less convoluted if you simply used CMOVC for the
"invalid" code path? Similarly I wonder whether the second
VMREAD wouldn't better be moved into the asm(), utilizing the
UNLIKELY_START() et al constructs to get that code path
entirely out of the main path. These aren't requirements though,
just aspects to think about.

Jan




* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 11:09   ` Jan Beulich
@ 2017-02-07 11:59     ` Andrew Cooper
  2017-02-07 13:18       ` Jan Beulich
  2017-02-07 15:06     ` Sergey Dyasli
  1 sibling, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2017-02-07 11:59 UTC (permalink / raw)
  To: Jan Beulich, Sergey Dyasli; +Cc: Kevin Tian, Jun Nakajima, xen-devel

On 07/02/17 11:09, Jan Beulich wrote:
>
>> @@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
>>      return okay;
>>  }
>>  
>> +static always_inline unsigned long vmwrite_safe(unsigned long field,
>> +                                                unsigned long value)
>> +{
>> +    unsigned long ret = 0;
>> +    bool fail_invalid, fail_valid;
>> +
>> +    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
>> +                              VMWRITE_OPCODE MODRM_EAX_ECX)
>> +                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
>> +                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
>> +                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
>> +                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
>> +                   : [field] GAS_VMX_OP("r", "a") (field),
>> +                     [value] GAS_VMX_OP("rm", "c") (value));
>> +
>> +    if ( unlikely(fail_invalid) )
>> +        ret = VMX_INSN_FAIL_INVALID;
>> +    else if ( unlikely(fail_valid) )
>> +        __vmread(VM_INSTRUCTION_ERROR, &ret);
>> +
>> +    return ret;
>> +}
> ... allow the function to return enum vmx_insn_errno, and that
> to not be a 64-bit quantity. As you're presumably aware, dealing
> with 32-bit quantities is on the average slightly more efficient than
> dealing with 64-bit ones. The code above should imo still BUG() if
> the value read from VM_INSTRUCTION_ERROR doesn't fit in 32
> bits (as it's a 32-bit field only anyway).
>
> Also, looking at the entire asm() above another time, wouldn't it
> end up a bit less convoluted if you simply used CMOVC for the
> "invalid" code path? Similarly I wonder whether the second
> VMREAD wouldn't better be moved into the asm(), utilizing the
> UNLIKELY_START() et al constructs to get that code path
> entirely out of the main path. These aren't requirements though,
> just aspects to think about.

Embedding two vm*** instructions is substantially harder in the non-
GAS_VMX_OP() case.  It either involves manual register scheduling, or a
separate ModRM and different explicit register fields.

As for extra logic, I have some further plans which would allow the
compiler to elide the __vmread() on some paths, which it can only for
logic exposed in C.  From this point of view, the less code in the asm
block, the better.

~Andrew



* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 11:59     ` Andrew Cooper
@ 2017-02-07 13:18       ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-02-07 13:18 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Sergey Dyasli, Kevin Tian, Jun Nakajima, xen-devel

>>> On 07.02.17 at 12:59, <andrew.cooper3@citrix.com> wrote:
> On 07/02/17 11:09, Jan Beulich wrote:
>>
>>> @@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, 
> unsigned long *value)
>>>      return okay;
>>>  }
>>>  
>>> +static always_inline unsigned long vmwrite_safe(unsigned long field,
>>> +                                                unsigned long value)
>>> +{
>>> +    unsigned long ret = 0;
>>> +    bool fail_invalid, fail_valid;
>>> +
>>> +    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
>>> +                              VMWRITE_OPCODE MODRM_EAX_ECX)
>>> +                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
>>> +                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
>>> +                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
>>> +                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
>>> +                   : [field] GAS_VMX_OP("r", "a") (field),
>>> +                     [value] GAS_VMX_OP("rm", "c") (value));
>>> +
>>> +    if ( unlikely(fail_invalid) )
>>> +        ret = VMX_INSN_FAIL_INVALID;
>>> +    else if ( unlikely(fail_valid) )
>>> +        __vmread(VM_INSTRUCTION_ERROR, &ret);
>>> +
>>> +    return ret;
>>> +}
>> ... allow the function to return enum vmx_insn_errno, and that
>> to not be a 64-bit quantity. As you're presumably aware, dealing
>> with 32-bit quantities is on the average slightly more efficient than
>> dealing with 64-bit ones. The code above should imo still BUG() if
>> the value read from VM_INSTRUCTION_ERROR doesn't fit in 32
>> bits (as it's a 32-bit field only anyway).
>>
>> Also, looking at the entire asm() above another time, wouldn't it
>> end up a bit less convoluted if you simply used CMOVC for the
>> "invalid" code path? Similarly I wonder whether the second
>> VMREAD wouldn't better be moved into the asm(), utilizing the
>> UNLIKELY_START() et al constructs to get that code path
>> entirely out of the main path. These aren't requirements though,
>> just aspects to think about.
> 
> Embedding two vm*** instruction is substantially harder in the non
> GAS_VMX_OP() case.  It either involves manual register scheduling, or a
> separate ModRM and different explicit register fields.

Ah, right, that wouldn't be very nice indeed.

> As for extra logic, I have some further plans which would allow the
> compiler to elide the __vmread() on some paths, which it can only for
> logic exposed in C.  From this point of view, the less code in the asm
> block, the better.

Well, okay then.

Jan




* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 11:09   ` Jan Beulich
  2017-02-07 11:59     ` Andrew Cooper
@ 2017-02-07 15:06     ` Sergey Dyasli
  2017-02-07 15:22       ` Andrew Cooper
  2017-02-07 16:22       ` Jan Beulich
  1 sibling, 2 replies; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-07 15:06 UTC (permalink / raw)
  To: JBeulich
  Cc: Sergey Dyasli, Kevin Tian, Andrew Cooper, jun.nakajima, xen-devel

On Tue, 2017-02-07 at 04:09 -0700, Jan Beulich wrote:
> > > > On 06.02.17 at 15:57, <sergey.dyasli@citrix.com> wrote:
> > 
> > Any fail during the original __vmwrite() leads to BUG() which can be
> > easily exploited from a guest in the nested vmx mode.
> > 
> > The new function returns error code depending on the outcome:
> > 
> >           VMsucceed: 0
> >         VMfailValid: VM Instruction Error Number
> >       VMfailInvalid: a new VMX_INSN_FAIL_INVALID
> > 
> > A new macro GAS_VMX_OP is introduced in order to improve the
> > readability of asm.  Existing ASM_FLAG_OUT macro is reused and copied
> > into asm_defns.h
> > 
> > Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> > ---
> 
> Please can you have the revision info for the individual patches
> here. I know you've put it in the overview mail, but for reviewers
> it's far more useful to (also) be here.
> 
> > --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> > +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> > @@ -526,6 +526,7 @@ enum vmx_insn_errno
> >      VMX_INSN_VMPTRLD_INVALID_PHYADDR       = 9,
> >      VMX_INSN_UNSUPPORTED_VMCS_COMPONENT    = 12,
> >      VMX_INSN_VMXON_IN_VMX_ROOT             = 15,
> > +    VMX_INSN_FAIL_INVALID                  = ~0,
> >  };
> 
> The main reason for me to ask for the type change here was to ...
> 
> > @@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
> >      return okay;
> >  }
> >  
> > +static always_inline unsigned long vmwrite_safe(unsigned long field,
> > +                                                unsigned long value)
> > +{
> > +    unsigned long ret = 0;
> > +    bool fail_invalid, fail_valid;
> > +
> > +    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
> > +                              VMWRITE_OPCODE MODRM_EAX_ECX)
> > +                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
> > +                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
> > +                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
> > +                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
> > +                   : [field] GAS_VMX_OP("r", "a") (field),
> > +                     [value] GAS_VMX_OP("rm", "c") (value));
> > +
> > +    if ( unlikely(fail_invalid) )
> > +        ret = VMX_INSN_FAIL_INVALID;
> > +    else if ( unlikely(fail_valid) )
> > +        __vmread(VM_INSTRUCTION_ERROR, &ret);
> > +
> > +    return ret;
> > +}
> 
> ... allow the function to return enum vmx_insn_errno, and that
> to not be a 64-bit quantity. As you're presumably aware, dealing
> with 32-bit quantities is on the average slightly more efficient than
> dealing with 64-bit ones. The code above should imo still BUG() if
> the value read from VM_INSTRUCTION_ERROR doesn't fit in 32
> bits (as it's a 32-bit field only anyway).

If I understood correctly, you are suggesting the following change:

diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 24fbbd4..f9b3bf1 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -424,8 +424,8 @@ static inline unsigned long vmread_safe(unsigned long field,
     return ret;
 }
 
-static always_inline unsigned long vmwrite_safe(unsigned long field,
-                                                unsigned long value)
+static always_inline enum vmx_insn_errno vmwrite_safe(unsigned long field,
+                                                      unsigned long value)
 {
     unsigned long ret = 0;
     bool fail_invalid, fail_valid;
@@ -440,11 +440,16 @@ static always_inline unsigned long vmwrite_safe(unsigned long field,
                      [value] GAS_VMX_OP("rm", "c") (value));
 
     if ( unlikely(fail_invalid) )
+    {
         ret = VMX_INSN_FAIL_INVALID;
+    }
     else if ( unlikely(fail_valid) )
+    {
         __vmread(VM_INSTRUCTION_ERROR, &ret);
+        BUG_ON(ret >= ~0U);
+    }
 
-    return ret;
+    return (enum vmx_insn_errno) ret;
 }

And I have noticed one inconsistency: vmwrite_safe() is "always_inline"
while vmread_safe() is plain "inline".  I believe that plain inline is
enough here; what do you think?

-- 
Thanks,
Sergey


* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 15:06     ` Sergey Dyasli
@ 2017-02-07 15:22       ` Andrew Cooper
  2017-02-07 16:22       ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2017-02-07 15:22 UTC (permalink / raw)
  To: Sergey Dyasli, JBeulich; +Cc: Kevin Tian, jun.nakajima, xen-devel

On 07/02/17 15:06, Sergey Dyasli wrote:
> On Tue, 2017-02-07 at 04:09 -0700, Jan Beulich wrote:
>>>>> On 06.02.17 at 15:57, <sergey.dyasli@citrix.com> wrote:
>>> Any fail during the original __vmwrite() leads to BUG() which can be
>>> easily exploited from a guest in the nested vmx mode.
>>>
>>> The new function returns error code depending on the outcome:
>>>
>>>           VMsucceed: 0
>>>         VMfailValid: VM Instruction Error Number
>>>       VMfailInvalid: a new VMX_INSN_FAIL_INVALID
>>>
>>> A new macro GAS_VMX_OP is introduced in order to improve the
>>> readability of asm.  Existing ASM_FLAG_OUT macro is reused and copied
>>> into asm_defns.h
>>>
>>> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>>> ---
>> Please can you have the revision info for the individual patches
>> here. I know you've put it in the overview mail, but for reviewers
>> it's far more useful to (also) be here.
>>
>>> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
>>> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
>>> @@ -526,6 +526,7 @@ enum vmx_insn_errno
>>>      VMX_INSN_VMPTRLD_INVALID_PHYADDR       = 9,
>>>      VMX_INSN_UNSUPPORTED_VMCS_COMPONENT    = 12,
>>>      VMX_INSN_VMXON_IN_VMX_ROOT             = 15,
>>> +    VMX_INSN_FAIL_INVALID                  = ~0,
>>>  };
>> The main reason for me to ask for the type change here was to ...
>>
>>> @@ -423,6 +429,29 @@ static inline bool_t __vmread_safe(unsigned long field, unsigned long *value)
>>>      return okay;
>>>  }
>>>  
>>> +static always_inline unsigned long vmwrite_safe(unsigned long field,
>>> +                                                unsigned long value)
>>> +{
>>> +    unsigned long ret = 0;
>>> +    bool fail_invalid, fail_valid;
>>> +
>>> +    asm volatile ( GAS_VMX_OP("vmwrite %[value], %[field]\n\t",
>>> +                              VMWRITE_OPCODE MODRM_EAX_ECX)
>>> +                   ASM_FLAG_OUT(, "setc %[invalid]\n\t")
>>> +                   ASM_FLAG_OUT(, "setz %[valid]\n\t")
>>> +                   : ASM_FLAG_OUT("=@ccc", [invalid] "=rm") (fail_invalid),
>>> +                     ASM_FLAG_OUT("=@ccz", [valid] "=rm") (fail_valid)
>>> +                   : [field] GAS_VMX_OP("r", "a") (field),
>>> +                     [value] GAS_VMX_OP("rm", "c") (value));
>>> +
>>> +    if ( unlikely(fail_invalid) )
>>> +        ret = VMX_INSN_FAIL_INVALID;
>>> +    else if ( unlikely(fail_valid) )
>>> +        __vmread(VM_INSTRUCTION_ERROR, &ret);
>>> +
>>> +    return ret;
>>> +}
>> ... allow the function to return enum vmx_insn_errno, and that
>> to not be a 64-bit quantity. As you're presumably aware, dealing
>> with 32-bit quantities is on the average slightly more efficient than
>> dealing with 64-bit ones. The code above should imo still BUG() if
>> the value read from VM_INSTRUCTION_ERROR doesn't fit in 32
>> bits (as it's a 32-bit field only anyway).
> If I understood correctly, you are suggesting the following change:
>
> diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
> index 24fbbd4..f9b3bf1 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -424,8 +424,8 @@ static inline unsigned long vmread_safe(unsigned long field,
>      return ret;
>  }
>  
> -static always_inline unsigned long vmwrite_safe(unsigned long field,
> -                                                unsigned long value)
> +static always_inline enum vmx_insn_errno vmwrite_safe(unsigned long field,
> +                                                      unsigned long value)
>  {
>      unsigned long ret = 0;
>      bool fail_invalid, fail_valid;
> @@ -440,11 +440,16 @@ static always_inline unsigned long vmwrite_safe(unsigned long field,
>                       [value] GAS_VMX_OP("rm", "c") (value));
>  
>      if ( unlikely(fail_invalid) )
> +    {
>          ret = VMX_INSN_FAIL_INVALID;
> +    }
>      else if ( unlikely(fail_valid) )
> +    {
>          __vmread(VM_INSTRUCTION_ERROR, &ret);
> +        BUG_ON(ret >= ~0U);

I really don't think the BUG_ON() is necessary.  If hardware already
guarantees to hand us back a 32-bit quantity, and hardware is
malfunctioning anyway, we have already lost.

Also, this BUG_ON() will prevent inlining the function if always_inline
is reduced to inline (which is a good idea).

~Andrew

> +    }
>  
> -    return ret;
> +    return (enum vmx_insn_errno) ret;
>  }
>
> And I have noticed one inconsistency: vmwrite_safe() is "always_inline"
> while vmread_safe() is plain "inline". I believe that plain inline is
> enough here, what do you think?
>




* Re: [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD
  2017-02-07  6:52   ` Tian, Kevin
@ 2017-02-07 15:56     ` Sergey Dyasli
  0 siblings, 0 replies; 19+ messages in thread
From: Sergey Dyasli @ 2017-02-07 15:56 UTC (permalink / raw)
  To: Kevin Tian, xen-devel
  Cc: Sergey Dyasli, jun.nakajima, jbeulich, Andrew Cooper

On Tue, 2017-02-07 at 06:52 +0000, Tian, Kevin wrote:
> > From: Sergey Dyasli [mailto:sergey.dyasli@citrix.com]
> > Sent: Monday, February 06, 2017 10:58 PM
> > 
> > There is an issue with the original __vmread() in nested vmx mode:
> > emulation of a guest's VMREAD with invalid arguments leads to BUG().
> > 
> > Fix this by using vmread_safe() and reporting any kind of VMfail back
> > to the guest.
> > 
> > A new safe versions of get_vvmcs() macro and related functions are
> > introduced because of new function signatures and lots of existing
> > users.
> > 
> > Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> 
> after reading this patch I realized my earlier ack to 3/4 might not hold,
> since now you have mismatched get/set pairs:
> 
> get_vvmcs
> get_vvmcs_safe
> set_vvmcs
> 
> suggest to also introduce a set_vvmcs_safe counterpart in 3/4, as
> there are still many existing callers on set_vvmcs which don't 
> expect error.

It is true that my patch introduces a change in behaviour: invalid
set_vvmcs() calls will silently fail instead of triggering BUG().
I agree it would be better to introduce set_vvmcs_safe() for now.

-- 
Thanks,
Sergey


* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 15:06     ` Sergey Dyasli
  2017-02-07 15:22       ` Andrew Cooper
@ 2017-02-07 16:22       ` Jan Beulich
  2017-02-07 16:34         ` Andrew Cooper
  1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2017-02-07 16:22 UTC (permalink / raw)
  To: Sergey Dyasli; +Cc: AndrewCooper, Kevin Tian, jun.nakajima, xen-devel

>>> On 07.02.17 at 16:06, <sergey.dyasli@citrix.com> wrote:
> If I understood correctly, you are suggesting the following change:

Mostly.

> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
> @@ -424,8 +424,8 @@ static inline unsigned long vmread_safe(unsigned long field,
>      return ret;
>  }
>  
> -static always_inline unsigned long vmwrite_safe(unsigned long field,
> -                                                unsigned long value)
> +static always_inline enum vmx_insn_errno vmwrite_safe(unsigned long field,
> +                                                      unsigned long value)
>  {
>      unsigned long ret = 0;
>      bool fail_invalid, fail_valid;
> @@ -440,11 +440,16 @@ static always_inline unsigned long vmwrite_safe(unsigned long field,
>                       [value] GAS_VMX_OP("rm", "c") (value));
>  
>      if ( unlikely(fail_invalid) )
> +    {
>          ret = VMX_INSN_FAIL_INVALID;
> +    }

No need to add braces here and ...

>      else if ( unlikely(fail_valid) )
> +    {
>          __vmread(VM_INSTRUCTION_ERROR, &ret);
> +        BUG_ON(ret >= ~0U);
> +    }
>  
> -    return ret;
> +    return (enum vmx_insn_errno) ret;

... no need for the cast here. (See Andrew's reply for the BUG_ON().)

> And I have noticed one inconsistency: vmwrite_safe() is "always_inline"
> while vmread_safe() is plain "inline". I believe that plain inline is
> enough here, what do you think?

I would assume plain inline to be enough, but maybe the VMX
maintainers know why always_inline was used.

Jan




* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 16:22       ` Jan Beulich
@ 2017-02-07 16:34         ` Andrew Cooper
  2017-02-07 16:47           ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2017-02-07 16:34 UTC (permalink / raw)
  To: Jan Beulich, Sergey Dyasli; +Cc: Kevin Tian, jun.nakajima, xen-devel

On 07/02/17 16:22, Jan Beulich wrote:
>>>> On 07.02.17 at 16:06, <sergey.dyasli@citrix.com> wrote:
>> If I understood correctly, you are suggesting the following change:
> Mostly.
>
>> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
>> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
>> @@ -424,8 +424,8 @@ static inline unsigned long vmread_safe(unsigned long field,
>>      return ret;
>>  }
>>  
>> -static always_inline unsigned long vmwrite_safe(unsigned long field,
>> -                                                unsigned long value)
>> +static always_inline enum vmx_insn_errno vmwrite_safe(unsigned long field,
>> +                                                      unsigned long value)
>>  {
>>      unsigned long ret = 0;
>>      bool fail_invalid, fail_valid;
>> @@ -440,11 +440,16 @@ static always_inline unsigned long vmwrite_safe(unsigned long field,
>>                       [value] GAS_VMX_OP("rm", "c") (value));
>>  
>>      if ( unlikely(fail_invalid) )
>> +    {
>>          ret = VMX_INSN_FAIL_INVALID;
>> +    }
> No need to add braces here and ...
>
>>      else if ( unlikely(fail_valid) )
>> +    {
>>          __vmread(VM_INSTRUCTION_ERROR, &ret);
>> +        BUG_ON(ret >= ~0U);
>> +    }
>>  
>> -    return ret;
>> +    return (enum vmx_insn_errno) ret;
> ... no need for the cast here. (See Andrew's reply for the BUG_ON().)
>
>> And I have noticed one inconsistency: vmwrite_safe() is "always_inline"
>> while vmread_safe() is plain "inline". I believe that plain inline is
>> enough here, what do you think?
> I would assume plain inline to be enough, but maybe the VMX
> maintainers know why always_inline was used.

The always_inline was my doing IIRC, because the use of unlikely
sections caused GCC to create separate identical functions in each
translation unit, in an attempt to minimise the quantity of out-of-line
code.

~Andrew



* Re: [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe()
  2017-02-07 16:34         ` Andrew Cooper
@ 2017-02-07 16:47           ` Jan Beulich
  0 siblings, 0 replies; 19+ messages in thread
From: Jan Beulich @ 2017-02-07 16:47 UTC (permalink / raw)
  To: Andrew Cooper, Sergey Dyasli; +Cc: Kevin Tian, jun.nakajima, xen-devel

>>> On 07.02.17 at 17:34, <andrew.cooper3@citrix.com> wrote:
> On 07/02/17 16:22, Jan Beulich wrote:
>>>>> On 07.02.17 at 16:06, <sergey.dyasli@citrix.com> wrote:
>>> If I understood correctly, you are suggesting the following change:
>> Mostly.
>>
>>> --- a/xen/include/asm-x86/hvm/vmx/vmx.h
>>> +++ b/xen/include/asm-x86/hvm/vmx/vmx.h
>>> @@ -424,8 +424,8 @@ static inline unsigned long vmread_safe(unsigned long 
> field,
>>>      return ret;
>>>  }
>>>  
>>> -static always_inline unsigned long vmwrite_safe(unsigned long field,
>>> -                                                unsigned long value)
>>> +static always_inline enum vmx_insn_errno vmwrite_safe(unsigned long field,
>>> +                                                      unsigned long value)
>>>  {
>>>      unsigned long ret = 0;
>>>      bool fail_invalid, fail_valid;
>>> @@ -440,11 +440,16 @@ static always_inline unsigned long 
> vmwrite_safe(unsigned long field,
>>>                       [value] GAS_VMX_OP("rm", "c") (value));
>>>  
>>>      if ( unlikely(fail_invalid) )
>>> +    {
>>>          ret = VMX_INSN_FAIL_INVALID;
>>> +    }
>> No need to add braces here and ...
>>
>>>      else if ( unlikely(fail_valid) )
>>> +    {
>>>          __vmread(VM_INSTRUCTION_ERROR, &ret);
>>> +        BUG_ON(ret >= ~0U);
>>> +    }
>>>  
>>> -    return ret;
>>> +    return (enum vmx_insn_errno) ret;
>> ... no need for the cast here. (See Andrew's reply for the BUG_ON().)
>>
>>> And I have noticed one inconsistency: vmwrite_safe() is "always_inline"
>>> while vmread_safe() is plain "inline". I believe that plain inline is
>>> enough here, what do you think?
>> I would assume plain inline to be enough, but maybe the VMX
>> maintainers know why always_inline was used.
> 
> The always_inline was my doing IIRC, because the use of unlikely
> sections caused GCC to create a separate identical functions in each
> translation unit, in an attempt to minimise the quantity of out-of-line
> code.

In which case it's not needed in these new flavors.

Jan




end of thread, other threads:[~2017-02-07 16:47 UTC | newest]

Thread overview: 19+ messages
2017-02-06 14:57 [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Sergey Dyasli
2017-02-06 14:57 ` [PATCH v2 1/4] x86/vmx: introduce vmwrite_safe() Sergey Dyasli
2017-02-07  6:31   ` Tian, Kevin
2017-02-07 11:09   ` Jan Beulich
2017-02-07 11:59     ` Andrew Cooper
2017-02-07 13:18       ` Jan Beulich
2017-02-07 15:06     ` Sergey Dyasli
2017-02-07 15:22       ` Andrew Cooper
2017-02-07 16:22       ` Jan Beulich
2017-02-07 16:34         ` Andrew Cooper
2017-02-07 16:47           ` Jan Beulich
2017-02-06 14:57 ` [PATCH v2 2/4] x86/vmx: improve vmread_safe() Sergey Dyasli
2017-02-07  6:32   ` Tian, Kevin
2017-02-06 14:57 ` [PATCH v2 3/4] x86/vvmx: correctly emulate VMWRITE Sergey Dyasli
2017-02-07  6:37   ` Tian, Kevin
2017-02-06 14:57 ` [PATCH v2 4/4] x86/vvmx: correctly emulate VMREAD Sergey Dyasli
2017-02-07  6:52   ` Tian, Kevin
2017-02-07 15:56     ` Sergey Dyasli
2017-02-06 15:19 ` [PATCH v2 0/4] x86/vvmx: correctly emulate VMREAD and VMWRITE Andrew Cooper
