* [PATCH v2 for-4.9 0/6] x86/emul: Fixes
@ 2017-04-05 17:32 Andrew Cooper
  2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:32 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Jan Beulich

This series started out as patches 4 and 5, to aid the userspace fuzzing
harness, but the work uncovered the bug fixed in patch 3, which is security
relevant.

Patch 3 is a must-fix for Xen 4.9; fixing it before release avoids the bug
needing an XSA.  Patches 4-6 are nice-to-have.

The main change from v1 is reworking of patch 3.

Andrew Cooper (6):
  x86/hvm: Correct some address space terminology
  x86/hvm: Correct long mode predicate
  x86/hvm: Fix segmentation logic for system segments
  x86/svm: Introduce svm_emul_swint_injection()
  x86/emul: Drop swint_emulate infrastructure
  x86/emul: Require callers to provide LMA in the emulation context

 tools/fuzz/x86_instruction_emulator/fuzz-emul.c |  20 +-
 tools/tests/x86_emulator/test_x86_emulator.c    |   4 +
 xen/arch/x86/cpuid.c                            |   2 +-
 xen/arch/x86/hvm/emulate.c                      |  25 +--
 xen/arch/x86/hvm/hvm.c                          |  77 ++++----
 xen/arch/x86/hvm/svm/svm.c                      | 139 +++++++++++++-
 xen/arch/x86/hvm/vmx/vmx.c                      |   6 +-
 xen/arch/x86/hvm/vmx/vvmx.c                     |   8 +-
 xen/arch/x86/mm.c                               |   4 +-
 xen/arch/x86/mm/hap/hap.c                       |   8 +-
 xen/arch/x86/mm/shadow/common.c                 |  25 ++-
 xen/arch/x86/oprofile/backtrace.c               |   2 +-
 xen/arch/x86/traps.c                            |   1 +
 xen/arch/x86/x86_emulate/x86_emulate.c          | 238 ++++--------------------
 xen/arch/x86/x86_emulate/x86_emulate.h          |  56 +-----
 xen/include/asm-x86/hvm/hvm.h                   |   5 +-
 16 files changed, 266 insertions(+), 354 deletions(-)

-- 
2.1.4



* [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
@ 2017-04-05 17:32 ` Andrew Cooper
  2017-04-06  8:35   ` Tim Deegan
  2017-04-06  8:47   ` Jan Beulich
  2017-04-05 17:33 ` [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate Andrew Cooper
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:32 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Tim Deegan, Jan Beulich

The function hvm_translate_linear_addr() translates a virtual address to a
linear address, not a linear address to a physical address.  Correct its name.

Both hvm_translate_virtual_addr() and hvmemul_virtual_to_linear() return a
linear address, but a parameter name of paddr is easily confused with paddr_t.
Rename it to linear, to clearly identify the address space, and for
consistency with hvm_virtual_to_linear_addr().
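
For reference, a minimal sketch of the address spaces involved (illustrative
only; the types and helper below are simplified stand-ins, not Xen code):

    #include <stdint.h>

    typedef uint64_t paddr_t;    /* physical address, what "paddr" suggests */

    struct seg { uint64_t base; };

    /* Segmentation: virtual (seg:offset) -> linear.  This is the step
     * hvm_virtual_to_linear_addr() performs; paging then maps the linear
     * address to a paddr_t as a separate, later step. */
    static uint64_t virtual_to_linear(const struct seg *s, uint64_t offset)
    {
        return s->base + offset;
    }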

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * Alter commit message, as there is disagreement as to the intended meaning
   of paddr.  Whatever the intended meaning, it is currently confusing against
   paddr_t.
---
 xen/arch/x86/hvm/emulate.c      | 12 ++++++------
 xen/arch/x86/mm/shadow/common.c | 16 ++++++++--------
 2 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 2d92957..4073715 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -506,7 +506,7 @@ static int hvmemul_virtual_to_linear(
     unsigned long *reps,
     enum hvm_access_type access_type,
     struct hvm_emulate_ctxt *hvmemul_ctxt,
-    unsigned long *paddr)
+    unsigned long *linear)
 {
     struct segment_register *reg;
     int okay;
@@ -514,7 +514,7 @@ static int hvmemul_virtual_to_linear(
 
     if ( seg == x86_seg_none )
     {
-        *paddr = offset;
+        *linear = offset;
         return X86EMUL_OKAY;
     }
 
@@ -549,16 +549,16 @@ static int hvmemul_virtual_to_linear(
         okay = hvm_virtual_to_linear_addr(
             seg, reg, offset - (*reps - 1) * bytes_per_rep,
             *reps * bytes_per_rep, access_type,
-            hvmemul_ctxt->ctxt.addr_size, paddr);
-        *paddr += (*reps - 1) * bytes_per_rep;
+            hvmemul_ctxt->ctxt.addr_size, linear);
+        *linear += (*reps - 1) * bytes_per_rep;
         if ( hvmemul_ctxt->ctxt.addr_size != 64 )
-            *paddr = (uint32_t)*paddr;
+            *linear = (uint32_t)*linear;
     }
     else
     {
         okay = hvm_virtual_to_linear_addr(
             seg, reg, offset, *reps * bytes_per_rep, access_type,
-            hvmemul_ctxt->ctxt.addr_size, paddr);
+            hvmemul_ctxt->ctxt.addr_size, linear);
     }
 
     if ( okay )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index d93f2ab..03cb24d 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -136,13 +136,13 @@ static struct segment_register *hvm_get_seg_reg(
     return seg_reg;
 }
 
-static int hvm_translate_linear_addr(
+static int hvm_translate_virtual_addr(
     enum x86_segment seg,
     unsigned long offset,
     unsigned int bytes,
     enum hvm_access_type access_type,
     struct sh_emulate_ctxt *sh_ctxt,
-    unsigned long *paddr)
+    unsigned long *linear)
 {
     const struct segment_register *reg;
     int okay;
@@ -152,7 +152,7 @@ static int hvm_translate_linear_addr(
         return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
-        seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, paddr);
+        seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, linear);
 
     if ( !okay )
     {
@@ -183,7 +183,7 @@ hvm_read(enum x86_segment seg,
     unsigned long addr;
     int rc;
 
-    rc = hvm_translate_linear_addr(
+    rc = hvm_translate_virtual_addr(
         seg, offset, bytes, access_type, sh_ctxt, &addr);
     if ( rc || !bytes )
         return rc;
@@ -265,7 +265,7 @@ hvm_emulate_write(enum x86_segment seg,
     if ( seg == x86_seg_ss )
         perfc_incr(shadow_fault_emulate_stack);
 
-    rc = hvm_translate_linear_addr(
+    rc = hvm_translate_virtual_addr(
         seg, offset, bytes, hvm_access_write, sh_ctxt, &addr);
     if ( rc || !bytes )
         return rc;
@@ -291,7 +291,7 @@ hvm_emulate_cmpxchg(enum x86_segment seg,
     if ( bytes > sizeof(long) )
         return X86EMUL_UNHANDLEABLE;
 
-    rc = hvm_translate_linear_addr(
+    rc = hvm_translate_virtual_addr(
         seg, offset, bytes, hvm_access_write, sh_ctxt, &addr);
     if ( rc )
         return rc;
@@ -345,7 +345,7 @@ const struct x86_emulate_ops *shadow_init_emulation(
     /* Attempt to prefetch whole instruction. */
     sh_ctxt->insn_buf_eip = regs->rip;
     sh_ctxt->insn_buf_bytes =
-        (!hvm_translate_linear_addr(
+        (!hvm_translate_virtual_addr(
             x86_seg_cs, regs->rip, sizeof(sh_ctxt->insn_buf),
             hvm_access_insn_fetch, sh_ctxt, &addr) &&
          !hvm_fetch_from_guest_linear(
@@ -374,7 +374,7 @@ void shadow_continue_emulation(struct sh_emulate_ctxt *sh_ctxt,
     {
         /* Prefetch more bytes. */
         sh_ctxt->insn_buf_bytes =
-            (!hvm_translate_linear_addr(
+            (!hvm_translate_virtual_addr(
                 x86_seg_cs, regs->rip, sizeof(sh_ctxt->insn_buf),
                 hvm_access_insn_fetch, sh_ctxt, &addr) &&
              !hvm_fetch_from_guest_linear(
-- 
2.1.4



* [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
  2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
@ 2017-04-05 17:33 ` Andrew Cooper
  2017-04-05 18:55   ` Boris Ostrovsky
  2017-04-05 17:33 ` [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments Andrew Cooper
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:33 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Boris Ostrovsky, Tim Deegan, Julien Grall,
	Suravee Suthikulpanit

hvm_long_mode_enabled() tests for EFER.LMA, which is specifically different to
EFER.LME.
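
A minimal sketch of the distinction, using the architectural EFER bit
positions (illustrative, not Xen code):

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_LME (1ULL <<  8)  /* Long Mode Enable: set by software */
    #define EFER_LMA (1ULL << 10)  /* Long Mode Active: set by the CPU when
                                      paging is enabled while LME is set */

    /* What the renamed predicate tests: LMA, not LME.  LME can be set long
     * before (or without) long mode actually becoming active. */
    static bool long_mode_active(uint64_t efer)
    {
        return efer & EFER_LMA;
    }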

Rename it to match its behaviour, and have it strictly return a boolean value
(although all its callers already use it in implicitly-boolean contexts, so no
functional change).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
CC: Tim Deegan <tim@xen.org>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * Drop now-redundant !!
---
 xen/arch/x86/cpuid.c              |  2 +-
 xen/arch/x86/hvm/emulate.c        |  2 +-
 xen/arch/x86/hvm/hvm.c            | 10 +++++-----
 xen/arch/x86/hvm/svm/svm.c        |  6 +++---
 xen/arch/x86/hvm/vmx/vmx.c        |  6 +++---
 xen/arch/x86/hvm/vmx/vvmx.c       |  8 ++++----
 xen/arch/x86/mm/hap/hap.c         |  8 ++++----
 xen/arch/x86/mm/shadow/common.c   |  4 ++--
 xen/arch/x86/oprofile/backtrace.c |  2 +-
 xen/include/asm-x86/hvm/hvm.h     |  3 +--
 10 files changed, 25 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 1c6a6c6..d359e09 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -911,7 +911,7 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case 0x80000001:
         /* SYSCALL is hidden outside of long mode on Intel. */
         if ( p->x86_vendor == X86_VENDOR_INTEL &&
-             is_hvm_domain(d) && !hvm_long_mode_enabled(v) )
+             is_hvm_domain(d) && !hvm_long_mode_active(v) )
             res->d &= ~cpufeat_mask(X86_FEATURE_SYSCALL);
 
     common_leaf1_adjustments:
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 4073715..3d084ca 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2051,7 +2051,7 @@ void hvm_emulate_init_per_insn(
     unsigned int pfec = PFEC_page_present;
     unsigned long addr;
 
-    if ( hvm_long_mode_enabled(curr) &&
+    if ( hvm_long_mode_active(curr) &&
          hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.l )
         hvmemul_ctxt->ctxt.addr_size = hvmemul_ctxt->ctxt.sp_size = 64;
     else
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index eba6e9d..dbc3b8a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2241,7 +2241,7 @@ int hvm_set_cr0(unsigned long value, bool_t may_defer)
         }
 
         /* When CR0.PG is cleared, LMA is cleared immediately. */
-        if ( hvm_long_mode_enabled(v) )
+        if ( hvm_long_mode_active(v) )
         {
             v->arch.hvm_vcpu.guest_efer &= ~EFER_LMA;
             hvm_update_guest_efer(v);
@@ -2335,7 +2335,7 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
 
     if ( !(value & X86_CR4_PAE) )
     {
-        if ( hvm_long_mode_enabled(v) )
+        if ( hvm_long_mode_active(v) )
         {
             HVM_DBG_LOG(DBG_LEVEL_1, "Guest cleared CR4.PAE while "
                         "EFER.LMA is set");
@@ -2346,7 +2346,7 @@ int hvm_set_cr4(unsigned long value, bool_t may_defer)
     old_cr = v->arch.hvm_vcpu.guest_cr[4];
 
     if ( (value & X86_CR4_PCIDE) && !(old_cr & X86_CR4_PCIDE) &&
-         (!hvm_long_mode_enabled(v) ||
+         (!hvm_long_mode_active(v) ||
           (v->arch.hvm_vcpu.guest_cr[3] & 0xfff)) )
     {
         HVM_DBG_LOG(DBG_LEVEL_1, "Guest attempts to change CR4.PCIDE from "
@@ -3619,7 +3619,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
 
         if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
                                         sizeof(sig), hvm_access_insn_fetch,
-                                        (hvm_long_mode_enabled(cur) &&
+                                        (hvm_long_mode_active(cur) &&
                                          cs->attr.fields.l) ? 64 :
                                         cs->attr.fields.db ? 32 : 16, &addr) &&
              (hvm_fetch_from_guest_linear(sig, addr, sizeof(sig),
@@ -3630,7 +3630,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
             regs->eflags &= ~X86_EFLAGS_RF;
 
             /* Zero the upper 32 bits of %rip if not in 64bit mode. */
-            if ( !(hvm_long_mode_enabled(cur) && cs->attr.fields.l) )
+            if ( !(hvm_long_mode_active(cur) && cs->attr.fields.l) )
                 regs->rip = regs->eip;
 
             add_taint(TAINT_HVM_FEP);
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index b69789b..4d7e49f 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -516,7 +516,7 @@ static int svm_guest_x86_mode(struct vcpu *v)
         return 0;
     if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
         return 1;
-    if ( hvm_long_mode_enabled(v) && likely(vmcb->cs.attr.fields.l) )
+    if ( hvm_long_mode_active(v) && likely(vmcb->cs.attr.fields.l) )
         return 8;
     return (likely(vmcb->cs.attr.fields.db) ? 4 : 2);
 }
@@ -2279,7 +2279,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 
     exit_reason = vmcb->exitcode;
 
-    if ( hvm_long_mode_enabled(v) )
+    if ( hvm_long_mode_active(v) )
         HVMTRACE_ND(VMEXIT64, vcpu_guestmode ? TRC_HVM_NESTEDFLAG : 0,
                     1/*cycles*/, 3, exit_reason,
                     regs->eip, regs->rip >> 32, 0, 0, 0);
@@ -2429,7 +2429,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         {
             if ( trace_will_trace_event(TRC_SHADOW) )
                 break;
-            if ( hvm_long_mode_enabled(v) )
+            if ( hvm_long_mode_active(v) )
                 HVMTRACE_LONG_2D(PF_XEN, regs->error_code, TRC_PAR_LONG(va));
             else
                 HVMTRACE_2D(PF_XEN, regs->error_code, va);
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index d201956..b6526c9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -611,7 +611,7 @@ int vmx_guest_x86_mode(struct vcpu *v)
     if ( unlikely(guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
         return 1;
     __vmread(GUEST_CS_AR_BYTES, &cs_ar_bytes);
-    if ( hvm_long_mode_enabled(v) &&
+    if ( hvm_long_mode_active(v) &&
          likely(cs_ar_bytes & X86_SEG_AR_CS_LM_ACTIVE) )
         return 8;
     return (likely(cs_ar_bytes & X86_SEG_AR_DEF_OP_SIZE) ? 4 : 2);
@@ -3392,7 +3392,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
 
     __vmread(VM_EXIT_REASON, &exit_reason);
 
-    if ( hvm_long_mode_enabled(v) )
+    if ( hvm_long_mode_active(v) )
         HVMTRACE_ND(VMEXIT64, 0, 1/*cycles*/, 3, exit_reason,
                     regs->eip, regs->rip >> 32, 0, 0, 0);
     else
@@ -3632,7 +3632,7 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             {
                 if ( trace_will_trace_event(TRC_SHADOW) )
                     break;
-                if ( hvm_long_mode_enabled(v) )
+                if ( hvm_long_mode_active(v) )
                     HVMTRACE_LONG_2D(PF_XEN, regs->error_code,
                                      TRC_PAR_LONG(exit_qualification) );
                 else
diff --git a/xen/arch/x86/hvm/vmx/vvmx.c b/xen/arch/x86/hvm/vmx/vvmx.c
index 09e4250..936feb6 100644
--- a/xen/arch/x86/hvm/vmx/vvmx.c
+++ b/xen/arch/x86/hvm/vmx/vvmx.c
@@ -392,7 +392,7 @@ static int vmx_inst_check_privilege(struct cpu_user_regs *regs, int vmxop_check)
     else if ( !nvmx_vcpu_in_vmx(v) )
         goto invalid_op;
 
-    if ( vmx_guest_x86_mode(v) < (hvm_long_mode_enabled(v) ? 8 : 2) )
+    if ( vmx_guest_x86_mode(v) < (hvm_long_mode_active(v) ? 8 : 2) )
         goto invalid_op;
     else if ( nestedhvm_vcpu_in_guestmode(v) )
         goto vmexit;
@@ -1154,13 +1154,13 @@ static void virtual_vmentry(struct cpu_user_regs *regs)
     /*
      * EFER handling:
      * hvm_set_efer won't work if CR0.PG = 1, so we change the value
-     * directly to make hvm_long_mode_enabled(v) work in L2.
+     * directly to make hvm_long_mode_active(v) work in L2.
      * An additional update_paging_modes is also needed if
      * there is 32/64 switch. v->arch.hvm_vcpu.guest_efer doesn't
      * need to be saved, since its value on vmexit is determined by
      * L1 exit_controls
      */
-    lm_l1 = !!hvm_long_mode_enabled(v);
+    lm_l1 = hvm_long_mode_active(v);
     lm_l2 = !!(get_vvmcs(v, VM_ENTRY_CONTROLS) & VM_ENTRY_IA32E_MODE);
 
     if ( lm_l2 )
@@ -1359,7 +1359,7 @@ static void virtual_vmexit(struct cpu_user_regs *regs)
     nvcpu->nv_vmexit_pending = 0;
     nvcpu->nv_vmswitch_in_progress = 1;
 
-    lm_l2 = !!hvm_long_mode_enabled(v);
+    lm_l2 = hvm_long_mode_active(v);
     lm_l1 = !!(get_vvmcs(v, VM_EXIT_CONTROLS) & VM_EXIT_IA32E_MODE);
 
     if ( lm_l1 )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index a57b385..c0610c5 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -690,10 +690,10 @@ static void hap_update_cr3(struct vcpu *v, int do_locking)
 const struct paging_mode *
 hap_paging_get_mode(struct vcpu *v)
 {
-    return !hvm_paging_enabled(v)   ? &hap_paging_real_mode :
-        hvm_long_mode_enabled(v) ? &hap_paging_long_mode :
-        hvm_pae_enabled(v)       ? &hap_paging_pae_mode  :
-                                   &hap_paging_protected_mode;
+    return (!hvm_paging_enabled(v)  ? &hap_paging_real_mode :
+            hvm_long_mode_active(v) ? &hap_paging_long_mode :
+            hvm_pae_enabled(v)      ? &hap_paging_pae_mode  :
+                                      &hap_paging_protected_mode);
 }
 
 static void hap_update_paging_modes(struct vcpu *v)
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 03cb24d..14a07dd 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -331,7 +331,7 @@ const struct x86_emulate_ops *shadow_init_emulation(
     creg = hvm_get_seg_reg(x86_seg_cs, sh_ctxt);
 
     /* Work out the emulation mode. */
-    if ( hvm_long_mode_enabled(v) && creg->attr.fields.l )
+    if ( hvm_long_mode_active(v) && creg->attr.fields.l )
     {
         sh_ctxt->ctxt.addr_size = sh_ctxt->ctxt.sp_size = 64;
     }
@@ -2921,7 +2921,7 @@ static void sh_update_paging_modes(struct vcpu *v)
             v->arch.guest_table = d->arch.paging.shadow.unpaged_pagetable;
             v->arch.paging.mode = &SHADOW_INTERNAL_NAME(sh_paging_mode, 2);
         }
-        else if ( hvm_long_mode_enabled(v) )
+        else if ( hvm_long_mode_active(v) )
         {
             // long mode guest...
             v->arch.paging.mode =
diff --git a/xen/arch/x86/oprofile/backtrace.c b/xen/arch/x86/oprofile/backtrace.c
index f0fbb42..316821f 100644
--- a/xen/arch/x86/oprofile/backtrace.c
+++ b/xen/arch/x86/oprofile/backtrace.c
@@ -47,7 +47,7 @@ dump_hypervisor_backtrace(struct vcpu *vcpu, const struct frame_head *head,
 static inline int is_32bit_vcpu(struct vcpu *vcpu)
 {
     if (is_hvm_vcpu(vcpu))
-        return !hvm_long_mode_enabled(vcpu);
+        return !hvm_long_mode_active(vcpu);
     else
         return is_pv_32bit_vcpu(vcpu);
 }
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index c854183..49c8001 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -302,8 +302,7 @@ int hvm_girq_dest_2_vcpu_id(struct domain *d, uint8_t dest, uint8_t dest_mode);
 #define hap_has_1gb (!!(hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_1GB))
 #define hap_has_2mb (!!(hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_2MB))
 
-#define hvm_long_mode_enabled(v) \
-    ((v)->arch.hvm_vcpu.guest_efer & EFER_LMA)
+#define hvm_long_mode_active(v) (!!((v)->arch.hvm_vcpu.guest_efer & EFER_LMA))
 
 enum hvm_intblk
 hvm_interrupt_blocked(struct vcpu *v, struct hvm_intack intack);
-- 
2.1.4



* [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
  2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
  2017-04-05 17:33 ` [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate Andrew Cooper
@ 2017-04-05 17:33 ` Andrew Cooper
  2017-04-06  8:56   ` Jan Beulich
  2017-04-05 17:33 ` [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection() Andrew Cooper
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:33 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Julien Grall, Paul Durrant, Tim Deegan, Jan Beulich

c/s c785f759718 "x86/emul: Prepare to allow use of system segments for memory
references" made alterations to hvm_virtual_to_linear_addr() to allow for the
use of system segments.

However, the determination of which segmentation mode to use was based on the
current address size of the emulation.

In particular, it is wrong for system segment accesses while executing in a
compatibility mode code segment.  When long mode is active, all system
segments have a 64-bit base, and this must not be truncated during the
calculation of the linear address.  (Note that the presence and limit checks
for system segments behave the same, and are already uniformly applied in both
cases.)
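
A sketch of the failure mode being fixed (illustrative, with simplified
types):

    #include <stdint.h>

    /* In long mode, system segment and descriptor-table bases (GDTR, IDTR,
     * LDTR, TR) are 64 bits wide even while executing in a compatibility
     * mode code segment.  Keying the segmentation logic off the 32-bit
     * address size would truncate the computed linear address. */
    static uint64_t system_seg_linear(uint64_t base, uint64_t offset)
    {
        return base + offset;                  /* correct: keep all 64 bits */
        /* wrong: return (uint32_t)(base + offset); */
    }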

Replace the existing addr_size parameter with active_cs, which gets used in
combination with current to work out which segmentation logic to use.

While here, also fix the choice of segmentation logic for vm86 mode, which is
a protected mode facility but uses real mode segmentation.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>

v2: Rework the approach from scratch, following feedback.

This is the same logical bug that caused XSA-196, but luckily hasn't yet been
in a released version of Xen.
---
 xen/arch/x86/hvm/emulate.c      |  6 ++--
 xen/arch/x86/hvm/hvm.c          | 69 +++++++++++++++++++++--------------------
 xen/arch/x86/mm/shadow/common.c |  3 +-
 xen/include/asm-x86/hvm/hvm.h   |  2 +-
 4 files changed, 41 insertions(+), 39 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 3d084ca..87ca801 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -549,7 +549,7 @@ static int hvmemul_virtual_to_linear(
         okay = hvm_virtual_to_linear_addr(
             seg, reg, offset - (*reps - 1) * bytes_per_rep,
             *reps * bytes_per_rep, access_type,
-            hvmemul_ctxt->ctxt.addr_size, linear);
+            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);
         *linear += (*reps - 1) * bytes_per_rep;
         if ( hvmemul_ctxt->ctxt.addr_size != 64 )
             *linear = (uint32_t)*linear;
@@ -558,7 +558,7 @@ static int hvmemul_virtual_to_linear(
     {
         okay = hvm_virtual_to_linear_addr(
             seg, reg, offset, *reps * bytes_per_rep, access_type,
-            hvmemul_ctxt->ctxt.addr_size, linear);
+            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);
     }
 
     if ( okay )
@@ -2075,7 +2075,7 @@ void hvm_emulate_init_per_insn(
                                         hvmemul_ctxt->insn_buf_eip,
                                         sizeof(hvmemul_ctxt->insn_buf),
                                         hvm_access_insn_fetch,
-                                        hvmemul_ctxt->ctxt.addr_size,
+                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],
                                         &addr) &&
              hvm_fetch_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
                                          sizeof(hvmemul_ctxt->insn_buf),
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index dbc3b8a..fdf13db 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2394,9 +2394,10 @@ bool_t hvm_virtual_to_linear_addr(
     unsigned long offset,
     unsigned int bytes,
     enum hvm_access_type access_type,
-    unsigned int addr_size,
+    const struct segment_register *active_cs,
     unsigned long *linear_addr)
 {
+    const struct vcpu *curr = current;
     unsigned long addr = offset, last_byte;
     bool_t okay = 0;
 
@@ -2408,10 +2409,11 @@ bool_t hvm_virtual_to_linear_addr(
      */
     ASSERT(seg < x86_seg_none);
 
-    if ( !(current->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) )
+    if ( !(curr->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) ||
+         (guest_cpu_user_regs()->eflags & X86_EFLAGS_VM) )
     {
         /*
-         * REAL MODE: Don't bother with segment access checks.
+         * REAL/VM86 MODE: Don't bother with segment access checks.
          * Certain of them are not done in native real mode anyway.
          */
         addr = (uint32_t)(addr + reg->base);
@@ -2419,10 +2421,33 @@ bool_t hvm_virtual_to_linear_addr(
         if ( last_byte < addr )
             goto out;
     }
-    else if ( addr_size != 64 )
+    else if ( hvm_long_mode_active(curr) &&
+              (is_x86_system_segment(seg) || active_cs->attr.fields.l) )
     {
         /*
-         * COMPATIBILITY MODE: Apply segment checks and add base.
+         * User segments are always treated as present.  System segment may
+         * not be, and also incur limit checks.
+         */
+        if ( is_x86_system_segment(seg) &&
+             (!reg->attr.fields.p || (offset + bytes - !!bytes) > reg->limit) )
+            goto out;
+
+        /*
+         * LONG MODE: FS, GS and system segments: add segment base. All
+         * addresses must be canonical.
+         */
+        if ( seg >= x86_seg_fs )
+            addr += reg->base;
+
+        last_byte = addr + bytes - !!bytes;
+        if ( !is_canonical_address(addr) || last_byte < addr ||
+             !is_canonical_address(last_byte) )
+            goto out;
+    }
+    else
+    {
+        /*
+         * PROTECTED/COMPATIBILITY MODE: Apply segment checks and add base.
          */
 
         /*
@@ -2469,28 +2494,6 @@ bool_t hvm_virtual_to_linear_addr(
         else if ( (last_byte > reg->limit) || (last_byte < offset) )
             goto out; /* last byte is beyond limit or wraps 0xFFFFFFFF */
     }
-    else
-    {
-        /*
-         * User segments are always treated as present.  System segment may
-         * not be, and also incur limit checks.
-         */
-        if ( is_x86_system_segment(seg) &&
-             (!reg->attr.fields.p || (offset + bytes - !!bytes) > reg->limit) )
-            goto out;
-
-        /*
-         * LONG MODE: FS, GS and system segments: add segment base. All
-         * addresses must be canonical.
-         */
-        if ( seg >= x86_seg_fs )
-            addr += reg->base;
-
-        last_byte = addr + bytes - !!bytes;
-        if ( !is_canonical_address(addr) || last_byte < addr ||
-             !is_canonical_address(last_byte) )
-            goto out;
-    }
 
     /* All checks ok. */
     okay = 1;
@@ -3026,11 +3029,12 @@ void hvm_task_switch(
 
     if ( errcode >= 0 )
     {
+        struct segment_register cs;
         unsigned long linear_addr;
         unsigned int opsz, sp;
 
-        hvm_get_segment_register(v, x86_seg_cs, &segr);
-        opsz = segr.attr.fields.db ? 4 : 2;
+        hvm_get_segment_register(v, x86_seg_cs, &cs);
+        opsz = cs.attr.fields.db ? 4 : 2;
         hvm_get_segment_register(v, x86_seg_ss, &segr);
         if ( segr.attr.fields.db )
             sp = regs->esp -= opsz;
@@ -3038,8 +3042,7 @@ void hvm_task_switch(
             sp = regs->sp -= opsz;
         if ( hvm_virtual_to_linear_addr(x86_seg_ss, &segr, sp, opsz,
                                         hvm_access_write,
-                                        16 << segr.attr.fields.db,
-                                        &linear_addr) )
+                                        &cs, &linear_addr) )
         {
             rc = hvm_copy_to_guest_linear(linear_addr, &errcode, opsz, 0,
                                           &pfinfo);
@@ -3619,9 +3622,7 @@ void hvm_ud_intercept(struct cpu_user_regs *regs)
 
         if ( hvm_virtual_to_linear_addr(x86_seg_cs, cs, regs->rip,
                                         sizeof(sig), hvm_access_insn_fetch,
-                                        (hvm_long_mode_active(cur) &&
-                                         cs->attr.fields.l) ? 64 :
-                                        cs->attr.fields.db ? 32 : 16, &addr) &&
+                                        cs, &addr) &&
              (hvm_fetch_from_guest_linear(sig, addr, sizeof(sig),
                                           walk, NULL) == HVMCOPY_okay) &&
              (memcmp(sig, "\xf\xbxen", sizeof(sig)) == 0) )
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 14a07dd..574337c 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -152,7 +152,8 @@ static int hvm_translate_virtual_addr(
         return -PTR_ERR(reg);
 
     okay = hvm_virtual_to_linear_addr(
-        seg, reg, offset, bytes, access_type, sh_ctxt->ctxt.addr_size, linear);
+        seg, reg, offset, bytes, access_type,
+        hvm_get_seg_reg(x86_seg_cs, sh_ctxt), linear);
 
     if ( !okay )
     {
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 49c8001..97f9771 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -472,7 +472,7 @@ bool_t hvm_virtual_to_linear_addr(
     unsigned long offset,
     unsigned int bytes,
     enum hvm_access_type access_type,
-    unsigned int addr_size,
+    const struct segment_register *active_cs,
     unsigned long *linear_addr);
 
 void *hvm_map_guest_frame_rw(unsigned long gfn, bool_t permanent,
-- 
2.1.4



* [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection()
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
                   ` (2 preceding siblings ...)
  2017-04-05 17:33 ` [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments Andrew Cooper
@ 2017-04-05 17:33 ` Andrew Cooper
  2017-04-05 18:58   ` Boris Ostrovsky
  2017-04-06  9:00   ` Jan Beulich
  2017-04-05 17:33 ` [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure Andrew Cooper
  2017-04-05 17:33 ` [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context Andrew Cooper
  5 siblings, 2 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:33 UTC (permalink / raw)
  To: Xen-devel
  Cc: Andrew Cooper, Boris Ostrovsky, Julien Grall,
	Suravee Suthikulpanit, Jan Beulich

Software events require emulation in some cases on AMD hardware.  Introduce
svm_emul_swint_injection() to perform this emulation if necessary in
svm_inject_event(), which copes with events from any source, rather than
just those coming from x86_emulate().

This logic mirrors inject_swint() in the x86 instruction emulator.
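
For reference, a sketch of the selector error code construction used below
(constant names mirror the patch; illustrative, not the patch itself):

    #include <stdbool.h>
    #include <stdint.h>

    #define X86_XEC_EXT (1u << 0)   /* external event */
    #define X86_XEC_IDT (1u << 1)   /* index refers to the IDT */

    /* Bits [15:3] carry the vector.  ICEBP sets EXT despite being an
     * instruction, which is why it is special-cased below. */
    static uint16_t idt_error_code(uint8_t vector, bool is_icebp)
    {
        return (vector << 3) | X86_XEC_IDT | (is_icebp ? X86_XEC_EXT : 0);
    }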

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * Rebase over the logical changes in the previous patch.
 * Use vmcb_get_*() rather than opencoding the accesses.
---
 xen/arch/x86/hvm/svm/svm.c | 133 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 133 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4d7e49f..d47fabe 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1183,6 +1183,121 @@ static void svm_vcpu_destroy(struct vcpu *v)
     passive_domain_destroy(v);
 }
 
+/*
+ * Emulate enough of interrupt injection to cover the DPL check (omitted by
+ * hardware), and to work out whether it is safe to move %rip forwards for
+ * architectural trap vs fault semantics in the exception frame (which
+ * hardware won't cope with).
+ *
+ * The event parameter will be modified to a fault if necessary.
+ */
+static void svm_emul_swint_injection(struct x86_event *event)
+{
+    struct vcpu *curr = current;
+    const struct vmcb_struct *vmcb = curr->arch.hvm_svm.vmcb;
+    const struct cpu_user_regs *regs = guest_cpu_user_regs();
+    unsigned int trap = event->vector, type = event->type;
+    unsigned int fault = TRAP_gp_fault, ec = 0;
+    pagefault_info_t pfinfo;
+    struct segment_register cs, idtr;
+    unsigned int idte_size, idte_offset;
+    unsigned long idte_linear_addr;
+    struct { uint32_t a, b, c, d; } idte = {};
+    bool lm = vmcb_get_efer(vmcb) & EFER_LMA;
+    int rc;
+
+    if ( !(vmcb_get_cr0(vmcb) & X86_CR0_PE) )
+        goto raise_exception; /* TODO: support real-mode injection? */
+
+    idte_size   = lm ? 16 : 8;
+    idte_offset = trap * idte_size;
+
+    /* ICEBP sets the External Event bit despite being an instruction. */
+    ec = (trap << 3) | X86_XEC_IDT |
+        (type == X86_EVENTTYPE_PRI_SW_EXCEPTION ? X86_XEC_EXT : 0);
+
+    /*
+     * TODO: This does not cover the v8086 mode with CR4.VME case
+     * correctly, but falls on the safe side from the point of view of a
+     * 32bit OS.  Someone with many TUITs can see about reading the TSS
+     * Software Interrupt Redirection bitmap.
+     */
+    if ( (regs->eflags & X86_EFLAGS_VM) &&
+         MASK_EXTR(regs->eflags, X86_EFLAGS_IOPL) != 3 )
+        goto raise_exception;
+
+    /*
+     * Read all 8/16 bytes so the idtr limit check is applied properly to
+     * this entry, even though we don't look at all the words read.
+     */
+    hvm_get_segment_register(curr, x86_seg_cs, &cs);
+    hvm_get_segment_register(curr, x86_seg_idtr, &idtr);
+    if ( !hvm_virtual_to_linear_addr(x86_seg_idtr, &idtr, idte_offset,
+                                     idte_size, hvm_access_read,
+                                     &cs, &idte_linear_addr) )
+        goto raise_exception;
+
+    rc = hvm_copy_from_guest_linear(&idte, idte_linear_addr, idte_size,
+                                    PFEC_implicit, &pfinfo);
+    if ( rc )
+    {
+        if ( rc == HVMCOPY_bad_gva_to_gfn )
+        {
+            fault = TRAP_page_fault;
+            ec = pfinfo.ec;
+            event->cr2 = pfinfo.linear;
+        }
+
+        goto raise_exception;
+    }
+
+    /* This must be an interrupt, trap, or task gate. */
+    switch ( (idte.b >> 8) & 0x1f )
+    {
+    case SYS_DESC_irq_gate:
+    case SYS_DESC_trap_gate:
+        break;
+    case SYS_DESC_irq_gate16:
+    case SYS_DESC_trap_gate16:
+    case SYS_DESC_task_gate:
+        if ( !lm )
+            break;
+        /* fall through */
+    default:
+        goto raise_exception;
+    }
+
+    /* The 64-bit high half's type must be zero. */
+    if ( idte.d & 0x1f00 )
+        goto raise_exception;
+
+    /* ICEBP counts as a hardware event, and bypasses the dpl check. */
+    if ( type != X86_EVENTTYPE_PRI_SW_EXCEPTION &&
+         vmcb_get_cpl(vmcb) > ((idte.b >> 13) & 3) )
+        goto raise_exception;
+
+    /* Is this entry present? */
+    if ( !(idte.b & (1u << 15)) )
+    {
+        fault = TRAP_no_segment;
+        goto raise_exception;
+    }
+
+    /*
+     * Any further fault during injection will cause a double fault.  It
+     * is fine to leave this up to hardware, and software won't be in a
+     * position to care about the architectural correctness of %rip in the
+     * exception frame.
+     */
+    return;
+
+ raise_exception:
+    event->vector = fault;
+    event->type = X86_EVENTTYPE_HW_EXCEPTION;
+    event->insn_len = 0;
+    event->error_code = ec;
+}
+
 static void svm_inject_event(const struct x86_event *event)
 {
     struct vcpu *curr = current;
@@ -1191,6 +1306,24 @@ static void svm_inject_event(const struct x86_event *event)
     struct x86_event _event = *event;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
 
+    /*
+     * For hardware lacking NRips support, and always for ICEBP instructions,
+     * the processor requires extra help to deliver software events.
+     *
+     * Xen must emulate enough of the event injection to be sure that a
+     * further fault shouldn't occur during delivery.  This covers the fact
+     * that hardware doesn't perform DPL checking on injection.
+     *
+     * Also, it accounts for proper positioning of %rip for an event with trap
+     * semantics (where %rip should point after the instruction) which suffers
+     * a fault during injection (at which point %rip should point at the
+     * instruction).
+     */
+    if ( event->type == X86_EVENTTYPE_PRI_SW_EXCEPTION ||
+         (!cpu_has_svm_nrips && (event->type == X86_EVENTTYPE_SW_INTERRUPT ||
+                                 event->type == X86_EVENTTYPE_SW_EXCEPTION)) )
+        svm_emul_swint_injection(&_event);
+
     switch ( _event.vector )
     {
     case TRAP_debug:
-- 
2.1.4



* [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
                   ` (3 preceding siblings ...)
  2017-04-05 17:33 ` [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection() Andrew Cooper
@ 2017-04-05 17:33 ` Andrew Cooper
  2017-04-06  7:30   ` Jan Beulich
  2017-04-05 17:33 ` [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context Andrew Cooper
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:33 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Tim Deegan

With the SVM injection logic capable of doing its own emulation, there is no
need for this hardware-specific assistance in the common emulator.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * imm8 -> imm1
---
 tools/fuzz/x86_instruction_emulator/fuzz-emul.c |  18 +--
 xen/arch/x86/hvm/emulate.c                      |   7 -
 xen/arch/x86/mm.c                               |   2 -
 xen/arch/x86/mm/shadow/common.c                 |   1 -
 xen/arch/x86/x86_emulate/x86_emulate.c          | 187 ++++--------------------
 xen/arch/x86/x86_emulate/x86_emulate.h          |  53 -------
 6 files changed, 30 insertions(+), 238 deletions(-)

diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index 890642c..8488816 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -536,8 +536,7 @@ enum {
     HOOK_put_fpu,
     HOOK_invlpg,
     HOOK_vmfunc,
-    OPTION_swint_emulation, /* Two bits */
-    CANONICALIZE_rip = OPTION_swint_emulation + 2,
+    CANONICALIZE_rip,
     CANONICALIZE_rsp,
     CANONICALIZE_rbp
 };
@@ -577,19 +576,6 @@ static void disable_hooks(void)
     MAYBE_DISABLE_HOOK(invlpg);
 }
 
-static void set_swint_support(struct x86_emulate_ctxt *ctxt)
-{
-    unsigned int swint_opt = (input.options >> OPTION_swint_emulation) & 3;
-    static const enum x86_swint_emulation map[4] = {
-        x86_swint_emulate_none,
-        x86_swint_emulate_none,
-        x86_swint_emulate_icebp,
-        x86_swint_emulate_all
-    };
-
-    ctxt->swint_emulate = map[swint_opt];
-}
-
 /*
  * Constrain input to architecturally-possible states where
  * the emulator relies on these
@@ -693,8 +679,6 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
 
     disable_hooks();
 
-    set_swint_support(&ctxt);
-
     do {
         /* FIXME: Until we actually implement SIGFPE handling properly */
         setup_fpu_exception_handler();
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 87ca801..39e4319 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2033,13 +2033,6 @@ void hvm_emulate_init_once(
     hvmemul_ctxt->ctxt.regs = regs;
     hvmemul_ctxt->ctxt.vendor = curr->domain->arch.cpuid->x86_vendor;
     hvmemul_ctxt->ctxt.force_writeback = true;
-
-    if ( cpu_has_vmx )
-        hvmemul_ctxt->ctxt.swint_emulate = x86_swint_emulate_none;
-    else if ( cpu_has_svm_nrips )
-        hvmemul_ctxt->ctxt.swint_emulate = x86_swint_emulate_icebp;
-    else
-        hvmemul_ctxt->ctxt.swint_emulate = x86_swint_emulate_all;
 }
 
 void hvm_emulate_init_per_insn(
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index be4e308..3918a37 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5412,7 +5412,6 @@ int ptwr_do_page_fault(struct vcpu *v, unsigned long addr,
             .vendor = d->arch.cpuid->x86_vendor,
             .addr_size = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
             .sp_size   = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
-            .swint_emulate = x86_swint_emulate_none,
         },
     };
     int rc;
@@ -5567,7 +5566,6 @@ int mmio_ro_do_page_fault(struct vcpu *v, unsigned long addr,
         .vendor = v->domain->arch.cpuid->x86_vendor,
         .addr_size = addr_size,
         .sp_size = addr_size,
-        .swint_emulate = x86_swint_emulate_none,
         .data = &mmio_ro_ctxt
     };
     int rc;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 574337c..736ceaa 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -326,7 +326,6 @@ const struct x86_emulate_ops *shadow_init_emulation(
 
     sh_ctxt->ctxt.regs = regs;
     sh_ctxt->ctxt.vendor = v->domain->arch.cpuid->x86_vendor;
-    sh_ctxt->ctxt.swint_emulate = x86_swint_emulate_none;
 
     /* Segment cache initialisation. Primed with CS. */
     creg = hvm_get_seg_reg(x86_seg_cs, sh_ctxt);
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 7af8a42..8c4e885 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1999,142 +1999,6 @@ static bool umip_active(struct x86_emulate_ctxt *ctxt,
            (cr4 & X86_CR4_UMIP);
 }
 
-/* Inject a software interrupt/exception, emulating if needed. */
-static int inject_swint(enum x86_swint_type type,
-                        uint8_t vector, uint8_t insn_len,
-                        struct x86_emulate_ctxt *ctxt,
-                        const struct x86_emulate_ops *ops)
-{
-    int rc, error_code, fault_type = EXC_GP;
-
-    /*
-     * Without hardware support, injecting software interrupts/exceptions is
-     * problematic.
-     *
-     * All software methods of generating exceptions (other than BOUND) yield
-     * traps, so eip in the exception frame needs to point after the
-     * instruction, not at it.
-     *
-     * However, if injecting it as a hardware exception causes a fault during
-     * delivery, our adjustment of eip will cause the fault to be reported
-     * after the faulting instruction, not pointing to it.
-     *
-     * Therefore, eip can only safely be wound forwards if we are certain that
-     * injecting an equivalent hardware exception won't fault, which means
-     * emulating everything the processor would do on a control transfer.
-     *
-     * However, emulation of complete control transfers is very complicated.
-     * All we care about is that guest userspace cannot avoid the descriptor
-     * DPL check by using the Xen emulator, and successfully invoke DPL=0
-     * descriptors.
-     *
-     * Any OS which would further fault during injection is going to receive a
-     * double fault anyway, and won't be in a position to care that the
-     * faulting eip is incorrect.
-     */
-
-    if ( (ctxt->swint_emulate == x86_swint_emulate_all) ||
-         ((ctxt->swint_emulate == x86_swint_emulate_icebp) &&
-          (type == x86_swint_icebp)) )
-    {
-        if ( !in_realmode(ctxt, ops) )
-        {
-            unsigned int idte_size, idte_offset;
-            struct { uint32_t a, b, c, d; } idte = {};
-            int lm = in_longmode(ctxt, ops);
-
-            if ( lm < 0 )
-                return X86EMUL_UNHANDLEABLE;
-
-            idte_size = lm ? 16 : 8;
-            idte_offset = vector * idte_size;
-
-            /* icebp sets the External Event bit despite being an instruction. */
-            error_code = (vector << 3) | ECODE_IDT |
-                (type == x86_swint_icebp ? ECODE_EXT : 0);
-
-            /*
-             * TODO - this does not cover the v8086 mode with CR4.VME case
-             * correctly, but falls on the safe side from the point of view of
-             * a 32bit OS.  Someone with many TUITs can see about reading the
-             * TSS Software Interrupt Redirection bitmap.
-             */
-            if ( (ctxt->regs->eflags & X86_EFLAGS_VM) &&
-                 ((ctxt->regs->eflags & X86_EFLAGS_IOPL) != X86_EFLAGS_IOPL) )
-                goto raise_exn;
-
-            /*
-             * Read all 8/16 bytes so the idtr limit check is applied properly
-             * to this entry, even though we only end up looking at the 2nd
-             * word.
-             */
-            switch ( rc = ops->read(x86_seg_idtr, idte_offset,
-                                    &idte, idte_size, ctxt) )
-            {
-            case X86EMUL_OKAY:
-                break;
-
-            case X86EMUL_EXCEPTION:
-                if ( !ctxt->event_pending )
-                    goto raise_exn;
-                /* fallthrough */
-
-            default:
-                return rc;
-            }
-
-            /* This must be an interrupt, trap, or task gate. */
-#ifdef __XEN__
-            switch ( (idte.b >> 8) & 0x1f )
-            {
-            case SYS_DESC_irq_gate:
-            case SYS_DESC_trap_gate:
-                break;
-            case SYS_DESC_irq_gate16:
-            case SYS_DESC_trap_gate16:
-            case SYS_DESC_task_gate:
-                if ( !lm )
-                    break;
-                /* fall through */
-            default:
-                goto raise_exn;
-            }
-#endif
-
-            /* The 64-bit high half's type must be zero. */
-            if ( idte.d & 0x1f00 )
-                goto raise_exn;
-
-            /* icebp counts as a hardware event, and bypasses the dpl check. */
-            if ( type != x86_swint_icebp )
-            {
-                int cpl = get_cpl(ctxt, ops);
-
-                fail_if(cpl < 0);
-
-                if ( cpl > ((idte.b >> 13) & 3) )
-                    goto raise_exn;
-            }
-
-            /* Is this entry present? */
-            if ( !(idte.b & (1u << 15)) )
-            {
-                fault_type = EXC_NP;
-                goto raise_exn;
-            }
-        }
-    }
-
-    x86_emul_software_event(type, vector, insn_len, ctxt);
-    rc = X86EMUL_OKAY;
-
- done:
-    return rc;
-
- raise_exn:
-    generate_exception(fault_type, error_code);
-}
-
 static void adjust_bnd(struct x86_emulate_ctxt *ctxt,
                        const struct x86_emulate_ops *ops, enum vex_pfx pfx)
 {
@@ -3101,7 +2965,6 @@ x86_emulate(
     struct operand src = { .reg = PTR_POISON };
     struct operand dst = { .reg = PTR_POISON };
     unsigned long cr4;
-    enum x86_swint_type swint_type;
     struct fpu_insn_ctxt fic = { .type = X86EMUL_FPU_none, .exn_raised = -1 };
     struct x86_emulate_stub stub = {};
     DECLARE_ALIGNED(mmval_t, mmval);
@@ -4103,25 +3966,38 @@ x86_emulate(
             goto done;
         break;
 
-    case 0xcc: /* int3 */
-        src.val = EXC_BP;
-        swint_type = x86_swint_int3;
-        goto swint;
-
-    case 0xcd: /* int imm8 */
-        swint_type = x86_swint_int;
-    swint:
-        rc = inject_swint(swint_type, (uint8_t)src.val,
-                          _regs.r(ip) - ctxt->regs->r(ip),
-                          ctxt, ops) ? : X86EMUL_EXCEPTION;
-        goto done;
-
     case 0xce: /* into */
         if ( !(_regs.eflags & X86_EFLAGS_OF) )
             break;
-        src.val = EXC_OF;
-        swint_type = x86_swint_into;
-        goto swint;
+        /* Fallthrough */
+    case 0xcc: /* int3 */
+    case 0xcd: /* int imm1 */
+    case 0xf1: /* int1 (icebp) */
+        ASSERT(!ctxt->event_pending);
+        switch ( ctxt->opcode )
+        {
+        case 0xcc: /* int3 */
+            ctxt->event.vector = EXC_BP;
+            ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
+            break;
+        case 0xcd: /* int imm1 */
+            ctxt->event.vector = src.val;
+            ctxt->event.type = X86_EVENTTYPE_SW_INTERRUPT;
+            break;
+        case 0xce: /* into */
+            ctxt->event.vector = EXC_OF;
+            ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
+            break;
+        case 0xf1: /* icebp */
+            ctxt->event.vector = EXC_DB;
+            ctxt->event.type = X86_EVENTTYPE_PRI_SW_EXCEPTION;
+            break;
+        }
+        ctxt->event.error_code = X86_EVENT_NO_EC;
+        ctxt->event.insn_len = _regs.r(ip) - ctxt->regs->r(ip);
+        ctxt->event_pending = true;
+        rc = X86EMUL_EXCEPTION;
+        goto done;
 
     case 0xcf: /* iret */ {
         unsigned long sel, eip, eflags;
@@ -4782,11 +4658,6 @@ x86_emulate(
             goto done;
         break;
 
-    case 0xf1: /* int1 (icebp) */
-        src.val = EXC_DB;
-        swint_type = x86_swint_icebp;
-        goto swint;
-
     case 0xf4: /* hlt */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
         ctxt->retire.hlt = true;
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 9c5fcde..215adf6 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -60,27 +60,6 @@ static inline bool is_x86_system_segment(enum x86_segment seg)
     return seg >= x86_seg_tr && seg < x86_seg_none;
 }
 
-/* Classification of the types of software generated interrupts/exceptions. */
-enum x86_swint_type {
-    x86_swint_icebp, /* 0xf1 */
-    x86_swint_int3,  /* 0xcc */
-    x86_swint_into,  /* 0xce */
-    x86_swint_int,   /* 0xcd $n */
-};
-
-/*
- * How much help is required with software event injection?
- *
- * All software events return from x86_emulate() with X86EMUL_EXCEPTION and
- * fault-like semantics.  This just controls whether the emulator performs
- * presence/dpl/etc checks and possibly raises exceptions instead.
- */
-enum x86_swint_emulation {
-    x86_swint_emulate_none, /* Hardware supports all software injection properly */
-    x86_swint_emulate_icebp,/* Help needed with `icebp` (0xf1) */
-    x86_swint_emulate_all,  /* Help needed with all software events */
-};
-
 /*
  * x86 event types. This enumeration is valid for:
  *  Intel VMX: {VM_ENTRY,VM_EXIT,IDT_VECTORING}_INTR_INFO[10:8]
@@ -472,9 +451,6 @@ struct x86_emulate_ctxt
      * Input-only state:
      */
 
-    /* Software event injection support. */
-    enum x86_swint_emulation swint_emulate;
-
     /* CPU vendor (X86_VENDOR_UNKNOWN for "don't care") */
     unsigned char vendor;
 
@@ -699,35 +675,6 @@ static inline void x86_emul_pagefault(
     ctxt->event_pending = true;
 }
 
-static inline void x86_emul_software_event(
-    enum x86_swint_type type, uint8_t vector, uint8_t insn_len,
-    struct x86_emulate_ctxt *ctxt)
-{
-    ASSERT(!ctxt->event_pending);
-
-    switch ( type )
-    {
-    case x86_swint_icebp:
-        ctxt->event.type = X86_EVENTTYPE_PRI_SW_EXCEPTION;
-        break;
-
-    case x86_swint_int3:
-    case x86_swint_into:
-        ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
-        break;
-
-    case x86_swint_int:
-        ctxt->event.type = X86_EVENTTYPE_SW_INTERRUPT;
-        break;
-    }
-
-    ctxt->event.vector = vector;
-    ctxt->event.error_code = X86_EVENT_NO_EC;
-    ctxt->event.insn_len = insn_len;
-
-    ctxt->event_pending = true;
-}
-
 static inline void x86_emul_reset_event(struct x86_emulate_ctxt *ctxt)
 {
     ctxt->event_pending = false;
-- 
2.1.4



* [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context
  2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
                   ` (4 preceding siblings ...)
  2017-04-05 17:33 ` [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure Andrew Cooper
@ 2017-04-05 17:33 ` Andrew Cooper
  2017-04-06  9:08   ` Jan Beulich
  5 siblings, 1 reply; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 17:33 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Julien Grall, Tim Deegan, Jan Beulich

Long mode (or not) influences emulation behaviour in a number of cases.
Instead of reusing the ->read_msr() hook to obtain EFER.LMA, require callers
to provide it directly.

This reduces each long mode check during emulation to a simple boolean read,
removing the embedded MSR reads.  It also allows for the removal of a local
variable in the sysenter emulation block, and removes a latent bug in the
syscall emulation block where rc transiently holds a non-X86EMUL_* constant.
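
The caller-side pattern reduces to a single assignment; a fragment mirroring
the hvm_emulate_init_per_insn() hunk below:

    /* Seed LMA once per emulation, instead of having the emulator fetch
     * EFER through the ->read_msr() hook on every long mode check. */
    hvmemul_ctxt->ctxt.lma = hvm_long_mode_active(curr);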

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Tim Deegan <tim@xen.org>
CC: Julien Grall <julien.grall@arm.com>

v2:
 * Move lma to the in/out section of x86_emulate_ctxt
 * Replace one 0 with false
---
 tools/fuzz/x86_instruction_emulator/fuzz-emul.c |  2 +
 tools/tests/x86_emulator/test_x86_emulator.c    |  4 ++
 xen/arch/x86/hvm/emulate.c                      |  4 +-
 xen/arch/x86/mm.c                               |  2 +
 xen/arch/x86/mm/shadow/common.c                 |  5 +--
 xen/arch/x86/traps.c                            |  1 +
 xen/arch/x86/x86_emulate/x86_emulate.c          | 51 ++++++-------------------
 xen/arch/x86/x86_emulate/x86_emulate.h          |  3 ++
 8 files changed, 29 insertions(+), 43 deletions(-)

diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index 8488816..65c5a3b 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -484,6 +484,8 @@ static bool in_longmode(struct x86_emulate_ctxt *ctxt)
 
 static void set_sizes(struct x86_emulate_ctxt *ctxt)
 {
+    ctxt->lma = long_mode_active(ctxt);
+
     if ( in_longmode(ctxt) )
         ctxt->addr_size = ctxt->sp_size = 64;
     else
diff --git a/tools/tests/x86_emulator/test_x86_emulator.c b/tools/tests/x86_emulator/test_x86_emulator.c
index 5be8ddc..efeb175 100644
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -319,6 +319,7 @@ int main(int argc, char **argv)
     ctxt.regs = &regs;
     ctxt.force_writeback = 0;
     ctxt.vendor    = X86_VENDOR_UNKNOWN;
+    ctxt.lma       = sizeof(void *) == 8;
     ctxt.addr_size = 8 * sizeof(void *);
     ctxt.sp_size   = 8 * sizeof(void *);
 
@@ -2922,6 +2923,7 @@ int main(int argc, char **argv)
     {
         decl_insn(vzeroupper);
 
+        ctxt.lma = false;
         ctxt.sp_size = ctxt.addr_size = 32;
 
         asm volatile ( "vxorps %xmm2, %xmm2, %xmm3\n"
@@ -2949,6 +2951,7 @@ int main(int argc, char **argv)
             goto fail;
         printf("okay\n");
 
+        ctxt.lma = true;
         ctxt.sp_size = ctxt.addr_size = 64;
     }
     else
@@ -3001,6 +3004,7 @@ int main(int argc, char **argv)
             continue;
 
         memcpy(res, blobs[j].code, blobs[j].size);
+        ctxt.lma = blobs[j].bitness == 64;
         ctxt.addr_size = ctxt.sp_size = blobs[j].bitness;
 
         if ( ctxt.addr_size == sizeof(void *) * CHAR_BIT )
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 39e4319..a4918e1 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2044,7 +2044,9 @@ void hvm_emulate_init_per_insn(
     unsigned int pfec = PFEC_page_present;
     unsigned long addr;
 
-    if ( hvm_long_mode_active(curr) &&
+    hvmemul_ctxt->ctxt.lma = hvm_long_mode_active(curr);
+
+    if ( hvmemul_ctxt->ctxt.lma &&
          hvmemul_ctxt->seg_reg[x86_seg_cs].attr.fields.l )
         hvmemul_ctxt->ctxt.addr_size = hvmemul_ctxt->ctxt.sp_size = 64;
     else
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 3918a37..d010aa3 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5412,6 +5412,7 @@ int ptwr_do_page_fault(struct vcpu *v, unsigned long addr,
             .vendor = d->arch.cpuid->x86_vendor,
             .addr_size = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
             .sp_size   = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
+            .lma = true,
         },
     };
     int rc;
@@ -5566,6 +5567,7 @@ int mmio_ro_do_page_fault(struct vcpu *v, unsigned long addr,
         .vendor = v->domain->arch.cpuid->x86_vendor,
         .addr_size = addr_size,
         .sp_size = addr_size,
+        .lma = true,
         .data = &mmio_ro_ctxt
     };
     int rc;
diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 736ceaa..d432198 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -326,15 +326,14 @@ const struct x86_emulate_ops *shadow_init_emulation(
 
     sh_ctxt->ctxt.regs = regs;
     sh_ctxt->ctxt.vendor = v->domain->arch.cpuid->x86_vendor;
+    sh_ctxt->ctxt.lma = hvm_long_mode_active(v);
 
     /* Segment cache initialisation. Primed with CS. */
     creg = hvm_get_seg_reg(x86_seg_cs, sh_ctxt);
 
     /* Work out the emulation mode. */
-    if ( hvm_long_mode_active(v) && creg->attr.fields.l )
-    {
+    if ( sh_ctxt->ctxt.lma && creg->attr.fields.l )
         sh_ctxt->ctxt.addr_size = sh_ctxt->ctxt.sp_size = 64;
-    }
     else
     {
         sreg = hvm_get_seg_reg(x86_seg_ss, sh_ctxt);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 0d54baf..09dc2f1 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2966,6 +2966,7 @@ static int emulate_privileged_op(struct cpu_user_regs *regs)
     struct priv_op_ctxt ctxt = {
         .ctxt.regs = regs,
         .ctxt.vendor = currd->arch.cpuid->x86_vendor,
+        .ctxt.lma = true,
     };
     int rc;
     unsigned int eflags, ar;
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 8c4e885..1d5a698 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -968,11 +968,10 @@ do {                                                                    \
 
 #define validate_far_branch(cs, ip) ({                                  \
     if ( sizeof(ip) <= 4 ) {                                            \
-        ASSERT(in_longmode(ctxt, ops) <= 0);                            \
+        ASSERT(!ctxt->lma);                                             \
         generate_exception_if((ip) > (cs)->limit, EXC_GP, 0);           \
     } else                                                              \
-        generate_exception_if(in_longmode(ctxt, ops) &&                 \
-                              (cs)->attr.fields.l                       \
+        generate_exception_if(ctxt->lma && (cs)->attr.fields.l          \
                               ? !is_canonical_address(ip)               \
                               : (ip) > (cs)->limit, EXC_GP, 0);         \
 })
@@ -1630,20 +1629,6 @@ static bool vcpu_has(
 #endif
 
 static int
-in_longmode(
-    struct x86_emulate_ctxt *ctxt,
-    const struct x86_emulate_ops *ops)
-{
-    uint64_t efer;
-
-    if ( !ops->read_msr ||
-         unlikely(ops->read_msr(MSR_EFER, &efer, ctxt) != X86EMUL_OKAY) )
-        return -1;
-
-    return !!(efer & EFER_LMA);
-}
-
-static int
 realmode_load_seg(
     enum x86_segment seg,
     uint16_t sel,
@@ -1767,8 +1752,7 @@ protmode_load_seg(
          * Experimentally in long mode, the L and D bits are checked before
          * the Present bit.
          */
-        if ( in_longmode(ctxt, ops) &&
-             (desc.b & (1 << 21)) && (desc.b & (1 << 22)) )
+        if ( ctxt->lma && (desc.b & (1 << 21)) && (desc.b & (1 << 22)) )
             goto raise_exn;
         sel = (sel ^ rpl) | cpl;
         break;
@@ -1818,10 +1802,8 @@ protmode_load_seg(
 
     if ( !is_x86_user_segment(seg) )
     {
-        int lm = (desc.b & (1u << 12)) ? 0 : in_longmode(ctxt, ops);
+        bool lm = (desc.b & (1u << 12)) ? false : ctxt->lma;
 
-        if ( lm < 0 )
-            return X86EMUL_UNHANDLEABLE;
         if ( lm )
         {
             switch ( rc = ops->read(sel_seg, (sel & 0xfff8) + 8,
@@ -5195,7 +5177,7 @@ x86_emulate(
                 case 0x03: /* busy 16-bit TSS */
                 case 0x04: /* 16-bit call gate */
                 case 0x05: /* 16/32-bit task gate */
-                    if ( in_longmode(ctxt, ops) )
+                    if ( ctxt->lma )
                         break;
                     /* fall through */
                 case 0x02: /* LDT */
@@ -5242,7 +5224,7 @@ x86_emulate(
                 {
                 case 0x01: /* available 16-bit TSS */
                 case 0x03: /* busy 16-bit TSS */
-                    if ( in_longmode(ctxt, ops) )
+                    if ( ctxt->lma )
                         break;
                     /* fall through */
                 case 0x02: /* LDT */
@@ -5292,10 +5274,7 @@ x86_emulate(
         sreg.attr.bytes = 0xc93; /* G+DB+P+S+Data */
 
 #ifdef __x86_64__
-        rc = in_longmode(ctxt, ops);
-        if ( rc < 0 )
-            goto cannot_emulate;
-        if ( rc )
+        if ( ctxt->lma )
         {
             cs.attr.bytes = 0xa9b; /* L+DB+P+S+Code */
 
@@ -5732,9 +5711,7 @@ x86_emulate(
             dst.val = src.val;
         break;
 
-    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */ {
-        int lm;
-
+    case X86EMUL_OPC(0x0f, 0x34): /* sysenter */
         vcpu_must_have(sep);
         generate_exception_if(mode_ring0(), EXC_GP, 0);
         generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
@@ -5745,17 +5722,14 @@ x86_emulate(
             goto done;
 
         generate_exception_if(!(msr_val & 0xfffc), EXC_GP, 0);
-        lm = in_longmode(ctxt, ops);
-        if ( lm < 0 )
-            goto cannot_emulate;
 
         _regs.eflags &= ~(X86_EFLAGS_VM | X86_EFLAGS_IF | X86_EFLAGS_RF);
 
         cs.sel = msr_val & ~3; /* SELECTOR_RPL_MASK */
         cs.base = 0;   /* flat segment */
         cs.limit = ~0u;  /* 4GB limit */
-        cs.attr.bytes = lm ? 0xa9b  /* G+L+P+S+Code */
-                           : 0xc9b; /* G+DB+P+S+Code */
+        cs.attr.bytes = ctxt->lma ? 0xa9b  /* G+L+P+S+Code */
+                                  : 0xc9b; /* G+DB+P+S+Code */
 
         sreg.sel = cs.sel + 8;
         sreg.base = 0;   /* flat segment */
@@ -5770,16 +5744,15 @@ x86_emulate(
         if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_EIP,
                                  &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(ip) = lm ? msr_val : (uint32_t)msr_val;
+        _regs.r(ip) = ctxt->lma ? msr_val : (uint32_t)msr_val;
 
         if ( (rc = ops->read_msr(MSR_IA32_SYSENTER_ESP,
                                  &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
-        _regs.r(sp) = lm ? msr_val : (uint32_t)msr_val;
+        _regs.r(sp) = ctxt->lma ? msr_val : (uint32_t)msr_val;
 
         singlestep = _regs.eflags & X86_EFLAGS_TF;
         break;
-    }
 
     case X86EMUL_OPC(0x0f, 0x35): /* sysexit */
         vcpu_must_have(sep);
diff --git a/xen/arch/x86/x86_emulate/x86_emulate.h b/xen/arch/x86/x86_emulate/x86_emulate.h
index 215adf6..d9a252e 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -473,6 +473,9 @@ struct x86_emulate_ctxt
     /* Stack pointer width in bits (16, 32 or 64). */
     unsigned int sp_size;
 
+    /* Long mode active? */
+    bool lma;
+
     /*
      * Output-only state:
      */
-- 
2.1.4



* Re: [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate
  2017-04-05 17:33 ` [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate Andrew Cooper
@ 2017-04-05 18:55   ` Boris Ostrovsky
  0 siblings, 0 replies; 18+ messages in thread
From: Boris Ostrovsky @ 2017-04-05 18:55 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Julien Grall, Tim Deegan, Suravee Suthikulpanit

On 04/05/2017 01:33 PM, Andrew Cooper wrote:
> hvm_long_mode_enabled() tests for EFER.LMA, which is specifically different to
> EFER.LME.
>
> Rename it to match its behaviour, and have it strictly return a boolean value
> (although all its callers already use it in implicitly-boolean contexts, so no
> functional change).

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
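
For reference, the renamed predicate boils down to the following sketch
(assumed shape; the real definition lives in the HVM headers):

    static inline bool hvm_long_mode_active(const struct vcpu *v)
    {
        return !!(v->arch.hvm_vcpu.guest_efer & EFER_LMA);
    }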



* Re: [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection()
  2017-04-05 17:33 ` [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection() Andrew Cooper
@ 2017-04-05 18:58   ` Boris Ostrovsky
  2017-04-05 18:59     ` Andrew Cooper
  2017-04-06  9:00   ` Jan Beulich
  1 sibling, 1 reply; 18+ messages in thread
From: Boris Ostrovsky @ 2017-04-05 18:58 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel; +Cc: Julien Grall, Suravee Suthikulpanit, Jan Beulich

On 04/05/2017 01:33 PM, Andrew Cooper wrote:
> Software events require emulation in some cases on AMD hardware.  Introduce
> svm_emul_swint_injection() to perform this emulation if necessary in
> svm_inject_event(), which will cope with any source of events, rather than
> just those coming from x86_emulate().
>
> This logic mirrors inject_swint() in the x86 instruction emulator.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>


> +    /*
> +     * Read all 8/16 bytes so the idtr limit check is applied properly to
> +     * this entry, even though we don't look all the words read.
> +     */

("don't look *at* all the words read"? )

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
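
For reference, the behaviour the quoted comment describes, as a
standalone sketch (hypothetical helper, not the patch's code): the
whole entry, 8 bytes in legacy mode or 16 in long mode, must lie
within the IDTR limit before any of its words are decoded.

    static bool idte_within_limit(uint8_t vector, bool lma,
                                  uint32_t idtr_limit)
    {
        unsigned int size = lma ? 16 : 8;

        /* Offset of the last byte of the entry vs. the limit. */
        return (vector * size) + size - 1 <= idtr_limit;
    }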




* Re: [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection()
  2017-04-05 18:58   ` Boris Ostrovsky
@ 2017-04-05 18:59     ` Andrew Cooper
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-05 18:59 UTC (permalink / raw)
  To: Boris Ostrovsky, Xen-devel
  Cc: Julien Grall, Suravee Suthikulpanit, Jan Beulich

On 05/04/17 19:58, Boris Ostrovsky wrote:
> On 04/05/2017 01:33 PM, Andrew Cooper wrote:
>> Software events require emulation in some cases on AMD hardware.  Introduce
>> svm_emul_swint_injection() to perform this emulation if necessary in
>> svm_inject_event(), which will cope with any sources of event, rather than
>> just those coming from x86_emulate().
>>
>> This logic mirrors inject_swint() in the x86 instruction emulator.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
>> +    /*
>> +     * Read all 8/16 bytes so the idtr limit check is applied properly to
>> +     * this entry, even though we don't look all the words read.
>> +     */
> ("don't look *at* all the words read"? )
>
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Oops yes - will fix.

~Andrew


* Re: [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure
  2017-04-05 17:33 ` [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure Andrew Cooper
@ 2017-04-06  7:30   ` Jan Beulich
  2017-04-06  9:07     ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2017-04-06  7:30 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Tim Deegan, Xen-devel

>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
> With the SVM injection logic capable of doing its own emulation, there is no
> need for this hardware-specific assistance in the common emulator.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> CC: Tim Deegan <tim@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> 
> v2:
>  * imm8 -> imm1

This went wrong:

>      case 0xce: /* into */
>          if ( !(_regs.eflags & X86_EFLAGS_OF) )
>              break;
> -        src.val = EXC_OF;
> -        swint_type = x86_swint_into;
> -        goto swint;
> +        /* Fallthrough */
> +    case 0xcc: /* int3 */
> +    case 0xcd: /* int imm1 */

This needs to remain imm8.

> +    case 0xf1: /* int1 (icebp) */
> +        ASSERT(!ctxt->event_pending);
> +        switch ( ctxt->opcode )
> +        {
> +        case 0xcc: /* int3 */
> +            ctxt->event.vector = EXC_BP;
> +            ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
> +            break;
> +        case 0xcd: /* int imm1 */

As does this one.

> +            ctxt->event.vector = src.val;

This is what I would prefer to become imm1.
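
I.e. (sketch of the shape I'd expect in v3, with the decoded immediate
feeding the vector):

        case 0xcd: /* int imm8 */
            ctxt->event.vector = imm1;
            ctxt->event.type = X86_EVENTTYPE_SW_INTERRUPT;
            break;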

Jan



* Re: [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology
  2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
@ 2017-04-06  8:35   ` Tim Deegan
  2017-04-06  8:47   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Tim Deegan @ 2017-04-06  8:35 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Jan Beulich, Xen-devel

At 18:32 +0100 on 05 Apr (1491417179), Andrew Cooper wrote:
> The function hvm_translate_linear_addr() translates a virtual address to a
> linear address, not a linear address to a physical address.  Correct its name.
> 
> Both hvm_translate_virtual_addr() and hvmemul_virtual_to_linear() return a
> linear address, but a parameter name of paddr is easily confused with paddr_t.
> Rename it to linear, to clearly identify the address space, and for
> consistency with hvm_virtual_to_linear_addr().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>

For the whole series, Acked-by: Tim Deegan <tim@xen.org>


* Re: [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology
  2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
  2017-04-06  8:35   ` Tim Deegan
@ 2017-04-06  8:47   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2017-04-06  8:47 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Tim Deegan, Xen-devel

>>> On 05.04.17 at 19:32, <andrew.cooper3@citrix.com> wrote:
> The function hvm_translate_linear_addr() translates a virtual address to a
> linear address, not a linear address to a physical address.  Correct its name.
> 
> Both hvm_translate_virtual_addr() and hvmemul_virtual_to_linear() return a
> linear address, but a parameter name of paddr is easily confused with paddr_t.
> Rename it to linear, to clearly identify the address space, and for
> consistency with hvm_virtual_to_linear_addr().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Tim Deegan <tim@xen.org>
> CC: Julien Grall <julien.grall@arm.com>
> 
> v2:
>  * Alter commit message, as there is disagreement as to the intended meaning
>    of paddr.  Whatever the intended meaning, it is currently confusing against
>    paddr_t.

And as said on irc - you could have kept my R-b despite not doing
the originally requested parameter name change, so here it is again:

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments
  2017-04-05 17:33 ` [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments Andrew Cooper
@ 2017-04-06  8:56   ` Jan Beulich
  2017-04-06  9:06     ` Andrew Cooper
  0 siblings, 1 reply; 18+ messages in thread
From: Jan Beulich @ 2017-04-06  8:56 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Paul Durrant, Tim Deegan, Xen-devel

>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -549,7 +549,7 @@ static int hvmemul_virtual_to_linear(
>          okay = hvm_virtual_to_linear_addr(
>              seg, reg, offset - (*reps - 1) * bytes_per_rep,
>              *reps * bytes_per_rep, access_type,
> -            hvmemul_ctxt->ctxt.addr_size, linear);
> +            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);
>          *linear += (*reps - 1) * bytes_per_rep;
>          if ( hvmemul_ctxt->ctxt.addr_size != 64 )
>              *linear = (uint32_t)*linear;
> @@ -558,7 +558,7 @@ static int hvmemul_virtual_to_linear(
>      {
>          okay = hvm_virtual_to_linear_addr(
>              seg, reg, offset, *reps * bytes_per_rep, access_type,
> -            hvmemul_ctxt->ctxt.addr_size, linear);
> +            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);

Is there a particular reason why you use the function call in the
above two cases, but ...

> @@ -2075,7 +2075,7 @@ void hvm_emulate_init_per_insn(
>                                          hvmemul_ctxt->insn_buf_eip,
>                                          sizeof(hvmemul_ctxt->insn_buf),
>                                          hvm_access_insn_fetch,
> -                                        hvmemul_ctxt->ctxt.addr_size,
> +                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],

... the cheaper array reference here?

In any event

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan



* Re: [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection()
  2017-04-05 17:33 ` [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection() Andrew Cooper
  2017-04-05 18:58   ` Boris Ostrovsky
@ 2017-04-06  9:00   ` Jan Beulich
  1 sibling, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2017-04-06  9:00 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Julien Grall, Boris Ostrovsky, Suravee Suthikulpanit, Xen-devel

>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
> Software events require emulation in some cases on AMD hardware.  Introduce
> svm_emul_swint_injection() to perform this emulation if necessary in
> svm_inject_event(), which will cope with any source of events, rather than
> just those coming from x86_emulate().
> 
> This logic mirrors inject_swint() in the x86 instruction emulator.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




* Re: [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments
  2017-04-06  8:56   ` Jan Beulich
@ 2017-04-06  9:06     ` Andrew Cooper
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-06  9:06 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Paul Durrant, Tim Deegan, Xen-devel

On 06/04/17 09:56, Jan Beulich wrote:
>>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -549,7 +549,7 @@ static int hvmemul_virtual_to_linear(
>>          okay = hvm_virtual_to_linear_addr(
>>              seg, reg, offset - (*reps - 1) * bytes_per_rep,
>>              *reps * bytes_per_rep, access_type,
>> -            hvmemul_ctxt->ctxt.addr_size, linear);
>> +            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);
>>          *linear += (*reps - 1) * bytes_per_rep;
>>          if ( hvmemul_ctxt->ctxt.addr_size != 64 )
>>              *linear = (uint32_t)*linear;
>> @@ -558,7 +558,7 @@ static int hvmemul_virtual_to_linear(
>>      {
>>          okay = hvm_virtual_to_linear_addr(
>>              seg, reg, offset, *reps * bytes_per_rep, access_type,
>> -            hvmemul_ctxt->ctxt.addr_size, linear);
>> +            hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt), linear);
> Is there a particular reason why you use the function call in the
> above two cases, but ...
>
>> @@ -2075,7 +2075,7 @@ void hvm_emulate_init_per_insn(
>>                                          hvmemul_ctxt->insn_buf_eip,
>>                                          sizeof(hvmemul_ctxt->insn_buf),
>>                                          hvm_access_insn_fetch,
>> -                                        hvmemul_ctxt->ctxt.addr_size,
>> +                                        &hvmemul_ctxt->seg_reg[x86_seg_cs],
> ... the cheaper array reference here?

Consistency with the surrounding code.

>
> In any event
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew
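
The two spellings at issue, side by side (illustrative; both name the
cached CS segment register, the function form fetching it on first
use):

    hvmemul_get_seg_reg(x86_seg_cs, hvmemul_ctxt)   /* fills the cache */
    &hvmemul_ctxt->seg_reg[x86_seg_cs]              /* raw array slot  */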


* Re: [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure
  2017-04-06  7:30   ` Jan Beulich
@ 2017-04-06  9:07     ` Andrew Cooper
  0 siblings, 0 replies; 18+ messages in thread
From: Andrew Cooper @ 2017-04-06  9:07 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Julien Grall, Tim Deegan, Xen-devel

On 06/04/17 08:30, Jan Beulich wrote:
>>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
>> With the SVM injection logic capable of doing its own emulation, there is no
>> need for this hardware-specific assistance in the common emulator.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Reviewed-by: Paul Durrant <paul.durrant@citrix.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> CC: Tim Deegan <tim@xen.org>
>> CC: Julien Grall <julien.grall@arm.com>
>>
>> v2:
>>  * imm8 -> imm1
> This went wrong:
>
>>      case 0xce: /* into */
>>          if ( !(_regs.eflags & X86_EFLAGS_OF) )
>>              break;
>> -        src.val = EXC_OF;
>> -        swint_type = x86_swint_into;
>> -        goto swint;
>> +        /* Fallthrough */
>> +    case 0xcc: /* int3 */
>> +    case 0xcd: /* int imm1 */
> This needs to remain imm8.
>
>> +    case 0xf1: /* int1 (icebp) */
>> +        ASSERT(!ctxt->event_pending);
>> +        switch ( ctxt->opcode )
>> +        {
>> +        case 0xcc: /* int3 */
>> +            ctxt->event.vector = EXC_BP;
>> +            ctxt->event.type = X86_EVENTTYPE_SW_EXCEPTION;
>> +            break;
>> +        case 0xcd: /* int imm1 */
> As does this one.
>
>> +            ctxt->event.vector = src.val;
> This is what I would prefer to become imm1.

Ah - I see what you mean now.  Fixed.

~Andrew


* Re: [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context
  2017-04-05 17:33 ` [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context Andrew Cooper
@ 2017-04-06  9:08   ` Jan Beulich
  0 siblings, 0 replies; 18+ messages in thread
From: Jan Beulich @ 2017-04-06  9:08 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Julien Grall, Tim Deegan, Xen-devel

>>> On 05.04.17 at 19:33, <andrew.cooper3@citrix.com> wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5412,6 +5412,7 @@ int ptwr_do_page_fault(struct vcpu *v, unsigned long addr,
>              .vendor = d->arch.cpuid->x86_vendor,
>              .addr_size = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
>              .sp_size   = is_pv_32bit_domain(d) ? 32 : BITS_PER_LONG,
> +            .lma = true,
>          },
>      };
>      int rc;
> @@ -5566,6 +5567,7 @@ int mmio_ro_do_page_fault(struct vcpu *v, unsigned long addr,
>          .vendor = v->domain->arch.cpuid->x86_vendor,
>          .addr_size = addr_size,
>          .sp_size = addr_size,
> +        .lma = true,

As mentioned elsewhere already, I continue to consider this wrong
for 32-bit PV guests. I don't think there is any requirement for them
to be meaningfully aware of possibly running in long mode, at least
as far as segmentation is concerned. While likely benign right now,
this would become an active issue if any of the paths into
x86_emulate() wanted to have call gate use emulated (once the
function supports that).
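
Concretely, one way to express that (sketch only):

    .lma = !is_pv_32bit_domain(d),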

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c

Can x86_emulate_wrapper() please gain

    ASSERT(!mode_64bit() || ctxt->lma);

or some equivalent?
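
I.e. something along these lines at the top of the wrapper (sketch;
exact placement left to you):

    int x86_emulate_wrapper(
        struct x86_emulate_ctxt *ctxt,
        const struct x86_emulate_ops *ops)
    {
        int rc;

        /* A 64-bit code segment is only reachable with EFER.LMA set. */
        ASSERT(!mode_64bit() || ctxt->lma);

        rc = x86_emulate(ctxt, ops);

        /* Existing post-emulation sanity checks elided. */
        return rc;
    }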

Jan



end of thread

Thread overview: 18+ messages
2017-04-05 17:32 [PATCH v2 for-4.9 0/6] x86/emul: Fixes Andrew Cooper
2017-04-05 17:32 ` [PATCH v2 for-4.9 1/6] x86/hvm: Correct some address space terminology Andrew Cooper
2017-04-06  8:35   ` Tim Deegan
2017-04-06  8:47   ` Jan Beulich
2017-04-05 17:33 ` [PATCH v2 for-4.9 2/6] x86/hvm: Correct long mode predicate Andrew Cooper
2017-04-05 18:55   ` Boris Ostrovsky
2017-04-05 17:33 ` [PATCH v2 for-4.9 3/6] x86/hvm: Fix segmentation logic for system segments Andrew Cooper
2017-04-06  8:56   ` Jan Beulich
2017-04-06  9:06     ` Andrew Cooper
2017-04-05 17:33 ` [PATCH v2 for-4.9 4/6] x86/svm: Introduce svm_emul_swint_injection() Andrew Cooper
2017-04-05 18:58   ` Boris Ostrovsky
2017-04-05 18:59     ` Andrew Cooper
2017-04-06  9:00   ` Jan Beulich
2017-04-05 17:33 ` [PATCH v2 for-4.9 5/6] x86/emul: Drop swint_emulate infrastructure Andrew Cooper
2017-04-06  7:30   ` Jan Beulich
2017-04-06  9:07     ` Andrew Cooper
2017-04-05 17:33 ` [PATCH v2 for-4.9 6/6] x86/emul: Require callers to provide LMA in the emulation context Andrew Cooper
2017-04-06  9:08   ` Jan Beulich
