* [PATCH v2 0/2] x86emul: (mainly) vendor specific behavior adjustments
@ 2020-03-31 15:57 Jan Beulich
  2020-03-31 15:58 ` [PATCH v2 1/2] x86emul: vendor specific SYSCALL behavior Jan Beulich
  2020-03-31 15:58 ` [PATCH v2 2/2] x86emul: support SYSRET Jan Beulich
  0 siblings, 2 replies; 5+ messages in thread
From: Jan Beulich @ 2020-03-31 15:57 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné

Just the remaining two pieces of the original series.

1: vendor specific SYSCALL behavior
2: support SYSRET

Jan



* [PATCH v2 1/2] x86emul: vendor specific SYSCALL behavior
  2020-03-31 15:57 [PATCH v2 0/2] x86emul: (mainly) vendor specific behavior adjustments Jan Beulich
@ 2020-03-31 15:58 ` Jan Beulich
  2020-03-31 16:02   ` Andrew Cooper
  2020-03-31 15:58 ` [PATCH v2 2/2] x86emul: support SYSRET Jan Beulich
  1 sibling, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2020-03-31 15:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné

AMD CPUs permit the insn everywhere (even outside of protected mode),
while Intel ones restrict it to 64-bit mode. While at it, also add a comment
about the apparently missing CPUID bit check.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Replace the CPUID bit check by a comment.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -5897,13 +5897,16 @@ x86_emulate(
         break;
 
     case X86EMUL_OPC(0x0f, 0x05): /* syscall */
-        generate_exception_if(!in_protmode(ctxt, ops), EXC_UD);
-
-        /* Inject #UD if syscall/sysret are disabled. */
+        /*
+         * Inject #UD if syscall/sysret are disabled. EFER.SCE can't be set
+         * with the respective CPUID bit clear, so no need for an explicit
+         * check of that one.
+         */
         fail_if(ops->read_msr == NULL);
         if ( (rc = ops->read_msr(MSR_EFER, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
         generate_exception_if((msr_val & EFER_SCE) == 0, EXC_UD);
+        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), EXC_UD);
 
         if ( (rc = ops->read_msr(MSR_STAR, &msr_val, ctxt)) != X86EMUL_OKAY )
             goto done;
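
For readers unfamiliar with the emulator internals, here is a minimal,
self-contained sketch of the decision the hunk above encodes. The names
(enum vendor, struct cpu_state, syscall_raises_ud) are illustrative
inventions, not the emulator's real types; the real code uses amd_like(),
mode_64bit() and the EFER_SCE check shown in the diff.

    #include <stdbool.h>
    #include <stdio.h>

    enum vendor { VENDOR_INTEL, VENDOR_AMD_LIKE };

    struct cpu_state {
        enum vendor vendor;
        bool efer_sce;      /* EFER.SCE set? */
        bool mode_64bit;    /* executing in 64-bit mode? */
    };

    /* Returns true if SYSCALL raises #UD in the given state. */
    static bool syscall_raises_ud(const struct cpu_state *s)
    {
        if ( !s->efer_sce )
            return true;    /* SCE clear: #UD on all vendors */
        if ( s->vendor != VENDOR_AMD_LIKE && !s->mode_64bit )
            return true;    /* Intel: #UD outside 64-bit mode */
        return false;       /* AMD-like: permitted everywhere */
    }

    int main(void)
    {
        struct cpu_state intel_compat = { VENDOR_INTEL,    true, false };
        struct cpu_state amd_compat   = { VENDOR_AMD_LIKE, true, false };

        printf("Intel, compat mode: %s\n",
               syscall_raises_ud(&intel_compat) ? "#UD" : "ok");
        printf("AMD,   compat mode: %s\n",
               syscall_raises_ud(&amd_compat) ? "#UD" : "ok");
        return 0;
    }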




* [PATCH v2 2/2] x86emul: support SYSRET
  2020-03-31 15:57 [PATCH v2 0/2] x86emul: (mainly) vendor specific behavior adjustments Jan Beulich
  2020-03-31 15:58 ` [PATCH v2 1/2] x86emul: vendor specific SYSCALL behavior Jan Beulich
@ 2020-03-31 15:58 ` Jan Beulich
  2020-03-31 16:10   ` Andrew Cooper
  1 sibling, 1 reply; 5+ messages in thread
From: Jan Beulich @ 2020-03-31 15:58 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné

This is to augment SYSCALL, which we've been supporting for quite some
time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Replace the CPUID bit check by a comment. Limit the RCX-based canonical
    check to Intel only. Update only the SS selector on AMD-like CPUs.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -5977,6 +5977,82 @@ x86_emulate(
             goto done;
         break;
 
+    case X86EMUL_OPC(0x0f, 0x07): /* sysret */
+        /*
+         * Inject #UD if syscall/sysret are disabled. EFER.SCE can't be set
+         * with the respective CPUID bit clear, so no need for an explicit
+         * check of that one.
+         */
+        fail_if(!ops->read_msr);
+        if ( (rc = ops->read_msr(MSR_EFER, &msr_val, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        generate_exception_if(!(msr_val & EFER_SCE), EXC_UD);
+        generate_exception_if(!amd_like(ctxt) && !mode_64bit(), EXC_UD);
+        generate_exception_if(!mode_ring0(), EXC_GP, 0);
+        generate_exception_if(!in_protmode(ctxt, ops), EXC_GP, 0);
+#ifdef __x86_64__
+        /*
+         * Doing this for just Intel (rather than e.g. !amd_like()) as this is
+         * in fact risking to make guest OSes vulnerable to the equivalent of
+         * XSA-7 (CVE-2012-0217).
+         */
+        generate_exception_if(ctxt->cpuid->x86_vendor == X86_VENDOR_INTEL &&
+                              op_bytes == 8 && !is_canonical_address(_regs.rcx),
+                              EXC_GP, 0);
+#endif
+
+        if ( (rc = ops->read_msr(MSR_STAR, &msr_val, ctxt)) != X86EMUL_OKAY )
+            goto done;
+
+        sreg.sel = ((msr_val >> 48) + 8) | 3; /* SELECTOR_RPL_MASK */
+        cs.sel = op_bytes == 8 ? sreg.sel + 8 : sreg.sel - 8;
+
+        cs.base = sreg.base = 0; /* flat segment */
+        cs.limit = sreg.limit = ~0u; /* 4GB limit */
+        cs.attr = 0xcfb; /* G+DB+P+DPL3+S+Code */
+        sreg.attr = 0xcf3; /* G+DB+P+DPL3+S+Data */
+
+        /* Only the selector part of SS gets updated by AMD and alike. */
+        if ( amd_like(ctxt) )
+        {
+            fail_if(!ops->read_segment);
+            if ( (rc = ops->read_segment(x86_seg_ss, &sreg,
+                                         ctxt)) != X86EMUL_OKAY )
+                goto done;
+
+            /* There's explicitly no RPL adjustment here. */
+            sreg.sel = (msr_val >> 48) + 8;
+        }
+
+#ifdef __x86_64__
+        if ( mode_64bit() )
+        {
+            if ( op_bytes == 8 )
+            {
+                cs.attr = 0xafb; /* L+DB+P+DPL3+S+Code */
+                _regs.rip = _regs.rcx;
+            }
+            else
+                _regs.rip = _regs.ecx;
+
+            _regs.eflags = _regs.r11 & ~(X86_EFLAGS_RF | X86_EFLAGS_VM);
+        }
+        else
+#endif
+        {
+            _regs.r(ip) = _regs.ecx;
+            _regs.eflags |= X86_EFLAGS_IF;
+        }
+
+        fail_if(!ops->write_segment);
+        if ( (rc = ops->write_segment(x86_seg_cs, &cs, ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->write_segment(x86_seg_ss, &sreg,
+                                      ctxt)) != X86EMUL_OKAY )
+            goto done;
+
+        singlestep = _regs.eflags & X86_EFLAGS_TF;
+        break;
+
     case X86EMUL_OPC(0x0f, 0x08): /* invd */
     case X86EMUL_OPC(0x0f, 0x09): /* wbinvd / wbnoinvd */
         generate_exception_if(!mode_ring0(), EXC_GP, 0);
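
The v2 note about limiting the RCX check to Intel refers to the
is_canonical_address() test in the hunk above. As a rough illustration of
what "canonical" means under the common 48-bit virtual address width, a
standalone sketch (is_canonical_48() is a made-up name; the patch relies on
the emulator's own helper, and the actual width is CPU dependent):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * With 48-bit virtual addresses an address is canonical when bits
     * 63:47 are all equal, i.e. sign-extending the low 48 bits
     * reproduces the full 64-bit value.
     */
    static bool is_canonical_48(uint64_t addr)
    {
        uint64_t top = addr >> 47;            /* bits 63:47, 17 bits */
        return top == 0 || top == 0x1ffff;    /* all zero or all one */
    }

    int main(void)
    {
        printf("%d\n", is_canonical_48(0x00007fffffffffffULL)); /* 1 */
        printf("%d\n", is_canonical_48(0x0000800000000000ULL)); /* 0 */
        printf("%d\n", is_canonical_48(0xffff800000000000ULL)); /* 1 */
        return 0;
    }

A non-canonical RCX matters here because on Intel CPUs the resulting #GP is
raised while still at CPL0, which is what made the original XSA-7
(CVE-2012-0217) attack possible, whereas AMD CPUs fault only after the
return to CPL3.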




* Re: [PATCH v2 1/2] x86emul: vendor specific SYSCALL behavior
  2020-03-31 15:58 ` [PATCH v2 1/2] x86emul: vendor specific SYSCALL behavior Jan Beulich
@ 2020-03-31 16:02   ` Andrew Cooper
  0 siblings, 0 replies; 5+ messages in thread
From: Andrew Cooper @ 2020-03-31 16:02 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Wei Liu, Roger Pau Monné

On 31/03/2020 16:58, Jan Beulich wrote:
> AMD CPUs permit the insn everywhere (even outside of protected mode),
> while Intel ones restrict it to 64-bit mode. While at it, also add a comment
> about the apparently missing CPUID bit check.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>



* Re: [PATCH v2 2/2] x86emul: support SYSRET
  2020-03-31 15:58 ` [PATCH v2 2/2] x86emul: support SYSRET Jan Beulich
@ 2020-03-31 16:10   ` Andrew Cooper
  0 siblings, 0 replies; 5+ messages in thread
From: Andrew Cooper @ 2020-03-31 16:10 UTC (permalink / raw)
  To: Jan Beulich, xen-devel; +Cc: Wei Liu, Roger Pau Monné

On 31/03/2020 16:58, Jan Beulich wrote:
> This is to augment SYSCALL, which we've been supporting for quite some
> time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

In some copious free time I'll see about finishing off my XTF test for
these cases, but that will have to wait for now.


