* [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling
@ 2019-11-21 22:15 Andrew Cooper
  2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
                   ` (2 more replies)
  0 siblings, 3 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-21 22:15 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Andrew Cooper,
	Jun Nakajima, Roger Pau Monné

These patches want backporting due to the severity of patch 2.  They should
therefore be considered for 4.13 at this point.

Andrew Cooper (2):
  x86/vtx: Fix fault semantics for early task switch failures
  x86/svm: Write the correct %eip into the outgoing task

 xen/arch/x86/hvm/hvm.c                |  4 +--
 xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
 xen/arch/x86/hvm/vmx/vmx.c            |  4 +--
 xen/include/asm-x86/hvm/hvm.h         |  2 +-
 xen/include/asm-x86/hvm/svm/emulate.h |  1 +
 6 files changed, 97 insertions(+), 15 deletions(-)

-- 
2.11.0


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-21 22:15 [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Andrew Cooper
@ 2019-11-21 22:15 ` Andrew Cooper
  2019-11-22 12:37   ` Roger Pau Monné
  2019-11-25  8:23   ` Tian, Kevin
  2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
  2019-11-22 10:23 ` [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Roger Pau Monné
  2 siblings, 2 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-21 22:15 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Andrew Cooper,
	Jun Nakajima, Roger Pau Monné

The VT-x task switch handler adds inst_len to rip before calling
hvm_task_switch().  This causes early faults to be delivered to the guest with
trap semantics, and breaks restartability.

Instead, pass the instruction length into hvm_task_switch() and write it into
the outgoing tss only, leaving rip in its original location.

For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
which will be addressed in the following patch.

While adjusting call sites, drop the unnecessary uint16_t cast.
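For illustration, the difference between the two flows can be sketched in a
standalone snippet (illustrative only, with made-up helper names; not the
actual Xen code paths):

```c
#include <assert.h>
#include <stdint.h>

struct regs { uint64_t rip; };

/*
 * Old (buggy) flow: %rip is advanced before hvm_task_switch() runs, so a
 * fault raised early in the switch is reported with %rip already past the
 * instruction (trap semantics), and the guest cannot restart it.
 */
static uint64_t fault_rip_old(struct regs *regs, unsigned int insn_len)
{
    regs->rip += insn_len;      /* advanced unconditionally */
    return regs->rip;           /* where an early fault would now point */
}

/*
 * New flow: %rip is left alone; only the outgoing TSS image records the
 * post-instruction value, so early faults keep fault semantics.
 */
static uint64_t fault_rip_new(struct regs *regs, unsigned int insn_len,
                              uint32_t *tss_eip)
{
    *tss_eip = (uint32_t)(regs->rip + insn_len);  /* saved state only */
    return regs->rip;           /* an early fault still points at the insn */
}
```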

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jun Nakajima <jun.nakajima@intel.com>
CC: Kevin Tian <kevin.tian@intel.com>
CC: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/hvm/hvm.c        | 4 ++--
 xen/arch/x86/hvm/svm/svm.c    | 2 +-
 xen/arch/x86/hvm/vmx/vmx.c    | 4 ++--
 xen/include/asm-x86/hvm/hvm.h | 2 +-
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 818e705fd1..7f556171bd 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2913,7 +2913,7 @@ void hvm_prepare_vm86_tss(struct vcpu *v, uint32_t base, uint32_t limit)
 
 void hvm_task_switch(
     uint16_t tss_sel, enum hvm_task_switch_reason taskswitch_reason,
-    int32_t errcode)
+    int32_t errcode, unsigned int insn_len)
 {
     struct vcpu *v = current;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
@@ -2987,7 +2987,7 @@ void hvm_task_switch(
     if ( taskswitch_reason == TSW_iret )
         eflags &= ~X86_EFLAGS_NT;
 
-    tss.eip    = regs->eip;
+    tss.eip    = regs->eip + insn_len;
     tss.eflags = eflags;
     tss.eax    = regs->eax;
     tss.ecx    = regs->ecx;
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 4eb6b0e4c7..049b800e20 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2794,7 +2794,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
          */
         vmcb->eventinj.bytes = 0;
 
-        hvm_task_switch((uint16_t)vmcb->exitinfo1, reason, errcode);
+        hvm_task_switch(vmcb->exitinfo1, reason, errcode, 0);
         break;
     }
 
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 6a5eeb5c13..6d048852c3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3956,8 +3956,8 @@ void vmx_vmexit_handler(struct cpu_user_regs *regs)
             __vmread(IDT_VECTORING_ERROR_CODE, &ecode);
         else
              ecode = -1;
-        regs->rip += inst_len;
-        hvm_task_switch((uint16_t)exit_qualification, reasons[source], ecode);
+
+        hvm_task_switch(exit_qualification, reasons[source], ecode, inst_len);
         break;
     }
     case EXIT_REASON_CPUID:
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index f86af09898..4cce59bb31 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -297,7 +297,7 @@ void hvm_set_rdtsc_exiting(struct domain *d, bool_t enable);
 enum hvm_task_switch_reason { TSW_jmp, TSW_iret, TSW_call_or_int };
 void hvm_task_switch(
     uint16_t tss_sel, enum hvm_task_switch_reason taskswitch_reason,
-    int32_t errcode);
+    int32_t errcode, unsigned int insn_len);
 
 enum hvm_access_type {
     hvm_access_insn_fetch,
-- 
2.11.0



* [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-21 22:15 [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Andrew Cooper
  2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
@ 2019-11-21 22:15 ` Andrew Cooper
  2019-11-22 13:10   ` Andrew Cooper
                     ` (2 more replies)
  2019-11-22 10:23 ` [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Roger Pau Monné
  2 siblings, 3 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-21 22:15 UTC (permalink / raw)
  To: Xen-devel
  Cc: Juergen Gross, Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
assistance with instruction length.  As a result, any instruction-induced task
switch has the outgoing task's %eip pointing at the instruction which caused
the switch, rather than after it.

This causes explicit use of task gates to livelock (as when the task returns,
it executes the task-switching instruction again), and any restartable task to
become a nop after its first instantiation (the entry state points at the
ret/iret instruction used to exit the task).

32-bit Windows in particular is known to use task gates for NMI handling, and
to use NMI IPIs.
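The livelock is easy to see in a standalone sketch (illustrative only, with a
hypothetical instruction address; not Xen code):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical far call to a task gate at guest address 0x100, 5 bytes. */
enum { CALL_INSN = 0x100, CALL_LEN = 5 };

/*
 * The outgoing task's TSS records where execution resumes when the old
 * task is switched back to.  With insn_len == 0 (the bug), that is the
 * task-switching instruction itself, so returning re-executes it.
 */
static uint32_t saved_eip(uint32_t cur_eip, unsigned int insn_len)
{
    return cur_eip + insn_len;
}
```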

In the task switch handler, distinguish instruction-induced from
interrupt/exception-induced task switches, and decode the instruction under
%rip to calculate its length.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Juergen Gross <jgross@suse.com>

The implementation of svm_get_task_switch_insn_len() is bug-compatible with
svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
time to address this more thoroughly.

AMD does permit TASK_SWITCH not to be intercepted and, I'm informed, does do
the right thing when a TSS crosses a page boundary.  However, it
is not actually safe to leave task switches unintercepted.  Any NPT or shadow
page fault, even from logdirty/paging/etc will corrupt guest state in an
unrecoverable manner.
---
 xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
 xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
 xen/include/asm-x86/hvm/svm/emulate.h |  1 +
 3 files changed, 92 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
index 3e52592847..176c25f60d 100644
--- a/xen/arch/x86/hvm/svm/emulate.c
+++ b/xen/arch/x86/hvm/svm/emulate.c
@@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
 }
 
 /*
+ * TASK_SWITCH vmexits never provide an instruction length.  We must always
+ * decode under %rip to find the answer.
+ */
+unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
+{
+    struct hvm_emulate_ctxt ctxt;
+    struct x86_emulate_state *state;
+    unsigned int emul_len, modrm_reg;
+
+    ASSERT(v == current);
+    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
+    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
+    if ( IS_ERR_OR_NULL(state) )
+        return 0;
+
+    emul_len = x86_insn_length(state, &ctxt.ctxt);
+
+    /*
+     * Check for an instruction which can cause a task switch.  Any far
+     * jmp/call/ret, any software interrupt/exception, and iret.
+     */
+    switch ( ctxt.ctxt.opcode )
+    {
+    case 0xff: /* Grp 5 */
+        /* call / jmp (far, absolute indirect) */
+        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
+             (modrm_reg != 3 && modrm_reg != 5) )
+        {
+            /* Wrong instruction.  Throw #GP back for now. */
+    default:
+            hvm_inject_hw_exception(TRAP_gp_fault, 0);
+            emul_len = 0;
+            break;
+        }
+        /* Fallthrough */
+    case 0x62: /* bound */
+    case 0x9a: /* call (far, absolute) */
+    case 0xca: /* ret imm16 (far) */
+    case 0xcb: /* ret (far) */
+    case 0xcc: /* int3 */
+    case 0xcd: /* int imm8 */
+    case 0xce: /* into */
+    case 0xcf: /* iret */
+    case 0xea: /* jmp (far, absolute) */
+    case 0xf1: /* icebp */
+        break;
+    }
+
+    x86_emulate_free_state(state);
+
+    return emul_len;
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 049b800e20..ba9c24a70c 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 
     case VMEXIT_TASK_SWITCH: {
         enum hvm_task_switch_reason reason;
-        int32_t errcode = -1;
+        int32_t errcode = -1, insn_len = -1;
+
+        /*
+         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
+         * never provided, even for instruction-induced task switches, but we
+         * need to know the instruction length in order to set %eip suitably
+         * in the outgoing TSS.
+         *
+         * For a task switch which vectored through the IDT, look at the type
+         * to distinguish interrupts/exceptions from instruction based
+         * switches.
+         */
+        if ( vmcb->eventinj.fields.v )
+        {
+            /*
+             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
+             * others are.
+             */
+            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
+                insn_len = 0;
+
+            /*
+             * Clobber the vectoring information, as we are going to emulate
+             * the task switch in full.
+             */
+            vmcb->eventinj.bytes = 0;
+        }
+
+        /*
+         * insn_len being -1 indicates that we have an instruction-induced
+         * task switch.  Decode under %rip to find its length.
+         */
+        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
+            break;
+
         if ( (vmcb->exitinfo2 >> 36) & 1 )
             reason = TSW_iret;
         else if ( (vmcb->exitinfo2 >> 38) & 1 )
@@ -2786,15 +2820,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
         if ( (vmcb->exitinfo2 >> 44) & 1 )
             errcode = (uint32_t)vmcb->exitinfo2;
 
-        /*
-         * Some processors set the EXITINTINFO field when the task switch
-         * is caused by a task gate in the IDT. In this case we will be
-         * emulating the event injection, so we do not want the processor
-         * to re-inject the original event!
-         */
-        vmcb->eventinj.bytes = 0;
-
-        hvm_task_switch(vmcb->exitinfo1, reason, errcode, 0);
+        hvm_task_switch(vmcb->exitinfo1, reason, errcode, insn_len);
         break;
     }
 
diff --git a/xen/include/asm-x86/hvm/svm/emulate.h b/xen/include/asm-x86/hvm/svm/emulate.h
index 9af10061c5..d7364f774a 100644
--- a/xen/include/asm-x86/hvm/svm/emulate.h
+++ b/xen/include/asm-x86/hvm/svm/emulate.h
@@ -51,6 +51,7 @@
 struct vcpu;
 
 unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc);
+unsigned int svm_get_task_switch_insn_len(struct vcpu *v);
 
 #endif /* __ASM_X86_HVM_SVM_EMULATE_H__ */
 
-- 
2.11.0



* Re: [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling
  2019-11-21 22:15 [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Andrew Cooper
  2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
  2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
@ 2019-11-22 10:23 ` Roger Pau Monné
  2019-11-22 10:25   ` Andrew Cooper
  2 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2019-11-22 10:23 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Jun Nakajima, Xen-devel

On Thu, Nov 21, 2019 at 10:15:49PM +0000, Andrew Cooper wrote:
> These patches want backporting due to the severity of patch 2.  They should
> therefore be considered for 4.13 at this point.

Is there a matching XTF test to exercise this functionality?

Thanks, Roger.


* Re: [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling
  2019-11-22 10:23 ` [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Roger Pau Monné
@ 2019-11-22 10:25   ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 10:25 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Jun Nakajima, Xen-devel

On 22/11/2019 10:23, Roger Pau Monné wrote:
> On Thu, Nov 21, 2019 at 10:15:49PM +0000, Andrew Cooper wrote:
>> These patches want backporting due to the severity of patch 2.  They should
>> therefore be considered for 4.13 at this point.
> Is there a matching XTF test to exercise this functionality?

Modification of an existing one to begin with (which is how I spotted
the problems).

I don't have a CI-ready test yet.

~Andrew


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
@ 2019-11-22 12:37   ` Roger Pau Monné
  2019-11-22 12:43     ` Andrew Cooper
  2019-11-22 13:08     ` Jan Beulich
  2019-11-25  8:23   ` Tian, Kevin
  1 sibling, 2 replies; 19+ messages in thread
From: Roger Pau Monné @ 2019-11-22 12:37 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Jun Nakajima, Xen-devel

On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
> The VT-x task switch handler adds inst_len to rip before calling
> hvm_task_switch().  This causes early faults to be delivered to the guest with

By early faults you mean faults injected by hvm_task_switch itself for
example?

> trap semantics, and breaks restartability.
> 
> Instead, pass the instruction length into hvm_task_switch() and write it into
> the outgoing tss only, leaving rip in its original location.
> 
> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
> which will be addressed in the following patch.
> 
> While adjusting call sites, drop the unnecessary uint16_t cast.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Code LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-22 12:37   ` Roger Pau Monné
@ 2019-11-22 12:43     ` Andrew Cooper
  2019-11-22 13:08     ` Jan Beulich
  1 sibling, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 12:43 UTC (permalink / raw)
  To: Roger Pau Monné
  Cc: Juergen Gross, Kevin Tian, Jan Beulich, Wei Liu, Jun Nakajima, Xen-devel

On 22/11/2019 12:37, Roger Pau Monné wrote:
> On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
>> The VT-x task switch handler adds inst_len to rip before calling
>> hvm_task_switch().  This causes early faults to be delivered to the guest with
> By early faults you mean faults injected by hvm_task_switch itself for
> example?

A task switch is restartable up until a point.  Beyond that point any
chaos will reign in the new task, not the old task.

By "early", I mean any fault which is handled in the context of the old
task.  As far as testing goes, I think mapping the current TSS as
read-only is about the only way I've got causing this to occur, because
all other fault conditions are checked by the processor before issuing a
TASK_SWITCH VMExit.

>
>> trap semantics, and breaks restartability.
>>
>> Instead, pass the instruction length into hvm_task_switch() and write it into
>> the outgoing tss only, leaving rip in its original location.
>>
>> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
>> which will be addressed in the following patch.
>>
>> While adjusting call sites, drop the unnecessary uint16_t cast.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Code LGTM:
>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks,

~Andrew


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-22 12:37   ` Roger Pau Monné
  2019-11-22 12:43     ` Andrew Cooper
@ 2019-11-22 13:08     ` Jan Beulich
  2019-11-22 13:12       ` Andrew Cooper
  1 sibling, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-11-22 13:08 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Kevin Tian, Wei Liu, Jun Nakajima, Xen-devel,
	Roger Pau Monné

On 22.11.2019 13:37, Roger Pau Monné  wrote:
> On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
>> The VT-x task switch handler adds inst_len to rip before calling
>> hvm_task_switch().  This causes early faults to be delivered to the guest with
>> trap semantics, and breaks restartability.
>>
>> Instead, pass the instruction length into hvm_task_switch() and write it into
>> the outgoing tss only, leaving rip in its original location.
>>
>> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
>> which will be addressed in the following patch.
>>
>> While adjusting call sites, drop the unnecessary uint16_t cast.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Code LGTM:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
@ 2019-11-22 13:10   ` Andrew Cooper
  2019-11-22 13:31   ` Jan Beulich
  2019-11-22 13:59   ` Roger Pau Monné
  2 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 13:10 UTC (permalink / raw)
  To: Xen-devel; +Cc: Juergen Gross, Wei Liu, Jan Beulich, Roger Pau Monné

On 21/11/2019 22:15, Andrew Cooper wrote:
> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
> assistance with instruction length.  As a result, any instruction-induced task
> switch has the outgoing task's %eip pointing at the instruction which caused
> the switch, rather than after it.
>
> This causes explicit use of task gates to livelock (as when the task returns,
> it executes the task-switching instruction again), and any restartable task to
> become a nop after its first instantiation (the entry state points at the
> ret/iret instruction used to exit the task).

FWIW, I've rewritten this paragraph as:

This causes callers of task gates to livelock (repeatedly execute the
call/jmp to enter the task), and any restartable task to become a nop after
its first use (the (re)entry state points at the ret/iret used to exit the
task).

~Andrew


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-22 13:08     ` Jan Beulich
@ 2019-11-22 13:12       ` Andrew Cooper
  2019-11-22 13:39         ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 13:12 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Kevin Tian, Wei Liu, Jun Nakajima, Xen-devel,
	Roger Pau Monné

On 22/11/2019 13:08, Jan Beulich wrote:
> On 22.11.2019 13:37, Roger Pau Monné  wrote:
>> On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
>>> The VT-x task switch handler adds inst_len to rip before calling
>>> hvm_task_switch().  This causes early faults to be delivered to the guest with
>>> trap semantics, and breaks restartability.
>>>
>>> Instead, pass the instruction length into hvm_task_switch() and write it into
>>> the outgoing tss only, leaving rip in its original location.
>>>
>>> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
>>> which will be addressed in the following patch.
>>>
>>> While adjusting call sites, drop the unnecessary uint16_t cast.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Code LGTM:
>>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

It occurs to me that this also fixes a vmentry failure in the corner
case that an instruction which crosses the 4G=>0 boundary takes a
fault.  %rip will be adjusted without being truncated.

~Andrew


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
  2019-11-22 13:10   ` Andrew Cooper
@ 2019-11-22 13:31   ` Jan Beulich
  2019-11-22 13:55     ` Andrew Cooper
  2019-11-22 13:59   ` Roger Pau Monné
  2 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-11-22 13:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Juergen Gross, Xen-devel, Wei Liu, Roger Pau Monné

On 21.11.2019 23:15, Andrew Cooper wrote:
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>  }
>  
>  /*
> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
> + * decode under %rip to find the answer.
> + */
> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
> +{
> +    struct hvm_emulate_ctxt ctxt;
> +    struct x86_emulate_state *state;
> +    unsigned int emul_len, modrm_reg;
> +
> +    ASSERT(v == current);

You look to be using v here just for this ASSERT() - is this really
worth it? By making the function take "void" it would be quite obvious
that it would act on the current vCPU only.

> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
> +    if ( IS_ERR_OR_NULL(state) )
> +        return 0;
> +
> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
> +
> +    /*
> +     * Check for an instruction which can cause a task switch.  Any far
> +     * jmp/call/ret, any software interrupt/exception, and iret.
> +     */
> +    switch ( ctxt.ctxt.opcode )
> +    {
> +    case 0xff: /* Grp 5 */
> +        /* call / jmp (far, absolute indirect) */
> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||

DYM "== 3", to bail upon non-memory operands?

> +             (modrm_reg != 3 && modrm_reg != 5) )
> +        {
> +            /* Wrong instruction.  Throw #GP back for now. */
> +    default:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            emul_len = 0;
> +            break;
> +        }
> +        /* Fallthrough */
> +    case 0x62: /* bound */

Does "bound" really belong on this list? It raising #BR is like
insns raising random other exceptions, not like INTO / INT3,
where the IDT descriptor also has to have suitable DPL for the
exception to actually get delivered (rather than #GP). I.e. it
shouldn't make it here in the first place, due to the
X86_EVENTTYPE_HW_EXCEPTION check in the caller.

IOW if "bound" needs to be here, then all others need to be as
well, unless they can't cause any exception at all.

> +    case 0x9a: /* call (far, absolute) */
> +    case 0xca: /* ret imm16 (far) */
> +    case 0xcb: /* ret (far) */
> +    case 0xcc: /* int3 */
> +    case 0xcd: /* int imm8 */
> +    case 0xce: /* into */
> +    case 0xcf: /* iret */
> +    case 0xea: /* jmp (far, absolute) */
> +    case 0xf1: /* icebp */

Same perhaps for ICEBP, albeit I'm less certain here, as its
behavior is too poorly documented (if at all).

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>  
>      case VMEXIT_TASK_SWITCH: {
>          enum hvm_task_switch_reason reason;
> -        int32_t errcode = -1;
> +        int32_t errcode = -1, insn_len = -1;
> +
> +        /*
> +         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
> +         * never provided, even for instruction-induced task switches, but we
> +         * need to know the instruction length in order to set %eip suitably
> +         * in the outgoing TSS.
> +         *
> +         * For a task switch which vectored through the IDT, look at the type
> +         * to distinguish interrupts/exceptions from instruction based
> +         * switches.
> +         */
> +        if ( vmcb->eventinj.fields.v )
> +        {
> +            /*
> +             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
> +             * others are.
> +             */
> +            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
> +                insn_len = 0;
> +
> +            /*
> +             * Clobber the vectoring information, as we are going to emulate
> +             * the task switch in full.
> +             */
> +            vmcb->eventinj.bytes = 0;
> +        }
> +
> +        /*
> +         * insn_len being -1 indicates that we have an instruction-induced
> +         * task switch.  Decode under %rip to find its length.
> +         */
> +        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
> +            break;

Won't this live-lock the guest? I.e. isn't it better to e.g. crash it
if svm_get_task_switch_insn_len() didn't raise #GP(0)?

Jan


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-22 13:12       ` Andrew Cooper
@ 2019-11-22 13:39         ` Jan Beulich
  2019-11-22 14:51           ` Andrew Cooper
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-11-22 13:39 UTC (permalink / raw)
  To: Andrew Cooper
  Cc: Juergen Gross, Kevin Tian, Wei Liu, Jun Nakajima, Xen-devel,
	Roger Pau Monné

On 22.11.2019 14:12, Andrew Cooper wrote:
> On 22/11/2019 13:08, Jan Beulich wrote:
>> On 22.11.2019 13:37, Roger Pau Monné  wrote:
>>> On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
>>>> The VT-x task switch handler adds inst_len to rip before calling
>>>> hvm_task_switch().  This causes early faults to be delivered to the guest with
>>>> trap semantics, and breaks restartability.
>>>>
>>>> Instead, pass the instruction length into hvm_task_switch() and write it into
>>>> the outgoing tss only, leaving rip in its original location.
>>>>
>>>> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
>>>> which will be addressed in the following patch.
>>>>
>>>> While adjusting call sites, drop the unnecessary uint16_t cast.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Code LGTM:
>>>
>>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> It occurs to me that this also fixes a vmentry failure in the corner
> case that an instruction, which crosses the 4G=>0 boundary takes a
> fault.  %rip will be adjusted without being truncated.

I was about to say so in my earlier reply, until I paid attention
to this

@@ -2987,7 +2987,7 @@ void hvm_task_switch(
     if ( taskswitch_reason == TSW_iret )
         eflags &= ~X86_EFLAGS_NT;
 
-    tss.eip    = regs->eip;
+    tss.eip    = regs->eip + insn_len;

together with the subsequent

    regs->rip    = tss.eip;

already having taken care of this aspect before, afaict.

Jan


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-22 13:31   ` Jan Beulich
@ 2019-11-22 13:55     ` Andrew Cooper
  2019-11-22 14:31       ` Jan Beulich
  0 siblings, 1 reply; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 13:55 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Juergen Gross, Xen-devel, Wei Liu, Roger Pau Monné

On 22/11/2019 13:31, Jan Beulich wrote:
> On 21.11.2019 23:15, Andrew Cooper wrote:
>> --- a/xen/arch/x86/hvm/svm/emulate.c
>> +++ b/xen/arch/x86/hvm/svm/emulate.c
>> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>>  }
>>  
>>  /*
>> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
>> + * decode under %rip to find the answer.
>> + */
>> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
>> +{
>> +    struct hvm_emulate_ctxt ctxt;
>> +    struct x86_emulate_state *state;
>> +    unsigned int emul_len, modrm_reg;
>> +
>> +    ASSERT(v == current);
> You look to be using v here just for this ASSERT() - is this really
> worth it? By making the function take "void" it would be quite obvious
> that it would act on the current vCPU only.

This was cribbed largely from svm_get_insn_len(), which also behaves the
same.

>
>> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
>> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
>> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
>> +    if ( IS_ERR_OR_NULL(state) )
>> +        return 0;
>> +
>> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
>> +
>> +    /*
>> +     * Check for an instruction which can cause a task switch.  Any far
>> +     * jmp/call/ret, any software interrupt/exception, and iret.
>> +     */
>> +    switch ( ctxt.ctxt.opcode )
>> +    {
>> +    case 0xff: /* Grp 5 */
>> +        /* call / jmp (far, absolute indirect) */
>> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
> DYM "== 3", to bail upon non-memory operands?

Ah yes (and this demonstrates that I really need to get an XTF test
sorted soon.)

>
>> +             (modrm_reg != 3 && modrm_reg != 5) )
>> +        {
>> +            /* Wrong instruction.  Throw #GP back for now. */
>> +    default:
>> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> +            emul_len = 0;
>> +            break;
>> +        }
>> +        /* Fallthrough */
>> +    case 0x62: /* bound */
> Does "bound" really belong on this list? It raising #BR is like
> insns raising random other exceptions, not like INTO / INT3,
> where the IDT descriptor also has to have suitable DPL for the
> exception to actually get delivered (rather than #GP). I.e. it
> shouldn't make it here in the first place, due to the
> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>
> IOW if "bound" needs to be here, then all others need to be as
> well, unless they can't cause any exception at all.

More experimentation required.  BOUND doesn't appear to be special cased
by SVM, but is by VT-x.  VT-x, however, does throw it in the same category
as #UD, and identifies it as a hardware exception.

I suspect you are right, and it doesn't want to be here.

>> +    case 0x9a: /* call (far, absolute) */
>> +    case 0xca: /* ret imm16 (far) */
>> +    case 0xcb: /* ret (far) */
>> +    case 0xcc: /* int3 */
>> +    case 0xcd: /* int imm8 */
>> +    case 0xce: /* into */
>> +    case 0xcf: /* iret */
>> +    case 0xea: /* jmp (far, absolute) */
>> +    case 0xf1: /* icebp */
> Same perhaps for ICEBP, albeit I'm less certain here, as its
> behavior is too poorly documented (if at all).

ICEBP's #DB is a trap, not a fault, so instruction length is important.
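As an aside, a minimal sketch of why the length matters here (illustrative
values only, not real vmexit state): a fault reports the address of the
instruction itself so it can be restarted, while a trap such as ICEBP's #DB
reports the address past it, which can only be computed with the decoded
length:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative only: fault vs trap reporting semantics. */
enum event_sem { EVT_FAULT, EVT_TRAP };

static uint64_t reported_rip(uint64_t rip, unsigned int insn_len,
                             enum event_sem sem)
{
    /* Faults point at the instruction; traps point past it. */
    return sem == EVT_FAULT ? rip : rip + insn_len;
}
```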

>
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>  
>>      case VMEXIT_TASK_SWITCH: {
>>          enum hvm_task_switch_reason reason;
>> -        int32_t errcode = -1;
>> +        int32_t errcode = -1, insn_len = -1;
>> +
>> +        /*
>> +         * All TASK_SWITCH intercepts have fault-like semantics.  NRIP is
>> +         * never provided, even for instruction-induced task switches, but we
>> +         * need to know the instruction length in order to set %eip suitably
>> +         * in the outgoing TSS.
>> +         *
>> +         * For a task switch which vectored through the IDT, look at the type
>> +         * to distinguish interrupts/exceptions from instruction based
>> +         * switches.
>> +         */
>> +        if ( vmcb->eventinj.fields.v )
>> +        {
>> +            /*
>> +             * HW_EXCEPTION, NMI and EXT_INTR are not instruction based.  All
>> +             * others are.
>> +             */
>> +            if ( vmcb->eventinj.fields.type <= X86_EVENTTYPE_HW_EXCEPTION )
>> +                insn_len = 0;
>> +
>> +            /*
>> +             * Clobber the vectoring information, as we are going to emulate
>> +             * the task switch in full.
>> +             */
>> +            vmcb->eventinj.bytes = 0;
>> +        }
>> +
>> +        /*
>> +         * insn_len being -1 indicates that we have an instruction-induced
>> +         * task switch.  Decode under %rip to find its length.
>> +         */
>> +        if ( insn_len < 0 && (insn_len = svm_get_task_switch_insn_len(v)) == 0 )
>> +            break;
> Won't this live-lock the guest?

Potentially, yes.

> I.e. isn't it better to e.g. crash it
> if svm_get_task_switch_insn_len() didn't raise #GP(0)?

No - that would need an XSA if we got it wrong, as none of these are
privileged instructions.

However, it occurs to me that we are in a position to use
svm_crash_or_fault(), so I'll respin with that in mind.

~Andrew


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
  2019-11-22 13:10   ` Andrew Cooper
  2019-11-22 13:31   ` Jan Beulich
@ 2019-11-22 13:59   ` Roger Pau Monné
  2019-11-22 14:39     ` Andrew Cooper
  2 siblings, 1 reply; 19+ messages in thread
From: Roger Pau Monné @ 2019-11-22 13:59 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Juergen Gross, Xen-devel, Wei Liu, Jan Beulich

On Thu, Nov 21, 2019 at 10:15:51PM +0000, Andrew Cooper wrote:
> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
> assistance with instruction length.  As a result, any instruction-induced task
> switch has the outgoing task's %eip pointing at the instruction switch caused
                                                                  ^ that
> the switch, rather than after it.
> 
> This causes explicit use of task gates to livelock (as when the task returns,
> it executes the task-switching instruction again), and any restartable task to
> become a nop after its first instantiation (the entry state points at the
> ret/iret instruction used to exit the task).
> 
> 32bit Windows in particular is known to use task gates for NMI handling, and
> to use NMI IPIs.
> 
> In the task switch handler, distinguish instruction-induced from
> interrupt/exception-induced task switches, and decode the instruction under
> %rip to calculate its length.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Juergen Gross <jgross@suse.com>
> 
> The implementation of svm_get_task_switch_insn_len() is bug-compatible with
> svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
> time to address this more thoroughly.
> 
> AMD does permit TASK_SWITCH not to be intercepted and, I'm informed does do
> the right thing when it comes to a TSS crossing a page boundary.  However, it
> is not actually safe to leave task switches unintercepted.  Any NPT or shadow
> page fault, even from logdirty/paging/etc will corrupt guest state in an
> unrecoverable manner.
> ---
>  xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
>  xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
>  xen/include/asm-x86/hvm/svm/emulate.h |  1 +
>  3 files changed, 92 insertions(+), 10 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
> index 3e52592847..176c25f60d 100644
> --- a/xen/arch/x86/hvm/svm/emulate.c
> +++ b/xen/arch/x86/hvm/svm/emulate.c
> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>  }
>  
>  /*
> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
> + * decode under %rip to find the answer.
> + */
> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
> +{
> +    struct hvm_emulate_ctxt ctxt;
> +    struct x86_emulate_state *state;
> +    unsigned int emul_len, modrm_reg;
> +
> +    ASSERT(v == current);
> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
> +    if ( IS_ERR_OR_NULL(state) )

Maybe crash the guest in this case? Not advancing the instruction
pointer in a software induced task switch will create a loop AFAICT?

> +        return 0;
> +
> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
> +
> +    /*
> +     * Check for an instruction which can cause a task switch.  Any far
> +     * jmp/call/ret, any software interrupt/exception, and iret.
> +     */
> +    switch ( ctxt.ctxt.opcode )
> +    {
> +    case 0xff: /* Grp 5 */
> +        /* call / jmp (far, absolute indirect) */
> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
> +             (modrm_reg != 3 && modrm_reg != 5) )
> +        {
> +            /* Wrong instruction.  Throw #GP back for now. */
> +    default:
> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
> +            emul_len = 0;
> +            break;
> +        }
> +        /* Fallthrough */
> +    case 0x62: /* bound */
> +    case 0x9a: /* call (far, absolute) */

I'm slightly lost here, in the case of call or jmp for example, don't
you need the instruction pointer to point to the destination of the
call/jmp instead of the next instruction?

> +    case 0xca: /* ret imm16 (far) */
> +    case 0xcb: /* ret (far) */
> +    case 0xcc: /* int3 */
> +    case 0xcd: /* int imm8 */
> +    case 0xce: /* into */
> +    case 0xcf: /* iret */
> +    case 0xea: /* jmp (far, absolute) */
> +    case 0xf1: /* icebp */
> +        break;
> +    }
> +
> +    x86_emulate_free_state(state);
> +
> +    return emul_len;
> +}
> +
> +/*
>   * Local variables:
>   * mode: C
>   * c-file-style: "BSD"
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 049b800e20..ba9c24a70c 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>  
>      case VMEXIT_TASK_SWITCH: {
>          enum hvm_task_switch_reason reason;
> -        int32_t errcode = -1;
> +        int32_t errcode = -1, insn_len = -1;

Plain int seems better for insn_len?

Also I'm not sure there's a reason that errcode uses int32_t, but
that's not introduced here anyway.

Thanks, Roger.


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-22 13:55     ` Andrew Cooper
@ 2019-11-22 14:31       ` Jan Beulich
  2019-11-22 14:55         ` Andrew Cooper
  0 siblings, 1 reply; 19+ messages in thread
From: Jan Beulich @ 2019-11-22 14:31 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Juergen Gross, Xen-devel, Wei Liu, Roger Pau Monné

On 22.11.2019 14:55, Andrew Cooper wrote:
> On 22/11/2019 13:31, Jan Beulich wrote:
>> On 21.11.2019 23:15, Andrew Cooper wrote:
>>> +        /* Fallthrough */
>>> +    case 0x62: /* bound */
>> Does "bound" really belong on this list? It raising #BR is like
>> insns raising random other exceptions, not like INTO / INT3,
>> where the IDT descriptor also has to have suitable DPL for the
>> exception to actually get delivered (rather than #GP). I.e. it
>> shouldn't make it here in the first place, due to the
>> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>>
>> IOW if "bound" needs to be here, then all others need to be as
>> well, unless they can't cause any exception at all.
> 
> More experimentation required.  BOUND doesn't appear to be special cased
> by SVM, but is by VT-x.  VT-x however does throw it in the same category
> as #UD, and identify it to be a hardware exception.
> 
> I suspect you are right, and t doesn't want to be here.
> 
>>> +    case 0x9a: /* call (far, absolute) */
>>> +    case 0xca: /* ret imm16 (far) */
>>> +    case 0xcb: /* ret (far) */
>>> +    case 0xcc: /* int3 */
>>> +    case 0xcd: /* int imm8 */
>>> +    case 0xce: /* into */
>>> +    case 0xcf: /* iret */
>>> +    case 0xea: /* jmp (far, absolute) */
>>> +    case 0xf1: /* icebp */
>> Same perhaps for ICEBP, albeit I'm less certain here, as its
>> behavior is too poorly documented (if at all).
> 
> ICEBP's #DB is a trap, not a fault, so instruction length is important.

Hmm, this may point at a bigger issue then: Single step and data
breakpoints are traps, too. But of course they can occur with
arbitrary insns. Do their intercepts occur with guest RIP already
updated? (They wouldn't currently make it here anyway because of
the X86_EVENTTYPE_HW_EXCEPTION check in the caller.) If they do,
are you sure ICEBP-#DB's doesn't?

Jan


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-22 13:59   ` Roger Pau Monné
@ 2019-11-22 14:39     ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 14:39 UTC (permalink / raw)
  To: Roger Pau Monné; +Cc: Juergen Gross, Xen-devel, Wei Liu, Jan Beulich

On 22/11/2019 13:59, Roger Pau Monné wrote:
> On Thu, Nov 21, 2019 at 10:15:51PM +0000, Andrew Cooper wrote:
>> The TASK_SWITCH vmexit has fault semantics, and doesn't provide any NRIPs
>> assistance with instruction length.  As a result, any instruction-induced task
>> switch has the outgoing task's %eip pointing at the instruction switch caused
>                                                                   ^ that
>> the switch, rather than after it.
>>
>> This causes explicit use of task gates to livelock (as when the task returns,
>> it executes the task-switching instruction again), and any restartable task to
>> become a nop after its first instantiation (the entry state points at the
>> ret/iret instruction used to exit the task).
>>
>> 32bit Windows in particular is known to use task gates for NMI handling, and
>> to use NMI IPIs.
>>
>> In the task switch handler, distinguish instruction-induced from
>> interrupt/exception-induced task switches, and decode the instruction under
>> %rip to calculate its length.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Juergen Gross <jgross@suse.com>
>>
>> The implementation of svm_get_task_switch_insn_len() is bug-compatible with
>> svm_get_insn_len() when it comes to conditional #GP'ing.  I still haven't had
>> time to address this more thoroughly.
>>
>> AMD does permit TASK_SWITCH not to be intercepted and, I'm informed does do
>> the right thing when it comes to a TSS crossing a page boundary.  However, it
>> is not actually safe to leave task switches unintercepted.  Any NPT or shadow
>> page fault, even from logdirty/paging/etc will corrupt guest state in an
>> unrecoverable manner.
>> ---
>>  xen/arch/x86/hvm/svm/emulate.c        | 55 +++++++++++++++++++++++++++++++++++
>>  xen/arch/x86/hvm/svm/svm.c            | 46 ++++++++++++++++++++++-------
>>  xen/include/asm-x86/hvm/svm/emulate.h |  1 +
>>  3 files changed, 92 insertions(+), 10 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/svm/emulate.c b/xen/arch/x86/hvm/svm/emulate.c
>> index 3e52592847..176c25f60d 100644
>> --- a/xen/arch/x86/hvm/svm/emulate.c
>> +++ b/xen/arch/x86/hvm/svm/emulate.c
>> @@ -117,6 +117,61 @@ unsigned int svm_get_insn_len(struct vcpu *v, unsigned int instr_enc)
>>  }
>>  
>>  /*
>> + * TASK_SWITCH vmexits never provide an instruction length.  We must always
>> + * decode under %rip to find the answer.
>> + */
>> +unsigned int svm_get_task_switch_insn_len(struct vcpu *v)
>> +{
>> +    struct hvm_emulate_ctxt ctxt;
>> +    struct x86_emulate_state *state;
>> +    unsigned int emul_len, modrm_reg;
>> +
>> +    ASSERT(v == current);
>> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
>> +    hvm_emulate_init_per_insn(&ctxt, NULL, 0);
>> +    state = x86_decode_insn(&ctxt.ctxt, hvmemul_insn_fetch);
>> +    if ( IS_ERR_OR_NULL(state) )
> Maybe crash the guest in this case? Not advancing the instruction
> pointer in a software induced task switch will create a loop AFAICT?

Your analysis is correct, but crashing the guest would be a user=>kernel
DoS, which is worse than a livelock.

We do have some logic to try and cope with this in svm.c, and I think
I've got a better idea of how to make use of it.

>
>> +        return 0;
>> +
>> +    emul_len = x86_insn_length(state, &ctxt.ctxt);
>> +
>> +    /*
>> +     * Check for an instruction which can cause a task switch.  Any far
>> +     * jmp/call/ret, any software interrupt/exception, and iret.
>> +     */
>> +    switch ( ctxt.ctxt.opcode )
>> +    {
>> +    case 0xff: /* Grp 5 */
>> +        /* call / jmp (far, absolute indirect) */
>> +        if ( x86_insn_modrm(state, NULL, &modrm_reg) != 3 ||
>> +             (modrm_reg != 3 && modrm_reg != 5) )
>> +        {
>> +            /* Wrong instruction.  Throw #GP back for now. */
>> +    default:
>> +            hvm_inject_hw_exception(TRAP_gp_fault, 0);
>> +            emul_len = 0;
>> +            break;
>> +        }
>> +        /* Fallthrough */
>> +    case 0x62: /* bound */
>> +    case 0x9a: /* call (far, absolute) */
>> I'm slightly lost here, in the case of call or jmp for example, don't
> you need the instruction pointer to point to the destination of the
> call/jmp instead of the next instruction?

No, but that is by design.

Far calls provide a selector:offset pair (either imm or mem operands),
rather than a displacement within the same code segment.

Selector may be a new code selector, at which point offset is important,
and execution continues at %cs:%rip.  This case isn't interesting for
us, and doesn't vmexit in the first place.

When Selector is a Task State Segment, or Task Gate selector, a task
switch occurs (subject to cpl checks, etc).

In this case, the entrypoint of the new task is stashed in the new task's
TSS (cs and eip fields).  The offset from the original call/jmp
instruction is discarded as it isn't relevant.   (After all,
particularly on a privilege level transition task switch, you don't want
the unprivileged caller able to start executing from somewhere which
isn't the designated entrypoint.)

Just to complete the set, selector may also be a Call Gate selector,
which is far lighter weight than a fully blown task switch, and whose
entry point is part of the Call Gate descriptor itself.
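A sketch of the dispatch described above (descriptor kinds and fields here
are illustrative, not Xen's or the architecture's real structures): only a
plain code segment target uses the operand's offset, a task switch takes its
entrypoint from the incoming TSS, and a call gate takes it from the gate
descriptor itself:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative descriptor kinds for "call/jmp sel:offset". */
enum sel_kind { SEL_CODE, SEL_TSS_OR_TASK_GATE, SEL_CALL_GATE };

struct sel_target {
    enum sel_kind kind;
    uint32_t tss_eip;   /* entrypoint stashed in the new task's TSS */
    uint32_t gate_eip;  /* entrypoint held in the call gate itself */
};

/* Where execution continues after the far transfer. */
static uint32_t far_transfer_dest(const struct sel_target *t,
                                  uint32_t offset)
{
    switch ( t->kind )
    {
    case SEL_CODE:
        return offset;      /* the operand's offset is used */
    case SEL_TSS_OR_TASK_GATE:
        return t->tss_eip;  /* the operand's offset is discarded */
    case SEL_CALL_GATE:
        return t->gate_eip; /* entrypoint from the descriptor */
    }
    return 0;
}
```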

>> +    case 0xca: /* ret imm16 (far) */
>> +    case 0xcb: /* ret (far) */
>> +    case 0xcc: /* int3 */
>> +    case 0xcd: /* int imm8 */
>> +    case 0xce: /* into */
>> +    case 0xcf: /* iret */
>> +    case 0xea: /* jmp (far, absolute) */
>> +    case 0xf1: /* icebp */
>> +        break;
>> +    }
>> +
>> +    x86_emulate_free_state(state);
>> +
>> +    return emul_len;
>> +}
>> +
>> +/*
>>   * Local variables:
>>   * mode: C
>>   * c-file-style: "BSD"
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 049b800e20..ba9c24a70c 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2776,7 +2776,41 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>  
>>      case VMEXIT_TASK_SWITCH: {
>>          enum hvm_task_switch_reason reason;
>> -        int32_t errcode = -1;
>> +        int32_t errcode = -1, insn_len = -1;
> Plain int seems better for insn_len?
>
> Also I'm not sure there's a reason that errcode uses int32_t, but
> that's not introduced here anyway.

I was just using what was already here.  I'm not sure why it is int32_t
either, but this is consistent throughout the task switch infrastructure.

~Andrew


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-22 13:39         ` Jan Beulich
@ 2019-11-22 14:51           ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 14:51 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Juergen Gross, Kevin Tian, Wei Liu, Jun Nakajima, Xen-devel,
	Roger Pau Monné

On 22/11/2019 13:39, Jan Beulich wrote:
> On 22.11.2019 14:12, Andrew Cooper wrote:
>> On 22/11/2019 13:08, Jan Beulich wrote:
>>> On 22.11.2019 13:37, Roger Pau Monné  wrote:
>>>> On Thu, Nov 21, 2019 at 10:15:50PM +0000, Andrew Cooper wrote:
>>>>> The VT-x task switch handler adds inst_len to rip before calling
>>>>> hvm_task_switch().  This causes early faults to be delivered to the guest with
>>>>> trap semantics, and breaks restartability.
>>>>>
>>>>> Instead, pass the instruction length into hvm_task_switch() and write it into
>>>>> the outgoing tss only, leaving rip in its original location.
>>>>>
>>>>> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
>>>>> which will be addressed in the following patch.
>>>>>
>>>>> While adjusting call sites, drop the unnecessary uint16_t cast.
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Code LGTM:
>>>>
>>>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> It occurs to me that this also fixes a vmentry failure in the corner
>> case that an instruction which crosses the 4G=>0 boundary takes a
>> fault.  %rip will be adjusted without being truncated.
> I was about to say so in my earlier reply, until I paid attention
> to this
>
> @@ -2987,7 +2987,7 @@ void hvm_task_switch(
>      if ( taskswitch_reason == TSW_iret )
>          eflags &= ~X86_EFLAGS_NT;
>  
> -    tss.eip    = regs->eip;
> +    tss.eip    = regs->eip + insn_len;
>
> together with the subsequent
>
>     regs->rip    = tss.eip;
>
> already having taken care of this aspect before, afaict.

This takes care of things for a task switch which completes
successfully, but not for one which faulted (with the fault ending up
delivered with trap semantics).  In that case, the (now deleted)
regs->rip += inst_len; would have left %rip un-truncated.

~Andrew


* Re: [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task
  2019-11-22 14:31       ` Jan Beulich
@ 2019-11-22 14:55         ` Andrew Cooper
  0 siblings, 0 replies; 19+ messages in thread
From: Andrew Cooper @ 2019-11-22 14:55 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Juergen Gross, Xen-devel, Wei Liu, Roger Pau Monné

On 22/11/2019 14:31, Jan Beulich wrote:
> On 22.11.2019 14:55, Andrew Cooper wrote:
>> On 22/11/2019 13:31, Jan Beulich wrote:
>>> On 21.11.2019 23:15, Andrew Cooper wrote:
>>>> +        /* Fallthrough */
>>>> +    case 0x62: /* bound */
>>> Does "bound" really belong on this list? It raising #BR is like
>>> insns raising random other exceptions, not like INTO / INT3,
>>> where the IDT descriptor also has to have suitable DPL for the
>>> exception to actually get delivered (rather than #GP). I.e. it
>>> shouldn't make it here in the first place, due to the
>>> X86_EVENTTYPE_HW_EXCEPTION check in the caller.
>>>
>>> IOW if "bound" needs to be here, then all others need to be as
>>> well, unless they can't cause any exception at all.
>> More experimentation required.  BOUND doesn't appear to be special cased
>> by SVM, but is by VT-x.  VT-x, however, does throw it in the same category
>> as #UD, and identifies it as a hardware exception.
>>
>> I suspect you are right, and it doesn't want to be here.
>>
>>>> +    case 0x9a: /* call (far, absolute) */
>>>> +    case 0xca: /* ret imm16 (far) */
>>>> +    case 0xcb: /* ret (far) */
>>>> +    case 0xcc: /* int3 */
>>>> +    case 0xcd: /* int imm8 */
>>>> +    case 0xce: /* into */
>>>> +    case 0xcf: /* iret */
>>>> +    case 0xea: /* jmp (far, absolute) */
>>>> +    case 0xf1: /* icebp */
>>> Same perhaps for ICEBP, albeit I'm less certain here, as its
>>> behavior is too poorly documented (if at all).
>> ICEBP's #DB is a trap, not a fault, so instruction length is important.
> Hmm, this may point at a bigger issue then: Single step and data
> breakpoints are traps, too. But of course they can occur with
> arbitrary insns. Do their intercepts occur with guest RIP already
> updated?

Based on other behaviour, I'm going to guess yes on SVM and no on VT-x.

We'll take the #DB intercept, re-inject, and should see a vectoring task
switch.  The type should match the re-inject, so will be SW_INT/EXC with
a length on VT-x, and be HW_EXCEPTION with no length on SVM.

Either way, I think the logic presented here will work correctly.

> (They wouldn't currently make it here anyway because of
> the X86_EVENTTYPE_HW_EXCEPTION check in the caller.) If they do,
> are you sure ICEBP-#DB's doesn't?

ICEBP itself doesn't get intercepted.  Only the resulting #DB does,
which will trigger a #DB-vectoring task switch, irrespective of its
exact origin.

~Andrew


* Re: [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures
  2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
  2019-11-22 12:37   ` Roger Pau Monné
@ 2019-11-25  8:23   ` Tian, Kevin
  1 sibling, 0 replies; 19+ messages in thread
From: Tian, Kevin @ 2019-11-25  8:23 UTC (permalink / raw)
  To: Andrew Cooper, Xen-devel
  Cc: Nakajima, Jun, Juergen Gross, Wei Liu, Jan Beulich, Roger Pau Monné

> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
> Sent: Friday, November 22, 2019 6:16 AM
> 
> The VT-x task switch handler adds inst_len to rip before calling
> hvm_task_switch().  This causes early faults to be delivered to the guest
> with
> trap semantics, and breaks restartability.
> 
> Instead, pass the instruction length into hvm_task_switch() and write it into
> the outgoing tss only, leaving rip in its original location.
> 
> For now, pass 0 on the SVM side.  This highlights a separate preexisting bug
> which will be addressed in the following patch.
> 
> While adjusting call sites, drop the unnecessary uint16_t cast.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

end of thread, other threads:[~2019-11-25  8:23 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-21 22:15 [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Andrew Cooper
2019-11-21 22:15 ` [Xen-devel] [PATCH 1/2] x86/vtx: Fix fault semantics for early task switch failures Andrew Cooper
2019-11-22 12:37   ` Roger Pau Monné
2019-11-22 12:43     ` Andrew Cooper
2019-11-22 13:08     ` Jan Beulich
2019-11-22 13:12       ` Andrew Cooper
2019-11-22 13:39         ` Jan Beulich
2019-11-22 14:51           ` Andrew Cooper
2019-11-25  8:23   ` Tian, Kevin
2019-11-21 22:15 ` [Xen-devel] [PATCH 2/2] x86/svm: Write the correct %eip into the outgoing task Andrew Cooper
2019-11-22 13:10   ` Andrew Cooper
2019-11-22 13:31   ` Jan Beulich
2019-11-22 13:55     ` Andrew Cooper
2019-11-22 14:31       ` Jan Beulich
2019-11-22 14:55         ` Andrew Cooper
2019-11-22 13:59   ` Roger Pau Monné
2019-11-22 14:39     ` Andrew Cooper
2019-11-22 10:23 ` [Xen-devel] [PATCH for-4.13 0/2] x86/hvm: Multiple corrections to task switch handling Roger Pau Monné
2019-11-22 10:25   ` Andrew Cooper
