* [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C
@ 2018-02-27 14:50 Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
                   ` (4 more replies)
  0 siblings, 5 replies; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

This has finally undergone performance testing.  No measurable difference in
any lmbench test on either 32 or 64bit PV guests.

Andrew Cooper (5):
  x86/entry: Correct comparisons against boolean variables
  x86/pv: Drop int80_bounce from struct pv_vcpu
  x86/pv: Introduce pv_create_exception_frame()
  x86/pv: Drop {compat_,}create_bounce_frame() and use the C version instead
  x86/pv: Implement the failsafe callback using the general path

 xen/arch/x86/domain.c              | 102 ++-----------------
 xen/arch/x86/pv/callback.c         |   8 --
 xen/arch/x86/pv/traps.c            | 176 +++++++++++++++++++++++++++++---
 xen/arch/x86/traps.c               |  23 -----
 xen/arch/x86/x86_64/asm-offsets.c  |   1 -
 xen/arch/x86/x86_64/compat/entry.S | 128 ++---------------------
 xen/arch/x86/x86_64/entry.S        | 203 ++++++++-----------------------------
 xen/include/asm-x86/domain.h       |   1 -
 xen/include/asm-x86/processor.h    |   3 +-
 xen/include/xen/sched.h            |   7 --
 10 files changed, 224 insertions(+), 428 deletions(-)

-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel


* [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables
  2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
@ 2018-02-27 14:50 ` Andrew Cooper
  2018-03-02 16:27   ` Jan Beulich
  2018-03-02 17:32   ` Wei Liu
  2018-02-27 14:50 ` [PATCH v2 2/5] x86/pv: Drop int80_bounce from struct pv_vcpu Andrew Cooper
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The correct way to check a boolean is `cmpb $0` or `testb $0xff`, whereas a
lot of our entry code uses `testb $1`.  The latter happens to work for values
which are real C _Bool objects (always 0 or 1), but gives the wrong answer for
other integer types which are merely intended to have boolean properties.

cmp is the more logical way of thinking about the operation, so adjust all
outstanding uses of `testb $1` against boolean values.  Changing test to cmp
changes the logical mnemonic of the following condition from 'zero' to
'equal', but the actual encoding remains the same.

No functional change, as all uses are real C _Bool types, and confirmed by
diffing the disassembly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

New in v2
---
 xen/arch/x86/x86_64/compat/entry.S |  8 ++++----
 xen/arch/x86/x86_64/entry.S        | 28 ++++++++++++++--------------
 2 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 458d810..3e8b6c1 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -41,11 +41,11 @@ ENTRY(compat_test_all_events)
         leaq  irq_stat+IRQSTAT_softirq_pending(%rip),%rcx
         cmpl  $0,(%rcx,%rax,1)
         jne   compat_process_softirqs
-        testb $1,VCPU_mce_pending(%rbx)
-        jnz   compat_process_mce
+        cmpb  $0, VCPU_mce_pending(%rbx)
+        jne   compat_process_mce
 .Lcompat_test_guest_nmi:
-        testb $1,VCPU_nmi_pending(%rbx)
-        jnz   compat_process_nmi
+        cmpb  $0, VCPU_nmi_pending(%rbx)
+        jne   compat_process_nmi
 compat_test_guest_events:
         movq  VCPU_vcpu_info(%rbx),%rax
         movzwl COMPAT_VCPUINFO_upcall_pending(%rax),%eax
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 941f06f..6249efe 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -190,11 +190,11 @@ test_all_events:
         leaq  irq_stat+IRQSTAT_softirq_pending(%rip),%rcx
         cmpl  $0,(%rcx,%rax,1)
         jne   process_softirqs
-        testb $1,VCPU_mce_pending(%rbx)
-        jnz   process_mce
+        cmpb  $0, VCPU_mce_pending(%rbx)
+        jne   process_mce
 .Ltest_guest_nmi:
-        testb $1,VCPU_nmi_pending(%rbx)
-        jnz   process_nmi
+        cmpb  $0, VCPU_nmi_pending(%rbx)
+        jne   process_nmi
 test_guest_events:
         movq  VCPU_vcpu_info(%rbx),%rax
         movzwl VCPUINFO_upcall_pending(%rax),%eax
@@ -305,8 +305,8 @@ UNLIKELY_END(sysenter_gpf)
         movq  VCPU_domain(%rbx),%rdi
         movq  %rax,TRAPBOUNCE_eip(%rdx)
         movb  %cl,TRAPBOUNCE_flags(%rdx)
-        testb $1,DOMAIN_is_32bit_pv(%rdi)
-        jnz   compat_sysenter
+        cmpb  $0, DOMAIN_is_32bit_pv(%rdi)
+        jne   compat_sysenter
         jmp   .Lbounce_exception
 
 ENTRY(int80_direct_trap)
@@ -342,8 +342,8 @@ UNLIKELY_END(msi_check)
         jz    int80_slow_path
 
         movq  VCPU_domain(%rbx),%rax
-        testb $1,DOMAIN_is_32bit_pv(%rax)
-        jnz   compat_int80_direct_trap
+        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
+        jne   compat_int80_direct_trap
 
         call  create_bounce_frame
         jmp   test_all_events
@@ -484,8 +484,8 @@ ENTRY(dom_crash_sync_extable)
         # create_bounce_frame() temporarily clobbers CS.RPL. Fix up.
         movq  STACK_CPUINFO_FIELD(current_vcpu)(%rax), %rax
         movq  VCPU_domain(%rax),%rax
-        testb $1,DOMAIN_is_32bit_pv(%rax)
-        setz  %al
+        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
+        sete  %al
         leal  (%rax,%rax,2),%eax
         orb   %al,UREGS_cs(%rsp)
         xorl  %edi,%edi
@@ -529,8 +529,8 @@ ENTRY(ret_from_intr)
         testb $3,UREGS_cs(%rsp)
         jz    restore_all_xen
         movq  VCPU_domain(%rbx),%rax
-        testb $1,DOMAIN_is_32bit_pv(%rax)
-        jz    test_all_events
+        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
+        je    test_all_events
         jmp   compat_test_all_events
 
 ENTRY(page_fault)
@@ -629,8 +629,8 @@ handle_exception_saved:
         jz    restore_all_xen
         leaq  VCPU_trap_bounce(%rbx),%rdx
         movq  VCPU_domain(%rbx),%rax
-        testb $1,DOMAIN_is_32bit_pv(%rax)
-        jnz   compat_post_handle_exception
+        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
+        jne   compat_post_handle_exception
         testb $TBF_EXCEPTION,TRAPBOUNCE_flags(%rdx)
         jz    test_all_events
 .Lbounce_exception:
-- 
2.1.4



* [PATCH v2 2/5] x86/pv: Drop int80_bounce from struct pv_vcpu
  2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
@ 2018-02-27 14:50 ` Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame() Andrew Cooper
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper

The int80_bounce field of struct pv_vcpu is a bit of an odd special case,
because it is a simple derivation of trap_ctxt[0x80], which is also stored.

It is also the only use of {compat_,}create_bounce_frame() which isn't
referencing the plain trap_bounce field of struct pv_vcpu.  (And altering this
property is the purpose of this patch.)

Remove the int80_bounce field entirely, along with init_int80_direct_trap(),
which in turn requires that the int80_direct_trap() path gain logic previously
contained in init_int80_direct_trap().

This does admittedly make the int80 fastpath slightly longer, but these few
instructions are in the noise compared to the architectural context switch
overhead, and it now matches the syscall/sysenter paths (which have far less
architectural overhead already).

No behavioural change from the guest's point of view.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
v2:
 * Fix comparason against DOMAIN_is_32bit_pv
---
 xen/arch/x86/domain.c             |  2 --
 xen/arch/x86/pv/callback.c        |  8 --------
 xen/arch/x86/pv/traps.c           | 14 --------------
 xen/arch/x86/x86_64/asm-offsets.c |  1 -
 xen/arch/x86/x86_64/entry.S       | 32 ++++++++++++++++++++++++++++----
 xen/include/asm-x86/domain.h      |  1 -
 xen/include/asm-x86/processor.h   |  2 --
 7 files changed, 28 insertions(+), 32 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 1f8b08e..69679a6 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -892,8 +892,6 @@ int arch_set_info_guest(
         goto out;
     }
 
-    init_int80_direct_trap(v);
-
     /* IOPL privileges are virtualised. */
     v->arch.pv_vcpu.iopl = v->arch.user_regs.eflags & X86_EFLAGS_IOPL;
     v->arch.user_regs.eflags &= ~X86_EFLAGS_IOPL;
diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 97d8438..29ae692 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -371,7 +371,6 @@ long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
     if ( guest_handle_is_null(traps) )
     {
         memset(dst, 0, NR_VECTORS * sizeof(*dst));
-        init_int80_direct_trap(curr);
         return 0;
     }
 
@@ -393,9 +392,6 @@ long do_set_trap_table(XEN_GUEST_HANDLE_PARAM(const_trap_info_t) traps)
 
         memcpy(&dst[cur.vector], &cur, sizeof(cur));
 
-        if ( cur.vector == 0x80 )
-            init_int80_direct_trap(curr);
-
         guest_handle_add_offset(traps, 1);
 
         if ( hypercall_preempt_check() )
@@ -420,7 +416,6 @@ int compat_set_trap_table(XEN_GUEST_HANDLE(trap_info_compat_t) traps)
     if ( guest_handle_is_null(traps) )
     {
         memset(dst, 0, NR_VECTORS * sizeof(*dst));
-        init_int80_direct_trap(curr);
         return 0;
     }
 
@@ -439,9 +434,6 @@ int compat_set_trap_table(XEN_GUEST_HANDLE(trap_info_compat_t) traps)
 
         XLAT_trap_info(dst + cur.vector, &cur);
 
-        if ( cur.vector == 0x80 )
-            init_int80_direct_trap(curr);
-
         guest_handle_add_offset(traps, 1);
 
         if ( hypercall_preempt_check() )
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index d122881..98549bc 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -142,20 +142,6 @@ bool set_guest_nmi_trapbounce(void)
     return !null_trap_bounce(curr, tb);
 }
 
-void init_int80_direct_trap(struct vcpu *v)
-{
-    struct trap_info *ti = &v->arch.pv_vcpu.trap_ctxt[0x80];
-    struct trap_bounce *tb = &v->arch.pv_vcpu.int80_bounce;
-
-    tb->cs  = ti->cs;
-    tb->eip = ti->address;
-
-    if ( null_trap_bounce(v, tb) )
-        tb->flags = 0;
-    else
-        tb->flags = TBF_EXCEPTION | (TI_GET_IF(ti) ? TBF_INTERRUPT : 0);
-}
-
 struct softirq_trap {
     struct domain *domain;   /* domain to inject trap */
     struct vcpu *vcpu;       /* vcpu to inject trap */
diff --git a/xen/arch/x86/x86_64/asm-offsets.c b/xen/arch/x86/x86_64/asm-offsets.c
index e6d4147..1a45428 100644
--- a/xen/arch/x86/x86_64/asm-offsets.c
+++ b/xen/arch/x86/x86_64/asm-offsets.c
@@ -61,7 +61,6 @@ void __dummy__(void)
     OFFSET(VCPU_domain, struct vcpu, domain);
     OFFSET(VCPU_vcpu_info, struct vcpu, vcpu_info);
     OFFSET(VCPU_trap_bounce, struct vcpu, arch.pv_vcpu.trap_bounce);
-    OFFSET(VCPU_int80_bounce, struct vcpu, arch.pv_vcpu.int80_bounce);
     OFFSET(VCPU_thread_flags, struct vcpu, arch.flags);
     OFFSET(VCPU_event_addr, struct vcpu, arch.pv_vcpu.event_callback_eip);
     OFFSET(VCPU_event_sel, struct vcpu, arch.pv_vcpu.event_callback_cs);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 6249efe..bf41563 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -336,12 +336,36 @@ UNLIKELY_END(msi_check)
 
         movq  STACK_CPUINFO_FIELD(current_vcpu)(%rbx), %rbx
 
-        /* Check that the callback is non-null. */
-        leaq  VCPU_int80_bounce(%rbx),%rdx
-        cmpb  $0,TRAPBOUNCE_flags(%rdx)
+        mov   VCPU_trap_ctxt(%rbx), %rsi
+        mov   VCPU_domain(%rbx), %rax
+
+        /*
+         * if ( null_trap_bounce(v, &v->arch.pv_vcpu.trap_ctxt[0x80]) )
+         *    goto int80_slow_path;
+         */
+        mov    0x80 * TRAPINFO_sizeof + TRAPINFO_eip(%rsi), %rdi
+        movzwl 0x80 * TRAPINFO_sizeof + TRAPINFO_cs (%rsi), %ecx
+
+        mov   %ecx, %edx
+        and   $~3, %edx
+
+        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
+        cmove %rdi, %rdx
+
+        test  %rdx, %rdx
         jz    int80_slow_path
 
-        movq  VCPU_domain(%rbx),%rax
+        /* Construct trap_bounce from trap_ctxt[0x80]. */
+        lea   VCPU_trap_bounce(%rbx), %rdx
+        movw  %cx, TRAPBOUNCE_cs(%rdx)
+        movq  %rdi, TRAPBOUNCE_eip(%rdx)
+
+        /* TB_flags = TBF_EXCEPTION | (TI_GET_IF(ti) ? TBF_INTERRUPT : 0); */
+        testb $4, 0x80 * TRAPINFO_sizeof + TRAPINFO_flags(%rsi)
+        setnz %cl
+        lea   TBF_EXCEPTION(, %rcx, TBF_INTERRUPT), %ecx
+        mov   %cl, TRAPBOUNCE_flags(%rdx)
+
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         jne   compat_int80_direct_trap
 
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 4679d54..47aadc2 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -482,7 +482,6 @@ struct pv_vcpu
 
     /* Bounce information for propagating an exception to guest OS. */
     struct trap_bounce trap_bounce;
-    struct trap_bounce int80_bounce;
 
     /* I/O-port access bitmap. */
     XEN_GUEST_HANDLE(uint8) iobmp; /* Guest kernel vaddr of the bitmap. */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 9c70a98..01bc89f 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -439,8 +439,6 @@ extern idt_entry_t *idt_tables[];
 DECLARE_PER_CPU(struct tss_struct, init_tss);
 DECLARE_PER_CPU(root_pgentry_t *, root_pgt);
 
-extern void init_int80_direct_trap(struct vcpu *v);
-
 extern void write_ptbase(struct vcpu *v);
 
 /* REP NOP (PAUSE) is a good thing to insert into busy-wait loops. */
-- 
2.1.4



* [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame()
  2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 2/5] x86/pv: Drop int80_bounce from struct pv_vcpu Andrew Cooper
@ 2018-02-27 14:50 ` Andrew Cooper
  2018-03-02 16:44   ` Jan Beulich
  2018-02-27 14:50 ` [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead Andrew Cooper
  2018-02-27 14:50 ` [PATCH v2 5/5] x86/pv: Implement the failsafe callback using the general path Andrew Cooper
  4 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

This is a C implementation of {compat_,}create_bounce_frame(), based loosely
on the existing failsafe implementation in load_segments().  It picks up all
injection information from the trap_bounce structure.

One minor improvement is that at no point is regs->cs left with an rpl of 0 on
the root stack frame.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Use domain_crash() rather than domain_crash_sync().  All callers
   immediately continue to {compat_}test_all_events
 * Count the number of frame[] entries correctly
 * Consistently use 64bit operations when adjusting the root frame
 * Introduce a compat_addr_ok() check for the 32bit side.  The ASM version
   didn't have protection attempting to write into the compat p2m, other than
   hitting a #PF while trying.
---
 xen/arch/x86/pv/traps.c | 143 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index 98549bc..b7d7d2b 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -245,6 +245,149 @@ int pv_raise_interrupt(struct vcpu *v, uint8_t vector)
 }
 
 /*
+ * This function emulates the behaviour of hardware when Xen needs to inject
+ * an event into a guest.
+ *
+ * It switches from user mode to kernel mode if necessary, writes an
+ * appropriate hardware exception frame (including Xen-specific extras), and
+ * alters the root stack frame to invoke the guest kernel's correct entry
+ * point on exit from the hypervisor.
+ */
+void pv_create_exception_frame(void)
+{
+    struct vcpu *curr = current;
+    struct trap_bounce *tb = &curr->arch.pv_vcpu.trap_bounce;
+    struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const bool user_mode_frame = !guest_kernel_mode(curr, regs);
+    uint8_t *evt_mask = &vcpu_info(curr, evtchn_upcall_mask);
+    unsigned int flags, bytes, missing;
+
+    ASSERT_NOT_IN_ATOMIC();
+
+    if ( unlikely(null_trap_bounce(curr, tb)) )
+    {
+        gprintk(XENLOG_ERR, "Fatal: Attempting to inject null trap bounce\n");
+        domain_crash(curr->domain);
+        return;
+    }
+
+    /* Fold the upcall mask and architectural IOPL into the guest's rflags. */
+    flags  = regs->rflags & ~(X86_EFLAGS_IF | X86_EFLAGS_IOPL);
+    flags |= ((*evt_mask ? 0 : X86_EFLAGS_IF) |
+              (VM_ASSIST(curr->domain, architectural_iopl)
+               ? curr->arch.pv_vcpu.iopl : 0));
+
+    if ( is_pv_32bit_vcpu(curr) )
+    {
+        /* { [ERRCODE,] EIP, CS/MASK , EFLAGS, [ESP, SS] } */
+        unsigned int frame[6], *ptr = frame, ksp =
+            (user_mode_frame ? curr->arch.pv_vcpu.kernel_sp : regs->esp);
+
+        if ( tb->flags & TBF_EXCEPTION_ERRCODE )
+            *ptr++ = tb->error_code;
+
+        *ptr++ = regs->eip;
+        *ptr++ = regs->cs | ((unsigned int)*evt_mask << 16);
+        *ptr++ = flags;
+
+        if ( user_mode_frame )
+        {
+            *ptr++ = regs->esp;
+            *ptr++ = regs->ss;
+        }
+
+        /* Copy the constructed frame to the guest kernel stack. */
+        bytes = _p(ptr) - _p(frame);
+        ksp -= bytes;
+
+        if ( unlikely(!__compat_access_ok(curr->domain, ksp, bytes)) )
+        {
+            gprintk(XENLOG_ERR, "Fatal: Bad guest kernel stack %p\n", _p(ksp));
+            domain_crash(curr->domain);
+            return;
+        }
+
+        if ( unlikely((missing = __copy_to_user(_p(ksp), frame, bytes)) != 0) )
+        {
+            gprintk(XENLOG_ERR, "Fatal: Fault while writing exception frame\n");
+            show_page_walk(ksp + missing);
+            domain_crash(curr->domain);
+            return;
+        }
+
+        /* Rewrite our stack frame. */
+        regs->rip           = (uint32_t)tb->eip;
+        regs->cs            = tb->cs;
+        regs->rflags       &= ~(X86_EFLAGS_VM | X86_EFLAGS_RF |
+                                X86_EFLAGS_NT | X86_EFLAGS_TF);
+        regs->rsp           = ksp;
+        if ( user_mode_frame )
+            regs->ss = curr->arch.pv_vcpu.kernel_ss;
+    }
+    else
+    {
+        /* { RCX, R11, [ERRCODE,] RIP, CS/MASK, RFLAGS, RSP, SS } */
+        unsigned long frame[8], *ptr = frame, ksp =
+            (user_mode_frame ? curr->arch.pv_vcpu.kernel_sp : regs->rsp) & ~0xf;
+
+        if ( user_mode_frame )
+            toggle_guest_mode(curr);
+
+        *ptr++ = regs->rcx;
+        *ptr++ = regs->r11;
+
+        if ( tb->flags & TBF_EXCEPTION_ERRCODE )
+            *ptr++ = tb->error_code;
+
+        *ptr++ = regs->rip;
+        *ptr++ = ((user_mode_frame ? regs->cs : regs->cs & ~3) |
+                  ((unsigned long)*evt_mask << 32));
+        *ptr++ = flags;
+        *ptr++ = regs->rsp;
+        *ptr++ = regs->ss;
+
+        /* Copy the constructed frame to the guest kernel stack. */
+        bytes = _p(ptr) - _p(frame);
+        ksp -= bytes;
+
+        if ( unlikely(!__addr_ok(ksp)) )
+        {
+            gprintk(XENLOG_ERR, "Fatal: Bad guest kernel stack %p\n", _p(ksp));
+            domain_crash(curr->domain);
+            return;
+        }
+
+        if ( unlikely((missing = __copy_to_user(_p(ksp), frame, bytes)) != 0) )
+        {
+            gprintk(XENLOG_ERR, "Fatal: Fault while writing exception frame\n");
+            show_page_walk(ksp + missing);
+            domain_crash(curr->domain);
+            return;
+        }
+
+        /* Rewrite our stack frame. */
+        regs->entry_vector |= TRAP_syscall;
+        regs->rip           = tb->eip;
+        regs->cs            = FLAT_KERNEL_CS;
+        regs->rflags       &= ~(X86_EFLAGS_AC | X86_EFLAGS_VM | X86_EFLAGS_RF |
+                                X86_EFLAGS_NT | X86_EFLAGS_TF);
+        regs->rsp           = ksp;
+        regs->ss            = FLAT_KERNEL_SS;
+    }
+
+    /* Mask events if requested. */
+    if ( tb->flags & TBF_INTERRUPT )
+        *evt_mask = 1;
+
+    /*
+     * Clobber the injection information now it has been completed.  Buggy
+     * attempts to inject the same event twice will hit the null_trap_bounce()
+     * check above.
+     */
+    *tb = (struct trap_bounce){};
+}
+
+/*
  * Local variables:
  * mode: C
  * c-file-style: "BSD"
-- 
2.1.4



* [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead
  2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
                   ` (2 preceding siblings ...)
  2018-02-27 14:50 ` [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame() Andrew Cooper
@ 2018-02-27 14:50 ` Andrew Cooper
  2018-03-05  9:27   ` Jan Beulich
  2018-02-27 14:50 ` [PATCH v2 5/5] x86/pv: Implement the failsafe callback using the general path Andrew Cooper
  4 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Jan Beulich, Roger Pau Monné

The clobbering of TRAPBOUNCE_flags in .L{compat_}bounce_exception is subsumed
by the logic at the end of pv_create_exception_frame().

This cleanup removes all callers of asm_domain_crash_synchronous(), which is
therefore dropped as well.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Remove redundant lea's
---
 xen/arch/x86/traps.c               |  23 ------
 xen/arch/x86/x86_64/compat/entry.S | 120 ++----------------------------
 xen/arch/x86/x86_64/entry.S        | 147 ++-----------------------------------
 xen/include/xen/sched.h            |   7 --
 4 files changed, 10 insertions(+), 287 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 27190e0..a9b53da 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2103,29 +2103,6 @@ long set_debugreg(struct vcpu *v, unsigned int reg, unsigned long value)
     return 0;
 }
 
-void asm_domain_crash_synchronous(unsigned long addr)
-{
-    /*
-     * We need clear AC bit here because in entry.S AC is set
-     * by ASM_STAC to temporarily allow accesses to user pages
-     * which is prevented by SMAP by default.
-     *
-     * For some code paths, where this function is called, clac()
-     * is not needed, but adding clac() here instead of each place
-     * asm_domain_crash_synchronous() is called can reduce the code
-     * redundancy, and it is harmless as well.
-     */
-    clac();
-
-    if ( addr == 0 )
-        addr = this_cpu(last_extable_addr);
-
-    printk("domain_crash_sync called from entry.S: fault at %p %pS\n",
-           _p(addr), _p(addr));
-
-    __domain_crash_synchronous();
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 3e8b6c1..4f681bf 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -60,7 +60,7 @@ compat_test_guest_events:
         movl  VCPU_event_sel(%rbx),%eax
         movw  %ax,TRAPBOUNCE_cs(%rdx)
         movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
-        call  compat_create_bounce_frame
+        call  pv_create_exception_frame
         jmp   compat_test_all_events
 
         ALIGN
@@ -102,8 +102,7 @@ compat_process_nmi:
         movb %dl,VCPU_async_exception_mask(%rbx)
         /* FALLTHROUGH */
 compat_process_trap:
-        leaq  VCPU_trap_bounce(%rbx),%rdx
-        call  compat_create_bounce_frame
+        call  pv_create_exception_frame
         jmp   compat_test_all_events
 
 /* %rbx: struct vcpu, interrupts disabled */
@@ -196,8 +195,7 @@ ENTRY(compat_post_handle_exception)
         testb $TBF_EXCEPTION,TRAPBOUNCE_flags(%rdx)
         jz    compat_test_all_events
 .Lcompat_bounce_exception:
-        call  compat_create_bounce_frame
-        movb  $0,TRAPBOUNCE_flags(%rdx)
+        call  pv_create_exception_frame
         jmp   compat_test_all_events
 
 /* See lstar_enter for entry register state. */
@@ -264,118 +262,10 @@ ENTRY(compat_sysenter)
         movl  $FLAT_COMPAT_USER_SS,UREGS_ss(%rsp)
         cmovzl %ecx,%eax
         movw  %ax,TRAPBOUNCE_cs(%rdx)
-        call  compat_create_bounce_frame
+        call  pv_create_exception_frame
         jmp   compat_test_all_events
 
 ENTRY(compat_int80_direct_trap)
         CR4_PV32_RESTORE
-        call  compat_create_bounce_frame
+        call  pv_create_exception_frame
         jmp   compat_test_all_events
-
-        /* compat_create_bounce_frame & helpers don't need to be in .text.entry */
-        .text
-
-/* CREATE A BASIC EXCEPTION FRAME ON GUEST OS (RING-1) STACK:            */
-/*   {[ERRCODE,] EIP, CS, EFLAGS, [ESP, SS]}                             */
-/* %rdx: trap_bounce, %rbx: struct vcpu                                  */
-/* On return only %rbx and %rdx are guaranteed non-clobbered.            */
-compat_create_bounce_frame:
-        ASSERT_INTERRUPTS_ENABLED
-        mov   %fs,%edi
-        ASM_STAC
-        testb $2,UREGS_cs+8(%rsp)
-        jz    1f
-        /* Push new frame at registered guest-OS stack base. */
-        movl  VCPU_kernel_sp(%rbx),%esi
-.Lft1:  mov   VCPU_kernel_ss(%rbx),%fs
-        subl  $2*4,%esi
-        movl  UREGS_rsp+8(%rsp),%eax
-.Lft2:  movl  %eax,%fs:(%rsi)
-        movl  UREGS_ss+8(%rsp),%eax
-.Lft3:  movl  %eax,%fs:4(%rsi)
-        jmp   2f
-1:      /* In kernel context already: push new frame at existing %rsp. */
-        movl  UREGS_rsp+8(%rsp),%esi
-.Lft4:  mov   UREGS_ss+8(%rsp),%fs
-2:
-        movq  VCPU_domain(%rbx),%r8
-        subl  $3*4,%esi
-        movq  VCPU_vcpu_info(%rbx),%rax
-        pushq COMPAT_VCPUINFO_upcall_mask(%rax)
-        testb $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
-        setnz %ch                       # TBF_INTERRUPT -> set upcall mask
-        orb   %ch,COMPAT_VCPUINFO_upcall_mask(%rax)
-        popq  %rax
-        shll  $16,%eax                  # Bits 16-23: saved_upcall_mask
-        movw  UREGS_cs+8(%rsp),%ax      # Bits  0-15: CS
-.Lft5:  movl  %eax,%fs:4(%rsi)          # CS / saved_upcall_mask
-        shrl  $16,%eax
-        testb %al,%al                   # Bits 0-7: saved_upcall_mask
-        setz  %ch                       # %ch == !saved_upcall_mask
-        movl  UREGS_eflags+8(%rsp),%eax
-        andl  $~(X86_EFLAGS_IF|X86_EFLAGS_IOPL),%eax
-        addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
-        orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
-        xorl  %ecx,%ecx                 # if ( VM_ASSIST(v->domain, architectural_iopl) )
-        testb $1 << VMASST_TYPE_architectural_iopl,DOMAIN_vm_assist(%r8)
-        cmovnzl VCPU_iopl(%rbx),%ecx    # Bits 13:12 (EFLAGS.IOPL)
-        orl   %ecx,%eax                 # Fold EFLAGS.IOPL into %eax
-.Lft6:  movl  %eax,%fs:2*4(%rsi)        # EFLAGS
-        movl  UREGS_rip+8(%rsp),%eax
-.Lft7:  movl  %eax,%fs:(%rsi)           # EIP
-        testb $TBF_EXCEPTION_ERRCODE,TRAPBOUNCE_flags(%rdx)
-        jz    1f
-        subl  $4,%esi
-        movl  TRAPBOUNCE_error_code(%rdx),%eax
-.Lft8:  movl  %eax,%fs:(%rsi)           # ERROR CODE
-1:
-        ASM_CLAC
-        /* Rewrite our stack frame and return to guest-OS mode. */
-        /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
-        andl  $~(X86_EFLAGS_VM|X86_EFLAGS_RF|\
-                 X86_EFLAGS_NT|X86_EFLAGS_TF),UREGS_eflags+8(%rsp)
-        mov   %fs,UREGS_ss+8(%rsp)
-        movl  %esi,UREGS_rsp+8(%rsp)
-.Lft13: mov   %edi,%fs
-        movzwl TRAPBOUNCE_cs(%rdx),%eax
-        /* Null selectors (0-3) are not allowed. */
-        testl $~3,%eax
-UNLIKELY_START(z, compat_bounce_null_selector)
-        lea   UNLIKELY_DISPATCH_LABEL(compat_bounce_null_selector)(%rip), %rdi
-        jmp   asm_domain_crash_synchronous  /* Does not return */
-__UNLIKELY_END(compat_bounce_null_selector)
-        movl  %eax,UREGS_cs+8(%rsp)
-        movl  TRAPBOUNCE_eip(%rdx),%eax
-        movl  %eax,UREGS_rip+8(%rsp)
-        ret
-.section .fixup,"ax"
-.Lfx13:
-        xorl  %edi,%edi
-        jmp   .Lft13
-.previous
-        _ASM_EXTABLE(.Lft1,  dom_crash_sync_extable)
-        _ASM_EXTABLE(.Lft2,  compat_crash_page_fault)
-        _ASM_EXTABLE(.Lft3,  compat_crash_page_fault_4)
-        _ASM_EXTABLE(.Lft4,  dom_crash_sync_extable)
-        _ASM_EXTABLE(.Lft5,  compat_crash_page_fault_4)
-        _ASM_EXTABLE(.Lft6,  compat_crash_page_fault_8)
-        _ASM_EXTABLE(.Lft7,  compat_crash_page_fault)
-        _ASM_EXTABLE(.Lft8,  compat_crash_page_fault)
-        _ASM_EXTABLE(.Lft13, .Lfx13)
-
-compat_crash_page_fault_8:
-        addl  $4,%esi
-compat_crash_page_fault_4:
-        addl  $4,%esi
-compat_crash_page_fault:
-.Lft14: mov   %edi,%fs
-        ASM_CLAC
-        movl  %esi,%edi
-        call  show_page_walk
-        jmp   dom_crash_sync_extable
-.section .fixup,"ax"
-.Lfx14:
-        xorl  %edi,%edi
-        jmp   .Lft14
-.previous
-        _ASM_EXTABLE(.Lft14, .Lfx14)
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index bf41563..9baf54b 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -32,7 +32,7 @@ ENTRY(switch_to_kernel)
         setc  %cl
         leal  (,%rcx,TBF_INTERRUPT),%ecx
         movb  %cl,TRAPBOUNCE_flags(%rdx)
-        call  create_bounce_frame
+        call  pv_create_exception_frame
         andl  $~X86_EFLAGS_DF,UREGS_eflags(%rsp)
         jmp   test_all_events
 
@@ -207,7 +207,7 @@ test_guest_events:
         movq  VCPU_event_addr(%rbx),%rax
         movq  %rax,TRAPBOUNCE_eip(%rdx)
         movb  $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
-        call  create_bounce_frame
+        call  pv_create_exception_frame
         jmp   test_all_events
 
         ALIGN
@@ -249,8 +249,7 @@ process_nmi:
         movb %dl,VCPU_async_exception_mask(%rbx)
         /* FALLTHROUGH */
 process_trap:
-        leaq VCPU_trap_bounce(%rbx),%rdx
-        call create_bounce_frame
+        call pv_create_exception_frame
         jmp  test_all_events
 
 ENTRY(sysenter_entry)
@@ -369,7 +368,7 @@ UNLIKELY_END(msi_check)
         cmpb  $0, DOMAIN_is_32bit_pv(%rax)
         jne   compat_int80_direct_trap
 
-        call  create_bounce_frame
+        call  pv_create_exception_frame
         jmp   test_all_events
 
 int80_slow_path:
@@ -383,141 +382,6 @@ int80_slow_path:
         subq  $2,UREGS_rip(%rsp)
         jmp   handle_exception_saved
 
-        /* create_bounce_frame & helpers don't need to be in .text.entry */
-        .text
-
-/* CREATE A BASIC EXCEPTION FRAME ON GUEST OS STACK:                     */
-/*   { RCX, R11, [ERRCODE,] RIP, CS, RFLAGS, RSP, SS }                   */
-/* %rdx: trap_bounce, %rbx: struct vcpu                                  */
-/* On return only %rbx and %rdx are guaranteed non-clobbered.            */
-create_bounce_frame:
-        ASSERT_INTERRUPTS_ENABLED
-        testb $TF_kernel_mode,VCPU_thread_flags(%rbx)
-        jnz   1f
-        /* Push new frame at registered guest-OS stack base. */
-        pushq %rdx
-        movq  %rbx,%rdi
-        call  toggle_guest_mode
-        popq  %rdx
-        movq  VCPU_kernel_sp(%rbx),%rsi
-        jmp   2f
-1:      /* In kernel context already: push new frame at existing %rsp. */
-        movq  UREGS_rsp+8(%rsp),%rsi
-        andb  $0xfc,UREGS_cs+8(%rsp)    # Indicate kernel context to guest.
-2:      andq  $~0xf,%rsi                # Stack frames are 16-byte aligned.
-        movq  $HYPERVISOR_VIRT_START+1,%rax
-        cmpq  %rax,%rsi
-        movq  $HYPERVISOR_VIRT_END+8*8,%rax
-        sbb   %ecx,%ecx                 # In +ve address space? Then okay.
-        cmpq  %rax,%rsi
-        adc   %ecx,%ecx                 # Above Xen private area? Then okay.
-UNLIKELY_START(g, create_bounce_frame_bad_sp)
-        lea   UNLIKELY_DISPATCH_LABEL(create_bounce_frame_bad_sp)(%rip), %rdi
-        jmp   asm_domain_crash_synchronous  /* Does not return */
-__UNLIKELY_END(create_bounce_frame_bad_sp)
-
-#define STORE_GUEST_STACK(reg, n) \
-0:      movq  %reg,(n)*8(%rsi); \
-        _ASM_EXTABLE(0b, domain_crash_page_fault_ ## n ## x8)
-
-        subq  $7*8,%rsi
-        movq  UREGS_ss+8(%rsp),%rax
-        ASM_STAC
-        movq  VCPU_domain(%rbx),%rdi
-        STORE_GUEST_STACK(rax,6)        # SS
-        movq  UREGS_rsp+8(%rsp),%rax
-        STORE_GUEST_STACK(rax,5)        # RSP
-        movq  VCPU_vcpu_info(%rbx),%rax
-        pushq VCPUINFO_upcall_mask(%rax)
-        testb $TBF_INTERRUPT,TRAPBOUNCE_flags(%rdx)
-        setnz %ch                       # TBF_INTERRUPT -> set upcall mask
-        orb   %ch,VCPUINFO_upcall_mask(%rax)
-        popq  %rax
-        shlq  $32,%rax                  # Bits 32-39: saved_upcall_mask
-        movw  UREGS_cs+8(%rsp),%ax      # Bits  0-15: CS
-        STORE_GUEST_STACK(rax,3)        # CS / saved_upcall_mask
-        shrq  $32,%rax
-        testb $0xFF,%al                 # Bits 0-7: saved_upcall_mask
-        setz  %ch                       # %ch == !saved_upcall_mask
-        movl  UREGS_eflags+8(%rsp),%eax
-        andl  $~(X86_EFLAGS_IF|X86_EFLAGS_IOPL),%eax
-        addb  %ch,%ch                   # Bit 9 (EFLAGS.IF)
-        orb   %ch,%ah                   # Fold EFLAGS.IF into %eax
-        xorl  %ecx,%ecx                 # if ( VM_ASSIST(v->domain, architectural_iopl) )
-        testb $1 << VMASST_TYPE_architectural_iopl,DOMAIN_vm_assist(%rdi)
-        cmovnzl VCPU_iopl(%rbx),%ecx    # Bits 13:12 (EFLAGS.IOPL)
-        orl   %ecx,%eax                 # Fold EFLAGS.IOPL into %eax
-        STORE_GUEST_STACK(rax,4)        # RFLAGS
-        movq  UREGS_rip+8(%rsp),%rax
-        STORE_GUEST_STACK(rax,2)        # RIP
-        testb $TBF_EXCEPTION_ERRCODE,TRAPBOUNCE_flags(%rdx)
-        jz    1f
-        subq  $8,%rsi
-        movl  TRAPBOUNCE_error_code(%rdx),%eax
-        STORE_GUEST_STACK(rax,2)        # ERROR CODE
-1:
-        movq  UREGS_r11+8(%rsp),%rax
-        STORE_GUEST_STACK(rax,1)        # R11
-        movq  UREGS_rcx+8(%rsp),%rax
-        STORE_GUEST_STACK(rax,0)        # RCX
-        ASM_CLAC
-
-#undef STORE_GUEST_STACK
-
-        /* Rewrite our stack frame and return to guest-OS mode. */
-        /* IA32 Ref. Vol. 3: TF, VM, RF and NT flags are cleared on trap. */
-        /* Also clear AC: alignment checks shouldn't trigger in kernel mode. */
-        orl   $TRAP_syscall,UREGS_entry_vector+8(%rsp)
-        andl  $~(X86_EFLAGS_AC|X86_EFLAGS_VM|X86_EFLAGS_RF|\
-                 X86_EFLAGS_NT|X86_EFLAGS_TF),UREGS_eflags+8(%rsp)
-        movq  $FLAT_KERNEL_SS,UREGS_ss+8(%rsp)
-        movq  %rsi,UREGS_rsp+8(%rsp)
-        movq  $FLAT_KERNEL_CS,UREGS_cs+8(%rsp)
-        movq  TRAPBOUNCE_eip(%rdx),%rax
-        testq %rax,%rax
-UNLIKELY_START(z, create_bounce_frame_bad_bounce_ip)
-        lea   UNLIKELY_DISPATCH_LABEL(create_bounce_frame_bad_bounce_ip)(%rip), %rdi
-        jmp   asm_domain_crash_synchronous  /* Does not return */
-__UNLIKELY_END(create_bounce_frame_bad_bounce_ip)
-        movq  %rax,UREGS_rip+8(%rsp)
-        ret
-
-        .pushsection .fixup, "ax", @progbits
-        # Numeric tags below represent the intended overall %rsi adjustment.
-domain_crash_page_fault_6x8:
-        addq  $8,%rsi
-domain_crash_page_fault_5x8:
-        addq  $8,%rsi
-domain_crash_page_fault_4x8:
-        addq  $8,%rsi
-domain_crash_page_fault_3x8:
-        addq  $8,%rsi
-domain_crash_page_fault_2x8:
-        addq  $8,%rsi
-domain_crash_page_fault_1x8:
-        addq  $8,%rsi
-domain_crash_page_fault_0x8:
-        ASM_CLAC
-        movq  %rsi,%rdi
-        call  show_page_walk
-ENTRY(dom_crash_sync_extable)
-        ASM_CLAC
-        # Get out of the guest-save area of the stack.
-        GET_STACK_END(ax)
-        leaq  STACK_CPUINFO_FIELD(guest_cpu_user_regs)(%rax),%rsp
-        # create_bounce_frame() temporarily clobbers CS.RPL. Fix up.
-        movq  STACK_CPUINFO_FIELD(current_vcpu)(%rax), %rax
-        movq  VCPU_domain(%rax),%rax
-        cmpb  $0, DOMAIN_is_32bit_pv(%rax)
-        sete  %al
-        leal  (%rax,%rax,2),%eax
-        orb   %al,UREGS_cs(%rsp)
-        xorl  %edi,%edi
-        jmp   asm_domain_crash_synchronous /* Does not return */
-        .popsection
-
-        .section .text.entry, "ax", @progbits
-
 ENTRY(common_interrupt)
         SAVE_ALL CLAC
 
@@ -658,8 +522,7 @@ handle_exception_saved:
         testb $TBF_EXCEPTION,TRAPBOUNCE_flags(%rdx)
         jz    test_all_events
 .Lbounce_exception:
-        call  create_bounce_frame
-        movb  $0,TRAPBOUNCE_flags(%rdx)
+        call  pv_create_exception_frame
         jmp   test_all_events
 
 /* No special register assumptions. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 39f9386..d5244f1 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -633,13 +633,6 @@ void noreturn __domain_crash_synchronous(void);
     __domain_crash_synchronous();                                         \
 } while (0)
 
-/*
- * Called from assembly code, with an optional address to help indicate why
- * the crash occured.  If addr is 0, look up address from last extable
- * redirection.
- */
-void noreturn asm_domain_crash_synchronous(unsigned long addr);
-
 #define set_current_state(_s) do { current->state = (_s); } while (0)
 void scheduler_init(void);
 int  sched_init_vcpu(struct vcpu *v, unsigned int processor);
-- 
2.1.4


_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply related	[flat|nested] 12+ messages in thread

* [PATCH v2 5/5] x86/pv: Implement the failsafe callback using the general path
  2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
                   ` (3 preceding siblings ...)
  2018-02-27 14:50 ` [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead Andrew Cooper
@ 2018-02-27 14:50 ` Andrew Cooper
  4 siblings, 0 replies; 12+ messages in thread
From: Andrew Cooper @ 2018-02-27 14:50 UTC (permalink / raw)
  To: Xen-devel; +Cc: Andrew Cooper, Wei Liu, Roger Pau Monné

Reintroduce TBF_FAILSAFE and update pv_create_exception_frame() to cope with
the additional data segment registers.

load_segments() now fills in trap_bounce, and lets the general return-to-guest
path inject the exception.

Bloat-o-meter reports:
  add/remove: 0/0 grow/shrink: 1/1 up/down: 123/-2522 (-2399)
  function                                     old     new   delta
  pv_create_exception_frame                   1088    1211    +123
  context_switch                              3565    1043   -2522

which I suspect is largely due to the quantity of code hidden behind
put_user().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Wei Liu <wei.liu2@citrix.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Use const uregs.
---
 xen/arch/x86/domain.c           | 100 +++-------------------------------------
 xen/arch/x86/pv/traps.c         |  29 ++++++++++--
 xen/include/asm-x86/processor.h |   1 +
 3 files changed, 32 insertions(+), 98 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 69679a6..7bce7de 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1354,100 +1354,14 @@ static void load_segments(struct vcpu *n)
 
     if ( unlikely(!all_segs_okay) )
     {
-        struct pv_vcpu *pv = &n->arch.pv_vcpu;
-        struct cpu_user_regs *regs = guest_cpu_user_regs();
-        unsigned long *rsp =
-            (unsigned long *)(((n->arch.flags & TF_kernel_mode)
-                               ? regs->rsp : pv->kernel_sp) & ~0xf);
-        unsigned long cs_and_mask, rflags;
-
-        /* Fold upcall mask and architectural IOPL into RFLAGS.IF. */
-        rflags  = regs->rflags & ~(X86_EFLAGS_IF|X86_EFLAGS_IOPL);
-        rflags |= !vcpu_info(n, evtchn_upcall_mask) << 9;
-        if ( VM_ASSIST(n->domain, architectural_iopl) )
-            rflags |= n->arch.pv_vcpu.iopl;
-
-        if ( is_pv_32bit_vcpu(n) )
-        {
-            unsigned int *esp = ring_1(regs) ?
-                                (unsigned int *)regs->rsp :
-                                (unsigned int *)pv->kernel_sp;
-            int ret = 0;
-
-            /* CS longword also contains full evtchn_upcall_mask. */
-            cs_and_mask = (unsigned short)regs->cs |
-                ((unsigned int)vcpu_info(n, evtchn_upcall_mask) << 16);
-
-            if ( !ring_1(regs) )
-            {
-                ret  = put_user(regs->ss,       esp-1);
-                ret |= put_user(regs->esp,      esp-2);
-                esp -= 2;
-            }
-
-            if ( ret |
-                 put_user(rflags,              esp-1) |
-                 put_user(cs_and_mask,         esp-2) |
-                 put_user(regs->eip,           esp-3) |
-                 put_user(uregs->gs,           esp-4) |
-                 put_user(uregs->fs,           esp-5) |
-                 put_user(uregs->es,           esp-6) |
-                 put_user(uregs->ds,           esp-7) )
-            {
-                gprintk(XENLOG_ERR,
-                        "error while creating compat failsafe callback frame\n");
-                domain_crash(n->domain);
-            }
+        bool disable = n->arch.vgc_flags & VGCF_failsafe_disables_events;
 
-            if ( n->arch.vgc_flags & VGCF_failsafe_disables_events )
-                vcpu_info(n, evtchn_upcall_mask) = 1;
-
-            regs->entry_vector |= TRAP_syscall;
-            regs->eflags       &= ~(X86_EFLAGS_VM|X86_EFLAGS_RF|X86_EFLAGS_NT|
-                                    X86_EFLAGS_IOPL|X86_EFLAGS_TF);
-            regs->ss            = FLAT_COMPAT_KERNEL_SS;
-            regs->esp           = (unsigned long)(esp-7);
-            regs->cs            = FLAT_COMPAT_KERNEL_CS;
-            regs->eip           = pv->failsafe_callback_eip;
-            return;
-        }
-
-        if ( !(n->arch.flags & TF_kernel_mode) )
-            toggle_guest_mode(n);
-        else
-            regs->cs &= ~3;
-
-        /* CS longword also contains full evtchn_upcall_mask. */
-        cs_and_mask = (unsigned long)regs->cs |
-            ((unsigned long)vcpu_info(n, evtchn_upcall_mask) << 32);
-
-        if ( put_user(regs->ss,            rsp- 1) |
-             put_user(regs->rsp,           rsp- 2) |
-             put_user(rflags,              rsp- 3) |
-             put_user(cs_and_mask,         rsp- 4) |
-             put_user(regs->rip,           rsp- 5) |
-             put_user(uregs->gs,           rsp- 6) |
-             put_user(uregs->fs,           rsp- 7) |
-             put_user(uregs->es,           rsp- 8) |
-             put_user(uregs->ds,           rsp- 9) |
-             put_user(regs->r11,           rsp-10) |
-             put_user(regs->rcx,           rsp-11) )
-        {
-            gprintk(XENLOG_ERR,
-                    "error while creating failsafe callback frame\n");
-            domain_crash(n->domain);
-        }
-
-        if ( n->arch.vgc_flags & VGCF_failsafe_disables_events )
-            vcpu_info(n, evtchn_upcall_mask) = 1;
-
-        regs->entry_vector |= TRAP_syscall;
-        regs->rflags       &= ~(X86_EFLAGS_AC|X86_EFLAGS_VM|X86_EFLAGS_RF|
-                                X86_EFLAGS_NT|X86_EFLAGS_IOPL|X86_EFLAGS_TF);
-        regs->ss            = FLAT_KERNEL_SS;
-        regs->rsp           = (unsigned long)(rsp-11);
-        regs->cs            = FLAT_KERNEL_CS;
-        regs->rip           = pv->failsafe_callback_eip;
+        n->arch.pv_vcpu.trap_bounce = (struct trap_bounce){
+            .flags = (TBF_FAILSAFE | TBF_EXCEPTION |
+                      (disable ? TBF_INTERRUPT : 0)),
+            .cs    = FLAT_COMPAT_KERNEL_CS, /* Ignored for 64bit guests. */
+            .eip   = n->arch.pv_vcpu.failsafe_callback_eip
+        };
     }
 }
 
diff --git a/xen/arch/x86/pv/traps.c b/xen/arch/x86/pv/traps.c
index b7d7d2b..8745e44 100644
--- a/xen/arch/x86/pv/traps.c
+++ b/xen/arch/x86/pv/traps.c
@@ -258,6 +258,7 @@ void pv_create_exception_frame(void)
     struct vcpu *curr = current;
     struct trap_bounce *tb = &curr->arch.pv_vcpu.trap_bounce;
     struct cpu_user_regs *regs = guest_cpu_user_regs();
+    const struct cpu_user_regs *uregs = &curr->arch.user_regs;
     const bool user_mode_frame = !guest_kernel_mode(curr, regs);
     uint8_t *evt_mask = &vcpu_info(curr, evtchn_upcall_mask);
     unsigned int flags, bytes, missing;
@@ -279,10 +280,18 @@ void pv_create_exception_frame(void)
 
     if ( is_pv_32bit_vcpu(curr) )
     {
-        /* { [ERRCODE,] EIP, CS/MASK , EFLAGS, [ESP, SS] } */
-        unsigned int frame[6], *ptr = frame, ksp =
+        /* { [DS-GS,] [ERRCODE,] EIP, CS/MASK , EFLAGS, [ESP, SS] } */
+        unsigned int frame[10], *ptr = frame, ksp =
             (user_mode_frame ? curr->arch.pv_vcpu.kernel_sp : regs->esp);
 
+        if ( tb->flags & TBF_FAILSAFE )
+        {
+            *ptr++ = uregs->ds;
+            *ptr++ = uregs->es;
+            *ptr++ = uregs->fs;
+            *ptr++ = uregs->gs;
+        }
+
         if ( tb->flags & TBF_EXCEPTION_ERRCODE )
             *ptr++ = tb->error_code;
 
@@ -321,13 +330,15 @@ void pv_create_exception_frame(void)
         regs->rflags       &= ~(X86_EFLAGS_VM | X86_EFLAGS_RF |
                                 X86_EFLAGS_NT | X86_EFLAGS_TF);
         regs->rsp           = ksp;
-        if ( user_mode_frame )
+        if ( tb->flags & TBF_FAILSAFE )
+            regs->ss = FLAT_COMPAT_KERNEL_SS;
+        else if ( user_mode_frame )
             regs->ss = curr->arch.pv_vcpu.kernel_ss;
     }
     else
     {
-        /* { RCX, R11, [ERRCODE,] RIP, CS/MASK, RFLAGS, RSP, SS } */
-        unsigned long frame[8], *ptr = frame, ksp =
+        /* { RCX, R11, [DS-GS,] [ERRCODE,] RIP, CS/MASK, RFLAGS, RSP, SS } */
+        unsigned long frame[12], *ptr = frame, ksp =
             (user_mode_frame ? curr->arch.pv_vcpu.kernel_sp : regs->rsp) & ~0xf;
 
         if ( user_mode_frame )
@@ -336,6 +347,14 @@ void pv_create_exception_frame(void)
         *ptr++ = regs->rcx;
         *ptr++ = regs->r11;
 
+        if ( tb->flags & TBF_FAILSAFE )
+        {
+            *ptr++ = uregs->ds;
+            *ptr++ = uregs->es;
+            *ptr++ = uregs->fs;
+            *ptr++ = uregs->gs;
+        }
+
         if ( tb->flags & TBF_EXCEPTION_ERRCODE )
             *ptr++ = tb->error_code;
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 01bc89f..4fba9a4 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -61,6 +61,7 @@
 /* 'trap_bounce' flags values */
 #define TBF_EXCEPTION          1
 #define TBF_EXCEPTION_ERRCODE  2
+#define TBF_FAILSAFE           4
 #define TBF_INTERRUPT          8
 
 /* 'arch_vcpu' flags values */
-- 
2.1.4





* Re: [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables
  2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
@ 2018-03-02 16:27   ` Jan Beulich
  2018-03-02 17:32   ` Wei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2018-03-02 16:27 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

>>> On 27.02.18 at 15:50, <andrew.cooper3@citrix.com> wrote:
> The correct way to check a boolean is `cmpb $0` or `testb $0xff`, whereas a
> lot of our entry code uses `testb $1`.  This will work in principle for values
> which are really C _Bool types, but won't work for other integer types which
> are intended to have boolean properties.
> 
> cmp is the more logical way of thinking about the operation, so adjust all
> outstanding uses of `testb $1` against boolean values.  Changing test to cmp
> changes the logical mnemonic of the following condition from 'zero' to
> 'equal', but the actual encoding remains the same.
> 
> No functional change, as all uses are real C _Bool types, and confirmed by
> diffing the disassembly.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, one less item on the list of things I keep forgetting to
actually do.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame()
  2018-02-27 14:50 ` [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame() Andrew Cooper
@ 2018-03-02 16:44   ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2018-03-02 16:44 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

>>> On 27.02.18 at 15:50, <andrew.cooper3@citrix.com> wrote:
> v2:
>  * Use domain_crash() rather than domain_crash_sync().  All callers
>    immediately continue to {compat_}test_all_events
>  * Count the number of frame[] entries correctly
>  * Consistently use 64bit operations when adjusting the root frame
>  * Introduce a compat_addr_ok() check for the 32bit side.  The ASM version
>    didn't have protection attempting to write into the compat p2m, other than
>    hitting a #PF while trying.

I'm not sure I see the value of the extra check - we've got to handle
#PF anyway. But I also won't insist on dropping it again.

> +void pv_create_exception_frame(void)
> +{
> +    struct vcpu *curr = current;
> +    struct trap_bounce *tb = &curr->arch.pv_vcpu.trap_bounce;
> +    struct cpu_user_regs *regs = guest_cpu_user_regs();
> +    const bool user_mode_frame = !guest_kernel_mode(curr, regs);
> +    uint8_t *evt_mask = &vcpu_info(curr, evtchn_upcall_mask);
> +    unsigned int flags, bytes, missing;
> +
> +    ASSERT_NOT_IN_ATOMIC();
> +
> +    if ( unlikely(null_trap_bounce(curr, tb)) )
> +    {
> +        gprintk(XENLOG_ERR, "Fatal: Attempting to inject null trap bounce\n");
> +        domain_crash(curr->domain);
> +        return;
> +    }
> +
> +    /* Fold the upcall mask and architectural IOPL into the guests rflags. */
> +    flags  = regs->rflags & ~(X86_EFLAGS_IF | X86_EFLAGS_IOPL);

regs->eflags would be more consistent with the type of flags.

> +    flags |= ((*evt_mask ? 0 : X86_EFLAGS_IF) |
> +              (VM_ASSIST(curr->domain, architectural_iopl)
> +               ? curr->arch.pv_vcpu.iopl : 0));
> +
> +    if ( is_pv_32bit_vcpu(curr) )
> +    {
> +        /* { [ERRCODE,] EIP, CS/MASK , EFLAGS, [ESP, SS] } */
> +        unsigned int frame[6], *ptr = frame, ksp =
> +            (user_mode_frame ? curr->arch.pv_vcpu.kernel_sp : regs->esp);
> +
> +        if ( tb->flags & TBF_EXCEPTION_ERRCODE )
> +            *ptr++ = tb->error_code;
> +
> +        *ptr++ = regs->eip;
> +        *ptr++ = regs->cs | ((unsigned int)*evt_mask << 16);
> +        *ptr++ = flags;
> +
> +        if ( user_mode_frame )
> +        {
> +            *ptr++ = regs->esp;
> +            *ptr++ = regs->ss;
> +        }
> +
> +        /* Copy the constructed frame to the guest kernel stack. */
> +        bytes = _p(ptr) - _p(frame);
> +        ksp -= bytes;
> +
> +        if ( unlikely(!__compat_access_ok(curr->domain, ksp, bytes)) )
> +        {
> +            gprintk(XENLOG_ERR, "Fatal: Bad guest kernel stack %p\n", _p(ksp));

While I understand that you don't want to deal with non-flat SS here
(yet), I think it would be prudent to log %ss nevertheless.

> +            domain_crash(curr->domain);
> +            return;
> +        }
> +
> +        if ( unlikely((missing = __copy_to_user(_p(ksp), frame, bytes)) != 0) )
> +        {
> +            gprintk(XENLOG_ERR, "Fatal: Fault while writing exception frame\n");
> +            show_page_walk(ksp + missing);

"missing" is the right name, but the use is wrong - ITYM
"ksp + bytes - missing" (same on the 64-bit path then).

If you agree with (and have carried out) the suggested changes
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan




* Re: [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables
  2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
  2018-03-02 16:27   ` Jan Beulich
@ 2018-03-02 17:32   ` Wei Liu
  1 sibling, 0 replies; 12+ messages in thread
From: Wei Liu @ 2018-03-02 17:32 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Roger Pau Monné, Wei Liu, Jan Beulich, Xen-devel

On Tue, Feb 27, 2018 at 02:50:32PM +0000, Andrew Cooper wrote:
> The correct way to check a boolean is `cmpb $0` or `testb $0xff`, whereas a
> lot of our entry code uses `testb $1`.  This will work in principle for values
> which are really C _Bool types, but won't work for other integer types which
> are intended to have boolean properties.
> 
> cmp is the more logical way of thinking about the operation, so adjust all
> outstanding uses of `testb $1` against boolean values.  Changing test to cmp
> changes the logical mnemonic of the following condition from 'zero' to
> 'equal', but the actual encoding remains the same.
> 
> No functional change, as all uses are real C _Bool types, and confirmed by
> diffing the disassembly.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Wei Liu <wei.liu2@citrix.com>



* Re: [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead
  2018-02-27 14:50 ` [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead Andrew Cooper
@ 2018-03-05  9:27   ` Jan Beulich
  2018-09-07 14:17     ` Andrew Cooper
  0 siblings, 1 reply; 12+ messages in thread
From: Jan Beulich @ 2018-03-05  9:27 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

>>> On 27.02.18 at 15:50, <andrew.cooper3@citrix.com> wrote:
> -compat_create_bounce_frame:
> -        ASSERT_INTERRUPTS_ENABLED
> -        mov   %fs,%edi
> -        ASM_STAC
> -        testb $2,UREGS_cs+8(%rsp)
> -        jz    1f
> -        /* Push new frame at registered guest-OS stack base. */
> -        movl  VCPU_kernel_sp(%rbx),%esi
> -.Lft1:  mov   VCPU_kernel_ss(%rbx),%fs

Note how we did take into consideration the segment base here;
pv_create_bounce_frame() doesn't. Hence while the patch here
is
Reviewed-by: Jan Beulich <jbeulich@suse.com>
I'm afraid I have to withdraw the respective tag for the earlier one
(despite realizing that there are other places where we [wrongly]
assume stack segments to be flat).

Jan




* Re: [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead
  2018-03-05  9:27   ` Jan Beulich
@ 2018-09-07 14:17     ` Andrew Cooper
  2018-09-07 15:49       ` Jan Beulich
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Cooper @ 2018-09-07 14:17 UTC (permalink / raw)
  To: Jan Beulich; +Cc: Xen-devel, Wei Liu, Roger Pau Monné

On 05/03/18 09:27, Jan Beulich wrote:
>>>> On 27.02.18 at 15:50, <andrew.cooper3@citrix.com> wrote:
>> -compat_create_bounce_frame:
>> -        ASSERT_INTERRUPTS_ENABLED
>> -        mov   %fs,%edi
>> -        ASM_STAC
>> -        testb $2,UREGS_cs+8(%rsp)
>> -        jz    1f
>> -        /* Push new frame at registered guest-OS stack base. */
>> -        movl  VCPU_kernel_sp(%rbx),%esi
>> -.Lft1:  mov   VCPU_kernel_ss(%rbx),%fs
> Note how we did take into consideration the segment base here;
> pv_create_bounce_frame() doesn't. Hence while the patch here
> is
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> I'm afraid I have to withdraw the respective tag for the earlier one
> (despite realizing that there are other places where we [wrongly]
> assume stack segments to be flat).

For the failsafe callback, %ss is set to be flat, and then a bounce
frame is created at the current kernel_sp.

Despite the impression the API might give, a 32bit PV kernel cannot use
a non-flat stack segment.  No PV guest (not even MiniOS) uses a non-flat
layout, so while this is a change of behaviour, it's not going to break
anything.

~Andrew



* Re: [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead
  2018-09-07 14:17     ` Andrew Cooper
@ 2018-09-07 15:49       ` Jan Beulich
  0 siblings, 0 replies; 12+ messages in thread
From: Jan Beulich @ 2018-09-07 15:49 UTC (permalink / raw)
  To: Andrew Cooper; +Cc: Xen-devel, Wei Liu, Roger Pau Monne

>>> On 07.09.18 at 16:17, <andrew.cooper3@citrix.com> wrote:
> On 05/03/18 09:27, Jan Beulich wrote:
>>>>> On 27.02.18 at 15:50, <andrew.cooper3@citrix.com> wrote:
>>> -compat_create_bounce_frame:
>>> -        ASSERT_INTERRUPTS_ENABLED
>>> -        mov   %fs,%edi
>>> -        ASM_STAC
>>> -        testb $2,UREGS_cs+8(%rsp)
>>> -        jz    1f
>>> -        /* Push new frame at registered guest-OS stack base. */
>>> -        movl  VCPU_kernel_sp(%rbx),%esi
>>> -.Lft1:  mov   VCPU_kernel_ss(%rbx),%fs
>> Note how we did take into consideration the segment base here;
>> pv_create_bounce_frame() doesn't. Hence while the patch here
>> is
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>> I'm afraid I have to withdraw the respective tag for the earlier one
>> (despite realizing that there are other places where we [wrongly]
>> assume stack segments to be flat).
> 
> For the failsafe callback, %ss is set to be flat, and then a bounce
> frame is created at the current kernel_sp.
> 
> Despite the impression the API might give, a 32bit PV kernel cannot use
> a non-flat stack segment.  No PV guest (not even MiniOS) uses a non-flat
> layout, so while this is a change of behaviour, its not going to break
> anything.

Well, at the very least such a change in behavior needs to be called
out very prominently in the description.

Jan





end of thread, other threads:[~2018-09-07 15:49 UTC | newest]

Thread overview: 12+ messages
2018-02-27 14:50 [PATCH v2 0/5] x86: Lift create_exception_frame() up out of C Andrew Cooper
2018-02-27 14:50 ` [PATCH v2 1/5] x86/entry: Correct comparisons against boolean variables Andrew Cooper
2018-03-02 16:27   ` Jan Beulich
2018-03-02 17:32   ` Wei Liu
2018-02-27 14:50 ` [PATCH v2 2/5] x86/pv: Drop int80_bounce from struct pv_vcpu Andrew Cooper
2018-02-27 14:50 ` [PATCH v2 3/5] x86/pv: Introduce pv_create_exception_frame() Andrew Cooper
2018-03-02 16:44   ` Jan Beulich
2018-02-27 14:50 ` [PATCH v2 4/5] x86/pv: Drop {compat_, }create_bounce_frame() and use the C version instead Andrew Cooper
2018-03-05  9:27   ` Jan Beulich
2018-09-07 14:17     ` Andrew Cooper
2018-09-07 15:49       ` Jan Beulich
2018-02-27 14:50 ` [PATCH v2 5/5] x86/pv: Implement the failsafe callback using the general path Andrew Cooper
