From: "Jan Beulich"
Subject: [PATCH v2 01/11] x86emul: catch exceptions occurring in stubs
Date: Wed, 01 Feb 2017 04:12:39 -0700
Message-ID: <5891D0B70200007800135BDC@prv-mh.provo.novell.com>
In-Reply-To: <5891CF990200007800135BC5@prv-mh.provo.novell.com>
References: <5891CF990200007800135BC5@prv-mh.provo.novell.com>
To: xen-devel
Cc: Andrew Cooper

Before adding more use of stubs cloned from decoded guest insns, guard
ourselves against mistakes there: Should an exception (with the
noteworthy exception of #PF) occur inside the stub, forward it to the
guest.

Since the exception fixup table entry can't encode the address of the
faulting insn itself, attach it to the return address instead. This at
once provides a convenient place to hand the exception information
back: The return address is being overwritten by it before branching
to the recovery code.

Take the opportunity and (finally!) add symbol resolution to the
respective log messages (the new one is intentionally not being coded
that way, as it covers stub addresses only, which don't have symbols
associated).

Also take the opportunity and make search_one_extable() static again.
Suggested-by: Andrew Cooper
Signed-off-by: Jan Beulich
---
There's one possible caveat here: A stub invocation immediately
followed by another instruction having fault recovery attached to it
would not work properly, as the table lookup can only ever find one of
the two entries. Such CALL instructions would then need to be followed
by a NOP for disambiguation (even if only a slim chance exists for the
compiler to emit things that way).

TBD: Instead of adding a 2nd search_exception_table() invocation to
     do_trap(), we may want to consider moving the existing one down:
     Xen code (except when executing stubs) shouldn't be raising #MF
     or #XM, and hence fixups attached to instructions shouldn't care
     about getting invoked for those. With that, doing the HVM special
     case for them before running search_exception_table() would be
     fine.

Note that the two SIMD related stub invocations in the insn emulator
intentionally don't get adjusted here, as subsequent patches will
replace them anyway.

--- a/xen/arch/x86/extable.c
+++ b/xen/arch/x86/extable.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 
@@ -62,7 +63,7 @@ void __init sort_exception_tables(void)
     sort_exception_table(__start___pre_ex_table, __stop___pre_ex_table);
 }
 
-unsigned long
+static unsigned long
 search_one_extable(const struct exception_table_entry *first,
                    const struct exception_table_entry *last,
                    unsigned long value)
@@ -85,15 +86,88 @@ search_one_extable(const struct exceptio
 }
 
 unsigned long
-search_exception_table(unsigned long addr)
+search_exception_table(const struct cpu_user_regs *regs, bool check_stub)
 {
-    const struct virtual_region *region = find_text_region(addr);
+    const struct virtual_region *region = find_text_region(regs->rip);
+    unsigned long stub = this_cpu(stubs.addr);
 
     if ( region && region->ex )
-        return search_one_extable(region->ex, region->ex_end - 1, addr);
+        return search_one_extable(region->ex, region->ex_end - 1, regs->rip);
+
+    if ( check_stub &&
+         regs->rip >= stub + STUB_BUF_SIZE / 2 &&
+         regs->rip < stub + STUB_BUF_SIZE &&
+         regs->rsp > (unsigned long)&check_stub &&
+         regs->rsp < (unsigned long)get_cpu_info() )
+    {
+        unsigned long retptr = *(unsigned long *)regs->rsp;
+
+        region = find_text_region(retptr);
+        retptr = region && region->ex
+                 ? search_one_extable(region->ex, region->ex_end - 1, retptr)
+                 : 0;
+        if ( retptr )
+        {
+            /*
+             * Put trap number and error code on the stack (in place of the
+             * original return address) for recovery code to pick up.
+             */
+            *(unsigned long *)regs->rsp = regs->error_code |
+                ((uint64_t)(uint8_t)regs->entry_vector << 32);
+            return retptr;
+        }
+    }
+
+    return 0;
+}
+
+#ifndef NDEBUG
+static int __init stub_selftest(void)
+{
+    static const struct {
+        uint8_t opc[4];
+        uint64_t rax;
+        union stub_exception_token res;
+    } tests[] __initconst = {
+        { .opc = { 0x0f, 0xb9, 0xc3, 0xc3 }, /* ud1 */
+          .res.fields.trapnr = TRAP_invalid_op },
+        { .opc = { 0x90, 0x02, 0x00, 0xc3 }, /* nop; add (%rax),%al */
+          .rax = 0x0123456789abcdef,
+          .res.fields.trapnr = TRAP_gp_fault },
+        { .opc = { 0x02, 0x04, 0x04, 0xc3 }, /* add (%rsp,%rax),%al */
+          .rax = 0xfedcba9876543210,
+          .res.fields.trapnr = TRAP_stack_error },
+    };
+    unsigned long addr = this_cpu(stubs.addr) + STUB_BUF_SIZE / 2;
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        uint8_t *ptr = map_domain_page(_mfn(this_cpu(stubs.mfn))) +
+                       (addr & ~PAGE_MASK);
+        unsigned long res = ~0;
+
+        memset(ptr, 0xcc, STUB_BUF_SIZE / 2);
+        memcpy(ptr, tests[i].opc, ARRAY_SIZE(tests[i].opc));
+        unmap_domain_page(ptr);
+
+        asm volatile ( "call *%[stb]\n"
+                       ".Lret%=:\n\t"
+                       ".pushsection .fixup,\"ax\"\n"
+                       ".Lfix%=:\n\t"
+                       "pop %[exn]\n\t"
+                       "jmp .Lret%=\n\t"
+                       ".popsection\n\t"
+                       _ASM_EXTABLE(.Lret%=, .Lfix%=)
+                       : [exn] "+m" (res)
+                       : [stb] "rm" (addr), "a" (tests[i].rax));
+        ASSERT(res == tests[i].res.raw);
+    }
 
     return 0;
 }
+__initcall(stub_selftest);
+#endif
 
 unsigned long
 search_pre_exception_table(struct cpu_user_regs *regs)
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -802,10 +802,10 @@ void do_trap(struct cpu_user_regs *regs)
         return;
     }
 
-    if ( likely((fixup = search_exception_table(regs->rip)) != 0) )
+    if ( likely((fixup = search_exception_table(regs, false)) != 0) )
     {
-        dprintk(XENLOG_ERR, "Trap %d: %p -> %p\n",
-                trapnr, _p(regs->rip), _p(fixup));
+        dprintk(XENLOG_ERR, "Trap %u: %p [%ps] -> %p\n",
+                trapnr, _p(regs->rip), _p(regs->rip), _p(fixup));
         this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
         return;
@@ -820,6 +820,15 @@ void do_trap(struct cpu_user_regs *regs)
         return;
     }
 
+    if ( likely((fixup = search_exception_table(regs, true)) != 0) )
+    {
+        dprintk(XENLOG_ERR, "Trap %u: %p -> %p\n",
+                trapnr, _p(regs->rip), _p(fixup));
+        this_cpu(last_extable_addr) = regs->rip;
+        regs->rip = fixup;
+        return;
+    }
+
  hardware_trap:
     if ( debugger_trap_fatal(trapnr, regs) )
         return;
@@ -1567,7 +1576,7 @@ void do_invalid_op(struct cpu_user_regs
     }
 
  die:
-    if ( (fixup = search_exception_table(regs->rip)) != 0 )
+    if ( (fixup = search_exception_table(regs, true)) != 0 )
     {
         this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
@@ -1897,7 +1906,7 @@ void do_page_fault(struct cpu_user_regs
         if ( pf_type != real_fault )
             return;
 
-        if ( likely((fixup = search_exception_table(regs->rip)) != 0) )
+        if ( likely((fixup = search_exception_table(regs, false)) != 0) )
         {
             perfc_incr(copy_user_faults);
             if ( unlikely(regs->error_code & PFEC_reserved_bit) )
@@ -3841,10 +3850,10 @@ void do_general_protection(struct cpu_us
 
  gp_in_kernel:
 
-    if ( likely((fixup = search_exception_table(regs->rip)) != 0) )
+    if ( likely((fixup = search_exception_table(regs, true)) != 0) )
     {
-        dprintk(XENLOG_INFO, "GPF (%04x): %p -> %p\n",
-                regs->error_code, _p(regs->rip), _p(fixup));
+        dprintk(XENLOG_INFO, "GPF (%04x): %p [%ps] -> %p\n",
+                regs->error_code, _p(regs->rip), _p(regs->rip), _p(fixup));
         this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
         return;
@@ -4120,7 +4129,7 @@ void do_debug(struct cpu_user_regs *regs
              * watchpoint set on it. No need to bump EIP; the only faulting
              * trap is an instruction breakpoint, which can't happen to us.
              */
-            WARN_ON(!search_exception_table(regs->rip));
+            WARN_ON(!search_exception_table(regs, false));
         }
         goto out;
     }
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -676,14 +676,34 @@ do{ asm volatile (
 #define __emulate_1op_8byte(_op, _dst, _eflags)
 #endif /* __i386__ */
 
+#ifdef __XEN__
+# define invoke_stub(pre, post, constraints...) do {                    \
+    union stub_exception_token res_ = { .raw = ~0 };                    \
+    asm volatile ( pre "\n\tcall *%[stub]\n\t" post "\n"                \
+                   ".Lret%=:\n\t"                                       \
+                   ".pushsection .fixup,\"ax\"\n"                       \
+                   ".Lfix%=:\n\t"                                       \
+                   "pop %[exn]\n\t"                                     \
+                   "jmp .Lret%=\n\t"                                    \
+                   ".popsection\n\t"                                    \
+                   _ASM_EXTABLE(.Lret%=, .Lfix%=)                       \
+                   : [exn] "+g" (res_), constraints,                    \
+                     [stub] "rm" (stub.func) );                         \
+    generate_exception_if(~res_.raw, res_.fields.trapnr,                \
+                          res_.fields.ec);                              \
+} while (0)
+#else
+# define invoke_stub(pre, post, constraints...)                         \
+    asm volatile ( pre "\n\tcall *%[stub]\n\t" post                     \
+                   : constraints, [stub] "rm" (stub.func) )
+#endif
+
 #define emulate_stub(dst, src...) do {                                  \
     unsigned long tmp;                                                  \
-    asm volatile ( _PRE_EFLAGS("[efl]", "[msk]", "[tmp]")               \
-                   "call *%[stub];"                                     \
-                   _POST_EFLAGS("[efl]", "[msk]", "[tmp]")              \
-                   : dst, [tmp] "=&r" (tmp), [efl] "+g" (_regs._eflags) \
-                   : [stub] "r" (stub.func),                            \
-                     [msk] "i" (EFLAGS_MASK), ## src );                 \
+    invoke_stub(_PRE_EFLAGS("[efl]", "[msk]", "[tmp]"),                 \
+                _POST_EFLAGS("[efl]", "[msk]", "[tmp]"),                \
+                dst, [tmp] "=&r" (tmp), [efl] "+g" (_regs._eflags)      \
+                : [msk] "i" (EFLAGS_MASK), ## src);                     \
 } while (0)
 
 /* Fetch next part of the instruction being emulated. */
@@ -929,8 +949,7 @@ do {
     unsigned int nr_ = sizeof((uint8_t[]){ bytes });                    \
     fic.insn_bytes = nr_;                                               \
     memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
-    asm volatile ( "call *%[stub]" : "+m" (fic) :                       \
-                   [stub] "rm" (stub.func) );                           \
+    invoke_stub("", "", "=m" (fic) : "m" (fic));                        \
     put_stub(stub);                                                     \
 } while (0)
 
@@ -940,13 +959,11 @@ do {
     unsigned long tmp_;                                                 \
     fic.insn_bytes = nr_;                                               \
     memcpy(get_stub(stub), ((uint8_t[]){ bytes, 0xc3 }), nr_ + 1);      \
-    asm volatile ( _PRE_EFLAGS("[eflags]", "[mask]", "[tmp]")           \
-                   "call *%[func];"                                     \
-                   _POST_EFLAGS("[eflags]", "[mask]", "[tmp]")          \
-                   : [eflags] "+g" (_regs._eflags),                     \
-                     [tmp] "=&r" (tmp_), "+m" (fic)                     \
-                   : [func] "rm" (stub.func),                           \
-                     [mask] "i" (EFLG_ZF|EFLG_PF|EFLG_CF) );            \
+    invoke_stub(_PRE_EFLAGS("[eflags]", "[mask]", "[tmp]"),             \
+                _POST_EFLAGS("[eflags]", "[mask]", "[tmp]"),            \
+                [eflags] "+g" (_regs._eflags), [tmp] "=&r" (tmp_),      \
+                "+m" (fic)                                              \
+                : [mask] "i" (EFLG_ZF|EFLG_PF|EFLG_CF));                \
     put_stub(stub);                                                     \
 } while (0)
 
--- a/xen/include/asm-x86/uaccess.h
+++ b/xen/include/asm-x86/uaccess.h
@@ -275,7 +275,16 @@ extern struct exception_table_entry __st
 extern struct exception_table_entry __start___pre_ex_table[];
 extern struct exception_table_entry __stop___pre_ex_table[];
 
-extern unsigned long search_exception_table(unsigned long);
+union stub_exception_token {
+    struct {
+        uint32_t ec;
+        uint8_t trapnr;
+    } fields;
+    uint64_t raw;
+};
+
+extern unsigned long search_exception_table(const struct cpu_user_regs *regs,
+                                            bool check_stub);
 extern void sort_exception_tables(void);
 extern void sort_exception_table(struct exception_table_entry *start,
                                  const struct exception_table_entry *stop);