* [PATCH] x86-64/Xen: fix stack switching
@ 2018-05-07 11:55 Jan Beulich
  0 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2018-05-07 11:55 UTC (permalink / raw)
  To: mingo, tglx, hpa
  Cc: Juergen Gross, xen-devel, Boris Ostrovsky, linux-kernel, Andy Lutomirski

While on native hardware entry into the kernel happens on the trampoline
stack, PV Xen kernels are entered on the current thread stack right
away. Hence source and destination stacks are identical in that case,
and special care is needed.

Unlike in sync_regs(), the copying done on the INT80 path as well as
on the NMI path itself isn't NMI / #MC safe, as either of these events
occurring in the middle of the stack copying would clobber data on the
(source) stack. (Of course, in the NMI case only #MC could break
things.)

I'm not altering the similar code in interrupt_entry(), as that code
path is unreachable when running a PV Xen guest afaict.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@kernel.org 
---
There would certainly have been the option of using alternatives
patching, but afaict the patching code isn't NMI / #MC safe, so I'd
rather stay away from patching the NMI path. And I thought it would be
better to use similar code in both cases.

Another option would be to make the Xen case match the native one, by
going through the trampoline stack, but to me this would look like extra
overhead for no gain.
---
 arch/x86/entry/entry_64.S        |    8 ++++++++
 arch/x86/entry/entry_64_compat.S |    8 +++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

--- 4.17-rc4/arch/x86/entry/entry_64.S
+++ 4.17-rc4-x86_64-stack-switch-Xen/arch/x86/entry/entry_64.S
@@ -1399,6 +1399,12 @@ ENTRY(nmi)
 	swapgs
 	cld
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
+
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rdx
+	subq	$8, %rdx
+	xorq	%rsp, %rdx
+	shrq	$PAGE_SHIFT, %rdx
+	jz	.Lnmi_keep_stack
 	movq	%rsp, %rdx
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
 	UNWIND_HINT_IRET_REGS base=%rdx offset=8
@@ -1408,6 +1414,8 @@ ENTRY(nmi)
 	pushq	2*8(%rdx)	/* pt_regs->cs */
 	pushq	1*8(%rdx)	/* pt_regs->rip */
 	UNWIND_HINT_IRET_REGS
+.Lnmi_keep_stack:
+
 	pushq   $-1		/* pt_regs->orig_ax */
 	PUSH_AND_CLEAR_REGS rdx=(%rdx)
 	ENCODE_FRAME_POINTER
--- 4.17-rc4/arch/x86/entry/entry_64_compat.S
+++ 4.17-rc4-x86_64-stack-switch-Xen/arch/x86/entry/entry_64_compat.S
@@ -356,15 +356,21 @@ ENTRY(entry_INT80_compat)
 
 	/* Need to switch before accessing the thread stack. */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
+
+	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rdi
+	subq	$8, %rdi
+	xorq	%rsp, %rdi
+	shrq	$PAGE_SHIFT, %rdi
+	jz	.Lint80_keep_stack
 	movq	%rsp, %rdi
 	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
 	pushq	6*8(%rdi)		/* regs->ss */
 	pushq	5*8(%rdi)		/* regs->rsp */
 	pushq	4*8(%rdi)		/* regs->eflags */
 	pushq	3*8(%rdi)		/* regs->cs */
 	pushq	2*8(%rdi)		/* regs->ip */
 	pushq	1*8(%rdi)		/* regs->orig_ax */
+.Lint80_keep_stack:
 
 	pushq	(%rdi)			/* pt_regs->di */
 	pushq	%rsi			/* pt_regs->si */





_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [PATCH] x86-64/Xen: fix stack switching
       [not found]   ` <5AF964B302000078001C26BC@suse.com>
@ 2018-05-14 12:08     ` Juergen Gross
  2018-05-14 12:08     ` Juergen Gross
  1 sibling, 0 replies; 8+ messages in thread
From: Juergen Gross @ 2018-05-14 12:08 UTC (permalink / raw)
  To: Jan Beulich, Andrew Lutomirski
  Cc: Ingo Molnar, Thomas Gleixner, xen-devel, Boris Ostrovsky, lkml,
	H. Peter Anvin

On 14/05/18 12:28, Jan Beulich wrote:
>>>> On 08.05.18 at 04:38, <luto@kernel.org> wrote:
>> On Mon, May 7, 2018 at 5:16 AM Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> While on native entry into the kernel happens on the trampoline stack,
>>> PV Xen kernels are being entered with the current thread stack right
>>> away. Hence source and destination stacks are identical in that case,
>>> and special care is needed.
>>
>>> Other than in sync_regs() the copying done on the INT80 path as well as
>>> on the NMI path itself isn't NMI / #MC safe, as either of these events
>>> occurring in the middle of the stack copying would clobber data on the
>>> (source) stack. (Of course, in the NMI case only #MC could break
>>> things.)
>>
>> I think I'd rather fix this by changing the stack switch code or
> 
> Well, isn't that what I'm doing in the patch?
> 
>> alternativing around it on non-stack-switching kernels.
> 
> Fine with me if that's considered better than adding the conditionals.
> 
>>  Or make Xen use a trampoline stack just like native.
> 
> Well, as said I'd rather not, unless x86 and Xen maintainers agree
> that's the way to go. But see below for NMI.

I'd prefer not using a trampoline stack, too.

> 
>>> I'm not altering the similar code in interrupt_entry(), as that code
>>> path is unreachable when running a PV Xen guest afaict.
>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Cc: stable@kernel.org 
>>> ---
>>> There would certainly have been the option of using alternatives
>>> patching, but afaict the patching code isn't NMI / #MC safe, so I'd
>>> rather stay away from patching the NMI path. And I thought it would be
>>> better to use similar code in both cases.
>>
>> I would hope we do the patching before we enable any NMIs.
> 
> "Enable NMIs"? I don't think they're getting disabled anywhere in the
> kernel. Perhaps you merely mean ones the kernel sends itself (which
> I agree would hopefully only be enabled after alternatives patching)?
> 
>>> Another option would be to make the Xen case match the native one, by
>>> going through the trampoline stack, but to me this would look like extra
>>> overhead for no gain.
>>
>> Avoiding even more complexity in the nmi code seems like a big gain to me.
> 
> I'm not sure the added conditional is more complexity than making Xen
> switch to the trampoline stack just to switch back almost immediately.

I agree.

> But yes, I could see complexity of the NMI code to be a reason to use
> different solutions on the NMI and INT80 paths. It's just that I'd like
> you, the x86 maintainers, and the Xen ones to agree on which solution
> to use where before I'd send a v2.

With my Xen maintainer hat on I'd prefer Jan's current solution.


Juergen

^ permalink raw reply	[flat|nested] 8+ messages in thread


* Re: [PATCH] x86-64/Xen: fix stack switching
  2018-05-08  2:38 ` Andy Lutomirski
  2018-05-14 10:28   ` Jan Beulich
@ 2018-05-14 10:28   ` Jan Beulich
       [not found]   ` <5AF964B302000078001C26BC@suse.com>
  2 siblings, 0 replies; 8+ messages in thread
From: Jan Beulich @ 2018-05-14 10:28 UTC (permalink / raw)
  To: Andrew Lutomirski
  Cc: mingo, tglx, xen-devel, Boris Ostrovsky, Juergen Gross,
	linux-kernel, hpa

>>> On 08.05.18 at 04:38, <luto@kernel.org> wrote:
> On Mon, May 7, 2018 at 5:16 AM Jan Beulich <JBeulich@suse.com> wrote:
> 
>> While on native entry into the kernel happens on the trampoline stack,
>> PV Xen kernels are being entered with the current thread stack right
>> away. Hence source and destination stacks are identical in that case,
>> and special care is needed.
> 
>> Other than in sync_regs() the copying done on the INT80 path as well as
>> on the NMI path itself isn't NMI / #MC safe, as either of these events
>> occurring in the middle of the stack copying would clobber data on the
>> (source) stack. (Of course, in the NMI case only #MC could break
>> things.)
> 
> I think I'd rather fix this by changing the stack switch code or

Well, isn't that what I'm doing in the patch?

> alternativing around it on non-stack-switching kernels.

Fine with me if that's considered better than adding the conditionals.

>  Or make Xen use a trampoline stack just like native.

Well, as said I'd rather not, unless x86 and Xen maintainers agree
that's the way to go. But see below for NMI.

>> I'm not altering the similar code in interrupt_entry(), as that code
>> path is unreachable when running a PV Xen guest afaict.
> 
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Cc: stable@kernel.org 
>> ---
>> There would certainly have been the option of using alternatives
>> patching, but afaict the patching code isn't NMI / #MC safe, so I'd
>> rather stay away from patching the NMI path. And I thought it would be
>> better to use similar code in both cases.
> 
> I would hope we do the patching before we enable any NMIs.

"Enable NMIs"? I don't think they're getting disabled anywhere in the
kernel. Perhaps you merely mean ones the kernel sends itself (which
I agree would hopefully only be enabled after alternatives patching)?

>> Another option would be to make the Xen case match the native one, by
>> going through the trampoline stack, but to me this would look like extra
>> overhead for no gain.
> 
> Avoiding even more complexity in the nmi code seems like a big gain to me.

I'm not sure the added conditional is more complexity than making Xen
switch to the trampoline stack just to switch back almost immediately.
But yes, I could see complexity of the NMI code to be a reason to use
different solutions on the NMI and INT80 paths. It's just that I'd like
you, the x86 maintainers, and the Xen ones to agree on which solution
to use where before I'd send a v2.

Jan

^ permalink raw reply	[flat|nested] 8+ messages in thread


* Re: [PATCH] x86-64/Xen: fix stack switching
  2018-05-07 11:55 Jan Beulich
  2018-05-08  2:38 ` Andy Lutomirski
@ 2018-05-08  2:38 ` Andy Lutomirski
  2018-05-14 10:28   ` Jan Beulich
                     ` (2 more replies)
  1 sibling, 3 replies; 8+ messages in thread
From: Andy Lutomirski @ 2018-05-08  2:38 UTC (permalink / raw)
  To: Jan Beulich
  Cc: Ingo Molnar, Thomas Gleixner, H. Peter Anvin, Andrew Lutomirski,
	xen-devel, Boris Ostrovsky, Juergen Gross, LKML

On Mon, May 7, 2018 at 5:16 AM Jan Beulich <JBeulich@suse.com> wrote:

> While on native entry into the kernel happens on the trampoline stack,
> PV Xen kernels are being entered with the current thread stack right
> away. Hence source and destination stacks are identical in that case,
> and special care is needed.

> Other than in sync_regs() the copying done on the INT80 path as well as
> on the NMI path itself isn't NMI / #MC safe, as either of these events
> occurring in the middle of the stack copying would clobber data on the
> (source) stack. (Of course, in the NMI case only #MC could break
> things.)

I think I'd rather fix this by changing the stack switch code or
alternativing around it on non-stack-switching kernels.  Or make Xen use a
trampoline stack just like native.


> I'm not altering the similar code in interrupt_entry(), as that code
> path is unreachable when running a PV Xen guest afaict.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Cc: stable@kernel.org
> ---
> There would certainly have been the option of using alternatives
> patching, but afaict the patching code isn't NMI / #MC safe, so I'd
> rather stay away from patching the NMI path. And I thought it would be
> better to use similar code in both cases.

I would hope we do the patching before we enable any NMIs.


> Another option would be to make the Xen case match the native one, by
> going through the trampoline stack, but to me this would look like extra
> overhead for no gain.

Avoiding even more complexity in the nmi code seems like a big gain to me.

^ permalink raw reply	[flat|nested] 8+ messages in thread



end of thread, other threads:[~2018-05-14 12:08 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-05-07 11:55 [PATCH] x86-64/Xen: fix stack switching Jan Beulich
2018-05-07 11:55 Jan Beulich
2018-05-08  2:38 ` Andy Lutomirski
2018-05-08  2:38 ` Andy Lutomirski
2018-05-14 10:28   ` Jan Beulich
2018-05-14 10:28   ` Jan Beulich
     [not found]   ` <5AF964B302000078001C26BC@suse.com>
2018-05-14 12:08     ` Juergen Gross
2018-05-14 12:08     ` Juergen Gross
