All of lore.kernel.org
* [PATCH V4 0/7] x86/entry: Clean up entry code
@ 2022-03-18 14:30 Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

This patchset moves the stack-switch code to the place where
error_entry() returns, unravels error_entry() from XENpv, and makes
entry_INT80_compat use the idtentry macro.

This patchset is highly related to XENpv, because it does the extra
cleanup of converting SWAPGS to swapgs after the major cleanup is done.

The patches are the 4th version of patches picked from the patchset
https://lore.kernel.org/lkml/20211126101209.8613-1-jiangshanlai@gmail.com/
which converts ASM code to C code.  These patches prepare for that
conversion.  But this patchset has its own value: it simplifies the
stack switch, avoids leaving the old stack inside a function call, and
separates XENpv code from native code without adding new code.

Peter said in V3:
>	So AFAICT these patches are indeed correct.
>
>	I'd love for some of the other x86 people to also look at this,
>	but a tentative ACK on this.

Other interactions in V3:
	Peter raised several questions, and I think I answered them; I
	don't think the code needs to be updated unless I missed some
	points.  (Except for reordering the patches.)

	Josh asked to remove UNWIND_HINT_REGS in patch 5, but I think
	UNWIND_HINT_REGS is old code that predates this patchset, and I
	don't want to do a cleanup that is not related to preparing for
	converting ASM code to C code in this patchset.  He also asked
	to remove ENCODE_FRAME_POINTER in the XENPV case, but I think
	that just complicates the code to optimize out a single
	assignment to %rbp.  I would not insist on these reasons of
	mine, but I have kept the code unchanged since he hasn't
	emphasized it again and nobody else has requested it.

Changed from V3:
	Only reordered the int80 patch to be the last patch to make the
	patch ordering more natural.  (Both orders are correct.)

Changed from V2:
	Made the patch folding the int80 handling the first patch
	Added more changelog in "Switch the stack after error_entry() returns"

Changed from V1:
	Squashed the cleanup patches converting SWAPGS to swapgs into
	one patch

	Used my official email address (Ant Group).  The work is backed
	by my company; I had incorrectly understood that
	XXX@linux.alibaba.com was the only portal for open source work
	in the corporate group.

[V3]: https://lore.kernel.org/lkml/20220315073949.7541-1-jiangshanlai@gmail.com/
[V2]: https://lore.kernel.org/lkml/20220303035434.20471-1-jiangshanlai@gmail.com/
[V1]: https://lore.kernel.org/lkml/20211208110833.65366-1-jiangshanlai@gmail.com/

Lai Jiangshan (7):
  x86/traps: Move pt_regs only in fixup_bad_iret()
  x86/entry: Switch the stack after error_entry() returns
  x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry
  x86/entry: Move cld to the start of idtentry
  x86/entry: Don't call error_entry for XENPV
  x86/entry: Convert SWAPGS to swapgs and remove the definition of
    SWAPGS
  x86/entry: Use idtentry macro for entry_INT80_compat

 arch/x86/entry/entry_64.S        |  61 +++++++++++++-----
 arch/x86/entry/entry_64_compat.S | 105 +------------------------------
 arch/x86/include/asm/idtentry.h  |  47 ++++++++++++++
 arch/x86/include/asm/irqflags.h  |   8 ---
 arch/x86/include/asm/proto.h     |   4 --
 arch/x86/include/asm/traps.h     |   2 +-
 arch/x86/kernel/traps.c          |  17 ++---
 7 files changed, 100 insertions(+), 144 deletions(-)

-- 
2.19.1.6.gb485710b


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-04-06 19:00   ` Borislav Petkov
  2022-04-11  9:36   ` Borislav Petkov
  2022-03-18 14:30 ` [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns Lai Jiangshan
                   ` (6 subsequent siblings)
  7 siblings, 2 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Fenghua Yu, Thomas Tai, Chang S. Bae,
	Masami Hiramatsu

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

fixup_bad_iret() and sync_regs() take similar arguments and do similar
work: they copy full or partial pt_regs to another place so that the
stack can be switched after they return.  They are almost the same,
except that fixup_bad_iret() copies not only the pt_regs but also the
return address of error_entry(), while sync_regs() copies the pt_regs
only; the return address of error_entry() is preserved and handled in
ASM code.

This patch makes fixup_bad_iret() work like sync_regs(): the handling
of the return address of error_entry() is moved into the ASM code.

This removes the need for struct bad_iret_stack, simplifies
fixup_bad_iret(), and makes the ASM error_entry() call fixup_bad_iret()
the same way it calls sync_regs(), which improves readability because
the calling patterns become exactly the same.

It prepares for a later patch to do the stack switch after
error_entry() returns, which simplifies the code further.
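The shared copy-then-settle pattern that both helpers now follow can be
sketched in userspace C (the struct and helper names here are mock,
heavily abbreviated stand-ins, not the kernel's; the real code lives in
arch/x86/kernel/traps.c):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Mock of the tail of struct pt_regs: some GP registers + the 5-word IRET frame. */
struct mock_pt_regs {
	uint64_t r15, r14;		/* abbreviated general registers */
	uint64_t ip, cs, flags, sp, ss;	/* the 5-word IRET frame */
};

/*
 * Sketch of the fixup_bad_iret() pattern after this patch: take a
 * pt_regs pointer, assemble a fixed-up copy at the "real" location,
 * and return the new pt_regs pointer -- the sync_regs() calling
 * convention, with no return address handled in C.
 */
static struct mock_pt_regs *mock_fixup(struct mock_pt_regs *bad_regs,
				       struct mock_pt_regs *new_stack,
				       const uint64_t *iret_target)
{
	struct mock_pt_regs tmp;

	/* Copy the IRET target (5 words) to the temporary storage. */
	memcpy(&tmp.ip, iret_target, 5 * 8);
	/* Copy the remainder of the frame from the passed-in regs. */
	memcpy(&tmp, bad_regs, offsetof(struct mock_pt_regs, ip));
	/* Publish the combined frame at the new location. */
	memcpy(new_stack, &tmp, sizeof(tmp));
	return new_stack;
}
```

The caller (ASM in the real code) is then free to switch to the
returned pointer and push the saved return address itself.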

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S    |  5 ++++-
 arch/x86/include/asm/traps.h |  2 +-
 arch/x86/kernel/traps.c      | 17 ++++++-----------
 3 files changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 4faac48ebec5..e9d896717ab4 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1058,9 +1058,12 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * Pretend that the exception came from user mode: set up pt_regs
 	 * as if we faulted immediately after IRET.
 	 */
-	mov	%rsp, %rdi
+	popq	%r12				/* save return addr in %r12 */
+	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
 	call	fixup_bad_iret
 	mov	%rax, %rsp
+	ENCODE_FRAME_POINTER
+	pushq	%r12
 	jmp	.Lerror_entry_from_usermode_after_swapgs
 SYM_CODE_END(error_entry)
 
diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h
index 35317c5c551d..47ecfff2c83d 100644
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -13,7 +13,7 @@
 #ifdef CONFIG_X86_64
 asmlinkage __visible notrace struct pt_regs *sync_regs(struct pt_regs *eregs);
 asmlinkage __visible notrace
-struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s);
+struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs);
 void __init trap_init(void);
 asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *eregs);
 #endif
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 1563fb995005..9fe9cd9d3eeb 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -892,13 +892,8 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
 }
 #endif
 
-struct bad_iret_stack {
-	void *error_entry_ret;
-	struct pt_regs regs;
-};
-
 asmlinkage __visible noinstr
-struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
+struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs)
 {
 	/*
 	 * This is called from entry_64.S early in handling a fault
@@ -908,19 +903,19 @@ struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
 	 * just below the IRET frame) and we want to pretend that the
 	 * exception came from the IRET target.
 	 */
-	struct bad_iret_stack tmp, *new_stack =
-		(struct bad_iret_stack *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
+	struct pt_regs tmp, *new_stack =
+		(struct pt_regs *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
 
 	/* Copy the IRET target to the temporary storage. */
-	__memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
+	__memcpy(&tmp.ip, (void *)bad_regs->sp, 5*8);
 
 	/* Copy the remainder of the stack from the current stack. */
-	__memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
+	__memcpy(&tmp, bad_regs, offsetof(struct pt_regs, ip));
 
 	/* Update the entry stack */
 	__memcpy(new_stack, &tmp, sizeof(tmp));
 
-	BUG_ON(!user_mode(&new_stack->regs));
+	BUG_ON(!user_mode(new_stack));
 	return new_stack;
 }
 #endif
-- 
2.19.1.6.gb485710b



* [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-04-11  9:35   ` Borislav Petkov
  2022-03-18 14:30 ` [PATCH V4 3/7] x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry Lai Jiangshan
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

error_entry() calls sync_regs() to settle/copy the pt_regs and switches
the stack directly after sync_regs().  But error_entry() itself is also
a function call, so the switching has to handle its return address as
well, which makes the work complicated and tangled.

Switching the stack after error_entry() returns makes the code simpler
and more intuitive.

The behavior/logic is unchanged:
  1) (opt) feed fixup_bad_iret() with the pt_regs pushed by ASM code
  2) (opt) fixup_bad_iret() moves the partial pt_regs up
  3) feed sync_regs() with the pt_regs pushed by ASM code or returned
     by fixup_bad_iret()
  4) sync_regs() copies the whole pt_regs to kernel stack if needed
  5) after error_entry() returns and %rsp is switched, the code runs on
     the kernel stack with the pt_regs

Only the calling pattern changes:
  The old code switches to the copied pt_regs immediately, at two
  places inside error_entry(), while the new code switches to the
  copied pt_regs only once, after error_entry() returns.
  This is correct since sync_regs() doesn't need to be called close
  to the pt_regs it handles.

  The old code stashes the return address of error_entry() in a scratch
  register while the new code doesn't stash it.
  The new code relies on the fact that fixup_bad_iret() and sync_regs()
  don't corrupt the return address of error_entry() on the stack.  But
  the old code also relies on the fact that fixup_bad_iret() and
  sync_regs() don't corrupt their own return addresses.  These are the
  same assumptions, and both hold.

After this change, error_entry() no longer does fancy things with the
stack except in the prologue, which will be fixed in the next patch
("move PUSH_AND_CLEAR_REGS out of error_entry").  This patch and the
next patch can't be swapped, because the next patch relies on this
patch's removal of the fiddling with the return address of
error_entry(); otherwise objtool would complain.
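The five steps above can be modeled in userspace C (mock names, a
pointer variable standing in for %rsp; the real switch is the single
`movq %rax, %rsp` in entry_64.S):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct mock_regs { uint64_t ax, ip; };

/* Step 4: sync_regs() copies the whole pt_regs to the kernel stack. */
static struct mock_regs *mock_sync_regs(struct mock_regs *eregs,
					struct mock_regs *kernel_stack)
{
	memcpy(kernel_stack, eregs, sizeof(*eregs));
	return kernel_stack;	/* returned in %rax in the real code */
}

/* Steps 3 and 5: the caller switches "stacks" only after the call returns. */
static struct mock_regs *mock_idtentry_body(struct mock_regs *entry_stack_regs,
					    struct mock_regs *kernel_stack)
{
	struct mock_regs *rsp;

	/* call error_entry (here reduced to its sync_regs() tail) */
	rsp = mock_sync_regs(entry_stack_regs, kernel_stack);
	/* movq %rax, %rsp: the single switch point, after the return */
	return rsp;
}
```

Because the "switch" happens in the caller, the callee never has to
juggle its own return address.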

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e9d896717ab4..8eff3e6b1687 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -326,6 +326,8 @@ SYM_CODE_END(ret_from_fork)
 .macro idtentry_body cfunc has_error_code:req
 
 	call	error_entry
+	movq	%rax, %rsp			/* switch stack settled by sync_regs() */
+	ENCODE_FRAME_POINTER
 	UNWIND_HINT_REGS
 
 	movq	%rsp, %rdi			/* pt_regs pointer into 1st argument*/
@@ -999,14 +1001,10 @@ SYM_CODE_START_LOCAL(error_entry)
 	/* We have user CR3.  Change to kernel CR3. */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 
+	leaq	8(%rsp), %rdi			/* arg0 = pt_regs pointer */
 .Lerror_entry_from_usermode_after_swapgs:
 	/* Put us onto the real thread stack. */
-	popq	%r12				/* save return addr in %r12 */
-	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
 	call	sync_regs
-	movq	%rax, %rsp			/* switch stack */
-	ENCODE_FRAME_POINTER
-	pushq	%r12
 	RET
 
 	/*
@@ -1038,6 +1036,7 @@ SYM_CODE_START_LOCAL(error_entry)
 	 */
 .Lerror_entry_done_lfence:
 	FENCE_SWAPGS_KERNEL_ENTRY
+	leaq	8(%rsp), %rax			/* return pt_regs pointer */
 	RET
 
 .Lbstep_iret:
@@ -1058,12 +1057,9 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * Pretend that the exception came from user mode: set up pt_regs
 	 * as if we faulted immediately after IRET.
 	 */
-	popq	%r12				/* save return addr in %r12 */
-	movq	%rsp, %rdi			/* arg0 = pt_regs pointer */
+	leaq	8(%rsp), %rdi			/* arg0 = pt_regs pointer */
 	call	fixup_bad_iret
-	mov	%rax, %rsp
-	ENCODE_FRAME_POINTER
-	pushq	%r12
+	mov	%rax, %rdi
 	jmp	.Lerror_entry_from_usermode_after_swapgs
 SYM_CODE_END(error_entry)
 
-- 
2.19.1.6.gb485710b



* [PATCH V4 3/7] x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 4/7] x86/entry: Move cld to the start of idtentry Lai Jiangshan
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Moving PUSH_AND_CLEAR_REGS out of error_entry doesn't change any
functionality.

It makes error_entry() no longer fiddle with the stack.

It does enlarge the code size:

size arch/x86/entry/entry_64.o.before:
   text	   data	    bss	    dec	    hex	filename
  17916	    384	      0	  18300	   477c	arch/x86/entry/entry_64.o

size --format=SysV arch/x86/entry/entry_64.o.before:
.entry.text                      5528      0
.orc_unwind                      6456      0
.orc_unwind_ip                   4304      0

size arch/x86/entry/entry_64.o.after:
   text	   data	    bss	    dec	    hex	filename
  26868	    384	      0	  27252	   6a74	arch/x86/entry/entry_64.o

size --format=SysV arch/x86/entry/entry_64.o.after:
.entry.text                      8200      0
.orc_unwind                     10224      0
.orc_unwind_ip                   6816      0

But .entry.text on x86_64 is 2M aligned, so enlarging it to 8.2k
doesn't enlarge the final text size.

The .orc_unwind[_ip] tables are enlarged because the change adds many
pushes.

This prepares for not calling error_entry() from XENPV in a later patch
and for converting the whole error_entry() into C code in the future.
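Why the 2M alignment absorbs the growth can be checked with a little
alignment arithmetic, using the .entry.text sizes from the `size`
output above (this sketch assumes the 2M alignment stated in the
changelog):

```c
#include <assert.h>
#include <stdint.h>

/* Round x up to the next multiple of a power-of-two alignment a. */
static uint64_t align_up(uint64_t x, uint64_t a)
{
	return (x + a - 1) & ~(a - 1);
}
```

Both 5528 bytes (before) and 8200 bytes (after) round up to the same
single 2 MiB slot, so the padded size of the section is unchanged.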

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 8eff3e6b1687..666109d56f6b 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -325,6 +325,9 @@ SYM_CODE_END(ret_from_fork)
  */
 .macro idtentry_body cfunc has_error_code:req
 
+	PUSH_AND_CLEAR_REGS
+	ENCODE_FRAME_POINTER
+
 	call	error_entry
 	movq	%rax, %rsp			/* switch stack settled by sync_regs() */
 	ENCODE_FRAME_POINTER
@@ -987,8 +990,6 @@ SYM_CODE_END(paranoid_exit)
 SYM_CODE_START_LOCAL(error_entry)
 	UNWIND_HINT_FUNC
 	cld
-	PUSH_AND_CLEAR_REGS save_ret=1
-	ENCODE_FRAME_POINTER 8
 	testb	$3, CS+8(%rsp)
 	jz	.Lerror_kernelspace
 
-- 
2.19.1.6.gb485710b



* [PATCH V4 4/7] x86/entry: Move cld to the start of idtentry
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
                   ` (2 preceding siblings ...)
  2022-03-18 14:30 ` [PATCH V4 3/7] x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 5/7] x86/entry: Don't call error_entry for XENPV Lai Jiangshan
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

Move cld to be next to ASM_CLAC.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 666109d56f6b..8121b9f3fceb 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -360,6 +360,7 @@ SYM_CODE_START(\asmsym)
 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
 	ENDBR
 	ASM_CLAC
+	cld
 
 	.if \has_error_code == 0
 		pushq	$-1			/* ORIG_RAX: no syscall to restart */
@@ -428,6 +429,7 @@ SYM_CODE_START(\asmsym)
 	UNWIND_HINT_IRET_REGS
 	ENDBR
 	ASM_CLAC
+	cld
 
 	pushq	$-1			/* ORIG_RAX: no syscall to restart */
 
@@ -484,6 +486,7 @@ SYM_CODE_START(\asmsym)
 	UNWIND_HINT_IRET_REGS
 	ENDBR
 	ASM_CLAC
+	cld
 
 	/*
 	 * If the entry is from userspace, switch stacks and treat it as
@@ -546,6 +549,7 @@ SYM_CODE_START(\asmsym)
 	UNWIND_HINT_IRET_REGS offset=8
 	ENDBR
 	ASM_CLAC
+	cld
 
 	/* paranoid_entry returns GS information for paranoid_exit in EBX. */
 	call	paranoid_entry
@@ -871,7 +875,6 @@ SYM_CODE_END(xen_failsafe_callback)
  */
 SYM_CODE_START_LOCAL(paranoid_entry)
 	UNWIND_HINT_FUNC
-	cld
 	PUSH_AND_CLEAR_REGS save_ret=1
 	ENCODE_FRAME_POINTER 8
 
@@ -989,7 +992,6 @@ SYM_CODE_END(paranoid_exit)
  */
 SYM_CODE_START_LOCAL(error_entry)
 	UNWIND_HINT_FUNC
-	cld
 	testb	$3, CS+8(%rsp)
 	jz	.Lerror_kernelspace
 
@@ -1123,6 +1125,7 @@ SYM_CODE_START(asm_exc_nmi)
 	 */
 
 	ASM_CLAC
+	cld
 
 	/* Use %rdx as our temp variable throughout */
 	pushq	%rdx
@@ -1142,7 +1145,6 @@ SYM_CODE_START(asm_exc_nmi)
 	 */
 
 	swapgs
-	cld
 	FENCE_SWAPGS_USER_ENTRY
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
 	movq	%rsp, %rdx
-- 
2.19.1.6.gb485710b



* [PATCH V4 5/7] x86/entry: Don't call error_entry for XENPV
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
                   ` (3 preceding siblings ...)
  2022-03-18 14:30 ` [PATCH V4 4/7] x86/entry: Move cld to the start of idtentry Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 6/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS Lai Jiangshan
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

When in XENPV, the kernel is already on the task stack, and it can't
fault in native_iret() or native_load_gs_index() since XENPV uses its
own pvops for iret and load_gs_index().  It doesn't need to switch CR3
either.  So there is no reason to call error_entry() in XENPV.
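The effect of the ALTERNATIVE used in the hunk below can be mocked in C
as a feature-gated branch (all names here are hypothetical mocks; the
real mechanism patches the instructions in place at boot, so there is
no runtime branch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mock of the X86_FEATURE_XENPV test resolved once by the alternatives code. */
static bool mock_cpu_is_xenpv;

static int error_entry_calls;

/* Stand-in for error_entry(): copy the frame and return the new stack. */
static uint64_t *mock_error_entry(uint64_t *entry_regs, uint64_t *task_regs)
{
	error_entry_calls++;
	*task_regs = *entry_regs;
	return task_regs;
}

/*
 * ALTERNATIVE "call error_entry; movq %rax, %rsp", "", X86_FEATURE_XENPV:
 * native keeps the call plus stack switch; XENPV replaces both with
 * nothing because it already runs on the task stack.
 */
static uint64_t *mock_idtentry_body(uint64_t *rsp, uint64_t *task_stack)
{
	if (!mock_cpu_is_xenpv)
		rsp = mock_error_entry(rsp, task_stack);
	return rsp;
}
```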

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S | 13 +++++++++++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 8121b9f3fceb..e9fe9f00d17c 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -328,8 +328,17 @@ SYM_CODE_END(ret_from_fork)
 	PUSH_AND_CLEAR_REGS
 	ENCODE_FRAME_POINTER
 
-	call	error_entry
-	movq	%rax, %rsp			/* switch stack settled by sync_regs() */
+	/*
+	 * Call error_entry and switch stack settled by sync_regs().
+	 *
+	 * When in XENPV, it is already in the task stack, and it can't fault
+	 * for native_iret() nor native_load_gs_index() since XENPV uses its
+	 * own pvops for iret and load_gs_index().  And it doesn't need to
+	 * switch CR3.  So it can skip invoking error_entry().
+	 */
+	ALTERNATIVE "call error_entry; movq %rax, %rsp", \
+		"", X86_FEATURE_XENPV
+
 	ENCODE_FRAME_POINTER
 	UNWIND_HINT_REGS
 
-- 
2.19.1.6.gb485710b



* [PATCH V4 6/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
                   ` (4 preceding siblings ...)
  2022-03-18 14:30 ` [PATCH V4 5/7] x86/entry: Don't call error_entry for XENPV Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-03-18 14:30 ` [PATCH V4 7/7] x86/entry: Use idtentry macro for entry_INT80_compat Lai Jiangshan
  2022-04-06 15:57 ` [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Kirill A. Shutemov

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

XENPV doesn't use swapgs_restore_regs_and_return_to_usermode(),
error_entry() or entry_SYSENTER_compat(), so the PV-aware SWAPGS in
them can be changed to plain swapgs.  There are no users of SWAPGS
left after this change.

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S        | 6 +++---
 arch/x86/entry/entry_64_compat.S | 2 +-
 arch/x86/include/asm/irqflags.h  | 8 --------
 3 files changed, 4 insertions(+), 12 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index e9fe9f00d17c..9e8d0e259a7d 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1008,7 +1008,7 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * We entered from user mode or we're pretending to have entered
 	 * from user mode due to an IRET fault.
 	 */
-	SWAPGS
+	swapgs
 	FENCE_SWAPGS_USER_ENTRY
 	/* We have user CR3.  Change to kernel CR3. */
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
@@ -1040,7 +1040,7 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * gsbase and proceed.  We'll fix up the exception and land in
 	 * .Lgs_change's error handler with kernel gsbase.
 	 */
-	SWAPGS
+	swapgs
 
 	/*
 	 * Issue an LFENCE to prevent GS speculation, regardless of whether it is a
@@ -1061,7 +1061,7 @@ SYM_CODE_START_LOCAL(error_entry)
 	 * We came from an IRET to user mode, so we have user
 	 * gsbase and CR3.  Switch to kernel gsbase and CR3:
 	 */
-	SWAPGS
+	swapgs
 	FENCE_SWAPGS_USER_ENTRY
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
 
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 4fdb007cddbd..c5aeb0819707 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -50,7 +50,7 @@ SYM_CODE_START(entry_SYSENTER_compat)
 	UNWIND_HINT_EMPTY
 	ENDBR
 	/* Interrupts are off on entry. */
-	SWAPGS
+	swapgs
 
 	pushq	%rax
 	SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
diff --git a/arch/x86/include/asm/irqflags.h b/arch/x86/include/asm/irqflags.h
index 111104d1c2cd..7793e52d6237 100644
--- a/arch/x86/include/asm/irqflags.h
+++ b/arch/x86/include/asm/irqflags.h
@@ -137,14 +137,6 @@ static __always_inline void arch_local_irq_restore(unsigned long flags)
 	if (!arch_irqs_disabled_flags(flags))
 		arch_local_irq_enable();
 }
-#else
-#ifdef CONFIG_X86_64
-#ifdef CONFIG_XEN_PV
-#define SWAPGS	ALTERNATIVE "swapgs", "", X86_FEATURE_XENPV
-#else
-#define SWAPGS	swapgs
-#endif
-#endif
 #endif /* !__ASSEMBLY__ */
 
 #endif
-- 
2.19.1.6.gb485710b



* [PATCH V4 7/7] x86/entry: Use idtentry macro for entry_INT80_compat
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
                   ` (5 preceding siblings ...)
  2022-03-18 14:30 ` [PATCH V4 6/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS Lai Jiangshan
@ 2022-03-18 14:30 ` Lai Jiangshan
  2022-04-06 15:57 ` [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-03-18 14:30 UTC (permalink / raw)
  To: linux-kernel
  Cc: Borislav Petkov, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Joerg Roedel, Chang S. Bae, Jan Kiszka

From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

entry_INT80_compat is identical to the idtentry macro except for some
special handling of %rax in the prologue.

Add that prologue handling to idtentry and use idtentry for
entry_INT80_compat.
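The `movl %eax, %eax` zero-extension carried into the macro can be
illustrated in C: on x86-64, writing a 32-bit sub-register clears the
upper 32 bits of the full register, which truncation to uint32_t
models exactly:

```c
#include <assert.h>
#include <stdint.h>

/* Model of "movl %eax, %eax": keep the low 32 bits, zero the high 32. */
static uint64_t zero_extend_eax(uint64_t rax)
{
	return (uint32_t)rax;
}
```

So even if user tracing left garbage in the high bits of RAX, the saved
syscall number is a clean 32-bit value.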

Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
---
 arch/x86/entry/entry_64.S        |  18 ++++++
 arch/x86/entry/entry_64_compat.S | 103 -------------------------------
 arch/x86/include/asm/idtentry.h  |  47 ++++++++++++++
 arch/x86/include/asm/proto.h     |   4 --
 4 files changed, 65 insertions(+), 107 deletions(-)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 9e8d0e259a7d..6ac070378d8b 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -375,6 +375,24 @@ SYM_CODE_START(\asmsym)
 		pushq	$-1			/* ORIG_RAX: no syscall to restart */
 	.endif
 
+	.if \vector == IA32_SYSCALL_VECTOR
+		/*
+		 * User tracing code (ptrace or signal handlers) might assume
+		 * that the saved RAX contains a 32-bit number when we're
+		 * invoking a 32-bit syscall.  Just in case the high bits are
+		 * nonzero, zero-extend the syscall number.  (This could almost
+		 * certainly be deleted with no ill effects.)
+		 */
+		movl	%eax, %eax
+
+		/*
+		 * do_int80_syscall_32() expects regs->orig_ax to be user ax,
+		 * and regs->ax to be $-ENOSYS.
+		 */
+		movq	%rax, (%rsp)
+		movq	$-ENOSYS, %rax
+	.endif
+
 	.if \vector == X86_TRAP_BP
 		/*
 		 * If coming from kernel space, create a 6-word gap to allow the
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index c5aeb0819707..6866151bbef3 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -315,106 +315,3 @@ sysret32_from_system_call:
 	swapgs
 	sysretl
 SYM_CODE_END(entry_SYSCALL_compat)
-
-/*
- * 32-bit legacy system call entry.
- *
- * 32-bit x86 Linux system calls traditionally used the INT $0x80
- * instruction.  INT $0x80 lands here.
- *
- * This entry point can be used by 32-bit and 64-bit programs to perform
- * 32-bit system calls.  Instances of INT $0x80 can be found inline in
- * various programs and libraries.  It is also used by the vDSO's
- * __kernel_vsyscall fallback for hardware that doesn't support a faster
- * entry method.  Restarted 32-bit system calls also fall back to INT
- * $0x80 regardless of what instruction was originally used to do the
- * system call.
- *
- * This is considered a slow path.  It is not used by most libc
- * implementations on modern hardware except during process startup.
- *
- * Arguments:
- * eax  system call number
- * ebx  arg1
- * ecx  arg2
- * edx  arg3
- * esi  arg4
- * edi  arg5
- * ebp  arg6
- */
-SYM_CODE_START(entry_INT80_compat)
-	UNWIND_HINT_EMPTY
-	ENDBR
-	/*
-	 * Interrupts are off on entry.
-	 */
-	ASM_CLAC			/* Do this early to minimize exposure */
-	SWAPGS
-
-	/*
-	 * User tracing code (ptrace or signal handlers) might assume that
-	 * the saved RAX contains a 32-bit number when we're invoking a 32-bit
-	 * syscall.  Just in case the high bits are nonzero, zero-extend
-	 * the syscall number.  (This could almost certainly be deleted
-	 * with no ill effects.)
-	 */
-	movl	%eax, %eax
-
-	/* switch to thread stack expects orig_ax and rdi to be pushed */
-	pushq	%rax			/* pt_regs->orig_ax */
-	pushq	%rdi			/* pt_regs->di */
-
-	/* Need to switch before accessing the thread stack. */
-	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-
-	/* In the Xen PV case we already run on the thread stack. */
-	ALTERNATIVE "", "jmp .Lint80_keep_stack", X86_FEATURE_XENPV
-
-	movq	%rsp, %rdi
-	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	pushq	6*8(%rdi)		/* regs->ss */
-	pushq	5*8(%rdi)		/* regs->rsp */
-	pushq	4*8(%rdi)		/* regs->eflags */
-	pushq	3*8(%rdi)		/* regs->cs */
-	pushq	2*8(%rdi)		/* regs->ip */
-	pushq	1*8(%rdi)		/* regs->orig_ax */
-	pushq	(%rdi)			/* pt_regs->di */
-.Lint80_keep_stack:
-
-	pushq	%rsi			/* pt_regs->si */
-	xorl	%esi, %esi		/* nospec   si */
-	pushq	%rdx			/* pt_regs->dx */
-	xorl	%edx, %edx		/* nospec   dx */
-	pushq	%rcx			/* pt_regs->cx */
-	xorl	%ecx, %ecx		/* nospec   cx */
-	pushq	$-ENOSYS		/* pt_regs->ax */
-	pushq   %r8			/* pt_regs->r8 */
-	xorl	%r8d, %r8d		/* nospec   r8 */
-	pushq   %r9			/* pt_regs->r9 */
-	xorl	%r9d, %r9d		/* nospec   r9 */
-	pushq   %r10			/* pt_regs->r10*/
-	xorl	%r10d, %r10d		/* nospec   r10 */
-	pushq   %r11			/* pt_regs->r11 */
-	xorl	%r11d, %r11d		/* nospec   r11 */
-	pushq   %rbx                    /* pt_regs->rbx */
-	xorl	%ebx, %ebx		/* nospec   rbx */
-	pushq   %rbp                    /* pt_regs->rbp */
-	xorl	%ebp, %ebp		/* nospec   rbp */
-	pushq   %r12                    /* pt_regs->r12 */
-	xorl	%r12d, %r12d		/* nospec   r12 */
-	pushq   %r13                    /* pt_regs->r13 */
-	xorl	%r13d, %r13d		/* nospec   r13 */
-	pushq   %r14                    /* pt_regs->r14 */
-	xorl	%r14d, %r14d		/* nospec   r14 */
-	pushq   %r15                    /* pt_regs->r15 */
-	xorl	%r15d, %r15d		/* nospec   r15 */
-
-	UNWIND_HINT_REGS
-
-	cld
-
-	movq	%rsp, %rdi
-	call	do_int80_syscall_32
-	jmp	swapgs_restore_regs_and_return_to_usermode
-SYM_CODE_END(entry_INT80_compat)
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 7924f27f5c8b..fac5db38c895 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -206,6 +206,20 @@ __visible noinstr void func(struct pt_regs *regs,			\
 									\
 static noinline void __##func(struct pt_regs *regs, u32 vector)
 
+/**
+ * DECLARE_IDTENTRY_IA32_EMULATION - Declare functions for int80
+ * @vector:	Vector number (ignored for C)
+ * @asm_func:	Function name of the entry point
+ * @cfunc:	The C handler called from the ASM entry point (ignored for C)
+ *
+ * Declares two functions:
+ * - The ASM entry point: asm_func
+ * - The XEN PV trap entry point: xen_##asm_func (maybe unused)
+ */
+#define DECLARE_IDTENTRY_IA32_EMULATION(vector, asm_func, cfunc)	\
+	asmlinkage void asm_func(void);					\
+	asmlinkage void xen_##asm_func(void)
+
 /**
  * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points
  * @vector:	Vector number (ignored for C)
@@ -432,6 +446,35 @@ __visible noinstr void func(struct pt_regs *regs,			\
 #define DECLARE_IDTENTRY_ERRORCODE(vector, func)			\
 	idtentry vector asm_##func func has_error_code=1
 
+/*
+ * 32-bit legacy system call entry.
+ *
+ * 32-bit x86 Linux system calls traditionally used the INT $0x80
+ * instruction.  INT $0x80 lands here.
+ *
+ * This entry point can be used by 32-bit and 64-bit programs to perform
+ * 32-bit system calls.  Instances of INT $0x80 can be found inline in
+ * various programs and libraries.  It is also used by the vDSO's
+ * __kernel_vsyscall fallback for hardware that doesn't support a faster
+ * entry method.  Restarted 32-bit system calls also fall back to INT
+ * $0x80 regardless of what instruction was originally used to do the
+ * system call.
+ *
+ * This is considered a slow path.  It is not used by most libc
+ * implementations on modern hardware except during process startup.
+ *
+ * Arguments:
+ * eax  system call number
+ * ebx  arg1
+ * ecx  arg2
+ * edx  arg3
+ * esi  arg4
+ * edi  arg5
+ * ebp  arg6
+ */
+#define DECLARE_IDTENTRY_IA32_EMULATION(vector, asm_func, cfunc)	\
+	idtentry vector asm_func cfunc has_error_code=0
+
 /* Special case for 32bit IRET 'trap'. Do not emit ASM code */
 #define DECLARE_IDTENTRY_SW(vector, func)
 
@@ -638,6 +681,10 @@ DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	common_interrupt);
 DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	spurious_interrupt);
 #endif
 
+#ifdef CONFIG_IA32_EMULATION
+DECLARE_IDTENTRY_IA32_EMULATION(IA32_SYSCALL_VECTOR,	entry_INT80_compat, do_int80_syscall_32);
+#endif
+
 /* System vector entry points */
 #ifdef CONFIG_X86_LOCAL_APIC
 DECLARE_IDTENTRY_SYSVEC(ERROR_APIC_VECTOR,		sysvec_error_interrupt);
diff --git a/arch/x86/include/asm/proto.h b/arch/x86/include/asm/proto.h
index feed36d44d04..c4d331fe65ff 100644
--- a/arch/x86/include/asm/proto.h
+++ b/arch/x86/include/asm/proto.h
@@ -28,10 +28,6 @@ void entry_SYSENTER_compat(void);
 void __end_entry_SYSENTER_compat(void);
 void entry_SYSCALL_compat(void);
 void entry_SYSCALL_compat_safe_stack(void);
-void entry_INT80_compat(void);
-#ifdef CONFIG_XEN_PV
-void xen_entry_INT80_compat(void);
-#endif
 #endif
 
 void x86_configure_nx(void);
-- 
2.19.1.6.gb485710b


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH V4 0/7] x86/entry: Clean up entry code
  2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
                   ` (6 preceding siblings ...)
  2022-03-18 14:30 ` [PATCH V4 7/7] x86/entry: Use idtentry macro for entry_INT80_compat Lai Jiangshan
@ 2022-04-06 15:57 ` Lai Jiangshan
  7 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-04-06 15:57 UTC (permalink / raw)
  To: LKML, Borislav Petkov, Thomas Gleixner
  Cc: Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski, X86 ML, Lai Jiangshan

Hello, Borislav

Could you please review it again? The patches still apply cleanly
on the newest tip/master, 7bcafc1e843a ("Merge x86/cpu into
tip/master"), and work well.  Almost nothing has changed since your
last review except squashing and reordering.


Hello, tglx

I'd like to hear your views on the patches.  It is part of the
patchset that converts ASM code to C code, which I think is a nice
complement to your previous excellent x86/entry work.  It reduces the
ASM code without any functional change and makes the entry code more
readable and maintainable.  I came up with the idea while reviewing
your patches.


Thanks
Lai


On Fri, Mar 18, 2022 at 10:29 PM Lai Jiangshan <jiangshanlai@gmail.com> wrote:
>
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
>
> This patchset moves the stack-switch code to the place where
> error_entry() return, unravels error_entry() from XENpv and makes
> entry_INT80_compat use idtentry macro.
>
> This patchset is highly related to XENpv, because it does the extra
> cleanup to convert SWAPGS to swapgs after major cleanup is done.
>
> The patches are the 4th version to pick patches from the patchset
> https://lore.kernel.org/lkml/20211126101209.8613-1-jiangshanlai@gmail.com/
> which converts ASM code to C code.  These patches are prepared for that
> purpose.  But this patchset has it own value: it simplifies the stack
> switch, avoids leaving the old stack inside a function call, and
> separates XENpv code with native code without adding new code.
>
> Peter said in V3:
> >       So AFAICT these patches are indeed correct.
> >
> >       I'd love for some of the other x86 people to also look at this,
> >       but a tentative ACK on this.
>
> Other interactions in V3:
>         Peter raised several questions and I think I answered them and I
>         don't think the code need to be updated unless I missed some
>         points. (Except reordering the patches)
>
>         Josh asked to remove UNWIND_HINT_REGS in patch5, but I think
>         UNWIND_HINT_REGS is old code before this patchset and I don't
>         want to do a cleanup that is not relate to preparing converting
>         ASM code C code in this patchset.  He also asked to remove
>         ENCODE_FRAME_POINTER in xenpv case, and I think it just
>         complicates the code for just optimizing out a single assignment
>         to %rbp.  I would not always stick to these reasons of mine,
>         but I just keep the code unchanged since he hasn't emphasized it
>         again nor other people has requested it.
>
> Changed from V3:
>         Only reorder the int80 thing as the last patch to make patches
>         ordering more natural. (Both orders are correct)
>
> Changed from V2:
>         Make the patch of folding int80 thing as the first patch
>         Add more changelog in "Switch the stack after error_entry() returns"
>
> Changed from V1
>         Squash cleanup patches converting SWAPGS to swapgs into one patch
>
>         Use my official email address (Ant Group).  The work is backed
>         by my company and I was incorrectly misunderstood that
>         XXX@linux.alibaba.com is the only portal for opensource work
>         in the corporate group.
>
> [V3]: https://lore.kernel.org/lkml/20220315073949.7541-1-jiangshanlai@gmail.com/
> [V2]: https://lore.kernel.org/lkml/20220303035434.20471-1-jiangshanlai@gmail.com/
> [V1]: https://lore.kernel.org/lkml/20211208110833.65366-1-jiangshanlai@gmail.com/
>
> Lai Jiangshan (7):
>   x86/traps: Move pt_regs only in fixup_bad_iret()
>   x86/entry: Switch the stack after error_entry() returns
>   x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry
>   x86/entry: Move cld to the start of idtentry
>   x86/entry: Don't call error_entry for XENPV
>   x86/entry: Convert SWAPGS to swapgs and remove the definition of
>     SWAPGS
>   x86/entry: Use idtentry macro for entry_INT80_compat
>
>  arch/x86/entry/entry_64.S        |  61 +++++++++++++-----
>  arch/x86/entry/entry_64_compat.S | 105 +------------------------------
>  arch/x86/include/asm/idtentry.h  |  47 ++++++++++++++
>  arch/x86/include/asm/irqflags.h  |   8 ---
>  arch/x86/include/asm/proto.h     |   4 --
>  arch/x86/include/asm/traps.h     |   2 +-
>  arch/x86/kernel/traps.c          |  17 ++---
>  7 files changed, 100 insertions(+), 144 deletions(-)
>
> --
> 2.19.1.6.gb485710b
>


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
@ 2022-04-06 19:00   ` Borislav Petkov
  2022-04-07  7:03     ` Lai Jiangshan
  2022-04-11  9:36   ` Borislav Petkov
  1 sibling, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2022-04-06 19:00 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Fenghua Yu, Thomas Tai, Chang S. Bae,
	Masami Hiramatsu

On Fri, Mar 18, 2022 at 10:30:10PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> fixup_bad_iret() and sync_regs() have similar arguments and do similar
> work that copies full or partial pt_regs to a place and switches stack
> after return.  They are quite the same, but fixup_bad_iret() not only
> copies the pt_regs but also the return address of error_entry() while

What return address of error_entry()? You lost me here.

fixup_bad_iret() moves the stack frame while sync_regs() switches to the
thread stack. I have no clue what you mean.

> sync_regs() copies the pt_regs only and the return address of
> error_entry() was preserved and handled in ASM code.

Nope, no idea.
 
> This patch makes fixup_bad_iret() work like sync_regs() and the

Avoid having "This patch" or "This commit" in the commit message. It is
tautologically useless.

Also, do

$ git grep 'This patch' Documentation/process

for more details.

> handling of the return address of error_entry() is moved in ASM code.
> 
> It removes the need to use the struct bad_iret_stack, simplifies
> fixup_bad_iret() and makes the ASM error_entry() call fixup_bad_iret()
> as the same as calling sync_regs() which adds readability because
> the calling patterns are exactly the same.

So fixup_bad_iret() gets the stack ptr passed in by doing:

        mov     %rsp, %rdi
        call    fixup_bad_iret
        mov     %rax, %rsp


and error_entry()

        movq    %rsp, %rdi                      /* arg0 = pt_regs pointer */
        call    sync_regs
        movq    %rax, %rsp                      /* switch stack */

the same way.

Confused.

> It is prepared for later patch to do the stack switch after the
> error_entry() which simplifies the code further.

Looking at your next patch, is all this dance done just so that you can
do

	leaq    8(%rsp), %rdi

in order to pass in pt_regs to both functions?

And get rid of the saving/restoring %r12?

Is that what the whole noise is about?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-04-06 19:00   ` Borislav Petkov
@ 2022-04-07  7:03     ` Lai Jiangshan
  2022-04-07  8:22       ` Borislav Petkov
  0 siblings, 1 reply; 17+ messages in thread
From: Lai Jiangshan @ 2022-04-07  7:03 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, X86 ML, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Fenghua Yu, Thomas Tai, Chang S. Bae,
	Masami Hiramatsu

On Thu, Apr 7, 2022 at 3:00 AM Borislav Petkov <bp@alien8.de> wrote:
>
> On Fri, Mar 18, 2022 at 10:30:10PM +0800, Lai Jiangshan wrote:
> > From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> >
> > fixup_bad_iret() and sync_regs() have similar arguments and do similar
> > work that copies full or partial pt_regs to a place and switches stack
> > after return.  They are quite the same, but fixup_bad_iret() not only
> > copies the pt_regs but also the return address of error_entry() while
>
> What return address of error_entry()? You lost me here.

"return address" is the return address of a function which is
error_entry() here.

https://gcc.gnu.org/onlinedocs/gcc/Return-Address.html

Or error_entry_ret in struct bad_iret_stack which is being removed
in the change.


>
> So fixup_bad_iret() gets the stack ptr passed in by doing:
>
>         mov     %rsp, %rdi
>         call    fixup_bad_iret
>         mov     %rax, %rsp
>
>
> and error_regs()
>
>         movq    %rsp, %rdi                      /* arg0 = pt_regs pointer */
>         call    sync_regs
>         movq    %rax, %rsp                      /* switch stack */
>
> the same way.

They are not called the same way.

sync_regs() is called after the return address of error_entry() has
been popped into %r12, while fixup_bad_iret() is called with the
return address of error_entry() still on the stack.  And the
prototypes of fixup_bad_iret() and sync_regs() are different, which
also means they are not called the same way.

After this change, they become the same.

IMO, sync_regs() is graceful while fixup_bad_iret() is not a proper
(pure) C function, because it handles the return address of its
caller, which is better done by the compiler or by ASM code.

>
> Confused.
>
> > It is prepared for later patch to do the stack switch after the
> > error_entry() which simplifies the code further.
>
> Looking at your next patch, is all this dance done just so that you can
> do
>
>         leaq    8(%rsp), %rdi
>
> in order to pass in pt_regs to both functions?
>
> And get rid of the saving/restoring %r12?
>
> Is that what the whole noise is about?

The point is to make fixup_bad_iret() a normal C function that is
called the same way as sync_regs().

The next patch makes error_entry() look like a bunch of ASM code
compiled from a C function and paves the road to really converting it
to a C function.

Getting rid of saving/restoring the return address in %r12 is
necessary since a C function can't save/restore its own return
address.
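To make the shape concrete, here is a rough standalone C sketch of the
signature change (with simplified, hypothetical stand-ins for the
kernel types — not the real implementation):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct pt_regs. */
struct pt_regs { unsigned long di, ip, sp; };

/*
 * Old shape: fixup_bad_iret() took and returned a struct that also
 * carried error_entry()'s return address around.
 */
struct bad_iret_stack {
	void *error_entry_ret;	/* return address of error_entry() */
	struct pt_regs regs;
};

/*
 * New shape: like sync_regs(), it takes and returns a plain
 * struct pt_regs pointer; the return address stays on the stack and
 * is handled purely by the ASM caller.
 */
static struct pt_regs *fixup_bad_iret_sketch(struct pt_regs *bad_regs,
					     struct pt_regs *dst)
{
	memcpy(dst, bad_regs, sizeof(*dst));	/* move the regs */
	return dst;	/* ASM caller does "movq %rax, %rsp" */
}
```

With the old shape, the C code had to know about error_entry_ret; with
the new one, both helpers work on struct pt_regs pointers only.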

Thanks
Lai

>
> --
> Regards/Gruss,
>     Boris.
>
> https://people.kernel.org/tglx/notes-about-netiquette

I'm sorry for the top-posting and for using "This patch".  I will
keep that in mind.


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-04-07  7:03     ` Lai Jiangshan
@ 2022-04-07  8:22       ` Borislav Petkov
  2022-04-07 13:18         ` Borislav Petkov
  0 siblings, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2022-04-07  8:22 UTC (permalink / raw)
  To: Lai Jiangshan, Andy Lutomirski
  Cc: LKML, Peter Zijlstra, Josh Poimboeuf, Thomas Gleixner, X86 ML,
	Lai Jiangshan, Ingo Molnar, Dave Hansen, H. Peter Anvin,
	Fenghua Yu, Thomas Tai, Chang S. Bae, Masami Hiramatsu

On Thu, Apr 07, 2022 at 03:03:08PM +0800, Lai Jiangshan wrote:
> sync_regs() is called before the return address of error_entry()
> popped into %r12 while fixup_bad_iret() is called with the return
> address of error_entry() still on the stack.  And the primitives of
> fixup_bad_iret() and sync_regs() are different which also means
> they are not the same way.
> 
> After this change, they become the same way.
> 
> IMO, sync_regs() is grace while fixup_bad_iret() is a bad C function
> or is not a pure C function because it is handling the return address
> of its parent function which is better done by the compiler or ASM
> code.

Maybe there was a reason it was done this way:

  b645af2d5905 ("x86_64, traps: Rework bad_iret")

although I don't see anything relevant in the text explaining this.

Andy?

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-04-07  8:22       ` Borislav Petkov
@ 2022-04-07 13:18         ` Borislav Petkov
  2022-04-08  1:56           ` Lai Jiangshan
  0 siblings, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2022-04-07 13:18 UTC (permalink / raw)
  To: Lai Jiangshan, Andy Lutomirski
  Cc: LKML, Peter Zijlstra, Josh Poimboeuf, Thomas Gleixner, X86 ML,
	Lai Jiangshan, Ingo Molnar, Dave Hansen, H. Peter Anvin,
	Fenghua Yu, Thomas Tai, Chang S. Bae, Masami Hiramatsu

On Thu, Apr 07, 2022 at 10:22:25AM +0200, Borislav Petkov wrote:
> Maybe there was a reason it was done this way:

Ok, I went and singlestepped this code so that I can see what's going
on.

The second memcpy in fixup_bad_iret() copies the remainder of pt_regs
from the current stack. The result in tmp looks like this:

(gdb) p/x tmp
$10 = {error_entry_ret = 0xffffffff81a00998, regs = {r15 = 0x0, r14 = 0x0, r13 = 0x7fffffffea30, r12 = 0x40002b, bp = 0x40, 
    bx = 0xa, r11 = 0x246, r10 = 0x8, r9 = 0x7fffffffe860, r8 = 0x0, ax = 0x0, cx = 0x0, dx = 0x0, si = 0x7fffffffe860, 
    di = 0x2, orig_ax = 0x30, ip = 0x403000, cs = 0x33, flags = 0x246, sp = 0x8badf00d5aadc0de, ss = 0x33}}

note error_entry_ret which is:

(gdb) x/10i 0xffffffff81a00998
   0xffffffff81a00998 <asm_exc_general_protection+8>:   mov    %rsp,%rdi
   0xffffffff81a0099b <asm_exc_general_protection+11>:  mov    0x78(%rsp),%rsi
   0xffffffff81a009a0 <asm_exc_general_protection+16>:  movq   $0xffffffffffffffff,0x78(%rsp)
   0xffffffff81a009a9 <asm_exc_general_protection+25>:  call   0xffffffff818c8960 <exc_general_protection>
   0xffffffff81a009ae <asm_exc_general_protection+30>:  jmp    0xffffffff81a01030 <error_return>
   0xffffffff81a009b3:  data16 nopw %cs:0x0(%rax,%rax,1)

i.e., the return address into the #GP handler, which was pushed on
the stack when the IRET fault happened and the former called
error_entry().

fixup_bad_iret() then ends up returning this in new_stack:

(gdb) p/x *new_stack
$12 = {error_entry_ret = 0xffffffff81a00998, regs = {r15 = 0x0, r14 = 0x0, r13 = 0x7fffffffea30, r12 = 0x40002b, bp = 0x40, 
    bx = 0xa, r11 = 0x246, r10 = 0x8, r9 = 0x7fffffffe860, r8 = 0x0, ax = 0x0, cx = 0x0, dx = 0x0, si = 0x7fffffffe860, 
    di = 0x2, orig_ax = 0x30, ip = 0x403000, cs = 0x33, flags = 0x246, sp = 0x8badf00d5aadc0de, ss = 0x33}}

and when error_entry() does:

        mov     %rax, %rsp

The stack has:

=> 0xffffffff81a0102d <error_entry+173>:        jmp    0xffffffff81a00fd2 <error_entry+82>
0xfffffe0000250f50:     0xffffffff81a00998      0x0000000000000000
0xfffffe0000250f60:     0x0000000000000000      0x00007fffffffea30
0xfffffe0000250f70:     0x000000000040002b      0x0000000000000040
0xfffffe0000250f80:     0x000000000000000a      0x0000000000000246
0xfffffe0000250f90:     0x0000000000000008      0x00007fffffffe860

and you can recognize new_stack there.

Then it does:

        jmp     error_entry_from_usermode_after_swapgs

where it does:

error_entry_from_usermode_after_swapgs:
        /* Put us onto the real thread stack. */
        popq    %r12                            /* save return addr in %12 */
        movq    %rsp, %rdi                      /* arg0 = pt_regs pointer */
        call    sync_regs
        movq    %rax, %rsp                      /* switch stack */
        ENCODE_FRAME_POINTER
        pushq   %r12
        RET

and in here it uses %r12 to stash the return address 0xffffffff81a00998
while sync_regs() runs.
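Conceptually, what runs in between is simple — here is a rough,
standalone C model (with simplified, hypothetical types; not the
kernel's actual code) of what sync_regs() does:

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for the kernel's struct pt_regs. */
struct pt_regs { unsigned long ip, sp, flags; };

/*
 * Rough model of sync_regs(): copy the register frame to its spot on
 * the thread stack and return the new pointer.  The ASM caller then
 * switches stacks with "movq %rax, %rsp" - which is why the return
 * address of error_entry() has to survive somewhere else (%r12 here)
 * across this call.
 */
static struct pt_regs *sync_regs_model(struct pt_regs *eregs,
				       struct pt_regs *thread_regs)
{
	if (eregs != thread_regs)
		memcpy(thread_regs, eregs, sizeof(*eregs));
	return thread_regs;
}
```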

So yeah, all your patch does is get rid of void *error_entry_ret in

struct bad_iret_stack {
        void *error_entry_ret;
        struct pt_regs regs;
};

So your commit message should have been as simple as:

"Always stash the address error_entry() is going to return to, in %r12
and get rid of the void *error_entry_ret; slot in struct bad_iret_stack
which was supposed to account for it and pt_regs pushed on the stack.

After this, both functions can work on a struct pt_regs pointer
directly."

In any case, I don't see why amluto would do it this way, so this
looks like a sensible cleanup to do.

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-04-07 13:18         ` Borislav Petkov
@ 2022-04-08  1:56           ` Lai Jiangshan
  0 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-04-08  1:56 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: Andy Lutomirski, LKML, Peter Zijlstra, Josh Poimboeuf,
	Thomas Gleixner, X86 ML, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Fenghua Yu, Thomas Tai, Chang S. Bae,
	Masami Hiramatsu

On Thu, Apr 7, 2022 at 9:19 PM Borislav Petkov <bp@alien8.de> wrote:
>
> On Thu, Apr 07, 2022 at 10:22:25AM +0200, Borislav Petkov wrote:
> > Maybe there was a reason it was done this way:
>
> Ok, I went and singlestepped this code so that I can see what's going
> on.

[....]

>
> So your commit message should have been as simple as:
>
> "Always stash the address error_entry() is going to return to, in %r12
> and get rid of the void *error_entry_ret; slot in struct bad_iret_stack
> which was supposed to account for it and pt_regs pushed on the stack.
>
> After this, both functions can work on a struct pt_regs pointer
> directly."

Thank you for elaborating on the details and I will use this changelog.

Thanks
Lai

>
> In any case, I don't see why amluto would do this so this looks like a
> sensible cleanup to do.
>


* Re: [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns
  2022-03-18 14:30 ` [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns Lai Jiangshan
@ 2022-04-11  9:35   ` Borislav Petkov
  2022-04-11 11:48     ` Lai Jiangshan
  0 siblings, 1 reply; 17+ messages in thread
From: Borislav Petkov @ 2022-04-11  9:35 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

On Fri, Mar 18, 2022 at 10:30:11PM +0800, Lai Jiangshan wrote:
> From: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> 
> error_entry() calls sync_regs() to settle/copy the pt_regs and switches
> the stack directly after sync_regs().  But error_entry() itself is also
> a function call, the switching has to handle the return address of it
> together, which causes the work complicated and tangly.

together, which causes the work complicated and tangly 
Unknown word [tangly] in commit message.

Please restrain yourself when writing commit messages - they're not
write-only but actually for other people to read. It is not friendly to
reviewers to start inventing words and then make me decode your patch
twice:

- once the commit message

- and second time the code

Please use simple and trivial sentences.

> Switching to the stack after error_entry() makes the code simpler and
> intuitive.
> 
> The behavior/logic is unchanged:
>   1) (opt) feed fixup_bad_iret() with the pt_regs pushed by ASM code

opt?

>   2) (opt) fixup_bad_iret() moves the partial pt_regs up
>   3) feed sync_regs() with the pt_regs pushed by ASM code or returned
>      by fixup_bad_iret()
>   4) sync_regs() copies the whole pt_regs to kernel stack if needed
>   5) after error_entry() and switching %rsp, it is in kernel stack with
>      the pt_regs
> 
> Changes only in calling:
>   Old code switches to copied pt_regs immediately twice in
>   error_entry() while new code switches to the copied pt_regs only
>   once after error_entry() returns.
>   It is correct since sync_regs() doesn't need to be called close
>   to the pt_regs it handles.
> 
>   Old code stashes the return-address of error_entry() in a scratch
>   register and new code doesn't stash it.
>   It relies on the fact that fixup_bad_iret() and sync_regs() don't
>   corrupt the return-address of error_entry() on the stack.  But even
>   the old code also relies on the fact that fixup_bad_iret() and
>   sync_regs() don't corrupt the return-address of themselves.
>   They are the same reliances and are assured.

This whole paragraph sounds like unneeded rambling. You need to remain
on the subject in your commit messages. Sounds to me like you need to
read the "Changelog" section here:

Documentation/process/maintainer-tip.rst

> After this change, error_entry() will not do fancy things with the stack
> except when in the prolog which will be fixed in the next patch ("move
> PUSH_AND_CLEAR_REGS out of error_entry").  This patch and the next patch

"This patch" is tautology, as already said.

There's no "next patch" in git.

> can't be swapped because the next patch relies on this patch's stopping
> fiddling with the return-address of error_entry(), otherwise the objtool
> would complain.

If that is the case, then those two should be merged into one!

> Signed-off-by: Lai Jiangshan <jiangshan.ljs@antgroup.com>
> ---
>  arch/x86/entry/entry_64.S | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index e9d896717ab4..8eff3e6b1687 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -326,6 +326,8 @@ SYM_CODE_END(ret_from_fork)
>  .macro idtentry_body cfunc has_error_code:req
>  
>  	call	error_entry
> +	movq	%rax, %rsp			/* switch stack settled by sync_regs() */

"settled" doesn't fit here, try again.

> +	ENCODE_FRAME_POINTER
>  	UNWIND_HINT_REGS
>  
>  	movq	%rsp, %rdi			/* pt_regs pointer into 1st argument*/

...

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret()
  2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
  2022-04-06 19:00   ` Borislav Petkov
@ 2022-04-11  9:36   ` Borislav Petkov
  1 sibling, 0 replies; 17+ messages in thread
From: Borislav Petkov @ 2022-04-11  9:36 UTC (permalink / raw)
  To: Lai Jiangshan
  Cc: linux-kernel, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, x86, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin, Fenghua Yu, Thomas Tai, Chang S. Bae,
	Masami Hiramatsu

On Fri, Mar 18, 2022 at 10:30:10PM +0800, Lai Jiangshan wrote:
> diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
> index 1563fb995005..9fe9cd9d3eeb 100644
> --- a/arch/x86/kernel/traps.c
> +++ b/arch/x86/kernel/traps.c
> @@ -892,13 +892,8 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
>  }
>  #endif
>  
> -struct bad_iret_stack {
> -	void *error_entry_ret;
> -	struct pt_regs regs;
> -};
> -
>  asmlinkage __visible noinstr
> -struct bad_iret_stack *fixup_bad_iret(struct bad_iret_stack *s)
> +struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs)
>  {
>  	/*
>  	 * This is called from entry_64.S early in handling a fault

While at it, unbreak that line:

diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 9fe9cd9d3eeb..28591132e885 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -892,8 +892,7 @@ asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *r
 }
 #endif
 
-asmlinkage __visible noinstr
-struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs)
+asmlinkage __visible noinstr struct pt_regs *fixup_bad_iret(struct pt_regs *bad_regs)
 {
 	/*
 	 * This is called from entry_64.S early in handling a fault

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


* Re: [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns
  2022-04-11  9:35   ` Borislav Petkov
@ 2022-04-11 11:48     ` Lai Jiangshan
  0 siblings, 0 replies; 17+ messages in thread
From: Lai Jiangshan @ 2022-04-11 11:48 UTC (permalink / raw)
  To: Borislav Petkov
  Cc: LKML, Peter Zijlstra, Josh Poimboeuf, Andy Lutomirski,
	Thomas Gleixner, X86 ML, Lai Jiangshan, Ingo Molnar, Dave Hansen,
	H. Peter Anvin

On Mon, Apr 11, 2022 at 5:35 PM Borislav Petkov <bp@alien8.de> wrote:
>
> On Fri, Mar 18, 2022 at 10:30:11PM +0800, Lai Jiangshan wrote:
> > From: Lai Jiangshan <jiangshan.ljs@antgroup.com>

>
> > Switching to the stack after error_entry() makes the code simpler and
> > intuitive.
> >
> > The behavior/logic is unchanged:
> >   1) (opt) feed fixup_bad_iret() with the pt_regs pushed by ASM code
>
> opt?

I meant it as "optional".

I will rewrite it as:

1) feed fixup_bad_iret() with the pt_regs pushed by the ASM code if it
   is a fault caused by a bad IRET.

>
> >   2) (opt) fixup_bad_iret() moves the partial pt_regs up
> >   3) feed sync_regs() with the pt_regs pushed by ASM code or returned
> >      by fixup_bad_iret()
> >   4) sync_regs() copies the whole pt_regs to kernel stack if needed
> >   5) after error_entry() and switching %rsp, it is in kernel stack with
> >      the pt_regs


>
> > After this change, error_entry() will not do fancy things with the stack
> > except when in the prolog which will be fixed in the next patch ("move
> > PUSH_AND_CLEAR_REGS out of error_entry").  This patch and the next patch
>
> "This patch" is tautology, as already said.
>
> There's no "next patch" in git.
>
> > can't be swapped because the next patch relies on this patch's stopping
> > fiddling with the return-address of error_entry(), otherwise the objtool
> > would complain.
>
> If that is the case, then those two should me merged into one!

This patch moves the epilog (stack switching) of error_entry() out of
error_entry().  The next patch moves the prolog (pushing pt_regs) out
of error_entry().  They can be separate patches.

I didn't think anything would be wrong if the order of these two
patches were swapped.  Peter Z asked for info about the ordering of
other patches, and when I tried moving the next patch up I saw the
complaint from objtool.

I wanted to explain the ordering of the patches.  That explanation
should have been put in the cover letter instead of the commit
message.

Thanks
Lai


end of thread, other threads:[~2022-04-11 11:49 UTC | newest]

Thread overview: 17+ messages
2022-03-18 14:30 [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 1/7] x86/traps: Move pt_regs only in fixup_bad_iret() Lai Jiangshan
2022-04-06 19:00   ` Borislav Petkov
2022-04-07  7:03     ` Lai Jiangshan
2022-04-07  8:22       ` Borislav Petkov
2022-04-07 13:18         ` Borislav Petkov
2022-04-08  1:56           ` Lai Jiangshan
2022-04-11  9:36   ` Borislav Petkov
2022-03-18 14:30 ` [PATCH V4 2/7] x86/entry: Switch the stack after error_entry() returns Lai Jiangshan
2022-04-11  9:35   ` Borislav Petkov
2022-04-11 11:48     ` Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 3/7] x86/entry: move PUSH_AND_CLEAR_REGS out of error_entry Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 4/7] x86/entry: Move cld to the start of idtentry Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 5/7] x86/entry: Don't call error_entry for XENPV Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 6/7] x86/entry: Convert SWAPGS to swapgs and remove the definition of SWAPGS Lai Jiangshan
2022-03-18 14:30 ` [PATCH V4 7/7] x86/entry: Use idtentry macro for entry_INT80_compat Lai Jiangshan
2022-04-06 15:57 ` [PATCH V4 0/7] x86/entry: Clean up entry code Lai Jiangshan
