* [patch 00/15] x86/entry: Consolidation - Part V
@ 2020-02-25 22:47 Thomas Gleixner
  2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
                   ` (15 more replies)
  0 siblings, 16 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Hi!

This is the fifth batch of a 73-patch series which consolidates the x86
entry code. The broader explanation is in the part I cover letter:

 https://lore.kernel.org/r/20200225213636.689276920@linutronix.de

It applies on top of part IV which can be found here:

 https://lore.kernel.org/r/20200225223321.231477305@linutronix.de

This is the last step of _this_ consolidation work:

  - Get rid of the odd vector number transport via pt_regs for do_IRQ() and
    the spurious interrupt handlers by pushing the plain vector number into
    the error code location and providing it as the second argument to the
    C functions. This also gets rid of the historical adjustment of the
    vector number into the -0x80 to 0x7f range, which has not made any sense
    for at least 15 years but still survived until today (see the sketch
    after this list)

  - Get rid of the special entry code for device interrupts, which can just
    use the common idtentry mechanisms as all other exceptions do.

  - Convert all the system vectors to the IDTENTRY mechanism and get rid of
    the pointless difference in emitting them on 32 and 64 bit

  - Finally move the return from exception work (prepare return to user
    mode and kernel preemption) into C-code and get rid of the ASM gunk.
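
To illustrate the first item: the old scheme pushed (~vector + 0x80) in the
stubs, adjusted by -0x80 on entry and recovered the vector in C via a
bitwise NOT of orig_ax. A minimal user space sketch of that round trip
(illustration only, not kernel code; the new scheme simply pushes the plain
vector):

  #include <stdio.h>

  int main(void)
  {
          for (int vector = 32; vector < 256; vector++) {
                  int pushed  = ~vector + 0x80;     /* stub: pushl $(~vector+0x80) */
                  int orig_ax = pushed - 0x80;      /* entry: addl $-0x80, (%esp) */
                  unsigned char decoded = ~orig_ax; /* handler: ~regs->orig_ax */

                  if (decoded != vector)
                          printf("mismatch at vector %d\n", vector);
          }
          return 0;
  }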

This applies on top of part four which is available here:

   git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git entry-v1-part4

To get parts 1 - 5, pull from here:

   git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git entry-v1-part5

The diffstat for part V is appended below. The overall diffstat summary is:

 50 files changed, 1380 insertions(+), 1264 deletions(-)

but most importantly the overall diffstat for the ASM code is:

 3 files changed, 302 insertions(+), 759 deletions(-)

i.e. 457 lines of ASM gone...

and all IDT entry points now have:

  - a central home in idtentry.h instead of being sprinkled around 10 files

  - a consistent naming scheme also vs. XEN/PV

  - the same entry/exit conventions and protections against all sorts of
    instrumentation, which makes it harder to screw up new entry points

This finally allows moving the syscall entry/exit work into a generic place
and fixing the initial problem of moving the POSIX CPU timers heavy lifting
into thread context. But that's going to be another 25 patches which are
coming once this is resolved.

Thanks,

	tglx

8<---------------
 arch/x86/include/asm/acrn.h             |   11 -
 arch/x86/include/asm/entry_arch.h       |   56 ------
 b/arch/x86/entry/common.c               |   56 ++++--
 b/arch/x86/entry/entry_32.S             |  123 ++-----------
 b/arch/x86/entry/entry_64.S             |  296 +++++---------------------------
 b/arch/x86/hyperv/hv_init.c             |    3 
 b/arch/x86/include/asm/hw_irq.h         |   22 --
 b/arch/x86/include/asm/idtentry.h       |  143 +++++++++++++++
 b/arch/x86/include/asm/irq.h            |    6 
 b/arch/x86/include/asm/irq_work.h       |    1 
 b/arch/x86/include/asm/mshyperv.h       |   14 -
 b/arch/x86/include/asm/traps.h          |   10 -
 b/arch/x86/include/asm/uv/uv_bau.h      |    6 
 b/arch/x86/kernel/apic/apic.c           |   28 ++-
 b/arch/x86/kernel/apic/msi.c            |    3 
 b/arch/x86/kernel/apic/vector.c         |    2 
 b/arch/x86/kernel/cpu/acrn.c            |    6 
 b/arch/x86/kernel/cpu/mce/amd.c         |    2 
 b/arch/x86/kernel/cpu/mce/therm_throt.c |    2 
 b/arch/x86/kernel/cpu/mce/threshold.c   |    2 
 b/arch/x86/kernel/cpu/mshyperv.c        |   18 +
 b/arch/x86/kernel/idt.c                 |   34 +--
 b/arch/x86/kernel/irq.c                 |   21 +-
 b/arch/x86/kernel/irq_work.c            |    3 
 b/arch/x86/kernel/smp.c                 |   10 -
 b/arch/x86/platform/uv/tlb_uv.c         |    2 
 b/arch/x86/xen/enlighten_hvm.c          |    6 
 b/drivers/xen/events/events_base.c      |    3 
 b/include/xen/events.h                  |    7 
 29 files changed, 350 insertions(+), 546 deletions(-)





* [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-26  5:13   ` Andy Lutomirski
  2020-02-26  5:45   ` Brian Gerst
  2020-02-25 22:47 ` [patch 02/15] x86/entry/64: Add ability to switch to IRQ stacks in idtentry Thomas Gleixner
                   ` (14 subsequent siblings)
  15 siblings, 2 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Device interrupts which go through do_IRQ() or the spurious interrupt
handler have their separate entry code on 64 bit for no good reason.

Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
pt_regs. Further, the vector number is forced to fit into a u8 and is
complemented and offset by 0x80 for historical reasons.

Push the vector number into the error code slot instead and hand the plain
vector number to the C functions.

Originally-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S         |   11 ++++++-----
 arch/x86/entry/entry_64.S         |   14 ++++++++------
 arch/x86/include/asm/entry_arch.h |    2 +-
 arch/x86/include/asm/hw_irq.h     |    1 +
 arch/x86/include/asm/irq.h        |    2 +-
 arch/x86/include/asm/traps.h      |    3 ++-
 arch/x86/kernel/apic/apic.c       |   25 +++++++++++++++++++------
 arch/x86/kernel/idt.c             |    2 +-
 arch/x86/kernel/irq.c             |    6 ++----
 9 files changed, 41 insertions(+), 25 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1233,7 +1233,7 @@ SYM_FUNC_END(entry_INT80_32)
 SYM_CODE_START(irq_entries_start)
     vector=FIRST_EXTERNAL_VECTOR
     .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
-	pushl	$(~vector+0x80)			/* Note: always in signed byte range */
+	pushl	$(vector)
     vector=vector+1
 	jmp	common_interrupt
 	.align	8
@@ -1245,7 +1245,7 @@ SYM_CODE_END(irq_entries_start)
 SYM_CODE_START(spurious_entries_start)
     vector=FIRST_SYSTEM_VECTOR
     .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
-	pushl	$(~vector+0x80)			/* Note: always in signed byte range */
+	pushl	$(vector)
     vector=vector+1
 	jmp	common_spurious
 	.align	8
@@ -1254,11 +1254,12 @@ SYM_CODE_END(spurious_entries_start)
 
 SYM_CODE_START_LOCAL(common_spurious)
 	ASM_CLAC
-	addl	$-0x80, (%esp)			/* Adjust vector into the [-256, -1] range */
 	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	movl	%esp, %eax
+	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
+	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
 	call	smp_spurious_interrupt
 	jmp	ret_from_intr
 SYM_CODE_END(common_spurious)
@@ -1271,12 +1272,12 @@ SYM_CODE_END(common_spurious)
 	.p2align CONFIG_X86_L1_CACHE_SHIFT
 SYM_CODE_START_LOCAL(common_interrupt)
 	ASM_CLAC
-	addl	$-0x80, (%esp)			/* Adjust vector into the [-256, -1] range */
-
 	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
 	TRACE_IRQS_OFF
 	movl	%esp, %eax
+	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
+	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
 	call	do_IRQ
 	jmp	ret_from_intr
 SYM_CODE_END(common_interrupt)
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -364,7 +364,7 @@ SYM_CODE_START(irq_entries_start)
     vector=FIRST_EXTERNAL_VECTOR
     .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
 	UNWIND_HINT_IRET_REGS
-	pushq	$(~vector+0x80)			/* Note: always in signed byte range */
+	pushq	$(vector)
 	jmp	common_interrupt
 	.align	8
 	vector=vector+1
@@ -376,7 +376,7 @@ SYM_CODE_START(spurious_entries_start)
     vector=FIRST_SYSTEM_VECTOR
     .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
 	UNWIND_HINT_IRET_REGS
-	pushq	$(~vector+0x80)			/* Note: always in signed byte range */
+	pushq	$(vector)
 	jmp	common_spurious
 	.align	8
 	vector=vector+1
@@ -756,9 +756,10 @@ SYM_CODE_END(interrupt_entry)
  * then jump to common_spurious/interrupt.
  */
 SYM_CODE_START_LOCAL(common_spurious)
-	addq	$-0x80, (%rsp)			/* Adjust vector to [-256, -1] range */
 	call	interrupt_entry
 	UNWIND_HINT_REGS indirect=1
+	movq	ORIG_RAX(%rdi), %rsi		/* get vector from stack */
+	movq	$-1, ORIG_RAX(%rdi)		/* no syscall to restart */
 	call	smp_spurious_interrupt		/* rdi points to pt_regs */
 	jmp	ret_from_intr
 SYM_CODE_END(common_spurious)
@@ -767,10 +768,11 @@ SYM_CODE_END(common_spurious)
 /* common_interrupt is a hotpath. Align it */
 	.p2align CONFIG_X86_L1_CACHE_SHIFT
 SYM_CODE_START_LOCAL(common_interrupt)
-	addq	$-0x80, (%rsp)			/* Adjust vector to [-256, -1] range */
 	call	interrupt_entry
 	UNWIND_HINT_REGS indirect=1
-	call	do_IRQ	/* rdi points to pt_regs */
+	movq	ORIG_RAX(%rdi), %rsi		/* get vector from stack */
+	movq	$-1, ORIG_RAX(%rdi)		/* no syscall to restart */
+	call	do_IRQ				/* rdi points to pt_regs */
 	/* 0(%rsp): old RSP */
 ret_from_intr:
 	DISABLE_INTERRUPTS(CLBR_ANY)
@@ -1019,7 +1021,7 @@ apicinterrupt RESCHEDULE_VECTOR			resche
 #endif
 
 apicinterrupt ERROR_APIC_VECTOR			error_interrupt			smp_error_interrupt
-apicinterrupt SPURIOUS_APIC_VECTOR		spurious_interrupt		smp_spurious_interrupt
+apicinterrupt SPURIOUS_APIC_VECTOR		spurious_apic_interrupt		smp_spurious_apic_interrupt
 
 #ifdef CONFIG_IRQ_WORK
 apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
--- a/arch/x86/include/asm/entry_arch.h
+++ b/arch/x86/include/asm/entry_arch.h
@@ -35,7 +35,7 @@ BUILD_INTERRUPT(kvm_posted_intr_nested_i
 
 BUILD_INTERRUPT(apic_timer_interrupt,LOCAL_TIMER_VECTOR)
 BUILD_INTERRUPT(error_interrupt,ERROR_APIC_VECTOR)
-BUILD_INTERRUPT(spurious_interrupt,SPURIOUS_APIC_VECTOR)
+BUILD_INTERRUPT(spurious_apic_interrupt,SPURIOUS_APIC_VECTOR)
 BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
 
 #ifdef CONFIG_IRQ_WORK
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -39,6 +39,7 @@ extern asmlinkage void irq_work_interrup
 extern asmlinkage void uv_bau_message_intr1(void);
 
 extern asmlinkage void spurious_interrupt(void);
+extern asmlinkage void spurious_apic_interrupt(void);
 extern asmlinkage void thermal_interrupt(void);
 extern asmlinkage void reschedule_interrupt(void);
 
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -36,7 +36,7 @@ extern void native_init_IRQ(void);
 
 extern void handle_irq(struct irq_desc *desc, struct pt_regs *regs);
 
-extern __visible void do_IRQ(struct pt_regs *regs);
+extern __visible void do_IRQ(struct pt_regs *regs, unsigned long vector);
 
 extern void init_ISA_irqs(void);
 
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -39,8 +39,9 @@ asmlinkage void smp_deferred_error_inter
 #endif
 
 void smp_apic_timer_interrupt(struct pt_regs *regs);
-void smp_spurious_interrupt(struct pt_regs *regs);
 void smp_error_interrupt(struct pt_regs *regs);
+void smp_spurious_apic_interrupt(struct pt_regs *regs);
+void smp_spurious_interrupt(struct pt_regs *regs, unsigned long vector);
 asmlinkage void smp_irq_move_cleanup_interrupt(void);
 
 extern void ist_enter(struct pt_regs *regs);
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2158,12 +2158,20 @@ void __init register_lapic_address(unsig
  * Local APIC interrupts
  */
 
-/*
- * This interrupt should _never_ happen with our APIC/SMP architecture
+/**
+ * smp_spurious_interrupt - Catch all for interrupts raised on unused vectors
+ * @regs:	Pointer to pt_regs on stack
+ * @vector:	Vector number
+ *
+ * This is invoked from ASM entry code to catch all interrupts which
+ * trigger on an entry which is routed to the common_spurious idtentry
+ * point.
+ *
+ * Also called from smp_spurious_apic_interrupt().
  */
-__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs)
+__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs,
+						  unsigned long vector)
 {
-	u8 vector = ~regs->orig_ax;
 	u32 v;
 
 	entering_irq();
@@ -2187,11 +2195,11 @@ void __init register_lapic_address(unsig
 	 */
 	v = apic_read(APIC_ISR + ((vector & ~0x1f) >> 1));
 	if (v & (1 << (vector & 0x1f))) {
-		pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Acked\n",
+		pr_info("Spurious interrupt (vector 0x%02lx) on CPU#%d. Acked\n",
 			vector, smp_processor_id());
 		ack_APIC_irq();
 	} else {
-		pr_info("Spurious interrupt (vector 0x%02x) on CPU#%d. Not pending!\n",
+		pr_info("Spurious interrupt (vector 0x%02lx) on CPU#%d. Not pending!\n",
 			vector, smp_processor_id());
 	}
 out:
@@ -2199,6 +2207,11 @@ void __init register_lapic_address(unsig
 	exiting_irq();
 }
 
+__visible void smp_spurious_apic_interrupt(struct pt_regs *regs)
+{
+	smp_spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
+}
+
 /*
  * This interrupt should never happen with our APIC/SMP architecture
  */
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -142,7 +142,7 @@ static const __initconst struct idt_data
 #ifdef CONFIG_X86_UV
 	INTG(UV_BAU_MESSAGE,		uv_bau_message_intr1),
 #endif
-	INTG(SPURIOUS_APIC_VECTOR,	spurious_interrupt),
+	INTG(SPURIOUS_APIC_VECTOR,	spurious_apic_interrupt),
 	INTG(ERROR_APIC_VECTOR,		error_interrupt),
 #endif
 };
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -230,12 +230,10 @@ u64 arch_irq_stat(void)
  * SMP cross-CPU interrupts have their own specific
  * handlers).
  */
-__visible void __irq_entry do_IRQ(struct pt_regs *regs)
+__visible void __irq_entry do_IRQ(struct pt_regs *regs, unsigned long vector)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct irq_desc * desc;
-	/* high bit used in ret_from_ code  */
-	unsigned vector = ~regs->orig_ax;
 
 	entering_irq();
 
@@ -252,7 +250,7 @@ u64 arch_irq_stat(void)
 		ack_APIC_irq();
 
 		if (desc == VECTOR_UNUSED) {
-			pr_emerg_ratelimited("%s: %d.%d No irq handler for vector\n",
+			pr_emerg_ratelimited("%s: %d.%lu No irq handler for vector\n",
 					     __func__, smp_processor_id(),
 					     vector);
 		} else {



* [patch 02/15] x86/entry/64: Add ability to switch to IRQ stacks in idtentry
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
  2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro Thomas Gleixner
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann, Andy Lutomirski

Expand the idtentry macro so it supports switching to interrupt stacks on
64 bit. This is a preparatory change to let regular device interrupts use
idtentry instead of having their own mechanism.
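
A subsequent patch in this series then emits the device interrupt entry
points with the new argument, roughly like this (the vector lives in the
error code slot, hence has_error_code=1):

  .macro idtentry_irq vector cfunc
          .p2align CONFIG_X86_L1_CACHE_SHIFT
          idtentry \vector asm_\cfunc \cfunc has_error_code=1 irq_stack=1
  .endm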

Suggested-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_64.S |   18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -495,8 +495,9 @@ SYM_CODE_END(spurious_entries_start)
  * idtentry_body - Macro to emit code calling the C function
  * @cfunc:		C function to be called
  * @has_error_code:	Hardware pushed error code on stack
+ * @irq_stack:		Execute @cfunc on the IRQ stack (device interrupts)
  */
-.macro idtentry_body cfunc has_error_code:req
+.macro idtentry_body cfunc has_error_code:req irq_stack:req
 
 	call	error_entry
 	UNWIND_HINT_REGS
@@ -508,8 +509,16 @@ SYM_CODE_END(spurious_entries_start)
 		movq	$-1, ORIG_RAX(%rsp)	/* no syscall to restart */
 	.endif
 
+	.if \irq_stack
+		ENTER_IRQ_STACK old_rsp=%rdi
+	.endif
+
 	call	\cfunc
 
+	.if \irq_stack
+		LEAVE_IRQ_STACK			/* interrupts are disabled */
+	.endif
+
 	jmp	error_exit
 .endm
 
@@ -519,11 +528,12 @@ SYM_CODE_END(spurious_entries_start)
  * @asmsym:		ASM symbol for the entry point
  * @cfunc:		C function to be called
  * @has_error_code:	Hardware pushed error code on stack
+ * @irq_stack:		Execute @cfunc on the IRQ stack (device interrupts)
  *
  * The macro emits code to set up the kernel context for straight forward
  * and simple IDT entries. No IST stack, no paranoid entry checks.
  */
-.macro idtentry vector asmsym cfunc has_error_code:req
+.macro idtentry vector asmsym cfunc has_error_code:req irq_stack=0
 SYM_CODE_START(\asmsym)
 	UNWIND_HINT_IRET_REGS offset=\has_error_code*8
 	ASM_CLAC
@@ -546,7 +556,7 @@ SYM_CODE_START(\asmsym)
 .Lfrom_usermode_no_gap_\@:
 	.endif
 
-	idtentry_body \cfunc \has_error_code
+	idtentry_body \cfunc \has_error_code \irq_stack
 
 _ASM_NOKPROBE(\asmsym)
 SYM_CODE_END(\asmsym)
@@ -621,7 +631,7 @@ SYM_CODE_START(\asmsym)
 
 	/* Switch to the regular task stack and use the noist entry point */
 .Lfrom_usermode_switch_stack_\@:
-	idtentry_body noist_\cfunc, has_error_code=0
+	idtentry_body noist_\cfunc, has_error_code=0 irq_stack=0
 
 _ASM_NOKPROBE(\asmsym)
 SYM_CODE_END(\asmsym)



* [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
  2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
  2020-02-25 22:47 ` [patch 02/15] x86/entry/64: Add ability to switch to IRQ stacks in idtentry Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-26 15:05   ` Miroslav Benes
  2020-02-25 22:47 ` [patch 04/15] x86/entry: Use idtentry for interrupts Thomas Gleixner
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Provide a separate IDTENTRY macro for device interrupts, which supports the
interrupt stack switch mode on 64 bit. Otherwise it's the same as
IDTENTRY_ERRORCODE.
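
For illustration, DEFINE_IDTENTRY_IRQ(common_interrupt) expands roughly to
the following (sketch derived from the macro added below; the handler guts
are then written as the body of __common_interrupt()):

  static __always_inline void __common_interrupt(struct pt_regs *regs,
                                                 unsigned long vector);

  __visible notrace __irq_entry void common_interrupt(struct pt_regs *regs,
                                                      unsigned long vector)
  {
          idtentry_enter(regs);
          __common_interrupt(regs, vector);
          idtentry_exit(regs);
  }
  NOKPROBE_SYMBOL(common_interrupt);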

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/include/asm/idtentry.h |   44 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -171,6 +171,46 @@ static __always_inline void __##func(str
 				     unsigned long error_code,		\
 				     unsigned long address)
 
+/**
+ * DECLARE_IDTENTRY_IRQ - Declare functions for device interrupt IDT entry
+ *			  points (common/spurious)
+ * @vector:	Vector number (ignored for C)
+ * @func:	Function name of the entry point
+ *
+ * Declares three functions:
+ * - The ASM entry point: asm_##func
+ * - The XEN PV trap entry point: xen_##func (maybe unused)
+ * - The C handler called from the ASM entry point
+ */
+#define DECLARE_IDTENTRY_IRQ(vector, func)				\
+	asmlinkage void asm_##func(void);				\
+	asmlinkage void xen_asm_##func(void);				\
+	__visible void func(struct pt_regs *regs, unsigned long vector)
+
+/**
+ * DEFINE_IDTENTRY_IRQ - Emit code for device interrupt IDT entry points
+ * @func:	Function name of the entry point
+ *
+ * @func is called from ASM entry code with interrupts disabled.
+ *
+ * Used for C handlers which require the vector number.
+ */
+#define DEFINE_IDTENTRY_IRQ(func)					\
+static __always_inline void __##func(struct pt_regs *regs,		\
+				     unsigned long vector);		\
+									\
+__visible notrace __irq_entry void func(struct pt_regs *regs,		\
+					unsigned long vector)		\
+{									\
+	idtentry_enter(regs);						\
+	__##func (regs, vector);					\
+	idtentry_exit(regs);						\
+}									\
+NOKPROBE_SYMBOL(func);							\
+									\
+static __always_inline void __##func(struct pt_regs *regs,		\
+				     unsigned long vector)
+
 #ifdef CONFIG_X86_64
 /**
  * DECLARE_IDTENTRY_IST - Declare functions for IST handling IDT entry points
@@ -340,6 +380,10 @@ static __always_inline void __##func(str
 /* Special case for 32bit IRET 'trap'. Do not emit ASM code */
 #define DECLARE_IDTENTRY_SW(vector, func)
 
+/* Entries for common/spurious (device) interrupts */
+#define DECLARE_IDTENTRY_IRQ(vector, func)			\
+	idtentry_irq vector func
+
 #ifdef CONFIG_X86_64
 # define DECLARE_IDTENTRY_MCE(vector, func)			\
 	idtentry_mce_db vector asm_##func func



* [patch 04/15] x86/entry: Use idtentry for interrupts
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (2 preceding siblings ...)
  2020-02-25 22:47 ` [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 05/15] x86/entry: Provide IDTENTRY_SYSVEC Thomas Gleixner
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Use IDTENTRY_IRQ for interrupts. Remove the existing stub code and let the
IDTENTRY machinery emit it automatically.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S       |   39 +++++++++----------------------
 arch/x86/entry/entry_64.S       |   49 +++++++++++++++-------------------------
 arch/x86/include/asm/hw_irq.h   |    1 
 arch/x86/include/asm/idtentry.h |    4 +++
 arch/x86/include/asm/irq.h      |    2 -
 arch/x86/include/asm/traps.h    |    1 
 arch/x86/kernel/apic/apic.c     |    7 ++---
 arch/x86/kernel/apic/msi.c      |    3 +-
 arch/x86/kernel/irq.c           |    8 +++---
 9 files changed, 44 insertions(+), 70 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -769,12 +769,6 @@ SYM_CODE_END(\asmsym)
 .endm
 
 /*
- * Include the defines which emit the idt entries which are shared
- * shared between 32 and 64 bit.
- */
-#include <asm/idtentry.h>
-
-/*
  * %eax: prev task
  * %edx: next task
  */
@@ -1235,7 +1229,7 @@ SYM_CODE_START(irq_entries_start)
     .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
 	pushl	$(vector)
     vector=vector+1
-	jmp	common_interrupt
+	jmp	asm_common_interrupt
 	.align	8
     .endr
 SYM_CODE_END(irq_entries_start)
@@ -1247,40 +1241,31 @@ SYM_CODE_START(spurious_entries_start)
     .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
 	pushl	$(vector)
     vector=vector+1
-	jmp	common_spurious
+	jmp	asm_spurious_interrupt
 	.align	8
     .endr
 SYM_CODE_END(spurious_entries_start)
+#endif
 
-SYM_CODE_START_LOCAL(common_spurious)
+.macro idtentry_irq vector cfunc
+	.p2align CONFIG_X86_L1_CACHE_SHIFT
+SYM_CODE_START_LOCAL(asm_\cfunc)
 	ASM_CLAC
 	SAVE_ALL switch_stacks=1
 	ENCODE_FRAME_POINTER
-	TRACE_IRQS_OFF
 	movl	%esp, %eax
 	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
 	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
-	call	smp_spurious_interrupt
+	call	\cfunc
 	jmp	ret_from_intr
-SYM_CODE_END(common_spurious)
-#endif
+SYM_CODE_END(asm_\cfunc)
+.endm
 
 /*
- * the CPU automatically disables interrupts when executing an IRQ vector,
- * so IRQ-flags tracing has to follow that:
+ * Include the defines which emit the idt entries which are
+ * shared between 32 and 64 bit.
  */
-	.p2align CONFIG_X86_L1_CACHE_SHIFT
-SYM_CODE_START_LOCAL(common_interrupt)
-	ASM_CLAC
-	SAVE_ALL switch_stacks=1
-	ENCODE_FRAME_POINTER
-	TRACE_IRQS_OFF
-	movl	%esp, %eax
-	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
-	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
-	call	do_IRQ
-	jmp	ret_from_intr
-SYM_CODE_END(common_interrupt)
+#include <asm/idtentry.h>
 
 #define BUILD_INTERRUPT3(name, nr, fn)			\
 SYM_FUNC_START(name)					\
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -365,7 +365,7 @@ SYM_CODE_START(irq_entries_start)
     .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
 	UNWIND_HINT_IRET_REGS
 	pushq	$(vector)
-	jmp	common_interrupt
+	jmp	asm_common_interrupt
 	.align	8
 	vector=vector+1
     .endr
@@ -377,7 +377,7 @@ SYM_CODE_START(spurious_entries_start)
     .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
 	UNWIND_HINT_IRET_REGS
 	pushq	$(vector)
-	jmp	common_spurious
+	jmp	asm_spurious_interrupt
 	.align	8
 	vector=vector+1
     .endr
@@ -563,6 +563,20 @@ SYM_CODE_END(\asmsym)
 .endm
 
 /*
+ * Interrupt entry/exit.
+ *
+ * The interrupt stubs push (vector) onto the stack, which is the error_code
+ * position of idtentry exceptions, and jump to one of the two idtentry points
+ * (common/spurious).
+ *
+ * common_interrupt is a hotpath, align it to a cache line
+ */
+.macro idtentry_irq vector cfunc
+	.p2align CONFIG_X86_L1_CACHE_SHIFT
+	idtentry \vector asm_\cfunc \cfunc has_error_code=1 irq_stack=1
+.endm
+
+/*
  * MCE and DB exceptions
  */
 #define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss_rw) + (TSS_ist + (x) * 8)
@@ -758,32 +772,7 @@ SYM_CODE_START(interrupt_entry)
 SYM_CODE_END(interrupt_entry)
 _ASM_NOKPROBE(interrupt_entry)
 
-
-/* Interrupt entry/exit. */
-
-/*
- * The interrupt stubs push (~vector+0x80) onto the stack and
- * then jump to common_spurious/interrupt.
- */
-SYM_CODE_START_LOCAL(common_spurious)
-	call	interrupt_entry
-	UNWIND_HINT_REGS indirect=1
-	movq	ORIG_RAX(%rdi), %rsi		/* get vector from stack */
-	movq	$-1, ORIG_RAX(%rdi)		/* no syscall to restart */
-	call	smp_spurious_interrupt		/* rdi points to pt_regs */
-	jmp	ret_from_intr
-SYM_CODE_END(common_spurious)
-_ASM_NOKPROBE(common_spurious)
-
-/* common_interrupt is a hotpath. Align it */
-	.p2align CONFIG_X86_L1_CACHE_SHIFT
-SYM_CODE_START_LOCAL(common_interrupt)
-	call	interrupt_entry
-	UNWIND_HINT_REGS indirect=1
-	movq	ORIG_RAX(%rdi), %rsi		/* get vector from stack */
-	movq	$-1, ORIG_RAX(%rdi)		/* no syscall to restart */
-	call	do_IRQ				/* rdi points to pt_regs */
-	/* 0(%rsp): old RSP */
+SYM_CODE_START_LOCAL(common_interrupt_return)
 ret_from_intr:
 	DISABLE_INTERRUPTS(CLBR_ANY)
 	TRACE_IRQS_OFF
@@ -965,8 +954,8 @@ SYM_INNER_LABEL(native_irq_return_iret,
 	 */
 	jmp	native_irq_return_iret
 #endif
-SYM_CODE_END(common_interrupt)
-_ASM_NOKPROBE(common_interrupt)
+SYM_CODE_END(common_interrupt_return)
+_ASM_NOKPROBE(common_interrupt_return)
 
 /*
  * APIC interrupts.
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -38,7 +38,6 @@ extern asmlinkage void error_interrupt(v
 extern asmlinkage void irq_work_interrupt(void);
 extern asmlinkage void uv_bau_message_intr1(void);
 
-extern asmlinkage void spurious_interrupt(void);
 extern asmlinkage void spurious_apic_interrupt(void);
 extern asmlinkage void thermal_interrupt(void);
 extern asmlinkage void reschedule_interrupt(void);
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -450,6 +450,10 @@ DECLARE_IDTENTRY_CR2(X86_TRAP_PF,	exc_pa
 DECLARE_IDTENTRY_CR2(X86_TRAP_PF,	exc_async_page_fault);
 #endif
 
+/* Device interrupts common/spurious */
+DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	common_interrupt);
+DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	spurious_interrupt);
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -36,8 +36,6 @@ extern void native_init_IRQ(void);
 
 extern void handle_irq(struct irq_desc *desc, struct pt_regs *regs);
 
-extern __visible void do_IRQ(struct pt_regs *regs, unsigned long vector);
-
 extern void init_ISA_irqs(void);
 
 extern void __init init_IRQ(void);
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -41,7 +41,6 @@ asmlinkage void smp_deferred_error_inter
 void smp_apic_timer_interrupt(struct pt_regs *regs);
 void smp_error_interrupt(struct pt_regs *regs);
 void smp_spurious_apic_interrupt(struct pt_regs *regs);
-void smp_spurious_interrupt(struct pt_regs *regs, unsigned long vector);
 asmlinkage void smp_irq_move_cleanup_interrupt(void);
 
 extern void ist_enter(struct pt_regs *regs);
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2159,7 +2159,7 @@ void __init register_lapic_address(unsig
  */
 
 /**
- * smp_spurious_interrupt - Catch all for interrupts raised on unused vectors
+ * spurious_interrupt - Catch all for interrupts raised on unused vectors
  * @regs:	Pointer to pt_regs on stack
  * @vector:	Vector number
  *
@@ -2169,8 +2169,7 @@ void __init register_lapic_address(unsig
  *
  * Also called from smp_spurious_apic_interrupt().
  */
-__visible void __irq_entry smp_spurious_interrupt(struct pt_regs *regs,
-						  unsigned long vector)
+DEFINE_IDTENTRY_IRQ(spurious_interrupt)
 {
 	u32 v;
 
@@ -2209,7 +2208,7 @@ void __init register_lapic_address(unsig
 
 __visible void smp_spurious_apic_interrupt(struct pt_regs *regs)
 {
-	smp_spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
+	__spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
 }
 
 /*
--- a/arch/x86/kernel/apic/msi.c
+++ b/arch/x86/kernel/apic/msi.c
@@ -115,7 +115,8 @@ msi_set_affinity(struct irq_data *irqd,
 	 * denote it as spurious which is no harm as this is a rare event
 	 * and interrupt handlers have to cope with spurious interrupts
 	 * anyway. If the vector is unused, then it is marked so it won't
-	 * trigger the 'No irq handler for vector' warning in do_IRQ().
+	 * trigger the 'No irq handler for vector' warning in
+	 * common_interrupt().
 	 *
 	 * This requires to hold vector lock to prevent concurrent updates to
 	 * the affected vector.
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -19,6 +19,7 @@
 #include <asm/mce.h>
 #include <asm/hw_irq.h>
 #include <asm/desc.h>
+#include <asm/traps.h>
 
 #define CREATE_TRACE_POINTS
 #include <asm/trace/irq_vectors.h>
@@ -226,11 +227,10 @@ u64 arch_irq_stat(void)
 
 
 /*
- * do_IRQ handles all normal device IRQ's (the special
- * SMP cross-CPU interrupts have their own specific
- * handlers).
+ * common_interrupt() handles all normal device IRQ's (the special SMP
+ * cross-CPU interrupts have their own specific handlers).
  */
-__visible void __irq_entry do_IRQ(struct pt_regs *regs, unsigned long vector)
+DEFINE_IDTENTRY_IRQ(common_interrupt)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct irq_desc * desc;



* [patch 05/15] x86/entry: Provide IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (3 preceding siblings ...)
  2020-02-25 22:47 ` [patch 04/15] x86/entry: Use idtentry for interrupts Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-26  6:10   ` Andy Lutomirski
  2020-02-25 22:47 ` [patch 06/15] x86/entry: Convert APIC interrupts to IDTENTRY_SYSVEC Thomas Gleixner
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Provide an IDTENTRY variant for system vectors to consolidate the different
mechanisms to emit the ASM stubs for 32 and 64 bit.
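
Usage then looks roughly like this (sketch; the actual conversions follow
in the subsequent patches):

  /* Declaration, emits the asm_sysvec_apic_timer_interrupt ASM stub */
  DECLARE_IDTENTRY_SYSVEC(LOCAL_TIMER_VECTOR, sysvec_apic_timer_interrupt);

  /* C handler, invoked from the ASM stub with interrupts disabled */
  DEFINE_IDTENTRY_SYSVEC(sysvec_apic_timer_interrupt)
  {
          /* handler body */
  }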

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S       |    4 ++++
 arch/x86/entry/entry_64.S       |   19 +++++++++++++++----
 arch/x86/include/asm/idtentry.h |   25 +++++++++++++++++++++++++
 3 files changed, 44 insertions(+), 4 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1261,6 +1261,10 @@ SYM_CODE_START_LOCAL(asm_\cfunc)
 SYM_CODE_END(asm_\cfunc)
 .endm
 
+.macro idtentry_sysvec vector cfunc
+	idtentry \vector asm_\cfunc \cfunc has_error_code=0
+.endm
+
 /*
 * Include the defines which emit the idt entries which are
 * shared between 32 and 64 bit.
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -577,6 +577,21 @@ SYM_CODE_END(\asmsym)
 .endm
 
 /*
+ * System vectors which invoke their handlers directly and are not
+ * going through the regular common device interrupt handling code.
+ *
+ * Stick them all into the irqentry.text section.
+ */
+#define PUSH_SECTION_IRQENTRY	.pushsection .irqentry.text, "ax"
+#define POP_SECTION_IRQENTRY	.popsection
+
+.macro idtentry_sysvec vector cfunc
+	PUSH_SECTION_IRQENTRY
+	idtentry \vector asm_\cfunc \cfunc has_error_code=0 irq_stack=0
+	POP_SECTION_IRQENTRY
+.endm
+
+/*
  * MCE and DB exceptions
  */
 #define CPU_TSS_IST(x) PER_CPU_VAR(cpu_tss_rw) + (TSS_ist + (x) * 8)
@@ -973,10 +988,6 @@ SYM_CODE_END(\sym)
 _ASM_NOKPROBE(\sym)
 .endm
 
-/* Make sure APIC interrupt handlers end up in the irqentry section: */
-#define PUSH_SECTION_IRQENTRY	.pushsection .irqentry.text, "ax"
-#define POP_SECTION_IRQENTRY	.popsection
-
 .macro apicinterrupt num sym do_sym
 PUSH_SECTION_IRQENTRY
 apicinterrupt3 \num \sym \do_sym
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -210,6 +210,27 @@ NOKPROBE_SYMBOL(func);							\
 									\
 static __always_inline void __##func(struct pt_regs *regs,		\
 				     unsigned long vector)
+/**
+ * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points
+ * @vector:	Vector number (ignored for C)
+ * @func:	Function name of the entry point
+ *
+ * Declares three functions:
+ * - The ASM entry point: asm_##func
+ * - The XEN PV trap entry point: xen_##func (maybe unused)
+ * - The C handler called from the ASM entry point
+ */
+#define DECLARE_IDTENTRY_SYSVEC(vector, func)				\
+	DECLARE_IDTENTRY(vector, func)
+
+/**
+ * DEFINE_IDTENTRY_SYSVEC - Emit code for system vector IDT entry points
+ * @func:	Function name of the entry point
+ *
+ * @func is called from ASM entry code with interrupts disabled.
+ */
+#define DEFINE_IDTENTRY_SYSVEC(func)					\
+	DEFINE_IDTENTRY(func)
 
 #ifdef CONFIG_X86_64
 /**
@@ -384,6 +405,10 @@ static __always_inline void __##func(str
 #define DECLARE_IDTENTRY_IRQ(vector, func)			\
 	idtentry_irq vector func
 
+/* System vector entries */
+#define DECLARE_IDTENTRY_SYSVEC(__vector, __func)		\
+	idtentry_sysvec __vector __func
+
 #ifdef CONFIG_X86_64
 # define DECLARE_IDTENTRY_MCE(vector, func)			\
 	idtentry_mce_db vector asm_##func func



* [patch 06/15] x86/entry: Convert APIC interrupts to IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (4 preceding siblings ...)
  2020-02-25 22:47 ` [patch 05/15] x86/entry: Provide IDTENTRY_SYSVEC Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 07/15] x86/entry: Convert SMP system vectors " Thomas Gleixner
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert APIC interrupts to IDTENTRY_SYSVEC
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_64.S         |    6 ------
 arch/x86/include/asm/entry_arch.h |    5 -----
 arch/x86/include/asm/hw_irq.h     |    3 ---
 arch/x86/include/asm/idtentry.h   |    8 ++++++++
 arch/x86/include/asm/irq.h        |    1 -
 arch/x86/include/asm/traps.h      |    3 ---
 arch/x86/kernel/apic/apic.c       |    8 ++++----
 arch/x86/kernel/idt.c             |    8 ++++----
 arch/x86/kernel/irq.c             |    3 ++-
 9 files changed, 18 insertions(+), 27 deletions(-)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1003,9 +1003,6 @@ apicinterrupt3 REBOOT_VECTOR			reboot_in
 apicinterrupt3 UV_BAU_MESSAGE			uv_bau_message_intr1		uv_bau_message_interrupt
 #endif
 
-apicinterrupt LOCAL_TIMER_VECTOR		apic_timer_interrupt		smp_apic_timer_interrupt
-apicinterrupt X86_PLATFORM_IPI_VECTOR		x86_platform_ipi		smp_x86_platform_ipi
-
 #ifdef CONFIG_HAVE_KVM
 apicinterrupt3 POSTED_INTR_VECTOR		kvm_posted_intr_ipi		smp_kvm_posted_intr_ipi
 apicinterrupt3 POSTED_INTR_WAKEUP_VECTOR	kvm_posted_intr_wakeup_ipi	smp_kvm_posted_intr_wakeup_ipi
@@ -1030,9 +1027,6 @@ apicinterrupt CALL_FUNCTION_VECTOR		call
 apicinterrupt RESCHEDULE_VECTOR			reschedule_interrupt		smp_reschedule_interrupt
 #endif
 
-apicinterrupt ERROR_APIC_VECTOR			error_interrupt			smp_error_interrupt
-apicinterrupt SPURIOUS_APIC_VECTOR		spurious_apic_interrupt		smp_spurious_apic_interrupt
-
 #ifdef CONFIG_IRQ_WORK
 apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
 #endif
--- a/arch/x86/include/asm/entry_arch.h
+++ b/arch/x86/include/asm/entry_arch.h
@@ -33,11 +33,6 @@ BUILD_INTERRUPT(kvm_posted_intr_nested_i
  */
 #ifdef CONFIG_X86_LOCAL_APIC
 
-BUILD_INTERRUPT(apic_timer_interrupt,LOCAL_TIMER_VECTOR)
-BUILD_INTERRUPT(error_interrupt,ERROR_APIC_VECTOR)
-BUILD_INTERRUPT(spurious_apic_interrupt,SPURIOUS_APIC_VECTOR)
-BUILD_INTERRUPT(x86_platform_ipi, X86_PLATFORM_IPI_VECTOR)
-
 #ifdef CONFIG_IRQ_WORK
 BUILD_INTERRUPT(irq_work_interrupt, IRQ_WORK_VECTOR)
 #endif
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -29,12 +29,9 @@
 #include <asm/sections.h>
 
 /* Interrupt handlers registered during init_IRQ */
-extern asmlinkage void apic_timer_interrupt(void);
-extern asmlinkage void x86_platform_ipi(void);
 extern asmlinkage void kvm_posted_intr_ipi(void);
 extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
 extern asmlinkage void kvm_posted_intr_nested_ipi(void);
-extern asmlinkage void error_interrupt(void);
 extern asmlinkage void irq_work_interrupt(void);
 extern asmlinkage void uv_bau_message_intr1(void);
 
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -479,6 +479,14 @@ DECLARE_IDTENTRY_CR2(X86_TRAP_PF,	exc_as
 DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	common_interrupt);
 DECLARE_IDTENTRY_IRQ(X86_TRAP_OTHER,	spurious_interrupt);
 
+/* System vector entry points */
+#ifdef CONFIG_X86_LOCAL_APIC
+DECLARE_IDTENTRY_SYSVEC(ERROR_APIC_VECTOR,		sysvec_error_interrupt);
+DECLARE_IDTENTRY_SYSVEC(SPURIOUS_APIC_VECTOR,		sysvec_spurious_apic_interrupt);
+DECLARE_IDTENTRY_SYSVEC(LOCAL_TIMER_VECTOR,		sysvec_apic_timer_interrupt);
+DECLARE_IDTENTRY_SYSVEC(X86_PLATFORM_IPI_VECTOR,	sysvec_x86_platform_ipi);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -44,7 +44,6 @@ extern void __init init_IRQ(void);
 void arch_trigger_cpumask_backtrace(const struct cpumask *mask,
 				    bool exclude_self);
 
-extern __visible void smp_x86_platform_ipi(struct pt_regs *regs);
 #define arch_trigger_cpumask_backtrace arch_trigger_cpumask_backtrace
 #endif
 
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -38,9 +38,6 @@ asmlinkage void smp_threshold_interrupt(
 asmlinkage void smp_deferred_error_interrupt(struct pt_regs *regs);
 #endif
 
-void smp_apic_timer_interrupt(struct pt_regs *regs);
-void smp_error_interrupt(struct pt_regs *regs);
-void smp_spurious_apic_interrupt(struct pt_regs *regs);
 asmlinkage void smp_irq_move_cleanup_interrupt(void);
 
 extern void ist_enter(struct pt_regs *regs);
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -1127,7 +1127,7 @@ static void local_apic_timer_interrupt(v
  * [ if a single-CPU system runs an SMP kernel then we call the local
  *   interrupt as well. Thus we cannot inline the local irq ... ]
  */
-__visible void __irq_entry smp_apic_timer_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_apic_timer_interrupt)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -2167,7 +2167,7 @@ void __init register_lapic_address(unsig
  * trigger on an entry which is routed to the common_spurious idtentry
  * point.
  *
- * Also called from smp_spurious_apic_interrupt().
+ * Also called from sysvec_spurious_apic_interrupt().
  */
 DEFINE_IDTENTRY_IRQ(spurious_interrupt)
 {
@@ -2206,7 +2206,7 @@ DEFINE_IDTENTRY_IRQ(spurious_interrupt)
 	exiting_irq();
 }
 
-__visible void smp_spurious_apic_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_spurious_apic_interrupt)
 {
 	__spurious_interrupt(regs, SPURIOUS_APIC_VECTOR);
 }
@@ -2214,7 +2214,7 @@ DEFINE_IDTENTRY_IRQ(spurious_interrupt)
 /*
  * This interrupt should never happen with our APIC/SMP architecture
  */
-__visible void __irq_entry smp_error_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_error_interrupt)
 {
 	static const char * const error_interrupt_reason[] = {
 		"Send CS error",		/* APIC Error Bit 0 */
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -129,8 +129,8 @@ static const __initconst struct idt_data
 #endif
 
 #ifdef CONFIG_X86_LOCAL_APIC
-	INTG(LOCAL_TIMER_VECTOR,	apic_timer_interrupt),
-	INTG(X86_PLATFORM_IPI_VECTOR,	x86_platform_ipi),
+	INTG(LOCAL_TIMER_VECTOR,	asm_sysvec_apic_timer_interrupt),
+	INTG(X86_PLATFORM_IPI_VECTOR,	asm_sysvec_x86_platform_ipi),
 # ifdef CONFIG_HAVE_KVM
 	INTG(POSTED_INTR_VECTOR,	kvm_posted_intr_ipi),
 	INTG(POSTED_INTR_WAKEUP_VECTOR, kvm_posted_intr_wakeup_ipi),
@@ -142,8 +142,8 @@ static const __initconst struct idt_data
 #ifdef CONFIG_X86_UV
 	INTG(UV_BAU_MESSAGE,		uv_bau_message_intr1),
 #endif
-	INTG(SPURIOUS_APIC_VECTOR,	spurious_apic_interrupt),
-	INTG(ERROR_APIC_VECTOR,		error_interrupt),
+	INTG(SPURIOUS_APIC_VECTOR,	asm_sysvec_spurious_apic_interrupt),
+	INTG(ERROR_APIC_VECTOR,		asm_sysvec_error_interrupt),
 #endif
 };
 
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -14,6 +14,7 @@
 #include <linux/irq.h>
 
 #include <asm/apic.h>
+#include <asm/traps.h>
 #include <asm/io_apic.h>
 #include <asm/irq.h>
 #include <asm/mce.h>
@@ -269,7 +270,7 @@ void (*x86_platform_ipi_callback)(void)
 /*
  * Handler for X86_PLATFORM_IPI_VECTOR.
  */
-__visible void __irq_entry smp_x86_platform_ipi(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_x86_platform_ipi)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 



* [patch 07/15] x86/entry: Convert SMP system vectors to IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (5 preceding siblings ...)
  2020-02-25 22:47 ` [patch 06/15] x86/entry: Convert APIC interrupts to IDTENTRY_SYSVEC Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 08/15] x86/entry: Convert various system vectors Thomas Gleixner
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert SMP system vectors to IDTENTRY_SYSVEC
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_64.S         |   11 -----------
 arch/x86/include/asm/entry_arch.h |    7 -------
 arch/x86/include/asm/hw_irq.h     |    6 ------
 arch/x86/include/asm/idtentry.h   |   10 ++++++++++
 arch/x86/include/asm/traps.h      |    2 --
 arch/x86/kernel/apic/vector.c     |    2 +-
 arch/x86/kernel/idt.c             |   10 +++++-----
 arch/x86/kernel/smp.c             |   10 +++++-----
 8 files changed, 21 insertions(+), 37 deletions(-)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -994,11 +994,6 @@ apicinterrupt3 \num \sym \do_sym
 POP_SECTION_IRQENTRY
 .endm
 
-#ifdef CONFIG_SMP
-apicinterrupt3 IRQ_MOVE_CLEANUP_VECTOR		irq_move_cleanup_interrupt	smp_irq_move_cleanup_interrupt
-apicinterrupt3 REBOOT_VECTOR			reboot_interrupt		smp_reboot_interrupt
-#endif
-
 #ifdef CONFIG_X86_UV
 apicinterrupt3 UV_BAU_MESSAGE			uv_bau_message_intr1		uv_bau_message_interrupt
 #endif
@@ -1021,12 +1016,6 @@ apicinterrupt DEFERRED_ERROR_VECTOR		def
 apicinterrupt THERMAL_APIC_VECTOR		thermal_interrupt		smp_thermal_interrupt
 #endif
 
-#ifdef CONFIG_SMP
-apicinterrupt CALL_FUNCTION_SINGLE_VECTOR	call_function_single_interrupt	smp_call_function_single_interrupt
-apicinterrupt CALL_FUNCTION_VECTOR		call_function_interrupt		smp_call_function_interrupt
-apicinterrupt RESCHEDULE_VECTOR			reschedule_interrupt		smp_reschedule_interrupt
-#endif
-
 #ifdef CONFIG_IRQ_WORK
 apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
 #endif
--- a/arch/x86/include/asm/entry_arch.h
+++ b/arch/x86/include/asm/entry_arch.h
@@ -10,13 +10,6 @@
  * is no hardware IRQ pin equivalent for them, they are triggered
  * through the ICC by us (IPIs)
  */
-#ifdef CONFIG_SMP
-BUILD_INTERRUPT(reschedule_interrupt,RESCHEDULE_VECTOR)
-BUILD_INTERRUPT(call_function_interrupt,CALL_FUNCTION_VECTOR)
-BUILD_INTERRUPT(call_function_single_interrupt,CALL_FUNCTION_SINGLE_VECTOR)
-BUILD_INTERRUPT(irq_move_cleanup_interrupt, IRQ_MOVE_CLEANUP_VECTOR)
-BUILD_INTERRUPT(reboot_interrupt, REBOOT_VECTOR)
-#endif
 
 #ifdef CONFIG_HAVE_KVM
 BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -37,16 +37,10 @@ extern asmlinkage void uv_bau_message_in
 
 extern asmlinkage void spurious_apic_interrupt(void);
 extern asmlinkage void thermal_interrupt(void);
-extern asmlinkage void reschedule_interrupt(void);
 
-extern asmlinkage void irq_move_cleanup_interrupt(void);
-extern asmlinkage void reboot_interrupt(void);
 extern asmlinkage void threshold_interrupt(void);
 extern asmlinkage void deferred_error_interrupt(void);
 
-extern asmlinkage void call_function_interrupt(void);
-extern asmlinkage void call_function_single_interrupt(void);
-
 #ifdef	CONFIG_X86_LOCAL_APIC
 struct irq_data;
 struct pci_dev;
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -7,6 +7,8 @@
 
 #ifndef __ASSEMBLY__
 
+#include <asm/kprobes.h>
+
 #ifdef CONFIG_CONTEXT_TRACKING
 static __always_inline void enter_from_user_context(void)
 {
@@ -487,6 +489,14 @@ DECLARE_IDTENTRY_SYSVEC(LOCAL_TIMER_VECT
 DECLARE_IDTENTRY_SYSVEC(X86_PLATFORM_IPI_VECTOR,	sysvec_x86_platform_ipi);
 #endif
 
+#ifdef CONFIG_SMP
+DECLARE_IDTENTRY_SYSVEC(IRQ_MOVE_CLEANUP_VECTOR,	sysvec_irq_move_cleanup);
+DECLARE_IDTENTRY_SYSVEC(REBOOT_VECTOR,			sysvec_reboot);
+DECLARE_IDTENTRY_SYSVEC(CALL_FUNCTION_SINGLE_VECTOR,	sysvec_call_function_single);
+DECLARE_IDTENTRY_SYSVEC(CALL_FUNCTION_VECTOR,		sysvec_call_function);
+DECLARE_IDTENTRY_SYSVEC(RESCHEDULE_VECTOR,		sysvec_reschedule);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -38,8 +38,6 @@ asmlinkage void smp_threshold_interrupt(
 asmlinkage void smp_deferred_error_interrupt(struct pt_regs *regs);
 #endif
 
-asmlinkage void smp_irq_move_cleanup_interrupt(void);
-
 extern void ist_enter(struct pt_regs *regs);
 extern void ist_exit(struct pt_regs *regs);
 extern void ist_begin_non_atomic(struct pt_regs *regs);
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -853,7 +853,7 @@ static void free_moved_vector(struct api
 	apicd->move_in_progress = 0;
 }
 
-asmlinkage __visible void __irq_entry smp_irq_move_cleanup_interrupt(void)
+DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
 {
 	struct hlist_head *clhead = this_cpu_ptr(&cleanup_list);
 	struct apic_chip_data *apicd;
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -109,11 +109,11 @@ static const __initconst struct idt_data
  */
 static const __initconst struct idt_data apic_idts[] = {
 #ifdef CONFIG_SMP
-	INTG(RESCHEDULE_VECTOR,		reschedule_interrupt),
-	INTG(CALL_FUNCTION_VECTOR,	call_function_interrupt),
-	INTG(CALL_FUNCTION_SINGLE_VECTOR, call_function_single_interrupt),
-	INTG(IRQ_MOVE_CLEANUP_VECTOR,	irq_move_cleanup_interrupt),
-	INTG(REBOOT_VECTOR,		reboot_interrupt),
+	INTG(RESCHEDULE_VECTOR,			asm_sysvec_reschedule),
+	INTG(CALL_FUNCTION_VECTOR,		asm_sysvec_call_function),
+	INTG(CALL_FUNCTION_SINGLE_VECTOR,	asm_sysvec_call_function_single),
+	INTG(IRQ_MOVE_CLEANUP_VECTOR,		asm_sysvec_irq_move_cleanup),
+	INTG(REBOOT_VECTOR,			asm_sysvec_reboot),
 #endif
 
 #ifdef CONFIG_X86_THERMAL_VECTOR
--- a/arch/x86/kernel/smp.c
+++ b/arch/x86/kernel/smp.c
@@ -27,6 +27,7 @@
 #include <asm/mmu_context.h>
 #include <asm/proto.h>
 #include <asm/apic.h>
+#include <asm/idtentry.h>
 #include <asm/nmi.h>
 #include <asm/mce.h>
 #include <asm/trace/irq_vectors.h>
@@ -130,8 +131,7 @@ static int smp_stop_nmi_callback(unsigne
 /*
  * this function calls the 'stop' function on all other CPUs in the system.
  */
-
-asmlinkage __visible void smp_reboot_interrupt(void)
+DEFINE_IDTENTRY_SYSVEC(sysvec_reboot)
 {
 	ipi_entering_ack_irq();
 	cpu_emergency_vmxoff();
@@ -223,7 +223,7 @@ static void native_stop_other_cpus(int w
  * Reschedule call back. KVM uses this interrupt to force a cpu out of
  * guest mode
  */
-__visible void __irq_entry smp_reschedule_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_reschedule)
 {
 	ack_APIC_irq();
 	inc_irq_stat(irq_resched_count);
@@ -244,7 +244,7 @@ static void native_stop_other_cpus(int w
 	scheduler_ipi();
 }
 
-__visible void __irq_entry smp_call_function_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_call_function)
 {
 	ipi_entering_ack_irq();
 	trace_call_function_entry(CALL_FUNCTION_VECTOR);
@@ -254,7 +254,7 @@ static void native_stop_other_cpus(int w
 	exiting_irq();
 }
 
-__visible void __irq_entry smp_call_function_single_interrupt(struct pt_regs *r)
+DEFINE_IDTENTRY_SYSVEC(sysvec_call_function_single)
 {
 	ipi_entering_ack_irq();
 	trace_call_function_single_entry(CALL_FUNCTION_SINGLE_VECTOR);



* [patch 08/15] x86/entry: Convert various system vectors
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (6 preceding siblings ...)
  2020-02-25 22:47 ` [patch 07/15] x86/entry: Convert SMP system vectors " Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC Thomas Gleixner
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert various system vectors to IDTENTRY_SYSVEC
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_64.S             |   19 -------------------
 arch/x86/include/asm/entry_arch.h     |   25 -------------------------
 arch/x86/include/asm/hw_irq.h         |    8 --------
 arch/x86/include/asm/idtentry.h       |   20 ++++++++++++++++++++
 arch/x86/include/asm/irq_work.h       |    1 -
 arch/x86/include/asm/traps.h          |    5 -----
 arch/x86/include/asm/uv/uv_bau.h      |    6 +++---
 arch/x86/kernel/cpu/mce/amd.c         |    2 +-
 arch/x86/kernel/cpu/mce/therm_throt.c |    2 +-
 arch/x86/kernel/cpu/mce/threshold.c   |    2 +-
 arch/x86/kernel/idt.c                 |   24 ++++++++++++------------
 arch/x86/kernel/irq_work.c            |    3 ++-
 arch/x86/platform/uv/tlb_uv.c         |    2 +-
 13 files changed, 41 insertions(+), 78 deletions(-)

--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -994,9 +994,6 @@ apicinterrupt3 \num \sym \do_sym
 POP_SECTION_IRQENTRY
 .endm
 
-#ifdef CONFIG_X86_UV
-apicinterrupt3 UV_BAU_MESSAGE			uv_bau_message_intr1		uv_bau_message_interrupt
-#endif
 
 #ifdef CONFIG_HAVE_KVM
 apicinterrupt3 POSTED_INTR_VECTOR		kvm_posted_intr_ipi		smp_kvm_posted_intr_ipi
@@ -1004,22 +1001,6 @@ apicinterrupt3 POSTED_INTR_WAKEUP_VECTOR
 apicinterrupt3 POSTED_INTR_NESTED_VECTOR	kvm_posted_intr_nested_ipi	smp_kvm_posted_intr_nested_ipi
 #endif
 
-#ifdef CONFIG_X86_MCE_THRESHOLD
-apicinterrupt THRESHOLD_APIC_VECTOR		threshold_interrupt		smp_threshold_interrupt
-#endif
-
-#ifdef CONFIG_X86_MCE_AMD
-apicinterrupt DEFERRED_ERROR_VECTOR		deferred_error_interrupt	smp_deferred_error_interrupt
-#endif
-
-#ifdef CONFIG_X86_THERMAL_VECTOR
-apicinterrupt THERMAL_APIC_VECTOR		thermal_interrupt		smp_thermal_interrupt
-#endif
-
-#ifdef CONFIG_IRQ_WORK
-apicinterrupt IRQ_WORK_VECTOR			irq_work_interrupt		smp_irq_work_interrupt
-#endif
-
 /*
  * Reload gs selector with exception handling
  * edi:  new selector
--- a/arch/x86/include/asm/entry_arch.h
+++ b/arch/x86/include/asm/entry_arch.h
@@ -17,28 +17,3 @@ BUILD_INTERRUPT(kvm_posted_intr_wakeup_i
 BUILD_INTERRUPT(kvm_posted_intr_nested_ipi, POSTED_INTR_NESTED_VECTOR)
 #endif
 
-/*
- * every pentium local APIC has two 'local interrupts', with a
- * soft-definable vector attached to both interrupts, one of
- * which is a timer interrupt, the other one is error counter
- * overflow. Linux uses the local APIC timer interrupt to get
- * a much simpler SMP time architecture:
- */
-#ifdef CONFIG_X86_LOCAL_APIC
-
-#ifdef CONFIG_IRQ_WORK
-BUILD_INTERRUPT(irq_work_interrupt, IRQ_WORK_VECTOR)
-#endif
-
-#ifdef CONFIG_X86_THERMAL_VECTOR
-BUILD_INTERRUPT(thermal_interrupt,THERMAL_APIC_VECTOR)
-#endif
-
-#ifdef CONFIG_X86_MCE_THRESHOLD
-BUILD_INTERRUPT(threshold_interrupt,THRESHOLD_APIC_VECTOR)
-#endif
-
-#ifdef CONFIG_X86_MCE_AMD
-BUILD_INTERRUPT(deferred_error_interrupt, DEFERRED_ERROR_VECTOR)
-#endif
-#endif
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -32,14 +32,6 @@
 extern asmlinkage void kvm_posted_intr_ipi(void);
 extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
 extern asmlinkage void kvm_posted_intr_nested_ipi(void);
-extern asmlinkage void irq_work_interrupt(void);
-extern asmlinkage void uv_bau_message_intr1(void);
-
-extern asmlinkage void spurious_apic_interrupt(void);
-extern asmlinkage void thermal_interrupt(void);
-
-extern asmlinkage void threshold_interrupt(void);
-extern asmlinkage void deferred_error_interrupt(void);
 
 #ifdef	CONFIG_X86_LOCAL_APIC
 struct irq_data;
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -497,6 +497,26 @@ DECLARE_IDTENTRY_SYSVEC(CALL_FUNCTION_VE
 DECLARE_IDTENTRY_SYSVEC(RESCHEDULE_VECTOR,		sysvec_reschedule);
 #endif
 
+#ifdef CONFIG_X86_UV
+DECLARE_IDTENTRY_SYSVEC(UV_BAU_MESSAGE,			sysvec_uv_bau_message);
+#endif
+
+#ifdef CONFIG_X86_MCE_THRESHOLD
+DECLARE_IDTENTRY_SYSVEC(THRESHOLD_APIC_VECTOR,		sysvec_threshold);
+#endif
+
+#ifdef CONFIG_X86_MCE_AMD
+DECLARE_IDTENTRY_SYSVEC(DEFERRED_ERROR_VECTOR,		sysvec_deferred_error);
+#endif
+
+#ifdef CONFIG_X86_THERMAL_VECTOR
+DECLARE_IDTENTRY_SYSVEC(THERMAL_APIC_VECTOR,		sysvec_thermal);
+#endif
+
+#ifdef CONFIG_IRQ_WORK
+DECLARE_IDTENTRY_SYSVEC(IRQ_WORK_VECTOR,		sysvec_irq_work);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -10,7 +10,6 @@ static inline bool arch_irq_work_has_int
 	return boot_cpu_has(X86_FEATURE_APIC);
 }
 extern void arch_irq_work_raise(void);
-extern __visible void smp_irq_work_interrupt(struct pt_regs *regs);
 #else
 static inline bool arch_irq_work_has_interrupt(void)
 {
--- a/arch/x86/include/asm/traps.h
+++ b/arch/x86/include/asm/traps.h
@@ -32,11 +32,6 @@ static inline int get_si_code(unsigned l
 extern int panic_on_unrecovered_nmi;
 
 void math_emulate(struct math_emu_info *);
-#ifndef CONFIG_X86_32
-asmlinkage void smp_thermal_interrupt(struct pt_regs *regs);
-asmlinkage void smp_threshold_interrupt(struct pt_regs *regs);
-asmlinkage void smp_deferred_error_interrupt(struct pt_regs *regs);
-#endif
 
 extern void ist_enter(struct pt_regs *regs);
 extern void ist_exit(struct pt_regs *regs);
--- a/arch/x86/include/asm/uv/uv_bau.h
+++ b/arch/x86/include/asm/uv/uv_bau.h
@@ -12,6 +12,8 @@
 #define _ASM_X86_UV_UV_BAU_H
 
 #include <linux/bitmap.h>
+#include <asm/idtentry.h>
+
 #define BITSPERBYTE 8
 
 /*
@@ -799,11 +801,9 @@ static inline void bau_cpubits_clear(str
 	bitmap_zero(&dstp->bits, nbits);
 }
 
-extern void uv_bau_message_intr1(void);
 #ifdef CONFIG_TRACING
-#define trace_uv_bau_message_intr1 uv_bau_message_intr1
+#define trace_uv_bau_message_intr1 sysvec_uv_bau_message
 #endif
-extern void uv_bau_timeout_intr1(void);
 
 struct atomic_short {
 	short counter;
--- a/arch/x86/kernel/cpu/mce/amd.c
+++ b/arch/x86/kernel/cpu/mce/amd.c
@@ -907,7 +907,7 @@ static void __log_error(unsigned int ban
 	mce_log(&m);
 }
 
-asmlinkage __visible void __irq_entry smp_deferred_error_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_deferred_error)
 {
 	entering_irq();
 	trace_deferred_error_apic_entry(DEFERRED_ERROR_VECTOR);
--- a/arch/x86/kernel/cpu/mce/therm_throt.c
+++ b/arch/x86/kernel/cpu/mce/therm_throt.c
@@ -609,7 +609,7 @@ static void unexpected_thermal_interrupt
 
 static void (*smp_thermal_vector)(void) = unexpected_thermal_interrupt;
 
-asmlinkage __visible void __irq_entry smp_thermal_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_thermal)
 {
 	entering_irq();
 	trace_thermal_apic_entry(THERMAL_APIC_VECTOR);
--- a/arch/x86/kernel/cpu/mce/threshold.c
+++ b/arch/x86/kernel/cpu/mce/threshold.c
@@ -21,7 +21,7 @@ static void default_threshold_interrupt(
 
 void (*mce_threshold_vector)(void) = default_threshold_interrupt;
 
-asmlinkage __visible void __irq_entry smp_threshold_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_threshold)
 {
 	entering_irq();
 	trace_threshold_apic_entry(THRESHOLD_APIC_VECTOR);
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -117,33 +117,33 @@ static const __initconst struct idt_data
 #endif
 
 #ifdef CONFIG_X86_THERMAL_VECTOR
-	INTG(THERMAL_APIC_VECTOR,	thermal_interrupt),
+	INTG(THERMAL_APIC_VECTOR,		asm_sysvec_thermal),
 #endif
 
 #ifdef CONFIG_X86_MCE_THRESHOLD
-	INTG(THRESHOLD_APIC_VECTOR,	threshold_interrupt),
+	INTG(THRESHOLD_APIC_VECTOR,		asm_sysvec_threshold),
 #endif
 
 #ifdef CONFIG_X86_MCE_AMD
-	INTG(DEFERRED_ERROR_VECTOR,	deferred_error_interrupt),
+	INTG(DEFERRED_ERROR_VECTOR,		asm_sysvec_deferred_error),
 #endif
 
 #ifdef CONFIG_X86_LOCAL_APIC
-	INTG(LOCAL_TIMER_VECTOR,	asm_sysvec_apic_timer_interrupt),
-	INTG(X86_PLATFORM_IPI_VECTOR,	asm_sysvec_x86_platform_ipi),
+	INTG(LOCAL_TIMER_VECTOR,		asm_sysvec_apic_timer_interrupt),
+	INTG(X86_PLATFORM_IPI_VECTOR,		asm_sysvec_x86_platform_ipi),
 # ifdef CONFIG_HAVE_KVM
-	INTG(POSTED_INTR_VECTOR,	kvm_posted_intr_ipi),
-	INTG(POSTED_INTR_WAKEUP_VECTOR, kvm_posted_intr_wakeup_ipi),
-	INTG(POSTED_INTR_NESTED_VECTOR, kvm_posted_intr_nested_ipi),
+	INTG(POSTED_INTR_VECTOR,		kvm_posted_intr_ipi),
+	INTG(POSTED_INTR_WAKEUP_VECTOR,		kvm_posted_intr_wakeup_ipi),
+	INTG(POSTED_INTR_NESTED_VECTOR,		kvm_posted_intr_nested_ipi),
 # endif
 # ifdef CONFIG_IRQ_WORK
-	INTG(IRQ_WORK_VECTOR,		irq_work_interrupt),
+	INTG(IRQ_WORK_VECTOR,			asm_sysvec_irq_work),
 # endif
 #ifdef CONFIG_X86_UV
-	INTG(UV_BAU_MESSAGE,		uv_bau_message_intr1),
+	INTG(UV_BAU_MESSAGE,			asm_sysvec_uv_bau_message),
 #endif
-	INTG(SPURIOUS_APIC_VECTOR,	asm_sysvec_spurious_apic_interrupt),
-	INTG(ERROR_APIC_VECTOR,		asm_sysvec_error_interrupt),
+	INTG(SPURIOUS_APIC_VECTOR,		asm_sysvec_spurious_apic_interrupt),
+	INTG(ERROR_APIC_VECTOR,			asm_sysvec_error_interrupt),
 #endif
 };
 
--- a/arch/x86/kernel/irq_work.c
+++ b/arch/x86/kernel/irq_work.c
@@ -9,11 +9,12 @@
 #include <linux/irq_work.h>
 #include <linux/hardirq.h>
 #include <asm/apic.h>
+#include <asm/idtentry.h>
 #include <asm/trace/irq_vectors.h>
 #include <linux/interrupt.h>
 
 #ifdef CONFIG_X86_LOCAL_APIC
-__visible void __irq_entry smp_irq_work_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_irq_work)
 {
 	ipi_entering_ack_irq();
 	trace_irq_work_entry(IRQ_WORK_VECTOR);
--- a/arch/x86/platform/uv/tlb_uv.c
+++ b/arch/x86/platform/uv/tlb_uv.c
@@ -1272,7 +1272,7 @@ static void process_uv2_message(struct m
  * (the resource will not be freed until noninterruptable cpus see this
  *  interrupt; hardware may timeout the s/w ack and reply ERROR)
  */
-void uv_bau_message_interrupt(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_uv_bau_message)
 {
 	int count = 0;
 	cycles_t time_start;



* [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (7 preceding siblings ...)
  2020-02-25 22:47 ` [patch 08/15] x86/entry: Convert various system vectors Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-26 10:54   ` Paolo Bonzini
  2020-02-25 22:47 ` [patch 10/15] x86/entry: Convert various hypervisor " Thomas Gleixner
                   ` (6 subsequent siblings)
  15 siblings, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert KVM specific system vectors to IDTENTRY_SYSVEC (see the sketch below):
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

No functional change.
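
For illustration, the declare/define pair boils down to roughly the
following shape (a sketch only, not the literal macro expansion from
patch 05; the real thing also carries the qualifiers and entry/exit
protections):

	/* DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_VECTOR, sysvec_kvm_posted_intr_ipi) emits: */
	void sysvec_kvm_posted_intr_ipi(struct pt_regs *regs);	/* C entry point */
	void asm_sysvec_kvm_posted_intr_ipi(void);		/* ASM stub for the IDT */

	/* DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_ipi) wraps the handler: */
	void sysvec_kvm_posted_intr_ipi(struct pt_regs *regs)
	{
		idtentry_enter(regs);
		/* irq accounting and the actual handler body go here */
		idtentry_exit(regs);
	}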

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S         |    3 ---
 arch/x86/entry/entry_64.S         |    7 -------
 arch/x86/include/asm/entry_arch.h |   19 -------------------
 arch/x86/include/asm/hw_irq.h     |    5 -----
 arch/x86/include/asm/idtentry.h   |    6 ++++++
 arch/x86/include/asm/irq.h        |    3 ---
 arch/x86/kernel/idt.c             |    6 +++---
 arch/x86/kernel/irq.c             |    6 +++---
 8 files changed, 12 insertions(+), 43 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1286,9 +1286,6 @@ SYM_FUNC_END(name)
 #define BUILD_INTERRUPT(name, nr)		\
 	BUILD_INTERRUPT3(name, nr, smp_##name);	\
 
-/* The include is where all of the SMP etc. interrupts come from */
-#include <asm/entry_arch.h>
-
 #ifdef CONFIG_PARAVIRT
 SYM_CODE_START(native_iret)
 	iret
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -994,13 +994,6 @@ apicinterrupt3 \num \sym \do_sym
 POP_SECTION_IRQENTRY
 .endm
 
-
-#ifdef CONFIG_HAVE_KVM
-apicinterrupt3 POSTED_INTR_VECTOR		kvm_posted_intr_ipi		smp_kvm_posted_intr_ipi
-apicinterrupt3 POSTED_INTR_WAKEUP_VECTOR	kvm_posted_intr_wakeup_ipi	smp_kvm_posted_intr_wakeup_ipi
-apicinterrupt3 POSTED_INTR_NESTED_VECTOR	kvm_posted_intr_nested_ipi	smp_kvm_posted_intr_nested_ipi
-#endif
-
 /*
  * Reload gs selector with exception handling
  * edi:  new selector
--- a/arch/x86/include/asm/entry_arch.h
+++ /dev/null
@@ -1,19 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-/*
- * This file is designed to contain the BUILD_INTERRUPT specifications for
- * all of the extra named interrupt vectors used by the architecture.
- * Usually this is the Inter Process Interrupts (IPIs)
- */
-
-/*
- * The following vectors are part of the Linux architecture, there
- * is no hardware IRQ pin equivalent for them, they are triggered
- * through the ICC by us (IPIs)
- */
-
-#ifdef CONFIG_HAVE_KVM
-BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
-BUILD_INTERRUPT(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR)
-BUILD_INTERRUPT(kvm_posted_intr_nested_ipi, POSTED_INTR_NESTED_VECTOR)
-#endif
-
--- a/arch/x86/include/asm/hw_irq.h
+++ b/arch/x86/include/asm/hw_irq.h
@@ -28,11 +28,6 @@
 #include <asm/irq.h>
 #include <asm/sections.h>
 
-/* Interrupt handlers registered during init_IRQ */
-extern asmlinkage void kvm_posted_intr_ipi(void);
-extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
-extern asmlinkage void kvm_posted_intr_nested_ipi(void);
-
 #ifdef	CONFIG_X86_LOCAL_APIC
 struct irq_data;
 struct pci_dev;
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -517,6 +517,12 @@ DECLARE_IDTENTRY_SYSVEC(THERMAL_APIC_VEC
 DECLARE_IDTENTRY_SYSVEC(IRQ_WORK_VECTOR,		sysvec_irq_work);
 #endif
 
+#ifdef CONFIG_HAVE_KVM
+DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_VECTOR,		sysvec_kvm_posted_intr_ipi);
+DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_WAKEUP_VECTOR,	sysvec_kvm_posted_intr_wakeup_ipi);
+DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_NESTED_VECTOR,	sysvec_kvm_posted_intr_nested_ipi);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/irq.h
+++ b/arch/x86/include/asm/irq.h
@@ -26,9 +26,6 @@ extern void fixup_irqs(void);
 
 #ifdef CONFIG_HAVE_KVM
 extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
-extern __visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs);
-extern __visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs);
-extern __visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs);
 #endif
 
 extern void (*x86_platform_ipi_callback)(void);
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -132,9 +132,9 @@ static const __initconst struct idt_data
 	INTG(LOCAL_TIMER_VECTOR,		asm_sysvec_apic_timer_interrupt),
 	INTG(X86_PLATFORM_IPI_VECTOR,		asm_sysvec_x86_platform_ipi),
 # ifdef CONFIG_HAVE_KVM
-	INTG(POSTED_INTR_VECTOR,		kvm_posted_intr_ipi),
-	INTG(POSTED_INTR_WAKEUP_VECTOR,		kvm_posted_intr_wakeup_ipi),
-	INTG(POSTED_INTR_NESTED_VECTOR,		kvm_posted_intr_nested_ipi),
+	INTG(POSTED_INTR_VECTOR,		asm_sysvec_kvm_posted_intr_ipi),
+	INTG(POSTED_INTR_WAKEUP_VECTOR,		asm_sysvec_kvm_posted_intr_wakeup_ipi),
+	INTG(POSTED_INTR_NESTED_VECTOR,		asm_sysvec_kvm_posted_intr_nested_ipi),
 # endif
 # ifdef CONFIG_IRQ_WORK
 	INTG(IRQ_WORK_VECTOR,			asm_sysvec_irq_work),
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -301,7 +301,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
 /*
  * Handler for POSTED_INTERRUPT_VECTOR.
  */
-__visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_ipi)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -314,7 +314,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
 /*
  * Handler for POSTED_INTERRUPT_WAKEUP_VECTOR.
  */
-__visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_wakeup_ipi)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -328,7 +328,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
 /*
  * Handler for POSTED_INTERRUPT_NESTED_VECTOR.
  */
-__visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_nested_ipi)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 



* [patch 10/15] x86/entry: Convert various hypervisor vectors to IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (8 preceding siblings ...)
  2020-02-25 22:47 ` [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 11/15] x86/entry: Convert XEN hypercall vector " Thomas Gleixner
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert various hypervisor vectors to IDTENTRY_SYSVEC
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S       |   14 --------------
 arch/x86/entry/entry_64.S       |   17 -----------------
 arch/x86/hyperv/hv_init.c       |    3 ++-
 arch/x86/include/asm/acrn.h     |   11 -----------
 arch/x86/include/asm/idtentry.h |   13 +++++++++++++
 arch/x86/include/asm/mshyperv.h |   14 --------------
 arch/x86/kernel/cpu/acrn.c      |    6 +++---
 arch/x86/kernel/cpu/mshyperv.c  |   18 ++++++++++--------
 8 files changed, 28 insertions(+), 68 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1380,20 +1380,6 @@ BUILD_INTERRUPT3(xen_hvm_callback_vector
 		 xen_evtchn_do_upcall)
 #endif
 
-
-#if IS_ENABLED(CONFIG_HYPERV)
-
-BUILD_INTERRUPT3(hyperv_callback_vector, HYPERVISOR_CALLBACK_VECTOR,
-		 hyperv_vector_handler)
-
-BUILD_INTERRUPT3(hyperv_reenlightenment_vector, HYPERV_REENLIGHTENMENT_VECTOR,
-		 hyperv_reenlightenment_intr)
-
-BUILD_INTERRUPT3(hv_stimer0_callback_vector, HYPERV_STIMER0_VECTOR,
-		 hv_stimer0_vector_handler)
-
-#endif /* CONFIG_HYPERV */
-
 SYM_CODE_START_LOCAL_NOALIGN(common_exception)
 	/* the function address is in %gs's slot on the stack */
 	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1130,23 +1130,6 @@ apicinterrupt3 HYPERVISOR_CALLBACK_VECTO
 	xen_hvm_callback_vector xen_evtchn_do_upcall
 #endif
 
-
-#if IS_ENABLED(CONFIG_HYPERV)
-apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
-	hyperv_callback_vector hyperv_vector_handler
-
-apicinterrupt3 HYPERV_REENLIGHTENMENT_VECTOR \
-	hyperv_reenlightenment_vector hyperv_reenlightenment_intr
-
-apicinterrupt3 HYPERV_STIMER0_VECTOR \
-	hv_stimer0_callback_vector hv_stimer0_vector_handler
-#endif /* CONFIG_HYPERV */
-
-#if IS_ENABLED(CONFIG_ACRN_GUEST)
-apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
-	acrn_hv_callback_vector acrn_hv_vector_handler
-#endif
-
 /*
  * Save all registers in pt_regs, and switch gs if needed.
  * Use slow, but surefire "are we in kernel?" check.
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -15,6 +15,7 @@
 #include <asm/hypervisor.h>
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
+#include <asm/idtentry.h>
 #include <linux/version.h>
 #include <linux/vmalloc.h>
 #include <linux/mm.h>
@@ -151,7 +152,7 @@ static inline bool hv_reenlightenment_av
 		ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT;
 }
 
-__visible void __irq_entry hyperv_reenlightenment_intr(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_reenlightenment)
 {
 	entering_ack_irq();
 
--- a/arch/x86/include/asm/acrn.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _ASM_X86_ACRN_H
-#define _ASM_X86_ACRN_H
-
-extern void acrn_hv_callback_vector(void);
-#ifdef CONFIG_TRACING
-#define trace_acrn_hv_callback_vector acrn_hv_callback_vector
-#endif
-
-extern void acrn_hv_vector_handler(struct pt_regs *regs);
-#endif /* _ASM_X86_ACRN_H */
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -523,6 +523,19 @@ DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_WAKE
 DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_NESTED_VECTOR,	sysvec_kvm_posted_intr_nested_ipi);
 #endif
 
+#if IS_ENABLED(CONFIG_HYPERV)
+DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,	sysvec_hyperv_callback);
+DECLARE_IDTENTRY_SYSVEC(HYPERV_REENLIGHTENMENT_VECTOR,	sysvec_hyperv_reenlightenment);
+DECLARE_IDTENTRY_SYSVEC(HYPERV_STIMER0_VECTOR,		sysvec_hyperv_stimer0);
+#ifdef CONFIG_TRACING
+#define trace_hyperv_callback_vector asm_sysvec_hyperv_callback
+#endif
+#endif
+
+#if IS_ENABLED(CONFIG_ACRN_GUEST)
+DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,	sysvec_acrn_hv_callback);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -49,24 +49,11 @@ typedef int (*hyperv_fill_flush_list_fun
 	((val).archdata.vclock_mode = VCLOCK_HVCLOCK)
 #define hv_get_raw_timer() rdtsc_ordered()
 
-void hyperv_callback_vector(void);
-void hyperv_reenlightenment_vector(void);
-#ifdef CONFIG_TRACING
-#define trace_hyperv_callback_vector hyperv_callback_vector
-#endif
 void hyperv_vector_handler(struct pt_regs *regs);
 
-/*
- * Routines for stimer0 Direct Mode handling.
- * On x86/x64, there are no percpu actions to take.
- */
-void hv_stimer0_vector_handler(struct pt_regs *regs);
-void hv_stimer0_callback_vector(void);
-
 static inline void hv_enable_stimer0_percpu_irq(int irq) {}
 static inline void hv_disable_stimer0_percpu_irq(int irq) {}
 
-
 #if IS_ENABLED(CONFIG_HYPERV)
 extern void *hv_hypercall_pg;
 extern void  __percpu  **hyperv_pcpu_input_arg;
@@ -221,7 +208,6 @@ void hyperv_setup_mmu_ops(void);
 void *hv_alloc_hyperv_page(void);
 void *hv_alloc_hyperv_zeroed_page(void);
 void hv_free_hyperv_page(unsigned long addr);
-void hyperv_reenlightenment_intr(struct pt_regs *regs);
 void set_hv_tscchange_cb(void (*cb)(void));
 void clear_hv_tscchange_cb(void);
 void hyperv_stop_tsc_emulation(void);
--- a/arch/x86/kernel/cpu/acrn.c
+++ b/arch/x86/kernel/cpu/acrn.c
@@ -10,10 +10,10 @@
  */
 
 #include <linux/interrupt.h>
-#include <asm/acrn.h>
 #include <asm/apic.h>
 #include <asm/desc.h>
 #include <asm/hypervisor.h>
+#include <asm/idtentry.h>
 #include <asm/irq_regs.h>
 
 static uint32_t __init acrn_detect(void)
@@ -24,7 +24,7 @@ static uint32_t __init acrn_detect(void)
 static void __init acrn_init_platform(void)
 {
 	/* Setup the IDT for ACRN hypervisor callback */
-	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, acrn_hv_callback_vector);
+	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_acrn_hv_callback);
 }
 
 static bool acrn_x2apic_available(void)
@@ -39,7 +39,7 @@ static bool acrn_x2apic_available(void)
 
 static void (*acrn_intr_handler)(void);
 
-__visible void __irq_entry acrn_hv_vector_handler(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_acrn_hv_callback)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -23,6 +23,7 @@
 #include <asm/hyperv-tlfs.h>
 #include <asm/mshyperv.h>
 #include <asm/desc.h>
+#include <asm/idtentry.h>
 #include <asm/irq_regs.h>
 #include <asm/i8259.h>
 #include <asm/apic.h>
@@ -40,7 +41,7 @@ static void (*hv_stimer0_handler)(void);
 static void (*hv_kexec_handler)(void);
 static void (*hv_crash_handler)(struct pt_regs *regs);
 
-__visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_callback)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -73,8 +74,7 @@ EXPORT_SYMBOL_GPL(hv_remove_vmbus_irq);
  * Routines to do per-architecture handling of stimer0
  * interrupts when in Direct Mode
  */
-
-__visible void __irq_entry hv_stimer0_vector_handler(struct pt_regs *regs)
+DEFINE_IDTENTRY_SYSVEC(sysvec_hyperv_stimer0)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -321,17 +321,19 @@ static void __init ms_hyperv_init_platfo
 	x86_platform.apic_post_init = hyperv_init;
 	hyperv_setup_mmu_ops();
 	/* Setup the IDT for hypervisor callback */
-	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, hyperv_callback_vector);
+	alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR, asm_sysvec_hyperv_callback);
 
 	/* Setup the IDT for reenlightenment notifications */
-	if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT)
+	if (ms_hyperv.features & HV_X64_ACCESS_REENLIGHTENMENT) {
 		alloc_intr_gate(HYPERV_REENLIGHTENMENT_VECTOR,
-				hyperv_reenlightenment_vector);
+				asm_sysvec_hyperv_reenlightenment);
+	}
 
 	/* Setup the IDT for stimer0 */
-	if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE)
+	if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE) {
 		alloc_intr_gate(HYPERV_STIMER0_VECTOR,
-				hv_stimer0_callback_vector);
+				asm_sysvec_hyperv_stimer0);
+	}
 
 # ifdef CONFIG_SMP
 	smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;



* [patch 11/15] x86/entry: Convert XEN hypercall vector to IDTENTRY_SYSVEC
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (9 preceding siblings ...)
  2020-02-25 22:47 ` [patch 10/15] x86/entry: Convert various hypervisor " Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 12/15] x86/entry: Remove the apic/BUILD interrupt leftovers Thomas Gleixner
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Convert the last old-style defined vector to IDTENTRY_SYSVEC
  - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
  - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
  - Remove the ASM idtentries in 64bit
  - Remove the BUILD_INTERRUPT entries in 32bit
  - Remove the old prototypes

Fix up the related XEN code by providing the primary C entry point in x86 to
avoid cluttering the generic code with x86'isms.

No functional change.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S        |    5 -----
 arch/x86/entry/entry_64.S        |    5 -----
 arch/x86/include/asm/idtentry.h  |    4 ++++
 arch/x86/xen/enlighten_hvm.c     |    6 ++++++
 drivers/xen/events/events_base.c |    3 ++-
 include/xen/events.h             |    7 -------
 6 files changed, 12 insertions(+), 18 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1375,11 +1375,6 @@ SYM_FUNC_START(xen_failsafe_callback)
 SYM_FUNC_END(xen_failsafe_callback)
 #endif /* CONFIG_XEN_PV */
 
-#ifdef CONFIG_XEN_PVHVM
-BUILD_INTERRUPT3(xen_hvm_callback_vector, HYPERVISOR_CALLBACK_VECTOR,
-		 xen_evtchn_do_upcall)
-#endif
-
 SYM_CODE_START_LOCAL_NOALIGN(common_exception)
 	/* the function address is in %gs's slot on the stack */
 	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -1125,11 +1125,6 @@ SYM_CODE_START(xen_failsafe_callback)
 SYM_CODE_END(xen_failsafe_callback)
 #endif /* CONFIG_XEN_PV */
 
-#ifdef CONFIG_XEN_PVHVM
-apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \
-	xen_hvm_callback_vector xen_evtchn_do_upcall
-#endif
-
 /*
  * Save all registers in pt_regs, and switch gs if needed.
  * Use slow, but surefire "are we in kernel?" check.
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -536,6 +536,10 @@ DECLARE_IDTENTRY_SYSVEC(HYPERV_STIMER0_V
 DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,	sysvec_acrn_hv_callback);
 #endif
 
+#ifdef CONFIG_XEN_PVHVM
+DECLARE_IDTENTRY_SYSVEC(HYPERVISOR_CALLBACK_VECTOR,	sysvec_xen_hvm_callback);
+#endif
+
 #ifdef CONFIG_X86_MCE
 /* Machine check */
 DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -13,6 +13,7 @@
 #include <asm/smp.h>
 #include <asm/reboot.h>
 #include <asm/setup.h>
+#include <asm/idtentry.h>
 #include <asm/hypervisor.h>
 #include <asm/e820/api.h>
 #include <asm/early_ioremap.h>
@@ -118,6 +119,11 @@ static void __init init_hvm_pv_info(void
 		this_cpu_write(xen_vcpu_id, smp_processor_id());
 }
 
+DEFINE_IDTENTRY_SYSVEC(sysvec_xen_hvm_callback)
+{
+	xen_evtchn_do_upcall(regs);
+}
+
 #ifdef CONFIG_KEXEC_CORE
 static void xen_hvm_shutdown(void)
 {
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -37,6 +37,7 @@
 #ifdef CONFIG_X86
 #include <asm/desc.h>
 #include <asm/ptrace.h>
+#include <asm/idtentry.h>
 #include <asm/irq.h>
 #include <asm/io_apic.h>
 #include <asm/i8259.h>
@@ -1651,7 +1652,7 @@ void xen_callback_vector(void)
 		}
 		pr_info_once("Xen HVM callback vector for event delivery is enabled\n");
 		alloc_intr_gate(HYPERVISOR_CALLBACK_VECTOR,
-				xen_hvm_callback_vector);
+				asm_sysvec_xen_hvm_callback);
 	}
 }
 #else
--- a/include/xen/events.h
+++ b/include/xen/events.h
@@ -90,13 +90,6 @@ unsigned irq_from_evtchn(unsigned int ev
 int irq_from_virq(unsigned int cpu, unsigned int virq);
 unsigned int evtchn_from_irq(unsigned irq);
 
-#ifdef CONFIG_XEN_PVHVM
-/* Xen HVM evtchn vector callback */
-void xen_hvm_callback_vector(void);
-#ifdef CONFIG_TRACING
-#define trace_xen_hvm_callback_vector xen_hvm_callback_vector
-#endif
-#endif
 int xen_set_callback_via(uint64_t via);
 void xen_evtchn_do_upcall(struct pt_regs *regs);
 void xen_hvm_evtchn_do_upcall(void);



* [patch 12/15] x86/entry: Remove the apic/BUILD interrupt leftovers
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (10 preceding siblings ...)
  2020-02-25 22:47 ` [patch 11/15] x86/entry: Convert XEN hypercall vector " Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 13/15] x86/entry/32: Remove redundant irq disable code Thomas Gleixner
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Remove all the code which was there to emit the system vector stubs. All
users are gone.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S |   15 -----
 arch/x86/entry/entry_64.S |  118 ----------------------------------------------
 2 files changed, 133 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -1271,21 +1271,6 @@ SYM_CODE_END(asm_\cfunc)
  */
 #include <asm/idtentry.h>
 
-#define BUILD_INTERRUPT3(name, nr, fn)			\
-SYM_FUNC_START(name)					\
-	ASM_CLAC;					\
-	pushl	$~(nr);					\
-	SAVE_ALL switch_stacks=1;			\
-	ENCODE_FRAME_POINTER;				\
-	TRACE_IRQS_OFF					\
-	movl	%esp, %eax;				\
-	call	fn;					\
-	jmp	ret_from_intr;				\
-SYM_FUNC_END(name)
-
-#define BUILD_INTERRUPT(name, nr)		\
-	BUILD_INTERRUPT3(name, nr, smp_##name);	\
-
 #ifdef CONFIG_PARAVIRT
 SYM_CODE_START(native_iret)
 	iret
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -700,103 +700,7 @@ SYM_CODE_END(\asmsym)
  */
 #include <asm/idtentry.h>
 
-/*
- * Interrupt entry helper function.
- *
- * Entry runs with interrupts off. Stack layout at entry:
- * +----------------------------------------------------+
- * | regs->ss						|
- * | regs->rsp						|
- * | regs->eflags					|
- * | regs->cs						|
- * | regs->ip						|
- * +----------------------------------------------------+
- * | regs->orig_ax = ~(interrupt number)		|
- * +----------------------------------------------------+
- * | return address					|
- * +----------------------------------------------------+
- */
-SYM_CODE_START(interrupt_entry)
-	UNWIND_HINT_FUNC
-	ASM_CLAC
-	cld
-
-	testb	$3, CS-ORIG_RAX+8(%rsp)
-	jz	1f
-	SWAPGS
-	FENCE_SWAPGS_USER_ENTRY
-	/*
-	 * Switch to the thread stack. The IRET frame and orig_ax are
-	 * on the stack, as well as the return address. RDI..R12 are
-	 * not (yet) on the stack and space has not (yet) been
-	 * allocated for them.
-	 */
-	pushq	%rdi
-
-	/* Need to switch before accessing the thread stack. */
-	SWITCH_TO_KERNEL_CR3 scratch_reg=%rdi
-	movq	%rsp, %rdi
-	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
-
-	 /*
-	  * We have RDI, return address, and orig_ax on the stack on
-	  * top of the IRET frame. That means offset=24
-	  */
-	UNWIND_HINT_IRET_REGS base=%rdi offset=24
-
-	pushq	7*8(%rdi)		/* regs->ss */
-	pushq	6*8(%rdi)		/* regs->rsp */
-	pushq	5*8(%rdi)		/* regs->eflags */
-	pushq	4*8(%rdi)		/* regs->cs */
-	pushq	3*8(%rdi)		/* regs->ip */
-	pushq	2*8(%rdi)		/* regs->orig_ax */
-	pushq	8(%rdi)			/* return address */
-	UNWIND_HINT_FUNC
-
-	movq	(%rdi), %rdi
-	jmp	2f
-1:
-	FENCE_SWAPGS_KERNEL_ENTRY
-2:
-	PUSH_AND_CLEAR_REGS save_ret=1
-	ENCODE_FRAME_POINTER 8
-
-	testb	$3, CS+8(%rsp)
-	jz	1f
-
-	/*
-	 * IRQ from user mode.
-	 *
-	 * We need to tell lockdep that IRQs are off.  We can't do this until
-	 * we fix gsbase, and we should do it before enter_from_user_mode
-	 * (which can take locks).  Since TRACE_IRQS_OFF is idempotent,
-	 * the simplest way to handle it is to just call it twice if
-	 * we enter from user mode.  There's no reason to optimize this since
-	 * TRACE_IRQS_OFF is a no-op if lockdep is off.
-	 */
-	TRACE_IRQS_OFF
-
-	CALL_enter_from_user_mode
-
-1:
-	ENTER_IRQ_STACK old_rsp=%rdi save_ret=1
-	/* We entered an interrupt context - irqs are off: */
-	TRACE_IRQS_OFF
-
-	ret
-SYM_CODE_END(interrupt_entry)
-_ASM_NOKPROBE(interrupt_entry)
-
 SYM_CODE_START_LOCAL(common_interrupt_return)
-ret_from_intr:
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	TRACE_IRQS_OFF
-
-	LEAVE_IRQ_STACK
-
-	testb	$3, CS(%rsp)
-	jz	retint_kernel
-
 	/* Interrupt came from user space */
 .Lretint_user:
 	mov	%rsp,%rdi
@@ -973,28 +877,6 @@ SYM_CODE_END(common_interrupt_return)
 _ASM_NOKPROBE(common_interrupt_return)
 
 /*
- * APIC interrupts.
- */
-.macro apicinterrupt3 num sym do_sym
-SYM_CODE_START(\sym)
-	UNWIND_HINT_IRET_REGS
-	pushq	$~(\num)
-.Lcommon_\sym:
-	call	interrupt_entry
-	UNWIND_HINT_REGS indirect=1
-	call	\do_sym	/* rdi points to pt_regs */
-	jmp	ret_from_intr
-SYM_CODE_END(\sym)
-_ASM_NOKPROBE(\sym)
-.endm
-
-.macro apicinterrupt num sym do_sym
-PUSH_SECTION_IRQENTRY
-apicinterrupt3 \num \sym \do_sym
-POP_SECTION_IRQENTRY
-.endm
-
-/*
  * Reload gs selector with exception handling
  * edi:  new selector
  */



* [patch 13/15] x86/entry/32: Remove redundant irq disable code
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (11 preceding siblings ...)
  2020-02-25 22:47 ` [patch 12/15] x86/entry: Remove the apic/BUILD interrupt leftovers Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 14/15] x86/entry: Provide return_from_exception() Thomas Gleixner
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

All exceptions/interrupts return with interrupts disabled now. No point in
doing this in ASM again.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/entry_32.S |   11 -----------
 1 file changed, 11 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -64,12 +64,6 @@
  * enough to patch inline, increasing performance.
  */
 
-#ifdef CONFIG_PREEMPTION
-# define preempt_stop(clobbers)	DISABLE_INTERRUPTS(clobbers); TRACE_IRQS_OFF
-#else
-# define preempt_stop(clobbers)
-#endif
-
 .macro TRACE_IRQS_IRET
 #ifdef CONFIG_TRACE_IRQFLAGS
 	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)     # interrupts off?
@@ -876,7 +870,6 @@ SYM_CODE_END(ret_from_fork)
 
 	# userspace resumption stub bypassing syscall exit tracing
 SYM_CODE_START_LOCAL(ret_from_exception)
-	preempt_stop(CLBR_ANY)
 ret_from_intr:
 #ifdef CONFIG_VM86
 	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS and CS
@@ -892,8 +885,6 @@ SYM_CODE_START_LOCAL(ret_from_exception)
 	cmpl	$USER_RPL, %eax
 	jb	restore_all_kernel		# not returning to v8086 or userspace
 
-	DISABLE_INTERRUPTS(CLBR_ANY)
-	TRACE_IRQS_OFF
 	movl	%esp, %eax
 	call	prepare_exit_to_usermode
 	jmp	restore_all_switch_stack
@@ -1135,7 +1126,6 @@ SYM_FUNC_START(entry_INT80_32)
 
 restore_all_kernel:
 #ifdef CONFIG_PREEMPTION
-	DISABLE_INTERRUPTS(CLBR_ANY)
 	cmpl	$0, PER_CPU_VAR(__preempt_count)
 	jnz	.Lno_preempt
 	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)	# interrupts off (exception path) ?
@@ -1299,7 +1289,6 @@ SYM_FUNC_START(exc_xen_hypervisor_callba
 	pushl	$-1				/* orig_ax = -1 => not a system call */
 	SAVE_ALL
 	ENCODE_FRAME_POINTER
-	TRACE_IRQS_OFF
 	mov	%esp, %eax
 	call	xen_evtchn_do_upcall
 #ifndef CONFIG_PREEMPTION



* [patch 14/15] x86/entry: Provide return_from_exception()
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (12 preceding siblings ...)
  2020-02-25 22:47 ` [patch 13/15] x86/entry/32: Remove redundant irq disable code Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-25 22:47 ` [patch 15/15] x86/entry: Use return_from_exception() Thomas Gleixner
  2020-02-26  9:53 ` [patch 00/15] x86/entry: Consolidation - Part V Peter Zijlstra
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Now that all exceptions, interrupts and system vectors are using the
IDTENTRY machinery, the return from exception handling in ASM can be lifted
to C.

Provide a C function which:

  - Invokes prepare_exit_to_user_mode() when the exception hit user mode
  - Checks for preemption when the exception hit kernel mode
  - Has the interrupt tracing in C

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/common.c         |   43 ++++++++++++++++++++++++++++++++++++++++
 arch/x86/include/asm/idtentry.h |    2 +
 2 files changed, 45 insertions(+)

--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -480,3 +480,46 @@ static __always_inline long do_fast_sysc
 NOKPROBE_SYMBOL(do_fast_syscall_32);
 
 #endif /* CONFIG_X86_32 || CONFIG_IA32_EMULATION */
+
+/**
+ * return_from_exception - Common code to handle return from exceptions
+ * @regs:	Pointer to pt_regs (exception entry regs)
+ *
+ * Depending on the return target (kernel/user) this runs the necessary
+ * preemption and work checks if possible and required and returns to
+ * the caller with interrupts disabled and no further work pending.
+ *
+ * This is the last action before returning to the low level ASM code which
+ * just needs to return to the appropriate context.
+ *
+ * Invoked by all exception/interrupt IDTENTRY handlers which are not
+ * returning through the paranoid exit path (all except NMI, MCE, DF).
+ */
+void notrace return_from_exception(struct pt_regs *regs)
+{
+	/*
+	 * Unconditionally disable interrupts as some handlers like
+	 * the fault handler are not guaranteeing to return with
+	 * interrupts disabled.
+	 */
+	local_irq_disable();
+
+	/* Check whether this returns to user mode */
+	if (user_mode(regs)) {
+		prepare_exit_to_usermode(regs);
+	} else {
+		/* Interrupts stay disabled on return? */
+		if (!(regs->flags & X86_EFLAGS_IF))
+			return;
+
+		/* Check kernel preemption, if enabled */
+		if (IS_ENABLED(CONFIG_PREEMPTION)) {
+			/* Check for preemption */
+			if (!preempt_count() && need_resched())
+				preempt_schedule_irq();
+		}
+	}
+	/* Make sure the tracer knows that IRET will enable interrupts */
+	trace_hardirqs_on();
+}
+NOKPROBE_SYMBOL(return_from_exception);
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -19,6 +19,8 @@ static __always_inline void enter_from_u
 static __always_inline void enter_from_user_context(void) { }
 #endif
 
+void return_from_exception(struct pt_regs *regs);
+
 /**
  * idtentry_enter - Handle state tracking on idtentry
  * @regs:	Pointer to pt_regs of interrupted context



* [patch 15/15] x86/entry: Use return_from_exception()
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (13 preceding siblings ...)
  2020-02-25 22:47 ` [patch 14/15] x86/entry: Provide return_from_exception() Thomas Gleixner
@ 2020-02-25 22:47 ` Thomas Gleixner
  2020-02-26  9:53 ` [patch 00/15] x86/entry: Consolidation - Part V Peter Zijlstra
  15 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-25 22:47 UTC (permalink / raw)
  To: LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Replace the ASM return-from-exception checks for user mode and kernel mode
returns with the new C function and invoke it from the idtentry_exit()
helper for all regular exceptions and for IST exceptions which hit user mode.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 arch/x86/entry/common.c         |   13 +++----------
 arch/x86/entry/entry_32.S       |   25 ++++---------------------
 arch/x86/entry/entry_64.S       |   25 ++-----------------------
 arch/x86/include/asm/idtentry.h |    7 +++++--
 4 files changed, 14 insertions(+), 56 deletions(-)

--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -196,7 +196,7 @@ static void exit_to_usermode_loop(struct
 }
 
 /* Called with IRQs disabled. */
-static inline void __prepare_exit_to_usermode(struct pt_regs *regs)
+static inline void prepare_exit_to_usermode(struct pt_regs *regs)
 {
 	struct thread_info *ti = current_thread_info();
 	u32 cached_flags;
@@ -241,13 +241,6 @@ static inline void __prepare_exit_to_use
 	mds_user_clear_cpu_buffers();
 }
 
-__visible inline notrace void prepare_exit_to_usermode(struct pt_regs *regs)
-{
-	__prepare_exit_to_usermode(regs);
-	trace_hardirqs_on();
-}
-NOKPROBE_SYMBOL(prepare_exit_to_usermode);
-
 #define SYSCALL_EXIT_WORK_FLAGS				\
 	(_TIF_SYSCALL_TRACE | _TIF_SYSCALL_AUDIT |	\
 	 _TIF_SINGLESTEP | _TIF_SYSCALL_TRACEPOINT)
@@ -299,7 +292,7 @@ static void syscall_slow_exit_work(struc
 		syscall_slow_exit_work(regs, cached_flags);
 
 	local_irq_disable();
-	__prepare_exit_to_usermode(regs);
+	prepare_exit_to_usermode(regs);
 	/* Return to user space enables interrupts */
 	trace_hardirqs_on();
 }
@@ -429,7 +422,7 @@ static __always_inline long do_fast_sysc
 		/* User code screwed up. */
 		local_irq_disable();
 		regs->ax = -EFAULT;
-		__prepare_exit_to_usermode(regs);
+		prepare_exit_to_usermode(regs);
 		return 0;	/* Keep it simple: use IRET. */
 	}
 
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -862,15 +862,10 @@ SYM_CODE_START(ret_from_fork)
 SYM_CODE_END(ret_from_fork)
 
 /*
- * Return to user mode is not as complex as all this looks,
- * but we want the default path for a system call return to
- * go as quickly as possible which is why some of this is
- * less clear than it otherwise should be.
+ * C code already did all preparatory work (prepare_exit_to_usermode or
+ * kernel preemption) so this just has to select the proper return path.
  */
-
-	# userspace resumption stub bypassing syscall exit tracing
 SYM_CODE_START_LOCAL(ret_from_exception)
-ret_from_intr:
 #ifdef CONFIG_VM86
 	movl	PT_EFLAGS(%esp), %eax		# mix EFLAGS and CS
 	movb	PT_CS(%esp), %al
@@ -884,9 +879,6 @@ SYM_CODE_START_LOCAL(ret_from_exception)
 #endif
 	cmpl	$USER_RPL, %eax
 	jb	restore_all_kernel		# not returning to v8086 or userspace
-
-	movl	%esp, %eax
-	call	prepare_exit_to_usermode
 	jmp	restore_all_switch_stack
 SYM_CODE_END(ret_from_exception)
 
@@ -1125,15 +1117,6 @@ SYM_FUNC_START(entry_INT80_32)
 	INTERRUPT_RETURN
 
 restore_all_kernel:
-#ifdef CONFIG_PREEMPTION
-	cmpl	$0, PER_CPU_VAR(__preempt_count)
-	jnz	.Lno_preempt
-	testl	$X86_EFLAGS_IF, PT_EFLAGS(%esp)	# interrupts off (exception path) ?
-	jz	.Lno_preempt
-	call	preempt_schedule_irq
-.Lno_preempt:
-#endif
-	TRACE_IRQS_IRET
 	PARANOID_EXIT_TO_KERNEL_MODE
 	BUG_IF_WRONG_CR3
 	RESTORE_REGS 4
@@ -1247,7 +1230,7 @@ SYM_CODE_START_LOCAL(asm_\cfunc)
 	movl	PT_ORIG_EAX(%esp), %edx		/* get the vector from stack */
 	movl	$-1, PT_ORIG_EAX(%esp)		/* no syscall to restart */
 	call	\cfunc
-	jmp	ret_from_intr
+	jmp	ret_from_exception
 SYM_CODE_END(asm_\cfunc)
 .endm
 
@@ -1294,7 +1277,7 @@ SYM_FUNC_START(exc_xen_hypervisor_callba
 #ifndef CONFIG_PREEMPTION
 	call	xen_maybe_preempt_hcall
 #endif
-	jmp	ret_from_intr
+	jmp	ret_from_exception
 SYM_FUNC_END(exc_xen_hypervisor_callback)
 
 /*
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -701,10 +701,6 @@ SYM_CODE_END(\asmsym)
 #include <asm/idtentry.h>
 
 SYM_CODE_START_LOCAL(common_interrupt_return)
-	/* Interrupt came from user space */
-.Lretint_user:
-	mov	%rsp,%rdi
-	call	prepare_exit_to_usermode
 
 SYM_INNER_LABEL(swapgs_restore_regs_and_return_to_usermode, SYM_L_GLOBAL)
 #ifdef CONFIG_DEBUG_ENTRY
@@ -746,24 +742,7 @@ SYM_INNER_LABEL(swapgs_restore_regs_and_
 	SWAPGS
 	INTERRUPT_RETURN
 
-
 /* Returning to kernel space */
-retint_kernel:
-#ifdef CONFIG_PREEMPTION
-	/* Interrupts are off */
-	/* Check if we need preemption */
-	btl	$9, EFLAGS(%rsp)		/* were interrupts off? */
-	jnc	1f
-	cmpl	$0, PER_CPU_VAR(__preempt_count)
-	jnz	1f
-	call	preempt_schedule_irq
-1:
-#endif
-	/*
-	 * The iretq could re-enable interrupts:
-	 */
-	TRACE_IRQS_IRETQ
-
 SYM_INNER_LABEL(restore_regs_and_return_to_kernel, SYM_L_GLOBAL)
 #ifdef CONFIG_DEBUG_ENTRY
 	/* Assert that pt_regs indicates kernel mode. */
@@ -1167,8 +1146,8 @@ SYM_CODE_START_LOCAL(error_exit)
 	UNWIND_HINT_REGS
 	DEBUG_ENTRY_ASSERT_IRQS_OFF
 	testb	$3, CS(%rsp)
-	jz	retint_kernel
-	jmp	.Lretint_user
+	jz	restore_regs_and_return_to_kernel
+	jmp	swapgs_restore_regs_and_return_to_usermode
 SYM_CODE_END(error_exit)
 
 /*
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -41,11 +41,14 @@ static __always_inline void idtentry_ent
 /**
  * idtentry_exit - Prepare returning to low level ASM code
  *
- * Disables interrupts before returning
+ * Invokes return_from_exception() which disables interrupts
+ * and handles return to user mode work and kernel preemption.
+ * This function returns with interrupts disabled and the
+ * hardirq tracing state updated.
  */
 static __always_inline void idtentry_exit(struct pt_regs *regs)
 {
-	local_irq_disable();
+	return_from_exception(regs);
 }
 
 /**



* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
@ 2020-02-26  5:13   ` Andy Lutomirski
  2020-02-26  5:45   ` Brian Gerst
  1 sibling, 0 replies; 30+ messages in thread
From: Andy Lutomirski @ 2020-02-26  5:13 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, X86 ML, Steven Rostedt, Brian Gerst, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Tue, Feb 25, 2020 at 3:26 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Device interrupts which go through do_IRQ() or the spurious interrupt
> handler have their separate entry code on 64 bit for no good reason.
>
> Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
> pt_regs. Further the vector number is forced to fit into an u8 and is
> complemented and offset by 0x80 for historical reasons.
>
> Push the vector number into the error code slot instead and hand the plain
> vector number to the C functions.

Reviewed-by: Andy Lutomirski <luto@kernel.org>


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
  2020-02-26  5:13   ` Andy Lutomirski
@ 2020-02-26  5:45   ` Brian Gerst
  2020-02-26 20:13     ` Thomas Gleixner
  1 sibling, 1 reply; 30+ messages in thread
From: Brian Gerst @ 2020-02-26  5:45 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, the arch/x86 maintainers, Steven Rostedt, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Tue, Feb 25, 2020 at 6:26 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Device interrupts which go through do_IRQ() or the spurious interrupt
> handler have their separate entry code on 64 bit for no good reason.
>
> Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
> pt_regs. Further the vector number is forced to fit into an u8 and is
> complemented and offset by 0x80 for historical reasons.

The reason for the 0x80 offset is so that the push instruction only
takes two bytes.  This allows each entry stub to be packed into a
fixed 8 bytes.  idt_setup_apic_and_irq_gates() assumes this 8-byte
fixed length for the stubs, so now every odd vector after 0x80 is
broken.

     508:       6a 7f                   pushq  $0x7f
     50a:       e9 f1 08 00 00          jmpq   e00 <common_interrupt>
     50f:       90                      nop
     510:       68 80 00 00 00          pushq  $0x80
     515:       e9 e6 08 00 00          jmpq   e00 <common_interrupt>
     51a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
     520:       68 81 00 00 00          pushq  $0x81
     525:       e9 d6 08 00 00          jmpq   e00 <common_interrupt>
     52a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)

The 0x81 vector should start at 0x518, not 0x520.
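
That 8-byte stride is hard-coded on the C side; from memory the setup
loop looks roughly like this (a sketch, not the exact upstream code):

	for_each_clear_bit_from(i, system_vectors, FIRST_SYSTEM_VECTOR) {
		/* Assumes every stub in irq_entries_start is exactly 8 bytes */
		entry = irq_entries_start + 8 * (i - FIRST_EXTERNAL_VECTOR);
		set_intr_gate(i, entry);
	}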

--
Brian Gerst


* Re: [patch 05/15] x86/entry: Provide IDTEnTRY_SYSVEC
  2020-02-25 22:47 ` [patch 05/15] x86/entry: Provide IDTEnTRY_SYSVEC Thomas Gleixner
@ 2020-02-26  6:10   ` Andy Lutomirski
  2020-02-26 20:15     ` Thomas Gleixner
  0 siblings, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2020-02-26  6:10 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

On 2/25/20 2:47 PM, Thomas Gleixner wrote:
> Provide an IDTENTRY variant for system vectors to consolidate the different
> mechanisms to emit the ASM stubs for 32 and 64 bit.

$SUBJECT has an obvious typo.

> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/entry/entry_32.S       |    4 ++++
>  arch/x86/entry/entry_64.S       |   19 +++++++++++++++----
>  arch/x86/include/asm/idtentry.h |   25 +++++++++++++++++++++++++
>  3 files changed, 44 insertions(+), 4 deletions(-)
> 
> --- a/arch/x86/entry/entry_32.S
> +++ b/arch/x86/entry/entry_32.S
> @@ -1261,6 +1261,10 @@ SYM_CODE_START_LOCAL(asm_\cfunc)
>  SYM_CODE_END(asm_\cfunc)
>  .endm
>  
> +.macro idtentry_sysvec vector cfunc
> +	idtentry \vector asm_\cfunc \cfunc has_error_code=0
> +.endm

irq_stack?

--Andy


* Re: [patch 00/15] x86/entry: Consolidation - Part V
  2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
                   ` (14 preceding siblings ...)
  2020-02-25 22:47 ` [patch 15/15] x86/entry: Use return_from_exception() Thomas Gleixner
@ 2020-02-26  9:53 ` Peter Zijlstra
  2020-02-26 10:02   ` Peter Zijlstra
  15 siblings, 1 reply; 30+ messages in thread
From: Peter Zijlstra @ 2020-02-26  9:53 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Steven Rostedt, Brian Gerst, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Tue, Feb 25, 2020 at 11:47:19PM +0100, Thomas Gleixner wrote:

>    git://git.kernel.org/pub/scm/linux/kernel/git/tglx/devel.git entry-v1-part5

How about the completely untested something below on top to avoid that
silly indirect call on 32bit idtentry.

---
 arch/x86/entry/entry_32.S | 37 ++++++++++++++++++-------------------
 1 file changed, 18 insertions(+), 19 deletions(-)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index cf94e724743d..c92cd8412ab2 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -723,19 +723,19 @@
 .endm
 
 #ifdef CONFIG_X86_INVD_BUG
-.macro idtentry_push_func vector cfunc
+.macro idtentry_call_func vector cfunc
 	.if \vector == X86_TRAP_XF
 		/* AMD 486 bug: invd from userspace calls exception 19 instead of #GP */
-		ALTERNATIVE "pushl	$exc_general_protection",	\
-			    "pushl	$exc_simd_coprocessor_error",	\
+		ALTERNATIVE "call	exc_general_protection",	\
+			    "call	exc_simd_coprocessor_error",	\
 			    X86_FEATURE_XMM
 	.else
-		pushl $\cfunc
+		call \cfunc
 	.endif
 .endm
 #else
-.macro idtentry_push_func vector cfunc
-	pushl $\cfunc
+.macro idtentry_call_func vector cfunc
+	call \cfunc
 .endm
 #endif
 
@@ -755,10 +755,9 @@ SYM_CODE_START(\asmsym)
 		pushl	$0		/* Clear the error code */
 	.endif
 
-	/* Push the C-function address into the GS slot */
-	idtentry_push_func \vector \cfunc
-	/* Invoke the common exception entry */
-	jmp	common_exception
+	call	common_idtentry
+	idtentry_call_func \vector \cfunc
+	jmp	ret_from_exception
 SYM_CODE_END(\asmsym)
 .endm
 
@@ -1125,7 +1124,6 @@ SYM_FUNC_START(entry_INT80_32)
 .section .fixup, "ax"
 SYM_CODE_START(asm_exc_iret_error)
 	pushl	$0				# no error code
-	pushl	$exc_iret_error
 
 #ifdef CONFIG_DEBUG_ENTRY
 	/*
@@ -1139,7 +1137,9 @@ SYM_CODE_START(asm_exc_iret_error)
 	popl	%eax
 #endif
 
-	jmp	common_exception
+	call	common_idtentry
+	call	exc_iret_error
+	jmp	ret_from_exception
 SYM_CODE_END(asm_exc_iret_error)
 .previous
 	_ASM_EXTABLE(.Lirq_return, asm_exc_iret_error)
@@ -1332,15 +1332,15 @@ SYM_FUNC_START(xen_failsafe_callback)
 SYM_FUNC_END(xen_failsafe_callback)
 #endif /* CONFIG_XEN_PV */
 
-SYM_CODE_START_LOCAL_NOALIGN(common_exception)
-	/* the function address is in %gs's slot on the stack */
+SYM_CODE_START_LOCAL_NOALIGN(common_idtentry)
+	/* the return address is in the %gs stack slot */
 	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
 	ENCODE_FRAME_POINTER
 
 	/* fixup %gs */
 	GS_TO_REG %ecx
-	movl	PT_GS(%esp), %edi		# get the function address
-	REG_TO_PTGS %ecx
+	pushl	PT_GS(%esp)			# push return address
+	REG_TO_OTGS %ecx
 	SET_KERNEL_GS %ecx
 
 	/* fixup orig %eax */
@@ -1348,9 +1348,8 @@ SYM_CODE_START_LOCAL_NOALIGN(common_exception)
 	movl	$-1, PT_ORIG_EAX(%esp)		# no syscall to restart
 
 	movl	%esp, %eax			# pt_regs pointer
-	CALL_NOSPEC %edi
-	jmp	ret_from_exception
-SYM_CODE_END(common_exception)
+	ret
+SYM_CODE_END(common_idtentry)
 
 #ifdef CONFIG_DOUBLEFAULT
 SYM_CODE_START(asm_exc_double_fault)


* Re: [patch 00/15] x86/entry: Consolidation - Part V
  2020-02-26  9:53 ` [patch 00/15] x86/entry: Consolidation - Part V Peter Zijlstra
@ 2020-02-26 10:02   ` Peter Zijlstra
  0 siblings, 0 replies; 30+ messages in thread
From: Peter Zijlstra @ 2020-02-26 10:02 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Steven Rostedt, Brian Gerst, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Wed, Feb 26, 2020 at 10:53:19AM +0100, Peter Zijlstra wrote:
> +SYM_CODE_START_LOCAL_NOALIGN(common_idtentry)
> +	/* the return address is in the %gs stack slot */
>  	SAVE_ALL switch_stacks=1 skip_gs=1 unwind_espfix=1
>  	ENCODE_FRAME_POINTER
>  
>  	/* fixup %gs */
>  	GS_TO_REG %ecx
> -	movl	PT_GS(%esp), %edi		# get the function address
> -	REG_TO_PTGS %ecx
> +	pushl	PT_GS(%esp)			# push return address
> +	REG_TO_OTGS %ecx

Aside from the obvious typo, it is also completely broken because
REG_TO_PTGS relies on the stack layout, which we just wrecked.

	movl	PT_GS(%esp), %edi		# get the return address
	REG_TO_PTGS %ecx

>  	SET_KERNEL_GS %ecx
>  
>  	/* fixup orig %eax */
> @@ -1348,9 +1348,8 @@ SYM_CODE_START_LOCAL_NOALIGN(common_exception)
>  	movl	$-1, PT_ORIG_EAX(%esp)		# no syscall to restart
>  
>  	movl	%esp, %eax			# pt_regs pointer

	pushl	%edi
> +	ret
> +SYM_CODE_END(common_idtentry)

Should work, although that push+ret combo is a bit awkward.
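
Folding both fixups in, the tail of common_idtentry would read something
like this (still completely untested):

	/* fixup %gs */
	GS_TO_REG %ecx
	movl	PT_GS(%esp), %edi		# get the return address
	REG_TO_PTGS %ecx
	SET_KERNEL_GS %ecx

	/* fixup orig %eax */
	movl	$-1, PT_ORIG_EAX(%esp)		# no syscall to restart

	movl	%esp, %eax			# pt_regs pointer
	pushl	%edi				# feed the return address to ret
	ret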


* Re: [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC
  2020-02-25 22:47 ` [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC Thomas Gleixner
@ 2020-02-26 10:54   ` Paolo Bonzini
  0 siblings, 0 replies; 30+ messages in thread
From: Paolo Bonzini @ 2020-02-26 10:54 UTC (permalink / raw)
  To: Thomas Gleixner, LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Arnd Bergmann

On 25/02/20 23:47, Thomas Gleixner wrote:
> Convert KVm specific system vectors to IDTENTRY_SYSVEC
>   - Implement the C entry point with DEFINE_IDTENTRY_SYSVEC
>   - Emit the ASM stub with DECLARE_IDTENTRY_SYSVEC
>   - Remove the ASM idtentries in 64bit
>   - Remove the BUILD_INTERRUPT entries in 32bit
>   - Remove the old prototyoes
> 
> No functional change.
> 
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
>  arch/x86/entry/entry_32.S         |    3 ---
>  arch/x86/entry/entry_64.S         |    7 -------
>  arch/x86/include/asm/entry_arch.h |   19 -------------------
>  arch/x86/include/asm/hw_irq.h     |    5 -----
>  arch/x86/include/asm/idtentry.h   |    6 ++++++
>  arch/x86/include/asm/irq.h        |    3 ---
>  arch/x86/kernel/idt.c             |    6 +++---
>  arch/x86/kernel/irq.c             |    6 +++---
>  8 files changed, 12 insertions(+), 43 deletions(-)
> 
> --- a/arch/x86/entry/entry_32.S
> +++ b/arch/x86/entry/entry_32.S
> @@ -1286,9 +1286,6 @@ SYM_FUNC_END(name)
>  #define BUILD_INTERRUPT(name, nr)		\
>  	BUILD_INTERRUPT3(name, nr, smp_##name);	\
>  
> -/* The include is where all of the SMP etc. interrupts come from */
> -#include <asm/entry_arch.h>
> -
>  #ifdef CONFIG_PARAVIRT
>  SYM_CODE_START(native_iret)
>  	iret
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -994,13 +994,6 @@ apicinterrupt3 \num \sym \do_sym
>  POP_SECTION_IRQENTRY
>  .endm
>  
> -
> -#ifdef CONFIG_HAVE_KVM
> -apicinterrupt3 POSTED_INTR_VECTOR		kvm_posted_intr_ipi		smp_kvm_posted_intr_ipi
> -apicinterrupt3 POSTED_INTR_WAKEUP_VECTOR	kvm_posted_intr_wakeup_ipi	smp_kvm_posted_intr_wakeup_ipi
> -apicinterrupt3 POSTED_INTR_NESTED_VECTOR	kvm_posted_intr_nested_ipi	smp_kvm_posted_intr_nested_ipi
> -#endif
> -
>  /*
>   * Reload gs selector with exception handling
>   * edi:  new selector
> --- a/arch/x86/include/asm/entry_arch.h
> +++ /dev/null
> @@ -1,19 +0,0 @@
> -/* SPDX-License-Identifier: GPL-2.0 */
> -/*
> - * This file is designed to contain the BUILD_INTERRUPT specifications for
> - * all of the extra named interrupt vectors used by the architecture.
> - * Usually this is the Inter Process Interrupts (IPIs)
> - */
> -
> -/*
> - * The following vectors are part of the Linux architecture, there
> - * is no hardware IRQ pin equivalent for them, they are triggered
> - * through the ICC by us (IPIs)
> - */
> -
> -#ifdef CONFIG_HAVE_KVM
> -BUILD_INTERRUPT(kvm_posted_intr_ipi, POSTED_INTR_VECTOR)
> -BUILD_INTERRUPT(kvm_posted_intr_wakeup_ipi, POSTED_INTR_WAKEUP_VECTOR)
> -BUILD_INTERRUPT(kvm_posted_intr_nested_ipi, POSTED_INTR_NESTED_VECTOR)
> -#endif
> -
> --- a/arch/x86/include/asm/hw_irq.h
> +++ b/arch/x86/include/asm/hw_irq.h
> @@ -28,11 +28,6 @@
>  #include <asm/irq.h>
>  #include <asm/sections.h>
>  
> -/* Interrupt handlers registered during init_IRQ */
> -extern asmlinkage void kvm_posted_intr_ipi(void);
> -extern asmlinkage void kvm_posted_intr_wakeup_ipi(void);
> -extern asmlinkage void kvm_posted_intr_nested_ipi(void);
> -
>  #ifdef	CONFIG_X86_LOCAL_APIC
>  struct irq_data;
>  struct pci_dev;
> --- a/arch/x86/include/asm/idtentry.h
> +++ b/arch/x86/include/asm/idtentry.h
> @@ -517,6 +517,12 @@ DECLARE_IDTENTRY_SYSVEC(THERMAL_APIC_VEC
>  DECLARE_IDTENTRY_SYSVEC(IRQ_WORK_VECTOR,		sysvec_irq_work);
>  #endif
>  
> +#ifdef CONFIG_HAVE_KVM
> +DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_VECTOR,		sysvec_kvm_posted_intr_ipi);
> +DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_WAKEUP_VECTOR,	sysvec_kvm_posted_intr_wakeup_ipi);
> +DECLARE_IDTENTRY_SYSVEC(POSTED_INTR_NESTED_VECTOR,	sysvec_kvm_posted_intr_nested_ipi);
> +#endif
> +
>  #ifdef CONFIG_X86_MCE
>  /* Machine check */
>  DECLARE_IDTENTRY_MCE(X86_TRAP_MC,	exc_machine_check);
> --- a/arch/x86/include/asm/irq.h
> +++ b/arch/x86/include/asm/irq.h
> @@ -26,9 +26,6 @@ extern void fixup_irqs(void);
>  
>  #ifdef CONFIG_HAVE_KVM
>  extern void kvm_set_posted_intr_wakeup_handler(void (*handler)(void));
> -extern __visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs);
> -extern __visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs);
> -extern __visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs);
>  #endif
>  
>  extern void (*x86_platform_ipi_callback)(void);
> --- a/arch/x86/kernel/idt.c
> +++ b/arch/x86/kernel/idt.c
> @@ -132,9 +132,9 @@ static const __initconst struct idt_data
>  	INTG(LOCAL_TIMER_VECTOR,		asm_sysvec_apic_timer_interrupt),
>  	INTG(X86_PLATFORM_IPI_VECTOR,		asm_sysvec_x86_platform_ipi),
>  # ifdef CONFIG_HAVE_KVM
> -	INTG(POSTED_INTR_VECTOR,		kvm_posted_intr_ipi),
> -	INTG(POSTED_INTR_WAKEUP_VECTOR,		kvm_posted_intr_wakeup_ipi),
> -	INTG(POSTED_INTR_NESTED_VECTOR,		kvm_posted_intr_nested_ipi),
> +	INTG(POSTED_INTR_VECTOR,		asm_sysvec_kvm_posted_intr_ipi),
> +	INTG(POSTED_INTR_WAKEUP_VECTOR,		asm_sysvec_kvm_posted_intr_wakeup_ipi),
> +	INTG(POSTED_INTR_NESTED_VECTOR,		asm_sysvec_kvm_posted_intr_nested_ipi),
>  # endif
>  # ifdef CONFIG_IRQ_WORK
>  	INTG(IRQ_WORK_VECTOR,			asm_sysvec_irq_work),
> --- a/arch/x86/kernel/irq.c
> +++ b/arch/x86/kernel/irq.c
> @@ -301,7 +301,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
>  /*
>   * Handler for POSTED_INTERRUPT_VECTOR.
>   */
> -__visible void smp_kvm_posted_intr_ipi(struct pt_regs *regs)
> +DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_ipi)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> @@ -314,7 +314,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
>  /*
>   * Handler for POSTED_INTERRUPT_WAKEUP_VECTOR.
>   */
> -__visible void smp_kvm_posted_intr_wakeup_ipi(struct pt_regs *regs)
> +DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_wakeup_ipi)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> @@ -328,7 +328,7 @@ EXPORT_SYMBOL_GPL(kvm_set_posted_intr_wa
>  /*
>   * Handler for POSTED_INTERRUPT_NESTED_VECTOR.
>   */
> -__visible void smp_kvm_posted_intr_nested_ipi(struct pt_regs *regs)
> +DEFINE_IDTENTRY_SYSVEC(sysvec_kvm_posted_intr_nested_ipi)
>  {
>  	struct pt_regs *old_regs = set_irq_regs(regs);
>  
> 

Acked-by: Paolo Bonzini <pbonzini@redhat.com>
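
For readers tracking the conversion: DEFINE_IDTENTRY_SYSVEC replaces the
hand-written asm stub plus __visible smp_*() wrapper pair with a single C
definition. A minimal sketch of the shape such a macro takes, assuming
helper names idtentry_enter()/idtentry_exit() for the common entry/exit
work (the macro in the actual series differs in detail):

  /*
   * Sketch only, not the macro from the series: emit the __visible
   * entry point with the common entry/exit protection wrapped around
   * the handler body that textually follows the macro invocation.
   */
  #define DEFINE_IDTENTRY_SYSVEC(func)                          \
  static void __##func(struct pt_regs *regs);                   \
                                                                \
  __visible void func(struct pt_regs *regs)                     \
  {                                                             \
          idtentry_enter(regs);                                 \
          __##func(regs);                                       \
          idtentry_exit(regs);                                  \
  }                                                             \
                                                                \
  static void __##func(struct pt_regs *regs)

This is why the hunk above can turn smp_kvm_posted_intr_ipi() into a plain
function body, and why the separate prototypes can disappear from irq.h
and hw_irq.h: declaration and definition are generated in one place.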



* Re: [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro
  2020-02-25 22:47 ` [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro Thomas Gleixner
@ 2020-02-26 15:05   ` Miroslav Benes
  0 siblings, 0 replies; 30+ messages in thread
From: Miroslav Benes @ 2020-02-26 15:05 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, x86, Steven Rostedt, Brian Gerst, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

>  
> +/* Entries for common/spurious (device) interrupts */
> +#define DECLARE_IDTENTRY_IRQ(vector, func)			\
> +	idtentry_irq vector func
> +

idtentry_irq is defined in the next patch (04/15). Wouldn't it be better 
to move it here?

Miroslav
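
For context, the snippet quoted above is the asm-side expansion; when
idtentry.h is included from C, the same macro name declares the prototypes
instead. A rough sketch of the dual-use pattern (the C-side shape is an
assumption, not quoted from the patch):

  /* Sketch of the dual C/asm expansion in idtentry.h (assumed shape) */
  #ifndef __ASSEMBLY__
  #define DECLARE_IDTENTRY_IRQ(vector, func)                    \
          asmlinkage void asm_##func(void);                     \
          __visible void func(struct pt_regs *regs, unsigned long error_code)
  #else
  #define DECLARE_IDTENTRY_IRQ(vector, func)                    \
          idtentry_irq vector func
  #endif

Only the asm branch references idtentry_irq, and nothing breaks until an
asm file actually expands the declaration, which is presumably why
defining the macro one patch later still builds.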


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26  5:45   ` Brian Gerst
@ 2020-02-26 20:13     ` Thomas Gleixner
  2020-02-26 21:35       ` Andy Lutomirski
  2020-02-26 21:54       ` Brian Gerst
  0 siblings, 2 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-26 20:13 UTC (permalink / raw)
  To: Brian Gerst
  Cc: LKML, the arch/x86 maintainers, Steven Rostedt, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

Brian Gerst <brgerst@gmail.com> writes:

> On Tue, Feb 25, 2020 at 6:26 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>
>> Device interrupts which go through do_IRQ() or the spurious interrupt
>> handler have their separate entry code on 64 bit for no good reason.
>>
>> Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
>> pt_regs. Further the vector number is forced to fit into an u8 and is
>> complemented and offset by 0x80 for historical reasons.
>
> The reason for the 0x80 offset is so that the push instruction only
> takes two bytes.  This allows each entry stub to be packed into a
> fixed 8 bytes.  idt_setup_apic_and_irq_gates() assumes this 8-byte
> fixed length for the stubs, so now every odd vector after 0x80 is
> broken.
>
>      508:       6a 7f                   pushq  $0x7f
>      50a:       e9 f1 08 00 00          jmpq   e00 <common_interrupt>
>      50f:       90                      nop
>      510:       68 80 00 00 00          pushq  $0x80
>      515:       e9 e6 08 00 00          jmpq   e00 <common_interrupt>
>      51a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
>      520:       68 81 00 00 00          pushq  $0x81
>      525:       e9 d6 08 00 00          jmpq   e00 <common_interrupt>
>      52a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
>
> The 0x81 vector should start at 0x518, not 0x520.

Bah, I somehow missed that big fat comment explaining it. :)

Thanks for catching it. So my testing has just been lucky not to hit one
of those.

Now the question is whether we care about the packed stubs or just make
them larger by using alignment to get rid of this silly +0x80 and
~vector fixup later on. The straightforward thing clearly has its charm
and I doubt it matters in measurable ways.

Thanks,

        tglx
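
The encoding detail behind the disassembly above: push with an 8-bit
immediate (opcode 0x6a) is two bytes and sign-extends, while push with a
32-bit immediate (opcode 0x68) is five bytes, so only values in -128..127
get the short form. The historical ~vector + 0x80 transform (equal to
0x7f - vector) keeps every device vector in that range. A small
standalone check of the round trip, assuming the old fixup of an
add $-0x80 in asm followed by a complement in C:

  #include <assert.h>

  int main(void)
  {
          for (int vector = 0x20; vector <= 0xff; vector++) {
                  /* what the stub pushes: ~vector + 0x80 == 0x7f - vector */
                  int imm = ~vector + 0x80;

                  /* always fits the sign-extended imm8 of a 2-byte push */
                  assert(imm >= -128 && imm <= 127);

                  /* old recovery: asm adds -0x80, C complements the result */
                  assert((~(imm - 0x80) & 0xff) == vector);
          }
          return 0;
  }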




* Re: [patch 05/15] x86/entry: Provide IDTEnTRY_SYSVEC
  2020-02-26  6:10   ` Andy Lutomirski
@ 2020-02-26 20:15     ` Thomas Gleixner
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-26 20:15 UTC (permalink / raw)
  To: Andy Lutomirski, LKML
  Cc: x86, Steven Rostedt, Brian Gerst, Juergen Gross, Paolo Bonzini,
	Arnd Bergmann

Andy Lutomirski <luto@kernel.org> writes:

> On 2/25/20 2:47 PM, Thomas Gleixner wrote:
>> Provide an IDTENTRY variant for system vectors to consolidate the different
>> mechanisms to emit the ASM stubs for 32 and 64 bit.
>
> $SUBJECT has an obvious typo.

Indeed.

>> --- a/arch/x86/entry/entry_32.S
>> +++ b/arch/x86/entry/entry_32.S
>> @@ -1261,6 +1261,10 @@ SYM_CODE_START_LOCAL(asm_\cfunc)
>>  SYM_CODE_END(asm_\cfunc)
>>  .endm
>>  
>> +.macro idtentry_sysvec vector cfunc
>> +	idtentry \vector asm_\cfunc \cfunc has_error_code=0
>> +.endm
>
> irq_stack?

System vectors have never used irq stacks.

Thanks,

        tglx


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26 20:13     ` Thomas Gleixner
@ 2020-02-26 21:35       ` Andy Lutomirski
  2020-02-26 23:50         ` Thomas Gleixner
  2020-02-26 21:54       ` Brian Gerst
  1 sibling, 1 reply; 30+ messages in thread
From: Andy Lutomirski @ 2020-02-26 21:35 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Brian Gerst, LKML, the arch/x86 maintainers, Steven Rostedt,
	Juergen Gross, Paolo Bonzini, Arnd Bergmann



> On Feb 26, 2020, at 12:13 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
> 
> Brian Gerst <brgerst@gmail.com> writes:
> 
>>> On Tue, Feb 25, 2020 at 6:26 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>>> 
>>> Device interrupts which go through do_IRQ() or the spurious interrupt
>>> handler have their separate entry code on 64 bit for no good reason.
>>> 
>>> Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
>>> pt_regs. Further the vector number is forced to fit into an u8 and is
>>> complemented and offset by 0x80 for historical reasons.
>> 
>> The reason for the 0x80 offset is so that the push instruction only
>> takes two bytes.  This allows each entry stub to be packed into a
>> fixed 8 bytes.  idt_setup_apic_and_irq_gates() assumes this 8-byte
>> fixed length for the stubs, so now every odd vector after 0x80 is
>> broken.
>> 
>>     508:       6a 7f                   pushq  $0x7f
>>     50a:       e9 f1 08 00 00          jmpq   e00 <common_interrupt>
>>     50f:       90                      nop
>>     510:       68 80 00 00 00          pushq  $0x80
>>     515:       e9 e6 08 00 00          jmpq   e00 <common_interrupt>
>>     51a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
>>     520:       68 81 00 00 00          pushq  $0x81
>>     525:       e9 d6 08 00 00          jmpq   e00 <common_interrupt>
>>     52a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
>> 
>> The 0x81 vector should start at 0x518, not 0x520.
> 
> Bah, I somehow missed that big fat comment explaining it. :)
> 
> Thanks for catching it. So my testing has just been lucky not to hit one
> of those.
> 
> Now the question is whether we care about the packed stubs or just make
> them larger by using alignment to get rid of this silly +0x80 and
> ~vector fixup later on. The straightforward thing clearly has its charm
> and I doubt it matters in measurable ways.

I agree it probably doesn’t matter. That being said, I have a distinct memory of fixing that asm so it would fail the build if the alignment was off.

> 
> Thanks,
> 
>        tglx
> 
> 


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26 20:13     ` Thomas Gleixner
  2020-02-26 21:35       ` Andy Lutomirski
@ 2020-02-26 21:54       ` Brian Gerst
  2020-02-26 23:43         ` Thomas Gleixner
  1 sibling, 1 reply; 30+ messages in thread
From: Brian Gerst @ 2020-02-26 21:54 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, the arch/x86 maintainers, Steven Rostedt, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Wed, Feb 26, 2020 at 3:13 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Brian Gerst <brgerst@gmail.com> writes:
>
> > On Tue, Feb 25, 2020 at 6:26 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >>
> >> Device interrupts which go through do_IRQ() or the spurious interrupt
> >> handler have their separate entry code on 64 bit for no good reason.
> >>
> >> Both 32 and 64 bit transport the vector number through ORIG_[RE]AX in
> >> pt_regs. Further the vector number is forced to fit into an u8 and is
> >> complemented and offset by 0x80 for historical reasons.
> >
> > The reason for the 0x80 offset is so that the push instruction only
> > takes two bytes.  This allows each entry stub to be packed into a
> > fixed 8 bytes.  idt_setup_apic_and_irq_gates() assumes this 8-byte
> > fixed length for the stubs, so now every odd vector after 0x80 is
> > broken.
> >
> >      508:       6a 7f                   pushq  $0x7f
> >      50a:       e9 f1 08 00 00          jmpq   e00 <common_interrupt>
> >      50f:       90                      nop
> >      510:       68 80 00 00 00          pushq  $0x80
> >      515:       e9 e6 08 00 00          jmpq   e00 <common_interrupt>
> >      51a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
> >      520:       68 81 00 00 00          pushq  $0x81
> >      525:       e9 d6 08 00 00          jmpq   e00 <common_interrupt>
> >      52a:       66 0f 1f 44 00 00       nopw   0x0(%rax,%rax,1)
> >
> > The 0x81 vector should start at 0x518, not 0x520.
>
> Bah, I somehow missed that big fat comment explaining it. :)
>
> Thanks for catching it. So my testing has just been lucky not to hit one
> of those.
>
> Now the question is whether we care about the packed stubs or just make
> them larger by using alignment to get rid of this silly +0x80 and
> ~vector fixup later on. The straightforward thing clearly has its charm
> and I doubt it matters in measurable ways.

I think we can get rid of the inversion.  That was done so orig_ax had
a negative number (signifying it's not a syscall), but if you replace
it with -1 that isn't necessary.  A simple -0x80 offset should be
sufficient.

I think it's a worthy optimization to keep.  There are 240 of these
stubs, so increasing the allocation to 16 bytes would add 1920 bytes
to the kernel text.

--
Brian Gerst
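
(The 1920 figure is the 240 stubs each growing from an 8-byte to a
16-byte slot.) A sketch of the plain -0x80 offset scheme Brian describes,
assuming the pushed value reaches C as an argument per patch 01; the
handler shape here is an assumption, not code from the series:

  /*
   * Sketch: the stub pushes vector - 0x80, which fits the
   * sign-extended imm8 of a 2-byte push for all 256 vectors; C adds
   * the offset back. No ~ inversion is needed once orig_ax is set to
   * -1 as the not-a-syscall marker instead.
   */
  __visible void common_interrupt(struct pt_regs *regs, unsigned long offs)
  {
          u32 vector = (u8)(offs + 0x80);         /* undo the -0x80 */

          /* ... dispatch to the irq_desc mapped to this vector ... */
  }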


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26 21:54       ` Brian Gerst
@ 2020-02-26 23:43         ` Thomas Gleixner
  2020-02-27  0:04           ` Brian Gerst
  0 siblings, 1 reply; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-26 23:43 UTC (permalink / raw)
  To: Brian Gerst
  Cc: LKML, the arch/x86 maintainers, Steven Rostedt, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

Brian Gerst <brgerst@gmail.com> writes:
> On Wed, Feb 26, 2020 at 3:13 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>> Brian Gerst <brgerst@gmail.com> writes:
>> Now the question is whether we care about the packed stubs or just make
>> them larger by using alignment to get rid of this silly +0x80 and
>> ~vector fixup later on. The straightforward thing clearly has its charm
>> and I doubt it matters in measurable ways.
>
> I think we can get rid of the inversion.  That was done so orig_ax had
> a negative number (signifying it's not a syscall), but if you replace
> it with -1 that isn't necessary.  A simple -0x80 offset should be
> sufficient.
>
> I think it's a worthy optimization to keep.  There are 240 of these
> stubs, so increasing the allocation to 16 bytes would add 1920 bytes
> to the kernel text.

I'd rather pay the 2k of text size for readable and straightforward
code. Can you remind me why we are actually worrying at that level about
32-bit x86 instead of making it depend on CONFIG_OBSCURE?

Thanks,

        tglx


* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26 21:35       ` Andy Lutomirski
@ 2020-02-26 23:50         ` Thomas Gleixner
  0 siblings, 0 replies; 30+ messages in thread
From: Thomas Gleixner @ 2020-02-26 23:50 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Brian Gerst, LKML, the arch/x86 maintainers, Steven Rostedt,
	Juergen Gross, Paolo Bonzini, Arnd Bergmann

Andy Lutomirski <luto@amacapital.net> writes:
>> On Feb 26, 2020, at 12:13 PM, Thomas Gleixner <tglx@linutronix.de> wrote:
>> Brian Gerst <brgerst@gmail.com> writes:
>> Now the question is whether we care about the packed stubs or just make
>> them larger by using alignment to get rid of this silly +0x80 and
>> ~vector fixup later on. The straightforward thing clearly has its charm
>> and I doubt it matters in measurable ways.
>
> I agree it probably doesn’t matter. That being said, I have a distinct
> memory of fixing that asm so it would fail the build if the alignment
> was off.

Hrm. Doesn't look like it. Gah, and I love the hardcoded * 8 in the IDT
code. Let me add something to catch such things in the future.

Thanks,

        tglx
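
The hardcoded * 8 is the stub-stride assumption in
idt_setup_apic_and_irq_gates(). One way to catch a layout regression,
sketched with an assumed IDT_STUB_SIZE constant and an assumed
irq_entries_end marker symbol; the real setup loop also skips vectors
already claimed as system vectors, which is omitted here:

  #define IDT_STUB_SIZE   8       /* assumed name for the magic 8 */

  extern char irq_entries_start[], irq_entries_end[];

  static void __init setup_external_irq_gates(void)
  {
          int i;

          /* fail loudly if the asm stubs are not packed as assumed */
          WARN_ON(irq_entries_end - irq_entries_start !=
                  IDT_STUB_SIZE * (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR));

          for (i = FIRST_EXTERNAL_VECTOR; i < FIRST_SYSTEM_VECTOR; i++) {
                  void *entry = irq_entries_start +
                                IDT_STUB_SIZE * (i - FIRST_EXTERNAL_VECTOR);

                  set_intr_gate(i, entry);
          }
  }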



* Re: [patch 01/15] x86/irq: Convey vector as argument and not in ptregs
  2020-02-26 23:43         ` Thomas Gleixner
@ 2020-02-27  0:04           ` Brian Gerst
  0 siblings, 0 replies; 30+ messages in thread
From: Brian Gerst @ 2020-02-27  0:04 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: LKML, the arch/x86 maintainers, Steven Rostedt, Juergen Gross,
	Paolo Bonzini, Arnd Bergmann

On Wed, Feb 26, 2020 at 6:43 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> Brian Gerst <brgerst@gmail.com> writes:
> > On Wed, Feb 26, 2020 at 3:13 PM Thomas Gleixner <tglx@linutronix.de> wrote:
> >> Brian Gerst <brgerst@gmail.com> writes:
> >> Now the question is whether we care about the packed stubs or just make
> >> them larger by using alignment to get rid of this silly +0x80 and
> >> ~vector fixup later on. The straight forward thing clearly has its charm
> >> and I doubt it matters in measurable ways.
> >
> > I think we can get rid of the inversion.  That was done so orig_ax had
> > a negative number (signifying it's not a syscall), but if you replace
> > it with -1 that isn't necessary.  A simple -0x80 offset should be
> > sufficient.
> >
> > I think it's a worthy optimization to keep.  There are 240 of these
> > stubs, so increasing the allocation to 16 bytes would add 1920 bytes
> > to the kernel text.
>
> I'd rather pay the 2k of text size for readable and straightforward
> code. Can you remind me why we are actually worrying at that level about
> 32-bit x86 instead of making it depend on CONFIG_OBSCURE?

Because this also applies to the 64-bit kernel?

--
Brian Gerst


Thread overview: 30+ messages
2020-02-25 22:47 [patch 00/15] x86/entry: Consolidation - Part V Thomas Gleixner
2020-02-25 22:47 ` [patch 01/15] x86/irq: Convey vector as argument and not in ptregs Thomas Gleixner
2020-02-26  5:13   ` Andy Lutomirski
2020-02-26  5:45   ` Brian Gerst
2020-02-26 20:13     ` Thomas Gleixner
2020-02-26 21:35       ` Andy Lutomirski
2020-02-26 23:50         ` Thomas Gleixner
2020-02-26 21:54       ` Brian Gerst
2020-02-26 23:43         ` Thomas Gleixner
2020-02-27  0:04           ` Brian Gerst
2020-02-25 22:47 ` [patch 02/15] x86/entry/64: Add ability to switch to IRQ stacks in idtentry Thomas Gleixner
2020-02-25 22:47 ` [patch 03/15] x86/entry: Add IRQENTRY_IRQ macro Thomas Gleixner
2020-02-26 15:05   ` Miroslav Benes
2020-02-25 22:47 ` [patch 04/15] x86/entry: Use idtentry for interrupts Thomas Gleixner
2020-02-25 22:47 ` [patch 05/15] x86/entry: Provide IDTEnTRY_SYSVEC Thomas Gleixner
2020-02-26  6:10   ` Andy Lutomirski
2020-02-26 20:15     ` Thomas Gleixner
2020-02-25 22:47 ` [patch 06/15] x86/entry: Convert APIC interrupts to IDTENTRY_SYSVEC Thomas Gleixner
2020-02-25 22:47 ` [patch 07/15] x86/entry: Convert SMP system vectors " Thomas Gleixner
2020-02-25 22:47 ` [patch 08/15] x86/entry: Convert various system vectors Thomas Gleixner
2020-02-25 22:47 ` [patch 09/15] x86/entry: Convert KVM vectors to IDTENTRY_SYSVEC Thomas Gleixner
2020-02-26 10:54   ` Paolo Bonzini
2020-02-25 22:47 ` [patch 10/15] x86/entry: Convert various hypervisor " Thomas Gleixner
2020-02-25 22:47 ` [patch 11/15] x86/entry: Convert XEN hypercall vector " Thomas Gleixner
2020-02-25 22:47 ` [patch 12/15] x86/entry: Remove the apic/BUILD interrupt leftovers Thomas Gleixner
2020-02-25 22:47 ` [patch 13/15] x86/entry/32: Remove redundant irq disable code Thomas Gleixner
2020-02-25 22:47 ` [patch 14/15] x86/entry: Provide return_from_exception() Thomas Gleixner
2020-02-25 22:47 ` [patch 15/15] x86/entry: Use return_from_exception() Thomas Gleixner
2020-02-26  9:53 ` [patch 00/15] x86/entry: Consolidation - Part V Peter Zijlstra
2020-02-26 10:02   ` Peter Zijlstra
