* [PATCH v2 0/8] x86/irq: trap and interrupt cleanups
@ 2021-05-15  1:43 H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 1/9] x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h> H. Peter Anvin
                   ` (7 more replies)
  0 siblings, 8 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

A collection of trap/interrupt-related patches, almost all of them
cleanups. The series removes a modest amount of code (39 lines). The
only patches that should have any functional effect are:

7/8 - x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt

	This condition is believed to be impossible after many
	improvements to the IRQ vector allocation code since this
	function was written. Per discussion with tglx, add a
	WARN_ONCE() if this happens as a first step towards excising
	this hack.

8/8 - x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_*

	This patch reverses kvm_set_cpu_l1tf_flush_l1d() and
	__irq_enter_raw() in DEFINE_IDTENTRY_SYSVEC_SIMPLE() in order
	to be able to unify the code with DEFINE_IDTENTRY_SYSVEC().

	This replaces a lot of macros with inline functions, which
	required adjusting types in various places; those changes
	should have no functional effect.
	
	The reason for the unification is mainly to avoid the
	possibility of inadvertent divergence rather than the modest
	code savings, although it does remove 25 lines of code.
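
	For illustration, a rough before/after sketch of that
	reordering (hand expansion based on the hunks in patch 8/8;
	__func is a placeholder name, not an identifier from the
	patch):

	Old DEFINE_IDTENTRY_SYSVEC_SIMPLE() ordering:

		__irq_enter_raw();
		kvm_set_cpu_l1tf_flush_l1d();
		__func(regs);
		__irq_exit_raw();

	New ordering, with the flush done in the shared idtentry_call()
	path and the raw enter/exit pair kept in call_sysvec_simple():

		kvm_set_cpu_l1tf_flush_l1d();
		__irq_enter_raw();
		__func(regs, 0);	/* dummy u32 argument from the unification */
		__irq_exit_raw();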

--- 
 1/8 x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h>
 2/8 x86/irqvector: add NR_EXTERNAL_VECTORS and NR_SYSTEM_VECTORS
 3/8 x86/irq: remove unused vectors from <asm/irq_vectors.h>
 4/8 x86/idt: remove address argument to idt_invalidate()
 5/8 x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h>
 6/8 x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c
 7/8 x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt
 8/8 x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_*

 arch/x86/entry/common.c                  |   5 +-
 arch/x86/include/asm/desc.h              |  17 ++-
 arch/x86/include/asm/idtentry.h          | 174 +++++++++++++++----------------
 arch/x86/include/asm/irq_stack.h         |  73 +++++--------
 arch/x86/include/asm/irq_vectors.h       |   7 +-
 arch/x86/include/asm/trapnr.h            |   1 +
 arch/x86/kernel/apic/apic.c              |   2 +-
 arch/x86/kernel/apic/vector.c            |   5 +
 arch/x86/kernel/idt.c                    |   5 +-
 arch/x86/kernel/irq.c                    |   1 +
 arch/x86/kernel/machine_kexec_32.c       |  15 +--
 arch/x86/kernel/machine_kexec_64.c       |  33 +-----
 arch/x86/kernel/reboot.c                 |   2 +-
 arch/x86/kernel/sev-es.c                 |   6 +-
 arch/x86/kernel/traps.c                  |   2 +-
 tools/arch/x86/include/asm/irq_vectors.h |   7 +-
 16 files changed, 158 insertions(+), 197 deletions(-)


* [PATCH v2 1/9] x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h>
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 2/9] x86/irqvector: add NR_EXTERNAL_VECTORS and NR_SYSTEM_VECTORS H. Peter Anvin
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

The x86 architecture reserves the first 32 vectors (0-31) for
hardware traps and exceptions. Add a constant for that count to
<asm/trapnr.h>.
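
A hypothetical user (not in this patch) would use the constant to
bound loops or size tables over the hardware trap range, e.g.:

	/* hypothetical example; setup_trap_entry() is a made-up helper */
	for (i = 0; i < X86_NR_HW_TRAPS; i++)
		setup_trap_entry(i);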

Acked-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/trapnr.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/trapnr.h b/arch/x86/include/asm/trapnr.h
index f5d2325aa0b7..f0baf92da20b 100644
--- a/arch/x86/include/asm/trapnr.h
+++ b/arch/x86/include/asm/trapnr.h
@@ -27,6 +27,7 @@
 #define X86_TRAP_VE		20	/* Virtualization Exception */
 #define X86_TRAP_CP		21	/* Control Protection Exception */
 #define X86_TRAP_VC		29	/* VMM Communication Exception */
+#define X86_NR_HW_TRAPS		32	/* Number of hardware trap vectors */
 #define X86_TRAP_IRET		32	/* IRET Exception */
 
 #endif
-- 
2.31.1



* [PATCH v2 2/9] x86/irqvector: add NR_EXTERNAL_VECTORS and NR_SYSTEM_VECTORS
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 1/9] x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h> H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 3/9] x86/irq: remove unused vectors from <asm/irq_vectors.h> H. Peter Anvin
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

Add defines for the number of external vectors and number of system
vectors instead of requiring the use of (FIRST_SYSTEM_VECTOR -
FIRST_EXTERNAL_VECTOR) and (NR_VECTORS - FIRST_SYSTEM_VECTOR)
respectively.
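
As a rough worked example, assuming the usual CONFIG_X86_LOCAL_APIC
layout (FIRST_EXTERNAL_VECTOR = 0x20, FIRST_SYSTEM_VECTOR =
LOCAL_TIMER_VECTOR = 0xec, NR_VECTORS = 256 - values not spelled out
in this patch):

	NR_EXTERNAL_VECTORS = 0xec - 0x20  = 204
	NR_SYSTEM_VECTORS   = 0x100 - 0xec = 20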

Acked-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/idtentry.h          | 4 ++--
 arch/x86/include/asm/irq_vectors.h       | 3 +++
 tools/arch/x86/include/asm/irq_vectors.h | 3 +++
 3 files changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index 73d45b0dfff2..c03a18cac78e 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -504,7 +504,7 @@ __visible noinstr void func(struct pt_regs *regs,			\
 	.align 8
 SYM_CODE_START(irq_entries_start)
     vector=FIRST_EXTERNAL_VECTOR
-    .rept (FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
+    .rept NR_EXTERNAL_VECTORS
 	UNWIND_HINT_IRET_REGS
 0 :
 	.byte	0x6a, vector
@@ -520,7 +520,7 @@ SYM_CODE_END(irq_entries_start)
 	.align 8
 SYM_CODE_START(spurious_entries_start)
     vector=FIRST_SYSTEM_VECTOR
-    .rept (NR_VECTORS - FIRST_SYSTEM_VECTOR)
+    .rept NR_SYSTEM_VECTORS
 	UNWIND_HINT_IRET_REGS
 0 :
 	.byte	0x6a, vector
diff --git a/arch/x86/include/asm/irq_vectors.h b/arch/x86/include/asm/irq_vectors.h
index 889f8b1b5b7f..d2ef35927770 100644
--- a/arch/x86/include/asm/irq_vectors.h
+++ b/arch/x86/include/asm/irq_vectors.h
@@ -114,6 +114,9 @@
 #define FIRST_SYSTEM_VECTOR		NR_VECTORS
 #endif
 
+#define NR_EXTERNAL_VECTORS		(FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
+#define NR_SYSTEM_VECTORS		(NR_VECTORS - FIRST_SYSTEM_VECTOR)
+
 /*
  * Size the maximum number of interrupts.
  *
diff --git a/tools/arch/x86/include/asm/irq_vectors.h b/tools/arch/x86/include/asm/irq_vectors.h
index 889f8b1b5b7f..d2ef35927770 100644
--- a/tools/arch/x86/include/asm/irq_vectors.h
+++ b/tools/arch/x86/include/asm/irq_vectors.h
@@ -114,6 +114,9 @@
 #define FIRST_SYSTEM_VECTOR		NR_VECTORS
 #endif
 
+#define NR_EXTERNAL_VECTORS		(FIRST_SYSTEM_VECTOR - FIRST_EXTERNAL_VECTOR)
+#define NR_SYSTEM_VECTORS		(NR_VECTORS - FIRST_SYSTEM_VECTOR)
+
 /*
  * Size the maximum number of interrupts.
  *
-- 
2.31.1



* [PATCH v2 3/9] x86/irq: remove unused vectors from <asm/irq_vectors.h>
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 1/9] x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h> H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 2/9] x86/irqvector: add NR_EXTERNAL_VECTORS and NR_SYSTEM_VECTORS H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15  1:43 ` [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate() H. Peter Anvin
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List, Steve Wahl, Mike Travis,
	Dimitri Sivanich, Russ Anderson

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

UV_BAU_MESSAGE is defined but not used anywhere in the
kernel. Presumably this is a stale vector number that can be
reclaimed.

MCE_VECTOR is not an actual vector: #MC is an exception, not an
interrupt vector, and as such is correctly described as
X86_TRAP_MC. MCE_VECTOR is not used anywhere in the kernel.

Note that NMI_VECTOR *is* used; specifically, it is the vector number
programmed into the APIC LVT when an NMI interrupt is configured. At
the moment it is always numerically identical to X86_TRAP_NMI, but
that is not necessarily going to be the case indefinitely.
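
For context, a hedged sketch (not code from this patch) of how the
value is special-cased: when the APIC code assembles an NMI IPI,
NMI_VECTOR selects NMI delivery mode instead of a fixed vector,
roughly:

	if (vector == NMI_VECTOR)
		icr |= APIC_DM_NMI;		/* deliver as an NMI */
	else
		icr |= APIC_DM_FIXED | vector;	/* normal fixed-vector interrupt */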

Acked-by: Steve Wahl <steve.wahl@hpe.com>
Cc: Mike Travis <mike.travis@hpe.com>
Cc: Dimitri Sivanich <dimitri.sivanich@hpe.com>
Cc: Russ Anderson <russ.anderson@hpe.com>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/irq_vectors.h       | 4 ++--
 tools/arch/x86/include/asm/irq_vectors.h | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/irq_vectors.h b/arch/x86/include/asm/irq_vectors.h
index d2ef35927770..43dcb9284208 100644
--- a/arch/x86/include/asm/irq_vectors.h
+++ b/arch/x86/include/asm/irq_vectors.h
@@ -26,8 +26,8 @@
  * This file enumerates the exact layout of them:
  */
 
+/* This is used as an interrupt vector when programming the APIC. */
 #define NMI_VECTOR			0x02
-#define MCE_VECTOR			0x12
 
 /*
  * IDT vectors usable for external interrupt sources start at 0x20.
@@ -84,7 +84,7 @@
  */
 #define IRQ_WORK_VECTOR			0xf6
 
-#define UV_BAU_MESSAGE			0xf5
+/* 0xf5 - unused, was UV_BAU_MESSAGE */
 #define DEFERRED_ERROR_VECTOR		0xf4
 
 /* Vector on which hypervisor callbacks will be delivered */
diff --git a/tools/arch/x86/include/asm/irq_vectors.h b/tools/arch/x86/include/asm/irq_vectors.h
index d2ef35927770..43dcb9284208 100644
--- a/tools/arch/x86/include/asm/irq_vectors.h
+++ b/tools/arch/x86/include/asm/irq_vectors.h
@@ -26,8 +26,8 @@
  * This file enumerates the exact layout of them:
  */
 
+/* This is used as an interrupt vector when programming the APIC. */
 #define NMI_VECTOR			0x02
-#define MCE_VECTOR			0x12
 
 /*
  * IDT vectors usable for external interrupt sources start at 0x20.
@@ -84,7 +84,7 @@
  */
 #define IRQ_WORK_VECTOR			0xf6
 
-#define UV_BAU_MESSAGE			0xf5
+/* 0xf5 - unused, was UV_BAU_MESSAGE */
 #define DEFERRED_ERROR_VECTOR		0xf4
 
 /* Vector on which hypervisor callbacks will be delivered */
-- 
2.31.1



* [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate()
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
                   ` (2 preceding siblings ...)
  2021-05-15  1:43 ` [PATCH v2 3/9] x86/irq: remove unused vectors from <asm/irq_vectors.h> H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15 15:27   ` Andy Lutomirski
  2021-05-15  1:43 ` [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h> H. Peter Anvin
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

There is no reason to pass any particular address to
idt_invalidate(); the argument looks mostly like an accidental
artifact of unifying code that had been written in different ways.
The most "sensible" address to set here is a NULL pointer - virtual
address zero - purely as a visual marker.

This also makes it possible to mark the struct desc_ptr in
idt_invalidate() as static const.

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/desc.h        | 2 +-
 arch/x86/kernel/idt.c              | 5 ++---
 arch/x86/kernel/machine_kexec_32.c | 2 +-
 arch/x86/kernel/reboot.c           | 2 +-
 4 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 476082a83d1c..b8429ae50b71 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -427,6 +427,6 @@ static inline void idt_setup_early_pf(void) { }
 static inline void idt_setup_ist_traps(void) { }
 #endif
 
-extern void idt_invalidate(void *addr);
+extern void idt_invalidate(void);
 
 #endif /* _ASM_X86_DESC_H */
diff --git a/arch/x86/kernel/idt.c b/arch/x86/kernel/idt.c
index d552f177eca0..2779f5226dc2 100644
--- a/arch/x86/kernel/idt.c
+++ b/arch/x86/kernel/idt.c
@@ -331,11 +331,10 @@ void __init idt_setup_early_handler(void)
 
 /**
  * idt_invalidate - Invalidate interrupt descriptor table
- * @addr:	The virtual address of the 'invalid' IDT
  */
-void idt_invalidate(void *addr)
+void idt_invalidate(void)
 {
-	struct desc_ptr idt = { .address = (unsigned long) addr, .size = 0 };
+	static const struct desc_ptr idt = { .address = 0, .size = 0 };
 
 	load_idt(&idt);
 }
diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c
index 64b00b0d7fe8..1e34feebcd5d 100644
--- a/arch/x86/kernel/machine_kexec_32.c
+++ b/arch/x86/kernel/machine_kexec_32.c
@@ -232,7 +232,7 @@ void machine_kexec(struct kimage *image)
 	 * The gdt & idt are now invalid.
 	 * If you want to load them you must set up your own idt & gdt.
 	 */
-	idt_invalidate(phys_to_virt(0));
+	idt_invalidate();
 	set_gdt(phys_to_virt(0), 0);
 
 	/* now call it */
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index b29657b76e3f..ebfb91108232 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -669,7 +669,7 @@ static void native_machine_emergency_restart(void)
 			break;
 
 		case BOOT_TRIPLE:
-			idt_invalidate(NULL);
+			idt_invalidate();
 			__asm__ __volatile__("int3");
 
 			/* We're probably dead after this, but... */
-- 
2.31.1



* [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h>
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
                   ` (3 preceding siblings ...)
  2021-05-15  1:43 ` [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate() H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15 15:29   ` Andy Lutomirski
  2021-05-15  1:43 ` [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c H. Peter Anvin
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

In some places, we want the native forms of descriptor table
invalidation. Rather than open-coding them, add explicitly native
functions to invalidate the GDT and IDT.

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/include/asm/desc.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index b8429ae50b71..aa366b2bbc41 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -224,6 +224,21 @@ static inline void store_idt(struct desc_ptr *dtr)
 	asm volatile("sidt %0":"=m" (*dtr));
 }
 
+static const struct desc_ptr __invalid_gdt_idt_ptr __maybe_unused = {
+	.address = 0,
+	.size = 0
+};
+
+static inline void native_gdt_invalidate(void)
+{
+	native_load_gdt(&__invalid_gdt_idt_ptr);
+}
+
+static inline void native_idt_invalidate(void)
+{
+	native_load_idt(&__invalid_gdt_idt_ptr);
+}
+
 /*
  * The LTR instruction marks the TSS GDT entry as busy. On 64-bit, the GDT is
  * a read-only remapping. To prevent a page fault, the GDT is switched to the
-- 
2.31.1



* [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
                   ` (4 preceding siblings ...)
  2021-05-15  1:43 ` [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h> H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15 15:30   ` Andy Lutomirski
  2021-05-15  1:43 ` [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt H. Peter Anvin
  2021-05-15  1:44 ` [PATCH v2 8/9] x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_* H. Peter Anvin
  7 siblings, 1 reply; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

These files contain private set_gdt() functions which are only used
to invalidate the GDT; machine_kexec_64.c also contains a set_idt()
function to invalidate the IDT.

phys_to_virt(0) *really* doesn't make any sense for creating an
invalid GDT. A NULL pointer (virtual 0) makes a lot more sense;
although neither will allow any actual memory reference, a NULL
pointer stands out more.

Replace these calls with native_[gi]dt_invalidate().

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/kernel/machine_kexec_32.c | 15 ++------------
 arch/x86/kernel/machine_kexec_64.c | 33 ++----------------------------
 2 files changed, 4 insertions(+), 44 deletions(-)

diff --git a/arch/x86/kernel/machine_kexec_32.c b/arch/x86/kernel/machine_kexec_32.c
index 1e34feebcd5d..1b373d79cedc 100644
--- a/arch/x86/kernel/machine_kexec_32.c
+++ b/arch/x86/kernel/machine_kexec_32.c
@@ -23,17 +23,6 @@
 #include <asm/set_memory.h>
 #include <asm/debugreg.h>
 
-static void set_gdt(void *newgdt, __u16 limit)
-{
-	struct desc_ptr curgdt;
-
-	/* ia32 supports unaligned loads & stores */
-	curgdt.size    = limit;
-	curgdt.address = (unsigned long)newgdt;
-
-	load_gdt(&curgdt);
-}
-
 static void load_segments(void)
 {
 #define __STR(X) #X
@@ -232,8 +221,8 @@ void machine_kexec(struct kimage *image)
 	 * The gdt & idt are now invalid.
 	 * If you want to load them you must set up your own idt & gdt.
 	 */
-	idt_invalidate();
-	set_gdt(phys_to_virt(0), 0);
+	native_idt_invalidate();
+	native_gdt_invalidate();
 
 	/* now call it */
 	image->start = relocate_kernel_ptr((unsigned long)image->head,
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index c078b0d3ab0e..131f30fdcfbd 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -256,35 +256,6 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
 	return init_transition_pgtable(image, level4p);
 }
 
-static void set_idt(void *newidt, u16 limit)
-{
-	struct desc_ptr curidt;
-
-	/* x86-64 supports unaligned loads & stores */
-	curidt.size    = limit;
-	curidt.address = (unsigned long)newidt;
-
-	__asm__ __volatile__ (
-		"lidtq %0\n"
-		: : "m" (curidt)
-		);
-};
-
-
-static void set_gdt(void *newgdt, u16 limit)
-{
-	struct desc_ptr curgdt;
-
-	/* x86-64 supports unaligned loads & stores */
-	curgdt.size    = limit;
-	curgdt.address = (unsigned long)newgdt;
-
-	__asm__ __volatile__ (
-		"lgdtq %0\n"
-		: : "m" (curgdt)
-		);
-};
-
 static void load_segments(void)
 {
 	__asm__ __volatile__ (
@@ -379,8 +350,8 @@ void machine_kexec(struct kimage *image)
 	 * The gdt & idt are now invalid.
 	 * If you want to load them you must set up your own idt & gdt.
 	 */
-	set_gdt(phys_to_virt(0), 0);
-	set_idt(phys_to_virt(0), 0);
+	native_idt_invalidate();
+	native_gdt_invalidate();
 
 	/* now call it */
 	image->start = relocate_kernel((unsigned long)image->head,
-- 
2.31.1



* [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
                   ` (5 preceding siblings ...)
  2021-05-15  1:43 ` [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c H. Peter Anvin
@ 2021-05-15  1:43 ` H. Peter Anvin
  2021-05-15  6:24   ` H. Peter Anvin
  2021-05-15  9:59   ` [PATCH v2.1 7/8] " H. Peter Anvin
  2021-05-15  1:44 ` [PATCH v2 8/9] x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_* H. Peter Anvin
  7 siblings, 2 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:43 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

The current IRQ vector allocation code should be "clean" and never
issue an IRQ_MOVE_CLEANUP_VECTOR IPI for an interrupt that could still
be pending. That should make it possible to move this vector into the
"normal" system IRQ vector range. This should probably be a
three-step process:

1. Introduce this WARN_ONCE() on this event ever occurring.
2. Move the IRQ_MOVE_CLEANUP_VECTOR to the sysvec range.
3. Remove the self-IPI hack.

This implements step 1.
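
For reference, the pending-vector check that the new warning guards
(taken from the existing code below) reads the APIC IRR, a 256-bit
register exposed as eight 32-bit words spaced 0x10 apart:

	irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
	if (irr & (1U << (vector % 32))) {
		/* the vector is still pending on this CPU */
	}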

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/kernel/apic/vector.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 6dbdc7c22bb7..7ba2982a3585 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -939,9 +939,14 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
 		 * to this CPU. IRQ_MOVE_CLEANUP_VECTOR is the lowest
 		 * priority external vector, so on return from this
 		 * interrupt the device interrupt will happen first.
+		 *
+		 * *** This should never happen with the current IRQ
+		 * cleanup code, so WARN_ONCE() for now, and
+		 * eventually get rid of this hack.
 		 */
 		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
 		if (irr & (1U << (vector % 32))) {
+			WARN_ONCE(1, "irq_move_cleanup called on still pending interrupt\n");
 			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
 			continue;
 		}
-- 
2.31.1



* [PATCH v2 8/9] x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_*
  2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
                   ` (6 preceding siblings ...)
  2021-05-15  1:43 ` [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt H. Peter Anvin
@ 2021-05-15  1:44 ` H. Peter Anvin
  7 siblings, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  1:44 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

Move as much as possible of the common code in the _IDTENTRY_* and
run_*_on_irqstack_cond() macros into inline functions. A lot of them
differ only in types and/or the presence or absence of an additional
argument; the differential amount of code is negligible and, after
bending the types a little bit, they unify nicely.
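
Expanded by hand (a sketch based on the new helpers in this patch;
sysvec_foo is a placeholder name), DEFINE_IDTENTRY_SYSVEC(sysvec_foo)
now becomes roughly:

	static noinline void __sysvec_foo(struct pt_regs *regs, u32);
	__visible noinstr void sysvec_foo(struct pt_regs *regs)
	{
		idtentry_call(__sysvec_foo, regs, true, run_on_irqstack_cond, 0);
	}
	static noinline void
	__sysvec_foo(struct pt_regs *regs, u32 __dummy __always_unused)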

Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/entry/common.c          |   5 +-
 arch/x86/include/asm/idtentry.h  | 170 +++++++++++++++----------------
 arch/x86/include/asm/irq_stack.h |  73 +++++--------
 arch/x86/kernel/apic/apic.c      |   2 +-
 arch/x86/kernel/irq.c            |   1 +
 arch/x86/kernel/sev-es.c         |   6 +-
 arch/x86/kernel/traps.c          |   2 +-
 7 files changed, 117 insertions(+), 142 deletions(-)

diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 7b2542b13ebd..1e46a1a35b20 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -253,7 +253,8 @@ static __always_inline bool get_and_clear_inhcall(void) { return false; }
 static __always_inline void restore_inhcall(bool inhcall) { }
 #endif
 
-static void __xen_pv_evtchn_do_upcall(struct pt_regs *regs)
+static void __xen_pv_evtchn_do_upcall(struct pt_regs *regs,
+				      u32 __dummy __always_unused)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
 
@@ -269,7 +270,7 @@ __visible noinstr void xen_pv_evtchn_do_upcall(struct pt_regs *regs)
 	irqentry_state_t state = irqentry_enter(regs);
 	bool inhcall;
 
-	run_sysvec_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs);
+	run_on_irqstack_cond(__xen_pv_evtchn_do_upcall, regs, 0);
 
 	inhcall = get_and_clear_inhcall();
 	if (inhcall && !WARN_ON_ONCE(state.exit_rcu)) {
diff --git a/arch/x86/include/asm/idtentry.h b/arch/x86/include/asm/idtentry.h
index c03a18cac78e..3932c770551e 100644
--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -11,6 +11,65 @@
 
 #include <asm/irq_stack.h>
 
+/**
+ * idtentry_call - Common code for non-trivial IDT entry points
+ * @func:	What should ultimately be called
+ * @regs:	Pointer to entry struct pt_regs
+ * @arg:	Extra argument to func (vector, error_code)
+ * @flush:	If kvm_set_cpu_l1tf_flush_l1d() should be called
+ * @wrapper:	Wrapper to call func; NULL for just call
+ */
+static __always_inline void
+idtentry_call(irq_func_t func, struct pt_regs *regs, bool flush,
+	      void (*wrapper)(irq_func_t, struct pt_regs *, u32),
+	      u32 arg)
+{
+	irqentry_state_t state = irqentry_enter(regs);
+
+	instrumentation_begin();
+	if (flush)
+		kvm_set_cpu_l1tf_flush_l1d();
+	if (wrapper)
+		wrapper(func, regs, arg);
+	else
+		func(regs, arg);
+	instrumentation_end();
+	irqentry_exit(regs, state);
+}
+
+/**
+ * _DEFINE_IDTENTRY - Common macro for defining idtentries with no argument
+ * @func:	Function name of the entry point
+ * @flush:	If wrapper should call kvm_set_cpu_l1tf_flush_l1d()
+ * @inline_opt: __always_inline or noinline as appropriate for __func
+ * @wrapper:	Wrapper function for calling __func
+ *
+ */
+#define _DEFINE_IDTENTRY(func, flush, inline_opt, wrapper)		\
+static inline_opt void __##func(struct pt_regs *regs, u32);		\
+__visible noinstr void func(struct pt_regs *regs)			\
+{									\
+	idtentry_call(__##func, regs, flush, wrapper, 0);		\
+}									\
+static inline_opt void							\
+__##func(struct pt_regs *regs, u32 __dummy __always_unused)
+
+/**
+ * _DEFINE_IDTENTRY_ERRORCODE - Common macro for defining idtentries with argument
+ * @func:	Function name of the entry point
+ * @flush:	If wrapper should call kvm_set_cpu_l1tf_flush_l1d()
+ * @inline_opt: __always_inline or noinline as appropriate for __func
+ * @wrapper:	Wrapper function for calling __func
+ *
+ */
+#define _DEFINE_IDTENTRY_ERRORCODE(func, flush, inline_opt, wrapper)	\
+static inline_opt void __##func(struct pt_regs *regs, u32 error_code);	\
+__visible noinstr void func(struct pt_regs *regs, u32 error_code)	\
+{									\
+	idtentry_call(__##func, regs, flush, wrapper, error_code);	\
+}									\
+static inline_opt void __##func(struct pt_regs *regs, u32 error_code)
+
 /**
  * DECLARE_IDTENTRY - Declare functions for simple IDT entry points
  *		      No error code pushed by hardware
@@ -45,19 +104,7 @@
  * which has to run before returning to the low level assembly code.
  */
 #define DEFINE_IDTENTRY(func)						\
-static __always_inline void __##func(struct pt_regs *regs);		\
-									\
-__visible noinstr void func(struct pt_regs *regs)			\
-{									\
-	irqentry_state_t state = irqentry_enter(regs);			\
-									\
-	instrumentation_begin();					\
-	__##func (regs);						\
-	instrumentation_end();						\
-	irqentry_exit(regs, state);					\
-}									\
-									\
-static __always_inline void __##func(struct pt_regs *regs)
+	_DEFINE_IDTENTRY(func, false, __always_inline, NULL)
 
 /* Special case for 32bit IRET 'trap' */
 #define DECLARE_IDTENTRY_SW	DECLARE_IDTENTRY
@@ -80,7 +127,7 @@ static __always_inline void __##func(struct pt_regs *regs)
 #define DECLARE_IDTENTRY_ERRORCODE(vector, func)			\
 	asmlinkage void asm_##func(void);				\
 	asmlinkage void xen_asm_##func(void);				\
-	__visible void func(struct pt_regs *regs, unsigned long error_code)
+	__visible void func(struct pt_regs *regs, u32 error_code)
 
 /**
  * DEFINE_IDTENTRY_ERRORCODE - Emit code for simple IDT entry points
@@ -90,22 +137,7 @@ static __always_inline void __##func(struct pt_regs *regs)
  * Same as DEFINE_IDTENTRY, but has an extra error_code argument
  */
 #define DEFINE_IDTENTRY_ERRORCODE(func)					\
-static __always_inline void __##func(struct pt_regs *regs,		\
-				     unsigned long error_code);		\
-									\
-__visible noinstr void func(struct pt_regs *regs,			\
-			    unsigned long error_code)			\
-{									\
-	irqentry_state_t state = irqentry_enter(regs);			\
-									\
-	instrumentation_begin();					\
-	__##func (regs, error_code);					\
-	instrumentation_end();						\
-	irqentry_exit(regs, state);					\
-}									\
-									\
-static __always_inline void __##func(struct pt_regs *regs,		\
-				     unsigned long error_code)
+	_DEFINE_IDTENTRY_ERRORCODE(func, false, __always_inline, NULL)
 
 /**
  * DECLARE_IDTENTRY_RAW - Declare functions for raw IDT entry points
@@ -161,7 +193,7 @@ __visible noinstr void func(struct pt_regs *regs)
  * is required before the enter/exit() helpers are invoked.
  */
 #define DEFINE_IDTENTRY_RAW_ERRORCODE(func)				\
-__visible noinstr void func(struct pt_regs *regs, unsigned long error_code)
+__visible noinstr void func(struct pt_regs *regs, u32 error_code)
 
 /**
  * DECLARE_IDTENTRY_IRQ - Declare functions for device interrupt IDT entry
@@ -187,25 +219,11 @@ __visible noinstr void func(struct pt_regs *regs, unsigned long error_code)
  * has to be done in the function body if necessary.
  */
 #define DEFINE_IDTENTRY_IRQ(func)					\
-static void __##func(struct pt_regs *regs, u32 vector);			\
-									\
-__visible noinstr void func(struct pt_regs *regs,			\
-			    unsigned long error_code)			\
-{									\
-	irqentry_state_t state = irqentry_enter(regs);			\
-	u32 vector = (u32)(u8)error_code;				\
-									\
-	instrumentation_begin();					\
-	kvm_set_cpu_l1tf_flush_l1d();					\
-	run_irq_on_irqstack_cond(__##func, regs, vector);		\
-	instrumentation_end();						\
-	irqentry_exit(regs, state);					\
-}									\
-									\
-static noinline void __##func(struct pt_regs *regs, u32 vector)
+	_DEFINE_IDTENTRY_ERRORCODE(func, true, noinline, run_on_irqstack_cond)
 
 /**
- * DECLARE_IDTENTRY_SYSVEC - Declare functions for system vector entry points
+ * DECLARE_IDTENTRY - Declare functions for simple IDT entry points
+ *		      No error code pushed by hardware
  * @vector:	Vector number (ignored for C)
  * @func:	Function name of the entry point
  *
@@ -214,9 +232,11 @@ static noinline void __##func(struct pt_regs *regs, u32 vector)
  * - The XEN PV trap entry point: xen_##func (maybe unused)
  * - The C handler called from the ASM entry point
  *
- * Maps to DECLARE_IDTENTRY().
+ * Note: This is the C variant of DECLARE_IDTENTRY(). As the name says it
+ * declares the entry points for usage in C code. There is an ASM variant
+ * as well which is used to emit the entry stubs in entry_32/64.S.
  */
-#define DECLARE_IDTENTRY_SYSVEC(vector, func)				\
+#define DECLARE_IDTENTRY_SYSVEC(vector, func)	\
 	DECLARE_IDTENTRY(vector, func)
 
 /**
@@ -229,20 +249,7 @@ static noinline void __##func(struct pt_regs *regs, u32 vector)
  * Runs the function on the interrupt stack if the entry hit kernel mode
  */
 #define DEFINE_IDTENTRY_SYSVEC(func)					\
-static void __##func(struct pt_regs *regs);				\
-									\
-__visible noinstr void func(struct pt_regs *regs)			\
-{									\
-	irqentry_state_t state = irqentry_enter(regs);			\
-									\
-	instrumentation_begin();					\
-	kvm_set_cpu_l1tf_flush_l1d();					\
-	run_sysvec_on_irqstack_cond(__##func, regs);			\
-	instrumentation_end();						\
-	irqentry_exit(regs, state);					\
-}									\
-									\
-static noinline void __##func(struct pt_regs *regs)
+	_DEFINE_IDTENTRY(func, true, noinline, run_on_irqstack_cond)
 
 /**
  * DEFINE_IDTENTRY_SYSVEC_SIMPLE - Emit code for simple system vector IDT
@@ -255,23 +262,16 @@ static noinline void __##func(struct pt_regs *regs)
  * Only use for 'empty' vectors like reschedule IPI and KVM posted
  * interrupt vectors.
  */
+static __always_inline void
+call_sysvec_simple(irq_func_t func, struct pt_regs *regs, u32 arg)
+{
+	__irq_enter_raw();
+	func(regs, arg);
+	__irq_exit_raw();
+}
+
 #define DEFINE_IDTENTRY_SYSVEC_SIMPLE(func)				\
-static __always_inline void __##func(struct pt_regs *regs);		\
-									\
-__visible noinstr void func(struct pt_regs *regs)			\
-{									\
-	irqentry_state_t state = irqentry_enter(regs);			\
-									\
-	instrumentation_begin();					\
-	__irq_enter_raw();						\
-	kvm_set_cpu_l1tf_flush_l1d();					\
-	__##func (regs);						\
-	__irq_exit_raw();						\
-	instrumentation_end();						\
-	irqentry_exit(regs, state);					\
-}									\
-									\
-static __always_inline void __##func(struct pt_regs *regs)
+	_DEFINE_IDTENTRY(func, true, __always_inline, call_sysvec_simple)
 
 /**
  * DECLARE_IDTENTRY_XENCB - Declare functions for XEN HV callback entry point
@@ -312,8 +312,8 @@ static __always_inline void __##func(struct pt_regs *regs)
  */
 #define DECLARE_IDTENTRY_VC(vector, func)				\
 	DECLARE_IDTENTRY_RAW_ERRORCODE(vector, func);			\
-	__visible noinstr void ist_##func(struct pt_regs *regs, unsigned long error_code);	\
-	__visible noinstr void safe_stack_##func(struct pt_regs *regs, unsigned long error_code)
+	__visible noinstr void ist_##func(struct pt_regs *regs, u32 error_code);	\
+	__visible noinstr void safe_stack_##func(struct pt_regs *regs, u32 error_code)
 
 /**
  * DEFINE_IDTENTRY_IST - Emit code for IST entry points
@@ -396,8 +396,7 @@ static __always_inline void __##func(struct pt_regs *regs)
  */
 #define DECLARE_IDTENTRY_DF(vector, func)				\
 	asmlinkage void asm_##func(void);				\
-	__visible void func(struct pt_regs *regs,			\
-			    unsigned long error_code,			\
+	__visible void func(struct pt_regs *regs, u32 error_code,	\
 			    unsigned long address)
 
 /**
@@ -408,8 +407,7 @@ static __always_inline void __##func(struct pt_regs *regs)
  * cr2 in the address argument.
  */
 #define DEFINE_IDTENTRY_DF(func)					\
-__visible noinstr void func(struct pt_regs *regs,			\
-			    unsigned long error_code,			\
+__visible noinstr void func(struct pt_regs *regs, u32 error_code,	\
 			    unsigned long address)
 
 #endif	/* !CONFIG_X86_64 */
diff --git a/arch/x86/include/asm/irq_stack.h b/arch/x86/include/asm/irq_stack.h
index 562854c60808..305ca95b2743 100644
--- a/arch/x86/include/asm/irq_stack.h
+++ b/arch/x86/include/asm/irq_stack.h
@@ -6,6 +6,8 @@
 
 #include <asm/processor.h>
 
+typedef void (*irq_func_t)(struct pt_regs *regs, u32 arg);
+
 #ifdef CONFIG_X86_64
 
 /*
@@ -132,7 +134,7 @@
 }
 
 /*
- * Function call sequence for __call_on_irqstack() for system vectors.
+ * Function call sequence for __call_on_irqstack() for irqs and system vectors.
  *
  * Note that irq_enter_rcu() and irq_exit_rcu() do not use the input
  * mechanism because these functions are global and cannot be optimized out
@@ -145,27 +147,6 @@
  * call to idtentry_exit() anyway, it's likely that it does not cause extra
  * effort for this asm magic.
  */
-#define ASM_CALL_SYSVEC							\
-	"call irq_enter_rcu				\n"		\
-	"movq	%[arg1], %%rdi				\n"		\
-	"call %P[__func]				\n"		\
-	"call irq_exit_rcu				\n"
-
-#define SYSVEC_CONSTRAINTS	, [arg1] "r" (regs)
-
-#define run_sysvec_on_irqstack_cond(func, regs)				\
-{									\
-	assert_function_type(func, void (*)(struct pt_regs *));		\
-	assert_arg_type(regs, struct pt_regs *);			\
-									\
-	call_on_irqstack_cond(func, regs, ASM_CALL_SYSVEC,		\
-			      SYSVEC_CONSTRAINTS, regs);		\
-}
-
-/*
- * As in ASM_CALL_SYSVEC above the clobbers force the compiler to store
- * @regs and @vector in callee saved registers.
- */
 #define ASM_CALL_IRQ							\
 	"call irq_enter_rcu				\n"		\
 	"movq	%[arg1], %%rdi				\n"		\
@@ -173,16 +154,13 @@
 	"call %P[__func]				\n"		\
 	"call irq_exit_rcu				\n"
 
-#define IRQ_CONSTRAINTS	, [arg1] "r" (regs), [arg2] "r" (vector)
+#define IRQ_CONSTRAINTS	, [arg1] "r" (regs), [arg2] "r" (arg)
 
-#define run_irq_on_irqstack_cond(func, regs, vector)			\
-{									\
-	assert_function_type(func, void (*)(struct pt_regs *, u32));	\
-	assert_arg_type(regs, struct pt_regs *);			\
-	assert_arg_type(vector, u32);					\
-									\
-	call_on_irqstack_cond(func, regs, ASM_CALL_IRQ,			\
-			      IRQ_CONSTRAINTS, regs, vector);		\
+static __always_inline void
+run_on_irqstack_cond(irq_func_t func, struct pt_regs *regs, u32 arg)
+{
+	call_on_irqstack_cond(func, regs, ASM_CALL_IRQ,
+			      IRQ_CONSTRAINTS, regs, arg);
 }
 
 #define ASM_CALL_SOFTIRQ						\
@@ -194,28 +172,25 @@
  * interrupts are pending to be processed. The interrupt stack cannot be in
  * use here.
  */
-#define do_softirq_own_stack()						\
-{									\
-	__this_cpu_write(hardirq_stack_inuse, true);			\
-	call_on_irqstack(__do_softirq, ASM_CALL_SOFTIRQ);		\
-	__this_cpu_write(hardirq_stack_inuse, false);			\
+static __always_inline void do_softirq_own_stack(void)
+{
+	__this_cpu_write(hardirq_stack_inuse, true);
+	call_on_irqstack(__do_softirq, ASM_CALL_SOFTIRQ);
+	__this_cpu_write(hardirq_stack_inuse, false);
 }
 
 #else /* CONFIG_X86_64 */
-/* System vector handlers always run on the stack they interrupted. */
-#define run_sysvec_on_irqstack_cond(func, regs)				\
-{									\
-	irq_enter_rcu();						\
-	func(regs);							\
-	irq_exit_rcu();							\
-}
 
-/* Switches to the irq stack within func() */
-#define run_irq_on_irqstack_cond(func, regs, vector)			\
-{									\
-	irq_enter_rcu();						\
-	func(regs, vector);						\
-	irq_exit_rcu();							\
+/*
+ * System vector handlers always run on the stack they interrupted;
+ * irqs switch to the irq stack within func().
+ */
+static __always_inline void
+run_on_irqstack_cond(irq_func_t func, struct pt_regs *regs, u32 arg)
+{
+	irq_enter_rcu();
+	func(regs, arg);
+	irq_exit_rcu();
 }
 
 #endif /* !CONFIG_X86_64 */
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index 4a39fb429f15..64832869e1a1 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -2188,7 +2188,7 @@ static noinline void handle_spurious_interrupt(u8 vector)
  */
 DEFINE_IDTENTRY_IRQ(spurious_interrupt)
 {
-	handle_spurious_interrupt(vector);
+	handle_spurious_interrupt((u8)error_code);
 }
 
 DEFINE_IDTENTRY_SYSVEC(sysvec_spurious_apic_interrupt)
diff --git a/arch/x86/kernel/irq.c b/arch/x86/kernel/irq.c
index e28f6a5d14f1..27994818d9b1 100644
--- a/arch/x86/kernel/irq.c
+++ b/arch/x86/kernel/irq.c
@@ -239,6 +239,7 @@ static __always_inline void handle_irq(struct irq_desc *desc,
  */
 DEFINE_IDTENTRY_IRQ(common_interrupt)
 {
+	const u8 vector = error_code;
 	struct pt_regs *old_regs = set_irq_regs(regs);
 	struct irq_desc *desc;
 
diff --git a/arch/x86/kernel/sev-es.c b/arch/x86/kernel/sev-es.c
index 73873b007838..bdd75b71d0f3 100644
--- a/arch/x86/kernel/sev-es.c
+++ b/arch/x86/kernel/sev-es.c
@@ -1335,15 +1335,15 @@ DEFINE_IDTENTRY_VC_SAFE_STACK(exc_vmm_communication)
 		vc_finish_insn(&ctxt);
 		break;
 	case ES_UNSUPPORTED:
-		pr_err_ratelimited("Unsupported exit-code 0x%02lx in early #VC exception (IP: 0x%lx)\n",
+		pr_err_ratelimited("Unsupported exit-code 0x%02x in early #VC exception (IP: 0x%lx)\n",
 				   error_code, regs->ip);
 		goto fail;
 	case ES_VMM_ERROR:
-		pr_err_ratelimited("Failure in communication with VMM (exit-code 0x%02lx IP: 0x%lx)\n",
+		pr_err_ratelimited("Failure in communication with VMM (exit-code 0x%02x IP: 0x%lx)\n",
 				   error_code, regs->ip);
 		goto fail;
 	case ES_DECODE_FAILED:
-		pr_err_ratelimited("Failed to decode instruction (exit-code 0x%02lx IP: 0x%lx)\n",
+		pr_err_ratelimited("Failed to decode instruction (exit-code 0x%02x IP: 0x%lx)\n",
 				   error_code, regs->ip);
 		goto fail;
 	case ES_EXCEPTION:
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 853ea7a80806..f5aecbb44bc7 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -461,7 +461,7 @@ DEFINE_IDTENTRY_DF(exc_double_fault)
 	}
 #endif
 
-	pr_emerg("PANIC: double fault, error_code: 0x%lx\n", error_code);
+	pr_emerg("PANIC: double fault, error_code: 0x%x\n", error_code);
 	die("double fault", regs, error_code);
 	panic("Machine halted.");
 	instrumentation_end();
-- 
2.31.1



* Re: [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt
  2021-05-15  1:43 ` [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt H. Peter Anvin
@ 2021-05-15  6:24   ` H. Peter Anvin
  2021-05-15  9:59   ` [PATCH v2.1 7/8] " H. Peter Anvin
  1 sibling, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  6:24 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

Ugh... I could swear I had fixed the patch description.

On May 14, 2021 6:43:59 PM PDT, "H. Peter Anvin" <hpa@zytor.com> wrote:
>From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
>
>The current IRQ vector allocation code should be "clean" and never
>issue a IRQ_MOVE_CLEANUP_VECTOR IPI for an interrupt that could still
>be pending. This should make it possible to move it to the "normal"
>system IRQ vector range. This should probably be a three-step process:
>
>1. Introduce this WARN_ONCE() on this event ever occurring.
>2. Move the IRQ_MOVE_CLEANUP_VECTOR to the sysvec range.
>3. Remove the self-IPI hack.
>
>This implements step 1.
>
>Suggested-by: Thomas Gleixner <tglx@linutronix.de>
>Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
>---
> arch/x86/kernel/apic/vector.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
>diff --git a/arch/x86/kernel/apic/vector.c
>b/arch/x86/kernel/apic/vector.c
>index 6dbdc7c22bb7..7ba2982a3585 100644
>--- a/arch/x86/kernel/apic/vector.c
>+++ b/arch/x86/kernel/apic/vector.c
>@@ -939,9 +939,14 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
> 		 * to this CPU. IRQ_MOVE_CLEANUP_VECTOR is the lowest
> 		 * priority external vector, so on return from this
> 		 * interrupt the device interrupt will happen first.
>+		 *
>+		 * *** This should never happen with the current IRQ
>+		 * cleanup code, so WARN_ONCE() for now, and
>+		 * eventually get rid of this hack.
> 		 */
> 		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
> 		if (irr & (1U << (vector % 32))) {
>+			WARN_ONCE(1, "irq_move_cleanup called on still pending
>interrupt\n");
> 			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
> 			continue;
> 		}

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


* [PATCH v2.1 7/8] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt
  2021-05-15  1:43 ` [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt H. Peter Anvin
  2021-05-15  6:24   ` H. Peter Anvin
@ 2021-05-15  9:59   ` H. Peter Anvin
  1 sibling, 0 replies; 14+ messages in thread
From: H. Peter Anvin @ 2021-05-15  9:59 UTC (permalink / raw)
  To: Thomas Gleixner, Ingo Molnar, Andy Lutomirski, Borislav Petkov,
	H. Peter Anvin
  Cc: Linux Kernel Mailing List

From: "H. Peter Anvin (Intel)" <hpa@zytor.com>

The current IRQ vector allocation code should be "clean" and never
issue an IRQ_MOVE_CLEANUP_VECTOR IPI for an interrupt that could still
be pending. That should make it possible to move this vector into the
"normal" system IRQ vector range. This should probably be a
three-step process:

1. Introduce this WARN_ONCE() on this event ever occurring.
2. Remove the self-IPI hack.
3. Move the IRQ_MOVE_CLEANUP_VECTOR to the sysvec range.

This implements step 1.

[ Previous versions of this patch had steps 2 and 3 reversed. ]

Suggested-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
---
 arch/x86/kernel/apic/vector.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index 6dbdc7c22bb7..7ba2982a3585 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -939,9 +939,14 @@ DEFINE_IDTENTRY_SYSVEC(sysvec_irq_move_cleanup)
 		 * to this CPU. IRQ_MOVE_CLEANUP_VECTOR is the lowest
 		 * priority external vector, so on return from this
 		 * interrupt the device interrupt will happen first.
+		 *
+		 * *** This should never happen with the current IRQ
+		 * cleanup code, so WARN_ONCE() for now, and
+		 * eventually get rid of this hack.
 		 */
 		irr = apic_read(APIC_IRR + (vector / 32 * 0x10));
 		if (irr & (1U << (vector % 32))) {
+			WARN_ONCE(1, "irq_move_cleanup called on still pending interrupt\n");
 			apic->send_IPI_self(IRQ_MOVE_CLEANUP_VECTOR);
 			continue;
 		}
-- 
2.31.1



* Re: [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate()
  2021-05-15  1:43 ` [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate() H. Peter Anvin
@ 2021-05-15 15:27   ` Andy Lutomirski
  0 siblings, 0 replies; 14+ messages in thread
From: Andy Lutomirski @ 2021-05-15 15:27 UTC (permalink / raw)
  To: H. Peter Anvin, Thomas Gleixner, Ingo Molnar, Borislav Petkov
  Cc: Linux Kernel Mailing List

On 5/14/21 6:43 PM, H. Peter Anvin wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
> 
> There is no reason to specify any specific address to
> idt_invalidate(). It looks mostly like an artifact of unifying code
> done differently by accident. The most "sensible" address to set here
> is a NULL pointer - virtual address zero, just as a visual marker.
> 
> This also makes it possible to mark the struct desc_ptr in
> idt_invalidate() as static const.
> 
> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>

Acked-by: Andy Lutomirski <luto@kernel.org>

> -	idt_invalidate(phys_to_virt(0));
> +	idt_invalidate();

This hunk is an actual change, but I'm not sure it needs to be mentioned
in the changelog.


* Re: [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h>
  2021-05-15  1:43 ` [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h> H. Peter Anvin
@ 2021-05-15 15:29   ` Andy Lutomirski
  0 siblings, 0 replies; 14+ messages in thread
From: Andy Lutomirski @ 2021-05-15 15:29 UTC (permalink / raw)
  To: H. Peter Anvin, Thomas Gleixner, Ingo Molnar, Borislav Petkov
  Cc: Linux Kernel Mailing List

On 5/14/21 6:43 PM, H. Peter Anvin wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
> 
> In some places, we want the native forms of descriptor table
> invalidation. Rather than open-coding them, add explicitly native
> functions to invalidate the GDT and IDT.
> 
> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>
> ---
>  arch/x86/include/asm/desc.h | 15 +++++++++++++++
>  1 file changed, 15 insertions(+)
> 
> diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
> index b8429ae50b71..aa366b2bbc41 100644
> --- a/arch/x86/include/asm/desc.h
> +++ b/arch/x86/include/asm/desc.h
> @@ -224,6 +224,21 @@ static inline void store_idt(struct desc_ptr *dtr)
>  	asm volatile("sidt %0":"=m" (*dtr));
>  }
>  
> +static const struct desc_ptr __invalid_gdt_idt_ptr __maybe_unused = {
> +	.address = 0,
> +	.size = 0
> +};

I'm not convinced that putting this in a header is really a great idea.
How about:

> +
> +static inline void native_gdt_invalidate(void)
> +{
        const struct desc_ptr ... = ...;

> +	native_load_gdt(&__invalid_gdt_idt_ptr);
> +}

That should generate two PUSH instructions and may well result in a
smaller binary :)

--Andy
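
A minimal sketch of the suggested local-variable variant (hypothetical
code, not from this thread; the elided initializer is assumed to be
the same all-zero descriptor the patch uses):

	static inline void native_gdt_invalidate(void)
	{
		/* sketch: local all-zero descriptor instead of a file-scope one */
		const struct desc_ptr invalid = { .address = 0, .size = 0 };

		native_load_gdt(&invalid);
	}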


* Re: [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c
  2021-05-15  1:43 ` [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c H. Peter Anvin
@ 2021-05-15 15:30   ` Andy Lutomirski
  0 siblings, 0 replies; 14+ messages in thread
From: Andy Lutomirski @ 2021-05-15 15:30 UTC (permalink / raw)
  To: H. Peter Anvin, Thomas Gleixner, Ingo Molnar, Borislav Petkov
  Cc: Linux Kernel Mailing List

On 5/14/21 6:43 PM, H. Peter Anvin wrote:
> From: "H. Peter Anvin (Intel)" <hpa@zytor.com>
> 
> These files contain private set_gdt() functions which are only used to
> invalid the gdt; machine_kexec_64.c also contains a set_idt()
> function to invalidate the idt.
> 
> phys_to_virt(0) *really* doesn't make any sense for creating an
> invalid GDT. A NULL pointer (virtual 0) makes a lot more sense;
> although neither will allow any actual memory reference, a NULL
> pointer stands out more.
> 
> Replace these calls with native_[gi]dt_invalidate().
> 
> Signed-off-by: H. Peter Anvin (Intel) <hpa@zytor.com>

Acked-by: Andy Lutomirski <luto@kernel.org>


end of thread, other threads:[~2021-05-15 15:30 UTC | newest]

Thread overview: 14+ messages
2021-05-15  1:43 [PATCH v2 0/8] x86/irq: trap and interrupt cleanups H. Peter Anvin
2021-05-15  1:43 ` [PATCH v2 1/9] x86/traps: add X86_NR_HW_TRAPS to <asm/trapnr.h> H. Peter Anvin
2021-05-15  1:43 ` [PATCH v2 2/9] x86/irqvector: add NR_EXTERNAL_VECTORS and NR_SYSTEM_VECTORS H. Peter Anvin
2021-05-15  1:43 ` [PATCH v2 3/9] x86/irq: remove unused vectors from <asm/irq_vectors.h> H. Peter Anvin
2021-05-15  1:43 ` [PATCH v2 4/9] x86/idt: remove address argument to idt_invalidate() H. Peter Anvin
2021-05-15 15:27   ` Andy Lutomirski
2021-05-15  1:43 ` [PATCH v2 5/9] x86/desc: add native_[ig]dt_invalidate() to <asm/desc.h> H. Peter Anvin
2021-05-15 15:29   ` Andy Lutomirski
2021-05-15  1:43 ` [PATCH v2 6/9] x86/kexec: set_[gi]dt() -> native_[gi]dt_invalidate() in machine_kexec_*.c H. Peter Anvin
2021-05-15 15:30   ` Andy Lutomirski
2021-05-15  1:43 ` [PATCH v2 7/9] x86/irq: WARN_ONCE() if irq_move_cleanup is called on a pending interrupt H. Peter Anvin
2021-05-15  6:24   ` H. Peter Anvin
2021-05-15  9:59   ` [PATCH v2.1 7/8] " H. Peter Anvin
2021-05-15  1:44 ` [PATCH v2 8/9] x86/irq: merge and functionalize common code in DECLARE/DEFINE_IDTENTRY_* H. Peter Anvin
