* [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes
@ 2021-06-30  7:46 Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 1/9] powerpc/64s: fix hash page fault interrupt handler Nicholas Piggin
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

This is a bunch of fixes for powerpc next: mostly a nasty hole in the
fast interrupt exit code found by Sachin, plus some other bits picked up
along the way while looking at it.

Since v2:
- Fixed 64e patch 3 to really set exit_must_hard_disable.
- Reworded some changelogs.

Since v1:
- Fixed a bisection compile error due to a fix incorrectly going to
  a later patch.
- Fixed the "add a table of implicit soft-masked addresses" patch to
  include the low scv vector range as a soft-masked table entry. scv
  was the original reason for implicit soft masking, and the previous
  version breaks it. My stress testing was using an image without
  scv glibc unfortunately.
- Fixed a bug in the same patch that restored r12 from SRR1 rather
  than HSRR1 in the case of masked hypervisor interrupts after
  searching the soft-mask table. Again, unfortunately, my stress
  testing ran in a guest, so it saw no HV interrupts.

Thanks to Michael for noticing these issues.

- Pulled the hash page fault interrupt handler fix into this series
  to make the dependency clear (it is not strictly a dependency, but
  assertions introduced later make the existing bug crash more often).

Thanks,
Nick

Nicholas Piggin (9):
  powerpc/64s: fix hash page fault interrupt handler
  powerpc/64e: fix CONFIG_RELOCATABLE build warnings
  powerpc/64e: remove implicit soft-masking and interrupt exit restart
    logic
  powerpc/64s: add a table of implicit soft-masked addresses
  powerpc/64s/interrupt: preserve regs->softe for NMI interrupts
  powerpc/64: enable MSR[EE] in irq replay pt_regs
  powerpc/64/interrupt: add missing kprobe annotations on interrupt exit
    symbols
  powerpc/64s/interrupt: clean up interrupt return labels
  powerpc/64s: move ret_from_fork etc above __end_soft_masked

 arch/powerpc/include/asm/interrupt.h  | 41 +++++++++---
 arch/powerpc/include/asm/ppc_asm.h    |  7 +++
 arch/powerpc/kernel/exceptions-64e.S  | 23 +++----
 arch/powerpc/kernel/exceptions-64s.S  | 64 ++++++++++++++++---
 arch/powerpc/kernel/interrupt.c       |  2 +-
 arch/powerpc/kernel/interrupt_64.S    | 90 ++++++++++++++++++---------
 arch/powerpc/kernel/irq.c             |  1 +
 arch/powerpc/kernel/vmlinux.lds.S     |  9 +++
 arch/powerpc/lib/restart_table.c      | 26 ++++++++
 arch/powerpc/mm/book3s64/hash_utils.c | 24 ++++---
 10 files changed, 212 insertions(+), 75 deletions(-)

-- 
2.23.0



* [PATCH v3 1/9] powerpc/64s: fix hash page fault interrupt handler
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 2/9] powerpc/64e: fix CONFIG_RELOCATABLE build warnings Nicholas Piggin
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

The early bad fault or key fault test in do_hash_fault() ends up calling
into ___do_page_fault without having gone through an interrupt handler
wrapper (except the initial _RAW one). This can end up calling local irq
functions while the interrupt has not been reconciled, which is likely to
cause crashes, and it trips assertions added by a later patch.

pkey_exec_prot from selftests causes this path to be executed.

There is no real reason the in_nmi() test should be performed before
the key fault check. In fact, if a perf interrupt in the hash fault code
did a stack walk that somehow took a key fault, then running
___do_page_fault could possibly cause another hash fault, causing
problems. Move the in_nmi() test first, and then do everything else
inside the regular interrupt handler function.

Fixes: 3a96570ffceb ("powerpc: convert interrupt handlers to use wrappers")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
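[ A simplified sketch, for illustration only and not part of the patch,
  of the wrapper pattern relied on here. irq_state_reconcile() and
  irq_state_restore() are hypothetical names standing in for the entry
  and exit accounting done by the real DEFINE_INTERRUPT_HANDLER* macros
  in asm/interrupt.h:

	#define DEFINE_WRAPPED_HANDLER(fn)				\
	static void ____##fn(struct pt_regs *regs);			\
	void fn(struct pt_regs *regs)					\
	{								\
		irq_state_reconcile(regs);	/* hypothetical */	\
		____##fn(regs);						\
		irq_state_restore(regs);	/* hypothetical */	\
	}								\
	static void ____##fn(struct pt_regs *regs)	/* body follows */

  A _RAW handler is just the bare function with none of that accounting,
  which is why the early bad/key fault path must not call into
  ___do_page_fault() from do_hash_fault(), and now lives inside the
  wrapped __do_hash_fault() instead. ]
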
 arch/powerpc/mm/book3s64/hash_utils.c | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 96d9aa164007..ac5720371c0d 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1522,8 +1522,8 @@ int hash_page(unsigned long ea, unsigned long access, unsigned long trap,
 }
 EXPORT_SYMBOL_GPL(hash_page);
 
-DECLARE_INTERRUPT_HANDLER_RET(__do_hash_fault);
-DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
+DECLARE_INTERRUPT_HANDLER(__do_hash_fault);
+DEFINE_INTERRUPT_HANDLER(__do_hash_fault)
 {
 	unsigned long ea = regs->dar;
 	unsigned long dsisr = regs->dsisr;
@@ -1533,6 +1533,11 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
 	unsigned int region_id;
 	long err;
 
+	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT))) {
+		hash__do_page_fault(regs);
+		return;
+	}
+
 	region_id = get_region_id(ea);
 	if ((region_id == VMALLOC_REGION_ID) || (region_id == IO_REGION_ID))
 		mm = &init_mm;
@@ -1571,9 +1576,10 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
 			bad_page_fault(regs, SIGBUS);
 		}
 		err = 0;
-	}
 
-	return err;
+	} else if (err) {
+		hash__do_page_fault(regs);
+	}
 }
 
 /*
@@ -1582,13 +1588,6 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
  */
 DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
 {
-	unsigned long dsisr = regs->dsisr;
-
-	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT))) {
-		hash__do_page_fault(regs);
-		return 0;
-	}
-
 	/*
 	 * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
 	 * don't call hash_page, just fail the fault. This is required to
@@ -1607,8 +1606,7 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
 		return 0;
 	}
 
-	if (__do_hash_fault(regs))
-		hash__do_page_fault(regs);
+	__do_hash_fault(regs);
 
 	return 0;
 }
-- 
2.23.0



* [PATCH v3 2/9] powerpc/64e: fix CONFIG_RELOCATABLE build warnings
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 1/9] powerpc/64s: fix hash page fault interrupt handler Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic Nicholas Piggin
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

CONFIG_RELOCATABLE=y causes build warnings from unresolved relocations.
Fix these by using TOC addressing for these cases.

Commit 24d33ac5b8ff ("powerpc/64s: Make prom_init require RELOCATABLE")
caused some 64e configs to select RELOCATABLE, resulting in these
warnings, but the underlying issue was already there.

This passes basic qemu testing.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
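[ For illustration only, not part of the patch: the two addressing forms
  side by side, using a hypothetical symbol my_sym. LOAD_REG_IMMEDIATE_SYM
  materialises the symbol's link-time address as immediates in the
  instruction stream, which is what produces the unresolved-relocation
  warnings in a CONFIG_RELOCATABLE=y build, while the @got form loads the
  runtime address from the GOT through the kernel TOC kept in the paca:

	LOAD_REG_IMMEDIATE_SYM(r14, r11, my_sym)	/* absolute immediates */

	ld	r11,PACATOC(r13)			/* kernel TOC */
	ld	r14,my_sym@got(r11)			/* address via the GOT */
]
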
 arch/powerpc/kernel/exceptions-64e.S | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 22fcd95dd8dc..d634bfceed2c 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -912,8 +912,14 @@ kernel_dbg_exc:
 	b	interrupt_return
 
 .macro SEARCH_RESTART_TABLE
+#ifdef CONFIG_RELOCATABLE
+	ld	r11,PACATOC(r13)
+	ld	r14,__start___restart_table@got(r11)
+	ld	r15,__stop___restart_table@got(r11)
+#else
 	LOAD_REG_IMMEDIATE_SYM(r14, r11, __start___restart_table)
 	LOAD_REG_IMMEDIATE_SYM(r15, r11, __stop___restart_table)
+#endif
 300:
 	cmpd	r14,r15
 	beq	302f
@@ -1329,7 +1335,12 @@ a2_tlbinit_code_start:
 a2_tlbinit_after_linear_map:
 
 	/* Now we branch the new virtual address mapped by this entry */
+#ifdef CONFIG_RELOCATABLE
+	ld	r5,PACATOC(r13)
+	ld	r3,1f@got(r5)
+#else
 	LOAD_REG_IMMEDIATE_SYM(r3, r5, 1f)
+#endif
 	mtctr	r3
 	bctr
 
-- 
2.23.0



* [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 1/9] powerpc/64s: fix hash page fault interrupt handler Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 2/9] powerpc/64e: fix CONFIG_RELOCATABLE build warnings Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:56   ` Christophe Leroy
  2021-06-30  7:46 ` [PATCH v3 4/9] powerpc/64s: add a table of implicit soft-masked addresses Nicholas Piggin
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

The implicit soft-masking to speed up interrupt return was going to be
used by 64e as well, but it has not been extensively tested on that
platform and is not considered ready. It was intended to be disabled
before merge. Disable it for now.

Most of the restart code is common with 64s, so with more correctness
and performance testing this could be re-enabled again by adding the
extra soft-mask checks to interrupt handlers and flipping
exit_must_hard_disable().

Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h | 33 ++++++++++++++++++++--------
 arch/powerpc/kernel/exceptions-64e.S | 12 +---------
 arch/powerpc/kernel/interrupt.c      |  2 +-
 arch/powerpc/kernel/interrupt_64.S   | 16 ++++++++++++--
 4 files changed, 40 insertions(+), 23 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 8b4b1e84e110..f13c93b033c7 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -73,20 +73,34 @@
 #include <asm/kprobes.h>
 #include <asm/runlatch.h>
 
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S_64
 extern char __end_soft_masked[];
 unsigned long search_kernel_restart_table(unsigned long addr);
-#endif
 
-#ifdef CONFIG_PPC_BOOK3S_64
 DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
 
+static inline bool is_implicit_soft_masked(struct pt_regs *regs)
+{
+	if (regs->msr & MSR_PR)
+		return false;
+
+	if (regs->nip >= (unsigned long)__end_soft_masked)
+		return false;
+
+	return true;
+}
+
 static inline void srr_regs_clobbered(void)
 {
 	local_paca->srr_valid = 0;
 	local_paca->hsrr_valid = 0;
 }
 #else
+static inline bool is_implicit_soft_masked(struct pt_regs *regs)
+{
+	return false;
+}
+
 static inline void srr_regs_clobbered(void)
 {
 }
@@ -150,11 +164,13 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 		 */
 		if (TRAP(regs) != INTERRUPT_PROGRAM) {
 			CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
-			BUG_ON(regs->nip < (unsigned long)__end_soft_masked);
+			BUG_ON(is_implicit_soft_masked(regs));
 		}
+#ifdef CONFIG_PPC_BOOK3S
 		/* Move this under a debugging check */
 		if (arch_irq_disabled_regs(regs))
 			BUG_ON(search_kernel_restart_table(regs->nip));
+#endif
 	}
 #endif
 
@@ -244,10 +260,9 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 	local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
 
-	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !(regs->msr & MSR_PR) &&
-				regs->nip < (unsigned long)__end_soft_masked) {
-		// Kernel code running below __end_soft_masked is
-		// implicitly soft-masked.
+	if (is_implicit_soft_masked(regs)) {
+		// Adjust regs->softe soft implicit soft-mask, so
+		// arch_irq_disabled_regs(regs) behaves as expected.
 		regs->softe = IRQS_ALL_DISABLED;
 	}
 
@@ -282,6 +297,7 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
 	 */
 
 #ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_BOOK3S
 	if (arch_irq_disabled_regs(regs)) {
 		unsigned long rst = search_kernel_restart_table(regs->nip);
 		if (rst)
@@ -289,7 +305,6 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
 	}
 #endif
 
-#ifdef CONFIG_PPC64
 	if (nmi_disables_ftrace(regs))
 		this_cpu_set_ftrace_enabled(state->ftrace_enabled);
 
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index d634bfceed2c..1401787b0b93 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -342,17 +342,7 @@ ret_from_mc_except:
 #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
 	lbz	r10,PACAIRQSOFTMASK(r13);	/* are irqs soft-masked? */ \
 	andi.	r10,r10,IRQS_DISABLED;	/* yes -> go out of line */ \
-	bne	masked_interrupt_book3e_##n;				    \
-	/* Kernel code below __end_soft_masked is implicitly masked */	    \
-	andi.	r10,r11,MSR_PR;						    \
-	bne	1f;			/* user -> not masked */	    \
-	std	r14,PACA_EXGEN+EX_R14(r13);				    \
-	LOAD_REG_IMMEDIATE_SYM(r14, r10, __end_soft_masked);		    \
-	mfspr	r10,SPRN_SRR0;						    \
-	cmpld	r10,r14;						    \
-	ld	r14,PACA_EXGEN+EX_R14(r13);				    \
-	blt	masked_interrupt_book3e_##n;				    \
-1:
+	bne	masked_interrupt_book3e_##n
 
 /*
  * Additional regs must be re-loaded from paca before EXCEPTION_COMMON* is
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 0052702ee5ac..21bbd615ca41 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -36,7 +36,7 @@ static inline bool exit_must_hard_disable(void)
 #else
 static inline bool exit_must_hard_disable(void)
 {
-	return IS_ENABLED(CONFIG_PPC32);
+	return true;
 }
 #endif
 
diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index e7a50613a570..09b8d8846c67 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -231,7 +231,7 @@ _ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate)
 	li	r10,IRQS_ALL_DISABLED
 	stb	r10,PACAIRQSOFTMASK(r13)
 	b	system_call_vectored_common
-#endif
+#endif /* CONFIG_PPC_BOOK3S */
 
 	.balign IFETCH_ALIGN_BYTES
 	.globl system_call_common_real
@@ -320,10 +320,12 @@ END_BTB_FLUSH_SECTION
 	li	r5,0 /* !scv */
 	bl	syscall_exit_prepare
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
+#ifdef CONFIG_PPC_BOOK3S
 .Lsyscall_rst_start:
 	lbz	r11,PACAIRQHAPPENED(r13)
 	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
 	bne-	syscall_restart
+#endif
 	li	r11,IRQS_ENABLED
 	stb	r11,PACAIRQSOFTMASK(r13)
 	li	r11,0
@@ -396,6 +398,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 	b	.Lsyscall_restore_regs_cont
 .Lsyscall_rst_end:
 
+#ifdef CONFIG_PPC_BOOK3S
 syscall_restart:
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
@@ -409,6 +412,7 @@ syscall_restart:
 	b	.Lsyscall_rst_start
 
 RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
+#endif
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 tabort_syscall:
@@ -504,10 +508,12 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\())
 	bne-	.Lrestore_nvgprs_\srr
 .Lrestore_nvgprs_\srr\()_cont:
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
+#ifdef CONFIG_PPC_BOOK3S
 .Linterrupt_return_\srr\()_user_rst_start:
 	lbz	r11,PACAIRQHAPPENED(r13)
 	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
 	bne-	interrupt_return_\srr\()_user_restart
+#endif
 	li	r11,IRQS_ENABLED
 	stb	r11,PACAIRQSOFTMASK(r13)
 	li	r11,0
@@ -590,6 +596,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 	REST_NVGPRS(r1)
 	b	.Lrestore_nvgprs_\srr\()_cont
 
+#ifdef CONFIG_PPC_BOOK3S
 interrupt_return_\srr\()_user_restart:
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
@@ -602,6 +609,7 @@ interrupt_return_\srr\()_user_restart:
 	b	.Linterrupt_return_\srr\()_user_rst_start
 
 RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr\()_user_rst_end, interrupt_return_\srr\()_user_restart)
+#endif
 
 	.balign IFETCH_ALIGN_BYTES
 .Lkernel_interrupt_return_\srr\():
@@ -615,9 +623,11 @@ RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr
 	cmpwi	r11,IRQS_ENABLED
 	stb	r11,PACAIRQSOFTMASK(r13)
 	bne	1f
+#ifdef CONFIG_PPC_BOOK3S
 	lbz	r11,PACAIRQHAPPENED(r13)
 	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
 	bne-	interrupt_return_\srr\()_kernel_restart
+#endif
 	li	r11,0
 	stb	r11,PACAIRQHAPPENED(r13) # clear out possible HARD_DIS
 1:
@@ -717,6 +727,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 	b	.	/* prevent speculative execution */
 .Linterrupt_return_\srr\()_kernel_rst_end:
 
+#ifdef CONFIG_PPC_BOOK3S
 interrupt_return_\srr\()_kernel_restart:
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
@@ -729,14 +740,15 @@ interrupt_return_\srr\()_kernel_restart:
 	b	.Linterrupt_return_\srr\()_kernel_rst_start
 
 RESTART_TABLE(.Linterrupt_return_\srr\()_kernel_rst_start, .Linterrupt_return_\srr\()_kernel_rst_end, interrupt_return_\srr\()_kernel_restart)
+#endif
 
 .endm
 
 interrupt_return_macro srr
 #ifdef CONFIG_PPC_BOOK3S
 interrupt_return_macro hsrr
-#endif /* CONFIG_PPC_BOOK3S */
 
 	.globl __end_soft_masked
 __end_soft_masked:
 DEFINE_FIXED_SYMBOL(__end_soft_masked)
+#endif /* CONFIG_PPC_BOOK3S */
-- 
2.23.0



* [PATCH v3 4/9] powerpc/64s: add a table of implicit soft-masked addresses
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (2 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 5/9] powerpc/64s/interrupt: preserve regs->softe for NMI interrupts Nicholas Piggin
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

Commit 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs
soft-masked") ends up catching too much code, including ret_from_fork,
and parts of interrupt and syscall return that do not expect interrupts
to be soft-masked. If an interrupt gets marked pending and the code then
proceeds out of the implicit soft-masked region, it will fail to deal
with the pending interrupt.

Fix this by adding a new table of addresses which explicitly marks the
regions of code that are soft-masked. The table is only consulted for
interrupts that hit below __end_soft_masked, so most kernel interrupts
will not have the overhead of the table search.

Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
Reported-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Tested-by: Sachin Sant <sachinp@linux.vnet.ibm.com>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h |  3 +-
 arch/powerpc/include/asm/ppc_asm.h   |  7 +++
 arch/powerpc/kernel/exceptions-64s.S | 64 +++++++++++++++++++++++-----
 arch/powerpc/kernel/interrupt_64.S   |  8 ++++
 arch/powerpc/kernel/vmlinux.lds.S    |  9 ++++
 arch/powerpc/lib/restart_table.c     | 26 +++++++++++
 6 files changed, 106 insertions(+), 11 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index f13c93b033c7..d7df247a149c 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -75,6 +75,7 @@
 
 #ifdef CONFIG_PPC_BOOK3S_64
 extern char __end_soft_masked[];
+bool search_kernel_soft_mask_table(unsigned long addr);
 unsigned long search_kernel_restart_table(unsigned long addr);
 
 DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
@@ -87,7 +88,7 @@ static inline bool is_implicit_soft_masked(struct pt_regs *regs)
 	if (regs->nip >= (unsigned long)__end_soft_masked)
 		return false;
 
-	return true;
+	return search_kernel_soft_mask_table(regs->nip);
 }
 
 static inline void srr_regs_clobbered(void)
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index c9c2c36c1f8f..116c1519728a 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -762,6 +762,13 @@ END_FTR_SECTION_NESTED(CPU_FTR_CELL_TB_BUG, CPU_FTR_CELL_TB_BUG, 96)
 	stringify_in_c(.long (_target) - . ;)	\
 	stringify_in_c(.previous)
 
+#define SOFT_MASK_TABLE(_start, _end)		\
+	stringify_in_c(.section __soft_mask_table,"a";)\
+	stringify_in_c(.balign 8;)		\
+	stringify_in_c(.llong (_start);)	\
+	stringify_in_c(.llong (_end);)		\
+	stringify_in_c(.previous)
+
 #define RESTART_TABLE(_start, _end, _target)	\
 	stringify_in_c(.section __restart_table,"a";)\
 	stringify_in_c(.balign 8;)		\
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index ecd07bf604c5..4aec59a77d4c 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -428,21 +428,31 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
 
 		/* If coming from user, skip soft-mask tests. */
 		andi.	r10,r12,MSR_PR
-		bne	2f
+		bne	3f
 
 		/*
-		 * Kernel code running below __end_soft_masked is implicitly
-		 * soft-masked
+		 * Kernel code running below __end_soft_masked may be
+		 * implicitly soft-masked if it is within the regions
+		 * in the soft mask table.
 		 */
 		LOAD_HANDLER(r10, __end_soft_masked)
 		cmpld	r11,r10
-
+		bge+	1f
+
+		/* SEARCH_SOFT_MASK_TABLE clobbers r9,r10,r12 */
+		mtctr	r12
+		stw	r9,PACA_EXGEN+EX_CCR(r13)
+		SEARCH_SOFT_MASK_TABLE
+		cmpdi	r12,0
+		mfctr	r12		/* Restore r12 to SRR1 */
+		lwz	r9,PACA_EXGEN+EX_CCR(r13)
+		beq	1f		/* Not in soft-mask table */
 		li	r10,IMASK
-		blt-	1f
+		b	2f		/* In soft-mask table, always mask */
 
 		/* Test the soft mask state against our interrupt's bit */
-		lbz	r10,PACAIRQSOFTMASK(r13)
-1:		andi.	r10,r10,IMASK
+1:		lbz	r10,PACAIRQSOFTMASK(r13)
+2:		andi.	r10,r10,IMASK
 		/* Associate vector numbers with bits in paca->irq_happened */
 		.if IVEC == 0x500 || IVEC == 0xea0
 		li	r10,PACA_IRQ_EE
@@ -473,7 +483,7 @@ DEFINE_FIXED_SYMBOL(\name\()_common_real)
 
 	.if ISTACK
 	andi.	r10,r12,MSR_PR		/* See if coming from user	*/
-2:	mr	r10,r1			/* Save r1			*/
+3:	mr	r10,r1			/* Save r1			*/
 	subi	r1,r1,INT_FRAME_SIZE	/* alloc frame on kernel stack	*/
 	beq-	100f
 	ld	r1,PACAKSAVE(r13)	/* kernel stack to use		*/
@@ -624,6 +634,36 @@ END_FTR_SECTION_IFSET(CPU_FTR_CFAR)
 303:
 .endm
 
+.macro SEARCH_SOFT_MASK_TABLE
+#ifdef CONFIG_RELOCATABLE
+	mr	r12,r2
+	ld	r2,PACATOC(r13)
+	LOAD_REG_ADDR(r9, __start___soft_mask_table)
+	LOAD_REG_ADDR(r10, __stop___soft_mask_table)
+	mr	r2,r12
+#else
+	LOAD_REG_IMMEDIATE_SYM(r9, r12, __start___soft_mask_table)
+	LOAD_REG_IMMEDIATE_SYM(r10, r12, __stop___soft_mask_table)
+#endif
+300:
+	cmpd	r9,r10
+	beq	302f
+	ld	r12,0(r9)
+	cmpld	r11,r12
+	blt	301f
+	ld	r12,8(r9)
+	cmpld	r11,r12
+	bge	301f
+	li	r12,1
+	b	303f
+301:
+	addi	r9,r9,16
+	b	300b
+302:
+	li	r12,0
+303:
+.endm
+
 /*
  * Restore all registers including H/SRR0/1 saved in a stack frame of a
  * standard exception.
@@ -754,8 +794,8 @@ __start_interrupts:
  * scv instructions enter the kernel without changing EE, RI, ME, or HV.
  * In particular, this means we can take a maskable interrupt at any point
  * in the scv handler, which is unlike any other interrupt. This is solved
- * by treating the instruction addresses below __end_soft_masked as being
- * soft-masked.
+ * by treating the instruction addresses in the handler as being soft-masked,
+ * by adding a SOFT_MASK_TABLE entry for them.
  *
  * AIL-0 mode scv exceptions go to 0x17000-0x17fff, but we set AIL-3 and
  * ensure scv is never executed with relocation off, which means AIL-0
@@ -772,6 +812,7 @@ __start_interrupts:
  * syscall register convention is in Documentation/powerpc/syscall64-abi.rst
  */
 EXC_VIRT_BEGIN(system_call_vectored, 0x3000, 0x1000)
+1:
 	/* SCV 0 */
 	mr	r9,r13
 	GET_PACA(r13)
@@ -801,8 +842,11 @@ EXC_VIRT_BEGIN(system_call_vectored, 0x3000, 0x1000)
 	b	system_call_vectored_sigill
 #endif
 	.endr
+2:
 EXC_VIRT_END(system_call_vectored, 0x3000, 0x1000)
 
+SOFT_MASK_TABLE(1b, 2b) // Treat scv vectors as soft-masked, see comment above.
+
 #ifdef CONFIG_RELOCATABLE
 TRAMP_VIRT_BEGIN(system_call_vectored_tramp)
 	__LOAD_HANDLER(r10, system_call_vectored_common)
diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index 09b8d8846c67..2c92bbca02ca 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -207,7 +207,9 @@ syscall_vectored_\name\()_restart:
 	bl	syscall_exit_restart
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
 	b	.Lsyscall_vectored_\name\()_rst_start
+1:
 
+SOFT_MASK_TABLE(.Lsyscall_vectored_\name\()_rst_start, 1b)
 RESTART_TABLE(.Lsyscall_vectored_\name\()_rst_start, .Lsyscall_vectored_\name\()_rst_end, syscall_vectored_\name\()_restart)
 
 .endm
@@ -410,7 +412,9 @@ syscall_restart:
 	bl	syscall_exit_restart
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
 	b	.Lsyscall_rst_start
+1:
 
+SOFT_MASK_TABLE(.Lsyscall_rst_start, 1b)
 RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
 #endif
 
@@ -607,7 +611,9 @@ interrupt_return_\srr\()_user_restart:
 	bl	interrupt_exit_user_restart
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
 	b	.Linterrupt_return_\srr\()_user_rst_start
+1:
 
+SOFT_MASK_TABLE(.Linterrupt_return_\srr\()_user_rst_start, 1b)
 RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr\()_user_rst_end, interrupt_return_\srr\()_user_restart)
 #endif
 
@@ -738,7 +744,9 @@ interrupt_return_\srr\()_kernel_restart:
 	bl	interrupt_exit_kernel_restart
 	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
 	b	.Linterrupt_return_\srr\()_kernel_rst_start
+1:
 
+SOFT_MASK_TABLE(.Linterrupt_return_\srr\()_kernel_rst_start, 1b)
 RESTART_TABLE(.Linterrupt_return_\srr\()_kernel_rst_start, .Linterrupt_return_\srr\()_kernel_rst_end, interrupt_return_\srr\()_kernel_restart)
 #endif
 
diff --git a/arch/powerpc/kernel/vmlinux.lds.S b/arch/powerpc/kernel/vmlinux.lds.S
index 16c5e13e00c4..40bdefe9caa7 100644
--- a/arch/powerpc/kernel/vmlinux.lds.S
+++ b/arch/powerpc/kernel/vmlinux.lds.S
@@ -9,6 +9,14 @@
 #define EMITS_PT_NOTE
 #define RO_EXCEPTION_TABLE_ALIGN	0
 
+#define SOFT_MASK_TABLE(align)						\
+	. = ALIGN(align);						\
+	__soft_mask_table : AT(ADDR(__soft_mask_table) - LOAD_OFFSET) {	\
+		__start___soft_mask_table = .;				\
+		KEEP(*(__soft_mask_table))				\
+		__stop___soft_mask_table = .;				\
+	}
+
 #define RESTART_TABLE(align)						\
 	. = ALIGN(align);						\
 	__restart_table : AT(ADDR(__restart_table) - LOAD_OFFSET) {	\
@@ -132,6 +140,7 @@ SECTIONS
 	RO_DATA(PAGE_SIZE)
 
 #ifdef CONFIG_PPC64
+	SOFT_MASK_TABLE(8)
 	RESTART_TABLE(8)
 
 	. = ALIGN(8);
diff --git a/arch/powerpc/lib/restart_table.c b/arch/powerpc/lib/restart_table.c
index 7cd20757cc33..bccb662c1b7b 100644
--- a/arch/powerpc/lib/restart_table.c
+++ b/arch/powerpc/lib/restart_table.c
@@ -1,15 +1,41 @@
 #include <asm/interrupt.h>
 #include <asm/kprobes.h>
 
+struct soft_mask_table_entry {
+	unsigned long start;
+	unsigned long end;
+};
+
 struct restart_table_entry {
 	unsigned long start;
 	unsigned long end;
 	unsigned long fixup;
 };
 
+extern struct soft_mask_table_entry __start___soft_mask_table[];
+extern struct soft_mask_table_entry __stop___soft_mask_table[];
+
 extern struct restart_table_entry __start___restart_table[];
 extern struct restart_table_entry __stop___restart_table[];
 
+/* Given an address, look for it in the soft mask table */
+bool search_kernel_soft_mask_table(unsigned long addr)
+{
+	struct soft_mask_table_entry *smte = __start___soft_mask_table;
+
+	while (smte < __stop___soft_mask_table) {
+		unsigned long start = smte->start;
+		unsigned long end = smte->end;
+
+		if (addr >= start && addr < end)
+			return true;
+
+		smte++;
+	}
+	return false;
+}
+NOKPROBE_SYMBOL(search_kernel_soft_mask_table);
+
 /* Given an address, look for it in the kernel exception table */
 unsigned long search_kernel_restart_table(unsigned long addr)
 {
-- 
2.23.0



* [PATCH v3 5/9] powerpc/64s/interrupt: preserve regs->softe for NMI interrupts
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (3 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 4/9] powerpc/64s: add a table of implicit soft-masked addresses Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 6/9] powerpc/64: enable MSR[EE] in irq replay pt_regs Nicholas Piggin
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

If an NMI interrupt hits in an implicit soft-masked region, regs->softe
is modified to reflect that. Restoring the original value may not be
necessary for correctness at the moment, but it is less surprising, and
leaving regs->softe modified is unhelpful when debugging or adding checks.

Make sure this is changed back to how it was found before returning.

Fixes: 4ec5feec1ad0 ("powerpc/64s: Make NMI record implicitly soft-masked code as irqs disabled")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index d7df247a149c..789311d1e283 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -227,6 +227,7 @@ struct interrupt_nmi_state {
 	u8 irq_soft_mask;
 	u8 irq_happened;
 	u8 ftrace_enabled;
+	u64 softe;
 #endif
 };
 
@@ -252,6 +253,7 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 #ifdef CONFIG_PPC64
 	state->irq_soft_mask = local_paca->irq_soft_mask;
 	state->irq_happened = local_paca->irq_happened;
+	state->softe = regs->softe;
 
 	/*
 	 * Set IRQS_ALL_DISABLED unconditionally so irqs_disabled() does
@@ -311,6 +313,7 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
 
 	/* Check we didn't change the pending interrupt mask. */
 	WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
+	regs->softe = state->softe;
 	local_paca->irq_happened = state->irq_happened;
 	local_paca->irq_soft_mask = state->irq_soft_mask;
 #endif
-- 
2.23.0



* [PATCH v3 6/9] powerpc/64: enable MSR[EE] in irq replay pt_regs
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (4 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 5/9] powerpc/64s/interrupt: preserve regs->softe for NMI interrupts Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 7/9] powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols Nicholas Piggin
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

Similar to commit 2b48e96be2f9f ("powerpc/64: fix irq replay
pt_regs->softe value"), enable MSR_EE in pt_regs->msr. This makes the
regs look more normal. It also allows some extra debug checks to be
added to interrupt handler entry.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h | 4 ++++
 arch/powerpc/kernel/irq.c            | 1 +
 2 files changed, 5 insertions(+)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 789311d1e283..d4bdf7d274ac 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -173,6 +173,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 			BUG_ON(search_kernel_restart_table(regs->nip));
 #endif
 	}
+	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+		BUG_ON(!arch_irq_disabled_regs(regs) && !(regs->msr & MSR_EE));
 #endif
 
 	booke_restore_dbcr0();
@@ -268,6 +270,8 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 		// arch_irq_disabled_regs(regs) behaves as expected.
 		regs->softe = IRQS_ALL_DISABLED;
 	}
+	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
+		BUG_ON(!arch_irq_disabled_regs(regs) && !(regs->msr & MSR_EE));
 
 	/* Don't do any per-CPU operations until interrupt state is fixed */
 
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 8428caf3194e..91e63eac4e8f 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -121,6 +121,7 @@ void replay_soft_interrupts(void)
 
 	ppc_save_regs(&regs);
 	regs.softe = IRQS_ENABLED;
+	regs.msr |= MSR_EE;
 
 again:
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
-- 
2.23.0



* [PATCH v3 7/9] powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (5 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 6/9] powerpc/64: enable MSR[EE] in irq replay pt_regs Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 8/9] powerpc/64s/interrupt: clean up interrupt return labels Nicholas Piggin
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

If one interrupt exit symbol must not be kprobed, none of them can be,
without more justification for why it's safe. Disallow kprobing on any
of the (non-local) labels in the exit paths.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/interrupt_64.S | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index 2c92bbca02ca..5c18362693fe 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -197,6 +197,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 .Lsyscall_vectored_\name\()_rst_end:
 
 syscall_vectored_\name\()_restart:
+_ASM_NOKPROBE_SYMBOL(syscall_vectored_\name\()_restart)
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
 	ld	r2,PACATOC(r13)
@@ -238,6 +239,7 @@ _ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate)
 	.balign IFETCH_ALIGN_BYTES
 	.globl system_call_common_real
 system_call_common_real:
+_ASM_NOKPROBE_SYMBOL(system_call_common_real)
 	ld	r10,PACAKMSR(r13)	/* get MSR value for kernel */
 	mtmsrd	r10
 
@@ -402,6 +404,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 
 #ifdef CONFIG_PPC_BOOK3S
 syscall_restart:
+_ASM_NOKPROBE_SYMBOL(syscall_restart)
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
 	ld	r2,PACATOC(r13)
@@ -420,6 +423,7 @@ RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
 
 #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
 tabort_syscall:
+_ASM_NOKPROBE_SYMBOL(tabort_syscall)
 	/* Firstly we need to enable TM in the kernel */
 	mfmsr	r10
 	li	r9, 1
@@ -602,6 +606,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 
 #ifdef CONFIG_PPC_BOOK3S
 interrupt_return_\srr\()_user_restart:
+_ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_user_restart)
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
 	ld	r2,PACATOC(r13)
@@ -735,6 +740,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 
 #ifdef CONFIG_PPC_BOOK3S
 interrupt_return_\srr\()_kernel_restart:
+_ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel_restart)
 	GET_PACA(r13)
 	ld	r1,PACA_EXIT_SAVE_R1(r13)
 	ld	r2,PACATOC(r13)
-- 
2.23.0



* [PATCH v3 8/9] powerpc/64s/interrupt: clean up interrupt return labels
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (6 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 7/9] powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30  7:46 ` [PATCH v3 9/9] powerpc/64s: move ret_from_fork etc above __end_soft_masked Nicholas Piggin
  2021-06-30 13:14 ` [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Michael Ellerman
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

Normal kernel-interrupt exits can get interrupt_return_srr_user_restart
in their backtrace, which is an unusual and notable function, and it is
part of the user-interrupt exit path, which is doubly confusing.

Add non-local labels for both user and kernel interrupt exit cases to
address this and make the user and kernel cases more symmetric. Also get
rid of an unused label.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/interrupt_64.S | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index 5c18362693fe..c4336e2e2ce8 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -509,7 +509,9 @@ interrupt_return_\srr\():
 _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\())
 	ld	r4,_MSR(r1)
 	andi.	r0,r4,MSR_PR
-	beq	.Lkernel_interrupt_return_\srr
+	beq	interrupt_return_\srr\()_kernel
+interrupt_return_\srr\()_user: /* make backtraces match the _kernel variant */
+_ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_user)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	interrupt_exit_user_prepare
 	cmpdi	r3,0
@@ -623,8 +625,8 @@ RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr
 #endif
 
 	.balign IFETCH_ALIGN_BYTES
-.Lkernel_interrupt_return_\srr\():
-.Linterrupt_return_\srr\()_kernel:
+interrupt_return_\srr\()_kernel:
+_ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\()_kernel)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	interrupt_exit_kernel_prepare
 
-- 
2.23.0



* [PATCH v3 9/9] powerpc/64s: move ret_from_fork etc above __end_soft_masked
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (7 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 8/9] powerpc/64s/interrupt: clean up interrupt return labels Nicholas Piggin
@ 2021-06-30  7:46 ` Nicholas Piggin
  2021-06-30 13:14 ` [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Michael Ellerman
  9 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-06-30  7:46 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Sachin Sant, Nicholas Piggin

Code which runs with interrupts enabled should be moved above
__end_soft_masked where possible, because maskable interrupts that hit
below that symbol will need to consult the soft mask table, which is an
extra cost.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/interrupt_64.S | 52 +++++++++++++++---------------
 1 file changed, 26 insertions(+), 26 deletions(-)

diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
index c4336e2e2ce8..4063e8a3f704 100644
--- a/arch/powerpc/kernel/interrupt_64.S
+++ b/arch/powerpc/kernel/interrupt_64.S
@@ -449,32 +449,6 @@ _ASM_NOKPROBE_SYMBOL(tabort_syscall)
 	b	.	/* prevent speculative execution */
 #endif
 
-#ifdef CONFIG_PPC_BOOK3S
-_GLOBAL(ret_from_fork_scv)
-	bl	schedule_tail
-	REST_NVGPRS(r1)
-	li	r3,0	/* fork() return value */
-	b	.Lsyscall_vectored_common_exit
-#endif
-
-_GLOBAL(ret_from_fork)
-	bl	schedule_tail
-	REST_NVGPRS(r1)
-	li	r3,0	/* fork() return value */
-	b	.Lsyscall_exit
-
-_GLOBAL(ret_from_kernel_thread)
-	bl	schedule_tail
-	REST_NVGPRS(r1)
-	mtctr	r14
-	mr	r3,r15
-#ifdef PPC64_ELF_ABI_v2
-	mr	r12,r14
-#endif
-	bctrl
-	li	r3,0
-	b	.Lsyscall_exit
-
 	/*
 	 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
 	 * touched, no exit work created, then this can be used.
@@ -768,3 +742,29 @@ interrupt_return_macro hsrr
 __end_soft_masked:
 DEFINE_FIXED_SYMBOL(__end_soft_masked)
 #endif /* CONFIG_PPC_BOOK3S */
+
+#ifdef CONFIG_PPC_BOOK3S
+_GLOBAL(ret_from_fork_scv)
+	bl	schedule_tail
+	REST_NVGPRS(r1)
+	li	r3,0	/* fork() return value */
+	b	.Lsyscall_vectored_common_exit
+#endif
+
+_GLOBAL(ret_from_fork)
+	bl	schedule_tail
+	REST_NVGPRS(r1)
+	li	r3,0	/* fork() return value */
+	b	.Lsyscall_exit
+
+_GLOBAL(ret_from_kernel_thread)
+	bl	schedule_tail
+	REST_NVGPRS(r1)
+	mtctr	r14
+	mr	r3,r15
+#ifdef PPC64_ELF_ABI_v2
+	mr	r12,r14
+#endif
+	bctrl
+	li	r3,0
+	b	.Lsyscall_exit
-- 
2.23.0



* Re: [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
  2021-06-30  7:46 ` [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic Nicholas Piggin
@ 2021-06-30  7:56   ` Christophe Leroy
  2021-07-01  1:26     ` Nicholas Piggin
  0 siblings, 1 reply; 13+ messages in thread
From: Christophe Leroy @ 2021-06-30  7:56 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Sachin Sant



On 30/06/2021 at 09:46, Nicholas Piggin wrote:
> The implicit soft-masking to speed up interrupt return was going to be
> used by 64e as well, but it has not been extensively tested on that
> platform and is not considered ready. It was intended to be disabled
> before merge. Disable it for now.
> 
> Most of the restart code is common with 64s, so with more correctness
> and performance testing this could be re-enabled again by adding the
> extra soft-mask checks to interrupt handlers and flipping
> exit_must_hard_disable().
> 
> Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/interrupt.h | 33 ++++++++++++++++++++--------
>   arch/powerpc/kernel/exceptions-64e.S | 12 +---------
>   arch/powerpc/kernel/interrupt.c      |  2 +-
>   arch/powerpc/kernel/interrupt_64.S   | 16 ++++++++++++--
>   4 files changed, 40 insertions(+), 23 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
> index 8b4b1e84e110..f13c93b033c7 100644
> --- a/arch/powerpc/include/asm/interrupt.h
> +++ b/arch/powerpc/include/asm/interrupt.h
> @@ -73,20 +73,34 @@
>   #include <asm/kprobes.h>
>   #include <asm/runlatch.h>
>   
> -#ifdef CONFIG_PPC64
> +#ifdef CONFIG_PPC_BOOK3S_64

Can we avoid that ifdef and use IS_ENABLED(CONFIG_PPC_BOOK3S_64) below ?

>   extern char __end_soft_masked[];
>   unsigned long search_kernel_restart_table(unsigned long addr);
> -#endif
>   
> -#ifdef CONFIG_PPC_BOOK3S_64
>   DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
>   
> +static inline bool is_implicit_soft_masked(struct pt_regs *regs)
> +{
> +	if (regs->msr & MSR_PR)
> +		return false;
> +
> +	if (regs->nip >= (unsigned long)__end_soft_masked)
> +		return false;
> +
> +	return true;
> +}
> +
>   static inline void srr_regs_clobbered(void)
>   {
>   	local_paca->srr_valid = 0;
>   	local_paca->hsrr_valid = 0;
>   }
>   #else
> +static inline bool is_implicit_soft_masked(struct pt_regs *regs)
> +{
> +	return false;
> +}
> +
>   static inline void srr_regs_clobbered(void)
>   {
>   }
> @@ -150,11 +164,13 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
>   		 */
>   		if (TRAP(regs) != INTERRUPT_PROGRAM) {
>   			CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
> -			BUG_ON(regs->nip < (unsigned long)__end_soft_masked);
> +			BUG_ON(is_implicit_soft_masked(regs));
>   		}
> +#ifdef CONFIG_PPC_BOOK3S

Although we are already in a PPC64 section, wouldn't it be better to use CONFIG_PPC_BOOK3S_64 ?

Can we use IS_ENABLED(CONFIG_PPC_BOOK3S_64) instead ?

>   		/* Move this under a debugging check */
>   		if (arch_irq_disabled_regs(regs))
>   			BUG_ON(search_kernel_restart_table(regs->nip));
> +#endif
>   	}
>   #endif
>   
> @@ -244,10 +260,9 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
>   	local_paca->irq_soft_mask = IRQS_ALL_DISABLED;
>   	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
>   
> -	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && !(regs->msr & MSR_PR) &&
> -				regs->nip < (unsigned long)__end_soft_masked) {
> -		// Kernel code running below __end_soft_masked is
> -		// implicitly soft-masked.
> +	if (is_implicit_soft_masked(regs)) {
> +		// Adjust regs->softe soft implicit soft-mask, so
> +		// arch_irq_disabled_regs(regs) behaves as expected.
>   		regs->softe = IRQS_ALL_DISABLED;
>   	}
>   
> @@ -282,6 +297,7 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
>   	 */
>   
>   #ifdef CONFIG_PPC64
> +#ifdef CONFIG_PPC_BOOK3S

IS_ENABLED(CONFIG_PPC_BOOK3S_64) instead ?

>   	if (arch_irq_disabled_regs(regs)) {
>   		unsigned long rst = search_kernel_restart_table(regs->nip);
>   		if (rst)
> @@ -289,7 +305,6 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
>   	}
>   #endif
>   
> -#ifdef CONFIG_PPC64
>   	if (nmi_disables_ftrace(regs))
>   		this_cpu_set_ftrace_enabled(state->ftrace_enabled);
>   
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index d634bfceed2c..1401787b0b93 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -342,17 +342,7 @@ ret_from_mc_except:
>   #define PROLOG_ADDITION_MASKABLE_GEN(n)					    \
>   	lbz	r10,PACAIRQSOFTMASK(r13);	/* are irqs soft-masked? */ \
>   	andi.	r10,r10,IRQS_DISABLED;	/* yes -> go out of line */ \
> -	bne	masked_interrupt_book3e_##n;				    \
> -	/* Kernel code below __end_soft_masked is implicitly masked */	    \
> -	andi.	r10,r11,MSR_PR;						    \
> -	bne	1f;			/* user -> not masked */	    \
> -	std	r14,PACA_EXGEN+EX_R14(r13);				    \
> -	LOAD_REG_IMMEDIATE_SYM(r14, r10, __end_soft_masked);		    \
> -	mfspr	r10,SPRN_SRR0;						    \
> -	cmpld	r10,r14;						    \
> -	ld	r14,PACA_EXGEN+EX_R14(r13);				    \
> -	blt	masked_interrupt_book3e_##n;				    \
> -1:
> +	bne	masked_interrupt_book3e_##n
>   
>   /*
>    * Additional regs must be re-loaded from paca before EXCEPTION_COMMON* is
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index 0052702ee5ac..21bbd615ca41 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -36,7 +36,7 @@ static inline bool exit_must_hard_disable(void)
>   #else
>   static inline bool exit_must_hard_disable(void)
>   {
> -	return IS_ENABLED(CONFIG_PPC32);
> +	return true;
>   }
>   #endif
>   
> diff --git a/arch/powerpc/kernel/interrupt_64.S b/arch/powerpc/kernel/interrupt_64.S
> index e7a50613a570..09b8d8846c67 100644
> --- a/arch/powerpc/kernel/interrupt_64.S
> +++ b/arch/powerpc/kernel/interrupt_64.S
> @@ -231,7 +231,7 @@ _ASM_NOKPROBE_SYMBOL(system_call_vectored_emulate)
>   	li	r10,IRQS_ALL_DISABLED
>   	stb	r10,PACAIRQSOFTMASK(r13)
>   	b	system_call_vectored_common
> -#endif
> +#endif /* CONFIG_PPC_BOOK3S */
>   
>   	.balign IFETCH_ALIGN_BYTES
>   	.globl system_call_common_real
> @@ -320,10 +320,12 @@ END_BTB_FLUSH_SECTION
>   	li	r5,0 /* !scv */
>   	bl	syscall_exit_prepare
>   	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
> +#ifdef CONFIG_PPC_BOOK3S
>   .Lsyscall_rst_start:
>   	lbz	r11,PACAIRQHAPPENED(r13)
>   	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
>   	bne-	syscall_restart
> +#endif
>   	li	r11,IRQS_ENABLED
>   	stb	r11,PACAIRQSOFTMASK(r13)
>   	li	r11,0
> @@ -396,6 +398,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
>   	b	.Lsyscall_restore_regs_cont
>   .Lsyscall_rst_end:
>   
> +#ifdef CONFIG_PPC_BOOK3S
>   syscall_restart:
>   	GET_PACA(r13)
>   	ld	r1,PACA_EXIT_SAVE_R1(r13)
> @@ -409,6 +412,7 @@ syscall_restart:
>   	b	.Lsyscall_rst_start
>   
>   RESTART_TABLE(.Lsyscall_rst_start, .Lsyscall_rst_end, syscall_restart)
> +#endif
>   
>   #ifdef CONFIG_PPC_TRANSACTIONAL_MEM
>   tabort_syscall:
> @@ -504,10 +508,12 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return_\srr\())
>   	bne-	.Lrestore_nvgprs_\srr
>   .Lrestore_nvgprs_\srr\()_cont:
>   	std	r1,PACA_EXIT_SAVE_R1(r13) /* save r1 for restart */
> +#ifdef CONFIG_PPC_BOOK3S
>   .Linterrupt_return_\srr\()_user_rst_start:
>   	lbz	r11,PACAIRQHAPPENED(r13)
>   	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
>   	bne-	interrupt_return_\srr\()_user_restart
> +#endif
>   	li	r11,IRQS_ENABLED
>   	stb	r11,PACAIRQSOFTMASK(r13)
>   	li	r11,0
> @@ -590,6 +596,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
>   	REST_NVGPRS(r1)
>   	b	.Lrestore_nvgprs_\srr\()_cont
>   
> +#ifdef CONFIG_PPC_BOOK3S
>   interrupt_return_\srr\()_user_restart:
>   	GET_PACA(r13)
>   	ld	r1,PACA_EXIT_SAVE_R1(r13)
> @@ -602,6 +609,7 @@ interrupt_return_\srr\()_user_restart:
>   	b	.Linterrupt_return_\srr\()_user_rst_start
>   
>   RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr\()_user_rst_end, interrupt_return_\srr\()_user_restart)
> +#endif
>   
>   	.balign IFETCH_ALIGN_BYTES
>   .Lkernel_interrupt_return_\srr\():
> @@ -615,9 +623,11 @@ RESTART_TABLE(.Linterrupt_return_\srr\()_user_rst_start, .Linterrupt_return_\srr
>   	cmpwi	r11,IRQS_ENABLED
>   	stb	r11,PACAIRQSOFTMASK(r13)
>   	bne	1f
> +#ifdef CONFIG_PPC_BOOK3S
>   	lbz	r11,PACAIRQHAPPENED(r13)
>   	andi.	r11,r11,(~PACA_IRQ_HARD_DIS)@l
>   	bne-	interrupt_return_\srr\()_kernel_restart
> +#endif
>   	li	r11,0
>   	stb	r11,PACAIRQHAPPENED(r13) # clear out possible HARD_DIS
>   1:
> @@ -717,6 +727,7 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
>   	b	.	/* prevent speculative execution */
>   .Linterrupt_return_\srr\()_kernel_rst_end:
>   
> +#ifdef CONFIG_PPC_BOOK3S
>   interrupt_return_\srr\()_kernel_restart:
>   	GET_PACA(r13)
>   	ld	r1,PACA_EXIT_SAVE_R1(r13)
> @@ -729,14 +740,15 @@ interrupt_return_\srr\()_kernel_restart:
>   	b	.Linterrupt_return_\srr\()_kernel_rst_start
>   
>   RESTART_TABLE(.Linterrupt_return_\srr\()_kernel_rst_start, .Linterrupt_return_\srr\()_kernel_rst_end, interrupt_return_\srr\()_kernel_restart)
> +#endif
>   
>   .endm
>   
>   interrupt_return_macro srr
>   #ifdef CONFIG_PPC_BOOK3S
>   interrupt_return_macro hsrr
> -#endif /* CONFIG_PPC_BOOK3S */
>   
>   	.globl __end_soft_masked
>   __end_soft_masked:
>   DEFINE_FIXED_SYMBOL(__end_soft_masked)
> +#endif /* CONFIG_PPC_BOOK3S */
> 


* Re: [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes
  2021-06-30  7:46 [PATCH v3 0/9] powerpc: fast interrupt exit bug and misc fixes Nicholas Piggin
                   ` (8 preceding siblings ...)
  2021-06-30  7:46 ` [PATCH v3 9/9] powerpc/64s: move ret_from_fork etc above __end_soft_masked Nicholas Piggin
@ 2021-06-30 13:14 ` Michael Ellerman
  9 siblings, 0 replies; 13+ messages in thread
From: Michael Ellerman @ 2021-06-30 13:14 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Sachin Sant

On Wed, 30 Jun 2021 17:46:12 +1000, Nicholas Piggin wrote:
> This is a bunch of fixes for powerpc next: mostly a nasty hole in the
> fast interrupt exit code found by Sachin, plus some other bits picked up
> along the way while looking at it.
> 
> Since v2:
> - Fixed 64e patch 3 to really set exit_must_hard_disable.
> - Reworded some changelogs.
> 
> [...]

Applied to powerpc/next.

[1/9] powerpc/64s: fix hash page fault interrupt handler
      https://git.kernel.org/powerpc/c/5567b1ee29b7a83e8c01d99d34b5bbd306ce0bcf
[2/9] powerpc/64e: fix CONFIG_RELOCATABLE build warnings
      https://git.kernel.org/powerpc/c/fce01acf830a697110ed72ecace4b0afdbcd53cb
[3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
      https://git.kernel.org/powerpc/c/9b69d48c7516a29cdaacd18d8bf5f575014a42a1
[4/9] powerpc/64s: add a table of implicit soft-masked addresses
      https://git.kernel.org/powerpc/c/325678fd052259e7c05ef29060a73c705ea90432
[5/9] powerpc/64s/interrupt: preserve regs->softe for NMI interrupts
      https://git.kernel.org/powerpc/c/1b0482229c302a3c6afd00d6b3bf0169cf279b44
[6/9] powerpc/64: enable MSR[EE] in irq replay pt_regs
      https://git.kernel.org/powerpc/c/2b43dd7653cca47d297756980846ebbfe8887fa1
[7/9] powerpc/64/interrupt: add missing kprobe annotations on interrupt exit symbols
      https://git.kernel.org/powerpc/c/98798f33c6be5a511ab61958b40835b3ef08def2
[8/9] powerpc/64s/interrupt: clean up interrupt return labels
      https://git.kernel.org/powerpc/c/c59458b00aec4ba580d9628d36d6c984af94d192
[9/9] powerpc/64s: move ret_from_fork etc above __end_soft_masked
      https://git.kernel.org/powerpc/c/91fc46eced0f70526d74468ac6c932c90a8585b3

cheers


* Re: [PATCH v3 3/9] powerpc/64e: remove implicit soft-masking and interrupt exit restart logic
  2021-06-30  7:56   ` Christophe Leroy
@ 2021-07-01  1:26     ` Nicholas Piggin
  0 siblings, 0 replies; 13+ messages in thread
From: Nicholas Piggin @ 2021-07-01  1:26 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Sachin Sant

Excerpts from Christophe Leroy's message of June 30, 2021 5:56 pm:
> 
> 
> On 30/06/2021 at 09:46, Nicholas Piggin wrote:
>> The implicit soft-masking to speed up interrupt return was going to be
>> used by 64e as well, but it has not been extensively tested on that
>> platform and is not considered ready. It was intended to be disabled
>> before merge. Disable it for now.
>> 
>> Most of the restart code is common with 64s, so with more correctness
>> and performance testing this could be re-enabled again by adding the
>> extra soft-mask checks to interrupt handlers and flipping
>> exit_must_hard_disable().
>> 
>> Fixes: 9d1988ca87dd ("powerpc/64: treat low kernel text as irqs soft-masked")
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/interrupt.h | 33 ++++++++++++++++++++--------
>>   arch/powerpc/kernel/exceptions-64e.S | 12 +---------
>>   arch/powerpc/kernel/interrupt.c      |  2 +-
>>   arch/powerpc/kernel/interrupt_64.S   | 16 ++++++++++++--
>>   4 files changed, 40 insertions(+), 23 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
>> index 8b4b1e84e110..f13c93b033c7 100644
>> --- a/arch/powerpc/include/asm/interrupt.h
>> +++ b/arch/powerpc/include/asm/interrupt.h
>> @@ -73,20 +73,34 @@
>>   #include <asm/kprobes.h>
>>   #include <asm/runlatch.h>
>>   
>> -#ifdef CONFIG_PPC64
>> +#ifdef CONFIG_PPC_BOOK3S_64
> 
> Can we avoid that ifdef and use IS_ENABLED(CONFIG_PPC_BOOK3S_64) below ?

Hey Christophe,

Thanks for the review; sorry, it was a bit rushed to get these fixes in
before the pull. I agree with this, and there are a few other cleanups we
might do as well. Something to look at next.

> 
>>   extern char __end_soft_masked[];
>>   unsigned long search_kernel_restart_table(unsigned long addr);
>> -#endif
>>   
>> -#ifdef CONFIG_PPC_BOOK3S_64
>>   DECLARE_STATIC_KEY_FALSE(interrupt_exit_not_reentrant);
>>   
>> +static inline bool is_implicit_soft_masked(struct pt_regs *regs)
>> +{
>> +	if (regs->msr & MSR_PR)
>> +		return false;
>> +
>> +	if (regs->nip >= (unsigned long)__end_soft_masked)
>> +		return false;
>> +
>> +	return true;
>> +}
>> +
>>   static inline void srr_regs_clobbered(void)
>>   {
>>   	local_paca->srr_valid = 0;
>>   	local_paca->hsrr_valid = 0;
>>   }
>>   #else
>> +static inline bool is_implicit_soft_masked(struct pt_regs *regs)
>> +{
>> +	return false;
>> +}
>> +
>>   static inline void srr_regs_clobbered(void)
>>   {
>>   }
>> @@ -150,11 +164,13 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
>>   		 */
>>   		if (TRAP(regs) != INTERRUPT_PROGRAM) {
>>   			CT_WARN_ON(ct_state() != CONTEXT_KERNEL);
>> -			BUG_ON(regs->nip < (unsigned long)__end_soft_masked);
>> +			BUG_ON(is_implicit_soft_masked(regs));
>>   		}
>> +#ifdef CONFIG_PPC_BOOK3S
> 
> Although we are already in a PPC64 section, wouldn't it be better to use CONFIG_PPC_BOOK3S_64 ?
> 
> Can we use IS_ENABLED(CONFIG_PPC_BOOK3S_64) instead ?

Good question; it's a matter of preference. I have used PPC_BOOK3S in
other places, but maybe in files shared with 32-bit it would be better to
have the _64?

On the other hand, in cases where you have an else or #else, you still
need the PPC64 context to understand that.

I don't really have a preference; I would go with either. Making some
convention and using it everywhere is probably a good idea, though.

Thanks,
Nick
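
[ Not from the original mail: a minimal sketch of the two styles under
  discussion, using the restart-table check as the example. The
  IS_ENABLED() form needs search_kernel_restart_table() to at least be
  declared for every configuration so the (dead) call still compiles,
  which is the trade-off against the #ifdef form:

#ifdef CONFIG_PPC_BOOK3S_64
	if (arch_irq_disabled_regs(regs))
		BUG_ON(search_kernel_restart_table(regs->nip));
#endif

	/* versus */

	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64) && arch_irq_disabled_regs(regs))
		BUG_ON(search_kernel_restart_table(regs->nip));
]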


