* [PATCH 00/10] Move 64e to new interrupt return code
@ 2021-03-15  3:17 Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 01/10] powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order Nicholas Piggin
                   ` (10 more replies)
  0 siblings, 11 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

Since the RFC this is rebased on Christophe's v3 ppc32 conversion, has
some small details fixed up, and adds some powerpc-wide cleanups at
the end.

Tested on QEMU only (QEMU e500), which is not ideal for interrupt
handling, particularly for the critical interrupts, which I don't know
whether QEMU can generate.

Thanks,
Nick

Nicholas Piggin (10):
  powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order
  powerpc/64e/interrupt: always save nvgprs on interrupt
  powerpc/64e/interrupt: use new interrupt return
  powerpc/64e/interrupt: NMI save irq soft-mask state in C
  powerpc/64e/interrupt: reconcile irq soft-mask state in C
  powerpc/64e/interrupt: Use new interrupt context tracking scheme
  powerpc/64e/interrupt: handle bad_page_fault in C
  powerpc: clean up do_page_fault
  powerpc: remove partial register save logic
  powerpc: move norestart trap flag to bit 0

 arch/powerpc/include/asm/asm-prototypes.h |   2 -
 arch/powerpc/include/asm/bug.h            |   4 +-
 arch/powerpc/include/asm/interrupt.h      |  66 ++--
 arch/powerpc/include/asm/ptrace.h         |  36 +-
 arch/powerpc/kernel/align.c               |   6 -
 arch/powerpc/kernel/entry_64.S            |  40 +-
 arch/powerpc/kernel/exceptions-64e.S      | 425 ++--------------------
 arch/powerpc/kernel/interrupt.c           |  22 +-
 arch/powerpc/kernel/irq.c                 |  76 ----
 arch/powerpc/kernel/process.c             |  12 -
 arch/powerpc/kernel/ptrace/ptrace-view.c  |  21 --
 arch/powerpc/kernel/ptrace/ptrace.c       |   2 -
 arch/powerpc/kernel/ptrace/ptrace32.c     |   4 -
 arch/powerpc/kernel/signal_32.c           |   3 -
 arch/powerpc/kernel/signal_64.c           |   2 -
 arch/powerpc/kernel/traps.c               |  14 +-
 arch/powerpc/lib/sstep.c                  |   4 -
 arch/powerpc/mm/book3s64/hash_utils.c     |  16 +-
 arch/powerpc/mm/fault.c                   |  28 +-
 arch/powerpc/xmon/xmon.c                  |  23 +-
 20 files changed, 130 insertions(+), 676 deletions(-)

-- 
2.23.0



* [PATCH 01/10] powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 02/10] powerpc/64e/interrupt: always save nvgprs on interrupt Nicholas Piggin
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

user_exit_irqoff() -> __context_tracking_exit() -> vtime_user_exit()
warns in __seqprop_assert() due to lockdep thinking preemption is
enabled, because trace_hardirqs_off() has not yet been called.

Switch the order of these two calls, which matches their ordering in
interrupt_enter_prepare().

Fixes: 5f0b6ac3905f ("powerpc/64/syscall: Reconcile interrupts")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/interrupt.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index c4dd4b8f9cfa..fbabb49888d3 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -43,11 +43,11 @@ notrace long system_call_exception(long r3, long r4, long r5,
 	if (IS_ENABLED(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG))
 		BUG_ON(irq_soft_mask_return() != IRQS_ALL_DISABLED);
 
+	trace_hardirqs_off(); /* finish reconciling */
+
 	CT_WARN_ON(ct_state() == CONTEXT_KERNEL);
 	user_exit_irqoff();
 
-	trace_hardirqs_off(); /* finish reconciling */
-
 	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x))
 		BUG_ON(!(regs->msr & MSR_RI));
 	BUG_ON(!(regs->msr & MSR_PR));
-- 
2.23.0



* [PATCH 02/10] powerpc/64e/interrupt: always save nvgprs on interrupt
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 01/10] powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return Nicholas Piggin
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

In order to use the C interrupt return path, the non-volatile GPRs
(nvgprs) must always be saved on interrupt entry.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/ptrace.h    |  9 +--------
 arch/powerpc/kernel/entry_64.S       | 13 -------------
 arch/powerpc/kernel/exceptions-64e.S | 27 +++------------------------
 3 files changed, 4 insertions(+), 45 deletions(-)

diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 1aca5fe79285..c5b3669918f4 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -186,18 +186,11 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 	((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)
 
 #ifdef __powerpc64__
-#ifdef CONFIG_PPC_BOOK3S
 #define TRAP_FLAGS_MASK		0x10
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
 #define FULL_REGS(regs)		true
 #define SET_FULL_REGS(regs)	do { } while (0)
-#else
-#define TRAP_FLAGS_MASK		0x11
-#define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
-#define FULL_REGS(regs)		(((regs)->trap & 1) == 0)
-#define SET_FULL_REGS(regs)	((regs)->trap &= ~1)
-#endif
-#define CHECK_FULL_REGS(regs)	BUG_ON(!FULL_REGS(regs))
+#define CHECK_FULL_REGS(regs)	do { } while (0)
 #define NV_REG_POISON		0xdeadbeefdeadbeefUL
 #else
 /*
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 6c4d9e276c4d..853534b2ae2e 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -417,19 +417,6 @@ _GLOBAL(ret_from_kernel_thread)
 	li	r3,0
 	b	.Lsyscall_exit
 
-#ifdef CONFIG_PPC_BOOK3E
-/* Save non-volatile GPRs, if not already saved. */
-_GLOBAL(save_nvgprs)
-	ld	r11,_TRAP(r1)
-	andi.	r0,r11,1
-	beqlr-
-	SAVE_NVGPRS(r1)
-	clrrdi	r0,r11,1
-	std	r0,_TRAP(r1)
-	blr
-_ASM_NOKPROBE_SYMBOL(save_nvgprs);
-#endif
-
 #ifdef CONFIG_PPC_BOOK3S_64
 
 #define FLUSH_COUNT_CACHE	\
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index e8eb9992a270..da78eb6ab92f 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -417,14 +417,15 @@ exc_##n##_common:							    \
 	std	r6,_LINK(r1);						    \
 	std	r7,_CTR(r1);						    \
 	std	r8,_XER(r1);						    \
-	li	r3,(n)+1;		/* indicate partial regs in trap */ \
+	li	r3,(n);			/* regs.trap vector */		    \
 	std	r9,0(r1);		/* store stack frame back link */   \
 	std	r10,_CCR(r1);		/* store orig CR in stackframe */   \
 	std	r9,GPR1(r1);		/* store stack frame back link */   \
 	std	r11,SOFTE(r1);		/* and save it to stackframe */     \
 	std	r12,STACK_FRAME_OVERHEAD-16(r1); /* mark the frame */	    \
 	std	r3,_TRAP(r1);		/* set trap number		*/  \
-	std	r0,RESULT(r1);		/* clear regs->result */
+	std	r0,RESULT(r1);		/* clear regs->result */	    \
+	SAVE_NVGPRS(r1);
 
 #define EXCEPTION_COMMON(n) \
 	EXCEPTION_COMMON_LVL(n, SPRN_SPRG_GEN_SCRATCH, PACA_EXGEN)
@@ -561,7 +562,6 @@ __end_interrupts:
 	CRIT_EXCEPTION_PROLOG(0x100, BOOKE_INTERRUPT_CRITICAL,
 			      PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON_CRIT(0x100)
-	bl	save_nvgprs
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -573,7 +573,6 @@ __end_interrupts:
 	MC_EXCEPTION_PROLOG(0x000, BOOKE_INTERRUPT_MACHINE_CHECK,
 			    PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON_MC(0x000)
-	bl	save_nvgprs
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -623,7 +622,6 @@ __end_interrupts:
 	std	r14,_DSISR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXGEN+EX_R14(r13)
-	bl	save_nvgprs
 	bl	program_check_exception
 	b	ret_from_except
 
@@ -639,7 +637,6 @@ __end_interrupts:
 	bl	load_up_fpu
 	b	fast_exception_return
 1:	INTS_DISABLE
-	bl	save_nvgprs
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	kernel_fp_unavailable_exception
 	b	ret_from_except
@@ -661,7 +658,6 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #endif
 	INTS_DISABLE
-	bl	save_nvgprs
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	altivec_unavailable_exception
 	b	ret_from_except
@@ -673,7 +669,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 				PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x220)
 	INTS_DISABLE
-	bl	save_nvgprs
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
@@ -698,7 +693,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	CRIT_EXCEPTION_PROLOG(0x9f0, BOOKE_INTERRUPT_WATCHDOG,
 			      PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON_CRIT(0x9f0)
-	bl	save_nvgprs
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -723,7 +717,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 				PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0xf20)
 	INTS_DISABLE
-	bl	save_nvgprs
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unknown_exception
 	b	ret_from_except
@@ -792,7 +785,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXCRIT+EX_R14(r13)
 	ld	r15,PACA_EXCRIT+EX_R15(r13)
-	bl	save_nvgprs
 	bl	DebugException
 	b	ret_from_except
 
@@ -864,7 +856,6 @@ kernel_dbg_exc:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXDBG+EX_R14(r13)
 	ld	r15,PACA_EXDBG+EX_R15(r13)
-	bl	save_nvgprs
 	bl	DebugException
 	b	ret_from_except
 
@@ -887,7 +878,6 @@ kernel_dbg_exc:
 	CRIT_EXCEPTION_PROLOG(0x2a0, BOOKE_INTERRUPT_DOORBELL_CRITICAL,
 			      PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON_CRIT(0x2a0)
-	bl	save_nvgprs
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -903,7 +893,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x2c0)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	save_nvgprs
 	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	ret_from_except
@@ -913,7 +902,6 @@ kernel_dbg_exc:
 	CRIT_EXCEPTION_PROLOG(0x2e0, BOOKE_INTERRUPT_GUEST_DBELL_CRIT,
 			      PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON_CRIT(0x2e0)
-	bl	save_nvgprs
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
@@ -926,7 +914,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x310)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	save_nvgprs
 	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	ret_from_except
@@ -937,7 +924,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x320)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	save_nvgprs
 	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	ret_from_except
@@ -948,7 +934,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x340)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	save_nvgprs
 	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	ret_from_except
@@ -1014,7 +999,6 @@ storage_fault_common:
 	cmpdi	r3,0
 	bne-	1f
 	b	ret_from_except_lite
-1:	bl	save_nvgprs
 	mr	r4,r3
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	__bad_page_fault
@@ -1030,16 +1014,12 @@ alignment_more:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXGEN+EX_R14(r13)
 	ld	r15,PACA_EXGEN+EX_R15(r13)
-	bl	save_nvgprs
 	INTS_RESTORE_HARD
 	bl	alignment_exception
 	b	ret_from_except
 
 	.align	7
 _GLOBAL(ret_from_except)
-	ld	r11,_TRAP(r1)
-	andi.	r0,r11,1
-	bne	ret_from_except_lite
 	REST_NVGPRS(r1)
 
 _GLOBAL(ret_from_except_lite)
@@ -1080,7 +1060,6 @@ _GLOBAL(ret_from_except_lite)
 	SCHEDULE_USER
 	b	ret_from_except_lite
 2:
-	bl	save_nvgprs
 	/*
 	 * Use a non volatile GPR to save and restore our thread_info flags
 	 * across the call to restore_interrupts.
-- 
2.23.0



* [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 01/10] powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 02/10] powerpc/64e/interrupt: always save nvgprs on interrupt Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  7:50   ` Christophe Leroy
  2021-03-15 13:30   ` Christophe Leroy
  2021-03-15  3:17 ` [PATCH 04/10] powerpc/64e/interrupt: NMI save irq soft-mask state in C Nicholas Piggin
                   ` (7 subsequent siblings)
  10 siblings, 2 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

Update the new C and asm interrupt return code to account for 64e
specifics, and switch 64e over to use it.

The now-unused old ret_from_except code, which was moved to 64e after
the 64s conversion, is removed.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/asm-prototypes.h |   2 -
 arch/powerpc/kernel/entry_64.S            |   9 +-
 arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
 arch/powerpc/kernel/interrupt.c           |  27 +-
 arch/powerpc/kernel/irq.c                 |  76 -----
 5 files changed, 56 insertions(+), 379 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 939f3c94c8f3..1c7b75834e04 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -77,8 +77,6 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
 		      u32 len_high, u32 len_low);
 long sys_switch_endian(void);
-notrace unsigned int __check_irq_replay(void);
-void notrace restore_interrupts(void);
 
 /* prom_init (OpenFirmware) */
 unsigned long __init prom_init(unsigned long r3, unsigned long r4,
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 853534b2ae2e..555b3d0a3f38 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -632,7 +632,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
 	addi	r1,r1,SWITCH_FRAME_SIZE
 	blr
 
-#ifdef CONFIG_PPC_BOOK3S
 	/*
 	 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
 	 * touched, no exit work created, then this can be used.
@@ -644,6 +643,7 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
 	kuap_check_amr r3, r4
 	ld	r5,_MSR(r1)
 	andi.	r0,r5,MSR_PR
+#ifdef CONFIG_PPC_BOOK3S
 	bne	.Lfast_user_interrupt_return_amr
 	kuap_kernel_restore r3, r4
 	andi.	r0,r5,MSR_RI
@@ -652,6 +652,10 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unrecoverable_exception
 	b	. /* should not get here */
+#else
+	bne	.Lfast_user_interrupt_return
+	b	.Lfast_kernel_interrupt_return
+#endif
 
 	.balign IFETCH_ALIGN_BYTES
 	.globl interrupt_return
@@ -665,8 +669,10 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
 	cmpdi	r3,0
 	bne-	.Lrestore_nvgprs
 
+#ifdef CONFIG_PPC_BOOK3S
 .Lfast_user_interrupt_return_amr:
 	kuap_user_restore r3, r4
+#endif
 .Lfast_user_interrupt_return:
 	ld	r11,_NIP(r1)
 	ld	r12,_MSR(r1)
@@ -775,7 +781,6 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
 
 	RFI_TO_KERNEL
 	b	.	/* prevent speculative execution */
-#endif /* CONFIG_PPC_BOOK3S */
 
 #ifdef CONFIG_PPC_RTAS
 /*
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index da78eb6ab92f..1bb4e9b37748 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -139,7 +139,8 @@ ret_from_level_except:
 	ld	r3,_MSR(r1)
 	andi.	r3,r3,MSR_PR
 	beq	1f
-	b	ret_from_except
+	REST_NVGPRS(r1)
+	b	interrupt_return
 1:
 
 	LOAD_REG_ADDR(r11,extlb_level_exc)
@@ -208,7 +209,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 	/*
 	 * Restore PACAIRQHAPPENED rather than setting it based on
 	 * the return MSR[EE], since we could have interrupted
-	 * __check_irq_replay() or other inconsistent transitory
+	 * interrupt replay or other inconsistent transitory
 	 * states that must remain that way.
 	 */
 	SPECIAL_EXC_LOAD(r10,IRQHAPPENED)
@@ -511,7 +512,7 @@ exc_##n##_bad_stack:							    \
 	CHECK_NAPPING();						\
 	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
 	bl	hdlr;							\
-	b	ret_from_except_lite;
+	b	interrupt_return
 
 /* This value is used to mark exception frames on the stack. */
 	.section	".toc","aw"
@@ -623,7 +624,8 @@ __end_interrupts:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXGEN+EX_R14(r13)
 	bl	program_check_exception
-	b	ret_from_except
+	REST_NVGPRS(r1)
+	b	interrupt_return
 
 /* Floating Point Unavailable Interrupt */
 	START_EXCEPTION(fp_unavailable);
@@ -635,11 +637,11 @@ __end_interrupts:
 	andi.	r0,r12,MSR_PR;
 	beq-	1f
 	bl	load_up_fpu
-	b	fast_exception_return
+	b	fast_interrupt_return
 1:	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	kernel_fp_unavailable_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* Altivec Unavailable Interrupt */
 	START_EXCEPTION(altivec_unavailable);
@@ -653,14 +655,14 @@ BEGIN_FTR_SECTION
 	andi.	r0,r12,MSR_PR;
 	beq-	1f
 	bl	load_up_altivec
-	b	fast_exception_return
+	b	fast_interrupt_return
 1:
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #endif
 	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	altivec_unavailable_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* AltiVec Assist */
 	START_EXCEPTION(altivec_assist);
@@ -674,10 +676,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 BEGIN_FTR_SECTION
 	bl	altivec_assist_exception
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
+	REST_NVGPRS(r1)
 #else
 	bl	unknown_exception
 #endif
-	b	ret_from_except
+	b	interrupt_return
 
 
 /* Decrementer Interrupt */
@@ -719,7 +722,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unknown_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* Debug exception as a critical interrupt*/
 	START_EXCEPTION(debug_crit);
@@ -786,7 +789,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	ld	r14,PACA_EXCRIT+EX_R14(r13)
 	ld	r15,PACA_EXCRIT+EX_R15(r13)
 	bl	DebugException
-	b	ret_from_except
+	REST_NVGPRS(r1)
+	b	interrupt_return
 
 kernel_dbg_exc:
 	b	.	/* NYI */
@@ -857,7 +861,8 @@ kernel_dbg_exc:
 	ld	r14,PACA_EXDBG+EX_R14(r13)
 	ld	r15,PACA_EXDBG+EX_R15(r13)
 	bl	DebugException
-	b	ret_from_except
+	REST_NVGPRS(r1)
+	b	interrupt_return
 
 	START_EXCEPTION(perfmon);
 	NORMAL_EXCEPTION_PROLOG(0x260, BOOKE_INTERRUPT_PERFORMANCE_MONITOR,
@@ -867,7 +872,7 @@ kernel_dbg_exc:
 	CHECK_NAPPING()
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	performance_monitor_exception
-	b	ret_from_except_lite
+	b	interrupt_return
 
 /* Doorbell interrupt */
 	MASKABLE_EXCEPTION(0x280, BOOKE_INTERRUPT_DOORBELL,
@@ -895,7 +900,7 @@ kernel_dbg_exc:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	INTS_RESTORE_HARD
 	bl	unknown_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* Guest Doorbell critical Interrupt */
 	START_EXCEPTION(guest_doorbell_crit);
@@ -916,7 +921,7 @@ kernel_dbg_exc:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	INTS_RESTORE_HARD
 	bl	unknown_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* Embedded Hypervisor priviledged  */
 	START_EXCEPTION(ehpriv);
@@ -926,7 +931,7 @@ kernel_dbg_exc:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	INTS_RESTORE_HARD
 	bl	unknown_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /* LRAT Error interrupt */
 	START_EXCEPTION(lrat_error);
@@ -936,7 +941,7 @@ kernel_dbg_exc:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	INTS_RESTORE_HARD
 	bl	unknown_exception
-	b	ret_from_except
+	b	interrupt_return
 
 /*
  * An interrupt came in while soft-disabled; We mark paca->irq_happened
@@ -998,11 +1003,11 @@ storage_fault_common:
 	bl	do_page_fault
 	cmpdi	r3,0
 	bne-	1f
-	b	ret_from_except_lite
+	b	interrupt_return
 	mr	r4,r3
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	__bad_page_fault
-	b	ret_from_except
+	b	interrupt_return
 
 /*
  * Alignment exception doesn't fit entirely in the 0x100 bytes so it
@@ -1016,284 +1021,8 @@ alignment_more:
 	ld	r15,PACA_EXGEN+EX_R15(r13)
 	INTS_RESTORE_HARD
 	bl	alignment_exception
-	b	ret_from_except
-
-	.align	7
-_GLOBAL(ret_from_except)
 	REST_NVGPRS(r1)
-
-_GLOBAL(ret_from_except_lite)
-	/*
-	 * Disable interrupts so that current_thread_info()->flags
-	 * can't change between when we test it and when we return
-	 * from the interrupt.
-	 */
-	wrteei	0
-
-	ld	r9, PACA_THREAD_INFO(r13)
-	ld	r3,_MSR(r1)
-	ld	r10,PACACURRENT(r13)
-	ld	r4,TI_FLAGS(r9)
-	andi.	r3,r3,MSR_PR
-	beq	resume_kernel
-	lwz	r3,(THREAD+THREAD_DBCR0)(r10)
-
-	/* Check current_thread_info()->flags */
-	andi.	r0,r4,_TIF_USER_WORK_MASK
-	bne	1f
-	/*
-	 * Check to see if the dbcr0 register is set up to debug.
-	 * Use the internal debug mode bit to do this.
-	 */
-	andis.	r0,r3,DBCR0_IDM@h
-	beq	restore
-	mfmsr	r0
-	rlwinm	r0,r0,0,~MSR_DE	/* Clear MSR.DE */
-	mtmsr	r0
-	mtspr	SPRN_DBCR0,r3
-	li	r10, -1
-	mtspr	SPRN_DBSR,r10
-	b	restore
-1:	andi.	r0,r4,_TIF_NEED_RESCHED
-	beq	2f
-	bl	restore_interrupts
-	SCHEDULE_USER
-	b	ret_from_except_lite
-2:
-	/*
-	 * Use a non volatile GPR to save and restore our thread_info flags
-	 * across the call to restore_interrupts.
-	 */
-	mr	r30,r4
-	bl	restore_interrupts
-	mr	r4,r30
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	do_notify_resume
-	b	ret_from_except
-
-resume_kernel:
-	/* check current_thread_info, _TIF_EMULATE_STACK_STORE */
-	andis.	r8,r4,_TIF_EMULATE_STACK_STORE@h
-	beq+	1f
-
-	addi	r8,r1,INT_FRAME_SIZE	/* Get the kprobed function entry */
-
-	ld	r3,GPR1(r1)
-	subi	r3,r3,INT_FRAME_SIZE	/* dst: Allocate a trampoline exception frame */
-	mr	r4,r1			/* src:  current exception frame */
-	mr	r1,r3			/* Reroute the trampoline frame to r1 */
-
-	/* Copy from the original to the trampoline. */
-	li	r5,INT_FRAME_SIZE/8	/* size: INT_FRAME_SIZE */
-	li	r6,0			/* start offset: 0 */
-	mtctr	r5
-2:	ldx	r0,r6,r4
-	stdx	r0,r6,r3
-	addi	r6,r6,8
-	bdnz	2b
-
-	/* Do real store operation to complete stdu */
-	ld	r5,GPR1(r1)
-	std	r8,0(r5)
-
-	/* Clear _TIF_EMULATE_STACK_STORE flag */
-	lis	r11,_TIF_EMULATE_STACK_STORE@h
-	addi	r5,r9,TI_FLAGS
-0:	ldarx	r4,0,r5
-	andc	r4,r4,r11
-	stdcx.	r4,0,r5
-	bne-	0b
-1:
-
-#ifdef CONFIG_PREEMPT
-	/* Check if we need to preempt */
-	andi.	r0,r4,_TIF_NEED_RESCHED
-	beq+	restore
-	/* Check that preempt_count() == 0 and interrupts are enabled */
-	lwz	r8,TI_PREEMPT(r9)
-	cmpwi	cr0,r8,0
-	bne	restore
-	ld	r0,SOFTE(r1)
-	andi.	r0,r0,IRQS_DISABLED
-	bne	restore
-
-	/*
-	 * Here we are preempting the current task. We want to make
-	 * sure we are soft-disabled first and reconcile irq state.
-	 */
-	RECONCILE_IRQ_STATE(r3,r4)
-	bl	preempt_schedule_irq
-
-	/*
-	 * arch_local_irq_restore() from preempt_schedule_irq above may
-	 * enable hard interrupt but we really should disable interrupts
-	 * when we return from the interrupt, and so that we don't get
-	 * interrupted after loading SRR0/1.
-	 */
-	wrteei	0
-#endif /* CONFIG_PREEMPT */
-
-restore:
-	/*
-	 * This is the main kernel exit path. First we check if we
-	 * are about to re-enable interrupts
-	 */
-	ld	r5,SOFTE(r1)
-	lbz	r6,PACAIRQSOFTMASK(r13)
-	andi.	r5,r5,IRQS_DISABLED
-	bne	.Lrestore_irq_off
-
-	/* We are enabling, were we already enabled ? Yes, just return */
-	andi.	r6,r6,IRQS_DISABLED
-	beq	cr0,fast_exception_return
-
-	/*
-	 * We are about to soft-enable interrupts (we are hard disabled
-	 * at this point). We check if there's anything that needs to
-	 * be replayed first.
-	 */
-	lbz	r0,PACAIRQHAPPENED(r13)
-	cmpwi	cr0,r0,0
-	bne-	.Lrestore_check_irq_replay
-
-	/*
-	 * Get here when nothing happened while soft-disabled, just
-	 * soft-enable and move-on. We will hard-enable as a side
-	 * effect of rfi
-	 */
-.Lrestore_no_replay:
-	TRACE_ENABLE_INTS
-	li	r0,IRQS_ENABLED
-	stb	r0,PACAIRQSOFTMASK(r13);
-
-/* This is the return from load_up_fpu fast path which could do with
- * less GPR restores in fact, but for now we have a single return path
- */
-fast_exception_return:
-	wrteei	0
-1:	mr	r0,r13
-	ld	r10,_MSR(r1)
-	REST_4GPRS(2, r1)
-	andi.	r6,r10,MSR_PR
-	REST_2GPRS(6, r1)
-	beq	1f
-	ACCOUNT_CPU_USER_EXIT(r13, r10, r11)
-	ld	r0,GPR13(r1)
-
-1:	stdcx.	r0,0,r1		/* to clear the reservation */
-
-	ld	r8,_CCR(r1)
-	ld	r9,_LINK(r1)
-	ld	r10,_CTR(r1)
-	ld	r11,_XER(r1)
-	mtcr	r8
-	mtlr	r9
-	mtctr	r10
-	mtxer	r11
-	REST_2GPRS(8, r1)
-	ld	r10,GPR10(r1)
-	ld	r11,GPR11(r1)
-	ld	r12,GPR12(r1)
-	mtspr	SPRN_SPRG_GEN_SCRATCH,r0
-
-	std	r10,PACA_EXGEN+EX_R10(r13);
-	std	r11,PACA_EXGEN+EX_R11(r13);
-	ld	r10,_NIP(r1)
-	ld	r11,_MSR(r1)
-	ld	r0,GPR0(r1)
-	ld	r1,GPR1(r1)
-	mtspr	SPRN_SRR0,r10
-	mtspr	SPRN_SRR1,r11
-	ld	r10,PACA_EXGEN+EX_R10(r13)
-	ld	r11,PACA_EXGEN+EX_R11(r13)
-	mfspr	r13,SPRN_SPRG_GEN_SCRATCH
-	rfi
-
-	/*
-	 * We are returning to a context with interrupts soft disabled.
-	 *
-	 * However, we may also about to hard enable, so we need to
-	 * make sure that in this case, we also clear PACA_IRQ_HARD_DIS
-	 * or that bit can get out of sync and bad things will happen
-	 */
-.Lrestore_irq_off:
-	ld	r3,_MSR(r1)
-	lbz	r7,PACAIRQHAPPENED(r13)
-	andi.	r0,r3,MSR_EE
-	beq	1f
-	rlwinm	r7,r7,0,~PACA_IRQ_HARD_DIS
-	stb	r7,PACAIRQHAPPENED(r13)
-1:
-#if defined(CONFIG_PPC_IRQ_SOFT_MASK_DEBUG) && defined(CONFIG_BUG)
-	/* The interrupt should not have soft enabled. */
-	lbz	r7,PACAIRQSOFTMASK(r13)
-1:	tdeqi	r7,IRQS_ENABLED
-	EMIT_BUG_ENTRY 1b,__FILE__,__LINE__,BUGFLAG_WARNING
-#endif
-	b	fast_exception_return
-
-	/*
-	 * Something did happen, check if a re-emit is needed
-	 * (this also clears paca->irq_happened)
-	 */
-.Lrestore_check_irq_replay:
-	/* XXX: We could implement a fast path here where we check
-	 * for irq_happened being just 0x01, in which case we can
-	 * clear it and return. That means that we would potentially
-	 * miss a decrementer having wrapped all the way around.
-	 *
-	 * Still, this might be useful for things like hash_page
-	 */
-	bl	__check_irq_replay
-	cmpwi	cr0,r3,0
-	beq	.Lrestore_no_replay
-
-	/*
-	 * We need to re-emit an interrupt. We do so by re-using our
-	 * existing exception frame. We first change the trap value,
-	 * but we need to ensure we preserve the low nibble of it
-	 */
-	ld	r4,_TRAP(r1)
-	clrldi	r4,r4,60
-	or	r4,r4,r3
-	std	r4,_TRAP(r1)
-
-	/*
-	 * PACA_IRQ_HARD_DIS won't always be set here, so set it now
-	 * to reconcile the IRQ state. Tracing is already accounted for.
-	 */
-	lbz	r4,PACAIRQHAPPENED(r13)
-	ori	r4,r4,PACA_IRQ_HARD_DIS
-	stb	r4,PACAIRQHAPPENED(r13)
-
-	/*
-	 * Then find the right handler and call it. Interrupts are
-	 * still soft-disabled and we keep them that way.
-	*/
-	cmpwi	cr0,r3,0x500
-	bne	1f
-	addi	r3,r1,STACK_FRAME_OVERHEAD;
-	bl	do_IRQ
-	b	ret_from_except
-1:	cmpwi	cr0,r3,0x900
-	bne	1f
-	addi	r3,r1,STACK_FRAME_OVERHEAD;
-	bl	timer_interrupt
-	b	ret_from_except
-#ifdef CONFIG_PPC_DOORBELL
-1:
-	cmpwi	cr0,r3,0x280
-	bne	1f
-	addi	r3,r1,STACK_FRAME_OVERHEAD;
-	bl	doorbell_exception
-#endif /* CONFIG_PPC_DOORBELL */
-1:	b	ret_from_except /* What else to do here ? */
-
-_ASM_NOKPROBE_SYMBOL(ret_from_except);
-_ASM_NOKPROBE_SYMBOL(ret_from_except_lite);
-_ASM_NOKPROBE_SYMBOL(resume_kernel);
-_ASM_NOKPROBE_SYMBOL(restore);
-_ASM_NOKPROBE_SYMBOL(fast_exception_return);
+	b	interrupt_return
 
 /*
  * Trampolines used when spotting a bad kernel stack pointer in
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index fbabb49888d3..ae7b058b2970 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
 #endif
 }
 
+/* temporary hack for context tracking, removed in later patch */
+#include <linux/sched/debug.h>
+asmlinkage __visible void __sched schedule_user(void);
+
 /*
  * This should be called after a syscall returns, with r3 the return value
  * from the syscall. If this function returns non-zero, the system call
@@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
 		local_irq_enable();
 		if (ti_flags & _TIF_NEED_RESCHED) {
+#ifdef CONFIG_PPC_BOOK3E_64
+			schedule_user();
+#else
 			schedule();
+#endif
 		} else {
 			/*
 			 * SIGPENDING must restore signal handler function
@@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 	return ret;
 }
 
-#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
 notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
 {
 	unsigned long ti_flags;
@@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 	BUG_ON(!(regs->msr & MSR_PR));
 	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(arch_irq_disabled_regs(regs));
+#ifdef CONFIG_PPC_BOOK3S_64
 	CT_WARN_ON(ct_state() == CONTEXT_USER);
+#endif
 
 	/*
 	 * We don't need to restore AMR on the way back to userspace for KUAP.
@@ -387,7 +396,11 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
 		local_irq_enable(); /* returning to user: may enable */
 		if (ti_flags & _TIF_NEED_RESCHED) {
+#ifdef CONFIG_PPC_BOOK3E_64
+			schedule_user();
+#else
 			schedule();
+#endif
 		} else {
 			if (ti_flags & _TIF_SIGPENDING)
 				ret |= _TIF_RESTOREALL;
@@ -435,7 +448,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 	/*
 	 * We do this at the end so that we do context switch with KERNEL AMR
 	 */
+#ifndef CONFIG_PPC_BOOK3E_64
 	kuap_user_restore(regs);
+#endif
+
 	return ret;
 }
 
@@ -445,7 +461,9 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 {
 	unsigned long flags;
 	unsigned long ret = 0;
+#ifndef CONFIG_PPC_BOOK3E_64
 	unsigned long kuap;
+#endif
 
 	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
 	    unlikely(!(regs->msr & MSR_RI)))
@@ -456,10 +474,12 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 	 * CT_WARN_ON comes here via program_check_exception,
 	 * so avoid recursion.
 	 */
-	if (TRAP(regs) != 0x700)
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && TRAP(regs) != 0x700)
 		CT_WARN_ON(ct_state() == CONTEXT_USER);
 
+#ifndef CONFIG_PPC_BOOK3E_64
 	kuap = kuap_get_and_assert_locked();
+#endif
 
 	if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
 		clear_bits(_TIF_EMULATE_STACK_STORE, &current_thread_info()->flags);
@@ -501,8 +521,9 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 	 * which would cause Read-After-Write stalls. Hence, we take the AMR
 	 * value from the check above.
 	 */
+#ifndef CONFIG_PPC_BOOK3E_64
 	kuap_kernel_restore(regs, kuap);
+#endif
 
 	return ret;
 }
-#endif
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 5b72abbff96c..08a747b92735 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -104,82 +104,6 @@ static inline notrace unsigned long get_irq_happened(void)
 	return happened;
 }
 
-#ifdef CONFIG_PPC_BOOK3E
-
-/* This is called whenever we are re-enabling interrupts
- * and returns either 0 (nothing to do) or 500/900/280 if
- * there's an EE, DEC or DBELL to generate.
- *
- * This is called in two contexts: From arch_local_irq_restore()
- * before soft-enabling interrupts, and from the exception exit
- * path when returning from an interrupt from a soft-disabled to
- * a soft enabled context. In both case we have interrupts hard
- * disabled.
- *
- * We take care of only clearing the bits we handled in the
- * PACA irq_happened field since we can only re-emit one at a
- * time and we don't want to "lose" one.
- */
-notrace unsigned int __check_irq_replay(void)
-{
-	/*
-	 * We use local_paca rather than get_paca() to avoid all
-	 * the debug_smp_processor_id() business in this low level
-	 * function
-	 */
-	unsigned char happened = local_paca->irq_happened;
-
-	/*
-	 * We are responding to the next interrupt, so interrupt-off
-	 * latencies should be reset here.
-	 */
-	trace_hardirqs_on();
-	trace_hardirqs_off();
-
-	if (happened & PACA_IRQ_DEC) {
-		local_paca->irq_happened &= ~PACA_IRQ_DEC;
-		return 0x900;
-	}
-
-	if (happened & PACA_IRQ_EE) {
-		local_paca->irq_happened &= ~PACA_IRQ_EE;
-		return 0x500;
-	}
-
-	if (happened & PACA_IRQ_DBELL) {
-		local_paca->irq_happened &= ~PACA_IRQ_DBELL;
-		return 0x280;
-	}
-
-	if (happened & PACA_IRQ_HARD_DIS)
-		local_paca->irq_happened &= ~PACA_IRQ_HARD_DIS;
-
-	/* There should be nothing left ! */
-	BUG_ON(local_paca->irq_happened != 0);
-
-	return 0;
-}
-
-/*
- * This is specifically called by assembly code to re-enable interrupts
- * if they are currently disabled. This is typically called before
- * schedule() or do_signal() when returning to userspace. We do it
- * in C to avoid the burden of dealing with lockdep etc...
- *
- * NOTE: This is called with interrupts hard disabled but not marked
- * as such in paca->irq_happened, so we need to resync this.
- */
-void notrace restore_interrupts(void)
-{
-	if (irqs_disabled()) {
-		local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
-		local_irq_enable();
-	} else
-		__hard_irq_enable();
-}
-
-#endif /* CONFIG_PPC_BOOK3E */
-
 void replay_soft_interrupts(void)
 {
 	struct pt_regs regs;
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 04/10] powerpc/64e/interrupt: NMI save irq soft-mask state in C
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (2 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 05/10] powerpc/64e/interrupt: reconcile " Nicholas Piggin
                   ` (6 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

64e non-maskable interrupts save the state of the irq soft-mask in
asm. This can instead be done in C in the interrupt wrappers, as 64s does.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>

I haven't been able to test this with qemu because it doesn't seem
to raise FSL BookE watchdog interrupts.

This makes WatchdogException an NMI interrupt, which affects 32-bit
as well (is that okay, or should a new handler be created?)
---
 arch/powerpc/include/asm/interrupt.h | 32 +++++++++++++++++--------
 arch/powerpc/kernel/exceptions-64e.S | 36 ++++------------------------
 arch/powerpc/kernel/traps.c          | 13 +++++++++-
 3 files changed, 38 insertions(+), 43 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 7c633896d758..305d7c17a4cf 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -130,18 +130,32 @@ static inline void interrupt_async_exit_prepare(struct pt_regs *regs, struct int
 
 struct interrupt_nmi_state {
 #ifdef CONFIG_PPC64
-#ifdef CONFIG_PPC_BOOK3S_64
 	u8 irq_soft_mask;
 	u8 irq_happened;
-#endif
 	u8 ftrace_enabled;
 #endif
 };
 
+static inline bool nmi_disables_ftrace(struct pt_regs *regs)
+{
+	/* Allow DEC and PMI to be traced when they are soft-NMI */
+	if (IS_ENABLED(CONFIG_PPC_BOOK3S_64)) {
+		if (TRAP(regs) == 0x900)
+			return false;
+		if (TRAP(regs) == 0xf00)
+			return false;
+	}
+	if (IS_ENABLED(CONFIG_PPC_BOOK3E)) {
+		if (TRAP(regs) == 0x260)
+			return false;
+	}
+
+	return true;
+}
+
 static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct interrupt_nmi_state *state)
 {
 #ifdef CONFIG_PPC64
-#ifdef CONFIG_PPC_BOOK3S_64
 	state->irq_soft_mask = local_paca->irq_soft_mask;
 	state->irq_happened = local_paca->irq_happened;
 
@@ -154,9 +168,8 @@ static inline void interrupt_nmi_enter_prepare(struct pt_regs *regs, struct inte
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
 
 	/* Don't do any per-CPU operations until interrupt state is fixed */
-#endif
-	/* Allow DEC and PMI to be traced when they are soft-NMI */
-	if (TRAP(regs) != 0x900 && TRAP(regs) != 0xf00 && TRAP(regs) != 0x260) {
+
+	if (nmi_disables_ftrace(regs)) {
 		state->ftrace_enabled = this_cpu_get_ftrace_enabled();
 		this_cpu_set_ftrace_enabled(0);
 	}
@@ -180,16 +193,14 @@ static inline void interrupt_nmi_exit_prepare(struct pt_regs *regs, struct inter
 		nmi_exit();
 
 #ifdef CONFIG_PPC64
-	if (TRAP(regs) != 0x900 && TRAP(regs) != 0xf00 && TRAP(regs) != 0x260)
+	if (nmi_disables_ftrace(regs))
 		this_cpu_set_ftrace_enabled(state->ftrace_enabled);
 
-#ifdef CONFIG_PPC_BOOK3S_64
 	/* Check we didn't change the pending interrupt mask. */
 	WARN_ON_ONCE((state->irq_happened | PACA_IRQ_HARD_DIS) != local_paca->irq_happened);
 	local_paca->irq_happened = state->irq_happened;
 	local_paca->irq_soft_mask = state->irq_soft_mask;
 #endif
-#endif
 }
 
 /*
@@ -402,6 +413,7 @@ DECLARE_INTERRUPT_HANDLER(SMIException);
 DECLARE_INTERRUPT_HANDLER(handle_hmi_exception);
 DECLARE_INTERRUPT_HANDLER(unknown_exception);
 DECLARE_INTERRUPT_HANDLER_ASYNC(unknown_async_exception);
+DECLARE_INTERRUPT_HANDLER_NMI(unknown_nmi_exception);
 DECLARE_INTERRUPT_HANDLER(instruction_breakpoint_exception);
 DECLARE_INTERRUPT_HANDLER(RunModeException);
 DECLARE_INTERRUPT_HANDLER(single_step_exception);
@@ -425,7 +437,7 @@ DECLARE_INTERRUPT_HANDLER(altivec_assist_exception);
 DECLARE_INTERRUPT_HANDLER(CacheLockingException);
 DECLARE_INTERRUPT_HANDLER(SPEFloatingPointException);
 DECLARE_INTERRUPT_HANDLER(SPEFloatingPointRoundException);
-DECLARE_INTERRUPT_HANDLER(WatchdogException);
+DECLARE_INTERRUPT_HANDLER_NMI(WatchdogException);
 DECLARE_INTERRUPT_HANDLER(kernel_bad_stack);
 
 /* slb.c */
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 1bb4e9b37748..2074a1e41ae2 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -63,9 +63,6 @@
 	ld	reg, (SPECIAL_EXC_##name * 8 + SPECIAL_EXC_FRAME_OFFS)(r1)
 
 special_reg_save:
-	lbz	r9,PACAIRQHAPPENED(r13)
-	RECONCILE_IRQ_STATE(r3,r4)
-
 	/*
 	 * We only need (or have stack space) to save this stuff if
 	 * we interrupted the kernel.
@@ -119,15 +116,11 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_MAS5,r10
 	mtspr	SPRN_MAS8,r10
 END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
-	SPECIAL_EXC_STORE(r9,IRQHAPPENED)
-
 	mfspr	r10,SPRN_DEAR
 	SPECIAL_EXC_STORE(r10,DEAR)
 	mfspr	r10,SPRN_ESR
 	SPECIAL_EXC_STORE(r10,ESR)
 
-	lbz	r10,PACAIRQSOFTMASK(r13)
-	SPECIAL_EXC_STORE(r10,SOFTE)
 	ld	r10,_NIP(r1)
 	SPECIAL_EXC_STORE(r10,CSRR0)
 	ld	r10,_MSR(r1)
@@ -194,27 +187,6 @@ BEGIN_FTR_SECTION
 	mtspr	SPRN_MAS8,r10
 END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
 
-	lbz	r6,PACAIRQSOFTMASK(r13)
-	ld	r5,SOFTE(r1)
-
-	/* Interrupts had better not already be enabled... */
-	tweqi	r6,IRQS_ENABLED
-
-	andi.	r6,r5,IRQS_DISABLED
-	bne	1f
-
-	TRACE_ENABLE_INTS
-	stb	r5,PACAIRQSOFTMASK(r13)
-1:
-	/*
-	 * Restore PACAIRQHAPPENED rather than setting it based on
-	 * the return MSR[EE], since we could have interrupted
-	 * interrupt replay or other inconsistent transitory
-	 * states that must remain that way.
-	 */
-	SPECIAL_EXC_LOAD(r10,IRQHAPPENED)
-	stb	r10,PACAIRQHAPPENED(r13)
-
 	SPECIAL_EXC_LOAD(r10,DEAR)
 	mtspr	SPRN_DEAR,r10
 	SPECIAL_EXC_LOAD(r10,ESR)
@@ -566,7 +538,7 @@ __end_interrupts:
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unknown_exception
+	bl	unknown_nmi_exception
 	b	ret_from_crit_except
 
 /* Machine Check Interrupt */
@@ -702,7 +674,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #ifdef CONFIG_BOOKE_WDT
 	bl	WatchdogException
 #else
-	bl	unknown_exception
+	bl	unknown_nmi_exception
 #endif
 	b	ret_from_crit_except
 
@@ -886,7 +858,7 @@ kernel_dbg_exc:
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unknown_exception
+	bl	unknown_nmi_exception
 	b	ret_from_crit_except
 
 /*
@@ -910,7 +882,7 @@ kernel_dbg_exc:
 	bl	special_reg_save
 	CHECK_NAPPING();
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	unknown_exception
+	bl	unknown_nmi_exception
 	b	ret_from_crit_except
 
 /* Hypervisor call */
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index c74e7727860a..97b5f3d83ff7 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1079,6 +1079,16 @@ DEFINE_INTERRUPT_HANDLER_ASYNC(unknown_async_exception)
 	_exception(SIGTRAP, regs, TRAP_UNK, 0);
 }
 
+DEFINE_INTERRUPT_HANDLER_NMI(unknown_nmi_exception)
+{
+	printk("Bad trap at PC: %lx, SR: %lx, vector=%lx\n",
+	       regs->nip, regs->msr, regs->trap);
+
+	_exception(SIGTRAP, regs, TRAP_UNK, 0);
+
+	return 0;
+}
+
 DEFINE_INTERRUPT_HANDLER(instruction_breakpoint_exception)
 {
 	if (notify_die(DIE_IABR_MATCH, "iabr_match", regs, 5,
@@ -2183,10 +2193,11 @@ void __attribute__ ((weak)) WatchdogHandler(struct pt_regs *regs)
 	return;
 }
 
-DEFINE_INTERRUPT_HANDLER(WatchdogException) /* XXX NMI? async? */
+DEFINE_INTERRUPT_HANDLER_NMI(WatchdogException)
 {
 	printk (KERN_EMERG "PowerPC Book-E Watchdog Exception\n");
 	WatchdogHandler(regs);
+	return 0;
 }
 #endif
 
-- 
2.23.0



* [PATCH 05/10] powerpc/64e/interrupt: reconcile irq soft-mask state in C
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (3 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 04/10] powerpc/64e/interrupt: NMI save irq soft-mask state in C Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 06/10] powerpc/64e/interrupt: Use new interrupt context tracking scheme Nicholas Piggin
                   ` (5 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

Use existing 64s interrupt entry wrapper code to reconcile irqs in C.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h |  8 +++---
 arch/powerpc/kernel/entry_64.S       | 18 ++++++-------
 arch/powerpc/kernel/exceptions-64e.S | 39 +---------------------------
 3 files changed, 13 insertions(+), 52 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 305d7c17a4cf..29b48d083156 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -40,14 +40,14 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 		kuap_save_and_lock(regs);
 	}
 #endif
-	/*
-	 * Book3E reconciles irq soft mask in asm
-	 */
-#ifdef CONFIG_PPC_BOOK3S_64
+
+#ifdef CONFIG_PPC64
 	if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
 		trace_hardirqs_off();
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
+#endif
 
+#ifdef CONFIG_PPC_BOOK3S_64
 	if (user_mode(regs)) {
 		CT_WARN_ON(ct_state() != CONTEXT_USER);
 		user_exit_irqoff();
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 555b3d0a3f38..03727308d8cc 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -117,13 +117,12 @@ BEGIN_FTR_SECTION
 END_FTR_SECTION_IFSET(CPU_FTR_HAS_PPR)
 
 	/*
-	 * RECONCILE_IRQ_STATE without calling trace_hardirqs_off(), which
-	 * would clobber syscall parameters. Also we always enter with IRQs
-	 * enabled and nothing pending. system_call_exception() will call
-	 * trace_hardirqs_off().
-	 *
-	 * scv enters with MSR[EE]=1, so don't set PACA_IRQ_HARD_DIS. The
-	 * entry vector already sets PACAIRQSOFTMASK to IRQS_ALL_DISABLED.
+	 * scv enters with MSR[EE]=1 and is immediately considered soft-masked.
+	 * The entry vector already sets PACAIRQSOFTMASK to IRQS_ALL_DISABLED,
+	 * and interrupts may be masked and pending already.
+	 * system_call_exception() will call trace_hardirqs_off(), so
+	 * interrupts may already have been blocked before lockdep sees
+	 * them as off, but this is the best we can do.
 	 */
 
 	/* Calling convention has r9 = orig r0, r10 = regs */
@@ -288,9 +287,8 @@ END_BTB_FLUSH_SECTION
 	std	r11,-16(r10)		/* "regshere" marker */
 
 	/*
-	 * RECONCILE_IRQ_STATE without calling trace_hardirqs_off(), which
-	 * would clobber syscall parameters. Also we always enter with IRQs
-	 * enabled and nothing pending. system_call_exception() will call
+	 * We always enter kernel from userspace with irq soft-mask enabled and
+	 * nothing pending. system_call_exception() will call
 	 * trace_hardirqs_off().
 	 */
 	li	r11,IRQS_ALL_DISABLED
diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index 2074a1e41ae2..a059ab3542c2 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -409,28 +409,6 @@ exc_##n##_common:							    \
 #define EXCEPTION_COMMON_DBG(n) \
 	EXCEPTION_COMMON_LVL(n, SPRN_SPRG_DBG_SCRATCH, PACA_EXDBG)
 
-/*
- * This is meant for exceptions that don't immediately hard-enable.  We
- * set a bit in paca->irq_happened to ensure that a subsequent call to
- * arch_local_irq_restore() will properly hard-enable and avoid the
- * fast-path, and then reconcile irq state.
- */
-#define INTS_DISABLE	RECONCILE_IRQ_STATE(r3,r4)
-
-/*
- * This is called by exceptions that don't use INTS_DISABLE (that did not
- * touch irq indicators in the PACA).  This will restore MSR:EE to it's
- * previous value
- *
- * XXX In the long run, we may want to open-code it in order to separate the
- *     load from the wrtee, thus limiting the latency caused by the dependency
- *     but at this point, I'll favor code clarity until we have a near to final
- *     implementation
- */
-#define INTS_RESTORE_HARD						    \
-	ld	r11,_MSR(r1);						    \
-	wrtee	r11;
-
 /* XXX FIXME: Restore r14/r15 when necessary */
 #define BAD_STACK_TRAMPOLINE(n)						    \
 exc_##n##_bad_stack:							    \
@@ -479,7 +457,6 @@ exc_##n##_bad_stack:							    \
 	START_EXCEPTION(label);						\
 	NORMAL_EXCEPTION_PROLOG(trapnum, intnum, PROLOG_ADDITION_MASKABLE)\
 	EXCEPTION_COMMON(trapnum)					\
-	INTS_DISABLE;							\
 	ack(r8);							\
 	CHECK_NAPPING();						\
 	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
@@ -559,7 +536,6 @@ __end_interrupts:
 	mfspr	r14,SPRN_DEAR
 	mfspr	r15,SPRN_ESR
 	EXCEPTION_COMMON(0x300)
-	INTS_DISABLE
 	b	storage_fault_common
 
 /* Instruction Storage Interrupt */
@@ -569,7 +545,6 @@ __end_interrupts:
 	li	r15,0
 	mr	r14,r10
 	EXCEPTION_COMMON(0x400)
-	INTS_DISABLE
 	b	storage_fault_common
 
 /* External Input Interrupt */
@@ -591,7 +566,6 @@ __end_interrupts:
 				PROLOG_ADDITION_1REG)
 	mfspr	r14,SPRN_ESR
 	EXCEPTION_COMMON(0x700)
-	INTS_DISABLE
 	std	r14,_DSISR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXGEN+EX_R14(r13)
@@ -610,8 +584,7 @@ __end_interrupts:
 	beq-	1f
 	bl	load_up_fpu
 	b	fast_interrupt_return
-1:	INTS_DISABLE
-	addi	r3,r1,STACK_FRAME_OVERHEAD
+1:	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	kernel_fp_unavailable_exception
 	b	interrupt_return
 
@@ -631,7 +604,6 @@ BEGIN_FTR_SECTION
 1:
 END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 #endif
-	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	altivec_unavailable_exception
 	b	interrupt_return
@@ -642,7 +614,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 				BOOKE_INTERRUPT_ALTIVEC_ASSIST,
 				PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x220)
-	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 #ifdef CONFIG_ALTIVEC
 BEGIN_FTR_SECTION
@@ -691,7 +662,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
 	NORMAL_EXCEPTION_PROLOG(0xf20, BOOKE_INTERRUPT_AP_UNAVAIL,
 				PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0xf20)
-	INTS_DISABLE
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	unknown_exception
 	b	interrupt_return
@@ -827,7 +797,6 @@ kernel_dbg_exc:
 	 */
 	mfspr	r14,SPRN_DBSR
 	EXCEPTION_COMMON_DBG(0xd08)
-	INTS_DISABLE
 	std	r14,_DSISR(r1)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXDBG+EX_R14(r13)
@@ -840,7 +809,6 @@ kernel_dbg_exc:
 	NORMAL_EXCEPTION_PROLOG(0x260, BOOKE_INTERRUPT_PERFORMANCE_MONITOR,
 				PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x260)
-	INTS_DISABLE
 	CHECK_NAPPING()
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	bl	performance_monitor_exception
@@ -870,7 +838,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x2c0)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	interrupt_return
 
@@ -891,7 +858,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x310)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	interrupt_return
 
@@ -901,7 +867,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x320)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	interrupt_return
 
@@ -911,7 +876,6 @@ kernel_dbg_exc:
 			        PROLOG_ADDITION_NONE)
 	EXCEPTION_COMMON(0x340)
 	addi	r3,r1,STACK_FRAME_OVERHEAD
-	INTS_RESTORE_HARD
 	bl	unknown_exception
 	b	interrupt_return
 
@@ -991,7 +955,6 @@ alignment_more:
 	addi	r3,r1,STACK_FRAME_OVERHEAD
 	ld	r14,PACA_EXGEN+EX_R14(r13)
 	ld	r15,PACA_EXGEN+EX_R15(r13)
-	INTS_RESTORE_HARD
 	bl	alignment_exception
 	REST_NVGPRS(r1)
 	b	interrupt_return
-- 
2.23.0



* [PATCH 06/10] powerpc/64e/interrupt: Use new interrupt context tracking scheme
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (4 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 05/10] powerpc/64e/interrupt: reconcile " Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C Nicholas Piggin
                   ` (4 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

With the new interrupt exit code, context tracking can be managed
more precisely, so remove the last of the 64e workarounds and switch
to the new context tracking code already used by 64s.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/interrupt.h | 28 ----------------------------
 arch/powerpc/kernel/interrupt.c      | 12 ------------
 2 files changed, 40 deletions(-)

diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 29b48d083156..94fd8e1ff52c 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -10,9 +10,6 @@
 #include <asm/runlatch.h>
 
 struct interrupt_state {
-#ifdef CONFIG_PPC_BOOK3E_64
-	enum ctx_state ctx_state;
-#endif
 };
 
 static inline void booke_restore_dbcr0(void)
@@ -45,9 +42,7 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 	if (irq_soft_mask_set_return(IRQS_ALL_DISABLED) == IRQS_ENABLED)
 		trace_hardirqs_off();
 	local_paca->irq_happened |= PACA_IRQ_HARD_DIS;
-#endif
 
-#ifdef CONFIG_PPC_BOOK3S_64
 	if (user_mode(regs)) {
 		CT_WARN_ON(ct_state() != CONTEXT_USER);
 		user_exit_irqoff();
@@ -64,12 +59,6 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
 	}
 #endif
 
-#ifdef CONFIG_PPC_BOOK3E_64
-	state->ctx_state = exception_enter();
-	if (user_mode(regs))
-		account_cpu_user_entry();
-#endif
-
 	booke_restore_dbcr0();
 }
 
@@ -89,25 +78,8 @@ static inline void interrupt_enter_prepare(struct pt_regs *regs, struct interrup
  */
 static inline void interrupt_exit_prepare(struct pt_regs *regs, struct interrupt_state *state)
 {
-#ifdef CONFIG_PPC_BOOK3E_64
-	exception_exit(state->ctx_state);
-#endif
-
 	if (user_mode(regs))
 		kuep_unlock();
-	/*
-	 * Book3S exits to user via interrupt_exit_user_prepare(), which does
-	 * context tracking, which is a cleaner way to handle PREEMPT=y
-	 * and avoid context entry/exit in e.g., preempt_schedule_irq()),
-	 * which is likely to be where the core code wants to end up.
-	 *
-	 * The above comment explains why we can't do the
-	 *
-	 *     if (user_mode(regs))
-	 *         user_exit_irqoff();
-	 *
-	 * sequence here.
-	 */
 }
 
 static inline void interrupt_async_enter_prepare(struct pt_regs *regs, struct interrupt_state *state)
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index ae7b058b2970..2a017d98973a 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -235,10 +235,6 @@ static notrace void booke_load_dbcr0(void)
 #endif
 }
 
-/* temporary hack for context tracking, removed in later patch */
-#include <linux/sched/debug.h>
-asmlinkage __visible void __sched schedule_user(void);
-
 /*
  * This should be called after a syscall returns, with r3 the return value
  * from the syscall. If this function returns non-zero, the system call
@@ -296,11 +292,7 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
 		local_irq_enable();
 		if (ti_flags & _TIF_NEED_RESCHED) {
-#ifdef CONFIG_PPC_BOOK3E_64
-			schedule_user();
-#else
 			schedule();
-#endif
 		} else {
 			/*
 			 * SIGPENDING must restore signal handler function
@@ -396,11 +388,7 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
 		local_irq_enable(); /* returning to user: may enable */
 		if (ti_flags & _TIF_NEED_RESCHED) {
-#ifdef CONFIG_PPC_BOOK3E_64
-			schedule_user();
-#else
 			schedule();
-#endif
 		} else {
 			if (ti_flags & _TIF_SIGPENDING)
 				ret |= _TIF_RESTOREALL;
-- 
2.23.0



* [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (5 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 06/10] powerpc/64e/interrupt: Use new interrupt context tracking scheme Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15 14:07   ` Christophe Leroy
  2021-03-15  3:17 ` [PATCH 08/10] powerpc: clean up do_page_fault Nicholas Piggin
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

With non-volatile registers saved on interrupt, bad_page_fault
can now be called by do_page_fault.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/kernel/exceptions-64e.S | 6 ------
 arch/powerpc/mm/fault.c              | 5 +----
 2 files changed, 1 insertion(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
index a059ab3542c2..b08c84e0fa56 100644
--- a/arch/powerpc/kernel/exceptions-64e.S
+++ b/arch/powerpc/kernel/exceptions-64e.S
@@ -937,12 +937,6 @@ storage_fault_common:
 	ld	r14,PACA_EXGEN+EX_R14(r13)
 	ld	r15,PACA_EXGEN+EX_R15(r13)
 	bl	do_page_fault
-	cmpdi	r3,0
-	bne-	1f
-	b	interrupt_return
-	mr	r4,r3
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	bl	__bad_page_fault
 	b	interrupt_return
 
 /*
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 2e54bac99a22..44833660b21d 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -552,12 +552,9 @@ static long __do_page_fault(struct pt_regs *regs)
 	if (likely(entry)) {
 		instruction_pointer_set(regs, extable_fixup(entry));
 		return 0;
-	} else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
+	} else {
 		__bad_page_fault(regs, err);
 		return 0;
-	} else {
-		/* 32 and 64e handle the bad page fault in asm */
-		return err;
 	}
 }
 NOKPROBE_SYMBOL(__do_page_fault);
-- 
2.23.0



* [PATCH 08/10] powerpc: clean up do_page_fault
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (6 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 09/10] powerpc: remove partial register save logic Nicholas Piggin
                   ` (2 subsequent siblings)
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

The search_exception_tables + __bad_page_fault sequence can be replaced
by bad_page_fault, and do_page_fault no longer needs to return a value
to asm on any sub-architecture, so it can be cleaned up accordingly.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/bug.h        |  4 +---
 arch/powerpc/include/asm/interrupt.h  |  2 +-
 arch/powerpc/mm/book3s64/hash_utils.c | 16 +++++++---------
 arch/powerpc/mm/fault.c               | 25 +++++++------------------
 4 files changed, 16 insertions(+), 31 deletions(-)

diff --git a/arch/powerpc/include/asm/bug.h b/arch/powerpc/include/asm/bug.h
index d1635ffbb179..d02c93e30d4a 100644
--- a/arch/powerpc/include/asm/bug.h
+++ b/arch/powerpc/include/asm/bug.h
@@ -111,11 +111,9 @@
 #ifndef __ASSEMBLY__
 
 struct pt_regs;
-long do_page_fault(struct pt_regs *);
-long hash__do_page_fault(struct pt_regs *);
+void hash__do_page_fault(struct pt_regs *);
 void bad_page_fault(struct pt_regs *, int);
 void __bad_page_fault(struct pt_regs *regs, int sig);
-void do_bad_page_fault_segv(struct pt_regs *regs);
 extern void _exception(int, struct pt_regs *, int, unsigned long);
 extern void _exception_pkey(struct pt_regs *, unsigned long, int);
 extern void die(const char *, struct pt_regs *, long);
diff --git a/arch/powerpc/include/asm/interrupt.h b/arch/powerpc/include/asm/interrupt.h
index 94fd8e1ff52c..bd0bd9430f78 100644
--- a/arch/powerpc/include/asm/interrupt.h
+++ b/arch/powerpc/include/asm/interrupt.h
@@ -420,7 +420,7 @@ DECLARE_INTERRUPT_HANDLER(do_bad_slb_fault);
 DECLARE_INTERRUPT_HANDLER_RAW(do_hash_fault);
 
 /* fault.c */
-DECLARE_INTERRUPT_HANDLER_RET(do_page_fault);
+DECLARE_INTERRUPT_HANDLER(do_page_fault);
 DECLARE_INTERRUPT_HANDLER(do_bad_page_fault_segv);
 
 /* process.c */
diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 581b20a2feaf..1c4b0a29f0f5 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -1572,10 +1572,11 @@ DEFINE_INTERRUPT_HANDLER_RET(__do_hash_fault)
 DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
 {
 	unsigned long dsisr = regs->dsisr;
-	long err;
 
-	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT)))
-		goto page_fault;
+	if (unlikely(dsisr & (DSISR_BAD_FAULT_64S | DSISR_KEYFAULT))) {
+		hash__do_page_fault(regs);
+		return 0;
+	}
 
 	/*
 	 * If we are in an "NMI" (e.g., an interrupt when soft-disabled), then
@@ -1595,13 +1596,10 @@ DEFINE_INTERRUPT_HANDLER_RAW(do_hash_fault)
 		return 0;
 	}
 
-	err = __do_hash_fault(regs);
-	if (err) {
-page_fault:
-		err = hash__do_page_fault(regs);
-	}
+	if (__do_hash_fault(regs))
+		hash__do_page_fault(regs);
 
-	return err;
+	return 0;
 }
 
 #ifdef CONFIG_PPC_MM_SLICES
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 44833660b21d..d4e66ec78189 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -539,36 +539,25 @@ static int ___do_page_fault(struct pt_regs *regs, unsigned long address,
 }
 NOKPROBE_SYMBOL(___do_page_fault);
 
-static long __do_page_fault(struct pt_regs *regs)
+static __always_inline void __do_page_fault(struct pt_regs *regs)
 {
-	const struct exception_table_entry *entry;
 	long err;
 
 	err = ___do_page_fault(regs, regs->dar, regs->dsisr);
-	if (likely(!err))
-		return err;
-
-	entry = search_exception_tables(regs->nip);
-	if (likely(entry)) {
-		instruction_pointer_set(regs, extable_fixup(entry));
-		return 0;
-	} else {
-		__bad_page_fault(regs, err);
-		return 0;
-	}
+	if (unlikely(err))
+		bad_page_fault(regs, err);
 }
-NOKPROBE_SYMBOL(__do_page_fault);
 
-DEFINE_INTERRUPT_HANDLER_RET(do_page_fault)
+DEFINE_INTERRUPT_HANDLER(do_page_fault)
 {
-	return __do_page_fault(regs);
+	__do_page_fault(regs);
 }
 
 #ifdef CONFIG_PPC_BOOK3S_64
 /* Same as do_page_fault but interrupt entry has already run in do_hash_fault */
-long hash__do_page_fault(struct pt_regs *regs)
+void hash__do_page_fault(struct pt_regs *regs)
 {
-	return __do_page_fault(regs);
+	__do_page_fault(regs);
 }
 NOKPROBE_SYMBOL(hash__do_page_fault);
 #endif
-- 
2.23.0



* [PATCH 09/10] powerpc: remove partial register save logic
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (7 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 08/10] powerpc: clean up do_page_fault Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  3:17 ` [PATCH 10/10] powerpc: move norestart trap flag to bit 0 Nicholas Piggin
  2021-03-22 23:45 ` [PATCH 00/10] Move 64e to new interrupt return code Daniel Axtens
  10 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

All subarchitectures always save all GPRs to pt_regs interrupt frames
now. Remove FULL_REGS and associated bits.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/ptrace.h        | 17 ++---------------
 arch/powerpc/kernel/align.c              |  6 ------
 arch/powerpc/kernel/interrupt.c          |  3 ---
 arch/powerpc/kernel/process.c            | 12 ------------
 arch/powerpc/kernel/ptrace/ptrace-view.c | 21 ---------------------
 arch/powerpc/kernel/ptrace/ptrace.c      |  2 --
 arch/powerpc/kernel/ptrace/ptrace32.c    |  4 ----
 arch/powerpc/kernel/signal_32.c          |  3 ---
 arch/powerpc/kernel/signal_64.c          |  2 --
 arch/powerpc/kernel/traps.c              |  1 -
 arch/powerpc/lib/sstep.c                 |  4 ----
 arch/powerpc/xmon/xmon.c                 | 23 +++++++----------------
 12 files changed, 9 insertions(+), 89 deletions(-)

diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index c5b3669918f4..91194fdd5d01 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -188,29 +188,16 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #ifdef __powerpc64__
 #define TRAP_FLAGS_MASK		0x10
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
-#define FULL_REGS(regs)		true
-#define SET_FULL_REGS(regs)	do { } while (0)
-#define CHECK_FULL_REGS(regs)	do { } while (0)
-#define NV_REG_POISON		0xdeadbeefdeadbeefUL
 #else
 /*
- * We use the least-significant bit of the trap field to indicate
- * whether we have saved the full set of registers, or only a
- * partial set.  A 1 there means the partial set.
- * On 4xx we use the next bit to indicate whether the exception
+ * On 4xx we use bit 1 in the trap word to indicate whether the exception
  * is a critical exception (1 means it is).
  */
-#define TRAP_FLAGS_MASK		0x1F
+#define TRAP_FLAGS_MASK		0x1E
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
-#define FULL_REGS(regs)		true
-#define SET_FULL_REGS(regs)	do { } while (0)
 #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
 #define IS_DEBUG_EXC(regs)	(((regs)->trap & 8) != 0)
-#define NV_REG_POISON		0xdeadbeef
-#define CHECK_FULL_REGS(regs)						      \
-do {									      \
-} while (0)
 #endif /* __powerpc64__ */
 
 static inline void set_trap(struct pt_regs *regs, unsigned long val)
diff --git a/arch/powerpc/kernel/align.c b/arch/powerpc/kernel/align.c
index c7797eb958c7..ae525397947e 100644
--- a/arch/powerpc/kernel/align.c
+++ b/arch/powerpc/kernel/align.c
@@ -299,12 +299,6 @@ int fix_alignment(struct pt_regs *regs)
 	struct instruction_op op;
 	int r, type;
 
-	/*
-	 * We require a complete register set, if not, then our assembly
-	 * is broken
-	 */
-	CHECK_FULL_REGS(regs);
-
 	if (unlikely(__get_user_instr(instr, (void __user *)regs->nip)))
 		return -EFAULT;
 	if ((regs->msr & MSR_LE) != (MSR_KERNEL & MSR_LE)) {
diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
index 2a017d98973a..96ca27ef68ae 100644
--- a/arch/powerpc/kernel/interrupt.c
+++ b/arch/powerpc/kernel/interrupt.c
@@ -51,7 +51,6 @@ notrace long system_call_exception(long r3, long r4, long r5,
 	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x))
 		BUG_ON(!(regs->msr & MSR_RI));
 	BUG_ON(!(regs->msr & MSR_PR));
-	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(arch_irq_disabled_regs(regs));
 
 #ifdef CONFIG_PPC_PKEY
@@ -369,7 +368,6 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
 	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x))
 		BUG_ON(!(regs->msr & MSR_RI));
 	BUG_ON(!(regs->msr & MSR_PR));
-	BUG_ON(!FULL_REGS(regs));
 	BUG_ON(arch_irq_disabled_regs(regs));
 #ifdef CONFIG_PPC_BOOK3S_64
 	CT_WARN_ON(ct_state() == CONTEXT_USER);
@@ -457,7 +455,6 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
 	    unlikely(!(regs->msr & MSR_RI)))
 		unrecoverable_exception(regs);
 	BUG_ON(regs->msr & MSR_PR);
-	BUG_ON(!FULL_REGS(regs));
 	/*
 	 * CT_WARN_ON comes here via program_check_exception,
 	 * so avoid recursion.
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 7989d9ce468b..1e62a70a29aa 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1447,11 +1447,9 @@ static void print_msr_bits(unsigned long val)
 #ifdef CONFIG_PPC64
 #define REG		"%016lx"
 #define REGS_PER_LINE	4
-#define LAST_VOLATILE	13
 #else
 #define REG		"%08lx"
 #define REGS_PER_LINE	8
-#define LAST_VOLATILE	12
 #endif
 
 static void __show_regs(struct pt_regs *regs)
@@ -1487,8 +1485,6 @@ static void __show_regs(struct pt_regs *regs)
 		if ((i % REGS_PER_LINE) == 0)
 			pr_cont("\nGPR%02d: ", i);
 		pr_cont(REG " ", regs->gpr[i]);
-		if (i == LAST_VOLATILE && !FULL_REGS(regs))
-			break;
 	}
 	pr_cont("\n");
 	/*
@@ -1691,7 +1687,6 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
 	} else {
 		/* user thread */
 		struct pt_regs *regs = current_pt_regs();
-		CHECK_FULL_REGS(regs);
 		*childregs = *regs;
 		if (usp)
 			childregs->gpr[1] = usp;
@@ -1796,13 +1791,6 @@ void start_thread(struct pt_regs *regs, unsigned long start, unsigned long sp)
 	regs->ccr = 0;
 	regs->gpr[1] = sp;
 
-	/*
-	 * We have just cleared all the nonvolatile GPRs, so make
-	 * FULL_REGS(regs) return true.  This is necessary to allow
-	 * ptrace to examine the thread immediately after exec.
-	 */
-	SET_FULL_REGS(regs);
-
 #ifdef CONFIG_PPC32
 	regs->mq = 0;
 	regs->nip = start;
diff --git a/arch/powerpc/kernel/ptrace/ptrace-view.c b/arch/powerpc/kernel/ptrace/ptrace-view.c
index 2bad8068f598..8eb826aa2a10 100644
--- a/arch/powerpc/kernel/ptrace/ptrace-view.c
+++ b/arch/powerpc/kernel/ptrace/ptrace-view.c
@@ -221,17 +221,9 @@ static int gpr_get(struct task_struct *target, const struct user_regset *regset,
 #ifdef CONFIG_PPC64
 	struct membuf to_softe = membuf_at(&to, offsetof(struct pt_regs, softe));
 #endif
-	int i;
-
 	if (target->thread.regs == NULL)
 		return -EIO;
 
-	if (!FULL_REGS(target->thread.regs)) {
-		/* We have a partial register set.  Fill 14-31 with bogus values */
-		for (i = 14; i < 32; i++)
-			target->thread.regs->gpr[i] = NV_REG_POISON;
-	}
-
 	membuf_write(&to, target->thread.regs, sizeof(struct user_pt_regs));
 
 	membuf_store(&to_msr, get_user_msr(target));
@@ -252,8 +244,6 @@ static int gpr_set(struct task_struct *target, const struct user_regset *regset,
 	if (target->thread.regs == NULL)
 		return -EIO;
 
-	CHECK_FULL_REGS(target->thread.regs);
-
 	ret = user_regset_copyin(&pos, &count, &kbuf, &ubuf,
 				 target->thread.regs,
 				 0, PT_MSR * sizeof(reg));
@@ -723,19 +713,9 @@ static int gpr32_get(struct task_struct *target,
 		     const struct user_regset *regset,
 		     struct membuf to)
 {
-	int i;
-
 	if (target->thread.regs == NULL)
 		return -EIO;
 
-	if (!FULL_REGS(target->thread.regs)) {
-		/*
-		 * We have a partial register set.
-		 * Fill 14-31 with bogus values.
-		 */
-		for (i = 14; i < 32; i++)
-			target->thread.regs->gpr[i] = NV_REG_POISON;
-	}
 	return gpr32_get_common(target, regset, to,
 			&target->thread.regs->gpr[0]);
 }
@@ -748,7 +728,6 @@ static int gpr32_set(struct task_struct *target,
 	if (target->thread.regs == NULL)
 		return -EIO;
 
-	CHECK_FULL_REGS(target->thread.regs);
 	return gpr32_set_common(target, regset, pos, count, kbuf, ubuf,
 			&target->thread.regs->gpr[0]);
 }
diff --git a/arch/powerpc/kernel/ptrace/ptrace.c b/arch/powerpc/kernel/ptrace/ptrace.c
index 4f3d4ff3728c..f59883902b35 100644
--- a/arch/powerpc/kernel/ptrace/ptrace.c
+++ b/arch/powerpc/kernel/ptrace/ptrace.c
@@ -59,7 +59,6 @@ long arch_ptrace(struct task_struct *child, long request,
 		if ((addr & (sizeof(long) - 1)) || !child->thread.regs)
 			break;
 
-		CHECK_FULL_REGS(child->thread.regs);
 		if (index < PT_FPR0)
 			ret = ptrace_get_reg(child, (int) index, &tmp);
 		else
@@ -81,7 +80,6 @@ long arch_ptrace(struct task_struct *child, long request,
 		if ((addr & (sizeof(long) - 1)) || !child->thread.regs)
 			break;
 
-		CHECK_FULL_REGS(child->thread.regs);
 		if (index < PT_FPR0)
 			ret = ptrace_put_reg(child, index, data);
 		else
diff --git a/arch/powerpc/kernel/ptrace/ptrace32.c b/arch/powerpc/kernel/ptrace/ptrace32.c
index d30b9ad70edc..19c224808982 100644
--- a/arch/powerpc/kernel/ptrace/ptrace32.c
+++ b/arch/powerpc/kernel/ptrace/ptrace32.c
@@ -83,7 +83,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 		if ((addr & 3) || (index > PT_FPSCR32))
 			break;
 
-		CHECK_FULL_REGS(child->thread.regs);
 		if (index < PT_FPR0) {
 			ret = ptrace_get_reg(child, index, &tmp);
 			if (ret)
@@ -133,7 +132,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 		if ((addr & 3) || numReg > PT_FPSCR)
 			break;
 
-		CHECK_FULL_REGS(child->thread.regs);
 		if (numReg >= PT_FPR0) {
 			flush_fp_to_thread(child);
 			/* get 64 bit FPR */
@@ -187,7 +185,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 		if ((addr & 3) || (index > PT_FPSCR32))
 			break;
 
-		CHECK_FULL_REGS(child->thread.regs);
 		if (index < PT_FPR0) {
 			ret = ptrace_put_reg(child, index, data);
 		} else {
@@ -226,7 +223,6 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
 		 */
 		if ((addr & 3) || (numReg > PT_FPSCR))
 			break;
-		CHECK_FULL_REGS(child->thread.regs);
 		if (numReg < PT_FPR0) {
 			unsigned long freg;
 			ret = ptrace_get_reg(child, numReg, &freg);
diff --git a/arch/powerpc/kernel/signal_32.c b/arch/powerpc/kernel/signal_32.c
index 75ee918a120a..73551237de9d 100644
--- a/arch/powerpc/kernel/signal_32.c
+++ b/arch/powerpc/kernel/signal_32.c
@@ -99,8 +99,6 @@ save_general_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame)
 	elf_greg_t64 *gregs = (elf_greg_t64 *)regs;
 	int val, i;
 
-	WARN_ON(!FULL_REGS(regs));
-
 	for (i = 0; i <= PT_RESULT; i ++) {
 		/* Force usr to alway see softe as 1 (interrupts enabled) */
 		if (i == PT_SOFTE)
@@ -153,7 +151,6 @@ static inline int get_sigset_t(sigset_t *set, const sigset_t __user *uset)
 static __always_inline int
 save_general_regs_unsafe(struct pt_regs *regs, struct mcontext __user *frame)
 {
-	WARN_ON(!FULL_REGS(regs));
 	unsafe_copy_to_user(&frame->mc_gregs, regs, GP_REGS_SIZE, failed);
 	return 0;
 
diff --git a/arch/powerpc/kernel/signal_64.c b/arch/powerpc/kernel/signal_64.c
index f9e4a1ac440f..0e3637722e97 100644
--- a/arch/powerpc/kernel/signal_64.c
+++ b/arch/powerpc/kernel/signal_64.c
@@ -160,7 +160,6 @@ static long setup_sigcontext(struct sigcontext __user *sc,
 	}
 #endif /* CONFIG_VSX */
 	err |= __put_user(&sc->gp_regs, &sc->regs);
-	WARN_ON(!FULL_REGS(regs));
 	err |= __copy_to_user(&sc->gp_regs, regs, GP_REGS_SIZE);
 	err |= __put_user(msr, &sc->gp_regs[PT_MSR]);
 	err |= __put_user(softe, &sc->gp_regs[PT_SOFTE]);
@@ -294,7 +293,6 @@ static long setup_tm_sigcontexts(struct sigcontext __user *sc,
 
 	err |= __put_user(&sc->gp_regs, &sc->regs);
 	err |= __put_user(&tm_sc->gp_regs, &tm_sc->regs);
-	WARN_ON(!FULL_REGS(regs));
 	err |= __copy_to_user(&tm_sc->gp_regs, regs, GP_REGS_SIZE);
 	err |= __copy_to_user(&sc->gp_regs,
 			      &tsk->thread.ckpt_regs, GP_REGS_SIZE);
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 97b5f3d83ff7..6c62e4e87979 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1319,7 +1319,6 @@ static int emulate_instruction(struct pt_regs *regs)
 
 	if (!user_mode(regs))
 		return -EINVAL;
-	CHECK_FULL_REGS(regs);
 
 	if (get_user(instword, (u32 __user *)(regs->nip)))
 		return -EFAULT;
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 739ea6dc461c..45bda2520755 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -1401,10 +1401,6 @@ int analyse_instr(struct instruction_op *op, const struct pt_regs *regs,
 		break;
 	}
 
-	/* Following cases refer to regs->gpr[], so we need all regs */
-	if (!FULL_REGS(regs))
-		return -1;
-
 	rd = (word >> 21) & 0x1f;
 	ra = (word >> 16) & 0x1f;
 	rb = (word >> 11) & 0x1f;
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 3fe37495f63d..42a2c831d87a 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -1815,25 +1815,16 @@ static void prregs(struct pt_regs *fp)
 	}
 
 #ifdef CONFIG_PPC64
-	if (FULL_REGS(fp)) {
-		for (n = 0; n < 16; ++n)
-			printf("R%.2d = "REG"   R%.2d = "REG"\n",
-			       n, fp->gpr[n], n+16, fp->gpr[n+16]);
-	} else {
-		for (n = 0; n < 7; ++n)
-			printf("R%.2d = "REG"   R%.2d = "REG"\n",
-			       n, fp->gpr[n], n+7, fp->gpr[n+7]);
-	}
+#define R_PER_LINE 2
 #else
+#define R_PER_LINE 4
+#endif
+
 	for (n = 0; n < 32; ++n) {
-		printf("R%.2d = %.8lx%s", n, fp->gpr[n],
-		       (n & 3) == 3? "\n": "   ");
-		if (n == 12 && !FULL_REGS(fp)) {
-			printf("\n");
-			break;
-		}
+		printf("R%.2d = "REG"%s", n, fp->gpr[n],
+			(n % R_PER_LINE) == R_PER_LINE - 1 ? "\n" : "   ");
 	}
-#endif
+
 	printf("pc  = ");
 	xmon_print_symbol(fp->nip, " ", "\n");
 	if (!trap_is_syscall(fp) && cpu_has_feature(CPU_FTR_CFAR)) {
-- 
2.23.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 10/10] powerpc: move norestart trap flag to bit 0
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (8 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 09/10] powerpc: remove partial register save logic Nicholas Piggin
@ 2021-03-15  3:17 ` Nicholas Piggin
  2021-03-15  8:14   ` Christophe Leroy
  2021-03-22 23:45 ` [PATCH 00/10] Move 64e to new interrupt return code Daniel Axtens
  10 siblings, 1 reply; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-15  3:17 UTC (permalink / raw)
  To: linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

Compact the trap flags down to use the low 4 bits of regs.trap.

A few 64e interrupt trap numbers set bit 4. Although they tended to be
trivial so it wasn't a real problem[*], it is not the right thing to do,
and it is confusing.

[*] E.g., 0x310 hypercall goes to unknown_exception, which prints
    regs->trap directly so 0x310 will appear fine, and only the syscall
    interrupt will test norestart, so it won't be confused by 0x310.
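
For illustration, a minimal user-space sketch of the flag packing (the
macro names mirror ptrace.h, but this is a standalone example, not the
kernel code, and it takes a bare trap word rather than a pt_regs):

```c
/* Interrupt vectors have a minimum alignment of 0x10, so the low 4 bits
 * of the trap word are free to carry flags; bit 0 is "norestart". */
#define TRAP_FLAGS_MASK	0xfUL
#define TRAP(trap)	((trap) & ~TRAP_FLAGS_MASK)

static inline unsigned long set_trap_norestart(unsigned long trap)
{
	return trap | 0x1;	/* mark: do not restart this syscall */
}

static inline int trap_norestart(unsigned long trap)
{
	return (trap & 0x1) != 0;
}
```

Masking with TRAP() means the flag bits never leak into the reported
trap number.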

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/ptrace.h | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
index 91194fdd5d01..6a04abfe5eb6 100644
--- a/arch/powerpc/include/asm/ptrace.h
+++ b/arch/powerpc/include/asm/ptrace.h
@@ -185,15 +185,21 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
 #define current_pt_regs() \
 	((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)
 
+/*
+ * The 4 low bits (0xf) are available as flags to overload the trap word,
+ * because interrupt vectors have minimum alignment of 0x10. TRAP_FLAGS_MASK
+ * must cover the bits used as flags, including bit 0 which is used as the
+ * "norestart" bit.
+ */
 #ifdef __powerpc64__
-#define TRAP_FLAGS_MASK		0x10
+#define TRAP_FLAGS_MASK		0x1
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
 #else
 /*
  * On 4xx we use bit 1 in the trap word to indicate whether the exception
  * is a critical exception (1 means it is).
  */
-#define TRAP_FLAGS_MASK		0x1E
+#define TRAP_FLAGS_MASK		0xf
 #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
 #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
 #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
@@ -222,12 +228,12 @@ static inline bool trap_is_syscall(struct pt_regs *regs)
 
 static inline bool trap_norestart(struct pt_regs *regs)
 {
-	return regs->trap & 0x10;
+	return regs->trap & 0x1;
 }
 
 static inline void set_trap_norestart(struct pt_regs *regs)
 {
-	regs->trap |= 0x10;
+	regs->trap |= 0x1;
 }
 
 #define arch_has_single_step()	(1)
-- 
2.23.0



* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15  3:17 ` [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return Nicholas Piggin
@ 2021-03-15  7:50   ` Christophe Leroy
  2021-03-15  8:20     ` Christophe Leroy
  2021-03-16  7:03     ` Nicholas Piggin
  2021-03-15 13:30   ` Christophe Leroy
  1 sibling, 2 replies; 25+ messages in thread
From: Christophe Leroy @ 2021-03-15  7:50 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



On 15/03/2021 at 04:17, Nicholas Piggin wrote:
> Update the new C and asm interrupt return code to account for 64e
> specifics, switch over to use it.
> 
> The now-unused old ret_from_except code, that was moved to 64e after the
> 64s conversion, is removed.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>   arch/powerpc/kernel/entry_64.S            |   9 +-
>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>   arch/powerpc/kernel/interrupt.c           |  27 +-
>   arch/powerpc/kernel/irq.c                 |  76 -----
>   5 files changed, 56 insertions(+), 379 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
> index 939f3c94c8f3..1c7b75834e04 100644
> --- a/arch/powerpc/include/asm/asm-prototypes.h
> +++ b/arch/powerpc/include/asm/asm-prototypes.h
> @@ -77,8 +77,6 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>   long ppc_fadvise64_64(int fd, int advice, u32 offset_high, u32 offset_low,
>   		      u32 len_high, u32 len_low);
>   long sys_switch_endian(void);
> -notrace unsigned int __check_irq_replay(void);
> -void notrace restore_interrupts(void);
>   
>   /* prom_init (OpenFirmware) */
>   unsigned long __init prom_init(unsigned long r3, unsigned long r4,
> diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
> index 853534b2ae2e..555b3d0a3f38 100644
> --- a/arch/powerpc/kernel/entry_64.S
> +++ b/arch/powerpc/kernel/entry_64.S
> @@ -632,7 +632,6 @@ END_FTR_SECTION_IFCLR(CPU_FTR_ARCH_207S)
>   	addi	r1,r1,SWITCH_FRAME_SIZE
>   	blr
>   
> -#ifdef CONFIG_PPC_BOOK3S
>   	/*
>   	 * If MSR EE/RI was never enabled, IRQs not reconciled, NVGPRs not
>   	 * touched, no exit work created, then this can be used.
> @@ -644,6 +643,7 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
>   	kuap_check_amr r3, r4
>   	ld	r5,_MSR(r1)
>   	andi.	r0,r5,MSR_PR
> +#ifdef CONFIG_PPC_BOOK3S
>   	bne	.Lfast_user_interrupt_return_amr
>   	kuap_kernel_restore r3, r4
>   	andi.	r0,r5,MSR_RI
> @@ -652,6 +652,10 @@ _ASM_NOKPROBE_SYMBOL(fast_interrupt_return)
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	unrecoverable_exception
>   	b	. /* should not get here */
> +#else
> +	bne	.Lfast_user_interrupt_return
> +	b	.Lfast_kernel_interrupt_return
> +#endif
>   
>   	.balign IFETCH_ALIGN_BYTES
>   	.globl interrupt_return
> @@ -665,8 +669,10 @@ _ASM_NOKPROBE_SYMBOL(interrupt_return)
>   	cmpdi	r3,0
>   	bne-	.Lrestore_nvgprs
>   
> +#ifdef CONFIG_PPC_BOOK3S
>   .Lfast_user_interrupt_return_amr:
>   	kuap_user_restore r3, r4
> +#endif
>   .Lfast_user_interrupt_return:
>   	ld	r11,_NIP(r1)
>   	ld	r12,_MSR(r1)
> @@ -775,7 +781,6 @@ ALT_FTR_SECTION_END_IFCLR(CPU_FTR_STCX_CHECKS_ADDRESS)
>   
>   	RFI_TO_KERNEL
>   	b	.	/* prevent speculative execution */
> -#endif /* CONFIG_PPC_BOOK3S */
>   
>   #ifdef CONFIG_PPC_RTAS
>   /*
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index da78eb6ab92f..1bb4e9b37748 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -139,7 +139,8 @@ ret_from_level_except:
>   	ld	r3,_MSR(r1)
>   	andi.	r3,r3,MSR_PR
>   	beq	1f
> -	b	ret_from_except
> +	REST_NVGPRS(r1)

Could this be done in a separate preceding patch? That patch would only add 
the REST_NVGPRS(); the call to ret_from_except could remain as is, by removing 
the REST_NVGPRS() which is there to make ret_from_except and 
ret_from_except_lite identical.

Or maybe you could also do the name change to interrupt_return in that 
preceding patch, so that the "use new interrupt return" patch only contains 
the interesting parts.

> +	b	interrupt_return
>   1:
>   
>   	LOAD_REG_ADDR(r11,extlb_level_exc)
> @@ -208,7 +209,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_EMB_HV)
>   	/*
>   	 * Restore PACAIRQHAPPENED rather than setting it based on
>   	 * the return MSR[EE], since we could have interrupted
> -	 * __check_irq_replay() or other inconsistent transitory
> +	 * interrupt replay or other inconsistent transitory
>   	 * states that must remain that way.
>   	 */
>   	SPECIAL_EXC_LOAD(r10,IRQHAPPENED)
> @@ -511,7 +512,7 @@ exc_##n##_bad_stack:							    \
>   	CHECK_NAPPING();						\
>   	addi	r3,r1,STACK_FRAME_OVERHEAD;				\
>   	bl	hdlr;							\
> -	b	ret_from_except_lite;
> +	b	interrupt_return
>   
>   /* This value is used to mark exception frames on the stack. */
>   	.section	".toc","aw"
> @@ -623,7 +624,8 @@ __end_interrupts:
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	ld	r14,PACA_EXGEN+EX_R14(r13)
>   	bl	program_check_exception
> -	b	ret_from_except
> +	REST_NVGPRS(r1)
> +	b	interrupt_return
>   
>   /* Floating Point Unavailable Interrupt */
>   	START_EXCEPTION(fp_unavailable);
> @@ -635,11 +637,11 @@ __end_interrupts:
>   	andi.	r0,r12,MSR_PR;
>   	beq-	1f
>   	bl	load_up_fpu
> -	b	fast_exception_return
> +	b	fast_interrupt_return
>   1:	INTS_DISABLE
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	kernel_fp_unavailable_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* Altivec Unavailable Interrupt */
>   	START_EXCEPTION(altivec_unavailable);
> @@ -653,14 +655,14 @@ BEGIN_FTR_SECTION
>   	andi.	r0,r12,MSR_PR;
>   	beq-	1f
>   	bl	load_up_altivec
> -	b	fast_exception_return
> +	b	fast_interrupt_return
>   1:
>   END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>   #endif
>   	INTS_DISABLE
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	altivec_unavailable_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* AltiVec Assist */
>   	START_EXCEPTION(altivec_assist);
> @@ -674,10 +676,11 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>   BEGIN_FTR_SECTION
>   	bl	altivec_assist_exception
>   END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
> +	REST_NVGPRS(r1)
>   #else
>   	bl	unknown_exception
>   #endif
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   
>   /* Decrementer Interrupt */
> @@ -719,7 +722,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>   	INTS_DISABLE
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	unknown_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* Debug exception as a critical interrupt*/
>   	START_EXCEPTION(debug_crit);
> @@ -786,7 +789,8 @@ END_FTR_SECTION_IFSET(CPU_FTR_ALTIVEC)
>   	ld	r14,PACA_EXCRIT+EX_R14(r13)
>   	ld	r15,PACA_EXCRIT+EX_R15(r13)
>   	bl	DebugException
> -	b	ret_from_except
> +	REST_NVGPRS(r1)
> +	b	interrupt_return
>   
>   kernel_dbg_exc:
>   	b	.	/* NYI */
> @@ -857,7 +861,8 @@ kernel_dbg_exc:
>   	ld	r14,PACA_EXDBG+EX_R14(r13)
>   	ld	r15,PACA_EXDBG+EX_R15(r13)
>   	bl	DebugException
> -	b	ret_from_except
> +	REST_NVGPRS(r1)
> +	b	interrupt_return
>   
>   	START_EXCEPTION(perfmon);
>   	NORMAL_EXCEPTION_PROLOG(0x260, BOOKE_INTERRUPT_PERFORMANCE_MONITOR,
> @@ -867,7 +872,7 @@ kernel_dbg_exc:
>   	CHECK_NAPPING()
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	performance_monitor_exception
> -	b	ret_from_except_lite
> +	b	interrupt_return
>   
>   /* Doorbell interrupt */
>   	MASKABLE_EXCEPTION(0x280, BOOKE_INTERRUPT_DOORBELL,
> @@ -895,7 +900,7 @@ kernel_dbg_exc:
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	INTS_RESTORE_HARD
>   	bl	unknown_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* Guest Doorbell critical Interrupt */
>   	START_EXCEPTION(guest_doorbell_crit);
> @@ -916,7 +921,7 @@ kernel_dbg_exc:
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	INTS_RESTORE_HARD
>   	bl	unknown_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* Embedded Hypervisor priviledged  */
>   	START_EXCEPTION(ehpriv);
> @@ -926,7 +931,7 @@ kernel_dbg_exc:
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	INTS_RESTORE_HARD
>   	bl	unknown_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /* LRAT Error interrupt */
>   	START_EXCEPTION(lrat_error);
> @@ -936,7 +941,7 @@ kernel_dbg_exc:
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	INTS_RESTORE_HARD
>   	bl	unknown_exception
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /*
>    * An interrupt came in while soft-disabled; We mark paca->irq_happened
> @@ -998,11 +1003,11 @@ storage_fault_common:
>   	bl	do_page_fault
>   	cmpdi	r3,0
>   	bne-	1f
> -	b	ret_from_except_lite
> +	b	interrupt_return
>   	mr	r4,r3
>   	addi	r3,r1,STACK_FRAME_OVERHEAD
>   	bl	__bad_page_fault
> -	b	ret_from_except
> +	b	interrupt_return
>   
>   /*
>    * Alignment exception doesn't fit entirely in the 0x100 bytes so it
> @@ -1016,284 +1021,8 @@ alignment_more:

...

> -fast_exception_return:
> -	wrteei	0
> -1:	mr	r0,r13
> -	ld	r10,_MSR(r1)
> -	REST_4GPRS(2, r1)
> -	andi.	r6,r10,MSR_PR
> -	REST_2GPRS(6, r1)
> -	beq	1f
> -	ACCOUNT_CPU_USER_EXIT(r13, r10, r11)

Then ACCOUNT_CPU_USER_EXIT can be removed from asm/ppc_asm.h

...

> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index fbabb49888d3..ae7b058b2970 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>   #endif
>   }
>   
> +/* temporary hack for context tracking, removed in later patch */
> +#include <linux/sched/debug.h>
> +asmlinkage __visible void __sched schedule_user(void);
> +
>   /*
>    * This should be called after a syscall returns, with r3 the return value
>    * from the syscall. If this function returns non-zero, the system call
> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>   		local_irq_enable();
>   		if (ti_flags & _TIF_NEED_RESCHED) {
> +#ifdef CONFIG_PPC_BOOK3E_64
> +			schedule_user();
> +#else
>   			schedule();
> +#endif
>   		} else {
>   			/*
>   			 * SIGPENDING must restore signal handler function
> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>   	return ret;
>   }
>   
> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>   notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>   {
>   	unsigned long ti_flags;
> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>   	BUG_ON(!(regs->msr & MSR_PR));
>   	BUG_ON(!FULL_REGS(regs));
>   	BUG_ON(arch_irq_disabled_regs(regs));
> +#ifdef CONFIG_PPC_BOOK3S_64
>   	CT_WARN_ON(ct_state() == CONTEXT_USER);
> +#endif
>   
>   	/*
>   	 * We don't need to restore AMR on the way back to userspace for KUAP.
> @@ -387,7 +396,11 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>   		local_irq_enable(); /* returning to user: may enable */
>   		if (ti_flags & _TIF_NEED_RESCHED) {
> +#ifdef CONFIG_PPC_BOOK3E_64
> +			schedule_user();
> +#else
>   			schedule();
> +#endif
>   		} else {
>   			if (ti_flags & _TIF_SIGPENDING)
>   				ret |= _TIF_RESTOREALL;
> @@ -435,7 +448,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>   	/*
>   	 * We do this at the end so that we do context switch with KERNEL AMR
>   	 */
> +#ifndef CONFIG_PPC_BOOK3E_64
>   	kuap_user_restore(regs);

Why do you need to ifdef this out?
Only PPC_8xx, PPC_BOOK3S_32 and PPC_RADIX_MMU select PPC_HAVE_KUAP.
When PPC_KUAP is not selected, kuap_user_restore() is a static inline {} defined in asm/kup.h
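
The pattern is roughly the following (a simplified sketch of the kup.h
convention; the struct and names here are stand-ins, not the real kernel
declarations):

```c
struct pt_regs { unsigned long kuap; };	/* stand-in for the kernel's */

#ifdef CONFIG_PPC_KUAP
static inline void kuap_user_restore(struct pt_regs *regs)
{
	/* real restore work would go here */
}
#else
/* Feature not configured: an empty static inline the compiler optimises
 * away entirely, so call sites need no #ifdef guards. */
static inline void kuap_user_restore(struct pt_regs *regs) { }
#endif

static int interrupt_exit(struct pt_regs *regs)
{
	kuap_user_restore(regs);	/* no guard needed here */
	return 0;
}
```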

> +#endif
> +
>   	return ret;
>   }
>   
> @@ -445,7 +461,9 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>   {
>   	unsigned long flags;
>   	unsigned long ret = 0;
> +#ifndef CONFIG_PPC_BOOK3E_64
>   	unsigned long kuap;
> +#endif
>   
>   	if (!IS_ENABLED(CONFIG_BOOKE) && !IS_ENABLED(CONFIG_40x) &&
>   	    unlikely(!(regs->msr & MSR_RI)))
> @@ -456,10 +474,12 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>   	 * CT_WARN_ON comes here via program_check_exception,
>   	 * so avoid recursion.
>   	 */
> -	if (TRAP(regs) != 0x700)
> +	if (IS_ENABLED(CONFIG_PPC_BOOK3S) && TRAP(regs) != 0x700)
>   		CT_WARN_ON(ct_state() == CONTEXT_USER);
>   
> +#ifndef CONFIG_PPC_BOOK3E_64
>   	kuap = kuap_get_and_assert_locked();

Same, kuap_get_and_assert_locked() always exists, no need to ifdef it.

> +#endif
>   
>   	if (unlikely(current_thread_info()->flags & _TIF_EMULATE_STACK_STORE)) {
>   		clear_bits(_TIF_EMULATE_STACK_STORE, &current_thread_info()->flags);
> @@ -501,8 +521,9 @@ notrace unsigned long interrupt_exit_kernel_prepare(struct pt_regs *regs, unsign
>   	 * which would cause Read-After-Write stalls. Hence, we take the AMR
>   	 * value from the check above.
>   	 */
> +#ifndef CONFIG_PPC_BOOK3E_64
>   	kuap_kernel_restore(regs, kuap);

Same

> +#endif
>   
>   	return ret;
>   }
> -#endif


* Re: [PATCH 10/10] powerpc: move norestart trap flag to bit 0
  2021-03-15  3:17 ` [PATCH 10/10] powerpc: move norestart trap flag to bit 0 Nicholas Piggin
@ 2021-03-15  8:14   ` Christophe Leroy
  2021-03-16  7:11     ` Nicholas Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2021-03-15  8:14 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



On 15/03/2021 at 04:17, Nicholas Piggin wrote:
> Compact the trap flags down to use the low 4 bits of regs.trap.
> 
> A few 64e interrupt trap numbers set bit 4. Although they tended to be
> trivial so it wasn't a real problem[*], it is not the right thing to do,
> and it is confusing.
> 
> [*] E.g., 0x310 hypercall goes to unknown_exception, which prints
>      regs->trap directly so 0x310 will appear fine, and only the syscall
>      interrupt will test norestart, so it won't be confused by 0x310.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/ptrace.h | 14 ++++++++++----
>   1 file changed, 10 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
> index 91194fdd5d01..6a04abfe5eb6 100644
> --- a/arch/powerpc/include/asm/ptrace.h
> +++ b/arch/powerpc/include/asm/ptrace.h
> @@ -185,15 +185,21 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
>   #define current_pt_regs() \
>   	((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)
>   
> +/*
> + * The 4 low bits (0xf) are available as flags to overload the trap word,
> + * because interrupt vectors have minimum alignment of 0x10. TRAP_FLAGS_MASK
> + * must cover the bits used as flags, including bit 0 which is used as the
> + * "norestart" bit.
> + */
>   #ifdef __powerpc64__
> -#define TRAP_FLAGS_MASK		0x10
> +#define TRAP_FLAGS_MASK		0x1
>   #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>   #else
>   /*
>    * On 4xx we use bit 1 in the trap word to indicate whether the exception
>    * is a critical exception (1 means it is).
>    */
> -#define TRAP_FLAGS_MASK		0x1E
> +#define TRAP_FLAGS_MASK		0xf

Could we set 0xf for all and remove the #ifdef __powerpc64__?

>   #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>   #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
>   #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
> @@ -222,12 +228,12 @@ static inline bool trap_is_syscall(struct pt_regs *regs)
>   
>   static inline bool trap_norestart(struct pt_regs *regs)
>   {
> -	return regs->trap & 0x10;
> +	return regs->trap & 0x1;
>   }
>   
>   static inline void set_trap_norestart(struct pt_regs *regs)
>   {
> -	regs->trap |= 0x10;
> +	regs->trap |= 0x1;
>   }
>   
>   #define arch_has_single_step()	(1)
> 

While we are playing with ->trap: in mm/book3s64/hash_utils.c there is an
if (regs->trap == 0x400). Shouldn't that be TRAP(regs) == 0x400?
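
To illustrate the hazard (standalone sketch, with the constants as in the
patch; 0x400 is the instruction storage interrupt vector):

```c
#define TRAP_FLAGS_MASK	0xfUL
#define TRAP(trap)	((trap) & ~TRAP_FLAGS_MASK)

static int is_isi_raw(unsigned long trap)
{
	return trap == 0x400;		/* misses once a flag bit is set */
}

static int is_isi_masked(unsigned long trap)
{
	return TRAP(trap) == 0x400;	/* correct regardless of flags */
}
```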

Christophe


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15  7:50   ` Christophe Leroy
@ 2021-03-15  8:20     ` Christophe Leroy
  2021-03-16  7:03     ` Nicholas Piggin
  1 sibling, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2021-03-15  8:20 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



On 15/03/2021 at 08:50, Christophe Leroy wrote:
> 
> 
> On 15/03/2021 at 04:17, Nicholas Piggin wrote:
>> Update the new C and asm interrupt return code to account for 64e
>> specifics, switch over to use it.
>>
>> The now-unused old ret_from_except code, that was moved to 64e after the
>> 64s conversion, is removed.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>   arch/powerpc/kernel/entry_64.S            |   9 +-
>>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>   arch/powerpc/kernel/interrupt.c           |  27 +-
>>   arch/powerpc/kernel/irq.c                 |  76 -----
>>   5 files changed, 56 insertions(+), 379 deletions(-)
>>
>> @@ -1016,284 +1021,8 @@ alignment_more:
> 
> ...
> 
>> -fast_exception_return:
>> -    wrteei    0
>> -1:    mr    r0,r13
>> -    ld    r10,_MSR(r1)
>> -    REST_4GPRS(2, r1)
>> -    andi.    r6,r10,MSR_PR
>> -    REST_2GPRS(6, r1)
>> -    beq    1f
>> -    ACCOUNT_CPU_USER_EXIT(r13, r10, r11)
> 
> Then ACCOUNT_CPU_USER_EXIT can be removed from asm/ppc_asm.h
> 

And all associated definitions in asm-offsets.c

And also ACCOUNT_USER_TIME, which was likely left over after the removal of ACCOUNT_CPU_USER_ENTRY().


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15  3:17 ` [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return Nicholas Piggin
  2021-03-15  7:50   ` Christophe Leroy
@ 2021-03-15 13:30   ` Christophe Leroy
  2021-03-16  7:04     ` Nicholas Piggin
  1 sibling, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2021-03-15 13:30 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



On 15/03/2021 at 04:17, Nicholas Piggin wrote:
> Update the new C and asm interrupt return code to account for 64e
> specifics, switch over to use it.
> 
> The now-unused old ret_from_except code, that was moved to 64e after the
> 64s conversion, is removed.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>   arch/powerpc/kernel/entry_64.S            |   9 +-
>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>   arch/powerpc/kernel/interrupt.c           |  27 +-
>   arch/powerpc/kernel/irq.c                 |  76 -----
>   5 files changed, 56 insertions(+), 379 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
> index fbabb49888d3..ae7b058b2970 100644
> --- a/arch/powerpc/kernel/interrupt.c
> +++ b/arch/powerpc/kernel/interrupt.c
> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>   #endif
>   }
>   
> +/* temporary hack for context tracking, removed in later patch */
> +#include <linux/sched/debug.h>
> +asmlinkage __visible void __sched schedule_user(void);
> +
>   /*
>    * This should be called after a syscall returns, with r3 the return value
>    * from the syscall. If this function returns non-zero, the system call
> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>   		local_irq_enable();
>   		if (ti_flags & _TIF_NEED_RESCHED) {
> +#ifdef CONFIG_PPC_BOOK3E_64
> +			schedule_user();
> +#else
>   			schedule();
> +#endif
>   		} else {
>   			/*
>   			 * SIGPENDING must restore signal handler function
> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>   	return ret;
>   }
>   
> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>   notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>   {
>   	unsigned long ti_flags;
> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>   	BUG_ON(!(regs->msr & MSR_PR));
>   	BUG_ON(!FULL_REGS(regs));
>   	BUG_ON(arch_irq_disabled_regs(regs));
> +#ifdef CONFIG_PPC_BOOK3S_64

Shouldn't this go away in patch 6 as well?
Or is that needed at all? In syscall_exit_prepare() it is not ifdefed.

>   	CT_WARN_ON(ct_state() == CONTEXT_USER);
> +#endif
>   
>   	/*
>   	 * We don't need to restore AMR on the way back to userspace for KUAP.


* Re: [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C
  2021-03-15  3:17 ` [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C Nicholas Piggin
@ 2021-03-15 14:07   ` Christophe Leroy
  2021-03-16  7:06     ` Nicholas Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2021-03-15 14:07 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
> With non-volatile registers saved on interrupt, bad_page_fault
> can now be called by do_page_fault.
> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
>   arch/powerpc/kernel/exceptions-64e.S | 6 ------
>   arch/powerpc/mm/fault.c              | 5 +----
>   2 files changed, 1 insertion(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
> index a059ab3542c2..b08c84e0fa56 100644
> --- a/arch/powerpc/kernel/exceptions-64e.S
> +++ b/arch/powerpc/kernel/exceptions-64e.S
> @@ -937,12 +937,6 @@ storage_fault_common:
>   	ld	r14,PACA_EXGEN+EX_R14(r13)
>   	ld	r15,PACA_EXGEN+EX_R15(r13)
>   	bl	do_page_fault
> -	cmpdi	r3,0
> -	bne-	1f
> -	b	interrupt_return
> -	mr	r4,r3
> -	addi	r3,r1,STACK_FRAME_OVERHEAD
> -	bl	__bad_page_fault

Then __bad_page_fault() can be static now.

>   	b	interrupt_return
>   
>   /*
> diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
> index 2e54bac99a22..44833660b21d 100644
> --- a/arch/powerpc/mm/fault.c
> +++ b/arch/powerpc/mm/fault.c
> @@ -552,12 +552,9 @@ static long __do_page_fault(struct pt_regs *regs)
>   	if (likely(entry)) {
>   		instruction_pointer_set(regs, extable_fixup(entry));
>   		return 0;
> -	} else if (!IS_ENABLED(CONFIG_PPC_BOOK3E_64)) {
> +	} else {
>   		__bad_page_fault(regs, err);
>   		return 0;
> -	} else {
> -		/* 32 and 64e handle the bad page fault in asm */
> -		return err;
>   	}
>   }
>   NOKPROBE_SYMBOL(__do_page_fault);
> 


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15  7:50   ` Christophe Leroy
  2021-03-15  8:20     ` Christophe Leroy
@ 2021-03-16  7:03     ` Nicholas Piggin
  1 sibling, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  7:03 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Christophe Leroy's message of March 15, 2021 5:50 pm:
> 
> 
> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>> Update the new C and asm interrupt return code to account for 64e
>> specifics, switch over to use it.
>> 
>> The now-unused old ret_from_except code, that was moved to 64e after the
>> 64s conversion, is removed.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>   arch/powerpc/kernel/entry_64.S            |   9 +-
>>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>   arch/powerpc/kernel/interrupt.c           |  27 +-
>>   arch/powerpc/kernel/irq.c                 |  76 -----
>>   5 files changed, 56 insertions(+), 379 deletions(-)
>> 

...

>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index da78eb6ab92f..1bb4e9b37748 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -139,7 +139,8 @@ ret_from_level_except:
>>   	ld	r3,_MSR(r1)
>>   	andi.	r3,r3,MSR_PR
>>   	beq	1f
>> -	b	ret_from_except
>> +	REST_NVGPRS(r1)
> 
> Could this be in a separate preceding patch (only the adding of REST_NVGPRS(), the call to 
> ret_from_except can remain as is by removing the REST_NVGPRS() which is there to make 
> ret_from_except and ret_from_except_lite identical).
> 
> Or maybe you can also do the name change to interrupt_return in that preceding patch, so that the 
> "use new interrupt return" patch only contains the interesting parts.

I don't like that so much. Maybe the better split is to first change the 
common code to add the 64e bits, and then convert 64e from 
ret_from_except to interrupt_return and remove the old code.

...

>> @@ -1016,284 +1021,8 @@ alignment_more:
> 
> ...
> 
>> -fast_exception_return:
>> -	wrteei	0
>> -1:	mr	r0,r13
>> -	ld	r10,_MSR(r1)
>> -	REST_4GPRS(2, r1)
>> -	andi.	r6,r10,MSR_PR
>> -	REST_2GPRS(6, r1)
>> -	beq	1f
>> -	ACCOUNT_CPU_USER_EXIT(r13, r10, r11)
> 
> Then ACCOUNT_CPU_USER_EXIT can be removed from asm/ppc_asm.h

Will do.

>> @@ -387,7 +396,11 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>   		local_irq_enable(); /* returning to user: may enable */
>>   		if (ti_flags & _TIF_NEED_RESCHED) {
>> +#ifdef CONFIG_PPC_BOOK3E_64
>> +			schedule_user();
>> +#else
>>   			schedule();
>> +#endif
>>   		} else {
>>   			if (ti_flags & _TIF_SIGPENDING)
>>   				ret |= _TIF_RESTOREALL;
>> @@ -435,7 +448,10 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>   	/*
>>   	 * We do this at the end so that we do context switch with KERNEL AMR
>>   	 */
>> +#ifndef CONFIG_PPC_BOOK3E_64
>>   	kuap_user_restore(regs);
> 
> Why do you need to ifdef this out ?
> Only PPC_8xx, PPC_BOOK3S_32 and PPC_RADIX_MMU select PPC_HAVE_KUAP.
> When PPC_KUAP is not selected, kuap_user_restore() is a static inline {} defined in asm/kup.h

It came in from an old patch rebase. I'll get rid of them.

...

Thanks,
Nick


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-15 13:30   ` Christophe Leroy
@ 2021-03-16  7:04     ` Nicholas Piggin
  2021-03-16  7:25       ` Nicholas Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  7:04 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Christophe Leroy's message of March 15, 2021 11:30 pm:
> 
> 
> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>> Update the new C and asm interrupt return code to account for 64e
>> specifics, switch over to use it.
>> 
>> The now-unused old ret_from_except code, that was moved to 64e after the
>> 64s conversion, is removed.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>   arch/powerpc/kernel/entry_64.S            |   9 +-
>>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>   arch/powerpc/kernel/interrupt.c           |  27 +-
>>   arch/powerpc/kernel/irq.c                 |  76 -----
>>   5 files changed, 56 insertions(+), 379 deletions(-)
>> 
>> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
>> index fbabb49888d3..ae7b058b2970 100644
>> --- a/arch/powerpc/kernel/interrupt.c
>> +++ b/arch/powerpc/kernel/interrupt.c
>> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>>   #endif
>>   }
>>   
>> +/* temporary hack for context tracking, removed in later patch */
>> +#include <linux/sched/debug.h>
>> +asmlinkage __visible void __sched schedule_user(void);
>> +
>>   /*
>>    * This should be called after a syscall returns, with r3 the return value
>>    * from the syscall. If this function returns non-zero, the system call
>> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>   		local_irq_enable();
>>   		if (ti_flags & _TIF_NEED_RESCHED) {
>> +#ifdef CONFIG_PPC_BOOK3E_64
>> +			schedule_user();
>> +#else
>>   			schedule();
>> +#endif
>>   		} else {
>>   			/*
>>   			 * SIGPENDING must restore signal handler function
>> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>   	return ret;
>>   }
>>   
>> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>>   notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>>   {
>>   	unsigned long ti_flags;
>> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>   	BUG_ON(!(regs->msr & MSR_PR));
>>   	BUG_ON(!FULL_REGS(regs));
>>   	BUG_ON(arch_irq_disabled_regs(regs));
>> +#ifdef CONFIG_PPC_BOOK3S_64
> 
> Shouldn't this go away in patch 6 as well ?
> Or is that needed at all ? In syscall_exit_prepare() it is not ifdefed .

Hmm, not sure. I'll take a look. It probably shouldn't be ifdefed at all 
but definitely by the end it should run without warning.

Thanks,
Nick


* Re: [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C
  2021-03-15 14:07   ` Christophe Leroy
@ 2021-03-16  7:06     ` Nicholas Piggin
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  7:06 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Christophe Leroy's message of March 16, 2021 12:07 am:
> 
> 
> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>> With non-volatile registers saved on interrupt, bad_page_fault
>> can now be called by do_page_fault.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/kernel/exceptions-64e.S | 6 ------
>>   arch/powerpc/mm/fault.c              | 5 +----
>>   2 files changed, 1 insertion(+), 10 deletions(-)
>> 
>> diff --git a/arch/powerpc/kernel/exceptions-64e.S b/arch/powerpc/kernel/exceptions-64e.S
>> index a059ab3542c2..b08c84e0fa56 100644
>> --- a/arch/powerpc/kernel/exceptions-64e.S
>> +++ b/arch/powerpc/kernel/exceptions-64e.S
>> @@ -937,12 +937,6 @@ storage_fault_common:
>>   	ld	r14,PACA_EXGEN+EX_R14(r13)
>>   	ld	r15,PACA_EXGEN+EX_R15(r13)
>>   	bl	do_page_fault
>> -	cmpdi	r3,0
>> -	bne-	1f
>> -	b	interrupt_return
>> -	mr	r4,r3
>> -	addi	r3,r1,STACK_FRAME_OVERHEAD
>> -	bl	__bad_page_fault
> 
> Then __bad_page_fault() can be static now.

Good point, I'll change it.

Thanks,
Nick


* Re: [PATCH 10/10] powerpc: move norestart trap flag to bit 0
  2021-03-15  8:14   ` Christophe Leroy
@ 2021-03-16  7:11     ` Nicholas Piggin
  2021-03-16  7:13       ` Christophe Leroy
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  7:11 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Christophe Leroy's message of March 15, 2021 6:14 pm:
> 
> 
> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>> Compact the trap flags down to use the low 4 bits of regs.trap.
>> 
>> A few 64e interrupt trap numbers set bit 4. Although they tended to be
>> trivial so it wasn't a real problem[*], it is not the right thing to do,
>> and confusing.
>> 
>> [*] E.g., 0x310 hypercall goes to unknown_exception, which prints
>>      regs->trap directly so 0x310 will appear fine, and only the syscall
>>      interrupt will test norestart, so it won't be confused by 0x310.
>> 
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>>   arch/powerpc/include/asm/ptrace.h | 14 ++++++++++----
>>   1 file changed, 10 insertions(+), 4 deletions(-)
>> 
>> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
>> index 91194fdd5d01..6a04abfe5eb6 100644
>> --- a/arch/powerpc/include/asm/ptrace.h
>> +++ b/arch/powerpc/include/asm/ptrace.h
>> @@ -185,15 +185,21 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
>>   #define current_pt_regs() \
>>   	((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)
>>   
>> +/*
>> + * The 4 low bits (0xf) are available as flags to overload the trap word,
>> + * because interrupt vectors have minimum alignment of 0x10. TRAP_FLAGS_MASK
>> + * must cover the bits used as flags, including bit 0 which is used as the
>> + * "norestart" bit.
>> + */
>>   #ifdef __powerpc64__
>> -#define TRAP_FLAGS_MASK		0x10
>> +#define TRAP_FLAGS_MASK		0x1
>>   #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>>   #else
>>   /*
>>    * On 4xx we use bit 1 in the trap word to indicate whether the exception
>>    * is a critical exception (1 means it is).
>>    */
>> -#define TRAP_FLAGS_MASK		0x1E
>> +#define TRAP_FLAGS_MASK		0xf
> 
> Could we set 0xf for all and remove the ifdef __powerpc64__ ?

I like that it documents the bit number allocation so I prefer to leave 
it, but TRAP() does not have to be defined twice at least.

> 
>>   #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>>   #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
>>   #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
>> @@ -222,12 +228,12 @@ static inline bool trap_is_syscall(struct pt_regs *regs)
>>   
>>   static inline bool trap_norestart(struct pt_regs *regs)
>>   {
>> -	return regs->trap & 0x10;
>> +	return regs->trap & 0x1;
>>   }
>>   
>>   static inline void set_trap_norestart(struct pt_regs *regs)
>>   {
>> -	regs->trap |= 0x10;
>> +	regs->trap |= 0x1;
>>   }
>>   
>>   #define arch_has_single_step()	(1)
>> 
> 
> While we are playing with ->trap, in mm/book3s64/hash_utils.c there is an if (regs->trap == 0x400). 
> Should be TRAP(regs) == 0x400 ?

Yes I would say so, if you want to do a patch you can add
Acked-by: Nicholas Piggin <npiggin@gmail.com>

Otherwise I can do it.

Thanks,
Nick


* Re: [PATCH 10/10] powerpc: move norestart trap flag to bit 0
  2021-03-16  7:11     ` Nicholas Piggin
@ 2021-03-16  7:13       ` Christophe Leroy
  0 siblings, 0 replies; 25+ messages in thread
From: Christophe Leroy @ 2021-03-16  7:13 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



Le 16/03/2021 à 08:11, Nicholas Piggin a écrit :
> Excerpts from Christophe Leroy's message of March 15, 2021 6:14 pm:
>>
>>
>> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>>> Compact the trap flags down to use the low 4 bits of regs.trap.
>>>
>>> A few 64e interrupt trap numbers set bit 4. Although they tended to be
>>> trivial so it wasn't a real problem[*], it is not the right thing to do,
>>> and confusing.
>>>
>>> [*] E.g., 0x310 hypercall goes to unknown_exception, which prints
>>>       regs->trap directly so 0x310 will appear fine, and only the syscall
>>>       interrupt will test norestart, so it won't be confused by 0x310.
>>>
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>    arch/powerpc/include/asm/ptrace.h | 14 ++++++++++----
>>>    1 file changed, 10 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/arch/powerpc/include/asm/ptrace.h b/arch/powerpc/include/asm/ptrace.h
>>> index 91194fdd5d01..6a04abfe5eb6 100644
>>> --- a/arch/powerpc/include/asm/ptrace.h
>>> +++ b/arch/powerpc/include/asm/ptrace.h
>>> @@ -185,15 +185,21 @@ static inline void regs_set_return_value(struct pt_regs *regs, unsigned long rc)
>>>    #define current_pt_regs() \
>>>    	((struct pt_regs *)((unsigned long)task_stack_page(current) + THREAD_SIZE) - 1)
>>>    
>>> +/*
>>> + * The 4 low bits (0xf) are available as flags to overload the trap word,
>>> + * because interrupt vectors have minimum alignment of 0x10. TRAP_FLAGS_MASK
>>> + * must cover the bits used as flags, including bit 0 which is used as the
>>> + * "norestart" bit.
>>> + */
>>>    #ifdef __powerpc64__
>>> -#define TRAP_FLAGS_MASK		0x10
>>> +#define TRAP_FLAGS_MASK		0x1
>>>    #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>>>    #else
>>>    /*
>>>     * On 4xx we use bit 1 in the trap word to indicate whether the exception
>>>     * is a critical exception (1 means it is).
>>>     */
>>> -#define TRAP_FLAGS_MASK		0x1E
>>> +#define TRAP_FLAGS_MASK		0xf
>>
>> Could we set 0xf for all and remove the ifdef __powerpc64__ ?
> 
> I like that it documents the bit number allocation so I prefer to leave
> it, but TRAP() does not have to be defined twice at least.
> 
>>
>>>    #define TRAP(regs)		((regs)->trap & ~TRAP_FLAGS_MASK)
>>>    #define IS_CRITICAL_EXC(regs)	(((regs)->trap & 2) != 0)
>>>    #define IS_MCHECK_EXC(regs)	(((regs)->trap & 4) != 0)
>>> @@ -222,12 +228,12 @@ static inline bool trap_is_syscall(struct pt_regs *regs)
>>>    
>>>    static inline bool trap_norestart(struct pt_regs *regs)
>>>    {
>>> -	return regs->trap & 0x10;
>>> +	return regs->trap & 0x1;
>>>    }
>>>    
>>>    static inline void set_trap_norestart(struct pt_regs *regs)
>>>    {
>>> -	regs->trap |= 0x10;
>>> +	regs->trap |= 0x1;
>>>    }
>>>    
>>>    #define arch_has_single_step()	(1)
>>>
>>
>> While we are playing with ->trap, in mm/book3s64/hash_utils.c there is an if (regs->trap == 0x400).
>> Should be TRAP(regs) == 0x400 ?
> 
> Yes I would say so, if you want to do a patch you can add
> Acked-by: Nicholas Piggin <npiggin@gmail.com>
> 
> Otherwise I can do it.

Yes please do.

Thanks
Christophe


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-16  7:04     ` Nicholas Piggin
@ 2021-03-16  7:25       ` Nicholas Piggin
  2021-03-16  7:29         ` Christophe Leroy
  0 siblings, 1 reply; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  7:25 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Nicholas Piggin's message of March 16, 2021 5:04 pm:
> Excerpts from Christophe Leroy's message of March 15, 2021 11:30 pm:
>> 
>> 
>> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>>> Update the new C and asm interrupt return code to account for 64e
>>> specifics, switch over to use it.
>>> 
>>> The now-unused old ret_from_except code, that was moved to 64e after the
>>> 64s conversion, is removed.
>>> 
>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>> ---
>>>   arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>>   arch/powerpc/kernel/entry_64.S            |   9 +-
>>>   arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>>   arch/powerpc/kernel/interrupt.c           |  27 +-
>>>   arch/powerpc/kernel/irq.c                 |  76 -----
>>>   5 files changed, 56 insertions(+), 379 deletions(-)
>>> 
>>> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
>>> index fbabb49888d3..ae7b058b2970 100644
>>> --- a/arch/powerpc/kernel/interrupt.c
>>> +++ b/arch/powerpc/kernel/interrupt.c
>>> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>>>   #endif
>>>   }
>>>   
>>> +/* temporary hack for context tracking, removed in later patch */
>>> +#include <linux/sched/debug.h>
>>> +asmlinkage __visible void __sched schedule_user(void);
>>> +
>>>   /*
>>>    * This should be called after a syscall returns, with r3 the return value
>>>    * from the syscall. If this function returns non-zero, the system call
>>> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>   	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>>   		local_irq_enable();
>>>   		if (ti_flags & _TIF_NEED_RESCHED) {
>>> +#ifdef CONFIG_PPC_BOOK3E_64
>>> +			schedule_user();
>>> +#else
>>>   			schedule();
>>> +#endif
>>>   		} else {
>>>   			/*
>>>   			 * SIGPENDING must restore signal handler function
>>> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>   	return ret;
>>>   }
>>>   
>>> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>>>   notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>>>   {
>>>   	unsigned long ti_flags;
>>> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>>   	BUG_ON(!(regs->msr & MSR_PR));
>>>   	BUG_ON(!FULL_REGS(regs));
>>>   	BUG_ON(arch_irq_disabled_regs(regs));
>>> +#ifdef CONFIG_PPC_BOOK3S_64
>> 
>> Shouldn't this go away in patch 6 as well ?
>> Or is that needed at all ? In syscall_exit_prepare() it is not ifdefed .
> 
> Hmm, not sure. I'll take a look. It probably shouldn't be ifdefed at all 
> but definitely by the end it should run without warning.

Oh I got confused and thought that was the syscall exit. Interrupt exit 
has to keep this until patch 6 because 64e context tracking does 
everything in interrupt wrappers, so by the time we get here it will
already be set to CONTEXT_USER.

Thanks,
Nick


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-16  7:25       ` Nicholas Piggin
@ 2021-03-16  7:29         ` Christophe Leroy
  2021-03-16  8:14           ` Nicholas Piggin
  0 siblings, 1 reply; 25+ messages in thread
From: Christophe Leroy @ 2021-03-16  7:29 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood



Le 16/03/2021 à 08:25, Nicholas Piggin a écrit :
> Excerpts from Nicholas Piggin's message of March 16, 2021 5:04 pm:
>> Excerpts from Christophe Leroy's message of March 15, 2021 11:30 pm:
>>>
>>>
>>> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>>>> Update the new C and asm interrupt return code to account for 64e
>>>> specifics, switch over to use it.
>>>>
>>>> The now-unused old ret_from_except code, that was moved to 64e after the
>>>> 64s conversion, is removed.
>>>>
>>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>>> ---
>>>>    arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>>>    arch/powerpc/kernel/entry_64.S            |   9 +-
>>>>    arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>>>    arch/powerpc/kernel/interrupt.c           |  27 +-
>>>>    arch/powerpc/kernel/irq.c                 |  76 -----
>>>>    5 files changed, 56 insertions(+), 379 deletions(-)
>>>>
>>>> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
>>>> index fbabb49888d3..ae7b058b2970 100644
>>>> --- a/arch/powerpc/kernel/interrupt.c
>>>> +++ b/arch/powerpc/kernel/interrupt.c
>>>> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>>>>    #endif
>>>>    }
>>>>    
>>>> +/* temporary hack for context tracking, removed in later patch */
>>>> +#include <linux/sched/debug.h>
>>>> +asmlinkage __visible void __sched schedule_user(void);
>>>> +
>>>>    /*
>>>>     * This should be called after a syscall returns, with r3 the return value
>>>>     * from the syscall. If this function returns non-zero, the system call
>>>> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>>    	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>>>    		local_irq_enable();
>>>>    		if (ti_flags & _TIF_NEED_RESCHED) {
>>>> +#ifdef CONFIG_PPC_BOOK3E_64
>>>> +			schedule_user();
>>>> +#else
>>>>    			schedule();
>>>> +#endif
>>>>    		} else {
>>>>    			/*
>>>>    			 * SIGPENDING must restore signal handler function
>>>> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>>    	return ret;
>>>>    }
>>>>    
>>>> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>>>>    notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>>>>    {
>>>>    	unsigned long ti_flags;
>>>> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>>>    	BUG_ON(!(regs->msr & MSR_PR));
>>>>    	BUG_ON(!FULL_REGS(regs));
>>>>    	BUG_ON(arch_irq_disabled_regs(regs));
>>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>>
>>> Shouldn't this go away in patch 6 as well ?
>>> Or is that needed at all ? In syscall_exit_prepare() it is not ifdefed .
>>
>> Hmm, not sure. I'll take a look. It probably shouldn't be ifdefed at all
>> but definitely by the end it should run without warning.
> 
> Oh I got confused and thought that was the syscall exit. Interrupt exit
> has to keep this until patch 6 because 64e context tracking does
> everything in interrupt wrappers, so by the time we get here it will
> already be set to CONTEXT_USER.
> 

OK, but then it should go away in patch 6? As things stand, the #ifdef is still there at the end of the 
series.


* Re: [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return
  2021-03-16  7:29         ` Christophe Leroy
@ 2021-03-16  8:14           ` Nicholas Piggin
  0 siblings, 0 replies; 25+ messages in thread
From: Nicholas Piggin @ 2021-03-16  8:14 UTC (permalink / raw)
  To: Christophe Leroy, linuxppc-dev; +Cc: Scott Wood

Excerpts from Christophe Leroy's message of March 16, 2021 5:29 pm:
> 
> 
> Le 16/03/2021 à 08:25, Nicholas Piggin a écrit :
>> Excerpts from Nicholas Piggin's message of March 16, 2021 5:04 pm:
>>> Excerpts from Christophe Leroy's message of March 15, 2021 11:30 pm:
>>>>
>>>>
>>>> Le 15/03/2021 à 04:17, Nicholas Piggin a écrit :
>>>>> Update the new C and asm interrupt return code to account for 64e
>>>>> specifics, switch over to use it.
>>>>>
>>>>> The now-unused old ret_from_except code, that was moved to 64e after the
>>>>> 64s conversion, is removed.
>>>>>
>>>>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>>>>> ---
>>>>>    arch/powerpc/include/asm/asm-prototypes.h |   2 -
>>>>>    arch/powerpc/kernel/entry_64.S            |   9 +-
>>>>>    arch/powerpc/kernel/exceptions-64e.S      | 321 ++--------------------
>>>>>    arch/powerpc/kernel/interrupt.c           |  27 +-
>>>>>    arch/powerpc/kernel/irq.c                 |  76 -----
>>>>>    5 files changed, 56 insertions(+), 379 deletions(-)
>>>>>
>>>>> diff --git a/arch/powerpc/kernel/interrupt.c b/arch/powerpc/kernel/interrupt.c
>>>>> index fbabb49888d3..ae7b058b2970 100644
>>>>> --- a/arch/powerpc/kernel/interrupt.c
>>>>> +++ b/arch/powerpc/kernel/interrupt.c
>>>>> @@ -235,6 +235,10 @@ static notrace void booke_load_dbcr0(void)
>>>>>    #endif
>>>>>    }
>>>>>    
>>>>> +/* temporary hack for context tracking, removed in later patch */
>>>>> +#include <linux/sched/debug.h>
>>>>> +asmlinkage __visible void __sched schedule_user(void);
>>>>> +
>>>>>    /*
>>>>>     * This should be called after a syscall returns, with r3 the return value
>>>>>     * from the syscall. If this function returns non-zero, the system call
>>>>> @@ -292,7 +296,11 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>>>    	while (unlikely(ti_flags & (_TIF_USER_WORK_MASK & ~_TIF_RESTORE_TM))) {
>>>>>    		local_irq_enable();
>>>>>    		if (ti_flags & _TIF_NEED_RESCHED) {
>>>>> +#ifdef CONFIG_PPC_BOOK3E_64
>>>>> +			schedule_user();
>>>>> +#else
>>>>>    			schedule();
>>>>> +#endif
>>>>>    		} else {
>>>>>    			/*
>>>>>    			 * SIGPENDING must restore signal handler function
>>>>> @@ -360,7 +368,6 @@ notrace unsigned long syscall_exit_prepare(unsigned long r3,
>>>>>    	return ret;
>>>>>    }
>>>>>    
>>>>> -#ifndef CONFIG_PPC_BOOK3E_64 /* BOOK3E not yet using this */
>>>>>    notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned long msr)
>>>>>    {
>>>>>    	unsigned long ti_flags;
>>>>> @@ -372,7 +379,9 @@ notrace unsigned long interrupt_exit_user_prepare(struct pt_regs *regs, unsigned
>>>>>    	BUG_ON(!(regs->msr & MSR_PR));
>>>>>    	BUG_ON(!FULL_REGS(regs));
>>>>>    	BUG_ON(arch_irq_disabled_regs(regs));
>>>>> +#ifdef CONFIG_PPC_BOOK3S_64
>>>>
>>>> Shouldn't this go away in patch 6 as well ?
>>>> Or is that needed at all ? In syscall_exit_prepare() it is not ifdefed .
>>>
>>> Hmm, not sure. I'll take a look. It probably shouldn't be ifdefed at all
>>> but definitely by the end it should run without warning.
>> 
>> Oh I got confused and thought that was the syscall exit. Interrupt exit
>> has to keep this until patch 6 because 64e context tracking does
>> everything in interrupt wrappers, so by the time we get here it will
>> already be set to CONTEXT_USER.
>> 
> 
> ok, but that it has to go in patch 6 ? At the time being the #ifdef is still there at the end of the 
> series
> 

Yes, that was an oversight. I'll remove it.

Thanks,
Nick


* Re: [PATCH 00/10] Move 64e to new interrupt return code
  2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
                   ` (9 preceding siblings ...)
  2021-03-15  3:17 ` [PATCH 10/10] powerpc: move norestart trap flag to bit 0 Nicholas Piggin
@ 2021-03-22 23:45 ` Daniel Axtens
  10 siblings, 0 replies; 25+ messages in thread
From: Daniel Axtens @ 2021-03-22 23:45 UTC (permalink / raw)
  To: Nicholas Piggin, linuxppc-dev; +Cc: Scott Wood, Nicholas Piggin

Hi Nick,

> Since RFC this is rebased on Christophe's v3 ppc32 conversion, and
> has fixed up small details, and then adds some powerpc-wide
> cleanups at the end.
>
> Tested on qemu only (QEMU e500), which is not ideal for interrupt
> handling particularly the critical interrupts which I don't know
> whether it can generate.

I tested this on a T4240RDB with:

stress-ng --class interrupts --seq 0 -t 5

There are some problems that occur only when testing with your series. I
haven't made any attempt to debug them yet.

stress-ng: info:  [3101] unsuccessful run completed in 6352.60s (1 hour, 45 mins, 52.60 secs)
stress-ng: fail:  [3101] aio instance 0 corrupted bogo-ops counter, 7542705 vs 0
stress-ng: fail:  [3101] aio instance 0 hash error in bogo-ops counter and run flag, 359866039 vs 0
stress-ng: fail:  [3101] aio instance 17 corrupted bogo-ops counter, 7638823 vs 0
stress-ng: fail:  [3101] aio instance 17 hash error in bogo-ops counter and run flag, 2001558423 vs 0
stress-ng: fail:  [3101] aio instance 30 corrupted bogo-ops counter, 8192545 vs 0
info: 5 failures reached, aborting stress process
stress-ng: fail:  [3101] aio instance 30 hash error in bogo-ops counter and run flag, 3023200976 vs 0
stress-ng: fail:  [3101] pidfd instance 25 corrupted bogo-ops counter, 116476 vs 0
stress-ng: fail:  [3101] pidfd instance 25 hash error in bogo-ops counter and run flag, 1964630417 vs 0
stress-ng: fail:  [3101] sigabrt instance 3 corrupted bogo-ops counter, 95662 vs 0
stress-ng: fail:  [3101] sigabrt instance 3 hash error in bogo-ops counter and run flag, 1321243721 vs 0
stress-ng: fail:  [3101] sigabrt instance 9 corrupted bogo-ops counter, 92858 vs 0
stress-ng: fail:  [3101] sigabrt instance 9 hash error in bogo-ops counter and run flag, 3835381330 vs 0
stress-ng: fail:  [3101] sigabrt instance 11 corrupted bogo-ops counter, 98333 vs 0
stress-ng: fail:  [3101] sigabrt instance 11 hash error in bogo-ops counter and run flag, 3447969030 vs 0
stress-ng: fail:  [3101] sigabrt instance 14 corrupted bogo-ops counter, 96995 vs 0
stress-ng: fail:  [3101] sigabrt instance 14 hash error in bogo-ops counter and run flag, 2621581502 vs 0
stress-ng: fail:  [3101] sigabrt instance 16 corrupted bogo-ops counter, 97464 vs 0
stress-ng: fail:  [3101] sigabrt instance 16 hash error in bogo-ops counter and run flag, 3422440538 vs 0
stress-ng: fail:  [3101] sigabrt instance 19 corrupted bogo-ops counter, 96044 vs 0
stress-ng: fail:  [3101] sigabrt instance 19 hash error in bogo-ops counter and run flag, 511989935 vs 0
stress-ng: fail:  [3101] sigabrt instance 20 corrupted bogo-ops counter, 96018 vs 0
stress-ng: fail:  [3101] sigabrt instance 20 hash error in bogo-ops counter and run flag, 2348631606 vs 0
stress-ng: fail:  [3101] sigabrt instance 23 corrupted bogo-ops counter, 95252 vs 0
stress-ng: fail:  [3101] sigabrt instance 23 hash error in bogo-ops counter and run flag, 2302430489 vs 0
stress-ng: fail:  [3101] sigabrt instance 26 corrupted bogo-ops counter, 99151 vs 0
stress-ng: fail:  [3101] sigabrt instance 26 hash error in bogo-ops counter and run flag, 2882282932 vs 0
stress-ng: fail:  [3101] sigabrt instance 27 corrupted bogo-ops counter, 95434 vs 0
stress-ng: fail:  [3101] sigabrt instance 27 hash error in bogo-ops counter and run flag, 260112434 vs 0
stress-ng: fail:  [3101] sigabrt instance 28 corrupted bogo-ops counter, 97138 vs 0
stress-ng: fail:  [3101] sigabrt instance 28 hash error in bogo-ops counter and run flag, 2822283734 vs 0
stress-ng: fail:  [3101] sigabrt instance 30 corrupted bogo-ops counter, 97728 vs 0
stress-ng: fail:  [3101] sigabrt instance 30 hash error in bogo-ops counter and run flag, 738567801 vs 0
stress-ng: fail:  [3101] sigabrt instance 31 corrupted bogo-ops counter, 96368 vs 0
stress-ng: fail:  [3101] sigabrt instance 31 hash error in bogo-ops counter and run flag, 1663873592 vs 0
stress-ng: fail:  [3101] sigio instance 0 corrupted bogo-ops counter, 1141 vs 0
stress-ng: fail:  [3101] sigio instance 0 hash error in bogo-ops counter and run flag, 3981634025 vs 0
stress-ng: fail:  [3101] sigio instance 1 corrupted bogo-ops counter, 1323 vs 0
stress-ng: fail:  [3101] sigio instance 1 hash error in bogo-ops counter and run flag, 2384922462 vs 0
stress-ng: fail:  [3101] sigio instance 2 corrupted bogo-ops counter, 876 vs 0
stress-ng: fail:  [3101] sigio instance 2 hash error in bogo-ops counter and run flag, 2730635354 vs 0
stress-ng: fail:  [3101] sigio instance 3 corrupted bogo-ops counter, 3391 vs 0
stress-ng: fail:  [3101] sigio instance 3 hash error in bogo-ops counter and run flag, 3893594528 vs 0
stress-ng: fail:  [3101] sigio instance 4 corrupted bogo-ops counter, 988 vs 0
stress-ng: fail:  [3101] sigio instance 4 hash error in bogo-ops counter and run flag, 2252189661 vs 0
stress-ng: fail:  [3101] sigio instance 5 corrupted bogo-ops counter, 4158 vs 0
stress-ng: fail:  [3101] sigio instance 5 hash error in bogo-ops counter and run flag, 908770141 vs 0
stress-ng: fail:  [3101] sigio instance 6 corrupted bogo-ops counter, 657 vs 0
stress-ng: fail:  [3101] sigio instance 6 hash error in bogo-ops counter and run flag, 3022228667 vs 0
stress-ng: fail:  [3101] sigio instance 7 corrupted bogo-ops counter, 239 vs 0
stress-ng: fail:  [3101] sigio instance 7 hash error in bogo-ops counter and run flag, 2339545388 vs 0
stress-ng: fail:  [3101] sigio instance 8 corrupted bogo-ops counter, 183062 vs 0
stress-ng: fail:  [3101] sigio instance 8 hash error in bogo-ops counter and run flag, 2294439106 vs 0
stress-ng: fail:  [3101] sigio instance 9 corrupted bogo-ops counter, 946 vs 0
stress-ng: fail:  [3101] sigio instance 9 hash error in bogo-ops counter and run flag, 2990832529 vs 0
stress-ng: fail:  [3101] sigio instance 10 corrupted bogo-ops counter, 2799 vs 0
stress-ng: fail:  [3101] sigio instance 10 hash error in bogo-ops counter and run flag, 1781985030 vs 0
stress-ng: fail:  [3101] sigio instance 11 corrupted bogo-ops counter, 2705 vs 0
stress-ng: fail:  [3101] sigio instance 11 hash error in bogo-ops counter and run flag, 3301490000 vs 0
stress-ng: fail:  [3101] sigio instance 21 corrupted bogo-ops counter, 238787 vs 0
stress-ng: fail:  [3101] sigio instance 21 hash error in bogo-ops counter and run flag, 2490210165 vs 0
stress-ng: fail:  [3101] sigio instance 28 corrupted bogo-ops counter, 1020 vs 0
stress-ng: fail:  [3101] sigio instance 28 hash error in bogo-ops counter and run flag, 3260422232 vs 0
stress-ng: fail:  [3101] metrics-check: stressor metrics corrupted, data is compromised

It looks like this is paired with some segfaults in dmesg:

stress-ng-pidfd[4417]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-pidfd[4417]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-pidfd[4417]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748349]: segfault (11) at 800100 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748349]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748390]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1
stress-ng-sigab[3748405]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748405]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748405]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748427]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1
stress-ng-sigab[3748376]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1
in stress-ng[107e8d000+3000]
in stress-ng[107e8d000+3000]
stress-ng-sigab[3748427]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748376]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748427]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748376]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748460]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748460]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748460]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748434]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748434]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748434]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748367]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748367]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748367]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748349]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
in stress-ng[107e8d000+3000]
stress-ng-sigab[3748507]: segfault (11) at 800100 nip 107e8fb14 lr 107e8fb04 code 1

in stress-ng[107e8d000+3000]
stress-ng-sigab[3748390]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 

stress-ng-sigab[3748491]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigab[3748491]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748491]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748390]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigab[3748507]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigab[3748507]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
show_signal_msg: 3 callbacks suppressed
stress-ng-sigio[2635277]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1
stress-ng-sigio[2635278]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1
stress-ng-sigio[2635279]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635279]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635279]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635280]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635280]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635280]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635283]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635283]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635283]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635285]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635285]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635285]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635289]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635289]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635289]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
in stress-ng[107e8d000+3000]
stress-ng-sigio[2635293]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635293]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635293]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635292]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635292]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635292]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635315]: segfault (11) at 800000 nip 107e8fb14 lr 107e8fb04 code 1 in stress-ng[107e8d000+3000]
stress-ng-sigio[2635315]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635315]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
in stress-ng[107e8d000+3000]

stress-ng-sigio[2635278]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635277]: code: 7d4903a6 e8490008 4e800421 e8410028 7c691b78 386100ac 912100ac 4bfc3be1 
stress-ng-sigio[2635278]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 
stress-ng-sigio[2635277]: code: 60000000 812100ac 2c090000 40820014 <e90f0000> 39200001 993c0940 99280008 

In one run, I had problems with a hardware interrupt, but I haven't seen
it recur so I can't be sure it came from your series:


mmc0: Timeout waiting for hardware cmd interrupt.        
mmc0: sdhci: ============ SDHCI REGISTER DUMP ===========
mmc0: sdhci: Sys addr:  0x00000000 | Version:  0x00001301
mmc0: sdhci: Blk size:  0x00000000 | Blk cnt:  0x00000000
mmc0: sdhci: Argument:  0x00000c00 | Trn mode: 0x00000000
mmc0: sdhci: Present:   0x01f00008 | Host ctl: 0x00000020
mmc0: sdhci: Power:     0x00000000 | Blk gap:  0x00000000
mmc0: sdhci: Wake-up:   0x00000000 | Clock:    0x000020e8
mmc0: sdhci: Timeout:   0x00000000 | Int stat: 0x00010001
mmc0: sdhci: Int enab:  0x007f0007 | Sig enab: 0x007f0003
mmc0: sdhci: ACmd stat: 0x00000000 | Slot int: 0x00001402
mmc0: sdhci: Caps:      0x04fa0000 | Caps_1:   0x00000000
mmc0: sdhci: Cmd:       0x0000341a | Max curr: 0x00000000
mmc0: sdhci: Resp[0]:   0x00000000 | Resp[1]:  0x00000000
mmc0: sdhci: Resp[2]:   0x00000000 | Resp[3]:  0x00000000
mmc0: sdhci: Host ctl2: 0x00000000                       
mmc0: sdhci: ADMA Err:  0x00000000 | ADMA Ptr: 0x00000000
mmc0: sdhci: ============================================
mmc0: Timeout waiting for hardware cmd interrupt. 

Let me know if you'd like me to run any further tests.

Kind regards,
Daniel

>
> Thanks,
> Nick
>
> Nicholas Piggin (10):
>   powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order
>   powerpc/64e/interrupt: always save nvgprs on interrupt
>   powerpc/64e/interrupt: use new interrupt return
>   powerpc/64e/interrupt: NMI save irq soft-mask state in C
>   powerpc/64e/interrupt: reconcile irq soft-mask state in C
>   powerpc/64e/interrupt: Use new interrupt context tracking scheme
>   powerpc/64e/interrupt: handle bad_page_fault in C
>   powerpc: clean up do_page_fault
>   powerpc: remove partial register save logic
>   powerpc: move norestart trap flag to bit 0
>
>  arch/powerpc/include/asm/asm-prototypes.h |   2 -
>  arch/powerpc/include/asm/bug.h            |   4 +-
>  arch/powerpc/include/asm/interrupt.h      |  66 ++--
>  arch/powerpc/include/asm/ptrace.h         |  36 +-
>  arch/powerpc/kernel/align.c               |   6 -
>  arch/powerpc/kernel/entry_64.S            |  40 +-
>  arch/powerpc/kernel/exceptions-64e.S      | 425 ++--------------------
>  arch/powerpc/kernel/interrupt.c           |  22 +-
>  arch/powerpc/kernel/irq.c                 |  76 ----
>  arch/powerpc/kernel/process.c             |  12 -
>  arch/powerpc/kernel/ptrace/ptrace-view.c  |  21 --
>  arch/powerpc/kernel/ptrace/ptrace.c       |   2 -
>  arch/powerpc/kernel/ptrace/ptrace32.c     |   4 -
>  arch/powerpc/kernel/signal_32.c           |   3 -
>  arch/powerpc/kernel/signal_64.c           |   2 -
>  arch/powerpc/kernel/traps.c               |  14 +-
>  arch/powerpc/lib/sstep.c                  |   4 -
>  arch/powerpc/mm/book3s64/hash_utils.c     |  16 +-
>  arch/powerpc/mm/fault.c                   |  28 +-
>  arch/powerpc/xmon/xmon.c                  |  23 +-
>  20 files changed, 130 insertions(+), 676 deletions(-)
>
> -- 
> 2.23.0

end of thread, other threads:[~2021-03-22 23:46 UTC | newest]

Thread overview: 25+ messages
2021-03-15  3:17 [PATCH 00/10] Move 64e to new interrupt return code Nicholas Piggin
2021-03-15  3:17 ` [PATCH 01/10] powerpc/syscall: switch user_exit_irqoff and trace_hardirqs_off order Nicholas Piggin
2021-03-15  3:17 ` [PATCH 02/10] powerpc/64e/interrupt: always save nvgprs on interrupt Nicholas Piggin
2021-03-15  3:17 ` [PATCH 03/10] powerpc/64e/interrupt: use new interrupt return Nicholas Piggin
2021-03-15  7:50   ` Christophe Leroy
2021-03-15  8:20     ` Christophe Leroy
2021-03-16  7:03     ` Nicholas Piggin
2021-03-15 13:30   ` Christophe Leroy
2021-03-16  7:04     ` Nicholas Piggin
2021-03-16  7:25       ` Nicholas Piggin
2021-03-16  7:29         ` Christophe Leroy
2021-03-16  8:14           ` Nicholas Piggin
2021-03-15  3:17 ` [PATCH 04/10] powerpc/64e/interrupt: NMI save irq soft-mask state in C Nicholas Piggin
2021-03-15  3:17 ` [PATCH 05/10] powerpc/64e/interrupt: reconcile " Nicholas Piggin
2021-03-15  3:17 ` [PATCH 06/10] powerpc/64e/interrupt: Use new interrupt context tracking scheme Nicholas Piggin
2021-03-15  3:17 ` [PATCH 07/10] powerpc/64e/interrupt: handle bad_page_fault in C Nicholas Piggin
2021-03-15 14:07   ` Christophe Leroy
2021-03-16  7:06     ` Nicholas Piggin
2021-03-15  3:17 ` [PATCH 08/10] powerpc: clean up do_page_fault Nicholas Piggin
2021-03-15  3:17 ` [PATCH 09/10] powerpc: remove partial register save logic Nicholas Piggin
2021-03-15  3:17 ` [PATCH 10/10] powerpc: move norestart trap flag to bit 0 Nicholas Piggin
2021-03-15  8:14   ` Christophe Leroy
2021-03-16  7:11     ` Nicholas Piggin
2021-03-16  7:13       ` Christophe Leroy
2021-03-22 23:45 ` [PATCH 00/10] Move 64e to new interrupt return code Daniel Axtens
