linux-kernel.vger.kernel.org archive mirror
* [RFC PATCH 0/8] Accelerate IRQ entry
@ 2019-12-23 15:26 Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 1/8] powerpc/32: drop ksp_limit based stack overflow detection Christophe Leroy
                   ` (7 more replies)
  0 siblings, 8 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

The purpose of this series is to accelerate IRQ entry by
avoiding unnecessary trampoline functions like call_do_irq()
and call_do_softirq(), and by switching to the IRQ stack
immediately in the exception handler.
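
For reference, a rough sketch of the IRQ entry path before and after the
series (the "after" shape assumes the final state once patches 7 and 8
are applied):

    /* before: the asm trampoline in misc_32.S does the stack switch */
    exception prolog (thread stack) -> do_IRQ() -> call_do_irq() -> __do_irq()

    /* after: the exception prolog itself switches to the IRQ stack */
    exception prolog (IRQ stack) -> __do_irq()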

For now, it is an RFC as it is still a bit messy.

Please provide feedback and I'll improve it next year.

Christophe Leroy (8):
  powerpc/32: drop ksp_limit based stack overflow detection
  powerpc/irq: inline call_do_irq() and call_do_softirq() on PPC32
  powerpc/irq: don't use current_stack_pointer() in do_IRQ()
  powerpc/irq: move set_irq_regs() closer to irq_enter/exit()
  powerpc/irq: move stack overflow verification
  powerpc/irq: cleanup check_stack_overflow() a bit
  powerpc/32: use IRQ stack immediately on IRQ exception
  powerpc/irq: drop softirq stack

 arch/powerpc/include/asm/asm-prototypes.h |  1 -
 arch/powerpc/include/asm/irq.h            |  3 +-
 arch/powerpc/include/asm/processor.h      |  3 --
 arch/powerpc/include/asm/reg.h            |  8 ++++
 arch/powerpc/kernel/asm-offsets.c         |  2 -
 arch/powerpc/kernel/entry_32.S            | 57 ------------------------
 arch/powerpc/kernel/head_32.S             |  2 +-
 arch/powerpc/kernel/head_32.h             | 32 +++++++++++--
 arch/powerpc/kernel/head_40x.S            |  4 +-
 arch/powerpc/kernel/head_8xx.S            |  2 +-
 arch/powerpc/kernel/head_booke.h          |  1 -
 arch/powerpc/kernel/irq.c                 | 74 +++++++++++++++++++++----------
 arch/powerpc/kernel/misc_32.S             | 39 ----------------
 arch/powerpc/kernel/process.c             |  7 ---
 arch/powerpc/kernel/setup_32.c            |  4 +-
 arch/powerpc/kernel/setup_64.c            |  4 +-
 arch/powerpc/kernel/traps.c               |  9 ----
 arch/powerpc/lib/sstep.c                  |  9 ----
 18 files changed, 95 insertions(+), 166 deletions(-)

-- 
2.13.3



* [RFC PATCH 1/8] powerpc/32: drop ksp_limit based stack overflow detection
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 2/8] powerpc/irq: inline call_do_irq() and call_do_softirq() on PPC32 Christophe Leroy
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

PPC32 implements a specific early stack overflow detection.

This detection is inherited from the ppc arch (before the merge of
ppc and ppc64 into powerpc). At that time, there were no irqstacks
and the verification simply checked that the stack pointer was
still above the stack base. But when irqstacks were implemented,
such a simple check was no longer possible, so a thread-specific
value called ksp_limit was introduced in the thread_struct; it is
updated at every stack switch in order to keep track of the limit
and perform the verification.
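
Conceptually, the ksp_limit check done at kernel entry was equivalent to
the following C sketch (the real check is a couple of instructions in
transfer_to_handler, see the asm removed below):

    /* sketch only: if the stack pointer went below the limit, panic */
    if (sp <= current->thread.ksp_limit)
        StackOverflow(regs);    /* dumps regs and panics, see traps.c */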

ppc64 didn't have this but had a verification during IRQs. This
verification was then extended to PPC32 and can be selected through
CONFIG_DEBUG_STACKOVERFLOW.

In the meantime, thread_info has moved away from the stack, reducing
the impact of a stack overflow.

In addition, there is CONFIG_SCHED_STACK_END_CHECK which can be used
to check that the magic value stored at the stack base has not been
overwritten.
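
CONFIG_SCHED_STACK_END_CHECK makes the scheduler verify that magic value
on every schedule(), along the lines of:

    /* sketch of the generic check, not part of this patch */
    if (*end_of_stack(current) != STACK_END_MAGIC)
        panic("corrupted stack end detected inside scheduler\n");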

Remove this PPC32-specific stack overflow mechanism in order to
simplify ongoing work which also aims at further reducing the risk of
stack overflow:
- switch to the irqstack in the IRQ exception entry, in assembly
- VMAP stack

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/asm-prototypes.h |  1 -
 arch/powerpc/include/asm/processor.h      |  3 --
 arch/powerpc/kernel/asm-offsets.c         |  2 --
 arch/powerpc/kernel/entry_32.S            | 57 -------------------------------
 arch/powerpc/kernel/head_40x.S            |  2 --
 arch/powerpc/kernel/head_booke.h          |  1 -
 arch/powerpc/kernel/misc_32.S             | 14 --------
 arch/powerpc/kernel/process.c             |  3 --
 arch/powerpc/kernel/traps.c               |  9 -----
 arch/powerpc/lib/sstep.c                  |  9 -----
 10 files changed, 101 deletions(-)

diff --git a/arch/powerpc/include/asm/asm-prototypes.h b/arch/powerpc/include/asm/asm-prototypes.h
index 983c0084fb3f..90e9c6e415af 100644
--- a/arch/powerpc/include/asm/asm-prototypes.h
+++ b/arch/powerpc/include/asm/asm-prototypes.h
@@ -66,7 +66,6 @@ void RunModeException(struct pt_regs *regs);
 void single_step_exception(struct pt_regs *regs);
 void program_check_exception(struct pt_regs *regs);
 void alignment_exception(struct pt_regs *regs);
-void StackOverflow(struct pt_regs *regs);
 void kernel_fp_unavailable_exception(struct pt_regs *regs);
 void altivec_unavailable_exception(struct pt_regs *regs);
 void vsx_unavailable_exception(struct pt_regs *regs);
diff --git a/arch/powerpc/include/asm/processor.h b/arch/powerpc/include/asm/processor.h
index a9993e7a443b..a9552048c20b 100644
--- a/arch/powerpc/include/asm/processor.h
+++ b/arch/powerpc/include/asm/processor.h
@@ -155,7 +155,6 @@ struct thread_struct {
 #endif
 #ifdef CONFIG_PPC32
 	void		*pgdir;		/* root of page-table tree */
-	unsigned long	ksp_limit;	/* if ksp <= ksp_limit stack overflow */
 #ifdef CONFIG_PPC_RTAS
 	unsigned long	rtas_sp;	/* stack pointer for when in RTAS */
 #endif
@@ -269,7 +268,6 @@ struct thread_struct {
 #define ARCH_MIN_TASKALIGN 16
 
 #define INIT_SP		(sizeof(init_stack) + (unsigned long) &init_stack)
-#define INIT_SP_LIMIT	((unsigned long)&init_stack)
 
 #ifdef CONFIG_SPE
 #define SPEFSCR_INIT \
@@ -282,7 +280,6 @@ struct thread_struct {
 #ifdef CONFIG_PPC32
 #define INIT_THREAD { \
 	.ksp = INIT_SP, \
-	.ksp_limit = INIT_SP_LIMIT, \
 	.addr_limit = KERNEL_DS, \
 	.pgdir = swapper_pg_dir, \
 	.fpexc_mode = MSR_FE0 | MSR_FE1, \
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 3d47aec7becf..d936db6b702f 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -88,7 +88,6 @@ int main(void)
 	DEFINE(SIGSEGV, SIGSEGV);
 	DEFINE(NMI_MASK, NMI_MASK);
 #else
-	OFFSET(KSP_LIMIT, thread_struct, ksp_limit);
 #ifdef CONFIG_PPC_RTAS
 	OFFSET(RTAS_SP, thread_struct, rtas_sp);
 #endif
@@ -353,7 +352,6 @@ int main(void)
 	DEFINE(_CSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, csrr1));
 	DEFINE(_DSRR0, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr0));
 	DEFINE(_DSRR1, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, dsrr1));
-	DEFINE(SAVED_KSP_LIMIT, STACK_INT_FRAME_SIZE+offsetof(struct exception_regs, saved_ksp_limit));
 #endif
 #endif
 
diff --git a/arch/powerpc/kernel/entry_32.S b/arch/powerpc/kernel/entry_32.S
index d60908ea37fb..bf11b464a17b 100644
--- a/arch/powerpc/kernel/entry_32.S
+++ b/arch/powerpc/kernel/entry_32.S
@@ -86,13 +86,6 @@ crit_transfer_to_handler:
 	stw	r0,_SRR0(r11)
 	mfspr	r0,SPRN_SRR1
 	stw	r0,_SRR1(r11)
-
-	/* set the stack limit to the current stack */
-	mfspr	r8,SPRN_SPRG_THREAD
-	lwz	r0,KSP_LIMIT(r8)
-	stw	r0,SAVED_KSP_LIMIT(r11)
-	rlwinm	r0,r1,0,0,(31 - THREAD_SHIFT)
-	stw	r0,KSP_LIMIT(r8)
 	/* fall through */
 #endif
 
@@ -107,13 +100,6 @@ crit_transfer_to_handler:
 	stw	r0,crit_srr0@l(0)
 	mfspr	r0,SPRN_SRR1
 	stw	r0,crit_srr1@l(0)
-
-	/* set the stack limit to the current stack */
-	mfspr	r8,SPRN_SPRG_THREAD
-	lwz	r0,KSP_LIMIT(r8)
-	stw	r0,saved_ksp_limit@l(0)
-	rlwinm	r0,r1,0,0,(31 - THREAD_SHIFT)
-	stw	r0,KSP_LIMIT(r8)
 	/* fall through */
 #endif
 
@@ -181,9 +167,6 @@ transfer_to_handler:
          */
 	kuap_save_and_lock r11, r12, r9, r2, r0
 	addi	r2, r12, -THREAD
-	lwz	r9,KSP_LIMIT(r12)
-	cmplw	r1,r9			/* if r1 <= ksp_limit */
-	ble-	stack_ovf		/* then the kernel stack overflowed */
 5:
 #if defined(CONFIG_PPC_BOOK3S_32) || defined(CONFIG_E500)
 	lwz	r12,TI_LOCAL_FLAGS(r2)
@@ -287,32 +270,6 @@ reenable_mmu:
 	b	fast_exception_return
 #endif
 
-/*
- * On kernel stack overflow, load up an initial stack pointer
- * and call StackOverflow(regs), which should not return.
- */
-stack_ovf:
-	/* sometimes we use a statically-allocated stack, which is OK. */
-	lis	r12,_end@h
-	ori	r12,r12,_end@l
-	cmplw	r1,r12
-	ble	5b			/* r1 <= &_end is OK */
-	SAVE_NVGPRS(r11)
-	addi	r3,r1,STACK_FRAME_OVERHEAD
-	lis	r1,init_thread_union@ha
-	addi	r1,r1,init_thread_union@l
-	addi	r1,r1,THREAD_SIZE-STACK_FRAME_OVERHEAD
-	lis	r9,StackOverflow@ha
-	addi	r9,r9,StackOverflow@l
-	LOAD_REG_IMMEDIATE(r10,MSR_KERNEL)
-#if defined(CONFIG_PPC_8xx) && defined(CONFIG_PERF_EVENTS)
-	mtspr	SPRN_NRI, r0
-#endif
-	mtspr	SPRN_SRR0,r9
-	mtspr	SPRN_SRR1,r10
-	SYNC
-	RFI
-
 #ifdef CONFIG_TRACE_IRQFLAGS
 trace_syscall_entry_irq_off:
 	/*
@@ -1142,11 +1099,6 @@ exc_exit_restart_end:
 #ifdef CONFIG_40x
 	.globl	ret_from_crit_exc
 ret_from_crit_exc:
-	mfspr	r9,SPRN_SPRG_THREAD
-	lis	r10,saved_ksp_limit@ha;
-	lwz	r10,saved_ksp_limit@l(r10);
-	tovirt(r9,r9);
-	stw	r10,KSP_LIMIT(r9)
 	lis	r9,crit_srr0@ha;
 	lwz	r9,crit_srr0@l(r9);
 	lis	r10,crit_srr1@ha;
@@ -1159,18 +1111,12 @@ ret_from_crit_exc:
 #ifdef CONFIG_BOOKE
 	.globl	ret_from_crit_exc
 ret_from_crit_exc:
-	mfspr	r9,SPRN_SPRG_THREAD
-	lwz	r10,SAVED_KSP_LIMIT(r1)
-	stw	r10,KSP_LIMIT(r9)
 	RESTORE_xSRR(SRR0,SRR1);
 	RESTORE_MMU_REGS;
 	RET_FROM_EXC_LEVEL(SPRN_CSRR0, SPRN_CSRR1, PPC_RFCI)
 
 	.globl	ret_from_debug_exc
 ret_from_debug_exc:
-	mfspr	r9,SPRN_SPRG_THREAD
-	lwz	r10,SAVED_KSP_LIMIT(r1)
-	stw	r10,KSP_LIMIT(r9)
 	RESTORE_xSRR(SRR0,SRR1);
 	RESTORE_xSRR(CSRR0,CSRR1);
 	RESTORE_MMU_REGS;
@@ -1178,9 +1124,6 @@ ret_from_debug_exc:
 
 	.globl	ret_from_mcheck_exc
 ret_from_mcheck_exc:
-	mfspr	r9,SPRN_SPRG_THREAD
-	lwz	r10,SAVED_KSP_LIMIT(r1)
-	stw	r10,KSP_LIMIT(r9)
 	RESTORE_xSRR(SRR0,SRR1);
 	RESTORE_xSRR(CSRR0,CSRR1);
 	RESTORE_xSRR(DSRR0,DSRR1);
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 585ea1976550..4511fc1549f7 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -91,8 +91,6 @@ _ENTRY(crit_srr0)
 	.space	4
 _ENTRY(crit_srr1)
 	.space	4
-_ENTRY(saved_ksp_limit)
-	.space	4
 
 /*
  * Exception prolog for critical exceptions.  This is a little different
diff --git a/arch/powerpc/kernel/head_booke.h b/arch/powerpc/kernel/head_booke.h
index 2ae635df9026..41dd53846e0c 100644
--- a/arch/powerpc/kernel/head_booke.h
+++ b/arch/powerpc/kernel/head_booke.h
@@ -525,7 +525,6 @@ struct exception_regs {
 	unsigned long csrr1;
 	unsigned long dsrr0;
 	unsigned long dsrr1;
-	unsigned long saved_ksp_limit;
 };
 
 /* ensure this structure is always sized to a multiple of the stack alignment */
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index d80212be8698..bb5995fa6884 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -27,23 +27,14 @@
 
 	.text
 
-/*
- * We store the saved ksp_limit in the unused part
- * of the STACK_FRAME_OVERHEAD
- */
 _GLOBAL(call_do_softirq)
 	mflr	r0
 	stw	r0,4(r1)
-	lwz	r10,THREAD+KSP_LIMIT(r2)
-	stw	r3, THREAD+KSP_LIMIT(r2)
 	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
 	mr	r1,r3
-	stw	r10,8(r1)
 	bl	__do_softirq
-	lwz	r10,8(r1)
 	lwz	r1,0(r1)
 	lwz	r0,4(r1)
-	stw	r10,THREAD+KSP_LIMIT(r2)
 	mtlr	r0
 	blr
 
@@ -53,16 +44,11 @@ _GLOBAL(call_do_softirq)
 _GLOBAL(call_do_irq)
 	mflr	r0
 	stw	r0,4(r1)
-	lwz	r10,THREAD+KSP_LIMIT(r2)
-	stw	r4, THREAD+KSP_LIMIT(r2)
 	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
 	mr	r1,r4
-	stw	r10,8(r1)
 	bl	__do_irq
-	lwz	r10,8(r1)
 	lwz	r1,0(r1)
 	lwz	r0,4(r1)
-	stw	r10,THREAD+KSP_LIMIT(r2)
 	mtlr	r0
 	blr
 
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 4df94b6e2f32..49d0ebf28ab9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1657,9 +1657,6 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long usp,
 	kregs = (struct pt_regs *) sp;
 	sp -= STACK_FRAME_OVERHEAD;
 	p->thread.ksp = sp;
-#ifdef CONFIG_PPC32
-	p->thread.ksp_limit = (unsigned long)end_of_stack(p);
-#endif
 #ifdef CONFIG_HAVE_HW_BREAKPOINT
 	p->thread.ptrace_bps[0] = NULL;
 #endif
diff --git a/arch/powerpc/kernel/traps.c b/arch/powerpc/kernel/traps.c
index 014ff0701f24..ee641dd8eb90 100644
--- a/arch/powerpc/kernel/traps.c
+++ b/arch/powerpc/kernel/traps.c
@@ -1628,15 +1628,6 @@ void alignment_exception(struct pt_regs *regs)
 	exception_exit(prev_state);
 }
 
-void StackOverflow(struct pt_regs *regs)
-{
-	pr_crit("Kernel stack overflow in process %s[%d], r1=%lx\n",
-		current->comm, task_pid_nr(current), regs->gpr[1]);
-	debugger(regs);
-	show_regs(regs);
-	panic("kernel stack overflow");
-}
-
 void kernel_fp_unavailable_exception(struct pt_regs *regs)
 {
 	enum ctx_state prev_state = exception_enter();
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index c077acb983a1..390f43f1d4d8 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -2701,15 +2701,6 @@ NOKPROBE_SYMBOL(analyse_instr);
  */
 static nokprobe_inline int handle_stack_update(unsigned long ea, struct pt_regs *regs)
 {
-#ifdef CONFIG_PPC32
-	/*
-	 * Check if we will touch kernel stack overflow
-	 */
-	if (ea - STACK_INT_FRAME_SIZE <= current->thread.ksp_limit) {
-		printk(KERN_CRIT "Can't kprobe this since kernel stack would overflow.\n");
-		return -EINVAL;
-	}
-#endif /* CONFIG_PPC32 */
 	/*
 	 * Check if we already set since that means we'll
 	 * lose the previous value.
-- 
2.13.3



* [RFC PATCH 2/8] powerpc/irq: inline call_do_irq() and call_do_softirq() on PPC32
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 1/8] powerpc/32: drop ksp_limit based stack overflow detection Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 3/8] powerpc/irq: don't use current_stack_pointer() in do_IRQ() Christophe Leroy
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

call_do_irq() and call_do_softirq() are simple enough to be
worth inlining.

Inlining them avoids an mflr/mtlr pair plus a save/reload on the stack.
It also allows GCC to keep the saved ksp_limit in a non-volatile register.

This is inspired by the S390 arch. Several other arches do more or
less the same. The way the sparc arch does it seems odd, though.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>

---
v2: no change.
v3: no change.
v4:
- comment reminding the purpose of the inline asm block.
- added r2 as clobbered reg
v5:
- Limiting the change to PPC32 for now.
- removed r2 from the clobbered regs list (on PPC32 r2 points to current all the time)
- Removed patch 1 and merged ksp_limit handling in here.
v6:
- rebased after removal of ksp_limit
---
 arch/powerpc/include/asm/irq.h |  2 ++
 arch/powerpc/kernel/irq.c      | 34 ++++++++++++++++++++++++++++++++++
 arch/powerpc/kernel/misc_32.S  | 25 -------------------------
 3 files changed, 36 insertions(+), 25 deletions(-)

diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
index 814dfab7e392..e4a92f0b4ad4 100644
--- a/arch/powerpc/include/asm/irq.h
+++ b/arch/powerpc/include/asm/irq.h
@@ -56,8 +56,10 @@ extern void *mcheckirq_ctx[NR_CPUS];
 extern void *hardirq_ctx[NR_CPUS];
 extern void *softirq_ctx[NR_CPUS];
 
+#ifdef CONFIG_PPC64
 void call_do_softirq(void *sp);
 void call_do_irq(struct pt_regs *regs, void *sp);
+#endif
 extern void do_IRQ(struct pt_regs *regs);
 extern void __init init_IRQ(void);
 extern void __do_irq(struct pt_regs *regs);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index add67498c126..4690e5270806 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -611,6 +611,40 @@ static inline void check_stack_overflow(void)
 #endif
 }
 
+#ifdef CONFIG_PPC32
+static inline void call_do_softirq(const void *sp)
+{
+	register unsigned long ret asm("r3");
+
+	/* Temporarily switch r1 to sp, call __do_softirq() then restore r1. */
+	asm volatile(
+		"	"PPC_STLU"	1, %2(%1);\n"
+		"	mr		1, %1;\n"
+		"	bl		%3;\n"
+		"	"PPC_LL"	1, 0(1);\n" :
+		"=r"(ret) :
+		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_softirq) :
+		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
+		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
+}
+
+static inline void call_do_irq(struct pt_regs *regs, void *sp)
+{
+	register unsigned long r3 asm("r3") = (unsigned long)regs;
+
+	/* Temporarily switch r1 to sp, call __do_irq() then restore r1 */
+	asm volatile(
+		"	"PPC_STLU"	1, %2(%1);\n"
+		"	mr		1, %1;\n"
+		"	bl		%3;\n"
+		"	"PPC_LL"	1, 0(1);\n" :
+		"+r"(r3) :
+		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_irq) :
+		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
+		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
+}
+#endif
+
 void __do_irq(struct pt_regs *regs)
 {
 	unsigned int irq;
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index bb5995fa6884..341a3cd199cb 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -27,31 +27,6 @@
 
 	.text
 
-_GLOBAL(call_do_softirq)
-	mflr	r0
-	stw	r0,4(r1)
-	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
-	mr	r1,r3
-	bl	__do_softirq
-	lwz	r1,0(r1)
-	lwz	r0,4(r1)
-	mtlr	r0
-	blr
-
-/*
- * void call_do_irq(struct pt_regs *regs, void *sp);
- */
-_GLOBAL(call_do_irq)
-	mflr	r0
-	stw	r0,4(r1)
-	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
-	mr	r1,r4
-	bl	__do_irq
-	lwz	r1,0(r1)
-	lwz	r0,4(r1)
-	mtlr	r0
-	blr
-
 /*
  * This returns the high 64 bits of the product of two 64-bit numbers.
  */
-- 
2.13.3



* [RFC PATCH 3/8] powerpc/irq: don't use current_stack_pointer() in do_IRQ()
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 1/8] powerpc/32: drop ksp_limit based stack overflow detection Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 2/8] powerpc/irq: inline call_do_irq() and call_do_softirq() on PPC32 Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 4/8] powerpc/irq: move set_irq_regs() closer to irq_enter/exit() Christophe Leroy
                   ` (4 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

Before commit 7306e83ccf5c ("powerpc: Don't use CURRENT_THREAD_INFO to
find the stack"), the current stack base address was obtained by
calling current_thread_info(). That inline function simply masked
the low bits of r1.
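
For reference, a sketch of the kind of masking that helper did (not the
exact original code):

    /* stack base = r1 rounded down to the THREAD_SIZE boundary */
    static inline struct thread_info *current_thread_info(void)
    {
        register unsigned long sp asm("r1");

        return (struct thread_info *)(sp & ~(THREAD_SIZE - 1));
    }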

In that commit, it was changed to use current_stack_pointer(), which
is a heavier function: it is an out-of-line assembly function which
cannot be inlined and which reads the content of the stack at 0(r1).

Create a stack_pointer() function which returns the value of r1 and
use it instead.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
Fixes: 7306e83ccf5c ("powerpc: Don't use CURRENT_THREAD_INFO to find the stack")
---
 arch/powerpc/include/asm/reg.h | 8 ++++++++
 arch/powerpc/kernel/irq.c      | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index 1aa46dff0957..bc14fca9b13b 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -1466,6 +1466,14 @@ static inline void update_power8_hid0(unsigned long hid0)
 	 */
 	asm volatile("sync; mtspr %0,%1; isync":: "i"(SPRN_HID0), "r"(hid0));
 }
+
+static __always_inline unsigned long stack_pointer(void)
+{
+	register unsigned long r1 asm("r1");
+
+	return r1;
+}
+
 #endif /* __ASSEMBLY__ */
 #endif /* __KERNEL__ */
 #endif /* _ASM_POWERPC_REG_H */
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 4690e5270806..410accba865d 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -680,7 +680,7 @@ void do_IRQ(struct pt_regs *regs)
 	void *cursp, *irqsp, *sirqsp;
 
 	/* Switch to the irq stack to handle this */
-	cursp = (void *)(current_stack_pointer() & ~(THREAD_SIZE - 1));
+	cursp = (void *)(stack_pointer() & ~(THREAD_SIZE - 1));
 	irqsp = hardirq_ctx[raw_smp_processor_id()];
 	sirqsp = softirq_ctx[raw_smp_processor_id()];
 
-- 
2.13.3



* [RFC PATCH 4/8] powerpc/irq: move set_irq_regs() closer to irq_enter/exit()
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
                   ` (2 preceding siblings ...)
  2019-12-23 15:26 ` [RFC PATCH 3/8] powerpc/irq: don't use current_stack_pointer() in do_IRQ() Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 5/8] powerpc/irq: move stack overflow verification Christophe Leroy
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

set_irq_regs() is called by do_IRQ() while irq_enter() and irq_exit()
are called by __do_irq().

Move set_irq_regs() into __do_irq(), so that it is also handled when
__do_irq() gets called directly from the exception entry later in this
series.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/irq.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 410accba865d..28414c6665cc 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -647,6 +647,7 @@ static inline void call_do_irq(struct pt_regs *regs, void *sp)
 
 void __do_irq(struct pt_regs *regs)
 {
+	struct pt_regs *old_regs = set_irq_regs(regs);
 	unsigned int irq;
 
 	irq_enter();
@@ -672,11 +673,11 @@ void __do_irq(struct pt_regs *regs)
 	trace_irq_exit(regs);
 
 	irq_exit();
+	set_irq_regs(old_regs);
 }
 
 void do_IRQ(struct pt_regs *regs)
 {
-	struct pt_regs *old_regs = set_irq_regs(regs);
 	void *cursp, *irqsp, *sirqsp;
 
 	/* Switch to the irq stack to handle this */
@@ -686,16 +687,11 @@ void do_IRQ(struct pt_regs *regs)
 
 	check_stack_overflow();
 
-	/* Already there ? */
-	if (unlikely(cursp == irqsp || cursp == sirqsp)) {
+	/* Already there ? Otherwise switch stack and call */
+	if (unlikely(cursp == irqsp || cursp == sirqsp))
 		__do_irq(regs);
-		set_irq_regs(old_regs);
-		return;
-	}
-	/* Switch stack and call */
-	call_do_irq(regs, irqsp);
-
-	set_irq_regs(old_regs);
+	else
+		call_do_irq(regs, irqsp);
 }
 
 void __init init_IRQ(void)
-- 
2.13.3



* [RFC PATCH 5/8] powerpc/irq: move stack overflow verification
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
                   ` (3 preceding siblings ...)
  2019-12-23 15:26 ` [RFC PATCH 4/8] powerpc/irq: move set_irq_regs() closer to irq_enter/exit() Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 6/8] powerpc/irq: cleanup check_stack_overflow() a bit Christophe Leroy
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

As we are going to switch to the IRQ stack immediately in the exception
handler, it won't be possible anymore to check for stack overflow by
reading the stack pointer.

Do the verification on regs->gpr[1], which contains the stack pointer
at the time the IRQ happened, and move it to __do_irq() so that the
verification is also done when calling __do_irq() directly once the
exception entry does the stack switch.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/irq.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 28414c6665cc..4df49f6e9987 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -596,15 +596,16 @@ u64 arch_irq_stat_cpu(unsigned int cpu)
 	return sum;
 }
 
-static inline void check_stack_overflow(void)
+static inline void check_stack_overflow(struct pt_regs *regs)
 {
 #ifdef CONFIG_DEBUG_STACKOVERFLOW
+	bool is_user = user_mode(regs);
 	long sp;
 
-	sp = current_stack_pointer() & (THREAD_SIZE-1);
+	sp = regs->gpr[1] & (THREAD_SIZE - 1);
 
 	/* check for stack overflow: is there less than 2KB free? */
-	if (unlikely(sp < 2048)) {
+	if (unlikely(!is_user && sp < 2048)) {
 		pr_err("do_IRQ: stack overflow: %ld\n", sp);
 		dump_stack();
 	}
@@ -654,6 +655,8 @@ void __do_irq(struct pt_regs *regs)
 
 	trace_irq_entry(regs);
 
+	check_stack_overflow(regs);
+
 	/*
 	 * Query the platform PIC for the interrupt & ack it.
 	 *
@@ -685,8 +688,6 @@ void do_IRQ(struct pt_regs *regs)
 	irqsp = hardirq_ctx[raw_smp_processor_id()];
 	sirqsp = softirq_ctx[raw_smp_processor_id()];
 
-	check_stack_overflow();
-
 	/* Already there ? Otherwise switch stack and call */
 	if (unlikely(cursp == irqsp || cursp == sirqsp))
 		__do_irq(regs);
-- 
2.13.3



* [RFC PATCH 6/8] powerpc/irq: cleanup check_stack_overflow() a bit
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
                   ` (4 preceding siblings ...)
  2019-12-23 15:26 ` [RFC PATCH 5/8] powerpc/irq: move stack overflow verification Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 7/8] powerpc/32: use IRQ stack immediately on IRQ exception Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 8/8] powerpc/irq: drop softirq stack Christophe Leroy
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

Instead of #ifdef, use IS_ENABLED(CONFIG_DEBUG_STACKOVERFLOW).
This enables GCC to check the validity of the code even when the option
is not selected.
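
As a hypothetical illustration of the difference (CONFIG_FOO and
do_debug_check() are made up for the example):

    #ifdef CONFIG_FOO
        do_debug_check(sp);    /* not even parsed when CONFIG_FOO=n */
    #endif

    if (IS_ENABLED(CONFIG_FOO))
        do_debug_check(sp);    /* always parsed, folded away when off */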

The function is not using current_stack_pointer() anymore, so there is
no need to declare it inline; let GCC decide.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/irq.c | 9 ++++-----
 1 file changed, 4 insertions(+), 5 deletions(-)

diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 4df49f6e9987..a1122ef4a16c 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -596,20 +596,19 @@ u64 arch_irq_stat_cpu(unsigned int cpu)
 	return sum;
 }
 
-static inline void check_stack_overflow(struct pt_regs *regs)
+static void check_stack_overflow(struct pt_regs *regs)
 {
-#ifdef CONFIG_DEBUG_STACKOVERFLOW
 	bool is_user = user_mode(regs);
-	long sp;
+	long sp = regs->gpr[1] & (THREAD_SIZE - 1);
 
-	sp = regs->gpr[1] & (THREAD_SIZE - 1);
+	if (!IS_ENABLED(CONFIG_DEBUG_STACKOVERFLOW))
+		return;
 
 	/* check for stack overflow: is there less than 2KB free? */
 	if (unlikely(!is_user && sp < 2048)) {
 		pr_err("do_IRQ: stack overflow: %ld\n", sp);
 		dump_stack();
 	}
-#endif
 }
 
 #ifdef CONFIG_PPC32
-- 
2.13.3



* [RFC PATCH 7/8] powerpc/32: use IRQ stack immediately on IRQ exception
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
                   ` (5 preceding siblings ...)
  2019-12-23 15:26 ` [RFC PATCH 6/8] powerpc/irq: cleanup check_stack_overflow() a bit Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  2019-12-23 15:26 ` [RFC PATCH 8/8] powerpc/irq: drop softirq stack Christophe Leroy
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

Exception entries run on the kernel thread stack, then do_IRQ()
switches to the IRQ stack.

Instead of doing a first step on the thread stack, which increases the
risk of stack overflow and wastes time switching stacks twice when
coming from userspace, switch to the IRQ stack immediately in the
EXCEPTION entry.

In the same way as ARM64, consider that when the stack pointer is not
within the kernel thread stack, it is already on the IRQ stack.
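
In C terms, the stack selection done by the prolog is roughly as follows
(sketch only; r1, task_stack and cpu stand for the values the assembly
reads via SPRN_SPRG_THREAD, the real implementation is the diff below):

    /* switch to the IRQ stack when coming from user or from the thread stack */
    if (user_mode(regs) || (r1 ^ (unsigned long)task_stack) < THREAD_SIZE)
        r1 = (unsigned long)hardirq_ctx[cpu] + THREAD_SIZE;
    /* otherwise r1 is outside the thread stack: assume it already points
       into the IRQ stack and keep it, as arm64 does */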

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/head_32.S  |  2 +-
 arch/powerpc/kernel/head_32.h  | 32 +++++++++++++++++++++++++++++---
 arch/powerpc/kernel/head_40x.S |  2 +-
 arch/powerpc/kernel/head_8xx.S |  2 +-
 4 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/head_32.S b/arch/powerpc/kernel/head_32.S
index 4a24f8f026c7..0c36fba5b861 100644
--- a/arch/powerpc/kernel/head_32.S
+++ b/arch/powerpc/kernel/head_32.S
@@ -332,7 +332,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_HPTE_TABLE)
 	EXC_XFER_LITE(0x400, handle_page_fault)
 
 /* External interrupt */
-	EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+	EXCEPTION_IRQ(0x500, HardwareInterrupt, __do_irq, EXC_XFER_LITE)
 
 /* Alignment exception */
 	. = 0x600
diff --git a/arch/powerpc/kernel/head_32.h b/arch/powerpc/kernel/head_32.h
index 8abc7783dbe5..f9e77e51723e 100644
--- a/arch/powerpc/kernel/head_32.h
+++ b/arch/powerpc/kernel/head_32.h
@@ -11,21 +11,41 @@
  * task's thread_struct.
  */
 
-.macro EXCEPTION_PROLOG
+.macro EXCEPTION_PROLOG is_irq=0
 	mtspr	SPRN_SPRG_SCRATCH0,r10
 	mtspr	SPRN_SPRG_SCRATCH1,r11
 	mfcr	r10
-	EXCEPTION_PROLOG_1
+	EXCEPTION_PROLOG_1 is_irq=\is_irq
 	EXCEPTION_PROLOG_2
 .endm
 
-.macro EXCEPTION_PROLOG_1
+.macro EXCEPTION_PROLOG_1 is_irq=0
 	mfspr	r11,SPRN_SRR1		/* check whether user or kernel */
 	andi.	r11,r11,MSR_PR
+	.if \is_irq
+	bne	2f
+	mfspr	r11, SPRN_SPRG_THREAD
+	lwz	r11, TASK_STACK - THREAD(r11)
+	xor	r11, r11, r1
+	cmplwi	cr7, r11, THREAD_SIZE - 1
+	tophys(r11, r1)			/* use tophys(r1) if not thread stack */
+	bgt	cr7, 1f
+2:
+#ifdef CONFIG_SMP
+	mfspr	r11, SPRN_SPRG_THREAD
+	lwz	r11, TASK_CPU - THREAD(r11)
+	slwi	r11, r11, 3
+	addis	r11, r11, (hardirq_ctx - PAGE_OFFSET)@ha
+#else
+	lis	r11, (hardirq_ctx - PAGE_OFFSET)@ha
+#endif
+	lwz	r11, (hardirq_ctx - PAGE_OFFSET)@l(r11)
+	.else
 	tophys(r11,r1)			/* use tophys(r1) if kernel */
 	beq	1f
 	mfspr	r11,SPRN_SPRG_THREAD
 	lwz	r11,TASK_STACK-THREAD(r11)
+	.endif
 	addi	r11,r11,THREAD_SIZE
 	tophys(r11,r11)
 1:	subi	r11,r11,INT_FRAME_SIZE	/* alloc exc. frame */
@@ -171,6 +191,12 @@
 	addi	r3,r1,STACK_FRAME_OVERHEAD;	\
 	xfer(n, hdlr)
 
+#define EXCEPTION_IRQ(n, label, hdlr, xfer)	\
+	START_EXCEPTION(n, label)		\
+	EXCEPTION_PROLOG is_irq=1;		\
+	addi	r3,r1,STACK_FRAME_OVERHEAD;	\
+	xfer(n, hdlr)
+
 #define EXC_XFER_TEMPLATE(hdlr, trap, msr, tfer, ret)		\
 	li	r10,trap;					\
 	stw	r10,_TRAP(r11);					\
diff --git a/arch/powerpc/kernel/head_40x.S b/arch/powerpc/kernel/head_40x.S
index 4511fc1549f7..dd236f596c0b 100644
--- a/arch/powerpc/kernel/head_40x.S
+++ b/arch/powerpc/kernel/head_40x.S
@@ -315,7 +315,7 @@ _ENTRY(crit_srr1)
 	EXC_XFER_LITE(0x400, handle_page_fault)
 
 /* 0x0500 - External Interrupt Exception */
-	EXCEPTION(0x0500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+	EXCEPTION_IRQ(0x0500, HardwareInterrupt, __do_irq, EXC_XFER_LITE)
 
 /* 0x0600 - Alignment Exception */
 	START_EXCEPTION(0x0600, Alignment)
diff --git a/arch/powerpc/kernel/head_8xx.S b/arch/powerpc/kernel/head_8xx.S
index 19f583e18402..5a6cdbc89e26 100644
--- a/arch/powerpc/kernel/head_8xx.S
+++ b/arch/powerpc/kernel/head_8xx.S
@@ -150,7 +150,7 @@ DataAccess:
 InstructionAccess:
 
 /* External interrupt */
-	EXCEPTION(0x500, HardwareInterrupt, do_IRQ, EXC_XFER_LITE)
+	EXCEPTION_IRQ(0x500, HardwareInterrupt, __do_irq, EXC_XFER_LITE)
 
 /* Alignment exception */
 	. = 0x600
-- 
2.13.3



* [RFC PATCH 8/8] powerpc/irq: drop softirq stack
  2019-12-23 15:26 [RFC PATCH 0/8] Accelerate IRQ entry Christophe Leroy
                   ` (6 preceding siblings ...)
  2019-12-23 15:26 ` [RFC PATCH 7/8] powerpc/32: use IRQ stack immediately on IRQ exception Christophe Leroy
@ 2019-12-23 15:26 ` Christophe Leroy
  7 siblings, 0 replies; 9+ messages in thread
From: Christophe Leroy @ 2019-12-23 15:26 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman
  Cc: linux-kernel, linuxppc-dev

There are two IRQ stacks: softirq_ctx and hardirq_ctx.

do_softirq_own_stack() switches the stack to softirq_ctx.
do_IRQ() switches the stack to hardirq_ctx.

However, when soft and hard IRQs are nested, only one of the two
stacks is used:
- When on the softirq stack, do_IRQ() doesn't switch to the hardirq stack.
- irq_exit() runs softirqs on the hardirq stack.

There is no added value in having two IRQ stacks as only one of them is
used when hard and soft IRQs are nested. Remove softirq_ctx and
use hardirq_ctx for both hard and soft IRQs.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/irq.h | 1 -
 arch/powerpc/kernel/irq.c      | 8 +++-----
 arch/powerpc/kernel/process.c  | 4 ----
 arch/powerpc/kernel/setup_32.c | 4 +---
 arch/powerpc/kernel/setup_64.c | 4 +---
 5 files changed, 5 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
index e4a92f0b4ad4..7cb2c76aa3ed 100644
--- a/arch/powerpc/include/asm/irq.h
+++ b/arch/powerpc/include/asm/irq.h
@@ -54,7 +54,6 @@ extern void *mcheckirq_ctx[NR_CPUS];
  * Per-cpu stacks for handling hard and soft interrupts.
  */
 extern void *hardirq_ctx[NR_CPUS];
-extern void *softirq_ctx[NR_CPUS];
 
 #ifdef CONFIG_PPC64
 void call_do_softirq(void *sp);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index a1122ef4a16c..3af0d1897354 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -680,15 +680,14 @@ void __do_irq(struct pt_regs *regs)
 
 void do_IRQ(struct pt_regs *regs)
 {
-	void *cursp, *irqsp, *sirqsp;
+	void *cursp, *irqsp;
 
 	/* Switch to the irq stack to handle this */
 	cursp = (void *)(stack_pointer() & ~(THREAD_SIZE - 1));
 	irqsp = hardirq_ctx[raw_smp_processor_id()];
-	sirqsp = softirq_ctx[raw_smp_processor_id()];
 
 	/* Already there ? Otherwise switch stack and call */
-	if (unlikely(cursp == irqsp || cursp == sirqsp))
+	if (unlikely(cursp == irqsp))
 		__do_irq(regs);
 	else
 		call_do_irq(regs, irqsp);
@@ -706,12 +705,11 @@ void    *dbgirq_ctx[NR_CPUS] __read_mostly;
 void *mcheckirq_ctx[NR_CPUS] __read_mostly;
 #endif
 
-void *softirq_ctx[NR_CPUS] __read_mostly;
 void *hardirq_ctx[NR_CPUS] __read_mostly;
 
 void do_softirq_own_stack(void)
 {
-	call_do_softirq(softirq_ctx[smp_processor_id()]);
+	call_do_softirq(hardirq_ctx[smp_processor_id()]);
 }
 
 irq_hw_number_t virq_to_hw(unsigned int virq)
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 49d0ebf28ab9..be3e64cf28b4 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -1963,10 +1963,6 @@ static inline int valid_irq_stack(unsigned long sp, struct task_struct *p,
 	if (sp >= stack_page && sp <= stack_page + THREAD_SIZE - nbytes)
 		return 1;
 
-	stack_page = (unsigned long)softirq_ctx[cpu];
-	if (sp >= stack_page && sp <= stack_page + THREAD_SIZE - nbytes)
-		return 1;
-
 	return 0;
 }
 
diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index dcffe927f5b9..8752aae06177 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -155,10 +155,8 @@ void __init irqstack_early_init(void)
 
 	/* interrupt stacks must be in lowmem, we get that for free on ppc32
 	 * as the memblock is limited to lowmem by default */
-	for_each_possible_cpu(i) {
-		softirq_ctx[i] = alloc_stack();
+	for_each_possible_cpu(i)
 		hardirq_ctx[i] = alloc_stack();
-	}
 }
 
 #if defined(CONFIG_BOOKE) || defined(CONFIG_40x)
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 6104917a282d..96ee7627eda6 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -652,10 +652,8 @@ void __init irqstack_early_init(void)
 	 * cannot afford to take SLB misses on them. They are not
 	 * accessed in realmode.
 	 */
-	for_each_possible_cpu(i) {
-		softirq_ctx[i] = alloc_stack(limit, i);
+	for_each_possible_cpu(i)
 		hardirq_ctx[i] = alloc_stack(limit, i);
-	}
 }
 
 #ifdef CONFIG_PPC_BOOK3E
-- 
2.13.3



