[v3,2/2] powerpc/irq: inline call_do_irq() and call_do_softirq()

Message ID 5fb4aedadbd28b9849cf2fabe13392fb3b5cd3ed.1568821668.git.christophe.leroy@c-s.fr
State New
Series
  • [v3,1/2] powerpc/irq: bring back ksp_limit management in C functions.

Commit Message

Christophe Leroy Sept. 18, 2019, 3:48 p.m. UTC
call_do_irq() and call_do_softirq() are quite similar on PPC32 and
PPC64 and are simple enough to be worth inlining.

Inlining them avoids an mflr/mtlr pair plus a save/reload on stack.

This is inspired by the S390 arch. Several other arches do more or
less the same. The way the sparc arch does it seems odd, though.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

---
v2: no change.
v3: no change.
---
 arch/powerpc/include/asm/irq.h |  2 --
 arch/powerpc/kernel/irq.c      | 26 ++++++++++++++++++++++++++
 arch/powerpc/kernel/misc_32.S  | 25 -------------------------
 arch/powerpc/kernel/misc_64.S  | 22 ----------------------
 4 files changed, 26 insertions(+), 49 deletions(-)

Comments

Segher Boessenkool Sept. 18, 2019, 4:39 p.m. UTC | #1
Hi Christophe,

On Wed, Sep 18, 2019 at 03:48:20PM +0000, Christophe Leroy wrote:
> call_do_irq() and call_do_softirq() are quite similar on PPC32 and
> PPC64 and are simple enough to be worth inlining.
> 
> Inlining them avoids an mflr/mtlr pair plus a save/reload on stack.

But you hardcode the calling sequence in inline asm, which for various
reasons is not a great idea.

> +static inline void call_do_irq(struct pt_regs *regs, void *sp)
> +{
> +	register unsigned long r3 asm("r3") = (unsigned long)regs;
> +
> +	asm volatile(
> +		"	"PPC_STLU"	1, %2(%1);\n"
> +		"	mr		1, %1;\n"
> +		"	bl		%3;\n"
> +		"	"PPC_LL"	1, 0(1);\n" : "+r"(r3) :
> +		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_irq) :
> +		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
> +		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
> +}

I realise the original code had this...  Loading the old stack pointer
value back from the stack creates a bottleneck (via the store->load
forwarding it requires).  It could just use
  addi 1,1,-(%2)
here, which can also be written as
  addi 1,1,%n2
(that is portable to all architectures btw).
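
For illustration (an untested sketch, with a made-up helper name and an
arbitrary frame size of 64), the two spellings generate the same
instruction:

static inline long frame_down_twice(long sp)
{
	long out;

	/* Negation spelled out in the assembler source: */
	asm("addi %0,%1,-(%2)" : "=r" (out) : "r" (sp), "i" (64));
	/* Same subtraction, with GCC printing the negated constant: */
	asm("addi %0,%1,%n2" : "=r" (out) : "r" (out), "i" (64));
	return out;	/* sp - 128; each asm subtracted 64 */
}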


Please write the  "+r"(r3)  on the next line?  Not on the same line as
the multi-line template.  This makes things more readable.


I don't know if using functions as an "i" works properly...  It probably
does, it's just not something that you see often :-)


What about r2?  Various ABIs handle that differently.  This might make
it impossible to share implementation between 32-bit and 64-bit for this.
But we could add it to the clobber list worst case, that will always work.


So anyway, it looks to me like it will work.  Nice cleanup.  Would be
better if you could do the call to __do_irq from C code, but maybe we
cannot have everything ;-)

Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>


Segher
Christophe Leroy Sept. 19, 2019, 5:23 a.m. UTC | #2
On 18/09/2019 at 18:39, Segher Boessenkool wrote:
> Hi Christophe,
> 
> On Wed, Sep 18, 2019 at 03:48:20PM +0000, Christophe Leroy wrote:
>> call_do_irq() and call_do_softirq() are quite similar on PPC32 and
>> PPC64 and are simple enough to be worth inlining.
>>
>> Inlining them avoids an mflr/mtlr pair plus a save/reload on stack.
> 
> But you hardcode the calling sequence in inline asm, which for various
> reasons is not a great idea.
> 
>> +static inline void call_do_irq(struct pt_regs *regs, void *sp)
>> +{
>> +	register unsigned long r3 asm("r3") = (unsigned long)regs;
>> +
>> +	asm volatile(
>> +		"	"PPC_STLU"	1, %2(%1);\n"
>> +		"	mr		1, %1;\n"
>> +		"	bl		%3;\n"
>> +		"	"PPC_LL"	1, 0(1);\n" : "+r"(r3) :
>> +		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_irq) :
>> +		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
>> +		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
>> +}
> 
> I realise the original code had this...  Loading the old stack pointer
> value back from the stack creates a bottleneck (via the store->load
> forwarding it requires).  It could just use
>    addi 1,1,-(%2)
> here, which can also be written as
>    addi 1,1,%n2
> (that is portable to all architectures btw).

No, we switched stacks before the bl: we saved the old r1 into the stack 
at r3, then replaced r1 with r3. Now we have to restore the original r1 
from there.
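
Annotated (PPC64 mnemonics, FRAME = THREAD_SIZE - STACK_FRAME_OVERHEAD):

	stdu	1, FRAME(r3)	# save old r1 at r3+FRAME, r3 += FRAME
	mr	1, r3		# switch r1 onto the irq stack
	bl	__do_irq	# run the handler on that stack
	ld	1, 0(1)		# reload the old r1 the stdu saved

The old r1 lives on a different stack entirely, so no addi relative to 
the current r1 can recover it; it has to be loaded back.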

> 
> 
> Please write the  "+r"(r3)  on the next line?  Not on the same line as
> the multi-line template.  This makes things more readable.
> 
> 
> I don't know if using functions as an "i" works properly...  It probably
> does, it's just not something that you see often :-)
> 
> 
> What about r2?  Various ABIs handle that differently.  This might make
> it impossible to share implementation between 32-bit and 64-bit for this.
> But we could add it to the clobber list worst case, that will always work.

Isn't r2 non-volatile on all ABIs?

> 
> 
> So anyway, it looks to me like it will work.  Nice cleanup.  Would be
> better if you could do the call to __do_irq from C code, but maybe we
> cannot have everything ;-)

sparc does it the following way; is there no risk that GCC adds unwanted 
code in between that is not aware that the stack pointer has changed?

void do_softirq_own_stack(void)
{
	void *orig_sp, *sp = softirq_stack[smp_processor_id()];

	sp += THREAD_SIZE - 192 - STACK_BIAS;

	__asm__ __volatile__("mov %%sp, %0\n\t"
			     "mov %1, %%sp"
			     : "=&r" (orig_sp)
			     : "r" (sp));
	__do_softirq();
	__asm__ __volatile__("mov %0, %%sp"
			     : : "r" (orig_sp));
}

If the above is safe, then can we do the same on powerpc?
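
A powerpc transcription would look something like this (only a sketch 
to illustrate the question, not a proposal):

void do_softirq_own_stack(void)
{
	void *orig_sp, *sp = softirq_ctx[smp_processor_id()];

	sp += THREAD_SIZE - STACK_FRAME_OVERHEAD;

	asm volatile("mr %0, 1\n\t"
		     "mr 1, %1"
		     : "=&r" (orig_sp)
		     : "r" (sp));
	__do_softirq();
	asm volatile("mr 1, %0" : : "r" (orig_sp));
}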

> 
> Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>

Thanks for the review.

Christophe
Segher Boessenkool Sept. 19, 2019, 7:05 p.m. UTC | #3
On Thu, Sep 19, 2019 at 07:23:18AM +0200, Christophe Leroy wrote:
> On 18/09/2019 at 18:39, Segher Boessenkool wrote:
> >I realise the original code had this...  Loading the old stack pointer
> >value back from the stack creates a bottleneck (via the store->load
> >forwarding it requires).  It could just use
> >   addi 1,1,-(%2)
> >here, which can also be written as
> >   addi 1,1,%n2
> >(that is portable to all architectures btw).
> 
> No, we switched stacks before the bl: we saved the old r1 into the stack 
> at r3, then replaced r1 with r3. Now we have to restore the original r1 
> from there.

Yeah wow, I missed that once again.  Whoops.

Add a comment for this?  Just before the asm maybe, "we temporarily switch
r1 to a different stack" or something.
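
Maybe something like:

	/*
	 * Temporarily switch r1 to the irq stack at sp; __do_irq() runs
	 * on that stack, and the final load brings back the original r1
	 * that the stdu/stwu saved there.
	 */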

> >What about r2?  Various ABIs handle that differently.  This might make
> >it impossible to share implementation between 32-bit and 64-bit for this.
> >But we could add it to the clobber list worst case, that will always work.
> 
> Isn't r2 non-volatile on all ABIs ?

It is not.  On ELFv2 it is (or will be) optionally volatile, but like on
ELFv1 it already has special rules as well: the linker is responsible for
restoring it if it is non-volatile, and for that there needs to be a nop
after the bl, etc.
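
For cross-module calls that looks roughly like this (sketch):

	bl	some_external_function
	nop			# the linker patches this to: ld 2,24(r1)

(24(r1) is the ELFv2 TOC save slot; ELFv1 uses 40(r1).)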

But the existing code was in a similar situation and we survived that, I
think we should be fine this way too.  And it won't be too hard to change
again if needed.

> >So anyway, it looks to me like it will work.  Nice cleanup.  Would be
> >better if you could do the call to __do_irq from C code, but maybe we
> >cannot have everything ;-)
> 
> sparc does it the following way; is there no risk that GCC adds unwanted 
> code in between that is not aware that the stack pointer has changed?
> 
> void do_softirq_own_stack(void)
> {
> 	void *orig_sp, *sp = softirq_stack[smp_processor_id()];
> 
> 	sp += THREAD_SIZE - 192 - STACK_BIAS;
> 
> 	__asm__ __volatile__("mov %%sp, %0\n\t"
> 			     "mov %1, %%sp"
> 			     : "=&r" (orig_sp)
> 			     : "r" (sp));
> 	__do_softirq();
> 	__asm__ __volatile__("mov %0, %%sp"
> 			     : : "r" (orig_sp));
> }
> 
> If the above is safe, then can we do the same on powerpc?

No, that is quite a bad idea: it depends on the stack pointer not being
used in any way between the two asms, which this code does not guarantee
(what if it is inlined, for example?).

Doing the stack juggling and the actual call in a single asm is much more
safe and correct.  It's just that you then need asm for the actual call
that works for all ABIs you support, etc. :-)
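
Sketch of the failure mode (invented compiler output, not from a real 
build): nothing tells GCC that r1 changed between the two asms, so it 
is free to schedule a spill/reload pair across the switch:

	std	30,-16(1)	# compiler spill, addressed via old r1
	mr	29,1		# asm 1: save r1...
	mr	1,9		# ...and switch to the softirq stack
	bl	__do_softirq
	ld	30,-16(1)	# reload via r1 -- now the new stack: garbage
	mr	1,29		# asm 2: switch back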


Segher

Patch

diff --git a/arch/powerpc/include/asm/irq.h b/arch/powerpc/include/asm/irq.h
index 0c6469983c66..10476d5283dc 100644
--- a/arch/powerpc/include/asm/irq.h
+++ b/arch/powerpc/include/asm/irq.h
@@ -57,8 +57,6 @@  extern void *mcheckirq_ctx[NR_CPUS];
 extern void *hardirq_ctx[NR_CPUS];
 extern void *softirq_ctx[NR_CPUS];
 
-void call_do_softirq(void *sp);
-void call_do_irq(struct pt_regs *regs, void *sp);
 extern void do_IRQ(struct pt_regs *regs);
 extern void __init init_IRQ(void);
 extern void __do_irq(struct pt_regs *regs);
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 04204be49577..b028c49f9635 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -642,6 +642,20 @@  void __do_irq(struct pt_regs *regs)
 	irq_exit();
 }
 
+static inline void call_do_irq(struct pt_regs *regs, void *sp)
+{
+	register unsigned long r3 asm("r3") = (unsigned long)regs;
+
+	asm volatile(
+		"	"PPC_STLU"	1, %2(%1);\n"
+		"	mr		1, %1;\n"
+		"	bl		%3;\n"
+		"	"PPC_LL"	1, 0(1);\n" : "+r"(r3) :
+		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_irq) :
+		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
+		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
+}
+
 void do_IRQ(struct pt_regs *regs)
 {
 	struct pt_regs *old_regs = set_irq_regs(regs);
@@ -686,6 +700,18 @@  void *mcheckirq_ctx[NR_CPUS] __read_mostly;
 void *softirq_ctx[NR_CPUS] __read_mostly;
 void *hardirq_ctx[NR_CPUS] __read_mostly;
 
+static inline void call_do_softirq(const void *sp)
+{
+	asm volatile(
+		"	"PPC_STLU"	1, %1(%0);\n"
+		"	mr		1, %0;\n"
+		"	bl		%2;\n"
+		"	"PPC_LL"	1, 0(1);\n" : :
+		"b"(sp), "i"(THREAD_SIZE - STACK_FRAME_OVERHEAD), "i"(__do_softirq) :
+		"lr", "xer", "ctr", "memory", "cr0", "cr1", "cr5", "cr6", "cr7",
+		"r0", "r4", "r5", "r6", "r7", "r8", "r9", "r10", "r11", "r12");
+}
+
 void do_softirq_own_stack(void)
 {
 	void *irqsp = softirq_ctx[smp_processor_id()];
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index a5422f7782b3..307307b57743 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -33,31 +33,6 @@ 
 
 	.text
 
-_GLOBAL(call_do_softirq)
-	mflr	r0
-	stw	r0,4(r1)
-	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
-	mr	r1,r3
-	bl	__do_softirq
-	lwz	r1,0(r1)
-	lwz	r0,4(r1)
-	mtlr	r0
-	blr
-
-/*
- * void call_do_irq(struct pt_regs *regs, void *sp);
- */
-_GLOBAL(call_do_irq)
-	mflr	r0
-	stw	r0,4(r1)
-	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
-	mr	r1,r4
-	bl	__do_irq
-	lwz	r1,0(r1)
-	lwz	r0,4(r1)
-	mtlr	r0
-	blr
-
 /*
  * This returns the high 64 bits of the product of two 64-bit numbers.
  */
diff --git a/arch/powerpc/kernel/misc_64.S b/arch/powerpc/kernel/misc_64.S
index b55a7b4cb543..69fd714a5236 100644
--- a/arch/powerpc/kernel/misc_64.S
+++ b/arch/powerpc/kernel/misc_64.S
@@ -27,28 +27,6 @@ 
 
 	.text
 
-_GLOBAL(call_do_softirq)
-	mflr	r0
-	std	r0,16(r1)
-	stdu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)
-	mr	r1,r3
-	bl	__do_softirq
-	ld	r1,0(r1)
-	ld	r0,16(r1)
-	mtlr	r0
-	blr
-
-_GLOBAL(call_do_irq)
-	mflr	r0
-	std	r0,16(r1)
-	stdu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r4)
-	mr	r1,r4
-	bl	__do_irq
-	ld	r1,0(r1)
-	ld	r0,16(r1)
-	mtlr	r0
-	blr
-
 	.section	".toc","aw"
 PPC64_CACHES:
 	.tc		ppc64_caches[TC],ppc64_caches