* [PATCH 0/2] powerpc32: Optimise csum_partial()
From: Christophe Leroy @ 2015-06-30 13:56 UTC
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

The purpose of this patchset is to optimise csum_partial() on powerpc32.
In the first part, we remove some unnecessary instructions.
In the second part, we partially unroll the main loop.
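
For reference, the kernel prototype is __wsum csum_partial(const void *buff,
int len, __wsum sum); it returns the partial ones' complement checksum of the
buffer added into sum. A minimal standalone C model of what the assembly
computes, assuming a 4-byte aligned buffer and a multiple-of-4 length (the
real code also handles the unaligned head and the 1-3 trailing bytes):

	#include <stdint.h>

	uint32_t csum_partial_model(const void *buff, int len, uint32_t sum)
	{
		const uint32_t *p = buff;
		uint64_t s = sum;

		for (; len >= 4; len -= 4)	/* one lwzu/adde pair per word */
			s += *p++;
		s = (s & 0xffffffffu) + (s >> 32);	/* fold carries back in, */
		s = (s & 0xffffffffu) + (s >> 32);	/* as adde/addze do */
		return (uint32_t)s;
	}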

Christophe Leroy (2):
  Optimise a few instructions in csum_partial()
  Optimise csum_partial() loop

 arch/powerpc/lib/checksum_32.S | 53 +++++++++++++++++++++++++-----------------
 1 file changed, 32 insertions(+), 21 deletions(-)

-- 
2.1.0



* [PATCH 1/2] powerpc32: optimise a few instructions in csum_partial()
From: Christophe Leroy @ 2015-06-30 13:56 UTC
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

r5 already contains the value to be updated, so let's use r5 all the
way through for that. It makes the code more readable.

To avoid confusion, it is better to use adde instead of addc.
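
For background: addc adds two registers and sets the carry bit CA, while
adde also adds the incoming CA. A minimal C model of one step of the adde
chain, with illustrative names:

	#include <stdint.h>

	/* One step of the adde chain: a + b + carry-in; carry-out in *ca. */
	uint32_t add_with_carry(uint32_t a, uint32_t b, uint32_t *ca)
	{
		uint64_t t = (uint64_t)a + b + *ca;

		*ca = (uint32_t)(t >> 32);	/* 0 or 1, like the CA bit */
		return (uint32_t)t;
	}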

The first addition is useless; its only purpose is to clear the carry.
As r4 is a signed int that is always positive, the carry can be cleared
for free by using srawi instead of srwi for the initial shift.
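
This works because, for a non-negative int, an arithmetic right shift
yields the same quotient as a logical one, and srawi only sets CA when
the operand is negative and 1-bits are shifted out. A quick standalone
check of the shift equivalence:

	#include <assert.h>

	int main(void)
	{
		int len;

		for (len = 0; len <= 65535; len++)
			/* len >> 2 (srawi) == (unsigned)len >> 2 (srwi) for len >= 0 */
			assert((len >> 2) == (int)((unsigned)len >> 2));
		return 0;
	}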

Let's also remove the comment saying that bdnz has no overhead, as that
is not true on all powerpc variants, at least not on the MPC8xx.

In the last part, the remaining number of bytes to be processed is known
to be between 0 and 3. Therefore, we can base that part on bits 30 and 31
of r4 instead of ANDing r4 with 3 and then proceeding with comparisons
and subtractions.
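
In C terms, the new tail handling amounts to testing the two low bits of
the length directly; a sketch with illustrative names, using plain adds
instead of the carry-propagating adde:

	#include <stdint.h>

	/* Handle the 0 to 3 bytes left after the word loop (big-endian). */
	uint32_t sum_tail(const uint8_t *buf, int len, uint32_t sum)
	{
		if (len & 2) {			/* bit 30 set: 2 or 3 bytes left */
			sum += (uint32_t)buf[0] << 8 | buf[1];
			buf += 2;
		}
		if (len & 1)			/* bit 31 set: one last byte */
			sum += (uint32_t)buf[0] << 8;	/* upper byte of a halfword */
		return sum;
	}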

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/lib/checksum_32.S | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/lib/checksum_32.S b/arch/powerpc/lib/checksum_32.S
index 7b95a68..2e4879c 100644
--- a/arch/powerpc/lib/checksum_32.S
+++ b/arch/powerpc/lib/checksum_32.S
@@ -64,35 +64,32 @@ _GLOBAL(csum_tcpudp_magic)
  * csum_partial(buff, len, sum)
  */
 _GLOBAL(csum_partial)
-	addic	r0,r5,0
 	subi	r3,r3,4
-	srwi.	r6,r4,2
+	srawi.	r6,r4,2		/* Divide len by 4 and also clear carry */
 	beq	3f		/* if we're doing < 4 bytes */
-	andi.	r5,r3,2		/* Align buffer to longword boundary */
+	andi.	r0,r3,2		/* Align buffer to longword boundary */
 	beq+	1f
-	lhz	r5,4(r3)	/* do 2 bytes to get aligned */
-	addi	r3,r3,2
+	lhz	r0,4(r3)	/* do 2 bytes to get aligned */
 	subi	r4,r4,2
-	addc	r0,r0,r5
+	addi	r3,r3,2
 	srwi.	r6,r4,2		/* # words to do */
+	adde	r5,r5,r0
 	beq	3f
 1:	mtctr	r6
-2:	lwzu	r5,4(r3)	/* the bdnz has zero overhead, so it should */
-	adde	r0,r0,r5	/* be unnecessary to unroll this loop */
+2:	lwzu	r0,4(r3)
+	adde	r5,r5,r0
 	bdnz	2b
-	andi.	r4,r4,3
-3:	cmpwi	0,r4,2
-	blt+	4f
-	lhz	r5,4(r3)
+3:	andi.	r0,r4,2
+	beq+	4f
+	lhz	r0,4(r3)
 	addi	r3,r3,2
-	subi	r4,r4,2
-	adde	r0,r0,r5
-4:	cmpwi	0,r4,1
-	bne+	5f
-	lbz	r5,4(r3)
-	slwi	r5,r5,8		/* Upper byte of word */
-	adde	r0,r0,r5
-5:	addze	r3,r0		/* add in final carry */
+	adde	r5,r5,r0
+4:	andi.	r0,r4,1
+	beq+	5f
+	lbz	r0,4(r3)
+	slwi	r0,r0,8		/* Upper byte of word */
+	adde	r5,r5,r0
+5:	addze	r3,r5		/* add in final carry */
 	blr
 
 /*
-- 
2.1.0



* [PATCH 2/2] powerpc32: optimise csum_partial() loop
From: Christophe Leroy @ 2015-06-30 13:56 UTC
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, scottwood
  Cc: linux-kernel, linuxppc-dev, Joakim Tjernlund

On the 8xx, a load has a latency of 2 cycles and a taken branch also
costs 2 cycles. So let's partially unroll the loop.
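
In C terms, the word loop is split in two: a short loop for the leading
0-3 words, then a loop over 16-byte blocks. A rough model with
illustrative names, leaving out the ones' complement carry folding that
adde performs:

	#include <stdint.h>

	uint32_t sum_words(const uint32_t *p, int len, uint32_t sum)
	{
		int nwords = len >> 2;
		int i;

		for (i = 0; i < (nwords & 3); i++)	/* leading 0-3 words */
			sum += *p++;
		for (i = 0; i < (len >> 4); i++) {	/* blocks of 4 words */
			sum += p[0] + p[1] + p[2] + p[3];
			p += 4;
		}
		return sum;
	}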

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/lib/checksum_32.S | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/checksum_32.S b/arch/powerpc/lib/checksum_32.S
index 2e4879c..9c48ee0 100644
--- a/arch/powerpc/lib/checksum_32.S
+++ b/arch/powerpc/lib/checksum_32.S
@@ -75,10 +75,24 @@ _GLOBAL(csum_partial)
 	srwi.	r6,r4,2		/* # words to do */
 	adde	r5,r5,r0
 	beq	3f
-1:	mtctr	r6
+1:	andi.	r6,r6,3		/* Prepare to handle words 4 by 4 */
+	beq	21f
+	mtctr	r6
 2:	lwzu	r0,4(r3)
 	adde	r5,r5,r0
 	bdnz	2b
+21:	srwi.	r6,r4,4		/* # blocks of 4 words to do */
+	beq	3f
+	mtctr	r6
+22:	lwzu	r0,4(r3)
+	lwzu	r6,4(r3)
+	lwzu	r7,4(r3)
+	lwzu	r8,4(r3)
+	adde	r5,r5,r0
+	adde	r5,r5,r6
+	adde	r5,r5,r7
+	adde	r5,r5,r8
+	bdnz	22b
 3:	andi.	r0,r4,2
 	beq+	4f
 	lhz	r0,4(r3)
-- 
2.1.0

