* [PATCH 0/4] powerpc/32: memset() optimisations
@ 2017-08-23 14:54 Christophe Leroy
  2017-08-23 14:54 ` [PATCH 1/4] powerpc/32: add memset16() Christophe Leroy
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Christophe Leroy @ 2017-08-23 14:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

This series provides small optimisations of memset() for PPC32.

Christophe Leroy (4):
  powerpc/32: add memset16()
  powerpc: fix location of two EXPORT_SYMBOL
  powerpc/32: optimise memset()
  powerpc/32: remove a NOP from memset()

 arch/powerpc/include/asm/string.h |  4 +++-
 arch/powerpc/kernel/setup_32.c    |  7 ++++++-
 arch/powerpc/lib/copy_32.S        | 44 ++++++++++++++++++++++++++++++---------
 arch/powerpc/mm/hash_low_32.S     |  2 +-
 4 files changed, 44 insertions(+), 13 deletions(-)

-- 
2.13.3


* [PATCH 1/4] powerpc/32: add memset16()
  2017-08-23 14:54 [PATCH 0/4] powerpc/32: memset() optimisations Christophe Leroy
@ 2017-08-23 14:54 ` Christophe Leroy
  2017-09-01 13:29   ` [1/4] " Michael Ellerman
  2017-08-23 14:54 ` [PATCH 2/4] powerpc: fix location of two EXPORT_SYMBOL Christophe Leroy
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Christophe Leroy @ 2017-08-23 14:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev, Naveen N. Rao

Commit 694fc88ce271f ("powerpc/string: Implement optimized
memset variants") added memset16(), memset32() and memset64()
for 64-bit PPC.

On 32-bit, memset64() is not relevant, and as shown below, the
generic version of memset32() already generates good code, so only
memset16() is a candidate for an optimised version.

000009c0 <memset32>:
 9c0:   2c 05 00 00     cmpwi   r5,0
 9c4:   39 23 ff fc     addi    r9,r3,-4
 9c8:   4d 82 00 20     beqlr
 9cc:   7c a9 03 a6     mtctr   r5
 9d0:   94 89 00 04     stwu    r4,4(r9)
 9d4:   42 00 ff fc     bdnz    9d0 <memset32+0x10>
 9d8:   4e 80 00 20     blr

The last part of memset(), which handles lengths that are not
multiples of 4 bytes, operates on single bytes, making it unsuitable
for storing 16-bit values without modification. As adapting it would
increase memset() complexity, it is better to implement memset16()
from scratch. This also has the advantage of allowing a more
optimised memset16() than we would get by reusing memset().
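
For illustration, the new assembly below is roughly equivalent to the
following C sketch (illustrative only; memset16_sketch is a made-up
name, not code from this patch, and like the assembly it relies on the
CPU handling possibly misaligned word stores):

#include <stddef.h>
#include <stdint.h>

void *memset16_sketch(uint16_t *p, uint16_t v, size_t n)
{
	uint32_t w = ((uint32_t)v << 16) | v;	/* rlwimi: v in both halfwords */
	uint32_t *q = (uint32_t *)p;
	size_t i;

	for (i = 0; i < n / 2; i++)		/* rlwinm. + mtctr: n >> 1 word stores */
		*q++ = w;
	if (n & 1)				/* andi.: odd count, one trailing halfword */
		*(uint16_t *)q = v;
	return p;
}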

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/include/asm/string.h |  4 +++-
 arch/powerpc/lib/copy_32.S        | 14 ++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/string.h b/arch/powerpc/include/asm/string.h
index b0e82466d4e8..b9f54bb34db6 100644
--- a/arch/powerpc/include/asm/string.h
+++ b/arch/powerpc/include/asm/string.h
@@ -10,6 +10,7 @@
 #define __HAVE_ARCH_MEMMOVE
 #define __HAVE_ARCH_MEMCMP
 #define __HAVE_ARCH_MEMCHR
+#define __HAVE_ARCH_MEMSET16
 
 extern char * strcpy(char *,const char *);
 extern char * strncpy(char *,const char *, __kernel_size_t);
@@ -24,7 +25,6 @@ extern int memcmp(const void *,const void *,__kernel_size_t);
 extern void * memchr(const void *,int,__kernel_size_t);
 
 #ifdef CONFIG_PPC64
-#define __HAVE_ARCH_MEMSET16
 #define __HAVE_ARCH_MEMSET32
 #define __HAVE_ARCH_MEMSET64
 
@@ -46,6 +46,8 @@ static inline void *memset64(uint64_t *p, uint64_t v, __kernel_size_t n)
 {
 	return __memset64(p, v, n * 8);
 }
+#else
+extern void *memset16(uint16_t *, uint16_t, __kernel_size_t);
 #endif
 #endif /* __KERNEL__ */
 
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index 8aedbb5f4b86..a14d4df2ebc9 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -67,6 +67,20 @@ CACHELINE_BYTES = L1_CACHE_BYTES
 LG_CACHELINE_BYTES = L1_CACHE_SHIFT
 CACHELINE_MASK = (L1_CACHE_BYTES-1)
 
+_GLOBAL(memset16)
+	rlwinm.	r0, r5, 31, 1, 31
+	addi	r6, r3, -4
+	beq-	2f
+	rlwimi	r4, r4, 16, 0, 15
+	mtctr	r0
+1:	stwu	r4, 4(r6)
+	bdnz	1b
+2:	andi.	r0, r5, 1
+	beqlr
+	sth	r4, 4(r6)
+	blr
+EXPORT_SYMBOL(memset16)
+
 /*
  * Use dcbz on the complete cache lines in the destination
  * to set them to zero.  This requires that the destination
-- 
2.13.3


* [PATCH 2/4] powerpc: fix location of two EXPORT_SYMBOL
  2017-08-23 14:54 [PATCH 0/4] powerpc/32: memset() optimisations Christophe Leroy
  2017-08-23 14:54 ` [PATCH 1/4] powerpc/32: add memset16() Christophe Leroy
@ 2017-08-23 14:54 ` Christophe Leroy
  2017-08-23 14:54 ` [PATCH 3/4] powerpc/32: optimise memset() Christophe Leroy
  2017-08-23 14:54 ` [PATCH 4/4] powerpc/32: remove a NOP from memset() Christophe Leroy
  3 siblings, 0 replies; 10+ messages in thread
From: Christophe Leroy @ 2017-08-23 14:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev, Al Viro

Commit 9445aa1a3062a ("ppc: move exports to definitions")
added EXPORT_SYMBOL() for memset() and flush_hash_pages() in
the middle of the functions.

This patch moves them to the end of the two functions.

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/lib/copy_32.S    | 2 +-
 arch/powerpc/mm/hash_low_32.S | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index a14d4df2ebc9..a3ffeac69eca 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -104,7 +104,6 @@ _GLOBAL(memset)
 	subf	r6,r0,r6
 	cmplwi	0,r4,0
 	bne	2f	/* Use normal procedure if r4 is not zero */
-EXPORT_SYMBOL(memset)
 _GLOBAL(memset_nocache_branch)
 	b	2f	/* Skip optimised bloc until cache is enabled */
 
@@ -140,6 +139,7 @@ _GLOBAL(memset_nocache_branch)
 8:	stbu	r4,1(r6)
 	bdnz	8b
 	blr
+EXPORT_SYMBOL(memset)
 
 /*
  * This version uses dcbz on the complete cache lines in the
diff --git a/arch/powerpc/mm/hash_low_32.S b/arch/powerpc/mm/hash_low_32.S
index 6f962e5cb5e1..ffbd7c0bda96 100644
--- a/arch/powerpc/mm/hash_low_32.S
+++ b/arch/powerpc/mm/hash_low_32.S
@@ -575,7 +575,6 @@ _GLOBAL(flush_hash_pages)
 	rlwinm	r8,r8,0,31,29		/* clear HASHPTE bit */
 	stwcx.	r8,0,r5			/* update the pte */
 	bne-	33b
-EXPORT_SYMBOL(flush_hash_pages)
 
 	/* Get the address of the primary PTE group in the hash table (r3) */
 _GLOBAL(flush_hash_patch_A)
@@ -634,6 +633,7 @@ _GLOBAL(flush_hash_patch_B)
 	SYNC_601
 	isync
 	blr
+EXPORT_SYMBOL(flush_hash_pages)
 
 /*
  * Flush an entry from the TLB
-- 
2.13.3


* [PATCH 3/4] powerpc/32: optimise memset()
  2017-08-23 14:54 [PATCH 0/4] powerpc/32: memset() optimisations Christophe Leroy
  2017-08-23 14:54 ` [PATCH 1/4] powerpc/32: add memset16() Christophe Leroy
  2017-08-23 14:54 ` [PATCH 2/4] powerpc: fix location of two EXPORT_SYMBOL Christophe Leroy
@ 2017-08-23 14:54 ` Christophe Leroy
  2017-08-23 14:54 ` [PATCH 4/4] powerpc/32: remove a NOP from memset() Christophe Leroy
  3 siblings, 0 replies; 10+ messages in thread
From: Christophe Leroy @ 2017-08-23 14:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

There is no need to replicate the set value across a full word when
the length is less than 4, as in that case we only do byte stores.
We can therefore branch straight to the part handling short lengths.
By separating it from the normal case, we are able to eliminate
a few operations on the destination pointer.
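
In rough C terms the resulting control flow looks like the sketch
below (illustrative only, not kernel code; the real routine's
alignment fix-up and dcbz/cacheline loop are replaced here by a plain
word loop):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

void *memset_sketch(void *s, int c, size_t n)
{
	unsigned char *p = s;
	size_t i = 0;

	if (n < 4) {				/* cmplwi 0,r5,4 ; blt 7f */
		while (n--)			/* byte stores only, no word splat */
			*p++ = c;
		return s;
	}

	uint32_t w = (unsigned char)c * 0x01010101u;	/* rlwimi pair: c in every byte */
	for (; i + 4 <= n; i += 4)
		memcpy(p + i, &w, 4);		/* stands in for the stw/dcbz loops */
	for (; i < n; i++)			/* trailing bytes */
		p[i] = c;
	return s;
}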

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/lib/copy_32.S | 21 ++++++++++++++-------
 1 file changed, 14 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index a3ffeac69eca..05aaee20590f 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -91,17 +91,17 @@ EXPORT_SYMBOL(memset16)
  * replaced by a nop once cache is active. This is done in machine_init()
  */
 _GLOBAL(memset)
+	cmplwi	0,r5,4
+	blt	7f
+
 	rlwimi	r4,r4,8,16,23
 	rlwimi	r4,r4,16,0,15
 
-	addi	r6,r3,-4
-	cmplwi	0,r5,4
-	blt	7f
-	stwu	r4,4(r6)
+	stw	r4,0(r3)
 	beqlr
-	andi.	r0,r6,3
+	andi.	r0,r3,3
 	add	r5,r0,r5
-	subf	r6,r0,r6
+	subf	r6,r0,r3
 	cmplwi	0,r4,0
 	bne	2f	/* Use normal procedure if r4 is not zero */
 _GLOBAL(memset_nocache_branch)
@@ -132,13 +132,20 @@ _GLOBAL(memset_nocache_branch)
 1:	stwu	r4,4(r6)
 	bdnz	1b
 6:	andi.	r5,r5,3
-7:	cmpwi	0,r5,0
 	beqlr
 	mtctr	r5
 	addi	r6,r6,3
 8:	stbu	r4,1(r6)
 	bdnz	8b
 	blr
+
+7:	cmpwi	0,r5,0
+	beqlr
+	mtctr	r5
+	addi	r6,r3,-1
+9:	stbu	r4,1(r6)
+	bdnz	9b
+	blr
 EXPORT_SYMBOL(memset)
 
 /*
-- 
2.13.3


* [PATCH 4/4] powerpc/32: remove a NOP from memset()
  2017-08-23 14:54 [PATCH 0/4] powerpc/32: memset() optimisations Christophe Leroy
                   ` (2 preceding siblings ...)
  2017-08-23 14:54 ` [PATCH 3/4] powerpc/32: optimise memset() Christophe Leroy
@ 2017-08-23 14:54 ` Christophe Leroy
  2017-08-24 10:51   ` Michael Ellerman
  3 siblings, 1 reply; 10+ messages in thread
From: Christophe Leroy @ 2017-08-23 14:54 UTC (permalink / raw)
  To: Benjamin Herrenschmidt, Paul Mackerras, Michael Ellerman, Scott Wood
  Cc: linux-kernel, linuxppc-dev

memset() is patched after initialisation to activate the
optimised part which uses cache instructions.

Today we have a 'b 2f' to skip the optimised part; it then gets
replaced by a NOP, which still needlessly consumes a cycle.
As we have a 'bne 2f' just before, we could use that instruction
for the live patching, hence removing the need for a
dedicated 'b 2f' to be replaced by a NOP.

This patch changes the 'bne 2f' into a 'b 2f'. During init, that
'b 2f' is then replaced by 'bne 2f'.
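
As background on the 0x820000 constant passed to create_cond_branch()
in the setup_32.c hunk below: it carries the BO/BI fields of the
conditional branch. A small sketch (illustration only, using the
standard B-form bit positions) decoding it:

#include <stdio.h>

int main(void)
{
	unsigned int flags = 0x820000;
	unsigned int bo = (flags >> 21) & 0x1f;	/* BO = 0b00100: branch if CR bit is 0 */
	unsigned int bi = (flags >> 16) & 0x1f;	/* BI = 2: the EQ bit of CR0 */

	printf("BO=%u BI=%u\n", bo, bi);	/* branch if CR0[EQ] is 0, i.e. bne cr0 */
	return 0;
}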

Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>
---
 arch/powerpc/kernel/setup_32.c | 7 ++++++-
 arch/powerpc/lib/copy_32.S     | 7 +++++--
 2 files changed, 11 insertions(+), 3 deletions(-)

diff --git a/arch/powerpc/kernel/setup_32.c b/arch/powerpc/kernel/setup_32.c
index 2f88f6cf1a42..51ebc01fff52 100644
--- a/arch/powerpc/kernel/setup_32.c
+++ b/arch/powerpc/kernel/setup_32.c
@@ -98,6 +98,9 @@ extern unsigned int memset_nocache_branch; /* Insn to be replaced by NOP */
 
 notrace void __init machine_init(u64 dt_ptr)
 {
+	unsigned int *addr = &memset_nocache_branch;
+	unsigned long insn;
+
 	/* Configure static keys first, now that we're relocated. */
 	setup_feature_keys();
 
@@ -105,7 +108,9 @@ notrace void __init machine_init(u64 dt_ptr)
 	udbg_early_init();
 
 	patch_instruction((unsigned int *)&memcpy, PPC_INST_NOP);
-	patch_instruction(&memset_nocache_branch, PPC_INST_NOP);
+
+	insn = create_cond_branch(addr, branch_target(addr), 0x820000);
+	patch_instruction(addr, insn);	/* replace b by bne cr0 */
 
 	/* Do some early initialization based on the flat device tree */
 	early_init_devtree(__va(dt_ptr));
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index 05aaee20590f..da425bb6b369 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -103,9 +103,12 @@ _GLOBAL(memset)
 	add	r5,r0,r5
 	subf	r6,r0,r3
 	cmplwi	0,r4,0
-	bne	2f	/* Use normal procedure if r4 is not zero */
+	/*
+	 * Skip optimised bloc until cache is enabled. Will be replaced
+	 * by 'bne' during boot to use normal procedure if r4 is not zero
+	 */
 _GLOBAL(memset_nocache_branch)
-	b	2f	/* Skip optimised bloc until cache is enabled */
+	b	2f
 
 	clrlwi	r7,r6,32-LG_CACHELINE_BYTES
 	add	r8,r7,r5
-- 
2.13.3


* Re: [PATCH 4/4] powerpc/32: remove a NOP from memset()
  2017-08-23 14:54 ` [PATCH 4/4] powerpc/32: remove a NOP from memset() Christophe Leroy
@ 2017-08-24 10:51   ` Michael Ellerman
  2017-08-24 13:58     ` Christophe LEROY
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Ellerman @ 2017-08-24 10:51 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linux-kernel, linuxppc-dev

Christophe Leroy <christophe.leroy@c-s.fr> writes:

> memset() is patched after initialisation to activate the
> optimised part which uses cache instructions.
>
> Today we have a 'b 2f' to skip the optimised part; it then gets
> replaced by a NOP, which still needlessly consumes a cycle.
> As we have a 'bne 2f' just before, we could use that instruction
> for the live patching, hence removing the need for a
> dedicated 'b 2f' to be replaced by a NOP.
>
> This patch changes the 'bne 2f' into a 'b 2f'. During init, that
> 'b 2f' is then replaced by 'bne 2f'.

I'm not sure what the sequence is during boot for the 32-bit code, but
can you use an ALT_FTR section for this? Possibly that doesn't get done
at the right time though.

cheers


* Re: [PATCH 4/4] powerpc/32: remove a NOP from memset()
  2017-08-24 10:51   ` Michael Ellerman
@ 2017-08-24 13:58     ` Christophe LEROY
  2017-08-25  0:15         ` Michael Ellerman
  0 siblings, 1 reply; 10+ messages in thread
From: Christophe LEROY @ 2017-08-24 13:58 UTC (permalink / raw)
  To: Michael Ellerman, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linux-kernel, linuxppc-dev



On 24/08/2017 at 12:51, Michael Ellerman wrote:
> Christophe Leroy <christophe.leroy@c-s.fr> writes:
> 
>> memset() is patched after initialisation to activate the
>> optimised part which uses cache instructions.
>>
>> Today we have a 'b 2f' to skip the optimised part; it then gets
>> replaced by a NOP, which still needlessly consumes a cycle.
>> As we have a 'bne 2f' just before, we could use that instruction
>> for the live patching, hence removing the need for a
>> dedicated 'b 2f' to be replaced by a NOP.
>>
>> This patch changes the 'bne 2f' into a 'b 2f'. During init, that
>> 'b 2f' is then replaced by 'bne 2f'.
> 
> I'm not sure what the sequence is during boot for the 32-bit code, but
> can you use an ALT_FTR section for this? Possibly that doesn't get done
> at the right time though.

Unfortunately, as we discussed in 2015 
(https://lkml.org/lkml/2015/9/10/608), the ALT_FTR does things too 
early, while the cache is not enabled yet.

Christophe


* Re: [PATCH 4/4] powerpc/32: remove a NOP from memset()
  2017-08-24 13:58     ` Christophe LEROY
@ 2017-08-25  0:15         ` Michael Ellerman
  0 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2017-08-25  0:15 UTC (permalink / raw)
  To: Christophe LEROY, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: linux-kernel, linuxppc-dev

Christophe LEROY <christophe.leroy@c-s.fr> writes:

> On 24/08/2017 at 12:51, Michael Ellerman wrote:
>> Christophe Leroy <christophe.leroy@c-s.fr> writes:
>> 
>>> memset() is patched after initialisation to activate the
>>> optimised part which uses cache instructions.
>>>
>>> Today we have a 'b 2f' to skip the optimised part; it then gets
>>> replaced by a NOP, which still needlessly consumes a cycle.
>>> As we have a 'bne 2f' just before, we could use that instruction
>>> for the live patching, hence removing the need for a
>>> dedicated 'b 2f' to be replaced by a NOP.
>>>
>>> This patch changes the 'bne 2f' into a 'b 2f'. During init, that
>>> 'b 2f' is then replaced by 'bne 2f'.
>> 
>> I'm not sure what the sequence is during boot for the 32-bit code, but
>> can you use an ALT_FTR section for this? Possibly that doesn't get done
>> at the right time though.
>
> Unfortunately, as we discussed in 2015 
> (https://lkml.org/lkml/2015/9/10/608),

Haha, you expect me to remember things I said then! ;)

> the ALT_FTR does things too early, while the cache is not enabled yet.

OK. Ben did do some reworks to the early init since then, but I don't
think he changed that.

I notice we do setup_feature_keys() in machine_init(), which is the jump
label equivalent of apply_feature_fixups(). So I wonder if we could
actually move apply_feature_fixups() to there. But it would need some
serious review.

cheers



* Re: [1/4] powerpc/32: add memset16()
  2017-08-23 14:54 ` [PATCH 1/4] powerpc/32: add memset16() Christophe Leroy
@ 2017-09-01 13:29   ` Michael Ellerman
  0 siblings, 0 replies; 10+ messages in thread
From: Michael Ellerman @ 2017-09-01 13:29 UTC (permalink / raw)
  To: Christophe Leroy, Benjamin Herrenschmidt, Paul Mackerras, Scott Wood
  Cc: Naveen N. Rao, linuxppc-dev, linux-kernel

On Wed, 2017-08-23 at 14:54:32 UTC, Christophe Leroy wrote:
> Commit 694fc88ce271f ("powerpc/string: Implement optimized
> memset variants") added memset16(), memset32() and memset64()
> for 64-bit PPC.
>
> On 32-bit, memset64() is not relevant, and as shown below, the
> generic version of memset32() already generates good code, so only
> memset16() is a candidate for an optimised version.
> 
> 000009c0 <memset32>:
>  9c0:   2c 05 00 00     cmpwi   r5,0
>  9c4:   39 23 ff fc     addi    r9,r3,-4
>  9c8:   4d 82 00 20     beqlr
>  9cc:   7c a9 03 a6     mtctr   r5
>  9d0:   94 89 00 04     stwu    r4,4(r9)
>  9d4:   42 00 ff fc     bdnz    9d0 <memset32+0x10>
>  9d8:   4e 80 00 20     blr
> 
> The last part of memset(), which handles lengths that are not
> multiples of 4 bytes, operates on single bytes, making it unsuitable
> for storing 16-bit values without modification. As adapting it would
> increase memset() complexity, it is better to implement memset16()
> from scratch. This also has the advantage of allowing a more
> optimised memset16() than we would get by reusing memset().
> 
> Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr>

Series applied to powerpc next, thanks.

https://git.kernel.org/powerpc/c/da74f659205ea08cb0fd0b3050637b

cheers

