* [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions
@ 2021-08-11 14:02 Stefan Roese
2021-08-11 14:02 ` [PATCH v3 1/3] arm64: arch/arm/lib: Add optimized memset/memcpy/memmove functions Stefan Roese
` (3 more replies)
0 siblings, 4 replies; 7+ messages in thread
From: Stefan Roese @ 2021-08-11 14:02 UTC (permalink / raw)
To: u-boot; +Cc: trini, Rasmus Villemoes, sjg, Wolfgang Denk
On an NXP LX2160 based platform it has been noticed that the currently
implemented memset/memcpy functions for aarch64 are suboptimal.
Especially memset(), used for clearing the NXP MC firmware memory, is very
expensive (time-wise).
By using optimized functions, a speedup of roughly a factor of 6 has been
measured.
This patchset now adds the optimized functions ported from this
repository:
https://github.com/ARM-software/optimized-routines
As the optimized memset function makes use of the dc opcode, which requires
the caches to be enabled, an additional check is added and a simple memset
version is used when the caches are disabled.
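For illustration, here is a rough C sketch of that dispatch. It assumes
U-Boot's existing dcache_status() helper (declared in <cpu_func.h>) for the
check; memset_dczva() is a purely hypothetical name standing in for the
optimized assembly routine. The real check is done in assembly, see patch
2/3:

#include <cpu_func.h>		/* dcache_status() */
#include <linux/types.h>

void *memset_dczva(void *s, int c, size_t n);	/* hypothetical: DC ZVA fast path */

/* Trivial byte-wise memset, safe to run with the data cache disabled. */
static void *memset_simple(void *s, int c, size_t n)
{
	unsigned char *p = s;

	while (n--)
		*p++ = (unsigned char)c;
	return s;
}

void *memset(void *s, int c, size_t n)
{
	/* 'dc zva' must not be used with the data cache off, so fall back. */
	if (!dcache_status())
		return memset_simple(s, c, n);

	return memset_dczva(s, c, n);
}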
Please note that checkpatch.pl complains about some issues with the
imported file arch/arm/lib/asmdefs.h.
Since it is imported, I explicitly did not make any changes here, to make
potential future syncing easier.
Thanks,
Stefan
Changes in v3:
- Add memmove alias, as the memcpy implementation also handles memmove in
  an optimized way
- Add memmove as well
Changes in v2:
- Add file names, locations, and git commit IDs of the imported files
  to the commit message
- New patch
Stefan Roese (3):
arm64: arch/arm/lib: Add optimized memset/memcpy/memmove functions
arm64: memset-arm64: Use simple memset when cache is disabled
arm64: Kconfig: Enable usage of optimized memset/memcpy/memmove
arch/arm/Kconfig | 38 +++++-
arch/arm/include/asm/string.h | 4 +
arch/arm/lib/Makefile | 5 +
arch/arm/lib/asmdefs.h | 98 ++++++++++++++
arch/arm/lib/memcpy-arm64.S | 242 ++++++++++++++++++++++++++++++++++
arch/arm/lib/memset-arm64.S | 146 ++++++++++++++++++++
6 files changed, 527 insertions(+), 6 deletions(-)
create mode 100644 arch/arm/lib/asmdefs.h
create mode 100644 arch/arm/lib/memcpy-arm64.S
create mode 100644 arch/arm/lib/memset-arm64.S
--
2.32.0
* [PATCH v3 1/3] arm64: arch/arm/lib: Add optimized memset/memcpy/memmove functions
2021-08-11 14:02 [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Stefan Roese
@ 2021-08-11 14:02 ` Stefan Roese
2021-08-11 14:02 ` [PATCH v3 2/3] arm64: memset-arm64: Use simple memset when cache is disabled Stefan Roese
` (2 subsequent siblings)
3 siblings, 0 replies; 7+ messages in thread
From: Stefan Roese @ 2021-08-11 14:02 UTC (permalink / raw)
To: u-boot; +Cc: trini, Rasmus Villemoes, sjg, Wolfgang Denk
Ported from https://github.com/ARM-software/optimized-routines
These files are imported from this repository at the following git
commit IDs:
string/aarch64/memcpy.S: afd6244a1f8d
string/aarch64/memset.S: e823e3abf5f8
string/asmdefs.h: e823e3abf5f8
Note that memmove is also handled by the memcpy function.
Please note that when these optimized functions are used as the default
memset/memcpy functions in U-Boot, U-Boot fails to boot on the LX2160ARDB.
After the initial ATF output, no U-Boot output is shown on the serial
console. An exception is triggered very early in the boot process, as some
of the assembler opcodes require the caches to be enabled.
Because of this, a follow-up patch adds a check and falls back to a simple,
non-optimized memset in the "cache disabled" case.
Note:
I also integrated and tested the Linux versions of these optimized
functions. They are similar to the ones integrated here, but these Arm
versions are still slightly faster.
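For readers less familiar with the assembly, here is a simplified,
byte-wise C sketch of the overlap handling that lets one entry point serve
both memcpy() and memmove(). It only illustrates the direction check; the
actual implementation copies in large aligned chunks:

#include <stddef.h>
#include <stdint.h>

void *memmove_sketch(void *dst, const void *src, size_t n)
{
	unsigned char *d = dst;
	const unsigned char *s = src;

	/*
	 * If dst lies within [src, src + n), a forward copy would overwrite
	 * source bytes before reading them, so copy backwards instead.
	 * (The assembly does the same check: dst - src compared against n.)
	 */
	if ((uintptr_t)(d - s) < n) {
		d += n;
		s += n;
		while (n--)
			*--d = *--s;
	} else {
		while (n--)
			*d++ = *s++;
	}

	return dst;
}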
Signed-off-by: Stefan Roese <sr@denx.de>
---
Changes in v3:
- Add memmove alias, as the memcpy implementation also handles memmove in
  an optimized way
Changes in v2:
- Add file names, locations, and git commit IDs of the imported files
  to the commit message
arch/arm/lib/Makefile | 5 +
arch/arm/lib/asmdefs.h | 98 +++++++++++++++
arch/arm/lib/memcpy-arm64.S | 242 ++++++++++++++++++++++++++++++++++++
arch/arm/lib/memset-arm64.S | 116 +++++++++++++++++
4 files changed, 461 insertions(+)
create mode 100644 arch/arm/lib/asmdefs.h
create mode 100644 arch/arm/lib/memcpy-arm64.S
create mode 100644 arch/arm/lib/memset-arm64.S
diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
index 7f6633271518..c48e1f622d3c 100644
--- a/arch/arm/lib/Makefile
+++ b/arch/arm/lib/Makefile
@@ -39,8 +39,13 @@ obj-$(CONFIG_$(SPL_TPL_)FRAMEWORK) += spl.o
obj-$(CONFIG_SPL_FRAMEWORK) += zimage.o
obj-$(CONFIG_OF_LIBFDT) += bootm-fdt.o
endif
+ifdef CONFIG_ARM64
+obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMSET) += memset-arm64.o
+obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMCPY) += memcpy-arm64.o
+else
obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMSET) += memset.o
obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMCPY) += memcpy.o
+endif
obj-$(CONFIG_SEMIHOSTING) += semihosting.o
obj-y += bdinfo.o
diff --git a/arch/arm/lib/asmdefs.h b/arch/arm/lib/asmdefs.h
new file mode 100644
index 000000000000..d307a3a8a25c
--- /dev/null
+++ b/arch/arm/lib/asmdefs.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Macros for asm code.
+ *
+ * Copyright (c) 2019, Arm Limited.
+ */
+
+#ifndef _ASMDEFS_H
+#define _ASMDEFS_H
+
+#if defined(__aarch64__)
+
+/* Branch Target Identification support. */
+#define BTI_C hint 34
+#define BTI_J hint 36
+/* Return address signing support (pac-ret). */
+#define PACIASP hint 25; .cfi_window_save
+#define AUTIASP hint 29; .cfi_window_save
+
+/* GNU_PROPERTY_AARCH64_* macros from elf.h. */
+#define FEATURE_1_AND 0xc0000000
+#define FEATURE_1_BTI 1
+#define FEATURE_1_PAC 2
+
+/* Add a NT_GNU_PROPERTY_TYPE_0 note. */
+#define GNU_PROPERTY(type, value) \
+ .section .note.gnu.property, "a"; \
+ .p2align 3; \
+ .word 4; \
+ .word 16; \
+ .word 5; \
+ .asciz "GNU"; \
+ .word type; \
+ .word 4; \
+ .word value; \
+ .word 0; \
+ .text
+
+/* If set then the GNU Property Note section will be added to
+ mark objects to support BTI and PAC-RET. */
+#ifndef WANT_GNU_PROPERTY
+#define WANT_GNU_PROPERTY 1
+#endif
+
+#if WANT_GNU_PROPERTY
+/* Add property note with supported features to all asm files. */
+GNU_PROPERTY (FEATURE_1_AND, FEATURE_1_BTI|FEATURE_1_PAC)
+#endif
+
+#define ENTRY_ALIGN(name, alignment) \
+ .global name; \
+ .type name,%function; \
+ .align alignment; \
+ name: \
+ .cfi_startproc; \
+ BTI_C;
+
+#else
+
+#define END_FILE
+
+#define ENTRY_ALIGN(name, alignment) \
+ .global name; \
+ .type name,%function; \
+ .align alignment; \
+ name: \
+ .cfi_startproc;
+
+#endif
+
+#define ENTRY(name) ENTRY_ALIGN(name, 6)
+
+#define ENTRY_ALIAS(name) \
+ .global name; \
+ .type name,%function; \
+ name:
+
+#define END(name) \
+ .cfi_endproc; \
+ .size name, .-name;
+
+#define L(l) .L ## l
+
+#ifdef __ILP32__
+ /* Sanitize padding bits of pointer arguments as per aapcs64 */
+#define PTR_ARG(n) mov w##n, w##n
+#else
+#define PTR_ARG(n)
+#endif
+
+#ifdef __ILP32__
+ /* Sanitize padding bits of size arguments as per aapcs64 */
+#define SIZE_ARG(n) mov w##n, w##n
+#else
+#define SIZE_ARG(n)
+#endif
+
+#endif
diff --git a/arch/arm/lib/memcpy-arm64.S b/arch/arm/lib/memcpy-arm64.S
new file mode 100644
index 000000000000..507054d847ea
--- /dev/null
+++ b/arch/arm/lib/memcpy-arm64.S
@@ -0,0 +1,242 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * memcpy - copy memory area
+ *
+ * Copyright (c) 2012-2020, Arm Limited.
+ */
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, unaligned accesses.
+ *
+ */
+
+#include "asmdefs.h"
+
+#define dstin x0
+#define src x1
+#define count x2
+#define dst x3
+#define srcend x4
+#define dstend x5
+#define A_l x6
+#define A_lw w6
+#define A_h x7
+#define B_l x8
+#define B_lw w8
+#define B_h x9
+#define C_l x10
+#define C_lw w10
+#define C_h x11
+#define D_l x12
+#define D_h x13
+#define E_l x14
+#define E_h x15
+#define F_l x16
+#define F_h x17
+#define G_l count
+#define G_h dst
+#define H_l src
+#define H_h srcend
+#define tmp1 x14
+
+/* This implementation handles overlaps and supports both memcpy and memmove
+ from a single entry point. It uses unaligned accesses and branchless
+ sequences to keep the code small, simple and improve performance.
+
+ Copies are split into 3 main cases: small copies of up to 32 bytes, medium
+ copies of up to 128 bytes, and large copies. The overhead of the overlap
+ check is negligible since it is only required for large copies.
+
+ Large copies use a software pipelined loop processing 64 bytes per iteration.
+ The destination pointer is 16-byte aligned to minimize unaligned accesses.
+ The loop tail is handled by always copying 64 bytes from the end.
+*/
+
+ENTRY_ALIAS (memmove)
+ENTRY (memcpy)
+ PTR_ARG (0)
+ PTR_ARG (1)
+ SIZE_ARG (2)
+ add srcend, src, count
+ add dstend, dstin, count
+ cmp count, 128
+ b.hi L(copy_long)
+ cmp count, 32
+ b.hi L(copy32_128)
+
+ /* Small copies: 0..32 bytes. */
+ cmp count, 16
+ b.lo L(copy16)
+ ldp A_l, A_h, [src]
+ ldp D_l, D_h, [srcend, -16]
+ stp A_l, A_h, [dstin]
+ stp D_l, D_h, [dstend, -16]
+ ret
+
+ /* Copy 8-15 bytes. */
+L(copy16):
+ tbz count, 3, L(copy8)
+ ldr A_l, [src]
+ ldr A_h, [srcend, -8]
+ str A_l, [dstin]
+ str A_h, [dstend, -8]
+ ret
+
+ .p2align 3
+ /* Copy 4-7 bytes. */
+L(copy8):
+ tbz count, 2, L(copy4)
+ ldr A_lw, [src]
+ ldr B_lw, [srcend, -4]
+ str A_lw, [dstin]
+ str B_lw, [dstend, -4]
+ ret
+
+ /* Copy 0..3 bytes using a branchless sequence. */
+L(copy4):
+ cbz count, L(copy0)
+ lsr tmp1, count, 1
+ ldrb A_lw, [src]
+ ldrb C_lw, [srcend, -1]
+ ldrb B_lw, [src, tmp1]
+ strb A_lw, [dstin]
+ strb B_lw, [dstin, tmp1]
+ strb C_lw, [dstend, -1]
+L(copy0):
+ ret
+
+ .p2align 4
+ /* Medium copies: 33..128 bytes. */
+L(copy32_128):
+ ldp A_l, A_h, [src]
+ ldp B_l, B_h, [src, 16]
+ ldp C_l, C_h, [srcend, -32]
+ ldp D_l, D_h, [srcend, -16]
+ cmp count, 64
+ b.hi L(copy128)
+ stp A_l, A_h, [dstin]
+ stp B_l, B_h, [dstin, 16]
+ stp C_l, C_h, [dstend, -32]
+ stp D_l, D_h, [dstend, -16]
+ ret
+
+ .p2align 4
+ /* Copy 65..128 bytes. */
+L(copy128):
+ ldp E_l, E_h, [src, 32]
+ ldp F_l, F_h, [src, 48]
+ cmp count, 96
+ b.ls L(copy96)
+ ldp G_l, G_h, [srcend, -64]
+ ldp H_l, H_h, [srcend, -48]
+ stp G_l, G_h, [dstend, -64]
+ stp H_l, H_h, [dstend, -48]
+L(copy96):
+ stp A_l, A_h, [dstin]
+ stp B_l, B_h, [dstin, 16]
+ stp E_l, E_h, [dstin, 32]
+ stp F_l, F_h, [dstin, 48]
+ stp C_l, C_h, [dstend, -32]
+ stp D_l, D_h, [dstend, -16]
+ ret
+
+ .p2align 4
+ /* Copy more than 128 bytes. */
+L(copy_long):
+ /* Use backwards copy if there is an overlap. */
+ sub tmp1, dstin, src
+ cbz tmp1, L(copy0)
+ cmp tmp1, count
+ b.lo L(copy_long_backwards)
+
+ /* Copy 16 bytes and then align dst to 16-byte alignment. */
+
+ ldp D_l, D_h, [src]
+ and tmp1, dstin, 15
+ bic dst, dstin, 15
+ sub src, src, tmp1
+ add count, count, tmp1 /* Count is now 16 too large. */
+ ldp A_l, A_h, [src, 16]
+ stp D_l, D_h, [dstin]
+ ldp B_l, B_h, [src, 32]
+ ldp C_l, C_h, [src, 48]
+ ldp D_l, D_h, [src, 64]!
+ subs count, count, 128 + 16 /* Test and readjust count. */
+ b.ls L(copy64_from_end)
+
+L(loop64):
+ stp A_l, A_h, [dst, 16]
+ ldp A_l, A_h, [src, 16]
+ stp B_l, B_h, [dst, 32]
+ ldp B_l, B_h, [src, 32]
+ stp C_l, C_h, [dst, 48]
+ ldp C_l, C_h, [src, 48]
+ stp D_l, D_h, [dst, 64]!
+ ldp D_l, D_h, [src, 64]!
+ subs count, count, 64
+ b.hi L(loop64)
+
+ /* Write the last iteration and copy 64 bytes from the end. */
+L(copy64_from_end):
+ ldp E_l, E_h, [srcend, -64]
+ stp A_l, A_h, [dst, 16]
+ ldp A_l, A_h, [srcend, -48]
+ stp B_l, B_h, [dst, 32]
+ ldp B_l, B_h, [srcend, -32]
+ stp C_l, C_h, [dst, 48]
+ ldp C_l, C_h, [srcend, -16]
+ stp D_l, D_h, [dst, 64]
+ stp E_l, E_h, [dstend, -64]
+ stp A_l, A_h, [dstend, -48]
+ stp B_l, B_h, [dstend, -32]
+ stp C_l, C_h, [dstend, -16]
+ ret
+
+ .p2align 4
+
+ /* Large backwards copy for overlapping copies.
+ Copy 16 bytes and then align dst to 16-byte alignment. */
+L(copy_long_backwards):
+ ldp D_l, D_h, [srcend, -16]
+ and tmp1, dstend, 15
+ sub srcend, srcend, tmp1
+ sub count, count, tmp1
+ ldp A_l, A_h, [srcend, -16]
+ stp D_l, D_h, [dstend, -16]
+ ldp B_l, B_h, [srcend, -32]
+ ldp C_l, C_h, [srcend, -48]
+ ldp D_l, D_h, [srcend, -64]!
+ sub dstend, dstend, tmp1
+ subs count, count, 128
+ b.ls L(copy64_from_start)
+
+L(loop64_backwards):
+ stp A_l, A_h, [dstend, -16]
+ ldp A_l, A_h, [srcend, -16]
+ stp B_l, B_h, [dstend, -32]
+ ldp B_l, B_h, [srcend, -32]
+ stp C_l, C_h, [dstend, -48]
+ ldp C_l, C_h, [srcend, -48]
+ stp D_l, D_h, [dstend, -64]!
+ ldp D_l, D_h, [srcend, -64]!
+ subs count, count, 64
+ b.hi L(loop64_backwards)
+
+ /* Write the last iteration and copy 64 bytes from the start. */
+L(copy64_from_start):
+ ldp G_l, G_h, [src, 48]
+ stp A_l, A_h, [dstend, -16]
+ ldp A_l, A_h, [src, 32]
+ stp B_l, B_h, [dstend, -32]
+ ldp B_l, B_h, [src, 16]
+ stp C_l, C_h, [dstend, -48]
+ ldp C_l, C_h, [src]
+ stp D_l, D_h, [dstend, -64]
+ stp G_l, G_h, [dstin, 48]
+ stp A_l, A_h, [dstin, 32]
+ stp B_l, B_h, [dstin, 16]
+ stp C_l, C_h, [dstin]
+ ret
+
+END (memcpy)
diff --git a/arch/arm/lib/memset-arm64.S b/arch/arm/lib/memset-arm64.S
new file mode 100644
index 000000000000..710f6f582cad
--- /dev/null
+++ b/arch/arm/lib/memset-arm64.S
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * memset - fill memory with a constant byte
+ *
+ * Copyright (c) 2012-2021, Arm Limited.
+ */
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, Advanced SIMD, unaligned accesses.
+ *
+ */
+
+#include "asmdefs.h"
+
+#define dstin x0
+#define val x1
+#define valw w1
+#define count x2
+#define dst x3
+#define dstend x4
+#define zva_val x5
+
+ENTRY (memset)
+ PTR_ARG (0)
+ SIZE_ARG (2)
+
+ dup v0.16B, valw
+ add dstend, dstin, count
+
+ cmp count, 96
+ b.hi L(set_long)
+ cmp count, 16
+ b.hs L(set_medium)
+ mov val, v0.D[0]
+
+ /* Set 0..15 bytes. */
+ tbz count, 3, 1f
+ str val, [dstin]
+ str val, [dstend, -8]
+ ret
+ .p2align 4
+1: tbz count, 2, 2f
+ str valw, [dstin]
+ str valw, [dstend, -4]
+ ret
+2: cbz count, 3f
+ strb valw, [dstin]
+ tbz count, 1, 3f
+ strh valw, [dstend, -2]
+3: ret
+
+ /* Set 17..96 bytes. */
+L(set_medium):
+ str q0, [dstin]
+ tbnz count, 6, L(set96)
+ str q0, [dstend, -16]
+ tbz count, 5, 1f
+ str q0, [dstin, 16]
+ str q0, [dstend, -32]
+1: ret
+
+ .p2align 4
+ /* Set 64..96 bytes. Write 64 bytes from the start and
+ 32 bytes from the end. */
+L(set96):
+ str q0, [dstin, 16]
+ stp q0, q0, [dstin, 32]
+ stp q0, q0, [dstend, -32]
+ ret
+
+ .p2align 4
+L(set_long):
+ and valw, valw, 255
+ bic dst, dstin, 15
+ str q0, [dstin]
+ cmp count, 160
+ ccmp valw, 0, 0, hs
+ b.ne L(no_zva)
+
+#ifndef SKIP_ZVA_CHECK
+ mrs zva_val, dczid_el0
+ and zva_val, zva_val, 31
+ cmp zva_val, 4 /* ZVA size is 64 bytes. */
+ b.ne L(no_zva)
+#endif
+ str q0, [dst, 16]
+ stp q0, q0, [dst, 32]
+ bic dst, dst, 63
+ sub count, dstend, dst /* Count is now 64 too large. */
+ sub count, count, 128 /* Adjust count and bias for loop. */
+
+ .p2align 4
+L(zva_loop):
+ add dst, dst, 64
+ dc zva, dst
+ subs count, count, 64
+ b.hi L(zva_loop)
+ stp q0, q0, [dstend, -64]
+ stp q0, q0, [dstend, -32]
+ ret
+
+L(no_zva):
+ sub count, dstend, dst /* Count is 16 too large. */
+ sub dst, dst, 16 /* Dst is biased by -32. */
+ sub count, count, 64 + 16 /* Adjust count and bias for loop. */
+L(no_zva_loop):
+ stp q0, q0, [dst, 32]
+ stp q0, q0, [dst, 64]!
+ subs count, count, 64
+ b.hi L(no_zva_loop)
+ stp q0, q0, [dstend, -64]
+ stp q0, q0, [dstend, -32]
+ ret
+
+END (memset)
--
2.32.0
* [PATCH v3 2/3] arm64: memset-arm64: Use simple memset when cache is disabled
2021-08-11 14:02 [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Stefan Roese
2021-08-11 14:02 ` [PATCH v3 1/3] arm64: arch/arm/lib: Add optimized memset/memcpy/memmove functions Stefan Roese
@ 2021-08-11 14:02 ` Stefan Roese
2021-08-11 14:02 ` [PATCH v3 3/3] arm64: Kconfig: Enable usage of optimized memset/memcpy/memmove Stefan Roese
2021-08-11 14:25 ` [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Tom Rini
3 siblings, 0 replies; 7+ messages in thread
From: Stefan Roese @ 2021-08-11 14:02 UTC (permalink / raw)
To: u-boot; +Cc: trini, Rasmus Villemoes, sjg, Wolfgang Denk
The optimized memset uses the dc opcode, which causes problems when the
cache is disabled. This patch adds a check for whether the cache is
disabled and uses a very simple memset implementation in that case.
Otherwise the optimized version is used.
Signed-off-by: Stefan Roese <sr@denx.de>
---
(no changes since v2)
Changes in v2:
- New patch
arch/arm/lib/memset-arm64.S | 30 ++++++++++++++++++++++++++++++
1 file changed, 30 insertions(+)
diff --git a/arch/arm/lib/memset-arm64.S b/arch/arm/lib/memset-arm64.S
index 710f6f582cad..a474dcb53c83 100644
--- a/arch/arm/lib/memset-arm64.S
+++ b/arch/arm/lib/memset-arm64.S
@@ -11,6 +11,7 @@
*
*/
+#include <asm/macro.h>
#include "asmdefs.h"
#define dstin x0
@@ -25,6 +26,35 @@ ENTRY (memset)
PTR_ARG (0)
SIZE_ARG (2)
+ /*
+ * The optimized memset uses the dc opcode, which causes problems
+ * when the cache is disabled. Let's check if the cache is disabled
+ * and use a very simple memset implementation in this case. Otherwise
+ * jump to the optimized version.
+ */
+ switch_el x6, 3f, 2f, 1f
+3: mrs x6, sctlr_el3
+ b 0f
+2: mrs x6, sctlr_el2
+ b 0f
+1: mrs x6, sctlr_el1
+0:
+ tst x6, #CR_C
+ bne 9f
+
+ /*
+ * A very "simple" memset implementation without the use of the
+ * dc opcode. Can be run with caches disabled.
+ */
+ mov x3, #0x0
+4: strb w1, [x0, x3]
+ add x3, x3, #0x1
+ cmp x2, x3
+ bne 4b
+ ret
+9:
+
+ /* Here the optimized memset version starts */
dup v0.16B, valw
add dstend, dstin, count
--
2.32.0
* [PATCH v3 3/3] arm64: Kconfig: Enable usage of optimized memset/memcpy/memmove
2021-08-11 14:02 [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Stefan Roese
2021-08-11 14:02 ` [PATCH v3 1/3] arm64: arch/arm/lib: Add optimized memset/memcpy/memmove functions Stefan Roese
2021-08-11 14:02 ` [PATCH v3 2/3] arm64: memset-arm64: Use simple memset when cache is disabled Stefan Roese
@ 2021-08-11 14:02 ` Stefan Roese
2021-08-11 14:25 ` [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Tom Rini
3 siblings, 0 replies; 7+ messages in thread
From: Stefan Roese @ 2021-08-11 14:02 UTC (permalink / raw)
To: u-boot; +Cc: trini, Rasmus Villemoes, sjg, Wolfgang Denk
This patch enables the use of the optimized memset(), memmove() &
memcpy() versions recently added on ARM64.
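As an illustration only, with this change an ARM64 board's .config would
typically end up containing the following symbols (names taken from the
Kconfig hunks below; the exact set depends on whether SPL/TPL are built):

CONFIG_USE_ARCH_MEMCPY=y
CONFIG_USE_ARCH_MEMMOVE=y
CONFIG_USE_ARCH_MEMSET=y
# and, when SPL is built, the corresponding variants, e.g.:
CONFIG_SPL_USE_ARCH_MEMCPY=y
CONFIG_SPL_USE_ARCH_MEMSET=y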
Signed-off-by: Stefan Roese <sr@denx.de>
---
Changes in v3:
- Add memmove as well
arch/arm/Kconfig | 38 +++++++++++++++++++++++++++++------
arch/arm/include/asm/string.h | 4 ++++
2 files changed, 36 insertions(+), 6 deletions(-)
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index caa8a71c6cfd..9f74b5c352ad 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -457,7 +457,6 @@ config ARM_CORTEX_CPU_IS_UP
config USE_ARCH_MEMCPY
bool "Use an assembly optimized implementation of memcpy"
default y
- depends on !ARM64
help
Enable the generation of an optimized version of memcpy.
Such an implementation may be faster under some conditions
@@ -466,7 +465,7 @@ config USE_ARCH_MEMCPY
config SPL_USE_ARCH_MEMCPY
bool "Use an assembly optimized implementation of memcpy for SPL"
default y if USE_ARCH_MEMCPY
- depends on !ARM64 && SPL
+ depends on SPL
help
Enable the generation of an optimized version of memcpy.
Such an implementation may be faster under some conditions
@@ -475,16 +474,43 @@ config SPL_USE_ARCH_MEMCPY
config TPL_USE_ARCH_MEMCPY
bool "Use an assembly optimized implementation of memcpy for TPL"
default y if USE_ARCH_MEMCPY
- depends on !ARM64 && TPL
+ depends on TPL
help
Enable the generation of an optimized version of memcpy.
Such an implementation may be faster under some conditions
but may increase the binary size.
+config USE_ARCH_MEMMOVE
+ bool "Use an assembly optimized implementation of memmove"
+ default y
+ depends on ARM64
+ help
+ Enable the generation of an optimized version of memmove.
+ Such an implementation may be faster under some conditions
+ but may increase the binary size.
+
+config SPL_USE_ARCH_MEMMOVE
+ bool "Use an assembly optimized implementation of memmove for SPL"
+ default y if USE_ARCH_MEMCPY
+ depends on SPL && ARM64
+ help
+ Enable the generation of an optimized version of memmove.
+ Such an implementation may be faster under some conditions
+ but may increase the binary size.
+
+config TPL_USE_ARCH_MEMMOVE
+ bool "Use an assembly optimized implementation of memmove for TPL"
+ default y if USE_ARCH_MEMCPY
+ depends on TPL && ARM64
+ depends on TPL
+ help
+ Enable the generation of an optimized version of memmove.
+ Such an implementation may be faster under some conditions
+ but may increase the binary size.
+
config USE_ARCH_MEMSET
bool "Use an assembly optimized implementation of memset"
default y
- depends on !ARM64
help
Enable the generation of an optimized version of memset.
Such an implementation may be faster under some conditions
@@ -493,7 +519,7 @@ config USE_ARCH_MEMSET
config SPL_USE_ARCH_MEMSET
bool "Use an assembly optimized implementation of memset for SPL"
default y if USE_ARCH_MEMSET
- depends on !ARM64 && SPL
+ depends on SPL
help
Enable the generation of an optimized version of memset.
Such an implementation may be faster under some conditions
@@ -502,7 +528,7 @@ config SPL_USE_ARCH_MEMSET
config TPL_USE_ARCH_MEMSET
bool "Use an assembly optimized implementation of memset for TPL"
default y if USE_ARCH_MEMSET
- depends on !ARM64 && TPL
+ depends on TPL
help
Enable the generation of an optimized version of memset.
Such an implementation may be faster under some conditions
diff --git a/arch/arm/include/asm/string.h b/arch/arm/include/asm/string.h
index 11eaa34fab8c..ead3f2c35643 100644
--- a/arch/arm/include/asm/string.h
+++ b/arch/arm/include/asm/string.h
@@ -19,7 +19,11 @@ extern char * strchr(const char * s, int c);
#endif
extern void * memcpy(void *, const void *, __kernel_size_t);
+#if CONFIG_IS_ENABLED(USE_ARCH_MEMMOVE)
+#define __HAVE_ARCH_MEMMOVE
+#else
#undef __HAVE_ARCH_MEMMOVE
+#endif
extern void * memmove(void *, const void *, __kernel_size_t);
#undef __HAVE_ARCH_MEMCHR
--
2.32.0
* Re: [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions
2021-08-11 14:02 [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Stefan Roese
` (2 preceding siblings ...)
2021-08-11 14:02 ` [PATCH v3 3/3] arm64: Kconfig: Enable usage of optimized memset/memcpy/memmove Stefan Roese
@ 2021-08-11 14:25 ` Tom Rini
2021-08-11 14:28 ` Stefan Roese
3 siblings, 1 reply; 7+ messages in thread
From: Tom Rini @ 2021-08-11 14:25 UTC (permalink / raw)
To: Stefan Roese; +Cc: u-boot, Rasmus Villemoes, sjg, Wolfgang Denk
On Wed, Aug 11, 2021 at 04:02:39PM +0200, Stefan Roese wrote:
>
> On an NXP LX2160 based platform it has been noticed that the currently
> implemented memset/memcpy functions for aarch64 are suboptimal.
> Especially memset(), used for clearing the NXP MC firmware memory, is very
> expensive (time-wise).
>
> By using optimized functions, a speedup of roughly a factor of 6 has been
> measured.
To be clear, you re-measured with the cache check code added, and this
is the speed up?
--
Tom
* Re: [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions
2021-08-11 14:25 ` [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions Tom Rini
@ 2021-08-11 14:28 ` Stefan Roese
2021-08-12 8:43 ` Stefan Roese
0 siblings, 1 reply; 7+ messages in thread
From: Stefan Roese @ 2021-08-11 14:28 UTC (permalink / raw)
To: Tom Rini; +Cc: u-boot, Rasmus Villemoes, sjg, Wolfgang Denk
On 11.08.21 16:25, Tom Rini wrote:
> On Wed, Aug 11, 2021 at 04:02:39PM +0200, Stefan Roese wrote:
>>
>> On an NXP LX2160 based platform it has been noticed that the currently
>> implemented memset/memcpy functions for aarch64 are suboptimal.
>> Especially memset(), used for clearing the NXP MC firmware memory, is very
>> expensive (time-wise).
>>
>> By using optimized functions, a speedup of roughly a factor of 6 has been
>> measured.
>
> To be clear, you re-measured with the cache check code added, and this
> is the speed up?
I forgot to do this. BTW: I was wrong about factor ~6. From my notes,
it is ~ factor 4 using the optimized memset() version.
I'll follow up on this mail with some measurements for all affected
functions, using small and large sizes. Hopefully tomorrow.
Thanks,
Stefan
* Re: [PATCH v3 0/3] arm64: Add optimized memset/memcpy/memmove functions
2021-08-11 14:28 ` Stefan Roese
@ 2021-08-12 8:43 ` Stefan Roese
0 siblings, 0 replies; 7+ messages in thread
From: Stefan Roese @ 2021-08-12 8:43 UTC (permalink / raw)
To: Tom Rini; +Cc: u-boot, Rasmus Villemoes, sjg, Wolfgang Denk
On 11.08.21 16:28, Stefan Roese wrote:
> On 11.08.21 16:25, Tom Rini wrote:
>> On Wed, Aug 11, 2021 at 04:02:39PM +0200, Stefan Roese wrote:
>>>
>>> On an NXP LX2160 based platform it has been noticed that the currently
>>> implemented memset/memcpy functions for aarch64 are suboptimal.
>>> Especially memset(), used for clearing the NXP MC firmware memory, is very
>>> expensive (time-wise).
>>>
>>> By using optimized functions, a speedup of roughly a factor of 6 has been
>>> measured.
>>
>> To be clear, you re-measured with the cache check code added, and this
>> is the speed up?
>
> I forgot to do this. BTW: I was wrong about factor ~6. From my notes,
> it is ~ factor 4 using the optimized memset() version.
>
> I'll follow up on this mail with some measurements for all affected
> functions, using small and large sizes. Hopefully tomorrow.
Here are the numbers:
Current original version:
-------------------------
memset() 32 Bytes, 16M times:
time: 0.446 seconds
memset() 16MiB, 256 times:
time: 1.076 seconds
memcpy() 512MiB:
time: 0.224 seconds
New optimized version:
----------------------
memset() 32 Bytes, 16M times:
time: 0.287 seconds
memset() 16MiB, 256 times:
time: 0.292 seconds
memcpy() 512MiB:
time: 0.222 seconds
Summary:
The optimized memcpy performs nearly identically to the original one, but
the optimized memset is much faster for both small and large sizes:
factor ~1.6 for small sizes and factor ~3.7 for large sizes.
Note: These measurements were done on the NXP LX2160ARDB board.
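The thread does not show how these numbers were taken; purely as an
illustration, here is a minimal sketch of the kind of timing loop that
could produce them, assuming a throwaway U-Boot test function that uses
get_timer() (which reports elapsed time in milliseconds) and the standard
memset():

#include <common.h>	/* get_timer(), printf(), ulong */
#include <string.h>

/* Hypothetical benchmark helper; buf must point to at least 16 MiB of DRAM. */
static void bench_memset(void *buf)
{
	ulong start;
	int i;

	/* memset() 32 bytes, 16M times */
	start = get_timer(0);
	for (i = 0; i < 16 * 1024 * 1024; i++)
		memset(buf, 0, 32);
	printf("memset 32B x 16M: %lu ms\n", get_timer(start));

	/* memset() 16 MiB, 256 times */
	start = get_timer(0);
	for (i = 0; i < 256; i++)
		memset(buf, 0, 16 * 1024 * 1024);
	printf("memset 16MiB x 256: %lu ms\n", get_timer(start));
}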
Thanks,
Stefan