* [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions
@ 2021-08-06 13:38 Stefan Roese
  2021-08-06 13:38 ` [PATCH v1 1/5] lib/string: Add memset_simple() Stefan Roese
                   ` (5 more replies)
  0 siblings, 6 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini


On an NXP LX2160 based platform it has been noticed that the currently
implemented memset/memcpy functions for aarch64 are suboptimal.
Especially the memset() used for clearing the NXP MC firmware memory is
very expensive time-wise.

By using optimized functions, a speedup of roughly a factor of 6 has
been measured.

This patchset now adds the optimized functions ported from this
repository:
https://github.com/ARM-software/optimized-routines

As these functions make use of instructions that need the caches to be
enabled, they can't be used in the very early boot stage, before the
caches are set up. Because of this, a simple memset() version,
memset_simple(), is also added and used in a few selected places.

Please note that checkpatch.pl complains about some issues with the
imported file arch/arm/lib/asmdefs.h. Since it is imported, I explicitly
did not make any changes here, to make potential future syncing easier.

Thanks,
Stefan


Stefan Roese (5):
  lib/string: Add memset_simple()
  board_init: Use memset_simple() in board_init_f_init_reserve()
  arm64: cache_v8: Use memset_simple() in create_table()
  arm64: arch/arm/lib: Add optimized memset/memcpy functions
  arm64: Kconfig: Enable usage of optimized memset/memcpy

 arch/arm/Kconfig              |  10 +-
 arch/arm/cpu/armv8/cache_v8.c |   2 +-
 arch/arm/lib/Makefile         |   5 +
 arch/arm/lib/asmdefs.h        |  98 ++++++++++++++
 arch/arm/lib/memcpy-arm64.S   | 241 ++++++++++++++++++++++++++++++++++
 arch/arm/lib/memset-arm64.S   | 116 ++++++++++++++++
 common/init/board_init.c      |   2 +-
 include/linux/string.h        |   2 +
 lib/string.c                  |  10 ++
 9 files changed, 478 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm/lib/asmdefs.h
 create mode 100644 arch/arm/lib/memcpy-arm64.S
 create mode 100644 arch/arm/lib/memset-arm64.S

-- 
2.32.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v1 1/5] lib/string: Add memset_simple()
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
@ 2021-08-06 13:38 ` Stefan Roese
  2021-08-06 15:36   ` Wolfgang Denk
  2021-08-06 13:38 ` [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve() Stefan Roese
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini

The optimized ARM64 memset() implementation cannot be used very early
on, since the caches need to be enabled for this function to work.
Otherwise an exception occurs (visible only with very early DEBUG_UART
enabled).

This patch now implements a very simple memset() version, which will be
used in a few selected places in the ARM64 early boot process.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 include/linux/string.h |  2 ++
 lib/string.c           | 10 ++++++++++
 2 files changed, 12 insertions(+)

diff --git a/include/linux/string.h b/include/linux/string.h
index dd255f21633a..0ab1c557da3d 100644
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -129,6 +129,8 @@ extern void * memchr(const void *,int,__kernel_size_t);
 void *memchr_inv(const void *, int, size_t);
 #endif
 
+void *memset_simple(void *s, int c, size_t count);
+
 unsigned long ustrtoul(const char *cp, char **endp, unsigned int base);
 unsigned long long ustrtoull(const char *cp, char **endp, unsigned int base);
 
diff --git a/lib/string.c b/lib/string.c
index ba176fb08f73..7092181f831f 100644
--- a/lib/string.c
+++ b/lib/string.c
@@ -543,6 +543,16 @@ __used void * memset(void * s,int c,size_t count)
 }
 #endif
 
+void *memset_simple(void *s, int c, size_t count)
+{
+	char *s8 = (char *)s;
+
+	while (count--)
+		*s8++ = c;
+
+	return s;
+}
+
 #ifndef __HAVE_ARCH_MEMCPY
 /**
  * memcpy - Copy one area of memory to another
-- 
2.32.0



* [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve()
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
  2021-08-06 13:38 ` [PATCH v1 1/5] lib/string: Add memset_simple() Stefan Roese
@ 2021-08-06 13:38 ` Stefan Roese
  2021-08-09  7:52   ` Rasmus Villemoes
  2021-08-06 13:38 ` [PATCH v1 3/5] arm64: cache_v8: Use memset_simple() in create_table() Stefan Roese
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini

board_init_f_init_reserve() is called very early in the boot process,
before the caches are enabled. Because of this, the optimized memset()
version can't be used on ARM64 at this point. With this patch, the
simple memset_simple() version is used instead.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 common/init/board_init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/init/board_init.c b/common/init/board_init.c
index 0965b96fa3ad..9996aff74373 100644
--- a/common/init/board_init.c
+++ b/common/init/board_init.c
@@ -140,7 +140,7 @@ void board_init_f_init_reserve(ulong base)
 
 	gd_ptr = (struct global_data *)base;
 	/* zero the area */
-	memset(gd_ptr, '\0', sizeof(*gd));
+	memset_simple(gd_ptr, '\0', sizeof(*gd));
 	/* set GD unless architecture did it already */
 #if !defined(CONFIG_ARM)
 	arch_setup_gd(gd_ptr);
-- 
2.32.0



* [PATCH v1 3/5] arm64: cache_v8: Use memset_simple() in create_table()
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
  2021-08-06 13:38 ` [PATCH v1 1/5] lib/string: Add memset_simple() Stefan Roese
  2021-08-06 13:38 ` [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve() Stefan Roese
@ 2021-08-06 13:38 ` Stefan Roese
  2021-08-06 13:38 ` [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions Stefan Roese
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini

create_table() is called very early in the boot process, before the
caches are enabled. Because of this, the optimized memset() version
can't be used at this point. With this patch, the simple memset_simple()
version is used instead.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/arm/cpu/armv8/cache_v8.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/arm/cpu/armv8/cache_v8.c b/arch/arm/cpu/armv8/cache_v8.c
index 3de18c7675b9..04c437aebabf 100644
--- a/arch/arm/cpu/armv8/cache_v8.c
+++ b/arch/arm/cpu/armv8/cache_v8.c
@@ -159,7 +159,7 @@ static u64 *create_table(void)
 			gd->arch.tlb_size);
 
 	/* Mark all entries as invalid */
-	memset(new_table, 0, pt_len);
+	memset_simple(new_table, 0, pt_len);
 
 	return new_table;
 }
-- 
2.32.0



* [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
                   ` (2 preceding siblings ...)
  2021-08-06 13:38 ` [PATCH v1 3/5] arm64: cache_v8: Use memset_simple() in create_table() Stefan Roese
@ 2021-08-06 13:38 ` Stefan Roese
  2021-08-06 14:43   ` Tom Rini
  2021-08-06 15:41   ` Wolfgang Denk
  2021-08-06 13:38 ` [PATCH v1 5/5] arm64: Kconfig: Enable usage of optimized memset/memcpy Stefan Roese
  2021-08-06 14:24 ` [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Tom Rini
  5 siblings, 2 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini

Ported from https://github.com/ARM-software/optimized-routines

Please note that when adding these optimized functions as the default
memset/memcpy functions in U-Boot, U-Boot fails to boot on the
LX2160ARDB. After the initial ATF output, no U-Boot output is shown on
the serial console. An exception is triggered in the very early boot
process, as some of the assembly instructions need the caches to be
enabled.

Because of this, some additional patches will introduce memset_simple(),
which will be used in some selected code locations.

Note:
I also integrated and tested the Linux kernel versions of these
optimized functions. They are similar to the ones integrated here, but
these ARM versions are still slightly faster.

Signed-off-by: Stefan Roese <sr@denx.de>
---

 arch/arm/lib/Makefile       |   5 +
 arch/arm/lib/asmdefs.h      |  98 +++++++++++++++
 arch/arm/lib/memcpy-arm64.S | 241 ++++++++++++++++++++++++++++++++++++
 arch/arm/lib/memset-arm64.S | 116 +++++++++++++++++
 4 files changed, 460 insertions(+)
 create mode 100644 arch/arm/lib/asmdefs.h
 create mode 100644 arch/arm/lib/memcpy-arm64.S
 create mode 100644 arch/arm/lib/memset-arm64.S

diff --git a/arch/arm/lib/Makefile b/arch/arm/lib/Makefile
index 7f6633271518..c48e1f622d3c 100644
--- a/arch/arm/lib/Makefile
+++ b/arch/arm/lib/Makefile
@@ -39,8 +39,13 @@ obj-$(CONFIG_$(SPL_TPL_)FRAMEWORK) += spl.o
 obj-$(CONFIG_SPL_FRAMEWORK) += zimage.o
 obj-$(CONFIG_OF_LIBFDT) += bootm-fdt.o
 endif
+ifdef CONFIG_ARM64
+obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMSET) += memset-arm64.o
+obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMCPY) += memcpy-arm64.o
+else
 obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMSET) += memset.o
 obj-$(CONFIG_$(SPL_TPL_)USE_ARCH_MEMCPY) += memcpy.o
+endif
 obj-$(CONFIG_SEMIHOSTING) += semihosting.o
 
 obj-y	+= bdinfo.o
diff --git a/arch/arm/lib/asmdefs.h b/arch/arm/lib/asmdefs.h
new file mode 100644
index 000000000000..d307a3a8a25c
--- /dev/null
+++ b/arch/arm/lib/asmdefs.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * Macros for asm code.
+ *
+ * Copyright (c) 2019, Arm Limited.
+ */
+
+#ifndef _ASMDEFS_H
+#define _ASMDEFS_H
+
+#if defined(__aarch64__)
+
+/* Branch Target Identitication support.  */
+#define BTI_C		hint	34
+#define BTI_J		hint	36
+/* Return address signing support (pac-ret).  */
+#define PACIASP		hint	25; .cfi_window_save
+#define AUTIASP		hint	29; .cfi_window_save
+
+/* GNU_PROPERTY_AARCH64_* macros from elf.h.  */
+#define FEATURE_1_AND 0xc0000000
+#define FEATURE_1_BTI 1
+#define FEATURE_1_PAC 2
+
+/* Add a NT_GNU_PROPERTY_TYPE_0 note.  */
+#define GNU_PROPERTY(type, value)	\
+  .section .note.gnu.property, "a";	\
+  .p2align 3;				\
+  .word 4;				\
+  .word 16;				\
+  .word 5;				\
+  .asciz "GNU";				\
+  .word type;				\
+  .word 4;				\
+  .word value;				\
+  .word 0;				\
+  .text
+
+/* If set then the GNU Property Note section will be added to
+   mark objects to support BTI and PAC-RET.  */
+#ifndef WANT_GNU_PROPERTY
+#define WANT_GNU_PROPERTY 1
+#endif
+
+#if WANT_GNU_PROPERTY
+/* Add property note with supported features to all asm files.  */
+GNU_PROPERTY (FEATURE_1_AND, FEATURE_1_BTI|FEATURE_1_PAC)
+#endif
+
+#define ENTRY_ALIGN(name, alignment)	\
+  .global name;		\
+  .type name,%function;	\
+  .align alignment;		\
+  name:			\
+  .cfi_startproc;	\
+  BTI_C;
+
+#else
+
+#define END_FILE
+
+#define ENTRY_ALIGN(name, alignment)	\
+  .global name;		\
+  .type name,%function;	\
+  .align alignment;		\
+  name:			\
+  .cfi_startproc;
+
+#endif
+
+#define ENTRY(name)	ENTRY_ALIGN(name, 6)
+
+#define ENTRY_ALIAS(name)	\
+  .global name;		\
+  .type name,%function;	\
+  name:
+
+#define END(name)	\
+  .cfi_endproc;		\
+  .size name, .-name;
+
+#define L(l) .L ## l
+
+#ifdef __ILP32__
+  /* Sanitize padding bits of pointer arguments as per aapcs64 */
+#define PTR_ARG(n)  mov w##n, w##n
+#else
+#define PTR_ARG(n)
+#endif
+
+#ifdef __ILP32__
+  /* Sanitize padding bits of size arguments as per aapcs64 */
+#define SIZE_ARG(n)  mov w##n, w##n
+#else
+#define SIZE_ARG(n)
+#endif
+
+#endif
diff --git a/arch/arm/lib/memcpy-arm64.S b/arch/arm/lib/memcpy-arm64.S
new file mode 100644
index 000000000000..08f0d854868a
--- /dev/null
+++ b/arch/arm/lib/memcpy-arm64.S
@@ -0,0 +1,241 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * memcpy - copy memory area
+ *
+ * Copyright (c) 2012-2020, Arm Limited.
+ */
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, unaligned accesses.
+ *
+ */
+
+#include "asmdefs.h"
+
+#define dstin	x0
+#define src	x1
+#define count	x2
+#define dst	x3
+#define srcend	x4
+#define dstend	x5
+#define A_l	x6
+#define A_lw	w6
+#define A_h	x7
+#define B_l	x8
+#define B_lw	w8
+#define B_h	x9
+#define C_l	x10
+#define C_lw	w10
+#define C_h	x11
+#define D_l	x12
+#define D_h	x13
+#define E_l	x14
+#define E_h	x15
+#define F_l	x16
+#define F_h	x17
+#define G_l	count
+#define G_h	dst
+#define H_l	src
+#define H_h	srcend
+#define tmp1	x14
+
+/* This implementation handles overlaps and supports both memcpy and memmove
+   from a single entry point.  It uses unaligned accesses and branchless
+   sequences to keep the code small, simple and improve performance.
+
+   Copies are split into 3 main cases: small copies of up to 32 bytes, medium
+   copies of up to 128 bytes, and large copies.  The overhead of the overlap
+   check is negligible since it is only required for large copies.
+
+   Large copies use a software pipelined loop processing 64 bytes per iteration.
+   The destination pointer is 16-byte aligned to minimize unaligned accesses.
+   The loop tail is handled by always copying 64 bytes from the end.
+*/
+
+ENTRY (memcpy)
+	PTR_ARG (0)
+	PTR_ARG (1)
+	SIZE_ARG (2)
+	add	srcend, src, count
+	add	dstend, dstin, count
+	cmp	count, 128
+	b.hi	L(copy_long)
+	cmp	count, 32
+	b.hi	L(copy32_128)
+
+	/* Small copies: 0..32 bytes.  */
+	cmp	count, 16
+	b.lo	L(copy16)
+	ldp	A_l, A_h, [src]
+	ldp	D_l, D_h, [srcend, -16]
+	stp	A_l, A_h, [dstin]
+	stp	D_l, D_h, [dstend, -16]
+	ret
+
+	/* Copy 8-15 bytes.  */
+L(copy16):
+	tbz	count, 3, L(copy8)
+	ldr	A_l, [src]
+	ldr	A_h, [srcend, -8]
+	str	A_l, [dstin]
+	str	A_h, [dstend, -8]
+	ret
+
+	.p2align 3
+	/* Copy 4-7 bytes.  */
+L(copy8):
+	tbz	count, 2, L(copy4)
+	ldr	A_lw, [src]
+	ldr	B_lw, [srcend, -4]
+	str	A_lw, [dstin]
+	str	B_lw, [dstend, -4]
+	ret
+
+	/* Copy 0..3 bytes using a branchless sequence.  */
+L(copy4):
+	cbz	count, L(copy0)
+	lsr	tmp1, count, 1
+	ldrb	A_lw, [src]
+	ldrb	C_lw, [srcend, -1]
+	ldrb	B_lw, [src, tmp1]
+	strb	A_lw, [dstin]
+	strb	B_lw, [dstin, tmp1]
+	strb	C_lw, [dstend, -1]
+L(copy0):
+	ret
+
+	.p2align 4
+	/* Medium copies: 33..128 bytes.  */
+L(copy32_128):
+	ldp	A_l, A_h, [src]
+	ldp	B_l, B_h, [src, 16]
+	ldp	C_l, C_h, [srcend, -32]
+	ldp	D_l, D_h, [srcend, -16]
+	cmp	count, 64
+	b.hi	L(copy128)
+	stp	A_l, A_h, [dstin]
+	stp	B_l, B_h, [dstin, 16]
+	stp	C_l, C_h, [dstend, -32]
+	stp	D_l, D_h, [dstend, -16]
+	ret
+
+	.p2align 4
+	/* Copy 65..128 bytes.  */
+L(copy128):
+	ldp	E_l, E_h, [src, 32]
+	ldp	F_l, F_h, [src, 48]
+	cmp	count, 96
+	b.ls	L(copy96)
+	ldp	G_l, G_h, [srcend, -64]
+	ldp	H_l, H_h, [srcend, -48]
+	stp	G_l, G_h, [dstend, -64]
+	stp	H_l, H_h, [dstend, -48]
+L(copy96):
+	stp	A_l, A_h, [dstin]
+	stp	B_l, B_h, [dstin, 16]
+	stp	E_l, E_h, [dstin, 32]
+	stp	F_l, F_h, [dstin, 48]
+	stp	C_l, C_h, [dstend, -32]
+	stp	D_l, D_h, [dstend, -16]
+	ret
+
+	.p2align 4
+	/* Copy more than 128 bytes.  */
+L(copy_long):
+	/* Use backwards copy if there is an overlap.  */
+	sub	tmp1, dstin, src
+	cbz	tmp1, L(copy0)
+	cmp	tmp1, count
+	b.lo	L(copy_long_backwards)
+
+	/* Copy 16 bytes and then align dst to 16-byte alignment.  */
+
+	ldp	D_l, D_h, [src]
+	and	tmp1, dstin, 15
+	bic	dst, dstin, 15
+	sub	src, src, tmp1
+	add	count, count, tmp1	/* Count is now 16 too large.  */
+	ldp	A_l, A_h, [src, 16]
+	stp	D_l, D_h, [dstin]
+	ldp	B_l, B_h, [src, 32]
+	ldp	C_l, C_h, [src, 48]
+	ldp	D_l, D_h, [src, 64]!
+	subs	count, count, 128 + 16	/* Test and readjust count.  */
+	b.ls	L(copy64_from_end)
+
+L(loop64):
+	stp	A_l, A_h, [dst, 16]
+	ldp	A_l, A_h, [src, 16]
+	stp	B_l, B_h, [dst, 32]
+	ldp	B_l, B_h, [src, 32]
+	stp	C_l, C_h, [dst, 48]
+	ldp	C_l, C_h, [src, 48]
+	stp	D_l, D_h, [dst, 64]!
+	ldp	D_l, D_h, [src, 64]!
+	subs	count, count, 64
+	b.hi	L(loop64)
+
+	/* Write the last iteration and copy 64 bytes from the end.  */
+L(copy64_from_end):
+	ldp	E_l, E_h, [srcend, -64]
+	stp	A_l, A_h, [dst, 16]
+	ldp	A_l, A_h, [srcend, -48]
+	stp	B_l, B_h, [dst, 32]
+	ldp	B_l, B_h, [srcend, -32]
+	stp	C_l, C_h, [dst, 48]
+	ldp	C_l, C_h, [srcend, -16]
+	stp	D_l, D_h, [dst, 64]
+	stp	E_l, E_h, [dstend, -64]
+	stp	A_l, A_h, [dstend, -48]
+	stp	B_l, B_h, [dstend, -32]
+	stp	C_l, C_h, [dstend, -16]
+	ret
+
+	.p2align 4
+
+	/* Large backwards copy for overlapping copies.
+	   Copy 16 bytes and then align dst to 16-byte alignment.  */
+L(copy_long_backwards):
+	ldp	D_l, D_h, [srcend, -16]
+	and	tmp1, dstend, 15
+	sub	srcend, srcend, tmp1
+	sub	count, count, tmp1
+	ldp	A_l, A_h, [srcend, -16]
+	stp	D_l, D_h, [dstend, -16]
+	ldp	B_l, B_h, [srcend, -32]
+	ldp	C_l, C_h, [srcend, -48]
+	ldp	D_l, D_h, [srcend, -64]!
+	sub	dstend, dstend, tmp1
+	subs	count, count, 128
+	b.ls	L(copy64_from_start)
+
+L(loop64_backwards):
+	stp	A_l, A_h, [dstend, -16]
+	ldp	A_l, A_h, [srcend, -16]
+	stp	B_l, B_h, [dstend, -32]
+	ldp	B_l, B_h, [srcend, -32]
+	stp	C_l, C_h, [dstend, -48]
+	ldp	C_l, C_h, [srcend, -48]
+	stp	D_l, D_h, [dstend, -64]!
+	ldp	D_l, D_h, [srcend, -64]!
+	subs	count, count, 64
+	b.hi	L(loop64_backwards)
+
+	/* Write the last iteration and copy 64 bytes from the start.  */
+L(copy64_from_start):
+	ldp	G_l, G_h, [src, 48]
+	stp	A_l, A_h, [dstend, -16]
+	ldp	A_l, A_h, [src, 32]
+	stp	B_l, B_h, [dstend, -32]
+	ldp	B_l, B_h, [src, 16]
+	stp	C_l, C_h, [dstend, -48]
+	ldp	C_l, C_h, [src]
+	stp	D_l, D_h, [dstend, -64]
+	stp	G_l, G_h, [dstin, 48]
+	stp	A_l, A_h, [dstin, 32]
+	stp	B_l, B_h, [dstin, 16]
+	stp	C_l, C_h, [dstin]
+	ret
+
+END (memcpy)
diff --git a/arch/arm/lib/memset-arm64.S b/arch/arm/lib/memset-arm64.S
new file mode 100644
index 000000000000..710f6f582cad
--- /dev/null
+++ b/arch/arm/lib/memset-arm64.S
@@ -0,0 +1,116 @@
+/* SPDX-License-Identifier: MIT */
+/*
+ * memset - fill memory with a constant byte
+ *
+ * Copyright (c) 2012-2021, Arm Limited.
+ */
+
+/* Assumptions:
+ *
+ * ARMv8-a, AArch64, Advanced SIMD, unaligned accesses.
+ *
+ */
+
+#include "asmdefs.h"
+
+#define dstin	x0
+#define val	x1
+#define valw	w1
+#define count	x2
+#define dst	x3
+#define dstend	x4
+#define zva_val	x5
+
+ENTRY (memset)
+	PTR_ARG (0)
+	SIZE_ARG (2)
+
+	dup	v0.16B, valw
+	add	dstend, dstin, count
+
+	cmp	count, 96
+	b.hi	L(set_long)
+	cmp	count, 16
+	b.hs	L(set_medium)
+	mov	val, v0.D[0]
+
+	/* Set 0..15 bytes.  */
+	tbz	count, 3, 1f
+	str	val, [dstin]
+	str	val, [dstend, -8]
+	ret
+	.p2align 4
+1:	tbz	count, 2, 2f
+	str	valw, [dstin]
+	str	valw, [dstend, -4]
+	ret
+2:	cbz	count, 3f
+	strb	valw, [dstin]
+	tbz	count, 1, 3f
+	strh	valw, [dstend, -2]
+3:	ret
+
+	/* Set 17..96 bytes.  */
+L(set_medium):
+	str	q0, [dstin]
+	tbnz	count, 6, L(set96)
+	str	q0, [dstend, -16]
+	tbz	count, 5, 1f
+	str	q0, [dstin, 16]
+	str	q0, [dstend, -32]
+1:	ret
+
+	.p2align 4
+	/* Set 64..96 bytes.  Write 64 bytes from the start and
+	   32 bytes from the end.  */
+L(set96):
+	str	q0, [dstin, 16]
+	stp	q0, q0, [dstin, 32]
+	stp	q0, q0, [dstend, -32]
+	ret
+
+	.p2align 4
+L(set_long):
+	and	valw, valw, 255
+	bic	dst, dstin, 15
+	str	q0, [dstin]
+	cmp	count, 160
+	ccmp	valw, 0, 0, hs
+	b.ne	L(no_zva)
+
+#ifndef SKIP_ZVA_CHECK
+	mrs	zva_val, dczid_el0
+	and	zva_val, zva_val, 31
+	cmp	zva_val, 4		/* ZVA size is 64 bytes.  */
+	b.ne	L(no_zva)
+#endif
+	str	q0, [dst, 16]
+	stp	q0, q0, [dst, 32]
+	bic	dst, dst, 63
+	sub	count, dstend, dst	/* Count is now 64 too large.  */
+	sub	count, count, 128	/* Adjust count and bias for loop.  */
+
+	.p2align 4
+L(zva_loop):
+	add	dst, dst, 64
+	dc	zva, dst
+	subs	count, count, 64
+	b.hi	L(zva_loop)
+	stp	q0, q0, [dstend, -64]
+	stp	q0, q0, [dstend, -32]
+	ret
+
+L(no_zva):
+	sub	count, dstend, dst	/* Count is 16 too large.  */
+	sub	dst, dst, 16		/* Dst is biased by -32.  */
+	sub	count, count, 64 + 16	/* Adjust count and bias for loop.  */
+L(no_zva_loop):
+	stp	q0, q0, [dst, 32]
+	stp	q0, q0, [dst, 64]!
+	subs	count, count, 64
+	b.hi	L(no_zva_loop)
+	stp	q0, q0, [dstend, -64]
+	stp	q0, q0, [dstend, -32]
+	ret
+
+END (memset)
-- 
2.32.0



* [PATCH v1 5/5] arm64: Kconfig: Enable usage of optimized memset/memcpy
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
                   ` (3 preceding siblings ...)
  2021-08-06 13:38 ` [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions Stefan Roese
@ 2021-08-06 13:38 ` Stefan Roese
  2021-08-06 14:24 ` [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Tom Rini
  5 siblings, 0 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 13:38 UTC (permalink / raw)
  To: u-boot; +Cc: sjg, trini

This patch enables the use of the optimized memset() and memcpy()
versions recently added for ARM64.

Signed-off-by: Stefan Roese <sr@denx.de>

---

 arch/arm/Kconfig | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index d4d658569033..6669cc9382ed 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -457,7 +457,6 @@ config ARM_CORTEX_CPU_IS_UP
 config USE_ARCH_MEMCPY
 	bool "Use an assembly optimized implementation of memcpy"
 	default y
-	depends on !ARM64
 	help
 	  Enable the generation of an optimized version of memcpy.
 	  Such an implementation may be faster under some conditions
@@ -466,7 +465,7 @@ config USE_ARCH_MEMCPY
 config SPL_USE_ARCH_MEMCPY
 	bool "Use an assembly optimized implementation of memcpy for SPL"
 	default y if USE_ARCH_MEMCPY
-	depends on !ARM64 && SPL
+	depends on SPL
 	help
 	  Enable the generation of an optimized version of memcpy.
 	  Such an implementation may be faster under some conditions
@@ -475,7 +474,7 @@ config SPL_USE_ARCH_MEMCPY
 config TPL_USE_ARCH_MEMCPY
 	bool "Use an assembly optimized implementation of memcpy for TPL"
 	default y if USE_ARCH_MEMCPY
-	depends on !ARM64 && TPL
+	depends on TPL
 	help
 	  Enable the generation of an optimized version of memcpy.
 	  Such an implementation may be faster under some conditions
@@ -484,7 +483,6 @@ config TPL_USE_ARCH_MEMCPY
 config USE_ARCH_MEMSET
 	bool "Use an assembly optimized implementation of memset"
 	default y
-	depends on !ARM64
 	help
 	  Enable the generation of an optimized version of memset.
 	  Such an implementation may be faster under some conditions
@@ -493,7 +491,7 @@ config USE_ARCH_MEMSET
 config SPL_USE_ARCH_MEMSET
 	bool "Use an assembly optimized implementation of memset for SPL"
 	default y if USE_ARCH_MEMSET
-	depends on !ARM64 && SPL
+	depends on SPL
 	help
 	  Enable the generation of an optimized version of memset.
 	  Such an implementation may be faster under some conditions
@@ -502,7 +500,7 @@ config SPL_USE_ARCH_MEMSET
 config TPL_USE_ARCH_MEMSET
 	bool "Use an assembly optimized implementation of memset for TPL"
 	default y if USE_ARCH_MEMSET
-	depends on !ARM64 && TPL
+	depends on TPL
 	help
 	  Enable the generation of an optimized version of memset.
 	  Such an implementation may be faster under some conditions
-- 
2.32.0



* Re: [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions
  2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
                   ` (4 preceding siblings ...)
  2021-08-06 13:38 ` [PATCH v1 5/5] arm64: Kconfig: Enable usage of optimized memset/memcpy Stefan Roese
@ 2021-08-06 14:24 ` Tom Rini
  2021-08-06 14:44   ` Stefan Roese
  5 siblings, 1 reply; 17+ messages in thread
From: Tom Rini @ 2021-08-06 14:24 UTC (permalink / raw)
  To: Stefan Roese; +Cc: u-boot, sjg


On Fri, Aug 06, 2021 at 03:38:38PM +0200, Stefan Roese wrote:

> 
> On an NXP LX2160 based platform it has been noticed, that the currently
> implemented memset/memcpy functions for aarch64 are suboptimal.
> Especially the memset() for clearing the NXP MC firmware memory is very
> expensive (time-wise).
> 
> By using optimized functions, a speedup of ~ factor 6 has been measured.
> 
> This patchset now adds the optimized functions ported from this
> repository:
> https://github.com/ARM-software/optimized-routines
> 
> As these functions make use of opcodes that need the caches to be
> enabled, they can't be used in the very early boot stage, before the
> caches are enabled. Because of this, a simple memset() version is also
> added, in this case memset_simple(), and will be used in very few
> selected places.
> 
> Please note that checkpatch.pl complains about some issue with this
> imported file: arch/arm/lib/asmdefs.h
> Since it's imported I did explicitly not make any changes here, to make
> potential future sync'ing easer.

Traditionally, we grab the linux kernel's optimized functions.  Are
there not a set to grab there?

-- 
Tom



* Re: [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions
  2021-08-06 13:38 ` [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions Stefan Roese
@ 2021-08-06 14:43   ` Tom Rini
  2021-08-06 14:45     ` Stefan Roese
  2021-08-06 15:41   ` Wolfgang Denk
  1 sibling, 1 reply; 17+ messages in thread
From: Tom Rini @ 2021-08-06 14:43 UTC (permalink / raw)
  To: Stefan Roese; +Cc: u-boot, sjg


On Fri, Aug 06, 2021 at 03:38:42PM +0200, Stefan Roese wrote:

> Ported from https://github.com/ARM-software/optimized-routines
> 
> Please note that when adding these optimized functions as default memset
> memcpy functions in U-Boot, U-Boot fails to boot on the LX2160ARDB.
> After the initial ATF output, no U-Boot output is shown on the serial
> console. Some exception is triggered here in the very early boot process
> as some of the assembler opcodes need the caches to be enabled.
> 
> Because of this, some additional patches will introduce memset_simple()
> which will be used in some selected code location.
> 
> Note:
> I also integrated and tested with the Linux versions of these optimized
> functions. They are similar to the ones now integrated but these ARM
> versions are still a small bit faster.

Ah, so here's why you didn't use the kernel versions.  Do the kernel
versions still have the cache issue?  Also, this should be reworded a
bit since you did introduce memset_simple before here.

-- 
Tom



* Re: [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions
  2021-08-06 14:24 ` [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Tom Rini
@ 2021-08-06 14:44   ` Stefan Roese
  0 siblings, 0 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 14:44 UTC (permalink / raw)
  To: Tom Rini; +Cc: u-boot, sjg

Hi Tom,

On 06.08.21 16:24, Tom Rini wrote:
> On Fri, Aug 06, 2021 at 03:38:38PM +0200, Stefan Roese wrote:
> 
>>
>> On an NXP LX2160 based platform it has been noticed, that the currently
>> implemented memset/memcpy functions for aarch64 are suboptimal.
>> Especially the memset() for clearing the NXP MC firmware memory is very
>> expensive (time-wise).
>>
>> By using optimized functions, a speedup of ~ factor 6 has been measured.
>>
>> This patchset now adds the optimized functions ported from this
>> repository:
>> https://github.com/ARM-software/optimized-routines
>>
>> As these functions make use of opcodes that need the caches to be
>> enabled, they can't be used in the very early boot stage, before the
>> caches are enabled. Because of this, a simple memset() version is also
>> added, in this case memset_simple(), and will be used in very few
>> selected places.
>>
>> Please note that checkpatch.pl complains about some issue with this
>> imported file: arch/arm/lib/asmdefs.h
>> Since it's imported I did explicitly not make any changes here, to make
>> potential future sync'ing easer.
> 
> Traditionally, we grab the linux kernel's optimized functions.  Are
> there not a set to grab there?

Yes, there are, and I did this also. Here is my comment from the commit
log of patch 4/5:

Note:
I also integrated and tested with the Linux versions of these optimized
functions. They are similar to the ones now integrated but these ARM
versions are still a small bit faster.

Thanks,
Stefan


* Re: [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions
  2021-08-06 14:43   ` Tom Rini
@ 2021-08-06 14:45     ` Stefan Roese
  0 siblings, 0 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-06 14:45 UTC (permalink / raw)
  To: Tom Rini; +Cc: u-boot, sjg

Hi Tom,

On 06.08.21 16:43, Tom Rini wrote:
> On Fri, Aug 06, 2021 at 03:38:42PM +0200, Stefan Roese wrote:
> 
>> Ported from https://github.com/ARM-software/optimized-routines
>>
>> Please note that when adding these optimized functions as default memset
>> memcpy functions in U-Boot, U-Boot fails to boot on the LX2160ARDB.
>> After the initial ATF output, no U-Boot output is shown on the serial
>> console. Some exception is triggered here in the very early boot process
>> as some of the assembler opcodes need the caches to be enabled.
>>
>> Because of this, some additional patches will introduce memset_simple()
>> which will be used in some selected code location.
>>
>> Note:
>> I also integrated and tested with the Linux versions of these optimized
>> functions. They are similar to the ones now integrated but these ARM
>> versions are still a small bit faster.
> 
> Ah, so here's why you didn't use the kernel versions.  Do the kernel
> versions still have the cache issue?

Yes, same issue unfortunately.

>  Also, this should be reworded a
> bit since you did introduce memset_simple before here.

Agreed. I missed this. I'll reword this in v2.

Thanks,
Stefan


* Re: [PATCH v1 1/5] lib/string: Add memset_simple()
  2021-08-06 13:38 ` [PATCH v1 1/5] lib/string: Add memset_simple() Stefan Roese
@ 2021-08-06 15:36   ` Wolfgang Denk
  2021-08-06 16:13     ` Heinrich Schuchardt
  0 siblings, 1 reply; 17+ messages in thread
From: Wolfgang Denk @ 2021-08-06 15:36 UTC (permalink / raw)
  To: Stefan Roese; +Cc: u-boot, sjg, trini, Heinrich Schuchardt

Dear Stefan,

In message <20210806133843.3642916-2-sr@denx.de> you wrote:
> The optimized memset() ARM64 implementation cannot be used very early
> on, since caches need to be enabled for this function to work. Otherwise
> an exception occurs (only printed with very early DEBUG_UART enabled).
>
> This patch now implements a very simple memset() version, which will be
> used in a few selected places in the ARM64 early boot process.
...

We already have implementations of memset() elsewhere, for example in
lib/string.c [hey, that's even the file you are modifying...], then
again in lib/efi/efi_stub.c and also in
lib/efi_loader/efi_freestanding.c - so do we really need another
implementation?

BTW: lib/efi_loader/efi_freestanding.c  looks completely useless to
me.  Can we dump this? [Adding Heinrich to Cc:]


> +void *memset_simple(void *s, int c, size_t count)
> +{
> +	char *s8 = (char *)s;
> +
> +	while (count--)
> +		*s8++ = c;
> +
> +	return s;
> +}

In which way is this different from memset() as defined in the same
file (further up) with CONFIG_TINY_MEMSET enabled?  And even without
this option the memset() implementation here should work.  Or does
it not?

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
: ... and it's got weird formatting - Notepad, Write, Works  3  can't
: decipher it, and it's too big to go in DOS Edit. Help!
Install an operating system. :-)                  -- Tom Christiansen


* Re: [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions
  2021-08-06 13:38 ` [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions Stefan Roese
  2021-08-06 14:43   ` Tom Rini
@ 2021-08-06 15:41   ` Wolfgang Denk
  1 sibling, 0 replies; 17+ messages in thread
From: Wolfgang Denk @ 2021-08-06 15:41 UTC (permalink / raw)
  To: Stefan Roese; +Cc: u-boot, sjg, trini

Dear Stefan,

In message <20210806133843.3642916-5-sr@denx.de> you wrote:
> Ported from https://github.com/ARM-software/optimized-routines

Please provide an exact reference, at least including file names and
git commit IDs.

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
Artificial Intelligence is the study of how to  make  real  computers
act like the ones in movies.


* Re: [PATCH v1 1/5] lib/string: Add memset_simple()
  2021-08-06 15:36   ` Wolfgang Denk
@ 2021-08-06 16:13     ` Heinrich Schuchardt
  2021-08-07 15:18       ` Wolfgang Denk
  0 siblings, 1 reply; 17+ messages in thread
From: Heinrich Schuchardt @ 2021-08-06 16:13 UTC (permalink / raw)
  To: Wolfgang Denk, Stefan Roese; +Cc: u-boot, sjg, trini



On 8/6/21 5:36 PM, Wolfgang Denk wrote:
> Dear Stefan,
>
> In message <20210806133843.3642916-2-sr@denx.de> you wrote:
>> The optimized memset() ARM64 implementation cannot be used very early
>> on, since caches need to be enabled for this function to work. Otherwise
>> an exception occurs (only printed with very early DEBUG_UART enabled).
>>
>> This patch now implements a very simple memset() version, which will be
>> used in a few selected places in the ARM64 early boot process.
> ...
>
> We have already implementations of memset() elsewhere, for example in
> lib/string.c [hey, that's even the file you are modifying...], then
> again in lib/efi/efi_stub.c and also in
> lib/efi_loader/efi_freestanding.c - so do we really need another
> implementation?
>
> BTW: lib/efi_loader/efi_freestanding.c  looks completely useless to
> me.  Can we dump this? [Adding Heinrich to Cc:]

EFI binaries are freestanding. They cannot use any U-Boot library
function except the UEFI API exposed via the system table.

We compile EFI binaries with the GCC parameter -ffreestanding. As we do
not link against any GCC library, the compiler assumes that functions
like memset() are provided by the compiled source package itself. This
is where efi_freestanding.c comes in.

Please, have a look at

scripts/Makefile.lib:420:
targets += $(obj)/efi_crt0.o $(obj)/efi_reloc.o $(obj)/efi_freestanding.o

where we link the efi_freestanding.o module into every EFI binary.

Best regards

Heinrich

>
>
>> +void *memset_simple(void *s, int c, size_t count)
>> +{
>> +	char *s8 = (char *)s;
>> +
>> +	while (count--)
>> +		*s8++ = c;
>> +
>> +	return s;
>> +}
>
> In which way is this different from memset() as defined in the same
> file (further up) with CONFIG_TINY_MEMSET enabled?  And even without
> this option the memset() implementation here should work.  Or does
> it not?
>
> Best regards,
>
> Wolfgang Denk
>


* Re: [PATCH v1 1/5] lib/string: Add memset_simple()
  2021-08-06 16:13     ` Heinrich Schuchardt
@ 2021-08-07 15:18       ` Wolfgang Denk
  2021-08-07 15:34         ` Tom Rini
  0 siblings, 1 reply; 17+ messages in thread
From: Wolfgang Denk @ 2021-08-07 15:18 UTC (permalink / raw)
  To: Heinrich Schuchardt; +Cc: Stefan Roese, u-boot, sjg, trini

Dear Heinrich,

In message <c79ed1e1-3a8b-fc72-8423-a1d8ebd811cd@gmx.de> you wrote:
> 
> EFI binaries are freestanding. They cannot use any U-Boot library
> function except the UEFI API exposed via the system table.

I fail to see why they cannot link standard library functions
provided elsewhere in the U-Boot code.  Why do you have to
reimplement exactly the same code?

Best regards,

Wolfgang Denk

-- 
DENX Software Engineering GmbH,      Managing Director: Wolfgang Denk
HRB 165235 Munich, Office: Kirchenstr.5, D-82194 Groebenzell, Germany
Phone: (+49)-8142-66989-10 Fax: (+49)-8142-66989-80 Email: wd@denx.de
[Braddock:] Mr. Churchill, you are drunk.
[Churchill:] And you madam, are ugly.  But I shall be sober tomorrow.


* Re: [PATCH v1 1/5] lib/string: Add memset_simple()
  2021-08-07 15:18       ` Wolfgang Denk
@ 2021-08-07 15:34         ` Tom Rini
  0 siblings, 0 replies; 17+ messages in thread
From: Tom Rini @ 2021-08-07 15:34 UTC (permalink / raw)
  To: Wolfgang Denk; +Cc: Heinrich Schuchardt, Stefan Roese, u-boot, sjg

[-- Attachment #1: Type: text/plain, Size: 565 bytes --]

On Sat, Aug 07, 2021 at 05:18:48PM +0200, Wolfgang Denk wrote:
> Dear Heinrich,
> 
> In message <c79ed1e1-3a8b-fc72-8423-a1d8ebd811cd@gmx.de> you wrote:
> > 
> > EFI binaries are freestanding. They cannot use any U-Boot library
> > function except the UEFI API exposed via the system table.
> 
> I fail to see why they cannot link standard library functions
> provided elsewhere in the U-Boot code.  Why do you have to
> reimplement exactly the same code?

Because it's not part of U-Boot, it's part of the freestanding EFI
selftest code.

-- 
Tom

[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 659 bytes --]


* Re: [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve()
  2021-08-06 13:38 ` [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve() Stefan Roese
@ 2021-08-09  7:52   ` Rasmus Villemoes
  2021-08-09 13:17     ` Stefan Roese
  0 siblings, 1 reply; 17+ messages in thread
From: Rasmus Villemoes @ 2021-08-09  7:52 UTC (permalink / raw)
  To: Stefan Roese, u-boot; +Cc: sjg, trini

On 06/08/2021 15.38, Stefan Roese wrote:
> board_init_f_init_reserve() is called very early in the boot process,
> before the caches are enabled. Because of this, the optimized memset()
> version can't be used here on ARM64. With this patch, the simple memset
> version memset_simple() is used here instead.

I'm afraid this approach is pretty much whack-a-mole, and it's also a
pity that one arch's limitations/special needs affects common code. It
also means we lose the compiler's knowledge of "ah, here you're calling
a standard library function whose semantics I know".

The linux kernel has a lot of instances of self-modifying code
(alternatives, static keys, etc.). Could we do something like that in
U-Boot? That is, have the optimized asm version of memset start with an
unconditional jump to memset_simple, then when the arch is ready,
overwrite that jump instruction with a nop. Compared to the kernel, we
don't have any complications from multiple CPUs, so it shouldn't be too
hard.

That would allow all callers to just call memset(), which is better both
for the compiler and the human brain. Also, when somebody modifies the
common code later, they won't know that they need to spell memset
"memset_simple", as their modifications probably work just fine on their
platforms, but then it'll explode on ARM64. And finally, what happens if
somebody disables caches at run-time? Shouldn't we then also have a
mechanism to switch all mem* calls over to the simple versions?

Rasmus


* Re: [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve()
  2021-08-09  7:52   ` Rasmus Villemoes
@ 2021-08-09 13:17     ` Stefan Roese
  0 siblings, 0 replies; 17+ messages in thread
From: Stefan Roese @ 2021-08-09 13:17 UTC (permalink / raw)
  To: Rasmus Villemoes, u-boot; +Cc: sjg, trini

Hi Rasmus,

On 09.08.21 09:52, Rasmus Villemoes wrote:
> On 06/08/2021 15.38, Stefan Roese wrote:
>> board_init_f_init_reserve() is called very early in the boot process,
>> before the caches are enabled. Because of this, the optimized memset()
>> version can't be used here on ARM64. With this patch, the simple memset
>> version memset_simple() is used here instead.
> 
> I'm afraid this approach is pretty much whack-a-mole, and it's also a
> pity that one arch's limitations/special needs affects common code.

I fully agree - I was (am) also not really happy with the suggested
implementation.

> It
> also means we lose the compiler's knowledge of "ah, here you're calling
> a standard library function whose semantics I know".
> 
> The linux kernel has a lot of instances of self-modifying code
> (alternatives, static keys, etc.). Could we do something like that in
> U-Boot? That is, have the optimized asm version of memset start with an
> unconditional jump to memset_simple, then when the arch is ready,
> overwrite that jump instruction with a nop. Compared to the kernel, we
> don't have any complications from multiple CPUs, so it shouldn't be too
> hard.
> 
> That would allow all callers to just call memset(), which is better both
> for the compiler and the human brain. Also, when somebody modifies the
> common code later, they won't know that they need to spell memset
> "memset_simple", as their modifications probably work just fine on their
> platforms, but then it'll explode on ARM64. And finally, what happens if
> somebody disables caches at run-time?

U-Boot crashes! I forgot about this problem.

> Shouldn't we then also have a
> mechanism to switch all mem* calls over to the simple versions?

AFAICT, it's only the "dc" opcode which needs caching enabled. And "dc"
is only used in memset() and not memcpy().

Let me think again about either using your suggestion with the self
modifying code or perhaps by adding a "cache enable" check in the
new memset() function.

Thanks for your valuable feedback.

Thanks,
Stefan


end of thread, other threads:[~2021-08-09 13:17 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-08-06 13:38 [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Stefan Roese
2021-08-06 13:38 ` [PATCH v1 1/5] lib/string: Add memset_simple() Stefan Roese
2021-08-06 15:36   ` Wolfgang Denk
2021-08-06 16:13     ` Heinrich Schuchardt
2021-08-07 15:18       ` Wolfgang Denk
2021-08-07 15:34         ` Tom Rini
2021-08-06 13:38 ` [PATCH v1 2/5] board_init: Use memset_simple() in board_init_f_init_reserve() Stefan Roese
2021-08-09  7:52   ` Rasmus Villemoes
2021-08-09 13:17     ` Stefan Roese
2021-08-06 13:38 ` [PATCH v1 3/5] arm64: cache_v8: Use memset_simple() in create_table() Stefan Roese
2021-08-06 13:38 ` [PATCH v1 4/5] arm64: arch/arm/lib: Add optimized memset/memcpy functions Stefan Roese
2021-08-06 14:43   ` Tom Rini
2021-08-06 14:45     ` Stefan Roese
2021-08-06 15:41   ` Wolfgang Denk
2021-08-06 13:38 ` [PATCH v1 5/5] arm64: Kconfig: Enable usage of optimized memset/memcpy Stefan Roese
2021-08-06 14:24 ` [PATCH v1 0/5] arm64: Add optimized memset/memcpy functions Tom Rini
2021-08-06 14:44   ` Stefan Roese
