* [PATCH v2 1/2] arm64: add workaround for ambiguous C99 stdint.h types @ 2018-11-27 5:27 ` Jackie Liu 0 siblings, 0 replies; 8+ messages in thread From: Jackie Liu @ 2018-11-27 5:27 UTC (permalink / raw) To: ard.biesheuvel; +Cc: linux-arm-kernel, linux-block, Jackie Liu In a way similar to ARM commit 09096f6a0ee2 ("ARM: 7822/1: add workaround for ambiguous C99 stdint.h types"), this patch redefines the macros that are used in stdint.h so its definitions of uint64_t and int64_t are compatible with those of the kernel. This patch comes from: https://patchwork.kernel.org/patch/3540001/ Written by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> --- arch/arm64/include/uapi/asm/types.h | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) create mode 100644 arch/arm64/include/uapi/asm/types.h diff --git a/arch/arm64/include/uapi/asm/types.h b/arch/arm64/include/uapi/asm/types.h new file mode 100644 index 0000000..0016780 --- /dev/null +++ b/arch/arm64/include/uapi/asm/types.h @@ -0,0 +1,26 @@ +#ifndef _UAPI_ASM_TYPES_H +#define _UAPI_ASM_TYPES_H + +#include <asm-generic/int-ll64.h> + +/* + * For AArch64, there is some ambiguity in the definition of the types below + * between the kernel and GCC itself. This is usually not a big deal, but it + * causes trouble when including GCC's version of 'stdint.h' (this is the file + * that gets included when you #include <stdint.h> on a -ffreestanding build). + * As this file also gets included implicitly when including 'arm_neon.h' (the + * NEON intrinsics support header), we need the following to work around the + * issue if we want to use NEON intrinsics in the kernel.
+ */ + +#ifdef __INT64_TYPE__ +#undef __INT64_TYPE__ +#define __INT64_TYPE__ __signed__ long long +#endif + +#ifdef __UINT64_TYPE__ +#undef __UINT64_TYPE__ +#define __UINT64_TYPE__ unsigned long long +#endif + +#endif /* _UAPI_ASM_TYPES_H */ -- 2.7.4 ^ permalink raw reply related [flat|nested] 8+ messages in thread
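The ambiguity this patch works around is that `unsigned long` and `unsigned long long` are distinct C types even on an LP64 target where both are 64 bits wide: GCC's AArch64 `stdint.h` picks `unsigned long` for `uint64_t`, while the kernel's `u64` is `unsigned long long`, so pointers to the two are incompatible. A minimal host-side illustration of the type distinction (editorial sketch, not part of the patch):

```c
#include <stdint.h>

/* _Generic dispatches on the static C type, not on the width, so it can
 * tell 'unsigned long' and 'unsigned long long' apart even when both are
 * 8 bytes. This is exactly the mismatch the __UINT64_TYPE__ override
 * papers over: which tag TYPE_TAG((uint64_t)0) yields depends on how the
 * toolchain defined uint64_t. */
#define TYPE_TAG(x) _Generic((x),      \
        unsigned long long: 1,         \
        unsigned long:      2,         \
        default:            0)
```

With the override applied, `uint64_t` lands in the `unsigned long long` branch and matches the kernel's `u64`.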
* [PATCH v2 2/2] arm64: crypto: add NEON accelerated XOR implementation 2018-11-27 5:27 ` Jackie Liu @ 2018-11-27 5:27 ` Jackie Liu -1 siblings, 0 replies; 8+ messages in thread From: Jackie Liu @ 2018-11-27 5:27 UTC (permalink / raw) To: ard.biesheuvel; +Cc: linux-arm-kernel, linux-block, Jackie Liu This is a NEON acceleration method that can improve performance by approximately 20%. I got the following data from CentOS 7.5 on Huawei's Hi1616 chip: [ 93.837726] xor: measuring software checksum speed [ 93.874039] 8regs : 7123.200 MB/sec [ 93.914038] 32regs : 7180.300 MB/sec [ 93.954043] arm64_neon: 9856.000 MB/sec [ 93.954047] xor: using function: arm64_neon (9856.000 MB/sec) I believe this code can bring some optimization for all arm64 platforms. This is patch version 2; thanks to Ard Biesheuvel for his suggestions. Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> --- arch/arm64/include/asm/Kbuild | 1 - arch/arm64/include/asm/xor.h | 73 +++++++++++++++++ arch/arm64/lib/Makefile | 6 ++ arch/arm64/lib/xor-neon.c | 184 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 263 insertions(+), 1 deletion(-) create mode 100644 arch/arm64/include/asm/xor.h create mode 100644 arch/arm64/lib/xor-neon.c diff --git a/arch/arm64/include/asm/Kbuild b/arch/arm64/include/asm/Kbuild index 6cd5d77..1877f29 100644 --- a/arch/arm64/include/asm/Kbuild +++ b/arch/arm64/include/asm/Kbuild @@ -27,4 +27,3 @@ generic-y += trace_clock.h generic-y += unaligned.h generic-y += user.h generic-y += vga.h -generic-y += xor.h diff --git a/arch/arm64/include/asm/xor.h b/arch/arm64/include/asm/xor.h new file mode 100644 index 0000000..856386a --- /dev/null +++ b/arch/arm64/include/asm/xor.h @@ -0,0 +1,73 @@ +/* + * arch/arm64/include/asm/xor.h + * + * Authors: Jackie Liu <liuyun01@kylinos.cn> + * Copyright (C) 2018, Tianjin KYLIN Information Technology Co., Ltd.
+ * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#include <linux/hardirq.h> +#include <asm-generic/xor.h> +#include <asm/hwcap.h> +#include <asm/neon.h> + +#ifdef CONFIG_KERNEL_MODE_NEON + +extern struct xor_block_template const xor_block_inner_neon; + +static void +xor_neon_2(unsigned long bytes, unsigned long *p1, unsigned long *p2) +{ + kernel_neon_begin(); + xor_block_inner_neon.do_2(bytes, p1, p2); + kernel_neon_end(); +} + +static void +xor_neon_3(unsigned long bytes, unsigned long *p1, unsigned long *p2, + unsigned long *p3) +{ + kernel_neon_begin(); + xor_block_inner_neon.do_3(bytes, p1, p2, p3); + kernel_neon_end(); +} + +static void +xor_neon_4(unsigned long bytes, unsigned long *p1, unsigned long *p2, + unsigned long *p3, unsigned long *p4) +{ + kernel_neon_begin(); + xor_block_inner_neon.do_4(bytes, p1, p2, p3, p4); + kernel_neon_end(); +} + +static void +xor_neon_5(unsigned long bytes, unsigned long *p1, unsigned long *p2, + unsigned long *p3, unsigned long *p4, unsigned long *p5) +{ + kernel_neon_begin(); + xor_block_inner_neon.do_5(bytes, p1, p2, p3, p4, p5); + kernel_neon_end(); +} + +static struct xor_block_template xor_block_arm64 = { + .name = "arm64_neon", + .do_2 = xor_neon_2, + .do_3 = xor_neon_3, + .do_4 = xor_neon_4, + .do_5 = xor_neon_5 +}; +#undef XOR_TRY_TEMPLATES +#define XOR_TRY_TEMPLATES \ + do { \ + xor_speed(&xor_block_8regs); \ + xor_speed(&xor_block_32regs); \ + if (cpu_has_neon()) { \ + xor_speed(&xor_block_arm64);\ + } \ + } while (0) + +#endif /* ! 
CONFIG_KERNEL_MODE_NEON */ diff --git a/arch/arm64/lib/Makefile b/arch/arm64/lib/Makefile index 69ff988..5540a16 100644 --- a/arch/arm64/lib/Makefile +++ b/arch/arm64/lib/Makefile @@ -5,6 +5,12 @@ lib-y := clear_user.o delay.o copy_from_user.o \ memcmp.o strcmp.o strncmp.o strlen.o strnlen.o \ strchr.o strrchr.o tishift.o +ifeq ($(CONFIG_KERNEL_MODE_NEON), y) +obj-$(CONFIG_XOR_BLOCKS) += xor-neon.o +CFLAGS_REMOVE_xor-neon.o += -mgeneral-regs-only +CFLAGS_xor-neon.o += -ffreestanding +endif + # Tell the compiler to treat all general purpose registers (with the # exception of the IP registers, which are already handled by the caller # in case of a PLT) as callee-saved, which allows for efficient runtime diff --git a/arch/arm64/lib/xor-neon.c b/arch/arm64/lib/xor-neon.c new file mode 100644 index 0000000..e2d2f4a --- /dev/null +++ b/arch/arm64/lib/xor-neon.c @@ -0,0 +1,184 @@ +/* + * arch/arm64/lib/xor-neon-core.c + * + * Authors: Jackie Liu <liuyun01@kylinos.cn> + * Copyright (C) 2018,Tianjin KYLIN Information Technology Co., Ltd. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. 
+ */ + +#include <linux/raid/xor.h> +#include <linux/module.h> +#include <arm_neon.h> + +void xor_arm64_neon_2(unsigned long bytes, unsigned long *p1, + unsigned long *p2) +{ + uint64_t *dp1 = (uint64_t *)p1; + uint64_t *dp2 = (uint64_t *)p2; + + register uint64x2_t v0, v1, v2, v3; + long lines = bytes / (sizeof(uint64x2_t) * 4); + + do { + /* p1 ^= p2 */ + v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0)); + v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2)); + v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4)); + v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6)); + + /* store */ + vst1q_u64(dp1 + 0, v0); + vst1q_u64(dp1 + 2, v1); + vst1q_u64(dp1 + 4, v2); + vst1q_u64(dp1 + 6, v3); + + dp1 += 8; + dp2 += 8; + } while (--lines > 0); +} + +void xor_arm64_neon_3(unsigned long bytes, unsigned long *p1, + unsigned long *p2, unsigned long *p3) +{ + uint64_t *dp1 = (uint64_t *)p1; + uint64_t *dp2 = (uint64_t *)p2; + uint64_t *dp3 = (uint64_t *)p3; + + register uint64x2_t v0, v1, v2, v3; + long lines = bytes / (sizeof(uint64x2_t) * 4); + + do { + /* p1 ^= p2 */ + v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0)); + v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2)); + v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4)); + v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6)); + + /* p1 ^= p3 */ + v0 = veorq_u64(v0, vld1q_u64(dp3 + 0)); + v1 = veorq_u64(v1, vld1q_u64(dp3 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp3 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp3 + 6)); + + /* store */ + vst1q_u64(dp1 + 0, v0); + vst1q_u64(dp1 + 2, v1); + vst1q_u64(dp1 + 4, v2); + vst1q_u64(dp1 + 6, v3); + + dp1 += 8; + dp2 += 8; + dp3 += 8; + } while (--lines > 0); +} + +void xor_arm64_neon_4(unsigned long bytes, unsigned long *p1, + unsigned long *p2, unsigned long *p3, unsigned long *p4) +{ + uint64_t *dp1 = (uint64_t *)p1; + uint64_t *dp2 = (uint64_t *)p2; + uint64_t *dp3 = (uint64_t *)p3; + uint64_t *dp4 = (uint64_t *)p4; + + register uint64x2_t v0, v1, v2, v3; + 
long lines = bytes / (sizeof(uint64x2_t) * 4); + + do { + /* p1 ^= p2 */ + v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0)); + v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2)); + v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4)); + v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6)); + + /* p1 ^= p3 */ + v0 = veorq_u64(v0, vld1q_u64(dp3 + 0)); + v1 = veorq_u64(v1, vld1q_u64(dp3 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp3 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp3 + 6)); + + /* p1 ^= p4 */ + v0 = veorq_u64(v0, vld1q_u64(dp4 + 0)); + v1 = veorq_u64(v1, vld1q_u64(dp4 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp4 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp4 + 6)); + + /* store */ + vst1q_u64(dp1 + 0, v0); + vst1q_u64(dp1 + 2, v1); + vst1q_u64(dp1 + 4, v2); + vst1q_u64(dp1 + 6, v3); + + dp1 += 8; + dp2 += 8; + dp3 += 8; + dp4 += 8; + } while (--lines > 0); +} + +void xor_arm64_neon_5(unsigned long bytes, unsigned long *p1, + unsigned long *p2, unsigned long *p3, + unsigned long *p4, unsigned long *p5) +{ + uint64_t *dp1 = (uint64_t *)p1; + uint64_t *dp2 = (uint64_t *)p2; + uint64_t *dp3 = (uint64_t *)p3; + uint64_t *dp4 = (uint64_t *)p4; + uint64_t *dp5 = (uint64_t *)p5; + + register uint64x2_t v0, v1, v2, v3; + long lines = bytes / (sizeof(uint64x2_t) * 4); + + do { + /* p1 ^= p2 */ + v0 = veorq_u64(vld1q_u64(dp1 + 0), vld1q_u64(dp2 + 0)); + v1 = veorq_u64(vld1q_u64(dp1 + 2), vld1q_u64(dp2 + 2)); + v2 = veorq_u64(vld1q_u64(dp1 + 4), vld1q_u64(dp2 + 4)); + v3 = veorq_u64(vld1q_u64(dp1 + 6), vld1q_u64(dp2 + 6)); + + /* p1 ^= p3 */ + v0 = veorq_u64(v0, vld1q_u64(dp3 + 0)); + v1 = veorq_u64(v1, vld1q_u64(dp3 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp3 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp3 + 6)); + + /* p1 ^= p4 */ + v0 = veorq_u64(v0, vld1q_u64(dp4 + 0)); + v1 = veorq_u64(v1, vld1q_u64(dp4 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp4 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp4 + 6)); + + /* p1 ^= p5 */ + v0 = veorq_u64(v0, vld1q_u64(dp5 + 0)); + v1 = veorq_u64(v1, 
vld1q_u64(dp5 + 2)); + v2 = veorq_u64(v2, vld1q_u64(dp5 + 4)); + v3 = veorq_u64(v3, vld1q_u64(dp5 + 6)); + + /* store */ + vst1q_u64(dp1 + 0, v0); + vst1q_u64(dp1 + 2, v1); + vst1q_u64(dp1 + 4, v2); + vst1q_u64(dp1 + 6, v3); + + dp1 += 8; + dp2 += 8; + dp3 += 8; + dp4 += 8; + dp5 += 8; + } while (--lines > 0); +} + +struct xor_block_template const xor_block_inner_neon = { + .name = "__inner_neon__", + .do_2 = xor_arm64_neon_2, + .do_3 = xor_arm64_neon_3, + .do_4 = xor_arm64_neon_4, + .do_5 = xor_arm64_neon_5, +}; +EXPORT_SYMBOL(xor_block_inner_neon); + +MODULE_AUTHOR("Jackie Liu <liuyun01@kylinos.cn>"); +MODULE_DESCRIPTION("ARMv8 XOR Extensions"); +MODULE_LICENSE("GPL"); -- 2.7.4 ^ permalink raw reply related [flat|nested] 8+ messages in thread
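The NEON routines in the patch above all share one structure: each loop iteration XOR-accumulates a 64-byte chunk (four `uint64x2_t` vectors, i.e. eight `uint64_t` words) from the other source buffers into `p1`. A hypothetical scalar equivalent of `xor_arm64_neon_2()`, handy for sanity-checking the NEON version, is simply:

```c
#include <stdint.h>

/* Scalar equivalent of xor_arm64_neon_2(): p1 ^= p2 over 'bytes' bytes.
 * Like the NEON do/while loop, this assumes 'bytes' is a non-zero
 * multiple of 64 (sizeof(uint64x2_t) * 4). Editorial sketch, not part
 * of the patch. */
static void xor_scalar_2(unsigned long bytes, uint64_t *p1,
			 const uint64_t *p2)
{
	unsigned long words = bytes / sizeof(uint64_t);
	unsigned long i;

	for (i = 0; i < words; i++)
		p1[i] ^= p2[i];
}
```

Because XOR is its own inverse, running the routine twice with the same `p2` must restore `p1` exactly, which gives a quick correctness check against the vectorized code.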
* Re: [PATCH v2 1/2] arm64: add workaround for ambiguous C99 stdint.h types 2018-11-27 5:27 ` Jackie Liu @ 2018-11-27 8:17 ` Ard Biesheuvel -1 siblings, 0 replies; 8+ messages in thread From: Ard Biesheuvel @ 2018-11-27 8:17 UTC (permalink / raw) To: liuyun01; +Cc: linux-arm-kernel, linux-block On Tue, 27 Nov 2018 at 06:28, Jackie Liu <liuyun01@kylinos.cn> wrote: > > In a way similar to ARM commit 09096f6a0ee2 ("ARM: 7822/1: add workaround > for ambiguous C99 stdint.h types"), this patch redefines the macros that > are used in stdint.h so its definitions of uint64_t and int64_t are > compatible with those of the kernel. > > This patch comes from: https://patchwork.kernel.org/patch/3540001/ > Wrote by: Ard Biesheuvel <ard.biesheuvel@linaro.org> > OK, I remember now :-) So this is the reason you had two separate source files in the previous revision. Could we maybe deal with this differently? Could we add a header arch/arm64/include/asm/neon-intrinsics.h that includes <arm_neon.h> after setting the preprocessor overrides below? And reference that header from your code? That way, we don't have to override asm/types.h for everyone. > Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> > --- > arch/arm64/include/uapi/asm/types.h | 26 ++++++++++++++++++++++++++ > 1 file changed, 26 insertions(+) > create mode 100644 arch/arm64/include/uapi/asm/types.h > > diff --git a/arch/arm64/include/uapi/asm/types.h b/arch/arm64/include/uapi/asm/types.h > new file mode 100644 > index 0000000..0016780 > --- /dev/null > +++ b/arch/arm64/include/uapi/asm/types.h > @@ -0,0 +1,26 @@ > +#ifndef _UAPI_ASM_TYPES_H > +#define _UAPI_ASM_TYPES_H > + > +#include <asm-generic/int-ll64.h> > + > +/* > + * For Aarch64, there is some ambiguity in the definition of the types below > + * between the kernel and GCC itself. 
This is usually not a big deal, but it > + * causes trouble when including GCC's version of 'stdint.h' (this is the file > + * that gets included when you #include <stdint.h> on a -ffreestanding build). > + * As this file also gets included implicitly when including 'arm_neon.h' (the > + * NEON intrinsics support header), we need the following to work around the > + * issue if we want to use NEON intrinsics in the kernel. > + */ > + > +#ifdef __INT64_TYPE__ > +#undef __INT64_TYPE__ > +#define __INT64_TYPE__ __signed__ long long > +#endif > + > +#ifdef __UINT64_TYPE__ > +#undef __UINT64_TYPE__ > +#define __UINT64_TYPE__ unsigned long long > +#endif > + > +#endif /* _UAPI_ASM_TYPES_H */ > -- > 2.7.4 > > > > ^ permalink raw reply [flat|nested] 8+ messages in thread
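Ard's proposal amounts to a small wrapper header. One possible sketch of such an `arch/arm64/include/asm/neon-intrinsics.h` (hypothetical; the exact file contents are not shown in this thread) would apply the type overrides locally and then pull in the compiler header, so that only NEON-intrinsics users see them:

```c
/* Hypothetical arch/arm64/include/asm/neon-intrinsics.h: code that wants
 * NEON intrinsics includes this file instead of <arm_neon.h> directly,
 * and asm/types.h stays untouched for everyone else. */
#ifndef __ASM_NEON_INTRINSICS_H
#define __ASM_NEON_INTRINSICS_H

#include <asm-generic/int-ll64.h>

/* In the kernel, 64-bit types are 'long long' based; override GCC's
 * defaults before stdint.h is pulled in via arm_neon.h. */
#ifdef __INT64_TYPE__
#undef __INT64_TYPE__
#define __INT64_TYPE__		__signed__ long long
#endif

#ifdef __UINT64_TYPE__
#undef __UINT64_TYPE__
#define __UINT64_TYPE__		unsigned long long
#endif

#include <arm_neon.h>

#endif /* __ASM_NEON_INTRINSICS_H */
```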
* Re: [PATCH v2 1/2] arm64: add workaround for ambiguous C99 stdint.h types 2018-11-27 8:17 ` Ard Biesheuvel @ 2018-11-27 10:01 ` JackieLiu -1 siblings, 0 replies; 8+ messages in thread From: JackieLiu @ 2018-11-27 10:01 UTC (permalink / raw) To: Ard Biesheuvel; +Cc: linux-arm-kernel, linux-block Yep, you are right. I will change the code later. But I have now found a new problem: when building the kernel, I got the following message: WARNING: EXPORT symbol "xor_block_inner_neon" [arch/arm64/lib/xor-neon.ko] version generation failed, symbol will not be versioned. I don't know why, do you have any idea? > On 27 Nov 2018, at 16:17, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote: > > On Tue, 27 Nov 2018 at 06:28, Jackie Liu <liuyun01@kylinos.cn> wrote: >> >> In a way similar to ARM commit 09096f6a0ee2 ("ARM: 7822/1: add workaround >> for ambiguous C99 stdint.h types"), this patch redefines the macros that >> are used in stdint.h so its definitions of uint64_t and int64_t are >> compatible with those of the kernel. >> >> This patch comes from: https://patchwork.kernel.org/patch/3540001/ >> Wrote by: Ard Biesheuvel <ard.biesheuvel@linaro.org> >> > > OK, I remember now :-) > > So this is the reason you had two separate source files in the > previous revision. > > Could we maybe deal with this differently? Could we add a header > > arch/arm64/include/asm/neon-intrinsics.h > > that includes <arm_neon.h> after setting the preprocessor overrides > below? And reference that header from your code? > > That way, we don't have to override asm/types.h for everyone.
> > > >> Signed-off-by: Jackie Liu <liuyun01@kylinos.cn> >> --- >> arch/arm64/include/uapi/asm/types.h | 26 ++++++++++++++++++++++++++ >> 1 file changed, 26 insertions(+) >> create mode 100644 arch/arm64/include/uapi/asm/types.h >> >> diff --git a/arch/arm64/include/uapi/asm/types.h b/arch/arm64/include/uapi/asm/types.h >> new file mode 100644 >> index 0000000..0016780 >> --- /dev/null >> +++ b/arch/arm64/include/uapi/asm/types.h >> @@ -0,0 +1,26 @@ >> +#ifndef _UAPI_ASM_TYPES_H >> +#define _UAPI_ASM_TYPES_H >> + >> +#include <asm-generic/int-ll64.h> >> + >> +/* >> + * For Aarch64, there is some ambiguity in the definition of the types below >> + * between the kernel and GCC itself. This is usually not a big deal, but it >> + * causes trouble when including GCC's version of 'stdint.h' (this is the file >> + * that gets included when you #include <stdint.h> on a -ffreestanding build). >> + * As this file also gets included implicitly when including 'arm_neon.h' (the >> + * NEON intrinsics support header), we need the following to work around the >> + * issue if we want to use NEON intrinsics in the kernel. >> + */ >> + >> +#ifdef __INT64_TYPE__ >> +#undef __INT64_TYPE__ >> +#define __INT64_TYPE__ __signed__ long long >> +#endif >> + >> +#ifdef __UINT64_TYPE__ >> +#undef __UINT64_TYPE__ >> +#define __UINT64_TYPE__ unsigned long long >> +#endif >> + >> +#endif /* _UAPI_ASM_TYPES_H */ >> -- >> 2.7.4 >> >> >> >> > ^ permalink raw reply [flat|nested] 8+ messages in thread