* [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
@ 2019-01-30  3:14 ` Eric Biggers
  0 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Ard Biesheuvel, Tim Chen, linux-arm-kernel

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand, partly because many of the comments, labels, and
macros are named incorrectly: the lengths they mention are usually off by
a factor of two from what the code actually does.  Many other things are
unnecessarily convoluted as well, e.g. there are many more fold constants
than are actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This is based on top of the pending patches from Ard Biesheuvel.

These all pass the new extra self-tests.
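
For reference, the function all three implementations compute is the
standard CRC-16/T10-DIF: generator polynomial G(x) = 0x18bb7,
most-significant-bit first, no reflection.  A minimal bit-by-bit Python
sketch, illustrative only and not part of this series:

  def crc_t10dif_ref(crc, data):
      # 0x8bb7 is G(x) with the implicit x^16 term dropped.
      for byte in data:
          crc ^= byte << 8
          for _ in range(8):
              if crc & 0x8000:
                  crc = ((crc << 1) ^ 0x8bb7) & 0xffff
              else:
                  crc = (crc << 1) & 0xffff
      return crc

The asm versions compute the same function using GF(2) polynomial
arithmetic: they fold 128-bit chunks together with precomputed
'x^N mod G(x)' constants and finish with a Barrett reduction.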

Changed since v2:
- Removed the unnecessary '__LINUX_ARM_ARCH__ < 7' case.
- Added Ard's Acked-by.

Changed since v1:
- Moved constants in arm implementation to .rodata.
- Eliminated a few instructions from the x86 implementation.
- Tweaked a few comments.

Eric Biggers (3):
  crypto: x86/crct10dif-pcl - cleanup and optimizations
  crypto: arm/crct10dif-ce - cleanup and optimizations
  crypto: arm64/crct10dif-ce - cleanup and optimizations

 arch/arm/crypto/crct10dif-ce-core.S     | 552 ++++++++--------
 arch/arm/crypto/crct10dif-ce-glue.c     |   2 +-
 arch/arm64/crypto/crct10dif-ce-core.S   | 496 +++++++-------
 arch/arm64/crypto/crct10dif-ce-glue.c   |   4 +-
 arch/x86/crypto/crct10dif-pcl-asm_64.S  | 844 +++++++++---------------
 arch/x86/crypto/crct10dif-pclmul_glue.c |   3 +-
 6 files changed, 794 insertions(+), 1107 deletions(-)

-- 
2.20.1


^ permalink raw reply	[flat|nested] 22+ messages in thread

* [PATCH v3 1/3] crypto: x86/crct10dif-pcl - cleanup and optimizations
  2019-01-30  3:14 ` Eric Biggers
@ 2019-01-30  3:14   ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Ard Biesheuvel, Tim Chen, linux-arm-kernel

From: Eric Biggers <ebiggers@google.com>

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand, partly because many of the comments, labels, and
macros are named incorrectly: the lengths they mention are usually off by
a factor of two from what the code actually does.  Many other things are
unnecessarily convoluted as well, e.g. there are many more fold constants
than are actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This patch cleans up the x86 version.  Note that I opted to retain the
support for buffers < 16 bytes, but I simplified it to be much shorter
and have fewer distinct cases.  Performance on 2- and 3-byte inputs
dropped slightly, but all other lengths are actually faster now.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/crct10dif-pcl-asm_64.S  | 844 +++++++++---------------
 arch/x86/crypto/crct10dif-pclmul_glue.c |   3 +-
 2 files changed, 301 insertions(+), 546 deletions(-)
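
The fold and Barrett constants in the new .rodata table below are fully
reduced and can be re-derived from G(x) alone.  A small Python sketch that
reproduces them (illustrative only, not part of the patch):

  G = 0x18bb7  # x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1

  def gf2_mod(a, m):
      # Remainder of GF(2) polynomial division.
      while a.bit_length() >= m.bit_length():
          a ^= m << (a.bit_length() - m.bit_length())
      return a

  def gf2_div(a, m):
      # Quotient of GF(2) polynomial division.
      q = 0
      while a.bit_length() >= m.bit_length():
          s = a.bit_length() - m.bit_length()
          q |= 1 << s
          a ^= m << s
      return q

  assert gf2_mod(1 << (8*128), G) == 0x6123       # fold across 128 bytes
  assert gf2_mod(1 << (8*128 + 64), G) == 0x2295
  assert gf2_mod(1 << (1*128), G) == 0xa010       # fold across 16 bytes
  assert gf2_mod(1 << (1*128 + 64), G) == 0x1faa
  # The final-fold table entries are these values shifted left by 48 bits:
  assert gf2_mod(1 << 48, G) == 0x1368
  assert gf2_mod(1 << 80, G) == 0x2d56
  assert gf2_div(1 << 48, G) == 0x1f65a57f8       # Barrett: floor(x^48 / G(x))

Each pclmulqdq in the fold loops multiplies one 64-bit half of an
accumulator by one of these constants, which is how the 128 bytes of state
in xmm0-7 are carried forward across the rest of the data.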

diff --git a/arch/x86/crypto/crct10dif-pcl-asm_64.S b/arch/x86/crypto/crct10dif-pcl-asm_64.S
index de04d3e98d8d3..e57afa35ac0f9 100644
--- a/arch/x86/crypto/crct10dif-pcl-asm_64.S
+++ b/arch/x86/crypto/crct10dif-pcl-asm_64.S
@@ -43,609 +43,365 @@
 # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-########################################################################
-#       Function API:
-#       UINT16 crc_t10dif_pcl(
-#               UINT16 init_crc, //initial CRC value, 16 bits
-#               const unsigned char *buf, //buffer pointer to calculate CRC on
-#               UINT64 len //buffer length in bytes (64-bit data)
-#       );
 #
 #       Reference paper titled "Fast CRC Computation for Generic
 #	Polynomials Using PCLMULQDQ Instruction"
 #       URL: http://www.intel.com/content/dam/www/public/us/en/documents
 #  /white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 #
-#
 
 #include <linux/linkage.h>
 
 .text
 
-#define        arg1 %rdi
-#define        arg2 %rsi
-#define        arg3 %rdx
-
-#define        arg1_low32 %edi
+#define		init_crc	%di
+#define		init_crcl	%edi
+#define		buf		%rsi
+#define		len		%rdx
+
+#define		FOLD_CONSTS	%xmm10
+#define		BSWAP_MASK	%xmm11
+
+# Fold reg1, reg2 into the next 32 data bytes, storing the result back into
+# reg1, reg2.
+.macro	fold_32_bytes	offset, reg1, reg2
+	movdqu	\offset(buf), %xmm9
+	movdqu	\offset+16(buf), %xmm12
+	pshufb	BSWAP_MASK, %xmm9
+	pshufb	BSWAP_MASK, %xmm12
+	movdqa	\reg1, %xmm8
+	movdqa	\reg2, %xmm13
+	pclmulqdq	$0x00, FOLD_CONSTS, \reg1
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm8
+	pclmulqdq	$0x00, FOLD_CONSTS, \reg2
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm13
+	pxor	%xmm9 , \reg1
+	xorps	%xmm8 , \reg1
+	pxor	%xmm12, \reg2
+	xorps	%xmm13, \reg2
+.endm
+
+# Fold src_reg into dst_reg.
+.macro	fold_16_bytes	src_reg, dst_reg
+	movdqa	\src_reg, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, \src_reg
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
+	pxor	%xmm8, \dst_reg
+	xorps	\src_reg, \dst_reg
+.endm
 
-ENTRY(crc_t10dif_pcl)
+#
+# u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len);
+#
 .align 16
-
-	# adjust the 16-bit initial_crc value, scale it to 32 bits
-	shl	$16, arg1_low32
-
-	# Allocate Stack Space
-	mov     %rsp, %rcx
-	sub	$16*2, %rsp
-	# align stack to 16 byte boundary
-	and     $~(0x10 - 1), %rsp
-
-	# check if smaller than 256
-	cmp	$256, arg3
-
-	# for sizes less than 128, we can't fold 64B at a time...
-	jl	_less_than_128
-
-
-	# load the initial crc value
-	movd	arg1_low32, %xmm10	# initial crc
-
-	# crc value does not need to be byte-reflected, but it needs
-	# to be moved to the high part of the register.
-	# because data will be byte-reflected and will align with
-	# initial crc at correct place.
-	pslldq	$12, %xmm10
-
-	movdqa  SHUF_MASK(%rip), %xmm11
-	# receive the initial 64B data, xor the initial crc value
-	movdqu	16*0(arg2), %xmm0
-	movdqu	16*1(arg2), %xmm1
-	movdqu	16*2(arg2), %xmm2
-	movdqu	16*3(arg2), %xmm3
-	movdqu	16*4(arg2), %xmm4
-	movdqu	16*5(arg2), %xmm5
-	movdqu	16*6(arg2), %xmm6
-	movdqu	16*7(arg2), %xmm7
-
-	pshufb	%xmm11, %xmm0
-	# XOR the initial_crc value
-	pxor	%xmm10, %xmm0
-	pshufb	%xmm11, %xmm1
-	pshufb	%xmm11, %xmm2
-	pshufb	%xmm11, %xmm3
-	pshufb	%xmm11, %xmm4
-	pshufb	%xmm11, %xmm5
-	pshufb	%xmm11, %xmm6
-	pshufb	%xmm11, %xmm7
-
-	movdqa	rk3(%rip), %xmm10	#xmm10 has rk3 and rk4
-					#imm value of pclmulqdq instruction
-					#will determine which constant to use
-
-	#################################################################
-	# we subtract 256 instead of 128 to save one instruction from the loop
-	sub	$256, arg3
-
-	# at this section of the code, there is 64*x+y (0<=y<64) bytes of
-	# buffer. The _fold_64_B_loop will fold 64B at a time
-	# until we have 64+y Bytes of buffer
-
-
-	# fold 64B at a time. This section of the code folds 4 xmm
-	# registers in parallel
-_fold_64_B_loop:
-
-	# update the buffer pointer
-	add	$128, arg2		#    buf += 64#
-
-	movdqu	16*0(arg2), %xmm9
-	movdqu	16*1(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm0, %xmm8
-	movdqa	%xmm1, %xmm13
-	pclmulqdq	$0x0 , %xmm10, %xmm0
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0 , %xmm10, %xmm1
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm0
-	xorps	%xmm8 , %xmm0
-	pxor	%xmm12, %xmm1
-	xorps	%xmm13, %xmm1
-
-	movdqu	16*2(arg2), %xmm9
-	movdqu	16*3(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm2, %xmm8
-	movdqa	%xmm3, %xmm13
-	pclmulqdq	$0x0, %xmm10, %xmm2
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0, %xmm10, %xmm3
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm2
-	xorps	%xmm8 , %xmm2
-	pxor	%xmm12, %xmm3
-	xorps	%xmm13, %xmm3
-
-	movdqu	16*4(arg2), %xmm9
-	movdqu	16*5(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm4, %xmm8
-	movdqa	%xmm5, %xmm13
-	pclmulqdq	$0x0,  %xmm10, %xmm4
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0,  %xmm10, %xmm5
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 ,  %xmm4
-	xorps	%xmm8 ,  %xmm4
-	pxor	%xmm12,  %xmm5
-	xorps	%xmm13,  %xmm5
-
-	movdqu	16*6(arg2), %xmm9
-	movdqu	16*7(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm6 , %xmm8
-	movdqa	%xmm7 , %xmm13
-	pclmulqdq	$0x0 , %xmm10, %xmm6
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0 , %xmm10, %xmm7
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm6
-	xorps	%xmm8 , %xmm6
-	pxor	%xmm12, %xmm7
-	xorps	%xmm13, %xmm7
-
-	sub	$128, arg3
-
-	# check if there is another 64B in the buffer to be able to fold
-	jge	_fold_64_B_loop
-	##################################################################
-
-
-	add	$128, arg2
-	# at this point, the buffer pointer is pointing at the last y Bytes
-	# of the buffer the 64B of folded data is in 4 of the xmm
-	# registers: xmm0, xmm1, xmm2, xmm3
-
-
-	# fold the 8 xmm registers to 1 xmm register with different constants
-
-	movdqa	rk9(%rip), %xmm10
-	movdqa	%xmm0, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm0
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm0, %xmm7
-
-	movdqa	rk11(%rip), %xmm10
-	movdqa	%xmm1, %xmm8
-	pclmulqdq	 $0x11, %xmm10, %xmm1
-	pclmulqdq	 $0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm1, %xmm7
-
-	movdqa	rk13(%rip), %xmm10
-	movdqa	%xmm2, %xmm8
-	pclmulqdq	 $0x11, %xmm10, %xmm2
-	pclmulqdq	 $0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm2, %xmm7
-
-	movdqa	rk15(%rip), %xmm10
-	movdqa	%xmm3, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm3
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm3, %xmm7
-
-	movdqa	rk17(%rip), %xmm10
-	movdqa	%xmm4, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm4
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm4, %xmm7
-
-	movdqa	rk19(%rip), %xmm10
-	movdqa	%xmm5, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm5
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm5, %xmm7
-
-	movdqa	rk1(%rip), %xmm10	#xmm10 has rk1 and rk2
-					#imm value of pclmulqdq instruction
-					#will determine which constant to use
-	movdqa	%xmm6, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm6
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm6, %xmm7
-
-
-	# instead of 64, we add 48 to the loop counter to save 1 instruction
-	# from the loop instead of a cmp instruction, we use the negative
-	# flag with the jl instruction
-	add	$128-16, arg3
-	jl	_final_reduction_for_128
-
-	# now we have 16+y bytes left to reduce. 16 Bytes is in register xmm7
-	# and the rest is in memory. We can fold 16 bytes at a time if y>=16
-	# continue folding 16B at a time
-
-_16B_reduction_loop:
+ENTRY(crc_t10dif_pcl)
+	# Allocate a 16-byte aligned 16-byte stack buffer.
+	mov	%rsp, %rcx
+	sub	$16, %rsp
+	and	$~15, %rsp
+
+	movdqa	.Lbswap_mask(%rip), BSWAP_MASK
+
+	# For sizes less than 256 bytes, we can't fold 128 bytes at a time.
+	cmp	$256, len
+	jl	.Lless_than_256_bytes
+
+	# Load the first 128 data bytes.  Byte swapping is necessary to make the
+	# bit order match the polynomial coefficient order.
+	movdqu	16*0(buf), %xmm0
+	movdqu	16*1(buf), %xmm1
+	movdqu	16*2(buf), %xmm2
+	movdqu	16*3(buf), %xmm3
+	movdqu	16*4(buf), %xmm4
+	movdqu	16*5(buf), %xmm5
+	movdqu	16*6(buf), %xmm6
+	movdqu	16*7(buf), %xmm7
+	add	$128, buf
+	pshufb	BSWAP_MASK, %xmm0
+	pshufb	BSWAP_MASK, %xmm1
+	pshufb	BSWAP_MASK, %xmm2
+	pshufb	BSWAP_MASK, %xmm3
+	pshufb	BSWAP_MASK, %xmm4
+	pshufb	BSWAP_MASK, %xmm5
+	pshufb	BSWAP_MASK, %xmm6
+	pshufb	BSWAP_MASK, %xmm7
+
+	# XOR the first 16 data *bits* with the initial CRC value.
+	pxor	%xmm8, %xmm8
+	pinsrw	$7, init_crcl, %xmm8
+	pxor	%xmm8, %xmm0
+
+	movdqa	.Lfold_across_128_bytes_consts(%rip), FOLD_CONSTS
+
+	# Subtract 128 for the 128 data bytes just consumed.  Subtract another
+	# 128 to simplify the termination condition of the following loop.
+	sub	$256, len
+
+	# While >= 128 data bytes remain (not counting xmm0-7), fold the 128
+	# bytes xmm0-7 into them, storing the result back into xmm0-7.
+.Lfold_128_bytes_loop:
+	fold_32_bytes	0, %xmm0, %xmm1
+	fold_32_bytes	32, %xmm2, %xmm3
+	fold_32_bytes	64, %xmm4, %xmm5
+	fold_32_bytes	96, %xmm6, %xmm7
+	add	$128, buf
+	sub	$128, len
+	jge	.Lfold_128_bytes_loop
+
+	# Now fold the 112 bytes in xmm0-xmm6 into the 16 bytes in xmm7.
+
+	# Fold across 64 bytes.
+	movdqa	.Lfold_across_64_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm0, %xmm4
+	fold_16_bytes	%xmm1, %xmm5
+	fold_16_bytes	%xmm2, %xmm6
+	fold_16_bytes	%xmm3, %xmm7
+	# Fold across 32 bytes.
+	movdqa	.Lfold_across_32_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm4, %xmm6
+	fold_16_bytes	%xmm5, %xmm7
+	# Fold across 16 bytes.
+	movdqa	.Lfold_across_16_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm6, %xmm7
+
+	# Add 128 to get the correct number of data bytes remaining in 0...127
+	# (not counting xmm7), following the previous extra subtraction by 128.
+	# Then subtract 16 to simplify the termination condition of the
+	# following loop.
+	add	$128-16, len
+
+	# While >= 16 data bytes remain (not counting xmm7), fold the 16 bytes
+	# xmm7 into them, storing the result back into xmm7.
+	jl	.Lfold_16_bytes_loop_done
+.Lfold_16_bytes_loop:
 	movdqa	%xmm7, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm7
-	pclmulqdq	$0x0 , %xmm10, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
 	pxor	%xmm8, %xmm7
-	movdqu	(arg2), %xmm0
-	pshufb	%xmm11, %xmm0
+	movdqu	(buf), %xmm0
+	pshufb	BSWAP_MASK, %xmm0
 	pxor	%xmm0 , %xmm7
-	add	$16, arg2
-	sub	$16, arg3
-	# instead of a cmp instruction, we utilize the flags with the
-	# jge instruction equivalent of: cmp arg3, 16-16
-	# check if there is any more 16B in the buffer to be able to fold
-	jge	_16B_reduction_loop
-
-	#now we have 16+z bytes left to reduce, where 0<= z < 16.
-	#first, we reduce the data in the xmm7 register
-
-
-_final_reduction_for_128:
-	# check if any more data to fold. If not, compute the CRC of
-	# the final 128 bits
-	add	$16, arg3
-	je	_128_done
-
-	# here we are getting data that is less than 16 bytes.
-	# since we know that there was data before the pointer, we can
-	# offset the input pointer before the actual point, to receive
-	# exactly 16 bytes. after that the registers need to be adjusted.
-_get_last_two_xmms:
+	add	$16, buf
+	sub	$16, len
+	jge	.Lfold_16_bytes_loop
+
+.Lfold_16_bytes_loop_done:
+	# Add 16 to get the correct number of data bytes remaining in 0...15
+	# (not counting xmm7), following the previous extra subtraction by 16.
+	add	$16, len
+	je	.Lreduce_final_16_bytes
+
+.Lhandle_partial_segment:
+	# Reduce the last '16 + len' bytes where 1 <= len <= 15 and the first 16
+	# bytes are in xmm7 and the rest are the remaining data in 'buf'.  To do
+	# this without needing a fold constant for each possible 'len', redivide
+	# the bytes into a first chunk of 'len' bytes and a second chunk of 16
+	# bytes, then fold the first chunk into the second.
+
 	movdqa	%xmm7, %xmm2
 
-	movdqu	-16(arg2, arg3), %xmm1
-	pshufb	%xmm11, %xmm1
+	# xmm1 = last 16 original data bytes
+	movdqu	-16(buf, len), %xmm1
+	pshufb	BSWAP_MASK, %xmm1
 
-	# get rid of the extra data that was loaded before
-	# load the shift constant
-	lea	pshufb_shf_table+16(%rip), %rax
-	sub	arg3, %rax
+	# xmm2 = high order part of second chunk: xmm7 left-shifted by 'len' bytes.
+	lea	.Lbyteshift_table+16(%rip), %rax
+	sub	len, %rax
 	movdqu	(%rax), %xmm0
-
-	# shift xmm2 to the left by arg3 bytes
 	pshufb	%xmm0, %xmm2
 
-	# shift xmm7 to the right by 16-arg3 bytes
-	pxor	mask1(%rip), %xmm0
+	# xmm7 = first chunk: xmm7 right-shifted by '16-len' bytes.
+	pxor	.Lmask1(%rip), %xmm0
 	pshufb	%xmm0, %xmm7
+
+	# xmm1 = second chunk: 'len' bytes from xmm1 (low-order bytes),
+	# then '16-len' bytes from xmm2 (high-order bytes).
 	pblendvb	%xmm2, %xmm1	#xmm0 is implicit
 
-	# fold 16 Bytes
-	movdqa	%xmm1, %xmm2
+	# Fold the first chunk into the second chunk, storing the result in xmm7.
 	movdqa	%xmm7, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm7
-	pclmulqdq	$0x0 , %xmm10, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
 	pxor	%xmm8, %xmm7
-	pxor	%xmm2, %xmm7
+	pxor	%xmm1, %xmm7
 
-_128_done:
-	# compute crc of a 128-bit value
-	movdqa	rk5(%rip), %xmm10	# rk5 and rk6 in xmm10
-	movdqa	%xmm7, %xmm0
+.Lreduce_final_16_bytes:
+	# Reduce the 128-bit value M(x), stored in xmm7, to the final 16-bit CRC
 
-	#64b fold
-	pclmulqdq	$0x1, %xmm10, %xmm7
-	pslldq	$8   ,  %xmm0
-	pxor	%xmm0,  %xmm7
+	# Load 'x^48 * (x^48 mod G(x))' and 'x^48 * (x^80 mod G(x))'.
+	movdqa	.Lfinal_fold_consts(%rip), FOLD_CONSTS
 
-	#32b fold
+	# Fold the high 64 bits into the low 64 bits, while also multiplying by
+	# x^64.  This produces a 128-bit value congruent to x^64 * M(x) and
+	# whose low 48 bits are 0.
 	movdqa	%xmm7, %xmm0
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7 # high bits * x^48 * (x^80 mod G(x))
+	pslldq	$8, %xmm0
+	pxor	%xmm0, %xmm7			  # + low bits * x^64
 
-	pand	mask2(%rip), %xmm0
-
-	psrldq	$12, %xmm7
-	pclmulqdq	$0x10, %xmm10, %xmm7
-	pxor	%xmm0, %xmm7
-
-	#barrett reduction
-_barrett:
-	movdqa	rk7(%rip), %xmm10	# rk7 and rk8 in xmm10
+	# Fold the high 32 bits into the low 96 bits.  This produces a 96-bit
+	# value congruent to x^64 * M(x) and whose low 48 bits are 0.
 	movdqa	%xmm7, %xmm0
-	pclmulqdq	$0x01, %xmm10, %xmm7
-	pslldq	$4, %xmm7
-	pclmulqdq	$0x11, %xmm10, %xmm7
+	pand	.Lmask2(%rip), %xmm0		  # zero high 32 bits
+	psrldq	$12, %xmm7			  # extract high 32 bits
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm7 # high 32 bits * x^48 * (x^48 mod G(x))
+	pxor	%xmm0, %xmm7			  # + low bits
 
-	pslldq	$4, %xmm7
-	pxor	%xmm0, %xmm7
-	pextrd	$1, %xmm7, %eax
+.Lbarrett_reduction:
+	# Load G(x) and floor(x^48 / G(x)).
+	movdqa	.Lbarrett_reduction_consts(%rip), FOLD_CONSTS
 
-_cleanup:
-	# scale the result back to 16 bits
-	shr	$16, %eax
-	mov     %rcx, %rsp
+	# Use Barrett reduction to compute the final CRC value.
+	movdqa	%xmm7, %xmm0
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7 # high 32 bits * floor(x^48 / G(x))
+	psrlq	$32, %xmm7			  # /= x^32
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm7 # *= G(x)
+	psrlq	$48, %xmm0
+	pxor	%xmm7, %xmm0		     # + low 16 nonzero bits
+	# Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of xmm0.
+
+	pextrw	$0, %xmm0, %eax
+.Ldone:
+	mov	%rcx, %rsp
 	ret
 
-########################################################################
-
 .align 16
-_less_than_128:
+.Lless_than_256_bytes:
+	pxor	%xmm0, %xmm0
+	cmp	$16, len
+	jl	.Lless_than_16_bytes
 
-	# check if there is enough buffer to be able to fold 16B at a time
-	cmp	$32, arg3
-	jl	_less_than_32
-	movdqa  SHUF_MASK(%rip), %xmm11
+	# Checksumming a buffer of length 16...255 bytes
 
-	# now if there is, load the constants
-	movdqa	rk1(%rip), %xmm10	# rk1 and rk2 in xmm10
+	# Load the first 16 data bytes.
+	movdqu	(buf), %xmm7
+	pshufb	BSWAP_MASK, %xmm7
+	add	$16, buf
 
-	movd	arg1_low32, %xmm0	# get the initial crc value
-	pslldq	$12, %xmm0	# align it to its correct place
-	movdqu	(arg2), %xmm7	# load the plaintext
-	pshufb	%xmm11, %xmm7	# byte-reflect the plaintext
+	# XOR the first 16 data *bits* with the initial CRC value.
+	pinsrw	$7, init_crcl, %xmm0
 	pxor	%xmm0, %xmm7
 
-
-	# update the buffer pointer
-	add	$16, arg2
-
-	# update the counter. subtract 32 instead of 16 to save one
-	# instruction from the loop
-	sub	$32, arg3
-
-	jmp	_16B_reduction_loop
-
-
-.align 16
-_less_than_32:
-	# mov initial crc to the return value. this is necessary for
-	# zero-length buffers.
-	mov	arg1_low32, %eax
-	test	arg3, arg3
-	je	_cleanup
-
-	movdqa  SHUF_MASK(%rip), %xmm11
-
-	movd	arg1_low32, %xmm0	# get the initial crc value
-	pslldq	$12, %xmm0	# align it to its correct place
-
-	cmp	$16, arg3
-	je	_exact_16_left
-	jl	_less_than_16_left
-
-	movdqu	(arg2), %xmm7	# load the plaintext
-	pshufb	%xmm11, %xmm7	# byte-reflect the plaintext
-	pxor	%xmm0 , %xmm7	# xor the initial crc value
-	add	$16, arg2
-	sub	$16, arg3
-	movdqa	rk1(%rip), %xmm10	# rk1 and rk2 in xmm10
-	jmp	_get_last_two_xmms
-
+	movdqa	.Lfold_across_16_bytes_consts(%rip), FOLD_CONSTS
+	cmp	$16, len
+	je	.Lreduce_final_16_bytes		# len == 16
+	sub	$32, len
+	jge	.Lfold_16_bytes_loop		# 32 <= len <= 255
+	add	$16, len
+	jmp	.Lhandle_partial_segment	# 17 <= len <= 31
 
 .align 16
-_less_than_16_left:
-	# use stack space to load data less than 16 bytes, zero-out
-	# the 16B in memory first.
-
-	pxor	%xmm1, %xmm1
-	mov	%rsp, %r11
-	movdqa	%xmm1, (%r11)
-
-	cmp	$4, arg3
-	jl	_only_less_than_4
-
-	# backup the counter value
-	mov	arg3, %r9
-	cmp	$8, arg3
-	jl	_less_than_8_left
-
-	# load 8 Bytes
-	mov	(arg2), %rax
-	mov	%rax, (%r11)
-	add	$8, %r11
-	sub	$8, arg3
-	add	$8, arg2
-_less_than_8_left:
-
-	cmp	$4, arg3
-	jl	_less_than_4_left
-
-	# load 4 Bytes
-	mov	(arg2), %eax
-	mov	%eax, (%r11)
-	add	$4, %r11
-	sub	$4, arg3
-	add	$4, arg2
-_less_than_4_left:
-
-	cmp	$2, arg3
-	jl	_less_than_2_left
-
-	# load 2 Bytes
-	mov	(arg2), %ax
-	mov	%ax, (%r11)
-	add	$2, %r11
-	sub	$2, arg3
-	add	$2, arg2
-_less_than_2_left:
-	cmp     $1, arg3
-        jl      _zero_left
-
-	# load 1 Byte
-	mov	(arg2), %al
-	mov	%al, (%r11)
-_zero_left:
+.Lless_than_16_bytes:
+	cmp	$2, len
+	jl	.Lless_than_2_bytes
+
+	# Checksumming a buffer of length 2...15 bytes.  Copy the data into a
+	# 16-byte stack buffer, left-padding with zeroes.  Then reduce the
+	# resulting 16-byte value.
+
+	movdqa	%xmm0, (%rsp)
+	lea	16(%rsp), %r8
+	sub	len, %r8
+	mov	%r8, %r10
+
+	test	$8, len
+	jz	1f
+	movq	(buf), %r9
+	movq	%r9, (%r8)
+	add	$8, buf
+	add	$8, %r8
+1:
+	test	$4, len
+	jz	1f
+	movl	(buf), %r9d
+	movl	%r9d, (%r8)
+	add	$4, buf
+	add	$4, %r8
+1:
+	test	$2, len
+	jz	1f
+	movw	(buf), %r9w
+	movw	%r9w, (%r8)
+	add	$2, buf
+	add	$2, %r8
+1:
+	test	$1, len
+	jz	1f
+	movb	(buf), %r9b
+	movb	%r9b, (%r8)
+1:
+	# XOR the first 16 data *bits* with the initial CRC value.
+	rol	$0x8, init_crc
+	xor	init_crc, (%r10)
+
+	# Load the data and reduce it.
 	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7	# xor the initial crc value
-
-	# shl r9, 4
-	lea	pshufb_shf_table+16(%rip), %rax
-	sub	%r9, %rax
-	movdqu	(%rax), %xmm0
-	pxor	mask1(%rip), %xmm0
-
-	pshufb	%xmm0, %xmm7
-	jmp	_128_done
+	pshufb	BSWAP_MASK, %xmm7
+	jmp	.Lreduce_final_16_bytes
 
 .align 16
-_exact_16_left:
-	movdqu	(arg2), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	jmp	_128_done
-
-_only_less_than_4:
-	cmp	$3, arg3
-	jl	_only_less_than_3
-
-	# load 3 Bytes
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	mov	1(arg2), %al
-	mov	%al, 1(%r11)
-
-	mov	2(arg2), %al
-	mov	%al, 2(%r11)
-
-	movdqa	 (%rsp), %xmm7
-	pshufb	 %xmm11, %xmm7
-	pxor	 %xmm0 , %xmm7  # xor the initial crc value
-
-	psrldq	$5, %xmm7
-
-	jmp	_barrett
-_only_less_than_3:
-	cmp	$2, arg3
-	jl	_only_less_than_2
-
-	# load 2 Bytes
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	mov	1(arg2), %al
-	mov	%al, 1(%r11)
-
-	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	psrldq	$6, %xmm7
-
-	jmp	_barrett
-_only_less_than_2:
-
-	# load 1 Byte
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	psrldq	$7, %xmm7
-
-	jmp	_barrett
+.Lless_than_2_bytes:
+	movzwl	init_crc, %eax
+	test	len, len
+	je	.Ldone	# len = 0.  Nothing to do, just return init_crc.
+
+	# len = 1.  This is a special case because with only 1 data byte the
+	# 16-bit initial CRC value can't just be XOR'ed with the first 16 data
+	# bits as usual.  Instead, directly build the scaled value 'x^64 * M(x)'
+	# and skip to the Barrett reduction step.
+	xor	(buf), %ah
+	movd	%eax, %xmm7
+	pslldq	$7, %xmm7
+	jmp	.Lbarrett_reduction
 
 ENDPROC(crc_t10dif_pcl)
 
 .section	.rodata, "a", @progbits
 .align 16
-# precomputed constants
-# these constants are precomputed from the poly:
-# 0x8bb70000 (0x8bb7 scaled to 32 bits)
-# Q = 0x18BB70000
-# rk1 = 2^(32*3) mod Q << 32
-# rk2 = 2^(32*5) mod Q << 32
-# rk3 = 2^(32*15) mod Q << 32
-# rk4 = 2^(32*17) mod Q << 32
-# rk5 = 2^(32*3) mod Q << 32
-# rk6 = 2^(32*2) mod Q << 32
-# rk7 = floor(2^64/Q)
-# rk8 = Q
-rk1:
-.quad 0x2d56000000000000
-rk2:
-.quad 0x06df000000000000
-rk3:
-.quad 0x9d9d000000000000
-rk4:
-.quad 0x7cf5000000000000
-rk5:
-.quad 0x2d56000000000000
-rk6:
-.quad 0x1368000000000000
-rk7:
-.quad 0x00000001f65a57f8
-rk8:
-.quad 0x000000018bb70000
-
-rk9:
-.quad 0xceae000000000000
-rk10:
-.quad 0xbfd6000000000000
-rk11:
-.quad 0x1e16000000000000
-rk12:
-.quad 0x713c000000000000
-rk13:
-.quad 0xf7f9000000000000
-rk14:
-.quad 0x80a6000000000000
-rk15:
-.quad 0x044c000000000000
-rk16:
-.quad 0xe658000000000000
-rk17:
-.quad 0xad18000000000000
-rk18:
-.quad 0xa497000000000000
-rk19:
-.quad 0x6ee3000000000000
-rk20:
-.quad 0xe7b5000000000000
-
 
+# Fold constants precomputed from the polynomial 0x18bb7
+# G(x) = x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + x^0
+.Lfold_across_128_bytes_consts:
+	.quad		0x0000000000006123	# x^(8*128)	mod G(x)
+	.quad		0x0000000000002295	# x^(8*128+64)	mod G(x)
+.Lfold_across_64_bytes_consts:
+	.quad		0x0000000000001069	# x^(4*128)	mod G(x)
+	.quad		0x000000000000dd31	# x^(4*128+64)	mod G(x)
+.Lfold_across_32_bytes_consts:
+	.quad		0x000000000000857d	# x^(2*128)	mod G(x)
+	.quad		0x0000000000007acc	# x^(2*128+64)	mod G(x)
+.Lfold_across_16_bytes_consts:
+	.quad		0x000000000000a010	# x^(1*128)	mod G(x)
+	.quad		0x0000000000001faa	# x^(1*128+64)	mod G(x)
+.Lfinal_fold_consts:
+	.quad		0x1368000000000000	# x^48 * (x^48 mod G(x))
+	.quad		0x2d56000000000000	# x^48 * (x^80 mod G(x))
+.Lbarrett_reduction_consts:
+	.quad		0x0000000000018bb7	# G(x)
+	.quad		0x00000001f65a57f8	# floor(x^48 / G(x))
 
 .section	.rodata.cst16.mask1, "aM", @progbits, 16
 .align 16
-mask1:
-.octa 0x80808080808080808080808080808080
+.Lmask1:
+	.octa	0x80808080808080808080808080808080
 
 .section	.rodata.cst16.mask2, "aM", @progbits, 16
 .align 16
-mask2:
-.octa 0x00000000FFFFFFFFFFFFFFFFFFFFFFFF
+.Lmask2:
+	.octa	0x00000000FFFFFFFFFFFFFFFFFFFFFFFF
+
+.section	.rodata.cst16.bswap_mask, "aM", @progbits, 16
+.align 16
+.Lbswap_mask:
+	.octa	0x000102030405060708090A0B0C0D0E0F
 
-.section	.rodata.cst16.SHUF_MASK, "aM", @progbits, 16
+.section	.rodata.cst32.byteshift_table, "aM", @progbits, 32
 .align 16
-SHUF_MASK:
-.octa 0x000102030405060708090A0B0C0D0E0F
-
-.section	.rodata.cst32.pshufb_shf_table, "aM", @progbits, 32
-.align 32
-pshufb_shf_table:
-# use these values for shift constants for the pshufb instruction
-# different alignments result in values as shown:
-#	DDQ 0x008f8e8d8c8b8a898887868584838281 # shl 15 (16-1) / shr1
-#	DDQ 0x01008f8e8d8c8b8a8988878685848382 # shl 14 (16-3) / shr2
-#	DDQ 0x0201008f8e8d8c8b8a89888786858483 # shl 13 (16-4) / shr3
-#	DDQ 0x030201008f8e8d8c8b8a898887868584 # shl 12 (16-4) / shr4
-#	DDQ 0x04030201008f8e8d8c8b8a8988878685 # shl 11 (16-5) / shr5
-#	DDQ 0x0504030201008f8e8d8c8b8a89888786 # shl 10 (16-6) / shr6
-#	DDQ 0x060504030201008f8e8d8c8b8a898887 # shl 9  (16-7) / shr7
-#	DDQ 0x07060504030201008f8e8d8c8b8a8988 # shl 8  (16-8) / shr8
-#	DDQ 0x0807060504030201008f8e8d8c8b8a89 # shl 7  (16-9) / shr9
-#	DDQ 0x090807060504030201008f8e8d8c8b8a # shl 6  (16-10) / shr10
-#	DDQ 0x0a090807060504030201008f8e8d8c8b # shl 5  (16-11) / shr11
-#	DDQ 0x0b0a090807060504030201008f8e8d8c # shl 4  (16-12) / shr12
-#	DDQ 0x0c0b0a090807060504030201008f8e8d # shl 3  (16-13) / shr13
-#	DDQ 0x0d0c0b0a090807060504030201008f8e # shl 2  (16-14) / shr14
-#	DDQ 0x0e0d0c0b0a090807060504030201008f # shl 1  (16-15) / shr15
-.octa 0x8f8e8d8c8b8a89888786858483828100
-.octa 0x000e0d0c0b0a09080706050403020100
+# For 1 <= len <= 15, the 16-byte vector beginning at &byteshift_table[16 - len]
+# is the index vector to shift left by 'len' bytes, and is also {0x80, ...,
+# 0x80} XOR the index vector to shift right by '16 - len' bytes.
+.Lbyteshift_table:
+	.byte		 0x0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87
+	.byte		0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f
+	.byte		 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7
+	.byte		 0x8,  0x9,  0xa,  0xb,  0xc,  0xd,  0xe , 0x0
diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
index cd4df93225014..80cea60e9c6d6 100644
--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
+++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
@@ -33,8 +33,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/cpu_device_id.h>
 
-asmlinkage __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf,
-				size_t len);
+asmlinkage u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len);
 
 struct chksum_desc_ctx {
 	__u16 crc;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 1/3] crypto: x86/crct10dif-pcl - cleanup and optimizations
@ 2019-01-30  3:14   ` Eric Biggers
  0 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Tim Chen, linux-arm-kernel, Ard Biesheuvel

From: Eric Biggers <ebiggers@google.com>

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand partly because many of the comments, labels, and
macros are named incorrectly: the lengths mentioned are usually off by a
factor of two from the actual code.  Many other things are unnecessarily
convoluted as well, e.g. there are many more fold constants than
actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This patch cleans up the x86 version.  Note that I opted to retain the
support for buffers < 16 bytes, but I simplified it to be much shorter
and have fewer different cases.  Performance on 2 and 3-byte inputs
dropped slightly but the rest are actually faster now.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/crct10dif-pcl-asm_64.S  | 844 +++++++++---------------
 arch/x86/crypto/crct10dif-pclmul_glue.c |   3 +-
 2 files changed, 301 insertions(+), 546 deletions(-)

diff --git a/arch/x86/crypto/crct10dif-pcl-asm_64.S b/arch/x86/crypto/crct10dif-pcl-asm_64.S
index de04d3e98d8d3..e57afa35ac0f9 100644
--- a/arch/x86/crypto/crct10dif-pcl-asm_64.S
+++ b/arch/x86/crypto/crct10dif-pcl-asm_64.S
@@ -43,609 +43,365 @@
 # LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
 # NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-########################################################################
-#       Function API:
-#       UINT16 crc_t10dif_pcl(
-#               UINT16 init_crc, //initial CRC value, 16 bits
-#               const unsigned char *buf, //buffer pointer to calculate CRC on
-#               UINT64 len //buffer length in bytes (64-bit data)
-#       );
 #
 #       Reference paper titled "Fast CRC Computation for Generic
 #	Polynomials Using PCLMULQDQ Instruction"
 #       URL: http://www.intel.com/content/dam/www/public/us/en/documents
 #  /white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 #
-#
 
 #include <linux/linkage.h>
 
 .text
 
-#define        arg1 %rdi
-#define        arg2 %rsi
-#define        arg3 %rdx
-
-#define        arg1_low32 %edi
+#define		init_crc	%di
+#define		init_crcl	%edi
+#define		buf		%rsi
+#define		len		%rdx
+
+#define		FOLD_CONSTS	%xmm10
+#define		BSWAP_MASK	%xmm11
+
+# Fold reg1, reg2 into the next 32 data bytes, storing the result back into
+# reg1, reg2.
+.macro	fold_32_bytes	offset, reg1, reg2
+	movdqu	\offset(buf), %xmm9
+	movdqu	\offset+16(buf), %xmm12
+	pshufb	BSWAP_MASK, %xmm9
+	pshufb	BSWAP_MASK, %xmm12
+	movdqa	\reg1, %xmm8
+	movdqa	\reg2, %xmm13
+	pclmulqdq	$0x00, FOLD_CONSTS, \reg1
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm8
+	pclmulqdq	$0x00, FOLD_CONSTS, \reg2
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm13
+	pxor	%xmm9 , \reg1
+	xorps	%xmm8 , \reg1
+	pxor	%xmm12, \reg2
+	xorps	%xmm13, \reg2
+.endm
+
+# Fold src_reg into dst_reg.
+.macro	fold_16_bytes	src_reg, dst_reg
+	movdqa	\src_reg, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, \src_reg
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
+	pxor	%xmm8, \dst_reg
+	xorps	\src_reg, \dst_reg
+.endm
 
-ENTRY(crc_t10dif_pcl)
+#
+# u16 crc_t10dif_pcl(u16 init_crc, const *u8 buf, size_t len);
+#
 .align 16
-
-	# adjust the 16-bit initial_crc value, scale it to 32 bits
-	shl	$16, arg1_low32
-
-	# Allocate Stack Space
-	mov     %rsp, %rcx
-	sub	$16*2, %rsp
-	# align stack to 16 byte boundary
-	and     $~(0x10 - 1), %rsp
-
-	# check if smaller than 256
-	cmp	$256, arg3
-
-	# for sizes less than 128, we can't fold 64B at a time...
-	jl	_less_than_128
-
-
-	# load the initial crc value
-	movd	arg1_low32, %xmm10	# initial crc
-
-	# crc value does not need to be byte-reflected, but it needs
-	# to be moved to the high part of the register.
-	# because data will be byte-reflected and will align with
-	# initial crc at correct place.
-	pslldq	$12, %xmm10
-
-	movdqa  SHUF_MASK(%rip), %xmm11
-	# receive the initial 64B data, xor the initial crc value
-	movdqu	16*0(arg2), %xmm0
-	movdqu	16*1(arg2), %xmm1
-	movdqu	16*2(arg2), %xmm2
-	movdqu	16*3(arg2), %xmm3
-	movdqu	16*4(arg2), %xmm4
-	movdqu	16*5(arg2), %xmm5
-	movdqu	16*6(arg2), %xmm6
-	movdqu	16*7(arg2), %xmm7
-
-	pshufb	%xmm11, %xmm0
-	# XOR the initial_crc value
-	pxor	%xmm10, %xmm0
-	pshufb	%xmm11, %xmm1
-	pshufb	%xmm11, %xmm2
-	pshufb	%xmm11, %xmm3
-	pshufb	%xmm11, %xmm4
-	pshufb	%xmm11, %xmm5
-	pshufb	%xmm11, %xmm6
-	pshufb	%xmm11, %xmm7
-
-	movdqa	rk3(%rip), %xmm10	#xmm10 has rk3 and rk4
-					#imm value of pclmulqdq instruction
-					#will determine which constant to use
-
-	#################################################################
-	# we subtract 256 instead of 128 to save one instruction from the loop
-	sub	$256, arg3
-
-	# at this section of the code, there is 64*x+y (0<=y<64) bytes of
-	# buffer. The _fold_64_B_loop will fold 64B at a time
-	# until we have 64+y Bytes of buffer
-
-
-	# fold 64B at a time. This section of the code folds 4 xmm
-	# registers in parallel
-_fold_64_B_loop:
-
-	# update the buffer pointer
-	add	$128, arg2		#    buf += 64#
-
-	movdqu	16*0(arg2), %xmm9
-	movdqu	16*1(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm0, %xmm8
-	movdqa	%xmm1, %xmm13
-	pclmulqdq	$0x0 , %xmm10, %xmm0
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0 , %xmm10, %xmm1
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm0
-	xorps	%xmm8 , %xmm0
-	pxor	%xmm12, %xmm1
-	xorps	%xmm13, %xmm1
-
-	movdqu	16*2(arg2), %xmm9
-	movdqu	16*3(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm2, %xmm8
-	movdqa	%xmm3, %xmm13
-	pclmulqdq	$0x0, %xmm10, %xmm2
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0, %xmm10, %xmm3
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm2
-	xorps	%xmm8 , %xmm2
-	pxor	%xmm12, %xmm3
-	xorps	%xmm13, %xmm3
-
-	movdqu	16*4(arg2), %xmm9
-	movdqu	16*5(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm4, %xmm8
-	movdqa	%xmm5, %xmm13
-	pclmulqdq	$0x0,  %xmm10, %xmm4
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0,  %xmm10, %xmm5
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 ,  %xmm4
-	xorps	%xmm8 ,  %xmm4
-	pxor	%xmm12,  %xmm5
-	xorps	%xmm13,  %xmm5
-
-	movdqu	16*6(arg2), %xmm9
-	movdqu	16*7(arg2), %xmm12
-	pshufb	%xmm11, %xmm9
-	pshufb	%xmm11, %xmm12
-	movdqa	%xmm6 , %xmm8
-	movdqa	%xmm7 , %xmm13
-	pclmulqdq	$0x0 , %xmm10, %xmm6
-	pclmulqdq	$0x11, %xmm10, %xmm8
-	pclmulqdq	$0x0 , %xmm10, %xmm7
-	pclmulqdq	$0x11, %xmm10, %xmm13
-	pxor	%xmm9 , %xmm6
-	xorps	%xmm8 , %xmm6
-	pxor	%xmm12, %xmm7
-	xorps	%xmm13, %xmm7
-
-	sub	$128, arg3
-
-	# check if there is another 64B in the buffer to be able to fold
-	jge	_fold_64_B_loop
-	##################################################################
-
-
-	add	$128, arg2
-	# at this point, the buffer pointer is pointing at the last y Bytes
-	# of the buffer the 64B of folded data is in 4 of the xmm
-	# registers: xmm0, xmm1, xmm2, xmm3
-
-
-	# fold the 8 xmm registers to 1 xmm register with different constants
-
-	movdqa	rk9(%rip), %xmm10
-	movdqa	%xmm0, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm0
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm0, %xmm7
-
-	movdqa	rk11(%rip), %xmm10
-	movdqa	%xmm1, %xmm8
-	pclmulqdq	 $0x11, %xmm10, %xmm1
-	pclmulqdq	 $0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm1, %xmm7
-
-	movdqa	rk13(%rip), %xmm10
-	movdqa	%xmm2, %xmm8
-	pclmulqdq	 $0x11, %xmm10, %xmm2
-	pclmulqdq	 $0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm2, %xmm7
-
-	movdqa	rk15(%rip), %xmm10
-	movdqa	%xmm3, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm3
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm3, %xmm7
-
-	movdqa	rk17(%rip), %xmm10
-	movdqa	%xmm4, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm4
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm4, %xmm7
-
-	movdqa	rk19(%rip), %xmm10
-	movdqa	%xmm5, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm5
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	xorps	%xmm5, %xmm7
-
-	movdqa	rk1(%rip), %xmm10	#xmm10 has rk1 and rk2
-					#imm value of pclmulqdq instruction
-					#will determine which constant to use
-	movdqa	%xmm6, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm6
-	pclmulqdq	$0x0 , %xmm10, %xmm8
-	pxor	%xmm8, %xmm7
-	pxor	%xmm6, %xmm7
-
-
-	# instead of 64, we add 48 to the loop counter to save 1 instruction
-	# from the loop instead of a cmp instruction, we use the negative
-	# flag with the jl instruction
-	add	$128-16, arg3
-	jl	_final_reduction_for_128
-
-	# now we have 16+y bytes left to reduce. 16 Bytes is in register xmm7
-	# and the rest is in memory. We can fold 16 bytes at a time if y>=16
-	# continue folding 16B at a time
-
-_16B_reduction_loop:
+ENTRY(crc_t10dif_pcl)
+	# Allocate a 16-byte aligned 16-byte stack buffer.
+	mov	%rsp, %rcx
+	sub	$16, %rsp
+	and	$~15, %rsp
+
+	movdqa	.Lbswap_mask(%rip), BSWAP_MASK
+
+	# For sizes less than 256 bytes, we can't fold 128 bytes at a time.
+	cmp	$256, len
+	jl	.Lless_than_256_bytes
+
+	# Load the first 128 data bytes.  Byte swapping is necessary to make the
+	# bit order match the polynomial coefficient order.
+	movdqu	16*0(buf), %xmm0
+	movdqu	16*1(buf), %xmm1
+	movdqu	16*2(buf), %xmm2
+	movdqu	16*3(buf), %xmm3
+	movdqu	16*4(buf), %xmm4
+	movdqu	16*5(buf), %xmm5
+	movdqu	16*6(buf), %xmm6
+	movdqu	16*7(buf), %xmm7
+	add	$128, buf
+	pshufb	BSWAP_MASK, %xmm0
+	pshufb	BSWAP_MASK, %xmm1
+	pshufb	BSWAP_MASK, %xmm2
+	pshufb	BSWAP_MASK, %xmm3
+	pshufb	BSWAP_MASK, %xmm4
+	pshufb	BSWAP_MASK, %xmm5
+	pshufb	BSWAP_MASK, %xmm6
+	pshufb	BSWAP_MASK, %xmm7
+
+	# XOR the first 16 data *bits* with the initial CRC value.
+	pxor	%xmm8, %xmm8
+	pinsrw	$7, init_crcl, %xmm8
+	pxor	%xmm8, %xmm0
+
+	movdqa	.Lfold_across_128_bytes_consts(%rip), FOLD_CONSTS
+
+	# Subtract 128 for the 128 data bytes just consumed.  Subtract another
+	# 128 to simplify the termination condition of the following loop.
+	sub	$256, len
+
+	# While >= 128 data bytes remain (not counting xmm0-7), fold the 128
+	# bytes xmm0-7 into them, storing the result back into xmm0-7.
+.Lfold_128_bytes_loop:
+	fold_32_bytes	0, %xmm0, %xmm1
+	fold_32_bytes	32, %xmm2, %xmm3
+	fold_32_bytes	64, %xmm4, %xmm5
+	fold_32_bytes	96, %xmm6, %xmm7
+	add	$128, buf
+	sub	$128, len
+	jge	.Lfold_128_bytes_loop
+
+	# Now fold the 112 bytes in xmm0-xmm6 into the 16 bytes in xmm7.
+
+	# Fold across 64 bytes.
+	movdqa	.Lfold_across_64_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm0, %xmm4
+	fold_16_bytes	%xmm1, %xmm5
+	fold_16_bytes	%xmm2, %xmm6
+	fold_16_bytes	%xmm3, %xmm7
+	# Fold across 32 bytes.
+	movdqa	.Lfold_across_32_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm4, %xmm6
+	fold_16_bytes	%xmm5, %xmm7
+	# Fold across 16 bytes.
+	movdqa	.Lfold_across_16_bytes_consts(%rip), FOLD_CONSTS
+	fold_16_bytes	%xmm6, %xmm7
+
+	# Add 128 to get the correct number of data bytes remaining in 0...127
+	# (not counting xmm7), following the previous extra subtraction by 128.
+	# Then subtract 16 to simplify the termination condition of the
+	# following loop.
+	add	$128-16, len
+
+	# While >= 16 data bytes remain (not counting xmm7), fold the 16 bytes
+	# xmm7 into them, storing the result back into xmm7.
+	jl	.Lfold_16_bytes_loop_done
+.Lfold_16_bytes_loop:
 	movdqa	%xmm7, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm7
-	pclmulqdq	$0x0 , %xmm10, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
 	pxor	%xmm8, %xmm7
-	movdqu	(arg2), %xmm0
-	pshufb	%xmm11, %xmm0
+	movdqu	(buf), %xmm0
+	pshufb	BSWAP_MASK, %xmm0
 	pxor	%xmm0 , %xmm7
-	add	$16, arg2
-	sub	$16, arg3
-	# instead of a cmp instruction, we utilize the flags with the
-	# jge instruction equivalent of: cmp arg3, 16-16
-	# check if there is any more 16B in the buffer to be able to fold
-	jge	_16B_reduction_loop
-
-	#now we have 16+z bytes left to reduce, where 0<= z < 16.
-	#first, we reduce the data in the xmm7 register
-
-
-_final_reduction_for_128:
-	# check if any more data to fold. If not, compute the CRC of
-	# the final 128 bits
-	add	$16, arg3
-	je	_128_done
-
-	# here we are getting data that is less than 16 bytes.
-	# since we know that there was data before the pointer, we can
-	# offset the input pointer before the actual point, to receive
-	# exactly 16 bytes. after that the registers need to be adjusted.
-_get_last_two_xmms:
+	add	$16, buf
+	sub	$16, len
+	jge	.Lfold_16_bytes_loop
+
+.Lfold_16_bytes_loop_done:
+	# Add 16 to get the correct number of data bytes remaining in 0...15
+	# (not counting xmm7), following the previous extra subtraction by 16.
+	add	$16, len
+	je	.Lreduce_final_16_bytes
+
+.Lhandle_partial_segment:
+	# Reduce the last '16 + len' bytes where 1 <= len <= 15 and the first 16
+	# bytes are in xmm7 and the rest are the remaining data in 'buf'.  To do
+	# this without needing a fold constant for each possible 'len', redivide
+	# the bytes into a first chunk of 'len' bytes and a second chunk of 16
+	# bytes, then fold the first chunk into the second.
+
 	movdqa	%xmm7, %xmm2
 
-	movdqu	-16(arg2, arg3), %xmm1
-	pshufb	%xmm11, %xmm1
+	# xmm1 = last 16 original data bytes
+	movdqu	-16(buf, len), %xmm1
+	pshufb	BSWAP_MASK, %xmm1
 
-	# get rid of the extra data that was loaded before
-	# load the shift constant
-	lea	pshufb_shf_table+16(%rip), %rax
-	sub	arg3, %rax
+	# xmm2 = high order part of second chunk: xmm7 left-shifted by 'len' bytes.
+	lea	.Lbyteshift_table+16(%rip), %rax
+	sub	len, %rax
 	movdqu	(%rax), %xmm0
-
-	# shift xmm2 to the left by arg3 bytes
 	pshufb	%xmm0, %xmm2
 
-	# shift xmm7 to the right by 16-arg3 bytes
-	pxor	mask1(%rip), %xmm0
+	# xmm7 = first chunk: xmm7 right-shifted by '16-len' bytes.
+	pxor	.Lmask1(%rip), %xmm0
 	pshufb	%xmm0, %xmm7
+
+	# xmm1 = second chunk: 'len' bytes from xmm1 (low-order bytes),
+	# then '16-len' bytes from xmm2 (high-order bytes).
 	pblendvb	%xmm2, %xmm1	#xmm0 is implicit
 
-	# fold 16 Bytes
-	movdqa	%xmm1, %xmm2
+	# Fold the first chunk into the second chunk, storing the result in xmm7.
 	movdqa	%xmm7, %xmm8
-	pclmulqdq	$0x11, %xmm10, %xmm7
-	pclmulqdq	$0x0 , %xmm10, %xmm8
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm8
 	pxor	%xmm8, %xmm7
-	pxor	%xmm2, %xmm7
+	pxor	%xmm1, %xmm7
 
-_128_done:
-	# compute crc of a 128-bit value
-	movdqa	rk5(%rip), %xmm10	# rk5 and rk6 in xmm10
-	movdqa	%xmm7, %xmm0
+.Lreduce_final_16_bytes:
+	# Reduce the 128-bit value M(x), stored in xmm7, to the final 16-bit CRC
 
-	#64b fold
-	pclmulqdq	$0x1, %xmm10, %xmm7
-	pslldq	$8   ,  %xmm0
-	pxor	%xmm0,  %xmm7
+	# Load 'x^48 * (x^48 mod G(x))' and 'x^48 * (x^80 mod G(x))'.
+	movdqa	.Lfinal_fold_consts(%rip), FOLD_CONSTS
 
-	#32b fold
+	# Fold the high 64 bits into the low 64 bits, while also multiplying by
+	# x^64.  This produces a 128-bit value congruent to x^64 * M(x) and
+	# whose low 48 bits are 0.
 	movdqa	%xmm7, %xmm0
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7 # high bits * x^48 * (x^80 mod G(x))
+	pslldq	$8, %xmm0
+	pxor	%xmm0, %xmm7			  # + low bits * x^64
 
-	pand	mask2(%rip), %xmm0
-
-	psrldq	$12, %xmm7
-	pclmulqdq	$0x10, %xmm10, %xmm7
-	pxor	%xmm0, %xmm7
-
-	#barrett reduction
-_barrett:
-	movdqa	rk7(%rip), %xmm10	# rk7 and rk8 in xmm10
+	# Fold the high 32 bits into the low 96 bits.  This produces a 96-bit
+	# value congruent to x^64 * M(x) and whose low 48 bits are 0.
 	movdqa	%xmm7, %xmm0
-	pclmulqdq	$0x01, %xmm10, %xmm7
-	pslldq	$4, %xmm7
-	pclmulqdq	$0x11, %xmm10, %xmm7
+	pand	.Lmask2(%rip), %xmm0		  # zero high 32 bits
+	psrldq	$12, %xmm7			  # extract high 32 bits
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm7 # high 32 bits * x^48 * (x^48 mod G(x))
+	pxor	%xmm0, %xmm7			  # + low bits
 
-	pslldq	$4, %xmm7
-	pxor	%xmm0, %xmm7
-	pextrd	$1, %xmm7, %eax
+.Lbarrett_reduction:
+	# Load G(x) and floor(x^48 / G(x)).
+	movdqa	.Lbarrett_reduction_consts(%rip), FOLD_CONSTS
 
-_cleanup:
-	# scale the result back to 16 bits
-	shr	$16, %eax
-	mov     %rcx, %rsp
+	# Use Barrett reduction to compute the final CRC value.
+	movdqa	%xmm7, %xmm0
+	pclmulqdq	$0x11, FOLD_CONSTS, %xmm7 # high 32 bits * floor(x^48 / G(x))
+	psrlq	$32, %xmm7			  # /= x^32
+	pclmulqdq	$0x00, FOLD_CONSTS, %xmm7 # *= G(x)
+	psrlq	$48, %xmm0
+	pxor	%xmm7, %xmm0		     # + low 16 nonzero bits
+	# Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of xmm0.
+
+	pextrw	$0, %xmm0, %eax
+.Ldone:
+	mov	%rcx, %rsp
 	ret
 
-########################################################################
-
 .align 16
-_less_than_128:
+.Lless_than_256_bytes:
+	pxor	%xmm0, %xmm0
+	cmp	$16, len
+	jl	.Lless_than_16_bytes
 
-	# check if there is enough buffer to be able to fold 16B at a time
-	cmp	$32, arg3
-	jl	_less_than_32
-	movdqa  SHUF_MASK(%rip), %xmm11
+	# Checksumming a buffer of length 16...255 bytes
 
-	# now if there is, load the constants
-	movdqa	rk1(%rip), %xmm10	# rk1 and rk2 in xmm10
+	# Load the first 16 data bytes.
+	movdqu	(buf), %xmm7
+	pshufb	BSWAP_MASK, %xmm7
+	add	$16, buf
 
-	movd	arg1_low32, %xmm0	# get the initial crc value
-	pslldq	$12, %xmm0	# align it to its correct place
-	movdqu	(arg2), %xmm7	# load the plaintext
-	pshufb	%xmm11, %xmm7	# byte-reflect the plaintext
+	# XOR the first 16 data *bits* with the initial CRC value.
+	pinsrw	$7, init_crcl, %xmm0
 	pxor	%xmm0, %xmm7
 
-
-	# update the buffer pointer
-	add	$16, arg2
-
-	# update the counter. subtract 32 instead of 16 to save one
-	# instruction from the loop
-	sub	$32, arg3
-
-	jmp	_16B_reduction_loop
-
-
-.align 16
-_less_than_32:
-	# mov initial crc to the return value. this is necessary for
-	# zero-length buffers.
-	mov	arg1_low32, %eax
-	test	arg3, arg3
-	je	_cleanup
-
-	movdqa  SHUF_MASK(%rip), %xmm11
-
-	movd	arg1_low32, %xmm0	# get the initial crc value
-	pslldq	$12, %xmm0	# align it to its correct place
-
-	cmp	$16, arg3
-	je	_exact_16_left
-	jl	_less_than_16_left
-
-	movdqu	(arg2), %xmm7	# load the plaintext
-	pshufb	%xmm11, %xmm7	# byte-reflect the plaintext
-	pxor	%xmm0 , %xmm7	# xor the initial crc value
-	add	$16, arg2
-	sub	$16, arg3
-	movdqa	rk1(%rip), %xmm10	# rk1 and rk2 in xmm10
-	jmp	_get_last_two_xmms
-
+	movdqa	.Lfold_across_16_bytes_consts(%rip), FOLD_CONSTS
+	cmp	$16, len
+	je	.Lreduce_final_16_bytes		# len == 16
+	sub	$32, len
+	jge	.Lfold_16_bytes_loop		# 32 <= len <= 255
+	add	$16, len
+	jmp	.Lhandle_partial_segment	# 17 <= len <= 31
 
 .align 16
-_less_than_16_left:
-	# use stack space to load data less than 16 bytes, zero-out
-	# the 16B in memory first.
-
-	pxor	%xmm1, %xmm1
-	mov	%rsp, %r11
-	movdqa	%xmm1, (%r11)
-
-	cmp	$4, arg3
-	jl	_only_less_than_4
-
-	# backup the counter value
-	mov	arg3, %r9
-	cmp	$8, arg3
-	jl	_less_than_8_left
-
-	# load 8 Bytes
-	mov	(arg2), %rax
-	mov	%rax, (%r11)
-	add	$8, %r11
-	sub	$8, arg3
-	add	$8, arg2
-_less_than_8_left:
-
-	cmp	$4, arg3
-	jl	_less_than_4_left
-
-	# load 4 Bytes
-	mov	(arg2), %eax
-	mov	%eax, (%r11)
-	add	$4, %r11
-	sub	$4, arg3
-	add	$4, arg2
-_less_than_4_left:
-
-	cmp	$2, arg3
-	jl	_less_than_2_left
-
-	# load 2 Bytes
-	mov	(arg2), %ax
-	mov	%ax, (%r11)
-	add	$2, %r11
-	sub	$2, arg3
-	add	$2, arg2
-_less_than_2_left:
-	cmp     $1, arg3
-        jl      _zero_left
-
-	# load 1 Byte
-	mov	(arg2), %al
-	mov	%al, (%r11)
-_zero_left:
+.Lless_than_16_bytes:
+	cmp	$2, len
+	jl	.Lless_than_2_bytes
+
+	# Checksumming a buffer of length 2...15 bytes.  Copy the data into a
+	# 16-byte stack buffer, left-padding with zeroes.  Then reduce the
+	# resulting 16-byte value.
+
+	movdqa	%xmm0, (%rsp)
+	lea	16(%rsp), %r8
+	sub	len, %r8
+	mov	%r8, %r10
+
+	test	$8, len
+	jz	1f
+	movq	(buf), %r9
+	movq	%r9, (%r8)
+	add	$8, buf
+	add	$8, %r8
+1:
+	test	$4, len
+	jz	1f
+	movl	(buf), %r9d
+	movl	%r9d, (%r8)
+	add	$4, buf
+	add	$4, %r8
+1:
+	test	$2, len
+	jz	1f
+	movw	(buf), %r9w
+	movw	%r9w, (%r8)
+	add	$2, buf
+	add	$2, %r8
+1:
+	test	$1, len
+	jz	1f
+	movb	(buf), %r9b
+	movb	%r9b, (%r8)
+1:
+	# XOR the first 16 data *bits* with the initial CRC value.
+	rol	$0x8, init_crc
+	xor	init_crc, (%r10)
+
+	# Load the data and reduce it.
 	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7	# xor the initial crc value
-
-	# shl r9, 4
-	lea	pshufb_shf_table+16(%rip), %rax
-	sub	%r9, %rax
-	movdqu	(%rax), %xmm0
-	pxor	mask1(%rip), %xmm0
-
-	pshufb	%xmm0, %xmm7
-	jmp	_128_done
+	pshufb	BSWAP_MASK, %xmm7
+	jmp	.Lreduce_final_16_bytes
 
 .align 16
-_exact_16_left:
-	movdqu	(arg2), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	jmp	_128_done
-
-_only_less_than_4:
-	cmp	$3, arg3
-	jl	_only_less_than_3
-
-	# load 3 Bytes
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	mov	1(arg2), %al
-	mov	%al, 1(%r11)
-
-	mov	2(arg2), %al
-	mov	%al, 2(%r11)
-
-	movdqa	 (%rsp), %xmm7
-	pshufb	 %xmm11, %xmm7
-	pxor	 %xmm0 , %xmm7  # xor the initial crc value
-
-	psrldq	$5, %xmm7
-
-	jmp	_barrett
-_only_less_than_3:
-	cmp	$2, arg3
-	jl	_only_less_than_2
-
-	# load 2 Bytes
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	mov	1(arg2), %al
-	mov	%al, 1(%r11)
-
-	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	psrldq	$6, %xmm7
-
-	jmp	_barrett
-_only_less_than_2:
-
-	# load 1 Byte
-	mov	(arg2), %al
-	mov	%al, (%r11)
-
-	movdqa	(%rsp), %xmm7
-	pshufb	%xmm11, %xmm7
-	pxor	%xmm0 , %xmm7   # xor the initial crc value
-
-	psrldq	$7, %xmm7
-
-	jmp	_barrett
+.Lless_than_2_bytes:
+	movzwl	init_crc, %eax
+	test	len, len
+	je	.Ldone	# len = 0.  Nothing to do, just return init_crc.
+
+	# len = 1.  This is a special case because with only 1 data byte the
+	# 16-bit initial CRC value can't just be XOR'ed with the first 16 data
+	# bits as usual.  Instead, directly build the scaled value 'x^64 * M(x)'
+	# and skip to the Barrett reduction step.
+	xor	(buf), %ah
+	movd	%eax, %xmm7
+	pslldq	$7, %xmm7
+	jmp	.Lbarrett_reduction
 
 ENDPROC(crc_t10dif_pcl)
 
 .section	.rodata, "a", @progbits
 .align 16
-# precomputed constants
-# these constants are precomputed from the poly:
-# 0x8bb70000 (0x8bb7 scaled to 32 bits)
-# Q = 0x18BB70000
-# rk1 = 2^(32*3) mod Q << 32
-# rk2 = 2^(32*5) mod Q << 32
-# rk3 = 2^(32*15) mod Q << 32
-# rk4 = 2^(32*17) mod Q << 32
-# rk5 = 2^(32*3) mod Q << 32
-# rk6 = 2^(32*2) mod Q << 32
-# rk7 = floor(2^64/Q)
-# rk8 = Q
-rk1:
-.quad 0x2d56000000000000
-rk2:
-.quad 0x06df000000000000
-rk3:
-.quad 0x9d9d000000000000
-rk4:
-.quad 0x7cf5000000000000
-rk5:
-.quad 0x2d56000000000000
-rk6:
-.quad 0x1368000000000000
-rk7:
-.quad 0x00000001f65a57f8
-rk8:
-.quad 0x000000018bb70000
-
-rk9:
-.quad 0xceae000000000000
-rk10:
-.quad 0xbfd6000000000000
-rk11:
-.quad 0x1e16000000000000
-rk12:
-.quad 0x713c000000000000
-rk13:
-.quad 0xf7f9000000000000
-rk14:
-.quad 0x80a6000000000000
-rk15:
-.quad 0x044c000000000000
-rk16:
-.quad 0xe658000000000000
-rk17:
-.quad 0xad18000000000000
-rk18:
-.quad 0xa497000000000000
-rk19:
-.quad 0x6ee3000000000000
-rk20:
-.quad 0xe7b5000000000000
-
 
+# Fold constants precomputed from the polynomial 0x18bb7
+# G(x) = x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + x^0
+.Lfold_across_128_bytes_consts:
+	.quad		0x0000000000006123	# x^(8*128)	mod G(x)
+	.quad		0x0000000000002295	# x^(8*128+64)	mod G(x)
+.Lfold_across_64_bytes_consts:
+	.quad		0x0000000000001069	# x^(4*128)	mod G(x)
+	.quad		0x000000000000dd31	# x^(4*128+64)	mod G(x)
+.Lfold_across_32_bytes_consts:
+	.quad		0x000000000000857d	# x^(2*128)	mod G(x)
+	.quad		0x0000000000007acc	# x^(2*128+64)	mod G(x)
+.Lfold_across_16_bytes_consts:
+	.quad		0x000000000000a010	# x^(1*128)	mod G(x)
+	.quad		0x0000000000001faa	# x^(1*128+64)	mod G(x)
+.Lfinal_fold_consts:
+	.quad		0x1368000000000000	# x^48 * (x^48 mod G(x))
+	.quad		0x2d56000000000000	# x^48 * (x^80 mod G(x))
+.Lbarrett_reduction_consts:
+	.quad		0x0000000000018bb7	# G(x)
+	.quad		0x00000001f65a57f8	# floor(x^48 / G(x))
 
 .section	.rodata.cst16.mask1, "aM", @progbits, 16
 .align 16
-mask1:
-.octa 0x80808080808080808080808080808080
+.Lmask1:
+	.octa	0x80808080808080808080808080808080
 
 .section	.rodata.cst16.mask2, "aM", @progbits, 16
 .align 16
-mask2:
-.octa 0x00000000FFFFFFFFFFFFFFFFFFFFFFFF
+.Lmask2:
+	.octa	0x00000000FFFFFFFFFFFFFFFFFFFFFFFF
+
+.section	.rodata.cst16.bswap_mask, "aM", @progbits, 16
+.align 16
+.Lbswap_mask:
+	.octa	0x000102030405060708090A0B0C0D0E0F
 
-.section	.rodata.cst16.SHUF_MASK, "aM", @progbits, 16
+.section	.rodata.cst32.byteshift_table, "aM", @progbits, 32
 .align 16
-SHUF_MASK:
-.octa 0x000102030405060708090A0B0C0D0E0F
-
-.section	.rodata.cst32.pshufb_shf_table, "aM", @progbits, 32
-.align 32
-pshufb_shf_table:
-# use these values for shift constants for the pshufb instruction
-# different alignments result in values as shown:
-#	DDQ 0x008f8e8d8c8b8a898887868584838281 # shl 15 (16-1) / shr1
-#	DDQ 0x01008f8e8d8c8b8a8988878685848382 # shl 14 (16-3) / shr2
-#	DDQ 0x0201008f8e8d8c8b8a89888786858483 # shl 13 (16-4) / shr3
-#	DDQ 0x030201008f8e8d8c8b8a898887868584 # shl 12 (16-4) / shr4
-#	DDQ 0x04030201008f8e8d8c8b8a8988878685 # shl 11 (16-5) / shr5
-#	DDQ 0x0504030201008f8e8d8c8b8a89888786 # shl 10 (16-6) / shr6
-#	DDQ 0x060504030201008f8e8d8c8b8a898887 # shl 9  (16-7) / shr7
-#	DDQ 0x07060504030201008f8e8d8c8b8a8988 # shl 8  (16-8) / shr8
-#	DDQ 0x0807060504030201008f8e8d8c8b8a89 # shl 7  (16-9) / shr9
-#	DDQ 0x090807060504030201008f8e8d8c8b8a # shl 6  (16-10) / shr10
-#	DDQ 0x0a090807060504030201008f8e8d8c8b # shl 5  (16-11) / shr11
-#	DDQ 0x0b0a090807060504030201008f8e8d8c # shl 4  (16-12) / shr12
-#	DDQ 0x0c0b0a090807060504030201008f8e8d # shl 3  (16-13) / shr13
-#	DDQ 0x0d0c0b0a090807060504030201008f8e # shl 2  (16-14) / shr14
-#	DDQ 0x0e0d0c0b0a090807060504030201008f # shl 1  (16-15) / shr15
-.octa 0x8f8e8d8c8b8a89888786858483828100
-.octa 0x000e0d0c0b0a09080706050403020100
+# For 1 <= len <= 15, the 16-byte vector beginning at &byteshift_table[16 - len]
+# is the index vector to shift left by 'len' bytes, and is also {0x80, ...,
+# 0x80} XOR the index vector to shift right by '16 - len' bytes.
+.Lbyteshift_table:
+	.byte		 0x0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87
+	.byte		0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f
+	.byte		 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7
+	.byte		 0x8,  0x9,  0xa,  0xb,  0xc,  0xd,  0xe , 0x0
diff --git a/arch/x86/crypto/crct10dif-pclmul_glue.c b/arch/x86/crypto/crct10dif-pclmul_glue.c
index cd4df93225014..80cea60e9c6d6 100644
--- a/arch/x86/crypto/crct10dif-pclmul_glue.c
+++ b/arch/x86/crypto/crct10dif-pclmul_glue.c
@@ -33,8 +33,7 @@
 #include <asm/cpufeatures.h>
 #include <asm/cpu_device_id.h>
 
-asmlinkage __u16 crc_t10dif_pcl(__u16 crc, const unsigned char *buf,
-				size_t len);
+asmlinkage u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len);
 
 struct chksum_desc_ctx {
 	__u16 crc;
-- 
2.20.1
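
For reference, the fold and Barrett constants in the new .rodata table above can
be re-derived with a short standalone C sketch like the one below.  Only the
polynomial 0x18bb7 and the exponents are taken from the patch; the helper
functions and names are illustrative rather than kernel code.  The printed
values should line up with the commented .quad entries (the two
.Lfinal_fold_consts values are stored shifted left by 48 bits).

#include <stdint.h>
#include <stdio.h>

#define G 0x18bb7u	/* G(x) = x^16 + x^15 + x^11 + ... + 1, degree 16 */

/* x^n mod G(x) over GF(2): multiply by x one bit at a time, reducing. */
static uint32_t xpow_mod_g(unsigned int n)
{
	uint32_t r = 1;

	while (n--) {
		r <<= 1;
		if (r & 0x10000)	/* degree reached 16: subtract G(x) */
			r ^= G;
	}
	return r;
}

/* floor(x^48 / G(x)): polynomial long division of x^48 by G(x). */
static uint64_t x48_div_g(void)
{
	uint64_t r = 1ULL << 48, q = 0;
	int i;

	for (i = 48; i >= 16; i--) {
		if (r & (1ULL << i)) {
			q |= 1ULL << (i - 16);
			r ^= (uint64_t)G << (i - 16);
		}
	}
	return q;
}

int main(void)
{
	unsigned int n;

	for (n = 8; n >= 1; n /= 2)
		printf("x^(%u*128) mod G = 0x%04x, x^(%u*128+64) mod G = 0x%04x\n",
		       n, xpow_mod_g(n * 128), n, xpow_mod_g(n * 128 + 64));

	/* .Lfinal_fold_consts stores these two shifted left by 48 bits. */
	printf("x^48 mod G = 0x%04x, x^80 mod G = 0x%04x\n",
	       xpow_mod_g(48), xpow_mod_g(80));

	printf("floor(x^48 / G) = 0x%llx\n", (unsigned long long)x48_div_g());
	return 0;
}

Building and running this with any C compiler (e.g. "cc consts.c && ./a.out")
should reproduce the values in the table.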



^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 2/3] crypto: arm/crct10dif-ce - cleanup and optimizations
  2019-01-30  3:14 ` Eric Biggers
@ 2019-01-30  3:14   ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Ard Biesheuvel, Tim Chen, linux-arm-kernel

From: Eric Biggers <ebiggers@google.com>

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand partly because many of the comments, labels, and
macros are named incorrectly: the lengths mentioned are usually off by a
factor of two from the actual code.  Many other things are unnecessarily
convoluted as well, e.g. there are many more fold constants than
actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This patch cleans up the arm version.

(Also moved the constants to .rodata as suggested by Ard Biesheuvel.)

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm/crypto/crct10dif-ce-core.S | 552 +++++++++++++---------------
 arch/arm/crypto/crct10dif-ce-glue.c |   2 +-
 2 files changed, 260 insertions(+), 294 deletions(-)

diff --git a/arch/arm/crypto/crct10dif-ce-core.S b/arch/arm/crypto/crct10dif-ce-core.S
index d058fad423c2f..b101025757218 100644
--- a/arch/arm/crypto/crct10dif-ce-core.S
+++ b/arch/arm/crypto/crct10dif-ce-core.S
@@ -2,12 +2,14 @@
 // Accelerated CRC-T10DIF using ARM NEON and Crypto Extensions instructions
 //
 // Copyright (C) 2016 Linaro Ltd <ard.biesheuvel@linaro.org>
+// Copyright (C) 2019 Google LLC <ebiggers@google.com>
 //
 // This program is free software; you can redistribute it and/or modify
 // it under the terms of the GNU General Public License version 2 as
 // published by the Free Software Foundation.
 //
 
+// Derived from the x86 version:
 //
 // Implement fast CRC-T10DIF computation with SSE and PCLMULQDQ instructions
 //
@@ -54,19 +56,11 @@
 // NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 // SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-//       Function API:
-//       UINT16 crc_t10dif_pcl(
-//               UINT16 init_crc, //initial CRC value, 16 bits
-//               const unsigned char *buf, //buffer pointer to calculate CRC on
-//               UINT64 len //buffer length in bytes (64-bit data)
-//       );
-//
 //       Reference paper titled "Fast CRC Computation for Generic
 //	Polynomials Using PCLMULQDQ Instruction"
 //       URL: http://www.intel.com/content/dam/www/public/us/en/documents
 //  /white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 //
-//
 
 #include <linux/linkage.h>
 #include <asm/assembler.h>
@@ -80,11 +74,11 @@
 	.text
 	.fpu		crypto-neon-fp-armv8
 
-	arg1_low32	.req	r0
-	arg2		.req	r1
-	arg3		.req	r2
+	init_crc	.req	r0
+	buf		.req	r1
+	len		.req	r2
 
-	qzr		.req	q13
+	fold_consts_ptr	.req	ip
 
 	q0l		.req	d0
 	q0h		.req	d1
@@ -102,82 +96,35 @@
 	q6h		.req	d13
 	q7l		.req	d14
 	q7h		.req	d15
-
-ENTRY(crc_t10dif_pmull)
-	vmov.i8		qzr, #0			// init zero register
-
-	// adjust the 16-bit initial_crc value, scale it to 32 bits
-	lsl		arg1_low32, arg1_low32, #16
-
-	// check if smaller than 256
-	cmp		arg3, #256
-
-	// for sizes less than 128, we can't fold 64B at a time...
-	blt		_less_than_128
-
-	// load the initial crc value
-	// crc value does not need to be byte-reflected, but it needs
-	// to be moved to the high part of the register.
-	// because data will be byte-reflected and will align with
-	// initial crc at correct place.
-	vmov		s0, arg1_low32		// initial crc
-	vext.8		q10, qzr, q0, #4
-
-	// receive the initial 64B data, xor the initial crc value
-	vld1.64		{q0-q1}, [arg2]!
-	vld1.64		{q2-q3}, [arg2]!
-	vld1.64		{q4-q5}, [arg2]!
-	vld1.64		{q6-q7}, [arg2]!
-CPU_LE(	vrev64.8	q0, q0			)
-CPU_LE(	vrev64.8	q1, q1			)
-CPU_LE(	vrev64.8	q2, q2			)
-CPU_LE(	vrev64.8	q3, q3			)
-CPU_LE(	vrev64.8	q4, q4			)
-CPU_LE(	vrev64.8	q5, q5			)
-CPU_LE(	vrev64.8	q6, q6			)
-CPU_LE(	vrev64.8	q7, q7			)
-
-	vswp		d0, d1
-	vswp		d2, d3
-	vswp		d4, d5
-	vswp		d6, d7
-	vswp		d8, d9
-	vswp		d10, d11
-	vswp		d12, d13
-	vswp		d14, d15
-
-	// XOR the initial_crc value
-	veor.8		q0, q0, q10
-
-	adr		ip, rk3
-	vld1.64		{q10}, [ip, :128]	// xmm10 has rk3 and rk4
-
-	//
-	// we subtract 256 instead of 128 to save one instruction from the loop
-	//
-	sub		arg3, arg3, #256
-
-	// at this section of the code, there is 64*x+y (0<=y<64) bytes of
-	// buffer. The _fold_64_B_loop will fold 64B at a time
-	// until we have 64+y Bytes of buffer
-
-
-	// fold 64B at a time. This section of the code folds 4 vector
-	// registers in parallel
-_fold_64_B_loop:
-
-	.macro		fold64, reg1, reg2
-	vld1.64		{q11-q12}, [arg2]!
-
-	vmull.p64	q8, \reg1\()h, d21
-	vmull.p64	\reg1, \reg1\()l, d20
-	vmull.p64	q9, \reg2\()h, d21
-	vmull.p64	\reg2, \reg2\()l, d20
-
-CPU_LE(	vrev64.8	q11, q11		)
-CPU_LE(	vrev64.8	q12, q12		)
-	vswp		d22, d23
-	vswp		d24, d25
+	q8l		.req	d16
+	q8h		.req	d17
+	q9l		.req	d18
+	q9h		.req	d19
+	q10l		.req	d20
+	q10h		.req	d21
+	q11l		.req	d22
+	q11h		.req	d23
+	q12l		.req	d24
+	q12h		.req	d25
+
+	FOLD_CONSTS	.req	q10
+	FOLD_CONST_L	.req	q10l
+	FOLD_CONST_H	.req	q10h
+
+	// Fold reg1, reg2 into the next 32 data bytes, storing the result back
+	// into reg1, reg2.
+	.macro		fold_32_bytes, reg1, reg2
+	vld1.64		{q11-q12}, [buf]!
+
+	vmull.p64	q8, \reg1\()h, FOLD_CONST_H
+	vmull.p64	\reg1, \reg1\()l, FOLD_CONST_L
+	vmull.p64	q9, \reg2\()h, FOLD_CONST_H
+	vmull.p64	\reg2, \reg2\()l, FOLD_CONST_L
+
+CPU_LE(	vrev64.8	q11, q11	)
+CPU_LE(	vrev64.8	q12, q12	)
+	vswp		q11l, q11h
+	vswp		q12l, q12h
 
 	veor.8		\reg1, \reg1, q8
 	veor.8		\reg2, \reg2, q9
@@ -185,229 +132,248 @@ CPU_LE(	vrev64.8	q12, q12		)
 	veor.8		\reg2, \reg2, q12
 	.endm
 
-	fold64		q0, q1
-	fold64		q2, q3
-	fold64		q4, q5
-	fold64		q6, q7
-
-	subs		arg3, arg3, #128
-
-	// check if there is another 64B in the buffer to be able to fold
-	bge		_fold_64_B_loop
-
-	// at this point, the buffer pointer is pointing at the last y Bytes
-	// of the buffer the 64B of folded data is in 4 of the vector
-	// registers: v0, v1, v2, v3
-
-	// fold the 8 vector registers to 1 vector register with different
-	// constants
-
-	adr		ip, rk9
-	vld1.64		{q10}, [ip, :128]!
-
-	.macro		fold16, reg, rk
-	vmull.p64	q8, \reg\()l, d20
-	vmull.p64	\reg, \reg\()h, d21
-	.ifnb		\rk
-	vld1.64		{q10}, [ip, :128]!
+	// Fold src_reg into dst_reg, optionally loading the next fold constants
+	.macro		fold_16_bytes, src_reg, dst_reg, load_next_consts
+	vmull.p64	q8, \src_reg\()l, FOLD_CONST_L
+	vmull.p64	\src_reg, \src_reg\()h, FOLD_CONST_H
+	.ifnb		\load_next_consts
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]!
 	.endif
-	veor.8		q7, q7, q8
-	veor.8		q7, q7, \reg
+	veor.8		\dst_reg, \dst_reg, q8
+	veor.8		\dst_reg, \dst_reg, \src_reg
 	.endm
 
-	fold16		q0, rk11
-	fold16		q1, rk13
-	fold16		q2, rk15
-	fold16		q3, rk17
-	fold16		q4, rk19
-	fold16		q5, rk1
-	fold16		q6
-
-	// instead of 64, we add 48 to the loop counter to save 1 instruction
-	// from the loop instead of a cmp instruction, we use the negative
-	// flag with the jl instruction
-	adds		arg3, arg3, #(128-16)
-	blt		_final_reduction_for_128
-
-	// now we have 16+y bytes left to reduce. 16 Bytes is in register v7
-	// and the rest is in memory. We can fold 16 bytes at a time if y>=16
-	// continue folding 16B at a time
-
-_16B_reduction_loop:
-	vmull.p64	q8, d14, d20
-	vmull.p64	q7, d15, d21
-	veor.8		q7, q7, q8
-
-	vld1.64		{q0}, [arg2]!
-CPU_LE(	vrev64.8	q0, q0		)
-	vswp		d0, d1
-	veor.8		q7, q7, q0
-	subs		arg3, arg3, #16
-
-	// instead of a cmp instruction, we utilize the flags with the
-	// jge instruction equivalent of: cmp arg3, 16-16
-	// check if there is any more 16B in the buffer to be able to fold
-	bge		_16B_reduction_loop
-
-	// now we have 16+z bytes left to reduce, where 0<= z < 16.
-	// first, we reduce the data in the xmm7 register
-
-_final_reduction_for_128:
-	// check if any more data to fold. If not, compute the CRC of
-	// the final 128 bits
-	adds		arg3, arg3, #16
-	beq		_128_done
-
-	// here we are getting data that is less than 16 bytes.
-	// since we know that there was data before the pointer, we can
-	// offset the input pointer before the actual point, to receive
-	// exactly 16 bytes. after that the registers need to be adjusted.
-_get_last_two_regs:
-	add		arg2, arg2, arg3
-	sub		arg2, arg2, #16
-	vld1.64		{q1}, [arg2]
-CPU_LE(	vrev64.8	q1, q1			)
-	vswp		d2, d3
-
-	// get rid of the extra data that was loaded before
-	// load the shift constant
-	adr		ip, tbl_shf_table + 16
-	sub		ip, ip, arg3
-	vld1.8		{q0}, [ip]
-
-	// shift v2 to the left by arg3 bytes
-	vtbl.8		d4, {d14-d15}, d0
-	vtbl.8		d5, {d14-d15}, d1
-
-	// shift v7 to the right by 16-arg3 bytes
-	vmov.i8		q9, #0x80
-	veor.8		q0, q0, q9
-	vtbl.8		d18, {d14-d15}, d0
-	vtbl.8		d19, {d14-d15}, d1
-
-	// blend
-	vshr.s8		q0, q0, #7		// convert to 8-bit mask
-	vbsl.8		q0, q2, q1
-
-	// fold 16 Bytes
-	vmull.p64	q8, d18, d20
-	vmull.p64	q7, d19, d21
-	veor.8		q7, q7, q8
-	veor.8		q7, q7, q0
+	.macro		__adrl, out, sym
+	movw		\out, #:lower16:\sym
+	movt		\out, #:upper16:\sym
+	.endm
 
-_128_done:
-	// compute crc of a 128-bit value
-	vldr		d20, rk5
-	vldr		d21, rk6		// rk5 and rk6 in xmm10
+//
+// u16 crc_t10dif_pmull(u16 init_crc, const u8 *buf, size_t len);
+//
+// Assumes len >= 16.
+//
+ENTRY(crc_t10dif_pmull)
 
-	// 64b fold
-	vext.8		q0, qzr, q7, #8
-	vmull.p64	q7, d15, d20
+	// For sizes less than 256 bytes, we can't fold 128 bytes at a time.
+	cmp		len, #256
+	blt		.Lless_than_256_bytes
+
+	__adrl		fold_consts_ptr, .Lfold_across_128_bytes_consts
+
+	// Load the first 128 data bytes.  Byte swapping is necessary to make
+	// the bit order match the polynomial coefficient order.
+	vld1.64		{q0-q1}, [buf]!
+	vld1.64		{q2-q3}, [buf]!
+	vld1.64		{q4-q5}, [buf]!
+	vld1.64		{q6-q7}, [buf]!
+CPU_LE(	vrev64.8	q0, q0	)
+CPU_LE(	vrev64.8	q1, q1	)
+CPU_LE(	vrev64.8	q2, q2	)
+CPU_LE(	vrev64.8	q3, q3	)
+CPU_LE(	vrev64.8	q4, q4	)
+CPU_LE(	vrev64.8	q5, q5	)
+CPU_LE(	vrev64.8	q6, q6	)
+CPU_LE(	vrev64.8	q7, q7	)
+	vswp		q0l, q0h
+	vswp		q1l, q1h
+	vswp		q2l, q2h
+	vswp		q3l, q3h
+	vswp		q4l, q4h
+	vswp		q5l, q5h
+	vswp		q6l, q6h
+	vswp		q7l, q7h
+
+	// XOR the first 16 data *bits* with the initial CRC value.
+	vmov.i8		q8h, #0
+	vmov.u16	q8h[3], init_crc
+	veor		q0h, q0h, q8h
+
+	// Load the constants for folding across 128 bytes.
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]!
+
+	// Subtract 128 for the 128 data bytes just consumed.  Subtract another
+	// 128 to simplify the termination condition of the following loop.
+	sub		len, len, #256
+
+	// While >= 128 data bytes remain (not counting q0-q7), fold the 128
+	// bytes q0-q7 into them, storing the result back into q0-q7.
+.Lfold_128_bytes_loop:
+	fold_32_bytes	q0, q1
+	fold_32_bytes	q2, q3
+	fold_32_bytes	q4, q5
+	fold_32_bytes	q6, q7
+	subs		len, len, #128
+	bge		.Lfold_128_bytes_loop
+
+	// Now fold the 112 bytes in q0-q6 into the 16 bytes in q7.
+
+	// Fold across 64 bytes.
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]!
+	fold_16_bytes	q0, q4
+	fold_16_bytes	q1, q5
+	fold_16_bytes	q2, q6
+	fold_16_bytes	q3, q7, 1
+	// Fold across 32 bytes.
+	fold_16_bytes	q4, q6
+	fold_16_bytes	q5, q7, 1
+	// Fold across 16 bytes.
+	fold_16_bytes	q6, q7
+
+	// Add 128 to get the correct number of data bytes remaining in 0...127
+	// (not counting q7), following the previous extra subtraction by 128.
+	// Then subtract 16 to simplify the termination condition of the
+	// following loop.
+	adds		len, len, #(128-16)
+
+	// While >= 16 data bytes remain (not counting q7), fold the 16 bytes q7
+	// into them, storing the result back into q7.
+	blt		.Lfold_16_bytes_loop_done
+.Lfold_16_bytes_loop:
+	vmull.p64	q8, q7l, FOLD_CONST_L
+	vmull.p64	q7, q7h, FOLD_CONST_H
+	veor.8		q7, q7, q8
+	vld1.64		{q0}, [buf]!
+CPU_LE(	vrev64.8	q0, q0	)
+	vswp		q0l, q0h
 	veor.8		q7, q7, q0
-
-	// 32b fold
-	vext.8		q0, q7, qzr, #12
-	vmov		s31, s3
-	vmull.p64	q0, d0, d21
-	veor.8		q7, q0, q7
-
-	// barrett reduction
-_barrett:
-	vldr		d20, rk7
-	vldr		d21, rk8
-
-	vmull.p64	q0, d15, d20
-	vext.8		q0, qzr, q0, #12
-	vmull.p64	q0, d1, d21
-	vext.8		q0, qzr, q0, #12
+	subs		len, len, #16
+	bge		.Lfold_16_bytes_loop
+
+.Lfold_16_bytes_loop_done:
+	// Add 16 to get the correct number of data bytes remaining in 0...15
+	// (not counting q7), following the previous extra subtraction by 16.
+	adds		len, len, #16
+	beq		.Lreduce_final_16_bytes
+
+.Lhandle_partial_segment:
+	// Reduce the last '16 + len' bytes where 1 <= len <= 15 and the first
+	// 16 bytes are in q7 and the rest are the remaining data in 'buf'.  To
+	// do this without needing a fold constant for each possible 'len',
+	// redivide the bytes into a first chunk of 'len' bytes and a second
+	// chunk of 16 bytes, then fold the first chunk into the second.
+
+	// q0 = last 16 original data bytes
+	add		buf, buf, len
+	sub		buf, buf, #16
+	vld1.64		{q0}, [buf]
+CPU_LE(	vrev64.8	q0, q0	)
+	vswp		q0l, q0h
+
+	// q1 = high order part of second chunk: q7 left-shifted by 'len' bytes.
+	__adrl		r3, .Lbyteshift_table + 16
+	sub		r3, r3, len
+	vld1.8		{q2}, [r3]
+	vtbl.8		q1l, {q7l-q7h}, q2l
+	vtbl.8		q1h, {q7l-q7h}, q2h
+
+	// q3 = first chunk: q7 right-shifted by '16-len' bytes.
+	vmov.i8		q3, #0x80
+	veor.8		q2, q2, q3
+	vtbl.8		q3l, {q7l-q7h}, q2l
+	vtbl.8		q3h, {q7l-q7h}, q2h
+
+	// Convert to 8-bit masks: 'len' 0x00 bytes, then '16-len' 0xff bytes.
+	vshr.s8		q2, q2, #7
+
+	// q2 = second chunk: 'len' bytes from q0 (low-order bytes),
+	// then '16-len' bytes from q1 (high-order bytes).
+	vbsl.8		q2, q1, q0
+
+	// Fold the first chunk into the second chunk, storing the result in q7.
+	vmull.p64	q0, q3l, FOLD_CONST_L
+	vmull.p64	q7, q3h, FOLD_CONST_H
 	veor.8		q7, q7, q0
-	vmov		r0, s29
-
-_cleanup:
-	// scale the result back to 16 bits
-	lsr		r0, r0, #16
+	veor.8		q7, q7, q2
+
+.Lreduce_final_16_bytes:
+	// Reduce the 128-bit value M(x), stored in q7, to the final 16-bit CRC.
+
+	// Load 'x^48 * (x^48 mod G(x))' and 'x^48 * (x^80 mod G(x))'.
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]!
+
+	// Fold the high 64 bits into the low 64 bits, while also multiplying by
+	// x^64.  This produces a 128-bit value congruent to x^64 * M(x) and
+	// whose low 48 bits are 0.
+	vmull.p64	q0, q7h, FOLD_CONST_H	// high bits * x^48 * (x^80 mod G(x))
+	veor.8		q0h, q0h, q7l		// + low bits * x^64
+
+	// Fold the high 32 bits into the low 96 bits.  This produces a 96-bit
+	// value congruent to x^64 * M(x) and whose low 48 bits are 0.
+	vmov.i8		q1, #0
+	vmov		s4, s3			// extract high 32 bits
+	vmov		s3, s5			// zero high 32 bits
+	vmull.p64	q1, q1l, FOLD_CONST_L	// high 32 bits * x^48 * (x^48 mod G(x))
+	veor.8		q0, q0, q1		// + low bits
+
+	// Load G(x) and floor(x^48 / G(x)).
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]
+
+	// Use Barrett reduction to compute the final CRC value.
+	vmull.p64	q1, q0h, FOLD_CONST_H	// high 32 bits * floor(x^48 / G(x))
+	vshr.u64	q1l, q1l, #32		// /= x^32
+	vmull.p64	q1, q1l, FOLD_CONST_L	// *= G(x)
+	vshr.u64	q0l, q0l, #48
+	veor.8		q0l, q0l, q1l		// + low 16 nonzero bits
+	// Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of q0.
+
+	vmov.u16	r0, q0l[0]
 	bx		lr
 
-_less_than_128:
-	teq		arg3, #0
-	beq		_cleanup
+.Lless_than_256_bytes:
+	// Checksumming a buffer of length 16...255 bytes
 
-	vmov.i8		q0, #0
-	vmov		s3, arg1_low32		// get the initial crc value
+	__adrl		fold_consts_ptr, .Lfold_across_16_bytes_consts
 
-	vld1.64		{q7}, [arg2]!
-CPU_LE(	vrev64.8	q7, q7		)
-	vswp		d14, d15
-	veor.8		q7, q7, q0
+	// Load the first 16 data bytes.
+	vld1.64		{q7}, [buf]!
+CPU_LE(	vrev64.8	q7, q7	)
+	vswp		q7l, q7h
 
-	cmp		arg3, #16
-	beq		_128_done		// exactly 16 left
+	// XOR the first 16 data *bits* with the initial CRC value.
+	vmov.i8		q0h, #0
+	vmov.u16	q0h[3], init_crc
+	veor.8		q7h, q7h, q0h
 
-	// now if there is, load the constants
-	vldr		d20, rk1
-	vldr		d21, rk2		// rk1 and rk2 in xmm10
+	// Load the fold-across-16-bytes constants.
+	vld1.64		{FOLD_CONSTS}, [fold_consts_ptr, :128]!
 
-	// check if there is enough buffer to be able to fold 16B at a time
-	subs		arg3, arg3, #32
-	addlt		arg3, arg3, #16
-	blt		_get_last_two_regs
-	b		_16B_reduction_loop
+	cmp		len, #16
+	beq		.Lreduce_final_16_bytes		// len == 16
+	subs		len, len, #32
+	addlt		len, len, #16
+	blt		.Lhandle_partial_segment	// 17 <= len <= 31
+	b		.Lfold_16_bytes_loop		// 32 <= len <= 255
 ENDPROC(crc_t10dif_pmull)
 
-// precomputed constants
-// these constants are precomputed from the poly:
-// 0x8bb70000 (0x8bb7 scaled to 32 bits)
+	.section	".rodata", "a"
 	.align		4
-// Q = 0x18BB70000
-// rk1 = 2^(32*3) mod Q << 32
-// rk2 = 2^(32*5) mod Q << 32
-// rk3 = 2^(32*15) mod Q << 32
-// rk4 = 2^(32*17) mod Q << 32
-// rk5 = 2^(32*3) mod Q << 32
-// rk6 = 2^(32*2) mod Q << 32
-// rk7 = floor(2^64/Q)
-// rk8 = Q
-
-rk3:	.quad		0x9d9d000000000000
-rk4:	.quad		0x7cf5000000000000
-rk5:	.quad		0x2d56000000000000
-rk6:	.quad		0x1368000000000000
-rk7:	.quad		0x00000001f65a57f8
-rk8:	.quad		0x000000018bb70000
-rk9:	.quad		0xceae000000000000
-rk10:	.quad		0xbfd6000000000000
-rk11:	.quad		0x1e16000000000000
-rk12:	.quad		0x713c000000000000
-rk13:	.quad		0xf7f9000000000000
-rk14:	.quad		0x80a6000000000000
-rk15:	.quad		0x044c000000000000
-rk16:	.quad		0xe658000000000000
-rk17:	.quad		0xad18000000000000
-rk18:	.quad		0xa497000000000000
-rk19:	.quad		0x6ee3000000000000
-rk20:	.quad		0xe7b5000000000000
-rk1:	.quad		0x2d56000000000000
-rk2:	.quad		0x06df000000000000
-
-tbl_shf_table:
-// use these values for shift constants for the tbl/tbx instruction
-// different alignments result in values as shown:
-//	DDQ 0x008f8e8d8c8b8a898887868584838281 # shl 15 (16-1) / shr1
-//	DDQ 0x01008f8e8d8c8b8a8988878685848382 # shl 14 (16-3) / shr2
-//	DDQ 0x0201008f8e8d8c8b8a89888786858483 # shl 13 (16-4) / shr3
-//	DDQ 0x030201008f8e8d8c8b8a898887868584 # shl 12 (16-4) / shr4
-//	DDQ 0x04030201008f8e8d8c8b8a8988878685 # shl 11 (16-5) / shr5
-//	DDQ 0x0504030201008f8e8d8c8b8a89888786 # shl 10 (16-6) / shr6
-//	DDQ 0x060504030201008f8e8d8c8b8a898887 # shl 9  (16-7) / shr7
-//	DDQ 0x07060504030201008f8e8d8c8b8a8988 # shl 8  (16-8) / shr8
-//	DDQ 0x0807060504030201008f8e8d8c8b8a89 # shl 7  (16-9) / shr9
-//	DDQ 0x090807060504030201008f8e8d8c8b8a # shl 6  (16-10) / shr10
-//	DDQ 0x0a090807060504030201008f8e8d8c8b # shl 5  (16-11) / shr11
-//	DDQ 0x0b0a090807060504030201008f8e8d8c # shl 4  (16-12) / shr12
-//	DDQ 0x0c0b0a090807060504030201008f8e8d # shl 3  (16-13) / shr13
-//	DDQ 0x0d0c0b0a090807060504030201008f8e # shl 2  (16-14) / shr14
-//	DDQ 0x0e0d0c0b0a090807060504030201008f # shl 1  (16-15) / shr15
 
+// Fold constants precomputed from the polynomial 0x18bb7
+// G(x) = x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + x^0
+.Lfold_across_128_bytes_consts:
+	.quad		0x0000000000006123	// x^(8*128)	mod G(x)
+	.quad		0x0000000000002295	// x^(8*128+64)	mod G(x)
+// .Lfold_across_64_bytes_consts:
+	.quad		0x0000000000001069	// x^(4*128)	mod G(x)
+	.quad		0x000000000000dd31	// x^(4*128+64)	mod G(x)
+// .Lfold_across_32_bytes_consts:
+	.quad		0x000000000000857d	// x^(2*128)	mod G(x)
+	.quad		0x0000000000007acc	// x^(2*128+64)	mod G(x)
+.Lfold_across_16_bytes_consts:
+	.quad		0x000000000000a010	// x^(1*128)	mod G(x)
+	.quad		0x0000000000001faa	// x^(1*128+64)	mod G(x)
+// .Lfinal_fold_consts:
+	.quad		0x1368000000000000	// x^48 * (x^48 mod G(x))
+	.quad		0x2d56000000000000	// x^48 * (x^80 mod G(x))
+// .Lbarrett_reduction_consts:
+	.quad		0x0000000000018bb7	// G(x)
+	.quad		0x00000001f65a57f8	// floor(x^48 / G(x))
+
+// For 1 <= len <= 15, the 16-byte vector beginning at &byteshift_table[16 -
+// len] is the index vector to shift left by 'len' bytes, and is also {0x80,
+// ..., 0x80} XOR the index vector to shift right by '16 - len' bytes.
+.Lbyteshift_table:
 	.byte		 0x0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87
 	.byte		0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f
 	.byte		 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7
diff --git a/arch/arm/crypto/crct10dif-ce-glue.c b/arch/arm/crypto/crct10dif-ce-glue.c
index 14c19c70a8416..3d6b800b83965 100644
--- a/arch/arm/crypto/crct10dif-ce-glue.c
+++ b/arch/arm/crypto/crct10dif-ce-glue.c
@@ -21,7 +21,7 @@
 
 #define CRC_T10DIF_PMULL_CHUNK_SIZE	16U
 
-asmlinkage u16 crc_t10dif_pmull(u16 init_crc, const u8 buf[], u32 len);
+asmlinkage u16 crc_t10dif_pmull(u16 init_crc, const u8 *buf, size_t len);
 
 static int crct10dif_init(struct shash_desc *desc)
 {
-- 
2.20.1
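
The .Lbyteshift_table trick used by .Lhandle_partial_segment above can be
checked in isolation with a small C model of the vtbl/pshufb-style byte
shuffle.  The 32 table bytes are copied from the patch; the shuffle() helper
and the test values are illustrative only (byte index 0 is treated as the
lowest-order position, ignoring the byte swapping done by the real code).  The
sketch checks that, for 1 <= len <= 15, the index vector at
&byteshift_table[16 - len] shifts a 16-byte vector left by 'len' bytes, and
that the same vector XOR 0x80 shifts it right by '16 - len' bytes.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Copied from .Lbyteshift_table in the patch. */
static const uint8_t byteshift_table[32] = {
	0x00, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87,
	0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f,
	0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07,
	0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x00,
};

/* Model of a 16-byte table lookup (vtbl/pshufb): index bytes with the top
 * bit set select zero, otherwise the low 4 bits select an input byte. */
static void shuffle(uint8_t out[16], const uint8_t in[16], const uint8_t idx[16])
{
	int i;

	for (i = 0; i < 16; i++)
		out[i] = (idx[i] & 0x80) ? 0 : in[idx[i] & 0x0f];
}

int main(void)
{
	uint8_t in[16], idx[16], shl[16], shr[16];
	int len, i, ok = 1;

	for (i = 0; i < 16; i++)
		in[i] = 0x10 + i;

	for (len = 1; len <= 15; len++) {
		/* Index vector starting at &byteshift_table[16 - len]. */
		memcpy(idx, &byteshift_table[16 - len], 16);
		shuffle(shl, in, idx);		/* left shift by 'len' bytes */

		for (i = 0; i < 16; i++)
			idx[i] ^= 0x80;
		shuffle(shr, in, idx);		/* right shift by '16 - len' bytes */

		for (i = 0; i < 16; i++) {
			if (shl[i] != (i >= len ? in[i - len] : 0))
				ok = 0;
			if (shr[i] != (i < len ? in[i + 16 - len] : 0))
				ok = 0;
		}
	}
	printf("byteshift table %s for len = 1..15\n", ok ? "OK" : "BROKEN");
	return 0;
}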


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 3/3] crypto: arm64/crct10dif-ce - cleanup and optimizations
  2019-01-30  3:14 ` Eric Biggers
@ 2019-01-30  3:14   ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Ard Biesheuvel, Tim Chen, linux-arm-kernel

From: Eric Biggers <ebiggers@google.com>

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand partly because many of the comments, labels, and
macros are named incorrectly: the lengths mentioned are usually off by a
factor of two from the actual code.  Many other things are unnecessarily
convoluted as well, e.g. there are many more fold constants than
actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This patch cleans up the arm64 version.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/crct10dif-ce-core.S | 496 ++++++++++++--------------
 arch/arm64/crypto/crct10dif-ce-glue.c |   4 +-
 2 files changed, 233 insertions(+), 267 deletions(-)
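
A quick way to sanity-check the folding identity behind fold_32_bytes and
fold_16_bytes in the diff below is a small GF(2) model in C.  The constants
0xa010 (x^128 mod G(x)) and 0x1faa (x^192 mod G(x)) are taken from the patch's
fold constant table; the helpers are illustrative, use the GCC __int128
extension, and ignore the byte swapping and register layout of the real code.
The two printed values should be equal.

#include <stdint.h>
#include <stdio.h>

#define G 0x18bb7u			/* G(x), degree 16 */

typedef unsigned __int128 u128;

/* Carryless multiply: 64-bit x 64-bit polynomials -> 128-bit product. */
static u128 clmul64(uint64_t a, uint64_t b)
{
	u128 r = 0;
	int i;

	for (i = 0; i < 64; i++)
		if (a & (1ULL << i))
			r ^= (u128)b << i;
	return r;
}

/* Reduce a polynomial of degree < 128 modulo G(x). */
static uint32_t mod_g(u128 v)
{
	int i;

	for (i = 127; i >= 16; i--)
		if ((v >> i) & 1)
			v ^= (u128)G << (i - 16);
	return (uint32_t)v;
}

int main(void)
{
	/* Arbitrary 128-bit accumulator R and 128 bits of new data D. */
	uint64_t r_hi = 0x0123456789abcdefULL, r_lo = 0xfedcba9876543210ULL;
	uint64_t d_hi = 0xdeadbeefcafef00dULL, d_lo = 0x0f1e2d3c4b5a6978ULL;
	u128 r = ((u128)r_hi << 64) | r_lo;
	u128 d = ((u128)d_hi << 64) | d_lo;

	/* One fold step: R' = R_hi * (x^192 mod G) + R_lo * (x^128 mod G) + D. */
	u128 folded = clmul64(r_hi, 0x1faa) ^ clmul64(r_lo, 0xa010) ^ d;

	/* Reference: (R * x^128 + D) mod G, via (R mod G) * (x^128 mod G). */
	uint32_t ref = mod_g(clmul64(mod_g(r), 0xa010)) ^ mod_g(d);

	printf("folded   mod G = 0x%04x\n", mod_g(folded));
	printf("reference      = 0x%04x\n", ref);
	return 0;
}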

diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index f7326259c40de..e545b42e6a468 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -2,12 +2,14 @@
 // Accelerated CRC-T10DIF using arm64 NEON and Crypto Extensions instructions
 //
 // Copyright (C) 2016 Linaro Ltd <ard.biesheuvel@linaro.org>
+// Copyright (C) 2019 Google LLC <ebiggers@google.com>
 //
 // This program is free software; you can redistribute it and/or modify
 // it under the terms of the GNU General Public License version 2 as
 // published by the Free Software Foundation.
 //
 
+// Derived from the x86 version:
 //
 // Implement fast CRC-T10DIF computation with SSE and PCLMULQDQ instructions
 //
@@ -54,19 +56,11 @@
 // NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 // SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-//       Function API:
-//       UINT16 crc_t10dif_pcl(
-//               UINT16 init_crc, //initial CRC value, 16 bits
-//               const unsigned char *buf, //buffer pointer to calculate CRC on
-//               UINT64 len //buffer length in bytes (64-bit data)
-//       );
-//
 //       Reference paper titled "Fast CRC Computation for Generic
 //	Polynomials Using PCLMULQDQ Instruction"
 //       URL: http://www.intel.com/content/dam/www/public/us/en/documents
 //  /white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 //
-//
 
 #include <linux/linkage.h>
 #include <asm/assembler.h>
@@ -74,14 +68,14 @@
 	.text
 	.cpu		generic+crypto
 
-	arg1_low32	.req	w19
-	arg2		.req	x20
-	arg3		.req	x21
+	init_crc	.req	w19
+	buf		.req	x20
+	len		.req	x21
+	fold_consts_ptr	.req	x22
 
-	vzr		.req	v13
+	fold_consts	.req	v10
 
 	ad		.req	v14
-	bd		.req	v10
 
 	k00_16		.req	v15
 	k32_48		.req	v16
@@ -143,11 +137,11 @@ __pmull_p8_core:
 	ext		t5.8b, ad.8b, ad.8b, #2			// A2
 	ext		t6.8b, ad.8b, ad.8b, #3			// A3
 
-	pmull		t4.8h, t4.8b, bd.8b			// F = A1*B
+	pmull		t4.8h, t4.8b, fold_consts.8b		// F = A1*B
 	pmull		t8.8h, ad.8b, bd1.8b			// E = A*B1
-	pmull		t5.8h, t5.8b, bd.8b			// H = A2*B
+	pmull		t5.8h, t5.8b, fold_consts.8b		// H = A2*B
 	pmull		t7.8h, ad.8b, bd2.8b			// G = A*B2
-	pmull		t6.8h, t6.8b, bd.8b			// J = A3*B
+	pmull		t6.8h, t6.8b, fold_consts.8b		// J = A3*B
 	pmull		t9.8h, ad.8b, bd3.8b			// I = A*B3
 	pmull		t3.8h, ad.8b, bd4.8b			// K = A*B4
 	b		0f
@@ -157,11 +151,11 @@ __pmull_p8_core:
 	tbl		t5.16b, {ad.16b}, perm2.16b		// A2
 	tbl		t6.16b, {ad.16b}, perm3.16b		// A3
 
-	pmull2		t4.8h, t4.16b, bd.16b			// F = A1*B
+	pmull2		t4.8h, t4.16b, fold_consts.16b		// F = A1*B
 	pmull2		t8.8h, ad.16b, bd1.16b			// E = A*B1
-	pmull2		t5.8h, t5.16b, bd.16b			// H = A2*B
+	pmull2		t5.8h, t5.16b, fold_consts.16b		// H = A2*B
 	pmull2		t7.8h, ad.16b, bd2.16b			// G = A*B2
-	pmull2		t6.8h, t6.16b, bd.16b			// J = A3*B
+	pmull2		t6.8h, t6.16b, fold_consts.16b		// J = A3*B
 	pmull2		t9.8h, ad.16b, bd3.16b			// I = A*B3
 	pmull2		t3.8h, ad.16b, bd4.16b			// K = A*B4
 
@@ -203,14 +197,14 @@ __pmull_p8_core:
 ENDPROC(__pmull_p8_core)
 
 	.macro		__pmull_p8, rq, ad, bd, i
-	.ifnc		\bd, v10
+	.ifnc		\bd, fold_consts
 	.err
 	.endif
 	mov		ad.16b, \ad\().16b
 	.ifb		\i
-	pmull		\rq\().8h, \ad\().8b, bd.8b		// D = A*B
+	pmull		\rq\().8h, \ad\().8b, \bd\().8b		// D = A*B
 	.else
-	pmull2		\rq\().8h, \ad\().16b, bd.16b		// D = A*B
+	pmull2		\rq\().8h, \ad\().16b, \bd\().16b	// D = A*B
 	.endif
 
 	bl		.L__pmull_p8_core\i
@@ -219,17 +213,19 @@ ENDPROC(__pmull_p8_core)
 	eor		\rq\().16b, \rq\().16b, t6.16b
 	.endm
 
-	.macro		fold64, p, reg1, reg2
-	ldp		q11, q12, [arg2], #0x20
+	// Fold reg1, reg2 into the next 32 data bytes, storing the result back
+	// into reg1, reg2.
+	.macro		fold_32_bytes, p, reg1, reg2
+	ldp		q11, q12, [buf], #0x20
 
-	__pmull_\p	v8, \reg1, v10, 2
-	__pmull_\p	\reg1, \reg1, v10
+	__pmull_\p	v8, \reg1, fold_consts, 2
+	__pmull_\p	\reg1, \reg1, fold_consts
 
 CPU_LE(	rev64		v11.16b, v11.16b		)
 CPU_LE(	rev64		v12.16b, v12.16b		)
 
-	__pmull_\p	v9, \reg2, v10, 2
-	__pmull_\p	\reg2, \reg2, v10
+	__pmull_\p	v9, \reg2, fold_consts, 2
+	__pmull_\p	\reg2, \reg2, fold_consts
 
 CPU_LE(	ext		v11.16b, v11.16b, v11.16b, #8	)
 CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
@@ -240,15 +236,16 @@ CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
 	eor		\reg2\().16b, \reg2\().16b, v12.16b
 	.endm
 
-	.macro		fold16, p, reg, rk
-	__pmull_\p	v8, \reg, v10
-	__pmull_\p	\reg, \reg, v10, 2
-	.ifnb		\rk
-	ldr_l		q10, \rk, x8
-	__pmull_pre_\p	v10
+	// Fold src_reg into dst_reg, optionally loading the next fold constants
+	.macro		fold_16_bytes, p, src_reg, dst_reg, load_next_consts
+	__pmull_\p	v8, \src_reg, fold_consts
+	__pmull_\p	\src_reg, \src_reg, fold_consts, 2
+	.ifnb		\load_next_consts
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
 	.endif
-	eor		v7.16b, v7.16b, v8.16b
-	eor		v7.16b, v7.16b, \reg\().16b
+	eor		\dst_reg\().16b, \dst_reg\().16b, v8.16b
+	eor		\dst_reg\().16b, \dst_reg\().16b, \src_reg\().16b
 	.endm
 
 	.macro		__pmull_p64, rd, rn, rm, n
@@ -260,40 +257,27 @@ CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
 	.endm
 
 	.macro		crc_t10dif_pmull, p
-	frame_push	3, 128
+	frame_push	4, 128
 
-	mov		arg1_low32, w0
-	mov		arg2, x1
-	mov		arg3, x2
-
-	movi		vzr.16b, #0		// init zero register
+	mov		init_crc, w0
+	mov		buf, x1
+	mov		len, x2
 
 	__pmull_init_\p
 
-	// adjust the 16-bit initial_crc value, scale it to 32 bits
-	lsl		arg1_low32, arg1_low32, #16
-
-	// check if smaller than 256
-	cmp		arg3, #256
-
-	// for sizes less than 128, we can't fold 64B at a time...
-	b.lt		.L_less_than_128_\@
+	// For sizes less than 256 bytes, we can't fold 128 bytes at a time.
+	cmp		len, #256
+	b.lt		.Lless_than_256_bytes_\@
 
-	// load the initial crc value
-	// crc value does not need to be byte-reflected, but it needs
-	// to be moved to the high part of the register.
-	// because data will be byte-reflected and will align with
-	// initial crc at correct place.
-	movi		v10.16b, #0
-	mov		v10.s[3], arg1_low32		// initial crc
-
-	// receive the initial 64B data, xor the initial crc value
-	ldp		q0, q1, [arg2]
-	ldp		q2, q3, [arg2, #0x20]
-	ldp		q4, q5, [arg2, #0x40]
-	ldp		q6, q7, [arg2, #0x60]
-	add		arg2, arg2, #0x80
+	adr_l		fold_consts_ptr, .Lfold_across_128_bytes_consts
 
+	// Load the first 128 data bytes.  Byte swapping is necessary to make
+	// the bit order match the polynomial coefficient order.
+	ldp		q0, q1, [buf]
+	ldp		q2, q3, [buf, #0x20]
+	ldp		q4, q5, [buf, #0x40]
+	ldp		q6, q7, [buf, #0x60]
+	add		buf, buf, #0x80
 CPU_LE(	rev64		v0.16b, v0.16b			)
 CPU_LE(	rev64		v1.16b, v1.16b			)
 CPU_LE(	rev64		v2.16b, v2.16b			)
@@ -302,7 +286,6 @@ CPU_LE(	rev64		v4.16b, v4.16b			)
 CPU_LE(	rev64		v5.16b, v5.16b			)
 CPU_LE(	rev64		v6.16b, v6.16b			)
 CPU_LE(	rev64		v7.16b, v7.16b			)
-
 CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 CPU_LE(	ext		v1.16b, v1.16b, v1.16b, #8	)
 CPU_LE(	ext		v2.16b, v2.16b, v2.16b, #8	)
@@ -312,36 +295,29 @@ CPU_LE(	ext		v5.16b, v5.16b, v5.16b, #8	)
 CPU_LE(	ext		v6.16b, v6.16b, v6.16b, #8	)
 CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
 
-	// XOR the initial_crc value
-	eor		v0.16b, v0.16b, v10.16b
-
-	ldr_l		q10, rk3, x8	// xmm10 has rk3 and rk4
-					// type of pmull instruction
-					// will determine which constant to use
-	__pmull_pre_\p	v10
-
-	//
-	// we subtract 256 instead of 128 to save one instruction from the loop
-	//
-	sub		arg3, arg3, #256
-
-	// at this section of the code, there is 64*x+y (0<=y<64) bytes of
-	// buffer. The _fold_64_B_loop will fold 64B at a time
-	// until we have 64+y Bytes of buffer
+	// XOR the first 16 data *bits* with the initial CRC value.
+	movi		v8.16b, #0
+	mov		v8.h[7], init_crc
+	eor		v0.16b, v0.16b, v8.16b
 
-	// fold 64B at a time. This section of the code folds 4 vector
-	// registers in parallel
-.L_fold_64_B_loop_\@:
+	// Load the constants for folding across 128 bytes.
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
+	__pmull_pre_\p	fold_consts
 
-	fold64		\p, v0, v1
-	fold64		\p, v2, v3
-	fold64		\p, v4, v5
-	fold64		\p, v6, v7
+	// Subtract 128 for the 128 data bytes just consumed.  Subtract another
+	// 128 to simplify the termination condition of the following loop.
+	sub		len, len, #256
 
-	subs		arg3, arg3, #128
+	// While >= 128 data bytes remain (not counting v0-v7), fold the 128
+	// bytes v0-v7 into them, storing the result back into v0-v7.
+.Lfold_128_bytes_loop_\@:
+	fold_32_bytes	\p, v0, v1
+	fold_32_bytes	\p, v2, v3
+	fold_32_bytes	\p, v4, v5
+	fold_32_bytes	\p, v6, v7
 
-	// check if there is another 64B in the buffer to be able to fold
-	b.lt		.L_fold_64_B_end_\@
+	subs		len, len, #128
+	b.lt		.Lfold_128_bytes_loop_done_\@
 
 	if_will_cond_yield_neon
 	stp		q0, q1, [sp, #.Lframe_local_offset]
@@ -353,217 +329,207 @@ CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
 	ldp		q2, q3, [sp, #.Lframe_local_offset + 32]
 	ldp		q4, q5, [sp, #.Lframe_local_offset + 64]
 	ldp		q6, q7, [sp, #.Lframe_local_offset + 96]
-	ldr_l		q10, rk3, x8
-	movi		vzr.16b, #0		// init zero register
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
 	__pmull_init_\p
-	__pmull_pre_\p	v10
+	__pmull_pre_\p	fold_consts
 	endif_yield_neon
 
-	b		.L_fold_64_B_loop_\@
-
-.L_fold_64_B_end_\@:
-	// at this point, the buffer pointer is pointing at the last y Bytes
-	// of the buffer the 64B of folded data is in 4 of the vector
-	// registers: v0, v1, v2, v3
-
-	// fold the 8 vector registers to 1 vector register with different
-	// constants
-
-	ldr_l		q10, rk9, x8
-	__pmull_pre_\p	v10
-
-	fold16		\p, v0, rk11
-	fold16		\p, v1, rk13
-	fold16		\p, v2, rk15
-	fold16		\p, v3, rk17
-	fold16		\p, v4, rk19
-	fold16		\p, v5, rk1
-	fold16		\p, v6
-
-	// instead of 64, we add 48 to the loop counter to save 1 instruction
-	// from the loop instead of a cmp instruction, we use the negative
-	// flag with the jl instruction
-	adds		arg3, arg3, #(128-16)
-	b.lt		.L_final_reduction_for_128_\@
-
-	// now we have 16+y bytes left to reduce. 16 Bytes is in register v7
-	// and the rest is in memory. We can fold 16 bytes at a time if y>=16
-	// continue folding 16B at a time
-
-.L_16B_reduction_loop_\@:
-	__pmull_\p	v8, v7, v10
-	__pmull_\p	v7, v7, v10, 2
+	b		.Lfold_128_bytes_loop_\@
+
+.Lfold_128_bytes_loop_done_\@:
+
+	// Now fold the 112 bytes in v0-v6 into the 16 bytes in v7.
+
+	// Fold across 64 bytes.
+	add		fold_consts_ptr, fold_consts_ptr, #16
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
+	fold_16_bytes	\p, v0, v4
+	fold_16_bytes	\p, v1, v5
+	fold_16_bytes	\p, v2, v6
+	fold_16_bytes	\p, v3, v7, 1
+	// Fold across 32 bytes.
+	fold_16_bytes	\p, v4, v6
+	fold_16_bytes	\p, v5, v7, 1
+	// Fold across 16 bytes.
+	fold_16_bytes	\p, v6, v7
+
+	// Add 128 to get the correct number of data bytes remaining in 0...127
+	// (not counting v7), following the previous extra subtraction by 128.
+	// Then subtract 16 to simplify the termination condition of the
+	// following loop.
+	adds		len, len, #(128-16)
+
+	// While >= 16 data bytes remain (not counting v7), fold the 16 bytes v7
+	// into them, storing the result back into v7.
+	b.lt		.Lfold_16_bytes_loop_done_\@
+.Lfold_16_bytes_loop_\@:
+	__pmull_\p	v8, v7, fold_consts
+	__pmull_\p	v7, v7, fold_consts, 2
 	eor		v7.16b, v7.16b, v8.16b
-
-	ldr		q0, [arg2], #16
+	ldr		q0, [buf], #16
 CPU_LE(	rev64		v0.16b, v0.16b			)
 CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 	eor		v7.16b, v7.16b, v0.16b
-	subs		arg3, arg3, #16
-
-	// instead of a cmp instruction, we utilize the flags with the
-	// jge instruction equivalent of: cmp arg3, 16-16
-	// check if there is any more 16B in the buffer to be able to fold
-	b.ge		.L_16B_reduction_loop_\@
-
-	// now we have 16+z bytes left to reduce, where 0<= z < 16.
-	// first, we reduce the data in the xmm7 register
-
-.L_final_reduction_for_128_\@:
-	// check if any more data to fold. If not, compute the CRC of
-	// the final 128 bits
-	adds		arg3, arg3, #16
-	b.eq		.L_128_done_\@
-
-	// here we are getting data that is less than 16 bytes.
-	// since we know that there was data before the pointer, we can
-	// offset the input pointer before the actual point, to receive
-	// exactly 16 bytes. after that the registers need to be adjusted.
-.L_get_last_two_regs_\@:
-	add		arg2, arg2, arg3
-	ldr		q1, [arg2, #-16]
-CPU_LE(	rev64		v1.16b, v1.16b			)
-CPU_LE(	ext		v1.16b, v1.16b, v1.16b, #8	)
-
-	// get rid of the extra data that was loaded before
-	// load the shift constant
-	adr_l		x4, tbl_shf_table + 16
-	sub		x4, x4, arg3
-	ld1		{v0.16b}, [x4]
-
-	// shift v2 to the left by arg3 bytes
-	tbl		v2.16b, {v7.16b}, v0.16b
-
-	// shift v7 to the right by 16-arg3 bytes
-	movi		v9.16b, #0x80
-	eor		v0.16b, v0.16b, v9.16b
-	tbl		v7.16b, {v7.16b}, v0.16b
-
-	// blend
-	sshr		v0.16b, v0.16b, #7	// convert to 8-bit mask
-	bsl		v0.16b, v2.16b, v1.16b
-
-	// fold 16 Bytes
-	__pmull_\p	v8, v7, v10
-	__pmull_\p	v7, v7, v10, 2
-	eor		v7.16b, v7.16b, v8.16b
-	eor		v7.16b, v7.16b, v0.16b
+	subs		len, len, #16
+	b.ge		.Lfold_16_bytes_loop_\@
+
+.Lfold_16_bytes_loop_done_\@:
+	// Add 16 to get the correct number of data bytes remaining in 0...15
+	// (not counting v7), following the previous extra subtraction by 16.
+	adds		len, len, #16
+	b.eq		.Lreduce_final_16_bytes_\@
+
+.Lhandle_partial_segment_\@:
+	// Reduce the last '16 + len' bytes where 1 <= len <= 15 and the first
+	// 16 bytes are in v7 and the rest are the remaining data in 'buf'.  To
+	// do this without needing a fold constant for each possible 'len',
+	// redivide the bytes into a first chunk of 'len' bytes and a second
+	// chunk of 16 bytes, then fold the first chunk into the second.
+
+	// v0 = last 16 original data bytes
+	add		buf, buf, len
+	ldr		q0, [buf, #-16]
+CPU_LE(	rev64		v0.16b, v0.16b			)
+CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 
-.L_128_done_\@:
-	// compute crc of a 128-bit value
-	ldr_l		q10, rk5, x8		// rk5 and rk6 in xmm10
-	__pmull_pre_\p	v10
+	// v1 = high order part of second chunk: v7 left-shifted by 'len' bytes.
+	adr_l		x4, .Lbyteshift_table + 16
+	sub		x4, x4, len
+	ld1		{v2.16b}, [x4]
+	tbl		v1.16b, {v7.16b}, v2.16b
 
-	// 64b fold
-	ext		v0.16b, vzr.16b, v7.16b, #8
-	mov		v7.d[0], v7.d[1]
-	__pmull_\p	v7, v7, v10
-	eor		v7.16b, v7.16b, v0.16b
+	// v3 = first chunk: v7 right-shifted by '16-len' bytes.
+	movi		v3.16b, #0x80
+	eor		v2.16b, v2.16b, v3.16b
+	tbl		v3.16b, {v7.16b}, v2.16b
 
-	// 32b fold
-	ext		v0.16b, v7.16b, vzr.16b, #4
-	mov		v7.s[3], vzr.s[0]
-	__pmull_\p	v0, v0, v10, 2
-	eor		v7.16b, v7.16b, v0.16b
+	// Convert to 8-bit masks: 'len' 0x00 bytes, then '16-len' 0xff bytes.
+	sshr		v2.16b, v2.16b, #7
 
-	// barrett reduction
-	ldr_l		q10, rk7, x8
-	__pmull_pre_\p	v10
-	mov		v0.d[0], v7.d[1]
+	// v2 = second chunk: 'len' bytes from v0 (low-order bytes),
+	// then '16-len' bytes from v1 (high-order bytes).
+	bsl		v2.16b, v1.16b, v0.16b
 
-	__pmull_\p	v0, v0, v10
-	ext		v0.16b, vzr.16b, v0.16b, #12
-	__pmull_\p	v0, v0, v10, 2
-	ext		v0.16b, vzr.16b, v0.16b, #12
+	// Fold the first chunk into the second chunk, storing the result in v7.
+	__pmull_\p	v0, v3, fold_consts
+	__pmull_\p	v7, v3, fold_consts, 2
 	eor		v7.16b, v7.16b, v0.16b
-	mov		w0, v7.s[1]
-
-.L_cleanup_\@:
-	// scale the result back to 16 bits
-	lsr		x0, x0, #16
+	eor		v7.16b, v7.16b, v2.16b
+
+.Lreduce_final_16_bytes_\@:
+	// Reduce the 128-bit value M(x), stored in v7, to the final 16-bit CRC.
+
+	movi		v2.16b, #0		// init zero register
+
+	// Load 'x^48 * (x^48 mod G(x))' and 'x^48 * (x^80 mod G(x))'.
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
+
+	// Fold the high 64 bits into the low 64 bits, while also multiplying by
+	// x^64.  This produces a 128-bit value congruent to x^64 * M(x) and
+	// whose low 48 bits are 0.
+	ext		v0.16b, v2.16b, v7.16b, #8
+	__pmull_\p	v7, v7, fold_consts, 2	// high bits * x^48 * (x^80 mod G(x))
+	eor		v0.16b, v0.16b, v7.16b	// + low bits * x^64
+
+	// Fold the high 32 bits into the low 96 bits.  This produces a 96-bit
+	// value congruent to x^64 * M(x) and whose low 48 bits are 0.
+	ext		v1.16b, v0.16b, v2.16b, #12	// extract high 32 bits
+	mov		v0.s[3], v2.s[0]	// zero high 32 bits
+	__pmull_\p	v1, v1, fold_consts	// high 32 bits * x^48 * (x^48 mod G(x))
+	eor		v0.16b, v0.16b, v1.16b	// + low bits
+
+	// Load G(x) and floor(x^48 / G(x)).
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
+	__pmull_pre_\p	fold_consts
+
+	// Use Barrett reduction to compute the final CRC value.
+	__pmull_\p	v1, v0, fold_consts, 2	// high 32 bits * floor(x^48 / G(x))
+	ushr		v1.2d, v1.2d, #32	// /= x^32
+	__pmull_\p	v1, v1, fold_consts	// *= G(x)
+	ushr		v0.2d, v0.2d, #48
+	eor		v0.16b, v0.16b, v1.16b	// + low 16 nonzero bits
+	// Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of v0.
+
+	umov		w0, v0.h[0]
 	frame_pop
 	ret
 
-.L_less_than_128_\@:
-	cbz		arg3, .L_cleanup_\@
+.Lless_than_256_bytes_\@:
+	// Checksumming a buffer of length 16...255 bytes
 
-	movi		v0.16b, #0
-	mov		v0.s[3], arg1_low32	// get the initial crc value
+	adr_l		fold_consts_ptr, .Lfold_across_16_bytes_consts
 
-	ldr		q7, [arg2], #0x10
+	// Load the first 16 data bytes.
+	ldr		q7, [buf], #0x10
 CPU_LE(	rev64		v7.16b, v7.16b			)
 CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
-	eor		v7.16b, v7.16b, v0.16b	// xor the initial crc value
-
-	cmp		arg3, #16
-	b.eq		.L_128_done_\@		// exactly 16 left
 
-	ldr_l		q10, rk1, x8		// rk1 and rk2 in xmm10
-	__pmull_pre_\p	v10
+	// XOR the first 16 data *bits* with the initial CRC value.
+	movi		v0.16b, #0
+	mov		v0.h[7], init_crc
+	eor		v7.16b, v7.16b, v0.16b
 
-	// update the counter. subtract 32 instead of 16 to save one
-	// instruction from the loop
-	subs		arg3, arg3, #32
-	b.ge		.L_16B_reduction_loop_\@
+	// Load the fold-across-16-bytes constants.
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
 
-	add		arg3, arg3, #16
-	b		.L_get_last_two_regs_\@
+	cmp		len, #16
+	b.eq		.Lreduce_final_16_bytes_\@	// len == 16
+	subs		len, len, #32
+	b.ge		.Lfold_16_bytes_loop_\@		// 32 <= len <= 255
+	add		len, len, #16
+	b		.Lhandle_partial_segment_\@	// 17 <= len <= 31
 	.endm
 
+//
+// u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 *buf, size_t len);
+//
+// Assumes len >= 16.
+//
 ENTRY(crc_t10dif_pmull_p8)
 	crc_t10dif_pmull	p8
 ENDPROC(crc_t10dif_pmull_p8)
 
 	.align		5
+//
+// u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 *buf, size_t len);
+//
+// Assumes len >= 16.
+//
 ENTRY(crc_t10dif_pmull_p64)
 	crc_t10dif_pmull	p64
 ENDPROC(crc_t10dif_pmull_p64)
 
-// precomputed constants
-// these constants are precomputed from the poly:
-// 0x8bb70000 (0x8bb7 scaled to 32 bits)
 	.section	".rodata", "a"
 	.align		4
-// Q = 0x18BB70000
-// rk1 = 2^(32*3) mod Q << 32
-// rk2 = 2^(32*5) mod Q << 32
-// rk3 = 2^(32*15) mod Q << 32
-// rk4 = 2^(32*17) mod Q << 32
-// rk5 = 2^(32*3) mod Q << 32
-// rk6 = 2^(32*2) mod Q << 32
-// rk7 = floor(2^64/Q)
-// rk8 = Q
-
-rk1:	.octa		0x06df0000000000002d56000000000000
-rk3:	.octa		0x7cf50000000000009d9d000000000000
-rk5:	.octa		0x13680000000000002d56000000000000
-rk7:	.octa		0x000000018bb7000000000001f65a57f8
-rk9:	.octa		0xbfd6000000000000ceae000000000000
-rk11:	.octa		0x713c0000000000001e16000000000000
-rk13:	.octa		0x80a6000000000000f7f9000000000000
-rk15:	.octa		0xe658000000000000044c000000000000
-rk17:	.octa		0xa497000000000000ad18000000000000
-rk19:	.octa		0xe7b50000000000006ee3000000000000
-
-tbl_shf_table:
-// use these values for shift constants for the tbl/tbx instruction
-// different alignments result in values as shown:
-//	DDQ 0x008f8e8d8c8b8a898887868584838281 # shl 15 (16-1) / shr1
-//	DDQ 0x01008f8e8d8c8b8a8988878685848382 # shl 14 (16-3) / shr2
-//	DDQ 0x0201008f8e8d8c8b8a89888786858483 # shl 13 (16-4) / shr3
-//	DDQ 0x030201008f8e8d8c8b8a898887868584 # shl 12 (16-4) / shr4
-//	DDQ 0x04030201008f8e8d8c8b8a8988878685 # shl 11 (16-5) / shr5
-//	DDQ 0x0504030201008f8e8d8c8b8a89888786 # shl 10 (16-6) / shr6
-//	DDQ 0x060504030201008f8e8d8c8b8a898887 # shl 9  (16-7) / shr7
-//	DDQ 0x07060504030201008f8e8d8c8b8a8988 # shl 8  (16-8) / shr8
-//	DDQ 0x0807060504030201008f8e8d8c8b8a89 # shl 7  (16-9) / shr9
-//	DDQ 0x090807060504030201008f8e8d8c8b8a # shl 6  (16-10) / shr10
-//	DDQ 0x0a090807060504030201008f8e8d8c8b # shl 5  (16-11) / shr11
-//	DDQ 0x0b0a090807060504030201008f8e8d8c # shl 4  (16-12) / shr12
-//	DDQ 0x0c0b0a090807060504030201008f8e8d # shl 3  (16-13) / shr13
-//	DDQ 0x0d0c0b0a090807060504030201008f8e # shl 2  (16-14) / shr14
-//	DDQ 0x0e0d0c0b0a090807060504030201008f # shl 1  (16-15) / shr15
 
+// Fold constants precomputed from the polynomial 0x18bb7
+// G(x) = x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + x^0
+.Lfold_across_128_bytes_consts:
+	.quad		0x0000000000006123	// x^(8*128)	mod G(x)
+	.quad		0x0000000000002295	// x^(8*128+64)	mod G(x)
+// .Lfold_across_64_bytes_consts:
+	.quad		0x0000000000001069	// x^(4*128)	mod G(x)
+	.quad		0x000000000000dd31	// x^(4*128+64)	mod G(x)
+// .Lfold_across_32_bytes_consts:
+	.quad		0x000000000000857d	// x^(2*128)	mod G(x)
+	.quad		0x0000000000007acc	// x^(2*128+64)	mod G(x)
+.Lfold_across_16_bytes_consts:
+	.quad		0x000000000000a010	// x^(1*128)	mod G(x)
+	.quad		0x0000000000001faa	// x^(1*128+64)	mod G(x)
+// .Lfinal_fold_consts:
+	.quad		0x1368000000000000	// x^48 * (x^48 mod G(x))
+	.quad		0x2d56000000000000	// x^48 * (x^80 mod G(x))
+// .Lbarrett_reduction_consts:
+	.quad		0x0000000000018bb7	// G(x)
+	.quad		0x00000001f65a57f8	// floor(x^48 / G(x))
+
+// For 1 <= len <= 15, the 16-byte vector beginning at &byteshift_table[16 -
+// len] is the index vector to shift left by 'len' bytes, and is also {0x80,
+// ..., 0x80} XOR the index vector to shift right by '16 - len' bytes.
+.Lbyteshift_table:
 	.byte		 0x0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87
 	.byte		0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f
 	.byte		 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7
diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 242757cc6da91..dd325829ee44f 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -22,8 +22,8 @@
 
 #define CRC_T10DIF_PMULL_CHUNK_SIZE	16U
 
-asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 buf[], u64 len);
-asmlinkage u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 buf[], u64 len);
+asmlinkage u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 *buf, size_t len);
+asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 *buf, size_t len);
 
 static int crct10dif_init(struct shash_desc *desc)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 22+ messages in thread

* [PATCH v3 3/3] crypto: arm64/crct10dif-ce - cleanup and optimizations
@ 2019-01-30  3:14   ` Eric Biggers
  0 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30  3:14 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Tim Chen, linux-arm-kernel, Ard Biesheuvel

From: Eric Biggers <ebiggers@google.com>

The x86, arm, and arm64 asm implementations of crct10dif are very
difficult to understand partly because many of the comments, labels, and
macros are named incorrectly: the lengths mentioned are usually off by a
factor of two from the actual code.  Many other things are unnecessarily
convoluted as well, e.g. there are many more fold constants than
actually needed and some aren't fully reduced.

This series therefore cleans up all these implementations to be much
more maintainable.  I also made some small optimizations where I saw
opportunities, resulting in slightly better performance.

This patch cleans up the arm64 version.

Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/crct10dif-ce-core.S | 496 ++++++++++++--------------
 arch/arm64/crypto/crct10dif-ce-glue.c |   4 +-
 2 files changed, 233 insertions(+), 267 deletions(-)

diff --git a/arch/arm64/crypto/crct10dif-ce-core.S b/arch/arm64/crypto/crct10dif-ce-core.S
index f7326259c40de..e545b42e6a468 100644
--- a/arch/arm64/crypto/crct10dif-ce-core.S
+++ b/arch/arm64/crypto/crct10dif-ce-core.S
@@ -2,12 +2,14 @@
 // Accelerated CRC-T10DIF using arm64 NEON and Crypto Extensions instructions
 //
 // Copyright (C) 2016 Linaro Ltd <ard.biesheuvel@linaro.org>
+// Copyright (C) 2019 Google LLC <ebiggers@google.com>
 //
 // This program is free software; you can redistribute it and/or modify
 // it under the terms of the GNU General Public License version 2 as
 // published by the Free Software Foundation.
 //
 
+// Derived from the x86 version:
 //
 // Implement fast CRC-T10DIF computation with SSE and PCLMULQDQ instructions
 //
@@ -54,19 +56,11 @@
 // NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
 // SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
 //
-//       Function API:
-//       UINT16 crc_t10dif_pcl(
-//               UINT16 init_crc, //initial CRC value, 16 bits
-//               const unsigned char *buf, //buffer pointer to calculate CRC on
-//               UINT64 len //buffer length in bytes (64-bit data)
-//       );
-//
 //       Reference paper titled "Fast CRC Computation for Generic
 //	Polynomials Using PCLMULQDQ Instruction"
 //       URL: http://www.intel.com/content/dam/www/public/us/en/documents
 //  /white-papers/fast-crc-computation-generic-polynomials-pclmulqdq-paper.pdf
 //
-//
 
 #include <linux/linkage.h>
 #include <asm/assembler.h>
@@ -74,14 +68,14 @@
 	.text
 	.cpu		generic+crypto
 
-	arg1_low32	.req	w19
-	arg2		.req	x20
-	arg3		.req	x21
+	init_crc	.req	w19
+	buf		.req	x20
+	len		.req	x21
+	fold_consts_ptr	.req	x22
 
-	vzr		.req	v13
+	fold_consts	.req	v10
 
 	ad		.req	v14
-	bd		.req	v10
 
 	k00_16		.req	v15
 	k32_48		.req	v16
@@ -143,11 +137,11 @@ __pmull_p8_core:
 	ext		t5.8b, ad.8b, ad.8b, #2			// A2
 	ext		t6.8b, ad.8b, ad.8b, #3			// A3
 
-	pmull		t4.8h, t4.8b, bd.8b			// F = A1*B
+	pmull		t4.8h, t4.8b, fold_consts.8b		// F = A1*B
 	pmull		t8.8h, ad.8b, bd1.8b			// E = A*B1
-	pmull		t5.8h, t5.8b, bd.8b			// H = A2*B
+	pmull		t5.8h, t5.8b, fold_consts.8b		// H = A2*B
 	pmull		t7.8h, ad.8b, bd2.8b			// G = A*B2
-	pmull		t6.8h, t6.8b, bd.8b			// J = A3*B
+	pmull		t6.8h, t6.8b, fold_consts.8b		// J = A3*B
 	pmull		t9.8h, ad.8b, bd3.8b			// I = A*B3
 	pmull		t3.8h, ad.8b, bd4.8b			// K = A*B4
 	b		0f
@@ -157,11 +151,11 @@ __pmull_p8_core:
 	tbl		t5.16b, {ad.16b}, perm2.16b		// A2
 	tbl		t6.16b, {ad.16b}, perm3.16b		// A3
 
-	pmull2		t4.8h, t4.16b, bd.16b			// F = A1*B
+	pmull2		t4.8h, t4.16b, fold_consts.16b		// F = A1*B
 	pmull2		t8.8h, ad.16b, bd1.16b			// E = A*B1
-	pmull2		t5.8h, t5.16b, bd.16b			// H = A2*B
+	pmull2		t5.8h, t5.16b, fold_consts.16b		// H = A2*B
 	pmull2		t7.8h, ad.16b, bd2.16b			// G = A*B2
-	pmull2		t6.8h, t6.16b, bd.16b			// J = A3*B
+	pmull2		t6.8h, t6.16b, fold_consts.16b		// J = A3*B
 	pmull2		t9.8h, ad.16b, bd3.16b			// I = A*B3
 	pmull2		t3.8h, ad.16b, bd4.16b			// K = A*B4
 
@@ -203,14 +197,14 @@ __pmull_p8_core:
 ENDPROC(__pmull_p8_core)
 
 	.macro		__pmull_p8, rq, ad, bd, i
-	.ifnc		\bd, v10
+	.ifnc		\bd, fold_consts
 	.err
 	.endif
 	mov		ad.16b, \ad\().16b
 	.ifb		\i
-	pmull		\rq\().8h, \ad\().8b, bd.8b		// D = A*B
+	pmull		\rq\().8h, \ad\().8b, \bd\().8b		// D = A*B
 	.else
-	pmull2		\rq\().8h, \ad\().16b, bd.16b		// D = A*B
+	pmull2		\rq\().8h, \ad\().16b, \bd\().16b	// D = A*B
 	.endif
 
 	bl		.L__pmull_p8_core\i
@@ -219,17 +213,19 @@ ENDPROC(__pmull_p8_core)
 	eor		\rq\().16b, \rq\().16b, t6.16b
 	.endm
 
-	.macro		fold64, p, reg1, reg2
-	ldp		q11, q12, [arg2], #0x20
+	// Fold reg1, reg2 into the next 32 data bytes, storing the result back
+	// into reg1, reg2.
+	.macro		fold_32_bytes, p, reg1, reg2
+	ldp		q11, q12, [buf], #0x20
 
-	__pmull_\p	v8, \reg1, v10, 2
-	__pmull_\p	\reg1, \reg1, v10
+	__pmull_\p	v8, \reg1, fold_consts, 2
+	__pmull_\p	\reg1, \reg1, fold_consts
 
 CPU_LE(	rev64		v11.16b, v11.16b		)
 CPU_LE(	rev64		v12.16b, v12.16b		)
 
-	__pmull_\p	v9, \reg2, v10, 2
-	__pmull_\p	\reg2, \reg2, v10
+	__pmull_\p	v9, \reg2, fold_consts, 2
+	__pmull_\p	\reg2, \reg2, fold_consts
 
 CPU_LE(	ext		v11.16b, v11.16b, v11.16b, #8	)
 CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
@@ -240,15 +236,16 @@ CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
 	eor		\reg2\().16b, \reg2\().16b, v12.16b
 	.endm
 
-	.macro		fold16, p, reg, rk
-	__pmull_\p	v8, \reg, v10
-	__pmull_\p	\reg, \reg, v10, 2
-	.ifnb		\rk
-	ldr_l		q10, \rk, x8
-	__pmull_pre_\p	v10
+	// Fold src_reg into dst_reg, optionally loading the next fold constants
+	.macro		fold_16_bytes, p, src_reg, dst_reg, load_next_consts
+	__pmull_\p	v8, \src_reg, fold_consts
+	__pmull_\p	\src_reg, \src_reg, fold_consts, 2
+	.ifnb		\load_next_consts
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
 	.endif
-	eor		v7.16b, v7.16b, v8.16b
-	eor		v7.16b, v7.16b, \reg\().16b
+	eor		\dst_reg\().16b, \dst_reg\().16b, v8.16b
+	eor		\dst_reg\().16b, \dst_reg\().16b, \src_reg\().16b
 	.endm
 
 	.macro		__pmull_p64, rd, rn, rm, n
@@ -260,40 +257,27 @@ CPU_LE(	ext		v12.16b, v12.16b, v12.16b, #8	)
 	.endm
 
 	.macro		crc_t10dif_pmull, p
-	frame_push	3, 128
+	frame_push	4, 128
 
-	mov		arg1_low32, w0
-	mov		arg2, x1
-	mov		arg3, x2
-
-	movi		vzr.16b, #0		// init zero register
+	mov		init_crc, w0
+	mov		buf, x1
+	mov		len, x2
 
 	__pmull_init_\p
 
-	// adjust the 16-bit initial_crc value, scale it to 32 bits
-	lsl		arg1_low32, arg1_low32, #16
-
-	// check if smaller than 256
-	cmp		arg3, #256
-
-	// for sizes less than 128, we can't fold 64B at a time...
-	b.lt		.L_less_than_128_\@
+	// For sizes less than 256 bytes, we can't fold 128 bytes at a time.
+	cmp		len, #256
+	b.lt		.Lless_than_256_bytes_\@
 
-	// load the initial crc value
-	// crc value does not need to be byte-reflected, but it needs
-	// to be moved to the high part of the register.
-	// because data will be byte-reflected and will align with
-	// initial crc at correct place.
-	movi		v10.16b, #0
-	mov		v10.s[3], arg1_low32		// initial crc
-
-	// receive the initial 64B data, xor the initial crc value
-	ldp		q0, q1, [arg2]
-	ldp		q2, q3, [arg2, #0x20]
-	ldp		q4, q5, [arg2, #0x40]
-	ldp		q6, q7, [arg2, #0x60]
-	add		arg2, arg2, #0x80
+	adr_l		fold_consts_ptr, .Lfold_across_128_bytes_consts
 
+	// Load the first 128 data bytes.  Byte swapping is necessary to make
+	// the bit order match the polynomial coefficient order.
+	ldp		q0, q1, [buf]
+	ldp		q2, q3, [buf, #0x20]
+	ldp		q4, q5, [buf, #0x40]
+	ldp		q6, q7, [buf, #0x60]
+	add		buf, buf, #0x80
 CPU_LE(	rev64		v0.16b, v0.16b			)
 CPU_LE(	rev64		v1.16b, v1.16b			)
 CPU_LE(	rev64		v2.16b, v2.16b			)
@@ -302,7 +286,6 @@ CPU_LE(	rev64		v4.16b, v4.16b			)
 CPU_LE(	rev64		v5.16b, v5.16b			)
 CPU_LE(	rev64		v6.16b, v6.16b			)
 CPU_LE(	rev64		v7.16b, v7.16b			)
-
 CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 CPU_LE(	ext		v1.16b, v1.16b, v1.16b, #8	)
 CPU_LE(	ext		v2.16b, v2.16b, v2.16b, #8	)
@@ -312,36 +295,29 @@ CPU_LE(	ext		v5.16b, v5.16b, v5.16b, #8	)
 CPU_LE(	ext		v6.16b, v6.16b, v6.16b, #8	)
 CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
 
-	// XOR the initial_crc value
-	eor		v0.16b, v0.16b, v10.16b
-
-	ldr_l		q10, rk3, x8	// xmm10 has rk3 and rk4
-					// type of pmull instruction
-					// will determine which constant to use
-	__pmull_pre_\p	v10
-
-	//
-	// we subtract 256 instead of 128 to save one instruction from the loop
-	//
-	sub		arg3, arg3, #256
-
-	// at this section of the code, there is 64*x+y (0<=y<64) bytes of
-	// buffer. The _fold_64_B_loop will fold 64B at a time
-	// until we have 64+y Bytes of buffer
+	// XOR the first 16 data *bits* with the initial CRC value.
+	movi		v8.16b, #0
+	mov		v8.h[7], init_crc
+	eor		v0.16b, v0.16b, v8.16b
 
-	// fold 64B at a time. This section of the code folds 4 vector
-	// registers in parallel
-.L_fold_64_B_loop_\@:
+	// Load the constants for folding across 128 bytes.
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
+	__pmull_pre_\p	fold_consts
 
-	fold64		\p, v0, v1
-	fold64		\p, v2, v3
-	fold64		\p, v4, v5
-	fold64		\p, v6, v7
+	// Subtract 128 for the 128 data bytes just consumed.  Subtract another
+	// 128 to simplify the termination condition of the following loop.
+	sub		len, len, #256
 
-	subs		arg3, arg3, #128
+	// While >= 128 data bytes remain (not counting v0-v7), fold the 128
+	// bytes v0-v7 into them, storing the result back into v0-v7.
+.Lfold_128_bytes_loop_\@:
+	fold_32_bytes	\p, v0, v1
+	fold_32_bytes	\p, v2, v3
+	fold_32_bytes	\p, v4, v5
+	fold_32_bytes	\p, v6, v7
 
-	// check if there is another 64B in the buffer to be able to fold
-	b.lt		.L_fold_64_B_end_\@
+	subs		len, len, #128
+	b.lt		.Lfold_128_bytes_loop_done_\@
 
 	if_will_cond_yield_neon
 	stp		q0, q1, [sp, #.Lframe_local_offset]
@@ -353,217 +329,207 @@ CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
 	ldp		q2, q3, [sp, #.Lframe_local_offset + 32]
 	ldp		q4, q5, [sp, #.Lframe_local_offset + 64]
 	ldp		q6, q7, [sp, #.Lframe_local_offset + 96]
-	ldr_l		q10, rk3, x8
-	movi		vzr.16b, #0		// init zero register
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
 	__pmull_init_\p
-	__pmull_pre_\p	v10
+	__pmull_pre_\p	fold_consts
 	endif_yield_neon
 
-	b		.L_fold_64_B_loop_\@
-
-.L_fold_64_B_end_\@:
-	// at this point, the buffer pointer is pointing at the last y Bytes
-	// of the buffer the 64B of folded data is in 4 of the vector
-	// registers: v0, v1, v2, v3
-
-	// fold the 8 vector registers to 1 vector register with different
-	// constants
-
-	ldr_l		q10, rk9, x8
-	__pmull_pre_\p	v10
-
-	fold16		\p, v0, rk11
-	fold16		\p, v1, rk13
-	fold16		\p, v2, rk15
-	fold16		\p, v3, rk17
-	fold16		\p, v4, rk19
-	fold16		\p, v5, rk1
-	fold16		\p, v6
-
-	// instead of 64, we add 48 to the loop counter to save 1 instruction
-	// from the loop instead of a cmp instruction, we use the negative
-	// flag with the jl instruction
-	adds		arg3, arg3, #(128-16)
-	b.lt		.L_final_reduction_for_128_\@
-
-	// now we have 16+y bytes left to reduce. 16 Bytes is in register v7
-	// and the rest is in memory. We can fold 16 bytes at a time if y>=16
-	// continue folding 16B at a time
-
-.L_16B_reduction_loop_\@:
-	__pmull_\p	v8, v7, v10
-	__pmull_\p	v7, v7, v10, 2
+	b		.Lfold_128_bytes_loop_\@
+
+.Lfold_128_bytes_loop_done_\@:
+
+	// Now fold the 112 bytes in v0-v6 into the 16 bytes in v7.
+
+	// Fold across 64 bytes.
+	add		fold_consts_ptr, fold_consts_ptr, #16
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
+	fold_16_bytes	\p, v0, v4
+	fold_16_bytes	\p, v1, v5
+	fold_16_bytes	\p, v2, v6
+	fold_16_bytes	\p, v3, v7, 1
+	// Fold across 32 bytes.
+	fold_16_bytes	\p, v4, v6
+	fold_16_bytes	\p, v5, v7, 1
+	// Fold across 16 bytes.
+	fold_16_bytes	\p, v6, v7
+
+	// Add 128 to get the correct number of data bytes remaining in 0...127
+	// (not counting v7), following the previous extra subtraction by 128.
+	// Then subtract 16 to simplify the termination condition of the
+	// following loop.
+	adds		len, len, #(128-16)
+
+	// While >= 16 data bytes remain (not counting v7), fold the 16 bytes v7
+	// into them, storing the result back into v7.
+	b.lt		.Lfold_16_bytes_loop_done_\@
+.Lfold_16_bytes_loop_\@:
+	__pmull_\p	v8, v7, fold_consts
+	__pmull_\p	v7, v7, fold_consts, 2
 	eor		v7.16b, v7.16b, v8.16b
-
-	ldr		q0, [arg2], #16
+	ldr		q0, [buf], #16
 CPU_LE(	rev64		v0.16b, v0.16b			)
 CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 	eor		v7.16b, v7.16b, v0.16b
-	subs		arg3, arg3, #16
-
-	// instead of a cmp instruction, we utilize the flags with the
-	// jge instruction equivalent of: cmp arg3, 16-16
-	// check if there is any more 16B in the buffer to be able to fold
-	b.ge		.L_16B_reduction_loop_\@
-
-	// now we have 16+z bytes left to reduce, where 0<= z < 16.
-	// first, we reduce the data in the xmm7 register
-
-.L_final_reduction_for_128_\@:
-	// check if any more data to fold. If not, compute the CRC of
-	// the final 128 bits
-	adds		arg3, arg3, #16
-	b.eq		.L_128_done_\@
-
-	// here we are getting data that is less than 16 bytes.
-	// since we know that there was data before the pointer, we can
-	// offset the input pointer before the actual point, to receive
-	// exactly 16 bytes. after that the registers need to be adjusted.
-.L_get_last_two_regs_\@:
-	add		arg2, arg2, arg3
-	ldr		q1, [arg2, #-16]
-CPU_LE(	rev64		v1.16b, v1.16b			)
-CPU_LE(	ext		v1.16b, v1.16b, v1.16b, #8	)
-
-	// get rid of the extra data that was loaded before
-	// load the shift constant
-	adr_l		x4, tbl_shf_table + 16
-	sub		x4, x4, arg3
-	ld1		{v0.16b}, [x4]
-
-	// shift v2 to the left by arg3 bytes
-	tbl		v2.16b, {v7.16b}, v0.16b
-
-	// shift v7 to the right by 16-arg3 bytes
-	movi		v9.16b, #0x80
-	eor		v0.16b, v0.16b, v9.16b
-	tbl		v7.16b, {v7.16b}, v0.16b
-
-	// blend
-	sshr		v0.16b, v0.16b, #7	// convert to 8-bit mask
-	bsl		v0.16b, v2.16b, v1.16b
-
-	// fold 16 Bytes
-	__pmull_\p	v8, v7, v10
-	__pmull_\p	v7, v7, v10, 2
-	eor		v7.16b, v7.16b, v8.16b
-	eor		v7.16b, v7.16b, v0.16b
+	subs		len, len, #16
+	b.ge		.Lfold_16_bytes_loop_\@
+
+.Lfold_16_bytes_loop_done_\@:
+	// Add 16 to get the correct number of data bytes remaining in 0...15
+	// (not counting v7), following the previous extra subtraction by 16.
+	adds		len, len, #16
+	b.eq		.Lreduce_final_16_bytes_\@
+
+.Lhandle_partial_segment_\@:
+	// Reduce the last '16 + len' bytes where 1 <= len <= 15 and the first
+	// 16 bytes are in v7 and the rest are the remaining data in 'buf'.  To
+	// do this without needing a fold constant for each possible 'len',
+	// redivide the bytes into a first chunk of 'len' bytes and a second
+	// chunk of 16 bytes, then fold the first chunk into the second.
+
+	// v0 = last 16 original data bytes
+	add		buf, buf, len
+	ldr		q0, [buf, #-16]
+CPU_LE(	rev64		v0.16b, v0.16b			)
+CPU_LE(	ext		v0.16b, v0.16b, v0.16b, #8	)
 
-.L_128_done_\@:
-	// compute crc of a 128-bit value
-	ldr_l		q10, rk5, x8		// rk5 and rk6 in xmm10
-	__pmull_pre_\p	v10
+	// v1 = high order part of second chunk: v7 left-shifted by 'len' bytes.
+	adr_l		x4, .Lbyteshift_table + 16
+	sub		x4, x4, len
+	ld1		{v2.16b}, [x4]
+	tbl		v1.16b, {v7.16b}, v2.16b
 
-	// 64b fold
-	ext		v0.16b, vzr.16b, v7.16b, #8
-	mov		v7.d[0], v7.d[1]
-	__pmull_\p	v7, v7, v10
-	eor		v7.16b, v7.16b, v0.16b
+	// v3 = first chunk: v7 right-shifted by '16-len' bytes.
+	movi		v3.16b, #0x80
+	eor		v2.16b, v2.16b, v3.16b
+	tbl		v3.16b, {v7.16b}, v2.16b
 
-	// 32b fold
-	ext		v0.16b, v7.16b, vzr.16b, #4
-	mov		v7.s[3], vzr.s[0]
-	__pmull_\p	v0, v0, v10, 2
-	eor		v7.16b, v7.16b, v0.16b
+	// Convert to 8-bit masks: 'len' 0x00 bytes, then '16-len' 0xff bytes.
+	sshr		v2.16b, v2.16b, #7
 
-	// barrett reduction
-	ldr_l		q10, rk7, x8
-	__pmull_pre_\p	v10
-	mov		v0.d[0], v7.d[1]
+	// v2 = second chunk: 'len' bytes from v0 (low-order bytes),
+	// then '16-len' bytes from v1 (high-order bytes).
+	bsl		v2.16b, v1.16b, v0.16b
 
-	__pmull_\p	v0, v0, v10
-	ext		v0.16b, vzr.16b, v0.16b, #12
-	__pmull_\p	v0, v0, v10, 2
-	ext		v0.16b, vzr.16b, v0.16b, #12
+	// Fold the first chunk into the second chunk, storing the result in v7.
+	__pmull_\p	v0, v3, fold_consts
+	__pmull_\p	v7, v3, fold_consts, 2
 	eor		v7.16b, v7.16b, v0.16b
-	mov		w0, v7.s[1]
-
-.L_cleanup_\@:
-	// scale the result back to 16 bits
-	lsr		x0, x0, #16
+	eor		v7.16b, v7.16b, v2.16b
+
+.Lreduce_final_16_bytes_\@:
+	// Reduce the 128-bit value M(x), stored in v7, to the final 16-bit CRC.
+
+	movi		v2.16b, #0		// init zero register
+
+	// Load 'x^48 * (x^48 mod G(x))' and 'x^48 * (x^80 mod G(x))'.
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
+
+	// Fold the high 64 bits into the low 64 bits, while also multiplying by
+	// x^64.  This produces a 128-bit value congruent to x^64 * M(x) and
+	// whose low 48 bits are 0.
+	ext		v0.16b, v2.16b, v7.16b, #8
+	__pmull_\p	v7, v7, fold_consts, 2	// high bits * x^48 * (x^80 mod G(x))
+	eor		v0.16b, v0.16b, v7.16b	// + low bits * x^64
+
+	// Fold the high 32 bits into the low 96 bits.  This produces a 96-bit
+	// value congruent to x^64 * M(x) and whose low 48 bits are 0.
+	ext		v1.16b, v0.16b, v2.16b, #12	// extract high 32 bits
+	mov		v0.s[3], v2.s[0]	// zero high 32 bits
+	__pmull_\p	v1, v1, fold_consts	// high 32 bits * x^48 * (x^48 mod G(x))
+	eor		v0.16b, v0.16b, v1.16b	// + low bits
+
+	// Load G(x) and floor(x^48 / G(x)).
+	ld1		{fold_consts.2d}, [fold_consts_ptr]
+	__pmull_pre_\p	fold_consts
+
+	// Use Barrett reduction to compute the final CRC value.
+	__pmull_\p	v1, v0, fold_consts, 2	// high 32 bits * floor(x^48 / G(x))
+	ushr		v1.2d, v1.2d, #32	// /= x^32
+	__pmull_\p	v1, v1, fold_consts	// *= G(x)
+	ushr		v0.2d, v0.2d, #48
+	eor		v0.16b, v0.16b, v1.16b	// + low 16 nonzero bits
+	// Final CRC value (x^16 * M(x)) mod G(x) is in low 16 bits of v0.
+
+	umov		w0, v0.h[0]
 	frame_pop
 	ret
 
-.L_less_than_128_\@:
-	cbz		arg3, .L_cleanup_\@
+.Lless_than_256_bytes_\@:
+	// Checksumming a buffer of length 16...255 bytes
 
-	movi		v0.16b, #0
-	mov		v0.s[3], arg1_low32	// get the initial crc value
+	adr_l		fold_consts_ptr, .Lfold_across_16_bytes_consts
 
-	ldr		q7, [arg2], #0x10
+	// Load the first 16 data bytes.
+	ldr		q7, [buf], #0x10
 CPU_LE(	rev64		v7.16b, v7.16b			)
 CPU_LE(	ext		v7.16b, v7.16b, v7.16b, #8	)
-	eor		v7.16b, v7.16b, v0.16b	// xor the initial crc value
-
-	cmp		arg3, #16
-	b.eq		.L_128_done_\@		// exactly 16 left
 
-	ldr_l		q10, rk1, x8		// rk1 and rk2 in xmm10
-	__pmull_pre_\p	v10
+	// XOR the first 16 data *bits* with the initial CRC value.
+	movi		v0.16b, #0
+	mov		v0.h[7], init_crc
+	eor		v7.16b, v7.16b, v0.16b
 
-	// update the counter. subtract 32 instead of 16 to save one
-	// instruction from the loop
-	subs		arg3, arg3, #32
-	b.ge		.L_16B_reduction_loop_\@
+	// Load the fold-across-16-bytes constants.
+	ld1		{fold_consts.2d}, [fold_consts_ptr], #16
+	__pmull_pre_\p	fold_consts
 
-	add		arg3, arg3, #16
-	b		.L_get_last_two_regs_\@
+	cmp		len, #16
+	b.eq		.Lreduce_final_16_bytes_\@	// len == 16
+	subs		len, len, #32
+	b.ge		.Lfold_16_bytes_loop_\@		// 32 <= len <= 255
+	add		len, len, #16
+	b		.Lhandle_partial_segment_\@	// 17 <= len <= 31
 	.endm
 
+//
+// u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 *buf, size_t len);
+//
+// Assumes len >= 16.
+//
 ENTRY(crc_t10dif_pmull_p8)
 	crc_t10dif_pmull	p8
 ENDPROC(crc_t10dif_pmull_p8)
 
 	.align		5
+//
+// u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 *buf, size_t len);
+//
+// Assumes len >= 16.
+//
 ENTRY(crc_t10dif_pmull_p64)
 	crc_t10dif_pmull	p64
 ENDPROC(crc_t10dif_pmull_p64)
 
-// precomputed constants
-// these constants are precomputed from the poly:
-// 0x8bb70000 (0x8bb7 scaled to 32 bits)
 	.section	".rodata", "a"
 	.align		4
-// Q = 0x18BB70000
-// rk1 = 2^(32*3) mod Q << 32
-// rk2 = 2^(32*5) mod Q << 32
-// rk3 = 2^(32*15) mod Q << 32
-// rk4 = 2^(32*17) mod Q << 32
-// rk5 = 2^(32*3) mod Q << 32
-// rk6 = 2^(32*2) mod Q << 32
-// rk7 = floor(2^64/Q)
-// rk8 = Q
-
-rk1:	.octa		0x06df0000000000002d56000000000000
-rk3:	.octa		0x7cf50000000000009d9d000000000000
-rk5:	.octa		0x13680000000000002d56000000000000
-rk7:	.octa		0x000000018bb7000000000001f65a57f8
-rk9:	.octa		0xbfd6000000000000ceae000000000000
-rk11:	.octa		0x713c0000000000001e16000000000000
-rk13:	.octa		0x80a6000000000000f7f9000000000000
-rk15:	.octa		0xe658000000000000044c000000000000
-rk17:	.octa		0xa497000000000000ad18000000000000
-rk19:	.octa		0xe7b50000000000006ee3000000000000
-
-tbl_shf_table:
-// use these values for shift constants for the tbl/tbx instruction
-// different alignments result in values as shown:
-//	DDQ 0x008f8e8d8c8b8a898887868584838281 # shl 15 (16-1) / shr1
-//	DDQ 0x01008f8e8d8c8b8a8988878685848382 # shl 14 (16-3) / shr2
-//	DDQ 0x0201008f8e8d8c8b8a89888786858483 # shl 13 (16-4) / shr3
-//	DDQ 0x030201008f8e8d8c8b8a898887868584 # shl 12 (16-4) / shr4
-//	DDQ 0x04030201008f8e8d8c8b8a8988878685 # shl 11 (16-5) / shr5
-//	DDQ 0x0504030201008f8e8d8c8b8a89888786 # shl 10 (16-6) / shr6
-//	DDQ 0x060504030201008f8e8d8c8b8a898887 # shl 9  (16-7) / shr7
-//	DDQ 0x07060504030201008f8e8d8c8b8a8988 # shl 8  (16-8) / shr8
-//	DDQ 0x0807060504030201008f8e8d8c8b8a89 # shl 7  (16-9) / shr9
-//	DDQ 0x090807060504030201008f8e8d8c8b8a # shl 6  (16-10) / shr10
-//	DDQ 0x0a090807060504030201008f8e8d8c8b # shl 5  (16-11) / shr11
-//	DDQ 0x0b0a090807060504030201008f8e8d8c # shl 4  (16-12) / shr12
-//	DDQ 0x0c0b0a090807060504030201008f8e8d # shl 3  (16-13) / shr13
-//	DDQ 0x0d0c0b0a090807060504030201008f8e # shl 2  (16-14) / shr14
-//	DDQ 0x0e0d0c0b0a090807060504030201008f # shl 1  (16-15) / shr15
 
+// Fold constants precomputed from the polynomial 0x18bb7
+// G(x) = x^16 + x^15 + x^11 + x^9 + x^8 + x^7 + x^5 + x^4 + x^2 + x^1 + x^0
+.Lfold_across_128_bytes_consts:
+	.quad		0x0000000000006123	// x^(8*128)	mod G(x)
+	.quad		0x0000000000002295	// x^(8*128+64)	mod G(x)
+// .Lfold_across_64_bytes_consts:
+	.quad		0x0000000000001069	// x^(4*128)	mod G(x)
+	.quad		0x000000000000dd31	// x^(4*128+64)	mod G(x)
+// .Lfold_across_32_bytes_consts:
+	.quad		0x000000000000857d	// x^(2*128)	mod G(x)
+	.quad		0x0000000000007acc	// x^(2*128+64)	mod G(x)
+.Lfold_across_16_bytes_consts:
+	.quad		0x000000000000a010	// x^(1*128)	mod G(x)
+	.quad		0x0000000000001faa	// x^(1*128+64)	mod G(x)
+// .Lfinal_fold_consts:
+	.quad		0x1368000000000000	// x^48 * (x^48 mod G(x))
+	.quad		0x2d56000000000000	// x^48 * (x^80 mod G(x))
+// .Lbarrett_reduction_consts:
+	.quad		0x0000000000018bb7	// G(x)
+	.quad		0x00000001f65a57f8	// floor(x^48 / G(x))
+
+// For 1 <= len <= 15, the 16-byte vector beginning at &byteshift_table[16 -
+// len] is the index vector to shift left by 'len' bytes, and is also {0x80,
+// ..., 0x80} XOR the index vector to shift right by '16 - len' bytes.
+.Lbyteshift_table:
 	.byte		 0x0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x87
 	.byte		0x88, 0x89, 0x8a, 0x8b, 0x8c, 0x8d, 0x8e, 0x8f
 	.byte		 0x0,  0x1,  0x2,  0x3,  0x4,  0x5,  0x6,  0x7
diff --git a/arch/arm64/crypto/crct10dif-ce-glue.c b/arch/arm64/crypto/crct10dif-ce-glue.c
index 242757cc6da91..dd325829ee44f 100644
--- a/arch/arm64/crypto/crct10dif-ce-glue.c
+++ b/arch/arm64/crypto/crct10dif-ce-glue.c
@@ -22,8 +22,8 @@
 
 #define CRC_T10DIF_PMULL_CHUNK_SIZE	16U
 
-asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 buf[], u64 len);
-asmlinkage u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 buf[], u64 len);
+asmlinkage u16 crc_t10dif_pmull_p8(u16 init_crc, const u8 *buf, size_t len);
+asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 *buf, size_t len);
 
 static int crct10dif_init(struct shash_desc *desc)
 {
-- 
2.20.1


_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply related	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
  2019-01-30  3:14 ` Eric Biggers
@ 2019-01-30  8:33   ` Ard Biesheuvel
  -1 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2019-01-30  8:33 UTC (permalink / raw)
  To: Eric Biggers
  Cc: open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Herbert Xu,
	Tim Chen, linux-arm-kernel

On Wed, 30 Jan 2019 at 04:15, Eric Biggers <ebiggers@kernel.org> wrote:
>
> The x86, arm, and arm64 asm implementations of crct10dif are very
> difficult to understand partly because many of the comments, labels, and
> macros are named incorrectly: the lengths mentioned are usually off by a
> factor of two from the actual code.  Many other things are unnecessarily
> convoluted as well, e.g. there are many more fold constants than
> actually needed and some aren't fully reduced.
>
> This series therefore cleans up all these implementations to be much
> more maintainable.  I also made some small optimizations where I saw
> opportunities, resulting in slightly better performance.
>
> This is based on top of the pending patches from Ard Biesheuvel.
>
> These all pass the new extra self-tests.
>

Hi Eric,

As an FYI, the issue that broke ARM and arm64 with your updated
selftests was the 1-byte length special case that you also have
special handling for in the x86 version (but while fixing that, I
noticed my version was reading beyond the end of the input). I think
it hardly matters, though, given the way T10-DIF appears to be used in
practice (disk blocks), although it is hard to be sure from reading
the code, and the algo should be correct in any case.
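
For context, the accelerated entry points now assume len >= 16 (as the
new function comments state), so the glue layer has to divert shorter
updates to the generic C code.  A simplified sketch of that guard, with
a hypothetical function name (the real logic lives in
crct10dif-ce-glue.c), might look like:

#include <crypto/internal/hash.h>
#include <linux/crc-t10dif.h>
#include <linux/linkage.h>
#include <linux/types.h>
#include <asm/neon.h>
#include <asm/simd.h>

#define CRC_T10DIF_PMULL_CHUNK_SIZE	16U

asmlinkage u16 crc_t10dif_pmull_p64(u16 init_crc, const u8 *buf, size_t len);

static int crct10dif_update_sketch(struct shash_desc *desc, const u8 *data,
				   unsigned int length)
{
	u16 *crc = shash_desc_ctx(desc);

	if (length >= CRC_T10DIF_PMULL_CHUNK_SIZE && may_use_simd()) {
		/* Long enough for the PMULL code, and NEON is usable. */
		kernel_neon_begin();
		*crc = crc_t10dif_pmull_p64(*crc, data, length);
		kernel_neon_end();
	} else {
		/* Short update (or no NEON): fall back to the C version. */
		*crc = crc_t10dif_generic(*crc, data, length);
	}

	return 0;
}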

So what remains is the way these implementations are encapsulated by
the crct10dif() library function, which is raster nasty, making
CRC-T10DIF an excellent use case to discuss whether we can make any
improvements to address some of the concerns that were also raised in
the zinc discussion. I threw some code together a while ago [0] (and
posted it as well, IIRC). In the meantime, a 'static call'
infrastructure is being proposed that could be used in a similar way
to avoid function pointers. I'm also interested in hearing opinions on
whether the indirect call overhead is actually significant in use
cases such as this one.
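
As a rough illustration of the pointer-based flavour being discussed
(purely a hypothetical sketch, not the code in [0] and not what
mainline does today), the library entry point could dispatch through a
single writable function pointer that an accelerated module installs
at init time:

#include <linux/compiler.h>
#include <linux/crc-t10dif.h>
#include <linux/types.h>

typedef u16 (*crc_t10dif_fn_t)(u16 crc, const unsigned char *buf,
			       size_t len);

/* Default to the generic table-driven implementation. */
static crc_t10dif_fn_t crc_t10dif_impl = crc_t10dif_generic;

u16 crc_t10dif_sketch(const unsigned char *buf, size_t len)
{
	/* One indirect call per update: the overhead in question. */
	return crc_t10dif_impl(0, buf, len);
}

/* Called by e.g. the PMULL glue once the CPU feature check passes. */
void crc_t10dif_set_impl_sketch(crc_t10dif_fn_t fn)
{
	WRITE_ONCE(crc_t10dif_impl, fn);
}

Keeping the generic routine as the initial value means the pointer is
always valid, so callers never have to care whether an accelerated
backend is present.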

-- 
Ard.



[0] https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=crypto-crct10dif


> Changed since v2:
> - Removed the unnecessary '__LINUX_ARM_ARCH__ < 7' case.
> - Added Ard's Acked-by.
>
> Changed since v1:
> - Moved constants in arm implementation to .rodata.
> - Eliminated a few instructions from the x86 implementation.
> - Tweaked a few comments.
>
> Eric Biggers (3):
>   crypto: x86/crct10dif-pcl - cleanup and optimizations
>   crypto: arm/crct10dif-ce - cleanup and optimizations
>   crypto: arm64/crct10dif-ce - cleanup and optimizations
>
>  arch/arm/crypto/crct10dif-ce-core.S     | 552 ++++++++--------
>  arch/arm/crypto/crct10dif-ce-glue.c     |   2 +-
>  arch/arm64/crypto/crct10dif-ce-core.S   | 496 +++++++-------
>  arch/arm64/crypto/crct10dif-ce-glue.c   |   4 +-
>  arch/x86/crypto/crct10dif-pcl-asm_64.S  | 844 +++++++++---------------
>  arch/x86/crypto/crct10dif-pclmul_glue.c |   3 +-
>  6 files changed, 794 insertions(+), 1107 deletions(-)
>
> --
> 2.20.1
>

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
@ 2019-01-30  8:33   ` Ard Biesheuvel
  0 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2019-01-30  8:33 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Tim Chen, open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	linux-arm-kernel, Herbert Xu

On Wed, 30 Jan 2019 at 04:15, Eric Biggers <ebiggers@kernel.org> wrote:
>
> The x86, arm, and arm64 asm implementations of crct10dif are very
> difficult to understand partly because many of the comments, labels, and
> macros are named incorrectly: the lengths mentioned are usually off by a
> factor of two from the actual code.  Many other things are unnecessarily
> convoluted as well, e.g. there are many more fold constants than
> actually needed and some aren't fully reduced.
>
> This series therefore cleans up all these implementations to be much
> more maintainable.  I also made some small optimizations where I saw
> opportunities, resulting in slightly better performance.
>
> This is based on top of the pending patches from Ard Biesheuvel.
>
> These all pass the new extra self-tests.
>

Hi Eric,

As an FYI, the issue that broke ARM and arm64 with your updated
selftests was the 1-byte length special case that you also have
special handling for in the x86 version (but while fixing that, I
noticed my version was reading beyond the end of the input). I think
it hardly matters, though, given the way T10-DIF appears to be used in
practice (disk blocks), although it is hard to be sure from reading
the code, and the algo should be correct in any case.

So what remains is the way these implementations are encapsulated by
the crct10dif() library function, which is rather nasty, making
CRC-T10DIF an excellent use case to discuss whether we can make any
improvements to address some of the concerns that were also raised in
the zinc discussion. I threw some code together a while ago [0] (and
posted it as well, IIRC). In the meantime, a 'static call'
infrastructure is being proposed that could be used in a similar way
to avoid function pointers. I'm also interested in hearing opinions on
whether the indirect call overhead is actually significant in use
cases such as this one.

-- 
Ard.



[0] https://git.kernel.org/pub/scm/linux/kernel/git/ardb/linux.git/log/?h=crypto-crct10dif


> Changed since v2:
> - Removed the unnecessary '__LINUX_ARM_ARCH__ < 7' case.
> - Added Ard's Acked-by.
>
> Changed since v1:
> - Moved constants in arm implementation to .rodata.
> - Eliminated a few instructions from the x86 implementation.
> - Tweaked a few comments.
>
> Eric Biggers (3):
>   crypto: x86/crct10dif-pcl - cleanup and optimizations
>   crypto: arm/crct10dif-ce - cleanup and optimizations
>   crypto: arm64/crct10dif-ce - cleanup and optimizations
>
>  arch/arm/crypto/crct10dif-ce-core.S     | 552 ++++++++--------
>  arch/arm/crypto/crct10dif-ce-glue.c     |   2 +-
>  arch/arm64/crypto/crct10dif-ce-core.S   | 496 +++++++-------
>  arch/arm64/crypto/crct10dif-ce-glue.c   |   4 +-
>  arch/x86/crypto/crct10dif-pcl-asm_64.S  | 844 +++++++++---------------
>  arch/x86/crypto/crct10dif-pclmul_glue.c |   3 +-
>  6 files changed, 794 insertions(+), 1107 deletions(-)
>
> --
> 2.20.1
>

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
  2019-01-30  8:33   ` Ard Biesheuvel
@ 2019-01-30  9:13     ` Herbert Xu
  -1 siblings, 0 replies; 22+ messages in thread
From: Herbert Xu @ 2019-01-30  9:13 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Eric Biggers, open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Tim Chen, linux-arm-kernel

On Wed, Jan 30, 2019 at 09:33:57AM +0100, Ard Biesheuvel wrote:
> 
> So what remains is the way these implementations are encapsulated by
> the crct10dif() library function, which is rather nasty, making
> CRC-T10DIF an excellent use case to discuss whether we can make any
> improvements to address some of the concerns that were also raised in
> the zinc discussion. I threw some code together a while ago [0] (and
> posted it as well, IIRC). In the meantime, a 'static call'
> infrastructure is being proposed that could be used in a similar way
> to avoid function pointers. I'm also interested in hearing opinions on
> whether the indirect call overhead is actually significant in use
> cases such as this one.

I think even if the overhead wasn't significant it would still make
sense to make the move just for the sake of simplicity.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
@ 2019-01-30  9:13     ` Herbert Xu
  0 siblings, 0 replies; 22+ messages in thread
From: Herbert Xu @ 2019-01-30  9:13 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: Eric Biggers, Tim Chen,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	linux-arm-kernel

On Wed, Jan 30, 2019 at 09:33:57AM +0100, Ard Biesheuvel wrote:
> 
> So what remains is the way these implementations are encapsulated by
> the crct10dif() library function, which is rather nasty, making
> CRC-T10DIF an excellent use case to discuss whether we can make any
> improvements to address some of the concerns that were also raised in
> the zinc discussion. I threw some code together a while ago [0] (and
> posted it as well, IIRC). In the meantime, a 'static call'
> infrastructure is being proposed that could be used in a similar way
> to avoid function pointers. I'm also interested in hearing opinions on
> whether the indirect call overhead is actually significant in use
> cases such as this one.

I think even if the overhead wasn't significant it would still make
sense to make the move just for the sake of simplicity.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
  2019-01-30  9:13     ` Herbert Xu
@ 2019-01-30  9:19       ` Ard Biesheuvel
  -1 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2019-01-30  9:19 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Eric Biggers, open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	Tim Chen, linux-arm-kernel

On Wed, 30 Jan 2019 at 10:13, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Wed, Jan 30, 2019 at 09:33:57AM +0100, Ard Biesheuvel wrote:
> >
> > So what remains is the way these implementations are encapsulated by
> > the crct10dif() library function, which is rather nasty, making
> > CRC-T10DIF an excellent use case to discuss whether we can make any
> > improvements to address some of the concerns that were also raised in
> > the zinc discussion. I threw some code together a while ago [0] (and
> > posted it as well, IIRC). In the meantime, a 'static call'
> > infrastructure is being proposed that could be used in a similar way
> > to avoid function pointers. I'm also interested in hearing opinions on
> > whether the indirect call overhead is actually significant in use
> > cases such as this one.
>
> I think even if the overhead wasn't significant it would still make
> sense to make the move just for the sake of simplicity.
>

Agreed, we should simplify this if we can.

However, my question is whether in this particular case, a simple
indirect call via a function pointer is /so/ much worse than a direct
call that relies on code patching techniques that are different on
every arch (and may rely on objtool or GCC plugins) that the extra
complexity is justified.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
@ 2019-01-30  9:19       ` Ard Biesheuvel
  0 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2019-01-30  9:19 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Eric Biggers, Tim Chen,
	open list:HARDWARE RANDOM NUMBER GENERATOR CORE,
	linux-arm-kernel

On Wed, 30 Jan 2019 at 10:13, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Wed, Jan 30, 2019 at 09:33:57AM +0100, Ard Biesheuvel wrote:
> >
> > So what remains is the way these implementations are encapsulated by
> > the crct10dif() library function, which is rather nasty, making
> > CRC-T10DIF an excellent use case to discuss whether we can make any
> > improvements to address some of the concerns that were also raised in
> > the zinc discussion. I threw some code together a while ago [0] (and
> > posted it as well, IIRC). In the meantime, a 'static call'
> > infrastructure is being proposed that could be used in a similar way
> > to avoid function pointers. I'm also interested in hearing opinions on
> > whether the indirect call overhead is actually significant in use
> > cases such as this one.
>
> I think even if the overhead wasn't significant it would still make
> sense to make the move just for the sake of simplicity.
>

Agreed, we should simplify this if we can.

However, my question is whether in this particular case, a simple
indirect call via a function pointer is /so/ much worse than a direct
call that relies on code patching techniques that are different on
every arch (and may rely on objtool or GCC plugins) that the extra
complexity is justified.

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] crypto: arm/crct10dif-ce - cleanup and optimizations
  2019-01-30  3:14   ` Eric Biggers
@ 2019-01-30 20:55     ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30 20:55 UTC (permalink / raw)
  To: linux-crypto, Ard Biesheuvel; +Cc: Herbert Xu, Tim Chen, linux-arm-kernel

On Tue, Jan 29, 2019 at 07:14:43PM -0800, Eric Biggers wrote:
> +	.macro		__adrl, out, sym
> +	movw		\out, #:lower16:\sym
> +	movt		\out, #:upper16:\sym
> +	.endm

Hi Ard, it seems we need the __LINUX_ARM_ARCH__ < 7 case after all?
The kbuild test robot reported a build error, which can be reproduced with:

( cat arch/arm/configs/multi_v7_defconfig;
  echo CONFIG_ARCH_MULTI_V6=y;
  echo CONFIG_CRC_T10DIF=y;
  echo CONFIG_CRYPTO_CRCT10DIF_ARM_CE=y ) > .config
make olddefconfig
make arch/arm/crypto/crct10dif-ce-core.o


  AS      arch/arm/crypto/crct10dif-ce-core.o
arch/arm/crypto/crct10dif-ce-core.S: Assembler messages:
arch/arm/crypto/crct10dif-ce-core.S:165: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_128_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_128_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:267: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movw r3,#:lower16:.Lbyteshift_table+16' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movt r3,#:upper16:.Lbyteshift_table+16' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:329: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_16_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_16_bytes_consts' in ARM mode
scripts/Makefile.build:367: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
make[1]: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 1
Makefile:1709: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
make: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 2

- Eric

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] crypto: arm/crct10dif-ce - cleanup and optimizations
@ 2019-01-30 20:55     ` Eric Biggers
  0 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30 20:55 UTC (permalink / raw)
  To: linux-crypto, Ard Biesheuvel; +Cc: Tim Chen, Herbert Xu, linux-arm-kernel

On Tue, Jan 29, 2019 at 07:14:43PM -0800, Eric Biggers wrote:
> +	.macro		__adrl, out, sym
> +	movw		\out, #:lower16:\sym
> +	movt		\out, #:upper16:\sym
> +	.endm

Hi Ard, it seems we need the __LINUX_ARM_ARCH__ < 7 case after all?
The kbuild test robot reported a build error, which can be reproduced with:

( cat arch/arm/configs/multi_v7_defconfig;
  echo CONFIG_ARCH_MULTI_V6=y;
  echo CONFIG_CRC_T10DIF=y;
  echo CONFIG_CRYPTO_CRCT10DIF_ARM_CE=y ) > .config
make olddefconfig
make arch/arm/crypto/crct10dif-ce-core.o


  AS      arch/arm/crypto/crct10dif-ce-core.o
arch/arm/crypto/crct10dif-ce-core.S: Assembler messages:
arch/arm/crypto/crct10dif-ce-core.S:165: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_128_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_128_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:267: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movw r3,#:lower16:.Lbyteshift_table+16' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movt r3,#:upper16:.Lbyteshift_table+16' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:329: Error: .err encountered
arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_16_bytes_consts' in ARM mode
arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_16_bytes_consts' in ARM mode
scripts/Makefile.build:367: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
make[1]: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 1
Makefile:1709: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
make: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 2

- Eric

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] crypto: arm/crct10dif-ce - cleanup and optimizations
  2019-01-30 20:55     ` Eric Biggers
@ 2019-01-30 21:00       ` Ard Biesheuvel
  -1 siblings, 0 replies; 22+ messages in thread
From: Ard Biesheuvel @ 2019-01-30 21:00 UTC (permalink / raw)
  To: Eric Biggers
  Cc: open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Herbert Xu,
	Tim Chen, linux-arm-kernel

On Wed, 30 Jan 2019 at 21:55, Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Jan 29, 2019 at 07:14:43PM -0800, Eric Biggers wrote:
> > +     .macro          __adrl, out, sym
> > +     movw            \out, #:lower16:\sym
> > +     movt            \out, #:upper16:\sym
> > +     .endm
>
> Hi Ard, it seems we need the __LINUX_ARM_ARCH__ < 7 case after all?
> The kbuild test robot reported a build error, which can be reproduced with:
>
> ( cat arch/arm/configs/multi_v7_defconfig;
>   echo CONFIG_ARCH_MULTI_V6=y;
>   echo CONFIG_CRC_T10DIF=y;
>   echo CONFIG_CRYPTO_CRCT10DIF_ARM_CE=y ) > .config
> make olddefconfig
> make arch/arm/crypto/crct10dif-ce-core.o
>
>
>   AS      arch/arm/crypto/crct10dif-ce-core.o
> arch/arm/crypto/crct10dif-ce-core.S: Assembler messages:
> arch/arm/crypto/crct10dif-ce-core.S:165: Error: .err encountered
> arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_128_bytes_consts' in ARM mode
> arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_128_bytes_consts' in ARM mode
> arch/arm/crypto/crct10dif-ce-core.S:267: Error: .err encountered
> arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movw r3,#:lower16:.Lbyteshift_table+16' in ARM mode
> arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movt r3,#:upper16:.Lbyteshift_table+16' in ARM mode
> arch/arm/crypto/crct10dif-ce-core.S:329: Error: .err encountered
> arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_16_bytes_consts' in ARM mode
> arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_16_bytes_consts' in ARM mode
> scripts/Makefile.build:367: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
> make[1]: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 1
> Makefile:1709: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
> make: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 2
>

That does not make sense. That code can only execute on a NEON-capable
CPU, which is guaranteed to support movw/movt as well. So I'd prefer
solving this by adding '.arch armv7-a' to this .S file.

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 2/3] crypto: arm/crct10dif-ce - cleanup and optimizations
  2019-01-30 21:00       ` Ard Biesheuvel
@ 2019-01-30 21:08         ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-30 21:08 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Herbert Xu,
	Tim Chen, linux-arm-kernel

On Wed, Jan 30, 2019 at 10:00:53PM +0100, Ard Biesheuvel wrote:
> On Wed, 30 Jan 2019 at 21:55, Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > On Tue, Jan 29, 2019 at 07:14:43PM -0800, Eric Biggers wrote:
> > > +     .macro          __adrl, out, sym
> > > +     movw            \out, #:lower16:\sym
> > > +     movt            \out, #:upper16:\sym
> > > +     .endm
> >
> > Hi Ard, it seems we need the __LINUX_ARM_ARCH__ < 7 case after all?
> > The kbuild test robot reported a build error, which can be reproduced with:
> >
> > ( cat arch/arm/configs/multi_v7_defconfig;
> >   echo CONFIG_ARCH_MULTI_V6=y;
> >   echo CONFIG_CRC_T10DIF=y;
> >   echo CONFIG_CRYPTO_CRCT10DIF_ARM_CE=y ) > .config
> > make olddefconfig
> > make arch/arm/crypto/crct10dif-ce-core.o
> >
> >
> >   AS      arch/arm/crypto/crct10dif-ce-core.o
> > arch/arm/crypto/crct10dif-ce-core.S: Assembler messages:
> > arch/arm/crypto/crct10dif-ce-core.S:165: Error: .err encountered
> > arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_128_bytes_consts' in ARM mode
> > arch/arm/crypto/crct10dif-ce-core.S:165: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_128_bytes_consts' in ARM mode
> > arch/arm/crypto/crct10dif-ce-core.S:267: Error: .err encountered
> > arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movw r3,#:lower16:.Lbyteshift_table+16' in ARM mode
> > arch/arm/crypto/crct10dif-ce-core.S:267: Error: selected processor does not support `movt r3,#:upper16:.Lbyteshift_table+16' in ARM mode
> > arch/arm/crypto/crct10dif-ce-core.S:329: Error: .err encountered
> > arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movw fold_consts_ptr,#:lower16:.Lfold_across_16_bytes_consts' in ARM mode
> > arch/arm/crypto/crct10dif-ce-core.S:329: Error: selected processor does not support `movt fold_consts_ptr,#:upper16:.Lfold_across_16_bytes_consts' in ARM mode
> > scripts/Makefile.build:367: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
> > make[1]: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 1
> > Makefile:1709: recipe for target 'arch/arm/crypto/crct10dif-ce-core.o' failed
> > make: *** [arch/arm/crypto/crct10dif-ce-core.o] Error 2
> >
> 
> That does not make sense. That code can only execute on a NEON capable
> CPU, which is guaranteed to support movw/movt as well. So I'd prefer
> solving this by adding '.arch armv7-a' to this .S file.

Yes, that works.  I'll send out a new version with that.

- Eric

^ permalink raw reply	[flat|nested] 22+ messages in thread

* Re: [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations
  2019-01-30  8:33   ` Ard Biesheuvel
@ 2019-01-31  3:37     ` Eric Biggers
  -1 siblings, 0 replies; 22+ messages in thread
From: Eric Biggers @ 2019-01-31  3:37 UTC (permalink / raw)
  To: Ard Biesheuvel
  Cc: open list:HARDWARE RANDOM NUMBER GENERATOR CORE, Herbert Xu,
	Tim Chen, linux-arm-kernel

On Wed, Jan 30, 2019 at 09:33:57AM +0100, Ard Biesheuvel wrote:
> On Wed, 30 Jan 2019 at 04:15, Eric Biggers <ebiggers@kernel.org> wrote:
> >
> > The x86, arm, and arm64 asm implementations of crct10dif are very
> > difficult to understand partly because many of the comments, labels, and
> > macros are named incorrectly: the lengths mentioned are usually off by a
> > factor of two from the actual code.  Many other things are unnecessarily
> > convoluted as well, e.g. there are many more fold constants than
> > actually needed and some aren't fully reduced.
> >
> > This series therefore cleans up all these implementations to be much
> > more maintainable.  I also made some small optimizations where I saw
> > opportunities, resulting in slightly better performance.
> >
> > This is based on top of the pending patches from Ard Biesheuvel.
> >
> > These all pass the new extra self-tests.
> >
> 
> Hi Eric,
> 
> As an FYI, the issue that broke ARM and arm64 with your updated
> selftests was the 1-byte length special case that you also have
> special handling for in the x86 version (but while fixing that, I
> noticed my version was reading beyond the end of the input). I think
> it hardly matters, given the way T10-DIF appears to be used in
> practice (disk blocks), though it is hard to be sure from reading
> the code, and the algorithm should be correct in any case.

Yes, on second thought the len < 16 support in the x86 assembly isn't
worthwhile.  It's actually much slower than the generic table-based code at
those lengths because of the kernel_fpu_begin() overhead, and even if
kernel_fpu_begin() were free, the generic code is still faster up to about
len=11.

There's a theoretical niceness to using pclmul for all lengths so that no table
is needed.  But we still need the table for the !irq_fpu_usable() case anyway.

So I'll drop the len < 16 case.
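
For reference, with the len < 16 case gone the x86 glue dispatch reduces to
roughly the following.  This is only a sketch reusing the names from the
existing crct10dif-pclmul_glue.c (chksum_update(), struct chksum_desc_ctx,
crc_t10dif_pcl(), crc_t10dif_generic()); the actual patch may differ in
detail:

#include <linux/crc-t10dif.h>
#include <linux/linkage.h>
#include <crypto/internal/hash.h>
#include <asm/fpu/api.h>

asmlinkage u16 crc_t10dif_pcl(u16 init_crc, const u8 *buf, size_t len);

struct chksum_desc_ctx {
	__u16 crc;
};

static int chksum_update(struct shash_desc *desc, const u8 *data,
			 unsigned int length)
{
	struct chksum_desc_ctx *ctx = shash_desc_ctx(desc);

	if (length >= 16 && irq_fpu_usable()) {
		/* PCLMULQDQ path: only worth the kernel_fpu_begin() cost
		 * for at least one full 16-byte vector. */
		kernel_fpu_begin();
		ctx->crc = crc_t10dif_pcl(ctx->crc, data, length);
		kernel_fpu_end();
	} else {
		/* Short buffers and !irq_fpu_usable(): table-based fallback. */
		ctx->crc = crc_t10dif_generic(ctx->crc, data, length);
	}
	return 0;
}

That way the assembly only ever sees len >= 16, which is what lets the
special-case code there go away.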

> 
> So what remains is the way these implementations are encapsulated by
> the crct10dif() library function, which is rather nasty, making
> CRC-T10DIF an excellent use case to discuss whether we can make any
> improvements to address some of the concerns that were also raised in
> the zinc discussion. I threw some code together a while ago [0] (and
> posted it as well, IIRC). In the meantime, a 'static call'
> infrastructure is being proposed that could be used in a similar way
> to avoid function pointers. I'm also interested in hearing opinions on
> whether the indirect call overhead is actually significant in use
> cases such as this one.
> 
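
On the static call idea: with that infrastructure the library entry point
could dispatch without an indirect call in the fast path, along these lines
(the API is still being proposed, so the macros and names below are
approximate, and crc_t10dif_arch is just a stand-in for whichever
accelerated implementation gets registered):

#include <linux/crc-t10dif.h>
#include <linux/static_call.h>

/* Default to the generic table-based implementation. */
DEFINE_STATIC_CALL(crc_t10dif_impl, crc_t10dif_generic);

__u16 crc_t10dif_update(__u16 crc, const unsigned char *buffer, size_t len)
{
	/* Patched into a direct call, so no function pointer is followed. */
	return static_call(crc_t10dif_impl)(crc, buffer, len);
}

/*
 * An arch module would then switch the implementation when it loads:
 *	static_call_update(crc_t10dif_impl, &crc_t10dif_arch);
 */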

I agree that lib/crc-t10dif.c is very ugly, and we need to find a better way to
provide simple crypto library functions.  But I'm not sure how to make everyone
happy.  I actually think the Zinc approach of centrally dispatching to all the
software implementations of each algorithm (with one module per algorithm rather
than one per implementation) is fine for the vast majority of users.  So maybe
we should just go with that along with per-implementation knobs so that users
can still disable unwanted implementations at build or boot time if they want.

E.g., CONFIG_ZINC_CHACHA would be a module that has all the software ChaCha
implementations for the architecture.  But people building the kernel who do not
want or need, say, the NEON implementation could unset the bool
CONFIG_ZINC_CHACHA_NEON to exclude it from the zinc_chacha module at build time.
Alternatively, users with a precompiled kernel who don't want to use the NEON
implementation despite their CPU supporting it could set zinc_chacha.neon=0 on
the kernel command line (when CONFIG_ZINC_CHACHA=y) or when loading the
zinc_chacha module (when CONFIG_ZINC_CHACHA=m).
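
As a concrete sketch of the kind of knob I mean (every symbol here is
hypothetical, purely to illustrate the shape of it):

#include <linux/module.h>

/* Hypothetical helpers; nothing below exists today. */
bool chacha_have_neon(void);
void chacha_neon(u8 *dst, const u8 *src, size_t len,
		 const u32 key[8], const u32 counter[4]);
void chacha_generic(u8 *dst, const u8 *src, size_t len,
		    const u32 key[8], const u32 counter[4]);

static bool neon = true;
module_param(neon, bool, 0444);
MODULE_PARM_DESC(neon, "Use the NEON ChaCha implementation if supported");

void zinc_chacha(u8 *dst, const u8 *src, size_t len,
		 const u32 key[8], const u32 counter[4])
{
	if (IS_ENABLED(CONFIG_ZINC_CHACHA_NEON) && neon && chacha_have_neon())
		chacha_neon(dst, src, len, key, counter);
	else
		chacha_generic(dst, src, len, key, counter);
}
EXPORT_SYMBOL(zinc_chacha);

Setting zinc_chacha.neon=0 on the command line, or neon=0 when loading the
module, would then skip the NEON path without a rebuild.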

- Eric

^ permalink raw reply	[flat|nested] 22+ messages in thread

end of thread, other threads:[~2019-01-31  3:37 UTC | newest]

Thread overview: 22+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-30  3:14 [PATCH v3 0/3] crypto: crct10dif assembly cleanup and optimizations Eric Biggers
2019-01-30  3:14 ` Eric Biggers
2019-01-30  3:14 ` [PATCH v3 1/3] crypto: x86/crct10dif-pcl - " Eric Biggers
2019-01-30  3:14   ` Eric Biggers
2019-01-30  3:14 ` [PATCH v3 2/3] crypto: arm/crct10dif-ce " Eric Biggers
2019-01-30  3:14   ` Eric Biggers
2019-01-30 20:55   ` Eric Biggers
2019-01-30 20:55     ` Eric Biggers
2019-01-30 21:00     ` Ard Biesheuvel
2019-01-30 21:00       ` Ard Biesheuvel
2019-01-30 21:08       ` Eric Biggers
2019-01-30 21:08         ` Eric Biggers
2019-01-30  3:14 ` [PATCH v3 3/3] crypto: arm64/crct10dif-ce " Eric Biggers
2019-01-30  3:14   ` Eric Biggers
2019-01-30  8:33 ` [PATCH v3 0/3] crypto: crct10dif assembly " Ard Biesheuvel
2019-01-30  8:33   ` Ard Biesheuvel
2019-01-30  9:13   ` Herbert Xu
2019-01-30  9:13     ` Herbert Xu
2019-01-30  9:19     ` Ard Biesheuvel
2019-01-30  9:19       ` Ard Biesheuvel
2019-01-31  3:37   ` Eric Biggers
2019-01-31  3:37     ` Eric Biggers
