* [PATCH 0/3] Introduce x86 assembler accelerated implementation for SM4 algorithm
@ 2021-06-10 13:44 Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
                   ` (2 more replies)
  0 siblings, 3 replies; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-10 13:44 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel
  Cc: Tianjia Zhang

This patchset extracts the common SM4 algorithm into a separate
library, introduces an AES-NI/AVX accelerated implementation for
x86_64, and lays the groundwork for more mode-specific accelerated
implementations to be added later.

Tianjia Zhang (3):
  crypto: sm4 - create SM4 library based on sm4 generic code
  crypto: arm64/sm4-ce - Make dependent on sm4 library instead of
    sm4-generic
  crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation

 arch/arm64/crypto/Kconfig              |   2 +-
 arch/arm64/crypto/sm4-ce-glue.c        |  14 +-
 arch/x86/crypto/Makefile               |   3 +
 arch/x86/crypto/sm4-aesni-avx-asm_64.S | 339 +++++++++++++++++++++++++
 arch/x86/crypto/sm4_aesni_avx_glue.c   | 115 +++++++++
 crypto/Kconfig                         |  30 +++
 crypto/sm4_generic.c                   | 164 +-----------
 include/crypto/sm4.h                   |  25 +-
 lib/crypto/Kconfig                     |   3 +
 lib/crypto/Makefile                    |   3 +
 lib/crypto/sm4.c                       | 184 ++++++++++++++
 11 files changed, 718 insertions(+), 164 deletions(-)
 create mode 100644 arch/x86/crypto/sm4-aesni-avx-asm_64.S
 create mode 100644 arch/x86/crypto/sm4_aesni_avx_glue.c
 create mode 100644 lib/crypto/sm4.c

-- 
2.19.1.3.ge56e4f7


^ permalink raw reply	[flat|nested] 20+ messages in thread

* [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2021-06-10 13:44 [PATCH 0/3] Introduce x86 assembler accelerated implementation for SM4 algorithm Tianjia Zhang
@ 2021-06-10 13:44 ` Tianjia Zhang
  2021-06-10 23:19   ` Eric Biggers
  2022-03-01 10:34   ` Jason A. Donenfeld
  2021-06-10 13:44 ` [PATCH 2/3] crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation Tianjia Zhang
  2 siblings, 2 replies; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-10 13:44 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel
  Cc: Tianjia Zhang

Take the existing small footprint and mostly time invariant C code
and turn it into a SM4 library that can be used for non-performance
critical, casual use of SM4, and as a fallback for, e.g., SIMD code
that needs a secondary path that can be taken in contexts where the
SIMD unit is off limits.

In addition, the code has been optimized: the short fixed-count loops
are unrolled, unnecessary shifting of state words through memory is
removed, and the sbox, fk and ck arrays as well as the basic key
expansion and block encryption/decryption functions are exported.
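
For illustration only (a usage sketch, not part of this patch), a SIMD
driver's non-SIMD fallback path would use the exported helpers roughly
like this:

	#include <crypto/sm4.h>

	/* hypothetical fallback: expand the key, then encrypt one block */
	static int example_sm4_encrypt_one(struct crypto_sm4_ctx *ctx,
					   const u8 *key, u8 *out, const u8 *in)
	{
		/* fills both rkey_enc and rkey_dec, -EINVAL on bad length */
		int err = crypto_sm4_expand_key(ctx, key, SM4_KEY_SIZE);

		if (err)
			return err;

		/* one 16-byte block; pass ctx->rkey_dec to decrypt */
		crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
		return 0;
	}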

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 crypto/Kconfig       |   1 +
 crypto/sm4_generic.c | 149 +----------------------------------
 include/crypto/sm4.h |  26 +++++-
 lib/crypto/Kconfig   |   3 +
 lib/crypto/Makefile  |   3 +
 lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 217 insertions(+), 149 deletions(-)
 create mode 100644 lib/crypto/sm4.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ca3b02dcbbfa..4fbc9c080ca9 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1547,6 +1547,7 @@ config CRYPTO_SERPENT_AVX2_X86_64
 config CRYPTO_SM4
 	tristate "SM4 cipher algorithm"
 	select CRYPTO_ALGAPI
+	select CRYPTO_LIB_SM4
 	help
 	  SM4 cipher algorithms (OSCCA GB/T 32907-2016).
 
diff --git a/crypto/sm4_generic.c b/crypto/sm4_generic.c
index 016dbc595705..7d1c25244268 100644
--- a/crypto/sm4_generic.c
+++ b/crypto/sm4_generic.c
@@ -16,132 +16,6 @@
 #include <asm/byteorder.h>
 #include <asm/unaligned.h>
 
-static const u32 fk[4] = {
-	0xa3b1bac6, 0x56aa3350, 0x677d9197, 0xb27022dc
-};
-
-static const u8 sbox[256] = {
-	0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
-	0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
-	0x2b, 0x67, 0x9a, 0x76, 0x2a, 0xbe, 0x04, 0xc3,
-	0xaa, 0x44, 0x13, 0x26, 0x49, 0x86, 0x06, 0x99,
-	0x9c, 0x42, 0x50, 0xf4, 0x91, 0xef, 0x98, 0x7a,
-	0x33, 0x54, 0x0b, 0x43, 0xed, 0xcf, 0xac, 0x62,
-	0xe4, 0xb3, 0x1c, 0xa9, 0xc9, 0x08, 0xe8, 0x95,
-	0x80, 0xdf, 0x94, 0xfa, 0x75, 0x8f, 0x3f, 0xa6,
-	0x47, 0x07, 0xa7, 0xfc, 0xf3, 0x73, 0x17, 0xba,
-	0x83, 0x59, 0x3c, 0x19, 0xe6, 0x85, 0x4f, 0xa8,
-	0x68, 0x6b, 0x81, 0xb2, 0x71, 0x64, 0xda, 0x8b,
-	0xf8, 0xeb, 0x0f, 0x4b, 0x70, 0x56, 0x9d, 0x35,
-	0x1e, 0x24, 0x0e, 0x5e, 0x63, 0x58, 0xd1, 0xa2,
-	0x25, 0x22, 0x7c, 0x3b, 0x01, 0x21, 0x78, 0x87,
-	0xd4, 0x00, 0x46, 0x57, 0x9f, 0xd3, 0x27, 0x52,
-	0x4c, 0x36, 0x02, 0xe7, 0xa0, 0xc4, 0xc8, 0x9e,
-	0xea, 0xbf, 0x8a, 0xd2, 0x40, 0xc7, 0x38, 0xb5,
-	0xa3, 0xf7, 0xf2, 0xce, 0xf9, 0x61, 0x15, 0xa1,
-	0xe0, 0xae, 0x5d, 0xa4, 0x9b, 0x34, 0x1a, 0x55,
-	0xad, 0x93, 0x32, 0x30, 0xf5, 0x8c, 0xb1, 0xe3,
-	0x1d, 0xf6, 0xe2, 0x2e, 0x82, 0x66, 0xca, 0x60,
-	0xc0, 0x29, 0x23, 0xab, 0x0d, 0x53, 0x4e, 0x6f,
-	0xd5, 0xdb, 0x37, 0x45, 0xde, 0xfd, 0x8e, 0x2f,
-	0x03, 0xff, 0x6a, 0x72, 0x6d, 0x6c, 0x5b, 0x51,
-	0x8d, 0x1b, 0xaf, 0x92, 0xbb, 0xdd, 0xbc, 0x7f,
-	0x11, 0xd9, 0x5c, 0x41, 0x1f, 0x10, 0x5a, 0xd8,
-	0x0a, 0xc1, 0x31, 0x88, 0xa5, 0xcd, 0x7b, 0xbd,
-	0x2d, 0x74, 0xd0, 0x12, 0xb8, 0xe5, 0xb4, 0xb0,
-	0x89, 0x69, 0x97, 0x4a, 0x0c, 0x96, 0x77, 0x7e,
-	0x65, 0xb9, 0xf1, 0x09, 0xc5, 0x6e, 0xc6, 0x84,
-	0x18, 0xf0, 0x7d, 0xec, 0x3a, 0xdc, 0x4d, 0x20,
-	0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48
-};
-
-static const u32 ck[] = {
-	0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
-	0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
-	0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
-	0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
-	0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
-	0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
-	0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
-	0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
-};
-
-static u32 sm4_t_non_lin_sub(u32 x)
-{
-	int i;
-	u8 *b = (u8 *)&x;
-
-	for (i = 0; i < 4; ++i)
-		b[i] = sbox[b[i]];
-
-	return x;
-}
-
-static u32 sm4_key_lin_sub(u32 x)
-{
-	return x ^ rol32(x, 13) ^ rol32(x, 23);
-
-}
-
-static u32 sm4_enc_lin_sub(u32 x)
-{
-	return x ^ rol32(x, 2) ^ rol32(x, 10) ^ rol32(x, 18) ^ rol32(x, 24);
-}
-
-static u32 sm4_key_sub(u32 x)
-{
-	return sm4_key_lin_sub(sm4_t_non_lin_sub(x));
-}
-
-static u32 sm4_enc_sub(u32 x)
-{
-	return sm4_enc_lin_sub(sm4_t_non_lin_sub(x));
-}
-
-static u32 sm4_round(const u32 *x, const u32 rk)
-{
-	return x[0] ^ sm4_enc_sub(x[1] ^ x[2] ^ x[3] ^ rk);
-}
-
-
-/**
- * crypto_sm4_expand_key - Expands the SM4 key as described in GB/T 32907-2016
- * @ctx:	The location where the computed key will be stored.
- * @in_key:	The supplied key.
- * @key_len:	The length of the supplied key.
- *
- * Returns 0 on success. The function fails only if an invalid key size (or
- * pointer) is supplied.
- */
-int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
-			  unsigned int key_len)
-{
-	u32 rk[4], t;
-	const u32 *key = (u32 *)in_key;
-	int i;
-
-	if (key_len != SM4_KEY_SIZE)
-		return -EINVAL;
-
-	for (i = 0; i < 4; ++i)
-		rk[i] = get_unaligned_be32(&key[i]) ^ fk[i];
-
-	for (i = 0; i < 32; ++i) {
-		t = rk[0] ^ sm4_key_sub(rk[1] ^ rk[2] ^ rk[3] ^ ck[i]);
-		ctx->rkey_enc[i] = t;
-		rk[0] = rk[1];
-		rk[1] = rk[2];
-		rk[2] = rk[3];
-		rk[3] = t;
-	}
-
-	for (i = 0; i < 32; ++i)
-		ctx->rkey_dec[i] = ctx->rkey_enc[31 - i];
-
-	return 0;
-}
-EXPORT_SYMBOL_GPL(crypto_sm4_expand_key);
-
 /**
  * crypto_sm4_set_key - Set the SM4 key.
  * @tfm:	The %crypto_tfm that is used in the context.
@@ -163,32 +37,13 @@ int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 }
 EXPORT_SYMBOL_GPL(crypto_sm4_set_key);
 
-static void sm4_do_crypt(const u32 *rk, u32 *out, const u32 *in)
-{
-	u32 x[4], i, t;
-
-	for (i = 0; i < 4; ++i)
-		x[i] = get_unaligned_be32(&in[i]);
-
-	for (i = 0; i < 32; ++i) {
-		t = sm4_round(x, rk[i]);
-		x[0] = x[1];
-		x[1] = x[2];
-		x[2] = x[3];
-		x[3] = t;
-	}
-
-	for (i = 0; i < 4; ++i)
-		put_unaligned_be32(x[3 - i], &out[i]);
-}
-
 /* encrypt a block of text */
 
 void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	sm4_do_crypt(ctx->rkey_enc, (u32 *)out, (u32 *)in);
+	crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
 }
 EXPORT_SYMBOL_GPL(crypto_sm4_encrypt);
 
@@ -198,7 +53,7 @@ void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	sm4_do_crypt(ctx->rkey_dec, (u32 *)out, (u32 *)in);
+	crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
 }
 EXPORT_SYMBOL_GPL(crypto_sm4_decrypt);
 
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index 7afd730d16ff..39273121c145 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -3,6 +3,7 @@
 /*
  * Common values for the SM4 algorithm
  * Copyright (C) 2018 ARM Limited or its affiliates.
+ * Copyright (c) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
  */
 
 #ifndef _CRYPTO_SM4_H
@@ -20,11 +21,32 @@ struct crypto_sm4_ctx {
 	u32 rkey_dec[SM4_RKEY_WORDS];
 };
 
-int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len);
+extern const u32 crypto_sm4_fk[];
+extern const u32 crypto_sm4_ck[];
+extern const u8 crypto_sm4_sbox[];
+
+/**
+ * crypto_sm4_expand_key - Expands the SM4 key as described in GB/T 32907-2016
+ * @ctx:	The location where the computed key will be stored.
+ * @in_key:	The supplied key.
+ * @key_len:	The length of the supplied key.
+ *
+ * Returns 0 on success. The function fails only if an invalid key size (or
+ * pointer) is supplied.
+ */
 int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
 			  unsigned int key_len);
 
+/**
+ * crypto_sm4_do_crypt - Encrypt or decrypt a single SM4 block
+ * @rk:		The rkey_enc for encrypt or rkey_dec for decrypt
+ * @out:	Buffer to store output data
+ * @in: 	Buffer containing the input data
+ */
+void crypto_sm4_do_crypt(const u32 *rk, u8 *out, const u8 *in);
+
+int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+		       unsigned int key_len);
 void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
 void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
 
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 14c032de276e..545ccbddf6a1 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -128,3 +128,6 @@ config CRYPTO_LIB_CHACHA20POLY1305
 
 config CRYPTO_LIB_SHA256
 	tristate
+
+config CRYPTO_LIB_SM4
+	tristate
diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile
index 3a435629d9ce..73205ed269ba 100644
--- a/lib/crypto/Makefile
+++ b/lib/crypto/Makefile
@@ -38,6 +38,9 @@ libpoly1305-y					+= poly1305.o
 obj-$(CONFIG_CRYPTO_LIB_SHA256)			+= libsha256.o
 libsha256-y					:= sha256.o
 
+obj-$(CONFIG_CRYPTO_LIB_SM4)			+= libsm4.o
+libsm4-y					:= sm4.o
+
 ifneq ($(CONFIG_CRYPTO_MANAGER_DISABLE_TESTS),y)
 libblake2s-y					+= blake2s-selftest.o
 libchacha20poly1305-y				+= chacha20poly1305-selftest.o
diff --git a/lib/crypto/sm4.c b/lib/crypto/sm4.c
new file mode 100644
index 000000000000..cbdd14a254d0
--- /dev/null
+++ b/lib/crypto/sm4.c
@@ -0,0 +1,184 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4, as specified in
+ * https://tools.ietf.org/id/draft-ribose-cfrg-sm4-04.html
+ *
+ * Copyright (C) 2018 ARM Limited or its affiliates.
+ * Copyright (c) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
+ */
+
+#include <linux/module.h>
+#include <asm/unaligned.h>
+#include <crypto/sm4.h>
+
+static const u32 fk[4] = {
+	0xa3b1bac6, 0x56aa3350, 0x677d9197, 0xb27022dc
+};
+
+static const u32 ck[32] = {
+	0x00070e15, 0x1c232a31, 0x383f464d, 0x545b6269,
+	0x70777e85, 0x8c939aa1, 0xa8afb6bd, 0xc4cbd2d9,
+	0xe0e7eef5, 0xfc030a11, 0x181f262d, 0x343b4249,
+	0x50575e65, 0x6c737a81, 0x888f969d, 0xa4abb2b9,
+	0xc0c7ced5, 0xdce3eaf1, 0xf8ff060d, 0x141b2229,
+	0x30373e45, 0x4c535a61, 0x686f767d, 0x848b9299,
+	0xa0a7aeb5, 0xbcc3cad1, 0xd8dfe6ed, 0xf4fb0209,
+	0x10171e25, 0x2c333a41, 0x484f565d, 0x646b7279
+};
+
+static const u8 __cacheline_aligned sbox[256] = {
+	0xd6, 0x90, 0xe9, 0xfe, 0xcc, 0xe1, 0x3d, 0xb7,
+	0x16, 0xb6, 0x14, 0xc2, 0x28, 0xfb, 0x2c, 0x05,
+	0x2b, 0x67, 0x9a, 0x76, 0x2a, 0xbe, 0x04, 0xc3,
+	0xaa, 0x44, 0x13, 0x26, 0x49, 0x86, 0x06, 0x99,
+	0x9c, 0x42, 0x50, 0xf4, 0x91, 0xef, 0x98, 0x7a,
+	0x33, 0x54, 0x0b, 0x43, 0xed, 0xcf, 0xac, 0x62,
+	0xe4, 0xb3, 0x1c, 0xa9, 0xc9, 0x08, 0xe8, 0x95,
+	0x80, 0xdf, 0x94, 0xfa, 0x75, 0x8f, 0x3f, 0xa6,
+	0x47, 0x07, 0xa7, 0xfc, 0xf3, 0x73, 0x17, 0xba,
+	0x83, 0x59, 0x3c, 0x19, 0xe6, 0x85, 0x4f, 0xa8,
+	0x68, 0x6b, 0x81, 0xb2, 0x71, 0x64, 0xda, 0x8b,
+	0xf8, 0xeb, 0x0f, 0x4b, 0x70, 0x56, 0x9d, 0x35,
+	0x1e, 0x24, 0x0e, 0x5e, 0x63, 0x58, 0xd1, 0xa2,
+	0x25, 0x22, 0x7c, 0x3b, 0x01, 0x21, 0x78, 0x87,
+	0xd4, 0x00, 0x46, 0x57, 0x9f, 0xd3, 0x27, 0x52,
+	0x4c, 0x36, 0x02, 0xe7, 0xa0, 0xc4, 0xc8, 0x9e,
+	0xea, 0xbf, 0x8a, 0xd2, 0x40, 0xc7, 0x38, 0xb5,
+	0xa3, 0xf7, 0xf2, 0xce, 0xf9, 0x61, 0x15, 0xa1,
+	0xe0, 0xae, 0x5d, 0xa4, 0x9b, 0x34, 0x1a, 0x55,
+	0xad, 0x93, 0x32, 0x30, 0xf5, 0x8c, 0xb1, 0xe3,
+	0x1d, 0xf6, 0xe2, 0x2e, 0x82, 0x66, 0xca, 0x60,
+	0xc0, 0x29, 0x23, 0xab, 0x0d, 0x53, 0x4e, 0x6f,
+	0xd5, 0xdb, 0x37, 0x45, 0xde, 0xfd, 0x8e, 0x2f,
+	0x03, 0xff, 0x6a, 0x72, 0x6d, 0x6c, 0x5b, 0x51,
+	0x8d, 0x1b, 0xaf, 0x92, 0xbb, 0xdd, 0xbc, 0x7f,
+	0x11, 0xd9, 0x5c, 0x41, 0x1f, 0x10, 0x5a, 0xd8,
+	0x0a, 0xc1, 0x31, 0x88, 0xa5, 0xcd, 0x7b, 0xbd,
+	0x2d, 0x74, 0xd0, 0x12, 0xb8, 0xe5, 0xb4, 0xb0,
+	0x89, 0x69, 0x97, 0x4a, 0x0c, 0x96, 0x77, 0x7e,
+	0x65, 0xb9, 0xf1, 0x09, 0xc5, 0x6e, 0xc6, 0x84,
+	0x18, 0xf0, 0x7d, 0xec, 0x3a, 0xdc, 0x4d, 0x20,
+	0x79, 0xee, 0x5f, 0x3e, 0xd7, 0xcb, 0x39, 0x48
+};
+
+extern const u32 crypto_sm4_fk[4] __alias(fk);
+extern const u32 crypto_sm4_ck[32] __alias(ck);
+extern const u8 crypto_sm4_sbox[256] __alias(sbox);
+
+EXPORT_SYMBOL(crypto_sm4_fk);
+EXPORT_SYMBOL(crypto_sm4_ck);
+EXPORT_SYMBOL(crypto_sm4_sbox);
+
+static inline u32 sm4_t_non_lin_sub(u32 x)
+{
+	u32 out;
+
+	out  = (u32)sbox[x & 0xff];
+	out |= (u32)sbox[(x >> 8) & 0xff] << 8;
+	out |= (u32)sbox[(x >> 16) & 0xff] << 16;
+	out |= (u32)sbox[(x >> 24) & 0xff] << 24;
+
+	return out;
+}
+
+static inline u32 sm4_key_lin_sub(u32 x)
+{
+	return x ^ rol32(x, 13) ^ rol32(x, 23);
+}
+
+static inline u32 sm4_enc_lin_sub(u32 x)
+{
+	return x ^ rol32(x, 2) ^ rol32(x, 10) ^ rol32(x, 18) ^ rol32(x, 24);
+}
+
+static inline u32 sm4_key_sub(u32 x)
+{
+	return sm4_key_lin_sub(sm4_t_non_lin_sub(x));
+}
+
+static inline u32 sm4_enc_sub(u32 x)
+{
+	return sm4_enc_lin_sub(sm4_t_non_lin_sub(x));
+}
+
+static inline u32 sm4_round(u32 x0, u32 x1, u32 x2, u32 x3, u32 rk)
+{
+	return x0 ^ sm4_enc_sub(x1 ^ x2 ^ x3 ^ rk);
+}
+
+
+/**
+ * crypto_sm4_expand_key - Expands the SM4 key as described in GB/T 32907-2016
+ * @ctx:	The location where the computed key will be stored.
+ * @in_key:	The supplied key.
+ * @key_len:	The length of the supplied key.
+ *
+ * Returns 0 on success. The function fails only if an invalid key size (or
+ * pointer) is supplied.
+ */
+int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
+			  unsigned int key_len)
+{
+	u32 rk[4];
+	const u32 *key = (u32 *)in_key;
+	int i;
+
+	if (key_len != SM4_KEY_SIZE)
+		return -EINVAL;
+
+	rk[0] = get_unaligned_be32(&key[0]) ^ fk[0];
+	rk[1] = get_unaligned_be32(&key[1]) ^ fk[1];
+	rk[2] = get_unaligned_be32(&key[2]) ^ fk[2];
+	rk[3] = get_unaligned_be32(&key[3]) ^ fk[3];
+
+	for (i = 0; i < 32; i += 4) {
+		rk[0] ^= sm4_key_sub(rk[1] ^ rk[2] ^ rk[3] ^ ck[i + 0]);
+		rk[1] ^= sm4_key_sub(rk[2] ^ rk[3] ^ rk[0] ^ ck[i + 1]);
+		rk[2] ^= sm4_key_sub(rk[3] ^ rk[0] ^ rk[1] ^ ck[i + 2]);
+		rk[3] ^= sm4_key_sub(rk[0] ^ rk[1] ^ rk[2] ^ ck[i + 3]);
+
+		ctx->rkey_enc[i + 0] = rk[0];
+		ctx->rkey_enc[i + 1] = rk[1];
+		ctx->rkey_enc[i + 2] = rk[2];
+		ctx->rkey_enc[i + 3] = rk[3];
+		ctx->rkey_dec[31 - 0 - i] = rk[0];
+		ctx->rkey_dec[31 - 1 - i] = rk[1];
+		ctx->rkey_dec[31 - 2 - i] = rk[2];
+		ctx->rkey_dec[31 - 3 - i] = rk[3];
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(crypto_sm4_expand_key);
+
+/**
+ * crypto_sm4_do_crypt - Encrypt or decrypt a single SM4 block
+ * @rk:		The rkey_enc for encrypt or rkey_dec for decrypt
+ * @out:	Buffer to store output data
+ * @in: 	Buffer containing the input data
+ */
+void crypto_sm4_do_crypt(const u32 *rk, u8 *out, const u8 *in)
+{
+	u32 x[4], i;
+
+	x[0] = get_unaligned_be32(in + 0 * 4);
+	x[1] = get_unaligned_be32(in + 1 * 4);
+	x[2] = get_unaligned_be32(in + 2 * 4);
+	x[3] = get_unaligned_be32(in + 3 * 4);
+
+	for (i = 0; i < 32; i += 4) {
+		x[0] = sm4_round(x[0], x[1], x[2], x[3], rk[i + 0]);
+		x[1] = sm4_round(x[1], x[2], x[3], x[0], rk[i + 1]);
+		x[2] = sm4_round(x[2], x[3], x[0], x[1], rk[i + 2]);
+		x[3] = sm4_round(x[3], x[0], x[1], x[2], rk[i + 3]);
+	}
+
+	put_unaligned_be32(x[3 - 0], out + 0 * 4);
+	put_unaligned_be32(x[3 - 1], out + 1 * 4);
+	put_unaligned_be32(x[3 - 2], out + 2 * 4);
+	put_unaligned_be32(x[3 - 3], out + 3 * 4);
+}
+EXPORT_SYMBOL_GPL(crypto_sm4_do_crypt);
+
+MODULE_DESCRIPTION("Generic SM4 library");
+MODULE_LICENSE("GPL v2");
-- 
2.19.1.3.ge56e4f7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 2/3] crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic
  2021-06-10 13:44 [PATCH 0/3] Introduce x86 assembler accelerated implementation for SM4 algorithm Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
@ 2021-06-10 13:44 ` Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation Tianjia Zhang
  2 siblings, 0 replies; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-10 13:44 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel
  Cc: Tianjia Zhang

Now that the SM4 library has been abstracted out of the sm4-generic
algorithm, sm4-ce can depend on the SM4 library instead of sm4-generic,
and several functions in sm4-generic no longer need to be exported.

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/arm64/crypto/Kconfig       |  2 +-
 arch/arm64/crypto/sm4-ce-glue.c | 14 +++++++++++---
 crypto/sm4_generic.c            | 15 ++++++---------
 include/crypto/sm4.h            |  5 -----
 4 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index b8eb0453123d..55f19450091b 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -51,7 +51,7 @@ config CRYPTO_SM4_ARM64_CE
 	tristate "SM4 symmetric cipher (ARMv8.2 Crypto Extensions)"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_ALGAPI
-	select CRYPTO_SM4
+	select CRYPTO_LIB_SM4
 
 config CRYPTO_GHASH_ARM64_CE
 	tristate "GHASH/AES-GCM using ARMv8 Crypto Extensions"
diff --git a/arch/arm64/crypto/sm4-ce-glue.c b/arch/arm64/crypto/sm4-ce-glue.c
index 2754c875d39c..ba2261ec54d5 100644
--- a/arch/arm64/crypto/sm4-ce-glue.c
+++ b/arch/arm64/crypto/sm4-ce-glue.c
@@ -17,12 +17,20 @@ MODULE_LICENSE("GPL v2");
 
 asmlinkage void sm4_ce_do_crypt(const u32 *rk, void *out, const void *in);
 
+int sm4_ce_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+		       unsigned int key_len)
+{
+	struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	return crypto_sm4_expand_key(ctx, in_key, key_len);
+}
+
 static void sm4_ce_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (!crypto_simd_usable()) {
-		crypto_sm4_encrypt(tfm, out, in);
+		crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
 	} else {
 		kernel_neon_begin();
 		sm4_ce_do_crypt(ctx->rkey_enc, out, in);
@@ -35,7 +43,7 @@ static void sm4_ce_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	if (!crypto_simd_usable()) {
-		crypto_sm4_decrypt(tfm, out, in);
+		crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
 	} else {
 		kernel_neon_begin();
 		sm4_ce_do_crypt(ctx->rkey_dec, out, in);
@@ -54,7 +62,7 @@ static struct crypto_alg sm4_ce_alg = {
 	.cra_u.cipher = {
 		.cia_min_keysize	= SM4_KEY_SIZE,
 		.cia_max_keysize	= SM4_KEY_SIZE,
-		.cia_setkey		= crypto_sm4_set_key,
+		.cia_setkey		= sm4_ce_setkey,
 		.cia_encrypt		= sm4_ce_encrypt,
 		.cia_decrypt		= sm4_ce_decrypt
 	}
diff --git a/crypto/sm4_generic.c b/crypto/sm4_generic.c
index 7d1c25244268..c06856110126 100644
--- a/crypto/sm4_generic.c
+++ b/crypto/sm4_generic.c
@@ -28,34 +28,31 @@
  *
  * Return: 0 on success; -EINVAL on failure (only happens for bad key lengths)
  */
-int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
+static int sm4_setkey(struct crypto_tfm *tfm, const u8 *in_key,
 		       unsigned int key_len)
 {
 	struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	return crypto_sm4_expand_key(ctx, in_key, key_len);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_set_key);
 
 /* encrypt a block of text */
 
-void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_encrypt);
 
 /* decrypt a block of text */
 
-void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+static void sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 {
 	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
 
 	crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
 }
-EXPORT_SYMBOL_GPL(crypto_sm4_decrypt);
 
 static struct crypto_alg sm4_alg = {
 	.cra_name		=	"sm4",
@@ -69,9 +66,9 @@ static struct crypto_alg sm4_alg = {
 		.cipher = {
 			.cia_min_keysize	=	SM4_KEY_SIZE,
 			.cia_max_keysize	=	SM4_KEY_SIZE,
-			.cia_setkey		=	crypto_sm4_set_key,
-			.cia_encrypt		=	crypto_sm4_encrypt,
-			.cia_decrypt		=	crypto_sm4_decrypt
+			.cia_setkey		=	sm4_setkey,
+			.cia_encrypt		=	sm4_encrypt,
+			.cia_decrypt		=	sm4_decrypt
 		}
 	}
 };
diff --git a/include/crypto/sm4.h b/include/crypto/sm4.h
index 39273121c145..73c15179e4c6 100644
--- a/include/crypto/sm4.h
+++ b/include/crypto/sm4.h
@@ -45,9 +45,4 @@ int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
  */
 void crypto_sm4_do_crypt(const u32 *rk, u8 *out, const u8 *in);
 
-int crypto_sm4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
-		       unsigned int key_len);
-void crypto_sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
-void crypto_sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in);
-
 #endif
-- 
2.19.1.3.ge56e4f7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation
  2021-06-10 13:44 [PATCH 0/3] Introduce x86 assembler accelerated implementation for SM4 algorithm Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
  2021-06-10 13:44 ` [PATCH 2/3] crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic Tianjia Zhang
@ 2021-06-10 13:44 ` Tianjia Zhang
  2021-06-10 23:27   ` Eric Biggers
  2 siblings, 1 reply; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-10 13:44 UTC (permalink / raw)
  To: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel
  Cc: Tianjia Zhang

This patch adds an AES-NI/AVX/x86_64 assembler implementation of the
SM4 block cipher. Through two affine transforms, the AES S-Box can be
used to compute the SM4 S-Box, which is what allows the AESENCLAST
instruction to accelerate SM4.
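
For reference, the idea (with the actual constants living in the
.Lpre_tf_*/.Lpost_tf_* tables below, and A_pre/A_post used here only as
illustrative names) is that both S-Boxes are built around inversion in
GF(2^8), so the SM4 S-Box can be written as:

	SM4_Sbox(x) = A_post(AES_Sbox(A_pre(x)))

where A_pre and A_post are affine transforms over GF(2). AESENCLAST
supplies the AES_Sbox step (its ShiftRows permutation is undone with a
vpshufb using .Linv_shift_row), and the two affine transforms are
applied as 4-bit split vpshufb table lookups.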

Signed-off-by: Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
---
 arch/x86/crypto/Makefile               |   3 +
 arch/x86/crypto/sm4-aesni-avx-asm_64.S | 339 +++++++++++++++++++++++++
 arch/x86/crypto/sm4_aesni_avx_glue.c   | 115 +++++++++
 crypto/Kconfig                         |  29 +++
 4 files changed, 486 insertions(+)
 create mode 100644 arch/x86/crypto/sm4-aesni-avx-asm_64.S
 create mode 100644 arch/x86/crypto/sm4_aesni_avx_glue.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index d0959e7b809f..08f95d4e1e7c 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -88,6 +88,9 @@ nhpoly1305-avx2-y := nh-avx2-x86_64.o nhpoly1305-avx2-glue.o
 
 obj-$(CONFIG_CRYPTO_CURVE25519_X86) += curve25519-x86_64.o
 
+obj-$(CONFIG_CRYPTO_SM4_AESNI_AVX_X86_64) += sm4-aesni-avx-x86_64.o
+sm4-aesni-avx-x86_64-y := sm4-aesni-avx-asm_64.o sm4_aesni_avx_glue.o
+
 quiet_cmd_perlasm = PERLASM $@
       cmd_perlasm = $(PERL) $< > $@
 $(obj)/%.S: $(src)/%.pl FORCE
diff --git a/arch/x86/crypto/sm4-aesni-avx-asm_64.S b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
new file mode 100644
index 000000000000..c7cbb7e90bec
--- /dev/null
+++ b/arch/x86/crypto/sm4-aesni-avx-asm_64.S
@@ -0,0 +1,339 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4 Cipher Algorithm, AES-NI/AVX optimized.
+ * as specified in
+ * https://tools.ietf.org/id/draft-ribose-cfrg-sm4-04.html
+ *
+ * Copyright (C) 2018 Markku-Juhani O. Saarinen <mjos@iki.fi>
+ * Copyright (C) 2020 Jussi Kivilinna <jussi.kivilinna@iki.fi>
+ * Copyright (c) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
+ */
+
+/* Based on SM4 AES-NI work by libgcrypt and Markku-Juhani O. Saarinen at:
+ *  https://github.com/mjosaarinen/sm4ni
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+#define rRIP         (%rip)
+
+#define RX0          %xmm0
+#define RX1          %xmm1
+#define MASK_4BIT    %xmm2
+#define RTMP0        %xmm3
+#define RTMP1        %xmm4
+#define RTMP2        %xmm5
+#define RTMP3        %xmm6
+#define RTMP4        %xmm7
+
+#define RA0          %xmm8
+#define RA1          %xmm9
+#define RA2          %xmm10
+#define RA3          %xmm11
+
+#define RB0          %xmm12
+#define RB1          %xmm13
+#define RB2          %xmm14
+#define RB3          %xmm15
+
+#define RNOT         %xmm0
+#define RBSWAP       %xmm1
+
+
+/* Transpose four 32-bit words between 128-bit vectors. */
+#define transpose_4x4(x0, x1, x2, x3, t1, t2) \
+	vpunpckhdq x1, x0, t2;                \
+	vpunpckldq x1, x0, x0;                \
+	                                      \
+	vpunpckldq x3, x2, t1;                \
+	vpunpckhdq x3, x2, x2;                \
+	                                      \
+	vpunpckhqdq t1, x0, x1;               \
+	vpunpcklqdq t1, x0, x0;               \
+	                                      \
+	vpunpckhqdq x2, t2, x3;               \
+	vpunpcklqdq x2, t2, x2;
+
+/* pre-SubByte transform. */
+#define transform_pre(x, lo_t, hi_t, mask4bit, tmp0) \
+	vpand x, mask4bit, tmp0;                     \
+	vpandn x, mask4bit, x;                       \
+	vpsrld $4, x, x;                             \
+	                                             \
+	vpshufb tmp0, lo_t, tmp0;                    \
+	vpshufb x, hi_t, x;                          \
+	vpxor tmp0, x, x;
+
+/* post-SubByte transform. Note: x has been XOR'ed with mask4bit by
+ * 'vaesenclast' instruction.
+ */
+#define transform_post(x, lo_t, hi_t, mask4bit, tmp0) \
+	vpandn mask4bit, x, tmp0;                     \
+	vpsrld $4, x, x;                              \
+	vpand x, mask4bit, x;                         \
+	                                              \
+	vpshufb tmp0, lo_t, tmp0;                     \
+	vpshufb x, hi_t, x;                           \
+	vpxor tmp0, x, x;
+
+
+.section	.rodata.cst16, "aM", @progbits, 16
+.align 16
+
+/*
+ * Following four affine transform look-up tables are from work by
+ * Markku-Juhani O. Saarinen, at https://github.com/mjosaarinen/sm4ni
+ *
+ * These allow exposing SM4 S-Box from AES SubByte.
+ */
+
+/* pre-SubByte affine transform, from SM4 field to AES field. */
+.Lpre_tf_lo_s:
+	.quad 0x9197E2E474720701, 0xC7C1B4B222245157
+.Lpre_tf_hi_s:
+	.quad 0xE240AB09EB49A200, 0xF052B91BF95BB012
+
+/* post-SubByte affine transform, from AES field to SM4 field. */
+.Lpost_tf_lo_s:
+	.quad 0x5B67F2CEA19D0834, 0xEDD14478172BBE82
+.Lpost_tf_hi_s:
+	.quad 0xAE7201DD73AFDC00, 0x11CDBE62CC1063BF
+
+/* For isolating SubBytes from AESENCLAST, inverse shift row */
+.Linv_shift_row:
+	.byte 0x00, 0x0d, 0x0a, 0x07, 0x04, 0x01, 0x0e, 0x0b
+	.byte 0x08, 0x05, 0x02, 0x0f, 0x0c, 0x09, 0x06, 0x03
+
+/* Inverse shift row + Rotate left by 8 bits on 32-bit words with vpshufb */
+.Linv_shift_row_rol_8:
+	.byte 0x07, 0x00, 0x0d, 0x0a, 0x0b, 0x04, 0x01, 0x0e
+	.byte 0x0f, 0x08, 0x05, 0x02, 0x03, 0x0c, 0x09, 0x06
+
+/* Inverse shift row + Rotate left by 16 bits on 32-bit words with vpshufb */
+.Linv_shift_row_rol_16:
+	.byte 0x0a, 0x07, 0x00, 0x0d, 0x0e, 0x0b, 0x04, 0x01
+	.byte 0x02, 0x0f, 0x08, 0x05, 0x06, 0x03, 0x0c, 0x09
+
+/* Inverse shift row + Rotate left by 24 bits on 32-bit words with vpshufb */
+.Linv_shift_row_rol_24:
+	.byte 0x0d, 0x0a, 0x07, 0x00, 0x01, 0x0e, 0x0b, 0x04
+	.byte 0x05, 0x02, 0x0f, 0x08, 0x09, 0x06, 0x03, 0x0c
+
+/* For CTR-mode IV byteswap */
+.Lbswap128_mask:
+	.byte 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0
+
+/* For input word byte-swap */
+.Lbswap32_mask:
+	.byte 3, 2, 1, 0, 7, 6, 5, 4, 11, 10, 9, 8, 15, 14, 13, 12
+
+.align 4
+/* 4-bit mask */
+.L0f0f0f0f:
+	.long 0x0f0f0f0f
+
+
+.text
+.align 16
+
+/*
+ * void sm4_aesni_avx_expand_key(const u8 *key, u32 *rk_enc,
+ *                  u32 *rk_dec, const u32 *fk, const u32 *ck);
+ */
+SYM_FUNC_START(sm4_aesni_avx_expand_key)
+	/* input:
+	 *	%rdi: 128-bit key
+	 *	%rsi: rkey_enc
+	 *	%rdx: rkey_dec
+	 *	%rcx: fk array
+	 *	%r8: ck array
+	 */
+	FRAME_BEGIN
+
+	vmovd 0*4(%rdi), RA0;
+	vmovd 1*4(%rdi), RA1;
+	vmovd 2*4(%rdi), RA2;
+	vmovd 3*4(%rdi), RA3;
+
+	vmovdqa .Lbswap32_mask rRIP, RTMP2;
+	vpshufb RTMP2, RA0, RA0;
+	vpshufb RTMP2, RA1, RA1;
+	vpshufb RTMP2, RA2, RA2;
+	vpshufb RTMP2, RA3, RA3;
+
+	vmovd 0*4(%rcx), RB0;
+	vmovd 1*4(%rcx), RB1;
+	vmovd 2*4(%rcx), RB2;
+	vmovd 3*4(%rcx), RB3;
+	vpxor RB0, RA0, RA0;
+	vpxor RB1, RA1, RA1;
+	vpxor RB2, RA2, RA2;
+	vpxor RB3, RA3, RA3;
+
+	vbroadcastss .L0f0f0f0f rRIP, MASK_4BIT;
+	vmovdqa .Lpre_tf_lo_s rRIP, RTMP4;
+	vmovdqa .Lpre_tf_hi_s rRIP, RB0;
+	vmovdqa .Lpost_tf_lo_s rRIP, RB1;
+	vmovdqa .Lpost_tf_hi_s rRIP, RB2;
+	vmovdqa .Linv_shift_row rRIP, RB3;
+
+#define ROUND(round, s0, s1, s2, s3)                              \
+	vbroadcastss (4*(round))(%r8), RX0;                       \
+	vpxor s1, RX0, RX0;                                       \
+	vpxor s2, RX0, RX0;                                       \
+	vpxor s3, RX0, RX0; /* s1 ^ s2 ^ s3 ^ rk */               \
+	                                                          \
+	/* sbox, non-linear part */                               \
+	transform_pre(RX0, RTMP4, RB0, MASK_4BIT, RTMP0);         \
+	vaesenclast MASK_4BIT, RX0, RX0;                          \
+	transform_post(RX0, RB1, RB2, MASK_4BIT, RTMP0);          \
+	                                                          \
+	/* linear part */                                         \
+	vpshufb RB3, RX0, RX0;                                    \
+	vpxor RX0, s0, s0; /* s0 ^ x */                           \
+	vpslld $13, RX0, RTMP0;                                   \
+	vpsrld $19, RX0, RTMP1;                                   \
+	vpslld $23, RX0, RTMP2;                                   \
+	vpsrld $9, RX0, RTMP3;                                    \
+	vpxor RTMP0, RTMP1, RTMP1;                                \
+	vpxor RTMP2, RTMP3, RTMP3;                                \
+	vpxor RTMP1, s0, s0; /* s0 ^ x ^ rol(x,13) */             \
+	vpxor RTMP3, s0, s0; /* s0 ^ x ^ rol(x,13) ^ rol(x,23) */
+
+	leaq (32*4)(%r8), %rax;
+	leaq (32*4)(%rdx), %rdx;
+.align 16
+.Lroundloop_expand_key:
+	leaq (-4*4)(%rdx), %rdx;
+	ROUND(0, RA0, RA1, RA2, RA3);
+	ROUND(1, RA1, RA2, RA3, RA0);
+	ROUND(2, RA2, RA3, RA0, RA1);
+	ROUND(3, RA3, RA0, RA1, RA2);
+	leaq (4*4)(%r8), %r8;
+	vmovd RA0, (0*4)(%rsi);
+	vmovd RA1, (1*4)(%rsi);
+	vmovd RA2, (2*4)(%rsi);
+	vmovd RA3, (3*4)(%rsi);
+	vmovd RA0, (3*4)(%rdx);
+	vmovd RA1, (2*4)(%rdx);
+	vmovd RA2, (1*4)(%rdx);
+	vmovd RA3, (0*4)(%rdx);
+	leaq (4*4)(%rsi), %rsi;
+	cmpq %rax, %r8;
+	jne .Lroundloop_expand_key;
+
+#undef ROUND
+
+	vzeroall;
+	FRAME_END
+	ret;
+SYM_FUNC_END(sm4_aesni_avx_expand_key)
+
+
+/*
+ * void sm4_aesni_avx_crypt4(const u32 *rk, u8 *dst,
+ *                          const u8 *src, int nblocks)
+ */
+SYM_FUNC_START(sm4_aesni_avx_crypt4)
+	/* input:
+	 *	%rdi: round key array, CTX
+	 *	%rsi: dst (1..4 blocks)
+	 *	%rdx: src (1..4 blocks)
+	 *	%rcx: num blocks (1..4)
+	 */
+	FRAME_BEGIN
+
+	vmovdqu 0*16(%rdx), RA0;
+	vmovdqa RA0, RA1;
+	vmovdqa RA0, RA2;
+	vmovdqa RA0, RA3;
+	cmpq $2, %rcx;
+	jb .Lblk4_load_input_done;
+	vmovdqu 1*16(%rdx), RA1;
+	je .Lblk4_load_input_done;
+	vmovdqu 2*16(%rdx), RA2;
+	cmpq $3, %rcx;
+	je .Lblk4_load_input_done;
+	vmovdqu 3*16(%rdx), RA3;
+
+.Lblk4_load_input_done:
+
+	vmovdqa .Lbswap32_mask rRIP, RTMP2;
+	vpshufb RTMP2, RA0, RA0;
+	vpshufb RTMP2, RA1, RA1;
+	vpshufb RTMP2, RA2, RA2;
+	vpshufb RTMP2, RA3, RA3;
+
+	vbroadcastss .L0f0f0f0f rRIP, MASK_4BIT;
+	vmovdqa .Lpre_tf_lo_s rRIP, RTMP4;
+	vmovdqa .Lpre_tf_hi_s rRIP, RB0;
+	vmovdqa .Lpost_tf_lo_s rRIP, RB1;
+	vmovdqa .Lpost_tf_hi_s rRIP, RB2;
+	vmovdqa .Linv_shift_row rRIP, RB3;
+	vmovdqa .Linv_shift_row_rol_8 rRIP, RTMP2;
+	vmovdqa .Linv_shift_row_rol_16 rRIP, RTMP3;
+	transpose_4x4(RA0, RA1, RA2, RA3, RTMP0, RTMP1);
+
+#define ROUND(round, s0, s1, s2, s3)                                \
+	vbroadcastss (4*(round))(%rdi), RX0;                        \
+	vpxor s1, RX0, RX0;                                         \
+	vpxor s2, RX0, RX0;                                         \
+	vpxor s3, RX0, RX0; /* s1 ^ s2 ^ s3 ^ rk */                 \
+	                                                            \
+	/* sbox, non-linear part */                                 \
+	transform_pre(RX0, RTMP4, RB0, MASK_4BIT, RTMP0);           \
+	vaesenclast MASK_4BIT, RX0, RX0;                            \
+	transform_post(RX0, RB1, RB2, MASK_4BIT, RTMP0);            \
+	                                                            \
+	/* linear part */                                           \
+	vpshufb RB3, RX0, RTMP0;                                    \
+	vpxor RTMP0, s0, s0; /* s0 ^ x */                           \
+	vpshufb RTMP2, RX0, RTMP1;                                  \
+	vpxor RTMP1, RTMP0, RTMP0; /* x ^ rol(x,8) */               \
+	vpshufb RTMP3, RX0, RTMP1;                                  \
+	vpxor RTMP1, RTMP0, RTMP0; /* x ^ rol(x,8) ^ rol(x,16) */   \
+	vpshufb .Linv_shift_row_rol_24 rRIP, RX0, RTMP1;            \
+	vpxor RTMP1, s0, s0; /* s0 ^ x ^ rol(x,24) */               \
+	vpslld $2, RTMP0, RTMP1;                                    \
+	vpsrld $30, RTMP0, RTMP0;                                   \
+	vpxor RTMP0, s0, s0;                                        \
+	/* s0 ^ x ^ rol(x,2) ^ rol(x,10) ^ rol(x,18) ^ rol(x,24) */ \
+	vpxor RTMP1, s0, s0;
+
+	leaq (32*4)(%rdi), %rax;
+.align 16
+.Lroundloop_blk4:
+	ROUND(0, RA0, RA1, RA2, RA3);
+	ROUND(1, RA1, RA2, RA3, RA0);
+	ROUND(2, RA2, RA3, RA0, RA1);
+	ROUND(3, RA3, RA0, RA1, RA2);
+	leaq (4*4)(%rdi), %rdi;
+	cmpq %rax, %rdi;
+	jne .Lroundloop_blk4;
+
+#undef ROUND
+
+	vmovdqa .Lbswap128_mask rRIP, RTMP2;
+
+	transpose_4x4(RA0, RA1, RA2, RA3, RTMP0, RTMP1);
+	vpshufb RTMP2, RA0, RA0;
+	vpshufb RTMP2, RA1, RA1;
+	vpshufb RTMP2, RA2, RA2;
+	vpshufb RTMP2, RA3, RA3;
+
+	vmovdqu RA0, 0*16(%rsi);
+	cmpq $2, %rcx;
+	jb .Lblk4_store_output_done;
+	vmovdqu RA1, 1*16(%rsi);
+	je .Lblk4_store_output_done;
+	vmovdqu RA2, 2*16(%rsi);
+	cmpq $3, %rcx;
+	je .Lblk4_store_output_done;
+	vmovdqu RA3, 3*16(%rsi);
+
+.Lblk4_store_output_done:
+	vzeroall;
+	FRAME_END
+	ret;
+SYM_FUNC_END(sm4_aesni_avx_crypt4)
diff --git a/arch/x86/crypto/sm4_aesni_avx_glue.c b/arch/x86/crypto/sm4_aesni_avx_glue.c
new file mode 100644
index 000000000000..3f822fa1070a
--- /dev/null
+++ b/arch/x86/crypto/sm4_aesni_avx_glue.c
@@ -0,0 +1,115 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+/*
+ * SM4 Cipher Algorithm, AES-NI/AVX optimized.
+ * as specified in
+ * https://tools.ietf.org/id/draft-ribose-cfrg-sm4-04.html
+ *
+ * Copyright (c) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com>
+ */
+
+#include <linux/module.h>
+#include <linux/crypto.h>
+#include <asm/simd.h>
+#include <crypto/internal/simd.h>
+#include <crypto/sm4.h>
+
+asmlinkage void sm4_aesni_avx_expand_key(const u8 *key, u32 *rk_enc,
+				u32 *rk_dec, const u32 *fk, const u32 *ck);
+asmlinkage void sm4_aesni_avx_crypt4(const u32 *rk, u8 *dst,
+				const u8 *src, int nblocks);
+
+static int sm4_setkey(struct crypto_tfm *tfm, const u8 *in_key,
+			unsigned int key_len)
+{
+	struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	if (key_len != SM4_KEY_SIZE)
+		return -EINVAL;
+
+	if (crypto_simd_usable()) {
+		kernel_fpu_begin();
+		sm4_aesni_avx_expand_key(in_key, ctx->rkey_enc,
+				ctx->rkey_dec, crypto_sm4_fk, crypto_sm4_ck);
+		kernel_fpu_end();
+	} else
+		crypto_sm4_expand_key(ctx, in_key, key_len);
+
+	return 0;
+}
+
+static void sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	if (crypto_simd_usable()) {
+		kernel_fpu_begin();
+		sm4_aesni_avx_crypt4(ctx->rkey_enc, out, in, 1);
+		kernel_fpu_end();
+	} else
+		crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
+}
+
+static void sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
+{
+	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	if (crypto_simd_usable()) {
+		kernel_fpu_begin();
+		sm4_aesni_avx_crypt4(ctx->rkey_dec, out, in, 1);
+		kernel_fpu_end();
+	} else
+		crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
+}
+
+static struct crypto_alg sm4_asm_alg = {
+	.cra_name		= "sm4",
+	.cra_driver_name	= "sm4-asm",
+	.cra_priority		= 200,
+	.cra_flags		= CRYPTO_ALG_TYPE_CIPHER,
+	.cra_blocksize		= SM4_BLOCK_SIZE,
+	.cra_ctxsize		= sizeof(struct crypto_sm4_ctx),
+	.cra_module		= THIS_MODULE,
+	.cra_u			= {
+		.cipher = {
+			.cia_min_keysize	= SM4_KEY_SIZE,
+			.cia_max_keysize	= SM4_KEY_SIZE,
+			.cia_setkey		= sm4_setkey,
+			.cia_encrypt		= sm4_encrypt,
+			.cia_decrypt		= sm4_decrypt
+		}
+	}
+};
+
+static int __init sm4_init(void)
+{
+	const char *feature_name;
+
+	if (!boot_cpu_has(X86_FEATURE_AVX) ||
+	    !boot_cpu_has(X86_FEATURE_AES) ||
+	    !boot_cpu_has(X86_FEATURE_OSXSAVE)) {
+		pr_info("AVX or AES-NI instructions are not detected.\n");
+		return -ENODEV;
+	}
+
+	if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM,
+				&feature_name)) {
+		pr_info("CPU feature '%s' is not supported.\n", feature_name);
+		return -ENODEV;
+	}
+
+	return crypto_register_alg(&sm4_asm_alg);
+}
+
+static void __exit sm4_exit(void)
+{
+	crypto_unregister_alg(&sm4_asm_alg);
+}
+
+module_init(sm4_init);
+module_exit(sm4_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Tianjia Zhang <tianjia.zhang@linux.alibaba.com>");
+MODULE_DESCRIPTION("SM4 Cipher Algorithm, AES-NI/AVX optimized");
+MODULE_ALIAS_CRYPTO("sm4");
+MODULE_ALIAS_CRYPTO("sm4-asm");
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 4fbc9c080ca9..9f639395c667 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1570,6 +1570,35 @@ config CRYPTO_SM4
 
 	  If unsure, say N.
 
+config CRYPTO_SM4_AESNI_AVX_X86_64
+	tristate "SM4 cipher algorithm (x86_64/AES-NI/AVX)"
+	depends on X86 && 64BIT
+	select CRYPTO_SKCIPHER
+	select CRYPTO_SIMD
+	select CRYPTO_ALGAPI
+	select CRYPTO_LIB_SM4
+	help
+	  SM4 cipher algorithms (OSCCA GB/T 32907-2016) (x86_64/AES-NI/AVX).
+
+	  SM4 (GBT.32907-2016) is a cryptographic standard issued by the
+	  Organization of State Commercial Administration of China (OSCCA)
+	  as an authorized cryptographic algorithms for the use within China.
+
+	  SMS4 was originally created for use in protecting wireless
+	  networks, and is mandated in the Chinese National Standard for
+	  Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure)
+	  (GB.15629.11-2003).
+
+	  The latest SM4 standard (GBT.32907-2016) was proposed by OSCCA and
+	  standardized through TC 260 of the Standardization Administration
+	  of the People's Republic of China (SAC).
+
+	  The input, output, and key of SMS4 are each 128 bits.
+
+	  See also: <https://eprint.iacr.org/2008/329.pdf>
+
+	  If unsure, say N.
+
 config CRYPTO_TEA
 	tristate "TEA, XTEA and XETA cipher algorithms"
 	depends on CRYPTO_USER_API_ENABLE_OBSOLETE
-- 
2.19.1.3.ge56e4f7


^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
@ 2021-06-10 23:19   ` Eric Biggers
  2021-06-13 10:14     ` Tianjia Zhang
  2022-03-01 10:34   ` Jason A. Donenfeld
  1 sibling, 1 reply; 20+ messages in thread
From: Eric Biggers @ 2021-06-10 23:19 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

On Thu, Jun 10, 2021 at 09:44:57PM +0800, Tianjia Zhang wrote:
> Take the existing small footprint and mostly time invariant C code

It is using an S-box without any prefetching.  That doesn't look very
"time invariant" to me.

> diff --git a/lib/crypto/sm4.c b/lib/crypto/sm4.c
> new file mode 100644
> index 000000000000..cbdd14a254d0
[..]
> +/**
> + * crypto_sm4_expand_key - Expands the SM4 key as described in GB/T 32907-2016
> + * @ctx:	The location where the computed key will be stored.
> + * @in_key:	The supplied key.
> + * @key_len:	The length of the supplied key.
> + *
> + * Returns 0 on success. The function fails only if an invalid key size (or
> + * pointer) is supplied.
> + */
> +int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
> +			  unsigned int key_len)
[...]
> +/**
> + * crypto_sm4_do_crypt - Encrypt or decrypt a single SM4 block
> + * @rk:		The rkey_enc for encrypt or rkey_dec for decrypt
> + * @out:	Buffer to store output data
> + * @in: 	Buffer containing the input data
> + */
> +void crypto_sm4_do_crypt(const u32 *rk, u8 *out, const u8 *in)

Calling these "sm4_expandkey()" and "sm4_crypt_block()" would be more consistent
with the other lib/crypto/ functions such as the AES ones.  The other
lib/crypto/ functions don't have a "crypto_" prefix, as that is used for
functions related to the traditional crypto API rather than the library API.
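
I.e. keeping the same signatures and context struct, just dropping the
prefix, roughly:

	int sm4_expandkey(struct crypto_sm4_ctx *ctx, const u8 *in_key,
			  unsigned int key_len);
	void sm4_crypt_block(const u32 *rk, u8 *out, const u8 *in);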

- Eric

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation
  2021-06-10 13:44 ` [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation Tianjia Zhang
@ 2021-06-10 23:27   ` Eric Biggers
  2021-06-13 10:14     ` Tianjia Zhang
  0 siblings, 1 reply; 20+ messages in thread
From: Eric Biggers @ 2021-06-10 23:27 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

On Thu, Jun 10, 2021 at 09:44:59PM +0800, Tianjia Zhang wrote:
> This patch adds an AES-NI/AVX/x86_64 assembler implementation of the
> SM4 block cipher. Through two affine transforms, the AES S-Box can be
> used to compute the SM4 S-Box, which is what allows the AESENCLAST
> instruction to accelerate SM4.
> 

Benchmark results, please.

Also, is this passing the self-tests, including the fuzz tests?

> +/*
> + * void sm4_aesni_avx_expand_key(const u8 *key, u32 *rk_enc,
> + *                  u32 *rk_dec, const u32 *fk, const u32 *ck);
> + */
> +SYM_FUNC_START(sm4_aesni_avx_expand_key)
> +	/* input:
> +	 *	%rdi: 128-bit key
> +	 *	%rsi: rkey_enc
> +	 *	%rdx: rkey_dec
> +	 *	%rcx: fk array
> +	 *	%r8: ck array
> +	 */
> +	FRAME_BEGIN

Key expansion isn't performance-critical.  Can the C library version be used, or
does the key need to be expanded in a way specific to this x86 implementation?

> +/*
> + * void sm4_aesni_avx_crypt4(const u32 *rk, u8 *dst,
> + *                          const u8 *src, int nblocks)
> + */
> +SYM_FUNC_START(sm4_aesni_avx_crypt4)
> +	/* input:
> +	 *	%rdi: round key array, CTX
> +	 *	%rsi: dst (1..4 blocks)
> +	 *	%rdx: src (1..4 blocks)
> +	 *	%rcx: num blocks (1..4)
> +	 */
> +	FRAME_BEGIN
[...]

> +static void sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
> +{
> +	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
> +
> +	if (crypto_simd_usable()) {
> +		kernel_fpu_begin();
> +		sm4_aesni_avx_crypt4(ctx->rkey_enc, out, in, 1);
> +		kernel_fpu_end();
> +	} else
> +		crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
> +}
> +
> +static void sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
> +{
> +	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
> +
> +	if (crypto_simd_usable()) {
> +		kernel_fpu_begin();
> +		sm4_aesni_avx_crypt4(ctx->rkey_dec, out, in, 1);
> +		kernel_fpu_end();
> +	} else
> +		crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
> +}

Your assembly code appears to handle encrypting up to 4 blocks at a time.
However you have only wired this up to the "cipher" API which does 1 block at a
time.  Is this intentional?

What are your performance results with real-world chaining modes like XTS, and
do you plan to implement any of these modes directly?

> +
> +static struct crypto_alg sm4_asm_alg = {
> +	.cra_name		= "sm4",
> +	.cra_driver_name	= "sm4-asm",

In arch/x86/crypto/, "-asm" usually means a vanilla x86 assembly implementation
without any AES-NI, SSE, AVX, etc. instructions.  Calling this something like
"sm4-aesni-avx" would make more sense.  (Or is it actually avx2, not avx?)

> +config CRYPTO_SM4_AESNI_AVX_X86_64
> +	tristate "SM4 cipher algorithm (x86_64/AES-NI/AVX)"
> +	depends on X86 && 64BIT
> +	select CRYPTO_SKCIPHER
> +	select CRYPTO_SIMD
> +	select CRYPTO_ALGAPI
> +	select CRYPTO_LIB_SM4

As-is, neither CRYPTO_SKCIPHER nor CRYPTO_SIMD needs to be selected here.

> +	help
> +	  SM4 cipher algorithms (OSCCA GB/T 32907-2016) (x86_64/AES-NI/AVX).
> +
> +	  SM4 (GBT.32907-2016) is a cryptographic standard issued by the
> +	  Organization of State Commercial Administration of China (OSCCA)
> +	  as an authorized cryptographic algorithms for the use within China.
> +
> +	  SMS4 was originally created for use in protecting wireless
> +	  networks, and is mandated in the Chinese National Standard for
> +	  Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure)
> +	  (GB.15629.11-2003).
> +
> +	  The latest SM4 standard (GBT.32907-2016) was proposed by OSCCA and
> +	  standardized through TC 260 of the Standardization Administration
> +	  of the People's Republic of China (SAC).
> +
> +	  The input, output, and key of SMS4 are each 128 bits.
> +
> +	  See also: <https://eprint.iacr.org/2008/329.pdf>
> +
> +	  If unsure, say N.

This is the help text for the x86 implementation specifically.  Please don't
have boilerplate text about the algorithm here; that already exists for the
generic implementation.  The text should explain about the x86 implementation.
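
E.g. just a couple of sentences along these lines would be enough
(wording is only a suggestion):

	  SM4 cipher algorithm (OSCCA GB/T 32907-2016), accelerated using
	  x86_64 AES-NI/AVX instructions. This module provides the bare
	  cipher only; it does not accelerate any mode of operation itself.

	  If unsure, say N.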

- Eric

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2021-06-10 23:19   ` Eric Biggers
@ 2021-06-13 10:14     ` Tianjia Zhang
  0 siblings, 0 replies; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-13 10:14 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

Hi Eric,

On 6/11/21 7:19 AM, Eric Biggers wrote:
> On Thu, Jun 10, 2021 at 09:44:57PM +0800, Tianjia Zhang wrote:
>> Take the existing small footprint and mostly time invariant C code
> 
> It is using an S-box without any prefetching.  That doesn't look very
> "time invariant" to me.
> 

Thanks for your suggestion, will do in the next version of the patch.
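
(Something along these lines, as an illustrative sketch only -- the AES
library similarly pulls its whole table into the D-cache with
data-independent reads before any data-dependent lookups are done:)

	/* hypothetical helper: data-independent touch of every sbox cache line */
	static inline void sm4_prefetch_sbox(void)
	{
		int i;

		for (i = 0; i < 256; i += L1_CACHE_BYTES)
			(void)READ_ONCE(sbox[i]);
	}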

>> diff --git a/lib/crypto/sm4.c b/lib/crypto/sm4.c
>> new file mode 100644
>> index 000000000000..cbdd14a254d0
> [..]
>> +/**
>> + * crypto_sm4_expand_key - Expands the SM4 key as described in GB/T 32907-2016
>> + * @ctx:	The location where the computed key will be stored.
>> + * @in_key:	The supplied key.
>> + * @key_len:	The length of the supplied key.
>> + *
>> + * Returns 0 on success. The function fails only if an invalid key size (or
>> + * pointer) is supplied.
>> + */
>> +int crypto_sm4_expand_key(struct crypto_sm4_ctx *ctx, const u8 *in_key,
>> +			  unsigned int key_len)
> [...]
>> +/**
>> + * crypto_sm4_do_crypt - Encrypt or decrypt a single SM4 block
>> + * @rk:		The rkey_enc for encrypt or rkey_dec for decrypt
>> + * @out:	Buffer to store output data
>> + * @in: 	Buffer containing the input data
>> + */
>> +void crypto_sm4_do_crypt(const u32 *rk, u8 *out, const u8 *in)
> 
> Calling these "sm4_expandkey()" and "sm4_crypt_block()" would be more consistent
> with the other lib/crypto/ functions such as the AES ones.  The other
> lib/crypto/ functions don't have a "crypto_" prefix, as that is used for
> functions related to the traditional crypto API rather than the library API.

Ditto, thanks for pointing it out.

> 
> - Eric
> 

Best regards,
Tianjia

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation
  2021-06-10 23:27   ` Eric Biggers
@ 2021-06-13 10:14     ` Tianjia Zhang
  0 siblings, 0 replies; 20+ messages in thread
From: Tianjia Zhang @ 2021-06-13 10:14 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

Hi Eric,

On 6/11/21 7:27 AM, Eric Biggers wrote:
> On Thu, Jun 10, 2021 at 09:44:59PM +0800, Tianjia Zhang wrote:
>> This patch adds an AES-NI/AVX/x86_64 assembler implementation of the
>> SM4 block cipher. Through two affine transforms, the AES S-Box can be
>> used to compute the SM4 S-Box, which is what allows the AESENCLAST
>> instruction to accelerate SM4.
>>
> 
> Benchmark results, please.
> 
> Also, is this passing the self-tests, including the fuzz tests?
> 

I will provide this information in the next version.

>> +/*
>> + * void sm4_aesni_avx_expand_key(const u8 *key, u32 *rk_enc,
>> + *                  u32 *rk_dec, const u32 *fk, const u32 *ck);
>> + */
>> +SYM_FUNC_START(sm4_aesni_avx_expand_key)
>> +	/* input:
>> +	 *	%rdi: 128-bit key
>> +	 *	%rsi: rkey_enc
>> +	 *	%rdx: rkey_dec
>> +	 *	%rcx: fk array
>> +	 *	%r8: ck array
>> +	 */
>> +	FRAME_BEGIN
> 
> Key expansion isn't performance-critical.  Can the C library version be used, or
> does the key need to be expanded in a way specific to this x86 implementation?
> 

It can be replaced by the common C library implementation. Since key
expansion is not called frequently, an instruction-set-specific
optimization does not bring much benefit, so it is certainly possible
to drop this assembler implementation.

>> +/*
>> + * void sm4_aesni_avx_crypt4(const u32 *rk, u8 *dst,
>> + *                          const u8 *src, int nblocks)
>> + */
>> +SYM_FUNC_START(sm4_aesni_avx_crypt4)
>> +	/* input:
>> +	 *	%rdi: round key array, CTX
>> +	 *	%rsi: dst (1..4 blocks)
>> +	 *	%rdx: src (1..4 blocks)
>> +	 *	%rcx: num blocks (1..4)
>> +	 */
>> +	FRAME_BEGIN
> [...]
> 
>> +static void sm4_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
>> +{
>> +	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
>> +
>> +	if (crypto_simd_usable()) {
>> +		kernel_fpu_begin();
>> +		sm4_aesni_avx_crypt4(ctx->rkey_enc, out, in, 1);
>> +		kernel_fpu_end();
>> +	} else
>> +		crypto_sm4_do_crypt(ctx->rkey_enc, out, in);
>> +}
>> +
>> +static void sm4_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
>> +{
>> +	const struct crypto_sm4_ctx *ctx = crypto_tfm_ctx(tfm);
>> +
>> +	if (crypto_simd_usable()) {
>> +		kernel_fpu_begin();
>> +		sm4_aesni_avx_crypt4(ctx->rkey_dec, out, in, 1);
>> +		kernel_fpu_end();
>> +	} else
>> +		crypto_sm4_do_crypt(ctx->rkey_dec, out, in);
>> +}
> 
> Your assembly code appears to handle encrypting up to 4 blocks at a time.
> However you have only wired this up to the "cipher" API which does 1 block at a
> time.  Is this intentional?
> 
> What are your performance results with real-world chaining modes like XTS, and
> do you plan to implement any of these modes directly?
> 

This implementation is intentional. For now only a generic single-block
cipher is wired up, and it shows no obvious performance improvement on
its own. The real gain comes from making full use of the
four-blocks-in-parallel encryption; that work is still under
development, and I will implement mode-specific optimizations such as
XTS in follow-up patches.
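
For the record, a rough sketch of how the 4-block primitive could be
hooked into a skcipher walk (illustrative only; the skcipher
registration, setkey and other mode-specific details are omitted):

	static int ecb_do_crypt(struct skcipher_request *req, const u32 *rk)
	{
		struct skcipher_walk walk;
		unsigned int nbytes;
		int err;

		err = skcipher_walk_virt(&walk, req, false);

		while ((nbytes = walk.nbytes) > 0) {
			const u8 *src = walk.src.virt.addr;
			u8 *dst = walk.dst.virt.addr;

			kernel_fpu_begin();
			while (nbytes >= 4 * SM4_BLOCK_SIZE) {
				sm4_aesni_avx_crypt4(rk, dst, src, 4);
				src += 4 * SM4_BLOCK_SIZE;
				dst += 4 * SM4_BLOCK_SIZE;
				nbytes -= 4 * SM4_BLOCK_SIZE;
			}
			/* 1..3 remaining full blocks in a single extra call */
			if (nbytes >= SM4_BLOCK_SIZE) {
				sm4_aesni_avx_crypt4(rk, dst, src,
						     nbytes / SM4_BLOCK_SIZE);
				nbytes %= SM4_BLOCK_SIZE;
			}
			kernel_fpu_end();

			err = skcipher_walk_done(&walk, nbytes);
		}

		return err;
	}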

>> +
>> +static struct crypto_alg sm4_asm_alg = {
>> +	.cra_name		= "sm4",
>> +	.cra_driver_name	= "sm4-asm",
> 
> In arch/x86/crypto/, "-asm" usually means a vanilla x86 assembly implementation
> without any AES-NI, SSE, AVX, etc. instructions.  Calling this something like
> "sm4-aesni-avx" would make more sense.  (Or is it actually avx2, not avx?)
> 

Will do in the next version of the patch.

>> +config CRYPTO_SM4_AESNI_AVX_X86_64
>> +	tristate "SM4 cipher algorithm (x86_64/AES-NI/AVX)"
>> +	depends on X86 && 64BIT
>> +	select CRYPTO_SKCIPHER
>> +	select CRYPTO_SIMD
>> +	select CRYPTO_ALGAPI
>> +	select CRYPTO_LIB_SM4
> 
> As-is, neither CRYPTO_SKCIPHER nor CRYPTO_SIMD needs to be selected here.
> 

ditto.

>> +	help
>> +	  SM4 cipher algorithms (OSCCA GB/T 32907-2016) (x86_64/AES-NI/AVX).
>> +
>> +	  SM4 (GBT.32907-2016) is a cryptographic standard issued by the
>> +	  Organization of State Commercial Administration of China (OSCCA)
>> +	  as an authorized cryptographic algorithms for the use within China.
>> +
>> +	  SMS4 was originally created for use in protecting wireless
>> +	  networks, and is mandated in the Chinese National Standard for
>> +	  Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure)
>> +	  (GB.15629.11-2003).
>> +
>> +	  The latest SM4 standard (GBT.32907-2016) was proposed by OSCCA and
>> +	  standardized through TC 260 of the Standardization Administration
>> +	  of the People's Republic of China (SAC).
>> +
>> +	  The input, output, and key of SMS4 are each 128 bits.
>> +
>> +	  See also: <https://eprint.iacr.org/2008/329.pdf>
>> +
>> +	  If unsure, say N.
> 
> This is the help text for the x86 implementation specifically.  Please don't
> have boilerplate text about the algorithm here; that already exists for the
> generic implementation.  The text should explain about the x86 implementation.
> 

ditto.

> - Eric
> 

Thanks for your suggestion.

Cheers,
Tianjia

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
  2021-06-10 23:19   ` Eric Biggers
@ 2022-03-01 10:34   ` Jason A. Donenfeld
  2022-03-01 11:50     ` Tianjia Zhang
  2022-03-02  0:24     ` Herbert Xu
  1 sibling, 2 replies; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-01 10:34 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

>  lib/crypto/Kconfig   |   3 +
>  lib/crypto/Makefile  |   3 +
>  lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++

If this is only used by the crypto API, it does not belong in
lib/crypto. I understand you want fallback generic code for the SIMD
implementation, but we've generally done that in crypto/ when the use
case is only the crypto API. Can you move this to the right place?

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-01 10:34   ` Jason A. Donenfeld
@ 2022-03-01 11:50     ` Tianjia Zhang
  2022-03-01 13:22       ` Jason A. Donenfeld
  2022-03-02  0:24     ` Herbert Xu
  1 sibling, 1 reply; 20+ messages in thread
From: Tianjia Zhang @ 2022-03-01 11:50 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

Hi Jason,

On 3/1/22 6:34 PM, Jason A. Donenfeld wrote:
>>   lib/crypto/Kconfig   |   3 +
>>   lib/crypto/Makefile  |   3 +
>>   lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> 
> If this is only used by the crypto API, it does not belong in
> lib/crypto. I understand you want fallback generic code for the SIMD
> implementation, but we've generally done that in crypto/ when the use
> case is only the crypto API. Can you move this to the right place?

This is not used only by the crypto API; it is also used by the SIMD
implementations on the x86 and arm architectures, mainly to handle the
blocks left over after the SIMD bulk processing. In general, running
the SIMD path on a single block performs worse than the generic
software implementation.
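
As an illustrative fragment (the helper name is made up and this is
not the exact code in the tree, with sm4_crypt_block() standing for
the library's single-block routine in lib/crypto/sm4.c), the arch glue
ends up with a pattern roughly like this: the SIMD path consumes four
blocks per call, and whatever is left over goes through the C library,
because another kernel_fpu_begin()/end() round trip for a single block
costs more than simply computing that block in C:

/* Caller must ensure the FPU is usable, e.g. via crypto_simd_usable(). */
static void sm4_avx_crypt_blocks(const u32 *rkey, u8 *dst, const u8 *src,
				 unsigned int nblocks)
{
	kernel_fpu_begin();
	for (; nblocks >= 4; nblocks -= 4) {
		sm4_aesni_avx_crypt4(rkey, dst, src, 4);
		src += 4 * SM4_BLOCK_SIZE;
		dst += 4 * SM4_BLOCK_SIZE;
	}
	kernel_fpu_end();

	/* Tail: 1-3 blocks handled by the generic library code. */
	for (; nblocks; nblocks--) {
		sm4_crypt_block(rkey, dst, src);
		src += SM4_BLOCK_SIZE;
		dst += SM4_BLOCK_SIZE;
	}
}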

Kind regards,
Tianjia

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-01 11:50     ` Tianjia Zhang
@ 2022-03-01 13:22       ` Jason A. Donenfeld
  2022-03-01 14:17         ` Jason A. Donenfeld
  0 siblings, 1 reply; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-01 13:22 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, X86 ML, Linux Crypto Mailing List,
	linux-arm-kernel, LKML

Hi Tianjia,

On Tue, Mar 1, 2022 at 12:50 PM Tianjia Zhang
<tianjia.zhang@linux.alibaba.com> wrote:
>
> Hi Jason,
>
> On 3/1/22 6:34 PM, Jason A. Donenfeld wrote:
> >>   lib/crypto/Kconfig   |   3 +
> >>   lib/crypto/Makefile  |   3 +
> >>   lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> >
> > If this is only used by the crypto API, it does not belong in
> > lib/crypto. I understand you want fallback generic code for the SIMD
> > implementation, but we've generally done that in crypto/ when the use
> > case is only the crypto API. Can you move this to the right place?
>
> This is not used only by the crypto API; it is also used by the SIMD
> implementations on the x86 and arm architectures, mainly to handle the
> blocks left over after the SIMD bulk processing. In general, running
> the SIMD path on a single block performs worse than the generic
> software implementation.

Yes, and those accelerated implementations are part of the crypto API,
and are not used by anything except the crypto API. Hence this should
be in crypto/, just like everything else that is /only/ used for the
cryto API. lib/crypto/ is for in-kernel users of crypto via normal
code paths. sm4.c does not belong in lib/crypto/ and should be moved.

You additionally export symbols of those SIMD implementations in
arch/*/crypto/, which is not correct either, since nothing in the tree
uses those symbols. Please remove those EXPORT_SYMBOL directives as
well. Those functions can be static, and do not need to be declared in
the .h file.

Thanks,
Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-01 13:22       ` Jason A. Donenfeld
@ 2022-03-01 14:17         ` Jason A. Donenfeld
  0 siblings, 0 replies; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-01 14:17 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, X86 ML, Linux Crypto Mailing List,
	linux-arm-kernel, LKML

On Tue, Mar 1, 2022 at 2:22 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> You additional export symbols of those SIMD implementations in
> arch/crypto/, which is not correct either, since nothing in the tree
> uses those symbols. Please remove those EXPORT_SYMBOL directives as
> well. Those functions can be static, and do not need to be declared in
> the .h file.

Actually, this part isn't quite right, because the avx implementation
is shared with the avx2 implementation. However,

> Yes, and those accelerated implementations are part of the crypto API,
> and are not used by anything except the crypto API. Hence this should
> be in crypto/, just like everything else that is /only/ used for the
> crypto API. lib/crypto/ is for in-kernel users of crypto via normal
> code paths. sm4.c does not belong in lib/crypto/ and should be moved.

This still holds.

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-01 10:34   ` Jason A. Donenfeld
  2022-03-01 11:50     ` Tianjia Zhang
@ 2022-03-02  0:24     ` Herbert Xu
  2022-03-02  0:26       ` Jason A. Donenfeld
  1 sibling, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2022-03-02  0:24 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Tianjia Zhang, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, x86, linux-crypto, linux-arm-kernel,
	linux-kernel

On Tue, Mar 01, 2022 at 11:34:28AM +0100, Jason A. Donenfeld wrote:
> >  lib/crypto/Kconfig   |   3 +
> >  lib/crypto/Makefile  |   3 +
> >  lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> 
> If this is only used by the crypto API, it does not belong in
> lib/crypto.

Nope there is no such rule.  lib/crypto is fine if you're adding
code that is shared between crypto and arch/*/crypto.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-02  0:24     ` Herbert Xu
@ 2022-03-02  0:26       ` Jason A. Donenfeld
  2022-03-02 22:23         ` Eric Biggers
  0 siblings, 1 reply; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-02  0:26 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Tianjia Zhang, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, X86 ML, Linux Crypto Mailing List,
	linux-arm-kernel, LKML

On Wed, Mar 2, 2022 at 1:24 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Tue, Mar 01, 2022 at 11:34:28AM +0100, Jason A. Donenfeld wrote:
> > >  lib/crypto/Kconfig   |   3 +
> > >  lib/crypto/Makefile  |   3 +
> > >  lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> >
> > If this is only used by the crypto API, it does not belong in
> > lib/crypto.
>
> Nope there is no such rule.  lib/crypto is fine if you're adding
> code that is shared between crypto and arch/*/crypto.

The sprawling madness continues then... Noted.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-02  0:26       ` Jason A. Donenfeld
@ 2022-03-02 22:23         ` Eric Biggers
  2022-03-11 23:03           ` Jason A. Donenfeld
  0 siblings, 1 reply; 20+ messages in thread
From: Eric Biggers @ 2022-03-02 22:23 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Herbert Xu, Tianjia Zhang, David S. Miller, Catalin Marinas,
	Will Deacon, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Gilad Ben-Yossef, Ard Biesheuvel,
	Markku-Juhani O . Saarinen, Jussi Kivilinna, X86 ML,
	Linux Crypto Mailing List, linux-arm-kernel, LKML

On Wed, Mar 02, 2022 at 01:26:13AM +0100, Jason A. Donenfeld wrote:
> On Wed, Mar 2, 2022 at 1:24 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
> >
> > On Tue, Mar 01, 2022 at 11:34:28AM +0100, Jason A. Donenfeld wrote:
> > > >  lib/crypto/Kconfig   |   3 +
> > > >  lib/crypto/Makefile  |   3 +
> > > >  lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> > >
> > > If this is only used by the crypto API, it does not belong in
> > > lib/crypto.
> >
> > Nope there is no such rule.  lib/crypto is fine if you're adding
> > code that is shared between crypto and arch/*/crypto.
> 
> The sprawling madness continues then... Noted.

I think it would make more sense for this code to be in crypto/, for the reason
that Jason gave.

- Eric

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-02 22:23         ` Eric Biggers
@ 2022-03-11 23:03           ` Jason A. Donenfeld
  2022-03-14  2:32             ` Tianjia Zhang
  0 siblings, 1 reply; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-11 23:03 UTC (permalink / raw)
  To: Eric Biggers
  Cc: Herbert Xu, Tianjia Zhang, David S. Miller, Catalin Marinas,
	Will Deacon, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Gilad Ben-Yossef, Ard Biesheuvel,
	Markku-Juhani O . Saarinen, Jussi Kivilinna, X86 ML,
	Linux Crypto Mailing List, linux-arm-kernel, LKML

Hi Tianjia,

On Wed, Mar 2, 2022 at 3:23 PM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Wed, Mar 02, 2022 at 01:26:13AM +0100, Jason A. Donenfeld wrote:
> > On Wed, Mar 2, 2022 at 1:24 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
> > >
> > > On Tue, Mar 01, 2022 at 11:34:28AM +0100, Jason A. Donenfeld wrote:
> > > > >  lib/crypto/Kconfig   |   3 +
> > > > >  lib/crypto/Makefile  |   3 +
> > > > >  lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
> > > >
> > > > If this is only used by the crypto API, it does not belong in
> > > > lib/crypto.
> > >
> > > Nope there is no such rule.  lib/crypto is fine if you're adding
> > > code that is shared between crypto and arch/*/crypto.
> >
> > The sprawling madness continues then... Noted.
>
> I think it would make more sense for this code to be in crypto/, for the reason
> that Jason gave.

Were you planning on submitting a patch for this?

Thanks,
Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-11 23:03           ` Jason A. Donenfeld
@ 2022-03-14  2:32             ` Tianjia Zhang
  2022-03-14  2:40               ` Jason A. Donenfeld
  0 siblings, 1 reply; 20+ messages in thread
From: Tianjia Zhang @ 2022-03-14  2:32 UTC (permalink / raw)
  To: Jason A. Donenfeld, Eric Biggers
  Cc: Herbert Xu, David S. Miller, Catalin Marinas, Will Deacon,
	Thomas Gleixner, Ingo Molnar, Borislav Petkov, H. Peter Anvin,
	Gilad Ben-Yossef, Ard Biesheuvel, Markku-Juhani O . Saarinen,
	Jussi Kivilinna, X86 ML, Linux Crypto Mailing List,
	linux-arm-kernel, LKML

Hi Jason,

On 3/12/22 7:03 AM, Jason A. Donenfeld wrote:
> Hi Tianjia,
> 
> On Wed, Mar 2, 2022 at 3:23 PM Eric Biggers <ebiggers@kernel.org> wrote:
>>
>> On Wed, Mar 02, 2022 at 01:26:13AM +0100, Jason A. Donenfeld wrote:
>>> On Wed, Mar 2, 2022 at 1:24 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>>>>
>>>> On Tue, Mar 01, 2022 at 11:34:28AM +0100, Jason A. Donenfeld wrote:
>>>>>>   lib/crypto/Kconfig   |   3 +
>>>>>>   lib/crypto/Makefile  |   3 +
>>>>>>   lib/crypto/sm4.c     | 184 +++++++++++++++++++++++++++++++++++++++++++
>>>>>
>>>>> If this is only used by the crypto API, it does not belong in
>>>>> lib/crypto.
>>>>
>>>> Nope there is no such rule.  lib/crypto is fine if you're adding
>>>> code that is shared between crypto and arch/*/crypto.
>>>
>>> The sprawling madness continues then... Noted.
>>
>> I think it would make more sense for this code to be in crypto/, for the reason
>> that Jason gave.
> 
> Were you planning on submitting a patch for this?
> 
> Thanks,
> Jason

I agree with Herbert that this move is not necessary, and the 
community does not currently agree on this point. I'd be happy to do 
the work if a more compelling reason comes up.

Thanks so much for your suggestion.

Best regards,
Tianjia

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-14  2:32             ` Tianjia Zhang
@ 2022-03-14  2:40               ` Jason A. Donenfeld
  2022-03-14  2:45                 ` Herbert Xu
  0 siblings, 1 reply; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-14  2:40 UTC (permalink / raw)
  To: Tianjia Zhang
  Cc: Eric Biggers, Herbert Xu, David S. Miller, Catalin Marinas,
	Will Deacon, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Gilad Ben-Yossef, Ard Biesheuvel,
	Markku-Juhani O . Saarinen, Jussi Kivilinna, X86 ML,
	Linux Crypto Mailing List, linux-arm-kernel, LKML

Hi Herbert,

Are you willing to consider the views of Eric and me? Or is this a
hard nack from you?

Thanks,
Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-14  2:40               ` Jason A. Donenfeld
@ 2022-03-14  2:45                 ` Herbert Xu
  2022-03-14  2:46                   ` Jason A. Donenfeld
  0 siblings, 1 reply; 20+ messages in thread
From: Herbert Xu @ 2022-03-14  2:45 UTC (permalink / raw)
  To: Jason A. Donenfeld
  Cc: Tianjia Zhang, Eric Biggers, David S. Miller, Catalin Marinas,
	Will Deacon, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Gilad Ben-Yossef, Ard Biesheuvel,
	Markku-Juhani O . Saarinen, Jussi Kivilinna, X86 ML,
	Linux Crypto Mailing List, linux-arm-kernel, LKML

On Sun, Mar 13, 2022 at 08:40:00PM -0600, Jason A. Donenfeld wrote:
> Hi Herbert,
> 
> Are you willing to consider the views of Eric and me? Or is this a
> hard nack from you?

Please present your patch to move the code with the reasoning
for the move and then I can consider it.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code
  2022-03-14  2:45                 ` Herbert Xu
@ 2022-03-14  2:46                   ` Jason A. Donenfeld
  0 siblings, 0 replies; 20+ messages in thread
From: Jason A. Donenfeld @ 2022-03-14  2:46 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Tianjia Zhang, Eric Biggers, David S. Miller, Catalin Marinas,
	Will Deacon, Thomas Gleixner, Ingo Molnar, Borislav Petkov,
	H. Peter Anvin, Gilad Ben-Yossef, Ard Biesheuvel,
	Markku-Juhani O . Saarinen, Jussi Kivilinna, X86 ML,
	Linux Crypto Mailing List, linux-arm-kernel, LKML

Hi Herbert,

On Sun, Mar 13, 2022 at 8:45 PM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Sun, Mar 13, 2022 at 08:40:00PM -0600, Jason A. Donenfeld wrote:
> > Hi Herbert,
> >
> > Are you willing to consider the views of Eric and me? Or is this a
> > hard nack from you?
>
> Please present your patch to move the code with the reasoning
> for the move and then I can consider it.

Okay, no problem, will do.

Jason

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2022-03-14  2:46 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-06-10 13:44 [PATCH 0/3] Introduce x86 assembler accelerated implementation for SM4 algorithm Tianjia Zhang
2021-06-10 13:44 ` [PATCH 1/3] crypto: sm4 - create SM4 library based on sm4 generic code Tianjia Zhang
2021-06-10 23:19   ` Eric Biggers
2021-06-13 10:14     ` Tianjia Zhang
2022-03-01 10:34   ` Jason A. Donenfeld
2022-03-01 11:50     ` Tianjia Zhang
2022-03-01 13:22       ` Jason A. Donenfeld
2022-03-01 14:17         ` Jason A. Donenfeld
2022-03-02  0:24     ` Herbert Xu
2022-03-02  0:26       ` Jason A. Donenfeld
2022-03-02 22:23         ` Eric Biggers
2022-03-11 23:03           ` Jason A. Donenfeld
2022-03-14  2:32             ` Tianjia Zhang
2022-03-14  2:40               ` Jason A. Donenfeld
2022-03-14  2:45                 ` Herbert Xu
2022-03-14  2:46                   ` Jason A. Donenfeld
2021-06-10 13:44 ` [PATCH 2/3] crypto: arm64/sm4-ce - Make dependent on sm4 library instead of sm4-generic Tianjia Zhang
2021-06-10 13:44 ` [PATCH 3/3] crypto: x86/sm4 - add AES-NI/AVX/x86_64 assembler implementation Tianjia Zhang
2021-06-10 23:27   ` Eric Biggers
2021-06-13 10:14     ` Tianjia Zhang
