linux-crypto.vger.kernel.org archive mirror
* [PATCH v3 0/8] crypto: HCTR2 support
@ 2022-03-15 23:00 Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 1/8] crypto: xctr - Add XCTR support Nathan Huckleberry
                   ` (7 more replies)
  0 siblings, 8 replies; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

HCTR2 is a length-preserving encryption mode that is efficient on
processors with instructions to accelerate AES and carryless
multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM
processors with the ARMv8 Crypto Extensions.

HCTR2 is specified in https://ia.cr/2021/1441 "Length-preserving
encryption with HCTR2" which shows that if AES is secure and HCTR2 is
instantiated with AES, then HCTR2 is secure.  Reference code and test
vectors are at https://github.com/google/hctr2.

As a length-preserving encryption mode, HCTR2 is suitable for applications
such as storage encryption where ciphertext expansion is not possible, and
thus authenticated encryption cannot be used.  Currently, such
applications usually use XTS, or in some cases Adiantum.  XTS has the
disadvantage that it is a narrow-block mode: flipping a single bit of the
plaintext or ciphertext changes only the corresponding 16-byte block of
the output.  This reveals more information to an attacker than necessary.

HCTR2 is a wide-block mode, so it provides a stronger security property: a
bitflip will change the entire message.  HCTR2 is somewhat similar to
Adiantum, which is also a wide-block mode.  However, HCTR2 is designed to
take advantage of existing crypto instructions, while Adiantum targets
devices without such hardware support.  Adiantum is also designed with
longer messages in mind, while HCTR2 is designed to be efficient even on
short messages.

The first intended use of this mode in the kernel is for the encryption of
filenames, where for efficiency reasons encryption must be fully
deterministic (only one ciphertext for each plaintext) and the existing
CBC solution leaks more information than necessary for filenames with
common prefixes.

HCTR2 uses two passes of an ε-almost-∆-universal hash function called
POLYVAL and one pass of a block cipher mode called XCTR.  POLYVAL is a
polynomial hash designed for efficiency on modern processors and was
originally specified for use in AES-GCM-SIV (RFC 8452).  XCTR mode is a
variant of CTR mode that is more efficient on little-endian machines.
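
For reference, one HCTR2 encryption works roughly as follows (this
paraphrases the paper's notation, so treat it as a sketch rather than the
normative definition): the message is split into its first 16-byte block M
and the remainder N, E is the block cipher, and h and L are subkeys derived
by encrypting the constants 0 and 1.

	MM = M ^ POLYVAL_h(tweak, N)
	UU = E(MM)
	S  = MM ^ UU ^ L
	V  = N ^ XCTR(S), truncated to the length of N
	U  = UU ^ POLYVAL_h(tweak, V)
	ciphertext = U || V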

This patchset adds HCTR2 to Linux's crypto API, including generic
implementations of XCTR and POLYVAL, hardware-accelerated implementations
of XCTR and POLYVAL for both x86-64 and ARM64, a template implementation
of HCTR2, and an fscrypt policy for using HCTR2 for filename encryption.

Changes in v3:
 * Improve testvec coverage for XCTR, POLYVAL and HCTR2
 * Fix endianness bug in xctr.c
 * Fix alignment issues in polyval-generic.c
 * Optimize hctr2.c by exporting/importing hash states
 * Fix blockcipher name derivation in hctr2.c
 * Move x86-64 XCTR implementation into aes_ctrby8_avx-x86_64.S
 * Reuse ARM64 CTR mode tail handling in ARM64 XCTR
 * Fix x86-64 POLYVAL comments
 * Fix x86-64 POLYVAL key_powers type to match asm
 * Fix ARM64 POLYVAL comments
 * Fix ARM64 POLYVAL key_powers type to match asm
 * Add XTS + HCTR2 policy to fscrypt

Nathan Huckleberry (8):
  crypto: xctr - Add XCTR support
  crypto: polyval - Add POLYVAL support
  crypto: hctr2 - Add HCTR2 support
  crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
  crypto: arm64/aes-xctr: Add accelerated implementation of XCTR
  crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of
    POLYVAL
  crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
  fscrypt: Add HCTR2 support for filename encryption

 Documentation/filesystems/fscrypt.rst   |   19 +-
 arch/arm64/crypto/Kconfig               |   11 +-
 arch/arm64/crypto/Makefile              |    3 +
 arch/arm64/crypto/aes-glue.c            |   65 +-
 arch/arm64/crypto/aes-modes.S           |  134 ++
 arch/arm64/crypto/polyval-ce-core.S     |  372 ++++++
 arch/arm64/crypto/polyval-ce-glue.c     |  363 ++++++
 arch/x86/crypto/Makefile                |    3 +
 arch/x86/crypto/aes_ctrby8_avx-x86_64.S |  233 ++--
 arch/x86/crypto/aesni-intel_asm.S       |   70 ++
 arch/x86/crypto/aesni-intel_glue.c      |   89 ++
 arch/x86/crypto/polyval-clmulni_asm.S   |  376 ++++++
 arch/x86/crypto/polyval-clmulni_glue.c  |  361 ++++++
 crypto/Kconfig                          |   40 +-
 crypto/Makefile                         |    3 +
 crypto/hctr2.c                          |  580 +++++++++
 crypto/polyval-generic.c                |  205 +++
 crypto/tcrypt.c                         |   10 +
 crypto/testmgr.c                        |   20 +
 crypto/testmgr.h                        | 1536 +++++++++++++++++++++++
 crypto/xctr.c                           |  193 +++
 fs/crypto/fscrypt_private.h             |    2 +-
 fs/crypto/keysetup.c                    |    7 +
 fs/crypto/policy.c                      |    4 +
 include/crypto/polyval.h                |   17 +
 include/uapi/linux/fscrypt.h            |    3 +-
 tools/include/uapi/linux/fscrypt.h      |    3 +-
 27 files changed, 4633 insertions(+), 89 deletions(-)
 create mode 100644 arch/arm64/crypto/polyval-ce-core.S
 create mode 100644 arch/arm64/crypto/polyval-ce-glue.c
 create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S
 create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c
 create mode 100644 crypto/hctr2.c
 create mode 100644 crypto/polyval-generic.c
 create mode 100644 crypto/xctr.c
 create mode 100644 include/crypto/polyval.h

-- 
2.35.1.723.g4982287a31-goog



* [PATCH v3 1/8] crypto: xctr - Add XCTR support
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-22  5:23   ` Eric Biggers
  2022-03-15 23:00 ` [PATCH v3 2/8] crypto: polyval - Add POLYVAL support Nathan Huckleberry
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add a generic implementation of XCTR mode as a template.  XCTR is a
blockcipher mode similar to CTR mode.  XCTR uses XORs and little-endian
addition rather than big-endian arithmetic, which has two advantages: it
is slightly faster on little-endian CPUs, and it is less likely to be
implemented incorrectly, since integer overflows are not possible on
practical input sizes.  XCTR is used as a component to implement HCTR2.
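
To make the difference concrete, here is a standalone C sketch
(illustration only, not part of this patch) of how the i-th XCTR keystream
block is derived; encrypt_block() is a hypothetical stand-in for one
invocation of the underlying 16-byte block cipher:

	#include <stdint.h>
	#include <string.h>

	/* Hypothetical helper: out = E_K(in) for a 16-byte block cipher. */
	void encrypt_block(const void *key, const uint8_t in[16], uint8_t out[16]);

	/*
	 * XCTR: the i-th keystream block is E_K(IV ^ le32(i)), with i
	 * counted from 1.  CTR instead encrypts a counter formed by
	 * big-endian addition over the whole block, which needs multi-limb
	 * carry handling.
	 */
	static void xctr_keystream_block(const void *key, const uint8_t iv[16],
					 uint32_t i, uint8_t out[16])
	{
		uint8_t block[16];

		memcpy(block, iv, 16);
		/* XOR the 32-bit little-endian counter into the low bytes. */
		block[0] ^= (uint8_t)i;
		block[1] ^= (uint8_t)(i >> 8);
		block[2] ^= (uint8_t)(i >> 16);
		block[3] ^= (uint8_t)(i >> 24);
		encrypt_block(key, block, out);
	}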

More information on XCTR mode can be found in the HCTR2 paper:
https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 crypto/Kconfig   |   9 +
 crypto/Makefile  |   1 +
 crypto/tcrypt.c  |   1 +
 crypto/testmgr.c |   6 +
 crypto/testmgr.h | 693 +++++++++++++++++++++++++++++++++++++++++++++++
 crypto/xctr.c    | 193 +++++++++++++
 6 files changed, 903 insertions(+)
 create mode 100644 crypto/xctr.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index d6d7e84bb7f8..47752aaa16ff 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -460,6 +460,15 @@ config CRYPTO_PCBC
 	  PCBC: Propagating Cipher Block Chaining mode
 	  This block cipher algorithm is required for RxRPC.
 
+config CRYPTO_XCTR
+	tristate
+	select CRYPTO_SKCIPHER
+	select CRYPTO_MANAGER
+	help
+	  XCTR: XOR Counter mode. This blockcipher mode is a variant of CTR mode
+	  using XORs and little-endian addition rather than big-endian arithmetic.
+	  XCTR mode is used to implement HCTR2.
+
 config CRYPTO_XTS
 	tristate "XTS support"
 	select CRYPTO_SKCIPHER
diff --git a/crypto/Makefile b/crypto/Makefile
index d76bff8d0ffd..6b3fe3df1489 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -93,6 +93,7 @@ obj-$(CONFIG_CRYPTO_CTS) += cts.o
 obj-$(CONFIG_CRYPTO_LRW) += lrw.o
 obj-$(CONFIG_CRYPTO_XTS) += xts.o
 obj-$(CONFIG_CRYPTO_CTR) += ctr.o
+obj-$(CONFIG_CRYPTO_XCTR) += xctr.o
 obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
 obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o
 obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index 2bacf8384f59..fd671d0e2012 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1556,6 +1556,7 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("rfc3686(ctr(aes))");
 		ret += tcrypt_test("ofb(aes)");
 		ret += tcrypt_test("cfb(aes)");
+		ret += tcrypt_test("xctr(aes)");
 		break;
 
 	case 11:
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 2d632a285869..fbb12d7d78af 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5490,6 +5490,12 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.cipher = __VECS(xchacha20_tv_template)
 		},
+	}, {
+		.alg = "xctr(aes)",
+		.test = alg_test_skcipher,
+		.suite = {
+			.cipher = __VECS(aes_xctr_tv_template)
+		}
 	}, {
 		.alg = "xts(aes)",
 		.generic_driver = "xts(ecb(aes-generic))",
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index d1aa90993bbd..bf4ff97eeb37 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -34236,4 +34236,697 @@ static const struct hash_testvec blakes2s_256_tv_template[] = {{
 			  0xd5, 0x06, 0xb5, 0x3a, 0x7c, 0x7a, 0x65, 0x1d, },
 }};
 
+/*
+ * Test vectors generated using https://github.com/google/hctr2
+ */
+static const struct cipher_testvec aes_xctr_tv_template[] = {
+	{
+		.key	= "\x9c\x8d\xc4\xbd\x71\x36\xdc\x82"
+			  "\x7c\xa1\xca\xa3\x23\x5a\xdb\xa4",
+		.iv	= "\x8d\xe7\xa5\x6a\x95\x86\x42\xde"
+			  "\xba\xea\x6e\x69\x03\x33\x86\x0f",
+		.ptext	= "\xbd",
+		.ctext	= "\xb9",
+		.klen	= 16,
+		.len	= 1,
+	},
+	{
+		.key	= "\xbc\x1b\x12\x0c\x3f\x18\xcc\x1f"
+			  "\x5a\x1d\xab\x81\xa8\x68\x7c\x63",
+		.iv	= "\x22\xc1\xdd\x25\x0b\x18\xcb\xa5"
+			  "\x4a\xda\x15\x07\x73\xd9\x88\x10",
+		.ptext	= "\x24\x6e\x64\xc6\x15\x26\x9c\xda"
+			  "\x2a\x4b\x57\x12\xff\x7c\xd6\xb5",
+		.ctext	= "\xd6\x47\x8d\x58\x92\xb2\x84\xf9"
+			  "\xb7\xee\x0d\x98\xa1\x39\x4d\x8f",
+		.klen	= 16,
+		.len	= 16,
+	},
+	{
+		.key	= "\x44\x03\xbf\x4c\x30\xf0\xa7\xd6"
+			  "\xbd\x54\xbb\x66\x8e\xa6\x0e\x8a",
+		.iv	= "\xe6\xf7\x26\xdf\x8c\x3c\xaa\x88"
+			  "\xce\xc1\xbd\x43\x3b\x09\x62\xad",
+		.ptext	= "\x3c\xe3\x46\xb9\x8f\x9d\x3f\x8d"
+			  "\xef\xf2\x53\xab\x24\xe2\x29\x08"
+			  "\xf8\x7e\x1d\xa6\x6d\x86\x7d\x60"
+			  "\x97\x63\x93\x29\x71\x94\xb4",
+		.ctext	= "\xd4\xa3\xc6\xb8\xc1\x6f\x70\x1a"
+			  "\x52\x0c\xed\x4c\xaf\x51\x56\x23"
+			  "\x48\x45\x07\x10\x34\xc5\xba\x71"
+			  "\xe5\xf8\x1e\xd8\xcb\xa6\xe7",
+		.klen	= 16,
+		.len	= 31,
+	},
+	{
+		.key	= "\x5b\x17\x30\x94\x19\x31\xa1\xae"
+			  "\x24\x8e\x42\x1e\x82\xe6\xec\xb8",
+		.iv	= "\xd1\x2e\xb9\xb8\xf8\x49\xeb\x68"
+			  "\x06\xeb\x65\x33\x34\xa2\xeb\xf0",
+		.ptext	= "\x19\x75\xec\x59\x60\x1b\x7a\x3e"
+			  "\x62\x46\x87\xf0\xde\xab\x81\x36"
+			  "\x63\x53\x11\xa0\x1f\xce\x25\x85"
+			  "\x49\x6b\x28\xfa\x1c\x92\xe5\x18"
+			  "\x38\x14\x00\x79\xf2\x9e\xeb\xfc"
+			  "\x36\xa7\x6b\xe1\xe5\xcf\x04\x48"
+			  "\x44\x6d\xbd\x64\xb3\xcb\x78\x05"
+			  "\x8d\x7f\x9a\xaf\x3c\xcf\x6c\x45"
+			  "\x6c\x7c\x46\x4c\xa8\xc0\x1e\xe4"
+			  "\x33\xa5\x7b\xbb\x26\xd9\xc0\x32"
+			  "\x9d\x8a\xb3\xf3\x3d\x52\xe6\x48"
+			  "\x4c\x9b\x4c\x6e\xa4\xa3\xad\x66"
+			  "\x56\x48\xd5\x98\x3a\x93\xc4\x85"
+			  "\xe9\x89\xca\xa6\xc1\xc8\xe7\xf8"
+			  "\xc3\xe9\xef\xbe\x77\xe6\xd1\x3a"
+			  "\xa6\x99\xc8\x2d\xdf\x40\x0f\x44",
+		.ctext	= "\xc6\x1a\x01\x1a\x00\xba\x04\xff"
+			  "\x10\xd1\x7e\x5d\xad\x91\xde\x8c"
+			  "\x08\x55\x95\xae\xd7\x22\x77\x40"
+			  "\xf0\x33\x1b\x51\xef\xfe\x3d\x67"
+			  "\xdf\xc4\x9f\x39\x47\x67\x93\xab"
+			  "\xaa\x37\x55\xfe\x41\xe0\xba\xcd"
+			  "\x25\x02\x7c\x61\x51\xa1\xcc\x72"
+			  "\x7a\x20\x26\xb9\x06\x68\xbd\x19"
+			  "\xc5\x2e\x1b\x75\x4a\x40\xb2\xd2"
+			  "\xc4\xee\xd8\x5b\xa4\x55\x7d\x25"
+			  "\xfc\x01\x4d\x6f\x0a\xfd\x37\x5d"
+			  "\x3e\x67\xc0\x35\x72\x53\x7b\xe2"
+			  "\xd6\x19\x5b\x92\x6c\x3a\x8c\x2a"
+			  "\xe2\xc2\xa2\x4f\x2a\xf2\xb5\x15"
+			  "\x65\xc5\x8d\x97\xf9\xbf\x8c\x98"
+			  "\xe4\x50\x1a\xf2\x76\x55\x07\x49",
+		.klen	= 16,
+		.len	= 128,
+	},
+	{
+		.key	= "\x17\xa6\x01\x3d\x5d\xd6\xef\x2d"
+			  "\x69\x8f\x4c\x54\x5b\xae\x43\xf0",
+		.iv	= "\xa9\x1b\x47\x60\x26\x82\xf7\x1c"
+			  "\x80\xf8\x88\xdd\xfb\x44\xd9\xda",
+		.ptext	= "\xf7\x67\xcd\xa6\x04\x65\x53\x99"
+			  "\x90\x5c\xa2\x56\x74\xd7\x9d\xf2"
+			  "\x0b\x03\x7f\x4e\xa7\x84\x72\x2b"
+			  "\xf0\xa5\xbf\xe6\x9a\x62\x3a\xfe"
+			  "\x69\x5c\x93\x79\x23\x86\x64\x85"
+			  "\xeb\x13\xb1\x5a\xd5\x48\x39\xa0"
+			  "\x70\xfb\x06\x9a\xd7\x12\x5a\xb9"
+			  "\xbe\xed\x2c\x81\x64\xf7\xcf\x80"
+			  "\xee\xe6\x28\x32\x2d\x37\x4c\x32"
+			  "\xf4\x1f\x23\x21\xe9\xc8\xc9\xbf"
+			  "\x54\xbc\xcf\xb4\xc2\x65\x39\xdf"
+			  "\xa5\xfb\x14\x11\xed\x62\x38\xcf"
+			  "\x9b\x58\x11\xdd\xe9\xbd\x37\x57"
+			  "\x75\x4c\x9e\xd5\x67\x0a\x48\xc6"
+			  "\x0d\x05\x4e\xb1\x06\xd7\xec\x2e"
+			  "\x9e\x59\xde\x4f\xab\x38\xbb\xe5"
+			  "\x87\x04\x5a\x2c\x2a\xa2\x8f\x3c"
+			  "\xe7\xe1\x46\xa9\x49\x9f\x24\xad"
+			  "\x2d\xb0\x55\x40\x64\xd5\xda\x7e"
+			  "\x1e\x77\xb8\x29\x72\x73\xc3\x84"
+			  "\xcd\xf3\x94\x90\x58\x76\xc9\x2c"
+			  "\x2a\xad\x56\xde\x33\x18\xb6\x3b"
+			  "\x10\xe9\xe9\x8d\xf0\xa9\x7f\x05"
+			  "\xf7\xb5\x8c\x13\x7e\x11\x3d\x1e"
+			  "\x02\xbb\x5b\xea\x69\xff\x85\xcf"
+			  "\x6a\x18\x97\x45\xe3\x96\xba\x4d"
+			  "\x2d\x7a\x70\x78\x15\x2c\xe9\xdc"
+			  "\x4e\x09\x92\x57\x04\xd8\x0b\xa6"
+			  "\x20\x71\x76\x47\x76\x96\x89\xa0"
+			  "\xd9\x29\xa2\x5a\x06\xdb\x56\x39"
+			  "\x60\x33\x59\x04\x95\x89\xf6\x18"
+			  "\x1d\x70\x75\x85\x3a\xb7\x6e",
+		.ctext	= "\xe1\xe7\x3f\xd3\x6a\xb9\x2f\x64"
+			  "\x37\xc5\xa4\xe9\xca\x0a\xa1\xd6"
+			  "\xea\x7d\x39\xe5\xe6\xcc\x80\x54"
+			  "\x74\x31\x2a\x04\x33\x79\x8c\x8e"
+			  "\x4d\x47\x84\x28\x27\x9b\x3c\x58"
+			  "\x54\x58\x20\x4f\x70\x01\x52\x5b"
+			  "\xac\x95\x61\x49\x5f\xef\xba\xce"
+			  "\xd7\x74\x56\xe7\xbb\xe0\x3c\xd0"
+			  "\x7f\xa9\x23\x57\x33\x2a\xf6\xcb"
+			  "\xbe\x42\x14\x95\xa8\xf9\x7a\x7e"
+			  "\x12\x53\x3a\xe2\x13\xfe\x2d\x89"
+			  "\xeb\xac\xd7\xa8\xa5\xf8\x27\xf3"
+			  "\x74\x9a\x65\x63\xd1\x98\x3a\x7e"
+			  "\x27\x7b\xc0\x20\x00\x4d\xf4\xe5"
+			  "\x7b\x69\xa6\xa8\x06\x50\x85\xb6"
+			  "\x7f\xac\x7f\xda\x1f\xf5\x37\x56"
+			  "\x9b\x2f\xd3\x86\x6b\x70\xbd\x0e"
+			  "\x55\x9a\x9d\x4b\x08\xb5\x5b\x7b"
+			  "\xd4\x7c\xb4\x71\x49\x92\x4a\x1e"
+			  "\xed\x6d\x11\x09\x47\x72\x32\x6a"
+			  "\x97\x53\x36\xaf\xf3\x06\x06\x2c"
+			  "\x69\xf1\x59\x00\x36\x95\x28\x2a"
+			  "\xb6\xcd\x10\x21\x84\x73\x5c\x96"
+			  "\x86\x14\x2c\x3d\x02\xdb\x53\x9a"
+			  "\x61\xde\xea\x99\x84\x7a\x27\xf6"
+			  "\xf7\xc8\x49\x73\x4b\xb8\xeb\xd3"
+			  "\x41\x33\xdd\x09\x68\xe2\x64\xb8"
+			  "\x5f\x75\x74\x97\x91\x54\xda\xc2"
+			  "\x73\x2c\x1e\x5a\x84\x48\x01\x1a"
+			  "\x0d\x8b\x0a\xdf\x07\x2e\xee\x77"
+			  "\x1d\x17\x41\x7a\xc9\x33\x63\xfa"
+			  "\x9f\xc3\x74\x57\x5f\x03\x4c",
+		.klen	= 16,
+		.len	= 255,
+	},
+	{
+		.key	= "\xe5\xf1\x48\x2e\x88\xdb\xc7\x28"
+			  "\xa2\x55\x5d\x2f\x90\x02\xdc\xd3"
+			  "\xf5\xd3\x9e\x87\xd5\x58\x30\x4a",
+		.iv	= "\xa6\x40\x39\xf9\x63\x6c\x2d\xd4"
+			  "\x1b\x71\x05\xa4\x88\x86\x11\xd3",
+		.ptext	= "\xb6\x06\xae\x15\x11\x96\xc1\x44"
+			  "\x44\xc2\x98\xf9\xa8\x0a\x0b",
+		.ctext	= "\x27\x3b\x68\x40\xa9\x5e\x74\x6b"
+			  "\x74\x67\x18\xf9\x37\xed\xed",
+		.klen	= 24,
+		.len	= 15,
+	},
+	{
+		.key	= "\xc8\xa0\x27\x67\x04\x3f\xed\xa5"
+			  "\xb4\x0c\x51\x91\x2d\x27\x77\x33"
+			  "\xa5\xfc\x2a\x9f\x78\xd8\x1c\x68",
+		.iv	= "\x83\x99\x1a\xe2\x84\xca\xa9\x16"
+			  "\x8d\xc4\x2d\x1b\x67\xc8\x86\x21",
+		.ptext	= "\xd6\x22\x85\xb8\x5d\x7e\x26\x2e"
+			  "\xbe\x04\x9d\x0c\x03\x91\x45\x4a"
+			  "\x36",
+		.ctext	= "\x0f\x44\xa9\x62\x72\xec\x12\x26"
+			  "\x3a\xc6\x83\x26\x62\x5e\xb7\x13"
+			  "\x05",
+		.klen	= 24,
+		.len	= 17,
+	},
+	{
+		.key	= "\xc5\x87\x18\x09\x0a\x4e\x66\x3e"
+			  "\x50\x90\x19\x93\xc0\x33\xcf\x80"
+			  "\x3a\x36\x6b\x6c\x43\xd7\xe4\x93",
+		.iv	= "\xdd\x0b\x75\x1f\xee\x2f\xb4\x52"
+			  "\x10\x82\x1f\x79\x8a\xa4\x9b\x87",
+		.ptext	= "\x56\xf9\x13\xce\x9f\x30\x10\x11"
+			  "\x1b\x59\xfd\x39\x5a\x29\xa3\x44"
+			  "\x78\x97\x8c\xf6\x99\x6d\x26\xf1"
+			  "\x32\x60\x6a\xeb\x04\x47\x29\x4c"
+			  "\x7e\x14\xef\x4d\x55\x29\xfe\x36"
+			  "\x37\xcf\x0b\x6e\xf3\xce\x15\xd2",
+		.ctext	= "\x8f\x98\xe1\x5a\x7f\xfe\xc7\x05"
+			  "\x76\xb0\xd5\xde\x90\x52\x2b\xa8"
+			  "\xf3\x6e\x3c\x77\xa5\x33\x63\xdd"
+			  "\x6f\x62\x12\xb0\x80\x10\xc1\x28"
+			  "\x58\xe5\xd6\x24\x44\x04\x55\xf3"
+			  "\x6d\x94\xcb\x2c\x7e\x7a\x85\x79",
+		.klen	= 24,
+		.len	= 48,
+	},
+	{
+		.key	= "\x84\x9b\xe8\x10\x4c\xb3\xd1\x7a"
+			  "\xb3\xab\x4e\x6f\x90\x12\x07\xf8"
+			  "\xef\xde\x42\x09\xbf\x34\x95\xb2",
+		.iv	= "\x66\x62\xf9\x48\x9d\x17\xf7\xdf"
+			  "\x06\x67\xf4\x6d\xf2\xbc\xa2\xe5",
+		.ptext	= "\x2f\xd6\x16\x6b\xf9\x4b\x44\x14"
+			  "\x90\x93\xe5\xfd\x05\xaa\x00\x26"
+			  "\xbd\xab\x11\xb8\xf0\xcb\x11\x72"
+			  "\xdd\xc5\x15\x4f\x4e\x1b\xf8\xc9"
+			  "\x8f\x4a\xd5\x69\xf8\x9e\xfb\x05"
+			  "\x8a\x37\x46\xfe\xfa\x58\x9b\x0e"
+			  "\x72\x90\x9a\x06\xa5\x42\xf4\x7c"
+			  "\x35\xd5\x64\x70\x72\x67\xfc\x8b"
+			  "\xab\x5a\x2f\x64\x9b\xa1\xec\xe7"
+			  "\xe6\x92\x69\xdb\x62\xa4\xe7\x44"
+			  "\x88\x28\xd4\x52\x64\x19\xa9\xd7"
+			  "\x0c\x00\xe6\xe7\xc1\x28\xc1\xf5"
+			  "\x72\xc5\xfa\x09\x22\x2e\xf4\x82"
+			  "\xa3\xdc\xc1\x68\xf9\x29\x55\x8d"
+			  "\x04\x67\x13\xa6\x52\x04\x3c\x0c"
+			  "\x14\xf2\x87\x23\x61\xab\x82\xcb"
+			  "\x49\x5b\x6b\xd4\x4f\x0d\xd4\x95"
+			  "\x82\xcd\xe3\x69\x47\x1b\x31\x73"
+			  "\x73\x77\xc1\x53\x7d\x43\x5e\x4a"
+			  "\x80\x3a\xca\x9c\xc7\x04\x1a\x31"
+			  "\x8e\xe6\x76\x7f\xe1\xb3\xd0\x57"
+			  "\xa2\xb2\xf6\x09\x51\xc9\x6d\xbc"
+			  "\x79\xed\x57\x50\x36\xd2\x93\xa4"
+			  "\x40\x5d\xac\x3a\x3b\xb6\x2d\x89"
+			  "\x78\xa2\xbd\x23\xec\x35\x06\xf0"
+			  "\xa8\xc8\xc9\xb0\xe3\x28\x2b\xba"
+			  "\x70\xa0\xfe\xed\x13\xc4\xd7\x90"
+			  "\xb1\x6a\xe0\xe1\x30\x71\x15\xd0"
+			  "\xe2\xb3\xa6\x4e\xb0\x01\xf9\xe7"
+			  "\x59\xc6\x1e\xed\x46\x2b\xe3\xa8"
+			  "\x22\xeb\x7f\x1c\xd9\xcd\xe0\xa6"
+			  "\x72\x42\x2c\x06\x75\xbb\xb7\x6b"
+			  "\xca\x49\x5e\xa1\x47\x8d\x9e\xfe"
+			  "\x60\xcc\x34\x95\x8e\xfa\x1e\x3e"
+			  "\x85\x4b\x03\x54\xea\x34\x1c\x41"
+			  "\x90\x45\xa6\xbe\xcf\x58\x4f\xca"
+			  "\x2c\x79\xc0\x3e\x8f\xd7\x3b\xd4"
+			  "\x55\x74\xa8\xe1\x57\x09\xbf\xab"
+			  "\x2c\xf9\xe4\xdd\x17\x99\x57\x60"
+			  "\x4b\x88\x2a\x7f\x43\x86\xb9\x9a"
+			  "\x60\xbf\x4c\xcf\x9b\x41\xb8\x99"
+			  "\x69\x15\x4f\x91\x4d\xeb\xdf\x6f"
+			  "\xcc\x4c\xf9\x6f\xf2\x33\x23\xe7"
+			  "\x02\x44\xaa\xa2\xfa\xb1\x39\xa5"
+			  "\xff\x88\xf5\x37\x02\x33\x24\xfc"
+			  "\x79\x11\x4c\x94\xc2\x31\x87\x9c"
+			  "\x53\x19\x99\x32\xe4\xde\x18\xf4"
+			  "\x8f\xe2\xe8\xa3\xfb\x0b\xaa\x7c"
+			  "\xdb\x83\x0f\xf6\xc0\x8a\x9b\xcd"
+			  "\x7b\x16\x05\x5b\xe4\xb4\x34\x03"
+			  "\xe3\x8f\xc9\x4b\x56\x84\x2a\x4c"
+			  "\x36\x72\x3c\x84\x4f\xba\xa2\x7f"
+			  "\xf7\x1b\xba\x4d\x8a\xb8\x5d\x51"
+			  "\x36\xfb\xef\x23\x18\x6f\x33\x2d"
+			  "\xbb\x06\x24\x8e\x33\x98\x6e\xcd"
+			  "\x63\x11\x18\x6b\xcc\x1b\x66\xb9"
+			  "\x38\x8d\x06\x8d\x98\x1a\xef\xaa"
+			  "\x35\x4a\x90\xfa\xb1\xd3\xcc\x11"
+			  "\x50\x4c\x54\x18\x60\x5d\xe4\x11"
+			  "\xfc\x19\xe1\x53\x20\x5c\xe7\xef"
+			  "\x8a\x2b\xa8\x82\x51\x5f\x5d\x43"
+			  "\x34\xe5\xcf\x7b\x1b\x6f\x81\x19"
+			  "\xb7\xdf\xa8\x9e\x81\x89\x5f\x33"
+			  "\x69\xaf\xde\x89\x68\x88\xf0\x71",
+		.ctext	= "\xab\x15\x46\x5b\xed\x4f\xa8\xac"
+			  "\xbf\x31\x30\x84\x55\xa4\xb8\x98"
+			  "\x79\xba\xa0\x15\xa4\x55\x20\xec"
+			  "\xf9\x94\x71\xe6\x6a\x6f\xee\x87"
+			  "\x2e\x3a\xa2\x95\xae\x6e\x56\x09"
+			  "\xe9\xc0\x0f\xe2\xc6\xb7\x30\xa9"
+			  "\x73\x8e\x59\x7c\xfd\xe3\x71\xf7"
+			  "\xae\x8b\x91\xab\x5e\x36\xe9\xa8"
+			  "\xff\x17\xfa\xa2\x94\x93\x11\x42"
+			  "\x67\x96\x99\xc5\xf0\xad\x2a\x57"
+			  "\xf9\xa6\x70\x4a\xdf\x71\xff\xc0"
+			  "\xe2\xaf\x9a\xae\x57\x58\x13\x3b"
+			  "\x2d\xf1\xc7\x8f\xdb\x8a\xcc\xce"
+			  "\x53\x1a\x69\x55\x39\xc8\xbe\xc3"
+			  "\x2d\xb1\x03\xd9\xa3\x99\xf4\x8d"
+			  "\xd9\x2d\x27\xae\xa5\xe7\x77\x7f"
+			  "\xbb\x88\x84\xea\xfa\x19\x3f\x44"
+			  "\x61\x21\x8a\x1f\xbe\xac\x60\xb4"
+			  "\xaf\xe9\x00\xab\xef\x3c\x53\x56"
+			  "\xcd\x4b\x53\xd8\x9b\xfe\x88\x23"
+			  "\x5b\x85\x76\x08\xec\xd1\x6e\x4a"
+			  "\x87\xa4\x7d\x29\x4e\x4f\x3f\xc9"
+			  "\xa4\xab\x63\xea\xdd\xef\x9f\x79"
+			  "\x38\x18\x7d\x90\x90\xf9\x12\x57"
+			  "\x1d\x89\xea\xfe\xd4\x47\x45\x32"
+			  "\x6a\xf6\xe7\xde\x22\x7e\xee\xc1"
+			  "\xbc\x2d\xc3\xbb\xe5\xd4\x13\xac"
+			  "\x63\xff\x5b\xb1\x05\x96\xd5\xf3"
+			  "\x07\x9a\x62\xb6\x30\xea\x7d\x1e"
+			  "\xee\x75\x0a\x1b\xcc\x6e\x4d\xa7"
+			  "\xf7\x4d\x74\xd8\x60\x32\x5e\xd0"
+			  "\x93\xd7\x19\x90\x4e\x26\xdb\xe4"
+			  "\x5e\xd4\xa8\xb9\x76\xba\x56\x91"
+			  "\xc4\x75\x04\x1e\xc2\x77\x24\x6f"
+			  "\xf9\xe8\x4a\xec\x7f\x86\x95\xb3"
+			  "\x5c\x2c\x97\xab\xf0\xf7\x74\x5b"
+			  "\x0b\xc2\xda\x42\x40\x34\x16\xed"
+			  "\x06\xc1\x25\x53\x17\x0d\x81\x4e"
+			  "\xe6\xf2\x0f\x6d\x94\x3c\x90\x7a"
+			  "\xae\x20\xe9\x3f\xf8\x18\x67\x6a"
+			  "\x49\x1e\x41\xb6\x46\xab\xc8\xa7"
+			  "\xcb\x19\x96\xf5\x99\xc0\x66\x3e"
+			  "\x77\xcf\x73\x52\x83\x2a\xe2\x48"
+			  "\x27\x6c\xeb\xe7\xe7\xc4\xd5\x6a"
+			  "\x40\x67\xbc\xbf\x6b\x3c\xf3\xbb"
+			  "\x51\x5e\x31\xac\x03\x81\xab\x61"
+			  "\xfa\xa5\xa6\x7d\x8b\xc3\x8a\x75"
+			  "\x28\x7a\x71\x9c\xac\x8f\x76\xfc"
+			  "\xf9\x6c\x5d\x9b\xd7\xf6\x36\x2d"
+			  "\x61\xd5\x61\xaa\xdd\x01\xfc\x57"
+			  "\x91\x10\xcd\xcd\x6d\x27\x63\x24"
+			  "\x67\x46\x7a\xbb\x61\x56\x39\xb1"
+			  "\xd6\x79\xfe\x77\xca\xd6\x73\x59"
+			  "\x6e\x58\x11\x90\x03\x26\x74\x2a"
+			  "\xfa\x52\x12\x47\xfb\x12\xeb\x3e"
+			  "\x88\xf0\x52\x6c\xc0\x54\x7a\x88"
+			  "\x8c\xe5\xde\x9e\xba\xb9\xf2\xe1"
+			  "\x97\x2e\x5c\xbd\xf4\x13\x7e\xf3"
+			  "\xc4\xe1\x87\xa5\x35\xfa\x7c\x71"
+			  "\x1a\xc9\xf4\xa8\x57\xe2\x5a\x6b"
+			  "\x14\xe0\x73\xaf\x56\x6b\xa0\x00"
+			  "\x9e\x5f\x64\xac\x00\xfb\xc4\x92"
+			  "\xe5\xe2\x8a\xb2\x9e\x75\x49\x85"
+			  "\x25\x66\xa5\x1a\xf9\x7d\x1d\x60",
+		.klen	= 24,
+		.len	= 512,
+	},
+	{
+		.key	= "\x05\x60\x3a\x7e\x60\x90\x46\x18"
+			  "\x6c\x60\xba\xeb\x12\xd7\xbe\xd1"
+			  "\xd3\xf6\x10\x46\x9d\xf1\x0c\xb4"
+			  "\x73\xe3\x93\x27\xa8\x2c\x13\xaa",
+		.iv	= "\xf5\x96\xd1\xb6\xcb\x44\xd8\xd0"
+			  "\x3e\xdb\x92\x80\x08\x94\xcd\xd3",
+		.ptext	= "\x78",
+		.ctext	= "\xc5",
+		.klen	= 32,
+		.len	= 1,
+	},
+	{
+		.key	= "\x35\xca\x38\xf3\xd9\xd6\x34\xef"
+			  "\xcd\xee\xa3\x26\x86\xba\xfb\x45"
+			  "\x01\xfa\x52\x67\xff\xc5\x9d\xaa"
+			  "\x64\x9a\x05\xbb\x85\x20\xa7\xf2",
+		.iv	= "\xe3\xda\xf5\xff\x42\x59\x87\x86"
+			  "\xee\x7b\xd6\xb4\x6a\x25\x44\xff",
+		.ptext	= "\x44\x67\x1e\x04\x53\xd2\x4b\xd9"
+			  "\x96\x33\x07\x54\xe4\x8e\x20",
+		.ctext	= "\xcc\x55\x40\x79\x47\x5c\x8b\xa6"
+			  "\xca\x7b\x9f\x50\xe3\x21\xea",
+		.klen	= 32,
+		.len	= 15,
+	},
+	{
+		.key	= "\xaf\xd9\x14\x14\xd5\xdb\xc9\xce"
+			  "\x76\x5c\x5a\xbf\x43\x05\x29\x24"
+			  "\xc4\x13\x68\xcc\xe8\x37\xbd\xb9"
+			  "\x41\x20\xf5\x53\x48\xd0\xa2\xd6",
+		.iv	= "\xa7\xb4\x00\x08\x79\x10\xae\xf5"
+			  "\x02\xbf\x85\xb2\x69\x4c\xc6\x04",
+		.ptext	= "\xac\x6a\xa8\x0c\xb0\x84\xbf\x4c"
+			  "\xae\x94\x20\x58\x7e\x00\x93\x89",
+		.ctext	= "\xd5\xaa\xe2\xe9\x86\x4c\x95\x4e"
+			  "\xde\xb6\x15\xcb\xdc\x1f\x13\x38",
+		.klen	= 32,
+		.len	= 16,
+	},
+	{
+		.key	= "\xed\xe3\x8b\xe7\x1c\x17\xbf\x4a"
+			  "\x02\xe2\xfc\x76\xac\xf5\x3c\x00"
+			  "\x5d\xdc\xfc\x83\xeb\x45\xb4\xcb"
+			  "\x59\x62\x60\xec\x69\x9c\x16\x45",
+		.iv	= "\xe4\x0e\x2b\x90\xd2\xfa\x94\x2e"
+			  "\x10\xe5\x64\x2b\x97\x28\x15\xc7",
+		.ptext	= "\xe6\x53\xff\x60\x0e\xc4\x51\xe4"
+			  "\x93\x4d\xe5\x55\xc5\xd9\xad\x48"
+			  "\x52",
+		.ctext	= "\xba\x25\x28\xf5\xcf\x31\x91\x80"
+			  "\xda\x2b\x95\x5f\x20\xcb\xfb\x9f"
+			  "\xc6",
+		.klen	= 32,
+		.len	= 17,
+	},
+	{
+		.key	= "\x77\x5c\xc0\x73\x9a\x64\x97\x91"
+			  "\x2f\xee\xe0\x20\xc2\x04\x59\x2e"
+			  "\x97\xd2\xa7\x70\xb3\xb0\x21\x6b"
+			  "\x8f\xbf\xb8\x51\xa8\xea\x0f\x62",
+		.iv	= "\x31\x8e\x1f\xcd\xfd\x23\xeb\x7f"
+			  "\x8a\x1f\x1b\x23\x53\x27\x44\xe5",
+		.ptext	= "\xcd\xff\x8c\x9b\x94\x5a\x51\x3f"
+			  "\x40\x93\x56\x93\x66\x39\x63\x1f"
+			  "\xbf\xe6\xa4\xfa\xbe\x79\x93\x03"
+			  "\xf5\x66\x74\x16\xfc\xe4\xce",
+		.ctext	= "\x8b\xd3\xc3\xce\x66\xf8\x66\x4c"
+			  "\xad\xd6\xf5\x0f\xd8\x99\x5a\x75"
+			  "\xa1\x3c\xab\x0b\x21\x36\x57\x72"
+			  "\x88\x29\xe9\xea\x4a\x8d\xe9",
+		.klen	= 32,
+		.len	= 31,
+	},
+	{
+		.key	= "\xa1\x2f\x4d\xde\xfe\xa1\xff\xa8"
+			  "\x73\xdd\xe3\xe2\x95\xfc\xea\x9c"
+			  "\xd0\x80\x42\x0c\xb8\x43\x3e\x99"
+			  "\x39\x38\x0a\x8c\xe8\x45\x3a\x7b",
+		.iv	= "\x32\xc4\x6f\xb1\x14\x43\xd1\x87"
+			  "\xe2\x6f\x5a\x58\x02\x36\x7e\x2a",
+		.ptext	= "\x9e\x5c\x1e\xf1\xd6\x7d\x09\x57"
+			  "\x18\x48\x55\xda\x7d\x44\xf9\x6d"
+			  "\xac\xcd\x59\xbb\x10\xa2\x94\x67"
+			  "\xd1\x6f\xfe\x6b\x4a\x11\xe8\x04"
+			  "\x09\x26\x4f\x8d\x5d\xa1\x7b\x42"
+			  "\xf9\x4b\x66\x76\x38\x12\xfe\xfe",
+		.ctext	= "\x42\xbc\xa7\x64\x15\x9a\x04\x71"
+			  "\x2c\x5f\x94\xba\x89\x3a\xad\xbc"
+			  "\x87\xb3\xf4\x09\x4f\x57\x06\x18"
+			  "\xdc\x84\x20\xf7\x64\x85\xca\x3b"
+			  "\xab\xe6\x33\x56\x34\x60\x5d\x4b"
+			  "\x2e\x16\x13\xd4\x77\xde\x2d\x2b",
+		.klen	= 32,
+		.len	= 48,
+	},
+	{
+		.key	= "\xfb\xf5\xb7\x3d\xa6\x95\x42\xbf"
+			  "\xd2\x94\x6c\x74\x0f\xbc\x5a\x28"
+			  "\x35\x3c\x51\x58\x84\xfb\x7d\x11"
+			  "\x16\x1e\x00\x97\x37\x08\xb7\x16",
+		.iv	= "\x9b\x53\x57\x40\xe6\xd9\xa7\x27"
+			  "\x78\xd4\x9b\xd2\x29\x1d\x24\xa9",
+		.ptext	= "\x8b\x02\x60\x0a\x3e\xb7\x10\x59"
+			  "\xc3\xac\xd5\x2a\x75\x81\xf2\xdb"
+			  "\x55\xca\x65\x86\x44\xfb\xfe\x91"
+			  "\x26\xbb\x45\xb2\x46\x22\x3e\x08"
+			  "\xa2\xbf\x46\xcb\x68\x7d\x45\x7b"
+			  "\xa1\x6a\x3c\x6e\x25\xeb\xed\x31"
+			  "\x7a\x8b\x47\xf9\xde\xec\x3d\x87"
+			  "\x09\x20\x2e\xfa\xba\x8b\x9b\xc5"
+			  "\x6c\x25\x9c\x9d\x2a\xe8\xab\x90"
+			  "\x3f\x86\xee\x61\x13\x21\xd4\xde"
+			  "\xe1\x0c\x95\xfc\x5c\x8a\x6e\x0a"
+			  "\x73\xcf\x08\x69\x44\x4e\xde\x25"
+			  "\xaf\xaa\x56\x04\xc4\xb3\x60\x44"
+			  "\x3b\x8b\x3d\xee\xae\x42\x4b\xd2"
+			  "\x9a\x6c\xa0\x8e\x52\x06\xb2\xd1"
+			  "\x5d\x38\x30\x6d\x27\x9b\x1a\xd8",
+		.ctext	= "\xa3\x78\x33\x78\x95\x95\x97\x07"
+			  "\x53\xa3\xa1\x5b\x18\x32\x27\xf7"
+			  "\x09\x12\x53\x70\x83\xb5\x6a\x9f"
+			  "\x26\x6d\x10\x0d\xe0\x1c\xe6\x2b"
+			  "\x70\x00\xdc\xa1\x60\xef\x1b\xee"
+			  "\xc5\xa5\x51\x17\xae\xcc\xf2\xed"
+			  "\xc4\x60\x07\xdf\xd5\x7a\xe9\x90"
+			  "\x3c\x9f\x96\x5d\x72\x65\x5d\xef"
+			  "\xd0\x94\x32\xc4\x85\x90\x78\xa1"
+			  "\x2e\x64\xf6\xee\x8e\x74\x3f\x20"
+			  "\x2f\x12\x3b\x3d\xd5\x39\x8e\x5a"
+			  "\xf9\x8f\xce\x94\x5d\x82\x18\x66"
+			  "\x14\xaf\x4c\xfe\xe0\x91\xc3\x4a"
+			  "\x85\xcf\xe7\xe8\xf7\xcb\xf0\x31"
+			  "\x88\x7d\xc9\x5b\x71\x9d\x5f\xd2"
+			  "\xfa\xed\xa6\x24\xda\xbb\xb1\x84",
+		.klen	= 32,
+		.len	= 128,
+	},
+	{
+		.key	= "\x32\x37\x2b\x8f\x7b\xb1\x23\x79"
+			  "\x05\x52\xde\x05\xf1\x68\x3f\x6c"
+			  "\xa4\xae\xbc\x21\xc2\xc6\xf0\xbd"
+			  "\x0f\x20\xb7\xa4\xc5\x05\x7b\x64",
+		.iv	= "\xff\x26\x4e\x67\x48\xdd\xcf\xfe"
+			  "\x42\x09\x04\x98\x5f\x1e\xfa\x80",
+		.ptext	= "\x99\xdc\x3b\x19\x41\xf9\xff\x6e"
+			  "\x76\xb5\x03\xfa\x61\xed\xf8\x44"
+			  "\x70\xb9\xf0\x83\x80\x6e\x31\x77"
+			  "\x77\xe4\xc7\xb4\x77\x02\xab\x91"
+			  "\x82\xc6\xf8\x7c\x46\x61\x03\x69"
+			  "\x09\xa0\xf7\x12\xb7\x81\x6c\xa9"
+			  "\x10\x5c\xbb\x55\xb3\x44\xed\xb5"
+			  "\xa2\x52\x48\x71\x90\x5d\xda\x40"
+			  "\x0b\x7f\x4a\x11\x6d\xa7\x3d\x8e"
+			  "\x1b\xcd\x9d\x4e\x75\x8b\x7d\x87"
+			  "\xe5\x39\x34\x32\x1e\xe6\x8d\x51"
+			  "\xd4\x1f\xe3\x1d\x50\xa0\x22\x37"
+			  "\x7c\xb0\xd9\xfb\xb6\xb2\x16\xf6"
+			  "\x6d\x26\xa0\x4e\x8c\x6a\xe6\xb6"
+			  "\xbe\x4c\x7c\xe3\x88\x10\x18\x90"
+			  "\x11\x50\x19\x90\xe7\x19\x3f\xd0"
+			  "\x31\x15\x0f\x06\x96\xfe\xa7\x7b"
+			  "\xc3\x32\x88\x69\xa4\x12\xe3\x64"
+			  "\x02\x30\x17\x74\x6c\x88\x7c\x9b"
+			  "\xd6\x6d\x75\xdf\x11\x86\x70\x79"
+			  "\x48\x7d\x34\x3e\x33\x58\x07\x8b"
+			  "\xd2\x50\xac\x35\x15\x45\x05\xb4"
+			  "\x4d\x31\x97\x19\x87\x23\x4b\x87"
+			  "\x53\xdc\xa9\x19\x78\xf1\xbf\x35"
+			  "\x30\x04\x14\xd4\xcf\xb2\x8c\x87"
+			  "\x7d\xdb\x69\xc9\xcd\xfe\x40\x3e"
+			  "\x8d\x66\x5b\x61\xe5\xf0\x2d\x87"
+			  "\x93\x3a\x0c\x2b\x04\x98\x05\xc2"
+			  "\x56\x4d\xc4\x6c\xcd\x7a\x98\x7e"
+			  "\xe2\x2d\x79\x07\x91\x9f\xdf\x2f"
+			  "\x72\xc9\x8f\xcb\x0b\x87\x1b\xb7"
+			  "\x04\x86\xcb\x47\xfa\x5d\x03",
+		.ctext	= "\x0b\x00\xf7\xf2\xc8\x6a\xba\x9a"
+			  "\x0a\x97\x18\x7a\x00\xa0\xdb\xf4"
+			  "\x5e\x8e\x4a\xb7\xe0\x51\xf1\x75"
+			  "\x17\x8b\xb4\xf1\x56\x11\x05\x9f"
+			  "\x2f\x2e\xba\x67\x04\xe1\xb4\xa5"
+			  "\xfc\x7c\x8c\xad\xc6\xb9\xd1\x64"
+			  "\xca\xbd\x5d\xaf\xdb\x65\x48\x4f"
+			  "\x1b\xb3\x94\x5c\x0b\xd0\xee\xcd"
+			  "\xb5\x7f\x43\x8a\xd8\x8b\x66\xde"
+			  "\xd2\x9c\x13\x65\xa4\x47\xa7\x03"
+			  "\xc5\xa1\x46\x8f\x2f\x84\xbc\xef"
+			  "\x48\x9d\x9d\xb5\xbd\x43\xff\xd2"
+			  "\xd2\x7a\x5a\x13\xbf\xb4\xf6\x05"
+			  "\x17\xcd\x01\x12\xf0\x35\x27\x96"
+			  "\xf4\xc1\x65\xf7\x69\xef\x64\x1b"
+			  "\x6e\x4a\xe8\x77\xce\x83\x01\xb7"
+			  "\x60\xe6\x45\x2a\xcd\x41\x4a\xb5"
+			  "\x8e\xcc\x45\x93\xf1\xd6\x64\x5f"
+			  "\x32\x60\xe4\x29\x4a\x82\x6c\x86"
+			  "\x16\xe4\xcc\xdb\x5f\xc8\x11\xa6"
+			  "\xfe\x88\xd6\xc3\xe5\x5c\xbb\x67"
+			  "\xec\xa5\x7b\xf5\xa8\x4f\x77\x25"
+			  "\x5d\x0c\x2a\x99\xf9\xb9\xd1\xae"
+			  "\x3c\x83\x2a\x93\x9b\x66\xec\x68"
+			  "\x2c\x93\x02\x8a\x8a\x1e\x2f\x50"
+			  "\x09\x37\x19\x5c\x2a\x3a\xc2\xcb"
+			  "\xcb\x89\x82\x81\xb7\xbb\xef\x73"
+			  "\x8b\xc9\xae\x42\x96\xef\x70\xc0"
+			  "\x89\xc7\x3e\x6a\x26\xc3\xe4\x39"
+			  "\x53\xa9\xcf\x63\x7d\x05\xf3\xff"
+			  "\x52\x04\xf6\x7f\x23\x96\xe9\xf7"
+			  "\xff\xd6\x50\xa3\x0e\x20\x71",
+		.klen	= 32,
+		.len	= 255,
+	},
+	{
+		.key	= "\x39\x5f\xf4\x9c\x90\x3a\x9a\x25"
+			  "\x15\x11\x79\x39\xed\x26\x5e\xf6"
+			  "\xda\xcf\x33\x4f\x82\x97\xab\x10"
+			  "\xc1\x55\x48\x82\x80\xa8\x02\xb2",
+		.iv	= "\x82\x60\xd9\x06\xeb\x40\x99\x76"
+			  "\x08\xc5\xa4\x83\x45\xb8\x38\x5a",
+		.ptext	= "\xa1\xa8\xac\xac\x08\xaf\x8f\x84"
+			  "\xbf\xcc\x79\x31\x5e\x61\x01\xd1"
+			  "\x4d\x5f\x9b\xcd\x91\x92\x9a\xa1"
+			  "\x99\x0d\x49\xb2\xd7\xfd\x25\x93"
+			  "\x51\x96\xbd\x91\x8b\x08\xf1\xc6"
+			  "\x0d\x17\xf6\xef\xfd\xd2\x78\x16"
+			  "\xc8\x08\x27\x7b\xca\x98\xc6\x12"
+			  "\x86\x11\xdb\xd5\x08\x3d\x5a\x2c"
+			  "\xcf\x15\x0e\x9b\x42\x78\xeb\x1f"
+			  "\x52\xbc\xd7\x5a\x8a\x33\x6c\x14"
+			  "\xfc\x61\xad\x2e\x1e\x03\x66\xea"
+			  "\x79\x0e\x88\x88\xde\x93\xe3\x81"
+			  "\xb5\xc4\x1c\xe6\x9c\x08\x18\x8e"
+			  "\xa0\x87\xda\xe6\xf8\xcb\x30\x44"
+			  "\x2d\x4e\xc0\xa3\x60\xf9\x62\x7b"
+			  "\x4b\xd5\x61\x6d\xe2\x67\x95\x54"
+			  "\x10\xd1\xca\x22\xe8\xb6\xb1\x3a"
+			  "\x2d\xd7\x35\x5b\x22\x88\x55\x67"
+			  "\x3d\x83\x8f\x07\x98\xa8\xf2\xcf"
+			  "\x04\xb7\x9e\x52\xca\xe0\x98\x72"
+			  "\x5c\xc1\x00\xd4\x1f\x2c\x61\xf3"
+			  "\xe8\x40\xaf\x4a\xee\x66\x41\xa0"
+			  "\x02\x77\x29\x30\x65\x59\x4b\x20"
+			  "\x7b\x0d\x80\x97\x27\x7f\xd5\x90"
+			  "\xbb\x9d\x76\x90\xe5\x43\x43\x72"
+			  "\xd0\xd4\x14\x75\x66\xb3\xb6\xaf"
+			  "\x09\xe4\x23\xb0\x62\xad\x17\x28"
+			  "\x39\x26\xab\xf5\xf7\x5c\xb6\x33"
+			  "\xbd\x27\x09\x5b\x29\xe4\x40\x0b"
+			  "\xc1\x26\x32\xdb\x9a\xdf\xf9\x5a"
+			  "\xae\x03\x2c\xa4\x40\x84\x9a\xb7"
+			  "\x4e\x47\xa8\x0f\x23\xc7\xbb\xcf"
+			  "\x2b\xf2\x32\x6c\x35\x6a\x91\xba"
+			  "\x0e\xea\xa2\x8b\x2f\xbd\xb5\xea"
+			  "\x6e\xbc\xb5\x4b\x03\xb3\x86\xe0"
+			  "\x86\xcf\xba\xcb\x38\x2c\x32\xa6"
+			  "\x6d\xe5\x28\xa6\xad\xd2\x7f\x73"
+			  "\x43\x14\xf8\xb1\x99\x12\x2d\x2b"
+			  "\xdf\xcd\xf2\x81\x43\x94\xdf\xb1"
+			  "\x17\xc9\x33\xa6\x3d\xef\x96\xb8"
+			  "\xd6\x0d\x00\xec\x49\x66\x85\x5d"
+			  "\x44\x62\x12\x04\x55\x5c\x48\xd3"
+			  "\xbd\x73\xac\x54\x8f\xbf\x97\x8e"
+			  "\x85\xfd\xc2\xa1\x25\x32\x38\x6a"
+			  "\x1f\xac\x57\x3c\x4f\x56\x73\xf2"
+			  "\x1d\xb6\x48\x68\xc7\x0c\xe7\x60"
+			  "\xd2\x8e\x4d\xfb\xc7\x20\x7b\xb7"
+			  "\x45\x28\x12\xc6\x26\xae\xea\x7c"
+			  "\x5d\xe2\x46\xb5\xae\xe1\xc3\x98"
+			  "\x6f\x72\xd5\xa2\xfd\xed\x40\xfd"
+			  "\xf9\xdf\x61\xec\x45\x2c\x15\xe0"
+			  "\x1e\xbb\xde\x71\x37\x5f\x73\xc2"
+			  "\x11\xcc\x6e\x6d\xe1\xb5\x1b\xd2"
+			  "\x2a\xdd\x19\x8a\xc2\xe1\xa0\xa4"
+			  "\x26\xeb\xb2\x2c\x4f\x77\x52\xf1"
+			  "\x42\x72\x6c\xad\xd7\x78\x5d\x72"
+			  "\xc9\x16\x26\x25\x1b\x4c\xe6\x58"
+			  "\x79\x57\xb5\x06\x15\x4f\xe5\xba"
+			  "\xa2\x7f\x2d\x5b\x87\x8a\x44\x70"
+			  "\xec\xc7\xef\x84\xae\x60\xa2\x61"
+			  "\x86\xe9\x18\xcd\x28\xc4\xa4\xf5"
+			  "\xbc\x84\xb8\x86\xa0\xba\xf1\xf1"
+			  "\x08\x3b\x32\x75\x35\x22\x7a\x65"
+			  "\xca\x48\xe8\xef\x6e\xe2\x8e\x00",
+		.ctext	= "\x2f\xae\xd8\x67\xeb\x15\xde\x75"
+			  "\x53\xa3\x0e\x5a\xcf\x1c\xbe\xea"
+			  "\xde\xf9\xcf\xc2\x9f\xfd\x0f\x44"
+			  "\xc0\xe0\x7a\x76\x1d\xcb\x4a\xf8"
+			  "\x35\xd6\xe3\x95\x98\x6b\x3f\x89"
+			  "\xc4\xe6\xb6\x6f\xe1\x8b\x39\x4b"
+			  "\x1c\x6c\x77\xe4\xe1\x8a\xbc\x61"
+			  "\x00\x6a\xb1\x37\x2f\x45\xe6\x04"
+			  "\x52\x0b\xfc\x1e\x32\xc1\xd8\x9d"
+			  "\xfa\xdd\x67\x5c\xe0\x75\x83\xd0"
+			  "\x21\x9e\x02\xea\xc0\x7f\xc0\x29"
+			  "\xb3\x6c\xa5\x97\xb3\x29\x82\x1a"
+			  "\x94\xa5\xb4\xb6\x49\xe5\xa5\xad"
+			  "\x95\x40\x52\x7c\x84\x88\xa4\xa8"
+			  "\x26\xe4\xd9\x5d\x41\xf2\x93\x7b"
+			  "\xa4\x48\x1b\x66\x91\xb9\x7c\xc2"
+			  "\x99\x29\xdf\xd8\x30\xac\xd4\x47"
+			  "\x42\xa0\x14\x87\x67\xb8\xfd\x0b"
+			  "\x1e\xcb\x5e\x5c\x9a\xc2\x04\x8b"
+			  "\x17\x29\x9d\x99\x7f\x86\x4c\xe2"
+			  "\x5c\x96\xa6\x0f\xb6\x47\x33\x5c"
+			  "\xe4\x50\x49\xd5\x4f\x92\x0b\x9a"
+			  "\xbc\x52\x4c\x41\xf5\xc9\x3e\x76"
+			  "\x55\x55\xd4\xdc\x71\x14\x23\xfc"
+			  "\x5f\xd5\x08\xde\xa0\xf7\x28\xc0"
+			  "\xe1\x61\xac\x64\x66\xf6\xd1\x31"
+			  "\xe4\xa4\xa9\xed\xbc\xad\x4f\x3b"
+			  "\x59\xb9\x48\x1b\xe7\xb1\x6f\xc6"
+			  "\xba\x40\x1c\x0b\xe7\x2f\x31\x65"
+			  "\x85\xf5\xe9\x14\x0a\x31\xf5\xf3"
+			  "\xc0\x1c\x20\x35\x73\x38\x0f\x8e"
+			  "\x39\xf0\x68\xae\x08\x9c\x87\x4b"
+			  "\x42\xfc\x22\x17\xee\x96\x51\x2a"
+			  "\xd8\x57\x5a\x35\xea\x72\x74\xfc"
+			  "\xb3\x0e\x69\x9a\xe1\x4f\x24\x90"
+			  "\xc5\x4b\xe5\xd7\xe3\x82\x2f\xc5"
+			  "\x62\x46\x3e\xab\x72\x4e\xe0\xf3"
+			  "\x90\x09\x4c\xb2\xe1\xe8\xa0\xf5"
+			  "\x46\x40\x2b\x47\x85\x3c\x21\x90"
+			  "\x3d\xad\x25\x5a\x36\xdf\xe5\xbc"
+			  "\x7e\x80\x4d\x53\x77\xf1\x79\xa6"
+			  "\xec\x22\x80\x88\x68\xd6\x2d\x8b"
+			  "\x3e\xf7\x52\xc7\x2a\x20\x42\x5c"
+			  "\xed\x99\x4f\x32\x80\x00\x7e\x73"
+			  "\xd7\x6d\x7f\x7d\x42\x54\x4a\xfe"
+			  "\xff\x6f\x61\xca\x2a\xbb\x4f\xeb"
+			  "\x4f\xe4\x4e\xaf\x2c\x4f\x82\xcd"
+			  "\xa1\xa7\x11\xb3\x34\x33\xcf\x32"
+			  "\x63\x0e\x24\x3a\x35\xbe\x06\xd5"
+			  "\x17\xcb\x02\x30\x33\x6e\x8c\x49"
+			  "\x40\x6e\x34\x8c\x07\xd4\x3e\xe6"
+			  "\xaf\x78\x6d\x8c\x10\x5f\x21\x58"
+			  "\x49\x26\xc5\xaf\x0d\x7d\xd4\xaf"
+			  "\xcd\x5b\xa1\xe3\xf6\x39\x1c\x9b"
+			  "\x8e\x00\xa1\xa7\x9e\x17\x4a\xc0"
+			  "\x54\x56\x9e\xcf\xcf\x88\x79\x8d"
+			  "\x50\xf7\x56\x8e\x0a\x73\x46\x6b"
+			  "\xc3\xb9\x9b\x6c\x7d\xc4\xc8\xb6"
+			  "\x03\x5f\x30\x62\x7d\xe6\xdb\x15"
+			  "\xe1\x39\x02\x8c\xff\xda\xc8\x43"
+			  "\xf2\xa9\xbf\x00\xe7\x3a\x61\x89"
+			  "\xdf\xb0\xca\x7d\x8c\x8a\x6a\x9f"
+			  "\x18\x89\x3d\x39\xac\x36\x6f\x05"
+			  "\x1f\xb5\xda\x00\xea\xe1\x51\x21",
+		.klen	= 32,
+		.len	= 512,
+	},
+
+};
+
 #endif	/* _CRYPTO_TESTMGR_H */
diff --git a/crypto/xctr.c b/crypto/xctr.c
new file mode 100644
index 000000000000..3d3e9b2d6a3a
--- /dev/null
+++ b/crypto/xctr.c
@@ -0,0 +1,193 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * XCTR: XOR Counter mode - Adapted from ctr.c
+ *
+ * (C) Copyright IBM Corp. 2007 - Joy Latten <latten@us.ibm.com>
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * XCTR mode is a blockcipher mode of operation used to implement HCTR2. XCTR is
+ * closely related to the CTR mode of operation; the main difference is that CTR
+ * generates the keystream using E(CTR + IV) whereas XCTR generates the
+ * keystream using E(CTR ^ IV). This allows implementations to avoid dealing
+ * with multi-limb integers (as is required in CTR mode). XCTR is also specified
+ * using little-endian arithmetic which makes it slightly faster on LE machines.
+ *
+ * See the HCTR2 paper for more details:
+ *	Length-preserving encryption with HCTR2
+ *      (https://eprint.iacr.org/2021/1441.pdf)
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/skcipher.h>
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+
+// Limited to 16-byte blocks for simplicity
+#define XCTR_BLOCKSIZE 16
+
+static void crypto_xctr_crypt_final(struct skcipher_walk *walk,
+				   struct crypto_cipher *tfm, u32 byte_ctr)
+{
+	u8 keystream[XCTR_BLOCKSIZE];
+	u8 *src = walk->src.virt.addr;
+	u8 *dst = walk->dst.virt.addr;
+	unsigned int nbytes = walk->nbytes;
+	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
+
+	crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+	crypto_cipher_encrypt_one(tfm, keystream, walk->iv);
+	crypto_xor_cpy(dst, keystream, src, nbytes);
+	crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+}
+
+static int crypto_xctr_crypt_segment(struct skcipher_walk *walk,
+				    struct crypto_cipher *tfm, u32 byte_ctr)
+{
+	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+		   crypto_cipher_alg(tfm)->cia_encrypt;
+	u8 *src = walk->src.virt.addr;
+	u8 *dst = walk->dst.virt.addr;
+	unsigned int nbytes = walk->nbytes;
+	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
+
+	do {
+		/* create keystream */
+		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+		fn(crypto_cipher_tfm(tfm), dst, walk->iv);
+		crypto_xor(dst, src, XCTR_BLOCKSIZE);
+		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+
+		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);
+
+		src += XCTR_BLOCKSIZE;
+		dst += XCTR_BLOCKSIZE;
+	} while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE);
+
+	return nbytes;
+}
+
+static int crypto_xctr_crypt_inplace(struct skcipher_walk *walk,
+				    struct crypto_cipher *tfm, u32 byte_ctr)
+{
+	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
+		   crypto_cipher_alg(tfm)->cia_encrypt;
+	unsigned long alignmask = crypto_cipher_alignmask(tfm);
+	unsigned int nbytes = walk->nbytes;
+	u8 *src = walk->src.virt.addr;
+	u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
+	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
+	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
+
+	do {
+		/* create keystream */
+		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+		fn(crypto_cipher_tfm(tfm), keystream, walk->iv);
+		crypto_xor(src, keystream, XCTR_BLOCKSIZE);
+		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
+
+		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);
+
+		src += XCTR_BLOCKSIZE;
+	} while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE);
+
+	return nbytes;
+}
+
+static int crypto_xctr_crypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	int err;
+	u32 byte_ctr = 0;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	while (walk.nbytes >= XCTR_BLOCKSIZE) {
+		if (walk.src.virt.addr == walk.dst.virt.addr)
+			nbytes = crypto_xctr_crypt_inplace(&walk, cipher,
+							   byte_ctr);
+		else
+			nbytes = crypto_xctr_crypt_segment(&walk, cipher,
+							   byte_ctr);
+
+		byte_ctr += walk.nbytes - nbytes;
+		err = skcipher_walk_done(&walk, nbytes);
+	}
+
+	if (walk.nbytes) {
+		crypto_xctr_crypt_final(&walk, cipher, byte_ctr);
+		err = skcipher_walk_done(&walk, 0);
+	}
+
+	return err;
+}
+
+static int crypto_xctr_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	struct skcipher_instance *inst;
+	struct crypto_alg *alg;
+	int err;
+
+	inst = skcipher_alloc_instance_simple(tmpl, tb);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
+
+	alg = skcipher_ialg_simple(inst);
+
+	/* Block size must be 16 bytes. */
+	err = -EINVAL;
+	if (alg->cra_blocksize != XCTR_BLOCKSIZE)
+		goto out_free_inst;
+
+	/* XCTR mode is a stream cipher. */
+	inst->alg.base.cra_blocksize = 1;
+
+	/*
+	 * To simplify the implementation, configure the skcipher walk to only
+	 * give a partial block at the very end, never earlier.
+	 */
+	inst->alg.chunksize = alg->cra_blocksize;
+
+	inst->alg.encrypt = crypto_xctr_crypt;
+	inst->alg.decrypt = crypto_xctr_crypt;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err) {
+out_free_inst:
+		inst->free(inst);
+	}
+
+	return err;
+}
+
+static struct crypto_template crypto_xctr_tmpl = {
+	.name = "xctr",
+	.create = crypto_xctr_create,
+	.module = THIS_MODULE,
+};
+
+static int __init crypto_xctr_module_init(void)
+{
+	return crypto_register_template(&crypto_xctr_tmpl);
+}
+
+static void __exit crypto_xctr_module_exit(void)
+{
+	crypto_unregister_template(&crypto_xctr_tmpl);
+}
+
+subsys_initcall(crypto_xctr_module_init);
+module_exit(crypto_xctr_module_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("XCTR block cipher mode of operation");
+MODULE_ALIAS_CRYPTO("xctr");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
-- 
2.35.1.723.g4982287a31-goog



* [PATCH v3 2/8] crypto: polyval - Add POLYVAL support
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 1/8] crypto: xctr - Add XCTR support Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-22  5:55   ` Eric Biggers
  2022-03-15 23:00 ` [PATCH v3 3/8] crypto: hctr2 - Add HCTR2 support Nathan Huckleberry
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add support for POLYVAL, an ε-almost-∆-universal hash function similar to
GHASH.  POLYVAL is used as a component to implement HCTR2 mode.

POLYVAL is implemented as an shash algorithm.  The implementation is
modified from ghash-generic.c.

More information on POLYVAL can be found in the HCTR2 paper:
https://eprint.iacr.org/2021/1441.pdf
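
As an illustration only (not part of this patch), computing a POLYVAL
digest through the shash API could look roughly like the sketch below.
The function name is made up; "polyval" is the algorithm name registered
by this patch, and error handling is kept minimal:

	#include <crypto/hash.h>
	#include <crypto/polyval.h>
	#include <linux/err.h>

	static int polyval_digest_example(const u8 key[POLYVAL_BLOCK_SIZE],
					  const u8 *data, unsigned int len,
					  u8 out[POLYVAL_DIGEST_SIZE])
	{
		struct crypto_shash *tfm;
		int err;

		tfm = crypto_alloc_shash("polyval", 0, 0);
		if (IS_ERR(tfm))
			return PTR_ERR(tfm);

		err = crypto_shash_setkey(tfm, key, POLYVAL_BLOCK_SIZE);
		if (!err)
			err = crypto_shash_tfm_digest(tfm, data, len, out);

		crypto_free_shash(tfm);
		return err;
	}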

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 crypto/Kconfig           |   8 ++
 crypto/Makefile          |   1 +
 crypto/polyval-generic.c | 205 +++++++++++++++++++++++++++++++++++++++
 crypto/tcrypt.c          |   4 +
 crypto/testmgr.c         |   6 ++
 crypto/testmgr.h         | 171 ++++++++++++++++++++++++++++++++
 include/crypto/polyval.h |  17 ++++
 7 files changed, 412 insertions(+)
 create mode 100644 crypto/polyval-generic.c
 create mode 100644 include/crypto/polyval.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 47752aaa16ff..00139845d76d 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -768,6 +768,14 @@ config CRYPTO_GHASH
 	  GHASH is the hash function used in GCM (Galois/Counter Mode).
 	  It is not a general-purpose cryptographic hash function.
 
+config CRYPTO_POLYVAL
+	tristate
+	select CRYPTO_GF128MUL
+	select CRYPTO_HASH
+	help
+	  POLYVAL is the hash function used in HCTR2.  It is not a general-purpose
+	  cryptographic hash function.
+
 config CRYPTO_POLY1305
 	tristate "Poly1305 authenticator algorithm"
 	select CRYPTO_HASH
diff --git a/crypto/Makefile b/crypto/Makefile
index 6b3fe3df1489..561f901a91d4 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -169,6 +169,7 @@ UBSAN_SANITIZE_jitterentropy.o = n
 jitterentropy_rng-y := jitterentropy.o jitterentropy-kcapi.o
 obj-$(CONFIG_CRYPTO_TEST) += tcrypt.o
 obj-$(CONFIG_CRYPTO_GHASH) += ghash-generic.o
+obj-$(CONFIG_CRYPTO_POLYVAL) += polyval-generic.o
 obj-$(CONFIG_CRYPTO_USER_API) += af_alg.o
 obj-$(CONFIG_CRYPTO_USER_API_HASH) += algif_hash.o
 obj-$(CONFIG_CRYPTO_USER_API_SKCIPHER) += algif_skcipher.o
diff --git a/crypto/polyval-generic.c b/crypto/polyval-generic.c
new file mode 100644
index 000000000000..2de58c316a79
--- /dev/null
+++ b/crypto/polyval-generic.c
@@ -0,0 +1,205 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * POLYVAL: hash function for HCTR2.
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1@iki.fi>
+ * Copyright (c) 2009 Intel Corp.
+ *   Author: Huang Ying <ying.huang@intel.com>
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * Code based on crypto/ghash-generic.c
+ *
+ * POLYVAL is a keyed hash function similar to GHASH. POLYVAL uses a
+ * different modulus for finite field multiplication which makes hardware
+ * accelerated implementations on little-endian machines faster.
+ *
+ * Like GHASH, POLYVAL is not a cryptographic hash function and should
+ * not be used outside of crypto modes explicitly designed to use POLYVAL.
+ *
+ * This implementation uses a convenient trick involving the GHASH and POLYVAL
+ * fields. This trick allows multiplication in the POLYVAL field to be
+ * implemented by using multiplication in the GHASH field as a subroutine. An
+ * element of the POLYVAL field can be converted to an element of the GHASH
+ * field by computing x*REVERSE(a), where REVERSE reverses the byte-ordering of
+ * a. Similarly, an element of the GHASH field can be converted back to the
+ * POLYVAL field by computing REVERSE(x^{-1}*a).
+ *
+ * By using this trick, we do not need to implement the POLYVAL field for the
+ * generic implementation.
+ *
+ * Warning: this generic implementation is not intended to be used in practice
+ * and is not constant time. For practical use, a hardware accelerated
+ * implementation of POLYVAL should be used instead.
+ *
+ */
+
+#include <asm/unaligned.h>
+#include <crypto/algapi.h>
+#include <crypto/gf128mul.h>
+#include <crypto/polyval.h>
+#include <crypto/internal/hash.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+
+struct polyval_tfm_ctx {
+	struct gf128mul_4k *gf128;
+};
+
+struct polyval_desc_ctx {
+	union {
+		u8 buffer[POLYVAL_BLOCK_SIZE];
+		be128 buffer128;
+	};
+	u32 bytes;
+};
+
+static int polyval_init(struct shash_desc *desc)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memset(dctx, 0, sizeof(*dctx));
+
+	return 0;
+}
+
+static void reverse_block(u8 block[POLYVAL_BLOCK_SIZE])
+{
+	u64 *p1 = (u64 *)block;
+	u64 *p2 = (u64 *)&block[8];
+	u64 a = get_unaligned(p1);
+	u64 b = get_unaligned(p2);
+
+	put_unaligned(swab64(a), p2);
+	put_unaligned(swab64(b), p1);
+}
+
+static int polyval_setkey(struct crypto_shash *tfm,
+			const u8 *key, unsigned int keylen)
+{
+	struct polyval_tfm_ctx *ctx = crypto_shash_ctx(tfm);
+	be128 k;
+
+	if (keylen != POLYVAL_BLOCK_SIZE)
+		return -EINVAL;
+
+	gf128mul_free_4k(ctx->gf128);
+
+	BUILD_BUG_ON(sizeof(k) != POLYVAL_BLOCK_SIZE);
+	// avoid violating alignment rules
+	memcpy(&k, key, POLYVAL_BLOCK_SIZE);
+
+	reverse_block((u8 *)&k);
+	gf128mul_x_lle(&k, &k);
+
+	ctx->gf128 = gf128mul_init_4k_lle(&k);
+	memzero_explicit(&k, POLYVAL_BLOCK_SIZE);
+
+	if (!ctx->gf128)
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int polyval_update(struct shash_desc *desc,
+			 const u8 *src, unsigned int srclen)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	u8 *pos;
+	u8 tmp[POLYVAL_BLOCK_SIZE];
+	int n;
+
+	if (dctx->bytes) {
+		n = min(srclen, dctx->bytes);
+		pos = dctx->buffer + dctx->bytes - 1;
+
+		dctx->bytes -= n;
+		srclen -= n;
+
+		while (n--)
+			*pos-- ^= *src++;
+
+		if (!dctx->bytes)
+			gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+	}
+
+	while (srclen >= POLYVAL_BLOCK_SIZE) {
+		memcpy(tmp, src, POLYVAL_BLOCK_SIZE);
+		reverse_block(tmp);
+		crypto_xor(dctx->buffer, tmp, POLYVAL_BLOCK_SIZE);
+		gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+		src += POLYVAL_BLOCK_SIZE;
+		srclen -= POLYVAL_BLOCK_SIZE;
+	}
+
+	if (srclen) {
+		dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+		pos = dctx->buffer + POLYVAL_BLOCK_SIZE - 1;
+		while (srclen--)
+			*pos-- ^= *src++;
+	}
+
+	return 0;
+}
+
+static int polyval_final(struct shash_desc *desc, u8 *dst)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	const struct polyval_tfm_ctx *ctx = crypto_shash_ctx(desc->tfm);
+
+	if (dctx->bytes)
+		gf128mul_4k_lle(&dctx->buffer128, ctx->gf128);
+	dctx->bytes = 0;
+
+	reverse_block(dctx->buffer);
+	memcpy(dst, dctx->buffer, POLYVAL_BLOCK_SIZE);
+
+	return 0;
+}
+
+static void polyval_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct polyval_tfm_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	gf128mul_free_4k(ctx->gf128);
+}
+
+static struct shash_alg polyval_alg = {
+	.digestsize	= POLYVAL_DIGEST_SIZE,
+	.init		= polyval_init,
+	.update		= polyval_update,
+	.final		= polyval_final,
+	.setkey		= polyval_setkey,
+	.descsize	= sizeof(struct polyval_desc_ctx),
+	.base		= {
+		.cra_name		= "polyval",
+		.cra_driver_name	= "polyval-generic",
+		.cra_priority		= 100,
+		.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct polyval_tfm_ctx),
+		.cra_module		= THIS_MODULE,
+		.cra_exit		= polyval_exit_tfm,
+	},
+};
+
+static int __init polyval_mod_init(void)
+{
+	return crypto_register_shash(&polyval_alg);
+}
+
+static void __exit polyval_mod_exit(void)
+{
+	crypto_unregister_shash(&polyval_alg);
+}
+
+subsys_initcall(polyval_mod_init);
+module_exit(polyval_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-generic");
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index fd671d0e2012..dd9cf216029b 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -1730,6 +1730,10 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 		ret += tcrypt_test("ccm(sm4)");
 		break;
 
+	case 57:
+		ret += tcrypt_test("polyval");
+		break;
+
 	case 100:
 		ret += tcrypt_test("hmac(md5)");
 		break;
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index fbb12d7d78af..d807b200edf6 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5284,6 +5284,12 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.hash = __VECS(poly1305_tv_template)
 		}
+	}, {
+		.alg = "polyval",
+		.test = alg_test_hash,
+		.suite = {
+			.hash = __VECS(polyval_tv_template)
+		}
 	}, {
 		.alg = "rfc3686(ctr(aes))",
 		.test = alg_test_skcipher,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index bf4ff97eeb37..c581e5405916 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -34929,4 +34929,175 @@ static const struct cipher_testvec aes_xctr_tv_template[] = {
 
 };
 
+/*
+ * Test vectors generated using https://github.com/google/hctr2
+ *
+ * To ensure compatibility with RFC 8452, some tests were sourced from
+ * https://datatracker.ietf.org/doc/html/rfc8452
+ */
+static const struct hash_testvec polyval_tv_template[] = {
+	{ // From RFC 8452
+		.key	= "\x31\x07\x28\xd9\x91\x1f\x1f\x38"
+			  "\x37\xb2\x43\x16\xc3\xfa\xb9\xa0",
+		.plaintext	= "\x65\x78\x61\x6d\x70\x6c\x65\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x48\x65\x6c\x6c\x6f\x20\x77\x6f"
+			  "\x72\x6c\x64\x00\x00\x00\x00\x00"
+			  "\x38\x00\x00\x00\x00\x00\x00\x00"
+			  "\x58\x00\x00\x00\x00\x00\x00\x00",
+		.digest	= "\xad\x7f\xcf\x0b\x51\x69\x85\x16"
+			  "\x62\x67\x2f\x3c\x5f\x95\x13\x8f",
+		.psize	= 48,
+		.ksize	= 16,
+	},
+	{ // From RFC 8452
+		.key	= "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+			  "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+		.plaintext	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+		.digest	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+		.psize	= 16,
+		.ksize	= 16,
+	},
+	{ // From RFC 8452
+		.key	= "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+			  "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+		.plaintext	= "\x01\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x40\x00\x00\x00\x00\x00\x00\x00",
+		.digest	= "\xeb\x93\xb7\x74\x09\x62\xc5\xe4"
+			  "\x9d\x2a\x90\xa7\xdc\x5c\xec\x74",
+		.psize	= 32,
+		.ksize	= 16,
+	},
+	{ // From RFC 8452
+		.key	= "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+			  "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+		.plaintext	= "\x01\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x02\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x03\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x80\x01\x00\x00\x00\x00\x00\x00",
+		.digest	= "\x81\x38\x87\x46\xbc\x22\xd2\x6b"
+			  "\x2a\xbc\x3d\xcb\x15\x75\x42\x22",
+		.psize	= 64,
+		.ksize	= 16,
+	},
+	{ // From RFC 8452
+		.key	= "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+			  "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+		.plaintext	= "\x01\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x02\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x03\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x04\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x02\x00\x00\x00\x00\x00\x00",
+		.digest	= "\x1e\x39\xb6\xd3\x34\x4d\x34\x8f"
+			  "\x60\x44\xf8\x99\x35\xd1\xcf\x78",
+		.psize	= 80,
+		.ksize	= 16,
+	},
+	{ // From RFC 8452
+		.key	= "\xd9\xb3\x60\x27\x96\x94\x94\x1a"
+			  "\xc5\xdb\xc6\x98\x7a\xda\x73\x77",
+		.plaintext	= "\x01\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x02\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x03\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x04\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x05\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x08\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x02\x00\x00\x00\x00\x00\x00",
+		.digest	= "\xff\xcd\x05\xd5\x77\x0f\x34\xad"
+			  "\x92\x67\xf0\xa5\x99\x94\xb1\x5a",
+		.psize	= 96,
+		.ksize	= 16,
+	},
+	{ // Random ( 1)
+		.key	= "\x90\xcc\xac\xee\xba\xd7\xd4\x68"
+			  "\x98\xa6\x79\x70\xdf\x66\x15\x6c",
+		.plaintext	= "",
+		.digest	= "\x00\x00\x00\x00\x00\x00\x00\x00"
+			  "\x00\x00\x00\x00\x00\x00\x00\x00",
+		.psize	= 0,
+		.ksize	= 16,
+	},
+	{ // Random ( 1)
+		.key	= "\xc1\x45\x71\xf0\x30\x07\x94\xe7"
+			  "\x3a\xdd\xe4\xc6\x19\x2d\x02\xa2",
+		.plaintext	= "\xc1\x5d\x47\xc7\x4c\x7c\x5e\x07"
+			  "\x85\x14\x8f\x79\xcc\x73\x83\xf7"
+			  "\x35\xb8\xcb\x73\x61\xf0\x53\x31"
+			  "\xbf\x84\xde\xb6\xde\xaf\xb0\xb8"
+			  "\xb7\xd9\x11\x91\x89\xfd\x1e\x4c"
+			  "\x84\x4a\x1f\x2a\x87\xa4\xaf\x62"
+			  "\x8d\x7d\x58\xf6\x43\x35\xfc\x53"
+			  "\x8f\x1a\xf6\x12\xe1\x13\x3f\x66"
+			  "\x91\x4b\x13\xd6\x45\xfb\xb0\x7a"
+			  "\xe0\x8b\x8e\x99\xf7\x86\x46\x37"
+			  "\xd1\x22\x9e\x52\xf3\x3f\xd9\x75"
+			  "\x2c\x2c\xc6\xbb\x0e\x08\x14\x29"
+			  "\xe8\x50\x2f\xd8\xbe\xf4\xe9\x69"
+			  "\x4a\xee\xf7\xae\x15\x65\x35\x1e",
+		.digest	= "\x00\x4f\x5d\xe9\x3b\xc0\xd6\x50"
+			  "\x3e\x38\x73\x86\xc6\xda\xca\x7f",
+		.psize	= 112,
+		.ksize	= 16,
+	},
+	{ // Random ( 1)
+		.key	= "\x37\xbe\x68\x16\x50\xb9\x4e\xb0"
+			  "\x47\xde\xe2\xbd\xde\xe4\x48\x09",
+		.plaintext	= "\x87\xfc\x68\x9f\xff\xf2\x4a\x1e"
+			  "\x82\x3b\x73\x8f\xc1\xb2\x1b\x7a"
+			  "\x6c\x4f\x81\xbc\x88\x9b\x6c\xa3"
+			  "\x9c\xc2\xa5\xbc\x14\x70\x4c\x9b"
+			  "\x0c\x9f\x59\x92\x16\x4b\x91\x3d"
+			  "\x18\x55\x22\x68\x12\x8c\x63\xb2"
+			  "\x51\xcb\x85\x4b\xd2\xae\x0b\x1c"
+			  "\x5d\x28\x9d\x1d\xb1\xc8\xf0\x77"
+			  "\xe9\xb5\x07\x4e\x06\xc8\xee\xf8"
+			  "\x1b\xed\x72\x2a\x55\x7d\x16\xc9"
+			  "\xf2\x54\xe7\xe9\xe0\x44\x5b\x33"
+			  "\xb1\x49\xee\xff\x43\xfb\x82\xcd"
+			  "\x4a\x70\x78\x81\xa4\x34\x36\xe8"
+			  "\x4c\x28\x54\xa6\x6c\xc3\x6b\x78"
+			  "\xe7\xc0\x5d\xc6\x5d\x81\xab\x70"
+			  "\x08\x86\xa1\xfd\xf4\x77\x55\xfd"
+			  "\xa3\xe9\xe2\x1b\xdf\x99\xb7\x80"
+			  "\xf9\x0a\x4f\x72\x4a\xd3\xaf\xbb"
+			  "\xb3\x3b\xeb\x08\x58\x0f\x79\xce"
+			  "\xa5\x99\x05\x12\x34\xd4\xf4\x86"
+			  "\x37\x23\x1d\xc8\x49\xc0\x92\xae"
+			  "\xa6\xac\x9b\x31\x55\xed\x15\xc6"
+			  "\x05\x17\x37\x8d\x90\x42\xe4\x87"
+			  "\x89\x62\x88\x69\x1c\x6a\xfd\xe3"
+			  "\x00\x2b\x47\x1a\x73\xc1\x51\xc2"
+			  "\xc0\x62\x74\x6a\x9e\xb2\xe5\x21"
+			  "\xbe\x90\xb5\xb0\x50\xca\x88\x68"
+			  "\xe1\x9d\x7a\xdf\x6c\xb7\xb9\x98"
+			  "\xee\x28\x62\x61\x8b\xd1\x47\xf9"
+			  "\x04\x7a\x0b\x5d\xcd\x2b\x65\xf5"
+			  "\x12\xa3\xfe\x1a\xaa\x2c\x78\x42"
+			  "\xb8\xbe\x7d\x74\xeb\x59\xba\xba",
+		.digest	= "\xae\x11\xd4\x60\x2a\x5f\x9e\x42"
+			  "\x89\x04\xc2\x34\x8d\x55\x94\x0a",
+		.psize	= 256,
+		.ksize	= 16,
+	},
+
+};
+
 #endif	/* _CRYPTO_TESTMGR_H */
diff --git a/include/crypto/polyval.h b/include/crypto/polyval.h
new file mode 100644
index 000000000000..b14c38aa9166
--- /dev/null
+++ b/include/crypto/polyval.h
@@ -0,0 +1,17 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Common values for the Polyval hash algorithm
+ *
+ * Copyright 2021 Google LLC
+ */
+
+#ifndef _CRYPTO_POLYVAL_H
+#define _CRYPTO_POLYVAL_H
+
+#include <linux/types.h>
+#include <linux/crypto.h>
+
+#define POLYVAL_BLOCK_SIZE	16
+#define POLYVAL_DIGEST_SIZE	16
+
+#endif
-- 
2.35.1.723.g4982287a31-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v3 3/8] crypto: hctr2 - Add HCTR2 support
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 1/8] crypto: xctr - Add XCTR support Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 2/8] crypto: polyval - Add POLYVAL support Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-22  7:00   ` Eric Biggers
  2022-03-15 23:00 ` [PATCH v3 4/8] crypto: x86/aesni-xctr: Add accelerated implementation of XCTR Nathan Huckleberry
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add support for HCTR2 as a template.  HCTR2 is a length-preserving
encryption mode that is efficient on processors with instructions to
accelerate AES and carryless multiplication, e.g. x86 processors with
AES-NI and CLMUL, and ARM processors with the ARMv8 Crypto Extensions.

As a length-preserving encryption mode, HCTR2 is suitable for
applications such as storage encryption where ciphertext expansion is
not possible, and thus authenticated encryption cannot be used.
Currently, such applications usually use XTS, or in some cases Adiantum.
XTS has the disadvantage that it is a narrow-block mode: a bitflip will
only change 16 bytes in the resulting ciphertext or plaintext.  This
reveals more information to an attacker than necessary.

HCTR2 is a wide-block mode, so it provides a stronger security property:
a bitflip will change the entire message.  HCTR2 is somewhat similar to
Adiantum, which is also a wide-block mode.  However, HCTR2 is designed
to take advantage of existing crypto instructions, while Adiantum
targets devices without such hardware support.  Adiantum is also
designed with longer messages in mind, while HCTR2 is designed to be
efficient even on short messages.

HCTR2 requires POLYVAL and XCTR as components.  More information on
HCTR2 can be found here: Length-preserving encryption with HCTR2:
https://eprint.iacr.org/2021/1441.pdf
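
For orientation, the encryption flow that this template implements can be
sketched in plain C as below.  This is only an illustrative sketch, not the
kernel implementation: polyval_hash(), blockcipher_encrypt() and xctr_crypt()
are assumed stand-ins for POLYVAL (including the tweak-length block and
padding), the block cipher and XCTR, and the hash key h and mask L stand for
the subkeys derived in the setkey path.

#include <stddef.h>
#include <stdint.h>

#define BLK 16

/* Assumed primitives (prototypes only; not kernel APIs): */
void polyval_hash(const uint8_t h[BLK], const uint8_t *tweak, size_t tweak_len,
		  const uint8_t *msg, size_t msg_len, uint8_t out[BLK]);
void blockcipher_encrypt(const uint8_t *key, const uint8_t in[BLK],
			 uint8_t out[BLK]);
void xctr_crypt(const uint8_t *key, const uint8_t s[BLK],
		const uint8_t *in, uint8_t *out, size_t len);

static void xor16(uint8_t *dst, const uint8_t *a, const uint8_t *b)
{
	int i;

	for (i = 0; i < BLK; i++)
		dst[i] = a[i] ^ b[i];
}

/* Encrypt P = M || N (|M| = 16) into C = U || V; h and L are subkeys. */
static void hctr2_encrypt_sketch(const uint8_t *key, const uint8_t h[BLK],
				 const uint8_t L[BLK],
				 const uint8_t *tweak, size_t tweak_len,
				 const uint8_t *p, size_t len, uint8_t *c)
{
	const uint8_t *m = p, *n = p + BLK;
	uint8_t *u = c, *v = c + BLK;
	uint8_t digest[BLK], mm[BLK], uu[BLK], s[BLK];

	polyval_hash(h, tweak, tweak_len, n, len - BLK, digest);
	xor16(mm, m, digest);                 /* MM = M ^ H(T, N)          */
	blockcipher_encrypt(key, mm, uu);     /* UU = E(MM)                */
	xor16(s, mm, uu);
	xor16(s, s, L);                       /* S = MM ^ UU ^ L           */
	xctr_crypt(key, s, n, v, len - BLK);  /* V = N ^ XCTR keystream(S) */
	polyval_hash(h, tweak, tweak_len, v, len - BLK, digest);
	xor16(u, uu, digest);                 /* U = UU ^ H(T, V)          */
}

Decryption runs the same steps with the block cipher call inverted and the
roles of M/U and N/V exchanged, matching the comments in hctr2_crypt().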

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 crypto/Kconfig   |  11 +
 crypto/Makefile  |   1 +
 crypto/hctr2.c   | 580 ++++++++++++++++++++++++++++++++++++++++
 crypto/tcrypt.c  |   5 +
 crypto/testmgr.c |   8 +
 crypto/testmgr.h | 672 +++++++++++++++++++++++++++++++++++++++++++++++
 6 files changed, 1277 insertions(+)
 create mode 100644 crypto/hctr2.c

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 00139845d76d..0dedba74db4a 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -532,6 +532,17 @@ config CRYPTO_ADIANTUM
 
 	  If unsure, say N.
 
+config CRYPTO_HCTR2
+	tristate "HCTR2 support"
+	select CRYPTO_XCTR
+	select CRYPTO_POLYVAL
+	select CRYPTO_MANAGER
+	help
+	  HCTR2 is a length-preserving encryption mode for storage encryption that
+	  is efficient on processors with instructions to accelerate AES and
+	  carryless multiplication, e.g. x86 processors with AES-NI and CLMUL, and
+	  ARM processors with the ARMv8 crypto extensions.
+
 config CRYPTO_ESSIV
 	tristate "ESSIV support for block encryption"
 	select CRYPTO_AUTHENC
diff --git a/crypto/Makefile b/crypto/Makefile
index 561f901a91d4..2dca9dbdede6 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -94,6 +94,7 @@ obj-$(CONFIG_CRYPTO_LRW) += lrw.o
 obj-$(CONFIG_CRYPTO_XTS) += xts.o
 obj-$(CONFIG_CRYPTO_CTR) += ctr.o
 obj-$(CONFIG_CRYPTO_XCTR) += xctr.o
+obj-$(CONFIG_CRYPTO_HCTR2) += hctr2.o
 obj-$(CONFIG_CRYPTO_KEYWRAP) += keywrap.o
 obj-$(CONFIG_CRYPTO_ADIANTUM) += adiantum.o
 obj-$(CONFIG_CRYPTO_NHPOLY1305) += nhpoly1305.o
diff --git a/crypto/hctr2.c b/crypto/hctr2.c
new file mode 100644
index 000000000000..81b986dcc5b0
--- /dev/null
+++ b/crypto/hctr2.c
@@ -0,0 +1,580 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * HCTR2 length-preserving encryption mode
+ *
+ * Copyright 2021 Google LLC
+ */
+
+/*
+ * HCTR2 is a length-preserving encryption mode that is efficient on
+ * processors with instructions to accelerate AES and carryless
+ * multiplication, e.g. x86 processors with AES-NI and CLMUL, and ARM
+ * processors with the ARMv8 crypto extensions.
+ *
+ * For more details, see the paper: Length-preserving encryption with HCTR2
+ * (https://eprint.iacr.org/2021/1441.pdf)
+ */
+
+#include <crypto/internal/cipher.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/skcipher.h>
+#include <crypto/polyval.h>
+#include <crypto/scatterwalk.h>
+#include <linux/module.h>
+
+#define BLOCKCIPHER_BLOCK_SIZE		16
+
+/*
+ * The specification allows variable-length tweaks, but Linux's crypto API
+ * currently only allows algorithms to support a single length.  The "natural"
+ * tweak length for HCTR2 is 16, since that fits into one POLYVAL block for
+ * the best performance.  But longer tweaks are useful for fscrypt, to avoid
+ * needing to derive per-file keys.  So instead we use two blocks, or 32 bytes.
+ */
+#define TWEAK_SIZE		32
+
+struct hctr2_instance_ctx {
+	struct crypto_cipher_spawn blockcipher_spawn;
+	struct crypto_skcipher_spawn xctr_spawn;
+	struct crypto_shash_spawn polyval_spawn;
+};
+
+struct hctr2_tfm_ctx {
+	struct crypto_cipher *blockcipher;
+	struct crypto_skcipher *xctr;
+	struct crypto_shash *polyval;
+	u8 L[BLOCKCIPHER_BLOCK_SIZE];
+};
+
+struct hctr2_request_ctx {
+	u8 first_block[BLOCKCIPHER_BLOCK_SIZE];
+	u8 xctr_iv[BLOCKCIPHER_BLOCK_SIZE];
+	struct scatterlist *bulk_part_dst;
+	struct scatterlist *bulk_part_src;
+	struct scatterlist sg_src[2];
+	struct scatterlist sg_dst[2];
+	/* Sub-requests, must be last */
+	union {
+		struct shash_desc hash_desc;
+		struct skcipher_request xctr_req;
+	} u;
+};
+
+static int hctr2_setkey(struct crypto_skcipher *tfm, const u8 *key,
+			unsigned int keylen)
+{
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	u8 hbar[BLOCKCIPHER_BLOCK_SIZE];
+	__le64 tweak_length_block[2];
+	void *exported_length_digests[2];
+	SHASH_DESC_ON_STACK(shash, tfm->polyval);
+	int err;
+
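+	/*
+	 * The two precomputed POLYVAL states (one per possible tweak-length
+	 * block) are stored in the extra space allocated after the tfm
+	 * context; see cra_ctxsize in hctr2_create_common().
+	 */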
+	exported_length_digests[0] = (u8 *)tctx + sizeof(*tctx);
+	exported_length_digests[1] = (u8 *)tctx + sizeof(*tctx) +
+				     crypto_shash_descsize(tctx->polyval);
+	crypto_cipher_clear_flags(tctx->blockcipher, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(tctx->blockcipher,
+				crypto_skcipher_get_flags(tfm) &
+				CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(tctx->blockcipher, key, keylen);
+	if (err)
+		return err;
+
+	crypto_skcipher_clear_flags(tctx->xctr, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(tctx->xctr,
+				  crypto_skcipher_get_flags(tfm) &
+				  CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(tctx->xctr, key, keylen);
+	if (err)
+		return err;
+
+	memset(tctx->L, 0, sizeof(tctx->L));
+	memset(hbar, 0, sizeof(hbar));
+	tctx->L[0] = 0x01;
+	crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
+	crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
+
+	crypto_shash_clear_flags(tctx->polyval, CRYPTO_TFM_REQ_MASK);
+	crypto_shash_set_flags(tctx->polyval, crypto_skcipher_get_flags(tfm) &
+			       CRYPTO_TFM_REQ_MASK);
+	err = crypto_shash_setkey(tctx->polyval, hbar, BLOCKCIPHER_BLOCK_SIZE);
+	if (err)
+		return err;
+	memzero_explicit(hbar, sizeof(hbar));
+
+	shash->tfm = tctx->polyval;
+	memset(tweak_length_block, 0, sizeof(tweak_length_block));
+
+	tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 2);
+	err = crypto_shash_init(shash);
+	if (err)
+		return err;
+	err = crypto_shash_update(shash, (u8 *)tweak_length_block,
+				  POLYVAL_BLOCK_SIZE);
+	if (err)
+		return err;
+	err = crypto_shash_export(shash, exported_length_digests[0]);
+	if (err)
+		return err;
+
+	tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 3);
+	err = crypto_shash_init(shash);
+	if (err)
+		return err;
+	err = crypto_shash_update(shash, (u8 *)tweak_length_block,
+				  POLYVAL_BLOCK_SIZE);
+	if (err)
+		return err;
+	return crypto_shash_export(shash, exported_length_digests[1]);
+}
+
+static int hctr2_hash_tweak(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	struct shash_desc *hash_desc = &rctx->u.hash_desc;
+	const void *exported_length_digests[2];
+	void *exported_tweak_digest;
+	int err;
+
+	exported_length_digests[0] = (u8 *)tctx + sizeof(*tctx);
+	exported_length_digests[1] = (u8 *)tctx + sizeof(*tctx) +
+				     crypto_shash_descsize(tctx->polyval);
+	exported_tweak_digest = (u8 *)rctx + tfm->reqsize -
+				crypto_shash_descsize(tctx->polyval);
+
+	hash_desc->tfm = tctx->polyval;
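+	/*
+	 * The hash starts from one of the two states precomputed in setkey;
+	 * which one depends on whether the message length is a multiple of
+	 * the block size, since that changes the tweak-length block.
+	 */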
+	if (req->cryptlen % POLYVAL_BLOCK_SIZE == 0)
+		err = crypto_shash_import(hash_desc, exported_length_digests[0]);
+	else
+		err = crypto_shash_import(hash_desc, exported_length_digests[1]);
+	if (err)
+		return err;
+	err = crypto_shash_update(hash_desc, req->iv, TWEAK_SIZE);
+	if (err)
+		return err;
+
+	return crypto_shash_export(hash_desc, exported_tweak_digest);
+}
+
+static int hctr2_hash_message(struct skcipher_request *req,
+			      struct scatterlist *sgl,
+			      u8 digest[POLYVAL_DIGEST_SIZE])
+{
+	u8 padding[BLOCKCIPHER_BLOCK_SIZE];
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	struct shash_desc *hash_desc = &rctx->u.hash_desc;
+	const unsigned int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	struct sg_mapping_iter miter;
+	unsigned int remainder = bulk_len % BLOCKCIPHER_BLOCK_SIZE;
+	int err = 0, i;
+	int n = 0;
+
+	sg_miter_start(&miter, sgl, sg_nents(sgl),
+		       SG_MITER_FROM_SG | SG_MITER_ATOMIC);
+	for (i = 0; i < bulk_len; i += n) {
+		sg_miter_next(&miter);
+		n = min_t(unsigned int, miter.length, bulk_len - i);
+		err = crypto_shash_update(hash_desc, miter.addr, n);
+		if (err)
+			break;
+	}
+	sg_miter_stop(&miter);
+
+	if (err)
+		return err;
+
+	if (remainder) {
+		memset(padding, 0, BLOCKCIPHER_BLOCK_SIZE);
+		padding[0] = 0x01;
+		err = crypto_shash_update(hash_desc, padding,
+					  BLOCKCIPHER_BLOCK_SIZE - remainder);
+		if (err)
+			return err;
+	}
+	return crypto_shash_final(hash_desc, digest);
+}
+
+static int hctr2_finish(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	u8 digest[POLYVAL_DIGEST_SIZE];
+	struct shash_desc *hash_desc = &rctx->u.hash_desc;
+	void *exported_tweak_digest;
+	int err;
+
+	exported_tweak_digest = (u8 *)rctx + tfm->reqsize -
+				crypto_shash_descsize(tctx->polyval);
+
+	// U = UU ^ H(T || V)
+	// or M = MM ^ H(T || N)
+	hash_desc->tfm = tctx->polyval;
+	err = crypto_shash_import(hash_desc, exported_tweak_digest);
+	if (err)
+		return err;
+	err = hctr2_hash_message(req, rctx->bulk_part_dst, digest);
+	if (err)
+		return err;
+	crypto_xor(rctx->first_block, digest, BLOCKCIPHER_BLOCK_SIZE);
+
+	// Copy U (or M) into dst scatterlist
+	scatterwalk_map_and_copy(rctx->first_block, req->dst,
+				 0, BLOCKCIPHER_BLOCK_SIZE, 1);
+	return 0;
+}
+
+static void hctr2_xctr_done(struct crypto_async_request *areq,
+				    int err)
+{
+	struct skcipher_request *req = areq->data;
+
+	if (!err)
+		err = hctr2_finish(req);
+
+	skcipher_request_complete(req, err);
+}
+
+static int hctr2_crypt(struct skcipher_request *req, bool enc)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct hctr2_request_ctx *rctx = skcipher_request_ctx(req);
+	u8 digest[POLYVAL_DIGEST_SIZE];
+	int bulk_len = req->cryptlen - BLOCKCIPHER_BLOCK_SIZE;
+	int err;
+
+	// Requests must be at least one block
+	if (req->cryptlen < BLOCKCIPHER_BLOCK_SIZE)
+		return -EINVAL;
+
+	// Copy M (or U) into a temporary buffer
+	scatterwalk_map_and_copy(rctx->first_block, req->src,
+				 0, BLOCKCIPHER_BLOCK_SIZE, 0);
+
+	// Create scatterlists for N and V
+	rctx->bulk_part_src = scatterwalk_ffwd(rctx->sg_src, req->src,
+					       BLOCKCIPHER_BLOCK_SIZE);
+	rctx->bulk_part_dst = scatterwalk_ffwd(rctx->sg_dst, req->dst,
+					       BLOCKCIPHER_BLOCK_SIZE);
+
+	// MM = M ^ H(T || N)
+	// or UU = U ^ H(T || V)
+	err = hctr2_hash_tweak(req);
+	if (err)
+		return err;
+	err = hctr2_hash_message(req, rctx->bulk_part_src, digest);
+	if (err)
+		return err;
+	crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+
+	// UU = E(MM)
+	// or MM = D(UU)
+	if (enc)
+		crypto_cipher_encrypt_one(tctx->blockcipher, rctx->first_block,
+					  digest);
+	else
+		crypto_cipher_decrypt_one(tctx->blockcipher, rctx->first_block,
+					  digest);
+
+	// S = MM ^ UU ^ L
+	crypto_xor(digest, rctx->first_block, BLOCKCIPHER_BLOCK_SIZE);
+	crypto_xor_cpy(rctx->xctr_iv, digest, tctx->L, BLOCKCIPHER_BLOCK_SIZE);
+
+	// V = XCTR(S, N)
+	// or N = XCTR(S, V)
+	skcipher_request_set_tfm(&rctx->u.xctr_req, tctx->xctr);
+	skcipher_request_set_crypt(&rctx->u.xctr_req, rctx->bulk_part_src,
+				   rctx->bulk_part_dst, bulk_len,
+				   rctx->xctr_iv);
+	skcipher_request_set_callback(&rctx->u.xctr_req,
+				      req->base.flags,
+				      hctr2_xctr_done, req);
+	return crypto_skcipher_encrypt(&rctx->u.xctr_req) ?:
+		hctr2_finish(req);
+}
+
+static int hctr2_encrypt(struct skcipher_request *req)
+{
+	return hctr2_crypt(req, true);
+}
+
+static int hctr2_decrypt(struct skcipher_request *req)
+{
+	return hctr2_crypt(req, false);
+}
+
+static int hctr2_init_tfm(struct crypto_skcipher *tfm)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+	struct crypto_skcipher *xctr;
+	struct crypto_cipher *blockcipher;
+	struct crypto_shash *polyval;
+	unsigned int subreq_size;
+	int err;
+
+	xctr = crypto_spawn_skcipher(&ictx->xctr_spawn);
+	if (IS_ERR(xctr))
+		return PTR_ERR(xctr);
+
+	blockcipher = crypto_spawn_cipher(&ictx->blockcipher_spawn);
+	if (IS_ERR(blockcipher)) {
+		err = PTR_ERR(blockcipher);
+		goto err_free_xctr;
+	}
+
+	polyval = crypto_spawn_shash(&ictx->polyval_spawn);
+	if (IS_ERR(polyval)) {
+		err = PTR_ERR(polyval);
+		goto err_free_blockcipher;
+	}
+
+	tctx->xctr = xctr;
+	tctx->blockcipher = blockcipher;
+	tctx->polyval = polyval;
+
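+	/*
+	 * The request context is followed by the larger of the two
+	 * sub-requests and then by one exported POLYVAL state, which carries
+	 * the hashed tweak from hctr2_hash_tweak() to hctr2_finish().
+	 */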
+	BUILD_BUG_ON(offsetofend(struct hctr2_request_ctx, u) !=
+		     sizeof(struct hctr2_request_ctx));
+	subreq_size = max(sizeof_field(struct hctr2_request_ctx, u.hash_desc) +
+			  crypto_shash_descsize(polyval),
+			  sizeof_field(struct hctr2_request_ctx, u.xctr_req) +
+			  crypto_skcipher_reqsize(xctr));
+
+	crypto_skcipher_set_reqsize(tfm, offsetof(struct hctr2_request_ctx, u) +
+				    subreq_size +
+				    crypto_shash_descsize(polyval));
+	return 0;
+
+err_free_blockcipher:
+	crypto_free_cipher(blockcipher);
+err_free_xctr:
+	crypto_free_skcipher(xctr);
+	return err;
+}
+
+static void hctr2_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_cipher(tctx->blockcipher);
+	crypto_free_skcipher(tctx->xctr);
+	crypto_free_shash(tctx->polyval);
+}
+
+static void hctr2_free_instance(struct skcipher_instance *inst)
+{
+	struct hctr2_instance_ctx *ictx = skcipher_instance_ctx(inst);
+
+	crypto_drop_cipher(&ictx->blockcipher_spawn);
+	crypto_drop_skcipher(&ictx->xctr_spawn);
+	crypto_drop_shash(&ictx->polyval_spawn);
+	kfree(inst);
+}
+
+/*
+ * Check for a supported set of inner algorithms.
+ * See the comment at the beginning of this file.
+ */
+static bool hctr2_supported_algorithms(struct skcipher_alg *xctr_alg,
+				       struct crypto_alg *blockcipher_alg,
+				       struct shash_alg *polyval_alg)
+{
+	if (strncmp(xctr_alg->base.cra_name, "xctr(", 5) != 0)
+		return false;
+
+	if (blockcipher_alg->cra_blocksize != BLOCKCIPHER_BLOCK_SIZE)
+		return false;
+
+	if (strcmp(polyval_alg->base.cra_name, "polyval") != 0)
+		return false;
+
+	return true;
+}
+
+static int hctr2_create_common(struct crypto_template *tmpl,
+			       struct rtattr **tb,
+			       const char *xctr_name,
+			       const char *polyval_name)
+{
+	u32 mask;
+	struct skcipher_instance *inst;
+	struct hctr2_instance_ctx *ictx;
+	struct skcipher_alg *xctr_alg;
+	struct crypto_alg *blockcipher_alg;
+	struct shash_alg *polyval_alg;
+	char blockcipher_name[CRYPTO_MAX_ALG_NAME];
+	int len;
+	int err;
+
+	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER, &mask);
+	if (err)
+		return err;
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*ictx), GFP_KERNEL);
+	if (!inst)
+		return -ENOMEM;
+	ictx = skcipher_instance_ctx(inst);
+
+	/* Stream cipher, xctr(block_cipher) */
+	err = crypto_grab_skcipher(&ictx->xctr_spawn,
+				   skcipher_crypto_instance(inst),
+				   xctr_name, 0, mask);
+	if (err)
+		goto err_free_inst;
+	xctr_alg = crypto_spawn_skcipher_alg(&ictx->xctr_spawn);
+
+	err = -EINVAL;
+	if (!strncmp(xctr_alg->base.cra_name, "xctr(", 5)) {
+		len = strscpy(blockcipher_name, xctr_name + 5,
+			      sizeof(blockcipher_name));
+
+		if (len < 1)
+			goto err_free_inst;
+
+		if (blockcipher_name[len - 1] != ')')
+			goto err_free_inst;
+
+		blockcipher_name[len - 1] = 0;
+	} else
+		goto err_free_inst;
+
+	/* Block cipher, e.g. "aes" */
+	err = crypto_grab_cipher(&ictx->blockcipher_spawn,
+				 skcipher_crypto_instance(inst),
+				 blockcipher_name, 0, mask);
+	if (err)
+		goto err_free_inst;
+	blockcipher_alg = crypto_spawn_cipher_alg(&ictx->blockcipher_spawn);
+
+	/* Polyval ε-∆U hash function */
+	err = crypto_grab_shash(&ictx->polyval_spawn,
+				skcipher_crypto_instance(inst),
+				polyval_name, 0, mask);
+	if (err)
+		goto err_free_inst;
+	polyval_alg = crypto_spawn_shash_alg(&ictx->polyval_spawn);
+
+	/* Check the set of algorithms */
+	if (!hctr2_supported_algorithms(xctr_alg, blockcipher_alg,
+					polyval_alg)) {
+		pr_warn("Unsupported HCTR2 instantiation: (%s,%s,%s)\n",
+			xctr_alg->base.cra_name, blockcipher_alg->cra_name,
+			polyval_alg->base.cra_name);
+		err = -EINVAL;
+		goto err_free_inst;
+	}
+
+	/* Instance fields */
+
+	err = -ENAMETOOLONG;
+	if (snprintf(inst->alg.base.cra_name, CRYPTO_MAX_ALG_NAME, "hctr2(%s)",
+		     blockcipher_alg->cra_name) >= CRYPTO_MAX_ALG_NAME)
+		goto err_free_inst;
+	if (snprintf(inst->alg.base.cra_driver_name, CRYPTO_MAX_ALG_NAME,
+		     "hctr2_base(%s,%s)",
+		     xctr_alg->base.cra_driver_name,
+		     polyval_alg->base.cra_driver_name) >= CRYPTO_MAX_ALG_NAME)
+		goto err_free_inst;
+
+	inst->alg.base.cra_blocksize = BLOCKCIPHER_BLOCK_SIZE;
+	inst->alg.base.cra_ctxsize = sizeof(struct hctr2_tfm_ctx) +
+				     polyval_alg->descsize * 2;
+	inst->alg.base.cra_alignmask = xctr_alg->base.cra_alignmask |
+				       polyval_alg->base.cra_alignmask;
+	/*
+	 * The hash function is called twice, so it is weighted higher than the
+	 * xctr and blockcipher.
+	 */
+	inst->alg.base.cra_priority = (2 * xctr_alg->base.cra_priority +
+				       4 * polyval_alg->base.cra_priority +
+				       blockcipher_alg->cra_priority) / 7;
+
+	inst->alg.setkey = hctr2_setkey;
+	inst->alg.encrypt = hctr2_encrypt;
+	inst->alg.decrypt = hctr2_decrypt;
+	inst->alg.init = hctr2_init_tfm;
+	inst->alg.exit = hctr2_exit_tfm;
+	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(xctr_alg);
+	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(xctr_alg);
+	inst->alg.ivsize = TWEAK_SIZE;
+
+	inst->free = hctr2_free_instance;
+
+	err = skcipher_register_instance(tmpl, inst);
+	if (err) {
+err_free_inst:
+		hctr2_free_instance(inst);
+	}
+	return err;
+}
+
+static int hctr2_create_base(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	const char *xctr_name;
+	const char *polyval_name;
+
+	xctr_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(xctr_name))
+		return PTR_ERR(xctr_name);
+
+	polyval_name = crypto_attr_alg_name(tb[2]);
+	if (IS_ERR(polyval_name))
+		return PTR_ERR(polyval_name);
+
+	return hctr2_create_common(tmpl, tb, xctr_name, polyval_name);
+}
+
+static int hctr2_create(struct crypto_template *tmpl, struct rtattr **tb)
+{
+	const char *blockcipher_name;
+	char xctr_name[CRYPTO_MAX_ALG_NAME];
+
+	blockcipher_name = crypto_attr_alg_name(tb[1]);
+	if (IS_ERR(blockcipher_name))
+		return PTR_ERR(blockcipher_name);
+
+	if (snprintf(xctr_name, CRYPTO_MAX_ALG_NAME, "xctr(%s)",
+		    blockcipher_name) >= CRYPTO_MAX_ALG_NAME)
+		return -ENAMETOOLONG;
+
+	return hctr2_create_common(tmpl, tb, xctr_name, "polyval");
+}
+
+/* hctr2(blockcipher_name) */
+/* hctr2_base(xctr_name, polyval_name) */
+static struct crypto_template hctr2_tmpls[] = {
+	{
+		.name = "hctr2_base",
+		.create = hctr2_create_base,
+		.module = THIS_MODULE,
+	}, {
+		.name = "hctr2",
+		.create = hctr2_create,
+		.module = THIS_MODULE,
+	}
+};
+
+static int __init hctr2_module_init(void)
+{
+	return crypto_register_templates(hctr2_tmpls, ARRAY_SIZE(hctr2_tmpls));
+}
+
+static void __exit hctr2_module_exit(void)
+{
+	return crypto_unregister_templates(hctr2_tmpls,
+					   ARRAY_SIZE(hctr2_tmpls));
+}
+
+subsys_initcall(hctr2_module_init);
+module_exit(hctr2_module_exit);
+
+MODULE_DESCRIPTION("HCTR2 length-preserving encryption mode");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS_CRYPTO("hctr2");
+MODULE_IMPORT_NS(CRYPTO_INTERNAL);
diff --git a/crypto/tcrypt.c b/crypto/tcrypt.c
index dd9cf216029b..336598da8eac 100644
--- a/crypto/tcrypt.c
+++ b/crypto/tcrypt.c
@@ -2191,6 +2191,11 @@ static int do_test(const char *alg, u32 type, u32 mask, int m, u32 num_mb)
 				   16, 16, aead_speed_template_19, num_mb);
 		break;
 
+	case 226:
+		test_cipher_speed("hctr2(aes)", ENCRYPT, sec, NULL,
+				  0, speed_template_32);
+		break;
+
 	case 300:
 		if (alg) {
 			test_hash_speed(alg, sec, generic_hash_speed_template);
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index d807b200edf6..3244b7e5aa7e 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5030,6 +5030,14 @@ static const struct alg_test_desc alg_test_descs[] = {
 		.suite = {
 			.hash = __VECS(ghash_tv_template)
 		}
+	}, {
+		.alg = "hctr2(aes)",
+		.generic_driver =
+		    "hctr2_base(xctr(aes-generic),polyval-generic)",
+		.test = alg_test_skcipher,
+		.suite = {
+			.cipher = __VECS(aes_hctr2_tv_template)
+		}
 	}, {
 		.alg = "hmac(md5)",
 		.test = alg_test_hash,
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index c581e5405916..13c70dde2f89 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -35100,4 +35100,676 @@ static const struct hash_testvec polyval_tv_template[] = {
 
 };
 
+/*
+ * Test vectors generated using https://github.com/google/hctr2
+ */
+static const struct cipher_testvec aes_hctr2_tv_template[] = {
+	{
+		.key	= "\xe1\x15\x66\x3c\x8d\xc6\x3a\xff"
+			  "\xef\x41\xd7\x47\xa2\xcc\x8a\xba",
+		.iv	= "\xc3\xbe\x2a\xcb\xb5\x39\x86\xf1"
+			  "\x91\xad\x6c\xf4\xde\x74\x45\x63"
+			  "\x5c\x7a\xd5\xcc\x8b\x76\xef\x0e"
+			  "\xcf\x2c\x60\x69\x37\xfd\x07\x96",
+		.ptext	= "\x65\x75\xae\xd3\xe2\xbc\x43\x5c"
+			  "\xb3\x1a\xd8\x05\xc3\xd0\x56\x29",
+		.ctext	= "\x11\x91\xea\x74\x58\xcc\xd5\xa2"
+			  "\xd0\x55\x9e\x3d\xfe\x7f\xc8\xfe",
+		.klen	= 16,
+		.len	= 16,
+	},
+	{
+		.key	= "\xe7\xd1\x77\x48\x76\x0b\xcd\x34"
+			  "\x2a\x2d\xe7\x74\xca\x11\x9c\xae",
+		.iv	= "\x71\x1c\x49\x62\xd9\x5b\x50\x5e"
+			  "\x68\x87\xbc\xf6\x89\xff\xed\x30"
+			  "\xe4\xe5\xbd\xb6\x10\x4f\x9f\x66"
+			  "\x28\x06\x5a\xf4\x27\x35\xcd\xe5",
+		.ptext	= "\x87\x03\x8f\x06\xa8\x61\x54\xda"
+			  "\x01\x45\xd4\x01\xef\x4a\x22\xcf"
+			  "\x78\x15\x9f\xbd\x64\xbd\x2c\xb9"
+			  "\x40\x1d\x72\xae\x53\x63\xa5",
+		.ctext	= "\x4e\xa1\x05\x27\xb8\x45\xe4\xa1"
+			  "\xbb\x30\xb4\xa6\x12\x74\x63\xd6"
+			  "\x17\xc9\xcc\x2f\x18\x64\xe0\x06"
+			  "\x0a\xa0\xff\x72\x10\x7b\x22",
+		.klen	= 16,
+		.len	= 31,
+	},
+	{
+		.key	= "\x59\x65\x3b\x1d\x43\x5e\xc0\xae"
+			  "\xb8\x9d\x9b\xdd\x22\x03\xbf\xca",
+		.iv	= "\xec\x95\xfa\x5a\xcf\x5e\xd2\x93"
+			  "\xa3\xb5\xe5\xbe\xf3\x01\x7b\x01"
+			  "\xd1\xca\x6c\x06\x82\xf0\xbd\x67"
+			  "\xd9\x6c\xa4\xdc\xb4\x38\x0f\x74",
+		.ptext	= "\x45\xdf\x75\x87\xbc\x72\xce\x55"
+			  "\xc9\xfa\xcb\xfc\x9f\x40\x82\x2b"
+			  "\xc6\x4f\x4f\x5b\x8b\x3b\x6d\x67"
+			  "\xa6\x93\x62\x89\x8c\x19\xf4\xe3"
+			  "\x08\x92\x9c\xc9\x47\x2c\x6e\xd0"
+			  "\xa3\x02\x2b\xdb\x2c\xf2\x8d\x46"
+			  "\xcd\xb0\x9d\x26\x63\x4c\x40\x6b"
+			  "\x79\x43\xe5\xce\x42\xa8\xec\x3b"
+			  "\x5b\xd0\xea\xa4\xe6\xdb\x66\x55"
+			  "\x7a\x76\xec\xab\x7d\x2a\x2b\xbd"
+			  "\xa9\xab\x22\x64\x1a\xa1\xae\x84"
+			  "\x86\x79\x67\xe9\xb2\x50\xbe\x12"
+			  "\x2f\xb2\x14\xf0\xdb\x71\xd8\xa7"
+			  "\x41\x8a\x88\xa0\x6a\x6e\x9d\x2a"
+			  "\xfa\x11\x37\x40\x32\x09\x4c\x47"
+			  "\x41\x07\x31\x85\x3d\xa8\xf7\x64",
+		.ctext	= "\x2d\x4b\x9f\x93\xca\x5a\x48\x26"
+			  "\x01\xcc\x54\xe4\x31\x50\x12\xf0"
+			  "\x49\xff\x59\x42\x68\xbd\x87\x8f"
+			  "\x9e\x62\x96\xcd\xb9\x24\x57\xa4"
+			  "\x0b\x7b\xf5\x2e\x0e\xa8\x65\x07"
+			  "\xab\x05\xd5\xca\xe7\x9c\x6c\x34"
+			  "\x5d\x42\x34\xa4\x62\xe9\x75\x48"
+			  "\x3d\x9e\x8f\xfa\x42\xe9\x75\x08"
+			  "\x4e\x54\x91\x2b\xbd\x11\x0f\x8e"
+			  "\xf0\x82\xf5\x24\xf1\xc4\xfc\xae"
+			  "\x42\x54\x7f\xce\x15\xa8\xb2\x33"
+			  "\xc0\x86\xb6\x2b\xe8\x44\xce\x1f"
+			  "\x68\x57\x66\x94\x6e\xad\xeb\xf3"
+			  "\x30\xf8\x11\xbd\x60\x00\xc6\xd5"
+			  "\x4c\x81\xf1\x20\x2b\x4a\x5b\x99"
+			  "\x79\x3b\xc9\x5c\x74\x23\xe6\x5d",
+		.klen	= 16,
+		.len	= 128,
+	},
+	{
+		.key	= "\x3e\x08\x5d\x64\x6c\x98\xec\xec"
+			  "\x70\x0e\x0d\xa1\x41\x20\x99\x82",
+		.iv	= "\x11\xb7\x77\x91\x0d\x99\xd9\x8d"
+			  "\x35\x3a\xf7\x14\x6b\x09\x37\xe5"
+			  "\xad\x51\xf6\xc3\x96\x4b\x64\x56"
+			  "\xa8\xbd\x81\xcc\xbe\x94\xaf\xe4",
+		.ptext	= "\xff\x8d\xb9\xc0\xe3\x69\xb3\xb2"
+			  "\x8b\x11\x26\xb3\x11\xec\xfb\xb9"
+			  "\x9c\xc1\x71\xd6\xe3\x26\x0e\xe0"
+			  "\x68\x40\x60\xb9\x3a\x63\x56\x8a"
+			  "\x9e\xc1\xf0\x10\xb1\x64\x32\x70"
+			  "\xf8\xcd\xc6\xc4\x49\x4c\xe1\xce"
+			  "\xf3\xe1\x03\xf8\x35\xae\xe0\x5e"
+			  "\xef\x5f\xbc\x41\x75\x26\x13\xcc"
+			  "\x37\x85\xdf\xc0\x5d\xa6\x47\x98"
+			  "\xf1\x97\x52\x58\x04\xe6\xb5\x01"
+			  "\xc0\xb8\x17\x6d\x74\xbd\x9a\xdf"
+			  "\xa4\x37\x94\x86\xb0\x13\x83\x28"
+			  "\xc9\xa2\x07\x3f\xb5\xb2\x72\x40"
+			  "\x0e\x60\xdf\x57\x07\xb7\x2c\x66"
+			  "\x10\x3f\x8d\xdd\x30\x0a\x47\xd5"
+			  "\xe8\x9d\xfb\xa1\xaf\x53\xd7\x05"
+			  "\xc7\xd2\xba\xe7\x2c\xa0\xbf\xb8"
+			  "\xd1\x93\xe7\x41\x82\xa3\x41\x3a"
+			  "\xaf\x12\xd6\xf8\x34\xda\x92\x46"
+			  "\xad\xa2\x2f\xf6\x7e\x46\x96\xd8"
+			  "\x03\xf3\x49\x64\xde\xd8\x06\x8b"
+			  "\xa0\xbc\x63\x35\x38\xb6\x6b\xda"
+			  "\x5b\x50\x3f\x13\xa5\x84\x1b\x1b"
+			  "\x66\x89\x95\xb7\xc2\x16\x3c\xe9"
+			  "\x24\xb0\x8c\x6f\x49\xef\xf7\x28"
+			  "\x6a\x24\xfd\xbe\x25\xe2\xb4\x90"
+			  "\x77\x44\x08\xb8\xda\xd2\xde\x2c"
+			  "\xa0\x57\x45\x57\x29\x47\x6b\x89"
+			  "\x4a\xf6\xa7\x2a\xc3\x9e\x7b\xc8"
+			  "\xfd\x9f\x89\xab\xee\x6d\xa3\xb4"
+			  "\x23\x90\x7a\xe9\x89\xa0\xc7\xb3"
+			  "\x17\x41\x87\x91\xfc\x97\x42",
+		.ctext	= "\xfc\x9b\x96\x66\xc4\x82\x2a\x4a"
+			  "\xb1\x24\xba\xc7\x78\x5f\x79\xc1"
+			  "\x57\x2e\x47\x29\x4d\x7b\xd2\x9a"
+			  "\xbd\xc6\xc1\x26\x7b\x8e\x3f\x5d"
+			  "\xd4\xb4\x9f\x6a\x02\x24\x4a\xad"
+			  "\x0c\x00\x1b\xdf\x92\xc5\x8a\xe1"
+			  "\x77\x79\xcc\xd5\x20\xbf\x83\xf4"
+			  "\x4b\xad\x11\xbf\xdb\x47\x65\x70"
+			  "\x43\xf3\x65\xdf\xb7\xdc\xb2\xb9"
+			  "\xaa\x3f\xb3\xdf\x79\x69\x0d\xa0"
+			  "\x86\x1c\xba\x48\x0b\x01\xc1\x88"
+			  "\xdf\x03\xb1\x06\x3c\x1d\x56\xa1"
+			  "\x8e\x98\xc1\xa6\x95\xa2\x5b\x72"
+			  "\x76\x59\xd2\x26\x25\xcd\xef\x7c"
+			  "\xc9\x60\xea\x43\xd1\x12\x8a\x8a"
+			  "\x63\x12\x78\xcb\x2f\x88\x1e\x88"
+			  "\x78\x59\xde\xba\x4d\x2c\x78\x61"
+			  "\x75\x37\x54\xfd\x80\xc7\x5e\x98"
+			  "\xcf\x14\x62\x8e\xfb\x72\xee\x4d"
+			  "\x9f\xaf\x8b\x09\xe5\x21\x0a\x91"
+			  "\x8f\x88\x87\xd5\xb1\x84\xab\x18"
+			  "\x08\x57\xed\x72\x35\xa6\x0e\xc6"
+			  "\xff\xcb\xfe\x2c\x48\x39\x14\x44"
+			  "\xba\x59\x32\x3a\x2d\xc4\x5f\xcb"
+			  "\xbe\x68\x8e\x7b\xee\x21\xa4\x32"
+			  "\x11\xa0\x99\xfd\x90\xde\x59\x43"
+			  "\xeb\xed\xd5\x87\x68\x46\xc6\xde"
+			  "\x0b\x07\x17\x59\x6a\xab\xca\x15"
+			  "\x65\x02\x01\xb6\x71\x8c\x3b\xaa"
+			  "\x18\x3b\x30\xae\x38\x5b\x2c\x74"
+			  "\xd4\xee\x4a\xfc\xf7\x1b\x09\xd4"
+			  "\xda\x8b\x1d\x5d\x6f\x21\x6c",
+		.klen	= 16,
+		.len	= 255,
+	},
+	{
+		.key	= "\x24\xf6\xe1\x62\xe5\xaf\x99\xda"
+			  "\x84\xec\x41\xb0\xa3\x0b\xd5\xa8"
+			  "\xa0\x3e\x7b\xa6\xdd\x6c\x8f\xa8",
+		.iv	= "\x7f\x80\x24\x62\x32\xdd\xab\x66"
+			  "\xf2\x87\x29\x24\xec\xd2\x4b\x9f"
+			  "\x0c\x33\x52\xd9\xe0\xcc\x6e\xe4"
+			  "\x90\x85\x43\x97\xc4\x62\x14\x33",
+		.ptext	= "\xef\x58\xe7\x7f\xa9\xd9\xb8\xd7"
+			  "\xa2\x91\x97\x07\x27\x9e\xba\xe8"
+			  "\xaa",
+		.ctext	= "\xd7\xc3\x81\x91\xf2\x40\x17\x73"
+			  "\x3e\x3b\x1c\x2a\x8e\x11\x9c\x17"
+			  "\xf1",
+		.klen	= 24,
+		.len	= 17,
+	},
+	{
+		.key	= "\xbf\xaf\xd7\x67\x8c\x47\xcf\x21"
+			  "\x8a\xa5\xdd\x32\x25\x47\xbe\x4f"
+			  "\xf1\x3a\x0b\xa6\xaa\x2d\xcf\x09",
+		.iv	= "\xd9\xe8\xf0\x92\x4e\xfc\x1d\xf2"
+			  "\x81\x37\x7c\x8f\xf1\x59\x09\x20"
+			  "\xf4\x46\x51\x86\x4f\x54\x8b\x32"
+			  "\x58\xd1\x99\x8b\x8c\x03\xeb\x5d",
+		.ptext	= "\xcd\x64\x90\xf9\x7c\xe5\x0e\x5a"
+			  "\x75\xe7\x8e\x39\x86\xec\x20\x43"
+			  "\x8a\x49\x09\x15\x47\xf4\x3c\x89"
+			  "\x21\xeb\xcf\x4e\xcf\x91\xb5\x40"
+			  "\xcd\xe5\x4d\x5c\x6f\xf2\xd2\x80"
+			  "\xfa\xab\xb3\x76\x9f\x7f\x84\x0a",
+		.ctext	= "\x44\x98\x64\x15\xb7\x0b\x80\xa3"
+			  "\xb9\xca\x23\xff\x3b\x0b\x68\x74"
+			  "\xbb\x3e\x20\x19\x9f\x28\x71\x2a"
+			  "\x48\x3c\x7c\xe2\xef\xb5\x10\xac"
+			  "\x82\x9f\xcd\x08\x8f\x6b\x16\x6f"
+			  "\xc3\xbb\x07\xfb\x3c\xb0\x1b\x27",
+		.klen	= 24,
+		.len	= 48,
+	},
+	{
+		.key	= "\xb8\x35\xa2\x5f\x86\xbb\x82\x99"
+			  "\x27\xeb\x01\x3f\x92\xaf\x80\x24"
+			  "\x4c\x66\xa2\x89\xff\x2e\xa2\x25",
+		.iv	= "\x0a\x1d\x96\xd3\xe0\xe8\x0c\x9b"
+			  "\x9d\x6f\x21\x97\xc2\x17\xdb\x39"
+			  "\x3f\xd8\x64\x48\x80\x04\xee\x43"
+			  "\x02\xce\x88\xe2\x81\x81\x5f\x81",
+		.ptext	= "\xb8\xf9\x16\x8b\x25\x68\xd0\x9c"
+			  "\xd2\x28\xac\xa8\x79\xc2\x30\xc1"
+			  "\x31\xde\x1c\x37\x1b\xa2\xb5\xe6"
+			  "\xf0\xd0\xf8\x9c\x7f\xc6\x46\x07"
+			  "\x5c\xc3\x06\xe4\xf0\x02\xec\xf8"
+			  "\x59\x7c\xc2\x5d\xf8\x0c\x21\xae"
+			  "\x9e\x82\xb1\x1a\x5f\x78\x44\x15"
+			  "\x00\xa7\x2e\x52\xc5\x98\x98\x35"
+			  "\x03\xae\xd0\x8e\x07\x57\xe2\x5a"
+			  "\x17\xbf\x52\x40\x54\x5b\x74\xe5"
+			  "\x2d\x35\xaf\x9e\x37\xf7\x7e\x4a"
+			  "\x8c\x9e\xa1\xdc\x40\xb4\x5b\x36"
+			  "\xdc\x3a\x68\xe6\xb7\x35\x0b\x8a"
+			  "\x90\xec\x74\x8f\x09\x9a\x7f\x02"
+			  "\x4d\x03\x46\x35\x62\xb1\xbd\x08"
+			  "\x3f\x54\x2a\x10\x0b\xdc\x69\xaf"
+			  "\x25\x3a\x0c\x5f\xe0\x51\xe7\x11"
+			  "\xb7\x00\xab\xbb\x9a\xb0\xdc\x4d"
+			  "\xc3\x7d\x1a\x6e\xd1\x09\x52\xbd"
+			  "\x6b\x43\x55\x22\x3a\x78\x14\x7d"
+			  "\x79\xfd\x8d\xfc\x9b\x1d\x0f\xa2"
+			  "\xc7\xb9\xf8\x87\xd5\x96\x50\x61"
+			  "\xa7\x5e\x1e\x57\x97\xe0\xad\x2f"
+			  "\x93\xe6\xe8\x83\xec\x85\x26\x5e"
+			  "\xd9\x2a\x15\xe0\xe9\x09\x25\xa1"
+			  "\x77\x2b\x88\xdc\xa4\xa5\x48\xb6"
+			  "\xf7\xcc\xa6\xa9\xba\xf3\x42\x5c"
+			  "\x70\x9d\xe9\x29\xc1\xf1\x33\xdd"
+			  "\x56\x48\x17\x86\x14\x51\x5c\x10"
+			  "\xab\xfd\xd3\x26\x8c\x21\xf5\x93"
+			  "\x1b\xeb\x47\x97\x73\xbb\x88\x10"
+			  "\xf3\xfe\xf5\xde\xf3\x2e\x05\x46"
+			  "\x1c\x0d\xa3\x10\x48\x9c\x71\x16"
+			  "\x78\x33\x4d\x0a\x74\x3b\xe9\x34"
+			  "\x0b\xa7\x0e\x9e\x61\xe9\xe9\xfd"
+			  "\x85\xa0\xcb\x19\xfd\x7c\x33\xe3"
+			  "\x0e\xce\xc2\x6f\x9d\xa4\x2d\x77"
+			  "\xfd\xad\xee\x5e\x08\x3e\xd7\xf5"
+			  "\xfb\xc3\xd7\x93\x96\x08\x96\xca"
+			  "\x58\x81\x16\x9b\x98\x0a\xe2\xef"
+			  "\x7f\xda\x40\xe4\x1f\x46\x9e\x67"
+			  "\x2b\x84\xcb\x42\xc4\xd6\x6a\xcf"
+			  "\x2d\xb2\x33\xc0\x56\xb3\x35\x6f"
+			  "\x29\x36\x8f\x6a\x5b\xec\xd5\x4f"
+			  "\xa0\x70\xff\xb6\x5b\xde\x6a\x93"
+			  "\x20\x3c\xe2\x76\x7a\xef\x3c\x79"
+			  "\x31\x65\xce\x3a\x0e\xd0\xbe\xa8"
+			  "\x21\x95\xc7\x2b\x62\x8e\x67\xdd"
+			  "\x20\x79\xe4\xe5\x01\x15\xc0\xec"
+			  "\x0f\xd9\x23\xc8\xca\xdf\xd4\x7d"
+			  "\x1d\xf8\x64\x4f\x56\xb1\x83\xa7"
+			  "\x43\xbe\xfc\xcf\xc2\x8c\x33\xda"
+			  "\x36\xd0\x52\xef\x9e\x9e\x88\xf4"
+			  "\xa8\x21\x0f\xaa\xee\x8d\xa0\x24"
+			  "\x4d\xcb\xb1\x72\x07\xf0\xc2\x06"
+			  "\x60\x65\x85\x84\x2c\x60\xcf\x61"
+			  "\xe7\x56\x43\x5b\x2b\x50\x74\xfa"
+			  "\xdb\x4e\xea\x88\xd4\xb3\x83\x8f"
+			  "\x6f\x97\x4b\x57\x7a\x64\x64\xae"
+			  "\x0a\x37\x66\xc5\x03\xad\xb5\xf9"
+			  "\x08\xb0\x3a\x74\xde\x97\x51\xff"
+			  "\x48\x4f\x5c\xa4\xf8\x7a\xb4\x05"
+			  "\x27\x70\x52\x86\x1b\x78\xfc\x18"
+			  "\x06\x27\xa9\x62\xf7\xda\xd2\x8e",
+		.ctext	= "\x3b\xe1\xdb\xb3\xc5\x9a\xde\x69"
+			  "\x58\x05\xcc\xeb\x02\x51\x78\x4a"
+			  "\xac\x28\xe9\xed\xd1\xc9\x15\x7d"
+			  "\x33\x7d\xc1\x47\x12\x41\x11\xf8"
+			  "\x4a\x2c\xb7\xa3\x41\xbe\x59\xf7"
+			  "\x22\xdb\x2c\xda\x9c\x00\x61\x9b"
+			  "\x73\xb3\x0b\x84\x2b\xc1\xf3\x80"
+			  "\x84\xeb\x19\x60\x80\x09\xe1\xcd"
+			  "\x16\x3a\x20\x23\xc4\x82\x4f\xba"
+			  "\x3b\x8e\x55\xd7\xa9\x0b\x75\xd0"
+			  "\xda\xce\xd2\xee\x7e\x4b\x7f\x65"
+			  "\x4d\x28\xc5\xd3\x15\x2c\x40\x96"
+			  "\x52\xd4\x18\x61\x2b\xe7\x83\xec"
+			  "\x89\x62\x9c\x4c\x50\xe6\xe2\xbb"
+			  "\x25\xa1\x0f\xa7\xb0\xb4\xb2\xde"
+			  "\x54\x20\xae\xa3\x56\xa5\x26\x4c"
+			  "\xd5\xcc\xe5\xcb\x28\x44\xb1\xef"
+			  "\x67\x2e\x93\x6d\x00\x88\x83\x9a"
+			  "\xf2\x1c\x48\x38\xec\x1a\x24\x90"
+			  "\x73\x0a\xdb\xe8\xce\x95\x7a\x2c"
+			  "\x8c\xe9\xb7\x07\x1d\xb3\xa3\x20"
+			  "\xbe\xad\x61\x84\xac\xde\x76\xb5"
+			  "\xa6\x28\x29\x47\x63\xc4\xfc\x13"
+			  "\x3f\x71\xfb\x58\x37\x34\x82\xed"
+			  "\x9e\x05\x19\x1f\xc1\x67\xc1\xab"
+			  "\xf5\xfd\x7c\xea\xfa\xa4\xf8\x0a"
+			  "\xac\x4c\x92\xdf\x65\x73\xd7\xdb"
+			  "\xed\x2c\xe0\x84\x5f\x57\x8c\x76"
+			  "\x3e\x05\xc0\xc3\x68\x96\x95\x0b"
+			  "\x88\x97\xfe\x2e\x99\xd5\xc2\xb9"
+			  "\x53\x9f\xf3\x32\x10\x1f\x1f\x5d"
+			  "\xdf\x21\x95\x70\x91\xe8\xa1\x3e"
+			  "\x19\x3e\xb6\x0b\xa8\xdb\xf8\xd4"
+			  "\x54\x27\xb8\xab\x5d\x78\x0c\xe6"
+			  "\xb7\x08\xee\xa4\xb6\x6b\xeb\x5a"
+			  "\x89\x69\x2b\xbd\xd4\x21\x5b\xbf"
+			  "\x79\xbb\x0f\xff\xdb\x23\x9a\xeb"
+			  "\x8d\xf2\xc4\x39\xb4\x90\x77\x6f"
+			  "\x68\xe2\xb8\xf3\xf1\x65\x4f\xd5"
+			  "\x24\x80\x06\xaf\x7c\x8d\x15\x0c"
+			  "\xfd\x56\xe5\xe3\x01\xa5\xf7\x1c"
+			  "\x31\xd6\xa2\x01\x1e\x59\xf9\xa9"
+			  "\x42\xd5\xc2\x34\xda\x25\xde\xc6"
+			  "\x5d\x38\xef\xd1\x4c\xc1\xd9\x1b"
+			  "\x98\xfd\xcd\x57\x6f\xfd\x46\x91"
+			  "\x90\x3d\x52\x2b\x2c\x7d\xcf\x71"
+			  "\xcf\xd1\x77\x23\x71\x36\xb1\xce"
+			  "\xc7\x5d\xf0\x5b\x44\x3d\x43\x71"
+			  "\xac\xb8\xa0\x6a\xea\x89\x5c\xff"
+			  "\x81\x73\xd4\x83\xd1\xc9\xe9\xe2"
+			  "\xa8\xa6\x0f\x36\xe6\xaa\x57\xd4"
+			  "\x27\xd2\xc9\xda\x94\x02\x1f\xfb"
+			  "\xe1\xa1\x07\xbe\xe1\x1b\x15\x94"
+			  "\x1e\xac\x2f\x57\xbb\x41\x22\xaf"
+			  "\x60\x5e\xcc\x66\xcb\x16\x62\xab"
+			  "\xb8\x7c\x99\xf4\x84\x93\x0c\xc2"
+			  "\xa2\x49\xe4\xfd\x17\x55\xe1\xa6"
+			  "\x8d\x5b\xc6\x1b\xc8\xac\xec\x11"
+			  "\x33\xcf\xb0\xe8\xc7\x28\x4f\xb2"
+			  "\x5c\xa6\xe2\x71\xab\x80\x0a\xa7"
+			  "\x5c\x59\x50\x9f\x7a\x32\xb7\xe5"
+			  "\x24\x9a\x8e\x25\x21\x2e\xb7\x18"
+			  "\xd0\xf2\xe7\x27\x6f\xda\xc1\x00"
+			  "\xd9\xa6\x03\x59\xac\x4b\xcb\xba",
+		.klen	= 24,
+		.len	= 512,
+	},
+	{
+		.key	= "\x9e\xeb\xb2\x49\x3c\x1c\xf5\xf4"
+			  "\x6a\x99\xc2\xc4\xdf\xb1\xf4\xdd"
+			  "\x75\x20\x57\xea\x2c\x4f\xcd\xb2"
+			  "\xa5\x3d\x7b\x49\x1e\xab\xfd\x0f",
+		.iv	= "\xdf\x63\xd4\xab\xd2\x49\xf3\xd8"
+			  "\x33\x81\x37\x60\x7d\xfa\x73\x08"
+			  "\xd8\x49\x6d\x80\xe8\x2f\x62\x54"
+			  "\xeb\x0e\xa9\x39\x5b\x45\x7f\x8a",
+		.ptext	= "\x67\xc9\xf2\x30\x84\x41\x8e\x43"
+			  "\xfb\xf3\xb3\x3e\x79\x36\x7f\xe8",
+		.ctext	= "\x27\x38\x78\x47\x16\xd9\x71\x35"
+			  "\x2e\x7e\xdd\x7e\x43\x3c\xb8\x40",
+		.klen	= 32,
+		.len	= 16,
+	},
+	{
+		.key	= "\x93\xfa\x7e\xe2\x0e\x67\xc4\x39"
+			  "\xe7\xca\x47\x95\x68\x9d\x5e\x5a"
+			  "\x7c\x26\x19\xab\xc6\xca\x6a\x4c"
+			  "\x45\xa6\x96\x42\xae\x6c\xff\xe7",
+		.iv	= "\xea\x82\x47\x95\x3b\x22\xa1\x3a"
+			  "\x6a\xca\x24\x4c\x50\x7e\x23\xcd"
+			  "\x0e\x50\xe5\x41\xb6\x65\x29\xd8"
+			  "\x30\x23\x00\xd2\x54\xa7\xd6\x56",
+		.ptext	= "\xdb\x1f\x1f\xec\xad\x83\x6e\x5d"
+			  "\x19\xa5\xf6\x3b\xb4\x93\x5a\x57"
+			  "\x6f",
+		.ctext	= "\xf1\x46\x6e\x9d\xb3\x01\xf0\x6b"
+			  "\xc2\xac\x57\x88\x48\x6d\x40\x72"
+			  "\x68",
+		.klen	= 32,
+		.len	= 17,
+	},
+	{
+		.key	= "\x36\x2b\x57\x97\xf8\x5d\xcd\x99"
+			  "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27"
+			  "\xcc\x16\xd7\x2b\x85\x63\x99\xd3"
+			  "\xba\x96\xa1\xdb\xd2\x60\x68\xda",
+		.iv	= "\xef\x58\x69\xb1\x2c\x5e\x9a\x47"
+			  "\x24\xc1\xb1\x69\xe1\x12\x93\x8f"
+			  "\x43\x3d\x6d\x00\xdb\x5e\xd8\xd9"
+			  "\x12\x9a\xfe\xd9\xff\x2d\xaa\xc4",
+		.ptext	= "\x5e\xa8\x68\x19\x85\x98\x12\x23"
+			  "\x26\x0a\xcc\xdb\x0a\x04\xb9\xdf"
+			  "\x4d\xb3\x48\x7b\xb0\xe3\xc8\x19"
+			  "\x43\x5a\x46\x06\x94\x2d\xf2",
+		.ctext	= "\xdb\xfd\xc8\x03\xd0\xec\xc1\xfe"
+			  "\xbd\x64\x37\xb8\x82\x43\x62\x4e"
+			  "\x7e\x54\xa3\xe2\x24\xa7\x27\xe8"
+			  "\xa4\xd5\xb3\x6c\xb2\x26\xb4",
+		.klen	= 32,
+		.len	= 31,
+	},
+	{
+		.key	= "\x03\x65\x03\x6e\x4d\xe6\xe8\x4e"
+			  "\x8b\xbe\x22\x19\x48\x31\xee\xd9"
+			  "\xa0\x91\x21\xbe\x62\x89\xde\x78"
+			  "\xd9\xb0\x36\xa3\x3c\xce\x43\xd5",
+		.iv	= "\xa9\xc3\x4b\xe7\x0f\xfc\x6d\xbf"
+			  "\x56\x27\x21\x1c\xfc\xd6\x04\x10"
+			  "\x5f\x43\xe2\x30\x35\x29\x6c\x10"
+			  "\x90\xf1\xbf\x61\xed\x0f\x8a\x91",
+		.ptext	= "\x07\xaa\x02\x26\xb4\x98\x11\x5e"
+			  "\x33\x41\x21\x51\x51\x63\x2c\x72"
+			  "\x00\xab\x32\xa7\x1c\xc8\x3c\x9c"
+			  "\x25\x0e\x8b\x9a\xdf\x85\xed\x2d"
+			  "\xf4\xf2\xbc\x55\xca\x92\x6d\x22"
+			  "\xfd\x22\x3b\x42\x4c\x0b\x74\xec",
+		.ctext	= "\x7b\xb1\x43\x6d\xd8\x72\x6c\xf6"
+			  "\x67\x6a\x00\xc4\xf1\xf0\xf5\xa4"
+			  "\xfc\x60\x91\xab\x46\x0b\x15\xfc"
+			  "\xd7\xc1\x28\x15\xa1\xfc\xf7\x68"
+			  "\x8e\xcc\x27\x62\x00\x64\x56\x72"
+			  "\xa6\x17\xd7\x3f\x67\x80\x10\x58",
+		.klen	= 32,
+		.len	= 48,
+	},
+	{
+		.key	= "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7"
+			  "\x05\x91\x8f\xee\x85\x1f\x35\x7f"
+			  "\x80\x3d\xfc\x9b\x94\xf6\xfc\x9e"
+			  "\x19\x09\x00\xa9\x04\x31\x4f\x11",
+		.iv	= "\xa1\xba\x49\x95\xff\x34\x6d\xb8"
+			  "\xcd\x87\x5d\x5e\xfd\xea\x85\xdb"
+			  "\x8a\x7b\x5e\xb2\x5d\x57\xdd\x62"
+			  "\xac\xa9\x8c\x41\x42\x94\x75\xb7",
+		.ptext	= "\x69\xb4\xe8\x8c\x37\xe8\x67\x82"
+			  "\xf1\xec\x5d\x04\xe5\x14\x91\x13"
+			  "\xdf\xf2\x87\x1b\x69\x81\x1d\x71"
+			  "\x70\x9e\x9c\x3b\xde\x49\x70\x11"
+			  "\xa0\xa3\xdb\x0d\x54\x4f\x66\x69"
+			  "\xd7\xdb\x80\xa7\x70\x92\x68\xce"
+			  "\x81\x04\x2c\xc6\xab\xae\xe5\x60"
+			  "\x15\xe9\x6f\xef\xaa\x8f\xa7\xa7"
+			  "\x63\x8f\xf2\xf0\x77\xf1\xa8\xea"
+			  "\xe1\xb7\x1f\x9e\xab\x9e\x4b\x3f"
+			  "\x07\x87\x5b\x6f\xcd\xa8\xaf\xb9"
+			  "\xfa\x70\x0b\x52\xb8\xa8\xa7\x9e"
+			  "\x07\x5f\xa6\x0e\xb3\x9b\x79\x13"
+			  "\x79\xc3\x3e\x8d\x1c\x2c\x68\xc8"
+			  "\x51\x1d\x3c\x7b\x7d\x79\x77\x2a"
+			  "\x56\x65\xc5\x54\x23\x28\xb0\x03",
+		.ctext	= "\xeb\xf9\x98\x86\x3c\x40\x9f\x16"
+			  "\x84\x01\xf9\x06\x0f\xeb\x3c\xa9"
+			  "\x4c\xa4\x8e\x5d\xc3\x8d\xe5\xd3"
+			  "\xae\xa6\xe6\xcc\xd6\x2d\x37\x4f"
+			  "\x99\xc8\xa3\x21\x46\xb8\x69\xf2"
+			  "\xe3\x14\x89\xd7\xb9\xf5\x9e\x4e"
+			  "\x07\x93\x6f\x78\x8e\x6b\xea\x8f"
+			  "\xfb\x43\xb8\x3e\x9b\x4c\x1d\x7e"
+			  "\x20\x9a\xc5\x87\xee\xaf\xf6\xf9"
+			  "\x46\xc5\x18\x8a\xe8\x69\xe7\x96"
+			  "\x52\x55\x5f\x00\x1e\x1a\xdc\xcc"
+			  "\x13\xa5\xee\xff\x4b\x27\xca\xdc"
+			  "\x10\xa6\x48\x76\x98\x43\x94\xa3"
+			  "\xc7\xe2\xc9\x65\x9b\x08\x14\x26"
+			  "\x1d\x68\xfb\x15\x0a\x33\x49\x84"
+			  "\x84\x33\x5a\x1b\x24\x46\x31\x92",
+		.klen	= 32,
+		.len	= 128,
+	},
+	{
+		.key	= "\x36\x45\x11\xa2\x98\x5f\x96\x7c"
+			  "\xc6\xb4\x94\x31\x0a\x67\x09\x32"
+			  "\x6c\x6f\x6f\x00\xf0\x17\xcb\xac"
+			  "\xa5\xa9\x47\x9e\x2e\x85\x2f\xfa",
+		.iv	= "\x28\x88\xaa\x9b\x59\x3b\x1e\x97"
+			  "\x82\xe5\x5c\x9e\x6d\x14\x11\x19"
+			  "\x6e\x38\x8f\xd5\x40\x2b\xca\xf9"
+			  "\x7b\x4c\xe4\xa3\xd0\xd2\x8a\x13",
+		.ptext	= "\x95\xd2\xf7\x71\x1b\xca\xa5\x86"
+			  "\xd9\x48\x01\x93\x2f\x79\x55\x29"
+			  "\x71\x13\x15\x0e\xe6\x12\xbc\x4d"
+			  "\x8a\x31\xe3\x40\x2a\xc6\x5e\x0d"
+			  "\x68\xbb\x4a\x62\x8d\xc7\x45\x77"
+			  "\xd2\xb8\xc7\x1d\xf1\xd2\x5d\x97"
+			  "\xcf\xac\x52\xe5\x32\x77\xb6\xda"
+			  "\x30\x85\xcf\x2b\x98\xe9\xaa\x34"
+			  "\x62\xb5\x23\x9e\xb7\xa6\xd4\xe0"
+			  "\xb4\x58\x18\x8c\x4d\xde\x4d\x01"
+			  "\x83\x89\x24\xca\xfb\x11\xd4\x82"
+			  "\x30\x7a\x81\x35\xa0\xb4\xd4\xb6"
+			  "\x84\xea\x47\x91\x8c\x19\x86\x25"
+			  "\xa6\x06\x8d\x78\xe6\xed\x87\xeb"
+			  "\xda\xea\x73\x7c\xbf\x66\xb8\x72"
+			  "\xe3\x0a\xb8\x0c\xcb\x1a\x73\xf1"
+			  "\xa7\xca\x0a\xde\x57\x2b\xbd\x2b"
+			  "\xeb\x8b\x24\x38\x22\xd3\x0e\x1f"
+			  "\x17\xa0\x84\x98\x31\x77\xfd\x34"
+			  "\x6a\x4e\x3d\x84\x4c\x0e\xfb\xed"
+			  "\xc8\x2a\x51\xfa\xd8\x73\x21\x8a"
+			  "\xdb\xb5\xfe\x1f\xee\xc4\xe8\x65"
+			  "\x54\x84\xdd\x96\x6d\xfd\xd3\x31"
+			  "\x77\x36\x52\x6b\x80\x4f\x9e\xb4"
+			  "\xa2\x55\xbf\x66\x41\x49\x4e\x87"
+			  "\xa7\x0c\xca\xe7\xa5\xc5\xf6\x6f"
+			  "\x27\x56\xe2\x48\x22\xdd\x5f\x59"
+			  "\x3c\xf1\x9f\x83\xe5\x2d\xfb\x71"
+			  "\xad\xd1\xae\x1b\x20\x5c\x47\xb7"
+			  "\x3b\xd3\x14\xce\x81\x42\xb1\x0a"
+			  "\xf0\x49\xfa\xc2\xe7\x86\xbf\xcd"
+			  "\xb0\x95\x9f\x8f\x79\x41\x54",
+		.ctext	= "\xf6\x57\x51\xc4\x25\x61\x2d\xfa"
+			  "\xd6\xd9\x3f\x9a\x81\x51\xdd\x8e"
+			  "\x3d\xe7\xaa\x2d\xb1\xda\xc8\xa6"
+			  "\x9d\xaa\x3c\xab\x62\xf2\x80\xc3"
+			  "\x2c\xe7\x58\x72\x1d\x44\xc5\x28"
+			  "\x7f\xb4\xf9\xbc\x9c\xb2\xab\x8e"
+			  "\xfa\xd1\x4d\x72\xd9\x79\xf5\xa0"
+			  "\x24\x3e\x90\x25\x31\x14\x38\x45"
+			  "\x59\xc8\xf6\xe2\xc6\xf6\xc1\xa7"
+			  "\xb2\xf8\xa7\xa9\x2b\x6f\x12\x3a"
+			  "\xb0\x81\xa4\x08\x57\x59\xb1\x56"
+			  "\x4c\x8f\x18\x55\x33\x5f\xd6\x6a"
+			  "\xc6\xa0\x4b\xd6\x6b\x64\x3e\x9e"
+			  "\xfd\x66\x16\xe2\xdb\xeb\x5f\xb3"
+			  "\x50\x50\x3e\xde\x8d\x72\x76\x01"
+			  "\xbe\xcc\xc9\x52\x09\x2d\x8d\xe7"
+			  "\xd6\xc3\x66\xdb\x36\x08\xd1\x77"
+			  "\xc8\x73\x46\x26\x24\x29\xbf\x68"
+			  "\x2d\x2a\x99\x43\x56\x55\xe4\x93"
+			  "\xaf\xae\x4d\xe7\x55\x4a\xc0\x45"
+			  "\x26\xeb\x3b\x12\x90\x7c\xdc\xd1"
+			  "\xd5\x6f\x0a\xd0\xa9\xd7\x4b\x89"
+			  "\x0b\x07\xd8\x86\xad\xa1\xc4\x69"
+			  "\x1f\x5e\x8b\xc4\x9e\x91\x41\x25"
+			  "\x56\x98\x69\x78\x3a\x9e\xae\x91"
+			  "\xd8\xd9\xfa\xfb\xff\x81\x25\x09"
+			  "\xfc\xed\x2d\x87\xbc\x04\x62\x97"
+			  "\x35\xe1\x26\xc2\x46\x1c\xcf\xd7"
+			  "\x14\xed\x02\x09\xa5\xb2\xb6\xaa"
+			  "\x27\x4e\x61\xb3\x71\x6b\x47\x16"
+			  "\xb7\xe8\xd4\xaf\x52\xeb\x6a\x6b"
+			  "\xdb\x4c\x65\x21\x9e\x1c\x36",
+		.klen	= 32,
+		.len	= 255,
+	},
+	{
+		.key	= "\xd3\x81\x72\x18\x23\xff\x6f\x4a"
+			  "\x25\x74\x29\x0d\x51\x8a\x0e\x13"
+			  "\xc1\x53\x5d\x30\x8d\xee\x75\x0d"
+			  "\x14\xd6\x69\xc9\x15\xa9\x0c\x60",
+		.iv	= "\x65\x9b\xd4\xa8\x7d\x29\x1d\xf4"
+			  "\xc4\xd6\x9b\x6a\x28\xab\x64\xe2"
+			  "\x62\x81\x97\xc5\x81\xaa\xf9\x44"
+			  "\xc1\x72\x59\x82\xaf\x16\xc8\x2c",
+		.ptext	= "\xc7\x6b\x52\x6a\x10\xf0\xcc\x09"
+			  "\xc1\x12\x1d\x6d\x21\xa6\x78\xf5"
+			  "\x05\xa3\x69\x60\x91\x36\x98\x57"
+			  "\xba\x0c\x14\xcc\xf3\x2d\x73\x03"
+			  "\xc6\xb2\x5f\xc8\x16\x27\x37\x5d"
+			  "\xd0\x0b\x87\xb2\x50\x94\x7b\x58"
+			  "\x04\xf4\xe0\x7f\x6e\x57\x8e\xc9"
+			  "\x41\x84\xc1\xb1\x7e\x4b\x91\x12"
+			  "\x3a\x8b\x5d\x50\x82\x7b\xcb\xd9"
+			  "\x9a\xd9\x4e\x18\x06\x23\x9e\xd4"
+			  "\xa5\x20\x98\xef\xb5\xda\xe5\xc0"
+			  "\x8a\x6a\x83\x77\x15\x84\x1e\xae"
+			  "\x78\x94\x9d\xdf\xb7\xd1\xea\x67"
+			  "\xaa\xb0\x14\x15\xfa\x67\x21\x84"
+			  "\xd3\x41\x2a\xce\xba\x4b\x4a\xe8"
+			  "\x95\x62\xa9\x55\xf0\x80\xad\xbd"
+			  "\xab\xaf\xdd\x4f\xa5\x7c\x13\x36"
+			  "\xed\x5e\x4f\x72\xad\x4b\xf1\xd0"
+			  "\x88\x4e\xec\x2c\x88\x10\x5e\xea"
+			  "\x12\xc0\x16\x01\x29\xa3\xa0\x55"
+			  "\xaa\x68\xf3\xe9\x9d\x3b\x0d\x3b"
+			  "\x6d\xec\xf8\xa0\x2d\xf0\x90\x8d"
+			  "\x1c\xe2\x88\xd4\x24\x71\xf9\xb3"
+			  "\xc1\x9f\xc5\xd6\x76\x70\xc5\x2e"
+			  "\x9c\xac\xdb\x90\xbd\x83\x72\xba"
+			  "\x6e\xb5\xa5\x53\x83\xa9\xa5\xbf"
+			  "\x7d\x06\x0e\x3c\x2a\xd2\x04\xb5"
+			  "\x1e\x19\x38\x09\x16\xd2\x82\x1f"
+			  "\x75\x18\x56\xb8\x96\x0b\xa6\xf9"
+			  "\xcf\x62\xd9\x32\x5d\xa9\xd7\x1d"
+			  "\xec\xe4\xdf\x1b\xbe\xf1\x36\xee"
+			  "\xe3\x7b\xb5\x2f\xee\xf8\x53\x3d"
+			  "\x6a\xb7\x70\xa9\xfc\x9c\x57\x25"
+			  "\xf2\x89\x10\xd3\xb8\xa8\x8c\x30"
+			  "\xae\x23\x4f\x0e\x13\x66\x4f\xe1"
+			  "\xb6\xc0\xe4\xf8\xef\x93\xbd\x6e"
+			  "\x15\x85\x6b\xe3\x60\x81\x1d\x68"
+			  "\xd7\x31\x87\x89\x09\xab\xd5\x96"
+			  "\x1d\xf3\x6d\x67\x80\xca\x07\x31"
+			  "\x5d\xa7\xe4\xfb\x3e\xf2\x9b\x33"
+			  "\x52\x18\xc8\x30\xfe\x2d\xca\x1e"
+			  "\x79\x92\x7a\x60\x5c\xb6\x58\x87"
+			  "\xa4\x36\xa2\x67\x92\x8b\xa4\xb7"
+			  "\xf1\x86\xdf\xdc\xc0\x7e\x8f\x63"
+			  "\xd2\xa2\xdc\x78\xeb\x4f\xd8\x96"
+			  "\x47\xca\xb8\x91\xf9\xf7\x94\x21"
+			  "\x5f\x9a\x9f\x5b\xb8\x40\x41\x4b"
+			  "\x66\x69\x6a\x72\xd0\xcb\x70\xb7"
+			  "\x93\xb5\x37\x96\x05\x37\x4f\xe5"
+			  "\x8c\xa7\x5a\x4e\x8b\xb7\x84\xea"
+			  "\xc7\xfc\x19\x6e\x1f\x5a\xa1\xac"
+			  "\x18\x7d\x52\x3b\xb3\x34\x62\x99"
+			  "\xe4\x9e\x31\x04\x3f\xc0\x8d\x84"
+			  "\x17\x7c\x25\x48\x52\x67\x11\x27"
+			  "\x67\xbb\x5a\x85\xca\x56\xb2\x5c"
+			  "\xe6\xec\xd5\x96\x3d\x15\xfc\xfb"
+			  "\x22\x25\xf4\x13\xe5\x93\x4b\x9a"
+			  "\x77\xf1\x52\x18\xfa\x16\x5e\x49"
+			  "\x03\x45\xa8\x08\xfa\xb3\x41\x92"
+			  "\x79\x50\x33\xca\xd0\xd7\x42\x55"
+			  "\xc3\x9a\x0c\x4e\xd9\xa4\x3c\x86"
+			  "\x80\x9f\x53\xd1\xa4\x2e\xd1\xbc"
+			  "\xf1\x54\x6e\x93\xa4\x65\x99\x8e"
+			  "\xdf\x29\xc0\x64\x63\x07\xbb\xea",
+		.ctext	= "\x9f\x72\x87\xc7\x17\xfb\x20\x15"
+			  "\x65\xb3\x55\xa8\x1c\x8e\x52\x32"
+			  "\xb1\x82\x8d\xbf\xb5\x9f\x10\x0a"
+			  "\xe8\x0c\x70\x62\xef\x89\xb6\x1f"
+			  "\x73\xcc\xe4\xcc\x7a\x3a\x75\x4a"
+			  "\x26\xe7\xf5\xd7\x7b\x17\x39\x2d"
+			  "\xd2\x27\x6e\xf9\x2f\x9e\xe2\xf6"
+			  "\xfa\x16\xc2\xf2\x49\x26\xa7\x5b"
+			  "\xe7\xca\x25\x0e\x45\xa0\x34\xc2"
+			  "\x9a\x37\x79\x7e\x7c\x58\x18\x94"
+			  "\x10\xa8\x7c\x48\xa9\xd7\x63\x89"
+			  "\x9e\x61\x4d\x26\x34\xd9\xf0\xb1"
+			  "\x2d\x17\x2c\x6f\x7c\x35\x0e\xbe"
+			  "\x77\x71\x7c\x17\x5b\xab\x70\xdb"
+			  "\x2f\x54\x0f\xa9\xc8\xf4\xf5\xab"
+			  "\x52\x04\x3a\xb8\x03\xa7\xfd\x57"
+			  "\x45\x5e\xbc\x77\xe1\xee\x79\x8c"
+			  "\x58\x7b\x1f\xf7\x75\xde\x68\x17"
+			  "\x98\x85\x8a\x18\x5c\xd2\x39\x78"
+			  "\x7a\x6f\x26\x6e\xe1\x13\x91\xdd"
+			  "\xdf\x0e\x6e\x67\xcc\x51\x53\xd8"
+			  "\x17\x5e\xce\xa7\xe4\xaf\xfa\xf3"
+			  "\x4f\x9f\x01\x9b\x04\xe7\xfc\xf9"
+			  "\x6a\xdc\x1d\x0c\x9a\xaa\x3a\x7a"
+			  "\x73\x03\xdf\xbf\x3b\x82\xbe\xb0"
+			  "\xb4\xa4\xcf\x07\xd7\xde\x71\x25"
+			  "\xc5\x10\xee\x0a\x15\x96\x8b\x4f"
+			  "\xfe\xb8\x28\xbd\x4a\xcd\xeb\x9f"
+			  "\x5d\x00\xc1\xee\xe8\x16\x44\xec"
+			  "\xe9\x7b\xd6\x85\x17\x29\xcf\x58"
+			  "\x20\xab\xf7\xce\x6b\xe7\x71\x7d"
+			  "\x4f\xa8\xb0\xe9\x7d\x70\xd6\x0b"
+			  "\x2e\x20\xb1\x1a\x63\x37\xaa\x2c"
+			  "\x94\xee\xd5\xf6\x58\x2a\xf4\x7a"
+			  "\x4c\xba\xf5\xe9\x3c\x6f\x95\x13"
+			  "\x5f\x96\x81\x5b\xb5\x62\xf2\xd7"
+			  "\x8d\xbe\xa1\x31\x51\xe6\xfe\xc9"
+			  "\x07\x7d\x0f\x00\x3a\x66\x8c\x4b"
+			  "\x94\xaa\xe5\x56\xde\xcd\x74\xa7"
+			  "\x48\x67\x6f\xed\xc9\x6a\xef\xaf"
+			  "\x9a\xb7\xae\x60\xfa\xc0\x37\x39"
+			  "\xa5\x25\xe5\x22\xea\x82\x55\x68"
+			  "\x3e\x30\xc3\x5a\xb6\x29\x73\x7a"
+			  "\xb6\xfb\x34\xee\x51\x7c\x54\xe5"
+			  "\x01\x4d\x72\x25\x32\x4a\xa3\x68"
+			  "\x80\x9a\x89\xc5\x11\x66\x4c\x8c"
+			  "\x44\x50\xbe\xd7\xa0\xee\xa6\xbb"
+			  "\x92\x0c\xe6\xd7\x83\x51\xb1\x69"
+			  "\x63\x40\xf3\xf4\x92\x84\xc4\x38"
+			  "\x29\xfb\xb4\x84\xa0\x19\x75\x16"
+			  "\x60\xbf\x0a\x9c\x89\xee\xad\xb4"
+			  "\x43\xf9\x71\x39\x45\x7c\x24\x83"
+			  "\x30\xbb\xee\x28\xb0\x86\x7b\xec"
+			  "\x93\xc1\xbf\xb9\x97\x1b\x96\xef"
+			  "\xee\x58\x35\x61\x12\x19\xda\x25"
+			  "\x77\xe5\x80\x1a\x31\x27\x9b\xe4"
+			  "\xda\x8b\x7e\x51\x4d\xcb\x01\x19"
+			  "\x4f\xdc\x92\x1a\x17\xd5\x6b\xf4"
+			  "\x50\xe3\x06\xe4\x76\x9f\x65\x00"
+			  "\xbd\x7a\xe2\x64\x26\xf2\xe4\x7e"
+			  "\x40\xf2\x80\xab\x62\xd5\xef\x23"
+			  "\x8b\xfb\x6f\x24\x6e\x9b\x66\x0e"
+			  "\xf4\x1c\x24\x1e\x1d\x26\x95\x09"
+			  "\x94\x3c\xb2\xb6\x02\xa7\xd9\x9a",
+		.klen	= 32,
+		.len	= 512,
+	},
+
+};
+
 #endif	/* _CRYPTO_TESTMGR_H */
-- 
2.35.1.723.g4982287a31-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v3 4/8] crypto: x86/aesni-xctr: Add accelerated implementation of XCTR
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
                   ` (2 preceding siblings ...)
  2022-03-15 23:00 ` [PATCH v3 3/8] crypto: hctr2 - Add HCTR2 support Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 5/8] crypto: arm64/aes-xctr: " Nathan Huckleberry
                   ` (3 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add hardware accelerated versions of XCTR for x86-64 CPUs with AESNI
support.  These implementations are modified versions of the CTR
implementations found in aesni-intel_asm.S and aes_ctrby8_avx-x86_64.S.

More information on XCTR can be found in the HCTR2 paper:
Length-preserving encryption with HCTR2:
https://eprint.iacr.org/2021/1441.pdf
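
To make the assembly below easier to follow, here is a scalar sketch (an
illustration, not kernel code) of the per-block cipher input that XCTR uses:
instead of big-endian incrementing the IV itself as CTR does, XCTR XORs the
unchanged IV with a little-endian block counter, so no byte swapping is
needed.

#include <stdint.h>
#include <string.h>

/* Compute the block cipher input for XCTR block number block_num (0-based). */
static void xctr_block_input(const uint8_t iv[16], uint64_t block_num,
			     uint8_t out[16])
{
	uint64_t ctr = block_num + 1;	/* XCTR's counter starts at 1 */
	int i;

	memcpy(out, iv, 16);
	for (i = 0; i < 8; i++)
		out[i] ^= (uint8_t)(ctr >> (8 * i));	/* IV ^ le128(ctr) */
	/* the upper 8 bytes of the counter encoding are zero here */
}

This appears to be what the do_aes XCTR path below computes per lane: the
saved block counter plus the ddq_add_i constant, XORed with the IV kept in
xiv, with no vpshufb byteswap.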

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 arch/x86/crypto/aes_ctrby8_avx-x86_64.S | 233 ++++++++++++++++--------
 arch/x86/crypto/aesni-intel_asm.S       |  70 +++++++
 arch/x86/crypto/aesni-intel_glue.c      |  89 +++++++++
 crypto/Kconfig                          |   2 +-
 4 files changed, 317 insertions(+), 77 deletions(-)

diff --git a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
index 43852ba6e19c..9e20d7d3d6da 100644
--- a/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
+++ b/arch/x86/crypto/aes_ctrby8_avx-x86_64.S
@@ -53,6 +53,10 @@
 #define KEY_192		2
 #define KEY_256		3
 
+// XCTR mode only
+#define counter		%r9
+#define xiv		%xmm8
+
 .section .rodata
 .align 16
 
@@ -102,38 +106,67 @@ ddq_add_8:
  * do_aes num_in_par load_keys key_len
  * This increments p_in, but not p_out
  */
-.macro do_aes b, k, key_len
+.macro do_aes b, k, key_len, xctr
 	.set by, \b
 	.set load_keys, \k
 	.set klen, \key_len
 
+	.if (\xctr == 1)
+		.set i, 0
+		.rept (by)
+			club XDATA, i
+			movq counter, var_xdata
+			.set i, (i +1)
+		.endr
+	.endif
+
 	.if (load_keys)
 		vmovdqa	0*16(p_keys), xkey0
 	.endif
 
-	vpshufb	xbyteswap, xcounter, xdata0
-
-	.set i, 1
-	.rept (by - 1)
-		club XDATA, i
-		vpaddq	(ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
-		vptest	ddq_low_msk(%rip), var_xdata
-		jnz 1f
-		vpaddq	ddq_high_add_1(%rip), var_xdata, var_xdata
-		vpaddq	ddq_high_add_1(%rip), xcounter, xcounter
-		1:
-		vpshufb	xbyteswap, var_xdata, var_xdata
-		.set i, (i +1)
-	.endr
+	.if (\xctr == 0)
+		vpshufb	xbyteswap, xcounter, xdata0
+		.set i, 1
+		.rept (by - 1)
+			club XDATA, i
+			vpaddq	(ddq_add_1 + 16 * (i - 1))(%rip), xcounter, var_xdata
+			vptest	ddq_low_msk(%rip), var_xdata
+			jnz 1f
+			vpaddq	ddq_high_add_1(%rip), var_xdata, var_xdata
+			vpaddq	ddq_high_add_1(%rip), xcounter, xcounter
+			1:
+			vpshufb	xbyteswap, var_xdata, var_xdata
+			.set i, (i +1)
+		.endr
+	.endif
+	.if (\xctr == 1)
+		.set i, 0
+		.rept (by)
+			club XDATA, i
+			vpaddq	(ddq_add_1 + 16 * i)(%rip), var_xdata, var_xdata
+			.set i, (i +1)
+		.endr
+		.set i, 0
+		.rept (by)
+			club	XDATA, i
+			vpxor	xiv, var_xdata, var_xdata
+			.set i, (i +1)
+		.endr
+	.endif
 
 	vmovdqa	1*16(p_keys), xkeyA
 
 	vpxor	xkey0, xdata0, xdata0
-	vpaddq	(ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
-	vptest	ddq_low_msk(%rip), xcounter
-	jnz	1f
-	vpaddq	ddq_high_add_1(%rip), xcounter, xcounter
-	1:
+	.if (\xctr == 0)
+		vpaddq	(ddq_add_1 + 16 * (by - 1))(%rip), xcounter, xcounter
+		vptest	ddq_low_msk(%rip), xcounter
+		jnz	1f
+		vpaddq	ddq_high_add_1(%rip), xcounter, xcounter
+		1:
+	.endif
+	.if (\xctr == 1)
+		add $by, counter
+	.endif
 
 	.set i, 1
 	.rept (by - 1)
@@ -371,94 +404,101 @@ ddq_add_8:
 	.endr
 .endm
 
-.macro do_aes_load val, key_len
-	do_aes \val, 1, \key_len
+.macro do_aes_load val, key_len, xctr
+	do_aes \val, 1, \key_len, \xctr
 .endm
 
-.macro do_aes_noload val, key_len
-	do_aes \val, 0, \key_len
+.macro do_aes_noload val, key_len, xctr
+	do_aes \val, 0, \key_len, \xctr
 .endm
 
 /* main body of aes ctr load */
 
-.macro do_aes_ctrmain key_len
+.macro do_aes_ctrmain key_len, xctr
 	cmp	$16, num_bytes
-	jb	.Ldo_return2\key_len
+	jb	.Ldo_return2\xctr\key_len
 
 	vmovdqa	byteswap_const(%rip), xbyteswap
-	vmovdqu	(p_iv), xcounter
-	vpshufb	xbyteswap, xcounter, xcounter
+	.if (\xctr == 0)
+		vmovdqu	(p_iv), xcounter
+		vpshufb	xbyteswap, xcounter, xcounter
+	.endif
+	.if (\xctr == 1)
+		andq	$(~0xf), num_bytes
+		shr	$4, counter
+		vmovdqu	(p_iv), xiv
+	.endif
 
 	mov	num_bytes, tmp
 	and	$(7*16), tmp
-	jz	.Lmult_of_8_blks\key_len
+	jz	.Lmult_of_8_blks\xctr\key_len
 
 	/* 1 <= tmp <= 7 */
 	cmp	$(4*16), tmp
-	jg	.Lgt4\key_len
-	je	.Leq4\key_len
+	jg	.Lgt4\xctr\key_len
+	je	.Leq4\xctr\key_len
 
-.Llt4\key_len:
+.Llt4\xctr\key_len:
 	cmp	$(2*16), tmp
-	jg	.Leq3\key_len
-	je	.Leq2\key_len
+	jg	.Leq3\xctr\key_len
+	je	.Leq2\xctr\key_len
 
-.Leq1\key_len:
-	do_aes_load	1, \key_len
+.Leq1\xctr\key_len:
+	do_aes_load	1, \key_len, \xctr
 	add	$(1*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Leq2\key_len:
-	do_aes_load	2, \key_len
+.Leq2\xctr\key_len:
+	do_aes_load	2, \key_len, \xctr
 	add	$(2*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
 
-.Leq3\key_len:
-	do_aes_load	3, \key_len
+.Leq3\xctr\key_len:
+	do_aes_load	3, \key_len, \xctr
 	add	$(3*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Leq4\key_len:
-	do_aes_load	4, \key_len
+.Leq4\xctr\key_len:
+	do_aes_load	4, \key_len, \xctr
 	add	$(4*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Lgt4\key_len:
+.Lgt4\xctr\key_len:
 	cmp	$(6*16), tmp
-	jg	.Leq7\key_len
-	je	.Leq6\key_len
+	jg	.Leq7\xctr\key_len
+	je	.Leq6\xctr\key_len
 
-.Leq5\key_len:
-	do_aes_load	5, \key_len
+.Leq5\xctr\key_len:
+	do_aes_load	5, \key_len, \xctr
 	add	$(5*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Leq6\key_len:
-	do_aes_load	6, \key_len
+.Leq6\xctr\key_len:
+	do_aes_load	6, \key_len, \xctr
 	add	$(6*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Leq7\key_len:
-	do_aes_load	7, \key_len
+.Leq7\xctr\key_len:
+	do_aes_load	7, \key_len, \xctr
 	add	$(7*16), p_out
 	and	$(~7*16), num_bytes
-	jz	.Ldo_return2\key_len
-	jmp	.Lmain_loop2\key_len
+	jz	.Ldo_return2\xctr\key_len
+	jmp	.Lmain_loop2\xctr\key_len
 
-.Lmult_of_8_blks\key_len:
+.Lmult_of_8_blks\xctr\key_len:
 	.if (\key_len != KEY_128)
 		vmovdqa	0*16(p_keys), xkey0
 		vmovdqa	4*16(p_keys), xkey4
@@ -471,17 +511,19 @@ ddq_add_8:
 		vmovdqa	9*16(p_keys), xkey12
 	.endif
 .align 16
-.Lmain_loop2\key_len:
+.Lmain_loop2\xctr\key_len:
 	/* num_bytes is a multiple of 8 and >0 */
-	do_aes_noload	8, \key_len
+	do_aes_noload	8, \key_len, \xctr
 	add	$(8*16), p_out
 	sub	$(8*16), num_bytes
-	jne	.Lmain_loop2\key_len
+	jne	.Lmain_loop2\xctr\key_len
 
-.Ldo_return2\key_len:
-	/* return updated IV */
-	vpshufb	xbyteswap, xcounter, xcounter
-	vmovdqu	xcounter, (p_iv)
+.Ldo_return2\xctr\key_len:
+	.if (\xctr == 0)
+		/* return updated IV */
+		vpshufb	xbyteswap, xcounter, xcounter
+		vmovdqu	xcounter, (p_iv)
+	.endif
 	RET
 .endm
 
@@ -494,7 +536,7 @@ ddq_add_8:
  */
 SYM_FUNC_START(aes_ctr_enc_128_avx_by8)
 	/* call the aes main loop */
-	do_aes_ctrmain KEY_128
+	do_aes_ctrmain KEY_128 0
 
 SYM_FUNC_END(aes_ctr_enc_128_avx_by8)
 
@@ -507,7 +549,7 @@ SYM_FUNC_END(aes_ctr_enc_128_avx_by8)
  */
 SYM_FUNC_START(aes_ctr_enc_192_avx_by8)
 	/* call the aes main loop */
-	do_aes_ctrmain KEY_192
+	do_aes_ctrmain KEY_192 0
 
 SYM_FUNC_END(aes_ctr_enc_192_avx_by8)
 
@@ -520,6 +562,45 @@ SYM_FUNC_END(aes_ctr_enc_192_avx_by8)
  */
 SYM_FUNC_START(aes_ctr_enc_256_avx_by8)
 	/* call the aes main loop */
-	do_aes_ctrmain KEY_256
+	do_aes_ctrmain KEY_256 0
 
 SYM_FUNC_END(aes_ctr_enc_256_avx_by8)
+
+/*
+ * routine to do AES128 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level
+ * aes_xctr_enc_128_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8
+ * 			    *out, unsigned int num_bytes, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aes_xctr_enc_128_avx_by8)
+	/* call the aes main loop */
+	do_aes_ctrmain KEY_128 1
+
+SYM_FUNC_END(aes_xctr_enc_128_avx_by8)
+
+/*
+ * routine to do AES192 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level
+ * aes_xctr_enc_192_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8
+ * 			    *out, unsigned int num_bytes, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aes_xctr_enc_192_avx_by8)
+	/* call the aes main loop */
+	do_aes_ctrmain KEY_192 1
+
+SYM_FUNC_END(aes_xctr_enc_192_avx_by8)
+
+/*
+ * routine to do AES256 XCTR enc/decrypt "by8"
+ * XMM registers are clobbered.
+ * Saving/restoring must be done at a higher level
+ * aes_xctr_enc_256_avx_by8(const u8 *in, const u8 *iv, const aes_ctx *keys, u8
+ * 			    *out, unsigned int num_bytes, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aes_xctr_enc_256_avx_by8)
+	/* call the aes main loop */
+	do_aes_ctrmain KEY_256 1
+
+SYM_FUNC_END(aes_xctr_enc_256_avx_by8)
diff --git a/arch/x86/crypto/aesni-intel_asm.S b/arch/x86/crypto/aesni-intel_asm.S
index 363699dd7220..ce17fe630150 100644
--- a/arch/x86/crypto/aesni-intel_asm.S
+++ b/arch/x86/crypto/aesni-intel_asm.S
@@ -2821,6 +2821,76 @@ SYM_FUNC_END(aesni_ctr_enc)
 
 #endif
 
+#ifdef __x86_64__
+/*
+ * void aesni_xctr_enc(struct crypto_aes_ctx *ctx, u8 *dst, const u8 *src,
+ *		      unsigned int len, u8 *iv, unsigned int byte_ctr)
+ */
+SYM_FUNC_START(aesni_xctr_enc)
+	FRAME_BEGIN
+	cmp $16, LEN
+	jb .Lxctr_ret
+	shr	$4, %arg6
+	movq %arg6, CTR
+	mov 480(KEYP), KLEN
+	movups (IVP), IV
+	cmp $64, LEN
+	jb .Lxctr_enc_loop1
+.align 4
+.Lxctr_enc_loop4:
+	movaps IV, STATE1
+	vpaddq ONE(%rip), CTR, CTR
+	vpxor CTR, STATE1, STATE1
+	movups (INP), IN1
+	movaps IV, STATE2
+	vpaddq ONE(%rip), CTR, CTR
+	vpxor CTR, STATE2, STATE2
+	movups 0x10(INP), IN2
+	movaps IV, STATE3
+	vpaddq ONE(%rip), CTR, CTR
+	vpxor CTR, STATE3, STATE3
+	movups 0x20(INP), IN3
+	movaps IV, STATE4
+	vpaddq ONE(%rip), CTR, CTR
+	vpxor CTR, STATE4, STATE4
+	movups 0x30(INP), IN4
+	call _aesni_enc4
+	pxor IN1, STATE1
+	movups STATE1, (OUTP)
+	pxor IN2, STATE2
+	movups STATE2, 0x10(OUTP)
+	pxor IN3, STATE3
+	movups STATE3, 0x20(OUTP)
+	pxor IN4, STATE4
+	movups STATE4, 0x30(OUTP)
+	sub $64, LEN
+	add $64, INP
+	add $64, OUTP
+	cmp $64, LEN
+	jge .Lxctr_enc_loop4
+	cmp $16, LEN
+	jb .Lxctr_ret
+.align 4
+.Lxctr_enc_loop1:
+	movaps IV, STATE
+	vpaddq ONE(%rip), CTR, CTR
+	vpxor CTR, STATE1, STATE1
+	movups (INP), IN
+	call _aesni_enc1
+	pxor IN, STATE
+	movups STATE, (OUTP)
+	sub $16, LEN
+	add $16, INP
+	add $16, OUTP
+	cmp $16, LEN
+	jge .Lxctr_enc_loop1
+.Lxctr_ret:
+	FRAME_END
+	RET
+SYM_FUNC_END(aesni_xctr_enc)
+
+#endif
+
 .section	.rodata.cst16.gf128mul_x_ble_mask, "aM", @progbits, 16
 .align 16
 .Lgf128mul_x_ble_mask:
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 41901ba9d3a2..74021bd524b6 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -112,6 +112,11 @@ asmlinkage void aesni_ctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
 			      const u8 *in, unsigned int len, u8 *iv);
 DEFINE_STATIC_CALL(aesni_ctr_enc_tfm, aesni_ctr_enc);
 
+asmlinkage void aesni_xctr_enc(struct crypto_aes_ctx *ctx, u8 *out,
+			       const u8 *in, unsigned int len, u8 *iv,
+			       unsigned int byte_ctr);
+DEFINE_STATIC_CALL(aesni_xctr_enc_tfm, aesni_xctr_enc);
+
 /* Scatter / Gather routines, with args similar to above */
 asmlinkage void aesni_gcm_init(void *ctx,
 			       struct gcm_context_data *gdata,
@@ -135,6 +140,16 @@ asmlinkage void aes_ctr_enc_192_avx_by8(const u8 *in, u8 *iv,
 		void *keys, u8 *out, unsigned int num_bytes);
 asmlinkage void aes_ctr_enc_256_avx_by8(const u8 *in, u8 *iv,
 		void *keys, u8 *out, unsigned int num_bytes);
+
+asmlinkage void aes_xctr_enc_128_avx_by8(const u8 *in, u8 *iv, void *keys, u8
+	*out, unsigned int num_bytes, unsigned int byte_ctr);
+
+asmlinkage void aes_xctr_enc_192_avx_by8(const u8 *in, u8 *iv, void *keys, u8
+	*out, unsigned int num_bytes, unsigned int byte_ctr);
+
+asmlinkage void aes_xctr_enc_256_avx_by8(const u8 *in, u8 *iv, void *keys, u8
+	*out, unsigned int num_bytes, unsigned int byte_ctr);
+
 /*
  * asmlinkage void aesni_gcm_init_avx_gen2()
  * gcm_data *my_ctx_data, context data
@@ -527,6 +542,61 @@ static int ctr_crypt(struct skcipher_request *req)
 	return err;
 }
 
+static void aesni_xctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out, const u8
+				   *in, unsigned int len, u8 *iv, unsigned int
+				   byte_ctr)
+{
+	if (ctx->key_length == AES_KEYSIZE_128)
+		aes_xctr_enc_128_avx_by8(in, iv, (void *)ctx, out, len,
+					 byte_ctr);
+	else if (ctx->key_length == AES_KEYSIZE_192)
+		aes_xctr_enc_192_avx_by8(in, iv, (void *)ctx, out, len,
+					 byte_ctr);
+	else
+		aes_xctr_enc_256_avx_by8(in, iv, (void *)ctx, out, len,
+					 byte_ctr);
+}
+
+static int xctr_crypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = aes_ctx(crypto_skcipher_ctx(tfm));
+	u8 keystream[AES_BLOCK_SIZE];
+	u8 ctr[AES_BLOCK_SIZE];
+	struct skcipher_walk walk;
+	unsigned int nbytes;
+	unsigned int byte_ctr = 0;
+	int err;
+	__le32 ctr32;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	while ((nbytes = walk.nbytes) > 0) {
+		kernel_fpu_begin();
+		if (nbytes & AES_BLOCK_MASK)
+			static_call(aesni_xctr_enc_tfm)(ctx, walk.dst.virt.addr,
+				walk.src.virt.addr, nbytes & AES_BLOCK_MASK,
+				walk.iv, byte_ctr);
+		nbytes &= ~AES_BLOCK_MASK;
+		byte_ctr += walk.nbytes - nbytes;
+
+		if (walk.nbytes == walk.total && nbytes > 0) {
+			ctr32 = cpu_to_le32(byte_ctr / AES_BLOCK_SIZE + 1);
+			memcpy(ctr, walk.iv, AES_BLOCK_SIZE);
+			crypto_xor(ctr, (u8 *)&ctr32, sizeof(ctr32));
+			aesni_enc(ctx, keystream, ctr);
+			crypto_xor_cpy(walk.dst.virt.addr + walk.nbytes -
+				       nbytes, walk.src.virt.addr + walk.nbytes
+				       - nbytes, keystream, nbytes);
+			byte_ctr += nbytes;
+			nbytes = 0;
+		}
+		kernel_fpu_end();
+		err = skcipher_walk_done(&walk, nbytes);
+	}
+	return err;
+}
+
 static int
 rfc4106_set_hash_subkey(u8 *hash_subkey, const u8 *key, unsigned int key_len)
 {
@@ -1026,6 +1096,23 @@ static struct skcipher_alg aesni_skciphers[] = {
 		.setkey		= aesni_skcipher_setkey,
 		.encrypt	= ctr_crypt,
 		.decrypt	= ctr_crypt,
+	}, {
+		.base = {
+			.cra_name		= "__xctr(aes)",
+			.cra_driver_name	= "__xctr-aes-aesni",
+			.cra_priority		= 400,
+			.cra_flags		= CRYPTO_ALG_INTERNAL,
+			.cra_blocksize		= 1,
+			.cra_ctxsize		= CRYPTO_AES_CTX_SIZE,
+			.cra_module		= THIS_MODULE,
+		},
+		.min_keysize	= AES_MIN_KEY_SIZE,
+		.max_keysize	= AES_MAX_KEY_SIZE,
+		.ivsize		= AES_BLOCK_SIZE,
+		.chunksize	= AES_BLOCK_SIZE,
+		.setkey		= aesni_skcipher_setkey,
+		.encrypt	= xctr_crypt,
+		.decrypt	= xctr_crypt,
 #endif
 	}, {
 		.base = {
@@ -1162,6 +1249,8 @@ static int __init aesni_init(void)
 		/* optimize performance of ctr mode encryption transform */
 		static_call_update(aesni_ctr_enc_tfm, aesni_ctr_enc_avx_tfm);
 		pr_info("AES CTR mode by8 optimization enabled\n");
+		static_call_update(aesni_xctr_enc_tfm, aesni_xctr_enc_avx_tfm);
+		pr_info("AES XCTR mode by8 optimization enabled\n");
 	}
 #endif
 
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 0dedba74db4a..aa06af0e0ebe 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1161,7 +1161,7 @@ config CRYPTO_AES_NI_INTEL
 	  In addition to AES cipher algorithm support, the acceleration
 	  for some popular block cipher mode is supported too, including
 	  ECB, CBC, LRW, XTS. The 64 bit version has additional
-	  acceleration for CTR.
+	  acceleration for CTR and XCTR.
 
 config CRYPTO_AES_SPARC64
 	tristate "AES cipher algorithms (SPARC64)"
-- 
2.35.1.723.g4982287a31-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v3 5/8] crypto: arm64/aes-xctr: Add accelerated implementation of XCTR
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
                   ` (3 preceding siblings ...)
  2022-03-15 23:00 ` [PATCH v3 4/8] crypto: x86/aesni-xctr: Add accelerated implementation of XCTR Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 6/8] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL Nathan Huckleberry
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add hardware accelerated version of XCTR for ARM64 CPUs with ARMv8
Crypto Extension support.  This XCTR implementation is based on the CTR
implementation in aes-modes.S.

More information on XCTR can be found in the HCTR2 paper,
"Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
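
For reference, XCTR derives keystream block i (counting from 1) as
E_K(IV ^ le128(i)) instead of CTR's big-endian increment of the IV, so the
counter is mixed in with a cheap XOR on little-endian machines.  A minimal C
sketch of that counter handling is below; aes_encrypt_block() is only a
stand-in for a single-block AES encryption, not an existing kernel API:

#include <stdint.h>
#include <string.h>

/* Placeholder for one AES block encryption; not a real kernel API. */
void aes_encrypt_block(const void *key, const uint8_t in[16], uint8_t out[16]);

/* XCTR keystream block i (1-based): E_K(IV ^ le128(i)). */
static void xctr_keystream_block(const void *key, const uint8_t iv[16],
				 uint64_t i, uint8_t out[16])
{
	uint8_t blk[16];
	int b;

	memcpy(blk, iv, 16);
	/* XOR the little-endian encoding of i into the low bytes of the IV. */
	for (b = 0; b < 8; b++)
		blk[b] ^= (uint8_t)(i >> (8 * b));
	aes_encrypt_block(key, blk, out);
}

In the assembly below, the same block index is derived from the byte_ctr
argument shifted right by 4.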

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 arch/arm64/crypto/Kconfig     |   4 +-
 arch/arm64/crypto/aes-glue.c  |  65 ++++++++++++++++-
 arch/arm64/crypto/aes-modes.S | 134 ++++++++++++++++++++++++++++++++++
 3 files changed, 199 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 2a965aa0188d..897f9a4b5b67 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -84,13 +84,13 @@ config CRYPTO_AES_ARM64_CE_CCM
 	select CRYPTO_LIB_AES
 
 config CRYPTO_AES_ARM64_CE_BLK
-	tristate "AES in ECB/CBC/CTR/XTS modes using ARMv8 Crypto Extensions"
+	tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using ARMv8 Crypto Extensions"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_SKCIPHER
 	select CRYPTO_AES_ARM64_CE
 
 config CRYPTO_AES_ARM64_NEON_BLK
-	tristate "AES in ECB/CBC/CTR/XTS modes using NEON instructions"
+	tristate "AES in ECB/CBC/CTR/XTS/XCTR modes using NEON instructions"
 	depends on KERNEL_MODE_NEON
 	select CRYPTO_SKCIPHER
 	select CRYPTO_LIB_AES
diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 561dd2332571..06ebd466cf7c 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -34,10 +34,11 @@
 #define aes_essiv_cbc_encrypt	ce_aes_essiv_cbc_encrypt
 #define aes_essiv_cbc_decrypt	ce_aes_essiv_cbc_decrypt
 #define aes_ctr_encrypt		ce_aes_ctr_encrypt
+#define aes_xctr_encrypt	ce_aes_xctr_encrypt
 #define aes_xts_encrypt		ce_aes_xts_encrypt
 #define aes_xts_decrypt		ce_aes_xts_decrypt
 #define aes_mac_update		ce_aes_mac_update
-MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
+MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS/XCTR using ARMv8 Crypto Extensions");
 #else
 #define MODE			"neon"
 #define PRIO			200
@@ -50,16 +51,18 @@ MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 Crypto Extensions");
 #define aes_essiv_cbc_encrypt	neon_aes_essiv_cbc_encrypt
 #define aes_essiv_cbc_decrypt	neon_aes_essiv_cbc_decrypt
 #define aes_ctr_encrypt		neon_aes_ctr_encrypt
+#define aes_xctr_encrypt	neon_aes_xctr_encrypt
 #define aes_xts_encrypt		neon_aes_xts_encrypt
 #define aes_xts_decrypt		neon_aes_xts_decrypt
 #define aes_mac_update		neon_aes_mac_update
-MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS using ARMv8 NEON");
+MODULE_DESCRIPTION("AES-ECB/CBC/CTR/XTS/XCTR using ARMv8 NEON");
 #endif
 #if defined(USE_V8_CRYPTO_EXTENSIONS) || !IS_ENABLED(CONFIG_CRYPTO_AES_ARM64_BS)
 MODULE_ALIAS_CRYPTO("ecb(aes)");
 MODULE_ALIAS_CRYPTO("cbc(aes)");
 MODULE_ALIAS_CRYPTO("ctr(aes)");
 MODULE_ALIAS_CRYPTO("xts(aes)");
+MODULE_ALIAS_CRYPTO("xctr(aes)");
 #endif
 MODULE_ALIAS_CRYPTO("cts(cbc(aes))");
 MODULE_ALIAS_CRYPTO("essiv(cbc(aes),sha256)");
@@ -89,6 +92,10 @@ asmlinkage void aes_cbc_cts_decrypt(u8 out[], u8 const in[], u32 const rk[],
 asmlinkage void aes_ctr_encrypt(u8 out[], u8 const in[], u32 const rk[],
 				int rounds, int bytes, u8 ctr[]);
 
+asmlinkage void aes_xctr_encrypt(u8 out[], u8 const in[], u32 const rk[],
+				 int rounds, int bytes, u8 ctr[], u8 finalbuf[],
+				 int byte_ctr);
+
 asmlinkage void aes_xts_encrypt(u8 out[], u8 const in[], u32 const rk1[],
 				int rounds, int bytes, u32 const rk2[], u8 iv[],
 				int first);
@@ -442,6 +449,44 @@ static int __maybe_unused essiv_cbc_decrypt(struct skcipher_request *req)
 	return err ?: cbc_decrypt_walk(req, &walk);
 }
 
+static int __maybe_unused xctr_encrypt(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+	int err, rounds = 6 + ctx->key_length / 4;
+	struct skcipher_walk walk;
+	unsigned int byte_ctr = 0;
+
+	err = skcipher_walk_virt(&walk, req, false);
+
+	while (walk.nbytes > 0) {
+		const u8 *src = walk.src.virt.addr;
+		unsigned int nbytes = walk.nbytes;
+		u8 *dst = walk.dst.virt.addr;
+		u8 buf[AES_BLOCK_SIZE];
+
+		if (unlikely(nbytes < AES_BLOCK_SIZE))
+			src = dst = memcpy(buf + sizeof(buf) - nbytes,
+					   src, nbytes);
+		else if (nbytes < walk.total)
+			nbytes &= ~(AES_BLOCK_SIZE - 1);
+
+		kernel_neon_begin();
+		aes_xctr_encrypt(dst, src, ctx->key_enc, rounds, nbytes,
+						 walk.iv, buf, byte_ctr);
+		kernel_neon_end();
+
+		if (unlikely(nbytes < AES_BLOCK_SIZE))
+			memcpy(walk.dst.virt.addr,
+			       buf + sizeof(buf) - nbytes, nbytes);
+		byte_ctr += nbytes;
+
+		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
+	}
+
+	return err;
+}
+
 static int __maybe_unused ctr_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -669,6 +714,22 @@ static struct skcipher_alg aes_algs[] = { {
 	.setkey		= skcipher_aes_setkey,
 	.encrypt	= ctr_encrypt,
 	.decrypt	= ctr_encrypt,
+}, {
+	.base = {
+		.cra_name		= "xctr(aes)",
+		.cra_driver_name	= "xctr-aes-" MODE,
+		.cra_priority		= PRIO,
+		.cra_blocksize		= 1,
+		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
+		.cra_module		= THIS_MODULE,
+	},
+	.min_keysize	= AES_MIN_KEY_SIZE,
+	.max_keysize	= AES_MAX_KEY_SIZE,
+	.ivsize		= AES_BLOCK_SIZE,
+	.chunksize	= AES_BLOCK_SIZE,
+	.setkey		= skcipher_aes_setkey,
+	.encrypt	= xctr_encrypt,
+	.decrypt	= xctr_encrypt,
 }, {
 	.base = {
 		.cra_name		= "xts(aes)",
diff --git a/arch/arm64/crypto/aes-modes.S b/arch/arm64/crypto/aes-modes.S
index dc35eb0245c5..ac37e2f7ca84 100644
--- a/arch/arm64/crypto/aes-modes.S
+++ b/arch/arm64/crypto/aes-modes.S
@@ -479,6 +479,140 @@ ST5(	mov		v3.16b, v4.16b			)
 	b		.Lctrout
 AES_FUNC_END(aes_ctr_encrypt)
 
+	/*
+	 * aes_xctr_encrypt(u8 out[], u8 const in[], u8 const rk[], int rounds,
+	 *		   int bytes, u8 const ctr[], u8 finalbuf[], int
+	 *		   byte_ctr)
+	 */
+
+AES_FUNC_START(aes_xctr_encrypt)
+	stp		x29, x30, [sp, #-16]!
+	mov		x29, sp
+
+	enc_prepare	w3, x2, x12
+	ld1		{vctr.16b}, [x5]
+
+	umov		x12, vctr.d[0]		/* keep ctr in reg */
+	lsr		x7, x7, #4
+	add		x11, x7, #1
+
+.LxctrloopNx:
+	add		w7, w4, #15
+	sub		w4, w4, #MAX_STRIDE << 4
+	lsr		w7, w7, #4
+	mov		w8, #MAX_STRIDE
+	cmp		w7, w8
+	csel		w7, w7, w8, lt
+	add		x11, x11, x7
+
+	mov		v0.16b, vctr.16b
+	mov		v1.16b, vctr.16b
+	mov		v2.16b, vctr.16b
+	mov		v3.16b, vctr.16b
+ST5(	mov		v4.16b, vctr.16b		)
+
+	sub		x7, x11, #MAX_STRIDE
+	eor		x7, x12, x7
+	ins		v0.d[0], x7
+	sub		x7, x11, #MAX_STRIDE - 1
+	sub		x8, x11, #MAX_STRIDE - 2
+	eor		x7, x7, x12
+	sub		x9, x11, #MAX_STRIDE - 3
+	mov		v1.d[0], x7
+	eor		x8, x8, x12
+	eor		x9, x9, x12
+ST5(	sub		x10, x11, #MAX_STRIDE - 4)
+	mov		v2.d[0], x8
+	eor		x10, x10, x12
+	mov		v3.d[0], x9
+ST5(	mov		v4.d[0], x10			)
+	tbnz		w4, #31, .Lxctrtail
+	ld1		{v5.16b-v7.16b}, [x1], #48
+ST4(	bl		aes_encrypt_block4x		)
+ST5(	bl		aes_encrypt_block5x		)
+	eor		v0.16b, v5.16b, v0.16b
+ST4(	ld1		{v5.16b}, [x1], #16		)
+	eor		v1.16b, v6.16b, v1.16b
+ST5(	ld1		{v5.16b-v6.16b}, [x1], #32	)
+	eor		v2.16b, v7.16b, v2.16b
+	eor		v3.16b, v5.16b, v3.16b
+ST5(	eor		v4.16b, v6.16b, v4.16b		)
+	st1		{v0.16b-v3.16b}, [x0], #64
+ST5(	st1		{v4.16b}, [x0], #16		)
+	cbz		w4, .Lxctrout
+	b		.LxctrloopNx
+
+.Lxctrout:
+	ldp		x29, x30, [sp], #16
+	ret
+
+.Lxctrtail:
+	/* XOR up to MAX_STRIDE * 16 - 1 bytes of in/output with v0 ... v3/v4 */
+	mov		x16, #16
+	ands		x6, x4, #0xf
+	csel		x13, x6, x16, ne
+
+ST5(	cmp		w4, #64 - (MAX_STRIDE << 4)	)
+ST5(	csel		x14, x16, xzr, gt		)
+	cmp		w4, #48 - (MAX_STRIDE << 4)
+	csel		x15, x16, xzr, gt
+	cmp		w4, #32 - (MAX_STRIDE << 4)
+	csel		x16, x16, xzr, gt
+	cmp		w4, #16 - (MAX_STRIDE << 4)
+
+	adr_l		x12, .Lcts_permute_table
+	add		x12, x12, x13
+	ble		.Lctrtail1x
+
+ST5(	ld1		{v5.16b}, [x1], x14		)
+	ld1		{v6.16b}, [x1], x15
+	ld1		{v7.16b}, [x1], x16
+
+ST4(	bl		aes_encrypt_block4x		)
+ST5(	bl		aes_encrypt_block5x		)
+
+	ld1		{v8.16b}, [x1], x13
+	ld1		{v9.16b}, [x1]
+	ld1		{v10.16b}, [x12]
+
+ST4(	eor		v6.16b, v6.16b, v0.16b		)
+ST4(	eor		v7.16b, v7.16b, v1.16b		)
+ST4(	tbl		v3.16b, {v3.16b}, v10.16b	)
+ST4(	eor		v8.16b, v8.16b, v2.16b		)
+ST4(	eor		v9.16b, v9.16b, v3.16b		)
+
+ST5(	eor		v5.16b, v5.16b, v0.16b		)
+ST5(	eor		v6.16b, v6.16b, v1.16b		)
+ST5(	tbl		v4.16b, {v4.16b}, v10.16b	)
+ST5(	eor		v7.16b, v7.16b, v2.16b		)
+ST5(	eor		v8.16b, v8.16b, v3.16b		)
+ST5(	eor		v9.16b, v9.16b, v4.16b		)
+
+ST5(	st1		{v5.16b}, [x0], x14		)
+	st1		{v6.16b}, [x0], x15
+	st1		{v7.16b}, [x0], x16
+	add		x13, x13, x0
+	st1		{v9.16b}, [x13]		// overlapping stores
+	st1		{v8.16b}, [x0]
+	b		.Lctrout
+.Lxctrtail1x:
+	sub		x7, x6, #16
+	csel		x6, x6, x7, eq
+	add		x1, x1, x6
+	add		x0, x0, x6
+	ld1		{v5.16b}, [x1]
+	ld1		{v6.16b}, [x0]
+ST5(	mov		v3.16b, v4.16b			)
+	encrypt_block	v3, w3, x2, x8, w7
+	ld1		{v10.16b-v11.16b}, [x12]
+	tbl		v3.16b, {v3.16b}, v10.16b
+	sshr		v11.16b, v11.16b, #7
+	eor		v5.16b, v5.16b, v3.16b
+	bif		v5.16b, v6.16b, v11.16b
+	st1		{v5.16b}, [x0]
+	b		.Lctrout
+AES_FUNC_END(aes_xctr_encrypt)
+
 
 	/*
 	 * aes_xts_encrypt(u8 out[], u8 const in[], u8 const rk1[], int rounds,
-- 
2.35.1.723.g4982287a31-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v3 6/8] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
                   ` (4 preceding siblings ...)
  2022-03-15 23:00 ` [PATCH v3 5/8] crypto: arm64/aes-xctr: " Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-23  2:15   ` Eric Biggers
  2022-03-15 23:00 ` [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL " Nathan Huckleberry
  2022-03-15 23:00 ` [PATCH v3 8/8] fscrypt: Add HCTR2 support for filename encryption Nathan Huckleberry
  7 siblings, 1 reply; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add hardware accelerated version of POLYVAL for x86-64 CPUs with
PCLMULQDQ support.

This implementation is accelerated using PCLMULQDQ instructions to
perform the finite field computations.  For added efficiency, 8 blocks
of the message are processed simultaneously by precomputing the first
8 powers of the key.

Schoolbook multiplication is used instead of Karatsuba multiplication
because it was found to be slightly faster on x86-64 machines.
Montgomery reduction must be used instead of Barrett reduction due to
the difference in modulus between POLYVAL's field and other finite
fields.

More information on POLYVAL can be found in the HCTR2 paper,
"Length-preserving encryption with HCTR2":
https://eprint.iacr.org/2021/1441.pdf
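
As a rough model of what one 8-block stride computes (this is not the kernel
code; gf128_mul_polyval() and xor_block() below are placeholders for
POLYVAL's field multiplication and a 16-byte XOR, not real kernel APIs):

#include <stdint.h>
#include <string.h>

#define STRIDE 8

/* Placeholders; not real kernel APIs. */
void gf128_mul_polyval(uint8_t acc[16], const uint8_t op[16]);	/* acc *= op */
void xor_block(uint8_t dst[16], const uint8_t src[16]);		/* dst ^= src */

/*
 * One stride: accumulator = (m_0 ^ acc)*h^8 + m_1*h^7 + ... + m_7*h^1,
 * with key_powers[] holding h^8, ..., h^1 in that order.
 */
static void polyval_stride(uint8_t accumulator[16],
			   const uint8_t key_powers[STRIDE][16],
			   const uint8_t msg[STRIDE][16])
{
	uint8_t sum[16] = { 0 };
	uint8_t tmp[16];
	int i;

	for (i = 0; i < STRIDE; i++) {
		memcpy(tmp, msg[i], 16);
		if (i == 0)
			xor_block(tmp, accumulator);	/* fold in the running hash */
		gf128_mul_polyval(tmp, key_powers[i]);	/* tmp = m_i * h^(8-i) */
		xor_block(sum, tmp);
	}
	memcpy(accumulator, sum, 16);
}

The assembly gets its speedup by keeping the eight products unreduced
(LO/MI/HI) and performing a single Montgomery reduction per stride rather
than the eight reductions implied by the loop above.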

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 arch/x86/crypto/Makefile               |   3 +
 arch/x86/crypto/polyval-clmulni_asm.S  | 376 +++++++++++++++++++++++++
 arch/x86/crypto/polyval-clmulni_glue.c | 361 ++++++++++++++++++++++++
 crypto/Kconfig                         |  10 +
 4 files changed, 750 insertions(+)
 create mode 100644 arch/x86/crypto/polyval-clmulni_asm.S
 create mode 100644 arch/x86/crypto/polyval-clmulni_glue.c

diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile
index 2831685adf6f..b9847152acd8 100644
--- a/arch/x86/crypto/Makefile
+++ b/arch/x86/crypto/Makefile
@@ -69,6 +69,9 @@ libblake2s-x86_64-y := blake2s-core.o blake2s-glue.o
 obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) += ghash-clmulni-intel.o
 ghash-clmulni-intel-y := ghash-clmulni-intel_asm.o ghash-clmulni-intel_glue.o
 
+obj-$(CONFIG_CRYPTO_POLYVAL_CLMUL_NI) += polyval-clmulni.o
+polyval-clmulni-y := polyval-clmulni_asm.o polyval-clmulni_glue.o
+
 obj-$(CONFIG_CRYPTO_CRC32C_INTEL) += crc32c-intel.o
 crc32c-intel-y := crc32c-intel_glue.o
 crc32c-intel-$(CONFIG_64BIT) += crc32c-pcl-intel-asm_64.o
diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
new file mode 100644
index 000000000000..ad7126d9f0ff
--- /dev/null
+++ b/arch/x86/crypto/polyval-clmulni_asm.S
@@ -0,0 +1,376 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright 2021 Google LLC
+ */
+/*
+ * This is an efficient implementation of POLYVAL using intel PCLMULQDQ-NI
+ * instructions. It works on 8 blocks at a time, by precomputing the first 8
+ * keys powers h^8, ..., h^1 in the POLYVAL finite field. This precomputation
+ * allows us to split finite field multiplication into two steps.
+ *
+ * In the first step, we consider h^i, m_i as normal polynomials of degree less
+ * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication
+ * is simply polynomial multiplication.
+ *
+ * In the second step, we compute the reduction of p(x) modulo the finite field
+ * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where
+ * multiplication is finite field multiplication. The advantage is that the
+ * two-step process only requires 1 finite field reduction for every 8
+ * polynomial multiplications. Further parallelism is gained by interleaving the
+ * multiplications and polynomial reductions.
+ */
+
+#include <linux/linkage.h>
+#include <asm/frame.h>
+
+#define NUM_PRECOMPUTE_POWERS 8
+
+#define GSTAR %xmm7
+#define PL %xmm8
+#define PH %xmm9
+#define T %xmm10
+#define V %xmm11
+#define LO %xmm12
+#define HI %xmm13
+#define MI %xmm14
+#define SUM %xmm15
+
+#define BLOCKS_LEFT %rdx
+#define MSG %rdi
+#define KEY_POWERS %r10
+#define IDX %r11
+#define TMP %rax
+
+.section    .rodata.cst16.gstar, "aM", @progbits, 16
+.align 16
+
+.Lgstar:
+	.quad 0xc200000000000000, 0xc200000000000000
+
+.text
+
+/*
+ * Performs schoolbook1_iteration on two lists of 128-bit polynomials of
+ * length count pointed to by MSG and KEY_POWERS.
+ */
+.macro schoolbook1 count
+	.set i, 0
+	.rept (\count)
+		schoolbook1_iteration i 0
+		.set i, (i +1)
+	.endr
+.endm
+
+/*
+ * Computes the product of two 128-bit polynomials at the memory locations
+ * specified by (MSG + 16*i) and (KEY_POWERS + 16*i) and XORs the components of the
+ * 256-bit product into LO, MI, HI.
+ *
+ * The multiplication produces four parts:
+ *   LOW: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of each polynomial
+ *   MID1: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of the first polynomial and the top 64-bits of the second
+ *   MID2: The polynomial given by performing carryless multiplication of the
+ *   bottom 64-bits of the second polynomial and the top 64-bits of the first
+ *   HIGH: The polynomial given by performing carryless multiplication of the
+ *   top 64-bits of each polynomial
+ *
+ * We compute:
+ *  LO ^= LOW
+ *  MI ^= MID1 ^ MID2
+ *  HI ^= HIGH
+ *
+ * Later, the 256-bit result can be extracted as:
+ *   [HI_H : HI_L ^ MI_H : LO_H ^ MI_L : LO_L]
+ * This step is done when computing the polynomial reduction for efficiency
+ * reasons.
+ *
+ * If xor_sum == 1, then also XOR the value of SUM into m_0.  This avoids an
+ * extra multiplication of SUM and h^N.
+ */
+.macro schoolbook1_iteration i xor_sum
+	movups (16*\i)(MSG), %xmm0
+	.if (\i == 0 && \xor_sum == 1)
+		pxor SUM, %xmm0
+	.endif
+	vpclmulqdq $0x00, (16*\i)(KEY_POWERS), %xmm0, %xmm2
+	vpclmulqdq $0x01, (16*\i)(KEY_POWERS), %xmm0, %xmm1
+	vpclmulqdq $0x11, (16*\i)(KEY_POWERS), %xmm0, %xmm3
+	vpclmulqdq $0x10, (16*\i)(KEY_POWERS), %xmm0, %xmm4
+	vpxor %xmm2, LO, LO
+	vpxor %xmm1, MI, MI
+	vpxor %xmm4, MI, MI
+	vpxor %xmm3, HI, HI
+.endm
+
+/*
+ * Performs the same computation as schoolbook1_iteration, except we expect the
+ * arguments to already be loaded into xmm0 and xmm1.
+ */
+.macro schoolbook1_noload
+	vpclmulqdq $0x01, %xmm0, %xmm1, %xmm2
+	vpclmulqdq $0x00, %xmm0, %xmm1, %xmm3
+	vpclmulqdq $0x11, %xmm0, %xmm1, %xmm4
+	vpclmulqdq $0x10, %xmm0, %xmm1, %xmm5
+	vpxor %xmm2, MI, MI
+	vpxor %xmm3, LO, LO
+	vpxor %xmm5, MI, MI
+	vpxor %xmm4, HI, HI
+.endm
+
+/*
+ * Computes the 256-bit polynomial represented by LO, HI, MI. Stores
+ * the result in PL, PH.
+ *   [PH :: PL] = [HI_H : HI_L ^ MI_H :: LO_H ^ MI_L : LO_L]
+ */
+.macro schoolbook2
+	vpslldq $8, MI, PL
+	vpsrldq $8, MI, PH
+	pxor LO, PL
+	pxor HI, PH
+.endm
+
+/*
+ * Computes the 128-bit reduction of PL, PH. Stores the result in PH.
+ *
+ * This macro computes p(x) mod g(x) where p(x) is in montgomery form and g(x) =
+ * x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * We have a 256-bit polynomial P_3 : P_2 : P_1 : P_0 that is the product of
+ * two 128-bit polynomials in Montgomery form.  We need to reduce it mod g(x).
+ * Also, since polynomials in Montgomery form have an "extra" factor of x^128,
+ * this product has two extra factors of x^128.  To get it back into Montgomery
+ * form, we need to remove one of these factors by dividing by x^128.
+ *
+ * To accomplish both of these goals, we add multiples of g(x) that cancel out
+ * the low 128 bits P_1 : P_0, leaving just the high 128 bits. Since the low
+ * bits are zero, the polynomial division by x^128 can be done by right shifting.
+ *
+ * Since the only nonzero term in the low 64 bits of g(x) is the constant term,
+ * the multiple of g(x) needed to cancel out P_0 is P_0 * g(x).  The CPU can
+ * only do 64x64 bit multiplications, so split P_0 * g(x) into x^128 * P_0 +
+ * x^64 g*(x) * P_0 + P_0, where g*(x) is bits 64-127 of g(x).  Adding this to
+ * the original polynomial gives P_3 : P_2 + P_0 + T_1 : P_1 + T_0 : 0, where T
+ * = T_1 : T_0 = g*(x) * P0.  Thus, bits 0-63 got "folded" into bits 64-191.
+ *
+ * Repeating this same process on the next 64 bits "folds" bits 64-127 into bits
+ * 128-255, giving the answer in bits 128-255. This time, we need to cancel P_1
+ * + T_0 in bits 64-127. The multiple of g(x) required is (P_1 + T_0) * g(x) *
+ * x^64. Adding this to our previous computation gives P_3 + P_1 + T_0 + V_1 :
+ * P_2 + P_0 + T_1 + V_0 : 0 : 0, where V = V_1 : V_0 = g*(x) * (P_1 + T_0).
+ *
+ * So our final computation is:
+ *   T = T_1 : T_0 = g*(x) * P_0
+ *   V = V_1 : V_0 = g*(x) * (T_0 ^ P_1)
+ *   p(x) / x^{128} mod g(x) = P_3 ^ P_1 ^ V_1 ^ T_0 : P_2 ^ P_0 ^ V_0 ^ T_1
+ *
+ * The implementation below saves a XOR instruction by computing P_1 ^ T_0 : P_0
+ * ^ T_1 and XORing into PH, rather than directly XORing P_1 : P_0, T_0 : T1
+ * into PH.  This allows us to reuse P_1 ^ T_0 when computing V.
+ */
+.macro montgomery_reduction
+	movdqa PL, T
+	pclmulqdq $0x00, GSTAR, T # T = [P_0 * g*(x)]
+	pshufd $0b01001110, T, V # V = [T_0 : T_1]
+	pxor V, PL # PL = [P_1 ^ T_0 : P_0 ^ T_1]
+	pxor PL, PH # PH = [P_1 ^ T_0 ^ P_3 : P_0 ^ T_1 ^ P_2]
+	pclmulqdq $0x11, GSTAR, PL # PL = [(P_1 ^ T_0) * g*(x)]
+	pxor PL, PH
+.endm
+
+/*
+ * Compute schoolbook multiplication for 8 blocks
+ * m_0h^8 + ... + m_7h^1
+ *
+ * If reduce is set, also computes the montgomery reduction of the
+ * previous full_stride call and XORs with the first message block.
+ * (m_0 + REDUCE(PL, PH))h^8 + ... + m_7h^1.
+ * I.e., the first multiplication uses m_0 + REDUCE(PL, PH) instead of m_0.
+ *
+ * Sets PL, PH
+ * Clobbers LO, HI, MI
+ *
+ */
+.macro full_stride reduce
+	mov %rsi, KEY_POWERS
+	pxor LO, LO
+	pxor HI, HI
+	pxor MI, MI
+
+	schoolbook1_iteration 7 0
+	.if (\reduce)
+		movdqa PL, T
+	.endif
+
+	schoolbook1_iteration 6 0
+	.if (\reduce)
+		pclmulqdq $0x00, GSTAR, T # T = [X0 * g*(x)]
+	.endif
+
+	schoolbook1_iteration 5 0
+	.if (\reduce)
+		pshufd $0b01001110, T, V # V = [T0 : T1]
+	.endif
+
+	schoolbook1_iteration 4 0
+	.if (\reduce)
+		pxor V, PL # PL = [X1 ^ T0 : X0 ^ T1]
+	.endif
+
+	schoolbook1_iteration 3 0
+	.if (\reduce)
+		pxor PL, PH # PH = [X1 ^ T0 ^ X3 : X0 ^ T1 ^ X2]
+	.endif
+
+	schoolbook1_iteration 2 0
+	.if (\reduce)
+		pclmulqdq $0x11, GSTAR, PL # PL = [X1 ^ T0 * g*(x)]
+	.endif
+
+	schoolbook1_iteration 1 0
+	.if (\reduce)
+		pxor PL, PH
+		movdqa PH, SUM
+	.endif
+
+	schoolbook1_iteration 0 1
+
+	addq $(8*16), MSG
+	addq $(8*16), KEY_POWERS
+	schoolbook2
+.endm
+
+/*
+ * Compute poly on window size of %rdx blocks
+ * 0 < %rdx < NUM_PRECOMPUTE_POWERS
+ */
+.macro partial_stride
+	pxor LO, LO
+	pxor HI, HI
+	pxor MI, MI
+	mov BLOCKS_LEFT, TMP
+	shlq $4, TMP
+	mov %rsi, KEY_POWERS
+	addq $(16*NUM_PRECOMPUTE_POWERS), KEY_POWERS
+	subq TMP, KEY_POWERS
+	# Multiply sum by h^N
+	movups (KEY_POWERS), %xmm0
+	movdqa SUM, %xmm1
+	schoolbook1_noload
+	schoolbook2
+	montgomery_reduction
+	movdqa PH, SUM
+	pxor LO, LO
+	pxor HI, HI
+	pxor MI, MI
+	xor IDX, IDX
+.LloopPartial:
+	cmpq BLOCKS_LEFT, IDX # IDX < rdx
+	jae .LloopExitPartial
+
+	movq BLOCKS_LEFT, TMP
+	subq IDX, TMP # TMP = rdx - IDX
+
+	cmp $4, TMP # TMP < 4 ?
+	jl .Llt4Partial
+	schoolbook1 4
+	addq $4, IDX
+	addq $(4*16), MSG
+	addq $(4*16), KEY_POWERS
+	jmp .LoutPartial
+.Llt4Partial:
+	cmp $3, TMP # TMP < 3 ?
+	jl .Llt3Partial
+	schoolbook1 3
+	addq $3, IDX
+	addq $(3*16), MSG
+	addq $(3*16), KEY_POWERS
+	jmp .LoutPartial
+.Llt3Partial:
+	cmp $2, TMP # TMP < 2 ?
+	jl .Llt2Partial
+	schoolbook1 2
+	addq $2, IDX
+	addq $(2*16), MSG
+	addq $(2*16), KEY_POWERS
+	jmp .LoutPartial
+.Llt2Partial:
+	schoolbook1 1 # TMP < 1 ?
+	addq $1, IDX
+	addq $(1*16), MSG
+	addq $(1*16), KEY_POWERS
+.LoutPartial:
+	jmp .LloopPartial
+.LloopExitPartial:
+	schoolbook2
+	montgomery_reduction
+	pxor PH, SUM
+.endm
+
+/*
+ * Perform montgomery multiplication in GF(2^128) and store result in op1.
+ *
+ * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1
+ * If op1, op2 are in montgomery form,  this computes the montgomery
+ * form of op1*op2.
+ *
+ * void clmul_polyval_mul(u8 *op1, const u8 *op2);
+ */
+SYM_FUNC_START(clmul_polyval_mul)
+	FRAME_BEGIN
+	vmovdqa .Lgstar(%rip), GSTAR
+	pxor LO, LO
+	pxor HI, HI
+	pxor MI, MI
+	movups (%rdi), %xmm0
+	movups (%rsi), %xmm1
+	schoolbook1_noload
+	schoolbook2
+	montgomery_reduction
+	movups PH, (%rdi)
+	FRAME_END
+	RET
+SYM_FUNC_END(clmul_polyval_mul)
+
+/*
+ * Perform polynomial evaluation as specified by POLYVAL.  This computes:
+ * 	h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
+ * where n=nblocks, h is the hash key, and m_i are the message blocks.
+ *
+ * rdi - pointer to message blocks
+ * rsi - pointer to precomputed key powers h^8 ... h^1
+ * rdx - number of blocks to hash
+ * rcx - pointer to the accumulator
+ *
+ * void clmul_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
+ *			     size_t nblocks, u8 *accumulator);
+ */
+SYM_FUNC_START(clmul_polyval_update)
+	FRAME_BEGIN
+	vmovdqa .Lgstar(%rip), GSTAR
+	movups (%rcx), SUM
+	cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
+	jb .LstrideLoopExit
+	full_stride 0
+	subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
+.LstrideLoop:
+	cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
+	jb .LstrideLoopExitReduce
+	full_stride 1
+	subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
+	jmp .LstrideLoop
+.LstrideLoopExitReduce:
+	montgomery_reduction
+	movdqa PH, SUM
+.LstrideLoopExit:
+	test BLOCKS_LEFT, BLOCKS_LEFT
+	je .LskipPartial
+	partial_stride
+.LskipPartial:
+	movups SUM, (%rcx)
+	FRAME_END
+	RET
+SYM_FUNC_END(clmul_polyval_update)
diff --git a/arch/x86/crypto/polyval-clmulni_glue.c b/arch/x86/crypto/polyval-clmulni_glue.c
new file mode 100644
index 000000000000..ae73750ba059
--- /dev/null
+++ b/arch/x86/crypto/polyval-clmulni_glue.c
@@ -0,0 +1,361 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Accelerated POLYVAL implementation with Intel PCLMULQDQ-NI
+ * instructions. This file contains glue code.
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1@iki.fi>
+ * Copyright (c) 2009 Intel Corp.
+ *   Author: Huang Ying <ying.huang@intel.com>
+ * Copyright 2021 Google LLC
+ */
+/*
+ * Glue code based on ghash-clmulni-intel_glue.c.
+ *
+ * This implementation of POLYVAL uses montgomery multiplication
+ * accelerated by PCLMULQDQ-NI to implement the finite field
+ * operations.
+ *
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/cryptd.h>
+#include <crypto/gf128mul.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/polyval.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <asm/cpu_device_id.h>
+#include <asm/simd.h>
+
+#define NUM_PRECOMPUTE_POWERS	8
+
+struct polyval_async_ctx {
+	struct cryptd_ahash *cryptd_tfm;
+};
+
+struct polyval_ctx {
+	/*
+	 * These powers must be in the order h^8, ..., h^1.
+	 */
+	u8 key_powers[NUM_PRECOMPUTE_POWERS][POLYVAL_BLOCK_SIZE];
+};
+
+struct polyval_desc_ctx {
+	u8 buffer[POLYVAL_BLOCK_SIZE];
+	u32 bytes;
+};
+
+asmlinkage void clmul_polyval_update(const u8 *in, struct polyval_ctx *keys,
+				     size_t nblocks, u8 *accumulator);
+asmlinkage void clmul_polyval_mul(u8 *op1, const u8 *op2);
+
+static int polyval_init(struct shash_desc *desc)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memset(dctx, 0, sizeof(*dctx));
+
+	return 0;
+}
+
+static int polyval_setkey(struct crypto_shash *tfm,
+			const u8 *key, unsigned int keylen)
+{
+	struct polyval_ctx *ctx = crypto_shash_ctx(tfm);
+	int i;
+
+	if (keylen != POLYVAL_BLOCK_SIZE)
+		return -EINVAL;
+
+	memcpy(ctx->key_powers[NUM_PRECOMPUTE_POWERS-1], key,
+	       POLYVAL_BLOCK_SIZE);
+
+	kernel_fpu_begin();
+	for (i = NUM_PRECOMPUTE_POWERS-2; i >= 0; i--) {
+		memcpy(ctx->key_powers[i], key, POLYVAL_BLOCK_SIZE);
+		clmul_polyval_mul(ctx->key_powers[i], ctx->key_powers[i+1]);
+	}
+	kernel_fpu_end();
+
+	return 0;
+}
+
+static int polyval_update(struct shash_desc *desc,
+			 const u8 *src, unsigned int srclen)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	u8 *pos;
+	unsigned int nblocks;
+	int n;
+
+	kernel_fpu_begin();
+	if (dctx->bytes) {
+		n = min(srclen, dctx->bytes);
+		pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes;
+
+		dctx->bytes -= n;
+		srclen -= n;
+
+		while (n--)
+			*pos++ ^= *src++;
+
+		if (!dctx->bytes)
+			clmul_polyval_mul(dctx->buffer,
+				ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+	}
+
+	nblocks = srclen/POLYVAL_BLOCK_SIZE;
+	clmul_polyval_update(src, ctx, nblocks, dctx->buffer);
+	srclen -= nblocks*POLYVAL_BLOCK_SIZE;
+	kernel_fpu_end();
+
+	if (srclen) {
+		dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+		src += nblocks*POLYVAL_BLOCK_SIZE;
+		pos = dctx->buffer;
+		while (srclen--)
+			*pos++ ^= *src++;
+	}
+
+	return 0;
+}
+
+static int polyval_final(struct shash_desc *desc, u8 *dst)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+
+	if (dctx->bytes) {
+		kernel_fpu_begin();
+		clmul_polyval_mul(dctx->buffer,
+			ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+		kernel_fpu_end();
+	}
+
+	dctx->bytes = 0;
+	memcpy(dst, dctx->buffer, POLYVAL_BLOCK_SIZE);
+
+	return 0;
+}
+
+static struct shash_alg polyval_alg = {
+	.digestsize	= POLYVAL_DIGEST_SIZE,
+	.init		= polyval_init,
+	.update		= polyval_update,
+	.final		= polyval_final,
+	.setkey		= polyval_setkey,
+	.descsize	= sizeof(struct polyval_desc_ctx),
+	.base		= {
+		.cra_name		= "__polyval",
+		.cra_driver_name	= "__polyval-clmulni",
+		.cra_priority		= 0,
+		.cra_flags		= CRYPTO_ALG_INTERNAL,
+		.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct polyval_ctx),
+		.cra_module		= THIS_MODULE,
+	},
+};
+
+static int polyval_async_init(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return crypto_shash_init(desc);
+}
+
+static int polyval_async_update(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_update(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return shash_ahash_update(req, desc);
+}
+
+static int polyval_async_final(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_final(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return crypto_shash_final(desc, req->result);
+}
+
+static int polyval_async_import(struct ahash_request *req, const void *in)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	polyval_async_init(req);
+	memcpy(dctx, in, sizeof(*dctx));
+	return 0;
+
+}
+
+static int polyval_async_export(struct ahash_request *req, void *out)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memcpy(out, dctx, sizeof(*dctx));
+	return 0;
+
+}
+
+static int polyval_async_digest(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+	struct crypto_shash *child;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_digest(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+	child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return shash_ahash_digest(req, desc);
+}
+
+static int polyval_async_setkey(struct crypto_ahash *tfm, const u8 *key,
+			      unsigned int keylen)
+{
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_ahash *child = &ctx->cryptd_tfm->base;
+
+	crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm)
+			       & CRYPTO_TFM_REQ_MASK);
+	return crypto_ahash_setkey(child, key, keylen);
+}
+
+static int polyval_async_init_tfm(struct crypto_tfm *tfm)
+{
+	struct cryptd_ahash *cryptd_tfm;
+	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	cryptd_tfm = cryptd_alloc_ahash("__polyval-clmulni",
+					CRYPTO_ALG_INTERNAL,
+					CRYPTO_ALG_INTERNAL);
+	if (IS_ERR(cryptd_tfm))
+		return PTR_ERR(cryptd_tfm);
+	ctx->cryptd_tfm = cryptd_tfm;
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				 sizeof(struct ahash_request) +
+				 crypto_ahash_reqsize(&cryptd_tfm->base));
+
+	return 0;
+}
+
+static void polyval_async_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	cryptd_free_ahash(ctx->cryptd_tfm);
+}
+
+static struct ahash_alg polyval_async_alg = {
+	.init		= polyval_async_init,
+	.update		= polyval_async_update,
+	.final		= polyval_async_final,
+	.setkey		= polyval_async_setkey,
+	.digest		= polyval_async_digest,
+	.export		= polyval_async_export,
+	.import		= polyval_async_import,
+	.halg = {
+		.digestsize	= POLYVAL_DIGEST_SIZE,
+		.statesize = sizeof(struct polyval_desc_ctx),
+		.base = {
+			.cra_name		= "polyval",
+			.cra_driver_name	= "polyval-clmulni",
+			.cra_priority		= 200,
+			.cra_ctxsize		= sizeof(struct polyval_async_ctx),
+			.cra_flags		= CRYPTO_ALG_ASYNC,
+			.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+			.cra_module		= THIS_MODULE,
+			.cra_init		= polyval_async_init_tfm,
+			.cra_exit		= polyval_async_exit_tfm,
+		},
+	},
+};
+
+static const struct x86_cpu_id pcmul_cpu_id[] = {
+	X86_MATCH_FEATURE(X86_FEATURE_PCLMULQDQ, NULL), /* Pickle-Mickle-Duck */
+	{}
+};
+MODULE_DEVICE_TABLE(x86cpu, pcmul_cpu_id);
+
+static int __init polyval_clmulni_mod_init(void)
+{
+	int err;
+
+	if (!x86_match_cpu(pcmul_cpu_id))
+		return -ENODEV;
+
+	err = crypto_register_shash(&polyval_alg);
+	if (err)
+		goto err_out;
+	err = crypto_register_ahash(&polyval_async_alg);
+	if (err)
+		goto err_shash;
+
+	return 0;
+
+err_shash:
+	crypto_unregister_shash(&polyval_alg);
+err_out:
+	return err;
+}
+
+static void __exit polyval_clmulni_mod_exit(void)
+{
+	crypto_unregister_ahash(&polyval_async_alg);
+	crypto_unregister_shash(&polyval_alg);
+}
+
+module_init(polyval_clmulni_mod_init);
+module_exit(polyval_clmulni_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function accelerated by PCLMULQDQ-NI");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-clmulni");
diff --git a/crypto/Kconfig b/crypto/Kconfig
index aa06af0e0ebe..c6aec88213b1 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -787,6 +787,16 @@ config CRYPTO_POLYVAL
 	  POLYVAL is the hash function used in HCTR2.  It is not a general-purpose
 	  cryptographic hash function.
 
+config CRYPTO_POLYVAL_CLMUL_NI
+	tristate "POLYVAL hash function (CLMUL-NI accelerated)"
+	depends on X86 && 64BIT
+	select CRYPTO_CRYPTD
+	select CRYPTO_POLYVAL
+	help
+	  This is the x86_64 CLMUL-NI accelerated implementation of POLYVAL. It is
+	  used to efficiently implement HCTR2 on x86-64 processors that support
+	  carry-less multiplication instructions.
+
 config CRYPTO_POLY1305
 	tristate "Poly1305 authenticator algorithm"
 	select CRYPTO_HASH
-- 
2.35.1.723.g4982287a31-goog


^ permalink raw reply related	[flat|nested] 15+ messages in thread

* [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
                   ` (5 preceding siblings ...)
  2022-03-15 23:00 ` [PATCH v3 6/8] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  2022-03-24  1:37   ` Eric Biggers
  2022-03-15 23:00 ` [PATCH v3 8/8] fscrypt: Add HCTR2 support for filename encryption Nathan Huckleberry
  7 siblings, 1 reply; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

Add hardware accelerated version of POLYVAL for ARM64 CPUs with
Crypto Extension support.

This implementation is accelerated using PMULL instructions to perform
the finite field computations.  For added efficiency, 8 blocks of the
message are processed simultaneously by precomputing the first 8
powers of the key.

Karatsuba multiplication is used instead of Schoolbook multiplication
because it was found to be slightly faster on ARM64 CPUs.  Montgomery
reduction must be used instead of Barrett reduction due to the
difference in modulus between POLYVAL's field and other finite fields.
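
To illustrate the Karatsuba trick on carry-less arithmetic: a 128x128-bit
polynomial multiplication is split into three 64x64-bit PMULLs instead of
schoolbook's four, because the middle term can be recovered as M ^ L ^ H.
The sketch below is a plain C model of that identity (clmul64() emulates a
single PMULL bit by bit); it is an illustration only, not the kernel code:

#include <stdint.h>

/* Bit-by-bit 64x64 -> 128-bit carry-less multiply (models one PMULL). */
static void clmul64(uint64_t a, uint64_t b, uint64_t *lo, uint64_t *hi)
{
	int i;

	*lo = 0;
	*hi = 0;
	for (i = 0; i < 64; i++) {
		if ((b >> i) & 1) {
			*lo ^= a << i;
			*hi ^= i ? a >> (64 - i) : 0;
		}
	}
}

/*
 * Karatsuba 128x128 -> 256-bit carry-less multiply using three clmul64()
 * calls: L = X_L*Y_L, H = X_H*Y_H, M = (X_L ^ X_H)*(Y_L ^ Y_H).  The middle
 * 128 bits of the product are M ^ L ^ H.  r[0] is the least significant word.
 */
static void clmul128_karatsuba(const uint64_t x[2], const uint64_t y[2],
			       uint64_t r[4])
{
	uint64_t lo_l, lo_h, hi_l, hi_h, mid_l, mid_h;

	clmul64(x[0], y[0], &lo_l, &lo_h);
	clmul64(x[1], y[1], &hi_l, &hi_h);
	clmul64(x[0] ^ x[1], y[0] ^ y[1], &mid_l, &mid_h);
	mid_l ^= lo_l ^ hi_l;
	mid_h ^= lo_h ^ hi_h;

	r[0] = lo_l;
	r[1] = lo_h ^ mid_l;
	r[2] = hi_l ^ mid_h;
	r[3] = hi_h;
}

The karatsuba1/karatsuba2 macros in polyval-ce-core.S follow the same
decomposition, but accumulate the unreduced partial products across all
eight blocks so that only one Montgomery reduction is needed per stride.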

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 arch/arm64/crypto/Kconfig           |   7 +
 arch/arm64/crypto/Makefile          |   3 +
 arch/arm64/crypto/polyval-ce-core.S | 372 ++++++++++++++++++++++++++++
 arch/arm64/crypto/polyval-ce-glue.c | 363 +++++++++++++++++++++++++++
 4 files changed, 745 insertions(+)
 create mode 100644 arch/arm64/crypto/polyval-ce-core.S
 create mode 100644 arch/arm64/crypto/polyval-ce-glue.c

diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig
index 897f9a4b5b67..f7fbe8637e5c 100644
--- a/arch/arm64/crypto/Kconfig
+++ b/arch/arm64/crypto/Kconfig
@@ -60,6 +60,13 @@ config CRYPTO_GHASH_ARM64_CE
 	select CRYPTO_GF128MUL
 	select CRYPTO_LIB_AES
 
+config CRYPTO_POLYVAL_ARM64_CE
+	tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
+	depends on KERNEL_MODE_NEON
+	select CRYPTO_CRYPTD
+	select CRYPTO_HASH
+	select CRYPTO_POLYVAL
+
 config CRYPTO_CRCT10DIF_ARM64_CE
 	tristate "CRCT10DIF digest algorithm using PMULL instructions"
 	depends on KERNEL_MODE_NEON && CRC_T10DIF
diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile
index 09a805cc32d7..53f9af962b86 100644
--- a/arch/arm64/crypto/Makefile
+++ b/arch/arm64/crypto/Makefile
@@ -26,6 +26,9 @@ sm4-ce-y := sm4-ce-glue.o sm4-ce-core.o
 obj-$(CONFIG_CRYPTO_GHASH_ARM64_CE) += ghash-ce.o
 ghash-ce-y := ghash-ce-glue.o ghash-ce-core.o
 
+obj-$(CONFIG_CRYPTO_POLYVAL_ARM64_CE) += polyval-ce.o
+polyval-ce-y := polyval-ce-glue.o polyval-ce-core.o
+
 obj-$(CONFIG_CRYPTO_CRCT10DIF_ARM64_CE) += crct10dif-ce.o
 crct10dif-ce-y := crct10dif-ce-core.o crct10dif-ce-glue.o
 
diff --git a/arch/arm64/crypto/polyval-ce-core.S b/arch/arm64/crypto/polyval-ce-core.S
new file mode 100644
index 000000000000..9c0fba11716c
--- /dev/null
+++ b/arch/arm64/crypto/polyval-ce-core.S
@@ -0,0 +1,372 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Implementation of POLYVAL using ARMv8 Crypto Extensions.
+ *
+ * Copyright 2021 Google LLC
+ */
+/*
+ * This is an efficient implementation of POLYVAL using ARMv8 Crypto Extensions
+ * It works on 8 blocks at a time, by precomputing the first 8 keys powers h^8,
+ * ..., h^1 in the POLYVAL finite field. This precomputation allows us to split
+ * finite field multiplication into two steps.
+ *
+ * In the first step, we consider h^i, m_i as normal polynomials of degree less
+ * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication
+ * is simply polynomial multiplication.
+ *
+ * In the second step, we compute the reduction of p(x) modulo the finite field
+ * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where
+ * multiplication is finite field multiplication. The advantage is that the
+ * two-step process only requires 1 finite field reduction for every 8
+ * polynomial multiplications. Further parallelism is gained by interleaving the
+ * multiplications and polynomial reductions.
+ */
+
+#include <linux/linkage.h>
+#define NUM_PRECOMPUTE_POWERS 8
+
+BLOCKS_LEFT	.req	x2
+KEY_START	.req	x10
+EXTRA_BYTES	.req	x11
+IND	.req	x12
+TMP	.req	x13
+PARTIAL_LEFT	.req	x14
+
+M0	.req	v0
+M1	.req	v1
+M2	.req	v2
+M3	.req	v3
+M4	.req	v4
+M5	.req	v5
+M6	.req	v6
+M7	.req	v7
+KEY8	.req	v8
+KEY7	.req	v9
+KEY6	.req	v10
+KEY5	.req	v11
+KEY4	.req	v12
+KEY3	.req	v13
+KEY2	.req	v14
+KEY1	.req	v15
+PL	.req	v16
+PH	.req	v17
+T	.req	v18
+V	.req	v19
+LO	.req	v20
+MI	.req	v21
+HI	.req	v22
+SUM	.req	v23
+GSTAR	.req	v24
+
+	.text
+	.align	4
+
+	.arch	armv8-a+crypto
+	.align	4
+
+.Lgstar:
+	.quad	0xc200000000000000, 0xc200000000000000
+
+/*
+ * Computes the product of two 128-bit polynomials in X and Y and XORs the
+ * components of the 256-bit product into LO, MI, HI.
+ *
+ * The multiplication produces four parts:
+ *   LOW: The polynomial given by performing carryless multiplication of X_L and
+ *   Y_L
+ *   MID: The polynomial given by performing carryless multiplication of (X_L ^
+ *   X_H) and (Y_L ^ Y_H)
+ *   HIGH: The polynomial given by performing carryless multiplication of X_H
+ *   and Y_H
+ *
+ * We compute:
+ *  LO ^= LOW
+ *  MI ^= MID
+ *  HI ^= HIGH
+ *
+ * Later, the 256-bit result can be extracted as:
+ *   [HI_H : HI_L ^ HI_H ^ MI_H ^ LO_H :: LO_H ^ HI_L ^ MI_L ^ LO_L : LO_L]
+ * This step is done when computing the polynomial reduction for efficiency
+ * reasons.
+ */
+.macro karatsuba1 X Y
+	X .req \X
+	Y .req \Y
+	ext	v25.16b, X.16b, Y.16b, #8
+	ext	v26.16b, Y.16b, Y.16b, #8
+	eor	v25.16b, v25.16b, X.16b
+	eor	v26.16b, v26.16b, Y.16b
+	pmull	v27.1q, v25.1d, v26.1d
+	pmull2	v28.1q, X.2d, Y.2d
+	pmull	v29.1q, X.1d, Y.1d
+	eor	HI.16b, HI.16b, v27.16b
+	eor	LO.16b, LO.16b, v28.16b
+	eor	MI.16b, MI.16b, v29.16b
+	.unreq X
+	.unreq Y
+.endm
+
+/*
+ * Computes the 256-bit polynomial represented by LO, HI, MI. Stores
+ * the result in PL, PH.
+ *   [PH :: PL] = [HI_H : HI_L ^ HI_H ^ MI_H ^ LO_H :: LO_H ^ HI_L ^ MI_L ^ LO_L
+ *   : LO_L]
+ */
+.macro karatsuba2
+	ext	v4.16b, MI.16b, LO.16b, #8
+	eor	HI.16b, HI.16b, v4.16b //[HI1 ^ LO0 : HI0 ^ MI1]
+	eor	v4.16b, LO.16b, MI.16b //[LO1 ^ MI1 : LO0 ^ MI0]
+	//[LO0 ^ LO1 ^ MI1 ^ HI1 : MI1 ^ LO0 ^ MI0 ^ HI0]
+	eor	v4.16b, HI.16b, v4.16b
+	ext	LO.16b, LO.16b, LO.16b, #8 // [LO0 : LO1]
+	ext	MI.16b, MI.16b, MI.16b, #8 // [MI0 : MI1]
+	ext	PH.16b, v4.16b, LO.16b, #8 //[LO1 : LO1 ^ MI1 ^ HI1 ^ LO0]
+	ext	PL.16b, MI.16b, v4.16b, #8 //[MI1 ^ LO0 ^ MI0 ^ HI0 : MI0]
+.endm
+
+/*
+ * Computes the 128-bit reduction of PL, PH. Stores the result in PH.
+ *
+ * This macro computes p(x) mod g(x) where p(x) is in montgomery form and g(x) =
+ * x^128 + x^127 + x^126 + x^121 + 1.
+ *
+ * We have a 256-bit polynomial P_3 : P_2 : P_1 : P_0 that is the product of
+ * two 128-bit polynomials in Montgomery form.  We need to reduce it mod g(x).
+ * Also, since polynomials in Montgomery form have an "extra" factor of x^128,
+ * this product has two extra factors of x^128.  To get it back into Montgomery
+ * form, we need to remove one of these factors by dividing by x^128.
+ *
+ * To accomplish both of these goals, we add multiples of g(x) that cancel out
+ * the low 128 bits P_1 : P_0, leaving just the high 128 bits. Since the low
+ * bits are zero, the polynomial division by x^128 can be done by right shifting.
+ *
+ * Since the only nonzero term in the low 64 bits of g(x) is the constant term,
+ * the multiple of g(x) needed to cancel out P_0 is P_0 * g(x).  The CPU can
+ * only do 64x64 bit multiplications, so split P_0 * g(x) into x^128 * P_0 +
+ * x^64 g*(x) * P_0 + P_0, where g*(x) is bits 64-127 of g(x).  Adding this to
+ * the original polynomial gives P_3 : P_2 + P_0 + T_1 : P_1 + T_0 : 0, where T
+ * = T_1 : T_0 = g*(x) * P0.  Thus, bits 0-63 got "folded" into bits 64-191.
+ *
+ * Repeating this same process on the next 64 bits "folds" bits 64-127 into bits
+ * 128-255, giving the answer in bits 128-255. This time, we need to cancel P_1
+ * + T_0 in bits 64-127. The multiple of g(x) required is (P_1 + T_0) * g(x) *
+ * x^64. Adding this to our previous computation gives P_3 + P_1 + T_0 + V_1 :
+ * P_2 + P_0 + T_1 + V_0 : 0 : 0, where V = V_1 : V_0 = g*(x) * (P_1 + T_0).
+ *
+ * So our final computation is:
+ *   T = T_1 : T_0 = g*(x) * P_0
+ *   V = V_1 : V_0 = g*(x) * (T_0 ^ P_1)
+ *   p(x) / x^{128} mod g(x) = P_3 ^ P_1 ^ V_1 ^ T_0 : P_2 ^ P_0 ^ V_0 ^ T_1
+ *
+ * The implementation below saves a XOR instruction by computing P_1 ^ T_0 : P_0
+ * ^ T_1 and XORing it into V, rather than directly XORing P_1 : P_0, T_0 : T1
+ * into PH.  This allows us to reuse P_1 ^ T_0 when computing V.
+ */
+.macro montgomery_reduction
+	pmull	T.1q, GSTAR.1d, PL.1d
+	ext	T.16b, T.16b, T.16b, #8
+	eor	PL.16b, PL.16b, T.16b
+	pmull2	V.1q, GSTAR.2d, PL.2d
+	eor	V.16b, PL.16b, V.16b
+	eor	PH.16b, PH.16b, V.16b
+.endm
+
+/*
+ * Compute Polyval on 8 blocks.
+ *
+ * If reduce is set, also computes the montgomery reduction of the
+ * previous full_stride call and XORs with the first message block.
+ * (m_0 + REDUCE(PL, PH))h^8 + ... + m_7h^1.
+ * I.e., the first multiplication uses m_0 + REDUCE(PL, PH) instead of m_0.
+ *
+ * Sets PL, PH.
+ */
+.macro full_stride reduce
+	eor		LO.16b, LO.16b, LO.16b
+	eor		MI.16b, MI.16b, MI.16b
+	eor		HI.16b, HI.16b, HI.16b
+
+	ld1		{M0.16b, M1.16b, M2.16b, M3.16b}, [x0], #64
+	ld1		{M4.16b, M5.16b, M6.16b, M7.16b}, [x0], #64
+
+	karatsuba1 M7 KEY1
+	.if (\reduce)
+	pmull	T.1q, GSTAR.1d, PL.1d
+	.endif
+
+	karatsuba1 M6 KEY2
+	.if (\reduce)
+	ext	T.16b, T.16b, T.16b, #8
+	.endif
+
+	karatsuba1 M5 KEY3
+	.if (\reduce)
+	eor	PL.16b, PL.16b, T.16b
+	.endif
+
+	karatsuba1 M4 KEY4
+	.if (\reduce)
+	pmull2	V.1q, GSTAR.2d, PL.2d
+	.endif
+
+	karatsuba1 M3 KEY5
+	.if (\reduce)
+	eor	V.16b, PL.16b, V.16b
+	.endif
+
+	karatsuba1 M2 KEY6
+	.if (\reduce)
+	eor	PH.16b, PH.16b, V.16b
+	.endif
+
+	karatsuba1 M1 KEY7
+	.if (\reduce)
+	mov	SUM.16b, PH.16b
+	.endif
+	eor	M0.16b, M0.16b, SUM.16b
+
+	karatsuba1 M0 KEY8
+
+	karatsuba2
+.endm
+
+/*
+ * Handle any extra blocks before
+ * full_stride loop.
+ */
+.macro partial_stride
+	eor		LO.16b, LO.16b, LO.16b
+	eor		MI.16b, MI.16b, MI.16b
+	eor		HI.16b, HI.16b, HI.16b
+	add		KEY_START, x1, #(NUM_PRECOMPUTE_POWERS << 4)
+	sub		KEY_START, KEY_START, PARTIAL_LEFT, lsl #4
+	ld1		{v0.16b}, [KEY_START]
+	mov		v1.16b, SUM.16b
+	karatsuba1 v0 v1
+	karatsuba2
+	montgomery_reduction
+	mov		SUM.16b, PH.16b
+	eor		LO.16b, LO.16b, LO.16b
+	eor		MI.16b, MI.16b, MI.16b
+	eor		HI.16b, HI.16b, HI.16b
+	mov		IND, XZR
+.LloopPartial:
+	cmp		IND, PARTIAL_LEFT
+	bge		.LloopExitPartial
+
+	sub		TMP, IND, PARTIAL_LEFT
+
+	cmp		TMP, #-4
+	bgt		.Lgt4Partial
+	ld1		{M0.16b, M1.16b,  M2.16b, M3.16b}, [x0], #64
+	// Clobber key registers
+	ld1		{KEY8.16b, KEY7.16b, KEY6.16b,  KEY5.16b}, [KEY_START], #64
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	karatsuba1 M2 KEY6
+	karatsuba1 M3 KEY5
+	add		IND, IND, #4
+	b		.LoutPartial
+
+.Lgt4Partial:
+	cmp		TMP, #-3
+	bgt		.Lgt3Partial
+	ld1		{M0.16b, M1.16b, M2.16b}, [x0], #48
+	// Clobber key registers
+	ld1		{KEY8.16b, KEY7.16b, KEY6.16b}, [KEY_START], #48
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	karatsuba1 M2 KEY6
+	add		IND, IND, #3
+	b		.LoutPartial
+
+.Lgt3Partial:
+	cmp		TMP, #-2
+	bgt		.Lgt2Partial
+	ld1		{M0.16b, M1.16b}, [x0], #32
+	// Clobber key registers
+	ld1		{KEY8.16b, KEY7.16b}, [KEY_START], #32
+	karatsuba1 M0 KEY8
+	karatsuba1 M1 KEY7
+	add		IND, IND, #2
+	b		.LoutPartial
+
+.Lgt2Partial:
+	ld1		{M0.16b}, [x0], #16
+	// Clobber key registers
+	ld1		{KEY8.16b}, [KEY_START], #16
+	karatsuba1 M0 KEY8
+	add		IND, IND, #1
+.LoutPartial:
+	b .LloopPartial
+.LloopExitPartial:
+	karatsuba2
+	montgomery_reduction
+	eor		SUM.16b, SUM.16b, PH.16b
+.endm
+
+/*
+ * Perform montgomery multiplication in GF(2^128) and store result in op1.
+ *
+ * Computes op1*op2*x^{-128} mod x^128 + x^127 + x^126 + x^121 + 1
+ * If op1, op2 are in montgomery form, this computes the montgomery
+ * form of op1*op2.
+ *
+ * void pmull_polyval_mul(u8 *op1, const u8 *op2);
+ */
+SYM_FUNC_START(pmull_polyval_mul)
+	adr		TMP, .Lgstar
+	ld1		{GSTAR.2d}, [TMP]
+	eor		LO.16b, LO.16b, LO.16b
+	eor		MI.16b, MI.16b, MI.16b
+	eor		HI.16b, HI.16b, HI.16b
+	ld1		{v0.16b}, [x0]
+	ld1		{v1.16b}, [x1]
+	karatsuba1 v0 v1
+	karatsuba2
+	montgomery_reduction
+	st1		{PH.16b}, [x0]
+	ret
+SYM_FUNC_END(pmull_polyval_mul)
+
+/*
+ * Perform polynomial evaluation as specified by POLYVAL.  This computes:
+ * 	h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
+ * where n=nblocks, h is the hash key, and m_i are the message blocks.
+ *
+ * x0 - pointer to message blocks
+ * x1 - pointer to precomputed key powers h^8 ... h^1
+ * x2 - number of blocks to hash
+ * x3 - pointer to accumulator
+ *
+ * void pmull_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
+ *			     size_t nblocks, u8 *accumulator);
+ */
+SYM_FUNC_START(pmull_polyval_update)
+	adr		TMP, .Lgstar
+	ld1		{GSTAR.2d}, [TMP]
+	ld1		{SUM.16b}, [x3]
+	ands		PARTIAL_LEFT, BLOCKS_LEFT, #7
+	beq		.LskipPartial
+	partial_stride
+.LskipPartial:
+	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	blt		.LstrideLoopExit
+	ld1		{KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [x1], #64
+	ld1		{KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [x1], #64
+	full_stride 0
+	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	blt		.LstrideLoopExitReduce
+.LstrideLoop:
+	full_stride 1
+	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
+	bge		.LstrideLoop
+.LstrideLoopExitReduce:
+	montgomery_reduction
+	mov		SUM.16b, PH.16b
+.LstrideLoopExit:
+	st1		{SUM.16b}, [x3]
+	ret
+SYM_FUNC_END(pmull_polyval_update)
diff --git a/arch/arm64/crypto/polyval-ce-glue.c b/arch/arm64/crypto/polyval-ce-glue.c
new file mode 100644
index 000000000000..52cf87c36043
--- /dev/null
+++ b/arch/arm64/crypto/polyval-ce-glue.c
@@ -0,0 +1,363 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Accelerated POLYVAL implementation with ARMv8 Crypto Extensions
+ * instructions. This file contains glue code.
+ *
+ * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1@iki.fi>
+ * Copyright (c) 2009 Intel Corp.
+ *   Author: Huang Ying <ying.huang@intel.com>
+ * Copyright 2021 Google LLC
+ */
+/*
+ * Glue code based on ghash-clmulni-intel_glue.c.
+ *
+ * This implementation of POLYVAL uses Montgomery multiplication accelerated by
+ * ARMv8 Crypto Extensions instructions for the finite field operations.
+ */
+
+#include <crypto/algapi.h>
+#include <crypto/cryptd.h>
+#include <crypto/gf128mul.h>
+#include <crypto/internal/hash.h>
+#include <crypto/internal/simd.h>
+#include <crypto/polyval.h>
+#include <linux/crypto.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/cpufeature.h>
+#include <asm/neon.h>
+#include <asm/simd.h>
+#include <asm/unaligned.h>
+
+#define NUM_PRECOMPUTE_POWERS	8
+
+struct polyval_async_ctx {
+	struct cryptd_ahash *cryptd_tfm;
+};
+
+struct polyval_ctx {
+	u8 key_powers[NUM_PRECOMPUTE_POWERS][POLYVAL_BLOCK_SIZE];
+};
+
+struct polyval_desc_ctx {
+	u8 buffer[POLYVAL_BLOCK_SIZE];
+	u32 bytes;
+};
+
+asmlinkage void pmull_polyval_update(const u8 *in, const struct polyval_ctx
+				     *ctx, size_t nblocks, u8 *accumulator);
+asmlinkage void pmull_polyval_mul(u8 *op1, const u8 *op2);
+
+static int polyval_init(struct shash_desc *desc)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memset(dctx, 0, sizeof(*dctx));
+
+	return 0;
+}
+
+static int polyval_setkey(struct crypto_shash *tfm,
+			const u8 *key, unsigned int keylen)
+{
+	struct polyval_ctx *ctx = crypto_shash_ctx(tfm);
+	int i;
+
+	if (keylen != POLYVAL_BLOCK_SIZE)
+		return -EINVAL;
+
+	BUILD_BUG_ON(sizeof(u128) != POLYVAL_BLOCK_SIZE);
+
+	memcpy(ctx->key_powers[NUM_PRECOMPUTE_POWERS-1], key,
+	       POLYVAL_BLOCK_SIZE);
+
+	kernel_neon_begin();
+	for (i = NUM_PRECOMPUTE_POWERS-2; i >= 0; i--) {
+		memcpy(ctx->key_powers[i], key, POLYVAL_BLOCK_SIZE);
+		pmull_polyval_mul(ctx->key_powers[i], ctx->key_powers[i+1]);
+	}
+	kernel_neon_end();
+
+	return 0;
+}
+
+static int polyval_update(struct shash_desc *desc,
+			 const u8 *src, unsigned int srclen)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+	u8 *pos;
+	unsigned int nblocks;
+	unsigned int n;
+
+	kernel_neon_begin();
+	if (dctx->bytes) {
+		n = min(srclen, dctx->bytes);
+		pos = dctx->buffer + POLYVAL_BLOCK_SIZE - dctx->bytes;
+
+		dctx->bytes -= n;
+		srclen -= n;
+
+		while (n--)
+			*pos++ ^= *src++;
+
+		if (!dctx->bytes)
+			pmull_polyval_mul(dctx->buffer,
+				ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+	}
+
+	nblocks = srclen/POLYVAL_BLOCK_SIZE;
+	pmull_polyval_update(src, ctx, nblocks, dctx->buffer);
+	srclen -= nblocks*POLYVAL_BLOCK_SIZE;
+	kernel_neon_end();
+
+	if (srclen) {
+		dctx->bytes = POLYVAL_BLOCK_SIZE - srclen;
+		src += nblocks*POLYVAL_BLOCK_SIZE;
+		pos = dctx->buffer;
+		while (srclen--)
+			*pos++ ^= *src++;
+	}
+
+	return 0;
+}
+
+static int polyval_final(struct shash_desc *desc, u8 *dst)
+{
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+	struct polyval_ctx *ctx = crypto_shash_ctx(desc->tfm);
+
+	if (dctx->bytes) {
+		kernel_neon_begin();
+		pmull_polyval_mul(dctx->buffer,
+			ctx->key_powers[NUM_PRECOMPUTE_POWERS-1]);
+		kernel_neon_end();
+	}
+
+	dctx->bytes = 0;
+	memcpy(dst, dctx->buffer, POLYVAL_BLOCK_SIZE);
+
+	return 0;
+}
+
+static struct shash_alg polyval_alg = {
+	.digestsize	= POLYVAL_DIGEST_SIZE,
+	.init		= polyval_init,
+	.update		= polyval_update,
+	.final		= polyval_final,
+	.setkey		= polyval_setkey,
+	.descsize	= sizeof(struct polyval_desc_ctx),
+	.base		= {
+		.cra_name		= "__polyval",
+		.cra_driver_name	= "__polyval-ce",
+		.cra_priority		= 0,
+		.cra_flags		= CRYPTO_ALG_INTERNAL,
+		.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+		.cra_ctxsize		= sizeof(struct polyval_ctx),
+		.cra_module		= THIS_MODULE,
+	},
+};
+
+static int polyval_async_init(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct crypto_shash *child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return crypto_shash_init(desc);
+}
+
+static int polyval_async_update(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_update(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return shash_ahash_update(req, desc);
+}
+
+static int polyval_async_final(struct ahash_request *req)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_final(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+
+	return crypto_shash_final(desc, req->result);
+}
+
+static int polyval_async_import(struct ahash_request *req, const void *in)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	polyval_async_init(req);
+	memcpy(dctx, in, sizeof(*dctx));
+	return 0;
+
+}
+
+static int polyval_async_export(struct ahash_request *req, void *out)
+{
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct shash_desc *desc = cryptd_shash_desc(cryptd_req);
+	struct polyval_desc_ctx *dctx = shash_desc_ctx(desc);
+
+	memcpy(out, dctx, sizeof(*dctx));
+	return 0;
+
+}
+
+static int polyval_async_digest(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct ahash_request *cryptd_req = ahash_request_ctx(req);
+	struct cryptd_ahash *cryptd_tfm = ctx->cryptd_tfm;
+	struct shash_desc *desc;
+	struct crypto_shash *child;
+
+	if (!crypto_simd_usable() ||
+	    (in_atomic() && cryptd_ahash_queued(cryptd_tfm))) {
+		memcpy(cryptd_req, req, sizeof(*req));
+		ahash_request_set_tfm(cryptd_req, &cryptd_tfm->base);
+		return crypto_ahash_digest(cryptd_req);
+	}
+	desc = cryptd_shash_desc(cryptd_req);
+	child = cryptd_ahash_child(cryptd_tfm);
+
+	desc->tfm = child;
+	return shash_ahash_digest(req, desc);
+}
+
+static int polyval_async_setkey(struct crypto_ahash *tfm, const u8 *key,
+			      unsigned int keylen)
+{
+	struct polyval_async_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct crypto_ahash *child = &ctx->cryptd_tfm->base;
+
+	crypto_ahash_clear_flags(child, CRYPTO_TFM_REQ_MASK);
+	crypto_ahash_set_flags(child, crypto_ahash_get_flags(tfm)
+			       & CRYPTO_TFM_REQ_MASK);
+	return crypto_ahash_setkey(child, key, keylen);
+}
+
+static int polyval_async_init_tfm(struct crypto_tfm *tfm)
+{
+	struct cryptd_ahash *cryptd_tfm;
+	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	cryptd_tfm = cryptd_alloc_ahash("__polyval-ce",
+					CRYPTO_ALG_INTERNAL,
+					CRYPTO_ALG_INTERNAL);
+	if (IS_ERR(cryptd_tfm))
+		return PTR_ERR(cryptd_tfm);
+	ctx->cryptd_tfm = cryptd_tfm;
+	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
+				 sizeof(struct ahash_request) +
+				 crypto_ahash_reqsize(&cryptd_tfm->base));
+
+	return 0;
+}
+
+static void polyval_async_exit_tfm(struct crypto_tfm *tfm)
+{
+	struct polyval_async_ctx *ctx = crypto_tfm_ctx(tfm);
+
+	cryptd_free_ahash(ctx->cryptd_tfm);
+}
+
+static struct ahash_alg polyval_async_alg = {
+	.init		= polyval_async_init,
+	.update		= polyval_async_update,
+	.final		= polyval_async_final,
+	.setkey		= polyval_async_setkey,
+	.digest		= polyval_async_digest,
+	.export		= polyval_async_export,
+	.import		= polyval_async_import,
+	.halg = {
+		.digestsize	= POLYVAL_DIGEST_SIZE,
+		.statesize = sizeof(struct polyval_desc_ctx),
+		.base = {
+			.cra_name		= "polyval",
+			.cra_driver_name	= "polyval-ce",
+			.cra_priority		= 200,
+			.cra_ctxsize		= sizeof(struct polyval_async_ctx),
+			.cra_flags		= CRYPTO_ALG_ASYNC,
+			.cra_blocksize		= POLYVAL_BLOCK_SIZE,
+			.cra_module		= THIS_MODULE,
+			.cra_init		= polyval_async_init_tfm,
+			.cra_exit		= polyval_async_exit_tfm,
+		},
+	},
+};
+
+static int __init polyval_ce_mod_init(void)
+{
+	int err;
+
+	if (!cpu_have_named_feature(ASIMD))
+		return -ENODEV;
+
+	if (!cpu_have_named_feature(PMULL))
+		return -ENODEV;
+
+	err = crypto_register_shash(&polyval_alg);
+	if (err)
+		goto err_out;
+	err = crypto_register_ahash(&polyval_async_alg);
+	if (err)
+		goto err_shash;
+
+	return 0;
+
+err_shash:
+	crypto_unregister_shash(&polyval_alg);
+err_out:
+	return err;
+}
+
+static void __exit polyval_ce_mod_exit(void)
+{
+	crypto_unregister_ahash(&polyval_async_alg);
+	crypto_unregister_shash(&polyval_alg);
+}
+
+static const struct cpu_feature polyval_cpu_feature[] = {
+	{ cpu_feature(PMULL) }, { }
+};
+MODULE_DEVICE_TABLE(cpu, polyval_cpu_feature);
+
+module_init(polyval_ce_mod_init);
+module_exit(polyval_ce_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("POLYVAL hash function accelerated by ARMv8 Crypto Extensions");
+MODULE_ALIAS_CRYPTO("polyval");
+MODULE_ALIAS_CRYPTO("polyval-ce");
-- 
2.35.1.723.g4982287a31-goog



* [PATCH v3 8/8] fscrypt: Add HCTR2 support for filename encryption
  2022-03-15 23:00 [PATCH v3 0/8] crypto: HCTR2 support Nathan Huckleberry
                   ` (6 preceding siblings ...)
  2022-03-15 23:00 ` [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL " Nathan Huckleberry
@ 2022-03-15 23:00 ` Nathan Huckleberry
  7 siblings, 0 replies; 15+ messages in thread
From: Nathan Huckleberry @ 2022-03-15 23:00 UTC (permalink / raw)
  To: linux-crypto
  Cc: Herbert Xu, David S. Miller, linux-arm-kernel, Paul Crowley,
	Eric Biggers, Sami Tolvanen, Ard Biesheuvel, Nathan Huckleberry

HCTR2 is a tweakable, length-preserving encryption mode.  It has the
same security guarantees as Adiantum, but is intended for use on CPUs
with dedicated crypto instructions.  It fixes a known weakness with
filename encryption: when two filenames in the same directory share a
prefix of >= 16 bytes, with CTS-CBC their encrypted filenames share a
common substring, leaking information.  HCTR2 does not have this
problem.

More information on HCTR2 can be found here: Length-preserving
encryption with HCTR2: https://eprint.iacr.org/2021/1441.pdf

Signed-off-by: Nathan Huckleberry <nhuck@google.com>
---
 Documentation/filesystems/fscrypt.rst | 19 ++++++++++++++-----
 fs/crypto/fscrypt_private.h           |  2 +-
 fs/crypto/keysetup.c                  |  7 +++++++
 fs/crypto/policy.c                    |  4 ++++
 include/uapi/linux/fscrypt.h          |  3 ++-
 tools/include/uapi/linux/fscrypt.h    |  3 ++-
 6 files changed, 30 insertions(+), 8 deletions(-)

diff --git a/Documentation/filesystems/fscrypt.rst b/Documentation/filesystems/fscrypt.rst
index 4d5d50dca65c..09915086abd8 100644
--- a/Documentation/filesystems/fscrypt.rst
+++ b/Documentation/filesystems/fscrypt.rst
@@ -337,6 +337,7 @@ Currently, the following pairs of encryption modes are supported:
 - AES-256-XTS for contents and AES-256-CTS-CBC for filenames
 - AES-128-CBC for contents and AES-128-CTS-CBC for filenames
 - Adiantum for both contents and filenames
+- AES-256-XTS for contents and AES-256-HCTR2 for filenames
 
 If unsure, you should use the (AES-256-XTS, AES-256-CTS-CBC) pair.
 
@@ -357,6 +358,14 @@ To use Adiantum, CONFIG_CRYPTO_ADIANTUM must be enabled.  Also, fast
 implementations of ChaCha and NHPoly1305 should be enabled, e.g.
 CONFIG_CRYPTO_CHACHA20_NEON and CONFIG_CRYPTO_NHPOLY1305_NEON for ARM.
 
+AES-256-HCTR2 is another true wide-block encryption mode.  It has the same
+security guarantees as Adiantum, but is intended for use on CPUs with dedicated
+crypto instructions. See the paper "Length-preserving encryption with HCTR2"
+(https://eprint.iacr.org/2021/1441.pdf) for more details. To use HCTR2,
+CONFIG_CRYPTO_HCTR2 must be enabled. Also, fast implementations of XCTR and
+POLYVAL should be enabled, e.g. CRYPTO_POLYVAL_ARM64_CE and
+CRYPTO_AES_ARM64_CE_BLK for ARM64.
+
 New encryption modes can be added relatively easily, without changes
 to individual filesystems.  However, authenticated encryption (AE)
 modes are not currently supported because of the difficulty of dealing
@@ -404,11 +413,11 @@ alternatively has the file's nonce (for `DIRECT_KEY policies`_) or
 inode number (for `IV_INO_LBLK_64 policies`_) included in the IVs.
 Thus, IV reuse is limited to within a single directory.
 
-With CTS-CBC, the IV reuse means that when the plaintext filenames
-share a common prefix at least as long as the cipher block size (16
-bytes for AES), the corresponding encrypted filenames will also share
-a common prefix.  This is undesirable.  Adiantum does not have this
-weakness, as it is a wide-block encryption mode.
+With CTS-CBC, the IV reuse means that when the plaintext filenames share a
+common prefix at least as long as the cipher block size (16 bytes for AES), the
+corresponding encrypted filenames will also share a common prefix.  This is
+undesirable.  Adiantum and HCTR2 do not have this weakness, as they are
+wide-block encryption modes.
 
 All supported filenames encryption modes accept any plaintext length
 >= 16 bytes; cipher block alignment is not required.  However,
diff --git a/fs/crypto/fscrypt_private.h b/fs/crypto/fscrypt_private.h
index 5b0a9e6478b5..d8617d01f7bd 100644
--- a/fs/crypto/fscrypt_private.h
+++ b/fs/crypto/fscrypt_private.h
@@ -31,7 +31,7 @@
 #define FSCRYPT_CONTEXT_V2	2
 
 /* Keep this in sync with include/uapi/linux/fscrypt.h */
-#define FSCRYPT_MODE_MAX	FSCRYPT_MODE_ADIANTUM
+#define FSCRYPT_MODE_MAX	FSCRYPT_MODE_AES_256_HCTR2
 
 struct fscrypt_context_v1 {
 	u8 version; /* FSCRYPT_CONTEXT_V1 */
diff --git a/fs/crypto/keysetup.c b/fs/crypto/keysetup.c
index eede186b04ce..ae24b581d3d7 100644
--- a/fs/crypto/keysetup.c
+++ b/fs/crypto/keysetup.c
@@ -53,6 +53,13 @@ struct fscrypt_mode fscrypt_modes[] = {
 		.ivsize = 32,
 		.blk_crypto_mode = BLK_ENCRYPTION_MODE_ADIANTUM,
 	},
+	[FSCRYPT_MODE_AES_256_HCTR2] = {
+		.friendly_name = "HCTR2",
+		.cipher_str = "hctr2(aes)",
+		.keysize = 32,
+		.security_strength = 32,
+		.ivsize = 32,
+	},
 };
 
 static DEFINE_MUTEX(fscrypt_mode_key_setup_mutex);
diff --git a/fs/crypto/policy.c b/fs/crypto/policy.c
index ed3d623724cd..fa8bdb8c76b7 100644
--- a/fs/crypto/policy.c
+++ b/fs/crypto/policy.c
@@ -54,6 +54,10 @@ static bool fscrypt_valid_enc_modes(u32 contents_mode, u32 filenames_mode)
 	    filenames_mode == FSCRYPT_MODE_ADIANTUM)
 		return true;
 
+	if (contents_mode == FSCRYPT_MODE_AES_256_XTS &&
+	    filenames_mode == FSCRYPT_MODE_AES_256_HCTR2)
+		return true;
+
 	return false;
 }
 
diff --git a/include/uapi/linux/fscrypt.h b/include/uapi/linux/fscrypt.h
index 9f4428be3e36..a756b29afcc2 100644
--- a/include/uapi/linux/fscrypt.h
+++ b/include/uapi/linux/fscrypt.h
@@ -27,7 +27,8 @@
 #define FSCRYPT_MODE_AES_128_CBC		5
 #define FSCRYPT_MODE_AES_128_CTS		6
 #define FSCRYPT_MODE_ADIANTUM			9
-/* If adding a mode number > 9, update FSCRYPT_MODE_MAX in fscrypt_private.h */
+#define FSCRYPT_MODE_AES_256_HCTR2		10
+/* If adding a mode number > 10, update FSCRYPT_MODE_MAX in fscrypt_private.h */
 
 /*
  * Legacy policy version; ad-hoc KDF and no key verification.
diff --git a/tools/include/uapi/linux/fscrypt.h b/tools/include/uapi/linux/fscrypt.h
index 9f4428be3e36..a756b29afcc2 100644
--- a/tools/include/uapi/linux/fscrypt.h
+++ b/tools/include/uapi/linux/fscrypt.h
@@ -27,7 +27,8 @@
 #define FSCRYPT_MODE_AES_128_CBC		5
 #define FSCRYPT_MODE_AES_128_CTS		6
 #define FSCRYPT_MODE_ADIANTUM			9
-/* If adding a mode number > 9, update FSCRYPT_MODE_MAX in fscrypt_private.h */
+#define FSCRYPT_MODE_AES_256_HCTR2		10
+/* If adding a mode number > 10, update FSCRYPT_MODE_MAX in fscrypt_private.h */
 
 /*
  * Legacy policy version; ad-hoc KDF and no key verification.
-- 
2.35.1.723.g4982287a31-goog



* Re: [PATCH v3 1/8] crypto: xctr - Add XCTR support
  2022-03-15 23:00 ` [PATCH v3 1/8] crypto: xctr - Add XCTR support Nathan Huckleberry
@ 2022-03-22  5:23   ` Eric Biggers
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Biggers @ 2022-03-22  5:23 UTC (permalink / raw)
  To: Nathan Huckleberry
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Tue, Mar 15, 2022 at 11:00:28PM +0000, Nathan Huckleberry wrote:
> Add a generic implementation of XCTR mode as a template.  XCTR is a
> blockcipher mode similar to CTR mode.  XCTR uses XORs and little-endian
> addition rather than big-endian arithmetic which has two advantages:  It
> is slightly faster on little-endian CPUs and it is less likely to be
> implemented incorrect since integer overflows are not possible on
> practical input sizes.  XCTR is used as a component to implement HCTR2.
> 
> More information on XCTR mode can be found in the HCTR2 paper:
> https://eprint.iacr.org/2021/1441.pdf
> 
> Signed-off-by: Nathan Huckleberry <nhuck@google.com>

Looks good, feel free to add:

Reviewed-by: Eric Biggers <ebiggers@google.com>

A few minor nits below:

> +// Limited to 16-byte blocks for simplicity
> +#define XCTR_BLOCKSIZE 16
> +
> +static void crypto_xctr_crypt_final(struct skcipher_walk *walk,
> +				   struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	u8 keystream[XCTR_BLOCKSIZE];
> +	u8 *src = walk->src.virt.addr;

Use 'const u8 *src'

> +static int crypto_xctr_crypt_segment(struct skcipher_walk *walk,
> +				    struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
> +		   crypto_cipher_alg(tfm)->cia_encrypt;
> +	u8 *src = walk->src.virt.addr;

Likewise, 'const u8 *src'

> +	u8 *dst = walk->dst.virt.addr;
> +	unsigned int nbytes = walk->nbytes;
> +	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
> +
> +	do {
> +		/* create keystream */
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +		fn(crypto_cipher_tfm(tfm), dst, walk->iv);
> +		crypto_xor(dst, src, XCTR_BLOCKSIZE);
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));

The comment "/* create keystream */" is a bit misleading, since the part of the
code that it describes isn't just creating the keystream, but also XOR'ing it
with the data.  It would be better to just remove that comment.

> +
> +		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);

This could use le32_add_cpu().
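
I.e., something like this (untested):

	le32_add_cpu(&ctr32, 1);

which does the same thing using the existing helper.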

> +
> +		src += XCTR_BLOCKSIZE;
> +		dst += XCTR_BLOCKSIZE;
> +	} while ((nbytes -= XCTR_BLOCKSIZE) >= XCTR_BLOCKSIZE);
> +
> +	return nbytes;
> +}
> +
> +static int crypto_xctr_crypt_inplace(struct skcipher_walk *walk,
> +				    struct crypto_cipher *tfm, u32 byte_ctr)
> +{
> +	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
> +		   crypto_cipher_alg(tfm)->cia_encrypt;
> +	unsigned long alignmask = crypto_cipher_alignmask(tfm);
> +	unsigned int nbytes = walk->nbytes;
> +	u8 *src = walk->src.virt.addr;

Perhaps call this 'data' instead of 'src', since here it's both the source and
destination?

> +	u8 tmp[XCTR_BLOCKSIZE + MAX_CIPHER_ALIGNMASK];
> +	u8 *keystream = PTR_ALIGN(tmp + 0, alignmask + 1);
> +	__le32 ctr32 = cpu_to_le32(byte_ctr / XCTR_BLOCKSIZE + 1);
> +
> +	do {
> +		/* create keystream */

Likewise, remove or clarify the '/* create keystream */' comment.

> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +		fn(crypto_cipher_tfm(tfm), keystream, walk->iv);
> +		crypto_xor(src, keystream, XCTR_BLOCKSIZE);
> +		crypto_xor(walk->iv, (u8 *)&ctr32, sizeof(ctr32));
> +
> +		ctr32 = cpu_to_le32(le32_to_cpu(ctr32) + 1);

Likewise, le32_add_cpu().

- Eric


* Re: [PATCH v3 2/8] crypto: polyval - Add POLYVAL support
  2022-03-15 23:00 ` [PATCH v3 2/8] crypto: polyval - Add POLYVAL support Nathan Huckleberry
@ 2022-03-22  5:55   ` Eric Biggers
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Biggers @ 2022-03-22  5:55 UTC (permalink / raw)
  To: Nathan Huckleberry
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Tue, Mar 15, 2022 at 11:00:29PM +0000, Nathan Huckleberry wrote:
> Add support for POLYVAL, an ε-Δ-universal hash function similar to
> GHASH.  POLYVAL is used as a component to implement HCTR2 mode.
> 
> POLYVAL is implemented as an shash algorithm.  The implementation is
> modified from ghash-generic.c.
> 
> More information on POLYVAL can be found in the HCTR2 paper:
> https://eprint.iacr.org/2021/1441.pdf
> 
> Signed-off-by: Nathan Huckleberry <nhuck@google.com>

Generally looks good, feel free to add:

	Reviewed-by: Eric Biggers <ebiggers@google.com>

But, I think you should mention that POLYVAL is originally from AES-GCM-SIV (RFC
8452).  It's true that the kernel doesn't implement AES-GCM-SIV currently, but
it's still important to mention.  Both the commit message and comment in
crypto/polyval-generic.c should mention this, IMO.  As-is, the only hint of this
in this patch is the comment above the test vectors.

Your explanation about how POLYVAL can be implemented on top of GHASH is also a
bit incomplete.  Linking to
https://datatracker.ietf.org/doc/html/rfc8452#appendix-A would be helpful.
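
For anyone following along, the identity in that appendix has roughly this
shape (from memory, so double-check the exact mulX constant against the RFC):

	POLYVAL(h, m_1, ..., m_n) =
		ByteReverse(GHASH(mulX(ByteReverse(h)),
				  ByteReverse(m_1), ..., ByteReverse(m_n)))

i.e. a GHASH implementation can compute POLYVAL by byte-reversing the key and
message blocks and pre-multiplying the key by x in the appropriate field.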

- Eric


* Re: [PATCH v3 3/8] crypto: hctr2 - Add HCTR2 support
  2022-03-15 23:00 ` [PATCH v3 3/8] crypto: hctr2 - Add HCTR2 support Nathan Huckleberry
@ 2022-03-22  7:00   ` Eric Biggers
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Biggers @ 2022-03-22  7:00 UTC (permalink / raw)
  To: Nathan Huckleberry
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Tue, Mar 15, 2022 at 11:00:30PM +0000, Nathan Huckleberry wrote:
> +struct hctr2_tfm_ctx {
> +	struct crypto_cipher *blockcipher;
> +	struct crypto_skcipher *xctr;
> +	struct crypto_shash *polyval;
> +	u8 L[BLOCKCIPHER_BLOCK_SIZE];
> +};

How about adding a comment at the end of the struct above that says that the
struct is followed by the two exported_length_digests?  (Or hashed_tweaklen,
which is the name used in the helper functions I suggest below?)
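
Something like this, perhaps (adjust to whichever field name wins):

	struct hctr2_tfm_ctx {
		struct crypto_cipher *blockcipher;
		struct crypto_skcipher *xctr;
		struct crypto_shash *polyval;
		u8 L[BLOCKCIPHER_BLOCK_SIZE];
		/*
		 * This struct is allocated with extra space for two exported
		 * hash states: the pre-hashed tweak lengths for messages that
		 * are and are not a multiple of the block size.
		 */
	};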

> +
> +struct hctr2_request_ctx {
> +	u8 first_block[BLOCKCIPHER_BLOCK_SIZE];
> +	u8 xctr_iv[BLOCKCIPHER_BLOCK_SIZE];
> +	struct scatterlist *bulk_part_dst;
> +	struct scatterlist *bulk_part_src;
> +	struct scatterlist sg_src[2];
> +	struct scatterlist sg_dst[2];
> +	/* Sub-requests, must be last */
> +	union {
> +		struct shash_desc hash_desc;
> +		struct skcipher_request xctr_req;
> +	} u;
> +};

Likewise above for the hashed tweak.

Also how about adding inline functions or macros that return these new fields,
so that the arithmetic to find them doesn't have to be duplicated in the code?
The 'exported_length_digests' array local variables are a bit weird.  Maybe just
use some helper functions directly to get at the fields?

How about:

static inline u8 *hctr2_hashed_tweaklen(const struct hctr2_tfm_ctx *tctx,
                                        bool odd)
{
        u8 *p = (u8 *)tctx + sizeof(*tctx);

        if (odd) /* For messages not a multiple of block length */
                p += crypto_shash_statesize(tctx->polyval);
        return p;
}

static inline u8 *hctr2_hashed_tweak(const struct hctr2_tfm_ctx *tctx,
                                     struct hctr2_request_ctx *rctx)
{
        return (u8 *)rctx + tctx->hashed_tweak_offset;
}

> +static int hctr2_setkey(struct crypto_skcipher *tfm, const u8 *key,
> +			unsigned int keylen)
> +{
> +	struct hctr2_tfm_ctx *tctx = crypto_skcipher_ctx(tfm);
> +	u8 hbar[BLOCKCIPHER_BLOCK_SIZE];
> +	__le64 tweak_length_block[2];
> +	void *exported_length_digests[2];
> +	SHASH_DESC_ON_STACK(shash, tfm->polyval);
> +	int err;
> +
> +	exported_length_digests[0] = (u8 *)tctx + sizeof(*tctx);
> +	exported_length_digests[1] = (u8 *)tctx + sizeof(*tctx) +
> +				     crypto_shash_descsize(tctx->polyval);

The size needed by crypto_shash_export() is crypto_shash_statesize(), not
crypto_shash_descsize().  They happen to be the same with all polyval
implementations you've proposed, but it's not guaranteed for shash algorithms in
general.
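
So this should be (and likewise anywhere else these offsets are computed,
including however the extra context space is sized):

	exported_length_digests[0] = (u8 *)tctx + sizeof(*tctx);
	exported_length_digests[1] = (u8 *)tctx + sizeof(*tctx) +
				     crypto_shash_statesize(tctx->polyval);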

> +	crypto_cipher_clear_flags(tctx->blockcipher, CRYPTO_TFM_REQ_MASK);
> +	crypto_cipher_set_flags(tctx->blockcipher,
> +				crypto_skcipher_get_flags(tfm) &
> +				CRYPTO_TFM_REQ_MASK);
> +	err = crypto_cipher_setkey(tctx->blockcipher, key, keylen);
> +	if (err)
> +		return err;
> +
> +	crypto_skcipher_clear_flags(tctx->xctr, CRYPTO_TFM_REQ_MASK);
> +	crypto_skcipher_set_flags(tctx->xctr,
> +				  crypto_skcipher_get_flags(tfm) &
> +				  CRYPTO_TFM_REQ_MASK);
> +	err = crypto_skcipher_setkey(tctx->xctr, key, keylen);
> +	if (err)
> +		return err;
> +
> +	memset(tctx->L, 0, sizeof(tctx->L));
> +	memset(hbar, 0, sizeof(hbar));
> +	tctx->L[0] = 0x01;
> +	crypto_cipher_encrypt_one(tctx->blockcipher, tctx->L, tctx->L);
> +	crypto_cipher_encrypt_one(tctx->blockcipher, hbar, hbar);
> +
> +	crypto_shash_clear_flags(tctx->polyval, CRYPTO_TFM_REQ_MASK);
> +	crypto_shash_set_flags(tctx->polyval, crypto_skcipher_get_flags(tfm) &
> +			       CRYPTO_TFM_REQ_MASK);
> +	err = crypto_shash_setkey(tctx->polyval, hbar, BLOCKCIPHER_BLOCK_SIZE);
> +	if (err)
> +		return err;
> +	memzero_explicit(hbar, sizeof(hbar));
> +
> +	shash->tfm = tctx->polyval;
> +	memset(tweak_length_block, 0, sizeof(tweak_length_block));
> +
> +	tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 2);
> +	err = crypto_shash_init(shash);
> +	if (err)
> +		return err;
> +	err = crypto_shash_update(shash, (u8 *)tweak_length_block,
> +				  POLYVAL_BLOCK_SIZE);
> +	if (err)
> +		return err;
> +	err = crypto_shash_export(shash, exported_length_digests[0]);
> +	if (err)
> +		return err;
> +
> +	tweak_length_block[0] = cpu_to_le64(TWEAK_SIZE * 8 * 2 + 3);
> +	err = crypto_shash_init(shash);
> +	if (err)
> +		return err;
> +	err = crypto_shash_update(shash, (u8 *)tweak_length_block,
> +				  POLYVAL_BLOCK_SIZE);
> +	if (err)
> +		return err;
> +	return crypto_shash_export(shash, exported_length_digests[1]);
> +}

hctr2_setkey() is getting pretty long.  How about splitting the tweak length
pre-hashing into a helper function?

Also, a comment that explains why the tweak length is being pre-hashed, and why
it *can* be pre-hashed, would be helpful.  Note that it is only possible because
this implementation only supports one tweak length.
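
Rough sketch of what I have in mind, reusing the hctr2_hashed_tweaklen()
helper suggested above (untested, constants copied from your code):

	/*
	 * The tweak length is fixed at TWEAK_SIZE, so the POLYVAL hash of the
	 * tweak length block can be computed once at setkey time and the
	 * exported hash state reused for every request.  Two states are
	 * needed because the length block differs for messages that are and
	 * are not a multiple of the block size.
	 */
	static int hctr2_hash_tweaklen(struct hctr2_tfm_ctx *tctx, bool odd)
	{
		SHASH_DESC_ON_STACK(shash, tctx->polyval);
		__le64 tweak_length_block[2];
		int err;

		shash->tfm = tctx->polyval;
		memset(tweak_length_block, 0, sizeof(tweak_length_block));
		tweak_length_block[0] =
			cpu_to_le64(TWEAK_SIZE * 8 * 2 + 2 + !!odd);
		err = crypto_shash_init(shash);
		if (err)
			return err;
		err = crypto_shash_update(shash, (u8 *)tweak_length_block,
					  POLYVAL_BLOCK_SIZE);
		if (err)
			return err;
		return crypto_shash_export(shash,
					   hctr2_hashed_tweaklen(tctx, odd));
	}

Then hctr2_setkey() just calls it twice, once with odd=false and once with
odd=true.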

- Eric


* Re: [PATCH v3 6/8] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL
  2022-03-15 23:00 ` [PATCH v3 6/8] crypto: x86/polyval: Add PCLMULQDQ accelerated implementation of POLYVAL Nathan Huckleberry
@ 2022-03-23  2:15   ` Eric Biggers
  0 siblings, 0 replies; 15+ messages in thread
From: Eric Biggers @ 2022-03-23  2:15 UTC (permalink / raw)
  To: Nathan Huckleberry
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Tue, Mar 15, 2022 at 11:00:33PM +0000, Nathan Huckleberry wrote:
> diff --git a/arch/x86/crypto/polyval-clmulni_asm.S b/arch/x86/crypto/polyval-clmulni_asm.S
> new file mode 100644
> index 000000000000..ad7126d9f0ff
> --- /dev/null
> +++ b/arch/x86/crypto/polyval-clmulni_asm.S
> @@ -0,0 +1,376 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/*
> + * Copyright 2021 Google LLC
> + */
> +/*
> + * This is an efficient implementation of POLYVAL using intel PCLMULQDQ-NI
> + * instructions. It works on 8 blocks at a time, by precomputing the first 8
> + * keys powers h^8, ..., h^1 in the POLYVAL finite field. This precomputation
> + * allows us to split finite field multiplication into two steps.
> + *
> + * In the first step, we consider h^i, m_i as normal polynomials of degree less
> + * than 128. We then compute p(x) = h^8m_0 + ... + h^1m_7 where multiplication
> + * is simply polynomial multiplication.
> + *
> + * In the second step, we compute the reduction of p(x) modulo the finite field
> + * modulus g(x) = x^128 + x^127 + x^126 + x^121 + 1.
> + *
> + * This two step process is equivalent to computing h^8m_0 + ... + h^1m_7 where
> + * multiplication is finite field multiplication. The advantage is that the
> + * two-step process  only requires 1 finite field reduction for every 8
> + * polynomial multiplications. Further parallelism is gained by interleaving the
> + * multiplications and polynomial reductions.
> + */
> +
> +#include <linux/linkage.h>
> +#include <asm/frame.h>
> +
> +#define NUM_PRECOMPUTE_POWERS 8

STRIDE_BLOCKS might be a better name than NUM_PRECOMPUTE_POWERS.  Shorter but
more descriptive, IMO.

> +/*
> + * Performs schoolbook1_iteration on two lists of 128-bit polynomials of length
> + * b pointed to by MSG and KEY_POWERS.
> + */
> +.macro schoolbook1 count
> +	.set i, 0
> +	.rept (\count)
> +		schoolbook1_iteration i 0
> +		.set i, (i +1)
> +	.endr
> +.endm

'count', not 'b'.

> +/*
> + * Computes the product of two 128-bit polynomials at the memory locations
> + * specified by (MSG + 16*i) and (KEY_POWERS + 16*i) and XORs the components of the
> + * 256-bit product into LO, MI, HI.
> + *
> + * The multiplication produces four parts:
> + *   LOW: The polynomial given by performing carryless multiplication of the
> + *   bottom 64-bits of each polynomial
> + *   MID1: The polynomial given by performing carryless multiplication of the
> + *   bottom 64-bits of the first polynomial and the top 64-bits of the second
> + *   MID2: The polynomial given by performing carryless multiplication of the
> + *   bottom 64-bits of the second polynomial and the top 64-bits of the first
> + *   HIGH: The polynomial given by performing carryless multiplication of the
> + *   top 64-bits of each polynomial
> + *
> + * We compute:
> + *  LO ^= LOW
> + *  MI ^= MID1 ^ MID2
> + *  HI ^= HIGH
> + *
> + * Later, the 256-bit result can be extracted as:
> + *   [HI_H : HI_L ^ MI_H : LO_H ^ MI_L : LO_L]
> + * This step is done when computing the polynomial reduction for efficiency
> + * reasons.
> + *
> + * If xor_sum == 1, then also XOR the value of SUM into m_0.  This avoids an
> + * extra multiplication of SUM and h^N.
> + */

h^8 instead of h^N?  The above is one of only two places where "N" is mentioned,
and the other uses it to mean something different from here.

> +/*
> + * Performs the same computation as schoolbook1_iteration, except we expect the
> + * arguments to already be loaded into xmm0 and xmm1.
> + */
> +.macro schoolbook1_noload
> +	vpclmulqdq $0x01, %xmm0, %xmm1, %xmm2
> +	vpclmulqdq $0x00, %xmm0, %xmm1, %xmm3
> +	vpclmulqdq $0x11, %xmm0, %xmm1, %xmm4
> +	vpclmulqdq $0x10, %xmm0, %xmm1, %xmm5
> +	vpxor %xmm2, MI, MI
> +	vpxor %xmm3, LO, LO
> +	vpxor %xmm5, MI, MI
> +	vpxor %xmm4, HI, HI
> +.endm

How about making this macro set LO, MI, and HI directly instead of XOR'ing into
them?  That's actually what the two users of it want.  I.e.:

/*
 * Performs the same computation as schoolbook1_iteration, except we expect the
 * arguments to already be loaded into xmm0 and xmm1, and we set the result
 * registers LO, MI, and HI directly rather than XOR'ing into them.
 */
.macro schoolbook1_noload
        vpclmulqdq $0x01, %xmm0, %xmm1, MI
        vpclmulqdq $0x10, %xmm0, %xmm1, %xmm2
        vpclmulqdq $0x00, %xmm0, %xmm1, LO
        vpclmulqdq $0x11, %xmm0, %xmm1, HI
        vpxor %xmm2, MI, MI
.endm

That would save some instructions.

> +/*
> + * Computes the 128-bit reduction of PL, PH. Stores the result in PH.

"PL, PH" => "PH : PL".

Also mention which register this clobbers.

> + *
> + * This macro computes p(x) mod g(x) where p(x) is in montgomery form and g(x) =
> + * x^128 + x^127 + x^126 + x^121 + 1.
> + *
> + * We have a 256-bit polynomial P_3 : P_2 : P_1 : P_0 that is the product of

"P_3 : P_2 : P_1 : P_0" => "PH : PL = P_3 : P_2 : P_1 : P_0", so that it's clear
how P_3 through P_0 relate to PH and PL.

> + * two 128-bit polynomials in Montgomery form.  We need to reduce it mod g(x).
> + * Also, since polynomials in Montgomery form have an "extra" factor of x^128,
> + * this product has two extra factors of x^128.  To get it back into Montgomery
> + * form, we need to remove one of these factors by dividing by x^128.
> + *
> + * To accomplish both of these goals, we add multiples of g(x) that cancel out
> + * the low 128 bits P_1 : P_0, leaving just the high 128 bits. Since the low
> + * bits are zero, the polynomial division by x^128 can be done by right shifting.
> + *
> + * Since the only nonzero term in the low 64 bits of g(x) is the constant term,
> + * the multiple of g(x) needed to cancel out P_0 is P_0 * g(x).  The CPU can
> + * only do 64x64 bit multiplications, so split P_0 * g(x) into x^128 * P_0 +
> + * x^64 g*(x) * P_0 + P_0, where g*(x) is bits 64-127 of g(x).  Adding this to

"x^64 g*(x)" => "x^64 * g*(x)"

> + * the original polynomial gives P_3 : P_2 + P_0 + T_1 : P_1 + T_0 : 0, where T
> + * = T_1 : T_0 = g*(x) * P0.  Thus, bits 0-63 got "folded" into bits 64-191.

"P0" => "P_0"

> + *
> + * Repeating this same process on the next 64 bits "folds" bits 64-127 into bits
> + * 128-255, giving the answer in bits 128-255. This time, we need to cancel P_1
> + * + T_0 in bits 64-127. The multiple of g(x) required is (P_1 + T_0) * g(x) *
> + * x^64. Adding this to our previous computation gives P_3 + P_1 + T_0 + V_1 :
> + * P_2 + P_0 + T_1 + V_0 : 0 : 0, where V = V_1 : V_0 = g*(x) * (P_1 + T_0).
> + *
> + * So our final computation is:
> + *   T = T_1 : T_0 = g*(x) * P_0
> + *   V = V_1 : V_0 = g*(x) * (T_0 ^ P_1)
> + *   p(x) / x^{128} mod g(x) = P_3 ^ P_1 ^ V_1 ^ T_0 : P_2 ^ P_0 ^ V_0 ^ T_1

The notation suddenly changes from + to ^.  How about consistently using +?
Or ^, either one as long as it's consistent...

Also, for the final line, the order "P_3 + P_1 + T_0 + V_1 : P_2 + P_0 + T_1 +
V_0" would make more sense, as it would match the logic of the code.

> + *
> + * The implementation below saves a XOR instruction by computing P_1 ^ T_0 : P_0
> + * ^ T_1 and XORing into PH, rather than directly XORing P_1 : P_0, T_0 : T1
> + * into PH.  This allows us to reuse P_1 ^ T_0 when computing V.
> + */
> +.macro montgomery_reduction
> +	movdqa PL, T
> +	pclmulqdq $0x00, GSTAR, T # T = [P_0 * g*(x)]
> +	pshufd $0b01001110, T, V # V = [T_0 : T_1]
> +	pxor V, PL # PL = [P_1 ^ T_0 : P_0 ^ T_1]
> +	pxor PL, PH # PH = [P_1 ^ T_0 ^ P_3 : P_0 ^ T_1 ^ P_2]
> +	pclmulqdq $0x11, GSTAR, PL # PL = [(P_1 ^ T_0) * g*(x)]
> +	pxor PL, PH
> +.endm

Several comments here:

- Aligning the comments would make them much easier to read.

- Only one temporary register is needed, since T isn't used after it's used to
  compute V.

- The thing called V isn't actually the same as the V described in the long
  comment above.  Maybe just call the temporary variable 'TMP_XMM' or something?
  Or even just hard-code %xmm6, similar to %xmm0-%xmm5.

- It's not necessary to modify PL.

- Since this file is relying on AVX anyway, the three-operand instructions are
  available, and can be used to avoid the 'movdqa' at the beginning.

- None of the users of this macro really want the result in register PH.  How
  about passing the destination register as an argument and using vpxor to put
  it in the appropriate place?

So in summary, this is what I'd suggest:

.macro montgomery_reduction dest
	vpclmulqdq $0x00, GSTAR, PL, TMP_XMM	# TMP_XMM = T_1 : T_0 = P_0 * g*(x)
	pshufd $0b01001110, TMP_XMM, TMP_XMM	# TMP_XMM = T_0 : T_1
	pxor PL, TMP_XMM			# TMP_XMM = P_1 + T_0 : P_0 + T_1
	pxor TMP_XMM, PH			# PH = P_3 + P_1 + T_0 : P_2 + P_0 + T_1
	pclmulqdq $0x11, GSTAR, TMP_XMM		# TMP_XMM = V_1 : V_0 = V = [(P_1 + T_0) * g*(x)]
	vpxor TMP_XMM, PH, \dest
.endm

> +
> +/*
> + * Compute schoolbook multiplication for 8 blocks
> + * m_0h^8 + ... + m_7h^1
> + *
> + * If reduce is set, also computes the montgomery reduction of the
> + * previous full_stride call and XORs with the first message block.
> + * (m_0 + REDUCE(PL, PH))h^8 + ... + m_7h^1.
> + * I.e., the first multiplication uses m_0 + REDUCE(PL, PH) instead of m_0.
> + *
> + * Sets PL, PH
> + * Clobbers LO, HI, MI
> + *
> + */
> +.macro full_stride reduce
> +	mov %rsi, KEY_POWERS

I don't see why KEY_POWERS and %rsi are different registers.  Why not just
define KEY_POWERS to %rsi?  It stays the same during any full_strides, and then
will be incremented by partial_stride.  That's fine.
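
I.e. just

	#define KEY_POWERS	%rsi

alongside the other register definitions (assuming that's how the register
names in this file are declared).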

[...]
> +	addq $(8*16), KEY_POWERS

As per the above, there's no need to increment KEY_POWERS here.

> +	schoolbook2
> +.endm
> +
> +/*
> + * Compute poly on window size of %rdx blocks
> + * 0 < %rdx < NUM_PRECOMPUTE_POWERS
> + */

The code doesn't actually use %rdx directly.  It should be BLOCKS_LEFT.

> +.macro partial_stride
> +	pxor LO, LO
> +	pxor HI, HI
> +	pxor MI, MI
> +	mov BLOCKS_LEFT, TMP
> +	shlq $4, TMP
> +	mov %rsi, KEY_POWERS
> +	addq $(16*NUM_PRECOMPUTE_POWERS), KEY_POWERS
> +	subq TMP, KEY_POWERS
> +	# Multiply sum by h^N
> +	movups (KEY_POWERS), %xmm0
> +	movdqa SUM, %xmm1
> +	schoolbook1_noload
> +	schoolbook2
> +	montgomery_reduction
> +	movdqa PH, SUM
> +	pxor LO, LO
> +	pxor HI, HI
> +	pxor MI, MI
> +	xor IDX, IDX
> +.LloopPartial:
> +	cmpq BLOCKS_LEFT, IDX # IDX < rdx
> +	jae .LloopExitPartial
> +
> +	movq BLOCKS_LEFT, TMP
> +	subq IDX, TMP # TMP = rdx - IDX
> +
> +	cmp $4, TMP # TMP < 4 ?
> +	jl .Llt4Partial
> +	schoolbook1 4
> +	addq $4, IDX
> +	addq $(4*16), MSG
> +	addq $(4*16), KEY_POWERS
> +	jmp .LoutPartial
> +.Llt4Partial:
> +	cmp $3, TMP # TMP < 3 ?
> +	jl .Llt3Partial
> +	schoolbook1 3
> +	addq $3, IDX
> +	addq $(3*16), MSG
> +	addq $(3*16), KEY_POWERS
> +	jmp .LoutPartial
> +.Llt3Partial:
> +	cmp $2, TMP # TMP < 2 ?
> +	jl .Llt2Partial
> +	schoolbook1 2
> +	addq $2, IDX
> +	addq $(2*16), MSG
> +	addq $(2*16), KEY_POWERS
> +	jmp .LoutPartial
> +.Llt2Partial:
> +	schoolbook1 1 # TMP < 1 ?
> +	addq $1, IDX
> +	addq $(1*16), MSG
> +	addq $(1*16), KEY_POWERS
> +.LoutPartial:
> +	jmp .LloopPartial
> +.LloopExitPartial:
> +	schoolbook2
> +	montgomery_reduction
> +	pxor PH, SUM
> +.endm

This can be simplified and optimized quite a bit:

- The first schoolbook2 and montgomery_reduction are unnecessary.
- The IDX variable is unnecessary.
- There's no need for a loop if there are going to be separate cases for 4, 2,
  and 1 blocks anyway.  We can just always jump forward.
- There's no need to increment MSG and KEY_POWERS after the last block.

Can you consider the following?

/*
 * Process BLOCKS_LEFT blocks, where 0 < BLOCKS_LEFT < STRIDE_BLOCKS
 */
.macro partial_stride
	mov BLOCKS_LEFT, TMP
	shlq $4, TMP
	addq $(16*STRIDE_BLOCKS), KEY_POWERS
	subq TMP, KEY_POWERS

	movups (MSG), %xmm0
	pxor SUM, %xmm0
	movaps (KEY_POWERS), %xmm1
	schoolbook1_noload
	dec BLOCKS_LEFT
	addq $16, MSG
	addq $16, KEY_POWERS

	test $4, BLOCKS_LEFT
	jz .Lpartial4BlocksDone
	schoolbook1 4
	addq $(4*16), MSG
	addq $(4*16), KEY_POWERS
.Lpartial4BlocksDone:
	test $2, BLOCKS_LEFT
	jz .Lpartial2BlocksDone
	schoolbook1 2
	addq $(2*16), MSG
	addq $(2*16), KEY_POWERS
.Lpartial2BlocksDone:
	test $1, BLOCKS_LEFT
	jz .LpartialDone
	schoolbook1 1
.LpartialDone:
	schoolbook2
	montgomery_reduction SUM
.endm

> +	FRAME_END
> +	ret
> +SYM_FUNC_END(clmul_polyval_mul)

It needs to be RET, not ret.  See https://git.kernel.org/linus/f94909ceb1ed4bfd

> +
> +/*
> + * Perform polynomial evaluation as specified by POLYVAL.  This computes:
> + * 	h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
> + * where n=nblocks, h is the hash key, and m_i are the message blocks.
> + *
> + * rdi - pointer to message blocks
> + * rsi - pointer to precomputed key powers h^8 ... h^1
> + * rdx - number of blocks to hash
> + * rcx - pointer to the accumulator
> + *
> + * void clmul_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
> + *			     size_t nblocks, u8 *accumulator);
> + */
> +SYM_FUNC_START(clmul_polyval_update)
> +	FRAME_BEGIN
> +	vmovdqa .Lgstar(%rip), GSTAR
> +	movups (%rcx), SUM
> +	cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
> +	jb .LstrideLoopExit
> +	full_stride 0
> +	subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
> +.LstrideLoop:
> +	cmpq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
> +	jb .LstrideLoopExitReduce
> +	full_stride 1
> +	subq $NUM_PRECOMPUTE_POWERS, BLOCKS_LEFT
> +	jmp .LstrideLoop
> +.LstrideLoopExitReduce:
> +	montgomery_reduction
> +	movdqa PH, SUM
> +.LstrideLoopExit:
> +	test BLOCKS_LEFT, BLOCKS_LEFT
> +	je .LskipPartial
> +	partial_stride
> +.LskipPartial:
> +	movups SUM, (%rcx)
> +	FRAME_END
> +	ret
> +SYM_FUNC_END(clmul_polyval_update)

There are several unneeded instructions above.  Unconditional jumps can be
avoided, as can comparisons if they are already paired with subtractions using
the same amounts (since on x86, subtractions set the flags too).

Consider the following:

SYM_FUNC_START(clmul_polyval_update)
	FRAME_BEGIN
	vmovdqa .Lgstar(%rip), GSTAR
	movups (%rcx), SUM
	subq $STRIDE_BLOCKS, BLOCKS_LEFT
	js .LstrideLoopExit
	full_stride 0
	subq $STRIDE_BLOCKS, BLOCKS_LEFT
	js .LstrideLoopExitReduce
.LstrideLoop:
	full_stride 1
	subq $STRIDE_BLOCKS, BLOCKS_LEFT
	jns .LstrideLoop
.LstrideLoopExitReduce:
	montgomery_reduction SUM
.LstrideLoopExit:
	add $STRIDE_BLOCKS, BLOCKS_LEFT
	jz .LskipPartial
	partial_stride
.LskipPartial:
	movups SUM, (%rcx)
	FRAME_END
	RET
SYM_FUNC_END(clmul_polyval_update)


- Eric


* Re: [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
  2022-03-15 23:00 ` [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL " Nathan Huckleberry
@ 2022-03-24  1:37   ` Eric Biggers
  2022-04-05  1:55     ` Nathan Huckleberry
  0 siblings, 1 reply; 15+ messages in thread
From: Eric Biggers @ 2022-03-24  1:37 UTC (permalink / raw)
  To: Nathan Huckleberry
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Tue, Mar 15, 2022 at 11:00:34PM +0000, Nathan Huckleberry wrote:
> Add hardware accelerated version of POLYVAL for ARM64 CPUs with
> Crypto Extension support.

Nit: It's "Crypto Extensions", not "Crypto Extension".

> +config CRYPTO_POLYVAL_ARM64_CE
> +	tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
> +	depends on KERNEL_MODE_NEON
> +	select CRYPTO_CRYPTD
> +	select CRYPTO_HASH
> +	select CRYPTO_POLYVAL

CRYPTO_POLYVAL selects CRYPTO_HASH already, so there's no need to select it
here.
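
I.e. just:

	config CRYPTO_POLYVAL_ARM64_CE
		tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
		depends on KERNEL_MODE_NEON
		select CRYPTO_CRYPTD
		select CRYPTO_POLYVAL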

> +/*
> + * Perform polynomial evaluation as specified by POLYVAL.  This computes:
> + * 	h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
> + * where n=nblocks, h is the hash key, and m_i are the message blocks.
> + *
> + * x0 - pointer to message blocks
> + * x1 - pointer to precomputed key powers h^8 ... h^1
> + * x2 - number of blocks to hash
> + * x3 - pointer to accumulator
> + *
> + * void pmull_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
> + *			     size_t nblocks, u8 *accumulator);
> + */
> +SYM_FUNC_START(pmull_polyval_update)
> +	adr		TMP, .Lgstar
> +	ld1		{GSTAR.2d}, [TMP]
> +	ld1		{SUM.16b}, [x3]
> +	ands		PARTIAL_LEFT, BLOCKS_LEFT, #7
> +	beq		.LskipPartial
> +	partial_stride
> +.LskipPartial:
> +	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> +	blt		.LstrideLoopExit
> +	ld1		{KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [x1], #64
> +	ld1		{KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [x1], #64
> +	full_stride 0
> +	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> +	blt		.LstrideLoopExitReduce
> +.LstrideLoop:
> +	full_stride 1
> +	subs		BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> +	bge		.LstrideLoop
> +.LstrideLoopExitReduce:
> +	montgomery_reduction
> +	mov		SUM.16b, PH.16b
> +.LstrideLoopExit:
> +	st1		{SUM.16b}, [x3]
> +	ret
> +SYM_FUNC_END(pmull_polyval_update)

Is there a reason why partial_stride is done first in the arm64 implementation,
but last in the x86 implementation?  It would be nice if the implementations
worked the same way.  Probably last would be better?  What is the advantage of
doing it first?

Besides that, many of the comments I made on the x86 implementation apply to the
arm64 implementation too.

- Eric


* Re: [PATCH v3 7/8] crypto: arm64/polyval: Add PMULL accelerated implementation of POLYVAL
  2022-03-24  1:37   ` Eric Biggers
@ 2022-04-05  1:55     ` Nathan Huckleberry
  0 siblings, 0 replies; 15+ messages in thread
From: Nathan Huckleberry @ 2022-04-05  1:55 UTC (permalink / raw)
  To: Eric Biggers
  Cc: linux-crypto, Herbert Xu, David S. Miller, linux-arm-kernel,
	Paul Crowley, Sami Tolvanen, Ard Biesheuvel

On Wed, Mar 23, 2022 at 8:37 PM Eric Biggers <ebiggers@kernel.org> wrote:
>
> On Tue, Mar 15, 2022 at 11:00:34PM +0000, Nathan Huckleberry wrote:
> > Add hardware accelerated version of POLYVAL for ARM64 CPUs with
> > Crypto Extension support.
>
> Nit: It's "Crypto Extensions", not "Crypto Extension".
>
> > +config CRYPTO_POLYVAL_ARM64_CE
> > +     tristate "POLYVAL using ARMv8 Crypto Extensions (for HCTR2)"
> > +     depends on KERNEL_MODE_NEON
> > +     select CRYPTO_CRYPTD
> > +     select CRYPTO_HASH
> > +     select CRYPTO_POLYVAL
>
> CRYPTO_POLYVAL selects CRYPTO_HASH already, so there's no need to select it
> here.
>
> > +/*
> > + * Perform polynomial evaluation as specified by POLYVAL.  This computes:
> > + *   h^n * accumulator + h^n * m_0 + ... + h^1 * m_{n-1}
> > + * where n=nblocks, h is the hash key, and m_i are the message blocks.
> > + *
> > + * x0 - pointer to message blocks
> > + * x1 - pointer to precomputed key powers h^8 ... h^1
> > + * x2 - number of blocks to hash
> > + * x3 - pointer to accumulator
> > + *
> > + * void pmull_polyval_update(const u8 *in, const struct polyval_ctx *ctx,
> > + *                        size_t nblocks, u8 *accumulator);
> > + */
> > +SYM_FUNC_START(pmull_polyval_update)
> > +     adr             TMP, .Lgstar
> > +     ld1             {GSTAR.2d}, [TMP]
> > +     ld1             {SUM.16b}, [x3]
> > +     ands            PARTIAL_LEFT, BLOCKS_LEFT, #7
> > +     beq             .LskipPartial
> > +     partial_stride
> > +.LskipPartial:
> > +     subs            BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > +     blt             .LstrideLoopExit
> > +     ld1             {KEY8.16b, KEY7.16b, KEY6.16b, KEY5.16b}, [x1], #64
> > +     ld1             {KEY4.16b, KEY3.16b, KEY2.16b, KEY1.16b}, [x1], #64
> > +     full_stride 0
> > +     subs            BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > +     blt             .LstrideLoopExitReduce
> > +.LstrideLoop:
> > +     full_stride 1
> > +     subs            BLOCKS_LEFT, BLOCKS_LEFT, #NUM_PRECOMPUTE_POWERS
> > +     bge             .LstrideLoop
> > +.LstrideLoopExitReduce:
> > +     montgomery_reduction
> > +     mov             SUM.16b, PH.16b
> > +.LstrideLoopExit:
> > +     st1             {SUM.16b}, [x3]
> > +     ret
> > +SYM_FUNC_END(pmull_polyval_update)
>
> Is there a reason why partial_stride is done first in the arm64 implementation,
> but last in the x86 implementation?  It would be nice if the implementations
> worked the same way.  Probably last would be better?  What is the advantage of
> doing it first?

It was so I could return early without loading the keys into registers, since
I only need them if there's a full stride.  I was able to rewrite it to work
the same way as the x86 implementation.
>
> Besides that, many of the comments I made on the x86 implementation apply to the
> arm64 implementation too.
>
> - Eric

