* [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

Hello,

Crypto algorithms must produce the same output for the same input
regardless of data layout, i.e. how the src and dst scatterlists are
divided into chunks and how each chunk is aligned.  Request flags such
as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.
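
For example (an illustrative sketch using the standard scatterlist
API; the buffer and split sizes here are made up), these two layouts
of the same 256-byte message must encrypt to byte-identical output:

	u8 buf[256];
	struct scatterlist sg_whole, sg_split[2];

	/* The whole message as a single scatterlist entry. */
	sg_init_one(&sg_whole, buf, sizeof(buf));

	/* The same message split unevenly across two entries. */
	sg_init_table(sg_split, 2);
	sg_set_buf(&sg_split[0], buf, 100);
	sg_set_buf(&sg_split[1], buf + 100, 156);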

However, testing of this currently has many gaps.  For example,
individual algorithms are responsible for providing their own chunked
test vectors.  But many don't bother to do this or test only one or two
cases, providing poor test coverage.  Also, other things such as buffers
spanning a page boundary, misaligned IVs, and CRYPTO_TFM_REQ_MAY_SLEEP
are never tested at all.

Test code is also duplicated between the chunked and non-chunked cases,
which makes other improvements difficult.

To improve the situation, this patch series moves the chunk
descriptions into the testmgr itself so that they are shared by all
algorithms.  However, it's done in an extensible way via a new struct
'testvec_config', which describes not just the scaled chunk lengths but
also all other aspects of the crypto operation besides the data itself:
the buffer alignments, the request flags, whether the operation is
in-place, the IV alignment, and, for hash algorithms, when to do each
update() and when to use finup() vs. final() vs. digest().

Then, this patch series reworks the skcipher, aead, and hash tests to
run each test vector against a list of default testvec_configs,
replacing the current test code.  This improves overall test coverage,
without reducing test
performance too much.  Note that the test vectors themselves are not
changed, except for removing the chunk lists.
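
For illustration, one of the default configs could look like the
following (a sketch: the values are invented, but the fields are
exactly those of the 'testvec_config' struct added in patch 8):

	static const struct testvec_config example_cfg = {
		.name = "uneven misaligned splits, may sleep",
		.req_flags = CRYPTO_TFM_REQ_MAY_SLEEP,
		.src_divs = {
			{ .proportion_of_total = 1900, .offset = 33 },
			{ .proportion_of_total = 3300, .offset = 7 },
			{ .proportion_of_total = 4800, .offset = 18 },
		},
		.iv_offset = 3,
	};

The chunk lengths are proportions out of TEST_SG_TOTAL (10000), so a
config scales to fit any test vector length.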

This series also adds randomized fuzz tests, enabled by a new kconfig
option intended for developer use only, where skcipher, aead, and hash
algorithms are tested against many randomly generated testvec_configs.
This provides much more comprehensive test coverage.

I've run these improved tests on x86, arm32, and arm64 with all crypto
algorithms enabled, and they have already found many bugs.  Patches 1-7
and the patches from Ard Biesheuvel fix most of the bugs found so far.
A bug was also detected in the Rockchip crypto driver, which remains
to be fixed.  Also, many AEADs incorrectly change
aead_request::base.tfm; for now I'm working around that in the tests,
as I plan to fix it after the other types of bugs are addressed.

If you have access to a system with an architecture or crypto driver
that may not have been tested yet, you can help by applying these
patches, enabling CONFIG_CRYPTO_MANAGER_EXTRA_TESTS, and reporting or
fixing any test failures.

This patch series can also be found in git at
https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git
branch "testmgr-improvements".

Changed since v1:

- Made CONFIG_CRYPTO_MANAGER_EXTRA_TESTS depend on CONFIG_DEBUG_KERNEL.
- Improved commit description of AEGIS and MORUS fixes.
- A few very minor cleanups to the test code.

Eric Biggers (15):
  crypto: aegis - fix handling chunked inputs
  crypto: morus - fix handling chunked inputs
  crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP
  crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP
  crypto: x86/aesni-gcm - fix crash on empty plaintext
  crypto: ahash - fix another early termination in hash walk
  crypto: arm64/aes-neonbs - fix returning final keystream block
  crypto: testmgr - add testvec_config struct and helper functions
  crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
  crypto: testmgr - implement random testvec_config generation
  crypto: testmgr - convert skcipher testing to use testvec_configs
  crypto: testmgr - convert aead testing to use testvec_configs
  crypto: testmgr - convert hash testing to use testvec_configs
  crypto: testmgr - check for skcipher_request corruption
  crypto: testmgr - check for aead_request corruption

 arch/arm64/crypto/aes-neonbs-core.S    |    8 +-
 arch/x86/crypto/aegis128-aesni-glue.c  |   38 +-
 arch/x86/crypto/aegis128l-aesni-glue.c |   38 +-
 arch/x86/crypto/aegis256-aesni-glue.c  |   38 +-
 arch/x86/crypto/aesni-intel_glue.c     |   13 +-
 arch/x86/crypto/morus1280_glue.c       |   40 +-
 arch/x86/crypto/morus640_glue.c        |   39 +-
 crypto/Kconfig                         |   10 +
 crypto/aegis128.c                      |   14 +-
 crypto/aegis128l.c                     |   14 +-
 crypto/aegis256.c                      |   14 +-
 crypto/ahash.c                         |   14 +-
 crypto/morus1280.c                     |   13 +-
 crypto/morus640.c                      |   13 +-
 crypto/testmgr.c                       | 2549 +++++++++++++-----------
 crypto/testmgr.h                       |  407 +---
 16 files changed, 1556 insertions(+), 1706 deletions(-)

-- 
2.20.1


* [PATCH v2 01/15] crypto: aegis - fix handling chunked inputs
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Ondrej Mosnacek

From: Eric Biggers <ebiggers@google.com>

The generic AEGIS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts.  The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data.  In fact, this can happen before the end.  Fix them.

Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/aegis128.c  | 14 +++++++-------
 crypto/aegis128l.c | 14 +++++++-------
 crypto/aegis256.c  | 14 +++++++-------
 3 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/crypto/aegis128.c b/crypto/aegis128.c
index 96e078a8a00a1..3718a83413032 100644
--- a/crypto/aegis128.c
+++ b/crypto/aegis128.c
@@ -286,19 +286,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
 					  const struct aegis128_ops *ops)
 {
 	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize;
 
 	ops->skcipher_walk_init(&walk, req, false);
 
 	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
+		unsigned int nbytes = walk.nbytes;
 
-		ops->crypt_chunk(state, dst, src, chunksize);
+		if (nbytes < walk.total)
+			nbytes = round_down(nbytes, walk.stride);
 
-		skcipher_walk_done(&walk, 0);
+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
+				 nbytes);
+
+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 }
 
diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
index a210e779b911b..275a8616d71bd 100644
--- a/crypto/aegis128l.c
+++ b/crypto/aegis128l.c
@@ -349,19 +349,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
 					   const struct aegis128l_ops *ops)
 {
 	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize;
 
 	ops->skcipher_walk_init(&walk, req, false);
 
 	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
+		unsigned int nbytes = walk.nbytes;
 
-		ops->crypt_chunk(state, dst, src, chunksize);
+		if (nbytes < walk.total)
+			nbytes = round_down(nbytes, walk.stride);
 
-		skcipher_walk_done(&walk, 0);
+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
+				 nbytes);
+
+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 }
 
diff --git a/crypto/aegis256.c b/crypto/aegis256.c
index 49882a28e93e9..ecd6b7f34a2d2 100644
--- a/crypto/aegis256.c
+++ b/crypto/aegis256.c
@@ -299,19 +299,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
 					  const struct aegis256_ops *ops)
 {
 	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize;
 
 	ops->skcipher_walk_init(&walk, req, false);
 
 	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
+		unsigned int nbytes = walk.nbytes;
 
-		ops->crypt_chunk(state, dst, src, chunksize);
+		if (nbytes < walk.total)
+			nbytes = round_down(nbytes, walk.stride);
 
-		skcipher_walk_done(&walk, 0);
+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
+				 nbytes);
+
+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 }
 
-- 
2.20.1


* [PATCH v2 02/15] crypto: morus - fix handling chunked inputs
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Ondrej Mosnacek

From: Eric Biggers <ebiggers@google.com>

The generic MORUS implementations all fail the improved AEAD tests
because they produce the wrong result with some data layouts.  The issue
is that they assume that if the skcipher_walk API gives 'nbytes' not
aligned to the walksize (a.k.a. walk.stride), then it is the end of the
data.  In fact, this can happen before the end.  Fix them.

Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/morus1280.c | 13 +++++++------
 crypto/morus640.c  | 13 +++++++------
 2 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/crypto/morus1280.c b/crypto/morus1280.c
index 78ba09db7328c..0747732d5b78a 100644
--- a/crypto/morus1280.c
+++ b/crypto/morus1280.c
@@ -362,18 +362,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
 					   const struct morus1280_ops *ops)
 {
 	struct skcipher_walk walk;
-	u8 *dst;
-	const u8 *src;
 
 	ops->skcipher_walk_init(&walk, req, false);
 
 	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
+		unsigned int nbytes = walk.nbytes;
 
-		ops->crypt_chunk(state, dst, src, walk.nbytes);
+		if (nbytes < walk.total)
+			nbytes = round_down(nbytes, walk.stride);
 
-		skcipher_walk_done(&walk, 0);
+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
+				 nbytes);
+
+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 }
 
diff --git a/crypto/morus640.c b/crypto/morus640.c
index 5cf530139b271..1617a1eb8be13 100644
--- a/crypto/morus640.c
+++ b/crypto/morus640.c
@@ -361,18 +361,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
 					  const struct morus640_ops *ops)
 {
 	struct skcipher_walk walk;
-	u8 *dst;
-	const u8 *src;
 
 	ops->skcipher_walk_init(&walk, req, false);
 
 	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
+		unsigned int nbytes = walk.nbytes;
 
-		ops->crypt_chunk(state, dst, src, walk.nbytes);
+		if (nbytes < walk.total)
+			nbytes = round_down(nbytes, walk.stride);
 
-		skcipher_walk_done(&walk, 0);
+		ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
+				 nbytes);
+
+		skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 }
 
-- 
2.20.1


* [PATCH v2 03/15] crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Ondrej Mosnacek

From: Eric Biggers <ebiggers@google.com>

The x86 AEGIS implementations all fail the improved AEAD tests because
they produce the wrong result with some data layouts.  The issue is that
they assume that if the skcipher_walk API gives 'nbytes' not aligned to
the walksize (a.k.a. walk.stride), then it is the end of the data.  In
fact, this can happen before the end.

Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
incorrectly sleep in the skcipher_walk_*() functions while preemption
has been disabled by kernel_fpu_begin().

Fix these bugs.

Fixes: 1d373d4e8e15 ("crypto: x86 - Add optimized AEGIS implementations")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/aegis128-aesni-glue.c  | 38 ++++++++++----------------
 arch/x86/crypto/aegis128l-aesni-glue.c | 38 ++++++++++----------------
 arch/x86/crypto/aegis256-aesni-glue.c  | 38 ++++++++++----------------
 3 files changed, 45 insertions(+), 69 deletions(-)

diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
index 2a356b948720e..3ea71b8718135 100644
--- a/arch/x86/crypto/aegis128-aesni-glue.c
+++ b/arch/x86/crypto/aegis128-aesni-glue.c
@@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
 }
 
 static void crypto_aegis128_aesni_process_crypt(
-		struct aegis_state *state, struct aead_request *req,
+		struct aegis_state *state, struct skcipher_walk *walk,
 		const struct aegis_crypt_ops *ops)
 {
-	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize, base;
-
-	ops->skcipher_walk_init(&walk, req, false);
-
-	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
-
-		ops->crypt_blocks(state, chunksize, src, dst);
-
-		base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
-		src += base;
-		dst += base;
-		chunksize &= AEGIS128_BLOCK_SIZE - 1;
-
-		if (chunksize > 0)
-			ops->crypt_tail(state, chunksize, src, dst);
+	while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
+		ops->crypt_blocks(state,
+				  round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
+				  walk->src.virt.addr, walk->dst.virt.addr);
+		skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
+	}
 
-		skcipher_walk_done(&walk, 0);
+	if (walk->nbytes) {
+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
+				walk->dst.virt.addr);
+		skcipher_walk_done(walk, 0);
 	}
 }
 
@@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
+	struct skcipher_walk walk;
 	struct aegis_state state;
 
+	ops->skcipher_walk_init(&walk, req, true);
+
 	kernel_fpu_begin();
 
 	crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
 	crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
-	crypto_aegis128_aesni_process_crypt(&state, req, ops);
+	crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
 	crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
 
 	kernel_fpu_end();
diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
index dbe8bb980da15..1b1b39c66c5e2 100644
--- a/arch/x86/crypto/aegis128l-aesni-glue.c
+++ b/arch/x86/crypto/aegis128l-aesni-glue.c
@@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
 }
 
 static void crypto_aegis128l_aesni_process_crypt(
-		struct aegis_state *state, struct aead_request *req,
+		struct aegis_state *state, struct skcipher_walk *walk,
 		const struct aegis_crypt_ops *ops)
 {
-	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize, base;
-
-	ops->skcipher_walk_init(&walk, req, false);
-
-	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
-
-		ops->crypt_blocks(state, chunksize, src, dst);
-
-		base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
-		src += base;
-		dst += base;
-		chunksize &= AEGIS128L_BLOCK_SIZE - 1;
-
-		if (chunksize > 0)
-			ops->crypt_tail(state, chunksize, src, dst);
+	while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
+		ops->crypt_blocks(state, round_down(walk->nbytes,
+						    AEGIS128L_BLOCK_SIZE),
+				  walk->src.virt.addr, walk->dst.virt.addr);
+		skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
+	}
 
-		skcipher_walk_done(&walk, 0);
+	if (walk->nbytes) {
+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
+				walk->dst.virt.addr);
+		skcipher_walk_done(walk, 0);
 	}
 }
 
@@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
+	struct skcipher_walk walk;
 	struct aegis_state state;
 
+	ops->skcipher_walk_init(&walk, req, true);
+
 	kernel_fpu_begin();
 
 	crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
 	crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
-	crypto_aegis128l_aesni_process_crypt(&state, req, ops);
+	crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
 	crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
 
 	kernel_fpu_end();
diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
index 8bebda2de92fe..6227ca3220a05 100644
--- a/arch/x86/crypto/aegis256-aesni-glue.c
+++ b/arch/x86/crypto/aegis256-aesni-glue.c
@@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
 }
 
 static void crypto_aegis256_aesni_process_crypt(
-		struct aegis_state *state, struct aead_request *req,
+		struct aegis_state *state, struct skcipher_walk *walk,
 		const struct aegis_crypt_ops *ops)
 {
-	struct skcipher_walk walk;
-	u8 *src, *dst;
-	unsigned int chunksize, base;
-
-	ops->skcipher_walk_init(&walk, req, false);
-
-	while (walk.nbytes) {
-		src = walk.src.virt.addr;
-		dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
-
-		ops->crypt_blocks(state, chunksize, src, dst);
-
-		base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
-		src += base;
-		dst += base;
-		chunksize &= AEGIS256_BLOCK_SIZE - 1;
-
-		if (chunksize > 0)
-			ops->crypt_tail(state, chunksize, src, dst);
+	while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
+		ops->crypt_blocks(state,
+				  round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
+				  walk->src.virt.addr, walk->dst.virt.addr);
+		skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
+	}
 
-		skcipher_walk_done(&walk, 0);
+	if (walk->nbytes) {
+		ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
+				walk->dst.virt.addr);
+		skcipher_walk_done(walk, 0);
 	}
 }
 
@@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
+	struct skcipher_walk walk;
 	struct aegis_state state;
 
+	ops->skcipher_walk_init(&walk, req, true);
+
 	kernel_fpu_begin();
 
 	crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
 	crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
-	crypto_aegis256_aesni_process_crypt(&state, req, ops);
+	crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
 	crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
 
 	kernel_fpu_end();
-- 
2.20.1


* [PATCH v2 04/15] crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Ondrej Mosnacek

From: Eric Biggers <ebiggers@google.com>

The x86 MORUS implementations all fail the improved AEAD tests because
they produce the wrong result with some data layouts.  The issue is that
they assume that if the skcipher_walk API gives 'nbytes' not aligned to
the walksize (a.k.a. walk.stride), then it is the end of the data.  In
fact, this can happen before the end.

Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
incorrectly sleep in the skcipher_walk_*() functions while preemption
has been disabled by kernel_fpu_begin().

Fix these bugs.

Fixes: 56e8e57fc3a7 ("crypto: morus - Add common SIMD glue code for MORUS")
Cc: <stable@vger.kernel.org> # v4.18+
Cc: Ondrej Mosnacek <omosnace@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/morus1280_glue.c | 40 +++++++++++++-------------------
 arch/x86/crypto/morus640_glue.c  | 39 ++++++++++++-------------------
 2 files changed, 31 insertions(+), 48 deletions(-)

diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
index 0dccdda1eb3a1..7e600f8bcdad8 100644
--- a/arch/x86/crypto/morus1280_glue.c
+++ b/arch/x86/crypto/morus1280_glue.c
@@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
 
 static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
 						struct morus1280_ops ops,
-						struct aead_request *req)
+						struct skcipher_walk *walk)
 {
-	struct skcipher_walk walk;
-	u8 *cursor_src, *cursor_dst;
-	unsigned int chunksize, base;
-
-	ops.skcipher_walk_init(&walk, req, false);
-
-	while (walk.nbytes) {
-		cursor_src = walk.src.virt.addr;
-		cursor_dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
-
-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
-
-		base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
-		cursor_src += base;
-		cursor_dst += base;
-		chunksize &= MORUS1280_BLOCK_SIZE - 1;
-
-		if (chunksize > 0)
-			ops.crypt_tail(state, cursor_src, cursor_dst,
-				       chunksize);
+	while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
+		ops.crypt_blocks(state, walk->src.virt.addr,
+				 walk->dst.virt.addr,
+				 round_down(walk->nbytes,
+					    MORUS1280_BLOCK_SIZE));
+		skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
+	}
 
-		skcipher_walk_done(&walk, 0);
+	if (walk->nbytes) {
+		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
+			       walk->nbytes);
+		skcipher_walk_done(walk, 0);
 	}
 }
 
@@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
 	struct morus1280_state state;
+	struct skcipher_walk walk;
+
+	ops.skcipher_walk_init(&walk, req, true);
 
 	kernel_fpu_begin();
 
 	ctx->ops->init(&state, &ctx->key, req->iv);
 	crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
-	crypto_morus1280_glue_process_crypt(&state, ops, req);
+	crypto_morus1280_glue_process_crypt(&state, ops, &walk);
 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
 
 	kernel_fpu_end();
diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
index 7b58fe4d9bd1a..cb3a817320160 100644
--- a/arch/x86/crypto/morus640_glue.c
+++ b/arch/x86/crypto/morus640_glue.c
@@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
 
 static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
 					       struct morus640_ops ops,
-					       struct aead_request *req)
+					       struct skcipher_walk *walk)
 {
-	struct skcipher_walk walk;
-	u8 *cursor_src, *cursor_dst;
-	unsigned int chunksize, base;
-
-	ops.skcipher_walk_init(&walk, req, false);
-
-	while (walk.nbytes) {
-		cursor_src = walk.src.virt.addr;
-		cursor_dst = walk.dst.virt.addr;
-		chunksize = walk.nbytes;
-
-		ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
-
-		base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
-		cursor_src += base;
-		cursor_dst += base;
-		chunksize &= MORUS640_BLOCK_SIZE - 1;
-
-		if (chunksize > 0)
-			ops.crypt_tail(state, cursor_src, cursor_dst,
-				       chunksize);
+	while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
+		ops.crypt_blocks(state, walk->src.virt.addr,
+				 walk->dst.virt.addr,
+				 round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
+		skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
+	}
 
-		skcipher_walk_done(&walk, 0);
+	if (walk->nbytes) {
+		ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
+			       walk->nbytes);
+		skcipher_walk_done(walk, 0);
 	}
 }
 
@@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
 	struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
 	struct morus640_state state;
+	struct skcipher_walk walk;
+
+	ops.skcipher_walk_init(&walk, req, true);
 
 	kernel_fpu_begin();
 
 	ctx->ops->init(&state, &ctx->key, req->iv);
 	crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
-	crypto_morus640_glue_process_crypt(&state, ops, req);
+	crypto_morus640_glue_process_crypt(&state, ops, &walk);
 	ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
 
 	kernel_fpu_end();
-- 
2.20.1


* [PATCH v2 05/15] crypto: x86/aesni-gcm - fix crash on empty plaintext
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Dave Watson

From: Eric Biggers <ebiggers@google.com>

gcmaes_crypt_by_sg() dereferences the NULL pointer returned by
scatterwalk_ffwd() when encrypting an empty plaintext and the source
scatterlist ends immediately after the associated data.

Fix it by only fast-forwarding to the src/dst data scatterlists if the
data length is nonzero.

This bug is reproduced by the "rfc4543(gcm(aes))" test vectors when run
with the new AEAD test manager.

Fixes: e845520707f8 ("crypto: aesni - Update aesni-intel_glue to use scatter/gather")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: Dave Watson <davejwatson@fb.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/x86/crypto/aesni-intel_glue.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 9b5ccde3ef315..1e3d2102033a0 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -813,11 +813,14 @@ static int gcmaes_crypt_by_sg(bool enc, struct aead_request *req,
 		scatterwalk_map_and_copy(assoc, req->src, 0, assoclen, 0);
 	}
 
-	src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
-	scatterwalk_start(&src_sg_walk, src_sg);
-	if (req->src != req->dst) {
-		dst_sg = scatterwalk_ffwd(dst_start, req->dst, req->assoclen);
-		scatterwalk_start(&dst_sg_walk, dst_sg);
+	if (left) {
+		src_sg = scatterwalk_ffwd(src_start, req->src, req->assoclen);
+		scatterwalk_start(&src_sg_walk, src_sg);
+		if (req->src != req->dst) {
+			dst_sg = scatterwalk_ffwd(dst_start, req->dst,
+						  req->assoclen);
+			scatterwalk_start(&dst_sg_walk, dst_sg);
+		}
 	}
 
 	kernel_fpu_begin();
-- 
2.20.1


* [PATCH v2 06/15] crypto: ahash - fix another early termination in hash walk
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable

From: Eric Biggers <ebiggers@google.com>

Hash algorithms with an alignmask set, e.g. "xcbc(aes-aesni)" and
"michael_mic", fail the improved hash tests because they sometimes
produce the wrong digest.  The bug is that in the case where a
scatterlist element crosses pages, not all the data is actually hashed
because the scatterlist walk terminates too early.  This happens because
the 'nbytes' variable in crypto_hash_walk_done() is assigned the number
of bytes remaining in the page, then later interpreted as the number of
bytes remaining in the scatterlist element.  Fix it.

Fixes: 900a081f6912 ("crypto: ahash - Fix early termination in hash walk")
Cc: stable@vger.kernel.org
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/ahash.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/crypto/ahash.c b/crypto/ahash.c
index ca0d3e281fefb..81e2767e2164e 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -86,17 +86,17 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
 int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 {
 	unsigned int alignmask = walk->alignmask;
-	unsigned int nbytes = walk->entrylen;
 
 	walk->data -= walk->offset;
 
-	if (nbytes && walk->offset & alignmask && !err) {
-		walk->offset = ALIGN(walk->offset, alignmask + 1);
-		nbytes = min(nbytes,
-			     ((unsigned int)(PAGE_SIZE)) - walk->offset);
-		walk->entrylen -= nbytes;
+	if (walk->entrylen && (walk->offset & alignmask) && !err) {
+		unsigned int nbytes;
 
+		walk->offset = ALIGN(walk->offset, alignmask + 1);
+		nbytes = min(walk->entrylen,
+			     (unsigned int)(PAGE_SIZE - walk->offset));
 		if (nbytes) {
+			walk->entrylen -= nbytes;
 			walk->data += walk->offset;
 			return nbytes;
 		}
@@ -116,7 +116,7 @@ int crypto_hash_walk_done(struct crypto_hash_walk *walk, int err)
 	if (err)
 		return err;
 
-	if (nbytes) {
+	if (walk->entrylen) {
 		walk->offset = 0;
 		walk->pg++;
 		return hash_walk_next(walk);
-- 
2.20.1


* [PATCH v2 07/15] crypto: arm64/aes-neonbs - fix returning final keystream block
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel, stable, Ard Biesheuvel

From: Eric Biggers <ebiggers@google.com>

The arm64 NEON bit-sliced implementation of AES-CTR fails the improved
skcipher tests because it sometimes produces the wrong ciphertext.  The
bug is that the final keystream block isn't returned from the assembly
code when the number of non-final blocks is zero.  This can happen if
the input data ends a few bytes after a page boundary.  In this case the
last bytes get "encrypted" by XOR'ing them with uninitialized memory.

Fix the assembly code to return the final keystream block when needed.

Fixes: 88a3f582bea9 ("crypto: arm64/aes - don't use IV buffer to return final keystream block")
Cc: <stable@vger.kernel.org> # v4.11+
Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 arch/arm64/crypto/aes-neonbs-core.S | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/crypto/aes-neonbs-core.S b/arch/arm64/crypto/aes-neonbs-core.S
index e613a87f8b53f..8432c8d0dea66 100644
--- a/arch/arm64/crypto/aes-neonbs-core.S
+++ b/arch/arm64/crypto/aes-neonbs-core.S
@@ -971,18 +971,22 @@ CPU_LE(	rev		x8, x8		)
 
 8:	next_ctr	v0
 	st1		{v0.16b}, [x24]
-	cbz		x23, 0f
+	cbz		x23, .Lctr_done
 
 	cond_yield_neon	98b
 	b		99b
 
-0:	frame_pop
+.Lctr_done:
+	frame_pop
 	ret
 
 	/*
 	 * If we are handling the tail of the input (x6 != NULL), return the
 	 * final keystream block back to the caller.
 	 */
+0:	cbz		x25, 8b
+	st1		{v0.16b}, [x25]
+	b		8b
 1:	cbz		x25, 8b
 	st1		{v1.16b}, [x25]
 	b		8b
-- 
2.20.1


* [PATCH v2 08/15] crypto: testmgr - add testvec_config struct and helper functions
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Crypto algorithms must produce the same output for the same input
regardless of data layout, i.e. how the src and dst scatterlists are
divided into chunks and how each chunk is aligned.  Request flags such
as CRYPTO_TFM_REQ_MAY_SLEEP must not affect the result either.

However, testing of this currently has many gaps.  For example,
individual algorithms are responsible for providing their own chunked
test vectors.  But many don't bother to do this or test only one or two
cases, providing poor test coverage.  Also, other things such as
misaligned IVs and CRYPTO_TFM_REQ_MAY_SLEEP are never tested at all.

Test code is also duplicated between the chunked and non-chunked cases,
which makes other improvements difficult.

To improve the situation, this patch series moves the chunk
descriptions into the testmgr itself so that they are shared by all
algorithms.  However, it's done in an extensible way via a new struct
'testvec_config', which describes not just the scaled chunk lengths but
also all other aspects of the crypto operation besides the data itself:
the buffer alignments, the request flags, whether the operation is
in-place, the IV alignment, and, for hash algorithms, when to do each
update() and when to use finup() vs. final() vs. digest().

Then, this patch series reworks the skcipher, aead, and hash tests to
run each test vector against a list of default testvec_configs,
replacing the current test code.  This improves overall test coverage,
without reducing test
performance too much.  Note that the test vectors themselves are not
changed, except for removing the chunk lists.

This series also adds randomized fuzz tests, enabled by a new kconfig
option intended for developer use only, where skcipher, aead, and hash
algorithms are tested against many randomly generated testvec_configs.
This provides much more comprehensive test coverage.

These improved tests have already exposed many bugs.

To start it off, this initial patch adds the testvec_config and various
helper functions that will be used by the skcipher, aead, and hash test
code that will be converted to use the new testvec_config framework.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 452 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 437 insertions(+), 15 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 17f57f277e58c..1163d39ef8c9a 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -5,6 +5,7 @@
  * Copyright (c) 2002 Jean-Francois Dive <jef@linuxbe.org>
  * Copyright (c) 2007 Nokia Siemens Networks
  * Copyright (c) 2008 Herbert Xu <herbert@gondor.apana.org.au>
+ * Copyright (c) 2019 Google LLC
  *
  * Updated RFC4106 AES-GCM testing.
  *    Authors: Aidan O'Mahony (aidan.o.mahony@intel.com)
@@ -26,6 +27,7 @@
 #include <linux/err.h>
 #include <linux/fips.h>
 #include <linux/module.h>
+#include <linux/once.h>
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -146,12 +148,12 @@ static void hexdump(unsigned char *buf, unsigned int len)
 			buf, len, false);
 }
 
-static int testmgr_alloc_buf(char *buf[XBUFSIZE])
+static int __testmgr_alloc_buf(char *buf[XBUFSIZE], int order)
 {
 	int i;
 
 	for (i = 0; i < XBUFSIZE; i++) {
-		buf[i] = (void *)__get_free_page(GFP_KERNEL);
+		buf[i] = (char *)__get_free_pages(GFP_KERNEL, order);
 		if (!buf[i])
 			goto err_free_buf;
 	}
@@ -160,17 +162,435 @@ static int testmgr_alloc_buf(char *buf[XBUFSIZE])
 
 err_free_buf:
 	while (i-- > 0)
-		free_page((unsigned long)buf[i]);
+		free_pages((unsigned long)buf[i], order);
 
 	return -ENOMEM;
 }
 
-static void testmgr_free_buf(char *buf[XBUFSIZE])
+static int testmgr_alloc_buf(char *buf[XBUFSIZE])
+{
+	return __testmgr_alloc_buf(buf, 0);
+}
+
+static void __testmgr_free_buf(char *buf[XBUFSIZE], int order)
 {
 	int i;
 
 	for (i = 0; i < XBUFSIZE; i++)
-		free_page((unsigned long)buf[i]);
+		free_pages((unsigned long)buf[i], order);
+}
+
+static void testmgr_free_buf(char *buf[XBUFSIZE])
+{
+	__testmgr_free_buf(buf, 0);
+}
+
+#define TESTMGR_POISON_BYTE	0xfe
+#define TESTMGR_POISON_LEN	16
+
+static inline void testmgr_poison(void *addr, size_t len)
+{
+	memset(addr, TESTMGR_POISON_BYTE, len);
+}
+
+/* Is the memory region still fully poisoned? */
+static inline bool testmgr_is_poison(const void *addr, size_t len)
+{
+	return memchr_inv(addr, TESTMGR_POISON_BYTE, len) == NULL;
+}
+
+/* flush type for hash algorithms */
+enum flush_type {
+	/* merge with update of previous buffer(s) */
+	FLUSH_TYPE_NONE = 0,
+
+	/* update with previous buffer(s) before doing this one */
+	FLUSH_TYPE_FLUSH,
+
+	/* likewise, but also export and re-import the intermediate state */
+	FLUSH_TYPE_REIMPORT,
+};
+
+/* finalization function for hash algorithms */
+enum finalization_type {
+	FINALIZATION_TYPE_FINAL,	/* use final() */
+	FINALIZATION_TYPE_FINUP,	/* use finup() */
+	FINALIZATION_TYPE_DIGEST,	/* use digest() */
+};
+
+#define TEST_SG_TOTAL	10000
+
+/**
+ * struct test_sg_division - description of a scatterlist entry
+ *
+ * This struct describes one entry of a scatterlist being constructed to check a
+ * crypto test vector.
+ *
+ * @proportion_of_total: length of this chunk relative to the total length,
+ *			 given as a proportion out of TEST_SG_TOTAL so that it
+ *			 scales to fit any test vector
+ * @offset: byte offset into a 2-page buffer at which this chunk will start
+ * @offset_relative_to_alignmask: if true, add the algorithm's alignmask to the
+ *				  @offset
+ * @flush_type: for hashes, whether an update() should be done now vs.
+ *		continuing to accumulate data
+ */
+struct test_sg_division {
+	unsigned int proportion_of_total;
+	unsigned int offset;
+	bool offset_relative_to_alignmask;
+	enum flush_type flush_type;
+};
+
+/**
+ * struct testvec_config - configuration for testing a crypto test vector
+ *
+ * This struct describes the data layout and other parameters with which each
+ * crypto test vector can be tested.
+ *
+ * @name: name of this config, logged for debugging purposes if a test fails
+ * @inplace: operate on the data in-place, if applicable for the algorithm type?
+ * @req_flags: extra request_flags, e.g. CRYPTO_TFM_REQ_MAY_SLEEP
+ * @src_divs: description of how to arrange the source scatterlist
+ * @dst_divs: description of how to arrange the dst scatterlist, if applicable
+ *	      for the algorithm type.  Defaults to @src_divs if unset.
+ * @iv_offset: misalignment of the IV in the range [0..MAX_ALGAPI_ALIGNMASK+1],
+ *	       where 0 is aligned to a 2*(MAX_ALGAPI_ALIGNMASK+1) byte boundary
+ * @iv_offset_relative_to_alignmask: if true, add the algorithm's alignmask to
+ *				     the @iv_offset
+ * @finalization_type: what finalization function to use for hashes
+ */
+struct testvec_config {
+	const char *name;
+	bool inplace;
+	u32 req_flags;
+	struct test_sg_division src_divs[XBUFSIZE];
+	struct test_sg_division dst_divs[XBUFSIZE];
+	unsigned int iv_offset;
+	bool iv_offset_relative_to_alignmask;
+	enum finalization_type finalization_type;
+};
+
+#define TESTVEC_CONFIG_NAMELEN	192
+
+static unsigned int count_test_sg_divisions(const struct test_sg_division *divs)
+{
+	unsigned int remaining = TEST_SG_TOTAL;
+	unsigned int ndivs = 0;
+
+	do {
+		remaining -= divs[ndivs++].proportion_of_total;
+	} while (remaining);
+
+	return ndivs;
+}
+
+static bool valid_sg_divisions(const struct test_sg_division *divs,
+			       unsigned int count, bool *any_flushes_ret)
+{
+	unsigned int total = 0;
+	unsigned int i;
+
+	for (i = 0; i < count && total != TEST_SG_TOTAL; i++) {
+		if (divs[i].proportion_of_total <= 0 ||
+		    divs[i].proportion_of_total > TEST_SG_TOTAL - total)
+			return false;
+		total += divs[i].proportion_of_total;
+		if (divs[i].flush_type != FLUSH_TYPE_NONE)
+			*any_flushes_ret = true;
+	}
+	return total == TEST_SG_TOTAL &&
+		memchr_inv(&divs[i], 0, (count - i) * sizeof(divs[0])) == NULL;
+}
+
+/*
+ * Check whether the given testvec_config is valid.  This isn't strictly needed
+ * since every testvec_config should be valid, but check anyway so that people
+ * don't unknowingly add broken configs that don't do what they wanted.
+ */
+static bool valid_testvec_config(const struct testvec_config *cfg)
+{
+	bool any_flushes = false;
+
+	if (cfg->name == NULL)
+		return false;
+
+	if (!valid_sg_divisions(cfg->src_divs, ARRAY_SIZE(cfg->src_divs),
+				&any_flushes))
+		return false;
+
+	if (cfg->dst_divs[0].proportion_of_total) {
+		if (!valid_sg_divisions(cfg->dst_divs,
+					ARRAY_SIZE(cfg->dst_divs),
+					&any_flushes))
+			return false;
+	} else {
+		if (memchr_inv(cfg->dst_divs, 0, sizeof(cfg->dst_divs)))
+			return false;
+		/* defaults to dst_divs=src_divs */
+	}
+
+	if (cfg->iv_offset +
+	    (cfg->iv_offset_relative_to_alignmask ? MAX_ALGAPI_ALIGNMASK : 0) >
+	    MAX_ALGAPI_ALIGNMASK + 1)
+		return false;
+
+	if (any_flushes && cfg->finalization_type == FINALIZATION_TYPE_DIGEST)
+		return false;
+
+	return true;
+}
+
+struct test_sglist {
+	char *bufs[XBUFSIZE];
+	struct scatterlist sgl[XBUFSIZE];
+	struct scatterlist sgl_saved[XBUFSIZE];
+	struct scatterlist *sgl_ptr;
+	unsigned int nents;
+};
+
+static int init_test_sglist(struct test_sglist *tsgl)
+{
+	return __testmgr_alloc_buf(tsgl->bufs, 1 /* two pages per buffer */);
+}
+
+static void destroy_test_sglist(struct test_sglist *tsgl)
+{
+	return __testmgr_free_buf(tsgl->bufs, 1 /* two pages per buffer */);
+}
+
+/**
+ * build_test_sglist() - build a scatterlist for a crypto test
+ *
+ * @tsgl: the scatterlist to build.  @tsgl->bufs[] contains an array of 2-page
+ *	  buffers which the scatterlist @tsgl->sgl[] will be made to point into.
+ * @divs: the layout specification on which the scatterlist will be based
+ * @alignmask: the algorithm's alignmask
+ * @total_len: the total length of the scatterlist to build in bytes
+ * @data: if non-NULL, the buffers will be filled with this data until it ends.
+ *	  Otherwise the buffers will be poisoned.  In both cases, some bytes
+ *	  past the end of each buffer will be poisoned to help detect overruns.
+ * @out_divs: if non-NULL, the test_sg_division to which each scatterlist entry
+ *	      corresponds will be returned here.  This will match @divs except
+ *	      that divisions resolving to a length of 0 are omitted as they are
+ *	      not included in the scatterlist.
+ *
+ * Return: 0 or a -errno value
+ */
+static int build_test_sglist(struct test_sglist *tsgl,
+			     const struct test_sg_division *divs,
+			     const unsigned int alignmask,
+			     const unsigned int total_len,
+			     struct iov_iter *data,
+			     const struct test_sg_division *out_divs[XBUFSIZE])
+{
+	struct {
+		const struct test_sg_division *div;
+		size_t length;
+	} partitions[XBUFSIZE];
+	const unsigned int ndivs = count_test_sg_divisions(divs);
+	unsigned int len_remaining = total_len;
+	unsigned int i;
+
+	BUILD_BUG_ON(ARRAY_SIZE(partitions) != ARRAY_SIZE(tsgl->sgl));
+	if (WARN_ON(ndivs > ARRAY_SIZE(partitions)))
+		return -EINVAL;
+
+	/* Calculate the (div, length) pairs */
+	tsgl->nents = 0;
+	for (i = 0; i < ndivs; i++) {
+		unsigned int len_this_sg =
+			min(len_remaining,
+			    (total_len * divs[i].proportion_of_total +
+			     TEST_SG_TOTAL / 2) / TEST_SG_TOTAL);
+
+		if (len_this_sg != 0) {
+			partitions[tsgl->nents].div = &divs[i];
+			partitions[tsgl->nents].length = len_this_sg;
+			tsgl->nents++;
+			len_remaining -= len_this_sg;
+		}
+	}
+	if (tsgl->nents == 0) {
+		partitions[tsgl->nents].div = &divs[0];
+		partitions[tsgl->nents].length = 0;
+		tsgl->nents++;
+	}
+	partitions[tsgl->nents - 1].length += len_remaining;
+
+	/* Set up the sgl entries and fill the data or poison */
+	sg_init_table(tsgl->sgl, tsgl->nents);
+	for (i = 0; i < tsgl->nents; i++) {
+		unsigned int offset = partitions[i].div->offset;
+		void *addr;
+
+		if (partitions[i].div->offset_relative_to_alignmask)
+			offset += alignmask;
+
+		while (offset + partitions[i].length + TESTMGR_POISON_LEN >
+		       2 * PAGE_SIZE) {
+			if (WARN_ON(offset <= 0))
+				return -EINVAL;
+			offset /= 2;
+		}
+
+		addr = &tsgl->bufs[i][offset];
+		sg_set_buf(&tsgl->sgl[i], addr, partitions[i].length);
+
+		if (out_divs)
+			out_divs[i] = partitions[i].div;
+
+		if (data) {
+			size_t copy_len, copied;
+
+			copy_len = min(partitions[i].length, data->count);
+			copied = copy_from_iter(addr, copy_len, data);
+			if (WARN_ON(copied != copy_len))
+				return -EINVAL;
+			testmgr_poison(addr + copy_len, partitions[i].length +
+				       TESTMGR_POISON_LEN - copy_len);
+		} else {
+			testmgr_poison(addr, partitions[i].length +
+				       TESTMGR_POISON_LEN);
+		}
+	}
+
+	sg_mark_end(&tsgl->sgl[tsgl->nents - 1]);
+	tsgl->sgl_ptr = tsgl->sgl;
+	memcpy(tsgl->sgl_saved, tsgl->sgl, tsgl->nents * sizeof(tsgl->sgl[0]));
+	return 0;
+}
+
+/*
+ * Verify that a scatterlist crypto operation produced the correct output.
+ *
+ * @tsgl: scatterlist containing the actual output
+ * @expected_output: buffer containing the expected output
+ * @len_to_check: length of @expected_output in bytes
+ * @unchecked_prefix_len: number of ignored bytes in @tsgl prior to real result
+ * @check_poison: verify that the poison bytes after each chunk are intact?
+ *
+ * Return: 0 if correct, -EINVAL if incorrect, -EOVERFLOW if buffer overrun.
+ */
+static int verify_correct_output(const struct test_sglist *tsgl,
+				 const char *expected_output,
+				 unsigned int len_to_check,
+				 unsigned int unchecked_prefix_len,
+				 bool check_poison)
+{
+	unsigned int i;
+
+	for (i = 0; i < tsgl->nents; i++) {
+		struct scatterlist *sg = &tsgl->sgl_ptr[i];
+		unsigned int len = sg->length;
+		unsigned int offset = sg->offset;
+		const char *actual_output;
+
+		if (unchecked_prefix_len) {
+			if (unchecked_prefix_len >= len) {
+				unchecked_prefix_len -= len;
+				continue;
+			}
+			offset += unchecked_prefix_len;
+			len -= unchecked_prefix_len;
+			unchecked_prefix_len = 0;
+		}
+		len = min(len, len_to_check);
+		actual_output = page_address(sg_page(sg)) + offset;
+		if (memcmp(expected_output, actual_output, len) != 0)
+			return -EINVAL;
+		if (check_poison &&
+		    !testmgr_is_poison(actual_output + len, TESTMGR_POISON_LEN))
+			return -EOVERFLOW;
+		len_to_check -= len;
+		expected_output += len;
+	}
+	if (WARN_ON(len_to_check != 0))
+		return -EINVAL;
+	return 0;
+}
+
+static bool is_test_sglist_corrupted(const struct test_sglist *tsgl)
+{
+	unsigned int i;
+
+	for (i = 0; i < tsgl->nents; i++) {
+		if (tsgl->sgl[i].page_link != tsgl->sgl_saved[i].page_link)
+			return true;
+		if (tsgl->sgl[i].offset != tsgl->sgl_saved[i].offset)
+			return true;
+		if (tsgl->sgl[i].length != tsgl->sgl_saved[i].length)
+			return true;
+	}
+	return false;
+}
+
+struct cipher_test_sglists {
+	struct test_sglist src;
+	struct test_sglist dst;
+};
+
+static struct cipher_test_sglists *alloc_cipher_test_sglists(void)
+{
+	struct cipher_test_sglists *tsgls;
+
+	tsgls = kmalloc(sizeof(*tsgls), GFP_KERNEL);
+	if (!tsgls)
+		return NULL;
+
+	if (init_test_sglist(&tsgls->src) != 0)
+		goto fail_kfree;
+	if (init_test_sglist(&tsgls->dst) != 0)
+		goto fail_destroy_src;
+
+	return tsgls;
+
+fail_destroy_src:
+	destroy_test_sglist(&tsgls->src);
+fail_kfree:
+	kfree(tsgls);
+	return NULL;
+}
+
+static void free_cipher_test_sglists(struct cipher_test_sglists *tsgls)
+{
+	if (tsgls) {
+		destroy_test_sglist(&tsgls->src);
+		destroy_test_sglist(&tsgls->dst);
+		kfree(tsgls);
+	}
+}
+
+/* Build the src and dst scatterlists for an skcipher or AEAD test */
+static int build_cipher_test_sglists(struct cipher_test_sglists *tsgls,
+				     const struct testvec_config *cfg,
+				     unsigned int alignmask,
+				     unsigned int src_total_len,
+				     unsigned int dst_total_len,
+				     const struct kvec *inputs,
+				     unsigned int nr_inputs)
+{
+	struct iov_iter input;
+	int err;
+
+	iov_iter_kvec(&input, WRITE, inputs, nr_inputs, src_total_len);
+	err = build_test_sglist(&tsgls->src, cfg->src_divs, alignmask,
+				cfg->inplace ?
+					max(dst_total_len, src_total_len) :
+					src_total_len,
+				&input, NULL);
+	if (err)
+		return err;
+
+	if (cfg->inplace) {
+		tsgls->dst.sgl_ptr = tsgls->src.sgl;
+		tsgls->dst.nents = tsgls->src.nents;
+		return 0;
+	}
+	return build_test_sglist(&tsgls->dst,
+				 cfg->dst_divs[0].proportion_of_total ?
+					cfg->dst_divs : cfg->src_divs,
+				 alignmask, dst_total_len, NULL, NULL);
 }
 
 static int ahash_guard_result(char *result, char c, int size)
@@ -3657,18 +4077,10 @@ static const struct alg_test_desc alg_test_descs[] = {
 	}
 };
 
-static bool alg_test_descs_checked;
-
-static void alg_test_descs_check_order(void)
+static void alg_check_test_descs_order(void)
 {
 	int i;
 
-	/* only check once */
-	if (alg_test_descs_checked)
-		return;
-
-	alg_test_descs_checked = true;
-
 	for (i = 1; i < ARRAY_SIZE(alg_test_descs); i++) {
 		int diff = strcmp(alg_test_descs[i - 1].alg,
 				  alg_test_descs[i].alg);
@@ -3686,6 +4098,16 @@ static void alg_test_descs_check_order(void)
 	}
 }
 
+static void alg_check_testvec_configs(void)
+{
+}
+
+static void testmgr_onetime_init(void)
+{
+	alg_check_test_descs_order();
+	alg_check_testvec_configs();
+}
+
 static int alg_find_test(const char *alg)
 {
 	int start = 0;
@@ -3722,7 +4144,7 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
 		return 0;
 	}
 
-	alg_test_descs_check_order();
+	DO_ONCE(testmgr_onetime_init);
 
 	if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
 		char nalg[CRYPTO_MAX_ALG_NAME];
-- 
2.20.1


* [PATCH v2 09/15] crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

To achieve more comprehensive crypto test coverage, I'd like to add fuzz
tests that use random data layouts and request flags.

To be most effective these tests should be part of testmgr, so they
automatically run on every algorithm registered with the crypto API.
However, they will take much longer to run than the current tests and
therefore will only really be intended to be run by developers, whereas
the current tests have a wider audience.

Therefore, add a new kconfig option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
that can be set by developers to enable these extra, expensive tests.

Similar to the regular tests, also add a module parameter
cryptomgr.noextratests to support disabling the tests.

Finally, another module parameter cryptomgr.fuzz_iterations is added to
control how many iterations the fuzz tests do.  Note: for now setting
this to 0 will be equivalent to cryptomgr.noextratests=1.  But I opted
for separate parameters to provide more flexibility to add other types
of tests under the "extra tests" category in the future.
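
For example, a developer could boot with something like the following
(illustrative values; as added below, fuzz_iterations defaults to 100):

	cryptomgr.noextratests=0 cryptomgr.fuzz_iterations=1000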

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/Kconfig   | 10 ++++++++++
 crypto/testmgr.c | 14 ++++++++++++++
 2 files changed, 24 insertions(+)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 86960aa53e0f2..bbab6bf335198 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -168,6 +168,16 @@ config CRYPTO_MANAGER_DISABLE_TESTS
 	  Disable run-time self tests that normally take place at
 	  algorithm registration.
 
+config CRYPTO_MANAGER_EXTRA_TESTS
+	bool "Enable extra run-time crypto self tests"
+	depends on DEBUG_KERNEL && !CRYPTO_MANAGER_DISABLE_TESTS
+	help
+	  Enable extra run-time self tests of registered crypto algorithms,
+	  including randomized fuzz tests.
+
+	  This is intended for developer use only, as these tests take much
+	  longer to run than the normal self tests.
+
 config CRYPTO_GF128MUL
 	tristate "GF(2^128) multiplication functions"
 	help
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 1163d39ef8c9a..5e527dbe6524c 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -43,6 +43,16 @@ static bool notests;
 module_param(notests, bool, 0644);
 MODULE_PARM_DESC(notests, "disable crypto self-tests");
 
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+static bool noextratests;
+module_param(noextratests, bool, 0644);
+MODULE_PARM_DESC(noextratests, "disable expensive crypto self-tests");
+
+static unsigned int fuzz_iterations = 100;
+module_param(fuzz_iterations, uint, 0644);
+MODULE_PARM_DESC(fuzz_iterations, "number of fuzz test iterations");
+#endif
+
 #ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
 
 /* a perfect nop */
@@ -4106,6 +4116,10 @@ static void testmgr_onetime_init(void)
 {
 	alg_check_test_descs_order();
 	alg_check_testvec_configs();
+
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+	pr_warn("alg: extra crypto tests enabled.  This is intended for developer use only.\n");
+#endif
 }
 
 static int alg_find_test(const char *alg)
-- 
2.20.1


* [PATCH v2 10/15] crypto: testmgr - implement random testvec_config generation
From: Eric Biggers @ 2019-02-01  7:51 UTC
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Add functions that generate a random testvec_config, in preparation for
using it for randomized fuzz tests.
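
To give a feel for the result, a generated description string might look
like the following (a hypothetical example, constructed from the format
strings below):

    random: inplace use_final src_divs=[<flush> 75.0%@+3991, 25.0%@alignmask+0] iv_offset=6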

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 117 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 117 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 5e527dbe6524c..b9e28d4edb5cc 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -28,6 +28,7 @@
 #include <linux/fips.h>
 #include <linux/module.h>
 #include <linux/once.h>
+#include <linux/random.h>
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
 #include <linux/string.h>
@@ -603,6 +604,122 @@ static int build_cipher_test_sglists(struct cipher_test_sglists *tsgls,
 				 alignmask, dst_total_len, NULL, NULL);
 }
 
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+static char *generate_random_sgl_divisions(struct test_sg_division *divs,
+					   size_t max_divs, char *p, char *end,
+					   bool gen_flushes)
+{
+	struct test_sg_division *div = divs;
+	unsigned int remaining = TEST_SG_TOTAL;
+
+	do {
+		unsigned int this_len;
+
+		if (div == &divs[max_divs - 1] || prandom_u32() % 2 == 0)
+			this_len = remaining;
+		else
+			this_len = 1 + (prandom_u32() % remaining);
+		div->proportion_of_total = this_len;
+
+		if (prandom_u32() % 4 == 0)
+			div->offset = (PAGE_SIZE - 128) + (prandom_u32() % 128);
+		else if (prandom_u32() % 2 == 0)
+			div->offset = prandom_u32() % 32;
+		else
+			div->offset = prandom_u32() % PAGE_SIZE;
+		if (prandom_u32() % 8 == 0)
+			div->offset_relative_to_alignmask = true;
+
+		div->flush_type = FLUSH_TYPE_NONE;
+		if (gen_flushes) {
+			switch (prandom_u32() % 4) {
+			case 0:
+				div->flush_type = FLUSH_TYPE_REIMPORT;
+				break;
+			case 1:
+				div->flush_type = FLUSH_TYPE_FLUSH;
+				break;
+			}
+		}
+
+		BUILD_BUG_ON(TEST_SG_TOTAL != 10000); /* for "%u.%u%%" */
+		p += scnprintf(p, end - p, "%s%u.%u%%@%s+%u%s",
+			       div->flush_type == FLUSH_TYPE_NONE ? "" :
+			       div->flush_type == FLUSH_TYPE_FLUSH ?
+			       "<flush> " : "<reimport> ",
+			       this_len / 100, this_len % 100,
+			       div->offset_relative_to_alignmask ?
+					"alignmask" : "",
+			       div->offset, this_len == remaining ? "" : ", ");
+		remaining -= this_len;
+		div++;
+	} while (remaining);
+
+	return p;
+}
+
+/* Generate a random testvec_config for fuzz testing */
+static void generate_random_testvec_config(struct testvec_config *cfg,
+					   char *name, size_t max_namelen)
+{
+	char *p = name;
+	char * const end = name + max_namelen;
+
+	memset(cfg, 0, sizeof(*cfg));
+
+	cfg->name = name;
+
+	p += scnprintf(p, end - p, "random:");
+
+	if (prandom_u32() % 2 == 0) {
+		cfg->inplace = true;
+		p += scnprintf(p, end - p, " inplace");
+	}
+
+	if (prandom_u32() % 2 == 0) {
+		cfg->req_flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
+		p += scnprintf(p, end - p, " may_sleep");
+	}
+
+	switch (prandom_u32() % 4) {
+	case 0:
+		cfg->finalization_type = FINALIZATION_TYPE_FINAL;
+		p += scnprintf(p, end - p, " use_final");
+		break;
+	case 1:
+		cfg->finalization_type = FINALIZATION_TYPE_FINUP;
+		p += scnprintf(p, end - p, " use_finup");
+		break;
+	default:
+		cfg->finalization_type = FINALIZATION_TYPE_DIGEST;
+		p += scnprintf(p, end - p, " use_digest");
+		break;
+	}
+
+	p += scnprintf(p, end - p, " src_divs=[");
+	p = generate_random_sgl_divisions(cfg->src_divs,
+					  ARRAY_SIZE(cfg->src_divs), p, end,
+					  (cfg->finalization_type !=
+					   FINALIZATION_TYPE_DIGEST));
+	p += scnprintf(p, end - p, "]");
+
+	if (!cfg->inplace && prandom_u32() % 2 == 0) {
+		p += scnprintf(p, end - p, " dst_divs=[");
+		p = generate_random_sgl_divisions(cfg->dst_divs,
+						  ARRAY_SIZE(cfg->dst_divs),
+						  p, end, false);
+		p += scnprintf(p, end - p, "]");
+	}
+
+	if (prandom_u32() % 2 == 0) {
+		cfg->iv_offset = 1 + (prandom_u32() % MAX_ALGAPI_ALIGNMASK);
+		p += scnprintf(p, end - p, " iv_offset=%u", cfg->iv_offset);
+	}
+
+	WARN_ON_ONCE(!valid_testvec_config(cfg));
+}
+#endif /* CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
+
 static int ahash_guard_result(char *result, char c, int size)
 {
 	int i;
-- 
2.20.1



* [PATCH v2 11/15] crypto: testmgr - convert skcipher testing to use testvec_configs
From: Eric Biggers @ 2019-02-01  7:51 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Convert alg_test_skcipher() to use the new test framework, adding a list
of testvec_configs to test by default.  When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves skcipher test coverage mainly because now all algorithms
have a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases which were often
missing or provided poor test coverage.  The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases, different IV alignments, and buffers
that cross pages.

This has already found a bug in the arm64 ctr-aes-neonbs algorithm.
It would have easily found many past bugs.
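
Because the config name is included in every failure message, a report
now pinpoints the exact data layout that broke.  A hypothetical failure
would look something like:

    alg: skcipher: ctr-aes-neonbs encryption test failed (wrong result) on test vector 2, cfg="two even aligned splits"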

I removed the skcipher chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 515 ++++++++++++++++++++++-------------------------
 crypto/testmgr.h | 253 -----------------------
 2 files changed, 245 insertions(+), 523 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index b9e28d4edb5cc..3f024e5dd8b70 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -284,6 +284,68 @@ struct testvec_config {
 
 #define TESTVEC_CONFIG_NAMELEN	192
 
+/*
+ * The following are the lists of testvec_configs to test for each algorithm
+ * type when the basic crypto self-tests are enabled, i.e. when
+ * CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is unset.  They aim to provide good test
+ * coverage, while keeping the test time much shorter than the full fuzz tests
+ * so that the basic tests can be enabled in a wider range of circumstances.
+ */
+
+/* Configs for skciphers and aeads */
+static const struct testvec_config default_cipher_testvec_configs[] = {
+	{
+		.name = "in-place",
+		.inplace = true,
+		.src_divs = { { .proportion_of_total = 10000 } },
+	}, {
+		.name = "out-of-place",
+		.src_divs = { { .proportion_of_total = 10000 } },
+	}, {
+		.name = "unaligned buffer, offset=1",
+		.src_divs = { { .proportion_of_total = 10000, .offset = 1 } },
+		.iv_offset = 1,
+	}, {
+		.name = "buffer aligned only to alignmask",
+		.src_divs = {
+			{
+				.proportion_of_total = 10000,
+				.offset = 1,
+				.offset_relative_to_alignmask = true,
+			},
+		},
+		.iv_offset = 1,
+		.iv_offset_relative_to_alignmask = true,
+	}, {
+		.name = "two even aligned splits",
+		.src_divs = {
+			{ .proportion_of_total = 5000 },
+			{ .proportion_of_total = 5000 },
+		},
+	}, {
+		.name = "uneven misaligned splits, may sleep",
+		.req_flags = CRYPTO_TFM_REQ_MAY_SLEEP,
+		.src_divs = {
+			{ .proportion_of_total = 1900, .offset = 33 },
+			{ .proportion_of_total = 3300, .offset = 7  },
+			{ .proportion_of_total = 4800, .offset = 18 },
+		},
+		.iv_offset = 3,
+	}, {
+		.name = "misaligned splits crossing pages, inplace",
+		.inplace = true,
+		.src_divs = {
+			{
+				.proportion_of_total = 7500,
+				.offset = PAGE_SIZE - 32
+			}, {
+				.proportion_of_total = 2500,
+				.offset = PAGE_SIZE - 7
+			},
+		},
+	}
+};
+
 static unsigned int count_test_sg_divisions(const struct test_sg_division *divs)
 {
 	unsigned int remaining = TEST_SG_TOTAL;
@@ -1608,8 +1670,6 @@ static int test_cipher(struct crypto_cipher *tfm, int enc,
 
 	j = 0;
 	for (i = 0; i < tcount; i++) {
-		if (template[i].np)
-			continue;
 
 		if (fips_enabled && template[i].fips_skip)
 			continue;
@@ -1667,282 +1727,214 @@ static int test_cipher(struct crypto_cipher *tfm, int enc,
 	return ret;
 }
 
-static int __test_skcipher(struct crypto_skcipher *tfm, int enc,
-			   const struct cipher_testvec *template,
-			   unsigned int tcount,
-			   const bool diff_dst, const int align_offset)
+static int test_skcipher_vec_cfg(const char *driver, int enc,
+				 const struct cipher_testvec *vec,
+				 unsigned int vec_num,
+				 const struct testvec_config *cfg,
+				 struct skcipher_request *req,
+				 struct cipher_test_sglists *tsgls)
 {
-	const char *algo =
-		crypto_tfm_alg_driver_name(crypto_skcipher_tfm(tfm));
-	unsigned int i, j, k, n, temp;
-	char *q;
-	struct skcipher_request *req;
-	struct scatterlist sg[8];
-	struct scatterlist sgout[8];
-	const char *e, *d;
-	struct crypto_wait wait;
-	const char *input, *result;
-	void *data;
-	char iv[MAX_IVLEN];
-	char *xbuf[XBUFSIZE];
-	char *xoutbuf[XBUFSIZE];
-	int ret = -ENOMEM;
-	unsigned int ivsize = crypto_skcipher_ivsize(tfm);
-
-	if (testmgr_alloc_buf(xbuf))
-		goto out_nobuf;
-
-	if (diff_dst && testmgr_alloc_buf(xoutbuf))
-		goto out_nooutbuf;
-
-	if (diff_dst)
-		d = "-ddst";
-	else
-		d = "";
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	const unsigned int alignmask = crypto_skcipher_alignmask(tfm);
+	const unsigned int ivsize = crypto_skcipher_ivsize(tfm);
+	const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
+	const char *op = enc ? "encryption" : "decryption";
+	DECLARE_CRYPTO_WAIT(wait);
+	u8 _iv[3 * (MAX_ALGAPI_ALIGNMASK + 1) + MAX_IVLEN];
+	u8 *iv = PTR_ALIGN(&_iv[0], 2 * (MAX_ALGAPI_ALIGNMASK + 1)) +
+		 cfg->iv_offset +
+		 (cfg->iv_offset_relative_to_alignmask ? alignmask : 0);
+	struct kvec input;
+	int err;
 
-	if (enc == ENCRYPT)
-	        e = "encryption";
+	/* Set the key */
+	if (vec->wk)
+		crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
 	else
-		e = "decryption";
-
-	crypto_init_wait(&wait);
-
-	req = skcipher_request_alloc(tfm, GFP_KERNEL);
-	if (!req) {
-		pr_err("alg: skcipher%s: Failed to allocate request for %s\n",
-		       d, algo);
-		goto out;
+		crypto_skcipher_clear_flags(tfm,
+					    CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	err = crypto_skcipher_setkey(tfm, vec->key, vec->klen);
+	if (err) {
+		if (vec->fail) /* expectedly failed to set key? */
+			return 0;
+		pr_err("alg: skcipher: %s setkey failed with err %d on test vector %u; flags=%#x\n",
+		       driver, err, vec_num, crypto_skcipher_get_flags(tfm));
+		return err;
+	}
+	if (vec->fail) {
+		pr_err("alg: skcipher: %s setkey unexpectedly succeeded on test vector %u\n",
+		       driver, vec_num);
+		return -EINVAL;
 	}
 
-	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				      crypto_req_done, &wait);
-
-	j = 0;
-	for (i = 0; i < tcount; i++) {
-		if (template[i].np && !template[i].also_non_np)
-			continue;
-
-		if (fips_enabled && template[i].fips_skip)
-			continue;
-
-		if (template[i].iv && !(template[i].generates_iv && enc))
-			memcpy(iv, template[i].iv, ivsize);
+	/* The IV must be copied to a buffer, as the algorithm may modify it */
+	if (ivsize) {
+		if (WARN_ON(ivsize > MAX_IVLEN))
+			return -EINVAL;
+		if (vec->iv && !(vec->generates_iv && enc))
+			memcpy(iv, vec->iv, ivsize);
 		else
-			memset(iv, 0, MAX_IVLEN);
-
-		input  = enc ? template[i].ptext : template[i].ctext;
-		result = enc ? template[i].ctext : template[i].ptext;
-		j++;
-		ret = -EINVAL;
-		if (WARN_ON(align_offset + template[i].len > PAGE_SIZE))
-			goto out;
-
-		data = xbuf[0];
-		data += align_offset;
-		memcpy(data, input, template[i].len);
-
-		crypto_skcipher_clear_flags(tfm, ~0);
-		if (template[i].wk)
-			crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-
-		ret = crypto_skcipher_setkey(tfm, template[i].key,
-					     template[i].klen);
-		if (template[i].fail == !ret) {
-			pr_err("alg: skcipher%s: setkey failed on test %d for %s: flags=%x\n",
-			       d, j, algo, crypto_skcipher_get_flags(tfm));
-			goto out;
-		} else if (ret)
-			continue;
-
-		sg_init_one(&sg[0], data, template[i].len);
-		if (diff_dst) {
-			data = xoutbuf[0];
-			data += align_offset;
-			sg_init_one(&sgout[0], data, template[i].len);
-		}
-
-		skcipher_request_set_crypt(req, sg, (diff_dst) ? sgout : sg,
-					   template[i].len, iv);
-		ret = crypto_wait_req(enc ? crypto_skcipher_encrypt(req) :
-				      crypto_skcipher_decrypt(req), &wait);
-
-		if (ret) {
-			pr_err("alg: skcipher%s: %s failed on test %d for %s: ret=%d\n",
-			       d, e, j, algo, -ret);
-			goto out;
-		}
-
-		q = data;
-		if (memcmp(q, result, template[i].len)) {
-			pr_err("alg: skcipher%s: Test %d failed (invalid result) on %s for %s\n",
-			       d, j, e, algo);
-			hexdump(q, template[i].len);
-			ret = -EINVAL;
-			goto out;
-		}
-
-		if (template[i].generates_iv && enc &&
-		    memcmp(iv, template[i].iv, crypto_skcipher_ivsize(tfm))) {
-			pr_err("alg: skcipher%s: Test %d failed (invalid output IV) on %s for %s\n",
-			       d, j, e, algo);
-			hexdump(iv, crypto_skcipher_ivsize(tfm));
-			ret = -EINVAL;
-			goto out;
+			memset(iv, 0, ivsize);
+	} else {
+		if (vec->generates_iv) {
+			pr_err("alg: skcipher: %s has ivsize=0 but test vector %u generates IV!\n",
+			       driver, vec_num);
+			return -EINVAL;
 		}
+		iv = NULL;
 	}
 
-	j = 0;
-	for (i = 0; i < tcount; i++) {
-		/* alignment tests are only done with continuous buffers */
-		if (align_offset != 0)
-			break;
-
-		if (!template[i].np)
-			continue;
-
-		if (fips_enabled && template[i].fips_skip)
-			continue;
-
-		if (template[i].iv && !(template[i].generates_iv && enc))
-			memcpy(iv, template[i].iv, ivsize);
-		else
-			memset(iv, 0, MAX_IVLEN);
-
-		input  = enc ? template[i].ptext : template[i].ctext;
-		result = enc ? template[i].ctext : template[i].ptext;
-		j++;
-		crypto_skcipher_clear_flags(tfm, ~0);
-		if (template[i].wk)
-			crypto_skcipher_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-
-		ret = crypto_skcipher_setkey(tfm, template[i].key,
-					     template[i].klen);
-		if (template[i].fail == !ret) {
-			pr_err("alg: skcipher%s: setkey failed on chunk test %d for %s: flags=%x\n",
-			       d, j, algo, crypto_skcipher_get_flags(tfm));
-			goto out;
-		} else if (ret)
-			continue;
-
-		temp = 0;
-		ret = -EINVAL;
-		sg_init_table(sg, template[i].np);
-		if (diff_dst)
-			sg_init_table(sgout, template[i].np);
-		for (k = 0; k < template[i].np; k++) {
-			if (WARN_ON(offset_in_page(IDX[k]) +
-				    template[i].tap[k] > PAGE_SIZE))
-				goto out;
-
-			q = xbuf[IDX[k] >> PAGE_SHIFT] + offset_in_page(IDX[k]);
-
-			memcpy(q, input + temp, template[i].tap[k]);
-
-			if (offset_in_page(q) + template[i].tap[k] < PAGE_SIZE)
-				q[template[i].tap[k]] = 0;
-
-			sg_set_buf(&sg[k], q, template[i].tap[k]);
-			if (diff_dst) {
-				q = xoutbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
-
-				sg_set_buf(&sgout[k], q, template[i].tap[k]);
+	/* Build the src/dst scatterlists */
+	input.iov_base = enc ? (void *)vec->ptext : (void *)vec->ctext;
+	input.iov_len = vec->len;
+	err = build_cipher_test_sglists(tsgls, cfg, alignmask,
+					vec->len, vec->len, &input, 1);
+	if (err) {
+		pr_err("alg: skcipher: %s %s: error preparing scatterlists for test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
 
-				memset(q, 0, template[i].tap[k]);
-				if (offset_in_page(q) +
-				    template[i].tap[k] < PAGE_SIZE)
-					q[template[i].tap[k]] = 0;
-			}
+	/* Do the actual encryption or decryption */
+	testmgr_poison(req->__ctx, crypto_skcipher_reqsize(tfm));
+	skcipher_request_set_callback(req, req_flags, crypto_req_done, &wait);
+	skcipher_request_set_crypt(req, tsgls->src.sgl_ptr, tsgls->dst.sgl_ptr,
+				   vec->len, iv);
+	err = crypto_wait_req(enc ? crypto_skcipher_encrypt(req) :
+			      crypto_skcipher_decrypt(req), &wait);
+	if (err) {
+		pr_err("alg: skcipher: %s %s failed with err %d on test vector %u, cfg=\"%s\"\n",
+		       driver, op, err, vec_num, cfg->name);
+		return err;
+	}
 
-			temp += template[i].tap[k];
-		}
+	/* Check for the correct output (ciphertext or plaintext) */
+	err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
+				    vec->len, 0, true);
+	if (err == -EOVERFLOW) {
+		pr_err("alg: skcipher: %s %s overran dst buffer on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
+	if (err) {
+		pr_err("alg: skcipher: %s %s test failed (wrong result) on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
 
-		skcipher_request_set_crypt(req, sg, (diff_dst) ? sgout : sg,
-					   template[i].len, iv);
+	/* If applicable, check that the algorithm generated the correct IV */
+	if (vec->generates_iv && enc && memcmp(iv, vec->iv, ivsize) != 0) {
+		pr_err("alg: skcipher: %s %s test failed (wrong output IV) on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		hexdump(iv, ivsize);
+		return -EINVAL;
+	}
 
-		ret = crypto_wait_req(enc ? crypto_skcipher_encrypt(req) :
-				      crypto_skcipher_decrypt(req), &wait);
+	return 0;
+}
 
-		if (ret) {
-			pr_err("alg: skcipher%s: %s failed on chunk test %d for %s: ret=%d\n",
-			       d, e, j, algo, -ret);
-			goto out;
-		}
+static int test_skcipher_vec(const char *driver, int enc,
+			     const struct cipher_testvec *vec,
+			     unsigned int vec_num,
+			     struct skcipher_request *req,
+			     struct cipher_test_sglists *tsgls)
+{
+	unsigned int i;
+	int err;
 
-		temp = 0;
-		ret = -EINVAL;
-		for (k = 0; k < template[i].np; k++) {
-			if (diff_dst)
-				q = xoutbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
-			else
-				q = xbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
+	if (fips_enabled && vec->fips_skip)
+		return 0;
 
-			if (memcmp(q, result + temp, template[i].tap[k])) {
-				pr_err("alg: skcipher%s: Chunk test %d failed on %s at page %u for %s\n",
-				       d, j, e, k, algo);
-				hexdump(q, template[i].tap[k]);
-				goto out;
-			}
+	for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++) {
+		err = test_skcipher_vec_cfg(driver, enc, vec, vec_num,
+					    &default_cipher_testvec_configs[i],
+					    req, tsgls);
+		if (err)
+			return err;
+	}
 
-			q += template[i].tap[k];
-			for (n = 0; offset_in_page(q + n) && q[n]; n++)
-				;
-			if (n) {
-				pr_err("alg: skcipher%s: Result buffer corruption in chunk test %d on %s at page %u for %s: %u bytes:\n",
-				       d, j, e, k, algo, n);
-				hexdump(q, n);
-				goto out;
-			}
-			temp += template[i].tap[k];
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+	if (!noextratests) {
+		struct testvec_config cfg;
+		char cfgname[TESTVEC_CONFIG_NAMELEN];
+
+		for (i = 0; i < fuzz_iterations; i++) {
+			generate_random_testvec_config(&cfg, cfgname,
+						       sizeof(cfgname));
+			err = test_skcipher_vec_cfg(driver, enc, vec, vec_num,
+						    &cfg, req, tsgls);
+			if (err)
+				return err;
 		}
 	}
+#endif
+	return 0;
+}
 
-	ret = 0;
+static int test_skcipher(const char *driver, int enc,
+			 const struct cipher_test_suite *suite,
+			 struct skcipher_request *req,
+			 struct cipher_test_sglists *tsgls)
+{
+	unsigned int i;
+	int err;
 
-out:
-	skcipher_request_free(req);
-	if (diff_dst)
-		testmgr_free_buf(xoutbuf);
-out_nooutbuf:
-	testmgr_free_buf(xbuf);
-out_nobuf:
-	return ret;
+	for (i = 0; i < suite->count; i++) {
+		err = test_skcipher_vec(driver, enc, &suite->vecs[i], i, req,
+					tsgls);
+		if (err)
+			return err;
+	}
+	return 0;
 }
 
-static int test_skcipher(struct crypto_skcipher *tfm, int enc,
-			 const struct cipher_testvec *template,
-			 unsigned int tcount)
+static int alg_test_skcipher(const struct alg_test_desc *desc,
+			     const char *driver, u32 type, u32 mask)
 {
-	unsigned int alignmask;
-	int ret;
+	const struct cipher_test_suite *suite = &desc->suite.cipher;
+	struct crypto_skcipher *tfm;
+	struct skcipher_request *req = NULL;
+	struct cipher_test_sglists *tsgls = NULL;
+	int err;
 
-	/* test 'dst == src' case */
-	ret = __test_skcipher(tfm, enc, template, tcount, false, 0);
-	if (ret)
-		return ret;
+	if (suite->count <= 0) {
+		pr_err("alg: skcipher: empty test suite for %s\n", driver);
+		return -EINVAL;
+	}
 
-	/* test 'dst != src' case */
-	ret = __test_skcipher(tfm, enc, template, tcount, true, 0);
-	if (ret)
-		return ret;
+	tfm = crypto_alloc_skcipher(driver, type, mask);
+	if (IS_ERR(tfm)) {
+		pr_err("alg: skcipher: failed to allocate transform for %s: %ld\n",
+		       driver, PTR_ERR(tfm));
+		return PTR_ERR(tfm);
+	}
 
-	/* test unaligned buffers, check with one byte offset */
-	ret = __test_skcipher(tfm, enc, template, tcount, true, 1);
-	if (ret)
-		return ret;
+	req = skcipher_request_alloc(tfm, GFP_KERNEL);
+	if (!req) {
+		pr_err("alg: skcipher: failed to allocate request for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
+	}
 
-	alignmask = crypto_tfm_alg_alignmask(&tfm->base);
-	if (alignmask) {
-		/* Check if alignment mask for tfm is correctly set. */
-		ret = __test_skcipher(tfm, enc, template, tcount, true,
-				      alignmask + 1);
-		if (ret)
-			return ret;
+	tsgls = alloc_cipher_test_sglists();
+	if (!tsgls) {
+		pr_err("alg: skcipher: failed to allocate test buffers for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
 	}
 
-	return 0;
+	err = test_skcipher(driver, ENCRYPT, suite, req, tsgls);
+	if (err)
+		goto out;
+
+	err = test_skcipher(driver, DECRYPT, suite, req, tsgls);
+out:
+	free_cipher_test_sglists(tsgls);
+	skcipher_request_free(req);
+	crypto_free_skcipher(tfm);
+	return err;
 }
 
 static int test_comp(struct crypto_comp *tfm,
@@ -2326,28 +2318,6 @@ static int alg_test_cipher(const struct alg_test_desc *desc,
 	return err;
 }
 
-static int alg_test_skcipher(const struct alg_test_desc *desc,
-			     const char *driver, u32 type, u32 mask)
-{
-	const struct cipher_test_suite *suite = &desc->suite.cipher;
-	struct crypto_skcipher *tfm;
-	int err;
-
-	tfm = crypto_alloc_skcipher(driver, type, mask);
-	if (IS_ERR(tfm)) {
-		printk(KERN_ERR "alg: skcipher: Failed to load transform for "
-		       "%s: %ld\n", driver, PTR_ERR(tfm));
-		return PTR_ERR(tfm);
-	}
-
-	err = test_skcipher(tfm, ENCRYPT, suite->vecs, suite->count);
-	if (!err)
-		err = test_skcipher(tfm, DECRYPT, suite->vecs, suite->count);
-
-	crypto_free_skcipher(tfm);
-	return err;
-}
-
 static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
 			 u32 type, u32 mask)
 {
@@ -4227,6 +4197,11 @@ static void alg_check_test_descs_order(void)
 
 static void alg_check_testvec_configs(void)
 {
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++)
+		WARN_ON(!valid_testvec_config(
+				&default_cipher_testvec_configs[i]));
 }
 
 static void testmgr_onetime_init(void)
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index d8f6035c7ff25..1a73af8a79f7a 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -52,10 +52,6 @@ struct hash_testvec {
  * @fail:	If set to one, the test need to fail
  * @wk:		Does the test need CRYPTO_TFM_REQ_FORBID_WEAK_KEYS?
  * 		( e.g. test needs to fail due to a weak key )
- * @np: 	numbers of SG to distribute data in (from 1 to MAX_TAP)
- * @tap:	How to distribute data in @np SGs
- * @also_non_np: 	if set to 1, the test will be also done without
- * 			splitting data in @np SGs
  * @fips_skip:	Skip the test vector in FIPS mode
  * @generates_iv: Encryption should ignore the given IV, and output @iv.
  *		  Decryption takes @iv.  Needed for AES Keywrap ("kw(aes)").
@@ -65,9 +61,6 @@ struct cipher_testvec {
 	const char *iv;
 	const char *ptext;
 	const char *ctext;
-	unsigned short tap[MAX_TAP];
-	int np;
-	unsigned char also_non_np;
 	bool fail;
 	unsigned char wk; /* weak key flag */
 	unsigned char klen;
@@ -7011,18 +7004,6 @@ static const struct cipher_testvec des_tv_template[] = {
 		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d"
 			  "\xf7\x9c\x89\x2a\x33\x8f\x4a\x8b",
 		.len	= 16,
-		.np	= 2,
-		.tap	= { 8, 8 }
-	}, {
-		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
-		.klen	= 8,
-		.ptext	= "\x01\x23\x45\x67\x89\xab\xcd\xe7"
-			  "\xa3\x99\x7b\xca\xaf\x69\xa0\xf5",
-		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d"
-			  "\x69\x0f\x5b\x0d\x9a\x26\x93\x9b",
-		.len	= 16,
-		.np	= 2,
-		.tap	= { 8, 8 }
 	}, {
 		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
 		.klen	= 8,
@@ -7031,8 +7012,6 @@ static const struct cipher_testvec des_tv_template[] = {
 		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d"
 			  "\x69\x0f\x5b\x0d\x9a\x26\x93\x9b",
 		.len	= 16,
-		.np	= 3,
-		.tap	= { 3, 12, 1 }
 	}, { /* Four blocks -- for testing encryption with chunking */
 		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
 		.klen	= 8,
@@ -7045,38 +7024,6 @@ static const struct cipher_testvec des_tv_template[] = {
 			  "\xb4\x99\x26\xf7\x1f\xe1\xd4\x90"
 			  "\xf7\x9c\x89\x2a\x33\x8f\x4a\x8b",
 		.len	= 32,
-		.np	= 3,
-		.tap	= { 14, 10, 8 }
-	}, {
-		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
-		.klen	= 8,
-		.ptext	= "\x01\x23\x45\x67\x89\xab\xcd\xe7"
-			  "\x22\x33\x44\x55\x66\x77\x88\x99"
-			  "\xca\xfe\xba\xbe\xfe\xed\xbe\xef",
-		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d"
-			  "\xf7\x9c\x89\x2a\x33\x8f\x4a\x8b"
-			  "\xb4\x99\x26\xf7\x1f\xe1\xd4\x90",
-		.len	= 24,
-		.np	= 4,
-		.tap	= { 2, 1, 3, 18 }
-	}, {
-		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
-		.klen	= 8,
-		.ptext	= "\x01\x23\x45\x67\x89\xab\xcd\xe7"
-			  "\x22\x33\x44\x55\x66\x77\x88\x99",
-		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d"
-			  "\xf7\x9c\x89\x2a\x33\x8f\x4a\x8b",
-		.len	= 16,
-		.np	= 5,
-		.tap	= { 2, 2, 2, 2, 8 }
-	}, {
-		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
-		.klen	= 8,
-		.ptext	= "\x01\x23\x45\x67\x89\xab\xcd\xe7",
-		.ctext	= "\xc9\x57\x44\x25\x6a\x5e\xd3\x1d",
-		.len	= 8,
-		.np	= 8,
-		.tap	= { 1, 1, 1, 1, 1, 1, 1, 1 }
 	}, { /* Generated with Crypto++ */
 		.key	= "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55",
 		.klen	= 8,
@@ -7143,9 +7090,6 @@ static const struct cipher_testvec des_tv_template[] = {
 			  "\xE1\x58\x39\x09\xB4\x8B\x40\xAC"
 			  "\x5F\x62\xC7\x72\xD9\xFC\xCB\x9A",
 		.len	= 248,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 248 - 10, 2, 8 },
 	},
 };
 
@@ -7182,23 +7126,6 @@ static const struct cipher_testvec des_cbc_tv_template[] = {
 		.ptext	= "\x66\x6f\x72\x20\x61\x6c\x6c\x20",
 		.ctext	= "\x68\x37\x88\x49\x9a\x7c\x05\xf6",
 		.len	= 8,
-		.np	= 2,
-		.tap	= { 4, 4 },
-		.also_non_np = 1,
-	}, { /* Copy of openssl vector for chunk testing */
-	     /* From OpenSSL */
-		.key	= "\x01\x23\x45\x67\x89\xab\xcd\xef",
-		.klen	= 8,
-		.iv	= "\xfe\xdc\xba\x98\x76\x54\x32\x10",
-		.ptext	= "\x37\x36\x35\x34\x33\x32\x31\x20"
-			  "\x4e\x6f\x77\x20\x69\x73\x20\x74"
-			  "\x68\x65\x20\x74\x69\x6d\x65\x20",
-		.ctext	= "\xcc\xd1\x73\xff\xab\x20\x39\xf4"
-			  "\xac\xd8\xae\xfd\xdf\xd8\xa1\xeb"
-			  "\x46\x8e\x91\x15\x78\x88\xba\x68",
-		.len	= 24,
-		.np	= 2,
-		.tap	= { 13, 11 }
 	}, { /* Generated with Crypto++ */
 		.key	= "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55",
 		.klen	= 8,
@@ -7266,9 +7193,6 @@ static const struct cipher_testvec des_cbc_tv_template[] = {
 			  "\x82\xA9\xBD\x6A\x31\x91\x39\x11"
 			  "\xC6\x4A\xF3\x55\xC7\x29\x2E\x63",
 		.len	= 248,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 248 - 10, 2, 8 },
 	},
 };
 
@@ -7340,9 +7264,6 @@ static const struct cipher_testvec des_ctr_tv_template[] = {
 			  "\x19\x7F\x99\x19\x53\xCE\x1D\x14"
 			  "\x69\x74\xA1\x06\x46\x0F\x4E\x75",
 		.len	= 248,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 248 - 10, 2, 8 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55",
 		.klen	= 8,
@@ -7410,9 +7331,6 @@ static const struct cipher_testvec des_ctr_tv_template[] = {
 			  "\xA5\xA6\xE7\xB0\x51\x36\x52\x37"
 			  "\x91\x45\x05\x3E\x58\xBF\x32",
 		.len	= 247,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 247 - 8, 8 },
 	},
 };
 
@@ -7571,9 +7489,6 @@ static const struct cipher_testvec des3_ede_tv_template[] = {
 			  "\x93\x03\xD7\x51\x09\xFA\xBE\x68"
 			  "\xD8\x45\xFF\x33\xBA\xBB\x2B\x63",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -7749,9 +7664,6 @@ static const struct cipher_testvec des3_ede_cbc_tv_template[] = {
 			  "\x83\x70\xFF\x86\xE6\xAA\x0F\x1F"
 			  "\x95\x63\x73\xA2\x44\xAC\xF8\xA5",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -7888,9 +7800,6 @@ static const struct cipher_testvec des3_ede_ctr_tv_template[] = {
 			  "\xFD\x51\xB0\xC6\x2C\x63\x13\x78"
 			  "\x5C\xEE\xFC\xCF\xC4\x70\x00\x34",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\x9C\xD6\xF3\x9C\xB9\x5A\x67\x00"
 			  "\x5A\x67\x00\x2D\xCE\xEB\x2D\xCE"
@@ -8025,9 +7934,6 @@ static const struct cipher_testvec des3_ede_ctr_tv_template[] = {
 			  "\x32\x0F\x05\x2F\xF2\x4C\x95\x3B"
 			  "\xF2\x79\xD9",
 		.len	= 499,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 499 - 16, 16 },
 	},
 };
 
@@ -8213,9 +8119,6 @@ static const struct cipher_testvec bf_tv_template[] = {
 			  "\x56\xEB\x36\x77\x3D\xAA\xB8\xF5"
 			  "\xC9\x1A\xFB\x5D\xDE\xBB\x43\xF4",
 		.len	= 504,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 504 - 10, 2, 8 },
 	},
 };
 
@@ -8368,9 +8271,6 @@ static const struct cipher_testvec bf_cbc_tv_template[] = {
 			  "\x93\x9B\xEE\xB5\x97\x41\xD2\xA0"
 			  "\xB4\x98\xD8\x6B\x74\xE7\x65\xF4",
 		.len	= 504,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 504 - 10, 2, 8 },
 	},
 };
 
@@ -8643,9 +8543,6 @@ static const struct cipher_testvec bf_ctr_tv_template[] = {
 			  "\x32\x44\x96\x1C\xD8\xEB\x95\xD2"
 			  "\xF3\x71\xEF\xEB\x4E\xBB\x4D",
 		.len	= 503,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 503 - 8, 8 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\x85\x62\x3F\x1C\xF9\xD6\x1C\xF9"
 			  "\xD6\xB3\x90\x6D\x4A\x90\x6D\x4A"
@@ -8944,9 +8841,6 @@ static const struct cipher_testvec tf_tv_template[] = {
 			  "\x58\x33\x9B\x78\xC7\x58\x48\x6B"
 			  "\x2C\x75\x64\xC4\xCA\xC1\x7E\xD5",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -9122,9 +9016,6 @@ static const struct cipher_testvec tf_cbc_tv_template[] = {
 			  "\x30\x70\x56\xA4\x37\xDD\x7C\xC0"
 			  "\x0A\xA3\x30\x10\x26\x25\x41\x2C",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -9530,9 +9421,6 @@ static const struct cipher_testvec tf_ctr_tv_template[] = {
 			  "\xC5\xC9\x7F\x9E\xCF\x33\x7A\xDF"
 			  "\x6C\x82\x9D",
 		.len	= 499,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 499 - 16, 16 },
 	},
 };
 
@@ -9774,9 +9662,6 @@ static const struct cipher_testvec tf_lrw_tv_template[] = {
 			  "\x80\x18\xc4\x6c\x03\xd3\xb7\xba"
 			  "\x11\xd7\xb8\x6e\xea\xe1\x80\x30",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -10111,9 +9996,6 @@ static const struct cipher_testvec tf_xts_tv_template[] = {
 			  "\xa4\x05\x0b\xb2\xb3\xa8\x30\x97"
 			  "\x37\x30\xe1\x91\x8d\xb3\x2a\xff",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -10286,9 +10168,6 @@ static const struct cipher_testvec serpent_tv_template[] = {
 			  "\x75\x55\x9B\xFF\x36\x73\xAB\x7C"
 			  "\xF4\x46\x2E\xEB\xAC\xF3\xD2\xB7",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -10505,9 +10384,6 @@ static const struct cipher_testvec serpent_cbc_tv_template[] = {
 			  "\xFC\x66\xAA\x37\xF2\x37\x39\x6B"
 			  "\xBC\x08\x3A\xA2\x29\xB3\xDF\xD1",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -10780,9 +10656,6 @@ static const struct cipher_testvec serpent_ctr_tv_template[] = {
 			  "\x40\x53\x77\x8C\x15\xF8\x8D\x13"
 			  "\x38\xE2\xE5",
 		.len	= 499,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 499 - 16, 16 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\x85\x62\x3F\x1C\xF9\xD6\x1C\xF9"
 			  "\xD6\xB3\x90\x6D\x4A\x90\x6D\x4A"
@@ -11157,9 +11030,6 @@ static const struct cipher_testvec serpent_lrw_tv_template[] = {
 			  "\x5c\xc6\x84\xfe\x7c\xcb\x26\xfd"
 			  "\xd9\x51\x0f\xd7\x94\x2f\xc5\xa7",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -11494,9 +11364,6 @@ static const struct cipher_testvec serpent_xts_tv_template[] = {
 			  "\xaf\x43\x0b\xc5\x20\x41\x92\x20"
 			  "\xd4\xa0\x91\x98\x11\x5f\x4d\xb1",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -11836,9 +11703,6 @@ static const struct cipher_testvec cast6_tv_template[] = {
 			  "\x84\x52\x6D\x68\xDE\xC6\x64\xB2"
 			  "\x11\x74\x93\x57\xB4\x7E\xC6\x00",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -11976,9 +11840,6 @@ static const struct cipher_testvec cast6_cbc_tv_template[] = {
 			  "\x4D\x59\x7D\xC5\x28\x69\xFA\x92"
 			  "\x22\x46\x89\x2D\x0F\x2B\x08\x24",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -12131,9 +11992,6 @@ static const struct cipher_testvec cast6_ctr_tv_template[] = {
 			  "\x0E\x74\x33\x30\x62\xB9\x89\xDF"
 			  "\xF9\xC5\xDD\x27\xB3\x39\xCB\xCB",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -12277,9 +12135,6 @@ static const struct cipher_testvec cast6_lrw_tv_template[] = {
 			  "\x8D\xD9\xCD\x3B\x22\x67\x18\xC7"
 			  "\xC4\xF5\x99\x61\xBC\xBB\x5B\x46",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -12425,9 +12280,6 @@ static const struct cipher_testvec cast6_xts_tv_template[] = {
 			  "\xA1\xAC\xE8\xCF\xC6\x74\xCF\xDC"
 			  "\x22\x60\x4E\xE8\xA4\x5D\x85\xB9",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -12596,9 +12448,6 @@ static const struct cipher_testvec aes_tv_template[] = {
 			  "\x09\x79\xA0\x43\x5C\x0D\x08\x58"
 			  "\x17\xBB\xC0\x6B\x62\x3F\x56\xE9",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -12613,9 +12462,6 @@ static const struct cipher_testvec aes_cbc_tv_template[] = {
 		.ctext	= "\xe3\x53\x77\x9c\x10\x79\xae\xb8"
 			  "\x27\x08\x94\x2d\xbe\x77\x18\x1a",
 		.len	= 16,
-		.also_non_np = 1,
-		.np	= 8,
-		.tap	= { 3, 2, 3, 2, 3, 1, 1, 1 },
 	}, {
 		.key    = "\xc2\x86\x69\x6d\x88\x7c\x9a\xa0"
 			  "\x61\x1b\xbb\x3e\x20\x25\xa4\x5a",
@@ -12813,9 +12659,6 @@ static const struct cipher_testvec aes_cbc_tv_template[] = {
 			  "\xE0\x1F\x91\xF8\x82\x96\x2D\x65"
 			  "\xA3\xAA\x13\xCC\x50\xFF\x7B\x02",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -12892,9 +12735,6 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
 			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
 			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
 		.len	= 64,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 31, 33 },
 	}, { /* > 16 bytes, not a multiple of 16 bytes */
 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
@@ -14795,9 +14635,6 @@ static const struct cipher_testvec aes_lrw_tv_template[] = {
 			  "\xcd\x7e\x2b\x5d\x43\xea\x42\xe7"
 			  "\x74\x3f\x7d\x58\x88\x75\xde\x3e",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	}
 };
 
@@ -15133,9 +14970,6 @@ static const struct cipher_testvec aes_xts_tv_template[] = {
 			  "\xc4\xf3\x6f\xfd\xa9\xfc\xea\x70"
 			  "\xb9\xc6\xe6\x93\xe1\x48\xc1\x51",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	}
 };
 
@@ -15345,9 +15179,6 @@ static const struct cipher_testvec aes_ctr_tv_template[] = {
 			  "\xFA\x3A\x05\x4C\xFA\xD1\xFF\xFE"
 			  "\xF1\x4C\xE5\xB2\x91\x64\x0C\x51",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\xC9\x83\xA6\xC9\xEC\x0F\x32\x55"
 			  "\x0F\x32\x55\x78\x9B\xBE\x78\x9B"
@@ -15483,9 +15314,6 @@ static const struct cipher_testvec aes_ctr_tv_template[] = {
 			  "\xD8\xFE\xC9\x5B\x5C\x25\xE5\x76"
 			  "\xFB\xF2\x3F",
 		.len	= 499,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 499 - 16, 16 },
 	},
 };
 
@@ -16609,8 +16437,6 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
 			"\x4b\xef\x31\x18\xea\xac\xb1\x84"
 			"\x21\xed\xda\x86",
 		.len	= 4100,
-		.np	= 2,
-		.tap	= { 4064, 36 },
 	},
 };
 
@@ -16638,9 +16464,6 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
 			  "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
 			  "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
 		.len	= 64,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 31, 33 },
 	}, { /* > 16 bytes, not a multiple of 16 bytes */
 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
@@ -23174,9 +22997,6 @@ static const struct cipher_testvec cast5_tv_template[] = {
 			  "\x4F\xFE\x24\x9C\x9A\x02\xE5\x57"
 			  "\xF5\xBC\x25\xD6\x02\x56\x57\x1C",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -23311,9 +23131,6 @@ static const struct cipher_testvec cast5_cbc_tv_template[] = {
 			  "\x15\x5F\xDB\xE9\xB1\x83\xD2\xE6"
 			  "\x1D\x18\x66\x44\x5B\x8F\x14\xEB",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -23460,9 +23277,6 @@ static const struct cipher_testvec cast5_ctr_tv_template[] = {
 			  "\x8C\x98\xDB\xDE\xFC\x72\x94\xAA"
 			  "\xC0\x0D\x96\xAA\x23\xF8\xFE\x13",
 		.len	= 496,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 496 - 20, 4, 16 },
 	},
 };
 
@@ -23835,20 +23649,6 @@ static const struct cipher_testvec fcrypt_pcbc_tv_template[] = {
 			  "\x19\x89\x09\x1c\x2a\x8e\x8c\x94"
 			  "\xfc\xc7\x68\xe4\x88\xaa\xde\x0f",
 		.len	= 48,
-	}, { /* split-page version */
-		.key	= "\xfe\xdc\xba\x98\x76\x54\x32\x10",
-		.klen	= 8,
-		.iv	= "\xf0\xe1\xd2\xc3\xb4\xa5\x96\x87",
-		.ptext	= "The quick brown fox jumps over the lazy dogs.\0\0",
-		.ctext	= "\xca\x90\xf5\x9d\xcb\xd4\xd2\x3c"
-			  "\x01\x88\x7f\x3e\x31\x6e\x62\x9d"
-			  "\xd8\xe0\x57\xa3\x06\x3a\x42\x58"
-			  "\x2a\x28\xfe\x72\x52\x2f\xdd\xe0"
-			  "\x19\x89\x09\x1c\x2a\x8e\x8c\x94"
-			  "\xfc\xc7\x68\xe4\x88\xaa\xde\x0f",
-		.len	= 48,
-		.np	= 2,
-		.tap	= { 20, 28 },
 	}
 };
 
@@ -24145,9 +23945,6 @@ static const struct cipher_testvec camellia_tv_template[] = {
 			  "\xF8\xB2\xAA\x7A\xD6\xFF\xFA\x55"
 			  "\x33\x1A\xBB\xD3\xA2\x7E\x97\x66",
 		.len	= 1008,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 1008 - 20, 4, 16 },
 	},
 };
 
@@ -24438,9 +24235,6 @@ static const struct cipher_testvec camellia_cbc_tv_template[] = {
 			  "\x55\x01\xD4\x58\xB2\xF2\x85\x49"
 			  "\x70\xC5\xB9\x0B\x3B\x7A\x6E\x6C",
 		.len	= 1008,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 1008 - 20, 4, 16 },
 	},
 };
 
@@ -24841,9 +24635,6 @@ static const struct cipher_testvec camellia_ctr_tv_template[] = {
 			  "\xE7\x2C\x49\x08\x8B\x72\xFA\x5C"
 			  "\xF1\x6B\xD9",
 		.len	= 1011,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 1011 - 16, 16 },
 	}, { /* Generated with Crypto++ */
 		.key	= "\x85\x62\x3F\x1C\xF9\xD6\x1C\xF9"
 			  "\xD6\xB3\x90\x6D\x4A\x90\x6D\x4A"
@@ -25346,9 +25137,6 @@ static const struct cipher_testvec camellia_lrw_tv_template[] = {
 			  "\xb2\x1a\xd8\x4c\xbd\x1d\x10\xe9"
 			  "\x5a\xa8\x92\x7f\xba\xe6\x0c\x95",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -25683,9 +25471,6 @@ static const struct cipher_testvec camellia_xts_tv_template[] = {
 			  "\xb7\x16\xd8\x12\x5c\xcd\x7d\x4e"
 			  "\xd5\xc6\x99\xcc\x4e\x6c\x94\x95",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 512 - 20, 4, 16 },
 	},
 };
 
@@ -26889,8 +26674,6 @@ static const struct cipher_testvec salsa20_stream_tv_template[] = {
 			"\x87\x13\xc6\x5b\x59\x8d\xf2\xc8"
 			"\xaf\xdf\x11\x95",
 		.len	= 4100,
-		.np	= 2,
-		.tap	= { 4064, 36 },
 	},
 };
 
@@ -27023,9 +26806,6 @@ static const struct cipher_testvec chacha20_tv_template[] = {
 			  "\x5b\x86\x2f\x37\x30\xe3\x7c\xfd"
 			  "\xc4\xfd\x80\x6c\x22\xf2\x21",
 		.len	= 375,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 375 - 20, 4, 16 },
 
 	}, { /* RFC7539 A.2. Test Vector #3 */
 		.key	= "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a"
@@ -27399,9 +27179,6 @@ static const struct cipher_testvec chacha20_tv_template[] = {
 			  "\xa1\xed\xad\xd5\x76\xfa\x24\x8f"
 			  "\x98",
 		.len	= 1281,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 1200, 1, 80 },
 	},
 };
 
@@ -27594,9 +27371,6 @@ static const struct cipher_testvec xchacha20_tv_template[] = {
 			  "\xab\xff\x1f\x12\xc3\xee\xe5\x65"
 			  "\x12\x8d\x7b\x61\xe5\x1f\x98",
 		.len	= 375,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 375 - 20, 4, 16 },
 
 	}, { /* Derived from a ChaCha20 test vector, via the process above */
 		.key	= "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a"
@@ -27974,9 +27748,6 @@ static const struct cipher_testvec xchacha20_tv_template[] = {
 			  "\xba\xd0\x34\xc9\x2d\x91\xc5\x17"
 			  "\x11",
 		.len	= 1281,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 1200, 1, 80 },
 	}, { /* test vector from https://tools.ietf.org/html/draft-arciszewski-xchacha-02#appendix-A.3.2 */
 		.key	= "\x80\x81\x82\x83\x84\x85\x86\x87"
 			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
@@ -28259,9 +28030,6 @@ static const struct cipher_testvec xchacha12_tv_template[] = {
 			  "\xda\x4e\xc9\xab\x9b\x8a\x7b",
 
 		.len	= 375,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 375 - 20, 4, 16 },
 
 	}, {
 		.key	= "\x1c\x92\x40\xa5\xeb\x55\xd3\x8a"
@@ -28639,9 +28407,6 @@ static const struct cipher_testvec xchacha12_tv_template[] = {
 			  "\xf0\xfc\x5e\x1c\xf1\xf5\xf9\xf3"
 			  "\x5b",
 		.len	= 1281,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 1200, 1, 80 },
 	}, {
 		.key	= "\x80\x81\x82\x83\x84\x85\x86\x87"
 			  "\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f"
@@ -28749,9 +28514,6 @@ static const struct cipher_testvec adiantum_xchacha12_aes_tv_template[] = {
 		.ctext	= "\x6d\x32\x86\x18\x67\x86\x0f\x3f"
 			  "\x96\x7c\x9d\x28\x0d\x53\xec\x9f",
 		.len	= 16,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 14, 2 },
 	}, {
 		.key	= "\x36\x2b\x57\x97\xf8\x5d\xcd\x99"
 			  "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27"
@@ -28814,9 +28576,6 @@ static const struct cipher_testvec adiantum_xchacha12_aes_tv_template[] = {
 			  "\x74\xa6\xaa\xa3\xac\xdc\xc2\xf5"
 			  "\x8d\xde\x34\x86\x78\x60\x75\x8d",
 		.len	= 128,
-		.also_non_np = 1,
-		.np	= 4,
-		.tap	= { 104, 16, 4, 4 },
 	}, {
 		.key	= "\xd3\x81\x72\x18\x23\xff\x6f\x4a"
 			  "\x25\x74\x29\x0d\x51\x8a\x0e\x13"
@@ -28956,9 +28715,6 @@ static const struct cipher_testvec adiantum_xchacha12_aes_tv_template[] = {
 			  "\x21\xb0\x21\x52\xba\xa7\x37\xaa"
 			  "\xcc\xbf\x95\xa8\xf4\xd0\x91\xf6",
 		.len	= 512,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 144, 368 },
 	}
 };
 
@@ -28980,9 +28736,6 @@ static const struct cipher_testvec adiantum_xchacha20_aes_tv_template[] = {
 		.ctext	= "\xf6\x78\x97\xd6\xaa\x94\x01\x27"
 			  "\x2e\x4d\x83\xe0\x6e\x64\x9a\xdf",
 		.len	= 16,
-		.also_non_np = 1,
-		.np	= 3,
-		.tap	= { 5, 2, 9 },
 	}, {
 		.key	= "\x36\x2b\x57\x97\xf8\x5d\xcd\x99"
 			  "\x5f\x1a\x5a\x44\x1d\x92\x0f\x27"
@@ -29002,9 +28755,6 @@ static const struct cipher_testvec adiantum_xchacha20_aes_tv_template[] = {
 			  "\x57\x72\xb5\xfd\xb5\x5d\xb8\x28"
 			  "\x0c\x04\x91\x14\x91\xe9\x37",
 		.len	= 31,
-		.also_non_np = 1,
-		.np	= 2,
-		.tap	= { 16, 15 },
 	}, {
 		.key	= "\xa5\x28\x24\x34\x1a\x3c\xd8\xf7"
 			  "\x05\x91\x8f\xee\x85\x1f\x35\x7f"
@@ -29048,9 +28798,6 @@ static const struct cipher_testvec adiantum_xchacha20_aes_tv_template[] = {
 			  "\x29\x62\x0d\xb2\xf6\x3c\x58\x57"
 			  "\xc1\xd5\x5a\xbb\xd6\xa6\x2a\xe5",
 		.len	= 128,
-		.also_non_np = 1,
-		.np	= 4,
-		.tap	= { 112, 7, 8, 1 },
 	}, {
 		.key	= "\xd3\x81\x72\x18\x23\xff\x6f\x4a"
 			  "\x25\x74\x29\x0d\x51\x8a\x0e\x13"
-- 
2.20.1



* [PATCH v2 12/15] crypto: testmgr - convert aead testing to use testvec_configs
From: Eric Biggers @ 2019-02-01  7:51 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Convert alg_test_aead() to use the new test framework, using the same
list of testvec_configs that skcipher testing uses.

This significantly improves AEAD test coverage, mainly because the
possible data layouts previously had only very limited coverage.
Now the data layouts to test are listed in one place for all algorithms
and optionally are also randomly generated.  In fact, only one AEAD
algorithm (AES-GCM) even had a chunked test case before.

This already found bugs in all the AEGIS and MORUS implementations, the
x86 AES-GCM implementation, and the arm64 AES-CCM implementation.

I removed the AEAD chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.

Note: the rewritten test code allocates an aead_request just once per
algorithm rather than once per encryption/decryption, but some AEAD
algorithms incorrectly change the tfm pointer in the request.  Fixing
them is nontrivial, so to move forward I'm temporarily working around
this in the tests by resetting the tfm pointer; the offending algorithms
will still need to be fixed.
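
To illustrate the problem, here is a minimal hypothetical sketch (not
taken from any particular driver; hypothetical_ctx, the fallback field,
and hypothetical_hw_supports() are made-up names): an implementation
that redirects a request to a fallback transform by editing the caller's
request leaves req->base.tfm pointing at the fallback when it returns.

	struct hypothetical_ctx {
		struct crypto_aead *fallback;
	};

	static int hypothetical_aead_encrypt(struct aead_request *req)
	{
		struct crypto_aead *aead = crypto_aead_reqtfm(req);
		struct hypothetical_ctx *ctx = crypto_aead_ctx(aead);

		if (!hypothetical_hw_supports(req)) { /* made-up check */
			/*
			 * Reuse the caller's request for the fallback
			 * without ever restoring req->base.tfm.  This is
			 * the pattern the test code below has to undo by
			 * calling aead_request_set_tfm() after each
			 * operation.
			 */
			aead_request_set_tfm(req, ctx->fallback);
			return crypto_aead_encrypt(req);
		}
		/* ... hardware path ... */
		return 0;
	}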

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 613 ++++++++++++++---------------------------------
 crypto/testmgr.h |  47 ----
 2 files changed, 185 insertions(+), 475 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 3f024e5dd8b70..7638090ff1b0a 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1208,443 +1208,222 @@ static int test_hash(struct crypto_ahash *tfm,
 	return 0;
 }
 
-static int __test_aead(struct crypto_aead *tfm, int enc,
-		       const struct aead_testvec *template, unsigned int tcount,
-		       const bool diff_dst, const int align_offset)
+static int test_aead_vec_cfg(const char *driver, int enc,
+			     const struct aead_testvec *vec,
+			     unsigned int vec_num,
+			     const struct testvec_config *cfg,
+			     struct aead_request *req,
+			     struct cipher_test_sglists *tsgls)
 {
-	const char *algo = crypto_tfm_alg_driver_name(crypto_aead_tfm(tfm));
-	unsigned int i, j, k, n, temp;
-	int ret = -ENOMEM;
-	char *q;
-	char *key;
-	struct aead_request *req;
-	struct scatterlist *sg;
-	struct scatterlist *sgout;
-	const char *e, *d;
-	struct crypto_wait wait;
-	unsigned int authsize, iv_len;
-	char *iv;
-	char *xbuf[XBUFSIZE];
-	char *xoutbuf[XBUFSIZE];
-	char *axbuf[XBUFSIZE];
-
-	iv = kzalloc(MAX_IVLEN, GFP_KERNEL);
-	if (!iv)
-		return ret;
-	key = kmalloc(MAX_KEYLEN, GFP_KERNEL);
-	if (!key)
-		goto out_noxbuf;
-	if (testmgr_alloc_buf(xbuf))
-		goto out_noxbuf;
-	if (testmgr_alloc_buf(axbuf))
-		goto out_noaxbuf;
-	if (diff_dst && testmgr_alloc_buf(xoutbuf))
-		goto out_nooutbuf;
-
-	/* avoid "the frame size is larger than 1024 bytes" compiler warning */
-	sg = kmalloc(array3_size(sizeof(*sg), 8, (diff_dst ? 4 : 2)),
-		     GFP_KERNEL);
-	if (!sg)
-		goto out_nosg;
-	sgout = &sg[16];
-
-	if (diff_dst)
-		d = "-ddst";
-	else
-		d = "";
+	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
+	const unsigned int alignmask = crypto_aead_alignmask(tfm);
+	const unsigned int ivsize = crypto_aead_ivsize(tfm);
+	const unsigned int authsize = vec->clen - vec->plen;
+	const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
+	const char *op = enc ? "encryption" : "decryption";
+	DECLARE_CRYPTO_WAIT(wait);
+	u8 _iv[3 * (MAX_ALGAPI_ALIGNMASK + 1) + MAX_IVLEN];
+	u8 *iv = PTR_ALIGN(&_iv[0], 2 * (MAX_ALGAPI_ALIGNMASK + 1)) +
+		 cfg->iv_offset +
+		 (cfg->iv_offset_relative_to_alignmask ? alignmask : 0);
+	struct kvec input[2];
+	int err;
 
-	if (enc == ENCRYPT)
-		e = "encryption";
+	/* Set the key */
+	if (vec->wk)
+		crypto_aead_set_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
 	else
-		e = "decryption";
-
-	crypto_init_wait(&wait);
-
-	req = aead_request_alloc(tfm, GFP_KERNEL);
-	if (!req) {
-		pr_err("alg: aead%s: Failed to allocate request for %s\n",
-		       d, algo);
-		goto out;
+		crypto_aead_clear_flags(tfm, CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
+	err = crypto_aead_setkey(tfm, vec->key, vec->klen);
+	if (err) {
+		if (vec->fail) /* expectedly failed to set key? */
+			return 0;
+		pr_err("alg: aead: %s setkey failed with err %d on test vector %u; flags=%#x\n",
+		       driver, err, vec_num, crypto_aead_get_flags(tfm));
+		return err;
 	}
-
-	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				  crypto_req_done, &wait);
-
-	iv_len = crypto_aead_ivsize(tfm);
-
-	for (i = 0, j = 0; i < tcount; i++) {
-		const char *input, *expected_output;
-		unsigned int inlen, outlen;
-		char *inbuf, *outbuf, *assocbuf;
-
-		if (template[i].np)
-			continue;
-		if (enc) {
-			if (template[i].novrfy)
-				continue;
-			input = template[i].ptext;
-			inlen = template[i].plen;
-			expected_output = template[i].ctext;
-			outlen = template[i].clen;
-		} else {
-			input = template[i].ctext;
-			inlen = template[i].clen;
-			expected_output = template[i].ptext;
-			outlen = template[i].plen;
-		}
-
-		j++;
-
-		/* some templates have no input data but they will
-		 * touch input
-		 */
-		inbuf = xbuf[0] + align_offset;
-		assocbuf = axbuf[0];
-
-		ret = -EINVAL;
-		if (WARN_ON(align_offset + template[i].clen > PAGE_SIZE ||
-			    template[i].alen > PAGE_SIZE))
-			goto out;
-
-		memcpy(inbuf, input, inlen);
-		memcpy(assocbuf, template[i].assoc, template[i].alen);
-		if (template[i].iv)
-			memcpy(iv, template[i].iv, iv_len);
-		else
-			memset(iv, 0, iv_len);
-
-		crypto_aead_clear_flags(tfm, ~0);
-		if (template[i].wk)
-			crypto_aead_set_flags(tfm,
-					      CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-
-		if (template[i].klen > MAX_KEYLEN) {
-			pr_err("alg: aead%s: setkey failed on test %d for %s: key size %d > %d\n",
-			       d, j, algo, template[i].klen,
-			       MAX_KEYLEN);
-			ret = -EINVAL;
-			goto out;
-		}
-		memcpy(key, template[i].key, template[i].klen);
-
-		ret = crypto_aead_setkey(tfm, key, template[i].klen);
-		if (template[i].fail == !ret) {
-			pr_err("alg: aead%s: setkey failed on test %d for %s: flags=%x\n",
-			       d, j, algo, crypto_aead_get_flags(tfm));
-			goto out;
-		} else if (ret)
-			continue;
-
-		authsize = template[i].clen - template[i].plen;
-		ret = crypto_aead_setauthsize(tfm, authsize);
-		if (ret) {
-			pr_err("alg: aead%s: Failed to set authsize to %u on test %d for %s\n",
-			       d, authsize, j, algo);
-			goto out;
-		}
-
-		k = !!template[i].alen;
-		sg_init_table(sg, k + 1);
-		sg_set_buf(&sg[0], assocbuf, template[i].alen);
-		sg_set_buf(&sg[k], inbuf, template[i].clen);
-		outbuf = inbuf;
-
-		if (diff_dst) {
-			sg_init_table(sgout, k + 1);
-			sg_set_buf(&sgout[0], assocbuf, template[i].alen);
-
-			outbuf = xoutbuf[0] + align_offset;
-			sg_set_buf(&sgout[k], outbuf, template[i].clen);
-		}
-
-		aead_request_set_crypt(req, sg, (diff_dst) ? sgout : sg, inlen,
-				       iv);
-
-		aead_request_set_ad(req, template[i].alen);
-
-		ret = crypto_wait_req(enc ? crypto_aead_encrypt(req)
-				      : crypto_aead_decrypt(req), &wait);
-
-		switch (ret) {
-		case 0:
-			if (template[i].novrfy) {
-				/* verification was supposed to fail */
-				pr_err("alg: aead%s: %s failed on test %d for %s: ret was 0, expected -EBADMSG\n",
-				       d, e, j, algo);
-				/* so really, we got a bad message */
-				ret = -EBADMSG;
-				goto out;
-			}
-			break;
-		case -EBADMSG:
-			if (template[i].novrfy)
-				/* verification failure was expected */
-				continue;
-			/* fall through */
-		default:
-			pr_err("alg: aead%s: %s failed on test %d for %s: ret=%d\n",
-			       d, e, j, algo, -ret);
-			goto out;
-		}
-
-		if (memcmp(outbuf, expected_output, outlen)) {
-			pr_err("alg: aead%s: Test %d failed on %s for %s\n",
-			       d, j, e, algo);
-			hexdump(outbuf, outlen);
-			ret = -EINVAL;
-			goto out;
-		}
+	if (vec->fail) {
+		pr_err("alg: aead: %s setkey unexpectedly succeeded on test vector %u\n",
+		       driver, vec_num);
+		return -EINVAL;
 	}
 
-	for (i = 0, j = 0; i < tcount; i++) {
-		const char *input, *expected_output;
-		unsigned int inlen, outlen;
-
-		/* alignment tests are only done with continuous buffers */
-		if (align_offset != 0)
-			break;
-
-		if (!template[i].np)
-			continue;
-
-		if (enc) {
-			if (template[i].novrfy)
-				continue;
-			input = template[i].ptext;
-			inlen = template[i].plen;
-			expected_output = template[i].ctext;
-			outlen = template[i].clen;
-		} else {
-			input = template[i].ctext;
-			inlen = template[i].clen;
-			expected_output = template[i].ptext;
-			outlen = template[i].plen;
-		}
-
-		j++;
-
-		if (template[i].iv)
-			memcpy(iv, template[i].iv, iv_len);
-		else
-			memset(iv, 0, MAX_IVLEN);
-
-		crypto_aead_clear_flags(tfm, ~0);
-		if (template[i].wk)
-			crypto_aead_set_flags(tfm,
-					      CRYPTO_TFM_REQ_FORBID_WEAK_KEYS);
-		if (template[i].klen > MAX_KEYLEN) {
-			pr_err("alg: aead%s: setkey failed on test %d for %s: key size %d > %d\n",
-			       d, j, algo, template[i].klen, MAX_KEYLEN);
-			ret = -EINVAL;
-			goto out;
-		}
-		memcpy(key, template[i].key, template[i].klen);
-
-		ret = crypto_aead_setkey(tfm, key, template[i].klen);
-		if (template[i].fail == !ret) {
-			pr_err("alg: aead%s: setkey failed on chunk test %d for %s: flags=%x\n",
-			       d, j, algo, crypto_aead_get_flags(tfm));
-			goto out;
-		} else if (ret)
-			continue;
-
-		authsize = template[i].clen - template[i].plen;
-
-		ret = -EINVAL;
-		sg_init_table(sg, template[i].anp + template[i].np);
-		if (diff_dst)
-			sg_init_table(sgout, template[i].anp + template[i].np);
-
-		ret = -EINVAL;
-		for (k = 0, temp = 0; k < template[i].anp; k++) {
-			if (WARN_ON(offset_in_page(IDX[k]) +
-				    template[i].atap[k] > PAGE_SIZE))
-				goto out;
-			sg_set_buf(&sg[k],
-				   memcpy(axbuf[IDX[k] >> PAGE_SHIFT] +
-					  offset_in_page(IDX[k]),
-					  template[i].assoc + temp,
-					  template[i].atap[k]),
-				   template[i].atap[k]);
-			if (diff_dst)
-				sg_set_buf(&sgout[k],
-					   axbuf[IDX[k] >> PAGE_SHIFT] +
-					   offset_in_page(IDX[k]),
-					   template[i].atap[k]);
-			temp += template[i].atap[k];
-		}
-
-		for (k = 0, temp = 0; k < template[i].np; k++) {
-			n = template[i].tap[k];
-			if (k == template[i].np - 1 && !enc)
-				n += authsize;
-
-			if (WARN_ON(offset_in_page(IDX[k]) + n > PAGE_SIZE))
-				goto out;
-
-			q = xbuf[IDX[k] >> PAGE_SHIFT] + offset_in_page(IDX[k]);
-			memcpy(q, input + temp, n);
-			sg_set_buf(&sg[template[i].anp + k], q, n);
-
-			if (diff_dst) {
-				q = xoutbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
-
-				memset(q, 0, n);
-
-				sg_set_buf(&sgout[template[i].anp + k], q, n);
-			}
-
-			if (k == template[i].np - 1 && enc)
-				n += authsize;
-			if (offset_in_page(q) + n < PAGE_SIZE)
-				q[n] = 0;
-
-			temp += n;
-		}
+	/* Set the authentication tag size */
+	err = crypto_aead_setauthsize(tfm, authsize);
+	if (err) {
+		pr_err("alg: aead: %s setauthsize failed with err %d on test vector %u\n",
+		       driver, err, vec_num);
+		return err;
+	}
 
-		ret = crypto_aead_setauthsize(tfm, authsize);
-		if (ret) {
-			pr_err("alg: aead%s: Failed to set authsize to %u on chunk test %d for %s\n",
-			       d, authsize, j, algo);
-			goto out;
-		}
+	/* The IV must be copied to a buffer, as the algorithm may modify it */
+	if (WARN_ON(ivsize > MAX_IVLEN))
+		return -EINVAL;
+	if (vec->iv)
+		memcpy(iv, vec->iv, ivsize);
+	else
+		memset(iv, 0, ivsize);
 
-		if (enc) {
-			if (WARN_ON(sg[template[i].anp + k - 1].offset +
-				    sg[template[i].anp + k - 1].length +
-				    authsize > PAGE_SIZE)) {
-				ret = -EINVAL;
-				goto out;
-			}
+	/* Build the src/dst scatterlists */
+	input[0].iov_base = (void *)vec->assoc;
+	input[0].iov_len = vec->alen;
+	input[1].iov_base = enc ? (void *)vec->ptext : (void *)vec->ctext;
+	input[1].iov_len = enc ? vec->plen : vec->clen;
+	err = build_cipher_test_sglists(tsgls, cfg, alignmask,
+					vec->alen + (enc ? vec->plen :
+						     vec->clen),
+					vec->alen + (enc ? vec->clen :
+						     vec->plen),
+					input, 2);
+	if (err) {
+		pr_err("alg: aead: %s %s: error preparing scatterlists for test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
 
-			if (diff_dst)
-				sgout[template[i].anp + k - 1].length +=
-					authsize;
-			sg[template[i].anp + k - 1].length += authsize;
-		}
+	/* Do the actual encryption or decryption */
+	testmgr_poison(req->__ctx, crypto_aead_reqsize(tfm));
+	aead_request_set_callback(req, req_flags, crypto_req_done, &wait);
+	aead_request_set_crypt(req, tsgls->src.sgl_ptr, tsgls->dst.sgl_ptr,
+			       enc ? vec->plen : vec->clen, iv);
+	aead_request_set_ad(req, vec->alen);
+	err = crypto_wait_req(enc ? crypto_aead_encrypt(req) :
+			      crypto_aead_decrypt(req), &wait);
 
-		aead_request_set_crypt(req, sg, (diff_dst) ? sgout : sg,
-				       inlen, iv);
+	aead_request_set_tfm(req, tfm); /* TODO: get rid of this */
 
-		aead_request_set_ad(req, template[i].alen);
+	if (err) {
+		if (err == -EBADMSG && vec->novrfy)
+			return 0;
+		pr_err("alg: aead: %s %s failed with err %d on test vector %u, cfg=\"%s\"\n",
+		       driver, op, err, vec_num, cfg->name);
+		return err;
+	}
+	if (vec->novrfy) {
+		pr_err("alg: aead: %s %s unexpectedly succeeded on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
+	}
 
-		ret = crypto_wait_req(enc ? crypto_aead_encrypt(req)
-				      : crypto_aead_decrypt(req), &wait);
+	/* Check for the correct output (ciphertext or plaintext) */
+	err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
+				    enc ? vec->clen : vec->plen,
+				    vec->alen, enc || !cfg->inplace);
+	if (err == -EOVERFLOW) {
+		pr_err("alg: aead: %s %s overran dst buffer on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
+	if (err) {
+		pr_err("alg: aead: %s %s test failed (wrong result) on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return err;
+	}
 
-		switch (ret) {
-		case 0:
-			if (template[i].novrfy) {
-				/* verification was supposed to fail */
-				pr_err("alg: aead%s: %s failed on chunk test %d for %s: ret was 0, expected -EBADMSG\n",
-				       d, e, j, algo);
-				/* so really, we got a bad message */
-				ret = -EBADMSG;
-				goto out;
-			}
-			break;
-		case -EBADMSG:
-			if (template[i].novrfy)
-				/* verification failure was expected */
-				continue;
-			/* fall through */
-		default:
-			pr_err("alg: aead%s: %s failed on chunk test %d for %s: ret=%d\n",
-			       d, e, j, algo, -ret);
-			goto out;
-		}
+	return 0;
+}
 
-		ret = -EINVAL;
-		for (k = 0, temp = 0; k < template[i].np; k++) {
-			if (diff_dst)
-				q = xoutbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
-			else
-				q = xbuf[IDX[k] >> PAGE_SHIFT] +
-				    offset_in_page(IDX[k]);
+static int test_aead_vec(const char *driver, int enc,
+			 const struct aead_testvec *vec, unsigned int vec_num,
+			 struct aead_request *req,
+			 struct cipher_test_sglists *tsgls)
+{
+	unsigned int i;
+	int err;
 
-			n = template[i].tap[k];
-			if (k == template[i].np - 1 && enc)
-				n += authsize;
+	if (enc && vec->novrfy)
+		return 0;
 
-			if (memcmp(q, expected_output + temp, n)) {
-				pr_err("alg: aead%s: Chunk test %d failed on %s at page %u for %s\n",
-				       d, j, e, k, algo);
-				hexdump(q, n);
-				goto out;
-			}
+	for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++) {
+		err = test_aead_vec_cfg(driver, enc, vec, vec_num,
+					&default_cipher_testvec_configs[i],
+					req, tsgls);
+		if (err)
+			return err;
+	}
 
-			q += n;
-			if (k == template[i].np - 1 && !enc) {
-				if (!diff_dst && memcmp(q, input + temp + n,
-							authsize))
-					n = authsize;
-				else
-					n = 0;
-			} else {
-				for (n = 0; offset_in_page(q + n) && q[n]; n++)
-					;
-			}
-			if (n) {
-				pr_err("alg: aead%s: Result buffer corruption in chunk test %d on %s at page %u for %s: %u bytes:\n",
-				       d, j, e, k, algo, n);
-				hexdump(q, n);
-				goto out;
-			}
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+	if (!noextratests) {
+		struct testvec_config cfg;
+		char cfgname[TESTVEC_CONFIG_NAMELEN];
 
-			temp += template[i].tap[k];
+		for (i = 0; i < fuzz_iterations; i++) {
+			generate_random_testvec_config(&cfg, cfgname,
+						       sizeof(cfgname));
+			err = test_aead_vec_cfg(driver, enc, vec, vec_num,
+						&cfg, req, tsgls);
+			if (err)
+				return err;
 		}
 	}
+#endif
+	return 0;
+}
 
-	ret = 0;
+static int test_aead(const char *driver, int enc,
+		     const struct aead_test_suite *suite,
+		     struct aead_request *req,
+		     struct cipher_test_sglists *tsgls)
+{
+	unsigned int i;
+	int err;
 
-out:
-	aead_request_free(req);
-	kfree(sg);
-out_nosg:
-	if (diff_dst)
-		testmgr_free_buf(xoutbuf);
-out_nooutbuf:
-	testmgr_free_buf(axbuf);
-out_noaxbuf:
-	testmgr_free_buf(xbuf);
-out_noxbuf:
-	kfree(key);
-	kfree(iv);
-	return ret;
+	for (i = 0; i < suite->count; i++) {
+		err = test_aead_vec(driver, enc, &suite->vecs[i], i, req,
+				    tsgls);
+		if (err)
+			return err;
+	}
+	return 0;
 }
 
-static int test_aead(struct crypto_aead *tfm, int enc,
-		     const struct aead_testvec *template, unsigned int tcount)
+static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
+			 u32 type, u32 mask)
 {
-	unsigned int alignmask;
-	int ret;
+	const struct aead_test_suite *suite = &desc->suite.aead;
+	struct crypto_aead *tfm;
+	struct aead_request *req = NULL;
+	struct cipher_test_sglists *tsgls = NULL;
+	int err;
 
-	/* test 'dst == src' case */
-	ret = __test_aead(tfm, enc, template, tcount, false, 0);
-	if (ret)
-		return ret;
+	if (suite->count <= 0) {
+		pr_err("alg: aead: empty test suite for %s\n", driver);
+		return -EINVAL;
+	}
 
-	/* test 'dst != src' case */
-	ret = __test_aead(tfm, enc, template, tcount, true, 0);
-	if (ret)
-		return ret;
+	tfm = crypto_alloc_aead(driver, type, mask);
+	if (IS_ERR(tfm)) {
+		pr_err("alg: aead: failed to allocate transform for %s: %ld\n",
+		       driver, PTR_ERR(tfm));
+		return PTR_ERR(tfm);
+	}
 
-	/* test unaligned buffers, check with one byte offset */
-	ret = __test_aead(tfm, enc, template, tcount, true, 1);
-	if (ret)
-		return ret;
+	req = aead_request_alloc(tfm, GFP_KERNEL);
+	if (!req) {
+		pr_err("alg: aead: failed to allocate request for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
+	}
 
-	alignmask = crypto_tfm_alg_alignmask(&tfm->base);
-	if (alignmask) {
-		/* Check if alignment mask for tfm is correctly set. */
-		ret = __test_aead(tfm, enc, template, tcount, true,
-				  alignmask + 1);
-		if (ret)
-			return ret;
+	tsgls = alloc_cipher_test_sglists();
+	if (!tsgls) {
+		pr_err("alg: aead: failed to allocate test buffers for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
 	}
 
-	return 0;
+	err = test_aead(driver, ENCRYPT, suite, req, tsgls);
+	if (err)
+		goto out;
+
+	err = test_aead(driver, DECRYPT, suite, req, tsgls);
+out:
+	free_cipher_test_sglists(tsgls);
+	aead_request_free(req);
+	crypto_free_aead(tfm);
+	return err;
 }
 
 static int test_cipher(struct crypto_cipher *tfm, int enc,
@@ -2274,28 +2053,6 @@ static int test_cprng(struct crypto_rng *tfm,
 	return err;
 }
 
-static int alg_test_aead(const struct alg_test_desc *desc, const char *driver,
-			 u32 type, u32 mask)
-{
-	const struct aead_test_suite *suite = &desc->suite.aead;
-	struct crypto_aead *tfm;
-	int err;
-
-	tfm = crypto_alloc_aead(driver, type, mask);
-	if (IS_ERR(tfm)) {
-		printk(KERN_ERR "alg: aead: Failed to load transform for %s: "
-		       "%ld\n", driver, PTR_ERR(tfm));
-		return PTR_ERR(tfm);
-	}
-
-	err = test_aead(tfm, ENCRYPT, suite->vecs, suite->count);
-	if (!err)
-		err = test_aead(tfm, DECRYPT, suite->vecs, suite->count);
-
-	crypto_free_aead(tfm);
-	return err;
-}
-
 static int alg_test_cipher(const struct alg_test_desc *desc,
 			   const char *driver, u32 type, u32 mask)
 {
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 1a73af8a79f7a..dfd8059d1306d 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -78,10 +78,6 @@ struct cipher_testvec {
  * @ctext:	Pointer to the full authenticated ciphertext.  For AEADs that
  *		produce a separate "ciphertext" and "authentication tag", these
  *		two parts are concatenated: ciphertext || tag.
- * @tap:	How to distribute ptext data in @np SGs
- * @atap:	How to distribute assoc data in @anp SGs
- * @np:		Numbers of SG to distribute ptext data in
- * @anp:	Numbers of SG to distribute assoc data in
  * @fail:	setkey() failure expected?
  * @novrfy:	Decryption verification failure expected?
  * @wk:		Does the test need CRYPTO_TFM_REQ_FORBID_WEAK_KEYS?
@@ -97,10 +93,6 @@ struct aead_testvec {
 	const char *ptext;
 	const char *assoc;
 	const char *ctext;
-	unsigned char tap[MAX_TAP];
-	unsigned char atap[MAX_TAP];
-	int np;
-	int anp;
 	bool fail;
 	unsigned char novrfy;
 	unsigned char wk;
@@ -16605,41 +16597,6 @@ static const struct aead_testvec aes_gcm_tv_template[] = {
 			  "\x99\x24\xa7\xc8\x58\x73\x36\xbf"
 			  "\xb1\x18\x02\x4d\xb8\x67\x4a\x14",
 		.clen	= 80,
-	}, {
-		.key	= "\xfe\xff\xe9\x92\x86\x65\x73\x1c"
-			  "\x6d\x6a\x8f\x94\x67\x30\x83\x08"
-			  "\xfe\xff\xe9\x92\x86\x65\x73\x1c",
-		.klen	= 24,
-		.iv	= "\xca\xfe\xba\xbe\xfa\xce\xdb\xad"
-			  "\xde\xca\xf8\x88",
-		.ptext	= "\xd9\x31\x32\x25\xf8\x84\x06\xe5"
-			  "\xa5\x59\x09\xc5\xaf\xf5\x26\x9a"
-			  "\x86\xa7\xa9\x53\x15\x34\xf7\xda"
-			  "\x2e\x4c\x30\x3d\x8a\x31\x8a\x72"
-			  "\x1c\x3c\x0c\x95\x95\x68\x09\x53"
-			  "\x2f\xcf\x0e\x24\x49\xa6\xb5\x25"
-			  "\xb1\x6a\xed\xf5\xaa\x0d\xe6\x57"
-			  "\xba\x63\x7b\x39",
-		.plen	= 60,
-		.assoc	= "\xfe\xed\xfa\xce\xde\xad\xbe\xef"
-			  "\xfe\xed\xfa\xce\xde\xad\xbe\xef"
-			  "\xab\xad\xda\xd2",
-		.alen	= 20,
-		.ctext	= "\x39\x80\xca\x0b\x3c\x00\xe8\x41"
-			  "\xeb\x06\xfa\xc4\x87\x2a\x27\x57"
-			  "\x85\x9e\x1c\xea\xa6\xef\xd9\x84"
-			  "\x62\x85\x93\xb4\x0c\xa1\xe1\x9c"
-			  "\x7d\x77\x3d\x00\xc1\x44\xc5\x25"
-			  "\xac\x61\x9d\x18\xc8\x4a\x3f\x47"
-			  "\x18\xe2\x44\x8b\x2f\xe3\x24\xd9"
-			  "\xcc\xda\x27\x10"
-			  "\x25\x19\x49\x8e\x80\xf1\x47\x8f"
-			  "\x37\xba\x55\xbd\x6d\x27\x61\x8c",
-		.clen	= 76,
-		.np	= 2,
-		.tap	= { 32, 28 },
-		.anp	= 2,
-		.atap	= { 8, 12 }
 	}, {
 		.key    = zeroed_string,
 		.klen	= 32,
@@ -16716,10 +16673,6 @@ static const struct aead_testvec aes_gcm_tv_template[] = {
 			  "\x76\xfc\x6e\xce\x0f\x4e\x17\x68"
 			  "\xcd\xdf\x88\x53\xbb\x2d\x55\x1b",
 		.clen	= 76,
-		.np     = 2,
-		.tap    = { 48, 12 },
-		.anp	= 3,
-		.atap	= { 8, 8, 4 }
 	}, {
 		.key	= "\xfe\xff\xe9\x92\x86\x65\x73\x1c"
 			  "\x6d\x6a\x8f\x94\x67\x30\x83\x08"
-- 
2.20.1



* [PATCH v2 13/15] crypto: testmgr - convert hash testing to use testvec_configs
  2019-02-01  7:51 [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Eric Biggers
                   ` (11 preceding siblings ...)
  2019-02-01  7:51 ` [PATCH v2 12/15] crypto: testmgr - convert aead " Eric Biggers
@ 2019-02-01  7:51 ` Eric Biggers
  2019-08-29 15:32   ` Christophe Leroy
  2019-02-01  7:51 ` [PATCH v2 14/15] crypto: testmgr - check for skcipher_request corruption Eric Biggers
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 23+ messages in thread
From: Eric Biggers @ 2019-02-01  7:51 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Convert alg_test_hash() to use the new test framework, adding a list of
testvec_configs to test by default.  When the extra self-tests are
enabled, randomly generated testvec_configs are tested as well.

This improves hash test coverage mainly because now all algorithms have
a variety of data layouts tested, whereas before each algorithm was
responsible for declaring its own chunked test cases, which were often
missing or provided poor test coverage.  The new code also tests both
the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.

This already found bugs in the hash walk code and in the arm32 and arm64
implementations of crct10dif.

I removed the hash chunked test vectors that were the same as
non-chunked ones, but left the ones that were unique.
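
As an illustration (not part of this patch), a config like the following
could be appended to default_hash_testvec_configs[] to exercise a
three-way uneven split with an export()+import() in the middle.  The
name, proportions, and offset are invented for the example; the
proportions must sum to TEST_SG_TOTAL (10000):

static const struct testvec_config example_hash_cfg = {
	.name = "uneven splits with reimport (example)",
	.src_divs = {
		{
			.proportion_of_total = 2500,
			.flush_type = FLUSH_TYPE_REIMPORT,
		}, {
			.proportion_of_total = 2500,
		}, {
			.proportion_of_total = 5000,
			.offset = 3,
		},
	},
	.finalization_type = FINALIZATION_TYPE_FINAL,
};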

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 795 ++++++++++++++++++++---------------------------
 crypto/testmgr.h | 107 +------
 2 files changed, 352 insertions(+), 550 deletions(-)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 7638090ff1b0a..96aa268ff4184 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -71,18 +71,6 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
  */
 #define XBUFSIZE	8
 
-/*
- * Indexes into the xbuf to simulate cross-page access.
- */
-#define IDX1		32
-#define IDX2		32400
-#define IDX3		1511
-#define IDX4		8193
-#define IDX5		22222
-#define IDX6		17101
-#define IDX7		27333
-#define IDX8		3000
-
 /*
 * Used by test_cipher()
 */
@@ -149,9 +137,6 @@ struct alg_test_desc {
 	} suite;
 };
 
-static const unsigned int IDX[8] = {
-	IDX1, IDX2, IDX3, IDX4, IDX5, IDX6, IDX7, IDX8 };
-
 static void hexdump(unsigned char *buf, unsigned int len)
 {
 	print_hex_dump(KERN_CONT, "", DUMP_PREFIX_OFFSET,
@@ -346,6 +331,79 @@ static const struct testvec_config default_cipher_testvec_configs[] = {
 	}
 };
 
+static const struct testvec_config default_hash_testvec_configs[] = {
+	{
+		.name = "init+update+final aligned buffer",
+		.src_divs = { { .proportion_of_total = 10000 } },
+		.finalization_type = FINALIZATION_TYPE_FINAL,
+	}, {
+		.name = "init+finup aligned buffer",
+		.src_divs = { { .proportion_of_total = 10000 } },
+		.finalization_type = FINALIZATION_TYPE_FINUP,
+	}, {
+		.name = "digest aligned buffer",
+		.src_divs = { { .proportion_of_total = 10000 } },
+		.finalization_type = FINALIZATION_TYPE_DIGEST,
+	}, {
+		.name = "init+update+final misaligned buffer",
+		.src_divs = { { .proportion_of_total = 10000, .offset = 1 } },
+		.finalization_type = FINALIZATION_TYPE_FINAL,
+	}, {
+		.name = "digest buffer aligned only to alignmask",
+		.src_divs = {
+			{
+				.proportion_of_total = 10000,
+				.offset = 1,
+				.offset_relative_to_alignmask = true,
+			},
+		},
+		.finalization_type = FINALIZATION_TYPE_DIGEST,
+	}, {
+		.name = "init+update+update+final two even splits",
+		.src_divs = {
+			{ .proportion_of_total = 5000 },
+			{
+				.proportion_of_total = 5000,
+				.flush_type = FLUSH_TYPE_FLUSH,
+			},
+		},
+		.finalization_type = FINALIZATION_TYPE_FINAL,
+	}, {
+		.name = "digest uneven misaligned splits, may sleep",
+		.req_flags = CRYPTO_TFM_REQ_MAY_SLEEP,
+		.src_divs = {
+			{ .proportion_of_total = 1900, .offset = 33 },
+			{ .proportion_of_total = 3300, .offset = 7  },
+			{ .proportion_of_total = 4800, .offset = 18 },
+		},
+		.finalization_type = FINALIZATION_TYPE_DIGEST,
+	}, {
+		.name = "digest misaligned splits crossing pages",
+		.src_divs = {
+			{
+				.proportion_of_total = 7500,
+				.offset = PAGE_SIZE - 32,
+			}, {
+				.proportion_of_total = 2500,
+				.offset = PAGE_SIZE - 7,
+			},
+		},
+		.finalization_type = FINALIZATION_TYPE_DIGEST,
+	}, {
+		.name = "import/export",
+		.src_divs = {
+			{
+				.proportion_of_total = 6500,
+				.flush_type = FLUSH_TYPE_REIMPORT,
+			}, {
+				.proportion_of_total = 3500,
+				.flush_type = FLUSH_TYPE_REIMPORT,
+			},
+		},
+		.finalization_type = FINALIZATION_TYPE_FINAL,
+	}
+};
+
 static unsigned int count_test_sg_divisions(const struct test_sg_division *divs)
 {
 	unsigned int remaining = TEST_SG_TOTAL;
@@ -782,430 +840,320 @@ static void generate_random_testvec_config(struct testvec_config *cfg,
 }
 #endif /* CONFIG_CRYPTO_MANAGER_EXTRA_TESTS */
 
-static int ahash_guard_result(char *result, char c, int size)
+static int check_nonfinal_hash_op(const char *op, int err,
+				  u8 *result, unsigned int digestsize,
+				  const char *driver, unsigned int vec_num,
+				  const struct testvec_config *cfg)
 {
-	int i;
-
-	for (i = 0; i < size; i++) {
-		if (result[i] != c)
-			return -EINVAL;
-	}
-
-	return 0;
-}
-
-static int ahash_partial_update(struct ahash_request **preq,
-	struct crypto_ahash *tfm, const struct hash_testvec *template,
-	void *hash_buff, int k, int temp, struct scatterlist *sg,
-	const char *algo, char *result, struct crypto_wait *wait)
-{
-	char *state;
-	struct ahash_request *req;
-	int statesize, ret = -EINVAL;
-	static const unsigned char guard[] = { 0x00, 0xba, 0xad, 0x00 };
-	int digestsize = crypto_ahash_digestsize(tfm);
-
-	req = *preq;
-	statesize = crypto_ahash_statesize(
-			crypto_ahash_reqtfm(req));
-	state = kmalloc(statesize + sizeof(guard), GFP_KERNEL);
-	if (!state) {
-		pr_err("alg: hash: Failed to alloc state for %s\n", algo);
-		goto out_nostate;
-	}
-	memcpy(state + statesize, guard, sizeof(guard));
-	memset(result, 1, digestsize);
-	ret = crypto_ahash_export(req, state);
-	WARN_ON(memcmp(state + statesize, guard, sizeof(guard)));
-	if (ret) {
-		pr_err("alg: hash: Failed to export() for %s\n", algo);
-		goto out;
-	}
-	ret = ahash_guard_result(result, 1, digestsize);
-	if (ret) {
-		pr_err("alg: hash: Failed, export used req->result for %s\n",
-		       algo);
-		goto out;
-	}
-	ahash_request_free(req);
-	req = ahash_request_alloc(tfm, GFP_KERNEL);
-	if (!req) {
-		pr_err("alg: hash: Failed to alloc request for %s\n", algo);
-		goto out_noreq;
-	}
-	ahash_request_set_callback(req,
-		CRYPTO_TFM_REQ_MAY_BACKLOG,
-		crypto_req_done, wait);
-
-	memcpy(hash_buff, template->plaintext + temp,
-		template->tap[k]);
-	sg_init_one(&sg[0], hash_buff, template->tap[k]);
-	ahash_request_set_crypt(req, sg, result, template->tap[k]);
-	ret = crypto_ahash_import(req, state);
-	if (ret) {
-		pr_err("alg: hash: Failed to import() for %s\n", algo);
-		goto out;
+	if (err) {
+		pr_err("alg: hash: %s %s() failed with err %d on test vector %u, cfg=\"%s\"\n",
+		       driver, op, err, vec_num, cfg->name);
+		return err;
 	}
-	ret = ahash_guard_result(result, 1, digestsize);
-	if (ret) {
-		pr_err("alg: hash: Failed, import used req->result for %s\n",
-		       algo);
-		goto out;
+	if (!testmgr_is_poison(result, digestsize)) {
+		pr_err("alg: hash: %s %s() used result buffer on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
 	}
-	ret = crypto_wait_req(crypto_ahash_update(req), wait);
-	if (ret)
-		goto out;
-	*preq = req;
-	ret = 0;
-	goto out_noreq;
-out:
-	ahash_request_free(req);
-out_noreq:
-	kfree(state);
-out_nostate:
-	return ret;
+	return 0;
 }
 
-enum hash_test {
-	HASH_TEST_DIGEST,
-	HASH_TEST_FINAL,
-	HASH_TEST_FINUP
-};
-
-static int __test_hash(struct crypto_ahash *tfm,
-		       const struct hash_testvec *template, unsigned int tcount,
-		       enum hash_test test_type, const int align_offset)
+static int test_hash_vec_cfg(const char *driver,
+			     const struct hash_testvec *vec,
+			     unsigned int vec_num,
+			     const struct testvec_config *cfg,
+			     struct ahash_request *req,
+			     struct test_sglist *tsgl,
+			     u8 *hashstate)
 {
-	const char *algo = crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
-	size_t digest_size = crypto_ahash_digestsize(tfm);
-	unsigned int i, j, k, temp;
-	struct scatterlist sg[8];
-	char *result;
-	char *key;
-	struct ahash_request *req;
-	struct crypto_wait wait;
-	void *hash_buff;
-	char *xbuf[XBUFSIZE];
-	int ret = -ENOMEM;
-
-	result = kmalloc(digest_size, GFP_KERNEL);
-	if (!result)
-		return ret;
-	key = kmalloc(MAX_KEYLEN, GFP_KERNEL);
-	if (!key)
-		goto out_nobuf;
-	if (testmgr_alloc_buf(xbuf))
-		goto out_nobuf;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	const unsigned int alignmask = crypto_ahash_alignmask(tfm);
+	const unsigned int digestsize = crypto_ahash_digestsize(tfm);
+	const unsigned int statesize = crypto_ahash_statesize(tfm);
+	const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
+	const struct test_sg_division *divs[XBUFSIZE];
+	DECLARE_CRYPTO_WAIT(wait);
+	struct kvec _input;
+	struct iov_iter input;
+	unsigned int i;
+	struct scatterlist *pending_sgl;
+	unsigned int pending_len;
+	u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
+	int err;
 
-	crypto_init_wait(&wait);
+	/* Set the key, if specified */
+	if (vec->ksize) {
+		err = crypto_ahash_setkey(tfm, vec->key, vec->ksize);
+		if (err) {
+			pr_err("alg: hash: %s setkey failed with err %d on test vector %u; flags=%#x\n",
+			       driver, err, vec_num,
+			       crypto_ahash_get_flags(tfm));
+			return err;
+		}
+	}
 
-	req = ahash_request_alloc(tfm, GFP_KERNEL);
-	if (!req) {
-		printk(KERN_ERR "alg: hash: Failed to allocate request for "
-		       "%s\n", algo);
-		goto out_noreq;
+	/* Build the scatterlist for the source data */
+	_input.iov_base = (void *)vec->plaintext;
+	_input.iov_len = vec->psize;
+	iov_iter_kvec(&input, WRITE, &_input, 1, vec->psize);
+	err = build_test_sglist(tsgl, cfg->src_divs, alignmask, vec->psize,
+				&input, divs);
+	if (err) {
+		pr_err("alg: hash: %s: error preparing scatterlist for test vector %u, cfg=\"%s\"\n",
+		       driver, vec_num, cfg->name);
+		return err;
 	}
-	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
-				   crypto_req_done, &wait);
 
-	j = 0;
-	for (i = 0; i < tcount; i++) {
-		if (template[i].np)
-			continue;
+	/* Do the actual hashing */
 
-		ret = -EINVAL;
-		if (WARN_ON(align_offset + template[i].psize > PAGE_SIZE))
-			goto out;
+	testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
+	testmgr_poison(result, digestsize + TESTMGR_POISON_LEN);
 
-		j++;
-		memset(result, 0, digest_size);
+	if (cfg->finalization_type == FINALIZATION_TYPE_DIGEST) {
+		/* Just using digest() */
+		ahash_request_set_callback(req, req_flags, crypto_req_done,
+					   &wait);
+		ahash_request_set_crypt(req, tsgl->sgl, result, vec->psize);
+		err = crypto_wait_req(crypto_ahash_digest(req), &wait);
+		if (err) {
+			pr_err("alg: hash: %s digest() failed with err %d on test vector %u, cfg=\"%s\"\n",
+			       driver, err, vec_num, cfg->name);
+			return err;
+		}
+		goto result_ready;
+	}
 
-		hash_buff = xbuf[0];
-		hash_buff += align_offset;
+	/* Using init(), zero or more update(), then final() or finup() */
 
-		memcpy(hash_buff, template[i].plaintext, template[i].psize);
-		sg_init_one(&sg[0], hash_buff, template[i].psize);
+	ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
+	ahash_request_set_crypt(req, NULL, result, 0);
+	err = crypto_wait_req(crypto_ahash_init(req), &wait);
+	err = check_nonfinal_hash_op("init", err, result, digestsize,
+				     driver, vec_num, cfg);
+	if (err)
+		return err;
 
-		if (template[i].ksize) {
-			crypto_ahash_clear_flags(tfm, ~0);
-			if (template[i].ksize > MAX_KEYLEN) {
-				pr_err("alg: hash: setkey failed on test %d for %s: key size %d > %d\n",
-				       j, algo, template[i].ksize, MAX_KEYLEN);
-				ret = -EINVAL;
-				goto out;
-			}
-			memcpy(key, template[i].key, template[i].ksize);
-			ret = crypto_ahash_setkey(tfm, key, template[i].ksize);
-			if (ret) {
-				printk(KERN_ERR "alg: hash: setkey failed on "
-				       "test %d for %s: ret=%d\n", j, algo,
-				       -ret);
-				goto out;
-			}
+	pending_sgl = NULL;
+	pending_len = 0;
+	for (i = 0; i < tsgl->nents; i++) {
+		if (divs[i]->flush_type != FLUSH_TYPE_NONE &&
+		    pending_sgl != NULL) {
+			/* update() with the pending data */
+			ahash_request_set_callback(req, req_flags,
+						   crypto_req_done, &wait);
+			ahash_request_set_crypt(req, pending_sgl, result,
+						pending_len);
+			err = crypto_wait_req(crypto_ahash_update(req), &wait);
+			err = check_nonfinal_hash_op("update", err,
+						     result, digestsize,
+						     driver, vec_num, cfg);
+			if (err)
+				return err;
+			pending_sgl = NULL;
+			pending_len = 0;
 		}
-
-		ahash_request_set_crypt(req, sg, result, template[i].psize);
-		switch (test_type) {
-		case HASH_TEST_DIGEST:
-			ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: digest failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-			break;
-
-		case HASH_TEST_FINAL:
-			memset(result, 1, digest_size);
-			ret = crypto_wait_req(crypto_ahash_init(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: init failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-			ret = ahash_guard_result(result, 1, digest_size);
-			if (ret) {
-				pr_err("alg: hash: init failed on test %d "
-				       "for %s: used req->result\n", j, algo);
-				goto out;
-			}
-			ret = crypto_wait_req(crypto_ahash_update(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: update failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-			ret = ahash_guard_result(result, 1, digest_size);
-			if (ret) {
-				pr_err("alg: hash: update failed on test %d "
-				       "for %s: used req->result\n", j, algo);
-				goto out;
-			}
-			ret = crypto_wait_req(crypto_ahash_final(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: final failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
+		if (divs[i]->flush_type == FLUSH_TYPE_REIMPORT) {
+			/* Test ->export() and ->import() */
+			testmgr_poison(hashstate + statesize,
+				       TESTMGR_POISON_LEN);
+			err = crypto_ahash_export(req, hashstate);
+			err = check_nonfinal_hash_op("export", err,
+						     result, digestsize,
+						     driver, vec_num, cfg);
+			if (err)
+				return err;
+			if (!testmgr_is_poison(hashstate + statesize,
+					       TESTMGR_POISON_LEN)) {
+				pr_err("alg: hash: %s export() overran state buffer on test vector %u, cfg=\"%s\"\n",
+				       driver, vec_num, cfg->name);
+				return -EOVERFLOW;
 			}
-			break;
 
-		case HASH_TEST_FINUP:
-			memset(result, 1, digest_size);
-			ret = crypto_wait_req(crypto_ahash_init(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: init failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-			ret = ahash_guard_result(result, 1, digest_size);
-			if (ret) {
-				pr_err("alg: hash: init failed on test %d "
-				       "for %s: used req->result\n", j, algo);
-				goto out;
-			}
-			ret = crypto_wait_req(crypto_ahash_finup(req), &wait);
-			if (ret) {
-				pr_err("alg: hash: final failed on test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-			break;
+			testmgr_poison(req->__ctx, crypto_ahash_reqsize(tfm));
+			err = crypto_ahash_import(req, hashstate);
+			err = check_nonfinal_hash_op("import", err,
+						     result, digestsize,
+						     driver, vec_num, cfg);
+			if (err)
+				return err;
 		}
+		if (pending_sgl == NULL)
+			pending_sgl = &tsgl->sgl[i];
+		pending_len += tsgl->sgl[i].length;
+	}
 
-		if (memcmp(result, template[i].digest,
-			   crypto_ahash_digestsize(tfm))) {
-			printk(KERN_ERR "alg: hash: Test %d failed for %s\n",
-			       j, algo);
-			hexdump(result, crypto_ahash_digestsize(tfm));
-			ret = -EINVAL;
-			goto out;
+	ahash_request_set_callback(req, req_flags, crypto_req_done, &wait);
+	ahash_request_set_crypt(req, pending_sgl, result, pending_len);
+	if (cfg->finalization_type == FINALIZATION_TYPE_FINAL) {
+		/* finish with update() and final() */
+		err = crypto_wait_req(crypto_ahash_update(req), &wait);
+		err = check_nonfinal_hash_op("update", err, result, digestsize,
+					     driver, vec_num, cfg);
+		if (err)
+			return err;
+		err = crypto_wait_req(crypto_ahash_final(req), &wait);
+		if (err) {
+			pr_err("alg: hash: %s final() failed with err %d on test vector %u, cfg=\"%s\"\n",
+			       driver, err, vec_num, cfg->name);
+			return err;
+		}
+	} else {
+		/* finish with finup() */
+		err = crypto_wait_req(crypto_ahash_finup(req), &wait);
+		if (err) {
+			pr_err("alg: hash: %s finup() failed with err %d on test vector %u, cfg=\"%s\"\n",
+			       driver, err, vec_num, cfg->name);
+			return err;
 		}
 	}
 
-	if (test_type)
-		goto out;
-
-	j = 0;
-	for (i = 0; i < tcount; i++) {
-		/* alignment tests are only done with continuous buffers */
-		if (align_offset != 0)
-			break;
+result_ready:
+	/* Check that the algorithm produced the correct digest */
+	if (memcmp(result, vec->digest, digestsize) != 0) {
+		pr_err("alg: hash: %s test failed (wrong result) on test vector %u, cfg=\"%s\"\n",
+		       driver, vec_num, cfg->name);
+		return -EINVAL;
+	}
+	if (!testmgr_is_poison(&result[digestsize], TESTMGR_POISON_LEN)) {
+		pr_err("alg: hash: %s overran result buffer on test vector %u, cfg=\"%s\"\n",
+		       driver, vec_num, cfg->name);
+		return -EOVERFLOW;
+	}
 
-		if (!template[i].np)
-			continue;
+	return 0;
+}
 
-		j++;
-		memset(result, 0, digest_size);
+static int test_hash_vec(const char *driver, const struct hash_testvec *vec,
+			 unsigned int vec_num, struct ahash_request *req,
+			 struct test_sglist *tsgl, u8 *hashstate)
+{
+	unsigned int i;
+	int err;
 
-		temp = 0;
-		sg_init_table(sg, template[i].np);
-		ret = -EINVAL;
-		for (k = 0; k < template[i].np; k++) {
-			if (WARN_ON(offset_in_page(IDX[k]) +
-				    template[i].tap[k] > PAGE_SIZE))
-				goto out;
-			sg_set_buf(&sg[k],
-				   memcpy(xbuf[IDX[k] >> PAGE_SHIFT] +
-					  offset_in_page(IDX[k]),
-					  template[i].plaintext + temp,
-					  template[i].tap[k]),
-				   template[i].tap[k]);
-			temp += template[i].tap[k];
-		}
-
-		if (template[i].ksize) {
-			if (template[i].ksize > MAX_KEYLEN) {
-				pr_err("alg: hash: setkey failed on test %d for %s: key size %d > %d\n",
-				       j, algo, template[i].ksize, MAX_KEYLEN);
-				ret = -EINVAL;
-				goto out;
-			}
-			crypto_ahash_clear_flags(tfm, ~0);
-			memcpy(key, template[i].key, template[i].ksize);
-			ret = crypto_ahash_setkey(tfm, key, template[i].ksize);
-
-			if (ret) {
-				printk(KERN_ERR "alg: hash: setkey "
-				       "failed on chunking test %d "
-				       "for %s: ret=%d\n", j, algo, -ret);
-				goto out;
-			}
-		}
+	for (i = 0; i < ARRAY_SIZE(default_hash_testvec_configs); i++) {
+		err = test_hash_vec_cfg(driver, vec, vec_num,
+					&default_hash_testvec_configs[i],
+					req, tsgl, hashstate);
+		if (err)
+			return err;
+	}
 
-		ahash_request_set_crypt(req, sg, result, template[i].psize);
-		ret = crypto_wait_req(crypto_ahash_digest(req), &wait);
-		if (ret) {
-			pr_err("alg: hash: digest failed on chunking test %d for %s: ret=%d\n",
-			       j, algo, -ret);
-			goto out;
-		}
+#ifdef CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
+	if (!noextratests) {
+		struct testvec_config cfg;
+		char cfgname[TESTVEC_CONFIG_NAMELEN];
 
-		if (memcmp(result, template[i].digest,
-			   crypto_ahash_digestsize(tfm))) {
-			printk(KERN_ERR "alg: hash: Chunking test %d "
-			       "failed for %s\n", j, algo);
-			hexdump(result, crypto_ahash_digestsize(tfm));
-			ret = -EINVAL;
-			goto out;
+		for (i = 0; i < fuzz_iterations; i++) {
+			generate_random_testvec_config(&cfg, cfgname,
+						       sizeof(cfgname));
+			err = test_hash_vec_cfg(driver, vec, vec_num, &cfg,
+						req, tsgl, hashstate);
+			if (err)
+				return err;
 		}
 	}
+#endif
+	return 0;
+}
 
-	/* partial update exercise */
-	j = 0;
-	for (i = 0; i < tcount; i++) {
-		/* alignment tests are only done with continuous buffers */
-		if (align_offset != 0)
-			break;
+static int __alg_test_hash(const struct hash_testvec *vecs,
+			   unsigned int num_vecs, const char *driver,
+			   u32 type, u32 mask)
+{
+	struct crypto_ahash *tfm;
+	struct ahash_request *req = NULL;
+	struct test_sglist *tsgl = NULL;
+	u8 *hashstate = NULL;
+	unsigned int i;
+	int err;
 
-		if (template[i].np < 2)
-			continue;
+	tfm = crypto_alloc_ahash(driver, type, mask);
+	if (IS_ERR(tfm)) {
+		pr_err("alg: hash: failed to allocate transform for %s: %ld\n",
+		       driver, PTR_ERR(tfm));
+		return PTR_ERR(tfm);
+	}
 
-		j++;
-		memset(result, 0, digest_size);
+	req = ahash_request_alloc(tfm, GFP_KERNEL);
+	if (!req) {
+		pr_err("alg: hash: failed to allocate request for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
+	}
 
-		ret = -EINVAL;
-		hash_buff = xbuf[0];
-		memcpy(hash_buff, template[i].plaintext,
-			template[i].tap[0]);
-		sg_init_one(&sg[0], hash_buff, template[i].tap[0]);
-
-		if (template[i].ksize) {
-			crypto_ahash_clear_flags(tfm, ~0);
-			if (template[i].ksize > MAX_KEYLEN) {
-				pr_err("alg: hash: setkey failed on test %d for %s: key size %d > %d\n",
-					j, algo, template[i].ksize, MAX_KEYLEN);
-				ret = -EINVAL;
-				goto out;
-			}
-			memcpy(key, template[i].key, template[i].ksize);
-			ret = crypto_ahash_setkey(tfm, key, template[i].ksize);
-			if (ret) {
-				pr_err("alg: hash: setkey failed on test %d for %s: ret=%d\n",
-					j, algo, -ret);
-				goto out;
-			}
-		}
+	tsgl = kmalloc(sizeof(*tsgl), GFP_KERNEL);
+	if (!tsgl || init_test_sglist(tsgl) != 0) {
+		pr_err("alg: hash: failed to allocate test buffers for %s\n",
+		       driver);
+		kfree(tsgl);
+		tsgl = NULL;
+		err = -ENOMEM;
+		goto out;
+	}
 
-		ahash_request_set_crypt(req, sg, result, template[i].tap[0]);
-		ret = crypto_wait_req(crypto_ahash_init(req), &wait);
-		if (ret) {
-			pr_err("alg: hash: init failed on test %d for %s: ret=%d\n",
-				j, algo, -ret);
-			goto out;
-		}
-		ret = crypto_wait_req(crypto_ahash_update(req), &wait);
-		if (ret) {
-			pr_err("alg: hash: update failed on test %d for %s: ret=%d\n",
-				j, algo, -ret);
-			goto out;
-		}
+	hashstate = kmalloc(crypto_ahash_statesize(tfm) + TESTMGR_POISON_LEN,
+			    GFP_KERNEL);
+	if (!hashstate) {
+		pr_err("alg: hash: failed to allocate hash state buffer for %s\n",
+		       driver);
+		err = -ENOMEM;
+		goto out;
+	}
 
-		temp = template[i].tap[0];
-		for (k = 1; k < template[i].np; k++) {
-			ret = ahash_partial_update(&req, tfm, &template[i],
-				hash_buff, k, temp, &sg[0], algo, result,
-				&wait);
-			if (ret) {
-				pr_err("alg: hash: partial update failed on test %d for %s: ret=%d\n",
-					j, algo, -ret);
-				goto out_noreq;
-			}
-			temp += template[i].tap[k];
-		}
-		ret = crypto_wait_req(crypto_ahash_final(req), &wait);
-		if (ret) {
-			pr_err("alg: hash: final failed on test %d for %s: ret=%d\n",
-				j, algo, -ret);
-			goto out;
-		}
-		if (memcmp(result, template[i].digest,
-			   crypto_ahash_digestsize(tfm))) {
-			pr_err("alg: hash: Partial Test %d failed for %s\n",
-			       j, algo);
-			hexdump(result, crypto_ahash_digestsize(tfm));
-			ret = -EINVAL;
+	for (i = 0; i < num_vecs; i++) {
+		err = test_hash_vec(driver, &vecs[i], i, req, tsgl, hashstate);
+		if (err)
 			goto out;
-		}
 	}
-
-	ret = 0;
-
+	err = 0;
 out:
+	kfree(hashstate);
+	if (tsgl) {
+		destroy_test_sglist(tsgl);
+		kfree(tsgl);
+	}
 	ahash_request_free(req);
-out_noreq:
-	testmgr_free_buf(xbuf);
-out_nobuf:
-	kfree(key);
-	kfree(result);
-	return ret;
+	crypto_free_ahash(tfm);
+	return err;
 }
 
-static int test_hash(struct crypto_ahash *tfm,
-		     const struct hash_testvec *template,
-		     unsigned int tcount, enum hash_test test_type)
+static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
+			 u32 type, u32 mask)
 {
-	unsigned int alignmask;
-	int ret;
+	const struct hash_testvec *template = desc->suite.hash.vecs;
+	unsigned int tcount = desc->suite.hash.count;
+	unsigned int nr_unkeyed, nr_keyed;
+	int err;
 
-	ret = __test_hash(tfm, template, tcount, test_type, 0);
-	if (ret)
-		return ret;
+	/*
+	 * For OPTIONAL_KEY algorithms, we have to do all the unkeyed tests
+	 * first, before setting a key on the tfm.  To make this easier, we
+	 * require that the unkeyed test vectors (if any) are listed first.
+	 */
 
-	/* test unaligned buffers, check with one byte offset */
-	ret = __test_hash(tfm, template, tcount, test_type, 1);
-	if (ret)
-		return ret;
+	for (nr_unkeyed = 0; nr_unkeyed < tcount; nr_unkeyed++) {
+		if (template[nr_unkeyed].ksize)
+			break;
+	}
+	for (nr_keyed = 0; nr_unkeyed + nr_keyed < tcount; nr_keyed++) {
+		if (!template[nr_unkeyed + nr_keyed].ksize) {
+			pr_err("alg: hash: test vectors for %s out of order, "
+			       "unkeyed ones must come first\n", desc->alg);
+			return -EINVAL;
+		}
+	}
 
-	alignmask = crypto_tfm_alg_alignmask(&tfm->base);
-	if (alignmask) {
-		/* Check if alignment mask for tfm is correctly set. */
-		ret = __test_hash(tfm, template, tcount, test_type,
-				  alignmask + 1);
-		if (ret)
-			return ret;
+	err = 0;
+	if (nr_unkeyed) {
+		err = __alg_test_hash(template, nr_unkeyed, driver, type, mask);
+		template += nr_unkeyed;
 	}
 
-	return 0;
+	if (!err && nr_keyed)
+		err = __alg_test_hash(template, nr_keyed, driver, type, mask);
+
+	return err;
 }
 
 static int test_aead_vec_cfg(const char *driver, int enc,
@@ -2113,67 +2061,6 @@ static int alg_test_comp(const struct alg_test_desc *desc, const char *driver,
 	return err;
 }
 
-static int __alg_test_hash(const struct hash_testvec *template,
-			   unsigned int tcount, const char *driver,
-			   u32 type, u32 mask)
-{
-	struct crypto_ahash *tfm;
-	int err;
-
-	tfm = crypto_alloc_ahash(driver, type, mask);
-	if (IS_ERR(tfm)) {
-		printk(KERN_ERR "alg: hash: Failed to load transform for %s: "
-		       "%ld\n", driver, PTR_ERR(tfm));
-		return PTR_ERR(tfm);
-	}
-
-	err = test_hash(tfm, template, tcount, HASH_TEST_DIGEST);
-	if (!err)
-		err = test_hash(tfm, template, tcount, HASH_TEST_FINAL);
-	if (!err)
-		err = test_hash(tfm, template, tcount, HASH_TEST_FINUP);
-	crypto_free_ahash(tfm);
-	return err;
-}
-
-static int alg_test_hash(const struct alg_test_desc *desc, const char *driver,
-			 u32 type, u32 mask)
-{
-	const struct hash_testvec *template = desc->suite.hash.vecs;
-	unsigned int tcount = desc->suite.hash.count;
-	unsigned int nr_unkeyed, nr_keyed;
-	int err;
-
-	/*
-	 * For OPTIONAL_KEY algorithms, we have to do all the unkeyed tests
-	 * first, before setting a key on the tfm.  To make this easier, we
-	 * require that the unkeyed test vectors (if any) are listed first.
-	 */
-
-	for (nr_unkeyed = 0; nr_unkeyed < tcount; nr_unkeyed++) {
-		if (template[nr_unkeyed].ksize)
-			break;
-	}
-	for (nr_keyed = 0; nr_unkeyed + nr_keyed < tcount; nr_keyed++) {
-		if (!template[nr_unkeyed + nr_keyed].ksize) {
-			pr_err("alg: hash: test vectors for %s out of order, "
-			       "unkeyed ones must come first\n", desc->alg);
-			return -EINVAL;
-		}
-	}
-
-	err = 0;
-	if (nr_unkeyed) {
-		err = __alg_test_hash(template, nr_unkeyed, driver, type, mask);
-		template += nr_unkeyed;
-	}
-
-	if (!err && nr_keyed)
-		err = __alg_test_hash(template, nr_keyed, driver, type, mask);
-
-	return err;
-}
-
 static int alg_test_crc32c(const struct alg_test_desc *desc,
 			   const char *driver, u32 type, u32 mask)
 {
@@ -3959,6 +3846,10 @@ static void alg_check_testvec_configs(void)
 	for (i = 0; i < ARRAY_SIZE(default_cipher_testvec_configs); i++)
 		WARN_ON(!valid_testvec_config(
 				&default_cipher_testvec_configs[i]));
+
+	for (i = 0; i < ARRAY_SIZE(default_hash_testvec_configs); i++)
+		WARN_ON(!valid_testvec_config(
+				&default_hash_testvec_configs[i]));
 }
 
 static void testmgr_onetime_init(void)
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index dfd8059d1306d..8f1d30b54a76e 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -5,6 +5,7 @@
  * Copyright (c) 2002 Jean-Francois Dive <jef@linuxbe.org>
  * Copyright (c) 2007 Nokia Siemens Networks
  * Copyright (c) 2008 Herbert Xu <herbert@gondor.apana.org.au>
+ * Copyright (c) 2019 Google LLC
  *
  * Updated RFC4106 AES-GCM testing. Some test vectors were taken from
  * http://csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/
@@ -24,19 +25,20 @@
 #ifndef _CRYPTO_TESTMGR_H
 #define _CRYPTO_TESTMGR_H
 
-#define MAX_DIGEST_SIZE		64
-#define MAX_TAP			8
-
-#define MAX_KEYLEN		1088
 #define MAX_IVLEN		32
 
+/*
+ * hash_testvec:	structure to describe a hash (message digest) test
+ * @key:	Pointer to key (NULL if none)
+ * @plaintext:	Pointer to source data
+ * @digest:	Pointer to expected digest
+ * @psize:	Length of source data in bytes
+ * @ksize:	Length of @key in bytes (0 if no key)
+ */
 struct hash_testvec {
-	/* only used with keyed hash algorithms */
 	const char *key;
 	const char *plaintext;
 	const char *digest;
-	unsigned short tap[MAX_TAP];
-	unsigned short np;
 	unsigned short psize;
 	unsigned short ksize;
 };
@@ -1022,8 +1024,6 @@ static const struct hash_testvec md4_tv_template[] = {
 		.psize	= 26,
 		.digest	= "\xd7\x9e\x1c\x30\x8a\xa5\xbb\xcd"
 			  "\xee\xa8\xed\x63\xdf\x41\x2d\xa9",
-		.np	= 2,
-		.tap	= { 13, 13 },
 	}, {
 		.plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789",
 		.psize	= 62,
@@ -1060,8 +1060,6 @@ static const struct hash_testvec sha3_224_tv_template[] = {
 				"\xc9\xfd\x55\x74\x49\x44\x79\xba"
 				"\x5c\x7e\x7a\xb7\x6e\xf2\x64\xea"
 				"\xd0\xfc\xce\x33",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -1221,8 +1219,6 @@ static const struct hash_testvec sha3_256_tv_template[] = {
 				"\x49\x10\x03\x76\xa8\x23\x5e\x2c"
 				"\x82\xe1\xb9\x99\x8a\x99\x9e\x21"
 				"\xdb\x32\xdd\x97\x49\x6d\x33\x76",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -1389,8 +1385,6 @@ static const struct hash_testvec sha3_384_tv_template[] = {
 				"\x9b\xfd\xbc\x32\xb9\xd4\xad\x5a"
 				"\xa0\x4a\x1f\x07\x6e\x62\xfe\xa1"
 				"\x9e\xef\x51\xac\xd0\x65\x7c\x22",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -1565,8 +1559,6 @@ static const struct hash_testvec sha3_512_tv_template[] = {
 				"\xba\x1b\x0d\x8d\xc7\x8c\x08\x63"
 				"\x46\xb5\x33\xb4\x9c\x03\x0d\x99"
 				"\xa2\x7d\xaf\x11\x39\xd6\xe7\x5e",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -1736,8 +1728,6 @@ static const struct hash_testvec md5_tv_template[] = {
 		.psize	= 26,
 		.digest	= "\xc3\xfc\xd3\xd7\x61\x92\xe4\x00"
 			  "\x7d\xfb\x49\x6c\xca\x67\xe1\x3b",
-		.np	= 2,
-		.tap	= {13, 13}
 	}, {
 		.plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789",
 		.psize	= 62,
@@ -1798,8 +1788,6 @@ static const struct hash_testvec rmd128_tv_template[] = {
 		.psize	= 56,
 		.digest	= "\xa1\xaa\x06\x89\xd0\xfa\xfa\x2d"
 			  "\xdc\x22\xe8\x8b\x49\x13\x3a\x06",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "abcdefghbcdefghicdefghijdefghijkefghijklfghi"
 			     "jklmghijklmnhijklmnoijklmnopjklmnopqklmnopqr"
@@ -1860,8 +1848,6 @@ static const struct hash_testvec rmd160_tv_template[] = {
 		.psize	= 56,
 		.digest	= "\x12\xa0\x53\x38\x4a\x9c\x0c\x88\xe4\x05"
 			  "\xa0\x6c\x27\xdc\xf4\x9a\xda\x62\xeb\x2b",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}, {
 		.plaintext = "abcdefghbcdefghicdefghijdefghijkefghijklfghi"
 			     "jklmghijklmnhijklmnoijklmnopjklmnopqklmnopqr"
@@ -1938,8 +1924,6 @@ static const struct hash_testvec rmd256_tv_template[] = {
 			  "\xc8\xd9\x12\x85\x73\xe7\xa9\x80"
 			  "\x9a\xfb\x2a\x0f\x34\xcc\xc3\x6e"
 			  "\xa9\xe7\x2f\x16\xf6\x36\x8e\x3f",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}
 };
 
@@ -2004,8 +1988,6 @@ static const struct hash_testvec rmd320_tv_template[] = {
 			  "\xb8\x4d\xf7\x69\xa5\xde\x20\x60\xe2\x59"
 			  "\xdf\x4c\x9b\xb4\xa4\x26\x8c\x0e\x93\x5b"
 			  "\xbc\x74\x70\xa9\x69\xc9\xd0\x72\xa1\xac",
-		.np	= 2,
-		.tap	= { 28, 28 },
 	}
 };
 
@@ -2019,26 +2001,11 @@ static const struct hash_testvec crct10dif_tv_template[] = {
 				  "123456789012345678901234567890123456789",
 		.psize		= 79,
 		.digest 	= (u8 *)(u16 []){ 0x4b70 },
-		.np		= 2,
-		.tap		= { 63, 16 },
 	}, {
 		.plaintext	= "abcdddddddddddddddddddddddddddddddddddddddd"
 				  "ddddddddddddd",
 		.psize		= 56,
 		.digest		= (u8 *)(u16 []){ 0x9ce3 },
-		.np		= 8,
-		.tap		= { 1, 2, 28, 7, 6, 5, 4, 3 },
-	}, {
-		.plaintext 	= "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "1234567890123456789012345678901234567890"
-				  "123456789012345678901234567890123456789",
-		.psize		= 319,
-		.digest		= (u8 *)(u16 []){ 0x44c6 },
 	}, {
 		.plaintext 	= "1234567890123456789012345678901234567890"
 				  "1234567890123456789012345678901234567890"
@@ -2050,8 +2017,6 @@ static const struct hash_testvec crct10dif_tv_template[] = {
 				  "123456789012345678901234567890123456789",
 		.psize		= 319,
 		.digest		= (u8 *)(u16 []){ 0x44c6 },
-		.np		= 4,
-		.tap		= { 1, 255, 57, 6 },
 	}, {
 		.plaintext =	"\x6e\x05\x79\x10\xa7\x1b\xb2\x49"
 				"\xe0\x54\xeb\x82\x19\x8d\x24\xbb"
@@ -2517,8 +2482,6 @@ static const struct hash_testvec sha1_tv_template[] = {
 		.psize	= 56,
 		.digest	= "\x84\x98\x3e\x44\x1c\x3b\xd2\x6e\xba\xae"
 			  "\x4a\xa1\xf9\x51\x29\xe5\xe5\x46\x70\xf1",
-		.np	= 2,
-		.tap	= { 28, 28 }
 	}, {
 		.plaintext = "\xec\x29\x56\x12\x44\xed\xe7\x06"
 			     "\xb6\xeb\x30\xa1\xc3\x71\xd7\x44"
@@ -2544,8 +2507,6 @@ static const struct hash_testvec sha1_tv_template[] = {
 		.psize	= 163,
 		.digest	= "\x97\x01\x11\xc4\xe7\x7b\xcc\x88\xcc\x20"
 			  "\x45\x9c\x02\xb6\x9b\x4a\xa8\xf5\x82\x17",
-		.np	= 4,
-		.tap	= { 63, 64, 31, 5 }
 	}, {
 		.plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-",
 		.psize	= 64,
@@ -2714,8 +2675,6 @@ static const struct hash_testvec sha224_tv_template[] = {
 			  "\x5D\xBA\x5D\xA1\xFD\x89\x01\x50"
 			  "\xB0\xC6\x45\x5C\xB4\xF5\x8B\x19"
 			  "\x52\x52\x25\x25",
-		.np     = 2,
-		.tap    = { 28, 28 }
 	}, {
 		.plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-",
 		.psize	= 64,
@@ -2885,8 +2844,6 @@ static const struct hash_testvec sha256_tv_template[] = {
 			  "\xe5\xc0\x26\x93\x0c\x3e\x60\x39"
 			  "\xa3\x3c\xe4\x59\x64\xff\x21\x67"
 			  "\xf6\xec\xed\xd4\x19\xdb\x06\xc1",
-		.np	= 2,
-		.tap	= { 28, 28 }
 	}, {
 		.plaintext = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+-",
 		.psize	= 64,
@@ -3082,8 +3039,6 @@ static const struct hash_testvec sha384_tv_template[] = {
 			  "\x4d\x8f\xd0\x14\xe5\x82\x82\x3a"
 			  "\x89\xe1\x6f\x9b\x2a\x7b\xbc\x1a"
 			  "\xc9\x38\xe2\xd1\x99\xe8\xbe\xa4",
-		.np	= 4,
-		.tap	= { 26, 26, 26, 26 }
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -3284,8 +3239,6 @@ static const struct hash_testvec sha512_tv_template[] = {
 			  "\xb2\x78\xe6\x6d\xff\x8b\x84\xfe"
 			  "\x2b\x28\x70\xf7\x42\xa5\x80\xd8"
 			  "\xed\xb4\x19\x87\x23\x28\x50\xc9",
-		.np	= 4,
-		.tap	= { 26, 26, 26, 26 }
 	}, {
 		.plaintext = "\x08\x9f\x13\xaa\x41\xd8\x4c\xe3"
 			     "\x7a\x11\x85\x1c\xb3\x27\xbe\x55"
@@ -3818,8 +3771,6 @@ static const struct hash_testvec ghash_tv_template[] =
 		.psize	= 28,
 		.digest	= "\x3e\x1f\x5c\x4d\x65\xf0\xef\xce"
 			  "\x0d\x61\x06\x27\x66\x51\xd5\xe2",
-		.np	= 2,
-		.tap	= {14, 14}
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
@@ -3930,8 +3881,6 @@ static const struct hash_testvec hmac_md5_tv_template[] =
 		.psize	= 28,
 		.digest	= "\x75\x0c\x78\x3e\x6a\xb0\xb5\x03"
 			  "\xea\xa8\x6e\x31\x0a\x5d\xb7\x38",
-		.np	= 2,
-		.tap	= {14, 14}
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
 		.ksize	= 16,
@@ -4009,8 +3958,6 @@ static const struct hash_testvec hmac_rmd128_tv_template[] = {
 		.psize	= 28,
 		.digest	= "\x87\x5f\x82\x88\x62\xb6\xb3\x34"
 			  "\xb4\x27\xc5\x5f\x9f\x7f\xf0\x9b",
-		.np	= 2,
-		.tap	= { 14, 14 },
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
 		.ksize	= 16,
@@ -4088,8 +4035,6 @@ static const struct hash_testvec hmac_rmd160_tv_template[] = {
 		.psize	= 28,
 		.digest	= "\xdd\xa6\xc0\x21\x3a\x48\x5a\x9e\x24\xf4"
 			  "\x74\x20\x64\xa7\xf0\x33\xb4\x3c\x40\x69",
-		.np	= 2,
-		.tap	= { 14, 14 },
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
 		.ksize	= 20,
@@ -4168,8 +4113,6 @@ static const struct hash_testvec hmac_sha1_tv_template[] = {
 		.psize	= 28,
 		.digest	= "\xef\xfc\xdf\x6a\xe5\xeb\x2f\xa2\xd2\x74"
 			  "\x16\xd5\xf1\x84\xdf\x9c\x25\x9a\x7c\x79",
-		.np	= 2,
-		.tap	= { 14, 14 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa",
 		.ksize	= 20,
@@ -4259,8 +4202,6 @@ static const struct hash_testvec hmac_sha224_tv_template[] = {
 			"\x45\x69\x0f\x3a\x7e\x9e\x6d\x0f"
 			"\x8b\xbe\xa2\xa3\x9e\x61\x48\x00"
 			"\x8f\xd0\x5e\x44",
-		.np = 4,
-		.tap    = { 7, 7, 7, 7 }
 	}, {
 		.key    = "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			"\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -4404,8 +4345,6 @@ static const struct hash_testvec hmac_sha256_tv_template[] = {
 			  "\x6a\x04\x24\x26\x08\x95\x75\xc7"
 			  "\x5a\x00\x3f\x08\x9d\x27\x39\x83"
 			  "\x9d\xec\x58\xb9\x64\xec\x38\x43",
-		.np	= 2,
-		.tap	= { 14, 14 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			"\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -4578,8 +4517,6 @@ static const struct hash_testvec aes_cbcmac_tv_template[] = {
 				  "\xf8\xf2\x76\x03\xac\x39\xb0\x9d",
 		.psize		= 33,
 		.ksize		= 16,
-		.np		= 2,
-		.tap		= { 7, 26 },
 	}, {
 		.key		= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
 				  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
@@ -4696,9 +4633,7 @@ static const struct hash_testvec aes_xcbc128_tv_template[] = {
 			     "\x10\x11\x12\x13",
 		.digest = "\x47\xf5\x1b\x45\x64\x96\x62\x15"
 			  "\xb8\x98\x5c\x63\x05\x5e\xd3\x08",
-		.tap	= { 10, 10 },
 		.psize	= 20,
-		.np	= 2,
 		.ksize	= 16,
 	}, {
 		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
@@ -4721,9 +4656,7 @@ static const struct hash_testvec aes_xcbc128_tv_template[] = {
 			     "\x20\x21",
 		.digest = "\xbe\xcb\xb3\xbc\xcd\xb5\x18\xa3"
 			  "\x06\x77\xd5\x48\x1f\xb6\xb4\xd8",
-		.tap	= { 17, 17 },
 		.psize	= 34,
-		.np	= 2,
 		.ksize	= 16,
 	}
 };
@@ -4806,8 +4739,6 @@ static const struct hash_testvec vmac64_aes_tv_template[] = {
 			  "abcabcabcabcabcabcabcabcabcabcabcabcabcabcabc",
 		.psize	= 316,
 		.digest	= "\x44\x92\xdf\x6c\x5c\xac\x1b\xbe",
-		.tap	= { 1, 100, 200, 15 },
-		.np	= 4,
 	}, {
 		.key	= "\x00\x01\x02\x03\x04\x05\x06\x07"
 			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
@@ -4912,8 +4843,6 @@ static const struct hash_testvec hmac_sha384_tv_template[] = {
 			  "\xe4\x2e\xc3\x73\x63\x22\x44\x5e"
 			  "\x8e\x22\x40\xca\x5e\x69\xe2\xc7"
 			  "\x8b\x32\x39\xec\xfa\xb2\x16\x49",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -5014,8 +4943,6 @@ static const struct hash_testvec hmac_sha512_tv_template[] = {
 			  "\x6d\x03\x4f\x65\xf8\xf0\xe6\xfd"
 			  "\xca\xea\xb1\xa3\x4d\x4a\x6b\x4b"
 			  "\x63\x6e\x07\x0a\x38\xbc\xe7\x37",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -5111,8 +5038,6 @@ static const struct hash_testvec hmac_sha3_224_tv_template[] = {
 			  "\x1b\x79\x86\x34\xad\x38\x68\x11"
 			  "\xc2\xcf\xc8\x5b\xfa\xf5\xd5\x2b"
 			  "\xba\xce\x5e\x66",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -5200,8 +5125,6 @@ static const struct hash_testvec hmac_sha3_256_tv_template[] = {
 			  "\x35\x96\xbb\xb0\xda\x73\xb8\x87"
 			  "\xc9\x17\x1f\x93\x09\x5b\x29\x4a"
 			  "\xe8\x57\xfb\xe2\x64\x5e\x1b\xa5",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -5293,8 +5216,6 @@ static const struct hash_testvec hmac_sha3_384_tv_template[] = {
 			  "\x3c\xa1\x35\x08\xa9\x32\x43\xce"
 			  "\x48\xc0\x45\xdc\x00\x7f\x26\xa2"
 			  "\x1b\x3f\x5e\x0e\x9d\xf4\xc2\x0a",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -5394,8 +5315,6 @@ static const struct hash_testvec hmac_sha3_512_tv_template[] = {
 			  "\xee\x7a\x0c\x31\xd0\x22\xa9\x5e"
 			  "\x1f\xc9\x2b\xa9\xd7\x7d\xf8\x83"
 			  "\x96\x02\x75\xbe\xb4\xe6\x20\x24",
-		.np	= 4,
-		.tap	= { 7, 7, 7, 7 }
 	}, {
 		.key	= "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
 			  "\xaa\xaa\xaa\xaa\xaa\xaa\xaa\xaa"
@@ -6003,8 +5922,6 @@ static const struct hash_testvec nhpoly1305_tv_template[] = {
 		.psize	= 16,
 		.digest	= "\x04\xbf\x7f\x6a\xce\x72\xea\x6a"
 			  "\x79\xdb\xb0\xc9\x60\xf6\x12\xcc",
-		.np	= 6,
-		.tap	= { 4, 4, 1, 1, 1, 5 },
 	}, {
 		.key	= "\x65\x4d\xe3\xf8\xd2\x4c\xac\x28"
 			  "\x68\xf5\xb3\x81\x71\x4b\xa1\xfa"
@@ -6274,8 +6191,6 @@ static const struct hash_testvec nhpoly1305_tv_template[] = {
 		.psize	= 1024,
 		.digest	= "\x64\x3a\xbc\xc3\x3f\x74\x40\x51"
 			  "\x6e\x56\x01\x1a\x51\xec\x36\xde",
-		.np	= 8,
-		.tap	= { 64, 203, 267, 28, 263, 62, 54, 83 },
 	}, {
 		.key	= "\x1b\x82\x2e\x1b\x17\x23\xb9\x6d"
 			  "\xdc\x9c\xda\x99\x07\xe3\x5f\xd8"
@@ -29461,8 +29376,6 @@ static const struct hash_testvec crc32_tv_template[] = {
 			     "\xe9\xea\xeb\xec\xed\xee\xef\xf0",
 		.psize = 240,
 		.digest = "\x6c\xc6\x56\xde",
-		.np = 2,
-		.tap = { 31, 209 }
 	}, {
 		.key = "\xff\xff\xff\xff",
 		.ksize = 4,
@@ -29902,8 +29815,6 @@ static const struct hash_testvec crc32c_tv_template[] = {
 			     "\xe9\xea\xeb\xec\xed\xee\xef\xf0",
 		.psize = 240,
 		.digest = "\x75\xd3\xc5\x24",
-		.np = 2,
-		.tap = { 31, 209 }
 	}, {
 		.key = "\xff\xff\xff\xff",
 		.ksize = 4,
-- 
2.20.1



* [PATCH v2 14/15] crypto: testmgr - check for skcipher_request corruption
  2019-02-01  7:51 [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Eric Biggers
                   ` (12 preceding siblings ...)
  2019-02-01  7:51 ` [PATCH v2 13/15] crypto: testmgr - convert hash " Eric Biggers
@ 2019-02-01  7:51 ` Eric Biggers
  2019-02-01  7:51 ` [PATCH v2 15/15] crypto: testmgr - check for aead_request corruption Eric Biggers
  2019-02-08  7:47 ` [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Herbert Xu
  15 siblings, 0 replies; 23+ messages in thread
From: Eric Biggers @ 2019-02-01  7:51 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Check that algorithms do not change the skcipher_request structure, as
users may rely on submitting the request again (e.g. after copying new
data into the same source buffer) without reinitializing everything.
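
For illustration, here is a minimal sketch (not part of this patch) of
the caller pattern this check protects.  The tfm, scatterlists, IV, and
data are assumed to be set up elsewhere; real code would also check the
return value of sg_copy_from_buffer():

#include <crypto/skcipher.h>
#include <linux/scatterlist.h>

/* Encrypt twice with one request: refill the source buffer and
 * resubmit without re-running the skcipher_request_set_*() calls.
 * This only works if the algorithm leaves the request intact.
 */
static int encrypt_twice(struct crypto_skcipher *tfm,
			 struct scatterlist *src, struct scatterlist *dst,
			 unsigned int len, u8 *iv, const u8 *new_data)
{
	DECLARE_CRYPTO_WAIT(wait);
	struct skcipher_request *req;
	int err;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req)
		return -ENOMEM;

	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, src, dst, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
	if (err)
		goto out;

	/* Copy new data into the same source buffer... */
	sg_copy_from_buffer(src, sg_nents(src), new_data, len);

	/* ...then submit the very same request again. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
	skcipher_request_free(req);
	return err;
}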

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 41 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 41 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index 96aa268ff4184..c4ee453e8bf53 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1535,6 +1535,47 @@ static int test_skcipher_vec_cfg(const char *driver, int enc,
 		return err;
 	}
 
+	/* Check that the algorithm didn't overwrite things it shouldn't have */
+	if (req->cryptlen != vec->len ||
+	    req->iv != iv ||
+	    req->src != tsgls->src.sgl_ptr ||
+	    req->dst != tsgls->dst.sgl_ptr ||
+	    crypto_skcipher_reqtfm(req) != tfm ||
+	    req->base.complete != crypto_req_done ||
+	    req->base.flags != req_flags ||
+	    req->base.data != &wait) {
+		pr_err("alg: skcipher: %s %s corrupted request struct on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		if (req->cryptlen != vec->len)
+			pr_err("alg: skcipher: changed 'req->cryptlen'\n");
+		if (req->iv != iv)
+			pr_err("alg: skcipher: changed 'req->iv'\n");
+		if (req->src != tsgls->src.sgl_ptr)
+			pr_err("alg: skcipher: changed 'req->src'\n");
+		if (req->dst != tsgls->dst.sgl_ptr)
+			pr_err("alg: skcipher: changed 'req->dst'\n");
+		if (crypto_skcipher_reqtfm(req) != tfm)
+			pr_err("alg: skcipher: changed 'req->base.tfm'\n");
+		if (req->base.complete != crypto_req_done)
+			pr_err("alg: skcipher: changed 'req->base.complete'\n");
+		if (req->base.flags != req_flags)
+			pr_err("alg: skcipher: changed 'req->base.flags'\n");
+		if (req->base.data != &wait)
+			pr_err("alg: skcipher: changed 'req->base.data'\n");
+		return -EINVAL;
+	}
+	if (is_test_sglist_corrupted(&tsgls->src)) {
+		pr_err("alg: skcipher: %s %s corrupted src sgl on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
+	}
+	if (tsgls->dst.sgl_ptr != tsgls->src.sgl &&
+	    is_test_sglist_corrupted(&tsgls->dst)) {
+		pr_err("alg: skcipher: %s %s corrupted dst sgl on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
+	}
+
 	/* Check for the correct output (ciphertext or plaintext) */
 	err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
 				    vec->len, 0, true);
-- 
2.20.1



* [PATCH v2 15/15] crypto: testmgr - check for aead_request corruption
  2019-02-01  7:51 [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Eric Biggers
                   ` (13 preceding siblings ...)
  2019-02-01  7:51 ` [PATCH v2 14/15] crypto: testmgr - check for skcipher_request corruption Eric Biggers
@ 2019-02-01  7:51 ` Eric Biggers
  2019-02-08  7:47 ` [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Herbert Xu
  15 siblings, 0 replies; 23+ messages in thread
From: Eric Biggers @ 2019-02-01  7:51 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: linux-kernel

From: Eric Biggers <ebiggers@google.com>

Check that algorithms do not change the aead_request structure, as users
may rely on submitting the request again (e.g. after copying new data
into the same source buffer) without reinitializing everything.
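
As with the skcipher case above, the reuse pattern being guarded looks
roughly like this (an illustrative sketch; 'assoclen', 'ptlen', and the
buffers are placeholders):

	/* One-time request setup. */
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				  crypto_req_done, &wait);
	aead_request_set_ad(req, assoclen);
	aead_request_set_crypt(req, sg, sg, ptlen, iv);

	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

	/* Refill the same buffer and resubmit the unmodified request. */
	sg_copy_from_buffer(sg, 1, new_data, assoclen + ptlen);
	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);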

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/testmgr.c | 44 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 44 insertions(+)

diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index c4ee453e8bf53..d2be79e9de6fc 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -1253,6 +1253,50 @@ static int test_aead_vec_cfg(const char *driver, int enc,
 		return -EINVAL;
 	}
 
+	/* Check that the algorithm didn't overwrite things it shouldn't have */
+	if (req->cryptlen != (enc ? vec->plen : vec->clen) ||
+	    req->assoclen != vec->alen ||
+	    req->iv != iv ||
+	    req->src != tsgls->src.sgl_ptr ||
+	    req->dst != tsgls->dst.sgl_ptr ||
+	    crypto_aead_reqtfm(req) != tfm ||
+	    req->base.complete != crypto_req_done ||
+	    req->base.flags != req_flags ||
+	    req->base.data != &wait) {
+		pr_err("alg: aead: %s %s corrupted request struct on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		if (req->cryptlen != (enc ? vec->plen : vec->clen))
+			pr_err("alg: aead: changed 'req->cryptlen'\n");
+		if (req->assoclen != vec->alen)
+			pr_err("alg: aead: changed 'req->assoclen'\n");
+		if (req->iv != iv)
+			pr_err("alg: aead: changed 'req->iv'\n");
+		if (req->src != tsgls->src.sgl_ptr)
+			pr_err("alg: aead: changed 'req->src'\n");
+		if (req->dst != tsgls->dst.sgl_ptr)
+			pr_err("alg: aead: changed 'req->dst'\n");
+		if (crypto_aead_reqtfm(req) != tfm)
+			pr_err("alg: aead: changed 'req->base.tfm'\n");
+		if (req->base.complete != crypto_req_done)
+			pr_err("alg: aead: changed 'req->base.complete'\n");
+		if (req->base.flags != req_flags)
+			pr_err("alg: aead: changed 'req->base.flags'\n");
+		if (req->base.data != &wait)
+			pr_err("alg: aead: changed 'req->base.data'\n");
+		return -EINVAL;
+	}
+	if (is_test_sglist_corrupted(&tsgls->src)) {
+		pr_err("alg: aead: %s %s corrupted src sgl on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
+	}
+	if (tsgls->dst.sgl_ptr != tsgls->src.sgl &&
+	    is_test_sglist_corrupted(&tsgls->dst)) {
+		pr_err("alg: aead: %s %s corrupted dst sgl on test vector %u, cfg=\"%s\"\n",
+		       driver, op, vec_num, cfg->name);
+		return -EINVAL;
+	}
+
 	/* Check for the correct output (ciphertext or plaintext) */
 	err = verify_correct_output(&tsgls->dst, enc ? vec->ctext : vec->ptext,
 				    enc ? vec->clen : vec->plen,
-- 
2.20.1



* Re: [PATCH v2 02/15] crypto: morus - fix handling chunked inputs
  2019-02-01  7:51 ` [PATCH v2 02/15] crypto: morus " Eric Biggers
@ 2019-02-05  9:30   ` Ondrej Mosnacek
  0 siblings, 0 replies; 23+ messages in thread
From: Ondrej Mosnacek @ 2019-02-05  9:30 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu, Linux kernel mailing list, stable

On Fri, Feb 1, 2019 at 8:52 AM Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The generic MORUS implementations all fail the improved AEAD tests
> because they produce the wrong result with some data layouts.  The issue
> is that they assume that if the skcipher_walk API gives 'nbytes' not
> aligned to the walksize (a.k.a. walk.stride), then it is the end of the
> data.  In fact, this can happen before the end.  Fix them.
>
> Fixes: 396be41f16fd ("crypto: morus - Add generic MORUS AEAD implementations")
> Cc: <stable@vger.kernel.org> # v4.18+
> Cc: Ondrej Mosnacek <omosnace@redhat.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>


> ---
>  crypto/morus1280.c | 13 +++++++------
>  crypto/morus640.c  | 13 +++++++------
>  2 files changed, 14 insertions(+), 12 deletions(-)
>
> diff --git a/crypto/morus1280.c b/crypto/morus1280.c
> index 78ba09db7328c..0747732d5b78a 100644
> --- a/crypto/morus1280.c
> +++ b/crypto/morus1280.c
> @@ -362,18 +362,19 @@ static void crypto_morus1280_process_crypt(struct morus1280_state *state,
>                                            const struct morus1280_ops *ops)
>  {
>         struct skcipher_walk walk;
> -       u8 *dst;
> -       const u8 *src;
>
>         ops->skcipher_walk_init(&walk, req, false);
>
>         while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> +               unsigned int nbytes = walk.nbytes;
>
> -               ops->crypt_chunk(state, dst, src, walk.nbytes);
> +               if (nbytes < walk.total)
> +                       nbytes = round_down(nbytes, walk.stride);
>
> -               skcipher_walk_done(&walk, 0);
> +               ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
> +                                nbytes);
> +
> +               skcipher_walk_done(&walk, walk.nbytes - nbytes);
>         }
>  }
>
> diff --git a/crypto/morus640.c b/crypto/morus640.c
> index 5cf530139b271..1617a1eb8be13 100644
> --- a/crypto/morus640.c
> +++ b/crypto/morus640.c
> @@ -361,18 +361,19 @@ static void crypto_morus640_process_crypt(struct morus640_state *state,
>                                           const struct morus640_ops *ops)
>  {
>         struct skcipher_walk walk;
> -       u8 *dst;
> -       const u8 *src;
>
>         ops->skcipher_walk_init(&walk, req, false);
>
>         while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> +               unsigned int nbytes = walk.nbytes;
>
> -               ops->crypt_chunk(state, dst, src, walk.nbytes);
> +               if (nbytes < walk.total)
> +                       nbytes = round_down(nbytes, walk.stride);
>
> -               skcipher_walk_done(&walk, 0);
> +               ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
> +                                nbytes);
> +
> +               skcipher_walk_done(&walk, walk.nbytes - nbytes);
>         }
>  }
>
> --
> 2.20.1
>


--
Ondrej Mosnacek <omosnace at redhat dot com>
Associate Software Engineer, Security Technologies
Red Hat, Inc.
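
Taken out of the diff context, the corrected walk loop that the generic
AEGIS and MORUS fixes converge on looks like this (a minimal sketch;
'ops', 'state', and 'crypt_chunk' stand in for the per-algorithm pieces):

	struct skcipher_walk walk;

	ops->skcipher_walk_init(&walk, req, false);

	while (walk.nbytes) {
		unsigned int nbytes = walk.nbytes;

		/*
		 * A chunk smaller than walk.stride marks the end of the
		 * data only when it is the final chunk; otherwise, process
		 * whole strides and hand the remainder back to the walk.
		 */
		if (nbytes < walk.total)
			nbytes = round_down(nbytes, walk.stride);

		ops->crypt_chunk(state, walk.dst.virt.addr,
				 walk.src.virt.addr, nbytes);

		skcipher_walk_done(&walk, walk.nbytes - nbytes);
	}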


* Re: [PATCH v2 01/15] crypto: aegis - fix handling chunked inputs
  2019-02-01  7:51 ` [PATCH v2 01/15] crypto: aegis - fix handling chunked inputs Eric Biggers
@ 2019-02-05  9:31   ` Ondrej Mosnacek
  0 siblings, 0 replies; 23+ messages in thread
From: Ondrej Mosnacek @ 2019-02-05  9:31 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu, Linux kernel mailing list, stable

On Fri, Feb 1, 2019 at 8:52 AM Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The generic AEGIS implementations all fail the improved AEAD tests
> because they produce the wrong result with some data layouts.  The issue
> is that they assume that if the skcipher_walk API gives 'nbytes' not
> aligned to the walksize (a.k.a. walk.stride), then it is the end of the
> data.  In fact, this can happen before the end.  Fix them.
>
> Fixes: f606a88e5823 ("crypto: aegis - Add generic AEGIS AEAD implementations")
> Cc: <stable@vger.kernel.org> # v4.18+
> Cc: Ondrej Mosnacek <omosnace@redhat.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>

> ---
>  crypto/aegis128.c  | 14 +++++++-------
>  crypto/aegis128l.c | 14 +++++++-------
>  crypto/aegis256.c  | 14 +++++++-------
>  3 files changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/crypto/aegis128.c b/crypto/aegis128.c
> index 96e078a8a00a1..3718a83413032 100644
> --- a/crypto/aegis128.c
> +++ b/crypto/aegis128.c
> @@ -286,19 +286,19 @@ static void crypto_aegis128_process_crypt(struct aegis_state *state,
>                                           const struct aegis128_ops *ops)
>  {
>         struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize;
>
>         ops->skcipher_walk_init(&walk, req, false);
>
>         while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> +               unsigned int nbytes = walk.nbytes;
>
> -               ops->crypt_chunk(state, dst, src, chunksize);
> +               if (nbytes < walk.total)
> +                       nbytes = round_down(nbytes, walk.stride);
>
> -               skcipher_walk_done(&walk, 0);
> +               ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
> +                                nbytes);
> +
> +               skcipher_walk_done(&walk, walk.nbytes - nbytes);
>         }
>  }
>
> diff --git a/crypto/aegis128l.c b/crypto/aegis128l.c
> index a210e779b911b..275a8616d71bd 100644
> --- a/crypto/aegis128l.c
> +++ b/crypto/aegis128l.c
> @@ -349,19 +349,19 @@ static void crypto_aegis128l_process_crypt(struct aegis_state *state,
>                                            const struct aegis128l_ops *ops)
>  {
>         struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize;
>
>         ops->skcipher_walk_init(&walk, req, false);
>
>         while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> +               unsigned int nbytes = walk.nbytes;
>
> -               ops->crypt_chunk(state, dst, src, chunksize);
> +               if (nbytes < walk.total)
> +                       nbytes = round_down(nbytes, walk.stride);
>
> -               skcipher_walk_done(&walk, 0);
> +               ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
> +                                nbytes);
> +
> +               skcipher_walk_done(&walk, walk.nbytes - nbytes);
>         }
>  }
>
> diff --git a/crypto/aegis256.c b/crypto/aegis256.c
> index 49882a28e93e9..ecd6b7f34a2d2 100644
> --- a/crypto/aegis256.c
> +++ b/crypto/aegis256.c
> @@ -299,19 +299,19 @@ static void crypto_aegis256_process_crypt(struct aegis_state *state,
>                                           const struct aegis256_ops *ops)
>  {
>         struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize;
>
>         ops->skcipher_walk_init(&walk, req, false);
>
>         while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> +               unsigned int nbytes = walk.nbytes;
>
> -               ops->crypt_chunk(state, dst, src, chunksize);
> +               if (nbytes < walk.total)
> +                       nbytes = round_down(nbytes, walk.stride);
>
> -               skcipher_walk_done(&walk, 0);
> +               ops->crypt_chunk(state, walk.dst.virt.addr, walk.src.virt.addr,
> +                                nbytes);
> +
> +               skcipher_walk_done(&walk, walk.nbytes - nbytes);
>         }
>  }
>
> --
> 2.20.1
>


-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Associate Software Engineer, Security Technologies
Red Hat, Inc.


* Re: [PATCH v2 03/15] crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP
  2019-02-01  7:51 ` [PATCH v2 03/15] crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP Eric Biggers
@ 2019-02-05  9:31   ` Ondrej Mosnacek
  0 siblings, 0 replies; 23+ messages in thread
From: Ondrej Mosnacek @ 2019-02-05  9:31 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu, Linux kernel mailing list, stable

On Fri, Feb 1, 2019 at 8:52 AM Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The x86 AEGIS implementations all fail the improved AEAD tests because
> they produce the wrong result with some data layouts.  The issue is that
> they assume that if the skcipher_walk API gives 'nbytes' not aligned to
> the walksize (a.k.a. walk.stride), then it is the end of the data.  In
> fact, this can happen before the end.
>
> Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
> incorrectly sleep in the skcipher_walk_*() functions while preemption
> has been disabled by kernel_fpu_begin().
>
> Fix these bugs.
>
> Fixes: 1d373d4e8e15 ("crypto: x86 - Add optimized AEGIS implementations")
> Cc: <stable@vger.kernel.org> # v4.18+
> Cc: Ondrej Mosnacek <omosnace@redhat.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>

> ---
>  arch/x86/crypto/aegis128-aesni-glue.c  | 38 ++++++++++----------------
>  arch/x86/crypto/aegis128l-aesni-glue.c | 38 ++++++++++----------------
>  arch/x86/crypto/aegis256-aesni-glue.c  | 38 ++++++++++----------------
>  3 files changed, 45 insertions(+), 69 deletions(-)
>
> diff --git a/arch/x86/crypto/aegis128-aesni-glue.c b/arch/x86/crypto/aegis128-aesni-glue.c
> index 2a356b948720e..3ea71b8718135 100644
> --- a/arch/x86/crypto/aegis128-aesni-glue.c
> +++ b/arch/x86/crypto/aegis128-aesni-glue.c
> @@ -119,31 +119,20 @@ static void crypto_aegis128_aesni_process_ad(
>  }
>
>  static void crypto_aegis128_aesni_process_crypt(
> -               struct aegis_state *state, struct aead_request *req,
> +               struct aegis_state *state, struct skcipher_walk *walk,
>                 const struct aegis_crypt_ops *ops)
>  {
> -       struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize, base;
> -
> -       ops->skcipher_walk_init(&walk, req, false);
> -
> -       while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> -
> -               ops->crypt_blocks(state, chunksize, src, dst);
> -
> -               base = chunksize & ~(AEGIS128_BLOCK_SIZE - 1);
> -               src += base;
> -               dst += base;
> -               chunksize &= AEGIS128_BLOCK_SIZE - 1;
> -
> -               if (chunksize > 0)
> -                       ops->crypt_tail(state, chunksize, src, dst);
> +       while (walk->nbytes >= AEGIS128_BLOCK_SIZE) {
> +               ops->crypt_blocks(state,
> +                                 round_down(walk->nbytes, AEGIS128_BLOCK_SIZE),
> +                                 walk->src.virt.addr, walk->dst.virt.addr);
> +               skcipher_walk_done(walk, walk->nbytes % AEGIS128_BLOCK_SIZE);
> +       }
>
> -               skcipher_walk_done(&walk, 0);
> +       if (walk->nbytes) {
> +               ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
> +                               walk->dst.virt.addr);
> +               skcipher_walk_done(walk, 0);
>         }
>  }
>
> @@ -186,13 +175,16 @@ static void crypto_aegis128_aesni_crypt(struct aead_request *req,
>  {
>         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>         struct aegis_ctx *ctx = crypto_aegis128_aesni_ctx(tfm);
> +       struct skcipher_walk walk;
>         struct aegis_state state;
>
> +       ops->skcipher_walk_init(&walk, req, true);
> +
>         kernel_fpu_begin();
>
>         crypto_aegis128_aesni_init(&state, ctx->key.bytes, req->iv);
>         crypto_aegis128_aesni_process_ad(&state, req->src, req->assoclen);
> -       crypto_aegis128_aesni_process_crypt(&state, req, ops);
> +       crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
>         crypto_aegis128_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
>
>         kernel_fpu_end();
> diff --git a/arch/x86/crypto/aegis128l-aesni-glue.c b/arch/x86/crypto/aegis128l-aesni-glue.c
> index dbe8bb980da15..1b1b39c66c5e2 100644
> --- a/arch/x86/crypto/aegis128l-aesni-glue.c
> +++ b/arch/x86/crypto/aegis128l-aesni-glue.c
> @@ -119,31 +119,20 @@ static void crypto_aegis128l_aesni_process_ad(
>  }
>
>  static void crypto_aegis128l_aesni_process_crypt(
> -               struct aegis_state *state, struct aead_request *req,
> +               struct aegis_state *state, struct skcipher_walk *walk,
>                 const struct aegis_crypt_ops *ops)
>  {
> -       struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize, base;
> -
> -       ops->skcipher_walk_init(&walk, req, false);
> -
> -       while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> -
> -               ops->crypt_blocks(state, chunksize, src, dst);
> -
> -               base = chunksize & ~(AEGIS128L_BLOCK_SIZE - 1);
> -               src += base;
> -               dst += base;
> -               chunksize &= AEGIS128L_BLOCK_SIZE - 1;
> -
> -               if (chunksize > 0)
> -                       ops->crypt_tail(state, chunksize, src, dst);
> +       while (walk->nbytes >= AEGIS128L_BLOCK_SIZE) {
> +               ops->crypt_blocks(state, round_down(walk->nbytes,
> +                                                   AEGIS128L_BLOCK_SIZE),
> +                                 walk->src.virt.addr, walk->dst.virt.addr);
> +               skcipher_walk_done(walk, walk->nbytes % AEGIS128L_BLOCK_SIZE);
> +       }
>
> -               skcipher_walk_done(&walk, 0);
> +       if (walk->nbytes) {
> +               ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
> +                               walk->dst.virt.addr);
> +               skcipher_walk_done(walk, 0);
>         }
>  }
>
> @@ -186,13 +175,16 @@ static void crypto_aegis128l_aesni_crypt(struct aead_request *req,
>  {
>         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>         struct aegis_ctx *ctx = crypto_aegis128l_aesni_ctx(tfm);
> +       struct skcipher_walk walk;
>         struct aegis_state state;
>
> +       ops->skcipher_walk_init(&walk, req, true);
> +
>         kernel_fpu_begin();
>
>         crypto_aegis128l_aesni_init(&state, ctx->key.bytes, req->iv);
>         crypto_aegis128l_aesni_process_ad(&state, req->src, req->assoclen);
> -       crypto_aegis128l_aesni_process_crypt(&state, req, ops);
> +       crypto_aegis128l_aesni_process_crypt(&state, &walk, ops);
>         crypto_aegis128l_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
>
>         kernel_fpu_end();
> diff --git a/arch/x86/crypto/aegis256-aesni-glue.c b/arch/x86/crypto/aegis256-aesni-glue.c
> index 8bebda2de92fe..6227ca3220a05 100644
> --- a/arch/x86/crypto/aegis256-aesni-glue.c
> +++ b/arch/x86/crypto/aegis256-aesni-glue.c
> @@ -119,31 +119,20 @@ static void crypto_aegis256_aesni_process_ad(
>  }
>
>  static void crypto_aegis256_aesni_process_crypt(
> -               struct aegis_state *state, struct aead_request *req,
> +               struct aegis_state *state, struct skcipher_walk *walk,
>                 const struct aegis_crypt_ops *ops)
>  {
> -       struct skcipher_walk walk;
> -       u8 *src, *dst;
> -       unsigned int chunksize, base;
> -
> -       ops->skcipher_walk_init(&walk, req, false);
> -
> -       while (walk.nbytes) {
> -               src = walk.src.virt.addr;
> -               dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> -
> -               ops->crypt_blocks(state, chunksize, src, dst);
> -
> -               base = chunksize & ~(AEGIS256_BLOCK_SIZE - 1);
> -               src += base;
> -               dst += base;
> -               chunksize &= AEGIS256_BLOCK_SIZE - 1;
> -
> -               if (chunksize > 0)
> -                       ops->crypt_tail(state, chunksize, src, dst);
> +       while (walk->nbytes >= AEGIS256_BLOCK_SIZE) {
> +               ops->crypt_blocks(state,
> +                                 round_down(walk->nbytes, AEGIS256_BLOCK_SIZE),
> +                                 walk->src.virt.addr, walk->dst.virt.addr);
> +               skcipher_walk_done(walk, walk->nbytes % AEGIS256_BLOCK_SIZE);
> +       }
>
> -               skcipher_walk_done(&walk, 0);
> +       if (walk->nbytes) {
> +               ops->crypt_tail(state, walk->nbytes, walk->src.virt.addr,
> +                               walk->dst.virt.addr);
> +               skcipher_walk_done(walk, 0);
>         }
>  }
>
> @@ -186,13 +175,16 @@ static void crypto_aegis256_aesni_crypt(struct aead_request *req,
>  {
>         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>         struct aegis_ctx *ctx = crypto_aegis256_aesni_ctx(tfm);
> +       struct skcipher_walk walk;
>         struct aegis_state state;
>
> +       ops->skcipher_walk_init(&walk, req, true);
> +
>         kernel_fpu_begin();
>
>         crypto_aegis256_aesni_init(&state, ctx->key, req->iv);
>         crypto_aegis256_aesni_process_ad(&state, req->src, req->assoclen);
> -       crypto_aegis256_aesni_process_crypt(&state, req, ops);
> +       crypto_aegis256_aesni_process_crypt(&state, &walk, ops);
>         crypto_aegis256_aesni_final(&state, tag_xor, req->assoclen, cryptlen);
>
>         kernel_fpu_end();
> --
> 2.20.1
>


-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Associate Software Engineer, Security Technologies
Red Hat, Inc.
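
The MAY_SLEEP half of the x86 fixes is purely an ordering change: the
walk is now set up before preemption is disabled, with atomic=true so
that none of the later skcipher_walk_*() calls can sleep inside the FPU
region. In isolation (a sketch; the AEGIS-specific steps are elided):

	struct skcipher_walk walk;
	struct aegis_state state;

	/*
	 * Set up the walk before disabling preemption, and pass
	 * atomic=true so that the skcipher_walk_done() calls made
	 * inside the FPU region never sleep.
	 */
	ops->skcipher_walk_init(&walk, req, true);

	kernel_fpu_begin();
	/* ... init state, process associated data ... */
	crypto_aegis128_aesni_process_crypt(&state, &walk, ops);
	/* ... finalize the tag ... */
	kernel_fpu_end();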


* Re: [PATCH v2 04/15] crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP
  2019-02-01  7:51 ` [PATCH v2 04/15] crypto: x86/morus " Eric Biggers
@ 2019-02-05  9:32   ` Ondrej Mosnacek
  0 siblings, 0 replies; 23+ messages in thread
From: Ondrej Mosnacek @ 2019-02-05  9:32 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu, Linux kernel mailing list, stable

On Fri, Feb 1, 2019 at 8:52 AM Eric Biggers <ebiggers@kernel.org> wrote:
> From: Eric Biggers <ebiggers@google.com>
>
> The x86 MORUS implementations all fail the improved AEAD tests because
> they produce the wrong result with some data layouts.  The issue is that
> they assume that if the skcipher_walk API gives 'nbytes' not aligned to
> the walksize (a.k.a. walk.stride), then it is the end of the data.  In
> fact, this can happen before the end.
>
> Also, when the CRYPTO_TFM_REQ_MAY_SLEEP flag is given, they can
> incorrectly sleep in the skcipher_walk_*() functions while preemption
> has been disabled by kernel_fpu_begin().
>
> Fix these bugs.
>
> Fixes: 56e8e57fc3a7 ("crypto: morus - Add common SIMD glue code for MORUS")
> Cc: <stable@vger.kernel.org> # v4.18+
> Cc: Ondrej Mosnacek <omosnace@redhat.com>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Reviewed-by: Ondrej Mosnacek <omosnace@redhat.com>

> ---
>  arch/x86/crypto/morus1280_glue.c | 40 +++++++++++++-------------------
>  arch/x86/crypto/morus640_glue.c  | 39 ++++++++++++-------------------
>  2 files changed, 31 insertions(+), 48 deletions(-)
>
> diff --git a/arch/x86/crypto/morus1280_glue.c b/arch/x86/crypto/morus1280_glue.c
> index 0dccdda1eb3a1..7e600f8bcdad8 100644
> --- a/arch/x86/crypto/morus1280_glue.c
> +++ b/arch/x86/crypto/morus1280_glue.c
> @@ -85,31 +85,20 @@ static void crypto_morus1280_glue_process_ad(
>
>  static void crypto_morus1280_glue_process_crypt(struct morus1280_state *state,
>                                                 struct morus1280_ops ops,
> -                                               struct aead_request *req)
> +                                               struct skcipher_walk *walk)
>  {
> -       struct skcipher_walk walk;
> -       u8 *cursor_src, *cursor_dst;
> -       unsigned int chunksize, base;
> -
> -       ops.skcipher_walk_init(&walk, req, false);
> -
> -       while (walk.nbytes) {
> -               cursor_src = walk.src.virt.addr;
> -               cursor_dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> -
> -               ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
> -
> -               base = chunksize & ~(MORUS1280_BLOCK_SIZE - 1);
> -               cursor_src += base;
> -               cursor_dst += base;
> -               chunksize &= MORUS1280_BLOCK_SIZE - 1;
> -
> -               if (chunksize > 0)
> -                       ops.crypt_tail(state, cursor_src, cursor_dst,
> -                                      chunksize);
> +       while (walk->nbytes >= MORUS1280_BLOCK_SIZE) {
> +               ops.crypt_blocks(state, walk->src.virt.addr,
> +                                walk->dst.virt.addr,
> +                                round_down(walk->nbytes,
> +                                           MORUS1280_BLOCK_SIZE));
> +               skcipher_walk_done(walk, walk->nbytes % MORUS1280_BLOCK_SIZE);
> +       }
>
> -               skcipher_walk_done(&walk, 0);
> +       if (walk->nbytes) {
> +               ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
> +                              walk->nbytes);
> +               skcipher_walk_done(walk, 0);
>         }
>  }
>
> @@ -147,12 +136,15 @@ static void crypto_morus1280_glue_crypt(struct aead_request *req,
>         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>         struct morus1280_ctx *ctx = crypto_aead_ctx(tfm);
>         struct morus1280_state state;
> +       struct skcipher_walk walk;
> +
> +       ops.skcipher_walk_init(&walk, req, true);
>
>         kernel_fpu_begin();
>
>         ctx->ops->init(&state, &ctx->key, req->iv);
>         crypto_morus1280_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
> -       crypto_morus1280_glue_process_crypt(&state, ops, req);
> +       crypto_morus1280_glue_process_crypt(&state, ops, &walk);
>         ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
>
>         kernel_fpu_end();
> diff --git a/arch/x86/crypto/morus640_glue.c b/arch/x86/crypto/morus640_glue.c
> index 7b58fe4d9bd1a..cb3a817320160 100644
> --- a/arch/x86/crypto/morus640_glue.c
> +++ b/arch/x86/crypto/morus640_glue.c
> @@ -85,31 +85,19 @@ static void crypto_morus640_glue_process_ad(
>
>  static void crypto_morus640_glue_process_crypt(struct morus640_state *state,
>                                                struct morus640_ops ops,
> -                                              struct aead_request *req)
> +                                              struct skcipher_walk *walk)
>  {
> -       struct skcipher_walk walk;
> -       u8 *cursor_src, *cursor_dst;
> -       unsigned int chunksize, base;
> -
> -       ops.skcipher_walk_init(&walk, req, false);
> -
> -       while (walk.nbytes) {
> -               cursor_src = walk.src.virt.addr;
> -               cursor_dst = walk.dst.virt.addr;
> -               chunksize = walk.nbytes;
> -
> -               ops.crypt_blocks(state, cursor_src, cursor_dst, chunksize);
> -
> -               base = chunksize & ~(MORUS640_BLOCK_SIZE - 1);
> -               cursor_src += base;
> -               cursor_dst += base;
> -               chunksize &= MORUS640_BLOCK_SIZE - 1;
> -
> -               if (chunksize > 0)
> -                       ops.crypt_tail(state, cursor_src, cursor_dst,
> -                                      chunksize);
> +       while (walk->nbytes >= MORUS640_BLOCK_SIZE) {
> +               ops.crypt_blocks(state, walk->src.virt.addr,
> +                                walk->dst.virt.addr,
> +                                round_down(walk->nbytes, MORUS640_BLOCK_SIZE));
> +               skcipher_walk_done(walk, walk->nbytes % MORUS640_BLOCK_SIZE);
> +       }
>
> -               skcipher_walk_done(&walk, 0);
> +       if (walk->nbytes) {
> +               ops.crypt_tail(state, walk->src.virt.addr, walk->dst.virt.addr,
> +                              walk->nbytes);
> +               skcipher_walk_done(walk, 0);
>         }
>  }
>
> @@ -143,12 +131,15 @@ static void crypto_morus640_glue_crypt(struct aead_request *req,
>         struct crypto_aead *tfm = crypto_aead_reqtfm(req);
>         struct morus640_ctx *ctx = crypto_aead_ctx(tfm);
>         struct morus640_state state;
> +       struct skcipher_walk walk;
> +
> +       ops.skcipher_walk_init(&walk, req, true);
>
>         kernel_fpu_begin();
>
>         ctx->ops->init(&state, &ctx->key, req->iv);
>         crypto_morus640_glue_process_ad(&state, ctx->ops, req->src, req->assoclen);
> -       crypto_morus640_glue_process_crypt(&state, ops, req);
> +       crypto_morus640_glue_process_crypt(&state, ops, &walk);
>         ctx->ops->final(&state, tag_xor, req->assoclen, cryptlen);
>
>         kernel_fpu_end();
> --
> 2.20.1
>


-- 
Ondrej Mosnacek <omosnace at redhat dot com>
Associate Software Engineer, Security Technologies
Red Hat, Inc.


* Re: [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests
  2019-02-01  7:51 [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Eric Biggers
                   ` (14 preceding siblings ...)
  2019-02-01  7:51 ` [PATCH v2 15/15] crypto: testmgr - check for aead_request corruption Eric Biggers
@ 2019-02-08  7:47 ` Herbert Xu
  15 siblings, 0 replies; 23+ messages in thread
From: Herbert Xu @ 2019-02-08  7:47 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, linux-kernel

On Thu, Jan 31, 2019 at 11:51:35PM -0800, Eric Biggers wrote:
> Hello,
> 
> [...]
> 
> Eric Biggers (15):
>   crypto: aegis - fix handling chunked inputs
>   crypto: morus - fix handling chunked inputs
>   crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP
>   crypto: x86/morus - fix handling chunked inputs and MAY_SLEEP
>   crypto: x86/aesni-gcm - fix crash on empty plaintext
>   crypto: ahash - fix another early termination in hash walk
>   crypto: arm64/aes-neonbs - fix returning final keystream block
>   crypto: testmgr - add testvec_config struct and helper functions
>   crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS
>   crypto: testmgr - implement random testvec_config generation
>   crypto: testmgr - convert skcipher testing to use testvec_configs
>   crypto: testmgr - convert aead testing to use testvec_configs
>   crypto: testmgr - convert hash testing to use testvec_configs
>   crypto: testmgr - check for skcipher_request corruption
>   crypto: testmgr - check for aead_request corruption
> 
>  arch/arm64/crypto/aes-neonbs-core.S    |    8 +-
>  arch/x86/crypto/aegis128-aesni-glue.c  |   38 +-
>  arch/x86/crypto/aegis128l-aesni-glue.c |   38 +-
>  arch/x86/crypto/aegis256-aesni-glue.c  |   38 +-
>  arch/x86/crypto/aesni-intel_glue.c     |   13 +-
>  arch/x86/crypto/morus1280_glue.c       |   40 +-
>  arch/x86/crypto/morus640_glue.c        |   39 +-
>  crypto/Kconfig                         |   10 +
>  crypto/aegis128.c                      |   14 +-
>  crypto/aegis128l.c                     |   14 +-
>  crypto/aegis256.c                      |   14 +-
>  crypto/ahash.c                         |   14 +-
>  crypto/morus1280.c                     |   13 +-
>  crypto/morus640.c                      |   13 +-
>  crypto/testmgr.c                       | 2549 +++++++++++++-----------
>  crypto/testmgr.h                       |  407 +---
>  16 files changed, 1556 insertions(+), 1706 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH v2 13/15] crypto: testmgr - convert hash testing to use testvec_configs
  2019-02-01  7:51 ` [PATCH v2 13/15] crypto: testmgr - convert hash " Eric Biggers
@ 2019-08-29 15:32   ` Christophe Leroy
  2019-08-29 15:58     ` Eric Biggers
  0 siblings, 1 reply; 23+ messages in thread
From: Christophe Leroy @ 2019-08-29 15:32 UTC (permalink / raw)
  To: Eric Biggers, linux-crypto, Herbert Xu; +Cc: linux-kernel

Hi Eric,


On 01/02/2019 at 08:51, Eric Biggers wrote:
> From: Eric Biggers <ebiggers@google.com>
> 
> Convert alg_test_hash() to use the new test framework, adding a list of
> testvec_configs to test by default.  When the extra self-tests are
> enabled, randomly generated testvec_configs are tested as well.
> 
> This improves hash test coverage mainly because now all algorithms have
> a variety of data layouts tested, whereas before each algorithm was
> responsible for declaring its own chunked test cases which were often
> missing or provided poor test coverage.  The new code also tests both
> the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.
> 
> This already found bugs in the hash walk code and in the arm32 and arm64
> implementations of crct10dif.
> 
> I removed the hash chunked test vectors that were the same as
> non-chunked ones, but left the ones that were unique.
> 
> Signed-off-by: Eric Biggers <ebiggers@google.com>
> ---
>   crypto/testmgr.c | 795 ++++++++++++++++++++---------------------------
>   crypto/testmgr.h | 107 +------
>   2 files changed, 352 insertions(+), 550 deletions(-)
> 
> diff --git a/crypto/testmgr.c b/crypto/testmgr.c
> index 7638090ff1b0a..96aa268ff4184 100644
> --- a/crypto/testmgr.c
> +++ b/crypto/testmgr.c

[...]


> -static int __test_hash(struct crypto_ahash *tfm,
> -		       const struct hash_testvec *template, unsigned int tcount,
> -		       enum hash_test test_type, const int align_offset)
> +static int test_hash_vec_cfg(const char *driver,
> +			     const struct hash_testvec *vec,
> +			     unsigned int vec_num,
> +			     const struct testvec_config *cfg,
> +			     struct ahash_request *req,
> +			     struct test_sglist *tsgl,
> +			     u8 *hashstate)
>   {
> -	const char *algo = crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
> -	size_t digest_size = crypto_ahash_digestsize(tfm);
> -	unsigned int i, j, k, temp;
> -	struct scatterlist sg[8];
> -	char *result;
> -	char *key;
> -	struct ahash_request *req;
> -	struct crypto_wait wait;
> -	void *hash_buff;
> -	char *xbuf[XBUFSIZE];
> -	int ret = -ENOMEM;
> -
> -	result = kmalloc(digest_size, GFP_KERNEL);
> -	if (!result)
> -		return ret;
> -	key = kmalloc(MAX_KEYLEN, GFP_KERNEL);
> -	if (!key)
> -		goto out_nobuf;
> -	if (testmgr_alloc_buf(xbuf))
> -		goto out_nobuf;
> +	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
> +	const unsigned int alignmask = crypto_ahash_alignmask(tfm);
> +	const unsigned int digestsize = crypto_ahash_digestsize(tfm);
> +	const unsigned int statesize = crypto_ahash_statesize(tfm);
> +	const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
> +	const struct test_sg_division *divs[XBUFSIZE];
> +	DECLARE_CRYPTO_WAIT(wait);
> +	struct kvec _input;
> +	struct iov_iter input;
> +	unsigned int i;
> +	struct scatterlist *pending_sgl;
> +	unsigned int pending_len;
> +	u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];

Before this patch, result was allocated with kmalloc().
Now, result is on the stack. Is there a reason for this change?

Due to this change, the talitos driver fails when using 
CONFIG_VMAP_STACK, because result is not DMA-able anymore.

CONFIG_DEBUG_VIRTUAL warns on:

[    4.276401] WARNING: CPU: 0 PID: 72 at ./arch/powerpc/include/asm/io.h:804 dma_direct_map_page+0x50/0x178
[    4.285725] CPU: 0 PID: 72 Comm: cryptomgr_test Tainted: G        W        5.3.0-rc6-s3k-dev-00897-g03e8e9014403-dirty #2182
	[snip registers]
[    4.353542] NIP [c0066eac] dma_direct_map_page+0x50/0x178
[    4.358872] LR [c0066eac] dma_direct_map_page+0x50/0x178
[    4.364074] Call Trace:
[    4.366533] [c9d0fc18] [c0066eac] dma_direct_map_page+0x50/0x178 (unreliable)
[    4.373587] [c9d0fc38] [c033ac38] __map_single_talitos_ptr+0x54/0x9c
[    4.379869] [c9d0fc48] [c033e878] ahash_process_req+0x318/0x7a8
[    4.385739] [c9d0fca8] [c0210b68] do_ahash_op.isra.0+0x24/0x70
[    4.391494] [c9d0fcb8] [c02130e8] test_ahash_vec_cfg+0x478/0x5a8
[    4.397432] [c9d0fda8] [c0213b40] __alg_test_hash.isra.13+0x16c/0x334
[    4.403797] [c9d0fe08] [c0213d84] alg_test_hash+0x7c/0x164
[    4.409218] [c9d0fe28] [c0213f88] alg_test+0xc0/0x384
[    4.414209] [c9d0fef8] [c020f0ec] cryptomgr_test+0x48/0x50
[    4.419623] [c9d0ff08] [c003a818] kthread+0xe0/0x10c
[    4.424539] [c9d0ff38] [c000e1d0] ret_from_kernel_thread+0x14/0x1c

How should this be fixed?

Christophe



* Re: [PATCH v2 13/15] crypto: testmgr - convert hash testing to use testvec_configs
  2019-08-29 15:32   ` Christophe Leroy
@ 2019-08-29 15:58     ` Eric Biggers
  0 siblings, 0 replies; 23+ messages in thread
From: Eric Biggers @ 2019-08-29 15:58 UTC (permalink / raw)
  To: Christophe Leroy; +Cc: linux-crypto, Herbert Xu, linux-kernel

On Thu, Aug 29, 2019 at 05:32:46PM +0200, Christophe Leroy wrote:
> Hi Eric,
> 
> 
> On 01/02/2019 at 08:51, Eric Biggers wrote:
> > From: Eric Biggers <ebiggers@google.com>
> > 
> > Convert alg_test_hash() to use the new test framework, adding a list of
> > testvec_configs to test by default.  When the extra self-tests are
> > enabled, randomly generated testvec_configs are tested as well.
> > 
> > This improves hash test coverage mainly because now all algorithms have
> > a variety of data layouts tested, whereas before each algorithm was
> > responsible for declaring its own chunked test cases which were often
> > missing or provided poor test coverage.  The new code also tests both
> > the MAY_SLEEP and !MAY_SLEEP cases and buffers that cross pages.
> > 
> > This already found bugs in the hash walk code and in the arm32 and arm64
> > implementations of crct10dif.
> > 
> > I removed the hash chunked test vectors that were the same as
> > non-chunked ones, but left the ones that were unique.
> > 
> > Signed-off-by: Eric Biggers <ebiggers@google.com>
> > ---
> >   crypto/testmgr.c | 795 ++++++++++++++++++++---------------------------
> >   crypto/testmgr.h | 107 +------
> >   2 files changed, 352 insertions(+), 550 deletions(-)
> > 
> > diff --git a/crypto/testmgr.c b/crypto/testmgr.c
> > index 7638090ff1b0a..96aa268ff4184 100644
> > --- a/crypto/testmgr.c
> > +++ b/crypto/testmgr.c
> 
> [...]
> 
> 
> > -static int __test_hash(struct crypto_ahash *tfm,
> > -		       const struct hash_testvec *template, unsigned int tcount,
> > -		       enum hash_test test_type, const int align_offset)
> > +static int test_hash_vec_cfg(const char *driver,
> > +			     const struct hash_testvec *vec,
> > +			     unsigned int vec_num,
> > +			     const struct testvec_config *cfg,
> > +			     struct ahash_request *req,
> > +			     struct test_sglist *tsgl,
> > +			     u8 *hashstate)
> >   {
> > -	const char *algo = crypto_tfm_alg_driver_name(crypto_ahash_tfm(tfm));
> > -	size_t digest_size = crypto_ahash_digestsize(tfm);
> > -	unsigned int i, j, k, temp;
> > -	struct scatterlist sg[8];
> > -	char *result;
> > -	char *key;
> > -	struct ahash_request *req;
> > -	struct crypto_wait wait;
> > -	void *hash_buff;
> > -	char *xbuf[XBUFSIZE];
> > -	int ret = -ENOMEM;
> > -
> > -	result = kmalloc(digest_size, GFP_KERNEL);
> > -	if (!result)
> > -		return ret;
> > -	key = kmalloc(MAX_KEYLEN, GFP_KERNEL);
> > -	if (!key)
> > -		goto out_nobuf;
> > -	if (testmgr_alloc_buf(xbuf))
> > -		goto out_nobuf;
> > +	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
> > +	const unsigned int alignmask = crypto_ahash_alignmask(tfm);
> > +	const unsigned int digestsize = crypto_ahash_digestsize(tfm);
> > +	const unsigned int statesize = crypto_ahash_statesize(tfm);
> > +	const u32 req_flags = CRYPTO_TFM_REQ_MAY_BACKLOG | cfg->req_flags;
> > +	const struct test_sg_division *divs[XBUFSIZE];
> > +	DECLARE_CRYPTO_WAIT(wait);
> > +	struct kvec _input;
> > +	struct iov_iter input;
> > +	unsigned int i;
> > +	struct scatterlist *pending_sgl;
> > +	unsigned int pending_len;
> > +	u8 result[HASH_MAX_DIGESTSIZE + TESTMGR_POISON_LEN];
> 
> Before this patch, result was allocated with kmalloc().
> Now, result is on the stack. Is there a reason for this change?
> 
> Due to this change, the talitos driver fails when using CONFIG_VMAP_STACK,
> because result is not DMA-able anymore.
> 

Yes, moving 'result' to the stack simplified the code slightly and also helps
detect buggy drivers that assume that 'result' is DMA-able.  Note that some
"real" API users also put 'result' on the stack, as the API allows it.
This is not new; the only new thing is that the tests now test it.

> 
> How should this be fixed?
> 

Take a look at how other drivers have fixed this issue in the past, e.g.
commit c19650d6ea9 ("crypto: caam - fix DMA mapping of stack memory").
It seems you need to DMA into a temporary buffer, then copy into the
caller's result buffer.
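
Something along these lines, roughly (an untested sketch; the context
struct and field names are made up):

	struct my_ahash_req_ctx {
		u8 hw_digest[HASH_MAX_DIGESTSIZE]; /* DMA-able bounce buffer */
		/* ... */
	};

	/* Let the hardware write the digest into the bounce buffer. */
	dma_addr = dma_map_single(dev, rctx->hw_digest, digestsize,
				  DMA_FROM_DEVICE);
	/* ... submit the descriptor and wait for completion ... */
	dma_unmap_single(dev, dma_addr, digestsize, DMA_FROM_DEVICE);

	/*
	 * Only then copy to the caller's result buffer, which may be on
	 * a vmalloc'ed stack and hence not DMA-able.
	 */
	memcpy(areq->result, rctx->hw_digest, digestsize);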

- Eric


end of thread

Thread overview: 23+ messages
2019-02-01  7:51 [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Eric Biggers
2019-02-01  7:51 ` [PATCH v2 01/15] crypto: aegis - fix handling chunked inputs Eric Biggers
2019-02-05  9:31   ` Ondrej Mosnacek
2019-02-01  7:51 ` [PATCH v2 02/15] crypto: morus " Eric Biggers
2019-02-05  9:30   ` Ondrej Mosnacek
2019-02-01  7:51 ` [PATCH v2 03/15] crypto: x86/aegis - fix handling chunked inputs and MAY_SLEEP Eric Biggers
2019-02-05  9:31   ` Ondrej Mosnacek
2019-02-01  7:51 ` [PATCH v2 04/15] crypto: x86/morus " Eric Biggers
2019-02-05  9:32   ` Ondrej Mosnacek
2019-02-01  7:51 ` [PATCH v2 05/15] crypto: x86/aesni-gcm - fix crash on empty plaintext Eric Biggers
2019-02-01  7:51 ` [PATCH v2 06/15] crypto: ahash - fix another early termination in hash walk Eric Biggers
2019-02-01  7:51 ` [PATCH v2 07/15] crypto: arm64/aes-neonbs - fix returning final keystream block Eric Biggers
2019-02-01  7:51 ` [PATCH v2 08/15] crypto: testmgr - add testvec_config struct and helper functions Eric Biggers
2019-02-01  7:51 ` [PATCH v2 09/15] crypto: testmgr - introduce CONFIG_CRYPTO_MANAGER_EXTRA_TESTS Eric Biggers
2019-02-01  7:51 ` [PATCH v2 10/15] crypto: testmgr - implement random testvec_config generation Eric Biggers
2019-02-01  7:51 ` [PATCH v2 11/15] crypto: testmgr - convert skcipher testing to use testvec_configs Eric Biggers
2019-02-01  7:51 ` [PATCH v2 12/15] crypto: testmgr - convert aead " Eric Biggers
2019-02-01  7:51 ` [PATCH v2 13/15] crypto: testmgr - convert hash " Eric Biggers
2019-08-29 15:32   ` Christophe Leroy
2019-08-29 15:58     ` Eric Biggers
2019-02-01  7:51 ` [PATCH v2 14/15] crypto: testmgr - check for skcipher_request corruption Eric Biggers
2019-02-01  7:51 ` [PATCH v2 15/15] crypto: testmgr - check for aead_request corruption Eric Biggers
2019-02-08  7:47 ` [PATCH v2 00/15] crypto: improved skcipher, aead, and hash tests Herbert Xu
