* [PATCH 00/16] crypto: skcipher template simplifications and conversions
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

Hello,

This series adds a function skcipher_alloc_instance_simple() that
greatly simplifies creating an skcipher_instance that uses a single
underlying block cipher.  It then converts the cbc, cfb, ctr, ecb, kw,
ofb, and pcbc templates to use it.  In doing so, ctr, ecb, and kw are
also converted from the deprecated "blkcipher" API to the skcipher API.

While doing this, I also found some rather silly bugs in the cfb, ofb,
and pcbc templates...  So I've included the fixes for these first, in
patches 1-4.  Please consider taking these first 4 patches through
'crypto' rather than 'cryptodev'.  (But 5-16 are cleanups only, so no
rush on those.)

Finally, I also converted ecb(arc4) and ecb(cipher_null) to the skcipher
API, since following the template conversions these were the last
generic algorithms that were still using the "blkcipher" API.

The overall delta is almost 500 lines removed, mostly boilerplate that each
template duplicated to create its algorithm instance.
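
None of this changes how the resulting algorithms are used: "cbc(aes)",
"ctr(aes)", etc. are still instantiated by name through the regular skcipher
API.  For context, a minimal in-kernel user looks roughly like the sketch
below (illustrative only, error handling trimmed):

#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int cbc_aes_demo(void)
{
        struct crypto_skcipher *tfm;
        struct skcipher_request *req;
        DECLARE_CRYPTO_WAIT(wait);
        struct scatterlist sg;
        u8 key[16] = {}, iv[16] = {};
        u8 *buf;
        int err;

        tfm = crypto_alloc_skcipher("cbc(aes)", 0, 0);
        if (IS_ERR(tfm))
                return PTR_ERR(tfm);

        buf = kzalloc(64, GFP_KERNEL);
        req = skcipher_request_alloc(tfm, GFP_KERNEL);
        if (!buf || !req) {
                err = -ENOMEM;
                goto out;
        }

        err = crypto_skcipher_setkey(tfm, key, sizeof(key));
        if (err)
                goto out;

        sg_init_one(&sg, buf, 64);
        skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
                                      CRYPTO_TFM_REQ_MAY_BACKLOG,
                                      crypto_req_done, &wait);
        skcipher_request_set_crypt(req, &sg, &sg, 64, iv);
        err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);
out:
        skcipher_request_free(req);
        kfree(buf);
        crypto_free_skcipher(tfm);
        return err;
}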

Eric Biggers (16):
  crypto: cfb - add missing 'chunksize' property
  crypto: cfb - remove bogus memcpy() with src == dest
  crypto: ofb - fix handling partial blocks and make thread-safe
  crypto: pcbc - remove bogus memcpy()s with src == dest
  crypto: skcipher - add helper for simple block cipher modes
  crypto: cbc - convert to skcipher_alloc_instance_simple()
  crypto: cfb - convert to skcipher_alloc_instance_simple()
  crypto: ctr - convert to skcipher API
  crypto: ecb - convert to skcipher API
  crypto: keywrap - convert to skcipher API
  crypto: ofb - convert to skcipher_alloc_instance_simple()
  crypto: pcbc - remove ability to wrap internal ciphers
  crypto: pcbc - convert to skcipher_alloc_instance_simple()
  crypto: arc4 - convert to skcipher API
  crypto: null - convert ecb-cipher_null to skcipher API
  crypto: algapi - remove crypto_alloc_instance()

 crypto/algapi.c                    |  33 +----
 crypto/arc4.c                      |  82 ++++++------
 crypto/cbc.c                       | 131 ++-----------------
 crypto/cfb.c                       | 139 +++-----------------
 crypto/crypto_null.c               |  57 ++++----
 crypto/ctr.c                       | 160 ++++++-----------------
 crypto/ecb.c                       | 151 +++++----------------
 crypto/keywrap.c                   | 198 ++++++++++------------------
 crypto/ofb.c                       | 202 ++++++-----------------------
 crypto/pcbc.c                      | 143 +++-----------------
 crypto/skcipher.c                  | 131 +++++++++++++++++++
 crypto/testmgr.h                   |  53 +++++++-
 include/crypto/algapi.h            |   6 +-
 include/crypto/internal/hash.h     |   6 +-
 include/crypto/internal/skcipher.h |  15 +++
 15 files changed, 508 insertions(+), 999 deletions(-)

-- 
2.20.1


* [PATCH 01/16] crypto: cfb - add missing 'chunksize' property
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: stable, James Bottomley

From: Eric Biggers <ebiggers@google.com>

Like some other block cipher mode implementations, the CFB
implementation assumes that while walking through the scatterlist, a
partial block does not occur until the end.  But the walk is incorrectly
being done with a blocksize of 1, as 'cra_blocksize' is set to 1 (since
CFB is a stream cipher) but no 'chunksize' is set.  This bug causes
incorrect encryption/decryption for some scatterlist layouts.

Fix it by setting the 'chunksize'.  Also extend the CFB test vectors to
cover this bug as well as cases where the message length is not a
multiple of the block size.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
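
For reference, the CFB code (like the other stream cipher modes touched later
in this series) walks the scatterlist in the following shape, where
crypt_full_blocks() and crypt_final_block() are illustrative placeholders
rather than real kernel functions.  Setting 'chunksize' is what makes the
"partial block only at the very end" assumption valid:

        err = skcipher_walk_virt(&walk, req, false);

        while (walk.nbytes >= bsize) {
                /* full blocks; any remainder is handed back to the walk */
                unsigned int nbytes = crypt_full_blocks(&walk);

                err = skcipher_walk_done(&walk, nbytes);
        }

        if (walk.nbytes) {
                /* a partial block can only be the final chunk */
                crypt_final_block(&walk);
                err = skcipher_walk_done(&walk, 0);
        }
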
---
 crypto/cfb.c     |  6 ++++++
 crypto/testmgr.h | 25 +++++++++++++++++++++++++
 2 files changed, 31 insertions(+)

diff --git a/crypto/cfb.c b/crypto/cfb.c
index e81e456734985..183e8a9c33128 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -298,6 +298,12 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.base.cra_blocksize = 1;
 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
 
+	/*
+	 * To simplify the implementation, configure the skcipher walk to only
+	 * give a partial block at the very end, never earlier.
+	 */
+	inst->alg.chunksize = alg->cra_blocksize;
+
 	inst->alg.ivsize = alg->cra_blocksize;
 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index e8f47d7b92cdd..7f4dae7a57a1c 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -12870,6 +12870,31 @@ static const struct cipher_testvec aes_cfb_tv_template[] = {
 			  "\x75\xa3\x85\x74\x1a\xb9\xce\xf8"
 			  "\x20\x31\x62\x3d\x55\xb1\xe4\x71",
 		.len	= 64,
+		.also_non_np = 1,
+		.np	= 2,
+		.tap	= { 31, 33 },
+	}, { /* > 16 bytes, not a multiple of 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+			  "\xc8",
+		.len	= 17,
+	}, { /* < 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
+		.len	= 7,
 	},
 };
 
-- 
2.20.1


* [PATCH 02/16] crypto: cfb - remove bogus memcpy() with src == dest
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: stable, James Bottomley

From: Eric Biggers <ebiggers@google.com>

The memcpy() in crypto_cfb_decrypt_inplace() uses walk->iv as both the
source and destination, which has undefined behavior.  It is unneeded
because walk->iv is already used to hold the previous ciphertext block;
thus, walk->iv is already updated to its final value.  So, remove it.

Also, note that in-place decryption is the only case where the previous
ciphertext block is not directly available.  Therefore, as a related
cleanup I also updated crypto_cfb_encrypt_segment() to directly use the
previous ciphertext block rather than save it into walk->iv.  This makes
it consistent with in-place encryption and out-of-place decryption; now
only in-place decryption is different, because it has to be.

Fixes: a7d85e06ed80 ("crypto: cfb - add support for Cipher FeedBack mode")
Cc: <stable@vger.kernel.org> # v4.17+
Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
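
As a stand-alone illustration of the issue (plain C, not kernel code):
memcpy() requires that source and destination not overlap, so a self-copy is
undefined behavior even though it usually appears to work:

#include <string.h>

void self_copy(unsigned char *iv, size_t bsize)
{
        memcpy(iv, iv, bsize);   /* undefined: the regions overlap */
        memmove(iv, iv, bsize);  /* defined, but still a pointless no-op */
}
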
---
 crypto/cfb.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/crypto/cfb.c b/crypto/cfb.c
index 183e8a9c33128..4abfe32ff8451 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -77,12 +77,14 @@ static int crypto_cfb_encrypt_segment(struct skcipher_walk *walk,
 	do {
 		crypto_cfb_encrypt_one(tfm, iv, dst);
 		crypto_xor(dst, src, bsize);
-		memcpy(iv, dst, bsize);
+		iv = dst;
 
 		src += bsize;
 		dst += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
+	memcpy(walk->iv, iv, bsize);
+
 	return nbytes;
 }
 
@@ -162,7 +164,7 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
 	const unsigned int bsize = crypto_cfb_bsize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 *iv = walk->iv;
+	u8 * const iv = walk->iv;
 	u8 tmp[MAX_CIPHER_BLOCKSIZE];
 
 	do {
@@ -172,8 +174,6 @@ static int crypto_cfb_decrypt_inplace(struct skcipher_walk *walk,
 		src += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
-
 	return nbytes;
 }
 
-- 
2.20.1


* [PATCH 03/16] crypto: ofb - fix handling partial blocks and make thread-safe
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: stable, Gilad Ben-Yossef

From: Eric Biggers <ebiggers@google.com>

Fix multiple bugs in the OFB implementation:

1. It stored the per-request state 'cnt' in the tfm context, which can be
   used by multiple threads concurrently (e.g. via AF_ALG).
2. It didn't support messages whose length is not a multiple of the block
   cipher's block size, despite OFB being a stream cipher.
3. It didn't set cra_blocksize to 1 to indicate it is a stream cipher.

To fix these, set the 'chunksize' property to the cipher block size to
guarantee that when walking through the scatterlist, a partial block can
only occur at the end.  Then change the implementation to XOR one full
block at a time first, and XOR any final partial block at the end.  This
is the same way CTR and CFB are implemented.  As a bonus, this also
improves performance in most cases over the current approach.

Fixes: e497c51896b3 ("crypto: ofb - add output feedback mode")
Cc: <stable@vger.kernel.org> # v4.20+
Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
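
For reference, a toy user-space model of OFB showing the structure the new
code follows; 'prf' stands in for the block cipher and is purely
illustrative, not a kernel API.  Encryption and decryption are the same
operation, and a trailing partial block just uses the first few bytes of one
more keystream block:

#include <stddef.h>

typedef void (*prf_t)(unsigned char *block);  /* encrypts one block in place */

static void ofb_crypt(prf_t prf, unsigned char *iv, size_t bsize,
                      unsigned char *data, size_t len)
{
        size_t i;

        while (len >= bsize) {
                prf(iv);                /* iv = E(iv): next keystream block */
                for (i = 0; i < bsize; i++)
                        data[i] ^= iv[i];
                data += bsize;
                len -= bsize;
        }

        if (len) {                      /* final partial block, if any */
                prf(iv);
                for (i = 0; i < len; i++)
                        data[i] ^= iv[i];
        }
}
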
---
 crypto/ofb.c     | 91 ++++++++++++++++++++----------------------------
 crypto/testmgr.h | 28 +++++++++++++--
 2 files changed, 63 insertions(+), 56 deletions(-)

diff --git a/crypto/ofb.c b/crypto/ofb.c
index 886631708c5e9..cab0b80953fed 100644
--- a/crypto/ofb.c
+++ b/crypto/ofb.c
@@ -5,9 +5,6 @@
  *
  * Copyright (C) 2018 ARM Limited or its affiliates.
  * All rights reserved.
- *
- * Based loosely on public domain code gleaned from libtomcrypt
- * (https://github.com/libtom/libtomcrypt).
  */
 
 #include <crypto/algapi.h>
@@ -21,7 +18,6 @@
 
 struct crypto_ofb_ctx {
 	struct crypto_cipher *child;
-	int cnt;
 };
 
 
@@ -41,58 +37,40 @@ static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
 	return err;
 }
 
-static int crypto_ofb_encrypt_segment(struct crypto_ofb_ctx *ctx,
-				      struct skcipher_walk *walk,
-				      struct crypto_cipher *tfm)
+static int crypto_ofb_crypt(struct skcipher_request *req)
 {
-	int bsize = crypto_cipher_blocksize(tfm);
-	int nbytes = walk->nbytes;
-
-	u8 *src = walk->src.virt.addr;
-	u8 *dst = walk->dst.virt.addr;
-	u8 *iv = walk->iv;
-
-	do {
-		if (ctx->cnt == bsize) {
-			if (nbytes < bsize)
-				break;
-			crypto_cipher_encrypt_one(tfm, iv, iv);
-			ctx->cnt = 0;
-		}
-		*dst = *src ^ iv[ctx->cnt];
-		src++;
-		dst++;
-		ctx->cnt++;
-	} while (--nbytes);
-	return nbytes;
-}
-
-static int crypto_ofb_encrypt(struct skcipher_request *req)
-{
-	struct skcipher_walk walk;
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	unsigned int bsize;
 	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
-	int ret = 0;
+	struct crypto_cipher *cipher = ctx->child;
+	const unsigned int bsize = crypto_cipher_blocksize(cipher);
+	struct skcipher_walk walk;
+	int err;
 
-	bsize =  crypto_cipher_blocksize(child);
-	ctx->cnt = bsize;
+	err = skcipher_walk_virt(&walk, req, false);
 
-	ret = skcipher_walk_virt(&walk, req, false);
+	while (walk.nbytes >= bsize) {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
+		u8 * const iv = walk.iv;
+		unsigned int nbytes = walk.nbytes;
 
-	while (walk.nbytes) {
-		ret = crypto_ofb_encrypt_segment(ctx, &walk, child);
-		ret = skcipher_walk_done(&walk, ret);
-	}
+		do {
+			crypto_cipher_encrypt_one(cipher, iv, iv);
+			crypto_xor_cpy(dst, src, iv, bsize);
+			dst += bsize;
+			src += bsize;
+		} while ((nbytes -= bsize) >= bsize);
 
-	return ret;
-}
+		err = skcipher_walk_done(&walk, nbytes);
+	}
 
-/* OFB encrypt and decrypt are identical */
-static int crypto_ofb_decrypt(struct skcipher_request *req)
-{
-	return crypto_ofb_encrypt(req);
+	if (walk.nbytes) {
+		crypto_cipher_encrypt_one(cipher, walk.iv, walk.iv);
+		crypto_xor_cpy(walk.dst.virt.addr, walk.src.virt.addr, walk.iv,
+			       walk.nbytes);
+		err = skcipher_walk_done(&walk, 0);
+	}
+	return err;
 }
 
 static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
@@ -165,13 +143,18 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
 	if (err)
 		goto err_drop_spawn;
 
+	/* OFB mode is a stream cipher. */
+	inst->alg.base.cra_blocksize = 1;
+
+	/*
+	 * To simplify the implementation, configure the skcipher walk to only
+	 * give a partial block at the very end, never earlier.
+	 */
+	inst->alg.chunksize = alg->cra_blocksize;
+
 	inst->alg.base.cra_priority = alg->cra_priority;
-	inst->alg.base.cra_blocksize = alg->cra_blocksize;
 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
 
-	/* We access the data as u32s when xoring. */
-	inst->alg.base.cra_alignmask |= __alignof__(u32) - 1;
-
 	inst->alg.ivsize = alg->cra_blocksize;
 	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
 	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
@@ -182,8 +165,8 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.exit = crypto_ofb_exit_tfm;
 
 	inst->alg.setkey = crypto_ofb_setkey;
-	inst->alg.encrypt = crypto_ofb_encrypt;
-	inst->alg.decrypt = crypto_ofb_decrypt;
+	inst->alg.encrypt = crypto_ofb_crypt;
+	inst->alg.decrypt = crypto_ofb_crypt;
 
 	inst->free = crypto_ofb_free;
 
diff --git a/crypto/testmgr.h b/crypto/testmgr.h
index 7f4dae7a57a1c..ca8e8ebef309d 100644
--- a/crypto/testmgr.h
+++ b/crypto/testmgr.h
@@ -16681,8 +16681,7 @@ static const struct cipher_testvec aes_ctr_rfc3686_tv_template[] = {
 };
 
 static const struct cipher_testvec aes_ofb_tv_template[] = {
-	 /* From NIST Special Publication 800-38A, Appendix F.5 */
-	{
+	{ /* From NIST Special Publication 800-38A, Appendix F.5 */
 		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
 			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
 		.klen	= 16,
@@ -16705,6 +16704,31 @@ static const struct cipher_testvec aes_ofb_tv_template[] = {
 			  "\x30\x4c\x65\x28\xf6\x59\xc7\x78"
 			  "\x66\xa5\x10\xd9\xc1\xd6\xae\x5e",
 		.len	= 64,
+		.also_non_np = 1,
+		.np	= 2,
+		.tap	= { 31, 33 },
+	}, { /* > 16 bytes, not a multiple of 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96"
+			  "\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
+			  "\xae",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad\x20"
+			  "\x33\x34\x49\xf8\xe8\x3c\xfb\x4a"
+			  "\x77",
+		.len	= 17,
+	}, { /* < 16 bytes */
+		.key	= "\x2b\x7e\x15\x16\x28\xae\xd2\xa6"
+			  "\xab\xf7\x15\x88\x09\xcf\x4f\x3c",
+		.klen	= 16,
+		.iv	= "\x00\x01\x02\x03\x04\x05\x06\x07"
+			  "\x08\x09\x0a\x0b\x0c\x0d\x0e\x0f",
+		.ptext	= "\x6b\xc1\xbe\xe2\x2e\x40\x9f",
+		.ctext	= "\x3b\x3f\xd9\x2e\xb7\x2d\xad",
+		.len	= 7,
 	}
 };
 
-- 
2.20.1


* [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: stable, David Howells

From: Eric Biggers <ebiggers@google.com>

The memcpy()s in the PCBC implementation use walk->iv as both the source
and destination, which has undefined behavior.  These memcpy()s are
actually unneeded, because walk->iv is already used to hold the previous
plaintext block XOR'd with the previous ciphertext block.  Thus,
walk->iv is already updated to its final value.

So remove the broken and unnecessary memcpy()s.

Fixes: 91652be5d1b9 ("[CRYPTO] pcbc: Add Propagated CBC template")
Cc: <stable@vger.kernel.org> # v2.6.21+
Cc: David Howells <dhowells@redhat.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
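
For reference, this is essentially the out-of-place encrypt loop as it stands
after this patch, with the IV algebra spelled out in comments added here for
illustration:

        do {
                crypto_xor(iv, src, bsize);               /* iv  = IV_i ^ P_i    */
                crypto_cipher_encrypt_one(tfm, dst, iv);  /* C_i = E(IV_i ^ P_i) */
                memcpy(iv, dst, bsize);                   /* iv  = C_i           */
                crypto_xor(iv, src, bsize);               /* iv  = C_i ^ P_i     */

                src += bsize;
                dst += bsize;
        } while ((nbytes -= bsize) >= bsize);

        /* walk->iv (== iv) now already holds P_last ^ C_last, the final IV. */
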
---
 crypto/pcbc.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/crypto/pcbc.c b/crypto/pcbc.c
index 8aa10144407c0..1b182dfedc948 100644
--- a/crypto/pcbc.c
+++ b/crypto/pcbc.c
@@ -51,7 +51,7 @@ static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
-	u8 *iv = walk->iv;
+	u8 * const iv = walk->iv;
 
 	do {
 		crypto_xor(iv, src, bsize);
@@ -72,7 +72,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
 	int bsize = crypto_cipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 *iv = walk->iv;
+	u8 * const iv = walk->iv;
 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE];
 
 	do {
@@ -84,8 +84,6 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
 		src += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
-
 	return nbytes;
 }
 
@@ -121,7 +119,7 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
 	u8 *dst = walk->dst.virt.addr;
-	u8 *iv = walk->iv;
+	u8 * const iv = walk->iv;
 
 	do {
 		crypto_cipher_decrypt_one(tfm, dst, src);
@@ -132,8 +130,6 @@ static int crypto_pcbc_decrypt_segment(struct skcipher_request *req,
 		dst += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
-
 	return nbytes;
 }
 
@@ -144,7 +140,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
 	int bsize = crypto_cipher_blocksize(tfm);
 	unsigned int nbytes = walk->nbytes;
 	u8 *src = walk->src.virt.addr;
-	u8 *iv = walk->iv;
+	u8 * const iv = walk->iv;
 	u8 tmpbuf[MAX_CIPHER_BLOCKSIZE] __aligned(__alignof__(u32));
 
 	do {
@@ -156,8 +152,6 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
 		src += bsize;
 	} while ((nbytes -= bsize) >= bsize);
 
-	memcpy(walk->iv, iv, bsize);
-
 	return nbytes;
 }
 
-- 
2.20.1


* [PATCH 05/16] crypto: skcipher - add helper for simple block cipher modes
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

The majority of skcipher templates (including both the existing ones and
the ones remaining to be converted from the "blkcipher" API) just wrap a
single block cipher algorithm.  This includes cbc, cfb, ctr, ecb, kw,
ofb, and pcbc.  Add a helper function skcipher_alloc_instance_simple()
that handles allocating an skcipher instance for this common case.

Signed-off-by: Eric Biggers <ebiggers@google.com>
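
With the helper, a typical mode template's ->create() collapses to roughly
the following (a hypothetical "xyz" mode is used here for illustration; the
actual conversions follow in the next patches):

static int crypto_xyz_create(struct crypto_template *tmpl, struct rtattr **tb)
{
        struct skcipher_instance *inst;
        struct crypto_alg *alg;
        int err;

        inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
        if (IS_ERR(inst))
                return PTR_ERR(inst);

        /* Override the defaults here if needed, e.g. ivsize = 0 for ECB. */
        inst->alg.encrypt = crypto_xyz_encrypt;
        inst->alg.decrypt = crypto_xyz_decrypt;

        err = skcipher_register_instance(tmpl, inst);
        if (err)
                inst->free(inst);

        crypto_mod_put(alg);    /* drop the reference returned via &alg */
        return err;
}
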
---
 crypto/skcipher.c                  | 131 +++++++++++++++++++++++++++++
 include/crypto/internal/skcipher.h |  15 ++++
 2 files changed, 146 insertions(+)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 2a969296bc248..040ae6377b32b 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -1058,5 +1058,136 @@ int skcipher_register_instance(struct crypto_template *tmpl,
 }
 EXPORT_SYMBOL_GPL(skcipher_register_instance);
 
+static int skcipher_setkey_simple(struct crypto_skcipher *tfm, const u8 *key,
+				  unsigned int keylen)
+{
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+	int err;
+
+	crypto_cipher_clear_flags(cipher, CRYPTO_TFM_REQ_MASK);
+	crypto_cipher_set_flags(cipher, crypto_skcipher_get_flags(tfm) &
+				CRYPTO_TFM_REQ_MASK);
+	err = crypto_cipher_setkey(cipher, key, keylen);
+	crypto_skcipher_set_flags(tfm, crypto_cipher_get_flags(cipher) &
+				  CRYPTO_TFM_RES_MASK);
+	return err;
+}
+
+static int skcipher_init_tfm_simple(struct crypto_skcipher *tfm)
+{
+	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
+	struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
+	struct skcipher_ctx_simple *ctx = crypto_skcipher_ctx(tfm);
+	struct crypto_cipher *cipher;
+
+	cipher = crypto_spawn_cipher(spawn);
+	if (IS_ERR(cipher))
+		return PTR_ERR(cipher);
+
+	ctx->cipher = cipher;
+	return 0;
+}
+
+static void skcipher_exit_tfm_simple(struct crypto_skcipher *tfm)
+{
+	struct skcipher_ctx_simple *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_cipher(ctx->cipher);
+}
+
+static void skcipher_free_instance_simple(struct skcipher_instance *inst)
+{
+	crypto_drop_spawn(skcipher_instance_ctx(inst));
+	kfree(inst);
+}
+
+/**
+ * skcipher_alloc_instance_simple - allocate instance of simple block cipher mode
+ *
+ * Allocate an skcipher_instance for a simple block cipher mode of operation,
+ * e.g. cbc or ecb.  The instance context will have just a single crypto_spawn,
+ * that for the underlying cipher.  The {min,max}_keysize, ivsize, blocksize,
+ * alignmask, and priority are set from the underlying cipher but can be
+ * overridden if needed.  The tfm context defaults to skcipher_ctx_simple, and
+ * default ->setkey(), ->init(), and ->exit() methods are installed.
+ *
+ * @tmpl: the template being instantiated
+ * @tb: the template parameters
+ * @cipher_alg_ret: on success, a pointer to the underlying cipher algorithm is
+ *		    returned here.  It must be dropped with crypto_mod_put().
+ *
+ * Return: a pointer to the new instance, or an ERR_PTR().  The caller still
+ *	   needs to register the instance.
+ */
+struct skcipher_instance *
+skcipher_alloc_instance_simple(struct crypto_template *tmpl, struct rtattr **tb,
+			       struct crypto_alg **cipher_alg_ret)
+{
+	struct crypto_attr_type *algt;
+	struct crypto_alg *cipher_alg;
+	struct skcipher_instance *inst;
+	struct crypto_spawn *spawn;
+	u32 mask;
+	int err;
+
+	algt = crypto_get_attr_type(tb);
+	if (IS_ERR(algt))
+		return ERR_CAST(algt);
+
+	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
+		return ERR_PTR(-EINVAL);
+
+	mask = CRYPTO_ALG_TYPE_MASK |
+		crypto_requires_off(algt->type, algt->mask,
+				    CRYPTO_ALG_NEED_FALLBACK);
+
+	cipher_alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);
+	if (IS_ERR(cipher_alg))
+		return ERR_CAST(cipher_alg);
+
+	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
+	if (!inst) {
+		err = -ENOMEM;
+		goto err_put_cipher_alg;
+	}
+	spawn = skcipher_instance_ctx(inst);
+
+	err = crypto_inst_setname(skcipher_crypto_instance(inst), tmpl->name,
+				  cipher_alg);
+	if (err)
+		goto err_free_inst;
+
+	err = crypto_init_spawn(spawn, cipher_alg,
+				skcipher_crypto_instance(inst),
+				CRYPTO_ALG_TYPE_MASK);
+	if (err)
+		goto err_free_inst;
+	inst->free = skcipher_free_instance_simple;
+
+	/* Default algorithm properties, can be overridden */
+	inst->alg.base.cra_blocksize = cipher_alg->cra_blocksize;
+	inst->alg.base.cra_alignmask = cipher_alg->cra_alignmask;
+	inst->alg.base.cra_priority = cipher_alg->cra_priority;
+	inst->alg.min_keysize = cipher_alg->cra_cipher.cia_min_keysize;
+	inst->alg.max_keysize = cipher_alg->cra_cipher.cia_max_keysize;
+	inst->alg.ivsize = cipher_alg->cra_blocksize;
+
+	/* Use skcipher_ctx_simple by default, can be overridden */
+	inst->alg.base.cra_ctxsize = sizeof(struct skcipher_ctx_simple);
+	inst->alg.setkey = skcipher_setkey_simple;
+	inst->alg.init = skcipher_init_tfm_simple;
+	inst->alg.exit = skcipher_exit_tfm_simple;
+
+	*cipher_alg_ret = cipher_alg;
+	return inst;
+
+err_free_inst:
+	kfree(inst);
+err_put_cipher_alg:
+	crypto_mod_put(cipher_alg);
+	return ERR_PTR(err);
+}
+EXPORT_SYMBOL_GPL(skcipher_alloc_instance_simple);
+
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("Symmetric key cipher type");
diff --git a/include/crypto/internal/skcipher.h b/include/crypto/internal/skcipher.h
index 453e867b4bd92..9de6032209cb1 100644
--- a/include/crypto/internal/skcipher.h
+++ b/include/crypto/internal/skcipher.h
@@ -205,5 +205,20 @@ static inline unsigned int crypto_skcipher_alg_max_keysize(
 	return alg->max_keysize;
 }
 
+/* Helpers for simple block cipher modes of operation */
+struct skcipher_ctx_simple {
+	struct crypto_cipher *cipher;	/* underlying block cipher */
+};
+static inline struct crypto_cipher *
+skcipher_cipher_simple(struct crypto_skcipher *tfm)
+{
+	struct skcipher_ctx_simple *ctx = crypto_skcipher_ctx(tfm);
+
+	return ctx->cipher;
+}
+struct skcipher_instance *
+skcipher_alloc_instance_simple(struct crypto_template *tmpl, struct rtattr **tb,
+			       struct crypto_alg **cipher_alg_ret);
+
 #endif	/* _CRYPTO_INTERNAL_SKCIPHER_H */
 
-- 
2.20.1


* [PATCH 06/16] crypto: cbc - convert to skcipher_alloc_instance_simple()
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

The CBC template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/cbc.c | 131 +++++----------------------------------------------
 1 file changed, 13 insertions(+), 118 deletions(-)

diff --git a/crypto/cbc.c b/crypto/cbc.c
index dd5f332fd5668..d12efaac9230b 100644
--- a/crypto/cbc.c
+++ b/crypto/cbc.c
@@ -18,34 +18,11 @@
 #include <linux/kernel.h>
 #include <linux/log2.h>
 #include <linux/module.h>
-#include <linux/slab.h>
-
-struct crypto_cbc_ctx {
-	struct crypto_cipher *child;
-};
-
-static int crypto_cbc_setkey(struct crypto_skcipher *parent, const u8 *key,
-			     unsigned int keylen)
-{
-	struct crypto_cbc_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_skcipher_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_cipher_get_flags(child) &
-					  CRYPTO_TFM_RES_MASK);
-	return err;
-}
 
 static inline void crypto_cbc_encrypt_one(struct crypto_skcipher *tfm,
 					  const u8 *src, u8 *dst)
 {
-	struct crypto_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_cipher_encrypt_one(ctx->child, dst, src);
+	crypto_cipher_encrypt_one(skcipher_cipher_simple(tfm), dst, src);
 }
 
 static int crypto_cbc_encrypt(struct skcipher_request *req)
@@ -56,9 +33,7 @@ static int crypto_cbc_encrypt(struct skcipher_request *req)
 static inline void crypto_cbc_decrypt_one(struct crypto_skcipher *tfm,
 					  const u8 *src, u8 *dst)
 {
-	struct crypto_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_cipher_decrypt_one(ctx->child, dst, src);
+	crypto_cipher_decrypt_one(skcipher_cipher_simple(tfm), dst, src);
 }
 
 static int crypto_cbc_decrypt(struct skcipher_request *req)
@@ -78,113 +53,33 @@ static int crypto_cbc_decrypt(struct skcipher_request *req)
 	return err;
 }
 
-static int crypto_cbc_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
-	struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
-	struct crypto_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
-}
-
-static void crypto_cbc_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_cbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static void crypto_cbc_free(struct skcipher_instance *inst)
-{
-	crypto_drop_skcipher(skcipher_instance_ctx(inst));
-	kfree(inst);
-}
-
 static int crypto_cbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	struct skcipher_instance *inst;
-	struct crypto_attr_type *algt;
-	struct crypto_spawn *spawn;
 	struct crypto_alg *alg;
-	u32 mask;
 	int err;
 
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER);
-	if (err)
-		return err;
-
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-	if (!inst)
-		return -ENOMEM;
-
-	algt = crypto_get_attr_type(tb);
-	err = PTR_ERR(algt);
-	if (IS_ERR(algt))
-		goto err_free_inst;
-
-	mask = CRYPTO_ALG_TYPE_MASK |
-		crypto_requires_off(algt->type, algt->mask,
-				    CRYPTO_ALG_NEED_FALLBACK);
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);
-	err = PTR_ERR(alg);
-	if (IS_ERR(alg))
-		goto err_free_inst;
-
-	spawn = skcipher_instance_ctx(inst);
-	err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst),
-				CRYPTO_ALG_TYPE_MASK);
-	if (err)
-		goto err_put_alg;
-
-	err = crypto_inst_setname(skcipher_crypto_instance(inst), "cbc", alg);
-	if (err)
-		goto err_drop_spawn;
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
 	err = -EINVAL;
 	if (!is_power_of_2(alg->cra_blocksize))
-		goto err_drop_spawn;
-
-	inst->alg.base.cra_priority = alg->cra_priority;
-	inst->alg.base.cra_blocksize = alg->cra_blocksize;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
-
-	inst->alg.ivsize = alg->cra_blocksize;
-	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-
-	inst->alg.base.cra_ctxsize = sizeof(struct crypto_cbc_ctx);
-
-	inst->alg.init = crypto_cbc_init_tfm;
-	inst->alg.exit = crypto_cbc_exit_tfm;
+		goto out_free_inst;
 
-	inst->alg.setkey = crypto_cbc_setkey;
 	inst->alg.encrypt = crypto_cbc_encrypt;
 	inst->alg.decrypt = crypto_cbc_decrypt;
 
-	inst->free = crypto_cbc_free;
-
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
-		goto err_drop_spawn;
-	crypto_mod_put(alg);
+		goto out_free_inst;
+	goto out_put_alg;
 
-out:
-	return err;
-
-err_drop_spawn:
-	crypto_drop_spawn(spawn);
-err_put_alg:
+out_free_inst:
+	inst->free(inst);
+out_put_alg:
 	crypto_mod_put(alg);
-err_free_inst:
-	kfree(inst);
-	goto out;
+	return err;
 }
 
 static struct crypto_template crypto_cbc_tmpl = {
@@ -207,5 +102,5 @@ module_init(crypto_cbc_module_init);
 module_exit(crypto_cbc_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("CBC block cipher algorithm");
+MODULE_DESCRIPTION("CBC block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("cbc");
-- 
2.20.1


* [PATCH 07/16] crypto: cfb - convert to skcipher_alloc_instance_simple()
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: James Bottomley

From: Eric Biggers <ebiggers@google.com>

The CFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Cc: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/cfb.c | 127 ++++-----------------------------------------------
 1 file changed, 9 insertions(+), 118 deletions(-)

diff --git a/crypto/cfb.c b/crypto/cfb.c
index 4abfe32ff8451..03ac847f6d6ab 100644
--- a/crypto/cfb.c
+++ b/crypto/cfb.c
@@ -25,28 +25,17 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/slab.h>
 #include <linux/string.h>
-#include <linux/types.h>
-
-struct crypto_cfb_ctx {
-	struct crypto_cipher *child;
-};
 
 static unsigned int crypto_cfb_bsize(struct crypto_skcipher *tfm)
 {
-	struct crypto_cfb_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
-
-	return crypto_cipher_blocksize(child);
+	return crypto_cipher_blocksize(skcipher_cipher_simple(tfm));
 }
 
 static void crypto_cfb_encrypt_one(struct crypto_skcipher *tfm,
 					  const u8 *src, u8 *dst)
 {
-	struct crypto_cfb_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_cipher_encrypt_one(ctx->child, dst, src);
+	crypto_cipher_encrypt_one(skcipher_cipher_simple(tfm), dst, src);
 }
 
 /* final encrypt and decrypt is the same */
@@ -186,22 +175,6 @@ static int crypto_cfb_decrypt_blocks(struct skcipher_walk *walk,
 		return crypto_cfb_decrypt_segment(walk, tfm);
 }
 
-static int crypto_cfb_setkey(struct crypto_skcipher *parent, const u8 *key,
-			     unsigned int keylen)
-{
-	struct crypto_cfb_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_skcipher_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_cipher_get_flags(child) &
-					  CRYPTO_TFM_RES_MASK);
-	return err;
-}
-
 static int crypto_cfb_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -224,79 +197,18 @@ static int crypto_cfb_decrypt(struct skcipher_request *req)
 	return err;
 }
 
-static int crypto_cfb_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
-	struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
-	struct crypto_cfb_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
-}
-
-static void crypto_cfb_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_cfb_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static void crypto_cfb_free(struct skcipher_instance *inst)
-{
-	crypto_drop_skcipher(skcipher_instance_ctx(inst));
-	kfree(inst);
-}
-
 static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	struct skcipher_instance *inst;
-	struct crypto_attr_type *algt;
-	struct crypto_spawn *spawn;
 	struct crypto_alg *alg;
-	u32 mask;
 	int err;
 
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER);
-	if (err)
-		return err;
-
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-	if (!inst)
-		return -ENOMEM;
-
-	algt = crypto_get_attr_type(tb);
-	err = PTR_ERR(algt);
-	if (IS_ERR(algt))
-		goto err_free_inst;
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
-	mask = CRYPTO_ALG_TYPE_MASK |
-		crypto_requires_off(algt->type, algt->mask,
-				    CRYPTO_ALG_NEED_FALLBACK);
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);
-	err = PTR_ERR(alg);
-	if (IS_ERR(alg))
-		goto err_free_inst;
-
-	spawn = skcipher_instance_ctx(inst);
-	err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst),
-				CRYPTO_ALG_TYPE_MASK);
-	if (err)
-		goto err_put_alg;
-
-	err = crypto_inst_setname(skcipher_crypto_instance(inst), "cfb", alg);
-	if (err)
-		goto err_drop_spawn;
-
-	inst->alg.base.cra_priority = alg->cra_priority;
-	/* we're a stream cipher independend of the crypto cra_blocksize */
+	/* CFB mode is a stream cipher. */
 	inst->alg.base.cra_blocksize = 1;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
 
 	/*
 	 * To simplify the implementation, configure the skcipher walk to only
@@ -304,36 +216,15 @@ static int crypto_cfb_create(struct crypto_template *tmpl, struct rtattr **tb)
 	 */
 	inst->alg.chunksize = alg->cra_blocksize;
 
-	inst->alg.ivsize = alg->cra_blocksize;
-	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-
-	inst->alg.base.cra_ctxsize = sizeof(struct crypto_cfb_ctx);
-
-	inst->alg.init = crypto_cfb_init_tfm;
-	inst->alg.exit = crypto_cfb_exit_tfm;
-
-	inst->alg.setkey = crypto_cfb_setkey;
 	inst->alg.encrypt = crypto_cfb_encrypt;
 	inst->alg.decrypt = crypto_cfb_decrypt;
 
-	inst->free = crypto_cfb_free;
-
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
-		goto err_drop_spawn;
-	crypto_mod_put(alg);
-
-out:
-	return err;
+		inst->free(inst);
 
-err_drop_spawn:
-	crypto_drop_spawn(spawn);
-err_put_alg:
 	crypto_mod_put(alg);
-err_free_inst:
-	kfree(inst);
-	goto out;
+	return err;
 }
 
 static struct crypto_template crypto_cfb_tmpl = {
@@ -356,5 +247,5 @@ module_init(crypto_cfb_module_init);
 module_exit(crypto_cfb_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("CFB block cipher algorithm");
+MODULE_DESCRIPTION("CFB block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("cfb");
-- 
2.20.1


* [PATCH 08/16] crypto: ctr - convert to skcipher API
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Convert the CTR template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/ctr.c | 160 +++++++++++++--------------------------------------
 1 file changed, 41 insertions(+), 119 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index 30f3946efc6da..4c743a96faa49 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -17,14 +17,8 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/random.h>
-#include <linux/scatterlist.h>
 #include <linux/slab.h>
 
-struct crypto_ctr_ctx {
-	struct crypto_cipher *child;
-};
-
 struct crypto_rfc3686_ctx {
 	struct crypto_skcipher *child;
 	u8 nonce[CTR_RFC3686_NONCE_SIZE];
@@ -35,24 +29,7 @@ struct crypto_rfc3686_req_ctx {
 	struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
 };
 
-static int crypto_ctr_setkey(struct crypto_tfm *parent, const u8 *key,
-			     unsigned int keylen)
-{
-	struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
-				CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
-			     CRYPTO_TFM_RES_MASK);
-
-	return err;
-}
-
-static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
+static void crypto_ctr_crypt_final(struct skcipher_walk *walk,
 				   struct crypto_cipher *tfm)
 {
 	unsigned int bsize = crypto_cipher_blocksize(tfm);
@@ -70,7 +47,7 @@ static void crypto_ctr_crypt_final(struct blkcipher_walk *walk,
 	crypto_inc(ctrblk, bsize);
 }
 
-static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
+static int crypto_ctr_crypt_segment(struct skcipher_walk *walk,
 				    struct crypto_cipher *tfm)
 {
 	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
@@ -96,7 +73,7 @@ static int crypto_ctr_crypt_segment(struct blkcipher_walk *walk,
 	return nbytes;
 }
 
-static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
+static int crypto_ctr_crypt_inplace(struct skcipher_walk *walk,
 				    struct crypto_cipher *tfm)
 {
 	void (*fn)(struct crypto_tfm *, u8 *, const u8 *) =
@@ -123,135 +100,80 @@ static int crypto_ctr_crypt_inplace(struct blkcipher_walk *walk,
 	return nbytes;
 }
 
-static int crypto_ctr_crypt(struct blkcipher_desc *desc,
-			      struct scatterlist *dst, struct scatterlist *src,
-			      unsigned int nbytes)
+static int crypto_ctr_crypt(struct skcipher_request *req)
 {
-	struct blkcipher_walk walk;
-	struct crypto_blkcipher *tfm = desc->tfm;
-	struct crypto_ctr_ctx *ctx = crypto_blkcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
-	unsigned int bsize = crypto_cipher_blocksize(child);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
+	const unsigned int bsize = crypto_cipher_blocksize(cipher);
+	struct skcipher_walk walk;
+	unsigned int nbytes;
 	int err;
 
-	blkcipher_walk_init(&walk, dst, src, nbytes);
-	err = blkcipher_walk_virt_block(desc, &walk, bsize);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes >= bsize) {
 		if (walk.src.virt.addr == walk.dst.virt.addr)
-			nbytes = crypto_ctr_crypt_inplace(&walk, child);
+			nbytes = crypto_ctr_crypt_inplace(&walk, cipher);
 		else
-			nbytes = crypto_ctr_crypt_segment(&walk, child);
+			nbytes = crypto_ctr_crypt_segment(&walk, cipher);
 
-		err = blkcipher_walk_done(desc, &walk, nbytes);
+		err = skcipher_walk_done(&walk, nbytes);
 	}
 
 	if (walk.nbytes) {
-		crypto_ctr_crypt_final(&walk, child);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		crypto_ctr_crypt_final(&walk, cipher);
+		err = skcipher_walk_done(&walk, 0);
 	}
 
 	return err;
 }
 
-static int crypto_ctr_init_tfm(struct crypto_tfm *tfm)
+static int crypto_ctr_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-	struct crypto_instance *inst = (void *)tfm->__crt_alg;
-	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
-	struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-
-	return 0;
-}
-
-static void crypto_ctr_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct crypto_ctr_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static struct crypto_instance *crypto_ctr_alloc(struct rtattr **tb)
-{
-	struct crypto_instance *inst;
-	struct crypto_attr_type *algt;
+	struct skcipher_instance *inst;
 	struct crypto_alg *alg;
-	u32 mask;
 	int err;
 
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
-	if (err)
-		return ERR_PTR(err);
-
-	algt = crypto_get_attr_type(tb);
-	if (IS_ERR(algt))
-		return ERR_CAST(algt);
-
-	mask = CRYPTO_ALG_TYPE_MASK |
-		crypto_requires_off(algt->type, algt->mask,
-				    CRYPTO_ALG_NEED_FALLBACK);
-
-	alg = crypto_attr_alg(tb[1], CRYPTO_ALG_TYPE_CIPHER, mask);
-	if (IS_ERR(alg))
-		return ERR_CAST(alg);
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
 	/* Block size must be >= 4 bytes. */
 	err = -EINVAL;
 	if (alg->cra_blocksize < 4)
-		goto out_put_alg;
+		goto out_free_inst;
 
 	/* If this is false we'd fail the alignment of crypto_inc. */
 	if (alg->cra_blocksize % 4)
-		goto out_put_alg;
-
-	inst = crypto_alloc_instance("ctr", alg);
-	if (IS_ERR(inst))
-		goto out;
-
-	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
-	inst->alg.cra_priority = alg->cra_priority;
-	inst->alg.cra_blocksize = 1;
-	inst->alg.cra_alignmask = alg->cra_alignmask;
-	inst->alg.cra_type = &crypto_blkcipher_type;
+		goto out_free_inst;
 
-	inst->alg.cra_blkcipher.ivsize = alg->cra_blocksize;
-	inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
-
-	inst->alg.cra_ctxsize = sizeof(struct crypto_ctr_ctx);
+	/* CTR mode is a stream cipher. */
+	inst->alg.base.cra_blocksize = 1;
 
-	inst->alg.cra_init = crypto_ctr_init_tfm;
-	inst->alg.cra_exit = crypto_ctr_exit_tfm;
+	/*
+	 * To simplify the implementation, configure the skcipher walk to only
+	 * give a partial block at the very end, never earlier.
+	 */
+	inst->alg.chunksize = alg->cra_blocksize;
 
-	inst->alg.cra_blkcipher.setkey = crypto_ctr_setkey;
-	inst->alg.cra_blkcipher.encrypt = crypto_ctr_crypt;
-	inst->alg.cra_blkcipher.decrypt = crypto_ctr_crypt;
+	inst->alg.encrypt = crypto_ctr_crypt;
+	inst->alg.decrypt = crypto_ctr_crypt;
 
-out:
-	crypto_mod_put(alg);
-	return inst;
+	err = skcipher_register_instance(tmpl, inst);
+	if (err)
+		goto out_free_inst;
+	goto out_put_alg;
 
+out_free_inst:
+	inst->free(inst);
 out_put_alg:
-	inst = ERR_PTR(err);
-	goto out;
-}
-
-static void crypto_ctr_free(struct crypto_instance *inst)
-{
-	crypto_drop_spawn(crypto_instance_ctx(inst));
-	kfree(inst);
+	crypto_mod_put(alg);
+	return err;
 }
 
 static struct crypto_template crypto_ctr_tmpl = {
 	.name = "ctr",
-	.alloc = crypto_ctr_alloc,
-	.free = crypto_ctr_free,
+	.create = crypto_ctr_create,
 	.module = THIS_MODULE,
 };
 
@@ -480,6 +402,6 @@ module_init(crypto_ctr_module_init);
 module_exit(crypto_ctr_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("CTR Counter block mode");
+MODULE_DESCRIPTION("CTR block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("rfc3686");
 MODULE_ALIAS_CRYPTO("ctr");
-- 
2.20.1


* [PATCH 09/16] crypto: ecb - convert to skcipher API
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Convert the ECB template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/ecb.c | 151 ++++++++++++---------------------------------------
 1 file changed, 36 insertions(+), 115 deletions(-)

diff --git a/crypto/ecb.c b/crypto/ecb.c
index 12011aff09713..0732715c8d915 100644
--- a/crypto/ecb.c
+++ b/crypto/ecb.c
@@ -11,162 +11,83 @@
  */
 
 #include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
 
-struct crypto_ecb_ctx {
-	struct crypto_cipher *child;
-};
-
-static int crypto_ecb_setkey(struct crypto_tfm *parent, const u8 *key,
-			     unsigned int keylen)
-{
-	struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
-				     CRYPTO_TFM_RES_MASK);
-	return err;
-}
-
-static int crypto_ecb_crypt(struct blkcipher_desc *desc,
-			    struct blkcipher_walk *walk,
-			    struct crypto_cipher *tfm,
+static int crypto_ecb_crypt(struct skcipher_request *req,
+			    struct crypto_cipher *cipher,
 			    void (*fn)(struct crypto_tfm *, u8 *, const u8 *))
 {
-	int bsize = crypto_cipher_blocksize(tfm);
+	const unsigned int bsize = crypto_cipher_blocksize(cipher);
+	struct skcipher_walk walk;
 	unsigned int nbytes;
 	int err;
 
-	err = blkcipher_walk_virt(desc, walk);
+	err = skcipher_walk_virt(&walk, req, false);
 
-	while ((nbytes = walk->nbytes)) {
-		u8 *wsrc = walk->src.virt.addr;
-		u8 *wdst = walk->dst.virt.addr;
+	while ((nbytes = walk.nbytes) != 0) {
+		const u8 *src = walk.src.virt.addr;
+		u8 *dst = walk.dst.virt.addr;
 
 		do {
-			fn(crypto_cipher_tfm(tfm), wdst, wsrc);
+			fn(crypto_cipher_tfm(cipher), dst, src);
 
-			wsrc += bsize;
-			wdst += bsize;
+			src += bsize;
+			dst += bsize;
 		} while ((nbytes -= bsize) >= bsize);
 
-		err = blkcipher_walk_done(desc, walk, nbytes);
+		err = skcipher_walk_done(&walk, nbytes);
 	}
 
 	return err;
 }
 
-static int crypto_ecb_encrypt(struct blkcipher_desc *desc,
-			      struct scatterlist *dst, struct scatterlist *src,
-			      unsigned int nbytes)
+static int crypto_ecb_encrypt(struct skcipher_request *req)
 {
-	struct blkcipher_walk walk;
-	struct crypto_blkcipher *tfm = desc->tfm;
-	struct crypto_ecb_ctx *ctx = crypto_blkcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
-
-	blkcipher_walk_init(&walk, dst, src, nbytes);
-	return crypto_ecb_crypt(desc, &walk, child,
-				crypto_cipher_alg(child)->cia_encrypt);
-}
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 
-static int crypto_ecb_decrypt(struct blkcipher_desc *desc,
-			      struct scatterlist *dst, struct scatterlist *src,
-			      unsigned int nbytes)
-{
-	struct blkcipher_walk walk;
-	struct crypto_blkcipher *tfm = desc->tfm;
-	struct crypto_ecb_ctx *ctx = crypto_blkcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
-
-	blkcipher_walk_init(&walk, dst, src, nbytes);
-	return crypto_ecb_crypt(desc, &walk, child,
-				crypto_cipher_alg(child)->cia_decrypt);
+	return crypto_ecb_crypt(req, cipher,
+				crypto_cipher_alg(cipher)->cia_encrypt);
 }
 
-static int crypto_ecb_init_tfm(struct crypto_tfm *tfm)
+static int crypto_ecb_decrypt(struct skcipher_request *req)
 {
-	struct crypto_instance *inst = (void *)tfm->__crt_alg;
-	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
-	struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_cipher *cipher;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
+	return crypto_ecb_crypt(req, cipher,
+				crypto_cipher_alg(cipher)->cia_decrypt);
 }
 
-static void crypto_ecb_exit_tfm(struct crypto_tfm *tfm)
+static int crypto_ecb_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-	struct crypto_ecb_ctx *ctx = crypto_tfm_ctx(tfm);
-	crypto_free_cipher(ctx->child);
-}
-
-static struct crypto_instance *crypto_ecb_alloc(struct rtattr **tb)
-{
-	struct crypto_instance *inst;
+	struct skcipher_instance *inst;
 	struct crypto_alg *alg;
 	int err;
 
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
-	if (err)
-		return ERR_PTR(err);
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
-				  CRYPTO_ALG_TYPE_MASK);
-	if (IS_ERR(alg))
-		return ERR_CAST(alg);
-
-	inst = crypto_alloc_instance("ecb", alg);
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
 	if (IS_ERR(inst))
-		goto out_put_alg;
-
-	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
-	inst->alg.cra_priority = alg->cra_priority;
-	inst->alg.cra_blocksize = alg->cra_blocksize;
-	inst->alg.cra_alignmask = alg->cra_alignmask;
-	inst->alg.cra_type = &crypto_blkcipher_type;
-
-	inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+		return PTR_ERR(inst);
 
-	inst->alg.cra_ctxsize = sizeof(struct crypto_ecb_ctx);
+	inst->alg.ivsize = 0; /* ECB mode doesn't take an IV */
 
-	inst->alg.cra_init = crypto_ecb_init_tfm;
-	inst->alg.cra_exit = crypto_ecb_exit_tfm;
+	inst->alg.encrypt = crypto_ecb_encrypt;
+	inst->alg.decrypt = crypto_ecb_decrypt;
 
-	inst->alg.cra_blkcipher.setkey = crypto_ecb_setkey;
-	inst->alg.cra_blkcipher.encrypt = crypto_ecb_encrypt;
-	inst->alg.cra_blkcipher.decrypt = crypto_ecb_decrypt;
-
-out_put_alg:
+	err = skcipher_register_instance(tmpl, inst);
+	if (err)
+		inst->free(inst);
 	crypto_mod_put(alg);
-	return inst;
-}
-
-static void crypto_ecb_free(struct crypto_instance *inst)
-{
-	crypto_drop_spawn(crypto_instance_ctx(inst));
-	kfree(inst);
+	return err;
 }
 
 static struct crypto_template crypto_ecb_tmpl = {
 	.name = "ecb",
-	.alloc = crypto_ecb_alloc,
-	.free = crypto_ecb_free,
+	.create = crypto_ecb_create,
 	.module = THIS_MODULE,
 };
 
@@ -184,5 +105,5 @@ module_init(crypto_ecb_module_init);
 module_exit(crypto_ecb_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("ECB block cipher algorithm");
+MODULE_DESCRIPTION("ECB block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("ecb");
-- 
2.20.1


* [PATCH 10/16] crypto: keywrap - convert to skcipher API
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Stephan Mueller

From: Eric Biggers <ebiggers@google.com>

Convert the keywrap template from the deprecated "blkcipher" API to the
"skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
simplify it considerably.

Cc: Stephan Mueller <smueller@chronox.de>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/keywrap.c | 198 ++++++++++++++++-------------------------------
 1 file changed, 65 insertions(+), 133 deletions(-)

diff --git a/crypto/keywrap.c b/crypto/keywrap.c
index ec5c6a087c901..a5cfe610d8f40 100644
--- a/crypto/keywrap.c
+++ b/crypto/keywrap.c
@@ -56,7 +56,7 @@
  *	u8 *iv = data;
  *	u8 *pt = data + crypto_skcipher_ivsize(tfm);
  *		<ensure that pt contains the plaintext of size ptlen>
- *	sg_init_one(&sg, ptdata, ptlen);
+ *	sg_init_one(&sg, pt, ptlen);
  *	skcipher_request_set_crypt(req, &sg, &sg, ptlen, iv);
  *
  *	==> After encryption, data now contains full KW result as per SP800-38F.
@@ -70,8 +70,8 @@
  *	u8 *iv = data;
  *	u8 *ct = data + crypto_skcipher_ivsize(tfm);
  *	unsigned int ctlen = datalen - crypto_skcipher_ivsize(tfm);
- *	sg_init_one(&sg, ctdata, ctlen);
- *	skcipher_request_set_crypt(req, &sg, &sg, ptlen, iv);
+ *	sg_init_one(&sg, ct, ctlen);
+ *	skcipher_request_set_crypt(req, &sg, &sg, ctlen, iv);
  *
  *	==> After decryption (which hopefully does not return EBADMSG), the ct
  *	pointer now points to the plaintext of size ctlen.
@@ -87,10 +87,6 @@
 #include <crypto/scatterwalk.h>
 #include <crypto/internal/skcipher.h>
 
-struct crypto_kw_ctx {
-	struct crypto_cipher *child;
-};
-
 struct crypto_kw_block {
 #define SEMIBSIZE 8
 	__be64 A;
@@ -124,16 +120,13 @@ static void crypto_kw_scatterlist_ff(struct scatter_walk *walk,
 	}
 }
 
-static int crypto_kw_decrypt(struct blkcipher_desc *desc,
-			     struct scatterlist *dst, struct scatterlist *src,
-			     unsigned int nbytes)
+static int crypto_kw_decrypt(struct skcipher_request *req)
 {
-	struct crypto_blkcipher *tfm = desc->tfm;
-	struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 	struct crypto_kw_block block;
-	struct scatterlist *lsrc, *ldst;
-	u64 t = 6 * ((nbytes) >> 3);
+	struct scatterlist *src, *dst;
+	u64 t = 6 * ((req->cryptlen) >> 3);
 	unsigned int i;
 	int ret = 0;
 
@@ -141,27 +134,27 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
 	 * Require at least 2 semiblocks (note, the 3rd semiblock that is
 	 * required by SP800-38F is the IV.
 	 */
-	if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+	if (req->cryptlen < (2 * SEMIBSIZE) || req->cryptlen % SEMIBSIZE)
 		return -EINVAL;
 
 	/* Place the IV into block A */
-	memcpy(&block.A, desc->info, SEMIBSIZE);
+	memcpy(&block.A, req->iv, SEMIBSIZE);
 
 	/*
 	 * src scatterlist is read-only. dst scatterlist is r/w. During the
-	 * first loop, lsrc points to src and ldst to dst. For any
-	 * subsequent round, the code operates on dst only.
+	 * first loop, src points to req->src and dst to req->dst. For any
+	 * subsequent round, the code operates on req->dst only.
 	 */
-	lsrc = src;
-	ldst = dst;
+	src = req->src;
+	dst = req->dst;
 
 	for (i = 0; i < 6; i++) {
 		struct scatter_walk src_walk, dst_walk;
-		unsigned int tmp_nbytes = nbytes;
+		unsigned int nbytes = req->cryptlen;
 
-		while (tmp_nbytes) {
-			/* move pointer by tmp_nbytes in the SGL */
-			crypto_kw_scatterlist_ff(&src_walk, lsrc, tmp_nbytes);
+		while (nbytes) {
+			/* move pointer by nbytes in the SGL */
+			crypto_kw_scatterlist_ff(&src_walk, src, nbytes);
 			/* get the source block */
 			scatterwalk_copychunks(&block.R, &src_walk, SEMIBSIZE,
 					       false);
@@ -170,21 +163,21 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
 			block.A ^= cpu_to_be64(t);
 			t--;
 			/* perform KW operation: decrypt block */
-			crypto_cipher_decrypt_one(child, (u8*)&block,
-						  (u8*)&block);
+			crypto_cipher_decrypt_one(cipher, (u8 *)&block,
+						  (u8 *)&block);
 
-			/* move pointer by tmp_nbytes in the SGL */
-			crypto_kw_scatterlist_ff(&dst_walk, ldst, tmp_nbytes);
+			/* move pointer by nbytes in the SGL */
+			crypto_kw_scatterlist_ff(&dst_walk, dst, nbytes);
 			/* Copy block->R into place */
 			scatterwalk_copychunks(&block.R, &dst_walk, SEMIBSIZE,
 					       true);
 
-			tmp_nbytes -= SEMIBSIZE;
+			nbytes -= SEMIBSIZE;
 		}
 
 		/* we now start to operate on the dst SGL only */
-		lsrc = dst;
-		ldst = dst;
+		src = req->dst;
+		dst = req->dst;
 	}
 
 	/* Perform authentication check */
@@ -196,15 +189,12 @@ static int crypto_kw_decrypt(struct blkcipher_desc *desc,
 	return ret;
 }
 
-static int crypto_kw_encrypt(struct blkcipher_desc *desc,
-			     struct scatterlist *dst, struct scatterlist *src,
-			     unsigned int nbytes)
+static int crypto_kw_encrypt(struct skcipher_request *req)
 {
-	struct crypto_blkcipher *tfm = desc->tfm;
-	struct crypto_kw_ctx *ctx = crypto_blkcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 	struct crypto_kw_block block;
-	struct scatterlist *lsrc, *ldst;
+	struct scatterlist *src, *dst;
 	u64 t = 1;
 	unsigned int i;
 
@@ -214,7 +204,7 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
 	 * This means that the dst memory must be one semiblock larger than src.
 	 * Also ensure that the given data is aligned to semiblock.
 	 */
-	if (nbytes < (2 * SEMIBSIZE) || nbytes % SEMIBSIZE)
+	if (req->cryptlen < (2 * SEMIBSIZE) || req->cryptlen % SEMIBSIZE)
 		return -EINVAL;
 
 	/*
@@ -225,26 +215,26 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
 
 	/*
 	 * src scatterlist is read-only. dst scatterlist is r/w. During the
-	 * first loop, lsrc points to src and ldst to dst. For any
-	 * subsequent round, the code operates on dst only.
+	 * first loop, src points to req->src and dst to req->dst. For any
+	 * subsequent round, the code operates on req->dst only.
 	 */
-	lsrc = src;
-	ldst = dst;
+	src = req->src;
+	dst = req->dst;
 
 	for (i = 0; i < 6; i++) {
 		struct scatter_walk src_walk, dst_walk;
-		unsigned int tmp_nbytes = nbytes;
+		unsigned int nbytes = req->cryptlen;
 
-		scatterwalk_start(&src_walk, lsrc);
-		scatterwalk_start(&dst_walk, ldst);
+		scatterwalk_start(&src_walk, src);
+		scatterwalk_start(&dst_walk, dst);
 
-		while (tmp_nbytes) {
+		while (nbytes) {
 			/* get the source block */
 			scatterwalk_copychunks(&block.R, &src_walk, SEMIBSIZE,
 					       false);
 
 			/* perform KW operation: encrypt block */
-			crypto_cipher_encrypt_one(child, (u8 *)&block,
+			crypto_cipher_encrypt_one(cipher, (u8 *)&block,
 						  (u8 *)&block);
 			/* perform KW operation: modify IV with counter */
 			block.A ^= cpu_to_be64(t);
@@ -254,117 +244,59 @@ static int crypto_kw_encrypt(struct blkcipher_desc *desc,
 			scatterwalk_copychunks(&block.R, &dst_walk, SEMIBSIZE,
 					       true);
 
-			tmp_nbytes -= SEMIBSIZE;
+			nbytes -= SEMIBSIZE;
 		}
 
 		/* we now start to operate on the dst SGL only */
-		lsrc = dst;
-		ldst = dst;
+		src = req->dst;
+		dst = req->dst;
 	}
 
 	/* establish the IV for the caller to pick up */
-	memcpy(desc->info, &block.A, SEMIBSIZE);
+	memcpy(req->iv, &block.A, SEMIBSIZE);
 
 	memzero_explicit(&block, sizeof(struct crypto_kw_block));
 
 	return 0;
 }
 
-static int crypto_kw_setkey(struct crypto_tfm *parent, const u8 *key,
-			    unsigned int keylen)
+static int crypto_kw_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
-	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
+	struct skcipher_instance *inst;
+	struct crypto_alg *alg;
 	int err;
 
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_tfm_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_tfm_set_flags(parent, crypto_cipher_get_flags(child) &
-				     CRYPTO_TFM_RES_MASK);
-	return err;
-}
-
-static int crypto_kw_init_tfm(struct crypto_tfm *tfm)
-{
-	struct crypto_instance *inst = crypto_tfm_alg_instance(tfm);
-	struct crypto_spawn *spawn = crypto_instance_ctx(inst);
-	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
-}
-
-static void crypto_kw_exit_tfm(struct crypto_tfm *tfm)
-{
-	struct crypto_kw_ctx *ctx = crypto_tfm_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static struct crypto_instance *crypto_kw_alloc(struct rtattr **tb)
-{
-	struct crypto_instance *inst = NULL;
-	struct crypto_alg *alg = NULL;
-	int err;
-
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_BLKCIPHER);
-	if (err)
-		return ERR_PTR(err);
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
-				  CRYPTO_ALG_TYPE_MASK);
-	if (IS_ERR(alg))
-		return ERR_CAST(alg);
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
-	inst = ERR_PTR(-EINVAL);
+	err = -EINVAL;
 	/* Section 5.1 requirement for KW */
 	if (alg->cra_blocksize != sizeof(struct crypto_kw_block))
-		goto err;
+		goto out_free_inst;
 
-	inst = crypto_alloc_instance("kw", alg);
-	if (IS_ERR(inst))
-		goto err;
+	inst->alg.base.cra_blocksize = SEMIBSIZE;
+	inst->alg.base.cra_alignmask = 0;
+	inst->alg.ivsize = SEMIBSIZE;
 
-	inst->alg.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER;
-	inst->alg.cra_priority = alg->cra_priority;
-	inst->alg.cra_blocksize = SEMIBSIZE;
-	inst->alg.cra_alignmask = 0;
-	inst->alg.cra_type = &crypto_blkcipher_type;
-	inst->alg.cra_blkcipher.ivsize = SEMIBSIZE;
-	inst->alg.cra_blkcipher.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.cra_blkcipher.max_keysize = alg->cra_cipher.cia_max_keysize;
+	inst->alg.encrypt = crypto_kw_encrypt;
+	inst->alg.decrypt = crypto_kw_decrypt;
 
-	inst->alg.cra_ctxsize = sizeof(struct crypto_kw_ctx);
-
-	inst->alg.cra_init = crypto_kw_init_tfm;
-	inst->alg.cra_exit = crypto_kw_exit_tfm;
-
-	inst->alg.cra_blkcipher.setkey = crypto_kw_setkey;
-	inst->alg.cra_blkcipher.encrypt = crypto_kw_encrypt;
-	inst->alg.cra_blkcipher.decrypt = crypto_kw_decrypt;
+	err = skcipher_register_instance(tmpl, inst);
+	if (err)
+		goto out_free_inst;
+	goto out_put_alg;
 
-err:
+out_free_inst:
+	inst->free(inst);
+out_put_alg:
 	crypto_mod_put(alg);
-	return inst;
-}
-
-static void crypto_kw_free(struct crypto_instance *inst)
-{
-	crypto_drop_spawn(crypto_instance_ctx(inst));
-	kfree(inst);
+	return err;
 }
 
 static struct crypto_template crypto_kw_tmpl = {
 	.name = "kw",
-	.alloc = crypto_kw_alloc,
-	.free = crypto_kw_free,
+	.create = crypto_kw_create,
 	.module = THIS_MODULE,
 };
 
-- 
2.20.1


* [PATCH 11/16] crypto: ofb - convert to skcipher_alloc_instance_simple()
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (9 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 10/16] crypto: keywrap " Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  4:16 ` [PATCH 12/16] crypto: pcbc - remove ability to wrap internal ciphers Eric Biggers
                   ` (6 subsequent siblings)
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu; +Cc: Gilad Ben-Yossef

From: Eric Biggers <ebiggers@google.com>

The OFB template just wraps a single block cipher algorithm, so simplify
it by converting it to use skcipher_alloc_instance_simple().

Cc: Gilad Ben-Yossef <gilad@benyossef.com>
Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/ofb.c | 119 +++------------------------------------------------
 1 file changed, 7 insertions(+), 112 deletions(-)

diff --git a/crypto/ofb.c b/crypto/ofb.c
index cab0b80953fed..34b6e1f426f7a 100644
--- a/crypto/ofb.c
+++ b/crypto/ofb.c
@@ -13,35 +13,11 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/scatterlist.h>
-#include <linux/slab.h>
-
-struct crypto_ofb_ctx {
-	struct crypto_cipher *child;
-};
-
-
-static int crypto_ofb_setkey(struct crypto_skcipher *parent, const u8 *key,
-			     unsigned int keylen)
-{
-	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_skcipher_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_cipher_get_flags(child) &
-				  CRYPTO_TFM_RES_MASK);
-	return err;
-}
 
 static int crypto_ofb_crypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *cipher = ctx->child;
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 	const unsigned int bsize = crypto_cipher_blocksize(cipher);
 	struct skcipher_walk walk;
 	int err;
@@ -73,75 +49,15 @@ static int crypto_ofb_crypt(struct skcipher_request *req)
 	return err;
 }
 
-static int crypto_ofb_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
-	struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
-	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
-}
-
-static void crypto_ofb_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_ofb_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static void crypto_ofb_free(struct skcipher_instance *inst)
-{
-	crypto_drop_skcipher(skcipher_instance_ctx(inst));
-	kfree(inst);
-}
-
 static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	struct skcipher_instance *inst;
-	struct crypto_attr_type *algt;
-	struct crypto_spawn *spawn;
 	struct crypto_alg *alg;
-	u32 mask;
 	int err;
 
-	err = crypto_check_attr_type(tb, CRYPTO_ALG_TYPE_SKCIPHER);
-	if (err)
-		return err;
-
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-	if (!inst)
-		return -ENOMEM;
-
-	algt = crypto_get_attr_type(tb);
-	err = PTR_ERR(algt);
-	if (IS_ERR(algt))
-		goto err_free_inst;
-
-	mask = CRYPTO_ALG_TYPE_MASK |
-		crypto_requires_off(algt->type, algt->mask,
-				    CRYPTO_ALG_NEED_FALLBACK);
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER, mask);
-	err = PTR_ERR(alg);
-	if (IS_ERR(alg))
-		goto err_free_inst;
-
-	spawn = skcipher_instance_ctx(inst);
-	err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst),
-				CRYPTO_ALG_TYPE_MASK);
-	crypto_mod_put(alg);
-	if (err)
-		goto err_free_inst;
-
-	err = crypto_inst_setname(skcipher_crypto_instance(inst), "ofb", alg);
-	if (err)
-		goto err_drop_spawn;
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
 	/* OFB mode is a stream cipher. */
 	inst->alg.base.cra_blocksize = 1;
@@ -152,36 +68,15 @@ static int crypto_ofb_create(struct crypto_template *tmpl, struct rtattr **tb)
 	 */
 	inst->alg.chunksize = alg->cra_blocksize;
 
-	inst->alg.base.cra_priority = alg->cra_priority;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
-
-	inst->alg.ivsize = alg->cra_blocksize;
-	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-
-	inst->alg.base.cra_ctxsize = sizeof(struct crypto_ofb_ctx);
-
-	inst->alg.init = crypto_ofb_init_tfm;
-	inst->alg.exit = crypto_ofb_exit_tfm;
-
-	inst->alg.setkey = crypto_ofb_setkey;
 	inst->alg.encrypt = crypto_ofb_crypt;
 	inst->alg.decrypt = crypto_ofb_crypt;
 
-	inst->free = crypto_ofb_free;
-
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
-		goto err_drop_spawn;
+		inst->free(inst);
 
-out:
+	crypto_mod_put(alg);
 	return err;
-
-err_drop_spawn:
-	crypto_drop_spawn(spawn);
-err_free_inst:
-	kfree(inst);
-	goto out;
 }
 
 static struct crypto_template crypto_ofb_tmpl = {
@@ -204,5 +99,5 @@ module_init(crypto_ofb_module_init);
 module_exit(crypto_ofb_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("OFB block cipher algorithm");
+MODULE_DESCRIPTION("OFB block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("ofb");
-- 
2.20.1


* [PATCH 12/16] crypto: pcbc - remove ability to wrap internal ciphers
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (10 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 11/16] crypto: ofb - convert to skcipher_alloc_instance_simple() Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  4:16 ` [PATCH 13/16] crypto: pcbc - convert to skcipher_alloc_instance_simple() Eric Biggers
                   ` (5 subsequent siblings)
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Following commit 944585a64f5e ("crypto: x86/aes-ni - remove special
handling of AES in PCBC mode"), it's no longer needed for the PCBC
template to support wrapping a cipher that has the CRYPTO_ALG_INTERNAL
flag set.  Thus, remove this now-unused functionality to make PCBC
consistent with the other single block cipher templates.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/pcbc.c | 10 +++-------
 1 file changed, 3 insertions(+), 7 deletions(-)

diff --git a/crypto/pcbc.c b/crypto/pcbc.c
index 1b182dfedc948..4f97a9d069b62 100644
--- a/crypto/pcbc.c
+++ b/crypto/pcbc.c
@@ -219,18 +219,15 @@ static int crypto_pcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 	if (IS_ERR(algt))
 		return PTR_ERR(algt);
 
-	if (((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask) &
-	    ~CRYPTO_ALG_INTERNAL)
+	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
 		return -EINVAL;
 
 	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
 	if (!inst)
 		return -ENOMEM;
 
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER |
-				      (algt->type & CRYPTO_ALG_INTERNAL),
-				  CRYPTO_ALG_TYPE_MASK |
-				  (algt->mask & CRYPTO_ALG_INTERNAL));
+	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
+				  CRYPTO_ALG_TYPE_MASK);
 	err = PTR_ERR(alg);
 	if (IS_ERR(alg))
 		goto err_free_inst;
@@ -245,7 +242,6 @@ static int crypto_pcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 	if (err)
 		goto err_drop_spawn;
 
-	inst->alg.base.cra_flags = alg->cra_flags & CRYPTO_ALG_INTERNAL;
 	inst->alg.base.cra_priority = alg->cra_priority;
 	inst->alg.base.cra_blocksize = alg->cra_blocksize;
 	inst->alg.base.cra_alignmask = alg->cra_alignmask;
-- 
2.20.1


* [PATCH 13/16] crypto: pcbc - convert to skcipher_alloc_instance_simple()
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (11 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 12/16] crypto: pcbc - remove ability to wrap internal ciphers Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  4:16 ` [PATCH 14/16] crypto: arc4 - convert to skcipher API Eric Biggers
                   ` (4 subsequent siblings)
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

The PCBC template just wraps a single block cipher algorithm, so
simplify it by converting it to use skcipher_alloc_instance_simple().

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/pcbc.c | 125 +++++---------------------------------------------
 1 file changed, 11 insertions(+), 114 deletions(-)

diff --git a/crypto/pcbc.c b/crypto/pcbc.c
index 4f97a9d069b62..2fa03fc576fe4 100644
--- a/crypto/pcbc.c
+++ b/crypto/pcbc.c
@@ -20,28 +20,6 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/slab.h>
-#include <linux/compiler.h>
-
-struct crypto_pcbc_ctx {
-	struct crypto_cipher *child;
-};
-
-static int crypto_pcbc_setkey(struct crypto_skcipher *parent, const u8 *key,
-			      unsigned int keylen)
-{
-	struct crypto_pcbc_ctx *ctx = crypto_skcipher_ctx(parent);
-	struct crypto_cipher *child = ctx->child;
-	int err;
-
-	crypto_cipher_clear_flags(child, CRYPTO_TFM_REQ_MASK);
-	crypto_cipher_set_flags(child, crypto_skcipher_get_flags(parent) &
-				       CRYPTO_TFM_REQ_MASK);
-	err = crypto_cipher_setkey(child, key, keylen);
-	crypto_skcipher_set_flags(parent, crypto_cipher_get_flags(child) &
-					  CRYPTO_TFM_RES_MASK);
-	return err;
-}
 
 static int crypto_pcbc_encrypt_segment(struct skcipher_request *req,
 				       struct skcipher_walk *walk,
@@ -90,8 +68,7 @@ static int crypto_pcbc_encrypt_inplace(struct skcipher_request *req,
 static int crypto_pcbc_encrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_pcbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 	struct skcipher_walk walk;
 	unsigned int nbytes;
 	int err;
@@ -101,10 +78,10 @@ static int crypto_pcbc_encrypt(struct skcipher_request *req)
 	while ((nbytes = walk.nbytes)) {
 		if (walk.src.virt.addr == walk.dst.virt.addr)
 			nbytes = crypto_pcbc_encrypt_inplace(req, &walk,
-							     child);
+							     cipher);
 		else
 			nbytes = crypto_pcbc_encrypt_segment(req, &walk,
-							     child);
+							     cipher);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 
@@ -158,8 +135,7 @@ static int crypto_pcbc_decrypt_inplace(struct skcipher_request *req,
 static int crypto_pcbc_decrypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct crypto_pcbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *child = ctx->child;
+	struct crypto_cipher *cipher = skcipher_cipher_simple(tfm);
 	struct skcipher_walk walk;
 	unsigned int nbytes;
 	int err;
@@ -169,113 +145,34 @@ static int crypto_pcbc_decrypt(struct skcipher_request *req)
 	while ((nbytes = walk.nbytes)) {
 		if (walk.src.virt.addr == walk.dst.virt.addr)
 			nbytes = crypto_pcbc_decrypt_inplace(req, &walk,
-							     child);
+							     cipher);
 		else
 			nbytes = crypto_pcbc_decrypt_segment(req, &walk,
-							     child);
+							     cipher);
 		err = skcipher_walk_done(&walk, nbytes);
 	}
 
 	return err;
 }
 
-static int crypto_pcbc_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct skcipher_instance *inst = skcipher_alg_instance(tfm);
-	struct crypto_spawn *spawn = skcipher_instance_ctx(inst);
-	struct crypto_pcbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct crypto_cipher *cipher;
-
-	cipher = crypto_spawn_cipher(spawn);
-	if (IS_ERR(cipher))
-		return PTR_ERR(cipher);
-
-	ctx->child = cipher;
-	return 0;
-}
-
-static void crypto_pcbc_exit_tfm(struct crypto_skcipher *tfm)
-{
-	struct crypto_pcbc_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	crypto_free_cipher(ctx->child);
-}
-
-static void crypto_pcbc_free(struct skcipher_instance *inst)
-{
-	crypto_drop_skcipher(skcipher_instance_ctx(inst));
-	kfree(inst);
-}
-
 static int crypto_pcbc_create(struct crypto_template *tmpl, struct rtattr **tb)
 {
 	struct skcipher_instance *inst;
-	struct crypto_attr_type *algt;
-	struct crypto_spawn *spawn;
 	struct crypto_alg *alg;
 	int err;
 
-	algt = crypto_get_attr_type(tb);
-	if (IS_ERR(algt))
-		return PTR_ERR(algt);
-
-	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
-		return -EINVAL;
-
-	inst = kzalloc(sizeof(*inst) + sizeof(*spawn), GFP_KERNEL);
-	if (!inst)
-		return -ENOMEM;
-
-	alg = crypto_get_attr_alg(tb, CRYPTO_ALG_TYPE_CIPHER,
-				  CRYPTO_ALG_TYPE_MASK);
-	err = PTR_ERR(alg);
-	if (IS_ERR(alg))
-		goto err_free_inst;
-
-	spawn = skcipher_instance_ctx(inst);
-	err = crypto_init_spawn(spawn, alg, skcipher_crypto_instance(inst),
-				CRYPTO_ALG_TYPE_MASK);
-	if (err)
-		goto err_put_alg;
-
-	err = crypto_inst_setname(skcipher_crypto_instance(inst), "pcbc", alg);
-	if (err)
-		goto err_drop_spawn;
+	inst = skcipher_alloc_instance_simple(tmpl, tb, &alg);
+	if (IS_ERR(inst))
+		return PTR_ERR(inst);
 
-	inst->alg.base.cra_priority = alg->cra_priority;
-	inst->alg.base.cra_blocksize = alg->cra_blocksize;
-	inst->alg.base.cra_alignmask = alg->cra_alignmask;
-
-	inst->alg.ivsize = alg->cra_blocksize;
-	inst->alg.min_keysize = alg->cra_cipher.cia_min_keysize;
-	inst->alg.max_keysize = alg->cra_cipher.cia_max_keysize;
-
-	inst->alg.base.cra_ctxsize = sizeof(struct crypto_pcbc_ctx);
-
-	inst->alg.init = crypto_pcbc_init_tfm;
-	inst->alg.exit = crypto_pcbc_exit_tfm;
-
-	inst->alg.setkey = crypto_pcbc_setkey;
 	inst->alg.encrypt = crypto_pcbc_encrypt;
 	inst->alg.decrypt = crypto_pcbc_decrypt;
 
-	inst->free = crypto_pcbc_free;
-
 	err = skcipher_register_instance(tmpl, inst);
 	if (err)
-		goto err_drop_spawn;
+		inst->free(inst);
 	crypto_mod_put(alg);
-
-out:
 	return err;
-
-err_drop_spawn:
-	crypto_drop_spawn(spawn);
-err_put_alg:
-	crypto_mod_put(alg);
-err_free_inst:
-	kfree(inst);
-	goto out;
 }
 
 static struct crypto_template crypto_pcbc_tmpl = {
@@ -298,5 +195,5 @@ module_init(crypto_pcbc_module_init);
 module_exit(crypto_pcbc_module_exit);
 
 MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("PCBC block cipher algorithm");
+MODULE_DESCRIPTION("PCBC block cipher mode of operation");
 MODULE_ALIAS_CRYPTO("pcbc");
-- 
2.20.1


* [PATCH 14/16] crypto: arc4 - convert to skcipher API
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (12 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 13/16] crypto: pcbc - convert to skcipher_alloc_instance_simple() Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  4:16 ` [PATCH 15/16] crypto: null - convert ecb-cipher_null " Eric Biggers
                   ` (3 subsequent siblings)
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Convert the "ecb(arc4)" algorithm from the deprecated "blkcipher" API to
the "skcipher" API.

(Note that this is really a stream cipher and not a block cipher in ECB
mode as the name implies, but that's a problem for another day...)

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/arc4.c | 82 +++++++++++++++++++++++++++------------------------
 1 file changed, 44 insertions(+), 38 deletions(-)

diff --git a/crypto/arc4.c b/crypto/arc4.c
index f1a81925558fa..652d24399afa6 100644
--- a/crypto/arc4.c
+++ b/crypto/arc4.c
@@ -12,10 +12,10 @@
  *
  */
 
-#include <linux/module.h>
-#include <linux/init.h>
-#include <linux/crypto.h>
 #include <crypto/algapi.h>
+#include <crypto/internal/skcipher.h>
+#include <linux/init.h>
+#include <linux/module.h>
 
 #define ARC4_MIN_KEY_SIZE	1
 #define ARC4_MAX_KEY_SIZE	256
@@ -50,6 +50,12 @@ static int arc4_set_key(struct crypto_tfm *tfm, const u8 *in_key,
 	return 0;
 }
 
+static int arc4_set_key_skcipher(struct crypto_skcipher *tfm, const u8 *in_key,
+				 unsigned int key_len)
+{
+	return arc4_set_key(&tfm->base, in_key, key_len);
+}
+
 static void arc4_crypt(struct arc4_ctx *ctx, u8 *out, const u8 *in,
 		       unsigned int len)
 {
@@ -92,30 +98,25 @@ static void arc4_crypt_one(struct crypto_tfm *tfm, u8 *out, const u8 *in)
 	arc4_crypt(crypto_tfm_ctx(tfm), out, in, 1);
 }
 
-static int ecb_arc4_crypt(struct blkcipher_desc *desc, struct scatterlist *dst,
-			  struct scatterlist *src, unsigned int nbytes)
+static int ecb_arc4_crypt(struct skcipher_request *req)
 {
-	struct arc4_ctx *ctx = crypto_blkcipher_ctx(desc->tfm);
-	struct blkcipher_walk walk;
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct arc4_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct skcipher_walk walk;
 	int err;
 
-	blkcipher_walk_init(&walk, dst, src, nbytes);
-
-	err = blkcipher_walk_virt(desc, &walk);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
-		u8 *wsrc = walk.src.virt.addr;
-		u8 *wdst = walk.dst.virt.addr;
-
-		arc4_crypt(ctx, wdst, wsrc, walk.nbytes);
-
-		err = blkcipher_walk_done(desc, &walk, 0);
+		arc4_crypt(ctx, walk.dst.virt.addr, walk.src.virt.addr,
+			   walk.nbytes);
+		err = skcipher_walk_done(&walk, 0);
 	}
 
 	return err;
 }
 
-static struct crypto_alg arc4_algs[2] = { {
+static struct crypto_alg arc4_cipher = {
 	.cra_name		=	"arc4",
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	ARC4_BLOCK_SIZE,
@@ -130,34 +131,39 @@ static struct crypto_alg arc4_algs[2] = { {
 			.cia_decrypt		=	arc4_crypt_one,
 		},
 	},
-}, {
-	.cra_name		=	"ecb(arc4)",
-	.cra_priority		=	100,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
-	.cra_blocksize		=	ARC4_BLOCK_SIZE,
-	.cra_ctxsize		=	sizeof(struct arc4_ctx),
-	.cra_alignmask		=	0,
-	.cra_type		=	&crypto_blkcipher_type,
-	.cra_module		=	THIS_MODULE,
-	.cra_u			=	{
-		.blkcipher = {
-			.min_keysize	=	ARC4_MIN_KEY_SIZE,
-			.max_keysize	=	ARC4_MAX_KEY_SIZE,
-			.setkey		=	arc4_set_key,
-			.encrypt	=	ecb_arc4_crypt,
-			.decrypt	=	ecb_arc4_crypt,
-		},
-	},
-} };
+};
+
+static struct skcipher_alg arc4_skcipher = {
+	.base.cra_name		=	"ecb(arc4)",
+	.base.cra_priority	=	100,
+	.base.cra_blocksize	=	ARC4_BLOCK_SIZE,
+	.base.cra_ctxsize	=	sizeof(struct arc4_ctx),
+	.base.cra_module	=	THIS_MODULE,
+	.min_keysize		=	ARC4_MIN_KEY_SIZE,
+	.max_keysize		=	ARC4_MAX_KEY_SIZE,
+	.setkey			=	arc4_set_key_skcipher,
+	.encrypt		=	ecb_arc4_crypt,
+	.decrypt		=	ecb_arc4_crypt,
+};
 
 static int __init arc4_init(void)
 {
-	return crypto_register_algs(arc4_algs, ARRAY_SIZE(arc4_algs));
+	int err;
+
+	err = crypto_register_alg(&arc4_cipher);
+	if (err)
+		return err;
+
+	err = crypto_register_skcipher(&arc4_skcipher);
+	if (err)
+		crypto_unregister_alg(&arc4_cipher);
+	return err;
 }
 
 static void __exit arc4_exit(void)
 {
-	crypto_unregister_algs(arc4_algs, ARRAY_SIZE(arc4_algs));
+	crypto_unregister_alg(&arc4_cipher);
+	crypto_unregister_skcipher(&arc4_skcipher);
 }
 
 module_init(arc4_init);
-- 
2.20.1


* [PATCH 15/16] crypto: null - convert ecb-cipher_null to skcipher API
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (13 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 14/16] crypto: arc4 - convert to skcipher API Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  4:16 ` [PATCH 16/16] crypto: algapi - remove crypto_alloc_instance() Eric Biggers
                   ` (2 subsequent siblings)
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Convert the "ecb-cipher_null" algorithm from the deprecated "blkcipher"
API to the "skcipher" API.

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/crypto_null.c | 57 +++++++++++++++++++++++++-------------------
 1 file changed, 32 insertions(+), 25 deletions(-)

diff --git a/crypto/crypto_null.c b/crypto/crypto_null.c
index 0bae59922a805..01630a9c7e011 100644
--- a/crypto/crypto_null.c
+++ b/crypto/crypto_null.c
@@ -65,6 +65,10 @@ static int null_hash_setkey(struct crypto_shash *tfm, const u8 *key,
 			    unsigned int keylen)
 { return 0; }
 
+static int null_skcipher_setkey(struct crypto_skcipher *tfm, const u8 *key,
+				unsigned int keylen)
+{ return 0; }
+
 static int null_setkey(struct crypto_tfm *tfm, const u8 *key,
 		       unsigned int keylen)
 { return 0; }
@@ -74,21 +78,18 @@ static void null_crypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
 	memcpy(dst, src, NULL_BLOCK_SIZE);
 }
 
-static int skcipher_null_crypt(struct blkcipher_desc *desc,
-			       struct scatterlist *dst,
-			       struct scatterlist *src, unsigned int nbytes)
+static int null_skcipher_crypt(struct skcipher_request *req)
 {
-	struct blkcipher_walk walk;
+	struct skcipher_walk walk;
 	int err;
 
-	blkcipher_walk_init(&walk, dst, src, nbytes);
-	err = blkcipher_walk_virt(desc, &walk);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes) {
 		if (walk.src.virt.addr != walk.dst.virt.addr)
 			memcpy(walk.dst.virt.addr, walk.src.virt.addr,
 			       walk.nbytes);
-		err = blkcipher_walk_done(desc, &walk, 0);
+		err = skcipher_walk_done(&walk, 0);
 	}
 
 	return err;
@@ -109,7 +110,22 @@ static struct shash_alg digest_null = {
 	}
 };
 
-static struct crypto_alg null_algs[3] = { {
+static struct skcipher_alg skcipher_null = {
+	.base.cra_name		=	"ecb(cipher_null)",
+	.base.cra_driver_name	=	"ecb-cipher_null",
+	.base.cra_priority	=	100,
+	.base.cra_blocksize	=	NULL_BLOCK_SIZE,
+	.base.cra_ctxsize	=	0,
+	.base.cra_module	=	THIS_MODULE,
+	.min_keysize		=	NULL_KEY_SIZE,
+	.max_keysize		=	NULL_KEY_SIZE,
+	.ivsize			=	NULL_IV_SIZE,
+	.setkey			=	null_skcipher_setkey,
+	.encrypt		=	null_skcipher_crypt,
+	.decrypt		=	null_skcipher_crypt,
+};
+
+static struct crypto_alg null_algs[] = { {
 	.cra_name		=	"cipher_null",
 	.cra_flags		=	CRYPTO_ALG_TYPE_CIPHER,
 	.cra_blocksize		=	NULL_BLOCK_SIZE,
@@ -121,22 +137,6 @@ static struct crypto_alg null_algs[3] = { {
 	.cia_setkey		= 	null_setkey,
 	.cia_encrypt		=	null_crypt,
 	.cia_decrypt		=	null_crypt } }
-}, {
-	.cra_name		=	"ecb(cipher_null)",
-	.cra_driver_name	=	"ecb-cipher_null",
-	.cra_priority		=	100,
-	.cra_flags		=	CRYPTO_ALG_TYPE_BLKCIPHER,
-	.cra_blocksize		=	NULL_BLOCK_SIZE,
-	.cra_type		=	&crypto_blkcipher_type,
-	.cra_ctxsize		=	0,
-	.cra_module		=	THIS_MODULE,
-	.cra_u			=	{ .blkcipher = {
-	.min_keysize		=	NULL_KEY_SIZE,
-	.max_keysize		=	NULL_KEY_SIZE,
-	.ivsize			=	NULL_IV_SIZE,
-	.setkey			= 	null_setkey,
-	.encrypt		=	skcipher_null_crypt,
-	.decrypt		=	skcipher_null_crypt } }
 }, {
 	.cra_name		=	"compress_null",
 	.cra_flags		=	CRYPTO_ALG_TYPE_COMPRESS,
@@ -199,8 +199,14 @@ static int __init crypto_null_mod_init(void)
 	if (ret < 0)
 		goto out_unregister_algs;
 
+	ret = crypto_register_skcipher(&skcipher_null);
+	if (ret < 0)
+		goto out_unregister_shash;
+
 	return 0;
 
+out_unregister_shash:
+	crypto_unregister_shash(&digest_null);
 out_unregister_algs:
 	crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
 out:
@@ -209,8 +215,9 @@ static int __init crypto_null_mod_init(void)
 
 static void __exit crypto_null_mod_fini(void)
 {
-	crypto_unregister_shash(&digest_null);
 	crypto_unregister_algs(null_algs, ARRAY_SIZE(null_algs));
+	crypto_unregister_shash(&digest_null);
+	crypto_unregister_skcipher(&skcipher_null);
 }
 
 module_init(crypto_null_mod_init);
-- 
2.20.1


* [PATCH 16/16] crypto: algapi - remove crypto_alloc_instance()
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (14 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 15/16] crypto: null - convert ecb-cipher_null " Eric Biggers
@ 2019-01-04  4:16 ` Eric Biggers
  2019-01-04  9:57 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest David Howells
  2019-01-11  6:33 ` [PATCH 00/16] crypto: skcipher template simplifications and conversions Herbert Xu
  17 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04  4:16 UTC (permalink / raw)
  To: linux-crypto, Herbert Xu

From: Eric Biggers <ebiggers@google.com>

Now that all "blkcipher" templates have been converted to "skcipher",
crypto_alloc_instance() is no longer used.  And it's not useful any
longer as it creates an old-style weakly typed instance rather than a
new-style strongly typed instance.  So remove it, and now that the name
is freed up rename crypto_alloc_instance2() to crypto_alloc_instance().

Signed-off-by: Eric Biggers <ebiggers@google.com>
---
 crypto/algapi.c                | 33 ++-------------------------------
 include/crypto/algapi.h        |  6 ++----
 include/crypto/internal/hash.h |  6 +++---
 3 files changed, 7 insertions(+), 38 deletions(-)

diff --git a/crypto/algapi.c b/crypto/algapi.c
index 8b65ada33e5d3..f3d766312bd96 100644
--- a/crypto/algapi.c
+++ b/crypto/algapi.c
@@ -845,8 +845,8 @@ int crypto_inst_setname(struct crypto_instance *inst, const char *name,
 }
 EXPORT_SYMBOL_GPL(crypto_inst_setname);
 
-void *crypto_alloc_instance2(const char *name, struct crypto_alg *alg,
-			     unsigned int head)
+void *crypto_alloc_instance(const char *name, struct crypto_alg *alg,
+			    unsigned int head)
 {
 	struct crypto_instance *inst;
 	char *p;
@@ -869,35 +869,6 @@ void *crypto_alloc_instance2(const char *name, struct crypto_alg *alg,
 	kfree(p);
 	return ERR_PTR(err);
 }
-EXPORT_SYMBOL_GPL(crypto_alloc_instance2);
-
-struct crypto_instance *crypto_alloc_instance(const char *name,
-					      struct crypto_alg *alg)
-{
-	struct crypto_instance *inst;
-	struct crypto_spawn *spawn;
-	int err;
-
-	inst = crypto_alloc_instance2(name, alg, 0);
-	if (IS_ERR(inst))
-		goto out;
-
-	spawn = crypto_instance_ctx(inst);
-	err = crypto_init_spawn(spawn, alg, inst,
-				CRYPTO_ALG_TYPE_MASK | CRYPTO_ALG_ASYNC);
-
-	if (err)
-		goto err_free_inst;
-
-	return inst;
-
-err_free_inst:
-	kfree(inst);
-	inst = ERR_PTR(err);
-
-out:
-	return inst;
-}
 EXPORT_SYMBOL_GPL(crypto_alloc_instance);
 
 void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen)
diff --git a/include/crypto/algapi.h b/include/crypto/algapi.h
index 4a5ad10e75f0e..093869f175d6b 100644
--- a/include/crypto/algapi.h
+++ b/include/crypto/algapi.h
@@ -185,10 +185,8 @@ static inline struct crypto_alg *crypto_attr_alg(struct rtattr *rta,
 int crypto_attr_u32(struct rtattr *rta, u32 *num);
 int crypto_inst_setname(struct crypto_instance *inst, const char *name,
 			struct crypto_alg *alg);
-void *crypto_alloc_instance2(const char *name, struct crypto_alg *alg,
-			     unsigned int head);
-struct crypto_instance *crypto_alloc_instance(const char *name,
-					      struct crypto_alg *alg);
+void *crypto_alloc_instance(const char *name, struct crypto_alg *alg,
+			    unsigned int head);
 
 void crypto_init_queue(struct crypto_queue *queue, unsigned int max_qlen);
 int crypto_enqueue_request(struct crypto_queue *queue,
diff --git a/include/crypto/internal/hash.h b/include/crypto/internal/hash.h
index a0b0ad9d585e6..e355fdb642a92 100644
--- a/include/crypto/internal/hash.h
+++ b/include/crypto/internal/hash.h
@@ -170,7 +170,7 @@ static inline unsigned int ahash_instance_headroom(void)
 static inline struct ahash_instance *ahash_alloc_instance(
 	const char *name, struct crypto_alg *alg)
 {
-	return crypto_alloc_instance2(name, alg, ahash_instance_headroom());
+	return crypto_alloc_instance(name, alg, ahash_instance_headroom());
 }
 
 static inline void ahash_request_complete(struct ahash_request *req, int err)
@@ -233,8 +233,8 @@ static inline void *shash_instance_ctx(struct shash_instance *inst)
 static inline struct shash_instance *shash_alloc_instance(
 	const char *name, struct crypto_alg *alg)
 {
-	return crypto_alloc_instance2(name, alg,
-				      sizeof(struct shash_alg) - sizeof(*alg));
+	return crypto_alloc_instance(name, alg,
+				     sizeof(struct shash_alg) - sizeof(*alg));
 }
 
 static inline struct crypto_shash *crypto_spawn_shash(
-- 
2.20.1


* Re: [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (15 preceding siblings ...)
  2019-01-04  4:16 ` [PATCH 16/16] crypto: algapi - remove crypto_alloc_instance() Eric Biggers
@ 2019-01-04  9:57 ` David Howells
  2019-01-04 17:07   ` Eric Biggers
  2019-01-04 17:24   ` David Howells
  2019-01-11  6:33 ` [PATCH 00/16] crypto: skcipher template simplifications and conversions Herbert Xu
  17 siblings, 2 replies; 28+ messages in thread
From: David Howells @ 2019-01-04  9:57 UTC (permalink / raw)
  To: Eric Biggers; +Cc: dhowells, linux-crypto, Herbert Xu, stable

Eric Biggers <ebiggers@kernel.org> wrote:

> -	u8 *iv = walk->iv;
> +	u8 * const iv = walk->iv;

Does adding this const actually gain anything?  (this is done twice)

David


* Re: [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest
  2019-01-04  9:57 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest David Howells
@ 2019-01-04 17:07   ` Eric Biggers
  2019-01-04 17:24   ` David Howells
  1 sibling, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-04 17:07 UTC (permalink / raw)
  To: David Howells; +Cc: linux-crypto, Herbert Xu, stable

On Fri, Jan 04, 2019 at 09:57:13AM +0000, David Howells wrote:
> Eric Biggers <ebiggers@kernel.org> wrote:
> 
> > -	u8 *iv = walk->iv;
> > +	u8 * const iv = walk->iv;
> 
> Does adding this const actually gain anything?  (this is done twice)
> 
> David

It makes it clearer what's going on, especially since some modes update the 'iv'
pointer after each block (delaying the copy to 'walk.iv' until the end) but
others can't do that.  The 'const' is helpful to further distinguish these two
cases, which were confused in both the pcbc and cfb implementations.
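
Roughly -- as an illustrative sketch only, loosely modeled on the CBC
decryption and PCBC encryption segment handlers, with the usual walk
boilerplate (src, dst, nbytes, bsize) elided -- the two cases look like:

	/* Case 1: the next IV is a block that already sits untouched in the
	 * src buffer, so the pointer can simply advance and walk->iv only
	 * needs to be written back once, at the very end.
	 */
	u8 *iv = walk->iv;

	do {
		crypto_cipher_decrypt_one(cipher, dst, src);
		crypto_xor(dst, iv, bsize);
		iv = src;			/* previous ciphertext becomes the IV */
		src += bsize;
		dst += bsize;
	} while ((nbytes -= bsize) >= bsize);

	memcpy(walk->iv, iv, bsize);		/* delayed copy back to walk->iv */

	/* Case 2: the next IV has to be recomputed into the IV buffer on every
	 * block, so the pointer itself must never move -- which is what the
	 * const documents.
	 */
	u8 * const iv = walk->iv;

	do {
		crypto_xor(iv, src, bsize);	/* iv ^= plaintext */
		crypto_cipher_encrypt_one(cipher, dst, iv);
		memcpy(iv, dst, bsize);		/* next IV = ciphertext ... */
		crypto_xor(iv, src, bsize);	/* ... ^ plaintext */
		src += bsize;
		dst += bsize;
	} while ((nbytes -= bsize) >= bsize);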

- Eric


* Re: [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest
  2019-01-04  9:57 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest David Howells
  2019-01-04 17:07   ` Eric Biggers
@ 2019-01-04 17:24   ` David Howells
  1 sibling, 0 replies; 28+ messages in thread
From: David Howells @ 2019-01-04 17:24 UTC (permalink / raw)
  To: Eric Biggers; +Cc: dhowells, linux-crypto, Herbert Xu, stable

Eric Biggers <ebiggers@kernel.org> wrote:

> It makes it clearer what's going on, especially since some modes update the
> 'iv' pointer after each block (delaying the copy to 'walk.iv' until the end)
> but others can't do that.  The 'const' is helpful to further distinguish
> these two cases, which were confused in both the pcbc and cfb
> implementations.

I'm not sure I agree that it makes it clearer, but:

Reviewed-and-tested-by: David Howells <dhowells@redhat.com>


* Re: [PATCH 01/16] crypto: cfb - add missing 'chunksize' property
       [not found]   ` <20190104210311.5AC542087F@mail.kernel.org>
@ 2019-01-05  3:07     ` Eric Biggers
  2019-01-05  3:40       ` Herbert Xu
  0 siblings, 1 reply; 28+ messages in thread
From: Eric Biggers @ 2019-01-05  3:07 UTC (permalink / raw)
  To: Sasha Levin, Herbert Xu; +Cc: linux-crypto, stable, James Bottomley

On Fri, Jan 04, 2019 at 09:03:10PM +0000, Sasha Levin wrote:
> Hi,
> 
> [This is an automated email]
> 
> This commit has been processed because it contains a "Fixes:" tag,
> fixing commit: a7d85e06ed80 crypto: cfb - add support for Cipher FeedBack mode.
> 
> The bot has tested the following trees: v4.20.0, v4.19.13.
> 
> v4.20.0: Failed to apply! Possible dependencies:
>     7da66670775d ("crypto: testmgr - add AES-CFB tests")
> 
> v4.19.13: Failed to apply! Possible dependencies:
>     7da66670775d ("crypto: testmgr - add AES-CFB tests")
>     dfb89ab3f0a7 ("crypto: tcrypt - add OFB functional tests")
> 
> 
> How should we proceed with this patch?
> 
> --
> Thanks,
> Sasha

The following will need to be applied to 4.19 and 4.20 first.  Both had Cc stable:

fa4600734b74 ("crypto: cfb - fix decryption")
7da66670775d ("crypto: testmgr - add AES-CFB tests")

Herbert, why was CFB accepted without any test vectors in the first place?

- Eric


* Re: [PATCH 01/16] crypto: cfb - add missing 'chunksize' property
  2019-01-05  3:07     ` Eric Biggers
@ 2019-01-05  3:40       ` Herbert Xu
  0 siblings, 0 replies; 28+ messages in thread
From: Herbert Xu @ 2019-01-05  3:40 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Sasha Levin, linux-crypto, stable, James Bottomley

On Fri, Jan 04, 2019 at 07:07:48PM -0800, Eric Biggers wrote:
>
> Herbert, why was CFB accepted without any test vectors in the first place?

That was an oversight.  Longer term we should restructure how
the test vectors are stored by moving them in with the generic
implementation.  That should also ensure that we would never
add an algorithm without both a generic implementation and
test vectors.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 10/16] crypto: keywrap - convert to skcipher API
  2019-01-04  4:16 ` [PATCH 10/16] crypto: keywrap " Eric Biggers
@ 2019-01-05 11:40   ` Stephan Mueller
  0 siblings, 0 replies; 28+ messages in thread
From: Stephan Mueller @ 2019-01-05 11:40 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu

On Friday, 4 January 2019, 05:16:19 CET, Eric Biggers wrote:

Hi Eric,

> From: Eric Biggers <ebiggers@google.com>
> 
> Convert the keywrap template from the deprecated "blkcipher" API to the
> "skcipher" API, taking advantage of skcipher_alloc_instance_simple() to
> simplify it considerably.
> 
> Cc: Stephan Mueller <smueller@chronox.de>
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Adding the *_simple functions to the kernel crypto API is a really nice way
to reduce code duplication! Thanks a lot.

Also, for this patch:

Reviewed-by: Stephan Mueller <smueller@chronox.de>

Ciao
Stephan


* Re: [PATCH 05/16] crypto: skcipher - add helper for simple block cipher modes
  2019-01-04  4:16 ` [PATCH 05/16] crypto: skcipher - add helper for simple block cipher modes Eric Biggers
@ 2019-01-05 12:03   ` Stephan Mueller
  2019-01-06 21:09     ` Eric Biggers
  0 siblings, 1 reply; 28+ messages in thread
From: Stephan Mueller @ 2019-01-05 12:03 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto, Herbert Xu

On Friday, 4 January 2019, 05:16:14 CET, Eric Biggers wrote:

Hi Eric,

> From: Eric Biggers <ebiggers@google.com>
> 
> The majority of skcipher templates (including both the existing ones and
> the ones remaining to be converted from the "blkcipher" API) just wrap a
> single block cipher algorithm.  This includes cbc, cfb, ctr, ecb, kw,
> ofb, and pcbc.  Add a helper function skcipher_alloc_instance_simple()
> that handles allocating an skcipher instance for this common case.
> 
> Signed-off-by: Eric Biggers <ebiggers@google.com>

Apart from the comment below:

Reviewed-by: Stephan Mueller <smueller@chronox.de>

> +struct skcipher_instance *
> +skcipher_alloc_instance_simple(struct crypto_template *tmpl, struct rtattr
> **tb, +			       struct crypto_alg **cipher_alg_ret)
> +{
> +	struct crypto_attr_type *algt;
> +	struct crypto_alg *cipher_alg;
> +	struct skcipher_instance *inst;
> +	struct crypto_spawn *spawn;
> +	u32 mask;
> +	int err;
> +
> +	algt = crypto_get_attr_type(tb);
> +	if (IS_ERR(algt))
> +		return ERR_CAST(algt);
> +
> +	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
> +		return ERR_PTR(-EINVAL);

Why not use crypto_check_attr_type()? I understand that it does not return 
algt for the next check, but maybe we can consolidate the code a bit here?

> +
> +	mask = CRYPTO_ALG_TYPE_MASK |
> +		crypto_requires_off(algt->type, algt->mask,
> +				    CRYPTO_ALG_NEED_FALLBACK);


Ciao
Stephan


* Re: [PATCH 03/16] crypto: ofb - fix handling partial blocks and make thread-safe
  2019-01-04  4:16 ` [PATCH 03/16] crypto: ofb - fix handling partial blocks and make thread-safe Eric Biggers
@ 2019-01-06 10:38   ` Gilad Ben-Yossef
  0 siblings, 0 replies; 28+ messages in thread
From: Gilad Ben-Yossef @ 2019-01-06 10:38 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Linux Crypto Mailing List, Herbert Xu, stable

On Fri, Jan 4, 2019 at 6:20 AM Eric Biggers <ebiggers@kernel.org> wrote:
>
> From: Eric Biggers <ebiggers@google.com>
>
> Fix multiple bugs in the OFB implementation:
>
> 1. It stored the per-request state 'cnt' in the tfm context, which can be
>    used by multiple threads concurrently (e.g. via AF_ALG).
> 2. It didn't support messages not a multiple of the block cipher size,
>    despite being a stream cipher.
> 3. It didn't set cra_blocksize to 1 to indicate it is a stream cipher.
>
> To fix these, set the 'chunksize' property to the cipher block size to
> guarantee that when walking through the scatterlist, a partial block can
> only occur at the end.  Then change the implementation to XOR a block at
> a time at first, then XOR the partial block at the end if needed.  This
> is the same way CTR and CFB are implemented.  As a bonus, this also
> improves performance in most cases over the current approach.


Well, it certainly looks like my implementation had a lot of room for
improvement :-)
Thank you for doing this, Eric

Reviewed-by: Gilad Ben-Yossef <gilad@benyossef.com>

Gilad

-- 
Gilad Ben-Yossef
Chief Coffee Drinker

values of β will give rise to dom!


* Re: [PATCH 05/16] crypto: skcipher - add helper for simple block cipher modes
  2019-01-05 12:03   ` Stephan Mueller
@ 2019-01-06 21:09     ` Eric Biggers
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-06 21:09 UTC (permalink / raw)
  To: Stephan Mueller; +Cc: linux-crypto, Herbert Xu

Hi Stephan,

On Sat, Jan 05, 2019 at 01:03:40PM +0100, Stephan Mueller wrote:
> On Friday, 4 January 2019, 05:16:14 CET, Eric Biggers wrote:
> 
> Hi Eric,
> 
> > From: Eric Biggers <ebiggers@google.com>
> > 
> > The majority of skcipher templates (including both the existing ones and
> > the ones remaining to be converted from the "blkcipher" API) just wrap a
> > single block cipher algorithm.  This includes cbc, cfb, ctr, ecb, kw,
> > ofb, and pcbc.  Add a helper function skcipher_alloc_instance_simple()
> > that handles allocating an skcipher instance for this common case.
> > 
> > Signed-off-by: Eric Biggers <ebiggers@google.com>
> 
> Apart from the comment below:
> 
> Reviewed-by: Stephan Mueller <smueller@chronox.de>
> 
> > +struct skcipher_instance *
> > +skcipher_alloc_instance_simple(struct crypto_template *tmpl, struct rtattr
> > **tb, +			       struct crypto_alg **cipher_alg_ret)
> > +{
> > +	struct crypto_attr_type *algt;
> > +	struct crypto_alg *cipher_alg;
> > +	struct skcipher_instance *inst;
> > +	struct crypto_spawn *spawn;
> > +	u32 mask;
> > +	int err;
> > +
> > +	algt = crypto_get_attr_type(tb);
> > +	if (IS_ERR(algt))
> > +		return ERR_CAST(algt);
> > +
> > +	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
> > +		return ERR_PTR(-EINVAL);
> 
> Why not use crypto_check_attr_type()? I understand that it does not return 
> algt for the next check, but maybe we can consolidate the code a bit here?

crypto_check_attr_type() isn't a great fit because, as you point out, it doesn't
return 'algt' to calculate the mask.  So 'algt' would be validated twice.  Also,
most templates actually do the inline check like this.

So for now I prefer to leave this as-is, and leave for later any additional
simplification of this and the other templates doing this same check.  (Maybe we
could add a function that does both the type check and mask calculation.)
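
Something along these lines, say -- the name and signature here are invented
just for the sake of discussion, this helper doesn't exist in the tree:

/*
 * Hypothetical combined helper: verify that the template is being
 * instantiated as an skcipher and compute the mask for looking up
 * the underlying cipher, in one step.
 */
static int crypto_check_skcipher_attr_type(struct rtattr **tb, u32 *mask_ret)
{
	struct crypto_attr_type *algt;

	algt = crypto_get_attr_type(tb);
	if (IS_ERR(algt))
		return PTR_ERR(algt);

	if ((algt->type ^ CRYPTO_ALG_TYPE_SKCIPHER) & algt->mask)
		return -EINVAL;

	*mask_ret = CRYPTO_ALG_TYPE_MASK |
		    crypto_requires_off(algt->type, algt->mask,
					CRYPTO_ALG_NEED_FALLBACK);
	return 0;
}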

> 
> > +
> > +	mask = CRYPTO_ALG_TYPE_MASK |
> > +		crypto_requires_off(algt->type, algt->mask,
> > +				    CRYPTO_ALG_NEED_FALLBACK);
> 
> 

- Eric


* Re: [PATCH 00/16] crypto: skcipher template simplifications and conversions
  2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
                   ` (16 preceding siblings ...)
  2019-01-04  9:57 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest David Howells
@ 2019-01-11  6:33 ` Herbert Xu
  2019-01-11 17:58   ` Eric Biggers
  17 siblings, 1 reply; 28+ messages in thread
From: Herbert Xu @ 2019-01-11  6:33 UTC (permalink / raw)
  To: Eric Biggers; +Cc: linux-crypto

On Thu, Jan 03, 2019 at 08:16:09PM -0800, Eric Biggers wrote:
> Hello,
> 
> This series adds a function skcipher_alloc_instance_simple() that
> greatly simplifies creating an skcipher_instance that uses a single
> underlying block cipher.  It then converts the cbc, cfb, ctr, ecb, kw,
> ofb, and pcbc templates to use it.  In doing so, ctr, ecb, and kw are
> also converted from the deprecated "blkcipher" API to the skcipher API.
> 
> While doing this, I also found some rather silly bugs in the cfb, ofb,
> and pcbc templates...  So I've included the fixes for these first, in
> patches 1-4.  Please consider taking these first 4 patches through
> 'crypto' rather than 'cryptodev'.  (But 5-16 are cleanups only, so no
> rush on those.)

I have decided to push all of these through cryptodev because
it's not clear whether the first four bug fixes are serious enough 
to warrant going through straight away.  If I'm wrong please let
me know.

> Finally, I also converted ecb(arc4) and ecb(cipher_null) to the skcipher
> API, since following the template conversions these were the last
> generic algorithms that were still using the "blkcipher" API.
> 
> The overall delta is almost 500 lines removed, due to removing a lot of
> boilerplate that created algorithm instances.
> 
> Eric Biggers (16):
>   crypto: cfb - add missing 'chunksize' property
>   crypto: cfb - remove bogus memcpy() with src == dest
>   crypto: ofb - fix handling partial blocks and make thread-safe
>   crypto: pcbc - remove bogus memcpy()s with src == dest
>   crypto: skcipher - add helper for simple block cipher modes
>   crypto: cbc - convert to skcipher_alloc_instance_simple()
>   crypto: cfb - convert to skcipher_alloc_instance_simple()
>   crypto: ctr - convert to skcipher API
>   crypto: ecb - convert to skcipher API
>   crypto: keywrap - convert to skcipher API
>   crypto: ofb - convert to skcipher_alloc_instance_simple()
>   crypto: pcbc - remove ability to wrap internal ciphers
>   crypto: pcbc - convert to skcipher_alloc_instance_simple()
>   crypto: arc4 - convert to skcipher API
>   crypto: null - convert ecb-cipher_null to skcipher API
>   crypto: algapi - remove crypto_alloc_instance()
> 
>  crypto/algapi.c                    |  33 +----
>  crypto/arc4.c                      |  82 ++++++------
>  crypto/cbc.c                       | 131 ++-----------------
>  crypto/cfb.c                       | 139 +++-----------------
>  crypto/crypto_null.c               |  57 ++++----
>  crypto/ctr.c                       | 160 ++++++-----------------
>  crypto/ecb.c                       | 151 +++++----------------
>  crypto/keywrap.c                   | 198 ++++++++++------------------
>  crypto/ofb.c                       | 202 ++++++-----------------------
>  crypto/pcbc.c                      | 143 +++-----------------
>  crypto/skcipher.c                  | 131 +++++++++++++++++++
>  crypto/testmgr.h                   |  53 +++++++-
>  include/crypto/algapi.h            |   6 +-
>  include/crypto/internal/hash.h     |   6 +-
>  include/crypto/internal/skcipher.h |  15 +++
>  15 files changed, 508 insertions(+), 999 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [PATCH 00/16] crypto: skcipher template simplifications and conversions
  2019-01-11  6:33 ` [PATCH 00/16] crypto: skcipher template simplifications and conversions Herbert Xu
@ 2019-01-11 17:58   ` Eric Biggers
  0 siblings, 0 replies; 28+ messages in thread
From: Eric Biggers @ 2019-01-11 17:58 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linux-crypto

On Fri, Jan 11, 2019 at 02:33:58PM +0800, Herbert Xu wrote:
> On Thu, Jan 03, 2019 at 08:16:09PM -0800, Eric Biggers wrote:
> > Hello,
> > 
> > This series adds a function skcipher_alloc_instance_simple() that
> > greatly simplifies creating an skcipher_instance that uses a single
> > underlying block cipher.  It then converts the cbc, cfb, ctr, ecb, kw,
> > ofb, and pcbc templates to use it.  In doing so, ctr, ecb, and kw are
> > also converted from the deprecated "blkcipher" API to the skcipher API.
> > 
> > While doing this, I also found some rather silly bugs in the cfb, ofb,
> > and pcbc templates...  So I've included the fixes for these first, in
> > patches 1-4.  Please consider taking these first 4 patches through
> > 'crypto' rather than 'cryptodev'.  (But 5-16 are cleanups only, so no
> > rush on those.)
> 
> I have decided to push all of these through cryptodev because
> it's not clear whether the first four bug fixes are serious enough 
> to warrant going through straight away.  If I'm wrong please let
> me know.
> 

I'm fine with cryptodev.

- Eric


Thread overview: 28+ messages
2019-01-04  4:16 [PATCH 00/16] crypto: skcipher template simplifications and conversions Eric Biggers
2019-01-04  4:16 ` [PATCH 01/16] crypto: cfb - add missing 'chunksize' property Eric Biggers
     [not found]   ` <20190104210311.5AC542087F@mail.kernel.org>
2019-01-05  3:07     ` Eric Biggers
2019-01-05  3:40       ` Herbert Xu
2019-01-04  4:16 ` [PATCH 02/16] crypto: cfb - remove bogus memcpy() with src == dest Eric Biggers
2019-01-04  4:16 ` [PATCH 03/16] crypto: ofb - fix handling partial blocks and make thread-safe Eric Biggers
2019-01-06 10:38   ` Gilad Ben-Yossef
2019-01-04  4:16 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest Eric Biggers
2019-01-04  4:16 ` [PATCH 05/16] crypto: skcipher - add helper for simple block cipher modes Eric Biggers
2019-01-05 12:03   ` Stephan Mueller
2019-01-06 21:09     ` Eric Biggers
2019-01-04  4:16 ` [PATCH 06/16] crypto: cbc - convert to skcipher_alloc_instance_simple() Eric Biggers
2019-01-04  4:16 ` [PATCH 07/16] crypto: cfb " Eric Biggers
2019-01-04  4:16 ` [PATCH 08/16] crypto: ctr - convert to skcipher API Eric Biggers
2019-01-04  4:16 ` [PATCH 09/16] crypto: ecb " Eric Biggers
2019-01-04  4:16 ` [PATCH 10/16] crypto: keywrap " Eric Biggers
2019-01-05 11:40   ` Stephan Mueller
2019-01-04  4:16 ` [PATCH 11/16] crypto: ofb - convert to skcipher_alloc_instance_simple() Eric Biggers
2019-01-04  4:16 ` [PATCH 12/16] crypto: pcbc - remove ability to wrap internal ciphers Eric Biggers
2019-01-04  4:16 ` [PATCH 13/16] crypto: pcbc - convert to skcipher_alloc_instance_simple() Eric Biggers
2019-01-04  4:16 ` [PATCH 14/16] crypto: arc4 - convert to skcipher API Eric Biggers
2019-01-04  4:16 ` [PATCH 15/16] crypto: null - convert ecb-cipher_null " Eric Biggers
2019-01-04  4:16 ` [PATCH 16/16] crypto: algapi - remove crypto_alloc_instance() Eric Biggers
2019-01-04  9:57 ` [PATCH 04/16] crypto: pcbc - remove bogus memcpy()s with src == dest David Howells
2019-01-04 17:07   ` Eric Biggers
2019-01-04 17:24   ` David Howells
2019-01-11  6:33 ` [PATCH 00/16] crypto: skcipher template simplifications and conversions Herbert Xu
2019-01-11 17:58   ` Eric Biggers
