Linux-Crypto Archive on lore.kernel.org
* [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining
@ 2020-07-28  7:17 Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining Herbert Xu
                   ` (31 more replies)
  0 siblings, 32 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:17 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

This patch-set adds support to the Crypto API and algif_skcipher
for algorithms that cannot be chained, as well as ones that can
be chained if you withhold a certain number of blocks at the end.

The vast majority of algorithms can be chained already, e.g., cbc
and lrw.  Everything else either can be modified to support chaining,
e.g., chacha and xts, or cannot chain at all, e.g., keywrap.

Some drivers that implement algorithms which can be chained with
modification may not be able to support chaining due to hardware
limitations.  For now they're treated the same way as ones that
cannot be chained at all.

The algorithm arc4 has been left out of all this owing to ongoing
discussions regarding its future.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28 17:15   ` Eric Biggers
  2020-07-28  7:18 ` [v3 PATCH 2/31] crypto: algif_skcipher - Add support for final_chunksize Herbert Xu
                   ` (30 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

Crypto skcipher algorithms in general allow chaining to break
large operations into smaller ones based on multiples of the chunk
size.  However, some algorithms don't support chaining while others
(such as cts) only support chaining for the leading blocks.

This patch adds the necessary API support for these algorithms.  In
particular, a new request flag CRYPTO_TFM_REQ_MORE is added to allow
chaining for algorithms such as cts that cannot otherwise be chained.

A new algorithm attribute final_chunksize has also been added to
indicate how many bytes at the end of a request cannot be chained
and must therefore be withheld if chaining is attempted.

This attribute can also be used to indicate that no chaining is
allowed.  Its value should be set to -1 in that case.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/skcipher.h |   24 ++++++++++++++++++++++++
 include/linux/crypto.h    |    1 +
 2 files changed, 25 insertions(+)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index 5663f71198b37..fb90c3e1c26ba 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -97,6 +97,9 @@ struct crypto_sync_skcipher {
  * @walksize: Equal to the chunk size except in cases where the algorithm is
  * 	      considerably more efficient if it can operate on multiple chunks
  * 	      in parallel. Should be a multiple of chunksize.
+ * @final_chunksize: Number of bytes that must be processed together
+ *		     at the end.  If set to -1 then chaining is not
+ *		     possible.
  * @base: Definition of a generic crypto algorithm.
  *
  * All fields except @ivsize are mandatory and must be filled.
@@ -114,6 +117,7 @@ struct skcipher_alg {
 	unsigned int ivsize;
 	unsigned int chunksize;
 	unsigned int walksize;
+	int final_chunksize;
 
 	struct crypto_alg base;
 };
@@ -279,6 +283,11 @@ static inline unsigned int crypto_skcipher_alg_chunksize(
 	return alg->chunksize;
 }
 
+static inline int crypto_skcipher_alg_final_chunksize(struct skcipher_alg *alg)
+{
+	return alg->final_chunksize;
+}
+
 /**
  * crypto_skcipher_chunksize() - obtain chunk size
  * @tfm: cipher handle
@@ -296,6 +305,21 @@ static inline unsigned int crypto_skcipher_chunksize(
 	return crypto_skcipher_alg_chunksize(crypto_skcipher_alg(tfm));
 }
 
+/**
+ * crypto_skcipher_final_chunksize() - obtain number of final bytes
+ * @tfm: cipher handle
+ *
+ * For algorithms such as CTS the final chunks cannot be chained.
+ * This returns the number of final bytes that must be withheld
+ * when chaining.
+ *
+ * Return: number of final bytes
+ */
+static inline int crypto_skcipher_final_chunksize(struct crypto_skcipher *tfm)
+{
+	return crypto_skcipher_alg_final_chunksize(crypto_skcipher_alg(tfm));
+}
+
 static inline unsigned int crypto_sync_skcipher_blocksize(
 	struct crypto_sync_skcipher *tfm)
 {
diff --git a/include/linux/crypto.h b/include/linux/crypto.h
index ef90e07c9635c..2e624a1d4f832 100644
--- a/include/linux/crypto.h
+++ b/include/linux/crypto.h
@@ -141,6 +141,7 @@
 #define CRYPTO_TFM_REQ_FORBID_WEAK_KEYS	0x00000100
 #define CRYPTO_TFM_REQ_MAY_SLEEP	0x00000200
 #define CRYPTO_TFM_REQ_MAY_BACKLOG	0x00000400
+#define CRYPTO_TFM_REQ_MORE		0x00000800
 
 /*
  * Miscellaneous stuff.


* [v3 PATCH 2/31] crypto: algif_skcipher - Add support for final_chunksize
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 3/31] crypto: cts - Add support for chaining Herbert Xu
                   ` (29 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands algif_skcipher assumes all algorithms support chaining.
This patch teaches it about the new final_chunksize attribute which
can be used to disable chaining on a given algorithm.  It can also
be used to support chaining on algorithms such as cts that cannot
otherwise do chaining.  For that case algif_skcipher will also now
set the request flag CRYPTO_TFM_REQ_MORE when needed.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/algif_skcipher.c |   28 ++++++++++++++++++++--------
 1 file changed, 20 insertions(+), 8 deletions(-)

diff --git a/crypto/algif_skcipher.c b/crypto/algif_skcipher.c
index a51ba22fef58f..1d50f042dd319 100644
--- a/crypto/algif_skcipher.c
+++ b/crypto/algif_skcipher.c
@@ -57,12 +57,15 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	struct af_alg_ctx *ctx = ask->private;
 	struct crypto_skcipher *tfm = pask->private;
 	unsigned int bs = crypto_skcipher_chunksize(tfm);
+	unsigned int rflags = CRYPTO_TFM_REQ_MAY_SLEEP;
+	int fc = crypto_skcipher_final_chunksize(tfm);
+	unsigned int min = bs + (fc > 0 ? fc : 0);
 	struct af_alg_async_req *areq;
 	int err = 0;
 	size_t len = 0;
 
-	if (!ctx->init || (ctx->more && ctx->used < bs)) {
-		err = af_alg_wait_for_data(sk, flags, bs);
+	if (!ctx->init || (ctx->more && ctx->used < min)) {
+		err = af_alg_wait_for_data(sk, flags, min);
 		if (err)
 			return err;
 	}
@@ -78,13 +81,23 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 	if (err)
 		goto free;
 
+	err = -EINVAL;
+
 	/*
 	 * If more buffers are to be expected to be processed, process only
-	 * full block size buffers.
+	 * full block size buffers and withhold data according to the final
+	 * chunk size.
 	 */
-	if (ctx->more || len < ctx->used)
+	if (ctx->more || len < ctx->used) {
+		if (fc < 0)
+			goto free;
+
+		len -= fc;
 		len -= len % bs;
 
+		rflags |= CRYPTO_TFM_REQ_MORE;
+	}
+
 	/*
 	 * Create a per request TX SGL for this request which tracks the
 	 * SG entries from the global TX SGL.
@@ -116,8 +129,7 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		areq->outlen = len;
 
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
-					      CRYPTO_TFM_REQ_MAY_SLEEP,
-					      af_alg_async_cb, areq);
+					      rflags, af_alg_async_cb, areq);
 		err = ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :
 			crypto_skcipher_decrypt(&areq->cra_u.skcipher_req);
@@ -129,9 +141,9 @@ static int _skcipher_recvmsg(struct socket *sock, struct msghdr *msg,
 		sock_put(sk);
 	} else {
 		/* Synchronous operation */
+		rflags |= CRYPTO_TFM_REQ_MAY_BACKLOG;
 		skcipher_request_set_callback(&areq->cra_u.skcipher_req,
-					      CRYPTO_TFM_REQ_MAY_SLEEP |
-					      CRYPTO_TFM_REQ_MAY_BACKLOG,
+					      rflags,
 					      crypto_req_done, &ctx->wait);
 		err = crypto_wait_req(ctx->enc ?
 			crypto_skcipher_encrypt(&areq->cra_u.skcipher_req) :


* [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 2/31] crypto: algif_skcipher - Add support for final_chunksize Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28 11:05   ` Ard Biesheuvel
  2020-07-28  7:18 ` [v3 PATCH 4/31] crypto: arm64/aes-glue - Add support for chaining CTS Herbert Xu
                   ` (28 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands cts cannot do chaining.  That is, it always performs
the cipher-text stealing at the end of a request.  This patch adds
support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.

It also sets final_chunksize so that data can be withheld by the
caller to enable correct processing at the true end of a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cts.c |   19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/crypto/cts.c b/crypto/cts.c
index 3766d47ebcc01..67990146c9b06 100644
--- a/crypto/cts.c
+++ b/crypto/cts.c
@@ -100,7 +100,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
 	struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_request *subreq = &rctx->subreq;
-	int bsize = crypto_skcipher_blocksize(tfm);
+	int bsize = crypto_skcipher_chunksize(tfm);
 	u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
 	struct scatterlist *sg;
 	unsigned int offset;
@@ -146,7 +146,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
 	struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
 	struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct skcipher_request *subreq = &rctx->subreq;
-	int bsize = crypto_skcipher_blocksize(tfm);
+	int bsize = crypto_skcipher_chunksize(tfm);
 	unsigned int nbytes = req->cryptlen;
 	unsigned int offset;
 
@@ -155,7 +155,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
 	if (nbytes < bsize)
 		return -EINVAL;
 
-	if (nbytes == bsize) {
+	if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) {
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      req->base.complete,
 					      req->base.data);
@@ -181,7 +181,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
 	struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	struct skcipher_request *subreq = &rctx->subreq;
-	int bsize = crypto_skcipher_blocksize(tfm);
+	int bsize = crypto_skcipher_chunksize(tfm);
 	u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
 	struct scatterlist *sg;
 	unsigned int offset;
@@ -240,7 +240,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
 	struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
 	struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct skcipher_request *subreq = &rctx->subreq;
-	int bsize = crypto_skcipher_blocksize(tfm);
+	int bsize = crypto_skcipher_chunksize(tfm);
 	unsigned int nbytes = req->cryptlen;
 	unsigned int offset;
 	u8 *space;
@@ -250,7 +250,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
 	if (nbytes < bsize)
 		return -EINVAL;
 
-	if (nbytes == bsize) {
+	if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) {
 		skcipher_request_set_callback(subreq, req->base.flags,
 					      req->base.complete,
 					      req->base.data);
@@ -297,7 +297,7 @@ static int crypto_cts_init_tfm(struct crypto_skcipher *tfm)
 	ctx->child = cipher;
 
 	align = crypto_skcipher_alignmask(tfm);
-	bsize = crypto_skcipher_blocksize(cipher);
+	bsize = crypto_skcipher_chunksize(cipher);
 	reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) +
 			crypto_skcipher_reqsize(cipher),
 			crypto_tfm_ctx_alignment()) +
@@ -359,11 +359,12 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
 		goto err_free_inst;
 
 	inst->alg.base.cra_priority = alg->base.cra_priority;
-	inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
+	inst->alg.base.cra_blocksize = 1;
 	inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
 
 	inst->alg.ivsize = alg->base.cra_blocksize;
-	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
+	inst->alg.chunksize = alg->base.cra_blocksize;
+	inst->alg.final_chunksize = inst->alg.chunksize * 2;
 	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
 	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
 


* [v3 PATCH 4/31] crypto: arm64/aes-glue - Add support for chaining CTS
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (2 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 3/31] crypto: cts - Add support for chaining Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 5/31] crypto: nitrox " Herbert Xu
                   ` (27 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands cts cannot do chaining.  That is, it always performs
the cipher-text stealing at the end of a request.  This patch adds
support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.

It also sets the final_chunksize so that data can be withheld by
the caller to enable correct processing at the true end of a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/arm64/crypto/aes-glue.c |   17 ++++++++++++-----
 1 file changed, 12 insertions(+), 5 deletions(-)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 395bbf64b2abb..f63feb00e354d 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -283,11 +283,15 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
 	skcipher_request_set_callback(&subreq, skcipher_request_flags(req),
 				      NULL, NULL);
 
-	if (req->cryptlen <= AES_BLOCK_SIZE) {
-		if (req->cryptlen < AES_BLOCK_SIZE)
+	if (req->cryptlen < AES_BLOCK_SIZE)
+		return -EINVAL;
+
+	if (req->base.flags & CRYPTO_TFM_REQ_MORE) {
+		if (req->cryptlen & (AES_BLOCK_SIZE - 1))
 			return -EINVAL;
+		cbc_blocks += 2;
+	} else if (req->cryptlen == AES_BLOCK_SIZE)
 		cbc_blocks = 1;
-	}
 
 	if (cbc_blocks > 0) {
 		skcipher_request_set_crypt(&subreq, req->src, req->dst,
@@ -299,7 +303,8 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
 		if (err)
 			return err;
 
-		if (req->cryptlen == AES_BLOCK_SIZE)
+		if (req->cryptlen == AES_BLOCK_SIZE ||
+		    req->base.flags & CRYPTO_TFM_REQ_MORE)
 			return 0;
 
 		dst = src = scatterwalk_ffwd(sg_src, req->src, subreq.cryptlen);
@@ -738,13 +743,15 @@ static struct skcipher_alg aes_algs[] = { {
 		.cra_driver_name	= "__cts-cbc-aes-" MODE,
 		.cra_priority		= PRIO,
 		.cra_flags		= CRYPTO_ALG_INTERNAL,
-		.cra_blocksize		= AES_BLOCK_SIZE,
+		.cra_blocksize		= 1,
 		.cra_ctxsize		= sizeof(struct crypto_aes_ctx),
 		.cra_module		= THIS_MODULE,
 	},
 	.min_keysize	= AES_MIN_KEY_SIZE,
 	.max_keysize	= AES_MAX_KEY_SIZE,
 	.ivsize		= AES_BLOCK_SIZE,
+	.chunksize	= AES_BLOCK_SIZE,
+	.final_chunksize	= 2 * AES_BLOCK_SIZE,
 	.walksize	= 2 * AES_BLOCK_SIZE,
 	.setkey		= skcipher_aes_setkey,
 	.encrypt	= cts_cbc_encrypt,


* [v3 PATCH 5/31] crypto: nitrox - Add support for chaining CTS
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (3 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 4/31] crypto: arm64/aes-glue - Add support for chaining CTS Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 6/31] crypto: ccree " Herbert Xu
                   ` (26 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands cts cannot do chaining.  That is, it always performs
the cipher-text stealing at the end of a request.  This patch adds
support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.

It also sets the final_chunksize so that data can be withheld by
the caller to enable correct processing at the true end of a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/cavium/nitrox/nitrox_skcipher.c |  124 ++++++++++++++++++++++---
 1 file changed, 113 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
index a553ac65f3249..7a159a5da30a0 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
@@ -20,6 +20,16 @@ struct nitrox_cipher {
 	enum flexi_cipher value;
 };
 
+struct nitrox_crypto_cts_ctx {
+	struct nitrox_crypto_ctx base;
+	union {
+		u8 *u8p;
+		u64 ctx_handle;
+		struct flexi_crypto_context *fctx;
+	} cbc;
+	struct crypto_ctx_hdr *cbchdr;
+};
+
 /**
  * supported cipher list
  */
@@ -105,6 +115,18 @@ static void nitrox_cbc_cipher_callback(void *arg, int err)
 	nitrox_skcipher_callback(arg, err);
 }
 
+static void nitrox_cts_cipher_callback(void *arg, int err)
+{
+	struct skcipher_request *skreq = arg;
+
+	if (skreq->base.flags & CRYPTO_TFM_REQ_MORE) {
+		nitrox_cbc_cipher_callback(arg, err);
+		return;
+	}
+
+	nitrox_skcipher_callback(arg, err);
+}
+
 static int nitrox_skcipher_init(struct crypto_skcipher *tfm)
 {
 	struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(tfm);
@@ -162,6 +184,42 @@ static void nitrox_skcipher_exit(struct crypto_skcipher *tfm)
 	nctx->ndev = NULL;
 }
 
+static int nitrox_cts_init(struct crypto_skcipher *tfm)
+{
+	struct nitrox_crypto_cts_ctx *ctsctx = crypto_skcipher_ctx(tfm);
+	struct nitrox_crypto_ctx *nctx = &ctsctx->base;
+	struct crypto_ctx_hdr *chdr;
+	int err;
+
+	err = nitrox_skcipher_init(tfm);
+	if (err)
+		return err;
+
+	chdr = crypto_alloc_context(nctx->ndev);
+	if (!chdr) {
+		nitrox_skcipher_exit(tfm);
+		return -ENOMEM;
+	}
+
+	ctsctx->cbchdr = chdr;
+	ctsctx->cbc.u8p = chdr->vaddr;
+	ctsctx->cbc.u8p += sizeof(struct ctx_hdr);
+	nctx->callback = nitrox_cts_cipher_callback;
+	return 0;
+}
+
+static void nitrox_cts_exit(struct crypto_skcipher *tfm)
+{
+	struct nitrox_crypto_cts_ctx *ctsctx = crypto_skcipher_ctx(tfm);
+	struct flexi_crypto_context *fctx = ctsctx->cbc.fctx;
+
+	memset(&fctx->crypto, 0, sizeof(struct crypto_keys));
+	memset(&fctx->auth, 0, sizeof(struct auth_keys));
+	crypto_free_context(ctsctx->cbchdr);
+
+	nitrox_skcipher_exit(tfm);
+}
+
 static inline int nitrox_skcipher_setkey(struct crypto_skcipher *cipher,
 					 int aes_keylen, const u8 *key,
 					 unsigned int keylen)
@@ -244,7 +302,8 @@ static int alloc_dst_sglist(struct skcipher_request *skreq, int ivsize)
 	return 0;
 }
 
-static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc)
+static int nitrox_skcipher_crypt_handle(struct skcipher_request *skreq,
+					bool enc, u64 handle)
 {
 	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq);
 	struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher);
@@ -269,7 +328,7 @@ static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc)
 	creq->gph.param2 = cpu_to_be16(ivsize);
 	creq->gph.param3 = 0;
 
-	creq->ctx_handle = nctx->u.ctx_handle;
+	creq->ctx_handle = handle;
 	creq->ctrl.s.ctxl = sizeof(struct flexi_crypto_context);
 
 	ret = alloc_src_sglist(skreq, ivsize);
@@ -287,7 +346,16 @@ static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc)
 					 skreq);
 }
 
-static int nitrox_cbc_decrypt(struct skcipher_request *skreq)
+static int nitrox_skcipher_crypt(struct skcipher_request *skreq, bool enc)
+{
+	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq);
+	struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher);
+
+	return nitrox_skcipher_crypt_handle(skreq, enc, nctx->u.ctx_handle);
+}
+
+static int nitrox_cbc_decrypt_handle(struct skcipher_request *skreq,
+				     u64 handle)
 {
 	struct nitrox_kcrypt_request *nkreq = skcipher_request_ctx(skreq);
 	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq);
@@ -297,14 +365,46 @@ static int nitrox_cbc_decrypt(struct skcipher_request *skreq)
 	unsigned int start = skreq->cryptlen - ivsize;
 
 	if (skreq->src != skreq->dst)
-		return nitrox_skcipher_crypt(skreq, false);
+		return nitrox_skcipher_crypt_handle(skreq, false, handle);
 
 	nkreq->iv_out = kmalloc(ivsize, flags);
 	if (!nkreq->iv_out)
 		return -ENOMEM;
 
 	scatterwalk_map_and_copy(nkreq->iv_out, skreq->src, start, ivsize, 0);
-	return nitrox_skcipher_crypt(skreq, false);
+	return nitrox_skcipher_crypt_handle(skreq, false, handle);
+}
+
+static int nitrox_cbc_decrypt(struct skcipher_request *skreq)
+{
+	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(skreq);
+	struct nitrox_crypto_ctx *nctx = crypto_skcipher_ctx(cipher);
+
+	return nitrox_cbc_decrypt_handle(skreq, nctx->u.ctx_handle);
+}
+
+static int nitrox_cts_encrypt(struct skcipher_request *skreq)
+{
+	if (skreq->base.flags & CRYPTO_TFM_REQ_MORE) {
+		struct nitrox_crypto_cts_ctx *ctsctx;
+
+		ctsctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(skreq));
+		return nitrox_skcipher_crypt_handle(skreq, true,
+						    ctsctx->cbc.ctx_handle);
+	}
+
+	return nitrox_skcipher_crypt(skreq, true);
+}
+
+static int nitrox_cts_decrypt(struct skcipher_request *skreq)
+{
+	struct nitrox_crypto_cts_ctx *ctsctx;
+
+	if (!(skreq->base.flags & CRYPTO_TFM_REQ_MORE))
+		return nitrox_skcipher_crypt(skreq, false);
+
+	ctsctx = crypto_skcipher_ctx(crypto_skcipher_reqtfm(skreq));
+	return nitrox_cbc_decrypt_handle(skreq, ctsctx->cbc.ctx_handle);
 }
 
 static int nitrox_aes_encrypt(struct skcipher_request *skreq)
@@ -484,19 +584,21 @@ static struct skcipher_alg nitrox_skciphers[] = { {
 		.cra_driver_name = "n5_cts(cbc(aes))",
 		.cra_priority = PRIO,
 		.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_ALLOCATES_MEMORY,
-		.cra_blocksize = AES_BLOCK_SIZE,
-		.cra_ctxsize = sizeof(struct nitrox_crypto_ctx),
+		.cra_blocksize = 1,
+		.cra_ctxsize = sizeof(struct nitrox_crypto_cts_ctx),
 		.cra_alignmask = 0,
 		.cra_module = THIS_MODULE,
 	},
 	.min_keysize = AES_MIN_KEY_SIZE,
 	.max_keysize = AES_MAX_KEY_SIZE,
 	.ivsize = AES_BLOCK_SIZE,
+	.chunksize = AES_BLOCK_SIZE,
+	.final_chunksize = 2 * AES_BLOCK_SIZE,
 	.setkey = nitrox_aes_setkey,
-	.encrypt = nitrox_aes_encrypt,
-	.decrypt = nitrox_aes_decrypt,
-	.init = nitrox_skcipher_init,
-	.exit = nitrox_skcipher_exit,
+	.encrypt = nitrox_cts_encrypt,
+	.decrypt = nitrox_cts_decrypt,
+	.init = nitrox_cts_init,
+	.exit = nitrox_cts_exit,
 }, {
 	.base = {
 		.cra_name = "cbc(des3_ede)",


* [v3 PATCH 6/31] crypto: ccree - Add support for chaining CTS
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (4 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 5/31] crypto: nitrox " Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 7/31] crypto: skcipher - Add alg reqsize field Herbert Xu
                   ` (25 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands cts cannot do chaining.  That is, it always performs
the cipher-text stealing at the end of a request.  This patch adds
support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.

It also sets the final_chunksize so that data can be withheld by
the caller to enable correct processing at the true end of a request.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ccree/cc_cipher.c |   72 +++++++++++++++++++++++++--------------
 1 file changed, 47 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index beeb283c3c949..83567b60d6908 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -61,9 +61,9 @@ struct cc_cipher_ctx {
 
 static void cc_cipher_complete(struct device *dev, void *cc_req, int err);
 
-static inline enum cc_key_type cc_key_type(struct crypto_tfm *tfm)
+static inline enum cc_key_type cc_key_type(struct crypto_skcipher *tfm)
 {
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
 
 	return ctx_p->key_type;
 }
@@ -105,12 +105,26 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
 	return -EINVAL;
 }
 
-static int validate_data_size(struct cc_cipher_ctx *ctx_p,
+static inline int req_cipher_mode(struct skcipher_request *req)
+{
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
+	int cipher_mode = ctx_p->cipher_mode;
+
+	if (cipher_mode == DRV_CIPHER_CBC_CTS &&
+	    req->base.flags & CRYPTO_TFM_REQ_MORE)
+		cipher_mode = DRV_CIPHER_CBC;
+
+	return cipher_mode;
+}
+
+static int validate_data_size(struct skcipher_request *req,
+			      struct cc_cipher_ctx *ctx_p,
 			      unsigned int size)
 {
 	switch (ctx_p->flow_mode) {
 	case S_DIN_to_AES:
-		switch (ctx_p->cipher_mode) {
+		switch (req_cipher_mode(req)) {
 		case DRV_CIPHER_XTS:
 		case DRV_CIPHER_CBC_CTS:
 			if (size >= AES_BLOCK_SIZE)
@@ -508,17 +522,18 @@ static int cc_out_setup_mode(struct cc_cipher_ctx *ctx_p)
 	}
 }
 
-static void cc_setup_readiv_desc(struct crypto_tfm *tfm,
+static void cc_setup_readiv_desc(struct skcipher_request *req,
 				 struct cipher_req_ctx *req_ctx,
 				 unsigned int ivsize, struct cc_hw_desc desc[],
 				 unsigned int *seq_size)
 {
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
-	int cipher_mode = ctx_p->cipher_mode;
 	int flow_mode = cc_out_setup_mode(ctx_p);
 	int direction = req_ctx->gen_ctx.op_type;
 	dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
+	int cipher_mode = req_cipher_mode(req);
 
 	if (ctx_p->key_type == CC_POLICY_PROTECTED_KEY)
 		return;
@@ -565,15 +580,16 @@ static void cc_setup_readiv_desc(struct crypto_tfm *tfm,
 }
 
 
-static void cc_setup_state_desc(struct crypto_tfm *tfm,
+static void cc_setup_state_desc(struct skcipher_request *req,
 				 struct cipher_req_ctx *req_ctx,
 				 unsigned int ivsize, unsigned int nbytes,
 				 struct cc_hw_desc desc[],
 				 unsigned int *seq_size)
 {
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
-	int cipher_mode = ctx_p->cipher_mode;
+	int cipher_mode = req_cipher_mode(req);
 	int flow_mode = ctx_p->flow_mode;
 	int direction = req_ctx->gen_ctx.op_type;
 	dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
@@ -610,15 +626,16 @@ static void cc_setup_state_desc(struct crypto_tfm *tfm,
 }
 
 
-static void cc_setup_xex_state_desc(struct crypto_tfm *tfm,
+static void cc_setup_xex_state_desc(struct skcipher_request *req,
 				 struct cipher_req_ctx *req_ctx,
 				 unsigned int ivsize, unsigned int nbytes,
 				 struct cc_hw_desc desc[],
 				 unsigned int *seq_size)
 {
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
-	int cipher_mode = ctx_p->cipher_mode;
+	int cipher_mode = req_cipher_mode(req);
 	int flow_mode = ctx_p->flow_mode;
 	int direction = req_ctx->gen_ctx.op_type;
 	dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr;
@@ -628,8 +645,8 @@ static void cc_setup_xex_state_desc(struct crypto_tfm *tfm,
 	unsigned int key_offset = key_len;
 
 	struct cc_crypto_alg *cc_alg =
-		container_of(tfm->__crt_alg, struct cc_crypto_alg,
-			     skcipher_alg.base);
+		container_of(crypto_skcipher_alg(tfm), struct cc_crypto_alg,
+			     skcipher_alg);
 
 	if (cc_alg->data_unit)
 		du_size = cc_alg->data_unit;
@@ -697,14 +714,15 @@ static int cc_out_flow_mode(struct cc_cipher_ctx *ctx_p)
 	}
 }
 
-static void cc_setup_key_desc(struct crypto_tfm *tfm,
+static void cc_setup_key_desc(struct skcipher_request *req,
 			      struct cipher_req_ctx *req_ctx,
 			      unsigned int nbytes, struct cc_hw_desc desc[],
 			      unsigned int *seq_size)
 {
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct cc_cipher_ctx *ctx_p = crypto_skcipher_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
-	int cipher_mode = ctx_p->cipher_mode;
+	int cipher_mode = req_cipher_mode(req);
 	int flow_mode = ctx_p->flow_mode;
 	int direction = req_ctx->gen_ctx.op_type;
 	dma_addr_t key_dma_addr = ctx_p->user.key_dma_addr;
@@ -912,7 +930,7 @@ static int cc_cipher_process(struct skcipher_request *req,
 
 	/* STAT_PHASE_0: Init and sanity checks */
 
-	if (validate_data_size(ctx_p, nbytes)) {
+	if (validate_data_size(req, ctx_p, nbytes)) {
 		dev_dbg(dev, "Unsupported data size %d.\n", nbytes);
 		rc = -EINVAL;
 		goto exit_process;
@@ -969,17 +987,17 @@ static int cc_cipher_process(struct skcipher_request *req,
 	/* STAT_PHASE_2: Create sequence */
 
 	/* Setup state (IV)  */
-	cc_setup_state_desc(tfm, req_ctx, ivsize, nbytes, desc, &seq_len);
+	cc_setup_state_desc(req, req_ctx, ivsize, nbytes, desc, &seq_len);
 	/* Setup MLLI line, if needed */
 	cc_setup_mlli_desc(tfm, req_ctx, dst, src, nbytes, req, desc, &seq_len);
 	/* Setup key */
-	cc_setup_key_desc(tfm, req_ctx, nbytes, desc, &seq_len);
+	cc_setup_key_desc(req, req_ctx, nbytes, desc, &seq_len);
 	/* Setup state (IV and XEX key)  */
-	cc_setup_xex_state_desc(tfm, req_ctx, ivsize, nbytes, desc, &seq_len);
+	cc_setup_xex_state_desc(req, req_ctx, ivsize, nbytes, desc, &seq_len);
 	/* Data processing */
 	cc_setup_flow_desc(tfm, req_ctx, dst, src, nbytes, desc, &seq_len);
 	/* Read next IV */
-	cc_setup_readiv_desc(tfm, req_ctx, ivsize, desc, &seq_len);
+	cc_setup_readiv_desc(req, req_ctx, ivsize, desc, &seq_len);
 
 	/* STAT_PHASE_3: Lock HW and push sequence */
 
@@ -1113,7 +1131,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 	{
 		.name = "cts(cbc(paes))",
 		.driver_name = "cts-cbc-paes-ccree",
-		.blocksize = AES_BLOCK_SIZE,
+		.blocksize = 1,
 		.template_skcipher = {
 			.setkey = cc_cipher_sethkey,
 			.encrypt = cc_cipher_encrypt,
@@ -1121,6 +1139,8 @@ static const struct cc_alg_template skcipher_algs[] = {
 			.min_keysize = CC_HW_KEY_SIZE,
 			.max_keysize = CC_HW_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
+			.chunksize = AES_BLOCK_SIZE,
+			.final_chunksize = 2 * AES_BLOCK_SIZE,
 			},
 		.cipher_mode = DRV_CIPHER_CBC_CTS,
 		.flow_mode = S_DIN_to_AES,
@@ -1238,7 +1258,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 	{
 		.name = "cts(cbc(aes))",
 		.driver_name = "cts-cbc-aes-ccree",
-		.blocksize = AES_BLOCK_SIZE,
+		.blocksize = 1,
 		.template_skcipher = {
 			.setkey = cc_cipher_setkey,
 			.encrypt = cc_cipher_encrypt,
@@ -1246,6 +1266,8 @@ static const struct cc_alg_template skcipher_algs[] = {
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
+			.chunksize = AES_BLOCK_SIZE,
+			.final_chunksize = 2 * AES_BLOCK_SIZE,
 			},
 		.cipher_mode = DRV_CIPHER_CBC_CTS,
 		.flow_mode = S_DIN_to_AES,
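The two additions above (`.chunksize = AES_BLOCK_SIZE`, `.final_chunksize = 2 * AES_BLOCK_SIZE`) encode the CTS constraint that the last two AES blocks must be processed together. The resulting holdback rule can be sketched in user-space C; the helper name and the assumption that intermediate submissions are whole chunks are mine, not kernel API:

```c
#include <assert.h>

#define AES_BLOCK_SIZE	16
#define CHUNKSIZE	AES_BLOCK_SIZE		/* .chunksize */
#define FINAL_CHUNKSIZE	(2 * AES_BLOCK_SIZE)	/* .final_chunksize */

/*
 * Of 'queued' buffered bytes, how many may be submitted in a non-final
 * request?  At least FINAL_CHUNKSIZE bytes must be withheld for the
 * final request, and early submissions stay a multiple of CHUNKSIZE.
 */
static unsigned int splittable_now(unsigned int queued)
{
	if (queued <= FINAL_CHUNKSIZE)
		return 0;
	return (queued - FINAL_CHUNKSIZE) / CHUNKSIZE * CHUNKSIZE;
}
```

With 50 bytes queued, for example, only one full 16-byte chunk may go out early; the remaining 34 bytes wait for the final request.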


* [v3 PATCH 7/31] crypto: skcipher - Add alg reqsize field
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (5 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 6/31] crypto: ccree " Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:18 ` [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero Herbert Xu
                   ` (24 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, the reqsize field exists only on the tfm object, which
means that to set it you must provide an init function for the tfm
even if the size is actually static.

This patch adds a reqsize field to the skcipher alg object, which
allows the request size to be set without an init function.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/skcipher.c         |    4 +++-
 include/crypto/skcipher.h |    2 ++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/crypto/skcipher.c b/crypto/skcipher.c
index 467af525848a1..3bfa06fd25600 100644
--- a/crypto/skcipher.c
+++ b/crypto/skcipher.c
@@ -668,6 +668,8 @@ static int crypto_skcipher_init_tfm(struct crypto_tfm *tfm)
 	struct crypto_skcipher *skcipher = __crypto_skcipher_cast(tfm);
 	struct skcipher_alg *alg = crypto_skcipher_alg(skcipher);
 
+	skcipher->reqsize = alg->reqsize;
+
 	skcipher_set_needkey(skcipher);
 
 	if (alg->exit)
@@ -797,7 +799,7 @@ static int skcipher_prepare_alg(struct skcipher_alg *alg)
 	struct crypto_alg *base = &alg->base;
 
 	if (alg->ivsize > PAGE_SIZE / 8 || alg->chunksize > PAGE_SIZE / 8 ||
-	    alg->walksize > PAGE_SIZE / 8)
+	    alg->walksize > PAGE_SIZE / 8 || alg->reqsize > PAGE_SIZE / 8)
 		return -EINVAL;
 
 	if (!alg->chunksize)
diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index fb90c3e1c26ba..c46ea1c157b29 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -100,6 +100,7 @@ struct crypto_sync_skcipher {
  * @final_chunksize: Number of bytes that must be processed together
  *		     at the end.  If set to -1 then chaining is not
  *		     possible.
+ * @reqsize: Size of the request data structure.
  * @base: Definition of a generic crypto algorithm.
  *
  * All fields except @ivsize are mandatory and must be filled.
@@ -118,6 +119,7 @@ struct skcipher_alg {
 	unsigned int chunksize;
 	unsigned int walksize;
 	int final_chunksize;
+	unsigned int reqsize;
 
 	struct crypto_alg base;
 };
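The relationship between the new alg-level reqsize and per-request allocation can be modelled in plain C. Every name below is a toy stand-in for the kernel structures, not the real API:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins for skcipher_alg / crypto_skcipher / skcipher_request. */
struct toy_alg {
	unsigned int reqsize;	/* static per-algorithm request ctx size */
};

struct toy_tfm {
	const struct toy_alg *alg;
	unsigned int reqsize;
};

struct toy_request {
	struct toy_tfm *tfm;
	/* tfm->reqsize bytes of per-request context follow here */
};

/* Mirrors crypto_skcipher_init_tfm(): the size is copied from the alg,
 * so no per-algorithm init hook is needed just to set it. */
static void toy_init_tfm(struct toy_tfm *tfm, const struct toy_alg *alg)
{
	tfm->alg = alg;
	tfm->reqsize = alg->reqsize;
}

/* Mirrors skcipher_request_alloc(): header plus reqsize trailing bytes. */
static struct toy_request *toy_request_alloc(struct toy_tfm *tfm)
{
	struct toy_request *req = calloc(1, sizeof(*req) + tfm->reqsize);

	if (req)
		req->tfm = tfm;
	return req;
}
```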


* [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (6 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 7/31] crypto: skcipher - Add alg reqsize field Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28 17:10   ` Eric Biggers
  2020-07-28  7:18 ` [v3 PATCH 9/31] crypto: cryptd - Add support for chaining Herbert Xu
                   ` (23 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

This patch initialises skcipher requests to zero.  This allows
algorithms to distinguish the first operation from subsequent
ones.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 include/crypto/skcipher.h |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
index c46ea1c157b29..6db5f83d6e482 100644
--- a/include/crypto/skcipher.h
+++ b/include/crypto/skcipher.h
@@ -129,13 +129,14 @@ struct skcipher_alg {
  * This performs a type-check against the "tfm" argument to make sure
  * all users have the correct skcipher tfm for doing on-stack requests.
  */
-#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
-	char __##name##_desc[sizeof(struct skcipher_request) + \
-			     MAX_SYNC_SKCIPHER_REQSIZE + \
-			     (!(sizeof((struct crypto_sync_skcipher *)1 == \
-				       (typeof(tfm))1))) \
-			    ] CRYPTO_MINALIGN_ATTR; \
-	struct skcipher_request *name = (void *)__##name##_desc
+#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, sync) \
+	struct { \
+		struct skcipher_request req; \
+		char ext[MAX_SYNC_SKCIPHER_REQSIZE]; \
+	} __##name##_desc = { \
+		.req.base.tfm = crypto_skcipher_tfm(&sync->base), \
+	}; \
+	struct skcipher_request *name = &__##name##_desc.req
 
 /**
  * DOC: Symmetric Key Cipher API
@@ -519,8 +520,7 @@ static inline struct skcipher_request *skcipher_request_alloc(
 {
 	struct skcipher_request *req;
 
-	req = kmalloc(sizeof(struct skcipher_request) +
-		      crypto_skcipher_reqsize(tfm), gfp);
+	req = kzalloc(sizeof(*req) + crypto_skcipher_reqsize(tfm), gfp);
 
 	if (likely(req))
 		skcipher_request_set_tfm(req, tfm);
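Both request paths now start from zeroed memory: kzalloc() zeroes the heap-allocated request, and the designated initializer in the reworked SYNC_SKCIPHER_REQUEST_ON_STACK zeroes every member it does not name. A stand-alone C sketch of the property the chaining patches rely on (toy types, not kernel ones; calloc plays the role of kzalloc):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Toy request context, shaped like the chacha_reqctx of later patches. */
struct toy_reqctx {
	unsigned int state[16];
	bool init;
};

/* Heap path: zeroing allocation means 'init' reads false on first use. */
static struct toy_reqctx *toy_reqctx_alloc(void)
{
	return calloc(1, sizeof(struct toy_reqctx));
}

/* Stack path: naming only some members in an initializer zero-fills
 * the rest, which is what the new on-stack macro relies on. */
struct toy_stack_req {
	int tfm;
	struct toy_reqctx ctx;
};
```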


* [v3 PATCH 9/31] crypto: cryptd - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (7 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero Herbert Xu
@ 2020-07-28  7:18 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 10/31] crypto: chacha-generic " Herbert Xu
                   ` (22 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:18 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

This patch makes cryptd pass the CRYPTO_TFM_REQ_MORE flag along to
its child skcipher, and makes it inherit the final chunk size from
the child.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/cryptd.c |   15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/crypto/cryptd.c b/crypto/cryptd.c
index a1bea0f4baa88..510c23b320082 100644
--- a/crypto/cryptd.c
+++ b/crypto/cryptd.c
@@ -261,13 +261,16 @@ static void cryptd_skcipher_encrypt(struct crypto_async_request *base,
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct crypto_sync_skcipher *child = ctx->child;
 	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	unsigned int flags = req->base.flags;
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
 
+	flags &= CRYPTO_TFM_REQ_MORE;
+	flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
+
 	skcipher_request_set_sync_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
+	skcipher_request_set_callback(subreq, flags, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
 				   req->iv);
 
@@ -289,13 +292,16 @@ static void cryptd_skcipher_decrypt(struct crypto_async_request *base,
 	struct cryptd_skcipher_ctx *ctx = crypto_skcipher_ctx(tfm);
 	struct crypto_sync_skcipher *child = ctx->child;
 	SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, child);
+	unsigned int flags = req->base.flags;
 
 	if (unlikely(err == -EINPROGRESS))
 		goto out;
 
+	flags &= CRYPTO_TFM_REQ_MORE;
+	flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
+
 	skcipher_request_set_sync_tfm(subreq, child);
-	skcipher_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_SLEEP,
-				      NULL, NULL);
+	skcipher_request_set_callback(subreq, flags, NULL, NULL);
 	skcipher_request_set_crypt(subreq, req->src, req->dst, req->cryptlen,
 				   req->iv);
 
@@ -400,6 +406,7 @@ static int cryptd_create_skcipher(struct crypto_template *tmpl,
 		(alg->base.cra_flags & CRYPTO_ALG_INTERNAL);
 	inst->alg.ivsize = crypto_skcipher_alg_ivsize(alg);
 	inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
+	inst->alg.final_chunksize = crypto_skcipher_alg_final_chunksize(alg);
 	inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
 	inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
 

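The flag handling above keeps only CRYPTO_TFM_REQ_MORE from the parent request and always adds CRYPTO_TFM_REQ_MAY_SLEEP for the synchronous child. The masking can be shown in isolation; the flag values below are illustrative, not the kernel's actual constants:

```c
#include <assert.h>

/* Illustrative flag values; the kernel's constants differ. */
#define TFM_REQ_MAY_SLEEP	0x01u
#define TFM_REQ_MAY_BACKLOG	0x02u
#define TFM_REQ_MORE		0x04u

/* Same masking as the cryptd hunks: only REQ_MORE is inherited from
 * the parent, MAY_SLEEP is forced on, everything else is dropped. */
static unsigned int child_flags(unsigned int parent_flags)
{
	unsigned int flags = parent_flags;

	flags &= TFM_REQ_MORE;
	flags |= TFM_REQ_MAY_SLEEP;
	return flags;
}
```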

* [v3 PATCH 10/31] crypto: chacha-generic - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (8 preceding siblings ...)
  2020-07-28  7:18 ` [v3 PATCH 9/31] crypto: cryptd - Add support for chaining Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-08-10 15:20   ` Horia Geantă
  2020-07-28  7:19 ` [v3 PATCH 11/31] crypto: arm/chacha " Herbert Xu
                   ` (21 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, chacha cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/chacha_generic.c          |   43 +++++++++++++++++++++++++--------------
 include/crypto/internal/chacha.h |    8 ++++++-
 2 files changed, 35 insertions(+), 16 deletions(-)

diff --git a/crypto/chacha_generic.c b/crypto/chacha_generic.c
index 8beea79ab1178..f74ac54d7aa5b 100644
--- a/crypto/chacha_generic.c
+++ b/crypto/chacha_generic.c
@@ -6,22 +6,21 @@
  * Copyright (C) 2018 Google LLC
  */
 
-#include <asm/unaligned.h>
-#include <crypto/algapi.h>
 #include <crypto/internal/chacha.h>
-#include <crypto/internal/skcipher.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
-static int chacha_stream_xor(struct skcipher_request *req,
-			     const struct chacha_ctx *ctx, const u8 *iv)
+static int chacha_stream_xor(struct skcipher_request *req, int nrounds)
 {
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct skcipher_walk walk;
-	u32 state[16];
+	u32 *state = rctx->state;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
-	chacha_init_generic(state, ctx->key, iv);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -30,7 +29,7 @@ static int chacha_stream_xor(struct skcipher_request *req,
 			nbytes = round_down(nbytes, CHACHA_BLOCK_SIZE);
 
 		chacha_crypt_generic(state, walk.dst.virt.addr,
-				     walk.src.virt.addr, nbytes, ctx->nrounds);
+				     walk.src.virt.addr, nbytes, nrounds);
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 
@@ -40,30 +39,41 @@ static int chacha_stream_xor(struct skcipher_request *req,
 static int crypto_chacha_crypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return chacha_stream_xor(req, ctx, req->iv);
+	if (!rctx->init)
+		chacha_init_generic(rctx->state, ctx->key, req->iv);
+
+	return chacha_stream_xor(req, ctx->nrounds);
 }
 
 static int crypto_xchacha_crypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct chacha_ctx subctx;
-	u32 state[16];
+	int nrounds = ctx->nrounds;
+	u32 *state = rctx->state;
 	u8 real_iv[16];
+	u32 key[8];
+
+	if (rctx->init)
+		goto skip_init;
 
 	/* Compute the subkey given the original key and first 128 nonce bits */
 	chacha_init_generic(state, ctx->key, req->iv);
-	hchacha_block_generic(state, subctx.key, ctx->nrounds);
-	subctx.nrounds = ctx->nrounds;
+	hchacha_block_generic(state, key, nrounds);
 
 	/* Build the real IV */
 	memcpy(&real_iv[0], req->iv + 24, 8); /* stream position */
 	memcpy(&real_iv[8], req->iv + 16, 8); /* remaining 64 nonce bits */
 
+	chacha_init_generic(state, key, real_iv);
+
+skip_init:
 	/* Generate the stream and XOR it with the data */
-	return chacha_stream_xor(req, &subctx, real_iv);
+	return chacha_stream_xor(req, nrounds);
 }
 
 static struct skcipher_alg algs[] = {
@@ -79,6 +89,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= crypto_chacha_crypt,
 		.decrypt		= crypto_chacha_crypt,
@@ -94,6 +105,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= crypto_xchacha_crypt,
 		.decrypt		= crypto_xchacha_crypt,
@@ -109,6 +121,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha12_setkey,
 		.encrypt		= crypto_xchacha_crypt,
 		.decrypt		= crypto_xchacha_crypt,
diff --git a/include/crypto/internal/chacha.h b/include/crypto/internal/chacha.h
index b085dc1ac1516..149ff90fa4afd 100644
--- a/include/crypto/internal/chacha.h
+++ b/include/crypto/internal/chacha.h
@@ -3,15 +3,21 @@
 #ifndef _CRYPTO_INTERNAL_CHACHA_H
 #define _CRYPTO_INTERNAL_CHACHA_H
 
+#include <asm/unaligned.h>
 #include <crypto/chacha.h>
 #include <crypto/internal/skcipher.h>
-#include <linux/crypto.h>
+#include <linux/kernel.h>
 
 struct chacha_ctx {
 	u32 key[8];
 	int nrounds;
 };
 
+struct chacha_reqctx {
+	u32 state[16];
+	bool init;
+};
+
 static inline int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
 				unsigned int keysize, int nrounds)
 {
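The core idea of the change — keep the cipher state in the request context, initialise it only on the first request, and preserve it afterwards whenever CRYPTO_TFM_REQ_MORE is set — can be illustrated with a deliberately trivial byte-stream cipher. This is not chacha; only the state-carrying pattern is the point:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Toy stream "cipher": keystream byte = key ^ counter. */
struct toy_reqctx {
	uint64_t ctr;
	bool init;
};

static void toy_crypt(struct toy_reqctx *rctx, uint8_t key, uint8_t *dst,
		      const uint8_t *src, size_t len, bool more)
{
	size_t i;

	if (!rctx->init)	/* first request: set up the state */
		rctx->ctr = 0;

	for (i = 0; i < len; i++)
		dst[i] = src[i] ^ key ^ (uint8_t)rctx->ctr++;

	/* Keep the state only when the caller promises more data,
	 * mirroring rctx->init = flags & CRYPTO_TFM_REQ_MORE. */
	rctx->init = more;
}
```

Encrypting a message in one request, or in several chained requests with all but the last flagged as "more", then produces identical output.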


* [v3 PATCH 11/31] crypto: arm/chacha - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (9 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 10/31] crypto: chacha-generic " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 12/31] crypto: arm64/chacha " Herbert Xu
                   ` (20 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, chacha cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/arm/crypto/chacha-glue.c |   48 ++++++++++++++++++++++++++++--------------
 1 file changed, 32 insertions(+), 16 deletions(-)

diff --git a/arch/arm/crypto/chacha-glue.c b/arch/arm/crypto/chacha-glue.c
index 59da6c0b63b62..7f753fc54137a 100644
--- a/arch/arm/crypto/chacha-glue.c
+++ b/arch/arm/crypto/chacha-glue.c
@@ -7,10 +7,8 @@
  * Copyright (C) 2015 Martin Willi
  */
 
-#include <crypto/algapi.h>
 #include <crypto/internal/chacha.h>
 #include <crypto/internal/simd.h>
-#include <crypto/internal/skcipher.h>
 #include <linux/jump_label.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -106,16 +104,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
 EXPORT_SYMBOL(chacha_crypt_arch);
 
 static int chacha_stream_xor(struct skcipher_request *req,
-			     const struct chacha_ctx *ctx, const u8 *iv,
-			     bool neon)
+			     int nrounds, bool neon)
 {
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct skcipher_walk walk;
-	u32 state[16];
+	u32 *state = rctx->state;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
-	chacha_init_generic(state, ctx->key, iv);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -125,12 +123,12 @@ static int chacha_stream_xor(struct skcipher_request *req,
 
 		if (!IS_ENABLED(CONFIG_KERNEL_MODE_NEON) || !neon) {
 			chacha_doarm(walk.dst.virt.addr, walk.src.virt.addr,
-				     nbytes, state, ctx->nrounds);
+				     nbytes, state, nrounds);
 			state[12] += DIV_ROUND_UP(nbytes, CHACHA_BLOCK_SIZE);
 		} else {
 			kernel_neon_begin();
 			chacha_doneon(state, walk.dst.virt.addr,
-				      walk.src.virt.addr, nbytes, ctx->nrounds);
+				      walk.src.virt.addr, nbytes, nrounds);
 			kernel_neon_end();
 		}
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
@@ -142,9 +140,13 @@ static int chacha_stream_xor(struct skcipher_request *req,
 static int do_chacha(struct skcipher_request *req, bool neon)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return chacha_stream_xor(req, ctx, req->iv, neon);
+	if (!rctx->init)
+		chacha_init_generic(rctx->state, ctx->key, req->iv);
+
+	return chacha_stream_xor(req, ctx->nrounds, neon);
 }
 
 static int chacha_arm(struct skcipher_request *req)
@@ -160,25 +162,33 @@ static int chacha_neon(struct skcipher_request *req)
 static int do_xchacha(struct skcipher_request *req, bool neon)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct chacha_ctx subctx;
-	u32 state[16];
+	int nrounds = ctx->nrounds;
+	u32 *state = rctx->state;
 	u8 real_iv[16];
+	u32 key[8];
+
+	if (rctx->init)
+		goto skip_init;
 
 	chacha_init_generic(state, ctx->key, req->iv);
 
 	if (!IS_ENABLED(CONFIG_KERNEL_MODE_NEON) || !neon) {
-		hchacha_block_arm(state, subctx.key, ctx->nrounds);
+		hchacha_block_arm(state, key, nrounds);
 	} else {
 		kernel_neon_begin();
-		hchacha_block_neon(state, subctx.key, ctx->nrounds);
+		hchacha_block_neon(state, key, nrounds);
 		kernel_neon_end();
 	}
-	subctx.nrounds = ctx->nrounds;
 
 	memcpy(&real_iv[0], req->iv + 24, 8);
 	memcpy(&real_iv[8], req->iv + 16, 8);
-	return chacha_stream_xor(req, &subctx, real_iv, neon);
+
+	chacha_init_generic(state, key, real_iv);
+
+skip_init:
+	return chacha_stream_xor(req, nrounds, neon);
 }
 
 static int xchacha_arm(struct skcipher_request *req)
@@ -204,6 +214,7 @@ static struct skcipher_alg arm_algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= chacha_arm,
 		.decrypt		= chacha_arm,
@@ -219,6 +230,7 @@ static struct skcipher_alg arm_algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= xchacha_arm,
 		.decrypt		= xchacha_arm,
@@ -234,6 +246,7 @@ static struct skcipher_alg arm_algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha12_setkey,
 		.encrypt		= xchacha_arm,
 		.decrypt		= xchacha_arm,
@@ -254,6 +267,7 @@ static struct skcipher_alg neon_algs[] = {
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 4 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= chacha_neon,
 		.decrypt		= chacha_neon,
@@ -270,6 +284,7 @@ static struct skcipher_alg neon_algs[] = {
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 4 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= xchacha_neon,
 		.decrypt		= xchacha_neon,
@@ -286,6 +301,7 @@ static struct skcipher_alg neon_algs[] = {
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 4 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha12_setkey,
 		.encrypt		= xchacha_neon,
 		.decrypt		= xchacha_neon,


* [v3 PATCH 12/31] crypto: arm64/chacha - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (10 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 11/31] crypto: arm/chacha " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-29  6:16   ` Ard Biesheuvel
  2020-07-28  7:19 ` [v3 PATCH 13/31] crypto: mips/chacha " Herbert Xu
                   ` (19 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, chacha cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/arm64/crypto/chacha-neon-glue.c |   43 ++++++++++++++++++++++-------------
 1 file changed, 28 insertions(+), 15 deletions(-)

diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
index af2bbca38e70f..d82c574ddcc00 100644
--- a/arch/arm64/crypto/chacha-neon-glue.c
+++ b/arch/arm64/crypto/chacha-neon-glue.c
@@ -19,10 +19,8 @@
  * (at your option) any later version.
  */
 
-#include <crypto/algapi.h>
 #include <crypto/internal/chacha.h>
 #include <crypto/internal/simd.h>
-#include <crypto/internal/skcipher.h>
 #include <linux/jump_label.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
@@ -101,16 +99,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
 
-static int chacha_neon_stream_xor(struct skcipher_request *req,
-				  const struct chacha_ctx *ctx, const u8 *iv)
+static int chacha_neon_stream_xor(struct skcipher_request *req, int nrounds)
 {
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct skcipher_walk walk;
-	u32 state[16];
+	u32 *state = rctx->state;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
-	chacha_init_generic(state, ctx->key, iv);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -122,11 +120,11 @@ static int chacha_neon_stream_xor(struct skcipher_request *req,
 		    !crypto_simd_usable()) {
 			chacha_crypt_generic(state, walk.dst.virt.addr,
 					     walk.src.virt.addr, nbytes,
-					     ctx->nrounds);
+					     nrounds);
 		} else {
 			kernel_neon_begin();
 			chacha_doneon(state, walk.dst.virt.addr,
-				      walk.src.virt.addr, nbytes, ctx->nrounds);
+				      walk.src.virt.addr, nbytes, nrounds);
 			kernel_neon_end();
 		}
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
@@ -138,26 +136,38 @@ static int chacha_neon_stream_xor(struct skcipher_request *req,
 static int chacha_neon(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return chacha_neon_stream_xor(req, ctx, req->iv);
+	if (!rctx->init)
+		chacha_init_generic(rctx->state, ctx->key, req->iv);
+
+	return chacha_neon_stream_xor(req, ctx->nrounds);
 }
 
 static int xchacha_neon(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct chacha_ctx subctx;
-	u32 state[16];
+	int nrounds = ctx->nrounds;
+	u32 *state = rctx->state;
 	u8 real_iv[16];
+	u32 key[8];
+
+	if (rctx->init)
+		goto skip_init;
 
 	chacha_init_generic(state, ctx->key, req->iv);
-	hchacha_block_arch(state, subctx.key, ctx->nrounds);
-	subctx.nrounds = ctx->nrounds;
+	hchacha_block_arch(state, key, nrounds);
 
 	memcpy(&real_iv[0], req->iv + 24, 8);
 	memcpy(&real_iv[8], req->iv + 16, 8);
-	return chacha_neon_stream_xor(req, &subctx, real_iv);
+
+	chacha_init_generic(state, key, real_iv);
+
+skip_init:
+	return chacha_neon_stream_xor(req, nrounds);
 }
 
 static struct skcipher_alg algs[] = {
@@ -174,6 +184,7 @@ static struct skcipher_alg algs[] = {
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 5 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= chacha_neon,
 		.decrypt		= chacha_neon,
@@ -190,6 +201,7 @@ static struct skcipher_alg algs[] = {
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 5 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= xchacha_neon,
 		.decrypt		= xchacha_neon,
@@ -206,6 +218,7 @@ static struct skcipher_alg algs[] = {
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
 		.walksize		= 5 * CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha12_setkey,
 		.encrypt		= xchacha_neon,
 		.decrypt		= xchacha_neon,


* [v3 PATCH 13/31] crypto: mips/chacha - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (11 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 12/31] crypto: arm64/chacha " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 14/31] crypto: x86/chacha " Herbert Xu
                   ` (18 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, chacha cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/mips/crypto/chacha-glue.c |   41 +++++++++++++++++++++++++++--------------
 1 file changed, 27 insertions(+), 14 deletions(-)

diff --git a/arch/mips/crypto/chacha-glue.c b/arch/mips/crypto/chacha-glue.c
index d1fd23e6ef844..658412bfdea29 100644
--- a/arch/mips/crypto/chacha-glue.c
+++ b/arch/mips/crypto/chacha-glue.c
@@ -7,9 +7,7 @@
  */
 
 #include <asm/byteorder.h>
-#include <crypto/algapi.h>
 #include <crypto/internal/chacha.h>
-#include <crypto/internal/skcipher.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 
@@ -26,16 +24,16 @@ void chacha_init_arch(u32 *state, const u32 *key, const u8 *iv)
 }
 EXPORT_SYMBOL(chacha_init_arch);
 
-static int chacha_mips_stream_xor(struct skcipher_request *req,
-				  const struct chacha_ctx *ctx, const u8 *iv)
+static int chacha_mips_stream_xor(struct skcipher_request *req, int nrounds)
 {
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct skcipher_walk walk;
-	u32 state[16];
+	u32 *state = rctx->state;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
-	chacha_init_generic(state, ctx->key, iv);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -44,7 +42,7 @@ static int chacha_mips_stream_xor(struct skcipher_request *req,
 			nbytes = round_down(nbytes, walk.stride);
 
 		chacha_crypt(state, walk.dst.virt.addr, walk.src.virt.addr,
-			     nbytes, ctx->nrounds);
+			     nbytes, nrounds);
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 
@@ -54,27 +52,39 @@ static int chacha_mips_stream_xor(struct skcipher_request *req,
 static int chacha_mips(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return chacha_mips_stream_xor(req, ctx, req->iv);
+	if (!rctx->init)
+		chacha_init_generic(rctx->state, ctx->key, req->iv);
+
+	return chacha_mips_stream_xor(req, ctx->nrounds);
 }
 
 static int xchacha_mips(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct chacha_ctx subctx;
-	u32 state[16];
+	int nrounds = ctx->nrounds;
+	u32 *state = rctx->state;
 	u8 real_iv[16];
+	u32 key[8];
+
+	if (rctx->init)
+		goto skip_init;
 
 	chacha_init_generic(state, ctx->key, req->iv);
 
-	hchacha_block(state, subctx.key, ctx->nrounds);
-	subctx.nrounds = ctx->nrounds;
+	hchacha_block(state, key, nrounds);
 
 	memcpy(&real_iv[0], req->iv + 24, 8);
 	memcpy(&real_iv[8], req->iv + 16, 8);
-	return chacha_mips_stream_xor(req, &subctx, real_iv);
+
+	chacha_init_generic(rctx->state, key, real_iv);
+
+skip_init:
+	return chacha_mips_stream_xor(req, nrounds);
 }
 
 static struct skcipher_alg algs[] = {
@@ -90,6 +100,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= chacha_mips,
 		.decrypt		= chacha_mips,
@@ -105,6 +116,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha20_setkey,
 		.encrypt		= xchacha_mips,
 		.decrypt		= xchacha_mips,
@@ -120,6 +132,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= sizeof(struct chacha_reqctx),
 		.setkey			= chacha12_setkey,
 		.encrypt		= xchacha_mips,
 		.decrypt		= xchacha_mips,


* [v3 PATCH 14/31] crypto: x86/chacha - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (12 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 13/31] crypto: mips/chacha " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 15/31] crypto: inside-secure - Set final_chunksize on chacha Herbert Xu
                   ` (17 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, chacha cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/x86/crypto/chacha_glue.c |   55 +++++++++++++++++++++++++++++-------------
 1 file changed, 39 insertions(+), 16 deletions(-)

diff --git a/arch/x86/crypto/chacha_glue.c b/arch/x86/crypto/chacha_glue.c
index e67a59130025e..96cbdcbfe4f8f 100644
--- a/arch/x86/crypto/chacha_glue.c
+++ b/arch/x86/crypto/chacha_glue.c
@@ -6,14 +6,16 @@
  * Copyright (C) 2015 Martin Willi
  */
 
-#include <crypto/algapi.h>
 #include <crypto/internal/chacha.h>
 #include <crypto/internal/simd.h>
-#include <crypto/internal/skcipher.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <asm/simd.h>
 
+#define CHACHA_STATE_ALIGN 16
+#define CHACHA_REQSIZE sizeof(struct chacha_reqctx) + \
+		       ((CHACHA_STATE_ALIGN - 1) & ~(CRYPTO_MINALIGN - 1))
+
 asmlinkage void chacha_block_xor_ssse3(u32 *state, u8 *dst, const u8 *src,
 				       unsigned int len, int nrounds);
 asmlinkage void chacha_4block_xor_ssse3(u32 *state, u8 *dst, const u8 *src,
@@ -38,6 +40,12 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_simd);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_avx2);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(chacha_use_avx512vl);
 
+static inline struct chacha_reqctx *chacha_request_ctx(
+	struct skcipher_request *req)
+{
+	return PTR_ALIGN(skcipher_request_ctx(req), CHACHA_STATE_ALIGN);
+}
+
 static unsigned int chacha_advance(unsigned int len, unsigned int maxblocks)
 {
 	len = min(len, maxblocks * CHACHA_BLOCK_SIZE);
@@ -159,16 +167,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
 }
 EXPORT_SYMBOL(chacha_crypt_arch);
 
-static int chacha_simd_stream_xor(struct skcipher_request *req,
-				  const struct chacha_ctx *ctx, const u8 *iv)
+static int chacha_simd_stream_xor(struct skcipher_request *req, int nrounds)
 {
-	u32 state[CHACHA_STATE_WORDS] __aligned(8);
+	struct chacha_reqctx *rctx = chacha_request_ctx(req);
 	struct skcipher_walk walk;
+	u32 *state = rctx->state;
 	int err;
 
-	err = skcipher_walk_virt(&walk, req, false);
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
-	chacha_init_generic(state, ctx->key, iv);
+	err = skcipher_walk_virt(&walk, req, false);
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -180,12 +188,12 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 		    !crypto_simd_usable()) {
 			chacha_crypt_generic(state, walk.dst.virt.addr,
 					     walk.src.virt.addr, nbytes,
-					     ctx->nrounds);
+					     nrounds);
 		} else {
 			kernel_fpu_begin();
 			chacha_dosimd(state, walk.dst.virt.addr,
 				      walk.src.virt.addr, nbytes,
-				      ctx->nrounds);
+				      nrounds);
 			kernel_fpu_end();
 		}
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
@@ -197,33 +205,45 @@ static int chacha_simd_stream_xor(struct skcipher_request *req,
 static int chacha_simd(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = chacha_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
 
-	return chacha_simd_stream_xor(req, ctx, req->iv);
+	if (!rctx->init)
+		chacha_init_generic(rctx->state, ctx->key, req->iv);
+
+	return chacha_simd_stream_xor(req, ctx->nrounds);
 }
 
 static int xchacha_simd(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
+	struct chacha_reqctx *rctx = chacha_request_ctx(req);
 	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
-	u32 state[CHACHA_STATE_WORDS] __aligned(8);
-	struct chacha_ctx subctx;
+	int nrounds = ctx->nrounds;
+	u32 *state = rctx->state;
 	u8 real_iv[16];
+	u32 key[8];
+
+	if (rctx->init)
+		goto skip_init;
 
 	chacha_init_generic(state, ctx->key, req->iv);
 
 	if (req->cryptlen > CHACHA_BLOCK_SIZE && crypto_simd_usable()) {
 		kernel_fpu_begin();
-		hchacha_block_ssse3(state, subctx.key, ctx->nrounds);
+		hchacha_block_ssse3(state, key, nrounds);
 		kernel_fpu_end();
 	} else {
-		hchacha_block_generic(state, subctx.key, ctx->nrounds);
+		hchacha_block_generic(state, key, nrounds);
 	}
-	subctx.nrounds = ctx->nrounds;
 
 	memcpy(&real_iv[0], req->iv + 24, 8);
 	memcpy(&real_iv[8], req->iv + 16, 8);
-	return chacha_simd_stream_xor(req, &subctx, real_iv);
+
+	chacha_init_generic(state, key, real_iv);
+
+skip_init:
+	return chacha_simd_stream_xor(req, nrounds);
 }
 
 static struct skcipher_alg algs[] = {
@@ -239,6 +259,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= CHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= CHACHA_REQSIZE,
 		.setkey			= chacha20_setkey,
 		.encrypt		= chacha_simd,
 		.decrypt		= chacha_simd,
@@ -254,6 +275,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= CHACHA_REQSIZE,
 		.setkey			= chacha20_setkey,
 		.encrypt		= xchacha_simd,
 		.decrypt		= xchacha_simd,
@@ -269,6 +291,7 @@ static struct skcipher_alg algs[] = {
 		.max_keysize		= CHACHA_KEY_SIZE,
 		.ivsize			= XCHACHA_IV_SIZE,
 		.chunksize		= CHACHA_BLOCK_SIZE,
+		.reqsize		= CHACHA_REQSIZE,
 		.setkey			= chacha12_setkey,
 		.encrypt		= xchacha_simd,
 		.decrypt		= xchacha_simd,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 15/31] crypto: inside-secure - Set final_chunksize on chacha
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (13 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 14/31] crypto: x86/chacha " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 16/31] crypto: caam/qi2 " Herbert Xu
                   ` (16 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The chacha implementation in inside-secure does not support partial
operations, so this patch sets its final_chunksize to -1 to mark
this fact.

This patch also sets the chunksize to the chacha block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/inside-secure/safexcel_cipher.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
index 1ac3253b7903a..ef04a394ff49d 100644
--- a/drivers/crypto/inside-secure/safexcel_cipher.c
+++ b/drivers/crypto/inside-secure/safexcel_cipher.c
@@ -2859,6 +2859,8 @@ struct safexcel_alg_template safexcel_alg_chacha20 = {
 		.min_keysize = CHACHA_KEY_SIZE,
 		.max_keysize = CHACHA_KEY_SIZE,
 		.ivsize = CHACHA_IV_SIZE,
+		.chunksize = CHACHA_BLOCK_SIZE,
+		.final_chunksize = -1,
 		.base = {
 			.cra_name = "chacha20",
 			.cra_driver_name = "safexcel-chacha20",

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 16/31] crypto: caam/qi2 - Set final_chunksize on chacha
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (14 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 15/31] crypto: inside-secure - Set final_chunksize on chacha Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-08-10 15:24   ` Horia Geantă
  2020-07-28  7:19 ` [v3 PATCH 17/31] crypto: ctr - Allow rfc3686 to be chained Herbert Xu
                   ` (15 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The chacha implementation in caam/qi2 does not support partial
operations, so this patch sets its final_chunksize to -1 to mark
this fact.

This patch also sets the chunksize to the chacha block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/caam/caamalg_qi2.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 1b0c286759065..6294c104bf7a9 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -1689,6 +1689,8 @@ static struct caam_skcipher_alg driver_algs[] = {
 			.min_keysize = CHACHA_KEY_SIZE,
 			.max_keysize = CHACHA_KEY_SIZE,
 			.ivsize = CHACHA_IV_SIZE,
+			.chunksize = CHACHA_BLOCK_SIZE,
+			.final_chunksize = -1,
 		},
 		.caam.class1_alg_type = OP_ALG_ALGSEL_CHACHA20,
 	},

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 17/31] crypto: ctr - Allow rfc3686 to be chained
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (15 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 16/31] crypto: caam/qi2 " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 18/31] crypto: crypto4xx - Remove rfc3686 implementation Herbert Xu
                   ` (14 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands, rfc3686 cannot do chaining; that is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/ctr.c |   10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/crypto/ctr.c b/crypto/ctr.c
index c39fcffba27f5..eccfab07f2fbb 100644
--- a/crypto/ctr.c
+++ b/crypto/ctr.c
@@ -5,7 +5,6 @@
  * (C) Copyright IBM Corp. 2007 - Joy Latten <latten@us.ibm.com>
  */
 
-#include <crypto/algapi.h>
 #include <crypto/ctr.h>
 #include <crypto/internal/skcipher.h>
 #include <linux/err.h>
@@ -21,7 +20,8 @@ struct crypto_rfc3686_ctx {
 
 struct crypto_rfc3686_req_ctx {
 	u8 iv[CTR_RFC3686_BLOCK_SIZE];
-	struct skcipher_request subreq CRYPTO_MINALIGN_ATTR;
+	bool init;
+	struct skcipher_request subreq;
 };
 
 static void crypto_ctr_crypt_final(struct skcipher_walk *walk,
@@ -197,6 +197,9 @@ static int crypto_rfc3686_crypt(struct skcipher_request *req)
 	struct skcipher_request *subreq = &rctx->subreq;
 	u8 *iv = rctx->iv;
 
+	if (rctx->init)
+		goto skip_init;
+
 	/* set up counter block */
 	memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE);
 	memcpy(iv + CTR_RFC3686_NONCE_SIZE, req->iv, CTR_RFC3686_IV_SIZE);
@@ -205,6 +208,9 @@ static int crypto_rfc3686_crypt(struct skcipher_request *req)
 	*(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) =
 		cpu_to_be32(1);
 
+skip_init:
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
+
 	skcipher_request_set_tfm(subreq, child);
 	skcipher_request_set_callback(subreq, req->base.flags,
 				      req->base.complete, req->base.data);

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 18/31] crypto: crypto4xx - Remove rfc3686 implementation
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (16 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 17/31] crypto: ctr - Allow rfc3686 to be chained Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations Herbert Xu
                   ` (13 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in crypto4xx is pretty much the same
as the generic rfc3686 wrapper.  So it can simply be removed to
reduce complexity.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/amcc/crypto4xx_alg.c  |   47 -----------------------------------
 drivers/crypto/amcc/crypto4xx_core.c |   20 --------------
 drivers/crypto/amcc/crypto4xx_core.h |    4 --
 3 files changed, 71 deletions(-)

diff --git a/drivers/crypto/amcc/crypto4xx_alg.c b/drivers/crypto/amcc/crypto4xx_alg.c
index f7fc0c4641254..a7c17cdb1deb2 100644
--- a/drivers/crypto/amcc/crypto4xx_alg.c
+++ b/drivers/crypto/amcc/crypto4xx_alg.c
@@ -202,53 +202,6 @@ int crypto4xx_setkey_aes_ofb(struct crypto_skcipher *cipher,
 				    CRYPTO_FEEDBACK_MODE_64BIT_OFB);
 }
 
-int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher,
-			     const u8 *key, unsigned int keylen)
-{
-	struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher);
-	int rc;
-
-	rc = crypto4xx_setkey_aes(cipher, key, keylen - CTR_RFC3686_NONCE_SIZE,
-		CRYPTO_MODE_CTR, CRYPTO_FEEDBACK_MODE_NO_FB);
-	if (rc)
-		return rc;
-
-	ctx->iv_nonce = cpu_to_le32p((u32 *)&key[keylen -
-						 CTR_RFC3686_NONCE_SIZE]);
-
-	return 0;
-}
-
-int crypto4xx_rfc3686_encrypt(struct skcipher_request *req)
-{
-	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
-	struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher);
-	__le32 iv[AES_IV_SIZE / 4] = {
-		ctx->iv_nonce,
-		cpu_to_le32p((u32 *) req->iv),
-		cpu_to_le32p((u32 *) (req->iv + 4)),
-		cpu_to_le32(1) };
-
-	return crypto4xx_build_pd(&req->base, ctx, req->src, req->dst,
-				  req->cryptlen, iv, AES_IV_SIZE,
-				  ctx->sa_out, ctx->sa_len, 0, NULL);
-}
-
-int crypto4xx_rfc3686_decrypt(struct skcipher_request *req)
-{
-	struct crypto_skcipher *cipher = crypto_skcipher_reqtfm(req);
-	struct crypto4xx_ctx *ctx = crypto_skcipher_ctx(cipher);
-	__le32 iv[AES_IV_SIZE / 4] = {
-		ctx->iv_nonce,
-		cpu_to_le32p((u32 *) req->iv),
-		cpu_to_le32p((u32 *) (req->iv + 4)),
-		cpu_to_le32(1) };
-
-	return crypto4xx_build_pd(&req->base, ctx, req->src, req->dst,
-				  req->cryptlen, iv, AES_IV_SIZE,
-				  ctx->sa_out, ctx->sa_len, 0, NULL);
-}
-
 static int
 crypto4xx_ctr_crypt(struct skcipher_request *req, bool encrypt)
 {
diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
index 981de43ea5e24..2054e216440b5 100644
--- a/drivers/crypto/amcc/crypto4xx_core.c
+++ b/drivers/crypto/amcc/crypto4xx_core.c
@@ -1252,26 +1252,6 @@ static struct crypto4xx_alg_common crypto4xx_alg[] = {
 		.init = crypto4xx_sk_init,
 		.exit = crypto4xx_sk_exit,
 	} },
-	{ .type = CRYPTO_ALG_TYPE_SKCIPHER, .u.cipher = {
-		.base = {
-			.cra_name = "rfc3686(ctr(aes))",
-			.cra_driver_name = "rfc3686-ctr-aes-ppc4xx",
-			.cra_priority = CRYPTO4XX_CRYPTO_PRIORITY,
-			.cra_flags = CRYPTO_ALG_ASYNC |
-				CRYPTO_ALG_KERN_DRIVER_ONLY,
-			.cra_blocksize = 1,
-			.cra_ctxsize = sizeof(struct crypto4xx_ctx),
-			.cra_module = THIS_MODULE,
-		},
-		.min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-		.max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-		.ivsize	= CTR_RFC3686_IV_SIZE,
-		.setkey = crypto4xx_setkey_rfc3686,
-		.encrypt = crypto4xx_rfc3686_encrypt,
-		.decrypt = crypto4xx_rfc3686_decrypt,
-		.init = crypto4xx_sk_init,
-		.exit = crypto4xx_sk_exit,
-	} },
 	{ .type = CRYPTO_ALG_TYPE_SKCIPHER, .u.cipher = {
 		.base = {
 			.cra_name = "ecb(aes)",
diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h
index 6b68413591905..97f625fc5e8b1 100644
--- a/drivers/crypto/amcc/crypto4xx_core.h
+++ b/drivers/crypto/amcc/crypto4xx_core.h
@@ -169,8 +169,6 @@ int crypto4xx_setkey_aes_ecb(struct crypto_skcipher *cipher,
 			     const u8 *key, unsigned int keylen);
 int crypto4xx_setkey_aes_ofb(struct crypto_skcipher *cipher,
 			     const u8 *key, unsigned int keylen);
-int crypto4xx_setkey_rfc3686(struct crypto_skcipher *cipher,
-			     const u8 *key, unsigned int keylen);
 int crypto4xx_encrypt_ctr(struct skcipher_request *req);
 int crypto4xx_decrypt_ctr(struct skcipher_request *req);
 int crypto4xx_encrypt_iv_stream(struct skcipher_request *req);
@@ -179,8 +177,6 @@ int crypto4xx_encrypt_iv_block(struct skcipher_request *req);
 int crypto4xx_decrypt_iv_block(struct skcipher_request *req);
 int crypto4xx_encrypt_noiv_block(struct skcipher_request *req);
 int crypto4xx_decrypt_noiv_block(struct skcipher_request *req);
-int crypto4xx_rfc3686_encrypt(struct skcipher_request *req);
-int crypto4xx_rfc3686_decrypt(struct skcipher_request *req);
 int crypto4xx_sha1_alg_init(struct crypto_tfm *tfm);
 int crypto4xx_hash_digest(struct ahash_request *req);
 int crypto4xx_hash_final(struct ahash_request *req);

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (17 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 18/31] crypto: crypto4xx - Remove rfc3686 implementation Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-08-10 16:47   ` Horia Geantă
  2020-07-28  7:19 ` [v3 PATCH 20/31] crypto: nitrox - Set final_chunksize on rfc3686 Herbert Xu
                   ` (12 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementations in caam are pretty much the same
as the generic rfc3686 wrapper.  So they can simply be removed
to reduce complexity.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/caam/caamalg.c      |   54 +------------------------------------
 drivers/crypto/caam/caamalg_desc.c |   46 +------------------------------
 drivers/crypto/caam/caamalg_desc.h |    6 +---
 drivers/crypto/caam/caamalg_qi.c   |   52 +----------------------------------
 drivers/crypto/caam/caamalg_qi2.c  |   54 +------------------------------------
 5 files changed, 10 insertions(+), 202 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index e94e7f27f1d0d..4a787f74ebf9c 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -725,13 +725,9 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 			   unsigned int keylen, const u32 ctx1_iv_off)
 {
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct caam_skcipher_alg *alg =
-		container_of(crypto_skcipher_alg(skcipher), typeof(*alg),
-			     skcipher);
 	struct device *jrdev = ctx->jrdev;
 	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
 	u32 *desc;
-	const bool is_rfc3686 = alg->caam.rfc3686;
 
 	print_hex_dump_debug("key in @"__stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1);
@@ -742,15 +738,13 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 
 	/* skcipher_encrypt shared descriptor */
 	desc = ctx->sh_desc_enc;
-	cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, is_rfc3686,
-				   ctx1_iv_off);
+	cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, ctx1_iv_off);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_enc_dma,
 				   desc_bytes(desc), ctx->dir);
 
 	/* skcipher_decrypt shared descriptor */
 	desc = ctx->sh_desc_dec;
-	cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, is_rfc3686,
-				   ctx1_iv_off);
+	cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, ctx1_iv_off);
 	dma_sync_single_for_device(jrdev, ctx->sh_desc_dec_dma,
 				   desc_bytes(desc), ctx->dir);
 
@@ -769,27 +763,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher,
 	return skcipher_setkey(skcipher, key, keylen, 0);
 }
 
-static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher,
-				   const u8 *key, unsigned int keylen)
-{
-	u32 ctx1_iv_off;
-	int err;
-
-	/*
-	 * RFC3686 specific:
-	 *	| CONTEXT1[255:128] = {NONCE, IV, COUNTER}
-	 *	| *key = {KEY, NONCE}
-	 */
-	ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE;
-	keylen -= CTR_RFC3686_NONCE_SIZE;
-
-	err = aes_check_keylen(keylen);
-	if (err)
-		return err;
-
-	return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
-}
-
 static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
 			       const u8 *key, unsigned int keylen)
 {
@@ -1877,29 +1850,6 @@ static struct caam_skcipher_alg driver_algs[] = {
 		.caam.class1_alg_type = OP_ALG_ALGSEL_AES |
 					OP_ALG_AAI_CTR_MOD128,
 	},
-	{
-		.skcipher = {
-			.base = {
-				.cra_name = "rfc3686(ctr(aes))",
-				.cra_driver_name = "rfc3686-ctr-aes-caam",
-				.cra_blocksize = 1,
-			},
-			.setkey = rfc3686_skcipher_setkey,
-			.encrypt = skcipher_encrypt,
-			.decrypt = skcipher_decrypt,
-			.min_keysize = AES_MIN_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.max_keysize = AES_MAX_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.ivsize = CTR_RFC3686_IV_SIZE,
-			.chunksize = AES_BLOCK_SIZE,
-		},
-		.caam = {
-			.class1_alg_type = OP_ALG_ALGSEL_AES |
-					   OP_ALG_AAI_CTR_MOD128,
-			.rfc3686 = true,
-		},
-	},
 	{
 		.skcipher = {
 			.base = {
diff --git a/drivers/crypto/caam/caamalg_desc.c b/drivers/crypto/caam/caamalg_desc.c
index d6c58184bb57c..e9b32f151b0da 100644
--- a/drivers/crypto/caam/caamalg_desc.c
+++ b/drivers/crypto/caam/caamalg_desc.c
@@ -1371,12 +1371,10 @@ static inline void skcipher_append_src_dst(u32 *desc)
  *         with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128
  *                                - OP_ALG_ALGSEL_CHACHA20
  * @ivsize: initialization vector size
- * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
  * @ctx1_iv_off: IV offset in CONTEXT1 register
  */
 void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
-				unsigned int ivsize, const bool is_rfc3686,
-				const u32 ctx1_iv_off)
+				unsigned int ivsize, const u32 ctx1_iv_off)
 {
 	u32 *key_jump_cmd;
 	u32 options = cdata->algtype | OP_ALG_AS_INIT | OP_ALG_ENCRYPT;
@@ -1392,18 +1390,6 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
 	append_key_as_imm(desc, cdata->key_virt, cdata->keylen,
 			  cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG);
 
-	/* Load nonce into CONTEXT1 reg */
-	if (is_rfc3686) {
-		const u8 *nonce = cdata->key_virt + cdata->keylen;
-
-		append_load_as_imm(desc, nonce, CTR_RFC3686_NONCE_SIZE,
-				   LDST_CLASS_IND_CCB |
-				   LDST_SRCDST_BYTE_OUTFIFO | LDST_IMM);
-		append_move(desc, MOVE_WAITCOMP | MOVE_SRC_OUTFIFO |
-			    MOVE_DEST_CLASS1CTX | (16 << MOVE_OFFSET_SHIFT) |
-			    (CTR_RFC3686_NONCE_SIZE << MOVE_LEN_SHIFT));
-	}
-
 	set_jump_tgt_here(desc, key_jump_cmd);
 
 	/* Load IV, if there is one */
@@ -1412,13 +1398,6 @@ void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
 				LDST_CLASS_1_CCB | (ctx1_iv_off <<
 				LDST_OFFSET_SHIFT));
 
-	/* Load counter into CONTEXT1 reg */
-	if (is_rfc3686)
-		append_load_imm_be32(desc, 1, LDST_IMM | LDST_CLASS_1_CCB |
-				     LDST_SRCDST_BYTE_CONTEXT |
-				     ((ctx1_iv_off + CTR_RFC3686_IV_SIZE) <<
-				      LDST_OFFSET_SHIFT));
-
 	/* Load operation */
 	if (is_chacha20)
 		options |= OP_ALG_AS_FINALIZE;
@@ -1447,12 +1426,10 @@ EXPORT_SYMBOL(cnstr_shdsc_skcipher_encap);
  *         with OP_ALG_AAI_CBC or OP_ALG_AAI_CTR_MOD128
  *                                - OP_ALG_ALGSEL_CHACHA20
  * @ivsize: initialization vector size
- * @is_rfc3686: true when ctr(aes) is wrapped by rfc3686 template
  * @ctx1_iv_off: IV offset in CONTEXT1 register
  */
 void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
-				unsigned int ivsize, const bool is_rfc3686,
-				const u32 ctx1_iv_off)
+				unsigned int ivsize, const u32 ctx1_iv_off)
 {
 	u32 *key_jump_cmd;
 	bool is_chacha20 = ((cdata->algtype & OP_ALG_ALGSEL_MASK) ==
@@ -1467,18 +1444,6 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
 	append_key_as_imm(desc, cdata->key_virt, cdata->keylen,
 			  cdata->keylen, CLASS_1 | KEY_DEST_CLASS_REG);
 
-	/* Load nonce into CONTEXT1 reg */
-	if (is_rfc3686) {
-		const u8 *nonce = cdata->key_virt + cdata->keylen;
-
-		append_load_as_imm(desc, nonce, CTR_RFC3686_NONCE_SIZE,
-				   LDST_CLASS_IND_CCB |
-				   LDST_SRCDST_BYTE_OUTFIFO | LDST_IMM);
-		append_move(desc, MOVE_WAITCOMP | MOVE_SRC_OUTFIFO |
-			    MOVE_DEST_CLASS1CTX | (16 << MOVE_OFFSET_SHIFT) |
-			    (CTR_RFC3686_NONCE_SIZE << MOVE_LEN_SHIFT));
-	}
-
 	set_jump_tgt_here(desc, key_jump_cmd);
 
 	/* Load IV, if there is one */
@@ -1487,13 +1452,6 @@ void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
 				LDST_CLASS_1_CCB | (ctx1_iv_off <<
 				LDST_OFFSET_SHIFT));
 
-	/* Load counter into CONTEXT1 reg */
-	if (is_rfc3686)
-		append_load_imm_be32(desc, 1, LDST_IMM | LDST_CLASS_1_CCB |
-				     LDST_SRCDST_BYTE_CONTEXT |
-				     ((ctx1_iv_off + CTR_RFC3686_IV_SIZE) <<
-				      LDST_OFFSET_SHIFT));
-
 	/* Choose operation */
 	if (ctx1_iv_off)
 		append_operation(desc, cdata->algtype | OP_ALG_AS_INIT |
diff --git a/drivers/crypto/caam/caamalg_desc.h b/drivers/crypto/caam/caamalg_desc.h
index f2893393ba5e7..ac3d3ebc544e2 100644
--- a/drivers/crypto/caam/caamalg_desc.h
+++ b/drivers/crypto/caam/caamalg_desc.h
@@ -102,12 +102,10 @@ void cnstr_shdsc_chachapoly(u32 * const desc, struct alginfo *cdata,
 			    const bool is_qi);
 
 void cnstr_shdsc_skcipher_encap(u32 * const desc, struct alginfo *cdata,
-				unsigned int ivsize, const bool is_rfc3686,
-				const u32 ctx1_iv_off);
+				unsigned int ivsize, const u32 ctx1_iv_off);
 
 void cnstr_shdsc_skcipher_decap(u32 * const desc, struct alginfo *cdata,
-				unsigned int ivsize, const bool is_rfc3686,
-				const u32 ctx1_iv_off);
+				unsigned int ivsize, const u32 ctx1_iv_off);
 
 void cnstr_shdsc_xts_skcipher_encap(u32 * const desc, struct alginfo *cdata);
 
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index efe8f15a4a51a..a140e9090d244 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -610,12 +610,8 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 			   unsigned int keylen, const u32 ctx1_iv_off)
 {
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct caam_skcipher_alg *alg =
-		container_of(crypto_skcipher_alg(skcipher), typeof(*alg),
-			     skcipher);
 	struct device *jrdev = ctx->jrdev;
 	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
-	const bool is_rfc3686 = alg->caam.rfc3686;
 	int ret = 0;
 
 	print_hex_dump_debug("key in @" __stringify(__LINE__)": ",
@@ -627,9 +623,9 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 
 	/* skcipher encrypt, decrypt shared descriptors */
 	cnstr_shdsc_skcipher_encap(ctx->sh_desc_enc, &ctx->cdata, ivsize,
-				   is_rfc3686, ctx1_iv_off);
+				   ctx1_iv_off);
 	cnstr_shdsc_skcipher_decap(ctx->sh_desc_dec, &ctx->cdata, ivsize,
-				   is_rfc3686, ctx1_iv_off);
+				   ctx1_iv_off);
 
 	/* Now update the driver contexts with the new shared descriptor */
 	if (ctx->drv_ctx[ENCRYPT]) {
@@ -665,27 +661,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher,
 	return skcipher_setkey(skcipher, key, keylen, 0);
 }
 
-static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher,
-				   const u8 *key, unsigned int keylen)
-{
-	u32 ctx1_iv_off;
-	int err;
-
-	/*
-	 * RFC3686 specific:
-	 *	| CONTEXT1[255:128] = {NONCE, IV, COUNTER}
-	 *	| *key = {KEY, NONCE}
-	 */
-	ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE;
-	keylen -= CTR_RFC3686_NONCE_SIZE;
-
-	err = aes_check_keylen(keylen);
-	if (err)
-		return err;
-
-	return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
-}
-
 static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
 			       const u8 *key, unsigned int keylen)
 {
@@ -1479,29 +1454,6 @@ static struct caam_skcipher_alg driver_algs[] = {
 		.caam.class1_alg_type = OP_ALG_ALGSEL_AES |
 					OP_ALG_AAI_CTR_MOD128,
 	},
-	{
-		.skcipher = {
-			.base = {
-				.cra_name = "rfc3686(ctr(aes))",
-				.cra_driver_name = "rfc3686-ctr-aes-caam-qi",
-				.cra_blocksize = 1,
-			},
-			.setkey = rfc3686_skcipher_setkey,
-			.encrypt = skcipher_encrypt,
-			.decrypt = skcipher_decrypt,
-			.min_keysize = AES_MIN_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.max_keysize = AES_MAX_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.ivsize = CTR_RFC3686_IV_SIZE,
-			.chunksize = AES_BLOCK_SIZE,
-		},
-		.caam = {
-			.class1_alg_type = OP_ALG_ALGSEL_AES |
-					   OP_ALG_AAI_CTR_MOD128,
-			.rfc3686 = true,
-		},
-	},
 	{
 		.skcipher = {
 			.base = {
diff --git a/drivers/crypto/caam/caamalg_qi2.c b/drivers/crypto/caam/caamalg_qi2.c
index 6294c104bf7a9..fd0f070fb9971 100644
--- a/drivers/crypto/caam/caamalg_qi2.c
+++ b/drivers/crypto/caam/caamalg_qi2.c
@@ -934,14 +934,10 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 			   unsigned int keylen, const u32 ctx1_iv_off)
 {
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct caam_skcipher_alg *alg =
-		container_of(crypto_skcipher_alg(skcipher),
-			     struct caam_skcipher_alg, skcipher);
 	struct device *dev = ctx->dev;
 	struct caam_flc *flc;
 	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
 	u32 *desc;
-	const bool is_rfc3686 = alg->caam.rfc3686;
 
 	print_hex_dump_debug("key in @" __stringify(__LINE__)": ",
 			     DUMP_PREFIX_ADDRESS, 16, 4, key, keylen, 1);
@@ -953,8 +949,7 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 	/* skcipher_encrypt shared descriptor */
 	flc = &ctx->flc[ENCRYPT];
 	desc = flc->sh_desc;
-	cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, is_rfc3686,
-				   ctx1_iv_off);
+	cnstr_shdsc_skcipher_encap(desc, &ctx->cdata, ivsize, ctx1_iv_off);
 	flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */
 	dma_sync_single_for_device(dev, ctx->flc_dma[ENCRYPT],
 				   sizeof(flc->flc) + desc_bytes(desc),
@@ -963,8 +958,7 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 	/* skcipher_decrypt shared descriptor */
 	flc = &ctx->flc[DECRYPT];
 	desc = flc->sh_desc;
-	cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, is_rfc3686,
-				   ctx1_iv_off);
+	cnstr_shdsc_skcipher_decap(desc, &ctx->cdata, ivsize, ctx1_iv_off);
 	flc->flc[1] = cpu_to_caam32(desc_len(desc)); /* SDL */
 	dma_sync_single_for_device(dev, ctx->flc_dma[DECRYPT],
 				   sizeof(flc->flc) + desc_bytes(desc),
@@ -985,27 +979,6 @@ static int aes_skcipher_setkey(struct crypto_skcipher *skcipher,
 	return skcipher_setkey(skcipher, key, keylen, 0);
 }
 
-static int rfc3686_skcipher_setkey(struct crypto_skcipher *skcipher,
-				   const u8 *key, unsigned int keylen)
-{
-	u32 ctx1_iv_off;
-	int err;
-
-	/*
-	 * RFC3686 specific:
-	 *	| CONTEXT1[255:128] = {NONCE, IV, COUNTER}
-	 *	| *key = {KEY, NONCE}
-	 */
-	ctx1_iv_off = 16 + CTR_RFC3686_NONCE_SIZE;
-	keylen -= CTR_RFC3686_NONCE_SIZE;
-
-	err = aes_check_keylen(keylen);
-	if (err)
-		return err;
-
-	return skcipher_setkey(skcipher, key, keylen, ctx1_iv_off);
-}
-
 static int ctr_skcipher_setkey(struct crypto_skcipher *skcipher,
 			       const u8 *key, unsigned int keylen)
 {
@@ -1637,29 +1610,6 @@ static struct caam_skcipher_alg driver_algs[] = {
 		.caam.class1_alg_type = OP_ALG_ALGSEL_AES |
 					OP_ALG_AAI_CTR_MOD128,
 	},
-	{
-		.skcipher = {
-			.base = {
-				.cra_name = "rfc3686(ctr(aes))",
-				.cra_driver_name = "rfc3686-ctr-aes-caam-qi2",
-				.cra_blocksize = 1,
-			},
-			.setkey = rfc3686_skcipher_setkey,
-			.encrypt = skcipher_encrypt,
-			.decrypt = skcipher_decrypt,
-			.min_keysize = AES_MIN_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.max_keysize = AES_MAX_KEY_SIZE +
-				       CTR_RFC3686_NONCE_SIZE,
-			.ivsize = CTR_RFC3686_IV_SIZE,
-			.chunksize = AES_BLOCK_SIZE,
-		},
-		.caam = {
-			.class1_alg_type = OP_ALG_ALGSEL_AES |
-					   OP_ALG_AAI_CTR_MOD128,
-			.rfc3686 = true,
-		},
-	},
 	{
 		.skcipher = {
 			.base = {

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 20/31] crypto: nitrox - Set final_chunksize on rfc3686
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (18 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation Herbert Xu
                   ` (11 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in cavium/nitrox does not support partial
operation and therefore this patch sets its final_chunksize to -1
to mark this fact.

This patch also sets the chunksize to the AES block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/cavium/nitrox/nitrox_skcipher.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
index 7a159a5da30a0..0b597c6aa68af 100644
--- a/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
+++ b/drivers/crypto/cavium/nitrox/nitrox_skcipher.c
@@ -573,6 +573,8 @@ static struct skcipher_alg nitrox_skciphers[] = { {
 	.min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 	.max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 	.ivsize = CTR_RFC3686_IV_SIZE,
+	.chunksize = AES_BLOCK_SIZE,
+	.final_chunksize = -1,
 	.init = nitrox_skcipher_init,
 	.exit = nitrox_skcipher_exit,
 	.setkey = nitrox_aes_ctr_rfc3686_setkey,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (19 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 20/31] crypto: nitrox - Set final_chunksize on rfc3686 Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-08-06 19:16   ` John Allen
  2020-07-28  7:19 ` [v3 PATCH 22/31] crypto: chelsio " Herbert Xu
                   ` (10 subsequent siblings)
  31 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in ccp is essentially identical to
the generic rfc3686 wrapper, so it can simply be removed to
reduce complexity.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ccp/ccp-crypto-aes.c |   99 ------------------------------------
 drivers/crypto/ccp/ccp-crypto.h     |    6 --
 2 files changed, 105 deletions(-)

diff --git a/drivers/crypto/ccp/ccp-crypto-aes.c b/drivers/crypto/ccp/ccp-crypto-aes.c
index e6dcd8cedd53e..a45e5c994e381 100644
--- a/drivers/crypto/ccp/ccp-crypto-aes.c
+++ b/drivers/crypto/ccp/ccp-crypto-aes.c
@@ -131,78 +131,6 @@ static int ccp_aes_init_tfm(struct crypto_skcipher *tfm)
 	return 0;
 }
 
-static int ccp_aes_rfc3686_complete(struct crypto_async_request *async_req,
-				    int ret)
-{
-	struct skcipher_request *req = skcipher_request_cast(async_req);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
-
-	/* Restore the original pointer */
-	req->iv = rctx->rfc3686_info;
-
-	return ccp_aes_complete(async_req, ret);
-}
-
-static int ccp_aes_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key,
-				  unsigned int key_len)
-{
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	if (key_len < CTR_RFC3686_NONCE_SIZE)
-		return -EINVAL;
-
-	key_len -= CTR_RFC3686_NONCE_SIZE;
-	memcpy(ctx->u.aes.nonce, key + key_len, CTR_RFC3686_NONCE_SIZE);
-
-	return ccp_aes_setkey(tfm, key, key_len);
-}
-
-static int ccp_aes_rfc3686_crypt(struct skcipher_request *req, bool encrypt)
-{
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct ccp_aes_req_ctx *rctx = skcipher_request_ctx(req);
-	u8 *iv;
-
-	/* Initialize the CTR block */
-	iv = rctx->rfc3686_iv;
-	memcpy(iv, ctx->u.aes.nonce, CTR_RFC3686_NONCE_SIZE);
-
-	iv += CTR_RFC3686_NONCE_SIZE;
-	memcpy(iv, req->iv, CTR_RFC3686_IV_SIZE);
-
-	iv += CTR_RFC3686_IV_SIZE;
-	*(__be32 *)iv = cpu_to_be32(1);
-
-	/* Point to the new IV */
-	rctx->rfc3686_info = req->iv;
-	req->iv = rctx->rfc3686_iv;
-
-	return ccp_aes_crypt(req, encrypt);
-}
-
-static int ccp_aes_rfc3686_encrypt(struct skcipher_request *req)
-{
-	return ccp_aes_rfc3686_crypt(req, true);
-}
-
-static int ccp_aes_rfc3686_decrypt(struct skcipher_request *req)
-{
-	return ccp_aes_rfc3686_crypt(req, false);
-}
-
-static int ccp_aes_rfc3686_init_tfm(struct crypto_skcipher *tfm)
-{
-	struct ccp_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	ctx->complete = ccp_aes_rfc3686_complete;
-	ctx->u.aes.key_len = 0;
-
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct ccp_aes_req_ctx));
-
-	return 0;
-}
-
 static const struct skcipher_alg ccp_aes_defaults = {
 	.setkey			= ccp_aes_setkey,
 	.encrypt		= ccp_aes_encrypt,
@@ -221,24 +149,6 @@ static const struct skcipher_alg ccp_aes_defaults = {
 	.base.cra_module	= THIS_MODULE,
 };
 
-static const struct skcipher_alg ccp_aes_rfc3686_defaults = {
-	.setkey			= ccp_aes_rfc3686_setkey,
-	.encrypt		= ccp_aes_rfc3686_encrypt,
-	.decrypt		= ccp_aes_rfc3686_decrypt,
-	.min_keysize		= AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-	.max_keysize		= AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-	.init			= ccp_aes_rfc3686_init_tfm,
-
-	.base.cra_flags		= CRYPTO_ALG_ASYNC |
-				  CRYPTO_ALG_ALLOCATES_MEMORY |
-				  CRYPTO_ALG_KERN_DRIVER_ONLY |
-				  CRYPTO_ALG_NEED_FALLBACK,
-	.base.cra_blocksize	= CTR_RFC3686_BLOCK_SIZE,
-	.base.cra_ctxsize	= sizeof(struct ccp_ctx),
-	.base.cra_priority	= CCP_CRA_PRIORITY,
-	.base.cra_module	= THIS_MODULE,
-};
-
 struct ccp_aes_def {
 	enum ccp_aes_mode mode;
 	unsigned int version;
@@ -295,15 +205,6 @@ static struct ccp_aes_def aes_algs[] = {
 		.ivsize		= AES_BLOCK_SIZE,
 		.alg_defaults	= &ccp_aes_defaults,
 	},
-	{
-		.mode		= CCP_AES_MODE_CTR,
-		.version	= CCP_VERSION(3, 0),
-		.name		= "rfc3686(ctr(aes))",
-		.driver_name	= "rfc3686-ctr-aes-ccp",
-		.blocksize	= 1,
-		.ivsize		= CTR_RFC3686_IV_SIZE,
-		.alg_defaults	= &ccp_aes_rfc3686_defaults,
-	},
 };
 
 static int ccp_register_aes_alg(struct list_head *head,
diff --git a/drivers/crypto/ccp/ccp-crypto.h b/drivers/crypto/ccp/ccp-crypto.h
index aed3d2192d013..a837b2a994d9f 100644
--- a/drivers/crypto/ccp/ccp-crypto.h
+++ b/drivers/crypto/ccp/ccp-crypto.h
@@ -99,8 +99,6 @@ struct ccp_aes_ctx {
 	unsigned int key_len;
 	u8 key[AES_MAX_KEY_SIZE * 2];
 
-	u8 nonce[CTR_RFC3686_NONCE_SIZE];
-
 	/* CMAC key structures */
 	struct scatterlist k1_sg;
 	struct scatterlist k2_sg;
@@ -116,10 +114,6 @@ struct ccp_aes_req_ctx {
 	struct scatterlist tag_sg;
 	u8 tag[AES_BLOCK_SIZE];
 
-	/* Fields used for RFC3686 requests */
-	u8 *rfc3686_info;
-	u8 rfc3686_iv[AES_BLOCK_SIZE];
-
 	struct ccp_cmd cmd;
 
 	struct skcipher_request fallback_req;	// keep at the end

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 22/31] crypto: chelsio - Remove rfc3686 implementation
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (20 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 23/31] crypto: inside-secure - Set final_chunksize on rfc3686 Herbert Xu
                   ` (9 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in chelsio is essentially identical to
the generic rfc3686 wrapper, so it can simply be removed to
reduce complexity.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/chelsio/chcr_algo.c |  109 +------------------------------------
 1 file changed, 4 insertions(+), 105 deletions(-)

diff --git a/drivers/crypto/chelsio/chcr_algo.c b/drivers/crypto/chelsio/chcr_algo.c
index 13b908ea48738..8374be72454db 100644
--- a/drivers/crypto/chelsio/chcr_algo.c
+++ b/drivers/crypto/chelsio/chcr_algo.c
@@ -856,9 +856,7 @@ static struct sk_buff *create_cipher_wr(struct cipher_wr_param *wrparam)
 	chcr_req->key_ctx.ctx_hdr = ablkctx->key_ctx_hdr;
 	if ((reqctx->op == CHCR_DECRYPT_OP) &&
 	    (!(get_cryptoalg_subtype(tfm) ==
-	       CRYPTO_ALG_SUB_TYPE_CTR)) &&
-	    (!(get_cryptoalg_subtype(tfm) ==
-	       CRYPTO_ALG_SUB_TYPE_CTR_RFC3686))) {
+	       CRYPTO_ALG_SUB_TYPE_CTR))) {
 		generate_copy_rrkey(ablkctx, &chcr_req->key_ctx);
 	} else {
 		if ((ablkctx->ciph_mode == CHCR_SCMD_CIPHER_MODE_AES_CBC) ||
@@ -988,42 +986,6 @@ static int chcr_aes_ctr_setkey(struct crypto_skcipher *cipher,
 	return err;
 }
 
-static int chcr_aes_rfc3686_setkey(struct crypto_skcipher *cipher,
-				   const u8 *key,
-				   unsigned int keylen)
-{
-	struct ablk_ctx *ablkctx = ABLK_CTX(c_ctx(cipher));
-	unsigned int ck_size, context_size;
-	u16 alignment = 0;
-	int err;
-
-	if (keylen < CTR_RFC3686_NONCE_SIZE)
-		return -EINVAL;
-	memcpy(ablkctx->nonce, key + (keylen - CTR_RFC3686_NONCE_SIZE),
-	       CTR_RFC3686_NONCE_SIZE);
-
-	keylen -= CTR_RFC3686_NONCE_SIZE;
-	err = chcr_cipher_fallback_setkey(cipher, key, keylen);
-	if (err)
-		goto badkey_err;
-
-	ck_size = chcr_keyctx_ck_size(keylen);
-	alignment = (ck_size == CHCR_KEYCTX_CIPHER_KEY_SIZE_192) ? 8 : 0;
-	memcpy(ablkctx->key, key, keylen);
-	ablkctx->enckey_len = keylen;
-	context_size = (KEY_CONTEXT_HDR_SALT_AND_PAD +
-			keylen + alignment) >> 4;
-
-	ablkctx->key_ctx_hdr = FILL_KEY_CTX_HDR(ck_size, CHCR_KEYCTX_NO_KEY,
-						0, 0, context_size);
-	ablkctx->ciph_mode = CHCR_SCMD_CIPHER_MODE_AES_CTR;
-
-	return 0;
-badkey_err:
-	ablkctx->enckey_len = 0;
-
-	return err;
-}
 static void ctr_add_iv(u8 *dstiv, u8 *srciv, u32 add)
 {
 	unsigned int size = AES_BLOCK_SIZE;
@@ -1107,10 +1069,6 @@ static int chcr_update_cipher_iv(struct skcipher_request *req,
 	if (subtype == CRYPTO_ALG_SUB_TYPE_CTR)
 		ctr_add_iv(iv, req->iv, (reqctx->processed /
 			   AES_BLOCK_SIZE));
-	else if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686)
-		*(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE +
-			CTR_RFC3686_IV_SIZE) = cpu_to_be32((reqctx->processed /
-						AES_BLOCK_SIZE) + 1);
 	else if (subtype == CRYPTO_ALG_SUB_TYPE_XTS)
 		ret = chcr_update_tweak(req, iv, 0);
 	else if (subtype == CRYPTO_ALG_SUB_TYPE_CBC) {
@@ -1125,11 +1083,6 @@ static int chcr_update_cipher_iv(struct skcipher_request *req,
 
 }
 
-/* We need separate function for final iv because in rfc3686  Initial counter
- * starts from 1 and buffer size of iv is 8 byte only which remains constant
- * for subsequent update requests
- */
-
 static int chcr_final_cipher_iv(struct skcipher_request *req,
 				   struct cpl_fw6_pld *fw6_pld, u8 *iv)
 {
@@ -1313,30 +1266,16 @@ static int process_cipher(struct skcipher_request *req,
 	if (subtype == CRYPTO_ALG_SUB_TYPE_CTR) {
 		bytes = adjust_ctr_overflow(req->iv, bytes);
 	}
-	if (subtype == CRYPTO_ALG_SUB_TYPE_CTR_RFC3686) {
-		memcpy(reqctx->iv, ablkctx->nonce, CTR_RFC3686_NONCE_SIZE);
-		memcpy(reqctx->iv + CTR_RFC3686_NONCE_SIZE, req->iv,
-				CTR_RFC3686_IV_SIZE);
-
-		/* initialize counter portion of counter block */
-		*(__be32 *)(reqctx->iv + CTR_RFC3686_NONCE_SIZE +
-			CTR_RFC3686_IV_SIZE) = cpu_to_be32(1);
-		memcpy(reqctx->init_iv, reqctx->iv, IV);
 
-	} else {
+	memcpy(reqctx->iv, req->iv, IV);
+	memcpy(reqctx->init_iv, req->iv, IV);
 
-		memcpy(reqctx->iv, req->iv, IV);
-		memcpy(reqctx->init_iv, req->iv, IV);
-	}
 	if (unlikely(bytes == 0)) {
 		chcr_cipher_dma_unmap(&ULD_CTX(c_ctx(tfm))->lldi.pdev->dev,
 				      req);
 fallback:       atomic_inc(&adap->chcr_stats.fallback);
 		err = chcr_cipher_fallback(ablkctx->sw_cipher, req,
-					   subtype ==
-					   CRYPTO_ALG_SUB_TYPE_CTR_RFC3686 ?
-					   reqctx->iv : req->iv,
-					   op_type);
+					   req->iv, op_type);
 		goto error;
 	}
 	reqctx->op = op_type;
@@ -1486,27 +1425,6 @@ static int chcr_init_tfm(struct crypto_skcipher *tfm)
 	return chcr_device_init(ctx);
 }
 
-static int chcr_rfc3686_init(struct crypto_skcipher *tfm)
-{
-	struct skcipher_alg *alg = crypto_skcipher_alg(tfm);
-	struct chcr_context *ctx = crypto_skcipher_ctx(tfm);
-	struct ablk_ctx *ablkctx = ABLK_CTX(ctx);
-
-	/*RFC3686 initialises IV counter value to 1, rfc3686(ctr(aes))
-	 * cannot be used as fallback in chcr_handle_cipher_response
-	 */
-	ablkctx->sw_cipher = crypto_alloc_skcipher("ctr(aes)", 0,
-				CRYPTO_ALG_NEED_FALLBACK);
-	if (IS_ERR(ablkctx->sw_cipher)) {
-		pr_err("failed to allocate fallback for %s\n", alg->base.cra_name);
-		return PTR_ERR(ablkctx->sw_cipher);
-	}
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct chcr_skcipher_req_ctx) +
-				    crypto_skcipher_reqsize(ablkctx->sw_cipher));
-	return chcr_device_init(ctx);
-}
-
-
 static void chcr_exit_tfm(struct crypto_skcipher *tfm)
 {
 	struct chcr_context *ctx = crypto_skcipher_ctx(tfm);
@@ -3894,25 +3812,6 @@ static struct chcr_alg_template driver_algs[] = {
 			.decrypt		= chcr_aes_decrypt,
 		}
 	},
-	{
-		.type = CRYPTO_ALG_TYPE_SKCIPHER |
-			CRYPTO_ALG_SUB_TYPE_CTR_RFC3686,
-		.is_registered = 0,
-		.alg.skcipher = {
-			.base.cra_name		= "rfc3686(ctr(aes))",
-			.base.cra_driver_name	= "rfc3686-ctr-aes-chcr",
-			.base.cra_blocksize	= 1,
-
-			.init			= chcr_rfc3686_init,
-			.exit			= chcr_exit_tfm,
-			.min_keysize		= AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-			.max_keysize		= AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
-			.ivsize			= CTR_RFC3686_IV_SIZE,
-			.setkey			= chcr_aes_rfc3686_setkey,
-			.encrypt		= chcr_aes_encrypt,
-			.decrypt		= chcr_aes_decrypt,
-		}
-	},
 	/* SHA */
 	{
 		.type = CRYPTO_ALG_TYPE_AHASH,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 23/31] crypto: inside-secure - Set final_chunksize on rfc3686
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (21 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 22/31] crypto: chelsio " Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 24/31] crypto: ixp4xx - Remove rfc3686 implementation Herbert Xu
                   ` (8 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in inside-secure does not support partial
operation and therefore this patch sets its final_chunksize to -1
to mark this fact.
    
This patch also sets the chunksize to the underlying block size.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/inside-secure/safexcel_cipher.c |    4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/crypto/inside-secure/safexcel_cipher.c b/drivers/crypto/inside-secure/safexcel_cipher.c
index ef04a394ff49d..4e269e92c25dc 100644
--- a/drivers/crypto/inside-secure/safexcel_cipher.c
+++ b/drivers/crypto/inside-secure/safexcel_cipher.c
@@ -1484,6 +1484,8 @@ struct safexcel_alg_template safexcel_alg_ctr_aes = {
 		.min_keysize = AES_MIN_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 		.max_keysize = AES_MAX_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 		.ivsize = CTR_RFC3686_IV_SIZE,
+		.chunksize = AES_BLOCK_SIZE,
+		.final_chunksize = -1,
 		.base = {
 			.cra_name = "rfc3686(ctr(aes))",
 			.cra_driver_name = "safexcel-ctr-aes",
@@ -3309,6 +3311,8 @@ struct safexcel_alg_template safexcel_alg_ctr_sm4 = {
 		.min_keysize = SM4_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 		.max_keysize = SM4_KEY_SIZE + CTR_RFC3686_NONCE_SIZE,
 		.ivsize = CTR_RFC3686_IV_SIZE,
+		.chunksize = SM4_BLOCK_SIZE,
+		.final_chunksize = -1,
 		.base = {
 			.cra_name = "rfc3686(ctr(sm4))",
 			.cra_driver_name = "safexcel-ctr-sm4",

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 24/31] crypto: ixp4xx - Remove rfc3686 implementation
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (22 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 23/31] crypto: inside-secure - Set final_chunksize on rfc3686 Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 25/31] crypto: nx - Set final_chunksize on rfc3686 Herbert Xu
                   ` (7 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in ixp4xx is essentially identical to
the generic rfc3686 wrapper, so it can simply be removed to
reduce complexity.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ixp4xx_crypto.c |   53 -----------------------------------------
 1 file changed, 53 deletions(-)

diff --git a/drivers/crypto/ixp4xx_crypto.c b/drivers/crypto/ixp4xx_crypto.c
index f478bb0a566af..c93f5db8d0503 100644
--- a/drivers/crypto/ixp4xx_crypto.c
+++ b/drivers/crypto/ixp4xx_crypto.c
@@ -180,7 +180,6 @@ struct ixp_ctx {
 	int enckey_len;
 	u8 enckey[MAX_KEYLEN];
 	u8 salt[MAX_IVLEN];
-	u8 nonce[CTR_RFC3686_NONCE_SIZE];
 	unsigned salted;
 	atomic_t configuring;
 	struct completion completion;
@@ -848,22 +847,6 @@ static int ablk_des3_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	       ablk_setkey(tfm, key, key_len);
 }
 
-static int ablk_rfc3686_setkey(struct crypto_skcipher *tfm, const u8 *key,
-		unsigned int key_len)
-{
-	struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm);
-
-	/* the nonce is stored in bytes at end of key */
-	if (key_len < CTR_RFC3686_NONCE_SIZE)
-		return -EINVAL;
-
-	memcpy(ctx->nonce, key + (key_len - CTR_RFC3686_NONCE_SIZE),
-			CTR_RFC3686_NONCE_SIZE);
-
-	key_len -= CTR_RFC3686_NONCE_SIZE;
-	return ablk_setkey(tfm, key, key_len);
-}
-
 static int ablk_perform(struct skcipher_request *req, int encrypt)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
@@ -947,28 +930,6 @@ static int ablk_decrypt(struct skcipher_request *req)
 	return ablk_perform(req, 0);
 }
 
-static int ablk_rfc3686_crypt(struct skcipher_request *req)
-{
-	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
-	struct ixp_ctx *ctx = crypto_skcipher_ctx(tfm);
-	u8 iv[CTR_RFC3686_BLOCK_SIZE];
-	u8 *info = req->iv;
-	int ret;
-
-	/* set up counter block */
-        memcpy(iv, ctx->nonce, CTR_RFC3686_NONCE_SIZE);
-	memcpy(iv + CTR_RFC3686_NONCE_SIZE, info, CTR_RFC3686_IV_SIZE);
-
-	/* initialize counter portion of counter block */
-	*(__be32 *)(iv + CTR_RFC3686_NONCE_SIZE + CTR_RFC3686_IV_SIZE) =
-		cpu_to_be32(1);
-
-	req->iv = iv;
-	ret = ablk_perform(req, 1);
-	req->iv = info;
-	return ret;
-}
-
 static int aead_perform(struct aead_request *req, int encrypt,
 		int cryptoffset, int eff_cryptlen, u8 *iv)
 {
@@ -1269,20 +1230,6 @@ static struct ixp_alg ixp4xx_algos[] = {
 	},
 	.cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR,
 	.cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR,
-}, {
-	.crypto	= {
-		.base.cra_name		= "rfc3686(ctr(aes))",
-		.base.cra_blocksize	= 1,
-
-		.min_keysize		= AES_MIN_KEY_SIZE,
-		.max_keysize		= AES_MAX_KEY_SIZE,
-		.ivsize			= AES_BLOCK_SIZE,
-		.setkey			= ablk_rfc3686_setkey,
-		.encrypt		= ablk_rfc3686_crypt,
-		.decrypt		= ablk_rfc3686_crypt,
-	},
-	.cfg_enc = CIPH_ENCR | MOD_AES | MOD_CTR,
-	.cfg_dec = CIPH_ENCR | MOD_AES | MOD_CTR,
 } };
 
 static struct ixp_aead_alg ixp4xx_aeads[] = {

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 25/31] crypto: nx - Set final_chunksize on rfc3686
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (23 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 24/31] crypto: ixp4xx - Remove rfc3686 implementation Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 26/31] crypto: essiv - Set final_chunksize Herbert Xu
                   ` (6 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The rfc3686 implementation in nx does not support partial
operation and therefore this patch sets its final_chunksize to -1
to mark this fact.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/nx/nx-aes-ctr.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
index 6d5ce1a66f1ee..0e95e975cf5ba 100644
--- a/drivers/crypto/nx/nx-aes-ctr.c
+++ b/drivers/crypto/nx/nx-aes-ctr.c
@@ -142,4 +142,5 @@ struct skcipher_alg nx_ctr3686_aes_alg = {
 	.encrypt		= ctr3686_aes_nx_crypt,
 	.decrypt		= ctr3686_aes_nx_crypt,
 	.chunksize		= AES_BLOCK_SIZE,
+	.final_chunksize	= -1,
 };

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 26/31] crypto: essiv - Set final_chunksize
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (24 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 25/31] crypto: nx - Set final_chunksize on rfc3686 Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 27/31] crypto: simd - Add support for chaining Herbert Xu
                   ` (5 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The essiv template does not support partial operation and therefore
this patch sets its final_chunksize to -1 to mark this fact.
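The final_chunksize convention this series introduces (per the cover letter: -1 means no chaining at all, 0 means fully chainable, and a positive value means chainable if the caller withholds that many bytes until the final request) can be sketched as a caller-side check. The function name and shape here are illustrative only, not the kernel API:

```c
#include <assert.h>

/*
 * Sketch of the final_chunksize convention (illustrative names):
 *   fc <  0 : algorithm cannot be chained at all
 *   fc == 0 : algorithm is fully chainable
 *   fc >  0 : chainable only if fc bytes are withheld until the end
 */
static int may_set_more_flag(int final_chunksize, unsigned int withheld)
{
	if (final_chunksize < 0)
		return 0;
	return withheld >= (unsigned int)final_chunksize;
}
```

Under this convention essiv's -1 tells callers such as algif_skcipher never to set the MORE flag on requests for this template.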

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/essiv.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/essiv.c b/crypto/essiv.c
index d012be23d496d..dd19cfefe559c 100644
--- a/crypto/essiv.c
+++ b/crypto/essiv.c
@@ -580,6 +580,7 @@ static int essiv_create(struct crypto_template *tmpl, struct rtattr **tb)
 		skcipher_inst->alg.ivsize	= ivsize;
 		skcipher_inst->alg.chunksize	= crypto_skcipher_alg_chunksize(skcipher_alg);
 		skcipher_inst->alg.walksize	= crypto_skcipher_alg_walksize(skcipher_alg);
+		skcipher_inst->alg.final_chunksize = -1;
 
 		skcipher_inst->free		= essiv_skcipher_free_instance;
 

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 27/31] crypto: simd - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (25 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 26/31] crypto: essiv - Set final_chunksize Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 28/31] crypto: arm64/essiv - Set final_chunksize Herbert Xu
                   ` (4 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

This patch sets the simd final chunk size from its child skcipher.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/simd.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/simd.c b/crypto/simd.c
index edaa479a1ec5e..260c26ad92fdf 100644
--- a/crypto/simd.c
+++ b/crypto/simd.c
@@ -181,6 +181,7 @@ struct simd_skcipher_alg *simd_skcipher_create_compat(const char *algname,
 
 	alg->ivsize = ialg->ivsize;
 	alg->chunksize = ialg->chunksize;
+	alg->final_chunksize = ialg->final_chunksize;
 	alg->min_keysize = ialg->min_keysize;
 	alg->max_keysize = ialg->max_keysize;
 

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 28/31] crypto: arm64/essiv - Set final_chunksize
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (26 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 27/31] crypto: simd - Add support for chaining Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 29/31] crypto: ccree - Set final_chunksize on essiv Herbert Xu
                   ` (3 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The arm64 essiv implementation does not support partial operation
and therefore this patch sets its final_chunksize to -1 to mark this
fact.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 arch/arm64/crypto/aes-glue.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index f63feb00e354d..a0ac7bf070d53 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -769,6 +769,7 @@ static struct skcipher_alg aes_algs[] = { {
 	.min_keysize	= AES_MIN_KEY_SIZE,
 	.max_keysize	= AES_MAX_KEY_SIZE,
 	.ivsize		= AES_BLOCK_SIZE,
+	.final_chunksize = -1,
 	.setkey		= essiv_cbc_set_key,
 	.encrypt	= essiv_cbc_encrypt,
 	.decrypt	= essiv_cbc_decrypt,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 29/31] crypto: ccree - Set final_chunksize on essiv
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (27 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 28/31] crypto: arm64/essiv - Set final_chunksize Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 30/31] crypto: kw - Set final_chunksize Herbert Xu
                   ` (2 subsequent siblings)
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The ccree essiv implementation does not support partial operation
and therefore this patch sets its final_chunksize to -1 to mark this
fact.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 drivers/crypto/ccree/cc_cipher.c |    2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/crypto/ccree/cc_cipher.c b/drivers/crypto/ccree/cc_cipher.c
index 83567b60d6908..a380391b2186a 100644
--- a/drivers/crypto/ccree/cc_cipher.c
+++ b/drivers/crypto/ccree/cc_cipher.c
@@ -1067,6 +1067,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 			.min_keysize = CC_HW_KEY_SIZE,
 			.max_keysize = CC_HW_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
+			.final_chunksize = -1,
 			},
 		.cipher_mode = DRV_CIPHER_ESSIV,
 		.flow_mode = S_DIN_to_AES,
@@ -1198,6 +1199,7 @@ static const struct cc_alg_template skcipher_algs[] = {
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
+			.final_chunksize = -1,
 			},
 		.cipher_mode = DRV_CIPHER_ESSIV,
 		.flow_mode = S_DIN_to_AES,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 30/31] crypto: kw - Set final_chunksize
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (28 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 29/31] crypto: ccree - Set final_chunksize on essiv Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28  7:19 ` [v3 PATCH 31/31] crypto: salsa20-generic - Add support for chaining Herbert Xu
  2020-07-28 17:19 ` [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Eric Biggers
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

The kw algorithm does not support partial operation and therefore
this patch sets its final_chunksize to -1 to mark this fact.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/keywrap.c |    1 +
 1 file changed, 1 insertion(+)

diff --git a/crypto/keywrap.c b/crypto/keywrap.c
index 0355cce21b1e2..b99568c6d032c 100644
--- a/crypto/keywrap.c
+++ b/crypto/keywrap.c
@@ -280,6 +280,7 @@ static int crypto_kw_create(struct crypto_template *tmpl, struct rtattr **tb)
 	inst->alg.base.cra_blocksize = SEMIBSIZE;
 	inst->alg.base.cra_alignmask = 0;
 	inst->alg.ivsize = SEMIBSIZE;
+	inst->alg.final_chunksize = -1;
 
 	inst->alg.encrypt = crypto_kw_encrypt;
 	inst->alg.decrypt = crypto_kw_decrypt;

^ permalink raw reply	[flat|nested] 58+ messages in thread

* [v3 PATCH 31/31] crypto: salsa20-generic - Add support for chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (29 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 30/31] crypto: kw - Set final_chunksize Herbert Xu
@ 2020-07-28  7:19 ` Herbert Xu
  2020-07-28 17:19 ` [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Eric Biggers
  31 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28  7:19 UTC (permalink / raw)
  To: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

As it stands salsa20 cannot do chaining.  That is, it has to handle
each request as a whole.  This patch adds support for chaining when
the CRYPTO_TFM_REQ_MORE flag is set.
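The idea behind chaining a stream cipher is that the keystream state must survive from one request to the next, exactly as the patch below does by moving salsa20's state array into a per-request context that is only reinitialised when the previous request did not carry the MORE flag. A toy model (a trivial counter-based keystream, not salsa20) illustrates the invariant:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Toy stream-cipher state: a running byte counter standing in for
 * salsa20's block counter.  "Chaining" means this state is preserved
 * across requests instead of being reset for each one.
 */
struct toy_state {
	uint8_t ctr;
};

static void toy_crypt(struct toy_state *st, uint8_t *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		buf[i] ^= st->ctr++; /* keystream byte = counter value */
}
```

Encrypting a buffer in two chained requests with retained state must produce the same ciphertext as one request over the whole buffer; that equivalence is what the skcipher chaining tests in this series verify for the real algorithms.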

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
---

 crypto/salsa20_generic.c |   20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/crypto/salsa20_generic.c b/crypto/salsa20_generic.c
index 3418869dabefd..dd4b4cc8e76b9 100644
--- a/crypto/salsa20_generic.c
+++ b/crypto/salsa20_generic.c
@@ -21,7 +21,10 @@
 
 #include <asm/unaligned.h>
 #include <crypto/internal/skcipher.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/string.h>
 
 #define SALSA20_IV_SIZE        8
 #define SALSA20_MIN_KEY_SIZE  16
@@ -32,6 +35,11 @@ struct salsa20_ctx {
 	u32 initial_state[16];
 };
 
+struct salsa20_reqctx {
+	u32 state[16];
+	bool init;
+};
+
 static void salsa20_block(u32 *state, __le32 *stream)
 {
 	u32 x[16];
@@ -154,13 +162,16 @@ static int salsa20_crypt(struct skcipher_request *req)
 {
 	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
 	const struct salsa20_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct salsa20_reqctx *rctx = skcipher_request_ctx(req);
 	struct skcipher_walk walk;
-	u32 state[16];
 	int err;
 
 	err = skcipher_walk_virt(&walk, req, false);
 
-	salsa20_init(state, ctx, req->iv);
+	if (!rctx->init)
+		salsa20_init(rctx->state, ctx, req->iv);
+
+	rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
 
 	while (walk.nbytes > 0) {
 		unsigned int nbytes = walk.nbytes;
@@ -168,8 +179,8 @@ static int salsa20_crypt(struct skcipher_request *req)
 		if (nbytes < walk.total)
 			nbytes = round_down(nbytes, walk.stride);
 
-		salsa20_docrypt(state, walk.dst.virt.addr, walk.src.virt.addr,
-				nbytes);
+		salsa20_docrypt(rctx->state, walk.dst.virt.addr,
+				walk.src.virt.addr, nbytes);
 		err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
 	}
 
@@ -188,6 +199,7 @@ static struct skcipher_alg alg = {
 	.max_keysize		= SALSA20_MAX_KEY_SIZE,
 	.ivsize			= SALSA20_IV_SIZE,
 	.chunksize		= SALSA20_BLOCK_SIZE,
+	.reqsize		= sizeof(struct salsa20_reqctx),
 	.setkey			= salsa20_setkey,
 	.encrypt		= salsa20_crypt,
 	.decrypt		= salsa20_crypt,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28  7:18 ` [v3 PATCH 3/31] crypto: cts - Add support for chaining Herbert Xu
@ 2020-07-28 11:05   ` Ard Biesheuvel
  2020-07-28 11:53     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-28 11:05 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, 28 Jul 2020 at 10:18, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> As it stands cts cannot do chaining.  That is, it always performs
> the cipher-text stealing at the end of a request.  This patch adds
> support for chaining when the CRYPTO_TFM_REQ_MORE flag is set.
>
> It also sets final_chunksize so that data can be withheld by the
> caller to enable correct processing at the true end of a request.
>

But isn't the final chunksize a function of cryptlen? What happens if
I try to use cts(cbc(aes)) to encrypt 16 bytes with the MORE flag, and
<16 additional bytes as the final chunk?


> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> ---
>
>  crypto/cts.c |   19 ++++++++++---------
>  1 file changed, 10 insertions(+), 9 deletions(-)
>
> diff --git a/crypto/cts.c b/crypto/cts.c
> index 3766d47ebcc01..67990146c9b06 100644
> --- a/crypto/cts.c
> +++ b/crypto/cts.c
> @@ -100,7 +100,7 @@ static int cts_cbc_encrypt(struct skcipher_request *req)
>         struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct skcipher_request *subreq = &rctx->subreq;
> -       int bsize = crypto_skcipher_blocksize(tfm);
> +       int bsize = crypto_skcipher_chunksize(tfm);
>         u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
>         struct scatterlist *sg;
>         unsigned int offset;
> @@ -146,7 +146,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
>         struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
>         struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm);
>         struct skcipher_request *subreq = &rctx->subreq;
> -       int bsize = crypto_skcipher_blocksize(tfm);
> +       int bsize = crypto_skcipher_chunksize(tfm);
>         unsigned int nbytes = req->cryptlen;
>         unsigned int offset;
>
> @@ -155,7 +155,7 @@ static int crypto_cts_encrypt(struct skcipher_request *req)
>         if (nbytes < bsize)
>                 return -EINVAL;
>
> -       if (nbytes == bsize) {
> +       if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) {
>                 skcipher_request_set_callback(subreq, req->base.flags,
>                                               req->base.complete,
>                                               req->base.data);
> @@ -181,7 +181,7 @@ static int cts_cbc_decrypt(struct skcipher_request *req)
>         struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
>         struct skcipher_request *subreq = &rctx->subreq;
> -       int bsize = crypto_skcipher_blocksize(tfm);
> +       int bsize = crypto_skcipher_chunksize(tfm);
>         u8 d[MAX_CIPHER_BLOCKSIZE * 2] __aligned(__alignof__(u32));
>         struct scatterlist *sg;
>         unsigned int offset;
> @@ -240,7 +240,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
>         struct crypto_cts_reqctx *rctx = skcipher_request_ctx(req);
>         struct crypto_cts_ctx *ctx = crypto_skcipher_ctx(tfm);
>         struct skcipher_request *subreq = &rctx->subreq;
> -       int bsize = crypto_skcipher_blocksize(tfm);
> +       int bsize = crypto_skcipher_chunksize(tfm);
>         unsigned int nbytes = req->cryptlen;
>         unsigned int offset;
>         u8 *space;
> @@ -250,7 +250,7 @@ static int crypto_cts_decrypt(struct skcipher_request *req)
>         if (nbytes < bsize)
>                 return -EINVAL;
>
> -       if (nbytes == bsize) {
> +       if (nbytes == bsize || req->base.flags & CRYPTO_TFM_REQ_MORE) {
>                 skcipher_request_set_callback(subreq, req->base.flags,
>                                               req->base.complete,
>                                               req->base.data);
> @@ -297,7 +297,7 @@ static int crypto_cts_init_tfm(struct crypto_skcipher *tfm)
>         ctx->child = cipher;
>
>         align = crypto_skcipher_alignmask(tfm);
> -       bsize = crypto_skcipher_blocksize(cipher);
> +       bsize = crypto_skcipher_chunksize(cipher);
>         reqsize = ALIGN(sizeof(struct crypto_cts_reqctx) +
>                         crypto_skcipher_reqsize(cipher),
>                         crypto_tfm_ctx_alignment()) +
> @@ -359,11 +359,12 @@ static int crypto_cts_create(struct crypto_template *tmpl, struct rtattr **tb)
>                 goto err_free_inst;
>
>         inst->alg.base.cra_priority = alg->base.cra_priority;
> -       inst->alg.base.cra_blocksize = alg->base.cra_blocksize;
> +       inst->alg.base.cra_blocksize = 1;
>         inst->alg.base.cra_alignmask = alg->base.cra_alignmask;
>
>         inst->alg.ivsize = alg->base.cra_blocksize;
> -       inst->alg.chunksize = crypto_skcipher_alg_chunksize(alg);
> +       inst->alg.chunksize = alg->base.cra_blocksize;
> +       inst->alg.final_chunksize = inst->alg.chunksize * 2;
>         inst->alg.min_keysize = crypto_skcipher_alg_min_keysize(alg);
>         inst->alg.max_keysize = crypto_skcipher_alg_max_keysize(alg);
>

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28 11:05   ` Ard Biesheuvel
@ 2020-07-28 11:53     ` Herbert Xu
  2020-07-28 11:59       ` Ard Biesheuvel
  0 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 11:53 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, Jul 28, 2020 at 02:05:58PM +0300, Ard Biesheuvel wrote:
>
> But isn't the final chunksize a function of cryptlen? What happens if
> I try to use cts(cbc(aes)) to encrypt 16 bytes with the MORE flag, and
> <16 additional bytes as the final chunk?

The final chunksize is an attribute that the caller has to act on.
So for cts it tells the caller that it must withhold at least two
blocks (32 bytes) of data unless it is the final chunk.

Of course the implementation should not crash when given malformed
input like the ones you suggested but the content of the output will
be undefined.
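
[Illustrative only, not from the patch set: a caller honouring this contract might compute the submittable length as below, with chunksize and final_chunksize standing in for the values reported by the algorithm.]

```c
#include <stddef.h>

/* Hypothetical caller-side helper: how many bytes may go into a
 * non-final (chained) request.  At least final_chunksize bytes are
 * withheld for the final request; the rest is rounded down to a
 * multiple of chunksize. */
static size_t chainable_bytes(size_t remaining, size_t chunksize,
			      size_t final_chunksize)
{
	if (remaining <= final_chunksize)
		return 0;	/* everything left belongs to the final chunk */
	return (remaining - final_chunksize) / chunksize * chunksize;
}
```

For cts(cbc(aes)) (chunksize 16, final_chunksize 32), 100 bytes splits into a 64-byte chained request plus a 36-byte final one, while the 16-byte case above yields 0, i.e. no chaining is possible.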

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28 11:53     ` Herbert Xu
@ 2020-07-28 11:59       ` Ard Biesheuvel
  2020-07-28 12:03         ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-28 11:59 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, 28 Jul 2020 at 14:53, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Tue, Jul 28, 2020 at 02:05:58PM +0300, Ard Biesheuvel wrote:
> >
> > But isn't the final chunksize a function of cryptlen? What happens if
> > I try to use cts(cbc(aes)) to encrypt 16 bytes with the MORE flag, and
> > <16 additional bytes as the final chunk?
>
> The final chunksize is an attribute that the caller has to act on.
> So for cts it tells the caller that it must withhold at least two
> blocks (32 bytes) of data unless it is the final chunk.
>
> Of course the implementation should not crash when given malformed
> input like the ones you suggested but the content of the output will
> be undefined.
>

How is it malformed? Between 16 and 31 bytes of input is perfectly
valid for cts(cbc(aes)), and splitting it up after the first chunk
should be as well, no?

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28 11:59       ` Ard Biesheuvel
@ 2020-07-28 12:03         ` Herbert Xu
  2020-07-28 12:08           ` Ard Biesheuvel
  0 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 12:03 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, Jul 28, 2020 at 02:59:24PM +0300, Ard Biesheuvel wrote:
>
> How is it malformed? Between 16 and 31 bytes of input is perfectly
> valid for cts(cbc(aes)), and splitting it up after the first chunk
> should be as well, no?

This is the whole point of final_chunksize.  If you're going to
do chaining then you must always withhold at least final_chunksize
bytes until you're at the final chunk.

If you disobey that then you get undefined results.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28 12:03         ` Herbert Xu
@ 2020-07-28 12:08           ` Ard Biesheuvel
  2020-07-28 12:19             ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-28 12:08 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, 28 Jul 2020 at 15:03, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Tue, Jul 28, 2020 at 02:59:24PM +0300, Ard Biesheuvel wrote:
> >
> > How is it malformed? Between 16 and 31 bytes of input is perfectly
> > valid for cts(cbc(aes)), and splitting it up after the first chunk
> > should be as well, no?
>
> This is the whole point of final_chunksize.  If you're going to
> do chaining then you must always withhold at least final_chunksize
> bytes until you're at the final chunk.
>
> If you disobey that then you get undefined results.
>

Ah ok, I'm with you now.

So the contract is that using CRYPTO_TFM_REQ_MORE is only permitted if
you take the final chunksize into account. If you don't use that flag,
you can ignore it.
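
[Editorial sketch of a hypothetical caller loop combining both rules; submit() stands in for setting up and firing the actual skcipher request.]

```c
#include <stddef.h>

typedef void (*submit_fn)(size_t len, int more);

/* Drive a chained operation: non-final requests carry the MORE flag and
 * always withhold final_chunksize bytes; whatever remains goes out as the
 * final request with the flag clear. */
static void drive_chain(size_t total, size_t chunksize,
			size_t final_chunksize, submit_fn submit)
{
	while (total > final_chunksize) {
		size_t n = (total - final_chunksize) / chunksize * chunksize;

		if (!n)
			break;
		submit(n, 1);	/* CRYPTO_TFM_REQ_MORE set */
		total -= n;
	}
	submit(total, 0);	/* final chunk, flag clear */
}
```

A caller that never sets the flag simply falls through to the single final submit, which matches the "you can ignore it" case.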

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 3/31] crypto: cts - Add support for chaining
  2020-07-28 12:08           ` Ard Biesheuvel
@ 2020-07-28 12:19             ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 12:19 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, Jul 28, 2020 at 03:08:58PM +0300, Ard Biesheuvel wrote:
>
> So the contract is that using CRYPTO_TFM_REQ_MORE is only permitted if
> you take the final chunksize into account. If you don't use that flag,
> you can ignore it.

Right.

I think at least sunrpc could use this right away.  We could extend
this to algif_aead too but I wouldn't worry about it unless a real
in-kernel user like sunrpc also showed up.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero
  2020-07-28  7:18 ` [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero Herbert Xu
@ 2020-07-28 17:10   ` Eric Biggers
  2020-07-29  3:38     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Eric Biggers @ 2020-07-28 17:10 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 05:18:55PM +1000, Herbert Xu wrote:
> This patch initialises skcipher requests to zero.  This allows
> algorithms to distinguish between the first operation versus
> subsequent ones.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
> ---
> 
>  include/crypto/skcipher.h |   18 +++++++++---------
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/include/crypto/skcipher.h b/include/crypto/skcipher.h
> index c46ea1c157b29..6db5f83d6e482 100644
> --- a/include/crypto/skcipher.h
> +++ b/include/crypto/skcipher.h
> @@ -129,13 +129,14 @@ struct skcipher_alg {
>   * This performs a type-check against the "tfm" argument to make sure
>   * all users have the correct skcipher tfm for doing on-stack requests.
>   */
> -#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, tfm) \
> -	char __##name##_desc[sizeof(struct skcipher_request) + \
> -			     MAX_SYNC_SKCIPHER_REQSIZE + \
> -			     (!(sizeof((struct crypto_sync_skcipher *)1 == \
> -				       (typeof(tfm))1))) \
> -			    ] CRYPTO_MINALIGN_ATTR; \
> -	struct skcipher_request *name = (void *)__##name##_desc
> +#define SYNC_SKCIPHER_REQUEST_ON_STACK(name, sync) \
> +	struct { \
> +		struct skcipher_request req; \
> +		char ext[MAX_SYNC_SKCIPHER_REQSIZE]; \
> +	} __##name##_desc = { \
> +		.req.base.tfm = crypto_skcipher_tfm(&sync->base), \
> +	}; \
> +	struct skcipher_request *name = &__##name##_desc.req
>  
>  /**
>   * DOC: Symmetric Key Cipher API
> @@ -519,8 +520,7 @@ static inline struct skcipher_request *skcipher_request_alloc(
>  {
>  	struct skcipher_request *req;
>  
> -	req = kmalloc(sizeof(struct skcipher_request) +
> -		      crypto_skcipher_reqsize(tfm), gfp);
> +	req = kzalloc(sizeof(*req) + crypto_skcipher_reqsize(tfm), gfp);
>  
>  	if (likely(req))
>  		skcipher_request_set_tfm(req, tfm);

Does this really work?  Some users allocate memory themselves without using
*_request_alloc().

- Eric

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28  7:18 ` [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining Herbert Xu
@ 2020-07-28 17:15   ` Eric Biggers
  2020-07-28 17:22     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Eric Biggers @ 2020-07-28 17:15 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 05:18:39PM +1000, Herbert Xu wrote:
> Crypto skcipher algorithms in general allow chaining to break
> large operations into smaller ones based on multiples of the chunk
> size.  However, some algorithms don't support chaining while others
> (such as cts) only support chaining for the leading blocks.
> 
> This patch adds the necessary API support for these algorithms.  In
> particular, a new request flag CRYPTO_TFM_REQ_MORE is added to allow
> chaining for algorithms such as cts that cannot otherwise be chained.
> 
> A new algorithm attribute final_chunksize has also been added to
> indicate how many blocks at the end of a request that cannot be
> chained and therefore must be withheld if chaining is attempted.
> 
> This attribute can also be used to indicate that no chaining is
> allowed.  Its value should be set to -1 in that case.

Shouldn't chaining be disabled by default?  This is inviting bugs where drivers
don't implement chaining, but leave final_chunksize unset (0) which apparently
indicates that chaining is supported.

- Eric

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining
  2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
                   ` (30 preceding siblings ...)
  2020-07-28  7:19 ` [v3 PATCH 31/31] crypto: salsa20-generic - Add support for chaining Herbert Xu
@ 2020-07-28 17:19 ` Eric Biggers
  2020-07-29  3:40   ` Herbert Xu
  31 siblings, 1 reply; 58+ messages in thread
From: Eric Biggers @ 2020-07-28 17:19 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 05:17:46PM +1000, Herbert Xu wrote:
> This patch-set adds support to the Crypto API and algif_skcipher
> for algorithms that cannot be chained, as well as ones that can
> be chained if you withhold a certain number of blocks at the end.
> 
> The vast majority of algorithms can be chained already, e.g., cbc
> and lrw.  Everything else can either be modified to support chaining,
> e.g., chacha and xts, or they cannot chain at all, e.g., keywrap.
> 
> Some drivers that implement algorithms which can be chained with
> modification may not be able to support chaining due to hardware
> limitations.  For now they're treated the same way as ones that
> cannot be chained at all.
> 
> The algorithm arc4 has been left out of all this owing to ongoing
> discussions regarding its future.
> 

Can you elaborate on the use case for supporting chaining on algorithms that
don't currently support it?

Also, the self-tests need to be updated to test all this.

- Eric

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28 17:15   ` Eric Biggers
@ 2020-07-28 17:22     ` Herbert Xu
  2020-07-28 17:26       ` Ard Biesheuvel
  0 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 17:22 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 10:15:12AM -0700, Eric Biggers wrote:
>
> Shouldn't chaining be disabled by default?  This is inviting bugs where drivers
> don't implement chaining, but leave final_chunksize unset (0) which apparently
> indicates that chaining is supported.

I've gone through everything and the majority of algorithms do
support chaining so I think defaulting to on makes more sense.

For now we have some algorithms that can be chained but whose drivers
do not allow it; this is not something that I'd like to see in
new drivers.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28 17:22     ` Herbert Xu
@ 2020-07-28 17:26       ` Ard Biesheuvel
  2020-07-28 17:30         ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-28 17:26 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Eric Biggers, Stephan Mueller, Linux Crypto Mailing List

On Tue, 28 Jul 2020 at 20:22, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Tue, Jul 28, 2020 at 10:15:12AM -0700, Eric Biggers wrote:
> >
> > Shouldn't chaining be disabled by default?  This is inviting bugs where drivers
> > don't implement chaining, but leave final_chunksize unset (0) which apparently
> > indicates that chaining is supported.
>
> I've gone through everything and the majority of algorithms do
> support chaining so I think defaulting to on makes more sense.
>
> For now we have some algorithms that can be chained but the drivers
> do not allow it, this is not something that I'd like to see in
> new drivers.
>

So how does one allocate a tfm that supports chaining if their use
case requires it? Having different implementations of the same algo
where one does support it while the other one doesn't means we will
need some flag to request this at alloc time.


> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28 17:26       ` Ard Biesheuvel
@ 2020-07-28 17:30         ` Herbert Xu
  2020-07-28 17:46           ` Ard Biesheuvel
  0 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 17:30 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Eric Biggers, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 08:26:38PM +0300, Ard Biesheuvel wrote:
>
> So how does one allocate a tfm that supports chaining if their use
> case requires it? Having different implementations of the same algo
> where one does support it while the other one doesn't means we will
> need some flag to request this at alloc time.

Yes we could add a flag for it.  However, for the two users that
I'm looking at right now (algif_skcipher and sunrpc) this is not
required.  For algif_skcipher it'll simply fall back to the current
behaviour if chaining is not supported, while sunrpc would only
use chaining with cts where it is always supported.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28 17:30         ` Herbert Xu
@ 2020-07-28 17:46           ` Ard Biesheuvel
  2020-07-28 22:12             ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-28 17:46 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Eric Biggers, Stephan Mueller, Linux Crypto Mailing List

On Tue, 28 Jul 2020 at 20:30, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Tue, Jul 28, 2020 at 08:26:38PM +0300, Ard Biesheuvel wrote:
> >
> > So how does one allocate a tfm that supports chaining if their use
> > case requires it? Having different implementations of the same algo
> > where one does support it while the other one doesn't means we will
> > need some flag to request this at alloc time.
>
> Yes we could add a flag for it.  However, for the two users that
> I'm looking at right now (algif_skcipher and sunrpc) this is not
> required.  For algif_skcipher it'll simply fall back to the current
> behaviour if chaining is not supported, while sunrpc would only
> use chaining with cts where it is always supported.
>

Ok, now I'm confused again: if falling back to the current behavior is
acceptable for algif_skcipher, why do we need all these changes?


> Cheers,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining
  2020-07-28 17:46           ` Ard Biesheuvel
@ 2020-07-28 22:12             ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-28 22:12 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Eric Biggers, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 08:46:42PM +0300, Ard Biesheuvel wrote:
>
> > Yes we could add a flag for it.  However, for the two users that
> > I'm looking at right now (algif_skcipher and sunrpc) this is not
> > required.  For algif_skcipher it'll simply fall back to the current
> > behaviour if chaining is not supported, while sunrpc would only
> > use chaining with cts where it is always supported.
> 
> Ok, now I'm confused again: if falling back to the current behavior is
> acceptable for algif_skcipher, why do we need all these changes?

"Falling back to the current behaviour" isn't quite the right phrase.
What happens now is that algif_skcipher will try to chain everything,
which would obviously fail with such a driver.  With the patch-set
it won't try to chain and will instead return -EINVAL.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero
  2020-07-28 17:10   ` Eric Biggers
@ 2020-07-29  3:38     ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-29  3:38 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 10:10:59AM -0700, Eric Biggers wrote:
>
> Does this really work?  Some users allocate memory themselves without using
> *_request_alloc().

Yes good point.  I will instead add a second request flag used to
indicate that we should retain the internal state.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining
  2020-07-28 17:19 ` [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Eric Biggers
@ 2020-07-29  3:40   ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-29  3:40 UTC (permalink / raw)
  To: Eric Biggers; +Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List

On Tue, Jul 28, 2020 at 10:19:21AM -0700, Eric Biggers wrote:
>
> Can you elaborate on the use case for supporting chaining on algorithms that
> don't currently support it?

OK I will describe the algif issue in more detail and also add
a mention of sunrpc.  In fact I might try to convert sunrpc to use
this new API.

> Also, the self-tests need to be updated to test all this.

Yes.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 12/31] crypto: arm64/chacha - Add support for chaining
  2020-07-28  7:19 ` [v3 PATCH 12/31] crypto: arm64/chacha " Herbert Xu
@ 2020-07-29  6:16   ` Ard Biesheuvel
  2020-07-29  6:28     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Ard Biesheuvel @ 2020-07-29  6:16 UTC (permalink / raw)
  To: Herbert Xu; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, 28 Jul 2020 at 10:19, Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> As it stands chacha cannot do chaining.  That is, it has to handle
> each request as a whole.  This patch adds support for chaining when
> the CRYPTO_TFM_REQ_MORE flag is set.
>
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Only state[12] needs to be preserved, since it contains the block
counter. Everything else in the state can be derived from the IV.

So by doing the init unconditionally, and overriding state[12] to the
captured value (if it exists), we can get rid of the redundant copy of
state, which also avoids inconsistencies if IV and state are out of
sync.
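
[Editorial sketch of that suggestion, not the actual patch: a little-endian host is assumed for brevity, and chacha_init_sketch() stands in for chacha_init_generic().]

```c
#include <stdint.h>
#include <string.h>

static void chacha_init_sketch(uint32_t state[16], const uint32_t key[8],
			       const uint8_t iv[16])
{
	/* "expand 32-byte k" constants */
	static const uint32_t c[4] = {
		0x61707865, 0x3320646e, 0x79622d32, 0x6b206574,
	};

	memcpy(&state[0], c, sizeof(c));
	memcpy(&state[4], key, 8 * sizeof(uint32_t));
	memcpy(&state[12], iv, 16);	/* words 12..15: block counter + nonce */
}

/* Rebuild the full state from key and IV on every request; when
 * continuing a chained request, restore only the saved block counter
 * (word 12), since everything else is derived from the IV. */
static void chacha_continue(uint32_t state[16], const uint32_t key[8],
			    const uint8_t iv[16], int chained,
			    uint32_t saved_counter)
{
	chacha_init_sketch(state, key, iv);
	if (chained)
		state[12] = saved_counter;
}
```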

> ---
>
>  arch/arm64/crypto/chacha-neon-glue.c |   43 ++++++++++++++++++++++-------------
>  1 file changed, 28 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/crypto/chacha-neon-glue.c b/arch/arm64/crypto/chacha-neon-glue.c
> index af2bbca38e70f..d82c574ddcc00 100644
> --- a/arch/arm64/crypto/chacha-neon-glue.c
> +++ b/arch/arm64/crypto/chacha-neon-glue.c
> @@ -19,10 +19,8 @@
>   * (at your option) any later version.
>   */
>
> -#include <crypto/algapi.h>
>  #include <crypto/internal/chacha.h>
>  #include <crypto/internal/simd.h>
> -#include <crypto/internal/skcipher.h>
>  #include <linux/jump_label.h>
>  #include <linux/kernel.h>
>  #include <linux/module.h>
> @@ -101,16 +99,16 @@ void chacha_crypt_arch(u32 *state, u8 *dst, const u8 *src, unsigned int bytes,
>  }
>  EXPORT_SYMBOL(chacha_crypt_arch);
>
> -static int chacha_neon_stream_xor(struct skcipher_request *req,
> -                                 const struct chacha_ctx *ctx, const u8 *iv)
> +static int chacha_neon_stream_xor(struct skcipher_request *req, int nrounds)
>  {
> +       struct chacha_reqctx *rctx = skcipher_request_ctx(req);
>         struct skcipher_walk walk;
> -       u32 state[16];
> +       u32 *state = rctx->state;
>         int err;
>
> -       err = skcipher_walk_virt(&walk, req, false);
> +       rctx->init = req->base.flags & CRYPTO_TFM_REQ_MORE;
>
> -       chacha_init_generic(state, ctx->key, iv);
> +       err = skcipher_walk_virt(&walk, req, false);
>
>         while (walk.nbytes > 0) {
>                 unsigned int nbytes = walk.nbytes;
> @@ -122,11 +120,11 @@ static int chacha_neon_stream_xor(struct skcipher_request *req,
>                     !crypto_simd_usable()) {
>                         chacha_crypt_generic(state, walk.dst.virt.addr,
>                                              walk.src.virt.addr, nbytes,
> -                                            ctx->nrounds);
> +                                            nrounds);
>                 } else {
>                         kernel_neon_begin();
>                         chacha_doneon(state, walk.dst.virt.addr,
> -                                     walk.src.virt.addr, nbytes, ctx->nrounds);
> +                                     walk.src.virt.addr, nbytes, nrounds);
>                         kernel_neon_end();
>                 }
>                 err = skcipher_walk_done(&walk, walk.nbytes - nbytes);
> @@ -138,26 +136,38 @@ static int chacha_neon_stream_xor(struct skcipher_request *req,
>  static int chacha_neon(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +       struct chacha_reqctx *rctx = skcipher_request_ctx(req);
>         struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
>
> -       return chacha_neon_stream_xor(req, ctx, req->iv);
> +       if (!rctx->init)
> +               chacha_init_generic(rctx->state, ctx->key, req->iv);
> +
> +       return chacha_neon_stream_xor(req, ctx->nrounds);
>  }
>
>  static int xchacha_neon(struct skcipher_request *req)
>  {
>         struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +       struct chacha_reqctx *rctx = skcipher_request_ctx(req);
>         struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
> -       struct chacha_ctx subctx;
> -       u32 state[16];
> +       int nrounds = ctx->nrounds;
> +       u32 *state = rctx->state;
>         u8 real_iv[16];
> +       u32 key[8];
> +
> +       if (rctx->init)
> +               goto skip_init;
>
>         chacha_init_generic(state, ctx->key, req->iv);
> -       hchacha_block_arch(state, subctx.key, ctx->nrounds);
> -       subctx.nrounds = ctx->nrounds;
> +       hchacha_block_arch(state, key, nrounds);
>
>         memcpy(&real_iv[0], req->iv + 24, 8);
>         memcpy(&real_iv[8], req->iv + 16, 8);
> -       return chacha_neon_stream_xor(req, &subctx, real_iv);
> +
> +       chacha_init_generic(state, key, real_iv);
> +
> +skip_init:
> +       return chacha_neon_stream_xor(req, nrounds);
>  }
>
>  static struct skcipher_alg algs[] = {
> @@ -174,6 +184,7 @@ static struct skcipher_alg algs[] = {
>                 .ivsize                 = CHACHA_IV_SIZE,
>                 .chunksize              = CHACHA_BLOCK_SIZE,
>                 .walksize               = 5 * CHACHA_BLOCK_SIZE,
> +               .reqsize                = sizeof(struct chacha_reqctx),
>                 .setkey                 = chacha20_setkey,
>                 .encrypt                = chacha_neon,
>                 .decrypt                = chacha_neon,
> @@ -190,6 +201,7 @@ static struct skcipher_alg algs[] = {
>                 .ivsize                 = XCHACHA_IV_SIZE,
>                 .chunksize              = CHACHA_BLOCK_SIZE,
>                 .walksize               = 5 * CHACHA_BLOCK_SIZE,
> +               .reqsize                = sizeof(struct chacha_reqctx),
>                 .setkey                 = chacha20_setkey,
>                 .encrypt                = xchacha_neon,
>                 .decrypt                = xchacha_neon,
> @@ -206,6 +218,7 @@ static struct skcipher_alg algs[] = {
>                 .ivsize                 = XCHACHA_IV_SIZE,
>                 .chunksize              = CHACHA_BLOCK_SIZE,
>                 .walksize               = 5 * CHACHA_BLOCK_SIZE,
> +               .reqsize                = sizeof(struct chacha_reqctx),
>                 .setkey                 = chacha12_setkey,
>                 .encrypt                = xchacha_neon,
>                 .decrypt                = xchacha_neon,

^ permalink raw reply	[flat|nested] 58+ messages in thread

* Re: [v3 PATCH 12/31] crypto: arm64/chacha - Add support for chaining
  2020-07-29  6:16   ` Ard Biesheuvel
@ 2020-07-29  6:28     ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-07-29  6:28 UTC (permalink / raw)
  To: Ard Biesheuvel; +Cc: Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Wed, Jul 29, 2020 at 09:16:55AM +0300, Ard Biesheuvel wrote:
>
> Only state[12] needs to be preserved, since it contains the block
> counter. Everything else in the state can be derived from the IV.
> 
> So by doing the init unconditionally, and overriding state[12] to the
> captured value (if it exists), we can get rid of the redundant copy of
> state, which also avoids inconsistencies if IV and state are out of
> sync.

Good point.  In fact we could try to put the counter back into
the IV just like CTR.  Let me have a play with this to see what
it would look like.
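[Editorial sketch] The approach Ard describes above — init the state unconditionally from key and IV, then restore only the saved block counter — can be sketched in plain C. Helper names and layout mimic the kernel's chacha_init_generic() but are illustrative, not the actual kernel code:

```c
#include <stdint.h>
#include <string.h>

static uint32_t le32(const uint8_t *p)
{
	return (uint32_t)p[0] | (uint32_t)p[1] << 8 |
	       (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

/* Rebuild the full 16-word ChaCha state from key + IV. */
static void chacha_init_sketch(uint32_t state[16], const uint32_t key[8],
			       const uint8_t iv[16])
{
	/* "expand 32-byte k" constants */
	state[0] = 0x61707865; state[1] = 0x3320646e;
	state[2] = 0x79622d32; state[3] = 0x6b206574;
	memcpy(state + 4, key, 8 * sizeof(uint32_t));
	/* iv[0..3] carries the initial block counter, iv[4..15] the nonce */
	for (int i = 0; i < 4; i++)
		state[12 + i] = le32(iv + 4 * i);
}

/* Unconditional init, then override only state[12] when chaining. */
static void chacha_resume_sketch(uint32_t state[16], const uint32_t key[8],
				 const uint8_t iv[16], int chained,
				 uint32_t saved_counter)
{
	chacha_init_sketch(state, key, iv);
	if (chained)
		state[12] = saved_counter; /* only the counter must survive */
}
```

Doing the init unconditionally keeps IV and state consistent by construction; the only piece of chained state left is the 32-bit counter in state[12].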

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation
  2020-07-28  7:19 ` [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation Herbert Xu
@ 2020-08-06 19:16   ` John Allen
  0 siblings, 0 replies; 58+ messages in thread
From: John Allen @ 2020-08-06 19:16 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, Jul 28, 2020 at 05:19:26PM +1000, Herbert Xu wrote:
> The rfc3686 implementation in ccp is pretty much the same
> as the generic rfc3686 wrapper.  So it can simply be removed to
> reduce complexity.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>

Acked-by: John Allen <john.allen@amd.com>
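[Editorial sketch] For reference, the "generic rfc3686 wrapper" that these hardware implementations duplicate amounts to plain CTR mode with a fixed counter-block layout. Per RFC 3686, the initial 16-byte counter block is the 4-byte nonce (split off the key at setkey time), the 8-byte per-request IV, and a 32-bit big-endian block counter starting at 1. The helper name below is illustrative, not the kernel's API:

```c
#include <stdint.h>
#include <string.h>

#define CTR_RFC3686_NONCE_SIZE	4
#define CTR_RFC3686_IV_SIZE	8
#define AES_BLOCK_SIZE		16

/* Assemble the RFC 3686 initial counter block:
 * nonce (4) || IV (8) || big-endian counter = 1 (4). */
static void rfc3686_counter_block(uint8_t block[AES_BLOCK_SIZE],
				  const uint8_t nonce[CTR_RFC3686_NONCE_SIZE],
				  const uint8_t iv[CTR_RFC3686_IV_SIZE])
{
	memcpy(block, nonce, CTR_RFC3686_NONCE_SIZE);
	memcpy(block + CTR_RFC3686_NONCE_SIZE, iv, CTR_RFC3686_IV_SIZE);
	block[12] = 0;
	block[13] = 0;
	block[14] = 0;
	block[15] = 1;	/* initial counter value is 1 */
}
```

Since the wrapper only rewrites the IV like this before delegating to ctr(aes), a driver-local copy adds little beyond what the generic template already provides — the rationale for removing the ccp version.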


* Re: [v3 PATCH 10/31] crypto: chacha-generic - Add support for chaining
  2020-07-28  7:19 ` [v3 PATCH 10/31] crypto: chacha-generic " Herbert Xu
@ 2020-08-10 15:20   ` Horia Geantă
  2020-08-11  0:57     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Horia Geantă @ 2020-08-10 15:20 UTC (permalink / raw)
  To: Herbert Xu, Ard Biesheuvel, Stephan Mueller,
	Linux Crypto Mailing List, Eric Biggers

On 7/28/2020 10:19 AM, Herbert Xu wrote:
> @@ -40,30 +39,41 @@ static int chacha_stream_xor(struct skcipher_request *req,
>  static int crypto_chacha_crypt(struct skcipher_request *req)
>  {
>  	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> +	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
>  	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
>  
> -	return chacha_stream_xor(req, ctx, req->iv);
> +	if (!rctx->init)
> +		chacha_init_generic(rctx->state, ctx->key, req->iv);
It would probably be better to rename "init" to "no_init" or "final".

Horia


* Re: [v3 PATCH 16/31] crypto: caam/qi2 - Set final_chunksize on chacha
  2020-07-28  7:19 ` [v3 PATCH 16/31] crypto: caam/qi2 " Herbert Xu
@ 2020-08-10 15:24   ` Horia Geantă
  0 siblings, 0 replies; 58+ messages in thread
From: Horia Geantă @ 2020-08-10 15:24 UTC (permalink / raw)
  To: Herbert Xu, Ard Biesheuvel, Stephan Mueller,
	Linux Crypto Mailing List, Eric Biggers

On 7/28/2020 10:19 AM, Herbert Xu wrote:
> The chacha implementation in caam/qi2 does not support partial
> operation and therefore this patch sets its final_chunksize to -1
> to mark this fact.
> 
> This patch also sets the chunksize to the chacha block size.
> 
> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Reviewed-by: Horia Geantă <horia.geanta@nxp.com>

Thanks,
Horia
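[Editorial sketch] The final_chunksize convention used throughout this series (per the cover letter and patch 1) is: 0 means fully chainable, N > 0 means chaining is possible if the last N bytes are withheld until the final operation, and -1 means no chaining at all. The helper below is hypothetical and only illustrates those semantics:

```c
/* Hypothetical helper: given an algorithm's final_chunksize and a
 * request of `total` bytes, return how many bytes may be processed now
 * while keeping the request chainable, or -1 if the whole request must
 * be submitted in a single operation. */
static long chainable_bytes(int final_chunksize, unsigned long total)
{
	if (final_chunksize < 0)
		return -1;	/* e.g. caam/qi2 chacha: cannot chain */
	if (total <= (unsigned long)final_chunksize)
		return 0;	/* everything must be withheld for the end */
	return (long)(total - (unsigned long)final_chunksize);
}
```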


* Re: [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations
  2020-07-28  7:19 ` [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations Herbert Xu
@ 2020-08-10 16:47   ` Horia Geantă
  2020-08-11  0:59     ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Horia Geantă @ 2020-08-10 16:47 UTC (permalink / raw)
  To: Herbert Xu, Ard Biesheuvel, Stephan Mueller,
	Linux Crypto Mailing List, Eric Biggers

On 7/28/2020 10:19 AM, Herbert Xu wrote:
> The rfc3686 implementations in caam are pretty much the same
> as the generic rfc3686 wrapper.  So they can simply be removed
> to reduce complexity.
> 
I would prefer keeping the caam rfc3686(ctr(aes)) implementation.
It's almost cost-free when compared to ctr(aes), since:
-there are no (accelerator-)external DMAs generated
-shared descriptors are constructed at .setkey time
-code complexity is manageable

Thanks,
Horia


* Re: [v3 PATCH 10/31] crypto: chacha-generic - Add support for chaining
  2020-08-10 15:20   ` Horia Geantă
@ 2020-08-11  0:57     ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-08-11  0:57 UTC (permalink / raw)
  To: Horia Geantă
  Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Mon, Aug 10, 2020 at 06:20:16PM +0300, Horia Geantă wrote:
> On 7/28/2020 10:19 AM, Herbert Xu wrote:
> > @@ -40,30 +39,41 @@ static int chacha_stream_xor(struct skcipher_request *req,
> >  static int crypto_chacha_crypt(struct skcipher_request *req)
> >  {
> >  	struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
> > +	struct chacha_reqctx *rctx = skcipher_request_ctx(req);
> >  	struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
> >  
> > -	return chacha_stream_xor(req, ctx, req->iv);
> > +	if (!rctx->init)
> > +		chacha_init_generic(rctx->state, ctx->key, req->iv);
> It would probably be better to rename "init" to "no_init" or "final".

This turns out to be broken so it'll disappear anyway.  It'll
be replaced with a request flag instead.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations
  2020-08-10 16:47   ` Horia Geantă
@ 2020-08-11  0:59     ` Herbert Xu
  2020-08-11  7:32       ` Horia Geantă
  0 siblings, 1 reply; 58+ messages in thread
From: Herbert Xu @ 2020-08-11  0:59 UTC (permalink / raw)
  To: Horia Geantă
  Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Mon, Aug 10, 2020 at 07:47:40PM +0300, Horia Geantă wrote:
>
> I would prefer keeping the caam rfc3686(ctr(aes)) implementation.
> It's almost cost-free when compared to ctr(aes), since:
> -there are no (accelerator-)external DMAs generated
> -shared descriptors are constructed at .setkey time
> -code complexity is manageable

The reason I'm removing it is because it doesn't support chaining.
We could keep it by adding chaining support which should be trivial
but it's something that I can't test.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


* Re: [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations
  2020-08-11  0:59     ` Herbert Xu
@ 2020-08-11  7:32       ` Horia Geantă
  2020-08-11  7:34         ` Herbert Xu
  0 siblings, 1 reply; 58+ messages in thread
From: Horia Geantă @ 2020-08-11  7:32 UTC (permalink / raw)
  To: Herbert Xu
  Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On 8/11/2020 3:59 AM, Herbert Xu wrote:
> On Mon, Aug 10, 2020 at 07:47:40PM +0300, Horia Geantă wrote:
>>
>> I would prefer keeping the caam rfc3686(ctr(aes)) implementation.
>> It's almost cost-free when compared to ctr(aes), since:
>> -there are no (accelerator-)external DMAs generated
>> -shared descriptors are constructed at .setkey time
>> -code complexity is manageable
> 
> The reason I'm removing it is because it doesn't support chaining.
> We could keep it by adding chaining support which should be trivial
> but it's something that I can't test.
> 
Would it be possible in the meantime to set final_chunksize = -1,
then I'd follow-up with chaining support?

Thanks,
Horia


* Re: [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations
  2020-08-11  7:32       ` Horia Geantă
@ 2020-08-11  7:34         ` Herbert Xu
  0 siblings, 0 replies; 58+ messages in thread
From: Herbert Xu @ 2020-08-11  7:34 UTC (permalink / raw)
  To: Horia Geantă
  Cc: Ard Biesheuvel, Stephan Mueller, Linux Crypto Mailing List, Eric Biggers

On Tue, Aug 11, 2020 at 10:32:28AM +0300, Horia Geantă wrote:
>
> Would it be possible in the meantime to set final_chunksize = -1,
> then I'd follow-up with chaining support?

I don't think there's much point.  IPsec is going to be using
the authenc entry anyway so this doesn't make that much of a
difference.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt



Thread overview: 58+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-28  7:17 [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 1/31] crypto: skcipher - Add final chunk size field for chaining Herbert Xu
2020-07-28 17:15   ` Eric Biggers
2020-07-28 17:22     ` Herbert Xu
2020-07-28 17:26       ` Ard Biesheuvel
2020-07-28 17:30         ` Herbert Xu
2020-07-28 17:46           ` Ard Biesheuvel
2020-07-28 22:12             ` Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 2/31] crypto: algif_skcipher - Add support for final_chunksize Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 3/31] crypto: cts - Add support for chaining Herbert Xu
2020-07-28 11:05   ` Ard Biesheuvel
2020-07-28 11:53     ` Herbert Xu
2020-07-28 11:59       ` Ard Biesheuvel
2020-07-28 12:03         ` Herbert Xu
2020-07-28 12:08           ` Ard Biesheuvel
2020-07-28 12:19             ` Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 4/31] crypto: arm64/aes-glue - Add support for chaining CTS Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 5/31] crypto: nitrox " Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 6/31] crypto: ccree " Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 7/31] crypto: skcipher - Add alg reqsize field Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 8/31] crypto: skcipher - Initialise requests to zero Herbert Xu
2020-07-28 17:10   ` Eric Biggers
2020-07-29  3:38     ` Herbert Xu
2020-07-28  7:18 ` [v3 PATCH 9/31] crypto: cryptd - Add support for chaining Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 10/31] crypto: chacha-generic " Herbert Xu
2020-08-10 15:20   ` Horia Geantă
2020-08-11  0:57     ` Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 11/31] crypto: arm/chacha " Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 12/31] crypto: arm64/chacha " Herbert Xu
2020-07-29  6:16   ` Ard Biesheuvel
2020-07-29  6:28     ` Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 13/31] crypto: mips/chacha " Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 14/31] crypto: x86/chacha " Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 15/31] crypto: inside-secure - Set final_chunksize on chacha Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 16/31] crypto: caam/qi2 " Herbert Xu
2020-08-10 15:24   ` Horia Geantă
2020-07-28  7:19 ` [v3 PATCH 17/31] crypto: ctr - Allow rfc3686 to be chained Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 18/31] crypto: crypto4xx - Remove rfc3686 implementation Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 19/31] crypto: caam - Remove rfc3686 implementations Herbert Xu
2020-08-10 16:47   ` Horia Geantă
2020-08-11  0:59     ` Herbert Xu
2020-08-11  7:32       ` Horia Geantă
2020-08-11  7:34         ` Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 20/31] crypto: nitrox - Set final_chunksize on rfc3686 Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 21/31] crypto: ccp - Remove rfc3686 implementation Herbert Xu
2020-08-06 19:16   ` John Allen
2020-07-28  7:19 ` [v3 PATCH 22/31] crypto: chelsio " Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 23/31] crypto: inside-secure - Set final_chunksize on rfc3686 Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 24/31] crypto: ixp4xx - Remove rfc3686 implementation Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 25/31] crypto: nx - Set final_chunksize on rfc3686 Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 26/31] crypto: essiv - Set final_chunksize Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 27/31] crypto: simd - Add support for chaining Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 28/31] crypto: arm64/essiv - Set final_chunksize Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 29/31] crypto: ccree - Set final_chunksize on essiv Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 30/31] crypto: kw - Set final_chunksize Herbert Xu
2020-07-28  7:19 ` [v3 PATCH 31/31] crypto: salsa20-generic - Add support for chaining Herbert Xu
2020-07-28 17:19 ` [v3 PATCH 0/31] crypto: skcipher - Add support for no chaining and partial chaining Eric Biggers
2020-07-29  3:40   ` Herbert Xu
