linux-crypto.vger.kernel.org archive mirror
* [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests
@ 2021-07-20  8:55 Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 1/9] crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm Tudor Ambarus
                   ` (9 more replies)
  0 siblings, 10 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

The extra run-time crypto self-tests hit some corner cases that were
not handled in the drivers. Fix those corner cases and propose some
cleanup patches.
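
For context, one minimal configuration sketch (an assumption, not part
of the series) that enables the extra run-time self-tests which
exercise these corner cases; adjust per platform as needed:

# .config fragment
CONFIG_CRYPTO_DEV_ATMEL_AES=y
CONFIG_CRYPTO_DEV_ATMEL_TDES=y
CONFIG_CRYPTO_MANAGER_EXTRA_TESTS=y
# CONFIG_CRYPTO_MANAGER_DISABLE_TESTS is not set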

Tudor Ambarus (9):
  crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm
  crypto: atmel-tdes: Handle error messages
  crypto: atmel-aes: Add blocksize constraint for ECB and CBC modes
  crypto: atmel-aes: Add XTS input length constraint
  crypto: atmel-aes: Add NIST 800-38A's zero length cryptlen constraint
  crypto: atmel-tdes: Add FIPS81's zero length cryptlen constraint
  crypto: atmel-{aes, tdes}: Set OFB's blocksize to 1
  crypto: atmel-aes: Add fallback to XTS software implementation
  crypto: atmel-aes: Allocate aes dev at tfm init time

 drivers/crypto/atmel-aes.c  | 146 +++++++++++++++++++++++++++---------
 drivers/crypto/atmel-tdes.c |  66 +++++++---------
 2 files changed, 138 insertions(+), 74 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/9] crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 2/9] crypto: atmel-tdes: Handle error messages Tudor Ambarus
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

The tdes dev gets allocated to the tfm at alg->init time, so there's
no need to overwrite the tdes_dd pointer afterwards.
There's a single TDES IP per SoC, so the first entry of
atmel_tdes.dev_list is simply chosen; there's no point in counting
tfms in order to distribute them evenly across TDES IPs, because
there's only one. At alg->init time ctx->dd is still NULL, so there's
no need to check its value before requesting a tdes dev.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-tdes.c | 28 +++++++++-------------------
 1 file changed, 9 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 6f01c51e3c37..dda70dbe0838 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -196,23 +196,15 @@ static void atmel_tdes_write_n(struct atmel_tdes_dev *dd, u32 offset,
 		atmel_tdes_write(dd, offset, *value);
 }
 
-static struct atmel_tdes_dev *atmel_tdes_find_dev(struct atmel_tdes_ctx *ctx)
+static struct atmel_tdes_dev *atmel_tdes_dev_alloc(void)
 {
-	struct atmel_tdes_dev *tdes_dd = NULL;
-	struct atmel_tdes_dev *tmp;
+	struct atmel_tdes_dev *tdes_dd;
 
 	spin_lock_bh(&atmel_tdes.lock);
-	if (!ctx->dd) {
-		list_for_each_entry(tmp, &atmel_tdes.dev_list, list) {
-			tdes_dd = tmp;
-			break;
-		}
-		ctx->dd = tdes_dd;
-	} else {
-		tdes_dd = ctx->dd;
-	}
+	/* One TDES IP per SoC. */
+	tdes_dd = list_first_entry_or_null(&atmel_tdes.dev_list,
+					   struct atmel_tdes_dev, list);
 	spin_unlock_bh(&atmel_tdes.lock);
-
 	return tdes_dd;
 }
 
@@ -646,7 +638,6 @@ static int atmel_tdes_handle_queue(struct atmel_tdes_dev *dd,
 	rctx->mode &= TDES_FLAGS_MODE_MASK;
 	dd->flags = (dd->flags & ~TDES_FLAGS_MODE_MASK) | rctx->mode;
 	dd->ctx = ctx;
-	ctx->dd = dd;
 
 	err = atmel_tdes_write_ctrl(dd);
 	if (!err)
@@ -897,14 +888,13 @@ static int atmel_tdes_ofb_decrypt(struct skcipher_request *req)
 static int atmel_tdes_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx(tfm);
-	struct atmel_tdes_dev *dd;
-
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_tdes_reqctx));
 
-	dd = atmel_tdes_find_dev(ctx);
-	if (!dd)
+	ctx->dd = atmel_tdes_dev_alloc();
+	if (!ctx->dd)
 		return -ENODEV;
 
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_tdes_reqctx));
+
 	return 0;
 }
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 2/9] crypto: atmel-tdes: Handle error messages
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 1/9] crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 3/9] crypto: atmel-aes: Add blocksize constraint for ECB and CBC modes Tudor Ambarus
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

Downgrade all run-time error messages to dev_dbg so that we don't
pollute the console. All probe error messages are kept at dev_err.
Get rid of pr_err and use dev_dbg instead, so that we know which
device the error comes from.
The dma_mapping_error() return code was being overwritten; use the
error code that the function returns instead.
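
For what it's worth, the downgraded dev_dbg() messages can still be
seen at run time via dynamic debug, assuming CONFIG_DYNAMIC_DEBUG is
enabled and debugfs is mounted:

# enable all dev_dbg() call sites in this driver
echo 'file atmel-tdes.c +p' > /sys/kernel/debug/dynamic_debug/control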

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-tdes.c | 33 ++++++++++++++++-----------------
 1 file changed, 16 insertions(+), 17 deletions(-)

diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index dda70dbe0838..abbf1b7a75ab 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -312,7 +312,7 @@ static int atmel_tdes_crypt_pdc_stop(struct atmel_tdes_dev *dd)
 				dd->buf_out, dd->buflen, dd->dma_size, 1);
 		if (count != dd->dma_size) {
 			err = -EINVAL;
-			pr_err("not all data converted: %zu\n", count);
+			dev_dbg(dd->dev, "not all data converted: %zu\n", count);
 		}
 	}
 
@@ -329,24 +329,24 @@ static int atmel_tdes_buff_init(struct atmel_tdes_dev *dd)
 	dd->buflen &= ~(DES_BLOCK_SIZE - 1);
 
 	if (!dd->buf_in || !dd->buf_out) {
-		dev_err(dd->dev, "unable to alloc pages.\n");
+		dev_dbg(dd->dev, "unable to alloc pages.\n");
 		goto err_alloc;
 	}
 
 	/* MAP here */
 	dd->dma_addr_in = dma_map_single(dd->dev, dd->buf_in,
 					dd->buflen, DMA_TO_DEVICE);
-	if (dma_mapping_error(dd->dev, dd->dma_addr_in)) {
-		dev_err(dd->dev, "dma %zd bytes error\n", dd->buflen);
-		err = -EINVAL;
+	err = dma_mapping_error(dd->dev, dd->dma_addr_in);
+	if (err) {
+		dev_dbg(dd->dev, "dma %zd bytes error\n", dd->buflen);
 		goto err_map_in;
 	}
 
 	dd->dma_addr_out = dma_map_single(dd->dev, dd->buf_out,
 					dd->buflen, DMA_FROM_DEVICE);
-	if (dma_mapping_error(dd->dev, dd->dma_addr_out)) {
-		dev_err(dd->dev, "dma %zd bytes error\n", dd->buflen);
-		err = -EINVAL;
+	err = dma_mapping_error(dd->dev, dd->dma_addr_out);
+	if (err) {
+		dev_dbg(dd->dev, "dma %zd bytes error\n", dd->buflen);
 		goto err_map_out;
 	}
 
@@ -359,8 +359,6 @@ static int atmel_tdes_buff_init(struct atmel_tdes_dev *dd)
 err_alloc:
 	free_page((unsigned long)dd->buf_out);
 	free_page((unsigned long)dd->buf_in);
-	if (err)
-		pr_err("error: %d\n", err);
 	return err;
 }
 
@@ -512,14 +510,14 @@ static int atmel_tdes_crypt_start(struct atmel_tdes_dev *dd)
 
 		err = dma_map_sg(dd->dev, dd->in_sg, 1, DMA_TO_DEVICE);
 		if (!err) {
-			dev_err(dd->dev, "dma_map_sg() error\n");
+			dev_dbg(dd->dev, "dma_map_sg() error\n");
 			return -EINVAL;
 		}
 
 		err = dma_map_sg(dd->dev, dd->out_sg, 1,
 				DMA_FROM_DEVICE);
 		if (!err) {
-			dev_err(dd->dev, "dma_map_sg() error\n");
+			dev_dbg(dd->dev, "dma_map_sg() error\n");
 			dma_unmap_sg(dd->dev, dd->in_sg, 1,
 				DMA_TO_DEVICE);
 			return -EINVAL;
@@ -670,7 +668,7 @@ static int atmel_tdes_crypt_dma_stop(struct atmel_tdes_dev *dd)
 				dd->buf_out, dd->buflen, dd->dma_size, 1);
 			if (count != dd->dma_size) {
 				err = -EINVAL;
-				pr_err("not all data converted: %zu\n", count);
+				dev_dbg(dd->dev, "not all data converted: %zu\n", count);
 			}
 		}
 	}
@@ -682,11 +680,12 @@ static int atmel_tdes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct atmel_tdes_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct atmel_tdes_reqctx *rctx = skcipher_request_ctx(req);
+	struct device *dev = ctx->dd->dev;
 
 	switch (mode & TDES_FLAGS_OPMODE_MASK) {
 	case TDES_FLAGS_CFB8:
 		if (!IS_ALIGNED(req->cryptlen, CFB8_BLOCK_SIZE)) {
-			pr_err("request size is not exact amount of CFB8 blocks\n");
+			dev_dbg(dev, "request size is not exact amount of CFB8 blocks\n");
 			return -EINVAL;
 		}
 		ctx->block_size = CFB8_BLOCK_SIZE;
@@ -694,7 +693,7 @@ static int atmel_tdes_crypt(struct skcipher_request *req, unsigned long mode)
 
 	case TDES_FLAGS_CFB16:
 		if (!IS_ALIGNED(req->cryptlen, CFB16_BLOCK_SIZE)) {
-			pr_err("request size is not exact amount of CFB16 blocks\n");
+			dev_dbg(dev, "request size is not exact amount of CFB16 blocks\n");
 			return -EINVAL;
 		}
 		ctx->block_size = CFB16_BLOCK_SIZE;
@@ -702,7 +701,7 @@ static int atmel_tdes_crypt(struct skcipher_request *req, unsigned long mode)
 
 	case TDES_FLAGS_CFB32:
 		if (!IS_ALIGNED(req->cryptlen, CFB32_BLOCK_SIZE)) {
-			pr_err("request size is not exact amount of CFB32 blocks\n");
+			dev_dbg(dev, "request size is not exact amount of CFB32 blocks\n");
 			return -EINVAL;
 		}
 		ctx->block_size = CFB32_BLOCK_SIZE;
@@ -710,7 +709,7 @@ static int atmel_tdes_crypt(struct skcipher_request *req, unsigned long mode)
 
 	default:
 		if (!IS_ALIGNED(req->cryptlen, DES_BLOCK_SIZE)) {
-			pr_err("request size is not exact amount of DES blocks\n");
+			dev_dbg(dev, "request size is not exact amount of DES blocks\n");
 			return -EINVAL;
 		}
 		ctx->block_size = DES_BLOCK_SIZE;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 3/9] crypto: atmel-aes: Add blocksize constraint for ECB and CBC modes
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 1/9] crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 2/9] crypto: atmel-tdes: Handle error messages Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 4/9] crypto: atmel-aes: Add XTS input length constraint Tudor Ambarus
                   ` (6 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

NIST SP 800-38A requires, for the ECB and CBC modes, that the total
number of bits in the plaintext be a multiple of the block size.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index b1d286004295..9c6d80d1d7a0 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -1089,6 +1089,11 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct atmel_aes_base_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct atmel_aes_reqctx *rctx;
 	struct atmel_aes_dev *dd;
+	u32 opmode = mode & AES_FLAGS_OPMODE_MASK;
+
+	if ((opmode == AES_FLAGS_ECB || opmode == AES_FLAGS_CBC) &&
+	    !IS_ALIGNED(req->cryptlen, crypto_skcipher_blocksize(skcipher)))
+		return -EINVAL;
 
 	switch (mode & AES_FLAGS_OPMODE_MASK) {
 	case AES_FLAGS_CFB8:
@@ -1120,7 +1125,7 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	rctx = skcipher_request_ctx(req);
 	rctx->mode = mode;
 
-	if ((mode & AES_FLAGS_OPMODE_MASK) != AES_FLAGS_ECB &&
+	if (opmode != AES_FLAGS_ECB &&
 	    !(mode & AES_FLAGS_ENCRYPT) && req->src == req->dst) {
 		unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
 
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 4/9] crypto: atmel-aes: Add XTS input length constraint
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (2 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 3/9] crypto: atmel-aes: Add blocksize constraint for ECB and CBC modes Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 5/9] crypto: atmel-aes: Add NIST 800-38A's zero length cryptlen constraint Tudor Ambarus
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

An input length smaller than the block size does not make sense for XTS.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 9c6d80d1d7a0..4e9515e8dd25 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -1091,6 +1091,9 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct atmel_aes_dev *dd;
 	u32 opmode = mode & AES_FLAGS_OPMODE_MASK;
 
+	if (opmode == AES_FLAGS_XTS && req->cryptlen < XTS_BLOCK_SIZE)
+		return -EINVAL;
+
 	if ((opmode == AES_FLAGS_ECB || opmode == AES_FLAGS_CBC) &&
 	    !IS_ALIGNED(req->cryptlen, crypto_skcipher_blocksize(skcipher)))
 		return -EINVAL;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 5/9] crypto: atmel-aes: Add NIST 800-38A's zero length cryptlen constraint
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (3 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 4/9] crypto: atmel-aes: Add XTS input length constraint Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 6/9] crypto: atmel-tdes: Add FIPS81's " Tudor Ambarus
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

NIST SP 800-38A requires, for the ECB, CBC, CFB, OFB and CTR modes,
that the plaintext and ciphertext have a positive integer length.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 4e9515e8dd25..8ea873bf6b86 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -1094,6 +1094,13 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	if (opmode == AES_FLAGS_XTS && req->cryptlen < XTS_BLOCK_SIZE)
 		return -EINVAL;
 
+	/*
+	 * ECB, CBC, CFB, OFB and CTR modes require the plaintext and ciphertext
+	 * to have a positive integer length.
+	 */
+	if (!req->cryptlen && opmode != AES_FLAGS_XTS)
+		return 0;
+
 	if ((opmode == AES_FLAGS_ECB || opmode == AES_FLAGS_CBC) &&
 	    !IS_ALIGNED(req->cryptlen, crypto_skcipher_blocksize(skcipher)))
 		return -EINVAL;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 6/9] crypto: atmel-tdes: Add FIPS81's zero length cryptlen constraint
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (4 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 5/9] crypto: atmel-aes: Add NIST 800-38A's zero length cryptlen constraint Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 7/9] crypto: atmel-{aes, tdes}: Set OFB's blocksize to 1 Tudor Ambarus
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

FIPS 81 requires, for the ECB, CBC, CFB and OFB modes, that the
plaintext and ciphertext have a positive integer length.
Add this constraint and just return 0 for a zero-length cryptlen.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-tdes.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index abbf1b7a75ab..8380e0ab149a 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -682,6 +682,9 @@ static int atmel_tdes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct atmel_tdes_reqctx *rctx = skcipher_request_ctx(req);
 	struct device *dev = ctx->dd->dev;
 
+	if (!req->cryptlen)
+		return 0;
+
 	switch (mode & TDES_FLAGS_OPMODE_MASK) {
 	case TDES_FLAGS_CFB8:
 		if (!IS_ALIGNED(req->cryptlen, CFB8_BLOCK_SIZE)) {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 7/9] crypto: atmel-{aes, tdes}: Set OFB's blocksize to 1
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (5 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 6/9] crypto: atmel-tdes: Add FIPS81's " Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 8/9] crypto: atmel-aes: Add fallback to XTS software implementation Tudor Ambarus
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

Set cra_blocksize to 1 to indicate OFB is a stream cipher.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c  | 2 +-
 drivers/crypto/atmel-tdes.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 8ea873bf6b86..9ec007b4f8fc 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -1305,7 +1305,7 @@ static struct skcipher_alg aes_algs[] = {
 {
 	.base.cra_name		= "ofb(aes)",
 	.base.cra_driver_name	= "atmel-ofb-aes",
-	.base.cra_blocksize	= AES_BLOCK_SIZE,
+	.base.cra_blocksize	= 1,
 	.base.cra_ctxsize	= sizeof(struct atmel_aes_ctx),
 
 	.init			= atmel_aes_init_tfm,
diff --git a/drivers/crypto/atmel-tdes.c b/drivers/crypto/atmel-tdes.c
index 8380e0ab149a..e30786ec9f2d 100644
--- a/drivers/crypto/atmel-tdes.c
+++ b/drivers/crypto/atmel-tdes.c
@@ -991,7 +991,7 @@ static struct skcipher_alg tdes_algs[] = {
 {
 	.base.cra_name		= "ofb(des)",
 	.base.cra_driver_name	= "atmel-ofb-des",
-	.base.cra_blocksize	= DES_BLOCK_SIZE,
+	.base.cra_blocksize	= 1,
 	.base.cra_alignmask	= 0x7,
 
 	.min_keysize		= DES_KEY_SIZE,
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 8/9] crypto: atmel-aes: Add fallback to XTS software implementation
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (6 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 7/9] crypto: atmel-{aes, tdes}: Set OFB's blocksize to 1 Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-20  8:55 ` [PATCH 9/9] crypto: atmel-aes: Allocate aes dev at tfm init time Tudor Ambarus
  2021-07-30  3:10 ` [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Herbert Xu
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

The hardware supports XTS only for input lengths that are a multiple
of the 128-bit block size. Add a fallback to the software
implementation when the last block is shorter than 128 bits.
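
As a rough illustration (not part of this patch), a generic skcipher
user could now submit an xts(aes) request whose length is not a
multiple of the 16-byte block and have it served by the fallback,
assuming atmel-xts-aes is the implementation selected for xts(aes);
the function name and buffer sizes below are made up for the example:

/*
 * Illustrative sketch only: encrypt 17 bytes with xts(aes), i.e. one
 * full AES block plus a 1-byte tail. With the fallback in place the
 * driver hands this request to the software XTS implementation
 * instead of the hardware, which only handles full 128-bit blocks.
 */
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/random.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int xts_tail_demo(void)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	DECLARE_CRYPTO_WAIT(wait);
	struct scatterlist sg;
	u8 key[64], iv[16] = { 0 };
	u8 *buf;
	int err;

	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* Two AES-256 keys; a random key is fine for the demo. */
	get_random_bytes(key, sizeof(key));
	err = crypto_skcipher_setkey(tfm, key, sizeof(key));
	if (err)
		goto free_tfm;

	/* DMA-safe buffer: one 16-byte block plus a 1-byte tail. */
	buf = kzalloc(17, GFP_KERNEL);
	if (!buf) {
		err = -ENOMEM;
		goto free_tfm;
	}

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto free_buf;
	}

	sg_init_one(&sg, buf, 17);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, 17, iv);

	/* 17 is not a multiple of XTS_BLOCK_SIZE: served by the fallback. */
	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
free_buf:
	kfree(buf);
free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}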

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c | 55 +++++++++++++++++++++++++++++++++++---
 1 file changed, 51 insertions(+), 4 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index 9ec007b4f8fc..e74fcaac551e 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -143,6 +143,7 @@ struct atmel_aes_xts_ctx {
 	struct atmel_aes_base_ctx	base;
 
 	u32			key2[AES_KEYSIZE_256 / sizeof(u32)];
+	struct crypto_skcipher *fallback_tfm;
 };
 
 #if IS_ENABLED(CONFIG_CRYPTO_DEV_ATMEL_AUTHENC)
@@ -155,6 +156,7 @@ struct atmel_aes_authenc_ctx {
 struct atmel_aes_reqctx {
 	unsigned long		mode;
 	u8			lastc[AES_BLOCK_SIZE];
+	struct skcipher_request fallback_req;
 };
 
 #if IS_ENABLED(CONFIG_CRYPTO_DEV_ATMEL_AUTHENC)
@@ -1083,6 +1085,22 @@ static int atmel_aes_ctr_start(struct atmel_aes_dev *dd)
 	return atmel_aes_ctr_transfer(dd);
 }
 
+static int atmel_aes_xts_fallback(struct skcipher_request *req, bool enc)
+{
+	struct atmel_aes_reqctx *rctx = skcipher_request_ctx(req);
+	struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(
+			crypto_skcipher_reqtfm(req));
+
+	skcipher_request_set_tfm(&rctx->fallback_req, ctx->fallback_tfm);
+	skcipher_request_set_callback(&rctx->fallback_req, req->base.flags,
+				      req->base.complete, req->base.data);
+	skcipher_request_set_crypt(&rctx->fallback_req, req->src, req->dst,
+				   req->cryptlen, req->iv);
+
+	return enc ? crypto_skcipher_encrypt(&rctx->fallback_req) :
+		     crypto_skcipher_decrypt(&rctx->fallback_req);
+}
+
 static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
@@ -1091,8 +1109,14 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct atmel_aes_dev *dd;
 	u32 opmode = mode & AES_FLAGS_OPMODE_MASK;
 
-	if (opmode == AES_FLAGS_XTS && req->cryptlen < XTS_BLOCK_SIZE)
-		return -EINVAL;
+	if (opmode == AES_FLAGS_XTS) {
+		if (req->cryptlen < XTS_BLOCK_SIZE)
+			return -EINVAL;
+
+		if (!IS_ALIGNED(req->cryptlen, XTS_BLOCK_SIZE))
+			return atmel_aes_xts_fallback(req,
+						      mode & AES_FLAGS_ENCRYPT);
+	}
 
 	/*
 	 * ECB, CBC, CFB, OFB or CTR mode require the plaintext and ciphertext
@@ -1864,6 +1888,13 @@ static int atmel_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,
 	if (err)
 		return err;
 
+	crypto_skcipher_clear_flags(ctx->fallback_tfm, CRYPTO_TFM_REQ_MASK);
+	crypto_skcipher_set_flags(ctx->fallback_tfm, tfm->base.crt_flags &
+				  CRYPTO_TFM_REQ_MASK);
+	err = crypto_skcipher_setkey(ctx->fallback_tfm, key, keylen);
+	if (err)
+		return err;
+
 	memcpy(ctx->base.key, key, keylen/2);
 	memcpy(ctx->key2, key + keylen/2, keylen/2);
 	ctx->base.keylen = keylen/2;
@@ -1884,18 +1915,33 @@ static int atmel_aes_xts_decrypt(struct skcipher_request *req)
 static int atmel_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
 
-	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+	ctx->fallback_tfm = crypto_alloc_skcipher(tfm_name, 0,
+						  CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(ctx->fallback_tfm))
+		return PTR_ERR(ctx->fallback_tfm);
+
+	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx) +
+				    crypto_skcipher_reqsize(ctx->fallback_tfm));
 	ctx->base.start = atmel_aes_xts_start;
 
 	return 0;
 }
 
+static void atmel_aes_xts_exit_tfm(struct crypto_skcipher *tfm)
+{
+	struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+
+	crypto_free_skcipher(ctx->fallback_tfm);
+}
+
 static struct skcipher_alg aes_xts_alg = {
 	.base.cra_name		= "xts(aes)",
 	.base.cra_driver_name	= "atmel-xts-aes",
 	.base.cra_blocksize	= AES_BLOCK_SIZE,
 	.base.cra_ctxsize	= sizeof(struct atmel_aes_xts_ctx),
+	.base.cra_flags		= CRYPTO_ALG_NEED_FALLBACK,
 
 	.min_keysize		= 2 * AES_MIN_KEY_SIZE,
 	.max_keysize		= 2 * AES_MAX_KEY_SIZE,
@@ -1904,6 +1950,7 @@ static struct skcipher_alg aes_xts_alg = {
 	.encrypt		= atmel_aes_xts_encrypt,
 	.decrypt		= atmel_aes_xts_decrypt,
 	.init			= atmel_aes_xts_init_tfm,
+	.exit			= atmel_aes_xts_exit_tfm,
 };
 
 #if IS_ENABLED(CONFIG_CRYPTO_DEV_ATMEL_AUTHENC)
@@ -2373,7 +2420,7 @@ static void atmel_aes_unregister_algs(struct atmel_aes_dev *dd)
 
 static void atmel_aes_crypto_alg_init(struct crypto_alg *alg)
 {
-	alg->cra_flags = CRYPTO_ALG_ASYNC;
+	alg->cra_flags |= CRYPTO_ALG_ASYNC;
 	alg->cra_alignmask = 0xf;
 	alg->cra_priority = ATMEL_AES_PRIORITY;
 	alg->cra_module = THIS_MODULE;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* [PATCH 9/9] crypto: atmel-aes: Allocate aes dev at tfm init time
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (7 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 8/9] crypto: atmel-aes: Add fallback to XTS software implementation Tudor Ambarus
@ 2021-07-20  8:55 ` Tudor Ambarus
  2021-07-30  3:10 ` [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Herbert Xu
  9 siblings, 0 replies; 11+ messages in thread
From: Tudor Ambarus @ 2021-07-20  8:55 UTC (permalink / raw)
  To: herbert
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel, Tudor Ambarus

Allocate the atmel_aes_dev data at tfm init time, and not on each
crypt request.
There's a single AES IP per SoC; clarify that in the code.

Signed-off-by: Tudor Ambarus <tudor.ambarus@microchip.com>
---
 drivers/crypto/atmel-aes.c | 76 +++++++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 33 deletions(-)

diff --git a/drivers/crypto/atmel-aes.c b/drivers/crypto/atmel-aes.c
index e74fcaac551e..d0f387674d32 100644
--- a/drivers/crypto/atmel-aes.c
+++ b/drivers/crypto/atmel-aes.c
@@ -420,24 +420,15 @@ static inline size_t atmel_aes_padlen(size_t len, size_t block_size)
 	return len ? block_size - len : 0;
 }
 
-static struct atmel_aes_dev *atmel_aes_find_dev(struct atmel_aes_base_ctx *ctx)
+static struct atmel_aes_dev *atmel_aes_dev_alloc(struct atmel_aes_base_ctx *ctx)
 {
-	struct atmel_aes_dev *aes_dd = NULL;
-	struct atmel_aes_dev *tmp;
+	struct atmel_aes_dev *aes_dd;
 
 	spin_lock_bh(&atmel_aes.lock);
-	if (!ctx->dd) {
-		list_for_each_entry(tmp, &atmel_aes.dev_list, list) {
-			aes_dd = tmp;
-			break;
-		}
-		ctx->dd = aes_dd;
-	} else {
-		aes_dd = ctx->dd;
-	}
-
+	/* One AES IP per SoC. */
+	aes_dd = list_first_entry_or_null(&atmel_aes.dev_list,
+					  struct atmel_aes_dev, list);
 	spin_unlock_bh(&atmel_aes.lock);
-
 	return aes_dd;
 }
 
@@ -969,7 +960,6 @@ static int atmel_aes_handle_queue(struct atmel_aes_dev *dd,
 	ctx = crypto_tfm_ctx(areq->tfm);
 
 	dd->areq = areq;
-	dd->ctx = ctx;
 	start_async = (areq != new_areq);
 	dd->is_async = start_async;
 
@@ -1106,7 +1096,6 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct atmel_aes_base_ctx *ctx = crypto_skcipher_ctx(skcipher);
 	struct atmel_aes_reqctx *rctx;
-	struct atmel_aes_dev *dd;
 	u32 opmode = mode & AES_FLAGS_OPMODE_MASK;
 
 	if (opmode == AES_FLAGS_XTS) {
@@ -1152,10 +1141,6 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 	}
 	ctx->is_aead = false;
 
-	dd = atmel_aes_find_dev(ctx);
-	if (!dd)
-		return -ENODEV;
-
 	rctx = skcipher_request_ctx(req);
 	rctx->mode = mode;
 
@@ -1169,7 +1154,7 @@ static int atmel_aes_crypt(struct skcipher_request *req, unsigned long mode)
 						 ivsize, 0);
 	}
 
-	return atmel_aes_handle_queue(dd, &req->base);
+	return atmel_aes_handle_queue(ctx->dd, &req->base);
 }
 
 static int atmel_aes_setkey(struct crypto_skcipher *tfm, const u8 *key,
@@ -1281,8 +1266,15 @@ static int atmel_aes_ctr_decrypt(struct skcipher_request *req)
 static int atmel_aes_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct atmel_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct atmel_aes_dev *dd;
+
+	dd = atmel_aes_dev_alloc(&ctx->base);
+	if (!dd)
+		return -ENODEV;
 
 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+	ctx->base.dd = dd;
+	ctx->base.dd->ctx = &ctx->base;
 	ctx->base.start = atmel_aes_start;
 
 	return 0;
@@ -1291,8 +1283,15 @@ static int atmel_aes_init_tfm(struct crypto_skcipher *tfm)
 static int atmel_aes_ctr_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct atmel_aes_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct atmel_aes_dev *dd;
+
+	dd = atmel_aes_dev_alloc(&ctx->base);
+	if (!dd)
+		return -ENODEV;
 
 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+	ctx->base.dd = dd;
+	ctx->base.dd->ctx = &ctx->base;
 	ctx->base.start = atmel_aes_ctr_start;
 
 	return 0;
@@ -1730,20 +1729,15 @@ static int atmel_aes_gcm_crypt(struct aead_request *req,
 {
 	struct atmel_aes_base_ctx *ctx;
 	struct atmel_aes_reqctx *rctx;
-	struct atmel_aes_dev *dd;
 
 	ctx = crypto_aead_ctx(crypto_aead_reqtfm(req));
 	ctx->block_size = AES_BLOCK_SIZE;
 	ctx->is_aead = true;
 
-	dd = atmel_aes_find_dev(ctx);
-	if (!dd)
-		return -ENODEV;
-
 	rctx = aead_request_ctx(req);
 	rctx->mode = AES_FLAGS_GCM | mode;
 
-	return atmel_aes_handle_queue(dd, &req->base);
+	return atmel_aes_handle_queue(ctx->dd, &req->base);
 }
 
 static int atmel_aes_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
@@ -1781,8 +1775,15 @@ static int atmel_aes_gcm_decrypt(struct aead_request *req)
 static int atmel_aes_gcm_init(struct crypto_aead *tfm)
 {
 	struct atmel_aes_gcm_ctx *ctx = crypto_aead_ctx(tfm);
+	struct atmel_aes_dev *dd;
+
+	dd = atmel_aes_dev_alloc(&ctx->base);
+	if (!dd)
+		return -ENODEV;
 
 	crypto_aead_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx));
+	ctx->base.dd = dd;
+	ctx->base.dd->ctx = &ctx->base;
 	ctx->base.start = atmel_aes_gcm_start;
 
 	return 0;
@@ -1915,8 +1916,13 @@ static int atmel_aes_xts_decrypt(struct skcipher_request *req)
 static int atmel_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 {
 	struct atmel_aes_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+	struct atmel_aes_dev *dd;
 	const char *tfm_name = crypto_tfm_alg_name(&tfm->base);
 
+	dd = atmel_aes_dev_alloc(&ctx->base);
+	if (!dd)
+		return -ENODEV;
+
 	ctx->fallback_tfm = crypto_alloc_skcipher(tfm_name, 0,
 						  CRYPTO_ALG_NEED_FALLBACK);
 	if (IS_ERR(ctx->fallback_tfm))
@@ -1924,6 +1930,8 @@ static int atmel_aes_xts_init_tfm(struct crypto_skcipher *tfm)
 
 	crypto_skcipher_set_reqsize(tfm, sizeof(struct atmel_aes_reqctx) +
 				    crypto_skcipher_reqsize(ctx->fallback_tfm));
+	ctx->base.dd = dd;
+	ctx->base.dd->ctx = &ctx->base;
 	ctx->base.start = atmel_aes_xts_start;
 
 	return 0;
@@ -2137,6 +2145,11 @@ static int atmel_aes_authenc_init_tfm(struct crypto_aead *tfm,
 {
 	struct atmel_aes_authenc_ctx *ctx = crypto_aead_ctx(tfm);
 	unsigned int auth_reqsize = atmel_sha_authenc_get_reqsize();
+	struct atmel_aes_dev *dd;
+
+	dd = atmel_aes_dev_alloc(&ctx->base);
+	if (!dd)
+		return -ENODEV;
 
 	ctx->auth = atmel_sha_authenc_spawn(auth_mode);
 	if (IS_ERR(ctx->auth))
@@ -2144,6 +2157,8 @@ static int atmel_aes_authenc_init_tfm(struct crypto_aead *tfm,
 
 	crypto_aead_set_reqsize(tfm, (sizeof(struct atmel_aes_authenc_reqctx) +
 				      auth_reqsize));
+	ctx->base.dd = dd;
+	ctx->base.dd->ctx = &ctx->base;
 	ctx->base.start = atmel_aes_authenc_start;
 
 	return 0;
@@ -2189,7 +2204,6 @@ static int atmel_aes_authenc_crypt(struct aead_request *req,
 	struct atmel_aes_base_ctx *ctx = crypto_aead_ctx(tfm);
 	u32 authsize = crypto_aead_authsize(tfm);
 	bool enc = (mode & AES_FLAGS_ENCRYPT);
-	struct atmel_aes_dev *dd;
 
 	/* Compute text length. */
 	if (!enc && req->cryptlen < authsize)
@@ -2208,11 +2222,7 @@ static int atmel_aes_authenc_crypt(struct aead_request *req,
 	ctx->block_size = AES_BLOCK_SIZE;
 	ctx->is_aead = true;
 
-	dd = atmel_aes_find_dev(ctx);
-	if (!dd)
-		return -ENODEV;
-
-	return atmel_aes_handle_queue(dd, &req->base);
+	return atmel_aes_handle_queue(ctx->dd, &req->base);
 }
 
 static int atmel_aes_authenc_cbc_aes_encrypt(struct aead_request *req)
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 11+ messages in thread

* Re: [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests
  2021-07-20  8:55 [PATCH 0/9] crypto: atmel-{aes, tdes}: Fix corner cases - crypto self tests Tudor Ambarus
                   ` (8 preceding siblings ...)
  2021-07-20  8:55 ` [PATCH 9/9] crypto: atmel-aes: Allocate aes dev at tfm init time Tudor Ambarus
@ 2021-07-30  3:10 ` Herbert Xu
  9 siblings, 0 replies; 11+ messages in thread
From: Herbert Xu @ 2021-07-30  3:10 UTC (permalink / raw)
  To: Tudor Ambarus
  Cc: nicolas.ferre, alexandre.belloni, ludovic.desroches,
	linux-crypto, linux-arm-kernel, linux-kernel

On Tue, Jul 20, 2021 at 11:55:26AM +0300, Tudor Ambarus wrote:
> The extra run-time crypto self-tests hit some corner cases that were
> not handled in the drivers. Fix those corner cases and propose some
> cleanup patches.
> 
> Tudor Ambarus (9):
>   crypto: atmel-tdes: Clarify how tdes dev gets allocated to the tfm
>   crypto: atmel-tdes: Handle error messages
>   crypto: atmel-aes: Add blocksize constraint for ECB and CBC modes
>   crypto: atmel-aes: Add XTS input length constraint
>   crypto: atmel-aes: Add NIST 800-38A's zero length cryptlen constraint
>   crypto: atmel-tdes: Add FIPS81's zero length cryptlen constraint
>   crypto: atmel-{aes, tdes}: Set OFB's blocksize to 1
>   crypto: atmel-aes: Add fallback to XTS software implementation
>   crypto: atmel-aes: Allocate aes dev at tfm init time
> 
>  drivers/crypto/atmel-aes.c  | 146 +++++++++++++++++++++++++++---------
>  drivers/crypto/atmel-tdes.c |  66 +++++++---------
>  2 files changed, 138 insertions(+), 74 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 11+ messages in thread
