* [PATCH 0/2] crypto: ccree: fixes and improvements
@ 2022-04-06  8:11 Gilad Ben-Yossef
  2022-04-06  8:11 ` [PATCH 1/2] crypto: ccree: rearrange init calls to avoid race Gilad Ben-Yossef
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Gilad Ben-Yossef @ 2022-04-06  8:11 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Herbert Xu, David S. Miller
  Cc: Cristian Marussi, linux-crypto, linux-kernel

A small fix for a rare race at registration time and a minor improvement

Gilad Ben-Yossef (2):
  crypto: ccree: rearrange init calls to avoid race
  crypto: ccree: use fine grained DMA mapping dir

 drivers/crypto/ccree/cc_buffer_mgr.c | 27 +++++++++++++++------------
 drivers/crypto/ccree/cc_driver.c     | 24 +++++++++++++-----------
 2 files changed, 28 insertions(+), 23 deletions(-)

-- 
2.25.1



* [PATCH 1/2] crypto: ccree: rearrange init calls to avoid race
  2022-04-06  8:11 [PATCH 0/2] crypto: ccree: fixes and improvements Gilad Ben-Yossef
@ 2022-04-06  8:11 ` Gilad Ben-Yossef
  2022-04-06  8:11 ` [PATCH 2/2] crypto: ccree: use fine grained DMA mapping dir Gilad Ben-Yossef
  2022-04-15  8:40 ` [PATCH 0/2] crypto: ccree: fixes and improvements Herbert Xu
  2 siblings, 0 replies; 4+ messages in thread
From: Gilad Ben-Yossef @ 2022-04-06  8:11 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Herbert Xu, David S. Miller
  Cc: Cristian Marussi, Dung Nguyen, Jing Dan, linux-crypto, linux-kernel

Rearrange the init calls to avoid a rare race condition in which the
cipher algs are registered and used while we are still initializing
the hash code, which uses the HW without proper locking.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reported-by: Dung Nguyen <dung.nguyen.zy@renesas.com>
Tested-by: Jing Dan <jing.dan.nx@renesas.com>
Tested-by: Dung Nguyen <dung.nguyen.zy@renesas.com>
Fixes: 63893811b0fc ("crypto: ccree - add ahash support")
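
For illustration, here is a minimal standalone sketch of the
ordering-with-rollback pattern this patch establishes: hash is set up
first, later allocations follow, and errors unwind in exactly the
reverse order. The cc_*_alloc()/cc_*_free() names mirror the driver,
but the stub bodies and main() are assumptions for the sketch, not
driver code.

/*
 * Standalone sketch: register in dependency order (hash first),
 * unwind in reverse order on failure.  Stubs stand in for the real
 * driver functions.
 */
#include <stdio.h>

static int cc_hash_alloc(void)   { puts("hash registered");   return 0; }
static int cc_cipher_alloc(void) { puts("cipher registered"); return 0; }
static int cc_aead_alloc(void)   { puts("aead registered");   return 0; }
static void cc_cipher_free(void) { puts("cipher freed"); }
static void cc_hash_free(void)   { puts("hash freed"); }

static int init_cc_algs(void)
{
        int rc;

        /* hash first: anything registered later may be used immediately */
        rc = cc_hash_alloc();
        if (rc)
                return rc;

        rc = cc_cipher_alloc();
        if (rc)
                goto post_hash_err;

        rc = cc_aead_alloc();
        if (rc)
                goto post_cipher_err;

        return 0;

post_cipher_err:
        cc_cipher_free();
post_hash_err:
        cc_hash_free();
        return rc;
}

int main(void)
{
        return init_cc_algs() ? 1 : 0;
}
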
---
 drivers/crypto/ccree/cc_driver.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/drivers/crypto/ccree/cc_driver.c b/drivers/crypto/ccree/cc_driver.c
index 790fa9058a36..7d1bee86d581 100644
--- a/drivers/crypto/ccree/cc_driver.c
+++ b/drivers/crypto/ccree/cc_driver.c
@@ -529,24 +529,26 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto post_req_mgr_err;
 	}
 
-	/* Allocate crypto algs */
-	rc = cc_cipher_alloc(new_drvdata);
+	/* hash must be allocated first due to use of send_request_init()
+	 * and dependency of AEAD on it
+	 */
+	rc = cc_hash_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "cc_cipher_alloc failed\n");
+		dev_err(dev, "cc_hash_alloc failed\n");
 		goto post_buf_mgr_err;
 	}
 
-	/* hash must be allocated before aead since hash exports APIs */
-	rc = cc_hash_alloc(new_drvdata);
+	/* Allocate crypto algs */
+	rc = cc_cipher_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "cc_hash_alloc failed\n");
-		goto post_cipher_err;
+		dev_err(dev, "cc_cipher_alloc failed\n");
+		goto post_hash_err;
 	}
 
 	rc = cc_aead_alloc(new_drvdata);
 	if (rc) {
 		dev_err(dev, "cc_aead_alloc failed\n");
-		goto post_hash_err;
+		goto post_cipher_err;
 	}
 
 	/* If we got here and FIPS mode is enabled
@@ -558,10 +560,10 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	pm_runtime_put(dev);
 	return 0;
 
-post_hash_err:
-	cc_hash_free(new_drvdata);
 post_cipher_err:
 	cc_cipher_free(new_drvdata);
+post_hash_err:
+	cc_hash_free(new_drvdata);
 post_buf_mgr_err:
 	 cc_buffer_mgr_fini(new_drvdata);
 post_req_mgr_err:
@@ -593,8 +595,8 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 		(struct cc_drvdata *)platform_get_drvdata(plat_dev);
 
 	cc_aead_free(drvdata);
-	cc_hash_free(drvdata);
 	cc_cipher_free(drvdata);
+	cc_hash_free(drvdata);
 	cc_buffer_mgr_fini(drvdata);
 	cc_req_mgr_fini(drvdata);
 	cc_fips_fini(drvdata);
-- 
2.25.1



* [PATCH 2/2] crypto: ccree: use fine grained DMA mapping dir
  2022-04-06  8:11 [PATCH 0/2] crypto: ccree: fixes and improvements Gilad Ben-Yossef
  2022-04-06  8:11 ` [PATCH 1/2] crypto: ccree: rearrange init calls to avoid race Gilad Ben-Yossef
@ 2022-04-06  8:11 ` Gilad Ben-Yossef
  2022-04-15  8:40 ` [PATCH 0/2] crypto: ccree: fixes and improvements Herbert Xu
  2 siblings, 0 replies; 4+ messages in thread
From: Gilad Ben-Yossef @ 2022-04-06  8:11 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Herbert Xu, David S. Miller
  Cc: Cristian Marussi, Corentin Labbe, linux-crypto, linux-kernel

Use a fine-grained specification of DMA mapping directions in
certain cases, allowing both more optimized operation and the
silencing of a harmless, though pesky, dma-debug warning.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
Reported-by: Corentin Labbe <clabbe.montjoie@gmail.com>
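
For illustration, a minimal standalone sketch of the direction rule
this patch applies when mapping the scatterlists: in-place requests
(src == dst) keep DMA_BIDIRECTIONAL, while out-of-place requests use
DMA_TO_DEVICE for the source and DMA_FROM_DEVICE for the destination.
The enum and helpers below are illustrative stand-ins, not the kernel
DMA API.

/*
 * Sketch of the mapping-direction selection.  The enum mirrors the
 * kernel's dma_data_direction values but is local to this example.
 */
#include <stdio.h>

enum dma_dir { DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE };

static const char *dir_name(enum dma_dir d)
{
        switch (d) {
        case DMA_TO_DEVICE:   return "DMA_TO_DEVICE";
        case DMA_FROM_DEVICE: return "DMA_FROM_DEVICE";
        default:              return "DMA_BIDIRECTIONAL";
        }
}

/* direction used for the source scatterlist */
static enum dma_dir src_dir(const void *src, const void *dst)
{
        return src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL;
}

int main(void)
{
        char in[16], out[16];

        /* out-of-place: the device reads src and writes dst */
        printf("out-of-place: src=%s dst=%s\n",
               dir_name(src_dir(in, out)), dir_name(DMA_FROM_DEVICE));

        /* in-place: the same buffer is both read and written */
        printf("in-place:     src=%s\n", dir_name(src_dir(in, in)));
        return 0;
}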

---
 drivers/crypto/ccree/cc_buffer_mgr.c | 27 +++++++++++++++------------
 1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/ccree/cc_buffer_mgr.c b/drivers/crypto/ccree/cc_buffer_mgr.c
index 11e0278c8631..6140e4927322 100644
--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -356,12 +356,14 @@ void cc_unmap_cipher_request(struct device *dev, void *ctx,
 			      req_ctx->mlli_params.mlli_dma_addr);
 	}
 
-	dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
-	dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
-
 	if (src != dst) {
-		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_BIDIRECTIONAL);
+		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_TO_DEVICE);
+		dma_unmap_sg(dev, dst, req_ctx->out_nents, DMA_FROM_DEVICE);
 		dev_dbg(dev, "Unmapped req->dst=%pK\n", sg_virt(dst));
+		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
+	} else {
+		dma_unmap_sg(dev, src, req_ctx->in_nents, DMA_BIDIRECTIONAL);
+		dev_dbg(dev, "Unmapped req->src=%pK\n", sg_virt(src));
 	}
 }
 
@@ -377,6 +379,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
 	u32 dummy = 0;
 	int rc = 0;
 	u32 mapped_nents = 0;
+	int src_direction = (src != dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
 
 	req_ctx->dma_buf_type = CC_DMA_BUF_DLLI;
 	mlli_params->curr_pool = NULL;
@@ -399,7 +402,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
 	}
 
 	/* Map the src SGL */
-	rc = cc_map_sg(dev, src, nbytes, DMA_BIDIRECTIONAL, &req_ctx->in_nents,
+	rc = cc_map_sg(dev, src, nbytes, src_direction, &req_ctx->in_nents,
 		       LLI_MAX_NUM_OF_DATA_ENTRIES, &dummy, &mapped_nents);
 	if (rc)
 		goto cipher_exit;
@@ -416,7 +419,7 @@ int cc_map_cipher_request(struct cc_drvdata *drvdata, void *ctx,
 		}
 	} else {
 		/* Map the dst sg */
-		rc = cc_map_sg(dev, dst, nbytes, DMA_BIDIRECTIONAL,
+		rc = cc_map_sg(dev, dst, nbytes, DMA_FROM_DEVICE,
 			       &req_ctx->out_nents, LLI_MAX_NUM_OF_DATA_ENTRIES,
 			       &dummy, &mapped_nents);
 		if (rc)
@@ -456,6 +459,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
 	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
+	int src_direction = (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL);
 
 	if (areq_ctx->mac_buf_dma_addr) {
 		dma_unmap_single(dev, areq_ctx->mac_buf_dma_addr,
@@ -514,13 +518,11 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 		sg_virt(req->src), areq_ctx->src.nents, areq_ctx->assoc.nents,
 		areq_ctx->assoclen, req->cryptlen);
 
-	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents,
-		     DMA_BIDIRECTIONAL);
+	dma_unmap_sg(dev, req->src, areq_ctx->src.mapped_nents, src_direction);
 	if (req->src != req->dst) {
 		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
 			sg_virt(req->dst));
-		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents,
-			     DMA_BIDIRECTIONAL);
+		dma_unmap_sg(dev, req->dst, areq_ctx->dst.mapped_nents, DMA_FROM_DEVICE);
 	}
 	if (drvdata->coherent &&
 	    areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT &&
@@ -843,7 +845,7 @@ static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 		else
 			size_for_map -= authsize;
 
-		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_BIDIRECTIONAL,
+		rc = cc_map_sg(dev, req->dst, size_for_map, DMA_FROM_DEVICE,
 			       &areq_ctx->dst.mapped_nents,
 			       LLI_MAX_NUM_OF_DATA_ENTRIES, &dst_last_bytes,
 			       &dst_mapped_nents);
@@ -1056,7 +1058,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 		size_to_map += authsize;
 	}
 
-	rc = cc_map_sg(dev, req->src, size_to_map, DMA_BIDIRECTIONAL,
+	rc = cc_map_sg(dev, req->src, size_to_map,
+		       (req->src != req->dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL),
 		       &areq_ctx->src.mapped_nents,
 		       (LLI_MAX_NUM_OF_ASSOC_DATA_ENTRIES +
 			LLI_MAX_NUM_OF_DATA_ENTRIES),
-- 
2.25.1



* Re: [PATCH 0/2] crypto: ccree: fixes and improvements
  2022-04-06  8:11 [PATCH 0/2] crypto: ccree: fixes and improvements Gilad Ben-Yossef
  2022-04-06  8:11 ` [PATCH 1/2] crypto: ccree: rearrange init calls to avoid race Gilad Ben-Yossef
  2022-04-06  8:11 ` [PATCH 2/2] crypto: ccree: use fine grained DMA mapping dir Gilad Ben-Yossef
@ 2022-04-15  8:40 ` Herbert Xu
  2 siblings, 0 replies; 4+ messages in thread
From: Herbert Xu @ 2022-04-15  8:40 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: David S. Miller, Cristian Marussi, linux-crypto, linux-kernel

On Wed, Apr 06, 2022 at 11:11:37AM +0300, Gilad Ben-Yossef wrote:
> A small fix for a rare race at registration time and a minor improvement
> 
> Gilad Ben-Yossef (2):
>   crypto: ccree: rearrange init calls to avoid race
>   crypto: ccree: use fine grained DMA mapping dir
> 
>  drivers/crypto/ccree/cc_buffer_mgr.c | 27 +++++++++++++++------------
>  drivers/crypto/ccree/cc_driver.c     | 24 +++++++++++++-----------
>  2 files changed, 28 insertions(+), 23 deletions(-)

All applied.  Thanks.
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


