linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/3] staging: ccree: Fixes and cleanups
@ 2017-11-02  8:10 Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory Gilad Ben-Yossef
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Gilad Ben-Yossef @ 2017-11-02  8:10 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fixes and cleanups for 4.15

Changes from v1:
- Move DMA mask code to before turning on clocks, based
  on feedback from Dan Carpenter.
- Add missing kmalloc success check, as spotted by Dan
  Carpenter.

Gilad Ben-Yossef (3):
  staging: ccree: copy IV to DMAable memory
  staging: ccree: handle limiting of DMA masks
  staging: ccree: remove dead code

 drivers/staging/ccree/ssi_cipher.c | 24 ++++++++++++++++++------
 drivers/staging/ccree/ssi_cipher.h |  1 +
 drivers/staging/ccree/ssi_driver.c | 25 ++++++++++++++++++++-----
 drivers/staging/ccree/ssi_driver.h |  1 -
 4 files changed, 39 insertions(+), 12 deletions(-)

-- 
2.7.4


* [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory
  2017-11-02  8:10 [PATCH v2 0/3] staging: ccree: Fixes and cleanups Gilad Ben-Yossef
@ 2017-11-02  8:10 ` Gilad Ben-Yossef
  2017-11-08 10:26   ` Horia Geantă
  2017-11-02  8:10 ` [PATCH v2 2/3] staging: ccree: handle limiting of DMA masks Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 3/3] staging: ccree: remove dead code Gilad Ben-Yossef
  2 siblings, 1 reply; 7+ messages in thread
From: Gilad Ben-Yossef @ 2017-11-02  8:10 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

We are being passed an IV buffer from unknown origin, which may be
stack allocated and thus not safe for DMA. Allocate a DMA safe
buffer for the IV and use that instead.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_cipher.c | 20 ++++++++++++++++++--
 drivers/staging/ccree/ssi_cipher.h |  1 +
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 78706f5..0b69103 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -695,6 +695,7 @@ static int ssi_blkcipher_complete(struct device *dev,
 	struct ablkcipher_request *req = (struct ablkcipher_request *)areq;
 
 	ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
+	kfree(req_ctx->iv);
 
 	/*Decrease the inflight counter*/
 	if (ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0)
@@ -757,6 +758,17 @@ static int ssi_blkcipher_process(
 		rc = 0;
 		goto exit_process;
 	}
+
+	/* The IV we are handed may be allocated from the stack so
+	 * we must copy it to a DMAable buffer before use.
+	 */
+	req_ctx->iv = kmalloc(ivsize, GFP_KERNEL);
+	if (!req_ctx->iv) {
+		rc = -ENOMEM;
+		goto exit_process;
+	}
+	memcpy(req_ctx->iv, info, ivsize);
+
 	/*For CTS in case of data size aligned to 16 use CBC mode*/
 	if (((nbytes % AES_BLOCK_SIZE) == 0) && (ctx_p->cipher_mode == DRV_CIPHER_CBC_CTS)) {
 		ctx_p->cipher_mode = DRV_CIPHER_CBC;
@@ -778,7 +790,9 @@ static int ssi_blkcipher_process(
 
 	/* STAT_PHASE_1: Map buffers */
 
-	rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx, ivsize, nbytes, info, src, dst);
+	rc = ssi_buffer_mgr_map_blkcipher_request(ctx_p->drvdata, req_ctx,
+						  ivsize, nbytes, req_ctx->iv,
+						  src, dst);
 	if (unlikely(rc != 0)) {
 		dev_err(dev, "map_request() failed\n");
 		goto exit_process;
@@ -830,8 +844,10 @@ static int ssi_blkcipher_process(
 	if (cts_restore_flag != 0)
 		ctx_p->cipher_mode = DRV_CIPHER_CBC_CTS;
 
-	if (rc != -EINPROGRESS)
+	if (rc != -EINPROGRESS) {
 		kfree(req_ctx->backup_info);
+		kfree(req_ctx->iv);
+	}
 
 	return rc;
 }
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index f499962..25e6335 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -43,6 +43,7 @@ struct blkcipher_req_ctx {
 	u32 out_nents;
 	u32 out_mlli_nents;
 	u8 *backup_info; /*store iv for generated IV flow*/
+	u8 *iv;
 	bool is_giv;
 	struct mlli_params mlli_params;
 };
-- 
2.7.4


* [PATCH v2 2/3] staging: ccree: handle limiting of DMA masks
  2017-11-02  8:10 [PATCH v2 0/3] staging: ccree: Fixes and cleanups Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory Gilad Ben-Yossef
@ 2017-11-02  8:10 ` Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 3/3] staging: ccree: remove dead code Gilad Ben-Yossef
  2 siblings, 0 replies; 7+ messages in thread
From: Gilad Ben-Yossef @ 2017-11-02  8:10 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Properly handle limiting of DMA masks based on device and bus
capabilities.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_driver.c | 25 ++++++++++++++++++++-----
 1 file changed, 20 insertions(+), 5 deletions(-)

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 5f03c25..1d4c7bb 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -208,6 +208,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	struct device *dev = &plat_dev->dev;
 	struct device_node *np = dev->of_node;
 	u32 signature_val;
+	dma_addr_t dma_mask;
 	int rc = 0;
 
 	new_drvdata = devm_kzalloc(dev, sizeof(*new_drvdata), GFP_KERNEL);
@@ -256,15 +257,29 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	}
 	dev_dbg(dev, "Registered to IRQ: %d\n", new_drvdata->irq);
 
+	if (!plat_dev->dev.dma_mask)
+		plat_dev->dev.dma_mask = &plat_dev->dev.coherent_dma_mask;
+
+	dma_mask = (dma_addr_t)(DMA_BIT_MASK(DMA_BIT_MASK_LEN));
+	while (dma_mask > 0x7fffffffUL) {
+		if (dma_supported(&plat_dev->dev, dma_mask)) {
+			rc = dma_set_coherent_mask(&plat_dev->dev, dma_mask);
+			if (!rc)
+				break;
+		}
+		dma_mask >>= 1;
+	}
+
+	if (rc) {
+		dev_err(dev, "Error: failed in dma_set_mask, mask=%par\n",
+			&dma_mask);
+		goto post_drvdata_err;
+	}
+
 	rc = cc_clk_on(new_drvdata);
 	if (rc)
 		goto post_drvdata_err;
 
-	if (!dev->dma_mask)
-		dev->dma_mask = &dev->coherent_dma_mask;
-
-	if (!dev->coherent_dma_mask)
-		dev->coherent_dma_mask = DMA_BIT_MASK(DMA_BIT_MASK_LEN);
 
 	/* Verify correct mapping */
 	signature_val = CC_HAL_READ_REGISTER(CC_REG_OFFSET(HOST_RGF, HOST_SIGNATURE));
-- 
2.7.4


* [PATCH v2 3/3] staging: ccree: remove dead code
  2017-11-02  8:10 [PATCH v2 0/3] staging: ccree: Fixes and cleanups Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory Gilad Ben-Yossef
  2017-11-02  8:10 ` [PATCH v2 2/3] staging: ccree: handle limiting of DMA masks Gilad Ben-Yossef
@ 2017-11-02  8:10 ` Gilad Ben-Yossef
  2 siblings, 0 replies; 7+ messages in thread
From: Gilad Ben-Yossef @ 2017-11-02  8:10 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The inflight_counter field is updated in a single location and
never used. Remove it.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_cipher.c | 4 ----
 drivers/staging/ccree/ssi_driver.h | 1 -
 2 files changed, 5 deletions(-)

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 0b69103..ee85cbf 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -697,10 +697,6 @@ static int ssi_blkcipher_complete(struct device *dev,
 	ssi_buffer_mgr_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
 	kfree(req_ctx->iv);
 
-	/*Decrease the inflight counter*/
-	if (ctx_p->flow_mode == BYPASS && ctx_p->drvdata->inflight_counter > 0)
-		ctx_p->drvdata->inflight_counter--;
-
 	if (areq) {
 		/*
 		 * The crypto API expects us to set the req->info to the last
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 488f665..e01c880 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -134,7 +134,6 @@ struct ssi_drvdata {
 	void *fips_handle;
 	void *ivgen_handle;
 	void *sram_mgr_handle;
-	u32 inflight_counter;
 	struct clk *clk;
 	bool coherent;
 };
-- 
2.7.4


* Re: [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory
  2017-11-02  8:10 ` [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory Gilad Ben-Yossef
@ 2017-11-08 10:26   ` Horia Geantă
  2017-11-08 11:54     ` Gilad Ben-Yossef
  0 siblings, 1 reply; 7+ messages in thread
From: Horia Geantă @ 2017-11-08 10:26 UTC (permalink / raw)
  To: Gilad Ben-Yossef, Greg Kroah-Hartman, Herbert Xu, David S. Miller
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

On 11/2/2017 10:14 AM, Gilad Ben-Yossef wrote:
> We are being passed an IV buffer from unknown origin, which may be
> stack allocated and thus not safe for DMA. Allocate a DMA safe
> buffer for the IV and use that instead.
> 
IIUC this fixes only the (a)blkcipher / skcipher algorithms.
What about aead, authenc?

The fact that only the skcipher tcrypt tests use IVs on the stack doesn't
mean the aead, authenc implementations are safe - other crypto API users
could provide IVs lying in non-DMAable memory.

To reiterate, the proper approach is to fix the crypto API to guarantee
IVs are DMAable.
However Herbert suggests he is not willing to do this work:
https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg28821.html

A few high-level details mentioning what this implies would be helpful,
in case somebody else decides it's worth pursuing this path.

The compromise is to fix all crypto drivers that need DMAable IVs.
IMHO this is suboptimal, both in terms of performance (memory
allocation, memcpy) and increased code complexity.

Horia


* Re: [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory
  2017-11-08 10:26   ` Horia Geantă
@ 2017-11-08 11:54     ` Gilad Ben-Yossef
  2017-11-08 12:07       ` Herbert Xu
  0 siblings, 1 reply; 7+ messages in thread
From: Gilad Ben-Yossef @ 2017-11-08 11:54 UTC (permalink / raw)
  To: Horia Geantă
  Cc: Greg Kroah-Hartman, Herbert Xu, David S. Miller, Ofir Drang,
	linux-crypto, driverdev-devel, devel, linux-kernel

Hi,

On Wed, Nov 8, 2017 at 12:26 PM, Horia Geantă <horia.geanta@nxp.com> wrote:
> On 11/2/2017 10:14 AM, Gilad Ben-Yossef wrote:
>> We are being passed an IV buffer from unknown origin, which may be
>> stack allocated and thus not safe for DMA. Allocate a DMA safe
>> buffer for the IV and use that instead.
>>
> IIUC this fixes only the (a)blkcipher / skcipher algorithms.
> What about aead, authenc?

AFAIK the implementation of aead/authenc in ccree already copies the IV to an
internally allocated buffer because of how it deals with GCM and CTR.

But you did trigger me to double check that, so thanks for that :-)

>
> The fact that only the skcipher tcrypt tests use IVs on stack doesn't
> mean aead, authenc implementations are safe - other crypto API users
> could provide IVs laying in non-DMAable memory.

Of course. In fact, it might be a good idea to actually make tcrypt
explicitly use a stack-allocated IV just to trigger proper warnings
from the DMA debug code. Does that sound sane?

>
> To reiterate, the proper approach is to fix the crypto API to guarantee
> IVs are DMAable.
> However Herbert suggests he is not willing to do this work:
> https://www.mail-archive.com/linux-crypto@vger.kernel.org/msg28821.html
>
> A few high-level details mentioning what this implies would be helpful,
> in case somebody else decides its worth pursuing this path.
>
> The compromise is to fix all crypto drivers that need DMAable IVs.
> IMHO this is suboptimal, both in terms of performance (memory
> allocation, memcpy) and increased code complexity.

As a HW based crypto driver maintainer I sympathize, but let's play
devil's advocate for a second here:

In the current state, HW based crypto drivers need to allocate a buffer
and copy the IV, because they don't know if they got a DMAable buffer
or not. SW based crypto drivers don't need to do anything.

If we switch to an API that guarantees IVs are DMAable, all crypto API
*users* will be forced to make sure IVs are located in DMAable memory,
possibly resulting in the need for an extra buffer allocation whether it
is needed or not, and SW based crypto drivers will suffer the added
complexity of extracting the IV from a scatter/gather list (I'm assuming
we'll just add it there).

Despite this hurting the driver I care about, I'm not sure this is a good
trade-off, really.

Thinking aloud, would it be sane to add an API of "alloc and copy if buffer
is on stack"?

Gilad


-- 
Gilad Ben-Yossef
Chief Coffee Drinker

"If you take a class in large-scale robotics, can you end up in a
situation where the homework eats your dog?"
 -- Jean-Baptiste Queru


* Re: [PATCH v2 1/3] staging: ccree: copy IV to DMAable memory
  2017-11-08 11:54     ` Gilad Ben-Yossef
@ 2017-11-08 12:07       ` Herbert Xu
  0 siblings, 0 replies; 7+ messages in thread
From: Herbert Xu @ 2017-11-08 12:07 UTC (permalink / raw)
  To: Gilad Ben-Yossef
  Cc: Horia Geantă,
	Greg Kroah-Hartman, David S. Miller, Ofir Drang, linux-crypto,
	driverdev-devel, devel, linux-kernel

On Wed, Nov 08, 2017 at 01:54:03PM +0200, Gilad Ben-Yossef wrote:
>
> As a HW based crypto driver maintainer I sympathize, but let's play
> devil's advocate for a second here:
> 
> In the current state, HW based crypto drivers need to allocate a buffer
> and copy the IV, because they don't know if they got a DMAable buffer
> or not. SW based crypto drivers don't need to do anything.

When I suggested an API change, I was thinking of an option of
supplying an SG list instead of the current kernel pointer.

IOW the existing users do not have to change but where we know
that the pointer can be DMAed you can opt in to the new interface.

The crypto API can then provide a helper to do the copying where
necessary.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

