linuxppc-dev.lists.ozlabs.org archive mirror
* [PATCH 00/10] Series of fixes for NX driver
@ 2013-08-23 20:01 Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 01/10] crypto: nx - add offset to nx_build_sg_lists() Marcelo Cerri
                   ` (9 more replies)
  0 siblings, 10 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This series contains fixes for several algorithms implemented by the
NX driver. The patches fall into three categories:

 - Changes that split the data across several hypercalls in order to
   respect the limits on how much data the co-processor can handle per
   operation (see the loop sketch below). This affects all AES modes.
 - Fixes to how the driver handles zero-length messages. This affects
   XCBC and GCM.
 - Fixes for SHA-2 when chunks bigger than the block size are provided.
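
As an illustration, the chunking pattern shared by the AES fixes boils
down to the following loop (a simplified sketch of the code added by
these patches, not a complete function):

    unsigned int processed = 0, to_process;
    u32 max_sg_len = min_t(u32,
                           nx_driver.of.max_sg_len / sizeof(struct nx_sg),
                           nx_ctx->ap->sglen);

    do {
            /* bound the chunk by the co-processor's byte limit... */
            to_process = min_t(u64, nbytes - processed,
                               nx_ctx->ap->databytelen);
            /* ...by what one sg list can map... */
            to_process = min_t(u64, to_process,
                               NX_PAGE_SIZE * (max_sg_len - 1));
            /* ...and keep each chunk AES-block aligned */
            to_process &= ~(AES_BLOCK_SIZE - 1);

            /* build sg lists starting 'processed' bytes into the
             * request and issue one hypercall for this chunk */

            processed += to_process;
    } while (processed < nbytes);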

Fionnuala Gunter (2):
  crypto: nx - fix limits to sg lists for AES-XCBC
  crypto: nx - fix limits to sg lists for AES-CCM

Marcelo Cerri (8):
  crypto: nx - add offset to nx_build_sg_lists()
  crypto: nx - fix limits to sg lists for AES-ECB
  crypto: nx - fix limits to sg lists for AES-CBC
  crypto: nx - fix limits to sg lists for AES-CTR
  crypto: nx - fix limits to sg lists for AES-GCM
  crypto: nx - fix XCBC for zero length messages
  crypto: nx - fix GCM for zero length messages
  crypto: nx - fix SHA-2 for chunks bigger than block size

 drivers/crypto/nx/nx-aes-cbc.c  |  50 ++++---
 drivers/crypto/nx/nx-aes-ccm.c  | 297 +++++++++++++++++++++++++++++-----------
 drivers/crypto/nx/nx-aes-ctr.c  |  50 ++++---
 drivers/crypto/nx/nx-aes-ecb.c  |  48 ++++---
 drivers/crypto/nx/nx-aes-gcm.c  | 292 ++++++++++++++++++++++++++++++---------
 drivers/crypto/nx/nx-aes-xcbc.c | 191 +++++++++++++++++++-------
 drivers/crypto/nx/nx-sha256.c   |   2 +-
 drivers/crypto/nx/nx-sha512.c   |   2 +-
 drivers/crypto/nx/nx.c          |   9 +-
 drivers/crypto/nx/nx.h          |   2 +-
 10 files changed, 683 insertions(+), 260 deletions(-)

-- 
1.7.12


* [PATCH 01/10] crypto: nx - add offset to nx_build_sg_lists()
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 02/10] crypto: nx - fix limits to sg lists for AES-ECB Marcelo Cerri
                   ` (8 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This patch adds one more parameter to nx_build_sg_lists() to skip the
given number of bytes from the beginning of each sg list.

This is needed by the fixes for the AES modes, enabling them to
process large requests in smaller chunks.
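
With the new signature, existing callers simply pass 0 as the offset,
while the chunking fixes later in this series pass the number of bytes
already processed. For illustration (using the variable names from
those later patches):

    /* whole request at once, as before */
    rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0, iv);

    /* one chunk of a larger request, skipping what was already done */
    rc = nx_build_sg_lists(nx_ctx, desc, dst, src, to_process,
                           processed, iv);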

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-cbc.c | 2 +-
 drivers/crypto/nx/nx-aes-ccm.c | 4 ++--
 drivers/crypto/nx/nx-aes-ctr.c | 2 +-
 drivers/crypto/nx/nx-aes-ecb.c | 2 +-
 drivers/crypto/nx/nx-aes-gcm.c | 2 +-
 drivers/crypto/nx/nx.c         | 9 +++++++--
 drivers/crypto/nx/nx.h         | 2 +-
 7 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-cbc.c b/drivers/crypto/nx/nx-aes-cbc.c
index 9310982..f334a60 100644
--- a/drivers/crypto/nx/nx-aes-cbc.c
+++ b/drivers/crypto/nx/nx-aes-cbc.c
@@ -85,7 +85,7 @@ static int cbc_aes_nx_crypt(struct blkcipher_desc *desc,
 	else
 		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes,
+	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0,
 			       csbcpb->cpb.aes_cbc.iv);
 	if (rc)
 		goto out;
diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c
index 39d4224..666a35b 100644
--- a/drivers/crypto/nx/nx-aes-ccm.c
+++ b/drivers/crypto/nx/nx-aes-ccm.c
@@ -293,7 +293,7 @@ static int ccm_nx_decrypt(struct aead_request   *req,
 	if (rc)
 		goto out;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes,
+	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes, 0,
 			       csbcpb->cpb.aes_ccm.iv_or_ctr);
 	if (rc)
 		goto out;
@@ -339,7 +339,7 @@ static int ccm_nx_encrypt(struct aead_request   *req,
 	if (rc)
 		goto out;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes,
+	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes, 0,
 			       csbcpb->cpb.aes_ccm.iv_or_ctr);
 	if (rc)
 		goto out;
diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
index 762611b..80dee8d 100644
--- a/drivers/crypto/nx/nx-aes-ctr.c
+++ b/drivers/crypto/nx/nx-aes-ctr.c
@@ -98,7 +98,7 @@ static int ctr_aes_nx_crypt(struct blkcipher_desc *desc,
 		goto out;
 	}
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes,
+	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0,
 			       csbcpb->cpb.aes_ctr.iv);
 	if (rc)
 		goto out;
diff --git a/drivers/crypto/nx/nx-aes-ecb.c b/drivers/crypto/nx/nx-aes-ecb.c
index 77dbe08..fe0d803 100644
--- a/drivers/crypto/nx/nx-aes-ecb.c
+++ b/drivers/crypto/nx/nx-aes-ecb.c
@@ -85,7 +85,7 @@ static int ecb_aes_nx_crypt(struct blkcipher_desc *desc,
 	else
 		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, NULL);
+	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0, NULL);
 	if (rc)
 		goto out;
 
diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index 74feee1..c2d6f76 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -226,7 +226,7 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc)
 
 	csbcpb->cpb.aes_gcm.bit_length_data = nbytes * 8;
 
-	rc = nx_build_sg_lists(nx_ctx, &desc, req->dst, req->src, nbytes,
+	rc = nx_build_sg_lists(nx_ctx, &desc, req->dst, req->src, nbytes, 0,
 			       csbcpb->cpb.aes_gcm.iv_or_cnt);
 	if (rc)
 		goto out;
diff --git a/drivers/crypto/nx/nx.c b/drivers/crypto/nx/nx.c
index bdf4990..5533fe3 100644
--- a/drivers/crypto/nx/nx.c
+++ b/drivers/crypto/nx/nx.c
@@ -211,6 +211,8 @@ struct nx_sg *nx_walk_and_build(struct nx_sg       *nx_dst,
  * @dst: destination scatterlist
  * @src: source scatterlist
  * @nbytes: length of data described in the scatterlists
+ * @offset: number of bytes to fast-forward past at the beginning of
+ *          scatterlists.
  * @iv: destination for the iv data, if the algorithm requires it
  *
  * This is common code shared by all the AES algorithms. It uses the block
@@ -222,6 +224,7 @@ int nx_build_sg_lists(struct nx_crypto_ctx  *nx_ctx,
 		      struct scatterlist    *dst,
 		      struct scatterlist    *src,
 		      unsigned int           nbytes,
+		      unsigned int           offset,
 		      u8                    *iv)
 {
 	struct nx_sg *nx_insg = nx_ctx->in_sg;
@@ -230,8 +233,10 @@ int nx_build_sg_lists(struct nx_crypto_ctx  *nx_ctx,
 	if (iv)
 		memcpy(iv, desc->info, AES_BLOCK_SIZE);
 
-	nx_insg = nx_walk_and_build(nx_insg, nx_ctx->ap->sglen, src, 0, nbytes);
-	nx_outsg = nx_walk_and_build(nx_outsg, nx_ctx->ap->sglen, dst, 0, nbytes);
+	nx_insg = nx_walk_and_build(nx_insg, nx_ctx->ap->sglen, src,
+				    offset, nbytes);
+	nx_outsg = nx_walk_and_build(nx_outsg, nx_ctx->ap->sglen, dst,
+				    offset, nbytes);
 
 	/* these lengths should be negative, which will indicate to phyp that
 	 * the input and output parameters are scatterlists, not linear
diff --git a/drivers/crypto/nx/nx.h b/drivers/crypto/nx/nx.h
index 14bb97f..befda07 100644
--- a/drivers/crypto/nx/nx.h
+++ b/drivers/crypto/nx/nx.h
@@ -156,7 +156,7 @@ int nx_hcall_sync(struct nx_crypto_ctx *ctx, struct vio_pfo_op *op,
 struct nx_sg *nx_build_sg_list(struct nx_sg *, u8 *, unsigned int, u32);
 int nx_build_sg_lists(struct nx_crypto_ctx *, struct blkcipher_desc *,
 		      struct scatterlist *, struct scatterlist *, unsigned int,
-		      u8 *);
+		      unsigned int, u8 *);
 struct nx_sg *nx_walk_and_build(struct nx_sg *, unsigned int,
 				struct scatterlist *, unsigned int,
 				unsigned int);
-- 
1.7.12


* [PATCH 02/10] crypto: nx - fix limits to sg lists for AES-ECB
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 01/10] crypto: nx - add offset to nx_build_sg_lists() Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC Marcelo Cerri
                   ` (7 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This patch updates the nx-aes-ecb implementation to perform several
hypercalls if needed in order to always respect the length limits for
scatter/gather lists; the combined bound is sketched after the list
below.

Two different limits are considered:

 - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
   list.

 - "ibm,max-sync-cop":
    - The total number of bytes that a scatter/gather list can hold.
    - The maximum number of elements that a scatter/gather list can have.
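
Combined, these limits bound each hypercall as implemented below
(sketch using the driver's own names; the two min_t() calls are
collapsed into one min() here):

    max_sg_len = min(nx_driver.of.max_sg_len / sizeof(struct nx_sg),
                     nx_ctx->ap->sglen);
    to_process = min(nbytes - processed,
                     nx_ctx->ap->databytelen,           /* byte limit */
                     NX_PAGE_SIZE * (max_sg_len - 1));  /* sg capacity */
    to_process &= ~(AES_BLOCK_SIZE - 1);                /* block align */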

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-ecb.c | 48 ++++++++++++++++++++++++++----------------
 1 file changed, 30 insertions(+), 18 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-ecb.c b/drivers/crypto/nx/nx-aes-ecb.c
index fe0d803..85a8d23 100644
--- a/drivers/crypto/nx/nx-aes-ecb.c
+++ b/drivers/crypto/nx/nx-aes-ecb.c
@@ -71,37 +71,49 @@ static int ecb_aes_nx_crypt(struct blkcipher_desc *desc,
 	struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
 	unsigned long irq_flags;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 	int rc;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen) {
-		rc = -EINVAL;
-		goto out;
-	}
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
 	if (enc)
 		NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
 	else
 		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0, NULL);
-	if (rc)
-		goto out;
+	do {
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(AES_BLOCK_SIZE - 1);
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		rc = nx_build_sg_lists(nx_ctx, desc, dst, src, to_process,
+				processed, NULL);
+		if (rc)
+			goto out;
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
 out:
 	spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
 	return rc;
-- 
1.7.12


* [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 01/10] crypto: nx - add offset to nx_build_sg_lists() Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 02/10] crypto: nx - fix limits to sg lists for AES-ECB Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-29  4:42   ` Herbert Xu
  2013-08-23 20:01 ` [PATCH 04/10] crypto: nx - fix limits to sg lists for AES-CTR Marcelo Cerri
                   ` (6 subsequent siblings)
  9 siblings, 1 reply; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This patch updates the nx-aes-cbc implementation to perform several
hypercalls if needed in order to always respect the length limits for
scatter/gather lists.

Two different limits are considered:

 - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
   list.

 - "ibm,max-sync-cop":
    - The total number of bytes that a scatter/gather list can hold.
    - The maximum number of elements that a scatter/gather list can have.

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-cbc.c | 50 +++++++++++++++++++++++++-----------------
 1 file changed, 30 insertions(+), 20 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-cbc.c b/drivers/crypto/nx/nx-aes-cbc.c
index f334a60..fa37df1 100644
--- a/drivers/crypto/nx/nx-aes-cbc.c
+++ b/drivers/crypto/nx/nx-aes-cbc.c
@@ -71,40 +71,50 @@ static int cbc_aes_nx_crypt(struct blkcipher_desc *desc,
 	struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
 	unsigned long irq_flags;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 	int rc;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen) {
-		rc = -EINVAL;
-		goto out;
-	}
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
 	if (enc)
 		NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
 	else
 		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0,
-			       csbcpb->cpb.aes_cbc.iv);
-	if (rc)
-		goto out;
+	do {
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(AES_BLOCK_SIZE - 1);
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		rc = nx_build_sg_lists(nx_ctx, desc, dst, src, to_process,
+				       processed, csbcpb->cpb.aes_cbc.iv);
+		if (rc)
+			goto out;
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
 
-	memcpy(desc->info, csbcpb->cpb.aes_cbc.cv, AES_BLOCK_SIZE);
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
+		memcpy(desc->info, csbcpb->cpb.aes_cbc.cv, AES_BLOCK_SIZE);
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 out:
 	spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
 	return rc;
-- 
1.7.12


* [PATCH 04/10] crypto: nx - fix limits to sg lists for AES-CTR
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (2 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 05/10] crypto: nx - fix limits to sg lists for AES-GCM Marcelo Cerri
                   ` (5 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This patch updates the nx-aes-ctr implementation to perform several
hypercalls if needed in order to always respect the length limits for
scatter/gather lists.

Two different limits are considered:

 - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
   list.

 - "ibm,max-sync-cop":
    - The total number of bytes that a scatter/gather list can hold.
    - The maximum number of elements that a scatter/gather list can have.

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-ctr.c | 50 ++++++++++++++++++++++++++----------------
 1 file changed, 31 insertions(+), 19 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-ctr.c b/drivers/crypto/nx/nx-aes-ctr.c
index 80dee8d..a37d009 100644
--- a/drivers/crypto/nx/nx-aes-ctr.c
+++ b/drivers/crypto/nx/nx-aes-ctr.c
@@ -89,33 +89,45 @@ static int ctr_aes_nx_crypt(struct blkcipher_desc *desc,
 	struct nx_crypto_ctx *nx_ctx = crypto_blkcipher_ctx(desc->tfm);
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
 	unsigned long irq_flags;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 	int rc;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen) {
-		rc = -EINVAL;
-		goto out;
-	}
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	rc = nx_build_sg_lists(nx_ctx, desc, dst, src, nbytes, 0,
-			       csbcpb->cpb.aes_ctr.iv);
-	if (rc)
-		goto out;
+	do {
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(AES_BLOCK_SIZE - 1);
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		rc = nx_build_sg_lists(nx_ctx, desc, dst, src, to_process,
+				       processed, csbcpb->cpb.aes_ctr.iv);
+		if (rc)
+			goto out;
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
+
+		memcpy(desc->info, csbcpb->cpb.aes_cbc.cv, AES_BLOCK_SIZE);
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 out:
 	spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
 	return rc;
-- 
1.7.12


* [PATCH 05/10] crypto: nx - fix limits to sg lists for AES-GCM
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (3 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 04/10] crypto: nx - fix limits to sg lists for AES-CTR Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 06/10] crypto: nx - fix limits to sg lists for AES-XCBC Marcelo Cerri
                   ` (4 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

This patch updates the nx-aes-gcm implementation to perform several
hypercalls if needed in order to always respect the length limits for
scatter/gather lists.

Two different limits are considered:

 - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
   list.

 - "ibm,max-sync-cop":
    - The total number of bytes that a scatter/gather list can hold.
    - The maximum number of elements that a scatter/gather list can have.

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-gcm.c | 202 +++++++++++++++++++++++++++--------------
 1 file changed, 136 insertions(+), 66 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index c2d6f76..9e89bdf 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -125,37 +125,101 @@ static int nx_gca(struct nx_crypto_ctx  *nx_ctx,
 		  struct aead_request   *req,
 		  u8                    *out)
 {
+	int rc;
 	struct nx_csbcpb *csbcpb_aead = nx_ctx->csbcpb_aead;
-	int rc = -EINVAL;
 	struct scatter_walk walk;
 	struct nx_sg *nx_sg = nx_ctx->in_sg;
+	unsigned int nbytes = req->assoclen;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 
-	if (req->assoclen > nx_ctx->ap->databytelen)
-		goto out;
-
-	if (req->assoclen <= AES_BLOCK_SIZE) {
+	if (nbytes <= AES_BLOCK_SIZE) {
 		scatterwalk_start(&walk, req->assoc);
-		scatterwalk_copychunks(out, &walk, req->assoclen,
-				       SCATTERWALK_FROM_SG);
+		scatterwalk_copychunks(out, &walk, nbytes, SCATTERWALK_FROM_SG);
 		scatterwalk_done(&walk, SCATTERWALK_FROM_SG, 0);
-
-		rc = 0;
-		goto out;
+		return 0;
 	}
 
-	nx_sg = nx_walk_and_build(nx_sg, nx_ctx->ap->sglen, req->assoc, 0,
-				  req->assoclen);
-	nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_sg) * sizeof(struct nx_sg);
+	NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_CONTINUATION;
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op_aead,
-			   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+	/* page_limit: number of sg entries that fit on one page */
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
+	do {
+		/*
+		 * to_process: the data chunk to process in this update.
+		 * This value is bound by sg list limits.
+		 */
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+
+		if ((to_process + processed) < nbytes)
+			NX_CPB_FDM(csbcpb_aead) |= NX_FDM_INTERMEDIATE;
+		else
+			NX_CPB_FDM(csbcpb_aead) &= ~NX_FDM_INTERMEDIATE;
+
+		nx_sg = nx_walk_and_build(nx_ctx->in_sg, nx_ctx->ap->sglen,
+					  req->assoc, processed, to_process);
+		nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_sg)
+					* sizeof(struct nx_sg);
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op_aead,
+				req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			return rc;
+
+		memcpy(csbcpb_aead->cpb.aes_gca.in_pat,
+				csbcpb_aead->cpb.aes_gca.out_pat,
+				AES_BLOCK_SIZE);
+		NX_CPB_FDM(csbcpb_aead) |= NX_FDM_CONTINUATION;
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 
 	memcpy(out, csbcpb_aead->cpb.aes_gca.out_pat, AES_BLOCK_SIZE);
+
+	return rc;
+}
+
+static int gcm_empty(struct aead_request *req, struct blkcipher_desc *desc,
+		     int enc)
+{
+	int rc;
+	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+
+	/* For scenarios where the input message is zero length, AES CTR mode
+	 * may be used. Set the source data to be a single block (16B) of all
+	 * zeros, and set the input IV value to be the same as the GMAC IV
+	 * value. - nx_wb 4.8.1.3 */
+	char src[AES_BLOCK_SIZE] = {};
+	struct scatterlist sg;
+
+	desc->tfm = crypto_alloc_blkcipher("ctr(aes)", 0, 0);
+	if (IS_ERR(desc->tfm)) {
+		rc = -ENOMEM;
+		goto out;
+	}
+
+	crypto_blkcipher_setkey(desc->tfm, csbcpb->cpb.aes_gcm.key,
+		NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_128 ? 16 :
+		NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_192 ? 24 : 32);
+
+	sg_init_one(&sg, src, AES_BLOCK_SIZE);
+	if (enc)
+		rc = crypto_blkcipher_encrypt_iv(desc, req->dst, &sg,
+						 AES_BLOCK_SIZE);
+	else
+		rc = crypto_blkcipher_decrypt_iv(desc, req->dst, &sg,
+						 AES_BLOCK_SIZE);
+	crypto_free_blkcipher(desc->tfm);
+
 out:
 	return rc;
 }
@@ -166,79 +230,85 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc)
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
 	struct blkcipher_desc desc;
 	unsigned int nbytes = req->cryptlen;
+	unsigned int processed = 0, to_process;
 	unsigned long irq_flags;
+	u32 max_sg_len;
 	int rc = -EINVAL;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen)
-		goto out;
-
 	desc.info = nx_ctx->priv.gcm.iv;
 	/* initialize the counter */
 	*(u32 *)(desc.info + NX_GCM_CTR_OFFSET) = 1;
 
-	/* For scenarios where the input message is zero length, AES CTR mode
-	 * may be used. Set the source data to be a single block (16B) of all
-	 * zeros, and set the input IV value to be the same as the GMAC IV
-	 * value. - nx_wb 4.8.1.3 */
 	if (nbytes == 0) {
-		char src[AES_BLOCK_SIZE] = {};
-		struct scatterlist sg;
-
-		desc.tfm = crypto_alloc_blkcipher("ctr(aes)", 0, 0);
-		if (IS_ERR(desc.tfm)) {
-			rc = -ENOMEM;
-			goto out;
-		}
-
-		crypto_blkcipher_setkey(desc.tfm, csbcpb->cpb.aes_gcm.key,
-			NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_128 ? 16 :
-			NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_192 ? 24 : 32);
-
-		sg_init_one(&sg, src, AES_BLOCK_SIZE);
-		if (enc)
-			crypto_blkcipher_encrypt_iv(&desc, req->dst, &sg,
-						    AES_BLOCK_SIZE);
-		else
-			crypto_blkcipher_decrypt_iv(&desc, req->dst, &sg,
-						    AES_BLOCK_SIZE);
-		crypto_free_blkcipher(desc.tfm);
-
-		rc = 0;
+		rc = gcm_empty(req, &desc, enc);
 		goto out;
 	}
 
-	desc.tfm = (struct crypto_blkcipher *)req->base.tfm;
-
+	/* Process associated data */
 	csbcpb->cpb.aes_gcm.bit_length_aad = req->assoclen * 8;
-
 	if (req->assoclen) {
 		rc = nx_gca(nx_ctx, req, csbcpb->cpb.aes_gcm.in_pat_or_aad);
 		if (rc)
 			goto out;
 	}
 
-	if (enc)
+	/* Set flags for encryption */
+	NX_CPB_FDM(csbcpb) &= ~NX_FDM_CONTINUATION;
+	if (enc) {
 		NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
-	else
+	} else {
+		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 		nbytes -= crypto_aead_authsize(crypto_aead_reqtfm(req));
+	}
 
-	csbcpb->cpb.aes_gcm.bit_length_data = nbytes * 8;
+	/* page_limit: number of sg entries that fit on one page */
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	rc = nx_build_sg_lists(nx_ctx, &desc, req->dst, req->src, nbytes, 0,
-			       csbcpb->cpb.aes_gcm.iv_or_cnt);
-	if (rc)
-		goto out;
+	do {
+		/*
+		 * to_process: the data chunk to process in this update.
+		 * This value is bound by sg list limits.
+		 */
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if ((to_process + processed) < nbytes)
+			NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		else
+			NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE;
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
+		csbcpb->cpb.aes_gcm.bit_length_data = nbytes * 8;
+		desc.tfm = (struct crypto_blkcipher *) req->base.tfm;
+		rc = nx_build_sg_lists(nx_ctx, &desc, req->dst,
+				       req->src, to_process, processed,
+				       csbcpb->cpb.aes_gcm.iv_or_cnt);
+		if (rc)
+			goto out;
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
+
+		memcpy(desc.info, csbcpb->cpb.aes_gcm.out_cnt, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_gcm.in_pat_or_aad,
+			csbcpb->cpb.aes_gcm.out_pat_or_mac, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_gcm.in_s0,
+			csbcpb->cpb.aes_gcm.out_s0, AES_BLOCK_SIZE);
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 
 	if (enc) {
 		/* copy out the auth tag */
-- 
1.7.12


* [PATCH 06/10] crypto: nx - fix limits to sg lists for AES-XCBC
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (4 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 05/10] crypto: nx - fix limits to sg lists for AES-GCM Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 07/10] crypto: nx - fix limits to sg lists for AES-CCM Marcelo Cerri
                   ` (3 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Fionnuala Gunter, linuxppc-dev, linux-kernel, linux-crypto

From: Fionnuala Gunter <fin@linux.vnet.ibm.com>

This patch updates the NX driver to perform several hypercalls when
necessary so that the length limits of scatter/gather lists are
respected.

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Reviewed-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-xcbc.c | 107 +++++++++++++++++++++++-----------------
 1 file changed, 63 insertions(+), 44 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-xcbc.c b/drivers/crypto/nx/nx-aes-xcbc.c
index 658da0f..1a5d9e3 100644
--- a/drivers/crypto/nx/nx-aes-xcbc.c
+++ b/drivers/crypto/nx/nx-aes-xcbc.c
@@ -88,78 +88,97 @@ static int nx_xcbc_update(struct shash_desc *desc,
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
 	struct nx_sg *in_sg;
-	u32 to_process, leftover;
+	u32 to_process, leftover, total;
+	u32 max_sg_len;
 	unsigned long irq_flags;
 	int rc = 0;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
-		/* we've hit the nx chip previously and we're updating again,
-		 * so copy over the partial digest */
-		memcpy(csbcpb->cpb.aes_xcbc.cv,
-		       csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE);
-	}
+
+	total = sctx->count + len;
 
 	/* 2 cases for total data len:
 	 *  1: <= AES_BLOCK_SIZE: copy into state, return 0
 	 *  2: > AES_BLOCK_SIZE: process X blocks, copy in leftover
 	 */
-	if (len + sctx->count <= AES_BLOCK_SIZE) {
+	if (total <= AES_BLOCK_SIZE) {
 		memcpy(sctx->buffer + sctx->count, data, len);
 		sctx->count += len;
 		goto out;
 	}
 
-	/* to_process: the AES_BLOCK_SIZE data chunk to process in this
-	 * update */
-	to_process = (sctx->count + len) & ~(AES_BLOCK_SIZE - 1);
-	leftover = (sctx->count + len) & (AES_BLOCK_SIZE - 1);
+	in_sg = nx_ctx->in_sg;
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+				nx_ctx->ap->sglen);
 
-	/* the hardware will not accept a 0 byte operation for this algorithm
-	 * and the operation MUST be finalized to be correct. So if we happen
-	 * to get an update that falls on a block sized boundary, we must
-	 * save off the last block to finalize with later. */
-	if (!leftover) {
-		to_process -= AES_BLOCK_SIZE;
-		leftover = AES_BLOCK_SIZE;
-	}
+	do {
 
-	if (sctx->count) {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, sctx->buffer,
-					 sctx->count, nx_ctx->ap->sglen);
-		in_sg = nx_build_sg_list(in_sg, (u8 *)data,
-					 to_process - sctx->count,
-					 nx_ctx->ap->sglen);
-		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
-					sizeof(struct nx_sg);
-	} else {
-		in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *)data, to_process,
-					 nx_ctx->ap->sglen);
+		/* to_process: the AES_BLOCK_SIZE data chunk to process in this
+		 * update */
+		to_process = min_t(u64, total, nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+					NX_PAGE_SIZE * (max_sg_len - 1));
+		to_process = to_process & ~(AES_BLOCK_SIZE - 1);
+		leftover = total - to_process;
+
+		/* the hardware will not accept a 0 byte operation for this
+		 * algorithm and the operation MUST be finalized to be correct.
+		 * So if we happen to get an update that falls on a block sized
+		 * boundary, we must save off the last block to finalize with
+		 * later. */
+		if (!leftover) {
+			to_process -= AES_BLOCK_SIZE;
+			leftover = AES_BLOCK_SIZE;
+		}
+
+		if (sctx->count) {
+			in_sg = nx_build_sg_list(nx_ctx->in_sg,
+						(u8 *) sctx->buffer,
+						sctx->count,
+						max_sg_len);
+		}
+		in_sg = nx_build_sg_list(in_sg,
+					(u8 *) data,
+					to_process - sctx->count,
+					max_sg_len);
 		nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) *
 					sizeof(struct nx_sg);
-	}
 
-	NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		/* we've hit the nx chip previously and we're updating again,
+		 * so copy over the partial digest */
+		if (NX_CPB_FDM(csbcpb) & NX_FDM_CONTINUATION) {
+			memcpy(csbcpb->cpb.aes_xcbc.cv,
+				csbcpb->cpb.aes_xcbc.out_cv_mac,
+				AES_BLOCK_SIZE);
+		}
 
-	if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
-		rc = -EINVAL;
-		goto out;
-	}
+		NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		if (!nx_ctx->op.inlen || !nx_ctx->op.outlen) {
+			rc = -EINVAL;
+			goto out;
+		}
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
 			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if (rc)
+			goto out;
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+
+		/* everything after the first update is continuation */
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
+
+		total -= to_process;
+		data += to_process - sctx->count;
+		sctx->count = 0;
+		in_sg = nx_ctx->in_sg;
+	} while (leftover > AES_BLOCK_SIZE);
 
 	/* copy the leftover back into the state struct */
-	memcpy(sctx->buffer, data + len - leftover, leftover);
+	memcpy(sctx->buffer, data, leftover);
 	sctx->count = leftover;
 
-	/* everything after the first update is continuation */
-	NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 out:
 	spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
 	return rc;
-- 
1.7.12


* [PATCH 07/10] crypto: nx - fix limits to sg lists for AES-CCM
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (5 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 06/10] crypto: nx - fix limits to sg lists for AES-XCBC Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 08/10] crypto: nx - fix XCBC for zero length messages Marcelo Cerri
                   ` (2 subsequent siblings)
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert
  Cc: Joy Latten, Fionnuala Gunter, linuxppc-dev, linux-kernel, linux-crypto

From: Fionnuala Gunter <fin@linux.vnet.ibm.com>

This patch updates the NX driver to perform several hypercalls when
necessary so that the length limits of scatter/gather lists are
respected.

Reviewed-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
Signed-off-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Fionnuala Gunter <fin@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-ccm.c | 297 +++++++++++++++++++++++++++++------------
 1 file changed, 215 insertions(+), 82 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-ccm.c b/drivers/crypto/nx/nx-aes-ccm.c
index 666a35b..5ecd4c2 100644
--- a/drivers/crypto/nx/nx-aes-ccm.c
+++ b/drivers/crypto/nx/nx-aes-ccm.c
@@ -179,13 +179,26 @@ static int generate_pat(u8                   *iv,
 	struct nx_sg *nx_insg = nx_ctx->in_sg;
 	struct nx_sg *nx_outsg = nx_ctx->out_sg;
 	unsigned int iauth_len = 0;
-	struct vio_pfo_op *op = NULL;
 	u8 tmp[16], *b1 = NULL, *b0 = NULL, *result = NULL;
 	int rc;
 
 	/* zero the ctr value */
 	memset(iv + 15 - iv[0], 0, iv[0] + 1);
 
+	/* page 78 of nx_wb.pdf has,
+	 * Note: RFC3610 allows the AAD data to be up to 2^64 -1 bytes
+	 * in length. If a full message is used, the AES CCA implementation
+	 * restricts the maximum AAD length to 2^32 -1 bytes.
+	 * If partial messages are used, the implementation supports
+	 * 2^64 -1 bytes maximum AAD length.
+	 *
+	 * However, in the cryptoapi's aead_request structure,
+	 * assoclen is an unsigned int, thus it cannot hold a length
+	 * value greater than 2^32 - 1.
+	 * Thus the AAD is further constrained by this and is never
+	 * greater than 2^32.
+	 */
+
 	if (!req->assoclen) {
 		b0 = nx_ctx->csbcpb->cpb.aes_ccm.in_pat_or_b0;
 	} else if (req->assoclen <= 14) {
@@ -195,7 +208,46 @@ static int generate_pat(u8                   *iv,
 		b0 = nx_ctx->csbcpb->cpb.aes_ccm.in_pat_or_b0;
 		b1 = nx_ctx->priv.ccm.iauth_tag;
 		iauth_len = req->assoclen;
+	} else if (req->assoclen <= 65280) {
+		/* if associated data is less than (2^16 - 2^8), we construct
+		 * B1 differently and feed in the associated data to a CCA
+		 * operation */
+		b0 = nx_ctx->csbcpb_aead->cpb.aes_cca.b0;
+		b1 = nx_ctx->csbcpb_aead->cpb.aes_cca.b1;
+		iauth_len = 14;
+	} else {
+		b0 = nx_ctx->csbcpb_aead->cpb.aes_cca.b0;
+		b1 = nx_ctx->csbcpb_aead->cpb.aes_cca.b1;
+		iauth_len = 10;
+	}
 
+	/* generate B0 */
+	rc = generate_b0(iv, req->assoclen, authsize, nbytes, b0);
+	if (rc)
+		return rc;
+
+	/* generate B1:
+	 * add control info for associated data
+	 * RFC 3610 and NIST Special Publication 800-38C
+	 */
+	if (b1) {
+		memset(b1, 0, 16);
+		if (req->assoclen <= 65280) {
+			*(u16 *)b1 = (u16)req->assoclen;
+			scatterwalk_map_and_copy(b1 + 2, req->assoc, 0,
+					 iauth_len, SCATTERWALK_FROM_SG);
+		} else {
+			*(u16 *)b1 = (u16)(0xfffe);
+			*(u32 *)&b1[2] = (u32)req->assoclen;
+			scatterwalk_map_and_copy(b1 + 6, req->assoc, 0,
+					 iauth_len, SCATTERWALK_FROM_SG);
+		}
+	}
+
+	/* now copy any remaining AAD to scatterlist and call nx... */
+	if (!req->assoclen) {
+		return rc;
+	} else if (req->assoclen <= 14) {
 		nx_insg = nx_build_sg_list(nx_insg, b1, 16, nx_ctx->ap->sglen);
 		nx_outsg = nx_build_sg_list(nx_outsg, tmp, 16,
 					    nx_ctx->ap->sglen);
@@ -210,56 +262,74 @@ static int generate_pat(u8                   *iv,
 		NX_CPB_FDM(nx_ctx->csbcpb) |= NX_FDM_ENDE_ENCRYPT;
 		NX_CPB_FDM(nx_ctx->csbcpb) |= NX_FDM_INTERMEDIATE;
 
-		op = &nx_ctx->op;
 		result = nx_ctx->csbcpb->cpb.aes_ccm.out_pat_or_mac;
-	} else if (req->assoclen <= 65280) {
-		/* if associated data is less than (2^16 - 2^8), we construct
-		 * B1 differently and feed in the associated data to a CCA
-		 * operation */
-		b0 = nx_ctx->csbcpb_aead->cpb.aes_cca.b0;
-		b1 = nx_ctx->csbcpb_aead->cpb.aes_cca.b1;
-		iauth_len = 14;
-
-		/* remaining assoc data must have scatterlist built for it */
-		nx_insg = nx_walk_and_build(nx_insg, nx_ctx->ap->sglen,
-					    req->assoc, iauth_len,
-					    req->assoclen - iauth_len);
-		nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_insg) *
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			return rc;
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
+
+	} else {
+		u32 max_sg_len;
+		unsigned int processed = 0, to_process;
+
+		/* page_limit: number of sg entries that fit on one page */
+		max_sg_len = min_t(u32,
+				   nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+				   nx_ctx->ap->sglen);
+
+		processed += iauth_len;
+
+		do {
+			to_process = min_t(u32, req->assoclen - processed,
+					   nx_ctx->ap->databytelen);
+			to_process = min_t(u64, to_process,
+					   NX_PAGE_SIZE * (max_sg_len - 1));
+
+			if ((to_process + processed) < req->assoclen) {
+				NX_CPB_FDM(nx_ctx->csbcpb_aead) |=
+					NX_FDM_INTERMEDIATE;
+			} else {
+				NX_CPB_FDM(nx_ctx->csbcpb_aead) &=
+					~NX_FDM_INTERMEDIATE;
+			}
+
+			nx_insg = nx_walk_and_build(nx_ctx->in_sg,
+						    nx_ctx->ap->sglen,
+						    req->assoc, processed,
+						    to_process);
+
+			nx_ctx->op_aead.inlen = (nx_ctx->in_sg - nx_insg) *
 						sizeof(struct nx_sg);
 
-		op = &nx_ctx->op_aead;
+			result = nx_ctx->csbcpb_aead->cpb.aes_cca.out_pat_or_b0;
+
+			rc = nx_hcall_sync(nx_ctx, &nx_ctx->op_aead,
+				   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+			if (rc)
+				return rc;
+
+			memcpy(nx_ctx->csbcpb_aead->cpb.aes_cca.b0,
+				nx_ctx->csbcpb_aead->cpb.aes_cca.out_pat_or_b0,
+				AES_BLOCK_SIZE);
+
+			NX_CPB_FDM(nx_ctx->csbcpb_aead) |= NX_FDM_CONTINUATION;
+
+			atomic_inc(&(nx_ctx->stats->aes_ops));
+			atomic64_add(req->assoclen,
+					&(nx_ctx->stats->aes_bytes));
+
+			processed += to_process;
+		} while (processed < req->assoclen);
+
 		result = nx_ctx->csbcpb_aead->cpb.aes_cca.out_pat_or_b0;
-	} else {
-		/* if associated data is less than (2^32), we construct B1
-		 * differently yet again and feed in the associated data to a
-		 * CCA operation */
-		pr_err("associated data len is %u bytes (returning -EINVAL)\n",
-		       req->assoclen);
-		rc = -EINVAL;
 	}
 
-	rc = generate_b0(iv, req->assoclen, authsize, nbytes, b0);
-	if (rc)
-		goto done;
+	memcpy(out, result, AES_BLOCK_SIZE);
 
-	if (b1) {
-		memset(b1, 0, 16);
-		*(u16 *)b1 = (u16)req->assoclen;
-
-		scatterwalk_map_and_copy(b1 + 2, req->assoc, 0,
-					 iauth_len, SCATTERWALK_FROM_SG);
-
-		rc = nx_hcall_sync(nx_ctx, op,
-				   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-		if (rc)
-			goto done;
-
-		atomic_inc(&(nx_ctx->stats->aes_ops));
-		atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
-
-		memcpy(out, result, AES_BLOCK_SIZE);
-	}
-done:
 	return rc;
 }
 
@@ -272,15 +342,12 @@ static int ccm_nx_decrypt(struct aead_request   *req,
 	unsigned int authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
 	struct nx_ccm_priv *priv = &nx_ctx->priv.ccm;
 	unsigned long irq_flags;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 	int rc = -1;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen) {
-		rc = -EINVAL;
-		goto out;
-	}
-
 	nbytes -= authsize;
 
 	/* copy out the auth tag to compare with later */
@@ -293,22 +360,56 @@ static int ccm_nx_decrypt(struct aead_request   *req,
 	if (rc)
 		goto out;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes, 0,
-			       csbcpb->cpb.aes_ccm.iv_or_ctr);
-	if (rc)
-		goto out;
+	/* page_limit: number of sg entries that fit on one page */
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
 
-	NX_CPB_FDM(nx_ctx->csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
-	NX_CPB_FDM(nx_ctx->csbcpb) &= ~NX_FDM_INTERMEDIATE;
+	do {
 
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+		/* to_process: the AES_BLOCK_SIZE data chunk to process in this
+		 * update. This value is bound by sg list limits.
+		 */
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+
+		if ((to_process + processed) < nbytes)
+			NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		else
+			NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE;
+
+		NX_CPB_FDM(nx_ctx->csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
+
+		rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src,
+					to_process, processed,
+					csbcpb->cpb.aes_ccm.iv_or_ctr);
+		if (rc)
+			goto out;
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
 			   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
+		if (rc)
+			goto out;
 
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
+		/* for partial completion, copy following for next
+		 * entry into loop...
+		 */
+		memcpy(desc->info, csbcpb->cpb.aes_ccm.out_ctr, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_ccm.in_pat_or_b0,
+			csbcpb->cpb.aes_ccm.out_pat_or_mac, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_ccm.in_s0,
+			csbcpb->cpb.aes_ccm.out_s0, AES_BLOCK_SIZE);
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
+
+		/* update stats */
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
 
 	rc = memcmp(csbcpb->cpb.aes_ccm.out_pat_or_mac, priv->oauth_tag,
 		    authsize) ? -EBADMSG : 0;
@@ -325,41 +426,73 @@ static int ccm_nx_encrypt(struct aead_request   *req,
 	unsigned int nbytes = req->cryptlen;
 	unsigned int authsize = crypto_aead_authsize(crypto_aead_reqtfm(req));
 	unsigned long irq_flags;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
 	int rc = -1;
 
 	spin_lock_irqsave(&nx_ctx->lock, irq_flags);
 
-	if (nbytes > nx_ctx->ap->databytelen) {
-		rc = -EINVAL;
-		goto out;
-	}
-
 	rc = generate_pat(desc->info, req, nx_ctx, authsize, nbytes,
 			  csbcpb->cpb.aes_ccm.in_pat_or_b0);
 	if (rc)
 		goto out;
 
-	rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src, nbytes, 0,
-			       csbcpb->cpb.aes_ccm.iv_or_ctr);
-	if (rc)
-		goto out;
-
-	NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
-	NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE;
-
-	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
-			   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
-	if (rc)
-		goto out;
-
-	atomic_inc(&(nx_ctx->stats->aes_ops));
-	atomic64_add(csbcpb->csb.processed_byte_count,
-		     &(nx_ctx->stats->aes_bytes));
+	/* page_limit: number of sg entries that fit on one page */
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
+
+	do {
+		/* to process: the AES_BLOCK_SIZE data chunk to process in this
+		 * update. This value is bound by sg list limits.
+		 */
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+
+		if ((to_process + processed) < nbytes)
+			NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		else
+			NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE;
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
+
+		rc = nx_build_sg_lists(nx_ctx, desc, req->dst, req->src,
+					to_process, processed,
+				       csbcpb->cpb.aes_ccm.iv_or_ctr);
+		if (rc)
+			goto out;
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				   req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
+
+		/* for partial completion, copy following for next
+		 * entry into loop...
+		 */
+		memcpy(desc->info, csbcpb->cpb.aes_ccm.out_ctr, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_ccm.in_pat_or_b0,
+			csbcpb->cpb.aes_ccm.out_pat_or_mac, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_ccm.in_s0,
+			csbcpb->cpb.aes_ccm.out_s0, AES_BLOCK_SIZE);
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
+
+		/* update stats */
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(csbcpb->csb.processed_byte_count,
+			     &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+
+	} while (processed < nbytes);
 
 	/* copy out the auth tag */
 	scatterwalk_map_and_copy(csbcpb->cpb.aes_ccm.out_pat_or_mac,
 				 req->dst, nbytes, authsize,
 				 SCATTERWALK_TO_SG);
+
 out:
 	spin_unlock_irqrestore(&nx_ctx->lock, irq_flags);
 	return rc;
-- 
1.7.12


* [PATCH 08/10] crypto: nx - fix XCBC for zero length messages
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (6 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 07/10] crypto: nx - fix limits to sg lists for AES-CCM Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 09/10] crypto: nx - fix GCM " Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 10/10] crypto: nx - fix SHA-2 for chunks bigger than block size Marcelo Cerri
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

The NX XCBC implementation doesn't support zero-length messages, and
because of that the driver currently returns a hard-coded hash for
them. This approach is incorrect, since the hash value also depends on
which key is used.

This patch removes the hard-coded hash and replaces it with an
implementation based on RFC 3566 that uses ECB (outlined below).
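
In outline, the zero-length MAC requires two ECB encryptions on the NX
engine (per RFC 3566; this matches the comment in the code below):

    K1   = E(K, 0x01 x 16)                 /* first derived subkey */
    K3   = E(K, 0x03 x 16)                 /* third derived subkey */
    M[1] = 0x80 00 ... 00                  /* empty message, padded */
    Tag  = E(K1, M[1] ^ E[0] ^ K3)         /* E[0] is the zero block */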

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-xcbc.c | 84 +++++++++++++++++++++++++++++++++++++----
 1 file changed, 77 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-xcbc.c b/drivers/crypto/nx/nx-aes-xcbc.c
index 1a5d9e3..03c4bf5 100644
--- a/drivers/crypto/nx/nx-aes-xcbc.c
+++ b/drivers/crypto/nx/nx-aes-xcbc.c
@@ -56,6 +56,77 @@ static int nx_xcbc_set_key(struct crypto_shash *desc,
 	return 0;
 }
 
+/*
+ * Based on RFC 3566, for a zero-length message:
+ *
+ * n = 1
+ * K1 = E(K, 0x01010101010101010101010101010101)
+ * K3 = E(K, 0x03030303030303030303030303030303)
+ * E[0] = 0x00000000000000000000000000000000
+ * M[1] = 0x80000000000000000000000000000000 (0 length message with padding)
+ * E[1] = E(K1, M[1] ^ E[0] ^ K3)
+ * Tag = E[1]
+ */
+static int nx_xcbc_empty(struct shash_desc *desc, u8 *out)
+{
+	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(&desc->tfm->base);
+	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+	struct nx_sg *in_sg, *out_sg;
+	u8 keys[2][AES_BLOCK_SIZE];
+	u8 key[32];
+	int rc = 0;
+
+	/* Change to ECB mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_ECB;
+	memcpy(key, csbcpb->cpb.aes_xcbc.key, AES_BLOCK_SIZE);
+	memcpy(csbcpb->cpb.aes_ecb.key, key, AES_BLOCK_SIZE);
+	NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
+
+	/* K1 and K3 base patterns */
+	memset(keys[0], 0x01, sizeof(keys[0]));
+	memset(keys[1], 0x03, sizeof(keys[1]));
+
+	/* Generate K1 and K3 encrypting the patterns */
+	in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *) keys, sizeof(keys),
+				 nx_ctx->ap->sglen);
+	out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *) keys, sizeof(keys),
+				  nx_ctx->ap->sglen);
+	nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+	nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+	if (rc)
+		goto out;
+	atomic_inc(&(nx_ctx->stats->aes_ops));
+
+	/* XOr K3 with the padding for a 0 length message */
+	keys[1][0] ^= 0x80;
+
+	/* Encrypt the final result */
+	memcpy(csbcpb->cpb.aes_ecb.key, keys[0], AES_BLOCK_SIZE);
+	in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *) keys[1], sizeof(keys[1]),
+				 nx_ctx->ap->sglen);
+	out_sg = nx_build_sg_list(nx_ctx->out_sg, out, AES_BLOCK_SIZE,
+				  nx_ctx->ap->sglen);
+	nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+	nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+	if (rc)
+		goto out;
+	atomic_inc(&(nx_ctx->stats->aes_ops));
+
+out:
+	/* Restore XCBC mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_XCBC_MAC;
+	memcpy(csbcpb->cpb.aes_xcbc.key, key, AES_BLOCK_SIZE);
+	NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
+
+	return rc;
+}
+
 static int nx_xcbc_init(struct shash_desc *desc)
 {
 	struct xcbc_state *sctx = shash_desc_ctx(desc);
@@ -201,13 +272,12 @@ static int nx_xcbc_final(struct shash_desc *desc, u8 *out)
 		memcpy(csbcpb->cpb.aes_xcbc.cv,
 		       csbcpb->cpb.aes_xcbc.out_cv_mac, AES_BLOCK_SIZE);
 	} else if (sctx->count == 0) {
-		/* we've never seen an update, so this is a 0 byte op. The
-		 * hardware cannot handle a 0 byte op, so just copy out the
-		 * known 0 byte result. This is cheaper than allocating a
-		 * software context to do a 0 byte op */
-		u8 data[] = { 0x75, 0xf0, 0x25, 0x1d, 0x52, 0x8a, 0xc0, 0x1c,
-			      0x45, 0x73, 0xdf, 0xd5, 0x84, 0xd7, 0x9f, 0x29 };
-		memcpy(out, data, sizeof(data));
+		/*
+		 * we've never seen an update, so this is a 0 byte op. The
+		 * hardware cannot handle a 0 byte op, so just ECB to
+		 * generate the hash.
+		 */
+		rc = nx_xcbc_empty(desc, out);
 		goto out;
 	}
 
-- 
1.7.12


* [PATCH 09/10] crypto: nx - fix GCM for zero length messages
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (7 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 08/10] crypto: nx - fix XCBC for zero length messages Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  2013-08-23 20:01 ` [PATCH 10/10] crypto: nx - fix SHA-2 for chunks bigger than block size Marcelo Cerri
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

The NX GCM implementation doesn't support zero-length messages, and
the current implementation has two flaws:

 - When the input data length is zero, it ignores the associated data.
 - Even when both lengths are zero, it uses the Crypto API to encrypt a
   zeroed block using ctr(aes), allocating a new transformation and
   setting its key in the process. Both operations are intended to be
   used only in user context, while the cryptographic operations can be
   called in both user and softirq contexts.

This patch replaces the nested use of the Crypto API and adds two
special cases (see the note after this list):

 - When both the input data and the associated data have zero length:
   it uses NX ECB mode to emulate the encryption of a zeroed block
   using ctr(aes).
 - When the input data has zero length but associated data is present:
   it uses NX GMAC mode to calculate the MAC of the associated data.
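
For reference, the first case follows directly from the GCM definition:
the tag is MSB_t(GHASH(H, A, C) XOR E(K, J0)), and with empty A and C
the only block fed to GHASH is the all-zero length block len(A)||len(C),
so the GHASH term is zero. The tag therefore reduces to E(K, J0), which
is why a single ECB encryption of the initial counter block is enough.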

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-aes-gcm.c | 132 ++++++++++++++++++++++++++++++++++-------
 1 file changed, 112 insertions(+), 20 deletions(-)

diff --git a/drivers/crypto/nx/nx-aes-gcm.c b/drivers/crypto/nx/nx-aes-gcm.c
index 9e89bdf..025d9a8 100644
--- a/drivers/crypto/nx/nx-aes-gcm.c
+++ b/drivers/crypto/nx/nx-aes-gcm.c
@@ -187,40 +187,125 @@ static int nx_gca(struct nx_crypto_ctx  *nx_ctx,
 	return rc;
 }
 
+static int gmac(struct aead_request *req, struct blkcipher_desc *desc)
+{
+	int rc;
+	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
+	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+	struct nx_sg *nx_sg;
+	unsigned int nbytes = req->assoclen;
+	unsigned int processed = 0, to_process;
+	u32 max_sg_len;
+
+	/* Set GMAC mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_GMAC;
+
+	NX_CPB_FDM(csbcpb) &= ~NX_FDM_CONTINUATION;
+
+	/* page_limit: number of sg entries that fit on one page */
+	max_sg_len = min_t(u32, nx_driver.of.max_sg_len/sizeof(struct nx_sg),
+			   nx_ctx->ap->sglen);
+
+	/* Copy IV */
+	memcpy(csbcpb->cpb.aes_gcm.iv_or_cnt, desc->info, AES_BLOCK_SIZE);
+
+	do {
+		/*
+		 * to_process: the data chunk to process in this update.
+		 * This value is bound by sg list limits.
+		 */
+		to_process = min_t(u64, nbytes - processed,
+				   nx_ctx->ap->databytelen);
+		to_process = min_t(u64, to_process,
+				   NX_PAGE_SIZE * (max_sg_len - 1));
+
+		if ((to_process + processed) < nbytes)
+			NX_CPB_FDM(csbcpb) |= NX_FDM_INTERMEDIATE;
+		else
+			NX_CPB_FDM(csbcpb) &= ~NX_FDM_INTERMEDIATE;
+
+		nx_sg = nx_walk_and_build(nx_ctx->in_sg, nx_ctx->ap->sglen,
+					  req->assoc, processed, to_process);
+		nx_ctx->op.inlen = (nx_ctx->in_sg - nx_sg)
+					* sizeof(struct nx_sg);
+
+		csbcpb->cpb.aes_gcm.bit_length_data = 0;
+		csbcpb->cpb.aes_gcm.bit_length_aad = 8 * nbytes;
+
+		rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+				req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+		if (rc)
+			goto out;
+
+		memcpy(csbcpb->cpb.aes_gcm.in_pat_or_aad,
+			csbcpb->cpb.aes_gcm.out_pat_or_mac, AES_BLOCK_SIZE);
+		memcpy(csbcpb->cpb.aes_gcm.in_s0,
+			csbcpb->cpb.aes_gcm.out_s0, AES_BLOCK_SIZE);
+
+		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
+
+		atomic_inc(&(nx_ctx->stats->aes_ops));
+		atomic64_add(req->assoclen, &(nx_ctx->stats->aes_bytes));
+
+		processed += to_process;
+	} while (processed < nbytes);
+
+out:
+	/* Restore GCM mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_GCM;
+	return rc;
+}
+
 static int gcm_empty(struct aead_request *req, struct blkcipher_desc *desc,
 		     int enc)
 {
 	int rc;
 	struct nx_crypto_ctx *nx_ctx = crypto_tfm_ctx(req->base.tfm);
 	struct nx_csbcpb *csbcpb = nx_ctx->csbcpb;
+	char out[AES_BLOCK_SIZE];
+	struct nx_sg *in_sg, *out_sg;
 
 	/* For scenarios where the input message is zero length, AES CTR mode
 	 * may be used. Set the source data to be a single block (16B) of all
 	 * zeros, and set the input IV value to be the same as the GMAC IV
 	 * value. - nx_wb 4.8.1.3 */
-	char src[AES_BLOCK_SIZE] = {};
-	struct scatterlist sg;
 
-	desc->tfm = crypto_alloc_blkcipher("ctr(aes)", 0, 0);
-	if (IS_ERR(desc->tfm)) {
-		rc = -ENOMEM;
-		goto out;
-	}
-
-	crypto_blkcipher_setkey(desc->tfm, csbcpb->cpb.aes_gcm.key,
-		NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_128 ? 16 :
-		NX_CPB_KEY_SIZE(csbcpb) == NX_KS_AES_192 ? 24 : 32);
-
-	sg_init_one(&sg, src, AES_BLOCK_SIZE);
+	/* Change to ECB mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_ECB;
+	memcpy(csbcpb->cpb.aes_ecb.key, csbcpb->cpb.aes_gcm.key,
+			sizeof(csbcpb->cpb.aes_ecb.key));
 	if (enc)
-		rc = crypto_blkcipher_encrypt_iv(desc, req->dst, &sg,
-						 AES_BLOCK_SIZE);
+		NX_CPB_FDM(csbcpb) |= NX_FDM_ENDE_ENCRYPT;
 	else
-		rc = crypto_blkcipher_decrypt_iv(desc, req->dst, &sg,
-						 AES_BLOCK_SIZE);
-	crypto_free_blkcipher(desc->tfm);
+		NX_CPB_FDM(csbcpb) &= ~NX_FDM_ENDE_ENCRYPT;
 
+	/* Encrypt the counter/IV */
+	in_sg = nx_build_sg_list(nx_ctx->in_sg, (u8 *) desc->info,
+				 AES_BLOCK_SIZE, nx_ctx->ap->sglen);
+	out_sg = nx_build_sg_list(nx_ctx->out_sg, (u8 *) out, sizeof(out),
+				  nx_ctx->ap->sglen);
+	nx_ctx->op.inlen = (nx_ctx->in_sg - in_sg) * sizeof(struct nx_sg);
+	nx_ctx->op.outlen = (nx_ctx->out_sg - out_sg) * sizeof(struct nx_sg);
+
+	rc = nx_hcall_sync(nx_ctx, &nx_ctx->op,
+			   desc->flags & CRYPTO_TFM_REQ_MAY_SLEEP);
+	if (rc)
+		goto out;
+	atomic_inc(&(nx_ctx->stats->aes_ops));
+
+	/* Copy out the auth tag */
+	memcpy(csbcpb->cpb.aes_gcm.out_pat_or_mac, out,
+			crypto_aead_authsize(crypto_aead_reqtfm(req)));
 out:
+	/* Restore GCM mode */
+	csbcpb->cpb.hdr.mode = NX_MODE_AES_GCM;
+
+	/*
+	 * The ECB key uses the same region as the GCM AAD and the counter,
+	 * so it's safe to just fill it with zeroes.
+	 */
+	memset(csbcpb->cpb.aes_ecb.key, 0, sizeof(csbcpb->cpb.aes_ecb.key));
+
 	return rc;
 }
 
@@ -242,8 +327,14 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc)
 	*(u32 *)(desc.info + NX_GCM_CTR_OFFSET) = 1;
 
 	if (nbytes == 0) {
-		rc = gcm_empty(req, &desc, enc);
-		goto out;
+		if (req->assoclen == 0)
+			rc = gcm_empty(req, &desc, enc);
+		else
+			rc = gmac(req, &desc);
+		if (rc)
+			goto out;
+		else
+			goto mac;
 	}
 
 	/* Process associated data */
@@ -310,6 +401,7 @@ static int gcm_aes_nx_crypt(struct aead_request *req, int enc)
 		processed += to_process;
 	} while (processed < nbytes);
 
+mac:
 	if (enc) {
 		/* copy out the auth tag */
 		scatterwalk_map_and_copy(csbcpb->cpb.aes_gcm.out_pat_or_mac,
-- 
1.7.12

^ permalink raw reply related	[flat|nested] 13+ messages in thread
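
The single ECB encryption in gcm_empty() is enough because, with both the
plaintext and the AAD empty, GHASH runs over nothing but the all-zero
length block len(A) || len(C), and GHASH_H(0^128) = 0^128. The tag thus
reduces to

    T = MSB_t(E_K(J0) xor 0^128) = MSB_t(E_K(J0))

so a plain ECB encryption of the IV/counter block J0 (desc->info, with its
counter field preset to 1) matches the CTR-mode construction the old code
used.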

* [PATCH 10/10] crypto: nx - fix SHA-2 for chunks bigger than block size
  2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
                   ` (8 preceding siblings ...)
  2013-08-23 20:01 ` [PATCH 09/10] crypto: nx - fix GCM " Marcelo Cerri
@ 2013-08-23 20:01 ` Marcelo Cerri
  9 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-23 20:01 UTC (permalink / raw)
  To: herbert; +Cc: Marcelo Cerri, linuxppc-dev, linux-kernel, linux-crypto

Each call to the co-processor, with the exception of the last one, must
send an amount of data that is a multiple of the block size. As a
consequence, any remaining data is kept in the internal NX context.

This patch fixes a bug in the driver that caused it to save incorrect
data into the context when the input is bigger than the block size.
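
A toy model of the update bookkeeping makes the off-by-count visible. All
names below (toy_ctx, toy_update) are illustrative stand-ins, not the
driver's structures:

#include <string.h>

#define BLOCK_SIZE 64 /* SHA-256 block size */

/* Toy context: 'count' bytes are buffered from the previous update. */
struct toy_ctx {
	unsigned char buf[BLOCK_SIZE];
	unsigned int count;
};

static void toy_update(struct toy_ctx *sctx,
		       const unsigned char *data, unsigned int len)
{
	unsigned int total = sctx->count + len;
	unsigned int to_process, leftover;

	if (total < BLOCK_SIZE) {
		/* Less than one block so far: just buffer the input. */
		memcpy(sctx->buf + sctx->count, data, len);
		sctx->count = total;
		return;
	}

	/* Only whole blocks are processed; the rest is buffered. */
	to_process = total & ~(BLOCK_SIZE - 1);
	leftover = total - to_process;

	/* ... the driver sends sctx->buf plus the head of 'data',
	 * to_process bytes in total, to the co-processor here ... */

	/*
	 * The fix: to_process includes the sctx->count bytes taken from
	 * the buffer, so only (to_process - sctx->count) bytes were
	 * consumed from 'data'. Advancing by to_process alone (the old
	 * code) skips valid input and buffers the wrong tail.
	 */
	data += to_process - sctx->count;
	memcpy(sctx->buf, data, leftover);
	sctx->count = leftover;
}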

Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
---
 drivers/crypto/nx/nx-sha256.c | 2 +-
 drivers/crypto/nx/nx-sha512.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/crypto/nx/nx-sha256.c b/drivers/crypto/nx/nx-sha256.c
index 6547a71..da0b24a 100644
--- a/drivers/crypto/nx/nx-sha256.c
+++ b/drivers/crypto/nx/nx-sha256.c
@@ -129,7 +129,7 @@ static int nx_sha256_update(struct shash_desc *desc, const u8 *data,
 		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 
 		total -= to_process;
-		data += to_process;
+		data += to_process - sctx->count;
 		sctx->count = 0;
 		in_sg = nx_ctx->in_sg;
 	} while (leftover >= SHA256_BLOCK_SIZE);
diff --git a/drivers/crypto/nx/nx-sha512.c b/drivers/crypto/nx/nx-sha512.c
index 236e6af..4ae5b0f 100644
--- a/drivers/crypto/nx/nx-sha512.c
+++ b/drivers/crypto/nx/nx-sha512.c
@@ -131,7 +131,7 @@ static int nx_sha512_update(struct shash_desc *desc, const u8 *data,
 		NX_CPB_FDM(csbcpb) |= NX_FDM_CONTINUATION;
 
 		total -= to_process;
-		data += to_process;
+		data += to_process - sctx->count[0];
 		sctx->count[0] = 0;
 		in_sg = nx_ctx->in_sg;
 	} while (leftover >= SHA512_BLOCK_SIZE);
-- 
1.7.12

^ permalink raw reply related	[flat|nested] 13+ messages in thread

* Re: [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC
  2013-08-23 20:01 ` [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC Marcelo Cerri
@ 2013-08-29  4:42   ` Herbert Xu
  2013-08-29 14:32     ` Marcelo Cerri
  0 siblings, 1 reply; 13+ messages in thread
From: Herbert Xu @ 2013-08-29  4:42 UTC (permalink / raw)
  To: Marcelo Cerri; +Cc: linuxppc-dev, linux-kernel, linux-crypto

On Fri, Aug 23, 2013 at 05:01:07PM -0300, Marcelo Cerri wrote:
> This patch updates the nx-aes-cbc implementation to perform several
> hyper calls if needed in order to always respect the length limits for
> scatter/gather lists.
> 
> Two different limits are considered:
> 
>  - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
>    list.
> 
>  - "ibm,max-sync-cop":
>     - The total number of bytes that a scatter/gather list can hold.
>     - The maximum number of elements that a scatter/gather list can have.
> 
> Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
> Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>

This patch does not apply against the current cryptodev tree.

Please regenerate your patches.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt

^ permalink raw reply	[flat|nested] 13+ messages in thread
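
For reference, the quoted limits are enforced by the same chunking loop in
every patch of this series. A standalone sketch of that pattern follows;
the names (struct limits, submit_chunk) are hypothetical stand-ins for the
driver's real API:

#include <stdint.h>

#define NX_PAGE_SIZE 4096u

struct limits {
	uint32_t sg_entry_bytes; /* size of one sg descriptor */
	uint32_t max_sg_bytes;   /* "ibm,max-sg-len": bytes per sg list */
	uint32_t max_sg_elems;   /* "ibm,max-sync-cop": element cap */
	uint64_t databytelen;    /* "ibm,max-sync-cop": byte cap */
};

/* Stand-in for one hyper call over [data, data + len). */
static int submit_chunk(const uint8_t *data, uint64_t len, int intermediate)
{
	(void)data; (void)len; (void)intermediate;
	return 0; /* pretend the hyper call succeeded */
}

static int process_all(const struct limits *lim,
		       const uint8_t *data, uint64_t nbytes)
{
	uint64_t processed = 0, to_process, bound;
	uint32_t max_sg_len;
	int rc;

	/* sg entries that fit under both limits, as in the patches. */
	max_sg_len = lim->max_sg_bytes / lim->sg_entry_bytes;
	if (max_sg_len > lim->max_sg_elems)
		max_sg_len = lim->max_sg_elems;

	/* Worst case, each entry covers just under one page of data,
	 * and the co-processor also caps the total bytes per call. */
	bound = (uint64_t)NX_PAGE_SIZE * (max_sg_len - 1);
	if (bound > lim->databytelen)
		bound = lim->databytelen;

	do {
		to_process = nbytes - processed;
		if (to_process > bound)
			to_process = bound;

		/* Every chunk but the last is marked intermediate so
		 * the co-processor chains state into the next call. */
		rc = submit_chunk(data + processed, to_process,
				  processed + to_process < nbytes);
		if (rc)
			return rc;

		processed += to_process;
	} while (processed < nbytes);

	return 0;
}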

* Re: [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC
  2013-08-29  4:42   ` Herbert Xu
@ 2013-08-29 14:32     ` Marcelo Cerri
  0 siblings, 0 replies; 13+ messages in thread
From: Marcelo Cerri @ 2013-08-29 14:32 UTC (permalink / raw)
  To: Herbert Xu; +Cc: linuxppc-dev, linux-kernel, linux-crypto

On Thu, Aug 29, 2013 at 02:42:22PM +1000, Herbert Xu wrote:
> On Fri, Aug 23, 2013 at 05:01:07PM -0300, Marcelo Cerri wrote:
> > This patch updates the nx-aes-cbc implementation to perform several
> > hyper calls if needed in order to always respect the length limits for
> > scatter/gather lists.
> > 
> > Two different limits are considered:
> > 
> >  - "ibm,max-sg-len": maximum number of bytes of each scatter/gather
> >    list.
> > 
> >  - "ibm,max-sync-cop":
> >     - The total number of bytes that a scatter/gather list can hold.
> >     - The maximum number of elements that a scatter/gather list can have.
> > 
> > Reviewed-by: Joy Latten <jmlatten@linux.vnet.ibm.com>
> > Signed-off-by: Marcelo Cerri <mhcerri@linux.vnet.ibm.com>
> 
> This patch does not apply against the current cryptodev tree.
> 
> Please regenerate your pathces.

Sorry about that. I'm sending a v2 series without conflicts.

> 
> Thanks,
> -- 
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
> --
> To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
> 

^ permalink raw reply	[flat|nested] 13+ messages in thread

end of thread, other threads:[~2013-08-29 14:32 UTC | newest]

Thread overview: 13+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2013-08-23 20:01 [PATCH 00/10] Series of fixes for NX driver Marcelo Cerri
2013-08-23 20:01 ` [PATCH 01/10] crypto: nx - add offset to nx_build_sg_lists() Marcelo Cerri
2013-08-23 20:01 ` [PATCH 02/10] crypto: nx - fix limits to sg lists for AES-ECB Marcelo Cerri
2013-08-23 20:01 ` [PATCH 03/10] crypto: nx - fix limits to sg lists for AES-CBC Marcelo Cerri
2013-08-29  4:42   ` Herbert Xu
2013-08-29 14:32     ` Marcelo Cerri
2013-08-23 20:01 ` [PATCH 04/10] crypto: nx - fix limits to sg lists for AES-CTR Marcelo Cerri
2013-08-23 20:01 ` [PATCH 05/10] crypto: nx - fix limits to sg lists for AES-GCM Marcelo Cerri
2013-08-23 20:01 ` [PATCH 06/10] crypto: nx - fix limits to sg lists for AES-XCBC Marcelo Cerri
2013-08-23 20:01 ` [PATCH 07/10] crypto: nx - fix limits to sg lists for AES-CCM Marcelo Cerri
2013-08-23 20:01 ` [PATCH 08/10] crypto: nx - fix XCBC for zero length messages Marcelo Cerri
2013-08-23 20:01 ` [PATCH 09/10] crypto: nx - fix GCM " Marcelo Cerri
2013-08-23 20:01 ` [PATCH 10/10] crypto: nx - fix SHA-2 for chunks bigger than block size Marcelo Cerri

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).