linux-kernel.vger.kernel.org archive mirror
* [PATCH 00/24] staging: ccree: cleanups and simplification
@ 2017-12-12 14:52 Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 01/24] staging: ccree: remove ahash wrappers Gilad Ben-Yossef
                   ` (23 more replies)
  0 siblings, 24 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

More CCREE code cleanup and simplifications, including:
- Drop legacy code supporting synchronous cipher and hash usage
- Drop ifdef'ed out code for features not supported by the HW
- More naming convention and name space cleanup
- Coding style fixes

This patch set goes on top of Dan Carpenter's patch entitled
"staging: ccree: Uninitialized return in ssi_ahash_import()"
sent to the list.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>

Gilad Ben-Yossef (24):
  staging: ccree: remove ahash wrappers
  staging: ccree: fix hash naming convention
  staging: ccree: amend hash func def for readability
  staging: ccree: func params should follow func name
  staging: ccree: shorten parameter name
  staging: ccree: fix func def and decl coding style
  staging: ccree: simplify expression with local var
  staging: ccree: fix func call param indentation
  staging: ccree: fix reg mgr naming convention
  staging: ccree: fix req mgr func def coding style
  staging: ccree: remove cipher sync blkcipher remains
  staging: ccree: fix cipher naming convention
  staging: ccree: fix cipher func def coding style
  staging: ccree: fix ivgen naming convention
  staging: ccree: fix ivgen func def coding style
  staging: ccree: drop unsupported MULTI2 mode code
  staging: ccree: remove SSI_CC_HAS_ macros
  staging: ccree: rename all SSI to CC
  staging: ccree: rename all DX to CC
  staging: ccree: rename vars/structs/enums from ssi_ to cc_
  staging: ccree: fix buf mgr naming convention
  staging: ccree: fix sram mgr naming convention
  staging: ccree: simplify freeing SRAM memory address
  staging: ccree: fix FIPS mgr naming convention

 drivers/staging/ccree/cc_crypto_ctx.h    |  17 -
 drivers/staging/ccree/cc_hw_queue_defs.h |   6 +-
 drivers/staging/ccree/cc_lli_defs.h      |   2 +-
 drivers/staging/ccree/dx_crys_kernel.h   | 314 +++++------
 drivers/staging/ccree/dx_host.h          | 262 +++++-----
 drivers/staging/ccree/dx_reg_common.h    |  10 +-
 drivers/staging/ccree/ssi_aead.c         | 176 +++----
 drivers/staging/ccree/ssi_aead.h         |  20 +-
 drivers/staging/ccree/ssi_buffer_mgr.c   | 320 +++++-------
 drivers/staging/ccree/ssi_buffer_mgr.h   |  36 +-
 drivers/staging/ccree/ssi_cipher.c       | 604 ++++++++--------------
 drivers/staging/ccree/ssi_cipher.h       |  16 +-
 drivers/staging/ccree/ssi_config.h       |  12 +-
 drivers/staging/ccree/ssi_driver.c       | 106 ++--
 drivers/staging/ccree/ssi_driver.h       |  79 ++-
 drivers/staging/ccree/ssi_fips.c         |  22 +-
 drivers/staging/ccree/ssi_fips.h         |  22 +-
 drivers/staging/ccree/ssi_hash.c         | 857 +++++++++++++------------------
 drivers/staging/ccree/ssi_hash.h         |  30 +-
 drivers/staging/ccree/ssi_ivgen.c        |  88 ++--
 drivers/staging/ccree/ssi_ivgen.h        |  24 +-
 drivers/staging/ccree/ssi_pm.c           |  18 +-
 drivers/staging/ccree/ssi_pm.h           |  10 +-
 drivers/staging/ccree/ssi_request_mgr.c  | 136 +++--
 drivers/staging/ccree/ssi_request_mgr.h  |  23 +-
 drivers/staging/ccree/ssi_sram_mgr.c     |  31 +-
 drivers/staging/ccree/ssi_sram_mgr.h     |  29 +-
 drivers/staging/ccree/ssi_sysfs.c        |  22 +-
 drivers/staging/ccree/ssi_sysfs.h        |  10 +-
 29 files changed, 1419 insertions(+), 1883 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 25+ messages in thread

* [PATCH 01/24] staging: ccree: remove ahash wrappers
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 02/24] staging: ccree: fix hash naming convention Gilad Ben-Yossef
                   ` (22 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Remove a no-longer-needed abstraction around the ccree hash crypto API
internals that used to allow the same ops to be used in both synchronous
and asynchronous fashion.
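To illustrate the shape of the pattern being removed (a minimal standalone sketch, not driver code: fake_request, old_hash_digest and new_ahash_digest are made-up names, and the real driver routes completion through the request manager rather than calling the callback inline):

```c
/* Sketch of the dual-mode helper pattern this patch removes: one core
 * function took an opaque async_req and branched on it to serve both
 * synchronous and asynchronous callers. With only async users left,
 * the helper folds into the async entry point and the branches vanish.
 */
#include <assert.h>
#include <stddef.h>

#define EINPROGRESS 115

struct fake_request {
	int nbytes;
	int completed;	/* set by the "completion callback" */
};

static void digest_complete(struct fake_request *req)
{
	req->completed = 1;
}

/* Old shape: behavior selected by async_req being NULL or not. */
static int old_hash_digest(int nbytes, void *async_req)
{
	(void)nbytes;
	if (async_req) {
		/* async path: completion is reported via callback */
		digest_complete(async_req);
		return -EINPROGRESS;
	}
	/* sync path: work is finished inline before returning */
	return 0;
}

/* New shape: only the ahash (async) entry point remains, no branching. */
static int new_ahash_digest(struct fake_request *req)
{
	digest_complete(req);
	return -EINPROGRESS;
}
```

For the async path the two shapes are observably equivalent, which is why the wrapper layer could be dropped once the synchronous callers were gone.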

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_hash.c | 260 ++++++++++++---------------------------
 1 file changed, 76 insertions(+), 184 deletions(-)

diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 0c4f3db..a762eef 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -426,13 +426,15 @@ static void ssi_hash_complete(struct device *dev, void *ssi_req)
 	req->base.complete(&req->base, 0);
 }
 
-static int ssi_hash_digest(struct ahash_req_ctx *state,
-			   struct ssi_hash_ctx *ctx,
-			   unsigned int digestsize,
-			   struct scatterlist *src,
-			   unsigned int nbytes, u8 *result,
-			   void *async_req)
+static int ssi_ahash_digest(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -460,11 +462,9 @@ static int ssi_hash_digest(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_digest_complete;
-		ssi_req.user_arg = (void *)async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_digest_complete;
+	ssi_req.user_arg = req;
 
 	/* If HMAC then load hash IPAD xor key, if HASH then load initial
 	 * digest
@@ -563,44 +563,32 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	/* TODO */
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
 		ssi_hash_unmap_result(dev, state, digestsize, result);
 		ssi_hash_unmap_request(dev, state, ctx);
 	}
 	return rc;
 }
 
-static int ssi_hash_update(struct ahash_req_ctx *state,
-			   struct ssi_hash_ctx *ctx,
-			   unsigned int block_size,
-			   struct scatterlist *src,
-			   unsigned int nbytes,
-			   void *async_req)
+static int ssi_ahash_update(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ssi_crypto_req ssi_req = {};
 	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
@@ -628,11 +616,9 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_update_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_update_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -666,39 +652,29 @@ static int ssi_hash_update(struct ahash_req_ctx *state,
 	hw_desc_init(&desc[idx]);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	set_dout_dlli(&desc[idx], state->digest_bytes_len_dma_addr,
-		      HASH_LEN_SIZE, NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      HASH_LEN_SIZE, NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE1);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
 	}
 	return rc;
 }
 
-static int ssi_hash_finup(struct ahash_req_ctx *state,
-			  struct ssi_hash_ctx *ctx,
-			  unsigned int digestsize,
-			  struct scatterlist *src,
-			  unsigned int nbytes,
-			  u8 *result,
-			  void *async_req)
+static int ssi_ahash_finup(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -718,11 +694,9 @@ static int ssi_hash_finup(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -794,9 +768,8 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	hw_desc_init(&desc[idx]);
 	/* TODO */
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
@@ -804,36 +777,24 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, result);
 	}
 	return rc;
 }
 
-static int ssi_hash_final(struct ahash_req_ctx *state,
-			  struct ssi_hash_ctx *ctx,
-			  unsigned int digestsize,
-			  struct scatterlist *src,
-			  unsigned int nbytes,
-			  u8 *result,
-			  void *async_req)
+static int ssi_ahash_final(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	u32 digestsize = crypto_ahash_digestsize(tfm);
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
@@ -854,11 +815,9 @@ static int ssi_hash_final(struct ahash_req_ctx *state,
 		return -ENOMEM;
 	}
 
-	if (async_req) {
-		/* Setup DX request structure */
-		ssi_req.user_cb = (void *)ssi_hash_complete;
-		ssi_req.user_arg = async_req;
-	}
+	/* Setup DX request structure */
+	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -939,9 +898,8 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	/* Get final MAC result */
 	hw_desc_init(&desc[idx]);
 	set_dout_dlli(&desc[idx], state->digest_result_dma_addr, digestsize,
-		      NS_BIT, (async_req ? 1 : 0));
-	if (async_req)
-		set_queue_last_ind(&desc[idx]);
+		      NS_BIT, 1);
+	set_queue_last_ind(&desc[idx]);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
@@ -949,34 +907,25 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	if (async_req) {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
-		if (rc != -EINPROGRESS) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		}
-	} else {
-		rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
-		if (rc) {
-			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
-			cc_unmap_hash_request(dev, state, src, true);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-		} else {
-			cc_unmap_hash_request(dev, state, src, false);
-			ssi_hash_unmap_result(dev, state, digestsize, result);
-			ssi_hash_unmap_request(dev, state, ctx);
-		}
+	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	if (rc != -EINPROGRESS) {
+		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
+		cc_unmap_hash_request(dev, state, src, true);
+		ssi_hash_unmap_result(dev, state, digestsize, result);
 	}
 	return rc;
 }
 
-static int ssi_hash_init(struct ahash_req_ctx *state, struct ssi_hash_ctx *ctx)
+static int ssi_ahash_init(struct ahash_request *req)
 {
+	struct ahash_req_ctx *state = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
-	state->xcbc_count = 0;
+	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
 
+	state->xcbc_count = 0;
 	ssi_hash_map_request(dev, state, ctx);
 
 	return 0;
@@ -1713,63 +1662,6 @@ static int ssi_mac_digest(struct ahash_request *req)
 	return rc;
 }
 
-//ahash wrap functions
-static int ssi_ahash_digest(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_digest(state, ctx, digestsize, req->src, req->nbytes,
-			       req->result, (void *)req);
-}
-
-static int ssi_ahash_update(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
-
-	return ssi_hash_update(state, ctx, block_size, req->src, req->nbytes,
-			       (void *)req);
-}
-
-static int ssi_ahash_finup(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_finup(state, ctx, digestsize, req->src, req->nbytes,
-			      req->result, (void *)req);
-}
-
-static int ssi_ahash_final(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	u32 digestsize = crypto_ahash_digestsize(tfm);
-
-	return ssi_hash_final(state, ctx, digestsize, req->src, req->nbytes,
-			      req->result, (void *)req);
-}
-
-static int ssi_ahash_init(struct ahash_request *req)
-{
-	struct ahash_req_ctx *state = ahash_request_ctx(req);
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct device *dev = drvdata_to_dev(ctx->drvdata);
-
-	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
-
-	return ssi_hash_init(state, ctx);
-}
-
 static int ssi_ahash_export(struct ahash_request *req, void *out)
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
@@ -1829,7 +1721,7 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in)
 
 	/* call init() to allocate bufs if the user hasn't */
 	if (!state->digest_buff) {
-		rc = ssi_hash_init(state, ctx);
+		rc = ssi_ahash_init(req);
 		if (rc)
 			goto out;
 	}
-- 
2.7.4


* [PATCH 02/24] staging: ccree: fix hash naming convention
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 01/24] staging: ccree: remove ahash wrappers Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 03/24] staging: ccree: amend hash func def for readability Gilad Ben-Yossef
                   ` (21 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The hash files were using a naming convention which was inconsistent
(ssi vs. cc), included a useless prefix (ssi_hash) and often used
overly long function names, producing monsters such as
ssi_ahash_get_initial_digest_len_sram_addr() that made the call sites
hard to read.

Make the code more readable by switching to a simpler, consistent
naming convention throughout these files.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c   |  10 +-
 drivers/staging/ccree/ssi_driver.c |   8 +-
 drivers/staging/ccree/ssi_hash.c   | 494 ++++++++++++++++++-------------------
 drivers/staging/ccree/ssi_hash.h   |  16 +-
 drivers/staging/ccree/ssi_pm.c     |   2 +-
 5 files changed, 257 insertions(+), 273 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 5548c7b..408ea24 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -1023,10 +1023,8 @@ static void cc_set_hmac_desc(struct aead_request *req, struct cc_hw_desc desc[],
 	/* Load init. digest len (64 bytes) */
 	hw_desc_init(&desc[idx]);
 	set_cipher_mode(&desc[idx], hash_mode);
-	set_din_sram(&desc[idx],
-		     ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata,
-								hash_mode),
-								HASH_LEN_SIZE);
+	set_din_sram(&desc[idx], cc_digest_len_addr(ctx->drvdata, hash_mode),
+		     HASH_LEN_SIZE);
 	set_flow_mode(&desc[idx], S_DIN_to_HASH);
 	set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
 	idx++;
@@ -1152,9 +1150,7 @@ static void cc_proc_scheme_desc(struct aead_request *req,
 	/* Load init. digest len (64 bytes) */
 	hw_desc_init(&desc[idx]);
 	set_cipher_mode(&desc[idx], hash_mode);
-	set_din_sram(&desc[idx],
-		     ssi_ahash_get_initial_digest_len_sram_addr(ctx->drvdata,
-								hash_mode),
+	set_din_sram(&desc[idx], cc_digest_len_addr(ctx->drvdata, hash_mode),
 		     HASH_LEN_SIZE);
 	set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED);
 	set_flow_mode(&desc[idx], S_DIN_to_HASH);
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index a0d8eb8..513c5e4 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -358,9 +358,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	}
 
 	/* hash must be allocated before aead since hash exports APIs */
-	rc = ssi_hash_alloc(new_drvdata);
+	rc = cc_hash_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "ssi_hash_alloc failed\n");
+		dev_err(dev, "cc_hash_alloc failed\n");
 		goto post_cipher_err;
 	}
 
@@ -379,7 +379,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	return 0;
 
 post_hash_err:
-	ssi_hash_free(new_drvdata);
+	cc_hash_free(new_drvdata);
 post_cipher_err:
 	ssi_ablkcipher_free(new_drvdata);
 post_ivgen_err:
@@ -417,7 +417,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 		(struct ssi_drvdata *)platform_get_drvdata(plat_dev);
 
 	cc_aead_free(drvdata);
-	ssi_hash_free(drvdata);
+	cc_hash_free(drvdata);
 	ssi_ablkcipher_free(drvdata);
 	ssi_ivgen_fini(drvdata);
 	cc_pm_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index a762eef..6bc42e4 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -31,10 +31,10 @@
 #include "ssi_hash.h"
 #include "ssi_sram_mgr.h"
 
-#define SSI_MAX_AHASH_SEQ_LEN 12
-#define SSI_MAX_OPAD_KEYS_SIZE SSI_MAX_HASH_BLCK_SIZE
+#define CC_MAX_HASH_SEQ_LEN 12
+#define CC_MAX_OPAD_KEYS_SIZE CC_MAX_HASH_BLCK_SIZE
 
-struct ssi_hash_handle {
+struct cc_hash_handle {
 	ssi_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/
 	ssi_sram_addr_t larval_digest_sram_addr;   /* const value in SRAM */
 	struct list_head hash_list;
@@ -64,16 +64,15 @@ static const u64 sha512_init[] = {
 	SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
 #endif
 
-static void ssi_hash_create_xcbc_setup(
+static void cc_setup_xcbc(
 	struct ahash_request *areq,
 	struct cc_hw_desc desc[],
 	unsigned int *seq_size);
 
-static void ssi_hash_create_cmac_setup(struct ahash_request *areq,
-				       struct cc_hw_desc desc[],
-				       unsigned int *seq_size);
+static void cc_setup_cmac(struct ahash_request *areq, struct cc_hw_desc desc[],
+			  unsigned int *seq_size);
 
-struct ssi_hash_alg {
+struct cc_hash_alg {
 	struct list_head entry;
 	int hash_mode;
 	int hw_mode;
@@ -88,13 +87,13 @@ struct hash_key_req_ctx {
 };
 
 /* hash per-session context */
-struct ssi_hash_ctx {
+struct cc_hash_ctx {
 	struct ssi_drvdata *drvdata;
 	/* holds the origin digest; the digest after "setkey" if HMAC,*
 	 * the initial digest if HASH.
 	 */
-	u8 digest_buff[SSI_MAX_HASH_DIGEST_SIZE]  ____cacheline_aligned;
-	u8 opad_tmp_keys_buff[SSI_MAX_OPAD_KEYS_SIZE]  ____cacheline_aligned;
+	u8 digest_buff[CC_MAX_HASH_DIGEST_SIZE]  ____cacheline_aligned;
+	u8 opad_tmp_keys_buff[CC_MAX_OPAD_KEYS_SIZE]  ____cacheline_aligned;
 
 	dma_addr_t opad_tmp_keys_dma_addr  ____cacheline_aligned;
 	dma_addr_t digest_buff_dma_addr;
@@ -107,14 +106,14 @@ struct ssi_hash_ctx {
 	bool is_hmac;
 };
 
-static void ssi_hash_create_data_desc(
+static void cc_set_desc(
 	struct ahash_req_ctx *areq_ctx,
-	struct ssi_hash_ctx *ctx,
+	struct cc_hash_ctx *ctx,
 	unsigned int flow_mode, struct cc_hw_desc desc[],
 	bool is_not_last_data,
 	unsigned int *seq_size);
 
-static void ssi_set_hash_endianity(u32 mode, struct cc_hw_desc *desc)
+static void cc_set_endianity(u32 mode, struct cc_hw_desc *desc)
 {
 	if (mode == DRV_HASH_MD5 || mode == DRV_HASH_SHA384 ||
 	    mode == DRV_HASH_SHA512) {
@@ -124,9 +123,8 @@ static void ssi_set_hash_endianity(u32 mode, struct cc_hw_desc *desc)
 	}
 }
 
-static int ssi_hash_map_result(struct device *dev,
-			       struct ahash_req_ctx *state,
-			       unsigned int digestsize)
+static int cc_map_result(struct device *dev, struct ahash_req_ctx *state,
+			 unsigned int digestsize)
 {
 	state->digest_result_dma_addr =
 		dma_map_single(dev, (void *)state->digest_result_buff,
@@ -144,9 +142,8 @@ static int ssi_hash_map_result(struct device *dev,
 	return 0;
 }
 
-static int ssi_hash_map_request(struct device *dev,
-				struct ahash_req_ctx *state,
-				struct ssi_hash_ctx *ctx)
+static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
+		      struct cc_hash_ctx *ctx)
 {
 	bool is_hmac = ctx->is_hmac;
 	ssi_sram_addr_t larval_digest_addr =
@@ -155,15 +152,15 @@ static int ssi_hash_map_request(struct device *dev,
 	struct cc_hw_desc desc;
 	int rc = -ENOMEM;
 
-	state->buff0 = kzalloc(SSI_MAX_HASH_BLCK_SIZE, GFP_KERNEL | GFP_DMA);
+	state->buff0 = kzalloc(CC_MAX_HASH_BLCK_SIZE, GFP_KERNEL | GFP_DMA);
 	if (!state->buff0)
 		goto fail0;
 
-	state->buff1 = kzalloc(SSI_MAX_HASH_BLCK_SIZE, GFP_KERNEL | GFP_DMA);
+	state->buff1 = kzalloc(CC_MAX_HASH_BLCK_SIZE, GFP_KERNEL | GFP_DMA);
 	if (!state->buff1)
 		goto fail_buff0;
 
-	state->digest_result_buff = kzalloc(SSI_MAX_HASH_DIGEST_SIZE,
+	state->digest_result_buff = kzalloc(CC_MAX_HASH_DIGEST_SIZE,
 					    GFP_KERNEL | GFP_DMA);
 	if (!state->digest_result_buff)
 		goto fail_buff1;
@@ -330,9 +327,8 @@ static int ssi_hash_map_request(struct device *dev,
 	return rc;
 }
 
-static void ssi_hash_unmap_request(struct device *dev,
-				   struct ahash_req_ctx *state,
-				   struct ssi_hash_ctx *ctx)
+static void cc_unmap_req(struct device *dev, struct ahash_req_ctx *state,
+			 struct cc_hash_ctx *ctx)
 {
 	if (state->digest_buff_dma_addr) {
 		dma_unmap_single(dev, state->digest_buff_dma_addr,
@@ -364,9 +360,8 @@ static void ssi_hash_unmap_request(struct device *dev,
 	kfree(state->buff0);
 }
 
-static void ssi_hash_unmap_result(struct device *dev,
-				  struct ahash_req_ctx *state,
-				  unsigned int digestsize, u8 *result)
+static void cc_unmap_result(struct device *dev, struct ahash_req_ctx *state,
+			    unsigned int digestsize, u8 *result)
 {
 	if (state->digest_result_dma_addr) {
 		dma_unmap_single(dev,
@@ -383,7 +378,7 @@ static void ssi_hash_unmap_result(struct device *dev,
 	state->digest_result_dma_addr = 0;
 }
 
-static void ssi_hash_update_complete(struct device *dev, void *ssi_req)
+static void cc_update_complete(struct device *dev, void *ssi_req)
 {
 	struct ahash_request *req = (struct ahash_request *)ssi_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
@@ -394,43 +389,43 @@ static void ssi_hash_update_complete(struct device *dev, void *ssi_req)
 	req->base.complete(&req->base, 0);
 }
 
-static void ssi_hash_digest_complete(struct device *dev, void *ssi_req)
+static void cc_digest_complete(struct device *dev, void *ssi_req)
 {
 	struct ahash_request *req = (struct ahash_request *)ssi_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	dev_dbg(dev, "req=%pK\n", req);
 
 	cc_unmap_hash_request(dev, state, req->src, false);
-	ssi_hash_unmap_result(dev, state, digestsize, req->result);
-	ssi_hash_unmap_request(dev, state, ctx);
+	cc_unmap_result(dev, state, digestsize, req->result);
+	cc_unmap_req(dev, state, ctx);
 	req->base.complete(&req->base, 0);
 }
 
-static void ssi_hash_complete(struct device *dev, void *ssi_req)
+static void cc_hash_complete(struct device *dev, void *ssi_req)
 {
 	struct ahash_request *req = (struct ahash_request *)ssi_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 
 	dev_dbg(dev, "req=%pK\n", req);
 
 	cc_unmap_hash_request(dev, state, req->src, false);
-	ssi_hash_unmap_result(dev, state, digestsize, req->result);
-	ssi_hash_unmap_request(dev, state, ctx);
+	cc_unmap_result(dev, state, digestsize, req->result);
+	cc_unmap_req(dev, state, ctx);
 	req->base.complete(&req->base, 0);
 }
 
-static int ssi_ahash_digest(struct ahash_request *req)
+static int cc_hash_digest(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -438,7 +433,7 @@ static int ssi_ahash_digest(struct ahash_request *req)
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	ssi_sram_addr_t larval_digest_addr =
 		cc_larval_digest_addr(ctx->drvdata, ctx->hash_mode);
 	int idx = 0;
@@ -447,12 +442,12 @@ static int ssi_ahash_digest(struct ahash_request *req)
 	dev_dbg(dev, "===== %s-digest (%d) ====\n", is_hmac ? "hmac" : "hash",
 		nbytes);
 
-	if (ssi_hash_map_request(dev, state, ctx)) {
+	if (cc_map_req(dev, state, ctx)) {
 		dev_err(dev, "map_ahash_source() failed\n");
 		return -ENOMEM;
 	}
 
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
@@ -463,7 +458,7 @@ static int ssi_ahash_digest(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = ssi_hash_digest_complete;
+	ssi_req.user_cb = cc_digest_complete;
 	ssi_req.user_arg = req;
 
 	/* If HMAC then load hash IPAD xor key, if HASH then load initial
@@ -501,7 +496,7 @@ static int ssi_ahash_digest(struct ahash_request *req)
 	set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
 	idx++;
 
-	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+	cc_set_desc(state, ctx, DIN_HASH, desc, false, &idx);
 
 	if (is_hmac) {
 		/* HW last hash block padding (aka. "DO_PAD") */
@@ -520,7 +515,7 @@ static int ssi_ahash_digest(struct ahash_request *req)
 		set_dout_dlli(&desc[idx], state->digest_buff_dma_addr,
 			      digestsize, NS_BIT, 0);
 		set_flow_mode(&desc[idx], S_HASH_to_DOUT);
-		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		cc_set_endianity(ctx->hash_mode, &desc[idx]);
 		set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 		idx++;
 
@@ -537,8 +532,8 @@ static int ssi_ahash_digest(struct ahash_request *req)
 		hw_desc_init(&desc[idx]);
 		set_cipher_mode(&desc[idx], ctx->hw_mode);
 		set_din_sram(&desc[idx],
-			     ssi_ahash_get_initial_digest_len_sram_addr(
-ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+			     cc_digest_len_addr(ctx->drvdata, ctx->hash_mode),
+			     HASH_LEN_SIZE);
 		set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED);
 		set_flow_mode(&desc[idx], S_DIN_to_HASH);
 		set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
@@ -568,30 +563,30 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
-	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	cc_set_endianity(ctx->hash_mode, &desc[idx]);
 	idx++;
 
 	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, result);
-		ssi_hash_unmap_request(dev, state, ctx);
+		cc_unmap_result(dev, state, digestsize, result);
+		cc_unmap_req(dev, state, ctx);
 	}
 	return rc;
 }
 
-static int ssi_ahash_update(struct ahash_request *req)
+static int cc_hash_update(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	u32 idx = 0;
 	int rc;
 
@@ -617,7 +612,7 @@ static int ssi_ahash_update(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = ssi_hash_update_complete;
+	ssi_req.user_cb = cc_update_complete;
 	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
@@ -637,7 +632,7 @@ static int ssi_ahash_update(struct ahash_request *req)
 	set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
 	idx++;
 
-	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+	cc_set_desc(state, ctx, DIN_HASH, desc, false, &idx);
 
 	/* store the hash digest result in context */
 	hw_desc_init(&desc[idx]);
@@ -666,11 +661,11 @@ static int ssi_ahash_update(struct ahash_request *req)
 	return rc;
 }
 
-static int ssi_ahash_finup(struct ahash_request *req)
+static int cc_hash_finup(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -678,7 +673,7 @@ static int ssi_ahash_finup(struct ahash_request *req)
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc;
 
@@ -689,13 +684,13 @@ static int ssi_ahash_finup(struct ahash_request *req)
 		dev_err(dev, "map_ahash_request_final() failed\n");
 		return -ENOMEM;
 	}
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_cb = cc_hash_complete;
 	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
@@ -717,7 +712,7 @@ static int ssi_ahash_finup(struct ahash_request *req)
 	set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
 	idx++;
 
-	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+	cc_set_desc(state, ctx, DIN_HASH, desc, false, &idx);
 
 	if (is_hmac) {
 		/* Store the hash digest result in the context */
@@ -725,7 +720,7 @@ static int ssi_ahash_finup(struct ahash_request *req)
 		set_cipher_mode(&desc[idx], ctx->hw_mode);
 		set_dout_dlli(&desc[idx], state->digest_buff_dma_addr,
 			      digestsize, NS_BIT, 0);
-		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		cc_set_endianity(ctx->hash_mode, &desc[idx]);
 		set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 		set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 		idx++;
@@ -743,8 +738,8 @@ static int ssi_ahash_finup(struct ahash_request *req)
 		hw_desc_init(&desc[idx]);
 		set_cipher_mode(&desc[idx], ctx->hw_mode);
 		set_din_sram(&desc[idx],
-			     ssi_ahash_get_initial_digest_len_sram_addr(
-ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+			     cc_digest_len_addr(ctx->drvdata, ctx->hash_mode),
+			     HASH_LEN_SIZE);
 		set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED);
 		set_flow_mode(&desc[idx], S_DIN_to_HASH);
 		set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
@@ -773,7 +768,7 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
-	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	cc_set_endianity(ctx->hash_mode, &desc[idx]);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
@@ -781,16 +776,16 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, result);
+		cc_unmap_result(dev, state, digestsize, result);
 	}
 	return rc;
 }
 
-static int ssi_ahash_final(struct ahash_request *req)
+static int cc_hash_final(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
@@ -798,7 +793,7 @@ static int ssi_ahash_final(struct ahash_request *req)
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc;
 
@@ -810,13 +805,13 @@ static int ssi_ahash_final(struct ahash_request *req)
 		return -ENOMEM;
 	}
 
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = ssi_hash_complete;
+	ssi_req.user_cb = cc_hash_complete;
 	ssi_req.user_arg = req;
 
 	/* Restore hash digest */
@@ -838,7 +833,7 @@ static int ssi_ahash_final(struct ahash_request *req)
 	set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
 	idx++;
 
-	ssi_hash_create_data_desc(state, ctx, DIN_HASH, desc, false, &idx);
+	cc_set_desc(state, ctx, DIN_HASH, desc, false, &idx);
 
 	/* "DO-PAD" must be enabled only when writing current length to HW */
 	hw_desc_init(&desc[idx]);
@@ -856,7 +851,7 @@ static int ssi_ahash_final(struct ahash_request *req)
 		set_cipher_mode(&desc[idx], ctx->hw_mode);
 		set_dout_dlli(&desc[idx], state->digest_buff_dma_addr,
 			      digestsize, NS_BIT, 0);
-		ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+		cc_set_endianity(ctx->hash_mode, &desc[idx]);
 		set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 		set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 		idx++;
@@ -874,8 +869,8 @@ static int ssi_ahash_final(struct ahash_request *req)
 		hw_desc_init(&desc[idx]);
 		set_cipher_mode(&desc[idx], ctx->hw_mode);
 		set_din_sram(&desc[idx],
-			     ssi_ahash_get_initial_digest_len_sram_addr(
-ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
+			     cc_digest_len_addr(ctx->drvdata, ctx->hash_mode),
+			     HASH_LEN_SIZE);
 		set_cipher_config1(&desc[idx], HASH_PADDING_ENABLED);
 		set_flow_mode(&desc[idx], S_DIN_to_HASH);
 		set_setup_mode(&desc[idx], SETUP_LOAD_KEY0);
@@ -903,7 +898,7 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 	set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
-	ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+	cc_set_endianity(ctx->hash_mode, &desc[idx]);
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
@@ -911,36 +906,36 @@ ctx->drvdata, ctx->hash_mode), HASH_LEN_SIZE);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, result);
+		cc_unmap_result(dev, state, digestsize, result);
 	}
 	return rc;
 }
 
-static int ssi_ahash_init(struct ahash_request *req)
+static int cc_hash_init(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	dev_dbg(dev, "===== init (%d) ====\n", req->nbytes);
 
 	state->xcbc_count = 0;
-	ssi_hash_map_request(dev, state, ctx);
+	cc_map_req(dev, state, ctx);
 
 	return 0;
 }
 
-static int ssi_ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
-			    unsigned int keylen)
+static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
+			  unsigned int keylen)
 {
 	unsigned int hmac_pad_const[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
 	struct ssi_crypto_req ssi_req = {};
-	struct ssi_hash_ctx *ctx = NULL;
+	struct cc_hash_ctx *ctx = NULL;
 	int blocksize = 0;
 	int digestsize = 0;
 	int i, idx = 0, rc = 0;
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	ssi_sram_addr_t larval_addr;
 	struct device *dev;
 
@@ -1006,7 +1001,7 @@ static int ssi_ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
 			set_flow_mode(&desc[idx], S_HASH_to_DOUT);
 			set_setup_mode(&desc[idx], SETUP_WRITE_STATE0);
 			set_cipher_config1(&desc[idx], HASH_PADDING_DISABLED);
-			ssi_set_hash_endianity(ctx->hash_mode, &desc[idx]);
+			cc_set_endianity(ctx->hash_mode, &desc[idx]);
 			idx++;
 
 			hw_desc_init(&desc[idx]);
@@ -1120,14 +1115,14 @@ static int ssi_ahash_setkey(struct crypto_ahash *ahash, const u8 *key,
 	return rc;
 }
 
-static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
-			   const u8 *key, unsigned int keylen)
+static int cc_xcbc_setkey(struct crypto_ahash *ahash,
+			  const u8 *key, unsigned int keylen)
 {
 	struct ssi_crypto_req ssi_req = {};
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	int idx = 0, rc = 0;
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 
 	dev_dbg(dev, "===== setkey (%d) ====\n", keylen);
 
@@ -1203,10 +1198,10 @@ static int ssi_xcbc_setkey(struct crypto_ahash *ahash,
 }
 
 #if SSI_CC_HAS_CMAC
-static int ssi_cmac_setkey(struct crypto_ahash *ahash,
-			   const u8 *key, unsigned int keylen)
+static int cc_cmac_setkey(struct crypto_ahash *ahash,
+			  const u8 *key, unsigned int keylen)
 {
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	dev_dbg(dev, "===== setkey (%d) ====\n", keylen);
@@ -1244,7 +1239,7 @@ static int ssi_cmac_setkey(struct crypto_ahash *ahash,
 }
 #endif
 
-static void ssi_hash_free_ctx(struct ssi_hash_ctx *ctx)
+static void cc_free_ctx(struct cc_hash_ctx *ctx)
 {
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
@@ -1267,7 +1262,7 @@ static void ssi_hash_free_ctx(struct ssi_hash_ctx *ctx)
 	ctx->key_params.keylen = 0;
 }
 
-static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx)
+static int cc_alloc_ctx(struct cc_hash_ctx *ctx)
 {
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
@@ -1303,19 +1298,19 @@ static int ssi_hash_alloc_ctx(struct ssi_hash_ctx *ctx)
 	return 0;
 
 fail:
-	ssi_hash_free_ctx(ctx);
+	cc_free_ctx(ctx);
 	return -ENOMEM;
 }
 
-static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
+static int cc_cra_init(struct crypto_tfm *tfm)
 {
-	struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct hash_alg_common *hash_alg_common =
 		container_of(tfm->__crt_alg, struct hash_alg_common, base);
 	struct ahash_alg *ahash_alg =
 		container_of(hash_alg_common, struct ahash_alg, halg);
-	struct ssi_hash_alg *ssi_alg =
-			container_of(ahash_alg, struct ssi_hash_alg,
+	struct cc_hash_alg *ssi_alg =
+			container_of(ahash_alg, struct cc_hash_alg,
 				     ahash_alg);
 
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
@@ -1326,27 +1321,27 @@ static int ssi_ahash_cra_init(struct crypto_tfm *tfm)
 	ctx->inter_digestsize = ssi_alg->inter_digestsize;
 	ctx->drvdata = ssi_alg->drvdata;
 
-	return ssi_hash_alloc_ctx(ctx);
+	return cc_alloc_ctx(ctx);
 }
 
-static void ssi_hash_cra_exit(struct crypto_tfm *tfm)
+static void cc_cra_exit(struct crypto_tfm *tfm)
 {
-	struct ssi_hash_ctx *ctx = crypto_tfm_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
-	dev_dbg(dev, "ssi_hash_cra_exit");
-	ssi_hash_free_ctx(ctx);
+	dev_dbg(dev, "cc_cra_exit");
+	cc_free_ctx(ctx);
 }
 
-static int ssi_mac_update(struct ahash_request *req)
+static int cc_mac_update(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int rc;
 	u32 idx = 0;
 
@@ -1371,11 +1366,11 @@ static int ssi_mac_update(struct ahash_request *req)
 	}
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC)
-		ssi_hash_create_xcbc_setup(req, desc, &idx);
+		cc_setup_xcbc(req, desc, &idx);
 	else
-		ssi_hash_create_cmac_setup(req, desc, &idx);
+		cc_setup_cmac(req, desc, &idx);
 
-	ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx);
+	cc_set_desc(state, ctx, DIN_AES_DOUT, desc, true, &idx);
 
 	/* store the hash digest result in context */
 	hw_desc_init(&desc[idx]);
@@ -1388,7 +1383,7 @@ static int ssi_mac_update(struct ahash_request *req)
 	idx++;
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)ssi_hash_update_complete;
+	ssi_req.user_cb = (void *)cc_update_complete;
 	ssi_req.user_arg = (void *)req;
 
 	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
@@ -1399,14 +1394,14 @@ static int ssi_mac_update(struct ahash_request *req)
 	return rc;
 }
 
-static int ssi_mac_final(struct ahash_request *req)
+static int cc_mac_final(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc = 0;
 	u32 key_size, key_len;
@@ -1432,13 +1427,13 @@ static int ssi_mac_final(struct ahash_request *req)
 		return -ENOMEM;
 	}
 
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)ssi_hash_complete;
+	ssi_req.user_cb = (void *)cc_hash_complete;
 	ssi_req.user_arg = (void *)req;
 
 	if (state->xcbc_count && rem_cnt == 0) {
@@ -1473,9 +1468,9 @@ static int ssi_mac_final(struct ahash_request *req)
 	}
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC)
-		ssi_hash_create_xcbc_setup(req, desc, &idx);
+		cc_setup_xcbc(req, desc, &idx);
 	else
-		ssi_hash_create_cmac_setup(req, desc, &idx);
+		cc_setup_cmac(req, desc, &idx);
 
 	if (state->xcbc_count == 0) {
 		hw_desc_init(&desc[idx]);
@@ -1485,8 +1480,7 @@ static int ssi_mac_final(struct ahash_request *req)
 		set_flow_mode(&desc[idx], S_DIN_to_AES);
 		idx++;
 	} else if (rem_cnt > 0) {
-		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc,
-					  false, &idx);
+		cc_set_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
 	} else {
 		hw_desc_init(&desc[idx]);
 		set_din_const(&desc[idx], 0x00, CC_AES_BLOCK_SIZE);
@@ -1509,19 +1503,19 @@ static int ssi_mac_final(struct ahash_request *req)
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, req->result);
+		cc_unmap_result(dev, state, digestsize, req->result);
 	}
 	return rc;
 }
 
-static int ssi_mac_finup(struct ahash_request *req)
+static int cc_mac_finup(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc = 0;
 	u32 key_len = 0;
@@ -1530,7 +1524,7 @@ static int ssi_mac_finup(struct ahash_request *req)
 	dev_dbg(dev, "===== finup xcbc(%d) ====\n", req->nbytes);
 	if (state->xcbc_count > 0 && req->nbytes == 0) {
 		dev_dbg(dev, "No data to update. Call to fdx_mac_final\n");
-		return ssi_mac_final(req);
+		return cc_mac_final(req);
 	}
 
 	if (cc_map_hash_request_final(ctx->drvdata, state, req->src,
@@ -1538,21 +1532,21 @@ static int ssi_mac_finup(struct ahash_request *req)
 		dev_err(dev, "map_ahash_request_final() failed\n");
 		return -ENOMEM;
 	}
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)ssi_hash_complete;
+	ssi_req.user_cb = (void *)cc_hash_complete;
 	ssi_req.user_arg = (void *)req;
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
 		key_len = CC_AES_128_BIT_KEY_SIZE;
-		ssi_hash_create_xcbc_setup(req, desc, &idx);
+		cc_setup_xcbc(req, desc, &idx);
 	} else {
 		key_len = ctx->key_params.keylen;
-		ssi_hash_create_cmac_setup(req, desc, &idx);
+		cc_setup_cmac(req, desc, &idx);
 	}
 
 	if (req->nbytes == 0) {
@@ -1563,8 +1557,7 @@ static int ssi_mac_finup(struct ahash_request *req)
 		set_flow_mode(&desc[idx], S_DIN_to_AES);
 		idx++;
 	} else {
-		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc,
-					  false, &idx);
+		cc_set_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
 	}
 
 	/* Get final MAC result */
@@ -1582,31 +1575,31 @@ static int ssi_mac_finup(struct ahash_request *req)
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, req->result);
+		cc_unmap_result(dev, state, digestsize, req->result);
 	}
 	return rc;
 }
 
-static int ssi_mac_digest(struct ahash_request *req)
+static int cc_mac_digest(struct ahash_request *req)
 {
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
 	struct ssi_crypto_req ssi_req = {};
-	struct cc_hw_desc desc[SSI_MAX_AHASH_SEQ_LEN];
+	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	u32 key_len;
 	int idx = 0;
 	int rc;
 
 	dev_dbg(dev, "===== -digest mac (%d) ====\n",  req->nbytes);
 
-	if (ssi_hash_map_request(dev, state, ctx)) {
+	if (cc_map_req(dev, state, ctx)) {
 		dev_err(dev, "map_ahash_source() failed\n");
 		return -ENOMEM;
 	}
-	if (ssi_hash_map_result(dev, state, digestsize)) {
+	if (cc_map_result(dev, state, digestsize)) {
 		dev_err(dev, "map_ahash_digest() failed\n");
 		return -ENOMEM;
 	}
@@ -1618,15 +1611,15 @@ static int ssi_mac_digest(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)ssi_hash_digest_complete;
+	ssi_req.user_cb = (void *)cc_digest_complete;
 	ssi_req.user_arg = (void *)req;
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
 		key_len = CC_AES_128_BIT_KEY_SIZE;
-		ssi_hash_create_xcbc_setup(req, desc, &idx);
+		cc_setup_xcbc(req, desc, &idx);
 	} else {
 		key_len = ctx->key_params.keylen;
-		ssi_hash_create_cmac_setup(req, desc, &idx);
+		cc_setup_cmac(req, desc, &idx);
 	}
 
 	if (req->nbytes == 0) {
@@ -1637,8 +1630,7 @@ static int ssi_mac_digest(struct ahash_request *req)
 		set_flow_mode(&desc[idx], S_DIN_to_AES);
 		idx++;
 	} else {
-		ssi_hash_create_data_desc(state, ctx, DIN_AES_DOUT, desc,
-					  false, &idx);
+		cc_set_desc(state, ctx, DIN_AES_DOUT, desc, false, &idx);
 	}
 
 	/* Get final MAC result */
@@ -1656,16 +1648,16 @@ static int ssi_mac_digest(struct ahash_request *req)
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
-		ssi_hash_unmap_result(dev, state, digestsize, req->result);
-		ssi_hash_unmap_request(dev, state, ctx);
+		cc_unmap_result(dev, state, digestsize, req->result);
+		cc_unmap_req(dev, state, ctx);
 	}
 	return rc;
 }
 
-static int ssi_ahash_export(struct ahash_request *req, void *out)
+static int cc_hash_export(struct ahash_request *req, void *out)
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	u8 *curr_buff = state->buff_index ? state->buff1 : state->buff0;
@@ -1703,10 +1695,10 @@ static int ssi_ahash_export(struct ahash_request *req, void *out)
 	return 0;
 }
 
-static int ssi_ahash_import(struct ahash_request *req, const void *in)
+static int cc_hash_import(struct ahash_request *req, const void *in)
 {
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	u32 tmp;
@@ -1721,7 +1713,7 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in)
 
 	/* call init() to allocate bufs if the user hasn't */
 	if (!state->digest_buff) {
-		rc = ssi_ahash_init(req);
+		rc = cc_hash_init(req);
 		if (rc)
 			goto out;
 	}
@@ -1750,7 +1742,7 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in)
 
 	/* Sanity check the data as much as possible */
 	memcpy(&tmp, in, sizeof(u32));
-	if (tmp > SSI_MAX_HASH_BLCK_SIZE) {
+	if (tmp > CC_MAX_HASH_BLCK_SIZE) {
 		rc = -EINVAL;
 		goto out;
 	}
@@ -1763,7 +1755,7 @@ static int ssi_ahash_import(struct ahash_request *req, const void *in)
 	return rc;
 }
 
-struct ssi_hash_template {
+struct cc_hash_template {
 	char name[CRYPTO_MAX_ALG_NAME];
 	char driver_name[CRYPTO_MAX_ALG_NAME];
 	char mac_name[CRYPTO_MAX_ALG_NAME];
@@ -1778,10 +1770,10 @@ struct ssi_hash_template {
 };
 
 #define CC_STATE_SIZE(_x) \
-	((_x) + HASH_LEN_SIZE + SSI_MAX_HASH_BLCK_SIZE + (2 * sizeof(u32)))
+	((_x) + HASH_LEN_SIZE + CC_MAX_HASH_BLCK_SIZE + (2 * sizeof(u32)))
 
 /* hash descriptors */
-static struct ssi_hash_template driver_hash[] = {
+static struct cc_hash_template driver_hash[] = {
 	//Asynchronize hash template
 	{
 		.name = "sha1",
@@ -1791,14 +1783,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.blocksize = SHA1_BLOCK_SIZE,
 		.synchronize = false,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = SHA1_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(SHA1_DIGEST_SIZE),
@@ -1815,14 +1807,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "hmac-sha256-dx",
 		.blocksize = SHA256_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = SHA256_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(SHA256_DIGEST_SIZE)
@@ -1839,14 +1831,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "hmac-sha224-dx",
 		.blocksize = SHA224_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = SHA224_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(SHA224_DIGEST_SIZE),
@@ -1864,14 +1856,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "hmac-sha384-dx",
 		.blocksize = SHA384_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = SHA384_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(SHA384_DIGEST_SIZE),
@@ -1888,14 +1880,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "hmac-sha512-dx",
 		.blocksize = SHA512_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = SHA512_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(SHA512_DIGEST_SIZE),
@@ -1913,14 +1905,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "hmac-md5-dx",
 		.blocksize = MD5_HMAC_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_ahash_update,
-			.final = ssi_ahash_final,
-			.finup = ssi_ahash_finup,
-			.digest = ssi_ahash_digest,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
-			.setkey = ssi_ahash_setkey,
+			.init = cc_hash_init,
+			.update = cc_hash_update,
+			.final = cc_hash_final,
+			.finup = cc_hash_finup,
+			.digest = cc_hash_digest,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
+			.setkey = cc_hash_setkey,
 			.halg = {
 				.digestsize = MD5_DIGEST_SIZE,
 				.statesize = CC_STATE_SIZE(MD5_DIGEST_SIZE),
@@ -1935,14 +1927,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "xcbc-aes-dx",
 		.blocksize = AES_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_mac_update,
-			.final = ssi_mac_final,
-			.finup = ssi_mac_finup,
-			.digest = ssi_mac_digest,
-			.setkey = ssi_xcbc_setkey,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
+			.init = cc_hash_init,
+			.update = cc_mac_update,
+			.final = cc_mac_final,
+			.finup = cc_mac_finup,
+			.digest = cc_mac_digest,
+			.setkey = cc_xcbc_setkey,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
 			.halg = {
 				.digestsize = AES_BLOCK_SIZE,
 				.statesize = CC_STATE_SIZE(AES_BLOCK_SIZE),
@@ -1958,14 +1950,14 @@ static struct ssi_hash_template driver_hash[] = {
 		.mac_driver_name = "cmac-aes-dx",
 		.blocksize = AES_BLOCK_SIZE,
 		.template_ahash = {
-			.init = ssi_ahash_init,
-			.update = ssi_mac_update,
-			.final = ssi_mac_final,
-			.finup = ssi_mac_finup,
-			.digest = ssi_mac_digest,
-			.setkey = ssi_cmac_setkey,
-			.export = ssi_ahash_export,
-			.import = ssi_ahash_import,
+			.init = cc_hash_init,
+			.update = cc_mac_update,
+			.final = cc_mac_final,
+			.finup = cc_mac_finup,
+			.digest = cc_mac_digest,
+			.setkey = cc_cmac_setkey,
+			.export = cc_hash_export,
+			.import = cc_hash_import,
 			.halg = {
 				.digestsize = AES_BLOCK_SIZE,
 				.statesize = CC_STATE_SIZE(AES_BLOCK_SIZE),
@@ -1979,11 +1971,11 @@ static struct ssi_hash_template driver_hash[] = {
 
 };
 
-static struct ssi_hash_alg *
-ssi_hash_create_alg(struct ssi_hash_template *template, struct device *dev,
-		    bool keyed)
+static struct cc_hash_alg *
+cc_alloc_hash_alg(struct cc_hash_template *template, struct device *dev,
+		  bool keyed)
 {
-	struct ssi_hash_alg *t_crypto_alg;
+	struct cc_hash_alg *t_crypto_alg;
 	struct crypto_alg *alg;
 	struct ahash_alg *halg;
 
@@ -2008,13 +2000,13 @@ ssi_hash_create_alg(struct ssi_hash_template *template, struct device *dev,
 			 template->driver_name);
 	}
 	alg->cra_module = THIS_MODULE;
-	alg->cra_ctxsize = sizeof(struct ssi_hash_ctx);
+	alg->cra_ctxsize = sizeof(struct cc_hash_ctx);
 	alg->cra_priority = SSI_CRA_PRIO;
 	alg->cra_blocksize = template->blocksize;
 	alg->cra_alignmask = 0;
-	alg->cra_exit = ssi_hash_cra_exit;
+	alg->cra_exit = cc_cra_exit;
 
-	alg->cra_init = ssi_ahash_cra_init;
+	alg->cra_init = cc_cra_init;
 	alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_TYPE_AHASH |
 			CRYPTO_ALG_KERN_DRIVER_ONLY;
 	alg->cra_type = &crypto_ahash_type;
@@ -2026,9 +2018,9 @@ ssi_hash_create_alg(struct ssi_hash_template *template, struct device *dev,
 	return t_crypto_alg;
 }
 
-int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata)
+int cc_init_hash_sram(struct ssi_drvdata *drvdata)
 {
-	struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+	struct cc_hash_handle *hash_handle = drvdata->hash_handle;
 	ssi_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr;
 	unsigned int larval_seq_len = 0;
 	struct cc_hw_desc larval_seq[CC_DIGEST_SIZE_MAX / sizeof(u32)];
@@ -2146,9 +2138,9 @@ int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-int ssi_hash_alloc(struct ssi_drvdata *drvdata)
+int cc_hash_alloc(struct ssi_drvdata *drvdata)
 {
-	struct ssi_hash_handle *hash_handle;
+	struct cc_hash_handle *hash_handle;
 	ssi_sram_addr_t sram_buff;
 	u32 sram_size_to_alloc;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -2184,7 +2176,7 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata)
 	hash_handle->digest_len_sram_addr = sram_buff;
 
 	/*must be set before the alg registration as it is being used there*/
-	rc = ssi_hash_init_sram_digest_consts(drvdata);
+	rc = cc_init_hash_sram(drvdata);
 	if (rc) {
 		dev_err(dev, "Init digest CONST failed (rc=%d)\n", rc);
 		goto fail;
@@ -2192,11 +2184,11 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata)
 
 	/* ahash registration */
 	for (alg = 0; alg < ARRAY_SIZE(driver_hash); alg++) {
-		struct ssi_hash_alg *t_alg;
+		struct cc_hash_alg *t_alg;
 		int hw_mode = driver_hash[alg].hw_mode;
 
 		/* register hmac version */
-		t_alg = ssi_hash_create_alg(&driver_hash[alg], dev, true);
+		t_alg = cc_alloc_hash_alg(&driver_hash[alg], dev, true);
 		if (IS_ERR(t_alg)) {
 			rc = PTR_ERR(t_alg);
 			dev_err(dev, "%s alg allocation failed\n",
@@ -2221,7 +2213,7 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata)
 			continue;
 
 		/* register hash version */
-		t_alg = ssi_hash_create_alg(&driver_hash[alg], dev, false);
+		t_alg = cc_alloc_hash_alg(&driver_hash[alg], dev, false);
 		if (IS_ERR(t_alg)) {
 			rc = PTR_ERR(t_alg);
 			dev_err(dev, "%s alg allocation failed\n",
@@ -2249,10 +2241,10 @@ int ssi_hash_alloc(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-int ssi_hash_free(struct ssi_drvdata *drvdata)
+int cc_hash_free(struct ssi_drvdata *drvdata)
 {
-	struct ssi_hash_alg *t_hash_alg, *hash_n;
-	struct ssi_hash_handle *hash_handle = drvdata->hash_handle;
+	struct cc_hash_alg *t_hash_alg, *hash_n;
+	struct cc_hash_handle *hash_handle = drvdata->hash_handle;
 
 	if (hash_handle) {
 		list_for_each_entry_safe(t_hash_alg, hash_n,
@@ -2268,14 +2260,13 @@ int ssi_hash_free(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-static void ssi_hash_create_xcbc_setup(struct ahash_request *areq,
-				       struct cc_hw_desc desc[],
-				       unsigned int *seq_size)
+static void cc_setup_xcbc(struct ahash_request *areq, struct cc_hw_desc desc[],
+			  unsigned int *seq_size)
 {
 	unsigned int idx = *seq_size;
 	struct ahash_req_ctx *state = ahash_request_ctx(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 
 	/* Setup XCBC MAC K1 */
 	hw_desc_init(&desc[idx]);
@@ -2326,14 +2317,13 @@ static void ssi_hash_create_xcbc_setup(struct ahash_request *areq,
 	*seq_size = idx;
 }
 
-static void ssi_hash_create_cmac_setup(struct ahash_request *areq,
-				       struct cc_hw_desc desc[],
-				       unsigned int *seq_size)
+static void cc_setup_cmac(struct ahash_request *areq, struct cc_hw_desc desc[],
+			  unsigned int *seq_size)
 {
 	unsigned int idx = *seq_size;
 	struct ahash_req_ctx *state = ahash_request_ctx(areq);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
-	struct ssi_hash_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 
 	/* Setup CMAC Key */
 	hw_desc_init(&desc[idx]);
@@ -2360,12 +2350,10 @@ static void ssi_hash_create_cmac_setup(struct ahash_request *areq,
 	*seq_size = idx;
 }
 
-static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx,
-				      struct ssi_hash_ctx *ctx,
-				      unsigned int flow_mode,
-				      struct cc_hw_desc desc[],
-				      bool is_not_last_data,
-				      unsigned int *seq_size)
+static void cc_set_desc(struct ahash_req_ctx *areq_ctx,
+			struct cc_hash_ctx *ctx, unsigned int flow_mode,
+			struct cc_hw_desc desc[], bool is_not_last_data,
+			unsigned int *seq_size)
 {
 	unsigned int idx = *seq_size;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
@@ -2418,7 +2406,7 @@ static void ssi_hash_create_data_desc(struct ahash_req_ctx *areq_ctx,
 ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
 {
 	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
-	struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+	struct cc_hash_handle *hash_handle = _drvdata->hash_handle;
 	struct device *dev = drvdata_to_dev(_drvdata);
 
 	switch (mode) {
@@ -2462,10 +2450,10 @@ ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
 }
 
 ssi_sram_addr_t
-ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, u32 mode)
+cc_digest_len_addr(void *drvdata, u32 mode)
 {
 	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
-	struct ssi_hash_handle *hash_handle = _drvdata->hash_handle;
+	struct cc_hash_handle *hash_handle = _drvdata->hash_handle;
 	ssi_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr;
 
 	switch (mode) {
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
index 32eb473..ade4119 100644
--- a/drivers/staging/ccree/ssi_hash.h
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -27,12 +27,12 @@
 #define HMAC_OPAD_CONST	0x5C5C5C5C
 #if (DX_DEV_SHA_MAX > 256)
 #define HASH_LEN_SIZE 16
-#define SSI_MAX_HASH_DIGEST_SIZE	SHA512_DIGEST_SIZE
-#define SSI_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE
+#define CC_MAX_HASH_DIGEST_SIZE	SHA512_DIGEST_SIZE
+#define CC_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE
 #else
 #define HASH_LEN_SIZE 8
-#define SSI_MAX_HASH_DIGEST_SIZE	SHA256_DIGEST_SIZE
-#define SSI_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE
+#define CC_MAX_HASH_DIGEST_SIZE	SHA256_DIGEST_SIZE
+#define CC_MAX_HASH_BLCK_SIZE SHA256_BLOCK_SIZE
 #endif
 
 #define XCBC_MAC_K1_OFFSET 0
@@ -75,9 +75,9 @@ struct ahash_req_ctx {
 	struct mlli_params mlli_params;
 };
 
-int ssi_hash_alloc(struct ssi_drvdata *drvdata);
-int ssi_hash_init_sram_digest_consts(struct ssi_drvdata *drvdata);
-int ssi_hash_free(struct ssi_drvdata *drvdata);
+int cc_hash_alloc(struct ssi_drvdata *drvdata);
+int cc_init_hash_sram(struct ssi_drvdata *drvdata);
+int cc_hash_free(struct ssi_drvdata *drvdata);
 
 /*!
  * Gets the initial digest length
@@ -89,7 +89,7 @@ int ssi_hash_free(struct ssi_drvdata *drvdata);
  * \return u32 returns the address of the initial digest length in SRAM
  */
 ssi_sram_addr_t
-ssi_ahash_get_initial_digest_len_sram_addr(void *drvdata, u32 mode);
+cc_digest_len_addr(void *drvdata, u32 mode);
 
 /*!
  * Gets the address of the initial digest in SRAM
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index 5e2ef5e..d1a6318 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -79,7 +79,7 @@ int cc_pm_resume(struct device *dev)
 	}
 
 	/* must be after the queue resuming as it uses the HW queue*/
-	ssi_hash_init_sram_digest_consts(drvdata);
+	cc_init_hash_sram(drvdata);
 
 	ssi_ivgen_init_sram_pool(drvdata);
 	return 0;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 03/24] staging: ccree: amend hash func def for readability
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 01/24] staging: ccree: remove ahash wrappers Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 02/24] staging: ccree: fix hash naming convention Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 04/24] staging: ccree: func params should follow func name Gilad Ben-Yossef
                   ` (20 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Function definitions in the hash implementation did not adhere to the
coding style. Fix them for better readability.
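The style rule being applied is the one checkpatch.pl enforces with its "Alignment should match open parenthesis" check: when a function definition must wrap, continuation parameters align under the first parameter rather than each starting its own line. A compilable sketch of the preferred layout — the function name and parameters here are invented for illustration, not taken from the driver:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Wrapped definition: the continuation line aligns with the opening
 * parenthesis, so every parameter is visible at the left edge of the
 * parameter list while the definition stays compact.
 */
static unsigned int cc_sum_words(const unsigned int *words, size_t nwords,
				 unsigned int seed)
{
	unsigned int sum = seed;
	size_t i;

	for (i = 0; i < nwords; i++)
		sum += words[i];

	return sum;
}
```

Compared with one-parameter-per-line wrapping, this keeps the definition to the minimum number of lines, which is what the hunks below do mechanically.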

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_hash.c | 20 +++++++-------------
 1 file changed, 7 insertions(+), 13 deletions(-)

diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 6bc42e4..a80279e 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -64,10 +64,8 @@ static const u64 sha512_init[] = {
 	SHA512_H3, SHA512_H2, SHA512_H1, SHA512_H0 };
 #endif
 
-static void cc_setup_xcbc(
-	struct ahash_request *areq,
-	struct cc_hw_desc desc[],
-	unsigned int *seq_size);
+static void cc_setup_xcbc(struct ahash_request *areq, struct cc_hw_desc desc[],
+			  unsigned int *seq_size);
 
 static void cc_setup_cmac(struct ahash_request *areq, struct cc_hw_desc desc[],
 			  unsigned int *seq_size);
@@ -106,12 +104,9 @@ struct cc_hash_ctx {
 	bool is_hmac;
 };
 
-static void cc_set_desc(
-	struct ahash_req_ctx *areq_ctx,
-	struct cc_hash_ctx *ctx,
-	unsigned int flow_mode, struct cc_hw_desc desc[],
-	bool is_not_last_data,
-	unsigned int *seq_size);
+static void cc_set_desc(struct ahash_req_ctx *areq_ctx, struct cc_hash_ctx *ctx,
+			unsigned int flow_mode, struct cc_hw_desc desc[],
+			bool is_not_last_data, unsigned int *seq_size);
 
 static void cc_set_endianity(u32 mode, struct cc_hw_desc *desc)
 {
@@ -1971,9 +1966,8 @@ static struct cc_hash_template driver_hash[] = {
 
 };
 
-static struct cc_hash_alg *
-cc_alloc_hash_alg(struct cc_hash_template *template, struct device *dev,
-		  bool keyed)
+static struct cc_hash_alg *cc_alloc_hash_alg(struct cc_hash_template *template,
+					     struct device *dev, bool keyed)
 {
 	struct cc_hash_alg *t_crypto_alg;
 	struct crypto_alg *alg;
-- 
2.7.4


* [PATCH 04/24] staging: ccree: func params should follow func name
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (2 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 03/24] staging: ccree: amend hash func def for readability Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 05/24] staging: ccree: shorten parameter name Gilad Ben-Yossef
                   ` (19 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix call sites in the AEAD code where the function parameters did not
follow the function name.
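The pattern fixed here: when a call used as an initializer does not fit the line, break after the `=` so the opening parenthesis stays attached to the function name and the first argument follows it, instead of splitting the call right after the parenthesis. A minimal sketch with invented names (not the driver's real API):

```c
#include <assert.h>

/* Stand-in for the real flow-mode helper; names are hypothetical. */
static unsigned int cc_mode(int direct, unsigned int flow, int single_pass)
{
	return (unsigned int)direct + flow + (single_pass ? 1U : 0U);
}

static unsigned int pick_mode(int direct, unsigned int flow, int single_pass)
{
	/*
	 * Preferred: break after '=' so the arguments follow the
	 * function name on the same line.
	 */
	unsigned int data_flow_mode =
		cc_mode(direct, flow, single_pass);

	return data_flow_mode;
}
```

Both layouts compile identically; the change is purely about keeping the call site readable within the 80-column limit.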

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 408ea24..75a578e 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -1226,8 +1226,9 @@ static void cc_hmac_authenc(struct aead_request *req, struct cc_hw_desc desc[],
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
 	int direct = req_ctx->gen_ctx.op_type;
-	unsigned int data_flow_mode = cc_get_data_flow(
-		direct, ctx->flow_mode, req_ctx->is_single_pass);
+	unsigned int data_flow_mode =
+		cc_get_data_flow(direct, ctx->flow_mode,
+				 req_ctx->is_single_pass);
 
 	if (req_ctx->is_single_pass) {
 		/**
@@ -1278,8 +1279,9 @@ cc_xcbc_authenc(struct aead_request *req, struct cc_hw_desc desc[],
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct aead_req_ctx *req_ctx = aead_request_ctx(req);
 	int direct = req_ctx->gen_ctx.op_type;
-	unsigned int data_flow_mode = cc_get_data_flow(
-		direct, ctx->flow_mode, req_ctx->is_single_pass);
+	unsigned int data_flow_mode =
+		cc_get_data_flow(direct, ctx->flow_mode,
+				 req_ctx->is_single_pass);
 
 	if (req_ctx->is_single_pass) {
 		/**
-- 
2.7.4


* [PATCH 05/24] staging: ccree: shorten parameter name
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (3 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 04/24] staging: ccree: func params should follow func name Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 06/24] staging: ccree: fix func def and decl coding style Gilad Ben-Yossef
                   ` (18 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Shorten a parameter name for better code readability.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 75a578e..62d45e9 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -2687,7 +2687,7 @@ static struct ssi_alg_template aead_algs[] = {
 };
 
 static struct ssi_crypto_alg *cc_create_aead_alg(
-			struct ssi_alg_template *template,
+			struct ssi_alg_template *tmpl,
 			struct device *dev)
 {
 	struct ssi_crypto_alg *t_alg;
@@ -2697,26 +2697,26 @@ static struct ssi_crypto_alg *cc_create_aead_alg(
 	if (!t_alg)
 		return ERR_PTR(-ENOMEM);
 
-	alg = &template->template_aead;
+	alg = &tmpl->template_aead;
 
 	snprintf(alg->base.cra_name, CRYPTO_MAX_ALG_NAME, "%s",
-		 template->name);
+		 tmpl->name);
 	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
-		 template->driver_name);
+		 tmpl->driver_name);
 	alg->base.cra_module = THIS_MODULE;
 	alg->base.cra_priority = SSI_CRA_PRIO;
 
 	alg->base.cra_ctxsize = sizeof(struct cc_aead_ctx);
 	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
-			 template->type;
+			 tmpl->type;
 	alg->init = cc_aead_init;
 	alg->exit = cc_aead_exit;
 
 	t_alg->aead_alg = *alg;
 
-	t_alg->cipher_mode = template->cipher_mode;
-	t_alg->flow_mode = template->flow_mode;
-	t_alg->auth_mode = template->auth_mode;
+	t_alg->cipher_mode = tmpl->cipher_mode;
+	t_alg->flow_mode = tmpl->flow_mode;
+	t_alg->auth_mode = tmpl->auth_mode;
 
 	return t_alg;
 }
-- 
2.7.4


* [PATCH 06/24] staging: ccree: fix func def and decl coding style
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (4 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 05/24] staging: ccree: shorten parameter name Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 07/24] staging: ccree: simplify expression with local var Gilad Ben-Yossef
                   ` (17 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix function definition and declaration indentation according to the
coding style guidelines for better code readability.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c       |   5 +-
 drivers/staging/ccree/ssi_buffer_mgr.c | 144 +++++++++++++--------------------
 drivers/staging/ccree/ssi_sram_mgr.h   |   7 +-
 3 files changed, 63 insertions(+), 93 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 62d45e9..112fba3 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -2686,9 +2686,8 @@ static struct ssi_alg_template aead_algs[] = {
 #endif /*SSI_CC_HAS_AES_GCM*/
 };
 
-static struct ssi_crypto_alg *cc_create_aead_alg(
-			struct ssi_alg_template *tmpl,
-			struct device *dev)
+static struct ssi_crypto_alg *cc_create_aead_alg(struct ssi_alg_template *tmpl,
+						 struct device *dev)
 {
 	struct ssi_crypto_alg *t_alg;
 	struct aead_alg *alg;
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index c5bc027..099d83d 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -100,9 +100,10 @@ static void cc_copy_mac(struct device *dev, struct aead_request *req,
  * @nbytes: [IN] Total SGL data bytes.
  * @lbytes: [OUT] Returns the amount of bytes at the last entry
  */
-static unsigned int cc_get_sgl_nents(
-	struct device *dev, struct scatterlist *sg_list,
-	unsigned int nbytes, u32 *lbytes, bool *is_chained)
+static unsigned int cc_get_sgl_nents(struct device *dev,
+				     struct scatterlist *sg_list,
+				     unsigned int nbytes, u32 *lbytes,
+				     bool *is_chained)
 {
 	unsigned int nents = 0;
 
@@ -155,10 +156,8 @@ void cc_zero_sgl(struct scatterlist *sgl, u32 data_len)
  * @end:
  * @direct:
  */
-void cc_copy_sg_portion(
-	struct device *dev, u8 *dest,
-	struct scatterlist *sg, u32 to_skip,
-	u32 end, enum ssi_sg_cpy_direct direct)
+void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
+			u32 to_skip, u32 end, enum ssi_sg_cpy_direct direct)
 {
 	u32 nents, lbytes;
 
@@ -167,9 +166,9 @@ void cc_copy_sg_portion(
 		       (direct == SSI_SG_TO_BUF));
 }
 
-static int cc_render_buff_to_mlli(
-	struct device *dev, dma_addr_t buff_dma, u32 buff_size,
-	u32 *curr_nents, u32 **mlli_entry_pp)
+static int cc_render_buff_to_mlli(struct device *dev, dma_addr_t buff_dma,
+				  u32 buff_size, u32 *curr_nents,
+				  u32 **mlli_entry_pp)
 {
 	u32 *mlli_entry_p = *mlli_entry_pp;
 	u32 new_nents;
@@ -203,10 +202,9 @@ static int cc_render_buff_to_mlli(
 	return 0;
 }
 
-static int cc_render_sg_to_mlli(
-	struct device *dev, struct scatterlist *sgl,
-	u32 sgl_data_len, u32 sgl_offset, u32 *curr_nents,
-	u32 **mlli_entry_pp)
+static int cc_render_sg_to_mlli(struct device *dev, struct scatterlist *sgl,
+				u32 sgl_data_len, u32 sgl_offset,
+				u32 *curr_nents, u32 **mlli_entry_pp)
 {
 	struct scatterlist *curr_sgl = sgl;
 	u32 *mlli_entry_p = *mlli_entry_pp;
@@ -231,10 +229,8 @@ static int cc_render_sg_to_mlli(
 	return 0;
 }
 
-static int cc_generate_mlli(
-	struct device *dev,
-	struct buffer_array *sg_data,
-	struct mlli_params *mlli_params)
+static int cc_generate_mlli(struct device *dev, struct buffer_array *sg_data,
+			    struct mlli_params *mlli_params)
 {
 	u32 *mlli_p;
 	u32 total_nents = 0, prev_total_nents = 0;
@@ -292,10 +288,10 @@ static int cc_generate_mlli(
 	return rc;
 }
 
-static void cc_add_buffer_entry(
-	struct device *dev, struct buffer_array *sgl_data,
-	dma_addr_t buffer_dma, unsigned int buffer_len,
-	bool is_last_entry, u32 *mlli_nents)
+static void cc_add_buffer_entry(struct device *dev,
+				struct buffer_array *sgl_data,
+				dma_addr_t buffer_dma, unsigned int buffer_len,
+				bool is_last_entry, u32 *mlli_nents)
 {
 	unsigned int index = sgl_data->num_of_buffers;
 
@@ -313,15 +309,10 @@ static void cc_add_buffer_entry(
 	sgl_data->num_of_buffers++;
 }
 
-static void cc_add_sg_entry(
-	struct device *dev,
-	struct buffer_array *sgl_data,
-	unsigned int nents,
-	struct scatterlist *sgl,
-	unsigned int data_len,
-	unsigned int data_offset,
-	bool is_last_table,
-	u32 *mlli_nents)
+static void cc_add_sg_entry(struct device *dev, struct buffer_array *sgl_data,
+			    unsigned int nents, struct scatterlist *sgl,
+			    unsigned int data_len, unsigned int data_offset,
+			    bool is_last_table, u32 *mlli_nents)
 {
 	unsigned int index = sgl_data->num_of_buffers;
 
@@ -339,9 +330,8 @@ static void cc_add_sg_entry(
 	sgl_data->num_of_buffers++;
 }
 
-static int
-cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
-	      enum dma_data_direction direction)
+static int cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
+			 enum dma_data_direction direction)
 {
 	u32 i, j;
 	struct scatterlist *l_sg = sg;
@@ -368,11 +358,9 @@ cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
 	return 0;
 }
 
-static int cc_map_sg(
-	struct device *dev, struct scatterlist *sg,
-	unsigned int nbytes, int direction,
-	u32 *nents, u32 max_sg_nents,
-	u32 *lbytes, u32 *mapped_nents)
+static int cc_map_sg(struct device *dev, struct scatterlist *sg,
+		     unsigned int nbytes, int direction, u32 *nents,
+		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
 {
 	bool is_chained = false;
 
@@ -478,12 +466,9 @@ static int ssi_ahash_handle_curr_buf(struct device *dev,
 	return 0;
 }
 
-void cc_unmap_blkcipher_request(
-	struct device *dev,
-	void *ctx,
-	unsigned int ivsize,
-	struct scatterlist *src,
-	struct scatterlist *dst)
+void cc_unmap_blkcipher_request(struct device *dev, void *ctx,
+				unsigned int ivsize, struct scatterlist *src,
+				struct scatterlist *dst)
 {
 	struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
 
@@ -511,14 +496,10 @@ void cc_unmap_blkcipher_request(
 	}
 }
 
-int cc_map_blkcipher_request(
-	struct ssi_drvdata *drvdata,
-	void *ctx,
-	unsigned int ivsize,
-	unsigned int nbytes,
-	void *info,
-	struct scatterlist *src,
-	struct scatterlist *dst)
+int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
+			     unsigned int ivsize, unsigned int nbytes,
+			     void *info, struct scatterlist *src,
+			     struct scatterlist *dst)
 {
 	struct blkcipher_req_ctx *req_ctx = (struct blkcipher_req_ctx *)ctx;
 	struct mlli_params *mlli_params = &req_ctx->mlli_params;
@@ -704,13 +685,10 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 	}
 }
 
-static int cc_get_aead_icv_nents(
-	struct device *dev,
-	struct scatterlist *sgl,
-	unsigned int sgl_nents,
-	unsigned int authsize,
-	u32 last_entry_data_size,
-	bool *is_icv_fragmented)
+static int cc_get_aead_icv_nents(struct device *dev, struct scatterlist *sgl,
+				 unsigned int sgl_nents, unsigned int authsize,
+				 u32 last_entry_data_size,
+				 bool *is_icv_fragmented)
 {
 	unsigned int icv_max_size = 0;
 	unsigned int icv_required_size = authsize > last_entry_data_size ?
@@ -758,11 +736,10 @@ static int cc_get_aead_icv_nents(
 	return nents;
 }
 
-static int cc_aead_chain_iv(
-	struct ssi_drvdata *drvdata,
-	struct aead_request *req,
-	struct buffer_array *sg_data,
-	bool is_last, bool do_chain)
+static int cc_aead_chain_iv(struct ssi_drvdata *drvdata,
+			    struct aead_request *req,
+			    struct buffer_array *sg_data,
+			    bool is_last, bool do_chain)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
@@ -803,11 +780,10 @@ static int cc_aead_chain_iv(
 	return rc;
 }
 
-static int cc_aead_chain_assoc(
-	struct ssi_drvdata *drvdata,
-	struct aead_request *req,
-	struct buffer_array *sg_data,
-	bool is_last, bool do_chain)
+static int cc_aead_chain_assoc(struct ssi_drvdata *drvdata,
+			       struct aead_request *req,
+			       struct buffer_array *sg_data,
+			       bool is_last, bool do_chain)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	int rc = 0;
@@ -895,9 +871,8 @@ static int cc_aead_chain_assoc(
 	return rc;
 }
 
-static void cc_prepare_aead_data_dlli(
-	struct aead_request *req,
-	u32 *src_last_bytes, u32 *dst_last_bytes)
+static void cc_prepare_aead_data_dlli(struct aead_request *req,
+				      u32 *src_last_bytes, u32 *dst_last_bytes)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
@@ -931,12 +906,11 @@ static void cc_prepare_aead_data_dlli(
 	}
 }
 
-static int cc_prepare_aead_data_mlli(
-	struct ssi_drvdata *drvdata,
-	struct aead_request *req,
-	struct buffer_array *sg_data,
-	u32 *src_last_bytes, u32 *dst_last_bytes,
-	bool is_last_table)
+static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
+				     struct aead_request *req,
+				     struct buffer_array *sg_data,
+				     u32 *src_last_bytes, u32 *dst_last_bytes,
+				     bool is_last_table)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	enum drv_crypto_direction direct = areq_ctx->gen_ctx.op_type;
@@ -1066,11 +1040,10 @@ static int cc_prepare_aead_data_mlli(
 	return rc;
 }
 
-static int cc_aead_chain_data(
-	struct ssi_drvdata *drvdata,
-	struct aead_request *req,
-	struct buffer_array *sg_data,
-	bool is_last_table, bool do_chain)
+static int cc_aead_chain_data(struct ssi_drvdata *drvdata,
+			      struct aead_request *req,
+			      struct buffer_array *sg_data,
+			      bool is_last_table, bool do_chain)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -1238,8 +1211,7 @@ static void cc_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
 	}
 }
 
-int cc_map_aead_request(
-	struct ssi_drvdata *drvdata, struct aead_request *req)
+int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	struct mlli_params *mlli_params = &areq_ctx->mlli_params;
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
index 9e39262..76719ec 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.h
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -71,9 +71,8 @@ ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size);
  * @seq:	  A pointer to the given IN/OUT descriptor sequence
  * @seq_len:	  A pointer to the given IN/OUT sequence length
  */
-void cc_set_sram_desc(
-	const u32 *src, ssi_sram_addr_t dst,
-	unsigned int nelement,
-	struct cc_hw_desc *seq, unsigned int *seq_len);
+void cc_set_sram_desc(const u32 *src, ssi_sram_addr_t dst,
+		      unsigned int nelement, struct cc_hw_desc *seq,
+		      unsigned int *seq_len);
 
 #endif /*__SSI_SRAM_MGR_H__*/
-- 
2.7.4


* [PATCH 07/24] staging: ccree: simplify expression with local var
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (5 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 06/24] staging: ccree: fix func def and decl coding style Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 08/24] staging: ccree: fix func call param indentation Gilad Ben-Yossef
                   ` (16 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Simplify expression by using a local variable for better code
readability.
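The transformation is mechanical: when the same multi-term expression (here, the address of the last scatterlist entry) is recomputed at several uses in a branch, hoisting it into a local shortens the lines and makes the repeated uses obviously identical. A rough sketch under hypothetical types — not the kernel's real scatterlist API:

```c
#include <assert.h>
#include <stddef.h>

struct entry {
	unsigned char buf[16];
};

struct req_ctx {
	struct entry entries[4];
	size_t nents;
};

/* Before: the last-entry expression is spelled out at each use. */
static unsigned char *last_byte_verbose(struct req_ctx *ctx, size_t off)
{
	return &ctx->entries[ctx->nents - 1].buf[0] + off;
}

/* After: a local variable names the last entry once. */
static unsigned char *last_byte_local(struct req_ctx *ctx, size_t off)
{
	struct entry *e = &ctx->entries[ctx->nents - 1];

	return e->buf + off;
}
```

The two versions return the same address; only the readability of the call sites changes, which is all the hunks below do.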

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_buffer_mgr.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 099d83d..490dd5a 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -917,6 +917,7 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 	unsigned int authsize = areq_ctx->req_authsize;
 	int rc = 0, icv_nents;
 	struct device *dev = drvdata_to_dev(drvdata);
+	struct scatterlist *sg;
 
 	if (req->src == req->dst) {
 		/*INPLACE*/
@@ -955,12 +956,11 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 					areq_ctx->mac_buf_dma_addr;
 			}
 		} else { /* Contig. ICV */
+			sg = &areq_ctx->src_sgl[areq_ctx->src.nents - 1];
 			/*Should hanlde if the sg is not contig.*/
-			areq_ctx->icv_dma_addr = sg_dma_address(
-				&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
+			areq_ctx->icv_dma_addr = sg_dma_address(sg) +
 				(*src_last_bytes - authsize);
-			areq_ctx->icv_virt_addr = sg_virt(
-				&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
+			areq_ctx->icv_virt_addr = sg_virt(sg) +
 				(*src_last_bytes - authsize);
 		}
 
@@ -993,12 +993,11 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 			areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
 
 		} else { /* Contig. ICV */
+			sg = &areq_ctx->src_sgl[areq_ctx->src.nents - 1];
 			/*Should hanlde if the sg is not contig.*/
-			areq_ctx->icv_dma_addr = sg_dma_address(
-				&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
+			areq_ctx->icv_dma_addr = sg_dma_address(sg) +
 				(*src_last_bytes - authsize);
-			areq_ctx->icv_virt_addr = sg_virt(
-				&areq_ctx->src_sgl[areq_ctx->src.nents - 1]) +
+			areq_ctx->icv_virt_addr = sg_virt(sg) +
 				(*src_last_bytes - authsize);
 		}
 
@@ -1023,12 +1022,11 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 		}
 
 		if (!areq_ctx->is_icv_fragmented) {
+			sg = &areq_ctx->dst_sgl[areq_ctx->dst.nents - 1];
 			/* Contig. ICV */
-			areq_ctx->icv_dma_addr = sg_dma_address(
-				&areq_ctx->dst_sgl[areq_ctx->dst.nents - 1]) +
+			areq_ctx->icv_dma_addr = sg_dma_address(sg) +
 				(*dst_last_bytes - authsize);
-			areq_ctx->icv_virt_addr = sg_virt(
-				&areq_ctx->dst_sgl[areq_ctx->dst.nents - 1]) +
+			areq_ctx->icv_virt_addr = sg_virt(sg) +
 				(*dst_last_bytes - authsize);
 		} else {
 			areq_ctx->icv_dma_addr = areq_ctx->mac_buf_dma_addr;
-- 
2.7.4


* [PATCH 08/24] staging: ccree: fix func call param indentation
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (6 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 07/24] staging: ccree: simplify expression with local var Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 09/24] staging: ccree: fix reg mgr naming convention Gilad Ben-Yossef
                   ` (15 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix function call parameter indentation according to the coding
style guidelines.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_buffer_mgr.c | 28 +++++++++++-----------------
 drivers/staging/ccree/ssi_hash.c       | 10 ++++------
 2 files changed, 15 insertions(+), 23 deletions(-)

diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 490dd5a..4ab76dc 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -239,9 +239,9 @@ static int cc_generate_mlli(struct device *dev, struct buffer_array *sg_data,
 	dev_dbg(dev, "NUM of SG's = %d\n", sg_data->num_of_buffers);
 
 	/* Allocate memory from the pointed pool */
-	mlli_params->mlli_virt_addr = dma_pool_alloc(
-			mlli_params->curr_pool, GFP_KERNEL,
-			&mlli_params->mlli_dma_addr);
+	mlli_params->mlli_virt_addr =
+		dma_pool_alloc(mlli_params->curr_pool, GFP_KERNEL,
+			       &mlli_params->mlli_dma_addr);
 	if (!mlli_params->mlli_virt_addr) {
 		dev_err(dev, "dma_pool_alloc() failed\n");
 		rc = -ENOMEM;
@@ -881,27 +881,21 @@ static void cc_prepare_aead_data_dlli(struct aead_request *req,
 	areq_ctx->is_icv_fragmented = false;
 	if (req->src == req->dst) {
 		/*INPLACE*/
-		areq_ctx->icv_dma_addr = sg_dma_address(
-			areq_ctx->src_sgl) +
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->src_sgl) +
 			(*src_last_bytes - authsize);
-		areq_ctx->icv_virt_addr = sg_virt(
-			areq_ctx->src_sgl) +
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->src_sgl) +
 			(*src_last_bytes - authsize);
 	} else if (direct == DRV_CRYPTO_DIRECTION_DECRYPT) {
 		/*NON-INPLACE and DECRYPT*/
-		areq_ctx->icv_dma_addr = sg_dma_address(
-			areq_ctx->src_sgl) +
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->src_sgl) +
 			(*src_last_bytes - authsize);
-		areq_ctx->icv_virt_addr = sg_virt(
-			areq_ctx->src_sgl) +
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->src_sgl) +
 			(*src_last_bytes - authsize);
 	} else {
 		/*NON-INPLACE and ENCRYPT*/
-		areq_ctx->icv_dma_addr = sg_dma_address(
-			areq_ctx->dst_sgl) +
+		areq_ctx->icv_dma_addr = sg_dma_address(areq_ctx->dst_sgl) +
 			(*dst_last_bytes - authsize);
-		areq_ctx->icv_virt_addr = sg_virt(
-			areq_ctx->dst_sgl) +
+		areq_ctx->icv_virt_addr = sg_virt(areq_ctx->dst_sgl) +
 			(*dst_last_bytes - authsize);
 	}
 }
@@ -1660,8 +1654,8 @@ int cc_buffer_mgr_init(struct ssi_drvdata *drvdata)
 
 	drvdata->buff_mgr_handle = buff_mgr_handle;
 
-	buff_mgr_handle->mlli_buffs_pool = dma_pool_create(
-				"dx_single_mlli_tables", dev,
+	buff_mgr_handle->mlli_buffs_pool =
+		dma_pool_create("dx_single_mlli_tables", dev,
 				MAX_NUM_OF_TOTAL_MLLI_ENTRIES *
 				LLI_ENTRY_BYTE_SIZE,
 				MLLI_TABLE_MIN_ALIGNMENT, 0);
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index a80279e..29c17f3 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -951,9 +951,8 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 	ctx->is_hmac = true;
 
 	if (keylen) {
-		ctx->key_params.key_dma_addr = dma_map_single(
-						dev, (void *)key,
-						keylen, DMA_TO_DEVICE);
+		ctx->key_params.key_dma_addr =
+			dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
 		if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
 			dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
 				key, keylen);
@@ -1132,9 +1131,8 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 
 	ctx->key_params.keylen = keylen;
 
-	ctx->key_params.key_dma_addr = dma_map_single(
-					dev, (void *)key,
-					keylen, DMA_TO_DEVICE);
+	ctx->key_params.key_dma_addr =
+		dma_map_single(dev, (void *)key, keylen, DMA_TO_DEVICE);
 	if (dma_mapping_error(dev, ctx->key_params.key_dma_addr)) {
 		dev_err(dev, "Mapping key va=0x%p len=%u for DMA failed\n",
 			key, keylen);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 09/24] staging: ccree: fix reg mgr naming convention
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (7 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 08/24] staging: ccree: fix func call param indentation Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 10/24] staging: ccree: fix req mgr func def coding style Gilad Ben-Yossef
                   ` (14 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The request manager files were using a function naming convention that
was inconsistent (ssi vs. cc), carried a needless prefix
(ssi_request_mgr) and produced names that were often too long.

Make the code more readable by switching to a simpler, consistent naming
convention.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_driver.c      |  8 +++----
 drivers/staging/ccree/ssi_request_mgr.c | 40 ++++++++++++++++-----------------
 drivers/staging/ccree/ssi_request_mgr.h |  4 ++--
 3 files changed, 25 insertions(+), 27 deletions(-)

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 513c5e4..491e2b9 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -326,9 +326,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto post_sram_mgr_err;
 	}
 
-	rc = request_mgr_init(new_drvdata);
+	rc = cc_req_mgr_init(new_drvdata);
 	if (rc) {
-		dev_err(dev, "request_mgr_init failed\n");
+		dev_err(dev, "cc_req_mgr_init failed\n");
 		goto post_sram_mgr_err;
 	}
 
@@ -389,7 +389,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 post_buf_mgr_err:
 	 cc_buffer_mgr_fini(new_drvdata);
 post_req_mgr_err:
-	request_mgr_fini(new_drvdata);
+	cc_req_mgr_fini(new_drvdata);
 post_sram_mgr_err:
 	ssi_sram_mgr_fini(new_drvdata);
 post_fips_init_err:
@@ -422,7 +422,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	ssi_ivgen_fini(drvdata);
 	cc_pm_fini(drvdata);
 	cc_buffer_mgr_fini(drvdata);
-	request_mgr_fini(drvdata);
+	cc_req_mgr_fini(drvdata);
 	ssi_sram_mgr_fini(drvdata);
 	ssi_fips_fini(drvdata);
 #ifdef ENABLE_CC_SYSFS
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 5f34336..dbdfd0c 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -33,7 +33,7 @@
 
 #define SSI_MAX_POLL_ITER	10
 
-struct ssi_request_mgr_handle {
+struct cc_req_mgr_handle {
 	/* Request manager resources */
 	unsigned int hw_queue_size; /* HW capability */
 	unsigned int min_free_hw_slots;
@@ -68,9 +68,9 @@ static void comp_handler(unsigned long devarg);
 static void comp_work_handler(struct work_struct *work);
 #endif
 
-void request_mgr_fini(struct ssi_drvdata *drvdata)
+void cc_req_mgr_fini(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
 	struct device *dev = drvdata_to_dev(drvdata);
 
 	if (!req_mgr_h)
@@ -92,14 +92,14 @@ void request_mgr_fini(struct ssi_drvdata *drvdata)
 	/* Kill tasklet */
 	tasklet_kill(&req_mgr_h->comptask);
 #endif
-	memset(req_mgr_h, 0, sizeof(struct ssi_request_mgr_handle));
+	memset(req_mgr_h, 0, sizeof(struct cc_req_mgr_handle));
 	kfree(req_mgr_h);
 	drvdata->request_mgr_handle = NULL;
 }
 
-int request_mgr_init(struct ssi_drvdata *drvdata)
+int cc_req_mgr_init(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *req_mgr_h;
+	struct cc_req_mgr_handle *req_mgr_h;
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc = 0;
 
@@ -161,7 +161,7 @@ int request_mgr_init(struct ssi_drvdata *drvdata)
 	return 0;
 
 req_mgr_init_err:
-	request_mgr_fini(drvdata);
+	cc_req_mgr_fini(drvdata);
 	return rc;
 }
 
@@ -202,9 +202,9 @@ static void request_mgr_complete(struct device *dev, void *dx_compl_h)
 	complete(this_compl);
 }
 
-static int request_mgr_queues_status_check(
+static int cc_queues_status(
 		struct ssi_drvdata *drvdata,
-		struct ssi_request_mgr_handle *req_mgr_h,
+		struct cc_req_mgr_handle *req_mgr_h,
 		unsigned int total_seq_len)
 {
 	unsigned long poll_queue;
@@ -264,7 +264,7 @@ int send_request(
 	struct cc_hw_desc *desc, unsigned int len, bool is_dout)
 {
 	void __iomem *cc_base = drvdata->cc_base;
-	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
 	unsigned int used_sw_slots;
 	unsigned int iv_seq_len = 0;
 	unsigned int total_seq_len = len; /*initial sequence length*/
@@ -291,8 +291,7 @@ int send_request(
 		 * in case iv gen add the max size and in case of no dout add 1
 		 * for the internal completion descriptor
 		 */
-		rc = request_mgr_queues_status_check(drvdata, req_mgr_h,
-						     max_required_seq_len);
+		rc = cc_queues_status(drvdata, req_mgr_h, max_required_seq_len);
 		if (rc == 0)
 			/* There is enough place in the queue */
 			break;
@@ -418,14 +417,13 @@ int send_request_init(
 	struct ssi_drvdata *drvdata, struct cc_hw_desc *desc, unsigned int len)
 {
 	void __iomem *cc_base = drvdata->cc_base;
-	struct ssi_request_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
+	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
 	unsigned int total_seq_len = len; /*initial sequence length*/
 	int rc = 0;
 
 	/* Wait for space in HW and SW FIFO. Poll for as much as FIFO_TIMEOUT.
 	 */
-	rc = request_mgr_queues_status_check(drvdata, req_mgr_h,
-					     total_seq_len);
+	rc = cc_queues_status(drvdata, req_mgr_h, total_seq_len);
 	if (rc)
 		return rc;
 
@@ -448,7 +446,7 @@ int send_request_init(
 
 void complete_request(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 
 	complete(&drvdata->hw_queue_avail);
@@ -474,7 +472,7 @@ static void proc_completions(struct ssi_drvdata *drvdata)
 {
 	struct ssi_crypto_req *ssi_req;
 	struct device *dev = drvdata_to_dev(drvdata);
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 	unsigned int *tail = &request_mgr_handle->req_queue_tail;
 	unsigned int *head = &request_mgr_handle->req_queue_head;
@@ -540,7 +538,7 @@ static inline u32 cc_axi_comp_count(struct ssi_drvdata *drvdata)
 static void comp_handler(unsigned long devarg)
 {
 	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 
 	u32 irq;
@@ -590,7 +588,7 @@ static void comp_handler(unsigned long devarg)
 #if defined(CONFIG_PM)
 int cc_resume_req_queue(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 		drvdata->request_mgr_handle;
 
 	spin_lock_bh(&request_mgr_handle->hw_lock);
@@ -606,7 +604,7 @@ int cc_resume_req_queue(struct ssi_drvdata *drvdata)
  */
 int cc_suspend_req_queue(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 
 	/* lock the send_request */
@@ -624,7 +622,7 @@ int cc_suspend_req_queue(struct ssi_drvdata *drvdata)
 
 bool cc_req_queue_suspended(struct ssi_drvdata *drvdata)
 {
-	struct ssi_request_mgr_handle *request_mgr_handle =
+	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 
 	return	request_mgr_handle->is_runtime_suspended;
diff --git a/drivers/staging/ccree/ssi_request_mgr.h b/drivers/staging/ccree/ssi_request_mgr.h
index 53eed5f..d018f51 100644
--- a/drivers/staging/ccree/ssi_request_mgr.h
+++ b/drivers/staging/ccree/ssi_request_mgr.h
@@ -23,7 +23,7 @@
 
 #include "cc_hw_queue_defs.h"
 
-int request_mgr_init(struct ssi_drvdata *drvdata);
+int cc_req_mgr_init(struct ssi_drvdata *drvdata);
 
 /*!
  * Enqueue caller request to crypto hardware.
@@ -47,7 +47,7 @@ int send_request_init(
 
 void complete_request(struct ssi_drvdata *drvdata);
 
-void request_mgr_fini(struct ssi_drvdata *drvdata);
+void cc_req_mgr_fini(struct ssi_drvdata *drvdata);
 
 #if defined(CONFIG_PM)
 int cc_resume_req_queue(struct ssi_drvdata *drvdata);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 10/24] staging: ccree: fix req mgr func def coding style
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (8 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 09/24] staging: ccree: fix reg mgr naming convention Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 11/24] staging: ccree: remove cipher sync blkcipher remains Gilad Ben-Yossef
                   ` (13 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix request manager function definition indentation according to the
coding style guidelines for better code readability.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_request_mgr.c | 21 +++++++++------------
 drivers/staging/ccree/ssi_request_mgr.h |  9 ++++-----
 2 files changed, 13 insertions(+), 17 deletions(-)

diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index dbdfd0c..91f5e2d 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -165,9 +165,8 @@ int cc_req_mgr_init(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-static void enqueue_seq(
-	void __iomem *cc_base,
-	struct cc_hw_desc seq[], unsigned int seq_len)
+static void enqueue_seq(void __iomem *cc_base, struct cc_hw_desc seq[],
+			unsigned int seq_len)
 {
 	int i, w;
 	void * __iomem reg = cc_base + CC_REG(DSCRPTR_QUEUE_WORD0);
@@ -202,10 +201,9 @@ static void request_mgr_complete(struct device *dev, void *dx_compl_h)
 	complete(this_compl);
 }
 
-static int cc_queues_status(
-		struct ssi_drvdata *drvdata,
-		struct cc_req_mgr_handle *req_mgr_h,
-		unsigned int total_seq_len)
+static int cc_queues_status(struct ssi_drvdata *drvdata,
+			    struct cc_req_mgr_handle *req_mgr_h,
+			    unsigned int total_seq_len)
 {
 	unsigned long poll_queue;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -259,9 +257,8 @@ static int cc_queues_status(
  *
  * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
  */
-int send_request(
-	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
-	struct cc_hw_desc *desc, unsigned int len, bool is_dout)
+int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+		 struct cc_hw_desc *desc, unsigned int len, bool is_dout)
 {
 	void __iomem *cc_base = drvdata->cc_base;
 	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
@@ -413,8 +410,8 @@ int send_request(
  *
  * \return int Returns "0" upon success
  */
-int send_request_init(
-	struct ssi_drvdata *drvdata, struct cc_hw_desc *desc, unsigned int len)
+int send_request_init(struct ssi_drvdata *drvdata, struct cc_hw_desc *desc,
+		      unsigned int len)
 {
 	void __iomem *cc_base = drvdata->cc_base;
 	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
diff --git a/drivers/staging/ccree/ssi_request_mgr.h b/drivers/staging/ccree/ssi_request_mgr.h
index d018f51..91e0d47 100644
--- a/drivers/staging/ccree/ssi_request_mgr.h
+++ b/drivers/staging/ccree/ssi_request_mgr.h
@@ -38,12 +38,11 @@ int cc_req_mgr_init(struct ssi_drvdata *drvdata);
  *
  * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
  */
-int send_request(
-	struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
-	struct cc_hw_desc *desc, unsigned int len, bool is_dout);
+int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+		 struct cc_hw_desc *desc, unsigned int len, bool is_dout);
 
-int send_request_init(
-	struct ssi_drvdata *drvdata, struct cc_hw_desc *desc, unsigned int len);
+int send_request_init(struct ssi_drvdata *drvdata, struct cc_hw_desc *desc,
+		      unsigned int len);
 
 void complete_request(struct ssi_drvdata *drvdata);
 
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 11/24] staging: ccree: remove cipher sync blkcipher remains
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (9 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 10/24] staging: ccree: fix req mgr func def coding style Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 12/24] staging: ccree: fix cipher naming convention Gilad Ben-Yossef
                   ` (12 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Remove the remains of the no longer existing support for running
blkcipher in sync mode.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_cipher.c | 156 ++++++++++++-------------------------
 1 file changed, 51 insertions(+), 105 deletions(-)

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 7b484f1..0dc63f1 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -180,7 +180,7 @@ static unsigned int get_max_keysize(struct crypto_tfm *tfm)
 	return 0;
 }
 
-static int ssi_blkcipher_init(struct crypto_tfm *tfm)
+static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
 {
 	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct crypto_alg *alg = tfm->__crt_alg;
@@ -189,10 +189,13 @@ static int ssi_blkcipher_init(struct crypto_tfm *tfm)
 	struct device *dev = drvdata_to_dev(ssi_alg->drvdata);
 	int rc = 0;
 	unsigned int max_key_buf_size = get_max_keysize(tfm);
+	struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher;
 
 	dev_dbg(dev, "Initializing context @%p for %s\n", ctx_p,
 		crypto_tfm_alg_name(tfm));
 
+	ablktfm->reqsize = sizeof(struct blkcipher_req_ctx);
+
 	ctx_p->cipher_mode = ssi_alg->cipher_mode;
 	ctx_p->flow_mode = ssi_alg->flow_mode;
 	ctx_p->drvdata = ssi_alg->drvdata;
@@ -297,10 +300,10 @@ static enum cc_hw_crypto_key hw_key_to_cc_hw_key(int slot_num)
 	return END_OF_KEYS;
 }
 
-static int ssi_blkcipher_setkey(struct crypto_tfm *tfm,
-				const u8 *key,
+static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 				unsigned int keylen)
 {
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(atfm);
 	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	u32 tmp[DES_EXPKEY_WORDS];
@@ -700,62 +703,59 @@ ssi_blkcipher_create_data_desc(
 	}
 }
 
-static int ssi_blkcipher_complete(struct device *dev,
-				  struct ssi_ablkcipher_ctx *ctx_p,
-				  struct blkcipher_req_ctx *req_ctx,
-				  struct scatterlist *dst,
-				  struct scatterlist *src,
-				  unsigned int ivsize,
-				  void *areq)
+static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req)
 {
+	struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
+	struct scatterlist *dst = areq->dst;
+	struct scatterlist *src = areq->src;
+	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq);
+	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(areq);
+	unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
 	int completion_error = 0;
 	struct ablkcipher_request *req = (struct ablkcipher_request *)areq;
 
 	cc_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
 	kfree(req_ctx->iv);
 
-	if (areq) {
-		/*
-		 * The crypto API expects us to set the req->info to the last
-		 * ciphertext block. For encrypt, simply copy from the result.
-		 * For decrypt, we must copy from a saved buffer since this
-		 * could be an in-place decryption operation and the src is
-		 * lost by this point.
-		 */
-		if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
-			memcpy(req->info, req_ctx->backup_info, ivsize);
-			kfree(req_ctx->backup_info);
-		} else {
-			scatterwalk_map_and_copy(req->info, req->dst,
-						 (req->nbytes - ivsize),
-						 ivsize, 0);
-		}
-
-		ablkcipher_request_complete(areq, completion_error);
-		return 0;
+	/*
+	 * The crypto API expects us to set the req->info to the last
+	 * ciphertext block. For encrypt, simply copy from the result.
+	 * For decrypt, we must copy from a saved buffer since this
+	 * could be an in-place decryption operation and the src is
+	 * lost by this point.
+	 */
+	if (req_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT)  {
+		memcpy(req->info, req_ctx->backup_info, ivsize);
+		kfree(req_ctx->backup_info);
+	} else {
+		scatterwalk_map_and_copy(req->info, req->dst,
+					 (req->nbytes - ivsize),
+					 ivsize, 0);
 	}
-	return completion_error;
+
+	ablkcipher_request_complete(areq, completion_error);
 }
 
-static int ssi_blkcipher_process(
-	struct crypto_tfm *tfm,
-	struct blkcipher_req_ctx *req_ctx,
-	struct scatterlist *dst, struct scatterlist *src,
-	unsigned int nbytes,
-	void *info, //req info
-	unsigned int ivsize,
-	void *areq,
-	enum drv_crypto_direction direction)
+static int cc_cipher_process(struct ablkcipher_request *req,
+			     enum drv_crypto_direction direction)
 {
+	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
+	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
+	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
+	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
+	struct scatterlist *dst = req->dst;
+	struct scatterlist *src = req->src;
+	unsigned int nbytes = req->nbytes;
+	void *info = req->info;
 	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	struct cc_hw_desc desc[MAX_ABLKCIPHER_SEQ_LEN];
 	struct ssi_crypto_req ssi_req = {};
 	int rc, seq_len = 0, cts_restore_flag = 0;
 
-	dev_dbg(dev, "%s areq=%p info=%p nbytes=%d\n",
+	dev_dbg(dev, "%s req=%p info=%p nbytes=%d\n",
 		((direction == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
-		"Encrypt" : "Decrypt"), areq, info, nbytes);
+		"Encrypt" : "Decrypt"), req, info, nbytes);
 
 	/* STAT_PHASE_0: Init and sanity checks */
 
@@ -791,7 +791,7 @@ static int ssi_blkcipher_process(
 
 	/* Setup DX request structure */
 	ssi_req.user_cb = (void *)ssi_ablkcipher_complete;
-	ssi_req.user_arg = (void *)areq;
+	ssi_req.user_arg = (void *)req;
 
 #ifdef ENABLE_CYCLE_COUNT
 	ssi_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
@@ -823,7 +823,7 @@ static int ssi_blkcipher_process(
 		ssi_blkcipher_create_setup_desc(tfm, req_ctx, ivsize, nbytes,
 						desc, &seq_len);
 	/* Data processing */
-	ssi_blkcipher_create_data_desc(tfm, req_ctx, dst, src, nbytes, areq,
+	ssi_blkcipher_create_data_desc(tfm, req_ctx, dst, src, nbytes, req,
 				       desc, &seq_len);
 
 	/* do we need to generate IV? */
@@ -836,25 +836,12 @@ static int ssi_blkcipher_process(
 
 	/* STAT_PHASE_3: Lock HW and push sequence */
 
-	rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len,
-			  (!areq) ? 0 : 1);
-	if (areq) {
-		if (rc != -EINPROGRESS) {
-			/* Failed to send the request or request completed
-			 * synchronously
-			 */
-			cc_unmap_blkcipher_request(dev, req_ctx, ivsize, src,
-						   dst);
-		}
-
-	} else {
-		if (rc) {
-			cc_unmap_blkcipher_request(dev, req_ctx, ivsize, src,
-						   dst);
-		} else {
-			rc = ssi_blkcipher_complete(dev, ctx_p, req_ctx, dst,
-						    src, ivsize, NULL);
-		}
+	rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, 1);
+	if (rc != -EINPROGRESS) {
+		/* Failed to send the request or request completed
+		 * synchronously
+		 */
+		cc_unmap_blkcipher_request(dev, req_ctx, ivsize, src, dst);
 	}
 
 exit_process:
@@ -869,56 +856,19 @@ static int ssi_blkcipher_process(
 	return rc;
 }
 
-static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req)
-{
-	struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
-	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq);
-	struct crypto_ablkcipher *tfm = crypto_ablkcipher_reqtfm(areq);
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_ablkcipher_ctx(tfm);
-	unsigned int ivsize = crypto_ablkcipher_ivsize(tfm);
-
-	ssi_blkcipher_complete(dev, ctx_p, req_ctx, areq->dst, areq->src,
-			       ivsize, areq);
-}
-
-/* Async wrap functions */
-
-static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
-{
-	struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher;
-
-	ablktfm->reqsize = sizeof(struct blkcipher_req_ctx);
-
-	return ssi_blkcipher_init(tfm);
-}
-
-static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *tfm,
-				 const u8 *key,
-				 unsigned int keylen)
-{
-	return ssi_blkcipher_setkey(crypto_ablkcipher_tfm(tfm), key, keylen);
-}
-
 static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
 {
-	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
-	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
 	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
-	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
 
 	req_ctx->is_giv = false;
 	req_ctx->backup_info = NULL;
 
-	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src,
-				     req->nbytes, req->info, ivsize,
-				     (void *)req,
-				     DRV_CRYPTO_DIRECTION_ENCRYPT);
+	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
 }
 
 static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
 {
 	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
-	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(ablk_tfm);
 	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
 	unsigned int ivsize = crypto_ablkcipher_ivsize(ablk_tfm);
 
@@ -934,15 +884,11 @@ static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
 				 (req->nbytes - ivsize), ivsize, 0);
 	req_ctx->is_giv = false;
 
-	return ssi_blkcipher_process(tfm, req_ctx, req->dst, req->src,
-				     req->nbytes, req->info, ivsize,
-				     (void *)req,
-				     DRV_CRYPTO_DIRECTION_DECRYPT);
+	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_DECRYPT);
 }
 
 /* DX Block cipher alg */
 static struct ssi_alg_template blkcipher_algs[] = {
-/* Async template */
 #if SSI_CC_HAS_AES_XTS
 	{
 		.name = "xts(aes)",
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 12/24] staging: ccree: fix cipher naming convention
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (10 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 11/24] staging: ccree: remove cipher sync blkcipher remains Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:52 ` [PATCH 13/24] staging: ccree: fix cipher func def coding style Gilad Ben-Yossef
                   ` (11 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The blkcipher files were using a function naming convention that was
inconsistent (ssi vs. cc), carried an overly long prefix
(ssi_ablkcipher) and produced names that were often too long.

Make the code more readable by switching to a simpler, consistent naming
convention.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_cipher.c | 221 ++++++++++++++++++-------------------
 drivers/staging/ccree/ssi_cipher.h |   8 +-
 drivers/staging/ccree/ssi_driver.c |   8 +-
 3 files changed, 118 insertions(+), 119 deletions(-)

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 0dc63f1..d7687a4 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -38,9 +38,9 @@
 
 #define template_ablkcipher	template_u.ablkcipher
 
-#define SSI_MIN_AES_XTS_SIZE 0x10
-#define SSI_MAX_AES_XTS_SIZE 0x2000
-struct ssi_blkcipher_handle {
+#define CC_MIN_AES_XTS_SIZE 0x10
+#define CC_MAX_AES_XTS_SIZE 0x2000
+struct cc_cipher_handle {
 	struct list_head blkcipher_alg_list;
 };
 
@@ -54,7 +54,7 @@ struct cc_hw_key_info {
 	enum cc_hw_crypto_key key2_slot;
 };
 
-struct ssi_ablkcipher_ctx {
+struct cc_cipher_ctx {
 	struct ssi_drvdata *drvdata;
 	int keylen;
 	int key_round_number;
@@ -67,9 +67,9 @@ struct ssi_ablkcipher_ctx {
 	struct crypto_shash *shash_tfm;
 };
 
-static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req);
+static void cc_cipher_complete(struct device *dev, void *ssi_req);
 
-static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size)
+static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
 {
 	switch (ctx_p->flow_mode) {
 	case S_DIN_to_AES:
@@ -109,15 +109,15 @@ static int validate_keys_sizes(struct ssi_ablkcipher_ctx *ctx_p, u32 size)
 	return -EINVAL;
 }
 
-static int validate_data_size(struct ssi_ablkcipher_ctx *ctx_p,
+static int validate_data_size(struct cc_cipher_ctx *ctx_p,
 			      unsigned int size)
 {
 	switch (ctx_p->flow_mode) {
 	case S_DIN_to_AES:
 		switch (ctx_p->cipher_mode) {
 		case DRV_CIPHER_XTS:
-			if (size >= SSI_MIN_AES_XTS_SIZE &&
-			    size <= SSI_MAX_AES_XTS_SIZE &&
+			if (size >= CC_MIN_AES_XTS_SIZE &&
+			    size <= CC_MAX_AES_XTS_SIZE &&
 			    IS_ALIGNED(size, AES_BLOCK_SIZE))
 				return 0;
 			break;
@@ -180,9 +180,9 @@ static unsigned int get_max_keysize(struct crypto_tfm *tfm)
 	return 0;
 }
 
-static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
+static int cc_cipher_init(struct crypto_tfm *tfm)
 {
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct crypto_alg *alg = tfm->__crt_alg;
 	struct ssi_crypto_alg *ssi_alg =
 			container_of(alg, struct ssi_crypto_alg, crypto_alg);
@@ -232,9 +232,9 @@ static int ssi_ablkcipher_init(struct crypto_tfm *tfm)
 	return rc;
 }
 
-static void ssi_blkcipher_exit(struct crypto_tfm *tfm)
+static void cc_cipher_exit(struct crypto_tfm *tfm)
 {
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	unsigned int max_key_buf_size = get_max_keysize(tfm);
 
@@ -270,7 +270,7 @@ static const u8 zero_buff[] = {	0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0,
 				0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0};
 
 /* The function verifies that tdes keys are not weak.*/
-static int ssi_verify_3des_keys(const u8 *key, unsigned int keylen)
+static int cc_verify_3des_keys(const u8 *key, unsigned int keylen)
 {
 	struct tdes_keys *tdes_key = (struct tdes_keys *)key;
 
@@ -300,11 +300,11 @@ static enum cc_hw_crypto_key hw_key_to_cc_hw_key(int slot_num)
 	return END_OF_KEYS;
 }
 
-static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
-				unsigned int keylen)
+static int cc_cipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
+			    unsigned int keylen)
 {
 	struct crypto_tfm *tfm = crypto_ablkcipher_tfm(atfm);
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	u32 tmp[DES_EXPKEY_WORDS];
 	unsigned int max_key_buf_size = get_max_keysize(tfm);
@@ -329,7 +329,7 @@ static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 		return -EINVAL;
 	}
 
-	if (ssi_is_hw_key(tfm)) {
+	if (cc_is_hw_key(tfm)) {
 		/* setting HW key slots */
 		struct arm_hw_key_info *hki = (struct arm_hw_key_info *)key;
 
@@ -363,7 +363,7 @@ static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 		}
 
 		ctx_p->keylen = keylen;
-		dev_dbg(dev, "ssi_is_hw_key ret 0");
+		dev_dbg(dev, "cc_is_hw_key ret 0");
 
 		return 0;
 	}
@@ -384,7 +384,7 @@ static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 	}
 	if (ctx_p->flow_mode == S_DIN_to_DES &&
 	    keylen == DES3_EDE_KEY_SIZE &&
-	    ssi_verify_3des_keys(key, keylen)) {
+	    cc_verify_3des_keys(key, keylen)) {
 		dev_dbg(dev, "weak 3DES key");
 		return -EINVAL;
 	}
@@ -436,7 +436,7 @@ static int ssi_ablkcipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 }
 
 static void
-ssi_blkcipher_create_setup_desc(
+cc_setup_cipher_desc(
 	struct crypto_tfm *tfm,
 	struct blkcipher_req_ctx *req_ctx,
 	unsigned int ivsize,
@@ -444,7 +444,7 @@ ssi_blkcipher_create_setup_desc(
 	struct cc_hw_desc desc[],
 	unsigned int *seq_size)
 {
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	int cipher_mode = ctx_p->cipher_mode;
 	int flow_mode = ctx_p->flow_mode;
@@ -491,7 +491,7 @@ ssi_blkcipher_create_setup_desc(
 		set_cipher_mode(&desc[*seq_size], cipher_mode);
 		set_cipher_config0(&desc[*seq_size], direction);
 		if (flow_mode == S_DIN_to_AES) {
-			if (ssi_is_hw_key(tfm)) {
+			if (cc_is_hw_key(tfm)) {
 				set_hw_crypto_key(&desc[*seq_size],
 						  ctx_p->hw.key1_slot);
 			} else {
@@ -518,7 +518,7 @@ ssi_blkcipher_create_setup_desc(
 		hw_desc_init(&desc[*seq_size]);
 		set_cipher_mode(&desc[*seq_size], cipher_mode);
 		set_cipher_config0(&desc[*seq_size], direction);
-		if (ssi_is_hw_key(tfm)) {
+		if (cc_is_hw_key(tfm)) {
 			set_hw_crypto_key(&desc[*seq_size],
 					  ctx_p->hw.key1_slot);
 		} else {
@@ -534,7 +534,7 @@ ssi_blkcipher_create_setup_desc(
 		hw_desc_init(&desc[*seq_size]);
 		set_cipher_mode(&desc[*seq_size], cipher_mode);
 		set_cipher_config0(&desc[*seq_size], direction);
-		if (ssi_is_hw_key(tfm)) {
+		if (cc_is_hw_key(tfm)) {
 			set_hw_crypto_key(&desc[*seq_size],
 					  ctx_p->hw.key2_slot);
 		} else {
@@ -565,14 +565,14 @@ ssi_blkcipher_create_setup_desc(
 }
 
 #if SSI_CC_HAS_MULTI2
-static void ssi_blkcipher_create_multi2_setup_desc(
+static void cc_setup_multi2_desc(
 	struct crypto_tfm *tfm,
 	struct blkcipher_req_ctx *req_ctx,
 	unsigned int ivsize,
 	struct cc_hw_desc desc[],
 	unsigned int *seq_size)
 {
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 
 	int direction = req_ctx->gen_ctx.op_type;
 	/* Load system key */
@@ -610,7 +610,7 @@ static void ssi_blkcipher_create_multi2_setup_desc(
 #endif /*SSI_CC_HAS_MULTI2*/
 
 static void
-ssi_blkcipher_create_data_desc(
+cc_setup_cipher_data(
 	struct crypto_tfm *tfm,
 	struct blkcipher_req_ctx *req_ctx,
 	struct scatterlist *dst, struct scatterlist *src,
@@ -619,7 +619,7 @@ ssi_blkcipher_create_data_desc(
 	struct cc_hw_desc desc[],
 	unsigned int *seq_size)
 {
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	unsigned int flow_mode = ctx_p->flow_mode;
 
@@ -703,7 +703,7 @@ ssi_blkcipher_create_data_desc(
 	}
 }
 
-static void ssi_ablkcipher_complete(struct device *dev, void *ssi_req)
+static void cc_cipher_complete(struct device *dev, void *ssi_req)
 {
 	struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
 	struct scatterlist *dst = areq->dst;
@@ -747,7 +747,7 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
 	void *info = req->info;
-	struct ssi_ablkcipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
+	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	struct cc_hw_desc desc[MAX_ABLKCIPHER_SEQ_LEN];
 	struct ssi_crypto_req ssi_req = {};
@@ -790,7 +790,7 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)ssi_ablkcipher_complete;
+	ssi_req.user_cb = (void *)cc_cipher_complete;
 	ssi_req.user_arg = (void *)req;
 
 #ifdef ENABLE_CYCLE_COUNT
@@ -816,15 +816,14 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	/* Setup processing */
 #if SSI_CC_HAS_MULTI2
 	if (ctx_p->flow_mode == S_DIN_to_MULTI2)
-		ssi_blkcipher_create_multi2_setup_desc(tfm, req_ctx, ivsize,
-						       desc, &seq_len);
+		cc_setup_multi2_desc(tfm, req_ctx, ivsize, desc, &seq_len);
 	else
 #endif /*SSI_CC_HAS_MULTI2*/
-		ssi_blkcipher_create_setup_desc(tfm, req_ctx, ivsize, nbytes,
-						desc, &seq_len);
+		cc_setup_cipher_desc(tfm, req_ctx, ivsize, nbytes, desc,
+				     &seq_len);
 	/* Data processing */
-	ssi_blkcipher_create_data_desc(tfm, req_ctx, dst, src, nbytes, req,
-				       desc, &seq_len);
+	cc_setup_cipher_data(tfm, req_ctx, dst, src, nbytes, req, desc,
+			     &seq_len);
 
 	/* do we need to generate IV? */
 	if (req_ctx->is_giv) {
@@ -856,7 +855,7 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	return rc;
 }
 
-static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
+static int cc_cipher_encrypt(struct ablkcipher_request *req)
 {
 	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
 
@@ -866,7 +865,7 @@ static int ssi_ablkcipher_encrypt(struct ablkcipher_request *req)
 	return cc_cipher_process(req, DRV_CRYPTO_DIRECTION_ENCRYPT);
 }
 
-static int ssi_ablkcipher_decrypt(struct ablkcipher_request *req)
+static int cc_cipher_decrypt(struct ablkcipher_request *req)
 {
 	struct crypto_ablkcipher *ablk_tfm = crypto_ablkcipher_reqtfm(req);
 	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(req);
@@ -896,9 +895,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -913,9 +912,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -929,9 +928,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -947,9 +946,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -963,9 +962,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -979,9 +978,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -997,9 +996,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1013,9 +1012,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_512,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1029,9 +1028,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER | CRYPTO_ALG_BULK_DU_4096,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE * 2,
 			.max_keysize = AES_MAX_KEY_SIZE * 2,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1046,9 +1045,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = 0,
@@ -1062,9 +1061,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1078,9 +1077,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1095,9 +1094,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = AES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1112,9 +1111,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = 1,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = AES_MIN_KEY_SIZE,
 			.max_keysize = AES_MAX_KEY_SIZE,
 			.ivsize = AES_BLOCK_SIZE,
@@ -1128,9 +1127,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = DES3_EDE_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = DES3_EDE_KEY_SIZE,
 			.max_keysize = DES3_EDE_KEY_SIZE,
 			.ivsize = DES3_EDE_BLOCK_SIZE,
@@ -1144,9 +1143,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = DES3_EDE_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = DES3_EDE_KEY_SIZE,
 			.max_keysize = DES3_EDE_KEY_SIZE,
 			.ivsize = 0,
@@ -1160,9 +1159,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = DES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = DES_KEY_SIZE,
 			.max_keysize = DES_KEY_SIZE,
 			.ivsize = DES_BLOCK_SIZE,
@@ -1176,9 +1175,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = DES_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = DES_KEY_SIZE,
 			.max_keysize = DES_KEY_SIZE,
 			.ivsize = 0,
@@ -1193,9 +1192,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = CC_MULTI2_BLOCK_SIZE,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_decrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_decrypt,
 			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
 			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
 			.ivsize = CC_MULTI2_IV_SIZE,
@@ -1209,9 +1208,9 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.blocksize = 1,
 		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
 		.template_ablkcipher = {
-			.setkey = ssi_ablkcipher_setkey,
-			.encrypt = ssi_ablkcipher_encrypt,
-			.decrypt = ssi_ablkcipher_encrypt,
+			.setkey = cc_cipher_setkey,
+			.encrypt = cc_cipher_encrypt,
+			.decrypt = cc_cipher_encrypt,
 			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
 			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
 			.ivsize = CC_MULTI2_IV_SIZE,
@@ -1223,8 +1222,8 @@ static struct ssi_alg_template blkcipher_algs[] = {
 };
 
 static
-struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template
-						 *template, struct device *dev)
+struct ssi_crypto_alg *cc_cipher_create_alg(struct ssi_alg_template *template,
+					    struct device *dev)
 {
 	struct ssi_crypto_alg *t_alg;
 	struct crypto_alg *alg;
@@ -1242,10 +1241,10 @@ struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template
 	alg->cra_priority = SSI_CRA_PRIO;
 	alg->cra_blocksize = template->blocksize;
 	alg->cra_alignmask = 0;
-	alg->cra_ctxsize = sizeof(struct ssi_ablkcipher_ctx);
+	alg->cra_ctxsize = sizeof(struct cc_cipher_ctx);
 
-	alg->cra_init = ssi_ablkcipher_init;
-	alg->cra_exit = ssi_blkcipher_exit;
+	alg->cra_init = cc_cipher_init;
+	alg->cra_exit = cc_cipher_exit;
 	alg->cra_type = &crypto_ablkcipher_type;
 	alg->cra_ablkcipher = template->template_ablkcipher;
 	alg->cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
@@ -1257,10 +1256,10 @@ struct ssi_crypto_alg *ssi_ablkcipher_create_alg(struct ssi_alg_template
 	return t_alg;
 }
 
-int ssi_ablkcipher_free(struct ssi_drvdata *drvdata)
+int cc_cipher_free(struct ssi_drvdata *drvdata)
 {
 	struct ssi_crypto_alg *t_alg, *n;
-	struct ssi_blkcipher_handle *blkcipher_handle =
+	struct cc_cipher_handle *blkcipher_handle =
 						drvdata->blkcipher_handle;
 	if (blkcipher_handle) {
 		/* Remove registered algs */
@@ -1277,9 +1276,9 @@ int ssi_ablkcipher_free(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
+int cc_cipher_alloc(struct ssi_drvdata *drvdata)
 {
-	struct ssi_blkcipher_handle *ablkcipher_handle;
+	struct cc_cipher_handle *ablkcipher_handle;
 	struct ssi_crypto_alg *t_alg;
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc = -ENOMEM;
@@ -1297,7 +1296,7 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
 		ARRAY_SIZE(blkcipher_algs));
 	for (alg = 0; alg < ARRAY_SIZE(blkcipher_algs); alg++) {
 		dev_dbg(dev, "creating %s\n", blkcipher_algs[alg].driver_name);
-		t_alg = ssi_ablkcipher_create_alg(&blkcipher_algs[alg], dev);
+		t_alg = cc_cipher_create_alg(&blkcipher_algs[alg], dev);
 		if (IS_ERR(t_alg)) {
 			rc = PTR_ERR(t_alg);
 			dev_err(dev, "%s alg allocation failed\n",
@@ -1326,6 +1325,6 @@ int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata)
 	return 0;
 
 fail0:
-	ssi_ablkcipher_free(drvdata);
+	cc_cipher_free(drvdata);
 	return rc;
 }
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index 14c0ad9..ef6d6e9 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -51,9 +51,9 @@ struct blkcipher_req_ctx {
 	struct mlli_params mlli_params;
 };
 
-int ssi_ablkcipher_alloc(struct ssi_drvdata *drvdata);
+int cc_cipher_alloc(struct ssi_drvdata *drvdata);
 
-int ssi_ablkcipher_free(struct ssi_drvdata *drvdata);
+int cc_cipher_free(struct ssi_drvdata *drvdata);
 
 #ifndef CRYPTO_ALG_BULK_MASK
 
@@ -65,7 +65,7 @@ int ssi_ablkcipher_free(struct ssi_drvdata *drvdata);
 
 #ifdef CRYPTO_TFM_REQ_HW_KEY
 
-static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+static inline bool cc_is_hw_key(struct crypto_tfm *tfm)
 {
 	return (crypto_tfm_get_flags(tfm) & CRYPTO_TFM_REQ_HW_KEY);
 }
@@ -77,7 +77,7 @@ struct arm_hw_key_info {
 	int hw_key2;
 };
 
-static inline bool ssi_is_hw_key(struct crypto_tfm *tfm)
+static inline bool cc_is_hw_key(struct crypto_tfm *tfm)
 {
 	return false;
 }
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 491e2b9..2a0dd85 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -351,9 +351,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	}
 
 	/* Allocate crypto algs */
-	rc = ssi_ablkcipher_alloc(new_drvdata);
+	rc = cc_cipher_alloc(new_drvdata);
 	if (rc) {
-		dev_err(dev, "ssi_ablkcipher_alloc failed\n");
+		dev_err(dev, "cc_cipher_alloc failed\n");
 		goto post_ivgen_err;
 	}
 
@@ -381,7 +381,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 post_hash_err:
 	cc_hash_free(new_drvdata);
 post_cipher_err:
-	ssi_ablkcipher_free(new_drvdata);
+	cc_cipher_free(new_drvdata);
 post_ivgen_err:
 	ssi_ivgen_fini(new_drvdata);
 post_power_mgr_err:
@@ -418,7 +418,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 
 	cc_aead_free(drvdata);
 	cc_hash_free(drvdata);
-	ssi_ablkcipher_free(drvdata);
+	cc_cipher_free(drvdata);
 	ssi_ivgen_fini(drvdata);
 	cc_pm_fini(drvdata);
 	cc_buffer_mgr_fini(drvdata);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 13/24] staging: ccree: fix cipher func def coding style
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (11 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 12/24] staging: ccree: fix cipher naming convention Gilad Ben-Yossef
@ 2017-12-12 14:52 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 14/24] staging: ccree: fix ivgen naming convention Gilad Ben-Yossef
                   ` (10 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:52 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix cipher function definition indentation according to coding
style guidelines for better code readability.
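As a hypothetical illustration (the function and names below are made up, not taken from the driver), the target style wraps a long definition so that continuation parameters align with the opening parenthesis, which is what the re-indented `cc_setup_cipher_desc()` and friends in this patch look like:

```c
#include <assert.h>
#include <stddef.h>

/* Example only: a long function definition wrapped kernel-style, with
 * continuation lines aligned to the opening parenthesis instead of the
 * one-parameter-per-line layout the patch removes. */
static unsigned int cc_demo_fill_desc(unsigned int *desc, size_t desc_len,
				      unsigned int seed,
				      unsigned int step)
{
	size_t i;

	/* fill a fake descriptor array: desc[i] = seed + i * step */
	for (i = 0; i < desc_len; i++)
		desc[i] = seed + (unsigned int)i * step;

	return desc_len ? desc[desc_len - 1] : 0;
}
```

checkpatch.pl flags both over-long lines and unaligned continuation lines, so this layout keeps the tool quiet without sacrificing readability.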

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_cipher.c | 38 +++++++++++++++-----------------------
 1 file changed, 15 insertions(+), 23 deletions(-)

diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index d7687a4..0b464d8 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -435,14 +435,11 @@ static int cc_cipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 	return 0;
 }
 
-static void
-cc_setup_cipher_desc(
-	struct crypto_tfm *tfm,
-	struct blkcipher_req_ctx *req_ctx,
-	unsigned int ivsize,
-	unsigned int nbytes,
-	struct cc_hw_desc desc[],
-	unsigned int *seq_size)
+static void cc_setup_cipher_desc(struct crypto_tfm *tfm,
+				 struct blkcipher_req_ctx *req_ctx,
+				 unsigned int ivsize, unsigned int nbytes,
+				 struct cc_hw_desc desc[],
+				 unsigned int *seq_size)
 {
 	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
@@ -565,12 +562,10 @@ cc_setup_cipher_desc(
 }
 
 #if SSI_CC_HAS_MULTI2
-static void cc_setup_multi2_desc(
-	struct crypto_tfm *tfm,
-	struct blkcipher_req_ctx *req_ctx,
-	unsigned int ivsize,
-	struct cc_hw_desc desc[],
-	unsigned int *seq_size)
+static void cc_setup_multi2_desc(struct crypto_tfm *tfm,
+				 struct blkcipher_req_ctx *req_ctx,
+				 unsigned int ivsize, struct cc_hw_desc desc[],
+				 unsigned int *seq_size)
 {
 	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 
@@ -609,15 +604,12 @@ static void cc_setup_multi2_desc(
 }
 #endif /*SSI_CC_HAS_MULTI2*/
 
-static void
-cc_setup_cipher_data(
-	struct crypto_tfm *tfm,
-	struct blkcipher_req_ctx *req_ctx,
-	struct scatterlist *dst, struct scatterlist *src,
-	unsigned int nbytes,
-	void *areq,
-	struct cc_hw_desc desc[],
-	unsigned int *seq_size)
+static void cc_setup_cipher_data(struct crypto_tfm *tfm,
+				 struct blkcipher_req_ctx *req_ctx,
+				 struct scatterlist *dst,
+				 struct scatterlist *src, unsigned int nbytes,
+				 void *areq, struct cc_hw_desc desc[],
+				 unsigned int *seq_size)
 {
 	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
-- 
2.7.4


* [PATCH 14/24] staging: ccree: fix ivgen naming convention
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (12 preceding siblings ...)
  2017-12-12 14:52 ` [PATCH 13/24] staging: ccree: fix cipher func def coding style Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 15/24] staging: ccree: fix ivgen func def coding style Gilad Ben-Yossef
                   ` (9 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The ivgen files were using a function naming convention that was
inconsistent (ssi vs. cc), carried an overly long prefix (ssi_ivgen),
and produced names that were often too long.

Make the code more readable by switching to a simpler, consistent naming
convention.
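For context, the renamed cc_gen_iv_pool()/cc_get_iv() pair implements a simple pool-and-offset scheme: the pool is filled by AES-CTR-encrypting zeros in SRAM, the first 32 bytes are reserved for the next regeneration key/IV, and 16-byte IVs are handed out until fewer than 16 bytes remain, at which point the pool is regenerated. A toy model of just the offset bookkeeping (no crypto, no HW descriptors; all names here are stand-ins):

```c
#include <assert.h>

#define POOL_SIZE 1024	/* mirrors CC_IVPOOL_SIZE */
#define META_SIZE 32	/* regeneration key + IV, CC_IVPOOL_META_SIZE */
#define IV_SIZE   16	/* CC_AES_IV_SIZE */

/* Toy context: the real driver keeps SRAM addresses and DMA handles;
 * here only the next-free offset matters. */
struct demo_ivgen_ctx {
	unsigned int next_iv_ofs;
};

static void demo_gen_iv_pool(struct demo_ivgen_ctx *ctx)
{
	/* first META_SIZE bytes are reserved for the next enc. key/IV */
	ctx->next_iv_ofs = META_SIZE;
}

static unsigned int demo_get_iv(struct demo_ivgen_ctx *ctx)
{
	unsigned int ofs = ctx->next_iv_ofs;

	ctx->next_iv_ofs += IV_SIZE;
	/* pool drained? regenerate it, as cc_get_iv() does */
	if ((POOL_SIZE - ctx->next_iv_ofs) < IV_SIZE)
		demo_gen_iv_pool(ctx);

	return ofs;
}
```

The drain check matches the one visible in the diff: after advancing the offset, regenerate when less than one IV's worth of pool remains.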

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_driver.c      |  8 ++--
 drivers/staging/ccree/ssi_ivgen.c       | 66 ++++++++++++++++-----------------
 drivers/staging/ccree/ssi_ivgen.h       |  8 ++--
 drivers/staging/ccree/ssi_pm.c          |  2 +-
 drivers/staging/ccree/ssi_request_mgr.c |  7 ++--
 5 files changed, 46 insertions(+), 45 deletions(-)

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 2a0dd85..f4164eb 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -344,9 +344,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		goto post_buf_mgr_err;
 	}
 
-	rc = ssi_ivgen_init(new_drvdata);
+	rc = cc_ivgen_init(new_drvdata);
 	if (rc) {
-		dev_err(dev, "ssi_ivgen_init failed\n");
+		dev_err(dev, "cc_ivgen_init failed\n");
 		goto post_power_mgr_err;
 	}
 
@@ -383,7 +383,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 post_cipher_err:
 	cc_cipher_free(new_drvdata);
 post_ivgen_err:
-	ssi_ivgen_fini(new_drvdata);
+	cc_ivgen_fini(new_drvdata);
 post_power_mgr_err:
 	cc_pm_fini(new_drvdata);
 post_buf_mgr_err:
@@ -419,7 +419,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	cc_aead_free(drvdata);
 	cc_hash_free(drvdata);
 	cc_cipher_free(drvdata);
-	ssi_ivgen_fini(drvdata);
+	cc_ivgen_fini(drvdata);
 	cc_pm_fini(drvdata);
 	cc_buffer_mgr_fini(drvdata);
 	cc_req_mgr_fini(drvdata);
diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
index febee22..ad6cd97 100644
--- a/drivers/staging/ccree/ssi_ivgen.c
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -24,15 +24,15 @@
 #include "ssi_buffer_mgr.h"
 
 /* The max. size of pool *MUST* be <= SRAM total size */
-#define SSI_IVPOOL_SIZE 1024
+#define CC_IVPOOL_SIZE 1024
 /* The first 32B fraction of pool are dedicated to the
  * next encryption "key" & "IV" for pool regeneration
  */
-#define SSI_IVPOOL_META_SIZE (CC_AES_IV_SIZE + AES_KEYSIZE_128)
-#define SSI_IVPOOL_GEN_SEQ_LEN	4
+#define CC_IVPOOL_META_SIZE (CC_AES_IV_SIZE + AES_KEYSIZE_128)
+#define CC_IVPOOL_GEN_SEQ_LEN	4
 
 /**
- * struct ssi_ivgen_ctx -IV pool generation context
+ * struct cc_ivgen_ctx -IV pool generation context
  * @pool:          the start address of the iv-pool resides in internal RAM
  * @ctr_key_dma:   address of pool's encryption key material in internal RAM
  * @ctr_iv_dma:    address of pool's counter iv in internal RAM
@@ -40,7 +40,7 @@
  * @pool_meta:     virt. address of the initial enc. key/IV
  * @pool_meta_dma: phys. address of the initial enc. key/IV
  */
-struct ssi_ivgen_ctx {
+struct cc_ivgen_ctx {
 	ssi_sram_addr_t pool;
 	ssi_sram_addr_t ctr_key;
 	ssi_sram_addr_t ctr_iv;
@@ -50,21 +50,21 @@ struct ssi_ivgen_ctx {
 };
 
 /*!
- * Generates SSI_IVPOOL_SIZE of random bytes by
+ * Generates CC_IVPOOL_SIZE of random bytes by
  * encrypting 0's using AES128-CTR.
  *
  * \param ivgen iv-pool context
  * \param iv_seq IN/OUT array to the descriptors sequence
  * \param iv_seq_len IN/OUT pointer to the sequence length
  */
-static int ssi_ivgen_generate_pool(
-	struct ssi_ivgen_ctx *ivgen_ctx,
+static int cc_gen_iv_pool(
+	struct cc_ivgen_ctx *ivgen_ctx,
 	struct cc_hw_desc iv_seq[],
 	unsigned int *iv_seq_len)
 {
 	unsigned int idx = *iv_seq_len;
 
-	if ((*iv_seq_len + SSI_IVPOOL_GEN_SEQ_LEN) > SSI_IVPOOL_SEQ_LEN) {
+	if ((*iv_seq_len + CC_IVPOOL_GEN_SEQ_LEN) > SSI_IVPOOL_SEQ_LEN) {
 		/* The sequence will be longer than allowed */
 		return -EINVAL;
 	}
@@ -97,15 +97,15 @@ static int ssi_ivgen_generate_pool(
 
 	/* Generate IV pool */
 	hw_desc_init(&iv_seq[idx]);
-	set_din_const(&iv_seq[idx], 0, SSI_IVPOOL_SIZE);
-	set_dout_sram(&iv_seq[idx], ivgen_ctx->pool, SSI_IVPOOL_SIZE);
+	set_din_const(&iv_seq[idx], 0, CC_IVPOOL_SIZE);
+	set_dout_sram(&iv_seq[idx], ivgen_ctx->pool, CC_IVPOOL_SIZE);
 	set_flow_mode(&iv_seq[idx], DIN_AES_DOUT);
 	idx++;
 
 	*iv_seq_len = idx; /* Update sequence length */
 
 	/* queue ordering assures pool readiness */
-	ivgen_ctx->next_iv_ofs = SSI_IVPOOL_META_SIZE;
+	ivgen_ctx->next_iv_ofs = CC_IVPOOL_META_SIZE;
 
 	return 0;
 }
@@ -118,15 +118,15 @@ static int ssi_ivgen_generate_pool(
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata)
+int cc_init_iv_sram(struct ssi_drvdata *drvdata)
 {
-	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	struct cc_hw_desc iv_seq[SSI_IVPOOL_SEQ_LEN];
 	unsigned int iv_seq_len = 0;
 	int rc;
 
 	/* Generate initial enc. key/iv */
-	get_random_bytes(ivgen_ctx->pool_meta, SSI_IVPOOL_META_SIZE);
+	get_random_bytes(ivgen_ctx->pool_meta, CC_IVPOOL_META_SIZE);
 
 	/* The first 32B reserved for the enc. Key/IV */
 	ivgen_ctx->ctr_key = ivgen_ctx->pool;
@@ -135,14 +135,14 @@ int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata)
 	/* Copy initial enc. key and IV to SRAM at a single descriptor */
 	hw_desc_init(&iv_seq[iv_seq_len]);
 	set_din_type(&iv_seq[iv_seq_len], DMA_DLLI, ivgen_ctx->pool_meta_dma,
-		     SSI_IVPOOL_META_SIZE, NS_BIT);
+		     CC_IVPOOL_META_SIZE, NS_BIT);
 	set_dout_sram(&iv_seq[iv_seq_len], ivgen_ctx->pool,
-		      SSI_IVPOOL_META_SIZE);
+		      CC_IVPOOL_META_SIZE);
 	set_flow_mode(&iv_seq[iv_seq_len], BYPASS);
 	iv_seq_len++;
 
 	/* Generate initial pool */
-	rc = ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, &iv_seq_len);
+	rc = cc_gen_iv_pool(ivgen_ctx, iv_seq, &iv_seq_len);
 	if (rc)
 		return rc;
 
@@ -155,17 +155,17 @@ int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata)
  *
  * \param drvdata
  */
-void ssi_ivgen_fini(struct ssi_drvdata *drvdata)
+void cc_ivgen_fini(struct ssi_drvdata *drvdata)
 {
-	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	struct device *device = &drvdata->plat_dev->dev;
 
 	if (!ivgen_ctx)
 		return;
 
 	if (ivgen_ctx->pool_meta) {
-		memset(ivgen_ctx->pool_meta, 0, SSI_IVPOOL_META_SIZE);
-		dma_free_coherent(device, SSI_IVPOOL_META_SIZE,
+		memset(ivgen_ctx->pool_meta, 0, CC_IVPOOL_META_SIZE);
+		dma_free_coherent(device, CC_IVPOOL_META_SIZE,
 				  ivgen_ctx->pool_meta,
 				  ivgen_ctx->pool_meta_dma);
 	}
@@ -184,9 +184,9 @@ void ssi_ivgen_fini(struct ssi_drvdata *drvdata)
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_init(struct ssi_drvdata *drvdata)
+int cc_ivgen_init(struct ssi_drvdata *drvdata)
 {
-	struct ssi_ivgen_ctx *ivgen_ctx;
+	struct cc_ivgen_ctx *ivgen_ctx;
 	struct device *device = &drvdata->plat_dev->dev;
 	int rc;
 
@@ -199,27 +199,27 @@ int ssi_ivgen_init(struct ssi_drvdata *drvdata)
 	ivgen_ctx = drvdata->ivgen_handle;
 
 	/* Allocate pool's header for initial enc. key/IV */
-	ivgen_ctx->pool_meta = dma_alloc_coherent(device, SSI_IVPOOL_META_SIZE,
+	ivgen_ctx->pool_meta = dma_alloc_coherent(device, CC_IVPOOL_META_SIZE,
 						  &ivgen_ctx->pool_meta_dma,
 						  GFP_KERNEL);
 	if (!ivgen_ctx->pool_meta) {
 		dev_err(device, "Not enough memory to allocate DMA of pool_meta (%u B)\n",
-			SSI_IVPOOL_META_SIZE);
+			CC_IVPOOL_META_SIZE);
 		rc = -ENOMEM;
 		goto out;
 	}
 	/* Allocate IV pool in SRAM */
-	ivgen_ctx->pool = cc_sram_alloc(drvdata, SSI_IVPOOL_SIZE);
+	ivgen_ctx->pool = cc_sram_alloc(drvdata, CC_IVPOOL_SIZE);
 	if (ivgen_ctx->pool == NULL_SRAM_ADDR) {
 		dev_err(device, "SRAM pool exhausted\n");
 		rc = -ENOMEM;
 		goto out;
 	}
 
-	return ssi_ivgen_init_sram_pool(drvdata);
+	return cc_init_iv_sram(drvdata);
 
 out:
-	ssi_ivgen_fini(drvdata);
+	cc_ivgen_fini(drvdata);
 	return rc;
 }
 
@@ -236,7 +236,7 @@ int ssi_ivgen_init(struct ssi_drvdata *drvdata)
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_getiv(
+int cc_get_iv(
 	struct ssi_drvdata *drvdata,
 	dma_addr_t iv_out_dma[],
 	unsigned int iv_out_dma_len,
@@ -244,7 +244,7 @@ int ssi_ivgen_getiv(
 	struct cc_hw_desc iv_seq[],
 	unsigned int *iv_seq_len)
 {
-	struct ssi_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
+	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	unsigned int idx = *iv_seq_len;
 	struct device *dev = drvdata_to_dev(drvdata);
 	unsigned int t;
@@ -291,10 +291,10 @@ int ssi_ivgen_getiv(
 	/* Update iv index */
 	ivgen_ctx->next_iv_ofs += iv_out_size;
 
-	if ((SSI_IVPOOL_SIZE - ivgen_ctx->next_iv_ofs) < CC_AES_IV_SIZE) {
+	if ((CC_IVPOOL_SIZE - ivgen_ctx->next_iv_ofs) < CC_AES_IV_SIZE) {
 		dev_dbg(dev, "Pool exhausted, regenerating iv-pool\n");
 		/* pool is drained -regenerate it! */
-		return ssi_ivgen_generate_pool(ivgen_ctx, iv_seq, iv_seq_len);
+		return cc_gen_iv_pool(ivgen_ctx, iv_seq, iv_seq_len);
 	}
 
 	return 0;
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
index fd28309..fe3d919 100644
--- a/drivers/staging/ccree/ssi_ivgen.h
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -29,14 +29,14 @@
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_init(struct ssi_drvdata *drvdata);
+int cc_ivgen_init(struct ssi_drvdata *drvdata);
 
 /*!
  * Free iv-pool and ivgen context.
  *
  * \param drvdata
  */
-void ssi_ivgen_fini(struct ssi_drvdata *drvdata);
+void cc_ivgen_fini(struct ssi_drvdata *drvdata);
 
 /*!
  * Generates the initial pool in SRAM.
@@ -46,7 +46,7 @@ void ssi_ivgen_fini(struct ssi_drvdata *drvdata);
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata);
+int cc_init_iv_sram(struct ssi_drvdata *drvdata);
 
 /*!
  * Acquires 16 Bytes IV from the iv-pool
@@ -61,7 +61,7 @@ int ssi_ivgen_init_sram_pool(struct ssi_drvdata *drvdata);
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_ivgen_getiv(
+int cc_get_iv(
 	struct ssi_drvdata *drvdata,
 	dma_addr_t iv_out_dma[],
 	unsigned int iv_out_dma_len,
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index d1a6318..f0e3baf 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -81,7 +81,7 @@ int cc_pm_resume(struct device *dev)
 	/* must be after the queue resuming as it uses the HW queue*/
 	cc_init_hash_sram(drvdata);
 
-	ssi_ivgen_init_sram_pool(drvdata);
+	cc_init_iv_sram(drvdata);
 	return 0;
 }
 
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 91f5e2d..3d25b72 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -329,9 +329,10 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 			ssi_req->ivgen_size);
 
 		/* Acquire IV from pool */
-		rc = ssi_ivgen_getiv(drvdata, ssi_req->ivgen_dma_addr,
-				     ssi_req->ivgen_dma_addr_len,
-				     ssi_req->ivgen_size, iv_seq, &iv_seq_len);
+		rc = cc_get_iv(drvdata, ssi_req->ivgen_dma_addr,
+			       ssi_req->ivgen_dma_addr_len,
+			       ssi_req->ivgen_size,
+			       iv_seq, &iv_seq_len);
 
 		if (rc) {
 			dev_err(dev, "Failed to generate IV (rc=%d)\n", rc);
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 15/24] staging: ccree: fix ivgen func def coding style
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (13 preceding siblings ...)
  2017-12-12 14:53 ` [PATCH 14/24] staging: ccree: fix ivgen naming convention Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 16/24] staging: ccree: drop unsupported MULTI2 mode code Gilad Ben-Yossef
                   ` (8 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Fix the indentation of ivgen function definitions according to the
coding style guidelines for better code readability.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_ivgen.c | 16 +++++-----------
 drivers/staging/ccree/ssi_ivgen.h | 10 +++-------
 2 files changed, 8 insertions(+), 18 deletions(-)

diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
index ad6cd97..c499361 100644
--- a/drivers/staging/ccree/ssi_ivgen.c
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -57,10 +57,8 @@ struct cc_ivgen_ctx {
  * \param iv_seq IN/OUT array to the descriptors sequence
  * \param iv_seq_len IN/OUT pointer to the sequence length
  */
-static int cc_gen_iv_pool(
-	struct cc_ivgen_ctx *ivgen_ctx,
-	struct cc_hw_desc iv_seq[],
-	unsigned int *iv_seq_len)
+static int cc_gen_iv_pool(struct cc_ivgen_ctx *ivgen_ctx,
+			  struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len)
 {
 	unsigned int idx = *iv_seq_len;
 
@@ -236,13 +234,9 @@ int cc_ivgen_init(struct ssi_drvdata *drvdata)
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_get_iv(
-	struct ssi_drvdata *drvdata,
-	dma_addr_t iv_out_dma[],
-	unsigned int iv_out_dma_len,
-	unsigned int iv_out_size,
-	struct cc_hw_desc iv_seq[],
-	unsigned int *iv_seq_len)
+int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
+	      unsigned int iv_out_dma_len, unsigned int iv_out_size,
+	      struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len)
 {
 	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	unsigned int idx = *iv_seq_len;
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
index fe3d919..bbd0245 100644
--- a/drivers/staging/ccree/ssi_ivgen.h
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -61,12 +61,8 @@ int cc_init_iv_sram(struct ssi_drvdata *drvdata);
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_get_iv(
-	struct ssi_drvdata *drvdata,
-	dma_addr_t iv_out_dma[],
-	unsigned int iv_out_dma_len,
-	unsigned int iv_out_size,
-	struct cc_hw_desc iv_seq[],
-	unsigned int *iv_seq_len);
+int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
+	      unsigned int iv_out_dma_len, unsigned int iv_out_size,
+	      struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len);
 
 #endif /*__SSI_IVGEN_H__*/
-- 
2.7.4

* [PATCH 16/24] staging: ccree: drop unsupported MULTI2 mode code
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (14 preceding siblings ...)
  2017-12-12 14:53 ` [PATCH 15/24] staging: ccree: fix ivgen func def coding style Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 17/24] staging: ccree: remove SSI_CC_HAS_ macros Gilad Ben-Yossef
                   ` (7 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Remove the code supporting MULTI2 mode, which is not supported
by the current hardware.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/cc_crypto_ctx.h    |  17 ----
 drivers/staging/ccree/cc_hw_queue_defs.h |   2 -
 drivers/staging/ccree/ssi_cipher.c       | 167 ++++---------------------------
 drivers/staging/ccree/ssi_driver.h       |   1 -
 4 files changed, 18 insertions(+), 169 deletions(-)

diff --git a/drivers/staging/ccree/cc_crypto_ctx.h b/drivers/staging/ccree/cc_crypto_ctx.h
index 591f6fd..0e34d9a 100644
--- a/drivers/staging/ccree/cc_crypto_ctx.h
+++ b/drivers/staging/ccree/cc_crypto_ctx.h
@@ -82,15 +82,6 @@
 
 #define CC_HMAC_BLOCK_SIZE_MAX CC_HASH_BLOCK_SIZE_MAX
 
-#define CC_MULTI2_SYSTEM_KEY_SIZE		32
-#define CC_MULTI2_DATA_KEY_SIZE		8
-#define CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE \
-		(CC_MULTI2_SYSTEM_KEY_SIZE + CC_MULTI2_DATA_KEY_SIZE)
-#define	CC_MULTI2_BLOCK_SIZE					8
-#define	CC_MULTI2_IV_SIZE					8
-#define	CC_MULTI2_MIN_NUM_ROUNDS				8
-#define	CC_MULTI2_MAX_NUM_ROUNDS				128
-
 #define CC_DRV_ALG_MAX_BLOCK_SIZE CC_HASH_BLOCK_SIZE_MAX
 
 enum drv_engine_type {
@@ -168,14 +159,6 @@ enum drv_hash_hw_mode {
 	DRV_HASH_HW_RESERVE32B = S32_MAX
 };
 
-enum drv_multi2_mode {
-	DRV_MULTI2_NULL = -1,
-	DRV_MULTI2_ECB = 0,
-	DRV_MULTI2_CBC = 1,
-	DRV_MULTI2_OFB = 2,
-	DRV_MULTI2_RESERVE32B = S32_MAX
-};
-
 /* drv_crypto_key_type[1:0] is mapped to cipher_do[1:0] */
 /* drv_crypto_key_type[2] is mapped to cipher_config2 */
 enum drv_crypto_key_type {
diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h
index c5aaa79..3ca548d 100644
--- a/drivers/staging/ccree/cc_hw_queue_defs.h
+++ b/drivers/staging/ccree/cc_hw_queue_defs.h
@@ -120,7 +120,6 @@ enum cc_flow_mode {
 	AES_to_AES_to_HASH_and_DOUT	= 13,
 	AES_to_AES_to_HASH	= 14,
 	AES_to_HASH_and_AES	= 15,
-	DIN_MULTI2_DOUT		= 16,
 	DIN_AES_AESMAC		= 17,
 	HASH_to_DOUT		= 18,
 	/* setup flows */
@@ -128,7 +127,6 @@ enum cc_flow_mode {
 	S_DIN_to_AES2		= 33,
 	S_DIN_to_DES		= 34,
 	S_DIN_to_RC4		= 35,
-	S_DIN_to_MULTI2		= 36,
 	S_DIN_to_HASH		= 37,
 	S_AES_to_DOUT		= 38,
 	S_AES2_to_DOUT		= 39,
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 0b464d8..a158213 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -97,12 +97,6 @@ static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
 		if (size == DES3_EDE_KEY_SIZE || size == DES_KEY_SIZE)
 			return 0;
 		break;
-#if SSI_CC_HAS_MULTI2
-	case S_DIN_to_MULTI2:
-		if (size == CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE)
-			return 0;
-		break;
-#endif
 	default:
 		break;
 	}
@@ -143,20 +137,6 @@ static int validate_data_size(struct cc_cipher_ctx *ctx_p,
 		if (IS_ALIGNED(size, DES_BLOCK_SIZE))
 			return 0;
 		break;
-#if SSI_CC_HAS_MULTI2
-	case S_DIN_to_MULTI2:
-		switch (ctx_p->cipher_mode) {
-		case DRV_MULTI2_CBC:
-			if (IS_ALIGNED(size, CC_MULTI2_BLOCK_SIZE))
-				return 0;
-			break;
-		case DRV_MULTI2_OFB:
-			return 0;
-		default:
-			break;
-		}
-		break;
-#endif /*SSI_CC_HAS_MULTI2*/
 	default:
 		break;
 	}
@@ -315,14 +295,6 @@ static int cc_cipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 
 	/* STAT_PHASE_0: Init and sanity checks */
 
-#if SSI_CC_HAS_MULTI2
-	/* last byte of key buffer is round number and should not be a part
-	 * of key size
-	 */
-	if (ctx_p->flow_mode == S_DIN_to_MULTI2)
-		keylen -= 1;
-#endif /*SSI_CC_HAS_MULTI2*/
-
 	if (validate_keys_sizes(ctx_p, keylen)) {
 		dev_err(dev, "Unsupported key size %d.\n", keylen);
 		crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
@@ -393,38 +365,23 @@ static int cc_cipher_setkey(struct crypto_ablkcipher *atfm, const u8 *key,
 	dma_sync_single_for_cpu(dev, ctx_p->user.key_dma_addr,
 				max_key_buf_size, DMA_TO_DEVICE);
 
-	if (ctx_p->flow_mode == S_DIN_to_MULTI2) {
-#if SSI_CC_HAS_MULTI2
-		memcpy(ctx_p->user.key, key, CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE);
-		ctx_p->key_round_number =
-			key[CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE];
-		if (ctx_p->key_round_number < CC_MULTI2_MIN_NUM_ROUNDS ||
-		    ctx_p->key_round_number > CC_MULTI2_MAX_NUM_ROUNDS) {
-			crypto_tfm_set_flags(tfm, CRYPTO_TFM_RES_BAD_KEY_LEN);
-			dev_dbg(dev, "SSI_CC_HAS_MULTI2 einval");
-			return -EINVAL;
-#endif /*SSI_CC_HAS_MULTI2*/
-	} else {
-		memcpy(ctx_p->user.key, key, keylen);
-		if (keylen == 24)
-			memset(ctx_p->user.key + 24, 0,
-			       CC_AES_KEY_SIZE_MAX - 24);
-
-		if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
-			/* sha256 for key2 - use sw implementation */
-			int key_len = keylen >> 1;
-			int err;
-			SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm);
-
-			desc->tfm = ctx_p->shash_tfm;
-
-			err = crypto_shash_digest(desc, ctx_p->user.key,
-						  key_len,
-						  ctx_p->user.key + key_len);
-			if (err) {
-				dev_err(dev, "Failed to hash ESSIV key.\n");
-				return err;
-			}
+	memcpy(ctx_p->user.key, key, keylen);
+	if (keylen == 24)
+		memset(ctx_p->user.key + 24, 0, CC_AES_KEY_SIZE_MAX - 24);
+
+	if (ctx_p->cipher_mode == DRV_CIPHER_ESSIV) {
+		/* sha256 for key2 - use sw implementation */
+		int key_len = keylen >> 1;
+		int err;
+		SHASH_DESC_ON_STACK(desc, ctx_p->shash_tfm);
+
+		desc->tfm = ctx_p->shash_tfm;
+
+		err = crypto_shash_digest(desc, ctx_p->user.key, key_len,
+					  ctx_p->user.key + key_len);
+		if (err) {
+			dev_err(dev, "Failed to hash ESSIV key.\n");
+			return err;
 		}
 	}
 	dma_sync_single_for_device(dev, ctx_p->user.key_dma_addr,
@@ -561,49 +518,6 @@ static void cc_setup_cipher_desc(struct crypto_tfm *tfm,
 	}
 }
 
-#if SSI_CC_HAS_MULTI2
-static void cc_setup_multi2_desc(struct crypto_tfm *tfm,
-				 struct blkcipher_req_ctx *req_ctx,
-				 unsigned int ivsize, struct cc_hw_desc desc[],
-				 unsigned int *seq_size)
-{
-	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
-
-	int direction = req_ctx->gen_ctx.op_type;
-	/* Load system key */
-	hw_desc_init(&desc[*seq_size]);
-	set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode);
-	set_cipher_config0(&desc[*seq_size], direction);
-	set_din_type(&desc[*seq_size], DMA_DLLI, ctx_p->user.key_dma_addr,
-		     CC_MULTI2_SYSTEM_KEY_SIZE, NS_BIT);
-	set_flow_mode(&desc[*seq_size], ctx_p->flow_mode);
-	set_setup_mode(&desc[*seq_size], SETUP_LOAD_KEY0);
-	(*seq_size)++;
-
-	/* load data key */
-	hw_desc_init(&desc[*seq_size]);
-	set_din_type(&desc[*seq_size], DMA_DLLI,
-		     (ctx_p->user.key_dma_addr + CC_MULTI2_SYSTEM_KEY_SIZE),
-		     CC_MULTI2_DATA_KEY_SIZE, NS_BIT);
-	set_multi2_num_rounds(&desc[*seq_size], ctx_p->key_round_number);
-	set_flow_mode(&desc[*seq_size], ctx_p->flow_mode);
-	set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode);
-	set_cipher_config0(&desc[*seq_size], direction);
-	set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE0);
-	(*seq_size)++;
-
-	/* Set state */
-	hw_desc_init(&desc[*seq_size]);
-	set_din_type(&desc[*seq_size], DMA_DLLI, req_ctx->gen_ctx.iv_dma_addr,
-		     ivsize, NS_BIT);
-	set_cipher_config0(&desc[*seq_size], direction);
-	set_flow_mode(&desc[*seq_size], ctx_p->flow_mode);
-	set_cipher_mode(&desc[*seq_size], ctx_p->cipher_mode);
-	set_setup_mode(&desc[*seq_size], SETUP_LOAD_STATE1);
-	(*seq_size)++;
-}
-#endif /*SSI_CC_HAS_MULTI2*/
-
 static void cc_setup_cipher_data(struct crypto_tfm *tfm,
 				 struct blkcipher_req_ctx *req_ctx,
 				 struct scatterlist *dst,
@@ -622,11 +536,6 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
 	case S_DIN_to_DES:
 		flow_mode = DIN_DES_DOUT;
 		break;
-#if SSI_CC_HAS_MULTI2
-	case S_DIN_to_MULTI2:
-		flow_mode = DIN_MULTI2_DOUT;
-		break;
-#endif /*SSI_CC_HAS_MULTI2*/
 	default:
 		dev_err(dev, "invalid flow mode, flow_mode = %d\n", flow_mode);
 		return;
@@ -806,13 +715,7 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	/* STAT_PHASE_2: Create sequence */
 
 	/* Setup processing */
-#if SSI_CC_HAS_MULTI2
-	if (ctx_p->flow_mode == S_DIN_to_MULTI2)
-		cc_setup_multi2_desc(tfm, req_ctx, ivsize, desc, &seq_len);
-	else
-#endif /*SSI_CC_HAS_MULTI2*/
-		cc_setup_cipher_desc(tfm, req_ctx, ivsize, nbytes, desc,
-				     &seq_len);
+	cc_setup_cipher_desc(tfm, req_ctx, ivsize, nbytes, desc, &seq_len);
 	/* Data processing */
 	cc_setup_cipher_data(tfm, req_ctx, dst, src, nbytes, req, desc,
 			     &seq_len);
@@ -1177,40 +1080,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_ECB,
 		.flow_mode = S_DIN_to_DES,
 	},
-#if SSI_CC_HAS_MULTI2
-	{
-		.name = "cbc(multi2)",
-		.driver_name = "cbc-multi2-dx",
-		.blocksize = CC_MULTI2_BLOCK_SIZE,
-		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
-		.template_ablkcipher = {
-			.setkey = cc_cipher_setkey,
-			.encrypt = cc_cipher_encrypt,
-			.decrypt = cc_cipher_decrypt,
-			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
-			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
-			.ivsize = CC_MULTI2_IV_SIZE,
-			},
-		.cipher_mode = DRV_MULTI2_CBC,
-		.flow_mode = S_DIN_to_MULTI2,
-	},
-	{
-		.name = "ofb(multi2)",
-		.driver_name = "ofb-multi2-dx",
-		.blocksize = 1,
-		.type = CRYPTO_ALG_TYPE_ABLKCIPHER,
-		.template_ablkcipher = {
-			.setkey = cc_cipher_setkey,
-			.encrypt = cc_cipher_encrypt,
-			.decrypt = cc_cipher_encrypt,
-			.min_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
-			.max_keysize = CC_MULTI2_SYSTEM_N_DATA_KEY_SIZE + 1,
-			.ivsize = CC_MULTI2_IV_SIZE,
-			},
-		.cipher_mode = DRV_MULTI2_OFB,
-		.flow_mode = S_DIN_to_MULTI2,
-	},
-#endif /*SSI_CC_HAS_MULTI2*/
 };
 
 static
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index f92867b..a2de584 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -60,7 +60,6 @@
 #define SSI_CC_HAS_AES_ESSIV 1
 #define SSI_CC_HAS_AES_BITLOCKER 1
 #define SSI_CC_HAS_AES_CTS 1
-#define SSI_CC_HAS_MULTI2 0
 #define SSI_CC_HAS_CMAC 1
 
 #define SSI_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
-- 
2.7.4

* [PATCH 17/24] staging: ccree: remove SSI_CC_HAS_ macros
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (15 preceding siblings ...)
  2017-12-12 14:53 ` [PATCH 16/24] staging: ccree: drop unsupported MULTI2 mode code Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 18/24] staging: ccree: rename all SSI to CC Gilad Ben-Yossef
                   ` (6 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Remove the macros controlling the build of various features. This
needs to happen dynamically at registration time.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c       | 33 ---------------------------------
 drivers/staging/ccree/ssi_buffer_mgr.c |  4 ----
 drivers/staging/ccree/ssi_cipher.c     |  8 --------
 drivers/staging/ccree/ssi_driver.h     |  8 --------
 drivers/staging/ccree/ssi_hash.c       |  5 -----
 5 files changed, 58 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index 112fba3..ac9961c 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -662,7 +662,6 @@ cc_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
 	return rc;
 }
 
-#if SSI_CC_HAS_AES_CCM
 static int cc_rfc4309_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
 				 unsigned int keylen)
 {
@@ -676,7 +675,6 @@ static int cc_rfc4309_ccm_setkey(struct crypto_aead *tfm, const u8 *key,
 
 	return cc_aead_setkey(tfm, key, keylen);
 }
-#endif /*SSI_CC_HAS_AES_CCM*/
 
 static int cc_aead_setauthsize(struct crypto_aead *authenc,
 			       unsigned int authsize)
@@ -696,7 +694,6 @@ static int cc_aead_setauthsize(struct crypto_aead *authenc,
 	return 0;
 }
 
-#if SSI_CC_HAS_AES_CCM
 static int cc_rfc4309_ccm_setauthsize(struct crypto_aead *authenc,
 				      unsigned int authsize)
 {
@@ -730,7 +727,6 @@ static int cc_ccm_setauthsize(struct crypto_aead *authenc,
 
 	return cc_aead_setauthsize(authenc, authsize);
 }
-#endif /*SSI_CC_HAS_AES_CCM*/
 
 static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 			      struct cc_hw_desc desc[], unsigned int *seq_size)
@@ -1374,7 +1370,6 @@ static int validate_data_size(struct cc_aead_ctx *ctx,
 	return -EINVAL;
 }
 
-#if SSI_CC_HAS_AES_CCM
 static unsigned int format_ccm_a0(u8 *pa0_buff, u32 header_size)
 {
 	unsigned int len = 0;
@@ -1623,9 +1618,6 @@ static void cc_proc_rfc4309_ccm(struct aead_request *req)
 	req->iv = areq_ctx->ctr_iv;
 	req->assoclen -= CCM_BLOCK_IV_SIZE;
 }
-#endif /*SSI_CC_HAS_AES_CCM*/
-
-#if SSI_CC_HAS_AES_GCM
 
 static void cc_set_ghash_desc(struct aead_request *req,
 			      struct cc_hw_desc desc[], unsigned int *seq_size)
@@ -1952,8 +1944,6 @@ static void cc_proc_rfc4_gcm(struct aead_request *req)
 	req->assoclen -= GCM_BLOCK_RFC4_IV_SIZE;
 }
 
-#endif /*SSI_CC_HAS_AES_GCM*/
-
 static int cc_proc_aead(struct aead_request *req,
 			enum drv_crypto_direction direct)
 {
@@ -2020,7 +2010,6 @@ static int cc_proc_aead(struct aead_request *req,
 		areq_ctx->hw_iv_size = crypto_aead_ivsize(tfm);
 	}
 
-#if SSI_CC_HAS_AES_CCM
 	if (ctx->cipher_mode == DRV_CIPHER_CCM) {
 		rc = config_ccm_adata(req);
 		if (rc) {
@@ -2031,11 +2020,7 @@ static int cc_proc_aead(struct aead_request *req,
 	} else {
 		areq_ctx->ccm_hdr_size = ccm_header_size_null;
 	}
-#else
-	areq_ctx->ccm_hdr_size = ccm_header_size_null;
-#endif /*SSI_CC_HAS_AES_CCM*/
 
-#if SSI_CC_HAS_AES_GCM
 	if (ctx->cipher_mode == DRV_CIPHER_GCTR) {
 		rc = config_gcm_context(req);
 		if (rc) {
@@ -2044,7 +2029,6 @@ static int cc_proc_aead(struct aead_request *req,
 			goto exit;
 		}
 	}
-#endif /*SSI_CC_HAS_AES_GCM*/
 
 	rc = cc_map_aead_request(ctx->drvdata, req);
 	if (rc) {
@@ -2100,18 +2084,12 @@ static int cc_proc_aead(struct aead_request *req,
 	case DRV_HASH_XCBC_MAC:
 		cc_xcbc_authenc(req, desc, &seq_len);
 		break;
-#if (SSI_CC_HAS_AES_CCM || SSI_CC_HAS_AES_GCM)
 	case DRV_HASH_NULL:
-#if SSI_CC_HAS_AES_CCM
 		if (ctx->cipher_mode == DRV_CIPHER_CCM)
 			cc_ccm(req, desc, &seq_len);
-#endif /*SSI_CC_HAS_AES_CCM*/
-#if SSI_CC_HAS_AES_GCM
 		if (ctx->cipher_mode == DRV_CIPHER_GCTR)
 			cc_gcm(req, desc, &seq_len);
-#endif /*SSI_CC_HAS_AES_GCM*/
 		break;
-#endif
 	default:
 		dev_err(dev, "Unsupported authenc (%d)\n", ctx->auth_mode);
 		cc_unmap_aead_request(dev, req);
@@ -2151,7 +2129,6 @@ static int cc_aead_encrypt(struct aead_request *req)
 	return rc;
 }
 
-#if SSI_CC_HAS_AES_CCM
 static int cc_rfc4309_ccm_encrypt(struct aead_request *req)
 {
 	/* Very similar to cc_aead_encrypt() above. */
@@ -2180,7 +2157,6 @@ static int cc_rfc4309_ccm_encrypt(struct aead_request *req)
 out:
 	return rc;
 }
-#endif /* SSI_CC_HAS_AES_CCM */
 
 static int cc_aead_decrypt(struct aead_request *req)
 {
@@ -2201,7 +2177,6 @@ static int cc_aead_decrypt(struct aead_request *req)
 	return rc;
 }
 
-#if SSI_CC_HAS_AES_CCM
 static int cc_rfc4309_ccm_decrypt(struct aead_request *req)
 {
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
@@ -2229,9 +2204,6 @@ static int cc_rfc4309_ccm_decrypt(struct aead_request *req)
 out:
 	return rc;
 }
-#endif /* SSI_CC_HAS_AES_CCM */
-
-#if SSI_CC_HAS_AES_GCM
 
 static int cc_rfc4106_gcm_setkey(struct crypto_aead *tfm, const u8 *key,
 				 unsigned int keylen)
@@ -2429,7 +2401,6 @@ static int cc_rfc4543_gcm_decrypt(struct aead_request *req)
 
 	return rc;
 }
-#endif /* SSI_CC_HAS_AES_GCM */
 
 /* DX Block aead alg */
 static struct ssi_alg_template aead_algs[] = {
@@ -2585,7 +2556,6 @@ static struct ssi_alg_template aead_algs[] = {
 		.flow_mode = S_DIN_to_AES,
 		.auth_mode = DRV_HASH_XCBC_MAC,
 	},
-#if SSI_CC_HAS_AES_CCM
 	{
 		.name = "ccm(aes)",
 		.driver_name = "ccm-aes-dx",
@@ -2624,8 +2594,6 @@ static struct ssi_alg_template aead_algs[] = {
 		.flow_mode = S_DIN_to_AES,
 		.auth_mode = DRV_HASH_NULL,
 	},
-#endif /*SSI_CC_HAS_AES_CCM*/
-#if SSI_CC_HAS_AES_GCM
 	{
 		.name = "gcm(aes)",
 		.driver_name = "gcm-aes-dx",
@@ -2683,7 +2651,6 @@ static struct ssi_alg_template aead_algs[] = {
 		.flow_mode = S_DIN_to_AES,
 		.auth_mode = DRV_HASH_NULL,
 	},
-#endif /*SSI_CC_HAS_AES_GCM*/
 };
 
 static struct ssi_crypto_alg *cc_create_aead_alg(struct ssi_alg_template *tmpl,
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 4ab76dc..c28ce7c 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -604,7 +604,6 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 				 MAX_MAC_SIZE, DMA_BIDIRECTIONAL);
 	}
 
-#if SSI_CC_HAS_AES_GCM
 	if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
 		if (areq_ctx->hkey_dma_addr) {
 			dma_unmap_single(dev, areq_ctx->hkey_dma_addr,
@@ -626,7 +625,6 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 					 AES_BLOCK_SIZE, DMA_TO_DEVICE);
 		}
 	}
-#endif
 
 	if (areq_ctx->ccm_hdr_size != ccm_header_size_null) {
 		if (areq_ctx->ccm_iv0_dma_addr) {
@@ -1269,7 +1267,6 @@ int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 		}
 	}
 
-#if SSI_CC_HAS_AES_GCM
 	if (areq_ctx->cipher_mode == DRV_CIPHER_GCTR) {
 		dma_addr = dma_map_single(dev, areq_ctx->hkey, AES_BLOCK_SIZE,
 					  DMA_BIDIRECTIONAL);
@@ -1315,7 +1312,6 @@ int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 		}
 		areq_ctx->gcm_iv_inc2_dma_addr = dma_addr;
 	}
-#endif /*SSI_CC_HAS_AES_GCM*/
 
 	size_to_map = req->cryptlen + req->assoclen;
 	if (areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_ENCRYPT)
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index a158213..299e73a 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -783,7 +783,6 @@ static int cc_cipher_decrypt(struct ablkcipher_request *req)
 
 /* DX Block cipher alg */
 static struct ssi_alg_template blkcipher_algs[] = {
-#if SSI_CC_HAS_AES_XTS
 	{
 		.name = "xts(aes)",
 		.driver_name = "xts-aes-dx",
@@ -833,8 +832,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_XTS,
 		.flow_mode = S_DIN_to_AES,
 	},
-#endif /*SSI_CC_HAS_AES_XTS*/
-#if SSI_CC_HAS_AES_ESSIV
 	{
 		.name = "essiv(aes)",
 		.driver_name = "essiv-aes-dx",
@@ -883,8 +880,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_ESSIV,
 		.flow_mode = S_DIN_to_AES,
 	},
-#endif /*SSI_CC_HAS_AES_ESSIV*/
-#if SSI_CC_HAS_AES_BITLOCKER
 	{
 		.name = "bitlocker(aes)",
 		.driver_name = "bitlocker-aes-dx",
@@ -933,7 +928,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_BITLOCKER,
 		.flow_mode = S_DIN_to_AES,
 	},
-#endif /*SSI_CC_HAS_AES_BITLOCKER*/
 	{
 		.name = "ecb(aes)",
 		.driver_name = "ecb-aes-dx",
@@ -982,7 +976,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_OFB,
 		.flow_mode = S_DIN_to_AES,
 	},
-#if SSI_CC_HAS_AES_CTS
 	{
 		.name = "cts1(cbc(aes))",
 		.driver_name = "cts1-cbc-aes-dx",
@@ -999,7 +992,6 @@ static struct ssi_alg_template blkcipher_algs[] = {
 		.cipher_mode = DRV_CIPHER_CBC_CTS,
 		.flow_mode = S_DIN_to_AES,
 	},
-#endif
 	{
 		.name = "ctr(aes)",
 		.driver_name = "ctr-aes-dx",
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index a2de584..c9fdb89 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -54,14 +54,6 @@
 #define SSI_DEV_NAME_STR "cc715ree"
 #define CC_COHERENT_CACHE_PARAMS 0xEEE
 
-#define SSI_CC_HAS_AES_CCM 1
-#define SSI_CC_HAS_AES_GCM 1
-#define SSI_CC_HAS_AES_XTS 1
-#define SSI_CC_HAS_AES_ESSIV 1
-#define SSI_CC_HAS_AES_BITLOCKER 1
-#define SSI_CC_HAS_AES_CTS 1
-#define SSI_CC_HAS_CMAC 1
-
 #define SSI_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
 			  (1 << DX_AXIM_CFG_RRESPMASK_BIT_SHIFT) | \
 			  (1 << DX_AXIM_CFG_INFLTMASK_BIT_SHIFT) | \
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 29c17f3..10c73ef 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -1190,7 +1190,6 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 	return rc;
 }
 
-#if SSI_CC_HAS_CMAC
 static int cc_cmac_setkey(struct crypto_ahash *ahash,
 			  const u8 *key, unsigned int keylen)
 {
@@ -1230,7 +1229,6 @@ static int cc_cmac_setkey(struct crypto_ahash *ahash,
 
 	return 0;
 }
-#endif
 
 static void cc_free_ctx(struct cc_hash_ctx *ctx)
 {
@@ -1937,7 +1935,6 @@ static struct cc_hash_template driver_hash[] = {
 		.hw_mode = DRV_CIPHER_XCBC_MAC,
 		.inter_digestsize = AES_BLOCK_SIZE,
 	},
-#if SSI_CC_HAS_CMAC
 	{
 		.mac_name = "cmac(aes)",
 		.mac_driver_name = "cmac-aes-dx",
@@ -1960,8 +1957,6 @@ static struct cc_hash_template driver_hash[] = {
 		.hw_mode = DRV_CIPHER_CMAC,
 		.inter_digestsize = AES_BLOCK_SIZE,
 	},
-#endif
-
 };
 
 static struct cc_hash_alg *cc_alloc_hash_alg(struct cc_hash_template *template,
-- 
2.7.4

* [PATCH 18/24] staging: ccree: rename all SSI to CC
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (16 preceding siblings ...)
  2017-12-12 14:53 ` [PATCH 17/24] staging: ccree: remove SSI_CC_HAS_ macros Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 19/24] staging: ccree: rename all DX " Gilad Ben-Yossef
                   ` (5 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Unify naming convention by renaming all SSI macros to CC.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c        | 26 +++++------
 drivers/staging/ccree/ssi_aead.h        |  6 +--
 drivers/staging/ccree/ssi_buffer_mgr.c  | 78 ++++++++++++++++-----------------
 drivers/staging/ccree/ssi_buffer_mgr.h  | 14 +++---
 drivers/staging/ccree/ssi_cipher.c      |  4 +-
 drivers/staging/ccree/ssi_cipher.h      |  6 +--
 drivers/staging/ccree/ssi_config.h      |  4 +-
 drivers/staging/ccree/ssi_driver.c      | 26 +++++------
 drivers/staging/ccree/ssi_driver.h      | 22 +++++-----
 drivers/staging/ccree/ssi_fips.c        |  2 +-
 drivers/staging/ccree/ssi_fips.h        |  6 +--
 drivers/staging/ccree/ssi_hash.c        |  6 +--
 drivers/staging/ccree/ssi_hash.h        |  6 +--
 drivers/staging/ccree/ssi_ivgen.c       |  8 ++--
 drivers/staging/ccree/ssi_ivgen.h       |  8 ++--
 drivers/staging/ccree/ssi_pm.c          |  2 +-
 drivers/staging/ccree/ssi_pm.h          |  6 +--
 drivers/staging/ccree/ssi_request_mgr.c | 16 +++----
 drivers/staging/ccree/ssi_sram_mgr.c    |  2 +-
 drivers/staging/ccree/ssi_sram_mgr.h    | 10 ++---
 drivers/staging/ccree/ssi_sysfs.h       |  6 +--
 21 files changed, 132 insertions(+), 132 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index ac9961c..d07b38d 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -257,7 +257,7 @@ static void cc_aead_complete(struct device *dev, void *ssi_req)
 			cc_copy_sg_portion(dev, areq_ctx->mac_buf,
 					   areq_ctx->dst_sgl, skip,
 					   (skip + ctx->authsize),
-					   SSI_SG_FROM_BUF);
+					   CC_SG_FROM_BUF);
 		}
 
 		/* If an IV was generated, copy it back to the user provided
@@ -739,7 +739,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	switch (assoc_dma_type) {
-	case SSI_DMA_BUF_DLLI:
+	case CC_DMA_BUF_DLLI:
 		dev_dbg(dev, "ASSOC buffer type DLLI\n");
 		hw_desc_init(&desc[idx]);
 		set_din_type(&desc[idx], DMA_DLLI, sg_dma_address(areq->src),
@@ -749,7 +749,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 		    areq_ctx->cryptlen > 0)
 			set_din_not_last_indication(&desc[idx]);
 		break;
-	case SSI_DMA_BUF_MLLI:
+	case CC_DMA_BUF_MLLI:
 		dev_dbg(dev, "ASSOC buffer type MLLI\n");
 		hw_desc_init(&desc[idx]);
 		set_din_type(&desc[idx], DMA_MLLI, areq_ctx->assoc.sram_addr,
@@ -759,7 +759,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 		    areq_ctx->cryptlen > 0)
 			set_din_not_last_indication(&desc[idx]);
 		break;
-	case SSI_DMA_BUF_NULL:
+	case CC_DMA_BUF_NULL:
 	default:
 		dev_err(dev, "Invalid ASSOC buffer type\n");
 	}
@@ -780,7 +780,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
 	switch (data_dma_type) {
-	case SSI_DMA_BUF_DLLI:
+	case CC_DMA_BUF_DLLI:
 	{
 		struct scatterlist *cipher =
 			(direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
@@ -797,7 +797,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 		set_flow_mode(&desc[idx], flow_mode);
 		break;
 	}
-	case SSI_DMA_BUF_MLLI:
+	case CC_DMA_BUF_MLLI:
 	{
 		/* DOUBLE-PASS flow (as default)
 		 * assoc. + iv + data -compact in one table
@@ -823,7 +823,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 		set_flow_mode(&desc[idx], flow_mode);
 		break;
 	}
-	case SSI_DMA_BUF_NULL:
+	case CC_DMA_BUF_NULL:
 	default:
 		dev_err(dev, "AUTHENC: Invalid SRC/DST buffer type\n");
 	}
@@ -847,7 +847,7 @@ static void cc_proc_cipher_desc(struct aead_request *areq,
 		return; /*null processing*/
 
 	switch (data_dma_type) {
-	case SSI_DMA_BUF_DLLI:
+	case CC_DMA_BUF_DLLI:
 		dev_dbg(dev, "CIPHER: SRC/DST buffer type DLLI\n");
 		hw_desc_init(&desc[idx]);
 		set_din_type(&desc[idx], DMA_DLLI,
@@ -860,7 +860,7 @@ static void cc_proc_cipher_desc(struct aead_request *areq,
 			      areq_ctx->cryptlen, NS_BIT, 0);
 		set_flow_mode(&desc[idx], flow_mode);
 		break;
-	case SSI_DMA_BUF_MLLI:
+	case CC_DMA_BUF_MLLI:
 		dev_dbg(dev, "CIPHER: SRC/DST buffer type MLLI\n");
 		hw_desc_init(&desc[idx]);
 		set_din_type(&desc[idx], DMA_MLLI, areq_ctx->src.sram_addr,
@@ -869,7 +869,7 @@ static void cc_proc_cipher_desc(struct aead_request *areq,
 			      areq_ctx->dst.mlli_nents, NS_BIT, 0);
 		set_flow_mode(&desc[idx], flow_mode);
 		break;
-	case SSI_DMA_BUF_NULL:
+	case CC_DMA_BUF_NULL:
 	default:
 		dev_err(dev, "CIPHER: Invalid SRC/DST buffer type\n");
 	}
@@ -1171,8 +1171,8 @@ static void cc_mlli_to_sram(struct aead_request *req,
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
-	if (req_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI ||
-	    req_ctx->data_buff_type == SSI_DMA_BUF_MLLI ||
+	if (req_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
+	    req_ctx->data_buff_type == CC_DMA_BUF_MLLI ||
 	    !req_ctx->is_single_pass) {
 		dev_dbg(dev, "Copy-to-sram: mlli_dma=%08x, mlli_size=%u\n",
 			(unsigned int)ctx->drvdata->mlli_sram_addr,
@@ -2670,7 +2670,7 @@ static struct ssi_crypto_alg *cc_create_aead_alg(struct ssi_alg_template *tmpl,
 	snprintf(alg->base.cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
 		 tmpl->driver_name);
 	alg->base.cra_module = THIS_MODULE;
-	alg->base.cra_priority = SSI_CRA_PRIO;
+	alg->base.cra_priority = CC_CRA_PRIO;
 
 	alg->base.cra_ctxsize = sizeof(struct cc_aead_ctx);
 	alg->base.cra_flags = CRYPTO_ALG_ASYNC | CRYPTO_ALG_KERN_DRIVER_ONLY |
diff --git a/drivers/staging/ccree/ssi_aead.h b/drivers/staging/ccree/ssi_aead.h
index 5172241..e41040e 100644
--- a/drivers/staging/ccree/ssi_aead.h
+++ b/drivers/staging/ccree/ssi_aead.h
@@ -18,8 +18,8 @@
  * ARM CryptoCell AEAD Crypto API
  */
 
-#ifndef __SSI_AEAD_H__
-#define __SSI_AEAD_H__
+#ifndef __CC_AEAD_H__
+#define __CC_AEAD_H__
 
 #include <linux/kernel.h>
 #include <crypto/algapi.h>
@@ -119,4 +119,4 @@ struct aead_req_ctx {
 int cc_aead_alloc(struct ssi_drvdata *drvdata);
 int cc_aead_free(struct ssi_drvdata *drvdata);
 
-#endif /*__SSI_AEAD_H__*/
+#endif /*__CC_AEAD_H__*/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index c28ce7c..ee5c086 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -61,11 +61,11 @@ struct buffer_array {
 static inline char *cc_dma_buf_type(enum ssi_req_dma_buf_type type)
 {
 	switch (type) {
-	case SSI_DMA_BUF_NULL:
+	case CC_DMA_BUF_NULL:
 		return "BUF_NULL";
-	case SSI_DMA_BUF_DLLI:
+	case CC_DMA_BUF_DLLI:
 		return "BUF_DLLI";
-	case SSI_DMA_BUF_MLLI:
+	case CC_DMA_BUF_MLLI:
 		return "BUF_MLLI";
 	default:
 		return "BUF_INVALID";
@@ -163,7 +163,7 @@ void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
 
 	nents = cc_get_sgl_nents(dev, sg, end, &lbytes, NULL);
 	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip + 1), to_skip,
-		       (direct == SSI_SG_TO_BUF));
+		       (direct == CC_SG_TO_BUF));
 }
 
 static int cc_render_buff_to_mlli(struct device *dev, dma_addr_t buff_dma,
@@ -457,7 +457,7 @@ static int ssi_ahash_handle_curr_buf(struct device *dev,
 		&sg_dma_address(areq_ctx->buff_sg), sg_page(areq_ctx->buff_sg),
 		sg_virt(areq_ctx->buff_sg), areq_ctx->buff_sg->offset,
 		areq_ctx->buff_sg->length);
-	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+	areq_ctx->data_dma_buf_type = CC_DMA_BUF_DLLI;
 	areq_ctx->curr_sg = areq_ctx->buff_sg;
 	areq_ctx->in_nents = 0;
 	/* prepare for case of MLLI */
@@ -481,7 +481,7 @@ void cc_unmap_blkcipher_request(struct device *dev, void *ctx,
 				 DMA_TO_DEVICE);
 	}
 	/* Release pool */
-	if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+	if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI) {
 		dma_pool_free(req_ctx->mlli_params.curr_pool,
 			      req_ctx->mlli_params.mlli_virt_addr,
 			      req_ctx->mlli_params.mlli_dma_addr);
@@ -510,7 +510,7 @@ int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
 	int rc = 0;
 	u32 mapped_nents = 0;
 
-	req_ctx->dma_buf_type = SSI_DMA_BUF_DLLI;
+	req_ctx->dma_buf_type = CC_DMA_BUF_DLLI;
 	mlli_params->curr_pool = NULL;
 	sg_data.num_of_buffers = 0;
 
@@ -541,11 +541,11 @@ int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
 		goto ablkcipher_exit;
 	}
 	if (mapped_nents > 1)
-		req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+		req_ctx->dma_buf_type = CC_DMA_BUF_MLLI;
 
 	if (src == dst) {
 		/* Handle inplace operation */
-		if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+		if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI) {
 			req_ctx->out_nents = 0;
 			cc_add_sg_entry(dev, &sg_data, req_ctx->in_nents, src,
 					nbytes, 0, true,
@@ -560,9 +560,9 @@ int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
 			goto ablkcipher_exit;
 		}
 		if (mapped_nents > 1)
-			req_ctx->dma_buf_type = SSI_DMA_BUF_MLLI;
+			req_ctx->dma_buf_type = CC_DMA_BUF_MLLI;
 
-		if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+		if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI) {
 			cc_add_sg_entry(dev, &sg_data, req_ctx->in_nents, src,
 					nbytes, 0, true,
 					&req_ctx->in_mlli_nents);
@@ -572,7 +572,7 @@ int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
 		}
 	}
 
-	if (req_ctx->dma_buf_type == SSI_DMA_BUF_MLLI) {
+	if (req_ctx->dma_buf_type == CC_DMA_BUF_MLLI) {
 		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
 		rc = cc_generate_mlli(dev, &sg_data, mlli_params);
 		if (rc)
@@ -679,7 +679,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 		 * data memory overriding that caused by cache coherence
 		 * problem.
 		 */
-		cc_copy_mac(dev, req, SSI_SG_FROM_BUF);
+		cc_copy_mac(dev, req, CC_SG_FROM_BUF);
 	}
 }
 
@@ -771,7 +771,7 @@ static int cc_aead_chain_iv(struct ssi_drvdata *drvdata,
 				    (areq_ctx->gen_ctx.iv_dma_addr + iv_ofs),
 				    iv_size_to_authenc, is_last,
 				    &areq_ctx->assoc.mlli_nents);
-		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+		areq_ctx->assoc_buff_type = CC_DMA_BUF_MLLI;
 	}
 
 chain_iv_exit:
@@ -801,7 +801,7 @@ static int cc_aead_chain_assoc(struct ssi_drvdata *drvdata,
 	}
 
 	if (req->assoclen == 0) {
-		areq_ctx->assoc_buff_type = SSI_DMA_BUF_NULL;
+		areq_ctx->assoc_buff_type = CC_DMA_BUF_NULL;
 		areq_ctx->assoc.nents = 0;
 		areq_ctx->assoc.mlli_nents = 0;
 		dev_dbg(dev, "Chain assoc of length 0: buff_type=%s nents=%u\n",
@@ -851,18 +851,18 @@ static int cc_aead_chain_assoc(struct ssi_drvdata *drvdata,
 	}
 
 	if (mapped_nents == 1 && areq_ctx->ccm_hdr_size == ccm_header_size_null)
-		areq_ctx->assoc_buff_type = SSI_DMA_BUF_DLLI;
+		areq_ctx->assoc_buff_type = CC_DMA_BUF_DLLI;
 	else
-		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+		areq_ctx->assoc_buff_type = CC_DMA_BUF_MLLI;
 
-	if (do_chain || areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) {
+	if (do_chain || areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI) {
 		dev_dbg(dev, "Chain assoc: buff_type=%s nents=%u\n",
 			cc_dma_buf_type(areq_ctx->assoc_buff_type),
 			areq_ctx->assoc.nents);
 		cc_add_sg_entry(dev, sg_data, areq_ctx->assoc.nents, req->src,
 				req->assoclen, 0, is_last,
 				&areq_ctx->assoc.mlli_nents);
-		areq_ctx->assoc_buff_type = SSI_DMA_BUF_MLLI;
+		areq_ctx->assoc_buff_type = CC_DMA_BUF_MLLI;
 	}
 
 chain_assoc_exit:
@@ -939,7 +939,7 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 				 * we must neglect this code.
 				 */
 				if (!drvdata->coherent)
-					cc_copy_mac(dev, req, SSI_SG_TO_BUF);
+					cc_copy_mac(dev, req, CC_SG_TO_BUF);
 
 				areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
 			} else {
@@ -981,7 +981,7 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 		 * MAC verification upon request completion
 		 */
 		if (areq_ctx->is_icv_fragmented) {
-			cc_copy_mac(dev, req, SSI_SG_TO_BUF);
+			cc_copy_mac(dev, req, CC_SG_TO_BUF);
 			areq_ctx->icv_virt_addr = areq_ctx->backup_mac;
 
 		} else { /* Contig. ICV */
@@ -1136,12 +1136,12 @@ static int cc_aead_chain_data(struct ssi_drvdata *drvdata,
 	if (src_mapped_nents > 1 ||
 	    dst_mapped_nents  > 1 ||
 	    do_chain) {
-		areq_ctx->data_buff_type = SSI_DMA_BUF_MLLI;
+		areq_ctx->data_buff_type = CC_DMA_BUF_MLLI;
 		rc = cc_prepare_aead_data_mlli(drvdata, req, sg_data,
 					       &src_last_bytes,
 					       &dst_last_bytes, is_last_table);
 	} else {
-		areq_ctx->data_buff_type = SSI_DMA_BUF_DLLI;
+		areq_ctx->data_buff_type = CC_DMA_BUF_DLLI;
 		cc_prepare_aead_data_dlli(req, &src_last_bytes,
 					  &dst_last_bytes);
 	}
@@ -1156,13 +1156,13 @@ static void cc_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	u32 curr_mlli_size = 0;
 
-	if (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI) {
+	if (areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI) {
 		areq_ctx->assoc.sram_addr = drvdata->mlli_sram_addr;
 		curr_mlli_size = areq_ctx->assoc.mlli_nents *
 						LLI_ENTRY_BYTE_SIZE;
 	}
 
-	if (areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI) {
+	if (areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) {
 		/*Inplace case dst nents equal to src nents*/
 		if (req->src == req->dst) {
 			areq_ctx->dst.mlli_nents = areq_ctx->src.mlli_nents;
@@ -1226,7 +1226,7 @@ int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 	if (drvdata->coherent &&
 	    areq_ctx->gen_ctx.op_type == DRV_CRYPTO_DIRECTION_DECRYPT &&
 	    req->src == req->dst)
-		cc_copy_mac(dev, req, SSI_SG_TO_BUF);
+		cc_copy_mac(dev, req, CC_SG_TO_BUF);
 
 	/* cacluate the size for cipher remove ICV in decrypt*/
 	areq_ctx->cryptlen = (areq_ctx->gen_ctx.op_type ==
@@ -1380,8 +1380,8 @@ int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 	/* Mlli support -start building the MLLI according to the above
 	 * results
 	 */
-	if (areq_ctx->assoc_buff_type == SSI_DMA_BUF_MLLI ||
-	    areq_ctx->data_buff_type == SSI_DMA_BUF_MLLI) {
+	if (areq_ctx->assoc_buff_type == CC_DMA_BUF_MLLI ||
+	    areq_ctx->data_buff_type == CC_DMA_BUF_MLLI) {
 		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
 		rc = cc_generate_mlli(dev, &sg_data, mlli_params);
 		if (rc)
@@ -1419,7 +1419,7 @@ int cc_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx,
 	dev_dbg(dev, "final params : curr_buff=%pK curr_buff_cnt=0x%X nbytes = 0x%X src=%pK curr_index=%u\n",
 		curr_buff, *curr_buff_cnt, nbytes, src, areq_ctx->buff_index);
 	/* Init the type of the dma buffer */
-	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+	areq_ctx->data_dma_buf_type = CC_DMA_BUF_NULL;
 	mlli_params->curr_pool = NULL;
 	sg_data.num_of_buffers = 0;
 	areq_ctx->in_nents = 0;
@@ -1445,19 +1445,19 @@ int cc_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx,
 			goto unmap_curr_buff;
 		}
 		if (src && mapped_nents == 1 &&
-		    areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) {
+		    areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) {
 			memcpy(areq_ctx->buff_sg, src,
 			       sizeof(struct scatterlist));
 			areq_ctx->buff_sg->length = nbytes;
 			areq_ctx->curr_sg = areq_ctx->buff_sg;
-			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+			areq_ctx->data_dma_buf_type = CC_DMA_BUF_DLLI;
 		} else {
-			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+			areq_ctx->data_dma_buf_type = CC_DMA_BUF_MLLI;
 		}
 	}
 
 	/*build mlli */
-	if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI) {
+	if (areq_ctx->data_dma_buf_type == CC_DMA_BUF_MLLI) {
 		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
 		/* add the src data to the sg_data */
 		cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src, nbytes,
@@ -1507,7 +1507,7 @@ int cc_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx,
 	dev_dbg(dev, " update params : curr_buff=%pK curr_buff_cnt=0x%X nbytes=0x%X src=%pK curr_index=%u\n",
 		curr_buff, *curr_buff_cnt, nbytes, src, areq_ctx->buff_index);
 	/* Init the type of the dma buffer */
-	areq_ctx->data_dma_buf_type = SSI_DMA_BUF_NULL;
+	areq_ctx->data_dma_buf_type = CC_DMA_BUF_NULL;
 	mlli_params->curr_pool = NULL;
 	areq_ctx->curr_sg = NULL;
 	sg_data.num_of_buffers = 0;
@@ -1539,7 +1539,7 @@ int cc_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx,
 			*next_buff_cnt);
 		cc_copy_sg_portion(dev, next_buff, src,
 				   (update_data_len - *curr_buff_cnt),
-				   nbytes, SSI_SG_TO_BUF);
+				   nbytes, CC_SG_TO_BUF);
 		/* change the buffer index for next operation */
 		swap_index = 1;
 	}
@@ -1561,19 +1561,19 @@ int cc_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx,
 			goto unmap_curr_buff;
 		}
 		if (mapped_nents == 1 &&
-		    areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) {
+		    areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) {
 			/* only one entry in the SG and no previous data */
 			memcpy(areq_ctx->buff_sg, src,
 			       sizeof(struct scatterlist));
 			areq_ctx->buff_sg->length = update_data_len;
-			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_DLLI;
+			areq_ctx->data_dma_buf_type = CC_DMA_BUF_DLLI;
 			areq_ctx->curr_sg = areq_ctx->buff_sg;
 		} else {
-			areq_ctx->data_dma_buf_type = SSI_DMA_BUF_MLLI;
+			areq_ctx->data_dma_buf_type = CC_DMA_BUF_MLLI;
 		}
 	}
 
-	if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_MLLI) {
+	if (areq_ctx->data_dma_buf_type == CC_DMA_BUF_MLLI) {
 		mlli_params->curr_pool = buff_mgr->mlli_buffs_pool;
 		/* add the src data to the sg_data */
 		cc_add_sg_entry(dev, &sg_data, areq_ctx->in_nents, src,
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index f6411de..77744a6 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -18,8 +18,8 @@
  * Buffer Manager
  */
 
-#ifndef __SSI_BUFFER_MGR_H__
-#define __SSI_BUFFER_MGR_H__
+#ifndef __CC_BUFFER_MGR_H__
+#define __CC_BUFFER_MGR_H__
 
 #include <crypto/algapi.h>
 
@@ -27,14 +27,14 @@
 #include "ssi_driver.h"
 
 enum ssi_req_dma_buf_type {
-	SSI_DMA_BUF_NULL = 0,
-	SSI_DMA_BUF_DLLI,
-	SSI_DMA_BUF_MLLI
+	CC_DMA_BUF_NULL = 0,
+	CC_DMA_BUF_DLLI,
+	CC_DMA_BUF_MLLI
 };
 
 enum ssi_sg_cpy_direct {
-	SSI_SG_TO_BUF = 0,
-	SSI_SG_FROM_BUF = 1
+	CC_SG_TO_BUF = 0,
+	CC_SG_FROM_BUF = 1
 };
 
 struct ssi_mlli {
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index 299e73a..c437a79 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -541,7 +541,7 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
 		return;
 	}
 	/* Process */
-	if (req_ctx->dma_buf_type == SSI_DMA_BUF_DLLI) {
+	if (req_ctx->dma_buf_type == CC_DMA_BUF_DLLI) {
 		dev_dbg(dev, " data params addr %pad length 0x%X\n",
 			&sg_dma_address(src), nbytes);
 		dev_dbg(dev, " data params addr %pad length 0x%X\n",
@@ -1091,7 +1091,7 @@ struct ssi_crypto_alg *cc_cipher_create_alg(struct ssi_alg_template *template,
 	snprintf(alg->cra_driver_name, CRYPTO_MAX_ALG_NAME, "%s",
 		 template->driver_name);
 	alg->cra_module = THIS_MODULE;
-	alg->cra_priority = SSI_CRA_PRIO;
+	alg->cra_priority = CC_CRA_PRIO;
 	alg->cra_blocksize = template->blocksize;
 	alg->cra_alignmask = 0;
 	alg->cra_ctxsize = sizeof(struct cc_cipher_ctx);
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index ef6d6e9..977b543 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -18,8 +18,8 @@
  * ARM CryptoCell Cipher Crypto API
  */
 
-#ifndef __SSI_CIPHER_H__
-#define __SSI_CIPHER_H__
+#ifndef __CC_CIPHER_H__
+#define __CC_CIPHER_H__
 
 #include <linux/kernel.h>
 #include <crypto/algapi.h>
@@ -84,4 +84,4 @@ static inline bool cc_is_hw_key(struct crypto_tfm *tfm)
 
 #endif /* CRYPTO_TFM_REQ_HW_KEY */
 
-#endif /*__SSI_CIPHER_H__*/
+#endif /*__CC_CIPHER_H__*/
diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h
index ea74845..e97bc68 100644
--- a/drivers/staging/ccree/ssi_config.h
+++ b/drivers/staging/ccree/ssi_config.h
@@ -18,8 +18,8 @@
  * Definitions for ARM CryptoCell Linux Crypto Driver
  */
 
-#ifndef __SSI_CONFIG_H__
-#define __SSI_CONFIG_H__
+#ifndef __CC_CONFIG_H__
+#define __CC_CONFIG_H__
 
 #include <linux/version.h>
 
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index f4164eb..dce12e1 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -110,27 +110,27 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
 
 	drvdata->irq = irr;
 	/* Completion interrupt - most probable */
-	if (irr & SSI_COMP_IRQ_MASK) {
+	if (irr & CC_COMP_IRQ_MASK) {
 		/* Mask AXI completion interrupt - will be unmasked in
 		 * Deferred service handler
 		 */
-		cc_iowrite(drvdata, CC_REG(HOST_IMR), imr | SSI_COMP_IRQ_MASK);
-		irr &= ~SSI_COMP_IRQ_MASK;
+		cc_iowrite(drvdata, CC_REG(HOST_IMR), imr | CC_COMP_IRQ_MASK);
+		irr &= ~CC_COMP_IRQ_MASK;
 		complete_request(drvdata);
 	}
 #ifdef CC_SUPPORT_FIPS
 	/* TEE FIPS interrupt */
-	if (irr & SSI_GPR0_IRQ_MASK) {
+	if (irr & CC_GPR0_IRQ_MASK) {
 		/* Mask interrupt - will be unmasked in Deferred service
 		 * handler
 		 */
-		cc_iowrite(drvdata, CC_REG(HOST_IMR), imr | SSI_GPR0_IRQ_MASK);
-		irr &= ~SSI_GPR0_IRQ_MASK;
+		cc_iowrite(drvdata, CC_REG(HOST_IMR), imr | CC_GPR0_IRQ_MASK);
+		irr &= ~CC_GPR0_IRQ_MASK;
 		fips_handler(drvdata);
 	}
 #endif
 	/* AXI error interrupt */
-	if (irr & SSI_AXI_ERR_IRQ_MASK) {
+	if (irr & CC_AXI_ERR_IRQ_MASK) {
 		u32 axi_err;
 
 		/* Read the AXI error ID */
@@ -138,7 +138,7 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
 		dev_dbg(dev, "AXI completion error: axim_mon_err=0x%08X\n",
 			axi_err);
 
-		irr &= ~SSI_AXI_ERR_IRQ_MASK;
+		irr &= ~CC_AXI_ERR_IRQ_MASK;
 	}
 
 	if (irr) {
@@ -157,7 +157,7 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
 
 	/* Unmask all AXI interrupt sources AXI_CFG1 register */
 	val = cc_ioread(drvdata, CC_REG(AXIM_CFG));
-	cc_iowrite(drvdata, CC_REG(AXIM_CFG), val & ~SSI_AXI_IRQ_MASK);
+	cc_iowrite(drvdata, CC_REG(AXIM_CFG), val & ~CC_AXI_IRQ_MASK);
 	dev_dbg(dev, "AXIM_CFG=0x%08X\n",
 		cc_ioread(drvdata, CC_REG(AXIM_CFG)));
 
@@ -167,8 +167,8 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
 	cc_iowrite(drvdata, CC_REG(HOST_ICR), val);
 
 	/* Unmask relevant interrupt cause */
-	val = (unsigned int)(~(SSI_COMP_IRQ_MASK | SSI_AXI_ERR_IRQ_MASK |
-			       SSI_GPR0_IRQ_MASK));
+	val = (unsigned int)(~(CC_COMP_IRQ_MASK | CC_AXI_ERR_IRQ_MASK |
+			       CC_GPR0_IRQ_MASK));
 	cc_iowrite(drvdata, CC_REG(HOST_IMR), val);
 
 #ifdef DX_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET
@@ -289,7 +289,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 
 	/* Display HW versions */
 	dev_info(dev, "ARM CryptoCell %s Driver: HW version 0x%08X, Driver version %s\n",
-		 SSI_DEV_NAME_STR,
+		 CC_DEV_NAME_STR,
 		 cc_ioread(new_drvdata, CC_REG(HOST_VERSION)),
 		 DRV_MODULE_VERSION);
 
@@ -309,7 +309,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 
 	rc = ssi_fips_init(new_drvdata);
 	if (rc) {
-		dev_err(dev, "SSI_FIPS_INIT failed 0x%x\n", rc);
+		dev_err(dev, "CC_FIPS_INIT failed 0x%x\n", rc);
 		goto post_sysfs_err;
 	}
 	rc = ssi_sram_mgr_init(new_drvdata);
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index c9fdb89..3d4513b 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -18,8 +18,8 @@
  * ARM CryptoCell Linux Crypto Driver
  */
 
-#ifndef __SSI_DRIVER_H__
-#define __SSI_DRIVER_H__
+#ifndef __CC_DRIVER_H__
+#define __CC_DRIVER_H__
 
 #include "ssi_config.h"
 #ifdef COMP_IN_WQ
@@ -51,17 +51,17 @@
 
 #define DRV_MODULE_VERSION "3.0"
 
-#define SSI_DEV_NAME_STR "cc715ree"
+#define CC_DEV_NAME_STR "cc715ree"
 #define CC_COHERENT_CACHE_PARAMS 0xEEE
 
-#define SSI_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
+#define CC_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
 			  (1 << DX_AXIM_CFG_RRESPMASK_BIT_SHIFT) | \
 			  (1 << DX_AXIM_CFG_INFLTMASK_BIT_SHIFT) | \
 			  (1 << DX_AXIM_CFG_COMPMASK_BIT_SHIFT))
 
-#define SSI_AXI_ERR_IRQ_MASK BIT(DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
+#define CC_AXI_ERR_IRQ_MASK BIT(DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
 
-#define SSI_COMP_IRQ_MASK BIT(DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
+#define CC_COMP_IRQ_MASK BIT(DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
 
 #define AXIM_MON_COMP_VALUE GENMASK(DX_AXIM_MON_COMP_VALUE_BIT_SIZE + \
 				    DX_AXIM_MON_COMP_VALUE_BIT_SHIFT, \
@@ -71,9 +71,9 @@
 #define CC_REG(reg_name) DX_ ## reg_name ## _REG_OFFSET
 
 /* TEE FIPS status interrupt */
-#define SSI_GPR0_IRQ_MASK BIT(DX_HOST_IRR_GPR0_BIT_SHIFT)
+#define CC_GPR0_IRQ_MASK BIT(DX_HOST_IRR_GPR0_BIT_SHIFT)
 
-#define SSI_CRA_PRIO 3000
+#define CC_CRA_PRIO 3000
 
 #define MIN_HW_QUEUE_SIZE 50 /* Minimum size required for proper function */
 
@@ -88,11 +88,11 @@
  * field in the HW descriptor. The DMA engine +8 that value.
  */
 
-#define SSI_MAX_IVGEN_DMA_ADDRESSES	3
+#define CC_MAX_IVGEN_DMA_ADDRESSES	3
 struct ssi_crypto_req {
 	void (*user_cb)(struct device *dev, void *req);
 	void *user_arg;
-	dma_addr_t ivgen_dma_addr[SSI_MAX_IVGEN_DMA_ADDRESSES];
+	dma_addr_t ivgen_dma_addr[CC_MAX_IVGEN_DMA_ADDRESSES];
 	/* For the first 'ivgen_dma_addr_len' addresses of this array,
 	 * generated IV would be placed in it by send_request().
 	 * Same generated IV for all addresses!
@@ -192,5 +192,5 @@ static inline u32 cc_ioread(struct ssi_drvdata *drvdata, u32 reg)
 	return ioread32(drvdata->cc_base + reg);
 }
 
-#endif /*__SSI_DRIVER_H__*/
+#endif /*__CC_DRIVER_H__*/
 
diff --git a/drivers/staging/ccree/ssi_fips.c b/drivers/staging/ccree/ssi_fips.c
index 4aea99f..273b414 100644
--- a/drivers/staging/ccree/ssi_fips.c
+++ b/drivers/staging/ccree/ssi_fips.c
@@ -88,7 +88,7 @@ static void fips_dsr(unsigned long devarg)
 	struct device *dev = drvdata_to_dev(drvdata);
 	u32 irq, state, val;
 
-	irq = (drvdata->irq & (SSI_GPR0_IRQ_MASK));
+	irq = (drvdata->irq & (CC_GPR0_IRQ_MASK));
 
 	if (irq) {
 		state = cc_ioread(drvdata, CC_REG(GPR_HOST));
diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
index 8cb1893..1889c74 100644
--- a/drivers/staging/ccree/ssi_fips.h
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -14,8 +14,8 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __SSI_FIPS_H__
-#define __SSI_FIPS_H__
+#ifndef __CC_FIPS_H__
+#define __CC_FIPS_H__
 
 #ifdef CONFIG_CRYPTO_FIPS
 
@@ -46,5 +46,5 @@ static inline void fips_handler(struct ssi_drvdata *drvdata) {}
 
 #endif /* CONFIG_CRYPTO_FIPS */
 
-#endif  /*__SSI_FIPS_H__*/
+#endif  /*__CC_FIPS_H__*/
 
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 10c73ef..7458c24 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -1988,7 +1988,7 @@ static struct cc_hash_alg *cc_alloc_hash_alg(struct cc_hash_template *template,
 	}
 	alg->cra_module = THIS_MODULE;
 	alg->cra_ctxsize = sizeof(struct cc_hash_ctx);
-	alg->cra_priority = SSI_CRA_PRIO;
+	alg->cra_priority = CC_CRA_PRIO;
 	alg->cra_blocksize = template->blocksize;
 	alg->cra_alignmask = 0;
 	alg->cra_exit = cc_cra_exit;
@@ -2345,7 +2345,7 @@ static void cc_set_desc(struct ahash_req_ctx *areq_ctx,
 	unsigned int idx = *seq_size;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
-	if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_DLLI) {
+	if (areq_ctx->data_dma_buf_type == CC_DMA_BUF_DLLI) {
 		hw_desc_init(&desc[idx]);
 		set_din_type(&desc[idx], DMA_DLLI,
 			     sg_dma_address(areq_ctx->curr_sg),
@@ -2353,7 +2353,7 @@ static void cc_set_desc(struct ahash_req_ctx *areq_ctx,
 		set_flow_mode(&desc[idx], flow_mode);
 		idx++;
 	} else {
-		if (areq_ctx->data_dma_buf_type == SSI_DMA_BUF_NULL) {
+		if (areq_ctx->data_dma_buf_type == CC_DMA_BUF_NULL) {
 			dev_dbg(dev, " NULL mode\n");
 			/* nothing to build */
 			return;
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
index ade4119..19fc4cf 100644
--- a/drivers/staging/ccree/ssi_hash.h
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -18,8 +18,8 @@
  * ARM CryptoCell Hash Crypto API
  */
 
-#ifndef __SSI_HASH_H__
-#define __SSI_HASH_H__
+#ifndef __CC_HASH_H__
+#define __CC_HASH_H__
 
 #include "ssi_buffer_mgr.h"
 
@@ -103,5 +103,5 @@ cc_digest_len_addr(void *drvdata, u32 mode);
  */
 ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode);
 
-#endif /*__SSI_HASH_H__*/
+#endif /*__CC_HASH_H__*/
 
diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
index c499361..d362bf6 100644
--- a/drivers/staging/ccree/ssi_ivgen.c
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -62,7 +62,7 @@ static int cc_gen_iv_pool(struct cc_ivgen_ctx *ivgen_ctx,
 {
 	unsigned int idx = *iv_seq_len;
 
-	if ((*iv_seq_len + CC_IVPOOL_GEN_SEQ_LEN) > SSI_IVPOOL_SEQ_LEN) {
+	if ((*iv_seq_len + CC_IVPOOL_GEN_SEQ_LEN) > CC_IVPOOL_SEQ_LEN) {
 		/* The sequence will be longer than allowed */
 		return -EINVAL;
 	}
@@ -119,7 +119,7 @@ static int cc_gen_iv_pool(struct cc_ivgen_ctx *ivgen_ctx,
 int cc_init_iv_sram(struct ssi_drvdata *drvdata)
 {
 	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
-	struct cc_hw_desc iv_seq[SSI_IVPOOL_SEQ_LEN];
+	struct cc_hw_desc iv_seq[CC_IVPOOL_SEQ_LEN];
 	unsigned int iv_seq_len = 0;
 	int rc;
 
@@ -247,7 +247,7 @@ int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
 	    iv_out_size != CTR_RFC3686_IV_SIZE) {
 		return -EINVAL;
 	}
-	if ((iv_out_dma_len + 1) > SSI_IVPOOL_SEQ_LEN) {
+	if ((iv_out_dma_len + 1) > CC_IVPOOL_SEQ_LEN) {
 		/* The sequence will be longer than allowed */
 		return -EINVAL;
 	}
@@ -255,7 +255,7 @@ int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
 	/* check that number of generated IV is limited to max dma address
 	 * iv buffer size
 	 */
-	if (iv_out_dma_len > SSI_MAX_IVGEN_DMA_ADDRESSES) {
+	if (iv_out_dma_len > CC_MAX_IVGEN_DMA_ADDRESSES) {
 		/* The sequence will be longer than allowed */
 		return -EINVAL;
 	}
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
index bbd0245..9890f62 100644
--- a/drivers/staging/ccree/ssi_ivgen.h
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -14,12 +14,12 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __SSI_IVGEN_H__
-#define __SSI_IVGEN_H__
+#ifndef __CC_IVGEN_H__
+#define __CC_IVGEN_H__
 
 #include "cc_hw_queue_defs.h"
 
-#define SSI_IVPOOL_SEQ_LEN 8
+#define CC_IVPOOL_SEQ_LEN 8
 
 /*!
  * Allocates iv-pool and maps resources.
@@ -65,4 +65,4 @@ int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
 	      unsigned int iv_out_dma_len, unsigned int iv_out_size,
 	      struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len);
 
-#endif /*__SSI_IVGEN_H__*/
+#endif /*__CC_IVGEN_H__*/
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index f0e3baf..e387d46 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -123,7 +123,7 @@ int cc_pm_init(struct ssi_drvdata *drvdata)
 	struct device *dev = drvdata_to_dev(drvdata);
 
 	/* must be before the enabling to avoid resdundent suspending */
-	pm_runtime_set_autosuspend_delay(dev, SSI_SUSPEND_TIMEOUT);
+	pm_runtime_set_autosuspend_delay(dev, CC_SUSPEND_TIMEOUT);
 	pm_runtime_use_autosuspend(dev);
 	/* activate the PM module */
 	rc = pm_runtime_set_active(dev);
diff --git a/drivers/staging/ccree/ssi_pm.h b/drivers/staging/ccree/ssi_pm.h
index 50bcf03..940ef2d 100644
--- a/drivers/staging/ccree/ssi_pm.h
+++ b/drivers/staging/ccree/ssi_pm.h
@@ -17,13 +17,13 @@
 /* \file ssi_pm.h
  */
 
-#ifndef __SSI_POWER_MGR_H__
-#define __SSI_POWER_MGR_H__
+#ifndef __CC_POWER_MGR_H__
+#define __CC_POWER_MGR_H__
 
 #include "ssi_config.h"
 #include "ssi_driver.h"
 
-#define SSI_SUSPEND_TIMEOUT 3000
+#define CC_SUSPEND_TIMEOUT 3000
 
 int cc_pm_init(struct ssi_drvdata *drvdata);
 
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 3d25b72..436e035 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -31,7 +31,7 @@
 #include "ssi_ivgen.h"
 #include "ssi_pm.h"
 
-#define SSI_MAX_POLL_ITER	10
+#define CC_MAX_POLL_ITER	10
 
 struct cc_req_mgr_handle {
 	/* Request manager resources */
@@ -223,7 +223,7 @@ static int cc_queues_status(struct ssi_drvdata *drvdata,
 		return 0;
 
 	/* Wait for space in HW queue. Poll constant num of iterations. */
-	for (poll_queue = 0; poll_queue < SSI_MAX_POLL_ITER ; poll_queue++) {
+	for (poll_queue = 0; poll_queue < CC_MAX_POLL_ITER ; poll_queue++) {
 		req_mgr_h->q_free_slots =
 			cc_ioread(drvdata, CC_REG(DSCRPTR_QUEUE_CONTENT));
 		if (req_mgr_h->q_free_slots < req_mgr_h->min_free_hw_slots)
@@ -265,13 +265,13 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 	unsigned int used_sw_slots;
 	unsigned int iv_seq_len = 0;
 	unsigned int total_seq_len = len; /*initial sequence length*/
-	struct cc_hw_desc iv_seq[SSI_IVPOOL_SEQ_LEN];
+	struct cc_hw_desc iv_seq[CC_IVPOOL_SEQ_LEN];
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc;
 	unsigned int max_required_seq_len =
 		(total_seq_len +
 		 ((ssi_req->ivgen_dma_addr_len == 0) ? 0 :
-		  SSI_IVPOOL_SEQ_LEN) + (!is_dout ? 1 : 0));
+		  CC_IVPOOL_SEQ_LEN) + (!is_dout ? 1 : 0));
 
 #if defined(CONFIG_PM)
 	rc = cc_pm_get(dev);
@@ -541,13 +541,13 @@ static void comp_handler(unsigned long devarg)
 
 	u32 irq;
 
-	irq = (drvdata->irq & SSI_COMP_IRQ_MASK);
+	irq = (drvdata->irq & CC_COMP_IRQ_MASK);
 
-	if (irq & SSI_COMP_IRQ_MASK) {
+	if (irq & CC_COMP_IRQ_MASK) {
 		/* To avoid the interrupt from firing as we unmask it,
 		 * we clear it now
 		 */
-		cc_iowrite(drvdata, CC_REG(HOST_ICR), SSI_COMP_IRQ_MASK);
+		cc_iowrite(drvdata, CC_REG(HOST_ICR), CC_COMP_IRQ_MASK);
 
 		/* Avoid race with above clear: Test completion counter
 		 * once more
@@ -566,7 +566,7 @@ static void comp_handler(unsigned long devarg)
 			} while (request_mgr_handle->axi_completed > 0);
 
 			cc_iowrite(drvdata, CC_REG(HOST_ICR),
-				   SSI_COMP_IRQ_MASK);
+				   CC_COMP_IRQ_MASK);
 
 			request_mgr_handle->axi_completed +=
 					cc_axi_comp_count(drvdata);
diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
index 0704031..cbe5e3b 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.c
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -80,7 +80,7 @@ ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size)
 			size);
 		return NULL_SRAM_ADDR;
 	}
-	if (size > (SSI_CC_SRAM_SIZE - smgr_ctx->sram_free_offset)) {
+	if (size > (CC_CC_SRAM_SIZE - smgr_ctx->sram_free_offset)) {
 		dev_err(dev, "Not enough space to allocate %u B (at offset %llu)\n",
 			size, smgr_ctx->sram_free_offset);
 		return NULL_SRAM_ADDR;
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
index 76719ec..fdd325b 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.h
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -14,11 +14,11 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __SSI_SRAM_MGR_H__
-#define __SSI_SRAM_MGR_H__
+#ifndef __CC_SRAM_MGR_H__
+#define __CC_SRAM_MGR_H__
 
-#ifndef SSI_CC_SRAM_SIZE
-#define SSI_CC_SRAM_SIZE 4096
+#ifndef CC_CC_SRAM_SIZE
+#define CC_CC_SRAM_SIZE 4096
 #endif
 
 struct ssi_drvdata;
@@ -75,4 +75,4 @@ void cc_set_sram_desc(const u32 *src, ssi_sram_addr_t dst,
 		      unsigned int nelement, struct cc_hw_desc *seq,
 		      unsigned int *seq_len);
 
-#endif /*__SSI_SRAM_MGR_H__*/
+#endif /*__CC_SRAM_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sysfs.h b/drivers/staging/ccree/ssi_sysfs.h
index 5124528..de68bc6 100644
--- a/drivers/staging/ccree/ssi_sysfs.h
+++ b/drivers/staging/ccree/ssi_sysfs.h
@@ -18,8 +18,8 @@
  * ARM CryptoCell sysfs APIs
  */
 
-#ifndef __SSI_SYSFS_H__
-#define __SSI_SYSFS_H__
+#ifndef __CC_SYSFS_H__
+#define __CC_SYSFS_H__
 
 #include <asm/timex.h>
 
@@ -29,4 +29,4 @@ struct ssi_drvdata;
 int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata);
 void ssi_sysfs_fini(void);
 
-#endif /*__SSI_SYSFS_H__*/
+#endif /*__CC_SYSFS_H__*/
-- 
2.7.4


* [PATCH 19/24] staging: ccree: rename all DX to CC
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Unify naming convention by renaming all DX macros to CC.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
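[Note for reviewers: the rename in this patch is purely mechanical. The sketch below is illustrative only — the file name and contents are made up, and the actual series may have been generated with a different tool — but it shows one way a DX_ to CC_ macro-prefix rewrite can be scripted and checked.]

```shell
# Illustrative sketch only: rewrite the DX_ identifier prefix to CC_
# in a throwaway header. Uses GNU sed; \b anchors the match at an
# identifier boundary so only the DX_ prefix itself is rewritten.
tmp=$(mktemp -d)
cat > "$tmp/example.h" <<'EOF'
#define DX_HOST_IRR_REG_OFFSET 0xA00
#define CC_REG(reg_name) DX_ ## reg_name ## _REG_OFFSET
EOF
sed -i 's/\bDX_/CC_/g' "$tmp/example.h"
# Show the renamed register-offset macro.
grep 'CC_HOST_IRR_REG_OFFSET' "$tmp/example.h"
rm -rf "$tmp"
```

Running a grep for any remaining `\bDX_` occurrences afterwards is a quick way to verify the rename left nothing behind.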
 drivers/staging/ccree/cc_hw_queue_defs.h |   4 +-
 drivers/staging/ccree/cc_lli_defs.h      |   2 +-
 drivers/staging/ccree/dx_crys_kernel.h   | 314 +++++++++++++++----------------
 drivers/staging/ccree/dx_host.h          | 262 +++++++++++++-------------
 drivers/staging/ccree/dx_reg_common.h    |  10 +-
 drivers/staging/ccree/ssi_config.h       |   8 +-
 drivers/staging/ccree/ssi_driver.c       |  18 +-
 drivers/staging/ccree/ssi_driver.h       |  26 +--
 drivers/staging/ccree/ssi_hash.c         |  18 +-
 drivers/staging/ccree/ssi_hash.h         |   2 +-
 drivers/staging/ccree/ssi_request_mgr.c  |   2 +-
 drivers/staging/ccree/ssi_sysfs.c        |  10 +-
 12 files changed, 338 insertions(+), 338 deletions(-)

diff --git a/drivers/staging/ccree/cc_hw_queue_defs.h b/drivers/staging/ccree/cc_hw_queue_defs.h
index 3ca548d..7c25a4f 100644
--- a/drivers/staging/ccree/cc_hw_queue_defs.h
+++ b/drivers/staging/ccree/cc_hw_queue_defs.h
@@ -31,11 +31,11 @@
 #define HW_QUEUE_SLOTS_MAX              15
 
 #define CC_REG_LOW(word, name)  \
-	(DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SHIFT)
+	(CC_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SHIFT)
 
 #define CC_REG_HIGH(word, name) \
 	(CC_REG_LOW(word, name) + \
-	 DX_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SIZE - 1)
+	 CC_DSCRPTR_QUEUE_WORD ## word ## _ ## name ## _BIT_SIZE - 1)
 
 #define CC_GENMASK(word, name) \
 	GENMASK(CC_REG_HIGH(word, name), CC_REG_LOW(word, name))
diff --git a/drivers/staging/ccree/cc_lli_defs.h b/drivers/staging/ccree/cc_lli_defs.h
index a9c417b..861634a 100644
--- a/drivers/staging/ccree/cc_lli_defs.h
+++ b/drivers/staging/ccree/cc_lli_defs.h
@@ -20,7 +20,7 @@
 #include <linux/types.h>
 
 /* Max DLLI size
- *  AKA DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE
+ *  AKA CC_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE
  */
 #define DLLI_SIZE_BIT_SIZE	0x18
 
diff --git a/drivers/staging/ccree/dx_crys_kernel.h b/drivers/staging/ccree/dx_crys_kernel.h
index 2196030..30719f4 100644
--- a/drivers/staging/ccree/dx_crys_kernel.h
+++ b/drivers/staging/ccree/dx_crys_kernel.h
@@ -14,167 +14,167 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __DX_CRYS_KERNEL_H__
-#define __DX_CRYS_KERNEL_H__
+#ifndef __CC_CRYS_KERNEL_H__
+#define __CC_CRYS_KERNEL_H__
 
 // --------------------------------------
 // BLOCK: DSCRPTR
 // --------------------------------------
-#define DX_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET	0xE00UL
-#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE	0x6UL
-#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT	0x6UL
-#define DX_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_SW_RESET_REG_OFFSET	0xE40UL
-#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_SW_RESET_VALUE_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET	0xE60UL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE	0xAUL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT	0xAUL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE	0xCUL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT	0x16UL
-#define DX_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE	0x3UL
-#define DX_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET	0xE64UL
-#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_MEASURE_CNTR_REG_OFFSET	0xE68UL
-#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE	0x20UL
-#define DX_DSCRPTR_QUEUE_WORD0_REG_OFFSET	0xE80UL
-#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE	0x20UL
-#define DX_DSCRPTR_QUEUE_WORD1_REG_OFFSET	0xE84UL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE	0x18UL
-#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT	0x1AUL
-#define DX_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT	0x1BUL
-#define DX_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT	0x1CUL
-#define DX_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT	0x1DUL
-#define DX_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT	0x1EUL
-#define DX_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD2_REG_OFFSET	0xE88UL
-#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE	0x20UL
-#define DX_DSCRPTR_QUEUE_WORD3_REG_OFFSET	0xE8CUL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE	0x18UL
-#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT	0x1AUL
-#define DX_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT	0x1BUL
-#define DX_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT	0x1DUL
-#define DX_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT	0x1EUL
-#define DX_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT	0x1FUL
-#define DX_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_REG_OFFSET	0xE90UL
-#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE	0x6UL
-#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT	0x6UL
-#define DX_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT	0x7UL
-#define DX_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT	0x8UL
-#define DX_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT	0xAUL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE	0x4UL
-#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT	0xEUL
-#define DX_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT	0xFUL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT	0x11UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT	0x13UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT	0x14UL
-#define DX_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT	0x16UL
-#define DX_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE	0x2UL
-#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT	0x18UL
-#define DX_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE	0x4UL
-#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT	0x1CUL
-#define DX_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT	0x1DUL
-#define DX_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT	0x1EUL
-#define DX_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT	0x1FUL
-#define DX_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE	0x1UL
-#define DX_DSCRPTR_QUEUE_WORD5_REG_OFFSET	0xE94UL
-#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE	0x10UL
-#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT	0x10UL
-#define DX_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE	0x10UL
-#define DX_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET	0xE98UL
-#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE	0xAUL
-#define DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET	0xE9CUL
-#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT	0x0UL
-#define DX_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE	0xAUL
+#define CC_DSCRPTR_COMPLETION_COUNTER_REG_OFFSET	0xE00UL
+#define CC_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_COMPLETION_COUNTER_COMPLETION_COUNTER_BIT_SIZE	0x6UL
+#define CC_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SHIFT	0x6UL
+#define CC_DSCRPTR_COMPLETION_COUNTER_OVERFLOW_COUNTER_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_SW_RESET_REG_OFFSET	0xE40UL
+#define CC_DSCRPTR_SW_RESET_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_SW_RESET_VALUE_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_REG_OFFSET	0xE60UL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_NUM_OF_DSCRPTR_BIT_SIZE	0xAUL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SHIFT	0xAUL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_DSCRPTR_SRAM_SIZE_BIT_SIZE	0xCUL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SHIFT	0x16UL
+#define CC_DSCRPTR_QUEUE_SRAM_SIZE_SRAM_SIZE_BIT_SIZE	0x3UL
+#define CC_DSCRPTR_SINGLE_ADDR_EN_REG_OFFSET	0xE64UL
+#define CC_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_SINGLE_ADDR_EN_VALUE_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_MEASURE_CNTR_REG_OFFSET	0xE68UL
+#define CC_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_MEASURE_CNTR_VALUE_BIT_SIZE	0x20UL
+#define CC_DSCRPTR_QUEUE_WORD0_REG_OFFSET	0xE80UL
+#define CC_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD0_VALUE_BIT_SIZE	0x20UL
+#define CC_DSCRPTR_QUEUE_WORD1_REG_OFFSET	0xE84UL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_DMA_MODE_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SHIFT	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_SIZE_BIT_SIZE	0x18UL
+#define CC_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SHIFT	0x1AUL
+#define CC_DSCRPTR_QUEUE_WORD1_NS_BIT_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SHIFT	0x1BUL
+#define CC_DSCRPTR_QUEUE_WORD1_DIN_CONST_VALUE_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SHIFT	0x1CUL
+#define CC_DSCRPTR_QUEUE_WORD1_NOT_LAST_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SHIFT	0x1DUL
+#define CC_DSCRPTR_QUEUE_WORD1_LOCK_QUEUE_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SHIFT	0x1EUL
+#define CC_DSCRPTR_QUEUE_WORD1_NOT_USED_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD2_REG_OFFSET	0xE88UL
+#define CC_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD2_VALUE_BIT_SIZE	0x20UL
+#define CC_DSCRPTR_QUEUE_WORD3_REG_OFFSET	0xE8CUL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_DMA_MODE_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SHIFT	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_SIZE_BIT_SIZE	0x18UL
+#define CC_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SHIFT	0x1AUL
+#define CC_DSCRPTR_QUEUE_WORD3_NS_BIT_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SHIFT	0x1BUL
+#define CC_DSCRPTR_QUEUE_WORD3_DOUT_LAST_IND_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SHIFT	0x1DUL
+#define CC_DSCRPTR_QUEUE_WORD3_HASH_XOR_BIT_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SHIFT	0x1EUL
+#define CC_DSCRPTR_QUEUE_WORD3_NOT_USED_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SHIFT	0x1FUL
+#define CC_DSCRPTR_QUEUE_WORD3_QUEUE_LAST_IND_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_REG_OFFSET	0xE90UL
+#define CC_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD4_DATA_FLOW_MODE_BIT_SIZE	0x6UL
+#define CC_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SHIFT	0x6UL
+#define CC_DSCRPTR_QUEUE_WORD4_AES_SEL_N_HASH_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SHIFT	0x7UL
+#define CC_DSCRPTR_QUEUE_WORD4_AES_XOR_CRYPTO_KEY_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SHIFT	0x8UL
+#define CC_DSCRPTR_QUEUE_WORD4_ACK_NEEDED_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SHIFT	0xAUL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_MODE_BIT_SIZE	0x4UL
+#define CC_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SHIFT	0xEUL
+#define CC_DSCRPTR_QUEUE_WORD4_CMAC_SIZE0_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SHIFT	0xFUL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_DO_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SHIFT	0x11UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF0_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SHIFT	0x13UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF1_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SHIFT	0x14UL
+#define CC_DSCRPTR_QUEUE_WORD4_CIPHER_CONF2_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SHIFT	0x16UL
+#define CC_DSCRPTR_QUEUE_WORD4_KEY_SIZE_BIT_SIZE	0x2UL
+#define CC_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SHIFT	0x18UL
+#define CC_DSCRPTR_QUEUE_WORD4_SETUP_OPERATION_BIT_SIZE	0x4UL
+#define CC_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SHIFT	0x1CUL
+#define CC_DSCRPTR_QUEUE_WORD4_DIN_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SHIFT	0x1DUL
+#define CC_DSCRPTR_QUEUE_WORD4_DOUT_SRAM_ENDIANNESS_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SHIFT	0x1EUL
+#define CC_DSCRPTR_QUEUE_WORD4_WORD_SWAP_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SHIFT	0x1FUL
+#define CC_DSCRPTR_QUEUE_WORD4_BYTES_SWAP_BIT_SIZE	0x1UL
+#define CC_DSCRPTR_QUEUE_WORD5_REG_OFFSET	0xE94UL
+#define CC_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WORD5_DIN_ADDR_HIGH_BIT_SIZE	0x10UL
+#define CC_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SHIFT	0x10UL
+#define CC_DSCRPTR_QUEUE_WORD5_DOUT_ADDR_HIGH_BIT_SIZE	0x10UL
+#define CC_DSCRPTR_QUEUE_WATERMARK_REG_OFFSET	0xE98UL
+#define CC_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_WATERMARK_VALUE_BIT_SIZE	0xAUL
+#define CC_DSCRPTR_QUEUE_CONTENT_REG_OFFSET	0xE9CUL
+#define CC_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SHIFT	0x0UL
+#define CC_DSCRPTR_QUEUE_CONTENT_VALUE_BIT_SIZE	0xAUL
 // --------------------------------------
 // BLOCK: AXI_P
 // --------------------------------------
-#define DX_AXIM_MON_INFLIGHT_REG_OFFSET	0xB00UL
-#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT	0x0UL
-#define DX_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE	0x8UL
-#define DX_AXIM_MON_INFLIGHTLAST_REG_OFFSET	0xB40UL
-#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT	0x0UL
-#define DX_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE	0x8UL
-#define DX_AXIM_MON_COMP_REG_OFFSET	0xB80UL
-#define DX_AXIM_MON_COMP_VALUE_BIT_SHIFT	0x0UL
-#define DX_AXIM_MON_COMP_VALUE_BIT_SIZE	0x10UL
-#define DX_AXIM_MON_ERR_REG_OFFSET	0xBC4UL
-#define DX_AXIM_MON_ERR_BRESP_BIT_SHIFT	0x0UL
-#define DX_AXIM_MON_ERR_BRESP_BIT_SIZE	0x2UL
-#define DX_AXIM_MON_ERR_BID_BIT_SHIFT	0x2UL
-#define DX_AXIM_MON_ERR_BID_BIT_SIZE	0x4UL
-#define DX_AXIM_MON_ERR_RRESP_BIT_SHIFT	0x10UL
-#define DX_AXIM_MON_ERR_RRESP_BIT_SIZE	0x2UL
-#define DX_AXIM_MON_ERR_RID_BIT_SHIFT	0x12UL
-#define DX_AXIM_MON_ERR_RID_BIT_SIZE	0x4UL
-#define DX_AXIM_CFG_REG_OFFSET	0xBE8UL
-#define DX_AXIM_CFG_BRESPMASK_BIT_SHIFT	0x4UL
-#define DX_AXIM_CFG_BRESPMASK_BIT_SIZE	0x1UL
-#define DX_AXIM_CFG_RRESPMASK_BIT_SHIFT	0x5UL
-#define DX_AXIM_CFG_RRESPMASK_BIT_SIZE	0x1UL
-#define DX_AXIM_CFG_INFLTMASK_BIT_SHIFT	0x6UL
-#define DX_AXIM_CFG_INFLTMASK_BIT_SIZE	0x1UL
-#define DX_AXIM_CFG_COMPMASK_BIT_SHIFT	0x7UL
-#define DX_AXIM_CFG_COMPMASK_BIT_SIZE	0x1UL
-#define DX_AXIM_ACE_CONST_REG_OFFSET	0xBECUL
-#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT	0x0UL
-#define DX_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE	0x2UL
-#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT	0x2UL
-#define DX_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE	0x2UL
-#define DX_AXIM_ACE_CONST_ARBAR_BIT_SHIFT	0x4UL
-#define DX_AXIM_ACE_CONST_ARBAR_BIT_SIZE	0x2UL
-#define DX_AXIM_ACE_CONST_AWBAR_BIT_SHIFT	0x6UL
-#define DX_AXIM_ACE_CONST_AWBAR_BIT_SIZE	0x2UL
-#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT	0x8UL
-#define DX_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE	0x4UL
-#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT	0xCUL
-#define DX_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE	0x3UL
-#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT	0xFUL
-#define DX_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE	0x3UL
-#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT	0x12UL
-#define DX_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE	0x7UL
-#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT	0x19UL
-#define DX_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE	0x4UL
-#define DX_AXIM_CACHE_PARAMS_REG_OFFSET	0xBF0UL
-#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT	0x0UL
-#define DX_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE	0x4UL
-#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT	0x4UL
-#define DX_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE	0x4UL
-#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT	0x8UL
-#define DX_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE	0x4UL
-#endif	// __DX_CRYS_KERNEL_H__
+#define CC_AXIM_MON_INFLIGHT_REG_OFFSET	0xB00UL
+#define CC_AXIM_MON_INFLIGHT_VALUE_BIT_SHIFT	0x0UL
+#define CC_AXIM_MON_INFLIGHT_VALUE_BIT_SIZE	0x8UL
+#define CC_AXIM_MON_INFLIGHTLAST_REG_OFFSET	0xB40UL
+#define CC_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SHIFT	0x0UL
+#define CC_AXIM_MON_INFLIGHTLAST_VALUE_BIT_SIZE	0x8UL
+#define CC_AXIM_MON_COMP_REG_OFFSET	0xB80UL
+#define CC_AXIM_MON_COMP_VALUE_BIT_SHIFT	0x0UL
+#define CC_AXIM_MON_COMP_VALUE_BIT_SIZE	0x10UL
+#define CC_AXIM_MON_ERR_REG_OFFSET	0xBC4UL
+#define CC_AXIM_MON_ERR_BRESP_BIT_SHIFT	0x0UL
+#define CC_AXIM_MON_ERR_BRESP_BIT_SIZE	0x2UL
+#define CC_AXIM_MON_ERR_BID_BIT_SHIFT	0x2UL
+#define CC_AXIM_MON_ERR_BID_BIT_SIZE	0x4UL
+#define CC_AXIM_MON_ERR_RRESP_BIT_SHIFT	0x10UL
+#define CC_AXIM_MON_ERR_RRESP_BIT_SIZE	0x2UL
+#define CC_AXIM_MON_ERR_RID_BIT_SHIFT	0x12UL
+#define CC_AXIM_MON_ERR_RID_BIT_SIZE	0x4UL
+#define CC_AXIM_CFG_REG_OFFSET	0xBE8UL
+#define CC_AXIM_CFG_BRESPMASK_BIT_SHIFT	0x4UL
+#define CC_AXIM_CFG_BRESPMASK_BIT_SIZE	0x1UL
+#define CC_AXIM_CFG_RRESPMASK_BIT_SHIFT	0x5UL
+#define CC_AXIM_CFG_RRESPMASK_BIT_SIZE	0x1UL
+#define CC_AXIM_CFG_INFLTMASK_BIT_SHIFT	0x6UL
+#define CC_AXIM_CFG_INFLTMASK_BIT_SIZE	0x1UL
+#define CC_AXIM_CFG_COMPMASK_BIT_SHIFT	0x7UL
+#define CC_AXIM_CFG_COMPMASK_BIT_SIZE	0x1UL
+#define CC_AXIM_ACE_CONST_REG_OFFSET	0xBECUL
+#define CC_AXIM_ACE_CONST_ARDOMAIN_BIT_SHIFT	0x0UL
+#define CC_AXIM_ACE_CONST_ARDOMAIN_BIT_SIZE	0x2UL
+#define CC_AXIM_ACE_CONST_AWDOMAIN_BIT_SHIFT	0x2UL
+#define CC_AXIM_ACE_CONST_AWDOMAIN_BIT_SIZE	0x2UL
+#define CC_AXIM_ACE_CONST_ARBAR_BIT_SHIFT	0x4UL
+#define CC_AXIM_ACE_CONST_ARBAR_BIT_SIZE	0x2UL
+#define CC_AXIM_ACE_CONST_AWBAR_BIT_SHIFT	0x6UL
+#define CC_AXIM_ACE_CONST_AWBAR_BIT_SIZE	0x2UL
+#define CC_AXIM_ACE_CONST_ARSNOOP_BIT_SHIFT	0x8UL
+#define CC_AXIM_ACE_CONST_ARSNOOP_BIT_SIZE	0x4UL
+#define CC_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SHIFT	0xCUL
+#define CC_AXIM_ACE_CONST_AWSNOOP_NOT_ALIGNED_BIT_SIZE	0x3UL
+#define CC_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SHIFT	0xFUL
+#define CC_AXIM_ACE_CONST_AWSNOOP_ALIGNED_BIT_SIZE	0x3UL
+#define CC_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SHIFT	0x12UL
+#define CC_AXIM_ACE_CONST_AWADDR_NOT_MASKED_BIT_SIZE	0x7UL
+#define CC_AXIM_ACE_CONST_AWLEN_VAL_BIT_SHIFT	0x19UL
+#define CC_AXIM_ACE_CONST_AWLEN_VAL_BIT_SIZE	0x4UL
+#define CC_AXIM_CACHE_PARAMS_REG_OFFSET	0xBF0UL
+#define CC_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SHIFT	0x0UL
+#define CC_AXIM_CACHE_PARAMS_AWCACHE_LAST_BIT_SIZE	0x4UL
+#define CC_AXIM_CACHE_PARAMS_AWCACHE_BIT_SHIFT	0x4UL
+#define CC_AXIM_CACHE_PARAMS_AWCACHE_BIT_SIZE	0x4UL
+#define CC_AXIM_CACHE_PARAMS_ARCACHE_BIT_SHIFT	0x8UL
+#define CC_AXIM_CACHE_PARAMS_ARCACHE_BIT_SIZE	0x4UL
+#endif	// __CC_CRYS_KERNEL_H__
diff --git a/drivers/staging/ccree/dx_host.h b/drivers/staging/ccree/dx_host.h
index 863c267..e90afbc 100644
--- a/drivers/staging/ccree/dx_host.h
+++ b/drivers/staging/ccree/dx_host.h
@@ -14,142 +14,142 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __DX_HOST_H__
-#define __DX_HOST_H__
+#ifndef __CC_HOST_H__
+#define __CC_HOST_H__
 
 // --------------------------------------
 // BLOCK: HOST_P
 // --------------------------------------
-#define DX_HOST_IRR_REG_OFFSET	0xA00UL
-#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT	0x2UL
-#define DX_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE	0x1UL
-#define DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT	0x8UL
-#define DX_HOST_IRR_AXI_ERR_INT_BIT_SIZE	0x1UL
-#define DX_HOST_IRR_GPR0_BIT_SHIFT	0xBUL
-#define DX_HOST_IRR_GPR0_BIT_SIZE	0x1UL
-#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT	0x13UL
-#define DX_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE	0x1UL
-#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT	0x17UL
-#define DX_HOST_IRR_AXIM_COMP_INT_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_REG_OFFSET	0xA04UL
-#define DX_HOST_IMR_NOT_USED_MASK_BIT_SHIFT	0x1UL
-#define DX_HOST_IMR_NOT_USED_MASK_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT	0x2UL
-#define DX_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT	0x8UL
-#define DX_HOST_IMR_AXI_ERR_MASK_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_GPR0_BIT_SHIFT	0xBUL
-#define DX_HOST_IMR_GPR0_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT	0x13UL
-#define DX_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE	0x1UL
-#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT	0x17UL
-#define DX_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE	0x1UL
-#define DX_HOST_ICR_REG_OFFSET	0xA08UL
-#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT	0x2UL
-#define DX_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE	0x1UL
-#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT	0x8UL
-#define DX_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE	0x1UL
-#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT	0xBUL
-#define DX_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE	0x1UL
-#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT	0x13UL
-#define DX_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE	0x1UL
-#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT	0x17UL
-#define DX_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE	0x1UL
-#define DX_HOST_SIGNATURE_REG_OFFSET	0xA24UL
-#define DX_HOST_SIGNATURE_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_SIGNATURE_VALUE_BIT_SIZE	0x20UL
-#define DX_HOST_BOOT_REG_OFFSET	0xA28UL
-#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT	0x0UL
-#define DX_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT	0x1UL
-#define DX_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT	0x2UL
-#define DX_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT	0x3UL
-#define DX_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT	0x5UL
-#define DX_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT	0x6UL
-#define DX_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE	0x3UL
-#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT	0x9UL
-#define DX_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT	0xAUL
-#define DX_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT	0xBUL
-#define DX_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT	0xCUL
-#define DX_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT	0xDUL
-#define DX_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT	0xEUL
-#define DX_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT	0xFUL
-#define DX_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT	0x10UL
-#define DX_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT	0x11UL
-#define DX_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT	0x12UL
-#define DX_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT	0x13UL
-#define DX_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT	0x14UL
-#define DX_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT	0x15UL
-#define DX_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT	0x16UL
-#define DX_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT	0x17UL
-#define DX_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT	0x18UL
-#define DX_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT	0x19UL
-#define DX_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT	0x1AUL
-#define DX_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT	0x1BUL
-#define DX_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT	0x1CUL
-#define DX_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT	0x1DUL
-#define DX_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT	0x1EUL
-#define DX_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE	0x1UL
-#define DX_HOST_VERSION_REG_OFFSET	0xA40UL
-#define DX_HOST_VERSION_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_VERSION_VALUE_BIT_SIZE	0x20UL
-#define DX_HOST_KFDE0_VALID_REG_OFFSET	0xA60UL
-#define DX_HOST_KFDE0_VALID_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_KFDE0_VALID_VALUE_BIT_SIZE	0x1UL
-#define DX_HOST_KFDE1_VALID_REG_OFFSET	0xA64UL
-#define DX_HOST_KFDE1_VALID_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_KFDE1_VALID_VALUE_BIT_SIZE	0x1UL
-#define DX_HOST_KFDE2_VALID_REG_OFFSET	0xA68UL
-#define DX_HOST_KFDE2_VALID_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_KFDE2_VALID_VALUE_BIT_SIZE	0x1UL
-#define DX_HOST_KFDE3_VALID_REG_OFFSET	0xA6CUL
-#define DX_HOST_KFDE3_VALID_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_KFDE3_VALID_VALUE_BIT_SIZE	0x1UL
-#define DX_HOST_GPR0_REG_OFFSET	0xA70UL
-#define DX_HOST_GPR0_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_GPR0_VALUE_BIT_SIZE	0x20UL
-#define DX_GPR_HOST_REG_OFFSET	0xA74UL
-#define DX_GPR_HOST_VALUE_BIT_SHIFT	0x0UL
-#define DX_GPR_HOST_VALUE_BIT_SIZE	0x20UL
-#define DX_HOST_POWER_DOWN_EN_REG_OFFSET	0xA78UL
-#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT	0x0UL
-#define DX_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE	0x1UL
+#define CC_HOST_IRR_REG_OFFSET	0xA00UL
+#define CC_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SHIFT	0x2UL
+#define CC_HOST_IRR_DSCRPTR_COMPLETION_LOW_INT_BIT_SIZE	0x1UL
+#define CC_HOST_IRR_AXI_ERR_INT_BIT_SHIFT	0x8UL
+#define CC_HOST_IRR_AXI_ERR_INT_BIT_SIZE	0x1UL
+#define CC_HOST_IRR_GPR0_BIT_SHIFT	0xBUL
+#define CC_HOST_IRR_GPR0_BIT_SIZE	0x1UL
+#define CC_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SHIFT	0x13UL
+#define CC_HOST_IRR_DSCRPTR_WATERMARK_INT_BIT_SIZE	0x1UL
+#define CC_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT	0x17UL
+#define CC_HOST_IRR_AXIM_COMP_INT_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_REG_OFFSET	0xA04UL
+#define CC_HOST_IMR_NOT_USED_MASK_BIT_SHIFT	0x1UL
+#define CC_HOST_IMR_NOT_USED_MASK_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SHIFT	0x2UL
+#define CC_HOST_IMR_DSCRPTR_COMPLETION_MASK_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_AXI_ERR_MASK_BIT_SHIFT	0x8UL
+#define CC_HOST_IMR_AXI_ERR_MASK_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_GPR0_BIT_SHIFT	0xBUL
+#define CC_HOST_IMR_GPR0_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SHIFT	0x13UL
+#define CC_HOST_IMR_DSCRPTR_WATERMARK_MASK0_BIT_SIZE	0x1UL
+#define CC_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SHIFT	0x17UL
+#define CC_HOST_IMR_AXIM_COMP_INT_MASK_BIT_SIZE	0x1UL
+#define CC_HOST_ICR_REG_OFFSET	0xA08UL
+#define CC_HOST_ICR_DSCRPTR_COMPLETION_BIT_SHIFT	0x2UL
+#define CC_HOST_ICR_DSCRPTR_COMPLETION_BIT_SIZE	0x1UL
+#define CC_HOST_ICR_AXI_ERR_CLEAR_BIT_SHIFT	0x8UL
+#define CC_HOST_ICR_AXI_ERR_CLEAR_BIT_SIZE	0x1UL
+#define CC_HOST_ICR_GPR_INT_CLEAR_BIT_SHIFT	0xBUL
+#define CC_HOST_ICR_GPR_INT_CLEAR_BIT_SIZE	0x1UL
+#define CC_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SHIFT	0x13UL
+#define CC_HOST_ICR_DSCRPTR_WATERMARK_QUEUE0_CLEAR_BIT_SIZE	0x1UL
+#define CC_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SHIFT	0x17UL
+#define CC_HOST_ICR_AXIM_COMP_INT_CLEAR_BIT_SIZE	0x1UL
+#define CC_HOST_SIGNATURE_REG_OFFSET	0xA24UL
+#define CC_HOST_SIGNATURE_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_SIGNATURE_VALUE_BIT_SIZE	0x20UL
+#define CC_HOST_BOOT_REG_OFFSET	0xA28UL
+#define CC_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SHIFT	0x0UL
+#define CC_HOST_BOOT_SYNTHESIS_CONFIG_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SHIFT	0x1UL
+#define CC_HOST_BOOT_LARGE_RKEK_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SHIFT	0x2UL
+#define CC_HOST_BOOT_HASH_IN_FUSES_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SHIFT	0x3UL
+#define CC_HOST_BOOT_EXT_MEM_SECURED_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SHIFT	0x5UL
+#define CC_HOST_BOOT_RKEK_ECC_EXISTS_LOCAL_N_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SHIFT	0x6UL
+#define CC_HOST_BOOT_SRAM_SIZE_LOCAL_BIT_SIZE	0x3UL
+#define CC_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SHIFT	0x9UL
+#define CC_HOST_BOOT_DSCRPTR_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SHIFT	0xAUL
+#define CC_HOST_BOOT_PAU_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SHIFT	0xBUL
+#define CC_HOST_BOOT_RNG_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SHIFT	0xCUL
+#define CC_HOST_BOOT_PKA_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SHIFT	0xDUL
+#define CC_HOST_BOOT_RC4_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SHIFT	0xEUL
+#define CC_HOST_BOOT_SHA_512_PRSNT_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SHIFT	0xFUL
+#define CC_HOST_BOOT_SHA_256_PRSNT_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SHIFT	0x10UL
+#define CC_HOST_BOOT_MD5_PRSNT_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SHIFT	0x11UL
+#define CC_HOST_BOOT_HASH_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SHIFT	0x12UL
+#define CC_HOST_BOOT_C2_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SHIFT	0x13UL
+#define CC_HOST_BOOT_DES_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SHIFT	0x14UL
+#define CC_HOST_BOOT_AES_XCBC_MAC_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SHIFT	0x15UL
+#define CC_HOST_BOOT_AES_CMAC_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SHIFT	0x16UL
+#define CC_HOST_BOOT_AES_CCM_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SHIFT	0x17UL
+#define CC_HOST_BOOT_AES_XEX_HW_T_CALC_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SHIFT	0x18UL
+#define CC_HOST_BOOT_AES_XEX_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SHIFT	0x19UL
+#define CC_HOST_BOOT_CTR_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SHIFT	0x1AUL
+#define CC_HOST_BOOT_AES_DIN_BYTE_RESOLUTION_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SHIFT	0x1BUL
+#define CC_HOST_BOOT_TUNNELING_ENB_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SHIFT	0x1CUL
+#define CC_HOST_BOOT_SUPPORT_256_192_KEY_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SHIFT	0x1DUL
+#define CC_HOST_BOOT_ONLY_ENCRYPT_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SHIFT	0x1EUL
+#define CC_HOST_BOOT_AES_EXISTS_LOCAL_BIT_SIZE	0x1UL
+#define CC_HOST_VERSION_REG_OFFSET	0xA40UL
+#define CC_HOST_VERSION_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_VERSION_VALUE_BIT_SIZE	0x20UL
+#define CC_HOST_KFDE0_VALID_REG_OFFSET	0xA60UL
+#define CC_HOST_KFDE0_VALID_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_KFDE0_VALID_VALUE_BIT_SIZE	0x1UL
+#define CC_HOST_KFDE1_VALID_REG_OFFSET	0xA64UL
+#define CC_HOST_KFDE1_VALID_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_KFDE1_VALID_VALUE_BIT_SIZE	0x1UL
+#define CC_HOST_KFDE2_VALID_REG_OFFSET	0xA68UL
+#define CC_HOST_KFDE2_VALID_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_KFDE2_VALID_VALUE_BIT_SIZE	0x1UL
+#define CC_HOST_KFDE3_VALID_REG_OFFSET	0xA6CUL
+#define CC_HOST_KFDE3_VALID_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_KFDE3_VALID_VALUE_BIT_SIZE	0x1UL
+#define CC_HOST_GPR0_REG_OFFSET	0xA70UL
+#define CC_HOST_GPR0_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_GPR0_VALUE_BIT_SIZE	0x20UL
+#define CC_GPR_HOST_REG_OFFSET	0xA74UL
+#define CC_GPR_HOST_VALUE_BIT_SHIFT	0x0UL
+#define CC_GPR_HOST_VALUE_BIT_SIZE	0x20UL
+#define CC_HOST_POWER_DOWN_EN_REG_OFFSET	0xA78UL
+#define CC_HOST_POWER_DOWN_EN_VALUE_BIT_SHIFT	0x0UL
+#define CC_HOST_POWER_DOWN_EN_VALUE_BIT_SIZE	0x1UL
 // --------------------------------------
 // BLOCK: HOST_SRAM
 // --------------------------------------
-#define DX_SRAM_DATA_REG_OFFSET	0xF00UL
-#define DX_SRAM_DATA_VALUE_BIT_SHIFT	0x0UL
-#define DX_SRAM_DATA_VALUE_BIT_SIZE	0x20UL
-#define DX_SRAM_ADDR_REG_OFFSET	0xF04UL
-#define DX_SRAM_ADDR_VALUE_BIT_SHIFT	0x0UL
-#define DX_SRAM_ADDR_VALUE_BIT_SIZE	0xFUL
-#define DX_SRAM_DATA_READY_REG_OFFSET	0xF08UL
-#define DX_SRAM_DATA_READY_VALUE_BIT_SHIFT	0x0UL
-#define DX_SRAM_DATA_READY_VALUE_BIT_SIZE	0x1UL
+#define CC_SRAM_DATA_REG_OFFSET	0xF00UL
+#define CC_SRAM_DATA_VALUE_BIT_SHIFT	0x0UL
+#define CC_SRAM_DATA_VALUE_BIT_SIZE	0x20UL
+#define CC_SRAM_ADDR_REG_OFFSET	0xF04UL
+#define CC_SRAM_ADDR_VALUE_BIT_SHIFT	0x0UL
+#define CC_SRAM_ADDR_VALUE_BIT_SIZE	0xFUL
+#define CC_SRAM_DATA_READY_REG_OFFSET	0xF08UL
+#define CC_SRAM_DATA_READY_VALUE_BIT_SHIFT	0x0UL
+#define CC_SRAM_DATA_READY_VALUE_BIT_SIZE	0x1UL
 
-#endif //__DX_HOST_H__
+#endif //__CC_HOST_H__
diff --git a/drivers/staging/ccree/dx_reg_common.h b/drivers/staging/ccree/dx_reg_common.h
index d5132ff..8334d9f 100644
--- a/drivers/staging/ccree/dx_reg_common.h
+++ b/drivers/staging/ccree/dx_reg_common.h
@@ -14,13 +14,13 @@
  * along with this program; if not, see <http://www.gnu.org/licenses/>.
  */
 
-#ifndef __DX_REG_COMMON_H__
-#define __DX_REG_COMMON_H__
+#ifndef __CC_REG_COMMON_H__
+#define __CC_REG_COMMON_H__
 
-#define DX_DEV_SIGNATURE 0xDCC71200UL
+#define CC_DEV_SIGNATURE 0xDCC71200UL
 
 #define CC_HW_VERSION 0xef840015UL
 
-#define DX_DEV_SHA_MAX 512
+#define CC_DEV_SHA_MAX 512
 
-#endif /*__DX_REG_COMMON_H__*/
+#endif /*__CC_REG_COMMON_H__*/
diff --git a/drivers/staging/ccree/ssi_config.h b/drivers/staging/ccree/ssi_config.h
index e97bc68..ee2d310 100644
--- a/drivers/staging/ccree/ssi_config.h
+++ b/drivers/staging/ccree/ssi_config.h
@@ -25,14 +25,14 @@
 
 //#define FLUSH_CACHE_ALL
 //#define COMPLETION_DELAY
-//#define DX_DUMP_DESCS
-// #define DX_DUMP_BYTES
+//#define CC_DUMP_DESCS
+// #define CC_DUMP_BYTES
 // #define CC_DEBUG
 /* Enable sysfs interface for debugging REE driver */
 #define ENABLE_CC_SYSFS
-//#define DX_IRQ_DELAY 100000
+//#define CC_IRQ_DELAY 100000
 /* was 32 bit, but for juno's sake it was enlarged to 48 bit */
 #define DMA_BIT_MASK_LEN	48
 
-#endif /*__DX_CONFIG_H__*/
+#endif /*__CC_CONFIG_H__*/
 
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index dce12e1..078d146 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -72,7 +72,7 @@
 #include "ssi_pm.h"
 #include "ssi_fips.h"
 
-#ifdef DX_DUMP_BYTES
+#ifdef CC_DUMP_BYTES
 void dump_byte_array(const char *name, const u8 *buf, size_t len)
 {
 	char prefix[NAME_LEN];
@@ -171,10 +171,10 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
 			       CC_GPR0_IRQ_MASK));
 	cc_iowrite(drvdata, CC_REG(HOST_IMR), val);
 
-#ifdef DX_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET
-#ifdef DX_IRQ_DELAY
+#ifdef CC_HOST_IRQ_TIMER_INIT_VAL_REG_OFFSET
+#ifdef CC_IRQ_DELAY
 	/* Set CC IRQ delay */
-	cc_iowrite(drvdata, CC_REG(HOST_IRQ_TIMER_INIT_VAL), DX_IRQ_DELAY);
+	cc_iowrite(drvdata, CC_REG(HOST_IRQ_TIMER_INIT_VAL), CC_IRQ_DELAY);
 #endif
 	if (cc_ioread(drvdata, CC_REG(HOST_IRQ_TIMER_INIT_VAL)) > 0) {
 		dev_dbg(dev, "irq_delay=%d CC cycles\n",
@@ -279,9 +279,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 
 	/* Verify correct mapping */
 	signature_val = cc_ioread(new_drvdata, CC_REG(HOST_SIGNATURE));
-	if (signature_val != DX_DEV_SIGNATURE) {
+	if (signature_val != CC_DEV_SIGNATURE) {
 		dev_err(dev, "Invalid CC signature: SIGNATURE=0x%08X != expected=0x%08X\n",
-			signature_val, (u32)DX_DEV_SIGNATURE);
+			signature_val, (u32)CC_DEV_SIGNATURE);
 		rc = -EINVAL;
 		goto post_clk_err;
 	}
@@ -507,9 +507,9 @@ static const struct dev_pm_ops arm_cc7x_driver_pm = {
 #endif
 
 #if defined(CONFIG_PM)
-#define	DX_DRIVER_RUNTIME_PM	(&arm_cc7x_driver_pm)
+#define	CC_DRIVER_RUNTIME_PM	(&arm_cc7x_driver_pm)
 #else
-#define	DX_DRIVER_RUNTIME_PM	NULL
+#define	CC_DRIVER_RUNTIME_PM	NULL
 #endif
 
 #ifdef CONFIG_OF
@@ -526,7 +526,7 @@ static struct platform_driver cc7x_driver = {
 #ifdef CONFIG_OF
 		   .of_match_table = arm_cc7x_dev_of_match,
 #endif
-		   .pm = DX_DRIVER_RUNTIME_PM,
+		   .pm = CC_DRIVER_RUNTIME_PM,
 	},
 	.probe = cc7x_probe,
 	.remove = cc7x_remove,
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 3d4513b..4d94a06 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -42,7 +42,7 @@
 /* Registers definitions from shared/hw/ree_include */
 #include "dx_host.h"
 #include "dx_reg_common.h"
-#define CC_SUPPORT_SHA DX_DEV_SHA_MAX
+#define CC_SUPPORT_SHA CC_DEV_SHA_MAX
 #include "cc_crypto_ctx.h"
 #include "ssi_sysfs.h"
 #include "hash_defs.h"
@@ -54,24 +54,24 @@
 #define CC_DEV_NAME_STR "cc715ree"
 #define CC_COHERENT_CACHE_PARAMS 0xEEE
 
-#define CC_AXI_IRQ_MASK ((1 << DX_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
-			  (1 << DX_AXIM_CFG_RRESPMASK_BIT_SHIFT) | \
-			  (1 << DX_AXIM_CFG_INFLTMASK_BIT_SHIFT) | \
-			  (1 << DX_AXIM_CFG_COMPMASK_BIT_SHIFT))
+#define CC_AXI_IRQ_MASK ((1 << CC_AXIM_CFG_BRESPMASK_BIT_SHIFT) | \
+			  (1 << CC_AXIM_CFG_RRESPMASK_BIT_SHIFT) | \
+			  (1 << CC_AXIM_CFG_INFLTMASK_BIT_SHIFT) | \
+			  (1 << CC_AXIM_CFG_COMPMASK_BIT_SHIFT))
 
-#define CC_AXI_ERR_IRQ_MASK BIT(DX_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
+#define CC_AXI_ERR_IRQ_MASK BIT(CC_HOST_IRR_AXI_ERR_INT_BIT_SHIFT)
 
-#define CC_COMP_IRQ_MASK BIT(DX_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
+#define CC_COMP_IRQ_MASK BIT(CC_HOST_IRR_AXIM_COMP_INT_BIT_SHIFT)
 
-#define AXIM_MON_COMP_VALUE GENMASK(DX_AXIM_MON_COMP_VALUE_BIT_SIZE + \
-				    DX_AXIM_MON_COMP_VALUE_BIT_SHIFT, \
-				    DX_AXIM_MON_COMP_VALUE_BIT_SHIFT)
+#define AXIM_MON_COMP_VALUE GENMASK(CC_AXIM_MON_COMP_VALUE_BIT_SIZE + \
+				    CC_AXIM_MON_COMP_VALUE_BIT_SHIFT, \
+				    CC_AXIM_MON_COMP_VALUE_BIT_SHIFT)
 
 /* Register name mangling macro */
-#define CC_REG(reg_name) DX_ ## reg_name ## _REG_OFFSET
+#define CC_REG(reg_name) CC_ ## reg_name ## _REG_OFFSET
 
 /* TEE FIPS status interrupt */
-#define CC_GPR0_IRQ_MASK BIT(DX_HOST_IRR_GPR0_BIT_SHIFT)
+#define CC_GPR0_IRQ_MASK BIT(CC_HOST_IRR_GPR0_BIT_SHIFT)
 
 #define CC_CRA_PRIO 3000
 
@@ -169,7 +169,7 @@ static inline struct device *drvdata_to_dev(struct ssi_drvdata *drvdata)
 	return &drvdata->plat_dev->dev;
 }
 
-#ifdef DX_DUMP_BYTES
+#ifdef CC_DUMP_BYTES
 void dump_byte_array(const char *name, const u8 *the_array,
 		     unsigned long size);
 #else
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 7458c24..5a041bb 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -53,7 +53,7 @@ static const u32 sha224_init[] = {
 static const u32 sha256_init[] = {
 	SHA256_H7, SHA256_H6, SHA256_H5, SHA256_H4,
 	SHA256_H3, SHA256_H2, SHA256_H1, SHA256_H0 };
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 static const u32 digest_len_sha512_init[] = {
 	0x00000080, 0x00000000, 0x00000000, 0x00000000 };
 static const u64 sha384_init[] = {
@@ -209,7 +209,7 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
 		} else { /*sha*/
 			memcpy(state->digest_buff, ctx->digest_buff,
 			       ctx->inter_digestsize);
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 			if (ctx->hash_mode == DRV_HASH_SHA512 ||
 			    ctx->hash_mode == DRV_HASH_SHA384)
 				memcpy(state->digest_bytes_len,
@@ -1839,7 +1839,7 @@ static struct cc_hash_template driver_hash[] = {
 		.hw_mode = DRV_HASH_HW_SHA256,
 		.inter_digestsize = SHA256_DIGEST_SIZE,
 	},
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	{
 		.name = "sha384",
 		.driver_name = "sha384-dx",
@@ -2013,7 +2013,7 @@ int cc_init_hash_sram(struct ssi_drvdata *drvdata)
 	struct cc_hw_desc larval_seq[CC_DIGEST_SIZE_MAX / sizeof(u32)];
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc = 0;
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	int i;
 #endif
 
@@ -2028,7 +2028,7 @@ int cc_init_hash_sram(struct ssi_drvdata *drvdata)
 	sram_buff_ofs += sizeof(digest_len_init);
 	larval_seq_len = 0;
 
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	/* Copy-to-sram digest-len for sha384/512 */
 	cc_set_sram_desc(digest_len_sha512_init, sram_buff_ofs,
 			 ARRAY_SIZE(digest_len_sha512_init),
@@ -2081,7 +2081,7 @@ int cc_init_hash_sram(struct ssi_drvdata *drvdata)
 	sram_buff_ofs += sizeof(sha256_init);
 	larval_seq_len = 0;
 
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	/* We are forced to swap each double-word larval before copying to
 	 * sram
 	 */
@@ -2142,7 +2142,7 @@ int cc_hash_alloc(struct ssi_drvdata *drvdata)
 	drvdata->hash_handle = hash_handle;
 
 	sram_size_to_alloc = sizeof(digest_len_init) +
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 			sizeof(digest_len_sha512_init) +
 			sizeof(sha384_init) +
 			sizeof(sha512_init) +
@@ -2413,7 +2413,7 @@ ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
 			sizeof(md5_init) +
 			sizeof(sha1_init) +
 			sizeof(sha224_init));
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	case DRV_HASH_SHA384:
 		return (hash_handle->larval_digest_sram_addr +
 			sizeof(md5_init) +
@@ -2449,7 +2449,7 @@ cc_digest_len_addr(void *drvdata, u32 mode)
 	case DRV_HASH_SHA256:
 	case DRV_HASH_MD5:
 		return digest_len_addr;
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 	case DRV_HASH_SHA384:
 	case DRV_HASH_SHA512:
 		return  digest_len_addr + sizeof(digest_len_init);
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
index 19fc4cf..9d1af96 100644
--- a/drivers/staging/ccree/ssi_hash.h
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -25,7 +25,7 @@
 
 #define HMAC_IPAD_CONST	0x36363636
 #define HMAC_OPAD_CONST	0x5C5C5C5C
-#if (DX_DEV_SHA_MAX > 256)
+#if (CC_DEV_SHA_MAX > 256)
 #define HASH_LEN_SIZE 16
 #define CC_MAX_HASH_DIGEST_SIZE	SHA512_DIGEST_SIZE
 #define CC_MAX_HASH_BLCK_SIZE SHA512_BLOCK_SIZE
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index 436e035..f1356d1 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -179,7 +179,7 @@ static void enqueue_seq(void __iomem *cc_base, struct cc_hw_desc seq[],
 	for (i = 0; i < seq_len; i++) {
 		for (w = 0; w <= 5; w++)
 			writel_relaxed(seq[i].word[w], reg);
-#ifdef DX_DUMP_DESCS
+#ifdef CC_DUMP_DESCS
 		dev_dbg(dev, "desc[%02d]: 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X 0x%08X\n",
 			i, seq[i].word[0], seq[i].word[1], seq[i].word[2],
 			seq[i].word[3], seq[i].word[4], seq[i].word[5]);
diff --git a/drivers/staging/ccree/ssi_sysfs.c b/drivers/staging/ccree/ssi_sysfs.c
index 08858a4..6b11a72 100644
--- a/drivers/staging/ccree/ssi_sysfs.c
+++ b/drivers/staging/ccree/ssi_sysfs.c
@@ -34,23 +34,23 @@ static ssize_t ssi_sys_regdump_show(struct kobject *kobj,
 	register_value = cc_ioread(drvdata, CC_REG(HOST_SIGNATURE));
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset,
 			    "%s \t(0x%lX)\t 0x%08X\n", "HOST_SIGNATURE       ",
-			    DX_HOST_SIGNATURE_REG_OFFSET, register_value);
+			    CC_HOST_SIGNATURE_REG_OFFSET, register_value);
 	register_value = cc_ioread(drvdata, CC_REG(HOST_IRR));
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset,
 			    "%s \t(0x%lX)\t 0x%08X\n", "HOST_IRR             ",
-			    DX_HOST_IRR_REG_OFFSET, register_value);
+			    CC_HOST_IRR_REG_OFFSET, register_value);
 	register_value = cc_ioread(drvdata, CC_REG(HOST_POWER_DOWN_EN));
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset,
 			    "%s \t(0x%lX)\t 0x%08X\n", "HOST_POWER_DOWN_EN   ",
-			    DX_HOST_POWER_DOWN_EN_REG_OFFSET, register_value);
+			    CC_HOST_POWER_DOWN_EN_REG_OFFSET, register_value);
 	register_value =  cc_ioread(drvdata, CC_REG(AXIM_MON_ERR));
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset,
 			    "%s \t(0x%lX)\t 0x%08X\n", "AXIM_MON_ERR         ",
-			    DX_AXIM_MON_ERR_REG_OFFSET, register_value);
+			    CC_AXIM_MON_ERR_REG_OFFSET, register_value);
 	register_value = cc_ioread(drvdata, CC_REG(DSCRPTR_QUEUE_CONTENT));
 	offset += scnprintf(buf + offset, PAGE_SIZE - offset,
 			    "%s \t(0x%lX)\t 0x%08X\n", "DSCRPTR_QUEUE_CONTENT",
-			    DX_DSCRPTR_QUEUE_CONTENT_REG_OFFSET,
+			    CC_DSCRPTR_QUEUE_CONTENT_REG_OFFSET,
 			    register_value);
 	return offset;
 }
-- 
2.7.4


* [PATCH 20/24] staging: ccree: rename vars/structs/enums from ssi_ to cc_
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

Unify the naming convention by renaming all ssi_ prefixed variables,
structs and enums to cc_.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_aead.c        |  82 +++++++++----------
 drivers/staging/ccree/ssi_aead.h        |  14 ++--
 drivers/staging/ccree/ssi_buffer_mgr.c  |  30 +++----
 drivers/staging/ccree/ssi_buffer_mgr.h  |  22 +++---
 drivers/staging/ccree/ssi_cipher.c      |  72 ++++++++---------
 drivers/staging/ccree/ssi_cipher.h      |   6 +-
 drivers/staging/ccree/ssi_driver.c      |  16 ++--
 drivers/staging/ccree/ssi_driver.h      |  30 +++----
 drivers/staging/ccree/ssi_fips.c        |  20 ++---
 drivers/staging/ccree/ssi_fips.h        |  16 ++--
 drivers/staging/ccree/ssi_hash.c        | 136 ++++++++++++++++----------------
 drivers/staging/ccree/ssi_hash.h        |  12 +--
 drivers/staging/ccree/ssi_ivgen.c       |  14 ++--
 drivers/staging/ccree/ssi_ivgen.h       |   8 +-
 drivers/staging/ccree/ssi_pm.c          |  12 +--
 drivers/staging/ccree/ssi_pm.h          |   4 +-
 drivers/staging/ccree/ssi_request_mgr.c |  70 ++++++++--------
 drivers/staging/ccree/ssi_request_mgr.h |  18 ++---
 drivers/staging/ccree/ssi_sram_mgr.c    |  12 +--
 drivers/staging/ccree/ssi_sram_mgr.h    |  14 ++--
 drivers/staging/ccree/ssi_sysfs.c       |  12 +--
 drivers/staging/ccree/ssi_sysfs.h       |   4 +-
 22 files changed, 312 insertions(+), 312 deletions(-)

diff --git a/drivers/staging/ccree/ssi_aead.c b/drivers/staging/ccree/ssi_aead.c
index d07b38d..73ae970 100644
--- a/drivers/staging/ccree/ssi_aead.c
+++ b/drivers/staging/ccree/ssi_aead.c
@@ -52,7 +52,7 @@
 #define ICV_VERIF_OK 0x01
 
 struct cc_aead_handle {
-	ssi_sram_addr_t sram_workspace_addr;
+	cc_sram_addr_t sram_workspace_addr;
 	struct list_head aead_list;
 };
 
@@ -69,7 +69,7 @@ struct cc_xcbc_s {
 };
 
 struct cc_aead_ctx {
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 	u8 ctr_nonce[MAX_NONCE_SIZE]; /* used for ctr3686 iv and aes ccm */
 	u8 *enckey;
 	dma_addr_t enckey_dma_addr;
@@ -148,18 +148,18 @@ static int cc_aead_init(struct crypto_aead *tfm)
 {
 	struct aead_alg *alg = crypto_aead_alg(tfm);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
-	struct ssi_crypto_alg *ssi_alg =
-			container_of(alg, struct ssi_crypto_alg, aead_alg);
-	struct device *dev = drvdata_to_dev(ssi_alg->drvdata);
+	struct cc_crypto_alg *cc_alg =
+			container_of(alg, struct cc_crypto_alg, aead_alg);
+	struct device *dev = drvdata_to_dev(cc_alg->drvdata);
 
 	dev_dbg(dev, "Initializing context @%p for %s\n", ctx,
 		crypto_tfm_alg_name(&tfm->base));
 
 	/* Initialize modes in instance */
-	ctx->cipher_mode = ssi_alg->cipher_mode;
-	ctx->flow_mode = ssi_alg->flow_mode;
-	ctx->auth_mode = ssi_alg->auth_mode;
-	ctx->drvdata = ssi_alg->drvdata;
+	ctx->cipher_mode = cc_alg->cipher_mode;
+	ctx->flow_mode = cc_alg->flow_mode;
+	ctx->auth_mode = cc_alg->auth_mode;
+	ctx->drvdata = cc_alg->drvdata;
 	crypto_aead_set_reqsize(tfm, sizeof(struct aead_req_ctx));
 
 	/* Allocate key buffer, cache line aligned */
@@ -226,11 +226,11 @@ static int cc_aead_init(struct crypto_aead *tfm)
 	return -ENOMEM;
 }
 
-static void cc_aead_complete(struct device *dev, void *ssi_req)
+static void cc_aead_complete(struct device *dev, void *cc_req)
 {
-	struct aead_request *areq = (struct aead_request *)ssi_req;
+	struct aead_request *areq = (struct aead_request *)cc_req;
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
-	struct crypto_aead *tfm = crypto_aead_reqtfm(ssi_req);
+	struct crypto_aead *tfm = crypto_aead_reqtfm(cc_req);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	int err = 0;
 
@@ -442,7 +442,7 @@ cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	u32 larval_addr = cc_larval_digest_addr(ctx->drvdata, ctx->auth_mode);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	unsigned int blocksize;
 	unsigned int digestsize;
 	unsigned int hashmode;
@@ -546,7 +546,7 @@ cc_get_plain_hmac_key(struct crypto_aead *tfm, const u8 *key,
 		idx++;
 	}
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 0);
 	if (rc)
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 
@@ -561,7 +561,7 @@ cc_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
 {
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct rtattr *rta = (struct rtattr *)key;
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct crypto_authenc_key_param *param;
 	struct cc_hw_desc desc[MAX_AEAD_SETKEY_SEQ];
 	int seq_len = 0, rc = -EINVAL;
@@ -645,7 +645,7 @@ cc_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsigned int keylen)
 	/* STAT_PHASE_3: Submit sequence to HW */
 
 	if (seq_len > 0) { /* For CCM there is no sequence to setup the key */
-		rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 0);
+		rc = send_request(ctx->drvdata, &cc_req, desc, seq_len, 0);
 		if (rc) {
 			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 			goto setkey_error;
@@ -734,7 +734,7 @@ static void cc_set_assoc_desc(struct aead_request *areq, unsigned int flow_mode,
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
-	enum ssi_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type;
+	enum cc_req_dma_buf_type assoc_dma_type = areq_ctx->assoc_buff_type;
 	unsigned int idx = *seq_size;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 
@@ -773,7 +773,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 				unsigned int *seq_size, int direct)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
-	enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+	enum cc_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
 	unsigned int idx = *seq_size;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
@@ -803,7 +803,7 @@ static void cc_proc_authen_desc(struct aead_request *areq,
 		 * assoc. + iv + data -compact in one table
 		 * if assoclen is ZERO only IV perform
 		 */
-		ssi_sram_addr_t mlli_addr = areq_ctx->assoc.sram_addr;
+		cc_sram_addr_t mlli_addr = areq_ctx->assoc.sram_addr;
 		u32 mlli_nents = areq_ctx->assoc.mlli_nents;
 
 		if (areq_ctx->is_single_pass) {
@@ -838,7 +838,7 @@ static void cc_proc_cipher_desc(struct aead_request *areq,
 {
 	unsigned int idx = *seq_size;
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(areq);
-	enum ssi_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
+	enum cc_req_dma_buf_type data_dma_type = areq_ctx->data_buff_type;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(areq);
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
@@ -1954,7 +1954,7 @@ static int cc_proc_aead(struct aead_request *req,
 	struct cc_aead_ctx *ctx = crypto_aead_ctx(tfm);
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 
 	dev_dbg(dev, "%s context=%p req=%p iv=%p src=%p src_ofs=%d dst=%p dst_ofs=%d cryptolen=%d\n",
 		((direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ? "Enc" : "Dec"),
@@ -1972,8 +1972,8 @@ static int cc_proc_aead(struct aead_request *req,
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_aead_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_aead_complete;
+	cc_req.user_arg = (void *)req;
 
 	/* Setup request context */
 	areq_ctx->gen_ctx.op_type = direct;
@@ -2040,34 +2040,34 @@ static int cc_proc_aead(struct aead_request *req,
 	if (areq_ctx->backup_giv) {
 		/* set the DMA mapped IV address*/
 		if (ctx->cipher_mode == DRV_CIPHER_CTR) {
-			ssi_req.ivgen_dma_addr[0] =
+			cc_req.ivgen_dma_addr[0] =
 				areq_ctx->gen_ctx.iv_dma_addr +
 				CTR_RFC3686_NONCE_SIZE;
-			ssi_req.ivgen_dma_addr_len = 1;
+			cc_req.ivgen_dma_addr_len = 1;
 		} else if (ctx->cipher_mode == DRV_CIPHER_CCM) {
 			/* In ccm, the IV needs to exist both inside B0 and
 			 * inside the counter.It is also copied to iv_dma_addr
 			 * for other reasons (like returning it to the user).
 			 * So, using 3 (identical) IV outputs.
 			 */
-			ssi_req.ivgen_dma_addr[0] =
+			cc_req.ivgen_dma_addr[0] =
 				areq_ctx->gen_ctx.iv_dma_addr +
 				CCM_BLOCK_IV_OFFSET;
-			ssi_req.ivgen_dma_addr[1] =
+			cc_req.ivgen_dma_addr[1] =
 				sg_dma_address(&areq_ctx->ccm_adata_sg) +
 				CCM_B0_OFFSET + CCM_BLOCK_IV_OFFSET;
-			ssi_req.ivgen_dma_addr[2] =
+			cc_req.ivgen_dma_addr[2] =
 				sg_dma_address(&areq_ctx->ccm_adata_sg) +
 				CCM_CTR_COUNT_0_OFFSET + CCM_BLOCK_IV_OFFSET;
-			ssi_req.ivgen_dma_addr_len = 3;
+			cc_req.ivgen_dma_addr_len = 3;
 		} else {
-			ssi_req.ivgen_dma_addr[0] =
+			cc_req.ivgen_dma_addr[0] =
 				areq_ctx->gen_ctx.iv_dma_addr;
-			ssi_req.ivgen_dma_addr_len = 1;
+			cc_req.ivgen_dma_addr_len = 1;
 		}
 
 		/* set the IV size (8/16 B long)*/
-		ssi_req.ivgen_size = crypto_aead_ivsize(tfm);
+		cc_req.ivgen_size = crypto_aead_ivsize(tfm);
 	}
 
 	/* STAT_PHASE_2: Create sequence */
@@ -2099,7 +2099,7 @@ static int cc_proc_aead(struct aead_request *req,
 
 	/* STAT_PHASE_3: Lock HW and push sequence */
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, seq_len, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, seq_len, 1);
 
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
@@ -2403,7 +2403,7 @@ static int cc_rfc4543_gcm_decrypt(struct aead_request *req)
 }
 
 /* DX Block aead alg */
-static struct ssi_alg_template aead_algs[] = {
+static struct cc_alg_template aead_algs[] = {
 	{
 		.name = "authenc(hmac(sha1),cbc(aes))",
 		.driver_name = "authenc-hmac-sha1-cbc-aes-dx",
@@ -2653,10 +2653,10 @@ static struct ssi_alg_template aead_algs[] = {
 	},
 };
 
-static struct ssi_crypto_alg *cc_create_aead_alg(struct ssi_alg_template *tmpl,
-						 struct device *dev)
+static struct cc_crypto_alg *cc_create_aead_alg(struct cc_alg_template *tmpl,
+						struct device *dev)
 {
-	struct ssi_crypto_alg *t_alg;
+	struct cc_crypto_alg *t_alg;
 	struct aead_alg *alg;
 
 	t_alg = kzalloc(sizeof(*t_alg), GFP_KERNEL);
@@ -2687,9 +2687,9 @@ static struct ssi_crypto_alg *cc_create_aead_alg(struct ssi_alg_template *tmpl,
 	return t_alg;
 }
 
-int cc_aead_free(struct ssi_drvdata *drvdata)
+int cc_aead_free(struct cc_drvdata *drvdata)
 {
-	struct ssi_crypto_alg *t_alg, *n;
+	struct cc_crypto_alg *t_alg, *n;
 	struct cc_aead_handle *aead_handle =
 		(struct cc_aead_handle *)drvdata->aead_handle;
 
@@ -2708,10 +2708,10 @@ int cc_aead_free(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-int cc_aead_alloc(struct ssi_drvdata *drvdata)
+int cc_aead_alloc(struct cc_drvdata *drvdata)
 {
 	struct cc_aead_handle *aead_handle;
-	struct ssi_crypto_alg *t_alg;
+	struct cc_crypto_alg *t_alg;
 	int rc = -ENOMEM;
 	int alg;
 	struct device *dev = drvdata_to_dev(drvdata);
diff --git a/drivers/staging/ccree/ssi_aead.h b/drivers/staging/ccree/ssi_aead.h
index e41040e..2507be1 100644
--- a/drivers/staging/ccree/ssi_aead.h
+++ b/drivers/staging/ccree/ssi_aead.h
@@ -96,15 +96,15 @@ struct aead_req_ctx {
 
 	u8 *icv_virt_addr; /* Virt. address of ICV */
 	struct async_gen_req_ctx gen_ctx;
-	struct ssi_mlli assoc;
-	struct ssi_mlli src;
-	struct ssi_mlli dst;
+	struct cc_mlli assoc;
+	struct cc_mlli src;
+	struct cc_mlli dst;
 	struct scatterlist *src_sgl;
 	struct scatterlist *dst_sgl;
 	unsigned int src_offset;
 	unsigned int dst_offset;
-	enum ssi_req_dma_buf_type assoc_buff_type;
-	enum ssi_req_dma_buf_type data_buff_type;
+	enum cc_req_dma_buf_type assoc_buff_type;
+	enum cc_req_dma_buf_type data_buff_type;
 	struct mlli_params mlli_params;
 	unsigned int cryptlen;
 	struct scatterlist ccm_adata_sg;
@@ -116,7 +116,7 @@ struct aead_req_ctx {
 	bool plaintext_authenticate_only; //for gcm_rfc4543
 };
 
-int cc_aead_alloc(struct ssi_drvdata *drvdata);
-int cc_aead_free(struct ssi_drvdata *drvdata);
+int cc_aead_alloc(struct cc_drvdata *drvdata);
+int cc_aead_free(struct cc_drvdata *drvdata);
 
 #endif /*__CC_AEAD_H__*/
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index ee5c086..8649bcb 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -58,7 +58,7 @@ struct buffer_array {
 	u32 *mlli_nents[MAX_NUM_OF_BUFFERS_IN_MLLI];
 };
 
-static inline char *cc_dma_buf_type(enum ssi_req_dma_buf_type type)
+static inline char *cc_dma_buf_type(enum cc_req_dma_buf_type type)
 {
 	switch (type) {
 	case CC_DMA_BUF_NULL:
@@ -80,7 +80,7 @@ static inline char *cc_dma_buf_type(enum ssi_req_dma_buf_type type)
  * @dir: [IN] copy from/to sgl
  */
 static void cc_copy_mac(struct device *dev, struct aead_request *req,
-			enum ssi_sg_cpy_direct dir)
+			enum cc_sg_cpy_direct dir)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
@@ -157,7 +157,7 @@ void cc_zero_sgl(struct scatterlist *sgl, u32 data_len)
  * @direct:
  */
 void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
-			u32 to_skip, u32 end, enum ssi_sg_cpy_direct direct)
+			u32 to_skip, u32 end, enum cc_sg_cpy_direct direct)
 {
 	u32 nents, lbytes;
 
@@ -496,7 +496,7 @@ void cc_unmap_blkcipher_request(struct device *dev, void *ctx,
 	}
 }
 
-int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_blkcipher_request(struct cc_drvdata *drvdata, void *ctx,
 			     unsigned int ivsize, unsigned int nbytes,
 			     void *info, struct scatterlist *src,
 			     struct scatterlist *dst)
@@ -594,7 +594,7 @@ void cc_unmap_aead_request(struct device *dev, struct aead_request *req)
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	unsigned int hw_iv_size = areq_ctx->hw_iv_size;
 	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
-	struct ssi_drvdata *drvdata = dev_get_drvdata(dev);
+	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 	u32 dummy;
 	bool chained;
 	u32 size_to_unmap = 0;
@@ -734,7 +734,7 @@ static int cc_get_aead_icv_nents(struct device *dev, struct scatterlist *sgl,
 	return nents;
 }
 
-static int cc_aead_chain_iv(struct ssi_drvdata *drvdata,
+static int cc_aead_chain_iv(struct cc_drvdata *drvdata,
 			    struct aead_request *req,
 			    struct buffer_array *sg_data,
 			    bool is_last, bool do_chain)
@@ -778,7 +778,7 @@ static int cc_aead_chain_iv(struct ssi_drvdata *drvdata,
 	return rc;
 }
 
-static int cc_aead_chain_assoc(struct ssi_drvdata *drvdata,
+static int cc_aead_chain_assoc(struct cc_drvdata *drvdata,
 			       struct aead_request *req,
 			       struct buffer_array *sg_data,
 			       bool is_last, bool do_chain)
@@ -898,7 +898,7 @@ static void cc_prepare_aead_data_dlli(struct aead_request *req,
 	}
 }
 
-static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
+static int cc_prepare_aead_data_mlli(struct cc_drvdata *drvdata,
 				     struct aead_request *req,
 				     struct buffer_array *sg_data,
 				     u32 *src_last_bytes, u32 *dst_last_bytes,
@@ -1030,7 +1030,7 @@ static int cc_prepare_aead_data_mlli(struct ssi_drvdata *drvdata,
 	return rc;
 }
 
-static int cc_aead_chain_data(struct ssi_drvdata *drvdata,
+static int cc_aead_chain_data(struct cc_drvdata *drvdata,
 			      struct aead_request *req,
 			      struct buffer_array *sg_data,
 			      bool is_last_table, bool do_chain)
@@ -1150,7 +1150,7 @@ static int cc_aead_chain_data(struct ssi_drvdata *drvdata,
 	return rc;
 }
 
-static void cc_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
+static void cc_update_aead_mlli_nents(struct cc_drvdata *drvdata,
 				      struct aead_request *req)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
@@ -1201,7 +1201,7 @@ static void cc_update_aead_mlli_nents(struct ssi_drvdata *drvdata,
 	}
 }
 
-int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
+int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 {
 	struct aead_req_ctx *areq_ctx = aead_request_ctx(req);
 	struct mlli_params *mlli_params = &areq_ctx->mlli_params;
@@ -1400,7 +1400,7 @@ int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req)
 	return rc;
 }
 
-int cc_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
 			      struct scatterlist *src, unsigned int nbytes,
 			      bool do_update)
 {
@@ -1481,7 +1481,7 @@ int cc_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx,
 	return -ENOMEM;
 }
 
-int cc_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
 			       struct scatterlist *src, unsigned int nbytes,
 			       unsigned int block_size)
 {
@@ -1639,7 +1639,7 @@ void cc_unmap_hash_request(struct device *dev, void *ctx,
 	}
 }
 
-int cc_buffer_mgr_init(struct ssi_drvdata *drvdata)
+int cc_buffer_mgr_init(struct cc_drvdata *drvdata)
 {
 	struct buff_mgr_handle *buff_mgr_handle;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -1666,7 +1666,7 @@ int cc_buffer_mgr_init(struct ssi_drvdata *drvdata)
 	return -ENOMEM;
 }
 
-int cc_buffer_mgr_fini(struct ssi_drvdata *drvdata)
+int cc_buffer_mgr_fini(struct cc_drvdata *drvdata)
 {
 	struct buff_mgr_handle *buff_mgr_handle = drvdata->buff_mgr_handle;
 
diff --git a/drivers/staging/ccree/ssi_buffer_mgr.h b/drivers/staging/ccree/ssi_buffer_mgr.h
index 77744a6..da43354 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.h
+++ b/drivers/staging/ccree/ssi_buffer_mgr.h
@@ -26,19 +26,19 @@
 #include "ssi_config.h"
 #include "ssi_driver.h"
 
-enum ssi_req_dma_buf_type {
+enum cc_req_dma_buf_type {
 	CC_DMA_BUF_NULL = 0,
 	CC_DMA_BUF_DLLI,
 	CC_DMA_BUF_MLLI
 };
 
-enum ssi_sg_cpy_direct {
+enum cc_sg_cpy_direct {
 	CC_SG_TO_BUF = 0,
 	CC_SG_FROM_BUF = 1
 };
 
-struct ssi_mlli {
-	ssi_sram_addr_t sram_addr;
+struct cc_mlli {
+	cc_sram_addr_t sram_addr;
 	unsigned int nents; //sg nents
 	unsigned int mlli_nents; //mlli nents might be different than the above
 };
@@ -50,11 +50,11 @@ struct mlli_params {
 	u32 mlli_len;
 };
 
-int cc_buffer_mgr_init(struct ssi_drvdata *drvdata);
+int cc_buffer_mgr_init(struct cc_drvdata *drvdata);
 
-int cc_buffer_mgr_fini(struct ssi_drvdata *drvdata);
+int cc_buffer_mgr_fini(struct cc_drvdata *drvdata);
 
-int cc_map_blkcipher_request(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_blkcipher_request(struct cc_drvdata *drvdata, void *ctx,
 			     unsigned int ivsize, unsigned int nbytes,
 			     void *info, struct scatterlist *src,
 			     struct scatterlist *dst);
@@ -64,15 +64,15 @@ void cc_unmap_blkcipher_request(struct device *dev, void *ctx,
 				struct scatterlist *src,
 				struct scatterlist *dst);
 
-int cc_map_aead_request(struct ssi_drvdata *drvdata, struct aead_request *req);
+int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req);
 
 void cc_unmap_aead_request(struct device *dev, struct aead_request *req);
 
-int cc_map_hash_request_final(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
 			      struct scatterlist *src, unsigned int nbytes,
 			      bool do_update);
 
-int cc_map_hash_request_update(struct ssi_drvdata *drvdata, void *ctx,
+int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
 			       struct scatterlist *src, unsigned int nbytes,
 			       unsigned int block_size);
 
@@ -80,7 +80,7 @@ void cc_unmap_hash_request(struct device *dev, void *ctx,
 			   struct scatterlist *src, bool do_revert);
 
 void cc_copy_sg_portion(struct device *dev, u8 *dest, struct scatterlist *sg,
-			u32 to_skip, u32 end, enum ssi_sg_cpy_direct direct);
+			u32 to_skip, u32 end, enum cc_sg_cpy_direct direct);
 
 void cc_zero_sgl(struct scatterlist *sgl, u32 data_len);
 
diff --git a/drivers/staging/ccree/ssi_cipher.c b/drivers/staging/ccree/ssi_cipher.c
index c437a79..791fe75 100644
--- a/drivers/staging/ccree/ssi_cipher.c
+++ b/drivers/staging/ccree/ssi_cipher.c
@@ -55,7 +55,7 @@ struct cc_hw_key_info {
 };
 
 struct cc_cipher_ctx {
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 	int keylen;
 	int key_round_number;
 	int cipher_mode;
@@ -67,7 +67,7 @@ struct cc_cipher_ctx {
 	struct crypto_shash *shash_tfm;
 };
 
-static void cc_cipher_complete(struct device *dev, void *ssi_req);
+static void cc_cipher_complete(struct device *dev, void *cc_req);
 
 static int validate_keys_sizes(struct cc_cipher_ctx *ctx_p, u32 size)
 {
@@ -145,17 +145,17 @@ static int validate_data_size(struct cc_cipher_ctx *ctx_p,
 
 static unsigned int get_max_keysize(struct crypto_tfm *tfm)
 {
-	struct ssi_crypto_alg *ssi_alg =
-		container_of(tfm->__crt_alg, struct ssi_crypto_alg,
+	struct cc_crypto_alg *cc_alg =
+		container_of(tfm->__crt_alg, struct cc_crypto_alg,
 			     crypto_alg);
 
-	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	if ((cc_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
 	    CRYPTO_ALG_TYPE_ABLKCIPHER)
-		return ssi_alg->crypto_alg.cra_ablkcipher.max_keysize;
+		return cc_alg->crypto_alg.cra_ablkcipher.max_keysize;
 
-	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
+	if ((cc_alg->crypto_alg.cra_flags & CRYPTO_ALG_TYPE_MASK) ==
 	    CRYPTO_ALG_TYPE_BLKCIPHER)
-		return ssi_alg->crypto_alg.cra_blkcipher.max_keysize;
+		return cc_alg->crypto_alg.cra_blkcipher.max_keysize;
 
 	return 0;
 }
@@ -164,9 +164,9 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
 {
 	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct crypto_alg *alg = tfm->__crt_alg;
-	struct ssi_crypto_alg *ssi_alg =
-			container_of(alg, struct ssi_crypto_alg, crypto_alg);
-	struct device *dev = drvdata_to_dev(ssi_alg->drvdata);
+	struct cc_crypto_alg *cc_alg =
+			container_of(alg, struct cc_crypto_alg, crypto_alg);
+	struct device *dev = drvdata_to_dev(cc_alg->drvdata);
 	int rc = 0;
 	unsigned int max_key_buf_size = get_max_keysize(tfm);
 	struct ablkcipher_tfm *ablktfm = &tfm->crt_ablkcipher;
@@ -176,9 +176,9 @@ static int cc_cipher_init(struct crypto_tfm *tfm)
 
 	ablktfm->reqsize = sizeof(struct blkcipher_req_ctx);
 
-	ctx_p->cipher_mode = ssi_alg->cipher_mode;
-	ctx_p->flow_mode = ssi_alg->flow_mode;
-	ctx_p->drvdata = ssi_alg->drvdata;
+	ctx_p->cipher_mode = cc_alg->cipher_mode;
+	ctx_p->flow_mode = cc_alg->flow_mode;
+	ctx_p->drvdata = cc_alg->drvdata;
 
 	/* Allocate key buffer, cache line aligned */
 	ctx_p->user.key = kmalloc(max_key_buf_size, GFP_KERNEL | GFP_DMA);
@@ -408,14 +408,14 @@ static void cc_setup_cipher_desc(struct crypto_tfm *tfm,
 	dma_addr_t iv_dma_addr = req_ctx->gen_ctx.iv_dma_addr;
 	unsigned int du_size = nbytes;
 
-	struct ssi_crypto_alg *ssi_alg =
-		container_of(tfm->__crt_alg, struct ssi_crypto_alg,
+	struct cc_crypto_alg *cc_alg =
+		container_of(tfm->__crt_alg, struct cc_crypto_alg,
 			     crypto_alg);
 
-	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) ==
+	if ((cc_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) ==
 	    CRYPTO_ALG_BULK_DU_512)
 		du_size = 512;
-	if ((ssi_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) ==
+	if ((cc_alg->crypto_alg.cra_flags & CRYPTO_ALG_BULK_MASK) ==
 	    CRYPTO_ALG_BULK_DU_4096)
 		du_size = 4096;
 
@@ -604,9 +604,9 @@ static void cc_setup_cipher_data(struct crypto_tfm *tfm,
 	}
 }
 
-static void cc_cipher_complete(struct device *dev, void *ssi_req)
+static void cc_cipher_complete(struct device *dev, void *cc_req)
 {
-	struct ablkcipher_request *areq = (struct ablkcipher_request *)ssi_req;
+	struct ablkcipher_request *areq = (struct ablkcipher_request *)cc_req;
 	struct scatterlist *dst = areq->dst;
 	struct scatterlist *src = areq->src;
 	struct blkcipher_req_ctx *req_ctx = ablkcipher_request_ctx(areq);
@@ -651,7 +651,7 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	struct cc_cipher_ctx *ctx_p = crypto_tfm_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx_p->drvdata);
 	struct cc_hw_desc desc[MAX_ABLKCIPHER_SEQ_LEN];
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	int rc, seq_len = 0, cts_restore_flag = 0;
 
 	dev_dbg(dev, "%s req=%p info=%p nbytes=%d\n",
@@ -691,11 +691,11 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_cipher_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_cipher_complete;
+	cc_req.user_arg = (void *)req;
 
 #ifdef ENABLE_CYCLE_COUNT
-	ssi_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
+	cc_req.op_type = (direction == DRV_CRYPTO_DIRECTION_DECRYPT) ?
 		STAT_OP_TYPE_DECODE : STAT_OP_TYPE_ENCODE;
 
 #endif
@@ -722,15 +722,15 @@ static int cc_cipher_process(struct ablkcipher_request *req,
 
 	/* do we need to generate IV? */
 	if (req_ctx->is_giv) {
-		ssi_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
-		ssi_req.ivgen_dma_addr_len = 1;
+		cc_req.ivgen_dma_addr[0] = req_ctx->gen_ctx.iv_dma_addr;
+		cc_req.ivgen_dma_addr_len = 1;
 		/* set the IV size (8/16 B long)*/
-		ssi_req.ivgen_size = ivsize;
+		cc_req.ivgen_size = ivsize;
 	}
 
 	/* STAT_PHASE_3: Lock HW and push sequence */
 
-	rc = send_request(ctx_p->drvdata, &ssi_req, desc, seq_len, 1);
+	rc = send_request(ctx_p->drvdata, &cc_req, desc, seq_len, 1);
 	if (rc != -EINPROGRESS) {
 		/* Failed to send the request or request completed
 		 * synchronously
@@ -782,7 +782,7 @@ static int cc_cipher_decrypt(struct ablkcipher_request *req)
 }
 
 /* DX Block cipher alg */
-static struct ssi_alg_template blkcipher_algs[] = {
+static struct cc_alg_template blkcipher_algs[] = {
 	{
 		.name = "xts(aes)",
 		.driver_name = "xts-aes-dx",
@@ -1075,10 +1075,10 @@ static struct ssi_alg_template blkcipher_algs[] = {
 };
 
 static
-struct ssi_crypto_alg *cc_cipher_create_alg(struct ssi_alg_template *template,
-					    struct device *dev)
+struct cc_crypto_alg *cc_cipher_create_alg(struct cc_alg_template *template,
+					   struct device *dev)
 {
-	struct ssi_crypto_alg *t_alg;
+	struct cc_crypto_alg *t_alg;
 	struct crypto_alg *alg;
 
 	t_alg = kzalloc(sizeof(*t_alg), GFP_KERNEL);
@@ -1109,9 +1109,9 @@ struct ssi_crypto_alg *cc_cipher_create_alg(struct ssi_alg_template *template,
 	return t_alg;
 }
 
-int cc_cipher_free(struct ssi_drvdata *drvdata)
+int cc_cipher_free(struct cc_drvdata *drvdata)
 {
-	struct ssi_crypto_alg *t_alg, *n;
+	struct cc_crypto_alg *t_alg, *n;
 	struct cc_cipher_handle *blkcipher_handle =
 						drvdata->blkcipher_handle;
 	if (blkcipher_handle) {
@@ -1129,10 +1129,10 @@ int cc_cipher_free(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-int cc_cipher_alloc(struct ssi_drvdata *drvdata)
+int cc_cipher_alloc(struct cc_drvdata *drvdata)
 {
 	struct cc_cipher_handle *ablkcipher_handle;
-	struct ssi_crypto_alg *t_alg;
+	struct cc_crypto_alg *t_alg;
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc = -ENOMEM;
 	int alg;
diff --git a/drivers/staging/ccree/ssi_cipher.h b/drivers/staging/ccree/ssi_cipher.h
index 977b543..5d94cd3 100644
--- a/drivers/staging/ccree/ssi_cipher.h
+++ b/drivers/staging/ccree/ssi_cipher.h
@@ -40,7 +40,7 @@
 
 struct blkcipher_req_ctx {
 	struct async_gen_req_ctx gen_ctx;
-	enum ssi_req_dma_buf_type dma_buf_type;
+	enum cc_req_dma_buf_type dma_buf_type;
 	u32 in_nents;
 	u32 in_mlli_nents;
 	u32 out_nents;
@@ -51,9 +51,9 @@ struct blkcipher_req_ctx {
 	struct mlli_params mlli_params;
 };
 
-int cc_cipher_alloc(struct ssi_drvdata *drvdata);
+int cc_cipher_alloc(struct cc_drvdata *drvdata);
 
-int cc_cipher_free(struct ssi_drvdata *drvdata);
+int cc_cipher_free(struct cc_drvdata *drvdata);
 
 #ifndef CRYPTO_ALG_BULK_MASK
 
diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 078d146..3f02ceb 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -89,7 +89,7 @@ void dump_byte_array(const char *name, const u8 *buf, size_t len)
 
 static irqreturn_t cc_isr(int irq, void *dev_id)
 {
-	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)dev_id;
+	struct cc_drvdata *drvdata = (struct cc_drvdata *)dev_id;
 	struct device *dev = drvdata_to_dev(drvdata);
 	u32 irr;
 	u32 imr;
@@ -150,7 +150,7 @@ static irqreturn_t cc_isr(int irq, void *dev_id)
 	return IRQ_HANDLED;
 }
 
-int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
+int init_cc_regs(struct cc_drvdata *drvdata, bool is_probe)
 {
 	unsigned int val, cache_params;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -202,7 +202,7 @@ int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe)
 static int init_cc_resources(struct platform_device *plat_dev)
 {
 	struct resource *req_mem_cc_regs = NULL;
-	struct ssi_drvdata *new_drvdata;
+	struct cc_drvdata *new_drvdata;
 	struct device *dev = &plat_dev->dev;
 	struct device_node *np = dev->of_node;
 	u32 signature_val;
@@ -405,7 +405,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	return rc;
 }
 
-void fini_cc_regs(struct ssi_drvdata *drvdata)
+void fini_cc_regs(struct cc_drvdata *drvdata)
 {
 	/* Mask all interrupts */
 	cc_iowrite(drvdata, CC_REG(HOST_IMR), 0xFFFFFFFF);
@@ -413,8 +413,8 @@ void fini_cc_regs(struct ssi_drvdata *drvdata)
 
 static void cleanup_cc_resources(struct platform_device *plat_dev)
 {
-	struct ssi_drvdata *drvdata =
-		(struct ssi_drvdata *)platform_get_drvdata(plat_dev);
+	struct cc_drvdata *drvdata =
+		(struct cc_drvdata *)platform_get_drvdata(plat_dev);
 
 	cc_aead_free(drvdata);
 	cc_hash_free(drvdata);
@@ -432,7 +432,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	cc_clk_off(drvdata);
 }
 
-int cc_clk_on(struct ssi_drvdata *drvdata)
+int cc_clk_on(struct cc_drvdata *drvdata)
 {
 	struct clk *clk = drvdata->clk;
 	int rc;
@@ -448,7 +448,7 @@ int cc_clk_on(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-void cc_clk_off(struct ssi_drvdata *drvdata)
+void cc_clk_off(struct cc_drvdata *drvdata)
 {
 	struct clk *clk = drvdata->clk;
 
diff --git a/drivers/staging/ccree/ssi_driver.h b/drivers/staging/ccree/ssi_driver.h
index 4d94a06..35e1b72 100644
--- a/drivers/staging/ccree/ssi_driver.h
+++ b/drivers/staging/ccree/ssi_driver.h
@@ -89,7 +89,7 @@
  */
 
 #define CC_MAX_IVGEN_DMA_ADDRESSES	3
-struct ssi_crypto_req {
+struct cc_crypto_req {
 	void (*user_cb)(struct device *dev, void *req);
 	void *user_arg;
 	dma_addr_t ivgen_dma_addr[CC_MAX_IVGEN_DMA_ADDRESSES];
@@ -105,20 +105,20 @@ struct ssi_crypto_req {
 };
 
 /**
- * struct ssi_drvdata - driver private data context
+ * struct cc_drvdata - driver private data context
  * @cc_base:	virt address of the CC registers
  * @irq:	device IRQ number
  * @irq_mask:	Interrupt mask shadow (1 for masked interrupts)
  * @fw_ver:	SeP loaded firmware version
  */
-struct ssi_drvdata {
+struct cc_drvdata {
 	void __iomem *cc_base;
 	int irq;
 	u32 irq_mask;
 	u32 fw_ver;
 	struct completion hw_queue_avail; /* wait for HW queue availability */
 	struct platform_device *plat_dev;
-	ssi_sram_addr_t mlli_sram_addr;
+	cc_sram_addr_t mlli_sram_addr;
 	void *buff_mgr_handle;
 	void *hash_handle;
 	void *aead_handle;
@@ -131,17 +131,17 @@ struct ssi_drvdata {
 	bool coherent;
 };
 
-struct ssi_crypto_alg {
+struct cc_crypto_alg {
 	struct list_head entry;
 	int cipher_mode;
 	int flow_mode; /* Note: currently, refers to the cipher mode only. */
 	int auth_mode;
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 	struct crypto_alg crypto_alg;
 	struct aead_alg aead_alg;
 };
 
-struct ssi_alg_template {
+struct cc_alg_template {
 	char name[CRYPTO_MAX_ALG_NAME];
 	char driver_name[CRYPTO_MAX_ALG_NAME];
 	unsigned int blocksize;
@@ -156,7 +156,7 @@ struct ssi_alg_template {
 	int cipher_mode;
 	int flow_mode; /* Note: currently, refers to the cipher mode only. */
 	int auth_mode;
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 };
 
 struct async_gen_req_ctx {
@@ -164,7 +164,7 @@ struct async_gen_req_ctx {
 	enum drv_crypto_direction op_type;
 };
 
-static inline struct device *drvdata_to_dev(struct ssi_drvdata *drvdata)
+static inline struct device *drvdata_to_dev(struct cc_drvdata *drvdata)
 {
 	return &drvdata->plat_dev->dev;
 }
@@ -177,17 +177,17 @@ static inline void dump_byte_array(const char *name, const u8 *the_array,
 				   unsigned long size) {};
 #endif
 
-int init_cc_regs(struct ssi_drvdata *drvdata, bool is_probe);
-void fini_cc_regs(struct ssi_drvdata *drvdata);
-int cc_clk_on(struct ssi_drvdata *drvdata);
-void cc_clk_off(struct ssi_drvdata *drvdata);
+int init_cc_regs(struct cc_drvdata *drvdata, bool is_probe);
+void fini_cc_regs(struct cc_drvdata *drvdata);
+int cc_clk_on(struct cc_drvdata *drvdata);
+void cc_clk_off(struct cc_drvdata *drvdata);
 
-static inline void cc_iowrite(struct ssi_drvdata *drvdata, u32 reg, u32 val)
+static inline void cc_iowrite(struct cc_drvdata *drvdata, u32 reg, u32 val)
 {
 	iowrite32(val, (drvdata->cc_base + reg));
 }
 
-static inline u32 cc_ioread(struct ssi_drvdata *drvdata, u32 reg)
+static inline u32 cc_ioread(struct cc_drvdata *drvdata, u32 reg)
 {
 	return ioread32(drvdata->cc_base + reg);
 }
diff --git a/drivers/staging/ccree/ssi_fips.c b/drivers/staging/ccree/ssi_fips.c
index 273b414..036215f 100644
--- a/drivers/staging/ccree/ssi_fips.c
+++ b/drivers/staging/ccree/ssi_fips.c
@@ -23,14 +23,14 @@
 
 static void fips_dsr(unsigned long devarg);
 
-struct ssi_fips_handle {
+struct cc_fips_handle {
 	struct tasklet_struct tasklet;
 };
 
 /* The function called once at driver entry point to check
  * whether TEE FIPS error occurred.
  */
-static bool cc_get_tee_fips_status(struct ssi_drvdata *drvdata)
+static bool cc_get_tee_fips_status(struct cc_drvdata *drvdata)
 {
 	u32 reg;
 
@@ -42,7 +42,7 @@ static bool cc_get_tee_fips_status(struct ssi_drvdata *drvdata)
  * This function should push the FIPS REE library status towards the TEE library
  * by writing the error state to HOST_GPR0 register.
  */
-void cc_set_ree_fips_status(struct ssi_drvdata *drvdata, bool status)
+void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool status)
 {
 	int val = CC_FIPS_SYNC_REE_STATUS;
 
@@ -51,9 +51,9 @@ void cc_set_ree_fips_status(struct ssi_drvdata *drvdata, bool status)
 	cc_iowrite(drvdata, CC_REG(HOST_GPR0), val);
 }
 
-void ssi_fips_fini(struct ssi_drvdata *drvdata)
+void ssi_fips_fini(struct cc_drvdata *drvdata)
 {
-	struct ssi_fips_handle *fips_h = drvdata->fips_handle;
+	struct cc_fips_handle *fips_h = drvdata->fips_handle;
 
 	if (!fips_h)
 		return; /* Not allocated */
@@ -65,9 +65,9 @@ void ssi_fips_fini(struct ssi_drvdata *drvdata)
 	drvdata->fips_handle = NULL;
 }
 
-void fips_handler(struct ssi_drvdata *drvdata)
+void fips_handler(struct cc_drvdata *drvdata)
 {
-	struct ssi_fips_handle *fips_handle_ptr =
+	struct cc_fips_handle *fips_handle_ptr =
 		drvdata->fips_handle;
 
 	tasklet_schedule(&fips_handle_ptr->tasklet);
@@ -84,7 +84,7 @@ static inline void tee_fips_error(struct device *dev)
 /* Deferred service handler, run as interrupt-fired tasklet */
 static void fips_dsr(unsigned long devarg)
 {
-	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
+	struct cc_drvdata *drvdata = (struct cc_drvdata *)devarg;
 	struct device *dev = drvdata_to_dev(drvdata);
 	u32 irq, state, val;
 
@@ -105,9 +105,9 @@ static void fips_dsr(unsigned long devarg)
 }
 
 /* The function called once at driver entry point .*/
-int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+int ssi_fips_init(struct cc_drvdata *p_drvdata)
 {
-	struct ssi_fips_handle *fips_h;
+	struct cc_fips_handle *fips_h;
 	struct device *dev = drvdata_to_dev(p_drvdata);
 
 	fips_h = kzalloc(sizeof(*fips_h), GFP_KERNEL);
diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
index 1889c74..5eed9f6 100644
--- a/drivers/staging/ccree/ssi_fips.h
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -27,22 +27,22 @@ enum cc_fips_status {
 	CC_FIPS_SYNC_STATUS_RESERVE32B = S32_MAX
 };
 
-int ssi_fips_init(struct ssi_drvdata *p_drvdata);
-void ssi_fips_fini(struct ssi_drvdata *drvdata);
-void fips_handler(struct ssi_drvdata *drvdata);
-void cc_set_ree_fips_status(struct ssi_drvdata *drvdata, bool ok);
+int ssi_fips_init(struct cc_drvdata *p_drvdata);
+void ssi_fips_fini(struct cc_drvdata *drvdata);
+void fips_handler(struct cc_drvdata *drvdata);
+void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool ok);
 
 #else  /* CONFIG_CRYPTO_FIPS */
 
-static inline int ssi_fips_init(struct ssi_drvdata *p_drvdata)
+static inline int ssi_fips_init(struct cc_drvdata *p_drvdata)
 {
 	return 0;
 }
 
-static inline void ssi_fips_fini(struct ssi_drvdata *drvdata) {}
-static inline void cc_set_ree_fips_status(struct ssi_drvdata *drvdata,
+static inline void ssi_fips_fini(struct cc_drvdata *drvdata) {}
+static inline void cc_set_ree_fips_status(struct cc_drvdata *drvdata,
 					  bool ok) {}
-static inline void fips_handler(struct ssi_drvdata *drvdata) {}
+static inline void fips_handler(struct cc_drvdata *drvdata) {}
 
 #endif /* CONFIG_CRYPTO_FIPS */
 
diff --git a/drivers/staging/ccree/ssi_hash.c b/drivers/staging/ccree/ssi_hash.c
index 5a041bb..e5e71c2 100644
--- a/drivers/staging/ccree/ssi_hash.c
+++ b/drivers/staging/ccree/ssi_hash.c
@@ -35,8 +35,8 @@
 #define CC_MAX_OPAD_KEYS_SIZE CC_MAX_HASH_BLCK_SIZE
 
 struct cc_hash_handle {
-	ssi_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/
-	ssi_sram_addr_t larval_digest_sram_addr;   /* const value in SRAM */
+	cc_sram_addr_t digest_len_sram_addr; /* const value in SRAM*/
+	cc_sram_addr_t larval_digest_sram_addr;   /* const value in SRAM */
 	struct list_head hash_list;
 	struct completion init_comp;
 };
@@ -75,7 +75,7 @@ struct cc_hash_alg {
 	int hash_mode;
 	int hw_mode;
 	int inter_digestsize;
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 	struct ahash_alg ahash_alg;
 };
 
@@ -86,7 +86,7 @@ struct hash_key_req_ctx {
 
 /* hash per-session context */
 struct cc_hash_ctx {
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 	/* holds the origin digest; the digest after "setkey" if HMAC,*
 	 * the initial digest if HASH.
 	 */
@@ -141,9 +141,9 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
 		      struct cc_hash_ctx *ctx)
 {
 	bool is_hmac = ctx->is_hmac;
-	ssi_sram_addr_t larval_digest_addr =
+	cc_sram_addr_t larval_digest_addr =
 		cc_larval_digest_addr(ctx->drvdata, ctx->hash_mode);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc;
 	int rc = -ENOMEM;
 
@@ -244,7 +244,7 @@ static int cc_map_req(struct device *dev, struct ahash_req_ctx *state,
 			      ctx->inter_digestsize, NS_BIT, 0);
 		set_flow_mode(&desc, BYPASS);
 
-		rc = send_request(ctx->drvdata, &ssi_req, &desc, 1, 0);
+		rc = send_request(ctx->drvdata, &cc_req, &desc, 1, 0);
 		if (rc) {
 			dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 			goto fail4;
@@ -373,9 +373,9 @@ static void cc_unmap_result(struct device *dev, struct ahash_req_ctx *state,
 	state->digest_result_dma_addr = 0;
 }
 
-static void cc_update_complete(struct device *dev, void *ssi_req)
+static void cc_update_complete(struct device *dev, void *cc_req)
 {
-	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_request *req = (struct ahash_request *)cc_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 
 	dev_dbg(dev, "req=%pK\n", req);
@@ -384,9 +384,9 @@ static void cc_update_complete(struct device *dev, void *ssi_req)
 	req->base.complete(&req->base, 0);
 }
 
-static void cc_digest_complete(struct device *dev, void *ssi_req)
+static void cc_digest_complete(struct device *dev, void *cc_req)
 {
-	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_request *req = (struct ahash_request *)cc_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
@@ -400,9 +400,9 @@ static void cc_digest_complete(struct device *dev, void *ssi_req)
 	req->base.complete(&req->base, 0);
 }
 
-static void cc_hash_complete(struct device *dev, void *ssi_req)
+static void cc_hash_complete(struct device *dev, void *cc_req)
 {
-	struct ahash_request *req = (struct ahash_request *)ssi_req;
+	struct ahash_request *req = (struct ahash_request *)cc_req;
 	struct ahash_req_ctx *state = ahash_request_ctx(req);
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
@@ -427,9 +427,9 @@ static int cc_hash_digest(struct ahash_request *req)
 	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
-	ssi_sram_addr_t larval_digest_addr =
+	cc_sram_addr_t larval_digest_addr =
 		cc_larval_digest_addr(ctx->drvdata, ctx->hash_mode);
 	int idx = 0;
 	int rc = 0;
@@ -453,8 +453,8 @@ static int cc_hash_digest(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = cc_digest_complete;
-	ssi_req.user_arg = req;
+	cc_req.user_cb = cc_digest_complete;
+	cc_req.user_arg = req;
 
 	/* If HMAC then load hash IPAD xor key, if HASH then load initial
 	 * digest
@@ -561,7 +561,7 @@ static int cc_hash_digest(struct ahash_request *req)
 	cc_set_endianity(ctx->hash_mode, &desc[idx]);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
@@ -580,7 +580,7 @@ static int cc_hash_update(struct ahash_request *req)
 	struct scatterlist *src = req->src;
 	unsigned int nbytes = req->nbytes;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	u32 idx = 0;
 	int rc;
@@ -607,8 +607,8 @@ static int cc_hash_update(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = cc_update_complete;
-	ssi_req.user_arg = req;
+	cc_req.user_cb = cc_update_complete;
+	cc_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -648,7 +648,7 @@ static int cc_hash_update(struct ahash_request *req)
 	set_setup_mode(&desc[idx], SETUP_WRITE_STATE1);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
@@ -667,7 +667,7 @@ static int cc_hash_finup(struct ahash_request *req)
 	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc;
@@ -685,8 +685,8 @@ static int cc_hash_finup(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = cc_hash_complete;
-	ssi_req.user_arg = req;
+	cc_req.user_cb = cc_hash_complete;
+	cc_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -767,7 +767,7 @@ static int cc_hash_finup(struct ahash_request *req)
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
@@ -787,7 +787,7 @@ static int cc_hash_final(struct ahash_request *req)
 	u8 *result = req->result;
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	bool is_hmac = ctx->is_hmac;
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc;
@@ -806,8 +806,8 @@ static int cc_hash_final(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = cc_hash_complete;
-	ssi_req.user_arg = req;
+	cc_req.user_cb = cc_hash_complete;
+	cc_req.user_arg = req;
 
 	/* Restore hash digest */
 	hw_desc_init(&desc[idx]);
@@ -897,7 +897,7 @@ static int cc_hash_final(struct ahash_request *req)
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, src, true);
@@ -925,13 +925,13 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 			  unsigned int keylen)
 {
 	unsigned int hmac_pad_const[2] = { HMAC_IPAD_CONST, HMAC_OPAD_CONST };
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hash_ctx *ctx = NULL;
 	int blocksize = 0;
 	int digestsize = 0;
 	int i, idx = 0, rc = 0;
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
-	ssi_sram_addr_t larval_addr;
+	cc_sram_addr_t larval_addr;
 	struct device *dev;
 
 	ctx = crypto_ahash_ctx(ahash);
@@ -1037,7 +1037,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 		idx++;
 	}
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 0);
 	if (rc) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		goto out;
@@ -1094,7 +1094,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 		idx++;
 	}
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 0);
 
 out:
 	if (rc)
@@ -1112,7 +1112,7 @@ static int cc_hash_setkey(struct crypto_ahash *ahash, const u8 *key,
 static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 			  const u8 *key, unsigned int keylen)
 {
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	int idx = 0, rc = 0;
@@ -1177,7 +1177,7 @@ static int cc_xcbc_setkey(struct crypto_ahash *ahash,
 			       CC_AES_128_BIT_KEY_SIZE, NS_BIT, 0);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 0);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 0);
 
 	if (rc)
 		crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN);
@@ -1300,17 +1300,17 @@ static int cc_cra_init(struct crypto_tfm *tfm)
 		container_of(tfm->__crt_alg, struct hash_alg_common, base);
 	struct ahash_alg *ahash_alg =
 		container_of(hash_alg_common, struct ahash_alg, halg);
-	struct cc_hash_alg *ssi_alg =
+	struct cc_hash_alg *cc_alg =
 			container_of(ahash_alg, struct cc_hash_alg,
 				     ahash_alg);
 
 	crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm),
 				 sizeof(struct ahash_req_ctx));
 
-	ctx->hash_mode = ssi_alg->hash_mode;
-	ctx->hw_mode = ssi_alg->hw_mode;
-	ctx->inter_digestsize = ssi_alg->inter_digestsize;
-	ctx->drvdata = ssi_alg->drvdata;
+	ctx->hash_mode = cc_alg->hash_mode;
+	ctx->hw_mode = cc_alg->hw_mode;
+	ctx->inter_digestsize = cc_alg->inter_digestsize;
+	ctx->drvdata = cc_alg->drvdata;
 
 	return cc_alloc_ctx(ctx);
 }
@@ -1331,7 +1331,7 @@ static int cc_mac_update(struct ahash_request *req)
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	unsigned int block_size = crypto_tfm_alg_blocksize(&tfm->base);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int rc;
 	u32 idx = 0;
@@ -1374,10 +1374,10 @@ static int cc_mac_update(struct ahash_request *req)
 	idx++;
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_update_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_update_complete;
+	cc_req.user_arg = (void *)req;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
@@ -1391,7 +1391,7 @@ static int cc_mac_final(struct ahash_request *req)
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc = 0;
@@ -1424,8 +1424,8 @@ static int cc_mac_final(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_hash_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_hash_complete;
+	cc_req.user_arg = (void *)req;
 
 	if (state->xcbc_count && rem_cnt == 0) {
 		/* Load key for ECB decryption */
@@ -1490,7 +1490,7 @@ static int cc_mac_final(struct ahash_request *req)
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
@@ -1505,7 +1505,7 @@ static int cc_mac_finup(struct ahash_request *req)
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	int idx = 0;
 	int rc = 0;
@@ -1529,8 +1529,8 @@ static int cc_mac_finup(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_hash_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_hash_complete;
+	cc_req.user_arg = (void *)req;
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
 		key_len = CC_AES_128_BIT_KEY_SIZE;
@@ -1562,7 +1562,7 @@ static int cc_mac_finup(struct ahash_request *req)
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
@@ -1578,7 +1578,7 @@ static int cc_mac_digest(struct ahash_request *req)
 	struct cc_hash_ctx *ctx = crypto_ahash_ctx(tfm);
 	struct device *dev = drvdata_to_dev(ctx->drvdata);
 	u32 digestsize = crypto_ahash_digestsize(tfm);
-	struct ssi_crypto_req ssi_req = {};
+	struct cc_crypto_req cc_req = {};
 	struct cc_hw_desc desc[CC_MAX_HASH_SEQ_LEN];
 	u32 key_len;
 	int idx = 0;
@@ -1602,8 +1602,8 @@ static int cc_mac_digest(struct ahash_request *req)
 	}
 
 	/* Setup DX request structure */
-	ssi_req.user_cb = (void *)cc_digest_complete;
-	ssi_req.user_arg = (void *)req;
+	cc_req.user_cb = (void *)cc_digest_complete;
+	cc_req.user_arg = (void *)req;
 
 	if (ctx->hw_mode == DRV_CIPHER_XCBC_MAC) {
 		key_len = CC_AES_128_BIT_KEY_SIZE;
@@ -1635,7 +1635,7 @@ static int cc_mac_digest(struct ahash_request *req)
 	set_cipher_mode(&desc[idx], ctx->hw_mode);
 	idx++;
 
-	rc = send_request(ctx->drvdata, &ssi_req, desc, idx, 1);
+	rc = send_request(ctx->drvdata, &cc_req, desc, idx, 1);
 	if (rc != -EINPROGRESS) {
 		dev_err(dev, "send_request() failed (rc=%d)\n", rc);
 		cc_unmap_hash_request(dev, state, req->src, true);
@@ -1757,7 +1757,7 @@ struct cc_hash_template {
 	int hash_mode;
 	int hw_mode;
 	int inter_digestsize;
-	struct ssi_drvdata *drvdata;
+	struct cc_drvdata *drvdata;
 };
 
 #define CC_STATE_SIZE(_x) \
@@ -2005,10 +2005,10 @@ static struct cc_hash_alg *cc_alloc_hash_alg(struct cc_hash_template *template,
 	return t_crypto_alg;
 }
 
-int cc_init_hash_sram(struct ssi_drvdata *drvdata)
+int cc_init_hash_sram(struct cc_drvdata *drvdata)
 {
 	struct cc_hash_handle *hash_handle = drvdata->hash_handle;
-	ssi_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr;
+	cc_sram_addr_t sram_buff_ofs = hash_handle->digest_len_sram_addr;
 	unsigned int larval_seq_len = 0;
 	struct cc_hw_desc larval_seq[CC_DIGEST_SIZE_MAX / sizeof(u32)];
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -2125,10 +2125,10 @@ int cc_init_hash_sram(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-int cc_hash_alloc(struct ssi_drvdata *drvdata)
+int cc_hash_alloc(struct cc_drvdata *drvdata)
 {
 	struct cc_hash_handle *hash_handle;
-	ssi_sram_addr_t sram_buff;
+	cc_sram_addr_t sram_buff;
 	u32 sram_size_to_alloc;
 	struct device *dev = drvdata_to_dev(drvdata);
 	int rc = 0;
@@ -2228,7 +2228,7 @@ int cc_hash_alloc(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-int cc_hash_free(struct ssi_drvdata *drvdata)
+int cc_hash_free(struct cc_drvdata *drvdata)
 {
 	struct cc_hash_alg *t_hash_alg, *hash_n;
 	struct cc_hash_handle *hash_handle = drvdata->hash_handle;
@@ -2390,9 +2390,9 @@ static void cc_set_desc(struct ahash_req_ctx *areq_ctx,
  *
  * \return u32 The address of the initial digest in SRAM
  */
-ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
+cc_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
 {
-	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+	struct cc_drvdata *_drvdata = (struct cc_drvdata *)drvdata;
 	struct cc_hash_handle *hash_handle = _drvdata->hash_handle;
 	struct device *dev = drvdata_to_dev(_drvdata);
 
@@ -2436,12 +2436,12 @@ ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode)
 	return hash_handle->larval_digest_sram_addr;
 }
 
-ssi_sram_addr_t
+cc_sram_addr_t
 cc_digest_len_addr(void *drvdata, u32 mode)
 {
-	struct ssi_drvdata *_drvdata = (struct ssi_drvdata *)drvdata;
+	struct cc_drvdata *_drvdata = (struct cc_drvdata *)drvdata;
 	struct cc_hash_handle *hash_handle = _drvdata->hash_handle;
-	ssi_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr;
+	cc_sram_addr_t digest_len_addr = hash_handle->digest_len_sram_addr;
 
 	switch (mode) {
 	case DRV_HASH_SHA1:
diff --git a/drivers/staging/ccree/ssi_hash.h b/drivers/staging/ccree/ssi_hash.h
index 9d1af96..81f57fc 100644
--- a/drivers/staging/ccree/ssi_hash.h
+++ b/drivers/staging/ccree/ssi_hash.h
@@ -56,7 +56,7 @@ struct ahash_req_ctx {
 	u8 *buff1;
 	u8 *digest_result_buff;
 	struct async_gen_req_ctx gen_ctx;
-	enum ssi_req_dma_buf_type data_dma_buf_type;
+	enum cc_req_dma_buf_type data_dma_buf_type;
 	u8 *digest_buff;
 	u8 *opad_digest_buff;
 	u8 *digest_bytes_len;
@@ -75,9 +75,9 @@ struct ahash_req_ctx {
 	struct mlli_params mlli_params;
 };
 
-int cc_hash_alloc(struct ssi_drvdata *drvdata);
-int cc_init_hash_sram(struct ssi_drvdata *drvdata);
-int cc_hash_free(struct ssi_drvdata *drvdata);
+int cc_hash_alloc(struct cc_drvdata *drvdata);
+int cc_init_hash_sram(struct cc_drvdata *drvdata);
+int cc_hash_free(struct cc_drvdata *drvdata);
 
 /*!
  * Gets the initial digest length
@@ -88,7 +88,7 @@ int cc_hash_free(struct ssi_drvdata *drvdata);
  *
  * \return u32 returns the address of the initial digest length in SRAM
  */
-ssi_sram_addr_t
+cc_sram_addr_t
 cc_digest_len_addr(void *drvdata, u32 mode);
 
 /*!
@@ -101,7 +101,7 @@ cc_digest_len_addr(void *drvdata, u32 mode);
  *
  * \return u32 The address of the initial digest in SRAM
  */
-ssi_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode);
+cc_sram_addr_t cc_larval_digest_addr(void *drvdata, u32 mode);
 
 #endif /*__CC_HASH_H__*/
 
diff --git a/drivers/staging/ccree/ssi_ivgen.c b/drivers/staging/ccree/ssi_ivgen.c
index d362bf6..0303c85 100644
--- a/drivers/staging/ccree/ssi_ivgen.c
+++ b/drivers/staging/ccree/ssi_ivgen.c
@@ -41,9 +41,9 @@
  * @pool_meta_dma: phys. address of the initial enc. key/IV
  */
 struct cc_ivgen_ctx {
-	ssi_sram_addr_t pool;
-	ssi_sram_addr_t ctr_key;
-	ssi_sram_addr_t ctr_iv;
+	cc_sram_addr_t pool;
+	cc_sram_addr_t ctr_key;
+	cc_sram_addr_t ctr_iv;
 	u32 next_iv_ofs;
 	u8 *pool_meta;
 	dma_addr_t pool_meta_dma;
@@ -116,7 +116,7 @@ static int cc_gen_iv_pool(struct cc_ivgen_ctx *ivgen_ctx,
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_init_iv_sram(struct ssi_drvdata *drvdata)
+int cc_init_iv_sram(struct cc_drvdata *drvdata)
 {
 	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	struct cc_hw_desc iv_seq[CC_IVPOOL_SEQ_LEN];
@@ -153,7 +153,7 @@ int cc_init_iv_sram(struct ssi_drvdata *drvdata)
  *
  * \param drvdata
  */
-void cc_ivgen_fini(struct ssi_drvdata *drvdata)
+void cc_ivgen_fini(struct cc_drvdata *drvdata)
 {
 	struct cc_ivgen_ctx *ivgen_ctx = drvdata->ivgen_handle;
 	struct device *device = &drvdata->plat_dev->dev;
@@ -182,7 +182,7 @@ void cc_ivgen_fini(struct ssi_drvdata *drvdata)
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_ivgen_init(struct ssi_drvdata *drvdata)
+int cc_ivgen_init(struct cc_drvdata *drvdata)
 {
 	struct cc_ivgen_ctx *ivgen_ctx;
 	struct device *device = &drvdata->plat_dev->dev;
@@ -234,7 +234,7 @@ int cc_ivgen_init(struct ssi_drvdata *drvdata)
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
+int cc_get_iv(struct cc_drvdata *drvdata, dma_addr_t iv_out_dma[],
 	      unsigned int iv_out_dma_len, unsigned int iv_out_size,
 	      struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len)
 {
diff --git a/drivers/staging/ccree/ssi_ivgen.h b/drivers/staging/ccree/ssi_ivgen.h
index 9890f62..eeca45e3 100644
--- a/drivers/staging/ccree/ssi_ivgen.h
+++ b/drivers/staging/ccree/ssi_ivgen.h
@@ -29,14 +29,14 @@
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_ivgen_init(struct ssi_drvdata *drvdata);
+int cc_ivgen_init(struct cc_drvdata *drvdata);
 
 /*!
  * Free iv-pool and ivgen context.
  *
  * \param drvdata
  */
-void cc_ivgen_fini(struct ssi_drvdata *drvdata);
+void cc_ivgen_fini(struct cc_drvdata *drvdata);
 
 /*!
  * Generates the initial pool in SRAM.
@@ -46,7 +46,7 @@ void cc_ivgen_fini(struct ssi_drvdata *drvdata);
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_init_iv_sram(struct ssi_drvdata *drvdata);
+int cc_init_iv_sram(struct cc_drvdata *drvdata);
 
 /*!
  * Acquires 16 Bytes IV from the iv-pool
@@ -61,7 +61,7 @@ int cc_init_iv_sram(struct ssi_drvdata *drvdata);
  *
  * \return int Zero for success, negative value otherwise.
  */
-int cc_get_iv(struct ssi_drvdata *drvdata, dma_addr_t iv_out_dma[],
+int cc_get_iv(struct cc_drvdata *drvdata, dma_addr_t iv_out_dma[],
 	      unsigned int iv_out_dma_len, unsigned int iv_out_size,
 	      struct cc_hw_desc iv_seq[], unsigned int *iv_seq_len);
 
diff --git a/drivers/staging/ccree/ssi_pm.c b/drivers/staging/ccree/ssi_pm.c
index e387d46..3c4892b 100644
--- a/drivers/staging/ccree/ssi_pm.c
+++ b/drivers/staging/ccree/ssi_pm.c
@@ -36,7 +36,7 @@
 
 int cc_pm_suspend(struct device *dev)
 {
-	struct ssi_drvdata *drvdata = dev_get_drvdata(dev);
+	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 	int rc;
 
 	dev_dbg(dev, "set HOST_POWER_DOWN_EN\n");
@@ -55,7 +55,7 @@ int cc_pm_suspend(struct device *dev)
 int cc_pm_resume(struct device *dev)
 {
 	int rc;
-	struct ssi_drvdata *drvdata = dev_get_drvdata(dev);
+	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 
 	dev_dbg(dev, "unset HOST_POWER_DOWN_EN\n");
 	cc_iowrite(drvdata, CC_REG(HOST_POWER_DOWN_EN), POWER_DOWN_DISABLE);
@@ -88,7 +88,7 @@ int cc_pm_resume(struct device *dev)
 int cc_pm_get(struct device *dev)
 {
 	int rc = 0;
-	struct ssi_drvdata *drvdata = dev_get_drvdata(dev);
+	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 
 	if (cc_req_queue_suspended(drvdata))
 		rc = pm_runtime_get_sync(dev);
@@ -101,7 +101,7 @@ int cc_pm_get(struct device *dev)
 int cc_pm_put_suspend(struct device *dev)
 {
 	int rc = 0;
-	struct ssi_drvdata *drvdata = dev_get_drvdata(dev);
+	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
 
 	if (!cc_req_queue_suspended(drvdata)) {
 		pm_runtime_mark_last_busy(dev);
@@ -116,7 +116,7 @@ int cc_pm_put_suspend(struct device *dev)
 
 #endif
 
-int cc_pm_init(struct ssi_drvdata *drvdata)
+int cc_pm_init(struct cc_drvdata *drvdata)
 {
 	int rc = 0;
 #if defined(CONFIG_PM)
@@ -135,7 +135,7 @@ int cc_pm_init(struct ssi_drvdata *drvdata)
 	return rc;
 }
 
-void cc_pm_fini(struct ssi_drvdata *drvdata)
+void cc_pm_fini(struct cc_drvdata *drvdata)
 {
 #if defined(CONFIG_PM)
 	pm_runtime_disable(drvdata_to_dev(drvdata));
diff --git a/drivers/staging/ccree/ssi_pm.h b/drivers/staging/ccree/ssi_pm.h
index 940ef2d..a5f2b1b 100644
--- a/drivers/staging/ccree/ssi_pm.h
+++ b/drivers/staging/ccree/ssi_pm.h
@@ -25,9 +25,9 @@
 
 #define CC_SUSPEND_TIMEOUT 3000
 
-int cc_pm_init(struct ssi_drvdata *drvdata);
+int cc_pm_init(struct cc_drvdata *drvdata);
 
-void cc_pm_fini(struct ssi_drvdata *drvdata);
+void cc_pm_fini(struct cc_drvdata *drvdata);
 
 #if defined(CONFIG_PM)
 int cc_pm_suspend(struct device *dev);
diff --git a/drivers/staging/ccree/ssi_request_mgr.c b/drivers/staging/ccree/ssi_request_mgr.c
index f1356d1..480e6d3 100644
--- a/drivers/staging/ccree/ssi_request_mgr.c
+++ b/drivers/staging/ccree/ssi_request_mgr.c
@@ -38,7 +38,7 @@ struct cc_req_mgr_handle {
 	unsigned int hw_queue_size; /* HW capability */
 	unsigned int min_free_hw_slots;
 	unsigned int max_used_sw_slots;
-	struct ssi_crypto_req req_queue[MAX_REQUEST_QUEUE_SIZE];
+	struct cc_crypto_req req_queue[MAX_REQUEST_QUEUE_SIZE];
 	u32 req_queue_head;
 	u32 req_queue_tail;
 	u32 axi_completed;
@@ -68,7 +68,7 @@ static void comp_handler(unsigned long devarg);
 static void comp_work_handler(struct work_struct *work);
 #endif
 
-void cc_req_mgr_fini(struct ssi_drvdata *drvdata)
+void cc_req_mgr_fini(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *req_mgr_h = drvdata->request_mgr_handle;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -97,7 +97,7 @@ void cc_req_mgr_fini(struct ssi_drvdata *drvdata)
 	drvdata->request_mgr_handle = NULL;
 }
 
-int cc_req_mgr_init(struct ssi_drvdata *drvdata)
+int cc_req_mgr_init(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *req_mgr_h;
 	struct device *dev = drvdata_to_dev(drvdata);
@@ -201,7 +201,7 @@ static void request_mgr_complete(struct device *dev, void *dx_compl_h)
 	complete(this_compl);
 }
 
-static int cc_queues_status(struct ssi_drvdata *drvdata,
+static int cc_queues_status(struct cc_drvdata *drvdata,
 			    struct cc_req_mgr_handle *req_mgr_h,
 			    unsigned int total_seq_len)
 {
@@ -248,7 +248,7 @@ static int cc_queues_status(struct ssi_drvdata *drvdata,
  * Enqueue caller request to crypto hardware.
  *
  * \param drvdata
- * \param ssi_req The request to enqueue
+ * \param cc_req The request to enqueue
  * \param desc The crypto sequence
  * \param len The crypto sequence length
  * \param is_dout If "true": completion is handled by the caller
@@ -257,7 +257,7 @@ static int cc_queues_status(struct ssi_drvdata *drvdata,
  *
  * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
  */
-int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+int send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
 		 struct cc_hw_desc *desc, unsigned int len, bool is_dout)
 {
 	void __iomem *cc_base = drvdata->cc_base;
@@ -270,7 +270,7 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 	int rc;
 	unsigned int max_required_seq_len =
 		(total_seq_len +
-		 ((ssi_req->ivgen_dma_addr_len == 0) ? 0 :
+		 ((cc_req->ivgen_dma_addr_len == 0) ? 0 :
 		  CC_IVPOOL_SEQ_LEN) + (!is_dout ? 1 : 0));
 
 #if defined(CONFIG_PM)
@@ -314,24 +314,24 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 	 * enabled any DLLI/MLLI DOUT bit in the given sequence
 	 */
 	if (!is_dout) {
-		init_completion(&ssi_req->seq_compl);
-		ssi_req->user_cb = request_mgr_complete;
-		ssi_req->user_arg = &ssi_req->seq_compl;
+		init_completion(&cc_req->seq_compl);
+		cc_req->user_cb = request_mgr_complete;
+		cc_req->user_arg = &cc_req->seq_compl;
 		total_seq_len++;
 	}
 
-	if (ssi_req->ivgen_dma_addr_len > 0) {
+	if (cc_req->ivgen_dma_addr_len > 0) {
 		dev_dbg(dev, "Acquire IV from pool into %d DMA addresses %pad, %pad, %pad, IV-size=%u\n",
-			ssi_req->ivgen_dma_addr_len,
-			&ssi_req->ivgen_dma_addr[0],
-			&ssi_req->ivgen_dma_addr[1],
-			&ssi_req->ivgen_dma_addr[2],
-			ssi_req->ivgen_size);
+			cc_req->ivgen_dma_addr_len,
+			&cc_req->ivgen_dma_addr[0],
+			&cc_req->ivgen_dma_addr[1],
+			&cc_req->ivgen_dma_addr[2],
+			cc_req->ivgen_size);
 
 		/* Acquire IV from pool */
-		rc = cc_get_iv(drvdata, ssi_req->ivgen_dma_addr,
-			       ssi_req->ivgen_dma_addr_len,
-			       ssi_req->ivgen_size,
+		rc = cc_get_iv(drvdata, cc_req->ivgen_dma_addr,
+			       cc_req->ivgen_dma_addr_len,
+			       cc_req->ivgen_size,
 			       iv_seq, &iv_seq_len);
 
 		if (rc) {
@@ -353,7 +353,7 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 		req_mgr_h->max_used_sw_slots = used_sw_slots;
 
 	/* Enqueue request - must be locked with HW lock*/
-	req_mgr_h->req_queue[req_mgr_h->req_queue_head] = *ssi_req;
+	req_mgr_h->req_queue[req_mgr_h->req_queue_head] = *cc_req;
 	req_mgr_h->req_queue_head = (req_mgr_h->req_queue_head + 1) &
 				    (MAX_REQUEST_QUEUE_SIZE - 1);
 	/* TODO: Use circ_buf.h ? */
@@ -393,7 +393,7 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
 		/* Wait upon sequence completion.
 		 *  Return "0" -Operation done successfully.
 		 */
-		wait_for_completion(&ssi_req->seq_compl);
+		wait_for_completion(&cc_req->seq_compl);
 		return 0;
 	}
 	/* Operation still in process */
@@ -411,7 +411,7 @@ int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
  *
  * \return int Returns "0" upon success
  */
-int send_request_init(struct ssi_drvdata *drvdata, struct cc_hw_desc *desc,
+int send_request_init(struct cc_drvdata *drvdata, struct cc_hw_desc *desc,
 		      unsigned int len)
 {
 	void __iomem *cc_base = drvdata->cc_base;
@@ -442,7 +442,7 @@ int send_request_init(struct ssi_drvdata *drvdata, struct cc_hw_desc *desc,
 	return 0;
 }
 
-void complete_request(struct ssi_drvdata *drvdata)
+void complete_request(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
@@ -459,16 +459,16 @@ void complete_request(struct ssi_drvdata *drvdata)
 #ifdef COMP_IN_WQ
 static void comp_work_handler(struct work_struct *work)
 {
-	struct ssi_drvdata *drvdata =
-		container_of(work, struct ssi_drvdata, compwork.work);
+	struct cc_drvdata *drvdata =
+		container_of(work, struct cc_drvdata, compwork.work);
 
 	comp_handler((unsigned long)drvdata);
 }
 #endif
 
-static void proc_completions(struct ssi_drvdata *drvdata)
+static void proc_completions(struct cc_drvdata *drvdata)
 {
-	struct ssi_crypto_req *ssi_req;
+	struct cc_crypto_req *cc_req;
 	struct device *dev = drvdata_to_dev(drvdata);
 	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
@@ -492,7 +492,7 @@ static void proc_completions(struct ssi_drvdata *drvdata)
 			break;
 		}
 
-		ssi_req = &request_mgr_handle->req_queue[*tail];
+		cc_req = &request_mgr_handle->req_queue[*tail];
 
 #ifdef FLUSH_CACHE_ALL
 		flush_cache_all();
@@ -511,8 +511,8 @@ static void proc_completions(struct ssi_drvdata *drvdata)
 		}
 #endif /* COMPLETION_DELAY */
 
-		if (ssi_req->user_cb)
-			ssi_req->user_cb(dev, ssi_req->user_arg);
+		if (cc_req->user_cb)
+			cc_req->user_cb(dev, cc_req->user_arg);
 		*tail = (*tail + 1) & (MAX_REQUEST_QUEUE_SIZE - 1);
 		dev_dbg(dev, "Dequeue request tail=%u\n", *tail);
 		dev_dbg(dev, "Request completed. axi_completed=%d\n",
@@ -526,7 +526,7 @@ static void proc_completions(struct ssi_drvdata *drvdata)
 	}
 }
 
-static inline u32 cc_axi_comp_count(struct ssi_drvdata *drvdata)
+static inline u32 cc_axi_comp_count(struct cc_drvdata *drvdata)
 {
 	return FIELD_GET(AXIM_MON_COMP_VALUE,
 			 cc_ioread(drvdata, CC_REG(AXIM_MON_COMP)));
@@ -535,7 +535,7 @@ static inline u32 cc_axi_comp_count(struct ssi_drvdata *drvdata)
 /* Deferred service handler, run as interrupt-fired tasklet */
 static void comp_handler(unsigned long devarg)
 {
-	struct ssi_drvdata *drvdata = (struct ssi_drvdata *)devarg;
+	struct cc_drvdata *drvdata = (struct cc_drvdata *)devarg;
 	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
 
@@ -584,7 +584,7 @@ static void comp_handler(unsigned long devarg)
  * inside the spin lock protection
  */
 #if defined(CONFIG_PM)
-int cc_resume_req_queue(struct ssi_drvdata *drvdata)
+int cc_resume_req_queue(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *request_mgr_handle =
 		drvdata->request_mgr_handle;
@@ -600,7 +600,7 @@ int cc_resume_req_queue(struct ssi_drvdata *drvdata)
  * suspend the queue configuration. Since it is used for the runtime suspend
  * only verify that the queue can be suspended.
  */
-int cc_suspend_req_queue(struct ssi_drvdata *drvdata)
+int cc_suspend_req_queue(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
@@ -618,7 +618,7 @@ int cc_suspend_req_queue(struct ssi_drvdata *drvdata)
 	return 0;
 }
 
-bool cc_req_queue_suspended(struct ssi_drvdata *drvdata)
+bool cc_req_queue_suspended(struct cc_drvdata *drvdata)
 {
 	struct cc_req_mgr_handle *request_mgr_handle =
 						drvdata->request_mgr_handle;
diff --git a/drivers/staging/ccree/ssi_request_mgr.h b/drivers/staging/ccree/ssi_request_mgr.h
index 91e0d47..eb068bf 100644
--- a/drivers/staging/ccree/ssi_request_mgr.h
+++ b/drivers/staging/ccree/ssi_request_mgr.h
@@ -23,13 +23,13 @@
 
 #include "cc_hw_queue_defs.h"
 
-int cc_req_mgr_init(struct ssi_drvdata *drvdata);
+int cc_req_mgr_init(struct cc_drvdata *drvdata);
 
 /*!
  * Enqueue caller request to crypto hardware.
  *
  * \param drvdata
- * \param ssi_req The request to enqueue
+ * \param cc_req The request to enqueue
  * \param desc The crypto sequence
  * \param len The crypto sequence length
  * \param is_dout If "true": completion is handled by the caller
@@ -38,22 +38,22 @@ int cc_req_mgr_init(struct ssi_drvdata *drvdata);
  *
  * \return int Returns -EINPROGRESS if "is_dout=true"; "0" if "is_dout=false"
  */
-int send_request(struct ssi_drvdata *drvdata, struct ssi_crypto_req *ssi_req,
+int send_request(struct cc_drvdata *drvdata, struct cc_crypto_req *cc_req,
 		 struct cc_hw_desc *desc, unsigned int len, bool is_dout);
 
-int send_request_init(struct ssi_drvdata *drvdata, struct cc_hw_desc *desc,
+int send_request_init(struct cc_drvdata *drvdata, struct cc_hw_desc *desc,
 		      unsigned int len);
 
-void complete_request(struct ssi_drvdata *drvdata);
+void complete_request(struct cc_drvdata *drvdata);
 
-void cc_req_mgr_fini(struct ssi_drvdata *drvdata);
+void cc_req_mgr_fini(struct cc_drvdata *drvdata);
 
 #if defined(CONFIG_PM)
-int cc_resume_req_queue(struct ssi_drvdata *drvdata);
+int cc_resume_req_queue(struct cc_drvdata *drvdata);
 
-int cc_suspend_req_queue(struct ssi_drvdata *drvdata);
+int cc_suspend_req_queue(struct cc_drvdata *drvdata);
 
-bool cc_req_queue_suspended(struct ssi_drvdata *drvdata);
+bool cc_req_queue_suspended(struct cc_drvdata *drvdata);
 #endif
 
 #endif /*__REQUEST_MGR_H__*/
diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
index cbe5e3b..5d83af5 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.c
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -22,7 +22,7 @@
  * @sram_free_offset:   the offset to the non-allocated area
  */
 struct ssi_sram_mgr_ctx {
-	ssi_sram_addr_t sram_free_offset;
+	cc_sram_addr_t sram_free_offset;
 };
 
 /**
@@ -30,7 +30,7 @@ struct ssi_sram_mgr_ctx {
  *
  * @drvdata: Associated device driver context
  */
-void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata)
+void ssi_sram_mgr_fini(struct cc_drvdata *drvdata)
 {
 	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
 
@@ -48,7 +48,7 @@ void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata)
  *
  * @drvdata: Associated device driver context
  */
-int ssi_sram_mgr_init(struct ssi_drvdata *drvdata)
+int ssi_sram_mgr_init(struct cc_drvdata *drvdata)
 {
 	/* Allocate "this" context */
 	drvdata->sram_mgr_handle = kzalloc(sizeof(*drvdata->sram_mgr_handle),
@@ -69,11 +69,11 @@ int ssi_sram_mgr_init(struct ssi_drvdata *drvdata)
  * \param drvdata
  * \param size The requested bytes to allocate
  */
-ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size)
+cc_sram_addr_t cc_sram_alloc(struct cc_drvdata *drvdata, u32 size)
 {
 	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
 	struct device *dev = drvdata_to_dev(drvdata);
-	ssi_sram_addr_t p;
+	cc_sram_addr_t p;
 
 	if ((size & 0x3)) {
 		dev_err(dev, "Requested buffer size (%u) is not multiple of 4",
@@ -103,7 +103,7 @@ ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size)
  * @seq:	  A pointer to the given IN/OUT descriptor sequence
  * @seq_len:	  A pointer to the given IN/OUT sequence length
  */
-void cc_set_sram_desc(const u32 *src, ssi_sram_addr_t dst,
+void cc_set_sram_desc(const u32 *src, cc_sram_addr_t dst,
 		      unsigned int nelement, struct cc_hw_desc *seq,
 		      unsigned int *seq_len)
 {
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
index fdd325b..52f5288 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.h
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -21,15 +21,15 @@
 #define CC_CC_SRAM_SIZE 4096
 #endif
 
-struct ssi_drvdata;
+struct cc_drvdata;
 
 /**
  * Address (offset) within CC internal SRAM
  */
 
-typedef u64 ssi_sram_addr_t;
+typedef u64 cc_sram_addr_t;
 
-#define NULL_SRAM_ADDR ((ssi_sram_addr_t)-1)
+#define NULL_SRAM_ADDR ((cc_sram_addr_t)-1)
 
 /*!
  * Initializes SRAM pool.
@@ -40,14 +40,14 @@ typedef u64 ssi_sram_addr_t;
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_sram_mgr_init(struct ssi_drvdata *drvdata);
+int ssi_sram_mgr_init(struct cc_drvdata *drvdata);
 
 /*!
  * Uninits SRAM pool.
  *
  * \param drvdata
  */
-void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata);
+void ssi_sram_mgr_fini(struct cc_drvdata *drvdata);
 
 /*!
  * Allocated buffer from SRAM pool.
@@ -58,7 +58,7 @@ void ssi_sram_mgr_fini(struct ssi_drvdata *drvdata);
  * \param drvdata
  * \param size The requested bytes to allocate
  */
-ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size);
+cc_sram_addr_t cc_sram_alloc(struct cc_drvdata *drvdata, u32 size);
 
 /**
  * cc_set_sram_desc() - Create const descriptors sequence to
@@ -71,7 +71,7 @@ ssi_sram_addr_t cc_sram_alloc(struct ssi_drvdata *drvdata, u32 size);
  * @seq:	  A pointer to the given IN/OUT descriptor sequence
  * @seq_len:	  A pointer to the given IN/OUT sequence length
  */
-void cc_set_sram_desc(const u32 *src, ssi_sram_addr_t dst,
+void cc_set_sram_desc(const u32 *src, cc_sram_addr_t dst,
 		      unsigned int nelement, struct cc_hw_desc *seq,
 		      unsigned int *seq_len);
 
diff --git a/drivers/staging/ccree/ssi_sysfs.c b/drivers/staging/ccree/ssi_sysfs.c
index 6b11a72..b2e58f5 100644
--- a/drivers/staging/ccree/ssi_sysfs.c
+++ b/drivers/staging/ccree/ssi_sysfs.c
@@ -22,12 +22,12 @@
 
 #ifdef ENABLE_CC_SYSFS
 
-static struct ssi_drvdata *sys_get_drvdata(void);
+static struct cc_drvdata *sys_get_drvdata(void);
 
 static ssize_t ssi_sys_regdump_show(struct kobject *kobj,
 				    struct kobj_attribute *attr, char *buf)
 {
-	struct ssi_drvdata *drvdata = sys_get_drvdata();
+	struct cc_drvdata *drvdata = sys_get_drvdata();
 	u32 register_value;
 	int offset = 0;
 
@@ -86,7 +86,7 @@ struct sys_dir {
 	struct attribute_group sys_dir_attr_group;
 	struct attribute **sys_dir_attr_list;
 	u32 num_of_attrs;
-	struct ssi_drvdata *drvdata; /* Associated driver context */
+	struct cc_drvdata *drvdata; /* Associated driver context */
 };
 
 /* top level directory structures */
@@ -105,7 +105,7 @@ static struct kobj_attribute ssi_sys_top_level_attrs[] = {
 
 };
 
-static struct ssi_drvdata *sys_get_drvdata(void)
+static struct cc_drvdata *sys_get_drvdata(void)
 {
 	/* TODO: supporting multiple SeP devices would require avoiding
 	 * global "top_dir" and finding associated "top_dir" by traversing
@@ -114,7 +114,7 @@ static struct ssi_drvdata *sys_get_drvdata(void)
 	return sys_top_dir.drvdata;
 }
 
-static int sys_init_dir(struct sys_dir *sys_dir, struct ssi_drvdata *drvdata,
+static int sys_init_dir(struct sys_dir *sys_dir, struct cc_drvdata *drvdata,
 			struct kobject *parent_dir_kobj, const char *dir_name,
 			struct kobj_attribute *attrs, u32 num_of_attrs)
 {
@@ -169,7 +169,7 @@ static void sys_free_dir(struct sys_dir *sys_dir)
 	}
 }
 
-int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata)
+int ssi_sysfs_init(struct kobject *sys_dev_obj, struct cc_drvdata *drvdata)
 {
 	int retval;
 	struct device *dev = drvdata_to_dev(drvdata);
diff --git a/drivers/staging/ccree/ssi_sysfs.h b/drivers/staging/ccree/ssi_sysfs.h
index de68bc6..9833d18 100644
--- a/drivers/staging/ccree/ssi_sysfs.h
+++ b/drivers/staging/ccree/ssi_sysfs.h
@@ -24,9 +24,9 @@
 #include <asm/timex.h>
 
 /* forward declaration */
-struct ssi_drvdata;
+struct cc_drvdata;
 
-int ssi_sysfs_init(struct kobject *sys_dev_obj, struct ssi_drvdata *drvdata);
+int ssi_sysfs_init(struct kobject *sys_dev_obj, struct cc_drvdata *drvdata);
 void ssi_sysfs_fini(void);
 
 #endif /*__CC_SYSFS_H__*/
-- 
2.7.4

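The `cc_axi_comp_count()` helper in the request-manager diff above uses the kernel's `FIELD_GET()` to pull a completion counter out of the `AXIM_MON_COMP` register. The extraction idiom can be sketched in plain C; note the mask and bit positions below are hypothetical stand-ins, not the real CryptoCell register layout:

```c
#include <assert.h>

/* Suppose, for illustration, the completion count occupies bits [21:16]. */
#define COMP_SHIFT 16
#define COMP_MASK  (0x3fu << COMP_SHIFT)

static unsigned int comp_count(unsigned int reg)
{
	/* FIELD_GET(COMP_MASK, reg) expands to roughly this:
	 * mask off the field, then shift it down to bit 0.
	 */
	return (reg & COMP_MASK) >> COMP_SHIFT;
}
```

The kernel macro computes the shift from the mask at compile time, so callers only name the mask once instead of keeping a shift constant in sync by hand.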
^ permalink raw reply related	[flat|nested] 25+ messages in thread
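The request manager in the diff above advances `req_queue_head`/`req_queue_tail` with `(idx + 1) & (MAX_REQUEST_QUEUE_SIZE - 1)` (and carries a TODO about switching to `circ_buf.h`). That masking idiom only works when the queue size is a power of two. A minimal userspace sketch of the same ring-buffer pattern, with hypothetical names not taken from the driver:

```c
#include <assert.h>

#define QUEUE_SIZE 16u /* must be a power of two for the masking idiom */

struct req_queue {
	int slots[QUEUE_SIZE];
	unsigned int head; /* next free slot */
	unsigned int tail; /* oldest queued entry */
};

static unsigned int queue_used(const struct req_queue *q)
{
	return (q->head - q->tail) & (QUEUE_SIZE - 1);
}

static int enqueue(struct req_queue *q, int val)
{
	if (queue_used(q) == QUEUE_SIZE - 1)
		return -1; /* full: one slot is kept empty to tell full from empty */
	q->slots[q->head] = val;
	q->head = (q->head + 1) & (QUEUE_SIZE - 1);
	return 0;
}

static int dequeue(struct req_queue *q, int *val)
{
	if (q->head == q->tail)
		return -1; /* empty */
	*val = q->slots[q->tail];
	q->tail = (q->tail + 1) & (QUEUE_SIZE - 1);
	return 0;
}
```

With a power-of-two size the mask is equivalent to `% QUEUE_SIZE` but avoids a division; `circ_buf.h` wraps the same arithmetic behind helper macros.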

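`cc_sram_alloc()` in the SRAM-manager diff above rejects any request with `size & 0x3` set, i.e. anything that is not a multiple of 4 bytes, and otherwise hands out offsets from a simple grow-only pool. A standalone sketch of that bump-allocator pattern, under the assumption (hypothetical names, userspace only) that the pool never frees individual allocations:

```c
#include <assert.h>

#define SRAM_SIZE 4096u /* mirrors CC_CC_SRAM_SIZE in the driver */

static unsigned int sram_free_offset; /* analogous to the ctx's sram_free_offset */

/* Returns the allocated offset, or -1 on error
 * (the driver returns NULL_SRAM_ADDR, defined as (cc_sram_addr_t)-1).
 */
static long sram_alloc(unsigned int size)
{
	long p;

	if (size & 0x3)				/* not a multiple of 4 */
		return -1;
	if (size > SRAM_SIZE - sram_free_offset)	/* pool exhausted */
		return -1;
	p = sram_free_offset;
	sram_free_offset += size;	/* bump allocation; no per-buffer free */
	return p;
}
```

Because the pool only grows, allocation is a pointer bump and the whole pool is reclaimed at once on teardown, which is why the manager context needs nothing beyond a single free-offset field.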
* [PATCH 21/24] staging: ccree: fix buf mgr naming convention
  2017-12-12 14:52 [PATCH 00/24] staging: ccree: cleanups and simplification Gilad Ben-Yossef
                   ` (19 preceding siblings ...)
  2017-12-12 14:53 ` [PATCH 20/24] staging: ccree: rename vars/structs/enums from ssi_ to cc_ Gilad Ben-Yossef
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
  2017-12-12 14:53 ` [PATCH 22/24] staging: ccree: fix sram " Gilad Ben-Yossef
                   ` (2 subsequent siblings)
  23 siblings, 0 replies; 25+ messages in thread
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The buffer manager files were using a function naming convention that was
inconsistent (ssi vs. cc) and often too long.

Make the code more readable by switching to a simpler, consistent naming
convention.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_buffer_mgr.c | 28 ++++++++++++----------------
 1 file changed, 12 insertions(+), 16 deletions(-)

diff --git a/drivers/staging/ccree/ssi_buffer_mgr.c b/drivers/staging/ccree/ssi_buffer_mgr.c
index 8649bcb..6846d93 100644
--- a/drivers/staging/ccree/ssi_buffer_mgr.c
+++ b/drivers/staging/ccree/ssi_buffer_mgr.c
@@ -413,11 +413,9 @@ static int cc_map_sg(struct device *dev, struct scatterlist *sg,
 }
 
 static int
-ssi_aead_handle_config_buf(struct device *dev,
-			   struct aead_req_ctx *areq_ctx,
-			   u8 *config_data,
-			   struct buffer_array *sg_data,
-			   unsigned int assoclen)
+cc_set_aead_conf_buf(struct device *dev, struct aead_req_ctx *areq_ctx,
+		     u8 *config_data, struct buffer_array *sg_data,
+		     unsigned int assoclen)
 {
 	dev_dbg(dev, " handle additional data config set to DLLI\n");
 	/* create sg for the current buffer */
@@ -441,10 +439,9 @@ ssi_aead_handle_config_buf(struct device *dev,
 	return 0;
 }
 
-static int ssi_ahash_handle_curr_buf(struct device *dev,
-				     struct ahash_req_ctx *areq_ctx,
-				     u8 *curr_buff, u32 curr_buff_cnt,
-				     struct buffer_array *sg_data)
+static int cc_set_hash_buf(struct device *dev, struct ahash_req_ctx *areq_ctx,
+			   u8 *curr_buff, u32 curr_buff_cnt,
+			   struct buffer_array *sg_data)
 {
 	dev_dbg(dev, " handle curr buff %x set to   DLLI\n", curr_buff_cnt);
 	/* create sg for the current buffer */
@@ -1259,9 +1256,8 @@ int cc_map_aead_request(struct cc_drvdata *drvdata, struct aead_request *req)
 		}
 		areq_ctx->ccm_iv0_dma_addr = dma_addr;
 
-		if (ssi_aead_handle_config_buf(dev, areq_ctx,
-					       areq_ctx->ccm_config, &sg_data,
-					       req->assoclen)) {
+		if (cc_set_aead_conf_buf(dev, areq_ctx, areq_ctx->ccm_config,
+					 &sg_data, req->assoclen)) {
 			rc = -ENOMEM;
 			goto aead_map_failure;
 		}
@@ -1432,8 +1428,8 @@ int cc_map_hash_request_final(struct cc_drvdata *drvdata, void *ctx,
 	/*TODO: copy data in case that buffer is enough for operation */
 	/* map the previous buffer */
 	if (*curr_buff_cnt) {
-		if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
-					      *curr_buff_cnt, &sg_data)) {
+		if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
+				    &sg_data)) {
 			return -ENOMEM;
 		}
 	}
@@ -1545,8 +1541,8 @@ int cc_map_hash_request_update(struct cc_drvdata *drvdata, void *ctx,
 	}
 
 	if (*curr_buff_cnt) {
-		if (ssi_ahash_handle_curr_buf(dev, areq_ctx, curr_buff,
-					      *curr_buff_cnt, &sg_data)) {
+		if (cc_set_hash_buf(dev, areq_ctx, curr_buff, *curr_buff_cnt,
+				    &sg_data)) {
 			return -ENOMEM;
 		}
 		/* change the buffer index for next operation */
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH 22/24] staging: ccree: fix sram mgr naming convention
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The SRAM manager files used an inconsistent naming convention (a mix of
ssi_ and cc_ prefixes) and often overly long names.

Make the code more readable by switching to a single, shorter and
consistent naming convention.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_driver.c   |  8 ++++----
 drivers/staging/ccree/ssi_sram_mgr.c | 18 +++++++++---------
 drivers/staging/ccree/ssi_sram_mgr.h |  4 ++--
 3 files changed, 15 insertions(+), 15 deletions(-)

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 3f02ceb..6e7a396 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -312,9 +312,9 @@ static int init_cc_resources(struct platform_device *plat_dev)
 		dev_err(dev, "CC_FIPS_INIT failed 0x%x\n", rc);
 		goto post_sysfs_err;
 	}
-	rc = ssi_sram_mgr_init(new_drvdata);
+	rc = cc_sram_mgr_init(new_drvdata);
 	if (rc) {
-		dev_err(dev, "ssi_sram_mgr_init failed\n");
+		dev_err(dev, "cc_sram_mgr_init failed\n");
 		goto post_fips_init_err;
 	}
 
@@ -391,7 +391,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 post_req_mgr_err:
 	cc_req_mgr_fini(new_drvdata);
 post_sram_mgr_err:
-	ssi_sram_mgr_fini(new_drvdata);
+	cc_sram_mgr_fini(new_drvdata);
 post_fips_init_err:
 	ssi_fips_fini(new_drvdata);
 post_sysfs_err:
@@ -423,7 +423,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	cc_pm_fini(drvdata);
 	cc_buffer_mgr_fini(drvdata);
 	cc_req_mgr_fini(drvdata);
-	ssi_sram_mgr_fini(drvdata);
+	cc_sram_mgr_fini(drvdata);
 	ssi_fips_fini(drvdata);
 #ifdef ENABLE_CC_SYSFS
 	ssi_sysfs_fini();
diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
index 5d83af5..b664e9b 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.c
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -18,37 +18,37 @@
 #include "ssi_sram_mgr.h"
 
 /**
- * struct ssi_sram_mgr_ctx -Internal RAM context manager
+ * struct cc_sram_ctx -Internal RAM context manager
  * @sram_free_offset:   the offset to the non-allocated area
  */
-struct ssi_sram_mgr_ctx {
+struct cc_sram_ctx {
 	cc_sram_addr_t sram_free_offset;
 };
 
 /**
- * ssi_sram_mgr_fini() - Cleanup SRAM pool.
+ * cc_sram_mgr_fini() - Cleanup SRAM pool.
  *
  * @drvdata: Associated device driver context
  */
-void ssi_sram_mgr_fini(struct cc_drvdata *drvdata)
+void cc_sram_mgr_fini(struct cc_drvdata *drvdata)
 {
-	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+	struct cc_sram_ctx *smgr_ctx = drvdata->sram_mgr_handle;
 
 	/* Free "this" context */
 	if (smgr_ctx) {
-		memset(smgr_ctx, 0, sizeof(struct ssi_sram_mgr_ctx));
+		memset(smgr_ctx, 0, sizeof(struct cc_sram_ctx));
 		kfree(smgr_ctx);
 	}
 }
 
 /**
- * ssi_sram_mgr_init() - Initializes SRAM pool.
+ * cc_sram_mgr_init() - Initializes SRAM pool.
  *      The pool starts right at the beginning of SRAM.
  *      Returns zero for success, negative value otherwise.
  *
  * @drvdata: Associated device driver context
  */
-int ssi_sram_mgr_init(struct cc_drvdata *drvdata)
+int cc_sram_mgr_init(struct cc_drvdata *drvdata)
 {
 	/* Allocate "this" context */
 	drvdata->sram_mgr_handle = kzalloc(sizeof(*drvdata->sram_mgr_handle),
@@ -71,7 +71,7 @@ int ssi_sram_mgr_init(struct cc_drvdata *drvdata)
  */
 cc_sram_addr_t cc_sram_alloc(struct cc_drvdata *drvdata, u32 size)
 {
-	struct ssi_sram_mgr_ctx *smgr_ctx = drvdata->sram_mgr_handle;
+	struct cc_sram_ctx *smgr_ctx = drvdata->sram_mgr_handle;
 	struct device *dev = drvdata_to_dev(drvdata);
 	cc_sram_addr_t p;
 
diff --git a/drivers/staging/ccree/ssi_sram_mgr.h b/drivers/staging/ccree/ssi_sram_mgr.h
index 52f5288..181968a 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.h
+++ b/drivers/staging/ccree/ssi_sram_mgr.h
@@ -40,14 +40,14 @@ typedef u64 cc_sram_addr_t;
  *
  * \return int Zero for success, negative value otherwise.
  */
-int ssi_sram_mgr_init(struct cc_drvdata *drvdata);
+int cc_sram_mgr_init(struct cc_drvdata *drvdata);
 
 /*!
  * Uninits SRAM pool.
  *
  * \param drvdata
  */
-void ssi_sram_mgr_fini(struct cc_drvdata *drvdata);
+void cc_sram_mgr_fini(struct cc_drvdata *drvdata);
 
 /*!
  * Allocated buffer from SRAM pool.
-- 
2.7.4

* [PATCH 23/24] staging: ccree: simplify freeing SRAM memory address
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The code freeing the SRAM manager context was zeroing the context memory
on release although it holds nothing secret, and guarding the kfree()
with a redundant NULL check. Simplify the code by calling kfree()
directly; kfree() is already a no-op for a NULL pointer.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_sram_mgr.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/staging/ccree/ssi_sram_mgr.c b/drivers/staging/ccree/ssi_sram_mgr.c
index b664e9b..f72d64a 100644
--- a/drivers/staging/ccree/ssi_sram_mgr.c
+++ b/drivers/staging/ccree/ssi_sram_mgr.c
@@ -32,13 +32,8 @@ struct cc_sram_ctx {
  */
 void cc_sram_mgr_fini(struct cc_drvdata *drvdata)
 {
-	struct cc_sram_ctx *smgr_ctx = drvdata->sram_mgr_handle;
-
 	/* Free "this" context */
-	if (smgr_ctx) {
-		memset(smgr_ctx, 0, sizeof(struct cc_sram_ctx));
-		kfree(smgr_ctx);
-	}
+	kfree(drvdata->sram_mgr_handle);
 }
 
 /**
-- 
2.7.4

* [PATCH 24/24] staging: ccree: fix FIPS mgr naming convention
@ 2017-12-12 14:53 ` Gilad Ben-Yossef
From: Gilad Ben-Yossef @ 2017-12-12 14:53 UTC (permalink / raw)
  To: Greg Kroah-Hartman
  Cc: Ofir Drang, linux-crypto, driverdev-devel, devel, linux-kernel

The FIPS manager files used an inconsistent naming convention (a mix of
ssi_ and cc_ prefixes) and often overly long names.

Make the code more readable by switching to a single, shorter and
consistent naming convention.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
---
 drivers/staging/ccree/ssi_driver.c | 6 +++---
 drivers/staging/ccree/ssi_fips.c   | 4 ++--
 drivers/staging/ccree/ssi_fips.h   | 8 ++++----
 3 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/staging/ccree/ssi_driver.c b/drivers/staging/ccree/ssi_driver.c
index 6e7a396..28cfbb4 100644
--- a/drivers/staging/ccree/ssi_driver.c
+++ b/drivers/staging/ccree/ssi_driver.c
@@ -307,7 +307,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 	}
 #endif
 
-	rc = ssi_fips_init(new_drvdata);
+	rc = cc_fips_init(new_drvdata);
 	if (rc) {
 		dev_err(dev, "CC_FIPS_INIT failed 0x%x\n", rc);
 		goto post_sysfs_err;
@@ -393,7 +393,7 @@ static int init_cc_resources(struct platform_device *plat_dev)
 post_sram_mgr_err:
 	cc_sram_mgr_fini(new_drvdata);
 post_fips_init_err:
-	ssi_fips_fini(new_drvdata);
+	cc_fips_fini(new_drvdata);
 post_sysfs_err:
 #ifdef ENABLE_CC_SYSFS
 	ssi_sysfs_fini();
@@ -424,7 +424,7 @@ static void cleanup_cc_resources(struct platform_device *plat_dev)
 	cc_buffer_mgr_fini(drvdata);
 	cc_req_mgr_fini(drvdata);
 	cc_sram_mgr_fini(drvdata);
-	ssi_fips_fini(drvdata);
+	cc_fips_fini(drvdata);
 #ifdef ENABLE_CC_SYSFS
 	ssi_sysfs_fini();
 #endif
diff --git a/drivers/staging/ccree/ssi_fips.c b/drivers/staging/ccree/ssi_fips.c
index 036215f..a1d7782 100644
--- a/drivers/staging/ccree/ssi_fips.c
+++ b/drivers/staging/ccree/ssi_fips.c
@@ -51,7 +51,7 @@ void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool status)
 	cc_iowrite(drvdata, CC_REG(HOST_GPR0), val);
 }
 
-void ssi_fips_fini(struct cc_drvdata *drvdata)
+void cc_fips_fini(struct cc_drvdata *drvdata)
 {
 	struct cc_fips_handle *fips_h = drvdata->fips_handle;
 
@@ -105,7 +105,7 @@ static void fips_dsr(unsigned long devarg)
 }
 
 /* The function called once at driver entry point .*/
-int ssi_fips_init(struct cc_drvdata *p_drvdata)
+int cc_fips_init(struct cc_drvdata *p_drvdata)
 {
 	struct cc_fips_handle *fips_h;
 	struct device *dev = drvdata_to_dev(p_drvdata);
diff --git a/drivers/staging/ccree/ssi_fips.h b/drivers/staging/ccree/ssi_fips.h
index 5eed9f6..8321dde 100644
--- a/drivers/staging/ccree/ssi_fips.h
+++ b/drivers/staging/ccree/ssi_fips.h
@@ -27,19 +27,19 @@ enum cc_fips_status {
 	CC_FIPS_SYNC_STATUS_RESERVE32B = S32_MAX
 };
 
-int ssi_fips_init(struct cc_drvdata *p_drvdata);
-void ssi_fips_fini(struct cc_drvdata *drvdata);
+int cc_fips_init(struct cc_drvdata *p_drvdata);
+void cc_fips_fini(struct cc_drvdata *drvdata);
 void fips_handler(struct cc_drvdata *drvdata);
 void cc_set_ree_fips_status(struct cc_drvdata *drvdata, bool ok);
 
 #else  /* CONFIG_CRYPTO_FIPS */
 
-static inline int ssi_fips_init(struct cc_drvdata *p_drvdata)
+static inline int cc_fips_init(struct cc_drvdata *p_drvdata)
 {
 	return 0;
 }
 
-static inline void ssi_fips_fini(struct cc_drvdata *drvdata) {}
+static inline void cc_fips_fini(struct cc_drvdata *drvdata) {}
 static inline void cc_set_ree_fips_status(struct cc_drvdata *drvdata,
 					  bool ok) {}
 static inline void fips_handler(struct cc_drvdata *drvdata) {}
-- 
2.7.4
