Linux-Crypto Archive on lore.kernel.org
* [PATCH 0/5] CAAM JR lifecycle
@ 2019-11-05 15:13 Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 1/5] crypto: caam - use static initialization Andrey Smirnov
                   ` (5 more replies)
  0 siblings, 6 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

Everyone:

This series takes a different approach to addressing the issues brought
up in [discussion]. This time the proposal is to move away from
creating a per-JR platform device, to move all of the underlying code
into caam.ko, and to disable manual binding/unbinding of the CAAM
device via sysfs. Note that this series is a rough cut intended to
gauge whether this approach would be acceptable for upstreaming.

Thanks,
Andrey Smirnov

[discussion] lore.kernel.org/lkml/20190904023515.7107-13-andrew.smirnov@gmail.com
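For context, the "disable manual binding/unbinding via sysfs" part of patch 5/5 presumably comes down to setting the driver core's suppress_bind_attrs flag. A minimal sketch (the flag is standard driver-core API; the surrounding driver struct and probe/remove names are illustrative, not the series' actual code):

```c
/* Sketch: with suppress_bind_attrs set, the driver core does not
 * create the sysfs bind/unbind attributes for this driver, so
 * "echo <dev> > /sys/bus/platform/drivers/caam/unbind" is no longer
 * possible and the CAAM device cannot be manually detached at runtime.
 */
static struct platform_driver caam_driver = {
	.driver = {
		.name = "caam",
		.suppress_bind_attrs = true,	/* no bind/unbind in sysfs */
	},
	.probe = caam_probe,
	.remove = caam_remove,
};
```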

Andrey Smirnov (5):
  crypto: caam - use static initialization
  crypto: caam - introduce caam_jr_cbk
  crypto: caam - convert JR API to use struct caam_drv_private_jr
  crypto: caam - do not create platform devices for JRs
  crypto: caam - disable CAAM's bind/unbind attributes

 drivers/crypto/caam/Kconfig      |   5 +-
 drivers/crypto/caam/Makefile     |  15 +--
 drivers/crypto/caam/caamalg.c    | 114 ++++++++++----------
 drivers/crypto/caam/caamalg_qi.c |  12 +--
 drivers/crypto/caam/caamhash.c   | 117 +++++++++++----------
 drivers/crypto/caam/caampkc.c    |  67 ++++++------
 drivers/crypto/caam/caampkc.h    |   2 +-
 drivers/crypto/caam/caamrng.c    |  41 ++++----
 drivers/crypto/caam/ctrl.c       |  16 ++-
 drivers/crypto/caam/intern.h     |   3 +-
 drivers/crypto/caam/jr.c         | 173 ++++++++-----------------------
 drivers/crypto/caam/jr.h         |  14 ++-
 drivers/crypto/caam/key_gen.c    |  11 +-
 drivers/crypto/caam/key_gen.h    |   5 +-
 14 files changed, 275 insertions(+), 320 deletions(-)

-- 
2.21.0


^ permalink raw reply	[flat|nested] 9+ messages in thread

* [PATCH 1/5] crypto: caam - use static initialization
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
@ 2019-11-05 15:13 ` Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 2/5] crypto: caam - introduce caam_jr_cbk Andrey Smirnov
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

Use static initialization for global variables.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Cc: Chris Healy <cphealy@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: linux-imx@nxp.com
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/crypto/caam/jr.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index fc97cde27059..49c98a7f6723 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -23,7 +23,10 @@ struct jr_driver_data {
 	spinlock_t		jr_alloc_lock;	/* jr_list lock */
 } ____cacheline_aligned;
 
-static struct jr_driver_data driver_data;
+static struct jr_driver_data driver_data = {
+	.jr_list = LIST_HEAD_INIT(driver_data.jr_list),
+	.jr_alloc_lock = __SPIN_LOCK_UNLOCKED(driver_data.jr_alloc_lock),
+};
 static DEFINE_MUTEX(algs_lock);
 static unsigned int active_devs;
 
@@ -589,8 +592,6 @@ static struct platform_driver caam_jr_driver = {
 
 static int __init jr_driver_init(void)
 {
-	spin_lock_init(&driver_data.jr_alloc_lock);
-	INIT_LIST_HEAD(&driver_data.jr_list);
 	return platform_driver_register(&caam_jr_driver);
 }
 
-- 
2.21.0



* [PATCH 2/5] crypto: caam - introduce caam_jr_cbk
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 1/5] crypto: caam - use static initialization Andrey Smirnov
@ 2019-11-05 15:13 ` Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr Andrey Smirnov
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

Coalesce the multiple ad-hoc declarations of the same function-pointer
signature into a dedicated typedef to avoid repetition.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Cc: Chris Healy <cphealy@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: linux-imx@nxp.com
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/crypto/caam/intern.h | 3 ++-
 drivers/crypto/caam/jr.c     | 9 +++------
 drivers/crypto/caam/jr.h     | 7 ++++---
 3 files changed, 9 insertions(+), 10 deletions(-)

diff --git a/drivers/crypto/caam/intern.h b/drivers/crypto/caam/intern.h
index c7c10c90464b..fe2ca2ad6ff0 100644
--- a/drivers/crypto/caam/intern.h
+++ b/drivers/crypto/caam/intern.h
@@ -11,6 +11,7 @@
 #define INTERN_H
 
 #include "ctrl.h"
+#include "jr.h"
 
 /* Currently comes from Kconfig param as a ^2 (driver-required) */
 #define JOBR_DEPTH (1 << CONFIG_CRYPTO_DEV_FSL_CAAM_RINGSIZE)
@@ -31,7 +32,7 @@
  * Each entry on an output ring needs one of these
  */
 struct caam_jrentry_info {
-	void (*callbk)(struct device *dev, u32 *desc, u32 status, void *arg);
+	caam_jr_cbk callbk;
 	void *cbkarg;	/* Argument per ring entry */
 	u32 *desc_addr_virt;	/* Stored virt addr for postprocessing */
 	dma_addr_t desc_addr_dma;	/* Stored bus addr for done matching */
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 49c98a7f6723..3e78fedeea30 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -197,7 +197,7 @@ static void caam_jr_dequeue(unsigned long devarg)
 	int hw_idx, sw_idx, i, head, tail;
 	struct device *dev = (struct device *)devarg;
 	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
-	void (*usercall)(struct device *dev, u32 *desc, u32 status, void *arg);
+	caam_jr_cbk usercall;
 	u32 *userdesc, userstatus;
 	void *userarg;
 	u32 outring_used = 0;
@@ -354,10 +354,7 @@ EXPORT_SYMBOL(caam_jr_free);
  * @areq: optional pointer to a user argument for use at callback
  *        time.
  **/
-int caam_jr_enqueue(struct device *dev, u32 *desc,
-		    void (*cbk)(struct device *dev, u32 *desc,
-				u32 status, void *areq),
-		    void *areq)
+int caam_jr_enqueue(struct device *dev, u32 *desc, caam_jr_cbk cbk, void *areq)
 {
 	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
 	struct caam_jrentry_info *head_entry;
@@ -386,7 +383,7 @@ int caam_jr_enqueue(struct device *dev, u32 *desc,
 	head_entry = &jrp->entinfo[head];
 	head_entry->desc_addr_virt = desc;
 	head_entry->desc_size = desc_size;
-	head_entry->callbk = (void *)cbk;
+	head_entry->callbk = cbk;
 	head_entry->cbkarg = areq;
 	head_entry->desc_addr_dma = desc_dma;
 
diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h
index eab611530f36..81acc6a6909f 100644
--- a/drivers/crypto/caam/jr.h
+++ b/drivers/crypto/caam/jr.h
@@ -9,11 +9,12 @@
 #define JR_H
 
 /* Prototypes for backend-level services exposed to APIs */
+typedef void (*caam_jr_cbk)(struct device *dev, u32 *desc, u32 status,
+			    void *areq);
+
 struct device *caam_jr_alloc(void);
 void caam_jr_free(struct device *rdev);
-int caam_jr_enqueue(struct device *dev, u32 *desc,
-		    void (*cbk)(struct device *dev, u32 *desc, u32 status,
-				void *areq),
+int caam_jr_enqueue(struct device *dev, u32 *desc, caam_jr_cbk cbk,
 		    void *areq);
 
 #endif /* JR_H */
-- 
2.21.0



* [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 1/5] crypto: caam - use static initialization Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 2/5] crypto: caam - introduce caam_jr_cbk Andrey Smirnov
@ 2019-11-05 15:13 ` Andrey Smirnov
  2019-11-06  9:04   ` kbuild test robot
  2019-11-05 15:13 ` [PATCH 4/5] crypto: caam - do not create platform devices for JRs Andrey Smirnov
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

As a first step towards detaching the JR object from its underlying
'struct device', convert all of the API in jr.h to operate on 'struct
caam_drv_private_jr' instead of 'struct device', and adjust the rest
of the code accordingly.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Cc: Chris Healy <cphealy@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: linux-imx@nxp.com
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/crypto/caam/caamalg.c    | 108 +++++++++++++++--------------
 drivers/crypto/caam/caamalg_qi.c |   8 +--
 drivers/crypto/caam/caamhash.c   | 115 ++++++++++++++++---------------
 drivers/crypto/caam/caampkc.c    |  67 +++++++++---------
 drivers/crypto/caam/caampkc.h    |   2 +-
 drivers/crypto/caam/caamrng.c    |  41 ++++++-----
 drivers/crypto/caam/jr.c         |  23 +++----
 drivers/crypto/caam/jr.h         |  12 ++--
 drivers/crypto/caam/key_gen.c    |  11 +--
 drivers/crypto/caam/key_gen.h    |   5 +-
 10 files changed, 207 insertions(+), 185 deletions(-)

diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 2912006b946b..4cb7d5b281cc 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -108,7 +108,7 @@ struct caam_ctx {
 	dma_addr_t sh_desc_dec_dma;
 	dma_addr_t key_dma;
 	enum dma_data_direction dir;
-	struct device *jrdev;
+	struct caam_drv_private_jr *jr;
 	struct alginfo adata;
 	struct alginfo cdata;
 	unsigned int authsize;
@@ -117,7 +117,7 @@ struct caam_ctx {
 static int aead_null_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - AEAD_DESC_JOB_IO_LEN -
@@ -170,7 +170,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 						 struct caam_aead_alg, aead);
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	u32 ctx1_iv_off = 0;
 	u32 *desc, *nonce = NULL;
@@ -308,7 +308,7 @@ static int aead_setauthsize(struct crypto_aead *authenc,
 static int gcm_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - GCM_DESC_JOB_IO_LEN -
@@ -373,7 +373,7 @@ static int gcm_setauthsize(struct crypto_aead *authenc, unsigned int authsize)
 static int rfc4106_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - GCM_DESC_JOB_IO_LEN -
@@ -441,7 +441,7 @@ static int rfc4106_setauthsize(struct crypto_aead *authenc,
 static int rfc4543_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - GCM_DESC_JOB_IO_LEN -
@@ -507,7 +507,7 @@ static int rfc4543_setauthsize(struct crypto_aead *authenc,
 static int chachapoly_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	u32 *desc;
 
@@ -563,7 +563,7 @@ static int aead_setkey(struct crypto_aead *aead,
 			       const u8 *key, unsigned int keylen)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	struct crypto_authenc_keys keys;
 	int ret = 0;
@@ -598,7 +598,7 @@ static int aead_setkey(struct crypto_aead *aead,
 		goto skip_split_key;
 	}
 
-	ret = gen_split_key(ctx->jrdev, ctx->key, &ctx->adata, keys.authkey,
+	ret = gen_split_key(ctx->jr, ctx->key, &ctx->adata, keys.authkey,
 			    keys.authkeylen, CAAM_MAX_KEY_SIZE -
 			    keys.enckeylen);
 	if (ret) {
@@ -645,7 +645,7 @@ static int gcm_setkey(struct crypto_aead *aead,
 		      const u8 *key, unsigned int keylen)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	int err;
 
 	err = aes_check_keylen(keylen);
@@ -668,7 +668,7 @@ static int rfc4106_setkey(struct crypto_aead *aead,
 			  const u8 *key, unsigned int keylen)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	int err;
 
 	err = aes_check_keylen(keylen - 4);
@@ -696,7 +696,7 @@ static int rfc4543_setkey(struct crypto_aead *aead,
 			  const u8 *key, unsigned int keylen)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	int err;
 
 	err = aes_check_keylen(keylen - 4);
@@ -727,7 +727,7 @@ static int skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 	struct caam_skcipher_alg *alg =
 		container_of(crypto_skcipher_alg(skcipher), typeof(*alg),
 			     skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	unsigned int ivsize = crypto_skcipher_ivsize(skcipher);
 	u32 *desc;
 	const bool is_rfc3686 = alg->caam.rfc3686;
@@ -842,7 +842,7 @@ static int xts_skcipher_setkey(struct crypto_skcipher *skcipher, const u8 *key,
 			       unsigned int keylen)
 {
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 
 	if (keylen != 2 * AES_MIN_KEY_SIZE  && keylen != 2 * AES_MAX_KEY_SIZE) {
@@ -960,10 +960,11 @@ static void skcipher_unmap(struct device *dev, struct skcipher_edesc *edesc,
 		   edesc->sec4_sg_dma, edesc->sec4_sg_bytes);
 }
 
-static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				   void *context)
+static void aead_encrypt_done(struct caam_drv_private_jr *jr, u32 *desc,
+			      u32 err, void *context)
 {
 	struct aead_request *req = context;
+	struct device *jrdev = jr->dev;
 	struct aead_edesc *edesc;
 	int ecode = 0;
 
@@ -981,10 +982,11 @@ static void aead_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	aead_request_complete(req, ecode);
 }
 
-static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				   void *context)
+static void aead_decrypt_done(struct caam_drv_private_jr *jr, u32 *desc,
+			      u32 err, void *context)
 {
 	struct aead_request *req = context;
+	struct device *jrdev = jr->dev;
 	struct aead_edesc *edesc;
 	int ecode = 0;
 
@@ -1002,13 +1004,14 @@ static void aead_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	aead_request_complete(req, ecode);
 }
 
-static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				  void *context)
+static void skcipher_encrypt_done(struct caam_drv_private_jr *jr, u32 *desc,
+				  u32 err, void *context)
 {
 	struct skcipher_request *req = context;
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	int ivsize = crypto_skcipher_ivsize(skcipher);
+	struct device *jrdev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -1042,13 +1045,14 @@ static void skcipher_encrypt_done(struct device *jrdev, u32 *desc, u32 err,
 	skcipher_request_complete(req, ecode);
 }
 
-static void skcipher_decrypt_done(struct device *jrdev, u32 *desc, u32 err,
-				  void *context)
+static void skcipher_decrypt_done(struct caam_drv_private_jr *jr, u32 *desc,
+				  u32 err, void *context)
 {
 	struct skcipher_request *req = context;
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	int ivsize = crypto_skcipher_ivsize(skcipher);
+	struct device *jrdev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -1219,7 +1223,7 @@ static void init_authenc_job(struct aead_request *req,
 						 struct caam_aead_alg, aead);
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jr->dev->parent);
 	const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) ==
 			       OP_ALG_AAI_CTR_MOD128);
 	const bool is_rfc3686 = alg->caam.rfc3686;
@@ -1268,7 +1272,7 @@ static void init_skcipher_job(struct skcipher_request *req,
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	int ivsize = crypto_skcipher_ivsize(skcipher);
 	u32 *desc = edesc->hw_desc;
 	u32 *sh_desc;
@@ -1324,7 +1328,7 @@ static struct aead_edesc *aead_edesc_alloc(struct aead_request *req,
 {
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
@@ -1460,7 +1464,7 @@ static int gcm_encrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1478,7 +1482,7 @@ static int gcm_encrypt(struct aead_request *req)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_encrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1494,7 +1498,7 @@ static int chachapoly_encrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret;
@@ -1511,7 +1515,7 @@ static int chachapoly_encrypt(struct aead_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_encrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1527,7 +1531,7 @@ static int chachapoly_decrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret;
@@ -1544,7 +1548,7 @@ static int chachapoly_decrypt(struct aead_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_decrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1565,7 +1569,7 @@ static int aead_encrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1584,7 +1588,7 @@ static int aead_encrypt(struct aead_request *req)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_encrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_encrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1600,7 +1604,7 @@ static int gcm_decrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1618,7 +1622,7 @@ static int gcm_decrypt(struct aead_request *req)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_decrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1639,7 +1643,7 @@ static int aead_decrypt(struct aead_request *req)
 	struct aead_edesc *edesc;
 	struct crypto_aead *aead = crypto_aead_reqtfm(req);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	bool all_contig;
 	u32 *desc;
 	int ret = 0;
@@ -1662,7 +1666,7 @@ static int aead_decrypt(struct aead_request *req)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, aead_decrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, aead_decrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1681,7 +1685,7 @@ static struct skcipher_edesc *skcipher_edesc_alloc(struct skcipher_request *req,
 {
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int src_nents, mapped_src_nents, dst_nents = 0, mapped_dst_nents = 0;
@@ -1839,7 +1843,7 @@ static int skcipher_encrypt(struct skcipher_request *req)
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 	int ret = 0;
 
@@ -1859,7 +1863,7 @@ static int skcipher_encrypt(struct skcipher_request *req)
 			     desc_bytes(edesc->hw_desc), 1);
 
 	desc = edesc->hw_desc;
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_encrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, skcipher_encrypt_done, req);
 
 	if (!ret) {
 		ret = -EINPROGRESS;
@@ -1876,7 +1880,7 @@ static int skcipher_decrypt(struct skcipher_request *req)
 	struct skcipher_edesc *edesc;
 	struct crypto_skcipher *skcipher = crypto_skcipher_reqtfm(req);
 	struct caam_ctx *ctx = crypto_skcipher_ctx(skcipher);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 	int ret = 0;
 
@@ -1896,7 +1900,7 @@ static int skcipher_decrypt(struct skcipher_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, edesc->hw_desc,
 			     desc_bytes(edesc->hw_desc), 1);
 
-	ret = caam_jr_enqueue(jrdev, desc, skcipher_decrypt_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, skcipher_decrypt_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -3411,25 +3415,25 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
 	dma_addr_t dma_addr;
 	struct caam_drv_private *priv;
 
-	ctx->jrdev = caam_jr_alloc();
-	if (IS_ERR(ctx->jrdev)) {
+	ctx->jr = caam_jr_alloc();
+	if (IS_ERR(ctx->jr)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
-		return PTR_ERR(ctx->jrdev);
+		return PTR_ERR(ctx->jr);
 	}
 
-	priv = dev_get_drvdata(ctx->jrdev->parent);
+	priv = dev_get_drvdata(ctx->jr->dev->parent);
 	if (priv->era >= 6 && uses_dkp)
 		ctx->dir = DMA_BIDIRECTIONAL;
 	else
 		ctx->dir = DMA_TO_DEVICE;
 
-	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_enc,
+	dma_addr = dma_map_single_attrs(ctx->jr->dev, ctx->sh_desc_enc,
 					offsetof(struct caam_ctx,
 						 sh_desc_enc_dma),
 					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
-		dev_err(ctx->jrdev, "unable to map key, shared descriptors\n");
-		caam_jr_free(ctx->jrdev);
+	if (dma_mapping_error(ctx->jr->dev, dma_addr)) {
+		dev_err(ctx->jr->dev, "unable to map key, shared descriptors\n");
+		caam_jr_free(ctx->jr);
 		return -ENOMEM;
 	}
 
@@ -3467,10 +3471,10 @@ static int caam_aead_init(struct crypto_aead *tfm)
 
 static void caam_exit_common(struct caam_ctx *ctx)
 {
-	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_enc_dma,
+	dma_unmap_single_attrs(ctx->jr->dev, ctx->sh_desc_enc_dma,
 			       offsetof(struct caam_ctx, sh_desc_enc_dma),
 			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
-	caam_jr_free(ctx->jrdev);
+	caam_jr_free(ctx->jr);
 }
 
 static void caam_cra_exit(struct crypto_skcipher *tfm)
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index 8e3449670d2f..31bee401f9e5 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -55,7 +55,7 @@ struct caam_skcipher_alg {
  * per-session context
  */
 struct caam_ctx {
-	struct device *jrdev;
+	struct caam_drv_private_jr *jr;
 	u32 sh_desc_enc[DESC_MAX_USED_LEN];
 	u32 sh_desc_dec[DESC_MAX_USED_LEN];
 	u8 key[CAAM_MAX_KEY_SIZE];
@@ -2423,10 +2423,10 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
 	 * distribute tfms across job rings to ensure in-order
 	 * crypto request processing per tfm
 	 */
-	ctx->jrdev = caam_jr_alloc();
-	if (IS_ERR(ctx->jrdev)) {
+	ctx->jr = caam_jr_alloc();
+	if (IS_ERR(ctx->jr)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
-		return PTR_ERR(ctx->jrdev);
+		return PTR_ERR(ctx->jr);
 	}
 
-	dev = ctx->jrdev->parent;
+	dev = ctx->jr->dev->parent;
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 65399cb2a770..6e4fd5eb833a 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -97,7 +97,7 @@ struct caam_hash_ctx {
 	dma_addr_t sh_desc_digest_dma;
 	enum dma_data_direction dir;
 	enum dma_data_direction key_dir;
-	struct device *jrdev;
+	struct caam_drv_private_jr *jr;
 	int ctx_len;
 	struct alginfo adata;
 };
@@ -223,7 +223,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
 {
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	int digestsize = crypto_ahash_digestsize(ahash);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	u32 *desc;
 
@@ -279,7 +279,7 @@ static int axcbc_set_sh_desc(struct crypto_ahash *ahash)
 {
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	int digestsize = crypto_ahash_digestsize(ahash);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 
 	/* shared descriptor for ahash_update */
@@ -331,7 +331,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
 {
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	int digestsize = crypto_ahash_digestsize(ahash);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 
 	/* shared descriptor for ahash_update */
@@ -381,7 +381,7 @@ static int acmac_set_sh_desc(struct crypto_ahash *ahash)
 static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 			   u32 digestsize)
 {
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc;
 	struct split_key_result result;
 	dma_addr_t key_dma;
@@ -421,7 +421,7 @@ static int hash_digest_key(struct caam_hash_ctx *ctx, u32 *keylen, u8 *key,
 	result.err = 0;
 	init_completion(&result.completion);
 
-	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
+	ret = caam_jr_enqueue(ctx->jr, desc, split_key_done, &result);
 	if (!ret) {
 		/* in progress */
 		wait_for_completion(&result.completion);
@@ -444,10 +444,10 @@ static int ahash_setkey(struct crypto_ahash *ahash,
 			const u8 *key, unsigned int keylen)
 {
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	int blocksize = crypto_tfm_alg_blocksize(&ahash->base);
 	int digestsize = crypto_ahash_digestsize(ahash);
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
 	int ret;
 	u8 *hashed_key = NULL;
 
@@ -485,12 +485,12 @@ static int ahash_setkey(struct crypto_ahash *ahash,
 		 * virtual and dma key addresses are needed.
 		 */
 		if (keylen > ctx->adata.keylen_pad)
-			dma_sync_single_for_device(ctx->jrdev,
+			dma_sync_single_for_device(jrdev,
 						   ctx->adata.key_dma,
 						   ctx->adata.keylen_pad,
 						   DMA_TO_DEVICE);
 	} else {
-		ret = gen_split_key(ctx->jrdev, ctx->key, &ctx->adata, key,
+		ret = gen_split_key(ctx->jr, ctx->key, &ctx->adata, key,
 				    keylen, CAAM_MAX_HASH_KEY_SIZE);
 		if (ret)
 			goto bad_free_key;
@@ -508,7 +508,7 @@ static int axcbc_setkey(struct crypto_ahash *ahash, const u8 *key,
 			unsigned int keylen)
 {
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 
 	if (keylen != AES_KEYSIZE_128) {
 		crypto_ahash_set_flags(ahash, CRYPTO_TFM_RES_BAD_KEY_LEN);
@@ -597,9 +597,10 @@ static inline void ahash_unmap_ctx(struct device *dev,
 	ahash_unmap(dev, edesc, req, dst_len);
 }
 
-static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
+static void ahash_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
 		       void *context)
 {
+	struct device *jrdev = jr->dev;
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
@@ -625,8 +626,8 @@ static void ahash_done(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
-static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
-			    void *context)
+static void ahash_done_bi(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
+			  void *context)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -634,6 +635,7 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	int digestsize = crypto_ahash_digestsize(ahash);
+	struct device *jrdev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -657,8 +659,8 @@ static void ahash_done_bi(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
-static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
-			       void *context)
+static void ahash_done_ctx_src(struct caam_drv_private_jr *jr, u32 *desc,
+			       u32 err, void *context)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -666,6 +668,7 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
+	struct device *jrdev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -685,8 +688,8 @@ static void ahash_done_ctx_src(struct device *jrdev, u32 *desc, u32 err,
 	req->base.complete(&req->base, ecode);
 }
 
-static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
-			       void *context)
+static void ahash_done_ctx_dst(struct caam_drv_private_jr *jr, u32 *desc,
+			       u32 err, void *context)
 {
 	struct ahash_request *req = context;
 	struct ahash_edesc *edesc;
@@ -694,6 +697,7 @@ static void ahash_done_ctx_dst(struct device *jrdev, u32 *desc, u32 err,
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
 	int digestsize = crypto_ahash_digestsize(ahash);
+	struct device *jrdev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(jrdev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -731,7 +735,7 @@ static struct ahash_edesc *ahash_edesc_alloc(struct caam_hash_ctx *ctx,
 
 	edesc = kzalloc(sizeof(*edesc) + sg_size, GFP_DMA | flags);
 	if (!edesc) {
-		dev_err(ctx->jrdev, "could not allocate extended descriptor\n");
+		dev_err(ctx->jr->dev, "could not allocate extended descriptor\n");
 		return NULL;
 	}
 
@@ -747,6 +751,7 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx,
 			       unsigned int first_sg,
 			       unsigned int first_bytes, size_t to_hash)
 {
+	struct device *jrdev = ctx->jr->dev;
 	dma_addr_t src_dma;
 	u32 options;
 
@@ -757,9 +762,9 @@ static int ahash_edesc_add_src(struct caam_hash_ctx *ctx,
 
 		sg_to_sec4_sg_last(req->src, to_hash, sg + first_sg, 0);
 
-		src_dma = dma_map_single(ctx->jrdev, sg, sgsize, DMA_TO_DEVICE);
-		if (dma_mapping_error(ctx->jrdev, src_dma)) {
-			dev_err(ctx->jrdev, "unable to map S/G table\n");
+		src_dma = dma_map_single(jrdev, sg, sgsize, DMA_TO_DEVICE);
+		if (dma_mapping_error(jrdev, src_dma)) {
+			dev_err(jrdev, "unable to map S/G table\n");
 			return -ENOMEM;
 		}
 
@@ -783,7 +788,7 @@ static int ahash_update_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
@@ -892,7 +897,7 @@ static int ahash_update_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_bi, req);
+		ret = caam_jr_enqueue(ctx->jr, desc, ahash_done_bi, req);
 		if (ret)
 			goto unmap_ctx;
 
@@ -922,7 +927,7 @@ static int ahash_final_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
@@ -972,7 +977,7 @@ static int ahash_final_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, ahash_done_ctx_src, req);
 	if (ret)
 		goto unmap_ctx;
 
@@ -988,7 +993,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
@@ -1052,7 +1057,7 @@ static int ahash_finup_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_src, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, ahash_done_ctx_src, req);
 	if (ret)
 		goto unmap_ctx;
 
@@ -1068,7 +1073,7 @@ static int ahash_digest(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	u32 *desc;
@@ -1128,7 +1133,7 @@ static int ahash_digest(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, ahash_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1145,7 +1150,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
@@ -1182,7 +1187,7 @@ static int ahash_final_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, ahash_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1204,7 +1209,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *buf = current_buf(state);
@@ -1305,7 +1310,7 @@ static int ahash_update_no_ctx(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+		ret = caam_jr_enqueue(ctx->jr, desc, ahash_done_ctx_dst, req);
 		if (ret)
 			goto unmap_ctx;
 
@@ -1339,7 +1344,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	int buflen = *current_buflen(state);
@@ -1403,7 +1408,7 @@ static int ahash_finup_no_ctx(struct ahash_request *req)
 			     DUMP_PREFIX_ADDRESS, 16, 4, desc, desc_bytes(desc),
 			     1);
 
-	ret = caam_jr_enqueue(jrdev, desc, ahash_done, req);
+	ret = caam_jr_enqueue(ctx->jr, desc, ahash_done, req);
 	if (!ret) {
 		ret = -EINPROGRESS;
 	} else {
@@ -1425,7 +1430,7 @@ static int ahash_update_first(struct ahash_request *req)
 	struct crypto_ahash *ahash = crypto_ahash_reqtfm(req);
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	struct caam_hash_state *state = ahash_request_ctx(req);
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	gfp_t flags = (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP) ?
 		       GFP_KERNEL : GFP_ATOMIC;
 	u8 *next_buf = alt_buf(state);
@@ -1505,7 +1510,7 @@ static int ahash_update_first(struct ahash_request *req)
 				     DUMP_PREFIX_ADDRESS, 16, 4, desc,
 				     desc_bytes(desc), 1);
 
-		ret = caam_jr_enqueue(jrdev, desc, ahash_done_ctx_dst, req);
+		ret = caam_jr_enqueue(ctx->jr, desc, ahash_done_ctx_dst, req);
 		if (ret)
 			goto unmap_ctx;
 
@@ -1828,13 +1833,13 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	 * Get a Job ring from Job Ring driver to ensure in-order
 	 * crypto request processing per tfm
 	 */
-	ctx->jrdev = caam_jr_alloc();
-	if (IS_ERR(ctx->jrdev)) {
+	ctx->jr = caam_jr_alloc();
+	if (IS_ERR(ctx->jr)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
-		return PTR_ERR(ctx->jrdev);
+		return PTR_ERR(ctx->jr);
 	}
 
-	priv = dev_get_drvdata(ctx->jrdev->parent);
+	priv = dev_get_drvdata(ctx->jr->dev->parent);
 
 	if (is_xcbc_aes(caam_hash->alg_type)) {
 		ctx->dir = DMA_TO_DEVICE;
@@ -1861,30 +1866,32 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 	}
 
 	if (ctx->key_dir != DMA_NONE) {
-		ctx->adata.key_dma = dma_map_single_attrs(ctx->jrdev, ctx->key,
+		ctx->adata.key_dma = dma_map_single_attrs(ctx->jr->dev,
+							  ctx->key,
 							  ARRAY_SIZE(ctx->key),
 							  ctx->key_dir,
 							  DMA_ATTR_SKIP_CPU_SYNC);
-		if (dma_mapping_error(ctx->jrdev, ctx->adata.key_dma)) {
-			dev_err(ctx->jrdev, "unable to map key\n");
-			caam_jr_free(ctx->jrdev);
+		if (dma_mapping_error(ctx->jr->dev, ctx->adata.key_dma)) {
+			dev_err(ctx->jr->dev, "unable to map key\n");
+			caam_jr_free(ctx->jr);
 			return -ENOMEM;
 		}
 	}
 
-	dma_addr = dma_map_single_attrs(ctx->jrdev, ctx->sh_desc_update,
+	dma_addr = dma_map_single_attrs(ctx->jr->dev, ctx->sh_desc_update,
 					offsetof(struct caam_hash_ctx, key),
 					ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
-	if (dma_mapping_error(ctx->jrdev, dma_addr)) {
-		dev_err(ctx->jrdev, "unable to map shared descriptors\n");
+	if (dma_mapping_error(ctx->jr->dev, dma_addr)) {
+		dev_err(ctx->jr->dev, "unable to map shared descriptors\n");
 
 		if (ctx->key_dir != DMA_NONE)
-			dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
+			dma_unmap_single_attrs(ctx->jr->dev,
+					       ctx->adata.key_dma,
 					       ARRAY_SIZE(ctx->key),
 					       ctx->key_dir,
 					       DMA_ATTR_SKIP_CPU_SYNC);
 
-		caam_jr_free(ctx->jrdev);
+		caam_jr_free(ctx->jr);
 		return -ENOMEM;
 	}
 
@@ -1911,14 +1918,14 @@ static void caam_hash_cra_exit(struct crypto_tfm *tfm)
 {
 	struct caam_hash_ctx *ctx = crypto_tfm_ctx(tfm);
 
-	dma_unmap_single_attrs(ctx->jrdev, ctx->sh_desc_update_dma,
+	dma_unmap_single_attrs(ctx->jr->dev, ctx->sh_desc_update_dma,
 			       offsetof(struct caam_hash_ctx, key),
 			       ctx->dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (ctx->key_dir != DMA_NONE)
-		dma_unmap_single_attrs(ctx->jrdev, ctx->adata.key_dma,
+		dma_unmap_single_attrs(ctx->jr->dev, ctx->adata.key_dma,
 				       ARRAY_SIZE(ctx->key), ctx->key_dir,
 				       DMA_ATTR_SKIP_CPU_SYNC);
-	caam_jr_free(ctx->jrdev);
+	caam_jr_free(ctx->jr);
 }
 
 void caam_algapi_hash_exit(void)
diff --git a/drivers/crypto/caam/caampkc.c b/drivers/crypto/caam/caampkc.c
index 6619c512ef1a..8f2abd636aff 100644
--- a/drivers/crypto/caam/caampkc.c
+++ b/drivers/crypto/caam/caampkc.c
@@ -114,8 +114,10 @@ static void rsa_priv_f3_unmap(struct device *dev, struct rsa_edesc *edesc,
 }
 
 /* RSA Job Completion handler */
-static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
+static void rsa_pub_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
+			 void *context)
 {
+	struct device *dev = jr->dev;
 	struct akcipher_request *req = context;
 	struct rsa_edesc *edesc;
 	int ecode = 0;
@@ -132,9 +134,10 @@ static void rsa_pub_done(struct device *dev, u32 *desc, u32 err, void *context)
 	akcipher_request_complete(req, ecode);
 }
 
-static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
+static void rsa_priv_f1_done(struct caam_drv_private_jr *jr, u32 *desc,
+			     u32 err, void *context)
 {
+	struct device *dev = jr->dev;
 	struct akcipher_request *req = context;
 	struct rsa_edesc *edesc;
 	int ecode = 0;
@@ -151,9 +154,10 @@ static void rsa_priv_f1_done(struct device *dev, u32 *desc, u32 err,
 	akcipher_request_complete(req, ecode);
 }
 
-static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
+static void rsa_priv_f2_done(struct caam_drv_private_jr *jr, u32 *desc,
+			     u32 err, void *context)
 {
+	struct device *dev = jr->dev;
 	struct akcipher_request *req = context;
 	struct rsa_edesc *edesc;
 	int ecode = 0;
@@ -170,9 +174,10 @@ static void rsa_priv_f2_done(struct device *dev, u32 *desc, u32 err,
 	akcipher_request_complete(req, ecode);
 }
 
-static void rsa_priv_f3_done(struct device *dev, u32 *desc, u32 err,
-			     void *context)
+static void rsa_priv_f3_done(struct caam_drv_private_jr *jr, u32 *desc,
+			     u32 err, void *context)
 {
+	struct device *dev = jr->dev;
 	struct akcipher_request *req = context;
 	struct rsa_edesc *edesc;
 	int ecode = 0;
@@ -245,7 +250,7 @@ static struct rsa_edesc *rsa_edesc_alloc(struct akcipher_request *req,
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
-	struct device *dev = ctx->dev;
+	struct device *dev = ctx->jr->dev;
 	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct caam_rsa_key *key = &ctx->key;
 	struct rsa_edesc *edesc;
@@ -371,7 +376,7 @@ static int set_rsa_pub_pdb(struct akcipher_request *req,
 	struct caam_rsa_req_ctx *req_ctx = akcipher_request_ctx(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
-	struct device *dev = ctx->dev;
+	struct device *dev = ctx->jr->dev;
 	struct rsa_pub_pdb *pdb = &edesc->pdb.pub;
 	int sec4_sg_index = 0;
 
@@ -416,7 +421,7 @@ static int set_rsa_priv_f1_pdb(struct akcipher_request *req,
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
-	struct device *dev = ctx->dev;
+	struct device *dev = ctx->jr->dev;
 	struct rsa_priv_f1_pdb *pdb = &edesc->pdb.priv_f1;
 	int sec4_sg_index = 0;
 
@@ -463,7 +468,7 @@ static int set_rsa_priv_f2_pdb(struct akcipher_request *req,
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
-	struct device *dev = ctx->dev;
+	struct device *dev = ctx->jr->dev;
 	struct rsa_priv_f2_pdb *pdb = &edesc->pdb.priv_f2;
 	int sec4_sg_index = 0;
 	size_t p_sz = key->p_sz;
@@ -540,7 +545,7 @@ static int set_rsa_priv_f3_pdb(struct akcipher_request *req,
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
-	struct device *dev = ctx->dev;
+	struct device *dev = ctx->jr->dev;
 	struct rsa_priv_f3_pdb *pdb = &edesc->pdb.priv_f3;
 	int sec4_sg_index = 0;
 	size_t p_sz = key->p_sz;
@@ -632,7 +637,7 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
-	struct device *jrdev = ctx->dev;
+	struct device *jrdev = ctx->jr->dev;
 	struct rsa_edesc *edesc;
 	int ret;
 
@@ -658,7 +663,7 @@ static int caam_rsa_enc(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_pub_desc(edesc->hw_desc, &edesc->pdb.pub);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_pub_done, req);
+	ret = caam_jr_enqueue(ctx->jr, edesc->hw_desc, rsa_pub_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -674,7 +679,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
-	struct device *jrdev = ctx->dev;
+	struct device *jrdev = ctx->jr->dev;
 	struct rsa_edesc *edesc;
 	int ret;
 
@@ -691,7 +696,7 @@ static int caam_rsa_dec_priv_f1(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f1_desc(edesc->hw_desc, &edesc->pdb.priv_f1);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f1_done, req);
+	ret = caam_jr_enqueue(ctx->jr, edesc->hw_desc, rsa_priv_f1_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -707,7 +712,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
-	struct device *jrdev = ctx->dev;
+	struct device *jrdev = ctx->jr->dev;
 	struct rsa_edesc *edesc;
 	int ret;
 
@@ -724,7 +729,7 @@ static int caam_rsa_dec_priv_f2(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f2_desc(edesc->hw_desc, &edesc->pdb.priv_f2);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f2_done, req);
+	ret = caam_jr_enqueue(ctx->jr, edesc->hw_desc, rsa_priv_f2_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -740,7 +745,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 {
 	struct crypto_akcipher *tfm = crypto_akcipher_reqtfm(req);
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
-	struct device *jrdev = ctx->dev;
+	struct device *jrdev = ctx->jr->dev;
 	struct rsa_edesc *edesc;
 	int ret;
 
@@ -757,7 +762,7 @@ static int caam_rsa_dec_priv_f3(struct akcipher_request *req)
 	/* Initialize Job Descriptor */
 	init_rsa_priv_f3_desc(edesc->hw_desc, &edesc->pdb.priv_f3);
 
-	ret = caam_jr_enqueue(jrdev, edesc->hw_desc, rsa_priv_f3_done, req);
+	ret = caam_jr_enqueue(ctx->jr, edesc->hw_desc, rsa_priv_f3_done, req);
 	if (!ret)
 		return -EINPROGRESS;
 
@@ -781,7 +786,7 @@ static int caam_rsa_dec(struct akcipher_request *req)
 
 	if (req->dst_len < key->n_sz) {
 		req->dst_len = key->n_sz;
-		dev_err(ctx->dev, "Output buffer length less than parameter n\n");
+		dev_err(ctx->jr->dev, "Output buffer length less than parameter n\n");
 		return -EOVERFLOW;
 	}
 
@@ -1038,19 +1043,19 @@ static int caam_rsa_init_tfm(struct crypto_akcipher *tfm)
 {
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 
-	ctx->dev = caam_jr_alloc();
+	ctx->jr = caam_jr_alloc();
 
-	if (IS_ERR(ctx->dev)) {
+	if (IS_ERR(ctx->jr)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
-		return PTR_ERR(ctx->dev);
+		return PTR_ERR(ctx->jr);
 	}
 
-	ctx->padding_dma = dma_map_single(ctx->dev, zero_buffer,
+	ctx->padding_dma = dma_map_single(ctx->jr->dev, zero_buffer,
 					  CAAM_RSA_MAX_INPUT_SIZE - 1,
 					  DMA_TO_DEVICE);
-	if (dma_mapping_error(ctx->dev, ctx->padding_dma)) {
-		dev_err(ctx->dev, "unable to map padding\n");
-		caam_jr_free(ctx->dev);
+	if (dma_mapping_error(ctx->jr->dev, ctx->padding_dma)) {
+		dev_err(ctx->jr->dev, "unable to map padding\n");
+		caam_jr_free(ctx->jr);
 		return -ENOMEM;
 	}
 
@@ -1063,10 +1068,10 @@ static void caam_rsa_exit_tfm(struct crypto_akcipher *tfm)
 	struct caam_rsa_ctx *ctx = akcipher_tfm_ctx(tfm);
 	struct caam_rsa_key *key = &ctx->key;
 
-	dma_unmap_single(ctx->dev, ctx->padding_dma, CAAM_RSA_MAX_INPUT_SIZE -
-			 1, DMA_TO_DEVICE);
+	dma_unmap_single(ctx->jr->dev, ctx->padding_dma,
+			 CAAM_RSA_MAX_INPUT_SIZE - 1, DMA_TO_DEVICE);
 	caam_rsa_free_key(key);
-	caam_jr_free(ctx->dev);
+	caam_jr_free(ctx->jr);
 }
 
 static struct caam_akcipher_alg caam_rsa = {
diff --git a/drivers/crypto/caam/caampkc.h b/drivers/crypto/caam/caampkc.h
index c68fb4c03ee6..5cbb4ea31ce8 100644
--- a/drivers/crypto/caam/caampkc.h
+++ b/drivers/crypto/caam/caampkc.h
@@ -93,7 +93,7 @@ struct caam_rsa_key {
  */
 struct caam_rsa_ctx {
 	struct caam_rsa_key key;
-	struct device *dev;
+	struct caam_drv_private_jr *jr;
 	dma_addr_t padding_dma;
 
 };
diff --git a/drivers/crypto/caam/caamrng.c b/drivers/crypto/caam/caamrng.c
index e8baacaabe07..2527b593707b 100644
--- a/drivers/crypto/caam/caamrng.c
+++ b/drivers/crypto/caam/caamrng.c
@@ -70,7 +70,7 @@ struct buf_data {
 
 /* rng per-device context */
 struct caam_rng_ctx {
-	struct device *jrdev;
+	struct caam_drv_private_jr *jr;
 	dma_addr_t sh_desc_dma;
 	u32 sh_desc[DESC_RNG_LEN];
 	unsigned int cur_buf_idx;
@@ -95,7 +95,7 @@ static inline void rng_unmap_buf(struct device *jrdev, struct buf_data *bd)
 
 static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)
 {
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 
 	if (ctx->sh_desc_dma)
 		dma_unmap_single(jrdev, ctx->sh_desc_dma,
@@ -104,8 +104,10 @@ static inline void rng_unmap_ctx(struct caam_rng_ctx *ctx)
 	rng_unmap_buf(jrdev, &ctx->bufs[1]);
 }
 
-static void rng_done(struct device *jrdev, u32 *desc, u32 err, void *context)
+static void rng_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
+		     void *context)
 {
+	struct device *jrdev = jr->dev;
 	struct buf_data *bd;
 
 	bd = container_of(desc, struct buf_data, hw_desc[0]);
@@ -126,13 +128,13 @@ static void rng_done(struct device *jrdev, u32 *desc, u32 err, void *context)
 static inline int submit_job(struct caam_rng_ctx *ctx, int to_current)
 {
 	struct buf_data *bd = &ctx->bufs[!(to_current ^ ctx->current_buf)];
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc = bd->hw_desc;
 	int err;
 
 	dev_dbg(jrdev, "submitting job %d\n", !(to_current ^ ctx->current_buf));
 	init_completion(&bd->filled);
-	err = caam_jr_enqueue(jrdev, desc, rng_done, ctx);
+	err = caam_jr_enqueue(ctx->jr, desc, rng_done, ctx);
 	if (err)
 		complete(&bd->filled); /* don't wait on failed job*/
 	else
@@ -166,7 +168,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
 	}
 
 	next_buf_idx = ctx->cur_buf_idx + max;
-	dev_dbg(ctx->jrdev, "%s: start reading at buffer %d, idx %d\n",
+	dev_dbg(ctx->jr->dev, "%s: start reading at buffer %d, idx %d\n",
 		 __func__, ctx->current_buf, ctx->cur_buf_idx);
 
 	/* if enough data in current buffer */
@@ -187,7 +189,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
 
 	/* and use next buffer */
 	ctx->current_buf = !ctx->current_buf;
-	dev_dbg(ctx->jrdev, "switched to buffer %d\n", ctx->current_buf);
+	dev_dbg(ctx->jr->dev, "switched to buffer %d\n", ctx->current_buf);
 
 	/* since there already is some data read, don't wait */
 	return copied_idx + caam_read(rng, data + copied_idx,
@@ -196,7 +198,7 @@ static int caam_read(struct hwrng *rng, void *data, size_t max, bool wait)
 
 static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
 {
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	u32 *desc = ctx->sh_desc;
 
 	init_sh_desc(desc, HDR_SHARE_SERIAL);
@@ -222,7 +224,7 @@ static inline int rng_create_sh_desc(struct caam_rng_ctx *ctx)
 
 static inline int rng_create_job_desc(struct caam_rng_ctx *ctx, int buf_id)
 {
-	struct device *jrdev = ctx->jrdev;
+	struct device *jrdev = ctx->jr->dev;
 	struct buf_data *bd = &ctx->bufs[buf_id];
 	u32 *desc = bd->hw_desc;
 	int sh_len = desc_len(ctx->sh_desc);
@@ -274,11 +276,12 @@ static int caam_init_buf(struct caam_rng_ctx *ctx, int buf_id)
 	return 0;
 }
 
-static int caam_init_rng(struct caam_rng_ctx *ctx, struct device *jrdev)
+static int caam_init_rng(struct caam_rng_ctx *ctx,
+			 struct caam_drv_private_jr *jr)
 {
 	int err;
 
-	ctx->jrdev = jrdev;
+	ctx->jr = jr;
 
 	err = rng_create_sh_desc(ctx);
 	if (err)
@@ -305,14 +308,14 @@ void caam_rng_exit(void)
 	if (!init_done)
 		return;
 
-	caam_jr_free(rng_ctx->jrdev);
+	caam_jr_free(rng_ctx->jr);
 	hwrng_unregister(&caam_rng);
 	kfree(rng_ctx);
 }
 
 int caam_rng_init(struct device *ctrldev)
 {
-	struct device *dev;
+	struct caam_drv_private_jr *jr;
 	u32 rng_inst;
 	struct caam_drv_private *priv = dev_get_drvdata(ctrldev);
 	int err;
@@ -328,21 +331,21 @@ int caam_rng_init(struct device *ctrldev)
 	if (!rng_inst)
 		return 0;
 
-	dev = caam_jr_alloc();
-	if (IS_ERR(dev)) {
+	jr = caam_jr_alloc();
+	if (IS_ERR(jr)) {
 		pr_err("Job Ring Device allocation for transform failed\n");
-		return PTR_ERR(dev);
+		return PTR_ERR(jr);
 	}
 	rng_ctx = kmalloc(sizeof(*rng_ctx), GFP_DMA | GFP_KERNEL);
 	if (!rng_ctx) {
 		err = -ENOMEM;
 		goto free_caam_alloc;
 	}
-	err = caam_init_rng(rng_ctx, dev);
+	err = caam_init_rng(rng_ctx, jr);
 	if (err)
 		goto free_rng_ctx;
 
-	dev_info(dev, "registering rng-caam\n");
+	dev_info(jr->dev, "registering rng-caam\n");
 
 	err = hwrng_register(&caam_rng);
 	if (!err) {
@@ -353,6 +356,6 @@ int caam_rng_init(struct device *ctrldev)
 free_rng_ctx:
 	kfree(rng_ctx);
 free_caam_alloc:
-	caam_jr_free(dev);
+	caam_jr_free(jr);
 	return err;
 }
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 3e78fedeea30..1e2929b7c6b9 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -265,7 +265,7 @@ static void caam_jr_dequeue(unsigned long devarg)
 		}
 
 		/* Finally, execute user's callback */
-		usercall(dev, userdesc, userstatus, userarg);
+		usercall(jrp, userdesc, userstatus, userarg);
 		outring_used--;
 	}
 
@@ -279,10 +279,9 @@ static void caam_jr_dequeue(unsigned long devarg)
  * returns :  pointer to the newly allocated physical
  *	      JobR dev can be written to if successful.
  **/
-struct device *caam_jr_alloc(void)
+struct caam_drv_private_jr *caam_jr_alloc(void)
 {
-	struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL;
-	struct device *dev = ERR_PTR(-ENODEV);
+	struct caam_drv_private_jr *jrpriv, *min_jrpriv = ERR_PTR(-ENODEV);
 	int min_tfm_cnt	= INT_MAX;
 	int tfm_cnt;
 
@@ -303,13 +302,12 @@ struct device *caam_jr_alloc(void)
 			break;
 	}
 
-	if (min_jrpriv) {
+	if (!IS_ERR(min_jrpriv))
 		atomic_inc(&min_jrpriv->tfm_count);
-		dev = min_jrpriv->dev;
-	}
+
 	spin_unlock(&driver_data.jr_alloc_lock);
 
-	return dev;
+	return min_jrpriv;
 }
 EXPORT_SYMBOL(caam_jr_alloc);
 
@@ -318,10 +316,8 @@ EXPORT_SYMBOL(caam_jr_alloc);
  * @rdev     - points to the dev that identifies the Job ring to
  *             be released.
  **/
-void caam_jr_free(struct device *rdev)
+void caam_jr_free(struct caam_drv_private_jr *jrpriv)
 {
-	struct caam_drv_private_jr *jrpriv = dev_get_drvdata(rdev);
-
 	atomic_dec(&jrpriv->tfm_count);
 }
 EXPORT_SYMBOL(caam_jr_free);
@@ -354,9 +350,10 @@ EXPORT_SYMBOL(caam_jr_free);
  * @areq: optional pointer to a user argument for use at callback
  *        time.
  **/
-int caam_jr_enqueue(struct device *dev, u32 *desc, caam_jr_cbk cbk, void *areq)
+int caam_jr_enqueue(struct caam_drv_private_jr *jrp, u32 *desc,
+		    caam_jr_cbk cbk, void *areq)
 {
-	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+	struct device *dev = jrp->dev;
 	struct caam_jrentry_info *head_entry;
 	int head, tail, desc_size;
 	dma_addr_t desc_dma;
diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h
index 81acc6a6909f..f49caa0ac0ff 100644
--- a/drivers/crypto/caam/jr.h
+++ b/drivers/crypto/caam/jr.h
@@ -8,13 +8,15 @@
 #ifndef JR_H
 #define JR_H
 
+struct caam_drv_private_jr;
+
 /* Prototypes for backend-level services exposed to APIs */
-typedef void (*caam_jr_cbk)(struct device *dev, u32 *desc, u32 status,
-			    void *areq);
+typedef void (*caam_jr_cbk)(struct caam_drv_private_jr *jr, u32 *desc,
+			    u32 status, void *areq);
 
-struct device *caam_jr_alloc(void);
-void caam_jr_free(struct device *rdev);
-int caam_jr_enqueue(struct device *dev, u32 *desc, caam_jr_cbk cbk,
+struct caam_drv_private_jr *caam_jr_alloc(void);
+void caam_jr_free(struct caam_drv_private_jr *jr);
+int caam_jr_enqueue(struct caam_drv_private_jr *jr, u32 *desc, caam_jr_cbk cbk,
 		    void *areq);
 
 #endif /* JR_H */
diff --git a/drivers/crypto/caam/key_gen.c b/drivers/crypto/caam/key_gen.c
index 5a851ddc48fb..cabd39821176 100644
--- a/drivers/crypto/caam/key_gen.c
+++ b/drivers/crypto/caam/key_gen.c
@@ -9,12 +9,14 @@
 #include "jr.h"
 #include "error.h"
 #include "desc_constr.h"
+#include "intern.h"
 #include "key_gen.h"
 
-void split_key_done(struct device *dev, u32 *desc, u32 err,
-			   void *context)
+void split_key_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
+		    void *context)
 {
 	struct split_key_result *res = context;
+	struct device *dev = jr->dev;
 	int ecode = 0;
 
 	dev_dbg(dev, "%s %d: err 0x%x\n", __func__, __LINE__, err);
@@ -41,11 +43,12 @@ Split key generation-----------------------------------------------
 [06] 0x64260028    fifostr: class2 mdsplit-jdk len=40
 			@0xffe04000
 */
-int gen_split_key(struct device *jrdev, u8 *key_out,
+int gen_split_key(struct caam_drv_private_jr *jr, u8 *key_out,
 		  struct alginfo * const adata, const u8 *key_in, u32 keylen,
 		  int max_keylen)
 {
 	u32 *desc;
+	struct device *jrdev = jr->dev;
 	struct split_key_result result;
 	dma_addr_t dma_addr;
 	unsigned int local_max;
@@ -107,7 +110,7 @@ int gen_split_key(struct device *jrdev, u8 *key_out,
 	result.err = 0;
 	init_completion(&result.completion);
 
-	ret = caam_jr_enqueue(jrdev, desc, split_key_done, &result);
+	ret = caam_jr_enqueue(jr, desc, split_key_done, &result);
 	if (!ret) {
 		/* in progress */
 		wait_for_completion(&result.completion);
diff --git a/drivers/crypto/caam/key_gen.h b/drivers/crypto/caam/key_gen.h
index 818f78f6fc1a..10275080f969 100644
--- a/drivers/crypto/caam/key_gen.h
+++ b/drivers/crypto/caam/key_gen.h
@@ -41,8 +41,9 @@ struct split_key_result {
 	int err;
 };
 
-void split_key_done(struct device *dev, u32 *desc, u32 err, void *context);
+void split_key_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
+		    void *context);
 
-int gen_split_key(struct device *jrdev, u8 *key_out,
+int gen_split_key(struct caam_drv_private_jr *jr, u8 *key_out,
 		  struct alginfo * const adata, const u8 *key_in, u32 keylen,
 		  int max_keylen);
-- 
2.21.0



* [PATCH 4/5] crypto: caam - do not create a platform devices for JRs
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
                   ` (2 preceding siblings ...)
  2019-11-05 15:13 ` [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr Andrey Smirnov
@ 2019-11-05 15:13 ` Andrey Smirnov
  2019-11-05 15:13 ` [PATCH 5/5] crypto: caam - disable CAAM's bind/unbind attributes Andrey Smirnov
  2019-11-06  7:27 ` [PATCH 0/5] CAAM JR lifecycle Vakul Garg
  5 siblings, 0 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

Job rings are an integral part of the underlying CAAM IP, and treating them
as independent devices means that we have to:

1. Properly maintain the device reference counter
   via get_device()/put_device(). This is currently not implemented.

2. Properly coordinate the lifecycle of the underlying platform
   device (e.g. removal via 'unbind') with active users of that
   JR. This is not currently implemented either.

3. Have extra logic to initialize crypto algorithms after at least one
   JR has been registered (see register_algs() and related code)

Instead of adding extra code to deal with #1 and #2 above, and to open up
the possibility of simplifying #3, convert the driver to not create
platform devices for available JRs and instead treat them as an internal
implementation detail while providing the same API to all of the original
users.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Cc: Chris Healy <cphealy@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: linux-imx@nxp.com
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/crypto/caam/Kconfig      |   5 +-
 drivers/crypto/caam/Makefile     |  15 ++--
 drivers/crypto/caam/caamalg.c    |  10 +--
 drivers/crypto/caam/caamalg_qi.c |   4 +-
 drivers/crypto/caam/caamhash.c   |   6 +-
 drivers/crypto/caam/ctrl.c       |  15 +++-
 drivers/crypto/caam/jr.c         | 136 +++++++------------------------
 drivers/crypto/caam/jr.h         |   1 +
 8 files changed, 62 insertions(+), 130 deletions(-)

diff --git a/drivers/crypto/caam/Kconfig b/drivers/crypto/caam/Kconfig
index 137ed3df0c74..86ab869e4176 100644
--- a/drivers/crypto/caam/Kconfig
+++ b/drivers/crypto/caam/Kconfig
@@ -32,7 +32,7 @@ config CRYPTO_DEV_FSL_CAAM_DEBUG
 	  information in the CAAM driver.
 
 menuconfig CRYPTO_DEV_FSL_CAAM_JR
-	tristate "Freescale CAAM Job Ring driver backend"
+	bool "Freescale CAAM Job Ring driver backend"
 	default y
 	help
 	  Enables the driver module for Job Rings which are part of
@@ -40,9 +40,6 @@ menuconfig CRYPTO_DEV_FSL_CAAM_JR
 	  and Assurance Module (CAAM). This module adds a job ring operation
 	  interface.
 
-	  To compile this driver as a module, choose M here: the module
-	  will be called caam_jr.
-
 if CRYPTO_DEV_FSL_CAAM_JR
 
 config CRYPTO_DEV_FSL_CAAM_RINGSIZE
diff --git a/drivers/crypto/caam/Makefile b/drivers/crypto/caam/Makefile
index 68d5cc0f28e2..ab6c094f8bd8 100644
--- a/drivers/crypto/caam/Makefile
+++ b/drivers/crypto/caam/Makefile
@@ -10,17 +10,18 @@ ccflags-y += -DVERSION=\"\"
 
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_COMMON) += error.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM) += caam.o
-obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_JR) += caam_jr.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_DESC) += caamalg_desc.o
 obj-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API_DESC) += caamhash_desc.o
 
 caam-y := ctrl.o
-caam_jr-y := jr.o key_gen.o
-caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
-caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += caamalg_qi.o
-caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
-caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
-caam_jr-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PKC_API) += caampkc.o pkc_desc.o
+ifneq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_JR),)
+caam-y += jr.o key_gen.o
+caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API) += caamalg.o
+caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += caamalg_qi.o
+caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_AHASH_API) += caamhash.o
+caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_RNG_API) += caamrng.o
+caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_PKC_API) += caampkc.o pkc_desc.o
+endif
 
 caam-$(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI) += qi.o
 ifneq ($(CONFIG_CRYPTO_DEV_FSL_CAAM_CRYPTO_API_QI),)
diff --git a/drivers/crypto/caam/caamalg.c b/drivers/crypto/caam/caamalg.c
index 4cb7d5b281cc..f2230256ef9f 100644
--- a/drivers/crypto/caam/caamalg.c
+++ b/drivers/crypto/caam/caamalg.c
@@ -118,7 +118,7 @@ static int aead_null_set_sh_desc(struct crypto_aead *aead)
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jr->dev;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	u32 *desc;
 	int rem_bytes = CAAM_DESC_BYTES_MAX - AEAD_DESC_JOB_IO_LEN -
 			ctx->adata.keylen_pad;
@@ -171,7 +171,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jr->dev;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	u32 ctx1_iv_off = 0;
 	u32 *desc, *nonce = NULL;
 	u32 inl_mask;
@@ -564,7 +564,7 @@ static int aead_setkey(struct crypto_aead *aead,
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jr->dev;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	struct crypto_authenc_keys keys;
 	int ret = 0;
 
@@ -1223,7 +1223,7 @@ static void init_authenc_job(struct aead_request *req,
 						 struct caam_aead_alg, aead);
 	unsigned int ivsize = crypto_aead_ivsize(aead);
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jr->dev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jr->dev);
 	const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) ==
 			       OP_ALG_AAI_CTR_MOD128);
 	const bool is_rfc3686 = alg->caam.rfc3686;
@@ -3421,7 +3421,7 @@ static int caam_init_common(struct caam_ctx *ctx, struct caam_alg_entry *caam,
 		return PTR_ERR(ctx->jr);
 	}
 
-	priv = dev_get_drvdata(ctx->jr->dev->parent);
+	priv = dev_get_drvdata(ctx->jr->dev);
 	if (priv->era >= 6 && uses_dkp)
 		ctx->dir = DMA_BIDIRECTIONAL;
 	else
diff --git a/drivers/crypto/caam/caamalg_qi.c b/drivers/crypto/caam/caamalg_qi.c
index 31bee401f9e5..b8905e3a9c80 100644
--- a/drivers/crypto/caam/caamalg_qi.c
+++ b/drivers/crypto/caam/caamalg_qi.c
@@ -82,7 +82,7 @@ static int aead_set_sh_desc(struct crypto_aead *aead)
 	const bool ctr_mode = ((ctx->cdata.algtype & OP_ALG_AAI_MASK) ==
 			       OP_ALG_AAI_CTR_MOD128);
 	const bool is_rfc3686 = alg->caam.rfc3686;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(ctx->jrdev);
 
 	if (!ctx->cdata.keylen || !ctx->authsize)
 		return 0;
@@ -189,7 +189,7 @@ static int aead_setkey(struct crypto_aead *aead, const u8 *key,
 {
 	struct caam_ctx *ctx = crypto_aead_ctx(aead);
 	struct device *jrdev = ctx->jrdev;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	struct crypto_authenc_keys keys;
 	int ret = 0;
 
diff --git a/drivers/crypto/caam/caamhash.c b/drivers/crypto/caam/caamhash.c
index 6e4fd5eb833a..94ecad06a120 100644
--- a/drivers/crypto/caam/caamhash.c
+++ b/drivers/crypto/caam/caamhash.c
@@ -224,7 +224,7 @@ static int ahash_set_sh_desc(struct crypto_ahash *ahash)
 	struct caam_hash_ctx *ctx = crypto_ahash_ctx(ahash);
 	int digestsize = crypto_ahash_digestsize(ahash);
 	struct device *jrdev = ctx->jr->dev;
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	u32 *desc;
 
 	ctx->adata.key_virt = ctx->key;
@@ -447,7 +447,7 @@ static int ahash_setkey(struct crypto_ahash *ahash,
 	struct device *jrdev = ctx->jr->dev;
 	int blocksize = crypto_tfm_alg_blocksize(&ahash->base);
 	int digestsize = crypto_ahash_digestsize(ahash);
-	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev->parent);
+	struct caam_drv_private *ctrlpriv = dev_get_drvdata(jrdev);
 	int ret;
 	u8 *hashed_key = NULL;
 
@@ -1839,7 +1839,7 @@ static int caam_hash_cra_init(struct crypto_tfm *tfm)
 		return PTR_ERR(ctx->jr);
 	}
 
-	priv = dev_get_drvdata(ctx->jr->dev->parent);
+	priv = dev_get_drvdata(ctx->jr->dev);
 
 	if (is_xcbc_aes(caam_hash->alg_type)) {
 		ctx->dir = DMA_TO_DEVICE;
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index d7c3c3805693..0fb39bcf638a 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -900,9 +900,18 @@ static int caam_probe(struct platform_device *pdev)
 			    &ctrlpriv->ctl_tdsk_wrap);
 #endif
 
-	ret = devm_of_platform_populate(dev);
-	if (ret)
-		dev_err(dev, "JR platform devices creation error\n");
+	for_each_available_child_of_node(nprop, np) {
+		if (of_device_is_compatible(np, "fsl,sec-v4.0-job-ring") ||
+		    of_device_is_compatible(np, "fsl,sec4.0-job-ring")) {
+			ret = caam_jr_probe(dev, np);
+			if (ret) {
+				dev_err(dev,
+				       "JR platform devices creation error\n");
+				of_node_put(np);
+				break;
+			}
+		}
+	}
 
 	return ret;
 }
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 1e2929b7c6b9..7f0d192b9276 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -65,9 +65,9 @@ static void unregister_algs(void)
 	mutex_unlock(&algs_lock);
 }
 
-static int caam_reset_hw_jr(struct device *dev)
+static int caam_reset_hw_jr(struct caam_drv_private_jr *jrp)
 {
-	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+	struct device *dev = jrp->dev;
 	unsigned int timeout = 100000;
 
 	/*
@@ -105,37 +105,11 @@ static int caam_reset_hw_jr(struct device *dev)
 	return 0;
 }
 
-/*
- * Shutdown JobR independent of platform property code
- */
-static int caam_jr_shutdown(struct device *dev)
-{
-	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
-	int ret;
-
-	ret = caam_reset_hw_jr(dev);
-
-	tasklet_kill(&jrp->irqtask);
-
-	return ret;
-}
-
-static int caam_jr_remove(struct platform_device *pdev)
+static void caam_jr_remove(void *data)
 {
 	int ret;
-	struct device *jrdev;
-	struct caam_drv_private_jr *jrpriv;
-
-	jrdev = &pdev->dev;
-	jrpriv = dev_get_drvdata(jrdev);
-
-	/*
-	 * Return EBUSY if job ring already allocated.
-	 */
-	if (atomic_read(&jrpriv->tfm_count)) {
-		dev_err(jrdev, "Device is busy\n");
-		return -EBUSY;
-	}
+	struct caam_drv_private_jr *jrpriv = data;
+	struct device *jrdev = jrpriv->dev;
 
 	/* Unregister JR-based RNG & crypto algorithms */
 	unregister_algs();
@@ -146,18 +120,18 @@ static int caam_jr_remove(struct platform_device *pdev)
 	spin_unlock(&driver_data.jr_alloc_lock);
 
 	/* Release ring */
-	ret = caam_jr_shutdown(jrdev);
+	ret = caam_reset_hw_jr(jrpriv);
 	if (ret)
 		dev_err(jrdev, "Failed to shut down job ring\n");
 
-	return ret;
+	tasklet_kill(&jrpriv->irqtask);
 }
 
 /* Main per-ring interrupt handler */
 static irqreturn_t caam_jr_interrupt(int irq, void *st_dev)
 {
-	struct device *dev = st_dev;
-	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+	struct caam_drv_private_jr *jrp = st_dev;
+	struct device *dev = jrp->dev;
 	u32 irqstate;
 
 	/*
@@ -195,8 +169,8 @@ static irqreturn_t caam_jr_interrupt(int irq, void *st_dev)
 static void caam_jr_dequeue(unsigned long devarg)
 {
 	int hw_idx, sw_idx, i, head, tail;
-	struct device *dev = (struct device *)devarg;
-	struct caam_drv_private_jr *jrp = dev_get_drvdata(dev);
+	struct caam_drv_private_jr *jrp = (void *)devarg;
+	struct device *dev = jrp->dev;
 	caam_jr_cbk usercall;
 	u32 *userdesc, userstatus;
 	void *userarg;
@@ -418,15 +392,13 @@ EXPORT_SYMBOL(caam_jr_enqueue);
 /*
  * Init JobR independent of platform property detection
  */
-static int caam_jr_init(struct device *dev)
+static int caam_jr_init(struct caam_drv_private_jr *jrp)
 {
-	struct caam_drv_private_jr *jrp;
+	struct device *dev = jrp->dev;
 	dma_addr_t inpbusaddr, outbusaddr;
 	int i, error;
 
-	jrp = dev_get_drvdata(dev);
-
-	error = caam_reset_hw_jr(dev);
+	error = caam_reset_hw_jr(jrp);
 	if (error)
 		return error;
 
@@ -469,11 +441,11 @@ static int caam_jr_init(struct device *dev)
 		      (JOBR_INTC_COUNT_THLD << JRCFG_ICDCT_SHIFT) |
 		      (JOBR_INTC_TIME_THLD << JRCFG_ICTT_SHIFT));
 
-	tasklet_init(&jrp->irqtask, caam_jr_dequeue, (unsigned long)dev);
+	tasklet_init(&jrp->irqtask, caam_jr_dequeue, (unsigned long)jrp);
 
 	/* Connect job ring interrupt handler. */
 	error = devm_request_irq(dev, jrp->irq, caam_jr_interrupt, IRQF_SHARED,
-				 dev_name(dev), dev);
+				 dev_name(dev), jrp);
 	if (error) {
 		dev_err(dev, "can't connect JobR %d interrupt (%d)\n",
 			jrp->ridx, jrp->irq);
@@ -491,36 +463,29 @@ static void caam_jr_irq_dispose_mapping(void *data)
 /*
  * Probe routine for each detected JobR subsystem.
  */
-static int caam_jr_probe(struct platform_device *pdev)
+int caam_jr_probe(struct device *jrdev, struct device_node *nprop)
 {
-	struct device *jrdev;
-	struct device_node *nprop;
 	struct caam_job_ring __iomem *ctrl;
 	struct caam_drv_private_jr *jrpriv;
 	static int total_jobrs;
-	struct resource *r;
+	struct resource r;
 	int error;
 
-	jrdev = &pdev->dev;
 	jrpriv = devm_kmalloc(jrdev, sizeof(*jrpriv), GFP_KERNEL);
 	if (!jrpriv)
 		return -ENOMEM;
 
-	dev_set_drvdata(jrdev, jrpriv);
-
 	/* save ring identity relative to detection */
 	jrpriv->ridx = total_jobrs++;
+	jrpriv->dev = jrdev;
 
-	nprop = pdev->dev.of_node;
-	/* Get configuration properties from device tree */
-	/* First, get register page */
-	r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-	if (!r) {
-		dev_err(jrdev, "platform_get_resource() failed\n");
-		return -ENOMEM;
+	error = of_address_to_resource(nprop, 0, &r);
+	if (error) {
+		dev_err(jrdev, "of_address_to_resource() failed\n");
+		return error;
 	}
 
-	ctrl = devm_ioremap(jrdev, r->start, resource_size(r));
+	ctrl = devm_ioremap(jrdev, r.start, resource_size(&r));
 	if (!ctrl) {
 		dev_err(jrdev, "devm_ioremap() failed\n");
 		return -ENOMEM;
@@ -528,13 +493,6 @@ static int caam_jr_probe(struct platform_device *pdev)
 
 	jrpriv->rregs = (struct caam_job_ring __iomem __force *)ctrl;
 
-	error = dma_set_mask_and_coherent(jrdev, caam_get_dma_mask(jrdev));
-	if (error) {
-		dev_err(jrdev, "dma_set_mask_and_coherent failed (%d)\n",
-			error);
-		return error;
-	}
-
 	/* Identify the interrupt */
 	jrpriv->irq = irq_of_parse_and_map(nprop, 0);
 	if (!jrpriv->irq) {
@@ -548,55 +506,21 @@ static int caam_jr_probe(struct platform_device *pdev)
 		return error;
 
 	/* Now do the platform independent part */
-	error = caam_jr_init(jrdev); /* now turn on hardware */
+	error = caam_jr_init(jrpriv); /* now turn on hardware */
 	if (error)
 		return error;
 
-	jrpriv->dev = jrdev;
 	spin_lock(&driver_data.jr_alloc_lock);
 	list_add_tail(&jrpriv->list_node, &driver_data.jr_list);
 	spin_unlock(&driver_data.jr_alloc_lock);
 
 	atomic_set(&jrpriv->tfm_count, 0);
 
-	register_algs(jrdev->parent);
-
-	return 0;
-}
-
-static const struct of_device_id caam_jr_match[] = {
-	{
-		.compatible = "fsl,sec-v4.0-job-ring",
-	},
-	{
-		.compatible = "fsl,sec4.0-job-ring",
-	},
-	{},
-};
-MODULE_DEVICE_TABLE(of, caam_jr_match);
-
-static struct platform_driver caam_jr_driver = {
-	.driver = {
-		.name = "caam_jr",
-		.of_match_table = caam_jr_match,
-	},
-	.probe       = caam_jr_probe,
-	.remove      = caam_jr_remove,
-};
+	error = devm_add_action_or_reset(jrdev, caam_jr_remove, jrpriv);
+	if (error)
+		return error;
 
-static int __init jr_driver_init(void)
-{
-	return platform_driver_register(&caam_jr_driver);
-}
+	register_algs(jrdev);
 
-static void __exit jr_driver_exit(void)
-{
-	platform_driver_unregister(&caam_jr_driver);
+	return 0;
 }
-
-module_init(jr_driver_init);
-module_exit(jr_driver_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("FSL CAAM JR request backend");
-MODULE_AUTHOR("Freescale Semiconductor - NMG/STC");
diff --git a/drivers/crypto/caam/jr.h b/drivers/crypto/caam/jr.h
index f49caa0ac0ff..ee2aa1798605 100644
--- a/drivers/crypto/caam/jr.h
+++ b/drivers/crypto/caam/jr.h
@@ -18,5 +18,6 @@ struct caam_drv_private_jr *caam_jr_alloc(void);
 void caam_jr_free(struct caam_drv_private_jr *jr);
 int caam_jr_enqueue(struct caam_drv_private_jr *jr, u32 *desc, caam_jr_cbk cbk,
 		    void *areq);
+int caam_jr_probe(struct device *jrdev, struct device_node *nprop);
 
 #endif /* JR_H */
-- 
2.21.0


* [PATCH 5/5] crypto: caam - disable CAAM's bind/unbind attributes
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
                   ` (3 preceding siblings ...)
  2019-11-05 15:13 ` [PATCH 4/5] crypto: caam - do not create a platform devices for JRs Andrey Smirnov
@ 2019-11-05 15:13 ` Andrey Smirnov
  2019-11-06  7:27 ` [PATCH 0/5] CAAM JR lifecycle Vakul Garg
  5 siblings, 0 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-05 15:13 UTC (permalink / raw)
  To: linux-crypto
  Cc: Andrey Smirnov, Chris Healy, Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

Exposing bind/unbind attributes for the CAAM device allows a user to
circumvent the module use counter and remove the underlying device even
while it is still in use by the crypto API. The problem can easily be
reproduced using the following snippet:

$ openssl speed -evp aes-128-cbc -engine afalg &
$ echo 30900000.crypto > /sys/bus/platform/drivers/caam/unbind
[  164.797687] ------------[ cut here ]------------
[  164.802320] kernel BUG at crypto/algapi.c:412!
[  164.806771] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[  164.812260] Modules linked in: crct10dif_ce caam caamhash_desc caamalg_desc error btusb btbcm btintel
[  164.821506] CPU: 1 PID: 2170 Comm: sh Not tainted 5.4.0-rc1 #30
[  164.827428] Hardware name: ZII i.MX8MQ Ultra Zest Board (DT)
[  164.833091] pstate: 20000005 (nzCv daif -PAN -UAO)
[  164.837897] pc : crypto_unregister_alg+0xe4/0xf0
[  164.842520] lr : crypto_unregister_alg+0x8c/0xf0
[  164.847138] sp : ffff8000130f3b20
[  164.850454] x29: ffff8000130f3b20 x28: ffff0000f1131a80
[  164.855771] x27: 0000000000000000 x26: 0000000000000000
[  164.861087] x25: ffff0000fa147ea0 x24: 0000000000000020
[  164.866404] x23: ffff8000130f3c58 x22: ffff8000130f3b58
[  164.871721] x21: ffff800012b787c8 x20: ffff800012be7ef0
[  164.877037] x19: ffff800008ad7300 x18: 000000000000002b
[  164.882353] x17: 0000000000000000 x16: 0000000000000000
[  164.887670] x15: ffff800012b8f4d0 x14: 55980d468eb0c075
[  164.892987] x13: 4375a0958c16498f x12: 27cb4484db878b3d
[  164.898304] x11: c3bdc615f6902956 x10: e030849201295489
[  164.903620] x9 : 00a97e1a31855afa x8 : 00000000000014a5
[  164.908937] x7 : ffff800008ad7310 x6 : ffff8000130f3a60
[  164.914253] x5 : ffff8000130f3af8 x4 : ffff800008ad7310
[  164.919570] x3 : 0000000000000000 x2 : 0000000000000000
[  164.924886] x1 : ffffffffffffffff x0 : 0000000000000002
[  164.930202] Call trace:
[  164.932656]  crypto_unregister_alg+0xe4/0xf0
[  164.936932]  crypto_unregister_skcipher+0x20/0x30
[  164.941662]  caam_algapi_exit+0x84/0xa0 [caam]
[  164.946124]  caam_jr_remove+0x54/0xd0 [caam]
[  164.950401]  devm_action_release+0x20/0x30
[  164.954501]  release_nodes+0x1c8/0x240
[  164.958255]  devres_release_all+0x3c/0x60
[  164.962272]  device_release_driver_internal+0x10c/0x1c0
[  164.967501]  device_driver_detach+0x28/0x40
[  164.971689]  unbind_store+0x94/0x100
[  164.975269]  drv_attr_store+0x40/0x60
[  164.978938]  sysfs_kf_write+0x5c/0x70
[  164.982605]  kernfs_fop_write+0xf4/0x1f0
[  164.986534]  __vfs_write+0x48/0x90
[  164.989941]  vfs_write+0xb8/0x1d0
[  164.993261]  ksys_write+0x74/0x100
[  164.996668]  __arm64_sys_write+0x24/0x30
[  165.000598]  el0_svc_handler+0x94/0x100
[  165.004439]  el0_svc+0x8/0xc
[  165.007329] Code: aa1403e0 97f2d52e 12800020 17fffff5 (d4210000)
[  165.013428] ---[ end trace 11587fd1ef597dd6 ]---
[  165.018138] note: sh[2170] exited with preempt_count 1
[  165.024146] ------------[ cut here ]------------
[  165.028786] WARNING: CPU: 1 PID: 0 at kernel/rcu/tree.c:569 rcu_idle_enter+0x7c/0x90
[  165.048977] Hardware name: ZII i.MX8MQ Ultra Zest Board (DT)
[  165.054640] pstate: 200003c5 (nzCv DAIF -PAN -UAO)
[  165.059435] pc : rcu_idle_enter+0x7c/0x90
[  165.063450] lr : do_idle+0x218/0x2b0
[  165.067027] sp : ffff800012e1bf20
[  165.070343] x29: ffff800012e1bf20 x28: 0000000000000000
[  165.075663] x27: 0000000000000000 x26: 0000000000000000
[  165.080983] x25: 0000000000000000 x24: ffff800012b78884
[  165.089045] x21: ffff800012b78860 x20: 0000000000000002
[  165.094362] x19: ffff800012b787e8 x18: 0000000000000010
[  165.099678] x17: 0000000000000000 x16: 0000000000000001
[  165.104995] x15: ffff0000ff789170 x14: 0000000000000001
[  165.110311] x13: ffff0000ff7a8170 x12: ffff0000fa996cd4
[  165.115628] x11: ffff0000fa996cd4 x10: 0000000000000970
[  165.120945] x9 : ffff800012e1bea0 x8 : ffff0000fa9aa450
[  165.126261] x7 : 0000000000000001 x6 : ffff800012e1bee0
[  165.131577] x5 : 0000000000000001 x4 : ffff800012cc61a8
[  165.136894] x3 : 4000000000000002 x2 : 4000000000000000
[  165.142210] x1 : ffff800012b6edc0 x0 : ffff0000ff789dc0
[  165.147526] Call trace:
[  165.149978]  rcu_idle_enter+0x7c/0x90
[  165.153644]  do_idle+0x218/0x2b0
[  165.156876]  cpu_startup_entry+0x2c/0x50
[  165.160806]  secondary_start_kernel+0x164/0x180
[  165.165339] ---[ end trace 11587fd1ef597dd7 ]---

Remove the bind/unbind attributes of the CAAM device, so that the only
way to remove it at runtime is to remove the underlying kernel module.

Signed-off-by: Andrey Smirnov <andrew.smirnov@gmail.com>
Cc: Chris Healy <cphealy@gmail.com>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Horia Geantă <horia.geanta@nxp.com>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Iuliana Prodan <iuliana.prodan@nxp.com>
Cc: linux-imx@nxp.com
Cc: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
---
 drivers/crypto/caam/ctrl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 0fb39bcf638a..e0c16cd2ce1a 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -920,6 +920,7 @@ static struct platform_driver caam_driver = {
 	.driver = {
 		.name = "caam",
 		.of_match_table = caam_match,
+		.suppress_bind_attrs = true,
 	},
 	.probe       = caam_probe,
 };
-- 
2.21.0


* RE: [PATCH 0/5] CAAM JR lifecycle
  2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
                   ` (4 preceding siblings ...)
  2019-11-05 15:13 ` [PATCH 5/5] crypto: caam - disable CAAM's bind/unbind attributes Andrey Smirnov
@ 2019-11-06  7:27 ` Vakul Garg
  2019-11-06 15:19   ` Andrey Smirnov
  5 siblings, 1 reply; 9+ messages in thread
From: Vakul Garg @ 2019-11-06  7:27 UTC (permalink / raw)
  To: Andrey Smirnov, linux-crypto
  Cc: Chris Healy, Lucas Stach, Horia Geanta, Herbert Xu,
	Iuliana Prodan, dl-linux-imx, linux-kernel



> -----Original Message-----
> From: linux-crypto-owner@vger.kernel.org <linux-crypto-
> owner@vger.kernel.org> On Behalf Of Andrey Smirnov
> Sent: Tuesday, November 5, 2019 8:44 PM
> To: linux-crypto@vger.kernel.org
> Cc: Andrey Smirnov <andrew.smirnov@gmail.com>; Chris Healy
> <cphealy@gmail.com>; Lucas Stach <l.stach@pengutronix.de>; Horia Geanta
> <horia.geanta@nxp.com>; Herbert Xu <herbert@gondor.apana.org.au>;
> Iuliana Prodan <iuliana.prodan@nxp.com>; dl-linux-imx <linux-
> imx@nxp.com>; linux-kernel@vger.kernel.org
> Subject: [PATCH 0/5] CAAM JR lifecycle
> 
> Everyone:
> 
> This series is a different approach to addressing the issues brought up in
> [discussion]. This time the proposition is to get away from creating per-JR
> platfrom device, move all of the underlying code into caam.ko and disable
> manual binding/unbinding of the CAAM device via sysfs. Note that this series
> is a rough cut intented to gauge if this approach could be acceptable for
> upstreaming.
> 
> Thanks,
> Andrey Smirnov
> 
> [discussion] lore.kernel.org/lkml/20190904023515.7107-13-
> andrew.smirnov@gmail.com
> 
> Andrey Smirnov (5):
>   crypto: caam - use static initialization
>   crypto: caam - introduce caam_jr_cbk
>   crypto: caam - convert JR API to use struct caam_drv_private_jr
>   crypto: caam - do not create a platform devices for JRs
>   crypto: caam - disable CAAM's bind/unbind attributes
> 

To access CAAM job rings from DPDK (user-space drivers), we unbind the job ring's platform device from the kernel.
What would be the alternative way to enable job ring drivers in user space?


>  drivers/crypto/caam/Kconfig      |   5 +-
>  drivers/crypto/caam/Makefile     |  15 +--
>  drivers/crypto/caam/caamalg.c    | 114 ++++++++++----------
>  drivers/crypto/caam/caamalg_qi.c |  12 +--
>  drivers/crypto/caam/caamhash.c   | 117 +++++++++++----------
>  drivers/crypto/caam/caampkc.c    |  67 ++++++------
>  drivers/crypto/caam/caampkc.h    |   2 +-
>  drivers/crypto/caam/caamrng.c    |  41 ++++----
>  drivers/crypto/caam/ctrl.c       |  16 ++-
>  drivers/crypto/caam/intern.h     |   3 +-
>  drivers/crypto/caam/jr.c         | 173 ++++++++-----------------------
>  drivers/crypto/caam/jr.h         |  14 ++-
>  drivers/crypto/caam/key_gen.c    |  11 +-
>  drivers/crypto/caam/key_gen.h    |   5 +-
>  14 files changed, 275 insertions(+), 320 deletions(-)
> 
> --
> 2.21.0


* Re: [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr
  2019-11-05 15:13 ` [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr Andrey Smirnov
@ 2019-11-06  9:04   ` kbuild test robot
  0 siblings, 0 replies; 9+ messages in thread
From: kbuild test robot @ 2019-11-06  9:04 UTC (permalink / raw)
  To: Andrey Smirnov
  Cc: kbuild-all, linux-crypto, Andrey Smirnov, Chris Healy,
	Lucas Stach, Horia Geantă,
	Herbert Xu, Iuliana Prodan, linux-imx, linux-kernel

[-- Attachment #1: Type: text/plain, Size: 1887 bytes --]

Hi Andrey,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on cryptodev/master]
[also build test WARNING on v5.4-rc6 next-20191105]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Andrey-Smirnov/CAAM-JR-lifecycle/20191106-090151
base:   https://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
config: x86_64-allyesconfig (attached as .config)
compiler: gcc-7 (Debian 7.4.0-14) 7.4.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add following tag
Reported-by: kbuild test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   In file included from drivers/crypto/caam/caamalg_qi2.c:15:0:
>> drivers/crypto/caam/key_gen.h:44:28: warning: 'struct caam_drv_private_jr' declared inside parameter list will not be visible outside of this definition or declaration
    void split_key_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
                               ^~~~~~~~~~~~~~~~~~~
   drivers/crypto/caam/key_gen.h:47:26: warning: 'struct caam_drv_private_jr' declared inside parameter list will not be visible outside of this definition or declaration
    int gen_split_key(struct caam_drv_private_jr *jr, u8 *key_out,
                             ^~~~~~~~~~~~~~~~~~~

vim +44 drivers/crypto/caam/key_gen.h

    43	
  > 44	void split_key_done(struct caam_drv_private_jr *jr, u32 *desc, u32 err,
    45			    void *context);
    46	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 70262 bytes --]

* Re: [PATCH 0/5] CAAM JR lifecycle
  2019-11-06  7:27 ` [PATCH 0/5] CAAM JR lifecycle Vakul Garg
@ 2019-11-06 15:19   ` Andrey Smirnov
  0 siblings, 0 replies; 9+ messages in thread
From: Andrey Smirnov @ 2019-11-06 15:19 UTC (permalink / raw)
  To: Vakul Garg
  Cc: linux-crypto, Chris Healy, Lucas Stach, Horia Geanta, Herbert Xu,
	Iuliana Prodan, dl-linux-imx, linux-kernel

On Tue, Nov 5, 2019 at 11:27 PM Vakul Garg <vakul.garg@nxp.com> wrote:
>
>
>
> > -----Original Message-----
> > From: linux-crypto-owner@vger.kernel.org <linux-crypto-
> > owner@vger.kernel.org> On Behalf Of Andrey Smirnov
> > Sent: Tuesday, November 5, 2019 8:44 PM
> > To: linux-crypto@vger.kernel.org
> > Cc: Andrey Smirnov <andrew.smirnov@gmail.com>; Chris Healy
> > <cphealy@gmail.com>; Lucas Stach <l.stach@pengutronix.de>; Horia Geanta
> > <horia.geanta@nxp.com>; Herbert Xu <herbert@gondor.apana.org.au>;
> > Iuliana Prodan <iuliana.prodan@nxp.com>; dl-linux-imx <linux-
> > imx@nxp.com>; linux-kernel@vger.kernel.org
> > Subject: [PATCH 0/5] CAAM JR lifecycle
> >
> > Everyone:
> >
> > This series is a different approach to addressing the issues brought up in
> > [discussion]. This time the proposition is to get away from creating per-JR
> > platfrom device, move all of the underlying code into caam.ko and disable
> > manual binding/unbinding of the CAAM device via sysfs. Note that this series
> > is a rough cut intented to gauge if this approach could be acceptable for
> > upstreaming.
> >
> > Thanks,
> > Andrey Smirnov
> >
> > [discussion] lore.kernel.org/lkml/20190904023515.7107-13-
> > andrew.smirnov@gmail.com
> >
> > Andrey Smirnov (5):
> >   crypto: caam - use static initialization
> >   crypto: caam - introduce caam_jr_cbk
> >   crypto: caam - convert JR API to use struct caam_drv_private_jr
> >   crypto: caam - do not create a platform devices for JRs
> >   crypto: caam - disable CAAM's bind/unbind attributes
> >
>
> To access caam jobrings from DPDK (user space drivers), we unbind job-ring's platform device from the kernel.
> What would be the alternate way to enable job ring drivers in user space?
>

Wouldn't either building your kernel with
CONFIG_CRYPTO_DEV_FSL_CAAM_JR=n (this series doesn't currently handle
that correctly, being a rough cut) or disabling specific/all JRs via
the DT accomplish the same goal?
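
[Editor's note] Because caam_probe() in this series iterates job-ring
child nodes with for_each_available_child_of_node(), a node marked
disabled is skipped by the kernel driver. A DT fragment for that might
look like the following; the node label, unit address, and ring layout
are illustrative, not taken from this thread:

```dts
/* Illustrative overlay fragment: hide one job ring from the kernel
 * driver so userspace can claim it; names/addresses are hypothetical. */
&crypto {
	jr1: jr@2000 {
		status = "disabled";
	};
};
```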

Thanks,
Andrey Smirnov

end of thread, back to index

Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-05 15:13 [PATCH 0/5] CAAM JR lifecycle Andrey Smirnov
2019-11-05 15:13 ` [PATCH 1/5] crypto: caam - use static initialization Andrey Smirnov
2019-11-05 15:13 ` [PATCH 2/5] crypto: caam - introduce caam_jr_cbk Andrey Smirnov
2019-11-05 15:13 ` [PATCH 3/5] crypto: caam - convert JR API to use struct caam_drv_private_jr Andrey Smirnov
2019-11-06  9:04   ` kbuild test robot
2019-11-05 15:13 ` [PATCH 4/5] crypto: caam - do not create a platform devices for JRs Andrey Smirnov
2019-11-05 15:13 ` [PATCH 5/5] crypto: caam - disable CAAM's bind/unbind attributes Andrey Smirnov
2019-11-06  7:27 ` [PATCH 0/5] CAAM JR lifecycle Vakul Garg
2019-11-06 15:19   ` Andrey Smirnov
