* [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx
@ 2021-10-13 18:27 Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 01/15] crypto: change sgl to src_sgl in vector Hemant Agrawal
                   ` (14 more replies)
  0 siblings, 15 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev

This patch series adds support for the raw vector API in the
dpaax_sec drivers. It also enhances the raw vector APIs to support
OOP (out-of-place) processing and security protocols.

v2: fix aesni compilation and add release notes.
v3: fix the tot_len patch as per Konstantin's comments.


Gagandeep Singh (11):
  crypto: add total raw buffer length
  crypto: fix raw process for multi-seg case
  crypto/dpaa2_sec: support raw datapath APIs
  crypto/dpaa2_sec: support AUTH only with raw buffer APIs
  crypto/dpaa2_sec: support AUTHENC with raw buffer APIs
  crypto/dpaa2_sec: support AEAD with raw buffer APIs
  crypto/dpaa2_sec: support OOP with raw buffer API
  crypto/dpaa2_sec: enhance error checks with raw buffer APIs
  crypto/dpaa_sec: support raw datapath APIs
  crypto/dpaa_sec: support authonly and chain with raw APIs
  crypto/dpaa_sec: support AEAD and proto with raw APIs

Hemant Agrawal (4):
  crypto: change sgl to src_sgl in vector
  crypto: add dest_sgl in raw vector APIs
  test/crypto: add raw API test for dpaax
  test/crypto: add raw API support in 5G algos

 app/test/test_cryptodev.c                   |  179 +++-
 doc/guides/rel_notes/deprecation.rst        |   12 -
 doc/guides/rel_notes/release_21_11.rst      |    2 +
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c    |   12 +-
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c  |    6 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |   13 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   82 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 1045 ++++++++++++++++++
 drivers/crypto/dpaa2_sec/meson.build        |    3 +-
 drivers/crypto/dpaa_sec/dpaa_sec.c          |   23 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h          |   40 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c   | 1052 +++++++++++++++++++
 drivers/crypto/dpaa_sec/meson.build         |    4 +-
 drivers/crypto/qat/qat_sym_hw_dp.c          |   27 +-
 lib/cryptodev/rte_crypto_sym.h              |   13 +-
 lib/ipsec/misc.h                            |    4 +-
 16 files changed, 2401 insertions(+), 116 deletions(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c

-- 
2.17.1



* [dpdk-dev] [PATCH v3 01/15] crypto: change sgl to src_sgl in vector
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 02/15] crypto: add total raw buffer length Hemant Agrawal
                   ` (13 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev

This patch renames sgl to src_sgl to help differentiate
between the source and destination sgl.
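
For illustration, a minimal sketch (single flat segment; the buffer
variables are hypothetical) of how a caller now fills the renamed
field:

	struct rte_crypto_vec seg = {
		.base = buf, .iova = buf_iova, .len = buf_len,
	};
	struct rte_crypto_sgl sgl = { .vec = &seg, .num = 1 };
	struct rte_crypto_sym_vec vec = { 0 };

	vec.num = 1;
	vec.src_sgl = &sgl;	/* previously vec.sgl */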

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 app/test/test_cryptodev.c                  |  6 ++---
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c   | 12 +++++-----
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c |  6 ++---
 drivers/crypto/qat/qat_sym_hw_dp.c         | 27 +++++++++++++---------
 lib/cryptodev/rte_crypto_sym.h             |  2 +-
 lib/ipsec/misc.h                           |  4 ++--
 6 files changed, 31 insertions(+), 26 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index e6ceeb487f..1e951981c2 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -230,7 +230,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
 	digest.va = NULL;
 	sgl.vec = data_vec;
 	vec.num = 1;
-	vec.sgl = &sgl;
+	vec.src_sgl = &sgl;
 	vec.iv = &cipher_iv;
 	vec.digest = &digest;
 	vec.aad = &aad_auth_iv;
@@ -394,7 +394,7 @@ process_cpu_aead_op(uint8_t dev_id, struct rte_crypto_op *op)
 
 	sgl.vec = vec;
 	sgl.num = n;
-	symvec.sgl = &sgl;
+	symvec.src_sgl = &sgl;
 	symvec.iv = &iv_ptr;
 	symvec.digest = &digest_ptr;
 	symvec.aad = &aad_ptr;
@@ -440,7 +440,7 @@ process_cpu_crypt_auth_op(uint8_t dev_id, struct rte_crypto_op *op)
 
 	sgl.vec = vec;
 	sgl.num = n;
-	symvec.sgl = &sgl;
+	symvec.src_sgl = &sgl;
 	symvec.iv = &iv_ptr;
 	symvec.digest = &digest_ptr;
 	symvec.status = &st;
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 330aad8157..d0368828e9 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -535,7 +535,7 @@ aesni_gcm_sgl_encrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i].va,
+			&vec->src_sgl[i], vec->iv[i].va,
 			vec->aad[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
 			gdata_ctx, vec->digest[i].va);
@@ -554,7 +554,7 @@ aesni_gcm_sgl_decrypt(struct aesni_gcm_session *s,
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
 		aesni_gcm_process_gcm_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i].va,
+			&vec->src_sgl[i], vec->iv[i].va,
 			vec->aad[i].va);
 		 vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
 			gdata_ctx, vec->digest[i].va);
@@ -572,13 +572,13 @@ aesni_gmac_sgl_generate(struct aesni_gcm_session *s,
 
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
-		if (vec->sgl[i].num != 1) {
+		if (vec->src_sgl[i].num != 1) {
 			vec->status[i] = ENOTSUP;
 			continue;
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i].va);
+			&vec->src_sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_encryption(s,
 			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
@@ -595,13 +595,13 @@ aesni_gmac_sgl_verify(struct aesni_gcm_session *s,
 
 	processed = 0;
 	for (i = 0; i < vec->num; ++i) {
-		if (vec->sgl[i].num != 1) {
+		if (vec->src_sgl[i].num != 1) {
 			vec->status[i] = ENOTSUP;
 			continue;
 		}
 
 		aesni_gcm_process_gmac_sgl_op(s, gdata_ctx,
-			&vec->sgl[i], vec->iv[i].va);
+			&vec->src_sgl[i], vec->iv[i].va);
 		vec->status[i] = aesni_gcm_sgl_op_finalize_decryption(s,
 			gdata_ctx, vec->digest[i].va);
 		processed += (vec->status[i] == 0);
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 60963a8208..2419adc699 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -2002,14 +2002,14 @@ aesni_mb_cpu_crypto_process_bulk(struct rte_cryptodev *dev,
 	for (i = 0, j = 0, k = 0; i != vec->num; i++) {
 
 
-		ret = check_crypto_sgl(sofs, vec->sgl + i);
+		ret = check_crypto_sgl(sofs, vec->src_sgl + i);
 		if (ret != 0) {
 			vec->status[i] = ret;
 			continue;
 		}
 
-		buf = vec->sgl[i].vec[0].base;
-		len = vec->sgl[i].vec[0].len;
+		buf = vec->src_sgl[i].vec[0].base;
+		len = vec->src_sgl[i].vec[0].len;
 
 		job = IMB_GET_NEXT_JOB(mb_mgr);
 		if (job == NULL) {
diff --git a/drivers/crypto/qat/qat_sym_hw_dp.c b/drivers/crypto/qat/qat_sym_hw_dp.c
index 36d11e0dc9..12825e448b 100644
--- a/drivers/crypto/qat/qat_sym_hw_dp.c
+++ b/drivers/crypto/qat/qat_sym_hw_dp.c
@@ -181,8 +181,9 @@ qat_sym_dp_enqueue_cipher_jobs(void *qp_data, uint8_t *drv_ctx,
 			(uint8_t *)tx_queue->base_addr + tail);
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
-		data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
-			vec->sgl[i].num);
+		data_len = qat_sym_dp_parse_data_vec(qp, req,
+			vec->src_sgl[i].vec,
+			vec->src_sgl[i].num);
 		if (unlikely(data_len < 0))
 			break;
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
@@ -302,8 +303,9 @@ qat_sym_dp_enqueue_auth_jobs(void *qp_data, uint8_t *drv_ctx,
 			(uint8_t *)tx_queue->base_addr + tail);
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
-		data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
-			vec->sgl[i].num);
+		data_len = qat_sym_dp_parse_data_vec(qp, req,
+			vec->src_sgl[i].vec,
+			vec->src_sgl[i].num);
 		if (unlikely(data_len < 0))
 			break;
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
@@ -484,14 +486,16 @@ qat_sym_dp_enqueue_chain_jobs(void *qp_data, uint8_t *drv_ctx,
 			(uint8_t *)tx_queue->base_addr + tail);
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
-		data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
-			vec->sgl[i].num);
+		data_len = qat_sym_dp_parse_data_vec(qp, req,
+			vec->src_sgl[i].vec,
+			vec->src_sgl[i].num);
 		if (unlikely(data_len < 0))
 			break;
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
-		if (unlikely(enqueue_one_chain_job(ctx, req, vec->sgl[i].vec,
-			vec->sgl[i].num, &vec->iv[i], &vec->digest[i],
-				&vec->auth_iv[i], ofs, (uint32_t)data_len)))
+		if (unlikely(enqueue_one_chain_job(ctx, req,
+			vec->src_sgl[i].vec, vec->src_sgl[i].num,
+			&vec->iv[i], &vec->digest[i],
+			&vec->auth_iv[i], ofs, (uint32_t)data_len)))
 			break;
 
 		tail = (tail + tx_queue->msg_size) & tx_queue->modulo_mask;
@@ -688,8 +692,9 @@ qat_sym_dp_enqueue_aead_jobs(void *qp_data, uint8_t *drv_ctx,
 			(uint8_t *)tx_queue->base_addr + tail);
 		rte_mov128((uint8_t *)req, (const uint8_t *)&(ctx->fw_req));
 
-		data_len = qat_sym_dp_parse_data_vec(qp, req, vec->sgl[i].vec,
-			vec->sgl[i].num);
+		data_len = qat_sym_dp_parse_data_vec(qp, req,
+			vec->src_sgl[i].vec,
+			vec->src_sgl[i].num);
 		if (unlikely(data_len < 0))
 			break;
 		req->comn_mid.opaque_data = (uint64_t)(uintptr_t)user_data[i];
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 58c0724743..dcc0bd5933 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -69,7 +69,7 @@ struct rte_crypto_sym_vec {
 	/** number of operations to perform */
 	uint32_t num;
 	/** array of SGL vectors */
-	struct rte_crypto_sgl *sgl;
+	struct rte_crypto_sgl *src_sgl;
 	/** array of pointers to cipher IV */
 	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
diff --git a/lib/ipsec/misc.h b/lib/ipsec/misc.h
index 79b9a20762..58ff538141 100644
--- a/lib/ipsec/misc.h
+++ b/lib/ipsec/misc.h
@@ -136,7 +136,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 		/* not enough space in vec[] to hold all segments */
 		if (vcnt < 0) {
 			/* fill the request structure */
-			symvec.sgl = &vecpkt[j];
+			symvec.src_sgl = &vecpkt[j];
 			symvec.iv = &iv[j];
 			symvec.digest = &dgst[j];
 			symvec.aad = &aad[j];
@@ -160,7 +160,7 @@ cpu_crypto_bulk(const struct rte_ipsec_session *ss,
 	}
 
 	/* fill the request structure */
-	symvec.sgl = &vecpkt[j];
+	symvec.src_sgl = &vecpkt[j];
 	symvec.iv = &iv[j];
 	symvec.aad = &aad[j];
 	symvec.digest = &dgst[j];
-- 
2.17.1



* [dpdk-dev] [PATCH v3 02/15] crypto: add total raw buffer length
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 01/15] crypto: change sgl to src_sgl in vector Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:35   ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 03/15] crypto: add dest_sgl in raw vector APIs Hemant Agrawal
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

The current crypto raw data vectors are extended to support
rte_security use cases, where the total buffer length is needed to
know how much additional memory space is available in the buffer
beyond the data length, so that the driver/HW can write expanded-size
data after encryption.
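
As a hedged illustration (the index i and the surrounding loop are
hypothetical), a driver can derive the writable tailroom of a segment
from the new field, ignoring any headroom before the data:

	/* space available beyond the valid data in this segment */
	uint32_t tailroom = vec[i].tot_len - vec[i].len;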

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 7 -------
 lib/cryptodev/rte_crypto_sym.h       | 6 ++++++
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index f3c998a655..4b26ef6747 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -211,13 +211,6 @@ Deprecation Notices
   This field will be null for inplace processing.
   This change is targeted for DPDK 21.11.
 
-* cryptodev: The structure ``rte_crypto_vec`` would be updated to add
-  ``tot_len`` to support total buffer length.
-  This is required for security cases like IPsec and PDCP encryption offload
-  to know how much additional memory space is available in buffer other than
-  data length so that driver/HW can write expanded size data after encryption.
-  This change is targeted for DPDK 21.11.
-
 * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
   ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
   session and the private data of session. An opaque pointer can be exposed
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index dcc0bd5933..e5cef1fb72 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -37,6 +37,8 @@ struct rte_crypto_vec {
 	rte_iova_t iova;
 	/** length of the data buffer */
 	uint32_t len;
+	/** total buffer length */
+	uint32_t tot_len;
 };
 
 /**
@@ -980,12 +982,14 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 	seglen = mb->data_len - ofs;
 	if (len <= seglen) {
 		vec[0].len = len;
+		vec[0].tot_len = mb->buf_len;
 		return 1;
 	}
 
 	/* data spread across segments */
 	vec[0].len = seglen;
 	left = len - seglen;
+	vec[0].tot_len = mb->buf_len;
 	for (i = 1, nseg = mb->next; nseg != NULL; nseg = nseg->next, i++) {
 
 		vec[i].base = rte_pktmbuf_mtod(nseg, void *);
@@ -995,6 +999,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 		if (left <= seglen) {
 			/* whole requested data is completed */
 			vec[i].len = left;
+			vec[i].tot_len = mb->buf_len;
 			left = 0;
 			break;
 		}
@@ -1002,6 +1007,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 		/* use whole segment */
 		vec[i].len = seglen;
 		left -= seglen;
+		vec[i].tot_len = mb->buf_len;
 	}
 
 	RTE_ASSERT(left == 0);
-- 
2.17.1



* [dpdk-dev] [PATCH v3 03/15] crypto: add dest_sgl in raw vector APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 01/15] crypto: change sgl to src_sgl in vector Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 02/15] crypto: add total raw buffer length Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 04/15] crypto: fix raw process for multi-seg case Hemant Agrawal
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev

The structure rte_crypto_sym_vec is updated to
add dest_sgl to support out-of-place processing.
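
A minimal sketch (assuming src_sgl/dst_sgl are already populated) of
selecting in-place vs. out-of-place processing with the new field:

	vec.src_sgl = &src_sgl;
	vec.dest_sgl = &dst_sgl;	/* out of place: output written here */
	/* vec.dest_sgl = NULL;		in place: output overwrites input */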

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 doc/guides/rel_notes/deprecation.rst | 5 -----
 lib/cryptodev/rte_crypto_sym.h       | 2 ++
 2 files changed, 2 insertions(+), 5 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4b26ef6747..b978843471 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -206,11 +206,6 @@ Deprecation Notices
   has a limited size ``uint16_t``.
   It will be moved and extended as ``uint32_t`` in DPDK 21.11.
 
-* cryptodev: The structure ``rte_crypto_sym_vec`` would be updated to add
-  ``dest_sgl`` to support out of place processing.
-  This field will be null for inplace processing.
-  This change is targeted for DPDK 21.11.
-
 * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
   ``rte_cryptodev_asym_session`` to remove unnecessary indirection between
   session and the private data of session. An opaque pointer can be exposed
diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index e5cef1fb72..978708845f 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -72,6 +72,8 @@ struct rte_crypto_sym_vec {
 	uint32_t num;
 	/** array of SGL vectors */
 	struct rte_crypto_sgl *src_sgl;
+	/** array of SGL vectors for OOP, keep it NULL for in-place */
+	struct rte_crypto_sgl *dest_sgl;
 	/** array of pointers to cipher IV */
 	struct rte_crypto_va_iova_ptr *iv;
 	/** array of pointers to digest */
-- 
2.17.1



* [dpdk-dev] [PATCH v3 04/15] crypto: fix raw process for multi-seg case
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (2 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 03/15] crypto: add dest_sgl in raw vector APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 05/15] crypto/dpaa2_sec: support raw datapath APIs Hemant Agrawal
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil
  Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh,
	marcinx.smoczynski, stable

From: Gagandeep Singh <g.singh@nxp.com>

If no next segment is available, the "for" loop exits and the function
still returns i + 1 (i.e. 2), which is wrong, as it has filled only 1
buffer.
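
A worked trace of the fix, assuming a single-segment mbuf whose data
does not cover the requested length:

	/* vec[0] is filled, then the loop condition (mb->next != NULL)
	 * is false and the loop body never runs, leaving i == 1.
	 * old: return i + 1;  -> 2, but only 1 vector was filled
	 * new: i++ happens only after vec[i] is filled, so return i
	 *      is correct on both exit paths.
	 */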

Fixes: 7adf992fb9bf ("cryptodev: introduce CPU crypto API")
Cc: marcinx.smoczynski@intel.com
Cc: stable@dpdk.org

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 lib/cryptodev/rte_crypto_sym.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
index 978708845f..a48228a646 100644
--- a/lib/cryptodev/rte_crypto_sym.h
+++ b/lib/cryptodev/rte_crypto_sym.h
@@ -1003,6 +1003,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 			vec[i].len = left;
 			vec[i].tot_len = mb->buf_len;
 			left = 0;
+			i++;
 			break;
 		}
 
@@ -1013,7 +1014,7 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf *mb, uint32_t ofs, uint32_t len,
 	}
 
 	RTE_ASSERT(left == 0);
-	return i + 1;
+	return i;
 }
 
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 05/15] crypto/dpaa2_sec: support raw datapath APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (3 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 04/15] crypto: fix raw process for multi-seg case Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 06/15] crypto/dpaa2_sec: support AUTH only with raw buffer APIs Hemant Agrawal
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch adds the framework for raw API support.
This initial patch exercises only the cipher-only part.
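
For context, a hedged sketch of the generic application-side sequence
these driver ops plug into (device, queue-pair and session setup,
includes, and error handling are omitted; dev_id, qp_id, sess_ctx,
vec, ofs and user_data are assumed to exist):

	int ctx_size = rte_cryptodev_get_raw_dp_ctx_size(dev_id);
	struct rte_crypto_raw_dp_ctx *ctx = rte_zmalloc(NULL, ctx_size, 0);
	int enq_status;
	uint32_t n;

	rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, ctx,
			RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, 0);
	n = rte_cryptodev_raw_enqueue_burst(ctx, &vec, ofs,
			user_data, &enq_status);
	rte_cryptodev_raw_enqueue_done(ctx, n);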

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 doc/guides/rel_notes/release_21_11.rst      |   1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  13 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  60 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 595 ++++++++++++++++++++
 drivers/crypto/dpaa2_sec/meson.build        |   3 +-
 5 files changed, 643 insertions(+), 29 deletions(-)
 create mode 100644 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index a8900a3079..b1049a92e3 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -106,6 +106,7 @@ New Features
 * **Updated NXP dpaa2_sec crypto PMD.**
 
   * Added PDCP short MAC-I support.
+  * Added raw vector datapath API support.
 
 * **Updated the turbo_sw bbdev PMD.**
 
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index dfa72f3f93..4eb3615250 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -49,15 +49,8 @@
 #define FSL_MC_DPSECI_DEVID     3
 
 #define NO_PREFETCH 0
-/* FLE_POOL_NUM_BUFS is set as per the ipsec-secgw application */
-#define FLE_POOL_NUM_BUFS	32000
-#define FLE_POOL_BUF_SIZE	256
-#define FLE_POOL_CACHE_SIZE	512
-#define FLE_SG_MEM_SIZE(num)	(FLE_POOL_BUF_SIZE + ((num) * 32))
-#define SEC_FLC_DHR_OUTBOUND	-114
-#define SEC_FLC_DHR_INBOUND	0
 
-static uint8_t cryptodev_driver_id;
+uint8_t cryptodev_driver_id;
 
 #ifdef RTE_LIB_SECURITY
 static inline int
@@ -3828,6 +3821,9 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.sym_session_get_size     = dpaa2_sec_sym_session_get_size,
 	.sym_session_configure    = dpaa2_sec_sym_session_configure,
 	.sym_session_clear        = dpaa2_sec_sym_session_clear,
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = dpaa2_sec_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = dpaa2_sec_configure_raw_dp_ctx,
 };
 
 #ifdef RTE_LIB_SECURITY
@@ -3910,6 +3906,7 @@ dpaa2_sec_dev_init(struct rte_cryptodev *cryptodev)
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index 8dee0a4bda..e9b888186e 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -15,6 +15,16 @@
 #define CRYPTODEV_NAME_DPAA2_SEC_PMD	crypto_dpaa2_sec
 /**< NXP DPAA2 - SEC PMD device name */
 
+extern uint8_t cryptodev_driver_id;
+
+/* FLE_POOL_NUM_BUFS is set as per the ipsec-secgw application */
+#define FLE_POOL_NUM_BUFS	32000
+#define FLE_POOL_BUF_SIZE	256
+#define FLE_POOL_CACHE_SIZE	512
+#define FLE_SG_MEM_SIZE(num)	(FLE_POOL_BUF_SIZE + ((num) * 32))
+#define SEC_FLC_DHR_OUTBOUND	-114
+#define SEC_FLC_DHR_INBOUND	0
+
 #define MAX_QUEUES		64
 #define MAX_DESC_SIZE		64
 /** private data structure for each DPAA2_SEC device */
@@ -158,6 +168,24 @@ struct dpaa2_pdcp_ctxt {
 	uint32_t hfn_threshold;	/*!< HFN Threashold for key renegotiation */
 };
 #endif
+
+typedef int (*dpaa2_sec_build_fd_t)(
+	void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data);
+
+typedef int (*dpaa2_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd);
+
 typedef struct dpaa2_sec_session_entry {
 	void *ctxt;
 	uint8_t ctxt_type;
@@ -165,6 +193,8 @@ typedef struct dpaa2_sec_session_entry {
 	enum rte_crypto_cipher_algorithm cipher_alg; /*!< Cipher Algorithm*/
 	enum rte_crypto_auth_algorithm auth_alg; /*!< Authentication Algorithm*/
 	enum rte_crypto_aead_algorithm aead_alg; /*!< AEAD Algorithm*/
+	dpaa2_sec_build_fd_t build_fd;
+	dpaa2_sec_build_raw_dp_fd_t build_raw_dp_fd;
 	union {
 		struct {
 			uint8_t *data;	/**< pointer to key data */
@@ -547,26 +577,6 @@ static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 			}, }
 		}, }
 	},
-	{	/* NULL (CIPHER) */
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-		{.sym = {
-			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
-			{.cipher = {
-				.algo = RTE_CRYPTO_CIPHER_NULL,
-				.block_size = 1,
-				.key_size = {
-					.min = 0,
-					.max = 0,
-					.increment = 0
-				},
-				.iv_size = {
-					.min = 0,
-					.max = 0,
-					.increment = 0
-				}
-			}, },
-		}, }
-	},
 	{	/* AES CBC */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
 		{.sym = {
@@ -983,4 +993,14 @@ calc_chksum(void *buffer, int len)
 	return  result;
 }
 
+int
+dpaa2_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
+
+int
+dpaa2_sec_get_dp_ctx_size(struct rte_cryptodev *dev);
+
+
 #endif /* _DPAA2_SEC_PMD_PRIVATE_H_ */
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
new file mode 100644
index 0000000000..8925c8e938
--- /dev/null
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -0,0 +1,595 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <cryptodev_pmd.h>
+#include <rte_fslmc.h>
+#include <fslmc_vfio.h>
+#include <dpaa2_hw_pvt.h>
+#include <dpaa2_hw_dpio.h>
+
+#include "dpaa2_sec_priv.h"
+#include "dpaa2_sec_logs.h"
+
+struct dpaa2_sec_raw_dp_ctx {
+	dpaa2_sec_session *session;
+	uint32_t tail;
+	uint32_t head;
+	uint16_t cached_enqueue;
+	uint16_t cached_dequeue;
+};
+
+static int
+build_raw_dp_chain_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
+
+	return 0;
+}
+
+static int
+build_raw_dp_aead_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
+
+	return 0;
+}
+
+static int
+build_raw_dp_auth_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
+
+	return 0;
+}
+
+static int
+build_raw_dp_proto_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
+
+	return 0;
+}
+
+static int
+build_raw_dp_proto_compound_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
+
+	return 0;
+}
+
+static int
+build_raw_dp_cipher_fd(uint8_t *drv_ctx,
+		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_va_iova_ptr *iv,
+		       struct rte_crypto_va_iova_ptr *digest,
+		       struct rte_crypto_va_iova_ptr *auth_iv,
+		       union rte_crypto_sym_ofs ofs,
+		       void *userdata,
+		       struct qbman_fd *fd)
+{
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct qbman_fle *ip_fle, *op_fle, *sge, *fle;
+	int total_len = 0, data_len = 0, data_offset;
+	struct sec_flow_context *flc;
+	struct ctxt_priv *priv = sess->ctxt;
+	unsigned int i;
+
+	for (i = 0; i < sgl->num; i++)
+		total_len += sgl->vec[i].len;
+
+	data_len = total_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+	data_offset = ofs.ofs.cipher.head;
+
+	if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
+		sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
+		if ((data_len & 7) || (data_offset & 7)) {
+			DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
+			return -ENOTSUP;
+		}
+
+		data_len = data_len >> 3;
+		data_offset = data_offset >> 3;
+	}
+
+	/* first FLE entry used to store mbuf and session ctxt */
+	fle = (struct qbman_fle *)rte_malloc(NULL,
+			FLE_SG_MEM_SIZE(2*sgl->num),
+			RTE_CACHE_LINE_SIZE);
+	if (!fle) {
+		DPAA2_SEC_ERR("RAW CIPHER SG: Memory alloc failed for SGE");
+		return -ENOMEM;
+	}
+	memset(fle, 0, FLE_SG_MEM_SIZE(2*sgl->num));
+	/* first FLE entry used to store userdata and session ctxt */
+	DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+	DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+	op_fle = fle + 1;
+	ip_fle = fle + 2;
+	sge = fle + 3;
+
+	flc = &priv->flc_desc[0].flc;
+
+	DPAA2_SEC_DP_DEBUG(
+		"RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d\n",
+		data_offset,
+		data_len,
+		sess->iv.length);
+
+	/* o/p fle */
+	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+	op_fle->length = data_len;
+	DPAA2_SET_FLE_SG_EXT(op_fle);
+
+	/* o/p 1st seg */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	sge->length = sgl->vec[0].len - data_offset;
+
+	/* o/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	DPAA2_SEC_DP_DEBUG(
+		"RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+		flc, fle, fle->addr_hi, fle->addr_lo,
+		fle->length);
+
+	/* i/p fle */
+	sge++;
+	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+	ip_fle->length = sess->iv.length + data_len;
+	DPAA2_SET_FLE_SG_EXT(ip_fle);
+
+	/* i/p IV */
+	DPAA2_SET_FLE_ADDR(sge, iv->iova);
+	DPAA2_SET_FLE_OFFSET(sge, 0);
+	sge->length = sess->iv.length;
+
+	sge++;
+
+	/* i/p 1st seg */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+	sge->length = sgl->vec[0].len - data_offset;
+
+	/* i/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(ip_fle);
+
+	/* sg fd */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+	DPAA2_SET_FD_LEN(fd, ip_fle->length);
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	DPAA2_SEC_DP_DEBUG(
+		"RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d\n",
+		DPAA2_GET_FD_ADDR(fd),
+		DPAA2_GET_FD_OFFSET(fd),
+		DPAA2_GET_FD_LEN(fd));
+
+	return 0;
+}
+
+static __rte_always_inline uint32_t
+dpaa2_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+	void *user_data[], int *status)
+{
+	RTE_SET_USED(user_data);
+	uint32_t loop;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send, retry_count;
+	struct qbman_eq_desc eqdesc;
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp_data;
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct qbman_swp *swp;
+	uint16_t num_tx = 0;
+	uint32_t flags[MAX_TX_RING_SLOTS] = {0};
+
+	if (unlikely(vec->num == 0))
+		return 0;
+
+	if (sess == NULL) {
+		DPAA2_SEC_ERR("sessionless raw crypto not supported");
+		return 0;
+	}
+	/*Prepare enqueue descriptor*/
+	qbman_eq_desc_clear(&eqdesc);
+	qbman_eq_desc_set_no_orp(&eqdesc, DPAA2_EQ_RESP_ERR_FQ);
+	qbman_eq_desc_set_response(&eqdesc, 0, 0);
+	qbman_eq_desc_set_fq(&eqdesc, dpaa2_qp->tx_vq.fqid);
+
+	if (!DPAA2_PER_LCORE_DPIO) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	while (vec->num) {
+		frames_to_send = (vec->num > dpaa2_eqcr_size) ?
+			dpaa2_eqcr_size : vec->num;
+
+		for (loop = 0; loop < frames_to_send; loop++) {
+			/*Clear the unused FD fields before sending*/
+			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
+			ret = sess->build_raw_dp_fd(drv_ctx,
+						    &vec->src_sgl[loop],
+						    &vec->iv[loop],
+						    &vec->digest[loop],
+						    &vec->auth_iv[loop],
+						    ofs,
+						    user_data[loop],
+						    &fd_arr[loop]);
+			if (ret) {
+				DPAA2_SEC_ERR("error: Improper packet contents"
+					      " for crypto operation");
+				goto skip_tx;
+			}
+			status[loop] = 1;
+		}
+
+		loop = 0;
+		retry_count = 0;
+		while (loop < frames_to_send) {
+			ret = qbman_swp_enqueue_multiple(swp, &eqdesc,
+							 &fd_arr[loop],
+							 &flags[loop],
+							 frames_to_send - loop);
+			if (unlikely(ret < 0)) {
+				retry_count++;
+				if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
+					num_tx += loop;
+					vec->num -= loop;
+					goto skip_tx;
+				}
+			} else {
+				loop += ret;
+				retry_count = 0;
+			}
+		}
+
+		num_tx += loop;
+		vec->num -= loop;
+	}
+skip_tx:
+	dpaa2_qp->tx_vq.tx_pkts += num_tx;
+	dpaa2_qp->tx_vq.err_pkts += vec->num;
+
+	return num_tx;
+}
+
+static __rte_always_inline int
+dpaa2_sec_raw_enqueue(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(data_vec);
+	RTE_SET_USED(n_data_vecs);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(aad_or_auth_iv);
+	RTE_SET_USED(user_data);
+
+	return 0;
+}
+
+static inline void *
+sec_fd_to_userdata(const struct qbman_fd *fd)
+{
+	struct qbman_fle *fle;
+	void *userdata;
+	fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
+
+	DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+			   fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
+	userdata = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1));
+	/* free the fle memory */
+	rte_free((void *)(fle-1));
+
+	return userdata;
+}
+
+static __rte_always_inline uint32_t
+dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
+	rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+	uint32_t max_nb_to_dequeue,
+	rte_cryptodev_raw_post_dequeue_t post_dequeue,
+	void **out_user_data, uint8_t is_user_data_array,
+	uint32_t *n_success, int *dequeue_status)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(get_dequeue_count);
+
+	/* Function is responsible for receiving frames for a given device and VQ */
+	struct dpaa2_sec_qp *dpaa2_qp = (struct dpaa2_sec_qp *)qp_data;
+	struct qbman_result *dq_storage;
+	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
+	int ret, num_rx = 0;
+	uint8_t is_last = 0, status;
+	struct qbman_swp *swp;
+	const struct qbman_fd *fd;
+	struct qbman_pull_desc pulldesc;
+	void *user_data;
+	uint32_t nb_ops = max_nb_to_dequeue;
+
+	if (!DPAA2_PER_LCORE_DPIO) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_SEC_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+	dq_storage = dpaa2_qp->rx_vq.q_storage->dq_storage[0];
+
+	qbman_pull_desc_clear(&pulldesc);
+	qbman_pull_desc_set_numframes(&pulldesc,
+				      (nb_ops > dpaa2_dqrr_size) ?
+				      dpaa2_dqrr_size : nb_ops);
+	qbman_pull_desc_set_fq(&pulldesc, fqid);
+	qbman_pull_desc_set_storage(&pulldesc, dq_storage,
+				    (uint64_t)DPAA2_VADDR_TO_IOVA(dq_storage),
+				    1);
+
+	/*Issue a volatile dequeue command. */
+	while (1) {
+		if (qbman_swp_pull(swp, &pulldesc)) {
+			DPAA2_SEC_WARN(
+				"SEC VDQ command is not issued : QBMAN busy");
+			/* Portal was busy, try again */
+			continue;
+		}
+		break;
+	};
+
+	/* Receive the packets till Last Dequeue entry is found with
+	 * respect to the above issued PULL command.
+	 */
+	while (!is_last) {
+		/* Check if the previous issued command is completed.
+		 * Also seems like the SWP is shared between the Ethernet Driver
+		 * and the SEC driver.
+		 */
+		while (!qbman_check_command_complete(dq_storage))
+			;
+
+		/* Loop until the dq_storage is updated with
+		 * new token by QBMAN
+		 */
+		while (!qbman_check_new_result(dq_storage))
+			;
+		/* Check whether Last Pull command is Expired and
+		 * setting Condition for Loop termination
+		 */
+		if (qbman_result_DQ_is_pull_complete(dq_storage)) {
+			is_last = 1;
+			/* Check for valid frame. */
+			status = (uint8_t)qbman_result_DQ_flags(dq_storage);
+			if (unlikely(
+				(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
+				DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+				continue;
+			}
+		}
+
+		fd = qbman_result_DQ_fd(dq_storage);
+		user_data = sec_fd_to_userdata(fd);
+		if (is_user_data_array)
+			out_user_data[num_rx] = user_data;
+		else
+			out_user_data[0] = user_data;
+		if (unlikely(fd->simple.frc)) {
+			/* TODO Parse SEC errors */
+			DPAA2_SEC_ERR("SEC returned Error - %x",
+				      fd->simple.frc);
+			status = RTE_CRYPTO_OP_STATUS_ERROR;
+		} else {
+			status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+		}
+		post_dequeue(user_data, num_rx, status);
+
+		num_rx++;
+		dq_storage++;
+	} /* End of Packet Rx loop */
+
+	dpaa2_qp->rx_vq.rx_pkts += num_rx;
+	*dequeue_status = 1;
+	*n_success = num_rx;
+
+	DPAA2_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+	/*Return the total number of packets received to DPAA2 app*/
+	return num_rx;
+}
+
+static __rte_always_inline void *
+dpaa2_sec_raw_dequeue(void *qp_data, uint8_t *drv_ctx, int *dequeue_status,
+		enum rte_crypto_op_status *op_status)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(dequeue_status);
+	RTE_SET_USED(op_status);
+
+	return NULL;
+}
+
+static __rte_always_inline int
+dpaa2_sec_raw_enqueue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+static __rte_always_inline int
+dpaa2_sec_raw_dequeue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+int
+dpaa2_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
+{
+	dpaa2_sec_session *sess;
+	struct dpaa2_sec_raw_dp_ctx *dp_ctx;
+	RTE_SET_USED(qp_id);
+
+	if (!is_update) {
+		memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx));
+		raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id];
+	}
+
+	if (sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+		sess = (dpaa2_sec_session *)get_sec_session_private_data(
+				session_ctx.sec_sess);
+	else if (sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+		sess = (dpaa2_sec_session *)get_sym_session_private_data(
+			session_ctx.crypto_sess, cryptodev_driver_id);
+	else
+		return -ENOTSUP;
+	raw_dp_ctx->dequeue_burst = dpaa2_sec_raw_dequeue_burst;
+	raw_dp_ctx->dequeue = dpaa2_sec_raw_dequeue;
+	raw_dp_ctx->dequeue_done = dpaa2_sec_raw_dequeue_done;
+	raw_dp_ctx->enqueue_burst = dpaa2_sec_raw_enqueue_burst;
+	raw_dp_ctx->enqueue = dpaa2_sec_raw_enqueue;
+	raw_dp_ctx->enqueue_done = dpaa2_sec_raw_enqueue_done;
+
+	if (sess->ctxt_type == DPAA2_SEC_CIPHER_HASH)
+		sess->build_raw_dp_fd = build_raw_dp_chain_fd;
+	else if (sess->ctxt_type == DPAA2_SEC_AEAD)
+		sess->build_raw_dp_fd = build_raw_dp_aead_fd;
+	else if (sess->ctxt_type == DPAA2_SEC_AUTH)
+		sess->build_raw_dp_fd = build_raw_dp_auth_fd;
+	else if (sess->ctxt_type == DPAA2_SEC_CIPHER)
+		sess->build_raw_dp_fd = build_raw_dp_cipher_fd;
+	else if (sess->ctxt_type == DPAA2_SEC_IPSEC)
+		sess->build_raw_dp_fd = build_raw_dp_proto_fd;
+	else if (sess->ctxt_type == DPAA2_SEC_PDCP)
+		sess->build_raw_dp_fd = build_raw_dp_proto_compound_fd;
+	else
+		return -ENOTSUP;
+	dp_ctx = (struct dpaa2_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+	dp_ctx->session = sess;
+
+	return 0;
+}
+
+int
+dpaa2_sec_get_dp_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+	return sizeof(struct dpaa2_sec_raw_dp_ctx);
+}
diff --git a/drivers/crypto/dpaa2_sec/meson.build b/drivers/crypto/dpaa2_sec/meson.build
index ea1d73a13d..e6e5abb3c1 100644
--- a/drivers/crypto/dpaa2_sec/meson.build
+++ b/drivers/crypto/dpaa2_sec/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018,2021 NXP
 
 if not is_linux
     build = false
@@ -9,6 +9,7 @@ endif
 deps += ['security', 'mempool_dpaa2']
 sources = files(
         'dpaa2_sec_dpseci.c',
+	'dpaa2_sec_raw_dp.c',
         'mc/dpseci.c',
 )
 
-- 
2.17.1



* [dpdk-dev] [PATCH v3 06/15] crypto/dpaa2_sec: support AUTH only with raw buffer APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (4 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 05/15] crypto/dpaa2_sec: support raw datapath APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 07/15] crypto/dpaa2_sec: support AUTHENC " Hemant Agrawal
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch adds support for auth-only operations with the raw buffer APIs.
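
A small hedged sketch (hdr_len is hypothetical) of the offsets an
application passes for an auth-only operation, matching the
ofs.ofs.auth fields consumed by the builder:

	union rte_crypto_sym_ofs ofs;

	ofs.raw = 0;
	ofs.ofs.auth.head = hdr_len;	/* bytes skipped before hashing */
	ofs.ofs.auth.tail = 0;		/* nothing trimmed at the end */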

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |  21 ----
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 114 ++++++++++++++++++--
 2 files changed, 108 insertions(+), 27 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index e9b888186e..f397b756e8 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -231,27 +231,6 @@ typedef struct dpaa2_sec_session_entry {
 
 static const struct rte_cryptodev_capabilities dpaa2_sec_capabilities[] = {
 	/* Symmetric capabilities */
-	{	/* NULL (AUTH) */
-		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
-		{.sym = {
-			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
-			{.auth = {
-				.algo = RTE_CRYPTO_AUTH_NULL,
-				.block_size = 1,
-				.key_size = {
-					.min = 0,
-					.max = 0,
-					.increment = 0
-				},
-				.digest_size = {
-					.min = 0,
-					.max = 0,
-					.increment = 0
-				},
-				.iv_size = { 0 }
-			}, },
-		}, },
-	},
 	{	/* MD5 */
 		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
 		{.sym = {
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 8925c8e938..471c81b9e7 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -11,6 +11,8 @@
 #include "dpaa2_sec_priv.h"
 #include "dpaa2_sec_logs.h"
 
+#include <desc/algo.h>
+
 struct dpaa2_sec_raw_dp_ctx {
 	dpaa2_sec_session *session;
 	uint32_t tail;
@@ -73,14 +75,114 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		       void *userdata,
 		       struct qbman_fd *fd)
 {
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
 	RTE_SET_USED(iv);
-	RTE_SET_USED(digest);
 	RTE_SET_USED(auth_iv);
-	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
-	RTE_SET_USED(fd);
+
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+	struct sec_flow_context *flc;
+	int total_len = 0, data_len = 0, data_offset;
+	uint8_t *old_digest;
+	struct ctxt_priv *priv = sess->ctxt;
+	unsigned int i;
+
+	for (i = 0; i < sgl->num; i++)
+		total_len += sgl->vec[i].len;
+
+	data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+	data_offset = ofs.ofs.auth.head;
+
+	if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
+		sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+		if ((data_len & 7) || (data_offset & 7)) {
+			DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
+			return -ENOTSUP;
+		}
+
+		data_len = data_len >> 3;
+		data_offset = data_offset >> 3;
+	}
+	fle = (struct qbman_fle *)rte_malloc(NULL,
+		FLE_SG_MEM_SIZE(2 * sgl->num),
+			RTE_CACHE_LINE_SIZE);
+	if (unlikely(!fle)) {
+		DPAA2_SEC_ERR("AUTH SG: Memory alloc failed for SGE");
+		return -ENOMEM;
+	}
+	memset(fle, 0, FLE_SG_MEM_SIZE(2*sgl->num));
+	/* first FLE entry used to store mbuf and session ctxt */
+	DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+	DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+	op_fle = fle + 1;
+	ip_fle = fle + 2;
+	sge = fle + 3;
+
+	flc = &priv->flc_desc[DESC_INITFINAL].flc;
+
+	/* sg FD */
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+
+	/* o/p fle */
+	DPAA2_SET_FLE_ADDR(op_fle,
+			DPAA2_VADDR_TO_IOVA(digest->va));
+	op_fle->length = sess->digest_length;
+
+	/* i/p fle */
+	DPAA2_SET_FLE_SG_EXT(ip_fle);
+	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+	ip_fle->length = data_len;
+
+	if (sess->iv.length) {
+		uint8_t *iv_ptr;
+
+		iv_ptr = rte_crypto_op_ctod_offset(userdata, uint8_t *,
+						sess->iv.offset);
+
+		if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+			iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+			sge->length = 12;
+		} else if (sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+			iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+			sge->length = 8;
+		} else {
+			sge->length = sess->iv.length;
+		}
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+		ip_fle->length += sge->length;
+		sge++;
+	}
+	/* i/p 1st seg */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, data_offset);
+
+	if (data_len <= (int)(sgl->vec[0].len - data_offset)) {
+		sge->length = data_len;
+		data_len = 0;
+	} else {
+		sge->length = sgl->vec[0].len - data_offset;
+		for (i = 1; i < sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = sgl->vec[i].len;
+		}
+	}
+	if (sess->dir == DIR_DEC) {
+		/* Digest verification case */
+		sge++;
+		old_digest = (uint8_t *)(sge + 1);
+		rte_memcpy(old_digest, digest->va,
+			sess->digest_length);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_digest));
+		sge->length = sess->digest_length;
+		ip_fle->length += sess->digest_length;
+	}
+	DPAA2_SET_FLE_FIN(sge);
+	DPAA2_SET_FLE_FIN(ip_fle);
+	DPAA2_SET_FD_LEN(fd, ip_fle->length);
 
 	return 0;
 }
-- 
2.17.1



* [dpdk-dev] [PATCH v3 07/15] crypto/dpaa2_sec: support AUTHENC with raw buffer APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (5 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 06/15] crypto/dpaa2_sec: support AUTH only with raw buffer APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 08/15] crypto/dpaa2_sec: support AEAD " Hemant Agrawal
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch adds AUTHENC support with the raw buffer APIs.
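
A hedged sketch (hdr_len is hypothetical) of typical AUTHENC offsets
where a header is authenticated but not encrypted; the builder derives
the auth-only length from exactly these fields:

	ofs.raw = 0;
	ofs.ofs.auth.head = 0;		/* authenticate from the start */
	ofs.ofs.cipher.head = hdr_len;	/* encrypt only past the header */
	/* auth-only region = ofs.ofs.cipher.head - ofs.ofs.auth.head */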

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 128 ++++++++++++++++++--
 1 file changed, 121 insertions(+), 7 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 471c81b9e7..565af6dcba 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -31,14 +31,128 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 		       void *userdata,
 		       struct qbman_fd *fd)
 {
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
-	RTE_SET_USED(iv);
-	RTE_SET_USED(digest);
 	RTE_SET_USED(auth_iv);
-	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
-	RTE_SET_USED(fd);
+
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+	struct sec_flow_context *flc;
+	int data_len = 0, auth_len = 0, cipher_len = 0;
+	unsigned int i = 0;
+	uint16_t auth_hdr_len = ofs.ofs.cipher.head -
+				ofs.ofs.auth.head;
+
+	uint16_t auth_tail_len = ofs.ofs.auth.tail;
+	uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
+	int icv_len = sess->digest_length;
+	uint8_t *old_icv;
+	uint8_t *iv_ptr = iv->va;
+
+	for (i = 0; i < sgl->num; i++)
+		data_len += sgl->vec[i].len;
+
+	cipher_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+	auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+	/* first FLE entry used to store session ctxt */
+	fle = (struct qbman_fle *)rte_malloc(NULL,
+			FLE_SG_MEM_SIZE(2 * sgl->num),
+			RTE_CACHE_LINE_SIZE);
+	if (unlikely(!fle)) {
+		DPAA2_SEC_ERR("AUTHENC SG: Memory alloc failed for SGE");
+		return -ENOMEM;
+	}
+	memset(fle, 0, FLE_SG_MEM_SIZE(2 * sgl->num));
+	DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+	DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+	op_fle = fle + 1;
+	ip_fle = fle + 2;
+	sge = fle + 3;
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_SG_EXT(op_fle);
+	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
+
+	op_fle->length = (sess->dir == DIR_ENC) ?
+			(cipher_len + icv_len) :
+			cipher_len;
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+	sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+
+	/* o/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge,
+			digest->iova);
+		sge->length = icv_len;
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(ip_fle);
+	DPAA2_SET_FLE_FIN(ip_fle);
+
+	ip_fle->length = (sess->dir == DIR_ENC) ?
+			(auth_len + sess->iv.length) :
+			(auth_len + sess->iv.length +
+			icv_len);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(iv_ptr));
+	sge->length = sess->iv.length;
+
+	sge++;
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
+	sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv, digest->va,
+			icv_len);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = icv_len;
+	}
+
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(ip_fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	DPAA2_SET_FD_LEN(fd, ip_fle->length);
 
 	return 0;
 }
-- 
2.17.1



* [dpdk-dev] [PATCH v3 08/15] crypto/dpaa2_sec: support AEAD with raw buffer APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (6 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 07/15] crypto/dpaa2_sec: support AUTHENC " Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 09/15] crypto/dpaa2_sec: support OOP with raw buffer API Hemant Agrawal
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch adds raw vector API support for AEAD algorithms.
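
A minimal hedged sketch (buffer names hypothetical) of the per-op IV
and digest pointers an AEAD caller supplies alongside src_sgl:

	struct rte_crypto_va_iova_ptr iv = {
		.va = iv_buf, .iova = iv_iova };
	struct rte_crypto_va_iova_ptr digest = {
		.va = tag_buf, .iova = tag_iova };

	vec.iv = &iv;
	vec.digest = &digest;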

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 249 +++++++++++++++++---
 1 file changed, 214 insertions(+), 35 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 565af6dcba..5c29c61f9d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -167,14 +167,126 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 		       void *userdata,
 		       struct qbman_fd *fd)
 {
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
-	RTE_SET_USED(iv);
-	RTE_SET_USED(digest);
-	RTE_SET_USED(auth_iv);
-	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
-	RTE_SET_USED(fd);
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+	struct sec_flow_context *flc;
+	uint32_t auth_only_len = sess->ext_params.aead_ctxt.auth_only_len;
+	int icv_len = sess->digest_length;
+	uint8_t *old_icv;
+	uint8_t *IV_ptr = iv->va;
+	unsigned int i = 0;
+	int data_len = 0, aead_len = 0;
+
+	for (i = 0; i < sgl->num; i++)
+		data_len += sgl->vec[i].len;
+
+	aead_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+
+	/* first FLE entry used to store mbuf and session ctxt */
+	fle = (struct qbman_fle *)rte_malloc(NULL,
+			FLE_SG_MEM_SIZE(2 * sgl->num),
+			RTE_CACHE_LINE_SIZE);
+	if (unlikely(!fle)) {
+		DPAA2_SEC_ERR("GCM SG: Memory alloc failed for SGE");
+		return -ENOMEM;
+	}
+	memset(fle, 0, FLE_SG_MEM_SIZE(2 * sgl->num));
+	DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+	DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+	op_fle = fle + 1;
+	ip_fle = fle + 2;
+	sge = fle + 3;
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_SG_EXT(op_fle);
+	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+	if (auth_only_len)
+		DPAA2_SET_FLE_INTERNAL_JD(op_fle, auth_only_len);
+
+	op_fle->length = (sess->dir == DIR_ENC) ?
+			(aead_len + icv_len) :
+			aead_len;
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+	sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+	/* o/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+
+	if (sess->dir == DIR_ENC) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, digest->iova);
+		sge->length = icv_len;
+	}
+	DPAA2_SET_FLE_FIN(sge);
+
+	sge++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(ip_fle);
+	DPAA2_SET_FLE_FIN(ip_fle);
+	ip_fle->length = (sess->dir == DIR_ENC) ?
+		(aead_len + sess->iv.length + auth_only_len) :
+		(aead_len + sess->iv.length + auth_only_len +
+		icv_len);
+
+	/* Configure Input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(IV_ptr));
+	sge->length = sess->iv.length;
+
+	sge++;
+	if (auth_only_len) {
+		DPAA2_SET_FLE_ADDR(sge, auth_iv->iova);
+		sge->length = auth_only_len;
+		sge++;
+	}
+
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+	sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+	/* i/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+	}
+
+	if (sess->dir == DIR_DEC) {
+		sge++;
+		old_icv = (uint8_t *)(sge + 1);
+		memcpy(old_icv,  digest->va, icv_len);
+		DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(old_icv));
+		sge->length = icv_len;
+	}
+
+	DPAA2_SET_FLE_FIN(sge);
+	if (auth_only_len) {
+		DPAA2_SET_FLE_INTERNAL_JD(ip_fle, auth_only_len);
+		DPAA2_SET_FD_INTERNAL_JD(fd, auth_only_len);
+	}
+	DPAA2_SET_FD_LEN(fd, ip_fle->length);
 
 	return 0;
 }
@@ -311,36 +423,104 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 		       void *userdata,
 		       struct qbman_fd *fd)
 {
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
 	RTE_SET_USED(iv);
 	RTE_SET_USED(digest);
 	RTE_SET_USED(auth_iv);
 	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
-	RTE_SET_USED(fd);
 
-	return 0;
-}
+	dpaa2_sec_session *sess =
+		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct ctxt_priv *priv = sess->ctxt;
+	struct qbman_fle *fle, *sge, *ip_fle, *op_fle;
+	struct sec_flow_context *flc;
+	uint32_t in_len = 0, out_len = 0, i;
 
-static int
-build_raw_dp_proto_compound_fd(uint8_t *drv_ctx,
-		       struct rte_crypto_sgl *sgl,
-		       struct rte_crypto_va_iova_ptr *iv,
-		       struct rte_crypto_va_iova_ptr *digest,
-		       struct rte_crypto_va_iova_ptr *auth_iv,
-		       union rte_crypto_sym_ofs ofs,
-		       void *userdata,
-		       struct qbman_fd *fd)
-{
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
-	RTE_SET_USED(iv);
-	RTE_SET_USED(digest);
-	RTE_SET_USED(auth_iv);
-	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
-	RTE_SET_USED(fd);
+	/* first FLE entry used to store mbuf and session ctxt */
+	fle = (struct qbman_fle *)rte_malloc(NULL,
+			FLE_SG_MEM_SIZE(2 * sgl->num),
+			RTE_CACHE_LINE_SIZE);
+	if (unlikely(!fle)) {
+		DPAA2_SEC_DP_ERR("Proto:SG: Memory alloc failed for SGE");
+		return -ENOMEM;
+	}
+	memset(fle, 0, FLE_SG_MEM_SIZE(2 * sgl->num));
+	DPAA2_SET_FLE_ADDR(fle, (size_t)userdata);
+	DPAA2_FLE_SAVE_CTXT(fle, (ptrdiff_t)priv);
+
+	/* Save the shared descriptor */
+	flc = &priv->flc_desc[0].flc;
+	op_fle = fle + 1;
+	ip_fle = fle + 2;
+	sge = fle + 3;
+
+	DPAA2_SET_FD_IVP(fd);
+	DPAA2_SET_FLE_IVP(op_fle);
+	DPAA2_SET_FLE_IVP(ip_fle);
+
+	/* Configure FD as a FRAME LIST */
+	DPAA2_SET_FD_ADDR(fd, DPAA2_VADDR_TO_IOVA(op_fle));
+	DPAA2_SET_FD_COMPOUND_FMT(fd);
+	DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
+
+	/* Configure Output FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_SG_EXT(op_fle);
+	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
+
+	/* Configure Output SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, 0);
+	sge->length = sgl->vec[0].len;
+	out_len += sge->length;
+	/* o/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+		out_len += sge->length;
+	}
+	sge->length = sgl->vec[i - 1].tot_len;
+	out_len += sge->length;
+
+	DPAA2_SET_FLE_FIN(sge);
+	op_fle->length = out_len;
+
+	sge++;
+
+	/* Configure Input FLE with Scatter/Gather Entry */
+	DPAA2_SET_FLE_ADDR(ip_fle, DPAA2_VADDR_TO_IOVA(sge));
+	DPAA2_SET_FLE_SG_EXT(ip_fle);
+	DPAA2_SET_FLE_FIN(ip_fle);
+
+	/* Configure input SGE for Encap/Decap */
+	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+	DPAA2_SET_FLE_OFFSET(sge, 0);
+	sge->length = sgl->vec[0].len;
+	in_len += sge->length;
+	/* i/p segs */
+	for (i = 1; i < sgl->num; i++) {
+		sge++;
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[i].len;
+		in_len += sge->length;
+	}
+
+	ip_fle->length = in_len;
+	DPAA2_SET_FLE_FIN(sge);
+
+	/* In case of PDCP, the per-packet HFN is read at
+	 * hfn_ovd_offset from the userdata pointer.
+	 */
+	if (sess->ctxt_type == DPAA2_SEC_PDCP && sess->pdcp.hfn_ovd) {
+		uint32_t hfn_ovd = *(uint32_t *)((uint8_t *)userdata +
+				sess->pdcp.hfn_ovd_offset);
+		/* enable HFN override */
+		DPAA2_SET_FLE_INTERNAL_JD(ip_fle, hfn_ovd);
+		DPAA2_SET_FLE_INTERNAL_JD(op_fle, hfn_ovd);
+		DPAA2_SET_FD_INTERNAL_JD(fd, hfn_ovd);
+	}
+	DPAA2_SET_FD_LEN(fd, ip_fle->length);
 
 	return 0;
 }
@@ -792,10 +972,9 @@ dpaa2_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 		sess->build_raw_dp_fd = build_raw_dp_auth_fd;
 	else if (sess->ctxt_type == DPAA2_SEC_CIPHER)
 		sess->build_raw_dp_fd = build_raw_dp_cipher_fd;
-	else if (sess->ctxt_type == DPAA2_SEC_IPSEC)
+	else if (sess->ctxt_type == DPAA2_SEC_IPSEC ||
+		sess->ctxt_type == DPAA2_SEC_PDCP)
 		sess->build_raw_dp_fd = build_raw_dp_proto_fd;
-	else if (sess->ctxt_type == DPAA2_SEC_PDCP)
-		sess->build_raw_dp_fd = build_raw_dp_proto_compound_fd;
 	else
 		return -ENOTSUP;
 	dp_ctx = (struct dpaa2_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH v3 09/15] crypto/dpaa2_sec: support OOP with raw buffer API
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (7 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 08/15] crypto/dpaa2_sec: support AEAD " Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 10/15] crypto/dpaa2_sec: enhance error checks with raw buffer APIs Hemant Agrawal
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

Add support for out-of-place (OOP) processing with raw vector APIs.
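
Usage-wise, an application opts into out-of-place processing simply by
filling dest_sgl in the symmetric vector. A hedged sketch with a
hypothetical single-segment source/destination pair (src_va/dst_va,
their IOVAs, len and the iv/digest pointers are placeholders assumed
to be set up elsewhere):

	struct rte_crypto_vec in_seg = {
		.base = src_va, .iova = src_iova, .len = len, .tot_len = len,
	};
	struct rte_crypto_vec out_seg = {
		.base = dst_va, .iova = dst_iova, .len = len, .tot_len = len,
	};
	struct rte_crypto_sgl in_sgl = { .vec = &in_seg, .num = 1 };
	struct rte_crypto_sgl out_sgl = { .vec = &out_seg, .num = 1 };
	struct rte_crypto_sym_vec vec = {
		.num = 1,
		.src_sgl = &in_sgl,
		/* non-NULL dest_sgl selects the OOP branch in the FD
		 * builders; leaving it NULL keeps in-place behaviour */
		.dest_sgl = &out_sgl,
		.iv = &iv_ptr, .digest = &digest_ptr,
		.status = &op_status,
	};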

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h   |   1 +
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 156 +++++++++++++++-----
 2 files changed, 116 insertions(+), 41 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
index f397b756e8..05bd7c0736 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h
@@ -179,6 +179,7 @@ typedef int (*dpaa2_sec_build_fd_t)(
 
 typedef int (*dpaa2_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 5c29c61f9d..4f78cef9c0 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -24,6 +24,7 @@ struct dpaa2_sec_raw_dp_ctx {
 static int
 build_raw_dp_chain_fd(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
@@ -89,17 +90,33 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 			(cipher_len + icv_len) :
 			cipher_len;
 
-	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.auth.head);
-	sge->length = sgl->vec[0].len - ofs.ofs.auth.head;
+	/* OOP */
+	if (dest_sgl) {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
-	/* o/p segs */
-	for (i = 1; i < sgl->num; i++) {
-		sge++;
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
-		sge->length = sgl->vec[i].len;
+		/* o/p segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+		/* o/p segs */
+		for (i = 1; i < sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = sgl->vec[i].len;
+		}
 	}
 
 	if (sess->dir == DIR_ENC) {
@@ -160,6 +177,7 @@ build_raw_dp_chain_fd(uint8_t *drv_ctx,
 static int
 build_raw_dp_aead_fd(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
@@ -219,17 +237,33 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 			(aead_len + icv_len) :
 			aead_len;
 
-	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
-	sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+	/* OOP */
+	if (dest_sgl) {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		sge->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
 
-	/* o/p segs */
-	for (i = 1; i < sgl->num; i++) {
-		sge++;
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
-		sge->length = sgl->vec[i].len;
+		/* o/p segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, ofs.ofs.cipher.head);
+		sge->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+
+		/* o/p segs */
+		for (i = 1; i < sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = sgl->vec[i].len;
+		}
 	}
 
 	if (sess->dir == DIR_ENC) {
@@ -294,6 +328,7 @@ build_raw_dp_aead_fd(uint8_t *drv_ctx,
 static int
 build_raw_dp_auth_fd(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
@@ -303,6 +338,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 {
 	RTE_SET_USED(iv);
 	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(dest_sgl);
 
 	dpaa2_sec_session *sess =
 		((struct dpaa2_sec_raw_dp_ctx *)drv_ctx)->session;
@@ -416,6 +452,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 static int
 build_raw_dp_proto_fd(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
@@ -466,20 +503,39 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 	DPAA2_SET_FLE_SG_EXT(op_fle);
 	DPAA2_SET_FLE_ADDR(op_fle, DPAA2_VADDR_TO_IOVA(sge));
 
-	/* Configure Output SGE for Encap/Decap */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, 0);
-	sge->length = sgl->vec[0].len;
-	out_len += sge->length;
-	/* o/p segs */
-	for (i = 1; i < sgl->num; i++) {
-		sge++;
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+	/* OOP */
+	if (dest_sgl) {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
 		DPAA2_SET_FLE_OFFSET(sge, 0);
-		sge->length = sgl->vec[i].len;
+		sge->length = dest_sgl->vec[0].len;
+		out_len += sge->length;
+		/* o/p segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = dest_sgl->vec[i].len;
+			out_len += sge->length;
+		}
+		sge->length = dest_sgl->vec[i - 1].tot_len;
+
+	} else {
+		/* Configure Output SGE for Encap/Decap */
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, 0);
+		sge->length = sgl->vec[0].len;
 		out_len += sge->length;
+		/* o/p segs */
+		for (i = 1; i < sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = sgl->vec[i].len;
+			out_len += sge->length;
+		}
+		sge->length = sgl->vec[i - 1].tot_len;
 	}
-	sge->length = sgl->vec[i - 1].tot_len;
 	out_len += sge->length;
 
 	DPAA2_SET_FLE_FIN(sge);
@@ -528,6 +584,7 @@ build_raw_dp_proto_fd(uint8_t *drv_ctx,
 static int
 build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 		       struct rte_crypto_sgl *sgl,
+		       struct rte_crypto_sgl *dest_sgl,
 		       struct rte_crypto_va_iova_ptr *iv,
 		       struct rte_crypto_va_iova_ptr *digest,
 		       struct rte_crypto_va_iova_ptr *auth_iv,
@@ -593,17 +650,33 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	op_fle->length = data_len;
 	DPAA2_SET_FLE_SG_EXT(op_fle);
 
-	/* o/p 1st seg */
-	DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
-	DPAA2_SET_FLE_OFFSET(sge, data_offset);
-	sge->length = sgl->vec[0].len - data_offset;
+	/* OOP */
+	if (dest_sgl) {
+		/* o/p 1st seg */
+		DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		sge->length = dest_sgl->vec[0].len - data_offset;
 
-	/* o/p segs */
-	for (i = 1; i < sgl->num; i++) {
-		sge++;
-		DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
-		DPAA2_SET_FLE_OFFSET(sge, 0);
-		sge->length = sgl->vec[i].len;
+		/* o/p segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, dest_sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* o/p 1st seg */
+		DPAA2_SET_FLE_ADDR(sge, sgl->vec[0].iova);
+		DPAA2_SET_FLE_OFFSET(sge, data_offset);
+		sge->length = sgl->vec[0].len - data_offset;
+
+		/* o/p segs */
+		for (i = 1; i < sgl->num; i++) {
+			sge++;
+			DPAA2_SET_FLE_ADDR(sge, sgl->vec[i].iova);
+			DPAA2_SET_FLE_OFFSET(sge, 0);
+			sge->length = sgl->vec[i].len;
+		}
 	}
 	DPAA2_SET_FLE_FIN(sge);
 
@@ -706,6 +779,7 @@ dpaa2_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
 			memset(&fd_arr[loop], 0, sizeof(struct qbman_fd));
 			ret = sess->build_raw_dp_fd(drv_ctx,
 						    &vec->src_sgl[loop],
+						    &vec->dest_sgl[loop],
 						    &vec->iv[loop],
 						    &vec->digest[loop],
 						    &vec->auth_iv[loop],
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH v3 10/15] crypto/dpaa2_sec: enhance error checks with raw buffer APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (8 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 09/15] crypto/dpaa2_sec: support OOP with raw buffer API Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 11/15] crypto/dpaa_sec: support raw datapath APIs Hemant Agrawal
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch improves the error condition checks and the support of
wireless algos with raw buffers.
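
With this change the dequeue path hands the application's post-dequeue
callback a plain success flag instead of an rte_crypto_op status value.
A hedged sketch of a callback matching rte_cryptodev_raw_post_dequeue_t,
assuming (one possible convention, not mandated by the API) that the
per-op user data is the rte_crypto_op pointer:

	#include <rte_common.h>
	#include <rte_crypto.h>

	static void
	post_deq_cb(void *user_data, uint32_t index, uint8_t is_op_success)
	{
		/* with is_user_data_array set, this runs once per op */
		struct rte_crypto_op *op = user_data;

		RTE_SET_USED(index);
		op->status = is_op_success ? RTE_CRYPTO_OP_STATUS_SUCCESS :
					     RTE_CRYPTO_OP_STATUS_ERROR;
	}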

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 31 ++++-----------------
 1 file changed, 6 insertions(+), 25 deletions(-)

diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 4f78cef9c0..a2ffc6c02f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -355,16 +355,7 @@ build_raw_dp_auth_fd(uint8_t *drv_ctx,
 	data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
 	data_offset = ofs.ofs.auth.head;
 
-	if (sess->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2 ||
-		sess->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
-		if ((data_len & 7) || (data_offset & 7)) {
-			DPAA2_SEC_ERR("AUTH: len/offset must be full bytes");
-			return -ENOTSUP;
-		}
-
-		data_len = data_len >> 3;
-		data_offset = data_offset >> 3;
-	}
+	/* For SNOW3G and ZUC, only lengths in bits are supported */
 	fle = (struct qbman_fle *)rte_malloc(NULL,
 		FLE_SG_MEM_SIZE(2 * sgl->num),
 			RTE_CACHE_LINE_SIZE);
@@ -609,17 +600,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	data_len = total_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
 	data_offset = ofs.ofs.cipher.head;
 
-	if (sess->cipher_alg == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
-		sess->cipher_alg == RTE_CRYPTO_CIPHER_ZUC_EEA3) {
-		if ((data_len & 7) || (data_offset & 7)) {
-			DPAA2_SEC_ERR("CIPHER: len/offset must be full bytes");
-			return -ENOTSUP;
-		}
-
-		data_len = data_len >> 3;
-		data_offset = data_offset >> 3;
-	}
-
+	/* For SNOW3G and ZUC, only lengths in bits are supported */
 	/* first FLE entry used to store mbuf and session ctxt */
 	fle = (struct qbman_fle *)rte_malloc(NULL,
 			FLE_SG_MEM_SIZE(2*sgl->num),
@@ -878,7 +859,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 	struct qbman_result *dq_storage;
 	uint32_t fqid = dpaa2_qp->rx_vq.fqid;
 	int ret, num_rx = 0;
-	uint8_t is_last = 0, status;
+	uint8_t is_last = 0, status, is_success = 0;
 	struct qbman_swp *swp;
 	const struct qbman_fd *fd;
 	struct qbman_pull_desc pulldesc;
@@ -957,11 +938,11 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
 			/* TODO Parse SEC errors */
 			DPAA2_SEC_ERR("SEC returned Error - %x",
 				      fd->simple.frc);
-			status = RTE_CRYPTO_OP_STATUS_ERROR;
+			is_success = false;
 		} else {
-			status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+			is_success = true;
 		}
-		post_dequeue(user_data, num_rx, status);
+		post_dequeue(user_data, num_rx, is_success);
 
 		num_rx++;
 		dq_storage++;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH v3 11/15] crypto/dpaa_sec: support raw datapath APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (9 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 10/15] crypto/dpaa2_sec: enhance error checks with raw buffer APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 12/15] crypto/dpaa_sec: support authonly and chain with raw APIs Hemant Agrawal
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch adds a raw vector API framework for the dpaa_sec driver.
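
For reference, a hedged sketch of the application-side setup that this
framework plugs into, assuming dev_id/qp_id identify a configured
dpaa_sec queue pair and 'sess' is an already initialised symmetric
session (all names here are illustrative):

	#include <rte_cryptodev.h>
	#include <rte_malloc.h>

	static struct rte_crypto_raw_dp_ctx *
	setup_raw_ctx(uint8_t dev_id, uint16_t qp_id,
		      struct rte_cryptodev_sym_session *sess)
	{
		union rte_cryptodev_session_ctx sess_ctx = {
			.crypto_sess = sess };
		struct rte_crypto_raw_dp_ctx *ctx;
		int drv_sz = rte_cryptodev_get_raw_dp_ctx_size(dev_id);

		if (drv_sz < 0)
			return NULL;
		/* drv_ctx_data is a flexible array at the end of the ctx */
		ctx = rte_zmalloc(NULL, sizeof(*ctx) + drv_sz, 0);
		if (ctx == NULL)
			return NULL;
		/* ends up in dpaa_sec_configure_raw_dp_ctx(), which returns
		 * -ENOTSUP for session types the raw path does not cover */
		if (rte_cryptodev_configure_raw_dp_ctx(dev_id, qp_id, ctx,
				RTE_CRYPTO_OP_WITH_SESSION, sess_ctx, 0) < 0) {
			rte_free(ctx);
			return NULL;
		}
		return ctx;
	}

Once configured, vectors are pushed and reaped against this same ctx
with rte_cryptodev_raw_enqueue_burst() and
rte_cryptodev_raw_dequeue_burst().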

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 doc/guides/rel_notes/release_21_11.rst    |   1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c        |  23 +-
 drivers/crypto/dpaa_sec/dpaa_sec.h        |  39 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 485 ++++++++++++++++++++++
 drivers/crypto/dpaa_sec/meson.build       |   4 +-
 5 files changed, 538 insertions(+), 14 deletions(-)
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index b1049a92e3..cc3845c96e 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -102,6 +102,7 @@ New Features
 
   * Added DES-CBC, AES-XCBC-MAC, AES-CMAC and non-HMAC algo support.
   * Added PDCP short MAC-I support.
+  * Added raw vector datapath API support.
 
 * **Updated NXP dpaa2_sec crypto PMD.**
 
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index d5aa2748d6..c7ef1c7b0f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -45,10 +45,7 @@
 #include <dpaa_sec_log.h>
 #include <dpaax_iova_table.h>
 
-static uint8_t cryptodev_driver_id;
-
-static int
-dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
+uint8_t dpaa_cryptodev_driver_id;
 
 static inline void
 dpaa_sec_op_ending(struct dpaa_sec_op_ctx *ctx)
@@ -1787,8 +1784,8 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 			case RTE_CRYPTO_OP_WITH_SESSION:
 				ses = (dpaa_sec_session *)
 					get_sym_session_private_data(
-							op->sym->session,
-							cryptodev_driver_id);
+						op->sym->session,
+						dpaa_cryptodev_driver_id);
 				break;
 #ifdef RTE_LIB_SECURITY
 			case RTE_CRYPTO_OP_SECURITY_SESSION:
@@ -2400,7 +2397,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
 	return -1;
 }
 
-static int
+int
 dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess)
 {
 	int ret;
@@ -3216,7 +3213,7 @@ dpaa_sec_dev_infos_get(struct rte_cryptodev *dev,
 		info->feature_flags = dev->feature_flags;
 		info->capabilities = dpaa_sec_capabilities;
 		info->sym.max_nb_sessions = internals->max_nb_sessions;
-		info->driver_id = cryptodev_driver_id;
+		info->driver_id = dpaa_cryptodev_driver_id;
 	}
 }
 
@@ -3412,7 +3409,10 @@ static struct rte_cryptodev_ops crypto_ops = {
 	.queue_pair_release   = dpaa_sec_queue_pair_release,
 	.sym_session_get_size     = dpaa_sec_sym_session_get_size,
 	.sym_session_configure    = dpaa_sec_sym_session_configure,
-	.sym_session_clear        = dpaa_sec_sym_session_clear
+	.sym_session_clear        = dpaa_sec_sym_session_clear,
+	/* Raw data-path API related operations */
+	.sym_get_raw_dp_ctx_size = dpaa_sec_get_dp_ctx_size,
+	.sym_configure_raw_dp_ctx = dpaa_sec_configure_raw_dp_ctx,
 };
 
 #ifdef RTE_LIB_SECURITY
@@ -3463,7 +3463,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	cryptodev->driver_id = cryptodev_driver_id;
+	cryptodev->driver_id = dpaa_cryptodev_driver_id;
 	cryptodev->dev_ops = &crypto_ops;
 
 	cryptodev->enqueue_burst = dpaa_sec_enqueue_burst;
@@ -3472,6 +3472,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
 			RTE_CRYPTODEV_FF_HW_ACCELERATED |
 			RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
 			RTE_CRYPTODEV_FF_SECURITY |
+			RTE_CRYPTODEV_FF_SYM_RAW_DP |
 			RTE_CRYPTODEV_FF_IN_PLACE_SGL |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
 			RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
@@ -3637,5 +3638,5 @@ static struct cryptodev_driver dpaa_sec_crypto_drv;
 
 RTE_PMD_REGISTER_DPAA(CRYPTODEV_NAME_DPAA_SEC_PMD, rte_dpaa_sec_driver);
 RTE_PMD_REGISTER_CRYPTO_DRIVER(dpaa_sec_crypto_drv, rte_dpaa_sec_driver.driver,
-		cryptodev_driver_id);
+		dpaa_cryptodev_driver_id);
 RTE_LOG_REGISTER(dpaa_logtype_sec, pmd.crypto.dpaa, NOTICE);
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 503047879e..77288cd1eb 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -19,6 +19,8 @@
 #define AES_CTR_IV_LEN		16
 #define AES_GCM_IV_LEN		12
 
+extern uint8_t dpaa_cryptodev_driver_id;
+
 #define DPAA_IPv6_DEFAULT_VTC_FLOW	0x60000000
 
 /* Minimum job descriptor consists of a oneword job descriptor HEADER and
@@ -117,6 +119,24 @@ struct sec_pdcp_ctxt {
 	uint32_t hfn_threshold;	/*!< HFN Threshold for key renegotiation */
 };
 #endif
+
+typedef int (*dpaa_sec_build_fd_t)(
+	void *qp, uint8_t *drv_ctx, struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data);
+
+typedef struct dpaa_sec_job* (*dpaa_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata);
+
 typedef struct dpaa_sec_session_entry {
 	struct sec_cdb cdb;	/**< cmd block associated with qp */
 	struct dpaa_sec_qp *qp[MAX_DPAA_CORES];
@@ -129,6 +149,8 @@ typedef struct dpaa_sec_session_entry {
 #ifdef RTE_LIB_SECURITY
 	enum rte_security_session_protocol proto_alg; /*!< Security Algorithm*/
 #endif
+	dpaa_sec_build_fd_t build_fd;
+	dpaa_sec_build_raw_dp_fd_t build_raw_dp_fd;
 	union {
 		struct {
 			uint8_t *data;	/**< pointer to key data */
@@ -211,7 +233,10 @@ struct dpaa_sec_job {
 #define DPAA_MAX_NB_MAX_DIGEST	32
 struct dpaa_sec_op_ctx {
 	struct dpaa_sec_job job;
-	struct rte_crypto_op *op;
+	union {
+		struct rte_crypto_op *op;
+		void *userdata;
+	};
 	struct rte_mempool *ctx_pool; /* mempool pointer for dpaa_sec_op_ctx */
 	uint32_t fd_status;
 	int64_t vtop_offset;
@@ -1001,4 +1026,16 @@ calc_chksum(void *buffer, int len)
 	return  result;
 }
 
+int
+dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update);
+
+int
+dpaa_sec_get_dp_ctx_size(struct rte_cryptodev *dev);
+
+int
+dpaa_sec_attach_sess_q(struct dpaa_sec_qp *qp, dpaa_sec_session *sess);
+
 #endif /* _DPAA_SEC_H_ */
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
new file mode 100644
index 0000000000..7376da4cbc
--- /dev/null
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -0,0 +1,485 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#include <rte_byteorder.h>
+#include <rte_common.h>
+#include <cryptodev_pmd.h>
+#include <rte_crypto.h>
+#include <rte_cryptodev.h>
+#ifdef RTE_LIB_SECURITY
+#include <rte_security_driver.h>
+#endif
+
+/* RTA header files */
+#include <desc/ipsec.h>
+
+#include <rte_dpaa_bus.h>
+#include <dpaa_sec.h>
+#include <dpaa_sec_log.h>
+
+struct dpaa_sec_raw_dp_ctx {
+	dpaa_sec_session *session;
+	uint32_t tail;
+	uint32_t head;
+	uint16_t cached_enqueue;
+	uint16_t cached_dequeue;
+};
+
+static __rte_always_inline int
+dpaa_sec_raw_enqueue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+static __rte_always_inline int
+dpaa_sec_raw_dequeue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(n);
+
+	return 0;
+}
+
+static inline struct dpaa_sec_op_ctx *
+dpaa_sec_alloc_raw_ctx(dpaa_sec_session *ses, int sg_count)
+{
+	struct dpaa_sec_op_ctx *ctx;
+	int i, retval;
+
+	retval = rte_mempool_get(
+			ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool,
+			(void **)(&ctx));
+	if (!ctx || retval) {
+		DPAA_SEC_DP_WARN("Alloc sec descriptor failed!");
+		return NULL;
+	}
+	/*
+	 * Clear SG memory. There are 16 SG entries of 16 bytes each.
+	 * One call to dcbz_64() clears 64 bytes, hence it is called 4 times
+	 * to clear all the SG entries. dpaa_sec_alloc_ctx() is called for
+	 * each packet, and memset is costlier than dcbz_64().
+	 */
+	for (i = 0; i < sg_count && i < MAX_JOB_SG_ENTRIES; i += 4)
+		dcbz_64(&ctx->job.sg[i]);
+
+	ctx->ctx_pool = ses->qp[rte_lcore_id() % MAX_DPAA_CORES]->ctx_pool;
+	ctx->vtop_offset = (size_t) ctx - rte_mempool_virt2iova(ctx);
+
+	return ctx;
+}
+
+static struct dpaa_sec_job *
+build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(sgl);
+	RTE_SET_USED(dest_sgl);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(userdata);
+
+	return NULL;
+}
+
+static struct dpaa_sec_job *
+build_dpaa_raw_dp_cipher_fd(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata)
+{
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	unsigned int i;
+	uint8_t *IV_ptr = iv->va;
+	int data_len, total_len = 0, data_offset;
+
+	for (i = 0; i < sgl->num; i++)
+		total_len += sgl->vec[i].len;
+
+	data_len = total_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+	data_offset = ofs.ofs.cipher.head;
+
+	/* Support lengths in bits only for SNOW3G and ZUC */
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("Cipher: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+
+	ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + 3);
+	if (!ctx)
+		return NULL;
+
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+
+	/* output */
+	out_sg = &cf->sg[0];
+	out_sg->extension = 1;
+	out_sg->length = data_len;
+	qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(&cf->sg[2]));
+	cpu_to_hw_sg(out_sg);
+
+	if (dest_sgl) {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+		sg->length = dest_sgl->vec[0].len - data_offset;
+		sg->offset = data_offset;
+
+		/* Successive segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+			sg->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, sgl->vec[0].iova);
+		sg->length = sgl->vec[0].len - data_offset;
+		sg->offset = data_offset;
+
+		/* Successive segs */
+		for (i = 1; i < sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			sg->length = sgl->vec[i].len;
+		}
+
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	in_sg->length = data_len + ses->iv.length;
+
+	sg++;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(in_sg);
+
+	/* IV */
+	qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(IV_ptr));
+	sg->length = ses->iv.length;
+	cpu_to_hw_sg(sg);
+
+	/* 1st seg */
+	sg++;
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->length = sgl->vec[0].len - data_offset;
+	sg->offset = data_offset;
+
+	/* Successive segs */
+	for (i = 1; i < sgl->num; i++) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		qm_sg_entry_set64(sg, sgl->vec[i].iova);
+		sg->length = sgl->vec[i].len;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	return cf;
+}
+
+static uint32_t
+dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
+	void *user_data[], int *status)
+{
+	/* Function to transmit the frames to the given device and queue pair */
+	uint32_t loop;
+	struct dpaa_sec_qp *dpaa_qp = (struct dpaa_sec_qp *)qp_data;
+	uint16_t num_tx = 0;
+	struct qm_fd fds[DPAA_SEC_BURST], *fd;
+	uint32_t frames_to_send;
+	struct dpaa_sec_job *cf;
+	dpaa_sec_session *ses =
+			((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	uint32_t flags[DPAA_SEC_BURST] = {0};
+	struct qman_fq *inq[DPAA_SEC_BURST];
+
+	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+		if (rte_dpaa_portal_init((void *)0)) {
+			DPAA_SEC_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+
+	while (vec->num) {
+		frames_to_send = (vec->num > DPAA_SEC_BURST) ?
+				DPAA_SEC_BURST : vec->num;
+		for (loop = 0; loop < frames_to_send; loop++) {
+			if (unlikely(!ses->qp[rte_lcore_id() % MAX_DPAA_CORES])) {
+				if (dpaa_sec_attach_sess_q(dpaa_qp, ses)) {
+					frames_to_send = loop;
+					goto send_pkts;
+				}
+			} else if (unlikely(ses->qp[rte_lcore_id() %
+						MAX_DPAA_CORES] != dpaa_qp)) {
+				DPAA_SEC_DP_ERR("Old:sess->qp = %p"
+					" New qp = %p\n",
+					ses->qp[rte_lcore_id() %
+					MAX_DPAA_CORES], dpaa_qp);
+				frames_to_send = loop;
+				goto send_pkts;
+			}
+
+			/* Clear the unused FD fields before sending */
+			fd = &fds[loop];
+			memset(fd, 0, sizeof(struct qm_fd));
+			cf = ses->build_raw_dp_fd(drv_ctx,
+						&vec->src_sgl[loop],
+						&vec->dest_sgl[loop],
+						&vec->iv[loop],
+						&vec->digest[loop],
+						&vec->auth_iv[loop],
+						ofs,
+						user_data[loop]);
+			if (!cf) {
+				DPAA_SEC_ERR("error: Improper packet contents"
+					" for crypto operation");
+				goto skip_tx;
+			}
+			inq[loop] = ses->inq[rte_lcore_id() % MAX_DPAA_CORES];
+			fd->opaque_addr = 0;
+			fd->cmd = 0;
+			qm_fd_addr_set64(fd, rte_dpaa_mem_vtop(cf->sg));
+			fd->_format1 = qm_fd_compound;
+			fd->length29 = 2 * sizeof(struct qm_sg_entry);
+
+			status[loop] = 1;
+		}
+send_pkts:
+		loop = 0;
+		while (loop < frames_to_send) {
+			loop += qman_enqueue_multi_fq(&inq[loop], &fds[loop],
+					&flags[loop], frames_to_send - loop);
+		}
+		vec->num -= frames_to_send;
+		num_tx += frames_to_send;
+	}
+
+skip_tx:
+	dpaa_qp->tx_pkts += num_tx;
+	dpaa_qp->tx_errs += vec->num - num_tx;
+
+	return num_tx;
+}
+
+static int
+dpaa_sec_deq_raw(struct dpaa_sec_qp *qp, void **out_user_data,
+		uint8_t is_user_data_array,
+		rte_cryptodev_raw_post_dequeue_t post_dequeue,
+		int nb_ops)
+{
+	struct qman_fq *fq;
+	unsigned int pkts = 0;
+	int num_rx_bufs, ret;
+	struct qm_dqrr_entry *dq;
+	uint32_t vdqcr_flags = 0;
+	uint8_t is_success = 0;
+
+	fq = &qp->outq;
+	/*
+	 * For requests of fewer than four buffers, set the QM_VDQCR_EXACT
+	 * flag so that exactly the requested number of buffers is returned.
+	 * Otherwise the flag is not set; the portal may then return up to
+	 * two more buffers than requested, so we request two fewer.
+	 */
+	if (nb_ops < 4) {
+		vdqcr_flags = QM_VDQCR_EXACT;
+		num_rx_bufs = nb_ops;
+	} else {
+		num_rx_bufs = nb_ops > DPAA_MAX_DEQUEUE_NUM_FRAMES ?
+			(DPAA_MAX_DEQUEUE_NUM_FRAMES - 2) : (nb_ops - 2);
+	}
+	ret = qman_set_vdq(fq, num_rx_bufs, vdqcr_flags);
+	if (ret)
+		return 0;
+
+	do {
+		const struct qm_fd *fd;
+		struct dpaa_sec_job *job;
+		struct dpaa_sec_op_ctx *ctx;
+
+		dq = qman_dequeue(fq);
+		if (!dq)
+			continue;
+
+		fd = &dq->fd;
+		/* sg is embedded in an op ctx,
+		 * sg[0] is for output
+		 * sg[1] for input
+		 */
+		job = rte_dpaa_mem_ptov(qm_fd_addr_get64(fd));
+
+		ctx = container_of(job, struct dpaa_sec_op_ctx, job);
+		ctx->fd_status = fd->status;
+		if (is_user_data_array)
+			out_user_data[pkts] = ctx->userdata;
+		else
+			out_user_data[0] = ctx->userdata;
+
+		if (!ctx->fd_status) {
+			is_success = true;
+		} else {
+			is_success = false;
+			DPAA_SEC_DP_WARN("SEC return err:0x%x", ctx->fd_status);
+		}
+		post_dequeue(ctx->op, pkts, is_success);
+		pkts++;
+
+		/* op status already reported via post_dequeue;
+		 * now free the ctx memory
+		 */
+		rte_mempool_put(ctx->ctx_pool, (void *)ctx);
+
+		qman_dqrr_consume(fq, dq);
+	} while (fq->flags & QMAN_FQ_STATE_VDQCR);
+
+	return pkts;
+}
+
+
+static __rte_always_inline uint32_t
+dpaa_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
+	rte_cryptodev_raw_get_dequeue_count_t get_dequeue_count,
+	uint32_t max_nb_to_dequeue,
+	rte_cryptodev_raw_post_dequeue_t post_dequeue,
+	void **out_user_data, uint8_t is_user_data_array,
+	uint32_t *n_success, int *dequeue_status)
+{
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(get_dequeue_count);
+	uint16_t num_rx;
+	struct dpaa_sec_qp *dpaa_qp = (struct dpaa_sec_qp *)qp_data;
+	uint32_t nb_ops = max_nb_to_dequeue;
+
+	if (unlikely(!DPAA_PER_LCORE_PORTAL)) {
+		if (rte_dpaa_portal_init((void *)0)) {
+			DPAA_SEC_ERR("Failure in affining portal");
+			return 0;
+		}
+	}
+
+	num_rx = dpaa_sec_deq_raw(dpaa_qp, out_user_data,
+			is_user_data_array, post_dequeue, nb_ops);
+
+	dpaa_qp->rx_pkts += num_rx;
+	*dequeue_status = 1;
+	*n_success = num_rx;
+
+	DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+
+	return num_rx;
+}
+
+static __rte_always_inline int
+dpaa_sec_raw_enqueue(void *qp_data, uint8_t *drv_ctx,
+	struct rte_crypto_vec *data_vec,
+	uint16_t n_data_vecs, union rte_crypto_sym_ofs ofs,
+	struct rte_crypto_va_iova_ptr *iv,
+	struct rte_crypto_va_iova_ptr *digest,
+	struct rte_crypto_va_iova_ptr *aad_or_auth_iv,
+	void *user_data)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(data_vec);
+	RTE_SET_USED(n_data_vecs);
+	RTE_SET_USED(ofs);
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(aad_or_auth_iv);
+	RTE_SET_USED(user_data);
+
+	return 0;
+}
+
+static __rte_always_inline void *
+dpaa_sec_raw_dequeue(void *qp_data, uint8_t *drv_ctx, int *dequeue_status,
+	enum rte_crypto_op_status *op_status)
+{
+	RTE_SET_USED(qp_data);
+	RTE_SET_USED(drv_ctx);
+	RTE_SET_USED(dequeue_status);
+	RTE_SET_USED(op_status);
+
+	return NULL;
+}
+
+int
+dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
+	struct rte_crypto_raw_dp_ctx *raw_dp_ctx,
+	enum rte_crypto_op_sess_type sess_type,
+	union rte_cryptodev_session_ctx session_ctx, uint8_t is_update)
+{
+	dpaa_sec_session *sess;
+	struct dpaa_sec_raw_dp_ctx *dp_ctx;
+	RTE_SET_USED(qp_id);
+
+	if (!is_update) {
+		memset(raw_dp_ctx, 0, sizeof(*raw_dp_ctx));
+		raw_dp_ctx->qp_data = dev->data->queue_pairs[qp_id];
+	}
+
+	if (sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
+		sess = (dpaa_sec_session *)get_sec_session_private_data(
+				session_ctx.sec_sess);
+	else if (sess_type == RTE_CRYPTO_OP_WITH_SESSION)
+		sess = (dpaa_sec_session *)get_sym_session_private_data(
+			session_ctx.crypto_sess, dpaa_cryptodev_driver_id);
+	else
+		return -ENOTSUP;
+	raw_dp_ctx->dequeue_burst = dpaa_sec_raw_dequeue_burst;
+	raw_dp_ctx->dequeue = dpaa_sec_raw_dequeue;
+	raw_dp_ctx->dequeue_done = dpaa_sec_raw_dequeue_done;
+	raw_dp_ctx->enqueue_burst = dpaa_sec_raw_enqueue_burst;
+	raw_dp_ctx->enqueue = dpaa_sec_raw_enqueue;
+	raw_dp_ctx->enqueue_done = dpaa_sec_raw_enqueue_done;
+
+	if (sess->ctxt == DPAA_SEC_CIPHER)
+		sess->build_raw_dp_fd = build_dpaa_raw_dp_cipher_fd;
+	else if (sess->ctxt == DPAA_SEC_AUTH)
+		sess->build_raw_dp_fd = build_dpaa_raw_dp_auth_fd;
+	else
+		return -ENOTSUP;
+	dp_ctx = (struct dpaa_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
+	dp_ctx->session = sess;
+
+	return 0;
+}
+
+int
+dpaa_sec_get_dp_ctx_size(__rte_unused struct rte_cryptodev *dev)
+{
+	return sizeof(struct dpaa_sec_raw_dp_ctx);
+}
diff --git a/drivers/crypto/dpaa_sec/meson.build b/drivers/crypto/dpaa_sec/meson.build
index 44fd60e5ae..f87ad6c7e7 100644
--- a/drivers/crypto/dpaa_sec/meson.build
+++ b/drivers/crypto/dpaa_sec/meson.build
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright 2018 NXP
+# Copyright 2018-2021 NXP
 
 if not is_linux
     build = false
@@ -7,7 +7,7 @@ if not is_linux
 endif
 
 deps += ['bus_dpaa', 'mempool_dpaa', 'security']
-sources = files('dpaa_sec.c')
+sources = files('dpaa_sec.c', 'dpaa_sec_raw_dp.c')
 
 includes += include_directories('../../bus/dpaa/include')
 includes += include_directories('../../common/dpaax')
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH v3 12/15] crypto/dpaa_sec: support authonly and chain with raw APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (10 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 11/15] crypto/dpaa_sec: support raw datapath APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 13/15] crypto/dpaa_sec: support AEAD and proto " Hemant Agrawal
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This patch improves the raw vector support in the dpaa_sec driver
for the auth-only and chain use cases.
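
The cipher/auth offset split consumed by the chain FD builder comes
from the rte_crypto_sym_ofs union. A hedged sketch for the common
encrypt-the-payload/authenticate-header-plus-payload layout (hdr_len
is a hypothetical clear-text header length):

	union rte_crypto_sym_ofs ofs;

	ofs.raw = 0;
	ofs.ofs.auth.head = 0;		/* auth covers the whole buffer */
	ofs.ofs.auth.tail = 0;
	ofs.ofs.cipher.head = hdr_len;	/* cipher skips the clear header */
	ofs.ofs.cipher.tail = 0;
	/* the builder derives auth_hdr_len from
	 * ofs.ofs.cipher.head - ofs.ofs.auth.head
	 */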

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa_sec/dpaa_sec.h        |   3 +-
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 296 +++++++++++++++++++++-
 2 files changed, 287 insertions(+), 12 deletions(-)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.h b/drivers/crypto/dpaa_sec/dpaa_sec.h
index 77288cd1eb..7890687828 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.h
@@ -135,7 +135,8 @@ typedef struct dpaa_sec_job* (*dpaa_sec_build_raw_dp_fd_t)(uint8_t *drv_ctx,
 			struct rte_crypto_va_iova_ptr *digest,
 			struct rte_crypto_va_iova_ptr *auth_iv,
 			union rte_crypto_sym_ofs ofs,
-			void *userdata);
+			void *userdata,
+			struct qm_fd *fd);
 
 typedef struct dpaa_sec_session_entry {
 	struct sec_cdb cdb;	/**< cmd block associated with qp */
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index 7376da4cbc..03ce21e53f 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -12,6 +12,7 @@
 #endif
 
 /* RTA header files */
+#include <desc/algo.h>
 #include <desc/ipsec.h>
 
 #include <rte_dpaa_bus.h>
@@ -26,6 +27,17 @@ struct dpaa_sec_raw_dp_ctx {
 	uint16_t cached_dequeue;
 };
 
+static inline int
+is_encode(dpaa_sec_session *ses)
+{
+	return ses->dir == DIR_ENC;
+}
+
+static inline int is_decode(dpaa_sec_session *ses)
+{
+	return ses->dir == DIR_DEC;
+}
+
 static __rte_always_inline int
 dpaa_sec_raw_enqueue_done(void *qp_data, uint8_t *drv_ctx, uint32_t n)
 {
@@ -82,18 +94,276 @@ build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
 			struct rte_crypto_va_iova_ptr *digest,
 			struct rte_crypto_va_iova_ptr *auth_iv,
 			union rte_crypto_sym_ofs ofs,
-			void *userdata)
+			void *userdata,
+			struct qm_fd *fd)
 {
-	RTE_SET_USED(drv_ctx);
-	RTE_SET_USED(sgl);
 	RTE_SET_USED(dest_sgl);
 	RTE_SET_USED(iv);
-	RTE_SET_USED(digest);
 	RTE_SET_USED(auth_iv);
-	RTE_SET_USED(ofs);
-	RTE_SET_USED(userdata);
+	RTE_SET_USED(fd);
 
-	return NULL;
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	phys_addr_t start_addr;
+	uint8_t *old_digest, extra_segs;
+	int data_len, data_offset, total_len = 0;
+	unsigned int i;
+
+	for (i = 0; i < sgl->num; i++)
+		total_len += sgl->vec[i].len;
+
+	data_len = total_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+	data_offset =  ofs.ofs.auth.head;
+
+	/* Support only length in bits for SNOW3G and ZUC */
+
+	if (is_decode(ses))
+		extra_segs = 3;
+	else
+		extra_segs = 2;
+
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("Auth: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+	ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + extra_segs);
+	if (!ctx)
+		return NULL;
+
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+	old_digest = ctx->digest;
+
+	/* output */
+	out_sg = &cf->sg[0];
+	qm_sg_entry_set64(out_sg, digest->iova);
+	out_sg->length = ses->digest_length;
+	cpu_to_hw_sg(out_sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	/* need to extend the input to a compound frame */
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	in_sg->length = data_len;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(&cf->sg[2]));
+
+	/* 1st seg */
+	sg = in_sg + 1;
+
+	if (ses->iv.length) {
+		uint8_t *iv_ptr;
+
+		iv_ptr = rte_crypto_op_ctod_offset(userdata, uint8_t *,
+						   ses->iv.offset);
+
+		if (ses->auth_alg == RTE_CRYPTO_AUTH_SNOW3G_UIA2) {
+			iv_ptr = conv_to_snow_f9_iv(iv_ptr);
+			sg->length = 12;
+		} else if (ses->auth_alg == RTE_CRYPTO_AUTH_ZUC_EIA3) {
+			iv_ptr = conv_to_zuc_eia_iv(iv_ptr);
+			sg->length = 8;
+		} else {
+			sg->length = ses->iv.length;
+		}
+		qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(iv_ptr));
+		in_sg->length += sg->length;
+		cpu_to_hw_sg(sg);
+		sg++;
+	}
+
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->offset = data_offset;
+
+	if (data_len <= (int)(sgl->vec[0].len - data_offset)) {
+		sg->length = data_len;
+	} else {
+		sg->length = sgl->vec[0].len - data_offset;
+
+		/* remaining i/p segs */
+		for (i = 1; i < sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			if (data_len > (int)sgl->vec[i].len)
+				sg->length = sgl->vec[i].len;
+			else
+				sg->length = data_len;
+
+			data_len = data_len - sg->length;
+			if (data_len < 1)
+				break;
+		}
+	}
+
+	if (is_decode(ses)) {
+		/* Digest verification case */
+		cpu_to_hw_sg(sg);
+		sg++;
+		rte_memcpy(old_digest, digest->va,
+				ses->digest_length);
+		start_addr = rte_dpaa_mem_vtop(old_digest);
+		qm_sg_entry_set64(sg, start_addr);
+		sg->length = ses->digest_length;
+		in_sg->length += ses->digest_length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+	cpu_to_hw_sg(in_sg);
+
+	return cf;
+}
+
+static inline struct dpaa_sec_job *
+build_dpaa_raw_dp_chain_fd(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata,
+			struct qm_fd *fd)
+{
+	RTE_SET_USED(auth_iv);
+
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	uint8_t *IV_ptr = iv->va;
+	unsigned int i;
+	uint16_t auth_hdr_len = ofs.ofs.cipher.head -
+				ofs.ofs.auth.head;
+	uint16_t auth_tail_len = ofs.ofs.auth.tail;
+	uint32_t auth_only_len = (auth_tail_len << 16) | auth_hdr_len;
+	int data_len = 0, auth_len = 0, cipher_len = 0;
+
+	for (i = 0; i < sgl->num; i++)
+		data_len += sgl->vec[i].len;
+
+	cipher_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+	auth_len = data_len - ofs.ofs.auth.head - ofs.ofs.auth.tail;
+
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("Cipher-Auth: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+
+	ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + 4);
+	if (!ctx)
+		return NULL;
+
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+
+	rte_prefetch0(cf->sg);
+
+	/* output */
+	out_sg = &cf->sg[0];
+	out_sg->extension = 1;
+	if (is_encode(ses))
+		out_sg->length = cipher_len + ses->digest_length;
+	else
+		out_sg->length = cipher_len;
+
+	/* output sg entries */
+	sg = &cf->sg[2];
+	qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(out_sg);
+
+	/* 1st seg */
+	if (dest_sgl) {
+		qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+		sg->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
+		sg->offset = ofs.ofs.cipher.head;
+
+		/* Successive segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+			sg->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		qm_sg_entry_set64(sg, sgl->vec[0].iova);
+		sg->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+		sg->offset = ofs.ofs.cipher.head;
+
+		/* Successive segs */
+		for (i = 1; i < sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			sg->length = sgl->vec[i].len;
+		}
+	}
+
+	if (is_encode(ses)) {
+		cpu_to_hw_sg(sg);
+		/* set auth output */
+		sg++;
+		qm_sg_entry_set64(sg, digest->iova);
+		sg->length = ses->digest_length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	if (is_encode(ses))
+		in_sg->length = ses->iv.length + auth_len;
+	else
+		in_sg->length = ses->iv.length + auth_len
+						+ ses->digest_length;
+
+	/* input sg entries */
+	sg++;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(in_sg);
+
+	/* 1st seg IV */
+	qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(IV_ptr));
+	sg->length = ses->iv.length;
+	cpu_to_hw_sg(sg);
+
+	/* 2 seg */
+	sg++;
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->length = sgl->vec[0].len - ofs.ofs.auth.head;
+	sg->offset = ofs.ofs.auth.head;
+
+	/* Successive segs */
+	for (i = 1; i < sgl->num; i++) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		qm_sg_entry_set64(sg, sgl->vec[i].iova);
+		sg->length = sgl->vec[i].len;
+	}
+
+	if (is_decode(ses)) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		memcpy(ctx->digest, digest->va,
+			ses->digest_length);
+		qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(ctx->digest));
+		sg->length = ses->digest_length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	if (auth_only_len)
+		fd->cmd = 0x80000000 | auth_only_len;
+
+	return cf;
 }
 
 static struct dpaa_sec_job *
@@ -104,10 +374,13 @@ build_dpaa_raw_dp_cipher_fd(uint8_t *drv_ctx,
 			struct rte_crypto_va_iova_ptr *digest,
 			struct rte_crypto_va_iova_ptr *auth_iv,
 			union rte_crypto_sym_ofs ofs,
-			void *userdata)
+			void *userdata,
+			struct qm_fd *fd)
 {
 	RTE_SET_USED(digest);
 	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(fd);
+
 	dpaa_sec_session *ses =
 		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
 	struct dpaa_sec_job *cf;
@@ -264,15 +537,14 @@ dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
 						&vec->digest[loop],
 						&vec->auth_iv[loop],
 						ofs,
-						user_data[loop]);
+						user_data[loop],
+						fd);
 			if (!cf) {
 				DPAA_SEC_ERR("error: Improper packet contents"
 					" for crypto operation");
 				goto skip_tx;
 			}
 			inq[loop] = ses->inq[rte_lcore_id() % MAX_DPAA_CORES];
-			fd->opaque_addr = 0;
-			fd->cmd = 0;
 			qm_fd_addr_set64(fd, rte_dpaa_mem_vtop(cf->sg));
 			fd->_format1 = qm_fd_compound;
 			fd->length29 = 2 * sizeof(struct qm_sg_entry);
@@ -470,6 +742,8 @@ dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 		sess->build_raw_dp_fd = build_dpaa_raw_dp_cipher_fd;
 	else if (sess->ctxt == DPAA_SEC_AUTH)
 		sess->build_raw_dp_fd = build_dpaa_raw_dp_auth_fd;
+	else if (sess->ctxt == DPAA_SEC_CIPHER_HASH)
+		sess->build_raw_dp_fd = build_dpaa_raw_dp_chain_fd;
 	else
 		return -ENOTSUP;
 	dp_ctx = (struct dpaa_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
-- 
2.17.1


^ permalink raw reply related	[flat|nested] 18+ messages in thread

* [dpdk-dev] [PATCH v3 13/15] crypto/dpaa_sec: support AEAD and proto with raw APIs
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (11 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 12/15] crypto/dpaa_sec: support authonly and chain with raw APIs Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 14/15] test/crypto: add raw API test for dpaax Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 15/15] test/crypto: add raw API support in 5G algos Hemant Agrawal
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

From: Gagandeep Singh <g.singh@nxp.com>

This adds support for AEAD and proto offload with raw APIs
in the dpaa_sec driver.
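
For the per-packet PDCP HFN override added here, the driver reads a
32-bit HFN value at ses->pdcp.hfn_ovd_offset bytes past the per-op
userdata pointer. A hedged sketch of one possible application-side
layout (the struct and its placement are illustrative; the offset
must match what the security session was created with):

	struct pkt_ctx {		/* hypothetical per-packet context */
		uint64_t app_cookie;
		uint32_t hfn;		/* must sit at hfn_ovd_offset */
	};

	struct pkt_ctx *pctx = &pkt_ctx_pool[i];	/* hypothetical storage */

	pctx->hfn = next_hfn;	/* the driver folds this into fd->cmd */
	user_data[i] = pctx;	/* passed to rte_cryptodev_raw_enqueue_burst() */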

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
 drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 293 ++++++++++++++++++++++
 1 file changed, 293 insertions(+)

diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index 03ce21e53f..522685f8cf 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -218,6 +218,163 @@ build_dpaa_raw_dp_auth_fd(uint8_t *drv_ctx,
 	return cf;
 }
 
+static inline struct dpaa_sec_job *
+build_raw_cipher_auth_gcm_sg(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata,
+			struct qm_fd *fd)
+{
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	uint8_t extra_req_segs;
+	uint8_t *IV_ptr = iv->va;
+	int data_len = 0, aead_len = 0;
+	unsigned int i;
+
+	for (i = 0; i < sgl->num; i++)
+		data_len += sgl->vec[i].len;
+
+	extra_req_segs = 4;
+	aead_len = data_len - ofs.ofs.cipher.head - ofs.ofs.cipher.tail;
+
+	if (ses->auth_only_len)
+		extra_req_segs++;
+
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("AEAD: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+
+	ctx = dpaa_sec_alloc_raw_ctx(ses,  sgl->num * 2 + extra_req_segs);
+	if (!ctx)
+		return NULL;
+
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+
+	rte_prefetch0(cf->sg);
+
+	/* output */
+	out_sg = &cf->sg[0];
+	out_sg->extension = 1;
+	if (is_encode(ses))
+		out_sg->length = aead_len + ses->digest_length;
+	else
+		out_sg->length = aead_len;
+
+	/* output sg entries */
+	sg = &cf->sg[2];
+	qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(out_sg);
+
+	if (dest_sgl) {
+		/* 1st seg */
+		qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+		sg->length = dest_sgl->vec[0].len - ofs.ofs.cipher.head;
+		sg->offset = ofs.ofs.cipher.head;
+
+		/* Successive segs */
+		for (i = 1; i < dest_sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+			sg->length = dest_sgl->vec[i].len;
+		}
+	} else {
+		/* 1st seg */
+		qm_sg_entry_set64(sg, sgl->vec[0].iova);
+		sg->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+		sg->offset = ofs.ofs.cipher.head;
+
+		/* Successive segs */
+		for (i = 1; i < sgl->num; i++) {
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			sg->length = sgl->vec[i].len;
+		}
+
+	}
+
+	if (is_encode(ses)) {
+		cpu_to_hw_sg(sg);
+		/* set auth output */
+		sg++;
+		qm_sg_entry_set64(sg, digest->iova);
+		sg->length = ses->digest_length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	if (is_encode(ses))
+		in_sg->length = ses->iv.length + aead_len
+						+ ses->auth_only_len;
+	else
+		in_sg->length = ses->iv.length + aead_len
+				+ ses->auth_only_len + ses->digest_length;
+
+	/* input sg entries */
+	sg++;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+	cpu_to_hw_sg(in_sg);
+
+	/* 1st seg IV */
+	qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(IV_ptr));
+	sg->length = ses->iv.length;
+	cpu_to_hw_sg(sg);
+
+	/* 2 seg auth only */
+	if (ses->auth_only_len) {
+		sg++;
+		qm_sg_entry_set64(sg, auth_iv->iova);
+		sg->length = ses->auth_only_len;
+		cpu_to_hw_sg(sg);
+	}
+
+	/* 3rd seg */
+	sg++;
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->length = sgl->vec[0].len - ofs.ofs.cipher.head;
+	sg->offset = ofs.ofs.cipher.head;
+
+	/* Successive segs */
+	for (i = 1; i < sgl->num; i++) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		qm_sg_entry_set64(sg, sgl->vec[i].iova);
+		sg->length =  sgl->vec[i].len;
+	}
+
+	if (is_decode(ses)) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		memcpy(ctx->digest, digest->va,
+			ses->digest_length);
+		qm_sg_entry_set64(sg, rte_dpaa_mem_vtop(ctx->digest));
+		sg->length = ses->digest_length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	if (ses->auth_only_len)
+		fd->cmd = 0x80000000 | ses->auth_only_len;
+
+	return cf;
+}
+
 static inline struct dpaa_sec_job *
 build_dpaa_raw_dp_chain_fd(uint8_t *drv_ctx,
 			struct rte_crypto_sgl *sgl,
@@ -484,6 +641,135 @@ build_dpaa_raw_dp_cipher_fd(uint8_t *drv_ctx,
 	return cf;
 }
 
+#ifdef RTE_LIBRTE_SECURITY
+static inline struct dpaa_sec_job *
+build_dpaa_raw_proto_sg(uint8_t *drv_ctx,
+			struct rte_crypto_sgl *sgl,
+			struct rte_crypto_sgl *dest_sgl,
+			struct rte_crypto_va_iova_ptr *iv,
+			struct rte_crypto_va_iova_ptr *digest,
+			struct rte_crypto_va_iova_ptr *auth_iv,
+			union rte_crypto_sym_ofs ofs,
+			void *userdata,
+			struct qm_fd *fd)
+{
+	RTE_SET_USED(iv);
+	RTE_SET_USED(digest);
+	RTE_SET_USED(auth_iv);
+	RTE_SET_USED(ofs);
+
+	dpaa_sec_session *ses =
+		((struct dpaa_sec_raw_dp_ctx *)drv_ctx)->session;
+	struct dpaa_sec_job *cf;
+	struct dpaa_sec_op_ctx *ctx;
+	struct qm_sg_entry *sg, *out_sg, *in_sg;
+	uint32_t in_len = 0, out_len = 0;
+	unsigned int i;
+
+	if (sgl->num > MAX_SG_ENTRIES) {
+		DPAA_SEC_DP_ERR("Proto: Max sec segs supported is %d",
+				MAX_SG_ENTRIES);
+		return NULL;
+	}
+
+	ctx = dpaa_sec_alloc_raw_ctx(ses, sgl->num * 2 + 4);
+	if (!ctx)
+		return NULL;
+	cf = &ctx->job;
+	ctx->userdata = (void *)userdata;
+	/* output */
+	out_sg = &cf->sg[0];
+	out_sg->extension = 1;
+	qm_sg_entry_set64(out_sg, rte_dpaa_mem_vtop(&cf->sg[2]));
+
+	if (dest_sgl) {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, dest_sgl->vec[0].iova);
+		sg->offset = 0;
+		sg->length = dest_sgl->vec[0].len;
+		out_len += sg->length;
+
+		for (i = 1; i < dest_sgl->num; i++) {
+		/* Successive segs */
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, dest_sgl->vec[i].iova);
+			sg->offset = 0;
+			sg->length = dest_sgl->vec[i].len;
+			out_len += sg->length;
+		}
+		/* count the last seg's full buffer, not just its data len */
+		out_len -= sg->length;
+		sg->length = dest_sgl->vec[i - 1].tot_len;
+	} else {
+		/* 1st seg */
+		sg = &cf->sg[2];
+		qm_sg_entry_set64(sg, sgl->vec[0].iova);
+		sg->offset = 0;
+		sg->length = sgl->vec[0].len;
+		out_len += sg->length;
+
+		for (i = 1; i < sgl->num; i++) {
+		/* Successive segs */
+			cpu_to_hw_sg(sg);
+			sg++;
+			qm_sg_entry_set64(sg, sgl->vec[i].iova);
+			sg->offset = 0;
+			sg->length = sgl->vec[i].len;
+			out_len += sg->length;
+		}
+		/* count the last seg's full buffer, not just its data len */
+		out_len -= sg->length;
+		sg->length = sgl->vec[i - 1].tot_len;
+
+	}
+	out_len += sg->length;
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	out_sg->length = out_len;
+	cpu_to_hw_sg(out_sg);
+
+	/* input */
+	in_sg = &cf->sg[1];
+	in_sg->extension = 1;
+	in_sg->final = 1;
+	in_len = sgl->vec[0].len;
+
+	sg++;
+	qm_sg_entry_set64(in_sg, rte_dpaa_mem_vtop(sg));
+
+	/* 1st seg */
+	qm_sg_entry_set64(sg, sgl->vec[0].iova);
+	sg->length = sgl->vec[0].len;
+	sg->offset = 0;
+
+	/* Successive segs */
+	for (i = 1; i < sgl->num; i++) {
+		cpu_to_hw_sg(sg);
+		sg++;
+		qm_sg_entry_set64(sg, sgl->vec[i].iova);
+		sg->length = sgl->vec[i].len;
+		sg->offset = 0;
+		in_len += sg->length;
+	}
+	sg->final = 1;
+	cpu_to_hw_sg(sg);
+
+	in_sg->length = in_len;
+	cpu_to_hw_sg(in_sg);
+
+	if ((ses->ctxt == DPAA_SEC_PDCP) && ses->pdcp.hfn_ovd) {
+		fd->cmd = 0x80000000 |
+			*((uint32_t *)((uint8_t *)userdata +
+			ses->pdcp.hfn_ovd_offset));
+		DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+			*((uint32_t *)((uint8_t *)userdata +
+			ses->pdcp.hfn_ovd_offset)),
+			ses->pdcp.hfn_ovd);
+	}
+
+	return cf;
+}
+#endif
+
 static uint32_t
 dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
 	struct rte_crypto_sym_vec *vec, union rte_crypto_sym_ofs ofs,
@@ -744,6 +1030,13 @@ dpaa_sec_configure_raw_dp_ctx(struct rte_cryptodev *dev, uint16_t qp_id,
 		sess->build_raw_dp_fd = build_dpaa_raw_dp_auth_fd;
 	else if (sess->ctxt == DPAA_SEC_CIPHER_HASH)
 		sess->build_raw_dp_fd = build_dpaa_raw_dp_chain_fd;
+	else if (sess->ctxt == DPAA_SEC_AEAD)
+		sess->build_raw_dp_fd = build_raw_cipher_auth_gcm_sg;
+#ifdef RTE_LIBRTE_SECURITY
+	else if (sess->ctxt == DPAA_SEC_IPSEC ||
+			sess->ctxt == DPAA_SEC_PDCP)
+		sess->build_raw_dp_fd = build_dpaa_raw_proto_sg;
+#endif
 	else
 		return -ENOTSUP;
 	dp_ctx = (struct dpaa_sec_raw_dp_ctx *)raw_dp_ctx->drv_ctx_data;
-- 
2.17.1
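
For readers new to the DPAAx raw data path: every builder above lays out
the frame descriptor the same way. cf->sg[0] is the extension entry for
the output side, cf->sg[1] for the input side, and the real scatter-gather
entries start at cf->sg[2]. A minimal sketch of the per-side fill loop,
assuming the driver-internal helpers used in the patch (struct qm_sg_entry,
qm_sg_entry_set64(), cpu_to_hw_sg()) and omitting error handling:

	/* Sketch only: fill a hardware SG table from an rte_crypto_sgl
	 * and return the total byte count it covers.
	 */
	static inline uint32_t
	fill_hw_sg(struct qm_sg_entry *sg, const struct rte_crypto_sgl *sgl)
	{
		uint32_t total = 0;
		unsigned int i;

		for (i = 0; i < sgl->num; i++) {
			qm_sg_entry_set64(&sg[i], sgl->vec[i].iova);
			sg[i].offset = 0;
			sg[i].length = sgl->vec[i].len;
			total += sg[i].length;
		}
		sg[sgl->num - 1].final = 1;	/* terminate the table */
		for (i = 0; i < sgl->num; i++)
			cpu_to_hw_sg(&sg[i]);	/* convert to HW byte order */
		return total;
	}

The real builders differ only in the bookkeeping around this loop: the
first entry may carry a cipher-head offset, IV/digest entries may be
prepended or appended, and for protocol offload the last output entry is
widened to tot_len so the hardware has room to write expanded data.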



* [dpdk-dev] [PATCH v3 14/15] test/crypto: add raw API test for dpaax
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (12 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 13/15] crypto/dpaa_sec: support AEAD and proto " Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 15/15] test/crypto: add raw API support in 5G algos Hemant Agrawal
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

This patch adds support for raw API tests for the
dpaa_sec and dpaa2_sec platforms.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev.c | 116 +++++++++++++++++++++++++++++++++++---
 1 file changed, 109 insertions(+), 7 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 1e951981c2..6a9761c3d8 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -184,11 +184,11 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
 {
 	struct rte_crypto_sym_op *sop = op->sym;
 	struct rte_crypto_op *ret_op = NULL;
-	struct rte_crypto_vec data_vec[UINT8_MAX];
+	struct rte_crypto_vec data_vec[UINT8_MAX], dest_data_vec[UINT8_MAX];
 	struct rte_crypto_va_iova_ptr cipher_iv, digest, aad_auth_iv;
 	union rte_crypto_sym_ofs ofs;
 	struct rte_crypto_sym_vec vec;
-	struct rte_crypto_sgl sgl;
+	struct rte_crypto_sgl sgl, dest_sgl;
 	uint32_t max_len;
 	union rte_cryptodev_session_ctx sess;
 	uint32_t count = 0;
@@ -324,6 +324,19 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
 	}
 
 	sgl.num = n;
+	/* Out of place */
+	if (sop->m_dst != NULL) {
+		dest_sgl.vec = dest_data_vec;
+		vec.dest_sgl = &dest_sgl;
+		n = rte_crypto_mbuf_to_vec(sop->m_dst, 0, max_len,
+				dest_data_vec, RTE_DIM(dest_data_vec));
+		if (n < 0 || n > sop->m_dst->nb_segs) {
+			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			goto exit;
+		}
+		dest_sgl.num = n;
+	} else
+		vec.dest_sgl = NULL;
 
 	if (rte_cryptodev_raw_enqueue_burst(ctx, &vec, ofs, (void **)&op,
 			&enqueue_status) < 1) {
@@ -8379,10 +8392,21 @@ test_pdcp_proto_SGL(int i, int oop,
 	int to_trn_tbl[16];
 	int segs = 1;
 	unsigned int trn_data = 0;
+	struct rte_cryptodev_info dev_info;
+	uint64_t feat_flags;
 	struct rte_security_ctx *ctx = (struct rte_security_ctx *)
 				rte_cryptodev_get_sec_ctx(
 				ts_params->valid_devs[0]);
+	struct rte_mbuf *temp_mbuf;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	feat_flags = dev_info.feature_flags;
 
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+		printf("Device does not support RAW data-path APIs.\n");
+		return -ENOTSUP;
+	}
 	/* Verify the capabilities */
 	struct rte_security_capability_idx sec_cap_idx;
 
@@ -8566,8 +8590,23 @@ test_pdcp_proto_SGL(int i, int oop,
 		ut_params->op->sym->m_dst = ut_params->obuf;
 
 	/* Process crypto operation */
-	if (process_crypto_request(ts_params->valid_devs[0], ut_params->op)
-		== NULL) {
+	temp_mbuf = ut_params->op->sym->m_src;
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST) {
+		/* filling lengths */
+		while (temp_mbuf) {
+			ut_params->op->sym->cipher.data.length
+				+= temp_mbuf->pkt_len;
+			ut_params->op->sym->auth.data.length
+				+= temp_mbuf->pkt_len;
+			temp_mbuf = temp_mbuf->next;
+		}
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 1, 1, 0, 0);
+	} else {
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+							ut_params->op);
+	}
+	if (ut_params->op == NULL) {
 		printf("TestCase %s()-%d line %d failed %s: ",
 			__func__, i, __LINE__,
 			"failed to process sym crypto op");
@@ -10424,6 +10463,7 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
 	int retval;
 	uint8_t *ciphertext, *auth_tag;
 	uint16_t plaintext_pad_len;
+	struct rte_cryptodev_info dev_info;
 
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -10433,7 +10473,11 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
 			&cap_idx) == NULL)
 		return TEST_SKIPPED;
 
-	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	uint64_t feat_flags = dev_info.feature_flags;
+
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP)))
 		return TEST_SKIPPED;
 
 	/* not supported with CPU crypto */
@@ -10470,7 +10514,11 @@ test_authenticated_encryption_oop(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_dst = ut_params->obuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 0, 0, 0, 0);
+	else
+		TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -10516,6 +10564,10 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
 
 	int retval;
 	uint8_t *plaintext;
+	struct rte_cryptodev_info dev_info;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	uint64_t feat_flags = dev_info.feature_flags;
 
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -10530,6 +10582,12 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
 			global_api_test_type == CRYPTODEV_RAW_API_TEST)
 		return TEST_SKIPPED;
 
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+		printf("Device does not support RAW data-path APIs.\n");
+		return TEST_SKIPPED;
+	}
+
 	/* Create AEAD session */
 	retval = create_aead_session(ts_params->valid_devs[0],
 			tdata->algo,
@@ -10560,7 +10618,11 @@ test_authenticated_decryption_oop(const struct aead_test_data *tdata)
 	ut_params->op->sym->m_dst = ut_params->obuf;
 
 	/* Process crypto operation */
-	TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+				ut_params->op, 0, 0, 0, 0);
+	else
+		TEST_ASSERT_NOT_NULL(process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op), "failed to process sym crypto op");
 
 	TEST_ASSERT_EQUAL(ut_params->op->status, RTE_CRYPTO_OP_STATUS_SUCCESS,
@@ -15400,6 +15462,46 @@ test_cryptodev_cn10k(void)
 	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_CN10K_PMD));
 }
 
+static int
+test_cryptodev_dpaa2_sec_raw_api(void)
+{
+	static const char *pmd_name = RTE_STR(CRYPTODEV_NAME_DPAA2_SEC_PMD);
+	int ret;
+
+	ret = require_feature_flag(pmd_name, RTE_CRYPTODEV_FF_SYM_RAW_DP,
+			"RAW API");
+	if (ret)
+		return ret;
+
+	global_api_test_type = CRYPTODEV_RAW_API_TEST;
+	ret = run_cryptodev_testsuite(pmd_name);
+	global_api_test_type = CRYPTODEV_API_TEST;
+
+	return ret;
+}
+
+static int
+test_cryptodev_dpaa_sec_raw_api(void)
+{
+	static const char *pmd_name = RTE_STR(CRYPTODEV_NAME_DPAA_SEC_PMD);
+	int ret;
+
+	ret = require_feature_flag(pmd_name, RTE_CRYPTODEV_FF_SYM_RAW_DP,
+			"RAW API");
+	if (ret)
+		return ret;
+
+	global_api_test_type = CRYPTODEV_RAW_API_TEST;
+	ret = run_cryptodev_testsuite(pmd_name);
+	global_api_test_type = CRYPTODEV_API_TEST;
+
+	return ret;
+}
+
+REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_raw_api_autotest,
+		test_cryptodev_dpaa2_sec_raw_api);
+REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_raw_api_autotest,
+		test_cryptodev_dpaa_sec_raw_api);
 REGISTER_TEST_COMMAND(cryptodev_qat_raw_api_autotest,
 		test_cryptodev_qat_raw_api);
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
-- 
2.17.1
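
The out-of-place handling added to process_sym_raw_dp_op() above is the
general pattern for any raw-API caller. A condensed sketch, assuming the
21.11 rte_crypto_sym_vec layout and a single operation (m_src, m_dst and
max_len stand in for the caller's mbufs and data length):

	struct rte_crypto_vec src_vec[UINT8_MAX], dst_vec[UINT8_MAX];
	struct rte_crypto_sgl src_sgl = { .vec = src_vec };
	struct rte_crypto_sgl dst_sgl = { .vec = dst_vec };
	struct rte_crypto_sym_vec vec = { .num = 1, .src_sgl = &src_sgl };
	int n;

	n = rte_crypto_mbuf_to_vec(m_src, 0, max_len, src_vec,
			RTE_DIM(src_vec));
	if (n < 0)
		return -1;	/* more segments than vector slots */
	src_sgl.num = n;

	if (m_dst != NULL) {	/* out of place: separate output SGL */
		n = rte_crypto_mbuf_to_vec(m_dst, 0, max_len, dst_vec,
				RTE_DIM(dst_vec));
		if (n < 0)
			return -1;
		dst_sgl.num = n;
		vec.dest_sgl = &dst_sgl;
	} else {
		vec.dest_sgl = NULL;	/* in place */
	}

The vector is then passed to rte_cryptodev_raw_enqueue_burst() exactly as
in the in-place case. The autotests registered above
(cryptodev_dpaa_sec_raw_api_autotest and
cryptodev_dpaa2_sec_raw_api_autotest) exercise both paths from the
dpdk-test binary.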



* [dpdk-dev] [PATCH v3 15/15] test/crypto: add raw API support in 5G algos
  2021-10-13 18:27 [dpdk-dev] [PATCH v3 00/15] crypto: add raw vector support in DPAAx Hemant Agrawal
                   ` (13 preceding siblings ...)
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 14/15] test/crypto: add raw API test for dpaax Hemant Agrawal
@ 2021-10-13 18:27 ` Hemant Agrawal
  14 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:27 UTC (permalink / raw)
  To: dev, gakhil; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

This patch adds support for raw API testing with ZUC
and SNOW 3G test cases.

Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 app/test/test_cryptodev.c | 57 ++++++++++++++++++++++++++++++++++-----
 1 file changed, 51 insertions(+), 6 deletions(-)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 6a9761c3d8..0fb3b81442 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -377,6 +377,7 @@ process_sym_raw_dp_op(uint8_t dev_id, uint16_t qp_id,
 	}
 
 	op->status = (count == MAX_RAW_DEQUEUE_COUNT + 1 || ret_op != op ||
+			ret_op->status == RTE_CRYPTO_OP_STATUS_ERROR ||
 			n_success < 1) ? RTE_CRYPTO_OP_STATUS_ERROR :
 					RTE_CRYPTO_OP_STATUS_SUCCESS;
 
@@ -4208,6 +4209,16 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
 	int retval;
 	unsigned plaintext_pad_len;
 	unsigned plaintext_len;
+	struct rte_cryptodev_info dev_info;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	uint64_t feat_flags = dev_info.feature_flags;
+
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+		printf("Device does not support RAW data-path APIs.\n");
+		return -ENOTSUP;
+	}
 
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
@@ -4263,7 +4274,11 @@ test_snow3g_encryption_oop(const struct snow3g_test_data *tdata)
 	if (retval < 0)
 		return retval;
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+	else
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4323,6 +4338,12 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
 		return TEST_SKIPPED;
 	}
 
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+		printf("Device does not support RAW data-path APIs.\n");
+		return -ENOTSUP;
+	}
+
 	/* Create SNOW 3G session */
 	retval = create_wireless_algo_cipher_session(ts_params->valid_devs[0],
 					RTE_CRYPTO_CIPHER_OP_ENCRYPT,
@@ -4357,7 +4378,11 @@ test_snow3g_encryption_oop_sgl(const struct snow3g_test_data *tdata)
 	if (retval < 0)
 		return retval;
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+	else
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4484,7 +4509,11 @@ test_snow3g_encryption_offset_oop(const struct snow3g_test_data *tdata)
 	if (retval < 0)
 		return retval;
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+	else
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 
@@ -4615,7 +4644,16 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
 	uint8_t *plaintext, *ciphertext;
 	unsigned ciphertext_pad_len;
 	unsigned ciphertext_len;
+	struct rte_cryptodev_info dev_info;
+
+	rte_cryptodev_info_get(ts_params->valid_devs[0], &dev_info);
+	uint64_t feat_flags = dev_info.feature_flags;
 
+	if ((global_api_test_type == CRYPTODEV_RAW_API_TEST) &&
+			(!(feat_flags & RTE_CRYPTODEV_FF_SYM_RAW_DP))) {
+		printf("Device does not support RAW data-path APIs.\n");
+		return -ENOTSUP;
+	}
 	/* Verify the capabilities */
 	struct rte_cryptodev_sym_capability_idx cap_idx;
 	cap_idx.type = RTE_CRYPTO_SYM_XFORM_CIPHER;
@@ -4673,7 +4711,11 @@ static int test_snow3g_decryption_oop(const struct snow3g_test_data *tdata)
 	if (retval < 0)
 		return retval;
 
-	ut_params->op = process_crypto_request(ts_params->valid_devs[0],
+	if (global_api_test_type == CRYPTODEV_RAW_API_TEST)
+		process_sym_raw_dp_op(ts_params->valid_devs[0], 0,
+			ut_params->op, 1, 0, 1, tdata->cipher_iv.len);
+	else
+		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 						ut_params->op);
 	TEST_ASSERT_NOT_NULL(ut_params->op, "failed to retrieve obuf");
 	ut_params->obuf = ut_params->op->sym->m_dst;
@@ -12971,10 +13013,13 @@ test_authentication_verify_fail_when_data_corruption(
 	else {
 		ut_params->op = process_crypto_request(ts_params->valid_devs[0],
 			ut_params->op);
-		TEST_ASSERT_NULL(ut_params->op, "authentication not failed");
 	}
+	if (ut_params->op == NULL)
+		return 0;
+	else if (ut_params->op->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
+		return 0;
 
-	return 0;
+	return -1;
 }
 
 static int
-- 
2.17.1
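
Every converted test above repeats the same capability guard; a shared
helper would keep it in one place. A sketch with a hypothetical helper
name (not part of this series):

	/* Hypothetical helper: skip raw-API runs on devices without
	 * RTE_CRYPTODEV_FF_SYM_RAW_DP support.
	 */
	static int
	check_raw_dp_support(uint8_t dev_id)
	{
		struct rte_cryptodev_info dev_info;

		rte_cryptodev_info_get(dev_id, &dev_info);
		if (global_api_test_type == CRYPTODEV_RAW_API_TEST &&
				!(dev_info.feature_flags &
				RTE_CRYPTODEV_FF_SYM_RAW_DP)) {
			printf("Device does not support RAW data-path APIs.\n");
			return -ENOTSUP;
		}
		return 0;
	}

Each test would then reduce its preamble to a single
check_raw_dp_support() call.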



* Re: [dpdk-dev] [EXT] [PATCH v3 02/15] crypto: add total raw buffer length
  2021-10-13 18:27 ` [dpdk-dev] [PATCH v3 02/15] crypto: add total raw buffer length Hemant Agrawal
@ 2021-10-13 18:35   ` Akhil Goyal
  2021-10-13 18:59     ` Hemant Agrawal
  0 siblings, 1 reply; 18+ messages in thread
From: Akhil Goyal @ 2021-10-13 18:35 UTC (permalink / raw)
  To: Hemant Agrawal, dev; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh

> From: Gagandeep Singh <g.singh@nxp.com>
> 
> The current crypto raw data vector is extended to support
> rte_security use cases, where the total data length is needed to
> know how much additional memory space is available in the buffer
> beyond the data length, so that the driver/HW can write
> expanded-size data after encryption.
> 
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> Acked-by: Akhil Goyal <gakhil@marvell.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 7 -------
>  lib/cryptodev/rte_crypto_sym.h       | 6 ++++++
>  2 files changed, 6 insertions(+), 7 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index f3c998a655..4b26ef6747 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -211,13 +211,6 @@ Deprecation Notices
>    This field will be null for inplace processing.
>    This change is targeted for DPDK 21.11.
> 
> -* cryptodev: The structure ``rte_crypto_vec`` would be updated to add
> -  ``tot_len`` to support total buffer length.
> -  This is required for security cases like IPsec and PDCP encryption offload
> -  to know how much additional memory space is available in buffer other
> than
> -  data length so that driver/HW can write expanded size data after
> encryption.
> -  This change is targeted for DPDK 21.11.
> -
>  * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
>    ``rte_cryptodev_asym_session`` to remove unnecessary indirection
> between
>    session and the private data of session. An opaque pointer can be exposed
> diff --git a/lib/cryptodev/rte_crypto_sym.h
> b/lib/cryptodev/rte_crypto_sym.h
> index dcc0bd5933..e5cef1fb72 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -37,6 +37,8 @@ struct rte_crypto_vec {
>  	rte_iova_t iova;
>  	/** length of the data buffer */
>  	uint32_t len;
> +	/** total buffer length*/
> +	uint32_t tot_len;
>  };
> 
>  /**
> @@ -980,12 +982,14 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf
> *mb, uint32_t ofs, uint32_t len,
>  	seglen = mb->data_len - ofs;
>  	if (len <= seglen) {
>  		vec[0].len = len;
> +		vec[0].tot_len = mb->buf_len;
>  		return 1;
>  	}
> 
>  	/* data spread across segments */
>  	vec[0].len = seglen;
>  	left = len - seglen;
> +	vec[0].tot_len = mb->buf_len;

I think you missed to update the tot_len as per Konstantin's suggestion.




* Re: [dpdk-dev] [EXT] [PATCH v3 02/15] crypto: add total raw buffer length
  2021-10-13 18:35   ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-10-13 18:59     ` Hemant Agrawal
  0 siblings, 0 replies; 18+ messages in thread
From: Hemant Agrawal @ 2021-10-13 18:59 UTC (permalink / raw)
  To: Akhil Goyal, dev; +Cc: roy.fan.zhang, konstantin.ananyev, Gagandeep Singh



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Akhil Goyal
> Sent: Thursday, October 14, 2021 12:06 AM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org
> Cc: roy.fan.zhang@intel.com; konstantin.ananyev@intel.com; Gagandeep
> Singh <G.Singh@nxp.com>
> Subject: Re: [dpdk-dev] [EXT] [PATCH v3 02/15] crypto: add total raw buffer
> length
> Importance: High
> 
> > From: Gagandeep Singh <g.singh@nxp.com>
> >
> > The current crypto raw data vector is extended to support
> > rte_security use cases, where the total data length is needed to
> > know how much additional memory space is available in the buffer
> > beyond the data length, so that the driver/HW can write
> > expanded-size data after encryption.
> >
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> > Acked-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 7 -------
> >  lib/cryptodev/rte_crypto_sym.h       | 6 ++++++
> >  2 files changed, 6 insertions(+), 7 deletions(-)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index f3c998a655..4b26ef6747 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -211,13 +211,6 @@ Deprecation Notices
> >    This field will be null for inplace processing.
> >    This change is targeted for DPDK 21.11.
> >
> > -* cryptodev: The structure ``rte_crypto_vec`` would be updated to add
> > -  ``tot_len`` to support total buffer length.
> > -  This is required for security cases like IPsec and PDCP encryption
> > offload
> > -  to know how much additional memory space is available in buffer
> > other than
> > -  data length so that driver/HW can write expanded size data after
> > encryption.
> > -  This change is targeted for DPDK 21.11.
> > -
> >  * cryptodev: Hide structures ``rte_cryptodev_sym_session`` and
> >    ``rte_cryptodev_asym_session`` to remove unnecessary indirection
> > between
> >    session and the private data of session. An opaque pointer can be
> > exposed diff --git a/lib/cryptodev/rte_crypto_sym.h
> > b/lib/cryptodev/rte_crypto_sym.h index dcc0bd5933..e5cef1fb72 100644
> > --- a/lib/cryptodev/rte_crypto_sym.h
> > +++ b/lib/cryptodev/rte_crypto_sym.h
> > @@ -37,6 +37,8 @@ struct rte_crypto_vec {
> >  	rte_iova_t iova;
> >  	/** length of the data buffer */
> >  	uint32_t len;
> > +	/** total buffer length*/
> > +	uint32_t tot_len;
> >  };
> >
> >  /**
> > @@ -980,12 +982,14 @@ rte_crypto_mbuf_to_vec(const struct rte_mbuf
> > *mb, uint32_t ofs, uint32_t len,
> >  	seglen = mb->data_len - ofs;
> >  	if (len <= seglen) {
> >  		vec[0].len = len;
> > +		vec[0].tot_len = mb->buf_len;
> >  		return 1;
> >  	}
> >
> >  	/* data spread across segments */
> >  	vec[0].len = seglen;
> >  	left = len - seglen;
> > +	vec[0].tot_len = mb->buf_len;
> 
> I think you missed to update the tot_len as per Konstantin's suggestion.
> 
[Hemant] Sorry, I sent the wrong series
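
For context, the hunk quoted above sets tot_len only on the first
vector, and uses the raw buf_len without accounting for headroom or the
starting offset. One plausible shape of the fix (an assumption here,
since Konstantin's suggestion is not quoted in this thread) is to report
the writable space per segment, measured from where the vector starts:

	/* assumption: writable space measured from the vector's start */
	vec[0].tot_len = mb->buf_len - rte_pktmbuf_headroom(mb) - ofs;

	/* and likewise for each following segment nseg */
	vec[i].tot_len = nseg->buf_len - rte_pktmbuf_headroom(nseg);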

