All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH RFC 0/3] Add UADK compression and crypto PMD
@ 2022-05-20 11:36 Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 1/3] compress: add UADK compression PMD Zhangfei Gao
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Zhangfei Gao @ 2022-05-20 11:36 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Supported hardware platforms: 
HiSilicon Kunpeng920 and Kunpeng930

The PMD relies on UADK Interface: https://github.com/Linaro/uadk
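
For reference, the synchronous wd_comp flow the compression PMD is built on
looks roughly like this (a minimal sketch based only on the UADK calls used
in patch 1/3; the in/out buffers and their lengths are placeholders, and
error handling is omitted):

	#include <uadk/wd_comp.h>

	struct wd_comp_sess_setup setup = {
		.alg_type = WD_ZLIB,
		.op_type  = WD_DIR_COMPRESS,
		.comp_lv  = WD_COMP_L8,
	};
	struct wd_comp_req req = {
		.src = in,  .src_len = in_len,
		.dst = out, .dst_len = out_len,
		.op_type  = WD_DIR_COMPRESS,
		.data_fmt = WD_FLAT_BUF,
	};
	handle_t h;
	int ret;

	wd_comp_env_init(NULL);			/* reserve device resources */
	h = wd_comp_alloc_sess(&setup);		/* per-xform session */
	do {
		ret = wd_do_comp_sync(h, &req);	/* retry while engine is busy */
	} while (ret == -WD_EBUSY);
	wd_comp_free_sess(h);
	wd_comp_env_uninit();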


Zhangfei Gao (3):
  compress: add UADK compression PMD
  test/crypto: add cryptodev_uadk_autotest
  drivers/crypto: add UADK crypto PMD

 app/test/test_cryptodev.c                 |    7 +
 app/test/test_cryptodev.h                 |    1 +
 doc/guides/compressdevs/index.rst         |    1 +
 doc/guides/compressdevs/uadk.rst          |   73 ++
 doc/guides/cryptodevs/index.rst           |    1 +
 doc/guides/cryptodevs/uadk.rst            |   80 ++
 drivers/compress/meson.build              |    1 +
 drivers/compress/uadk/meson.build         |   28 +
 drivers/compress/uadk/uadk_compress_pmd.c |  500 +++++++++
 drivers/compress/uadk/version.map         |    3 +
 drivers/crypto/meson.build                |    1 +
 drivers/crypto/uadk/meson.build           |   36 +
 drivers/crypto/uadk/uadk_crypto_pmd.c     | 1159 +++++++++++++++++++++
 drivers/crypto/uadk/version.map           |    3 +
 14 files changed, 1894 insertions(+)
 create mode 100644 doc/guides/compressdevs/uadk.rst
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/compress/uadk/meson.build
 create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
 create mode 100644 drivers/compress/uadk/version.map
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/version.map

-- 
2.25.1


^ permalink raw reply	[flat|nested] 5+ messages in thread

* [PATCH RFC 1/3] compress: add UADK compression PMD
  2022-05-20 11:36 [PATCH RFC 0/3] Add UADK compression and crypto PMD Zhangfei Gao
@ 2022-05-20 11:36 ` Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 3/3] drivers/crypto: add UADK crypto PMD Zhangfei Gao
  2 siblings, 0 replies; 5+ messages in thread
From: Zhangfei Gao @ 2022-05-20 11:36 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Add a compression & decompression PMD for the HiSilicon Kunpeng930.
The UADK compression PMD relies on the UADK API.

Test:
sudo dpdk-test --vdev=0000:75:00.0
RTE>>compressdev_autotest
RTE>>quit
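
For context, an application drives this PMD through the standard compressdev
API. A minimal stateless round trip looks roughly like the sketch below
(device/queue-pair configuration, mempool setup and error handling omitted;
dev_id, op and the mbufs are placeholders):

	struct rte_comp_xform xform = {
		.type = RTE_COMP_COMPRESS,
		.compress.algo = RTE_COMP_ALGO_DEFLATE,
	};
	void *priv_xform;

	rte_compressdev_private_xform_create(dev_id, &xform, &priv_xform);

	op->op_type = RTE_COMP_OP_STATELESS;
	op->m_src = src_mbuf;
	op->m_dst = dst_mbuf;
	op->src.offset = 0;
	op->src.length = rte_pktmbuf_data_len(src_mbuf);
	op->private_xform = priv_xform;

	rte_compressdev_enqueue_burst(dev_id, 0, &op, 1);
	while (rte_compressdev_dequeue_burst(dev_id, 0, &op, 1) == 0)
		;
	/* op->status and op->produced now hold the result */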

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 doc/guides/compressdevs/index.rst         |   1 +
 doc/guides/compressdevs/uadk.rst          |  73 ++++
 drivers/compress/meson.build              |   1 +
 drivers/compress/uadk/meson.build         |  28 ++
 drivers/compress/uadk/uadk_compress_pmd.c | 500 ++++++++++++++++++++++
 drivers/compress/uadk/version.map         |   3 +
 6 files changed, 606 insertions(+)
 create mode 100644 doc/guides/compressdevs/uadk.rst
 create mode 100644 drivers/compress/uadk/meson.build
 create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
 create mode 100644 drivers/compress/uadk/version.map

diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
index 54a3ef4273..e47a9ab9cf 100644
--- a/doc/guides/compressdevs/index.rst
+++ b/doc/guides/compressdevs/index.rst
@@ -14,4 +14,5 @@ Compression Device Drivers
     mlx5
     octeontx
     qat_comp
+    uadk
     zlib
diff --git a/doc/guides/compressdevs/uadk.rst b/doc/guides/compressdevs/uadk.rst
new file mode 100644
index 0000000000..08eb636da2
--- /dev/null
+++ b/doc/guides/compressdevs/uadk.rst
@@ -0,0 +1,73 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+    Copyright 2022-2023 Linaro ltd.
+
+UADK Compression Poll Mode Driver
+=======================================================
+
+The UADK compression PMD provides poll mode compression & decompression driver
+support for the following hardware accelerator devices:
+
+* ``HiSilicon Kunpeng930``
+
+Features
+--------
+
+UADK compression PMD has support for:
+
+Compression/Decompression algorithm:
+
+    * DEFLATE - using Fixed and Dynamic Huffman encoding
+
+Window size support:
+
+    * 32K
+
+Checksum generation:
+
+    * CRC32, Adler and combined checksum
+
+Stateful operation:
+
+    * Decompression only
+
+Test steps
+-----------
+
+   .. code-block:: console
+
+	1. Build
+	cd dpdk
+	mkdir build
+	meson build        # add --reconfigure if the build dir already exists
+	cd build
+	ninja
+	sudo ninja install
+
+	2. Prepare
+	echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge_2mb
+	mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
+
+	3. Test with zip PF
+	sudo dpdk-test --vdev=0000:75:00.0
+	RTE>>compressdev_autotest
+	RTE>>quit
+
+	4. Test with zip VF
+	su root
+	echo 1 > /sys/devices/pci0000:74/0000:74:00.0/0000:75:00.0/sriov_numvfs
+	exit
+	sudo dpdk-test --vdev=0000:75:00.1
+	RTE>>compressdev_autotest
+	RTE>>quit
+
+Dependency
+------------
+
+The UADK compression PMD relies on the HiSilicon UADK library [1].
+
+[1] https://github.com/Linaro/uadk
diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build
index abe043ab94..041a45ba41 100644
--- a/drivers/compress/meson.build
+++ b/drivers/compress/meson.build
@@ -10,6 +10,7 @@ drivers = [
         'mlx5',
         'octeontx',
+        'uadk',
        'zlib',
 ]
 
 std_deps = ['compressdev'] # compressdev pulls in all other needed deps
diff --git a/drivers/compress/uadk/meson.build b/drivers/compress/uadk/meson.build
new file mode 100644
index 0000000000..347ef9757d
--- /dev/null
+++ b/drivers/compress/uadk/meson.build
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+# Copyright 2022-2023 Linaro ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'uadk_compress_pmd.c',
+)
+
+deps += ['bus_pci']
+dep = cc.find_library('libwd_comp', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+    build = false
+    reason = 'missing dependency, "libwd_comp"'
+else
+    ext_deps += dep
+endif
diff --git a/drivers/compress/uadk/uadk_compress_pmd.c b/drivers/compress/uadk/uadk_compress_pmd.c
new file mode 100644
index 0000000000..16a0593f84
--- /dev/null
+++ b/drivers/compress/uadk/uadk_compress_pmd.c
@@ -0,0 +1,500 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+ * Copyright 2022-2023 Linaro ltd.
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_compressdev_pmd.h>
+#include <rte_malloc.h>
+#include <uadk/wd_comp.h>
+#include <uadk/wd_sched.h>
+
+struct uadk_compress_priv {
+	struct rte_mempool *mp;
+} __rte_cache_aligned;
+
+struct uadk_qp {
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_compressdev_stats qp_stats;
+	/**< Queue pair statistics */
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	/**< Unique Queue Pair Name */
+} __rte_cache_aligned;
+
+struct uadk_stream {
+	handle_t handle;
+	enum rte_comp_xform_type type;
+} __rte_cache_aligned;
+
+RTE_LOG_REGISTER_DEFAULT(uadk_compress_logtype, INFO);
+
+#define UADK_LOG(level, fmt, ...)  \
+	rte_log(RTE_LOG_ ## level, uadk_compress_logtype,  \
+			"%s() line %u: " fmt "\n", __func__, __LINE__,  \
+					## __VA_ARGS__)
+
+#define UADK_COMPRESS_DRIVER_NAME compress_uadk
+
+static int
+uadk_compress_pmd_config(struct rte_compressdev *dev,
+			 struct rte_compressdev_config *config)
+{
+	char mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct uadk_compress_priv *priv;
+	struct rte_mempool *mp;
+	int ret;
+
+	if (dev == NULL || config == NULL)
+		return -EINVAL;
+
+	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
+		 "stream_mp_%u", dev->data->dev_id);
+	priv = dev->data->dev_private;
+
+	/* alloc resources */
+	ret = wd_comp_env_init(NULL);
+	if (ret < 0)
+		return -EINVAL;
+
+	mp = priv->mp;
+	if (mp == NULL) {
+		mp = rte_mempool_create(mp_name,
+				config->max_nb_priv_xforms +
+				config->max_nb_streams,
+				sizeof(struct uadk_stream),
+				0, 0, NULL, NULL, NULL,
+				NULL, config->socket_id,
+				0);
+		if (mp == NULL) {
+			UADK_LOG(ERR, "Cannot create private xform pool on socket %d\n",
+				 config->socket_id);
+			ret = -ENOMEM;
+			goto err_mempool;
+		}
+		priv->mp = mp;
+	}
+	return 0;
+err_mempool:
+	wd_comp_env_uninit();
+	return ret;
+}
+
+static int
+uadk_compress_pmd_start(struct rte_compressdev *dev __rte_unused)
+{
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stop(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static int
+uadk_compress_pmd_close(struct rte_compressdev *dev)
+{
+	struct uadk_compress_priv *priv =
+		(struct uadk_compress_priv *)dev->data->dev_private;
+
+	/* free resources */
+	rte_mempool_free(priv->mp);
+	priv->mp = NULL;
+	wd_comp_env_uninit();
+
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stats_get(struct rte_compressdev *dev,
+			    struct rte_compressdev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+static void
+uadk_compress_pmd_stats_reset(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static const struct
+rte_compressdev_capabilities uadk_compress_pmd_capabilities[] = {
+	{   /* Deflate */
+		.algo = RTE_COMP_ALGO_DEFLATE,
+		.comp_feature_flags = RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				      RTE_COMP_FF_HUFFMAN_FIXED |
+				      RTE_COMP_FF_HUFFMAN_DYNAMIC,
+	},
+
+	RTE_COMP_END_OF_CAPABILITIES_LIST()
+};
+
+static void
+uadk_compress_pmd_info_get(struct rte_compressdev *dev,
+			   struct rte_compressdev_info *dev_info)
+{
+	if (dev_info != NULL) {
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = uadk_compress_pmd_capabilities;
+	}
+}
+
+static int
+uadk_compress_pmd_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
+{
+	struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp != NULL) {
+		rte_ring_free(qp->processed_pkts);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+
+	return 0;
+}
+
+static int
+uadk_pmd_qp_set_unique_name(struct rte_compressdev *dev,
+			    struct uadk_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+				 "uadk_pmd_%u_qp_%u",
+				 dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+				       unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r = qp->processed_pkts;
+
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			UADK_LOG(INFO, "Reusing existing ring %s for processed packets",
+				 qp->name);
+			return r;
+		}
+
+		UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			       RING_F_EXACT_SZ);
+}
+
+static int
+uadk_compress_pmd_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+			   uint32_t max_inflight_ops, int socket_id)
+{
+	struct uadk_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		uadk_compress_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (uadk_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp,
+						max_inflight_ops, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	return 0;
+
+qp_setup_cleanup:
+	if (qp) {
+		rte_free(qp);
+		qp = NULL;
+	}
+	return -1;
+}
+
+static int
+uadk_compress_pmd_xform_create(struct rte_compressdev *dev,
+			       const struct rte_comp_xform *xform,
+			       void **private_xform)
+{
+	struct uadk_compress_priv *priv = dev->data->dev_private;
+	struct wd_comp_sess_setup setup = {0};
+	struct sched_params param = {0};
+	struct uadk_stream *stream;
+	handle_t handle;
+
+	if (xform == NULL) {
+		UADK_LOG(ERR, "invalid xform struct");
+		return -EINVAL;
+	}
+
+	if (rte_mempool_get(priv->mp, private_xform)) {
+		UADK_LOG(ERR, "Couldn't get object from session mempool");
+		return -ENOMEM;
+	}
+
+	stream = *((struct uadk_stream **)private_xform);
+
+	switch (xform->type) {
+	case RTE_COMP_COMPRESS:
+		switch (xform->compress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.win_sz = WD_COMP_WS_8K;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_COMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = 0;
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	case RTE_COMP_DECOMPRESS:
+		switch (xform->decompress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_DECOMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = 0;
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	default:
+		UADK_LOG(ERR, "Algorithm %u is not supported.", xform->type);
+		goto err;
+	}
+
+	handle = wd_comp_alloc_sess(&setup);
+	if (!handle)
+		goto err;
+	stream->handle = handle;
+	stream->type = xform->type;
+	return 0;
+err:
+	rte_mempool_put(priv->mp, private_xform);
+	return -EINVAL;
+}
+
+static int
+uadk_compress_pmd_xform_free(struct rte_compressdev *dev __rte_unused, void *private_xform)
+{
+	struct uadk_stream *stream = (struct uadk_stream *)private_xform;
+	struct rte_mempool *mp;
+
+	if (!stream)
+		return -EINVAL;
+
+	wd_comp_free_sess(stream->handle);
+	memset(stream, 0, sizeof(struct uadk_stream));
+	mp = rte_mempool_from_obj(stream);
+	rte_mempool_put(mp, stream);
+	return 0;
+}
+
+static struct rte_compressdev_ops uadk_compress_pmd_ops = {
+		.dev_configure		= uadk_compress_pmd_config,
+		.dev_start		= uadk_compress_pmd_start,
+		.dev_stop		= uadk_compress_pmd_stop,
+		.dev_close		= uadk_compress_pmd_close,
+		.stats_get		= uadk_compress_pmd_stats_get,
+		.stats_reset		= uadk_compress_pmd_stats_reset,
+		.dev_infos_get		= uadk_compress_pmd_info_get,
+		.queue_pair_setup	= uadk_compress_pmd_qp_setup,
+		.queue_pair_release	= uadk_compress_pmd_qp_release,
+		.private_xform_create	= uadk_compress_pmd_xform_create,
+		.private_xform_free	= uadk_compress_pmd_xform_free,
+		.stream_create		= NULL,
+		.stream_free		= NULL
+};
+
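+/*
+ * Ops are processed synchronously here at enqueue time; completed ops are
+ * placed on the queue pair's completion ring and handed back to the
+ * application by the dequeue callback below.
+ */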
+static uint16_t
+uadk_compress_pmd_enqueue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops, uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	struct uadk_stream *stream;
+	struct rte_comp_op *op;
+	uint16_t enqd = 0;
+	int i, ret = 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		op = ops[i];
+
+		if (op->op_type == RTE_COMP_OP_STATEFUL) {
+			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+		} else {
+			/* process stateless ops */
+			stream = (struct uadk_stream *)op->private_xform;
+			if (stream) {
+				struct wd_comp_req req = {0};
+				uint16_t dst_len = rte_pktmbuf_data_len(op->m_dst);
+
+				req.src = rte_pktmbuf_mtod(op->m_src, uint8_t *);
+				req.src_len = op->src.length;
+				req.dst = rte_pktmbuf_mtod(op->m_dst, uint8_t *);
+				req.dst_len = dst_len;
+				req.op_type = stream->type;
+				req.cb = NULL;
+				req.data_fmt = WD_FLAT_BUF;
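+				/* the engine returns -WD_EBUSY while busy; retry */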
+				do {
+					ret = wd_do_comp_sync(stream->handle, &req);
+				} while (ret == -WD_EBUSY);
+
+				op->consumed += req.src_len;
+
+				if (req.dst_len <= dst_len) {
+					op->produced += req.dst_len;
+					op->status = RTE_COMP_OP_STATUS_SUCCESS;
+				} else  {
+					op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
+				}
+
+				if (ret) {
+					op->status = RTE_COMP_OP_STATUS_ERROR;
+					break;
+				}
+			} else {
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+			}
+		}
+
+		/* Whatever the op's outcome, place it on the completion
+		 * ring together with its status.
+		 */
+		if (!ret)
+			ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+
+		if (unlikely(ret)) {
+			/* increment error count if the op could not be enqueued */
+			qp->qp_stats.enqueue_err_count++;
+		} else {
+			qp->qp_stats.enqueued_count++;
+			enqd++;
+		}
+	}
+	return enqd;
+}
+
+static uint16_t
+uadk_compress_pmd_dequeue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops,
+				     uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	unsigned int nb_dequeued = 0;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops, NULL);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int
+uadk_compress_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			struct rte_pci_device *pci_dev)
+{
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	struct rte_compressdev *compressdev;
+	struct rte_compressdev_pmd_init_params init_params = {
+		"",
+		rte_socket_id(),
+	};
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+	compressdev = rte_compressdev_pmd_create(name, &pci_dev->device,
+			sizeof(struct uadk_compress_priv), &init_params);
+	if (compressdev == NULL) {
+		UADK_LOG(ERR, "driver %s: create failed", init_params.name);
+		return -ENODEV;
+	}
+
+	compressdev->dev_ops = &uadk_compress_pmd_ops;
+	compressdev->dequeue_burst = uadk_compress_pmd_dequeue_burst_sync;
+	compressdev->enqueue_burst = uadk_compress_pmd_enqueue_burst_sync;
+	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+
+	return 0;
+}
+
+static int
+uadk_compress_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_compressdev *compressdev;
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+	compressdev = rte_compressdev_pmd_get_named_dev(name);
+	if (compressdev == NULL)
+		return -ENODEV;
+
+	return rte_compressdev_pmd_destroy(compressdev);
+}
+
+#define PCI_VENDOR_ID_HUAWEI            0x19e5
+#define PCI_DEVICE_ID_ZIP_PF            0xa250
+#define PCI_DEVICE_ID_ZIP_VF            0xa251
+
+static struct rte_pci_id pci_id_uadk_compress_table[] = {
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_PF),
+	},
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_VF),
+	},
+	{
+		.device_id = 0
+	},
+};
+
+/**
+ * Structure that represents a PCI driver
+ */
+static struct rte_pci_driver uadk_compress_pmd = {
+	.id_table    = pci_id_uadk_compress_table,
+	.probe       = uadk_compress_pci_probe,
+	.remove      = uadk_compress_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(UADK_COMPRESS_DRIVER_NAME, uadk_compress_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(UADK_COMPRESS_DRIVER_NAME, pci_id_uadk_compress_table);
diff --git a/drivers/compress/uadk/version.map b/drivers/compress/uadk/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/compress/uadk/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH RFC 2/3] test/crypto: add cryptodev_uadk_autotest
  2022-05-20 11:36 [PATCH RFC 0/3] Add UADK compression and crypto PMD Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 1/3] compress: add UADK compression PMD Zhangfei Gao
@ 2022-05-20 11:36 ` Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 3/3] drivers/crypto: add UADK crypto PMD Zhangfei Gao
  2 siblings, 0 replies; 5+ messages in thread
From: Zhangfei Gao @ 2022-05-20 11:36 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Example:
sudo dpdk-test --vdev=0000:76:00.0 --log-level=6
RTE>>cryptodev_uadk_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 app/test/test_cryptodev.c | 7 +++++++
 app/test/test_cryptodev.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cdadb..b413807d2b 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -15445,6 +15445,12 @@ test_cryptodev_mlx5(void)
 	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_MLX5_PMD));
 }
 
+static int
+test_cryptodev_uadk(void)
+{
+	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_UADK_PMD));
+}
+
 static int
 test_cryptodev_null(void)
 {
@@ -15722,6 +15728,7 @@ REGISTER_TEST_COMMAND(cryptodev_aesni_gcm_autotest, test_cryptodev_aesni_gcm);
 REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_gcm_autotest,
 	test_cryptodev_cpu_aesni_gcm);
 REGISTER_TEST_COMMAND(cryptodev_mlx5_autotest, test_cryptodev_mlx5);
+REGISTER_TEST_COMMAND(cryptodev_uadk_autotest, test_cryptodev_uadk);
 REGISTER_TEST_COMMAND(cryptodev_null_autotest, test_cryptodev_null);
 REGISTER_TEST_COMMAND(cryptodev_sw_snow3g_autotest, test_cryptodev_sw_snow3g);
 REGISTER_TEST_COMMAND(cryptodev_sw_kasumi_autotest, test_cryptodev_sw_kasumi);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..1f2a20e84e 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -75,6 +75,7 @@
 #define CRYPTODEV_NAME_CN9K_PMD		crypto_cn9k
 #define CRYPTODEV_NAME_CN10K_PMD	crypto_cn10k
 #define CRYPTODEV_NAME_MLX5_PMD		crypto_mlx5
+#define CRYPTODEV_NAME_UADK_PMD		crypto_uadk
 
 
 enum cryptodev_api_test_type {
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH RFC 3/3] drivers/crypto: add UADK crypto PMD
  2022-05-20 11:36 [PATCH RFC 0/3] Add UADK compression and crypto PMD Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 1/3] compress: add UADK compression PMD Zhangfei Gao
  2022-05-20 11:36 ` [PATCH RFC 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
@ 2022-05-20 11:36 ` Zhangfei Gao
  2 siblings, 0 replies; 5+ messages in thread
From: Zhangfei Gao @ 2022-05-20 11:36 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Add a UADK crypto PMD for the HiSilicon Kunpeng920 and Kunpeng930.

Test:
sudo dpdk-test --vdev=0000:76:00.0 (--log-level=6)
RTE>>cryptodev_uadk_autotest
RTE>>quit
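
For context, the PMD sits behind the standard cryptodev API. A rough
cipher-only sketch follows (session and operation mempool setup elided, as
the exact session-creation calls vary between DPDK releases; key, iv_offset,
dev_id, sess, op and mbuf are placeholders):

	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
		.cipher = {
			.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
			.algo = RTE_CRYPTO_CIPHER_AES_CBC,
			.key = { .data = key, .length = 16 },
			.iv = { .offset = iv_offset, .length = 16 },
		},
	};

	/* create a session from the xform, then: */
	rte_crypto_op_attach_sym_session(op, sess);
	op->sym->m_src = mbuf;
	op->sym->cipher.data.offset = 0;
	op->sym->cipher.data.length = rte_pktmbuf_data_len(mbuf);

	rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1);
	while (rte_cryptodev_dequeue_burst(dev_id, 0, &op, 1) == 0)
		;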

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 doc/guides/cryptodevs/index.rst       |    1 +
 doc/guides/cryptodevs/uadk.rst        |   80 ++
 drivers/crypto/meson.build            |    1 +
 drivers/crypto/uadk/meson.build       |   36 +
 drivers/crypto/uadk/uadk_crypto_pmd.c | 1159 +++++++++++++++++++++++++
 drivers/crypto/uadk/version.map       |    3 +
 6 files changed, 1280 insertions(+)
 create mode 100644 doc/guides/cryptodevs/uadk.rst
 create mode 100644 drivers/crypto/uadk/meson.build
 create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c
 create mode 100644 drivers/crypto/uadk/version.map

diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 3dcc2ecd2e..11ab06d369 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -31,5 +31,6 @@ Crypto Device Drivers
     scheduler
     snow3g
     qat
+    uadk
     virtio
     zuc
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst
new file mode 100644
index 0000000000..778f95c1cb
--- /dev/null
+++ b/doc/guides/cryptodevs/uadk.rst
@@ -0,0 +1,80 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+    Copyright 2022-2023 Linaro ltd.
+
+UADK Crypto Poll Mode Driver
+=======================================================
+
+The UADK crypto PMD provides poll mode driver
+support for the following hardware accelerator devices:
+
+* ``HiSilicon Kunpeng920``
+* ``HiSilicon Kunpeng930``
+
+Features
+--------
+
+UADK crypto PMD has support for:
+
+Cipher algorithms:
+
+* ``RTE_CRYPTO_CIPHER_AES_ECB``
+* ``RTE_CRYPTO_CIPHER_AES_CBC``
+* ``RTE_CRYPTO_CIPHER_AES_XTS``
+* ``RTE_CRYPTO_CIPHER_DES_CBC``
+
+Hash algorithms:
+
+* ``RTE_CRYPTO_AUTH_MD5``
+* ``RTE_CRYPTO_AUTH_MD5_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA1``
+* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA224``
+* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA256``
+* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA384``
+* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
+* ``RTE_CRYPTO_AUTH_SHA512``
+* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
+
+Test steps
+-----------
+
+   .. code-block:: console
+
+	1. Build
+	cd dpdk
+	mkdir build
+	meson build        # add --reconfigure if the build dir already exists
+	cd build
+	ninja
+	sudo ninja install
+
+	2. Prepare
+	echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge_2mb
+	mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
+
+	3. Test with crypto PF
+	sudo dpdk-test --vdev=0000:76:00.0 (--log-level=6)
+	RTE>>cryptodev_uadk_autotest
+	RTE>>quit
+
+	4. Test with crypto VF
+	su root
+	echo 1 > /sys/devices/pci0000:74/0000:74:00.0/0000:76:00.0/sriov_numvfs
+	exit
+	sudo dpdk-test --vdev=0000:76:00.1
+	RTE>>cryptodev_uadk_autotest
+	RTE>>quit
+
+Dependency
+------------
+
+The UADK crypto PMD relies on the HiSilicon UADK library [1].
+
+[1] https://github.com/Linaro/uadk
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 59f02ea47c..ec4d96428c 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -19,6 +19,7 @@ drivers = [
         'octeontx2',
         'openssl',
         'scheduler',
+        'uadk',
         'virtio',
 ]
 
diff --git a/drivers/crypto/uadk/meson.build b/drivers/crypto/uadk/meson.build
new file mode 100644
index 0000000000..bf5f4018e5
--- /dev/null
+++ b/drivers/crypto/uadk/meson.build
@@ -0,0 +1,36 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+# Copyright 2022-2023 Linaro ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'uadk_crypto_pmd.c',
+)
+
+deps += ['bus_pci']
+dep = cc.find_library('libwd_crypto', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+    build = false
+    reason = 'missing dependency, "libwd_crypto"'
+else
+    ext_deps += dep
+endif
+
+dep = cc.find_library('libwd', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+    build = false
+    reason = 'missing dependency, "libwd"'
+else
+    ext_deps += dep
+endif
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c
new file mode 100644
index 0000000000..63d6a0087f
--- /dev/null
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -0,0 +1,1159 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+ * Copyright 2022-2023 Linaro ltd.
+ */
+
+#include <rte_bus_pci.h>
+#include <cryptodev_pmd.h>
+#include <uadk/wd_cipher.h>
+#include <uadk/wd_digest.h>
+#include <uadk/wd_sched.h>
+
+struct uadk_crypto_priv {
+	bool env_cipher_init;
+	bool env_auth_init;
+	bool env_aead_init;
+	struct uacce_dev *udev;
+} __rte_cache_aligned;
+
+/* Maximum length for digest (SHA-512 needs 64 bytes) */
+#define DIGEST_LENGTH_MAX 64
+
+struct uadk_qp {
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_cryptodev_stats qp_stats;
+	/**< Queue pair statistics */
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	/**< Unique Queue Pair Name */
+	uint8_t temp_digest[DIGEST_LENGTH_MAX];
+	/**< Buffer used to store the digest generated
+	 * by the driver when verifying a digest provided
+	 * by the user (using authentication verify operation)
+	 */
+} __rte_cache_aligned;
+
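+/*
+ * Supported xform chains: cipher only, auth only, cipher followed by auth,
+ * and auth followed by cipher. AEAD xforms are classified as COMBINED but
+ * are not processed yet.
+ */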
+enum uadk_chain_order {
+	UADK_CHAIN_ONLY_CIPHER,
+	UADK_CHAIN_ONLY_AUTH,
+	UADK_CHAIN_CIPHER_AUTH,
+	UADK_CHAIN_AUTH_CIPHER,
+	UADK_CHAIN_COMBINED,
+	UADK_CHAIN_NOT_SUPPORTED
+};
+
+struct uadk_crypto_session {
+	handle_t handle_cipher;
+	handle_t handle_digest;
+	enum uadk_chain_order chain_order;
+
+	struct {
+		uint16_t length;
+		uint16_t offset;
+	} iv;
+	/**< IV parameters */
+
+	/** Cipher Parameters */
+	struct {
+		enum rte_crypto_cipher_operation direction;
+		/**< cipher operation direction */
+		struct wd_cipher_req req;
+	} cipher;
+
+	/** Authentication Parameters */
+	struct {
+		struct wd_digest_req req;
+		enum rte_crypto_auth_operation operation;
+		/**< auth operation generate or verify */
+		uint16_t digest_length;
+		/**< digest length */
+	} auth;
+} __rte_cache_aligned;
+
+static uint8_t uadk_cryptodev_driver_id;
+
+RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO);
+
+#define UADK_LOG(level, fmt, ...)  \
+	rte_log(RTE_LOG_ ## level, uadk_crypto_logtype,  \
+			"%s() line %u: " fmt "\n", __func__, __LINE__,  \
+					## __VA_ARGS__)
+
+static const struct rte_cryptodev_capabilities uadk_crypto_920_capabilities[] = {
+	{	/* MD5 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* MD5 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_MD5,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA1 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA1 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA1,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 20,
+					.max = 20,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA224 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA224 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA224,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 28,
+					.max = 28,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA256 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA256 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA256,
+				.block_size = 64,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 32,
+					.max = 32,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA384 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA384 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA384,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 48,
+					.max = 48,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* SHA512 HMAC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+				.iv_size = { 0 }
+			}, }
+		}, }
+	},
+	{	/* SHA512 */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
+			{.auth = {
+				.algo = RTE_CRYPTO_AUTH_SHA512,
+				.block_size = 128,
+				.key_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				},
+				.digest_size = {
+					.min = 64,
+					.max = 64,
+					.increment = 0
+				},
+			}, }
+		}, }
+	},
+	{	/* AES ECB */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_ECB,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
+				.block_size = 16,
+				.key_size = {
+					.min = 16,
+					.max = 32,
+					.increment = 8
+				},
+				.iv_size = {
+					.min = 16,
+					.max = 16,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* AES XTS */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_AES_XTS,
+				.block_size = 1,
+				.key_size = {
+					.min = 32,
+					.max = 64,
+					.increment = 32
+				},
+				.iv_size = {
+					.min = 0,
+					.max = 0,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	{	/* DES CBC */
+		.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
+		{.sym = {
+			.xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
+			{.cipher = {
+				.algo = RTE_CRYPTO_CIPHER_DES_CBC,
+				.block_size = 8,
+				.key_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				},
+				.iv_size = {
+					.min = 8,
+					.max = 8,
+					.increment = 0
+				}
+			}, }
+		}, }
+	},
+	/* End of symmetric capabilities */
+	RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
+};
+
+/** Configure device */
+static int
+uadk_crypto_pmd_config(struct rte_cryptodev *dev __rte_unused,
+		       struct rte_cryptodev_config *config __rte_unused)
+{
+	return 0;
+}
+
+/** Start device */
+static int
+uadk_crypto_pmd_start(struct rte_cryptodev *dev __rte_unused)
+{
+	return 0;
+}
+
+/** Stop device */
+static void
+uadk_crypto_pmd_stop(struct rte_cryptodev *dev __rte_unused)
+{
+}
+
+/** Close device */
+static int
+uadk_crypto_pmd_close(struct rte_cryptodev *dev)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+
+	if (priv->env_cipher_init) {
+		wd_cipher_env_uninit();
+		priv->env_cipher_init = false;
+	}
+
+	if (priv->env_auth_init) {
+		wd_digest_env_uninit();
+		priv->env_auth_init = false;
+	}
+
+	return 0;
+}
+
+/** Get device statistics */
+static void
+uadk_crypto_pmd_stats_get(struct rte_cryptodev *dev,
+			  struct rte_cryptodev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+/** Reset device statistics */
+static void
+uadk_crypto_pmd_stats_reset(struct rte_cryptodev *dev)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	}
+}
+
+/** Get device info */
+static void
+uadk_crypto_pmd_info_get(struct rte_cryptodev *dev,
+			 struct rte_cryptodev_info *dev_info)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+
+	if (dev_info != NULL) {
+		dev_info->driver_id = dev->driver_id;
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->max_nb_queue_pairs = 128;
+		/* No limit of number of sessions */
+		dev_info->sym.max_nb_sessions = 0;
+		dev_info->feature_flags = dev->feature_flags;
+
+		if (priv->udev && !strcmp(priv->udev->api, "hisi_qm_v2"))
+			dev_info->capabilities = uadk_crypto_920_capabilities;
+	}
+}
+
+/** Release queue pair */
+static int
+uadk_crypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id)
+{
+	struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp) {
+		rte_ring_free(qp->processed_pkts);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+	return 0;
+}
+
+/** Set a unique name for the queue pair based on the dev_id and qp_id */
+static int
+uadk_pmd_qp_set_unique_name(struct rte_cryptodev *dev,
+			    struct uadk_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+				  "uadk_crypto_pmd_%u_qp_%u",
+				  dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+/** Create a ring to place process packets on */
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+				       unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r = qp->processed_pkts;
+
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			UADK_LOG(INFO, "Reusing existing ring %s for processed packets",
+				 qp->name);
+			return r;
+		}
+
+		UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			       RING_F_EXACT_SZ);
+}
+
+static int
+uadk_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+			 const struct rte_cryptodev_qp_conf *qp_conf,
+			 int socket_id)
+{
+	struct uadk_qp *qp;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		uadk_crypto_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (uadk_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp,
+			qp_conf->nb_descriptors, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	return 0;
+
+qp_setup_cleanup:
+	if (qp) {
+		rte_free(qp);
+		qp = NULL;
+	}
+	return -1;
+}
+
+static unsigned int
+uadk_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
+{
+	return sizeof(struct uadk_crypto_session);
+}
+
+static enum uadk_chain_order
+uadk_get_chain_order(const struct rte_crypto_sym_xform *xform)
+{
+	enum uadk_chain_order res = UADK_CHAIN_NOT_SUPPORTED;
+
+	if (xform != NULL) {
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) {
+			if (xform->next == NULL)
+				res = UADK_CHAIN_ONLY_AUTH;
+			else if (xform->next->type ==
+					RTE_CRYPTO_SYM_XFORM_CIPHER)
+				res = UADK_CHAIN_AUTH_CIPHER;
+		}
+
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) {
+			if (xform->next == NULL)
+				res = UADK_CHAIN_ONLY_CIPHER;
+			else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH)
+				res = UADK_CHAIN_CIPHER_AUTH;
+		}
+
+		if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
+			res = UADK_CHAIN_COMBINED;
+	}
+
+	return res;
+}
+
+static int
+uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
+				   struct uadk_crypto_session *sess,
+				   struct rte_crypto_sym_xform *xform)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+	struct rte_crypto_cipher_xform *cipher = &xform->cipher;
+	struct wd_cipher_sess_setup setup = {0};
+	struct sched_params params = {0};
+	int ret;
+
+	if (!priv->env_cipher_init) {
+		ret = wd_cipher_env_init(NULL);
+		if (ret < 0)
+			return -EINVAL;
+		priv->env_cipher_init = true;
+	}
+
+	sess->cipher.direction = cipher->op;
+	sess->iv.offset = cipher->iv.offset;
+	sess->iv.length = cipher->iv.length;
+
+	switch (cipher->algo) {
+	/* Cover supported cipher algorithms */
+	case RTE_CRYPTO_CIPHER_AES_CTR:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_CTR;
+		sess->cipher.req.out_bytes = 64;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_ECB:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_ECB;
+		sess->cipher.req.out_bytes = 16;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_CBC:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_CBC;
+		if (cipher->key.length == 16)
+			sess->cipher.req.out_bytes = 16;
+		else
+			sess->cipher.req.out_bytes = 64;
+		break;
+	case RTE_CRYPTO_CIPHER_AES_XTS:
+		setup.alg = WD_CIPHER_AES;
+		setup.mode = WD_CIPHER_XTS;
+		if (cipher->key.length == 16)
+			sess->cipher.req.out_bytes = 32;
+		else
+			sess->cipher.req.out_bytes = 512;
+		break;
+	default:
+		return -ENOTSUP;
+	}
+
+	params.numa_id = priv->udev->numa_id;
+	setup.sched_param = &params;
+	sess->handle_cipher = wd_cipher_alloc_sess(&setup);
+	if (!sess->handle_cipher) {
+		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		return -EINVAL;
+	}
+
+	ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length);
+	if (ret) {
+		wd_cipher_free_sess(sess->handle_cipher);
+		UADK_LOG(ERR, "uadk failed to set key!\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+/* Set session auth parameters */
+static int
+uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
+				 struct uadk_crypto_session *sess,
+				 struct rte_crypto_sym_xform *xform)
+{
+	struct uadk_crypto_priv *priv = dev->data->dev_private;
+	struct wd_digest_sess_setup setup = {0};
+	struct sched_params params = {0};
+	int ret;
+
+	if (!priv->env_auth_init) {
+		ret = wd_digest_env_init(NULL);
+		if (ret < 0)
+			return -EINVAL;
+		priv->env_auth_init = true;
+	}
+
+	sess->auth.operation = xform->auth.op;
+	sess->auth.digest_length = xform->auth.digest_length;
+
+	switch (xform->auth.algo) {
+	case RTE_CRYPTO_AUTH_MD5:
+	case RTE_CRYPTO_AUTH_MD5_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_MD5) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_MD5;
+		sess->auth.req.out_buf_bytes = 16;
+		sess->auth.req.out_bytes = 16;
+		break;
+	case RTE_CRYPTO_AUTH_SHA1:
+	case RTE_CRYPTO_AUTH_SHA1_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA1;
+		sess->auth.req.out_buf_bytes = 20;
+		sess->auth.req.out_bytes = 20;
+		break;
+	case RTE_CRYPTO_AUTH_SHA224:
+	case RTE_CRYPTO_AUTH_SHA224_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA224) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA224;
+		sess->auth.req.out_buf_bytes = 28;
+		sess->auth.req.out_bytes = 28;
+		break;
+	case RTE_CRYPTO_AUTH_SHA256:
+	case RTE_CRYPTO_AUTH_SHA256_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA256;
+		sess->auth.req.out_buf_bytes = 32;
+		sess->auth.req.out_bytes = 32;
+		break;
+	case RTE_CRYPTO_AUTH_SHA384:
+	case RTE_CRYPTO_AUTH_SHA384_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA384) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA384;
+		sess->auth.req.out_buf_bytes = 48;
+		sess->auth.req.out_bytes = 48;
+		break;
+	case RTE_CRYPTO_AUTH_SHA512:
+	case RTE_CRYPTO_AUTH_SHA512_HMAC:
+		setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA512) ?
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA512;
+		sess->auth.req.out_buf_bytes = 64;
+		sess->auth.req.out_bytes = 64;
+		break;
+	default:
+		return -ENOTSUP;
+	}
+
+	params.numa_id = priv->udev->numa_id;
+	setup.sched_param = &params;
+	sess->handle_digest = wd_digest_alloc_sess(&setup);
+	if (!sess->handle_digest) {
+		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		return -EINVAL;
+	}
+
+	/* if mode is HMAC, should set key */
+	if (setup.mode == WD_DIGEST_HMAC) {
+		ret = wd_digest_set_key(sess->handle_digest,
+					xform->auth.key.data,
+					xform->auth.key.length);
+		if (ret) {
+			UADK_LOG(ERR, "uadk failed to alloc session!\n");
+			wd_digest_free_sess(sess->handle_digest);
+			sess->handle_digest = 0;
+			return -EINVAL;
+		}
+	}
+	return 0;
+}
+
+static int
+uadk_crypto_sym_session_configure(struct rte_cryptodev *dev,
+				  struct rte_crypto_sym_xform *xform,
+				  struct rte_cryptodev_sym_session *session,
+				  struct rte_mempool *mp)
+{
+	struct rte_crypto_sym_xform *cipher_xform = NULL;
+	struct rte_crypto_sym_xform *auth_xform = NULL;
+	struct rte_crypto_sym_xform *aead_xform __rte_unused = NULL;
+	struct uadk_crypto_session *sess;
+	int ret;
+
+	ret = rte_mempool_get(mp, (void *)&sess);
+	if (ret != 0) {
+		UADK_LOG(ERR, "Failed to get session %p private data from mempool",
+			 sess);
+		return -ENOMEM;
+	}
+
+	sess->chain_order = uadk_get_chain_order(xform);
+	switch (sess->chain_order) {
+	case UADK_CHAIN_ONLY_CIPHER:
+		cipher_xform = xform;
+		break;
+	case UADK_CHAIN_ONLY_AUTH:
+		auth_xform = xform;
+		break;
+	case UADK_CHAIN_CIPHER_AUTH:
+		cipher_xform = xform;
+		auth_xform = xform->next;
+		break;
+	case UADK_CHAIN_AUTH_CIPHER:
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
+	case UADK_CHAIN_COMBINED:
+		aead_xform = xform;
+		break;
+	default:
+		ret = -ENOTSUP;
+		goto err;
+	}
+
+	if (cipher_xform) {
+		ret = uadk_set_session_cipher_parameters(dev, sess, cipher_xform);
+		if (ret != 0) {
+			UADK_LOG(ERR,
+				"Invalid/unsupported cipher parameters");
+			goto err;
+		}
+	}
+
+	if (auth_xform) {
+		ret = uadk_set_session_auth_parameters(dev, sess, auth_xform);
+		if (ret != 0)
+			goto err;
+	}
+
+	set_sym_session_private_data(session, dev->driver_id, sess);
+	return 0;
+err:
+	rte_mempool_put(mp, sess);
+	return ret;
+}
+
+static void
+uadk_crypto_sym_session_clear(struct rte_cryptodev *dev,
+			      struct rte_cryptodev_sym_session *sess)
+{
+	struct uadk_crypto_session *priv_sess =
+			get_sym_session_private_data(sess, dev->driver_id);
+
+	if (unlikely(priv_sess == NULL)) {
+		UADK_LOG(ERR, "Failed to get session %p private data.", priv_sess);
+		return;
+	}
+
+	if (priv_sess->handle_cipher) {
+		wd_cipher_free_sess(priv_sess->handle_cipher);
+		priv_sess->handle_cipher = 0;
+	}
+
+	if (priv_sess->handle_digest) {
+		wd_digest_free_sess(priv_sess->handle_digest);
+		priv_sess->handle_digest = 0;
+	}
+
+	set_sym_session_private_data(sess, dev->driver_id, NULL);
+	rte_mempool_put(rte_mempool_from_obj(priv_sess), priv_sess);
+}
+
+static struct rte_cryptodev_ops uadk_crypto_pmd_ops = {
+		.dev_configure		= uadk_crypto_pmd_config,
+		.dev_start		= uadk_crypto_pmd_start,
+		.dev_stop		= uadk_crypto_pmd_stop,
+		.dev_close		= uadk_crypto_pmd_close,
+		.stats_get		= uadk_crypto_pmd_stats_get,
+		.stats_reset		= uadk_crypto_pmd_stats_reset,
+		.dev_infos_get		= uadk_crypto_pmd_info_get,
+		.queue_pair_setup	= uadk_crypto_pmd_qp_setup,
+		.queue_pair_release	= uadk_crypto_pmd_qp_release,
+		.sym_session_get_size	= uadk_crypto_sym_session_get_size,
+		.sym_session_configure	= uadk_crypto_sym_session_configure,
+		.sym_session_clear	= uadk_crypto_sym_session_clear
+};
+
+static void
+uadk_process_cipher_op(struct rte_crypto_op *op,
+		       struct uadk_crypto_session *sess,
+		       struct rte_mbuf *msrc, struct rte_mbuf *mdst)
+{
+	uint32_t off = op->sym->cipher.data.offset;
+	int ret;
+
+	if (!sess) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		return;
+	}
+
+	sess->cipher.req.src = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off);
+	sess->cipher.req.in_bytes = op->sym->cipher.data.length;
+	sess->cipher.req.dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *, off);
+	sess->cipher.req.out_buf_bytes = sess->cipher.req.in_bytes;
+	sess->cipher.req.iv_bytes = sess->iv.length;
+	sess->cipher.req.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+							sess->iv.offset);
+	if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
+		sess->cipher.req.op_type = WD_CIPHER_ENCRYPTION;
+	else
+		sess->cipher.req.op_type = WD_CIPHER_DECRYPTION;
+
+	do {
+		ret = wd_do_cipher_sync(sess->handle_cipher, &sess->cipher.req);
+	} while (ret == -WD_EBUSY);
+
+	if (ret || sess->cipher.req.out_buf_bytes > sess->cipher.req.in_bytes)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
+static void
+uadk_process_auth_op(struct uadk_qp *qp, struct rte_crypto_op *op,
+		     struct uadk_crypto_session *sess,
+		     struct rte_mbuf *msrc, struct rte_mbuf *mdst)
+{
+	uint32_t srclen = op->sym->auth.data.length;
+	uint32_t off = op->sym->auth.data.offset;
+	uint8_t *dst = qp->temp_digest;
+	int ret;
+
+	if (!sess) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		return;
+	}
+
+	sess->auth.req.in = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off);
+	sess->auth.req.in_bytes = srclen;
+	sess->auth.req.out = dst;
+
+	do {
+		ret = wd_do_digest_sync(sess->handle_digest, &sess->auth.req);
+	} while (ret == -WD_EBUSY);
+
+	if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+		if (memcmp(dst, op->sym->auth.digest.data,
+				sess->auth.digest_length) != 0) {
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		}
+	} else {
+		uint8_t *auth_dst;
+
+		auth_dst = op->sym->auth.digest.data;
+		if (auth_dst == NULL)
+			auth_dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+					op->sym->auth.data.offset +
+					op->sym->auth.data.length);
+		memcpy(auth_dst, dst, sess->auth.digest_length);
+	}
+
+	if (ret)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
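+/*
+ * As in the compress PMD, ops are processed synchronously during enqueue;
+ * results are buffered on the queue pair's ring for a later dequeue call.
+ */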
+static uint16_t
+uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			  uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	struct uadk_crypto_session *sess = NULL;
+	struct rte_mbuf *msrc, *mdst;
+	struct rte_crypto_op *op;
+	uint16_t enqd = 0;
+	int i, ret;
+
+	for (i = 0; i < nb_ops; i++) {
+		op = ops[i];
+		op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
+		msrc = op->sym->m_src;
+		mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src;
+
+		if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+			if (likely(op->sym->session != NULL))
+				sess = (struct uadk_crypto_session *)
+					get_sym_session_private_data(
+						op->sym->session,
+						uadk_cryptodev_driver_id);
+		}
+
+		if (unlikely(sess == NULL)) {
+			/* sessionless ops are not supported yet */
+			op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
+			qp->qp_stats.enqueue_err_count++;
+			continue;
+		}
+
+		switch (sess->chain_order) {
+		case UADK_CHAIN_ONLY_CIPHER:
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			break;
+		case UADK_CHAIN_ONLY_AUTH:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			break;
+		case UADK_CHAIN_CIPHER_AUTH:
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			uadk_process_auth_op(qp, op, sess, mdst, mdst);
+			break;
+		case UADK_CHAIN_AUTH_CIPHER:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			break;
+		default:
+			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+			break;
+		}
+
+		if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED)
+			op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+
+		if (op->status != RTE_CRYPTO_OP_STATUS_ERROR) {
+			ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+			if (ret < 0)
+				goto enqueue_err;
+			qp->qp_stats.enqueued_count++;
+			enqd++;
+		} else {
+			/* increment error count if the op could not be enqueued */
+			qp->qp_stats.enqueue_err_count++;
+		}
+	}
+	return enqd;
+
+enqueue_err:
+	qp->qp_stats.enqueue_err_count++;
+	return enqd;
+}
+
+static uint16_t
+uadk_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
+			  uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	unsigned int nb_dequeued;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops, NULL);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int
+uadk_crypto_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			struct rte_pci_device *pci_dev)
+{
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct rte_cryptodev *dev;
+	struct rte_cryptodev_pmd_init_params init_params = {
+		.name = "",
+		.private_data_size = sizeof(struct uadk_crypto_priv),
+		.max_nb_queue_pairs =
+				RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS,
+	};
+	struct uadk_crypto_priv *priv;
+	struct uacce_dev *udev;
+
+	udev = wd_get_accel_dev("cipher");
+	if (!udev)
+		return -ENODEV;
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+	dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
+	if (dev == NULL) {
+		UADK_LOG(ERR, "driver %s: create failed", init_params.name);
+		return -ENODEV;
+	}
+
+	dev->dev_ops = &uadk_crypto_pmd_ops;
+	dev->driver_id = uadk_cryptodev_driver_id;
+	dev->dequeue_burst = uadk_crypto_dequeue_burst;
+	dev->enqueue_burst = uadk_crypto_enqueue_burst;
+	dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED |
+			     RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
+			     RTE_CRYPTODEV_FF_SYM_SESSIONLESS;
+	priv = dev->data->dev_private;
+	priv->udev = udev;
+
+	rte_cryptodev_pmd_probing_finish(dev);
+	return 0;
+}
+
+static int
+uadk_crypto_pci_remove(struct rte_pci_device *pci_dev)
+{
+	char name[RTE_CRYPTODEV_NAME_MAX_LEN];
+	struct uadk_crypto_priv *priv;
+	struct rte_cryptodev *dev;
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+
+	dev = rte_cryptodev_pmd_get_named_dev(name);
+	if (dev == NULL)
+		return -ENODEV;
+
+	priv = dev->data->dev_private;
+	free(priv->udev);
+
+	return rte_cryptodev_pmd_destroy(dev);
+}
+
+#define PCI_VENDOR_ID_HUAWEI            0x19e5
+#define PCI_DEVICE_ID_SEC_PF            0xa255
+#define PCI_DEVICE_ID_SEC_VF            0xa256
+
+static struct rte_pci_id pci_id_uadk_crypto_table[] = {
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_SEC_PF),
+	},
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_SEC_VF),
+	},
+	{
+		.device_id = 0
+	},
+};
+
+/**
+ * Structure that represents a PCI driver
+ */
+static struct rte_pci_driver uadk_crypto_pmd = {
+	.id_table    = pci_id_uadk_crypto_table,
+	.probe       = uadk_crypto_pci_probe,
+	.remove      = uadk_crypto_pci_remove,
+};
+
+#define UADK_CRYPTO_DRIVER_NAME crypto_uadk
+static struct cryptodev_driver uadk_crypto_drv;
+
+RTE_PMD_REGISTER_PCI(UADK_CRYPTO_DRIVER_NAME, uadk_crypto_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(UADK_CRYPTO_DRIVER_NAME, pci_id_uadk_crypto_table);
+RTE_PMD_REGISTER_CRYPTO_DRIVER(uadk_crypto_drv, uadk_crypto_pmd.driver,
+			       uadk_cryptodev_driver_id);
diff --git a/drivers/crypto/uadk/version.map b/drivers/crypto/uadk/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/crypto/uadk/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

* [PATCH RFC 1/3] compress: add UADK compression PMD
  2022-05-20 11:31 [PATCH RFC 0/3] Add UADK compression and " Zhangfei Gao
@ 2022-05-20 11:31 ` Zhangfei Gao
  0 siblings, 0 replies; 5+ messages in thread
From: Zhangfei Gao @ 2022-05-20 11:31 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella
  Cc: dev, Zhangfei Gao

Add compression & decompression PMD for HiSilicon Kunpeng930.
The UADK compression PMD relies on the UADK API.

Test:
sudo dpdk-test --vdev=0000:75:00.0
RTE>>compressdev_autotest
RTE>>quit

Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
---
 doc/guides/compressdevs/index.rst         |   1 +
 doc/guides/compressdevs/uadk.rst          |  73 ++++
 drivers/compress/meson.build              |   1 +
 drivers/compress/uadk/meson.build         |  28 ++
 drivers/compress/uadk/uadk_compress_pmd.c | 500 ++++++++++++++++++++++
 drivers/compress/uadk/version.map         |   3 +
 6 files changed, 606 insertions(+)
 create mode 100644 doc/guides/compressdevs/uadk.rst
 create mode 100644 drivers/compress/uadk/meson.build
 create mode 100644 drivers/compress/uadk/uadk_compress_pmd.c
 create mode 100644 drivers/compress/uadk/version.map

diff --git a/doc/guides/compressdevs/index.rst b/doc/guides/compressdevs/index.rst
index 54a3ef4273..e47a9ab9cf 100644
--- a/doc/guides/compressdevs/index.rst
+++ b/doc/guides/compressdevs/index.rst
@@ -14,4 +14,5 @@ Compression Device Drivers
     mlx5
     octeontx
     qat_comp
+    uadk
     zlib
diff --git a/doc/guides/compressdevs/uadk.rst b/doc/guides/compressdevs/uadk.rst
new file mode 100644
index 0000000000..08eb636da2
--- /dev/null
+++ b/doc/guides/compressdevs/uadk.rst
@@ -0,0 +1,73 @@
+..  SPDX-License-Identifier: BSD-3-Clause
+    Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+    Copyright 2022-2023 Linaro ltd.
+
+UADK Compression Poll Mode Driver
+=================================
+
+UADK compression PMD provides poll mode compression & decompression driver
+support for the following hardware accelerator devices:
+
+* ``HiSilicon Kunpeng930``
+
+Features
+--------
+
+UADK compression PMD has support for:
+
+Compression/Decompression algorithm:
+
+    * DEFLATE - using Fixed and Dynamic Huffman encoding
+
+Window size support:
+
+    * 32K
+
+Checksum generation:
+
+    * CRC32, Adler and combined checksum
+
+Stateful operation:
+
+    * Decompression only
+
+Test steps
+-----------
+
+   .. code-block:: console
+
+	1. Build
+	cd dpdk
+	mkdir build
+	meson build (--reconfigure)
+	cd build
+	ninja
+	sudo ninja install
+
+	2. Prepare
+	echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages
+	echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages
+	mkdir -p /mnt/huge_2mb
+	mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB
+
+	3. Test with zip pf
+	sudo dpdk-test --vdev=0000:75:00.0
+	RTE>>compressdev_autotest
+	RTE>>quit
+
+	4. Test with zip vf
+	su root
+	echo 1 > /sys/devices/pci0000:74/0000:74:00.0/0000:75:00.0/sriov_numvfs
+	exit
+	sudo dpdk-test --vdev=0000:75:00.1
+	RTE>>compressdev_autotest
+	RTE>>quit
+
+Dependency
+------------
+
+UADK compression PMD relies on the HiSilicon UADK library [1].
+
+[1] https://github.com/Linaro/uadk
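+
+Sample usage
+------------
+
+The sketch below (illustrative only, error handling omitted) shows how an
+application can drive the PMD through the generic compressdev API.
+``dev_id``, ``op_pool``, ``src_mbuf``/``dst_mbuf`` and ``NB_MAX_INFLIGHT``
+are placeholders the application is assumed to provide.
+
+   .. code-block:: c
+
+	struct rte_compressdev_config cfg = {
+		.socket_id = rte_socket_id(),
+		.nb_queue_pairs = 1,
+		.max_nb_priv_xforms = 1,
+		.max_nb_streams = 0,
+	};
+	struct rte_comp_xform xform = {
+		.type = RTE_COMP_COMPRESS,
+		.compress = {
+			.algo = RTE_COMP_ALGO_DEFLATE,
+			.deflate.huffman = RTE_COMP_HUFFMAN_DEFAULT,
+			.level = RTE_COMP_LEVEL_PMD_DEFAULT,
+			.chksum = RTE_COMP_CHECKSUM_NONE,
+			.window_size = 15,
+		},
+	};
+	void *priv_xform;
+	struct rte_comp_op *op;
+
+	rte_compressdev_configure(dev_id, &cfg);
+	rte_compressdev_queue_pair_setup(dev_id, 0, NB_MAX_INFLIGHT,
+					 rte_socket_id());
+	rte_compressdev_start(dev_id);
+	rte_compressdev_private_xform_create(dev_id, &xform, &priv_xform);
+
+	/* op_pool from rte_comp_op_pool_create(); dst_mbuf data_len must
+	 * be sized for the worst-case compressed output.
+	 */
+	op = rte_comp_op_alloc(op_pool);
+	op->m_src = src_mbuf;
+	op->m_dst = dst_mbuf;
+	op->src.offset = 0;
+	op->src.length = rte_pktmbuf_data_len(src_mbuf);
+	op->dst.offset = 0;
+	op->op_type = RTE_COMP_OP_STATELESS;
+	op->flush_flag = RTE_COMP_FLUSH_FINAL;
+	op->private_xform = priv_xform;
+
+	while (rte_compressdev_enqueue_burst(dev_id, 0, &op, 1) == 0)
+		;
+	while (rte_compressdev_dequeue_burst(dev_id, 0, &op, 1) == 0)
+		;
+	/* op->status and op->produced now describe the result */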
diff --git a/drivers/compress/meson.build b/drivers/compress/meson.build
index abe043ab94..041a45ba41 100644
--- a/drivers/compress/meson.build
+++ b/drivers/compress/meson.build
@@ -10,6 +10,7 @@ drivers = [
         'mlx5',
         'octeontx',
+        'uadk',
        'zlib',
 ]
 
 std_deps = ['compressdev'] # compressdev pulls in all other needed deps
diff --git a/drivers/compress/uadk/meson.build b/drivers/compress/uadk/meson.build
new file mode 100644
index 0000000000..347ef9757d
--- /dev/null
+++ b/drivers/compress/uadk/meson.build
@@ -0,0 +1,28 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+# Copyright 2022-2023 Linaro ltd.
+
+if not is_linux
+    build = false
+    reason = 'only supported on Linux'
+    subdir_done()
+endif
+
+if arch_subdir != 'arm' or not dpdk_conf.get('RTE_ARCH_64')
+    build = false
+    reason = 'only supported on aarch64'
+    subdir_done()
+endif
+
+sources = files(
+        'uadk_compress_pmd.c',
+)
+
+deps += ['bus_pci']
+dep = cc.find_library('libwd_comp', dirs: ['/usr/local/lib'], required: false)
+if not dep.found()
+    build = false
+    reason = 'missing dependency, "libwd_comp"'
+else
+    ext_deps += dep
+endif
diff --git a/drivers/compress/uadk/uadk_compress_pmd.c b/drivers/compress/uadk/uadk_compress_pmd.c
new file mode 100644
index 0000000000..16a0593f84
--- /dev/null
+++ b/drivers/compress/uadk/uadk_compress_pmd.c
@@ -0,0 +1,500 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved.
+ * Copyright 2022-2023 Linaro ltd.
+ */
+
+#include <rte_bus_pci.h>
+#include <rte_compressdev_pmd.h>
+#include <rte_malloc.h>
+#include <uadk/wd_comp.h>
+#include <uadk/wd_sched.h>
+
+struct uadk_compress_priv {
+	struct rte_mempool *mp;
+} __rte_cache_aligned;
+
+struct uadk_qp {
+	struct rte_ring *processed_pkts;
+	/**< Ring for placing process packets */
+	struct rte_compressdev_stats qp_stats;
+	/**< Queue pair statistics */
+	uint16_t id;
+	/**< Queue Pair Identifier */
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	/**< Unique Queue Pair Name */
+} __rte_cache_aligned;
+
+struct uadk_stream {
+	handle_t handle;
+	enum rte_comp_xform_type type;
+} __rte_cache_aligned;
+
+RTE_LOG_REGISTER_DEFAULT(uadk_compress_logtype, INFO);
+
+#define UADK_LOG(level, fmt, ...)  \
+	rte_log(RTE_LOG_ ## level, uadk_compress_logtype,  \
+			"%s() line %u: " fmt "\n", __func__, __LINE__,  \
+					## __VA_ARGS__)
+
+#define UADK_COMPRESS_DRIVER_NAME compress_uadk
+
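+/*
+ * dev_configure brings up the UADK wd_comp environment and creates the
+ * mempool that backs private xform objects (one object per xform/stream).
+ */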
+static int
+uadk_compress_pmd_config(struct rte_compressdev *dev,
+			 struct rte_compressdev_config *config)
+{
+	char mp_name[RTE_MEMPOOL_NAMESIZE];
+	struct uadk_compress_priv *priv;
+	struct rte_mempool *mp;
+	int ret;
+
+	if (dev == NULL || config == NULL)
+		return -EINVAL;
+
+	snprintf(mp_name, RTE_MEMPOOL_NAMESIZE,
+		 "stream_mp_%u", dev->data->dev_id);
+	priv = dev->data->dev_private;
+
+	/* alloc resources */
+	ret = wd_comp_env_init(NULL);
+	if (ret < 0)
+		return -EINVAL;
+
+	mp = priv->mp;
+	if (mp == NULL) {
+		mp = rte_mempool_create(mp_name,
+				config->max_nb_priv_xforms +
+				config->max_nb_streams,
+				sizeof(struct uadk_stream),
+				0, 0, NULL, NULL, NULL,
+				NULL, config->socket_id,
+				0);
+		if (mp == NULL) {
+			UADK_LOG(ERR, "Cannot create private xform pool on socket %d",
+				 config->socket_id);
+			ret = -ENOMEM;
+			goto err_mempool;
+		}
+		priv->mp = mp;
+	}
+	return 0;
+err_mempool:
+	wd_comp_env_uninit();
+	return ret;
+}
+
+static int
+uadk_compress_pmd_start(struct rte_compressdev *dev __rte_unused)
+{
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stop(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static int
+uadk_compress_pmd_close(struct rte_compressdev *dev)
+{
+	struct uadk_compress_priv *priv =
+		(struct uadk_compress_priv *)dev->data->dev_private;
+
+	/* free resources */
+	rte_mempool_free(priv->mp);
+	priv->mp = NULL;
+	wd_comp_env_uninit();
+
+	return 0;
+}
+
+static void
+uadk_compress_pmd_stats_get(struct rte_compressdev *dev,
+			    struct rte_compressdev_stats *stats)
+{
+	int qp_id;
+
+	for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) {
+		struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+		stats->enqueued_count += qp->qp_stats.enqueued_count;
+		stats->dequeued_count += qp->qp_stats.dequeued_count;
+		stats->enqueue_err_count += qp->qp_stats.enqueue_err_count;
+		stats->dequeue_err_count += qp->qp_stats.dequeue_err_count;
+	}
+}
+
+static void
+uadk_compress_pmd_stats_reset(struct rte_compressdev *dev __rte_unused)
+{
+}
+
+static const struct
+rte_compressdev_capabilities uadk_compress_pmd_capabilities[] = {
+	{   /* Deflate */
+		.algo = RTE_COMP_ALGO_DEFLATE,
+		.comp_feature_flags = RTE_COMP_FF_SHAREABLE_PRIV_XFORM |
+				      RTE_COMP_FF_HUFFMAN_FIXED |
+				      RTE_COMP_FF_HUFFMAN_DYNAMIC,
+	},
+
+	RTE_COMP_END_OF_CAPABILITIES_LIST()
+};
+
+static void
+uadk_compress_pmd_info_get(struct rte_compressdev *dev,
+			   struct rte_compressdev_info *dev_info)
+{
+	if (dev_info != NULL) {
+		dev_info->driver_name = dev->device->driver->name;
+		dev_info->feature_flags = dev->feature_flags;
+		dev_info->capabilities = uadk_compress_pmd_capabilities;
+	}
+}
+
+static int
+uadk_compress_pmd_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
+{
+	struct uadk_qp *qp = dev->data->queue_pairs[qp_id];
+
+	if (qp != NULL) {
+		rte_ring_free(qp->processed_pkts);
+		rte_free(qp);
+		dev->data->queue_pairs[qp_id] = NULL;
+	}
+
+	return 0;
+}
+
+static int
+uadk_pmd_qp_set_unique_name(struct rte_compressdev *dev,
+			    struct uadk_qp *qp)
+{
+	unsigned int n = snprintf(qp->name, sizeof(qp->name),
+				 "uadk_pmd_%u_qp_%u",
+				 dev->data->dev_id, qp->id);
+
+	if (n >= sizeof(qp->name))
+		return -1;
+
+	return 0;
+}
+
+static struct rte_ring *
+uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp,
+				       unsigned int ring_size, int socket_id)
+{
+	struct rte_ring *r = qp->processed_pkts;
+
+	if (r) {
+		if (rte_ring_get_size(r) >= ring_size) {
+			UADK_LOG(INFO, "Reusing existing ring %s for processed packets",
+				 qp->name);
+			return r;
+		}
+
+		UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets",
+			 qp->name);
+		return NULL;
+	}
+
+	return rte_ring_create(qp->name, ring_size, socket_id,
+			       RING_F_EXACT_SZ);
+}
+
+static int
+uadk_compress_pmd_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
+			   uint32_t max_inflight_ops, int socket_id)
+{
+	struct uadk_qp *qp = NULL;
+
+	/* Free memory prior to re-allocation if needed. */
+	if (dev->data->queue_pairs[qp_id] != NULL)
+		uadk_compress_pmd_qp_release(dev, qp_id);
+
+	/* Allocate the queue pair data structure. */
+	qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp),
+				RTE_CACHE_LINE_SIZE, socket_id);
+	if (qp == NULL)
+		return (-ENOMEM);
+
+	qp->id = qp_id;
+	dev->data->queue_pairs[qp_id] = qp;
+
+	if (uadk_pmd_qp_set_unique_name(dev, qp))
+		goto qp_setup_cleanup;
+
+	qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp,
+						max_inflight_ops, socket_id);
+	if (qp->processed_pkts == NULL)
+		goto qp_setup_cleanup;
+
+	memset(&qp->qp_stats, 0, sizeof(qp->qp_stats));
+	return 0;
+
+qp_setup_cleanup:
+	dev->data->queue_pairs[qp_id] = NULL;
+	rte_free(qp);
+	return -1;
+}
+
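+/*
+ * A private xform wraps one UADK session handle; the backing objects come
+ * from the mempool created at configure time, so creation is bounded by
+ * max_nb_priv_xforms + max_nb_streams.
+ */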
+static int
+uadk_compress_pmd_xform_create(struct rte_compressdev *dev,
+			       const struct rte_comp_xform *xform,
+			       void **private_xform)
+{
+	struct uadk_compress_priv *priv = dev->data->dev_private;
+	struct wd_comp_sess_setup setup = {0};
+	struct sched_params param = {0};
+	struct uadk_stream *stream;
+	handle_t handle;
+
+	if (xform == NULL) {
+		UADK_LOG(ERR, "invalid xform struct");
+		return -EINVAL;
+	}
+
+	if (rte_mempool_get(priv->mp, private_xform)) {
+		UADK_LOG(ERR, "Couldn't get object from session mempool");
+		return -ENOMEM;
+	}
+
+	stream = *((struct uadk_stream **)private_xform);
+
+	switch (xform->type) {
+	case RTE_COMP_COMPRESS:
+		switch (xform->compress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.win_sz = WD_COMP_WS_8K;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_COMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = 0;
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	case RTE_COMP_DECOMPRESS:
+		switch (xform->decompress.algo) {
+		case RTE_COMP_ALGO_NULL:
+			break;
+		case RTE_COMP_ALGO_DEFLATE:
+			setup.alg_type = WD_ZLIB;
+			setup.comp_lv = WD_COMP_L8;
+			setup.op_type = WD_DIR_DECOMPRESS;
+			param.type = setup.op_type;
+			param.numa_id = 0;
+			setup.sched_param = &param;
+			break;
+		default:
+			goto err;
+		}
+		break;
+	default:
+		UADK_LOG(ERR, "Algorithm %u is not supported.", xform->type);
+		goto err;
+	}
+
+	handle = wd_comp_alloc_sess(&setup);
+	if (!handle)
+		goto err;
+	stream->handle = handle;
+	stream->type = xform->type;
+	return 0;
+err:
+	rte_mempool_put(priv->mp, private_xform);
+	return -EINVAL;
+}
+
+static int
+uadk_compress_pmd_xform_free(struct rte_compressdev *dev __rte_unused, void *private_xform)
+{
+	struct uadk_stream *stream = (struct uadk_stream *)private_xform;
+	struct rte_mempool *mp;
+
+	if (!stream)
+		return -EINVAL;
+
+	wd_comp_free_sess(stream->handle);
+	memset(stream, 0, sizeof(struct uadk_stream));
+	mp = rte_mempool_from_obj(stream);
+	rte_mempool_put(mp, stream);
+	return 0;
+}
+
+static struct rte_compressdev_ops uadk_compress_pmd_ops = {
+		.dev_configure		= uadk_compress_pmd_config,
+		.dev_start		= uadk_compress_pmd_start,
+		.dev_stop		= uadk_compress_pmd_stop,
+		.dev_close		= uadk_compress_pmd_close,
+		.stats_get		= uadk_compress_pmd_stats_get,
+		.stats_reset		= uadk_compress_pmd_stats_reset,
+		.dev_infos_get		= uadk_compress_pmd_info_get,
+		.queue_pair_setup	= uadk_compress_pmd_qp_setup,
+		.queue_pair_release	= uadk_compress_pmd_qp_release,
+		.private_xform_create	= uadk_compress_pmd_xform_create,
+		.private_xform_free	= uadk_compress_pmd_xform_free,
+		.stream_create		= NULL,
+		.stream_free		= NULL
+};
+
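+/*
+ * Stateless ops are compressed/decompressed synchronously here via
+ * wd_do_comp_sync() and then parked, with their status, in the
+ * completion ring for the dequeue path to pick up.
+ */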
+static uint16_t
+uadk_compress_pmd_enqueue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops, uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	struct uadk_stream *stream;
+	struct rte_comp_op *op;
+	uint16_t enqd = 0;
+	int i, ret = 0;
+
+	for (i = 0; i < nb_ops; i++) {
+		op = ops[i];
+		ret = 0;
+
+		if (op->op_type == RTE_COMP_OP_STATEFUL) {
+			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+		} else {
+			/* process stateless ops */
+			stream = (struct uadk_stream *)op->private_xform;
+			if (stream) {
+				struct wd_comp_req req = {0};
+				uint16_t dst_len = rte_pktmbuf_data_len(op->m_dst);
+
+				req.src = rte_pktmbuf_mtod(op->m_src, uint8_t *);
+				req.src_len = op->src.length;
+				req.dst = rte_pktmbuf_mtod(op->m_dst, uint8_t *);
+				req.dst_len = dst_len;
+				req.op_type = stream->type;
+				req.cb = NULL;
+				req.data_fmt = WD_FLAT_BUF;
+				do {
+					ret = wd_do_comp_sync(stream->handle, &req);
+				} while (ret == -WD_EBUSY);
+
+				if (ret) {
+					op->status = RTE_COMP_OP_STATUS_ERROR;
+				} else {
+					op->consumed += req.src_len;
+
+					if (req.dst_len <= dst_len) {
+						op->produced += req.dst_len;
+						op->status = RTE_COMP_OP_STATUS_SUCCESS;
+					} else {
+						op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
+					}
+				}
+			} else {
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+			}
+		}
+
+		/* Hand the op, whatever its status, back through the
+		 * completion ring unless the hardware call itself failed.
+		 */
+		if (!ret)
+			ret = rte_ring_enqueue(qp->processed_pkts, (void *)op);
+
+		if (unlikely(ret)) {
+			/* increment error count if the op could not be enqueued */
+			qp->qp_stats.enqueue_err_count++;
+		} else {
+			qp->qp_stats.enqueued_count++;
+			enqd++;
+		}
+	}
+	return enqd;
+}
+
+static uint16_t
+uadk_compress_pmd_dequeue_burst_sync(void *queue_pair,
+				     struct rte_comp_op **ops,
+				     uint16_t nb_ops)
+{
+	struct uadk_qp *qp = queue_pair;
+	unsigned int nb_dequeued = 0;
+
+	nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts,
+			(void **)ops, nb_ops, NULL);
+	qp->qp_stats.dequeued_count += nb_dequeued;
+
+	return nb_dequeued;
+}
+
+static int
+uadk_compress_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+			struct rte_pci_device *pci_dev)
+{
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+	struct rte_compressdev *compressdev;
+	struct rte_compressdev_pmd_init_params init_params = {
+		"",
+		rte_socket_id(),
+	};
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+	compressdev = rte_compressdev_pmd_create(name, &pci_dev->device,
+			sizeof(struct uadk_compress_priv), &init_params);
+	if (compressdev == NULL) {
+		UADK_LOG(ERR, "driver %s: create failed", name);
+		return -ENODEV;
+	}
+
+	compressdev->dev_ops = &uadk_compress_pmd_ops;
+	compressdev->dequeue_burst = uadk_compress_pmd_dequeue_burst_sync;
+	compressdev->enqueue_burst = uadk_compress_pmd_enqueue_burst_sync;
+	compressdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
+
+	return 0;
+}
+
+static int
+uadk_compress_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_compressdev *compressdev;
+	char name[RTE_COMPRESSDEV_NAME_MAX_LEN];
+
+	if (pci_dev == NULL)
+		return -EINVAL;
+
+	rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
+	compressdev = rte_compressdev_pmd_get_named_dev(name);
+	if (compressdev == NULL)
+		return -ENODEV;
+
+	return rte_compressdev_pmd_destroy(compressdev);
+}
+
+#define PCI_VENDOR_ID_HUAWEI            0x19e5
+#define PCI_DEVICE_ID_ZIP_PF            0xa250
+#define PCI_DEVICE_ID_ZIP_VF            0xa251
+
+static const struct rte_pci_id pci_id_uadk_compress_table[] = {
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_PF),
+	},
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, PCI_DEVICE_ID_ZIP_VF),
+	},
+	{
+		.device_id = 0
+	},
+};
+
+/**
+ * Structure that represents a PCI driver
+ */
+static struct rte_pci_driver uadk_compress_pmd = {
+	.id_table    = pci_id_uadk_compress_table,
+	.probe       = uadk_compress_pci_probe,
+	.remove      = uadk_compress_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(UADK_COMPRESS_DRIVER_NAME, uadk_compress_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(UADK_COMPRESS_DRIVER_NAME, pci_id_uadk_compress_table);
diff --git a/drivers/compress/uadk/version.map b/drivers/compress/uadk/version.map
new file mode 100644
index 0000000000..c2e0723b4c
--- /dev/null
+++ b/drivers/compress/uadk/version.map
@@ -0,0 +1,3 @@
+DPDK_22 {
+	local: *;
+};
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 5+ messages in thread

end of thread, other threads:[~2022-05-23  8:23 UTC | newest]

Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-20 11:36 [PATCH RFC 0/3] Add UADK compression and crypto PMD Zhangfei Gao
2022-05-20 11:36 ` [PATCH RFC 1/3] compress: add UADK compression PMD Zhangfei Gao
2022-05-20 11:36 ` [PATCH RFC 2/3] test/crypto: add cryptodev_uadk_autotest Zhangfei Gao
2022-05-20 11:36 ` [PATCH RFC 3/3] drivers/crypto: add UADK crypto PMD Zhangfei Gao
  -- strict thread matches above, loose matches on Subject: below --
2022-05-20 11:31 [PATCH RFC 0/3] Add UADK compression and " Zhangfei Gao
2022-05-20 11:31 ` [PATCH RFC 1/3] compress: add UADK compression PMD Zhangfei Gao
