* [PATCH] compress/qat: add dynamic sgl allocation
@ 2019-02-15  9:44 Tomasz Jozwiak
  2019-02-15  9:44 ` [PATCH] compress/qat: add fallback to fixed compression Tomasz Jozwiak
                   ` (3 more replies)
  0 siblings, 4 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-02-15  9:44 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation instead of a static one.
The number of elements in the SGL can be adjusted in each operation
depending on the request.
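
For illustration, the change boils down to an on-demand growth pattern:
each op cookie keeps a heap-allocated SGL together with its current
capacity, and the SGL is reallocated only when an op arrives with more
mbuf segments than the cookie can hold. A minimal sketch of that pattern,
assuming the cookie fields and the qat_sgl/qat_flat_buf definitions used
by this patch (grow_src_sgl is a hypothetical helper, not driver code):

    /* Grow the per-op source SGL if the mbuf chain has more segments than
     * currently provisioned; refresh the IOVA after a successful realloc.
     */
    static int grow_src_sgl(struct qat_comp_op_cookie *ck, uint16_t nb_segs)
    {
            void *tmp;

            if (nb_segs <= ck->src_nb_elems)
                    return 0;       /* current SGL is already big enough */

            tmp = rte_realloc(ck->qat_sgl_src_d,
                              sizeof(struct qat_sgl) +
                              sizeof(struct qat_flat_buf) * nb_segs, 64);
            if (tmp == NULL)
                    return -ENOMEM; /* caller marks the op INVALID_ARGS */

            ck->qat_sgl_src_d = tmp;
            ck->src_nb_elems = nb_segs;
            ck->qat_sgl_src_phys_addr = rte_malloc_virt2iova(tmp);
            return 0;
    }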

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 56 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 49 ++++++++++++++++++++++++++-----
 6 files changed, 99 insertions(+), 26 deletions(-)

diff --git a/config/common_base b/config/common_base
index 7c6da51..5df1752 100644
--- a/config/common_base
+++ b/config/common_base
@@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 #
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 5631cb1..6f583a4 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,7 +35,6 @@ Limitations
 * Compressdev level 0, no compression, is not supported.
 * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
 * No BSD support as BSD QAT kernel driver not available.
-* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
 * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
   must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
   see :ref:`building_qat_config` for more details.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b079aa3..1f6b0d8 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
 	CONFIG_RTE_LIBRTE_PMD_QAT=y
 	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
@@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
 and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
 adjusted to handle the total of QAT and other devices which the process will use.
 
-QAT allocates internal structures to handle SGLs. For the compression service
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
-An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
-
 QAT compression PMD needs intermediate buffers to support Deflate compression
 with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
 specifies the size of a single buffer, the PMD will allocate a multiple of these,
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..0c03bc3 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2019 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -55,22 +55,68 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 
+		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			void *tmp;
+
+			tmp = rte_realloc(cookie->qat_sgl_src_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_src->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_src->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -ENOMEM;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
+			cookie->src_nb_elems = op->m_src->nb_segs;
+			cookie->qat_sgl_src_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_src,
 				op->src.offset,
-				&cookie->qat_sgl_src,
+				cookie->qat_sgl_src_d,
 				op->src.length,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->src_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return ret;
 		}
 
+		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			struct qat_sgl *tmp;
+
+			tmp = rte_realloc(cookie->qat_sgl_dst_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_dst->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_dst->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -EINVAL;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
+			cookie->dst_nb_elems = op->m_dst->nb_segs;
+			cookie->qat_sgl_dst_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_dst,
 				op->dst.offset,
-				&cookie->qat_sgl_dst,
+				cookie->qat_sgl_dst_d,
 				comp_req->comp_pars.out_buffer_sz,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->dst_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..2465f12 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
 #ifndef _QAT_COMP_H_
@@ -37,16 +37,15 @@ struct qat_inter_sgl {
 	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
 } __rte_packed __rte_cache_aligned;
 
-struct qat_comp_sgl {
-	qat_sgl_hdr;
-	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
-} __rte_packed __rte_cache_aligned;
 
 struct qat_comp_op_cookie {
-	struct qat_comp_sgl qat_sgl_src;
-	struct qat_comp_sgl qat_sgl_dst;
 	phys_addr_t qat_sgl_src_phys_addr;
 	phys_addr_t qat_sgl_dst_phys_addr;
+	/* dynamically created SGLs */
+	uint16_t src_nb_elems;
+	uint16_t dst_nb_elems;
+	struct qat_sgl *qat_sgl_src_d;
+	struct qat_sgl *qat_sgl_dst_d;
 };
 
 struct qat_comp_xform {
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 27c8856..f034a19 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -1,10 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
+#include <rte_malloc.h>
+
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
+
 static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
 	{/* COMPRESSION - deflate */
 	 .algo = RTE_COMP_ALGO_DEFLATE,
@@ -60,6 +64,10 @@ static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+	struct qat_qp **qp_addr =
+		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
+	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
+	uint32_t i;
 
 	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
 				queue_pair_id, dev->data->dev_id);
@@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
 						= NULL;
 
+	for (i = 0; i < qp->nb_descriptors; i++) {
+
+		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
+
+		rte_free(cookie->qat_sgl_src_d);
+		rte_free(cookie->qat_sgl_dst_d);
+	}
+
 	return qat_qp_release((struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
@@ -122,15 +138,34 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		struct qat_comp_op_cookie *cookie =
 				qp->op_cookies[i];
 
+		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		if (cookie->qat_sgl_src_d == NULL ||
+				cookie->qat_sgl_dst_d == NULL) {
+			QAT_LOG(ERR, "Can't allocate SGL"
+				     " for device %s",
+				     qat_private->qat_dev->name);
+			return -ENOMEM;
+		}
+
 		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_src);
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
 
 		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_dst);
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+
+		cookie->dst_nb_elems = cookie->src_nb_elems =
+				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
 	}
 
 	return ret;
-- 
2.7.4


* [PATCH] compress/qat: add fallback to fixed compression
  2019-02-15  9:44 [PATCH] compress/qat: add dynamic sgl allocation Tomasz Jozwiak
@ 2019-02-15  9:44 ` Tomasz Jozwiak
  2019-02-15 17:01   ` Trahe, Fiona
  2019-02-15  9:44 ` [PATCH] test/compress: add max mbuf size test case Tomasz Jozwiak
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-02-15  9:44 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds a fallback to fixed compression during dynamic
compression, used when the input data size is greater than
IM buffer size / 1.1. With this feature, compression no longer
fails when the IM buffer could be too small to hold the
produced data.
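
With the default CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE of 65536, the
threshold works out to 59578 bytes. A sketch of the decision, built
around the QAT_FALLBACK_THLD macro this patch adds to qat_comp.h
(select_fixed_compression is a hypothetical placeholder for the request
descriptor rewrite shown inline in the diff):

    /* fallback threshold for Deflate dynamic compression (qat_comp.h) */
    #define QAT_FALLBACK_THLD \
            ((uint32_t)(RTE_PMD_QAT_COMP_IM_BUFFER_SIZE / 1.1))
    /* with the default 65536-byte IM buffer: 65536 / 1.1 = 59578 bytes */

    if (op->src.length > QAT_FALLBACK_THLD)
            /* too big for the IM buffer: downgrade to static Huffman */
            select_fixed_compression(comp_req);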

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 doc/guides/cryptodevs/qat.rst   | 10 +++++-----
 drivers/compress/qat/qat_comp.c | 24 ++++++++++++++++++++++++
 drivers/compress/qat/qat_comp.h |  3 +++
 3 files changed, 32 insertions(+), 5 deletions(-)

diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b079aa3..907c2a9 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -188,11 +188,11 @@ allocated while for GEN1 devices, 12 buffers are allocated, plus 1472 bytes over
 
 	If the compressed output of a Deflate operation using Dynamic Huffman
         Encoding is too big to fit in an intermediate buffer, then the
-        operation will return RTE_COMP_OP_STATUS_ERROR and an error will be
-        displayed. Options for the application in this case
-        are to split the input data into smaller chunks and resubmit
-        in multiple operations or to configure QAT with
-        larger intermediate buffers.
+	operation will fall back to fixed compression rather than failing the operation.
+	To avoid this less performant case, applications should configure
+	the intermediate buffer size to be larger than the expected input data size
+	(compressed output size is usually unknown, so the only option is to make
+	larger than the input size).
 
 
 Device and driver naming
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..b9367d3 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -43,6 +43,30 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 	rte_mov128(out_msg, tmpl);
 	comp_req->comn_mid.opaque_data = (uint64_t)(uintptr_t)op;
 
+	if (likely(qat_xform->qat_comp_request_type ==
+		    QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS)) {
+		if (unlikely(op->src.length > QAT_FALLBACK_THLD)) {
+
+			/* fallback to fixed compression */
+			comp_req->comn_hdr.service_cmd_id =
+					ICP_QAT_FW_COMP_CMD_STATIC;
+
+			ICP_QAT_FW_COMN_NEXT_ID_SET(&comp_req->comp_cd_ctrl,
+					ICP_QAT_FW_SLICE_DRAM_WR);
+
+			ICP_QAT_FW_COMN_NEXT_ID_SET(&comp_req->u2.xlt_cd_ctrl,
+					ICP_QAT_FW_SLICE_NULL);
+			ICP_QAT_FW_COMN_CURR_ID_SET(&comp_req->u2.xlt_cd_ctrl,
+					ICP_QAT_FW_SLICE_NULL);
+
+			QAT_DP_LOG(DEBUG, "QAT PMD: fallback to fixed "
+				   "compression! IM buffer size can be too low "
+				   "for produced data.\n Please use input "
+				   "buffer length lower than %d bytes",
+				   QAT_FALLBACK_THLD);
+		}
+	}
+
 	/* common for sgl and flat buffers */
 	comp_req->comp_pars.comp_len = op->src.length;
 	comp_req->comp_pars.out_buffer_sz = rte_pktmbuf_pkt_len(op->m_dst) -
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..12b37b1 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -21,6 +21,9 @@
 
 #define ERR_CODE_QAT_COMP_WRONG_FW -99
 
+/* fallback to fixed compression threshold */
+#define QAT_FALLBACK_THLD ((uint32_t)(RTE_PMD_QAT_COMP_IM_BUFFER_SIZE / 1.1))
+
 enum qat_comp_request_type {
 	QAT_COMP_REQUEST_FIXED_COMP_STATELESS,
 	QAT_COMP_REQUEST_DYNAMIC_COMP_STATELESS,
-- 
2.7.4


* [PATCH] test/compress: add max mbuf size test case
  2019-02-15  9:44 [PATCH] compress/qat: add dynamic sgl allocation Tomasz Jozwiak
  2019-02-15  9:44 ` [PATCH] compress/qat: add fallback to fixed compression Tomasz Jozwiak
@ 2019-02-15  9:44 ` Tomasz Jozwiak
  2019-03-27 14:02   ` Akhil Goyal
  2019-04-02 12:16   ` [PATCH v2 0/1] " Tomasz Cel
  2019-03-01 11:00 ` [PATCH v2] add dynamic sgl allocation Tomasz Jozwiak
  2019-03-01 11:17 ` [PATCH v3 0/1] " Tomasz Jozwiak
  3 siblings, 2 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-02-15  9:44 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds a new test case in which the maximum size of
chained mbufs is used to compress random data dynamically.
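
The size constants added by this test work out as follows, assuming the
default RTE_PKTMBUF_HEADROOM of 128 bytes (the arithmetic is illustrative
only; the test itself does not assert these values):

    #define MAX_MBUF_SEGMENT_SIZE 65535
    #define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE - RTE_PKTMBUF_HEADROOM)
    #define NUM_BIG_MBUFS 4
    #define BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)
    /* With RTE_PKTMBUF_HEADROOM == 128:
     *   MAX_DATA_MBUF_SIZE = 65535 - 128   = 65407 bytes per segment
     *   BIG_DATA_TEST_SIZE = 65407 * 4 / 2 = 130814 bytes of random input,
     * i.e. the uncompressed data fills two maximum-size segments, leaving
     * the remaining mbufs in the pool for the compressed copy.
     */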

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 test/test/test_compressdev.c | 157 +++++++++++++++++++++++++++++++++++++------
 1 file changed, 136 insertions(+), 21 deletions(-)

diff --git a/test/test/test_compressdev.c b/test/test/test_compressdev.c
index e8476ed..f59b3d2 100644
--- a/test/test/test_compressdev.c
+++ b/test/test/test_compressdev.c
@@ -1,9 +1,10 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018 - 2019 Intel Corporation
  */
 #include <string.h>
 #include <zlib.h>
 #include <math.h>
+#include <stdlib.h>
 
 #include <rte_cycles.h>
 #include <rte_malloc.h>
@@ -44,6 +45,11 @@
 
 #define OUT_OF_SPACE_BUF 1
 
+#define MAX_MBUF_SEGMENT_SIZE 65535
+#define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE - RTE_PKTMBUF_HEADROOM)
+#define NUM_BIG_MBUFS 4
+#define BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)
+
 const char *
 huffman_type_strings[] = {
 	[RTE_COMP_HUFFMAN_DEFAULT]	= "PMD default",
@@ -72,6 +78,7 @@ struct priv_op_data {
 struct comp_testsuite_params {
 	struct rte_mempool *large_mbuf_pool;
 	struct rte_mempool *small_mbuf_pool;
+	struct rte_mempool *big_mbuf_pool;
 	struct rte_mempool *op_pool;
 	struct rte_comp_xform *def_comp_xform;
 	struct rte_comp_xform *def_decomp_xform;
@@ -91,6 +98,7 @@ struct test_data_params {
 	enum varied_buff buff_type;
 	enum zlib_direction zlib_dir;
 	unsigned int out_of_space;
+	unsigned int big_data;
 };
 
 static struct comp_testsuite_params testsuite_params = { 0 };
@@ -104,11 +112,14 @@ testsuite_teardown(void)
 		RTE_LOG(ERR, USER1, "Large mbuf pool still has unfreed bufs\n");
 	if (rte_mempool_in_use_count(ts_params->small_mbuf_pool))
 		RTE_LOG(ERR, USER1, "Small mbuf pool still has unfreed bufs\n");
+	if (rte_mempool_in_use_count(ts_params->big_mbuf_pool))
+		RTE_LOG(ERR, USER1, "Big mbuf pool still has unfreed bufs\n");
 	if (rte_mempool_in_use_count(ts_params->op_pool))
 		RTE_LOG(ERR, USER1, "op pool still has unfreed ops\n");
 
 	rte_mempool_free(ts_params->large_mbuf_pool);
 	rte_mempool_free(ts_params->small_mbuf_pool);
+	rte_mempool_free(ts_params->big_mbuf_pool);
 	rte_mempool_free(ts_params->op_pool);
 	rte_free(ts_params->def_comp_xform);
 	rte_free(ts_params->def_decomp_xform);
@@ -161,6 +172,17 @@ testsuite_setup(void)
 		goto exit;
 	}
 
+	/* Create mempool with big buffers for SGL testing */
+	ts_params->big_mbuf_pool = rte_pktmbuf_pool_create("big_mbuf_pool",
+			NUM_BIG_MBUFS + 1,
+			CACHE_SIZE, 0,
+			MAX_MBUF_SEGMENT_SIZE,
+			rte_socket_id());
+	if (ts_params->big_mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Big mbuf pool could not be created\n");
+		goto exit;
+	}
+
 	ts_params->op_pool = rte_comp_op_pool_create("op_pool", NUM_OPS,
 				0, sizeof(struct priv_op_data),
 				rte_socket_id());
@@ -597,10 +619,11 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 		uint32_t total_data_size,
 		struct rte_mempool *small_mbuf_pool,
 		struct rte_mempool *large_mbuf_pool,
-		uint8_t limit_segs_in_sgl)
+		uint8_t limit_segs_in_sgl,
+		uint16_t seg_size)
 {
 	uint32_t remaining_data = total_data_size;
-	uint16_t num_remaining_segs = DIV_CEIL(remaining_data, SMALL_SEG_SIZE);
+	uint16_t num_remaining_segs = DIV_CEIL(remaining_data, seg_size);
 	struct rte_mempool *pool;
 	struct rte_mbuf *next_seg;
 	uint32_t data_size;
@@ -616,10 +639,10 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 	 * Allocate data in the first segment (header) and
 	 * copy data if test buffer is provided
 	 */
-	if (remaining_data < SMALL_SEG_SIZE)
+	if (remaining_data < seg_size)
 		data_size = remaining_data;
 	else
-		data_size = SMALL_SEG_SIZE;
+		data_size = seg_size;
 	buf_ptr = rte_pktmbuf_append(head_buf, data_size);
 	if (buf_ptr == NULL) {
 		RTE_LOG(ERR, USER1,
@@ -643,13 +666,13 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 
 		if (i == (num_remaining_segs - 1)) {
 			/* last segment */
-			if (remaining_data > SMALL_SEG_SIZE)
+			if (remaining_data > seg_size)
 				pool = large_mbuf_pool;
 			else
 				pool = small_mbuf_pool;
 			data_size = remaining_data;
 		} else {
-			data_size = SMALL_SEG_SIZE;
+			data_size = seg_size;
 			pool = small_mbuf_pool;
 		}
 
@@ -703,6 +726,7 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 	enum rte_comp_op_type state = test_data->state;
 	unsigned int buff_type = test_data->buff_type;
 	unsigned int out_of_space = test_data->out_of_space;
+	unsigned int big_data = test_data->big_data;
 	enum zlib_direction zlib_dir = test_data->zlib_dir;
 	int ret_status = -1;
 	int ret;
@@ -737,7 +761,9 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 	memset(ops_processed, 0, sizeof(struct rte_comp_op *) * num_bufs);
 	memset(priv_xforms, 0, sizeof(void *) * num_bufs);
 
-	if (buff_type == SGL_BOTH)
+	if (big_data)
+		buf_pool = ts_params->big_mbuf_pool;
+	else if (buff_type == SGL_BOTH)
 		buf_pool = ts_params->small_mbuf_pool;
 	else
 		buf_pool = ts_params->large_mbuf_pool;
@@ -756,10 +782,11 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 		for (i = 0; i < num_bufs; i++) {
 			data_size = strlen(test_bufs[i]) + 1;
 			if (prepare_sgl_bufs(test_bufs[i], uncomp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			    data_size,
+			    big_data ? buf_pool : ts_params->small_mbuf_pool,
+			    big_data ? buf_pool : ts_params->large_mbuf_pool,
+			    big_data ? 0 : MAX_SEGS,
+			    big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE) < 0)
 				goto exit;
 		}
 	} else {
@@ -788,10 +815,12 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 					COMPRESS_BUF_SIZE_RATIO);
 
 			if (prepare_sgl_bufs(NULL, comp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			      data_size,
+			      big_data ? buf_pool : ts_params->small_mbuf_pool,
+			      big_data ? buf_pool : ts_params->large_mbuf_pool,
+			      big_data ? 0 : MAX_SEGS,
+			      big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE)
+					< 0)
 				goto exit;
 		}
 
@@ -1016,10 +1045,12 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 				strlen(test_bufs[priv_data->orig_idx]) + 1;
 
 			if (prepare_sgl_bufs(NULL, uncomp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			       data_size,
+			       big_data ? buf_pool : ts_params->small_mbuf_pool,
+			       big_data ? buf_pool : ts_params->large_mbuf_pool,
+			       big_data ? 0 : MAX_SEGS,
+			       big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE)
+					< 0)
 				goto exit;
 		}
 
@@ -1319,6 +1350,7 @@ test_compressdev_deflate_stateless_fixed(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1389,6 +1421,7 @@ test_compressdev_deflate_stateless_dynamic(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1442,6 +1475,7 @@ test_compressdev_deflate_stateless_multi_op(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1491,6 +1525,7 @@ test_compressdev_deflate_stateless_multi_level(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1580,6 +1615,7 @@ test_compressdev_deflate_stateless_multi_xform(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1625,6 +1661,7 @@ test_compressdev_deflate_stateless_sgl(void)
 		RTE_COMP_OP_STATELESS,
 		SGL_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1732,6 +1769,7 @@ test_compressdev_deflate_stateless_checksum(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1862,7 +1900,8 @@ test_compressdev_out_of_space_buffer(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
-		1
+		1,
+		0
 	};
 	/* Compress with compressdev, decompress with Zlib */
 	test_data.zlib_dir = ZLIB_DECOMPRESS;
@@ -1903,6 +1942,80 @@ test_compressdev_out_of_space_buffer(void)
 	return ret;
 }
 
+static int
+test_compressdev_deflate_stateless_dynamic_big(void)
+{
+	struct comp_testsuite_params *ts_params = &testsuite_params;
+	uint16_t i = 0;
+	int ret = TEST_SUCCESS;
+	const struct rte_compressdev_capabilities *capab;
+	char *test_buffer = NULL;
+
+	capab = rte_compressdev_capability_get(0, RTE_COMP_ALGO_DEFLATE);
+	TEST_ASSERT(capab != NULL, "Failed to retrieve device capabilities");
+
+	if ((capab->comp_feature_flags & RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
+		return -ENOTSUP;
+
+	if ((capab->comp_feature_flags & RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
+		return -ENOTSUP;
+
+	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
+	if (test_buffer == NULL) {
+		RTE_LOG(ERR, USER1,
+			"Can't allocate buffer for big-data\n");
+		return TEST_FAILED;
+	}
+
+	struct interim_data_params int_data = {
+		(const char * const *)&test_buffer,
+		1,
+		NULL,
+		&ts_params->def_comp_xform,
+		&ts_params->def_decomp_xform,
+		1
+	};
+
+	struct test_data_params test_data = {
+		RTE_COMP_OP_STATELESS,
+		SGL_BOTH,
+		ZLIB_DECOMPRESS,
+		0,
+		1
+	};
+
+	ts_params->def_comp_xform->compress.deflate.huffman =
+						RTE_COMP_HUFFMAN_DYNAMIC;
+
+	/* fill the buffer with data based on rand. data */
+	srand(BIG_DATA_TEST_SIZE);
+	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
+		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
+
+	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
+	int_data.buf_idx = &i;
+
+	/* Compress with compressdev, decompress with Zlib */
+	test_data.zlib_dir = ZLIB_DECOMPRESS;
+	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
+		ret = TEST_FAILED;
+		goto end;
+	}
+
+	/* Compress with Zlib, decompress with compressdev */
+	test_data.zlib_dir = ZLIB_COMPRESS;
+	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
+		ret = TEST_FAILED;
+		goto end;
+	}
+
+end:
+	ts_params->def_comp_xform->compress.deflate.huffman =
+						RTE_COMP_HUFFMAN_DEFAULT;
+	rte_free(test_buffer);
+	return ret;
+}
+
 
 static struct unit_test_suite compressdev_testsuite  = {
 	.suite_name = "compressdev unit test suite",
@@ -1916,6 +2029,8 @@ static struct unit_test_suite compressdev_testsuite  = {
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_dynamic),
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
+			test_compressdev_deflate_stateless_dynamic_big),
+		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_multi_op),
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_multi_level),
-- 
2.7.4


* Re: [PATCH] compress/qat: add fallback to fixed compression
  2019-02-15  9:44 ` [PATCH] compress/qat: add fallback to fixed compression Tomasz Jozwiak
@ 2019-02-15 17:01   ` Trahe, Fiona
  2019-03-19 14:04     ` Akhil Goyal
  0 siblings, 1 reply; 32+ messages in thread
From: Trahe, Fiona @ 2019-02-15 17:01 UTC (permalink / raw)
  To: Jozwiak, TomaszX, dev



> -----Original Message-----
> From: Jozwiak, TomaszX
> Sent: Friday, February 15, 2019 9:45 AM
> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
> <tomaszx.jozwiak@intel.com>
> Subject: [PATCH] compress/qat: add fallback to fixed compression
> 
> This patch adds a fallback to fixed compression during dynamic
> compression, used when the input data size is greater than
> IM buffer size / 1.1. With this feature, compression no longer
> fails when the IM buffer could be too small to hold the
> produced data.
> 
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>


* [PATCH v2] add dynamic sgl allocation
  2019-02-15  9:44 [PATCH] compress/qat: add dynamic sgl allocation Tomasz Jozwiak
  2019-02-15  9:44 ` [PATCH] compress/qat: add fallback to fixed compression Tomasz Jozwiak
  2019-02-15  9:44 ` [PATCH] test/compress: add max mbuf size test case Tomasz Jozwiak
@ 2019-03-01 11:00 ` Tomasz Jozwiak
  2019-03-01 11:00   ` [PATCH v2] compress/qat: " Tomasz Jozwiak
  2019-03-01 11:17 ` [PATCH v3 0/1] " Tomasz Jozwiak
  3 siblings, 1 reply; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-01 11:00 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation in the QAT PMD and
depends on the 'malloc: add rte_realloc_socket function' patch
(1551429976-16297-1-git-send-email-tomaszx.jozwiak@intel.com),
which should be applied first.
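
Assuming the prototype proposed in the referenced malloc patch,
rte_realloc_socket() extends rte_realloc() with an explicit target
socket, which lets the per-op SGLs stay on the queue pair's NUMA node
(the prototype and new_size below are assumptions for illustration, not
quoted from that patch):

    /* assumed prototype of the NUMA-aware reallocation helper */
    void *rte_realloc_socket(void *ptr, size_t size, unsigned int align,
                             int socket);

    /* reallocate a cookie SGL of new_size bytes on the caller's socket */
    tmp = rte_realloc_socket(cookie->qat_sgl_src_d, new_size, 64,
                             rte_socket_id());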

Changes

  -v2: switched to rte_realloc_socket instead of rte_realloc



Tomasz Jozwiak (1):
  compress/qat: add dynamic sgl allocation

 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 56 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 49 ++++++++++++++++++++++++++-----
 6 files changed, 99 insertions(+), 26 deletions(-)

-- 
2.7.4


* [PATCH v2] compress/qat: add dynamic sgl allocation
  2019-03-01 11:00 ` [PATCH v2] add dynamic sgl allocation Tomasz Jozwiak
@ 2019-03-01 11:00   ` Tomasz Jozwiak
  0 siblings, 0 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-01 11:00 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation instead of a static one.
The number of elements in the SGL can be adjusted in each operation
depending on the request.

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 56 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 49 ++++++++++++++++++++++++++-----
 6 files changed, 99 insertions(+), 26 deletions(-)

diff --git a/config/common_base b/config/common_base
index 7c6da51..5df1752 100644
--- a/config/common_base
+++ b/config/common_base
@@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 #
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 5631cb1..6f583a4 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,7 +35,6 @@ Limitations
 * Compressdev level 0, no compression, is not supported.
 * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
 * No BSD support as BSD QAT kernel driver not available.
-* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
 * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
   must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
   see :ref:`building_qat_config` for more details.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b079aa3..1f6b0d8 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
 	CONFIG_RTE_LIBRTE_PMD_QAT=y
 	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
@@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
 and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
 adjusted to handle the total of QAT and other devices which the process will use.
 
-QAT allocates internal structures to handle SGLs. For the compression service
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
-An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
-
 QAT compression PMD needs intermediate buffers to support Deflate compression
 with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
 specifies the size of a single buffer, the PMD will allocate a multiple of these,
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..13722c1 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2019 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -55,22 +55,68 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 
+		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			void *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_src->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_src->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -ENOMEM;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
+			cookie->src_nb_elems = op->m_src->nb_segs;
+			cookie->qat_sgl_src_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_src,
 				op->src.offset,
-				&cookie->qat_sgl_src,
+				cookie->qat_sgl_src_d,
 				op->src.length,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->src_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return ret;
 		}
 
+		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			struct qat_sgl *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_dst_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_dst->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_dst->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -EINVAL;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
+			cookie->dst_nb_elems = op->m_dst->nb_segs;
+			cookie->qat_sgl_dst_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_dst,
 				op->dst.offset,
-				&cookie->qat_sgl_dst,
+				cookie->qat_sgl_dst_d,
 				comp_req->comp_pars.out_buffer_sz,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->dst_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..2465f12 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
 #ifndef _QAT_COMP_H_
@@ -37,16 +37,15 @@ struct qat_inter_sgl {
 	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
 } __rte_packed __rte_cache_aligned;
 
-struct qat_comp_sgl {
-	qat_sgl_hdr;
-	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
-} __rte_packed __rte_cache_aligned;
 
 struct qat_comp_op_cookie {
-	struct qat_comp_sgl qat_sgl_src;
-	struct qat_comp_sgl qat_sgl_dst;
 	phys_addr_t qat_sgl_src_phys_addr;
 	phys_addr_t qat_sgl_dst_phys_addr;
+	/* dynamically created SGLs */
+	uint16_t src_nb_elems;
+	uint16_t dst_nb_elems;
+	struct qat_sgl *qat_sgl_src_d;
+	struct qat_sgl *qat_sgl_dst_d;
 };
 
 struct qat_comp_xform {
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 27c8856..f034a19 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -1,10 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
+#include <rte_malloc.h>
+
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
+
 static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
 	{/* COMPRESSION - deflate */
 	 .algo = RTE_COMP_ALGO_DEFLATE,
@@ -60,6 +64,10 @@ static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+	struct qat_qp **qp_addr =
+		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
+	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
+	uint32_t i;
 
 	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
 				queue_pair_id, dev->data->dev_id);
@@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
 						= NULL;
 
+	for (i = 0; i < qp->nb_descriptors; i++) {
+
+		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
+
+		rte_free(cookie->qat_sgl_src_d);
+		rte_free(cookie->qat_sgl_dst_d);
+	}
+
 	return qat_qp_release((struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
@@ -122,15 +138,34 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		struct qat_comp_op_cookie *cookie =
 				qp->op_cookies[i];
 
+		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		if (cookie->qat_sgl_src_d == NULL ||
+				cookie->qat_sgl_dst_d == NULL) {
+			QAT_LOG(ERR, "Can't allocate SGL"
+				     " for device %s",
+				     qat_private->qat_dev->name);
+			return -ENOMEM;
+		}
+
 		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_src);
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
 
 		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_dst);
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+
+		cookie->dst_nb_elems = cookie->src_nb_elems =
+				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
 	}
 
 	return ret;
-- 
2.7.4


* [PATCH v3 0/1] add dynamic sgl allocation
  2019-02-15  9:44 [PATCH] compress/qat: add dynamic sgl allocation Tomasz Jozwiak
                   ` (2 preceding siblings ...)
  2019-03-01 11:00 ` [PATCH v2] add dynamic sgl allocation Tomasz Jozwiak
@ 2019-03-01 11:17 ` Tomasz Jozwiak
  2019-03-01 11:17   ` [PATCH v3 1/1] compress/qat: " Tomasz Jozwiak
  2019-03-07 12:02   ` [PATCH v4 0/1] " Tomasz Jozwiak
  3 siblings, 2 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-01 11:17 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation in the QAT PMD and
depends on the 'malloc: add rte_realloc_socket function' patch
(1551429976-16297-1-git-send-email-tomaszx.jozwiak@intel.com),
which should be applied first.

Changes

  -v2: switched to rte_realloc_socket instead of rte_realloc

  -v3: fixed the subject in the patch series so that the cover letter is visible



Tomasz Jozwiak (1):
  compress/qat: add dynamic sgl allocation

 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 56 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 49 ++++++++++++++++++++++++++-----
 6 files changed, 99 insertions(+), 26 deletions(-)

-- 
2.7.4


* [PATCH v3 1/1] compress/qat: add dynamic sgl allocation
  2019-03-01 11:17 ` [PATCH v3 0/1] " Tomasz Jozwiak
@ 2019-03-01 11:17   ` Tomasz Jozwiak
  2019-03-07 12:02   ` [PATCH v4 0/1] " Tomasz Jozwiak
  1 sibling, 0 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-01 11:17 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation instead of a static one.
The number of elements in the SGL can be adjusted in each operation
depending on the request.

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 56 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 49 ++++++++++++++++++++++++++-----
 6 files changed, 99 insertions(+), 26 deletions(-)

diff --git a/config/common_base b/config/common_base
index 7c6da51..5df1752 100644
--- a/config/common_base
+++ b/config/common_base
@@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 #
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 5631cb1..6f583a4 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,7 +35,6 @@ Limitations
 * Compressdev level 0, no compression, is not supported.
 * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
 * No BSD support as BSD QAT kernel driver not available.
-* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
 * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
   must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
   see :ref:`building_qat_config` for more details.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b079aa3..1f6b0d8 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
 	CONFIG_RTE_LIBRTE_PMD_QAT=y
 	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
@@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
 and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
 adjusted to handle the total of QAT and other devices which the process will use.
 
-QAT allocates internal structures to handle SGLs. For the compression service
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
-An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
-
 QAT compression PMD needs intermediate buffers to support Deflate compression
 with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
 specifies the size of a single buffer, the PMD will allocate a multiple of these,
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..13722c1 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2019 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -55,22 +55,68 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 
+		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			void *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_src->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_src->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -ENOMEM;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
+			cookie->src_nb_elems = op->m_src->nb_segs;
+			cookie->qat_sgl_src_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_src,
 				op->src.offset,
-				&cookie->qat_sgl_src,
+				cookie->qat_sgl_src_d,
 				op->src.length,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->src_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return ret;
 		}
 
+		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			struct qat_sgl *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_dst_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_dst->nb_segs, 64);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_dst->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -EINVAL;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
+			cookie->dst_nb_elems = op->m_dst->nb_segs;
+			cookie->qat_sgl_dst_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_dst,
 				op->dst.offset,
-				&cookie->qat_sgl_dst,
+				cookie->qat_sgl_dst_d,
 				comp_req->comp_pars.out_buffer_sz,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->dst_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..2465f12 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
 #ifndef _QAT_COMP_H_
@@ -37,16 +37,15 @@ struct qat_inter_sgl {
 	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
 } __rte_packed __rte_cache_aligned;
 
-struct qat_comp_sgl {
-	qat_sgl_hdr;
-	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
-} __rte_packed __rte_cache_aligned;
 
 struct qat_comp_op_cookie {
-	struct qat_comp_sgl qat_sgl_src;
-	struct qat_comp_sgl qat_sgl_dst;
 	phys_addr_t qat_sgl_src_phys_addr;
 	phys_addr_t qat_sgl_dst_phys_addr;
+	/* dynamically created SGLs */
+	uint16_t src_nb_elems;
+	uint16_t dst_nb_elems;
+	struct qat_sgl *qat_sgl_src_d;
+	struct qat_sgl *qat_sgl_dst_d;
 };
 
 struct qat_comp_xform {
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 27c8856..f034a19 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -1,10 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
+#include <rte_malloc.h>
+
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
+
 static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
 	{/* COMPRESSION - deflate */
 	 .algo = RTE_COMP_ALGO_DEFLATE,
@@ -60,6 +64,10 @@ static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+	struct qat_qp **qp_addr =
+		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
+	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
+	uint32_t i;
 
 	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
 				queue_pair_id, dev->data->dev_id);
@@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
 						= NULL;
 
+	for (i = 0; i < qp->nb_descriptors; i++) {
+
+		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
+
+		rte_free(cookie->qat_sgl_src_d);
+		rte_free(cookie->qat_sgl_dst_d);
+	}
+
 	return qat_qp_release((struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
@@ -122,15 +138,34 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		struct qat_comp_op_cookie *cookie =
 				qp->op_cookies[i];
 
+		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		if (cookie->qat_sgl_src_d == NULL ||
+				cookie->qat_sgl_dst_d == NULL) {
+			QAT_LOG(ERR, "Can't allocate SGL"
+				     " for device %s",
+				     qat_private->qat_dev->name);
+			return -ENOMEM;
+		}
+
 		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_src);
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
 
 		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_dst);
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+
+		cookie->dst_nb_elems = cookie->src_nb_elems =
+				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
 	}
 
 	return ret;
-- 
2.7.4


* [PATCH v4 0/1] add dynamic sgl allocation
  2019-03-01 11:17 ` [PATCH v3 0/1] " Tomasz Jozwiak
  2019-03-01 11:17   ` [PATCH v3 1/1] compress/qat: " Tomasz Jozwiak
@ 2019-03-07 12:02   ` Tomasz Jozwiak
  2019-03-07 12:02     ` [PATCH v4 1/1] compress/qat: " Tomasz Jozwiak
  2019-03-26 13:51     ` [PATCH v5 0/1] " Tomasz Jozwiak
  1 sibling, 2 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-07 12:02 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation in the QAT PMD and
depends on the 'malloc: add rte_realloc_socket function' patch
(1551429976-16297-1-git-send-email-tomaszx.jozwiak@intel.com),
which should be applied first.

Changes

  -v2: switched to rte_realloc_socket instead of rte_realloc

  -v3: fixed the subject in the patch series so that the cover letter is visible

  -v4: fixed the wrong number of arguments in the rte_realloc_socket call (corrected call shown below)
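
The corrected call shape, as it appears in the v4 diff below:

    /* v4: rte_realloc_socket() now receives all four arguments
     * (ptr, size, align, socket); v2/v3 passed only three.
     */
    tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
                             sizeof(struct qat_sgl) +
                             sizeof(struct qat_flat_buf) *
                             op->m_src->nb_segs, 64,
                             rte_socket_id());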


Tomasz Jozwiak (1):
  compress/qat: add dynamic sgl allocation

 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 58 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++----
 drivers/compress/qat/qat_comp_pmd.c  | 49 +++++++++++++++++++++++++-----
 6 files changed, 101 insertions(+), 26 deletions(-)

-- 
2.7.4


* [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
  2019-03-07 12:02   ` [PATCH v4 0/1] " Tomasz Jozwiak
@ 2019-03-07 12:02     ` Tomasz Jozwiak
  2019-03-07 18:58       ` Trahe, Fiona
  2019-03-17 18:00       ` Akhil Goyal
  2019-03-26 13:51     ` [PATCH v5 0/1] " Tomasz Jozwiak
  1 sibling, 2 replies; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-07 12:02 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation instead of a static one.
The number of elements in the SGL can be adjusted in each operation
depending on the request.

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 58 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 13 ++++----
 drivers/compress/qat/qat_comp_pmd.c  | 49 +++++++++++++++++++++++++-----
 6 files changed, 101 insertions(+), 26 deletions(-)

diff --git a/config/common_base b/config/common_base
index 0b09a93..91c7b73 100644
--- a/config/common_base
+++ b/config/common_base
@@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 #
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 5631cb1..6f583a4 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,7 +35,6 @@ Limitations
 * Compressdev level 0, no compression, is not supported.
 * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
 * No BSD support as BSD QAT kernel driver not available.
-* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
 * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
   must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
   see :ref:`building_qat_config` for more details.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b7eace1..03bd0c1 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
 	CONFIG_RTE_LIBRTE_PMD_QAT=y
 	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
@@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
 and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
 adjusted to handle the total of QAT and other devices which the process will use.
 
-QAT allocates internal structures to handle SGLs. For the compression service
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
-An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
-
 QAT compression PMD needs intermediate buffers to support Deflate compression
 with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
 specifies the size of a single buffer, the PMD will allocate a multiple of these,
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..c021f4a 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2019 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -55,22 +55,70 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 
+		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			void *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_src->nb_segs, 64,
+					  rte_socket_id());
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_src->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -ENOMEM;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
+			cookie->src_nb_elems = op->m_src->nb_segs;
+			cookie->qat_sgl_src_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_src,
 				op->src.offset,
-				&cookie->qat_sgl_src,
+				cookie->qat_sgl_src_d,
 				op->src.length,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->src_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return ret;
 		}
 
+		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			struct qat_sgl *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_dst_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_dst->nb_segs, 64,
+					  rte_socket_id());
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_dst->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -EINVAL;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
+			cookie->dst_nb_elems = op->m_dst->nb_segs;
+			cookie->qat_sgl_dst_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_dst,
 				op->dst.offset,
-				&cookie->qat_sgl_dst,
+				cookie->qat_sgl_dst_d,
 				comp_req->comp_pars.out_buffer_sz,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->dst_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..2465f12 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
 #ifndef _QAT_COMP_H_
@@ -37,16 +37,15 @@ struct qat_inter_sgl {
 	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
 } __rte_packed __rte_cache_aligned;
 
-struct qat_comp_sgl {
-	qat_sgl_hdr;
-	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
-} __rte_packed __rte_cache_aligned;
 
 struct qat_comp_op_cookie {
-	struct qat_comp_sgl qat_sgl_src;
-	struct qat_comp_sgl qat_sgl_dst;
 	phys_addr_t qat_sgl_src_phys_addr;
 	phys_addr_t qat_sgl_dst_phys_addr;
+	/* dynamically created SGLs */
+	uint16_t src_nb_elems;
+	uint16_t dst_nb_elems;
+	struct qat_sgl *qat_sgl_src_d;
+	struct qat_sgl *qat_sgl_dst_d;
 };
 
 struct qat_comp_xform {
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 27c8856..f034a19 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -1,10 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
+#include <rte_malloc.h>
+
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
+
 static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
 	{/* COMPRESSION - deflate */
 	 .algo = RTE_COMP_ALGO_DEFLATE,
@@ -60,6 +64,10 @@ static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+	struct qat_qp **qp_addr =
+		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
+	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
+	uint32_t i;
 
 	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
 				queue_pair_id, dev->data->dev_id);
@@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
 						= NULL;
 
+	for (i = 0; i < qp->nb_descriptors; i++) {
+
+		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
+
+		rte_free(cookie->qat_sgl_src_d);
+		rte_free(cookie->qat_sgl_dst_d);
+	}
+
 	return qat_qp_release((struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
@@ -122,15 +138,34 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		struct qat_comp_op_cookie *cookie =
 				qp->op_cookies[i];
 
+		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, socket_id);
+
+		if (cookie->qat_sgl_src_d == NULL ||
+				cookie->qat_sgl_dst_d == NULL) {
+			QAT_LOG(ERR, "Can't allocate SGL"
+				     " for device %s",
+				     qat_private->qat_dev->name);
+			return -ENOMEM;
+		}
+
 		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_src);
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
 
 		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_dst);
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+
+		cookie->dst_nb_elems = cookie->src_nb_elems =
+				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
 	}
 
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
  2019-03-07 12:02     ` [PATCH v4 1/1] compress/qat: " Tomasz Jozwiak
@ 2019-03-07 18:58       ` Trahe, Fiona
  2019-03-17 18:00       ` Akhil Goyal
  1 sibling, 0 replies; 32+ messages in thread
From: Trahe, Fiona @ 2019-03-07 18:58 UTC (permalink / raw)
  To: Jozwiak, TomaszX, dev



> -----Original Message-----
> From: Jozwiak, TomaszX
> Sent: Thursday, March 7, 2019 12:02 PM
> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
> <tomaszx.jozwiak@intel.com>
> Subject: [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
> 
> This patch adds dynamic SGL allocation instead of static one.
> The number of element in SGL can be adjusted in each operation
> depend of the request.
> 
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
  2019-03-07 12:02     ` [PATCH v4 1/1] compress/qat: " Tomasz Jozwiak
  2019-03-07 18:58       ` Trahe, Fiona
@ 2019-03-17 18:00       ` Akhil Goyal
  2019-03-18  8:12         ` Jozwiak, TomaszX
  1 sibling, 1 reply; 32+ messages in thread
From: Akhil Goyal @ 2019-03-17 18:00 UTC (permalink / raw)
  To: Tomasz Jozwiak, dev, fiona.trahe

Hi Tomasz,

I can see a compilation failure in the patchwork CI tests.
Could you please check?

Thanks.

On 3/7/2019 5:32 PM, Tomasz Jozwiak wrote:
> This patch adds dynamic SGL allocation instead of static one.
> The number of element in SGL can be adjusted in each operation
> depend of the request.
>
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> ---
>   config/common_base                   |  1 -
>   doc/guides/compressdevs/qat_comp.rst |  1 -
>   doc/guides/cryptodevs/qat.rst        |  5 ----
>   drivers/compress/qat/qat_comp.c      | 58 ++++++++++++++++++++++++++++++++----
>   drivers/compress/qat/qat_comp.h      | 13 ++++----
>   drivers/compress/qat/qat_comp_pmd.c  | 49 +++++++++++++++++++++++++-----
>   6 files changed, 101 insertions(+), 26 deletions(-)
>
> diff --git a/config/common_base b/config/common_base
> index 0b09a93..91c7b73 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
>   # Max. number of QuickAssist devices, which can be detected and attached
>   #
>   CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
> -CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
>   CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
>   
>   #
> diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
> index 5631cb1..6f583a4 100644
> --- a/doc/guides/compressdevs/qat_comp.rst
> +++ b/doc/guides/compressdevs/qat_comp.rst
> @@ -35,7 +35,6 @@ Limitations
>   * Compressdev level 0, no compression, is not supported.
>   * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
>   * No BSD support as BSD QAT kernel driver not available.
> -* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
>   * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
>     must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
>     see :ref:`building_qat_config` for more details.
> diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
> index b7eace1..03bd0c1 100644
> --- a/doc/guides/cryptodevs/qat.rst
> +++ b/doc/guides/cryptodevs/qat.rst
> @@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
>   	CONFIG_RTE_LIBRTE_PMD_QAT=y
>   	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
>   	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
> -	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
>   	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
>   
>   CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
> @@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
>   and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
>   adjusted to handle the total of QAT and other devices which the process will use.
>   
> -QAT allocates internal structures to handle SGLs. For the compression service
> -CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
> -An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
> -
>   QAT compression PMD needs intermediate buffers to support Deflate compression
>   with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
>   specifies the size of a single buffer, the PMD will allocate a multiple of these,
> diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
> index 32ca753..c021f4a 100644
> --- a/drivers/compress/qat/qat_comp.c
> +++ b/drivers/compress/qat/qat_comp.c
> @@ -1,5 +1,5 @@
>   /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2018 Intel Corporation
> + * Copyright(c) 2018-2019 Intel Corporation
>    */
>   
>   #include <rte_mempool.h>
> @@ -55,22 +55,70 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
>   		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
>   				QAT_COMN_PTR_TYPE_SGL);
>   
> +		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
> +			/* we need to allocate more elements in SGL*/
> +			void *tmp;
> +
> +			tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
> +					  sizeof(struct qat_sgl) +
> +					  sizeof(struct qat_flat_buf) *
> +					  op->m_src->nb_segs, 64,
> +					  rte_socket_id());
> +
> +			if (unlikely(tmp == NULL)) {
> +				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
> +					   " for %d elements of SGL",
> +					   op->m_src->nb_segs);
> +				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
> +				return -ENOMEM;
> +			}
> +			/* new SGL is valid now */
> +			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
> +			cookie->src_nb_elems = op->m_src->nb_segs;
> +			cookie->qat_sgl_src_phys_addr =
> +				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
> +		}
> +
>   		ret = qat_sgl_fill_array(op->m_src,
>   				op->src.offset,
> -				&cookie->qat_sgl_src,
> +				cookie->qat_sgl_src_d,
>   				op->src.length,
> -				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
> +				(const uint16_t)cookie->src_nb_elems);
>   		if (ret) {
>   			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
>   			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
>   			return ret;
>   		}
>   
> +		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
> +			/* we need to allocate more elements in SGL*/
> +			struct qat_sgl *tmp;
> +
> +			tmp = rte_realloc_socket(cookie->qat_sgl_dst_d,
> +					  sizeof(struct qat_sgl) +
> +					  sizeof(struct qat_flat_buf) *
> +					  op->m_dst->nb_segs, 64,
> +					  rte_socket_id());
> +
> +			if (unlikely(tmp == NULL)) {
> +				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
> +					   " for %d elements of SGL",
> +					   op->m_dst->nb_segs);
> +				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
> +				return -EINVAL;
> +			}
> +			/* new SGL is valid now */
> +			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
> +			cookie->dst_nb_elems = op->m_dst->nb_segs;
> +			cookie->qat_sgl_dst_phys_addr =
> +				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
> +		}
> +
>   		ret = qat_sgl_fill_array(op->m_dst,
>   				op->dst.offset,
> -				&cookie->qat_sgl_dst,
> +				cookie->qat_sgl_dst_d,
>   				comp_req->comp_pars.out_buffer_sz,
> -				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
> +				(const uint16_t)cookie->dst_nb_elems);
>   		if (ret) {
>   			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
>   			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
> diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
> index 19f48df..2465f12 100644
> --- a/drivers/compress/qat/qat_comp.h
> +++ b/drivers/compress/qat/qat_comp.h
> @@ -1,5 +1,5 @@
>   /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2015-2018 Intel Corporation
> + * Copyright(c) 2015-2019 Intel Corporation
>    */
>   
>   #ifndef _QAT_COMP_H_
> @@ -37,16 +37,15 @@ struct qat_inter_sgl {
>   	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
>   } __rte_packed __rte_cache_aligned;
>   
> -struct qat_comp_sgl {
> -	qat_sgl_hdr;
> -	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
> -} __rte_packed __rte_cache_aligned;
>   
>   struct qat_comp_op_cookie {
> -	struct qat_comp_sgl qat_sgl_src;
> -	struct qat_comp_sgl qat_sgl_dst;
>   	phys_addr_t qat_sgl_src_phys_addr;
>   	phys_addr_t qat_sgl_dst_phys_addr;
> +	/* dynamically created SGLs */
> +	uint16_t src_nb_elems;
> +	uint16_t dst_nb_elems;
> +	struct qat_sgl *qat_sgl_src_d;
> +	struct qat_sgl *qat_sgl_dst_d;
>   };
>   
>   struct qat_comp_xform {
> diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
> index 27c8856..f034a19 100644
> --- a/drivers/compress/qat/qat_comp_pmd.c
> +++ b/drivers/compress/qat/qat_comp_pmd.c
> @@ -1,10 +1,14 @@
>   /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2015-2018 Intel Corporation
> + * Copyright(c) 2015-2019 Intel Corporation
>    */
>   
> +#include <rte_malloc.h>
> +
>   #include "qat_comp.h"
>   #include "qat_comp_pmd.h"
>   
> +#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
> +
>   static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
>   	{/* COMPRESSION - deflate */
>   	 .algo = RTE_COMP_ALGO_DEFLATE,
> @@ -60,6 +64,10 @@ static int
>   qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
>   {
>   	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
> +	struct qat_qp **qp_addr =
> +		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
> +	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
> +	uint32_t i;
>   
>   	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
>   				queue_pair_id, dev->data->dev_id);
> @@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
>   	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
>   						= NULL;
>   
> +	for (i = 0; i < qp->nb_descriptors; i++) {
> +
> +		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
> +
> +		rte_free(cookie->qat_sgl_src_d);
> +		rte_free(cookie->qat_sgl_dst_d);
> +	}
> +
>   	return qat_qp_release((struct qat_qp **)
>   			&(dev->data->queue_pairs[queue_pair_id]));
>   }
> @@ -122,15 +138,34 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
>   		struct qat_comp_op_cookie *cookie =
>   				qp->op_cookies[i];
>   
> +		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
> +					sizeof(struct qat_sgl) +
> +					sizeof(struct qat_flat_buf) *
> +					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
> +					64, socket_id);
> +
> +		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
> +					sizeof(struct qat_sgl) +
> +					sizeof(struct qat_flat_buf) *
> +					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
> +					64, socket_id);
> +
> +		if (cookie->qat_sgl_src_d == NULL ||
> +				cookie->qat_sgl_dst_d == NULL) {
> +			QAT_LOG(ERR, "Can't allocate SGL"
> +				     " for device %s",
> +				     qat_private->qat_dev->name);
> +			return -ENOMEM;
> +		}
> +
>   		cookie->qat_sgl_src_phys_addr =
> -				rte_mempool_virt2iova(cookie) +
> -				offsetof(struct qat_comp_op_cookie,
> -				qat_sgl_src);
> +				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
>   
>   		cookie->qat_sgl_dst_phys_addr =
> -				rte_mempool_virt2iova(cookie) +
> -				offsetof(struct qat_comp_op_cookie,
> -				qat_sgl_dst);
> +				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
> +
> +		cookie->dst_nb_elems = cookie->src_nb_elems =
> +				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
>   	}
>   
>   	return ret;


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
  2019-03-17 18:00       ` Akhil Goyal
@ 2019-03-18  8:12         ` Jozwiak, TomaszX
  2019-03-18  8:23           ` arpita das
  0 siblings, 1 reply; 32+ messages in thread
From: Jozwiak, TomaszX @ 2019-03-18  8:12 UTC (permalink / raw)
  To: Akhil Goyal, dev, Trahe, Fiona

Hi Akhil,

Please take a look at the cover letter (https://patches.dpdk.org/cover/50919/).

There's a dependency on the 'malloc: add rte_realloc_socket function' patch
(1551429976-16297-1-git-send-email-tomaszx.jozwiak@intel.com),
which should be applied first.
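
For reference, once that API is available the PMD can grow a cookie's SGL on
the fly whenever an op arrives with more mbuf segments than the SGL currently
holds. A minimal sketch of that logic, simplified from qat_comp_build_request()
in the patch (source side only, error logging dropped):

    if (op->m_src->nb_segs > cookie->src_nb_elems) {
            void *tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
                            sizeof(struct qat_sgl) +
                            sizeof(struct qat_flat_buf) * op->m_src->nb_segs,
                            64, rte_socket_id());

            if (tmp == NULL)
                    return -ENOMEM; /* op->status is set to INVALID_ARGS */

            /* the reallocated SGL is valid now */
            cookie->qat_sgl_src_d = tmp;
            cookie->src_nb_elems = op->m_src->nb_segs;
            cookie->qat_sgl_src_phys_addr =
                    rte_malloc_virt2iova(cookie->qat_sgl_src_d);
    }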

Br, Tomek

> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Sunday, March 17, 2019 7:00 PM
> To: Jozwiak, TomaszX <tomaszx.jozwiak@intel.com>; dev@dpdk.org; Trahe,
> Fiona <fiona.trahe@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v4 1/1] compress/qat: add dynamic sgl
> allocation
> 
> Hi Tomasz,
> 
> I can see compilation failure in the patchwork CI tests.
> Could you please check.
> 
> Thanks.
> 
> On 3/7/2019 5:32 PM, Tomasz Jozwiak wrote:
> > > [full v4 patch quoted with no inline comments - snipped; see the quoted patch in the previous message]


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v4 1/1] compress/qat: add dynamic sgl allocation
  2019-03-18  8:12         ` Jozwiak, TomaszX
@ 2019-03-18  8:23           ` arpita das
  0 siblings, 0 replies; 32+ messages in thread
From: arpita das @ 2019-03-18  8:23 UTC (permalink / raw)
  To: Jozwiak, TomaszX; +Cc: Akhil Goyal, Trahe, Fiona, dev

I know it's the wrong forum, but just a one-liner query: are there any openings
in DPDK? Please let me know. TIA

On Mon, 18 Mar 2019 at 1:43 PM, Jozwiak, TomaszX <tomaszx.jozwiak@intel.com>
wrote:

> Hi Akhil,
>
> Please take a look at cover letter (https://patches.dpdk.org/cover/50919/)
>
>  There's dependency on 'malloc: add rte_realloc_socket function patch'
> (1551429976-16297-1-git-send-email-tomaszx.jozwiak@intel.com)
> which should be applied first.
>
> Br, Tomek
>
> > [remainder of the quoted thread (Akhil's mail and the full v4 patch) - snipped]

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH] compress/qat: add fallback to fixed compression
  2019-02-15 17:01   ` Trahe, Fiona
@ 2019-03-19 14:04     ` Akhil Goyal
  0 siblings, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2019-03-19 14:04 UTC (permalink / raw)
  To: Trahe, Fiona, Jozwiak, TomaszX, dev



On 2/15/2019 10:31 PM, Trahe, Fiona wrote:
>
>> -----Original Message-----
>> From: Jozwiak, TomaszX
>> Sent: Friday, February 15, 2019 9:45 AM
>> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
>> <tomaszx.jozwiak@intel.com>
>> Subject: [PATCH] compress/qat: add fallback to fixed compression
>>
>> This patch adds fallback to fixed compression
>> feature during dynamic compression, when the input data size
>> is greater than IM buffer size / 1.1. This feature doesn't
>> stop compression process when IM buffer can be too small
>> to handle produced data.
>>
>> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 0/1] add dynamic sgl allocation
  2019-03-07 12:02   ` [PATCH v4 0/1] " Tomasz Jozwiak
  2019-03-07 12:02     ` [PATCH v4 1/1] compress/qat: " Tomasz Jozwiak
@ 2019-03-26 13:51     ` Tomasz Jozwiak
  2019-03-26 13:51       ` [PATCH v5 1/1] compress/qat: " Tomasz Jozwiak
  1 sibling, 1 reply; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-26 13:51 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation in the QAT PMD and
depends on the 'malloc: add rte_realloc_socket function' patch
(https://patches.dpdk.org/patch/50711/),
which should be applied first.

Changes

  -v2: used rte_realloc_socket instead of rte_realloc

  -v3: fixed the subject in the patch series to link it to the cover letter

  -v4: fixed wrong number of arguments in rte_realloc_socket call

  -v5: assigned correct NUMA node
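
The v5 NUMA fix boils down to caching the device's socket in the op cookie at
queue pair setup and reallocating on that socket, instead of on the socket of
whichever lcore happens to run the data path. A rough sketch of the difference
(the complete change is in the v5 patch that follows):

    /* queue pair setup */
    cookie->socket_id = dev->data->socket_id;

    /* build request, when the SGL has to grow */
    tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
                    sizeof(struct qat_sgl) +
                    sizeof(struct qat_flat_buf) * op->m_src->nb_segs,
                    64, cookie->socket_id); /* v4 passed rte_socket_id() here */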

Tomasz Jozwiak (1):
  compress/qat: add dynamic sgl allocation

 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 58 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 14 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 51 ++++++++++++++++++++++++++-----
 6 files changed, 104 insertions(+), 26 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
  2019-03-26 13:51     ` [PATCH v5 0/1] " Tomasz Jozwiak
@ 2019-03-26 13:51       ` Tomasz Jozwiak
  2019-03-28 14:37         ` Trahe, Fiona
  0 siblings, 1 reply; 32+ messages in thread
From: Tomasz Jozwiak @ 2019-03-26 13:51 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak

This patch adds dynamic SGL allocation instead of a static one.
The number of elements in the SGL can be adjusted in each operation,
depending on the request.

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 config/common_base                   |  1 -
 doc/guides/compressdevs/qat_comp.rst |  1 -
 doc/guides/cryptodevs/qat.rst        |  5 ----
 drivers/compress/qat/qat_comp.c      | 58 ++++++++++++++++++++++++++++++++----
 drivers/compress/qat/qat_comp.h      | 14 ++++-----
 drivers/compress/qat/qat_comp_pmd.c  | 51 ++++++++++++++++++++++++++-----
 6 files changed, 104 insertions(+), 26 deletions(-)

diff --git a/config/common_base b/config/common_base
index 0b09a93..91c7b73 100644
--- a/config/common_base
+++ b/config/common_base
@@ -549,7 +549,6 @@ CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 # Max. number of QuickAssist devices, which can be detected and attached
 #
 CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 #
diff --git a/doc/guides/compressdevs/qat_comp.rst b/doc/guides/compressdevs/qat_comp.rst
index 5631cb1..6f583a4 100644
--- a/doc/guides/compressdevs/qat_comp.rst
+++ b/doc/guides/compressdevs/qat_comp.rst
@@ -35,7 +35,6 @@ Limitations
 * Compressdev level 0, no compression, is not supported.
 * Queue pairs are not thread-safe (that is, within a single queue pair, RX and TX from different lcores is not supported).
 * No BSD support as BSD QAT kernel driver not available.
-* Number of segments in mbuf chains in the op must be <= RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS from the config file.
 * When using Deflate dynamic huffman encoding for compression, the input size (op.src.length)
   must be < CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE from the config file,
   see :ref:`building_qat_config` for more details.
diff --git a/doc/guides/cryptodevs/qat.rst b/doc/guides/cryptodevs/qat.rst
index b7eace1..03bd0c1 100644
--- a/doc/guides/cryptodevs/qat.rst
+++ b/doc/guides/cryptodevs/qat.rst
@@ -156,7 +156,6 @@ These are the build configuration options affecting QAT, and their default value
 	CONFIG_RTE_LIBRTE_PMD_QAT=y
 	CONFIG_RTE_LIBRTE_PMD_QAT_SYM=n
 	CONFIG_RTE_PMD_QAT_MAX_PCI_DEVICES=48
-	CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS=16
 	CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE=65536
 
 CONFIG_RTE_LIBRTE_PMD_QAT must be enabled for any QAT PMD to be built.
@@ -174,10 +173,6 @@ Note, there are separate config items for max cryptodevs CONFIG_RTE_CRYPTO_MAX_D
 and max compressdevs CONFIG_RTE_COMPRESS_MAX_DEVS, if necessary these should be
 adjusted to handle the total of QAT and other devices which the process will use.
 
-QAT allocates internal structures to handle SGLs. For the compression service
-CONFIG_RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS can be changed if more segments are needed.
-An extra (max_inflight_ops x 16) bytes per queue_pair will be used for every increment.
-
 QAT compression PMD needs intermediate buffers to support Deflate compression
 with Dynamic Huffman encoding. CONFIG_RTE_PMD_QAT_COMP_IM_BUFFER_SIZE
 specifies the size of a single buffer, the PMD will allocate a multiple of these,
diff --git a/drivers/compress/qat/qat_comp.c b/drivers/compress/qat/qat_comp.c
index 32ca753..0aba5ba 100644
--- a/drivers/compress/qat/qat_comp.c
+++ b/drivers/compress/qat/qat_comp.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018-2019 Intel Corporation
  */
 
 #include <rte_mempool.h>
@@ -55,22 +55,70 @@ qat_comp_build_request(void *in_op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(comp_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 
+		if (unlikely(op->m_src->nb_segs > cookie->src_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			void *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_src_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_src->nb_segs, 64,
+					  cookie->socket_id);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_src->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -ENOMEM;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_src_d = (struct qat_sgl *)tmp;
+			cookie->src_nb_elems = op->m_src->nb_segs;
+			cookie->qat_sgl_src_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_src,
 				op->src.offset,
-				&cookie->qat_sgl_src,
+				cookie->qat_sgl_src_d,
 				op->src.length,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->src_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill source sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
 			return ret;
 		}
 
+		if (unlikely(op->m_dst->nb_segs > cookie->dst_nb_elems)) {
+			/* we need to allocate more elements in SGL*/
+			struct qat_sgl *tmp;
+
+			tmp = rte_realloc_socket(cookie->qat_sgl_dst_d,
+					  sizeof(struct qat_sgl) +
+					  sizeof(struct qat_flat_buf) *
+					  op->m_dst->nb_segs, 64,
+					  cookie->socket_id);
+
+			if (unlikely(tmp == NULL)) {
+				QAT_DP_LOG(ERR, "QAT PMD can't allocate memory"
+					   " for %d elements of SGL",
+					   op->m_dst->nb_segs);
+				op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
+				return -EINVAL;
+			}
+			/* new SGL is valid now */
+			cookie->qat_sgl_dst_d = (struct qat_sgl *)tmp;
+			cookie->dst_nb_elems = op->m_dst->nb_segs;
+			cookie->qat_sgl_dst_phys_addr =
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+		}
+
 		ret = qat_sgl_fill_array(op->m_dst,
 				op->dst.offset,
-				&cookie->qat_sgl_dst,
+				cookie->qat_sgl_dst_d,
 				comp_req->comp_pars.out_buffer_sz,
-				RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS);
+				(const uint16_t)cookie->dst_nb_elems);
 		if (ret) {
 			QAT_DP_LOG(ERR, "QAT PMD Cannot fill dest. sgl array");
 			op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
diff --git a/drivers/compress/qat/qat_comp.h b/drivers/compress/qat/qat_comp.h
index 19f48df..413898e 100644
--- a/drivers/compress/qat/qat_comp.h
+++ b/drivers/compress/qat/qat_comp.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
 #ifndef _QAT_COMP_H_
@@ -37,16 +37,16 @@ struct qat_inter_sgl {
 	struct qat_flat_buf buffers[QAT_NUM_BUFS_IN_IM_SGL];
 } __rte_packed __rte_cache_aligned;
 
-struct qat_comp_sgl {
-	qat_sgl_hdr;
-	struct qat_flat_buf buffers[RTE_PMD_QAT_COMP_SGL_MAX_SEGMENTS];
-} __rte_packed __rte_cache_aligned;
 
 struct qat_comp_op_cookie {
-	struct qat_comp_sgl qat_sgl_src;
-	struct qat_comp_sgl qat_sgl_dst;
 	phys_addr_t qat_sgl_src_phys_addr;
 	phys_addr_t qat_sgl_dst_phys_addr;
+	/* dynamically created SGLs */
+	uint8_t socket_id;
+	uint16_t src_nb_elems;
+	uint16_t dst_nb_elems;
+	struct qat_sgl *qat_sgl_src_d;
+	struct qat_sgl *qat_sgl_dst_d;
 };
 
 struct qat_comp_xform {
diff --git a/drivers/compress/qat/qat_comp_pmd.c b/drivers/compress/qat/qat_comp_pmd.c
index 27c8856..6235702 100644
--- a/drivers/compress/qat/qat_comp_pmd.c
+++ b/drivers/compress/qat/qat_comp_pmd.c
@@ -1,10 +1,14 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2015-2018 Intel Corporation
+ * Copyright(c) 2015-2019 Intel Corporation
  */
 
+#include <rte_malloc.h>
+
 #include "qat_comp.h"
 #include "qat_comp_pmd.h"
 
+#define QAT_PMD_COMP_SGL_DEF_SEGMENTS 16
+
 static const struct rte_compressdev_capabilities qat_comp_gen_capabilities[] = {
 	{/* COMPRESSION - deflate */
 	 .algo = RTE_COMP_ALGO_DEFLATE,
@@ -60,6 +64,10 @@ static int
 qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 {
 	struct qat_comp_dev_private *qat_private = dev->data->dev_private;
+	struct qat_qp **qp_addr =
+		(struct qat_qp **)&(dev->data->queue_pairs[queue_pair_id]);
+	struct qat_qp *qp = (struct qat_qp *)*qp_addr;
+	uint32_t i;
 
 	QAT_LOG(DEBUG, "Release comp qp %u on device %d",
 				queue_pair_id, dev->data->dev_id);
@@ -67,6 +75,14 @@ qat_comp_qp_release(struct rte_compressdev *dev, uint16_t queue_pair_id)
 	qat_private->qat_dev->qps_in_use[QAT_SERVICE_COMPRESSION][queue_pair_id]
 						= NULL;
 
+	for (i = 0; i < qp->nb_descriptors; i++) {
+
+		struct qat_comp_op_cookie *cookie = qp->op_cookies[i];
+
+		rte_free(cookie->qat_sgl_src_d);
+		rte_free(cookie->qat_sgl_dst_d);
+	}
+
 	return qat_qp_release((struct qat_qp **)
 			&(dev->data->queue_pairs[queue_pair_id]));
 }
@@ -122,15 +138,36 @@ qat_comp_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		struct qat_comp_op_cookie *cookie =
 				qp->op_cookies[i];
 
+		cookie->qat_sgl_src_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, dev->data->socket_id);
+
+		cookie->qat_sgl_dst_d = rte_zmalloc_socket(NULL,
+					sizeof(struct qat_sgl) +
+					sizeof(struct qat_flat_buf) *
+					QAT_PMD_COMP_SGL_DEF_SEGMENTS,
+					64, dev->data->socket_id);
+
+		if (cookie->qat_sgl_src_d == NULL ||
+				cookie->qat_sgl_dst_d == NULL) {
+			QAT_LOG(ERR, "Can't allocate SGL"
+				     " for device %s",
+				     qat_private->qat_dev->name);
+			return -ENOMEM;
+		}
+
 		cookie->qat_sgl_src_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_src);
+				rte_malloc_virt2iova(cookie->qat_sgl_src_d);
 
 		cookie->qat_sgl_dst_phys_addr =
-				rte_mempool_virt2iova(cookie) +
-				offsetof(struct qat_comp_op_cookie,
-				qat_sgl_dst);
+				rte_malloc_virt2iova(cookie->qat_sgl_dst_d);
+
+		cookie->dst_nb_elems = cookie->src_nb_elems =
+				QAT_PMD_COMP_SGL_DEF_SEGMENTS;
+
+		cookie->socket_id = dev->data->socket_id;
 	}
 
 	return ret;
-- 
2.7.4

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH] test/compress: add max mbuf size test case
  2019-02-15  9:44 ` [PATCH] test/compress: add max mbuf size test case Tomasz Jozwiak
@ 2019-03-27 14:02   ` Akhil Goyal
  2019-04-02 12:16   ` [PATCH v2 0/1] " Tomasz Cel
  1 sibling, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2019-03-27 14:02 UTC (permalink / raw)
  To: Tomasz Jozwiak, dev, fiona.trahe

Hi Fiona,

Could you please review this patch?

Thanks,
Akhil


On 2/15/2019 3:14 PM, Tomasz Jozwiak wrote:
> This patch adds new test case in which max. size of
> chain mbufs has been used to compress random data dynamically.
>
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> ---
>   test/test/test_compressdev.c | 157 +++++++++++++++++++++++++++++++++++++------
>   1 file changed, 136 insertions(+), 21 deletions(-)
>
>


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
  2019-03-26 13:51       ` [PATCH v5 1/1] compress/qat: " Tomasz Jozwiak
@ 2019-03-28 14:37         ` Trahe, Fiona
  2019-03-29 14:40           ` Akhil Goyal
  0 siblings, 1 reply; 32+ messages in thread
From: Trahe, Fiona @ 2019-03-28 14:37 UTC (permalink / raw)
  To: Jozwiak, TomaszX, dev, akhil.goyal; +Cc: Trahe, Fiona



> -----Original Message-----
> From: Jozwiak, TomaszX
> Sent: Tuesday, March 26, 2019 1:51 PM
> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
> <tomaszx.jozwiak@intel.com>
> Subject: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
> 
> This patch adds dynamic SGL allocation instead of static one.
> The number of element in SGL can be adjusted in each operation
> depend of the request.
> 
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
  2019-03-28 14:37         ` Trahe, Fiona
@ 2019-03-29 14:40           ` Akhil Goyal
  2019-04-03  8:39             ` Akhil Goyal
  0 siblings, 1 reply; 32+ messages in thread
From: Akhil Goyal @ 2019-03-29 14:40 UTC (permalink / raw)
  To: Trahe, Fiona, Jozwiak, TomaszX, dev



On 3/28/2019 8:07 PM, Trahe, Fiona wrote:
>
>> -----Original Message-----
>> From: Jozwiak, TomaszX
>> Sent: Tuesday, March 26, 2019 1:51 PM
>> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
>> <tomaszx.jozwiak@intel.com>
>> Subject: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
>>
>> This patch adds dynamic SGL allocation instead of a static one.
>> The number of elements in the SGL can be adjusted in each operation,
>> depending on the request.
>>
>> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 0/1] add max mbuf size test case
  2019-02-15  9:44 ` [PATCH] test/compress: add max mbuf size test case Tomasz Jozwiak
  2019-03-27 14:02   ` Akhil Goyal
@ 2019-04-02 12:16   ` Tomasz Cel
  2019-04-02 12:16     ` [PATCH v2 1/1] test/compress: " Tomasz Cel
  2019-04-16 14:53     ` [dpdk-dev] [PATCH v2 0/1] " Akhil Goyal
  1 sibling, 2 replies; 32+ messages in thread
From: Tomasz Cel @ 2019-04-02 12:16 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak, tomaszx.cel

From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>

This patch adds a new test case in which the maximum size of
chained mbufs is used to compress random data dynamically.

V2 changes:

  Adjusted the changes to the new test_compressdev.c file location
  in the app/test/ folder.

Tomasz Jozwiak (1):
  test/compress: add max mbuf size test case

 app/test/test_compressdev.c | 158 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 136 insertions(+), 22 deletions(-)

-- 
2.7.4

^ permalink raw reply	[flat|nested] 32+ messages in thread

* [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-02 12:16   ` [PATCH v2 0/1] " Tomasz Cel
@ 2019-04-02 12:16     ` Tomasz Cel
  2019-04-02 12:22       ` Cel, TomaszX
  2019-04-16 14:53     ` [dpdk-dev] [PATCH v2 0/1] " Akhil Goyal
  1 sibling, 1 reply; 32+ messages in thread
From: Tomasz Cel @ 2019-04-02 12:16 UTC (permalink / raw)
  To: dev, fiona.trahe, tomaszx.jozwiak, tomaszx.cel

From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>

This patch adds a new test case in which the maximum size of
chained mbufs is used to compress random data dynamically.

Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
---
 app/test/test_compressdev.c | 158 ++++++++++++++++++++++++++++++++++++++------
 1 file changed, 136 insertions(+), 22 deletions(-)

diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c
index 13cf26c..f59b3d2 100644
--- a/app/test/test_compressdev.c
+++ b/app/test/test_compressdev.c
@@ -1,10 +1,10 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2018 Intel Corporation
+ * Copyright(c) 2018 - 2019 Intel Corporation
  */
 #include <string.h>
 #include <zlib.h>
 #include <math.h>
-#include <unistd.h>
+#include <stdlib.h>
 
 #include <rte_cycles.h>
 #include <rte_malloc.h>
@@ -45,6 +45,11 @@
 
 #define OUT_OF_SPACE_BUF 1
 
+#define MAX_MBUF_SEGMENT_SIZE 65535
+#define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE - RTE_PKTMBUF_HEADROOM)
+#define NUM_BIG_MBUFS 4
+#define BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)
+
 const char *
 huffman_type_strings[] = {
 	[RTE_COMP_HUFFMAN_DEFAULT]	= "PMD default",
@@ -73,6 +78,7 @@ struct priv_op_data {
 struct comp_testsuite_params {
 	struct rte_mempool *large_mbuf_pool;
 	struct rte_mempool *small_mbuf_pool;
+	struct rte_mempool *big_mbuf_pool;
 	struct rte_mempool *op_pool;
 	struct rte_comp_xform *def_comp_xform;
 	struct rte_comp_xform *def_decomp_xform;
@@ -92,6 +98,7 @@ struct test_data_params {
 	enum varied_buff buff_type;
 	enum zlib_direction zlib_dir;
 	unsigned int out_of_space;
+	unsigned int big_data;
 };
 
 static struct comp_testsuite_params testsuite_params = { 0 };
@@ -105,11 +112,14 @@ testsuite_teardown(void)
 		RTE_LOG(ERR, USER1, "Large mbuf pool still has unfreed bufs\n");
 	if (rte_mempool_in_use_count(ts_params->small_mbuf_pool))
 		RTE_LOG(ERR, USER1, "Small mbuf pool still has unfreed bufs\n");
+	if (rte_mempool_in_use_count(ts_params->big_mbuf_pool))
+		RTE_LOG(ERR, USER1, "Big mbuf pool still has unfreed bufs\n");
 	if (rte_mempool_in_use_count(ts_params->op_pool))
 		RTE_LOG(ERR, USER1, "op pool still has unfreed ops\n");
 
 	rte_mempool_free(ts_params->large_mbuf_pool);
 	rte_mempool_free(ts_params->small_mbuf_pool);
+	rte_mempool_free(ts_params->big_mbuf_pool);
 	rte_mempool_free(ts_params->op_pool);
 	rte_free(ts_params->def_comp_xform);
 	rte_free(ts_params->def_decomp_xform);
@@ -162,6 +172,17 @@ testsuite_setup(void)
 		goto exit;
 	}
 
+	/* Create mempool with big buffers for SGL testing */
+	ts_params->big_mbuf_pool = rte_pktmbuf_pool_create("big_mbuf_pool",
+			NUM_BIG_MBUFS + 1,
+			CACHE_SIZE, 0,
+			MAX_MBUF_SEGMENT_SIZE,
+			rte_socket_id());
+	if (ts_params->big_mbuf_pool == NULL) {
+		RTE_LOG(ERR, USER1, "Big mbuf pool could not be created\n");
+		goto exit;
+	}
+
 	ts_params->op_pool = rte_comp_op_pool_create("op_pool", NUM_OPS,
 				0, sizeof(struct priv_op_data),
 				rte_socket_id());
@@ -598,10 +619,11 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 		uint32_t total_data_size,
 		struct rte_mempool *small_mbuf_pool,
 		struct rte_mempool *large_mbuf_pool,
-		uint8_t limit_segs_in_sgl)
+		uint8_t limit_segs_in_sgl,
+		uint16_t seg_size)
 {
 	uint32_t remaining_data = total_data_size;
-	uint16_t num_remaining_segs = DIV_CEIL(remaining_data, SMALL_SEG_SIZE);
+	uint16_t num_remaining_segs = DIV_CEIL(remaining_data, seg_size);
 	struct rte_mempool *pool;
 	struct rte_mbuf *next_seg;
 	uint32_t data_size;
@@ -617,10 +639,10 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 	 * Allocate data in the first segment (header) and
 	 * copy data if test buffer is provided
 	 */
-	if (remaining_data < SMALL_SEG_SIZE)
+	if (remaining_data < seg_size)
 		data_size = remaining_data;
 	else
-		data_size = SMALL_SEG_SIZE;
+		data_size = seg_size;
 	buf_ptr = rte_pktmbuf_append(head_buf, data_size);
 	if (buf_ptr == NULL) {
 		RTE_LOG(ERR, USER1,
@@ -644,13 +666,13 @@ prepare_sgl_bufs(const char *test_buf, struct rte_mbuf *head_buf,
 
 		if (i == (num_remaining_segs - 1)) {
 			/* last segment */
-			if (remaining_data > SMALL_SEG_SIZE)
+			if (remaining_data > seg_size)
 				pool = large_mbuf_pool;
 			else
 				pool = small_mbuf_pool;
 			data_size = remaining_data;
 		} else {
-			data_size = SMALL_SEG_SIZE;
+			data_size = seg_size;
 			pool = small_mbuf_pool;
 		}
 
@@ -704,6 +726,7 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 	enum rte_comp_op_type state = test_data->state;
 	unsigned int buff_type = test_data->buff_type;
 	unsigned int out_of_space = test_data->out_of_space;
+	unsigned int big_data = test_data->big_data;
 	enum zlib_direction zlib_dir = test_data->zlib_dir;
 	int ret_status = -1;
 	int ret;
@@ -738,7 +761,9 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 	memset(ops_processed, 0, sizeof(struct rte_comp_op *) * num_bufs);
 	memset(priv_xforms, 0, sizeof(void *) * num_bufs);
 
-	if (buff_type == SGL_BOTH)
+	if (big_data)
+		buf_pool = ts_params->big_mbuf_pool;
+	else if (buff_type == SGL_BOTH)
 		buf_pool = ts_params->small_mbuf_pool;
 	else
 		buf_pool = ts_params->large_mbuf_pool;
@@ -757,10 +782,11 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 		for (i = 0; i < num_bufs; i++) {
 			data_size = strlen(test_bufs[i]) + 1;
 			if (prepare_sgl_bufs(test_bufs[i], uncomp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			    data_size,
+			    big_data ? buf_pool : ts_params->small_mbuf_pool,
+			    big_data ? buf_pool : ts_params->large_mbuf_pool,
+			    big_data ? 0 : MAX_SEGS,
+			    big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE) < 0)
 				goto exit;
 		}
 	} else {
@@ -789,10 +815,12 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 					COMPRESS_BUF_SIZE_RATIO);
 
 			if (prepare_sgl_bufs(NULL, comp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			      data_size,
+			      big_data ? buf_pool : ts_params->small_mbuf_pool,
+			      big_data ? buf_pool : ts_params->large_mbuf_pool,
+			      big_data ? 0 : MAX_SEGS,
+			      big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE)
+					< 0)
 				goto exit;
 		}
 
@@ -1017,10 +1045,12 @@ test_deflate_comp_decomp(const struct interim_data_params *int_data,
 				strlen(test_bufs[priv_data->orig_idx]) + 1;
 
 			if (prepare_sgl_bufs(NULL, uncomp_bufs[i],
-					data_size,
-					ts_params->small_mbuf_pool,
-					ts_params->large_mbuf_pool,
-					MAX_SEGS) < 0)
+			       data_size,
+			       big_data ? buf_pool : ts_params->small_mbuf_pool,
+			       big_data ? buf_pool : ts_params->large_mbuf_pool,
+			       big_data ? 0 : MAX_SEGS,
+			       big_data ? MAX_DATA_MBUF_SIZE : SMALL_SEG_SIZE)
+					< 0)
 				goto exit;
 		}
 
@@ -1320,6 +1350,7 @@ test_compressdev_deflate_stateless_fixed(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1390,6 +1421,7 @@ test_compressdev_deflate_stateless_dynamic(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1443,6 +1475,7 @@ test_compressdev_deflate_stateless_multi_op(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1492,6 +1525,7 @@ test_compressdev_deflate_stateless_multi_level(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1581,6 +1615,7 @@ test_compressdev_deflate_stateless_multi_xform(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1626,6 +1661,7 @@ test_compressdev_deflate_stateless_sgl(void)
 		RTE_COMP_OP_STATELESS,
 		SGL_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1733,6 +1769,7 @@ test_compressdev_deflate_stateless_checksum(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
+		0,
 		0
 	};
 
@@ -1863,7 +1900,8 @@ test_compressdev_out_of_space_buffer(void)
 		RTE_COMP_OP_STATELESS,
 		LB_BOTH,
 		ZLIB_DECOMPRESS,
-		1
+		1,
+		0
 	};
 	/* Compress with compressdev, decompress with Zlib */
 	test_data.zlib_dir = ZLIB_DECOMPRESS;
@@ -1904,6 +1942,80 @@ test_compressdev_out_of_space_buffer(void)
 	return ret;
 }
 
+static int
+test_compressdev_deflate_stateless_dynamic_big(void)
+{
+	struct comp_testsuite_params *ts_params = &testsuite_params;
+	uint16_t i = 0;
+	int ret = TEST_SUCCESS;
+	const struct rte_compressdev_capabilities *capab;
+	char *test_buffer = NULL;
+
+	capab = rte_compressdev_capability_get(0, RTE_COMP_ALGO_DEFLATE);
+	TEST_ASSERT(capab != NULL, "Failed to retrieve device capabilities");
+
+	if ((capab->comp_feature_flags & RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
+		return -ENOTSUP;
+
+	if ((capab->comp_feature_flags & RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
+		return -ENOTSUP;
+
+	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
+	if (test_buffer == NULL) {
+		RTE_LOG(ERR, USER1,
+			"Can't allocate buffer for big-data\n");
+		return TEST_FAILED;
+	}
+
+	struct interim_data_params int_data = {
+		(const char * const *)&test_buffer,
+		1,
+		NULL,
+		&ts_params->def_comp_xform,
+		&ts_params->def_decomp_xform,
+		1
+	};
+
+	struct test_data_params test_data = {
+		RTE_COMP_OP_STATELESS,
+		SGL_BOTH,
+		ZLIB_DECOMPRESS,
+		0,
+		1
+	};
+
+	ts_params->def_comp_xform->compress.deflate.huffman =
+						RTE_COMP_HUFFMAN_DYNAMIC;
+
+	/* fill the buffer with data based on rand. data */
+	srand(BIG_DATA_TEST_SIZE);
+	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
+		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
+
+	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
+	int_data.buf_idx = &i;
+
+	/* Compress with compressdev, decompress with Zlib */
+	test_data.zlib_dir = ZLIB_DECOMPRESS;
+	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
+		ret = TEST_FAILED;
+		goto end;
+	}
+
+	/* Compress with Zlib, decompress with compressdev */
+	test_data.zlib_dir = ZLIB_COMPRESS;
+	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
+		ret = TEST_FAILED;
+		goto end;
+	}
+
+end:
+	ts_params->def_comp_xform->compress.deflate.huffman =
+						RTE_COMP_HUFFMAN_DEFAULT;
+	rte_free(test_buffer);
+	return ret;
+}
+
 
 static struct unit_test_suite compressdev_testsuite  = {
 	.suite_name = "compressdev unit test suite",
@@ -1917,6 +2029,8 @@ static struct unit_test_suite compressdev_testsuite  = {
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_dynamic),
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
+			test_compressdev_deflate_stateless_dynamic_big),
+		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_multi_op),
 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
 			test_compressdev_deflate_stateless_multi_level),
-- 
2.7.4
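
For orientation, the size macros added above work out as follows, assuming
the default RTE_PKTMBUF_HEADROOM of 128 bytes (a build-time setting, so the
exact figures may differ):

	MAX_DATA_MBUF_SIZE = 65535 - 128 = 65407 bytes per segment
	BIG_DATA_TEST_SIZE = 65407 * 4 / 2 = 130814 bytes

so the uncompressed test buffer spans two maximum-sized mbuf segments, with
the rest of the (NUM_BIG_MBUFS + 1)-mbuf pool available for the compressed
output chain.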

^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-02 12:16     ` [PATCH v2 1/1] test/compress: " Tomasz Cel
@ 2019-04-02 12:22       ` Cel, TomaszX
  2019-04-18 22:42         ` [dpdk-dev] " Yongseok Koh
  0 siblings, 1 reply; 32+ messages in thread
From: Cel, TomaszX @ 2019-04-02 12:22 UTC (permalink / raw)
  To: dev, Trahe, Fiona, Jozwiak, TomaszX

Hi Tomasz,

> -----Original Message-----
> From: Cel, TomaszX
> Sent: Tuesday, April 2, 2019 1:17 PM
> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
> <tomaszx.jozwiak@intel.com>; Cel, TomaszX <tomaszx.cel@intel.com>
> Subject: [PATCH v2 1/1] test/compress: add max mbuf size test case
> 
> From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> 
> This patch adds a new test case in which the maximum size of chained
> mbufs is used to compress random data dynamically.
> 
> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> ---
>  app/test/test_compressdev.c | 158
> ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 136 insertions(+), 22 deletions(-)
> 
> diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c
> index 13cf26c..f59b3d2 100644
> --- a/app/test/test_compressdev.c
> +++ b/app/test/test_compressdev.c
> @@ -1,10 +1,10 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2018 Intel Corporation
> + * Copyright(c) 2018 - 2019 Intel Corporation
>   */
>  #include <string.h>
>  #include <zlib.h>
>  #include <math.h>
> -#include <unistd.h>
> +#include <stdlib.h>
> 
>  #include <rte_cycles.h>
>  #include <rte_malloc.h>
> @@ -45,6 +45,11 @@
> 
>  #define OUT_OF_SPACE_BUF 1
> 
> +#define MAX_MBUF_SEGMENT_SIZE 65535
> +#define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE -
> +RTE_PKTMBUF_HEADROOM) #define NUM_BIG_MBUFS 4 #define
> +BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)
> +
>  const char *
>  huffman_type_strings[] = {
>  	[RTE_COMP_HUFFMAN_DEFAULT]	= "PMD default",
> @@ -73,6 +78,7 @@ struct priv_op_data {
>  struct comp_testsuite_params {
>  	struct rte_mempool *large_mbuf_pool;
>  	struct rte_mempool *small_mbuf_pool;
> +	struct rte_mempool *big_mbuf_pool;
>  	struct rte_mempool *op_pool;
>  	struct rte_comp_xform *def_comp_xform;
>  	struct rte_comp_xform *def_decomp_xform; @@ -92,6 +98,7 @@
> struct test_data_params {
>  	enum varied_buff buff_type;
>  	enum zlib_direction zlib_dir;
>  	unsigned int out_of_space;
> +	unsigned int big_data;
>  };
> 
>  static struct comp_testsuite_params testsuite_params = { 0 }; @@ -105,11
> +112,14 @@ testsuite_teardown(void)
>  		RTE_LOG(ERR, USER1, "Large mbuf pool still has unfreed
> bufs\n");
>  	if (rte_mempool_in_use_count(ts_params->small_mbuf_pool))
>  		RTE_LOG(ERR, USER1, "Small mbuf pool still has unfreed
> bufs\n");
> +	if (rte_mempool_in_use_count(ts_params->big_mbuf_pool))
> +		RTE_LOG(ERR, USER1, "Big mbuf pool still has unfreed
> bufs\n");
>  	if (rte_mempool_in_use_count(ts_params->op_pool))
>  		RTE_LOG(ERR, USER1, "op pool still has unfreed ops\n");
> 
>  	rte_mempool_free(ts_params->large_mbuf_pool);
>  	rte_mempool_free(ts_params->small_mbuf_pool);
> +	rte_mempool_free(ts_params->big_mbuf_pool);
>  	rte_mempool_free(ts_params->op_pool);
>  	rte_free(ts_params->def_comp_xform);
>  	rte_free(ts_params->def_decomp_xform);
> @@ -162,6 +172,17 @@ testsuite_setup(void)
>  		goto exit;
>  	}
> 
> +	/* Create mempool with big buffers for SGL testing */
> +	ts_params->big_mbuf_pool =
> rte_pktmbuf_pool_create("big_mbuf_pool",
> +			NUM_BIG_MBUFS + 1,
> +			CACHE_SIZE, 0,
> +			MAX_MBUF_SEGMENT_SIZE,
> +			rte_socket_id());
> +	if (ts_params->big_mbuf_pool == NULL) {
> +		RTE_LOG(ERR, USER1, "Big mbuf pool could not be
> created\n");
> +		goto exit;
> +	}
> +
>  	ts_params->op_pool = rte_comp_op_pool_create("op_pool",
> NUM_OPS,
>  				0, sizeof(struct priv_op_data),
>  				rte_socket_id());
> @@ -598,10 +619,11 @@ prepare_sgl_bufs(const char *test_buf, struct
> rte_mbuf *head_buf,
>  		uint32_t total_data_size,
>  		struct rte_mempool *small_mbuf_pool,
>  		struct rte_mempool *large_mbuf_pool,
> -		uint8_t limit_segs_in_sgl)
> +		uint8_t limit_segs_in_sgl,
> +		uint16_t seg_size)
>  {
>  	uint32_t remaining_data = total_data_size;
> -	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
> SMALL_SEG_SIZE);
> +	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
> seg_size);
>  	struct rte_mempool *pool;
>  	struct rte_mbuf *next_seg;
>  	uint32_t data_size;
> @@ -617,10 +639,10 @@ prepare_sgl_bufs(const char *test_buf, struct
> rte_mbuf *head_buf,
>  	 * Allocate data in the first segment (header) and
>  	 * copy data if test buffer is provided
>  	 */
> -	if (remaining_data < SMALL_SEG_SIZE)
> +	if (remaining_data < seg_size)
>  		data_size = remaining_data;
>  	else
> -		data_size = SMALL_SEG_SIZE;
> +		data_size = seg_size;
>  	buf_ptr = rte_pktmbuf_append(head_buf, data_size);
>  	if (buf_ptr == NULL) {
>  		RTE_LOG(ERR, USER1,
> @@ -644,13 +666,13 @@ prepare_sgl_bufs(const char *test_buf, struct
> rte_mbuf *head_buf,
> 
>  		if (i == (num_remaining_segs - 1)) {
>  			/* last segment */
> -			if (remaining_data > SMALL_SEG_SIZE)
> +			if (remaining_data > seg_size)
>  				pool = large_mbuf_pool;
>  			else
>  				pool = small_mbuf_pool;
>  			data_size = remaining_data;
>  		} else {
> -			data_size = SMALL_SEG_SIZE;
> +			data_size = seg_size;
>  			pool = small_mbuf_pool;
>  		}
> 
> @@ -704,6 +726,7 @@ test_deflate_comp_decomp(const struct
> interim_data_params *int_data,
>  	enum rte_comp_op_type state = test_data->state;
>  	unsigned int buff_type = test_data->buff_type;
>  	unsigned int out_of_space = test_data->out_of_space;
> +	unsigned int big_data = test_data->big_data;
>  	enum zlib_direction zlib_dir = test_data->zlib_dir;
>  	int ret_status = -1;
>  	int ret;
> @@ -738,7 +761,9 @@ test_deflate_comp_decomp(const struct
> interim_data_params *int_data,
>  	memset(ops_processed, 0, sizeof(struct rte_comp_op *) *
> num_bufs);
>  	memset(priv_xforms, 0, sizeof(void *) * num_bufs);
> 
> -	if (buff_type == SGL_BOTH)
> +	if (big_data)
> +		buf_pool = ts_params->big_mbuf_pool;
> +	else if (buff_type == SGL_BOTH)
>  		buf_pool = ts_params->small_mbuf_pool;
>  	else
>  		buf_pool = ts_params->large_mbuf_pool; @@ -757,10
> +782,11 @@ test_deflate_comp_decomp(const struct interim_data_params
> *int_data,
>  		for (i = 0; i < num_bufs; i++) {
>  			data_size = strlen(test_bufs[i]) + 1;
>  			if (prepare_sgl_bufs(test_bufs[i], uncomp_bufs[i],
> -					data_size,
> -					ts_params->small_mbuf_pool,
> -					ts_params->large_mbuf_pool,
> -					MAX_SEGS) < 0)
> +			    data_size,
> +			    big_data ? buf_pool : ts_params-
> >small_mbuf_pool,
> +			    big_data ? buf_pool : ts_params-
> >large_mbuf_pool,
> +			    big_data ? 0 : MAX_SEGS,
> +			    big_data ? MAX_DATA_MBUF_SIZE :
> SMALL_SEG_SIZE) < 0)
>  				goto exit;
>  		}
>  	} else {
> @@ -789,10 +815,12 @@ test_deflate_comp_decomp(const struct
> interim_data_params *int_data,
>  					COMPRESS_BUF_SIZE_RATIO);
> 
>  			if (prepare_sgl_bufs(NULL, comp_bufs[i],
> -					data_size,
> -					ts_params->small_mbuf_pool,
> -					ts_params->large_mbuf_pool,
> -					MAX_SEGS) < 0)
> +			      data_size,
> +			      big_data ? buf_pool : ts_params-
> >small_mbuf_pool,
> +			      big_data ? buf_pool : ts_params-
> >large_mbuf_pool,
> +			      big_data ? 0 : MAX_SEGS,
> +			      big_data ? MAX_DATA_MBUF_SIZE :
> SMALL_SEG_SIZE)
> +					< 0)
>  				goto exit;
>  		}
> 
> @@ -1017,10 +1045,12 @@ test_deflate_comp_decomp(const struct
> interim_data_params *int_data,
>  				strlen(test_bufs[priv_data->orig_idx]) + 1;
> 
>  			if (prepare_sgl_bufs(NULL, uncomp_bufs[i],
> -					data_size,
> -					ts_params->small_mbuf_pool,
> -					ts_params->large_mbuf_pool,
> -					MAX_SEGS) < 0)
> +			       data_size,
> +			       big_data ? buf_pool : ts_params-
> >small_mbuf_pool,
> +			       big_data ? buf_pool : ts_params-
> >large_mbuf_pool,
> +			       big_data ? 0 : MAX_SEGS,
> +			       big_data ? MAX_DATA_MBUF_SIZE :
> SMALL_SEG_SIZE)
> +					< 0)
>  				goto exit;
>  		}
> 
> @@ -1320,6 +1350,7 @@ test_compressdev_deflate_stateless_fixed(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1390,6 +1421,7 @@
> test_compressdev_deflate_stateless_dynamic(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1443,6 +1475,7 @@
> test_compressdev_deflate_stateless_multi_op(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1492,6 +1525,7 @@
> test_compressdev_deflate_stateless_multi_level(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1581,6 +1615,7 @@
> test_compressdev_deflate_stateless_multi_xform(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1626,6 +1661,7 @@ test_compressdev_deflate_stateless_sgl(void)
>  		RTE_COMP_OP_STATELESS,
>  		SGL_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1733,6 +1769,7 @@
> test_compressdev_deflate_stateless_checksum(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> +		0,
>  		0
>  	};
> 
> @@ -1863,7 +1900,8 @@ test_compressdev_out_of_space_buffer(void)
>  		RTE_COMP_OP_STATELESS,
>  		LB_BOTH,
>  		ZLIB_DECOMPRESS,
> -		1
> +		1,
> +		0
>  	};
>  	/* Compress with compressdev, decompress with Zlib */
>  	test_data.zlib_dir = ZLIB_DECOMPRESS;
> @@ -1904,6 +1942,80 @@ test_compressdev_out_of_space_buffer(void)
>  	return ret;
>  }
> 
> +static int
> +test_compressdev_deflate_stateless_dynamic_big(void)
> +{
> +	struct comp_testsuite_params *ts_params = &testsuite_params;
> +	uint16_t i = 0;
> +	int ret = TEST_SUCCESS;
> +	const struct rte_compressdev_capabilities *capab;
> +	char *test_buffer = NULL;
> +
> +	capab = rte_compressdev_capability_get(0,
> RTE_COMP_ALGO_DEFLATE);
> +	TEST_ASSERT(capab != NULL, "Failed to retrieve device capabilities");
> +
> +	if ((capab->comp_feature_flags &
> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> +		return -ENOTSUP;
> +
> +	if ((capab->comp_feature_flags &
> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> +		return -ENOTSUP;
> +
> +	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
> +	if (test_buffer == NULL) {
> +		RTE_LOG(ERR, USER1,
> +			"Can't allocate buffer for big-data\n");
> +		return TEST_FAILED;
> +	}
> +
> +	struct interim_data_params int_data = {
> +		(const char * const *)&test_buffer,
> +		1,
> +		NULL,
> +		&ts_params->def_comp_xform,
> +		&ts_params->def_decomp_xform,
> +		1
> +	};
> +
> +	struct test_data_params test_data = {
> +		RTE_COMP_OP_STATELESS,
> +		SGL_BOTH,
> +		ZLIB_DECOMPRESS,
> +		0,
> +		1
> +	};
> +
> +	ts_params->def_comp_xform->compress.deflate.huffman =
> +
> 	RTE_COMP_HUFFMAN_DYNAMIC;
> +
> +	/* fill the buffer with data based on rand. data */
> +	srand(BIG_DATA_TEST_SIZE);
> +	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> +		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
> +
> +	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
> +	int_data.buf_idx = &i;
> +
> +	/* Compress with compressdev, decompress with Zlib */
> +	test_data.zlib_dir = ZLIB_DECOMPRESS;
> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
> +		ret = TEST_FAILED;
> +		goto end;
> +	}
> +
> +	/* Compress with Zlib, decompress with compressdev */
> +	test_data.zlib_dir = ZLIB_COMPRESS;
> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
> +		ret = TEST_FAILED;
> +		goto end;
> +	}
> +
> +end:
> +	ts_params->def_comp_xform->compress.deflate.huffman =
> +
> 	RTE_COMP_HUFFMAN_DEFAULT;
> +	rte_free(test_buffer);
> +	return ret;
> +}
> +
> 
>  static struct unit_test_suite compressdev_testsuite  = {
>  	.suite_name = "compressdev unit test suite", @@ -1917,6 +2029,8
> @@ static struct unit_test_suite compressdev_testsuite  = {
>  		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>  			test_compressdev_deflate_stateless_dynamic),
>  		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
> +			test_compressdev_deflate_stateless_dynamic_big),
> +		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>  			test_compressdev_deflate_stateless_multi_op),
>  		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>  			test_compressdev_deflate_stateless_multi_level),
> --
> 2.7.4

Acked-by: Tomasz Cel <tomaszx.cel@intel.com>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
  2019-03-29 14:40           ` Akhil Goyal
@ 2019-04-03  8:39             ` Akhil Goyal
  0 siblings, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2019-04-03  8:39 UTC (permalink / raw)
  To: Trahe, Fiona, Jozwiak, TomaszX, dev

Hi Tomasz,

On 3/29/2019 8:10 PM, Akhil Goyal wrote:
>
> On 3/28/2019 8:07 PM, Trahe, Fiona wrote:
>>> -----Original Message-----
>>> From: Jozwiak, TomaszX
>>> Sent: Tuesday, March 26, 2019 1:51 PM
>>> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
>>> <tomaszx.jozwiak@intel.com>
>>> Subject: [PATCH v5 1/1] compress/qat: add dynamic sgl allocation
>>>
>>> This patch adds dynamic SGL allocation instead of a static one.
>>> The number of elements in the SGL can be adjusted in each operation,
>>> depending on the request.
>>>
>>> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
>> Acked-by: Fiona Trahe <fiona.trahe@intel.com>
> Applied to dpdk-next-crypto
>
> Thanks.

two errors on icc:

drivers/compress/qat/qat_comp.c(110): error #191: type qualifier is 
meaningless on cast type

(const uint16_t)cookie->src_nb_elems);

^

drivers/compress/qat/qat_comp.c(145): error #191: type qualifier is 
meaningless on cast type

(const uint16_t)cookie->dst_nb_elems);

^


These were resolved while applying on the tree. I removed the type cast.
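
For reference, a minimal standalone reproduction of that diagnostic,
independent of the driver sources (the helper below is made up purely to
show the cast; in the driver the value is passed to the SGL fill routine):

#include <stdint.h>

static void take_u16(uint16_t v)
{
	(void)v;
}

int main(void)
{
	uint16_t n = 16;

	/* icc diagnostic #191: a const qualifier on a cast applied to a
	 * by-value argument has no effect, so this line is flagged. */
	take_u16((const uint16_t)n);

	/* The change applied on the tree simply drops the cast. */
	take_u16(n);

	return 0;
}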

-Akhil




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 0/1] add max mbuf size test case
  2019-04-02 12:16   ` [PATCH v2 0/1] " Tomasz Cel
  2019-04-02 12:16     ` [PATCH v2 1/1] test/compress: " Tomasz Cel
@ 2019-04-16 14:53     ` Akhil Goyal
  1 sibling, 0 replies; 32+ messages in thread
From: Akhil Goyal @ 2019-04-16 14:53 UTC (permalink / raw)
  To: Tomasz Cel, dev, fiona.trahe, tomaszx.jozwiak



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Tomasz Cel
> Sent: Tuesday, April 2, 2019 5:47 PM
> To: dev@dpdk.org; fiona.trahe@intel.com; tomaszx.jozwiak@intel.com;
> tomaszx.cel@intel.com
> Subject: [dpdk-dev] [PATCH v2 0/1] add max mbuf size test case
> 
> From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> 
> This patch adds a new test case in which the maximum size of
> chained mbufs is used to compress random data dynamically.
> 
> V2 changes:
> 
>   Adjusted the changes to the new test_compressdev.c file location
>   in the app/test/ folder.
> 
> Tomasz Jozwiak (1):
>   test/compress: add max mbuf size test case
> 
>  app/test/test_compressdev.c | 158
> ++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 136 insertions(+), 22 deletions(-)
Applied to dpdk-next-crypto

Thanks.

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-02 12:22       ` Cel, TomaszX
@ 2019-04-18 22:42         ` Yongseok Koh
  2019-04-19  9:07           ` Thomas Monjalon
  2019-04-19  9:58           ` Jozwiak, TomaszX
  0 siblings, 2 replies; 32+ messages in thread
From: Yongseok Koh @ 2019-04-18 22:42 UTC (permalink / raw)
  To: Cel, TomaszX, Thomas Monjalon, Jozwiak, TomaszX; +Cc: dev, Trahe, Fiona

Hi,

I'm seeing compile error.
Isn't it due to this patch?

$ gcc --version
gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)


[1484/1523] Compiling C object 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o'.
FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
cc -Iapp/test/app@test@@dpdk-test@exe -Iapp/test -I../app/test -Ilib/librte_acl -I../lib/librte_acl -I. -I../ -Iconfig -I../config -Ilib/librte_eal/common/include -I../lib/librte_eal/common/include -I../lib/librte_eal/linux/eal/include -Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal/common/include/arch/x86 -I../lib/librte_eal/common/include/arch/x86 -Ilib/librte_eal -I../lib/librte_eal -Ilib/librte_kvargs -I../lib/librte_kvargs -Ilib/librte_bitratestats -I../lib/librte_bitratestats -Ilib/librte_ethdev -I../lib/librte_ethdev -Ilib/librte_net -I../lib/librte_net -Ilib/librte_mbuf-I../lib/librte_mbuf -Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_ring -I../lib/librte_ring -Ilib/librte_cmdline -I../lib/librte_cmdline -Ilib/librte_meter -I../lib/librte_meter -Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_bpf -I../lib/librte_bpf -Ilib/librte_cfgfile -I../lib/librte_cfgfile -Ilib/librte_cryptodev -I../lib/librte_cryptodev -Ilib/librte_distributor -I../lib/librte_distributor -Ilib/librte_efd -I../lib/librte_efd -Ilib/librte_hash -I../lib/librte_hash -Ilib/librte_eventdev -I../lib/librte_eventdev -Ilib/librte_timer -I../lib/librte_timer -Ilib/librte_flow_classify -I../lib/librte_flow_classify -Ilib/librte_table -I../lib/librte_table -Ilib/librte_port -I../lib/librte_port -Ilib/librte_sched -I../lib/librte_sched -Ilib/librte_ip_frag -I../lib/librte_ip_frag -Ilib/librte_kni -I../lib/librte_kni -Ilib/librte_pci -I../lib/librte_pci -Ilib/librte_lpm -I../lib/librte_lpm -Ilib/librte_ipsec -I../lib/librte_ipsec -Ilib/librte_security -I../lib/librte_security -Ilib/librte_latencystats -I../lib/librte_latencystats -Ilib/librte_member -I../lib/librte_member -Ilib/librte_pipeline -I../lib/librte_pipeline -Ilib/librte_reorder -I../lib/librte_reorder -Ilib/librte_stack -I../lib/librte_stack -Ilib/librte_pdump -I../lib/librte_pdump -Idrivers/net/i40e -I../drivers/net/i40e -Idrivers/net/i40e/base -I../drivers/net/i40e/base -Idrivers/bus/pci -I../drivers/bus/pci -I../drivers/bus/pci/linux -Idrivers/bus/vdev -I../drivers/bus/vdev -Idrivers/net/ixgbe -I../drivers/net/ixgbe -Idrivers/net/ixgbe/base -I../drivers/net/ixgbe/base-Idrivers/net/bonding -I../drivers/net/bonding -Idrivers/net/ring -I../drivers/net/ring -Ilib/librte_power -I../lib/librte_power -Ilib/librte_compressdev -I../lib/librte_compressdev -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -include rte_config.h -Wsign-compare -Wcast-qual -march=native -D_GNU_SOURCE -DALLOW_EXPERIMENTAL_API  -MD -MQ 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o' -MF 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o.d' -o 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o' -c ../app/test/test_compressdev.c
../app/test/test_compressdev.c: In function ‘test_compressdev_deflate_stateless_dynamic_big’:
../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
  for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
                ^
../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’ was here
  uint16_t i = 0;
           ^
../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial declarations are only allowed in C99 mode
  for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
  ^
../app/test/test_compressdev.c:1992:2: note: use option -std=c99 or -std=gnu99 to compile your code
../app/test/test_compressdev.c:1996:19: warning: assignment from incompatible pointer type [enabled by default]
  int_data.buf_idx = &i;
                   ^
[1501/1523] Generating igb_uio with a custom command.
make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
  CC [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.o
  Building modules, stage 2.
  MODPOST 1 modules
make[1]: Warning: File `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.c' has modification time 0.0096 s in the future
  CC      /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.o
  LD [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.ko
make[1]: warning:  Clock skew detected.  Your build may be incomplete.
make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'


Thanks,
Yongseok

> On Apr 2, 2019, at 5:22 AM, Cel, TomaszX <tomaszx.cel@intel.com> wrote:
> 
> Hi Tomasz,
> 
>> -----Original Message-----
>> From: Cel, TomaszX
>> Sent: Tuesday, April 2, 2019 1:17 PM
>> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak, TomaszX
>> <tomaszx.jozwiak@intel.com>; Cel, TomaszX <tomaszx.cel@intel.com>
>> Subject: [PATCH v2 1/1] test/compress: add max mbuf size test case
>> 
>> From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
>> 
>> This patch adds a new test case in which the maximum size of chained
>> mbufs is used to compress random data dynamically.
>> 
>> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
>> ---
>> app/test/test_compressdev.c | 158
>> ++++++++++++++++++++++++++++++++++++++------
>> 1 file changed, 136 insertions(+), 22 deletions(-)
>> 
>> diff --git a/app/test/test_compressdev.c b/app/test/test_compressdev.c
>> index 13cf26c..f59b3d2 100644
>> --- a/app/test/test_compressdev.c
>> +++ b/app/test/test_compressdev.c
>> @@ -1,10 +1,10 @@
>> /* SPDX-License-Identifier: BSD-3-Clause
>> - * Copyright(c) 2018 Intel Corporation
>> + * Copyright(c) 2018 - 2019 Intel Corporation
>>  */
>> #include <string.h>
>> #include <zlib.h>
>> #include <math.h>
>> -#include <unistd.h>
>> +#include <stdlib.h>
>> 
>> #include <rte_cycles.h>
>> #include <rte_malloc.h>
>> @@ -45,6 +45,11 @@
>> 
>> #define OUT_OF_SPACE_BUF 1
>> 
>> +#define MAX_MBUF_SEGMENT_SIZE 65535
>> +#define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE -
>> +RTE_PKTMBUF_HEADROOM) #define NUM_BIG_MBUFS 4 #define
>> +BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)
>> +
>> const char *
>> huffman_type_strings[] = {
>> 	[RTE_COMP_HUFFMAN_DEFAULT]	= "PMD default",
>> @@ -73,6 +78,7 @@ struct priv_op_data {
>> struct comp_testsuite_params {
>> 	struct rte_mempool *large_mbuf_pool;
>> 	struct rte_mempool *small_mbuf_pool;
>> +	struct rte_mempool *big_mbuf_pool;
>> 	struct rte_mempool *op_pool;
>> 	struct rte_comp_xform *def_comp_xform;
>> 	struct rte_comp_xform *def_decomp_xform; @@ -92,6 +98,7 @@
>> struct test_data_params {
>> 	enum varied_buff buff_type;
>> 	enum zlib_direction zlib_dir;
>> 	unsigned int out_of_space;
>> +	unsigned int big_data;
>> };
>> 
>> static struct comp_testsuite_params testsuite_params = { 0 }; @@ -105,11
>> +112,14 @@ testsuite_teardown(void)
>> 		RTE_LOG(ERR, USER1, "Large mbuf pool still has unfreed
>> bufs\n");
>> 	if (rte_mempool_in_use_count(ts_params->small_mbuf_pool))
>> 		RTE_LOG(ERR, USER1, "Small mbuf pool still has unfreed
>> bufs\n");
>> +	if (rte_mempool_in_use_count(ts_params->big_mbuf_pool))
>> +		RTE_LOG(ERR, USER1, "Big mbuf pool still has unfreed
>> bufs\n");
>> 	if (rte_mempool_in_use_count(ts_params->op_pool))
>> 		RTE_LOG(ERR, USER1, "op pool still has unfreed ops\n");
>> 
>> 	rte_mempool_free(ts_params->large_mbuf_pool);
>> 	rte_mempool_free(ts_params->small_mbuf_pool);
>> +	rte_mempool_free(ts_params->big_mbuf_pool);
>> 	rte_mempool_free(ts_params->op_pool);
>> 	rte_free(ts_params->def_comp_xform);
>> 	rte_free(ts_params->def_decomp_xform);
>> @@ -162,6 +172,17 @@ testsuite_setup(void)
>> 		goto exit;
>> 	}
>> 
>> +	/* Create mempool with big buffers for SGL testing */
>> +	ts_params->big_mbuf_pool =
>> rte_pktmbuf_pool_create("big_mbuf_pool",
>> +			NUM_BIG_MBUFS + 1,
>> +			CACHE_SIZE, 0,
>> +			MAX_MBUF_SEGMENT_SIZE,
>> +			rte_socket_id());
>> +	if (ts_params->big_mbuf_pool == NULL) {
>> +		RTE_LOG(ERR, USER1, "Big mbuf pool could not be
>> created\n");
>> +		goto exit;
>> +	}
>> +
>> 	ts_params->op_pool = rte_comp_op_pool_create("op_pool",
>> NUM_OPS,
>> 				0, sizeof(struct priv_op_data),
>> 				rte_socket_id());
>> @@ -598,10 +619,11 @@ prepare_sgl_bufs(const char *test_buf, struct
>> rte_mbuf *head_buf,
>> 		uint32_t total_data_size,
>> 		struct rte_mempool *small_mbuf_pool,
>> 		struct rte_mempool *large_mbuf_pool,
>> -		uint8_t limit_segs_in_sgl)
>> +		uint8_t limit_segs_in_sgl,
>> +		uint16_t seg_size)
>> {
>> 	uint32_t remaining_data = total_data_size;
>> -	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
>> SMALL_SEG_SIZE);
>> +	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
>> seg_size);
>> 	struct rte_mempool *pool;
>> 	struct rte_mbuf *next_seg;
>> 	uint32_t data_size;
>> @@ -617,10 +639,10 @@ prepare_sgl_bufs(const char *test_buf, struct
>> rte_mbuf *head_buf,
>> 	 * Allocate data in the first segment (header) and
>> 	 * copy data if test buffer is provided
>> 	 */
>> -	if (remaining_data < SMALL_SEG_SIZE)
>> +	if (remaining_data < seg_size)
>> 		data_size = remaining_data;
>> 	else
>> -		data_size = SMALL_SEG_SIZE;
>> +		data_size = seg_size;
>> 	buf_ptr = rte_pktmbuf_append(head_buf, data_size);
>> 	if (buf_ptr == NULL) {
>> 		RTE_LOG(ERR, USER1,
>> @@ -644,13 +666,13 @@ prepare_sgl_bufs(const char *test_buf, struct
>> rte_mbuf *head_buf,
>> 
>> 		if (i == (num_remaining_segs - 1)) {
>> 			/* last segment */
>> -			if (remaining_data > SMALL_SEG_SIZE)
>> +			if (remaining_data > seg_size)
>> 				pool = large_mbuf_pool;
>> 			else
>> 				pool = small_mbuf_pool;
>> 			data_size = remaining_data;
>> 		} else {
>> -			data_size = SMALL_SEG_SIZE;
>> +			data_size = seg_size;
>> 			pool = small_mbuf_pool;
>> 		}
>> 
>> @@ -704,6 +726,7 @@ test_deflate_comp_decomp(const struct
>> interim_data_params *int_data,
>> 	enum rte_comp_op_type state = test_data->state;
>> 	unsigned int buff_type = test_data->buff_type;
>> 	unsigned int out_of_space = test_data->out_of_space;
>> +	unsigned int big_data = test_data->big_data;
>> 	enum zlib_direction zlib_dir = test_data->zlib_dir;
>> 	int ret_status = -1;
>> 	int ret;
>> @@ -738,7 +761,9 @@ test_deflate_comp_decomp(const struct
>> interim_data_params *int_data,
>> 	memset(ops_processed, 0, sizeof(struct rte_comp_op *) *
>> num_bufs);
>> 	memset(priv_xforms, 0, sizeof(void *) * num_bufs);
>> 
>> -	if (buff_type == SGL_BOTH)
>> +	if (big_data)
>> +		buf_pool = ts_params->big_mbuf_pool;
>> +	else if (buff_type == SGL_BOTH)
>> 		buf_pool = ts_params->small_mbuf_pool;
>> 	else
>> 		buf_pool = ts_params->large_mbuf_pool; @@ -757,10
>> +782,11 @@ test_deflate_comp_decomp(const struct interim_data_params
>> *int_data,
>> 		for (i = 0; i < num_bufs; i++) {
>> 			data_size = strlen(test_bufs[i]) + 1;
>> 			if (prepare_sgl_bufs(test_bufs[i], uncomp_bufs[i],
>> -					data_size,
>> -					ts_params->small_mbuf_pool,
>> -					ts_params->large_mbuf_pool,
>> -					MAX_SEGS) < 0)
>> +			    data_size,
>> +			    big_data ? buf_pool : ts_params-
>>> small_mbuf_pool,
>> +			    big_data ? buf_pool : ts_params-
>>> large_mbuf_pool,
>> +			    big_data ? 0 : MAX_SEGS,
>> +			    big_data ? MAX_DATA_MBUF_SIZE :
>> SMALL_SEG_SIZE) < 0)
>> 				goto exit;
>> 		}
>> 	} else {
>> @@ -789,10 +815,12 @@ test_deflate_comp_decomp(const struct
>> interim_data_params *int_data,
>> 					COMPRESS_BUF_SIZE_RATIO);
>> 
>> 			if (prepare_sgl_bufs(NULL, comp_bufs[i],
>> -					data_size,
>> -					ts_params->small_mbuf_pool,
>> -					ts_params->large_mbuf_pool,
>> -					MAX_SEGS) < 0)
>> +			      data_size,
>> +			      big_data ? buf_pool : ts_params-
>>> small_mbuf_pool,
>> +			      big_data ? buf_pool : ts_params-
>>> large_mbuf_pool,
>> +			      big_data ? 0 : MAX_SEGS,
>> +			      big_data ? MAX_DATA_MBUF_SIZE :
>> SMALL_SEG_SIZE)
>> +					< 0)
>> 				goto exit;
>> 		}
>> 
>> @@ -1017,10 +1045,12 @@ test_deflate_comp_decomp(const struct
>> interim_data_params *int_data,
>> 				strlen(test_bufs[priv_data->orig_idx]) + 1;
>> 
>> 			if (prepare_sgl_bufs(NULL, uncomp_bufs[i],
>> -					data_size,
>> -					ts_params->small_mbuf_pool,
>> -					ts_params->large_mbuf_pool,
>> -					MAX_SEGS) < 0)
>> +			       data_size,
>> +			       big_data ? buf_pool : ts_params-
>>> small_mbuf_pool,
>> +			       big_data ? buf_pool : ts_params-
>>> large_mbuf_pool,
>> +			       big_data ? 0 : MAX_SEGS,
>> +			       big_data ? MAX_DATA_MBUF_SIZE :
>> SMALL_SEG_SIZE)
>> +					< 0)
>> 				goto exit;
>> 		}
>> 
>> @@ -1320,6 +1350,7 @@ test_compressdev_deflate_stateless_fixed(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1390,6 +1421,7 @@
>> test_compressdev_deflate_stateless_dynamic(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1443,6 +1475,7 @@
>> test_compressdev_deflate_stateless_multi_op(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1492,6 +1525,7 @@
>> test_compressdev_deflate_stateless_multi_level(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1581,6 +1615,7 @@
>> test_compressdev_deflate_stateless_multi_xform(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1626,6 +1661,7 @@ test_compressdev_deflate_stateless_sgl(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		SGL_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1733,6 +1769,7 @@
>> test_compressdev_deflate_stateless_checksum(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> +		0,
>> 		0
>> 	};
>> 
>> @@ -1863,7 +1900,8 @@ test_compressdev_out_of_space_buffer(void)
>> 		RTE_COMP_OP_STATELESS,
>> 		LB_BOTH,
>> 		ZLIB_DECOMPRESS,
>> -		1
>> +		1,
>> +		0
>> 	};
>> 	/* Compress with compressdev, decompress with Zlib */
>> 	test_data.zlib_dir = ZLIB_DECOMPRESS;
>> @@ -1904,6 +1942,80 @@ test_compressdev_out_of_space_buffer(void)
>> 	return ret;
>> }
>> 
>> +static int
>> +test_compressdev_deflate_stateless_dynamic_big(void)
>> +{
>> +	struct comp_testsuite_params *ts_params = &testsuite_params;
>> +	uint16_t i = 0;
>> +	int ret = TEST_SUCCESS;
>> +	const struct rte_compressdev_capabilities *capab;
>> +	char *test_buffer = NULL;
>> +
>> +	capab = rte_compressdev_capability_get(0,
>> RTE_COMP_ALGO_DEFLATE);
>> +	TEST_ASSERT(capab != NULL, "Failed to retrieve device capabilities");
>> +
>> +	if ((capab->comp_feature_flags &
>> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
>> +		return -ENOTSUP;
>> +
>> +	if ((capab->comp_feature_flags &
>> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
>> +		return -ENOTSUP;
>> +
>> +	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
>> +	if (test_buffer == NULL) {
>> +		RTE_LOG(ERR, USER1,
>> +			"Can't allocate buffer for big-data\n");
>> +		return TEST_FAILED;
>> +	}
>> +
>> +	struct interim_data_params int_data = {
>> +		(const char * const *)&test_buffer,
>> +		1,
>> +		NULL,
>> +		&ts_params->def_comp_xform,
>> +		&ts_params->def_decomp_xform,
>> +		1
>> +	};
>> +
>> +	struct test_data_params test_data = {
>> +		RTE_COMP_OP_STATELESS,
>> +		SGL_BOTH,
>> +		ZLIB_DECOMPRESS,
>> +		0,
>> +		1
>> +	};
>> +
>> +	ts_params->def_comp_xform->compress.deflate.huffman =
>> +
>> 	RTE_COMP_HUFFMAN_DYNAMIC;
>> +
>> +	/* fill the buffer with data based on rand. data */
>> +	srand(BIG_DATA_TEST_SIZE);
>> +	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
>> +		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
>> +
>> +	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
>> +	int_data.buf_idx = &i;
>> +
>> +	/* Compress with compressdev, decompress with Zlib */
>> +	test_data.zlib_dir = ZLIB_DECOMPRESS;
>> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
>> +		ret = TEST_FAILED;
>> +		goto end;
>> +	}
>> +
>> +	/* Compress with Zlib, decompress with compressdev */
>> +	test_data.zlib_dir = ZLIB_COMPRESS;
>> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
>> +		ret = TEST_FAILED;
>> +		goto end;
>> +	}
>> +
>> +end:
>> +	ts_params->def_comp_xform->compress.deflate.huffman =
>> +
>> 	RTE_COMP_HUFFMAN_DEFAULT;
>> +	rte_free(test_buffer);
>> +	return ret;
>> +}
>> +
>> 
>> static struct unit_test_suite compressdev_testsuite  = {
>> 	.suite_name = "compressdev unit test suite", @@ -1917,6 +2029,8
>> @@ static struct unit_test_suite compressdev_testsuite  = {
>> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>> 			test_compressdev_deflate_stateless_dynamic),
>> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>> +			test_compressdev_deflate_stateless_dynamic_big),
>> +		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>> 			test_compressdev_deflate_stateless_multi_op),
>> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
>> 			test_compressdev_deflate_stateless_multi_level),
>> --
>> 2.7.4
> 
> Acked-by: Tomasz Cel <tomaszx.cel@intel.com>
> 


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-18 22:42         ` [dpdk-dev] " Yongseok Koh
@ 2019-04-19  9:07           ` Thomas Monjalon
  2019-04-19  9:25             ` David Marchand
  2019-04-19  9:58           ` Jozwiak, TomaszX
  1 sibling, 1 reply; 32+ messages in thread
From: Thomas Monjalon @ 2019-04-19  9:07 UTC (permalink / raw)
  To: Cel, TomaszX, Jozwiak, TomaszX
  Cc: Yongseok Koh, dev, Trahe, Fiona, ferruh.yigit

There are some variables declared in the middle of the function,
and one (i) is declared twice with two different types.

Tomasz and Tomasz, are you working on it urgently please?
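
One possible shape of such a fix, as a sketch only (the applied patch may
differ): give the fill loop its own index declared with the other locals,
and type i to match buf_idx, assuming buf_idx is an unsigned int pointer as
the incompatible-pointer warning suggests, e.g.

	unsigned int i = 0;	/* passed via int_data.buf_idx */
	uint32_t j;		/* separate index for the buffer-fill loop */
	...
	/* fill the buffer with data based on rand. data */
	srand(BIG_DATA_TEST_SIZE);
	for (j = 0; j < BIG_DATA_TEST_SIZE - 1; ++j)
		test_buffer[j] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;

	test_buffer[BIG_DATA_TEST_SIZE - 1] = 0;
	int_data.buf_idx = &i;

which also avoids the C99 loop-scope declaration that the older gcc rejects.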


19/04/2019 00:42, Yongseok Koh:
> Hi,
> 
> I'm seeing compile error.
> Isn't it due to this patch?
> 
> $ gcc --version
> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
> 
> 
> FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
[...]
> ../app/test/test_compressdev.c: In function ‘test_compressdev_deflate_stateless_dynamic_big’:
> ../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
>   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
>                 ^
> ../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’ was here
>   uint16_t i = 0;
>            ^
> ../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial declarations are only allowed in C99 mode
>   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
>   ^
> ../app/test/test_compressdev.c:1992:2: note: use option -std=c99 or -std=gnu99 to compile your code
> ../app/test/test_compressdev.c:1996:19: warning: assignment from incompatible pointer type [enabled by default]
>   int_data.buf_idx = &i;
>                    ^
> [1501/1523] Generating igb_uio with a custom command.
> make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
>   CC [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.o
>   Building modules, stage 2.
>   MODPOST 1 modules
> make[1]: Warning: File `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.c' has modification time 0.0096 s in the future
>   CC      /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.o
>   LD [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.ko
> make[1]: warning:  Clock skew detected.  Your build may be incomplete.
> make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> 
> 
> Thanks,
> Yongseok

[...]
> >> +static int
> >> +test_compressdev_deflate_stateless_dynamic_big(void)
> >> +{
> >> +	struct comp_testsuite_params *ts_params = &testsuite_params;
> >> +	uint16_t i = 0;
> >> +	int ret = TEST_SUCCESS;
> >> +	const struct rte_compressdev_capabilities *capab;
> >> +	char *test_buffer = NULL;
> >> +
> >> +	capab = rte_compressdev_capability_get(0,
> >> RTE_COMP_ALGO_DEFLATE);
> >> +	TEST_ASSERT(capab != NULL, "Failed to retrieve device capabilities");
> >> +
> >> +	if ((capab->comp_feature_flags &
> >> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> >> +		return -ENOTSUP;
> >> +
> >> +	if ((capab->comp_feature_flags &
> >> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> >> +		return -ENOTSUP;
> >> +
> >> +	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
> >> +	if (test_buffer == NULL) {
> >> +		RTE_LOG(ERR, USER1,
> >> +			"Can't allocate buffer for big-data\n");
> >> +		return TEST_FAILED;
> >> +	}
> >> +
> >> +	struct interim_data_params int_data = {
> >> +		(const char * const *)&test_buffer,
> >> +		1,
> >> +		NULL,
> >> +		&ts_params->def_comp_xform,
> >> +		&ts_params->def_decomp_xform,
> >> +		1
> >> +	};
> >> +
> >> +	struct test_data_params test_data = {
> >> +		RTE_COMP_OP_STATELESS,
> >> +		SGL_BOTH,
> >> +		ZLIB_DECOMPRESS,
> >> +		0,
> >> +		1
> >> +	};
> >> +
> >> +	ts_params->def_comp_xform->compress.deflate.huffman =
> >> +
> >> 	RTE_COMP_HUFFMAN_DYNAMIC;
> >> +
> >> +	/* fill the buffer with data based on rand. data */
> >> +	srand(BIG_DATA_TEST_SIZE);
> >> +	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> >> +		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
> >> +
> >> +	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
> >> +	int_data.buf_idx = &i;




^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-19  9:07           ` Thomas Monjalon
@ 2019-04-19  9:25             ` David Marchand
  2019-04-19  9:30               ` Thomas Monjalon
  0 siblings, 1 reply; 32+ messages in thread
From: David Marchand @ 2019-04-19  9:25 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Cel, TomaszX, Jozwiak, TomaszX, Yongseok Koh, dev, Trahe, Fiona,
	Yigit, Ferruh

On Fri, Apr 19, 2019 at 11:08 AM Thomas Monjalon <thomas@monjalon.net>
wrote:

> There are some variables declared in the middle of the function,
> and one (i) is declared twice with two different types.
>
> Tomasz and Tomasz, are you working on it urgently please?
>

Found out that this test is not built by default.
I have a trivial fix but I wonder how this has been tested seeing how it
won't compile.


-- 
David Marchand


>
> 19/04/2019 00:42, Yongseok Koh:
> > Hi,
> >
> > I'm seeing compile error.
> > Isn't it due to this patch?
> >
> > $ gcc --version
> > gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
> >
> >
> > FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
> [...]
> > ../app/test/test_compressdev.c: In function
> ‘test_compressdev_deflate_stateless_dynamic_big’:
> > ../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
> >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> >                 ^
> > ../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’
> was here
> >   uint16_t i = 0;
> >            ^
> > ../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial
> declarations are only allowed in C99 mode
> >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> >   ^
> > ../app/test/test_compressdev.c:1992:2: note: use option -std=c99 or
> -std=gnu99 to compile your code
> > ../app/test/test_compressdev.c:1996:19: warning: assignment from
> incompatible pointer type [enabled by default]
> >   int_data.buf_idx = &i;
> >                    ^
> > [1501/1523] Generating igb_uio with a custom command.
> > make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> >   CC [M]
> /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.o
> >   Building modules, stage 2.
> >   MODPOST 1 modules
> > make[1]: Warning: File
> `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.c'
> has modification time 0.0096 s in the future
> >   CC
> /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.o
> >   LD [M]
> /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.ko
> > make[1]: warning:  Clock skew detected.  Your build may be incomplete.
> > make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> >
> >
> > Thanks,
> > Yongseok
>
> [...]
> > >> +static int
> > >> +test_compressdev_deflate_stateless_dynamic_big(void)
> > >> +{
> > >> +  struct comp_testsuite_params *ts_params = &testsuite_params;
> > >> +  uint16_t i = 0;
> > >> +  int ret = TEST_SUCCESS;
> > >> +  const struct rte_compressdev_capabilities *capab;
> > >> +  char *test_buffer = NULL;
> > >> +
> > >> +  capab = rte_compressdev_capability_get(0,
> > >> RTE_COMP_ALGO_DEFLATE);
> > >> +  TEST_ASSERT(capab != NULL, "Failed to retrieve device
> capabilities");
> > >> +
> > >> +  if ((capab->comp_feature_flags &
> > >> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> > >> +          return -ENOTSUP;
> > >> +
> > >> +  if ((capab->comp_feature_flags &
> > >> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> > >> +          return -ENOTSUP;
> > >> +
> > >> +  test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
> > >> +  if (test_buffer == NULL) {
> > >> +          RTE_LOG(ERR, USER1,
> > >> +                  "Can't allocate buffer for big-data\n");
> > >> +          return TEST_FAILED;
> > >> +  }
> > >> +
> > >> +  struct interim_data_params int_data = {
> > >> +          (const char * const *)&test_buffer,
> > >> +          1,
> > >> +          NULL,
> > >> +          &ts_params->def_comp_xform,
> > >> +          &ts_params->def_decomp_xform,
> > >> +          1
> > >> +  };
> > >> +
> > >> +  struct test_data_params test_data = {
> > >> +          RTE_COMP_OP_STATELESS,
> > >> +          SGL_BOTH,
> > >> +          ZLIB_DECOMPRESS,
> > >> +          0,
> > >> +          1
> > >> +  };
> > >> +
> > >> +  ts_params->def_comp_xform->compress.deflate.huffman =
> > >> +
> > >>    RTE_COMP_HUFFMAN_DYNAMIC;
> > >> +
> > >> +  /* fill the buffer with data based on rand. data */
> > >> +  srand(BIG_DATA_TEST_SIZE);
> > >> +  for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > >> +          test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
> > >> +
> > >> +  test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
> > >> +  int_data.buf_idx = &i;
>
>
>
>

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-19  9:25             ` David Marchand
@ 2019-04-19  9:30               ` Thomas Monjalon
  2019-04-19  9:32                 ` Jozwiak, TomaszX
  0 siblings, 1 reply; 32+ messages in thread
From: Thomas Monjalon @ 2019-04-19  9:30 UTC (permalink / raw)
  To: David Marchand
  Cc: Cel, TomaszX, Jozwiak, TomaszX, Yongseok Koh, dev, Trahe, Fiona,
	Yigit, Ferruh

19/04/2019 11:25, David Marchand:
> On Fri, Apr 19, 2019 at 11:08 AM Thomas Monjalon <thomas@monjalon.net>
> wrote:
> 
> > There are some variables declared in the middle of the function,
> > and one (i) is declared twice with two different types.
> >
> > Tomasz and Tomasz, are you working on it urgently please?
> >
> 
> Found out that this test is not built by default.
> I have a trivial fix but I wonder how this has been tested seeing how it
> won't compile.

It is enabled with devtools/test-build.sh
and it is compiling fine with recent compilers I think.

Please send your fix.


> > 19/04/2019 00:42, Yongseok Koh:
> > > Hi,
> > >
> > > I'm seeing compile error.
> > > Isn't it due to this patch?
> > >
> > > $ gcc --version
> > > gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
> > >
> > >
> > > FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
> > [...]
> > > ../app/test/test_compressdev.c: In function
> > ‘test_compressdev_deflate_stateless_dynamic_big’:
> > > ../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
> > >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > >                 ^
> > > ../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’
> > was here
> > >   uint16_t i = 0;
> > >            ^
> > > ../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial
> > declarations are only allowed in C99 mode
> > >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > >   ^
> > > ../app/test/test_compressdev.c:1992:2: note: use option -std=c99 or
> > -std=gnu99 to compile your code
> > > ../app/test/test_compressdev.c:1996:19: warning: assignment from
> > incompatible pointer type [enabled by default]
> > >   int_data.buf_idx = &i;
> > >                    ^
> > > [1501/1523] Generating igb_uio with a custom command.
> > > make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> > >   CC [M]
> > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.o
> > >   Building modules, stage 2.
> > >   MODPOST 1 modules
> > > make[1]: Warning: File
> > `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.c'
> > has modification time 0.0096 s in the future
> > >   CC
> > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.mod.o
> > >   LD [M]
> > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/igb_uio/igb_uio.ko
> > > make[1]: warning:  Clock skew detected.  Your build may be incomplete.
> > > make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> > >
> > >
> > > Thanks,
> > > Yongseok
> >
> > [...]
> > > >> +static int
> > > >> +test_compressdev_deflate_stateless_dynamic_big(void)
> > > >> +{
> > > >> +  struct comp_testsuite_params *ts_params = &testsuite_params;
> > > >> +  uint16_t i = 0;
> > > >> +  int ret = TEST_SUCCESS;
> > > >> +  const struct rte_compressdev_capabilities *capab;
> > > >> +  char *test_buffer = NULL;
> > > >> +
> > > >> +  capab = rte_compressdev_capability_get(0,
> > > >> RTE_COMP_ALGO_DEFLATE);
> > > >> +  TEST_ASSERT(capab != NULL, "Failed to retrieve device
> > capabilities");
> > > >> +
> > > >> +  if ((capab->comp_feature_flags &
> > > >> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> > > >> +          return -ENOTSUP;
> > > >> +
> > > >> +  if ((capab->comp_feature_flags &
> > > >> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> > > >> +          return -ENOTSUP;
> > > >> +
> > > >> +  test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
> > > >> +  if (test_buffer == NULL) {
> > > >> +          RTE_LOG(ERR, USER1,
> > > >> +                  "Can't allocate buffer for big-data\n");
> > > >> +          return TEST_FAILED;
> > > >> +  }
> > > >> +
> > > >> +  struct interim_data_params int_data = {
> > > >> +          (const char * const *)&test_buffer,
> > > >> +          1,
> > > >> +          NULL,
> > > >> +          &ts_params->def_comp_xform,
> > > >> +          &ts_params->def_decomp_xform,
> > > >> +          1
> > > >> +  };
> > > >> +
> > > >> +  struct test_data_params test_data = {
> > > >> +          RTE_COMP_OP_STATELESS,
> > > >> +          SGL_BOTH,
> > > >> +          ZLIB_DECOMPRESS,
> > > >> +          0,
> > > >> +          1
> > > >> +  };
> > > >> +
> > > >> +  ts_params->def_comp_xform->compress.deflate.huffman =
> > > >> +
> > > >>    RTE_COMP_HUFFMAN_DYNAMIC;
> > > >> +
> > > >> +  /* fill the buffer with data based on rand. data */
> > > >> +  srand(BIG_DATA_TEST_SIZE);
> > > >> +  for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > > >> +          test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
> > > >> +
> > > >> +  test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
> > > >> +  int_data.buf_idx = &i;

^ permalink raw reply	[flat|nested] 32+ messages in thread
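
The three gcc 4.8.5 diagnostics quoted above come from building in the compiler's default pre-C99 mode: the counter 'i' is declared a second time inside the for statement with a different type, C90 does not allow a declaration in a for initializer at all, and '&i' then no longer matches the pointer type that buf_idx expects. A minimal, self-contained sketch of that pattern corrected — assuming buf_idx is a uint32_t pointer, as the warning suggests, and using a tiny stand-in size instead of the real BIG_DATA_TEST_SIZE — is shown below for illustration only; it is not the fix that was actually sent to the list.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define BIG_DATA_TEST_SIZE 16	/* tiny stand-in for the real macro */

int main(void)
{
	/* Declare the counter once, at block scope, with the wider type,
	 * so the fill loop and the later pointer assignment agree. */
	uint32_t i;
	uint32_t *buf_idx;
	uint8_t test_buffer[BIG_DATA_TEST_SIZE];

	srand(BIG_DATA_TEST_SIZE);
	/* No declaration inside the for statement, so pre-C99 modes accept it. */
	for (i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
	test_buffer[BIG_DATA_TEST_SIZE - 1] = 0;

	buf_idx = &i;	/* &i is now a uint32_t *, matching buf_idx */
	printf("last index %u holds %u\n",
	       (unsigned)*buf_idx, (unsigned)test_buffer[*buf_idx]);
	return 0;
}

Built with "gcc -std=gnu90 -Wall", this compiles cleanly, which is what the gcc 4.8.5 build above was missing.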

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-19  9:30               ` Thomas Monjalon
@ 2019-04-19  9:32                 ` Jozwiak, TomaszX
  2019-04-19  9:39                   ` David Marchand
  0 siblings, 1 reply; 32+ messages in thread
From: Jozwiak, TomaszX @ 2019-04-19  9:32 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand
  Cc: Cel, TomaszX, Yongseok Koh, dev, Trahe, Fiona, Yigit, Ferruh

Hi all,

I'm preparing the patch and will send it in 10 minutes.

Thx, Tomek

> -----Original Message-----
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Friday, April 19, 2019 11:30 AM
> To: David Marchand <david.marchand@redhat.com>
> Cc: Cel, TomaszX <tomaszx.cel@intel.com>; Jozwiak, TomaszX
> <tomaszx.jozwiak@intel.com>; Yongseok Koh <yskoh@mellanox.com>;
> dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size
> test case
> 
> 19/04/2019 11:25, David Marchand:
> > On Fri, Apr 19, 2019 at 11:08 AM Thomas Monjalon
> <thomas@monjalon.net>
> > wrote:
> >
> > > There are some variables declared in the middle of the function, and
> > > one (i) is declared twice with two different types.
> > >
> > > Tomasz and Tomasz, are you working on it urgently please?
> > >
> >
> > Found out that this test is not built by default.
> > I have a trivial fix but I wonder how this has been tested seeing how
> > it won't compile.
> 
> It is enabled with devtools/test-build.sh and it is compiling fine with recent
> compilers I think.
> 
> Please send your fix.
> 
> 
> > > 19/04/2019 00:42, Yongseok Koh:
> > > > Hi,
> > > >
> > > > I'm seeing compile error.
> > > > Isn't it due to this patch?
> > > >
> > > > $ gcc --version
> > > > gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
> > > >
> > > >
> > > > FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
> > > [...]
> > > > ../app/test/test_compressdev.c: In function
> > > ‘test_compressdev_deflate_stateless_dynamic_big’:
> > > > ../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
> > > >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > > >                 ^
> > > > ../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’
> > > was here
> > > >   uint16_t i = 0;
> > > >            ^
> > > > ../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial
> > > declarations are only allowed in C99 mode
> > > >   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> > > >   ^
> > > > ../app/test/test_compressdev.c:1992:2: note: use option -std=c99
> > > > or
> > > -std=gnu99 to compile your code
> > > > ../app/test/test_compressdev.c:1996:19: warning: assignment from
> > > incompatible pointer type [enabled by default]
> > > >   int_data.buf_idx = &i;
> > > >                    ^
> > > > [1501/1523] Generating igb_uio with a custom command.
> > > > make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> > > >   CC [M]
> > > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/ig
> > > b_uio/igb_uio.o
> > > >   Building modules, stage 2.
> > > >   MODPOST 1 modules
> > > > make[1]: Warning: File
> > > `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-
> 2/build/kernel/linux/igb_uio/igb_uio.mod.c'
> > > has modification time 0.0096 s in the future
> > > >   CC
> > > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/ig
> > > b_uio/igb_uio.mod.o
> > > >   LD [M]
> > > /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-2/build/kernel/linux/ig
> > > b_uio/igb_uio.ko
> > > > make[1]: warning:  Clock skew detected.  Your build may be
> incomplete.
> > > > make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> > > >
> > > >
> > > > Thanks,
> > > > Yongseok
> > >
> > > [...]
> > > > >> +static int
> > > > >> +test_compressdev_deflate_stateless_dynamic_big(void)
> > > > >> +{
> > > > >> +  struct comp_testsuite_params *ts_params = &testsuite_params;
> > > > >> +  uint16_t i = 0;
> > > > >> +  int ret = TEST_SUCCESS;
> > > > >> +  const struct rte_compressdev_capabilities *capab;
> > > > >> +  char *test_buffer = NULL;
> > > > >> +
> > > > >> +  capab = rte_compressdev_capability_get(0,
> > > > >> RTE_COMP_ALGO_DEFLATE);
> > > > >> +  TEST_ASSERT(capab != NULL, "Failed to retrieve device
> > > capabilities");
> > > > >> +
> > > > >> +  if ((capab->comp_feature_flags &
> > > > >> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> > > > >> +          return -ENOTSUP;
> > > > >> +
> > > > >> +  if ((capab->comp_feature_flags &
> > > > >> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> > > > >> +          return -ENOTSUP;
> > > > >> +
> > > > >> +  test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);  if
> > > > >> + (test_buffer == NULL) {
> > > > >> +          RTE_LOG(ERR, USER1,
> > > > >> +                  "Can't allocate buffer for big-data\n");
> > > > >> +          return TEST_FAILED;
> > > > >> +  }
> > > > >> +
> > > > >> +  struct interim_data_params int_data = {
> > > > >> +          (const char * const *)&test_buffer,
> > > > >> +          1,
> > > > >> +          NULL,
> > > > >> +          &ts_params->def_comp_xform,
> > > > >> +          &ts_params->def_decomp_xform,
> > > > >> +          1
> > > > >> +  };
> > > > >> +
> > > > >> +  struct test_data_params test_data = {
> > > > >> +          RTE_COMP_OP_STATELESS,
> > > > >> +          SGL_BOTH,
> > > > >> +          ZLIB_DECOMPRESS,
> > > > >> +          0,
> > > > >> +          1
> > > > >> +  };
> > > > >> +
> > > > >> +  ts_params->def_comp_xform->compress.deflate.huffman =
> > > > >> +
> > > > >>    RTE_COMP_HUFFMAN_DYNAMIC;
> > > > >> +
> > > > >> +  /* fill the buffer with data based on rand. data */
> > > > >> + srand(BIG_DATA_TEST_SIZE);  for (uint32_t i = 0; i <
> > > > >> + BIG_DATA_TEST_SIZE - 1; ++i)
> > > > >> +          test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) |
> > > > >> + 1;
> > > > >> +
> > > > >> +  test_buffer[BIG_DATA_TEST_SIZE-1] = 0;  int_data.buf_idx =
> > > > >> + &i;

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-19  9:32                 ` Jozwiak, TomaszX
@ 2019-04-19  9:39                   ` David Marchand
  0 siblings, 0 replies; 32+ messages in thread
From: David Marchand @ 2019-04-19  9:39 UTC (permalink / raw)
  To: Jozwiak, TomaszX
  Cc: Thomas Monjalon, Cel, TomaszX, Yongseok Koh, dev, Trahe, Fiona,
	Yigit, Ferruh

On Fri, Apr 19, 2019 at 11:32 AM Jozwiak, TomaszX <tomaszx.jozwiak@intel.com>
wrote:

> Hi all,
>
> I'm preparing the patch and will send it in 10 minutes.
>

Ok, dropping mine.


-- 
David Marchand

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size test case
  2019-04-18 22:42         ` [dpdk-dev] " Yongseok Koh
  2019-04-19  9:07           ` Thomas Monjalon
@ 2019-04-19  9:58           ` Jozwiak, TomaszX
  1 sibling, 0 replies; 32+ messages in thread
From: Jozwiak, TomaszX @ 2019-04-19  9:58 UTC (permalink / raw)
  To: Yongseok Koh, Cel, TomaszX, Thomas Monjalon; +Cc: dev, Trahe, Fiona

Hi Yongseok,

Could you check whether my fix is OK now on gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)?

Thx, Tomek

> -----Original Message-----
> From: Yongseok Koh [mailto:yskoh@mellanox.com]
> Sent: Friday, April 19, 2019 12:43 AM
> To: Cel, TomaszX <tomaszx.cel@intel.com>; Thomas Monjalon
> <thomas@monjalon.net>; Jozwiak, TomaszX <tomaszx.jozwiak@intel.com>
> Cc: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2 1/1] test/compress: add max mbuf size
> test case
> 
> Hi,
> 
> I'm seeing compile error.
> Isn't it due to this patch?
> 
> $ gcc --version
> gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28)
> 
> 
> [1484/1523] Compiling C object 'app/test/app@test@@dpdk-
> test@exe/test_compressdev.c.o'.
> FAILED: app/test/app@test@@dpdk-test@exe/test_compressdev.c.o
> cc -Iapp/test/app@test@@dpdk-test@exe -Iapp/test -I../app/test -
> Ilib/librte_acl -I../lib/librte_acl -I. -I../ -Iconfig -I../config -
> Ilib/librte_eal/common/include -I../lib/librte_eal/common/include -
> I../lib/librte_eal/linux/eal/include -Ilib/librte_eal/common -
> I../lib/librte_eal/common -Ilib/librte_eal/common/include/arch/x86 -
> I../lib/librte_eal/common/include/arch/x86 -Ilib/librte_eal -I../lib/librte_eal -
> Ilib/librte_kvargs -I../lib/librte_kvargs -Ilib/librte_bitratestats -
> I../lib/librte_bitratestats -Ilib/librte_ethdev -I../lib/librte_ethdev -
> Ilib/librte_net -I../lib/librte_net -Ilib/librte_mbuf-I../lib/librte_mbuf -
> Ilib/librte_mempool -I../lib/librte_mempool -Ilib/librte_ring -
> I../lib/librte_ring -Ilib/librte_cmdline -I../lib/librte_cmdline -Ilib/librte_meter
> -I../lib/librte_meter -Ilib/librte_metrics -I../lib/librte_metrics -Ilib/librte_bpf -
> I../lib/librte_bpf -Ilib/librte_cfgfile -I../lib/librte_cfgfile -Ilib/librte_cryptodev
> -I../lib/librte_cryptodev -Ilib/librte_distributor -I../lib/librte_distributor -
> Ilib/librte_efd -I../lib/librte_efd -Ilib/librte_hash -I../lib/librte_hash -
> Ilib/librte_eventdev -I../lib/librte_eventdev -Ilib/librte_timer -
> I../lib/librte_timer -Ilib/librte_flow_classify -I../lib/librte_flow_classify -
> Ilib/librte_table -I../lib/librte_table -Ilib/librte_port -I../lib/librte_port -
> Ilib/librte_sched -I../lib/librte_sched -Ilib/librte_ip_frag -I../lib/librte_ip_frag
> -Ilib/librte_kni -I../lib/librte_kni -Ilib/librte_pci -I../lib/librte_pci -
> Ilib/librte_lpm -I../lib/librte_lpm -Ilib/librte_ipsec -I../lib/librte_ipsec -
> Ilib/librte_security -I../lib/librte_security -Ilib/librte_latencystats -
> I../lib/librte_latencystats -Ilib/librte_member -I../lib/librte_member -
> Ilib/librte_pipeline -I../lib/librte_pipeline -Ilib/librte_reorder -
> I../lib/librte_reorder -Ilib/librte_stack -I../lib/librte_stack -Ilib/librte_pdump -
> I../lib/librte_pdump -Idrivers/net/i40e -I../drivers/net/i40e -
> Idrivers/net/i40e/base -I../drivers/net/i40e/base -Idrivers/bus/pci -
> I../drivers/bus/pci -I../drivers/bus/pci/linux -Idrivers/bus/vdev -
> I../drivers/bus/vdev -Idrivers/net/ixgbe -I../drivers/net/ixgbe -
> Idrivers/net/ixgbe/base -I../drivers/net/ixgbe/base-Idrivers/net/bonding -
> I../drivers/net/bonding -Idrivers/net/ring -I../drivers/net/ring -
> Ilib/librte_power -I../lib/librte_power -Ilib/librte_compressdev -
> I../lib/librte_compressdev -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-
> pch -O3 -include rte_config.h -Wsign-compare -Wcast-qual -march=native -
> D_GNU_SOURCE -DALLOW_EXPERIMENTAL_API  -MD -MQ
> 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o' -MF
> 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o.d' -o
> 'app/test/app@test@@dpdk-test@exe/test_compressdev.c.o' -c
> ../app/test/test_compressdev.c
> ../app/test/test_compressdev.c: In function
> ‘test_compressdev_deflate_stateless_dynamic_big’:
> ../app/test/test_compressdev.c:1992:16: error: conflicting types for ‘i’
>   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
>                 ^
> ../app/test/test_compressdev.c:1949:11: note: previous definition of ‘i’ was
> here
>   uint16_t i = 0;
>            ^
> ../app/test/test_compressdev.c:1992:2: error: ‘for’ loop initial declarations
> are only allowed in C99 mode
>   for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
>   ^
> ../app/test/test_compressdev.c:1992:2: note: use option -std=c99 or -
> std=gnu99 to compile your code
> ../app/test/test_compressdev.c:1996:19: warning: assignment from
> incompatible pointer type [enabled by default]
>   int_data.buf_idx = &i;
>                    ^
> [1501/1523] Generating igb_uio with a custom command.
> make: Entering directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
>   CC [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-
> 2/build/kernel/linux/igb_uio/igb_uio.o
>   Building modules, stage 2.
>   MODPOST 1 modules
> make[1]: Warning: File `/auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-
> 2/build/kernel/linux/igb_uio/igb_uio.mod.c' has modification time 0.0096 s in
> the future
>   CC      /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-
> 2/build/kernel/linux/igb_uio/igb_uio.mod.o
>   LD [M]  /auto/mtiswgwork/yskoh/git/mellanox/dpdk.org-
> 2/build/kernel/linux/igb_uio/igb_uio.ko
> make[1]: warning:  Clock skew detected.  Your build may be incomplete.
> make: Leaving directory `/usr/src/kernels/3.10.0-862.el7.x86_64'
> 
> 
> Thanks,
> Yongseok
> 
> > On Apr 2, 2019, at 5:22 AM, Cel, TomaszX <tomaszx.cel@intel.com> wrote:
> >
> > Hi Tomasz,
> >
> >> -----Original Message-----
> >> From: Cel, TomaszX
> >> Sent: Tuesday, April 2, 2019 1:17 PM
> >> To: dev@dpdk.org; Trahe, Fiona <fiona.trahe@intel.com>; Jozwiak,
> >> TomaszX <tomaszx.jozwiak@intel.com>; Cel, TomaszX
> >> <tomaszx.cel@intel.com>
> >> Subject: [PATCH v2 1/1] test/compress: add max mbuf size test case
> >>
> >> From: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> >>
> >> This patch adds new test case in which max. size of chain mbufs has
> >> been used to compress random data dynamically.
> >>
> >> Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com>
> >> ---
> >> app/test/test_compressdev.c | 158
> >> ++++++++++++++++++++++++++++++++++++++------
> >> 1 file changed, 136 insertions(+), 22 deletions(-)
> >>
> >> diff --git a/app/test/test_compressdev.c
> >> b/app/test/test_compressdev.c index 13cf26c..f59b3d2 100644
> >> --- a/app/test/test_compressdev.c
> >> +++ b/app/test/test_compressdev.c
> >> @@ -1,10 +1,10 @@
> >> /* SPDX-License-Identifier: BSD-3-Clause
> >> - * Copyright(c) 2018 Intel Corporation
> >> + * Copyright(c) 2018 - 2019 Intel Corporation
> >>  */
> >> #include <string.h>
> >> #include <zlib.h>
> >> #include <math.h>
> >> -#include <unistd.h>
> >> +#include <stdlib.h>
> >>
> >> #include <rte_cycles.h>
> >> #include <rte_malloc.h>
> >> @@ -45,6 +45,11 @@
> >>
> >> #define OUT_OF_SPACE_BUF 1
> >>
> >> +#define MAX_MBUF_SEGMENT_SIZE 65535
> >> +#define MAX_DATA_MBUF_SIZE (MAX_MBUF_SEGMENT_SIZE -
> >> +RTE_PKTMBUF_HEADROOM) #define NUM_BIG_MBUFS 4 #define
> >> +BIG_DATA_TEST_SIZE (MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS /
> 2)
> >> +
> >> const char *
> >> huffman_type_strings[] = {
> >> 	[RTE_COMP_HUFFMAN_DEFAULT]	= "PMD default",
> >> @@ -73,6 +78,7 @@ struct priv_op_data { struct comp_testsuite_params
> >> {
> >> 	struct rte_mempool *large_mbuf_pool;
> >> 	struct rte_mempool *small_mbuf_pool;
> >> +	struct rte_mempool *big_mbuf_pool;
> >> 	struct rte_mempool *op_pool;
> >> 	struct rte_comp_xform *def_comp_xform;
> >> 	struct rte_comp_xform *def_decomp_xform; @@ -92,6 +98,7 @@
> struct
> >> test_data_params {
> >> 	enum varied_buff buff_type;
> >> 	enum zlib_direction zlib_dir;
> >> 	unsigned int out_of_space;
> >> +	unsigned int big_data;
> >> };
> >>
> >> static struct comp_testsuite_params testsuite_params = { 0 }; @@
> >> -105,11
> >> +112,14 @@ testsuite_teardown(void)
> >> 		RTE_LOG(ERR, USER1, "Large mbuf pool still has unfreed
> bufs\n");
> >> 	if (rte_mempool_in_use_count(ts_params->small_mbuf_pool))
> >> 		RTE_LOG(ERR, USER1, "Small mbuf pool still has unfreed
> bufs\n");
> >> +	if (rte_mempool_in_use_count(ts_params->big_mbuf_pool))
> >> +		RTE_LOG(ERR, USER1, "Big mbuf pool still has unfreed
> >> bufs\n");
> >> 	if (rte_mempool_in_use_count(ts_params->op_pool))
> >> 		RTE_LOG(ERR, USER1, "op pool still has unfreed ops\n");
> >>
> >> 	rte_mempool_free(ts_params->large_mbuf_pool);
> >> 	rte_mempool_free(ts_params->small_mbuf_pool);
> >> +	rte_mempool_free(ts_params->big_mbuf_pool);
> >> 	rte_mempool_free(ts_params->op_pool);
> >> 	rte_free(ts_params->def_comp_xform);
> >> 	rte_free(ts_params->def_decomp_xform);
> >> @@ -162,6 +172,17 @@ testsuite_setup(void)
> >> 		goto exit;
> >> 	}
> >>
> >> +	/* Create mempool with big buffers for SGL testing */
> >> +	ts_params->big_mbuf_pool =
> >> rte_pktmbuf_pool_create("big_mbuf_pool",
> >> +			NUM_BIG_MBUFS + 1,
> >> +			CACHE_SIZE, 0,
> >> +			MAX_MBUF_SEGMENT_SIZE,
> >> +			rte_socket_id());
> >> +	if (ts_params->big_mbuf_pool == NULL) {
> >> +		RTE_LOG(ERR, USER1, "Big mbuf pool could not be
> >> created\n");
> >> +		goto exit;
> >> +	}
> >> +
> >> 	ts_params->op_pool = rte_comp_op_pool_create("op_pool",
> >> NUM_OPS,
> >> 				0, sizeof(struct priv_op_data),
> >> 				rte_socket_id());
> >> @@ -598,10 +619,11 @@ prepare_sgl_bufs(const char *test_buf, struct
> >> rte_mbuf *head_buf,
> >> 		uint32_t total_data_size,
> >> 		struct rte_mempool *small_mbuf_pool,
> >> 		struct rte_mempool *large_mbuf_pool,
> >> -		uint8_t limit_segs_in_sgl)
> >> +		uint8_t limit_segs_in_sgl,
> >> +		uint16_t seg_size)
> >> {
> >> 	uint32_t remaining_data = total_data_size;
> >> -	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
> >> SMALL_SEG_SIZE);
> >> +	uint16_t num_remaining_segs = DIV_CEIL(remaining_data,
> >> seg_size);
> >> 	struct rte_mempool *pool;
> >> 	struct rte_mbuf *next_seg;
> >> 	uint32_t data_size;
> >> @@ -617,10 +639,10 @@ prepare_sgl_bufs(const char *test_buf, struct
> >> rte_mbuf *head_buf,
> >> 	 * Allocate data in the first segment (header) and
> >> 	 * copy data if test buffer is provided
> >> 	 */
> >> -	if (remaining_data < SMALL_SEG_SIZE)
> >> +	if (remaining_data < seg_size)
> >> 		data_size = remaining_data;
> >> 	else
> >> -		data_size = SMALL_SEG_SIZE;
> >> +		data_size = seg_size;
> >> 	buf_ptr = rte_pktmbuf_append(head_buf, data_size);
> >> 	if (buf_ptr == NULL) {
> >> 		RTE_LOG(ERR, USER1,
> >> @@ -644,13 +666,13 @@ prepare_sgl_bufs(const char *test_buf, struct
> >> rte_mbuf *head_buf,
> >>
> >> 		if (i == (num_remaining_segs - 1)) {
> >> 			/* last segment */
> >> -			if (remaining_data > SMALL_SEG_SIZE)
> >> +			if (remaining_data > seg_size)
> >> 				pool = large_mbuf_pool;
> >> 			else
> >> 				pool = small_mbuf_pool;
> >> 			data_size = remaining_data;
> >> 		} else {
> >> -			data_size = SMALL_SEG_SIZE;
> >> +			data_size = seg_size;
> >> 			pool = small_mbuf_pool;
> >> 		}
> >>
> >> @@ -704,6 +726,7 @@ test_deflate_comp_decomp(const struct
> >> interim_data_params *int_data,
> >> 	enum rte_comp_op_type state = test_data->state;
> >> 	unsigned int buff_type = test_data->buff_type;
> >> 	unsigned int out_of_space = test_data->out_of_space;
> >> +	unsigned int big_data = test_data->big_data;
> >> 	enum zlib_direction zlib_dir = test_data->zlib_dir;
> >> 	int ret_status = -1;
> >> 	int ret;
> >> @@ -738,7 +761,9 @@ test_deflate_comp_decomp(const struct
> >> interim_data_params *int_data,
> >> 	memset(ops_processed, 0, sizeof(struct rte_comp_op *) *
> num_bufs);
> >> 	memset(priv_xforms, 0, sizeof(void *) * num_bufs);
> >>
> >> -	if (buff_type == SGL_BOTH)
> >> +	if (big_data)
> >> +		buf_pool = ts_params->big_mbuf_pool;
> >> +	else if (buff_type == SGL_BOTH)
> >> 		buf_pool = ts_params->small_mbuf_pool;
> >> 	else
> >> 		buf_pool = ts_params->large_mbuf_pool; @@ -757,10
> >> +782,11 @@ test_deflate_comp_decomp(const struct
> interim_data_params
> >> *int_data,
> >> 		for (i = 0; i < num_bufs; i++) {
> >> 			data_size = strlen(test_bufs[i]) + 1;
> >> 			if (prepare_sgl_bufs(test_bufs[i], uncomp_bufs[i],
> >> -					data_size,
> >> -					ts_params->small_mbuf_pool,
> >> -					ts_params->large_mbuf_pool,
> >> -					MAX_SEGS) < 0)
> >> +			    data_size,
> >> +			    big_data ? buf_pool : ts_params-
> >>> small_mbuf_pool,
> >> +			    big_data ? buf_pool : ts_params-
> >>> large_mbuf_pool,
> >> +			    big_data ? 0 : MAX_SEGS,
> >> +			    big_data ? MAX_DATA_MBUF_SIZE :
> >> SMALL_SEG_SIZE) < 0)
> >> 				goto exit;
> >> 		}
> >> 	} else {
> >> @@ -789,10 +815,12 @@ test_deflate_comp_decomp(const struct
> >> interim_data_params *int_data,
> >> 					COMPRESS_BUF_SIZE_RATIO);
> >>
> >> 			if (prepare_sgl_bufs(NULL, comp_bufs[i],
> >> -					data_size,
> >> -					ts_params->small_mbuf_pool,
> >> -					ts_params->large_mbuf_pool,
> >> -					MAX_SEGS) < 0)
> >> +			      data_size,
> >> +			      big_data ? buf_pool : ts_params-
> >>> small_mbuf_pool,
> >> +			      big_data ? buf_pool : ts_params-
> >>> large_mbuf_pool,
> >> +			      big_data ? 0 : MAX_SEGS,
> >> +			      big_data ? MAX_DATA_MBUF_SIZE :
> >> SMALL_SEG_SIZE)
> >> +					< 0)
> >> 				goto exit;
> >> 		}
> >>
> >> @@ -1017,10 +1045,12 @@ test_deflate_comp_decomp(const struct
> >> interim_data_params *int_data,
> >> 				strlen(test_bufs[priv_data->orig_idx]) + 1;
> >>
> >> 			if (prepare_sgl_bufs(NULL, uncomp_bufs[i],
> >> -					data_size,
> >> -					ts_params->small_mbuf_pool,
> >> -					ts_params->large_mbuf_pool,
> >> -					MAX_SEGS) < 0)
> >> +			       data_size,
> >> +			       big_data ? buf_pool : ts_params-
> >>> small_mbuf_pool,
> >> +			       big_data ? buf_pool : ts_params-
> >>> large_mbuf_pool,
> >> +			       big_data ? 0 : MAX_SEGS,
> >> +			       big_data ? MAX_DATA_MBUF_SIZE :
> >> SMALL_SEG_SIZE)
> >> +					< 0)
> >> 				goto exit;
> >> 		}
> >>
> >> @@ -1320,6 +1350,7 @@
> test_compressdev_deflate_stateless_fixed(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1390,6 +1421,7 @@
> >> test_compressdev_deflate_stateless_dynamic(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1443,6 +1475,7 @@
> >> test_compressdev_deflate_stateless_multi_op(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1492,6 +1525,7 @@
> >> test_compressdev_deflate_stateless_multi_level(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1581,6 +1615,7 @@
> >> test_compressdev_deflate_stateless_multi_xform(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1626,6 +1661,7 @@ test_compressdev_deflate_stateless_sgl(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		SGL_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1733,6 +1769,7 @@
> >> test_compressdev_deflate_stateless_checksum(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> +		0,
> >> 		0
> >> 	};
> >>
> >> @@ -1863,7 +1900,8 @@ test_compressdev_out_of_space_buffer(void)
> >> 		RTE_COMP_OP_STATELESS,
> >> 		LB_BOTH,
> >> 		ZLIB_DECOMPRESS,
> >> -		1
> >> +		1,
> >> +		0
> >> 	};
> >> 	/* Compress with compressdev, decompress with Zlib */
> >> 	test_data.zlib_dir = ZLIB_DECOMPRESS; @@ -1904,6 +1942,80 @@
> >> test_compressdev_out_of_space_buffer(void)
> >> 	return ret;
> >> }
> >>
> >> +static int
> >> +test_compressdev_deflate_stateless_dynamic_big(void)
> >> +{
> >> +	struct comp_testsuite_params *ts_params = &testsuite_params;
> >> +	uint16_t i = 0;
> >> +	int ret = TEST_SUCCESS;
> >> +	const struct rte_compressdev_capabilities *capab;
> >> +	char *test_buffer = NULL;
> >> +
> >> +	capab = rte_compressdev_capability_get(0,
> >> RTE_COMP_ALGO_DEFLATE);
> >> +	TEST_ASSERT(capab != NULL, "Failed to retrieve device
> >> +capabilities");
> >> +
> >> +	if ((capab->comp_feature_flags &
> >> RTE_COMP_FF_HUFFMAN_DYNAMIC) == 0)
> >> +		return -ENOTSUP;
> >> +
> >> +	if ((capab->comp_feature_flags &
> >> RTE_COMP_FF_OOP_SGL_IN_SGL_OUT) == 0)
> >> +		return -ENOTSUP;
> >> +
> >> +	test_buffer = rte_malloc(NULL, BIG_DATA_TEST_SIZE, 0);
> >> +	if (test_buffer == NULL) {
> >> +		RTE_LOG(ERR, USER1,
> >> +			"Can't allocate buffer for big-data\n");
> >> +		return TEST_FAILED;
> >> +	}
> >> +
> >> +	struct interim_data_params int_data = {
> >> +		(const char * const *)&test_buffer,
> >> +		1,
> >> +		NULL,
> >> +		&ts_params->def_comp_xform,
> >> +		&ts_params->def_decomp_xform,
> >> +		1
> >> +	};
> >> +
> >> +	struct test_data_params test_data = {
> >> +		RTE_COMP_OP_STATELESS,
> >> +		SGL_BOTH,
> >> +		ZLIB_DECOMPRESS,
> >> +		0,
> >> +		1
> >> +	};
> >> +
> >> +	ts_params->def_comp_xform->compress.deflate.huffman =
> >> +
> >> 	RTE_COMP_HUFFMAN_DYNAMIC;
> >> +
> >> +	/* fill the buffer with data based on rand. data */
> >> +	srand(BIG_DATA_TEST_SIZE);
> >> +	for (uint32_t i = 0; i < BIG_DATA_TEST_SIZE - 1; ++i)
> >> +		test_buffer[i] = (uint8_t)(rand() % ((uint8_t)-1)) | 1;
> >> +
> >> +	test_buffer[BIG_DATA_TEST_SIZE-1] = 0;
> >> +	int_data.buf_idx = &i;
> >> +
> >> +	/* Compress with compressdev, decompress with Zlib */
> >> +	test_data.zlib_dir = ZLIB_DECOMPRESS;
> >> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
> >> +		ret = TEST_FAILED;
> >> +		goto end;
> >> +	}
> >> +
> >> +	/* Compress with Zlib, decompress with compressdev */
> >> +	test_data.zlib_dir = ZLIB_COMPRESS;
> >> +	if (test_deflate_comp_decomp(&int_data, &test_data) < 0) {
> >> +		ret = TEST_FAILED;
> >> +		goto end;
> >> +	}
> >> +
> >> +end:
> >> +	ts_params->def_comp_xform->compress.deflate.huffman =
> >> +
> >> 	RTE_COMP_HUFFMAN_DEFAULT;
> >> +	rte_free(test_buffer);
> >> +	return ret;
> >> +}
> >> +
> >>
> >> static struct unit_test_suite compressdev_testsuite  = {
> >> 	.suite_name = "compressdev unit test suite", @@ -1917,6 +2029,8
> @@
> >> static struct unit_test_suite compressdev_testsuite  = {
> >> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
> >> 			test_compressdev_deflate_stateless_dynamic),
> >> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
> >> +			test_compressdev_deflate_stateless_dynamic_big),
> >> +		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
> >> 			test_compressdev_deflate_stateless_multi_op),
> >> 		TEST_CASE_ST(generic_ut_setup, generic_ut_teardown,
> >> 			test_compressdev_deflate_stateless_multi_level),
> >> --
> >> 2.7.4
> >
> > Acked-by: Tomasz Cel <tomaszx.cel@intel.com>
> >


^ permalink raw reply	[flat|nested] 32+ messages in thread
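
A note on the buffer sizing in the quoted patch: taking the usual 128-byte RTE_PKTMBUF_HEADROOM as an assumption (the value is not restated in the patch), each big mbuf carries 65535 - 128 = 65407 bytes of data, so BIG_DATA_TEST_SIZE comes out at two segments' worth, roughly 128 KB, presumably leaving the rest of the (NUM_BIG_MBUFS + 1)-mbuf pool for the compressed copies. A small sketch of the arithmetic, for reference only:

#include <stdio.h>

/* Sizes from the quoted patch; RTE_PKTMBUF_HEADROOM = 128 is the usual
 * DPDK default and is assumed here, it is not part of the patch text. */
#define RTE_PKTMBUF_HEADROOM	128
#define MAX_MBUF_SEGMENT_SIZE	65535
#define MAX_DATA_MBUF_SIZE	(MAX_MBUF_SEGMENT_SIZE - RTE_PKTMBUF_HEADROOM)
#define NUM_BIG_MBUFS		4
#define BIG_DATA_TEST_SIZE	(MAX_DATA_MBUF_SIZE * NUM_BIG_MBUFS / 2)

int main(void)
{
	printf("data per big mbuf:  %d bytes\n", MAX_DATA_MBUF_SIZE);	/* 65407 */
	printf("big-data test size: %d bytes\n", BIG_DATA_TEST_SIZE);	/* 130814 */
	return 0;
}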

end of thread, other threads:[~2019-04-19  9:58 UTC | newest]

Thread overview: 32+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-02-15  9:44 [PATCH] compress/qat: add dynamic sgl allocation Tomasz Jozwiak
2019-02-15  9:44 ` [PATCH] compress/qat: add fallback to fixed compression Tomasz Jozwiak
2019-02-15 17:01   ` Trahe, Fiona
2019-03-19 14:04     ` Akhil Goyal
2019-02-15  9:44 ` [PATCH] test/compress: add max mbuf size test case Tomasz Jozwiak
2019-03-27 14:02   ` Akhil Goyal
2019-04-02 12:16   ` [PATCH v2 0/1] " Tomasz Cel
2019-04-02 12:16     ` [PATCH v2 1/1] test/compress: " Tomasz Cel
2019-04-02 12:22       ` Cel, TomaszX
2019-04-18 22:42         ` [dpdk-dev] " Yongseok Koh
2019-04-19  9:07           ` Thomas Monjalon
2019-04-19  9:25             ` David Marchand
2019-04-19  9:30               ` Thomas Monjalon
2019-04-19  9:32                 ` Jozwiak, TomaszX
2019-04-19  9:39                   ` David Marchand
2019-04-19  9:58           ` Jozwiak, TomaszX
2019-04-16 14:53     ` [dpdk-dev] [PATCH v2 0/1] " Akhil Goyal
2019-03-01 11:00 ` [PATCH v2] add dynamic sgl allocation Tomasz Jozwiak
2019-03-01 11:00   ` [PATCH v2] compress/qat: " Tomasz Jozwiak
2019-03-01 11:17 ` [PATCH v3 0/1] " Tomasz Jozwiak
2019-03-01 11:17   ` [PATCH v3 1/1] compress/qat: " Tomasz Jozwiak
2019-03-07 12:02   ` [PATCH v4 0/1] " Tomasz Jozwiak
2019-03-07 12:02     ` [PATCH v4 1/1] compress/qat: " Tomasz Jozwiak
2019-03-07 18:58       ` Trahe, Fiona
2019-03-17 18:00       ` Akhil Goyal
2019-03-18  8:12         ` Jozwiak, TomaszX
2019-03-18  8:23           ` arpita das
2019-03-26 13:51     ` [PATCH v5 0/1] " Tomasz Jozwiak
2019-03-26 13:51       ` [PATCH v5 1/1] compress/qat: " Tomasz Jozwiak
2019-03-28 14:37         ` Trahe, Fiona
2019-03-29 14:40           ` Akhil Goyal
2019-04-03  8:39             ` Akhil Goyal
